Information Theory, Ignorance and Superstition

Hey all,

I had a discussion last night with a theist regarding yet more “arguments” for the existence of a “god” or “gods”. As is typically the case, the poster was completely off-topic and wandering all over the logical landscape of this discussion. Thus it invariably landed in the completely irrelevant Oz of the Theory of Evolution, something theists seem predisposed to attack with all measure of vigor. But the argument I heard last night was new to me, and as a deconverter I needed to understand this apologetic.

So, I’ve done some research and am sorry to report that this “argument” has all the trappings of Pascal’s Wager: drivel.

The argument is based on what is called Information Theory, as applied to DNA/RNA. The basic argument, I think, is that genetic systems contain information, or “code”, that could not have come from less information; that is, it is irreducibly complex. Furthermore, this code, being an intelligent code that cannot derive from less information, could only have come from a “mind”.

Of course, the immediate problem you can see here is that a “mind” need not be god’s, but we won’t tangent into a discussion of aliens and all the other possibilities because the premise of the statement is invalid anyway.

Information Theory is a mathematical treatment of how information is quantified and how it changes within a system over time.

Specifically, it defines information as a reduction in uncertainty, caused by an event whose outcome can be predicted only in terms of probability. Once the event occurs and a “decision” is made, the uncertainty is reduced, and that reduction in uncertainty constitutes information.

Therefore, in this treatment, information has a very specific mathematical meaning. To illustrate it, consider the exercise of flipping a coin. The act of flipping the coin is an event, call it x, that is binary in the sense that it has two possible outcomes. Therefore, the total number of outcomes available is M=2. So, information can be precisely treated as a function I such that:

I(x) = \log_u(M)    [equation 1.0]

where u is the base of the logarithm, which also determines the units in which I(x) is measured. So the units of measure for information are likewise precisely defined by the base of the logarithm of M. Suppose we choose base 2 so that we can express our result in bits. Then flipping a coin yields exactly 1 bit of information after the event occurs.
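
For concreteness, here is a quick Python sketch (purely an illustration of equation 1.0, with the base left as a parameter) that evaluates the coin flip in base 2 and, for comparison, in base e:

import math

def information(M, base=2):
    """Information of an event with M equally likely outcomes (equation 1.0)."""
    return math.log(M, base)

print(information(2))            # 1.0 bit: a fair coin flip, measured in base 2
print(information(2, math.e))    # ~0.693 nats: the same event, measured in base e

Changing the base changes only the units; the event itself, and the uncertainty it resolves, are the same.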

Contrast this with the definition of “information” in DNA, i.e. in biological systems: there, “information” means base pair sequences corresponding to amino acids. We will return to this shortly.

These are completely different definitions, and Information Theory is not even applicable to the problem of information in biology. But for the sake of demonstration, we will momentarily grant this falsehood. Let me expound:

Let x be an information source (an event). These events are any events in nature that can be characterized using an understanding of math and physics.

As an aside, this analysis also illustrates why you cannot depend on Wikipedia for all of your information, or quote it as an authority, as atheists, in particular, love to do. Here we have yet another example of people posting on Wikipedia who don’t know what they are talking about. The Wikipedia entry states that “entropy” is a concept in the Second Law of Thermodynamics and does not apply to Information Theory. This reflects ignorance of Information Theory, since Information Theorists use the term in a completely different manner; i.e. they have created their own operational definition that has nothing to do with physics. A poster who does not know even this much should not be trusted in any conversation about Information Theory. But I digress.

Entropy, denoted in the literature as a function H(x), is a measure of our average uncertainty about the outcome of an information source. That means it is a measure of how much information we lack (before the event) or how much information we gain (after the event).

H(x) = \sum_{i=1}^{M} p_i \log_2(1/p_i)    [equation 2.0]

where p_i is the probability of symbol i. Much of the terminology in Information Theory is unfortunate. The term “symbol” is an obfuscated way of referring to one possible, measurable outcome of an event, x. M is the total number of “symbols” the source, x, can possibly generate. Therefore, using the simplest example, a binary event with equally probable outcomes will generate exactly 1 bit of information from an antecedent condition consisting of 0 bits of information. Information has increased. As we can see from the equation above, equal probabilities of the symbols of an event x generate the greatest amount of information.
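
Again, a small Python sketch (only an illustration of equation 2.0) shows that equally probable symbols maximize H(x) for a binary source:

import math

def entropy(probs):
    """Average uncertainty H(x) = sum of p_i * log2(1/p_i), in bits (equation 2.0)."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin, the maximum for M = 2 symbols
print(entropy([0.9, 0.1]))    # ~0.47 bits: a biased coin resolves less uncertainty

Pushing the probabilities further apart drives H(x) toward 0, which is the sense in which equal probabilities generate the greatest amount of information.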

Therefore, in a biological system information is generated from a lack of information; exactly the opposite of what the Information Theory proponents argue. This is represented as a change in the genotype due to an event, z, which alters the base pair sequence such that the sequence before event z is not identical to the sequence afterward. Once a selection pressure, for example, forces a change in a genotype, information is increased, not decreased. Thus information can indeed come from no information.
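
To put the same arithmetic in the biological framing, and granting purely for illustration that a single nucleotide position can be modeled as a source with four equally likely symbols A, C, G and T (an assumption made here only for the sake of the example), equation 1.0 gives 2 bits per position once the outcome of event z is known:

import math

# Illustrative assumption only: one nucleotide position treated as a source
# with M = 4 equally likely symbols (A, C, G, T).
M = 4
print(math.log2(M))    # 2.0 bits gained once the outcome of event z is known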

To prove this we can take our previous equation 1.0 and explicitly solve the coin-flipping scenario:

I(x) = \log_u(M)

=> I(x) = \log_2(2) = 1 bit.

But what about the condition prior to the event x? Does it contain more or less information? Since the total number of possible outcomes is M=1, we set that value accordingly and solve:

I(x) = \log_2(1) = 0 bits.

Q.E.D.

Therefore, information is gained from no information. This also denies any possibility that an intelligent “mind” is “required” to create it, since it derives from “disorder”, randomness or a lack of information. As with so many academic arguments so easily dispatched already, there truly is nothing to see here.

- kk

2 comments
  1. Dennis Ochei said:

    My only issue is that information theoretic entropy and thermodynamic entropy are in fact the same. The similarities between the equations for Shannon entropy and Boltzmann’s entropy are not coincidental. One need only model a volume of matter (say, a cubic meter of warm gas) as an information source, the information channel as a macroscopic interaction with those microscopic particles, and a symbol as any precise arrangement of those molecules.

    • Hey Dennis,

      Really? That’s amazing. As far as the equations are concerned, can it be shown that they are equivalent, or just similar? Thanks,

      - kk
