Thursday, November 10, 2011

Do dice play God?
A discussion of Irreligion


A discussion of Irreligion: a mathematician explains why the arguments for God just don't add up (Hill and Wang division of Farrar, Straus and Giroux 2008)
Please contact Conant at krypto...at...gmail...dot....com to
report errors or make comments.
Thank you.
Relevant links found at bottom of page.
Posted Nov 9, 2010. Minor revision posted Sept. 7, 2012.
A previous version of this discussion is found on Angelfire. 
 
By PAUL CONANT
John Allen Paulos has done a service by compiling the various purported proofs of the existence of a (monotheistic) God and then shooting them down in his book Irreligion: a mathematician explains why the arguments for God just don't add up.

Paulos, a Temple University mathematician who writes a column for ABC News, would be the first to admit that he has not disproved the existence of God. But he is quite skeptical of such existence, and I suppose much of the impetus for his book comes from the intelligent design versus accidental evolution controversy.(1)

Really, this review isn't exactly kosher, because I am going to cede most of the ground. My thinking is that if one could use logico-mathematical methods to prove God's existence, this would be tantamount to being able to see God, or to plumb the depths of God. Supposing there is such a God, is he likely to permit his creatures, without special permission, to go so deep?

This review might also be thought rather unfair because Paulos is writing for the general reader and thus walks a fine line on how much mathematics to use. Still, he is expert at describing the general import of certain mathematical ideas, such as Gregory Chaitin's retooling of Kurt Goedel's undecidability theorem and its application to arguments about what a human can grasp about a "higher power."

Many of Paulos' counterarguments essentially arise from a Laplacian philosophy wherein Newtonian mechanics and statistical randomness rule all and are all. The world of phenomena, of appearances, is everything. There is nothing beyond. As long as we agree with those assumptions, we're liable to agree with Paulos. 

Just because...
Yet a caveat: though mathematics is remarkably effective at describing physical relations, mathematical abstractions are not themselves the essence of being (though even on this point there is a Platonic dispute), but are typically devices used for prediction. The deepest essence of being may well be beyond mathematical or scientific description -- perhaps, in fact, beyond human ken (as Paulos implies, albeit mechanistically, when discussing Chaitin and Goedel).(2)

Paulos' response to the First Cause problem is to question whether postulating a highly complex Creator provides a real solution. All we have done is push back the problem, he is saying. But here we must wonder whether phenomenal, Laplacian reality is all there is. Why shouldn't there be something deeper that doesn't conform to the notion of God as gigantic robot?

But of course it is the concept of randomness that is the nub of Paulos' book, and this concept is at root philosophical, and a rather thorny bit of philosophy it is at that. The topic of randomness certainly has some wrinkles that are worth examining with respect to the intelligent design controversy.

One of Paulos' main points is that merely because some postulated event has a terribly small probability doesn't mean that event hasn't or can't happen. There is a terribly small probability that you will be struck by lightning this year. But every year, someone is nevertheless stricken. Why not you?

In fact, zero probability doesn't mean impossible. Many probability distributions closely follow the normal curve, a continuous distribution in which each distinct point value has probability exactly zero -- and yet some value always occurs.

Paulos applies this line of reasoning to the probabilities for the origin of life, which the astrophysicist Fred Hoyle once likened to the chance of a tornado whipping through a junkyard and leaving a fully assembled jumbo jet in its wake. (Nick Lane in Life Ascending: The Ten Great Inventions of Evolution (W.W. Norton 2009) relates some interesting speculations about life self-organizing around undersea hydrothermal vents. So perhaps the probabilities aren't so remote after all, but, really, we don't know.) 

Shake it up, baby
What is the probability of a specific permutation of heads and tails in, say, 20 fair coin tosses? This is usually given as 0.5^20, or about one chance in a million. What is the probability of 18 heads followed by 2 tails? The same, according to one outlook.

Now that probability holds if we take all permutations, shake them up in a hat and then draw one. All permutations in that case are equiprobable.(4)
However, intuitively it is hard to accept that 18 heads followed by 2 tails is just as probable as any other ordering. In fact, there are various statistical methods for challenging that idea.(5)

One, which is quite useful, is the runs test, which determines the probability that a particular sequence falls within the random area of the related normal curve. A runs test of 18H followed by 2T gives a z score of 3.71, which isn't ridiculously high but implies, with a confidence of 0.999, that the ordering did not occur randomly.

Now compare that score with this permutation: HH TTT H TT H TT HH T HH TTT H. The runs test gives a z score of 0.046, which is very near the normal mean.
To recap: the probability of drawing a number with 18 ones (or heads) followed by 2 zeros (or tails) from a hat full of all 20-digit strings is on the order of 10^-6. The probability that that sequence is random is on the order of 10^-4. For comparison, we can be highly confident the second sequence is, absent further information, random. (I actually took it from irrational root digit strings.)
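The two runs test scores above can be checked with a short script. This is a minimal sketch of the Wald-Wolfowitz runs test under the usual normal approximation; the helper name runs_z is mine, not the author's.

```python
import math

def runs_z(seq):
    """Wald-Wolfowitz runs-test z score for a two-valued sequence,
    using the normal approximation for the number of runs."""
    vals = sorted(set(seq))
    n1 = seq.count(vals[0])
    n2 = len(seq) - n1
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mean = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - mean) / math.sqrt(var)

print(runs_z("H" * 18 + "T" * 2))      # |z| is about 3.7
print(runs_z("HHTTTHTTHTTHHTHHTTTH"))  # z is about 0.046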

Again, those permutations with high runs test z scores are considered to be almost certainly non-random.(3)

At the risk of flogging a dead horse, let us review Paulos' example of a very well-shuffled deck of ordinary playing cards. The probability of any particular permutation is about one in 10^68, as he rightly notes. But suppose we mark each card's face with a number, ordering the deck from 1 to 52. When the well-shuffled deck is turned over one card at a time, we find that the cards come out in exact sequential order. Yes, that might be random luck. Yet the runs test z score is a very large 7.563, which implies effectively 0 probability of randomness as compared to a typical sequence. (We would feel certain that the deck had been ordered by intelligent design.) 
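The deck example can be sketched the same way, once the 52 card values are reduced to two symbols. The below/above-median encoding here is my assumption; the exact z score depends on which binarization is chosen (the author reports 7.563 under his), but any reasonable encoding of the perfectly ordered deck lands far out in the tail.

```python
import math

def runs_z(seq):
    """Wald-Wolfowitz runs-test z score for a two-valued sequence."""
    vals = sorted(set(seq))
    n1 = seq.count(vals[0])
    n2 = len(seq) - n1
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mean = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - mean) / math.sqrt(var)

# Binarize the ordered deck 1..52 as below/above the median card value
# (an assumed encoding). The ordered deck yields just 2 runs.
bits = "".join("L" if card <= 26 else "H" for card in range(1, 53))
print(runs_z(bits))  # far out in the tail: |z| is about 7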

Does not compute
The intelligent design proponents, in my view, are trying to get at this particular point. That is, some probabilities fall, even with a lot of time, into the nonrandom area. I can't say whether they are correct about that view when it comes to the origin of life. But I would comment that when probabilities fall far out in a tail, statisticians will say that the probability of non-random influence is significantly high. They will say this if they are seeking either mechanical bias or human influence. But if human influence is out of the question, and we are not talking about mechanical bias, then some scientists dismiss the non-randomness argument simply because they don't like it.

Another issue raised by Paulos is the fact that some of Stephen Wolfram's cellular automata yield "complex" outputs. (I am currently going through Wolfram's A New Kind of Science (Wolfram Media 2002) carefully, and there are many issues worth discussing, which I'll do, hopefully, at a later date.)

Like mathematician Eric Schechter (see link below), Paulos sees cellular automaton complexity as giving plausibility to the notion that life could have resulted when some molecules knocked together in a certain way. Wolfram's Rule 110 is computation universal -- it can emulate a Universal Turing Machine -- which shows that a simple rule can in principle carry out any computer program, Paulos points out.
Paulos might have added that there is a countable infinity of computer programs. Each such program corresponds to an initial condition of the Rule 110 automaton: the length of the starter cell block and the color (black or white) of each cell.

So a relevant issue is: if one feeds a randomly selected initial state into a UTM, what is the probability that it will spit out a highly ordered (or complex, or non-random) string rather than a random one? Runs test scores would show the obvious: so-called complex strings will fall way out under a normal curve tail. 

Grammar tool
I have run across quite a few ways of gauging complexity, but, barring an exact molecular approach, it seems to me the concept of a grammatical string is relevant.

Any cell, including the first, may be described as a machine. It transforms energy and does work (as in W = (1/2)mv^2). Hence it may be described with a series of logic gates. These logic gates can be combined in many ways, but most permutations won't work (the jumbo jet effect).

For example, if we have 8 symbols and a string of length 20, there are 8^20 (on the order of 10^18) possible strings. But how likely is it that a random arrangement will be grammatical?

Let's consider a toy grammar with the symbols a,b,c. Our only grammatical rule is that b may not immediately follow a.

So for the first three symbols, abc and cab are illegal and the other four possibilities are legal. This gives a (1/3) probability of error on the first step.
In this case, the probability of error at every third step is not independent of the previous probability as can be seen by the permutations:
 abc  bca  acb  bac  cba  cab
That is, for example, bca followed by bac gives an illegal ordering. So the probability of error increases with n.

However, suppose we hold the probability of error at (1/3). In that case the probability of a legal string where n = 30 is less than (2/3)^10 = 1.73%. Even if the string can tolerate noise, the error probabilities rise rapidly. Suppose a string of 80 can tolerate 20 percent of its digits wrong. In that case our exponent becomes 21.333, and the probability of success is (2/3)^21.333 = 0.000175.
And this is a toy model. The actual probabilities for long grammatical strings are found far out under a normal curve tail. 
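The toy grammar can be brute-forced. Note an assumption: the script below counts all strings over {a, b, c} rather than the three-symbol permutation blocks used above, so the exact fractions differ, but it illustrates the same point -- the grammatical fraction decays geometrically with length.

```python
from itertools import product

def grammatical_fraction(n):
    """Fraction of length-n strings over {a, b, c} in which
    b never immediately follows a (i.e., no 'ab' substring)."""
    legal = sum(1 for s in product("abc", repeat=n)
                if "ab" not in "".join(s))
    return legal / 3 ** n

for n in (3, 6, 9):
    print(n, grammatical_fraction(n))  # fraction shrinks as n grows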

This is to inform you
A point that arises in such discussions concerns entropy (the tendency toward decrease of order) and the related idea of information, which is sometimes thought of as the surprisal value of a digit string. A pattern such as HHHH... is considered to have low information because we can easily calculate the nth value (assuming we are using some algorithm to obtain the string). The Chaitin-Kolmogorov complexity is low; that is, the information is low. On the other hand, a string that by some measure is effectively random is considered here to be highly informative, because the observer has almost no chance of knowing the string in detail in advance.

However, we can also take the opposite tack. Using runs testing, most digit strings (multi-value strings can often be transformed, for test purposes, to bi-value strings) are found under the bulge in the runs test bell curve and represent probable randomness. So it is unsurprising to encounter such a string. It is far more surprising to come across a string with far "too few" or far "too many" runs. These highly ordered strings would then be considered to have high information value.

This distinction may help address Wolfram's attempt to cope with "highly complex" automata. By these, he means those with irregular, randomlike structures running through periodic "backgrounds." If a sufficiently long runs test were done on such automata, we would obtain, I suggest, z scores in the high but not outlandish range. The z score would give a gauge of complexity.

We might distinguish complicatedness from complexity by saying that a random-like permutation of our grammatical symbols is merely complicated, but a grammatical permutation, possibly adjusted for noise, is complex. (We see, by the way, that grammatical strings require conditional probabilities.) 

A jungle out there
Paulos' defense of the theory of evolution is precise as far as it goes but does not acknowledge the various controversies on speciation among biologists, paleontologists and others.

Let us look at one of his counterarguments:

The creationist argument "goes roughly as follows: A very long sequence of individually improbable mutations must occur in order for a species or a biological process to evolve. If we assume these are independent events, then the probability that all of them will occur in the right order is the product of their respective probabilities" and hence a speciation probability is minuscule. "This line of argument," says Paulos, "is deeply flawed."

He writes: "Note that there are always a fantastically huge number of evolutionary paths that might be taken by an organism (or a process), but there is only one that actually will be taken. So, if, after the fact, we observe the particular evolutionary path actually taken and then calculate the a priori probability of its having been taken, we will get the miniscule probability that creationists mistakenly attach to the process as a whole."

Though we have dealt with this argument in terms of the probability of the original biological cell, we must also consider its application to evolution via mutation. We can take mutations to follow conditional probabilities. And though a particular mutation may be rather probable, being conditioned by the state of the organism (previous mutations and current environment), we must consider the entire chain of mutations represented by an extant species.

If we consider each species as representing a chain of mutations from the primeval organism, then we have for each a chain of conditional probability. A few probabilities may be high, but most are extremely low. Conditional probabilities can be graphed as trees of branching probabilities, so that a chain of mutation would be represented by one of these paths. We simply multiply each branch probability to get the total probability per path.

As a simple example, a 100-step conditional probability path with 10 probabilities of 0.9, 60 of 0.7 and 30 of 0.5 yields a cumulative probability of 1.65 x 10^-19. In other words, the more mutations and ancestral species attributed to an extant species, the less likely that species is to exist via passive natural selection. The actual numbers are so remote as to make natural selection by passive filtering virtually impossible, though perhaps we might conjecture some nonlinear effect going on among species that tends to overcome this problem.
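The arithmetic of that 100-step chain is a one-liner:

```python
# Cumulative probability of the 100-step conditional chain:
# 10 steps at 0.9, 60 steps at 0.7, 30 steps at 0.5.
p = (0.9 ** 10) * (0.7 ** 60) * (0.5 ** 30)
print(p)  # about 1.65e-19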

Think of it this way: During an organism's lifetime, there is a fantastically large number of possible mutations. What is the probability that the organism will happen upon one that is beneficial? That event would, if we are talking only about passive natural selection, be found under a probability distribution tail (whether normal, Poisson or other). The probability of even a few useful mutations occurring over 3.5 billion years isn't all that great (though I don't know a good estimate).

A 'botific vision
Let us, for example, consider Wolfram's cellular automata, which he puts into four qualitative classes of complexity. One of Wolfram's findings is that adding complexity to an already complex system does little or nothing to increase the complexity, though randomized initial conditions might speed the trend toward a random-like output (a fact which, we acknowledge, could be relevant to evolution theory).

Now suppose we take some cellular automata and, at every nth or so step, halt the program and revise the initial conditions slightly or greatly, based on a cell block between cell n and cell n+m. What is the likelihood of increasing complexity to the extent that a Turing machine is devised? Or suppose an automaton is already a Turing machine. What is the probability that it remains one or that a more complex-output Turing machine results from the mutation?

I haven't calculated the probabilities, but I would suppose they are all out under a tail.

Paulos has elsewhere underscored the importance of Ramsey theory, which has an important role in network theory, in countering the idea that "self-organization" is unlikely. Actually, with sufficient n, "highly organized" networks are very likely.(6) Whether this implies sufficient resources for the self-organization of a machine is another matter. True, a high enough n seems to guarantee such a possibility. But the n may be too high to be physically reasonable. 

Darwin on the Lam?
However, it seems passive natural selection has an active accomplice in the extraordinarily subtle genetic machinery. It seems that some form of neo-Lamarckianism is necessary, or at any rate a negative feedback system which tends to damp out minor harmful mutations without ending the lineage altogether (catastrophic mutations usually go nowhere, the offspring most often not getting a chance to mate). 

Matchmaking
It must be acknowledged that in microbiological matters, probabilities need not always follow a routine independence multiplication rule. In cases where random matching is important, we have the number 0.63 turning up quite often.

For example, if one has n addressed envelopes and the n corresponding letters are randomly shuffled and then put in the envelopes, what is the probability that at least one letter arrives at the correct destination? The surprising answer is the alternating sum 1 - 1/2! + 1/3! - ... +/- 1/n!. For n greater than 10, the probability converges near 63%.

That is, we don't calculate, say, 11^-11 (about 3.5 x 10^-12); rather, the series approximates 1 - e^-1 = 0.632 very closely.
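The envelope probability can be computed directly by inclusion-exclusion; a minimal sketch:

```python
import math

def match_probability(n):
    """P(at least one of n shuffled letters lands in its own envelope),
    by inclusion-exclusion: 1 - 1/2! + 1/3! - ... +/- 1/n!."""
    return sum((-1) ** (k + 1) / math.factorial(k) for k in range(1, n + 1))

print(match_probability(11))  # about 0.632
print(1 - math.exp(-1))       # the limiting value, 1 - 1/e, also about 0.632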

Similarly, suppose one has eight distinct pairs of socks randomly strewn in a drawer and thoughtlessly pulls out six one by one. What is the probability of at least one matching pair?

The first sock has no match. The probability the second will fail to match the first is 14/15. The probability for the third failing to match is 12/14 and so on until the sixth sock. Multiplying all these probabilities to get the probability of no match at all yields 32/143. Hence the probability of at least one match is 1 - 32/143 or about 78%.
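The sock calculation generalizes: after i unmatched socks are out of the drawer, the next pull avoids a match with probability (2p - 2i)/(2p - i) for p pairs. A sketch with exact fractions (the function name is mine):

```python
from fractions import Fraction

def no_match_prob(pairs, drawn):
    """P(no matching pair) when `drawn` socks are pulled one by one
    from `pairs` distinct pairs of socks."""
    p = Fraction(1)
    for i in range(1, drawn):
        # remaining socks: 2*pairs - i, of which i would complete a pair
        p *= Fraction(2 * pairs - 2 * i, 2 * pairs - i)
    return p

p_no = no_match_prob(8, 6)
print(p_no, 1 - p_no)  # 32/143 and 111/143, about 78%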

These are minor points, perhaps, but they should be acknowledged when considering probabilities in an evolutionary context.

And so
It may be that the ins and outs of evolution arguments were beyond the scope of Irreligion, but I don't think Paulos has entirely refuted the skeptics in this matter.(7)

Nevertheless, the book is a succinct reference work and deserves a place on one's bookshelf.

1. Paulos finds himself disconcerted by the "overbearing religiosity of so many humorless people."
Whenever one upholds an unpopular idea, one can expect all sorts of objections from all sorts of
people, not all of them well mannered or well informed. Comes with the territory. Unfortunately,
I think this backlash may have blinded him to the many kind, cheerful and non-judgmental
Christians and other religious types in his vicinity.  Some people, unable to persuade Paulos of
God's existence, end the conversation with "I'll pray for you..." I can well imagine that he
senses that the pride of the other person is motivating a put-down. Some of these souls might try
not letting the left hand know what the right hand is doing.

2. Paulos recounts this amusing fable:
The great mathematician Euler was called to court to debate the necessity of God's existence with
a well-known atheist. Euler opened with: "Sir, (a + b^n)/n = x. Hence, God exists. Reply!"
Flabbergasted, his mathematically illiterate opponent walked away, speechless. Yet, is this joke
as silly as it at first seems? After all, one might say that the mental activity of mathematics
is so profound (even if the specific equation is trivial) that the existence of a Great Mind is
implied.

3. We should caution that the runs test, which works when n_1 and n_2 are each at least 8,
fails for the pattern HH TT HH TT... This failure seems to be an artifact of the runs test
assumption that a usual number of runs is about n/2. I suggest that we simply say that the
probability of that pattern is less than or equal to that of H T H T H T..., a pattern whose
z score rises rapidly with n. Other patterns, such as HHH TTT HHH..., also climb away from the
randomness area, though slowly, with n. With these cautions, however, the runs test gives
striking results.

4. Thanks to John Paulos for pointing out an embarrassing misstatement in a previous draft. I
somehow mangled the probabilities during the editing. By the way, my tendency to write flubs
when I actually know better is a real problem for me and a reason I need attentive readers to
help me out.

5. I also muddled this section. Josh Mitteldorf's sharp eyes forced a rewrite.

6. Paulos in a column writes: "A more profound version of this line of thought can be traced
back to British mathematician Frank Ramsey, who proved a strange theorem. It stated that if you
have a sufficiently large set of geometric points and every pair of them is connected by either
a red line or a green line (but not by both), then no matter how you color the lines, there will
always be a large subset of the original set with a special property. Either every pair of the
subset's members will be connected by a red line or every pair of the subset's members will be
connected by a green line. If, for example, you want to be certain of having at least three
points all connected by red lines or at least three points all connected by green lines, you will
need at least six points. (The answer is not as obvious as it may seem, but the proof isn't
difficult.) For you to be certain that you will have four points, every pair of which is
connected by a red line, or four points, every pair of which is connected by a green line,
you will need 18 points, and for you to be certain that there will be five points with this
property, you will need -- it's not known exactly -- between 43 and 55. With enough points,
you will inevitably find unicolored islands of order as big as you want, no matter how you color
the lines."
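The first small case Paulos cites -- six points force a one-color triangle, i.e. R(3,3) = 6 -- is small enough to verify by exhaustive search; a sketch (the function name is mine):

```python
from itertools import combinations, product

def every_coloring_has_mono_triangle(n):
    """Check whether every red/green edge coloring of the complete
    graph on n points contains a single-color triangle."""
    edges = list(combinations(range(n), 2))
    for colors in product((0, 1), repeat=len(edges)):
        c = dict(zip(edges, colors))
        if not any(c[(i, j)] == c[(j, k)] == c[(i, k)]
                   for i, j, k in combinations(range(n), 3)):
            return False  # found a coloring with no one-color triangle
    return True

print(every_coloring_has_mono_triangle(5))  # False: 5 points are not enough
print(every_coloring_has_mono_triangle(6))  # True: 6 points force a triangle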

7. Paulos, interestingly, tells of how he lost a great deal of money by an ill-advised enthusiasm
for WorldCom stock in A Mathematician Plays the Stock Market (Basic Books, 2003). The expert
probabilist and statistician found himself under a delusion which his own background should have
fortified him against. (The book, by the way, is full of penetrating insights about
probability and the market.) One wonders whether Paulos might also be suffering from another
delusion: that probabilities favor atheism.

Wikipedia article on Chaitin-Kolmogorov complexity
In search of a blind watchmaker (by Paul Conant)
Wikipedia article on runs test
Eric Schechter on Wolfram vs intelligent design
On Hilbert's sixth problem (by Paul Conant)
The scientific embrace of atheism (by David Berlinski) 
John Allen Paulos' home page
