In search of a blind watchmaker
Richard Dawkins' web site
Wikipedia article on Dawkins
Wikipedia article on Francis Crick
Abstract of David Layzer's two-tiered adaptation
Joshua Mitteldorf's home page
Do dice play God? A book review
A discussion of The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design by Richard Dawkins
First posted Oct. 5, 2010, and revised as of Oct. 8, 2010
Please notify me of errors or other matters at "krypto78...at...gmail...dot...com"
By PAUL CONANT
Surely it is quite unfair to review a popular science book published years ago. Writers are wont to have their views evolve over time.[1] Yet in the case of Richard Dawkins' The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (W.W. Norton 1986), a discussion of the mathematical concepts seems warranted, because books by this eminent biologist have been so influential and the "blind watchmaker" paradigm is accepted by a great many people, including a number of scientists.
Dawkins' continuing importance can be gauged by the fact that his most recent book, The God Delusion (Houghton Mifflin 2006), was a best seller, and by the links above. In fact, Watchmaker, also a best seller, was re-issued in 2006.
I do not wish to disparage anyone's religious or irreligious beliefs, but I do think it important to point out that non-mathematical readers should beware the idea that Dawkins has made a strong case that the "evidence of evolution reveals a universe without design."
There is little doubt that some of Dawkins' conjectures and ideas in Watchmaker are quite reasonable. However, many readers are likely to think that he has made a mathematical case that justifies the theory(ies) of evolution, in particular the "modern synthesis" that combines the concepts of passive natural selection and genetic mutation.
Dawkins wrote his apologia back in the eighties when computers were becoming more powerful and accessible, and when PCs were beginning to capture the public fancy. So it is understandable that, in this period of burgeoning interest in computer-driven chaos, fractals and cellular automata, he might have been quite enthusiastic about his algorithmic discoveries.
However, interesting computer programs may not be quite as enlightening as at first they seem.
Cumulative selection
Let us take Dawkins' argument about "cumulative selection," in which he uses computer programs as analogs of evolution. In the case of the phrase, "METHINKS IT IS LIKE A WEASEL," the probability -- using 26 capital letters and a space -- of coming up with such a sequence randomly is 27^-28 (the astonishingly remote 8.3 x 10^-41). However, that is also the probability for any random string of that length, he notes, and we might add that, for most probability distributions, when n is large the probability of any specific outcome approaches 0.
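For readers who want to verify the arithmetic, a one-line check in Python (using the same assumption of 27 equally likely characters) reproduces the figure:

# Single-step probability of hitting the 28-character target at random,
# assuming 27 equally likely characters (26 capitals plus a space).
p = 27.0 ** -28
print(p)   # about 8.3e-41, matching the figure quoted above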
Such a string would be fantastically unlikely to occur in "single step evolution," he writes. Instead, Dawkins employs cumulative selection, which begins with a random 28-character string and then "breeds from" this phrase. "It duplicates it repeatedly, but with a certain chance of random error -- 'mutation' -- in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL."
Three experiments evolved the precise sentence in 43, 64 and 41 steps, he wrote.
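Dawkins does not print his program, so the following Python sketch is only a guess at its shape -- the litter size and per-character mutation rate are assumptions made for illustration -- but it captures the "breed from the closest progeny" loop he describes:

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(parent, rate=0.05):
    # Copy the parent with a small chance of a random error at each position.
    return "".join(c if random.random() > rate else random.choice(ALPHABET)
                   for c in parent)

def score(phrase):
    # Number of positions that already match the target.
    return sum(a == b for a, b in zip(phrase, TARGET))

phrase = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while phrase != TARGET:
    generation += 1
    litter = [mutate(phrase) for _ in range(100)]
    phrase = max(litter, key=score)   # breed from the progeny closest to the target

# The generation count depends on the assumed mutation rate and litter size;
# Dawkins reports runs of 43, 64 and 41 generations for his version.
print(generation, phrase)

Note that the loop converges quickly only because any copying error has a generous 1-in-27 chance of improving a given position; as discussed further below, assigning realistically low probabilities to "positive mutations" stretches the run enormously.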
Dawkins' basic point is that an extraordinarily unlikely string is not so unlikely via "cumulative selection."
Once he has the readers' attention, he concedes that his notion of natural selection precludes a long-range target and then goes on to talk about "biomorph" computer visualizations (to be discussed below).
Yet it should be obvious that Dawkins' "methinks" argument applies specifically to evolution once the mechanisms of evolution are at hand. So the fact that he has been able to design a program which behaves like a neural network really doesn't say much about anything. He has achieved a proof of principle that was not all that interesting, although I suppose it would answer a strict creationist, which was perhaps his basic aim.
But which types of string are closer to the mean? Which ones occur most often? If we were to subdivide chemical constructs into various sets, the most complex ones -- which as far as we know are lifeforms -- would be farthest from the mean. (Dawkins, in his desire to appeal to the lay reader, avoids statistics theory other than by supplying an occasional quote from R.A. Fisher.)
Let us, like Dawkins, use a heuristic analog.[2] Suppose we take the set of all grammatical English sentences of 28 characters. The variable is an English word rather than a Latin letter or space. What would be the probability of any 28-character English sentence appearing randomly?
My own sampling of a dictionary found that eight-letter words are the most common, accounting for about 21% of entries. So, assuming the English lexicon contains 500,000 words, we obtain about 105,000 words of length 8.
Now let us do a Fermi-style rough estimate. For the moment ignoring spaces, we'll posit word lengths of 2 to 9 letters as covering virtually all cases. That is, we'll pretend there are sentences composed only of two-letter words, only of three-letter words, and so on up to nine letters. Further, we shall put an upper bound of 10^5 on the set of words of any relevant length (dropping the extra 5,000 eight-letter words as negligible for our purposes).
This leads to a total number of combinations of (10^5)^2 + 10^8 + ... + 10^14, which approximates 10^14.
We have not considered spaces, nor (directly) combinations of words of various lengths. It seems overwhelmingly likely that any increases would be canceled by the stricture that sentences be grammatical, something we haven't modeled. But even if the number of combinations were an absurd 10 orders of magnitude higher, the area under the part of some typical probability curve that covers all grammatical English sentences of length 28 would take up a minuscule percentage of a tail.
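The comparison being gestured at can be made concrete with a few lines of arithmetic (a sketch using the rough totals above; the second figure simply applies the "absurd 10 orders of magnitude" allowance just mentioned):

# Compare the rough count of grammatical 28-character sentences with the
# total number of 28-character strings over 27 characters.
all_strings = 27 ** 28                        # about 1.2e40 possible strings
sentences_low = 10 ** 14                      # the Fermi estimate above
sentences_high = 10 ** 24                     # the 10-orders-higher allowance
print(f"{sentences_low / all_strings:.1e}")   # about 8e-27 of all strings
print(f"{sentences_high / all_strings:.1e}")  # about 8e-17 -- still a sliver of the tail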
Analogously, to follow Dawkins, we would suspect that the probability is likewise remote for random occurrence of any information structure as complex as a lifeform.
To reiterate, the entire set of English sentences of 28 characters is to be found far out in the tail of some probability distribution. Of course, we haven't specified which distribution because we have not precisely defined what is meant by "level of complexity." This is also an important omission by Dawkins.
We haven't really done much other than to underscore the lack of precision of Dawkins' analogy.
Dawkins then goes on to talk about his "biomorph" program, in which his algorithm recursively alters the pixel set, aided by his occasional selecting out of unwanted forms. He found that some algorithms eventually evolved insect-like forms, and thought this a better analogy to evolution, there having been no long-term goal. However, the fact that "visually interesting" forms show up with certain algorithms again says little. In fact, the remoteness of the probability of insect-like forms evolving was disclosed when he spent much labor trying to repeat the experiment because he had lost the exact initial conditions and parameters for his algorithm. (And, as a matter of fact, he had become an intelligent designer with a goal of finding a particular set of results.)
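Dawkins gives only a verbal description of the biomorph program, so the following is a loose Python sketch of the general idea rather than his algorithm: a handful of "genes" controls a recursively drawn branching form, each generation produces mutant copies, and one is selected to breed from (here by a crude stand-in rule instead of the human eye).

import math, random

def develop(genes, depth, x=0.0, y=0.0, angle=90.0, segments=None):
    # Recursively grow line segments from the genome; returns the segment list.
    if segments is None:
        segments = []
    if depth <= 0:
        return segments
    length = genes[0] * depth        # gene 0: how segment length scales with depth
    spread = genes[1] * 10           # gene 1: branching angle in degrees
    for turn in (-spread, spread):
        a = math.radians(angle + turn)
        nx, ny = x + length * math.cos(a), y + length * math.sin(a)
        segments.append(((x, y), (nx, ny)))
        develop(genes, depth - 1, nx, ny, angle + turn, segments)
    return segments

def mutate(genes):
    # Copy the genome with one gene nudged up or down -- the copying "error."
    child = genes[:]
    child[random.randrange(len(child))] += random.choice((-1, 1))
    return child

genes = [3, 2, 5]                    # gene 2: recursion depth
for generation in range(6):
    litter = [mutate(genes) for _ in range(8)]
    # Stand-in for the selecting eye: prefer the form with the most segments.
    genes = max(litter, key=lambda g: len(develop(g, max(1, g[2]))))
    print(generation, genes, len(develop(genes, max(1, genes[2]))))

Even in so small a toy, reproducing a particular form later requires knowing the exact genome and parameters that produced it -- the difficulty Dawkins himself ran into when he tried to re-evolve his insect-like shapes.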
Again, what Dawkins has really done is use a computer to give his claims some razzle dazzle. But on inspection, the math is not terribly significant.
It is evident, however, that he hoped to counter Fred Hoyle's point that the probability of life organizing itself was equivalent to a tornado blowing through a junkyard and assembling from the scraps a fully functioning 747 jetliner, Hoyle having made this point not only with respect to the origin of life, but also with respect to evolution by natural selection.
So before discussing the origin issue, let us turn to the modern synthesis.
The modern synthesis
I have not read the work of R.A. Fisher and others who established the modern synthesis merging natural selection with genetic mutation, and so my comments should be read in this light.
Dawkins argues that, although most mutations are either neutral or harmful, there are enough progeny per generation to ensure that an adaptive mutation proliferates. And it is certainly true that, if we look at artificial selection -- as with dog breeding -- a desirable trait can proliferate in very short time periods, and there is no particular reason to doubt that, if a population of dogs remained isolated on some island for tens of thousands of years, it would diverge into a new species, distinct from the many wolf sub-species.
But Dawkins is of the opinion that neutral mutations that persist because they do no harm are likely to be responsible for increased complexity. After all, relatively simple life forms are enormously successful at persisting.
And, as Stephen Wolfram points out (A New Kind of Science, Wolfram Media 2002), any realistic population size at a particular generation is extremely unlikely to produce a useful mutation, because the ratio of useful mutations to possible ones is some very low number. So Wolfram also believes neutral mutations drive complexity.
We have here two issues:
1. If complexity is indeed a result of neutral mutations alone, increases in complexity aren't driven by selection and don't tend to proliferate.
2. Why is any species at all extant? It is generally assumed that natural selection winnows the field down to a lucky few, but does passive filtering suffice to account for this?
Though Dawkins is correct when he says that a particular mutation may be rather probable once conditioned on the state of the organism (its previous mutations), we must consider the entire chain of mutations represented by a species.
If we consider each species as representing a chain of mutations from the primeval organism, then we have a chain of conditional probability. A few probabilities may be high, but most are extremely low. Conditional probabilities can be graphed as trees of branching probabilities, so that a chain of mutation would be represented by one of these paths. We simply multiply each branch probability to get the total probability per path.
As a simple example, a 100-step conditional probability path with 10 probabilities of 0.9 and 60 with 0.7 and 30 with 0.5 yields a cumulative probability of 1.65 x 10^-19.
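The multiplication is trivial to check (the branch probabilities here are, as above, purely illustrative):

# Cumulative probability of the illustrative 100-step chain:
# 10 branches at 0.9, 60 at 0.7 and 30 at 0.5, multiplied together.
p = (0.9 ** 10) * (0.7 ** 60) * (0.5 ** 30)
print(f"{p:.2e}")   # about 1.65e-19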
In other words, the more mutations and ancestral species attributed to an extant species, the less likely it is to exist via passive natural selection. The actual numbers are so remote as to make natural selection by passive filtering virtually impossible, though perhaps we might conjecture some nonlinear effect going on among species that tends to overcome this problem.
Dawkins' algorithm demonstrating cumulative evolution fails to account for this difficulty. Though he realizes a better computer program would have modeled lifeform competition and adaptation to environmental factors, Dawkins says such a feat was beyond his capacities. However, had he programmed in low probabilities for "positive mutations," cumulative evolution would have been very hard to demonstrate.
Our second problem is what led Hoyle to revive the panspermia conjecture, in which life and proto-life forms are thought to travel through space and spark earth's biosphere. His thinking was that spaceborne lifeforms rain down through the atmosphere and give new jolts to the degrading information structures of earth life. (The panspermia notion has received much serious attention in recent years, though Hoyle's conjectures remain outside the mainstream.)
From what I can gather, one of Dawkins' aims was to counter Hoyle's sharp criticisms. But Dawkins' vigorous defense of passive natural selection does not seem to square with the probabilities, a point made decades previously by J.B.S. Haldane.
Without entering into the intelligent design argument, we can suggest that the implausible probabilities might be addressed by a neo-Lamarckian mechanism of negative feedback adaptations. Perhaps a stress signal on a particular organ is received by a parent and the signal transmitted to the next generation. But the offspring's genes are only acted upon if the other parent transmits the signal. In other words, the offspring embryo would not strengthen an organ unless a particular stress signal reached a threshold.
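Purely to make the proposed trigger explicit, here is a toy sketch (not a biological model; the threshold value is arbitrary) of the idea that the embryo acts only when both parents transmit the stress signal strongly enough:

# Toy version of the conjectured two-parent threshold: the offspring's organ
# is strengthened only if both parents transmit the stress signal and their
# combined signal reaches the threshold.
def organ_strengthened(signal_a, signal_b, threshold=1.0):
    return signal_a > 0 and signal_b > 0 and (signal_a + signal_b) >= threshold

print(organ_strengthened(0.7, 0.6))   # True: both parents signal, sum 1.3
print(organ_strengthened(1.5, 0.0))   # False: one parent transmits nothing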
If that be so, passive natural selection would still play a role, particularly with respect to body parts that lose their role as essential for survival.
Dawkins said Lamarckism had been roundly disproved, but since the time he wrote the book molecular biology has shown the possibility of reversal of genetic information (retroviruses and reverse transcription). However, my real point here is not about Lamarckism but about Dawkins' misleading mathematics and reasoning.
Joshua Mitteldorf, an evolutionary biologist with a physics background and a Dawkins critic, points out that an idea proposed more than 30 years ago by David Layzer is just recently beginning to gain ground as a response to the cumulative probabilities issue. Roughly, I would style Layzer's proposal a form of neo-Lamarckism. The citation[3] is found at the bottom of this essay and the link is posted above.
On origins
Dawkins concedes that the primeval cell presents a difficult problem, the problem of the arch. If one is building an arch, one cannot build it incrementally, stone by stone, because at some point a keystone must be inserted, and this requires that the proto-arch be supported until the keystone is in place. The complete arch cannot evolve incrementally. This of course is the essential point made by the few scientists who support intelligent design.
Dawkins essentially has no answer. He says that a previous lifeform, possibly silicon-based, could have acted as "scaffolding" for current lifeforms, the scaffolding having since vanished. Clearly, this simply pushes the problem back. Is he saying that the problem of the arch wouldn't apply to the previous incarnation of "life" (or something lifelike)?
Some might argue that there is a possible answer in the concept of phase shift, in which, at a threshold energy, a disorderly system suddenly becomes more orderly. However, this idea is left unaddressed in Watchmaker. I would suggest that we would need a sequence of phase shifts that would have a very low cumulative probability, though I hasten to add that I have insufficient data for a well-informed assessment.
Cosmic probabilities
Is the probability of life in the cosmos very high, as some think? Dawkins argues that it can't be all that high, at least for intelligent life, otherwise we would have picked up signals. I'm not sure this is valid reasoning, but I do accept his notion that if there are a billion life-prone planets in the cosmos and the probability of life emerging is a billion to one, then it is virtually certain to have originated somewhere in the cosmos.
Though Dawkins seems not to have accounted for the fact that much of the cosmos is forever beyond the range of any possible detection, or for the fact that time gets to be a tricky issue on cosmic scales, let us, for the sake of argument, grant that the population of planets extends to any time and any place, meaning it is possible life came and went elsewhere, or hasn't arisen yet but will, elsewhere.
Such a situation might answer the point made by Peter Ward and Donald Brownlee in Rare Earth: Why Complex Life Is Uncommon in the Universe (Springer 2000) that the geophysics undergirding the biosphere represents a highly complex system (and the authors make efforts to quantify the level of complexity), meaning that the probability of another such system arising is extremely remote. (Though the book was written before numerous discoveries concerning extrasolar planets, thus far their essential point has not been disproved. And non-carbon-based life is not terribly likely, because carbon's valences permit high levels of complexity in its compounds.)
Now some may respond that it seems terrifically implausible that our planet just happens to be the one where the, say, one-in-a-billion event occurred. However, the fact that we are here to ask the question is perhaps sufficient answer to that worry. If it had to happen somewhere, here is as good a place as any. A more serious concern is the probability that intelligent life arises in the cosmos.
The formation of multicellular organisms is perhaps the essential "phase shift" required, in that central processors are needed to organize their activities. But what is the probability of this level of complexity? Obviously, in our case, the probability is one, but, otherwise, the numbers are unavailable, mostly because of the lack of a mathematically precise definition of "level of complexity" as applied to lifeforms.
Nevertheless, the probabilities tend to point in the direction of the cosmically absurd: there aren't anywhere near enough atoms -- let alone planets -- to make such probabilities workable. Supposing complexity to result from neutral mutations, the probability of multicellular life would be far, far lower than that for unicellular forms whose speciation is driven by natural selection. Also, what is the survival advantage of self-awareness, which most would consider an essential component of human-like intelligence?
Hoyle's most recent idea was that probabilities were increased by proto-life in comets that eventually reached earth. But, despite enormous efforts to resolve the arch problem (or the "jumbo jet problem"), in my estimate he did not do so.
(Interestingly, Dawkins argues that people are attracted to the idea of intelligent design because modern engineers continually improve machinery designs, giving a seemingly striking analogy to evolution. Something that he doesn't seem to really appreciate is that every lifeform may be characterized as a negative-feedback controlled machine, which converts energy into work and obeys the second law of thermodynamics. That's quite an "arch.")
The intelligent design proponents, however, face a difficulty when relying on the arch analogy: the possibility of undecidability. As the work of Gödel, Church, Turing and Post shows, some theorems cannot be proved by tracking back to axioms. They are undecidable. If we had a complete physical description of the primeval cell, we could encode that description as a "theorem." But that doesn't mean we could track back to the axioms to determine how it emerged. If the "theorem" were undecidable, we would know it to be "true" (having the cell description in all detail), but we might be forever frustrated in trying to determine how it came to exist.
In other words, a probabilistic argument is not necessarily applicable.
The problem of sentience
Watchmaker does not examine the issue of emergence of human intelligence, other than as a matter of level of complexity.
Hoyle noted in The Intelligent Universe (Holt, Rinehart and Winston 1984) that over a century ago, Alfred Russel Wallace was perplexed by the observation that "the outstanding talents of man... simply cannot be explained in terms of natural selection."
Hoyle quotes the Japanese biologist S. Ohno:
"Did the genome (genetic material) of our cave-dwelling predecessors contain a set or sets of genes which enable modern man to compose music of infinite complexity and write novels with profound meaning? One is compelled to give an affirmative answer...It looks as though the early Homo was already provided with the intellectual potential which was in great excess of what was needed to cope with the environment of his time."
Hoyle proposes in Intelligent that viruses are responsible for evolution, accounting for mounting complexity over time. However, this seems hard to square with the point just made that such complexity doesn't seem to occur as a result of passive natural winnowing and so there would be no selective "force" favoring its proliferation.
At any rate, I suppose that we may assume that Dawkins in Watchmaker saw the complexity inherent in human intelligence as most likely to be a consequence of neutral mutations.
Another issue not addressed by Dawkins (or Hoyle, for that matter) is the question of self-awareness. Usually the mechanists see self-awareness as an epiphenomenon of a highly complex program (a notion Roger Penrose struggled to come to terms with in The Emperor's New Mind (Oxford 1989) and Shadows of the Mind (Oxford 1994)).
But let us think of robots. Isn't it possible in principle to design robots that replicate themselves and maintain homeostasis until they replicate? Isn't it possible in principle to build in programs meant to increase the probability of successful replication as environmental factors shift?
In fact, isn't it possible in principle to design a robot that emulates human behaviors quite well? (Certain babysitter robots are even now raising ethical concerns about an infant's bonding with them.)
And yet there seems to be no necessity for self-awareness in such designs. Similarly, what would be the survival advantage of self-awareness for a species?
I don't suggest that some biologists haven't proposed interesting ideas for answering such questions. My point is that Watchmaker omits much, making the computer razzle dazzle that much more irrelevant.
Conclusion
In his autobiographical What Mad Pursuit (Basic Books 1988) written when he was about 70, Nobelist Francis Crick expresses enthusiasm for Dawkins' argument against intelligent design, citing with admiration the "methinks" program.
Crick, who trained as a physicist and was also a panspermia advocate (see link above), doesn't seem to have noticed the difference in issues here. If we are talking about an analog of the origin of life (one-step arrival at the "methinks" sentence), then we must go with a distinct probability of 8.3 x 10^-41. If we are talking about an analog of some evolutionary algorithm, then we can be convinced that complex results can occur with application of simple iterative rules (though, again, the probabilities don't favor passive natural selection).
One can only suppose that Crick, so anxious to uphold his lifelong vision of atheism, leaped on Dawkins' argument without sufficient criticality. On the other hand, one must accept that there is a possibility his analytic powers had waned.
At any rate, it seems fair to say that the theory of evolution is far from being a clear-cut theory, in the manner of Einstein's theory of relativity. There are a number of difficulties and a great deal of disagreement as to how the evolutionary process works. This doesn't mean there is no such process, but it does mean one should listen to mechanists like Dawkins with care.
******************
1. In a 1996 introduction to Watchmaker, Dawkins wrote that "I can find no major thesis in these chapters that I would withdraw, nothing to justify the catharsis of a good recant."
2. My analogy was inadequately posed in previous drafts. Hopefully, it makes more sense now.
3. David Layzer, "Genetic Variation and Progressive Evolution," The American Naturalist, Vol. 115, No. 6 (Jun. 1980), pp. 809-826. Published by The University of Chicago Press for The American Society of Naturalists. Stable URL: http://www.jstor.org/stable/2460802
Note: An early draft contained a ridiculous mathematical error that does not affect the argument but was very embarrassing. Naturally, I didn't think of it until after I was walking outdoors miles from an internet terminal. It has now been put right.