First published Wednesday, November 01, 2006
Does math back 'intelligent design'?
Two of the main arguments favoring "intelligent design" of basic biotic machines:
1. Mathematician William A. Dembski (Science and Evidence for Design in the Universe) says that if a pattern is found to have an extraordinarily low probability of random occurrence -- variously 10^(-40) to 10^(-150) -- then it is reasonable to infer design by a conscious mind. He points out that forensic investigators typically employ such a standard, though heuristically. (A rough numerical sketch of how such a bound works is given just after this list.)
2. Biochemist Michael J. Behe (Darwin's Black Box) says that a machine is irreducibly complex if it is built of interdependent parts such that removing any one of them destroys the machine's function. Before discussing intricate biological mechanisms, he cites a mousetrap as a machine composed of interdependent parts that could not reasonably be supposed to fall together randomly.
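To get a feel for Dembski's numbers, here is a back-of-the-envelope sketch in Python. It is my own illustration, not a calculation from Dembski's writings: the cosmic limits below are the rough figures usually cited in motivating the 10^(-150) "universal probability bound," and the 150-residue protein used for comparison is an arbitrary toy target assembled by uniform random choice.

    # Back-of-the-envelope look at the "universal probability bound."
    # My own illustration; the toy protein target is not Dembski's example.
    from math import log10

    particles = 1e80             # elementary particles in the observable universe
    transitions_per_sec = 1e45   # maximum state changes per particle per second
    seconds_available = 1e25     # generous upper bound on available time

    max_events = particles * transitions_per_sec * seconds_available  # ~10^150
    universal_bound = 1.0 / max_events                                # ~10^-150

    # Toy target: one specific 150-residue protein assembled by uniform
    # random choice among 20 amino acids (a deliberately naive model).
    p_target = 20.0 ** -150                                           # ~10^-195

    print(f"universal bound : 10^{log10(universal_bound):.0f}")
    print(f"toy target      : 10^{log10(p_target):.1f}")
    print("target falls below the bound?", p_target < universal_bound)

The naive uniform-assembly model is, of course, exactly the sort of assumption Dembski's critics attack; the sketch only shows how the arithmetic behind the bound is meant to work.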
Behe is aware of the work of Stuart Kauffman, but dismisses it because Kauffman does not deal with biological specifics. Kauffman's concept of self-organization via autocatalysis, however, lays the groundwork for a mathematical model of how systems can evolve toward complexity, including sudden phase transitions from one state -- which we might perceive as "primitive" -- to another -- which we might perceive as "higher." (Like the word "complexity," the term "self-organization" is sometimes used rather loosely; I hope to write something on this soon.)
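Kauffman's own "buttons and threads" toy model conveys the phase-transition idea without any chemistry: tie random threads between buttons, and as the ratio of threads to buttons passes roughly one half, a giant connected cluster suddenly appears. The sketch below is my own minimal Python rendering of that toy model; the population size and the ratios tested are arbitrary choices.

    # Kauffman's "buttons and threads" toy model: tie random threads between
    # buttons and watch the size of the largest connected cluster.
    import random

    def largest_cluster(n_buttons, n_threads, seed=1):
        rng = random.Random(seed)
        parent = list(range(n_buttons))         # union-find forest, one tree per cluster

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path compression
                x = parent[x]
            return x

        for _ in range(n_threads):
            a = rng.randrange(n_buttons)
            b = rng.randrange(n_buttons)
            parent[find(a)] = find(b)           # tying a thread merges two clusters

        sizes = {}
        for i in range(n_buttons):
            root = find(i)
            sizes[root] = sizes.get(root, 0) + 1
        return max(sizes.values())

    N = 10_000
    for ratio in (0.1, 0.3, 0.5, 0.7, 1.0):     # threads per button
        print(f"threads/buttons = {ratio:.1f}  largest cluster = {largest_cluster(N, int(ratio * N))}")

Kauffman uses the abrupt appearance of the giant cluster as an analogy for how a collectively autocatalytic set can crystallize suddenly once molecular diversity crosses a threshold.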
Kauffman's thinking reflects the work of Ilya Prigogine, who made the reasonable point that systems far from equilibrium might sometimes become more sophisticated before degenerating in accordance with the "law of entropy."
This is not to say that Behe's examples of "irreducible complexity" -- including cells propelled by the cilium "oar" and the extraordinarily complex cascade behind blood-clotting -- have been adequately dealt with by the strict materialists, who sincerely believe that the human mind is within reach of grasping the essence of how the universe works via the elucidation of some basic rules.
One such scientist is Stephen Wolfram, whose A New Kind of Science examines "complexity" via iterated cellular automaton graphs. He dreams that the CA concept could lead to such a breakthrough. (But I argue that his hope, unless modified, is vain; see sidebar link on Turing machines.)
Like Kauffman, Wolfram is a renegade on evolution theory and argues that his studies of cellular automata indicate that constraints -- and specifically the principle of natural selection -- have little impact on the development of order or complexity. Complexity, he finds, is a normal outcome of even "simple" sets of instructions, especially when initial conditions are selected at random.
Thus, he is not surprised that complex biological organisms might be a consequence of some simple program. And he makes a convincing case that some forms found in nature, such as animal pigmentation patterns, are very close to patterns generated by one or another of his cellular automata.
However, though he discusses n-dimensional automata, the findings there are sketchy (the combinatorial possibilities are far beyond computer range), and so he cannot give a three-dimensional example of a complex dynamical system emerging, gestalt-like, from some simple algorithm.
Nevertheless, Wolfram's basic point is strong: complexity (highly ordered patterns) can emerge from simple rules recursively applied.
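As a concrete illustration of that point, here is the elementary cellular automaton known as rule 30, one of Wolfram's stock examples. The code is my own minimal Python rendering of the standard rule, not anything taken from his book; the grid width and step count are arbitrary.

    # Elementary cellular automaton, rule 30 -- a standard example of complex
    # behavior arising from a trivially simple rule.
    RULE, WIDTH, STEPS = 30, 79, 40

    def step(cells, rule=RULE):
        """One update: each cell's new value is the rule bit for its 3-cell neighborhood."""
        n = len(cells)
        new = []
        for i in range(n):
            neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
            new.append((rule >> neighborhood) & 1)   # look up the rule bit for that pattern
        return new

    row = [0] * WIDTH
    row[WIDTH // 2] = 1                  # start from a single "on" cell
    for _ in range(STEPS):
        print("".join("#" if c else " " for c in row))
        row = step(row)

Each cell looks only at itself and its two neighbors, and the eight possible neighborhoods are mapped to new values by the bits of the number 30; yet the triangle that grows from a single "on" cell is strikingly intricate.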
Another of his claims, which I have not examined in detail, is that at least one of his CA experiments produced a graph that, after sufficient iterations, statistically replicated a random graph. That is, when parts of the graph were sampled, the outcome was statistically indistinguishable from a graph generated by computerized randomization. The claim isn't airtight, and specific cases still need analysis, but it points to the possibility that some structures are rather more probable than a statistical sampling would indicate. However, this possibility is no disproof of Dembski's approach. (By the way, Wolfram implicitly treats "pseudorandom" as referring to a specific class of generators that his software Mathematica avoids when producing "random" numbers. Presumably he thinks his particular CA does not fall into that "pseudorandom" class, despite its being fully deterministic.)
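Wolfram's stock example for this claim is the center column of rule 30. The quick check below is my own informal poke at the claim, not one of his published analyses, and the run length is arbitrary; it only counts ones and adjacent equal pairs, both of which a fair coin would put near one half.

    # Informal check of the rule 30 center column (my own quick test).
    STEPS, WIDTH = 1000, 2101            # wide enough that edge effects never reach the center
    row = [0] * WIDTH
    row[WIDTH // 2] = 1
    bits = []
    for _ in range(STEPS):
        bits.append(row[WIDTH // 2])     # record the center cell before each update
        row = [(30 >> ((row[i - 1] << 2) | (row[i] << 1) | row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]

    ones = sum(bits) / len(bits)
    equal_pairs = sum(b1 == b2 for b1, b2 in zip(bits, bits[1:])) / (len(bits) - 1)
    print(f"fraction of ones        : {ones:.3f}")
    print(f"fraction of equal pairs : {equal_pairs:.3f}")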
However, Wolfram also makes a very plausible case (I don't say proof because I have not examined the claim at that level of detail) that his cellular automata can be converted into logic languages, including ones rich enough for Gödel's incompleteness theorem to apply.
As I understand Gödel's proof, he demonstrated that, if a system is logically consistent, then there is a class of statements that cannot be derived from its axioms. He did this through an encipherment scheme that permits self-reference, and so some have taken his proof to concern only an irrelevant semantical issue of self-reference (akin to Russell's paradox). But my take is that the proof says that true statements exist that cannot be proved or derived.
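For reference, here is my paraphrase of the textbook form of the first incompleteness theorem (not Gödel's original wording): if T is a consistent, effectively axiomatized theory containing basic arithmetic, then there is a sentence G_T such that

    T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T,

and yet G_T is true in the standard model of the natural numbers. The "unprovable but true" character of G_T is what the self-referential encipherment is used to establish.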
So, in that case, if we model a microbiotic machine as a statement in some logic system, we see immediately that it could be a statement of the Gödel type, meaning that the statement holds but cannot be derived from any rules specifying the evolution of biological systems. If such a statement were indeed found to be unprovable, then many would be inclined to infer that the machine it specifies must have been designed by a conscious mind. However, such an inference raises a philosophical (which does not mean trivial) difficulty.