
Thursday, November 10, 2011

First published Thursday, November 02, 2006

Pseudorandom thoughts on complexity

Draft 2


This post supplements the previous post "Does math back 'intelligent design'?"

With respect to the general concept of evolution, or simply change over time, what do we mean by complexity?

Consider Stephen Wolfram's cellular automata graphs. We might think of complexity as a measure of the entropy of the graph, which evolves row by row from an initial condition according to a rule under which change occurs only locally, in minimal sets of contiguous cells. Taken in totality, or after some row n, the graphs register different quantities of entropy. That is, "more complex" graphs convey higher average information than "less complex" ones. Some graphs become all black or all white after some row n, corresponding to 0 information after that row. There exists a significant set of graphs that achieve neither maximum nor minimum entropy, of course.
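Before going further, it may help to have a concrete picture of how such a graph is generated. Here is a minimal sketch, assuming Wolfram's standard encoding of the 8-neighborhood elementary rules and using rule 30, one of his "chaotic" rules, as the illustration; the wraparound boundary is my own simplification, not Wolfram's infinite grid.

def evolve(rule, width=81, steps=40):
    # The rule number's binary digits give the new color for each of the
    # 8 possible three-cell neighborhoods (Wolfram's standard encoding).
    table = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1        # single black cell at center
    rows = [row]
    for _ in range(steps):
        # Wraparound boundary: an assumption made for brevity.
        row = [table[(row[(i - 1) % width] << 2)
                     | (row[i] << 1)
                     | row[(i + 1) % width]]
               for i in range(width)]
        rows.append(row)
    return rows

for r in evolve(30, steps=15):
    print(''.join('#' if c else '.' for c in r))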

How would we define information in a Wolfram cellular automaton graph? We can use several criteria. A row would have maximum entropy if its pattern of sequential cell colors is statistically indistinguishable from a random coloring. [To be fussy, we might use a double-slit single-photon detector to create a random sequence, whereby the color chosen for a cell is a function of the number of the quadrant in which a photon is detected at time t.]

Similarly for a column.
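To make this criterion concrete, here is a hedged sketch that estimates the Shannon entropy of a row (or a column) from the frequencies of its fixed-length blocks of cells. The block length k = 3 is an arbitrary choice of mine; a value near 1.0 bit per cell is what a fair-coin coloring would approach.

import math
import random
from collections import Counter

def block_entropy(cells, k=3):
    # Shannon entropy per cell, estimated from length-k block frequencies.
    blocks = [tuple(cells[i:i + k]) for i in range(len(cells) - k + 1)]
    n = len(blocks)
    counts = Counter(blocks)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / k   # bits per cell

print(block_entropy([0, 1] * 40))                                # periodic: low
print(block_entropy([random.randint(0, 1) for _ in range(80)]))  # near 1.0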

Obviously, we can consider both column and row. And we might also consider sets of rows and/or columns that recur with a simple period. Another possibility is to determine whether such sets recur in "smooth curve quasi-periods," such as at every n^2-th row. We may also want to know whether such sets zero out at some finite row.
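Checking for a simple period is mechanical. A minimal sketch (my own, not Wolfram's) that finds the smallest period consistent with a given row or column:

def smallest_period(seq):
    # Smallest p with seq[i] == seq[i + p] for all valid i;
    # returns len(seq) when no shorter period fits.
    n = len(seq)
    for p in range(1, n):
        if all(seq[i] == seq[i + p] for i in range(n - p)):
            return p
    return n

print(smallest_period([0, 1, 1, 0, 1, 1, 0, 1]))   # -> 3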



Another consideration is the appearance of "structures" over a two-dimensional region. This effectively means the visual perception of at least one border, whether closed or open. The border can display various levels of fuzziness. A linear feature implies at least one coloration period (cycle) appearing in every mth row or every nth column. The brain, in a Gestalt effect, collates the information in these periods as a "noteworthy structure." Such a structure may be defined geometrically or topologically (with constraints). That is, the periodic behavior may yield a sequence of congruent forms (that proliferate either symmetrically or asymmetrically) or of similar forms (as in "nested structures"), or of a set of forms each of which differs from the next incrementally by interior angle, creating the illusion of morphological change, as in cartoon animation.

At this juncture we should point out that there are only 256 elementary cellular automata. However, the number of CA rules grows enormously with another color or two, and grows further still when all possible initial conditions are considered.
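The arithmetic behind that claim, under the standard convention of k colors and a neighborhood of 2r + 1 cells:

def rule_count(k, r=1):
    # There are k ** (2*r + 1) distinct neighborhoods, and a rule may send
    # each one to any of k colors, giving k ** (k ** (2*r + 1)) rules.
    return k ** (k ** (2 * r + 1))

print(rule_count(2))   # 256 elementary rules (2 colors, radius 1)
print(rule_count(3))   # 7625597484987 rules with a third color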

So what we are describing, with the aid of Wolfram's graphs, is deterministic complexity, which differs from the concept of chaos more on a philosophical plane than a mathematical one.

We see that, depending on the criteria chosen, CA graphs differ in their entropy after an evolution of n steps, and differ as well in their entropy at the infinite limit. Each graph is asymptotic toward some entropy quantity; by no means does every graph converge toward the maximum entropy defined by a truly random pattern.

So we may conclude that, as Wolfram argues, simple instructions can yield highly complex fields. The measure of complexity is simply the quantity of information in a graph or subgraph defined by our basic criteria. And what do we mean in this context by information? If we went through all n steps of the rule and examined the sequence of colors in, for example, row n, the information content would be 0, because we have eliminated the uncertainty.

If, however, we don't examine how row n's sequence was formed, then we can estimate the probability of such a sequence and assign it a corresponding information value. At this point we must beware: complete aperiodicity of cell colors in row n is NOT identical with maximum entropy of row n. Think of asking a high school student to simulate 100 coin flips by haphazardly writing down a sequence of 0s and 1s. If one then submits the sequence to an analyst, he or she is very likely to discover that the sequence was not produced randomly, because most people avoid typical sub-sequences such as 0 recurring six times consecutively.
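One simple test the analyst might apply: in 100 genuinely fair flips, the longest run of identical symbols is typically around log2(100), roughly six or seven, and human-invented sequences usually fall short of that. A hedged sketch:

import math
import random

def longest_run(bits):
    # Length of the longest block of consecutive identical symbols.
    best = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

flips = [random.randint(0, 1) for _ in range(100)]
print(longest_run(flips), "vs. roughly", round(math.log2(100)))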

So then, true randomness (again, we can use our quantum measuring device), which corresponds to maximum entropy, is very likely to differ significantly from computed chaos. This fact is easily seen if one realizes that the set of aperiodic computable irrational numbers is merely countable, whereas the set of random digit sequences is uncountable. Still, it must be said that the foregoing observation doesn't mean there is always a practical test available to distinguish a pseudorandom sequence from a random sequence.

We might also think of deterministic complexity via curves over standard axes, with any number of orthogonal axes we like. Suppose we have the curve y = x. Because y never differs from x, there is effectively no information in the curve: no work is required to determine f(x) from x. The information in y = 2x is low because minimal work (as counted by the number of simple steps in the most efficient algorithm known) is required to determine g(x) from x. Somewhat more information is found in values of h(x) = x^2, because the computation is slightly slower.

A curve whose values hold maximum information -- implying the most work to arrive at an arbitrary value -- would be one whereby the best method of determining f(x+k) requires knowledge of the value f(x). Many recursive functions fit this category. In that case, we would say that a computed value whose computational work cannot be reduced below the n steps of the recursive function or iterative algorithm holds maximum information (if we don't do the work).
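Iterated maps give a concrete instance. For the logistic map in its chaotic regime (r = 3.8 is my arbitrary choice; the special case r = 4 has a known closed form, so I avoid it), no general shortcut to the n-th value is known other than performing all n iterations, which is what Wolfram calls computational irreducibility. A sketch:

def logistic_value(x0, n, r=3.8):
    # Iterate x -> r * x * (1 - x) n times. In the chaotic regime no
    # generally faster route to the n-th value is known than doing
    # every step: Wolfram's "computational irreducibility."
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

print(logistic_value(0.2, 1000))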

So let's say we have the most efficiently arranged sieve of Eratosthenes for producing the sequence of primes. On an xyz grid, we map this discrete curve z = f(y) over y = x^2, using only integer values of x. Now suppose we perceived this system in some other way: we might conclude that a chaotic system shows some underlying symmetry.
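For concreteness, a minimal sieve of Eratosthenes (no claim that this is the optimally arranged variant the thought experiment imagines):

def primes_up_to(n):
    # Classic sieve: cross off the multiples of each prime in turn.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [p for p, flag in enumerate(is_prime) if flag]

print(primes_up_to(50))   # [2, 3, 5, 7, ..., 47]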

It is also possible to conceive of two maximally difficult functions mapped onto each other. But there's a catch: there is no overall increase in complexity. That is, if f(x) is at maximum complexity, g(f(x)) cannot be more complex -- though it could conceivably be less so.

This conforms to Wolfram's observation that adding complexity to rules does little to increase the complexity of a CA.

Now what about the idea of "phase transitions" whereby order suddenly emerges from disorder? Various experiments with computer models of nonlinear differential equations seem to affirm such possibilities.

Wolfram's A New Kind of Science shows several of what I call catastrophic phase transitions, whereby high entropy rapidly follows a "tipping point" defined by a small number of rows. Obviously one's perspective is important: a (notional) graph with 10^100 iterations could have a "tipping point" composed of millions of rows.
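One hedged way to make "tipping point" operational: compute a per-row entropy estimate and report the first row that crosses a threshold. This assumes the block_entropy function sketched earlier; both the threshold and the block length are arbitrary choices of mine.

def tipping_row(rows, threshold=0.9, k=3):
    # First row whose per-cell block entropy exceeds the threshold;
    # assumes the block_entropy function sketched earlier in this post.
    for i, row in enumerate(rows):
        if block_entropy(row, k) > threshold:
            return i
    return None   # the graph never crosses the threshold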

Wolfram points out that minor asymmetries in a high entropy graph up to row n are very likely to amplify incrementally -- though the rate of change (which can be defined in several ways) can be quite rapid -- into a "complex" graph after row n. I estimate that these are low entropy graphs, again bearing in mind the difference between true randomness and deterministic chaos or complexity: the entropies in most cases differ.

What we arrive at is the strong suggestion -- that I have not completely verified -- that a high information content in a particular graph could easily be indicative of a simple local rule and does not necessarily imply an externally imposed design [or substitution by another rule] inserted at some row n.

However, as would be expected, the vast majority of Wolfram's graphs are high-entropy affairs -- no matter what criteria are used -- and this fact conforms to the anthropic observation that the cosmos is, in toto, a low entropy configuration, in that most sets of the constants of physical law would yield dull, lifeless universes.

I should note that A New Kind of Science also analyzes the entropy issue, but with a different focus. In his discussion of entropy, Wolfram deploys graphs that are "reversible." That is, the rules are tweaked so that the graph mimics the behavior of reversible physical processes. He says that rule 37R shows that the trend of increasing entropy is not universal, because that graph oscillates between higher and lower entropy indefinitely. However, one must be specific as to what information is being measured. If the entropy of the entire graph up to row n is measured, then the quantity can change with n. But the limiting value as n goes to infinity is a single number. It is true, of course, that this number can differ substantially from the limiting entropy of another graph.
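A hedged sketch of the construction behind the "R" rules as I understand it: the ordinary elementary-rule output is XORed with the row before last, which makes every step invertible. Rule 37 is plugged in below, though this is my reconstruction of the second-order scheme, not a reproduction of Wolfram's figures.

def evolve_reversible(rule, prev, cur, steps=40):
    # Second-order "R" construction: new cell = elementary-rule output
    # XOR the cell two rows back, so any two consecutive rows determine
    # the graph in both directions (the process is reversible).
    table = [(rule >> i) & 1 for i in range(8)]
    w = len(cur)
    out = [prev, cur]
    for _ in range(steps):
        nxt = [table[(cur[(i - 1) % w] << 2) | (cur[i] << 1) | cur[(i + 1) % w]]
               ^ prev[i]
               for i in range(w)]
        prev, cur = cur, nxt
        out.append(nxt)
    return out

seed = [0] * 41
seed[20] = 1
graph = evolve_reversible(37, [0] * 41, seed)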

Also, even though the graphs display entropy, the entropy displayed by physical systems assumes energy conservation, and Wolfram's graphs do not model energy conservation, though I have toyed with ways in which they might.

The discussion above is all about classical models arranged discretely, an approach that appeals to the computer science crowd and to those who argue that quantum physics militates against continuous phenomena. However, I have deliberately avoided deep issues posed by the quantum measurement/interpretation problem that might raise questions as to the adequacy of any scientific theory for apprehending the deepest riddles of existence.

It should be noted that there is a wide range of literature on what the Santa Fe Institute calls "complexity science" and others sometimes call "emergence of order." I have not reviewed much of this material, though I am aware of some of the principal ideas.

A big hope for the spontaneous order faction is network theory, which shows some surprising features as to how orderly systems come about. However, I think that Wolfram graphs suffice to help elucidate important ideas, even though I have not concerned myself here with A New Kind of Science's points about networks and cellular automata.
