Freaky facts about 9 and 11
Scroll down to Force 1089 for a fun psychic power trick
It's easy to come up with strange coincidences regarding the numbers 9 and 11. See, for example,
http://www.unexplained-mysteries.com/forum/index.php?showtopic=56447
How seriously you take such peculiarities depends on your philosophical point of view. A typical scientist would respond that such coincidences are fairly likely: with p/q the probability of an event, the chance that it never occurs in n independent trials is (1 - p/q)^n, which shrinks toward zero as n grows. So, given enough opportunities, the probability of "bizarre" classically independent coincidences is fairly high.
But you might also think about Schroedinger's notorious cat, whose live-dead iffy state has yet to be accounted for by Einsteinian classical thinking, as I argue in this longish article:
http://www.angelfire.com/ult/znewz1/qball.html
Elsewhere I give a mathematical explanation of why any integer can be quickly tested to determine whether 9 or 11 is an aliquot divisor.
http://www.angelfire.com/az3/nfold/iJk.html
Here are some fun facts about divisibility by 9 or 11.
# If integers k and j both divide by 9, then the integer formed by stringing k and j together also divides by 9. One can string together as many integers divisible by 9 as one wishes to obtain that result.
Example:
27, 36, 45, 81 all divide by 9
In that case, 27364581 divides by 9 (the quotient being 3040509).
# If k divides by 9, then all the permutations of k's digit string form integers that divide by 9.
Example:
819/9 = 91
891/9 = 99
198/9 = 22
189/9 = 21
918/9 = 102
981/9 = 109
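For readers who care to check such claims mechanically, here is a minimal Python sketch (mine, not part of the original argument) that verifies the stringing and permutation facts via the digit-sum test:

    from itertools import permutations

    def digital_root(n):
        # Repeatedly sum the decimal digits until a single digit remains.
        while n > 9:
            n = sum(int(d) for d in str(n))
        return n

    # Stringing together multiples of 9 yields another multiple of 9.
    print(int("27" + "36" + "45" + "81") % 9)   # 0
    # Every permutation of a multiple of 9 is again a multiple of 9.
    print(all(int("".join(p)) % 9 == 0 for p in permutations("819")))   # True
    # The repeated digit sum of a multiple of 9 is 9 itself.
    print(digital_root(72936))   # 9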
# If an integer does not divide by 9, it is easy to form a new integer that does so by a simple addition of a digit.
This follows from the method of checking divisibility by 9. To wit, we add all the numerals to see whether they sum to 9. If the sum exceeds 9, those numerals are again added, and this process is repeated as many times as necessary to obtain a single digit; the original number divides by 9 exactly when that final digit is 9.
Example a.:
72936. 7 + 2 + 9 + 3 + 6 = 27. 2 + 7 = 9
Example b.:
Number chosen by random number generator:
37969. 3 + 7 + 9 + 6 + 9 = 34. 3 + 4 = 7
Hence, all we need do is include a 2 somewhere in the digit string.
372969/9 = 41441
Mystify your friends. Have them pick any string of digits (say four) and then you silently calculate (it looks better if you don't use a calculator) to see whether the number divides by 9. If so, announce, "This number divides by 9." If not, announce the digit needed to make an integer divisible by 9 (2 in the case above) and then have your friend place that digit anywhere in the integer. Then announce, "This number divides by 9."
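For the silent calculation, a small hedged sketch (the function name is mine):

    def digit_to_add(n):
        # The digit sum mod 9 tells how far n falls short of a multiple of 9.
        r = sum(int(d) for d in str(n)) % 9
        return (9 - r) % 9 or 9   # if n already divides by 9, a 9 also works

    print(digit_to_add(37969))   # 2
    print(372969 % 9)            # 0 -- the 2 may be placed anywhere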
In the case of 11, doing tricks isn't quite so easy, but possible.
We check whether a number divides by 11 by adding alternate digits as positive and negative. If the sum is zero, the number divides by 11. If the sum's magnitude exceeds 9, we again add its numerals with alternating signs, so that a sum of 11 or 77 or the like will zero out.
Let's check 5863.
We sum 5 - 8 + 6 - 3 = 0
So 5863 divides by 11 (5863 = 11*533). But, unlike with 9, we can't scramble 5863 just any way and have the result still divide by 11.
However, we can scramble the positively signed numbers or the negatively signed numbers however we please and find that the number still divides by 11.
6358 = 11*578
We can also string numbers divisible by 11 together and the resulting integer is also divisible by 11.
253 = 11*23, 143 = 11*13
143253 = 11*13023
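A sketch of the alternating-digit test, under the convention that the leftmost digit counts as positive (the function name is mine):

    def alt_sum(n):
        # Alternating sum of the decimal digits, leftmost taken positive.
        return sum((-1) ** i * int(d) for i, d in enumerate(str(n)))

    print(alt_sum(5863), 5863 % 11)   # 0 0
    print(alt_sum(6358), 6358 % 11)   # 0 0 -- positive digits scrambled
    print(int("143" + "253") % 11)    # 0 -- concatenated multiples of 11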
Now let's test this pseudorandom number:
70517. The alternating sum is 7 - 0 + 5 - 1 + 7 = 18.
We need to get a -18, so any appended digit string whose alternating contribution is -18 will do. The easiest way in this case is to replicate the integer and append it: the number has an odd count of digits, so each positive numeral is then paired with its negative.
7051770517/11 = 641070047
Now let's do a pseudorandom 4-digit number:
4556. 4 - 5 + 5 - 6 = -2. Hence 45562 must divide by 11 (the quotient being 4142).
Sometimes another trick works.
5894. 5 - 8 + 9 - 4 = 2. So we need a -2, which, in this case, can be had by appending 02, ensuring that the 2 is found in the negative sum.
Check: 589402/11 = 53582
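The appending tricks can be gathered into one hedged sketch; it assumes, as in these examples, that an even-length number's alternating sum fits in a single digit:

    def alt_sum(n):
        return sum((-1) ** i * int(d) for i, d in enumerate(str(n)))

    def extend_for_11(n):
        # Extend n on the right until the alternating digit sum is 0.
        s, k = alt_sum(n), len(str(n))
        if s == 0:
            return n
        if k % 2 == 1:                      # odd length: append a copy, so each
            return int(str(n) * 2)          # digit pairs with its negative
        if s < 0:                           # one appended digit counts positive
            return int(str(n) + str(-s))
        return int(str(n) + "0" + str(s))   # append "0s": the s counts negative

    for n in (70517, 4556, 5894):
        m = extend_for_11(n)
        print(n, "->", m, m % 11)           # ... 0 in each case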
Let's play with 157311.
Positive digits are 1, 7, 1
Negative digits are 5, 3, 1
Positive permutations are
117, 711, 171
Negative permutations are
531, 513, 315, 351, 153, 135
So integers divisible by 11 are, for example:
137115 = 11*12465
711315 = 11*64665
Sizzlin' symmetry
There's just something about symmetry...
To form a number divisible by both 9 and 11, we play around thus:
Take a number, say 18279, divisible by 9. Note that it has an odd number of digits, meaning that its copy can be appended such that the resulting number 1827918279 pairs each positive digit with its negative, so the alternating sum is 0. Hence 1827918279/11 = 166174389, and 1827918279/9 = 203102031. Note that 18279/9 = 2031.
We can also write 1827997281/11 = 166181571 and that number divided by 9 equals 203110809.
Suppose the string contains an even number of digits. In that case, we can write, say, 18271827 and find it divisible by 9 (the quotient being 2030203). But it won't divide by 11, in that the positives pair with positive clones, and likewise the negatives. This is resolved by using a 0 for the midpoint. Thence 182701827/11 = 16609257. And, by the rules given above, 182701827 is divisible by 9, the quotient being 20300203.
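Both constructions are easy to verify directly; a brief sketch:

    # Odd digit count: appending a copy pairs each digit with its
    # negative, so the result divides by 11 -- and still by 9.
    n = int("18279" * 2)
    print(n % 9, n % 11)             # 0 0

    # Even digit count: insert a 0 at the midpoint before appending.
    m = int("1827" + "0" + "1827")
    print(m % 9, m % 11)             # 0 0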
Force 1089
An amusing property of the numbers 9 and 11 in base 10 arithmetic is demonstrated by Force 1089, the name of a trick you can use to show off your "mentalist" powers.
"It is important
that you clear your mind," you say to your target. "Psychic researchers
have found that a bit of simple arithmetic will help your subconscious
mind to focus, and create a paranormal atmosphere."
You then instruct the person to choose any three-digit number whose first and last digits differ by at least 2. "Please do not show me the number or your calculations," you instruct, handing the person a calculator to help assure they don't flub the arithmetic.
The target is then told to reverse the digit order and subtract the smaller from the larger number. Take that difference and reverse its digit order. Then add those two numbers.
You toss a couple of books over to the target and say, "Be careful to conceal your number. Now use the first three digits to find a page in one of the books you choose." Once the page is found, you instruct the person to use the last digit to count words along the top line and stop once reaching that number.
"Now, PLEASE, help me, and concentrate hard on the word you read."
After a few moments, you announce -- of course -- the exact word your target is looking at.
Your secret is that the algorithm always yields the constant 1089, meaning you only have two words to memorize.
These symmetries always, in base 10, produce numbers divisible by 9 and 11 (or 99), even though the first number may not be so divisible.
Consider 854-458 = 396. 396+693 = 1089.
Further, 396 = 4*99; 693 = 7*99; 1089 = 9*11^2.
If the first and last digits differ by 0, of course, the algorithm zeros out at the next step. If they differ by 1, the subtraction yields 99, as in 100 - 001 = 99, which turns out to be a constant at this step.
Consider 433-334 = 99.
In the case of a number with 2n digits, we discover that the algorithm always yields numbers divisible by 9. However, for 2n+1 digits, the algorithm always yields a set of numbers divisible by 99. BUT, it does not yield a constant.
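A brute-force check of these claims (a sketch; the function name is mine, and the 2n and 2n+1 cases are merely tested here, not proved):

    def reverse_add(n, width):
        # Reverse the digits (padded to `width`), subtract the smaller
        # from the larger, then add the difference to its own reversal.
        rev = int(str(n).zfill(width)[::-1])
        diff = abs(n - rev)
        return diff + int(str(diff).zfill(width)[::-1])

    # 3-digit numbers whose end digits differ by at least 2: always 1089.
    print({reverse_add(n, 3) for n in range(100, 1000)
           if abs(n // 100 - n % 10) >= 2})                               # {1089}
    # 4-digit results all divide by 9; 5-digit results all divide by 99.
    print(all(reverse_add(n, 4) % 9 == 0 for n in range(1000, 10000)))    # True
    print(all(reverse_add(n, 5) % 99 == 0 for n in range(10000, 100000))) # True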
(I have not taken the trouble to prove rigorously the material on Force 1089.)
Ah, wonderful symmetry.
Tuesday, July 24, 2012
Mathematics of evolution: a note on Chaitin's model
In his book Proving Darwin: Making Biology Mathematical (Knopf Doubleday 2012), Gregory Chaitin offers a "toy model" to demonstrate that progressive evolution works in principle. The idea is that DNA behavior is very similar to that of computer software.
http://pantheon.knopfdoubleday.com/2012/05/08/proving-darwin-by-gregory-chaitin/
Computation of numbers then becomes the work of his evolution system. A mathematician, he finds, can "intelligently design" numbers that get very close to a Busy Beaver number with work growing linearly in n. An exhaustive search of all possible computable numbers under some BB(n) requires exponential work. But, he found, his system of a climbing random walk arrived at numbers close to BB(n) with work on the order of n^2.
Chaitin posits a Turing-style oracle to sieve out the "less fit" (lower) numbers and the dud algorithms (those that get stuck without producing a number). The oracle represents the filtering of natural selection.
Caveat:
His system requires random mutations of relatively high information value. These "algorithmic mutations" alter multi-node sets. Point mutations, he found, were unproductive. Hence, he has not succeeded in demonstrating how the DNA system itself might have evolved.
Chaitin says he was concerned that there existed no mathematical justification for evolution. But, this assertion gives pause. The existence of the universal Turing machine would seem to demonstrate that progressive evolution is possible, though not necessarily highly probable. But granting that Chaitin was focused on probability, we can agree that if a system is of a high enough order, the probability of progressive evolution is strong. So in that respect, one may agree that Darwin has been proved right.
However, there's an old saying among IT people: "Garbage in, garbage out." The probability that random inputs or alterations will yield increased functionality is remote. One cannot say that Darwin has been proved right about the origin of life.
Remark
It must be acknowledged that in microbiological matters, probabilities need not always follow a routine independence multiplication rule. In cases where random matching is important, we have the number 0.63 turning up quite often.
For example, if one has n addressed envelopes and n identically addressed letters are randomly shuffled and then put in the envelopes, what is the probability that at least one letter arrives at the correct destination? The surprising answer is the sum 1 - 1/2! + 1/3! - ... ± 1/n!. For n greater than 10, the probability has already converged to about 63%.
That is, we don't calculate, say, 11^-11 (about 3.5x10^-12), or some routine binomial combinatorial multiple; rather, our series approximates very closely 1 - e^-1 = 0.63.
Similarly, suppose one has eight distinct pairs of socks randomly strewn in a drawer and thoughtlessly pulls out six one by one. What is the probability of at least one matching pair?
The first sock has no match. The probability the second will fail to match the first is 14/15. The probability for the third failing to match is 12/14 and so on until the sixth sock. Multiplying all these probabilities to get the probability of no match at all yields 32/143. Hence the probability of at least one match is 1 - 32/143 or about 78%.
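Both figures are easy to reproduce; a quick sketch:

    from math import factorial

    # Envelopes: P(at least one of n letters lands in its own envelope)
    # = 1 - 1/2! + 1/3! - ... (n terms, signs alternating).
    def p_some_match(n):
        return sum((-1) ** (k + 1) / factorial(k) for k in range(1, n + 1))

    print(p_some_match(10))     # about 0.6321 -- essentially 1 - 1/e

    # Socks: 8 pairs, 6 pulls; multiply the chances of each successive miss.
    p_no_pair = (14/15) * (12/14) * (10/13) * (8/12) * (6/11)
    print(1 - p_no_pair)        # about 0.7762 -- roughly 78%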
These are minor points, perhaps, but they should be acknowledged when considering probabilities in an evolutionary context.
Saturday, June 2, 2012
Periodicity 'versus' randomness
A few observations, with no claim to originality
Please also see a more recent post on the probabilities of periodicity
http://kryptograff5.blogspot.com/2013/08/draft-1-please-let-me-know-of-errors.html
We do have a baseline notion of randomness, I suggest. Within constraints, the clicks of a Geiger counter occur, physicists believe, at truly random intervals. An idealized Geiger counter hooked up to an idealized printer might -- assuming some sort of time compression -- print out a noncomputable transcendental. The string's destiny as a noncomputable has probability 1, though, from a human vantage point, one could never be sure the string wasn't destined to be a computable number.
Stephen Wolfram's New Kind of Science, on the other hand, tends to see randomness in terms of pseudorandomness or effective randomness. In my view, an effectively random digit string is one that passes various statistical tests for randomness (most of which are based on the normal approximation of binomial probabilities, which are natural to use with binary strings).
Now, if presented with a finite segment of, say, a binary string, we have no way of knowing, prima facie, whether the string represents a substring of some random string (which we usually expect also to be of finite length). Perhaps the longer string was published by a quantum noise generator. Even so, inspection of the substring may well yield the suspicion that a nonrandom, deterministic process is at work.
Now if one encounters, absent a Turing machine (TM) executor or equivalent, the isolated run 0 0 0 0 0 0 0 0 0 0, one is likely to suspect nonrandomness. Why so? We are doubtless assuming that truly random binary digits occur with a probability of 1/2 and so our belief is that a run of ten zeros is too far from the expectation value of five zeros and five ones. Even so, such a run by itself may simply imply a strong bias for 0, but an otherwise nondeterministic process and so we need to filter out the effect of bias to see whether a process is largely deterministic.
We are much less likely to see the string 0101010101 as anything other than almost surely deterministic, regarding it as a strong candidate for nonrandomness. If we use independence of event (digit on space m) as a criterion, the probability of such a string is 2^(-10), or one chance in 1024 -- the same probability as for any other permutation. Still, such a quantification doesn't tell the whole story. If you predict the sequence 10011110101, you have one chance in 1024 of being right, but if instead you predict some sequence that contains six 0s and four 1s, then you'll find that the relevant set contains 210 strings, yielding a probability that you are correct of 210x2^(-10), or better than 20 percent.
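The counting behind those figures, in a one-glance sketch:

    from math import comb

    # Any one specific length-10 binary string: 1 chance in 2^10.
    print(2 ** 10)                    # 1024
    # Strings containing exactly six 0s and four 1s:
    print(comb(10, 4))                # 210
    print(comb(10, 4) / 2 ** 10)      # about 0.205 -- better than 20 percent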
So why do we regard 0101010101 as having a strong probability of nonrandom generation? Because it is part of a small subset of permutations with what I call symmetry. In this case, the symmetry accords with periodicity, or 10 ≡ 0 (mod 2), to be precise.
Is the sequence 100111010 likely to be the result of a random process? A quick inspection leaves one uncertain. This is because this particular string lacks any obvious symmetry. The first sequence is a member of a small subset of length 10 strings that are periodic or "nearly periodic" (having a small non-zero remainder) or that have other symmetries. Many strings, of course, are the result of deterministic processes and yet display no easily detected symmetry. (We might even consider "pseudo-symmetries" in which "periods" are not constant but are polynomial or exponential; some such strings might escape statistical detection of nonrandomness.)
Consider 01001001001
In this case we may regard the string as a candidate for nonrandomness based on its near periodicity of 11 ≡ 2 (mod 3), which includes three full periods.
Consider 0110101101011
The sequence is too short for a runs test, but we may suspect nonrandomness, because we have the period 01101, giving 13 ≡ 3 (mod 5).
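A sketch of a period finder in this spirit, treating a string as periodic or nearly periodic when it consists of repeats of its opening block, the last repeat possibly cut short (the function is my own illustration):

    def periods(s):
        # Return (block length d, remainder len(s) mod d) for each d such
        # that s is built from repeats of its first d characters.
        n = len(s)
        return [(d, n % d) for d in range(1, n // 2 + 1)
                if all(s[i] == s[i % d] for i in range(n))]

    print(periods("01001001001"))     # [(3, 2)] -- period 010, remainder 2
    print(periods("0110101101011"))   # [(5, 3)] -- period 01101, remainder 3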
We see here that strings of prime length have no periods of the form a ≡ 0 (mod c), and hence the subset of "symmetrical" substrings is smaller than for a nearby composite length. So prime-length strings are in general somewhat more likely to look random.
To get an idea of the smallness of the set of exactly periodic strings, observe that a nine-digit binary string of form 101,XXX,XXX, where the last six digits comprise four 1s and two 0s, admits 15 arrangements of those six digits, only one of which yields the periodic string 101101101. By similar reasoning, we see that subsets of near-periodic strings are relatively small, as long as the remainder is small with respect to length n. It might be handy to find some particular ratio m/n -- remainder m over string length n -- that one uses to distinguish a union of the periodic and near-periodic subsets, but I have not bothered to do so.
Aside from periodicity and near-periodicity, we have what I call "flanking symmetry," which occurs for any string length. To wit:
0001000
or
0111110
And then we have mirror symmetry (comma included for clarity):
01101,10110
which is equivalent to two complete periods (no remainder) but with the right sequence reversing the order of the left.
We might try to increase the symmetries by, let's say, alternating mirror periods. But note that
0101,1010,0101 is equivalent to 010110,100101
and so there is no gain in what might loosely be called complexity.
Speaking of complexity, what of patterns such that g(n) is assigned to digit 0 and f(n) to digit 1 as a means of determining run lengths? In that case, as is generally true for decryption attacks, length of sample is important in successful detection of nonrandomness.
We may also note the possibility of a climbing function g(0,m) = run length x, alternating with a substring of constant length y, each of which is composed of pseudorandom (or even random) digits, as in
0,101101,00,111001,000...
In this case, we have required that the pseudorandom strings be bracketed by the digit 1, thus reducing the statistical randomness, of course. And, again, sample size is important.
That is, though 010010001 has a whiff of nonrandomness, when one sees 010010000100001000001, one strongly suspects two functions. To wit, f(x) = k = 1 for runs of 1s and g(x) = next positive integer for runs of 0s. Though a human observer swiftly cognizes the pattern in the second string, a runs test would reveal a z score pointing to nonrandomness.
So let us use the periodicity test to estimate the probability of a deterministic process thus: For n = 20, we have the four aliquot factors 2, 4, 5, 10. The permutations of length 2 are 00, 01 and their mirrors 11, 10, for 4 strings of period 2. For factor 4, we have 4C2 = 6, yielding, with mirrors, 12 strings of period 4. For factor 5, we have 5C2 = 10, yielding, with mirrors, 20 strings of period 5. For factor 10, we have 10C2 = 45, yielding, with mirrors, 90 strings of period 10. So we arrive at 4 + 12 + 20 + 90 = 126 periodic strings which, set against the 2^20 = 1,048,576 equiprobable strings of length 20, gives roughly one chance in 8,300 that a random string is periodic in this sense, if we consider every period to be equiprobable. This concession however may sometimes be doubtful. And, the probability of nonrandomness is also affected by other elements of the set of symmetries discussed above. And of course there are standard tests, such as the runs and chi square tests, that must be given due consideration.
Now consider
00000000001111111111
Why does this string strike one as nonrandom? For one thing it is "mirror periodic," with a period of 2. However, one can also discern its apparent nonrandomness using a runs test, which yields a high z score. The advantage of a runs test is that it is often effective on aperiodic strings (though this string doesn't so qualify). Similarly, a goodness of fit test can be used on aperiodic strings to detect the likeliness of human tweaking. And one might, depending on context, apply Benford's law (see http://mathworld.wolfram.com/BenfordsLaw.html ) to certain aperiodic strings.
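For concreteness, here is the runs test in its Wald-Wolfowitz form (normal approximation), applied to the string above and to its alternating cousin:

    from math import sqrt

    def runs_z(bits):
        # z score for the number of runs (maximal blocks of like symbols)
        # against the null hypothesis of random arrangement.
        n1, n2 = bits.count("1"), bits.count("0")
        n = n1 + n2
        runs = 1 + sum(a != b for a, b in zip(bits, bits[1:]))
        mu = 2 * n1 * n2 / n + 1
        var = (mu - 1) * (mu - 2) / (n - 1)
        return (runs - mu) / sqrt(var)

    print(runs_z("00000000001111111111"))   # about -4.1: far too few runs
    print(runs_z("01010101010101010101"))   # about +4.1: far too many runs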
So it is important to realize that though a small set of symmetrical strings of length n exists whose members are often construed as indicative of nonrandom action, there exists another small set of aperiodic strings of the same length whose members are considered to reveal traits of nonrandom action.
For example, a runs test of sufficiently large n would disclose the nonrandom behavior of Liouville's transcendental number.
Worthy of note is yet another conception of randomness, which is encapsulated by the question: how often does the number 31 appear in an infinite random digit string? That is, if an infinite digit string is formed by a truly random process, then a substring that might occur once in a billion digit spaces, would have probability 1 of recurring infinitely many times.
That is, in base 10, "31" reaches a 95% probability of a first occurrence within about 300 digit spaces. In base 2, "31" is expressed "11111," and a 95% probability of a first occurrence comes within about 200 spaces. Similarly, in an infinite string, we have probability 1 that a run of 10^(100^100) zeros will recur not just once, but infinitely often. Yet if one encountered a relatively short run of, say, 10^10 zeros, one would be convinced of bias and argue that the string doesn't pass statistical randomness tests.
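Such first-occurrence quantiles are easy to estimate empirically; a Monte Carlo sketch (the parameters are mine):

    import random

    def wait_for(pattern="31", base=10):
        # Count digits generated before the pattern first appears.
        window, count = "", 0
        while pattern not in window:
            window = (window + str(random.randrange(base)))[-len(pattern):]
            count += 1
        return count

    # Empirical 95th-percentile waiting time over many trials.
    waits = sorted(wait_for() for _ in range(10_000))
    print(waits[int(0.95 * len(waits))])    # near 300 for "31" in base 10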
The idea that such strings could occur randomly offends our intuition, which is to say the memory of our frequency ratios based on everyday empirical experience. However, if you were to envision an infinity of generators of truly random binary strings lined up in parallel, with strings stretching from where you stand to the horizon, there is probability 0 that you happen upon such a stupendous run as you wander along the row of generators. (Obviously, probability 1 or 0 doesn't mean absolute certainty. There is probability 1 that Goldbach's conjecture is true, and yet perhaps there is some case over infinity where it doesn't hold.)
Concerning whether "31" recurs infinitely often, one mathematician wrote: "The answer lies in an explicit definition of the term 'infinite random digit string.' Any algorithm for producing the digit string will at once tell you your answer and also raise the objection that the digit string isn't really random. A physical system for producing the digits (say, from output of a device that measures quantum noise) turns this from a mathematical into an experimental question."
Or, perhaps, a philosophical question. We know, according to Cantorian set theory -- as modified by Zermelo, Fraenkel and von Neumann (and including the axiom of choice) -- that there is a nondenumerable infinity of noncomputable numbers. So one of these noncomputables, R0 -- which could only be "reached" by some eternal randomization process -- does contain a (countable) infinity of 31s. But this means there is also a number Ra that contains the same infinite string as R0 except for lacking all instances of the substring denoted 31. Of course, one doesn't know with certainty that a randomization process yields a noncomputable number: R0 might be noncomputable and Ra computable. Even so, such a procedure might yield a noncomputable Ra. So we see that there is some noncomputable number where the string "31" never shows up.
If we regard Zermelo-Fraenkel set theory as our foundation, it is pretty hard to regard an element of the noncomputables as anything but a random number.
The next question to ponder is whether some pseudorandom aperiodic string mimics randomness well enough so that we could assume probability 1 for an infinity of 31s. Plenty of people believe that the pi decimal extension will eventually yield virtually every finite substring an infinity of times. And yet it is not even known whether this belief falls into the category of undecidable theorems. And even if pi could be shown to be "universal," there is no general way of determining whether an arbitrary program is universal.
No discussion of randomness can readily ignore the topic of Bayesian inference, which, despite a history of controversy, is used by philosophers of science to justify the empirical approach to establishment of scientific truth. Recurrence of well defined events is viewed as weight of evidence, with probability modified as evidence accumulates. Such thinking is the basis of "rules of succession," so that a sequence of, say, five 1s would imply a probability of (m+1)/(m+2) -- with m = 5, about 86% -- that the next digit will also be a 1.
In this case, a uniform a priori distribution is assumed, as is dependence. So here, Bayesian inference is conjecturing some deterministic process that yields a high degree of bias, assumptions which impose strong constraints on "background" randomness. (Interestingly, Alan Turing used Bayesian methods in his code-breaking work.)
A longer discussion of Bayesian ideas is found in the appendix of my essay, The Knowledge delusion, found at http://kryptograff5.blogspot.com/2011/11/draft-03-knowledge-delusion-essay-by.html
Monday, May 7, 2012
In death's borderland
By PAUL CONANT
--------------------------------------------------------------------------------
Time: 12:30 a.m. to 6:30 p.m. Sept. 12, 2001.
Place: In the Manhattan buffer zone uptown from the twin towers catastrophe.
---------------------------------------------------------------------------------
When I first arrived in the buffer zone between 14th and Houston streets, surrealistic scenes greeted me.
Police cars in motion covered with ash and dust; a convoy of giant earth movers filled with skyscraper rubble; emergency rescue vehicles on unspecified missions.
No one was afoot except for me and a few drunks, addicts and homeless persons.
At the key intersection of Houston and 6th Av. (also known as the Avenue of the Americas) I shared a bench with a homeless woman, watching as emergency vehicles came and went, convoys of dump trucks were deployed and city buses ferried police, firefighters, volunteers and construction workers in and out of the death zone.
I wandered up and down East Houston, noting the trucks laden with scaffolding parked and ready to roll. I stood on a footbridge over FDR Drive watching streams of emergency vehicles, some marked, some not, some with lights flashing and sirens blaring, some not, streaming in and out of Houston Street or heading around the FDR curve to approach the disaster from the toe of this forever-changed island.
In my wanderings, I frequently came across homeless men sleeping fitfully on sidewalks and loading docks, in jarring contrast to the more than 10,000 dead buried a few blocks away. Yet, I noticed around 4 a.m. that most residences in the buffer zone had all lights out, so I presumed that many New Yorkers must have simply gone to bed.
All I could think when watching the emergency activities was that New York should be glad of such efficiency and cool-headedness in response to this outrage.
Once dawn came, I saw groups of professionals hustling off toward West Street (aka the West Side Highway), apparently on their way to work.
Overheard snatches of conversation:
"We all know somebody who is dead," said a woman striding along with two men.
"We had a very, very close friend who was on the 92d floor," says a bearded man into a cellphone.
Cellphones of course were ubiquitous. People at Houston and 6th were using them to report details of what they were seeing. One man sitting on the by-now packed bench was reading his notes in French, most probably to an editor at the other end of his cellphone.
Channel 3 News from Hartford encamped at the intersection at about 5 a.m. and all day long conducted TV interviews with New Yorkers who had been at or near the catastrophe site.
By 10 a.m., the pace was picking up, as more and more New Yorkers ventured out, looking for newspapers (none delivered in the buffer zone), visiting neighbors and just plain looking around.
But it was eerie. A perfect summery day. The residents of the buffer zone were perhaps defiantly nonchalant. Those in the streets showed no trace of fear, spoke animatedly to one another and played with their children, doing their best to enjoy a very bad day. And they somehow were succeeding.
If the point was to terrorize the New Yorkers, I can tell you they were not at all terrorized. New Yorkers were well behaved. A few onlookers were a bit of a pain at the key intersections, but when one considers the number of people in New York, things went very well. And the harried police handled the onlookers good naturedly.
Among the contrasts:
Throngs of curious Manhattanites near the death zone acting as if it was a nice day off (and that feeling of confidence was quite clearly contagious); yet every once in a while silent rescue workers, individually and in small groups, would trudge past the police checkpoint and walk uptown. You knew who they were even if they were not in uniform. Their footgear was covered in ash. They trudged, stonefaced, staring straight ahead, overcome with exhaustion, both physical and emotional.
As a Vietnam veteran, I could identify with them somewhat. Once past the checkpoint, many in the crowds failed to notice them, but of course that didn't really matter.
Over on West Street, crowds made a point of cheering and applauding the rescue workers as they drove to and from the death zone. Many news trucks lined West Street. It gave one of the clearest views in Manhattan of the 'hole' in the skyline.
I couldn't help but wonder: is it wise to put so many people in one place? Do we really need skyscrapers anyway in this new economic era of computer teleconferencing?
I recall seeing a family walking their children up the bikepath, their little girl racing along gaily, clutching her dolly -- completely oblivious to the tower of smoke billowing up behind her.
Similar scenes played out in Washington Square Park where parents supervised toddlers laughing and giggling under the turtle sprinkler.
Looking downtown, three crosses atop churches abutting the park stood out in stark relief against the heavy pall of smoke.
I heard a helicopter whipping high overhead and was watching it for a while before I realized: I had heard it because there was no traffic making the usual Manhattan cacophony around the park.
There just wasn't enough noise in the air.
Saturday, March 17, 2012
Sinister brownouts of red troubles
(First published ca. 2001)
British authorities are weighing bringing charges against about 10 persons suspected of having been agents of the East German secret police, the Stasi, according to Stephen Grey and John Goetz of London's Sunday Times.
They were identified as the result of the cracking of a Stasi computer code by a computer buff who once lived under communist rule. However, not only British spies were exposed; agents operating in the United States and elsewhere were evidently exposed as well. Reputedly, the CIA has tried to thwart exposure of the red agents, whether in America or elsewhere, claiming it would compromise some secret operations.
The Sunday Times tells of concerns that the CIA is covering up for a nest of high-level traitors. It will be recalled that three ex-East German agents, including a lawyer working at the top rung of the Pentagon, have been convicted. Vladimir Putin, president of Russia, was Moscow's representative to the Stasi when the newly exposed agents were active. The question is, did this red network remain active, perhaps under Russian control, after the collapse of East Germany? Putin may have been advised that the code was uncrackable.
On another Putin matter, John Sweeney of Britain's Observer links Putin and security service allies to the Moscow bombings that were used as a reason for the Chechnya war.
The most interesting bomb was the one that didn't go off, the incident being bizarrely transformed into a "training exercise." News agencies and media serving America have virtually ignored the Stasi story, as if FBI interest -- or lack of it -- in communist networks in our government is ho-hum news in America.
And, investigative news reports on the Moscow bombings are difficult to come across, particularly in the United States. Other topics on ConantNews pages include China's nuclear espionage offensive and MI6's battle to censor the press, along with a discussion of the mathematics of Florida's presidential election. Links between pages have proved inadequate.
China's nuke spy war
May 20, 2001--The Chinese espionage uproar took a new turn when the Washington Post disclosed that the Pentagon and the CIA were blocking publication of a U.S. nuclear scientist's memoirs of his visits to Chinese nuclear arms facilities.
Danny B. Stillman, a retired Los Alamos scientist and intelligence analyst, made many visits to the Chinese nuclear program between 1990 and 1999 and simply asked fellow scientists what they were doing, wrote the Post's Steve Coll. Stillman felt that the Chinese were able to make strides in nuclear arms not because of espionage but because computers aided their task.
Rep. Curt Weldon, a member of the Cox committee which probed U.S. nuclear security issues, targeted Clinton's decision to permit sale of some 700 supercomputers to the Chinese. Weldon demanded a copy of Stillman's book from federal officials, along with supporting materials. You can read the Post story or Weldon's statement by hitting ConantNews features and then hitting the links 'Scientist fights gag order' or 'Clinton faulted on supercomputers.' This report might have gone online sooner had not the Washington Post's email news alert system been down when the Stillman and Weldon stories emerged.
Computer problems prevented me from adding a page with links to those stories and, rather than waste more time playing games, I leave you the addresses: Stillman story: www.washingtonpost.com/wp-dyn/articles/A29474-2001May15.html Weldon story: www.fas.org/sgp/congress/2001/h051601.html
PRESIDENT'S MEN CITE NUKE SPY WAR
The Associated Press's online articles about reported Chinese espionage and Los Alamos security woes include links to the congressional Cox report but not to a key White House report. The report of the President's Foreign Intelligence Advisory Board on nuclear security at Los Alamos portrays decades of incredible security negligence at the Los Alamos National Laboratory, where nuclear weapons are designed. This negligence continued in the face of repeated warnings from a variety of investigations, the report says.
The report, with the input of the FBI, CIA and other security arms, asserts that China has mounted a massive and highly successful spy war against our nuclear secrets. The report's appendix contains an eyebrow-raising chronology and damage assessment.
The New York Times, in its Feb. 4 and 5 editions, made good on its pledge to take a thorough look at the Wen Ho Lee affair. Times reporters noted that U.S. policy promoted fraternization of Chinese and American nuclear scientists, including those involved in the weapons program. In this climate, disinterest in security was rampant, it seems. Knowing the aggressiveness of Chinese intelligence, America would be foolish not to assume that the Chinese took full advantage of such neglect. Now the unpleasant question arises as to how many agents the communists have insinuated into America's weapons establishment. The Times did not address that question.
When axioms collide
(first published ca. 2002)
When axioms collide, a new truth emerges.
Here we discuss the Zermelo-Fraenkel infinite set axiom and its collision with the euclidean one-line-per-point-pair axiom. The consequence is another 'euclidean' (as opposed to 'non-euclidean') geometry that uses another, and equally valid, axiom, permitting an infinite number of lines per planar point pair. The ZF infinity axiom establishes a prototype infinite set, with the null set as its 'initial' element, and permits, for any element x', the successor set x'' = x' u {x'}. Infinite recursion requires a denumerable set.
Fraenkel appears to have wanted the axiom in order to justify the set N. (From the set N, it is then possible to justify nondenumerable infinite sets.) So the axiomatic infinite set permits both denumerable and nondenumerable infinite sets composed of elements with a property peculiar to the set. The ZF infinite set does not of itself imply existence of, say, N. But the axiom, along with the recursion algorithm f(n) = n + 1, which is a property that can be made one-to-one with the axiomatic set elements, does imply N's existence.
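As an aside not in the original exposition, the successor construction licensed by the infinity axiom can be sketched in executable form; the following Python fragment is merely illustrative, modeling sets as frozensets.

```python
# Illustrative only: the von Neumann-style successor x'' = x' u {x'},
# with sets modeled as Python frozensets and the null set as the seed.

def successor(x: frozenset) -> frozenset:
    """Return x u {x}, the successor set of x."""
    return x | frozenset([x])

null = frozenset()            # the 'initial' element
stages = [null]
for _ in range(3):
    stages.append(successor(stages[-1]))

print([len(s) for s in stages])  # -> [0, 1, 2, 3]: the naturals emerge
```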
So now let us graph a summation formula, such as zeta(-2), and draw a line L through each partial sum height y parallel to the x axis. That is, f(n) is asymptotic to (lim n->inf.)f(n). In other words, the parallels drawn through consecutive values of f(n) squeeze closer and closer together. At the limit, infinite density of lines is reached, where the distance between parallels is 0. Such a scenario, which might be called a singularity, is not permitted by the euclidean and pseudo-euclidean axiom of one line per point pair.
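To see the squeezing numerically, here is a rough sketch that assumes zeta(-2) denotes the series summing n^(-2), whose partial sums tend to pi^2/6; it prints the gap between consecutive parallels as n grows.

```python
# Sketch: partial sums f(n) of 1/n^2 climb toward pi^2/6, and the gap
# between the parallels drawn at f(n) and f(n+1) shrinks toward zero.
import math

def f(n: int) -> float:
    return sum(1.0 / k**2 for k in range(1, n + 1))

limit = math.pi**2 / 6
for n in (1, 10, 100, 1000):
    print(f"n={n:5d}  f(n)={f(n):.6f}  gap to next parallel={f(n+1)-f(n):.2e}"
          f"  distance to limit={limit - f(n):.2e}")
```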
Yet the set J = {a parallel to the x axis through zeta(-2)} is certainly bijective with the axiomatic infinite set. However, by euclidean axiom, the set's existence must be disregarded. The fact that ZF permits the set J to exist and that it takes another axiom to knock it out means that J exists in another 'euclidean' geometry where the one-line-per-point-pair axiom is replaced. We can either posit a newly found geometry or we can modify the old one. We retain the parallel postulate, but say that an infinitude of lines runs through two planar points.
Two infinite sets of lines would then be said to be distinct if there is non-zero distance between the sets of parallels positioned at (lim x->inf.)f(x) and (lim x->inf.)g(x). In addition, it is only the infinite subset of J with elements 0 distance apart that overlays two specific points. So we may say that an infinitude of lines runs through two planar points which are (geometrically) indistinct from a single line running through those two points.
So we axiomatically regard, in this case, an infinitude of lines to be (topologically?) equivalent to one line.
Bush plants old fox in FBI chicken coop
(first published 2001)
FEB 23 -- As the FBI mole scandal rocked Washington, President Bush, in a hastily convened news conference, yesterday called for "civil discourse" -- defending FBI Director Louis J. Freeh and upholding Freeh's appointment of William H. Webster, who formerly headed the FBI and CIA, to probe security problems at the bureau.
It has become clear since the spy case broke that the FBI was aware of concerns about adversary penetration of its security but failed to take countermeasures, such as requiring random polygraph tests of agents.
In response to a question, Bush said, "I have confidence in Director Freeh. I think he is doing a good job," adding: "He has made the right move in selecting Judge Webster to review all procedures in the FBI to make sure this doesn't happen again." In his initial comments, Bush said he wished for GOP and Democratic civility. "One of my missions has been to change the tone of the nation's capital to encourage civil discourse," he said.
On Feb. 21, 2001, this page [website] noted the curiousness of the appointment of Webster to investigate security problems at the FBI in the wake of the arrest of a longtime mole for Russia, top FBI counterspy Robert Philip Hanssen.
Webster was CIA chief while Aldrich Ames was doing his dirty work against America. Many voices inside the CIA had warned during the mid to late 1980s that there was penetration near the top because of too much going wrong.
The White Houses of Reagan and the senior Bush should have been warned about the security problem, but the elder Bush made it clear that he had complete confidence in the CIA, and no high-level security breach was identified. His son, George W., was a key White House assistant at that time. It wasn't until extraordinary pressure from people in the field was felt that the FBI and CIA set up a joint mole-hunting task force near the end of Bush's term.
But the Ames debacle was continuing under Webster, who, it appears, left the Reagan and Bush White Houses out of the loop on the severity of the problem at the CIA. Also under Webster, a number of other security scandals erupted.
Freeh's appointment of the 76-year-old Webster, who is a former FBI chief, to investigate the security problems appears to be largely political damage control but it is unlikely to give professional security and intelligence people much comfort.
This page updated Feb. 23, Feb. 28, 2001
The axiom of choice and non-enumerable reals
[Posted online March 13, 2002; revised July 30, 2002, Aug. 28, 2002, Oct. 12, 2002, Oct. 24, 2002; June 2003]
The following proposition is presented for purposes of discussion. I agree that, according to standard Zermelo-Fraenkel set theory, the proposition is false. Proposition: The Zermelo-Fraenkel power set axiom and the axiom of choice are inconsistent if an extended language is not used to express all the reals.
Discussion: The power set axiom reads: 'Given any set X there is a set Y which has as its members all the subsets of X.' The axiom of choice reads: 'For any nonempty set X there is a set Y which has precisely one element in common with set X.'*
*Definitions taken from 'Logic for Mathematicians,' A.G. Hamilton, Cambridge, revised 1988.
---It is known that there is a denumerable set X of writable functions f such that f defines r e R, where R is the nondenumerable set of reals. By writable, we mean f can be written in a language L that has a finite set of operations on a finite set of symbols.
In other words, X contains all computable reals. [This theorem stems from the thought that any algorithm for computing a number can be encoded as a single unique number. So, it is argued, since the set of algorithms is denumerable, so is the set of computable reals. However, we must be cautious here. It is possible for a Brouwerian choice rule (perhaps using a random number generator) to compute more than one real.]
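The denumerability argument can be illustrated by putting all finite strings over a finite symbol set in one-to-one correspondence with N; the sketch below (my own, with a two-symbol alphabet assumed purely for brevity) lists strings in length-then-lexicographic order, so every program text receives exactly one index.

```python
# Sketch of the encoding idea: every finite string over a finite
# alphabet appears at exactly one position in length-then-lexicographic
# order, so the writable texts (hence the writable functions) inject
# into N.

from itertools import count, product

ALPHABET = "01"

def nth_string(n: int) -> str:
    """Return the nth finite string over ALPHABET in length-lex order."""
    for length in count(0):
        block = len(ALPHABET) ** length
        if n < block:
            for i, letters in enumerate(product(ALPHABET, repeat=length)):
                if i == n:
                    return "".join(letters)
        n -= block

print([nth_string(i) for i in range(8)])
# -> ['', '0', '1', '00', '01', '10', '11', '000']
```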
X is disjoint from a nondenumerable set Y, subset of R, that contains all noncomputable and hence non-enumerable reals. P(Y) contains all the subsets of Y, and, like Y, is a nondenumerable infinity. Yet y e Y is not further definable. We cannot distinguish between elements of Y since they cannot be ordered, or even written. Hence, we cannot identify a 'choice' set Z that contains one element from every set in P(Y).
[However, it is important to note that some non-enumerables can be approximated as explicit rationals to any degree of accuracy in a finite number of steps, though such numbers are not Turing computable. See 'Thoughts on diagonal reals' above.]
Remark: It may be that an extended language L' could resolve this apparent inconsistency. The basic criticism from two mathematicians is that merely because a choice set Z cannot be explicitly identified by individual elements does not prevent it from existing axiomatically. Dan Velleman, an Amherst logician, remarked: 'But the axiom of choice does not say that 'we can form'
[I later replaced 'form' with 'identify'] a choice set. It simply says that the choice set exists. Most people interpret AC as asserting the existence of certain sets that we cannot explicitly define.'
My response is that we are then faced with the meaning of 'one' in the phrase 'one element in common.' The word 'one' doesn't appear to have a graspable meaning for the set Z. Clearly, the routine meaning of 'choice' is inapplicable, there being nothing that can be selected. The set Z must be construed as an abstract ideal that is analogous to the concept of infinitesimal quantity, which is curious since set theory arose as an answer to the philosophical objection to such entities.
It is amusing to consider two types of vacuous truth (using '$' for the universal quantifier and '#' for the existential quantifier):
I. $w e W Fw & W = { }.
II. Consider the countable set X containing all computable reals and the noncountable set Y containing all noncomputable reals. The statement $r e R Fr --> $y e Y Fy holds even though no y e Y can be specified from information given in Y's definition. That is, I. is vacuously true because W contains no elements, whereas II. is vacuously true because Y contains no elements specified by Y's definition.
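Type I vacuous truth is mechanically checkable; a minimal sketch, with the predicate F an arbitrary stand-in:

```python
# Type I vacuous truth: a universal claim over the empty set W holds
# no matter what the predicate F says, since there is nothing to check.
W = set()
F = lambda w: False          # even an always-false predicate
print(all(F(w) for w in W))  # prints True -- vacuously
```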
It is just the set Y that the intuitionist opposes, of course. Rather than become overly troubled by the philosophy of existence, it may be useful to limit ourselves to specifiability, which essentially means the ability to pair an element with a natural number.
Those numbers in turn can be paired with a successor function, such as S...S(O). We should here consider the issue of transitivity, whereby the intuitionist admits to A --> {A} but does not accept A --> {{A}} without first specifying, defining or expressing {A}. That is, the intuitionist says A --> {{A}} only if {{A}} --> {A}, which is only true if {A} has been specified, which essentially means paired with n e N.
In their book, Philosophies of Mathematics (Blackwell, 2002), Alexander George and Velleman offer a deliberately weak proof of the theorem that says that every infinite set has a denumerable subset: 'Proof: Suppose A is an infinite set. Then A is certainly not the empty set, so we can choose an element a0 e A. Since A is infinite, A =/= {a0}, so we can choose some a1 e A such that a1 =/= a0. Similarly, A =/= {a0,a1}, so we can choose a2 e A such that a2 =/= a0 and a2 =/= a1. Continuing in this way, we can recursively choose an e A such that an ~e {a0,a1,...,an-1}. Now let R = {<0,a0>, <1,a1>, <2,a2>...} .
Then R is a one-to-one correspondence between N and the set {a0,a1,a2...}, which is a subset of A. Therefore, A has a denumerable subset.' The writers add, 'Although this proof seems convincing, it cannot be formalized using the set theory axioms that we have listed so far. The axioms we have discussed guarantee the existence of sets that are explicitly specified in various ways -- for example, as the set of all subsets of some set (Axiom of Power Sets), or as the set of all elements of some set that have a particular property (Axiom of Comprehension).
But the proof of [the theorem above] does not specify the one-to-one correspondence R completely, because it does not specify how the choices of the elements a0,a1,a2... are to be made. To justify the steps in the proof, we need an axiom guaranteeing the existence of sets that result from such arbitrary choices:'
The writers give their version of the axiom of choice: 'Axiom of choice. Suppose F is a set of sets such that Ø ~e F. Then there is a function C with domain F such that, for every X e F, C(X) e X.' They add, 'The function C is called a choice function because it can be thought of as choosing one element C(X) from each X e F.' The theorem cited seems to require an abstracted construction algorithm.
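By way of contrast with the set Y at issue, a choice function on an explicitly listed finite family can simply be computed, no axiom needed; in this sketch (function names mine) min serves as one concrete selection rule.

```python
# Sketch: for a finite, explicitly listed family F of nonempty sets,
# a choice function C with C(X) e X for every X e F can be computed
# outright -- min() below is one explicit selection rule. The trouble
# in the text is that no such rule can even be stated for a family of
# sets of noncomputable reals.

def choice_function(family):
    """Map each set X in the family to a chosen member of X."""
    return {X: min(X) for X in family}

F = [frozenset({3, 1, 4}), frozenset({1, 5}), frozenset({9, 2, 6})]
C = choice_function(F)
assert all(C[X] in X for X in F)
print(C)
```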
However, how does one select a0,a1,a2... if the elements of Y are individually nondefinable? AC now must be used to justify counting elements that can't be identified. So now AC is used to assert a denumerable subset by justifying a construction algorithm that can, in principle, never be performed.
Suppose we define a real as an equivalence class of Cauchy sequences. If [{an}] e X, then [{an}] is computable and orderable. By computable, we mean that there is some rule for determining, in a finite number of steps, the exact rational value of any term an, and that this rule must always yield the same value for an. A Brouwerian choice sequence fails to assure that an has the same value on every computation, even though {an} is Cauchy. Such numbers are defined here as 'non-computable,' though perhaps 'non-replicable' is a better characterization. A Brouwerian Cauchy sequence {an}, though defined, is not orderable since, in effect, only a probability can be assigned to its ordering between 1/p and 1/q. Now we are required by AC to say that either {an} is equivalent to {bn}, and hence that [{an}] = [{bn}], or that the two sequences are not equivalent and the two numbers are not equal. Yet a Brouwerian choice sequence defines a subset W of Y, whereby the elements of W cannot be distinguished in a finite number of steps.
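The computable/non-replicable distinction can be mimicked in code: a rule-based Cauchy sequence returns the same rational term on every run, while a random-digit stand-in for a Brouwerian choice sequence need not, so term-by-term comparison cannot settle equality. A sketch under those assumptions:

```python
# Sketch: a replicable (rule-based) Cauchy sequence versus a simulated
# Brouwerian choice sequence whose terms are drawn freely. Reruns of
# the rule give identical terms; reruns of the choice sequence need not,
# so no finite check settles whether two such sequences name one real.

import random
from fractions import Fraction

def computable_term(n: int) -> Fraction:
    """Rule-based: nth partial sum of 1/2^k -- the same on every run."""
    return sum(Fraction(1, 2**k) for k in range(1, n + 1))

def choice_term(n: int, rng: random.Random) -> Fraction:
    """Choice-style: n freely drawn decimal digits; still Cauchy."""
    return sum(Fraction(rng.randint(0, 9), 10**k) for k in range(1, n + 1))

print(computable_term(5) == computable_term(5))  # always True
a, b = random.Random(), random.Random()
print(choice_term(5, a) == choice_term(5, b))    # generally False
```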
Yet AC says that the trichotomy law applies to w1 and w2. We should note that W may contain members that coincide with some x e X. For example, we cannot rule out that a random-number generator might produce all the digits in pi. In a 1993 Philosophical Review article, 'Constructivism liberalized,' Velleman defends the notion that only denumerable sets qualify as actual infinities. In that case, AC would, I suppose, not apply to a nondenumerable set X since the choice function could only apply to a denumerable subset of X. One can't apply the choice function to something that doesn't exist.
Essentially, Velleman 1993 is convinced that Cantor's reductio ad absurdum proof of the nondenumerability of the reals should be interpreted: 'If a set of all reals exists, that set cannot be countable.' By this, we avoid the trap of assuming, without definition, that a set of all reals exists. He writes that 'to admit the existence of completely unspecifiable reals would violate our principle that if we want to treat real numbers as individuals, it is up to us to individuate them.'
'As long as we maintain this principle, we cannot accept the classical mathematician's claim that there are uncountably many completely unspecifiable real numbers. Rather, the natural conclusion to draw from Cantor's proof seems to be that any scheme for specifying reals can be extended to a more inclusive one, and therefore the reals form an indefinitely extensible totality.' He favors use of intuitionist logic for nondenumerable entities while retaining classical logic for denumerable sets. 'The arguments of the constructionists have shaken my faith in the classical treatment of real numbers, but not natural numbers,' Velleman 1993 writes in his sketching of a philosophical program he calls 'liberal constructivism.'
Unlike strict constructivists, he accepts 'actual infinities,' but unlike classical mathematicians, he eschews uncountable totalities. For example, he doubts that 'the power set operation, when applied to an infinite set, results in a well-defined totality.' His point can be seen by considering the Cantorian set of reals. We again form the denumerable set X of all computable, and enumerable, reals and then write the complement set R-X. Now if we apply AC to R-X in order to form a subset Y, does it not seem that Y ought to be perforce denumerable, especially if we are assuming that Y may be constructed?
That is, the function C(R-X) seems to require some type of instruction to obtain a relation uRv. If an instruction is required in order to pair u and v, then Y would be denumerable, the set of instructions being denumerable. But does not AC imply that no instruction is required? Of course, we can then write (R-X)-Y to obtain a nondenumerable subset of R. We can think of two versions of AC[1]: the countable version and the noncountable. In the countable version, AC says that it is possible to select one element from every set in a countable collection of sets. In the noncountable version, AC says that the choice function may be applied to a nondenumerable collection of sets. In strong AC, we must think of the elements being chosen en masse, rather than in a step-by-step process.
The wildness implicit in AC is further shown by the use of a non-formulaic function to pair noncomputables in Y with noncomputables in a subset of Y, as in f:Y->Y. That is, suppose we take all the noncomputables in the interval (0,1/2) and pair each with one noncomputable in (1/2,1), without specifying a means of pairing, via formula or algorithm. Since we can do the same for every other noncomputable in (1/2,1), we know there exists a nondenumerable set of functions pairing noncomputables. This is strange. We have shown that there is a nondenumerable set of nonformulaic functions to pair non-individuated members of domY with a non-individuated member of ranY. If x e domY, we say that x varies, even though it is impossible to say how it varies.
If y0 e ranY, we can't do more than approximate it on the real line. We manipulate quantities that we can't grasp. They exist courtesy of AC alone. In an August 2002 email, Velleman said that though still attracted to this modified intuitionism, he is not committed to a particular philosophy.
Jim Conant, a Cornell topologist, commented that the reason my exposition is not considered to imply a paradox is that 'the axioms of set theory merely assert existence of sets and never assert that sets can be constructed explicitly.' The choice axiom 'in particular is notorious for producing wild sets that can never be explicitly nailed down.' He adds, 'A platonist would say that it is a problem of perception: these wild sets are out there but we can never perceive them fully since we are hampered by a denumerable language. Others would question the meaning of such a statement.'
Also, the ZF axioms are consistent only if the ZF axioms + AC are consistent, he notes, adding that 'nobody knows whether the ZF axioms are consistent.' (In fact, his former adviser, topologist Mike Freedman, believes there are ZF inconsistencies 'so complicated' that they have yet to be found.) 'Therefore I take the point of view that the axiom of choice is simply a useful tool for proving down to earth things.' Yet it is hard to conceive of Y or P(Y)\Y as 'down to earth.' For example, because Y contains real numbers, they are axiomatically 'on' the real number line. Yet no element of Y can be located on that line. y is a number without a home.
Note added in April 2006: Since arithmetic can be encoded in ZFC, we know from Kurt Godel that ZFC is either inconsistent or incomplete. That is, either ZFC contains a true statement that cannot be proved from its axioms, or ZFC contains a contradiction. We also know that ZFC is incomplete in the sense that the continuum hypothesis can be expressed in ZFC but its truth status is independent of the ZFC axioms, as Godel and Paul Cohen have shown.
1. Jim Conant brought this possibility to my attention.
Time thought experiments
[Written some years ago.]
Godel's theorem and a time travel paradox
In How to Build a Time Machine (Viking 2001), the physicist Paul Davies gives the 'most baffling of all time paradoxes.' Writes Davies: 'A professor builds a time machine in 2005 and decides to go forward ... to 2010. When he arrives, he seeks out the university library and browses through the current journals. In the mathematics section he notices a splendid new theorem and jots down the details. Then he returns to 2005, summons a clever student, and outlines the theorem. The student goes away, tidies up the argument, writes a paper, and publishes it in a mathematics journal. It was, of course, in this very journal that the professor read the paper in 2010.'
Davies finds that, from a physics standpoint, such a 'self-consistent causal loop' is possible, but, 'where exactly did the theorem come from?... it's as if the information about the theorem just came out of thin air.' Davies says many worlds proponent David Deutsch, author of The Fabric of Reality and a time travel 'expert,' finds this paradox exceptionally disturbing, since information appears from nowhere, in apparent violation of the principle of entropy.
This paradox seems well suited to Godel's main incompleteness theorem, which says that a sufficiently rich formal system, if consistent, must be incomplete. Suppose we assume that there is a formal system T -- a theory of physics -- in which a sentence S can be constructed describing the mentioned time travel paradox. If S strikes us as paradoxical, then we may regard S as the Godel sentence of T. Assuming that T is a consistent theory, we would then require that some extension of T be constructed. An extension might, for example, say that the theorem's origin is relative to the observer and include a censorship, as occurs in other light-related phenomena.
That is, the professor might be required to forget where he got the ideas to feed his student. But, even if S is made consistent, there must then be some other sentence S', which is not derivable from the extension T'. Of course, if T incorporates the many worlds view, S would likely be consistent and derivable from T.
However, assuming T is a sufficiently rigorous mathematical formalism, there must still be some other sentence V that may be viewed as paradoxical (inconsistent) if T is viewed as airtight.
How old is a black hole?
Certainly less than the age of the cosmos, you say. The black hole relativistic time problem illustrates that the age of the cosmos is determined by the yardstick used. Suppose we posit a pulsar pulsing at rate T, at distance D from the event horizon of a black hole.
Our clock is timed to strike at T/2, so that pulse A has occurred at T=0. We now move the pulsar closer to the event horizon, again with our clock striking at what we'll call T'/2. Now, because of the gravitational effect on observed time, the time between pulses is longer. That is, T' > T, and hence T'=0 is farther in the past than T=0. Of course, as we push the pulsar closer to the event horizon, the relative time T' becomes asymptotic to infinity (eternity).
So, supposing the universe was born 15 billion years ago in the big bang, we can push our pulsar's pulse A back in time beyond 15 billion years ago by pushing the pulsar closer to the event horizon. No matter how old we make the universe, we may always obtain a pulse A that is older than the cosmos. Yes, you say, but a real pulsar would be ripped to shreds and such a happening is not observable.
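For the record, the pulse-stretching can be quantified with the standard Schwarzschild dilation factor, which the text itself does not spell out: a static clock at radius r outside a hole of Schwarzschild radius rs runs slow by sqrt(1 - rs/r), so the received period T' = T/sqrt(1 - rs/r) diverges as r approaches rs. A minimal sketch:

```python
# Standard Schwarzschild time dilation (my addition, not the text's):
# a static clock at radius r > rs runs slow by sqrt(1 - rs/r), so a
# pulse period T is received far away as T' = T / sqrt(1 - rs/r),
# diverging as the pulsar nears the event horizon.
import math

def observed_period(T: float, r: float, rs: float = 1.0) -> float:
    """Distant-observer period for a source at radius r (units of rs)."""
    return T / math.sqrt(1.0 - rs / r)

for r in (10.0, 2.0, 1.1, 1.01, 1.001):
    print(f"r = {r:6.3f} rs   T' = {observed_period(1.0, r):9.3f} T")
```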
Nevertheless, the general theory of relativity requires that we grant that time calculations can yield such contradictions.
Anthropic issues
A sense of awe often accompanies the observation: 'The conditions for human (or any) life are vastly improbable in the cosmic scheme of things.' This leads some to assert that the many worlds scenario answers that striking improbability, since in most other universes, life never arose and never will. I point out that the capacity of the human mind to examine the cosmos is perhaps 2.5 x 10^4 years old, against a cosmic time scale of 1.5 x 10^9. In other words, we have a ratio of 2.5(10^4)/1.5(10^9) = 1.6/10^5.
In other words, humanity is an almost invisible drop in the vast sea of cosmic events. Yet here we are! Isn't that amazing?! It seems as though the cosmos conspired to make our little culture just for us, so we could contemplate its vast mysteries. However, there is the problem of the constants of nature. Even slight differences in these constants would, it seems, lead to universes where complexity just doesn't happen. Suppose that these constants depend on initial cosmic conditions which have a built-in random variability.
In that case, the existence of a universe with just the right constants for life (in particular, humanity) to evolve is nothing short of miraculously improbable. Some hope a grand unified theory will resolve the issue. Others suggest that there is a host of bubble universes, most of which are not conducive to complexity, and hence the issue of improbability is removed (after all, we wouldn't be in one of the barren cosmoses). For more on this issue, see the physicist-writers John Barrow, Frank Tipler and Paul Davies.
At any rate, it doesn't seem likely that this drop will last long, in terms of cosmic scales, and the same holds for other such tiny drops elsewhere in the cosmos. Even granting faster-than-light 'tachyon radio,' the probability is very low that an alien civilization exists within communications range of our ephemeral race. That is, the chance of two such drops existing 'simultaneously' is rather low, despite the fond hopes of the SETI crowd. On the other hand, Tipler favors the idea that once intelligent life has evolved, it will find the means to continue on forever. Anyway, anthropomorphism does seem to enter into the picture when we consider quantum phenomena: a person's physical reality is influenced by his or her choices.
Tuesday, March 13, 2012
First published elsewhere Monday, January 7, 2008
Thoughts on the Shroud of Turin
I haven't read The Da Vinci Code but...
. . . I have scanned a book by the painter David Hockney, whose internet-driven survey of Renaissance and post-Renaissance art makes a strong case for a trade secret: use of a camera obscura technique for creating precision realism in paintings.
Hockney's book, Secret Knowledge: rediscovering the lost legacy of the old masters (2001), uses numerous paintings to show that European art guilds possessed this technical ability, which was a closely guarded and prized secret. Eventually the technique, along with the related magic lantern projector, evolved into photography. It's possible the technique also included the use of lenses and mirrors, a topic familiar to Leonardo da Vinci.
Apparently the first European mention of a camera obscura is in Codex Atlanticus.
I didn't know about this when first mulling over the Shroud of Turin controversy and so was quite perplexed as to how such an image could have been formed in the 14th century, when the shroud's existence was first reported. I was mistrustful of the carbon dating, realizing that the Kremlin had a strong motive for deploying its agents to discredit the purported relic. (See my old page Science, superstition and the Shroud of Turin: http://www.angelfire.com/az3/nuzone/shroud.html)
But Hockney's book helps to bolster a theory by fellow Brits Lynn Picknett and Clive Prince that the shroud was faked by none other than Leonardo, a scientist, "magician" and intriguer. Their book The Turin Shroud was a major source of inspiration for The Da Vinci Code, it has been reported. The two are not professional scientists but, in the time-honored tradition of English amateurs, did an interesting sleuthing job. As they point out, the frontal head image is way out of proportion with the image of the scourged and crucified body.
They suggest the face is quite reminiscent of a self-portrait by Leonardo. Yet two Catholic scientists at the Jet Propulsion Lab who used a computer method in the 1980s to analyze the image had supposedly demonstrated that it was "three-dimensional." But a much more recent analysis, commissioned by Picknett and Prince, found that the "three-dimensionalism" did not hold up. From what I can tell, the Jet Propulsion pair proved that the image was not made by conventional brushwork but that further analysis indicates some type of projection.
Picknett and Prince suggest that Leonardo used projected images of a face and of a body -- perhaps a cadaver that had been inflicted with various crucifixion wounds -- to create a death-mask type of impression. But the image collation was imperfect, leaving the head size wrong and the body that of, by Mideast standards, a giant. This is interesting, in that Hockney discovered that the camera obscura art often failed at proportion and depth of field between spliced images, just as when a collage piece is pasted onto a background. Still, the shroud's official history begins in 1358, about a hundred years prior to the presumed Da Vinci hoax.
It seems plausible that some shroud-like relic had passed to a powerful family and that its condition was poor, whether because of its age or because it wasn't that convincing upon close inspection. The family then secretly enlisted Leonardo, the theory goes, in order to obtain a really top-notch relic. Remember, relics were big business in those days, being used to generate revenues and political leverage. Yet if Leonardo was the forger, we must still account for the fact that the highly distinctive "Vignon marks" on the shroud face have been found in Byzantine art dating to the 7th century.
I can't help but wonder whether Leonardo only had the Mandylion (the face) to work with, and added the body as a bonus. (I've tried scanning the internet for reports of exact descriptions of the shroud prior to da Vinci's time but haven't succeeded.) The Mandylion refers to an image not made by hands. This "image of Edessa" must have been very impressive, considering the esteem in which it was held by Byzantium. Byzantium also was rife with relics and with secret arts -- which included what we'd call technology along with mumbo-jumbo.
The Byzantine tradition of iconography may
have stemmed from display of the Mandylion.Ian Wilson, a credentialed
historian who seems to favor shroud authenticity, made a good case for the
Mandylion having been passed to the Knights Templar -- perhaps when the
crusaders sacked Constantinople in 1204. The shroud then showed up in the hands
of a descendant of one of the Templars after the order was ruthlessly
suppressed. His idea was that the shroud and the Mandylion were one and the
same, but that in the earlier centuries the cloth had been kept folded in
four, like a map, with the head on top, and had always been displayed that
way.
The other
possibility is that a convincing relic of only the head was held by the
Templars. A discovery at Templecombe, England, in 1951 showed that regional
Templar centers kept paintings of a bearded Jesus face, which may well have been
copies of a relic that Templar enemies tried to find but couldn't. The Templars
had been accused of worshiping a bearded idol.
Well, what made the Mandylion so convincing?
A possibility: when the Templars obtained the relic
they also obtained a secret book of magical arts that told how to form such an
image. This of course implies that Leonardo discovered the technique when
examining this manuscript, which may have contained diagrams. Or, it implies
that the image was not counterfeited by Leonardo but was a much, much older
counterfeit.
Obviously, all this is pure speculation. But one cannot deny that the shroud
images have a photographic quality yet are out of kilter with each other, and
that the secret of camera obscura projection in Western art seems to stem from
Leonardo's studios.
The other point is that the 1988 carbon
analysis dated the shroud to the century before Leonardo. If one discounts
possible political control of the result, then one is left to wonder how such a
relic could have been so skillfully wrought in that era. Leonardo was one of
those once-in-a-thousand-year geniuses who had the requisite combination of
skills, talents, knowledge and impiety to pull off such a stunt.
Of course, the radiocarbon dating might easily have been off by a hundred
years (but, if fairly done, is not likely to have been off by 1300 years).
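A back-of-envelope check shows why a century's error is plausible while
thirteen centuries is not. The conventional radiocarbon age follows from the
measured fraction of surviving carbon-14 via the Libby mean life of 8,033
years; the Python sketch below runs the arithmetic with an invented
measurement and ignores calibration curves and contamination, so treat it as
an illustration only.

    import math

    LIBBY_MEAN_LIFE = 8033.0  # years; basis of conventional C-14 ages

    def radiocarbon_age(fraction_modern):
        """Conventional radiocarbon age (years before present)."""
        return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

    f = 0.925  # invented C-14 fraction yielding a medieval date
    print(round(radiocarbon_age(f)))          # ~626 years BP

    # Sensitivity: dt = LIBBY_MEAN_LIFE * dF / F, so a 1% bias in the
    # measured fraction shifts the date by less than a century ...
    print(round(LIBBY_MEAN_LIFE * 0.01 / f))  # ~87 years

    # ... while a 1300-year error would need the fraction to be off by
    # about 15 percent, far beyond honest measurement error.
    print(round((1 - math.exp(-1300 / LIBBY_MEAN_LIFE)) * 100))  # ~15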
All
in all, I can't be sure exactly what happened, but I am strongly inclined to
agree that the shroud was counterfeited by Leonardo based on a previous relic.
The earlier relic must have been at least "pretty good," or why all the fuss
in previous centuries? But it is hard not to suspect Leonardo's masterful hand
in the Shroud of Turin.
Of course, the thing about the shroud is that there is always more to it. More
mystery. I know perfectly well that, no matter how good the scientific and
historical analysis, trying to nail down a proof one way or the other is a
will-o'-the-wisp.
Monday, March 5, 2012
Nix some Angelfire pages, please
Alas, this writer has lost access to his Angelfire
accounts -- probably from cock-up, not conspiracy -- and so asks
readers to use this site for up-to-date versions of those pages.
The accounts include az3/nfold, az3/nuzone, az3/newzone and ult/znewz1. There
may be others.
The writer specifically requests that his page on the four-color theorem be
deep-sixed unread. The offending URL is www.angelfire.com/az3/nfold/444.html.