Sinister brownouts of red troubles
(First published ca. 2001)
British authorities are weighing bringing charges against about 10 persons suspected of having been agents of the East German secret police, the Stasi, according to Stephen Grey and John Goetz of London's Sunday Times.
They were identified as the result of the cracking of a Stasi computer code by a computer buff who once lived under communist rule. However, the exposures were not limited to British spies: agents operating in the United States and elsewhere were evidently identified as well. Reputedly, the CIA has tried to thwart exposure of the red agents, whether in America or elsewhere, claiming it would compromise some secret operations.
The Sunday Times tells of concerns that the CIA is covering up for a nest of high-level traitors. It will be recalled that three ex-East German agents, including a lawyer working at the top rung of the Pentagon, have been convicted. Vladimir Putin, president of Russia, was Moscow's representative to the Stasi when the newly exposed agents were active. The question is, did this red network remain active, perhaps under Russian control, after the collapse of East Germany? Putin may have been advised that the code was uncrackable.
On another Putin matter, John Sweeney of Britain's Observer links Putin and security service allies to the Moscow bombings that were used as a reason for the Chechnya war.
The most interesting bomb was the one that didn't go off, the incident being bizarrely transformed into a "training exercise." News agencies and media serving America have virtually ignored the Stasi story, as if FBI interest -- or lack of it -- in communist networks in our government is ho-hum news in America.
And, investigative news reports on the Moscow bombings are difficult to come across, particularly in the United States. Other topics on ConantNews pages include China's nuclear espionage offensive and MI6's battle to censor the press, along with a discussion of the mathematics of Florida's presidential election. Links between pages have proved inadequate.
Saturday, March 17, 2012
China's nuke spy war
May 20, 2001--The Chinese espionage uproar took a new turn when the Washington Post disclosed that the Pentagon and the CIA were blocking publication of a U.S. nuclear scientist's memoirs of his visits to Chinese nuclear arms facilities.
Danny B. Stillman, a retired Los Alamos scientist and intelligence analyst, made many visits to the Chinese nuclear program between 1990 and 1999 and simply asked fellow scientists what they were doing, wrote the Post's Steve Coll. Stillman felt that the Chinese were able to make strides in nuclear arms not because of espionage but because computers aided their task.
Rep. Curt Weldon, a member of the Cox committee, which probed U.S. nuclear security issues, targeted Clinton's decision to permit the sale of some 700 supercomputers to the Chinese. Weldon demanded a copy of Stillman's book from federal officials, along with supporting materials. You can read the Post story or Weldon's statement by going to the ConantNews features page and following the links 'Scientist fights gag order' or 'Clinton faulted on supercomputers.' This report might have gone online sooner had the Washington Post's email news alert system not been down when the Stillman and Weldon stories emerged.
Computer problems prevented me from adding a page with links to those stories and, rather than waste more time playing games, I leave you the addresses: Stillman story: www.washingtonpost.com/wp-dyn/articles/A29474-2001May15.html Weldon story: www.fas.org/sgp/congress/2001/h051601.html
PRESIDENT'S MEN CITE NUKE SPY WAR
The Associated Press's online articles about reported Chinese espionage and Los Alamos security woes include links to the congressional Cox report but not to a key White House report. The report of the President's Foreign Intelligence Advisory Board on nuclear security portrays decades of incredible security negligence at the Los Alamos National Laboratory, where nuclear weapons are designed. This negligence continued in the face of repeated warnings from a variety of investigations, the report says.
The report, with the input of the FBI, CIA and other security arms, asserts that China has mounted a massive and highly successful spy war against our nuclear secrets. The report's appendix contains an eyebrow-raising chronology and damage assessment.
The New York Times, in its Feb. 4 and 5 editions, made good on its pledge to take a thorough look at the Wen Ho Lee affair. Times reporters noted that U.S. policy promoted fraternization between Chinese and American nuclear scientists, including those involved in the weapons program. In this climate, disinterest in security was rampant, it seems. Knowing the aggressiveness of Chinese intelligence, America would be foolish not to assume that the Chinese took full advantage of such neglect. Now the unpleasant question arises as to how many agents the communists have insinuated into America's weapons establishment. The Times did not address that question.
When axioms collide
(first published ca. 2002)
When axioms collide, a new truth emerges.
Here we discuss the Zermelo-Fraenkel infinite set axiom and its collision with the euclidean one-line-per-point-pair axiom. The consequence is another 'euclidean' (as opposed to 'non-euclidean') geometry that uses another, and equally valid, axiom, permitting an infinite number of lines per planar point pair. The ZF infinity axiom establishes a prototype infinite set, with the null set as its 'initial' element and, for each element x', a successor x'' = x' u {x'}. Infinite recursion requires a denumerable set.
Fraenkel appears to have wanted the axiom in order to justify the set N. (From the set N, it is then possible to justify nondenumerable infinite sets.) So the axiomatic infinite set permits both denumerable and nondenumerable infinite sets composed of elements with a property peculiar to the set. The ZF infinite set does not of itself imply existence of, say, N. But the axiom, along with the recursion algorithm f(n) = n + 1, which is a property that can be made one-to-one with the axiomatic set elements, does imply N's existence.
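As an illustrative aside, here is a minimal Python sketch of the successor construction the axiom licenses, with frozensets standing in for pure sets; the function names are merely labels for this sketch.

```python
# Sketch: the von Neumann successor construction that the ZF infinity
# axiom guarantees can be iterated without end. Each natural n is
# modeled as the set of all smaller naturals, starting from the null set.

def successor(x: frozenset) -> frozenset:
    """Return the successor x' u {x'}, here written for the argument x."""
    return x | frozenset({x})

def von_neumann(n: int) -> frozenset:
    """Build the set-theoretic representative of the natural number n."""
    x = frozenset()            # the null set, the 'initial' element
    for _ in range(n):
        x = successor(x)
    return x

if __name__ == "__main__":
    three = von_neumann(3)
    print(len(three))              # 3 -- the cardinality matches the natural number
    # The recursion f(n) = n + 1 corresponds to one more application of successor():
    print(len(successor(three)))   # 4
```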
So now let us graph a summation formula, such as zeta(2) = the sum over n of n^(-2), and draw a line L through each partial sum height y parallel to the x axis. That is, f(n) is asymptotic to (lim n->inf.)f(n). In other words, the parallels drawn through consecutive values of f(n) squeeze closer and closer together. At the limit, infinite density of lines is reached, where the distance between parallels is 0. Such a scenario, which might be called a singularity, is not permitted by the euclidean and pseudo-euclidean axiom of one line per point pair.
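A quick numerical sketch of how those parallels squeeze together, taking the heights to be the partial sums of the sum of n^(-2); the term count is arbitrary.

```python
# Sketch: partial sums of sum 1/n^2 and the gaps between consecutive sums,
# i.e. the distances between the horizontal parallels described above.

def partial_sums(terms: int):
    s = 0.0
    sums = []
    for n in range(1, terms + 1):
        s += 1.0 / (n * n)
        sums.append(s)
    return sums

if __name__ == "__main__":
    sums = partial_sums(10000)
    for i in (1, 10, 100, 1000, 9999):
        gap = sums[i] - sums[i - 1]   # distance between parallel i and i+1
        print(f"n={i+1:5d}  partial sum={sums[i]:.6f}  gap={gap:.2e}")
    # The partial sums approach pi^2/6 ~ 1.644934 and the gaps shrink toward 0,
    # which is the 'infinite density of lines' limit in the text.
```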
Yet the set J = {a parallel to the x axis through a partial sum of zeta(2)} is certainly bijective with the axiomatic infinite set. However, by euclidean axiom, the set's existence must be disregarded. The fact that ZF permits the set J to exist and that it takes another axiom to knock it out means that J exists in another 'euclidean' geometry where the one-line-per-point-pair axiom is replaced. We can either posit a newly found geometry or we can modify the old one. We retain the parallel postulate, but say that an infinitude of lines runs through two planar points.
Two infinite sets of lines would then be said to be distinct if there is non-zero distance between the sets of parallels positioned at (lim x->inf.)f(x) and (lim x->inf.)g(x). In addition, it is only the infinite subset of J with elements 0 distance apart that overlays two specific points. So we may say that an infinitude of lines runs through two planar points which are (geometrically) indistinct from a single line running through those two points.
So we axiomatically regard, in this case, an infinitude of lines to be (topologically?) equivalent to one line.
Bush plants old fox in FBI chicken coop
(first published 2001)
FEB 23 -- As the FBI mole scandal rocked Washington, President Bush, in a hastily convened news conference, yesterday called for "civil discourse" -- defending FBI Director Louis J. Freeh and upholding Freeh's appointment of William H. Webster, who formerly headed the FBI and CIA, to probe security problems at the bureau.
It has become clear since the spy case broke that the FBI was aware of concerns about adversary penetration of its security but failed to take countermeasures, such as requiring random polygraph tests of agents.
In response to a question, Bush said, "I have confidence in Director Freeh. I think he is doing a good job," adding: "He has made the right move in selecting Judge Webster to review all procedures in the FBI to make sure this doesn't happen again." In his initial comments, Bush said he wished for GOP and Democratic civility. "One of my missions has been to change the tone of the nation's capital to encourage civil discourse," he said.
On Feb. 21, 2001, this page [website] noted the oddity of appointing Webster to investigate security problems at the FBI in the wake of the arrest of a longtime mole for Russia, top FBI counterspy Robert Philip Hanssen.
Webster was CIA chief while Aldrich Ames was doing his dirty work against America. Many voices inside the CIA had warned during the mid to late 1980s that there was penetration near the top because of too much going wrong.
The White Houses of Reagan and the senior Bush should have been warned about the security problem, but the elder Bush made it clear that he had complete confidence in the CIA, and no high-level security breach was identified. His son, George W., was a key White House assistant at that time. It wasn't until extraordinary pressure from people in the field was felt that the FBI and CIA set up a joint mole-hunting task force near the end of Bush's term.
But the Ames debacle was continuing under Webster, who, it appears, left the Reagan and Bush White Houses out of the loop on the severity of the problem at the CIA. Also under Webster, a number of other security scandals erupted.
Freeh's appointment of the 76-year-old Webster, who is a former FBI chief, to investigate the security problems appears to be largely political damage control but it is unlikely to give professional security and intelligence people much comfort.
This page updated Feb. 23, Feb. 28, 2001
The axiom of choice and non-enumerable reals
[Posted online March 13, 2002; revised July 30, 2002, Aug. 28, 2002, Oct. 12, 2002, Oct. 24, 2002; June 2003]
The following proposition is presented for purposes of discussion. I agree that, according to standard Zermelo-Fraenkel set theory, the proposition is false. Proposition: The Zermelo-Fraenkel power set axiom and the axiom of choice are inconsistent if an extended language is not used to express all the reals.
Discussion: The power set axiom reads: 'Given any set X there is a set Y which has as its members all the subsets of X.' The axiom of choice reads: 'For any nonempty set X of disjoint nonempty sets, there is a set Y which has precisely one element in common with each member of X.'*
*Definitions taken from 'Logic for Mathematicians,' A.G. Hamilton, Cambridge, revised 1988.
---It is known that there is a denumerable set X of writable functions f such that f defines r e R, where R is the nondenumerable set of reals. By writable, we mean f can be written in a language L that has a finite set of operations on a finite set of symbols.
In other words, X contains all computable reals. [This theorem stems from the thought that any algorithm for computing a number can be encoded as a single unique number. So, it is argued, since the set of algorithms is denumerable, so is the set of computable reals. However, we must be cautious here. It is possible for a Brouwerian choice rule (perhaps using a random number generator) to compute more than one real.]
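A small Python sketch of the counting argument in the bracketed note: over any finite alphabet, candidate definitions can be listed length-first, so there are only denumerably many of them. The two-symbol alphabet and the naming below are illustrative choices, not anything from the original.

```python
# Sketch: why the set of 'writable' definitions is denumerable. Over a
# finite alphabet, the strings (candidate programs or formulas) can be
# listed length-first, so each one gets a unique natural-number index.

from itertools import count, product

ALPHABET = "01"   # any finite symbol set will do; two symbols suffice

def enumerate_strings():
    """Yield every finite string over ALPHABET exactly once."""
    for length in count(0):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

if __name__ == "__main__":
    for index, program_text in zip(range(10), enumerate_strings()):
        print(index, repr(program_text))
    # Only some of these strings denote valid defining algorithms, so the
    # computable reals form at most a countable set -- the set X above.
```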
X is disjoint from a nondenumerable set Y, subset of R, that contains all noncomputable and hence non-enumerable reals. P(Y) contains all the subsets of Y, and, like Y, is a nondenumerable infinity. Yet y e Y is not further definable. We cannot distinguish between elements of Y since they cannot be ordered, or even written. Hence, we cannot identify a 'choice' set Z that contains one element from every set in P(Y).
[However, it is important to note that some non-enumerables can be approximated as explicit rationals to any degree of accuracy in a finite number of steps, though such numbers are not Turing computable. See 'Thoughts on diagonal reals' above.]
Remark: It may be that an extended language L' could resolve this apparent inconsistency. The basic criticism from two mathematicians is that merely because a choice set Z cannot be explicitly identified by individual elements does not prevent it from existing axiomatically. Dan Velleman, an Amherst logician, remarked: 'But the axiom of choice does not say that 'we can form'
[I later replaced 'form' with 'identify'] a choice set. It simply says that the choice set exists. Most people interpret AC as asserting the existence of certain sets that we cannot explicitly define.'
My response is that we are then faced with the meaning of 'one' in the phrase 'one element in common.' The word 'one' doesn't appear to have a graspable meaning for the set Z. Clearly, the routine meaning of 'choice' is inapplicable, there being nothing that can be selected. The set Z must be construed as an abstract ideal that is analogous to the concept of infinitesimal quantity, which is curious since set theory arose as an answer to the philosophical objection to such entities.
It is amusing to consider two types of vacuous truth (using '$' for the universal quantifier and '#' for the existential quantifier):
I. $w e W Fw & W = { }.
II. Consider the countable set X containing all computable reals and the noncountable set Y containing all noncomputable reals. The statement $r e R Fr --> $y e Y Fy even though no y e Y can be specified from information given in Y's definition. That is, I. is vacuously true because W contains no elements, whereas II. is vacuously true because Y contains no elements specified by Y's definition.
It is just the set Y that the intuitionist opposes, of course. Rather than become overly troubled by the philosophy of existence, it may be useful to limit ourselves to specifiability, which essentially means the ability to pair an element with a natural number.
Those numbers in turn can be paired with a successor function, such as S...S(O). We should here consider the issue of transitivity, whereby the intuitionist admits to A --> {A} but does not accept A --> {{A}} without first specifying, defining or expressing {A}. That is, the intuitionist says A --> {{A}} only if {{A}} --> {A}, which is only true if {A} has been specified, which essentially means paired with n e N.
In their book, Philosophies of Mathematics (Blackwell, 2002), Alexander George and Velleman offer a deliberately weak proof of the theorem that says that every infinite set has a denumerable subset: 'Proof: Suppose A is an infinite set. Then A is certainly not the empty set, so we can choose an element a0 e A. Since A is infinite, A =/= {a0}, so we can choose some a1 e A such that a1 =/= a0. Similarly, A =/= {a0,a1}, so we can choose a2 e A such that a2 =/= a0 and a2 =/= a1. Continuing in this way, we can recursively choose an e A such that an ~e {a0,a1,...,an-1}. Now let R = {<0,a0>, <1,a1>, <2,a2>...} .
Then R is a one-to-one correspondence between N and the set {a0,a1,a2...}, which is a subset of A. Therefore, A has a denumerable subset.' The writers add, 'Although this proof seems convincing, it cannot be formalized using the set theory axioms that we have listed so far. The axioms we have discussed guarantee the existence of sets that are explicitly specified in various ways -- for example, as the set of all subsets of some set (Axiom of Power Sets), or as the set of all elements of some set that have a particular property (Axiom of Comprehension).
But the proof of [the theorem above] does not specify the one-to-one correspondence R completely, because it does not specify how the choices of the elements a0,a1,a2... are to be made. To justify the steps in the proof, we need an axiom guaranteeing the existence of sets that result from such arbitrary choices:'
The writers give their version of the axiom of choice: 'Axiom of choice. Suppose F is a set of sets such that Ø ~e F [that is, the empty set is not a member of F]. Then there is a function C with domain F such that, for every X e F, C(X) e X.' They add, 'The function C is called a choice function because it can be thought of as choosing one element C(X) from each X e F.' The theorem cited seems to require an abstracted construction algorithm.
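For contrast, here is a short Python sketch of an explicit choice function in the easy case, where the members of the family can actually be compared; AC earns its keep precisely when no such rule, like 'take the minimum,' is available. The example family is made up for illustration.

```python
# Sketch: an explicit choice function for a family of nonempty sets whose
# elements can actually be compared. AC is indispensable only when no such
# rule is available -- which is the situation with the set Y above.

def choice(family):
    """Return a dict C with C[X] in X for every X in the family."""
    return {X: min(X) for X in family}   # 'take the minimum' is the explicit rule

if __name__ == "__main__":
    F = [frozenset({3, 1, 4}), frozenset({1, 5}), frozenset({9, 2, 6})]
    C = choice(F)
    for X in F:
        assert C[X] in X                 # the defining property of a choice function
        print(sorted(X), "->", C[X])
```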
However, how does one select a0,a1,a2... if the elements of Y are individually nondefinable? AC now must be used to justify counting elements that can't be identified. So now AC is used to assert a denumerable subset by justifying a construction algorithm that can, in principle, never be performed.

Suppose we define a real as an equivalence class of Cauchy sequences. If [{an}] e X, then [{an}] is computable and orderable. By computable, we mean that there is some rule for determining, in a finite number of steps, the exact rational value of any term an, and that this rule must always yield the same value for an. A Brouwerian choice sequence fails to assure that an has the same value on every computation, even though {an} is Cauchy. Such numbers are defined here as 'non-computable,' though perhaps 'non-replicable' is a better characterization. A Brouwerian Cauchy sequence {an}, though defined, is not orderable since, in effect, only a probability can be assigned to its ordering between 1/p and 1/q. Now we are required by AC to say either that {an} is equivalent to {bn}, and hence that [{an}] = [{bn}], or that the two sequences are not equivalent and the two numbers are not equal. Yet a Brouwerian choice sequence defines a subset W of Y, whereby the elements of W cannot be distinguished in a finite number of steps.
Yet AC says that the trichotomy law applies to w1 and w2. We should note that W may contain members that coincide with some x e X. For example, we cannot rule out that a random-number generator might produce all the digits in pi. In a 1993 Philosophical Review article, 'Constructivism liberalized,' Velleman defends the notion that only denumerable sets qualify as actual infinities. In that case, AC would, I suppose, not apply to a nondenumerable set X since the choice function could only apply to a denumerable subset of X. One can't apply the choice function to something that doesn't exist.
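A rough Python sketch of the contrast drawn above: a rule-governed approximation of sqrt(2) beside a randomized, Brouwerian-style sequence that meets a comparable error bound but need not reproduce its terms on a second run. The tolerances and function names are illustrative assumptions of this sketch.

```python
# Sketch: a replicable (computable) Cauchy approximation of sqrt(2) versus a
# randomized, Brouwerian-style sequence that is still Cauchy but need not
# return the same terms when recomputed -- 'non-replicable' in the text's sense.

import random
from fractions import Fraction

def sqrt2_computable(n: int) -> Fraction:
    """n-th term of a rule-governed sequence: same value on every run."""
    k = 10 ** n
    m = int((2 * k * k) ** 0.5)          # near floor(sqrt(2) * k)
    while (m + 1) ** 2 <= 2 * k * k:     # correct any float rounding upward
        m += 1
    while m ** 2 > 2 * k * k:            # ... or downward
        m -= 1
    return Fraction(m, k)                # within 1/10^n of sqrt(2)

def sqrt2_choice_sequence(n: int) -> Fraction:
    """n-th term chosen freely within a slightly looser tolerance."""
    base = sqrt2_computable(n)
    jitter = Fraction(random.randint(-1, 1), 10 ** (n + 1))
    return base + jitter                 # sequence is still Cauchy, but not replicable

if __name__ == "__main__":
    for n in range(1, 5):
        print(n, sqrt2_computable(n), float(sqrt2_choice_sequence(n)))
```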
Essentially, Velleman 1993 is convinced that Cantor's reductio ad absurdum proof of the nondenumerability of the reals should be interpreted: 'If a set of all reals exists, that set cannot be countable.' By this, we avoid the trap of assuming, without definition, that a set of all reals exists. He writes that 'to admit the existence of completely unspecifiable reals would violate our principle that if we want to treat real numbers as individuals, it is up to us to individuate them.'
'As long as we maintain this principle, we cannot accept the classical mathematician's claim that there are uncountably many completely unspecifiable real numbers. Rather, the natural conclusion to draw from Cantor's proof seems to be that any scheme for specifying reals can be extended to a more inclusive one, and therefore the reals form an indefinitely extensible totality.' He favors use of intuitionist logic for nondenumerable entities while retaining classical logic for denumerable sets. 'The arguments of the constructionists have shaken my faith in the classical treatment of real numbers, but not natural numbers,' Velleman 1993 writes in sketching a philosophical program he calls 'liberal constructivism.'
Unlike strict constructivists, he accepts 'actual infinities,' but unlike classical mathematicians, he eschews uncountable totalities. For example, he doubts that 'the power set operation, when applied to an infinite set, results in a well-defined totality.' His point can be seen by considering the Cantorian set of reals. We again form the denumerable set X of all computable, and enumerable, reals and then write the complement set R-X. Now if we apply AC to R-X in order to form a subset Y, does it not seem that Y ought to be perforce denumerable, especially if we are assuming that Y may be constructed?
That is, the function C(R-X) seems to require some type of instruction to obtain a relation uRv. If an instruction is required in order to pair u and v, then Y would be denumerable, the set of instructions being denumerable. But does not AC imply that no instruction is required? Of course, we can then write (R-X)-Y to obtain a nondenumerable subset of R. We can think of two versions of AC1: the countable version and the noncountable. In the countable version, AC says that it is possible to select one element from every set in a countable collection of sets. In the noncountable version, AC says that the choice function may be applied to a nondenumerable collection of sets. In strong AC, we must think of the elements being chosen en masse, rather than in a step-by-step process.
The wildness implicit in AC is further shown by the use of a non-formulaic function to pair noncomputables in Y with noncomputables in a subset of Y, as in f:Y->Y. That is, suppose we take all the noncomputables in the interval (0,1/2) and pair each with one noncomputable in (1/2,1), without specifying a means of pairing, via formula or algorithm. Since we can do the same for every other noncomputable in (1/2,1), we know there exists a nondenumerable set of functions pairing noncomputables. This is strange. We have shown that there is a nondenumerable set of nonformulaic functions to pair non-individuated members of domY with a non-individuated member of ranY. If x e domY, we say that x varies, even though it is impossible to say how it varies.
If y0 e ranY, we can't do more than approximate it on the real line. We manipulate quantities that we can't grasp. They exist courtesy of AC alone. In an August 2002 email, Velleman said that though still attracted to this modified intuitionism, he is not committed to a particular philosophy.
Jim Conant, a Cornell topologist, commented that the reason my exposition is not considered to imply a paradox is that 'the axioms of set theory merely assert existence of sets and never assert that sets can be constructed explicitly.' The choice axiom 'in particular is notorious for producing wild sets that can never be explicitly nailed down.' He adds, 'A platonist would say that it is a problem of perception: these wild sets are out there but we can never perceive them fully since we are hampered by a denumerable language. Others would question the meaning of such a statement.'
Also, the ZF axioms are consistent only if the ZF axioms + AC are consistent, he notes, adding that 'nobody knows whether the ZF axioms are consistent.' (In fact, his former adviser, topologist Mike Freedman, believes there are ZF inconsistencies 'so complicated' that they have yet to be found.) 'Therefore I take the point of view that the axiom of choice is simply a useful tool for proving down to earth things.' Yet it is hard to conceive of Y or P(Y)\Y as 'down to earth.' For example, because Y contains real numbers, they are axiomatically 'on' the real number line. Yet no element of Y can be located on that line. y is a number without a home.
Note added in April 2006: Since arithmetic can be encoded in ZFC, we know from Kurt Godel that ZFC is either inconsistent or incomplete. That is, either there is at least one true statement expressible in ZFC that cannot be proved from its axioms, or ZFC contains a contradiction. We also know that ZFC is incomplete in the sense that the continuum hypothesis can be expressed in ZFC, yet its truth status is independent of the ZFC axioms, as Godel and Paul Cohen have shown.
1. Jim Conant brought this possibility to my attention.
Time thought experiments
[Written some years ago.]
Godel's theorem and a time travel paradox

In How to Build a Time Machine (Viking 2001), the physicist Paul Davies gives the 'most baffling of all time paradoxes.' Writes Davies: 'A professor builds a time machine in 2005 and decides to go forward ... to 2010. When he arrives, he seeks out the university library and browses through the current journals. In the mathematics section he notices a splendid new theorem and jots down the details. Then he returns to 2005, summons a clever student, and outlines the theorem. The student goes away, tidies up the argument, writes a paper, and publishes it in a mathematics journal. It was, of course, in this very journal that the professor read the paper in 2010.'
Davies finds that, from a physics standpoint, such a 'self-consistent causal loop' is possible, but, 'where exactly did the theorem come from?... it's as if the information about the theorem just came out of thin air.' Davies says many worlds proponent David Deutsch, author of The Fabric of Reality and a time travel 'expert,' finds this paradox exceptionally disturbing, since information appears from nowhere, in apparent violation of the principle of entropy.
This paradox seems well suited to Godel's main incompleteness theorem, which says that a sufficiently rich formal system, if consistent, must be incomplete. Suppose we assume that there is a formal system T -- a theory of physics -- in which a sentence S can be constructed describing the mentioned time travel paradox. If S strikes us as paradoxical, then we may regard S as the Godel sentence of T. Assuming that T is a consistent theory, we would then require that some extension of T be constructed. An extension might, for example, say that the theorem's origin is relative to the observer and include a censorship effect, as occurs in other light-related phenomena.
That is, the professor might be required to forget where he got the ideas to feed his student. But even if S is made consistent, there must then be some other sentence S', which is not derivable from the extended theory T'. Of course, if T incorporates the many worlds view, S would likely be consistent and derivable from T.
However, assuming T is a sufficiently rigorous mathematical formalism, there must still be some other sentence V that may be viewed as paradoxical (inconsistent) if T is viewed as airtight.

How old is a black hole?

Certainly less than the age of the cosmos, you say. The black hole relativistic time problem illustrates that the age of the cosmos is determined by the yardstick used. Suppose we posit a pulsar pulsing at the rate T, at a distance D from the event horizon of a black hole.
Our clock is timed to strike at T/2, so that pulse A has occurred at T=0. We now move the pulsar closer to the event horizon, again with our clock striking at what we'll call T'/2. Now, because of the gravitational effect on observed time, the time between pulses is longer. That is, T' > T, and hence T'=0 is farther in the past than T=0. Of course, as we push the pulsar closer to the event horizon, the relative time T' becomes asymptotic to infinity (eternity).
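As a quick numerical illustration of that stretching, here is a Python sketch using the standard Schwarzschild dilation factor for a static emitter; the radii below are arbitrary illustrative values, not figures from the text.

```python
# Sketch: the Schwarzschild dilation factor applied to the pulsar thought
# experiment. A pulse interval T emitted at radius r is observed far away
# as T' = T / sqrt(1 - rs/r), which grows without bound as r -> rs.

import math

def observed_interval(T: float, r: float, rs: float) -> float:
    """Interval seen by a distant observer for proper interval T emitted at radius r."""
    return T / math.sqrt(1.0 - rs / r)

if __name__ == "__main__":
    T = 1.0          # proper pulse interval (arbitrary units)
    rs = 1.0         # Schwarzschild radius (arbitrary units)
    for r in (10.0, 2.0, 1.1, 1.01, 1.001):
        print(f"r = {r:6.3f} rs  ->  T' = {observed_interval(T, r, rs):10.3f}")
    # T' -> infinity as r -> rs: the 'asymptotic to eternity' claim above.
```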
So, supposing the universe was born 15 billion years ago in the big bang, we can push our pulsar's pulse A back in time beyond 15 billion years ago by pushing the pulsar closer to the event horizon. No matter how old we make the universe, we may always obtain a pulse A that is older than the cosmos. Yes, you say, but a real pulsar would be ripped to shreds and such a happening is not observable.
Nevertheless, the general theory of relativity requires that we grant that time calculations can yield such contradictions.

Anthropic issues

A sense of awe often accompanies the observation: 'The conditions for human (or any) life are vastly improbable in the cosmic scheme of things.' This leads some to assert that the many worlds scenario answers that striking improbability, since in most other universes, life never arose and never will. I point out that the capacity of the human mind to examine the cosmos is perhaps 2.5 x 10^4 years old, against a cosmic time scale of 1.5 x 10^9. In other words, we have a ratio of 2.5(10^4)/1.5(10^9) = 1.6/10^5.
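A one-line check of that ratio, using the article's own round figures:

```python
# Quick arithmetic check of the ratio quoted above (the article's round numbers).
human_span = 2.5e4       # years the human mind has been examining the cosmos
cosmic_span = 1.5e9      # cosmic time scale used in the text
print(human_span / cosmic_span)   # ~1.67e-05, i.e. on the order of 1.6/10^5
```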
In other words, humanity is an almost invisible drop in the vast sea of cosmic events. Yet here we are! Isn't that amazing?! It seems as though the cosmos conspired to make our little culture just for us, so we could contemplate its vast mysteries. However, there is the problem of the constants of nature. Even slight differences in these constants would, it seems, lead to universes where complexity just doesn't happen. Suppose that these constants depend on initial cosmic conditions which have a built-in random variability.
In that case, the existence of a universe with just the right constants for life (in particular, humanity) to evolve is nothing short of miraculously improbable. Some hope a grand unified theory will resolve the issue. Others suggest that there is a host of bubble universes, most of which are not conducive to complexity, and hence the issue of improbability is removed (after all, we wouldn't be in one of the barren cosmoses). For more on this issue, see the physicist-writers John Barrow, Frank Tipler and Paul Davies.
At any rate, it doesn't seem likely that this drop will last long, in terms of cosmic scales, and the same holds for other such tiny drops elsewhere in the cosmos. Even granting faster-than-light 'tachyon radio,' the probability is very low that an alien civilization exists within communications range of our ephemeral race. That is, the chance of two such drops existing 'simultaneously' is rather low, despite the fond hopes of the SETI crowd. On the other hand, Tipler favors the idea that once intelligent life has evolved, it will find the means to continue on forever. Anyway, anthropomorphism does seem to enter into the picture when we consider quantum phenomena: a person's physical reality is influenced by his or her choices.
Tuesday, March 13, 2012
First published elsewhere Monday, January 7, 2008
Thoughts on the Shroud of Turin
I haven't read The Da Vinci Code, but I have scanned a book by the painter David Hockney, whose internet-driven survey of Renaissance and post-Renaissance art makes a strong case for a trade secret: the use of a camera obscura technique for achieving precision realism in paintings.

Hockney's book, Secret Knowledge: Rediscovering the Lost Techniques of the Old Masters (2001), uses numerous paintings to show that European art guilds possessed this technical ability, which was a closely guarded and prized secret. Eventually the technique, along with the related magic lantern projector, evolved into photography. It's possible the technique also included the use of lenses and mirrors, a topic familiar to Leonardo da Vinci. Apparently the first European mention of a camera obscura is in Leonardo's Codex Atlanticus.
I didn't know about this when first mulling over the Shroud of Turin controversy, and so I was quite perplexed as to how such an image could have been formed in the 14th century, when the shroud's existence was first reported. I was mistrustful of the carbon dating, realizing that the Kremlin had a strong motive for deploying its agents to discredit the purported relic. (See my old page, Science, superstition and the Shroud of Turin: http://www.angelfire.com/az3/nuzone/shroud.html)
But Hockney's book helps to bolster a theory by fellow Brits Lynn Picknett and Clive Prince that the shroud was faked by none other than Leonardo -- scientist, "magician" and intriguer. Their book The Turin Shroud has been reported to be a major source of inspiration for The Da Vinci Code. The two are not professional scientists but, in the time-honored tradition of English amateurs, did an interesting sleuthing job. As they point out, the frontal head image is badly out of proportion with the image of the scourged and crucified body. They suggest the face is quite reminiscent of a self-portrait by Leonardo.

Yet two Catholic scientists at the Jet Propulsion Lab, using a computer method in the 1980s to analyze the image, had supposedly demonstrated that it was "three-dimensional." A much more recent analysis, commissioned by Picknett and Prince, found that the "three-dimensionalism" did not hold up. From what I can tell, the Jet Propulsion pair showed that the image was not made by conventional brushwork, while the further analysis indicates some type of projection.
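For readers unfamiliar with what "three-dimensional" means in this context: the 1980s computer analysis treated image brightness as if it encoded cloth-to-body distance, so the intensities can be replotted as relief. The following is a purely illustrative sketch of that general brightness-to-relief idea -- my own construction, not the JPL team's actual method or data:

# Illustrative sketch (mine): treat a grayscale image as a height map, which is
# the sense in which the shroud image was called "three-dimensional."
import numpy as np

def brightness_to_relief(gray, max_height=1.0):
    """Rescale pixel intensities to heights in [0, max_height]."""
    g = np.asarray(gray, dtype=float)
    g = (g - g.min()) / max(g.max() - g.min(), 1e-9)
    return g * max_height

# Toy stand-in for a face image: a smooth radial blob.
y, x = np.mgrid[-50:50, -50:50]
blob = 255.0 * np.exp(-(x**2 + y**2) / (2.0 * 20.0**2))
relief = brightness_to_relief(blob)
print(relief.shape, round(float(relief.max()), 3))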
Picknett and Prince suggest that Leonardo used projected images of a face and of a body -- perhaps a cadaver that had been inflicted with various crucifixion wounds -- to create a death-mask type of impression. But the image collation was imperfect, leaving the head size wrong and the body that of, by Mideast standards, a giant. This is interesting, in that Hockney discovered that camera obscura art often failed at proportion and depth of field between spliced images, just as when a collage piece is pasted onto a background.

Still, the shroud's official history begins in 1358, about a hundred years prior to the presumed Da Vinci hoax. It seems plausible that some shroud-like relic had passed to a powerful family and that its condition was poor, whether because of its age or because it wasn't that convincing upon close inspection. The family then secretly enlisted Leonardo, the theory goes, in order to obtain a really top-notch relic. Remember, relics were big business in those days, being used to generate revenue and political leverage.

Yet if Leonardo was the forger, we must account for the fact that the highly distinctive "Vignon marks" on the shroud face have been found in Byzantine art dating to the 7th century.
I can't help but wonder whether Leonardo only had the Mandylion (the face) to work with and added the body as a bonus. (I've tried scanning the internet for reports of exact descriptions of the shroud prior to da Vinci's time but haven't succeeded.) The Mandylion refers to an image "not made by hands." This "image of Edessa" must have been very impressive, considering the esteem in which it was held by Byzantium. Byzantium also was rife with relics and with secret arts -- which included what we'd call technology along with mumbo-jumbo. The Byzantine tradition of iconography may have stemmed from display of the Mandylion.

Ian Wilson, a credentialed historian who seems to favor shroud authenticity, made a good case for the Mandylion having been passed to the Knights Templar -- perhaps when the crusaders sacked Constantinople in 1204. The shroud then showed up in the hands of a descendant of one of the Templars after the order was ruthlessly suppressed. Wilson's idea was that the shroud and the Mandylion were the same object, but that in earlier centuries it had been kept folded in four, like a map, with the head panel on top, and had always been displayed that way.
The other possibility is that a convincing relic of only the head was held by the Templars. A discovery at Templecombe, England, in 1951 showed that regional Templar centers kept paintings of a bearded Jesus face, which may well have been copies of a relic that Templar enemies tried to find but couldn't. The Templars had been accused of worshiping a bearded idol.

Well, what made the Mandylion so convincing? One possibility: when the Templars obtained the relic, they also obtained a secret book of magical arts that told how to form such an image. This implies either that Leonardo discovered the technique when examining this manuscript, which may have contained diagrams, or that the image was not counterfeited by Leonardo at all but was a much, much older counterfeit.

Obviously all this is pure speculation. But one cannot deny that the shroud images have a photographic quality yet are out of kilter with each other, and that the secret of camera obscura projection in Western art seems to stem from Leonardo's studios.
The other point is that the 1988 carbon analysis dated the shroud to the century before Leonardo. If one discounts possible political control of the result, then one is left to wonder how such a relic could have been so skillfully wrought in that era. Leonardo was one of those once-in-a-thousand-year geniuses who had the requisite combination of skills, talents, knowledge and impiety to pull off such a stunt. Of course, the radiocarbon dating might easily have been off by a hundred years (but, if fairly done, it is not likely to have been off by 1,300 years).
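To give a sense of scale for that parenthetical, here is a sketch of my own using only the standard radiocarbon decay law and the 5730-year half-life (the two candidate ages are illustrative):

# Minimal sketch (mine): remaining carbon-14 fraction for cloth of two candidate
# ages at the time of the 1988 test, using N/N0 = exp(-t/tau), tau = half-life / ln 2.
import math

TAU = 5730.0 / math.log(2.0)  # mean C-14 lifetime in years

def c14_fraction(age_years):
    return math.exp(-age_years / TAU)

for label, age in [("14th-century cloth, about 630 years old in 1988", 630),
                   ("1st-century cloth, about 1950 years old in 1988", 1950)]:
    print(f"{label}: remaining C-14 fraction = {c14_fraction(age):.3f}")

A 1,300-year error corresponds to a difference of roughly 14 percent in the measured C-14 fraction, far larger than the uncertainties such tests typically quote -- hence the distinction between a hundred-year slip and a 1,300-year one.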
All in all, I can't be sure exactly what happened, but I am strongly inclined to agree that the shroud was counterfeited by Leonardo on the basis of a previous relic. That previous relic must have been at least "pretty good" -- or why all the fuss in earlier centuries? Still, it is hard not to suspect Leonardo's masterful hand in the Shroud of Turin.

Of course, the thing about the shroud is that there is always more to it. More mystery. I know perfectly well that, no matter how good the scientific and historical analysis, trying to nail down a proof one way or the other is a will-o'-the-wisp.
Monday, March 5, 2012
Nix some Angelfire pages, please
Alas, this writer has lost access to his Angelfire accounts -- probably from cock-up, not conspiracy -- and so asks readers to use this site for up-to-date versions of those pages. The accounts include az3/nfold, az3/nuzone, az3/newzone and ult/znewz1. There may be others.

The writer specifically requests that his page on the four-color theorem be deep-sixed unread. The offending URL is www.angelfire.com/az3/nfold/444.html.