Monday, November 11, 2013

What is an algorithm?


This page went online in 2001 and was revised in June 2003.

The word 'algorithm' is all the rage these days, and it is easy to give particular examples of algorithms, such as Euclid's method for finding the greatest common divisor of two integers.
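Euclid's method, for instance, can be sketched in a few lines of Python. This is my own illustrative sketch, not anything drawn from the sources quoted below:

```python
def gcd(a, b):
    """Euclid's algorithm: replace the pair (a, b) by (b, a mod b)
    until the remainder is zero; the surviving value is the GCD."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Note that every run on the same inputs follows the same path and gives the same answer, a point that resurfaces below in connection with recursiveness.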

And what's a computer program without its algorithms? But what is a satisfactory general definition for those of us uninterested in computer programming? I thought it wise to explore the concept of 'algorithm,' which I use in an intuitive sense when justifying alternatives to infinite sets on my page 'When is truth vacuous? Is infinity a bunch of nothing?' [I haven't been able to recover that page.]

"The notion of algorithm," writes A.G. Hamilton (Logic for Mathematicians, Cambridge, revised 2000), "is an intuitive one, not a mathematically precise one..." He characterizes it as:

* an explicit effective set of instructions for a computing procedure (not necessarily numerical) which may be used to find the answers to any of a given class of questions.

Other definitions:

* a procedure for solving a usually complicated problem by carrying out a precisely determined sequence of simpler, unambiguous steps. [Academic American Encyclopedia, 1996]

* A plan of how the arithmetic is to be organized to solve a specific class of problems is called an algorithm. The detailed organization of procedures that a machine may take to implement a given algorithm is called a (machine) program. The list of detailed instructions or commands a specific machine employs to carry out a program is called a code. [Encyclopaedia Britannica 1998; 8:828:3b]

* a procedure, or sequence of instructions, used to evaluate a function. We write A(f(x)) to indicate that algorithm A implements, realizes or evaluates f at all arguments x in the domain of f. [Marvin C. Paull, Algorithm Design: a recursion transformation framework, John Wiley & Sons, 1988]

* any step-by-step procedure to calculate just about anything. [Cornell topologist Jim Conant]

* a finite procedure, written in a fixed symbolic vocabulary, governed by precise instructions, moving in discrete steps, 1, 2, 3..., whose execution requires no insight, cleverness, intuition, intelligence, or perspicacity, and that sooner or later comes to an end. [David Berlinski, The Advent of the Algorithm: the idea that rules the world, Harcourt, 2000]

I suppose Berlinski wasn't thinking of procedures that 'reach' a limit at an infinity of steps. So here is my definition, followed by discussion:

* an n-step procedure, where n may be finite or unbounded, for converting an element of a set of input values into an element of a set of output values.

This is rather too vague, so I will get more specific.

But first, back to Berlinski: Berlinski, who studied logic under Alonzo Church, says that the pre-1930s fuzziness of definition was removed by four great 20th-century logicians, each of whom gave a different definition, all of which turned out to be equivalent. Though Berlinski describes the achievements of Church, Kurt Gödel, Alan M. Turing and Emil Post in the area of mechanical computability, he does not tell us precisely what their definitions are.

I think he means that a successful program for a Turing machine or a Post machine is an algorithm, and that Church's lambda conversion system can be used to formulate recursive definitions. How one sets up the recursive definition is an algorithm. Gödel's primitive recursive functions are also defined via 'algorithm.' I'm not terribly satisfied with Berlinski here, but let's grant that he is trying to get across some rarefied stuff to a bunch of laypeople (me included).

In order to fully understand the concept of 'algorithm,' one needs a course in mathematical logic; the concept matters for getting at such notions as calculability and decidability. In logician Herbert B. Enderton's interpretation, Church's thesis says that recursiveness is the correct formalization of our informal idea of calculability.

Recursion implies that a procedure inevitably yields the same output value on every run: no randomness allowed. Recursive functions, according to Hamilton, are composed of addition, multiplication and projection functions. Suppose we think of an algorithm as the instructions for a Turing machine (a hypothetical device that reads, erases and writes symbols on a tape, one cell per move) to calculate a particular final output value (assuming the machine is known to halt).
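To make the Turing-machine picture concrete, here is a minimal simulator in Python. The rule-table format and the sample machine (a unary successor, which appends one more 1 to a block of 1s) are my own illustrative choices, not anything taken from Hamilton or Berlinski:

```python
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    """Minimal Turing machine: rules maps (state, symbol) to
    (new_symbol, move, new_state). '_' is the blank symbol.
    Runs until state 'halt' or until max_steps is exhausted."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A machine that scans right over a block of 1s and appends one more.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(rules, "111"))  # 1111
```

The rule table is the algorithm: the same table run on the same tape always halts in the same configuration.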

There can be only a finite set of symbols arranged in a 'legal' way, that is, as a well-formed formula following specific grammatical rules (another algorithm!). Each of these formulas is unique and hence can be encoded as a single number (perhaps a Gödel number, but not necessarily).
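One simple way to turn a formula into a single number is to read it as a numeral in a base larger than the alphabet. This is not the prime-power scheme Gödel used, just one workable encoding; the toy alphabet below is my own assumption:

```python
ALPHABET = "()PQR~&v->"  # a made-up finite symbol set

def encode(formula):
    """Read the formula as a number in base len(ALPHABET) + 1.
    Digit 0 is unused, so leading symbols are not lost."""
    base = len(ALPHABET) + 1
    n = 0
    for ch in formula:
        n = n * base + ALPHABET.index(ch) + 1
    return n

def decode(n):
    """Invert encode, recovering the formula from its number."""
    base = len(ALPHABET) + 1
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(ALPHABET[d - 1])
    return "".join(reversed(out))

n = encode("P->Q")
print(n, decode(n))
```

Because decode inverts encode, distinct formulas always receive distinct numbers, which is all the counting argument below needs.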

Now since each formula is a string of n symbols, we find that we can make as many legal formulas as we like, since n is unbounded. However, for any n, there is only a finite set of formulas. Hence the set of algorithmic formulas is denumerable (it can be put one-to-one with the set of integers, but not with the set of irrationals). In this conceptualization, there is really no difference between Turing machine n and set of instructions n.
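That counting argument can be acted out in Python: list the strings over a finite alphabet length by length. Each length contributes only finitely many strings, so every string eventually receives an integer position. The binary alphabet here is an assumed stand-in for the real symbol set:

```python
from itertools import count, product

def all_strings(alphabet="01"):
    """Enumerate every finite string over a finite alphabet,
    length by length; each string gets a definite integer position."""
    for n in count(0):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

gen = all_strings()
first = [next(gen) for _ in range(7)]
print(first)  # ['', '0', '1', '00', '01', '10', '11']
```

No such listing exists for the irrationals, which is exactly the difference the text points to.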

Another way to think of an algorithm is as a decision tree, which I won't bother to draw. But let's use logic notation to get across the thought. A simple one-step algorithm might be:

P --> Q v R, where P = {a,b}, Q = {q}, R = {r}

P is the set of input values, and Q and R are sets of output values. So an algorithm would specify truth values, such as:

(a --> q) & (a --> ~r)
(b --> r) & (b --> ~q)

But another, shall we say, sub-algorithm might be:

(a --> q) v (a --> r) [which includes a --> q & r]

We can continue the steps indefinitely, of course, where every step has its own sets of output values. By this it is clear that a particular path or solution to this decision tree is recursive in the sense of, speaking loosely, f(f(f(x))).
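The one-step tree, with its truth conditions, amounts to a finite mapping, and chaining steps gives the f(f(f(x))) picture. In Python (step1 matches the example above; step2 is a hypothetical extra step of my own):

```python
# Step 1 of the decision tree: each input in P = {'a', 'b'} goes to
# exactly one output, matching (a --> q) & (a --> ~r), (b --> r) & (b --> ~q).
step1 = {"a": "q", "b": "r"}

# A hypothetical second step with its own output set.
step2 = {"q": "s", "r": "t"}

def run_tree(steps, x):
    """Follow one path down the tree: loosely, f(f(...f(x)...))."""
    for step in steps:
        x = step[x]
    return x

print(run_tree([step1, step2], "a"))  # s
```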

However, our decision tree need not be expressed as a function, though perhaps it may be expressed as a composite relation. But my son Jim is iffy about restricting the word 'algorithm' to relations. What about mazes? he asks. At any rate, let's look at a generalized algorithm expressed less formally:

Step 0 > select input value
Step 1 > perform operation 1, obtain output value
Step 2 > use step 1 output value, perform operation 2, obtain output value
...and so forth.

(For convenience, we number the operations independently of the steps, so that at step x, operation n means the nth operation performed since step 1. Also, note that operation m may or may not equal operation n.)

Now here we introduce 'meta-algorithms.' Notation: suppose we call the example above algorithm A, which we might specify as A[o].

Conditions prior to step 1 are expressed A(0) or A[o](0), with A(x) an arbitrary step and A(n) the final step, if one exists. A(x) means the xth step, not the xth input or output value.

For two algorithms A and B, we may wish to determine how they interact mathematically, if they do. If B is a meta-algorithm on A[x], then B = A[x+1]. A meta-algorithm A[x+1] simply erases operations in A[x] and possibly substitutes operations. From the notation, we see that we can have as many meta-algorithms as desired, analogous to a composite function.

An example of a meta-algorithm: we have, holding c constant, algorithm A[c](n). Add operation m and erase operation k, giving A[c+1](n). We might be able to decide values for any element of the meta-algorithm set specified B[x]. For example, if x is even, use A[c]; if x is odd, use A[c+1]. On the other hand, we might run into 'undecidability,' as in: A[x] where x --> inf.; if x is composite, use A[c]; if x is prime, use A[c+1].
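The erase-and-substitute idea can be sketched by treating an algorithm as an explicit list of operations and the meta-algorithm as an edit of that list. The particular operations below are my own made-up examples, chosen only to show the mechanics:

```python
# An algorithm as an explicit list of operations, applied in order.
A = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

def run_ops(ops, x):
    """Execute the listed operations step by step on input x."""
    for op in ops:
        x = op(x)
    return x

def meta(ops, erase_index, substitute=None):
    """Meta-algorithm: erase the operation at erase_index and,
    if a substitute is given, put it in that slot (A[c] -> A[c+1])."""
    new_ops = ops[:erase_index] + ops[erase_index + 1:]
    if substitute is not None:
        new_ops.insert(erase_index, substitute)
    return new_ops

print(run_ops(A, 5))                        # ((5+1)*2)-3 = 9
B = meta(A, 2, substitute=lambda x: x * x)  # replace '-3' with squaring
print(run_ops(B, 5))                        # ((5+1)*2)**2 = 144
```

Applying meta again yields A[c+2], A[c+3] and so on, which is the sense in which meta-algorithms compose like functions.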
