The Limits of Mathematics
I think most people would agree that though everything else seems uncertain, at least mathematics gives us universal truths.
Unfortunately, that's not the case. All truths in mathematics are local. No form of mathematics applies to everything. And no matter how you define a mathematical system (at least, any system rich enough to express arithmetic), there will always be statements in it which are true but which can't be proved within that system.
Any given mathematical system isn't "true" or "false"; those words don't apply. Mathematicians instead use the terms "consistent" and "inconsistent"; a system is consistent if it contains no contradictions. Inconsistent systems are uninteresting, because in an inconsistent system every statement whatever can be proved.
But even a consistent system may not yield practical results. A mathematical system is an artificial construct which may or may not be applicable to a real system.
Geometry is an example, or rather geometries, for there are many. Euclidean geometry, the granddaddy of them all, starts, as does any mathematical system, with a small set of axioms. An axiom is a statement which is assumed to be true. Most of Euclid's axioms are clean (e.g. a circle has one center which is equidistant from all points on its circumference), but one struck later observers as being less clean than the others: given a line, and a point not on that line, there is exactly one line passing through that point which is parallel to the original line.
Over the centuries much ingenuity was applied to attempts to remove that axiom by proving it from the others, and they all failed. Finally, someone decided to try reductio ad absurdum. It goes more or less like this: it is impossible for a true premise to yield a false conclusion. So if you start from an assumption and from it prove a false conclusion, then the assumption must be false.
In this particular case, the idea was that if you altered Euclid's fifth axiom, and the altered form led to a system which was inconsistent (capable of proving some statement both true and false), then the altered axiom was false. If every way of altering that axiom led to an inconsistent system, then the only remaining alternative would be the original axiom, and it wouldn't be needed as an axiom at all; it would be a theorem!
But much to their surprise, when they changed it the results were consistent -- and interesting, too. For instance, if you change it to "...then there are no lines passing through the point..." you get spherical geometry, which is geometry performed on the surface of a sphere instead of on a plane. Spherical geometry has a large number of other results which are different. For instance, in plane geometry the sum of the angles of a triangle is always a constant, 180 degrees, but that's not the case in spherical geometry, where the sum can range from 180 to 540 degrees. So which is correct? Well, they both are. If you try to use Euclidean geometry to navigate on the Earth's surface, you'll get lost; spherical geometry gives you the right answer.
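The angle-sum difference is easy to check numerically. Here's a small sketch (the vertex choice and helper names are my own illustration): take the spherical triangle whose corners are the three points where the coordinate axes pierce the unit sphere. Each of its corners turns out to be a right angle, so the angles sum to 270 degrees rather than 180.

```python
import math

def corner_angle(a, b, c):
    """Angle (degrees) at vertex a of the spherical triangle abc,
    where a, b, c are unit vectors from the sphere's center."""
    def tangent(p, q):
        # Direction of the great-circle arc from p toward q, at p:
        # project q onto the plane tangent to the sphere at p.
        dot = sum(x * y for x, y in zip(p, q))
        t = [y - dot * x for x, y in zip(p, q)]
        norm = math.sqrt(sum(x * x for x in t))
        return [x / norm for x in t]
    u, v = tangent(a, b), tangent(a, c)
    return math.degrees(math.acos(sum(x * y for x, y in zip(u, v))))

# The "octant" triangle: vertices where the x, y, z axes meet the sphere.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
total = corner_angle(A, B, C) + corner_angle(B, C, A) + corner_angle(C, A, B)
print(round(total))  # 270
```

Shrink the triangle toward a point and the sum approaches 180, which is why spherical geometry looks Euclidean over small distances.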
Or if you change the assumption to "...then there are an infinite number of lines..." then you get hyperbolic geometry. This is geometry done on a curved surface shaped a bit like a saddle. (To understand this, it's necessary to understand that the word "parallel" means "the lines will never intersect". It doesn't mean "the lines are equidistant at all points", which is true in Euclidean geometry but not in hyperbolic geometry. Equidistance is a side effect of parallelism in Euclidean geometry, not its definition.)
In actuality, what they discovered is that there are an infinite number of geometries which are consistent -- and mutually contradictory. Euclid was right; he really did need that axiom.
One of the gods of computer science was Alan Turing. In the 1930s he laid the groundwork for modern computer science before anything like a modern computer existed, and not only built a rich theoretical foundation for everything we do, but also explored the limits of the concept. Before the first electronic digital computer was ever built, Turing proved that there were problems no such computer could ever solve. He did this by designing a stripped-down, idealized digital computer (now known as a "Turing machine") which was both sufficiently powerful to be useful and simple enough to be susceptible to mathematical analysis. It's not a practical design; it's much too painful to program for normal operations and it's terribly inefficient. But the point is that anything any modern digital computer can do, a Turing machine can do. Contrariwise, if there's something a Turing machine cannot do, then no other digital computer can do it either. This entire area of theory has been very fruitful -- but it's always necessary to recognize its limits. Turing wouldn't have made such a mistake, but many people now try to generalize too far: having proved that a problem is equivalent to the "Halting Problem", they declare that it can't be solved at all.
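The equivalence runs in both directions, and one direction is easy to see in a few lines of ordinary code: a modern computer can simulate a Turing machine directly. This is a sketch with an invented rule table (the `run_tm` helper and the increment rules are mine, not Turing's notation); the machine shown adds one to a binary number by carrying leftward.

```python
def run_tm(tape, head, state, rules, blank="_"):
    """Run a Turing machine: a tape of symbols, a head position, a state,
    and a rule table mapping (state, symbol) -> (write, move, next_state)."""
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": +1}[move]
    out = [cells.get(i, blank) for i in range(min(cells), max(cells) + 1)]
    return "".join(out).strip(blank)

# Binary increment: start at the rightmost digit; turn trailing 1s into 0s
# while carrying, then write the carried 1 and halt.
INCREMENT = {
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_tm("1011", head=3, state="carry", rules=INCREMENT))  # 1100
print(run_tm("111", head=2, state="carry", rules=INCREMENT))   # 1000
```

The hard direction -- that such a machine can in turn do anything any digital computer can do -- is Turing's result, and it's what makes proofs about Turing machines carry over to real hardware.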
No, not correct. It means it can't be solved by a Turing machine, or by any computer that a Turing machine is capable of simulating. But there are kinds of computing devices which cannot be simulated by a Turing machine, and such proofs don't apply to them.
There are many ways to define a computing system which can't be simulated by a Turing machine. A computer with two clocked sections whose clock frequencies are in a transcendental ratio is one. Any analog computer can't be simulated. Any digital computer which is not synchronous (meaning that its sections don't run to a metronome, but rather make decisions as soon as sufficient data is available) is outside the rules. Human brains are analog and asynchronous, and aren't in the game.
So a proof of uncomputability doesn't apply to any of them. That doesn't mean they can solve the problem after all; it just means we can't prove they cannot -- and sometimes they can indeed do it.
I wrote elsewhere of a demonstration of this: a Turing machine cannot precisely calculate the value of the square root of two in a finite number of steps, but an analog computer (such as the classic compass and straightedge) can. And there are many other problems insoluble by a Turing machine which are readily accessible to analog computers.
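One way to see the digital half of that claim (a sketch; the use of Newton's method in exact rational arithmetic is my illustration, not the original demonstration): each step of the iteration produces a rational number closer to the square root of two, but since no rational p/q satisfies p² = 2q², no finite number of steps ever lands on it exactly.

```python
from fractions import Fraction

x = Fraction(2)                    # starting guess
for step in range(1, 6):
    x = (x + 2 / x) / 2            # Newton's method for x^2 = 2, kept exact
    assert x * x != 2              # still not sqrt(2), and never will be
    print(step, x, float(x * x - 2))
```

The error shrinks dizzyingly fast, but "converges to" is not "reaches": the digital process only ever produces approximations, while the straightedge-and-compass construction produces the length itself in a fixed number of moves.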
Which doesn't mean Turing was "wrong"; that's not an appropriate word. It merely means his findings aren't universal.
But in the 20th century it was proved that no mathematical system is universal. No matter how you define a mathematical system, something will always be left out. "Everything" is an internally inconsistent word; it's impossible for it to actually mean what it is used to mean, because any rigorous definition of it is always incomplete. (Credit Bertrand Russell for the paradox of "everything", and Kurt Gödel for the proof of incompleteness.)
So while mathematics is enormously useful, it's always necessary when trying to apply it to a particular circumstance to ask two questions: 1. Is the particular mathematics I'm trying to use really applicable to this real-world case? 2. Is what I'm trying to demonstrate something which is true-but-unprovable within that system?
And sometimes this isn't obvious. It had always been an article of faith that space-time was fundamentally Euclidean. Yes, we know that navigation on the Earth's surface has to use spherical geometry, but that's a special case; if you're travelling through space, then you use Euclidean geometry.
That game got spoiled by Einstein's General Relativity. It turns out that space is non-Euclidean -- and very complicated. Every mass distorts the fabric of space, and "straight lines" aren't. GR says that light always follows a straight line, yet two light beams can be equidistant over some stretches and intersect in others; evaluated in Euclidean terms, "straight" lines curve. The problem is that Euclidean geometry gives the wrong answer. It's not that Euclidean geometry is "false"; a better term would be "inapplicable" or perhaps "irrelevant". There's nothing wrong with Euclidean geometry; it just doesn't give the right answer about the universe.
And that's the point. Mathematics isn't "true" and there's no guarantee that it's giving you the right answer.
In many, many, many cases it does. In many cases the applicability of a given form of mathematics, and also its limits, are extremely well understood. Statistics gives insurance companies the right answer. Probability theory (and statistics) permits casinos to predict very accurately how much money they'll make. Calculus permits civil engineers to correctly calculate the stresses on bridges. Turing's work (and later refinements in a field now known as "theory of computing") permits those of us who develop software for modern computers to figure out how well our algorithms will scale.
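As a toy illustration of that last point (the helper functions here are hypothetical, my own sketch): instrumenting two search algorithms with comparison counters shows the kind of scaling prediction the theory supports -- linear search does on the order of n comparisons in the worst case, binary search only about log2(n).

```python
def linear_search(xs, target):
    """Scan left to right; return (index, comparisons made)."""
    steps = 0
    for i, x in enumerate(xs):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

def binary_search(xs, target):
    """Halve a sorted range each step; return (index, comparisons made)."""
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid, steps
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

xs = list(range(1_000_000))
print(linear_search(xs, 999_999)[1])   # 1000000 comparisons
print(binary_search(xs, 999_999)[1])   # roughly 20 comparisons
```

That gap between n and log2(n) is exactly the sort of result the theory lets us predict before writing a line of production code.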
It's just that it's necessary at all times to recognize the limits inherent in any given form of math, and to notice when you've stepped out of bounds.