Complicated "Simplicity"

Herbert Simon writes on page 1 of his book The Sciences of the Artificial that "the central task of a natural science is to make the wonderful commonplace: to show that complexity, correctly viewed, is only a mask for simplicity; to find pattern hidden in apparent chaos."

Is the world really simple, and what is the process by which science finds simple patterns in the apparent chaos? Let us take one paradigmatic example and analyze it in detail. The second law of Newtonian dynamics is justly celebrated for its elegance, simplicity, and broad coverage. The simple equation F = m.a, together with a very small number of other simple and elegant equations, accounts for a huge range of phenomena.

But is the law F = m.a really simple, after all? Consider the acceleration term a. What does this little character on the paper mean? To help get our heads around it, let us simplify matters further still and talk about velocity instead of acceleration. Once we know how to define velocity, acceleration turns out to be a piece of cake. Velocity, however, is a tricky concept as the following excerpt from The Feynman Lectures on Physics demonstrates.


Speed

Even though we know roughly what "speed" means, there are still some rather deep subtleties; consider that the learned Greeks were never able to adequately describe problems involving velocity. The subtlety comes when we try to comprehend exactly what is meant by "speed". The Greeks got very confused about this, and a new branch of mathematics had to be discovered beyond the geometry and algebra of the Greeks, Arabs, and Babylonians. As an illustration of the difficulty, try to solve this problem by sheer algebra: A balloon is being inflated so that the volume of the balloon is increasing at the rate of 100 cubic centimeters per second; at what speed is the radius increasing when the volume is 1000 cm^3? The Greeks were somewhat confused by such problems, being helped, of course, by some very confusing Greeks. To show that there were difficulties in reasoning about speed, Zeno produced a large number of paradoxes, of which we shall mention one to illustrate his point that there are obvious difficulties in thinking about motion. "Listen," he says, "to the following argument: Achilles runs 10 times as fast as a tortoise, nevertheless he can never catch the tortoise. For, suppose that they start in a race where the tortoise is 100 meters ahead of Achilles; then when Achilles has run the 100 meters to the place where the tortoise was, the tortoise has proceeded 10 meters, having run one-tenth as fast. Now, Achilles has to run another 10 meters to catch up with the tortoise, but on arriving at the end of that run, he finds that the tortoise is still 1 meter ahead of him; running another meter, he finds the tortoise 10 centimeters ahead, and so on, ad infinitum. Therefore, at any moment the tortoise is always ahead of Achilles and Achilles can never catch up with the tortoise." What is wrong with that? It is that a finite amount of time can be divided into an infinite number of pieces, just as a length of line can be divided into an infinite number of pieces by dividing repeatedly by two. And so, although there are an infinite number of steps (in the argument) to the point at which Achilles reaches the tortoise, it doesn't mean that there is an infinite amount of time. We can see from this example that there are indeed some subtleties in reasoning about speed.
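[Editorial aside, not part of Feynman's text: a quick numerical check of both puzzles, using assumed values that match the story -- Achilles at 10 m/s, the tortoise at 1 m/s, a 100-meter head start. The balloon part uses the related-rates formula dr/dt = (dV/dt) / (4 pi r^2), exactly the kind of answer that requires calculus rather than sheer algebra.]

    import math

    # Zeno's stages form a geometric series whose sum is finite.
    v_achilles, v_tortoise, gap = 10.0, 1.0, 100.0   # m/s, m/s, meters (assumed values)
    total_time = 0.0
    for stage in range(30):
        t = gap / v_achilles               # time for Achilles to cover the current gap
        total_time += t
        gap = v_tortoise * t               # meanwhile the tortoise opens a new, smaller gap
    print(total_time)                          # approaches 11.111... seconds
    print(100.0 / (v_achilles - v_tortoise))   # same answer by the relative-speed argument

    # The balloon: V = (4/3) pi r^3, so dV/dt = 4 pi r^2 * dr/dt.
    dV_dt, V = 100.0, 1000.0                   # cm^3/s and cm^3, as in the problem
    r = (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)
    print(dV_dt / (4.0 * math.pi * r ** 2))    # dr/dt is about 0.21 cm/s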

In order to get to the subtleties in a clearer fashion, we remind you of a joke which you surely must have heard. At the point where the lady in the car is caught by a cop, the cop comes to her and says, "Lady, you were going 60 miles an hour!" She says, "That's impossible, sir, I was travelling for only seven minutes. It is ridiculous -- how can I go 60 miles an hour when I wasn't going an hour?" How would you answer her if you were the cop? Of course, if you were really the cop, then no subtleties are involved; it is very simple: you say, "Tell that to the judge!" But let us suppose that we do not have that escape and we make a more honest, intellectual attack on the problem, and try to explain to this lady what we mean by the idea that she was going 60 miles an hour. Just what do we mean? We say, "What we mean, lady, is this: if you kept on going the same way as you are going now, in the next hour you would go 60 miles." She could say, "Well, my foot was off the accelerator and the car was slowing down, so if I kept on going that way it would not go 60 miles." Or consider a falling ball and suppose we want to know its speed at the time three seconds if the ball kept on going the way it is going. What does that mean -- kept on accelerating, going faster? No -- kept on going with the same velocity. But that is what we are trying to define! For if the ball keeps on going the way it is going, it will just keep on going the way it is going. Thus we need to define the velocity better. What has to be kept the same? The lady can also argue this way: "If I kept going the way I'm going for one more hour, I would run into that wall at the end of the street!" It is not so easy to say what we mean.

Many physicists think that measurement is the only definition of anything. Obviously, then, we should use the instrument that measures the speed -- the speedometer -- and say, "Look, lady, your speedometer reads 60." So she says, "My speedometer is broken and didn't read at all." Does this mean that the car is standing still? We believe that there is something to measure before we build the speedometer. Only then can we say, for example, "The speedometer isn't working right," or "the speedometer is broken." That would be a meaningless sentence if the velocity had no meaning independent of the speedometer. So we have in our minds, obviously, an idea that is independent of the speedometer, and the speedometer is meant only to measure this idea. So let us see if we can get a better definition of the idea. We say, "Yes, of course, before you went an hour, you would hit that wall, but if you went one second, you would go 88 feet; lady, you were going 88 feet per second, and if you kept on going, the next second it would be 88 feet, and the wall down there is farther away than that." She says, "Yes, but there is no law against going 88 feet per second! There is only a law against going 60 miles an hour." "But," we reply, "it's the same thing." If it is the same thing, it should not be necessary to go into this circumlocution about 88 feet per second. In fact, the falling ball could not keep going the same way even one second because it would be changing speed, and we shall have to define speed somehow.

Now we seem to be getting on the right track; it goes something like this: If the lady kept on going for another 1/1000 of an hour, she would go 1/1000 of 60 miles. In other words, she does not have to keep on going for the whole hour; the point is that for a moment she is going at that speed. Now what that means is that if she went just a little bit more in time, the extra distance she goes would be the same as that of a car that goes at a steady speed of 60 miles an hour. Perhaps the idea of the 88 feet per second is right; we see how far she went in the last second, divide by 88 feet, and if it comes out 1, the speed was 60 miles an hour. In other words, we can define the speed in this way: We ask, how far do we go in a very short time? We divide the distance by the time, and that gives the speed. But the time should be made as short as possible, the shorter the better, because some change could take place during that time. If we take the time of a falling body as an hour, the idea is ridiculous. If we take it as a second, the result is pretty good for a car, because there is not much change in speed, but not for a falling body; so in order to get the speed more and more accurately, we should take a smaller and smaller time interval. What we should do is take the distance covered in a millionth of a second, and divide that distance by a millionth of a second. The result gives the distance per second, which is what we mean by velocity, so we can define it that way. That is a successful answer for the lady, or rather, that is the definition that we are going to use.

The foregoing definition involves a new idea, an idea that was not available to the Greeks in general form. That idea is to take an infinitesimal distance and the corresponding infinitesimal time, form the ratio, and watch what happens to that ratio as the time we use gets smaller and smaller. In other words, take a limit of the distance travelled divided by the time required, as the time taken gets smaller and smaller, ad infinitum. This idea was invented by Newton and Leibnitz, independently, and is the beginning of a new branch of mathematics, called the differential calculus. Calculus was invented in order to describe motion, and its first application was to the problem of defining what is meant by going "60 miles an hour."

Richard Feynman, The Feynman Lectures on Physics, Vol.I (1963/1989; p.8-2)
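To make Feynman's limiting procedure concrete, here is a minimal numerical sketch. It is an illustration of mine, not something from the Lectures: it assumes the standard textbook free-fall law x(t) = 16 t^2 (distance in feet, time in seconds) and computes the average speed over shorter and shorter intervals around the "time three seconds" mentioned in the excerpt.

    # Velocity as a limit: the distance traveled in a shrinking time interval,
    # divided by that interval.  The free-fall law x(t) = 16*t**2 is an assumed
    # textbook example, not taken from the excerpt above.
    def x(t):
        return 16.0 * t * t                  # feet fallen after t seconds

    t0 = 3.0                                 # the "time three seconds" of the falling ball
    for dt in (1.0, 0.1, 0.001, 1e-6):
        avg_speed = (x(t0 + dt) - x(t0)) / dt
        print(dt, avg_speed)                 # the ratios approach 96 feet per second
    # (For the lady in the car, the same procedure with a steady 60 miles an hour
    # gives 60 * 5280 / 3600 = 88 feet per second, the figure quoted by Feynman.)

No matter how the interval shrinks, the ratio settles down to 96 feet per second -- the derivative 32 t at t = 3 -- and that stable limiting value is what the definition captures.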


OK, let's now return to Newton's F = m.a and reassess its simplicity. We have the following facts: (1) the acceleration term a presupposes the concept of velocity; (2) a precise definition of velocity requires the idea of a limit -- the ratio of an infinitesimal distance to the corresponding infinitesimal time; (3) this idea was not available to the learned Greeks, and a whole new branch of mathematics, the differential calculus, had to be invented by Newton and Leibnitz before the innocent-looking a could be given a definite meaning.

In light of this evidence, is Newton's second law simple or isn't it? Well, yes and no. On one hand, there is tremendous intellectual investment hidden behind the notation. On the other hand, for a person who has mastered the prerequisite concepts, the equation does seem simple and elegant. And it is a wonderful empirical fact that apples, heavenly bodies, and all sorts of other objects obey this law.

So, is this genuine simplicity of nature or mere notational shorthand? Consider a different example. Nobody has managed to invent a notation in which turbulence looks simple. Many smart people have tried and failed. Because of the great practical importance of the topic, funding is abundant and whole research institutes work on it day and night. Relatively little progress has been made so far. There is even a growing sense that the task is impossible in principle: there are too many elements that interact in nonlinear ways, which gives rise to sensitive dependence on the initial conditions, and so forth.
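Sensitive dependence on initial conditions can be illustrated with a toy system far simpler than turbulence. The sketch below is an illustration of mine and uses the logistic map, a standard stand-in for chaotic dynamics, rather than any fluid equation:

    # Two trajectories of the chaotic logistic map x -> 4*x*(1-x), started
    # a millionth apart.  After a few dozen steps the difference is of order one,
    # so no measurement of the initial state is precise enough for long-range prediction.
    x1, x2 = 0.300000, 0.300001
    for step in range(40):
        x1 = 4.0 * x1 * (1.0 - x1)
        x2 = 4.0 * x2 * (1.0 - x2)
    print(abs(x1 - x2))        # roughly of order 1, despite the tiny initial difference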

A similar story can be told replacing the word "turbulence" with "the human brain". Can we ever formulate a simple description of the human brain? Who knows? How can one know? Is this a meaningful question? In particular, what is the meaning of the word "simple"?

Richard Feynman discusses the issue of notational tricks vs. genuine simplicity of nature at several places in his Lectures on Physics. Here is one more excerpt, this time from the volume on electromagnetism. The example involves concepts that I [Alex Petrov] have not fully mastered yet and therefore the "beautiful and simple" Equation (25.22) below does not seem so simple to me (but perhaps seems even more beautiful because of that ;-).


The Invariance of the Equations of Electrodynamics

We have found that the potentials phi and A taken together form a four-vector which we call A_mu, and that the wave equations -- the full equations which determine the A_mu in terms of the j_mu -- can be written as in Eq. (25.22). This equation, together with the conservation of charge, Eq. (25.19), gives us the fundamental law of the electromagnetic field:

nabla_mu j_mu = 0 (25.19)
Box^2 A_mu = j_mu / epsilon0 (25.22)

There, in one tiny space on the page, are all of the Maxwell equations -- beautiful and simple. Did we learn anything from writing the equations in this way, besides that they are beautiful and simple? In the first place, is it anything different from what we had before when we wrote everything out in all the various components? Can we from this equation deduce something that could not be deduced from the wave equations for the potentials in terms of the charges and currents? The answer is definitely no. The only thing we have been doing is changing the names of things -- using a new notation. We have written a square symbol to represent the derivatives, but it still means nothing more nor less than the second derivative with respect to t, minus the second derivative with respect to x, minus the second derivative with respect to y, minus the second derivative with respect to z. And the mu means that we have four equations, one each for mu = t, x, y, or z. What then is the significance of the fact that the equations can be written in this simple form? From the point of view of deducing anything directly, it doesn't mean anything. Perhaps, though, the simplicity of the equation means that nature also has a certain simplicity.
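[Editorial aside, not part of Feynman's text: written out in the plain notation used above, the square symbol in Eq. (25.22) stands for

    Box^2 A_mu = d^2 A_mu/dt^2 - d^2 A_mu/dx^2 - d^2 A_mu/dy^2 - d^2 A_mu/dz^2,  for mu = t, x, y, z,

which is just the four second derivatives Feynman spells out in words.]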

Let us show you something interesting that we have recently discovered: All of the laws of physics can be contained in one equation. That equation is

U = 0 (25.30)

What a simple equation! Of course, it is necessary to know what the symbol means. U is a physical quantity which we will call the "unworldliness" of the situation. And we have a formula for it. Here is how you calculate the unworldliness. You take all of the known physical laws and write them in a special form. For example, suppose you take the law of mechanics, F = m.a, and rewrite it as F - m.a = 0. Then you call (F - m.a) -- which should, of course, be zero -- the "mismatch" of mechanics. Next, you take the square of this mismatch and call it U1, which can then be called the "unworldliness of mechanical effects". In other words, you take

U1 = (F - m.a)^2 (25.31)

Now you write another physical law, say, nabla . E = rho / epsilon0, and define

U2 = (nabla . E - rho / epsilon0)^2

which you might call "the gaussian unworldliness of electricity." You continue to write U3, U4, and so on -- one for every physical law there is.

Finally, you call the total unworldliness U of the world the sum of the various unworldlinesses Ui, from all the subphenomena that are involved. Then the great "law of nature" is

U = 0 (25.32)

This "law" means, of course, that the sum of the squares of all the individual mismatches is zero, and the only way the sum of a lot of squares can be zero is for each one of terms to be zero.

So the "beautifully simple" law in Eq. (25.32) is equivalent to the whole series of equations that you originally wrote down. It is therefore absolutely obvious that a simple notation that just hides the complexity in the definitions of symbols is not real simplicity. It is just a trick. The beauty that appears in Eq. (25.32) -- just from the fact that several equations are hidden within it -- is no more than a trick. When you unwrap the whole thing, you get back where you were before.

However, there is more to the simplicity of the laws of electromagnetism written in the form of Eq. (25.19). It means more, just as a theory of vector analysis means more. The fact that the electromagnetic equations can be written in a very particular notation which was designed for the four-dimensional geometry of the Lorentz transformations -- in other words, as a vector equation in the four-space -- means that it is invariant under the Lorentz transformations. It is because the Maxwell equations are invariant under those transformations that they can be written in a beautiful form.

It is no accident that the equations of electrodynamics can be written in the beautifully elegant form of Eq. (25.29). The theory of relativity was developed because it was found experimentally that the phenomena predicted by Maxwell's equations were the same in all inertial systems. And it was precisely by studying the transformation properties of Maxwell's equations that Lorentz discovered his transformation as the one which left the equations invariant.

There is, however, another reason for writing the equations in this way. It has been discovered -- after Einstein guessed that it might be so -- that all of the laws of physics are invariant under the Lorentz transformation. That is the principle of relativity. Therefore, if we invent a notation which shows immediately when a law is written down whether it is invariant or not, we can be sure that in trying to make new theories we will write only equations which are consistent with the principle of relativity.

The fact that the Maxwell equations are simple in this particular notation is not a miracle, because the notation was invented with them in mind. But the interesting physical thing is that every law of physics -- the propagation of meson waves or the behavior of neutrinos in beta decay, and so forth -- must have this same invariance under the same transformation. Then when you are moving at a uniform velocity in a spaceship, all of the laws of nature transform together in such a way that no new phenomenon will show up. It is because the principle of relativity is a fact of nature that in the notation of four-dimensional vectors the equations of the world will look simple.

Richard Feynman, The Feynman Lectures on Physics, Vol.II (1963/1989; p.25-10)
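The invariance Feynman is talking about can be checked numerically in a few lines. The sketch below is an illustration of mine (units with c = 1, an arbitrary event, an arbitrary boost): the combination t^2 - x^2 - y^2 - z^2 comes out the same in both frames, which is precisely the property that the four-vector notation puts in plain view.

    import math

    def boost(t, x, v):
        """Lorentz boost of (t, x) along the x axis with velocity v, in units where c = 1."""
        gamma = 1.0 / math.sqrt(1.0 - v * v)
        return gamma * (t - v * x), gamma * (x - v * t)

    t, x, y, z = 2.0, 1.0, 0.5, -0.3          # an arbitrary event (made-up numbers)
    t2, x2 = boost(t, x, v=0.6)               # the same event seen from a frame moving at 0.6 c

    print(t * t - x * x - y * y - z * z)      # 2.66
    print(t2 * t2 - x2 * x2 - y * y - z * z)  # 2.66 again -- the interval is invariant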


One can relate Feynman's thought above to the domain of Newtonian mechanics. The fact that Newton's equations are simple in the notation of derivatives is not a miracle, because the notation was invented with them in mind. But the interesting physical thing is that the same conceptual machinery works well for all other kinds of change. Differential calculus is an extremely powerful language for describing change in general. Nature need not be like this -- one can conceive of a universe in which the mathematical machinery useful for describing linear motion provides no leverage in the domain of chemical kinetics, for instance. That it does is an empirical fact reflecting an additional layer of order in the universe we live in.
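To make this concrete, here is a minimal sketch of mine: the same crude finite-difference step describes a falling ball and a first-order chemical reaction equally well. The rate constant and initial values are assumed purely for illustration.

    # One generic Euler step routine serves two very different kinds of change.
    def euler(deriv, y0, dt, n_steps):
        """Advance y' = deriv(y) from y0 by n_steps steps of size dt."""
        y = y0
        for _ in range(n_steps):
            y = y + dt * deriv(y)
        return y

    g = 9.8      # m/s^2: dv/dt = g for a freely falling ball
    k = 0.5      # 1/s:   dc/dt = -k*c for a first-order reaction (assumed rate constant)

    print(euler(lambda v: g, 0.0, 0.001, 3000))       # ~29.4 m/s, i.e. g*t after 3 s
    print(euler(lambda c: -k * c, 1.0, 0.001, 3000))  # ~0.223, i.e. exp(-k*t) after 3 s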

Before closing this topic, consider three final brief excerpts.


It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities. But this speculation is of the same nature as those other people make -- "I like it", "I don't like it" -- and it is not good to be too prejudiced about these things.

Richard Feynman, The Character of Physical Law (1965; p.58)


A reasonable starting point for a discussion of the many-body problem might be the question of how many bodies are required before we have a problem. Prof. G.E. Brown has pointed out that, for those interested in exact solutions, this can be answered by a look at history. In eighteenth-century Newtonian mechanics, the three-body problem was insoluble. With the birth of general relativity around 1910, and quantum electrodynamics around 1930, the two- and one-body problems became insoluble. And with modern quantum field theory, the problem of zero bodies (vacuum) is insoluble. So, if we are out after exact solutions, no bodies at all is already too many.

Richard Mattuck, A Guide to Feynman Diagrams and the Many-Body Problem (1976)


How can the universe start with a few types of elementary particles at the big bang, and end up with life, history, economics, and literature? The question is screaming out to be answered but it is seldom even asked. Why did the big bang not form a simple gas of particles, or condense into one big crystal? We see complex phenomena around us so often that we take them for granted without looking for further explanation. In fact, until very recently very little scientific effort was devoted to understanding why nature is complex.

[...] However, we do not live in a simple, boring world composed only of planets orbiting other planets, regular infinite crystals, and simple gases or liquids. Our everyday situation is not that of falling apples. If we open the window, we see an entirely different picture. The surface of the earth is an intricate conglomerate of mountains, oceans, islands, rivers, volcanoes, glaciers, and earthquake faults, each of which has its own characteristic dynamics. Unlike very ordered or disordered systems, landscapes differ from place to place and from time to time. It is because of this variation that we can orient ourselves by studying the local landscape around us. I will define systems with large variability as complex. The variability may exist on a wide range of length scales. If we zoom in closer and closer, or look out further and further, we find variability at each level of magnification, with more and more new details appearing.

Per Bak, How Nature Works: The Science of Self-Organized Criticality (1996; pp.1-5)



Page maintained by Alex Petrov
Created 2000-03-11, last updated 2005-04-13.