A. P. French
Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, USA


The world is full of experiences that cry out for explanations. Think, for example, of the colors of rainbows and soap bubbles, the vapor trails of high-flying aircraft, the fact that liquid water abruptly changes into solid ice at a certain temperature, the production of lightning and the thunder that follows it in a storm, the beautiful hexagonal symmetry of small snowflakes; all these, and a limitless list of other phenomena, fall within the province of the science of physics. The essence of science in general is the observation and exploration of the world around us with a view to identifying some underlying order or pattern in what we find. And physics is that part of science which deals primarily with the inanimate world, and which furthermore is concerned with trying to identify the most fundamental and unifying principles. The first of these conditions -- restriction to the inanimate world -- separates physics, at least provisionally, from biology; the second separates it from chemistry, which, at least in its theoretical aspects, builds on some specific areas of physics but can ignore some others. Mathematics, of course, although indispensable to the practice of physics, is an entirely different field of study, since it is self-contained and is ultimately independent of observations of the real world.
The subject of this article could be approached in many different ways. One way of obtaining some insight into the nature of physics is to look at the story of how physics has developed from its beginnings until now. That is what this article does, although it makes no attempt to be exhaustive and omits many topics that some might consider important or even essential. Its main purpose is not to offer a chronological survey for its own sake, but just to illustrate how the consistent aim of physics is to relate our knowledge of phenomena to a minimal number of general principles.


It can be plausibly argued that physics began with mechanics -- the science of machines, forces and motion. There has always been a close link between physics and practical devices, and this connection was already established in mechanics in ancient times. The best example is perhaps the lever, the principle of which was recognized by Archimedes about 250 B.C.: "...unequal weights are in equilibrium only when they are inversely proportional to the arms from which they are suspended." Here, in a simple case, was a theoretical statement, a generalization of particular experiences, of the kind that typifies the nature of physics. This result may have been the first example of an actual physical law. It became the basis of a weighing device -- the steelyard or Roman balance (Fig. 1) -- that has been used since Roman times and is still in use today. It is worth taking this example a little further. Presumably the balancing of unequal weights was initially an empirical matter. Then Archimedes produced his quantitative and general statement of the relationship. But he was not content with this, and he sought to base it on one of the most powerful concepts used by physicists -- that of symmetry. Archimedes took it as axiomatic (and certainly easily verifiable) that equal weights (W) at equal distances (l) from the pivot (the fulcrum) were in balance. He then imagined one of these weights being replaced by two weights of magnitude W/2, one at the fulcrum and the other at a distance 2l from the fulcrum. Since the first half-weight would clearly exert no turning effect about the fulcrum, he argued that it was obvious that W/2 at 2l would balance W at l, and that the general law of the lever could be inferred from an extension of this argument.

Fig. 1. A steelyard. Medal struck for Frederick I (1688-1713)

The argument is in fact not valid. If the law of the lever were W1l1^2 = W2l2^2, it would still be true that equal weights at equal distances would balance, but it would not be true that W/2 at 2l would balance W at l. The correct law had to be based on actual observations with unequal weights. Nevertheless, an appeal to symmetry, where it is justified, is an extremely fruitful tool, as we shall see.
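The failure of the symmetry argument is easy to check numerically. Here is a minimal Python sketch (the weights and arm lengths are arbitrary illustrative values): equal weights at equal distances balance under both the true law and the hypothetical quadratic one, so that axiom alone cannot decide between them.

```python
# Check Archimedes' symmetry argument against two candidate lever laws.
# Under the true law the turning effect is W*l; under the counterfactual
# alternative it would be W*l**2.

def balances(moment, w1, l1, w2, l2, tol=1e-12):
    """True if the two sides produce equal turning effects."""
    return abs(moment(w1, l1) - moment(w2, l2)) < tol

linear = lambda w, l: w * l        # the actual law of the lever
quadratic = lambda w, l: w * l**2  # the counterfactual law

W, L = 10.0, 2.0   # illustrative weight and arm length

# Equal weights at equal distances balance under BOTH laws...
assert balances(linear, W, L, W, L)
assert balances(quadratic, W, L, W, L)

# ...but W/2 at 2l balances W at l only under the linear law.
assert balances(linear, W / 2, 2 * L, W, L)
assert not balances(quadratic, W / 2, 2 * L, W, L)
```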


Even before Archimedes and his work in mechanics, Aristotle (384-322 B.C.), who introduced the Greek word for physics into our vocabulary, had considered the motion of bodies. Certainly, space and time have traditionally been the most fundamental concepts in our view of nature, and position as a function of time has always been the basis of our description of motion. Aristotle discussed these matters, but he made a distinction between what was viewed as the perfect circular motion of stars, etc. (actually, of course, a reflection of the earth's rotation on its axis), and the imperfect trajectories of objects at the Earth's surface. But it was clear that, when it came to physics, Aristotle did not study phenomena at first hand. He made the famous assertion that "a weight which is twice as great will fall from the same height in half the time" -- something that could have been disproved by a single experiment. The Middle Ages saw a number of investigations of the motion of projectiles, but it was not until the 17th century that Galileo (1564-1642) established by a combination of experiment and theory the correct picture of free fall and the parabolic motion of projectiles. I mention this not for the sake of the particular result, but because it points to another essential feature of physics -- the dependence on direct observation or experiment. Without such direct interaction with Nature, we could not have a science of physics. It has often been said that such observational or experimental evidence is the starting point from which all physical theories are constructed, but this, I think, is going too far. All that one can justifiably say is that the progress of physics depends on a continuing interaction between experiment and theory. It may well be that a theory comes first, and suggests the possibility of experimental tests by which it will either be supported or discredited.
It has never been the case that a given set of observations points uniquely to a fundamental theory that accounts for them, although it may be true that experiment points uniquely to a particular connection between observed quantities -- e.g., distance proportional to the square of the time in free fall under gravity (but that is not a theory of gravitation).


The first great flowering of physics came, as is well known, in the 17th century, and its basis was the study of collisions between objects. It was Isaac Newton (1642-1727) who first realized that the results of all such experiments were consistent with a single conservation principle -- the conservation of linear momentum.*


* Others (including Descartes) had contributed to this principle in a less complete or correct way. Newton had the genius or good fortune to use it as the basis of his mechanics.


This, by itself, did not explain in detail the results of every possible type of collision. Nevertheless, the conservation of total momentum (mv) was never violated in the collision of two objects. The statement of this principle involved two important concepts:

1 The concept of mass, defined in a somewhat intuitive way as the quantity of matter in a body;

2 The concept of a physical frame of reference, with respect to which the velocity of other objects could be measured, and which could be taken, in these early experiments (and even in similar experiments today) as being provided by the seemingly immovable body of the Earth.

Both of these concepts have been subjected to much discussion and refinement since those early days, but this fact illustrates another very important aspect of the nature of physics -- an acceptance of working hypotheses that are quite adequate at a particular stage in the development of the subject, but which are always liable to later modification. Thus, for example, it was well known, even in the 17th century, that the Earth was not stationary, but was rotating on its axis and orbiting around the Sun, but both of these facts could be ignored in the analysis of laboratory-sized experiments on collisions. Only when larger-scale motions were involved did these facts become relevant; to introduce them at the beginning would make for unnecessary and obstructive complications.
Another important but less general conservation principle was recognized at about the same time as the conservation of momentum. It was limited to what are called elastic collisions, in which two colliding objects recoil from one another as vigorously as they approach. If one considers a collision along a straight line between two objects of masses m1 and m2, and if one denotes their initial and final velocities by u1, u2 and v1, v2, then the conservation of momentum is expressed by the equation: m1u1 + m2u2 = m1v1 + m2v2. This held good whether the collision was elastic or inelastic (less than perfect rebound). But, if the collision was elastic, it was also true that the following relation held: m1u1^2 + m2u2^2 = m1v1^2 + m2v2^2.

With the further development of mechanics, it came to be recognized that this second relationship was an expression of the conservation of kinetic energy in elastic collisions, where the kinetic energy of a body was later defined as mv^2/2, not mv^2, for reasons that we will not go into here.
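The two conservation statements can be verified together in a short numerical sketch (Python, with arbitrary illustrative masses and velocities). The closed-form expressions for the final velocities used below are the standard result obtained by solving the momentum and kinetic-energy equations jointly; they are not derived in the text above.

```python
# One-dimensional elastic collision of two masses m1, m2 with initial
# velocities u1, u2. The final velocities are the standard solution of
# the simultaneous momentum and kinetic-energy conservation equations.

def elastic_collision(m1, u1, m2, u2):
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

m1, u1 = 2.0, 3.0    # kg, m/s (illustrative values)
m2, u2 = 1.0, -1.0

v1, v2 = elastic_collision(m1, u1, m2, u2)

# Conservation of momentum: m1*u1 + m2*u2 = m1*v1 + m2*v2
assert abs((m1 * u1 + m2 * u2) - (m1 * v1 + m2 * v2)) < 1e-12

# The elastic-collision relation: m1*u1^2 + m2*u2^2 = m1*v1^2 + m2*v2^2
assert abs((m1 * u1**2 + m2 * u2**2) - (m1 * v1**2 + m2 * v2**2)) < 1e-12
```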

In addition to these conservation principles, another fundamental physical principle applicable to collisions was recognized by Newton's great contemporary Christiaan Huygens (1629-1695). This was what we would now call the equivalence of different frames of reference. Huygens considered an elastic collision between two equally massive spheres with equal and opposite velocities (± v). He argued that, by symmetry, they would recoil with their velocities reversed. He then imagined such a collision taking place on a boat that was itself moving at velocity v with respect to the shore (Fig. 2). If this collision were observed by a man standing on the river bank, he would see it as a collision between a stationary sphere and one moving with velocity 2v. Or, if the boat were moving with speed u, the initial velocities of the spheres would be u + v and u - v. In both cases the velocities as seen by the man on the bank would be interchanged by the collision. In other words, on the basis of the original symmetric collision one could predict the results of all collisions between these two objects occurring with the same relative initial velocity.
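Huygens's frame argument can also be sketched numerically. In the boat frame, the symmetric collision of equal masses simply exchanges the two velocities; viewing the same collision from the bank adds the boat's speed u to every velocity, so the initial pair u + v and u - v ends up interchanged. A minimal Python illustration (the values of u and v are arbitrary):

```python
# Huygens's argument: the symmetric elastic collision of two equal
# masses (+v and -v) exchanges their velocities. Observing from the
# river bank adds the boat speed u to every velocity, before and after.

def collide_equal_masses(w1, w2):
    """Elastic collision of equal masses: velocities are exchanged."""
    return w2, w1

v = 2.0   # sphere speed in the boat frame (illustrative)
u = 5.0   # boat speed relative to the bank (illustrative)

# In the boat frame: +v and -v recoil with velocities reversed.
assert collide_equal_masses(+v, -v) == (-v, +v)

# As seen from the bank: shift every velocity by u.
before = (u + v, u - v)
after = tuple(w + u for w in collide_equal_masses(+v, -v))

# The two velocities are interchanged by the collision.
assert after == (u - v, u + v)
assert after == (before[1], before[0])
```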


Figure 2. An elastic collision between two spheres as seen from two different reference frames. (From C. Huygens, Oeuvres Complètes, Vol. 16, The Hague: Martinus Nijhoff, 1940). (The diagram above the sketch was added by Ernst Mach in his book, The Science of Mechanics.)

Underlying all these phenomena was another condition, never explicitly stated. This was that the total mass of the objects involved in a collision remained constant -- the principle of the conservation of mass. This was taken for granted in these physical systems, but an explicit statement of the conservation of mass, based on direct experiment, did not come until more than a century later, in chemistry, when Antoine Lavoisier (1743-1794) established it for chemical reactions, involving much more drastic rearrangements of matter than did the collision experiments of Newton's contemporaries.

This is by no means the last we shall hear of conservation principles, but before continuing on that path we shall consider some other matters.


Observers of the physical world have always been interested in discovering or recognizing the causes of things. Perhaps the most famous example of this is the modern mathematical statement of Newton's second law of motion -- F = ma. On the left is the force; on the right is the mass multiplied by the acceleration produced in it by the force. In other words, the left side is interpreted as a cause, and the right side as the effect produced by that cause. The two sides of the equation do not play equivalent roles. This is a feature that one does not find in the equations of mathematics. Nor are all physics equations of this type. For example, what is probably the most famous equation in all of physics -- Einstein's E = mc2 -- is a simple statement of the equivalence of mass and energy. But when a physical equation is the expression of a cause/effect relationship it assumes a special significance.


During the two centuries after Newton, the scope of physics grew enormously. Already in his time the science of optics was well developed, and Newton himself was a major contributor to it. But then, during the 18th and 19th centuries, the knowledge of the physical world expanded to include such areas as heat, sound, electricity, and magnetism. Initially these, as well as mechanics and optics, were seen as separate fields of study, but then something very important happened: connections between them began to be perceived. Sound, for example, came to be understood as the mechanical vibrations of air columns, strings, and so on, and heat as the chaotic mechanical motions of atoms and molecules (for, although atoms as such could not be observed, there was a confident belief in their existence). Along with this came a great enlargement of the concept of energy and its conservation. It came to be realized that, when mechanical energy apparently disappeared -- as for example in the inelastic collision of two objects -- one could account for the loss in terms of a transfer to the thermal energy of the colliding objects, as expressed in an increase of their temperature. Thus conservation of energy came to be seen as a general principle, although the extension of it to electricity and magnetism did not happen immediately.

Early in the 19th century, connections between the phenomena of electricity and magnetism were discovered: the flow of electric charge through a wire caused magnetic effects, and a changing magnetic field could produce a current in a closed loop of wire. And then, in the 1860s, the great physicist James Clerk Maxwell (1831-1879) showed how, by uniting the equations that described electric and magnetic fields, he could account for the transmission of light through space at the amazing speed of about 3 x 10^8 meters per second -- a value that was already known from experiment.

The net result was a tremendous unification of physics. For many years it had seemed that the diversity of physical phenomena was expanding almost without limit as new discoveries were made. Then it came to be seen that the divisions traditionally made between different areas of physics were really the result of our ignorance of their fundamental interconnections. As a matter of convenience, but perhaps unfortunately, these different areas continued to be treated for the most part as separate fields of study, and textbooks continued to be divided up accordingly. This was not very serious, however, as long as it was recognized that, in a fundamental sense, physics was a single discipline.


One of the main goals of physics is to develop plausible conceptual models, as they are called, in terms of which various physical phenomena can be described and explained. Perhaps the outstanding example of this is the attempt to find a successful model for the phenomena of light. According to some of the ancient Greeks, our ability to see an object depended on the emission of something from the eye -- an idea that should have been easily disprovable by experiment (for example, the invisibility of an object in a darkened room). Others, more plausibly, thought that an object became visible by virtue of particles of some sort emitted by the object itself. The production of sharp shadows by a small luminous source led naturally to the picture of light as consisting of particles traveling in straight lines from a source or from an object illuminated by it. This model was reinforced by the discovery of the law of reflection of a beam of light at a plane mirror -- angle of reflection equals angle of incidence. Newton favored and supported this particle model. But his contemporary Huygens devised and promoted a very different model -- that light consists of waves traveling through a medium. He considered that the immense speed of light, and the ability of beams of light to pass through one another without any interaction, were evidence against light being composed of material particles. Also he thought that vision must depend on the retina of the eye being shaken by the light. He was able to explain the rectilinear propagation of light as arising from the superposition of circular or spherical waves that originated from different points on the advancing wave-front of a beam.

It seemed obvious at the time that the particle and wave models of light were mutually exclusive. Thanks primarily to the great authority of Newton, the particle model became generally accepted, and remained unchallenged for about 100 years. But then something very astonishing happened. In about 1801 Thomas Young (1773-1829) showed that a beam of light, if divided into two overlapping beams, showed the phenomenon of interference -- the production of alternating bright and dark regions on a screen placed to receive the light (Fig. 3). The appearance of dark regions -- destructive interference -- was inconceivable on a particle model; how could one particle of light be annihilated by another? Thus the particle model of light was abandoned, and evidence supporting a wave model of light continued to accumulate during the remainder of the 19th century. The culmination came when, as mentioned in the previous section, Maxwell showed that he could account for the propagation of light as an electromagnetic disturbance passing through a medium that was called the ether, and that was conceived as filling all space. The triumph of a wave model of light seemed complete and permanent, but this was not to be. What had been assumed to be a simple case of either/or turned out to be something much more surprising and mysterious, as we shall discuss shortly.

Fig. 3. A schematic diagram of Young's two-slit interference experiment. Places where the waves from the two slits reinforce are shown by black dots, places where they cancel are shown by open circles. The interference pattern has a central maximum and other maxima on each side. In practice the wavelength of the light is very small indeed compared to the spacing of the slits; this means that the interference fringes are extremely numerous and very close together.


As the 19th century approached its end, the physicists of the time felt that physics was almost a completed subject. Its primary ingredients were absolute space and time, the causal laws of mechanics, electricity and magnetism (embodying a wave model of light), and a picture of matter as consisting of discrete and indivisible particles obeying these laws. But such complacency was about to be shattered. In the space of less than 10 years came radioactivity, the discovery of the electron, the quantum of energy, and special relativity; each of them, in its way, called for a drastic revision of our picture of the physical world.

a) Radioactivity

This phenomenon, discovered in 1896 by Henri Becquerel (1852-1908), had as its chief feature the spontaneous emission of various kinds of previously unknown radiations from certain of the heaviest atoms known to chemistry. The source of these emissions and their energy was a great puzzle, and it was at one stage suggested that the principle of the conservation of energy would have to be abandoned. Further research showed that this was not necessary, but an even more precious principle had to be sacrificed: the unique relation of cause to effect. For it came to be clear that, in a group of identical radioactive atoms, the times at which they underwent a change to a different kind of atom were quite random; there was nothing, so far as could be discovered, that caused a particular atom to undergo a radioactive change at a particular time; the atoms decayed spontaneously and independently. This was established in an experiment by Ernest Rutherford (1871-1937), the dominant figure in the early days of nuclear physics. But physics did not cease to be an exact science with immense predictive power. We shall have more to say about this later.

b) X Rays and the Electron

During the last decade of the 19th century, much research was focused on the subject of electric discharges in gases at low pressures. This became possible, in large part, because of the development of efficient ways of producing vacuum -- a good example of how advances in technology directly affect the progress of fundamental physics. A whole range of new phenomena came into view. Perhaps the most dramatic of these was the discovery of x rays by Wilhelm Conrad Roentgen (1845-1923). The ability of these rays to penetrate the human body and expose its internal structure was quickly exploited. At first the nature of these rays was a mystery, but after a few years it was established that they were electromagnetic waves, like light but of a much shorter wavelength (by a factor of about 1000). But behind these x rays lay something destined to have a much greater influence on the course of physics. They were produced by the impact on a solid "target" of so-called cathode rays, emitted from a negatively charged electrode in an evacuated tube. What were these cathode rays? It was Joseph John Thomson (1856-1940) who found that they were negatively charged particles with a far smaller mass, in relation to their charge, than any particle previously known. In fact, if one assumed that the size of their charge was equal to that of a hydrogen ion in electrolysis (a proposition that was verified by experiment later) their mass was less than 1/1000 of the mass of a hydrogen atom. Furthermore, their properties did not depend at all on the material used as the cathode (negative electrode) from which they came. The implication was that all atoms had an internal structure that included these novel particles, which of course we know today as electrons. The old idea of atoms as indivisible (the Greek basis of their name) was gone forever. The question naturally arose: what other constituents went into the structure of atoms, which were electrically neutral? 
That question was not properly answered for more than another 10 years, when Rutherford found that the positive part of the atom was a nucleus smaller in diameter by a factor of about 10,000 than the atom as a whole. We shall return to that development in the next section of this article.

c) The Quantum

A general acquaintance with the radiation from hot objects is as old as mankind, yet a full understanding of its properties did not come until the dawn of the 20th century. Well before then, it had come to be realized that radiant heat was a form of electromagnetic radiation, which became visible when an object was sufficiently hot but also included radiation at much longer wavelengths. The spectrum of such radiation (intensity versus wavelength) for a body at a particular temperature was a rather uninteresting-looking curve (Fig. 4), whose peak shifted to shorter wavelengths as the temperature of the radiating body was raised. Attempts to explain this spectrum in terms of the basic classical theory of electromagnetic radiation -- a well-understood theory -- did not work at all well.

Fig. 4. A qualitative graph of intensity versus wavelength or frequency for the radiation from a hot body. As the temperature is raised, the overall amount of radiation increases and the peak shifts toward shorter wavelength (higher frequency).

The German physicist Max Planck (1858-1947) set himself the task of finding a better fit. To his surprise and chagrin, he found himself driven to the conclusion (in 1900) that energy from a hot body could only be released in discrete amounts, proportional to the frequency (inversely proportional to the wavelength) of the emitted radiation, according to the formula E = hf, where f is the frequency and h is what quickly came to be known as Planck's constant. Thus the quantum was born. Planck shrank from proposing that the radiation itself was quantized -- the classical wave theory of light still stood supreme -- but Albert Einstein (1879-1955) advanced this hypothesis in what he called a heuristic way (something that works but may not be the last word) in 1905. Its consequences were very far-reaching; we shall come to that later.
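As a numerical illustration of E = hf (the figures below are not in the original text): a quantum of green light, with frequency around 5.4 x 10^14 Hz, carries only a few times 10^-19 joules, which is why the granularity of radiation goes unnoticed in everyday experience. A Python sketch:

```python
# Planck's relation E = h*f: the energy of one quantum of radiation.

h = 6.626e-34          # Planck's constant, J*s

def quantum_energy(f):
    """Energy (J) of a single quantum of frequency f (Hz)."""
    return h * f

f_green = 5.4e14       # roughly the frequency of green light, Hz
E = quantum_energy(f_green)

# A single quantum of visible light carries only a few times 10^-19 J.
assert 1e-19 < E < 1e-18

# Energy is proportional to frequency: doubling f doubles E.
assert abs(quantum_energy(2 * f_green) - 2 * E) < 1e-30
```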

d) Relativity

The discoveries in atomic physics and radiation were enough to shake classical physics to its core, but more was to come. Ever since the time of Newton, it had been accepted that space and time were absolute, even though Newton himself acknowledged that we could not identify absolute space and had to content ourselves with the study of relative motions. But then, in 1905, Einstein came forward with his revolutionary proposal that neither time nor space was absolute, that they were related to one another, and that both depended on measurements made with respect to a chosen frame of reference, which had to be identified. This meant, in particular, that one could not state categorically that two events occurring at different places were simultaneous; the judgment as to whether they were simultaneous or not depended on the frame of reference that one was in.

This theory -- the special theory of relativity -- is not basically difficult or complicated; in a simplified form, it can be presented with nothing more than high-school algebra. Its challenge is a conceptual one, because it requires us to abandon intuitive ideas that all of us grow up with. It is no trivial matter to make such an adjustment, but it soon became clear to Einstein's contemporaries (at least, to many of them) that the new theory had a predictive power that could not be denied. The slowing down of a clock that is in motion with respect to us, for example, might seem to be science fiction -- and in the form of the "twin paradox," with a human traveler staying young while his brother on Earth grows old, it still is; nevertheless, the basic effect has been directly confirmed by observations using precise atomic clocks transported around the Earth in commercial jet aircraft.

One aspect of relativity that was particularly troubling to the traditionalists was its denial that there existed a single preferred frame of reference. Such a frame was assumed to be defined by what Huygens called the ether -- the hypothetical medium that was deemed to be essential as the carrier of light and all other kinds of electromagnetic waves. The notion of waves that did not require any material medium to carry their vibrations was regarded as an absurdity. But the failure of all experiments to detect the motion of the Earth through this medium was one of the important supports for the correctness of Einstein's ideas. Physicists had to get used to the idea that electromagnetic waves did not need a medium to wave in; this picture was required only if one demanded a purely mechanical model of the wave propagation. In the latter part of the 19th century, much effort was expended on creating such mechanical models, until Einstein made them superfluous.


By the beginning of the 20th century it had come to be accepted that atoms were objects with diameters of the order of 10^-10 m. Prominent among the reasons for this belief was the knowledge of the value of Avogadro's number -- the number of atoms or molecules in a mole of an element -- which had been inferred from such phenomena as the viscosity of gases, and which also emerged from Planck's theoretical analysis of thermal radiation. (Notice once again the interconnectedness of physics!) Assuming that in materials such as metals the atoms were closely packed together, it was just a matter of geometry to deduce the approximate diameter of an individual atom.
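The "matter of geometry" referred to above can be sketched in a few lines of Python. Aluminum is used here purely as an illustrative example, with round-number values for its density and molar mass:

```python
# Estimate an atomic diameter from Avogadro's number, assuming the
# atoms in a solid metal are closely packed (each atom occupying
# roughly a cube of side d). Aluminum figures are illustrative.

N_A = 6.022e23        # Avogadro's number, atoms per mole
molar_mass = 0.027    # kg/mol, aluminum (approximate)
density = 2700.0      # kg/m^3, aluminum (approximate)

volume_per_atom = molar_mass / (density * N_A)   # m^3 per atom
d = volume_per_atom ** (1 / 3)                   # approximate diameter, m

# The result is of the order of 10^-10 m, as stated in the text.
assert 1e-10 < d < 1e-9
```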

After the electron had been discovered and the magnitude of its electric charge known, classical electromagnetic theory could be used to deduce that its diameter was of the order of 10^-14 m.*


* In modern theoretical models, this particular value is no longer accepted. The electron is treated as a point particle.


Given this number, together with the fact that electrons accounted for only about 1/10,000 of the mass of an atom, it was natural to picture the atom as a ball of positively charged material with a diameter of about 10^-10 m, with the almost point-like electrons embedded in it. This was the model that J. J. Thomson himself invented. There were various problems with this model, however; one of them was its inability to account for the wavelengths of the light emitted by atoms.

Then, in 1911, the situation was completely turned around when, as mentioned earlier, Ernest Rutherford discovered, from the violent deflections suffered by alpha particles (helium nuclei) fired at thin metal foils, that most of the mass of an atom of materials such as gold or silver was concentrated within a radius of about 10^-14 m.

Building on this discovery, Niels Bohr (1885-1962) in 1913 proposed his famous model of the atom as a kind of miniature solar system, with electrons orbiting like planets around the positive nucleus. Nobody appreciated better than Bohr himself that it was a very arbitrary model. He simply postulated, without any theoretical justification, that electrons in their orbits emitted no light (which classical electromagnetic theory would have required them to do). He also proposed, with an ingenious use of Planck's idea of the quantization of energy, that these orbits were limited to a discrete set of radii. It was really a thoroughly makeshift theory -- but it worked! It accounted triumphantly well for the known spectrum of light emitted by hydrogen atoms, and predicted other sets of hydrogen lines (in the ultraviolet and infrared) that had not previously been observed.

The theory did, however, have serious limitations. It had little success in explaining spectra other than those of hydrogen and so-called "hydrogen-like" systems -- those with only one outer electron to produce the radiation, such as certain positive ions. It was clearly not the last word. It is an interesting fact that Bohr, like Planck before him, did not believe that the light itself was quantized, until he was finally convinced, many years later, by direct experimental evidence of collisions between light quanta and electrons (the Compton effect).


We have seen how people's ideas about the nature of light oscillated between a particle model and a wave model. It had seemed axiomatic that these two models were mutually exclusive. Certainly the wave properties of light could not be denied. But then, early in the 20th century, experiments were made on the photoelectric effect -- the ejection of electrons from metals by light -- that were consistent with Einstein's proposal that the energy of light was emitted and absorbed in minute packets -- quanta -- that came to be called photons. In other words, light had properties that embraced those of both particles and waves. This was a totally new idea.

Then, about 20 years later, Louis de Broglie (1892-1987) went one step further, and made the complementary suggestion that electrons, which had been unequivocally accepted as particles, might have wave-like properties, with a wavelength equal to h/p, where h is Planck's constant and p is the momentum mv. Within a few years this, too, was confirmed. Electrons of a specific energy were diffracted by crystal lattices in just the same way as x rays (Fig. 5). In other words, our accepted categorization of the basic elements of the physical world ceased to apply at the atomic level. In fact, at this level our ordinary language, with all its customary associations, simply broke down. It was necessary to accept a photon or an electron simply for what it was, defined not by words of our own making but by its behavior.

Before long it was discovered that every kind of physical object that had been labeled as a particle -- neutrons and protons and every kind of neutral atom or molecule -- also had this wave property, with a wavelength given by de Broglie's formula.


The randomness of radioactivity and the wave/particle duality could simply not be fitted into the framework of classical physics, yet it was clear that the classical picture worked very well for many purposes. What was to be done? The answer was soon provided by two brilliant theorists, Werner Heisenberg (1901-1976) and Erwin Schroedinger (1887-1961). In 1925-26, using very different approaches, which were not at first recognized as equivalent, Heisenberg and Schroedinger created the new science of quantum mechanics.

[Fig. 5: left panel, x-rays; right panel, electrons]

Fig. 5. A pair of photographs showing the diffraction of electrons and x-rays of similar wavelength. These ring patterns are obtained when a beam of electrons or x-rays passes through a thin foil made up of small crystals of a material (aluminum) oriented randomly in all directions. The diffracted waves (particles) are received on a photographic plate on the other side of the foil. (After A. P. French and Edwin F. Taylor, Introduction to Quantum Physics, New York: W. W. Norton. 1978.)

The approach taken by Schroedinger is the easier one to appreciate, and builds directly on the wave/particle duality we have just described. By accepting the wave character attributed to particles by de Broglie, Schroedinger was able to construct an equation that led to the solution of a vast range of atomic problems. (This version of quantum mechanics is called wave mechanics.) There were strong similarities to acoustics. We know that, in the open air, sound of any wavelength or frequency can be transmitted, but in an enclosed space, such as the interior of a room or the body of a wind instrument, only certain wavelengths and frequencies are possible. Similarly, in empty space electrons of any wavelength are possible, but the interior of an atom is like an enclosure, with rather soft walls defined by the attraction of the positive nucleus for the electrons. Electrons of less than a certain energy cannot escape, and such electrons are restricted to certain discrete energies. The results of Bohr's theory of the hydrogen atom emerged naturally and automatically from this model, and it was applicable to many other atomic systems.
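The emergence of discrete energies from confinement can be illustrated with the crudest possible "enclosure": a one-dimensional box with hard walls, in which only standing waves of wavelength 2L/n fit, giving energies E_n = n²h²/(8mL²). This is only a caricature of the soft-walled atom described above, and the 0.2 nm box width below is an arbitrary, atom-sized choice:

```python
# Discrete energy levels of an electron confined to a 1-D "box" of width L:
# E_n = n^2 * h^2 / (8 * m * L^2), from fitting n half-wavelengths into L.
H = 6.62607015e-34      # Planck's constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # one electron-volt, in joules

def box_energy_ev(n: int, width_m: float) -> float:
    """Energy (in eV) of the n-th standing-wave state in a hard-walled box."""
    return n**2 * H**2 / (8.0 * M_E * width_m**2) / EV

L = 2e-10  # 0.2 nm, roughly atomic size (illustrative assumption)
for n in (1, 2, 3):
    print(n, box_energy_ev(n, L))  # only these discrete values are allowed
```

The point of the sketch is qualitative: an electron free in empty space may have any energy, but once it is confined to an atom-sized region, only a discrete ladder of energies (here a few eV to tens of eV, the right order of magnitude for atoms) is possible.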

There remained a basic question: What are these waves? This question has most often been discussed in the context of an analog of Thomas Young's original two-slit experiment on the interference of light. One can imagine a similar experiment being done with electrons (in fact, 35 years after wave mechanics was invented, such an experiment was performed). Whether the experiment is done with light or electrons (or other particles) the main features are the same.

Let us discuss it in terms of light, since this is universally accessible, whereas electron beams are not. If the intensity of the light is high, one gets a classical wave interference pattern, with smooth variations of intensity between maxima and minima as measured, for example, with a light meter. But if the intensity of the light is reduced to an extremely low level, and if the light meter is replaced by an extremely sensitive device that can detect individual photons (a photomultiplier tube) then an amazing result emerges. The experiment can be done under such conditions that only one photon passes through the apparatus at a time. It arrives at a particular point on the detector screen -- that is, it is detected as a particle. Its point of arrival on the screen is completely unpredictable. However, if millions of photons pass successively through the system the distribution of the individual impacts builds up to the classical interference pattern. The key point is that each photon, in some sense, is passing through both slits of the apparatus and interfering with itself; that, at least, is the simple-minded way of accounting for the results of the experiment. Does this mean that the photon literally divides? The answer is no; something more subtle is involved.
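The build-up of the pattern from individually unpredictable impacts can be mimicked with a simple simulation. The sketch below is an idealized model of my own construction, not the actual experiment: point slits, a dimensionless screen coordinate x, and fringes proportional to cos²(πx), ignoring the single-slit envelope. Each "photon" arrival is drawn at random from the interference probability density; no single arrival is predictable, yet the histogram of many arrivals reproduces the smooth fringes:

```python
import math
import random

def sample_photon_position(rng, half_width=4.0):
    """Draw one photon arrival coordinate x from the idealized two-slit
    probability density proportional to cos^2(pi * x), using rejection
    sampling: propose a uniform x, accept it with probability cos^2(pi*x)."""
    while True:
        x = rng.uniform(-half_width, half_width)
        if rng.random() < math.cos(math.pi * x) ** 2:
            return x

rng = random.Random(0)  # fixed seed so the run is reproducible
hits = [sample_photon_position(rng) for _ in range(100_000)]

# Histogram the impacts in bins of width 0.25: the counts alternate between
# high values near integer x (fringe maxima) and low values near half-integer
# x (fringe minima), recovering the wave picture from particle arrivals.
bins = [0] * 32
for x in hits:
    bins[min(int((x + 4.0) / 0.25), 31)] += 1
print(bins)
```

This is exactly the statistical determinism described in the following paragraphs: each event is random, but the distribution over a large population is rigorously predictable.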

If one tries to discover which slit the photon passes through, the interference pattern disappears. To describe such phenomena, Bohr introduced the concept of what he called complementarity. The wave and particle aspects of photons are complementary. Photons are detected as particles, at a particular point, but their motion from source to detector is described by a wave equation. It was Max Born (1882-1970) who proposed that Schroedinger's waves are waves of probability (or, to be more exact, of probability amplitude, the square root of a probability). Despite many subsequent developments, this interpretation has stood the test of time. It is a strange result that raises many questions, as every physicist acknowledges. Among other things, it points to the very close association between physics and mathematics -- a phenomenon that was the subject of an essay entitled "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" by the distinguished theorist Eugene Wigner (1902-1995).

One further comment is appropriate here. Phenomena such as radioactivity and the double-slit interference experiment show that individual events on the atomic scale can have a random property. Does this mean that physics has ceased to be an exact science? The answer is "No"! The development of classical physics had led us to believe that all kinds of individual events are subject to strict causal laws. Quantum phenomena have forced us to recognize that this is not true. But it remains true that the statistical behavior of large populations of identical atomic systems is rigorously predictable. This is not in itself a novel idea, although its importation into basic physics is new. We are all familiar with the fact that the properties of large populations of humans are amenable to precise descriptions and predictions, although what happens to individuals may not be. Thus, for example, insurance companies can base their business on a very well defined knowledge of the distribution of human lifetimes, although the fate of a single person is completely unpredictable. But the statistical predictions of quantum physics are more exquisitely precise than anything in human affairs.


We have long been familiar with the fact that the basic constituents of atomic nuclei are what are called nucleons -- protons and neutrons. The proton, the nucleus of a hydrogen atom, had been known since about 1910. The existence of its partner of approximately equal mass, the neutron, was suggested by Rutherford in 1920 and was established experimentally by James Chadwick (1891-1974) in 1932. Shortly after this the new field of nuclear theory came into existence and proceeded to grow at a rapid rate. It was quickly recognized that a hitherto unknown type of force was involved. It is rather astonishing to realize that, until nuclear forces were introduced, all of the physical phenomena known up to that time were explainable in terms of only two basic kinds of force -- gravitational and electromagnetic. Gravity, intrinsically an extremely weak force, becomes important only when exerted by very large bodies, such as the Earth. All other forces could be described in terms of electric and magnetic interactions. Nuclear forces are of extremely short range -- their effect scarcely extends beyond the boundary of an atomic nucleus and so plays no role at all in the interaction between different atoms. It is only in circumstances completely beyond our own experience -- at the centers of stars and, even more, in such objects as neutron stars, consisting of close-packed nucleons -- that nuclear forces play a central role. It came to be recognized that there are two types of nuclear force, labeled simply as strong and weak. The strong force is what holds the protons and neutrons in a nucleus together, against the electrical repulsion of the protons; the weak force is the agent behind some forms of radioactive decay. We shall not consider either of these forces in any detail here; it is enough to know that they exist.

Once the picture of nuclear structure in terms of neutrons and protons was well established, the attention of many physicists turned to the next step down the ladder -- the possible internal structure of the nucleons themselves. The quest entailed the construction of bigger and bigger particle accelerators, acting as sources of more and more energetic particles -- such as electrons -- as probes. A primary reason for this need for ever increasing energies resides in the de Broglie relation: wavelength equals Planck's constant divided by momentum. Modern particle accelerators are like microscopes for the study of objects smaller by many powers of 10 than anything that can be examined in an optical microscope. To do this requires wavelengths smaller than that of visible light by a similar factor, and the only way to achieve this is to increase the momentum and hence the energy of the probing particles. This research at first generated what seemed like an unlimited list of new and exotic (and short-lived) particles, most of which were clearly not basic building-blocks of nuclear matter. But then, in 1964, the proposal was made that nucleons were composed of triads of quarks -- a name given to them by their inventor, Murray Gell-Mann (1929 - ). The consequences of this theory were far-reaching, and extended well beyond the internal constitution of nucleons. Essentially all the known "heavy" particles (i.e., other than the electron and its relatives, such as the neutrino) could be pictured as combinations of either two or three quarks. Sophisticated symmetry arguments were involved in all this analysis, which led to the prediction of a previously unobserved particle, a kind of excited state of a nucleon. Successful prediction such as this is, as we have said before, an important criterion for a good theory.


There is, of course, much more to physics than the search for new fundamental particles. Indeed, the number of people doing research in this area is probably quite small compared to the number who have been engaged in various aspects of what goes under the name of condensed matter physics -- primarily the physics of solids. Until the invention of quantum theory, the properties of solid materials -- for example, whether they were transparent or opaque to light, whether they were electrical conductors or insulators -- were just a matter for empirical study. This does not mean that the field was largely unexplored. Indeed, for crystalline solids in particular the use of x-rays had led to a very detailed and accurate picture of the arrangements of the atoms. But the reasons for their physical properties were largely a mystery. The application of quantum ideas changed all that.

The first calculations in quantum mechanics had been of the energy states of electrons in individual atoms. The next step was to consider how those energy states would be changed as assemblies of similar atoms were brought more and more closely together. It was discovered that, when this was done, some fraction of the electrons would no longer be attached to a particular atom but would be shared over the whole extent of the assembly. In some cases this would mean that the assembly would become a good electrical conductor, in others it would be an insulator. And there were intermediate cases -- the semiconductors. It was realized that these properties were controllable through the addition of other types of atoms -- what is called doping. Out of this came the transistor, and then the whole science of solid state electronics, which now dominates our communications and computer technology. The transistor was invented in 1947 by John Bardeen (1908 - 1991), Walter Brattain (1902-1987) and William Shockley (1910 - 1989).

Another important area of condensed matter physics is that of low temperatures. Whereas the nuclear and particle physicists were concerned with exploring the properties of matter at higher and higher energies, the low-temperature physicists have been interested in phenomena at the lowest attainable energies, down into the region of millionths of a degree above absolute zero. In terms of energy per particle, this is a factor of about 10²² (10,000,000,000,000,000,000,000) less than the highest energy achievable by modern particle accelerators. Under less extreme conditions, but still in the low temperature range (up to about 100 degrees above absolute zero) much research has been done on the phenomenon of electrical superconductivity, in which the electrical resistance of certain materials falls to zero. The practical possibilities presented by this behavior are immense, especially if materials can be found for which the superconducting property can be pushed up close to room temperature.
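The quoted factor of about 10²² is easy to verify: compare the thermal energy kT per particle at one microkelvin with an accelerator energy of roughly 1 TeV (an illustrative figure for the most energetic machines of the period; the exact value is an assumption of this sketch):

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
EV = 1.602176634e-19 # one electron-volt, in joules

thermal = K_B * 1e-6       # thermal energy per particle at 1 microkelvin, J
accelerator = 1e12 * EV    # ~1 TeV per particle, J (illustrative)

print(accelerator / thermal)  # a ratio of order 1e22
```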


We have described how the theory of solids developed from the consideration of what happens when large numbers of atoms are brought into close interaction with one another through their electrons. A comparable although different situation concerns large numbers of atoms interacting through the exchange of quanta of radiation. This can take place in the condensed solid state but also in gases at low pressures -- even in the near vacuum of interstellar space -- and the controlled exploitation of it made possible the invention of the laser. This is another noteworthy example of how fundamental physics can lead to major contributions to technology.

Once again the story begins with Einstein. In 1916 he developed a new way of deriving Planck's formula for the spectrum of radiation from a hot object. It was already accepted that a quantum of radiation would be spontaneously emitted by an atom in an excited energy state when it fell to a state of lower energy. It was also accepted that an atom in the lower state could be raised to the higher state if a light quantum of the correct energy fell upon it and was absorbed. To this, Einstein added a further possibility -- that the transition of an atom from its excited state to its lower state would be enhanced if it were struck by a photon of the same energy as it would emit spontaneously -- a process of stimulated emission. This process would lead to the appearance of two quanta of a certain energy where there had only been one before, so that if one could start with a large population of atoms in the excited state there would be the possibility of a sort of chain reaction; a single incident photon of the right energy could give rise to a big burst of radiation of the same frequency and wavelength. This was the laser concept.

It was first turned into reality in 1953 by Charles Townes (1915 - ) and his students, using radiation of about 1 cm wavelength emitted and absorbed by ammonia molecules. This is in the range of electromagnetic radiation called microwaves, and they decided to call their invention a maser -- Microwave Amplification by Stimulated Emission of Radiation. Seven years later a similar device using visible light was created by Theodore Maiman (1927 - ). The word "microwave" was replaced by "light" in the acronym devised by Townes and his colleagues, and thus the laser got its name. Its most obvious feature is the amazing purity of its emitted light -- that is, the extremely small range of wavelengths in its emitted light compared to light from the same atomic transition in an ordinary light source. Along with this goes the possibility of producing a beam of light of very high intensity and very small angular divergence so that, for example, it has been possible to place reflectors on the Moon and observe the light returned by them to the position of a laser source on the Earth.


Although this topic does not involve any fundamentally new concepts, no survey of physics would be complete without at least a brief mention of what are called plasmas. A plasma is essentially a gas raised to such a high temperature that a large fraction of its atoms have lost an electron, thereby becoming positive ions. The negative electrons remain in the system, so that it is electrically neutral as a whole. A fluorescent light is a familiar example of a plasma. It may not feel hot to the touch, but its electrical temperature, as measured by the energy of the free electrons in it, may correspond to tens of thousands of degrees.

Plasma has been called "the fourth state of matter." Although (except for natural phenomena such as lightning bolts and the aurora) special steps must be taken to create it on Earth -- primarily electric discharges in gases -- most of the visible matter in the universe is in the plasma state. Virtually the whole volume of a normal star is in the plasma state, with temperatures up to tens of millions of degrees. That is why it is important to include plasmas in any discussion of the physical world. However, the particular interest of plasmas for us on Earth is the possibility of using them to generate "clean" energy. The idea is to achieve this by creating a plasma of certain light elements -- in particular the hydrogen isotopes of atomic mass 2 and 3 -- and make the system hot enough to produce nuclear fusion reactions. Work in this direction has been going on for about 50 years. Success has often seemed tantalizingly close; the view now appears to be that useful power from plasma fusion may be achieved by the middle of the 21st century.


The preceding account has pointed out that physicists have come to recognize four different types of force: gravitational, weak nuclear, electromagnetic, and strong nuclear. (Arranged this way they are in order of increasing strength.) The dream of many theoretical physicists has been to find some basis for combining all these in a single unified theory of forces. Einstein worked fruitlessly for many years, right up until his death in 1955, in an effort to combine gravitation (which was the subject of his general relativity theory) with electromagnetism. Others then took over. One major achievement, in 1967, was a theory that united the electromagnetic and the weak nuclear interactions; this was done (independently) by Sheldon Glashow (1932 - ), Abdus Salam (1926 - 1996) and Steven Weinberg (1933 - ). At the time of writing (1996) there has been no clear advance beyond this point. There have been interesting suggestions that the strong nuclear force was fused with the weak and electromagnetic forces at an early stage in the birth of our universe, when (according to the "big bang" model) the temperature was higher by a huge factor than any that exists today. The gravitational force, despite many efforts, has remained outside the framework of the other three forces, but perhaps it, too, will be brought into a unified scheme one day. It is so incredibly weak compared to the other forces that its very existence is a mystery.


We have pointed out that the study of quantum phenomena forced upon us a revision of our beliefs about the predictability of individual atomic events. But it continued to be an article of faith among most physicists that the laws of strict cause and effect allow us to predict, in principle, the course of all events above the atomic level. The great French physicist Pierre Simon de Laplace (1749-1827) articulated this belief in a famous statement:

"An intelligence which, at a certain instant, knew all the forces of nature and also the situations [(positions and velocities] of the entities in it, and which furthermore was capable of analyzing all these data, could encompass in the same formula the motions of the largest bodies in the universe and those of the lightest atom; to it [this intelligence] nothing would be uncertain, and the future would be as clear to it as the past."

This faith was based on a feature that we mentioned earlier: the power of mathematics to describe the physical processes of nature. It was recognized that certain problems (for example, turbulent motion of fluids) were in practice so complicated that they defied formal mathematical analysis. Nevertheless, it was believed that this was just a practical limitation, not a fundamental one. Another great French scientist, Henri Poincaré (1854-1912) realized, however, that there was more to the situation than this, that -- even with strictly causal mathematical equations -- there were fundamental limits to predicting the long-term history of certain types of physical systems. The key feature was the existence of so-called non-linear terms in the equations of motion. Before the advent of modern electronic computers, the behavior of such systems could not be adequately explored, because -- for example in periodic motions such as the motion of a pendulum -- it would have been prohibitively time-consuming to follow the motion through thousands or millions of cycles. But this is the kind of work -- so-called "iterative calculations" -- for which modern computers are ideally suited. What they do can be called experimental mathematics; the equations are well defined, but their implications can be followed out only if one can repeat a numerical program over and over again. And the results were startling. It had previously been believed that a very small change in initial conditions would produce correspondingly small changes in the final outcome. But it was discovered that the final outcome might be so sensitive to the initial conditions that the long-term situation was in effect unpredictable, that totally different outcomes might be possible*.


* This has suggested such fancies as that the beating of a butterfly's wings might change the world's weather pattern!


This is what Poincaré realized. The phenomenon is called deterministic chaos; it differs from the intrinsic failure of causality in quantum systems, but its consequences are in some ways similar.
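Deterministic chaos is easy to exhibit with an iterative calculation of exactly the kind described above. The logistic map is a standard textbook example (chosen here for illustration; it is not one the article discusses): the rule is strictly causal, yet two starting points differing by one part in a billion soon lead to completely different histories:

```python
def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50):
    """Iterate the logistic map x -> r * x * (1 - x), a simple non-linear
    rule that is fully deterministic but chaotic for r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by one part in a billion:
a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)

# The tiny initial difference roughly doubles at each step, so after a few
# dozen iterations the two trajectories bear no resemblance to each other.
print(abs(a[1] - b[1]), abs(a[-1] - b[-1]))
```

The long-term state is, in effect, unpredictable: any uncertainty in the starting value, however small, is amplified until it swamps the calculation, even though every individual step obeys a strict causal law.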

The exploration of chaotic systems has become a major field of mathematical physics. Although its chief physical application has probably continued to be in fluid mechanics, it has found applications in solid state physics, acoustics, plasma physics, elementary particle physics and astrophysics, as well as in biology and chemistry.


If one looks at the development of physics throughout time, it is a story of a continuing effort to push back the frontiers of our knowledge of the universe. Much of this advance has consisted in the extension of the range of our knowledge in terms of distance and time. When Man was limited to the use of his natural faculties, he could not see anything smaller than a speck of dust -- say about 1/1000 cm in diameter. At the other extreme, although he could see the stars and realize that they were very far away, finding the distance of anything farther away than the Moon (about 400,000 km) was beyond the scope of his abilities. Now we have specific knowledge of distances as small as 10⁻¹⁸ m and as large as 10¹⁵ m. With regard to time, the eye could not separate events occurring less than about 1/50 s apart, and a human lifetime set an upper limit of about 10⁹ s to the duration observable by any one individual, although of course a sense of history would have permitted people to have some awareness of times up to perhaps a few thousand years*.


* And, of course, 19th-century geologists envisaged time scales of hundreds of millions of years without the benefit of well-defined measurements of time.


But contrast that with what has become possible today through physical measurements. The marvels of modern electronics permit the study of times as short as about 10⁻¹⁵ s, and the combination of observation and inference enables astrophysicists to talk with some confidence about times as long as billions of years (10¹⁷ s). So, with respect to both space and time, physics has now given access to phenomena over a range embracing factors of more than 10³⁰. And the attempts to extend these ranges continue.

Our knowledge of how the various components of matter interact with one another to produce the immense (and increasing) variety of specific physical phenomena is not quantifiable in this way, but there can be no doubt that the power of physics to detect, explain and control such phenomena continues to grow. So also does its reach. It has even been suggested that the traditional goal of seeking a minimal number of fundamental principles is in the process of being replaced by a program of using these well established principles to explore an ever expanding catalogue of specific applications.

The truth, I think, is that both processes are at work, and will continue to be. The expansionist part of the program is undoubtedly being aided by the power of modern computational methods, and impinges on other sciences also. All of chemistry can, at least in principle, now be explained in terms of electromagnetic forces and quantum theory, and biology is beginning to derive valuable insights from the application of basic physical principles. This is not to suggest that physics is in any way superior to those other sciences; anyone who looks at the astonishing achievements of chemists and biologists, especially in the 20th century, will quickly be disabused of any such idea. Nor would I wish to suggest that the fate of chemistry and biology will be to become part of physics. Certainly the complexity of biological systems, with which both chemists and biologists are concerned, is so great that it calls for a qualitatively different approach from that of the physicist. The special status of physics is simply that, in a universe built of elementary particles and their interactions, it has been the task of physics to understand these things at the most primitive level. That sentence, in fact, epitomizes what it has been the purpose of this article to explore.

I wish to thank Professor E. L. Jossem for reading this article in draft, and for making a number of valuable suggestions.


Section B1, The Nature of Physics from: Connecting Research in Physics Education with Teacher Education
An I.C.P.E. Book © International Commission on Physics Education 1997,1998
All rights reserved under International and Pan-American Copyright Conventions
