|Call it heresy, but all the big cosmological problems will simply melt away if you
break one rule, says John D. Barrow--the rule that says the speed of light never changes
EVER SINCE 1905, when
Albert Einstein revealed his special theory of relativity to the world, the speed of light
has had a special status in the minds of physicists. In a vacuum, light travels at 299 792
458 metres per second, regardless of the speed of its source. There is no faster way of
transmitting information. It is the cosmic speed limit. Our trust in its constancy is
reflected by the pivotal role it plays in our standards of measurement. We can measure the
speed of light with such accuracy that the standard unit of length is no longer a sacred
metre bar kept in Paris but the distance travelled by light in a vacuum during one 299 792
458th of a second.
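That definition can be turned around and checked directly: once the second is fixed and c is defined exactly, the metre follows. A minimal sketch (the value of c is the exact SI figure quoted above):

```python
# The SI second is defined independently (via caesium clocks); the metre is
# then the distance light travels in a vacuum in 1/299_792_458 of a second.
C = 299_792_458  # speed of light in m/s -- exact by definition

def light_distance(seconds: float) -> float:
    """Distance light travels in a vacuum in the given time, in metres."""
    return C * seconds

one_metre = light_distance(1 / 299_792_458)
print(one_metre)  # ~1.0 metre by construction (up to floating-point rounding)
```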
It took cosmologists half a century to appreciate the full cosmological importance of a finite speed
of light. It divides the Universe into two parts: visible and invisible. At any time there
is a spherical "horizon" around us, defined by the distance light has been able
to travel since the Universe began. As time passes, this horizon expands. Today, it is
about fifteen billion light years away.
This horizon creates a host of problems for cosmologists. Because no signals can travel
faster than light, it is only within the horizon that light has had time to establish some
degree of uniformity from place to place in terms of density and temperature. However, the
Universe seems more coordinated than it has any right to be. There are other ways, too, in
which the Universe seems to have adopted special characteristics for no apparent reason.
Over the years, cosmologists have proposed many different explanations for these
characteristics--all with their attendant difficulties. In the past year, though, a new
explanation has come to light. All you have to do is break one sacred rule--the rule that
says the speed of light is invariable--and everything else may well fall into place.
The first of the problems cosmologists need to explain is a consequence of the way the
cosmological horizon stretches as the Universe expands. Think about a patch of space which
today reaches right to the horizon. If you run the expansion of the Universe backwards, so
that the distances between objects are squeezed smaller, you find that at some early time
T after the big bang that same patch of space would lie beyond the horizon that existed
then. In other words, by time T there would not have been enough time for light to have
travelled from one edge of the sphere bounded by our present horizon to the opposite side.
Because of this, there would have been no time to smooth out the temperature and density
irregularities between these two patches of space at opposite extremes of our present
horizon. They should have remained uncoordinated and irregular. But this is not what we
see. On the largest cosmic scales the temperature and density in the Universe differ by no
more than a few parts in one hundred thousand. Why? This is the horizon problem.
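The mismatch can be made quantitative with a toy model. In a radiation-dominated universe the scale factor grows as a ~ t^(1/2) while the horizon grows as c·t, so running the clock backwards, the patch that fills our horizon today shrinks more slowly than the horizon itself. A rough sketch (the power-law exponent and the time values are illustrative assumptions, not fitted numbers):

```python
# Toy model of the horizon problem: compare the size of the patch that fills
# our horizon today, scaled back to time t, with the horizon that existed at
# time t.  Assumes radiation domination: a(t) ~ t**0.5, horizon(t) ~ t
# (units with c = 1, present time = 1).

T_NOW = 1.0  # present time, arbitrary units

def scale_factor(t: float) -> float:
    return (t / T_NOW) ** 0.5   # a ~ t^(1/2), normalised so a = 1 today

def horizon(t: float) -> float:
    return t                    # distance light has travelled since t = 0

def patch_over_horizon(t: float) -> float:
    """Size of today's horizon-filling patch at time t, divided by the
    horizon that existed at time t.  A value > 1 means the patch was
    causally disconnected from edge to edge."""
    patch_then = horizon(T_NOW) * scale_factor(t)  # patch shrinks with a(t)
    return patch_then / horizon(t)

for t in (1.0, 1e-2, 1e-6, 1e-10):
    print(f"t = {t:.0e}: patch/horizon = {patch_over_horizon(t):.1e}")
```

The ratio grows as t^(-1/2) going backwards: the earlier you look, the more of today's horizon lay beyond causal contact.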
Another, closely related cosmological problem arises because the distribution of mass and
energy in our Universe appears to be very close to the critical divide that separates
universes destined to expand for ever from those that will eventually collapse back to a
"big crunch". This is problematic because in a universe that contains only the
forms of matter and radiation that we know about, any deviation from the critical divide
grows larger and larger as time passes. Our Universe has apparently been expanding for
nearly 15 billion years, during which time its size has increased by a factor of at least
10^32. To have remained so close to the critical divide today, the Universe must have
started expanding incredibly close to the critical distribution of mass and energy--an
initial state for which there is no known justification. This is the flatness problem, so
called because the critically expanding state requires the geometry of space to be flat
rather than curved.
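The fine-tuning involved can be illustrated with a toy calculation. In a radiation-dominated universe the deviation from critical density grows roughly as the square of the scale factor, so an expansion factor of 10^32 amplifies any initial deviation enormously. A hedged sketch (the growth law and today's bound are illustrative assumptions of this toy model):

```python
# Toy illustration of the flatness problem.  Assume, as in a radiation-
# dominated universe, that the deviation from critical density grows in
# proportion to the square of the scale factor: |Omega - 1| ~ a**2.

EXPANSION = 1e32          # growth factor of the Universe (from the text)
DEVIATION_TODAY = 0.1     # suppose Omega is within 10% of critical today

# If |Omega - 1| grows like a**2, the initial deviation must have satisfied:
initial_deviation = DEVIATION_TODAY / EXPANSION**2
print(f"required initial |Omega - 1| < {initial_deviation:.0e}")  # ~ 1e-65
```

An initial tuning of one part in 10^65 is the kind of unexplained coincidence the flatness problem refers to.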
The third major problem with the expansion of the Universe is that Einstein's theory of
gravitation--general relativity--allows the force of gravity to have two components. The
better known one is just a refinement of Newton's famous inverse-square force law. The
other component behaves quite differently. If it exists, it increases in direct proportion
to the distance between objects. Lambda was the Greek symbol used by Einstein to denote
the strength of this force in his theory. Unfortunately, his theory of gravitation does
not tell us how strong this long-range force should be or even whether it should push
masses apart rather than pull them together. All we can do is place ever-tighter limits
on how strong it can be the longer we fail to see its effects.
Particle physicists have for many years argued that this extra component of the
gravitational force should appear naturally as a residue of quantum effects in the early
Universe and its direction should be opposite to that of Newton's law of gravity: it
should make all masses repel one another. Unfortunately, they also tell us that it should
be about 10^120 times larger than astronomical observations permit it to be.
This is called the lambda problem.
Since 1981, the most popular solution to the flatness and horizon problems has been a
phenomenon called inflation that is said to have occurred very soon after the big bang,
accelerating the Universe's expansion dramatically for a brief interval of time. This
allows the region of the Universe seen within our horizon today to have expanded from a
much smaller region than if inflation had not occurred. Thus it could have been small
enough for light signals to smooth it from place to place. Moreover, by the end of this
bout of acceleration the expansion would be driven very close to the critical divide for
flatness. This is because making a curved surface very large ensures that any local
curvature becomes less noticeable, just as we have no sense of the Earth's curved surface
when we move a short distance.
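The geometric intuition in that last sentence can be checked directly: over a fixed walking distance, the bulge of a sphere above a flat chord shrinks as the sphere's radius grows. A small sketch (the sagitta approximation h ~ d^2/8R is standard geometry; the distances are arbitrary):

```python
# How flat does a sphere look locally?  For a chord of length d on a sphere
# of radius R, the height of the arc above the chord (the sagitta) is
# approximately d**2 / (8 * R) when d is much smaller than R.

def sagitta(chord: float, radius: float) -> float:
    """Approximate bulge of a sphere above a flat chord (same units)."""
    return chord**2 / (8 * radius)

EARTH_RADIUS_M = 6.371e6
walk = 1000.0  # walk 1 km in a straight line
print(f"bulge over 1 km on Earth: {sagitta(walk, EARTH_RADIUS_M) * 100:.1f} cm")

# Inflate the sphere a thousandfold and the same walk feels a thousand
# times flatter:
print(sagitta(walk, 1000 * EARTH_RADIUS_M) / sagitta(walk, EARTH_RADIUS_M))
```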
Why should the Universe have suddenly inflated like this? One possibility is that strange,
unfamiliar forms of matter existed in the very high temperatures of the early Universe.
These could reverse the usual attractive force of gravity into repulsion and cause the
Universe to inflate briefly, before decaying into ordinary radiation and particles, while
the Universe adopted its familiar state of decelerating expansion.
Compelling as inflation appears, it cannot solve the lambda problem. It has also had to
confront some new observations of the rates at which distant supernovae are receding from
us. These imply that the lambda force is influencing the expansion of the Universe today
(see "The fifth element", New Scientist, 3 April). Even though the density of matter
might be just 10 per cent of the critical value, the influence of the lambda force means
the geometry of space might still be very close to flatness. If these observations are
corroborated, they make the flatness and lambda problems worse: why is the Universe quite
close to the critical rate of expansion (1 part in 5, say, rather than 1 part in 100 000)
and why is lambda finite, with an influence on the expansion of the Universe similar to
that of the matter in the Universe today? Since these two influences change at different rates
as the Universe ages it seems a very weird coincidence that they just happen to be similar
in strength today when we are here to observe them. These are called the quasi-flatness
and quasi-lambda problems, respectively.
Last year, with a view to providing some alternative to inflation, Andreas Albrecht of the
University of California at Davis, and João Magueijo of Imperial College, London,
investigated an idea first suggested by John Moffat, a physicist at the University of
Toronto. Moffat had proposed that the speed of light might not be such a sacrosanct
quantity after all. What are the cosmological consequences if the speed of light changed
in the early life of the Universe? This could happen either suddenly, as Albrecht,
Magueijo and Moffat first proposed, or steadily at a rate proportional to the Universe's
expansion rate, as I suggested in a subsequent paper.
The idea is simple to state but not so easy to formulate in a rigorous theory, because the
constancy of the speed of light is woven into the warp and weft of physics in so many
ways. However, when this is done in the simplest possible way, so that the standard theory
of cosmology with constant light speed is recovered if the variation in light speed is
turned off, some remarkable consequences follow.
If light initially moved much faster than it does today and then decelerated sufficiently
rapidly early in the history of the Universe, then all three cosmological problems--the
horizon, flatness and lambda problems--can be solved at once. Moreover, Magueijo and I
then found that there is also a range of light-slowing rates which allows the
quasi-flatness and quasi-lambda problems to be solved too.
So how can a faster speed of light in the far distant past help to solve the horizon
problem? Recall that the problem arises because regions of the Universe now bounded by our
horizon appear to have similar, coordinated temperatures and densities even though light
had not had time to travel between them at the moment when these attributes were fixed.
However, if the speed of light were higher early on, then light could have travelled a
greater distance in the same time. If it were sufficiently greater than it is today,
light signals could have traversed a region large enough to expand into the whole of
our present horizon (see Figure).
As regards the flatness problem, we need to explain why the energy density in the Universe
has remained at the critical divide that yields a flat, Euclidean space, even though its
expansion should have taken it farther and farther from this divide. And as for the lambda
problem, we need to explain why the lambda force is so small--instead of the huge value
that particle physicists calculate.
The key point here is that the magnitude of the expansion force that drives the Universe
away from the critical divide, and the magnitude of the lambda force, are both partially
determined by the speed of light. The magnitude of each is proportional to the square of
the speed of light, so a sufficiently rapid fall in its value compared with the rate of
expansion of the Universe will render both these forces negligible in the long run. The
lambda force is harder to beat than the drive away from flatness. Consequently, a slightly
faster rate of fall in light speed is needed to solve the flatness, horizon, and lambda
problems than is required just to solve the flatness and horizon problems.
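Why lambda is "harder to beat" can be seen from a toy scaling argument. Relative to the matter term in the Friedmann equation (which falls as a^-3), the curvature term scales as c^2/a^2 and the lambda term as c^2. If the speed of light falls as a power of the scale factor, c ~ a^n with n negative, the curvature term dies away for n < -1/2 but the lambda term only for n < -3/2. A sketch of that exponent bookkeeping (the power-law form of c(a) and matter domination are assumptions of this toy model, not the full theory):

```python
# Toy exponent bookkeeping for a varying speed of light, c ~ a**n.
# Friedmann-equation terms relative to matter (rho_matter ~ a**-3):
#   curvature / matter ~ (c**2 * a**-2) / a**-3 = a**(2n + 1)
#   lambda    / matter ~ (c**2)         / a**-3 = a**(2n + 3)
# A term becomes negligible as the Universe expands iff its exponent is < 0.

def curvature_exponent(n: float) -> float:
    return 2 * n + 1

def lambda_exponent(n: float) -> float:
    return 2 * n + 3

for n in (0.0, -1.0, -2.0):
    curv = "dies away" if curvature_exponent(n) < 0 else "grows"
    lam = "dies away" if lambda_exponent(n) < 0 else "grows"
    print(f"n = {n:+.1f}: curvature {curv}, lambda {lam}")
# n = 0: both grow; n = -1: curvature dies away but lambda grows;
# n = -2: both die away -- lambda needs the faster fall in c.
```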
Remarkably, a more modest slowing of light allows the quasi-flatness problem to be solved:
it leads to a Universe in which the forces that drive the Universe away from a critical
state ultimately keep pace with one another, neither overwhelming the other. In the same
way, a suitable rate of change of light speed can result in an approach to a critical rate
of expansion in which the lambda force keeps pace with the gravitational influence of
the matter in the Universe.
One advantage that the varying light speed hypothesis has over inflation is that it does
not require unknown gravitationally repulsive forms of matter. It works with forms of
matter and radiation that are known to be present in the Universe today. Another advantage
is that it offers a possible explanation for the lambda problem--something inflation has
yet to solve.
The simplicity of this new model and the striking nature of its predictions suggest that
we should investigate it more seriously. We should find the most comprehensive formulation
of gravity theories that includes a varying speed of light and that recovers existing
theories when that variation is turned off; search for further testable predictions of
these theories; and pursue observational evidence for varying constants that depend on
the speed of light.
The standard picture of inflation makes fairly specific predictions about the patterns of
fluctuations that should be found in temperature maps of the radiation left over from the
early stages of the Universe. Future satellite missions, including NASA's MAP
satellite--due for launch next year--and the European Space Agency's Planck Surveyor, due
for launch several years later, will seek out those fluctuations to see if they match the
predictions. Now we need to work out if a past variation of the speed of light makes
equally specific predictions.
In recent years, dozens of theoretical physicists have been studying the properties of new
superstring theories that attempt to unite the fundamental forces of nature within a
quantum gravitational framework. They have revealed that traditional constants of nature,
such as Newton's gravitational constant or the fine structure constant--formed by dividing
the square of the electric charge on a single electron (e²) by the product of
the speed of light (c) and Max Planck's quantum constant (h/2π)--do not need to be quite
as constant as we thought. If extra dimensions of space exist, as these theories seem to
require, then any change in those extra dimensions will produce variations in the
constants that underpin our three-dimensional space.
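In the SI units most readers will compute with, the combination just described picks up a factor of 4πε₀: α = e²/(4πε₀ħc) ≈ 1/137. A quick numerical check (the constants below are approximate CODATA values quoted from memory):

```python
import math

# Approximate CODATA values, SI units
E = 1.602176634e-19      # electron charge, C (exact in the 2019 SI)
HBAR = 1.054571817e-34   # reduced Planck constant, J s
C = 299792458            # speed of light, m/s (exact)
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Fine structure constant: dimensionless, so its value is unit-independent
alpha = E**2 / (4 * math.pi * EPS0 * HBAR * C)
print(f"fine structure constant = {alpha:.6f} = 1/{1 / alpha:.2f}")
```

Because α is dimensionless, a change in it is unambiguous, which is why it is the favoured target for variation searches.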
Another exciting possibility for astronomers to check is whether variations in constants
that involve the speed of light could still be observable today and not just confined to
the first split second of cosmic history. Recently, Magueijo and I have found that a tiny
residual effect may remain in the Universe--similar in form to that revealed by the
supernova observations--left over from a significant variation of the speed of light in
its very early stages. Since laboratory experiments are not sensitive enough to detect
such small variations, we must look to astronomy for a probe.
Two years ago, John Webb, Michael Drinkwater and Victor Flambaum--all then at the
University of New South Wales, Sydney--and I looked at the spectra of carbon monoxide
molecules and hydrogen atoms in gas clouds. Because of the finite speed of light, looking
at distant cosmological objects is equivalent to looking back to earlier times in the
Universe's history. We checked whether the ratios of energy levels of these atoms and
molecules were intrinsically different at the time of the gas cloud compared with their
values on Earth today. These ratios depend on the square of the fine structure constant
and we found it to be constant to better than 5 parts in a million--a limit a hundred
times better than that found by direct laboratory experiments.
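Because the measured ratios scale as the square of α, a small fractional shift in α shows up doubled in the ratio, and a ratio bound translates back into an α bound. A hedged sketch of that first-order error propagation (the specific numbers are illustrative, not the survey's actual measurements):

```python
# If an energy-level ratio R scales as alpha**2, then to first order a
# fractional change x in alpha produces a fractional change ~2x in R.
# Conversely, a measured ratio shift dR/R implies d(alpha)/alpha ~ (dR/R)/2.

def alpha_shift_from_ratio(ratio_shift: float) -> float:
    """Infer the fractional change in alpha from a fractional ratio change."""
    return ratio_shift / 2

# Illustrative: a ratio agreeing with its lab value to 10 parts in a million
# pins alpha down to about 5 parts in a million.
print(alpha_shift_from_ratio(1e-5))  # 5e-06
```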
More recently, joined by Chris Churchill of Pennsylvania State University in University Park,
we devised a very sensitive technique for comparing relativistic atomic transition
frequencies between iron and magnesium in the spectra from 30 quasars. For the closest and
most distant quasars, we confirmed the limits set by the gas clouds, but those in a narrow
range of distances in between display a shift that is consistent with a variation in the
fine structure constant. This shift could also be caused by a problem that astronomers
call "line blending". Further data have been gathered and should unambiguously
reveal the source of the observed shift.
New telescopes open up the exciting possibility of measuring physical constants far more
stringently than is possible in laboratory experiments. The stimulus provided by
superstring-inspired theories of high-energy physics, together with the theory that a
change in the speed of light in the early Universe may have propelled it into the peculiar
state of near smoothness and flatness that we see today, should provoke us to take a
wide-ranging look at the constancy of nature's "constants". Tiny variations in
their values may provide us with the window we are searching for into the next level of
physical reality.
John D. Barrow is the new professor of mathematical sciences at the Department of Applied
Mathematics and Theoretical Physics and director of the Millennium Mathematics Project at
the University of Cambridge. His latest book, Between Inner Space and Outer Space,
is published by Oxford University Press. Impossibility has recently been published
in paperback by Vintage.