The ideas we’ve been concerned with in the last three articles draw a very good picture for String Theory. It gives us a number of predictions that clearly distinguish it from the Standard Model and from other theories of quantum gravity, such as Loop Quantum Gravity. Not only that, but it unifies the laws of Einstein’s General Relativity with those of Quantum Theory under one overarching framework. But there is a cost. The majority of the predictions made by String Theory deal with such fantastically small numbers that experimental tests for the theory seem to be a long way off, if reachable at all. Nothing would please a string theorist more than to present a list of testable predictions achievable within a fairly short time frame – a decade or so. After all, there is no way to confirm a theory other than by testing its predictions experimentally. However beautiful and elegant a theory may be, as long as it provides no way to test it experimentally, it describes our universe no better than The Sims 4 does. String theorists, however, have come up with a few ideas about such a test. In this article we are going to consider these ideas, which I’ve come across – as always – in Brian Greene’s book “The Elegant Universe”.

Edward Witten once said that String Theory has already made an impressive prediction which has been experimentally proven: “String Theory has a remarkable property: it *predicts* gravity.” What Witten meant is that although both Newton and Einstein developed their theories of gravity a long time ago, they did so to account for what they and others had observed. The very existence of gravity is predicted by neither of those theories, however important and beautiful they are. Conversely, had string theorists known nothing about Newton’s and Einstein’s theories, nor about gravity for that matter, they would inevitably have discovered it while working on String Theory. Because of the existence of a vibrational pattern corresponding to the massless, spin-2 graviton, gravity is a necessary part of String Theory. Of course, we should not use the word “prediction” here, because physicists explained gravity long before String Theory; but what’s remarkable is that gravity *naturally emerges* from the mathematical apparatus of String Theory.

As we saw earlier in this series, however, it is not enough for a theory to give such an explanation of an existing phenomenon to be convincing. The majority of physicists would rather accept it if String Theory either made a testable prediction of a new physical phenomenon or gave an explicit, mathematically backed description of some physical characteristic – for example, the mass of the electron – that has no such description today. In this chapter we’re going to consider a few possible ways which string theorists are exploring in order to arrive at such a prediction.

Although in the physics community String Theory holds the promise of becoming the best thing since sliced bread, today, after a few decades of work, scientists are still not able to pin the theory down so that it gives clear, testable predictions. It is as if you had bought an air conditioning system for the first time and wanted to set it up yourself, but unfortunately there were no instructions explaining how to do that. Similarly, physicists are usually unable to apply the concepts of a new theory to real-world problems until its complete “user manual” has been written. Still, as we are going to see in this chapter, with some luck string theorists might obtain experimental confirmation of some of the essential components of the theory in the coming years.

Is String Theory correct? We don’t know. For those who believe that the laws of the microworld should not be separated from the laws describing physical phenomena in the macroworld, and for those who believe that we should not stop our research until we have a theory with an unconfined domain of applicability, String Theory continues to be the best bet. Of course, it might seem that this widespread attention to String Theory emerged just by chance, and that its place could as easily have been taken by some other theory, had that theory been as mathematically adaptable as String Theory is. There are quite a few physicists who believe that this is indeed the case. Moreover, they occasionally state that exploring a theory whose domain of applicability is limited to the Planck scale is just a waste of time.

In the mid-1980s, when String Theory gained a high level of attention among theorists, some of the most prominent physicists of the time criticised it severely for its untestability. For example, the Nobel laureate Sheldon Glashow said in a speech that physics naturally progresses when theory and experiment both have access to a given question. But instead – he said – string theorists are pursuing a kind of harmony based on mathematical elegance and uniqueness rather than on testable predictions. The very existence of the theory – he continued – rests on a bunch of magical mathematical coincidences. He argued that this is nowhere near enough to believe in the reality of the picture depicted by String Theory, and that such an approach cannot compete with experiment. In the same speech he said that String Theory is so ambitious that it must be either completely right or completely wrong; the problem is that nobody can even guess how long it would take to finish its development. Howard Georgi, another formidable physicist and a famous collaborator of Glashow’s at Harvard, was also an outspoken critic of String Theory in the late 1980s.

Shortly before his death, Richard Feynman indicated that he did not believe String Theory to be the only approach capable of resolving the problem of infinities. He said that in his opinion there could be more than one way to reconcile Quantum Mechanics with General Relativity, and that the fact that String Theory helps us avoid infinities isn’t enough to grant it a special place in physics.

As usually happens, though, for every sceptic in one camp there is an enthusiast in the other. Witten once said that when he learned how String Theory unifies quantum theory and gravity, it was the greatest ‘intellectual shock’ of his life. One of the most prominent string theorists, Cumrun Vafa of Harvard, once remarked that String Theory, without doubt, gives us the deepest understanding of the physical world. And another Nobel laureate, Murray Gell-Mann, regarded String Theory as a fantastic achievement which would eventually become the theory of everything.

So, as we can see, the debates around String Theory focused both on physical aspects and on more philosophical ideas about how physics should progress. People with more ‘traditional’ viewpoints wanted physics to continue following the route that had been so successful over the last few centuries: they wanted theory to remain tightly connected with experiment. Others thought that physics had reached a point where theorists could try to push the boundaries without help from experiment.

The theorist David Gross expressed his thoughts on this matter beautifully in 1988. He wrote that it had hitherto always been the case that when scientists started investigating a new area of physics, the path was paved by experimentalists, with theorists typically following behind. When experimentalists encountered some unliftable rock, they dropped it on the heads of theorists, who then figured out what to do with it and told the experimentalists what kind of obstacle they had run into and what to do with it next time. Even Einstein, who developed an entirely new way of looking at gravity, had a lot of experimental data at his disposal when he started his work. From about the 1970s, however, the tables turned, and, with some exceptions, theorists have since had to pave the way for experimentalists.

The theorists working on String Theory do not want to climb the mountain alone. They would rather share the task with their colleagues from the experimental camp, but the technology we have at our disposal today is, most probably, a long way from being able to achieve the energies required to test String Theory. Even so, as we shall see in this article, string theorists do have some ideas about how to test the theory at least indirectly.

In the 1990s some critics conceded that String Theory actually looks promising. Glashow, for example, connected this with two facts. Firstly, he noticed that in the 1980s the majority of string theorists were over-enthusiastic and often declared that they would soon find the answers to all the questions in physics. In the 1990s, however, they became much more careful in their statements, so Glashow’s initial criticism became less applicable. He also pointed out that researchers whose work wasn’t connected with String Theory weren’t very successful in the late 1980s and 1990s, and the Standard Model seemed to be stuck at that time. This made him accept that String Theory may hold the promise of answering questions which the Standard Model seemed unable to answer.

Georgi agreed with Glashow and walked back his mid-1980s statements in a similar manner. He said that initially String Theory seemed to have bitten off more than it could chew, but a decade later he found that a few ideas from the theory had led to some important results relevant to his own work. Thereafter he regarded String Theory as something genuinely useful. This marked a temporary end to the criticism in the late 1990s; but later, in the mid-2000s, criticism flared up again for the following reason.

In the late 1990s the physics community was shocked to realise that the rate at which the Universe expands is actually speeding up, instead of slowing down as everybody had expected. This was yet another case in which experimentalists observed an unexpected phenomenon and theorists went looking for an explanation post factum. The explanation they found is what we now call ‘dark energy’, which drives the accelerating expansion of the Universe. Some critics of String Theory claimed that the theory should have predicted dark energy, which would have provided very strong support for it. Since that was not the case, string theorists attempted to incorporate this new fact about the Universe into the theory. They discovered a way to do so – but at a cost.

The new form of the theory with dark energy included turned out to have a number of solutions beyond imagining. Some estimates put the number of possible solutions of this new version of String Theory at *10^{500}*! This number is so absurdly large that it should be regarded as virtually infinite, and it brought about a second wave of criticism.

As we saw earlier, the critics of String Theory typically strike at the question of whether or not it is scientific in the traditional sense of the word. Many believe that a theory in the domain of physics is scientific only if it provides some means of experimental test that could either confirm or falsify it. The first wave of criticism focussed on the idea that String Theory actually *explains nothing*, because none of its predictions could be tested. This time, by contrast, the critics focussed on another idea: that String Theory *explains too much*. Although this might seem to be the opposite of the first objection, it refers to the same problem: whether or not String Theory is scientific.

And the latter of the two might be even worse than the former. If a theory has no predictions to be tested, theorists can argue that further development of the theory is needed before it yields predictions that would allow experiment to rule it out or confirm it. But in the latter case the number of possible versions of String Theory became so enormous that there could basically be no way to falsify it. The new model with dark energy incorporated into String Theory was published in 2003, and after that the discontent that had been simmering under the surface began to surface at physics conferences and on the front pages of some leading science magazines.

This criticism became especially noticeable in 2006, after the publication of two books written for the general public and attacking String Theory: Lee Smolin’s “The Trouble With Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next” and Peter Woit’s “Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law”.

Neither of these authors argues that the study of String Theory should be abandoned entirely; rather, they would like to see scientists pay more attention to alternative theories such as Loop Quantum Gravity. Nevertheless, the majority of string theorists dismiss Smolin’s and Woit’s claims as failed attempts to discredit String Theory.

However many attempts are made to call the usefulness of String Theory into question, if the theory eventually makes a prediction that is confirmed by experiment, critics will have to either accept it as it is or provide a better explanation of those results. Finding a prediction that would allow experimentalists to test the theory at least indirectly has been the main focus of many string theorists over the last couple of decades. In the remaining parts of this article we are going to see what they have been able to accomplish.

Without a radical breakthrough in technology we will probably never gain access to the ultramicroscopic scale required for the direct observation of strings, if they exist. With the most powerful particle accelerator to date – the Large Hadron Collider – scientists can probe scales a bit smaller than a billionth of a billionth of a metre. Investigating sizes smaller than that requires more energy, and hence more powerful and bigger accelerators capable of focusing more energy on single particles. The Planck scale is many orders of magnitude smaller than what is achievable nowadays, and according to some estimates, in order to see and measure the properties of a single string we would need an accelerator comparable in size to our galaxy. In fact, that estimate is based on linear extrapolation and is, most probably, *over-optimistic*: as some researchers have shown, we would instead need an accelerator *the size of the entire observable Universe!* The required amount of energy is not actually all that large; the problem is that it is extraordinarily hard to concentrate that amount of energy in a single particle (or a single string, for that matter).

Thus we have to find a way to test String Theory at least indirectly. We need to derive physical consequences of the theory which would manifest themselves on much larger scales than the Planck length, and which, ideally, would be measurable in the coming decades. In their 1985 paper “Vacuum Configurations for Superstrings”, Candelas, Horowitz, Strominger, and Witten made the first steps in that direction. Not only did they establish that the additional dimensions in String Theory are curled up into Calabi-Yau manifolds, but based on this they also derived consequences for the vibrational patterns of strings. One of the most crucial results of their work even shed light on some old questions in particle physics.
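To give a feel for the numbers in the accelerator-size estimate above, here is a back-of-the-envelope sketch using commonly quoted approximate values. This is an order-of-magnitude illustration only, not an engineering estimate:

```python
import math

# Approximate, commonly quoted figures; rough orders of magnitude only.
PLANCK_LENGTH_M = 1.6e-35      # Planck length, in metres
LHC_PROBE_SCALE_M = 1e-19      # roughly the smallest scale the LHC probes
LHC_CIRCUMFERENCE_M = 2.7e4    # the LHC ring is about 27 km around
MILKY_WAY_DIAMETER_M = 9e20    # about 100,000 light years

# How many times smaller is the Planck length than the LHC scale?
gap = LHC_PROBE_SCALE_M / PLANCK_LENGTH_M        # about 6e15

# Naive linear extrapolation: if probing a 10x smaller scale required a
# 10x bigger machine, a Planck-scale collider would have to be this big:
naive_collider_m = LHC_CIRCUMFERENCE_M * gap     # about 1.7e20 m

print(f"scale gap:           ~1e{round(math.log10(gap))}")
print(f"naive collider size: ~{naive_collider_m:.1e} m")
print(f"galaxy diameter:     ~{MILKY_WAY_DIAMETER_M:.1e} m")
```

The naive extrapolation lands within an order of magnitude of the Milky Way’s diameter, which is where the “accelerator the size of a galaxy” figure comes from.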

Recall that the elementary particles found by physicists in the twentieth century fall into three generations, with the corresponding members of each generation having exactly the same characteristics except for their different masses. The question in the physics community before String Theory had been: why are there *three* generations instead of, say, two, or four, or ten? And why do particles form these three generations at all, instead of there being a single list of different particles with no resemblance to one another? String Theory answers this question as follows. Calabi-Yau shapes have (quite loosely speaking) openings in them, as shown in figure 1 below. Such openings may be of many exotic kinds, including those associated with multiple dimensions, but the main idea can also be seen in figure 2, where we show a simple torus to make things less abstract. Candelas, Horowitz, Strominger and Witten carefully analysed the influence these openings exert on strings vibrating in the additional dimensions. What they found was intriguing.

Each such opening turns out to be associated with a *generation of vibrational patterns*, each generation corresponding to strings with the minimal amount of energy allowed by the mathematical apparatus of the theory. Since the theory demands that the known elementary particles correspond to the vibrational patterns with the lowest amounts of energy, the existence of several openings in a Calabi-Yau manifold implies that the vibrational patterns of strings break up into the corresponding number of generations. If a compactified Calabi-Yau shape has three openings, then we are to find three generations of elementary particles! So according to String Theory, the observed separation of elementary particles into three families is not inexplicable, but rather is determined by the number of openings in the geometrical shape formed by the additional spatial dimensions.
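For readers who want a slightly more precise version of the “openings” statement: in the Candelas-Horowitz-Strominger-Witten construction, the number of generations equals half the absolute value of the Euler characteristic of the Calabi-Yau manifold, which in turn is fixed by two of the manifold’s topological invariants, its Hodge numbers. The sketch below shows the counting rule; the Hodge numbers in the example are illustrative, not those of any specific manifold:

```python
def generations(h11: int, h21: int) -> int:
    """Number of particle generations for a Calabi-Yau manifold with
    Hodge numbers h11 and h21: half the absolute value of the Euler
    characteristic chi = 2 * (h11 - h21)."""
    chi = 2 * (h11 - h21)
    return abs(chi) // 2

# A shape whose Euler characteristic has absolute value 6 would yield
# the three observed generations:
print(generations(h11=1, h21=4))   # chi = -6  ->  prints 3
```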

That said, you might think that in 1985 physicists came close to finding the Calabi-Yau shape which would yield the physical properties we observe in the Universe. The number of openings in different Calabi-Yau shapes, though, varies drastically: some shapes contain three openings, some five, some ten, and some up to nearly 500. The problem lies in the fact that physicists still have no way of discerning which of the Calabi-Yau manifolds bears investigating more than the others.

You might think that, since we know for certain there are three generations of elementary particles, we should pick out the shapes with three openings and examine them one by one until we find one that yields characteristics similar to those of our Universe. Unfortunately, this idea has two shortcomings. Firstly, we are not really sure that there are no more than three generations of particles: three is the number of families we currently know of, but we cannot rule out the possibility of finding more in the future. Secondly, as we shall see shortly, this restriction would actually do us no good, since the number of shapes to consider would still be infinite. Such a method of arriving at testable predictions is therefore out of the question, and we need a better way to rule out the majority of candidate shapes. Despite all of this, however, the very fact that String Theory seems capable of explaining why there are three generations of elementary particles is fascinating in its own right, and represents a great deal of progress in particle physics.

The number of generations of elementary particles isn’t the only thing which could be explained once the shape of the extra dimensions is determined. Because the shape of the Calabi-Yau manifold influences the modes of string vibration, the extra dimensions also heavily influence the characteristics of both the matter particles and the force mediators. Strominger and Witten also showed in their work that the masses of the particles in each of the three generations depend on the way the multidimensional openings intersect and coincide with one another. If this doesn’t make much sense to you, do not worry, you are not alone. This kind of detail in String Theory is barely visualisable, but to give you a rough idea: the positions of those openings, and the way the Calabi-Yau shape wraps around them, determine the possible vibrational patterns for the strings vibrating inside that shape. So the question which had no answer whatsoever in the previous theories – namely, what underlying mechanism gives the masses of electrons, quarks, neutrinos and other matter particles their particular values – possibly obtains an answer in String Theory, albeit we would first need to find the precise Calabi-Yau shape corresponding to the structure of our Universe.

The previous paragraph should have given you an idea of how String Theory might explain the characteristics of matter particles, such as their mass and charge; string theorists also hope, of course, that one day they will be able to explain the characteristics of the force-carrier particles in a similar way. When strings vibrate in 10 dimensions, some of their vibrational patterns correspond to particles with integer-valued spin. These modes are the candidates for the force mediators, such as photons and gluons. And, strikingly, regardless of the shape of the extra dimensions, a mode corresponding to a massless particle with spin 2 is always present. As you might guess, we identify this mode with the graviton – the force mediator of gravity. The list of strings corresponding to particles with spin 1, however, depends largely on the geometric shape of the extra dimensions, as do their characteristics, such as the intensity of the interactions they mediate and their gauge symmetries. To summarise this part of the chapter: String Theory might provide us with a scheme explaining all the richness we see in the microworld, which is very exciting; but without knowing which of the huge number of possible Calabi-Yau manifolds the extra dimensions of our Universe are rolled up into, we cannot derive a testable prediction from the theory.

Why can’t we find out which of the candidate shapes is the one we are looking for? As we mentioned in earlier chapters, the mathematical apparatus of the theory is so astonishingly complex that we cannot even derive its final set of equations. What physicists do instead is use approximate equations, and in these equations all of the possible candidate shapes are on an equal footing; none of them seems more promising than the others. Thus particular testable predictions still elude physicists.

We can rephrase our question as follows: even if the mathematical equations of the theory don’t allow us to find out which Calabi-Yau shape the theory chooses, does *any* of those choices conform with the phenomena observed in the Universe? In other words, if we calculated the physical characteristics given by every possible Calabi-Yau manifold and gathered them into a single enormous catalogue, would we be able to find one (or maybe more) that describes our Universe? This is a very serious question, but there are a couple of reasons why we cannot answer it exhaustively.

It would be reasonable to start that kind of research by taking only those Calabi-Yau shapes with three openings, thus leading to three generations of elementary particles, right? This would shorten the list of candidate shapes considerably. But the problem lies in the fact that we can easily deform a torus with three openings (the same, of course, applies to Calabi-Yau shapes) from one form into many others – actually, into an infinite number of forms – without changing the number of openings. One such transformation of the previously considered torus is shown in figure 3 below.

Similarly, we can change the form of a Calabi-Yau shape through an infinite number of transformations. When we were talking about 10^{500} possible Calabi-Yau shapes, we actually grouped all such transformations together and counted each infinite group as a single manifold. To add insult to injury, each of the 10^{500} shapes admits an infinite number of transformations, and the strings’ vibrational patterns depend strongly on those transformations. That’s why restricting the possible shapes to those with three openings would not actually bring us any closer to a solution. Even if all the people in the world were trying to find the answer by considering the shapes one by one, with an infinite number of possibilities they would never succeed.

On top of that, the approximate equations used by string theorists are not sufficient for determining precisely which physical characteristics correspond to a given manifold. Those equations allow physicists to make big steps forward and to obtain a rough picture of the characteristics of a vibrating string, but the exact picture – including the mass of a particle, the intensity of an interaction and the like – requires equations whose accuracy far surpasses that of the approximate scheme. In earlier articles we mentioned that the typical energy scale of String Theory is comparable to the Planck energy, and the modes of vibration corresponding to the known elementary particles arise through an extremely precise mechanism of cancellations. Such delicate cancellations require precise calculations, because even a small margin of error can heavily influence the result.

So, what to do? After so many solutions to the String Theory equations were derived in the mid-2000s, some researchers lost heart, concluding that the theory would never make definitive testable predictions. But here the idea of a Multiverse came to the rescue. There are many different versions of the Multiverse theory, some of which are covered in our blog, and we won’t go into the details here; what is relevant to us is that in some of those versions the number of universes within the Multiverse might be enormously large, up to infinity. Combining this with the Inflationary model and the anthropic principle, we get the result that *each* of those Calabi-Yau shapes *is* actually realised, but different shapes are spread out across different universes. As we saw earlier in this chapter, this result provoked the major lines of criticism of String Theory from quite a few physicists, but that does not stop the researchers working on the theory from looking for new methods within the theoretical framework in order to eventually derive some testable predictions.

A description of all the characteristics of the elementary particles and of the fundamental interactions would be one of the greatest – if not *the* greatest – accomplishments of theoretical physics. But after all this discussion you might ask a completely reasonable question: are there any features of String Theory which would, perhaps, not confirm the theory but at least favour it if found in experimental results, and which could be tested in the near future (or are perhaps being tested already)? Yes, there are such features.

Our inability to derive specific testable predictions dictates that, for now, we should focus on general rather than specific features of the theory. By general we mean characteristics which are not subject to the subtle details of the theory, such as the shape of the extra dimensions, but rather represent attributes that will, most probably, remain part of String Theory forever. Such characteristics carry high credibility because the theory relies heavily on them: even without a fully developed theoretical framework, most researchers believe that if String Theory is correct, then so are these features. In the remaining part of this article we shall focus our attention on such general characteristics, starting with supersymmetry.

As we discussed in the seventh chapter of this series, String Theory makes use not only of already established principles of symmetry, such as translational and rotational symmetries, but also of what is seemingly the greatest possible mathematical kind of symmetry, called supersymmetry. As was discussed, this implies that in String Theory all the matter particles are tied to the force mediators, forming pairs whose members differ from each other in spin by 1/2. (Recall that all matter particles have half-integer spin, whereas the force mediators’ spin is integer-valued.) This connection between fermions (matter particles) and bosons (force mediators) allows String Theory to make a prediction: each known particle should have a superpartner. What’s important here is that none of the known particles is a superpartner of any other, which implies that according to String Theory (and indeed to other theories that rely on supersymmetry) there should be particles that have never been observed. Some researchers even speculate that these superpartners might account for one of the greatest mysteries of modern cosmology – dark matter. The existence of superpartners is one of the essential components of String Theory, and it does not depend on the subtle characteristics of the extra dimensions and the like.
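The pairing rule described above can be illustrated with a small sketch. The superpartner names are the standard hypothetical ones from the supersymmetry literature (selectron, photino, and so on); none of these partner particles has ever been observed:

```python
# (particle, spin) -> (hypothetical superpartner, spin).
# Every pair differs in spin by exactly 1/2: fermions (half-integer spin)
# get bosonic partners, bosons (integer spin) get fermionic partners.
superpartners = {
    ("electron", 0.5): ("selectron", 0.0),
    ("quark",    0.5): ("squark",    0.0),
    ("neutrino", 0.5): ("sneutrino", 0.0),
    ("photon",   1.0): ("photino",   0.5),
    ("gluon",    1.0): ("gluino",    0.5),
    ("graviton", 2.0): ("gravitino", 1.5),
}

for (name, spin), (partner, pspin) in superpartners.items():
    assert abs(spin - pspin) == 0.5   # the supersymmetry pairing rule
    print(f"{name} (spin {spin}) <-> {partner} (spin {pspin})")
```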

Superpartners have not been found to date. This might mean that supersymmetry is simply wrong, but another possibility is that superpartners are too heavy to be detected with modern-day accelerators. Many String Theory proponents hoped that superpartners would be found at the LHC, but those expectations have not been met yet. Researchers have made a handful of estimates of the masses of the lightest superpartners, and according to some of them the superpartners should already have been found. This hasn’t happened, but we certainly cannot rule out the possibility that the masses of the lightest superpartners are higher than expected. There is a great deal of hope that the 2016 upgrade of the LHC gave the accelerator enough power to find superpartners. Time will tell. Another possibility is that the superpartners’ masses are not that high, but that we need more elaborate methods to detect their presence. So even though the first expectations weren’t met, many researchers continue to believe that supersymmetry will be confirmed fairly soon.

There is one detail, however, that we should bear in mind. Even if superpartners are found, that alone would not be sufficient to state that String Theory is correct. As we have seen, there are other theories that rely on supersymmetry, and they would also gain credence if supersymmetry were confirmed. Even though supersymmetry was discovered by researchers working on String Theory, it can be incorporated into other theories quite easily, and hence does not apply uniquely to String Theory. Still, if superpartners are found, either at the LHC or at some other particle accelerator, it would be a very strong point in favour of the theory.

Another possible experimental confirmation of String Theory’s ideas has to do with particles carrying fractional electric charge. Of course, we already know of some elementary particles with fractional electric charge – the quarks. But the range of electric charges the known particles carry is very limited. Quarks and anti-quarks have charges whose magnitudes (taking the electric charge of the electron as the basic unit of measurement) are equal to 1/3 and 2/3, while the charges of all the other particles in the Standard Model are 0, +1 and -1. All the matter in the Universe we currently know of is made of combinations of these Standard Model particles. String Theory, however, admits the possibility of modes of vibration corresponding to exotic electric charges, e.g. 1/5, 1/11, 1/53 and the like. These strange charges can occur when the curled-up dimensions possess a certain geometric property, which is too technical to be discussed here and whose description would only over-complicate things, so we won’t go into that kind of detail.

Some Calabi-Yau shapes possess this geometric property and some do not, so this prediction is not as fundamental to String Theory as the existence of superpartners. On the other hand, it has an advantage over supersymmetry. As mentioned above, supersymmetry can be applied to other theories as easily as to String Theory, whereas the prediction of particles with such exotic electric charges is far more specific to String Theory. Such particles could be bolted onto models based on point particles, but there they would look awkward and unmotivated, while in String Theory their existence follows naturally from the shape of the extra dimensions.

In 1997 physicists observed what are known as quasiparticles carrying fractional electric charge. But these are not true elementary particles; they are emergent entities that merely behave like particles. A genuine elementary particle with such an exotic electric charge has never been observed. So again we are left with two possibilities: either such particles do not exist in our universe, being mere mathematical artefacts of String Theory's equations, or their masses are too large for them to be produced at operating particle accelerators. If they are someday found, however, that too would be a very strong point in favour of String Theory.

There are other possibilities as well, where either the confirmation of one of String Theory's predictions or the explanation of a previously inexplicable phenomenon could lend the theory considerable credence. For instance, as Witten once pointed out, astronomers might find clear evidence supporting String Theory's ideas. As we saw in the sixth chapter of the series, strings are typically extraordinarily small – about the Planck length. But strings carrying huge amounts of energy can grow much larger. Right after the Big Bang all the energy in the Universe was concentrated within a tiny volume, which might have caused a few strings to grow to macroscopic size. Those strings would have continued to stretch as the Universe expanded, and today they could be astronomically large, while remaining one-dimensional (we will come to objects of higher dimensionality when we discuss M-theory in a few chapters). If this really happened, we might one day find evidence of such a string – for example, a small but noticeable imprint in the cosmic microwave background radiation. As Witten said, he could not be happier than to see this kind of confirmation of the theory's ideas, though it still sounds like science fiction.

Apart from that, there are several other possibilities, related to experimental facilities probing microscopic scales rather than cosmological ones. Here are a few examples.

- The explanation of dark matter. The two greatest mysteries in cosmology today are dark matter and dark energy (unfortunate names, in my view, since the general public easily confuses them even though they are completely different things). Dark energy has become the great missed opportunity for String Theory: had the theory predicted its existence, the subsequent confirmation would have been an enormous success. That was not the case, and, as we have seen, incorporating dark energy into the theory later yielded an enormous number of solutions to its equations. Dark matter, by contrast, was already a major focus of study before String Theory, yet its explanation is still to be found. If String Theory could provide a testable prediction about dark matter, it would be a serious target for experimental study. One possible explanation, as we have already seen, is that dark matter particles are the superpartners of the known particles. Over the last decade this question has become one of the cornerstones of modern cosmology, and any theory that solves the puzzle will gain great credence.
- Another possibility is the derivation of definite values for the masses of neutrinos. The Standard Model originally took neutrinos to be massless, but by 2002 experiments had shown that this is not the case: all three sorts of neutrino (electron, muon, and tau) carry very small masses. The problem is that we only know ranges for those masses, not their exact values. Measuring them directly is extremely difficult, though better experimental facilities keep narrowing the ranges. If String Theory could explain the properties of neutrinos and derive a clear prediction of their masses, that would be a strong candidate for experimental proof.
- The third way of connecting String Theory to experiment, however odd it might sound, is to look for new fundamental interactions. Some Calabi-Yau manifolds admit vibrational patterns that correspond to new interactions, whose fields could have relatively low intensity and long range. If such interactions were found experimentally, String Theory could explain them, while other theories remain silent on the matter. A team of physicists recently even published a paper claiming possible footprints of such a new interaction. It is far too early to celebrate – as often happens, the paper is not very convincing – but it shows that the idea of new fundamental interactions is not as crazy as it might sound.
- Moreover, there are hypothetical processes which are forbidden by the Standard Model but allowed within String Theory's framework. One of them is proton decay, permitted by several so-called grand unified theories (GUTs) such as the Georgi–Glashow model, the Pati–Salam model, SO(10), and flipped SU(5), and also by frameworks that are close to, but not quite, GUTs: technicolor models, loop quantum gravity, causal dynamical triangulation, String Theory (M-theory), and others. The proton is stable in the Standard Model because baryon number is conserved there; GUTs allow proton decay by explicitly breaking baryon number symmetry. According to these theories, a proton's half-life would be around 10^{31} to 10^{36} years (that is, a typical proton would decay no more often than once in ten billion billion trillion years), so it should not be surprising that we have not observed such decays even if they occur. Experimentalists have been searching for proton decay for decades, so far without success. That might change in the coming years, and an observation would be fertile ground for explanation by the theories above.
- And finally, the fifth area where new theories might succeed is in explaining the observed value of the cosmological constant. As we discussed in the third article of this series, Einstein introduced it into his field equations to make the theory compatible with a static universe, which seemed self-evident at the time. After Edwin Hubble showed that the Universe in fact expands, the cosmological constant was long taken to be zero – until the 1990s, when two teams of astronomers established that the expansion of the Universe is accelerating, which is consistent with a small positive cosmological constant. The most natural way of obtaining a positive cosmological constant from current theories is to attribute its energy to the pairs of virtual particles constantly appearing and annihilating everywhere, including empty space. The problem is that calculating its value this way gives a result about 120 orders of magnitude larger than the observed value. *That is 1 followed by 120 zeroes!* This has been called the biggest failure of our current theories in explaining the Universe we live in, which is why it is among the most important open questions in theoretical physics. If String Theory could explain this huge discrepancy and give a testable prediction of the exact value of the cosmological constant (the observed value, again, is known only within a range), it would be a major success for the theory.
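To get a feel for the proton-decay numbers above, a back-of-the-envelope calculation shows why experiments watch enormous tanks of water: with a half-life of 10^{34} years, a 50,000-ton tank (roughly the scale of the Super-Kamiokande detector) would still see on the order of one decay per year. The sketch below is illustrative arithmetic only; the tank size and half-life are assumed inputs, not figures from any specific published analysis.

```python
import math

# Rough arithmetic behind proton-decay searches (illustrative only).
AVOGADRO = 6.022e23          # molecules per mole
WATER_MOLAR_MASS = 18.0      # grams per mole of H2O
PROTONS_PER_MOLECULE = 10    # H2O: 8 protons in oxygen + 2 in hydrogen

def expected_decays_per_year(water_tons, half_life_years):
    """Expected proton decays per year in a tank of water, assuming
    each proton decays independently with the given half-life."""
    grams = water_tons * 1e6
    molecules = grams / WATER_MOLAR_MASS * AVOGADRO
    protons = molecules * PROTONS_PER_MOLECULE
    rate_per_proton = math.log(2) / half_life_years
    return protons * rate_per_proton

# A 50,000-ton tank with an assumed half-life of 1e34 years:
print(expected_decays_per_year(5e4, 1e34))  # roughly one decay per year
```

Note how the huge number of protons (~10^{34} in such a tank) almost exactly offsets the tiny per-proton decay rate, which is what makes these searches feasible at all.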

The history of physics offers a number of ideas that initially seemed impossible to put to the test, yet were later confirmed experimentally thanks to new facilities and new testing methods that were unforeseen when the ideas were proposed. Examples include the atomic structure of matter, Pauli's hypothesis of the neutrino, and the existence of neutron stars and black holes. All three are now established physical facts.

String Theory has so far been in exactly such a position: it cannot yet be tested experimentally. However, as Quantum Theory shows, it may take several decades for a theory to mature – and Quantum Theory had an advantage over String Theory in that it had access to experiment from the start. Even so, it took physicists about three decades to fully develop the logical structure of Quantum Theory, and two more decades to merge it with Einstein's Special Relativity. String Theory attempts to consolidate Quantum Theory with General Relativity, which is considerably harder, and it does not yet have access to experiment.

How long will it take physicists to bring String Theory to experiment? Nobody knows. That means a large number of specialists are playing a risky game: the work of their entire lives may remain unconfirmed, or even be proven wrong, in the future. Progress in theoretical and experimental research will certainly be considerable in the coming decades, but will it suffice to overcome all the obstacles and finally put the theory to the test? Could the indirect tests described above help us prove or disprove the theory? These questions matter to everyone working on String Theory, but nobody can answer them definitively. Only time will tell. Still, the way String Theory resolves the greatest contradiction of twentieth-century physics, its potential to describe all matter and all fundamental interactions within one overarching framework, and its potentially unbounded predictive power make it worth exploring and justify the risks.

Thanks everyone.

Previous articles in this series can be found at:

Part 1: Following Einstein’s Dream

Part 2: Special Relativity – the Picture of Space and Time

Part 3: General Relativity – The Heart Of Gravity

Part 4: Quantum Mechanics – the World of Weirdness

Part 5: General Relativity vs Quantum Mechanics


*cover credit: pinterest.com*

In his Special and General theories of Relativity, Einstein resolved two of the three main conflicts in physics, and his theories radically changed our understanding of the Universe. String Theory, in turn, allowed physicists to resolve the third, remaining contradiction – arguably the most important and most challenging of them all. And String Theory has demanded even more dramatic changes to our view of physics. The shake-up of basic concepts has been so strong that it has called into question even such seemingly unshakable notions as the number of spatial dimensions in the Universe. That question is what this chapter is concerned with. For a more detailed explanation of these concepts I suggest reading Brian Greene's book "The Elegant Universe", from which the main ideas of this article are taken.

**A Familiar, Seemingly Unquestionable View**

Our intuition is continuously fed by our daily experience. But the role of experience goes further: it forms the basis on which we interpret and analyse events around us. The experiences of two people raised in different cultures might be anything but similar. Some phenomena, though, are experienced by *anyone* regardless of where they grew up, and it is typically the beliefs underpinned by such universal experience that are extraordinarily hard to reassess. Consider a simple example. If you stand up from your chair and decide to go somewhere, there are essentially three independent directions you can move in, i.e. three spatial dimensions. You might object that you can actually move in many more directions, but *all* of them are combinations of "left or right", "forward or backward", and "up or down". This is what we mean by three independent directions, or three spatial dimensions. Each time you take a step, you make three independent decisions about the direction of that step, one in each dimension.

When we considered Special Relativity we saw that each point in the Universe can be specified by three parameters indicating its position in the three dimensions. In New York, for instance, you can arrange a meeting with your boss by naming a street (the left-right direction), an avenue (the forward-backward direction), and a floor number (the up-down direction). Einstein's theory also showed that time can be treated as a fourth dimension, thought of as the "future-past" direction. So when arranging a meeting you must also specify a time, in addition to the street, avenue, and floor. This brings the number of dimensions to four (three spatial and one temporal): events in the Universe are defined by where and when they occurred (or will occur).

This property seems so fundamental and obvious that it is rarely even mentioned. Nonetheless, in 1919 the German mathematician Theodor Kaluza was bold enough to propose that there might be an additional spatial dimension we had simply never been aware of. Initially the proposal did not pan out – as Carl Sagan's famous line goes, "extraordinary claims require extraordinary evidence" – but the idea later hit its stride. An extension of it is essential for String Theory to be mathematically consistent.

**Kaluza-Klein Theory**

The idea that there are more than three spatial dimensions might sound bizarre, mystical, even pointless, but as we shall see it rests on careful reasoning. Let us start with a simple example. Suppose you are looking at a wire on the street from a distance of 500 metres. From that distance the wire looks like a line: it extends in only one dimension from our perspective (no binoculars allowed yet). If you imagine an ant living on that wire, you might think it has only one independent direction to travel in (up or down in figure 1 below). To specify its position, you would only need its distance from the top or bottom of the wire. The point of the example is that from 500 metres away the wire appears to be a one-dimensional object.

Of course this picture is deceptive. If we approach the wire, we see that it has a circumference – another independent direction for our ant to travel along. The circumference is hard to make out from 500 metres, but through binoculars we can clearly see the ant travelling in two independent directions (up-down, and around the circumference), as shown in the magnified part of figure 1. We now have to specify two independent numbers to define the ant's position: one giving its position along the vertical axis (up-down), the other its position around the circumference (the clockwise-counter-clockwise direction). This reflects the fact that the surface of the wire is two-dimensional. The wire itself is of course a three-dimensional object, but since in this example we are concerned only with its *surface*, the two numbers above suffice to pin down the ant's location.
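The "two numbers" that pin down the ant's position can be made concrete with a small sketch: a height along the wire and an angle around its circumference together determine a unique point on the surface. The 2 mm radius below is an arbitrary illustrative value, not anything from the text.

```python
import math

def surface_point(height, angle, radius=0.002):
    """Map the ant's two surface coordinates (height along the wire,
    angle around its circumference) to a point in ordinary 3D space.
    The radius of 2 mm (0.002 m) is an arbitrary illustrative value."""
    return (radius * math.cos(angle), radius * math.sin(angle), height)

# Two coordinates fully specify the ant's position on the surface:
print(surface_point(1.5, math.pi / 2))
```

Notice that the 3D embedding is bookkeeping only: once the radius is fixed, every surface point corresponds to exactly one (height, angle) pair, which is what "two-dimensional surface" means here.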

These two dimensions differ in an obvious way. One is long, extended, and easily noticed. The other (clockwise-counter-clockwise) is short, "curled up", and hard to spot; to notice it we needed a tool with greater resolution.

What this example shows is that dimensions can be long, extended, and easily visible to the naked eye, but they can also be tightly curled up and hard to notice. In our example the hidden dimension was easily revealed with binoculars, but if the wire were much thinner – as thin as a human hair, say – the task would be far harder.

In 1919 Kaluza wrote a paper and sent it to Einstein. In it he proposed a remarkable hypothesis: the Universe might have more spatial dimensions than the three we know from everyday experience. His motive was this: the hypothesis provided a way to construct an elegant and powerful mathematical framework uniting Einstein's General theory of Relativity and Maxwell's theory of the electromagnetic field into one conceptual system, which we will examine a little later. But how could this hypothesis be reconciled with the obvious fact that we see and perceive only three dimensions?

Kaluza's paper did not answer that question explicitly, but in 1926 the Swedish physicist Oskar Klein refined Kaluza's proposal and spelled the answer out. His work showed that the structure of our Universe may contain *both extended and curled up dimensions*. The three dimensions we know are like the up-down direction in figure 1, but in addition the Universe might contain dimensions curled up so tightly that they remain inaccessible even to our most powerful instruments. Specifically, Kaluza and Klein suggested that besides the three familiar extended spatial dimensions there is one more whose presence we have had no chance to detect. The total number of spatial dimensions would then be four, for an overall total of five: four of space and one of time.

If we draw the picture of the Universe proposed by Kaluza and Klein, we find the following. Imagine an apparatus so powerful that it can magnify billions of billions of times beyond the most precise microscopes available today. Taking a region of completely empty space and increasing the magnification, we would at first see ever-smaller patches of nothingness. After a few more magnifications, however, something interesting happens: an additional, fourth dimension in the form of a small circle unfolds, as shown in figure 2 below.

Here we see the structure of space as Kaluza and Klein envisioned it, with one important simplification: the circles are drawn only at particular points of the grid. What Kaluza and Klein actually proposed is that these loops exist at *every* point of the space spanned by the three extended dimensions, just as the circumferential dimension exists at every point along the wire's extended dimension in figure 1.

Despite the similarity with the wire example, this picture differs from it in a couple of ways. The Universe contains three extended dimensions (only two of which are shown in the figure), as opposed to the wire's single extended dimension. More importantly, with the wire we were describing one extended dimension of an object *inside* the Universe, whereas here we are describing the structure of the Universe itself – quite a different matter. The main idea, however, is the same: curled up dimensions can be so exceedingly small that detecting their presence is extraordinarily hard.

As already mentioned, the circles representing the additional dimension in figure 2 are drawn at particular points only for clarity; the theory holds that such circles exist at every point around us. If our ant from the wire were small enough, it could walk along this fourth dimension, and we would need four numbers to specify its exact position – plus, of course, a fifth number giving its 'location' in the temporal dimension.

So we have arrived at an interesting conclusion. Following Kaluza and Klein's reasoning, the fact that we perceive only three spatial dimensions does not rule out additional curled up (or "compactified", as string theorists say) dimensions hidden in the very structure of space. The Universe may have more dimensions than the eye can perceive. But how small must these dimensions be? In 1926 Klein combined Kaluza's original idea with the apparatus of quantum mechanics and found that the additional dimension should be roughly the Planck size – about 10^{-35} metres – whose investigation requires energies far beyond our current, or even imaginable, limits. Things become less clear-cut when String Theory enters the picture; we will come to its additional dimensions shortly.
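The Planck length quoted above is not an arbitrary number: it is the unique length that can be built from the reduced Planck constant ħ, Newton's gravitational constant G, and the speed of light c, namely √(ħG/c³). A quick sketch of the arithmetic, with the constants rounded to four significant figures:

```python
import math

# Fundamental constants in SI units (rounded to four significant figures).
HBAR = 1.055e-34   # reduced Planck constant, J*s
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s

# The only combination of these constants with units of length:
planck_length = math.sqrt(HBAR * G / C**3)
print(planck_length)   # ~1.6e-35 metres
```

Dimensional analysis alone forces this combination, which is one reason the Planck scale appears so naturally in any theory that involves gravity (G), quantum mechanics (ħ), and relativity (c) at once.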

**Examining a Wire Universe**

The examples above help explain why there is reason to believe our Universe might contain spatial dimensions beyond the familiar three. But even for those who study the subject intensively, mathematics and all, it is very hard – if possible at all – to clearly picture a universe with more than three spatial dimensions. For this reason physicists usually build intuition by taking a step back and asking how we would perceive space if we lived in a two- or even one-dimensional universe. For example, what if the entire Universe were like the wire we considered in the earlier section? This "Wire Universe" should be taken literally as an entire universe: the wire is all there is, so we must stop picturing it as remote observers and instead imagine ourselves as living organisms within it. To make the analogy complete, let us begin with an extreme case and assume that the circumferential dimension of this universe is compactified and that we have no inkling of its presence. We are then left with only one dimension, so we will call this universe "Lineland", since a line is likewise a one-dimensional object (the example is taken from Brian Greene's book, so I can take no credit for it).

Life in this universe differs dramatically from what we are used to. Bodies from our Universe, such as ants, simply cannot exist in Lineland for a simple reason: there is no room for them, since Lineland has only one dimension. As a creature of Lineland you must fit inside it; try to imagine that, and you will immediately see that you can live in Lineland *only* if your body has neither width nor height, but only length.

Now imagine that, like a mammal from our Universe, you have two eyes. As a one-dimensional creature, the only sensible place for them is at the two ends of your body – otherwise you would see nothing but some point inside your own body, which is far from practical! Moreover, instead of swivelling in their sockets as in a human body, your eyes are forever fixed in the single possible direction: straight ahead. This is not an anatomical limitation but the only option open to evolution. And there is more. A little thought shows that all you can ever see in Lineland is whatever object – or creature – is right next to you, and this holds for both eyes. There is simply no way to see past it, because, once again, the entire universe is a straight line.

We could keep reasoning through the implications of life in Lineland, but we would quickly conclude that it is not rich in possibilities. In Lineland you cannot walk past a friend "standing" in front of you; indeed you never even see their body, only one eye! Once the inhabitants of Lineland have settled in, they have no way of ever swapping places. Life in such a universe is, to put it mildly, far from interesting.

But suppose that one day two friends in this universe, named Lineluza and Kline, hit upon a brilliant idea. What if, they suggested, Lineland is not actually one-dimensional, but has an additional circumferential dimension that has gone unseen because of its extraordinarily small size? They began to envisage all the magical consequences that would follow if only that additional dimension could be made larger – a possibility that could not be ruled out according to the earlier work of their colleague Linestein. Lineluza and Kline described a magical world in which inhabitants could easily pass one another via the second spatial dimension, could see one another's bodies from different angles, and whose lives would take on a whole new range of colours. The idea, however radical it sounds, brings great hope to the hearts of Lineland's inhabitants.

Now, if such a thing really happened – if Lineland's scientists managed to enlarge the compactified circumferential dimension – Lineland would become nothing other than the Wire Universe, or "Wireland". Not only would the inhabitants gain the ability to move through the universe in two dimensions (as shown in figure 3 above) and easily pass one another, but an entirely new field would open up to evolution: in time, their bodies would become two-dimensional. Looking at Wireland several thousand or several million years later, we would find a whole range of shapes, including the ones you studied in secondary-school Euclidean geometry. In time the inhabitants of Wireland would inevitably become flat, two-dimensional creatures, just like those in Edwin A. Abbott's famous 1884 novel "Flatland", which is rich with culture and even a kind of caste system based on the geometric form of the inhabitants' bodies.

Although it is hard to imagine *anything* interesting in one-dimensional Lineland, life in Wireland is very rich. The shift from one dimension to two transforms life in the universe radically.

But why stop there? Wireland might contain yet another dimension curled up in its structure. If that dimension were to unfold, Wireland's inhabitants would undergo another set of radical changes; indeed, if the third dimension grew large enough, theirs would be a universe like ours, and after enough time it could harbour creatures as complex as human beings.

And we can now ask the same question again: why stop there? This brings us to the Kaluza-Klein theory, according to which our own Universe might contain an additional spatial dimension. Later in this chapter we will come to the even more radical modifications required by String Theory. In its early versions the mathematics demanded six additional dimensions (more precisely, the critical number of space-time dimensions is 26 for bosonic string theory and 10 for superstring theory – the familiar 4 plus 6 extra dimensions required for the equations to work – but the details are too technical to dive into here). If some – or all – of these additional dimensions really exist, and if they unfolded to macroscopic size, life as we know it would change dramatically. What is interesting, though, is that even if these dimensions remain compactified indefinitely, their mere existence already leads to profound consequences, which we will come to later.

**Gravity and Electromagnetism Go Hand In Hand in the Kaluza-Klein World**

Although Kaluza's proposal of a fourth spatial dimension rolled up into the fabric of space-time is tantalising in its own right, physicists' interest in the idea stems from a slightly different aspect. Einstein formulated General Relativity for three spatial dimensions and one temporal dimension because that was the model he started from; but had he wished, he could seemingly have developed the theory for two, five, or any other number of dimensions. Indeed, the mathematical apparatus of the theory extends naturally to additional spatial dimensions – and that is exactly what Kaluza did, carrying out the analysis and deriving the equations for a universe with four spatial dimensions instead of three.

What he found was astounding. In this formulation, the equations governing the three ordinary spatial dimensions were essentially Einstein's field equations, but the extra dimension gave rise to additional equations – and those additional equations were none other than the equations James Clerk Maxwell had derived to describe electromagnetism! By adding a single spatial dimension, Kaluza had combined gravity and electromagnetism in one mathematical framework.

Until then, the gravitational and electromagnetic interactions had been considered two separate and distinct forces, with no connection beyond the striking similarity of Newton's law of universal gravitation, F = Gm_{1}m_{2}/r^{2}, and Coulomb's law of electrostatic force, F = k_{e}|q_{1}q_{2}|/r^{2}. At the time this similarity represented no known physical connection. By making the radical assumption of a never-before-seen spatial dimension, however, Kaluza showed that the two forces might be intrinsically related. In his theory both interactions correspond to waves propagating through the fabric of space-time: gravity is carried by waves in the three familiar spatial dimensions, electromagnetism by waves in the additional rolled up dimension.
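The similarity of the two laws can be pushed one step further with a quick calculation. Because both are inverse-square laws, the separation r cancels when we take their ratio; for two electrons, the electrostatic repulsion exceeds the gravitational attraction by roughly 42 orders of magnitude, which illustrates how different the two forces are in strength despite their identical form. A minimal sketch with rounded constants:

```python
# Comparing the strengths of Newton's and Coulomb's inverse-square laws
# for two electrons. The 1/r^2 factor cancels in the ratio, so the
# separation between the electrons does not matter.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
K_E = 8.988e9        # Coulomb constant, N m^2 C^-2
M_E = 9.109e-31      # electron mass, kg
Q_E = 1.602e-19      # elementary charge, C

ratio = (K_E * Q_E**2) / (G * M_E**2)
print(f"{ratio:.2e}")   # ~4.2e42: electrostatic force dwarfs gravity
```

The enormous size of this ratio is part of why, before Kaluza, nobody suspected the two laws were anything more than formally similar.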

Kaluza sent the paper describing his results to Einstein, who was at first very interested. In his 1919 reply Einstein wrote that it had never occurred to him that such a unification might be achieved through a five-dimensional world. A week later, however, he wrote again: he found Kaluza's idea very interesting and saw no obvious flaw in it, but admitted that he did not find the arguments entirely convincing either. Two more years passed before Einstein wrote yet another letter, saying that he had reassessed the work and was finally ready to present it to the academy.

Unfortunately, when physicists began to explore the implications of the idea, they found serious contradictions with experimental data. Attempts to include the electron in the theory predicted a mass-to-charge ratio that clearly disagreed with the measured value. Since there was no obvious way to resolve this problem, many physicists quickly lost interest in the model. Although Einstein and several other scientists continued exploring the idea, it received little attention among physicists, to say the least.

Nevertheless, it now seems that Kaluza’s and Klein’s idea was simply ahead of its time. In the 1920s, when it appeared, the entirely new field of quantum mechanics was being developed intensively. The new principles of the quantum world were capturing theoretical physicists’ attention, which resulted in the development of quantum field theories. Experimentalists devised new ways of testing the theories’ predictions, and the results in turn sharpened the theories: theory directed experiment, and experiment refined theory. All of this culminated in the Standard Model of particle physics, whose importance can hardly be overstated. It is not surprising that, at such a time, a proposal about an extra dimension hidden deep in the structure of the Universe did not bear fruit.

However, by the early 1970s the principles of the Standard Model had been established, and by the early 1980s the majority of its predictions had been confirmed. Even though some of those predictions – for example the existence of the Higgs boson – would remain unconfirmed for decades, most physicists working on these questions had little doubt that they would eventually be verified by experiment. So it became clear that the time had come to confront the major conflict of twentieth-century theoretical physics: the inconsistency between General Relativity and Quantum Mechanics. The success in formulating quantum field theories for the three other fundamental interactions inspired physicists to seek a similar theory for gravity: a theory of quantum gravity. After numerous hypotheses had failed, the physics community became more receptive to radical approaches. The Kaluza-Klein hypothesis, which had been left for dead in the late 1920s, was now revived.

**The Kaluza-Klein Theory Resurrection**

Since the appearance of the Kaluza-Klein proposal, our understanding of the physical world had advanced dramatically. By the 1970s the laws of quantum theory had been largely established and had gained very strong experimental support. Two kinds of fundamental forces unknown in the 1920s had been observed and incorporated into quantum theory, putting them on an equal footing with electromagnetism and gravity. Many physicists began to suspect that Kaluza’s hypothesis had initially failed because of the author’s overly conservative approach: he tried to unify two fundamental interactions without even considering the possibility of there being more. In the 1970s physicists showed that although one additional spatial dimension could lead toward the unification of gravity and electromagnetism, it was simply not enough.

By the mid-1970s a whole body of theoretical research had appeared, aimed at developing higher-dimensional theories to explain the world around us. In Figure 5 below you can see an example with two additional dimensions curled up into a spherical shape. As in the case of one extra dimension, these two exist at *each* point of space-time in our Universe with its three extended spatial dimensions. The two additional dimensions may also be rolled up into other shapes; for instance, they can take the toroidal shape shown in Figure 6.

Next we can imagine more complex structures with more than two additional dimensions, although these can hardly be represented on a two-dimensional screen. Mathematically, we can have any number of additional dimensions curled up into different shapes, and whole fields of mathematics – linear algebra, abstract algebra, complex analysis, topology, and others – are partly devoted to exploring such higher-dimensional structures. But as long as there is no experimental evidence for additional dimensions, they must be compactified strongly enough to remain inaccessible to our modern-day experimental apparatus (in later articles we’ll explore another interesting idea regarding additional dimensions, but for now we shall assume that they are indeed compactified).

The most promising extra-dimensional theories have been those that also included supersymmetry. Physicists hoped that the partial cancellation of intense quantum fluctuations provided by pairs of superpartners would soften the discrepancies between gravity and quantum fields. These theories were given the name *supergravity*.

As with Kaluza’s original hypothesis, supergravity theories looked very promising at first glance. The added dimensions gave rise to additional equations closely resembling those used to describe the electromagnetic, strong and weak nuclear forces. Closer analysis, however, showed that the old problems remained. The catastrophic quantum fluctuations that had plagued earlier theories were mitigated by supersymmetry, but not enough to make the theory consistent. Moreover, while working on supergravity theories, physicists ran into trouble trying to incorporate the concept of *chirality*, an essential part of the Standard Model. The concept of chirality is rather abstract, but a rough explanation will suffice for our purposes. In the mid-twentieth century, experimenters showed that some phenomena in our Universe are *not identical to their mirror images* – that is, our Universe is *chiral*. Essentially, they showed that certain processes governed by the weak nuclear interaction have no mirror analogues: those mirror analogues simply *cannot* occur in our Universe. So if you watch a film of some physical phenomenon and notice a process violating this rule, you can be certain that you are watching a mirror-reversed version rather than the real thing. And this property seemed almost impossible to build into a theory of supergravity.

As we now see, although the pieces of a final unified theory had begun to fall into place, a central element capable of binding them together was still missing. In 1984 superstring theory emerged as the main candidate for that central element.

**Additional Dimensions in Superstring Theory**

By now you should have some idea of why our Universe might have additional spatial dimensions. Indeed, unless we have technology advanced enough to probe the structure of space on extraordinarily small scales, we cannot prove that they do not exist. Still, these additional dimensions might seem to be mere mathematical hocus-pocus. As long as we have no way to test their presence experimentally, we could just as well speculate that whole civilisations of extravagant creatures exist at the Planck scale. Without experimental access, postulating any of these currently untestable ideas might seem equally arbitrary.

For the people working with String Theory, however, things change dramatically. The mathematical framework of this theory successfully unifies Quantum Theory and General Relativity, hence it solves the main contradiction of modern-day theoretical physics. Furthermore, it unifies our understanding of the basic components of matter and the four fundamental forces. And what’s important here is that String Theory *demands* the existence of extra dimensions!

Let’s see why this is so. As we saw in the fourth article, the laws of quantum mechanics state that certain parameters cannot be measured more precisely than Heisenberg’s uncertainty principle allows. The results of calculations based on the rules of Quantum Theory are expressed as *probabilities* with definite values. As you probably know, probabilities range either between 0 and 1 or, equivalently, between 0% and 100% (one representation converts easily into the other). A probability outside this range has no meaning whatsoever. As physicists discovered in the twentieth century, in some situations the rules of Quantum Theory break down, with calculated probabilities falling outside this range. As we also saw in the fifth article, the conflict between General Relativity and Quantum Theory – with its model based on dimensionless particles – shows up when calculated probabilities become *infinite*. As discussed in the sixth article, String Theory resolves this conundrum and gives answers within the admissible range. What we haven’t touched upon, however, is that in the first versions of String Theory physicists found probabilities with *negative* values – again outside the allowable range. Thus, at first glance, String Theory also seemed inconsistent.

Nevertheless, scientists did not give up, and they searched for the cause of these unacceptable results. Eventually they found it, and I’m going to try to build a conceptual picture of it for you. Let’s start by assuming that strings exist in the Wireland we considered in earlier sections (remember that it was a two-dimensional universe). In such a universe strings would have only two independent directions to vibrate in – left-right and back-forth – i.e. only two degrees of freedom. If Wireland had a third spatial dimension, however, the number of independent vibration directions would increase to three, as the up-down direction became available. Extending this idea further may be hard to visualise, but the pattern remains the same: a larger number of spatial dimensions allows additional vibrational patterns for strings.

Now you can probably guess the source of the meaningless results in the early versions of String Theory: the calculations were carried out in a universe with too few degrees of freedom. The negative probabilities arose from the mismatch between the number of dimensions the theory requires and the number reality seems to offer. The calculations clearly showed that if strings could vibrate in nine spatial dimensions, the negative probabilities would disappear. That fixes the mathematics, but so what? If String Theory aims to explain a world with three spatial dimensions yet is consistent only in a world with nine, we are still in trouble.

Or are we? The idea put forward by Kaluza and Klein gives us a way around this trouble. Since strings are so small, they can vibrate not only in the large, unfurled spatial dimensions, but also in minuscule rolled-up dimensions that we have no access to. Thus we can satisfy the requirements of String Theory by assuming – as Kaluza and Klein did – that apart from the three dimensions we all know, our Universe contains six additional compactified ones. Moreover, rather than *postulating* the existence of additional dimensions, String Theory *demands* it: for the theory to be mathematically consistent, the Universe must be ten-dimensional, with nine dimensions of space and one of time. (In later articles we shall see that in 1995 this number grew to eleven, launching the second superstring revolution.) Thereby the descendant of Kaluza and Klein’s original proposal steadily returned to the stage.

**Some Questions Remained**

This revolution brought by String Theory immediately raised a few important questions. Firstly, why does String Theory demand *that* particular number of additional spatial dimensions to be mathematically consistent, and not some other? Unfortunately, this question is extremely hard to answer without resorting to the mathematical apparatus of the theory. Strings need ten degrees of freedom to avoid inconsistent results such as negative probabilities; the calculations lead directly to this number, but nobody has been able to give a clear conceptual explanation free of technical details. This is partly because String Theory is still a work in progress, and many of its details are not yet known. This has happened before, though: the Standard Model of particle physics, which rests entirely on quantum theory, was derived and finalised more than five decades after Quantum Theory itself had been developed. And Quantum Theory still has open questions of its own. I should remind you here that although later work by E. Witten showed that String Theory actually demands ten spatial dimensions, we shall be considering nine until we reach the second superstring revolution in later articles.
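For the mathematically curious, the number of dimensions can at least be traced to a standard textbook calculation, even if no purely conceptual explanation exists. In light-cone quantization, each of the D − 2 transverse vibration directions contributes a fixed amount of zero-point energy, and consistency of the spectrum pins down the total: the condition is (D − 2)/24 = 1 for the purely bosonic string and (D − 2)/16 = 1/2 for the superstring. A sketch of that arithmetic (my own illustration; the fractions are the standard textbook values, not derived here):

```python
# Back-of-the-envelope arithmetic behind the critical dimension.
# Standard light-cone quantization results give a zero-point-energy
# condition of the form (D - 2) * contribution = required_constant:
#   bosonic string:   contribution 1/24 per direction, constant 1
#   superstring (NS): contribution 1/16 per direction, constant 1/2
# Solving each condition for D yields the critical dimension.

from fractions import Fraction

def critical_dimension(contribution_per_direction, required_constant):
    """Solve (D - 2) * contribution = required_constant for D."""
    return 2 + required_constant / contribution_per_direction

bosonic = critical_dimension(Fraction(1, 24), Fraction(1))
superstring = critical_dimension(Fraction(1, 16), Fraction(1, 2))

print(bosonic)      # 26: the early, bosonic string's critical dimension
print(superstring)  # 10: i.e. 9 spatial dimensions plus 1 of time
```

This is only the arithmetic shell of the argument; *why* each transverse direction contributes exactly 1/24 or 1/16 is precisely the technical part that resists a non-mathematical explanation.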

Secondly, if the equations of String Theory show that the Universe contains nine spatial dimensions and one temporal dimension, why did only three spatial dimensions unfold at some point, while the others remained curled up? Why aren’t all of them unfolded, or why didn’t all of them remain compactified? Why wasn’t some other configuration realised in our Universe? Currently we have no answer. If String Theory is correct, then hopefully we will find one someday, but not yet. This doesn’t mean that nobody has tried to find an explanation within the existing framework of the theory. For instance, work on the cosmological implications of String Theory has shown that it is possible that initially all of the dimensions were rolled up and that, moments after the Big Bang, three of them unfolded while the others remained compactified. We shall consider these possibilities in later articles, but I should say that such conjectures remain speculative, and much further work is needed to either support them or rule them out. For now we shall take it as a given: three spatial dimensions are unfolded, while the others are compactified.

Thirdly, if String Theory demands the existence of extra spatial dimensions, couldn’t its solutions imply the existence of additional *temporal* dimensions as well? Think about this for a moment and you’ll see how strange a possibility that would be. We live in a Universe with several spatial dimensions, so we can at least imagine the existence of more. But what could it even mean to have more than one time? If one of them were the time familiar to us, what would the others be? Could some of them run backwards? The idea becomes even more bizarre if those extra temporal dimensions could be compactified the way the additional spatial dimensions are thought to be. If a Planck-sized ant were able to run across a compactified extra spatial dimension, it would return to its original location every time it completed a full ‘cycle’ – nothing fanciful there, since you are used to returning to your apartment every workday. But if our ant were to travel along a compactified temporal dimension, it would return to its initial position in *time*: no time would have passed for the ant once it completed the loop. This, of course, goes far beyond our everyday experience, in which time runs in only one direction – forward – and we never return to a moment that has already passed. Such curled-up temporal dimensions, if they exist at all, could certainly have characteristics quite different from ‘our’ time; and unlike extra spatial dimensions, they might alter our picture of the physical world far more drastically. Some theorists are exploring ways of including additional temporal dimensions in the framework of the theory, but these attempts are far from conclusive.
In our further considerations we shall be concerned with one time dimension, though we should not forget about the intriguing possibilities which some researchers are playing around with.

**Physical Implications of the Additional Dimensions**

Although the extra dimensions of space must be extraordinarily small to explain why we haven’t detected them, according to String Theory they have an important indirect influence on physical phenomena. To understand this, recall that the mass and electric charge of a particle are determined by the vibrational pattern of the corresponding string. Those vibrational patterns, in turn, are shaped by the space the strings move through. Think of an ocean wave. On the open surface of the ocean, a single isolated wave can take almost any form and move in any direction, enjoying many degrees of freedom. If, on the other hand, the wave moves through a very narrow canal, its degrees of freedom drop substantially: the form of the wave and its possible directions of motion are largely dictated by the width, depth and shape of the canal. The compactified dimensions constrain the vibrations of fundamental strings in a similar way. Since strings vibrate in all accessible spatial dimensions, including the additional ones, their vibrational patterns are largely determined by the geometry of those extra dimensions. We can summarise thus: *the geometry of the additional spatial dimensions determines the fundamental physical characteristics of a particle – such as its mass, charge and spin – that we detect in our experiments*.

This result is one of the most important implications of String Theory, and because it is so important, researchers have spent decades trying to figure out which shapes of the extra dimensions would reproduce the fundamental characteristics of the particles we know. So far, however, no one has succeeded. We’ll return to this question in later articles, but for now let us see what string theorists have found along this avenue.

**The Form of Extra Dimensions**

The extra dimensions in String Theory cannot be compactified in an arbitrary way: the equations of the theory restrict the shapes those dimensions can take. In the mid-1980s the string theorists Philip Candelas, Andrew Strominger and Gary Horowitz, together with Edward Witten, introduced the idea of compactification into String Theory. They showed that the mathematical apparatus of the theory requires the additional dimensions to be curled up into a so-called *Calabi-Yau shape* (or Calabi-Yau manifold). It is named after the two mathematicians, Eugenio Calabi and Shing-Tung Yau, whose work – completed before String Theory needed it – proved crucial to understanding the properties of these shapes. The mathematical description of Calabi-Yau manifolds is very complicated, but thankfully we don’t need it in order to get a sense of what they look like.

An example of a Calabi-Yau shape can be seen in Figure 7 above. Remember, though, that this picture has its limitations: a six-dimensional manifold is being represented on a two-dimensional surface, which inevitably introduces distortions. Nevertheless, the image gives a rough sense of what such a shape would look like to an external observer. It shows just one example of a *virtually* infinite number of shapes allowed by String Theory, so the restriction might not seem like a big deal. Viewed mathematically, however, without this restriction the number of allowed shapes would be *literally* infinite. In mathematics such a difference can have profound consequences, and some string theorists still hope that further developments in the theory will provide insights that rule out the majority of possible Calabi-Yau shapes and leave a manageable number.

Now recall Figure 5, where we depicted a two-dimensional sphere at each point of space. To extend that idea with our new information, we simply replace each sphere with a Calabi-Yau manifold, which would likewise exist at each point of our familiar three-dimensional space.

Figure 9 shows this picture more vividly.

In other words, according to String Theory, six additional spatial dimensions rolled up into this peculiar kind of shape should exist everywhere around you and me, as well as in the most distant corners of the Universe. These dimensions are curled up so tightly, though, that as you move through the three unfolded ones you cannot perceive that your body is making an amazing journey through the additional ones as well.

This is an astonishing prediction of String Theory, but mathematical consistency alone is not enough to establish an idea as valid. We want our ideas to be tested somehow, so that we can either gain confidence in them or rule them out as mere mathematical artefacts. Since the extra dimensions are so extraordinarily small, we must look for traces of their influence on things we *can* access, such as particles and their properties. In the next article we shall consider possible ways to test the predictions of String Theory, but before moving on I’d like to address one more important question.

**Are these Dimensions Really Necessary?**

Regarding this question I can say that there is no definitive answer as yet. Some physicists feel uncomfortable with the idea of extra spatial dimensions that have never been seen, and some theorists try to get around it by using exotic mathematical constructions whose scope lies far beyond this series of articles. To conclude this article, though, I want to mention a few models that attempt to build a four-dimensional String Theory by introducing the additional degrees of freedom through other means. These include Asymmetric Orbifold models, Four-Dimensional Covariant Lattices, Non-geometric Calabi-Yau Compactifications, 4d N=2 strings (what do all these names even mean?) and others. I’ve given links to some of the arXiv papers, but be warned: these models are exceedingly complex even by String Theory standards, which is why the majority of string theorists remain in the other camp. In later articles we will avoid these four-dimensional models, as they are still in their infancy.

So, as you now see, the mathematical apparatus of String Theory is so vast that even the theory’s essential assumptions can be questioned – which some physicists see as a serious drawback, while others regard it as a major benefit.

Thanks, everyone, for taking the time to read this article, and I hope to see you all next time.

*Previous articles in this series can be found at*:

Part 1: Following Einstein’s Dream

Part 2: Special Relativity – the Picture of Space and Time

Part 3: General Relativity – The Heart Of Gravity

Part 4: Quantum Mechanics – the World of Weirdness

Part 5: General Relativity vs Quantum Mechanics

Part 6: The Basic Principles

Part 7: Supersymmetry

Tagged: String theory

After Sir Arthur Eddington’s expedition had confirmed General Relativity’s prediction that starlight is deflected from its straight path by the gravitational pull of the Sun, one of Einstein’s students asked him how he would have reacted had the prediction not been confirmed. Einstein answered: “Then I would feel very sorry for the dear Lord, since the theory is correct.” What this famous quote tells us is that Einstein believed his theory was too beautiful to fail.

Such aesthetic arguments, however, do not play the main role when a scientific question is at stake. The success of a physical theory is usually judged by how well it stands up to experiment. There is an important caveat, though: while a theory is still being formulated, the full list of its experimentally testable predictions may be out of reach. Nonetheless, the physicists working on it must choose the direction its later development will take. Sometimes such decisions are dictated by internal consistency, since we certainly don’t want our theory to contain logically absurd notions. At other times, though, physicists do rely on aesthetic arguments, because the physical and mathematical structure at hand is so beautiful and elegant that it strongly draws their attention. We saw in the last article that String Theory’s first attempt to describe the strong nuclear force was unsuccessful, yet some scientists kept working on it purely because of its elegance – and this has led to astonishing results. Moreover, because some modern theories of physics are extraordinarily difficult to test experimentally, such aesthetic arguments play a very important role in the early stages of theory development.

In physics, as in art, symmetry plays one of the major roles among aesthetic principles, and in physics this notion is defined very precisely. As we saw in the third article, Einstein used symmetry to combine the ideas of his Special Theory of Relativity with the principles of gravitational interaction in the theory of General Relativity. More recently, symmetry principles have provided physicists with a way to connect the particles that make up matter (e.g. quarks and electrons) with those responsible for the fundamental interactions (e.g. photons and gluons). These two types of particle are usually considered completely different, as we saw in the first article of the series. But according to the principle we are about to consider, they are connected far more closely than we could ever have imagined. It represents, in a sense, the largest possible degree of symmetry, which is why it is called “supersymmetry”; String Theory is among the theories built on this principle. The explanation of the principle given in this article is taken from Brian Greene’s book “The Elegant Universe”.

**Symmetries in Physics**

Let me first introduce the idea of symmetries conceptually and show you a couple of the most basic, yet extremely important, examples of symmetry in physics. First, suppose you live in a universe where the physical laws perpetually change from one moment to the next. We know that even a slight shift in certain parameters – say, the Planck constant or the electron’s electric charge – would spell the end of any complex creatures, so let’s suppose the laws stay within the range compatible with life but nonetheless change every day. Life in such a universe would be anything but boring, since each day you would have to relearn the most basic things. Yet it would be a complete nightmare for a physicist, who would have to rediscover the laws of Nature every single day. Your physics teacher might as well send you home each class, because whatever they had prepared would make no sense by the time the lesson took place. Fortunately, we all know that Nature doesn’t behave this way. The laws of physics remain unchanged, even though we sometimes find that we need to make slight corrections to previously measured values, and we certainly discover new laws from time to time. Of course, this unchangeability does not imply a static Universe; it means that the laws governing the Universe have remained the same throughout its existence. But are we sure this is actually the case? Not entirely. There is, for example, an ongoing debate about whether the value of the fine-structure constant, and therefore of the speed of light, might change over time. But even though we cannot be certain, all the evidence we have shows unambiguously that if the laws of physics change over time at all, they do so *extraordinarily* slowly, so that we can use their current form to work out what happened in the past, right down to a fraction of a second after the Big Bang.

Now suppose the laws of Nature change from one place to another, so that the laws which apply and are well tested in our part of the Universe fail in other parts, where a completely different set of laws holds. Imagine even that these laws change from place to place here on Earth, just as legal codes may differ completely between neighbouring countries. Such a world would again be a physicist’s nightmare, because what they discovered and carefully tested at the LHC (located on the border between Switzerland and France) would not apply in, say, Italy. Physicists would then have to derive and test an endless number of theories to build a reasonably accurate picture of the entire Universe. Again, we know this is not how the Universe behaves. Even though we can’t be 100% sure that the laws we have here are exactly the same as those in a galaxy, say, 50 billion light-years away, we are reasonably confident that they are. At the very least, all the evidence at hand shows that the laws of Nature do not depend on one’s location in the Universe. For example, we have tested aspects of Einstein’s General Theory of Relativity by measuring its predicted effects, such as gravitational lensing, in clusters of galaxies billions of light-years away – and the predictions match the observations to a very high degree of accuracy.

Again, this does not mean that the Universe looks the same everywhere. If an astronaut were to land on the surface of comet 67P/Churyumov-Gerasimenko, for example, they could easily jump off it into space. That fact, of course, does not imply different laws of physics on Earth and on the comet; it follows from the comet’s very low escape velocity, which in turn comes from its tiny mass compared to Earth’s.

Physicists call these two properties – that the laws of physics depend neither on where nor on when you conduct your experiment – symmetries of Nature. The term means that Nature treats every moment of time and every point in space identically, guaranteeing that all of them are subject to one and the same set of fundamental physical laws. Symmetries of this kind evoke a deep sense of satisfaction, because they highlight the elegance and order in the workings of the Universe.

When we considered Einstein’s Special and General theories of Relativity, we met other types of symmetry. In Special Relativity, for example, the principle of relativity treats every observer moving at constant speed identically (symmetrically), which implies that the laws of physics are the same for each of them. Any such observer may regard themselves as stationary and everyone else as moving relative to them. Again, this does not mean that every observer sees the same picture – the pictures can differ quite dramatically – but all those pictures are governed by one set of physical laws.

Likewise, when Einstein established his equivalence principle, he broadened this symmetry principle substantially: by bringing the gravitational field into the picture, he showed that *any* observer – regardless of whether they move at constant speed or accelerate – can be treated identically. Thus Einstein showed that every observer is subject to one and the same set of physical laws.

Another symmetry principle, which we have not considered yet, is the independence of the laws of physics from the angle at which you conduct an experiment. This means that if you carry out an experiment and obtain some result, the result *must* be the same if you rotate your apparatus by any angle. This is called rotational symmetry, and it is of similar importance to the symmetries described above.

Are there any other types of symmetry we’ve missed? There are actually many of them – for example, the gauge symmetry we considered in the fifth chapter – but for now we are interested in the symmetries related to space and time. In 1967 two physicists, Sidney Coleman and Jeffrey Mandula, proved a theorem according to which there could be no further symmetries related to space and time. Later, however, other theorists noticed that this theorem did not take into account one important aspect of quantum theory: spin.

**What is Spin?**

Elementary particles like electrons can revolve around atomic nuclei, much as planets orbit their parent stars (this analogy isn’t entirely correct, of course, but for our present purpose it will do). It might seem, however, that elementary particles cannot possibly spin on their axes. After all, every point inside a rotating body – a billiard ball, a planet, and the like – moves around the axis of rotation, while the points lying exactly on that axis do not move. This leads to the seemingly obvious conclusion that an object consisting of just one such point – a dimensionless elementary particle – cannot spin on its axis, since it has no points lying off that axis. Many years ago, however, experimental investigation of this question revealed an astounding new property of the micro-world.

In 1925 the Dutch physicists George Uhlenbeck and Samuel Goudsmit found that a number of experimental results concerning the characteristics of emitted and absorbed light could be explained if electrons were assumed to have certain peculiar magnetic properties. About a hundred years earlier, the French physicist André-Marie Ampère had established that magnetism arises from the motion of electric charges. Following this fact, Uhlenbeck and Goudsmit established that only one type of electron motion could produce these magnetic properties: the rotational motion we now call *spin*. In defiance of the laws of classical physics, quantum mechanical objects – even point-like particles – do spin on their axes!

But wait a minute. Did these researchers *really* believe the seemingly nonsensical conclusion that dimensionless particles can somehow spin on their axes? Not quite. Spin is an inherent characteristic of a particle, akin to rotational motion yet purely quantum in nature. It is one of those strange quantum phenomena with no analogue on macroscopic scales, and the work of Uhlenbeck and Goudsmit showed just that. Furthermore, their work showed that the magnitude of an electron’s spin *never* changes. This tells us that an electron’s spin is not a state of motion but an inherent characteristic, like its electric charge or rest mass. *If an electron weren’t spinning, it would not be an electron!*

Although the work just described concerned only electrons, physicists later established that all matter particles share this characteristic. What’s even more interesting, *every* known matter particle – including the antiparticles – has the same spin magnitude as the electron. Moreover, physicists also showed that force-carrier particles such as photons, gluons, and the elusive graviton can likewise be characterised by spin, only of a different magnitude. To be a little more precise, all matter particles (known as fermions) have half-integer spin, such as 1/2, 3/2, or 5/2, while force carriers (known as bosons) have integer spin, such as 0, 1, and 2.

What’s particularly interesting about bosons is that the mediators of three fundamental forces – the electromagnetic, weak, and strong nuclear interactions – all have spin magnitude equal to 1; the Higgs boson’s spin equals zero; and all theoretical considerations regarding the elusive mediator of the gravitational interaction, the graviton, show that its spin magnitude should be equal to 2.
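The fermion/boson split by spin can be captured in a few lines of code. The particles and spin values below are those quoted in the text, and the little classifier merely illustrates the half-integer versus integer rule:

```python
from fractions import Fraction

# Spin magnitudes (in units of h-bar) as quoted in the text
spins = {
    "electron":    Fraction(1, 2),
    "quark":       Fraction(1, 2),
    "photon":      Fraction(1),
    "gluon":       Fraction(1),
    "Higgs boson": Fraction(0),
    "graviton":    Fraction(2),
}

def kind(spin):
    """Integer spin -> boson; half-integer spin -> fermion."""
    return "boson" if spin.denominator == 1 else "fermion"

for name, s in spins.items():
    print(f"{name}: spin {s} -> {kind(s)}")
```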

In String Theory, spin – like mass and the coupling constants – is determined by the mode of a string’s vibration. Now we can build a conceptual understanding of how String Theory includes gravity alongside the other forces. In their 1974 work Scherk and Schwarz found that one of the vibrational patterns predicted by the theory corresponded to a *massless particle with spin magnitude equal to 2*. As we now see, these characteristics exactly match those of the graviton. Hence gravity is an essential part of the theory.

Now that we have built up a basic concept of spin, let’s return to the Coleman-Mandula theorem and see how the notion of spin leads to another symmetry which that theorem missed.

**Supersymmetry and Superpartners**

As we have seen, even though the notion of spin has some similarity to the rotation of an object on its axis, it also differs from such motion in essential ways. The discovery of spin in 1925 showed that this new kind of rotational motion simply has no classical analogue.

This difference leads us to the following question. If the rotation of an object around its axis gives the laws of physics their rotational symmetry, and if spin differs from such rotation, might spin itself lead to another kind of symmetry? By 1971 physicists had established that the answer is indeed yes. Although the proof is quite involved, its main idea is that when spin is treated mathematically, exactly one additional symmetry of the laws of Nature becomes possible, and it has been called *supersymmetry*.

Unlike the other types of symmetry, supersymmetry cannot be explained by a simple change of an observer’s frame of reference, because, as the Coleman-Mandula theorem shows, all the symmetries related to changes of reference frame are already used up. But since spin is a kind of quantum-mechanical analogue of rotational motion, we can think of supersymmetry as based on a change of reference frame in a ‘quantum-mechanical extension of space-time’. The mathematical details of supersymmetry are very subtle (I should admit I don’t fully understand them myself), so we will not delve into them, but will focus instead on the implications.

In the 1970s scientists found that if the Universe obeys the principle of supersymmetry, then its particles – whether point-like objects or tiny vibrating filaments of energy – must enter the list of fundamental objects in pairs, whose members are now called *superpartners*, or *sparticles*. What interests us here is that the members of such a pair must have spin magnitudes differing by ½. Recall that matter particles have half-integer spin while mediators have integer spin; the supersymmetry principle therefore implies that these utterly different kinds of particles are in fact tightly connected. In other words, matter particles (fermions) and force carriers (bosons) are superpartners of one another.

Once the mathematical framework of supersymmetry had been discovered, physicists started looking for ways to incorporate it into the Standard Model, but they found that *none* of the known particles could be a superpartner of any other. As later, more rigorous analysis showed, if the Universe really obeys this principle, then each known particle must have a yet-unknown superpartner whose spin magnitude is ½ less than that of its known counterpart. For example, the superpartner of the electron – known as the supersymmetric electron, or *selectron* – would be a bosonic version of the electron with spin magnitude equal to zero. Likewise, the superpartners of bosons such as photons or gluons would have spin ½; they are called the *photino* and *gluino*.

So, as we can see, supersymmetry is not a conservative principle at all. It requires the existence of a host of additional particles duplicating the components we are already familiar with. Furthermore, since the principle was proposed, not a single piece of evidence for a superpartner’s existence has been found. Perhaps, then, it is nothing but a wild mathematical idea that many physicists have taken more seriously than they should have? It might seem so, but a few points make physicists consider this possibility very seriously. Let us have a look at them.

**The Arguments in Favour of Supersymmetry before String Theory**

Firstly, physicists simply couldn’t accept the idea that Nature employs almost all – but not quite all – mathematically consistent types of symmetry. Such aesthetic arguments don’t always turn out to be correct, but we’ve seen that they sometimes play an important role in physics. Of course, we cannot rule out the possibility that not all symmetries are at work in the Universe, but that would be genuinely disturbing.

Secondly, even in the Standard Model – a theory that does not include gravity – certain delicate problems related to quantum fluctuations could be solved painlessly with the principle of supersymmetry. These problems stem mainly from the fact that each species of particle makes its own contribution to the quantum frenzy. Examining this frenzy carefully, physicists found they could cope with it only by tweaking certain parameters with extraordinary precision – to better than one part in 10^15! Although the Standard Model permits such precise tuning of its parameters, many physicists find it deeply unsatisfying that the theory breaks down after so slight a change in one of them.

Supersymmetry radically changes this picture. The contributions of bosons and fermions to quantum fluctuations tend to cancel each other out: in the mathematical analysis those contributions enter with opposite signs, a boson’s being positive where a fermion’s is negative, or vice versa. Since in supersymmetry bosons and fermions always enter the list of particles in pairs, the delicate tweaking of the model’s parameters becomes unnecessary, and the supersymmetric standard model ceases to depend on that very suspicious fine-tuning of its input parameters.

The last argument in favour of supersymmetry that we shall consider here is rather more subtle, so I encourage you to read this part carefully. It concerns the notion of grand unification – the unification of three of the four fundamental interactions. One of the strangest characteristics of the fundamental interactions is the huge difference between their strengths, which you can see in figure 1 below.

If you remember, in the fifth chapter we considered the unification of two of these interactions achieved in the work of Sheldon Glashow, Abdus Salam and Steven Weinberg. For this work they were awarded the Nobel Prize in physics, and the unified interaction became known as the *electroweak interaction*. Their work showed that the electromagnetic and weak interactions merge at a temperature of about a million billion degrees above absolute zero (10^15 K). Later, Glashow and his colleague Howard Georgi suggested that this connection could be extended to include the strong interaction as well, and showed that unification with the strong interaction becomes apparent at a far higher temperature still, around 10^28 K. Translated into energy, such an enormous temperature is only about four orders of magnitude below the Planck energy.
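The back-of-the-envelope conversion from temperature to energy (E ≈ k_B T) is easy to check. The sketch below uses the standard constants and a Planck energy of roughly 1.22 × 10^19 GeV; it reproduces the “four orders of magnitude” gap mentioned above, and also recovers the familiar ~100 GeV electroweak scale:

```python
import math

k_B = 1.381e-23         # Boltzmann constant, J/K
J_PER_EV = 1.602e-19    # joules per electronvolt
E_PLANCK_GEV = 1.22e19  # Planck energy, ~1.22e19 GeV

def temperature_to_gev(t_kelvin):
    """Rough thermal energy scale E ~ k_B * T, expressed in GeV."""
    return k_B * t_kelvin / J_PER_EV / 1e9

e_electroweak = temperature_to_gev(1e15)  # electroweak unification
e_grand = temperature_to_gev(1e28)        # grand unification

print(f"Electroweak scale:       ~{e_electroweak:.0e} GeV")
print(f"Grand-unification scale: ~{e_grand:.0e} GeV")
print(f"Gap below Planck energy: ~{math.log10(E_PLANCK_GEV / e_grand):.1f} orders")
```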

Now let us take a step back. We know that the strength of the electromagnetic interaction between two particles of opposite electric charge, like the strength of the gravitational interaction, increases as the distance between the two interacting bodies decreases. These are simple, familiar facts of classical physics. The surprises begin when we investigate the influence of quantum physics on the strengths of the interactions. Why should quantum mechanics have any influence on them? The answer, again, lies in quantum fluctuations. When we examine the electric field of an electron, we in fact analyse it through a ‘fog’ of virtual particles perpetually appearing and annihilating in the region surrounding the electron. Decades ago physicists found that this fog naturally ‘masks’ the actual strength of the electron’s field, just as fog on Earth dims the light of a lighthouse. But if we keep shortening our distance from the electron, we penetrate ever more of this fog, and so the strength of the electric field we measure *increases*!

This increase isn’t the same as the increase of the *intrinsic* strength of the electromagnetic interaction that comes with shorter distance, which is why physicists distinguish between the two. As we approach an electron, the strength of its electromagnetic interaction grows not only because the distance shrinks, but also because less of the quantum fog screens it. And although we’ve used the electron as our example, the same conclusion applies to *any* electrically charged particle. In short, quantum effects strengthen the electromagnetic interaction as the distance to a particle decreases.

With the strong interaction, the quantum fog works the other way round: it actually *amplifies* the interaction’s strength. This was discovered in 1973 by David Gross and Frank Wilczek and, independently, by David Politzer in their work on asymptotic freedom (for which they were awarded the Nobel Prize in physics in 2004). Asymptotic freedom means that when two quarks are extremely close to one another, the strong interaction between them grows *weaker* the closer they get. When the quarks are in extreme proximity – the distance between them approaching, though never quite reaching, zero – they begin to behave as if they were *free* particles.

Later, Georgi, Weinberg and Helen Quinn extended this idea to a brilliant result. They showed that once all those quantum fluctuations are taken into account, the strengths of the three non-gravitational interactions approach one another as the distance at which we probe particles decreases. Although these strengths differ tremendously on the scales accessible to modern instruments, Georgi, Quinn and Weinberg’s conclusion implies that the difference is due to the differing influence of the quantum fog of virtual particles on each force. Their calculations showed that if we examined particles at a distance of 10^-29 centimetres – only about four orders of magnitude above the Planck length – the strengths of the non-gravitational interactions would appear *equal* to one another.

The energies associated with such fantastically small distances are far beyond what we can expect to reach in the coming decades, but they were ubiquitous a fraction of a second after the Big Bang (around 10^-39 s), when the temperature of the Universe was of the order of 10^28 K. We can draw a parallel: just as various materials – wood, glass, metals, minerals – melt and merge into a homogeneous, uniform plasma when heated to a very high temperature, so the three non-gravitational interactions merged when the temperature was that enormous. You can see this in figure 2 below, where the force of gravity is also included; as you might guess, physicists do hope that eventually gravity too will join a single unified interaction. However, since in this part we are discussing a quantum mechanical picture that does not include gravity, we shall focus on the grand unification of the non-gravitational interactions only.

Even though we have no instruments capable of probing such small distances or such high temperatures, over the last few decades experimentalists have been able to refine the measured strengths of the non-gravitational interactions. In 1991 the physicists Ugo Amaldi, Wim de Boer, and Hermann Fürstenau updated the original calculations of Georgi, Quinn and Weinberg using these new experimental results, and revealed two interesting facts. Firstly, the strengths of the three interactions *almost* meet at small distances, *but not quite*. Secondly, this slight discrepancy *disappears* if the principle of supersymmetry is brought into the picture! The reason lies in the additional quantum fluctuations contributed by the superpartners; with them included, the strengths converge exactly. Physicists are very reluctant to believe that these strengths would come so close to one another and yet fail to meet, and supersymmetry elegantly resolves the conundrum.
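The near-miss and its supersymmetric repair can be reproduced with a standard one-loop calculation. The sketch below uses approximate measured inverse couplings at the Z mass and the textbook one-loop beta coefficients for the Standard Model and its minimal supersymmetric extension; the input numbers are rounded and purely illustrative:

```python
import math

MZ = 91.19  # Z-boson mass in GeV, the reference scale
ALPHA_INV = {1: 59.0, 2: 29.6, 3: 8.5}  # approx. 1/alpha_i at MZ

# One-loop beta coefficients (GUT-normalised hypercharge)
B_SM   = {1: 41 / 10, 2: -19 / 6, 3: -7.0}  # Standard Model
B_MSSM = {1: 33 / 5,  2: 1.0,     3: -3.0}  # minimal SUSY extension

def crossing_scale(b, i, j):
    """Energy (GeV) at which couplings i and j meet, from the one-loop
    running 1/alpha(mu) = 1/alpha(MZ) - b/(2*pi) * ln(mu/MZ)."""
    diff = ALPHA_INV[i] - ALPHA_INV[j]
    return MZ * math.exp(2 * math.pi * diff / (b[i] - b[j]))

for name, b in (("SM", B_SM), ("MSSM", B_MSSM)):
    m12, m23 = crossing_scale(b, 1, 2), crossing_scale(b, 2, 3)
    spread = max(m12, m23) / min(m12, m23)
    print(f"{name}: 1-2 meet at ~{m12:.1e} GeV, 2-3 at ~{m23:.1e} GeV "
          f"(mismatch factor {spread:.0f})")
```

Without superpartners the pairwise meeting points land several orders of magnitude apart; with the superpartners’ contributions folded into the beta coefficients, all three couplings meet near 2 × 10^16 GeV – the exact convergence described above.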

Another important consequence of this result is that it suggests an answer to the question of why superpartners have not yet been detected. Calculations by various physicists indicate that the superpartners must be much heavier than the experimentally established particles; the trouble is that the range of possible superpartner masses is still very wide. I should also say that, on some calculations, the lightest superpartners were expected to show up even in the first run of the LHC. Those expectations were not met, and consequently some physicists have claimed that supersymmetry is most probably not realised in our Universe. Such claims, however, are premature, and experimentalists continue searching for superpartners. We shall consider the question of confirming supersymmetry more deeply in the following articles.

Of course, the arguments in favour of supersymmetry given above are not unequivocal. We’ve shown how supersymmetry brings the highest possible level of symmetry into the theoretical apparatus of modern physics – but you might argue that Nature simply does not insist on that level. We’ve explained how supersymmetry frees us from the need to tweak some parameters of quantum theory with extreme precision – but again, you might not find that compelling: plenty of other parameters in our Universe appear fine-tuned for complex structures, including living organisms, to exist, so why not add one more to the list? We’ve pointed out that on fantastically small scales supersymmetry lets the strengths of the non-gravitational interactions converge, allowing these forces to combine into a single grand interaction – but you might reply that nothing demands such unification, and that the forces need not spring from a single one. Finally, you may hold the opinion that superpartners haven’t been found simply because the Universe is not supersymmetric and superpartners do not exist.

No one can refute any of these objections. However, when String Theory is taken into account, the arguments in favour of supersymmetry become considerably stronger.

**Supersymmetry in String Theory**

As we discussed in the last article, the foundation of String Theory was laid by the work of Gabriele Veneziano in the late 1960s. That first variant of the theory included all the types of symmetry we discussed at the start of this article, but it did not contain supersymmetry (which had not even been proposed at the time). The first version, you may remember, was aimed at explaining only the strong nuclear force, so its spectrum contained only the mediators of that force: every vibrational pattern it described had integer spin, and the theory was therefore called *bosonic string theory*. That version had a serious problem, though.

The spectrum of vibrational modes in bosonic string theory contained a particle known as the *tachyon*, whose mass – or, more precisely, whose mass squared – was *negative*. For those of you who remember high-school algebra this may come as less of a surprise, since *imaginary numbers* have exactly this property: squaring one yields a negative number. If you don’t remember that part of algebra, don’t worry – we are not going to delve into it, so bear with me.
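For readers who do want the one-line version: in the relativistic energy-momentum relation, a negative mass squared forces the mass itself to be imaginary (a standard observation about tachyons, not specific to string theory):

```latex
E^2 = p^2 c^2 + m^2 c^4, \qquad
m^2 < 0 \;\Longrightarrow\; m = \sqrt{m^2} = i\sqrt{\lvert m^2 \rvert}
```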

The possibility of tachyons had been examined since before String Theory, but no one has ever managed to construct a consistent theory in which they are present. By the time bosonic string theory emerged, researchers had already shown that it would be extraordinarily hard – if possible at all – to build a consistent theory containing tachyons. Physicists likewise made every kind of attempt to make sense of the tachyon mode in bosonic string theory, but none of those attempts succeeded. The problem showed unambiguously that the bosonic version of string theory was missing something important.

In 1971 Pierre Ramond, a professor of physics at the University of Florida, modified the bosonic version of String Theory by incorporating fermionic modes of vibration. This work, along with later work by John Schwarz, laid the foundation of a new version of String Theory – and it had a surprising aspect: its bosonic and fermionic vibrational modes came in pairs, each bosonic mode having a corresponding fermionic mode and vice versa. Later, Joël Scherk, David Olive, and others identified the reason for this pairing: the new version of String Theory contained the principle of supersymmetry, and the pairing reflected the high degree of symmetry of the new theory. Thus *Superstring Theory* was born. Just as importantly, these works showed that the tachyon mode does not appear in the new version of the theory.

Initially, though, the work of Scherk, Olive, and others contributed mainly to quantum field theory rather than to string theory. By 1973 other physicists had found that supersymmetry, discovered in the course of reformulating String Theory, also applies to theories based on point-like particles, and they quickly took the steps needed to build it into quantum field theory. Thus supersymmetric quantum field theory was born, whose consequences we saw in the previous section. In this way an important advance in the development of String Theory fed back into Quantum Field Theory.

Even though supersymmetry, as we’ve seen, plays an important role in Quantum Field Theory, in String Theory its role can hardly be overestimated. String Theory is our best bet for reconciling General Relativity with Quantum Mechanics; and only the supersymmetric version of the theory is devoid of a fatal tachyon mode, and also contains the fermionic modes of vibration. Physicists believe that if String Theory is correct, then so is supersymmetry.

The last thing I should mention in this chapter is that from the discovery of Superstring Theory until the mid-1990s, one important problem remained in the theory.

**A Problem of Redundancy**

Suppose someone told you they had a proven explanation of the Bermuda Triangle mystery. You might feel sceptical at first, since you already know there are plenty of proposed explanations, none of which has actually been proven. But being a marine expert, you decide to hear this person out, because there might be something useful for your own research. As they explain their idea in detail, you find that they have a body of documented evidence supporting it – say, they argue that the real reason aircraft and ships disappear in the Bermuda Triangle has to do with particular features of the Gulf Stream (this is in fact one of the explanations that has been proposed). In that case you would most probably listen to the whole explanation, and who knows – they might even convince you of its correctness.

That’s all well and good – but what if this person, right after the first explanation, tells you they have another one? You patiently listen to it as well, and find, surprisingly, that the alternative explanation is supported by evidence just as good as the first. Even though both explanations seem very reasonable, this makes you doubtful, because as a scientist you look for a single resolution to a conundrum, not several. But suppose you are then given a third, fourth, and even a fifth explanation, each as reasonable and as well supported as the others. By the end of the discussion you would have no more insight into the Bermuda mystery than you had at the start. The point of this example is that in fundamental questions, “more” is sometimes “less”.

By 1985, despite being deservedly respected among physicists, String Theory had been shown to suffer from exactly the problem we touched upon in the previous paragraph. It turned out that supersymmetry – the central part of the theory – could be incorporated into String Theory in not one but five different ways. All five led to the pairing of fermionic and bosonic modes, but the details of the pairing, and the results each approach produced, differed quite significantly. The names of the five theories aren’t really important at this point, but I’ll mention them for readers eager to look for more information on their own; we shall return to them in a later chapter, when we consider their unification at the start of the second superstring revolution. They are: type I string theory, type IIA and type IIB string theories, Heterotic SO(32) string theory, and Heterotic *E*_{8} × *E*_{8} string theory. All the characteristics we considered earlier hold for each of these theories; the differences arise only in the details.

Five different versions of a theory meant to explain all the richness around us is rather too many. We live in one universe, and we are therefore looking for one explanation.

One possible way out of this dilemma would be for experiment to rule out four of the five theories, leaving only one. But even if this approach led to the expected result, we would still be left with the question: why are the other four theories mathematically consistent at all? Why weren’t they ruled out at the stage of formulation? As Edward Witten once said, “If one of these theories describes our universe, then who lives in the other four?” Ideally, the final theory – whether String Theory or something else – should be the way it is because there is no other way it could be. If we discovered only one logically consistent way to combine the concepts of General Relativity with those of Quantum Mechanics, many physicists would feel that humankind had reached the deepest understanding of the laws of Nature.

As we shall see in later articles, in the mid-1990s string theorists took a giant step in this direction by showing that these five theories are in fact five different ways of describing one universal theory, now called M-theory. We shall consider those questions later; in the next chapter we are going to see that the elegant unification offered by String Theory requires yet another radical reassessment of our beliefs about how the Universe works.


These are the opening words to David Bowie’s brilliant song “Life on Mars?”, and although the song is great, Bowie couldn’t have been more wrong. As I write this, Europe’s latest mission, ExoMars, has just been launched on a Russian Proton rocket and is on its way to the Red Planet to look for signs of life.

I won’t go into details here about the mission and how it will work, but I would like to use this space (no pun intended!) to think a bit about what it would mean if we found, or indeed didn’t find, life on Mars. OK, just a few details then! ExoMars will arrive in October and consists of an orbiter and a lander called Schiaparelli. They will not be looking for little green men, or in fact little green anything. What the mission is designed to test for is the gases that could indicate biological activity on Mars.

Over the years we have heard that large quantities of methane have been detected, and then we have heard that there is no methane. The latest seems to be that there might be methane, at least in some areas, some of the time. The Viking landers in the 1970s tested for chemical signs of life, but the results were also inconclusive: one experiment showed a definite positive result and the other a definite negative, which is about as inconclusive as you can get. About as inconclusive as asking a politician for a simple yes or no answer!

So let’s jump a year forward in time and look at what we might find, and what that finding would mean. Scenario number one: we find no signs of life at all. This would not in fact be a final, conclusive answer. As Carl Sagan said, “Absence of evidence is not evidence of absence.” It would mean that Mars might be completely dead, or that we hadn’t looked in the right way, or that we were just unlucky. By “hadn’t looked in the right way” I mean that the experiments might be testing for life as we know it while Martian life is different in some way. If we later found life that was different from Earth life, that would tell us a lot about how life develops.

Scenario number two is that we find life, and that it is the same as Earth life. This would mean that life was probably knocked off Earth or Mars in a meteorite impact and landed on the other planet. The most likely route is Mars to Earth: Mars cooled and became hospitable before Earth did, and because it is smaller, it is easier to knock bits off it by hurling rocks at it.

The bits you knock off also fall inwards, which requires less energy than going the other way. Tests have also shown that it is possible for bacteria to survive inside rocks, in space, and during entry into Earth’s atmosphere. And this sort of thing happened a lot in the early Solar System, partly because there were a lot of rocks flying about and partly because Bruce Willis hadn’t been born yet. We don’t know whether it actually happened, but we know it is possible. It could also mean that life is pretty much the same everywhere, which would leave the origin of life rather inconclusive: if life is the same everywhere, we can’t tell whether it started in one place and got transferred elsewhere, or whether it arises independently on different planets and still turns out the same.

Alternatively, we might find life that clearly does not share a common origin with life on Earth, which would mean that wherever the conditions are suitable, life appears. Discovering life on Mars, or on any other planet or moon, would be a huge thing, but discovering that life is almost inevitable wherever the conditions are right would have enormous implications.


After I had written about how we think the Universe got started, I realised that we had left off with a Universe consisting only of gas and radiation. That it came into being at all is pretty amazing, but to be honest, a ball of hot gas and energy isn’t very useful, especially if you have ambitions of becoming a complex organism that can explore space, investigate the mysteries of creation and write blogs! So this piece will be about what happened next: how we went from a ball of energy and gas to stars, galaxies and planets. Oh yes, and blog writers.

We left off with a universe that had suddenly grown to many times its original size, and as we know, when something expands it cools. OK, nearly everything: my waistline has expanded over the years, but I wouldn’t claim it has got cooler! So as the Universe expanded and cooled, it reached a point where matter as we know it could condense out and form a swarm of protons and electrons and some neutrons. A proton on its own is the nucleus of a hydrogen atom. Some protons and neutrons got squeezed together to form deuterium and some helium and tiny amounts of lithium and beryllium, but nothing heavier was created until the first stars got to work. This burst of nucleosynthesis took place while our baby universe was between about ten seconds and twenty minutes old.

There is also a little mystery here. In fact, quite a big mystery. When matter appeared, something else appeared too: antimatter, particles that are identical but with opposite charges, and when the two meet they annihilate each other completely. As the early Universe was tiny, the matter and antimatter would have had no chance of avoiding each other, so everything we see today is the leftovers from all that destruction. What puzzles physicists is why there was a very tiny imbalance that allowed a small fraction of our kind of matter to survive.

Until the point where electrons joined protons, the ‘cosmic soup’ was too thick with particles for light to get through it. The photons in there couldn’t travel far before they hit something. As the Universe expanded and cooled even more, the protons and electrons were able to join to create the first atoms, one proton and one electron giving a hydrogen atom. When the first atoms formed, the ‘soup’ thinned and suddenly the photons could travel almost unhindered through the Universe. This era is called ‘Recombination’, which is a bit of a silly name since up till then nothing had been combined before, but the old names seem to stick, so that’s what we call it. By now the little universe is 380,000 years old, still an infant.

At this point the temperature is still about 3000 Kelvin, which equates to a lot of energy, which in turn equates to very high frequency light; we will come back to why that is significant shortly. What happened next is, well, nothing much really for quite a while. In fact this period is known as ‘The Dark Ages’ because so far there are no stars to light up the skies. It wasn’t until somewhere between about 150 million and a billion years after the Big Bang that the first stars began to form, but first we need to backtrack a bit. Well, a lot actually. Right at the very beginning, when matter first started to form, there were two kinds: that which we call ‘matter’, and an as yet unidentified kind that does not interact with photons at all, so we can’t detect it directly. This is what we call ‘Dark Matter’, and although it doesn’t interact with photons, it does interact gravitationally, and this is very important. It means that it can clump together, but unlike ‘ordinary’ matter it can’t get blown apart in the high-energy environment of the early universe by all those photons whizzing about. That means all the clumps of Dark Matter that formed right at the very beginning are still around as the Universe expands, and this creates a kind of skeleton for all the stuff that forms to cling to.

The light that started travelling across the Universe when the ‘soup’ thinned had very short wavelengths, but because spacetime is expanding, the wavelength of this light has been stretched. Wavelength and temperature are related, so as the wavelength stretches it corresponds to a lower temperature, and we can measure that temperature: about 2.725 Kelvin. This is what we call the Cosmic Microwave Background radiation, and it marks the furthest back we can see. Before that point light couldn’t travel far, so we can’t see anything from earlier.
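As a back-of-envelope check (my own sketch, not from the original post): the light was released at roughly 3000 Kelvin and we measure it today at about 2.725 Kelvin, and since temperature falls in proportion to how much the wavelengths have been stretched, the stretch factor follows directly:

```python
# Sketch: how much has the CMB light been stretched since recombination?
# Temperature scales as 1 / (1 + z), where z is the redshift.

T_emitted = 3000.0   # K, roughly the temperature at recombination
T_today = 2.725      # K, the measured CMB temperature

stretch = T_emitted / T_today   # this is 1 + z
z = stretch - 1

print(f"wavelengths stretched about {stretch:.0f}x (redshift z ~ {z:.0f})")
```

So the microwave photons we detect today started out as visible and infrared light, stretched roughly a thousandfold by the expansion of space.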

At about this time the first stars are forming, and these are big. I mean really big: giants many times the size and mass of our Sun. When a star is very big it acts like many human stars: it burns brightly and then goes out with a bang. Live fast, die young! These giant stars forge heavier elements, which are blown into space when the stars go supernova, and this forms the raw material for the next generation of stars, which can then make even heavier elements that can make small rocky planets…

As the first stars are forming, they are also gathering together in groups. First dozens, then hundreds, then thousands, then millions: the first galaxies and groups of galaxies were built up around the scaffolding of the original clumps of Dark Matter, and these first galaxies also gathered into groups, and these groups in turn into larger groups. From the original, microscopic clumps of Dark Matter in the very early universe we can now observe clusters of a few hundred galaxies that are part of superclusters containing millions of galaxies: spirals like our own Milky Way, ellipticals, irregulars. There are tiny dwarf galaxies of a few hundred stars, right up to giant galaxies that make our Milky Way look puny.

So this is where we are today, with a universe full of billions of galaxies, and this is what it looks like if you happen to own a large telescope orbiting your home planet:


The Standard Model of particle physics describes all elementary particles as point-like objects. Hence, they neither extend in any direction nor possess any internal structure. Despite its tremendous success in predicting physical phenomena in particle physics, we can be sure that it is incomplete, since it describes only three interactions, leaving the gravitational interaction out. Moreover, all attempts to include gravity in the picture have failed due to the fierce quantum fluctuations of the very fabric of space-time itself at the micro-scale. This contradiction has led physicists to search for a deeper understanding of Nature. For the last several decades String Theory has been on the theoretical frontier of the search for a unified theory.

String Theory proposes a unique way of describing the Universe on the tiniest of scales. This change of description, as physicists figured out, allows General Relativity and Quantum Mechanics to be peacefully unified under a new overarching framework. String Theory suggests that the elementary components of matter are *not* point-like particles as in the Standard Model, but one-dimensional fibres, which could be thought of as infinitely thin strands, each perpetually vibrating with a particular pattern. These strands are called strings. But unlike the strings on a guitar or violin, which consist of molecules and atoms, the strings of String Theory do not consist of anything: they are a fundamental and indivisible component of matter. According to String Theory, all the elementary particles of the Standard Model – quarks, electrons, neutrinos, photons and others – are represented by one fundamental entity: the vibrating string. These strings, however, are so small that they appear to be point-like objects even when examined with the most energetic particle accelerators to date, such as the LHC (Large Hadron Collider).

But a simple theoretical change of point-like objects to strings already leads to profound consequences. Firstly, it seems to resolve the contradiction between GR and QM. Secondly, as I’ve mentioned above, all the matter particles and all the force-carriers are represented by one fundamental entity, so that we could say that String Theory is a unified theory of all matter and interactions that we know of. Finally, as we shall see in this and the following articles, String Theory once again dramatically changes our understanding of the physical world surrounding us. The description that you’ll see is taken from Brian Greene’s book “The Elegant Universe”.

**A Brief History**

In 1968 a young theoretical physicist, Gabriele Veneziano, was analysing experimental results on the strong nuclear interaction. He had spent several years on that task until he realised that a particular exotic mathematical formula, Euler’s beta-function, seemed capable of explaining all the characteristics of particles involved in the strong interaction. Veneziano’s work led to a multitude of other papers which used the beta-function to describe vast arrays of data. His realisation, however, was incomplete: everybody could see that the function worked, but nobody could explain why. By 1970 Leonard Susskind, Yoichiro Nambu and Holger Bech Nielsen had managed to find the physical reason behind the Euler beta-function. They showed that if we replace point-like particles with tiny vibrating one-dimensional strings, then the strong nuclear interaction is precisely described by the beta-function.

But even though this theory was simple and intuitively straightforward, it was soon found to be flawed. In the 1970s experimentalists were able to look deeper into the subatomic world, and what they found was that some predictions of the theory based on strings instead of point-like particles were in direct contradiction with experimental data. At the same time a part of quantum field theory – Quantum Chromodynamics – was being developed intensively. This theory, which is based on the model of point-like particles, was extremely successful in explaining the characteristics of the strong interaction, which led the majority of physicists to abandon string theory.

Some researchers, however, did not want to scrap the theory, since its mathematical structure was so beautiful that they believed it had to point to something profound. Initially, one of the problems with String Theory was that it predicted too wide a range of characteristics for the force-carriers of the strong interaction. Some of these did describe the behaviour of gluons, but others predicted the behaviour of particles which had nothing to do with the strong interaction. Then came a surprise. In 1974 two physicists, John Schwarz and Joel Scherk, realised that this drawback was actually a huge benefit for String Theory. They examined those strange modes of string vibration and found that one of them coincided strikingly with the characteristics of the long-sought particle responsible for the gravitational interaction: the graviton. Although gravitons remain beyond detection, physicists can explicitly describe some of their characteristics, and Schwarz and Scherk found that these characteristics are precisely realised by certain modes of vibration. Based on that, they concluded that String Theory’s first step into physics had failed only because physicists had narrowed its domain of application too much, looking solely for a description of the strong interaction. What Schwarz and Scherk showed was that String Theory is not just a theory of the strong interaction, but a theory that includes the force of gravity along with everything else.

The physics community, however, reacted to this suggestion rather coolly. String Theory had failed in its attempt to describe the strong nuclear force, and the majority of physicists thought that applying it to such a global issue as unifying all the forces was just a waste of time. Further analysis showed that String Theory had its own inconsistencies with Quantum Theory, so at the start of the 1980s it seemed that the gravitational interaction still could not be incorporated into the quantum picture. This lasted until 1984, when Schwarz and Michael Green – another pioneer of String Theory – showed that those inconsistencies with Quantum Theory could be resolved. Moreover, they showed that the theory was broad enough to include all four interactions and all kinds of matter. This news spread across the entire physics community, and this time it was met with tremendous enthusiasm. Many physicists, including even undergraduate students, immediately started working on the theory with huge passion.

The period 1984–1986 has since been known as the “first superstring revolution”, where superstring refers to the name the theory acquired at that time: Superstring Theory. The prefix ‘super’ has to do with one of the main characteristics of the theory – supersymmetry – which we shall consider in later articles. In that period thousands of scientific articles were written by a huge number of physicists. These works showed that many characteristics of the Standard Model naturally result from the structure of String Theory. Moreover, many of those characteristics receive a more complete description in String Theory than they do in the Standard Model. These achievements convinced many physicists that the theory might eventually become the grand unified theory of all matter and interactions.

However, after all those successes physicists continued to face significant obstacles. In theoretical physics, obtaining exact solutions to a given set of equations is often very difficult, so physicists usually look for approximate solutions, which in most cases are perfectly adequate. This strategy works well when you have the complete form of the equations being analysed. In String Theory, however, the situation is far more complicated: even the derivation of the equations has proved so astonishingly complex that physicists have managed to derive only their approximate form. Thus in String Theory we have to search for approximate solutions to approximate equations. After the first superstring revolution, physicists found that these approximate equations were incapable of answering some crucial questions. After many unsuccessful attempts to deal with this situation, many physicists, frustrated, stopped working on String Theory and returned to their previous research. For those who remained in the camp it became clear that new methods had to be developed to move beyond the approximate solutions that had been obtained.

The end of this stagnation came from one of the leading physicists of our era, Edward Witten, in his presentation at a string theory conference in 1995. In that presentation Witten showed a way to overcome the problem of the approximate equations and laid the foundation for the second superstring revolution. In this and the next five articles we shall explore the achievements of the first superstring revolution; after that we shall consider Witten’s work and the achievements of the second.

**What are Strings?**

As we have previously seen, String Theory suggests that all particles – if examined with an extraordinary level of precision, far beyond that of contemporary instruments – would be seen as tiny vibrating filaments of energy. As we shall see later, the length of a typical string is close to the *Planck length*, which is one hundred billion billion (10 to the 20^{th} power) times smaller than the size of an atomic nucleus. So it is not surprising that contemporary accelerators cannot test the string nature of matter, and for now we have to rely on theoretical investigation. We will describe the astonishing conclusions based on these investigations, but first let us consider the nature of strings themselves. What are they made of? How can we be sure that they are truly a fundamental ingredient of matter? From the time of the ancient Greeks the atom was long believed to be this fundamental ingredient. Then, when the existence of atoms was confirmed, we discovered that atoms themselves consist of protons, neutrons and electrons. Yet again this was not the end of the story, and eventually we found that protons and neutrons consist of even smaller particles: quarks. So aren’t strings just another layer in that puzzle?

There are two answers to this question. The answer which naturally emerges from the mathematical apparatus of String Theory is that strings are truly fundamental, similar to the atoms of the ancient Greeks, so the question of what they are made of is devoid of meaning. Here we can think of a linguistic analogy, where letters represent the fundamental layer of any text. The text consists of paragraphs, paragraphs consist of sentences, sentences of words, words of letters, and there we stop: the question of what letters consist of does not make any sense. Likewise, strings in String Theory are considered fundamental: although they have spatial extent, they do not consist of anything smaller. This is the first answer.

The second possibility is rather different. Despite all its successes, we still don’t know whether String Theory is actually correct, and whether it would represent the final unified theory if it were confirmed. If String Theory is correct but fails to be the final theory, strings might be just another layer in Nature’s nesting doll. This is also quite possible.

In this series of articles, apart from the last few chapters, we shall be considering strings in the sense given by the first answer above, i.e. regard them as the truly fundamental component of matter.

**The Unification through String Theory**

Apart from the absence of the gravitational interaction in its description, the Standard Model has another problem: it gives no explanation of the nature of some of the parameters it works with. For example, why do the particles which we described in the first and fourth articles of this series have the characteristics we observe? Why do their parameters, such as mass and electric charge, have the particular values we’ve measured? And finally, why does Nature arrange these particles in three families? Did these properties emerge just by chance in our universe, or do they hide some profound physical meaning?

The Standard Model is not capable of giving answers to these questions since it takes all of these experimentally obtained parameters as its input data. Without this input data the Standard Model would be incapable of making testable predictions. You might say that one of the pieces listed above – the mass of particles – has a mechanism which explains the experimentally observed values. Indeed, this mechanism was confirmed in July of 2012 when a team working at CERN announced the discovery of the long-sought Higgs boson. This particle is the smallest pocket of the Higgs field, just as a photon is the smallest pocket of the electromagnetic field. This Higgs field represents the mechanism by which elementary particles acquire mass. The intensity of interaction between any type of particle and this field gives the particular masses to different particles. But here the question about one parameter (mass) just translates to the question about other parameters – the characteristics of the Higgs field (or of the Higgs boson). Again, the Standard Model gives no answer to this question, so although the importance of the Higgs boson discovery is hard to overestimate, we still have the list of parameters in the Standard Model whose values are taken as the input data.

In String Theory all these characteristics are defined by only one parameter – the mode of a string vibration. We can first consider an analogy with a simple violin string. Any string can perform essentially an infinite number of different resonance oscillations. You can see a short list of such oscillations in figure 1 below.

The term resonance oscillation means that the oscillation has a certain frequency (which you can think of as the number of periods per unit time) and that the number of oscillations fitting between the two ends of a string can take only integer values (similar to the properties of waves that we considered in the article on the principles of Quantum Mechanics). Your ear perceives the resonance oscillations of strings as different musical notes. The strings of String Theory have the same characteristics. Some examples of strings with different modes of vibration are shown in figure 2.
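The integer constraint can be made concrete with a toy calculation (illustrative numbers of my own, not data from the article): for a string fixed at both ends, the allowed frequencies are whole-number multiples of a fundamental, f_n = n·v/(2L):

```python
# Toy model of resonance oscillations on a string fixed at both ends.
# Only whole numbers of half-wavelengths fit between the ends, so the
# allowed frequencies are integer multiples of the fundamental f_1 = v / (2 * L).

v = 440.0   # wave speed along the string, m/s (assumed value)
L = 0.5     # length of the string, m (assumed value)

fundamental = v / (2 * L)
allowed = [n * fundamental for n in range(1, 5)]   # first four resonances

print(allowed)   # every entry is an integer multiple of the fundamental
```

Frequencies in between these values simply cannot form a standing wave on the string, which is the discreteness the article is describing.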

String Theory suggests that just as the strings of a violin produce different musical tones through different resonance oscillations, the tiny fundamental strings *produce different masses and different coupling constants* through different modes of vibration. This implies that all those particle characteristics which the Standard Model takes as input data are determined by the modes of vibration of the strings sitting inside particles. We can conceptualise this by considering the mass of a particle. The energy of a particular mode of vibration is determined by two parameters: the amplitude (the distance between the midline and the peaks) and the frequency (the number of periods per unit time). The larger these two parameters, the higher the energy. This means that more intense vibrations give a string higher energy, while less intense vibrations are associated with lower energy, which makes intuitive sense.

Now recall that according to the Special Theory of Relativity, mass and energy are two sides of the same coin: the higher the energy, the more massive an object is, and vice versa. And here we can draw a conclusion: according to String Theory, the mass of a particle is determined by the vibrational energy of its inner string. Therefore the strings inside heavy particles vibrate intensely, while the vibration of a string inside a light particle is quite calm. And because an object’s mass determines its gravitational characteristics, we have a direct connection between the mode of vibration of a string and the corresponding particle’s response to gravity.
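As a quick numerical illustration of this mass-energy link (my own example, using the electron's known rest energy rather than anything string-specific):

```python
# E = m c^2 read backwards: given the energy locked up in a vibration,
# what mass does it correspond to? The energy used here is the electron's
# rest energy, chosen purely as a familiar reference point.

c = 2.998e8      # speed of light, m/s
E = 8.187e-14    # electron rest energy, joules

m = E / c**2     # the mass implied by that energy
print(f"m = {m:.3e} kg")
```

The result is about 9.1e-31 kg, the electron mass; in the string picture this would be the (heavily cancelled, as discussed later) vibrational energy of the electron's inner string.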

Using more abstract reasoning researchers found a similar correspondence between the particles’ response to other interactions and other characteristics of strings’ vibration. For example, the coupling constants of the strong, weak, and electromagnetic interactions and electric charge of a particle are determined by the mode of vibration. Moreover, a similar principle holds for the force-carrier particles as well. Photons, gluons, weak gauge bosons, and – what’s particularly important – gravitons represent nothing but other vibrational modes of the same fundamental physical entity – string.

Thus in String Theory vibration determines *everything*! The measured characteristics of all elementary particles are determined by certain modes of vibration of inner strings. This view is radically different from the view we had before String Theory, where it was assumed that the difference between elementary particles is dictated by the fact that these particles are made, in a sense, of different ‘material’. Conversely, String Theory suggests that the material of all matter and all fundamental forces is essentially one. Each different particle represents one particular string, and all these strings are essentially equivalent to one another. And the difference between particles is dictated by the different modes of strings’ vibration. What we thought of as different material appears to represent the different tones performed by fundamental strings. The Universe consisting of countless strings works like a cosmic symphony.

**Cosmic Symphony**

As we’ve just seen, the characteristics of elementary particles are determined by the modes of vibration of the corresponding strings. This suggests that if we could determine precisely which modes are allowed, we would be able to describe those particles’ characteristics and see whether the theory’s predictions match the experimental results. To figure out the list of allowed modes we would have to ‘pluck’ a string in every possible way. But as we’ve seen, these strings are far too small for us to carry out such an experiment; instead, we must use the mathematical apparatus to ‘pluck’ strings theoretically. In the 1980s many researchers felt that this list would shortly be found and that the theory of everything was already in our hands. As was later realised, however, the elation was premature. String Theory might eventually become the theory of everything, but certain obstacles prevent researchers from determining the spectrum of vibrational modes to the desired degree of accuracy. Therefore, despite some truly outstanding achievements, String Theory cannot yet provide the necessary testable details. So even though the theory is, in principle, capable of describing all the elementary particles’ characteristics, a lot of work remains to be done in order to achieve this goal.

In the following articles we shall see the problems which the experts in String Theory are facing, but first let us briefly familiarise ourselves with some of them. We are all familiar with tension: the objects around us can experience quite different tension loads. For example, the strings of a violin are under much lower tension than the strings of a grand piano. You can feel this when you play such an instrument: you have to exert more force to play a melody on a grand piano than on a violin, because the grand piano’s strings, with their high tension, require more external energy to move. We can measure the tension of ordinary strings because we can manipulate them directly. In String Theory, again, we can’t conduct a direct experiment on a string, since it is too small. In 1974, however, when Schwarz and Scherk found that one pattern of string vibration corresponds to the graviton, they also managed to determine its typical tension by an indirect method. What they found was shocking. Their calculations showed that the intensity of an interaction (in this case, gravitational) is inversely proportional to the tension of the corresponding string, and since the gravitational interaction is so extraordinarily weak, the derived value was colossal: one thousand billion billion billion billion (10 to the 39^{th}) tons – the so-called *Planck tension*. Thus the fundamental strings are much stiffer than ordinary strings. This result has three important consequences.
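We can sanity-check the quoted figure with the natural force scale built from c and G (a sketch of my own, assuming the Planck tension is of order c⁴/G):

```python
# Order-of-magnitude check of the Planck tension, taken here to be the
# natural force scale c^4 / G (an assumption made for illustration; string
# theory conventions differ by numerical factors).

c = 2.998e8      # speed of light, m/s
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
g = 9.81         # standard gravity, to convert newtons to tons-force

force_newtons = c**4 / G
tons = force_newtons / g / 1000.0   # metric tons-force

print(f"{force_newtons:.1e} N, roughly {tons:.0e} tons")
```

This gives about 10^44 newtons, i.e. of order 10^39 to 10^40 metric tons-force, within an order of magnitude of the figure quoted above.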

**The Consequences of Stiff Strings**

Firstly, the strings of a violin or a grand piano are anchored at both ends, which keeps their length constant. By contrast, nothing restricts a fundamental string from shortening, so the colossal tension squeezes these strings down to ultramicroscopic size. Calculations show that a typical string under the Planck tension shrinks to the Planck length: 10 to the negative 35^{th} meters.
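The Planck length itself is built from the three fundamental constants ħ, G and c; a quick computation (the standard formula, not specific to this article) reproduces the figure above:

```python
import math

# Planck length: l_P = sqrt(hbar * G / c^3)
hbar = 1.055e-34   # reduced Planck constant, J s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {l_planck:.2e} m")   # ~1.6e-35 m
```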

Secondly, because of such strong tension, the energy of a typical string takes on an extraordinarily large value. Recall that you have to exert more force on a string of a grand piano than on a string of a violin to make it vibrate, because of the different tension. Therefore, two strings vibrating in exactly the same pattern but under different tension will possess different amounts of energy: the string with higher tension possesses more energy. This tells us that the energy of a string depends on two parameters: its particular mode of vibration and its tension. From this description you might think that if we decreased a string’s frequency and amplitude continuously, its energy would decrease correspondingly until it reached zero. But recall the quantum mechanical picture that we discussed in the fourth chapter of this series. According to quantum mechanics, any fluctuations and wave-like perturbations – including the vibrations of strings – can have *only discrete amounts of energy*. Thus the energy of a particular string is the product of an integer and a minimal denomination of energy. This minimal denomination is proportional to the string’s tension and its frequency, and the integer is determined by the amplitude.

A very important implication is the following. Since the minimal energy denomination of a string is proportional to its enormous tension, this value is also enormous compared to the energies of the elementary particles we are familiar with: it is equal to the value known as the *Planck energy*. If we translate this into mass using *E = mc^2*, we obtain a mass roughly 10 billion billion (10 to the 19^{th}) times that of a proton. This value, you guessed it, is known in physics as the *Planck mass* (as you can see, almost everything in String Theory is ‘Plancktized’). This means that the typical mass of a string equals an integer multiple of the Planck mass.
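The Planck mass is the same kind of combination of fundamental constants; a quick check (the standard formula) confirms the proton-mass ratio quoted above:

```python
import math

# Planck mass: m_P = sqrt(hbar * c / G), compared with the proton mass.
hbar = 1.055e-34      # reduced Planck constant, J s
c = 2.998e8           # speed of light, m/s
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
m_proton = 1.673e-27  # proton mass, kg

m_planck = math.sqrt(hbar * c / G)
ratio = m_planck / m_proton

print(f"Planck mass ~ {m_planck:.2e} kg, about {ratio:.1e} proton masses")
```

The ratio comes out at about 1.3 × 10^19, matching the "10 billion billion" figure in the text.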

Here an important question emerges: if the natural scale of String Theory involves such overwhelmingly huge values of energy and mass, how can it describe much lighter particles such as protons, electrons and neutrinos?

The answer again comes from the laws of quantum mechanics. Heisenberg’s Uncertainty Principle guarantees that there is no such thing as a state of complete rest for an elementary particle (recall the fourth chapter again): all particles perpetually experience quantum fluctuations. This is also true for strings: regardless of how ‘calm’ a string seems to be, it is always subject to quantum fluctuations. A relevant fact that string theory researchers found in the 1970s is that string vibrations and quantum fluctuations cancel each other out to a large degree, if we look at them from the energy perspective. This is possible because, from the quantum mechanical point of view, the energy of the quantum fluctuations is negative. Moreover, the magnitude of this energy is approximately equal to the Planck energy, so it drastically reduces the positive energy of string vibration. This implies that the minimal energy (which we thought was equal to the Planck energy) in many cases cancels out to such a degree that the string’s mass gets close to the masses of the elementary particles which the LHC and other contemporary accelerators deal with. Consequently, these very modes with the minimal value of energy correspond to the elementary particles that we have been aware of to date. For example, when Schwarz and Scherk were investigating the particular mode of vibration corresponding to the graviton, they found that the energy of this mode cancels out entirely, leading to a massless particle. And since it has been experimentally established that the force of gravity propagates at the speed of light, and only massless particles can move at that speed, this lent credence to Schwarz’s and Scherk’s work.

However, such low-energy modes are the exception rather than the rule in String Theory, and typical vibrational patterns are much heavier. This suggests that all the particles we considered in the first article represent just a tiny island in an ocean full of high-energy strings. Even such heavy particles as the top quark and the Higgs boson (with masses around 184 and 133 times that of a proton respectively) are detected in experiments only because the enormous energy of their inner strings is greatly reduced by quantum fluctuations.

This leads us directly to the third consequence, which is particularly significant in String Theory. There is, literally, an infinite number of possible modes of vibration. Does that not imply an infinite number of elementary particles, which would certainly contradict the experimental data? The answer to the first part is yes, but no contradiction necessarily follows. String Theory does suggest that each possible vibrational pattern corresponds to an elementary particle. Our previous analysis, however, shows that the vast majority of vibrational modes correspond to extremely heavy particles, many times the Planck mass. And since the most powerful particle accelerator to date (the LHC) reaches energies roughly one million billion (10^15) times *lower* than the Planck energy, our ability to probe such energies directly is a long way off. However, this is only the start of our investigation of String Theory, and later we will see that there may be other ways to test some of the theory’s predictions.
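The “million billion” gap quoted above can be checked with a back-of-the-envelope calculation. The sketch below derives the Planck energy from fundamental constants and compares it with the LHC’s roughly 13 TeV collision energy (an assumed figure, not from the article):

```python
import math

# Planck energy E_P = sqrt(ħ c^5 / G), computed from SI constants,
# compared with the LHC's ~13 TeV collision energy (assumed value).
HBAR = 1.0546e-34   # reduced Planck constant, J*s
C    = 2.998e8      # speed of light, m/s
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
J_PER_GEV = 1.602e-10

planck_energy_gev = math.sqrt(HBAR * C**5 / G) / J_PER_GEV  # ≈ 1.2e19 GeV
lhc_energy_gev = 1.3e4                                      # ≈ 13 TeV

ratio = planck_energy_gev / lhc_energy_gev
print(f"Planck energy ≈ {planck_energy_gev:.2e} GeV")
print(f"gap ≈ 10^{math.log10(ratio):.0f}")  # roughly fifteen orders of magnitude
```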

**Gravity and Quantum Mechanics in String Theory**

The unified picture of String Theory we’ve just seen looks truly tantalising. But the theory’s most astonishing achievement is surely its resolution of the contradiction between General Relativity and Quantum Mechanics. Recall from the previous chapter that the problem of unifying them lies in the inconsistency between their main principles. GR requires the fabric of space to be nice and smooth, while one of the core principles of QM – Heisenberg’s Uncertainty Principle – says that such a picture is untenable on the Planck scale. On that scale the fierce quantum fluctuations of the fabric of space disrupt its smooth geometric shape.

There are two different answers to the question of how String Theory resolves this problem. One is rough but helps build a conceptual picture; the second is more accurate, though a bit more complicated. We shall consider them in turn.

The rough answer goes as follows. Although it might sound a bit naive, one way to investigate the structure of an object is to throw other objects at it and watch how they behave after bouncing off. As a simple example, the reason we see objects at all is that photons of light reflect off them and strike our retinas, which pass the information on to our brains. Particle accelerators use the same principle: two particles are accelerated to nearly the speed of light and smashed into each other, and scientists then analyse the debris of the collision to obtain information about the structure of those particles.

The basic rule of such research is that the size of the thrown objects (particles, in this case) sets the resolution limit of the experiment. To understand this, suppose you are given a peculiar task. You have some rigid object – say, a peach stone – which you cannot see, and an apparatus that fires smaller rigid spheres at it. Your job is to draw the stone just by looking at the trajectories of the reflected spheres. This might seem an impossible task, but we are considering the situation in principle rather than in practice.

On the first try your apparatus fires quite large spheres at the stone, say only two times smaller than the stone itself. In this case, even if you are an expert at this game, the trajectories of the reflected spheres can tell you nothing but the overall shape of the stone. Spheres that are large compared with the stone simply lack the resolution for the fine details of the stone’s structure to leave a noticeable imprint on their trajectories.

Next time, the apparatus is loaded with much smaller spheres – 5 millimeters in diameter. Now the resolution is far better, since much smaller details of the stone’s structure leave a sufficient imprint on the spheres’ trajectories for a much more accurate drawing.

Finally, on the last try the apparatus is loaded with even tinier spheres – only ½ millimeter in diameter. Now the subtlest details of the stone’s structure influence the behaviour of the reflected spheres, and the picture you draw could be considered a masterpiece.

The idea behind this imaginary situation is simple: the measuring probe must be sufficiently small compared with the physical features under investigation; otherwise its resolution is insufficient to reveal the structures of interest.

The same conclusions apply, of course, if we decide to investigate the structure of our stone on molecular, atomic and subatomic scales. The ½ millimeter spheres will tell us nothing about such tiny structures; they are far too large to explore molecular scales. That is why particle accelerators use particles such as protons and electrons as their measuring probes. On subatomic scales, where the laws of quantum theory overturn our everyday notions, the relevant measure of resolution is the quantum wavelength, which defines the uncertainty in a particle’s location. If this sounds strange, you may want to review the explanation of the Uncertainty Principle in the fourth article. We established there that the minimal uncertainty in a particle’s location is approximately equal to the wavelength of the particle used as the measuring probe. We also saw that a particle’s wavelength is inversely proportional to its momentum, which is determined, roughly speaking, by the particle’s energy. Thus, by increasing the energy of a measuring probe we can shorten its wavelength and thereby sharpen its resolution. This makes intuitive sense: particles with higher energy have greater penetrating power and can be used to investigate ever finer details.
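The relation between a probe’s energy and its resolving power can be made concrete. For an ultra-relativistic particle the momentum is approximately E/c, so the de Broglie wavelength is roughly hc/E; the toy calculation below (illustrative values only) shows the wavelength shrinking as the probe’s energy grows:

```python
# De Broglie wavelength of an ultra-relativistic probe: λ = h/p ≈ h*c/E.
# Doubling the energy halves the wavelength, i.e. doubles the resolution.
H = 6.626e-34        # Planck's constant, J*s
C = 2.998e8          # speed of light, m/s
J_PER_GEV = 1.602e-10

def probe_wavelength_m(energy_gev: float) -> float:
    """Wavelength (in meters) of a probe whose energy is given in GeV."""
    return H * C / (energy_gev * J_PER_GEV)

for e in (1.0, 10.0, 100.0):   # GeV
    print(f"E = {e:6.1f} GeV  ->  λ ≈ {probe_wavelength_m(e):.2e} m")
# At 1 GeV the wavelength is ~1.2e-15 m, about the size of a proton.
```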

This leads us to the crucial difference between dimensionless particles and strings. Since a string has some length, it cannot be used to investigate structures smaller than that length (whose value, as we’ve seen, is approximately the Planck length). In 1988 two string theorists – David Gross and Paul Mende – showed that continuously increasing a string’s energy *does not* continuously increase its resolving power, as it would for point-like particles. They demonstrated that increases in energy at first do improve a string’s resolution, but beyond a certain value further energy goes into increasing the *size* of the string instead. And as we’ve seen, the larger the string, the *worse* its resolution. The typical size of a string is close to the Planck length, but if we pumped it full of an enormous amount of energy – an amount we can hardly imagine, but which was typical right after the Big Bang – the string could be inflated to macroscopic size. That would be a terrible instrument for exploring the microworld! This implies that whatever method you use, the physical size of strings will not allow you to dig into sub-Planck scales.
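The Gross–Mende behaviour can be caricatured with a toy formula. In the sketch below (natural units, with the string scale set to 1; this schematic form is an illustration, not the actual string-theoretic result) the effective resolution Δx(E) ≈ 1/E + E first improves like a point particle’s and then worsens as the string term takes over:

```python
# Toy model of string resolution vs probe energy (natural units).
# The 1/E term is the familiar point-particle behaviour; the +E term
# stands in for the string growing with energy. Schematic only.
def resolution(energy: float) -> float:
    return 1.0 / energy + energy

for e in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(f"E = {e:5.1f}  ->  Δx ≈ {resolution(e):.2f}")
# Δx is minimised at E = 1 (the string scale), where Δx = 2:
# pumping in more energy beyond that point makes the probe *coarser*.
```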

And the conflict between GR and QM arises exactly on sub-Planck scales. *But if strings are the elementary, fundamental components of matter, and they cannot be used to probe sub-Planck scales, then nothing made of them can experience the effects of quantum fluctuations on those scales, even in principle*. We can draw an analogy with what we feel when we stroke the surface of a table. Although the table appears perfectly smooth to our hand, we know that it actually consists of discrete objects and is in fact granular and rough; our fingers are simply too large to notice. Likewise, strings are too large to feel the destructive effects of sub-Planck quantum fluctuations, and since they are the smallest objects there are, nothing else can be subject to those effects either. In particular, String Theory eliminates the fatal infinities from the physical equations. I will repeat this conclusion once more, just to drive the point home. Strings are the most fundamental components of matter in String Theory. Yet they are too large to notice the destructive quantum fluctuations on sub-Planck scales. Therefore there is no way for anything at all to experience the effects of those fluctuations, even if they exist on sub-Planck scales.

**Have We Solved Anything At All?**

The explanation given above might seem unsatisfying at this point. Instead of resolving the conflict on sub-Planck scales, we seem to be using the non-zero size of strings merely to sidestep the problem. Have we solved anything at all? Yes, in fact we have, and this section should make that clearer.

The important thing to take from the explanation above is that the conflict between GR and QM rests on our very assumption that the structure of space *can* be defined on sub-Planck scales – an assumption rooted in the point-like nature of elementary particles. In other words, the central conflict of twentieth-century physics was of our own making. Because we assumed that elementary particles have no physical extent, we were forced to consider the structure of space on arbitrarily small scales, and there we met a contradiction that seemed almost impossible to resolve. String Theory suggests that we ran into this problem only because the assumption *was incorrect*. According to String Theory, *there is* a limit beyond which the notion of space ceases to make sense, and the fatal inconsistencies between GR and QM stem from our ignorance of this limit.

Here a question may pop into your mind. If String Theory offers such a simple answer to this important problem, why did it take physicists so long to arrive at the idea that the elementary constituents of matter have some physical extent? You might be surprised to hear that the idea had already been around for several decades. Some of the greatest minds of the twentieth century, such as Heisenberg, Dirac, Pauli and Feynman, conjectured that the elementary components of matter might take the form of tiny droplets rather than point-like objects. Whenever they tried to build a theory with such objects as its elementary components, however, they ran into trouble making it consistent with the prevailing principles of physics, such as the conservation of information (required by Quantum Mechanics) and the impossibility of information travelling faster than light (one of the main principles of relativity). Each of their attempts, like those of other physicists, violated at least one of these principles, so for a long time it was thought impossible to construct a theory based on anything other than point-like particles. String theorists, however, have shown again and again that their theory is consistent with all the fundamental principles of physics.

**The More Accurate Answer**

The answer given above has familiarised us with the basic idea of how String Theory copes with the devastating quantum fluctuations on sub-Planck scales, and we could easily jump to the next article at this point. But since we are already familiar with the basic concepts of the Special Theory of Relativity, we now have the means to describe how String Theory resolves the contradiction more accurately.

First, let us consider how two dimensionless particles would interact with each other, and how they would serve as measuring probes if they existed. If we think of a pair of such particles as tiny billiard balls moving along two crossing paths, they would eventually collide, which would change the direction of their motion.

Quantum field theory shows that something similar happens in its domain: the two particles collide and change direction, but the details of the process are different. Let’s imagine that our pair of particles is an electron and its antiparticle, a positron. When a particle and its antiparticle collide, they annihilate into pure energy, which materialises as a virtual photon. This photon travels a short distance and then releases the energy it carries in the form of another electron-positron pair, which continues along the same paths that two billiard balls would have followed (see figure 7 below). What interests us here is the point at which the two initial particles collide and annihilate. As we shall see, *this point can be precisely determined when we work with point-like objects*.

What would happen if we replaced point-like particles with tiny oscillating loops – strings? The basic properties of the interaction remain unchanged unless we examine it on Planck scales. Suppose we have two strings with the modes of vibration corresponding to an electron and a positron. The process looks similar to the one just described: the two strings follow crossing paths, collide, and annihilate into a virtual photon (which is just another string) that travels a short distance before splitting back into two strings. Since the photon is itself a string, the two initial strings, in a sense, merge and become one for a fraction of a second. You can see this in figure 8 below, which shows the so-called world surface (known in the literature as the world-sheet). Slicing it with one-dimensional vertical sections gives the position of the strings at each moment of time, and at the centre of this world surface our strings touch and merge into the single string representing a virtual photon.

As we have emphasised above, the interaction between two point-like particles happens at a point which can be precisely determined. Any observer, no matter how fast they are moving, would agree on where the interaction has occurred. For the interaction between one-dimensional strings, however, this is *not the case*.

Suppose we have two observers – John and Kate – moving relative to each other, and let us return to the world surface shown above. By slicing it we can reconstruct the string interaction moment by moment. First we take John’s perspective. Below we can see how the world surface is sliced from John’s point of view: each slice shows simultaneous events – simultaneous positions of the strings, in this case – in John’s frame of reference. As always, we are drawing only a two-dimensional array of events, but the argument applies equally to a three-dimensional world surface. Of particular importance is the third slice, where the strings come into contact and merge.

Now let’s take Kate’s perspective and repeat the process. As we discussed in the second article, Kate and John *disagree about simultaneity* because of their relative motion: from Kate’s reference frame, simultaneous events lie on our world surface at a different angle.

Comparing the two frames of reference – John’s and Kate’s – we see that they disagree about where and when the strings came into contact. This implies that *there is no sharply defined point in space and moment in time at which the strings touched*! Both characteristics depend on the observer’s frame of reference. So according to String Theory the point of interaction is, in a sense, smeared out over the entire region shown in slices (c) of the two figures above.

Conversely, for the interaction between point-like particles we conclude once again that *there is* a definite point in space and moment in time at which the interaction occurred.

Interactions between dimensionless particles happen at such sharply defined points of spacetime. When the particle in question is a graviton, this leads to a catastrophic outcome: the equations give infinite answers. The non-zero length of strings, by contrast, smears out the spacetime point where the interaction takes place. And since different observers register the interaction at different points, *the location of the interaction is literally spread out over an entire region*. Applied to the gravitational interaction, this smearing rids us of the infinite answers and thus resolves the contradiction between GR and QM. This is the more accurate version of the rough answer given above.

The details of sub-Planck scales that we could have examined with point-like objects are simply out of reach in String Theory. And if the theory really is the ultimate description of Nature, *there is no way to reach the realm of sub-Planck scales by any means*. The conflict between GR and QM disappears in a universe where there is a limit to how deeply we can probe. This is the universe of String Theory, in which the laws of General Relativity peacefully coexist with the laws of Quantum Mechanics.

Thank you for taking the time to read this article. Next time we shall consider a very important concept in String Theory and see why the theory was renamed “Superstring Theory” after the first superstring revolution.


Let’s go right back to the beginning, or at least try to. Where did the Universe come from? How did it get here? We once thought – well, just assumed really, because it seemed to have always been there – that the Universe was eternal and unchanging. This is understandable, as nothing seemed to have changed “up there” in all of human history. Then along came Edwin Hubble, who discovered that when we look far out into the Universe, everything seems to be receding from us: the Universe is getting bigger! But not long before that, Albert Einstein had produced a theory that was a huge leap forward in our understanding of the Universe and how it works. There was just one problem. Good old Albert assumed, like everyone else, that the Universe was unchanging; but his own equations showed that this could not be the case, so he decided his equations must be wrong and added the ‘Cosmological Constant’ to balance everything out and give a nice steady-state universe.

The main problem with this ‘Steady State Universe’ is that it would be incredibly unstable. Gravity would be trying to pull everything in, so there would have to be a repulsive force of some kind to counteract it. If something made even part of the Universe shrink a little, gravity there would become a tiny bit stronger and would overcome the repulsive force a little. That would make the Universe shrink a bit more, making gravity stronger still, and so on until the whole thing collapsed in on itself. On the other hand, if it expanded even a tiny bit, gravity would become a little weaker, causing the Universe to expand some more, and so on and on… you get the picture.

Since then more and more evidence points to a Universe that is expanding from a very small starting point, so let’s look at what that very small starting point might be and what happened just after that.

There are various ideas about how the Universe started. One is that there are membranes, called Branes, separated by something called “the Bulk”. These (mem)Branes attract each other gravitationally until they bump into each other. This ‘bump’ is powerful enough to fill the two Branes with enough energy to create new universes and push them apart. Eventually they begin to approach each other again, bump again, and the whole process starts once more.

Another idea is that the vacuum is not at its lowest possible energy level, and that a quantum fluctuation, a kind of random ‘flip’ in the fabric of spacetime, tips the vacuum over the edge and into a lower, more stable, energy level thus releasing a whole cartload of energy that sets the whole thing expanding. There are also theories of giant stars in higher dimensional space that go supernova and collapse into a black hole and that our universe is inside that black hole.

It is also possible that inflation is simply a quantum event that sets a little piece of a larger universe expanding suddenly. The logical consequence is that this could, and probably would, happen a lot, giving rise to many universes – possibly an infinite number – each of which would probably have its own physics, so we couldn’t even exist in most of them.

Whatever it was that pressed the ‘Start’ button, what happened next is that a tiny fraction of a second later the whole thing got a serious case of ‘Inflation’. When we first discovered that the Universe was expanding, it was assumed that the expansion was steady, probably a bit faster at first and slowing down later, like a bomb exploding. That sounds fine and logical, but there are problems with it. The main one is what we call the ‘Horizon Problem’. Analysis of the Cosmic Microwave Background shows that it is almost exactly the same temperature everywhere. This is a problem because, for regions on opposite sides of the sky to have equalised their temperatures, heat would have had to spread across the whole Universe – and the Universe is simply too young for that to have happened yet. The solution seems to be that the Universe, or rather spacetime itself, expanded fantastically quickly. This spread the energy out nice and evenly, and thus explains the nice, smooth Cosmic Microwave Background.

Now don’t be afraid of this graph and all the weird numbers on it. The horizontal axis shows the age of the Universe in seconds: 10^-45 means a decimal point followed by 44 zeros and then a one, so that is a very small fraction of a second, while 10^5 is just 100,000 seconds. On the vertical axis we can see the radius of the Universe at that time. The whole point is to show how suddenly the Universe went from very tiny to very big. The entire inflationary period lasted only a fraction of a fraction of a moment, but it altered the Universe totally.

Of course this is all conjecture, but all the evidence – from studies of the large-scale structure of the Universe to the smallest particles inside particle accelerators like the LHC – keeps telling us that we are at least on the right track. Maybe one day someone will find data that contradicts the current model, but until then it seems that our Universe did indeed start out unimaginably tiny, suddenly swelled to billions of times its original size, and then settled down to expand steadily for about 11 billion years until the expansion started accelerating again. That, in outline, is the history of the Universe up to now.

What happens in the future is not certain. Maybe the expansion will settle down again and go slower. Maybe it will stop and start collapsing again, although this seems to be unlikely, and maybe the expansion will keep accelerating until the whole thing is so big and thinly spread that the Universe dies with a thin, feeble whimper. It may even end up with the very fabric of spacetime being ripped apart in a ‘Big Rip’ that tears everything, right down to the subatomic level, apart. Watch this space…


In the last century our understanding of the Universe has received an unimaginable boost. The theoretical frameworks of General Relativity and Quantum Mechanics have allowed us to comprehend, and in some cases even predict, physical phenomena on atomic and subatomic scales as well as on the scale of the entire Universe. This is truly a fundamental achievement. The fact that a civilization inhabiting a planet revolving around an ordinary star in quite an ordinary galaxy has managed to figure out such astonishing aspects of the physical world is genuinely impressive. But science is built in such a way that scientists, and physicists in particular, will not stop until they have reached the deepest possible understanding of every aspect of the physical world.

There is good evidence that GR and QM, on their own, do not allow us to reach this deepest understanding. This is what we touched upon in the first article of this series, and here we shall consider it in more detail. Since the domains of application of the two theories are so different, solving a particular problem usually requires *either* GR’s concepts *or* QM’s. But as we saw earlier, there are situations where both theories are needed to get a picture of what is going on: the centre of a black hole and the Universe at the moment of the Big Bang are two such examples. Yet our attempts to combine the two lead to nothing but catastrophe. When we combine the equations of the two theories, a reasonable question yields an answer that makes no sense at all – a probability equal not to 20 or 75 or 100 percent but to *infinity*! What does a probability greater than one, let alone an infinite one, even mean? We have to conclude that there is a flaw in our understanding of physics. This inconsistency, which Brian Greene explains in a separate chapter of his book “The Elegant Universe”, is what we shall focus on in this article.

**The Uncertainty Principle**

When Heisenberg derived his uncertainty principle, the concepts of physics shifted in a way no one had imagined before. Probabilities, wave functions, quanta and the rest demanded a radical departure from the previously deterministic point of view. The uncertainty principle introduces an unambiguously indeterministic element into the physical framework. We considered this principle in the latest article, but for those who have not read it, I should briefly describe what it leads to. According to the uncertainty principle, the Universe becomes extremely turbulent when we investigate space and time on micro scales. In the previous article we saw that there are pairs of characteristics of a particle whose exact values cannot be known at the same time; one such pair is a particle’s position and velocity. To pin down the exact position of a particle you have to illuminate it (bring a photon into contact with it). But the photon carries energy, and so introduces a large uncertainty into the particle’s velocity. And if we decrease the photon’s energy, its wavelength increases, which introduces a large uncertainty into the particle’s position. What this tells us is that the world is essentially chaotic on the tiniest of scales.

This short explanation raises a natural question: does this uncertainty show up only when we – tactless observers – poke our noses into the microworld? No, it does *not*! Another example from the previous article, the behaviour of a particle in a box whose walls are closing in, shows the fundamental nature of the uncertainty principle more clearly, since there we never bring a photon into contact with the particle. But even this example does not reveal all the stunning aspects of the uncertainty principle. What the principle shows is that even in the calmest situation imaginable – completely empty space – there is miraculous activity on subatomic scales. And this activity increases as we probe the fabric of spacetime on smaller and smaller scales.

This conclusion rests on the fact that another pair of characteristics – energy and time – is also tied together by the uncertainty principle. If you have a financial problem, you can borrow some money and pay it back later. Similarly, a particle can borrow energy from the Universe and then return it. In this case, though, the energy can be borrowed only for a very short period of time, and the amount depends on how quickly it is returned. Thus, if a particle borrows energy for an infinitesimally short period of time, the amount of energy can be quite large.
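To put rough numbers on this borrowing, here is a sketch using the relation ΔE · Δt ≈ ħ/2 (the 10^-22-second interval below is an assumed illustrative value, not a figure from the article):

```python
# How much energy can "nothing" borrow, and for how long?
# Energy-time uncertainty: ΔE ≈ ħ / (2 Δt).
HBAR = 1.0546e-34            # reduced Planck constant, J*s
J_PER_MEV = 1.602e-13
PAIR_THRESHOLD_MEV = 1.022   # two electron masses: the minimum for an e+e- pair

dt = 1e-22                                  # borrowing time, seconds (assumed)
de_mev = HBAR / (2 * dt) / J_PER_MEV        # borrowed energy, MeV

print(f"ΔE ≈ {de_mev:.1f} MeV for Δt = {dt:.0e} s")
print("enough for a virtual e+e- pair:", de_mev > PAIR_THRESHOLD_MEV)
```

For this fleeting interval the vacuum can muster a few MeV, comfortably above the threshold for conjuring a virtual electron-positron pair, just as the next paragraph describes.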

The uncertainty principle shows that exact values of energy and momentum are *uncertain* on subatomic scales: they fluctuate spontaneously from one value to another. It is as if *nothing* (an empty region of space) were constantly borrowing energy and momentum from the Universe and giving them back. And here comes a twist. Einstein’s formula **E = mc^2** tells us that energy can be transformed into matter and vice versa. If a fluctuation of energy is sufficiently large, the borrowed energy can turn into matter, and a pair of particles with opposite electric charges (say, an electron and a positron) emerges out of what seemed to be nothing. But because the energy must be returned very quickly, these particles immediately meet and annihilate, giving the energy back – hence they are called *virtual* particles. To put it a little more precisely, a region of space is empty when the intensity of *all* fields in that region is zero. But according to the uncertainty principle, the amplitude of a field and its rate of change cannot both be pinned down at once: the more precisely the amplitude is fixed, the more uncertain its rate of change. Since zero intensity means a precisely zero amplitude, there is a large uncertainty in the rate of change of that amplitude, which means that at the next moment the amplitude will *not* be zero! *On average*, though, the amplitude remains undisturbed, since in some places it takes positive values and in others negative ones. Quantum mechanical uncertainty clearly shows that the Universe is highly turbulent and chaotic on its micro scales. But because the amount of borrowed energy on average equals the amount returned, this tempestuous activity is never observed on everyday scales. And as we shall see later, this chaos is the main obstacle to merging GR and QM.

**Quantum Field Theory**

Throughout the 1930s and 1940s a huge number of physicists worked on finding a robust mathematical apparatus that could tame the microscopic chaos. It became clear that Schrödinger’s equation is only an approximation to the quantum mechanical realm, since it does not take Einstein’s relativity into account. In fact, Schrödinger initially tried to include Special Relativity in his equation, but abandoned the attempt because its predictions were inconsistent with experimental data. So he took a first step towards a unified theory, leaving Special Relativity aside but providing a mathematical apparatus consistent with wave-particle duality and the other experimental facts. Later, however, physicists realised that Special Relativity is essential for describing the microworld: leaving it out means ignoring the interchangeability of matter, energy and momentum.

Initially physicists focused on unifying Special Relativity with the part of Quantum Mechanics that describes the electromagnetic field and its interaction with matter. The result was the theory known as *Quantum Electrodynamics* (QED), one of the theories dubbed *relativistic quantum field theories*. It is *relativistic* because it takes Special Relativity into account; *quantum* because it is formulated on the principles of Quantum Mechanics; and a *field theory* because it marries quantum theory with the notion of a classical force field, in this case Maxwell’s electromagnetic field.

QED is, without doubt, the most precisely tested theory ever developed. Physicists have used it to derive predictions (with the most powerful computers of the time) which were then confirmed experimentally to better than one part in a billion! This means that theoretical results match experimental ones to nine decimal places and more. This agreement of abstract mathematics with real-world experimental data is simply astonishing, to say the least. The details of QED are so subtle and its role in physics so vast that entire books have been written about it, for example the brilliant *QED: The Strange Theory of Light and Matter* by Richard Feynman, who was one of the main contributors to the development of QED.

The success of QED led physicists to try to describe the other forces – strong, weak and gravitational – in a similar way, through a quantum field theory. This approach proved very successful for the strong and weak nuclear forces. Physicists were able to describe these forces by means of quantum field theories; as a result the theories of *Quantum Chromodynamics* and the *Electroweak* interaction emerged. The former describes the strong force with fantastic precision, while the latter shows that the electromagnetic and weak nuclear forces have the same origin! Under conditions of unimaginably high temperature and energy – which the Universe possessed a fraction of a second after the Big Bang – these two forces manifest themselves as one unified force. In work for which Sheldon Glashow, Abdus Salam and Steven Weinberg were jointly awarded the 1979 Nobel Prize in Physics, it was shown that these two forces naturally merge into one in the quantum field description, even though they seem to have nothing in common in our cold Universe. Within a fraction of a second after the Big Bang the temperature dropped enough for the two forces to separate, through a process known as *spontaneous symmetry breaking* which we shall consider later. The Universe then continued to cool, leaving the two forces with the very distinct properties they have today.

So by the 1970s physicists had an accurate description of three of the four fundamental forces of Nature – strong, weak and electromagnetic – and had also shown that at least two of them can be unified in one framework. There have been many attempts to unify the strong nuclear force with the electroweak force as well, which would yield a *Grand Unified Theory*, but so far no one has accomplished this. However, the predictions of both the Electroweak theory and Quantum Chromodynamics have been thoroughly tested in all sorts of settings, and this combined model has been confirmed countless times. Because of that, we call it the *Standard Model* of particle physics.

According to the Standard Model, photons represent the smallest ‘packets’ of the electromagnetic field. Likewise, as we saw in the first article of this series, gluons and weak gauge bosons (W and Z bosons) represent the smallest components of the strong and weak interactions respectively. The Standard Model says that each of these particles is elementary, meaning that they have no internal structure, just like quarks, electrons and neutrinos.

Photons, gluons and weak gauge bosons provide a microscopic mechanism for transmitting interactions between matter particles. For example, two particles with the same electric charge repel each other because they are surrounded by a swarm of photons whose exchange, in a sense, carries the message to the particles that they must move apart. Similarly, two particles with opposite electric charges receive the ‘message’ that they must converge. Likewise, the strong interaction is transmitted by gluons and the weak interaction by weak gauge bosons.

**Symmetries**

You might have noticed that quantum field theory leaves the gravitational interaction behind the scenes. Since physicists successfully used this framework to describe the other forces, you might expect that similar attempts were made for gravity. In such a theory the particle carrying the gravitational interaction would be the *graviton*; and the connection of gravity with the other forces becomes even clearer if we look at examples of what are known as *gauge symmetries*.

First let us recall that according to Einstein’s theories of relativity, any observer – irrespective of their motion – can claim to be at rest while all other observers are moving, so that all points of view carry equal weight. Even observers moving with acceleration can maintain this claim by invoking an appropriate gravitational field. Thus gravity provides a symmetry: it guarantees that all points of view, irrespective of reference frame, are equally valid. Likewise, the strong, weak and electromagnetic interactions are connected to symmetries of their own, even though these are far more abstract than that of gravitation.

To get an idea of these subtler sorts of symmetry, let us consider the strong nuclear force. Every quark carries one of three ‘colours’ (bizarrely called red, green and blue, although they bear no resemblance to real colours). These colours determine a quark’s behaviour under the strong interaction, just as electric charge determines a particle’s behaviour under the electromagnetic interaction. Symmetry steps in when we consider the interactions between quarks of particular colours. All interactions between quarks of the same colour (red-red, green-green and blue-blue) are identical. Similarly, all interactions between quarks of different colours (red-green, green-blue, blue-red) are identical. But what is even more surprising is that if we shifted the three colours (the three strong charges, we could say) in a certain way – if we changed our red, green and blue to, say, magenta, lime and cyan – then even if the shifting parameters varied from one point in space to another and from one moment to the next, the interactions between quarks would not change at all!

A good analogy can be drawn with a perfect sphere. A sphere is an example of a body with rotational symmetry: it looks the same no matter how you rotate it. In this sense we can say that our Universe has a *strong interaction symmetry*: physical phenomena do not change if the charges of the strong interaction are shifted. This symmetry is an example of the gauge symmetry mentioned earlier.

And what is particularly important here is that Hermann Weyl in the 1920s and Chen-Ning Yang with Robert Mills in the 1950s showed that gauge symmetry *demands* the existence of the strong, weak and electromagnetic forces, just as the symmetry of all reference frames demands the existence of gravity! According to Yang and Mills, certain force fields *compensate* for shifts of the charges, keeping the interactions between particles unchanged. In the case of the gauge symmetry connected to shifts of quark colours, the required force is nothing but the strong nuclear force: without the strong interaction, physics would change under such a colour shift. This shows that even though the strong and gravitational interactions are so different, they are connected, in the sense that each is essential for maintaining a certain kind of symmetry. Moreover, the existence of the electromagnetic and weak nuclear forces is also tied to particular gauge symmetries. Therefore, all four known fundamental forces are directly connected to principles of symmetry.

As we’ve just seen, the four fundamental forces have quite a lot in common, which suggests that we should search for a quantum theory of the gravitational interaction within the quantum field theory framework. This search has been pursued by many physicists for decades, but so far nobody has succeeded. In the last part of this article we shall try to figure out why this route has proved so difficult.

**Why Can We Not Live Together**

The laws of GR are usually applied on large scales: from everyday objects all the way up to planets, stars, galaxies and the Universe as a whole. According to Einstein’s picture, in the absence of mass and energy the space-time structure in a region is flat. To combine the laws of GR with those of QM we have to investigate the properties of space and time on microscopic scales. Let us see what happens in this case.

You can see the successive diminishing of scales in figure 1. The bottom level of the figure represents an empty region of space on everyday scales, and each successive level shows a tiny area of the same region investigated on smaller and smaller scales. As you can see, at first – over the first three steps of magnification – nothing happens at all; the structure of space retains its initial form. If we continued to magnify this structure taking into account only classical physics, we would expect to see the same picture at every successive magnification, no matter how small the investigated scales. Quantum Mechanics, however, radically changes this picture. According to QM, *everything*, including the gravitational field, experiences quantum fluctuations caused by the uncertainty principle. Although the classical picture says that the gravitational field in empty space equals zero, the quantum picture says that it fluctuates from one value to another and equals zero only on average. Moreover, the uncertainty principle tells us that the magnitude of these fluctuations grows with every successive magnification.

Since the presence of a gravitational field implies the curvature of space-time, these quantum fluctuations lead to dramatic deformations of space, as shown in figure 1. At the fourth level of magnification the deformations begin to show themselves, but the overall structure remains quite smooth. At the fifth level, however, the deformations become incredibly strong, so that space no longer looks flat and smooth at all – it becomes curved to an unimaginable extent. The investigated region of space takes on a turbulent, curled form known as *quantum foam*, a term first suggested by John Wheeler. Here notions such as *left and right*, *up and down* and even *before and after* lose all meaning! And it is here that we encounter the fundamental discrepancy between General Relativity and Quantum Mechanics: on subatomic scales an inherent property of quantum theory – the uncertainty principle – comes into contradiction with a basic assumption of General Relativity – the flat and smooth geometric model of space and time.

This conflict manifests itself concretely. Calculations that combine GR and QM usually give the same nonsensical answer – infinity. This implies that there is a fundamental flaw in our physical framework and that General Relativity simply cannot cope with the furious aspects of quantum foam. I should mention that infinities also appeared in calculations based on other quantum field theories. There, however, physicists were able to tame the infinities using a procedure known as *renormalization*.

If we reversed our magnification and went back to ordinary scales, however, the fluctuations of the gravitational field would cancel each other out, and we would see a flat region of space again. This is like an image you see on the web. At a glance the changes of colour appear continuous. If you magnified the image sufficiently, however, you would see that it consists of discrete individual points – pixels, as we call them. But to reveal the pixellation you have to magnify the image; until you do, it looks smooth. Likewise, an empty region of space-time looks flat and smooth on ordinary scales (and even on atomic scales), until we inspect it on extraordinarily tiny ones.

The principles of GR and QM allow us to calculate the approximate distance at which the devastating nature of quantum fluctuations would make the space-time structure look like the last level of magnification in figure 1. The incredible smallness of both Planck’s constant and the universal gravitational constant leads to the value of the Planck length – which involves both of these constants – being tiny beyond imagining. It is approximately *10 to the negative 35 metres*: one hundred-millionth of one billionth of one billionth of one billionth of a metre! *If we were to expand an atom to the size of the observable universe, the Planck length would correspond to merely the height of an average tree!*
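If you’d like to check this number yourself, here is a quick Python sketch (my own, not from the book). The standard combination is l_P = √(ħG/c³) – note that the speed of light enters alongside the two constants mentioned above.

```python
import math

# Sketch: the Planck length combines the reduced Planck constant hbar,
# Newton's gravitational constant G, and the speed of light c.
hbar = 1.054571817e-34  # J*s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m/s

planck_length = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {planck_length:.2e} m")
```

Running this gives about 1.6 × 10^-35 metres, matching the figure quoted above.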

If this conflict shows itself only on such fantastically tiny scales, should we care, you might ask. Some physicists would argue that such scales need not be considered in order to make and test physically meaningful predictions. Others, however, are deeply troubled by the fact that the two pillars on which theoretical physics currently rests are in direct contradiction with each other.

As I’ve already mentioned, there have been many attempts to overcome this contradiction. But despite the fact that some of these attempts were very elaborate, they all failed. That was the case until String Theory stepped in. (There are other interesting approaches to combining the concepts of GR and QM, e.g. Roger Penrose’s Twistor Theory and the Loop Quantum Gravity of Abhay Ashtekar, Ted Jacobson and Lee Smolin, neither of which concerns us in this series, though you can find plenty of information on them on the web.) Next time we shall finally start considering String Theory step by step.

Thanks everyone for your time!


You and Jim are a little tired after all those experiments you conducted to test Special and General Relativity, and are now looking for a bar here on Earth. You have finally found a place called ‘H-bar’ which promises a unique experience unlike anything you have ever encountered. Naturally, you enter; you order your favourite Ardbeg whisky and Jim takes a Long Island iced tea. While waiting for your drinks you decide to smoke the Cuban cigar you have kept untouched for quite a long time. You sink into a luxurious chair, light the cigar and are about to take the first puff when, suddenly, you notice that the cigar is no longer in your mouth. You immediately look at your shirt and trousers, thinking the cigar has somehow slipped from your mouth, but they are unharmed. Then you look at the floor around you but cannot find the cigar there either. Jim approaches and asks what has happened. When you tell him you’ve lost your cigar, he answers that it is lying on the table behind you. You turn around and see that it really is there. ‘But the only way it could have got there is by passing right through my head,’ you say. ‘I have no idea,’ Jim replies. Right at that moment the barman calls you, signalling that your drinks are ready, so you decide that the incident with the cigar was just a strange sequence of events. The miracles in the H-bar, however, do not stop there.

When you look into your glass you notice that the ice cubes in it are perpetually moving at high speed, constantly colliding with each other and with the walls of the glass. But Jim is even more surprised: his glass is narrower than yours, and the ice cubes in it are moving so fast that you cannot even make out their shape. And that is not the end of it. The next moment you witness a completely unexpected and strange event: one of the ice cubes passes right *through* the wall of the glass and lands on the table. You immediately pick up the glass but find it intact. The ice cube has passed, literally, right through the glass in some completely mysterious way without causing any damage! ‘It seems we are hallucinating after those space trips,’ you say. Jim agrees, so you down your drinks in one shot and head home to sleep. On the way out you do not even notice that you leave the building not through the actual exit but through a door *depicted* on the wall. The bar staff pay no attention, since such things happen all the time in this place.

The made-up story above, which I find just outstanding, is taken from Brian Greene’s book “The Elegant Universe”, from the start of the chapter on the basic principles of Quantum Mechanics – the physical framework which declares that weird events like those in the H-bar constantly happen in the microworld. In this article I shall try to introduce you to these principles and explain why, in the microworld, such events are no stranger than an ordinary breakfast.

**The Way to Quantum Mechanics**

The first step towards Quantum Mechanics was made by the German physicist Max Planck, who was considering a puzzling problem in the early 1900s. The problem concerned black-body radiation. A black body is one that absorbs all electromagnetic radiation incident upon it, and in order to stay in thermal equilibrium it must radiate energy at the same rate. A typical star like our Sun is a good approximation of a black body, but to make things clearer we can consider another good example: a cavity with one small hole in it. Light incident upon the hole enters the cavity and, if the cavity walls are capable of absorbing light, it is never reflected back out, since it would have to undergo a huge number of reflections to do so and would be absorbed before that happens. Thus the cavity makes an almost perfect black body.

There is a simple relationship between the energy density inside the cavity and the energy radiated by a black body. At the start of the twentieth century the British physicists Lord Rayleigh and Sir James Jeans derived the Rayleigh-Jeans law, based on the concepts of classical physics. It fit observational results well at long wavelengths but led to a problem at short wavelengths: calculations based on the law predicted that the energy density inside the cavity, and hence the emission spectrum of a black body, would go to infinity! This was called the “ultraviolet catastrophe”, and it is the problem Planck was able to solve with his new approach.

To see how he did it, we need to consider the problem in a bit more detail. At that time light was considered to exist in the form of waves, according to Maxwell’s model. These waves are described by trigonometric functions. If you are familiar with those, you can skip the next section, where I briefly describe what wavelength, frequency and amplitude mean in this context.

As shown in the figure above, the wavelength is the distance between two adjacent maxima or minima of a wave – the period of our trigonometric function, or one full cycle of it. If we consider a wave in a given region, for example in our cavity, the more maxima and minima it has, the shorter its wavelength, and vice versa.

The frequency of a wave represents the number of those cycles accomplished by the wave in one second. Frequency and wavelength are interdependent parameters: the greater the frequency the shorter the wavelength and vice versa, the lower the frequency the longer the wavelength.

Finally, the amplitude represents the maximum height or depth of the wave. To be more precise, it is the distance between a peak and the midline, as shown in figure 2 above.

We can make this picture even clearer by taking sound waves as an example. The frequency of a sound wave corresponds to the pitch of the sound we hear: the shorter the wavelength, the greater the frequency, and hence the higher the tone. The amplitude in this example simply represents the sound’s volume: greater amplitude corresponds to greater volume and vice versa.
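The frequency-wavelength trade-off is easy to see numerically. Here is a small Python sketch (my own illustrative numbers: the speed of sound in air at about 20 °C and a few octaves of the note A), using the standard relation wavelength = wave speed / frequency:

```python
# Illustrative sketch (assumed values, not from the article):
# wavelength = wave speed / frequency, so doubling the frequency
# (raising the pitch by an octave) halves the wavelength.
speed_of_sound = 343.0  # m/s in air at ~20 C

frequencies = [110.0, 220.0, 440.0, 880.0]            # octaves of A, in Hz
wavelengths = [speed_of_sound / f for f in frequencies]

for f, lam in zip(frequencies, wavelengths):
    print(f"{f:6.1f} Hz  ->  wavelength {lam:.3f} m")
```

Each doubling of the frequency exactly halves the wavelength, just as described above.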

The problem with the Rayleigh-Jeans relationship was that it did not distinguish between the amounts of energy the waves can carry. According to it, all waves carry the same average amount of energy, and since the number of possible waves inside our cavity is essentially infinite (because we assume they can be of any wavelength), the amount of energy they would carry is also infinite. But everybody understood that this was nonsense; a cavity cannot possess infinite energy. Physicists tried to overcome this paradox, and Planck was the first to succeed.

**Planck’s Solution**

In 1900 Planck suggested that electromagnetic waves carry energy in discrete portions, or quanta. This suggestion allowed physicists to solve the conundrum of infinite energy and brought Planck the Nobel Prize in Physics in 1918. Let us see what it means. A wave can carry only whole-number multiples of its basic quantum; fractional multiples are not allowed. This is like the face values of money being discrete. For example, in the U.S. you can’t have a coin with a face value of one third of a cent, or 12.5 cents. Similarly, an electromagnetic wave cannot carry the energy of 1.5 quanta. According to Planck, the ‘face value’ of the energy carried by an electromagnetic wave is defined by the wave’s frequency. More precisely, he postulated that the minimum energy carried by an electromagnetic wave is proportional to its frequency: higher frequency (shorter wavelength) implies a greater minimum energy, and lower frequency (longer wavelength) a lesser one.

This discreteness immediately solved the problem of infinite energy. Suppose you are at a market with nothing but a $100 note and want to purchase something that costs $4. The shop clerk tells you they have no change, so you have to leave without purchasing anything. Likewise, if the minimum energy a wave may carry is higher than its ‘expected’ contribution, it contributes nothing to the overall energy inside the cavity. More precisely, Planck established that waves whose minimum energy is greater than their average expected contribution are suppressed exponentially, and the extent of suppression grows abruptly with increasing frequency. As we consider waves in our cavity of greater and greater frequency, their minimum energy eventually exceeds their expected contribution, and they contribute nothing. Only a finite number of waves therefore contribute to the overall energy, and hence the energy is finite in value. You can see this in the figure below.
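You can see this exponential suppression directly by comparing the two spectral-energy-density formulas. The following Python sketch (my own, using the standard textbook expressions for the Rayleigh-Jeans and Planck densities and an assumed cavity temperature) shows that the two agree at low frequency, while Planck’s formula cuts the high-frequency waves off:

```python
import math

# Standard textbook formulas (not derived in the article): spectral energy
# density per unit frequency inside a cavity at temperature T.
h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s
T = 5000.0           # assumed cavity temperature, K

def rayleigh_jeans(nu):
    # Classical result: every mode carries the same average energy k*T.
    return 8 * math.pi * nu**2 * k * T / c**3

def planck(nu):
    # Planck's result: high-frequency modes are exponentially suppressed.
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

for nu in [1e13, 1e14, 1e15, 1e16]:
    print(f"nu = {nu:.0e} Hz: RJ = {rayleigh_jeans(nu):.3e}, "
          f"Planck = {planck(nu):.3e}")
```

At 10^13 Hz the two formulas nearly coincide; at 10^16 Hz the Rayleigh-Jeans density keeps growing while Planck’s is utterly negligible – which is exactly why the classical total diverges and Planck’s does not.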

What convinced physicists of the correctness of this model is its phenomenal agreement with experimental results. Planck’s formula for the minimum energy of an electromagnetic wave is the following: **E = hν**, where **E** stands for the energy, **h** for Planck’s constant and **ν** for the frequency of the wave in question. Planck found that by accurately tuning this ratio of proportionality between a wave’s frequency and its minimum energy, he could predict the results of measuring the energy of any black body at any given temperature. This ratio of proportionality is of course Planck’s constant **h**; its close cousin **ћ = h/2π** is pronounced ‘h-bar’, and I think you can now see why the strange bar we started this article with was named H-bar. The constant has an extraordinarily small value, 6.63 × 10 to the negative 34 power (in joule-seconds), which tells us that individual quanta of energy are vanishingly small. This is why we do not notice the discreteness of the energy ‘packets’: when we smoothly turn up the volume of our speakers we think it changes continuously, whereas in the real world it changes discretely, but in steps so small that we cannot notice them. This is how Planck solved the paradox of infinite energy.

**What are those Quanta?**

However, just as Newton derived a way of calculating the strength of gravitational attraction but left unanswered the question of how gravity actually works, Planck solved the infinite-energy conundrum but did not explain why his solution is the way Nature works. Nobody had a rational explanation of why it should be true; nobody apart from Einstein. And it is this work that brought him the Nobel Prize in Physics in 1921 – not Special or General Relativity.

Einstein arrived at his solution by considering the problem of the photoelectric effect. At that time physicists knew that some metals eject electrons when illuminated by electromagnetic waves (light). When light hits the surface of a metal it gives up its energy, which in turn ejects electrons from the metal – no big deal. Here you might suspect that if we increase the *intensity of light* – meaning its overall energy – the velocity of the ejected electrons would increase. Interestingly, this is *not* what happens: it is the *number of electrons* that increases instead. It was also shown that increasing the *frequency of light* leads to a *greater velocity* of the ejected electrons, and vice versa: the velocity decreases as we lower the frequency, until eventually it reaches zero and electrons stop being ejected altogether, *irrespective of the intensity of the light*. A clear-cut conclusion had to be drawn: the frequency of the light, not its intensity, is responsible for the energy of the ejected electrons.

Based on this and on Planck’s model of discreteness, Einstein suggested that every beam of light consists of countless individual particles, which we now call photons. I use the word ‘countless’ deliberately, since a 100-watt light bulb emits approximately one hundred billion billion (10 to the 20th power) photons a second! Einstein solved the problem of the photoelectric effect by postulating that an electron is ejected from the surface of a metal when it is struck by a photon of sufficient energy. And since Planck had already shown that the energy of light is defined by its frequency, the energy of an individual photon must likewise be defined by the frequency of the electromagnetic wave in question. This explains the strange properties of the photoelectric effect. By increasing the intensity of the light we merely increase the *number* of photons, so more electrons are ejected while their velocity stays constant. Conversely, if we increase the frequency instead of the intensity, the number of ejected electrons stays the same but their velocity increases – meaning they possess more energy.
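The ‘hundred billion billion photons a second’ figure is easy to check with E = hν (equivalently E = hc/λ). Here is a small Python sketch (my own, with the idealising assumption that all 100 watts come out as visible light at an assumed 550 nm wavelength):

```python
# Order-of-magnitude check (my own sketch): photons per second from a
# 100 W bulb, idealised as emitting all its power as 550 nm visible light.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

power = 100.0                         # watts
wavelength = 550e-9                   # metres, middle of the visible range
photon_energy = h * c / wavelength    # E = h*nu = h*c/lambda, joules
photons_per_second = power / photon_energy

print(f"energy per photon ~ {photon_energy:.2e} J")
print(f"photons per second ~ {photons_per_second:.1e}")
```

The result comes out at a few times 10^20 photons per second – the same order of magnitude as the figure quoted in the text.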

All this is confirmed by experiment, and there is no doubt that it is a fundamental property of light. Thus Einstein showed that Planck’s model of discrete packets of energy implies that electromagnetic waves themselves consist of elementary particles – photons – which represent those packets, or quanta. The energy of light comes in discrete portions because light consists of discrete objects.

**Wave-Particle Duality**

Here you might recall that water – and, correspondingly, waves on a river – consists of H2O molecules, so is it really so surprising that light waves also consist of particles? This is where things start getting bizarre. The idea that light is made of elementary particles dates back to Newton. This idea, however, was far more controversial among physicists than his theory of gravity; many still stood by the wave view of light. Unfortunately, no devices existed back then that could test which model was correct. It was in the early 1800s that such an experiment was first carried out, by the British physicist Thomas Young, and this experiment – now known as the double-slit experiment – proved that Newton’s opponents were right. The experiment is such a big deal in quantum theory that we need to consider it in some detail.

The initial setup of the experiment is shown in figure 7 above. We have a coherent source of light such as a laser, a plate pierced by two parallel slits, and a screen which detects the light after it has passed through the plate. The detector registers the points where the emitted light hits the screen.

We start our experiment with only one of the two slits open. If we run the experiment for some time in this configuration, the resulting picture on the detector will be as shown in figure 8 below. This result is not surprising, since the light passes through only the upper slit and hence concentrates in the region of the screen behind it. Similarly, if we leave the lower slit open and close the upper one, the detector will show the light concentrated in a region behind the lower slit.

The particle model of light predicts that if we conduct the experiment with both slits open, we will eventually get a picture in which the light concentrates in the two regions behind the slits – simply the combination of the two pictures we obtained with the slits opened one at a time.

The wave model of light, however, leads to a completely different prediction. If we send a wave towards the plate, it propagates through both slits at the same time, splitting into two waves: one that has passed through the upper slit and one through the lower. These two waves then exhibit an interesting phenomenon known as an *interference pattern*. Where two maxima (crests) of the waves are superimposed at a point, the resulting amplitude doubles. Likewise, where two minima (troughs) coincide, the depth of the resulting trough doubles. Where a crest of one wave coincides with a trough of the other, they cancel each other out. Finally, between these extreme cases lies a full spectrum of partial amplification and partial reduction. The conclusion is that the resulting picture on our detector should look like this.

The brightest regions correspond to points where two crests (or two troughs) are superimposed; the dark regions correspond to points where a crest of one wave coincides with a trough of the other, cancelling out; and the whole spectrum of partial amplification and reduction appears as the slightly brighter and slightly darker spots in between. And indeed, this is exactly what the results of Young’s experiment showed.
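For the curious, the fringe pattern can be computed directly. This Python sketch (my own, with assumed numbers for the wavelength, slit separation and screen distance) uses the standard two-slit result for ideally narrow slits, I ∝ cos²(πd·sinθ/λ):

```python
import math

# Textbook two-slit interference (assumed setup, not from the article):
# relative intensity on the screen, I ~ cos^2(pi * d * sin(theta) / lam).
lam = 633e-9   # wavelength, m (a red laser, assumed)
d = 0.1e-3     # slit separation, m (assumed)
L = 1.0        # distance from slits to screen, m (assumed)

def intensity(y):
    """Relative brightness at height y on the screen (1 = brightest)."""
    theta = math.atan2(y, L)
    phase = math.pi * d * math.sin(theta) / lam
    return math.cos(phase) ** 2

center = intensity(0.0)                     # central bright fringe
first_dark = intensity(lam * L / (2 * d))   # first dark fringe
first_bright = intensity(lam * L / d)       # next bright fringe

print(center, first_dark, first_bright)
```

The centre of the screen and the point at y = λL/d come out fully bright, while the point halfway between them is essentially dark – the alternating stripes described above.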

This experiment, consequently, confirmed the wave nature of light that was later given a robust theoretical underpinning by Maxwell’s model.

But Einstein, who had overthrown Newton’s venerable theory of gravity, now appeared to revive Newton’s corpuscular model of light. Needless to say, his new model had to explain the results of the double-slit experiment. At first glance it may seem that, just as water, composed of H2O molecules, shows its wave properties when a huge number of those molecules move together, a huge number of photons moving together would explain the resulting interference pattern. The microworld, however, is far more subtle. Even if we diminish the intensity of our light source until it emits *a single photon* at a time, the resulting picture is exactly the same as shown in figure 11: the interference pattern remains even under sequential emission of photons. This is mind-boggling. How could single photons passing through the plate one by one eventually build up an interference pattern appropriate to wave behaviour? Intuition says that each photon must pass through either one slit or the other, producing the picture shown in figure 9. In fact, however, this is not what happens.

As we have just seen, Einstein's corpuscles of light differ considerably from Newton's. Even though they are particles, they behave as waves at the same time. The fact that their energy is defined by a parameter used to describe waves, namely frequency, is the first indication of the dual nature of light, but the photoelectric effect and the double-slit experiment puzzle us even more. The first clearly indicates that light is represented by particles, whereas the second unambiguously shows its wave nature. Together they forced the physics community to conclude that light is indeed represented by both *particles and waves simultaneously*. Sometimes Nature works in ways completely unfamiliar to our intuition.

**Matter Particles also Have Dual Nature**

In 1923 the French physicist Louis de Broglie suggested that matter particles should also exhibit wave characteristics. He came to this idea by continuing Einstein's chain of reasoning, starting from the famous formula **E = mc^2**. As we saw in the previous articles, mass and energy are interchangeable according to this formula. And as we have just seen, Planck showed that the energy of light depends upon its wavelength. Combining these two facts, de Broglie concluded that matter, too, should be associated with a wavelength, and hence should manifest wave properties as well. Following this logic and considering the wave-particle duality of photons, de Broglie suggested that the constituents of matter also have a dual nature and can behave as particles and waves simultaneously. Einstein accepted this idea right from the get-go, since it represented a logical consequence of his own contributions to both the theory of relativity and quantum physics, but it had to be confirmed experimentally before the whole physics community would accept it.

In the mid-1920s such an experiment was conducted in the laboratory of the Bell Telephone Company. It differed slightly in its details from the double-slit experiment, but the two were essentially identical apart from the fact that the physicists used electrons instead of photons. We need not be concerned with the details here; what is relevant to us is that the electrons showed the same interference pattern as the photons in the double-slit experiment. And the interference pattern, as we have seen above, is an indisputable characteristic of waves. Even if we decrease the intensity of our electron generator such that it emits one electron at a time, we would nonetheless see the resulting interference pattern. Electrons, for some reason, interfere with themselves just as photons do. This leads to an unequivocal conclusion: electrons exhibit wave characteristics as well as particle ones.

The experiment described above dealt only with electrons, but similar experiments show that any quantum objects (i.e. any particles) have both particle and wave characteristics. But why do we not experience these wave characteristics in everyday life? De Broglie provided a formula for the wavelength of matter particles: the wavelength equals Planck's constant **h** divided by the particle's momentum **p**. And since the value of **h** is extraordinarily small, the wavelength of matter particles is so tiny that their dual nature can be detected only in experiments of very high precision.
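A rough numerical sketch of de Broglie's formula λ = h / p makes the point concrete. The masses and speeds below are my own illustrative assumptions, not figures from the article.

```python
h = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Wavelength associated with a moving body, lambda = h / (m * v)."""
    return h / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.11e-31, 1.0e6)  # an electron at ~10^6 m/s
baseball = de_broglie_wavelength(0.145, 40.0)      # a thrown baseball

print(f"electron: {electron:.2e} m")  # roughly atomic size, hence detectable
print(f"baseball: {baseball:.2e} m")  # unimaginably smaller than any atom
```

The electron's wavelength comes out comparable to the size of an atom, which is exactly why its wave nature shows up in careful experiments, while the baseball's is some 24 orders of magnitude smaller still, hopelessly beyond detection.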

**Waves of What?**

The resulting interference pattern in experiments with electrons clearly demonstrated that they can be described as waves. But a natural question immediately arises: waves of *what*? The first attempt to answer this question was made by the Austrian physicist Erwin Schrödinger, who suggested that these waves represent, in a sense, smeared-out electrons. This suggestion, however, had a flaw: when a water wave meets an obstacle on its path it can be split into two separate waves, but particles, including electrons, come only in whole numbers (you cannot have half an electron), so an electron wave meeting such an obstacle can in no way be split into two different waves. The resolution came in 1926 from the German physicist Max Born (who became a Nobel laureate in 1954), and his suggestion is still used by scientists today. It interprets an electron wave as a *probability wave*. At those places where the *absolute value* of the wave's amplitude is largest, the detection of the electron is *most probable*. This probability lessens as the amplitude decreases, so electrons are rarely found where the amplitude is small. Finally, the likelihood of finding an electron at a place where the amplitude is zero is itself zero, so electrons never appear there. Before you conduct an experiment measuring the position of an electron, you can only determine at which point in your laboratory the electron is most likely to be detected; you can in no way know its exact position for certain. The two-dimensional analogue of the probability wave is shown in figure 12 below.

It is a rather strange idea, because we usually resort to probabilities when we play cards, roll dice, or toss a coin. But in those situations the necessity of using probabilities lies in the lack of our knowledge about the system in question. For example, if we toss a coin we do not know its exact weight, the exact force with which we tossed it, the exact characteristics of the surrounding environment (e.g. the direction and strength of the wind), or even whether the coin is fair. Thus we have to use the mathematical rules associated with probabilities. But if we had all this knowledge and, perhaps, a sufficiently powerful computer, we could calculate the exact result, namely whether the coin will land on heads or tails. So this sort of probability does not tell us anything about the fundamental features of the Universe. Conversely, quantum theory introduces the concept of probability at a very deep level. The presence of wave properties in matter particles implies that the fundamental description of matter involves probabilities. De Broglie's formula shows that the wave characteristics of macroscopic objects are essentially undetectable, and the quantum mechanical probability associated with them can be completely ignored. On the other hand, it tells us that this probability is an inherent property of the microworld, and the best you can say about the location of a given particle is the likelihood of its presence at some point.

This implies that if we conduct a certain experiment with the exact same initial conditions over and over again, we will *not* get the same result every time. Recurrent experiments will give us a range of different results, and a larger probability implies that the electron is found more frequently at the corresponding point. If the *likelihood* of the electron being found at point A is *two times greater* than at point B, then it will be detected at A *twice as frequently* as at B. Thus quantum mechanics does not allow us to determine the result of a particular experiment; but we can verify its predictions by conducting the same experiment again and again. And so far, quantum theory has been the most successful of all physical theories, since its predictions match the experimental results *extraordinarily* well.

Those predictions are derived from one of the most important formulae in all of physics, namely Schrödinger's equation. This equation gives a very precise description of the behaviour of these probability waves (or, as they are now called, *wave functions*). As Roger Penrose clearly explains in his book "Shadows of the Mind", the quantum framework can be split into two major procedures. One of them determines the behaviour of wave functions using Schrödinger's equation, and it is *completely deterministic*! It is the other procedure, called *State-Vector Reduction* (you may also have seen the term *Collapse of the Wave Function*), that introduces the probabilistic aspect into this framework. State-Vector Reduction can be explained by the process known as *Quantum Decoherence*, which surely deserves an entire article of its own; for our current purposes we can just say that this process inevitably occurs in any quantum experiment, and there is no way to avoid it. That means the probabilistic aspect is truly an inherent characteristic of the microworld.

Many would argue that such a conclusion is completely unacceptable, since physics is about *predicting* the results of various experiments, not about deriving merely probable outcomes. One of those who did not accept the probabilistic point of view was Einstein. You might have seen his famous quote "God does not play dice", which shows his reluctance to accept such indeterministic physical laws. He thought that probability appears in our physical framework for the exact same reason it appears when we toss a coin, namely as a consequence of our lack of understanding of the principles underlying quantum theory. But numerous experiments performed one after another have consistently shown that it was Einstein who was wrong, not Quantum Mechanics.

Nevertheless, the debates on what quantum theory actually tells us about reality have never stopped. Everybody agrees on how to use QM's equations to obtain fantastically precise predictions, but there are many different approaches to interpreting wave functions and explaining the process of Quantum Decoherence. How does a particle 'choose' which location to appear in when an experiment is carried out? There is no agreement even on whether it chooses at all. One popular approach to Quantum Mechanics suggests that every possible outcome of an experiment is realized, each in a different universe. There are many great books in favour of each of those approaches, but we shall focus our attention on a particular one, since it will play an important role when we consider String Theory.

**Richard Feynman’s Path Integral Formulation**

Richard Feynman was one of the greatest physicists of the XX century. He completely accepted the probabilistic aspect of quantum theory, but in 1948 he suggested an entirely new way of looking at QM. To get an idea of his proposal let us consider the double-slit experiment with electrons.

The problem with interpreting the interference pattern shown in figure 11 lies in the picture drawn by our intuition. Intuition tells us that an electron must pass either through the top slit or through the bottom one, and thus we expect to see the result shown in figure 9. Recall that even if our electron source generates one electron at a time, the interference pattern is still there; hence there must be something sensitive to both slits simultaneously that 'checks' whether both of them are open. Schrödinger, de Broglie, Born and other physicists described this phenomenon with the wave function associated with each electron: the wave propagates through both slits simultaneously, recombines, interferes with itself and, consequently, produces the interference pattern.

Feynman developed another approach. He questioned the very assumption that an electron, being a particle, must pass through only one slit. At first glance this assumption seems so fundamental that it could not be argued with. After all, can we not just figure out which slit the electron passed through after it has done so? Yes, we can, but in that case we would change the outcome of our experiment! To detect the electron after it has passed through the plate we have to light it up, which means we need to bring a photon into contact with it. And while photons, being vanishingly small packets of energy, do not affect macroscopic objects in any noticeable way, they do affect the motion of electrons, since those are infinitesimally small ingredients of matter, and just a tiny push from a photon is enough to displace an electron and change the direction of its motion. Therefore, if we continuously determine which slit each electron has passed through, the interference pattern is *destroyed*, and the resulting picture looks like the one shown in figure 9! The microworld *guarantees* that as soon as we have figured out which slit an electron has passed through, the interference pattern is lost. What this tells us is that we have no way to test the validity of that seemingly indisputable assumption.

What Feynman proposed was that each electron passes *through both slits* simultaneously as a particle. You might think that such an idea would make sense only in science fiction, but not so fast. Feynman postulated that not only does an electron pass through both slits, it essentially follows *every possible path simultaneously*! In this picture the electron simply passes through the top slit. *At the same time* it passes through the bottom slit. *At the same time* it travels to your apartment, comes back to the plate and passes through the top slit. Yet *at the same time* it makes a long journey to the Andromeda galaxy, then turns back and eventually passes through the bottom slit. Beyond these, it follows an infinite number of trajectories from the initial point (the electron source) to the final destination (the detector screen). Some such trajectories, connecting two points A and B, are shown in figure 13 below.

The mathematical details of this model are quite complex, but, roughly speaking, Feynman showed that each of those paths can be associated with a certain number, and the combination of all those numbers gives the exact same probability as the conventional quantum mechanical interpretation with wave functions at play. According to Feynman, there is no need to associate a wave function with our electron; the probability of its appearance at a certain point on the detector screen is given by the combined effect of all the trajectories leading the electron to that point. However bizarre and inadequate this model might seem, its predictions exactly match those of QM's Copenhagen interpretation using wave functions, which, in turn, are confirmed by experiment to an extraordinary degree of precision. We must let Nature decide what is reasonable and what is not.
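As a minimal caricature of this idea (my own sketch, far simpler than Feynman's actual formalism), we can assign each path a unit-magnitude complex phase and add them up; the probability of detection is the squared magnitude of the total. The action values below are arbitrary assumptions chosen to show constructive and destructive cases.

```python
import cmath

def detection_probability(actions, hbar=1.0):
    """Sum a unit phase exp(i*S/hbar) over paths; probability is |amplitude|^2."""
    amplitude = sum(cmath.exp(1j * s / hbar) for s in actions)
    return abs(amplitude) ** 2

# Two paths arriving in phase reinforce each other: a bright spot.
print(detection_probability([0.0, 0.0]))        # 4.0 (relative units)
# Two paths exactly out of phase cancel: the electron never lands here.
print(detection_probability([0.0, cmath.pi]))   # ~0.0
```

With only two paths this is just the double-slit interference calculation in disguise; Feynman's insight was that summing phases over *all* conceivable trajectories reproduces the full quantum mechanical answer.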

But macroscopic objects all consist of elementary particles, so why don't they follow many paths simultaneously if this model is correct, you might ask. The answer is actually straightforward. As the path integral formulation shows, the contributions of the trajectories of large enough objects mutually cancel each other out, so that only one trajectory remains. And this trajectory, you guessed it, is the one that follows Newton's laws of motion! This is why we never observe anything of the sort in the motion of baseballs, rocks, planets or whatever. But for the objects of the microworld, Feynman's rule says that the observed behaviour is often defined by the contributions of many different possible trajectories. I'd like to emphasize once again that this model and Bohr's Copenhagen interpretation provide the *exact same* predictions, and therefore both have been confirmed. These two models unambiguously support each other. Later we shall see that Feynman's approach plays an important role in some aspects of String Theory.

**What is Responsible for those Strange Events in the H-bar?**

We have familiarized ourselves with some aspects of Quantum Mechanics throughout this article. Some of them may have been too bizarre to make intuitive sense of. What I have not yet touched on is those imaginary events in the H-bar with which we started. As it turns out, the explanation of their occurrence can also help us build a somewhat intuitive picture of what is going on in the microworld, even though it is no less weird itself. This explanation lies in what is known as the *uncertainty principle*, derived by the German physicist Werner Heisenberg in 1927.

If you recall, when we considered the double-slit experiment applied to electrons, we established that the act of determining which slit an electron has passed through inevitably influences the result of our experiment, because in order to do that we must impinge a photon upon the electron, which, in turn, changes the direction of the electron's motion. But why can we not use a photon with such low energy that it would 'touch' our electron so gently that its influence would barely be noticeable? Remember that by diminishing the intensity of light we do not lessen the photons' energy, only their number. Once we have diminished the intensity of our light source such that it emits only one photon at a time, there is no way to make it any more 'gentle' other than to turn it off completely. This is a fundamental quantum mechanical limit of 'gentleness'.

On the other hand, we saw that we can reduce the energy of a photon by lowering its frequency. So why can't we make our photons more 'gentle' by lessening their frequency (and, correspondingly, increasing their wavelength)? As it turns out, we cannot circumvent the limit this way either, and here is why. When we direct a wave onto an object, the information we obtain about that object suffices to define its location only with an inherent margin of error proportional to the wavelength. Imagine that we are trying to locate a glacier whose surface is below sea level; we know it is there because the shape of the waves passing close to it is changed by its presence. Before reaching it, the waves form an ordered pattern of repeating crests and troughs. After they pass above the glacier, their form is changed; but one cycle (wavelength) of a wave represents a single unit in this sequence, hence it gives the maximum accuracy with which we can define the location of the glacier. Similarly, a photon represents one cycle of an electromagnetic wave, and its wavelength is the limit of accuracy for our attempts to locate the electron.

In this sense, the accuracy of our measurement of one parameter entails a kind of quantum mechanical compensation: an inevitable inaccuracy in the measurement of the other. We can define the position of an electron very precisely by using a high-frequency (short-wavelength) photon, but such a photon carries so much energy that it brings a high uncertainty to the measurement of the electron's velocity. Conversely, if we use a low-frequency (long-wavelength) photon, we can determine the electron's velocity with high accuracy, but this brings a high uncertainty to its position.

Heisenberg expressed this in a mathematical inequality, **Δx·Δp ≥ ћ/2**, which tells us that the uncertainties in position and momentum are inversely related: the more accurately you pin down one of them, the higher the uncertainty in the other inevitably becomes. What's important here is that this relation holds true for *any* experiment, even though we have shown it only for the double-slit one. And apart from electrons, we could use any other particle as well. This is where quantum physics differs so sharply from classical physics. According to Newton, Einstein and other classical physicists, the state of a particle is described by its position and velocity, but QM tells us that these parameters cannot both have definite values at the same time.
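An order-of-magnitude sketch shows why the principle dominates the microworld yet goes unnoticed in ours. The confinement sizes below are my own illustrative assumptions.

```python
hbar = 1.055e-34        # reduced Planck's constant, J*s
m_electron = 9.11e-31   # electron mass, kg

def min_speed_uncertainty(mass_kg, delta_x_m):
    """Smallest velocity uncertainty allowed once position is known to delta_x."""
    delta_p = hbar / (2 * delta_x_m)   # from delta_x * delta_p >= hbar / 2
    return delta_p / mass_kg

# An electron pinned down to the size of an atom (~1e-10 m):
v_electron = min_speed_uncertainty(m_electron, 1e-10)
print(f"{v_electron:.2e} m/s")  # hundreds of kilometers per second

# A 1-gram bead pinned down to a micron barely notices the principle:
v_bead = min_speed_uncertainty(1e-3, 1e-6)
print(f"{v_bead:.2e} m/s")      # immeasurably tiny
```

Squeezing the electron's box tighter (smaller delta_x) drives its minimum velocity uncertainty up, which is exactly the behaviour of the ice cubes in the narrow glass described below.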

Einstein tried to downplay this departure from classical physics, stating that even though QM places a limit on our knowledge of these parameters, real particles *do have* a definite position and velocity. But the progress in both theoretical physics and technology in the second half of the XX century, and in particular the experimental data obtained by the French physicist Alain Aspect, clearly showed that Einstein was wrong. Heisenberg's uncertainty principle truly is an inherent property of the microworld. If you were to put an electron into a box and start moving the walls of the box closer together, the electron's velocity would increase dramatically. Now, recall the events in the H-bar, where we increased the value of Planck's constant **ћ** such that those strange things from the microworld became noticeable at everyday scales: the ice cubes in your glass were moving very fast, but in Jim's glass, which was narrower than yours (the analogue of our box squeezed even more), they were moving so fast that you could not even make out their shape. Now you know the reason behind this!

The uncertainty principle lies at the basis of yet another outstanding effect of the microworld, one which helps our Sun produce light, namely *quantum tunneling*. If you shoot a bullet at a concrete wall the result will be pretty straightforward: the bullet will hit the wall, bounce back some distance and fall to the ground. The reason is simply that the bullet has insufficient energy to break through the wall. But if we descend to the level of the quantum world, each particle composing the bullet has a tiny probability of making it through. How could this be the case? According to Heisenberg, the uncertainty principle connects not only position and momentum but other pairs of parameters as well, one of them being *energy and time*. The accuracy of your measurement of a particle's energy is inversely related to the time taken to perform the measurement. According to QM, you cannot claim that a particle has a certain amount of energy at a certain instant in time. For a precise measurement of a particle's energy you have to pay: your experiment must take a noticeable amount of time. Conversely, the energy of a particle fluctuates significantly if we carry out a very rapid measurement. What this tells us is that a particle can borrow enough energy to break through a wall, provided it returns it very quickly.

The mathematical apparatus of quantum theory shows that the more energy a particle needs to borrow, the lower the probability of that happening. But even if the energy barrier is quite high, particles sometimes do borrow that energy and pass through a solid object, which would be completely impossible from the point of view of classical physics. When we consider macro-objects consisting of countless particles, the probability of quantum tunneling persists but becomes infinitesimally small, since *all* the particles composing an object have to tunnel through the wall simultaneously. Weird events such as the disappearance of your cigar, the ice cube passing through the glass and your own passage through the door painted on the wall, however, *might* happen in the real world. If you smashed into a concrete wall every second, hoping to get through to the other side, you would have to wait longer than the Universe has existed for such an opportunity to arise! But if you had infinite patience and a similar life expectancy, eventually you would make it through.
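For a feel of the numbers, here is a hedged sketch using the textbook estimate for a simple rectangular barrier, where the transmission probability falls off roughly as T ≈ exp(-2κL) with κ = sqrt(2m(V − E))/ħ. The barrier height, electron energy and width below are assumptions of mine, not values from the article.

```python
import math

hbar = 1.055e-34   # reduced Planck's constant, J*s
ev = 1.602e-19     # one electron-volt in joules

def tunneling_probability(mass_kg, barrier_j, energy_j, width_m):
    """Rough transmission estimate T ~ exp(-2*kappa*L) for a rectangular barrier."""
    kappa = math.sqrt(2 * mass_kg * (barrier_j - energy_j)) / hbar
    return math.exp(-2 * kappa * width_m)

# A 1 eV electron facing a 5 eV barrier only an atom's width across:
p = tunneling_probability(9.11e-31, 5 * ev, 1 * ev, 1e-10)
print(f"{p:.3f}")  # a very real chance, on the order of 10%
```

Widen the barrier to a nanometer and the exponent grows tenfold, collapsing the probability by many orders of magnitude, which is why tunneling is everyday business for electrons yet effectively impossible for a bullet's worth of particles all at once.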

Next time we shall consider in a bit more detail the inconsistency between Einstein's General Relativity and Quantum Mechanics, and then we will finally get to the main topic of this series, String Theory. I thank everybody who has made it this far and read the article in its entirety, and I hope to see you all next time.

Tagged: microworld, quantum mechanics, quantum physics

In his Special theory of Relativity, which we considered in the previous chapter, Einstein resolved the physical conflict between the speed of light and our intuitive understanding of motion. He showed that our perceptions of motion need to be revised when we are talking about objects that move extremely fast. However, soon after that Einstein and other physicists realized that one of the main conclusions of Special Relativity, that nothing can move faster than the speed of light, is in contradiction with Newton's theory of gravity. In solving the first conflict, Special Relativity led to another. Ten years later Einstein resolved this conundrum in his General theory of Relativity (I shall denote it GR from here on), whereby he dramatically altered our understanding of reality once again. In this article we shall be concerned with the main principles and consequences of GR. You can find a more detailed and very clear explanation of these matters in Brian Greene's book "The Elegant Universe".

**The Inconsistency between Newton’s Mechanics and Special Relativity**

Newton was an absolutely astonishing physicist. The starting point of modern science dates back to his works in the XVII century. He had such a mighty intellect that when he realized his work required a mathematical apparatus that had not yet been invented, he invented it! We now call this apparatus "Differential Calculus". (Apart from Newton, it was independently discovered by Gottfried Leibniz.) And what's remarkable about Newton's theory of gravity is that it is still used by scientists in many situations, such as calculating the trajectories of spacecraft sent to any point in our Solar system.

Newton perfectly described how gravity works, but what he left unanswered in his theory is the question of what gravity actually is. According to Newton, gravity depends on just two parameters: the mass of the object exerting the gravitational pull on you and the distance between you and that object. Yet this is the force that makes Earth orbit the Sun at an average distance of 93 million miles. How could this force be exerted on Earth without any direct contact? In Newton's picture gravity was some mysterious force acting *instantaneously* across immense distances. For example, if the Sun suddenly exploded, the Earth would immediately leave its orbit, even though we would still see the Sun shining in the sky: only after roughly 8.3 minutes would the visual information about the catastrophe reach us.
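The 8.3-minute figure is easy to check with simple arithmetic, using the article's round value of 93 million miles for the Earth-Sun distance:

```python
# Light travel time from the Sun to the Earth.
distance_m = 93e6 * 1609.344      # 93 million miles converted to meters
c = 299_792_458                   # speed of light in vacuum, m/s

minutes = distance_m / c / 60
print(f"{minutes:.1f} minutes")   # ~8.3
```

So any signal limited to the speed of light, including, as we are about to see, gravity itself, takes about eight minutes to cross that gap.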

And this is exactly where the profound inconsistency between this extraordinarily precise theory and Einstein's Special Relativity emerges. Einstein's theory, as we saw in the previous article, forbids anything from travelling faster than the speed of light. And not only can no object travel faster than light, neither can any interaction or perturbation: no information of any sort can propagate faster than light. *Nothing* can outrun a photon, *including gravity*.

Therefore at the start of the XX century Einstein realized that there was a direct contradiction between his theory of Special Relativity and Newton's theory of gravity, which almost nobody had doubted for several centuries. Being absolutely confident in the correctness of Special Relativity, Einstein started looking for a new theory of gravity that would be consistent with his earlier theory. This search eventually led him to the General theory of Relativity, which once again forced us to reassess our views on the workings of the Universe.

**The Equivalence Principle**

In 1907, while working on a new model of gravity, Einstein found a thread to pull on. It was based on a physical fact that had been known since the times of Newton. In the early 1900s Einstein elevated this fact into his "equivalence principle". To get an idea of what it means, we should first consider the question of the different sorts of mass. (Sorry, a bit of maths is coming up!)

The first kind of mass is called the inertial mass, and it appears in Newton's second law with the famous formula **F = ma**, where **F** stands for the force you need to exert on an object to make it accelerate, **m** for the mass of that object, and **a** for the acceleration vector of the object. What this means is that the force you must exert on an object to accelerate it depends on the object's mass and on how much you want it to accelerate. That is, the more mass an object has, the harder you have to push it to achieve the same acceleration. Imagine two things, e.g. a smartphone and an empty cup, on your table, and let us say that the smartphone weighs 1 kg whilst the cup weighs 2 kg (a pretty heavy smartphone and cup, aren't they?). To make them both accelerate at the same rate you have to push the cup twice as hard as the smartphone. Note that we are ignoring friction against the table and that sort of thing here.

The other sort of mass has to do with gravity. As you might have heard, any object exerts a gravitational pull on other objects. As I mentioned in the previous section, the force of gravity is given by Newton's law of universal gravitation, expressed mathematically as **F = Gm1m2/r^2**. Here **F** refers to the force of gravity, **G** stands for the universal gravitational constant, **m1** and **m2** for the respective masses of the two bodies under consideration, and finally **r** for the distance between those bodies. Since the masses of both objects appear in the numerator, bodies of higher mass exert a stronger gravitational pull. And the squared distance in the denominator tells us that the force of gravity weakens with the square of the distance. That is, if you measure the gravitational force between your smartphone and the cup when they are, say, one meter apart and obtain some value, that value will be 4 times smaller when you double the distance between them (put them 2 meters apart), and 9 times smaller if you put them 3 meters apart.
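The inverse-square falloff just described can be verified numerically, reusing the article's playful 1 kg smartphone and 2 kg cup:

```python
G = 6.674e-11  # universal gravitational constant, N*m^2/kg^2

def gravity(m1, m2, r):
    """Newton's law of universal gravitation, F = G*m1*m2 / r^2."""
    return G * m1 * m2 / r**2

f1 = gravity(1.0, 2.0, 1.0)   # smartphone and cup one meter apart
f2 = gravity(1.0, 2.0, 2.0)   # twice the distance
f3 = gravity(1.0, 2.0, 3.0)   # three times the distance

print(f1 / f2)  # 4.0 -> four times weaker at double the distance
print(f1 / f3)  # 9.0 -> nine times weaker at triple the distance
```

The absolute forces here are tiny (of order 10^-10 newtons), which is why you never feel your cup tugging on your phone, but the ratios show the squared rule exactly.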

The interesting thing about all this emerges when you combine these equations to calculate the acceleration caused by the force of gravity. In this case we get **Gm1m2/r^2 = m2a**. Here we are assuming that **m1** represents the heavier body and **m2** the lighter one (e.g. an apple falling from a tree under the force of gravity exerted by the Earth). So in our example **G** stands for the universal gravitational constant, **m1** for the Earth's mass, **m2** for the apple's mass, **r** for the distance between the barycenters of the Earth and the apple, and finally **a** for the rate at which the apple accelerates as it falls. What's particularly interesting for us here is that we now have the mass of our apple on both sides of the equation, which means we can divide through by it, so that this parameter has no effect on the resulting value. What this tells us is that *whatever mass a falling object has, it will accelerate at the exact same rate under the force of gravity*!
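After the apple's mass cancels, what remains is a = G·m1 / r^2. Plugging in standard values for the Earth's mass and radius (not given in the article, but well established) recovers the familiar acceleration of free fall:

```python
G = 6.674e-11           # universal gravitational constant, N*m^2/kg^2
earth_mass = 5.972e24   # kg
earth_radius = 6.371e6  # m (distance from the surface to Earth's barycenter)

# The falling object's own mass has cancelled out of the equation:
a = G * earth_mass / earth_radius**2
print(f"{a:.2f} m/s^2")  # ~9.8, the same for an apple, a cup, or a smartphone
```

The result is the well-known g ≈ 9.8 m/s², and the fact that no mass for the falling body appears anywhere in the calculation is exactly the equivalence the next paragraph builds on.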

I hope you have been following this mathematical description, because if you have, you now conceptually understand the equivalence between inertial and gravitational mass. What we shall see next is how Einstein applied this concept and showed that acceleration and gravitational pull are actually two sides of the same coin!

This is conceptually well illustrated by a famous thought experiment. Suppose you are in a spaceship floating in completely empty space. It neither moves in any direction nor experiences any force of gravity. You hold your smartphone and your empty cup, whose masses are different (1 kg and 2 kg), and you let them go at the same time. Since you are in weightlessness, it should not be very surprising that both objects simply float next to you and do not move in any particular direction. Now what happens when we introduce acceleration? Suppose the ship's engines fire and it accelerates 'upwards'. The smartphone and the cup are still floating where you released them, so the spaceship's floor, rising towards them, has to meet them both at the same time irrespective of their masses. And this is exactly the same situation we got when we considered an object falling in the presence of a gravitational field. You can see it shown in the picture below. Although here we have only one weight, I think you get the idea.

What Einstein showed in his theory is that these situations are completely equivalent. If you are placed in a sealed box instead of a spacecraft, you have no way to tell whether you are pressed towards the floor because gravity is pulling you down or because the box itself is accelerating upwards. There is no experiment that would allow you to distinguish between these two things: every experiment you carry out in an accelerated reference frame will give the exact same results as one conducted in a stationary reference frame in the presence of a gravitational field.

In the previous article we talked about the physical indistinguishability between the points of view of two observers moving at a constant speed relative to each other. In particular, you might recall that if you were riding inside a train moving at a constant speed with no acceleration, you would not be able to determine the speed at which the train is moving, or even whether it is moving at all. But if we introduced acceleration, you would immediately recognize that the train is moving. What Einstein showed in General Relativity is that if you take into account a corresponding gravitational field, the laws of physics are completely invariant not only for objects moving at a constant speed, but for *any object irrespective of its motion*! Thus, GR completed the work started in 1905 by Special Relativity.

**Accelerated Motion and the Curvature of Space-Time**

The equivalence principle was a very important step towards the formulation of GR, but to achieve his goal Einstein had to go further and explore how gravity actually works. Luckily for him, the necessary mathematical apparatus had already been developed by the astonishing XIX century German mathematician Bernhard Riemann, which allowed Einstein to work on his new theory rigorously. Based on the concepts of Riemannian geometry, Einstein was able to build his notion of curved space and time, which I shall try to explain next.

For this we shall consider a carousel which is an amusement ride consisting of a rotating circular platform with seats for riders.

Suppose we’ve got the specification for the carousel in question in which we can see its circumference and radius. Using basic Euclidean geometry we obtain the ratio of the circumference to the radius as being equal to **2π**. However, this specification gives us the measure of these parameters while the carousel is stationary. Now suppose we see it only in motion, such that it never stops. If we now measure its circumference and radius, would our results be consistent with the specification and would the ratio of the circumference to the radius still equal **2π**?

What I should emphasize is that the motion of our carousel is *accelerated*, because its direction constantly changes, so it differs from motion with a constant speed which we were concerned with in the previous article.

If we ask our friend Jim to perform this measurement, and he starts with the circumference by laying his tape measure along it, we would immediately see (if we happened to observe this from above) that his result is not the same as in the specification (and as we see it from above as well). This is due to the Lorentz contraction we talked about in the previous article. Since the carousel is in motion, Jim’s tape measure *is contracted along the direction of motion*, but Jim himself is certain that it still has the same length, since there is no relative motion between him and the tape measure. And because the tape measure is contracted, Jim obtains a greater value for the circumference than the one in the specification.

Now you might think that when Jim measures the radius of the carousel he will also obtain a greater value, so that the ratio would still be **2π**. But not so fast. When Jim measures the radius with the same tape measure, its length *is not contracted, since it is now placed perpendicular to the direction of motion*! Thus, the value for the radius obtained by Jim does not change. What this means is that the ratio of the circumference to the radius of our carousel is *greater* than **2π** according to Jim’s measurements.
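
We can put a number on Jim’s result. Since his tape measure contracts by the Lorentz factor along the rim but not along the radius, the ratio he measures is 2π multiplied by that factor. This is a small sketch of that reasoning; the function name is my own.

```python
import math

# Jim's tape measure contracts by 1/gamma along the rim, so he lays
# gamma times more tape lengths around it: measured C = gamma * 2*pi*r,
# while the radius measurement is unaffected.
c = 299_792_458.0  # speed of light, m/s

def measured_ratio(rim_speed):
    gamma = 1.0 / math.sqrt(1.0 - (rim_speed / c) ** 2)
    return 2 * math.pi * gamma  # circumference / radius as Jim measures it

print(measured_ratio(0.0))      # 2*pi ~ 6.283 for a stationary carousel
print(measured_ratio(0.5 * c))  # ~7.26 - noticeably greater than 2*pi
```

For any nonzero rim speed the ratio exceeds 2π, exactly as the argument above says.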

But how could this be true? According to the geometry of the ancient Greeks, *any* circle must have this ratio equal to **2π**. Have we just found a case where Nature escapes mathematical description? Not actually. As Einstein explained, Euclidean geometry still holds good whenever we draw a figure on a flat surface (e.g. a piece of paper or a table), but the shape of the figure is distorted if it is drawn on a curved surface, so that the aforementioned ratio for a circle is no longer **2π**. That is shown in the picture below. Here we can assume that the radii of all the circles are the same, but since the radial lines of the circle drawn on the spherical surface converge, the resulting circumference is less than **2πr**, the value for the flat Euclidean surface. Likewise, the circumference of the circle drawn on the hyperbolic surface is not equal to **2πr** either, but here the value is greater than **2πr** – which was exactly the case in our example with the carousel.

Similarly, our millennia-old notions of triangles need to be altered when we talk about curved surfaces. The standard rule according to which the sum of the three angles of a triangle always equals 180° does not hold for a triangle drawn on a curved surface. A triangle on a positively curved (spherical) surface has angles whose sum is greater than 180°, whereas a negatively curved (hyperbolic) surface has triangles whose angles sum to less than 180°.

These ideas led Einstein to conclude that such violations of Euclidean geometry are due to the curvature of the very space-time fabric. When one moves with acceleration, Euclidean geometry rules no longer hold from their perspective.

Okay, now we have an idea of what space curvature is, but what do we mean by saying that time is also curved when we move with acceleration? As we previously saw, Special Relativity declares the unity of space and time. Consequently, if we say that something is true for space, it must also be true for time. But how can we make conceptual sense of the curvature of time? To do this, let us join Jim in our experiment on the carousel. We ask him to stand at the edge of the platform while we stand at its center. Then we slowly walk towards him, comparing the rate at which time passes for us and for Jim. Here we should notice that the greater someone’s distance from the center, the greater their speed, because they must cover a greater distance to complete one full revolution of the carousel. In this sense, Jim is moving faster than we are when he stands farther from the center. And as we know from Special Relativity, the greater your speed through space, the slower time elapses for you! This tells us that our clock ticks faster than Jim’s, and this remains true until we reach him. This is what we mean by the curvature of time due to accelerated motion: time is curved when its rate changes from place to place.
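
The distance-dependence of the clock rate on the platform can be sketched numerically. The numbers below describe a hypothetical, wildly exaggerated carousel (no real carousel approaches relativistic rim speeds); the function name is my own.

```python
import math

c = 299_792_458.0  # speed of light, m/s

def clock_rate(omega, radius):
    # A rider at distance `radius` from the center moves at v = omega * r,
    # so SR slows their clock by the factor sqrt(1 - v^2/c^2).
    v = omega * radius
    return math.sqrt(1.0 - (v / c) ** 2)

omega = 1.0  # one radian per second - purely illustrative
print(clock_rate(omega, 0.0))    # 1.0: at the center time runs at full rate
print(clock_rate(omega, 2.0e8))  # ~0.74: far from the center time runs slower
```

The farther from the center you stand, the smaller the factor: time elapses more slowly for Jim at the rim than for us at the hub.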

We have considered two main ideas which allowed Einstein to make his final step to the new theory of gravity. After he showed that gravity and accelerated motion are, in a sense, two sides of the same coin, and that accelerated motion is directly related to the curvature of space and time, it was obvious to conclude that gravity itself represents nothing but the curvature of space-time. Let’s see how we can make intuitive sense of this.

**The Curvature of Space due to Gravity**

Let us start again with the curvature of space. According to GR, empty space is represented by a completely flat surface on which a drawn triangle is perfectly well described by Euclidean geometry; hence its angles sum to 180 degrees.

Now if we ask what happens when a massive body is present on that surface, Newton’s theory of gravity would say that nothing happens at all, since space is just a background on which events take place. As Einstein showed, however, the presence of mass curves the very structure of space.

In the two images above we see a two-dimensional representation of both flat and curved space. We can also get a good visual representation with a rubber-sheet analogy. Imagine a stretched rubber sheet. With no objects on it, it represents empty space, and if we introduce a very light spherical object (e.g. a ping-pong ball) and give it a push, it will follow a straight path until it reaches the edge of the sheet. Now what would happen if we placed a massive body at the center of the sheet? Imagine we put a billiard ball there. The sheet now curves due to the presence of this ball, and if we place our ping-pong ball on it and give it an appropriate push – not too weak and not too strong – it will follow a circular path around the billiard ball. Moreover, if we could ignore the friction against the sheet, our ping-pong ball would settle into orbit around the billiard ball. And this is exactly what happens in space, such as in our Solar System. The Sun, being a body of huge mass, bends space around it, so that objects of smaller mass – the Earth, Venus, Jupiter and all the other planets, and asteroids as well – settle into orbit around the Sun.
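
For a real orbit, the "appropriate push" has a precise value: on a circular path, gravity supplies exactly the centripetal force. This short sketch, using standard values for G, the Sun’s mass, and the Earth-Sun distance, recovers the Earth’s orbital speed.

```python
import math

# For a circular orbit, gravity supplies the centripetal force:
# G*M*m/r^2 = m*v^2/r, which gives v = sqrt(G*M/r).
G = 6.674e-11
M_sun = 1.989e30    # kg
r_orbit = 1.496e11  # Earth-Sun distance (1 AU), m

v = math.sqrt(G * M_sun / r_orbit)
print(v / 1000)     # ~29.8 km/s - the Earth's actual orbital speed
```

A push much weaker than this and the ball spirals inward; much stronger and it escapes.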

What’s also important is that all the planets and asteroids are massive objects too, and correspondingly they bend space around themselves. And as we recall, the force of gravity between two bodies depends on their masses and the distance between them. That is why planets, possessing much less mass than the Sun, can capture objects which come close enough to them. This is why the Moon has settled into orbit around the Earth, just as a great number of moons of both Jupiter and Saturn have settled into orbits around those planets. In the same sense, when a parachutist jumps out of their aircraft, they glide down into the well in space caused by the presence of the Earth.

Now we see that Einstein did explain how gravity really acts: it acts through the curvature of space. Fascinating!

The last thing I want to consider in this section is that our analogies – the pictures shown above and the rubber sheet – are incomplete, albeit very helpful. For one thing, both of them are two-dimensional, but space has three dimensions (according to String Theory it has even more, but we won’t consider that in this article). So our analogy is somewhat limited, because the mass of the Sun actually bends space in three spatial dimensions, not just two.

The second problem with the above analogy is that the rubber sheet is bent because something pulls it down. As we have seen, this is not what actually happens with space. The reason why objects settle into orbit around massive bodies such as the Sun is that a path *around* such a body is a geodesic – the natural analogue of a straight line in curved space-time, singled out by the principle of stationary action. Simply speaking, a freely falling celestial body follows the straightest path available to it through the curved space-time.

The last problem with the rubber-sheet analogy is that it takes into account only space and leaves time out. Although it is good for building a conceptual picture of the curvature of space, it is fundamentally incomplete, because as we already know, space and time are inseparable. Thus, to capture the entire picture we must also consider time, which we shall do in a moment.

**The Resolution of the Contradiction**

Okay, this is all well and good, you might say, but what about the contradiction we started this article with? Does GR resolve it? It does! Let’s consider the rubber-sheet analogy once again. If our sheet is flat with no massive objects on it, and we introduce the ping-pong ball, it follows a straight path. If we suddenly put our billiard ball there, the direction of the ping-pong ball’s motion will change, but *not instantaneously*. If we, for example, film this and watch it in slow motion, we shall see that it takes some time for the perturbation of the sheet to reach the ping-pong ball and change the direction of its motion. The perturbation resembles the ripples in a pond after you drop a stone into it, and the rate at which it propagates depends on the particular material of which our sheet is made.

The same is true for the structure of space. If an object moves in empty space and a huge mass suddenly appears at some distance, the gravitational perturbations reach the object only after some time has passed. And what’s important, Einstein calculated the speed at which these perturbations propagate, and it exactly equals the speed of light! In the scenario of the Sun’s explosion, which we considered at the start of the article, the gravitational perturbation caused by this catastrophic event would reach us at the exact same moment as the light, roughly 8.3 minutes after the event. Thus, we would not obtain any information about the catastrophe until we saw it with our own eyes. This showed that the central tenet of Special Relativity – that nothing can move faster than light – survives intact in GR.
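
The 8.3-minute figure is just the Sun-Earth distance divided by the speed of light, which applies equally to the light and to the gravitational disturbance:

```python
# Gravitational disturbances propagate at the speed of light, so news of
# a solar catastrophe takes as long to arrive as the light itself.
c = 299_792_458.0  # speed of light, m/s
au = 1.496e11      # Sun-Earth distance, m

delay_minutes = au / c / 60
print(delay_minutes)  # ~8.3 minutes
```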

**Gravitational Time-Dilation**

From the pictures shown above we can intuitively understand how space warps due to the presence of mass, and now we need to consider the curvature of time due to gravity. This question is not so trivial, since we do not carry a picture of time in our heads. But we can approach it through an example.

Let us assume that Jim and we take two spaceships, both with very accurate clocks precisely synchronized in advance. Jim will approach the Sun, while we will stay sufficiently far away from it, and we will compare the rate at which time elapses for us and for Jim. When he starts moving away, the rate is the same for both clocks. However, as he gradually approaches the Sun, we notice that his clock ticks more slowly than ours. For an ordinary star such as our Sun, though, the time-dilation effect is very small. If our spaceship is located, say, 1 billion km away from the Sun, and Jim’s is very close to its surface, his clock ticks just 0.0002 % slower than ours. But if he were near the surface of a neutron star, the dilation would be very noticeable: his clock could run at roughly 76 % of the rate of ours. And for a black hole it is even more extreme. This tells us that the stronger the gravitational field, the more it curves the very structure of both space and time.
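
Both figures follow from the standard formula for the clock rate outside a non-rotating spherical body. This is a sketch with typical assumed values (a 1.4-solar-mass neutron star of 10 km radius); the function name is my own.

```python
import math

# Clock rate deep in a gravitational field relative to a distant observer,
# for a non-rotating spherical body: sqrt(1 - 2*G*M/(r*c^2)).
G = 6.674e-11
c = 299_792_458.0
M_sun = 1.989e30

def clock_rate(M, r):
    return math.sqrt(1.0 - 2.0 * G * M / (r * c ** 2))

# At the Sun's surface the effect is tiny:
print(1.0 - clock_rate(M_sun, 6.96e8))  # ~2.1e-6, i.e. ~0.0002 % slower

# Near a neutron star (assumed: ~1.4 solar masses, ~10 km radius) it is huge:
print(clock_rate(1.4 * M_sun, 1.0e4))   # ~0.77 of the normal rate
```

The same formula, pushed to the event horizon of a black hole, drives the rate all the way to zero.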

I should also mention that we are already familiar with the concept of time dilation from the previous article, and here I am talking about time dilation as well, but the effects are slightly different. If you recall, previously we compared two observers moving relative to each other at a constant speed, and we concluded that each observer could say that their own clock ticks at the normal rate while the other’s ticks more slowly. That is to say, there is a symmetry between their points of view, so both of them are correct. However, when Jim approaches the Sun he certainly feels its gravitational pull, so he knows that he is the one exposed to stronger gravity. This tells us that the symmetry is lost in this situation, and the passengers on *both* spaceships agree that Jim’s clock ticks slower than ours. The two effects are described in a similar manner, but they differ in the details, as we’ve just seen.

**Experimental Confirmation of GR**

As we’ve seen, GR provides an amazingly elegant description of gravity. It shows that the unity of space-time and gravity is far more dynamic than in Newton’s picture. But regardless of its elegance, we want experimental confirmation of any theory. If the theory in question eludes experimental confirmation, it must be thrown away despite its beauty.

Newton’s theory of gravity had been experimentally confirmed again and again, but in the XIX century the French mathematician Urbain Le Verrier established that the orbit of the planet Mercury is slightly shifted relative to the Newtonian predictions. This anomaly was dubbed “the anomalous precession of the perihelion of Mercury”, and there were various proposed solutions, such as the gravitational influence of an unknown planet close to the Sun, the oblateness of the Sun, and several others. However, none of them proved particularly robust. In 1915 Einstein calculated the precession using the equations of his new theory and obtained a result which *exactly* matched the observed value. For Einstein this was a fantastic success of his new theory. Most other physicists, however, wanted a prediction of yet unknown phenomena rather than the explanation of an existing anomaly. This is no surprise, because it is how science actually works: a new theory is not considered robust until it makes testable predictions and they are verified by experiment. And Einstein made such a prediction.
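
Mercury’s anomalous precession can be reproduced from GR’s well-known perihelion-shift formula together with Mercury’s published orbital parameters:

```python
import math

# GR's extra perihelion advance per orbit: 6*pi*G*M / (a*(1 - e^2)*c^2).
G = 6.674e-11
c = 299_792_458.0
M_sun = 1.989e30
a = 5.791e10         # Mercury's semi-major axis, m
e = 0.2056           # Mercury's orbital eccentricity
period_days = 87.97  # Mercury's orbital period

shift = 6 * math.pi * G * M_sun / (a * (1 - e ** 2) * c ** 2)  # rad/orbit
orbits_per_century = 36525 / period_days
arcsec_per_century = math.degrees(shift * orbits_per_century) * 3600
print(arcsec_per_century)  # ~43 arcseconds per century
```

Roughly 43 arcseconds per century: exactly the unexplained residue Le Verrier found.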

We are sometimes fascinated when looking at the starlit sky. We cannot usually see stars in the daytime, because their light is too dim in comparison with sunlight. However, during a solar eclipse we can, since the Moon blocks enough sunlight for stars to become visible. And this is where Einstein’s prediction comes in. GR predicts that mass bends space, and the greater an object’s mass, the greater the curvature it causes in the space-time structure. So when the light from a distant star passes very close to the Sun’s surface, it must be deflected from its straight path by an amount that can be calculated from the theory.

This causes the star to appear slightly shifted from our perspective here on Earth. The shift can then be compared with the star’s actual position (obtained when its light does not pass close to the Sun). In 1915 Einstein calculated the magnitude of such a shift to be 0.00049 degrees (1.75 arcseconds). This is an extraordinarily tiny angle, but the instruments of the time could already measure it. On May 29, 1919 two British teams of astronomers observed the apparent positions of stars whose light passed very close to the Sun during a solar eclipse. One of these teams, which made its observations on Príncipe, an island off the coast of West Africa, was led by Sir Arthur Eddington, and the other group, led by Charles Davidson, observed from Sobral, Brazil.
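
Einstein’s 1.75-arcsecond figure comes from GR’s standard formula for light grazing a mass, evaluated at the Sun’s limb:

```python
import math

# Deflection of light grazing the Sun's limb: theta = 4*G*M/(R*c^2).
G = 6.674e-11
c = 299_792_458.0
M_sun = 1.989e30
R_sun = 6.96e8  # solar radius, m

theta = 4 * G * M_sun / (R_sun * c ** 2)  # radians
print(math.degrees(theta))                # ~0.00049 degrees
print(math.degrees(theta) * 3600)         # ~1.75 arcseconds
```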

After these observations were made, the data was analyzed for roughly five months, and eventually it was announced, at a joint meeting of the Royal Society and the Royal Astronomical Society on November 6, 1919, that Einstein’s prediction was confirmed. The Sun, like other stars, deflects the trajectory of light that happens to pass nearby. This was Einstein’s moment of glory. The news spread all over the world in a short time, and the day after the meeting the main article in the London “Times” announced a new revolution in science and the downfall of Newton’s ideas. Since that time, not a single experiment has given a result inconsistent with GR. But some of GR’s consequences were too extreme even for Einstein himself. Nevertheless, even they have been shown to be true. This will be the topic of the last two sections of this article.

**GR Consequences: Black Holes**

Soon after Einstein completed his work on GR, the German physicist and astronomer Karl Schwarzschild derived a very precise picture of the curvature of space in the vicinity of a perfectly spherical star. What’s important is that Schwarzschild’s solution of the GR equations showed that if the mass of a star is squeezed into a sufficiently small volume, its gravitational pull becomes too strong for even light to escape the region around the star which we now call the event horizon. And as we know from Special Relativity, nothing can move faster than light, so if even light cannot escape such a region, nothing can escape it at all. At first such theoretical entities were called “dark stars”, or sometimes “frozen stars”, but later John Wheeler popularized the name “black holes”, which has persisted.

We can start by looking at the image above, which shows a dramatic distortion of the space-time fabric caused by a black hole. The orange circle marks the event horizon, the point of no return: once anything crosses it, there is no way for it to get back out of the black hole. But although the gravitational influence of a black hole is enormous in the regions close to the event horizon, you won’t feel any gravitational difference between a regular star and a black hole of the same mass if you are at a safe distance. That is, if our Sun turned into a black hole, our planet would orbit it the way it always has, because what matters, as we saw earlier, is the mass of the body exerting a gravitational pull on you and your distance from it. So the only reason why the replacement of the Sun with a black hole of the same mass would be bad for us is that it would stop shining. If you are not familiar with the physics of black holes and try to cross an event horizon, you will be exposed to extremely strong tidal forces. If you are falling into a black hole feet first and your feet are, say, 2 meters closer to it than your head, your feet experience a much stronger gravitational pull than your head does. Because of this, your body will be stretched until it is torn apart. Physicists have even given this process a humorous name: “spaghettification”.

If you know the principles of GR and do not cross the event horizon, you could use a black hole as a time machine. Suppose the black hole has a mass of 1,000 solar masses, and you come very close to its event horizon but don’t cross it. As we have previously seen, the gravitational field of a very massive object curves time in such a way that it passes more slowly the closer you are to that object. So let us say you hover extremely close to the event horizon, 3 cm above it. In this case the passage of time for you slows down *enormously*: your clock would tick roughly 10,000 times slower than the clock of your friend on Earth. That is to say, if you stay there for only one minute, roughly 7 days pass on Earth in the meantime! And if you stay there for a year, you will return to Earth more than 10 thousand years later.
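
The factor of 10,000 can be checked directly with the Schwarzschild clock-rate formula for the parameters in this thought experiment:

```python
import math

G = 6.674e-11
c = 299_792_458.0
M_sun = 1.989e30

# Hovering a height h above the horizon of a non-rotating black hole,
# your clock rate relative to far-away clocks is sqrt(1 - r_s/r).
M = 1000 * M_sun
r_s = 2 * G * M / c ** 2  # Schwarzschild radius, ~2,950 km
r = r_s + 0.03            # 3 cm above the horizon

slowdown = 1.0 / math.sqrt(1.0 - r_s / r)
print(slowdown)  # ~10,000: Earth clocks tick ~10,000 times faster than yours
```

One of your minutes then corresponds to about 10,000 Earth minutes, i.e. roughly a week.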

To give you a sense of the scales involved, I shall provide a couple of examples. To turn a star into a black hole we have to squeeze it until its radius shrinks below its “Schwarzschild radius”. Our Sun, for example, would have to be squeezed into a sphere with a radius of roughly 3 km. You can get a sense of the density involved by comparing this with the actual radius of the Sun, which is roughly 695,800 km. So if you somehow managed to squeeze all of its mass to the size of Manhattan, it would become a black hole. An object of the Earth’s mass would have to be squeezed into a sphere with a radius of less than a centimeter. Many physicists were skeptical of the possibility of such extreme configurations of matter, but the existence of black holes has now been observationally tested to an extraordinary level of confidence. One way to do this is by observing a star, typically a red giant, orbiting an invisible companion. When the companion is much denser, it strips material off the red giant. This material spirals onto the companion and heats up to enormous temperatures, emitting very bright X-rays and visible light. Observing such a binary, we can gather the data needed to calculate the mass and size of the companion. Some of these companions turn out to have a radius smaller than their Schwarzschild radius, which is a direct indication of the black-hole nature of the object in question.
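
Both numbers come straight out of the Schwarzschild radius formula; the function name here is my own.

```python
# Schwarzschild radius r_s = 2*G*M/c^2: squeeze a body inside this
# radius and not even light can escape it.
G = 6.674e-11
c = 299_792_458.0

def schwarzschild_radius(M):
    return 2 * G * M / c ** 2

print(schwarzschild_radius(1.989e30))  # the Sun:   ~2950 m, roughly 3 km
print(schwarzschild_radius(5.972e24))  # the Earth: ~0.009 m, under 1 cm
```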

Apart from this, we can observe the behavior of stars close to the center of our galaxy. They turn out to move so fast that we can calculate that there must be an object there whose mass is roughly 4 million solar masses. But even this pales in comparison with quasars, objects so bright that they easily outshine their entire host galaxies. Quasars are powered by black holes at the centers of galaxies, and the masses of such black holes are *billions* of times that of the Sun! This was a very short description of black holes, and I encourage those who are interested to read Peter Cooper’s article on this topic.

**GR Consequences: The Big Bang and the Expansion of the Universe**

The most profound consequence of GR was in showing that the Universe is not static, as had been thought for ages. In the early 1920s the Russian physicist and mathematician Alexander Friedmann used Einstein’s equations to draw this conclusion. As he showed, the Universe as a whole cannot be static; according to the GR equations, it must either contract or expand. This was too much even for Einstein himself, so he modified his equations to restore the comfortable conditions of a static Universe, adding a new parameter which is now called the cosmological constant. However, several years later the American astronomer Edwin Hubble experimentally established that the Universe is indeed expanding. Despite Einstein’s reluctance to accept this conclusion, his theory *predicted* it! And it has been confirmed again and again, so there is no doubt at all that the Universe is expanding.

Since we now know that the Universe is getting bigger with time, we can imaginatively reverse the flow of time to study its origins. As we go back into the past, the Universe contracts, galaxies gradually come closer, and the density of the Universe increases. If we go back roughly 13.8 billion years, no complex structures could exist, and all matter was in the form of hot plasma of unimaginable density. Going even further, the entire Universe is squeezed to the size of a planet, then to the size of an apple, and eventually becomes a dimensionless point which we call a singularity. (We shall look at models that slightly refine this picture, such as Inflationary cosmology and String Theory, in later articles, but the Big Bang theory itself will in no way be overturned, only some of its details.) According to our modern understanding of the Universe, this was the starting point: the Big Bang. And this theory is now confirmed experimentally, since it predicted such things as the Cosmic Microwave Background and explained the abundance of the various chemical elements in the Universe. These predictions and several others give expected values that match the observed results extraordinarily well.

And apart from everything else, GR’s concepts can tell us about the ultimate fate of the Universe. I shall not dig deep into this right now, but the overall shape of the Universe could shed light on what is going to happen to it. In a closed universe, which is positively curved – like the surface of a sphere – the expansion eventually stops and turns into contraction, leading to the collapse of the universe into a final singularity, termed the “Big Crunch”. An open universe, which is negatively curved – like a saddle – expands forever and ends up with either a “Big Freeze”, in which the Universe cools down until it reaches the state of maximum entropy, or a “Big Rip”, in which the repulsive push of Dark Energy eventually becomes so powerful that even atoms are torn apart. Finally, if the average density of the Universe exactly equals the critical density, its shape is flat and it expands forever at a continually decelerating rate, with the expansion asymptotically approaching zero. However, in the presence of dark energy even a flat universe could share the fate of an open one. Our latest observations with WMAP and Planck show that the average density is very close to the critical one, and that our Universe is most probably flat. We shall talk about this later.
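
The dividing line between these fates, the critical density, follows from the Friedmann equations. This is a sketch assuming a Planck-era value for the Hubble constant:

```python
import math

# Friedmann's critical density rho_c = 3*H0^2 / (8*pi*G): above it the
# universe is closed, below it open, exactly at it spatially flat.
G = 6.674e-11
Mpc = 3.086e22     # meters per megaparsec
H0 = 67.7e3 / Mpc  # Hubble constant, 67.7 km/s/Mpc (assumed), in 1/s

rho_c = 3 * H0 ** 2 / (8 * math.pi * G)
print(rho_c)  # ~8.6e-27 kg/m^3 - a few hydrogen atoms per cubic meter
```

It is this astonishingly tiny density that WMAP and Planck compare the measured average density against.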

This year marks the 100th anniversary of the General Theory of Relativity. It is an extraordinary theory which familiarized us with a lot of very subtle concepts behind the nature of the Universe. And as we saw at the start, it was constructed upon an inconsistency between Newton’s mechanics and Special Relativity. But GR itself led to another contradiction, which I mentioned in the first article of this series: some of its notions are contradicted by the other extraordinarily successful theory of the XX century, namely Quantum Mechanics. To understand where this contradiction arises, we need to look at the main principles of QM, which we will do in the next article.

Thanks everybody.

Tagged: astrophysics, cosmology, General Relativity, String theory