You and Jim are a little tired after all those experiments you conducted to test Special and General Relativity and are now looking for a bar here on Earth. You have finally found something called ‘H-bar’, which promises a unique experience unlike anything you have ever encountered before. Naturally, you decide to enter, where you order your favorite Ardbeg whisky and Jim takes a glass of Long Island iced tea. While waiting for your drinks you decide to smoke the Cuban cigar you have kept untouched for quite a long time. You sink into a luxurious chair, light the cigar and are about to take the first puff when, suddenly, you notice that the cigar is no longer in your mouth. You immediately check your shirt and pants, thinking that the cigar has somehow slipped from your mouth, but they are unharmed. Then you look at the floor around you but cannot find the cigar there either. Jim approaches and asks what has happened. When you tell him you have lost your cigar, he answers that it is lying on the table behind you. You turn around and see that it really is there. ‘But the only way it could have got there is by passing right through my head,’ you say. ‘I have no idea,’ Jim replies. Right at that moment the barman calls you over, signalling that your drinks are ready, so you decide the incident with the cigar was just a strange sequence of events. The miracles in the H-bar, however, did not stop there.

When you look into your glass you notice that the ice cubes in it are constantly moving at high speed, colliding with each other and with the walls of the glass. But this time Jim is even more surprised. His glass of Long Island is narrower than yours, and the ice cubes in it are moving so fast that you cannot even make out their shape. And this is not the end. The next moment you witness a completely unexpected and strange event: one of the ice cubes passes right *through* the glass and lands on the table. You immediately pick up the glass but find it intact. The ice cube passed, literally, right through the glass in some completely mysterious way without causing it any damage! ‘It seems like we are hallucinating after those space trips,’ you say. Jim agrees, so you down your drinks in one shot and head home to sleep. On the way out you do not even notice that you have left the building not through the actual exit but through a door *depicted* on the wall. The bar staff pay no attention, since such things happen all the time in this place.

The above made-up story, which I find just outstanding, is taken from Brian Greene’s book “The Elegant Universe”, where it opens the chapter on the basic principles of Quantum Mechanics, the physical framework according to which events as weird as those in the H-bar constantly happen in the microworld. In this article I shall try to introduce you to these principles and explain why such events are no weirder than an ordinary breakfast once we enter the microworld.

**The Way to Quantum Mechanics**

The first step towards Quantum Mechanics was made by the German physicist Max Planck, who was considering a puzzling problem in the early 1900s. The problem concerned black body radiation. A black body is one that absorbs all electromagnetic radiation incident upon it, and in order to stay in thermal equilibrium it must radiate energy at the same rate. A typical star like our Sun is a good approximation of a black body, but to make things a bit clearer we can consider another good example: a cavity with one small hole in it. Light incident upon the hole enters the cavity and, if we make the cavity walls capable of absorbing light, it is essentially never reflected back out, since that would require a huge number of reflections and the light would be absorbed before that happens. Thus the cavity makes an almost perfect black body.

There is a simple relationship between the energy density inside the cavity and the energy radiated off by a black body. At the start of the 20th century the British physicist Lord Rayleigh, together with Sir James Jeans, derived the Rayleigh-Jeans law, based on the concepts of classical physics. It fit observational results well at long wavelengths but led to a problem at short wavelengths. In fact, calculations based on this law predicted that the energy density inside the cavity, and hence the emission of a black body, would go to infinity! This was called the “ultraviolet catastrophe”, and it is the problem that Planck was able to solve with his new approach.
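To see the catastrophe numerically, here is a minimal sketch of the Rayleigh-Jeans law in its standard textbook form, u(λ) = 8πkT/λ⁴ (the formula and the solar surface temperature below are standard values, not taken from this article):

```python
import math

K = 1.381e-23  # Boltzmann's constant, J/K

def rayleigh_jeans_density(wavelength_m: float, temp_k: float) -> float:
    """Classical spectral energy density inside the cavity: 8*pi*k*T / wavelength^4."""
    return 8 * math.pi * K * temp_k / wavelength_m**4

T = 5800.0  # roughly the temperature of the Sun's surface, K
for wl in (10e-6, 1e-6, 0.1e-6, 0.01e-6):  # from infrared down into the ultraviolet
    print(f"{wl * 1e9:9.1f} nm -> {rayleigh_jeans_density(wl, T):.3e}")
# The density grows without bound as the wavelength shrinks:
# the ultraviolet catastrophe.
```

Each factor-of-ten step towards shorter wavelengths multiplies the predicted energy density by ten thousand, so integrating over all wavelengths gives an infinite total.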

In order to see how he did that, we need to consider the problem in a bit more detail. At that time light was understood to exist in the form of waves, according to Maxwell’s model. These waves are described by trigonometric functions. If you are familiar with those, you can skip the next few paragraphs, where I briefly describe what wavelength, frequency and amplitude mean in this context.

As shown in the picture above, the wavelength is the distance between two adjacent maxima or minima of a wave. The wavelength represents the period of our trigonometric function, one full cycle of it. If we consider waves in a certain region, for example in our cavity, a greater number of maxima and minima corresponds to a shorter wavelength, and vice versa.

The frequency of a wave is the number of those cycles the wave completes in one second. Frequency and wavelength are interdependent parameters: the greater the frequency, the shorter the wavelength, and vice versa, the lower the frequency, the longer the wavelength.
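For light, these two parameters are tied together by the speed of light: wavelength times frequency equals c. A quick sketch (the constant is standard; the red and violet wavelengths are illustrative textbook values):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def frequency_from_wavelength(wavelength_m: float) -> float:
    """Frequency (Hz) of a light wave with the given wavelength (m): f = c / wavelength."""
    return C / wavelength_m

# Red light (~700 nm) has a longer wavelength, hence a lower frequency,
# than violet light (~400 nm).
red = frequency_from_wavelength(700e-9)
violet = frequency_from_wavelength(400e-9)
print(f"red:    {red:.2e} Hz")
print(f"violet: {violet:.2e} Hz")
```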

Finally, the amplitude is the maximum height or depth of the wave. Or, to be more precise, it is the distance between a peak and the midline, as shown in figure 2 above.

We can make this picture even clearer by taking sound waves as an example. The frequency of a sound wave corresponds to the pitch of the generated sound: the shorter the wavelength, the greater the frequency, and hence the higher the tone. The amplitude in this example simply represents the sound’s volume: greater amplitude corresponds to greater volume, and vice versa.

The problem with the Rayleigh-Jeans relationship was that it did not distinguish between the amounts of energy those waves can carry. According to it, all the waves carried the same amount of energy, and since the number of possible waves inside our cavity is essentially infinite (because we assume they can be of any wavelength), the amount of energy they would carry is also infinite. But everybody understood that this was nonsense; a cavity cannot possess infinite energy. Physicists were trying to overcome this paradox, and Planck was the first to succeed.

**Planck’s Solution**

In 1900 Planck suggested that electromagnetic waves carry energy in discrete portions, or quanta. This suggestion allowed physicists to solve the conundrum of infinite energy and brought Planck the Nobel Prize in Physics in 1918. Let us see what this means. These portions come only in whole numbers; fractions are not allowed, and energy is transmitted in such portions accordingly. This is like the denominations of money, which come in discrete amounts. For example, in the U.S. you cannot have a coin with a face value of one third of a cent, or of 12.5 cents. Similarly, an electromagnetic wave cannot carry 1.5 quanta of energy. According to Planck, the ‘face value’ of the energy transmitted by an electromagnetic wave is defined by the wave’s frequency. More precisely, he postulated that the minimum amount of energy carried by an electromagnetic wave is proportional to the frequency of the wave. A higher frequency, and hence shorter wavelength, implies a greater minimum amount of energy and vice versa: a lower frequency (longer wavelength) implies a smaller minimum amount of energy.

This discreteness immediately solved the problem of infinite energy. Suppose you are in a market with a single $100 bill and you want to purchase something that costs $4. The shop clerk tells you that they don’t have change, and you have to leave without purchasing anything. Likewise, if the minimum amount of energy a wave can carry is higher than its ‘expected’ contribution, then it contributes nothing to the overall energy inside the cavity. More precisely, Planck established that waves whose minimum energy is greater than their average expected contribution are suppressed exponentially, and the suppression sharpens abruptly as the frequency increases. As we consider waves in our cavity of greater and greater frequency, their minimum energy eventually becomes higher than their expected contribution, and they contribute nothing. This leaves only a finite number of waves contributing to the overall energy, and hence the energy also becomes finite in value. You can see this in the figure below.
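Planck's suppression can be made concrete. The standard result of his analysis (assumed here, not derived in this article) is that a wave of frequency f contributes on average hf / (e^(hf/kT) − 1) of energy: at low frequencies this approaches the classical value kT, while at high frequencies the exponential crushes it towards zero. A minimal sketch:

```python
import math

H = 6.626e-34  # Planck's constant, J*s
K = 1.381e-23  # Boltzmann's constant, J/K

def mean_energy_planck(freq_hz: float, temp_k: float) -> float:
    """Planck's average energy per wave of the given frequency at temperature T.
    Approaches the classical K*T for small frequencies; exponentially
    suppressed for large ones."""
    x = H * freq_hz / (K * temp_k)
    return H * freq_hz / math.expm1(x)  # expm1(x) = e^x - 1, accurate for small x

T = 300.0               # room temperature, K
classical = K * T       # classical physics assigns this to *every* wave
for f in (1e10, 1e12, 1e14):
    ratio = mean_energy_planck(f, T) / classical
    print(f"f = {f:.0e} Hz contributes {ratio:.2e} of the classical value")
```

Low-frequency waves contribute almost exactly the classical amount, but by 10¹⁴ Hz the contribution has collapsed by about six orders of magnitude, which is why the total energy comes out finite.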

What convinced physicists of the correctness of this model is its phenomenal agreement with experimental results. Planck’s formula for the minimum energy carried by an electromagnetic wave is the following: **E = hν**, where **E** stands for the energy, **h** for Planck’s constant and **ν** for the frequency of the wave in question. Planck found that by accurately tuning the proportionality factor between a wave’s frequency and its minimum energy, he could predict the results of measuring the radiation of any black body at any given temperature. This factor is, of course, Planck’s constant **h**; its close cousin **ћ** = h/2π is pronounced ‘h-bar’ (I think you can now see where that strange bar we started this article with got its name). The constant has an extremely small value, about 6.63 × 10⁻³⁴ joule-seconds, which tells us that these quanta of energy are vanishingly small. This is why we cannot notice the discreteness of energy ‘packets’: when we smoothly turn up the volume of our speakers we think it changes continuously, whereas in reality it changes discretely, in steps so small that we are not able to notice them. This is how Planck solved the paradox of infinite energy.
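To get a feel for just how small a single quantum is, here is a minimal sketch (the 5.5 × 10¹⁴ Hz figure for green light is an illustrative textbook value, not from the article):

```python
H = 6.626e-34  # Planck's constant, J*s

def quantum_of_energy(freq_hz: float) -> float:
    """Minimum energy (J) an electromagnetic wave of this frequency can carry: E = h*f."""
    return H * freq_hz

# A single quantum of green light is a vanishingly small amount of energy,
# which is why the graininess of energy is invisible in everyday life.
e_green = quantum_of_energy(5.5e14)
print(f"one quantum of green light: {e_green:.2e} J")
print(f"quanta making up one joule: {1.0 / e_green:.1e}")
```

A single joule of green light is built from roughly 10¹⁸ quanta, so the 'steps' in any everyday energy change are utterly imperceptible.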

**What are those Quanta?**

However, just as Newton derived a way of calculating the strength of gravitational attraction but left unanswered the question of how gravity actually works, Planck solved the infinite energy conundrum but did not explain why his solution is the way Nature works. Nobody had a rational explanation of why it should be true; nobody apart from Einstein. And it is this work that brought Einstein the Nobel Prize in Physics in 1921, not Special or General Relativity.

Einstein got to his solution by considering the problem of the photoelectric effect. At that time physicists knew that some metals eject electrons when illuminated by electromagnetic waves (light). When light hits the surface of a metal it gives up its energy, which in turn ejects electrons from the metal; nothing remarkable so far. Here you might suspect that if we increase the *intensity of the light*, meaning that we increase its overall energy, the velocity of the ejected electrons would increase. Interestingly, this is *not* what happens. Instead, it is the *number of electrons* that increases. It was also shown that increasing the *frequency of the light* leads to a *greater velocity* of the ejected electrons and, vice versa, the velocity decreases when we decrease the frequency. If we keep lowering the frequency, the velocity eventually reaches zero and electrons stop being ejected at all, *irrespective of the intensity of the light*. A clear-cut conclusion had to be drawn from this: the frequency of light, not its intensity, is responsible for the energy of the ejected electrons.
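This threshold behaviour can be captured numerically with the relation that resolves it, Einstein's photoelectric equation (1/2)mv² = hf − W, where W (the 'work function') is the energy needed to free an electron from the metal. A hedged sketch; the sodium work function below is a typical textbook value, not a figure from this article:

```python
H = 6.626e-34    # Planck's constant, J*s
EV = 1.602e-19   # one electron-volt in joules
M_E = 9.109e-31  # electron mass, kg

def ejected_speed(freq_hz: float, work_function_ev: float) -> float:
    """Maximum speed (m/s) of a photoelectron from (1/2)*m*v^2 = h*f - W.
    Returns 0.0 below the threshold frequency, however intense the light."""
    kinetic = H * freq_hz - work_function_ev * EV
    if kinetic <= 0:
        return 0.0
    return (2 * kinetic / M_E) ** 0.5

W = 2.3  # work function of sodium, eV (typical textbook value)
for f in (4e14, 6e14, 8e14):
    print(f"f = {f:.0e} Hz -> v = {ejected_speed(f, W):.2e} m/s")
```

Below the threshold (here about 5.6 × 10¹⁴ Hz) no electrons come out at all; above it, the ejection speed grows with frequency, exactly the puzzle described above.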

Based on this and on Planck’s model of discreteness, Einstein suggested that each beam of light consists of countless individual particles, which we now call photons. I have used the word ‘countless’ deliberately, since a 100-watt light bulb emits approximately one hundred billion billion (10²⁰) photons a second! Einstein solved the problem of the photoelectric effect by postulating that an electron is ejected from the surface of a metal if it is struck by a photon with sufficient energy. And since Planck had already shown that the energy of light is defined by its frequency, the energy of an individual photon must also be defined by the frequency of the electromagnetic wave in question. This explains the strange properties of the photoelectric effect. By increasing the intensity of light we just increase the *number* of photons, so our light ejects a greater number of electrons whose velocity stays constant. Conversely, if we increase the frequency of light instead of the intensity, the number of ejected electrons stays the same but their velocity increases, which means they possess more energy.
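The 'countless' claim is easy to check: divide the bulb's power by the energy of one photon. A sketch (idealizing the bulb as emitting all its power as green light at ~5.5 × 10¹⁴ Hz, an assumption made purely for illustration):

```python
H = 6.626e-34  # Planck's constant, J*s

def photons_per_second(power_watts: float, freq_hz: float) -> float:
    """Photons emitted per second by a source of the given power,
    assuming (for simplicity) every photon has the same frequency."""
    return power_watts / (H * freq_hz)

# A 100 W bulb, idealized as a pure green-light source:
n = photons_per_second(100.0, 5.5e14)
print(f"{n:.1e} photons per second")  # on the order of 10^20
```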

All this is confirmed by experiment, and there is no doubt that this is a fundamental property of light. Thus Einstein showed that Planck’s model with discrete packets of energy means that electromagnetic waves themselves consist of elementary particles, photons, which represent those packets, or quanta. The energy of light comes in discrete portions because light consists of discrete objects.

**Wave-Particle Duality**

Here you might recall that water, and correspondingly waves in a river, consists of H2O molecules, so is it really that surprising that light waves also consist of particles? This is where things start getting a bit bizarre. The idea of light being made of particles dates back to Newton. This idea, however, was much more controversial among physicists than his theory of gravity; many stood by the wave picture of light. Unfortunately, no devices existed back then that could have tested which model was correct. Not until the early 1800s was such an experiment first carried out, by the British physicist Thomas Young, and this experiment, now known as the double-slit experiment, proved that Newton’s opponents were right. This experiment is such a big deal in quantum theory that we need to consider it in some detail.

The initial setup of the experiment is shown in figure 7 above. Here we have a coherent source of light, such as a laser beam; a plate pierced by two parallel slits; and a screen which detects the light after it has passed through the plate. The detector registers the points where the emitted light hits the screen.

We start our experiment with only one of the two slits open. If we run the experiment for some time, the resulting picture on the detector will be as shown in figure 8 below. This result is not surprising, since the light passes through only the upper slit and therefore concentrates on the region of the screen behind that slit. Similarly, if we leave the lower slit open and close the upper one, the detector will show the light concentrated in a particular region behind the lower slit.

The particle model of light predicts that if we conduct the experiment with both slits open, we will eventually get a picture in which the light concentrates in the two regions behind the slits, which is just a combination of the two pictures we got previously (with the slits opened one at a time).

The wave model of light, however, leads to a completely different prediction. If we send a wave towards the plate, it will propagate through both slits at the same time, which means that it splits into two waves: one that has passed through the upper slit and another that has passed through the lower one. These two waves then exhibit an interesting phenomenon known as *interference*. Where two maxima (crests) of the waves are superimposed at a particular point, the resulting amplitude doubles in value. Likewise, where two minima (troughs) coincide, the depth of the resulting trough doubles in value. Where a crest of one wave coincides with a trough of the other, they cancel each other out. Finally, between these extreme cases there is a full spectrum of partial amplification and partial reduction. This leads to the conclusion that the resulting picture on our detector would look like this.

The brightest regions here correspond to the points where two crests (or two troughs) are superimposed upon each other; the dark regions correspond to the points where a crest of one wave coincides with a trough of the other, leading to mutual cancellation; and the whole spectrum of partial amplification and partial reduction is given by the slightly brighter and slightly darker spots. And indeed this is what the results of Young’s experiment showed.
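The crest-on-crest and crest-on-trough cases above can be captured in a few lines. A toy sketch: for two equal-amplitude waves the standard result is that the intensity at a screen point depends on the path-length difference delta between the two slits as 4·cos²(π·delta/λ) (the 500 nm wavelength is just an illustrative choice):

```python
import math

def two_slit_intensity(path_difference: float, wavelength: float) -> float:
    """Relative intensity (0 to 4) where waves from the two slits meet with the
    given path-length difference. Equal-amplitude waves add to an amplitude of
    2*cos(pi * delta / lambda); intensity is the square of that amplitude."""
    phase = math.pi * path_difference / wavelength
    return (2 * math.cos(phase)) ** 2

wl = 500e-9  # green light, 500 nm
# Crest on crest (delta = 0 or a whole wavelength): brightest spots.
print(two_slit_intensity(0.0, wl))      # 4.0, fully constructive
print(two_slit_intensity(wl, wl))       # 4.0, constructive again
# Crest on trough (delta = half a wavelength): total cancellation.
print(two_slit_intensity(wl / 2, wl))   # ~0, a dark fringe
# Anything in between gives partial brightening or dimming.
print(two_slit_intensity(wl / 3, wl))
```

Sweeping the path difference across the screen reproduces the alternating bright and dark bands of figure 11.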

This experiment consequently confirmed the wave nature of light, which was later given a robust theoretical underpinning by Maxwell’s model.

But Einstein, who had toppled Newton’s venerable theory of gravity, now appeared to revive Newton’s corpuscular model of light. Needless to say, his new model had to explain the results of the double-slit experiment. At first glance it may seem that, just as water, composed of H2O molecules, shows its wave properties when a huge number of those molecules move together, a huge number of photons moving together would explain the resulting interference pattern. In fact, however, the microworld is much more subtle. If we turn down the intensity of our light source so that it emits *a single photon* at a time, the resulting picture will be exactly the same as shown in figure 11. The interference pattern remains even when photons are emitted one by one. This is mind-boggling. How could single photons passing through the plate one at a time eventually build up an interference pattern appropriate to wave behaviour? Intuition tells us that each photon must pass through either one slit or the other, and that the resulting picture should look like figure 9. In fact, however, this is not what happens.

As we have just seen, Einstein’s corpuscles of light differ considerably from Newton’s. Even though they are particles, they behave as waves at the same time. The fact that their energy is defined by a parameter used to describe waves, namely frequency, is the first hint of the dual nature of light, but the photoelectric effect and the double-slit experiment together puzzle us even more. The first clearly indicates that light is made of particles, whereas the second unambiguously shows its wave nature. Together they forced the physics community to conclude that light is indeed *both a particle and a wave simultaneously*. Sometimes Nature works in ways completely unfamiliar to our intuition.

**Matter Particles also Have Dual Nature**

In 1923 the French physicist Louis de Broglie suggested that matter particles should also exhibit wave characteristics. He came to this idea by continuing Einstein’s chain of reasoning with his famous formula **E = mc^2**. As we saw in the previous articles, mass and energy are interchangeable according to this formula. And as we have just seen, Planck showed that the energy of light depends upon its wavelength. Combining these two facts, de Broglie concluded that matter should also have a wavelength associated with it, and hence should manifest wave properties as well. Following this logic and considering the wave-particle duality of photons, de Broglie suggested that the constituents of matter also have a dual nature and can behave as particles and waves simultaneously. Einstein accepted this idea right from the start, since it was a logical consequence of his own contributions to both the theory of relativity and quantum physics, but it had to be confirmed experimentally before the whole physics community would accept it.

In the mid-1920s such an experiment was conducted in the laboratories of the Bell Telephone Company. It differed in details from the double-slit experiment, but the two were essentially identical, apart from the fact that the physicists used electrons instead of photons. We need not concern ourselves with the details here; what matters is that the electrons produced an interference pattern just as the photons did in the double-slit experiment. And the interference pattern, as we have seen above, is an indisputable signature of waves. Even if we decrease the intensity of the electron source so that it emits one electron at a time, we still see the resulting interference pattern. Electrons, for some reason, interfere with themselves just as photons do. This leads to an unequivocal conclusion: electrons exhibit wave characteristics as well as particle ones.

The experiment described above involved only electrons, but similar experiments show that all quantum objects (i.e. all particles) have both particle and wave characteristics. But why do we not experience these wave characteristics in everyday life? De Broglie provided a formula for the wavelength of matter particles: the wavelength equals Planck’s constant **h** divided by the particle’s momentum **p**. And since the value of **h** is extremely small, the wavelength at which matter particles exhibit their wave characteristics is so tiny that their dual nature can be detected only in experiments of very high precision.
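De Broglie's formula makes the point vivid. A sketch comparing an electron with a baseball (the masses and speeds below are illustrative choices, not figures from the article):

```python
H = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg: float, speed_ms: float) -> float:
    """De Broglie wavelength lambda = h / p, with momentum p = m * v."""
    return H / (mass_kg * speed_ms)

# An electron at one percent of light speed vs. a thrown baseball:
electron = de_broglie_wavelength(9.109e-31, 3e6)
baseball = de_broglie_wavelength(0.145, 40.0)
print(f"electron: {electron:.2e} m")  # comparable to atomic scales
print(f"baseball: {baseball:.2e} m")  # absurdly small: no visible wave effects
```

The electron's wavelength is around the size of an atom, which is why crystal lattices can diffract electrons; the baseball's is some twenty orders of magnitude smaller than an atomic nucleus, which is why baseballs never visibly interfere with themselves.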

**Waves of What?**

The resulting interference pattern in experiments with electrons clearly demonstrated that they can be described as waves. But a natural question immediately arises: waves of *what*? The first attempt to answer this question was made by the Austrian physicist Erwin Schrödinger, who suggested that these waves represent, in a sense, smeared-out electrons. This suggestion, however, had a flaw: when a water wave meets an obstacle in its path it is essentially split into two waves, but particles, including electrons, clearly come only in whole numbers (you cannot have half an electron). So when an electron wave meets such an obstacle it could in no way be split into two different waves. The resolution came in 1926 from the German physicist Max Born, who became a Nobel laureate in 1954, and his suggestion is still used by scientists. It interprets an electron wave as a *probability wave*. At those places where the *absolute value* of the wave’s amplitude is largest, the detection of the electron is *most probable* (strictly speaking, the probability is given by the square of that absolute value). The probability lessens as the amplitude decreases, so electrons are rarely found at places where the amplitude is small. Finally, the likelihood of finding an electron at places where the amplitude is zero is itself zero, so electrons never appear there. Before you conduct an experiment measuring the position of an electron, you can only determine at which point in your laboratory the electron is most likely to be detected; you can in no way know its exact position for certain. The two-dimensional analogue of the probability wave is shown in figure 12 below.

This is a rather strange idea, because we usually use probabilities when we play cards, roll dice or toss a coin. But in those situations the need for probabilities stems from our lack of knowledge about the system in question. For example, when we toss a coin we do not know its exact weight, the exact force with which we tossed it, the exact characteristics of the surrounding environment (e.g. the direction and strength of the wind), or even whether the coin is fair. Thus we have to use the mathematical rules of probability. But if we had all this knowledge and, perhaps, a sufficiently powerful computer, we could calculate the exact result, namely whether the coin will land on heads or on tails. This sort of probability therefore tells us nothing about the fundamental features of the Universe. Quantum theory, by contrast, introduces the concept of probability at a very deep level. The presence of wave properties in matter particles implies that the fundamental description of matter is inherently probabilistic. De Broglie’s formula shows that the wave characteristics of macroscopic objects are essentially undetectable, so the quantum mechanical probabilities associated with them can be completely ignored. For the microworld, however, it tells us that probability is an inherent property, and the best you can say about the location of a certain particle is the likelihood of its presence at some point.

This implies that if we conduct a certain experiment with the exact same initial conditions over and over again, we will *not* get the same result every time. Repeated experiments will give us a spread of different results, with a larger probability meaning that the electron is found more frequently at the corresponding point. If the *likelihood* of the electron being found at point A is *two times greater* than at point B, then it will be detected at A *twice as frequently* as at B. Thus quantum mechanics does not allow us to determine the result of a particular run of an experiment; but we can verify its predictions by conducting the same experiment again and again. And so far quantum theory has been the most successful of all physical theories, since its predictions match experimental results *extraordinarily* well.
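The 'twice as frequently' claim can be simulated. A toy sketch standing in for many repeated runs of a real experiment, using the article's A-twice-B example (probabilities 2/3 and 1/3; the trial count and seed are arbitrary choices):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def run_measurement() -> str:
    """One idealized position measurement: detection at A with probability 2/3,
    at B with probability 1/3 (the Born-rule probabilities of our toy state)."""
    return "A" if random.random() < 2 / 3 else "B"

trials = 100_000
counts = {"A": 0, "B": 0}
for _ in range(trials):
    counts[run_measurement()] += 1

print(counts)
print(counts["A"] / counts["B"])  # close to 2.0
```

No single run can be predicted, yet the ratio of frequencies converges on exactly what the probabilities dictate, which is precisely the sense in which quantum predictions are verified.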

Those predictions are derived from one of the most important formulae in all of physics, namely Schrödinger’s equation. This equation gives a very precise description of the behaviour of these probability waves (or, as they are now called, *wave functions*). As Roger Penrose clearly explains in his book “Shadows of the Mind”, the quantum framework can be split into two major procedures. One determines the behaviour of wave functions via Schrödinger’s equation, and it is *completely deterministic*! It is the other procedure, called *State-Vector Reduction* (you may also have seen the term *Collapse of the Wave Function*), that introduces the probabilistic aspect into the framework. State-vector reduction can be explained by the process known as *Quantum Decoherence*, but that surely deserves an article of its own; for our current purposes we can just say that this process inevitably occurs in any quantum experiment, and there is no way to avoid it. That means the probabilistic aspect is truly an inherent characteristic of the microworld.
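For the mathematically curious, the deterministic procedure is governed by the time-dependent Schrödinger equation, shown here in its standard form (not spelled out in the original article):

```latex
i\hbar \,\frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) \;=\; \hat{H}\,\Psi(\mathbf{r},t)
```

Here Ψ is the wave function and Ĥ is the Hamiltonian operator representing the system's total energy. Given the wave function at one moment, the equation fixes it at every later moment with no probability involved; chance enters only at the reduction step.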

Many would argue that such a conclusion is completely unacceptable, since physics is about *predicting* the results of experiments, not about deriving merely probable outcomes. One of those who did not accept the probabilistic point of view was Einstein. You might have seen his famous quote “God does not play dice”, which shows his reluctance to accept such indeterministic physical laws. He thought that probability appears in our physical framework for the exact same reason it appears when we toss a coin, namely as a consequence of our incomplete understanding of the principles underlying quantum theory. But experiment after experiment has shown that it was Einstein who was wrong, not Quantum Mechanics.

Nevertheless, the debate about what quantum theory actually tells us about reality has never stopped. Everybody agrees on how to use QM’s equations to obtain fantastically precise predictions, but there are many different approaches to interpreting wave functions and explaining the process of quantum decoherence. How does a particle ‘choose’ which location to appear in when an experiment is carried out? There is no agreement even on whether it chooses at all. One popular approach to Quantum Mechanics suggests that every possible outcome of an experiment is realized, each in a different universe. There are many great books in favour of each of these approaches, but we shall focus our attention on a particular one, since it will play an important role when we come to consider String Theory.

**Richard Feynman’s Path Integral Formulation**

Richard Feynman was one of the greatest physicists of the 20th century. He fully accepted the probabilistic aspect of quantum theory, but in 1948 he suggested an entirely new way of looking at QM. To get an idea of his proposal, let us consider the double-slit experiment with electrons.

The problem with interpreting the interference pattern shown in figure 11 lies in the picture drawn by our intuition. Intuition tells us that an electron must pass either through the top slit or through the bottom one, so we expect to see the result shown in figure 9. You might recall that even if our electron source generates one electron at a time, the interference pattern is still there; hence there must be something that is sensitive to both slits simultaneously and ‘checks’ whether both of them are open. Schrödinger, de Broglie, Born and other physicists described this phenomenon by the wave function associated with each electron: the wave propagates through both slits simultaneously, recombines, interferes with itself and consequently produces the interference pattern.

Feynman developed another approach. He questioned the very assumption that an electron, being a particle, must pass through only one slit. At first glance this assumption seems so fundamental that it cannot be disputed. After all, can we not simply check which slit the electron passed through after it has done so? We can, but in doing so we change the outcome of our experiment! To detect the electron after it has passed through the plate we have to illuminate it, which means bringing a photon into contact with it. While photons, being vanishingly small packets of energy, do not affect macroscopic objects in any noticeable way, they do affect the motion of electrons, which are extremely small constituents of matter; even a tiny push from a photon is enough to displace an electron and change the direction of its motion. So if we keep determining which slit each electron has passed through, the interference pattern is *destroyed*, and the resulting picture looks like the one in figure 9! The microworld *guarantees* that as soon as we find out which slit an electron has passed through, the interference pattern is lost. What this tells us is that we have no way to test the validity of that seemingly indisputable assumption.

What Feynman proposed was that each electron passes *through both slits* simultaneously, as a particle. You might think such an idea belongs in science fiction, but not so fast. Feynman postulated that not only does an electron pass through both slits, it essentially follows *every possible path simultaneously*! In this picture the electron simply goes through the top slit. *At the same time* it passes through the bottom slit. *At the same time* it makes it to your apartment, comes back to the plate and passes through the top slit. Yet *at the same time* it makes a long journey to the Andromeda galaxy, then turns back and eventually passes through the bottom slit. Beyond these, it follows an infinite number of trajectories from the initial point (the electron source) to the final destination (the detector screen). Some of these trajectories, connecting two points A and B, are shown in figure 13 below.

The mathematical details of this model are quite involved, but roughly speaking Feynman showed that each of these paths can be associated with a certain number, and the combined contribution of all those numbers yields exactly the same probability as the conventional quantum mechanical treatment with wave functions. According to Feynman, there is no need to associate a wave function with our electron: the probability of its appearance at a given point on the detector screen is determined by the overall effect of all the trajectories leading to that point. However bizarre and inadequate this model might seem, its predictions exactly match those of QM’s Copenhagen interpretation with wave functions, which, in turn, are confirmed by experiment to an extraordinary degree of precision. We must let Nature decide what is reasonable and what is not.
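To make the idea of adding up contributions a bit more concrete, here is a minimal numerical sketch of a two-path version of the sum over histories. Everything here (the geometry, the units, the parameter values) is invented purely for illustration: each of the two paths through the slits contributes one complex "number" (a phase factor), and summing the two contributions reproduces the interference fringes.

```python
import numpy as np

# Toy two-path version of Feynman's sum over histories.
# All lengths are in arbitrary units; the geometry is invented for illustration.
wavelength = 1.0
k = 2 * np.pi / wavelength   # wave number: sets how fast the phase rotates along a path
slit_separation = 5.0        # distance between the two slits
screen_distance = 100.0      # distance from the plate to the detector screen

def amplitude(y):
    """Sum one phase factor per path, from each slit to screen position y."""
    r_top = np.hypot(screen_distance, y - slit_separation / 2)
    r_bottom = np.hypot(screen_distance, y + slit_separation / 2)
    return np.exp(1j * k * r_top) + np.exp(1j * k * r_bottom)

ys = np.linspace(-30, 30, 601)
intensity = np.abs(amplitude(ys)) ** 2  # oscillates between bright and dark fringes
```

The intensity peaks where the two contributions arrive in phase and drops to nearly zero where they cancel, which is exactly the fringe pattern; keep only one term in the sum (one path) and the oscillation disappears.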

But macroscopic objects all consist of elementary particles, so why don’t they follow many paths simultaneously if this model is correct, you might ask. The answer is actually straightforward. As the path integral formulation shows, the contributions of the trajectories of sufficiently large objects mutually cancel each other, so that only one trajectory remains. And this trajectory, you guessed it, is the one that follows Newton’s laws of motion! That is why we never observe anything of the sort in the motion of baseballs, rocks, planets or whatever. But for the objects of the microworld, Feynman’s rule says that the actual behavior is often determined by the contributions of many different possible trajectories. I’d like to emphasize once again that this model and Bohr’s Copenhagen interpretation make the *exact same* predictions, and therefore both have been confirmed. The two models unambiguously support each other. Later we shall see that Feynman’s approach plays an important role in some aspects of String theory.

**What is Responsible for those Strange Events in the H-bar?**

We have familiarized ourselves with some aspects of Quantum Mechanics throughout this article. Some of them may have been too bizarre to make intuitive sense of. What I have not yet touched on is those made-up events in the H-bar with which we started. As it turns out, the explanation of their occurrence can also help us build a somewhat intuitive picture of what is going on in the microworld, even though it is no less weird itself. That explanation lies in what is known as the *uncertainty principle*, derived by the German physicist Werner Heisenberg in 1927.

If you recall, when we considered the double-slit experiment with electrons, we established that the act of determining whether an electron has passed through one of the slits inevitably influences the result of the experiment, because to do that we must impinge a photon upon the electron, which, in turn, changes the direction of the electron’s motion. But why can we not use a photon with such low energy that it would ‘touch’ our electron so gently that its influence would barely be noticeable? If you remember, by diminishing the intensity of light we do not lessen the energy of individual photons, only their number. Once we have diminished the intensity of our light source so far that it emits only one photon at a time, there is no way to make the light any more ‘gentle’ other than to turn it off completely. This is the fundamental quantum mechanical limit of ‘gentleness’.

On the other hand, we saw that we can reduce the energy of a photon by lowering its frequency. So why can’t we make our photons more ‘gentle’ by lessening their frequency (and, correspondingly, increasing their wavelength)? As it turns out, we cannot circumvent our limit this way either, and here is why. When we direct a wave onto an object, the information we obtain about that object lets us determine its location only with an inherent margin of error proportional to the wavelength. Imagine that we are trying to locate an iceberg whose surface lies just below sea level; we know it is there because the shape of the waves passing near it is changed by its presence. Before reaching it, the waves form an ordered pattern of repeating crests and troughs. After they pass above the iceberg, their form is changed; but one cycle (wavelength) of a wave is a single unit in their sequence, hence it sets the maximum accuracy with which we can locate the iceberg. Similarly, a photon represents one cycle of an electromagnetic wave, and its wavelength is the limit of accuracy in our attempts to locate the electron.

In this sense, there is a kind of quantum mechanical compensation: for the accuracy of our measurement of one parameter we pay with an inevitable inaccuracy in the measurement of the other. We can determine the position of an electron very precisely by using a high-frequency (short-wavelength) photon, but such a photon carries so much energy that it introduces a high uncertainty into the measurement of the electron’s velocity. Conversely, if we use a low-frequency (long-wavelength) photon, we can determine the electron’s velocity with high accuracy, but at the cost of a high uncertainty in its position.

Heisenberg expressed this in a mathematical inequality which tells us that these two accuracies (of defining position and of defining velocity) are inversely related, meaning that pinning down one of them inevitably brings a high uncertainty to the other. What’s important here is that this relation holds for *any* experiment, even though we have shown it only for the double-slit one. And instead of electrons we could use any other particle as well. This is where quantum physics differs so sharply from classical physics. According to Newton, Einstein and other classical physicists, the state of a particle is described by its position and velocity, but QM tells us that these parameters cannot both have definite values at the same time.
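We can actually check this relation numerically. The sketch below builds a Gaussian wave packet (the state that saturates Heisenberg’s bound), measures its position spread directly and its momentum spread via a Fourier transform, and confirms that their product comes out at ħ/2. The grid size and packet width are invented for illustration, and the units are made up so that ħ = 1.

```python
import numpy as np

# Numerical check of the uncertainty relation for a Gaussian wave packet,
# in made-up units where hbar = 1. Grid and packet width chosen for illustration.
hbar = 1.0
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.3  # chosen width of the packet
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize total probability to 1

# Position spread from |psi|^2 (mean position is 0 by symmetry)
prob_x = np.abs(psi)**2 * dx
delta_x = np.sqrt(np.sum(x**2 * prob_x))

# Momentum-space wave function via the Fourier transform
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
dp = p[1] - p[0]
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar)
prob_p = np.abs(phi)**2 * dp
delta_p = np.sqrt(np.sum(p**2 * prob_p))

product = delta_x * delta_p  # comes out close to hbar / 2 = 0.5
```

Squeeze the packet (smaller `sigma`) and `delta_x` shrinks while `delta_p` grows by exactly the compensating factor; the product never drops below ħ/2.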

Einstein tried to downplay this departure from classical physics, stating that even though QM puts a limit on our knowledge of these parameters, real particles *do have* a definite position and velocity. But the progress in both theoretical physics and technology in the second half of the 20th century, and particularly the experimental data obtained by the French physicist Alain Aspect, clearly showed that Einstein was wrong. Heisenberg’s uncertainty principle truly is an inherent property of the microworld. If you were to put a particle into a box and start moving the walls of the box closer together, the particle’s velocity would fluctuate ever more dramatically. Now recall the events in the H-bar, where we imagined the value of Planck’s constant **ħ** increased so that the strange behavior of the microworld became noticeable at everyday scales: the ice cubes in your glass were moving very fast, but in Jim’s glass, which was narrower than yours (the analogue of our box squeezed even more), they were moving so fast that you could not even make out their form. Now you know the reason behind this!
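The squeezed-box picture can be put into numbers directly from the uncertainty relation: a particle confined to a region of width Δx must have a momentum spread of at least ħ/(2Δx), hence a minimum velocity spread of ħ/(2mΔx). A small sketch, with the particle mass and box widths chosen purely for illustration:

```python
# Minimum velocity spread of a particle confined to a box, from
# delta_x * delta_p >= hbar / 2. Masses and widths are illustrative only.
HBAR = 1.054571817e-34  # Planck's constant over 2*pi, in J*s

def min_velocity_spread(mass_kg, box_width_m):
    """Smallest possible velocity spread for a particle confined to box_width."""
    return HBAR / (2 * mass_kg * box_width_m)

electron_mass = 9.109e-31  # kg
wide = min_velocity_spread(electron_mass, 1e-9)     # electron in a 1 nm box
narrow = min_velocity_spread(electron_mass, 1e-10)  # box squeezed 10x narrower
# `narrow` is exactly 10 times `wide`: squeeze the box and the particle speeds up
```

Even in the 1 nm box the electron’s minimum speed spread is tens of kilometers per second, and each squeeze of the walls raises it in proportion, just like Jim’s narrower glass.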

The uncertainty principle lies at the basis of yet another remarkable effect of the microworld, one which helps our Sun produce light: *quantum tunneling*. If you shoot a bullet at a concrete wall the result is pretty straightforward: the bullet hits the wall, bounces back some distance and falls to the ground. The reason is simply that our bullet does not have enough energy to break through the wall. But if we descend to the level of the quantum world, each particle composing the bullet has a tiny probability of making it through. How can this be? According to Heisenberg, the uncertainty principle connects not only position and momentum but other pairs of parameters as well, one of them being *energy and time*. The accuracy of your measurement of a particle’s energy is limited by the time taken to perform the measurement. According to QM you cannot claim that a particle has a precisely defined amount of energy at a precise instant of time. For a precise measurement of a particle’s energy you have to pay: the experiment must take a noticeable amount of time. Conversely, the energy of a particle can fluctuate significantly over a very short interval. What this tells us is that a particle can borrow enough energy to break through a wall, provided it returns that energy very quickly.

The mathematical apparatus of quantum theory shows that the more energy a particle needs to borrow, the lower the probability of that happening. But even when the energy barrier is quite high, particles sometimes do borrow that energy and pass through a solid object, something completely impossible from the point of view of classical physics. When we consider macro-objects consisting of countless particles, the probability of quantum tunneling persists but becomes infinitesimally small, since *all* the particles composing the object have to tunnel through the wall simultaneously. Weird events such as the disappearance of your cigar, the ice cube passing through the glass and your exit through the door depicted on the wall, however, *might* happen in the real world. If you smashed into a concrete wall every second, hoping to get through to the other side, you would have to wait longer than the Universe has existed for the opportunity to arise! But with infinite patience (and a life expectancy to match), you would eventually make it through.
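To get a feel for how sharply the probability falls off, here is the standard textbook estimate for tunneling through a thick rectangular barrier, T ≈ exp(−2d·√(2m(V−E))/ħ), where d is the barrier width and V−E is the energy deficit. The particular masses, barrier heights and widths below are made up for illustration:

```python
import math

HBAR = 1.054571817e-34   # Planck's constant over 2*pi, in J*s
EV = 1.602176634e-19     # one electronvolt in joules

def tunneling_probability(mass_kg, energy_deficit_j, width_m):
    """Thick-barrier estimate T ~ exp(-2*kappa*d), kappa = sqrt(2m(V-E))/hbar."""
    kappa = math.sqrt(2 * mass_kg * energy_deficit_j) / HBAR
    return math.exp(-2 * kappa * width_m)

# An electron facing a barrier 1 eV above its energy, 1 nm thick:
t_electron = tunneling_probability(9.109e-31, 1.0 * EV, 1e-9)

# A 10-gram "bullet" facing a barrier a mere 1 joule too high, 1 mm thick:
t_bullet = tunneling_probability(0.01, 1.0, 1e-3)  # underflows to exactly 0.0
```

The electron gets through roughly once in every few tens of thousands of attempts, which is routine on atomic scales; for the bullet the exponent is so enormous that the probability underflows to zero in floating point, which is the numerical echo of why you never walk through walls.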

Next time we shall consider in a bit more detail the inconsistency between Einstein’s General Relativity and Quantum Mechanics and then we will eventually get to the main topic of this series, String Theory. I thank everybody who has made it this far and has read the article entirely, and I hope to see you all next time.

I thoroughly enjoyed that Aleksei. Explaining QM without the maths is a difficult task. As Mermin said, “Shut up and calculate!” and in QM this might be very wise advice if you wish to work with QM and still retain your sanity! When reading long, non-mathematical explanations I have a tendency to say the opposite, “Shut up and give me an equation!” but I didn’t think this once here. Looking forward to the next part!

Thank you so much for your support Peter! It’s always a pleasure to read your comments!

Thanks, Aleksei! Now I have something to read on the plane!
