# Tag Archives: Quantum Mechanics

## Whence we know the photon exists

In my previous post, I laid out the argument for why the photoelectric effect does not imply the existence of photons. In this post, I want to outline, not the first, but probably the conceptually simplest experiment showing that photons do indeed exist. It was performed by Grangier, Roger and Aspect in 1986, and the paper can be found at this link (PDF!).

The idea can be described by considering the following simple experiment. Imagine light impinging on a 50/50 beamsplitter with detectors at both of the output ports, as pictured below. In this configuration, 50% of the light will be transmitted, labeled t below, and 50% of the light will be reflected, labeled r below.

Now, if a discrete and indivisible packet of light, i.e. a photon, is shone on the beam splitter, then it must either be reflected (and hit D1) or be transmitted (and hit D2). The detectors are forbidden from clicking in coincidence. However, there is one particularly tricky thing about this experiment. How do I ensure that I only fire a single photon at the beam splitter?
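To make this logic concrete, here is a toy Monte Carlo (purely illustrative: ideal, lossless detectors and no dark counts, nothing taken from the actual experiment) contrasting indivisible photons with classical pulses at a 50/50 beamsplitter. The figure of merit is the anticorrelation parameter $\alpha = P_{12}/(P_1 P_2)$, which is zero for ideal single photons but at least one for any classical field:

```python
import random

def run_trials(n_trials, single_photon, p_detect=1.0, seed=0):
    """Count clicks at D1, D2 and coincidences for n_trials pulses."""
    rng = random.Random(seed)
    c1 = c2 = c12 = 0
    for _ in range(n_trials):
        if single_photon:
            # An indivisible photon exits one port or the other, never both.
            d1 = rng.random() < 0.5
            d2 = not d1
        else:
            # A classical pulse splits: each detector fires independently
            # with the probability set by its half of the intensity.
            d1 = rng.random() < 0.5 * p_detect
            d2 = rng.random() < 0.5 * p_detect
        c1 += d1
        c2 += d2
        c12 += d1 and d2
    return c1, c2, c12

n = 100_000
for label, single in [("single photons", True), ("classical pulses", False)]:
    c1, c2, c12 = run_trials(n, single)
    # Anticorrelation parameter alpha = P12 / (P1 * P2):
    # 0 for ideal single photons, >= 1 for any classical field.
    alpha = (c12 / n) / ((c1 / n) * (c2 / n))
    print(f"{label}: alpha = {alpha:.3f}")
```

The single-photon runs never produce a coincidence, while the classical runs give $\alpha \approx 1$; Grangier et al. measured $\alpha$ well below one, which is the smoking gun for the photon.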

This is where Aspect, Roger and Grangier provide us with a rather ingenious solution. They used a two-photon cascade from a calcium atom to solve the issue. For the purpose of this post, one only needs to know that when a photon excites the calcium atom to an excited state, it emits two photons as it relaxes back down to the ground state. This is because it relaxes first to an intermediate state and then to the ground state. This process is so fast that the photons are essentially emitted simultaneously on experimental timescales.

Now, because the calcium atom relaxes in this way, the first photon can be used to trigger the detectors to turn them on, and the second photon can impinge on the beam splitter to determine whether there are coincidences among the detectors. A schematic of the experimental arrangement is shown below (image taken from here; careful, it’s a large PDF file!):

Famously, they were essentially able to extrapolate their results and show that the photons are perfectly anti-correlated, i.e. that when a photon reflects off of the beam splitter, there is no transmitted photon and vice versa. Behold: the photon!

However, they did not stop there. To show that quantum mechanical superposition applies to single photons, they sent these single photons through a Mach-Zehnder interferometer (depicted schematically below, image taken from here).

They were able to show that single photons do indeed interfere. The fringes were observed with visibility of about 98%. A truly triumphant experiment that showed not only the existence of photons cleanly, but that their properties are non-classical and can be described by quantum mechanics!
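A one-photon Mach-Zehnder interferometer is also easy to model: the photon amplitude in the two arms is a two-component vector acted on by beamsplitter and phase matrices. Here is a minimal sketch (the beamsplitter phase convention below is a standard choice of mine, not taken from the paper):

```python
import numpy as np

# One photon in two modes: a 50/50 beamsplitter and a relative phase phi.
# The convention (factor i on the reflected amplitude) is a choice.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def detection_probs(phi):
    """Detection probabilities at the two output ports of the Mach-Zehnder."""
    phase = np.diag([np.exp(1j * phi), 1.0])
    amp_out = BS @ phase @ BS @ np.array([1.0, 0.0])
    return np.abs(amp_out) ** 2

# Sweep the phase: the two ports show complementary fringes,
# sin^2(phi/2) and cos^2(phi/2) in this convention.
for phi in np.linspace(0, np.pi, 5):
    print(round(phi, 3), detection_probs(phi).round(3))
```

An ideal interferometer like this has fringe visibility $(P_{max}-P_{min})/(P_{max}+P_{min}) = 1$; the measured 98% reflects experimental imperfections.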

## The photoelectric effect does not imply photons

When I first learned quantum mechanics, I was told that we knew that the photon existed because of Einstein’s explanation of the photoelectric effect in 1905. As the frequency of light impinging on the cathode material was increased, electrons came out with higher kinetic energies. This led to Einstein’s famous formula:

$K.E. = \hbar\omega - W.F.$

where $K.E.$ is the kinetic energy of the outgoing electron, $\hbar\omega$ is the photon energy and $W.F.$ is the material-dependent work function.
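As a quick numerical illustration of this formula (the 250 nm wavelength and 2.3 eV work function below are round, illustrative values, not data from any particular experiment):

```python
# Quick numerical illustration of Einstein's relation; the wavelength and
# work function used below are round, illustrative values.
h_eV_s = 4.135667696e-15   # Planck constant in eV*s
c_m_s = 2.99792458e8       # speed of light in m/s

def max_kinetic_energy_eV(wavelength_m, work_function_eV):
    """K.E. = h*nu - W.F., clamped to zero below the threshold frequency."""
    photon_energy = h_eV_s * c_m_s / wavelength_m
    return max(photon_energy - work_function_eV, 0.0)

print(round(max_kinetic_energy_eV(250e-9, 2.3), 2))   # 250 nm photon: 2.66 eV
print(max_kinetic_energy_eV(600e-9, 2.3))             # below threshold: 0.0
```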

Since the mid-1960s, however, we have known that the photoelectric effect does not definitively imply the existence of photons. From the photoelectric effect alone, it is actually ambiguous whether it is the electronic levels or the impinging radiation that should be quantized!

So, why do we still give the photon explanation to undergraduates? To be perfectly honest, I’m not sure whether we do this because of some sort of intellectual inertia or because many physicists don’t actually know that the photoelectric effect can be explained without invoking photons. It is worth noting that Willis E. Lamb, who played a large role in the development of quantum electrodynamics, implored other physicists to be more cautious when using the word photon (see for instance his 1995 article entitled Anti-Photon, which gives an interesting history of the photon nomenclature and his thoughts as to why we should be wary of its usage).

Let’s return to 1905, when Einstein came up with his explanation of the photoelectric effect. Just five years prior, Planck had heuristically explained the blackbody radiation spectrum and, in the process, evaded the ultraviolet catastrophe that plagued explanations based on the classical equipartition theorem. Planck’s distribution consequently provided the first evidence of “packets of light”, with energy quantized in units of $\hbar\omega$. At the time, Bohr had yet to come up with his atomic model that suggested that electron levels were quantized, which had to wait until 1913. Thus, from Einstein’s vantage point in 1905, he made the most reasonable assumption at the time: that it was the radiation that was quantized and not the electronic levels.

Today, however, we have the benefit of hindsight.

According to Lamb’s aforementioned Anti-Photon article, in 1926, G. Wentzel and G. Beck showed that one could use a semi-classical theory (i.e. electronic energy levels are quantized, but light is treated classically) to reproduce Einstein’s result. In the mid- to late 1960s, Lamb and Scully extended the original treatment and made a point of emphasizing that one could obtain the photoelectric effect without invoking photons. The main idea can be sketched if you’re familiar with the Fermi golden rule treatment of a harmonic electric field perturbation of the form $\textbf{E}(t) = \textbf{E}_0 e^{-i \omega t}$, where $\omega$ is the frequency of the incoming (classical) light. In the dipole approximation, we can write the perturbing potential as $V(t) = -e\textbf{x}\cdot\textbf{E}(t)$ and we get that the transition rate is:

$R_{i \rightarrow f} = \frac{1}{t} \frac{1}{\hbar^2}|\langle f|e\textbf{x}\cdot\textbf{E}_0|i \rangle|^2 \left[\frac{\sin((\omega_{fi}-\omega)t/2)}{(\omega_{fi}-\omega)/2}\right]^2$

Here, $\hbar\omega_{fi} = (E_f - E_i)$ is the difference in energies between the initial and final states. Now, there are a couple of things to note about the above expression. Firstly, the term in brackets (containing the sinusoidal function) is sharply peaked when $\omega_{fi} \approx \omega$. This means that when the incoming light is resonant with a transition from the ground state to a higher energy level, the transition rate sharply increases.
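You can check this resonance behavior directly. Here is a small sketch of the bracketed line-shape factor (with $\omega_{fi}$ and $t$ in arbitrary units): it peaks at $\omega = \omega_{fi}$, and the peak grows as $t^2$ while narrowing, which is what turns into the energy-conserving delta function at long times:

```python
import numpy as np

# Sketch of the bracketed line-shape factor in the transition rate,
# [sin((w_fi - w)t/2) / ((w_fi - w)/2)]^2, in arbitrary units.
def line_factor(omega, omega_fi, t):
    delta = (omega_fi - omega) / 2.0
    safe = np.where(np.abs(delta) < 1e-12, 1.0, delta)  # avoid 0/0 at resonance
    return np.where(np.abs(delta) < 1e-12, t ** 2, (np.sin(safe * t) / safe) ** 2)

omega_fi = 1.0
omegas = np.linspace(0.0, 2.0, 2001)
for t in (10.0, 50.0):
    vals = line_factor(omegas, omega_fi, t)
    # The factor peaks at resonance, and the peak height is t^2, so the
    # line sharpens and grows the longer the perturbation acts.
    print(t, omegas[np.argmax(vals)], vals.max())
```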

Let us now interpret this expression with regard to the photoelectric effect. In this case, there exists a continuum of final states which are of the form $\langle x|f\rangle \sim e^{i\textbf{k}\cdot\textbf{r}}$, and as long as $\hbar\omega > W.F.$, where $W.F.$ is the work function of the material, we recover $\hbar\omega = W.F. + K.E.$, where $K.E.$ represents the energy given to the electron in excess of the work function. Thus, we recover Einstein’s formula from above!

In addition to this, however, we also see from the above expression that the current on the photodetector is proportional to $\textbf{E}^2_0$, i.e. the intensity of light impinging on the cathode. Therefore, this semi-classical treatment improves upon Einstein’s treatment in the sense that the relation between the intensity and current also naturally falls out.

From this reasoning, we see that the photoelectric effect does not logically imply the existence of photons.

We do have many examples showing that non-classical light exists and that quantum fluctuations of light play a significant role in experimental observations. Some examples are photon anti-bunching, spontaneous emission and the Lamb shift. However, I do agree with Lamb and Scully that the notion of a photon is indeed a challenging one and that caution is needed!

A couple further interesting reads on this subject at a non-technical level can be found here: The Concept of the Photon in Physics Today by Scully and Sargent and The Concept of the Photon – Revisited in OPN Trends by Muthukrishnan, Scully and Zubairy (pdf!)

## Jahn-Teller Distortion and Symmetry Breaking

The Jahn-Teller effect occurs in molecular as well as solid state systems, where a molecular complex distorts to a lower symmetry. As a consequence, the energy of certain occupied molecular states is reduced. Let me first describe the phenomenon before giving you a little cartoon of the effect.

First, consider, just as an example, a manganese atom with valence $3d^4$, surrounded by an octahedral cage of oxygen atoms like so (image taken from this thesis):

The electrons are arranged such that the lower triplet of orbital states each contain a single “up-spin”, while the higher doublet of orbitals only contains a single “up-spin”, as shown on the image to the left. This scenario is ripe for a Jahn-Teller distortion, because the electronic energy can be lowered by splitting both the doublet and the triplet as shown on the image on the right.

There is a very simple, but quite elegant, problem one can solve to describe this phenomenon at a cartoon level. This is the problem of a two-dimensional square well with adjustable walls. By solving the Schrodinger equation, one finds that the energy levels of the two-dimensional infinite well are of the form:

$E_{i,j} = \frac{h^2}{8m}(i^2/a^2 + j^2/b^2)$                where $i,j$ are positive integers.

Here, $a$ and $b$ denote the lengths of the sides of the 2D well. Since it is only the quantity in the brackets that determines the energy levels, let me define the anisotropy parameter $\gamma = a/b$ and, holding the area $ab$ of the well fixed, write the energy dependence in the following way:

$E \sim i^2/\gamma + \gamma j^2$

Note that $\gamma$ is effectively an anisotropy parameter, giving a measure of the “squareness of the well”. Now, let’s consider filling up the levels with spinless electrons that obey the Pauli principle. These electrons will fill up in a “one-per-level” fashion in accordance with the fermionic statistics. We can therefore write the total energy of the $N-$fermion problem as so:

$E_{tot} \sim \alpha^2/ \gamma + \gamma \beta^2$

where $\alpha^2 = \sum_n i_n^2$ and $\beta^2 = \sum_n j_n^2$ are sums over the quantum numbers of the $N$ occupied levels.

Now, all of this has been pretty simple so far, and all that’s really been done is to re-write the 2D well problem in a different way. However, let’s just systematically look at what happens when we fill up the levels. At first, we fill up the $E_{1,1}$ level, where $\alpha^2 = \beta^2 = 1$. In this case, setting the derivative of $E_{1,1}$ with respect to $\gamma$ to zero gives $\gamma_{min} = 1$ and the well is a square.

For two electrons, however, the well is no longer a square! The next electron will fill up the $E_{2,1}$ level and the total energy will therefore be:

$E_{tot} \sim 1/\gamma (1+4) + \gamma (1+1)$,

which gives a $\gamma_{min} = \sqrt{5/2}$!

Why did this breaking of square symmetry occur? In fact, this is very closely related to the Jahn-Teller effect. Since the level is two-fold degenerate (i.e. $E_{2,1} = E_{1,2}$), it is favorable for the 2D well to distort to lower its electronic energy.

Notice that when we add the third electron, we get that:

$E_{tot} \sim 1/\gamma (1+4+1) + \gamma (1+1+4)$

and $\gamma_{min} = 1$ again, and we return to the system with square symmetry! This is also quite similar to the Jahn-Teller problem, where, when all the states of the degenerate levels are filled up, there is no longer an energy to be gained from the symmetry-broken geometry.
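These fillings are easy to check directly. Here is a short sketch using the filling order described in the text, together with the stationarity condition $dE_{tot}/d\gamma = -A/\gamma^2 + B = 0$, i.e. $\gamma_{min} = \sqrt{A/B}$ with $A = \sum i^2$ and $B = \sum j^2$:

```python
import math

# Occupied (i, j) quantum numbers for N = 1, 2, 3 spinless fermions,
# following the filling order used in the text.
fillings = {
    1: [(1, 1)],
    2: [(1, 1), (2, 1)],
    3: [(1, 1), (2, 1), (1, 2)],
}

def gamma_min(levels):
    """Minimize E_tot(gamma) = A/gamma + B*gamma over gamma.

    Setting dE/dgamma = -A/gamma^2 + B = 0 gives gamma_min = sqrt(A/B),
    with A = sum(i^2) and B = sum(j^2) over the occupied levels.
    """
    A = sum(i * i for i, j in levels)
    B = sum(j * j for i, j in levels)
    return math.sqrt(A / B)

for n, levels in fillings.items():
    print(n, gamma_min(levels))
# N = 1 and N = 3 give gamma_min = 1 (square well); N = 2, with its
# half-filled degenerate doublet, gives gamma_min = sqrt(5/2) (distorted).
```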

This analogy is made more complete when looking at the following level scheme for different $d$-electron valence configurations, shown below (image taken from here).

The black configurations are Jahn-Teller active (i.e. prone to distortions of the oxygen octahedra), while the red are not.

In condensed matter physics, we usually think about spontaneous symmetry breaking in the context of the thermodynamic limit. What saves us here, though, is that the well will actually oscillate between the two rectangular configurations (i.e. horizontal vs. vertical), preserving the original symmetry! This is analogous to the case of the ammonia ($NH_3$) molecule I discussed in this post.

## Wannier-Stark Ladder, Wavefunction Localization and Bloch Oscillations

Most people who study solid state physics are told at some point that in a totally pure sample where there is no scattering, one should observe an AC response to a DC electric field, with oscillations at the Bloch frequency ($\omega_B$). These are the so-called Bloch oscillations, which were predicted by C. Zener in this paper.

However, the actual observation of Bloch oscillations is not as simple as the textbooks would make it seem. There is an excellent Physics Today article by E. Mendez and G. Bastard that outlines some of the challenges associated with observing Bloch oscillations (which was written while this paper was being published!). Since textbook treatments often use semi-classical equations of motion to demonstrate the existence of Bloch oscillations in a periodic potential, they implicitly assume transport of an electron wave-packet. Generating this wave-packet in a solid is non-trivial.
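To see what the textbook semi-classical picture actually predicts, here is a minimal sketch of wave-packet motion in a one-dimensional tight-binding band, in units where $\hbar = e = a = 1$ (all parameter values illustrative):

```python
import numpy as np

# Semi-classical wave-packet motion in a 1D tight-binding band,
# units with hbar = e = lattice constant a = 1 (values illustrative).
J = 1.0       # hopping (bandwidth = 4J)
E = 0.2       # DC electric field
omega_B = E   # Bloch frequency e*E*a/hbar in these units

dt = 1e-3
t = np.arange(0, 3 * 2 * np.pi / omega_B, dt)  # three Bloch periods
k = -E * t                     # hbar dk/dt = -eE, starting from k = 0
v = 2 * J * np.sin(k)          # group velocity d(eps)/dk for eps(k) = -2J cos k
x = np.cumsum(v) * dt          # real-space trajectory

# The packet oscillates instead of accelerating indefinitely:
# peak-to-peak excursion 4J/E, period 2*pi/omega_B.
print("peak-to-peak:", x.max() - x.min(), " expected:", 4 * J / E)
```

The DC field thus produces an AC response: the packet traverses the Brillouin zone, Bragg-reflects, and retraces its path with period $2\pi/\omega_B$.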

In fact, if one undertakes a full quantum mechanical treatment of electrons in a periodic potential under the influence of an electric field, one arrives at the Wannier-Stark ladder, which shows that an electric field can localize electrons! It is this ladder and the corresponding localization which was key to observing Bloch oscillations in semiconductor superlattices.

Let me use the two-well potential to give you a picture of how this localization might occur. Imagine symmetric potential wells, where the lowest energy eigenstates look like so (where S and A label the symmetric and anti-symmetric states):

Now, imagine that I start to make the wells a little asymmetric. What happens in this case? Well, it turns out that the electrons start to localize in the following way (for the formerly symmetric and anti-symmetric states):

G. Wannier was able to solve the Schrodinger equation with an applied electric field in a periodic potential in full and showed that the eigenstates of the problem form a Stark ladder. This means that the eigenstates are of identical functional form from quantum well to quantum well (unlike in the double-well shown above) and the energies of the eigenstates are spaced apart by $\Delta E=\hbar \omega_B$! The potential is shown schematically below. It is also shown that as the potential wells slant more and more (i.e. with larger electric fields), the wavefunctions become more localized (the image is taken from here (pdf!)):
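The ladder itself is easy to reproduce in a toy model: a tight-binding chain with hopping $J$ and a linear on-site potential $Fn$, where $F = eEa$ (the parameters below are illustrative, not taken from the referenced document):

```python
import numpy as np

# Tight-binding chain in a linear potential, units hbar = e = a = 1:
# H[n, n] = F*n (field term), H[n, n +/- 1] = -J (hopping). Values illustrative.
N, J, F = 60, 1.0, 0.5
H = (np.diag(F * np.arange(N))
     + np.diag(-J * np.ones(N - 1), 1)
     + np.diag(-J * np.ones(N - 1), -1))
energies = np.linalg.eigvalsh(H)

# Away from the chain edges, the spectrum is an equally spaced ladder
# with spacing F = e*E*a = hbar*omega_B.
spacings = np.diff(energies)[20:40]
print(spacings.round(6))
```

The bulk eigenvalues come out spaced by exactly $F = \hbar\omega_B$, and the corresponding eigenstates are identical copies of one another shifted from well to well, just as Wannier found.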

A nice numerical solution from the same document shows the wavefunctions for a periodic potential well profile with a strong electric field, exhibiting a strong wavefunction localization. Notice that the wavefunctions are of identical form from well to well.

What can be seen in this solution is that the stationary states are split by $\hbar \omega_B$, but much like the quantum harmonic oscillator (where the levels are split by $\hbar \omega$), nothing actually oscillates until one has a wavepacket (i.e. a linear superposition of eigenstates). Therefore, Bloch oscillations cannot be observed in the ground state (which includes the applied electric field) of a semiconducting superlattice, since it is an insulator! One must first generate a wavepacket in the solid.

In the landmark paper that finally announced the existence of Bloch oscillations, Waschke et al. generated a wavepacket in a GaAs-GaAlAs superlattice using a laser pulse. The pulse was incident on a sample with an applied electric field along the superlattice direction, and they were able to observe radiation emitted from the sample due to the Bloch oscillations. I should mention that superlattices must be used to observe the Wannier-Stark ladder and Bloch oscillations because $\omega_B$, which scales with the superlattice period, needs to be large enough that the electrons don’t scatter from impurities and phonons before completing an oscillation. Here is the famous plot from the aforementioned paper showing that the frequency of the emitted radiation from the Bloch oscillations can be tuned using an electric field:

This is a pretty remarkable experiment, one of those which took 60 years from its first proposal to finally be observed.

## Consistency in the Hierarchy

When writing on this blog, I try to share nuggets here and there of phenomena, experiments, sociological observations and other peoples’ opinions I find illuminating. Unfortunately, this format can leave readers wanting when it comes to some sort of coherent message. Precisely because of this, I would like to revisit a few blog posts I’ve written in the past and highlight the common vein running through them.

Condensed matter physicists of the last couple generations have grown up with the ingrained idea that “More is Different”, a concept first coherently put forth by P. W. Anderson and carried further by others. Most discussions of these ideas tend to concentrate on the notion that there is a hierarchy of disciplines where each discipline is not logically dependent on the one beneath it. For instance, in solid state physics, we do not need to start out at the level of quarks and build up from there to obtain many properties of matter. More profoundly, one can observe phenomena which distinctly arise in the context of condensed matter physics, such as superconductivity, the quantum Hall effect and ferromagnetism, that one wouldn’t necessarily predict by just studying particle physics.

While I have no objection to these claims (and actually agree with them quite strongly), it seems to me that one simple (almost trivial) fact is infrequently mentioned when these concepts are discussed. That is the role of consistency.

While it is true that one does not necessarily require the lower level theory to describe the theories at the higher level, these theories do need to be consistent with each other. This is why, after the publication of BCS theory, there were a slew of theoretical papers that tried to come to terms with various aspects of the theory (such as the approximation of particle number non-conservation and features associated with gauge invariance (pdf!)).

This requirement of consistency is what makes concepts like the Bohr-van Leeuwen theorem and Gibbs paradox so important. They bridge two levels of the “More is Different” hierarchy, exposing inconsistencies between the higher level theory (classical mechanics) and the lower level (the micro realm).

In the case of the Bohr-van Leeuwen theorem, it shows that classical mechanics, when applied to the microscopic scale, is not consistent with the observation of ferromagnetism. In the Gibbs paradox case, classical mechanics, when not taking into consideration particle indistinguishability (a quantum mechanical concept), is inconsistent with the idea that the entropy must remain the same when a gas tank is divided into two equal partitions.

Today, we have the issue that ideas from the micro realm (quantum mechanics) appear to be inconsistent with our ideas on the macroscopic scale. This is why matter interference experiments are still carried out in the present time. It is imperative to know why it is possible for a C60 molecule (or a 10,000 amu molecule) to be described with a single wavefunction in a Schrodinger-like scheme, whereas this seems implausible for, say, a cat. There does again appear to be some inconsistency here, though there are frameworks, like decoherence, that attempt to get around it (with no consensus so far). I also can’t help but mention that non-locality, à la Bell, also seems totally at odds with one’s intuition on the macro-scale.

What I want to stress is that the inconsistency theorems (or paradoxes) contained seeds of some of the most important theoretical advances in physics. This is itself not a radical concept, but it often gets neglected when a generation grows up with a deep-rooted “More is Different” scientific outlook. We sometimes forget to look for concepts that bridge disparate levels of the hierarchy and subsequently look for inconsistencies between them.

## Kapitza-Dirac Effect

We are all familiar with the fact that light can diffract from two (or multiple) slits in a Young-type experiment. After the advent of quantum mechanics and de Broglie’s wave description of matter, it was shown by Davisson and Germer that electrons could be diffracted by a crystal. In 1933, P. Kapitza and P. Dirac proposed that it should in principle be possible for electrons to be diffracted by standing waves of light, in effect using light as a diffraction grating.

In this scheme, the electrons would interact with light through the ponderomotive potential. If you’re not familiar with the ponderomotive potential, you wouldn’t be the only one — this is something I was totally ignorant of until reading about the Kapitza-Dirac effect. In 1995, Anton Zeilinger and co-workers were able to demonstrate the Kapitza-Dirac effect with atoms, obtaining a beautiful diffraction pattern in the process which you can take a look at in this paper. It probably took so long for this effect to be observed because it required the use of high-powered lasers.

Later, in 2001, this experiment was pushed a little further and an electron beam was used to demonstrate the effect (as opposed to atoms), as Kapitza and Dirac originally proposed. Indeed, a diffraction pattern was again observed. The article is linked here and I reproduce the main result below:

(Top) The interference pattern observed in the presence of a standing light wave. (Bottom) The profile of the electron beam in the absence of the light wave.

Even though this experiment is conceptually quite simple, these basic quantum phenomena still manage to elicit awe (at least from me!).

## Bohr-van Leeuwen Theorem and Micro/Macro Disconnect

A couple weeks ago, I wrote a post about the Gibbs paradox and how, if particle indistinguishability is not taken into account, one is led to some bizarre consequences on the macroscopic scale. In particular, it suggested that entropy should increase when partitioning a monatomic gas into two volumes. This paradox therefore contained within it the seeds of quantum mechanics (through particle indistinguishability), unbeknownst to Gibbs and his contemporaries.

Another historic case where a logical disconnect between the micro- and macroscale arose was in the context of the Bohr-van Leeuwen theorem. Colloquially, the theorem says that magnetism of any form (ferro-, dia-, paramagnetism, etc.) cannot exist within the realm of classical mechanics in equilibrium. It is quite easy to prove actually, so I’ll quickly sketch the main ideas. Firstly, the Hamiltonian with any electromagnetic field can be written in the form:

$H = \sum_i \left[\frac{1}{2m_i}(\textbf{p}_i - e\textbf{A}_i)^2 + U_i(\textbf{r}_i)\right]$

Now, because the classical partition function is of the form:

$Z \propto \int_{-\infty}^\infty d^3\textbf{r}_1...d^3\textbf{r}_N\int_{-\infty}^\infty d^3\textbf{p}_1...d^3\textbf{p}_N e^{-\beta\sum_i \left[\frac{1}{2m_i}(\textbf{p}_i - e\textbf{A}_i)^2 + U_i(\textbf{r}_i)\right]}$

we can just make the substitution:

$\textbf{p}'_i = \textbf{p}_i - e\textbf{A}_i$

without having to change the limits of the integral. Therefore, with this substitution, the partition function ends up looking like one without the presence of the vector potential (i.e. the partition function is independent of the vector potential and therefore cannot exhibit any magnetism!).
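The substitution argument can be checked numerically in the simplest possible setting: a single particle in one dimension, where we evaluate the momentum part of the partition function for several values of the vector potential (units and values purely illustrative):

```python
import numpy as np

# Single classical particle in 1D, units with m = e = beta = 1 (illustrative).
# The momentum integral in Z is shift-invariant, so it cannot depend on A.
def z_momentum(A, p_max=50.0, n=200_001):
    """Numerically evaluate integral dp exp(-(p - A)^2 / 2)."""
    p, step = np.linspace(-p_max, p_max, n, retstep=True)
    return np.exp(-0.5 * (p - A) ** 2).sum() * step

for A in (0.0, 1.0, 5.0):
    print(A, z_momentum(A))   # same value, sqrt(2*pi), for every A
```

The integral is identical for every $A$, which is the whole content of the theorem: the free energy is independent of the vector potential, so the equilibrium magnetization vanishes.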

This theorem suggests, like in the Gibbs paradox case, that there is a logical inconsistency when one tries to apply macroscale physics (classical mechanics) to the microscale and attempts to build up from there (by applying statistical mechanics). The impressive thing about this kind of reasoning is that it requires little experimental input but nonetheless exhibits far-reaching consequences regarding a prevailing paradigm (in this case, classical mechanics).

Since the quantum mechanical revolution, it seems like we have the opposite problem, however. Quantum mechanics resolves both the Gibbs paradox and the Bohr-van Leeuwen theorem, but presents us with issues when we try to apply the microscale ideas to the macroscale!

What I mean is that while quantum mechanics is the rule of law on the microscale, we arrive at problems like the Schrodinger cat when we try to apply such reasoning on the macroscale. Furthermore, the non-local correlations highlighted by Bell’s theorem seem to disappear when we look at the world on the macroscale. One wonders whether such ideas, similar to the Gibbs paradox and the Bohr-van Leeuwen theorem, are subtle precursors suggesting where the limits of quantum mechanics may actually lie.