## Coherent phonons: mechanisms, phases and symmetry

In the past few decades, pulsed laser sources have become increasingly common in condensed matter physics and physical chemistry labs. A typical use of these sources is a so-called “pump-probe” experiment. In these investigations, an initial laser “pump” pulse excites a sample and a subsequent “probe” pulse then monitors how the sample relaxes back to equilibrium. By varying the time delay between the initial “pump” pulse and the subsequent “probe” pulse, the relaxation can be tracked as a function of time. One of the first remarkable observations seen with this technique is what I show in the figure below — oscillations of a solid’s reflectivity after short-pulse photoexcitation (although many other experimental observables can also exhibit these kinds of oscillations). These oscillations arise from the excitation of a vibrational lattice mode (i.e. a phonon).

Why is this observation interesting? After all, isn’t this just the excitation of a vibrational mode which can also be excited thermally or in a scattering experiment? In some sense yes, but what makes this different is that the excited phonon is coherent. This means, unlike in the context of a scattering experiment, the atomic motion is phase-locked across the entire photo-excited area; the atoms move back and forth in perfect synchrony. This is why the oscillations show up in the measured observables — the macroscopic lattice is literally wobbling with the symmetry and frequency of the normal mode. (In a scattering experiment, by contrast, the incident particles, be they electrons, neutrons or photons, are continuously shone onto the sample. Therefore, the normal modes are excited, but at different times. These different excitation times result in varying phases of the normal mode oscillations, and the coherent oscillations thus wash out on average.)

There are many different ideas on how to use these coherent oscillations to probe various properties of solids and for more exotic purposes. For example, in this paper, the Shen group from Stanford showed that by tracking the oscillations in an X-ray diffraction peak (from which the length scale of atomic movements can be obtained) and the same vibrational mode oscillations in a photoemission spectrum (from which the change in energy of a certain band can be obtained), one can get a good estimate of the electron-phonon coupling strength (at least for a particular normal mode and band). In this paper from the Ropers group at Göttingen, on the other hand, an initial pulse is used to melt an ordered state through the large-amplitude excitation of a coherent mode. A subsequent pulse then excites the same mode out-of-phase, leading to a “revival” of the ordered state.

When oscillations of these optical phonon modes first started appearing in the literature in the late 1980s and early 90s, there was a lot of debate about how they were generated. The first clue was that only Raman-active modes showed up; infrared-active oscillations could not be observed (in materials with inversion symmetry). While the subsequent proposed Raman-based mechanisms could explain almost all observations, there were certain modes, like the one in Bismuth depicted in the figure above, that did not conform to the Raman-type excitation scheme. A new theory was put forward suggesting that this mode (and other similar ones) were excited through a so-called “displacive” mechanism.

One distinction between the two generation mechanisms is that the Raman-type theory predicted a sine-like oscillation, whereas the displacive-type theory predicted a cosine-like oscillation (i.e. there was a distinction in terms of the phase). Another prediction of the “displacive” theory was that only totally symmetric modes could be excited in this way (see image below). In the image depicting the oscillations in Bismuth above, an arrow in the inset points to an energy where a vibrational excitation is seen with spontaneous Raman spectroscopy but is not present in the pump-probe experiment. The only visible vibrational mode is the totally symmetric one, consistent with the displacive excitation theory.

In this post, I’m going to go through some toy model-like ideas behind the two generation mechanisms and briefly go over the symmetry arguments that allow their excitation. In particular, I’ll explain the difference between the sine and cosine-like oscillations and also why the “displacive” mechanism can only excite totally symmetric modes.

Impulsive stimulated Raman scattering (ISRS) is the rather intimidating name given to the Raman-type generation mechanism. Let’s just briefly try to understand what the words mean. Impulsive refers to the width of the light pulse, $\Delta t$, being significantly shorter than the phonon period, $T$. In this limit, the light pulse acts almost like a delta function in time, i.e. like an impulse function. Now, the word stimulated, in contrast to “spontaneous”, means that because the frequency difference between two photons in the light pulse can match a phonon frequency ($\omega_1 - \omega_2 = \omega_{phonon}$), one of the photons can stimulate the excitation of a phonon (see image below). By contrast, in a spontaneous Raman scattering process, a monochromatized continuous wave beam with a narrow frequency width is shone upon a sample. In this case, two photons cannot achieve the $\omega_1 - \omega_2 = \omega_{phonon}$ condition. The difference between these two processes can be described pictorially in the following way:
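The “impulsive” condition can be checked with a quick back-of-the-envelope calculation: a transform-limited Gaussian pulse of duration $\Delta t$ has a spectral bandwidth of roughly $0.44/\Delta t$, so photon pairs within a single short pulse can differ in frequency by more than a typical phonon frequency. The numbers below (a 50 fs pulse and a ~2.9 THz phonon, roughly the $A_{1g}$ mode of bismuth) are illustrative assumptions, not values from the text:

```python
# Sketch: check the "impulsive" condition for ISRS.
# A transform-limited Gaussian pulse of FWHM duration dt has spectral
# bandwidth ~0.44/dt (time-bandwidth product), so two photons within one
# pulse can satisfy w1 - w2 = w_phonon. Illustrative numbers only.

dt = 50e-15             # pulse duration (s), assumed
f_phonon = 2.9e12       # phonon frequency (Hz), roughly bismuth's A1g mode

bandwidth = 0.44 / dt   # pulse bandwidth (Hz)
T_phonon = 1 / f_phonon # phonon period (s)

print(f"pulse bandwidth: {bandwidth / 1e12:.1f} THz")  # ~8.8 THz > 2.9 THz
print(f"phonon period:   {T_phonon * 1e15:.0f} fs")    # ~345 fs >> 50 fs
print("impulsive limit satisfied:", dt < T_phonon)
```

Both conditions hold: the pulse is much shorter than the phonon period (impulsive) and its bandwidth spans the phonon frequency (stimulated Raman pairs are available).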

Now that we have a picture of how this process works, let us return to our first question: why does the Raman-type generation process result in a sine-like time dependence? Consider the following equation of motion, which describes a damped harmonic oscillator subject to an external force that is applied over an extremely short timescale (i.e. a delta function):

$\ddot{Q} + 2\gamma\dot{Q} + \omega_0^2 Q = g\delta(t)$

where $Q$ represents the normal mode coordinate of a particular lattice vibration, $\gamma$ is a phenomenological damping constant and $g$ characterizes the strength of the delta function perturbation. We can solve this equation using a Fourier transform and contour integration (though there may be simpler ways!) to yield:

$Q(t) = \Theta(t)\, g\, e^{-\gamma t}\left\{\frac{\textrm{sin}\left(\sqrt{\omega_0^2 - \gamma^2}\,t\right)}{\sqrt{\omega_0^2 - \gamma^2}}\right\}$

Below is a qualitative schematic of this function:

Seeing the solution to this equation should demonstrate why a short pulse perturbation would give rise to a sine-like oscillation.
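Another way to see this: a delta-function force is equivalent to starting the oscillator at rest at the origin with an initial velocity $\dot{Q}(0^+) = g$. The short numerical sketch below (with arbitrary illustrative parameters) integrates the equation of motion and compares it against the damped-sine solution:

```python
import numpy as np

# Sketch: a delta kick on a damped oscillator is equivalent to the initial
# conditions Q(0) = 0, Qdot(0) = g. Integrate the homogeneous equation
# Qddot + 2*gamma*Qdot + w0^2*Q = 0 and compare with the analytic
# damped-sine solution. Parameter values are arbitrary illustrations.

w0, gamma, g = 2 * np.pi, 0.5, 1.0   # natural frequency, damping, kick strength
w = np.sqrt(w0**2 - gamma**2)        # damped oscillation frequency

dt = 1e-4
t = np.arange(0, 5, dt)

Q, Qdot = np.zeros_like(t), np.zeros_like(t)
Qdot[0] = g                          # the delta kick sets the initial velocity
for i in range(len(t) - 1):          # semi-implicit Euler integration
    Qdot[i + 1] = Qdot[i] + dt * (-2 * gamma * Qdot[i] - w0**2 * Q[i])
    Q[i + 1] = Q[i] + dt * Qdot[i + 1]

Q_analytic = g * np.exp(-gamma * t) * np.sin(w * t) / w
print("max deviation from damped sine:", np.abs(Q - Q_analytic).max())
```

The numerical trajectory starts at zero and rises as a sine, matching the analytic form: the oscillation begins with maximum velocity, not maximum displacement.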

So the question then becomes: how can you get something other than a sine-like function upon short-pulse photoexcitation? There are a couple of ways, but this is where the displacive theory comes in. The displacive excitation of coherent phonons (DECP) mechanism (another intimidating mouthful of a term) requires absorption of the photons from the laser pulse, in contrast to the Raman-based mechanism which does not. Said another way, one can observe coherent phonons in a transparent crystal with a visible light laser pulse only through the Raman-based mechanism; the displacive excitation of coherent phonons is not observed in that case.

What this tells us is that the displacive mechanism depends on the excitation of electrons to higher energy levels, i.e. the redistribution of the electronic density after photoexcitation. Because electrons are so much lighter than the nuclei, the electrons can come to equilibrium among themselves long before the nuclei can react. As a concrete example, one can imagine exciting silicon with a short laser pulse with photon energies greater than the band gap. In this case, the electrons will quickly relax to the conduction band minimum within 10s of femtoseconds, yielding an electronic density that is different from the equilibrium density. It will then take some nanoseconds before the electrons from the conduction band edge recombine with the holes at the valence band maximum. Nuclei, on the other hand, are only capable of moving on the 100s of femtoseconds timescale. Thus, they end up feeling a different electrostatic environment after the initial change in the electronic density which, at least in the case of silicon, appears almost instantaneously and lasts for nanoseconds.

What I am trying to say in words is that the driving force due to the redistribution of electronic density is more appropriately modeled as a Heaviside step function rather than a delta function. So we can write down the following equation with a force that has a step function-like time dependence:

$\ddot{Q} + 2\gamma\dot{Q} + \omega_0^2 Q = \kappa\Delta n\Theta(t)$

where $\Delta n$ is the change in the electronic density after photoexcitation and $\kappa$ is a constant that linearly relates the change in density to the electrostatic force on the normal mode. Now, in reality, $\Delta n$ can have a more complicated profile than the Heaviside step function we are using here. For example, it could be a step function times an exponential decay. But the results are qualitatively similar in both cases, so I just chose the simplest mathematical form to illustrate the concept (the Heaviside step function).

In this case, we can solve this differential equation for $t>0$ by making the substitution $Q' = Q - \kappa\Delta n/\omega_0^2$. We then get the following simple equation:

$\ddot{Q'} + 2\gamma\dot{Q'} + \omega_0^2 Q' = 0$

The solution to this equation gives both sine and cosine terms, but because the light pulse does not change the velocity of the nuclei, we can use the initial condition that $\dot{Q}(0) = 0$. Our second initial condition is that $Q(0) = 0$ because the nuclei don’t move from their positions initially. But because the normal mode equilibrium position has shifted (or been “displaced”), this results in an oscillation. (An analogous situation would be a vertically hanging mass on a spring in a zero gravity environment suddenly being brought into a gravitational environment. The mass would start oscillating about its new “displaced” equilibrium position.) Quantitatively, for small damping $\gamma/\omega_0 \ll 1$, we get for $t>0$:

$Q(t) = -\frac{\kappa\Delta n}{\omega_0^2} e^{-\gamma t}\textrm{cos}\left(\sqrt{\omega_0^2 - \gamma^2}\,t\right) + \frac{\kappa\Delta n}{\omega_0^2}$

which this time exhibits a cosine-like oscillation like in the schematic depicted below:
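This step response can also be verified numerically. The sketch below (arbitrary illustrative parameters) integrates the oscillator with a step-function force and checks that the motion is a cosine about the new, displaced equilibrium $\kappa\Delta n/\omega_0^2$:

```python
import numpy as np

# Sketch of the displacive (DECP) response: integrate
# Qddot + 2*gamma*Qdot + w0^2*Q = kdn * Theta(t) starting from rest and
# compare with the small-damping cosine solution. Illustrative parameters.

w0, gamma = 2 * np.pi, 0.3
kdn = 1.0                                # kappa * Delta n, the step force

dt = 1e-4
t = np.arange(0, 5, dt)

Q, Qdot = np.zeros_like(t), np.zeros_like(t)  # nuclei start at rest: Q(0)=Qdot(0)=0
for i in range(len(t) - 1):              # semi-implicit Euler integration
    Qdot[i + 1] = Qdot[i] + dt * (kdn - 2 * gamma * Qdot[i] - w0**2 * Q[i])
    Q[i + 1] = Q[i] + dt * Qdot[i + 1]

Q_eq = kdn / w0**2                       # the displaced equilibrium position
w = np.sqrt(w0**2 - gamma**2)
# small-damping form; the exact solution carries an extra term of
# relative size gamma/w, neglected here
Q_analytic = Q_eq * (1 - np.exp(-gamma * t) * np.cos(w * t))
print("displaced equilibrium:", Q_eq)
print("max deviation from cosine form:", np.abs(Q - Q_analytic).max())
```

The trajectory starts flat (zero initial displacement and velocity) and then rings as a cosine around $Q_{eq}$, just like the mass-on-a-spring analogy above.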

Now that the difference in terms of phase between the two mechanisms is hopefully clear, let’s talk about the symmetry of the modes that can be excited. Because the frequency of light used in these experiments is much higher than the typical phonon frequency, the incident light is not going to be resonant with a phonon. So in this limit ($\omega_{photon} \gg \omega_{phonon}$), infrared active phonons won’t absorb light, and light will instead scatter from Raman active modes.

From a symmetry vantage point, which modes will be excited is determined by the following equation:

$\ddot{Q}^{(\Gamma_i)} + 2\gamma\dot{Q}^{(\Gamma_i)} + \omega_0^2 Q^{(\Gamma_i)} = \sum_{\Gamma_j}F^{(\Gamma_j)}\delta_{\Gamma_i,\Gamma_j}$

where $\Gamma_i$ labels the symmetry of the mode (or in the language of group theory, the irreducible representation of a particular vibrational mode) and $\delta_{\Gamma_i,\Gamma_j}$ is the Kronecker delta. As I explained in a previous post in a cartoonish manner, when the symmetry of the force matches the symmetry of a mode, that mode can be excited. This is enforced mathematically above by the Kronecker delta. Any force can be decomposed into the basis of the normal modes and if the force is non-zero for the normal mode in question, that normal mode can be excited. For the displacive mechanism, this rule immediately suggests that only the totally symmetric mode can be excited. Because the electrons quickly thermalize among themselves before the nuclei can react, and thermalization leads to a symmetric charge distribution, the driving force will be invariant under all crystallographic symmetry operations. Thus the force can only excite totally symmetric modes. In a slightly awkward language, we can write:

$\ddot{Q}^{(tot. symm.)} + 2\gamma\dot{Q}^{(tot. symm.)} + \omega_0^2 Q^{(tot. symm.)} = \kappa\Delta n$

For Raman active modes, the symmetry rules get a little more cumbersome. We can write an expression for the force in terms of the Raman polarizability tensor:

$\ddot{Q} + 2\gamma\dot{Q} + \omega_0^2 Q = R_{\mu \nu} E^{(i)}_\mu(t) E^{(s)}_\nu(t)$

where $E^{(i)}_\mu (E^{(s)}_\nu)$ is the incident (scattered) electric field vector with polarization in the direction of $\mu (\nu)$ and $R_{\mu \nu}$ is the Raman polarizability tensor. This Raman polarizability tensor is determined by the symmetry of the vibrational mode in question and can be looked up in various group theoretical textbooks (or by figuring it out). Choosing the polarization of the incident light pulse will determine whether the force term on the right-hand side will be non-zero. Ultimately, the force term is constrained by the symmetry of the vibration and the incident and scattered light polarizations.
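To make the tensor contraction concrete, here is a small sketch. The tensors below are textbook-form examples (an $A_{1g}$ mode and one $E_g$ component of a tetragonal crystal, chosen purely for illustration); contracting them with different polarization vectors shows which geometries yield a non-zero driving force:

```python
import numpy as np

# Sketch: the Raman driving force goes as e_i^T R e_s. Contract example
# Raman tensors (textbook forms, illustrative only) with different light
# polarizations to see which combinations give a non-zero force.

R_A1g = np.diag([1.0, 1.0, 2.0])      # totally symmetric (A1g) mode
R_Eg = np.array([[0.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0]])    # one E_g component

x, y, z = np.eye(3)                   # linear polarization unit vectors

def force(R, e_i, e_s):
    """Driving force ~ e_i . R . e_s for given incident/scattered polarizations."""
    return e_i @ R @ e_s

print(force(R_A1g, x, x))  # non-zero: A1g driven with parallel polarizations
print(force(R_Eg, x, x))   # zero: this E_g component is silent for x/x
print(force(R_Eg, y, z))   # non-zero: crossed y/z polarizations drive it
```

This is the sense in which the polarization of the pulse “selects” which Raman-active modes are launched.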

Although this post has now gone on way too long, I hope it helps those who are using laser pulses in their own labs get a start on a topic that is rather difficult to piece together from the existing literature. Please feel free to comment below if there’s anything I can make clearer in the post or expand on in a future post.

Summary of key differences between the two generation mechanisms:

Impulsive stimulated Raman scattering (ISRS):

1. Observed in both opaque and transparent crystals
2. Away from resonance, oscillations are usually observed with a sine-like phase
3. Only Raman-active modes are observed
4. Light’s electric field is the “driving force” of oscillations

Displacive excitation of coherent phonons (DECP):

1. Observed only if material is opaque (at the frequency of the incoming light)
2. Oscillations are observed with cosine-like phase
3. Only totally symmetric modes can be excited
4. Change in electronic density, and thus a new electrostatic environment, is the “driving force” of oscillations

## No longer naked: IR selection rules and greenhouse gases

In the previous post, I walked through the simple “naked planet” model to calculate the average temperature of planets. This resulted in an excellent approximation to the average temperature of Mercury and was a little off when it came to the earth. What I didn’t say was how horrendously this model performs for Venus. In that case, our model predicts a temperature of 232 K while the observed temperature is 735 K. Put more terrestrially, we predicted that the temperature would be like a Siberian winter (-40 degrees C (or F)), whereas the real temperature is hot enough to melt lead. Why is the model so far off in this case?

Simply stated, we failed to take into consideration the atmospheric greenhouse effect. Mercury happens to lack a substantial atmosphere, so the greenhouse effect doesn’t play a role. It turns out that Venus’ atmosphere is a hot, dense gaseous stew of carbon dioxide (~96% $CO_2$) and other gases (including sulfuric acid ($H_2SO_4$)!) — the greenhouse effect is extremely dramatic there. However, on Earth, the greenhouse effect leads to a comparatively small warming of the surface, but this small warming effect is decisive for the biology of the planet.

So, what is the greenhouse effect, anyway? Essentially, it is a one-way light (and thus heat) absorber. Visible light from the sun passes through the atmosphere and heats up the surface of the earth. Because Earth is much cooler than the sun, the blackbody radiation emitted from the earth is in the infrared part of the spectrum. So while the atmosphere is almost transparent to visible light from the sun, it absorbs the infrared light from the earth. Energy thus gets in, but has trouble leaving, resulting in a heating effect.

There are a lot of questions that one can ask about the greenhouse effect, and in this post I’ll be addressing a couple of them: (1) What makes a gas a greenhouse gas? and (2) Can we model the greenhouse effect to get an idea about how it warms the earth?

In our atmosphere, there are many gases — nitrogen (~78% $N_2$), oxygen (~21% $O_2$), argon (~1% $Ar$), and trace amounts of other gases (~0.04% $CO_2$ and <1% water vapor). So why is it that we are so concerned with trace amounts of $CO_2$ (and water vapor)? Why are these greenhouse gases? Is there a way to predict whether a gas will be a greenhouse gas?

Greenhouse gases consist of molecules that possess “infrared active” vibration modes that absorb light at wavelengths that the earth emits. Infrared active modes vibrate asymmetrically, and these asymmetric vibrations are the only ones capable of absorbing a significant amount of light. (A couple posts ago, I described the symmetry principles describing how infrared active vibrations interact with light’s electric field.) Gases like $N_2$ and $O_2$ only possess symmetric vibration modes and are thus not capable of absorbing light (at least to leading order, i.e. in the electric dipole approximation). Now let’s take a look at the vibration modes of the $CO_2$ molecule to see why it is critical to the climate.

$CO_2$ possesses four modes of vibration, as shown in the image above. Of these four, three of them are infrared active, because they are inversion asymmetric vibrations (again, see this post if that doesn’t make sense.) Most important are the bending vibrations, because their frequencies are close to where most of the light is emitted from earth. Below is a simulation (using this applet) showing the emission spectrum from Earth as viewed 70 km from the earth’s surface. On the left hand side, a few greenhouse gases are included in the atmosphere, but no $CO_2$. On the right, 0.042% (or 420 ppm) of $CO_2$ is added to the atmosphere. (Also included in the plot are the blackbody spectra at various temperatures.) As you can see, the effect is very dramatic — a lot of light is now absorbed by the bending vibration of the $CO_2$ molecule at 666 $cm^{-1}$. About half of this absorbed light will be re-radiated back down towards the earth. Water vapor and methane also have dramatic effects on the spectrum, which is why much of the radiation is absorbed above 1400 $cm^{-1}$. (Below about 500 $cm^{-1}$, it is mostly the water molecule’s rotational degrees of freedom that are responsible for absorption.)
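The reason the bending mode matters so much can be checked with Wien’s displacement law in wavenumber form: maximizing the Planck radiance per unit wavenumber, $B(\tilde{\nu}) \propto \tilde{\nu}^3/(e^{hc\tilde{\nu}/kT}-1)$, gives $hc\tilde{\nu}_{max}/kT \approx 2.8214$. A quick sketch:

```python
# Sketch: where does Earth's thermal emission peak in wavenumbers?
# Maximizing the Planck radiance per unit wavenumber gives
# h*c*nu_max / (k*T) = 2.8214..., which for 288 K lands near the CO2
# bending band.

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck_peak_wavenumber(T):
    """Peak of the Planck spectrum per unit wavenumber, in cm^-1."""
    return 2.8214 * k * T / (h * c) / 100.0   # /100 converts m^-1 to cm^-1

print(f"{planck_peak_wavenumber(288):.0f} cm^-1")  # ~565 cm^-1
```

At 288 K, the peak sits around 565 $cm^{-1}$, right next to the $CO_2$ bend at 666 $cm^{-1}$ — the molecule absorbs close to where the planet emits most strongly.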

These plots suggest that the presence or absence of trace amounts of $CO_2$ (and water vapor) can potentially have quite a dramatic effect on the earth’s climate. To see whether this is the case, it helps to see this in a simple model; we thus need to update the naked planet model from the previous post and dress it up. One way to do this is to consider a medium surrounding the earth that transmits visible light and absorbs infrared light — something like in the image below (adapted from here). In this crude model, the atmosphere absorbs all the infrared light being emitted from the earth and then re-emits it isotropically.

The calculation proceeds in a similar way to the one from the previous post. Each tier is in a steady state — the power coming in must equal the power going out. Above the atmosphere, this means that the incoming power from the sun must equal the power leaving the atmosphere:

$P_{sun} = L(1-A)\pi R_E^2 = P_{a} = \sigma T_a^4 \cdot (4\pi R_E^2)$

Because this equation is identical to the one solved in the previous post, this gives $T_a$ = 255 K. Now, we can solve for the earth’s surface temperature by considering the second tier. Here, the incoming power from the sun and the atmosphere must be balanced by the outgoing power from the earth’s surface:

$P_{sun} + P_{a} = L(1-A)\pi R_E^2 + \sigma T_a^4 \cdot (4\pi R_E^2) = P_{s} = \sigma T_s^4 \cdot (4\pi R_E^2)$

Having already solved for $T_a$ from the previous equation, $T_s$ can be obtained. $T_s = 2^{1/4} T_a$ = 303 K. Remember that the naked Earth model yielded a temperature of 255 K. So this atmosphere, which in our model absorbs all outgoing light, increases the temperature of the earth by roughly 48 K. Now, the actual temperature of the earth’s surface is 288 K, so this model overestimates the greenhouse effect quite a bit. But considering how crude this model was, in that it absorbs all outgoing radiation, it’s not surprising that this model inflates the actual temperature.
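The two-tier balance takes only a few lines to evaluate. The sketch below uses the standard solar constant of ~1361 W/m² (an assumed value, not quoted in the post) and Earth’s albedo of 0.3:

```python
# Sketch of the single-layer greenhouse model: steady state at each tier.
# Top tier:    L*(1-A)*pi*R^2 = sigma*T_a^4 * 4*pi*R^2  (R^2 cancels)
# Bottom tier: adds the atmosphere's downward emission, giving T_s = 2^(1/4)*T_a.
# Solar constant is the standard value, assumed here.

sigma = 5.67e-8    # Stefan-Boltzmann constant (W m^-2 K^-4)
L = 1361.0         # solar constant at Earth's orbit (W m^-2), assumed
A = 0.3            # Earth's albedo

T_a = (L * (1 - A) / (4 * sigma)) ** 0.25   # atmosphere temperature
T_s = 2 ** 0.25 * T_a                        # surface temperature

print(f"T_a = {T_a:.0f} K")   # ~255 K
print(f"T_s = {T_s:.0f} K")   # ~303 K
```

The $R_E^2$ factors cancel in both balances, which is why neither temperature depends on the size of the planet.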

Nonetheless, I find that this model gives a good intuition about how the greenhouse effect works. It captures the essential physics of what is going on and makes us realize that these changes we are observing in our climate actually arise from some very simple and basic fundamental principles.

Note: This model does not work very well for Venus because its atmosphere is not a single thin layer. It is a dense and thick medium. It would be more appropriate to model it as many layers stacked atop one another.

## Naked planets

Today, there is no greater global challenge than climate change — an existential threat to most lifeforms on the planet, including to us humans. Thus, while our day-to-day work is important, it probably helps to understand and think about the basics behind the warming of the earth.

In this post, I want to give a general overview of the main scientific idea behind the temperature of most planets. Rather than focusing on data, which has been presented in various places, in this post, I’ll be focusing on the simple concept that the sun heats the planets up. We can actually get a fairly good estimate of the average temperature of some planets by considering only this effect. I’ll do this for Mercury first and then the Earth to illustrate.

To start, we treat the sun as an idealized blackbody with a surface temperature of roughly 5800 K. Using the Stefan-Boltzmann law, we can figure out the total power emitted by the sun:

$P = \sigma T^4 S$,

where $\sigma = 5.67\times 10^{-8} W/K^4m^2$ and $S$ is the surface area over which radiation is emitted (i.e. $S = 4 \pi R^2$, where $R$ is the radius of the sun). Since blackbodies emit in a totally isotropic fashion, we can calculate the power per unit area, $L$, at Mercury’s distance from the sun by dividing the power by $4\pi R_{s-to-M}^2$, where $R_{s-to-M}$ is the distance from the sun to Mercury. It helps to appeal to the image below (adapted from here) to see where this expression comes from.

To obtain an expression for the total power received by Mercury, we need to multiply the power per unit area times the area of Mercury that receives the light. Because only the part of Mercury that faces the sun receives the light, we would need to integrate the power over the exposed part of Mercury’s surface. However, there is a simpler way to do this calculation and that is to consider the shadow behind Mercury (see the image below, which is taken from here.)

By considering this image, we can see that the area intercepting the light is the area of the circle that casts the shadow. The total power absorbed by Mercury will then be:

$P_{M-abs} = L (1-A) \pi R_{Mer}^2$,

where $R_{Mer}$ is the radius of Mercury and $A$ is the “albedo” which quantifies how much light is reflected from the surface of the planet. For instance, some of the sun’s rays incident on Earth will be reflected by clouds, ice covering land or ocean, aerosols and other reflective objects. For Mercury, the albedo is roughly 0.1, meaning that 10% of the light is reflected back into space, and is not absorbed, and thus has no heating effect on the planet.

The last idea we have to use is that of the steady state. In the steady state, the power absorbed by Mercury is the same as that emitted, so that the planet stabilizes at some temperature. (Away from the steady-state, the planet would heat up or cool with time). To figure out how much power is emitted, we treat Mercury as a blackbody, so that the power emitted is:

$P_{M-emit} = \sigma T^4 S_{Mer}$,

where $S_{Mer} = 4 \pi R_{Mer}^2$. Note here that even though only the part of Mercury that faces the sun absorbs light, the whole surface area of Mercury emits radiation (in line with the definition of a blackbody). Now equating the power absorbed to the power emitted, we can solve for the temperature of Mercury. By plugging in the numbers, I get that Mercury’s temperature is 437.7 K (~165 C). The observed temperature of Mercury, by the way, is roughly 440 K — only a couple of degrees off!

Were we to perform the same calculation for Earth, we’d get a frigid ~256 K for the Earth’s temperature. However, in this case, the observed average temperature is ~288 K.

Considering the crudeness of this naked (meaning no atmosphere) planet model, it performs quite well for certain planets. However, the obvious question that arises is: Why is the discrepancy for the Earth greater than that for Mercury? This is where the atmosphere comes in (and the greenhouse effect…). I want to say that I’ll address greenhouse gases in a subsequent post, but my track record for these kinds of promises is not something I’m particularly proud of (see here and here for instance.)

For those of you who would like to carry out the calculation yourselves, I used the following numbers:

Radius of the sun, $R = 696.3$ million meters

Sun-to-Mercury distance, $R_{s-to-M} = 58$ billion meters

Radius of Mercury, $R_{Mer} = 2.4$ million meters

Sun-to-Earth distance, $R_{s-to-E} = 150$ billion meters

Radius of Earth, $R_{Ear} = 6.38$ million meters

Earth’s albedo, $A_{Ear} = 0.3$
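Using these numbers, the whole calculation fits in a few lines. A minimal sketch (sun treated as a 5800 K blackbody, as in the post):

```python
# Sketch of the naked-planet calculation using the numbers listed above.
# Energy balance: L*(1-A)*pi*r^2 = sigma*T^4*4*pi*r^2, where L is the sun's
# power per unit area at the planet's orbit.

sigma = 5.67e-8      # Stefan-Boltzmann constant (W m^-2 K^-4)
T_sun = 5800.0       # sun's surface temperature (K)
R_sun = 696.3e6      # radius of the sun (m)

def naked_planet_T(d, albedo):
    """Steady-state temperature of an airless planet at orbital distance d (m)."""
    L = sigma * T_sun**4 * R_sun**2 / d**2   # incident power per unit area
    return (L * (1 - albedo) / (4 * sigma)) ** 0.25

T_mercury = naked_planet_T(d=58e9, albedo=0.1)
T_earth = naked_planet_T(d=150e9, albedo=0.3)
print(f"Mercury: {T_mercury:.1f} K")   # ~437.7 K
print(f"Earth:   {T_earth:.1f} K")     # ~256 K
```

Note that the planet’s own radius cancels out of the balance, so only the orbital distance and albedo matter.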

## Decoherence does not imply wavefunction “collapse”

While most physicists will agree with the statement in the title, some (especially those not working in quantum information) occasionally do get confused about this point. But there is a very beautiful set of experiments demonstrating the idea of “false decoherence”, which explicitly show that entangling a particle with the environment does not necessarily induce “wavefunction collapse”. One can simply “disentangle” a particle from its environment and interference phenomena (or coherence) can be recovered. Sometimes, these experiments fall under the heading of quantum eraser experiments, but false decoherence happens so often that most of the time it isn’t even noticed!

To my mind, the most elegant of these experiments was first performed by Zou, Wang and Mandel in 1991. The original article is a little opaque in describing the experiment, but, luckily, it is described very accessibly in an exceptional Physics Today article by Greenberger, Horne and Zeilinger that was written in 1993.

As a starting point, it is worth reviewing the Mach-Zehnder interferometer. An image of this interferometer is below (taken from here):

Quickly, laser light from the left is incident on a 50/50 beam splitter. A photon incident on the beam splitter thus has a probability amplitude of going through either the upper path or the lower path. Along each path, the photon hits a mirror and is then recombined at a second 50/50 beam splitter. The surprising thing about this experimental arrangement is that if the two path lengths are identical and the laser light is incident as shown in the image above, the photon will emerge at detector D1 with 100% probability. Now, if we add a phase shifter, $\phi$, to our interferometer (either by inserting a piece of glass or by varying the path length of, say, the upper path), the photon will have a non-zero probability of emerging at detector D2. As the path length is varied, the probability oscillates between D1 and D2, exhibiting interference. In a previous post, I referred to the beautiful interference pattern from the Mach-Zehnder interferometer with single photons, taken by Aspect, Grangier and Roger in 1986. Below is the interference pattern:

Now that the physics of the Mach-Zehnder interferometer is hopefully clear, let us move onto the variant of the experiment performed by Zou, Wang and Mandel. In their experiment, after the first beam splitter, they inserted a pair of non-linear crystals into each branch of the interferometer (see image below). These non-linear crystals serve to “split” the photon into two photons. For instance, a green incident photon may split into a red and yellow photon where $\omega_{green} = \omega_{red} +\omega_{yellow}$ and energy is conserved. (For those that like big words, this process is referred to as spontaneous parametric down-conversion). Now, what we can do is to form a Mach-Zehnder interferometer with the yellow photons. (Let’s ignore the red photons for now). The \$64k question is: will we observe interference like in the original Mach-Zehnder interferometer? Think about this for a second before you continue reading. Why would or why wouldn’t you expect interference of the yellow photons? Do the non-linear crystals even make a difference?

It turns out that the yellow photons will not interfere. Why not? Because the red photons provide “which-path information”. If one were to put a detector at O in the figure or below the dichroic mirror D3, we would be able to detect the red photon, which would tell us which path the yellow photon took! So we know that if there is a red photon at O (D3), the yellow photon would have taken the upper (lower) path. We can no longer have single-particle interference when the object to be interfered is entangled with another object (or an environment) that can yield which-path information. Mathematically, interference can be observed when an object is in a superposition state:

$\left|\psi\right\rangle = 1/\sqrt{2} ( \left|U\right\rangle + e^{i\phi}\left|L\right\rangle )$

But single-particle interference of the yellow photon cannot be observed when it is entangled with a red photon that yields which-path information (though two-particle interference effects can be observed — we’ll save that topic for another day):

$\left|\psi\right\rangle = 1/\sqrt{2} ( \left|Y_U\right\rangle \left|R_U\right\rangle + e^{i\phi} \left|Y_L\right\rangle \left|R_L\right\rangle )$

In this experiment, it is helpful to think of the red photon as the “environment”. In this interpretation, this experiment provides us with a very beautiful example of decoherence, where entangling a photon with the “environment” (red photon), disallows the observation of an interference pattern. I should mention that a detector is not required after D3 or at O for the superposition of the yellow photons to be disallowed. As long as it is possible, in principle, to detect the red photon that would yield “which-path information”, interference will not be observed.

Now comes the most spectacular part of this experiment, which is what makes this Zou, Wang, Mandel experiment notable. In their experiment, Zou, Wang and Mandel overlap the two red beams (spatially and temporally), such that it becomes impossible to tell which path a red photon takes. To do this, the experimenters needed to ensure that the time it takes for a photon to go from BS1 (through a and d) to NL2 is identical to the time from BS1 (through b) to NL2. This guarantees that were one to measure the red photon after D3, it would not be possible to tell whether the red photon was generated in NL1 or NL2.

So the question then arises again: if we overlap the two red beams in this way, can we observe interference of the yellow photons at BS2 now that the “which-path information” has been erased? The answer is yes! Mathematically, what we do by overlapping two red beams is to make them indistinguishable:

$\left|\psi\right\rangle = 1/\sqrt{2} ( \left|Y_U\right\rangle \left|R_U\right\rangle + e^{i\phi} \left|Y_L\right\rangle \left|R_L\right\rangle )$

$\rightarrow 1/\sqrt{2} ( \left|Y_U\right\rangle + e^{i\phi} \left|Y_L\right\rangle ) \left|R\right\rangle$

Here, the yellow and red photons are effectively decoupled or disentangled, so that single-particle superposition is recovered! Note that by engineering the “environment” so that “which-path information” is destroyed, coherence returns! Also note that just by inserting an opaque object O in the path d, we can destroy the interference of the yellow beams, which aren’t even touched in the experiment!
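The loss and recovery of the fringes can be captured in a two-line model: after tracing out the red photon, the yellow photon’s fringe visibility equals the overlap $|\langle R_U|R_L\rangle|$ of the two red-photon (“environment”) states. A minimal numerical sketch (the functional form follows from the entangled state above; all numbers are illustrative):

```python
import numpy as np

# Sketch: the yellow photon's fringe visibility equals |<R_U|R_L>|, the
# overlap of the two red-photon "environment" states. Orthogonal red states
# (which-path info available) kill the fringes; overlapped red beams
# (info erased) restore them.

def visibility(overlap, phis=np.linspace(0, 2 * np.pi, 200)):
    """Fringe contrast at one output port of BS2.
    P(phi) = (1 + Re[overlap * e^{i phi}]) / 2 after tracing out the red photon."""
    P = 0.5 * (1 + np.real(overlap * np.exp(1j * phis)))
    return (P.max() - P.min()) / (P.max() + P.min())

print(visibility(0.0))   # orthogonal red states: no fringes
print(visibility(1.0))   # fully overlapped red beams: full fringes
print(visibility(0.5))   # partial overlap: partial fringes
```

Blocking path d with the opaque object O is, in this language, forcing the overlap back to zero, which is why the yellow fringes vanish even though the yellow beams are untouched.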

Thinking about this experiment also gives us deeper insight into what happens in the two-slit experiment performed with buckyballs. In that experiment, the buckyballs are interacting strongly with the environment, but by the time the buckyballs reach the screen, the “environment” wavefunctions are identical and effectively factor out:

$\left|\psi\right\rangle = 1/\sqrt{2} ( \left|slit1\right\rangle \left|env1\right\rangle + e^{i\phi} \left|slit2\right\rangle \left|env2\right\rangle )$

$\rightarrow 1/\sqrt{2} ( \left|slit1\right\rangle + e^{i\phi} \left|slit2\right\rangle ) \left|env\right\rangle$

To my mind, the Zou-Wang-Mandel experiment is superlative because it extends our understanding of the two-slit experiment to a remarkable degree. It shows that decoherence does not imply “wavefunction collapse”, because it is possible to engineer “re-coherence”. Thus, one needs to distinguish reversible or “false” decoherence from irreversible or “true” decoherence.

## Symmetry, selection rules and reduction to a bare-bones model

When I was a graduate student, a group of us spent long hours learning group theory from scratch in an effort to understand and interpret our experiments. One of our main goals back then was to understand Raman and infrared selection rules of phonons. We pored over the textbook and notes by the late Mildred Dresselhaus (the pdf can be found for free here). It is now difficult for me to remember what it was like looking at data without the vantage point of symmetry, such was the influence of the readings on my scientific outlook. Although sometimes hidden behind opaque mathematical formalism, when boiled down to their essence, the ideas are profound in how they simplify certain problems.

Simply stated, symmetry principles allow us to strip unnecessary complicating factors away from certain problems as long as the pertinent symmetries are retained. In this post, I will discuss Raman and IR selection rules in a very simple model that illustrates the essence of this way of thinking. Although this model is cartoonish, it contains the relevant symmetries that cut right to the physics of the problem.

To illustrate the concepts, I will be using the following harmonic oscillator-based model (what else would I use?!). Let’s consider the following setup, depicted below:

It’s relatively intuitive to see that this system possesses two normal modes (see image below). One of these normal modes is inversion symmetric (i.e. maintains the symmetry about the dashed vertical line through the entirety of its oscillatory motion), while the other normal mode is manifestly inversion asymmetric (i.e. does not maintain symmetry about the dashed vertical line through the entirety of its oscillatory motion). In particular, this latter mode is anti-symmetric. These considerations lead us to label the symmetric mode “even” and the anti-symmetric mode “odd”. (I should mention that in group theoretical textbooks, even modes are often labelled gerade (German for even), while odd modes are labelled ungerade (German for odd), from which the u and g subscripts arise). Normal modes for the system of two oscillators are depicted below:
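For concreteness, here is a small numerical sketch of this picture (my own, with made-up spring constants, assuming each mass is tied to its wall by a spring $k$ and the two masses are coupled by a spring $k_c$). Diagonalizing the dynamical matrix yields exactly one even (gerade) and one odd (ungerade) mode:

```python
import numpy as np

m, k, k_c = 1.0, 1.0, 0.5   # mass, wall springs, coupling spring (arbitrary units)

# Dynamical matrix for the displacements (u1, u2) of the two equal masses
D = np.array([[(k + k_c) / m, -k_c / m],
              [-k_c / m,      (k + k_c) / m]])
omega2, modes = np.linalg.eigh(D)   # columns of `modes` are the normal modes

# Inversion through the midpoint swaps the masses and flips the displacements:
# u1 -> -u2, u2 -> -u1
P = np.array([[0.0, -1.0], [-1.0, 0.0]])
for w2, v in zip(omega2, modes.T):
    parity = "even (gerade)" if np.allclose(P @ v, v) else "odd (ungerade)"
    print(f"omega = {np.sqrt(w2):.3f}, mode = {v}, {parity}")
```

The in-phase mode (both masses moving together) comes out odd under inversion, while the out-of-phase “breathing” mode comes out even, and the latter is stiffer because it stretches the coupling spring.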

These are the “natural” modes of the system, but to observe them we need to excite them in some way. How do we do this? This is where the analogy to IR and Raman spectroscopy comes in. I’ll describe the analogy in more detail below, but for now suppose that we can move the walls in the oscillator system. For instance, we can oscillate the walls back and forth, moving them closer together and farther apart as depicted in the image below. Clearly, this oscillatory wall motion is symmetric.

This obviously isn’t the only way that we can move the walls. We could just as easily move them like this, which is anti-symmetric:

While there are many other ways we could move the walls, it turns out that the above images essentially capture how Raman (gerade) and infrared (ungerade) modes are excited in a solid. Infrared modes are excited using an odd perturbation (proportional to the electric field $\vec{E}$), while Raman modes are excited with an even perturbation (proportional to two instances of the electric field $\vec{E}\vec{E}$)$^{**}$. (Under an inversion operation, the electric field switches sign, thus the infrared perturbation is odd while the Raman perturbation is even). And that’s basically it — you can see from the images that an even (odd) perturbation will result in the excitation of the even (odd) normal mode!

While this model is unlikely to be taught in classrooms any time soon in reference to Raman and IR selection rules, it does capture the physical picture in what I would consider a meaningful way through the use of symmetry. You can even imagine changing the relative masses of the two blocks, and you would then start to see that the formerly IR and Raman modes start to “mix”. The normal modes would no longer be purely even and odd modes, and the perturbations would then excite linear combinations of these new modes (e.g. the even perturbation would excite both modes). The analogous spectroscopic statement would be that in a system that lacks inversion symmetry, normal modes are not exclusively Raman or IR active.
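To see the mixing concretely, one can redo the toy diagonalization with unequal masses (again my own sketch with arbitrary numbers): the eigenvectors of the mass-weighted dynamical matrix are no longer purely even or odd, so the even drive now overlaps both modes.

```python
import numpy as np

k, k_c = 1.0, 0.5          # wall and coupling springs (arbitrary units)
m1, m2 = 1.0, 2.0          # unequal masses break the inversion symmetry

# Mass-weighted dynamical matrix for the two-mass, three-spring toy model
D = np.array([[(k + k_c) / m1,          -k_c / np.sqrt(m1 * m2)],
              [-k_c / np.sqrt(m1 * m2), (k + k_c) / m2]])
_, modes = np.linalg.eigh(D)

even_drive = np.array([1.0, -1.0]) / np.sqrt(2)  # symmetric ("Raman-like") drive
for v in modes.T:
    print("even-drive overlap with mode", v, "=", even_drive @ v)
# Both overlaps are nonzero: the even drive now excites both modes.
```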

While pictures like this won’t give you a precise solution to most questions you’re trying to answer, they will often help you identify obviously wrong lines of reasoning. It’s been said that physics is not much more than the study of symmetry. While that’s not exactly true, it’s hard to overstate its importance.

$^{**}$ Why is the Raman excitation even? There are many ways to explain this, but on a cartoon level, the first photon (the electric field vector) induces a dipole moment, and the second photon (the other electric field) interacts with the induced dipole. Because this is a two-photon process (i.e. photon-in, photon-out), the excitation is even under inversion. (I should mention that the strength of the induced dipole moment is related to the polarizability of the system, which is often why folks talk about the polarizability in relation to Raman spectroscopy).

Why is the infrared excitation odd? Contrary to the Raman excitation, the infrared excitation requires the absorption of the incoming photon. Thus, infrared spectroscopy is a single photon process and requires only a single electric field vector to couple to a dipole moment. The excitation is thus odd under inversion.
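These two footnotes can be summarized compactly (a schematic, written in terms of the usual dipole moment $\vec{\mu}$ and polarizability tensor $\boldsymbol{\alpha}$):

$H_{\mathrm{IR}} = -\vec{\mu}\cdot\vec{E}, \qquad H_{\mathrm{Raman}} = -\tfrac{1}{2}\,\vec{E}\cdot\boldsymbol{\alpha}\cdot\vec{E}$

Expanding $\vec{\mu}$ and $\boldsymbol{\alpha}$ in the normal-mode coordinate $Q$, a mode is IR active only if $\partial\vec{\mu}/\partial Q \neq 0$ and Raman active only if $\partial\boldsymbol{\alpha}/\partial Q \neq 0$. In a centrosymmetric system, $\vec{\mu}$ is odd and $\boldsymbol{\alpha}$ is even under inversion, so these derivatives survive only for odd (ungerade) and even (gerade) modes, respectively.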