Tag Archives: Spectroscopy

Electron-Hole Droplets

While some condensed matter physicists have moved on from studying semiconductors and consider them “boring”, there are consistently surprises from the semiconductor community that suggest the opposite. Most notably, the integer and fractional quantum Hall effects were not only unexpected, but (especially the FQHE) have changed the way we think about matter. The development of semiconductor quantum wells and superlattices has played a large role in furthering the physics of semiconductors and has been central to the efforts to observe Bloch oscillations, the quantum spin Hall effect and exciton condensation in quantum Hall bilayers, among many other discoveries.

However, there was one development that apparently did not require much of a technological advancement in semiconductor processing — it was simply overlooked. This was the discovery of electron-hole droplets in the late 60s and early 70s in crystalline germanium and silicon. A lot of work on this topic was done in the Soviet Union on both the theoretical and experimental fronts, but because of this, finding the relevant papers online is quite difficult! An excellent review on the topic was written by L. Keldysh, who also did a lot of theoretical work on electron-hole droplets and was probably the first to recognize them for what they were.

Before continuing, let me emphasize that when I say electron-hole droplet, I literally mean something quite akin to water droplets in a fog. In a semiconductor, the exciton gas condenses into a mist-like substance, with electron-hole droplets surrounded by a gas of free excitons. This is possible in a semiconductor because the electron-hole recombination time is orders of magnitude longer than the time it takes to undergo the transition to the electron-hole droplet phase. The droplet can therefore be treated as if it were in thermal equilibrium, even though it is clearly a non-equilibrium state of matter. Recombination takes longer in an indirect-gap semiconductor, which is why silicon and germanium were used for these experiments.

A bit of history: the field got started in 1968 when Asnin, Rogachev and Ryvkin in the Soviet Union observed a jump in the photoconductivity of germanium at low temperature when it was excited above a certain threshold of radiation intensity (i.e. when the density of excitons exceeded \sim 10^{16}  \textrm{cm}^{-3}). The interpretation of this observation in terms of an electron-hole droplet was put on firm footing when a broad luminescence peak was observed by Pokrovski and Svistunova at ~709 meV, below the exciton line at ~714 meV. The intensity of this peak increased dramatically upon lowering the temperature, with a substantial increase within just a tenth of a degree, an observation suggestive of a phase transition. I reproduce the luminescence spectrum from this paper by T.K. Lo showing the free exciton and electron-hole droplet peaks, because, as mentioned, the Soviet papers are difficult to find online.

EHD-Lo.JPG

From my description so far, the most pressing remaining questions are: (1) why is there an increase in the photoconductivity due to the presence of droplets? and (2) is there better evidence for the droplet than just the luminescence peak? Because free excitons are also known to form biexcitons (i.e. excitonic molecules), the peak could easily be interpreted as evidence of biexcitons rather than of an electron-hole droplet, and this was a point of much contention in the early days of studying the electron-hole droplet (see the Aside below).

Let me answer the second question first, since the answer is a little simpler. The most conclusive evidence (besides the excellent agreement between theory and experiment) was literally pictures of the droplet! Because the electrons and holes within the droplet recombine, they emit the characteristic radiation shown in the luminescence spectrum above centered at ~709 meV. This is in the infrared region and J.P. Wolfe and collaborators were actually able to take pictures of the droplets in germanium (~ 4 microns in diameter) with an infrared-sensitive camera. Below is a picture of the droplet cloud — notice how the droplet cloud is actually anisotropic, which is due to the crystal symmetry and the fact that phonons can propel the electron-hole liquid!

Pic_EHD.JPG

The first question is a little tougher to answer, but it can be addressed with a qualitative description. When the excitons condense into the liquid, the density of “excitons” is much higher in this region. In fact, the inter-exciton distance becomes smaller than the electron-hole separation within a free exciton. It is therefore no longer appropriate to speak of a specific electron as being bound to a specific hole in the droplet; the electrons and holes move independently. Naively, one can rationalize this by noting that at such high densities the exchange interaction becomes strong, so electrons can easily switch partners with other electrons, and likewise for the holes. Hence, the electron-hole liquid is actually a multi-component degenerate plasma, similar to a Fermi liquid, and it even has a Fermi energy, which is on the order of 6 meV. The electron-hole droplet is therefore metallic!
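As a rough illustration of what a ~6 meV Fermi energy implies (a back-of-the-envelope conversion of my own, not a number from the original papers), the corresponding Fermi temperature is

T_F = E_F/k_B \approx \frac{6\ \textrm{meV}}{8.6\times 10^{-2}\ \textrm{meV/K}} \approx 70\ \textrm{K}

so at the few-kelvin temperatures of these experiments the carriers inside a droplet are deep in the degenerate regime, which is why treating the liquid as a metallic, Fermi-liquid-like system makes sense.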

So why do the excitons form droplets at all? This is a question of kinetics and has to do with a delicate balance between evaporation, surface tension, electron-hole recombination and the probability of an exciton in the surrounding gas being absorbed by the droplet. Keldysh’s article, linked above, and the references therein are excellent for the details on this point.

In light of the recent discovery that bismuth (also a compensated electron-hole liquid!) superconducts at ~530 microkelvin, one may ask whether electron-hole droplets can also become superconducting at similar or lower temperatures. From my brief searches online, it doesn’t seem that this question has been seriously addressed in the theoretical literature, and it would be an interesting route towards non-equilibrium superconductivity.

Just a couple of years ago, a group also reported the existence of small quantum droplets of electrons and holes in GaAs, demonstrating that research on this topic is still alive. To my knowledge, electron-hole droplets have thus far not been observed in single-layer transition metal dichalcogenide semiconductors, which may present an interesting route to studying dimensional effects on the electron-hole droplet. However, this may prove challenging, since most of these materials are direct-gap semiconductors.

Aside: Sadly, it seems that evidence for the electron-hole droplet was actually obtained at Bell Labs by J.R. Haynes in 1966, in this paper, before the 1968 Soviet work, unbeknownst to the author. Haynes attributed his observation to the excitonic molecule (or biexciton), which, it turns out, he did not have the statistics to observe. Later experiments confirmed that what he had seen was indeed the electron-hole droplet. Strangely, Haynes’ paper is still cited relatively frequently today in the context of biexcitons, since he provided quite a nice analysis of his results! It also so happened that Haynes died after his paper was submitted, so he never found out that he had actually discovered the electron-hole droplet.

Ruminations on Raman

The Raman effect concerns the inelastic scattering of light from molecules, liquids, or solids. Brian has written a post about it previously, and it is worth reading. Its use today is so commonplace that one almost forgets it was discovered back in the 1920s. As the story goes (whether it is apocryphal or not, I do not know), C.V. Raman became entranced by the question of why the ocean appeared blue while on a ship from London back to India in 1921. He apparently was not convinced by Rayleigh’s explanation that it was just the reflection of the sky.

When Raman got back to Calcutta, he began studying light scattering from liquids almost exclusively. Raman experiments are nowadays pretty much always undertaken with the use of a laser. Obviously, Raman did not initially do this (the laser was invented in 1960). Well, you must be thinking, he must have therefore conducted his experiments with a mercury lamp (or something similar). In fact, this is not correct either. Initially, Raman had actually used sunlight!

If you have ever conducted a Raman experiment, you’ll know how difficult it can be to obtain a spectrum, even with a laser. Only about one in a million of the incident photons (and sometimes far fewer) is scattered with a change in wavelength! So for Raman to have originally conducted his experiments with sunlight is a truly remarkable achievement. It required patience, exactitude and a great deal of technical ingenuity to focus the sunlight.

Ultimately, Raman wrote up his results and submitted them to Nature in 1928. Although these results were based on sunlight, he had by then just obtained his first mercury lamp to start more quantitative studies. The article made big news because it was a major result confirming the new “quantum theory”, but Raman immediately recognized the power of this effect as a probe of matter as well. After many years of studying the effect, he came to realize that the reason water is blue is basically the same as the reason the sky is blue — Rayleigh scattering goes as 1/\lambda^4.
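To put a rough number on this (an illustrative estimate of my own, not something from Raman’s papers): comparing blue light at ~450 nm with red light at ~650 nm,

\frac{I_{450\,\textrm{nm}}}{I_{650\,\textrm{nm}}} \approx \left(\frac{650}{450}\right)^4 \approx 4

so the blue end of the visible spectrum is scattered several times more strongly than the red end, tinting both the sky and deep water blue.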

Readers of this blog will notice that I have written about Raman scattering in several different contexts on this site, for instance in measuring the Damon-Eshbach mode and the Higgs amplitude mode in superconductors, illuminating the nature of LO-TO splitting in polar insulators, and measuring unusual collective modes in Kondo insulators, demonstrating its power as a probe of condensed matter even in the present time.

On this blog, one of the major themes I’ve tried to highlight is the technical ingenuity of experimentalists to observe certain phenomena. I find it quite amazing that the Raman effect had its rather meager origins in the use of sunlight!

A View From an X-ray Beam

X-rays have become a rather commonplace tool for people within the art world for several reasons. The example I gave in a previous post was their use in exposing artistic sketches beneath the final image. These revelations gave a window into Picasso’s artistic process.

X-ray spectroscopy has also become an important method by which to verify the authenticity of a painting. It can determine the materials used, elements within the paint, and the type of paper utilized. This can help pinpoint the painting geographically as well as revealing its age.

To the left below is one of Georges Seurat’s pointillist masterpieces, entitled Young Woman Powdering Herself. Apparently, this woman was Seurat’s mistress. X-rays revealed that Seurat had originally painted himself watching her from the window, but later covered this up. It turns out that this would have been Seurat’s only self-portrait.

seurat

There are several more of these images here accompanied by other interesting anecdotes. Be sure to click on the images to reveal the sketches beneath!

A First-Rate Experiment: The Damon-Eshbach Mode

One of the things I have tried to do on this blog is highlight excellent experiments in condensed matter physics. You can click the following links for posts I’ve written on illuminating experiments concerning the symmetry of the order parameter in cuprate superconductors, Floquet states on the surface of topological insulators, quantized vortices in superfluid 4He, sonoluminescence in collapsing bubbles and LO-TO splitting in doped semiconductors, just to highlight a few. Some of these experiments required some outstanding technical ingenuity, and I feel it important to document them.

In a similar vein, there was an elegant experiment published in PRL back in 1977 by P. Grunberg and F. Metawe that shows a rather peculiar spectral signature observed with Brillouin scattering in thin-film EuO. The data is presented below. For those who don’t know, Brillouin scattering is basically identical to Raman scattering, but the energy scale probed is much lower, typically from a fraction of a cm^{-1} up to \sim 5 cm^{-1} (1 cm^{-1} \approx 30 GHz). Brillouin scattering is therefore often used to observe acoustic phonons.

Damon-Eshbach

From the image above there is immediately something striking in the data: the peak labeled M2 shows up on only one side, either the anti-Stokes side (the incident light absorbs a thermally excited mode) or the Stokes side (the incident light excites a mode), depending on the orientation of the magnetic field. In his Nobel lecture, Grunberg revealed that they discovered this effect by accident, after they had hooked up some wires in the opposite orientation!

Anyway, in usual light scattering experiments, modes are observed on both sides (Stokes and anti-Stokes), with an intensity ratio determined by Bose-Einstein statistics and hence by the temperature. In this case, two ingredients, the slab geometry of the thin film and the broken time-reversal symmetry, give rise to a surface spin wave that propagates in only one direction, known as the Damon-Eshbach mode. The DE mode travels along the surface of the sample in a direction perpendicular to the magnetization M of the thin film, obeying a right-hand rule.

When one thinks about this, it is truly bizarre, as the dispersion relation for the DE mode on the surface would look something like the image below for the two magnetic field directions:

Damon-Eshbach

One-way propagation of Damon Eshbach Mode

The dispersion branch only exists for one propagation direction! Because of this fact (and the laws of energy and momentum conservation), the mode is observed solely on either the Stokes or the anti-Stokes side. This can be understood in the following way. Suppose the experimental geometry is such that the momentum transferred to the sample, q, is positive. The incident photon can then excite the DE mode, giving rise to a peak on the Stokes side. However, the incident photon cannot absorb a DE mode of momentum -q, because that mode doesn’t exist! Similar reasoning applies for the magnetization in the other direction, where one would observe a peak only in the anti-Stokes channel.
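Stated as equations (just a schematic restatement of the argument above, with \omega_i, \textbf{k}_i for the incident photon, \omega_s, \textbf{k}_s for the scattered photon and \Omega(\textbf{q}) for the DE mode):

\textrm{Stokes:} \quad \omega_s = \omega_i - \Omega(\textbf{q}), \qquad \textbf{q} = \textbf{k}_i - \textbf{k}_s

\textrm{anti-Stokes:} \quad \omega_s = \omega_i + \Omega(\textbf{q}), \qquad \textbf{q} = \textbf{k}_s - \textbf{k}_i

Since the Damon-Eshbach branch exists for only one sign of the surface momentum \textbf{q} (fixed by the magnetization through the right-hand rule), only one of these two processes can satisfy both conservation laws at a time, and the peak therefore appears in only one channel.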

There is one more way in which this experiment relied on a serendipitous occurrence. The film was thick enough that the light, which penetrates about 100 Angstroms, did not reach the back side of the film. If the film had been much thinner, a peak would have shown up in both the Stokes and anti-Stokes channels, as the photon would have been able to interact with both surfaces.

So with a little fortune and a lot of ingenuity, this experiment set Peter Grunberg on the path to his Nobel prize winning work on magnetic multilayers. As far as simple spectroscopy experiments go, one is unlikely to find results that are as remarkable and dramatic.

A Graphical Depiction of Linear Response

Most of the theory of linear response derives from the following equation:

y(t) = \int_{-\infty}^{\infty} \chi(t-t')h(t') dt'

I remember quite vividly first seeing an equation like this in Ashcroft and Mermin in the context of electric fields (i.e. \textbf{D}(\textbf{r}) = \int_{-\infty}^{\infty} \epsilon(\textbf{r}-\textbf{r}')\textbf{E}(\textbf{r}') d\textbf{r}') and wondering what it meant.

One way to think about \chi(t-t') in the equation above is as an impulse response function. What I mean by this is that if I were to apply a Dirac delta-function perturbation to my system, I would expect it to respond in some way, characteristic of the system, like so:

ImpulseResponse

Mathematically, this can be expressed as:

y(t) = \int_{-\infty}^{\infty} \chi(t-t')h(t') dt'= \int_{-\infty}^{\infty} \chi(t-t')\delta(t') dt'=\chi(t)

This seems reasonable enough. Now, though this is going to sound like a tautology, what one means by “linear” in linear response theory is that the system responds linearly to the input. Most of us are familiar with the idea of linearity, but in this context it helps to understand it through two properties. First, the strength of the output scales in proportion to the strength of the input delta-function, and second, the responses to separate inputs combine additively. This means that if we apply a perturbation of the form:

h(t')=k_1\delta(t'-t_1) + k_2\delta(t'-t_2) +k_3\delta(t'-t_3)

We expect a response of the form:

y(t)=k_1\chi(t-t_1) + k_2\chi(t-t_2) +k_3\chi(t-t_3)

This is most easily grasped (at least for me!) graphically in the following way:

MultipleImpulses

One can see here that the responses to the three impulses simply add to give the total response. Finally, let’s consider what happens when the input is some sort of continuous function. One can imagine a continuous function as being composed of an infinite number of discrete points, like so:

continuous

Now, the response to the discretized input function can be expressed as:

y(t) = \sum_{n=-\infty}^{\infty} [\chi(t-n\epsilon_{t'})][h(n\epsilon_{t'})][\epsilon_{t'}]

This basically amounts to saying that we can treat each point on the function as a delta-function or impulse. The strength of each “impulse” in this scenario is quantified by the area of the sliver under the curve, h(n\epsilon_{t'})\epsilon_{t'}. Each one of these areal slivers gives rise to its own output, and we then add up the outputs from all of the input points to get the total response y(t) (which I haven’t plotted here). If we take the limit of an infinitely fine discretization, \epsilon_{t'} \rightarrow 0, we get back the following equation:

y(t) = \int_{-\infty}^{\infty} \chi(t-t')h(t') dt'

This kind of picture is helpful in thinking about the convolution integral in general, not just in the context of linear response theory.
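For anyone who prefers to see this numerically, here is a minimal sketch in Python/NumPy (my own illustration, not anything from the references above) that builds the response to a smooth input by summing scaled, shifted copies of an impulse response, and checks that the result agrees with a direct discrete convolution:

import numpy as np

# Time grid; the spacing dt plays the role of \epsilon_{t'}
dt = 0.01
t = np.arange(0, 20, dt)

# A causal impulse response chi(t): a damped oscillator
gamma, w0 = 0.3, 2.0
chi = np.exp(-gamma * t) * np.sin(w0 * t)

# A smooth input h(t): a Gaussian pulse centered at t = 5
h = np.exp(-((t - 5.0) ** 2) / 0.5)

# Method 1: superpose one impulse response per input point,
# each weighted by its areal sliver h(n*dt)*dt
y_sum = np.zeros_like(t)
for n, hn in enumerate(h):
    shifted = np.zeros_like(t)
    shifted[n:] = chi[: len(t) - n]      # chi(t - n*dt), shifted causally
    y_sum += hn * dt * shifted

# Method 2: the same thing written as a discrete convolution
y_conv = np.convolve(h, chi)[: len(t)] * dt

# The two agree to floating-point precision
print(np.max(np.abs(y_sum - y_conv)))

The loop is exactly the sum written above; np.convolve simply performs the same sum for us.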

Many experiments, especially scattering experiments, measure a quantity related to the imaginary part of the  Fourier-transformed response function, \chi''(\omega). One can then use a Kramers-Kronig transform to obtain the real part and reconstruct the temporal response function \chi(t-t'). An analogous procedure can be done to obtain the real-space response function from the momentum-space one.

Note: I will be taking some vacation time for a couple weeks following this post and will not be blogging during that period.

The Relationship Between Causality and Kramers-Kronig Relations

The Kramers-Kronig relations tell us that the real and imaginary parts of causal response functions are related. The relations are of immense importance to optical spectroscopists, who use them to obtain, for example, the optical conductivity from the reflectivity. It is often said in textbooks that the Kramers-Kronig relations (KKRs) follow from causality. The proof of this statement usually uses contour integration and the role of causality is then a little obscured. Here, I hope to use intuitive arguments to show the meaning behind the KKRs.

If one imagines applying a sudden force to a simple harmonic oscillator (SHO) and then watches its response, one would expect that the response will look something like this:

ImpulseResponse

We expect the SHO to oscillate for a little while and eventually stop due to friction of some kind. Let’s call the function in the plot above \chi(t). Because \chi(t) is zero before we “kick” the system, we can play a little trick and write \chi(t) = \chi_s(t)+\chi_a(t), where the symmetrized part \chi_s(t) = \frac{1}{2}[\chi(t)+\chi(-t)] and the anti-symmetrized part \chi_a(t) = \frac{1}{2}[\chi(t)-\chi(-t)] are plotted below:

Symmetric

Anti-symmetric

Since the symmetrized and anti-symmetrized parts cancel out perfectly for t<0, we recover our original function. Just to convince you (as if you needed convincing!) that this works, I have plotted the sum explicitly:

Symmetric+AntiSymmetric

Now let’s see what happens when we take this over to the frequency domain, where the KKRs apply, by doing a Fourier transform. We can write the following:

\tilde{\chi}(\omega) = \int_{-\infty}^\infty e^{i \omega t} \chi(t) \mathrm{d}t = \int_{-\infty}^\infty (\mathrm{cos}(\omega t) + i \mathrm{sin}(\omega t)) (\chi_s (t)+\chi_a(t))\mathrm{d}t

where in the last step I’ve used Euler’s identity for the exponential and decomposed \chi(t) into its symmetrized and anti-symmetrized parts as before. Now, there is something immediately apparent in the last integral. Because the domain of integration runs from -\infty to \infty, the area under the curve of any odd (a.k.a. anti-symmetric) function is necessarily zero. Lastly, noting that anti-symmetric \times symmetric = anti-symmetric, while symmetric \times symmetric and anti-symmetric \times anti-symmetric are both symmetric, we can write for the equation above:

\tilde{\chi}(\omega) = \int_{-\infty}^\infty \mathrm{cos}(\omega t) \chi_s(t) \mathrm{d}t + i \int_{-\infty}^\infty \mathrm{sin}(\omega t) \chi_a(t) \mathrm{d}t = \tilde{\chi}_s(\omega) + i \tilde{\chi}_a(\omega)

Before I continue, some remarks are in order:

  1. Even though we now have two functions in the frequency domain (i.e. \tilde{\chi}_s(\omega) and  \tilde{\chi}_a(\omega)), they actually derive from one function in the time-domain, \chi(t). We just symmetrized and anti-symmetrized the function artificially.
  2. We actually know the relationship between the symmetric and anti-symmetric functions in the time-domain because of causality.
  3. The symmetrized part of \chi(t) corresponds to the real part of \tilde{\chi}(\omega). The anti-symmetrized part of \chi(t) corresponds to the imaginary part of \tilde{\chi}(\omega).

With this correspondence, the question then naturally arises:

How do we express the relationship between the real and imaginary parts of \tilde{\chi}(\omega), knowing the relationship between the symmetrized and anti-symmetrized functions in the time-domain?

This actually turns out to not be too difficult and involves just a little more math. First, let us express the relationship between the symmetrized and anti-symmetrized parts of \chi(t) mathematically.

\chi_s(t) = \mathrm{sgn}(t) \times \chi_a(t)

where \mathrm{sgn}(t) is the sign function (+1 for t>0 and -1 for t<0), which simply flips the sign of the t<0 part of the plot; it is shown below.

sgn

Now let’s take this expression over to the frequency domain. Here, we must use the convolution theorem. This theorem says that if we have two functions multiplied by each other, e.g. h(t) = f(t)g(t), the Fourier transform of this product is expressed as a convolution in the frequency domain, like so:

\tilde{h}(\omega)=\mathcal{F}(f(t)g(t)) = \int \tilde{f}(\omega-\omega')\tilde{g}(\omega') \mathrm{d}\omega'

where \mathcal{F} denotes the Fourier transform. Therefore, all we have left to do is figure out the Fourier transform of \mathrm{sgn}(t). The result is given here (in terms of frequency, not angular frequency!), but it is a fun exercise to work it out on your own. The answer is:

\mathcal{F}(\mathrm{sgn}(t)) = \frac{2}{i\omega}

With this answer, and using the convolution theorem, we can write:

\tilde{\chi}_s(\omega) = \int_{-\infty}^{\infty} \frac{2}{i(\omega-\omega')} \tilde{\chi}_a(\omega')\mathrm{d}\omega'

Hence, up to some factors of 2\pi and i, we can now see better what is behind the KKRs without using contour integration. We can also see why it is always said that the KKRs are a consequence of causality. Thinking about the KKRs this way has definitely helped my thinking about response functions more generally.
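If you would like to convince yourself of the correspondence numerically, here is a small sketch in Python/NumPy (my own check, not taken from the notes linked below) that builds a causal damped-oscillator response, splits it into symmetrized and anti-symmetrized parts, and confirms that their Fourier transforms give the real and imaginary parts of \tilde{\chi}(\omega) respectively:

import numpy as np

# A causal response function: damped oscillator, zero for t < 0
gamma, w0 = 0.2, 3.0
def chi(t):
    return np.where(t >= 0, np.exp(-gamma * t) * np.sin(w0 * t), 0.0)

dt = 0.02
t = np.arange(-40, 40, dt)

# Symmetrized and anti-symmetrized parts: chi = chi_s + chi_a
chi_s = 0.5 * (chi(t) + chi(-t))
chi_a = 0.5 * (chi(t) - chi(-t))

# Causality in the time domain: chi_s(t) = sgn(t) * chi_a(t)
print(np.max(np.abs(chi_s - np.sign(t) * chi_a)))           # essentially zero

# Fourier transform with the e^{+i w t} convention used in this post
w = np.linspace(-10, 10, 201)
phase = np.exp(1j * np.outer(w, t))
chi_tilde   = phase @ chi(t) * dt
chi_tilde_s = phase @ chi_s * dt
chi_tilde_a = phase @ chi_a * dt

# Remark 3: the symmetrized part gives the real part of chi_tilde,
# and the anti-symmetrized part gives (i times) the imaginary part
print(np.max(np.abs(chi_tilde_s - chi_tilde.real)))         # tiny (finite-grid error)
print(np.max(np.abs(chi_tilde_a - 1j * chi_tilde.imag)))    # tiny (finite-grid error)

I have left out a numerical check of the final convolution relation itself, since the 1/(\omega-\omega') kernel requires a principal-value treatment that would clutter the sketch.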

I hope to write a post in the future talking a little more about the connection between the imaginary part of the response function and dissipation. Let me know if you think this will be helpful.

A lot of this post was inspired by this set of notes, which I found to be very valuable.

Interactions, Collective Excitations and a Few Examples

Most researchers in our field (and many outside our field who study, e.g., ant colonies, traffic, fish schools, etc.) are acutely aware of the relationship between the microscopic interactions between constituent particles and the collective modes that emerge from them. These can be as mundane as phonons in a solid, which arise from the interactions between atoms in the lattice, or magnons in an antiferromagnet, which arise from spin-spin interactions.

From a theoretical point of view, collective modes can be derived by examining the interparticle interactions. An example is the random phase approximation for an electron gas, which yields the plasmon dispersion (here are some of my own notes on this for those who are interested). In experiment, one usually takes the opposite view, where inter-particle interactions can be inferred from the collective modes. For instance, the force constants in a solid can often be deduced by studying the phonon spectrum, and the exchange interaction can be backed out by examining the magnon dispersions.
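To give a concrete flavor of the first route (a standard textbook result quoted here for orientation, not something worked out in the linked notes): in the long-wavelength limit, the RPA dielectric function of the electron gas gives a plasmon at

\omega(q) \approx \omega_p\left(1 + \frac{3}{10}\frac{q^2 v_F^2}{\omega_p^2} + \ldots\right), \qquad \omega_p^2 = \frac{4\pi n e^2}{m}

(in Gaussian units), so both the mode frequency and its dispersion encode microscopic quantities, namely the carrier density n and the Fermi velocity v_F, which is precisely the sense in which measuring a collective mode lets you work backwards to the interactions that produced it.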

In more exotic states of matter, these collective excitations can get a little bizarre. In a two-band superconductor, for instance, it was shown by Leggett that the two superfluids can oscillate out of phase, resulting in a novel collective mode, first observed in MgB2 (pdf!) by Blumberg and co-workers. Furthermore, in 2H-NbSe2, there have been claims of an observed Higgs-like excitation, which is made visible to Raman spectroscopy through its interaction with the charge density wave amplitude mode (see here and here for instance).

As I mentioned in the post about neutron scattering in the cuprates, a spin resonance mode is often observed below the superconducting transition temperature in unconventional superconductors. This mode has been observed in the cuprate, iron-based and heavy fermion superconducting families (see e.g. here for CeCoIn5), and is not (at least to me!) well understood. In another rather stunning example, no fewer than four sub-gap collective modes, which are likely of electronic origin, show up below ~40 K in SmB6 (see image below), which belongs to a class of materials known as Kondo insulators.

smb6

Lastly, in a material class that we actually think we understand quite well, Peierls-type quasi-1D charge density wave materials, there is a collective mode that shows up in the far-infrared region and that (to my knowledge) has so far eluded theoretical understanding. In this paper on blue bronze, the authors assume that the mode, which shows up at ~8 cm^{-1} in the energy loss function, is a pinned phase mode, but this assignment is likely incorrect in light of the fact that later microwave measurements demonstrated that the phase mode actually sits at a much lower energy scale (see Fig. 9). This example serves to show that even in material classes we think we understand quite well, there are often unanswered questions lurking.

In materials that we don’t understand very well, such as the Kondo insulators and the unconventional superconductors mentioned above, it is therefore imperative to map out the collective modes, as they can yield critical insights into the interactions between constituent particles or the couplings between different order parameters. To truly understand what is going on in these materials, every peak needs to be identified (especially the ones that show up below Tc!), quantified and understood satisfactorily.

As Lester Freamon says in The Wire:

All the pieces matter.