Tag Archives: Experiment

Fractional quasiparticles and reality

As a condensed matter physicist, one of the central themes that one must become accustomed to is the idea of a quasiparticle. These quasiparticles are not particles as nature made them per se, but only exist inside matter. (Yes, nature made matter too, and therefore quasiparticles as well, but come on — you know what I mean!)

Probably the first formulation of a quasiparticle was in Einstein’s theory of the specific heat of a solid at low temperature. He postulated that the sound vibrations in a solid, much like photons from a blackbody, obey the Planck distribution, implying some sort of particulate nature to sound. This introduction was quite indirect, however, and the first really explicit formulation of quasiparticles was presented by Landau in his theory of He-4. Here, he proposed that most physical observables could be described in terms of “phonons” and “rotons”, quantized sound vibrations at low and high momenta respectively.

In solid state physics, one of the most common quasiparticles is the hole; in the study of magnetism, it is the magnon; in semiconductor physics, the exciton is ubiquitous; and there are many other examples as well. So let me ask a seemingly benign question: are these quasiparticles real (i.e. are they real particles)?

In my experience in the condensed matter community, I suspect that most would answer in the affirmative, and if not, at least claim that the particles observed in condensed matter are just as real as any particle observed in particle physics.

Part of the reason I bring this issue up is because of concerns raised soon after the discovery of the fractional quantum Hall effect (FQHE). When the theory of the FQHE was formulated by Laughlin, it was thought that his quasiparticles of charge e/3 may have been a mere oddity of the mathematical description of the FQHE. Do these charge-e/3 current carriers actually exist, or is this just a convenient mathematical description?

In two papers that appeared almost concurrently, linked here and here, it was shown using quantum shot noise experiments that these e/3 particles do indeed exist. Briefly, quantum shot noise arises because of the discrete nature of particles and enables one to measure the charge of a current-carrying particle to a pretty good degree of accuracy. When the results are compared to models of particles carrying charge e versus particles carrying charge e/3, it is no contest. Here is a plot below showing this result quite emphatically:

FracCharge.PNG
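To make the logic of such a measurement concrete, here is a minimal numerical sketch of how the Schottky relation S = 2qI lets one infer the carrier charge from the measured noise density. The current value and function name are made up for illustration; a real analysis involves temperature and transmission corrections.

```python
# Schottky formula: the shot-noise spectral density S (A^2/Hz) of a current I
# carried by discrete charges q is S = 2*q*I, so q = S / (2*I).
e = 1.602176634e-19  # elementary charge (C)

def inferred_charge(S, I):
    """Infer the carrier charge from noise density S (A^2/Hz) and current I (A)."""
    return S / (2 * I)

I = 100e-12                    # 100 pA of backscattered current (made-up value)
S_measured = 2 * (e / 3) * I   # what charge-e/3 carriers would produce

q = inferred_charge(S_measured, I)
print(q / e)  # ≈ 0.333: the carriers behave as if they carry e/3
```

Comparing the inferred q against e and e/3 is essentially the "no contest" comparison shown in the plot above.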

One may then pose the question: is there a true distinction between what really “exists out there” versus a theory that conveniently describes and predicts nature? Is the physicist’s job complete once the equations have been written down (i.e. should he/she not care about questions like “are these fractional charges real”)?

These are tough questions to answer, and are largely personal, but I lean towards answering ‘yes’ to the former and ‘no’ to the latter. I would contend that the quantum shot noise experiments outlined above wouldn’t have even been conducted if the questions posed above were not serious considerations. While asking if something is real may not always be answerable, when it is, it usually results in a deepened understanding.

This discussion reminds me of an (8-year old!) YouTube video of David who, following oral surgery to remove a tooth, still feels the effects of anesthesia:

Wannier-Stark Ladder, Wavefunction Localization and Bloch Oscillations

Most people who study solid state physics are told at some point that in a totally pure sample where there is no scattering, one should observe an AC response to a DC electric field, with oscillations at the Bloch frequency (\omega_B). These are the so-called Bloch oscillations, which were predicted by C. Zener in this paper.

However, the actual observation of Bloch oscillations is not as simple as the textbooks would make it seem. There is an excellent Physics Today article by E. Mendez and G. Bastard that outlines some of the challenges associated with observing Bloch oscillations (which was written while this paper was being published!). Since textbook treatments often use semi-classical equations of motion to demonstrate the existence of Bloch oscillations in a periodic potential, they implicitly assume transport of an electron wave-packet. Generating such a wave-packet in a solid is non-trivial.

In fact, if one undertakes a full quantum mechanical treatment of electrons in a periodic potential under the influence of an electric field, one arrives at the Wannier-Stark ladder, which shows that an electric field can localize electrons! It is this ladder and the corresponding localization which is key to observing Bloch oscillations.

Let me use the two-well potential to give you a picture of how this localization might occur. Imagine symmetric potential wells, where the lowest energy eigenstates look like so (where S and A label the symmetric and anti-symmetric states):

Now, imagine that I start to make the wells a little asymmetric. What happens in this case? Well, it turns out that the electrons start to localize in the following way (for the formerly symmetric and anti-symmetric states):
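A two-site tight-binding caricature makes this localization quantitative (a sketch of the two-well picture, not of the full quantum well problem): as the energy offset between the wells grows relative to the tunneling amplitude, the ground state piles up in the lower well.

```python
import math

def ground_state_weights(delta, t):
    """Ground state of the two-site Hamiltonian
         H = [[+delta/2, -t], [-t, -delta/2]],
    where delta is the well asymmetry and t the tunneling amplitude.
    Returns (|psi_1|^2, |psi_2|^2): occupation of the higher and lower well."""
    E = -math.sqrt((delta / 2) ** 2 + t ** 2)   # ground-state energy
    a = 1.0                                      # amplitude on well 1 (higher)
    b = (delta / 2 - E) / t                      # amplitude on well 2 (lower)
    norm = a * a + b * b
    return a * a / norm, b * b / norm

# Symmetric wells: equal 50/50 sharing (the symmetric "S" state)
print(ground_state_weights(0.0, 1.0))    # (0.5, 0.5)
# Strongly tilted wells: the ground state localizes in the lower well
print(ground_state_weights(10.0, 1.0))   # ~ (0.01, 0.99)
```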

G. Wannier was able to solve the Schrodinger equation with an applied electric field in a periodic potential in full and showed that the eigenstates of the problem form a Stark ladder. This means that the eigenstates are of identical functional form from quantum well to quantum well (unlike in the double-well shown above) and the energies of the eigenstates are spaced apart by \Delta E=\hbar \omega_B! The potential is shown schematically below. It is also shown that as the potential wells slant more and more (i.e. with larger electric fields), the wavefunctions become more localized (the image is taken from here (pdf!)):

screenshot-from-2016-12-01-222719

A nice numerical solution from the same document shows the wavefunctions for a periodic potential well profile with a strong electric field, exhibiting a strong wavefunction localization. Notice that the wavefunctions are of identical form from well to well.

StarkLadder.png

What can be seen in this solution is that the stationary states are split by \hbar \omega_B, but much like the quantum harmonic oscillator (where the levels are split by \hbar \omega), nothing is actually oscillating until one has a wavepacket (or a linear superposition of eigenstates). Therefore, the Bloch oscillations cannot be observed in the ground state (which includes the applied electric field) — one must first generate a wavepacket in the solid.
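One can watch the ladder emerge numerically by diagonalizing a simple tight-binding chain with a linear potential — a sketch of the physics, not of Wannier's original calculation, and it assumes numpy is available:

```python
import numpy as np

# Tight-binding chain in a uniform field: site energies n*F, where F = e*E*a
# is the potential drop per lattice site, plus nearest-neighbor hopping t.
N, t, F = 41, 1.0, 2.0
n = np.arange(N)
H = np.diag(F * n).astype(float)
H -= t * (np.eye(N, k=1) + np.eye(N, k=-1))

levels = np.linalg.eigvalsh(H)
spacings = np.diff(levels)

# Away from the chain's edges the levels form a ladder with spacing
# F = hbar*omega_B, independent of the hopping t.
print(spacings[N // 2 - 5 : N // 2 + 5])  # all ≈ 2.0
```

Only the levels near the open boundaries deviate from the ladder, which is consistent with the strong field localizing each eigenstate around a single well.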

In the landmark paper that finally announced the existence of Bloch oscillations, Waschke et al. generated a wavepacket in a GaAs-GaAlAs superlattice using a laser pulse. The pulse was incident on a sample with an applied electric field along the superlattice direction, and they were able to observe radiation emitted from the sample due to the Bloch oscillations. I should mention that superlattices must be used to observe the Wannier-Stark ladder and Bloch oscillations because \omega_B, which scales with the superlattice period, needs to be fast enough that the electrons don’t scatter from impurities and phonons. Here is the famous plot from the aforementioned paper showing that the frequency of the emitted radiation from the Bloch oscillations can be tuned using an electric field:

PRLBlochOscillations.png

This is a pretty remarkable experiment, one of those that took almost 60 years from first proposal to observation.
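A back-of-the-envelope estimate shows why the superlattice matters: the Bloch frequency is \nu_B = eEa/h, so it grows with the (super)lattice period a. The field value below is a made-up but typical-scale number for illustration.

```python
# nu_B = e*E*a/h: larger lattice period -> faster Bloch oscillation.
e, h = 1.602176634e-19, 6.62607015e-34
E = 1.0e6  # applied field, V/m (i.e. 10 kV/cm) -- illustrative value

for a, label in [(0.5e-9, "atomic lattice"), (10e-9, "superlattice")]:
    nu_B = e * E * a / h  # Bloch frequency in Hz
    print(f"{label}: nu_B = {nu_B / 1e12:.2f} THz, period = {1e12 / nu_B:.1f} ps")
```

With scattering times on the picosecond scale, only the superlattice case completes an oscillation before the electron scatters.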

Kapitza-Dirac Effect

We are all familiar with the fact that light can diffract from two (or multiple) slits in a Young-type experiment. After the advent of quantum mechanics and de Broglie’s wave description of matter, it was shown by Davisson and Germer that electrons could be diffracted by a crystal. In 1933, P. Kapitza and P. Dirac proposed that it should in principle be possible for electrons to be diffracted by standing waves of light, in effect using light as a diffraction grating.

In this scheme, the electrons would interact with light through the ponderomotive potential. If you’re not familiar with the ponderomotive potential, you wouldn’t be the only one — this is something I was totally ignorant of until reading about the Kapitza-Dirac effect. In 1995, Anton Zeilinger and co-workers were able to demonstrate the Kapitza-Dirac effect with atoms, obtaining a beautiful diffraction pattern in the process which you can take a look at in this paper. It probably took so long for this effect to be observed because it required the use of high-powered lasers.
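For a feel for the scales involved, here is a rough estimate of the ponderomotive energy U_p = e^2 E_0^2 / (4 m \omega^2) of an electron in an oscillating field, with the field amplitude set by the intensity via I = c \epsilon_0 E_0^2 / 2. The laser parameters are made-up, illustrative values, not those of either experiment.

```python
import math

# Ponderomotive potential of an electron in an oscillating electric field.
e, m_e = 1.602176634e-19, 9.1093837015e-31
c, eps0 = 2.99792458e8, 8.8541878128e-12

wavelength = 532e-9   # green laser (m) -- illustrative
I = 1.0e14            # intensity (W/m^2) -- illustrative

omega = 2 * math.pi * c / wavelength    # angular frequency of the light
E0_sq = 2 * I / (c * eps0)              # squared field amplitude
U_p = e**2 * E0_sq / (4 * m_e * omega**2)
print(f"U_p = {U_p / e * 1e3:.2f} meV")  # a fraction of a meV at these numbers
```

The smallness of U_p at optical frequencies hints at why high-powered lasers were needed before the effect could be seen.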

Later, in 2001, this experiment was pushed a little further and an electron beam was used to demonstrate the effect (as opposed to atoms), as Kapitza and Dirac originally proposed. Once again, a diffraction pattern was observed. The article is linked here and I reproduce the main result below:

dirac-kaptiza

(Top) The interference pattern observed in the presence of a standing light wave. (Bottom) The profile of the electron beam in the absence of the light wave.

Even though this experiment is conceptually quite simple, these basic quantum phenomena still manage to elicit awe (at least from me!).

Coupled and Synchronized Metronomes

A couple years ago, I saw P. Littlewood give a colloquium on exciton-polariton condensation. To introduce the idea, he performed a little experiment, a variation of an experiment first performed and published by Christiaan Huygens. Although he performed it with only two metronomes, below is a video of the same experiment performed with 32 metronomes.

A very important ingredient in getting this to work is the suspended foam underneath the metronomes. In effect, the foam is a field that couples the oscillators.
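The mechanism can be mimicked in a few lines with a mean-field (Kuramoto-type) model, in which the shared foam is replaced by a global coupling to the average phase; all parameters here are made up for illustration.

```python
import math, random

# N oscillators with slightly different natural frequencies, coupled through
# their mean field -- the role the suspended foam plays for the metronomes.
random.seed(0)
N, K, dt, steps = 32, 2.0, 0.01, 5000
omega = [2 * math.pi * (1.0 + 0.05 * random.gauss(0, 1)) for _ in range(N)]
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(theta):
    """|r| = 1 means perfect synchrony; |r| ~ 0 means incoherence."""
    re = sum(math.cos(t) for t in theta) / len(theta)
    im = sum(math.sin(t) for t in theta) / len(theta)
    return math.hypot(re, im)

r0 = order_parameter(theta)
for _ in range(steps):
    # mean-field form: d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
    psi = math.atan2(sum(math.sin(t) for t in theta),
                     sum(math.cos(t) for t in theta))
    r = order_parameter(theta)
    theta = [t + dt * (w + K * r * math.sin(psi - t))
             for t, w in zip(theta, omega)]

print(r0, order_parameter(theta))  # the coupling pulls the phases together
```

Turn the coupling K down toward zero (remove the foam) and the phases stay incoherent, just as in the video.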

Data Representation and Trust

Though popular media often portray science as purely objective, it has many subjective sides as well. One of these is the trust we place in our peers that they are telling the truth.

For instance, in most experimental papers, one can only present an illustrative portion of all the data taken because of the sheer volume of data usually acquired. What is presented is supposed to be a representative sample. However, as readers, we are never sure this is actually the case. We trust that our experimental colleagues have presented the data in a way that is honest, illustrative of all the data taken, and reproducible under similar conditions. It is increasingly becoming a trend to publish the remaining data in the supplemental section — but the amount of data taken can easily overwhelm this section as well.

When writing a paper, an experimentalist also has to make certain choices about how to represent the data. Increasingly, the amount of data at the experimentalist’s disposal means that they often choose to show the data using some sort of color scheme in a contour or color density plot. Just take a flip through Nature Physics, for example, to see how popular this style of data representation has become. Almost every cover of Nature Physics features this kind of plot.

However, there are some dangers that come with color schemes if the colors are not chosen appropriately. There is a great post at medvis.org talking about the ills of using, e.g., the rainbow color scheme, and how misleading it can be in certain circumstances. Make sure to also take a look at the articles cited therein to get a flavor of what these schemes can do. In particular, there is a paper called “Rainbow Color Map (Still) Considered Harmful”, which has several noteworthy comparisons of different color schemes, including ones that are and are not perceptually linear. Take a look at the plots below and compare the different color schemes chosen to represent the same data set (taken from that paper):

rainbow

The rainbow scheme appears to show more drastic gradients in comparison to the other color schemes. My point, though, is that by choosing certain color schemes, an experimentalist can artificially enhance an effect or obscure one he/she does not want the reader to notice.
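A rough sketch of why rainbow maps mislead: perceived brightness should increase monotonically along a colormap, and for the rainbow ("jet") scheme it does not. The RGB values below are approximate hand-picked samples of the jet and viridis colormaps, not exact library values.

```python
def luminance(rgb):
    """Relative luminance of an (R, G, B) triple, each channel in [0, 1]."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Approximate color samples along each map, from low to high data values.
jet = [(0.0, 0.0, 0.5), (0.0, 0.0, 1.0), (0.0, 1.0, 1.0),
       (1.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.5, 0.0, 0.0)]
viridis = [(0.267, 0.005, 0.329), (0.229, 0.322, 0.546),
           (0.128, 0.567, 0.551), (0.369, 0.789, 0.383),
           (0.993, 0.906, 0.144)]

def monotone(vals):
    return all(a <= b for a, b in zip(vals, vals[1:]))

print([round(luminance(c), 2) for c in jet])      # rises, then falls again
print(monotone([luminance(c) for c in jet]))      # non-monotonic brightness
print(monotone([luminance(c) for c in viridis]))  # monotonic brightness
```

The non-monotonic brightness of the rainbow map is what creates those spurious-looking sharp gradients in the comparison above.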

In fact, the experimentalist makes many choices when publishing a paper — the size of an image, the bounds of the axes, the scale of the axes (e.g. linear vs. log), the outliers omitted, etc. — all of which can have profound effects on the message of the paper. This is why there is an underlying issue of trust that lurks within the community. We trust that experimentalists choose to exhibit data in an attempt to be as honest as they can be. Of course, there are always subconscious biases lurking when these choices are made. But my hope is that experimentalists are mindful and introspective when representing data, doubting themselves to a healthy extent before publishing results.

To be a part of the scientific community means that, among other things, you are accepted for your honesty and that your work is (hopefully) trustworthy. A breach of this implicit contract is seen as a grave offence and is why cases of misconduct are taken so seriously.

Drought

Since the discovery of superconductivity, the record transition temperature held by a material has been shattered many times. Here is an excellent plot (taken from here) that shows the critical temperature vs. year of discovery:

Superconducting Transition Temperature vs. Year of Discovery

This is a pretty remarkable plot for many reasons. One is the dramatic increase in transition temperature ushered in by the cuprates after approximately 70 years of steady and slow increases. Another, more worrying, feature of the plot is that we are currently going through an unprecedented drought (not caused by climate change). The highest transition temperature (at ambient pressure) has not been raised for more than 23 years, the longest such stretch since the discovery of superconductivity.

It was always going to be difficult to increase the transition temperatures of superconductors once the materials in the cuprate class were (seemingly) exhausted. It is interesting to see, however, that the mode of discovery has altered markedly compared with years prior. Nowadays, vertical lines are more common, with a subsequent leveling out. Hopefully the vertical line will reach room temperature sooner rather than later. I, personally, hope to still be around when room temperature superconductivity is achieved — it will be an interesting time to be alive.

Schrodinger’s Cat and Macroscopic Quantum Mechanics

A persisting question that we inherited from the forefathers of the quantum formalism is why quantum mechanics, which works emphatically well on the micro-scale, seems at odds with our intuition at the macro-scale. Intended to demonstrate the absurdity of applying quantum mechanics on the macro-scale, the micro/macro logical disconnect was famously captured by Schrodinger in his description of a cat being in a superposition of both alive and dead states. There have been many attempts in the theoretical literature to come to grips with this apparent contradiction, the most popular of which goes under the umbrella of decoherence, where interaction with the environment results in a loss of information.

Back in 1999, Arndt, Zeilinger and co-workers observed two-slit interference of C60 molecules (i.e. buckyballs), then the largest molecules to exhibit such interference phenomena. The grating used in the experiment had a period of about 100 nm, while the approximate de Broglie wavelength of the C60 molecules was 2.5 picometers. This was a startling discovery for a couple of reasons:

  1. The beam of C60 molecules used here was far from being perfectly monochromatic. In fact, there was a pretty significant spread of initial velocities, with the full width at half maximum (\Delta v/v) getting to be as broad as 60%.
  2. The C60 molecules were not in their ground state. The initial beam was prepared by sublimating the molecules in an oven which was heated to 900-1000K. It is estimated, therefore, that there were likely 3 to 4 photons exchanged with the background blackbody field during the beam’s passage through the instrument. Hence the C60 molecules can be said to have been strongly interacting with the environment.
  3. The molecule consists of approximately 360 protons, 360 neutrons and 360 electrons (about 720 amu), which means that treating the C60 molecule as a purely classical object would be perfectly adequate for most purposes.
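The quoted de Broglie wavelength is easy to sanity-check from \lambda = h/(mv). The beam velocity used below (~220 m/s) is an assumed ballpark figure for the most probable molecules in such a thermal beam, not a number taken from the paper.

```python
# de Broglie wavelength of a C60 molecule: lambda = h / (m * v)
h = 6.62607015e-34        # Planck constant (J s)
amu = 1.66053906660e-27   # atomic mass unit (kg)

m_C60 = 720 * amu         # 60 carbon atoms x 12 nucleons each
v = 220.0                 # assumed beam velocity (m/s)

lam = h / (m_C60 * v)
print(f"lambda = {lam * 1e12:.1f} pm")  # ~2.5 pm, consistent with the text
```

A wavelength five orders of magnitude smaller than the molecule itself is exactly why observing this interference is such a feat.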

The record set by the C60 molecule has since been smashed by larger molecules with masses up to 10,000 amu. This is now within one order of magnitude of a small virus. If I were a betting man, I wouldn’t put money against viruses exhibiting interference effects as well.

This of course raises the question of how far these experiments can go and to what extent they can be applied to the human scale. Unfortunately, we will probably have to wait a while for a definitive answer to that question. However, these experiments are a tour-de-force and make us face some of our deepest discomforts concerning the quantum formalism.