
Fractional quasiparticles and reality

As a condensed matter physicist, one of the central themes that one must become accustomed to is the idea of a quasiparticle. These quasiparticles are not particles as nature made them per se, but only exist inside matter. (Yes, nature made matter too, and therefore quasiparticles as well, but come on — you know what I mean!)

Probably the first formulation of a quasiparticle appeared in Einstein’s theory of the specific heat of a solid at low temperature. He postulated that the sound vibrations in a solid, much like photons from a blackbody, obey the Planck distribution, implying some sort of particulate nature to sound. This introduction was quite indirect, and the first really explicit formulation of quasiparticles was presented by Landau in his theory of liquid 4He. Here, he proposed that most physical observables could be described in terms of “phonons” and “rotons”, quantized sound vibrations at low and high momenta respectively.
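To make the connection to the Planck distribution explicit, here are the standard textbook expressions for the Einstein model, written in modern notation rather than Einstein’s original one: each atom is treated as an oscillator of a single frequency ω, the mean thermal energy of each quantized oscillator follows the Planck form, and differentiating gives the heat capacity,

```latex
\langle E \rangle = \frac{\hbar\omega}{e^{\hbar\omega/k_B T}-1}, \qquad
C_V = 3 N k_B \left(\frac{\hbar\omega}{k_B T}\right)^{2}
\frac{e^{\hbar\omega/k_B T}}{\left(e^{\hbar\omega/k_B T}-1\right)^{2}},
```

which vanishes as T → 0, in agreement with experiment and in contrast to the classical Dulong-Petit value of 3Nk_B.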

In solid state physics, one of the most common quasiparticles is the hole; in the study of magnetism, it is the magnon; in semiconductor physics, the exciton is ubiquitous; and there are many other examples as well. So let me ask a seemingly benign question: are these quasiparticles real (i.e. are they real particles)?

In my experience in the condensed matter community, I suspect that most would answer in the affirmative, and if not, at least claim that the particles observed in condensed matter are just as real as any particle observed in particle physics.

Part of the reason I bring this issue up is the set of concerns raised soon after the discovery of the fractional quantum Hall effect (FQHE). When the theory of the FQHE was formulated by Laughlin, it was thought that his quasiparticles of charge e/3 might be a mere oddity of the mathematical description of the FQHE. Do particles carrying charge e/3 actually exist and carry current, or are they just a convenient mathematical device?

In two papers that appeared almost concurrently, linked here and here, it was shown using quantum shot noise experiments that these e/3 particles do indeed exist. Briefly, quantum shot noise arises because of the discrete nature of the charge carriers and enables one to measure the charge of a current-carrying particle to a pretty good degree of accuracy. When the results are compared against models of carriers with charge e versus carriers with charge e/3, there is no contest. Here is a plot showing this result quite emphatically:

[Figure: quantum shot noise data compared with the predictions for carriers of charge e and charge e/3 (FracCharge.PNG)]
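To sketch the idea behind those measurements (a simplified picture, not the full analysis carried out in the papers): in the Poissonian, or Schottky, limit the low-frequency current noise is proportional to the charge of the discrete carriers, so the ratio of noise to current reads off that charge directly,

```latex
S_I = 2 q \langle I \rangle
\quad\Longrightarrow\quad
q = \frac{S_I}{2\langle I \rangle},
```

and for the ν = 1/3 fractional quantum Hall state the Laughlin quasiparticles are expected to give q = e/3 rather than q = e. In practice the experiments analyze the noise of the current partitioned at a quantum point contact, but the proportionality of the noise to the carrier charge is the essential point.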

One may then pose the question: is there a true distinction between what really “exists out there” and a theory that conveniently describes and predicts nature? Is the physicist’s job complete once the equations have been written down (i.e. should he/she not care about questions like “are these fractional charges real?”)?

These are tough questions to answer, and the answers are largely personal, but I lean towards answering ‘yes’ to the former and ‘no’ to the latter. I would contend that the quantum shot noise experiments outlined above would not even have been conducted if the questions posed above were not serious considerations. While the question of whether something is real may not always be answerable, when it is, answering it usually results in a deepened understanding.

This discussion reminds me of an (eight-year-old!) YouTube video of David who, following oral surgery to remove a tooth, still feels the effects of anesthesia.

Precision in Many-Body Systems

Measurements of the quantum Hall effect give a conductance precisely quantized in units of e^2/h. Measurements of the AC current across a Josephson junction give a frequency of 2e/h times the applied voltage. Hydrodynamic circulation in liquid 4He is quantized in units of h/m_{4He}. These measurements (and similar ones, like flux quantization) are remarkable. They yield fundamental constants to a great degree of accuracy in a condensed matter setting, a setting which Murray Gell-Mann once referred to as “squalid state” systems. How is this possible?
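To give a sense of the numbers, here is a small back-of-the-envelope Python snippet of my own (using CODATA values for h, e and the 4He atomic mass; purely illustrative) that evaluates the three quantized units mentioned above:

```python
# Back-of-the-envelope evaluation of the quantized units mentioned above,
# using CODATA values for the fundamental constants.

h = 6.62607015e-34      # Planck constant (J s)
e = 1.602176634e-19     # elementary charge (C)
m_he4 = 6.6465e-27      # mass of a 4He atom (kg)

# Quantum Hall effect: conductance quantized in units of e^2/h,
# i.e. resistance plateaus at (h/e^2)/n.
print("h/e^2    = %.3f ohm" % (h / e**2))            # ~25812.807 ohm

# AC Josephson effect: frequency = (2e/h) * applied voltage.
print("2e/h     = %.1f GHz/V" % (2 * e / h / 1e9))   # ~483597.8 GHz/V

# Circulation in superfluid 4He: quantized in units of h/m_4He.
print("h/m_4He  = %.3e m^2/s" % (h / m_he4))         # ~9.97e-8 m^2/s
```

The point of the exercise is that these combinations of fundamental constants are exactly what one extracts, to remarkable accuracy, from such “squalid state” measurements.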

At first sight, it is stunning that physics of the solid or liquid state could yield a measurement so precise. When we consider the defects, impurities, surfaces and other imperfections in a macroscopic system, these results become even more astounding.

So where does this precision come from? It turns out that in all cases, one is measuring a quantity that depends on the single-valued nature of the (appropriately defined) complex scalar wavefunction. The aforementioned quantities are measured in integer units, n, usually referred to as the winding number. Because the winding number is a topological quantity, in the sense that it arises in a multiply-connected space, these measurements do not particularly care about small differences in their surroundings.
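To make the link between single-valuedness and integer quantization explicit, write the order parameter as ψ = |ψ|e^{iθ}. Demanding that ψ return to itself around any closed loop encircling the “hole” of the multiply-connected region forces the phase to wind by an integer multiple of 2π; that integer n is the winding number the experiments count. A schematic sketch (the precise prefactors depend on the system):

```latex
\oint \nabla\theta \cdot d\mathbf{l} = 2\pi n, \quad n \in \mathbb{Z}
\quad\Longrightarrow\quad
\oint \mathbf{v}_s \cdot d\mathbf{l} = n\,\frac{h}{m_{^4\mathrm{He}}}
\;\;\text{(superfluid circulation)},
\qquad
\Phi = n\,\frac{h}{2e}
\;\;\text{(flux through a superconducting ring)},
```

where v_s = (ħ/m)∇θ for the superfluid, and the factor of 2e reflects the pairing of electrons in the superconductor.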

For instance, the leads used to measure the quantum Hall effect can be placed virtually anywhere on the sample, as long as the wires don’t cross each other. The samples can have any (two-dimensional) geometry, i.e. a square, a circle or some complicated corrugated shape. In the Josephson case, the weak links can be constrictions, an insulating oxide layer, a metal, etc. Imprecision in the experimental setup is not detrimental, as long as the topology of the setup remains the same.

Another ingredient required for this precision is a large number of particles. This can seem counter-intuitive, since one expects quantization on a microscopic rather than a macroscopic level, but it is the large number of particles that makes these effects possible. For instance, both the Josephson effect and the hydrodynamic circulation in 4He depend on the existence of a macroscopic complex scalar wavefunction or order parameter. In fact, if the superconductor becomes too small, effects like the Josephson effect, flux quantization and persistent currents all start to get washed out. There is a gigantic energy barrier preventing the decay from the n=1 current-carrying state to the n=0 non-current-carrying state because of the large number of particles involved (i.e. the higher winding number state is metastable). As one decreases the number of particles, the energy barrier is lowered and the system can start to tunnel from the higher winding number state to the lower one.

In the quantum Hall effect, the samples need to be macroscopically large to prevent the states on opposite edges from interacting with each other. Once the edge states are able to interact, they may hybridize and the conductance quantization gets washed out. This has been visualized in the context of 3D topological insulators using angle-resolved photoemission spectroscopy, in this well-known paper. Again, a large sample is needed to observe the effect.

It is interesting to think about where else such a robust quantization may arise in condensed matter physics. I suspect that there exist similar kinds of effects in different settings that have yet to be uncovered.

Aside: If you are skeptical about the multiply-connected nature of the quantum Hall effect, you can read about Laughlin’s gauge argument in his Nobel lecture here. His argument critically depends on a multiply-connected geometry.

Timing

Rather than being linear, the historical progression of topics in physics sometimes takes a tortuous route. There are two Annual Reviews of Condensed Matter Physics articles, one by P. Nozieres and one by M. Dresselhaus, that describe how widespread interest in certain subjects in the study of condensed matter was affected by timing.

In the article by Dresselhaus, she notes that H.P. Boehm and co-workers had actually isolated monolayer graphene back in 1962 (pdf!, and in German). On the theoretical front, P. Nozieres says in his article:

But neither I nor any of these famous people ever suspected what was hiding behind that linear dispersion. Fifty years later, graphene became a frontier of physics with far-reaching quantum effects.

Dresselhaus also mentions that carbon nanotubes were observed in 1952 in Russia, followed by another reported discovery in the 1970s by M. Endo. These reports occurred well before their rediscovery in 1991 by Iijima, which sparked a wealth of studies. The controversy over the discovery of nanotubes actually seems to date back even further, perhaps even to 1889 (pdf)!

In the field of topological insulators, again there seems to have been an oversight by the greater condensed matter physics community. As early as 1985, in the Soviet journal JETP, B.A. Volkov and O.A. Pankratov discussed the possibility of Dirac electrons at the interface between a normal band-gap semiconductor and an “inverted” band-gap semiconductor (pdf). Startlingly, the authors suggest CdHgTe and PbSnSe as materials in which to investigate the possibility. A HgTe/(Hg,Cd)Te quantum well hosted the first definitive observation of the quantum spin Hall effect, while the Pb_{1-x}Sn_xSe system was later found to be a topological crystalline insulator.

One could probably find many more examples of historical inattention if one were to do a thorough study. One also wonders what other kinds of gems are hidden within the vastness of the scientific literature. P. Nozieres notes that perhaps the timing of these initial discoveries has something to do with why they went relatively unnoticed:

When a problem is not ripe you simply do not see it.

I don’t know how one quantifies “ripeness”, but he seems to be suggesting that the perceived importance of a scientific work is correlated in some way with the scientific zeitgeist. In this vein, it is amusing to think about what would have happened had someone discovered, say, topological insulators in Newton’s time. In all likelihood, no one would have paid the slightest attention.

Paradigm Shifts and “The Scourge of Bibliometrics”

Yesterday, I attended an insightful talk by A.J. Leggett at the APS March Meeting entitled Reflection on the Past, Present and Future of Condensed Matter Physics. The talk was interesting in two regards. Firstly, he referred to specific points in the history of condensed matter physics that resulted in (Kuhn-type) paradigm shifts in our thinking about condensed matter. Of course, these paradigm shifts were not as violent as those associated with special relativity or quantum mechanics, so he deemed them “velvet” paradigm shifts.

This list, which he acknowledged was personal, consisted of:

  1. Landau’s theory of the Fermi liquid
  2. BCS theory
  3. Renormalization group
  4. Fractional quantum Hall effect

Notable absentees from this list were superfluidity in 3He, the integer quantum Hall effect, the discovery of cuprate superconductivity and topological insulators. He argued that these latter advances did not result in major conceptual upheavals.

He went on to elaborate on the reasons for these velvet revolutions, which I enumerate in correspondence with the list above:

  1. Abandonment of microscopic theory, in particular with the use of Landau parameters; trying to relate experimental properties to one another with the input of experiment
  2. Use of an effective low-energy Hamiltonian to describe phase of matter
  3. Concept of universality and scaling
  4. Discovery of quasiparticles with fractional charge

It is informative to think about condensed matter physics in this way, as it demonstrates the conceptual advances that we almost take for granted in today’s work.

The second aspect of his talk that resonated strongly with the audience was what he dubbed “the scourge of bibliometrics”. He told the tale of his own formative years as a physicist. He published one single-page paper for his PhD work. Furthermore, once appointed as a lecturer at the University of Sussex, his job was to be a lecturer and teach from Monday through Friday. If he did this job well, it was considered a job well done. If research was something he wanted to partake in as a side project, he was encouraged to do so. He discussed how this atmosphere allowed him to develop as a physicist, without the requirement of publishing papers for career advancement.

Furthermore, he claimed, because of the current focus on metrics, burgeoning young scientists are now encouraged to seek out problems that they can solve in a time frame of two to three years. He saw this as a terrible trend. While it is often necessary to complete short-term projects, it is also important to think about problems that one may be able to solve in, say, twenty years, or maybe even never. He claimed that this is what is meant by doing real science — jumping into the unknown. In fact, he asserted that if he were to give any advice to graduate students, postdocs and young faculty in the audience, it would be to try to spend about 20% of one’s time committed to some of these long-term problems.

This raises a number of questions in my mind. It is well acknowledged within the community, and even in the blogosphere, that the focus on publication counts and short-termism within the condensed matter physics community is detrimental. Both Ross McKenzie and Doug Natelson have expressed such sentiments numerous times on their blogs. Judging from conversations with almost every physicist I know, this is the consensus opinion. The natural question to ask, then, is: if this is the consensus opinion, why is the modern climate the way it is?

It seems to me that part of this comes from the competition for funding among different research groups and from funding agencies needing a way to discriminate between them. This leads to the widespread use of metrics, such as h-indices and publication counts, to decide whether or not to allocate funding to a particular group. This doesn’t seem to be the only reason, however. Increasingly, young scientists are judged for hire by their publication output and the journals in which they publish.

Luckily, the situation is not all bad. Because so many people openly discuss this issue, I have noticed that there is a certain amount of push-back from individual scientists. On my recent postdoc interviews, the principal investigators were more interested in what I was going to bring to the table than in perusing my publication list. I appreciated this immensely, as I had spent a large part of my graduate years pursuing instrumentation development. Nonetheless, I still felt a great deal of pressure to publish papers towards the end of graduate school, and it is this feeling of pressure that needs to be alleviated.

Strangely, I often find myself working in spite of the forces that be, rather than being encouraged by them. I highly doubt that I am the only one with this feeling.

What Happens in 2D Stays in 2D.

There was a recent paper published in Nature Nanotechnology demonstrating that single-layer NbSe_2 exhibits a charge density wave (CDW) transition at 145K and superconductivity at 2K. Bulk NbSe_2 has a CDW transition at ~34K and a superconducting transition at ~7.5K. The authors speculate (plausibly) that the enhanced CDW transition temperature occurs because of an increase in electron-phonon coupling due to the reduction in screening. An important detail is that the authors used a sapphire substrate for the experiments.

This paper is part of a general trend of papers that examine the physics of solids in the 2D limit, either in single-layer form or at the interface between two solids. This frontier was opened up by the discovery of graphene and by the discoveries of superconductivity and ferromagnetism in the 2D electron gas at the LAO/STO interface. The nature of these transitions at the LAO/STO interface is a prominent area of research in condensed matter physics. Part of the reason for this interest stems from the Mermin-Wagner theorem, which has been ingrained in researchers’ thinking. I have written before about the limitations of such theorems.

Nevertheless, it has now been found that the transition temperatures of materials can be significantly enhanced in single-layer form. Besides the NbSe_2 case, the CDW transition temperature of TiSe_2 was also found to be enhanced by about 40K in monolayer form. Probably most spectacularly, it was reported that single-layer FeSe on an STO substrate exhibits superconductivity at temperatures higher than 100K (bulk FeSe only exhibits superconductivity at 8K). It should be mentioned that in bulk form the aforementioned materials are all quasi-2D and layered.

The phase transitions in these compounds obviously raise some fundamental questions about the nature of solids in 2D. One would naively expect the transition temperatures to be suppressed in reduced dimensions due to enhanced fluctuations. This is clearly not what is observed experimentally, so a boost from another parameter, such as the enhanced electron-phonon coupling in the NbSe_2 case, must be taken into account.

I find this trend towards studying 2D compounds a particularly interesting avenue in the current condensed matter physics climate for a few reasons:

  1. Whether or not these phase transitions make sense within the Kosterlitz-Thouless paradigm (which works well to explain transitions in 2D superfluid and superconducting films) still needs to be investigated.
  2. The need for adequate probes to study interfacial and monolayer compounds will necessarily lead to new experimental techniques.
  3. Qualitatively different phenomena can occur in the 2D limit that do not necessarily occur in their 3D counterparts (the quantum Hall effect being a prime example).

Sometimes trends in condensed matter physics can lead to intellectual atrophy — I think that this one may lead to some fundamental and major discoveries in the years to come on the theoretical, experimental and perhaps even on the technological fronts.

Update: The day after I wrote this post, I also came upon an article presenting evidence for a ferroelectric phase transition in thin strontium titanate (STO), a material known to exhibit no ferroelectric phase transition at all in bulk form.