
Fractional quasiparticles and reality

As a condensed matter physicist, one of the central themes that one must become accustomed to is the idea of a quasiparticle. These quasiparticles are not particles as nature made them per se, but only exist inside matter. (Yes, nature made matter too, and therefore quasiparticles as well, but come on — you know what I mean!)

Probably the first formulation of a quasiparticle was in Einstein's theory of the specific heat of a solid at low temperature. He postulated that the sound vibrations in a solid, much like photons from a blackbody, obey the Planck distribution, implying some sort of particulate nature to sound. This introduction was quite indirect, and the first really explicit formulation of quasiparticles was presented by Landau in his theory of superfluid helium-4. There, he proposed that most physical observables could be described in terms of "phonons" and "rotons", quantized sound vibrations at low and high momenta respectively.

In solid state physics, one of the most common quasiparticles is the hole; in the study of magnetism, it is the magnon; in semiconductor physics, the exciton is ubiquitous; and there are many other examples as well. So let me ask a seemingly benign question: are these quasiparticles real (i.e. are they real particles)?

In my experience in the condensed matter community, I suspect that most would answer in the affirmative, and if not, at least claim that the particles observed in condensed matter are just as real as any particle observed in particle physics.

Part of the reason I bring this issue up is the concerns raised soon after the discovery of the fractional quantum Hall effect (FQHE). When Laughlin formulated his theory of the FQHE, it was thought that his quasiparticles of charge e/3 might have been a mere oddity of the mathematical description. Do these charge-e/3 current-carrying particles actually exist, or are they just a convenient mathematical device?

In two papers that appeared almost concurrently, linked here and here, it was shown using quantum shot noise experiments that these e/3 particles do indeed exist. Briefly, quantum shot noise arises because of the discrete nature of charge carriers and enables one to measure the charge of a current-carrying particle to a pretty good degree of accuracy. When the results are compared to models of particles carrying charge e versus charge e/3, the data show no contest. Here is a plot below showing this result quite emphatically:

[Figure: measured quantum shot noise versus current, compared against the predictions for carriers of charge e and charge e/3]
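For a rough sense of why shot noise can read off the charge: in the Poissonian (weak-backscattering) limit, the low-frequency current noise is proportional to both the average current and the charge of the discrete carriers (the classic Schottky relation),

S_I = 2e^*\langle I \rangle

so the slope of noise versus current directly distinguishes e^* = e from e^* = e/3, which is exactly the comparison being made in the plot.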

One may then pose the question: is there a true distinction between what really "exists out there" and a theory that merely describes and predicts nature conveniently? Is the physicist's job complete once the equations have been written down (i.e. should he/she not care about questions like "are these fractional charges real?")?

These are tough questions to answer, and are largely personal, but I lean towards answering ‘yes’ to the former and ‘no’ to the latter. I would contend that the quantum shot noise experiments outlined above wouldn’t have even been conducted if the questions posed above were not serious considerations. While asking if something is real may not always be answerable, when it is, it usually results in a deepened understanding.

This discussion reminds me of an (8-year-old!) YouTube video of David who, following oral surgery to remove a tooth, still feels the effects of anesthesia and famously asks, "Is this real life?"

Consistency in the Hierarchy

When writing on this blog, I try to share nuggets here and there of phenomena, experiments, sociological observations and other peoples’ opinions I find illuminating. Unfortunately, this format can leave readers wanting when it comes to some sort of coherent message. Precisely because of this, I would like to revisit a few blog posts I’ve written in the past and highlight the common vein running through them.

Condensed matter physicists of the last couple of generations have grown up steeped in the idea that "More is Different", a concept first coherently put forth by P. W. Anderson and carried further by others. Most discussions of these ideas tend to concentrate on the notion that there is a hierarchy of disciplines, where each discipline does not need to be logically derived from the one beneath it. For instance, in solid state physics, we do not need to start out at the level of quarks and build up from there to obtain many properties of matter. More profoundly, one can observe phenomena that distinctly arise in the context of condensed matter physics, such as superconductivity, the quantum Hall effect and ferromagnetism, which one wouldn't necessarily predict by just studying particle physics.

While I have no objection to these claims (and actually agree with them quite strongly), it seems to me that one rather simple (almost trivial) point is infrequently mentioned when these concepts are discussed: the role of consistency.

While it is true that one does not necessarily require the lower-level theory to describe the theories at the higher level, these theories do need to be consistent with each other. This is why, after the publication of BCS theory, there was a slew of theoretical papers that tried to come to terms with various aspects of the theory (such as the approximation of particle number non-conservation and features associated with gauge invariance (pdf!)).

This requirement of consistency is what makes concepts like the Bohr-van Leeuwen theorem and the Gibbs paradox so important. They bridge two levels of the "More is Different" hierarchy, exposing inconsistencies between the higher-level theory (classical mechanics) and the lower level (the micro realm).

In the case of the Bohr-van Leeuwen theorem, classical mechanics, when applied at the microscopic scale, turns out to be inconsistent with the observation of ferromagnetism. In the Gibbs paradox case, classical mechanics, when particle indistinguishability (a quantum mechanical concept) is not taken into consideration, is inconsistent with the requirement that the entropy remain the same when a gas tank is divided into two equal partitions.

Today, we have the issue that ideas from the micro realm (quantum mechanics) appear to be inconsistent with our ideas on the macroscopic scale. This is why matter interference experiments are still carried out today. It is imperative to know why it is possible for a C60 molecule (or even a 10,000 amu molecule) to be described by a single wavefunction in a Schrodinger-like scheme, whereas this seems implausible for, say, a cat. There does again appear to be some inconsistency here, though there are frameworks, such as decoherence, that attempt to get around it (with no consensus as yet). I also can't help but mention that non-locality, à la Bell, seems totally at odds with one's intuition on the macro-scale.

What I want to stress is that these inconsistency theorems (or paradoxes) contained the seeds of some of the most important theoretical advances in physics. This is itself not a radical concept, but it often gets neglected when a generation grows up with a deep-rooted "More is Different" scientific outlook. We sometimes forget to look for concepts that bridge disparate levels of the hierarchy and subsequently to look for inconsistencies between them.

Kapitza-Dirac Effect

We are all familiar with the fact that light can diffract from two (or multiple) slits in a Young-type experiment. After the advent of quantum mechanics and de Broglie's wave description of matter, it was demonstrated by Davisson and Germer in 1927 that electrons could be diffracted by a crystal. In 1933, P. Kapitza and P. A. M. Dirac proposed that it should in principle be possible for electrons to be diffracted by standing waves of light, in effect using light as a diffraction grating.

In this scheme, the electrons would interact with light through the ponderomotive potential. If you’re not familiar with the ponderomotive potential, you wouldn’t be the only one — this is something I was totally ignorant of until reading about the Kapitza-Dirac effect. In 1995, Anton Zeilinger and co-workers were able to demonstrate the Kapitza-Dirac effect with atoms, obtaining a beautiful diffraction pattern in the process which you can take a look at in this paper. It probably took so long for this effect to be observed because it required the use of high-powered lasers.
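For reference, the standard textbook expression for the ponderomotive potential (a general result, not specific to either experiment) of a charge e in an oscillating electric field of amplitude E_0(\textbf{r}) and frequency \omega is

U_p(\textbf{r}) = \frac{e^2 E_0^2(\textbf{r})}{4m\omega^2}

Since U_p is proportional to the local intensity, a standing light wave of wavelength \lambda presents a periodic potential of period \lambda/2, which is precisely the "diffraction grating" Kapitza and Dirac had in mind; the linear dependence on intensity also makes clear why high-powered lasers were needed to make the effect observable.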

Later, in 2001, the experiment was pushed a little further: an electron beam (rather than atoms) was used to demonstrate the effect, as Kapitza and Dirac originally proposed. Indeed, a diffraction pattern was again observed. The article is linked here and I reproduce the main result below:

[Figure: (top) the interference pattern of the electron beam observed in the presence of a standing light wave; (bottom) the profile of the electron beam in the absence of the light wave]

Even though this experiment is conceptually quite simple, these basic quantum phenomena still manage to elicit awe (at least from me!).

Bohr-van Leeuwen Theorem and Micro/Macro Disconnect

A couple weeks ago, I wrote a post about the Gibbs paradox and how it represents a case where, if particle indistinguishability is not taken into account, one is led to some bizarre consequences on the macroscopic scale. In particular, it suggests that the entropy of a monatomic gas should change merely upon partitioning it into two volumes. This paradox therefore contained within it the seeds of quantum mechanics (through particle indistinguishability), unbeknownst to Gibbs and his contemporaries.
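To see the problem concretely, here is the standard textbook calculation (sketched in terms of the thermal de Broglie wavelength \lambda). Without the 1/N! correction to the phase-space counting, the entropy of a classical ideal gas is

S = Nk_B\left[\ln\left(\frac{V}{\lambda^3}\right) + \frac{3}{2}\right]

which is not extensive: two isolated halves, each with N/2 particles in volume V/2, have a combined entropy of S - Nk_B\ln 2, so merely inserting or removing a partition appears to change the entropy by Nk_B\ln 2. Dividing the partition function by N! (which particle indistinguishability justifies) replaces V by V/N inside the logarithm, restores extensivity, and the discrepancy disappears.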

Another historic case where a logical disconnect arose between the micro- and macroscale was the Bohr-van Leeuwen theorem. Colloquially, the theorem says that magnetism of any form (ferro-, dia-, paramagnetism, etc.) cannot exist in equilibrium within the realm of classical mechanics. It is actually quite easy to prove, so I'll quickly sketch the main idea. First, the Hamiltonian of a collection of charged particles in an arbitrary electromagnetic field can be written in the form:

H = \sum_i \left[\frac{1}{2m_i}\left(\textbf{p}_i - e\textbf{A}(\textbf{r}_i)\right)^2 + U(\textbf{r}_i)\right]

Now, because the classical partition function is of the form:

Z \propto \int_{-\infty}^\infty d^3\textbf{r}_1...d^3\textbf{r}_N\int_{-\infty}^\infty d^3\textbf{p}_1...d^3\textbf{p}_N \, e^{-\beta\sum_i \left[\frac{1}{2m_i}(\textbf{p}_i - e\textbf{A}(\textbf{r}_i))^2 + U(\textbf{r}_i)\right]}

we can just make the substitution:

\textbf{p}'_i = \textbf{p}_i - e\textbf{A}(\textbf{r}_i)

without having to change the limits of the integral (at fixed \textbf{r}_i the shift is just a constant offset with unit Jacobian, and the momentum integrals run over all of momentum space). Therefore, with this substitution, the partition function ends up looking like one without any vector potential present; i.e. the partition function is independent of the vector potential and therefore cannot exhibit any magnetism!
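To make the punchline explicit: the equilibrium magnetization is obtained by differentiating the free energy with respect to the applied field, and since Z does not depend on \textbf{A} (and hence not on \textbf{B}),

M = -\frac{\partial F}{\partial B} = \frac{1}{\beta}\frac{\partial \ln Z}{\partial B} = 0

identically, at any temperature.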

This theorem suggests, as in the Gibbs paradox case, that there is a logical inconsistency when one tries to apply macroscale physics (classical mechanics) to the microscale and build up from there (by applying statistical mechanics). The impressive thing about this kind of reasoning is that it requires little experimental input but nonetheless has far-reaching consequences for a prevailing paradigm (in this case, classical mechanics).

Since the quantum mechanical revolution, it seems like we have the opposite problem, however. Quantum mechanics resolves both the Gibbs paradox and the Bohr-van Leeuwen theorem, but presents us with issues when we try to apply the microscale ideas to the macroscale!

What I mean is that while quantum mechanics is the rule of law on the microscale, we arrive at problems like that of Schrodinger's cat when we try to apply such reasoning to the macroscale. Furthermore, the nonlocal correlations underlying Bell's theorem seem to wash out when we look at the world on the macroscale. One wonders whether such ideas, similar to the Gibbs paradox and the Bohr-van Leeuwen theorem, are subtle precursors suggesting where the limits of quantum mechanics may actually lie.

On Science and Meaning

The history of science has provided strong evidence to suggest that humanity's place in the universe is not very special. We have not existed for very long in the history of the universe, we are not at its center, and we likely will not exist for most of its future. This kind of sentiment can seem depressing to some, as can be seen in the responses to a video on this theme made by Neil deGrasse Tyson and MinutePhysics.

It appears that such ideas can make human life and our actions here on earth (and beyond) seem rather meaningless. As I have referenced in a previous post, this can especially be true for graduate students! However, on a more serious note, I would contend the exact opposite.

Because life on earth is so fragile and transient and only exists in some far-flung corner of the universe, the best thing we can hope to do as humans is celebrate our existence through exploration, beauty, creation, truth and acts that enrich the lives of others and our environment.

When working in the lab or on a calculation that requires attention to small details, this larger context is often forgotten. To my mind, it is important not to lose sight of the basic reason why we are there in the first place, which is all too easy to do. The universe can seem meaningless, but not so when she lets us peer into her depths, usually revealing order of spectacular beauty.

I apologize if this post comes off as a little preachy or pretentious; I suspect I am really the one who needed this pep talk.

Transistors, Logic and Abstraction

A general theme of science that manifests itself in many different ways is the concept of abstraction. What this means is that one can understand something at a higher level without having to understand the buried lower levels. For instance, one can understand the theory of evolution by natural selection (higher level) without first having to comprehend quantum mechanics (lower level), even though the higher level must be consistent with the lower one.

To my mind, this idea is most aptly demonstrated with transistors, circuits and logic. Let’s start at the level of transistors and build a NAND gate in the following way:

[Figure: NAND gate built out of transistors]

The NAND gate has the following truth table:

A B | X
0 0 | 1
0 1 | 1
1 0 | 1
1 1 | 0

If you can’t immediately see why the transistor circuit above yields the corresponding truth table, it helps to appeal to the “water analogy”, where one imagines the current flow as water. Imagine that water flows in from Vcc. If A and B are both high, both “dams” (the transistors) are open, the current flows through to ground, and X is low. If either A or B is low (its dam closed), the water is diverted to X, and X is high.

Why did I choose the NAND circuit instead of other logic gates? It turns out that all other logic gates can be built from the NAND alone, so it makes sense to choose it as a fundamental unit.

Let’s now abstract away the circuit and draw the NAND gate like so:

[Figure: NAND gate symbol]

Having abstracted away the transistor circuit, we can now play with this NAND gate and build other logic gates out of it. For instance, let’s think about how to build an OR gate. Well, an OR gate is just a NAND gate with a NOT applied to each of its two inputs. Therefore, we just need to build a NOT gate. One way to do this would be:

[Figure: NOT gate built from a NAND gate]

Notice that whenever A is high, X is low and vice versa. Let us now abstract this circuit away and draw the NOT gate as:

 

[Figure: NOT gate symbol]

And now the OR gate can be made in the following way:

[Figure: OR gate built from NOT and NAND gates]

 

and abstracted away to look like:

[Figure: OR gate symbol]

Now, although building an OR gate from NAND gates is totally unnecessary, and it actually would just be easier to do this by working with the transistors directly, one can already start to see the power of abstracting away the underlying circuit. We can just work at higher levels, build the component we want and put the transistors back in at the end. Our understanding of what is going on is not compromised in any way and is in fact probably enhanced since we don’t have to think about the water analogy any more!
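To make the layering concrete, here is a minimal sketch in Python (my own illustration, with made-up function names, not anything standard) in which each gate is defined purely in terms of the level below it:

    # Level 0: the NAND primitive (standing in for the transistor circuit).
    def nand(a, b):
        return 0 if (a and b) else 1

    # Level 1: a NOT gate is a NAND with its two inputs tied together.
    def not_gate(a):
        return nand(a, a)

    # Level 2: an OR gate is a NAND with a NOT on each input (De Morgan's law).
    def or_gate(a, b):
        return nand(not_gate(a), not_gate(b))

    # Sanity check: reproduce the OR truth table.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", or_gate(a, b))

Note that nothing above the first definition knows or cares how nand is implemented; that is the whole point of the abstraction.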

Let’s now work through an example that actually is much easier at the level of NANDs and NOTs, to really demonstrate the power of this technique: a multiplexer. A multiplexer is a three-input, one-output circuit with the following truth table:

X A B | Y
0 0 0 | 0
0 0 1 | 1
0 1 0 | 0
0 1 1 | 1
1 0 0 | 0
1 0 1 | 0
1 1 0 | 1
1 1 1 | 1

Multiplexer Truth Table

Notice that in this truth table, the X serves as a selector. When X is 0, it selects B as the output (Y), whereas when X is 1, it selects A as the output. The multiplexer can be built in the following way:

[Figure: multiplexer built from NOT and NAND gates]

and is usually abstracted in the following way:

[Figure: multiplexer symbol]

At this level, it is no longer a simple task to come up with a transistor circuit that will operate as a multiplexer, but it is relatively straightforward at the level of NANDs and NOTs. Now, armed with the multiplexer, NAND, NOT and OR gates, we can build even more complex circuit components. In fact, doing this, one will eventually arrive at the hardware for a basic computer. Therefore, next time you’re looking at complex circuitry, know that the builders used abstraction to think about the meaning of the circuit and then put all the transistors back in later.
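Continuing the Python sketch from above (same caveats), the multiplexer is a one-liner at the NAND/NOT level, and we can check it against the selection rule in the truth table:

    # Multiplexer: Y = (A AND X) OR (B AND NOT X),
    # built as NAND(NAND(A, X), NAND(B, NOT X)) from the gates defined earlier.
    def mux(x, a, b):
        return nand(nand(a, x), nand(b, not_gate(x)))

    # Verify: when X = 1 the output follows A; when X = 0 it follows B.
    for x in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                assert mux(x, a, b) == (a if x else b)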

I’ll stop building circuits here; I think the idea I’m trying to communicate is becoming increasingly clear. We can work at a certain level, abstract it away and then work at a higher level. This is an important concept in every field of science; abstraction occurs in every realm, even particle physics. In condensed matter physics, we use this concept every day to think about what happens in materials, abstracting away complex electron-electron interactions into a quasiparticle using Fermi liquid theory, or abstracting away the interactions between the underlying electrons in a superconductor to understand vortex lattices (pdf!).

Schrodinger’s Cat and Macroscopic Quantum Mechanics

A persisting question that we inherited from the forefathers of the quantum formalism is why quantum mechanics, which works emphatically well on the micro-scale, seems at odds with our intuition on the macro-scale. The micro/macro logical disconnect was famously captured by Schrodinger, who, intending to demonstrate the absurdity of applying quantum mechanics on the macro-scale, described a cat in a superposition of both alive and dead states. There have been many attempts in the theoretical literature to come to grips with this apparent contradiction, the most popular of which go under the umbrella of decoherence, where interaction with the environment results in a loss of information.

Back in 1999, Arndt, Zeilinger and co-workers observed interference of C60 molecules (i.e. buckyballs) diffracting from a grating, which at the time made C60 the largest molecule to exhibit such interference phenomena. The grating used in the experiment had a period of about 100 nm, while the approximate de Broglie wavelength of the C60 molecules was 2.5 picometers (a back-of-the-envelope check of this number follows the list below). This was a startling discovery for a couple of reasons:

  1. The beam of C60 molecules used here was far from being perfectly monochromatic. In fact, there was a pretty significant spread of initial velocities, with the full width at half maximum (\Delta v/v) getting to be as broad as 60%.
  2. The C60 molecules were not in their ground state. The initial beam was prepared by sublimating the molecules in an oven which was heated to 900-1000K. It is estimated, therefore, that there were likely 3 to 4 photons exchanged with the background blackbody field during the beam’s passage through the instrument. Hence the C60 molecules can be said to have been strongly interacting with the environment.
  3. The molecule consists of approximately 360 protons, 360 neutrons and 360 electrons (about 720 amu), which means that treating the C60 molecule as a purely classical object would be perfectly adequate for most purposes.
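As a quick sanity check on these numbers (assuming a beam velocity of around 220 m/s, which is the value the quoted 2.5 pm wavelength implies for a 720 amu molecule), the de Broglie wavelength works out to

\lambda = \frac{h}{mv} \approx \frac{6.6\times 10^{-34}\,\textrm{J s}}{(720)(1.66\times 10^{-27}\,\textrm{kg})(220\,\textrm{m/s})} \approx 2.5\,\textrm{pm}

which is several hundred times smaller than the roughly 0.7 nm diameter of the molecule itself.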

The record set by the C60 molecule has since been smashed by larger molecules with masses of up to 10,000 amu. This is now within one order of magnitude of the mass of a small virus. If I were a betting man, I wouldn't put money against viruses exhibiting interference effects as well.

This of course raises the question of how far these experiments can go and to what extent they can be pushed toward the human scale. Unfortunately, we will probably have to wait a while for a definitive answer to that question. However, these experiments are a tour de force and make us face some of our deepest discomforts concerning the quantum formalism.