Category Archives: Review

Coupled and Synchronized Metronomes

A couple of years ago, I saw P. Littlewood give a colloquium on exciton-polariton condensation. To introduce the idea, he performed a little experiment, a variation of one first performed and published by Christiaan Huygens. Although he used only two metronomes, below is a video of the same experiment performed with 32 metronomes.

A very important ingredient in getting this to work is the suspended foam underneath the metronomes. In effect, the foam is a field that couples the oscillators.
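This kind of synchronization can be caricatured with the Kuramoto model, in which each oscillator is pulled toward the mean phase of the ensemble. The sketch below is purely illustrative (the coupling constant, tempo spread and time step are invented, and the all-to-all coupling K stands in for the foam):

```python
import numpy as np

def order_parameter(theta):
    """Kuramoto order parameter: 1 when all phases coincide."""
    return np.abs(np.exp(1j * theta).mean())

def step(theta, omega, K, dt):
    """One Euler step: each metronome is pulled toward the mean
    phase of the ensemble, with coupling K standing in for the foam."""
    z = np.exp(1j * theta).mean()
    r, psi = np.abs(z), np.angle(z)
    return theta + (omega + K * r * np.sin(psi - theta)) * dt

rng = np.random.default_rng(0)
n = 32                                  # 32 metronomes, as in the video
theta = rng.uniform(0, 2 * np.pi, n)    # random starting phases
omega = 2 * np.pi * (1 + 0.02 * rng.standard_normal(n))  # slightly mismatched tempos

print("before:", order_parameter(theta))
for _ in range(20000):
    theta = step(theta, omega, K=1.0, dt=0.01)
print("after:", order_parameter(theta))  # coupling pulls this toward 1
```

With the coupling switched off (K=0), the phases never lock; with it on, the order parameter climbs toward 1, which is the 32-metronome video in miniature.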

Acoustic Plasmon

In regard to my past posts on plasmons (see here and here, for instance), the plasmon in a metal always exists at a finite energy at q=0 due to the long-ranged nature of the Coulomb interaction. Back in 1956, however, D. Pines published a paper in which, in collaboration with P. Nozieres, he proposed a mechanism by which an acoustic plasmon could indeed exist.

The idea is actually quite simple from a conceptual standpoint, so a cartoonish description should suffice to explain how this is possible. The first important ingredient in realizing an acoustic plasmon is two types of charge carriers. Pines, in his original paper, chose s-electrons and d-electrons from two separate bands to illustrate his point. However, electrons from one band and holes from another could also suffice. The second important ingredient is that the masses of the two types of carriers must be very different (which is why Pines chose light s-electrons and heavy d-electrons).

 

Screening of heavy charge carrier by light charge carrier

 

So why are these two features necessary? Well, simply put, the light charge carriers can screen the heavy charge carriers, effectively reducing the range of the Coulomb interaction (see image above). Such a phenomenon is very familiar to all of us who study solids. If, for instance, the interaction between the ions on the lattice sites in a simple 3D monatomic solid were not screened by the electrons, the longitudinal acoustic phonon would necessarily be gapped because of the Coulomb interaction (forgetting, for the moment, about what the lack of charge neutrality would do to the solid!). In some sense, therefore, the longitudinal acoustic phonon is indeed such an acoustic plasmon. The ion acoustic wave in a classical plasma is similarly a manifestation of an acoustic plasmon.

This isn’t necessarily the kind of acoustic plasmon that has been so elusive to solid-state physicists, though. The original proposal and the subsequent searches concerned systems where light electrons (or holes) would screen heavy electrons (or holes). Indeed, it was suspected that Landau damping into the particle-hole continuum was preventing the acoustic plasmon from being an observable excitation in a solid. However, there have been a few papers suggesting that the acoustic plasmon has indeed been observed at solid surfaces. Here is one paper from 2007 claiming that an acoustic plasmon exists on the surface of beryllium, and here is another showing a similar phenomenon on the surface of gold.

To my knowledge, it is still an open question whether such a plasmon can exist in the bulk of a 3D solid. This has not stopped researchers from suggesting that electron-acoustic plasmon coupling could lead to the formation of Cooper pairs and superconductivity in the cuprates. Varma has suggested that a good place to look would be in mixed-valence compounds, where f-electron masses can get very heavy.

On the experimental side, the search continues…

A helpful picture: If one imagines light electrons and heavy holes in a compensated semimetal for instance, the in-phase motion of the electrons and holes would result in an acoustic plasmon while the out-of-phase motion would result in the gapped plasmon.
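This in-phase/out-of-phase picture can be made slightly more quantitative with a toy two-fluid dielectric function for light and heavy carriers (the numbers below are invented for illustration and are not parameters of any real material):

```python
import numpy as np

# Toy two-fluid dielectric function (parameters invented for illustration):
#   eps(q, w) = 1 - wp_l^2/(w^2 - v_l^2 q^2) - wp_h^2/(w^2 - v_h^2 q^2)
# for light (l) and heavy (h) carriers. Zeros of eps give the collective modes.
wp_l, wp_h = 1.0, 0.3   # plasma frequencies of the two species
v_l, v_h = 1.0, 0.05    # carrier velocities; the heavy carriers are slow

def modes(q):
    """Solve eps(q, w) = 0 exactly; it is a quadratic in w^2."""
    s = (v_l**2 + v_h**2) * q**2 + wp_l**2 + wp_h**2
    p = (v_l * v_h * q**2) ** 2 + (wp_l**2 * v_h**2 + wp_h**2 * v_l**2) * q**2
    disc = np.sqrt(s**2 - 4 * p)
    return np.sqrt((s - disc) / 2), np.sqrt((s + disc) / 2)

qs = np.array([1e-3, 1e-2, 1e-1])
low, high = modes(qs)
print(low / qs)  # roughly constant: the out-of-phase... in-phase mode is acoustic, w ~ c*q
print(high)      # stays near sqrt(wp_l^2 + wp_h^2): the usual gapped plasmon
```

The low branch has w/q approaching a constant as q goes to 0 (sound-like, the acoustic plasmon), while the high branch stays pinned near the total plasma frequency (the gapped plasmon), mirroring the in-phase and out-of-phase motions described above.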

Transistors, Logic and Abstraction

A general theme of science that manifests itself in many different ways is the concept of abstraction. What this means is that one can understand something at a higher level without having to understand a buried lower level. For instance, one can understand the theory of evolution based on natural selection (higher level) without having to first comprehend quantum mechanics (lower level), even though the higher level must be consistent with the lower one.

To my mind, this idea is most aptly demonstrated with transistors, circuits and logic. Let’s start at the level of transistors and build a NAND gate in the following way:


NAND Circuit

The NAND gate has the following truth table:

A B | X
0 0 | 1
0 1 | 1
1 0 | 1
1 1 | 0

If you can’t immediately see why the transistor circuit above yields the corresponding truth table, it helps to appeal to the “water analogy”, where one imagines the current as water flowing from Vcc. If A and B are both high, the “dams” (transistors) are open, the water flows to ground, and X is low. If either A or B is low, the corresponding dam is closed, the water is diverted to X, and X is high.

Why did I choose the NAND circuit instead of other logic gates? It turns out that all other logic gates can be built from the NAND alone, so it makes sense to choose it as a fundamental unit.
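As a quick sanity check, here is the NAND gate in Python (my own sketch, using 0 and 1 for low and high):

```python
def nand(a, b):
    """NAND gate: output is low only when both inputs are high."""
    return 0 if (a and b) else 1

# Reproduce the truth table above
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
```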

Let’s now abstract away the circuit and draw the NAND gate like so:


NAND Gate

Having abstracted away the transistor circuit, we can now play with this NAND gate and build other logic gates out of it. For instance, let’s think about how to build an OR gate. Well, an OR gate is just a NOT gate applied to the two inputs of a NAND gate. Therefore, we just need to build a NOT gate. One way to do this would be:


NOT from NAND

Notice that whenever A is high, X is low and vice versa. Let us now abstract this circuit away and draw the NOT gate as:

 


NOT Gate

And now the OR gate can be made in the following way:


OR from NOT and NAND

 

and abstracted away to look like:


OR Gate

Now, although building an OR gate from NAND gates is totally unnecessary, and it actually would just be easier to do this by working with the transistors directly, one can already start to see the power of abstracting away the underlying circuit. We can just work at higher levels, build the component we want and put the transistors back in at the end. Our understanding of what is going on is not compromised in any way and is in fact probably enhanced since we don’t have to think about the water analogy any more!
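The gate-level constructions above translate directly into code; a small sketch (same 0/1 convention as before):

```python
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    # NOT from NAND: tie the two inputs together
    return nand(a, a)

def or_(a, b):
    # OR from NOT + NAND: invert both inputs, then NAND them (De Morgan)
    return nand(not_(a), not_(b))

# Verify against the expected truth tables
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert or_(a, b) == (a | b)
print("NOT and OR truth tables verified")
```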

Let’s now work with an example that actually is much easier at the level of NANDs and NOTs, to really demonstrate the power of this technique. Let’s make what is called a multiplexer: a three-input, one-output circuit with the following truth table:

X A B | Y
0 0 0 | 0
0 0 1 | 1
0 1 0 | 0
0 1 1 | 1
1 0 0 | 0
1 0 1 | 0
1 1 0 | 1
1 1 1 | 1

Multiplexer Truth Table

Notice that in this truth table, the X serves as a selector. When X is 0, it selects B as the output (Y), whereas when X is 1, it selects A as the output. The multiplexer can be built in the following way:


Multiplexer from NOT and NANDs

and is usually abstracted in the following way:


Multiplexer Gate

At this level, it is no longer a simple task to come up with a transistor circuit that will operate as a multiplexer, but it is relatively straightforward at the level of NANDs and NOTs. Now, armed with the multiplexer, NAND, NOT and OR gates, we can build even more complex circuit components. In fact, doing this, one will eventually arrive at the hardware for a basic computer. Therefore, next time you’re looking at complex circuitry, know that the builders used abstraction to think about the meaning of the circuit and then put all the transistors back in later.
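For completeness, the multiplexer can also be checked in code. The wiring below is one standard NAND realization, Y = NAND(NAND(A, X), NAND(B, NOT X)); the circuit in the figure above may differ in its details:

```python
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def mux(x, a, b):
    """2-to-1 multiplexer: Y = A when X = 1, Y = B when X = 0."""
    return nand(nand(a, x), nand(b, not_(x)))

for a in (0, 1):
    for b in (0, 1):
        assert mux(1, a, b) == a   # X = 1 selects A
        assert mux(0, a, b) == b   # X = 0 selects B
print("multiplexer truth table verified")
```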

I’ll stop building circuits here; I think the idea I’m trying to communicate is becoming clear. We can work at a certain level, abstract it away, and then work at a higher level. This is an important concept in every field of science, even particle physics. In condensed matter physics, we use it every day to think about what happens in materials, abstracting away complex electron-electron interactions into a quasiparticle using Fermi liquid theory, or abstracting away the interactions between the underlying electrons in a superconductor to understand vortex lattices (pdf!).

Correlated Electrons GRC

I attended the Gordon Research Conference on correlated electron systems this past week, and it was my first attendance at one of the GRCs. I was very impressed with it and hope to return to more of these in the future.

Some observations and notes from the meeting:

1) The GRC is a closed meeting in the sense that no pictures of slides or posters are allowed to be taken. This policy is meant to create the ‘Vegas mentality’, i.e. ‘whatever happens at a GRC stays at a GRC’. I see the value of this framework in that it results in a freer and more open exchange of ideas than I’ve seen at other conferences. I will therefore refrain from discussing the more technical topics presented at the meeting and concentrate on some more sociological observations.

The Vegas mentality at this meeting makes discussions feel even more transient than usual. There is a sense in which this is excellent, in that attendees are permitted to communicate ideas that they don’t understand fully or are speculative without too much judgment from their peers. Feedback and discussions can often resolve these issues or result in suggestions on how to make certain moon-shot ideas tangible.

2) There was a healthy interaction between students, postdocs and professors at the conference, something that is usually screened out at the day-to-day level at many universities. These interactions are useful, especially for the younger parties, whose usual interactions with faculty don’t extend far beyond their advisers. The conference accomplishes this by inviting a high proportion of early-career scientists, so that the older ones find it difficult to form cliques.

3) From speaking to many people at the conference, it seems like a common theme is the notorious problem of the coupled oscillator (or two-body problem), where both husband and wife are searching for academic positions. One of the attendees humorously encouraged me to refer to it as a ‘two-body opportunity’. It seems like universities are trying harder to address these issues (for instance, by having offices dedicated to solving couples’ employment), but there were still a couple of rather horrific anecdotes told on this front. The workforce demographic is changing rapidly, and universities should really be leading the way on these pressing issues.

4) There was a gas leak from some construction work at Mt Holyoke College, the site of the meeting. This resulted in the evacuation of the dining hall during dinner. Many grabbed their plates and proceeded to eat outside, resulting in a rather unique and memorable culinary experience.

5) SciPost is now in full-swing and is accepting articles for submission. One of the attendees was instrumental in turning the idea of an open-access online journal for physicists into reality. I have written on this blog about SciPost previously, and I am hugely in favor of the effort.

Aside: I was heartened by Brian’s recent post on creating a more open and accepting culture among physicists and recommend reading his views on this issue.

The Mystery of URu2Si2 – Experimental Dump

Heavy fermion compounds are known to exhibit a wide range of ground states encompassing ferromagnetism, antiferromagnetism, superconductivity, insulating behavior and a host of others. A number of these compounds also exhibit more than one of these phases simultaneously.

There is one of these heavy fermion materials that stands out among the rest, however, and that is URu2Si2. The reason is that an unidentified phase transition occurs in this compound at ~17.5K. What I mean by “unidentified” is that the order parameter is unknown, the elementary excitations are not understood, and there is a consensus emerging that we currently may not have the experimental capability to identify this phase unambiguously. This has led researchers to refer to this phase in URu2Si2 as “hidden order”. Our inability to understand this phase has now persisted for three decades, and well over 600 papers have been written on this single material. For experimentalists and theorists who love a challenge, URu2Si2 presents a rather unique and interesting one.

Let me give a quick rundown of the experimental signatures of this phase. Firstly, to convince you that there actually is a thermodynamic phase transition that happens in URu2Si2, take a look at this plot of the specific heat as a function of temperature:

In the lower image, one can see two transitions: one into the hidden order phase at 17.5K and one into the superconducting phase at ~1.5K. There is a large entropy change at the transition into the hidden order phase, which makes it all the more remarkable that we don’t know what is going on! I should mention that the resistivity also shows an anomaly upon entering the hidden order phase along both the a- and c-axis (the unit cell is tetragonal).

Furthermore, the thermal expansion coefficient, \alpha = L^{-1}(\Delta L/\Delta T), has a peak for the in-plane coefficient and a smaller dip for the c-axis coefficient at the transition temperature. This implies that the volume of the unit cell gets larger through the transition, indicating that the hidden order phase exhibits a strong coupling to the lattice degrees of freedom.

For those familiar with the other uranium-based heavy fermion compounds, one of the most natural questions to ask is whether the hidden order phase is associated with the onset of some sort of magnetism. Indeed, x-ray resonance magnetic scattering and neutron scattering experiments were carried out in the late 80s and early 90s to investigate this possibility. The structure found corresponded to a ferromagnetic arrangement in the a-b plane with antiferromagnetic coupling along the out-of-plane c-axis. However, this was not the whole story. The magnetic moments were extremely weak (0.02\mu_B per uranium atom) and the magnetic Bragg peaks found were not resolution-limited (correlation length ~400 Angstroms). This means that the order was not of the true long-range variety!

Also, rather strangely, the integrated intensity of the magnetic Bragg peak was shown to be linear as a function of temperature, saturating at ~3K (shown below). All these results seemed to imply that the magnetism in the compound was of a rather unconventional kind.

The next logical question to ask was what the inelastic magnetic spectrum looked like. Below is an image exhibiting the dispersion of the magnetic modes. Two different modes can be identified: one at the magnetic Bragg peak wavevectors (e.g. (1, 0, 0)) and one at “incommensurate” positions (e.g. (1 \pm 0.4, 0, 0)). The “incommensurate” excitations exhibit a gap of approximately 4meV, while the gap at (1, 0, 0) is about 2meV. These excitations appear with the hidden order and are thought to be closely associated with it. They have been shown to have longitudinal character.

The penultimate thing I will mention is that if one examines the optical conductivity of URu2Si2, a gap of ~5meV in the charge spectrum is also manifest. This is shown below:

And lastly, if one pressurizes a sample up to 0.5 GPa, URu2Si2 becomes a full-blown large-moment antiferromagnet with a magnetic moment of approximately 0.4\mu_B per uranium atom. The transition temperature into the Neel state is about 18K.

So let me summarize the main observations concerning the hidden order phase:

  1. Weak short-range antiferromagnetism
  2. Strong coupling to the lattice
  3. Dispersive and gapped incommensurate and commensurate magnetic excitations
  4. Gapped charge excitations
  5. Lives near antiferromagnetism
  6. Can coexist with superconductivity

I should stress that I am no expert on heavy fermion compounds, which is why this is my first real post on them, so please feel free to point out any oversights I may have made!

More information can be found in these two excellent review articles:

http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.1301

http://www.tandfonline.com/doi/abs/10.1080/14786435.2014.916428

 

Neither Energy Gap Nor Meissner Effect Imply Superflow

I have read several times in lecture notes, textbooks and online forums that the persistent current in a superconductor of annular geometry is a result of either:

  1. The opening of a superconducting gap at the Fermi surface
  2. The Meissner Effect

This is not correct, actually.

The energy gap at the Fermi surface is neither a sufficient nor a necessary condition for the existence of persistent supercurrents in a superconducting ring. It is not sufficient because gaps can occur for all sorts of reasons: semiconductors, Mott insulators and charge density wave systems all exhibit energy gaps separating the occupied states from the unoccupied states, yet none of these systems exhibits superconductivity.

Superconductivity does not require the existence of a gap either. It is possible to come up with models that exhibit superconductivity yet have no gap in their single-particle spectra (see de Gennes Chapter 8 or Rickayzen Chapter 8). Moreover, the cuprate and heavy fermion superconductors possess nodes in their single-particle spectra and still exhibit persistent currents.

Secondly, the Meissner effect is often conflated with superflow in a superconductor, but it is an equilibrium phenomenon, whereas persistent currents are a non-equilibrium phenomenon. Therefore, any conceptual attempt to draw conclusions about persistent currents in a superconducting ring from the Meissner effect is fraught with this inherent obstacle.

So, obviously, I must address the lurking $64k question: why does the current in a superconducting ring not decay within an observable time-frame?

Getting this answer right is much more difficult than pointing out the flaws in the other arguments! The answer has to do with a certain “topological protection” of the current-carrying state in a superconductor. However one chooses to understand the superconducting state (i.e. through broken gauge symmetry, the existence of a macroscopic wavefunction, off-diagonal long-range order, etc.), it is the existence of a particular type of condensate and the ability to adequately define the superfluid velocity that enables superflow:

\textbf{v}_s = \frac{\hbar}{2m} \nabla \phi

where \phi is the phase of the order parameter and the superfluid velocity obeys:

\oint \textbf{v}_s \cdot d\textbf{l} = \frac{nh}{2m}
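The quantization follows from the single-valuedness of the order parameter: going once around the ring, the phase \phi can only change by an integer multiple of 2\pi, so

\oint \nabla \phi \cdot d\textbf{l} = 2\pi n, \qquad n \in \mathbb{Z}

and therefore

\oint \textbf{v}_s \cdot d\textbf{l} = \frac{\hbar}{2m}(2\pi n) = \frac{nh}{2m}

A current-carrying state with n \neq 0 cannot decay continuously into the n = 0 ground state: the integer n would have to jump, which requires destroying the condensate somewhere along the ring. This is the sense in which the supercurrent is topologically protected.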

The details behind these ideas are further discussed in this set of lecture notes, though I have to admit that these notes are quite dense. I still have some pretty major difficulties understanding some of the main ideas in them.

I welcome comments concerning these concepts, especially ones challenging the ideas put forth here.

Phase Difference after Resonance

If you take the keys out of your pocket and swing them very slowly back and forth from your key chain, emulating a driven pendulum, you’ll notice that the keys swing back and forth in phase with your hand. Now, if you slowly start to speed up the swinging, you’ll notice that eventually you’ll hit a resonance frequency, where the keys will swing back and forth with a much greater amplitude.

If you keep slowly increasing the frequency of your swing beyond the resonance frequency, you’ll see that the keys don’t swing up as high. Also, you will notice that the keys now seem to be swaying out of phase with your hand (i.e. your hand is going in one direction while the keys are moving in the opposite direction!). This change of phase by 180 degrees between the driving force and the position of the oscillator is a ubiquitous feature of damped harmonic motion at frequencies higher than the resonance frequency. Why does this happen?

To understand this phenomenon, it helps to write down the equation for damped, driven harmonic motion. This could be describing a mass on a spring, a pendulum, a resistor-inductor-capacitor circuit, or something more exotic. Anyway, the relevant equation looks like this:

\underbrace{\ddot{x}}_{inertial} + \overbrace{b\dot{x}}^{damping} + \underbrace{\omega_0^2 x}_{restoring} = \overbrace{F(t)}^{driving}

Let’s describe in words what each of the terms means. The first term describes the resistance to change or inertia of the system. The second term represents the damping of the system, which is usually quite small. The third term gives us the pullback or restoring force, while the last term on the right-hand side represents the external driving force.

With this nomenclature in place, let’s move on to what actually causes the phase change. First, we have to turn this differential equation into an algebraic equation by doing a Fourier transform (or similarly assuming a sinusoidal dependence of everything). This leaves us with the following equation:

(-\omega^2 + i\omega b + \omega_0^2 )x_0e^{i\omega t} = F_0e^{i\omega t}

Now we can more easily visualize what is going on if we concentrate on the left-hand side of the equation. Note that this equation can also suggestively be written as:

(e^{i\pi}\omega^2 + e^{i\pi/2}\omega b + \omega_0^2 )x_0e^{i\omega t} = F_0e^{i\omega t}

For small driving frequencies, b << \omega << \omega_0, we see that the restoring term is the largest. The phase difference can then be represented graphically on an Argand diagram, where we can draw the following picture:


Restoring term dominates for low frequency oscillations

Therefore, the restoring force dominates the other two terms and the phase difference between the external force and the position of the oscillator is small (approximately zero).

At resonance, however, the driving frequency is the same as the natural frequency. This causes the restoring and inertial terms to cancel each other out perfectly, resulting in an Argand diagram like this:

 


Equal contribution from the restoring and inertial terms

After adding the vectors, this results in the arrow pointing upward, which is equivalent to saying that there is a 90 degree phase difference between the driving force and position of the oscillator.

You can probably see where this is going now, but let’s keep going for the sake of completeness. For the case where the driving frequency exceeds the natural (resonant) frequency, b << \omega_0 << \omega, the inertial term starts to dominate, resulting in a phase shift of 180 degrees. This again can be represented with an Argand diagram, as seen below:


Inertial term dominates for high frequency oscillations

This expresses the fact that the oscillator can no longer “keep up” with the driving force and therefore begins to lag behind it. If the mass in a mass-spring system were reduced, the oscillator would be able to keep up with the driver to a higher frequency. In summary, the phase difference can be plotted against the driving frequency to yield:

Phase difference vs. driving frequency
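The full curve can be computed directly from the algebraic equation above: the complex response is x_0/F_0 = 1/(\omega_0^2 - \omega^2 + ib\omega), and the phase lag is the angle of its denominator. A short sketch (the values of \omega_0 and b are arbitrary):

```python
import numpy as np

# Damped, driven oscillator: x0/F0 = 1/(w0^2 - w^2 + i*b*w).
# The phase lag between drive and response is the angle of the denominator.
w0, b = 1.0, 0.05   # natural frequency and (small) damping, arbitrary units

def phase_lag(w):
    """Phase lag in radians; runs from 0 (w << w0) through pi/2 (w = w0) to pi."""
    response = 1.0 / (w0**2 - w**2 + 1j * b * w)
    return -np.angle(response)

print(np.degrees(phase_lag(0.1)))   # far below resonance: near 0 degrees
print(np.degrees(phase_lag(1.0)))   # at resonance: exactly 90 degrees
print(np.degrees(phase_lag(10.0)))  # far above resonance: near 180 degrees
```

Sweeping w over a fine grid and plotting phase_lag(w) reproduces the curve above, with the step becoming sharper as the damping b is reduced.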

This phase change can be observed in so many contexts that it would be nearly impossible to list them all. In condensed matter physics, for instance, when sweeping the incident frequency of light in a reflectivity experiment on a semiconductor, a phase difference arises between the photon and the phonon above the phonon frequency. The problem that actually brought me to this analysis was the ported speaker, where above the resonant frequency of the speaker cone, the air from the port and the pressure wave generated by the speaker are 180 degrees out of phase.