Monthly Archives: September 2015

Does Frivolity Border on the Edge of Creativity?

Sometimes, for the sake of letting one’s imagination run around a bit, it may be advisable to indulge in a seemingly frivolous endeavor. In the sciences, these undertakings can sometimes result in the winning of an Ig Nobel Prize.

This year’s winner of the Ig Nobel in physics studied the “universal” urination time of mammals. The main conclusion of this paper is that mammals that weigh more than 3kg urinate for 21 \pm 13 seconds per session. I will not comment on the rather large error bars.

I reprint the abstract to the paper below:

Many urological studies rely on models of animals, such as rats and pigs, but their relation to the human urinary system is poorly understood. Here, we elucidate the hydrodynamics of urination across five orders of magnitude in body mass. Using high-speed videography and flow-rate measurement obtained at Zoo Atlanta, we discover that all mammals above 3 kg in weight empty their bladders over nearly constant duration of 21 ± 13 s. This feat is possible, because larger animals have longer urethras and thus, higher gravitational force and higher flow speed. Smaller mammals are challenged during urination by high viscous and capillary forces that limit their urine to single drops. Our findings reveal that the urethra is a flow-enhancing device, enabling the urinary system to be scaled up by a factor of 3,600 in volume without compromising its function. This study may help to diagnose urinary problems in animals as well as inspire the design of scalable hydrodynamic systems based on those in nature.

I present a translation of the abstract in my own language below:

We don’t know if humans and other mammals pee in the same way. Here, we study how both big and little mammals pee. We creepily filmed a lot of mammals (that weigh more than 3kg) pee with an unnecessarily high-speed camera and found that they all generally pee for 21 \pm 13 seconds. Large mammals can push more pee through their pee-holes and gravity helps them out a bit. It’s harder for small animals to pee because they have smaller pee-holes. Surprisingly, pee-holes work for mammals with a range of sizes. We hope this study will help mammals with peeing problems.

I genuinely enjoyed reading their paper, and actually recommend it for a bit of fun. Here are some of the high-speed videos (which you may or may not want to watch) associated with the paper.

Please feel free to experiment with your own “translations” in the comments.

Condensed Matter Physics in the Eyes of the Public: A Note from N.P. Armitage

Note: This post is actually a comment on Jennifer Ouellette’s blog Cocktail Party Physics. It was in response to a post about why particle physics tends to generate more wonder and hype in the eyes of the media and public at large compared to condensed matter physics. The comment was originally posted in 2006 by N. P. Armitage, but it still rings true today. I reprint it below:

Nuestra culpa. You’re right, Jennifer. We condensed matter physicists (henceforth CMP) have not been good with providing a compelling narrative for our research. There may be many reasons for this, but I believe it comes in part from a misconception of how we should sell ourselves to the public (and thereby funding agencies).

As a field we can be justifiably proud to have discovered the physics that led to the transistor, NMR, superconducting electronics etc etc. But this boon has also been a curse. It has made us lazy and has stifled our capacity to think creatively about outreach in areas where we don’t have the crutch of technological promise to fall back on.

This is a luxury our cosmology colleagues don’t have. They feel passionately about their research and they have to (get to?) convey that passion to the public (with predictably good results). We feel passionately about our research, but then feel compelled to tell boring stories about this or that new technology we might develop (which predictably elicits yawns and perhaps only a mental note to take advantage of said technology when it is available in iPod form). We do this because we are bred and raised to think that technological promise is a somehow more legitimate motivation to the outside public than genuine fundamental scientific interest. It doesn’t have to be this way.

Due to our tremendous technological successes there is also the feeling then that at some level ALL our work should touch on technology. This is the easy strategy, but ultimately it hasn’t been good for the health of the field. This is because, for many of us, technology isn’t our passion and it shows. Moreover, the research or aspect of research that has the greatest chance of evoking feelings of real awe and wonderment is typically the precise research that has the least chance of creating viable products. Perhaps this last statement is one regarding human nature itself.

This current modus operandi has led to three things:

-A marginalization of some of the most exciting research (which may have not even a tenuous connection to commercialization).

-Big promises about technological directions when it isn’t warranted. And then consequences when results fail to live up to prognostications.

-And most relevant for the current discussion, a lack of focus on, and practice at, evoking awe and wonderment.

It is telling that virtually every Phys Rev Focus (short news release-style blurbs from the American Physical Society on notable discoveries) on CMP ends with a sentence or two about what technological impact said discovery will have. Sometimes these connections are tenuous at best. Obviously there is no similar onus in articles on cosmology and so those Focuses can focus on what it is that really excites the researchers (instead of the tenuous backstory technological connection). This is nothing against Phys. Rev. Focus, but serves to illustrate the prevailing philosophy in public outreach. The “public” can tell when we’re bluffing and they certainly can feel passion or lack thereof.

The reality is that many of us in CMP don’t have the inclination or interest to ‘make’ anything at all. For instance, we may pursue novel states of matter at low temperature and consider the concept of emergence and the appearance of collective effects to be just as fundamental and irreducible as anything in string theory. We should promote what excites us in the manner that it excites us.

The research that Jennifer cites on graphene is a case in point. Yes, perhaps (but perhaps not) there is technological promise in graphene, but there is also a remarkable (and awe-inspiring) fundamental side as well. Here we believe that the electrons in graphene are described by the same formalism that applies to the relativistic particles of the Dirac equation. One can simulate the rich structure of elementary particle physics in a table top experiment! I would posit that this kind of thing is much more likely to provoke enthusiasm from the public at large than any connection to graphene as yet another possible material in new computing devices.

Our cosmology and particle physics colleagues are raised academically to believe that knowledge for knowledge’s sake is a good thing. By and large they do a wonderful job of conveying these ideas to the general public. Although we believe the same thing, we CMP have presented ourselves not as people who also have access to wild and wonderful things, but as people who are discovering stuff to make stuff. We have that, but there is so, so much more. We need a new business model and a new narrative.

Lessons from the Coupled Oscillator

In studying solid state physics, one of the first problems encountered is that of phonons. In the usual textbooks (such as Ashcroft and Mermin or Kittel), the physics is buried underneath formalism. Here is my attempt to explain the physics, while just quoting the main mathematical results. For the simple mass-spring oscillator system pictured below, we get the following equation of motion and oscillation frequency:

Simple harmonic oscillator

\ddot{x} = -\omega^2x

and      \omega^2 = \frac{k}{m}

If we couple two harmonic oscillators, such as in the situation below, we get two normal modes that obey equations of motion identical in form to the single-oscillator case.

coupledoscillator

Coupled harmonic oscillator

The equations of motion for the normal modes are:

\ddot{\eta_1} = -\omega^2_1\eta_1      and

\ddot{\eta_2} = -\omega^2_2\eta_2,

where

\omega_1^2 = \frac{k+2\kappa}{m}

and   \omega_2^2 = \frac{k}{m}.

I should also mention that \eta_1 = x_1 - x_2 and \eta_2 = x_1 + x_2. The normal modes are pictured below, consisting of a symmetric and antisymmetric oscillation:

symmetric

Symmetric normal mode

antisymmetric

Antisymmetric normal mode

The surprising thing about the equations for the normal modes is that they look exactly like the equations for two decoupled and independent harmonic oscillators. Any motion of the oscillators can therefore be written as a linear combination of the normal modes. When looking back at such results, it seems trivial — but to whoever first solved this problem, the result must have seemed unexpected and profound.
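As a sanity check on these formulas, one can diagonalize the dynamical matrix of the two-mass system numerically; the mass and spring constants below are arbitrary illustrative values, not anything tied to a real system.

```python
import numpy as np

# Two equal masses m, each tied to a wall by a spring k and coupled
# to each other by a spring kappa (arbitrary illustrative values).
m, k, kappa = 1.0, 4.0, 1.5

# Newton's equations in matrix form, x'' = -D x, with
#   m x1'' = -k x1 - kappa (x1 - x2)
#   m x2'' = -k x2 - kappa (x2 - x1)
D = np.array([[(k + kappa) / m, -kappa / m],
              [-kappa / m,      (k + kappa) / m]])

# Eigenvalues of D are the squared normal-mode frequencies.
omega_sq, modes = np.linalg.eigh(D)
print(np.round(np.sort(omega_sq), 6))  # [4. 7.] = [k/m, (k + 2*kappa)/m]
```

The eigenvectors come out proportional to (1, 1) and (1, -1), i.e. exactly the symmetric and antisymmetric combinations quoted above (up to overall sign).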

Now, let us briefly discuss the quantum case. If we have a single harmonic oscillator, we get that the Hamiltonian is:

H = \hbar\omega (a^\dagger a +1/2)

If we have many harmonic oscillators coupled together as pictured below, one would probably guess in light of the classical case that one could obtain the normal modes similarly.

Harmonic Chain

One might then naively guess that the Hamiltonian can be decoupled into many seemingly independent oscillators:

H = \sum_k\hbar\omega_k (a^\dagger_k a_k+1/2)

This intuition is exactly correct and this is indeed the Hamiltonian describing phonons, the normal modes of a lattice. The startling conclusion in the quantum mechanical case, though, is that the equations lend themselves to a quasiparticle description — but I wish to speak about quasiparticles another day. Many ideas in quantum mechanics, such as Anderson localization, are general wave phenomena and can be seen in classical systems as well. Studying and visualizing classical waves can therefore still yield interesting insights into quantum mechanics.
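The classical version of this decoupling can be checked directly: diagonalizing a small periodic chain of masses and springs reproduces the textbook phonon dispersion \omega_k = 2\sqrt{\kappa/m}|\sin(ka/2)|. The chain length and spring constant below are arbitrary choices for illustration.

```python
import numpy as np

# Chain of N equal masses m joined by springs kappa, with periodic
# boundary conditions (arbitrary illustrative values).
N, m, kappa = 8, 1.0, 1.0

# Dynamical matrix from m x_j'' = -kappa (2 x_j - x_{j-1} - x_{j+1}).
D = np.zeros((N, N))
for j in range(N):
    D[j, j] = 2 * kappa / m
    D[j, (j + 1) % N] -= kappa / m
    D[j, (j - 1) % N] -= kappa / m

# Clip tiny negative round-off on the zero (translation) mode.
omega_numeric = np.sqrt(np.clip(np.linalg.eigvalsh(D), 0.0, None))

# Analytic phonon dispersion, with allowed wavevectors k = 2 pi n / (N a):
n = np.arange(N)
omega_analytic = 2 * np.sqrt(kappa / m) * np.abs(np.sin(np.pi * n / N))

print(np.allclose(np.sort(omega_numeric), np.sort(omega_analytic)))  # True
```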

Why are the quantum mechanical effects of sound observed in most solids but not most liquids?

Well, if liquids remained liquids down to low temperatures, then the quantum mechanical effects of sound would occur in them as well. There is actually one example where these effects are important, and that is liquid helium.

The appropriate questions to ask, then, are: (i) when are quantum mechanical effects significant in the description of sound? and (ii) when does quantum mechanics have any observable consequences in matter at all?

The answer to these questions is probably obvious to most people who read this blog. However, I still think it needs to be reiterated every once in a while. When does the wave nature of “particles” become relevant? Usually, when the wavelength, \lambda, is on the order of some characteristic length, d:

\lambda \gtrsim d

What is this characteristic length in a liquid or solid? One can approximate it by the interparticle spacing, which is roughly the inverse cube root of the density:

d \sim n^{-1/3}

Quantum mechanical effects can therefore be said to become important when \lambda \gtrsim n^{-1/3}.

Now, lastly, we need an expression for the wavelength of the particles. One can use the deBroglie expression that relates the wavelength to the momentum:

\lambda \sim \frac{h}{p},

where h is Planck’s constant and p is the momentum. And one can approximate the momentum of a particle at temperature, T, by:

p \sim \sqrt{mk_BT}    (massive)    OR      p \sim k_BT/v_s     (massless),

where k_B is Boltzmann’s constant, m is the mass of the particle in question, and v_s is the speed of sound. Therefore we get that quantum mechanics becomes significant when:

n^{2/3}h^{2}/m \gtrsim k_BT   (massive)    OR     n^{1/3}h v_s \gtrsim k_BT     (massless).

Of course this expression is just a rough estimate, but it does tell us that most liquids end up freezing before quantum mechanical effects become relevant. Therefore sound, or phonons, express their quantum mechanical properties at low temperatures — usually below the freezing point of most materials. By the way, the most celebrated example of the quantum mechanical effects of sound in a solid is the C_v \sim T^3 behavior of the Debye model. Notice that the left-hand side of the formula above for massless particles is, within factors of order unity, the Boltzmann constant times the Debye temperature. Sound can exhibit quantum mechanical properties in liquids and gases, but these cases are rare: helium at low temperature is an example of a liquid, and Bose-condensed sodium is an example of a gas.
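To put a number on the massive-particle condition, here is the estimate n^{2/3}h^2/(m k_B) evaluated for liquid helium-4, the one liquid named above. The density and atomic mass are standard handbook values, and only the order of magnitude should be trusted.

```python
# Crossover temperature T* ~ n^(2/3) h^2 / (m k_B) for liquid He-4.
# Constants and material parameters are standard handbook values;
# this is an order-of-magnitude estimate only.
h = 6.626e-34     # Planck's constant (J s)
k_B = 1.381e-23   # Boltzmann's constant (J/K)
m_He = 6.646e-27  # mass of a He-4 atom (kg)
rho = 145.0       # density of liquid He-4 (kg/m^3)

n = rho / m_He                                  # number density (m^-3)
T_star = n ** (2 / 3) * h ** 2 / (m_He * k_B)   # crossover temperature (K)
print(round(T_star))  # a few tens of kelvin
```

Helium remains liquid all the way down to T = 0, so it actually reaches this regime; most other liquids freeze well above their corresponding T*.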

What Happens in 2D Stays in 2D.

There was a recent paper published in Nature Nanotechnology demonstrating that single-layer NbSe_2 exhibits a charge density wave transition at 145K and superconductivity at 2K. Bulk NbSe_2 has a CDW transition at ~34K and a superconducting transition at ~7.5K. The authors speculate (plausibly) that the enhanced CDW transition temperature occurs because of an increase in electron-phonon coupling due to the reduction in screening. An important detail is that the authors used a sapphire substrate for the experiments.

This paper is part of a general trend of papers that examine the physics of solids in the 2D limit, either in single-layer form or at the interface between two solids. This frontier was opened up by the discovery of graphene and by the discovery of superconductivity and ferromagnetism in the 2D electron gas at the LAO/STO interface. The nature of these transitions at the LAO/STO interface is a prominent area of research in condensed matter physics. Part of this interest stems from researchers having had the Mermin-Wagner theorem ingrained in them. I have written before about the limitations of such theorems.

Nevertheless, it has now been found that the transition temperatures of materials can be significantly enhanced in single-layer form. Besides the NbSe_2 case, the CDW transition temperature in TiSe_2 was also found to be enhanced by about 40K in monolayer form. Probably most spectacularly, it was reported that single-layer FeSe on an STO substrate exhibited superconductivity at temperatures higher than 100K (bulk FeSe only exhibits superconductivity at 8K). It should be mentioned that in bulk form the aforementioned materials are all quasi-2D and layered.

The phase transitions in these compounds obviously raise some fundamental questions about the nature of solids in 2D. One would naively expect the transition temperature to be suppressed in reduced dimensions due to enhanced fluctuations. Evidently, this is not what is observed, and a boost from another parameter, such as the enhanced electron-phonon coupling in the NbSe_2 case, must therefore be taken into account.

I find this trend towards studying 2D compounds a particularly interesting avenue in the current condensed matter physics climate for a few reasons: (1) whether or not these phase transitions make sense within the Kosterlitz-Thouless paradigm (which works well to explain transitions in 2D superfluid and superconducting films) still needs to be investigated, (2) the need for adequate probes to study interfacial and monolayer compounds will necessarily lead to new experimental techniques and (3) qualitatively different phenomena can occur in the 2D limit that do not necessarily occur in their 3D counterparts (the quantum Hall effect being a prime example).

Sometimes trends in condensed matter physics can lead to intellectual atrophy — I think that this one may lead to some fundamental and major discoveries in the years to come on the theoretical, experimental and perhaps even on the technological fronts.

Update: The day after I wrote this post, I also came upon an article demonstrating evidence for a ferroelectric phase transition in thin Strontium Titanate (STO), a material known to exhibit no ferroelectric phase transition in bulk form at all.

Net Attraction à la Bardeen-Pines and Kohn-Luttinger

In the lead-up to the full formulation of BCS theory, the derivation of the Bardeen-Pines interaction played a prominent role. It demonstrated that a net attractive interaction between electrons in an electron gas/liquid can arise in the presence of phonons.

The way that Bardeen and Pines derived this result can be understood by reading this paper. The result is actually quite simple to derive using a random-phase-like approximation or second-order perturbation theory. Regardless, the important result from this paper is that the effective interaction between two electrons is given by:

V_{eff}(\textbf{q},\omega) = \frac{e^2}{\epsilon_0}\frac{1}{q^2 + k_{TF}^2}(1 + \frac{\omega_{ph}^2}{\omega^2 - \omega_{ph}^2})

The crucial aspect of this equation is that for frequencies less than the phonon frequency (i.e. for \omega < \omega_{ph}), the effective interaction becomes negative (i.e. attractive).
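The sign change can be seen by simply plugging numbers into the expression above; every quantity below is in arbitrary units chosen purely for illustration.

```python
# Sign of the Bardeen-Pines effective interaction versus frequency.
# All parameters are in arbitrary units, for illustration only.
e_sq_over_eps0 = 1.0
q, k_TF, omega_ph = 1.0, 1.0, 1.0

def v_eff(omega):
    """Effective electron-electron interaction V_eff(q, omega)."""
    coulomb = e_sq_over_eps0 / (q ** 2 + k_TF ** 2)
    return coulomb * (1 + omega_ph ** 2 / (omega ** 2 - omega_ph ** 2))

print(v_eff(0.5) < 0)  # True: attractive below the phonon frequency
print(v_eff(2.0) > 0)  # True: repulsive above it
```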

It was also shown by Kohn and Luttinger in 1965 that, in principle, one could obtain superconductivity even in the absence of phonons. The attraction would instead arise from Friedel oscillations, whereby the effective potential can also become negative. This was quite a remarkable result: it showed that a purely electronic form of superconductivity is indeed theoretically possible.

What makes the effective interaction become attractive in these two models? In the Bardeen-Pines case, the phonons screen the electrons leading to a net attraction, while in the Kohn-Luttinger case, Fermi surface effects can again lead to a net attraction. It is important to note that in both papers, the pre-eminent quantity calculated was the dielectric function.

This is because the effective potential, V_{eff}(\textbf{q},\omega), is equal to the following:

V_{eff}(\textbf{q},\omega) = \frac{V(\textbf{q},\omega)}{\epsilon(\textbf{q},\omega)}

In the aforementioned cases, net attraction resulted when \epsilon(\textbf{q},\omega) < 0.

This raises an interesting question: is it possible to still form Cooper pairs even when \epsilon(\textbf{q},\omega) > 0? It is possible that this question has been asked and answered in the literature previously, unbeknownst to me. I do think it is an important point to try to address especially in the context of high temperature superconductivity.

I welcome comments regarding this question.

Update: In light of my previous post about spin fluctuations, it seems that \epsilon < 0 is not a necessary condition to form Cooper pairs. In the s-wave channel, it seems that, barring some pathology, \epsilon would have to be less than 0, but in the d-wave case, this need not be so. I just hadn’t put two and two together when initially drafting this post.

Draw me a picture of a Cooper pair

Note: This is a post by Brian Skinner as part of a blog exchange. He has his own blog, which I heartily recommend, called Gravity and Levity. He is currently a postdoctoral scholar at MIT in theoretical condensed matter physics.

The central, and most surprising, idea in the conventional theory of superconductivity is the notion of Cooper pairing. In a Cooper pair, two electrons with opposite momentum somehow manage to overcome their ostensibly enormous repulsive energy and bind together to make a composite bosonic particle. These composite bosons are then able to carry electric current without dissipation.

But what does a Cooper pair really look like? In this post I’m going to try to draw a picture of one, and in the process I hope to discuss a little bit of physical intuition behind how Cooper pairing is possible.

To begin with, one should acknowledge that the “electrons” that comprise Cooper pairs are not really electrons as God made them. These electrons are the quasiparticles of Fermi liquid theory, which means that they are singly charged, spin-1/2 objects that are dressed in excitations of the Fermi sea around them. In particular, each “electron” that propagates through a metal carries with it a screening atmosphere made up of local perturbations in charge density. Something like this:

electron_screening

That distance r_s in this picture is the Thomas-Fermi screening radius, which in metals is on the same order as the Fermi wavelength (generally \sim 5 - 10 Angstroms). At distances much longer than r_s, the electron-electron interaction is screened out exponentially.

What this screening implies is that as long as the typical distance between electrons inside a Cooper pair is much longer than the Fermi wavelength (which it has to be, since there is really no concept of an electron that is smaller than the Fermi wavelength), the mutual Coulomb repulsion between electrons isn’t a problem. Electrons that are much further apart than r_s simply don’t have any significant Coulomb interaction.

But, of course, this doesn’t explain what actually makes the electrons stick together. In the conventional theory, the “glue” between electrons is provided by the electron-phonon interaction. We typically say that electrons within a Cooper pair “exchange phonons”, and that this exchange mediates an attractive interaction. If you push a physicist to tell you what this exchange looks like in real space, you might get something like what is written in the Wikipedia article:

An electron moving through a conductor will attract nearby positive charges in the lattice. This deformation of the lattice causes another electron, with opposite spin, to move into the region of higher positive charge density. The two electrons then become correlated.

This kind of explanation might be accompanied by a picture like this one or even an animation like this one, which attempt to schematically depict how one electron distorts the lattice and creates a positively-charged well that another electron can fall into.

But I never liked these kinds of pictures. Their big flaw, to my mind, is that in metals the electrons move much too fast for the picture to make sense. In particular, the Fermi velocity in metals is usually on the order of 10^6 m/s, while the phonon velocity is a paltry (\text{few}) \times 10^3 m/s. So the idea that one electron can create a little potential well for another to fall into simply doesn’t make sense dynamically. By the time the potential well was created by the slow rearrangement of ions, the first electron would be long gone, and it’s hard to see any meaningful way in which the two electrons would be “paired”.

The other problem with the picture above is that it doesn’t explain why only electrons with opposite momentum can form Cooper pairs. If Cooper pairing came simply from one electron leaving behind a lattice distortion for another to couple to, then why should the pairing only work for opposite-momentum electrons?

So let me advance a different picture of a Cooper pair.

It starts by reminding you that the wavefunction for a (say) right-going electron state looks like this:

\psi_R \sim e^{i (k x - \omega t)}.

The probability density for the electron’s position in this state, |\psi_R(x)|^2, is uniform in space.

On the other hand, the wavefunction for a left-going electron state looks like

\psi_L \sim e^{i (-k x - \omega t)}.

It also has a uniform probability distribution. But if you use the two states (one with momentum +k and the other with momentum -k) to make a superposition, you can get a state \psi_C = (\psi_R + \psi_L)/\sqrt{2} whose probability distribution looks like a standing wave: |\psi_C|^2 \sim \cos^2(k x).

In other words, by combining electron states with +k and -k, you can arrive at an electron state where the electron probability distribution (and therefore the electron charge density) has a static spatial pattern.
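This is easy to verify numerically: superposing right- and left-moving plane waves gives a probability density that is a static cos^2 pattern, independent of time. The wavevector, frequency, time, and grid below are all arbitrary.

```python
import numpy as np

# Superpose counter-propagating plane waves (arbitrary parameters).
k, omega, t = 2.0, 1.0, 0.7
x = np.linspace(0.0, 2.0 * np.pi, 500)

psi_R = np.exp(1j * (k * x - omega * t))   # right-moving state
psi_L = np.exp(1j * (-k * x - omega * t))  # left-moving state
psi_C = (psi_R + psi_L) / np.sqrt(2)       # equal superposition

density = np.abs(psi_C) ** 2  # equals 2 cos^2(k x), with no t-dependence
print(np.allclose(density, 2 * np.cos(k * x) ** 2))  # True
```

Changing t above leaves the density unchanged, which is the whole point: the charge-density pattern is static even though each constituent state is a traveling wave.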

Once there is a static pattern, the positively charged ions in the crystal lattice can distort their spacing to bring themselves closer to the regions of large electron charge density. Like this:

standing_wave

In this way the system lowers its total Coulomb energy.  In essence, the binding of opposite-momentum electrons is a clever way of effectively bringing the fast-moving electrons to a stop, so that the slow-moving ionic lattice can accommodate itself to it.

Of course, the final piece of the picture is that the Cooper pair should have a finite size in space – the standing wave can’t actually extend on forever. This finite size is generally what we call the coherence length \xi. Forcing the two electrons within the Cooper pair to be confined within the coherence length costs some quantum confinement energy (i.e., an increase in the electron momentum due to the uncertainty principle), and this energy cost goes like \sim \hbar v/\xi, where v is the Fermi velocity. So generally speaking the length \xi should be large enough that \hbar v / \xi \lesssim \Delta, where \Delta is the binding energy gained from Cooper pairing. Usually these two energy scales are on the same order, so that \xi \sim \hbar v / \Delta.
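Plugging in typical metallic numbers, with an assumed Fermi velocity of 10^6 m/s and a gap of order 1 meV (generic textbook scales, not values for any particular material), shows why the coherence length is so much larger than the lattice spacing.

```python
# Order-of-magnitude BCS coherence length, xi ~ hbar * v_F / Delta.
# v_F and Delta are generic textbook scales, not a specific material.
hbar = 1.055e-34  # reduced Planck constant (J s)
v_F = 1.0e6       # assumed Fermi velocity (m/s)
Delta = 1.6e-22   # gap of order 1 meV, in joules

xi = hbar * v_F / Delta  # coherence length in metres
print(f"{xi:.1e}")  # hundreds of nanometres: thousands of lattice spacings
```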

Putting it all together, my favorite picture of a Cooper pair looks something like this:

cooper_pair

I’m certainly no expert in superconductivity, but this picture makes much more sense to me than the one in Wikipedia.

Your criticisms or refinements of it are certainly welcome.

Author’s note: Thanks to Mike Norman, who taught me this picture over lunch one day.