
Expt 9) Soft phonons at continuous structural phase transitions

That continuous structural phase transitions are associated with soft phonon modes was first put forth theoretically by Cochran in 1959-60. He posited that as an optical phonon branch reaches zero frequency the material must become structurally unstable. Qualitatively, when the phonon frequency goes to zero, that mode becomes macroscopically occupied, which ushers in a structural change. The symmetry of the phonon determines the new low temperature structure.

While this theory was tested soon thereafter by many, a soft phonon associated with a structural instability had already been observed by Raman and Nedungadi nineteen years prior. In 1940, they saw, using (you guessed it!) Raman spectroscopy, that the transition between \alpha-quartz and \beta-quartz at 573 °C was associated with a soft phonon. However, it is important to note that the \alpha-\beta transition in quartz is a discontinuous phase transition. So while the phonon does soften considerably, it does not actually reach zero frequency before the structural transition takes place.

Below is the original image, showing the rather spectacular result, where the arrow indicates the phonon that softens significantly upon approaching the transition temperature (it starts out at ~220 cm^{-1}). Both the Stokes and anti-Stokes softening can be observed due to the high temperature of the studies.

Phonon softening in quartz. As the temperature is raised, a phonon that starts out at the position of the arrow shifts toward lower frequency (i.e. toward the region of intense Rayleigh scattering). (The phonon mode, for some reason, is barely visible in the -192 °C spectrum.) At high temperatures the phonon linewidth broadens considerably, and the mode is very difficult to see at 530 °C. It is actually easier to see the softening on the anti-Stokes side (to the left of the Rayleigh line).
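As a quick back-of-the-envelope check of that last point: the anti-Stokes line is suppressed relative to the Stokes line by roughly the Boltzmann factor e^{-\hbar\omega/k_B T} (I'm ignoring the frequency-dependent prefactor here). Below is a small Python sketch using the ~220 cm^{-1} mode mentioned above; the choice of comparison temperatures is mine.

```python
# Back-of-the-envelope estimate of the anti-Stokes/Stokes intensity ratio,
# I_AS / I_S ~ exp(-hbar*omega / k_B*T), ignoring the frequency-dependent
# prefactor.  The 220 cm^-1 mode and the 573 C transition temperature come
# from the post; the comparison temperatures are my own choice.
import numpy as np

HC_OVER_KB = 1.4388  # K per cm^-1 (hc/k_B, the second radiation constant)

def anti_stokes_ratio(omega_cm, T_kelvin):
    """Boltzmann factor exp(-hbar*omega / k_B*T) for a mode at omega_cm (cm^-1)."""
    return np.exp(-HC_OVER_KB * omega_cm / T_kelvin)

omega = 220.0  # cm^-1, approximate starting frequency of the soft mode
for T_C in (25.0, 573.0):  # room temperature vs. the alpha-beta transition
    T = T_C + 273.15
    print(f"T = {T_C:5.0f} C  ->  I_AS/I_S ~ {anti_stokes_ratio(omega, T):.2f}")
```

This rough estimate gives an anti-Stokes line almost 70% as intense as the Stokes line near the transition, compared to roughly 35% at room temperature, which is why both sides of the Rayleigh line show up so clearly in the high-temperature spectra.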

Critical Slowing Down

I realize that it’s been a long while since I’ve written a post, so the topic of this one, while unintentionally so, is quite apt.

Among the more universal themes in studying phase transitions is the notion of critical slowing down. Most students are introduced to this idea in the context of second order phase transitions, but it has turned out to be a useful concept in a wide range of systems beyond this narrow framework and into subjects well outside the purview of the average condensed matter physicist.

Stated simply, critical slowing down refers to the phenomenon observed near phase transitions where a slight perturbation or disturbance away from equilibrium takes a really long time to decay back to equilibrium. Why is this the case?

The main idea can be explained within the Landau theory of phase transitions, and I’ll take that approach here since it’s quite intuitive. As you can see in the images below, when the temperature is far from T_c, the Landau potential well can be approximated by a parabola. However, this approximation is not possible near T_c.

[Figure: the Landau potential far from T_c and near T_c]

Mathematically, this can be explained by considering a simple form of the Landau potential:

V(x) = \alpha (T-T_c) x^2 + \beta x^4

Near T_c, the quadratic term vanishes, and we are left with only the quartic one. Although it’s clear from the images why the dynamics slow down near T_c, it helps to spell out the math a little.

Firstly, imagine that the potential is filled with some sort of viscous fluid, something akin to honey, and that the dynamics of the ball represent those of the order parameter. This puts us in the “overdamped” limit, where the order parameter reaches the equilibrium point without executing any oscillatory motion. Far from T_c, as mentioned above, we can approximate the dynamics using the parabolic form of the potential (with the overdamped equation of motion, \dot{x} = -dV/dx):

\dot{x} = -\gamma(T) x

The solution to this differential equation is of exponential form, i.e. x(t) = x(0)e^{-\gamma(T) t}, and the relaxation back to equilibrium is therefore characterized by a temperature-dependent timescale \tau = 1/\gamma(T). Within this Landau form, \gamma(T) \propto |T-T_c|, so the relaxation time already grows as the transition is approached.

However, near T_c, the parabolic approximation breaks down, as the quadratic term becomes very small, and we have to take the quartic term into account. The order parameter dynamics are then described by:

\dot{x} = -\beta x^3,

which has a solution of the form x(t) \sim 1/\sqrt{\beta t}. Notably, the dynamics of the order parameter obey a much slower power-law decay near T_c, as illustrated below:

[Figure: exponential decay of the order parameter far from T_c vs. power-law decay near T_c]
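If you want to see the slowing down numerically, here is a minimal sketch of the overdamped dynamics described above; the values of \alpha(T-T_c), \beta, the initial condition and the time step are arbitrary illustrative choices.

```python
# Minimal numerical sketch of the overdamped relaxation discussed above:
# x' = -dV/dx with V(x) = alpha_t*x^2 + beta*x^4, where alpha_t stands in
# for alpha*(T - Tc).  All parameter values are arbitrary illustrative choices.
import numpy as np

def relax(alpha_t, beta, x0=1.0, dt=1e-3, n_steps=20000):
    """Forward-Euler integration of x' = -dV/dx = -2*alpha_t*x - 4*beta*x**3."""
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        x[i] = x[i - 1] - dt * (2 * alpha_t * x[i - 1] + 4 * beta * x[i - 1] ** 3)
    return x

far  = relax(alpha_t=1.0, beta=0.1)  # far above Tc: quadratic term dominates
near = relax(alpha_t=0.0, beta=0.1)  # at Tc: only the quartic term survives

# Far from Tc the decay is exponential, x ~ exp(-2*alpha_t*t); at Tc it is a
# much slower power law, x ~ 1/sqrt(8*beta*t) at long times.
for i in (1000, 5000, 19999):
    print(f"t = {i*1e-3:5.1f}:  far = {far[i]:.2e}   near = {near[i]:.3f}")
```

Far from T_c the deviation has essentially vanished after a few units of time, while right at T_c it is still hanging around at the same times, decaying only as 1/\sqrt{t}.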

At this point, one would naively think, “okay, so this is some weird thing that happens near a critical point at a phase transition…so what?”

Well, it turns out that critical slowing down can actually serve as a precursor of an oncoming phase transition in all sorts of contexts, and can even be predictive! Here are a pair of illuminating papers which show that critical slowing down occurs near a population collapse in microbial communities (from the Scheffer group and from the Gore group). As an aside, the Gore group used the budding yeast Saccharomyces cerevisiae in their experiments, which is the yeast used in most beers (I wonder if their lab has tasting parties, and if so, can I get an invitation?).

Here is another recent paper showing critical slowing down in a snap-through instability of an elastic rod. I could go on and on listing the different contexts where critical slowing down has been observed, but I think it’s better that I cite this review article.

Surprisingly, critical slowing down has been observed at continuous, first-order and far-from-equilibrium phase transitions! Because of this generality, the observation of critical slowing down can be predictive. If the appropriate measurements could be made, one might be able to see how close the earth’s climate is to a “tipping point” from which it would be very difficult to return due to hysteretic effects (see this paper, which shows some form of critical slowing down in previous climatic changes in the earth’s history). But for now, it’s just interesting to look for critical slowing down in other contexts that are a little easier to predict and where perhaps the consequences aren’t as dire.

*Thanks to Alfred Zong who introduced me to many of the above papers

**Also, a shout out to Brian Skinner who caught repeated noise patterns in a recent preprint on room temperature superconductivity. Great courage and good job!

Landau Theory and the Ginzburg Criterion

The Landau theory of second order phase transitions has probably been one of the most influential theories in all of condensed matter. It classifies phases by defining an order parameter — something that shows up only below the transition temperature, such as the magnetization in a paramagnetic to ferromagnetic phase transition. Landau theory has framed the way physicists think about equilibrium phases of matter, i.e. in terms of broken symmetries. Much current research is focused on transitions to phases of matter that possess a topological index, and a major research question is how to think about these phases which exist outside the Landau paradigm.

Despite its far-reaching influence, Landau theory actually doesn’t work quantitatively in most cases near a continuous phase transition. By this, I mean that it fails to predict the correct critical exponents. This is because Landau theory implicitly assumes that all the particles interact in some kind of average way and does not adequately take into account the fluctuations near a phase transition. Quite amazingly, Landau theory itself predicts that it is going to fail near a phase transition in many situations!

Let me give an example of its failure before discussing how it predicts its own demise. Landau theory predicts that the specific heat should exhibit a discontinuity like so at a phase transition:

[Figure: specific heat predicted by Landau theory, showing a finite discontinuity at the transition]

However, if one examines the specific heat anomaly in liquid helium-4, for example, it looks more like a divergence as seen below:

[Figure: the lambda-shaped divergence of the specific heat at the superfluid transition in liquid helium-4]

So it clearly doesn’t predict the right critical exponent in that case. The Ginzburg criterion tells us how close to the transition temperature Landau theory will fail. The Ginzburg argument essentially goes like so: since Landau theory neglects fluctuations, we can gauge how accurate it is going to be by calculating the ratio of the fluctuations to the square of the order parameter:

E_R = |G(R)|/\eta^2

where E_R is the error in Landau theory, |G(R)| quantifies the fluctuations and \eta is the order parameter. Basically, if the error is small, i.e. E_R << 1, then Landau theory will work. However, if it approaches \sim 1, Landau theory begins to fail. One can actually calculate both the order parameter and the fluctuation region (quantified by the two-point correlation function) within Landau theory itself and therefore use Landau theory to calculate whether or not it will fail.

If one does carry out the calculation, one gets that Landau theory will work when:

t^{(4-d)/2} >> k_B/(\Delta C \, \xi(1)^d)  \equiv t_{L}^{(4-d)/2}

where t is the reduced temperature, d is the dimension, \xi(1) is the mean-field correlation length extrapolated to t = 1 (i.e. T = 2T_c), and \Delta C is the jump in the specific heat per unit volume at the transition, which is usually about one k_B per degree of freedom (so \Delta C/k_B is roughly the density of degrees of freedom). In words, the formula essentially counts the number of degrees of freedom in a volume defined by \xi(1)^d. If the number of degrees of freedom is large, then Landau theory, which averages the interactions from many particles, works well.

So that was a little bit of a mouthful, but the important thing is that these quantities can be estimated quite well for many phases of matter. For instance, in liquid helium-4, the particle interactions are very short-ranged because the helium atom is closed-shell (this is what enables helium to remain a liquid all the way down to zero temperature at ambient pressure in the first place). Therefore, we can assume that \xi(1) \sim 1\textrm{\AA}, and hence t_L \sim 1 and deviations from Landau theory can be easily observed in experiment close to the transition temperature.

Despite the qualitative similarities between superfluid helium-4 and superconductivity, a topic I have addressed in the past, Landau theory works much better for superconductors. We can also use the Ginzburg criterion in this case to calculate how close to the transition temperature one has to be in order to observe deviations from Landau theory. In fact, the question as to why Ginzburg-Landau theory works so well for BCS superconductors is what awakened me to these issues in the first place. Anyway, we assume that \xi(1) is on the order of the Cooper pair size, which for BCS superconductors is on the order of 1000 \textrm{\AA}. There are about 10^8 particles in this volume and correspondingly, t_L \sim 10^{-16} and Landau theory fails so close to the transition temperature that this region is inaccessible to experiment. Landau theory is therefore considered to work well in this case.

For high-Tc superconductors, the Cooper pair size is of order 10\textrm{\AA} and therefore deviations from Landau theory can be observed in experiment. The last thing to note about these formulas and approximations is that two parameters determine whether Landau theory works in practice: the number of dimensions and the range of interactions.
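For what it’s worth, the estimates above condense into a couple of lines of Python. Taking \Delta C/k_B to be about one degree of freedom per particle, the criterion reduces to t_L \sim N^{-2/(4-d)}, where N is the number of degrees of freedom in the correlation volume \xi(1)^d; the values of N below are just the order-of-magnitude figures quoted above.

```python
# Rough numerical restatement of the Ginzburg estimates above.  With
# Delta C / k_B ~ one degree of freedom per particle, the criterion
# t^{(4-d)/2} >> k_B/(Delta C * xi(1)^d) becomes t_L ~ N^{-2/(4-d)},
# where N is the number of degrees of freedom inside the volume xi(1)^d.
# The values of N are the order-of-magnitude figures quoted in the post.

def t_landau(n_dof_in_xi_volume, d=3):
    """Reduced temperature below which Landau theory is expected to fail."""
    return n_dof_in_xi_volume ** (-2.0 / (4 - d))

print(f"helium-4           (N ~ 1):    t_L ~ {t_landau(1):.0e}")
print(f"BCS superconductor (N ~ 1e8):  t_L ~ {t_landau(1e8):.0e}")
```

In three dimensions the exponent is just 2, so t_L \sim 1/N^2, which is why the huge Cooper pairs of a BCS superconductor push the breakdown of Landau theory to an experimentally inaccessible t_L \sim 10^{-16}.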

*Much of this post has been unabashedly pilfered from N. Goldenfeld’s book Lectures on Phase Transitions and the Renormalization Group, which I heartily recommend for further discussion of these topics.

A Glimpse into the Renormalization Group

Leo Kadanoff passed away recently, and his ideas have had far-reaching consequences in many areas of physics. He took the first important step of recognizing the role of “scale transformations” at critical points, an idea embodied in his notion of “block spins”. This visionary idea soon led to Wilson’s development of the renormalization group (RG).

On a personal level, I have found understanding the concepts behind the renormalization group to be quite challenging. Many of the treatments are written from a quantum field theory perspective, which, to an experimentalist like me, presents its own difficulties. I therefore found the short introductory article by Maris and Kadanoff entitled Teaching the Renormalization Group to be extremely valuable. It conveys (in 6 pages!) the main ideas behind the RG approach to phase transitions and critical phenomena.
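To get a feel for what a “block spin” or “scale transformation” actually does, here is a minimal sketch in the spirit of the calculations in that article: the standard decimation step for the 1D Ising chain, where summing out every other spin maps the coupling K = J/k_BT to K' = (1/2)\ln\cosh(2K) (equivalently \tanh K' = \tanh^2 K). The starting value of K is an arbitrary choice of mine.

```python
# A minimal real-space RG sketch: decimating every other spin of the 1D Ising
# chain maps the dimensionless coupling K = J/(k_B*T) to K' = 0.5*ln(cosh(2K)),
# equivalently tanh(K') = tanh(K)**2.  The starting coupling is arbitrary.
import math

def decimate(K):
    """One block/decimation step for the 1D Ising chain."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0  # a fairly strong initial coupling (low temperature)
for step in range(8):
    print(f"step {step}: K = {K:.4f}")
    K = decimate(K)
# K flows toward zero under repeated rescaling: the 1D chain has no
# finite-temperature ordered phase.
```

The coupling shrinks with every rescaling, which is the RG way of saying that the 1D Ising chain has no finite-temperature phase transition.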

Of course, the article only leaves the reader wanting more, as it is brief and to the point. However, I see this as a positive! It spurs curiosity. I see myself coming back to this article repeatedly even after having surmounted more difficult texts on the topic.