
Landau Theory and the Ginzburg Criterion

The Landau theory of second order phase transitions has probably been one of the most influential theories in all of condensed matter. It classifies phases by defining an order parameter — something that shows up only below the transition temperature, such as the magnetization in a paramagnetic to ferromagnetic phase transition. Landau theory has framed the way physicists think about equilibrium phases of matter, i.e. in terms of broken symmetries. Much current research is focused on transitions to phases of matter that possess a topological index, and a major research question is how to think about these phases which exist outside the Landau paradigm.

Despite its far-reaching influence, Landau theory actually doesn’t work quantitatively in most cases near a continuous phase transition. By this, I mean that it fails to predict the correct critical exponents. This is because Landau theory implicitly assumes that all the particles interact in some kind of average way and does not adequately take into account the fluctuations near a phase transition. Quite amazingly, Landau theory itself predicts that it is going to fail near a phase transition in many situations!

Let me give an example of its failure before discussing how it predicts its own demise. Landau theory predicts that the specific heat should exhibit a discontinuity at the transition, like so:

[Figure: specific heat vs. temperature, showing the discontinuity at the transition predicted by Landau theory]

However, if one examines the specific heat anomaly in liquid helium-4, for example, it looks more like a divergence as seen below:

[Figure: the lambda-shaped divergence of the specific heat at the superfluid transition of liquid helium-4]

So it clearly doesn’t predict the right critical exponent in that case. The Ginzburg criterion tells us how close to the transition temperature we have to be before Landau theory fails. The Ginzburg argument essentially goes like so: since Landau theory neglects fluctuations, we can gauge its accuracy by calculating the ratio of the fluctuations to the square of the order parameter:

E_R = |G(R)|/\eta^2

where E_R is the error in Landau theory, |G(R)| quantifies the fluctuations and \eta is the order parameter. Basically, if the error is small, i.e. E_R << 1, then Landau theory will work. However, once it approaches \sim 1, Landau theory begins to fail. One can actually calculate both the order parameter and the fluctuations (quantified by the two-point correlation function) within Landau theory itself, and therefore use Landau theory to predict where it will fail.
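Schematically, the temperature dependence comes from the standard mean-field results \eta^2 \propto |t| and \xi \propto |t|^{-1/2}, together with the Ornstein-Zernike form G(R) \sim e^{-R/\xi}/R^{d-2}. Evaluating the error at R \sim \xi then gives, up to constants,

E_{\xi} \sim \frac{\xi^{-(d-2)}}{|t|} \sim |t|^{(d-4)/2}

which grows without bound as t \rightarrow 0 for d < 4. Demanding E_{\xi} \ll 1 is what produces the criterion below.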

If one does carry out the calculation, one gets that Landau theory will work when:

t^{(4-d)/2} \gg \frac{k_B}{\Delta C \, \xi(1)^d} \equiv t_{L}^{(4-d)/2}

where t is the reduced temperature, d is the dimension, \xi(1) is the mean-field correlation length extrapolated to reduced temperature t = 1 (i.e. T = 2T_C), and \Delta C is the jump in the specific heat per unit volume at the transition, which in units of k_B usually amounts to about one per degree of freedom. In words, the formula essentially counts the number of degrees of freedom in a correlation volume \xi(1)^d. If that number is large, then Landau theory, which averages the interactions from many particles, works well.
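To make that counting concrete, one can simply rearrange the criterion. For d = 3, and taking \Delta C \approx k_B per degree of freedom,

t_L = \left(\frac{k_B}{\Delta C \, \xi(1)^3}\right)^2 \approx \frac{1}{N^2}

where N \approx \Delta C \, \xi(1)^3/k_B is roughly the number of degrees of freedom inside a correlation volume.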

So that was a little bit of a mouthful, but the important thing is that these quantities can be estimated quite well for many phases of matter. For instance, in liquid helium-4, the particle interactions are very short-ranged because the helium atom is closed-shell (this is what enables helium to remain a liquid all the way down to zero temperature at ambient pressure in the first place). Therefore, we can assume that \xi(1) \sim 1\textrm{\AA}, and hence t_L \sim 1, so deviations from Landau theory can be easily observed in experiment close to the transition temperature.

Despite the qualitative similarities between superfluid helium-4 and superconductivity, a topic I have addressed in the past, Landau theory works much better for superconductors. We can also use the Ginzburg criterion in this case to calculate how close to the transition temperature one has to be in order to observe deviations from Landau theory. In fact, the question of why Ginzburg-Landau theory works so well for BCS superconductors is what awakened me to these issues in the first place. Anyway, we assume that \xi(1) is on the order of the Cooper pair size, which for BCS superconductors is on the order of 1000 \textrm{\AA}. There are about 10^8 particles in this volume, and correspondingly t_L \sim 10^{-16}, so Landau theory only fails so close to the transition temperature that this region is inaccessible to experiment. Landau theory is therefore considered to work well in this case.
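As a quick numerical sanity check, here is a tiny Python sketch of that estimate (the function is mine and purely illustrative; the only inputs are the order-of-magnitude particle counts quoted above, N \sim 1 for helium-4 and N \sim 10^8 for a BCS superconductor):

# Rough Ginzburg-criterion estimate in d = 3: t_L ~ (1/N)^2, where N is the
# number of degrees of freedom inside a correlation volume xi(1)^d
# (assuming Delta C ~ k_B per degree of freedom, as above).

def ginzburg_reduced_temperature(n_dof, d=3):
    """Reduced temperature t_L below which Landau theory is expected to fail."""
    return (1.0 / n_dof) ** (2.0 / (4 - d))

# Order-of-magnitude particle counts quoted above:
cases = {
    "liquid helium-4 (xi(1) ~ 1 angstrom)": 1e0,         # ~1 atom per correlation volume
    "BCS superconductor (xi(1) ~ 1000 angstrom)": 1e8,   # ~10^8 electrons per coherence volume
}

for label, n_dof in cases.items():
    print(f"{label}: t_L ~ {ginzburg_reduced_temperature(n_dof):.0e}")

# Prints t_L ~ 1 for helium-4 (deviations from Landau theory easily visible)
# and t_L ~ 1e-16 for a BCS superconductor (experimentally inaccessible).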

For high-Tc superconductors, the Cooper pair size is of order 10\textrm{\AA} and therefore deviations from Landau theory can be observed in experiment. The last thing to note about these formulas and approximations is that two parameters determine whether Landau theory works in practice: the number of dimensions and the range of interactions.

*Much of this post has been unabashedly pilfered from N. Goldenfeld’s book Lectures on Phase Transitions and the Renormalization Group, which I heartily recommend for further discussion of these topics.


What is Scientific Consensus?

When a theory is put forward, it takes time for the scientific community to evaluate its merits. Ultimately, one hopes that the theory is able to not only explain past data, but to be able to predict the outcome of future experiments as well. When the dust settles, we hope that we reach “scientific consensus” regarding a theory. But what does this mean?

Since this is a condensed matter blog, let us take BCS theory as an example. When BCS theory was formulated, it was able to explain numerous experimental observations, such as the temperature dependence of the electronic gap and the specific heat anomaly, among several others. However, there were also apparent problems with BCS theory. Many physicists were concerned with the non-conservation of particle number and with some aspects of broken gauge symmetry (pdf!) in the theory. Notably also, there were materials that did not conform exactly to the BCS formulas, such as Pb (lead), for which the ratio 2\Delta/k_BT_c was predicted to be 3.5 but was measured to be around 4.38.

So the question is, how were these issues resolved and how did the community reach the general consensus that BCS theory was applicable for the existing superconductors at that point in history?

This question actually leads to a more general scientific question: how do we reach a consensus concerning a theory? The answer involves a Bayesian approach. We start with a prior probability based on our biases and update this prior as we examine more and more data, making predictions as we go along. If physicist A has spent the past 10 years actively working on a different theory of superconductivity and secretly hopes that BCS theory is wrong, s/he may start out with only 3% confidence that BCS theory is correct. On the other hand, physicist B may be completely neutral and would have a prior probability of 50%. A third physicist C might be swayed by the fact that Bardeen had just won a Nobel prize in physics for the invention of the transistor and therefore start with an initial confidence level of 85% that BCS is correct. These numbers constitute the physicists’ prior probabilities, or “biases”.

What happens with time? Well, BCS predicted the existence of the Hebel-Slichter peak in the NMR spectrum, which was then observed shortly thereafter. Furthermore, Anderson showed that one could project out a particle-conserving part of the ground state, which resolved some theoretical issues pertaining to particle-number conservation. Gor'kov was also able to show that the phenomenological equations of Ginzburg and Landau are derivable from BCS theory (pdf!). McMillan and Rowell then conducted their famous experiments in which they analyzed the second derivative of tunneling spectra, which exhibited phonon anomalies, to explain why lead did not obey the simple BCS formalism but required a small extension.

As these data points accumulated, confidence in BCS theory grew for physicists A, B and C. In a Bayesian picture, we update our beliefs as we obtain more and more data points that are consistent with (or resolve questions pertaining to) a particular theory. Ultimately, the members of the scientific community asymptotically approach a point where they understand the domain of validity of BCS theory and what it can predict. The picture I have in mind to represent this process is plotted below:

[Figure: confidence in BCS theory vs. accumulated evidence for physicists A, B and C, converging under Bayesian updating]

This plot shows a Bayesian updating scheme starting from different prior beliefs. The convergence of the viewpoints of physicists A, B and C is what is crudely meant by scientific consensus. Note that a person who starts out with a dogmatic 0% belief in the correctness of BCS theory will not change his/her mind with time, no matter how much evidence accumulates.
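Here is a toy Python sketch of the updating scheme behind that plot (the likelihood values and the stream of ten confirming observations are invented purely for illustration, not fit to any real history):

# Toy Bayesian updating: physicists with different priors update their
# confidence in a theory as confirming observations accumulate.

def bayes_update(prior, p_data_if_true=0.8, p_data_if_false=0.3):
    """Posterior belief in the theory after one observation it predicted."""
    numerator = p_data_if_true * prior
    return numerator / (numerator + p_data_if_false * (1.0 - prior))

priors = {
    "A (skeptical, 3%)": 0.03,
    "B (neutral, 50%)": 0.50,
    "C (favorable, 85%)": 0.85,
    "dogmatic (0%)": 0.00,
}

n_observations = 10  # e.g. Hebel-Slichter peak, Gor'kov's GL derivation, tunneling data, ...
for name, belief in priors.items():
    for _ in range(n_observations):
        belief = bayes_update(belief)
    print(f"{name}: final confidence = {belief:.3f}")

# Every nonzero prior converges toward (but never reaches) 1, while the
# dogmatic 0% prior stays at 0 no matter how much evidence accumulates.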

It is important to emphasize that what I have called the 100% confidence level in my plot is meant to indicate a place where we understand the limitations and validity of a theory and how/when to apply this theory. For example, we can have 99.9% confidence that Newton’s theory of gravity will enable us to solve simple kinematics problems on the surface of the earth. While we know that Newton’s theory of gravity requires corrections from Einstein’s theory of general relativity, our confidence in Newton’s theory is not diminished when used in the correct limits. Therefore, in this Bayesian scheme, we get closer and closer to being 100% confident in a theory, but never quite reach it.

This is a rather Popperian view of scientific consensus, and we know the limits of such a view in light of Kuhn’s work, but I think it nonetheless serves as a valuable guide for thinking about a concept that is so often corrupted, especially in regard to the climate change discussion. Therefore, in the future, when people talk about scientific consensus, think convergence and think Bayes.

Theory of Everything – Laughlin and Pines

I recently re-visited a paper written in 2000 by Laughlin and Pines entitled The Theory of Everything (pdf!). The main claim in the paper is that what we call the theory of everything in condensed matter (the Hamiltonian below) does not capture “higher organizing principles”. Condensed Concepts blog has a nice summary of the article.

H = -\sum_j \frac{\hbar^2}{2m}\nabla_j^2 - \sum_\alpha \frac{\hbar^2}{2M_\alpha}\nabla_\alpha^2 - \sum_{j,\alpha} \frac{Z_\alpha e^2}{|\mathbf{r}_j - \mathbf{R}_\alpha|} + \sum_{j<k} \frac{e^2}{|\mathbf{r}_j - \mathbf{r}_k|} + \sum_{\alpha<\beta} \frac{Z_\alpha Z_\beta e^2}{|\mathbf{R}_\alpha - \mathbf{R}_\beta|}

Because we can measure quantities like e^2/h and h/2e in quantum Hall experiments and superconducting rings respectively, it must be that the theory of everything does not capture some essential physics that emerges only on a different scale. In their words:

These things [e.g. that we can measure e^2/h] are clearly true, yet they cannot be deduced by direct calculation from the Theory of Everything, for exact results cannot be predicted by approximate calculations. This point is still not understood by many professional physicists, who find it easier to believe that a deductive link exists and has only to be discovered than to face the truth that there is no link. But it is true nonetheless. Experiments of this kind work because there are higher organizing principles in nature that make them work.

If I am perfectly honest, I am one of those people who “believes that a deductive link exists”. Let me take the example of the BCS Hamiltonian. I do think that it is reasonable to start with the theory of everything, make a series of approximations, and arrive at the BCS Hamiltonian. From BCS, one can then derive the Ginzburg-Landau (GL) equations, as shown by Gor'kov (pdf!). Not only that, one can obtain the Josephson effect (where one can measure h/2e) using either a BCS or a GL approach.

The reason I bring this example up is that I would rather believe that a deductive link does exist, and that even though approximations have been made, there is some topological property that survives at each “higher” level. Said another way, in going from the TOE to BCS to GL, one keeps some fundamental topological characteristics intact.

It is totally possible that what I am saying is gobbledygook. But I do think that the Laughlin-Pines viewpoint is speculative, radical, and has perhaps taken Anderson’s “more is different” perspective too far. It is a thought-provoking article, partly because of the weight that the authors’ names carry and partly because of the self-assured tone of the piece, but I am a little more conservative in my scientific outlook. The TOE may not always be useful, but I don’t think that means that “no deductive link exists” either.

I’m curious to know whether you see things the way Laughlin and Pines do.