
Strontium Titanate – A Historical Tour

Like most ugly haircuts, materials tend to go in and out of style over time. Strontium titanate (SrTiO3), commonly referred to as STO, has, since its discovery, been somewhat timeless. And this is not just because it is often used as a substitute for diamonds. What I mean is that studying STO rarely seems to go out of style and the material always appears to have some surprises in store.

STO was first synthesized in the 1950s, before it was discovered naturally in Siberia. It didn’t take long for research on this material to take off. One of the first surprising results that STO had in store was that it became superconducting when reduced (electron-doped). This is not remarkable in and of itself, but this study and other follow-up ones showed that superconductivity can occur with a carrier density of only ~5\times 10^{17} cm^{-3}.

This is surprising in light of BCS theory, where the Fermi energy is assumed to be much greater than the Debye energy, \hbar\omega_D — which is clearly not the case here. There have been claims in the literature suggesting that the superconductivity may be plasmon-induced, since the plasma frequency at these carrier densities lies in the phonon energy regime. L. Gorkov recently put a paper up on the arXiv discussing the mechanism problem in STO.
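To get a feel for the inversion of scales, here is a minimal back-of-the-envelope sketch in Python. All inputs are rough assumptions for illustration: a single parabolic band, an effective mass of order the free-electron mass, and a representative optical phonon energy of a few tens of meV (STO’s actual bands are heavier and multi-valley, which only makes the point stronger).

import numpy as np

# Rough comparison of the Fermi energy at n ~ 5e17 cm^-3 with a typical optical
# phonon energy. All inputs below are illustrative assumptions, not measured values.

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
k_B = 1.380649e-23       # J/K
meV = 1.602176634e-22    # J per meV

n = 5e17 * 1e6           # carrier density: 5e17 cm^-3 converted to m^-3
m_eff = 1.0 * m_e        # assumed effective mass
E_phonon = 50.0 * meV    # assumed representative optical phonon energy scale

k_F = (3 * np.pi**2 * n) ** (1.0 / 3.0)   # free-electron-like Fermi wavevector
E_F = hbar**2 * k_F**2 / (2 * m_eff)      # parabolic-band Fermi energy

print(f"E_F ~ {E_F / meV:.1f} meV (~{E_F / k_B:.0f} K)")
print(f"E_F / E_phonon ~ {E_F / E_phonon:.2f}")

Even with these crude inputs the Fermi energy comes out at only a few meV (a few tens of kelvin), far below typical phonon energies of tens of meV — the opposite of the hierarchy of scales assumed in BCS theory.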

Soon after the initial work on superconductivity in doped STO, Shirane, Yamada and others began studying pure STO in light of the predicted “soft mode” theory of structural phase transitions put forth by W. Cochran and others. Because of the antiferrodistortive structural phase transition at ~110K (depicted below), they were able to observe a corresponding soft phonon at the Brillouin zone boundary associated with this transition (shown below, taken from this paper). These results had vast implications for how we understand structural phase transitions today, where it is almost always assumed that, at a continuous structural phase transition, some phonon softens as the transition temperature is approached.

Many materials similar to STO, such as BaTiO3 and PbTiO3, which also have a perovskite crystal structure motif, undergo a phase transition to a ferroelectric state at low (or not so low) temperatures. The transition to the ferroelectric state is accompanied by a diverging dielectric constant (and dielectric susceptibility) much in the way that the magnetic susceptibility diverges in the transition from a paramagnetic to a ferromagnetic state. In 1978, Muller (of Bednorz and Muller fame) and Burkard reported that at low temperature, the dielectric constant begins its ascent towards divergence, but then saturates at around 4K (the data is shown in the top panel below). Ferroelectricity is associated with a zone-center softening of a transverse phonon, and in the case of STO, this process begins, but doesn’t quite get there, as shown schematically in the image below (and you can see this in the data by Shirane and Yamada above as well).

[Figure: signatures of quantum paraelectricity in STO — the dielectric constant saturating at low temperature (top panel) and the zone-center transverse optical phonon that softens but never fully condenses (bottom panel). Taken from Wikipedia.]

The saturation of the large dielectric constant and the not-quite-softening of the zone center phonon has led authors to refer to STO as a quantum paraelectric (i.e. because of the zero-point motion of the transverse optical zone-center phonon, the material doesn’t gain enough energy to undergo the ferroelectric transition). As recently as 2004, however, it was reported that one can induce ferroelectricity in STO films at room temperature by straining the film.

In recent times, STO has found itself used as a common substrate material due to processes that can make it atomically flat. While this may not sound so exciting, it has had vast implications for the physics of thin films and interfaces. Firstly, this property has enabled researchers to grow high-quality thin films of cuprate superconductors using molecular beam epitaxy, which was a big challenge in the 1990s. More recently, it has led to the discovery of a two-dimensional electron gas, superconductivity and ferromagnetism at the LAO/STO interface, a startling finding given that both materials are electrically insulating. Even more startlingly, when FeSe (a superconductor at around 7K in bulk) is grown as a monolayer film on STO, its transition temperature is boosted to around 100K (the precise transition temperature is disputed in subsequent experiments, but it remains remarkably high!). This has led to the idea that the FeSe somehow “borrows the pairing glue” from the underlying substrate.

STO is a gem of a material in many ways. I doubt that we are done with its surprises.


Consistency in the Hierarchy

When writing on this blog, I try to share nuggets here and there of phenomena, experiments, sociological observations and other peoples’ opinions I find illuminating. Unfortunately, this format can leave readers wanting when it comes to some sort of coherent message. Precisely because of this, I would like to revisit a few blog posts I’ve written in the past and highlight the common vein running through them.

Condensed matter physicists of the last couple of generations have grown up with the idea that “More is Different” ingrained in them, a concept first coherently put forth by P. W. Anderson and carried further by others. Most discussions of these ideas concentrate on the notion that there is a hierarchy of disciplines in which each discipline is not logically dependent on the one beneath it. For instance, in solid state physics, we do not need to start at the level of quarks and build up from there to obtain many properties of matter. More profoundly, one can observe phenomena that distinctly arise in the context of condensed matter physics, such as superconductivity, the quantum Hall effect and ferromagnetism, that one wouldn’t necessarily predict by just studying particle physics.

While I have no objection to these claims (and actually agree with them quite strongly), it seems to me that one rather simple (almost trivial) point is infrequently mentioned when these concepts are discussed. That is the role of consistency.

While it is true that one does not necessarily require the lower level theory to describe the theories at the higher level, these theories do need to be consistent with each other. This is why, after the publication of BCS theory, there were a slew of theoretical papers that tried to come to terms with various aspects of the theory (such as the approximation of particle number non-conservation and features associated with gauge invariance (pdf!)).

This requirement of consistency is what makes concepts like the Bohr-van Leeuwen theorem and Gibbs paradox so important. They bridge two levels of the “More is Different” hierarchy, exposing inconsistencies between the higher level theory (classical mechanics) and the lower level (the micro realm).

In the case of the Bohr-van Leeuwen theorem, it shows that classical mechanics, when applied to the microscopic scale, is not consistent with the observation of ferromagnetism. In the Gibbs paradox case, classical mechanics, when not taking into account particle indistinguishability (a quantum mechanical concept), is inconsistent with the requirement that the entropy remain unchanged when a gas tank is divided into two equal partitions.
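To make the Gibbs paradox statement quantitative, here is the standard textbook bookkeeping (a schematic sketch; the constants inside the logarithms depend on conventions). Without the indistinguishability factor of 1/N!, the classical ideal-gas entropy is not extensive, and removing a partition between two identical halves of a gas appears to create entropy:

S_{\textrm{dist}}(N, V, T) = N k_B\left[\ln\left(\frac{V}{\lambda_T^3}\right) + \frac{3}{2}\right], \qquad \lambda_T = \frac{h}{\sqrt{2\pi m k_B T}}

\Delta S = S_{\textrm{dist}}(2N, 2V, T) - 2\,S_{\textrm{dist}}(N, V, T) = 2 N k_B \ln 2 \neq 0

S_{\textrm{indist}}(N, V, T) = N k_B\left[\ln\left(\frac{V}{N\lambda_T^3}\right) + \frac{5}{2}\right] \quad\Rightarrow\quad \Delta S = 0

Only the second, quantum-mechanically motivated counting makes the entropy a function of V/N and therefore consistent with the partitioning argument.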

Today, we have the issue that ideas from the micro realm (quantum mechanics) appear to be inconsistent with our ideas on the macroscopic scale. This is why matter interference experiments are still carried out today. It is imperative to know why it is possible for a C60 molecule (or a 10,000 amu molecule) to be described by a single wavefunction in a Schrodinger-like scheme, whereas this seems implausible for, say, a cat. There does again appear to be some inconsistency here, though there are frameworks, like decoherence, that attempt to get around this (without, as yet, a consensus). I also can’t help but mention that non-locality, à la Bell, also seems totally at odds with one’s intuition on the macro-scale.

What I want to stress is that the inconsistency theorems (or paradoxes) contained seeds of some of the most important theoretical advances in physics. This is itself not a radical concept, but it often gets neglected when a generation grows up with a deep-rooted “More is Different” scientific outlook. We sometimes forget to look for concepts that bridge disparate levels of the hierarchy and subsequently look for inconsistencies between them.

Paradigm Shifts and “The Scourge of Bibliometrics”

Yesterday, I attended an insightful talk by A.J. Leggett at the APS March Meeting entitled Reflection on the Past, Present and Future of Condensed Matter Physics. The talk was interesting in two regards. Firstly, he referred to specific points in the history of condensed matter physics that resulted in (Kuhn-type) paradigm shifts in our thinking of condensed matter. Of course these paradigm shifts were not as violent as special relativity or quantum mechanics, so he deemed them “velvet” paradigm shifts.

This list, which he acknowledged was personal, consisted of:

  1. Landau’s theory of the Fermi liquid
  2. BCS theory
  3. Renormalization group
  4. Fractional quantum Hall effect

Notable absentees from this list were superfluidity in 3He, the integer quantum Hall effect, the discovery of cuprate superconductivity and topological insulators. He argued that these latter advances did not result in major conceptual upheavals.

He went on to elaborate the reasons for these velvet revolutions, which I enumerate to correspond to the list above:

  1. Abandonment of full microscopic theory in favor of Landau parameters; experimental properties are related to one another, with experiment providing the input
  2. Use of an effective low-energy Hamiltonian to describe a phase of matter
  3. Concept of universality and scaling
  4. Discovery of quasiparticles with fractional charge

It is informative to think about condensed matter physics in this way, as it demonstrates the conceptual advances that we almost take for granted in today’s work.

The second aspect of his talk that resonated strongly with the audience was what he dubbed “the scourge of bibliometrics”. He told the tale of his own formative years as a physicist. He published one single-page paper for his PhD work. Furthermore, once appointed as a lecturer at the University of Sussex, his job was to lecture and teach from Monday through Friday. If he did this well, it was considered a job well done. If research was something he wanted to partake in as a side project, he was encouraged to do so. He discussed how this atmosphere allowed him to develop as a physicist, without the requirement of publishing papers for career advancement.

Furthermore, he claimed, because of the current focus on metrics, burgeoning young scientists are now encouraged to seek out problems that they can solve in a time frame of two to three years. He saw this as a terrible trend. While it is often necessary to complete short-term projects, it is also important to think about problems that one may be able to solve in, say, twenty years, or maybe even never. He claimed that this is what is meant by doing real science — jumping into the unknown. In fact, he asserted that if he were to give any advice to graduate students, postdocs and young faculty in the audience, it would be to try to spend about 20% of one’s time committed to some of these long-term problems.

This raises a number of questions in my mind. It is well-acknowledged within the community, and even in the blogosphere, that the focus on publication counts and short-termism within the condensed matter physics community is detrimental. Both Ross McKenzie and Doug Natelson have expressed such sentiments numerous times on their blogs. From speaking to almost every physicist I know, this is the consensus opinion. The natural question to ask, then, is: if this is the consensus opinion, why is the modern climate the way it is?

It seems to me like part of this comes from the competition for funding among different research groups and funding agencies needing a way to discriminate between them. This leads to the widespread use of metrics, such as h-indices and publication number, to decide whether or not to allocate funding to a particular group. This doesn’t seem to be the only reason, however. Increasingly, young scientists are judged for hire by their publication output and the journals in which they publish.

Luckily, the situation is not all bad. Because so many people openly discuss this issue, I have noticed that there is a certain amount of push-back from individual scientists. On my recent postdoc interviews, the principal investigators were most interested in what I was going to bring to the table rather than in perusing my publication list. I appreciated this immensely, as I had spent a large part of my graduate years pursuing instrumentation development. Nonetheless, I still felt a great deal of pressure to publish papers towards the end of graduate school, and it is this feeling of pressure that needs to be alleviated.

Strangely, I often find myself working in spite of the prevailing forces rather than being encouraged by them. I highly doubt that I am the only one with this feeling.

Reflecting on General Ideas

In condensed matter physics, it is easy to get lost in the details of one’s day-to-day work. It is important to sometimes take the time to reflect upon what you’ve done and learned and think about what it all means. In this spirit, below is a list of some of the most important ideas related to condensed matter physics that I picked up during my time as an undergraduate and graduate student. This is of course personal, and I hope that in time I will add to the list.

  1. Relationship between measurements and correlation functions
  2. Relationship between equilibrium fluctuations and non-equilibrium dissipative channels (i.e. the fluctuation-dissipation theorem)
  3. Principle of entropy maximization/free-energy minimization for matter in equilibrium
  4. Concept of the quasi-particle and screening
  5. Concept of Berry phase and the corresponding topological and geometrical consequences
  6. Broken symmetry, the Landau paradigm of phase classification and the idea of an order parameter
  7. Sum rules and the corresponding constraints placed on both microscopic theories and experimental spectra
  8. Bose-Einstein and Cooper Pair condensation and their spectacular properties
  9. Logical independence of physical theories from the theory of everything
  10. Effects of long-range vs. short-range interactions on macroscopic properties of solids
  11. Role of dimensionality in observing qualitatively different physical properties and phases of matter

The first two items on the list are well-explained in Forster’s Hydrodynamic Fluctuations, Broken Symmetry, and Correlation Functions without the use of Green’s functions and other advanced theoretical techniques. Although not yet condensed matter phenomena, Bell’s theorem and non-locality rank among the most startling consequences of quantum mechanics that I learned in graduate school. I suspect that their influence will be observed in a condensed matter setting in due time.

Please feel free to share your own ideas or concepts you would add to the list.

What is Scientific Consensus?

When a theory is put forward, it takes time for the scientific community to evaluate its merits. Ultimately, one hopes that the theory is able to not only explain past data, but to be able to predict the outcome of future experiments as well. When the dust settles, we hope that we reach “scientific consensus” regarding a theory. But what does this mean?

Since this is a condensed matter blog, let us take BCS theory as an example. When BCS was formulated, it was able to explain numerous experimental observations, such as the evolution of the electronic gap as a function of temperature as well as the specific heat anomaly, among several others. However, there were also apparent problems with BCS theory. Many physicists were concerned with the non-conservation of particle number and with some aspects of broken gauge symmetry (pdf!) in the theory. Notably also, there were materials that did not conform exactly to the BCS formulas, such as Pb (lead), where the ratio 2\Delta/k_BT_c, predicted by BCS to be 3.5, was instead found to be around 4.38.

So the question is, how were these issues resolved and how did the community reach the general consensus that BCS theory was applicable for the existing superconductors at that point in history?

This question actually leads to a more general scientific question: how do we reach a consensus concerning a theory? The answer to this question involves a Bayesian approach. We start with a prior probability based on our biases and update this prior as we examine more and more data, making predictions as we go along. If physicist A has spent the past 10 years working actively on a rival theory of superconductivity and secretly hopes that BCS theory is wrong, s/he may start out with only 3% confidence that BCS theory is correct. On the other hand, physicist B may be completely neutral and would have a prior probability of 50%. Another physicist C would perhaps be swayed by the fact that Bardeen had just won a Nobel prize in physics for the invention of the transistor and therefore has an initial confidence level of 85% that BCS is correct. These constitute the physicists’ prior probabilities or “biases”.

What happens with time? Well, BCS predicted the existence of the Hebel-Slichter peak in the NMR spectrum, which was then observed shortly thereafter. Furthermore, Anderson showed that one could project out a particle-conserving part of the ground-state, which resolved some theoretical issues pertaining to particle-number conservation. Gorkov was also able to show that the phenomenological equations of Ginzburg and Landau were derivable from BCS theory (pdf!). McMillan and Rowell then conducted their famous experiments where they analyzed the second derivative of tunneling spectra, which exhibited phonon anomalies, to explain why lead did not obey the simple BCS formalism, but required a small extension.

As these data points accumulated, confidence in BCS theory grew for physicists A, B and C. In a Bayesian picture, we update our beliefs as we get more and more data points that are consistent with (or resolve questions pertaining to) a particular theory. Ultimately, the members of the scientific community would asymptotically approach a place where they understand the domain of validity of BCS theory and understand what it can predict. The picture I have in mind to represent this process is plotted below:

[Figure: schematic plot of the confidence of physicists A, B and C in BCS theory converging towards 100% as consistent data accumulate.]

The plot depicts a Bayesian updating scheme starting from different prior beliefs. The convergence of the viewpoints of physicists A, B and C is what is crudely meant by scientific consensus. Note that a person who starts out with a dogmatic 0% belief in the correctness of BCS theory will never change his/her mind, no matter how much data accumulates.
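For concreteness, here is a minimal sketch of the kind of updating I have in mind. The priors, likelihoods and number of experiments below are invented for illustration and are not estimates of the actual history.

# Toy Bayesian updating: three physicists (A, B, C) start from different priors
# and update their confidence in a theory each time an experiment comes out
# consistent with it. All numbers are illustrative assumptions.

priors = {"A": 0.03, "B": 0.50, "C": 0.85}

p_data_if_true = 0.9    # assumed probability of a "consistent" result if the theory is right
p_data_if_false = 0.3   # assumed probability of the same result if the theory is wrong

def update(p):
    """One application of Bayes' rule: P(theory | consistent data)."""
    return p * p_data_if_true / (p * p_data_if_true + (1 - p) * p_data_if_false)

for name, p in priors.items():
    trajectory = [p]
    for _ in range(15):   # e.g. Hebel-Slichter peak, Gor'kov's derivation of GL, tunneling spectra, ...
        p = update(p)
        trajectory.append(p)
    print(name, [round(x, 3) for x in trajectory[::5]])   # print a few checkpoints

# A dogmatic prior of exactly 0 never moves: update(0.0) == 0.0 for any amount of data.

The point is only the qualitative behaviour: all three trajectories converge toward (but never reach) full confidence, while a prior of exactly zero is immune to any amount of evidence.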

It is important to emphasize that what I have called the 100% confidence level in my plot is meant to indicate a place where we understand the limitations and validity of a theory and how/when to apply this theory. For example, we can have 99.9% confidence that Newton’s theory of gravity will enable us to solve simple kinematics problems on the surface of the earth. While we know that Newton’s theory of gravity requires corrections from Einstein’s theory of general relativity, our confidence in Newton’s theory is not diminished when used in the correct limits. Therefore, in this Bayesian scheme, we get closer and closer to being 100% confident in a theory, but never quite reach it.

This is a rather Popperian view of scientific consensus and we know the limits of such a view in light of Kuhn’s work, but I think it nonetheless serves as a valuable guide as to how to think about the concept which is so often corrupted, especially in regard to the climate change discussion. Therefore, in the future, when people talk about scientific consensus, think convergence and think Bayes.

Why Was BCS So Important?

BCS theory, which provides a microscopic framework to understand superconductivity, made us realize that a phenomenon similar to Bose-Einstein condensation was possible for fermions. This is far from a trivial statement, though we sometimes think of it as so in present times.

A cartoonish way to understand it is the following. We know that if you put a few fermions together, you can get a boson, such as 4-helium. It was also known, well before BCS theory, that one gets a phenomenon reminiscent of Bose-Einstein condensation, known as superfluidity, in 4-helium below 2.17K. The view of 4-helium as a Bose-Einstein condensate (BEC) was advocated strongly by Fritz London, who was perhaps the first to think of it in this way.

Now let us think of another type of boson, a diatomic molecule, as seen in gas form below:

[Figure: cartoon of a dilute gas of diatomic molecules]

Even if the individual atoms are fermions, one would predict that if this bosonic diatomic gas could remain in the gas phase all the way down to low temperature, it would at some point condense into a BEC. This idea is correct and is indeed what is observed.

However, the idea of a BEC becomes a little more cloudy when one considers a less dilute diatomic gas where the atoms are not so strongly bound together. In that case, the cartoon starts to look something like this:
[Figure: cartoon of a denser gas in which the “diatomic molecules” overlap]

Here the “diatomic molecules” are overlapping, and it is not easy to see which atoms are paired together to form a given molecule, if one can even ascribe this trait to them. In this case, it is no longer simple to see whether or not BEC will occur, and indeed whether there is some limiting separation between the molecules beyond which BEC no longer arises.

This is the question that BCS theory so profoundly addresses. It says that the “diatomic molecules” or Cooper pairs can span a great distance. In superconducting aluminum, this distance is ~16,000 Angstroms, which means the Cooper pairs are wildly overlapping. In fact, in this limit, the Cooper pair is no longer strictly even a boson, in the sense that Cooper pair creation and annihilation operators do not obey Bose-Einstein commutation relations.
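To put a number on “wildly overlapping”, here is a rough sketch. The conduction-electron density is the textbook free-electron value for aluminum and the coherence length is treated as the pair size; both are order-of-magnitude inputs used purely for illustration.

# How many conduction electrons sit inside the volume of a single Cooper pair in Al?
# Order-of-magnitude inputs only.

n_Al = 1.81e29          # conduction-electron density of Al, m^-3 (free-electron textbook value)
xi = 16000e-10          # coherence length ~16,000 Angstrom, in meters

print(f"Electrons per coherence volume: ~{n_Al * xi**3:.0e}")
# Of order 10^11-10^12 electrons share the volume of one "pair", so the pairs overlap
# enormously and cannot be treated as independent, well-separated bosons.

Compare this with the dilute-molecule cartoon above, where each “boson” occupies its own little region of space.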

However, the Cooper pair can still qualitatively be thought of as a pseudo-boson that undergoes pseudo-BEC, and this picture is indeed very useful. It enabled the prediction of pseudo-BEC in neutron stars, liquid 3-helium and ultra-cold fermionic gases — predictions which now have firm experimental backing.

An interesting note is that one can study this BCS-to-BEC crossover in ultracold Fermi gases and go from the overlapping to non-overlapping limit by tuning the interaction between atoms and I’ll try to write a post about this in the near future.

So while BCS theory has many attributes that make it important, to my mind, the most profound thing is that it presents a mechanism by which weakly interacting fermionic pairs can condense into a pseudo-BEC. This is not at all obvious, but indeed what happens in nature.

Update: In light of the description above, it seems surprising that the temperature at which Cooper pairs form is the same temperature at which they seem to condense into a pseudo-BEC. Why this is the case is not obvious and I think is an open question, especially with regards to the cuprates and in particular the pseudogap.

A Critical Ingredient for Cooper Pairing in Superconductors within the BCS Picture

I’m sure that readers who are experts in superconductivity are aware of this fact already, but there is a point that I feel is not stressed enough by textbooks on superconductivity. This is the issue of reduced dimensionality in BCS theory. In a previous post, I’ve shown the usefulness of thinking about the Cooper problem instead of the full-blown BCS solution, so I’ll take this approach here as well. In the Cooper problem, one assumes a 3-dimensional spherical Fermi surface like so:

[Figure: a 3D spherical Fermi surface]

What subtly happens when one solves the Cooper problem, however, is the reduction from three dimensions to two dimensions. Because only the electrons near the Fermi surface condense, one is really working in a shell around the Fermi surface like so, where the black portion does not participate in the formation of Cooper pairs:

[Figure: effective 2D topology associated with the Cooper problem — a thin shell around the Fermi surface; the black interior does not participate in pairing]

Therefore, when solving the Cooper problem, one goes from working in a 3D solid sphere (the entire Fermi sea), to working on the surface of the sphere, effectively a 2D manifold. Because one is now confined to just the surface, it enables one of the most crucial steps in the Cooper problem: assuming that the density of states (N(E)) at the Fermi energy is a constant so that one can pull it out of the integral (see, for example, equation 9 in this set of lecture notes by P. Hirschfeld).
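Schematically, that step looks as follows (a sketch following standard treatments of the Cooper problem; here E_B denotes the binding energy of the pair relative to 2E_F, and numerical factors in the exponent differ between conventions). Because the attraction acts only in a thin shell of width \hbar\omega_D around the Fermi surface, the sum over states becomes an energy integral with N(0) pulled out front:

1 = V \sum_{0<\xi_k<\hbar\omega_D} \frac{1}{2\xi_k + E_B} \approx N(0)\, V \int_0^{\hbar\omega_D} \frac{d\xi}{2\xi + E_B} = \frac{N(0)V}{2}\ln\left(\frac{2\hbar\omega_D + E_B}{E_B}\right)

\Rightarrow \quad E_B = \frac{2\hbar\omega_D}{e^{2/N(0)V} - 1} \approx 2\hbar\omega_D\, e^{-2/N(0)V}

The essential singularity in the coupling, e^{-2/N(0)V}, is the feature that the comparison with the 2D delta-function potential below makes transparent.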

The more important role of dimensionality, though, is in the bound state solution. If one solves the Schrodinger equation for the delta-function potential (i.e. V(x,y)= -\alpha\delta(x)\delta(y)) in 2D, one sees a quite stunning (but expected) resemblance to the Cooper problem. It turns out that the bound state energy takes the following form:

E \sim \exp{(-\textrm{const.}/\alpha)}.

Note that this is exactly the same functional form that appears in the solution to the Cooper problem, and this is of course not a coincidence. This function is not expandable in a Taylor series about \alpha = 0, as is so often stressed when solving the Cooper problem, and is therefore not amenable to perturbative methods. Note, also, that there is a bound state solution to this problem for any attraction \alpha > 0, again similar to the case of the Cooper problem. That there exists a bound state solution for any \alpha > 0, no matter how small, is only true in dimensions two or less. This is why reduced dimensionality is so critical to the Cooper problem.

Furthermore, it is well-known to solid-state physicists that for a Fermi gas/liquid, in 3D N(E) \sim \sqrt{E}, in 2D N(E) \sim const., while in 1D N(E) \sim 1/\sqrt{E}. Hence, if one is confined to two dimensions, one is able to treat the density of states as a constant and pull this term out of the integral (see equation 9 here again) even if the states are not confined to the vicinity of the Fermi energy.
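As a small numerical sanity check on this dimensionality argument (arbitrary units and an arbitrary cutoff; purely illustrative), consider a weak attraction g with cutoff \Lambda, for which a two-particle bound state of binding energy E_B requires 1 = g\int_0^{\Lambda} N(\xi)\, d\xi/(2\xi + E_B). With N(\xi) \sim const. in 2D the integral diverges logarithmically as E_B \rightarrow 0, so a solution exists for arbitrarily weak g; with N(\xi) \sim \sqrt{\xi} in 3D it saturates, so a weak attraction gives no bound state:

import numpy as np
from scipy.integrate import quad

# Bound-state condition for a weak attraction g with cutoff Lambda:
#   1 = g * I(E_B),  where  I(E_B) = integral_0^Lambda  N(xi) / (2*xi + E_B)  d(xi).
# Arbitrary units; only the trend as E_B -> 0 matters.

Lambda = 1.0

def dos(xi, d):
    """Free-particle density of states up to constants: N(xi) ~ xi**(d/2 - 1)."""
    return xi ** (d / 2.0 - 1.0)

def pair_integral(E_B, d):
    val, _ = quad(lambda xi: dos(xi, d) / (2.0 * xi + E_B), 0.0, Lambda, limit=500)
    return val

for E_B in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(f"E_B = {E_B:.0e}:  I_2D = {pair_integral(E_B, 2):7.2f}   I_3D = {pair_integral(E_B, 3):5.3f}")

# I_2D ~ (1/2) ln(2*Lambda/E_B) grows without bound, so 1 = g*I(E_B) has a solution for any g > 0.
# I_3D approaches a finite limit, so below a threshold coupling no bound state exists in 3D.

Note that this toy check ignores the Fermi sea entirely; the trick of the Cooper problem is that the filled Fermi sea restores the constant-density-of-states, effectively 2D, situation even in a 3D metal.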

This of course raises the question of what happens in an actual 2D or quasi-2D solid. Naively, it seems that in 2D a solid should be even more susceptible to the formation of Cooper pairs, with all the electrons in the Fermi sea participating rather than only those constrained to lie close to the Fermi surface.

If any readers have any more insight to share with respect to the role of dimensionality in superconductivity, please feel free to comment below.