# Monthly Archives: October 2015

## Macroscopic Wavefunctions, Off-Diagonal Long Range Order and U(1) Symmetry Breaking

Steven Weinberg wrote a piece a while ago entitled Superconductivity for Particular Theorists (pdf!). Although I have to admit that I have not followed the entire mathematical treatment in this document, I much appreciate the conceptual approach he takes in asking the following question:

How can one possibly use such approximations (BCS theory and Ginzburg-Landau theory) to derive predictions about superconducting phenomena that are essentially of unlimited accuracy?

He answers the question by stating that the general features of superconductivity can be explained using the fact that there is a spontaneous breakdown of electromagnetic gauge invariance. The general features he demonstrates to be consequences of broken gauge invariance are the following:

1. The Meissner Effect
2. Flux Quantization
3. Infinite Conductivity
4. The AC Josephson Effect
5. Vortex Lines

Although not related to this post per se, he also makes the following (somewhat controversial) comment that I have to admit I am quoting a little out of context:

“…superconductivity is not macroscopic quantum mechanics; it is the classical field theory of a Nambu-Goldstone field”

Now, while it may be true that one can derive the phenomena in the list above using the formalism outlined by Weinberg, I do think that there are other ways to obtain similar results that may be just as general. One way to do this is to assume the existence of a macroscopic wave function. This method is outlined in this (illuminatingly simple) set of lecture notes by M. Beasley (pdf!).

Another general formalism is outlined by C.N. Yang in this RMP, where he defines the concept of off-diagonal long range order for a two-particle density matrix. ODLRO can be defined for a single-particle density matrix in the following way:

$\lim_{|\textbf{r}-\textbf{r}'| \to \infty} \rho(\textbf{r},\textbf{r}') \neq 0$

This can be easily extended to the case of a two-particle density matrix appropriate for Cooper pairing (see Yang).

Lastly, there is a formalism similar to Yang’s, first developed by Penrose and Onsager and outlined by Leggett in his book Quantum Liquids. They conclude that many properties of Bose-Einstein condensation can be obtained by again examining the diagonalized density matrix:

$\rho(\textbf{r},\textbf{r}';t) = \sum_i n_i(t)\chi_i^*(\textbf{r},t)\chi_i(\textbf{r}',t)$

Leggett then goes on to say

“If there is exactly one eigenvalue of order N, with the rest all of order unity, then we say the system exhibits simple BEC.”

Again, this can easily be extended to the case of a two-particle density matrix when considering Cooper pairing.
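As a toy illustration of the Penrose-Onsager criterion (my own numerical sketch, not taken from Leggett; the system size and occupation numbers are arbitrary choices), one can build a single-particle density matrix with one macroscopically occupied orbital and verify that exactly one eigenvalue is of order $N$, and that the off-diagonal elements stay finite at large separation, which is Yang's ODLRO:

```python
import numpy as np

# Toy single-particle density matrix on a discretized ring:
# rho(x, x') = sum_k n_k chi_k*(x) chi_k(x'), with one occupation macroscopic.
M = 50                       # grid points / orbitals (arbitrary toy size)
N = 10_000                   # total particle number
x = np.arange(M)

# Orthonormal plane-wave orbitals chi_k(x) on the ring
chi = np.exp(2j * np.pi * np.outer(np.arange(M), x) / M) / np.sqrt(M)

# Occupations: one of order N (the condensate), the rest of order unity
n = np.ones(M)
n[0] = N - (M - 1)

rho = (chi.conj().T * n) @ chi          # rho(x, x')
eigvals = np.sort(np.linalg.eigvalsh(rho))[::-1]

print(eigvals[0])            # of order N: Leggett's "simple BEC"
print(eigvals[1])            # of order unity
print(abs(rho[0, M // 2]))   # off-diagonal element stays finite: ODLRO
```

The same diagnostic, applied instead to the two-particle density matrix, is what signals Cooper pairing in Yang's formulation.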

The 5-point list of properties of superconductors itemized above can then be derived using any of these general frameworks:

1. Broken Electromagnetic Gauge Invariance
2. Macroscopic Wavefunction
3. Off-Diagonal Long Range Order in the Two-Particle Density Matrix
4. Macroscopically Large Eigenvalue of Two-Particle Density Matrix

These are all model-independent formulations capable of describing the general properties associated with superconductivity. Items 3 and 4, and to some extent item 2, overlap conceptually. However, item 1 seems quite different to me. It seems to me that items 2, 3 and 4 more easily relate the concept of Bose-Einstein condensation to BCS-type condensation, and I appreciate this element of generality. However, I am not sure at this point which formulation is the most general or the most useful. I do have a preference for items 2 and 4, however, because they are the easiest for me to grasp intuitively.

Please feel free to comment, as this post was intended to raise a question rather than to answer it (which I cannot do at present!). I will continue to think about this question and will hopefully make a more thoughtful post with a concrete answer in the future.

## A Critical Ingredient for Cooper Pairing in Superconductors within the BCS Picture

I’m sure that readers who are experts in superconductivity are aware of this fact already, but there is a point that I feel is not stressed enough by textbooks on superconductivity. This is the issue of reduced dimensionality in BCS theory. In a previous post, I’ve shown the usefulness of thinking about the Cooper problem instead of the full-blown BCS solution, so I’ll take this approach here as well. In the Cooper problem, one assumes a 3-dimensional spherical Fermi surface like so:

3D Fermi Surface

What subtly happens when one solves the Cooper problem, however, is the reduction from three dimensions to two dimensions. Because only the electrons near the Fermi surface condense, one is really working in a shell around the Fermi surface like so, where the black portion does not participate in the formation of Cooper pairs:

Effective 2D Topology Associated with the Cooper Problem

Therefore, when solving the Cooper problem, one goes from working in a 3D solid sphere (the entire Fermi sea) to working on the surface of the sphere, effectively a 2D manifold. This confinement to the surface enables one of the most crucial steps in the Cooper problem: assuming that the density of states ($N(E)$) at the Fermi energy is a constant so that one can pull it out of the integral (see, for example, equation 9 in this set of lecture notes by P. Hirschfeld).

The more important role of dimensionality, though, is in the bound state solution. If one solves the Schrodinger equation for the delta-function potential (i.e. $V(x,y)= -\alpha\delta(x)\delta(y)$) in 2D, one sees a quite stunning (but expected) resemblance to the Cooper problem. It turns out that the solution to obtain a bound state takes the following form:

$E \sim \exp{(-\textrm{const.}/\alpha)}$.

Note that this is exactly the same function that appears in the solution to the Cooper problem, and this is of course not a coincidence. This function is not expandable in a Taylor series about $\alpha = 0$, as is so often stressed when solving the Cooper problem, and is therefore not amenable to perturbation methods. Note, also, that there is a bound state solution to this problem whenever $\alpha$ is finite, again similar to the case of the Cooper problem. That there exists a bound state solution for any $\alpha > 0$, no matter how small, is only true in dimensions two or lower. This is why reduced dimensionality is so critical to the Cooper problem.
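To make the non-perturbative character of this solution concrete, here is a small numerical sketch (my own, in units where $\hbar\omega_D = 1$, with $\lambda = N(0)V$) that solves the Cooper binding-energy equation by bisection and compares the result to the weak-coupling exponential form:

```python
import numpy as np

# Cooper-problem bound-state condition with a constant density of states N(0),
# in units where hbar*omega_D = 1 and lam = N(0)*V:
#   1 = lam * integral_0^1 dxi / (2*xi + E_b) = (lam/2) * ln(1 + 2/E_b)
def gap_lhs(Eb, lam):
    return (lam / 2) * np.log(1 + 2 / Eb)

def binding_energy(lam):
    """Solve gap_lhs(E_b, lam) = 1 by bisection in log(E_b)."""
    lo, hi = np.log(1e-40), np.log(10.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gap_lhs(np.exp(mid), lam) > 1:   # lhs too large -> E_b is larger
            lo = mid
        else:
            hi = mid
    return np.exp(0.5 * (lo + hi))

for lam in (0.3, 0.2, 0.1):
    Eb = binding_energy(lam)
    weak = 2 * np.exp(-2 / lam)   # weak-coupling form: no Taylor series in lam
    print(lam, Eb, weak)
```

A bound state appears for every $\lambda > 0$, but its energy collapses exponentially as the coupling weakens, which is exactly why no perturbative expansion about $\lambda = 0$ can find it.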

Furthermore, it is well-known to solid-state physicists that for a Fermi gas/liquid, in 3D $N(E) \sim \sqrt{E}$, in 2D $N(E) \sim$const., while in 1D $N(E) \sim 1/\sqrt{E}$. Hence, if one is confined to two-dimensions in the Cooper problem, one is able to treat the density of states as a constant, and pull this term out of the integral (see equation 9 here again) even if the states are not confined to the Fermi energy.
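As a quick brute-force check of these scalings (a sketch of my own: histogramming free-electron energies $E = |\textbf{k}|^2/2$ over a dense $k$-grid), the flat 2D density of states and the $\sqrt{E}$ behavior in 3D emerge numerically:

```python
import numpy as np

# Free-electron dispersion E = |k|^2 / 2; histogram energies over a dense
# k-grid to estimate the density of states N(E) in d = 2 and d = 3.
def dos(d, nk=60, nbins=20):
    k = np.linspace(-1, 1, nk)
    grids = np.meshgrid(*([k] * d))
    E = 0.5 * sum(g**2 for g in grids).ravel()
    E = E[E < 0.5]                           # stay inside the inscribed sphere
    hist, edges = np.histogram(E, bins=nbins, range=(0, 0.5))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / hist.sum()

E2, N2 = dos(2)
E3, N3 = dos(3)

# 2D: N(E) roughly constant;  3D: N(E) grows like sqrt(E)
print(N2[5] / N2[15])                            # close to 1
print(N3[15] / N3[5], np.sqrt(E3[15] / E3[5]))   # comparable values
```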

This of course raises the question of what happens in an actual 2D or quasi-2D solid. Naively, it seems that a 2D solid should be more susceptible to the formation of Cooper pairs, with all the electrons in the Fermi sea able to participate rather than only those constrained to be close to the Fermi surface.

If any readers have any more insight to share with respect to the role of dimensionality in superconductivity, please feel free to comment below.

## Another Lesson from the Coupled Oscillator: Fano Lineshape

Every once in a while, I read a paper that exhibits a rather odd spectral signature. This odd spectral signature is asymmetric and looks something like this (taken from Wikipedia):

Fano Resonance

This is called a Fano lineshape and occurs when an excitation interferes with a background process. “Background process” is pretty vague, but I hope that the notion becomes clearer below. Here are a couple papers that have observed Fano lineshapes in different contexts, including a review paper:

The reason I chose the first two papers is that a couple peaks in their respective spectra exhibit a Fano lineshape as a function of temperature. In the Raman paper, after the charge density wave gap in the single-particle spectrum opens up, the lineshapes of the phonons are no longer Fano-like, but become Lorentzian. This indicates that the Fano lineshape likely resulted from strong electron-phonon coupling.

Anyway, there is a pedagogical paper on the Fano resonance that is worth a read here (pdf!), and is where the example below comes from. This paper in fact illustrates the simplest and most enlightening case of the Fano resonance I’ve come across. The basic idea is that one has a pair of coupled oscillators, and someone is driving only one of the oscillators (ball 1 below). This is pictured below for clarity:

Only ball 1 is driven

Now, if one looks at the amplitude of the first oscillator, one gets the following plot:

The first oscillator was chosen to have a natural resonance frequency of 1, while the second oscillator was chosen to have a resonance frequency of 1.5. You can see that this is not exactly where the peaks show up, however. This is because of the level repulsion due to the coupling between the two oscillators.

Now the \$64k question: why does the asymmetric lineshape appear? Well, there are a few ingredients. First, the first oscillator is driven by an external source and the second oscillator is not. Second, there is a strong coupling between the first oscillator and the second. As one decreases the coupling between the two, the amplitude of the second (asymmetric) peak shrinks. Third, the natural frequencies of the oscillators need to be relatively close to one another; otherwise the second peak again decreases in amplitude.
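A minimal steady-state version of this coupled-oscillator model can be written down by solving the 2x2 linear response problem directly; the natural frequencies of 1 and 1.5 are the ones mentioned above, while the damping and coupling values are my own illustrative choices:

```python
import numpy as np

# Steady-state response of two coupled, damped oscillators; only
# oscillator 1 is driven (unit force F = 1):
#   (w1^2 - w^2 + i*g1*w) A1 + v*A2 = 1
#   v*A1 + (w2^2 - w^2 + i*g2*w) A2 = 0
w1, w2 = 1.0, 1.5       # natural frequencies, as in the plot described above
g1, g2 = 0.05, 0.01     # damping rates (illustrative values)
v = 0.3                 # coupling strength (illustrative value)

def amplitude_1(w):
    """|A1| at drive frequency w, after eliminating A2 from the 2x2 system."""
    d1 = w1**2 - w**2 + 1j * g1 * w
    d2 = w2**2 - w**2 + 1j * g2 * w
    return np.abs(d2 / (d1 * d2 - v**2))

w = np.linspace(0.5, 2.5, 2001)
A = amplitude_1(w)

# The main peak is symmetric and sits just below w1 (level repulsion), while
# the response dips nearly to zero at w2 (an antiresonance) before peaking
# just above it: the asymmetric Fano lineshape.
print(w[np.argmax(A)])      # main resonance, pushed below w1
print(amplitude_1(1.50))    # antiresonance near w2: strongly suppressed
print(amplitude_1(1.52))    # adjacent side of the dip: much larger
```

The asymmetry comes from the numerator $d_2$ nearly vanishing at $\omega \approx \omega_2$, so the second resonance has a zero sitting right next to its pole, which is precisely the structure of the Fano lineshape.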

What is remarkable is that this simple coupled oscillator model is able to exhibit the Fano lineshape while serving as a very instructive toy model. Because of this simplicity, the Fano lineshape is extremely general and is used to explain data in a wide variety of contexts across all areas of physics. A spectroscopist should always keep a look out for an asymmetric lineshape, as it is usually the result of a non-trivial coupling or interaction with another degree of freedom.

## The truth we all know but agree not to talk about

I am currently in the process of reading an enlightening short book entitled Quantum Chance by Nicolas Gisin, an authority on the foundations of quantum mechanics. You can actually download it for free here if your institution has a Springer publishing subscription. The book stands out as being accessible to the educated lay reader without sacrificing much in the form of profundity.

The main topics of this book are the implications of Bell’s theorem. Prior to Bell’s paper (pdf!), a possible view of quantum mechanics was that it predicted the statistical distribution of many events. However, back then, it was plausible that there existed an underlying theory, one we had yet to discover, that was described by a set of “hidden variables”. The idea was that these hidden variables would allow us to calculate the trajectory of a single particle deterministically, but that we just didn’t know the equations obeyed by the hidden variables. Quantum mechanics was just an approximate theory allowing us to calculate probability distributions of many events.

Behind the hidden variable theory was a philosophical stance called realism, strongly espoused by Einstein. Simply stated, realism is the belief that reality exists independent of the observers. This is counter to the orthodox view of quantum mechanics, which was emphasized by Bohr. The orthodox view is that the measurement of quantum systems causes a “wave function collapse” and that observables have no meaning until they are measured. The implication of the orthodox view is that reality is in the eyes of the observer and does not exist independent of the observer. There are other interpretations of quantum mechanics out there as well, but it is my understanding that these were the two prevailing views before Bell worked out his theorem.

Even though Bohr found it quite easy to give up the notion of realism, I find it quite difficult to abandon. At the very least, one should be able to describe the mechanism giving rise to “wave function collapse”, if this indeed even occurs. Regardless, Bell’s theorem, published in 1964 (pdf!), showed that local realism is untenable in quantum mechanics.

What does this mean? Well, I’ve described what realism means, so let me now take on locality, which is implicit in the hidden variables idea in the way Einstein originally conceived of it. According to Wikipedia, “the principle of locality states that an object is only directly influenced by its immediate surroundings”. This sounds quite vague, but Bell was able to show in a rigorous mathematical sense that if Bell’s inequality is violated, an event on one side of the universe can instantaneously affect an event on the other side of the universe. Stunningly, experiments indicate that quantum mechanics does indeed violate Bell’s inequality.
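As a concrete illustration (my own, using the standard CHSH form of the inequality rather than Bell's original 1964 version), one can check that the quantum singlet-state correlation $E(a,b) = -\cos(a-b)$ reaches $|S| = 2\sqrt{2}$, beyond the bound of 2 that any local hidden-variable theory must obey:

```python
import numpy as np
from itertools import product

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden-variable theories obey |S| <= 2; for the spin singlet,
# quantum mechanics predicts E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

a, ap = 0.0, np.pi / 2            # Alice's two measurement angles
b, bp = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement angles

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Deterministic local strategies assign fixed outcomes of +-1 to each
# setting; brute force over all of them shows none can exceed |S| = 2.
best = max(abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
           for A1, A2, B1, B2 in product((-1, 1), repeat=4))

print(abs(S), 2 * np.sqrt(2))   # quantum value: 2*sqrt(2) > 2
print(best)                     # 2: the local hidden-variable bound
```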

For a realist (and for adherents to most other interpretations of quantum mechanics), Bell’s theorem then suggests that the universe is inherently nonlocal. This notion of nonlocality, the idea that two things are somehow connected over vast empty space on an instantaneous time scale, bothered both Einstein and Newton greatly. Newton, whose theory of gravity is also nonlocal, said:

It is inconceivable that inanimate Matter should, without the Mediation of something else, which is not material, operate upon, and affect other matter without mutual Contact…That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro’ a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it. Gravity must be caused by an Agent acting constantly according to certain laws; but whether this Agent be material or immaterial, I have left to the Consideration of my readers.

I suspect that the solution to the nonlocality problem in quantum mechanics may end up needing a large conceptual overhaul. It is going to take a work of great insight to preserve locality, if it indeed can at all be preserved in some contrived way. Whatever the solution to this problem, I hope that I am alive to see it. I won’t be betting on it though.

## A Rather Illuminating Experiment Using Ultrafast Lasers

Historically, in condensed matter physics, there have generally been two experimental strategies: (i) scattering/spectroscopy experiments such as angle-resolved photoemission or X-ray scattering, and (ii) experiments involving macroscopic variables such as specific heat, resistivity, or magnetization. In the past few decades, a qualitatively new frontier has opened up. This consists of experiments that involve kicking a system out of equilibrium (usually with a pulsed femtosecond laser) and monitoring its relaxation back to equilibrium.

Appreciating the importance of this experiment requires a little background. There has been debate in the literature for a couple of decades now as to whether excitonic correlations drive the charge density wave transition in 1T-TiSe2. This experiment claims that one can non-thermally melt the excitonic order (with the ultrafast laser) while the lattice remains distorted. This is done by monitoring the optical response of the sample at time intervals after the intense laser pulse hits the sample: zone-folded phonons are monitored as evidence of the lattice distortion, while the plasmon peak energy is monitored as evidence of excitonic order. The authors conclude that the charge density wave in this material cannot arise from a purely excitonic mechanism, as the plasmon peak energy is drastically affected by the laser pulse while the zone-folded phonons do not react.

There is one caveat in this otherwise quite solid piece of work, however. The authors have equated the shift in the plasmon peak frequency (immediately following the arrival of the ultrafast laser pulse on the sample) with the melting of excitonic order. While this interpretation is plausible, it is not necessarily correct considering that the laser is photo-exciting a large number of charge carriers.

Regardless of this last point, the paper is definitely worth the read and highlights the kinds of experiments that can be conducted with these techniques. To my mind, this is one of the more illuminating experiments conducted on 1T-TiSe2 as many other experiments have been quite inconclusive about the mechanism behind the CDW in this material. Despite the aforementioned caveat, this experiment quite definitively demonstrates that one cannot ignore the role that electron-phonon coupling plays in the formation of the CDW in 1T-TiSe2.

## Things Everyone has Thought of, or Actually done, on an Exam

A little teaser:

Here’s a link to a few more…

## Do We Effectively Disseminate “Gems of Insight”?

The content of this post extends beyond condensed matter physics, but I’ll discuss it (as I do most things) within this context.

When attending talks, lectures or reading a paper, sometimes one is struck by what I will refer to as a “gem of insight” (GOI). These tend to be explanations of physical phenomena that are quite well-understood, but can be viewed more intuitively or elegantly in a somewhat unorthodox way.

For instance, one of the ramifications of the Meissner effect is that there is a difference between the longitudinal and transverse response to the vector potential even in the limit that $\textbf{q}\rightarrow 0$. This is discussed here in the lecture notes by Leggett, an effect I find to be quite profound and what I would call a GOI. Another example is the case where Brian Josephson was famously inspired, by P.W. Anderson’s GOI on broken symmetry in superconductors, to realize the effect now named after him. Here is a little set of notes by P.B. Allen discussing how the collective and single-particle properties of the electron gas are compatible, which also contains a few GsOI.

My concern in this post, though, is how such information is spread. It seems to me that most papers today are not necessarily concerned with spreading GsOI, but more with communicating results. Papers are used for “showing” and not “explaining”. Part of this situation arises from the fact that the length of papers is constrained by many journals, limiting the author’s capacity to discuss physical ideas at length rather than just “writing down the answer”.

Another reason is that it sometimes takes a long time for ideas to sink in among the community, and the most profound way to understand a result is only obtained after a period of deep reflection. In this case, publishing a paper on the topic is no longer appropriate because the topic is already considered solved. Publishing a paper with only a physical explanation of an already understood phenomenon is “not new” and likely to be rejected by most journals. This is part of the reason why the literature on topological insulators contained the clearest expositions of the quantum Hall effect!

So how should we disseminate GsOI? It seems to me that GsOI tend to be circulated in discussions between individual scientists or in lectures to graduate students, etc. — mostly informal settings. It is my personal opinion that these GsOI should be documented somewhere. I had the privilege of learning superconductivity from Tony Leggett, one of the authorities on the subject. Many ideas he expressed in class are hardly discussed in the usual superconductivity texts, and sometimes not anywhere! However, it would probably be extremely fruitful for his lectures to be recorded and uploaded to a forum (such as YouTube) so that anyone interested could watch them.

This is a difficult problem to solve in general, but I think that one of the ways we can rectify this situation is to include more space in papers for physical explanations while cleaning up lengthy introductions. Furthermore, we should not necessarily be discouraged from writing papers on topics that “aren’t new” if they contain important GsOI.

Do you agree? I’m curious to know what others think.