Tag Archives: Topology

Precision in Many-Body Systems

Measurements of the quantum Hall effect give a conductance precisely quantized in units of e^2/h. Measurements of the frequency of the AC current in a Josephson junction give a frequency of 2e/h times the applied voltage. Hydrodynamic circulation in liquid 4He is quantized in units of h/m_{4He}. These measurements (and similar ones like flux quantization) are remarkable. They yield fundamental constants to a great degree of accuracy in a condensed matter setting, a setting which Murray Gell-Mann once referred to as “squalid state” systems. How is this possible?

At first sight, it is stunning that the physics of the solid or liquid state could yield measurements so precise. When we consider the defects, impurities, surfaces and other imperfections present in any macroscopic system, these results become even more astounding.

So where does this precision come from? It turns out that in each case, one is measuring a quantity that depends only on the single-valued nature of the (appropriately defined) complex scalar wavefunction. The aforementioned quantities are measured in integer units, n, usually referred to as the winding number. Because the winding number is a topological quantity, in the sense that it arises in a multiply-connected space, these measurements are insensitive to the small variations in their surroundings.
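
Concretely, if \phi is the phase of this wavefunction, single-valuedness demands that \phi change by an integer multiple of 2\pi around any closed loop, and that integer is the winding number:

n = \frac{1}{2\pi}\oint \nabla \phi \cdot d\textbf{l}

Small, smooth perturbations can deform \phi locally, but they cannot change n without forcing the wavefunction to vanish somewhere along the loop.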

For instance, the leads used to measure the quantum Hall effect can be placed virtually anywhere on the sample, as long as the wires don’t cross each other. The samples can be of any (two-dimensional) geometry, i.e. a square, a circle or some complicated corrugated object. In the Josephson case, the weak links can be constrictions, an insulating oxide layer, a metal, etc. Imprecision in the experimental setup is not detrimental, as long as the topology of the setup remains the same.

Another ingredient required for this precision is a large number of particles. This can seem counter-intuitive, since one expects quantization at a microscopic rather than a macroscopic level, but it is the large number of particles that makes these effects possible. For instance, both the Josephson effect and the hydrodynamic circulation in 4He depend on the existence of a macroscopic complex scalar wavefunction or order parameter. In fact, if the superconductor becomes too small, effects like the Josephson effect, flux quantization and persistent currents all start to get washed out. There is a gigantic energy barrier preventing the decay from the n=1 current-carrying state to the n=0 non-current-carrying state due to the large number of particles involved (i.e. the higher winding number state is meta-stable). As one decreases the number of particles, the energy barrier is lowered and the system can start to tunnel from the higher winding number state to the lower winding number state.
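
A rough sketch of why particle number matters (my own back-of-the-envelope argument, not a rigorous one): in a ring of circumference L, a condensate of bosons of mass m in the winding-number-n state flows at

|\textbf{v}_s| = \frac{\hbar}{m}\frac{2\pi n}{L}

To unwind from n to n-1, the condensate amplitude must pass through zero somewhere around the ring (a “phase slip”), which costs roughly the condensation energy of all the particles in a coherence-length-sized slice of the ring. That cost, and hence the barrier, grows with the number of particles in that slice, which is why the quantization washes out in sufficiently small samples.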

In the quantum Hall effect, the samples need to be macroscopically large to prevent the edge states from interacting with each other. If the states on opposite edges are able to interact, they may hybridize and the conductance quantization gets washed out. This hybridization has been visualized in the context of 3D topological insulators using angle-resolved photoemission spectroscopy, in this well-known paper. Again, a large sample is needed to observe the effect.

It is interesting to think about where else such a robust quantization may arise in condensed matter physics. I suspect that there exist similar kinds of effects in different settings that have yet to be uncovered.

Aside: If you are skeptical about the multiply-connected nature of the quantum Hall effect, you can read about Laughlin’s gauge argument in his Nobel lecture here. His argument critically depends on a multiply-connected geometry.

Neither Energy Gap Nor Meissner Effect Imply Superflow

I have read several times in lecture notes, textbooks and online forums that the persistent current in a superconductor of annular geometry is a result of either:

  1. The opening of a superconducting gap at the Fermi surface
  2. The Meissner Effect

Neither of these is correct.

The energy gap at the Fermi surface is neither a sufficient nor a necessary condition for the existence of persistent supercurrents in a superconducting ring. It is not sufficient because gaps can occur for all sorts of reasons: semiconductors, Mott insulators and charge density wave systems all exhibit energy gaps separating the occupied states from the unoccupied states, yet these systems do not exhibit superconductivity.

Superconductivity does not require the existence of a gap either. It is possible to come up with models that exhibit superconductivity yet do not have a gap in the single-particle spectrum (see de Gennes Chapter 8 or Rickayzen Chapter 8). Moreover, the cuprate and heavy fermion superconductors possess nodes in their single-particle spectra and still exhibit persistent currents.

Secondly, the Meissner effect is often conflated with superflow in a superconductor, but it is an equilibrium phenomenon, whereas a persistent current is a non-equilibrium (metastable) phenomenon. Any conceptual attempt to draw conclusions about persistent currents in a superconducting ring from the Meissner effect therefore runs into this inherent obstacle.

So, obviously, I must address the lurking $64k question: why does the current in a superconducting ring not decay within an observable time-frame?

Getting this answer right is much more difficult than pointing out the flaws in the other arguments! The answer has to do with a certain “topological protection” of the current-carrying state in a superconductor. However one chooses to understand the superconducting state (whether through broken gauge symmetry, the existence of a macroscopic wavefunction, off-diagonal long-range order, etc.), it is the existence of a particular type of condensate and the ability to adequately define the superfluid velocity that enables superflow:

\textbf{v}_s = \frac{\hbar}{2m} \nabla \phi

where \phi is the phase of the order parameter and the superfluid velocity obeys:

\oint \textbf{v}_s \cdot d\textbf{l} = \frac{nh}{2m}
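
These two equations fit together through the single-valuedness of the order parameter: the phase can only change by an integer multiple of 2\pi around the ring, so

\oint \nabla \phi \cdot d\textbf{l} = 2\pi n \implies \oint \textbf{v}_s \cdot d\textbf{l} = \frac{\hbar}{2m}(2\pi n) = \frac{nh}{2m}

A state with n \neq 0 therefore carries a circulating current that cannot relax continuously to the n = 0 state; it can only decay by discrete phase slips, which are enormously suppressed in a macroscopic sample.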

The details behind these ideas are further discussed in this set of lecture notes, though I have to admit that these notes are quite dense. I still have some pretty major difficulties understanding some of the main ideas in them.

I welcome comments concerning these concepts, especially ones challenging the ideas put forth here.

An Integral from the SSH Model

A while ago, I was solving the Su-Schrieffer-Heeger (SSH) model for polyacetylene and came across an integral which I immediately thought was pretty cool. Here is the integral along with the answer:

\int_{0}^{2\pi} \frac{\delta(1+\tan^2(x))}{1+\delta^2\tan^2(x)}\frac{dx}{2\pi} = \mathrm{sgn}(\delta)

Just looking at the integral, it is difficult to see why, no matter what the value of \delta, the integral always gives +1 or -1, depending only on the sign of \delta. This means that whether \delta=1,000,000 or \delta=0.00001, you get the same result, in this case +1, as the answer to the integral! I’ll leave it to you to figure out why this is the case. (Hint: you can use contour integration, but you don’t have to.)
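
If you’d like to convince yourself numerically before working it out, here is a quick sanity check (a sketch using scipy; the break points passed to quad are just a warning about where the integrand’s weight piles up for extreme values of \delta):

import numpy as np
from scipy.integrate import quad

def integrand(x, delta):
    t2 = np.tan(x)**2
    return delta * (1.0 + t2) / (1.0 + delta**2 * t2) / (2.0 * np.pi)

for delta in (-1000.0, -0.3, -0.001, 0.001, 0.3, 1000.0):
    # For small |delta| the weight concentrates near x = pi/2 and 3*pi/2;
    # for large |delta| it concentrates near x = 0, pi and 2*pi.
    val, _ = quad(integrand, 0.0, 2.0 * np.pi, args=(delta,),
                  points=[np.pi / 2, np.pi, 3 * np.pi / 2], limit=200)
    print(f"delta = {delta:>8}: integral = {val:+.6f}")

Every line of output comes back as +1 or -1 to within the integration tolerance.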

It turns out that the result actually has some interesting topological implications for the SSH model, as there are fractional charges associated with the domain wall solitons. I guess it’s not so surprising that an integral with topological properties would show up in a physical system with topological characteristics! But I thought the integral was pretty amusing anyhow, so I thought I’d share it.
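
One way to see the topology explicitly: the integrand above is precisely the winding-number density of the planar curve (h_x, h_y) = (\cos x, \delta \sin x) around the origin, an ellipse that encircles the origin once, with orientation set by the sign of \delta. (This parametrization is my own choice, picked because it reproduces the integrand exactly.) A short numerical sketch:

import numpy as np

def winding(delta, n=200001):
    x = np.linspace(0.0, 2.0 * np.pi, n)
    h = np.cos(x) + 1j * delta * np.sin(x)  # the curve (cos x, delta*sin x) as a complex number
    theta = np.unwrap(np.angle(h))          # continuous phase along the curve
    return (theta[-1] - theta[0]) / (2.0 * np.pi)

for delta in (-5.0, -0.01, 0.01, 5.0):
    print(f"delta = {delta:>6}: winding number = {winding(delta):+.3f}")

Squashing the ellipse (small |\delta|) or stretching it (large |\delta|) changes nothing; only flipping its orientation through \delta = 0, where the curve degenerates and passes through the origin, changes the answer.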

Aside: For those who are interested in how I arrived at this integral in the SSH model, here are some of my notes. (Sorry if there are any errors and please let me know!) Also, the idea of solitons in the SSH model actually bears a strong qualitative resemblance to the excellent zipper analogy that Brian Skinner used on his blog.

General Aspects of Topology in Quantum Mechanics

Condensed matter physics has, in the past ten years or so, made a left turn towards studying the topological properties of materials. Following the discovery of the Quantum Hall Effect (QHE) in 1980, it took about 25 years to discover experimentally that similar phenomenology could occur in bulk samples, in the absence of a magnetic field, in topological insulators. In the current issue of Nature Physics, there are three papers demonstrating the existence of a Weyl semimetal in TaAs and NbAs. These states of matter bear a striking similarity to quantum mechanical effects such as the Aharonov-Bohm effect and the Dirac monopole problem.

So what do all of these things have in common? Well, I vaguely addressed this issue in a previous post concerning Berry phases, but I want to elaborate a little more here. First it should be understood that all of these problems take place on some sort of manifold. For instance, the Aharonov-Bohm effect takes place in a plane, the Dirac monopole problem on a 3D sphere and the problems in solid-state physics largely on a torus due to periodic boundary conditions.

Now, what makes all of these problems exhibit a robust topological quantization of some sort is that the Berry connection in these problems cannot adequately be described by a single function over the entire manifold. If one were to attempt to write down a single function for the Berry connection, there would necessarily exist a singularity somewhere on the manifold. But because the Berry connection is not an observable, one can instead write down two (or more) different functions on different parts (or “neighborhoods”) of the manifold. The price one has to pay is that one has to “patch” the functions together at the boundary of the neighborhoods. Therefore, the topological quantization in most of the problems described above arises because of a singularity in the Berry connection somewhere on the manifold that cannot be removed with a gauge transformation.

For instance, in the Aharonov-Bohm effect, the outside of the solenoid and the inside of the solenoid must be described by different functions, or else the “outside function” would be singular at the center of the solenoid. Qualitatively, one can think of the manifold as a plane with a hole punched in the middle of it. In the case of the Dirac monopole, the monopole itself is the position of the singularity, and there is a hole punched in three-dimensional space.
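
To make the patching concrete, here is the Wu-Yang construction for a monopole of charge g (sketched in Gaussian units; see the references below for the details). One covers the sphere with two potentials, one valid everywhere except the south pole and one valid everywhere except the north pole:

\textbf{A}_N = \frac{g(1-\cos\theta)}{r\sin\theta}\hat{\phi}, \qquad \textbf{A}_S = -\frac{g(1+\cos\theta)}{r\sin\theta}\hat{\phi}

On the overlap they differ by the pure gauge \textbf{A}_N - \textbf{A}_S = \nabla(2g\phi), and demanding that the accompanying phase factor e^{2ieg\phi/\hbar c} be single-valued in the azimuthal angle \phi yields the Dirac quantization condition 2eg/\hbar c = n.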

There is an excellent discussion on both these problems in Sakurai’s quantum mechanics textbook. I particularly like the approach he takes to the Dirac monopole problem, which he adapted from Wu and Yang’s elegant solution. The explanation of the QHE using similar ideas was developed in this great (but unfortunately quite mathematical) paper by Kohmoto (pdf!). I realize that this post only sketches the main point (with perhaps too much haste), but I hope that it will be illuminating to some.

Update: I have written a guest post for Brian Skinner’s blog Gravity and Levity where I discuss the topics here in a little more detail. You can read the post here if you’re interested.

*Private Communication*

Here is a correspondence that took place between Sarang and me, following his post last week about emergence and upward heritability, which was in response to a couple of my posts (here and here).

Subject: Post

Hey Sarang,

Great post — I’m glad that you wrote an article from the other side.

I do have a couple questions/comments though, as you are likely more knowledgeable on this subject than I.

To my understanding, GL theory is a coarse-grained version of BCS theory and many experimental properties of classic superconductors can be calculated using both methods. I also like this example because it is one of the few places I know of where one can derive the coarse-grained model from the underlying one (I am an experimentalist after all, so there may be many others I’m unaware of). Am I being misled in thinking that something “survived” in going from one level of the theory to another, or is this not a good example of “new principles at a different scale” that Laughlin and Pines referred to?

Anshul

Re:Post

This is a good question; I should have a succinct answer but don’t. Some thoughts:

1. I don’t see BCS as a “microscopic derivation” because a lot of coarse-grained ideas go into writing down the reduced BCS Hamiltonian: you throw out the Coulomb interaction because of Bohm-Pines, and then you reduce the remaining interactions to the Hartree-Fock and pairing channels because of Fermi surface kinematics, so that what remains is an exactly solvable model. The BCS coupling is in practice a phenomenological parameter, which is backed out from the gap. (Otherwise we would not be so bad at computing Tc’s.) The symmetry-broken (i.e., non-number-conserving) BCS wavefunction violates “heritability” because particle number conservation is precisely the sort of symmetry/conservation law that is supposed to persist from one level to the next. So I see BCS as mostly a reverse-engineered microscopic justification (i.e. a way of saying, look, you can get superconductivity with just electrons) rather than an example of reasoning *from* microscopic considerations to macroscopic results.

2. More generally, when you write down a solvable, microscopically specified toy model that describes some emergent phenomenon, I do not think this counts as a *deduction* unless the decisions on what effects to include and to neglect are based on microscopic considerations. And the renormalization group tells you that such microscopically informed decisions about what effects are important/worth keeping will in general be wrong. “Relevant” and “irrelevant” are properties of the RG flow, not of the initial Hamiltonian.

3. Of course macro-stuff is made up of micro-stuff; the issue is whether the relevant conceptual architecture is inherited or distinct. I’m arguing that it is distinct whenever the thermodynamic limit is nontrivial: there are notions like fractionalized excitations and Goldstone modes that cannot be articulated without reference to the thermodynamic limit. In a finite system there is never a sharp distinction between collective and non-collective parts of the spectrum; “in practice” we know how to identify collective excitations even in relatively small systems, but when we do this we are invoking thermodynamic rather than microscopic concepts.

Re:Re:Post

Thanks for your response. I have been taking some time to think about your answers. I also have a few comments, to which you can reply if you have the time.

1. BCS is not a “microscopic derivation”, but a lot of the work leading up to the full formulation (such as the Bardeen-Pines interaction) required careful thought about how phonons could cause an attractive interaction. Ultimately, the theory is a toy, but one based on some (I would consider) meticulously thought out microscopic considerations. I would even consider the demonstration of the isotope effect a microscopic experiment in some sense. As for number conservation, Tony’s book Quantum Liquids shows that this is not necessary to formulate a theory of superconductivity, but only a trick to make calculations of experimental quantities significantly easier. I think to say that actual particle number is not conserved in real life would be quite unnerving.

2. This point, I perhaps don’t understand as well in light of the response given for part 1. It seems to me that the Bardeen-Pines thought process was extremely significant.

3. I personally find it quite stunning that e.g. the Aharonov-Bohm effect (a microscopic effect) and the Quantum Hall quantization (a macroscopic effect) bear such a striking resemblance. These are the kinds of phenomena I speculate (and I suppose Wilczek might also) may be upwardly heritable.

Ultimately, it may be impossible to know whether L&P are right because one cannot solve the Schroedinger equation for 10^23 particles or put it on a computer to see if one gets the right answer. I choose to believe, however, that it would because it is more “natural” (perhaps my mind is lazy). This is where I think L&P are radical. They may have reasons for saying that the Schroedinger equation does not capture some essential physics, but until it is definitively shown to be true, I don’t think I will accept it. This does not mean that I don’t think that there is different physics on different scales. Water is indisputably wet. But to get the Schroedinger equation to exhibit this macroscopic property is indeed futile, though it may be possible in principle — and hydrodynamics is undoubtedly a less unwieldy description.

Anderson himself in his More is Different article says “the concept of broken symmetry has been borrowed by particle physicists, but their use of the term is strictly an analogy, whether a deep or specious one remains to be understood.”

I guess you say perhaps specious and I say perhaps deep?

Anshul

Re:Re:Re:Post

1-2. I guess I think the isotope effect experiment is precisely the opposite of a deduction *from* microscale *to* macroscale. The approach in that kind of expt is to identify the relevant terms in the microscopic Hamiltonian by varying them separately and see if the “answer” changes. In other words what it actually is is a deduction *from* a macroscale fact (Tc depends on isotopic mass) *to* a microscale conclusion about what terms in the microscopic Hamiltonian are truly important. Similarly, a key ingredient in Bardeen-Pines is the previous (Bohm-Pines) result that the Coulomb interaction gets sufficiently renormalized by the Debye frequency that it can actually lose to the electron-phonon coupling. Again, there is nothing deductively “microscopic” about this result: you have to include renormalization due to electron-hole pairs at all energies above the Debye frequency in order to see it, so the infrared behavior of the theory is already smuggled in! Not that there is anything wrong with this, but I do not think it can be correctly interpreted as a deduction from small scales to large scales. The really hard part of the BCS problem, after all, was writing down the BCS Hamiltonian, and the data used to do this came almost entirely from (a) macroscale experiments and (b) theoretical considerations which included infrared/emergent physics. If you want to make deductions from solving the Hamiltonian at the scale of a few atoms it is hopeless because at that scale what you have is just this huge Coulomb interaction and nothing else, and there is no obvious path from there toward a paired state.

The number-conserving version of BCS is basically like the Schrodinger’s cat ground state (all up + all down) in the Ising model. The fact that using the usual variable-number wavefunction is even *possible* undermines the notion that microscopic symmetries/conservation laws tell you anything useful about macroscopic physics.

3. I think part of the problem is that if you define “upward heritability” vaguely enough, anything can be upwardly heritable. I read Wilczek as saying: given some set of specific, robust facts about the microscopic physics (e.g. symmetries, AB periodicities), you can derive strong constraints on the kinds of macroscopic physics that are possible. I think these constraints are actually weak to the point of barely existing [1] (because the microscopic symmetries can be broken and new symmetries can emerge; the AB periodicity can change…). Perhaps you want to argue that it’s surprising the macroscopic system can even be described using the language of symmetries and AB periodicity at all? I suppose I’m not surprised by that — clearly some symmetries are preserved under composition, the trouble for small -> large deduction is that a priori you don’t know which ones they will be.

I agree that if you solved the Sch. Eq. exactly for a huge system with precisely defined parameters you would get the right answer. However, (a) even slight imprecision in parameters or slight approximation can give you a wildly wrong answer, and (b) even wildly imprecise calculations can give you a qualitatively correct answer. Therefore, if you run a simulation starting from measured parameters with any uncertainty, in the thermodynamic limit you will have no reason (in principle) to trust your simulation. There are phenomena very like chaos that happen under coarse-graining.

[1] I should mention one sense in which Wilczek is right that there is a constraint. Whenever the original symmetry is global and continuous, the constraint says: “either the system will respect this symmetry in the thermodynamic limit or there will be gapless modes.” However this is not a very useful constraint, as there is no way even in principle to count the gapless modes.

Re:Re:Re:Re:Post

This most recent email makes a lot of sense to me.

I should just point out that Laughlin and Pines said that even in principle, one could not obtain certain effects from the Schrodinger equation, that there did not exist a deductive link at all. This is the statement that I found quite jarring. I can totally get on board with what you said below though, that makes a lot more sense.

Thanks again, I actually learned something from this conversation.

Re:Re:Re:Re:Re:Post

Thanks — yes it was useful having to write this stuff out. Again I don’t know that I agree with L&P — I read their paper a long time ago and was bothered by it because it seemed to be saying some crazy things among the many correct things.

Quantized Vortices in Superfluid 4He

There are a couple of nice old PRLs from the Packard Group at Berkeley demonstrating the existence of quantized vortices and the patterns they form in superfluid 4He. See here (pdf!) and here (paywall). Just as a little bit of background for those who are unfamiliar: below 2.17 K, helium undergoes a liquid-to-liquid phase transition from a “normal” liquid to a superfluid. The superfluid is characterized by several properties including zero viscosity (similar to electrons in a superconductor), second sound (effectively, temperature waves), and quantized vorticity.
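
Quantized vorticity means that the circulation around a vortex line comes in integer units of h/m_{4He}:

\oint \textbf{v} \cdot d\textbf{l} = n\frac{h}{m_{4He}} \approx n \times 10^{-3}\ \mathrm{cm^2/s}

Note the single factor of the atomic mass in the denominator, in contrast with the 2m of the superconducting case discussed above, since the bosonic 4He atoms condense without any pairing.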

The observation of this latter property was captured vividly in the series of images taken by the Packard Group shown below. To induce vortex formation, the authors rotated the bucket in which the superfluid had been placed. Because of the zero viscosity, the superfluid remained still until a critical angular velocity was reached, at which point a single vortex formed in the center of the bucket. As the angular velocity was increased, more and more vortices formed, and the authors show the patterns traced out by one to eleven vortices.

[Image: vortex array patterns for one to eleven vortices, from the Packard Group]

Interestingly, there are two different stable configurations in which six vortices can arrange themselves, as shown in the figure. I happen to know that Richard Feynman, who had done a lot of the prior theoretical work on vortices in superfluid 4He, sent a personal letter to the authors of these papers to thank them for their elegant experiment.

Theory of Everything – Laughlin and Pines

I recently re-visited a paper written in 2000 by Laughlin and Pines entitled The Theory of Everything (pdf!). The main claim of the paper is that what we call the theory of everything in condensed matter (the Hamiltonian below) does not capture “higher organizing principles”. The Condensed Concepts blog has a nice summary of the article.

\mathcal{H} = -\sum_j \frac{\hbar^2}{2m}\nabla^2_j - \sum_\alpha \frac{\hbar^2}{2M_\alpha}\nabla^2_\alpha - \sum_{j,\alpha} \frac{Z_\alpha e^2}{|\textbf{r}_j - \textbf{R}_\alpha|} + \sum_{j<k} \frac{e^2}{|\textbf{r}_j - \textbf{r}_k|} + \sum_{\alpha<\beta} \frac{Z_\alpha Z_\beta e^2}{|\textbf{R}_\alpha - \textbf{R}_\beta|}

(Here the electrons sit at positions \textbf{r}_j and the nuclei of charge Z_\alpha at positions \textbf{R}_\alpha.)

Because we can measure quantities like e^2/h and h/2e in quantum Hall experiments and superconducting rings respectively, it must be (they argue) that the theory of everything does not capture some essential physics that emerges only on a different scale. In their words:

These things [e.g. that we can measure e^2/h] are clearly true, yet they cannot be deduced by direct calculation from the Theory of Everything, for exact results cannot be predicted by approximate calculations. This point is still not understood by many professional physicists, who find it easier to believe that a deductive link exists and has only to be discovered than to face the truth that there is no link. But it is true nonetheless. Experiments of this kind work because there are higher organizing principles in nature that make them work.

If I am perfectly honest, I am one of those people who “believes that a deductive link exists”. Let me take the example of the BCS Hamiltonian. I do think that it is reasonable to start with the theory of everything, make a series of approximations, and arrive at the BCS Hamiltonian. From BCS, one can then derive the Ginzburg-Landau (GL) equations, as shown by Gor’kov (pdf!). Not only that, one can obtain the Josephson effect (where one can measure h/2e) using either a BCS or a GL approach.
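
As a concrete example of what survives each step, the Josephson relations can be obtained from either the BCS or the GL starting point:

I = I_c \sin(\Delta\phi), \qquad \frac{d(\Delta\phi)}{dt} = \frac{2eV}{\hbar}

so a junction held at a constant voltage V carries a current oscillating at frequency f = 2eV/h, which is precisely the h/2e measurement referred to above.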

The reason I bring this example up is that I would rather believe that a deductive link does exist and that, even though approximations have been made, some topological property survives at each “higher” level. Said another way, in going from the TOE to BCS to GL, one keeps some fundamental topological characteristics intact.

It is totally possible that what I am saying is gobbledygook. But I do think that the Laughlin-Pines viewpoint is speculative, radical, and has perhaps taken the Anderson “more is different” perspective too far. It is a thought-provoking article, partly because of the weight that the authors’ names carry and partly because of the self-assured tone in which it is written, but I am a little more conservative in my scientific outlook. The TOE may not always be useful, but I don’t think that means that “no deductive link exists” either.

I’m curious to know whether you see things like Laughlin and Pines.