
Jahn-Teller Distortion and Symmetry Breaking

The Jahn-Teller effect occurs in molecular systems, as well as solid-state systems, when a molecular complex distorts to a configuration of lower symmetry. As a consequence, the energy of certain occupied molecular states is reduced. Let me first describe the phenomenon before giving you a little cartoon of the effect.

First, consider, just as an example, a manganese atom with valence 3d^4, surrounded by an octahedral cage of oxygen atoms like so (image taken from this thesis):

[Image: 3d energy levels of the Mn ion in an octahedral oxygen cage, before (left) and after (right) the Jahn-Teller distortion]

The electrons are arranged such that each orbital of the lower triplet contains a single “up-spin” electron, while the higher doublet contains only one “up-spin” electron between its two orbitals, as shown in the left panel of the image. This scenario is ripe for a Jahn-Teller distortion, because the electronic energy can be lowered by splitting both the doublet and the triplet, as shown in the right panel.

There is a very simple but quite elegant problem one can solve to describe this phenomenon at a cartoon level: a two-dimensional infinite square well with adjustable walls. Solving the Schrodinger equation for this well gives energy levels of the form:

E_{i,j} = \frac{h^2}{8m}\left(\frac{i^2}{a^2} + \frac{j^2}{b^2}\right)                where i, j are positive integers.

Here, a and b denote the lengths of the sides of the 2D well. Since it is only the quantity in the parentheses that determines the relative energies of the levels, let me define the anisotropy parameter \gamma = a/b. Holding the area ab of the well fixed, the \gamma-dependence of the energy can then be written as:

E \sim i^2/\gamma + \gamma j^2

Note that \gamma is effectively an anisotropy parameter, giving a measure of the “squareness of the well”. Now, let’s consider filling up the levels with spinless electrons that obey the Pauli principle. These electrons will fill up in a “one-per-level” fashion in accordance with the fermionic statistics. We can therefore write the total energy of the N-fermion problem as so:

E_{tot} \sim \alpha^2/ \gamma + \gamma \beta^2

where \alpha^2 = \sum_n i_n^2 and \beta^2 = \sum_n j_n^2 are sums over the occupied levels of the N electrons.

Now, all of this has been pretty simple so far, and all that’s really been done is to re-write the 2D well problem in a different way. However, let’s systematically look at what happens when we fill up the levels. At first, we fill up the E_{1,1} level, where \alpha^2 = \beta^2 = 1^2. In this case, minimizing E_{1,1} with respect to \gamma (i.e. setting the derivative to zero) gives \gamma_{min} = 1 and the well is a square.

For two electrons, however, the well is no longer a square! The next electron will fill up the E_{2,1} level and the total energy will therefore be:

E_{tot} \sim 1/\gamma (1+4) + \gamma (1+1),

which, upon setting dE_{tot}/d\gamma = -5/\gamma^2 + 2 = 0, gives \gamma_{min} = \sqrt{5/2}!

Why did this breaking of square symmetry occur? In fact, this is very closely related to the Jahn-Teller effect. Since the level is two-fold degenerate (i.e. E_{2,1} =  E_{1,2}), it is favorable for the 2D well to distort to lower its electronic energy.

Notice that when we add the third electron, we get that:

E_{tot} \sim 1/\gamma (1+4+1) + \gamma (1+1+4)

and \gamma_{min} = 1 again, and we return to the system with square symmetry! This is also quite similar to the Jahn-Teller problem, where, when all the states of the degenerate levels are filled up, there is no longer an energy to be gained from the symmetry-broken geometry.
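
As a quick numerical check on this cartoon, here is a minimal Python sketch (it uses the same filling order and fixed-area assumption as above, and drops the overall prefactor) that minimizes E_{tot}(\gamma) for one, two, and three spinless fermions:

```python
from scipy.optimize import minimize_scalar

def total_energy(gamma, levels):
    """E_tot ~ alpha^2/gamma + gamma*beta^2, with alpha^2 = sum(i^2), beta^2 = sum(j^2)."""
    alpha2 = sum(i**2 for i, j in levels)
    beta2 = sum(j**2 for i, j in levels)
    return alpha2 / gamma + gamma * beta2

# Occupied (i, j) levels for N = 1, 2, 3 spinless fermions, filled one per level
fillings = {
    1: [(1, 1)],
    2: [(1, 1), (2, 1)],
    3: [(1, 1), (2, 1), (1, 2)],
}

for n, levels in fillings.items():
    res = minimize_scalar(total_energy, bounds=(0.1, 10.0), args=(levels,), method="bounded")
    print(f"N = {n}:  gamma_min = {res.x:.4f}")
# Expected: gamma_min = 1.0000, then sqrt(5/2) ~ 1.5811, then 1.0000 again.
```

The distortion shows up only when a degenerate level is partially filled, and it disappears once the degenerate pair is completely filled, just as in the octahedral cartoon.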

This analogy is made more complete when looking at the following level scheme for different d-electron valence configurations, shown below (image taken from here).

[Image: level schemes for different octahedral d-electron configurations; Jahn-Teller active configurations shown in black, inactive ones in red]

The black configurations are Jahn-Teller active (i.e. prone to distortions of the oxygen octahedra), while the red are not.

In condensed matter physics, we usually think about spontaneous symmetry breaking in the context of the thermodynamic limit. What saves us here, though, is that the well will actually oscillate between the two rectangular configurations (i.e. horizontal vs. vertical), preserving the original symmetry! This is analogous to the case of the ammonia (NH_3) molecule I discussed in this post.


Nonlinear Response and Harmonics

Because we are so often solving problems in quantum mechanics, it is sometimes easy to forget that certain effects also show up in classical physics and are not “mysterious quantum phenomena”. One of these is the problem of avoided crossings or level repulsion, which can be much more easily understood in the classical realm. I would argue that the Fano resonance also represents a case where a classical model is more helpful in grasping the main idea. Perhaps not too surprisingly, a variant of the classical harmonic oscillator problem is used to illustrate the respective concepts in both cases.

There is also another cute example that illustrates why overtones of the natural frequency appear when the oscillations become slightly nonlinear. The solution to this problem shows why harmonic distortion often affects speakers: speakers sometimes emit frequencies that were not present in the original electrical signal. Furthermore, it shows why second harmonic generation can result when intense light is incident on a material.

First, imagine a perfectly harmonic oscillator with a potential of the form V(x) = \frac{1}{2} kx^2. We know that such an oscillator, if displaced from its original position, will oscillate at the natural frequency \omega_0 = \sqrt{k/m}, with the position varying as x(t) = A \textrm{cos}(\omega_0 t + \phi). The potential and the position of the oscillator as a function of time are shown below:

[Image: (Left) Harmonic potential as a function of position. (Right) Variation of the position of the oscillator with time.]

Now imagine that in addition to the harmonic part of the potential, we also have a small additional component such that V(x) = \frac{1}{2} kx^2 + \frac{1}{3}\epsilon x^3, so that the potential now looks like so:

[Image: the potential with a small cubic term added to the harmonic part, as a function of position.]

The equation of motion is now nonlinear:

\ddot{x} = -c_1x - c_2x^2

where c_1 and c_2 are constants. It is easy to see that if the amplitude of oscillations is small enough, there will be very little difference between this case and the case of the perfectly harmonic potential.

However, if the amplitude of the oscillations gets a little larger, there will clearly be deviations from the pure sinusoid. So what does the position of the oscillator look like as a function of time? Perhaps not too surprisingly, considering the title, not only are there oscillations at \omega_0, but a harmonic component at 2\omega_0 is also introduced.

While the differential equation can’t be solved exactly without resorting to numerical methods, the appearance of this harmonic component can be seen within the framework of perturbation theory. In this context, all we need to do is plug the solution of the simple harmonic oscillator, x(t) = A\textrm{cos}(\omega_0t +\phi), into the nonlinear equation above. If we do this, the last term becomes:

-c_2A^2\textrm{cos}^2(\omega_0t+\phi) = -c_2 \frac{A^2}{2}(1 + \textrm{cos}(2\omega_0t+2\phi)),

showing that we get oscillatory components at twice the natural frequency. Although this explanation is a little crude, one can already start to see why nonlinearity often leads to higher-frequency harmonics.
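
Here is a short numerical sketch of this (the values of c_1, c_2, and the amplitude are arbitrary, chosen only to make the effect visible): it integrates \ddot{x} = -c_1x - c_2x^2 and inspects the Fourier spectrum of x(t), which shows a large peak near \omega_0 and a smaller one near 2\omega_0.

```python
import numpy as np
from scipy.integrate import solve_ivp

c1, c2, A = 1.0, 0.1, 1.0            # illustrative parameters; omega_0 = sqrt(c1) = 1
w0 = np.sqrt(c1)

def rhs(t, y):                        # y = (x, v); x'' = -c1*x - c2*x^2
    x, v = y
    return [v, -c1 * x - c2 * x**2]

t = np.linspace(0.0, 400.0, 40000)
sol = solve_ivp(rhs, (t[0], t[-1]), [A, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)

spectrum = np.abs(np.fft.rfft(sol.y[0]))
omega = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])   # angular frequencies

for target in (w0, 2 * w0):
    idx = np.argmin(np.abs(omega - target))
    print(f"spectral weight near omega = {target:.1f}: {spectrum[idx]:.1f}")
```

Setting c_2 = 0 makes the weight near 2\omega_0 collapse to essentially nothing, while increasing c_2 or the amplitude brings out higher harmonics as well.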

With respect to optical second harmonic generation, there is also one important ingredient that should not be overlooked in this simplified model. This is the fact that frequency doubling is possible only when there is an x^3 component in the potential. This means that the potential needs to be inversion asymmetric. Indeed, second harmonic generation is only possible in inversion asymmetric materials (which is why ferroelectric materials are often used to produce second harmonic optical signals).

Because of its conceptual simplicity, it is often helpful to think about physical problems in terms of the classical harmonic oscillator. It would be interesting to count how many Nobel Prizes have been given out for problems that have been reduced to some variant of the harmonic oscillator!

Landau Theory and the Ginzburg Criterion

The Landau theory of second order phase transitions has probably been one of the most influential theories in all of condensed matter. It classifies phases by defining an order parameter — a quantity that is nonzero only below the transition temperature, such as the magnetization in a paramagnetic-to-ferromagnetic phase transition. Landau theory has framed the way physicists think about equilibrium phases of matter, i.e. in terms of broken symmetries. Much current research is focused on transitions to phases of matter that possess a topological index, and a major research question is how to think about these phases, which exist outside the Landau paradigm.

Despite its far-reaching influence, Landau theory actually doesn’t work quantitatively in most cases near a continuous phase transition. By this, I mean that it fails to predict the correct critical exponents. This is because Landau theory implicitly assumes that all the particles interact in some kind of average way and does not adequately take into account the fluctuations near a phase transition. Quite amazingly, Landau theory itself predicts that it is going to fail near a phase transition in many situations!

Let me give an example of its failure before discussing how it predicts its own demise. Landau theory predicts that the specific heat should exhibit a discontinuity like so at a phase transition:

[Image: specific heat versus temperature in Landau theory, showing a discontinuity at the transition.]

However, if one examines the specific heat anomaly in liquid helium-4, for example, it looks more like a divergence as seen below:

[Image: measured specific heat of liquid helium-4 near the lambda transition, showing a divergence.]

So it clearly doesn’t predict the right critical exponent in that case. The Ginzburg criterion tells us how close to the transition temperature Landau theory will break down. The Ginzburg argument essentially goes like so: since Landau theory neglects fluctuations, we can gauge its accuracy by calculating the ratio of the size of the fluctuations to the square of the order parameter:

E_R = |G(R)|/\eta^2

where E_R is the error in Landau theory, |G(R)| quantifies the fluctuations and \eta is the order parameter. Basically, if the error is small, i.e. E_R << 1, then Landau theory will work. However, if it approaches \sim 1, Landau theory begins to fail. One can actually calculate both the order parameter and the fluctuation region (quantified by the two-point correlation function) within Landau theory itself and therefore use Landau theory to calculate whether or not it will fail.

If one does carry out the calculation, one gets that Landau theory will work when:

t^{(4-d)/2} >> \frac{k_B}{\Delta C \, \xi(1)^d} \equiv t_{L}^{(4-d)/2}

where t = (T-T_C)/T_C is the reduced temperature, d is the dimension, \xi(1) is the mean-field correlation length extrapolated to t = 1 (i.e. T = 2T_C), and \Delta C/k_B is the jump in the specific heat in units of k_B, which is usually about one per degree of freedom. In words, the formula essentially counts the number of degrees of freedom in a volume defined by \xi(1)^d. If this number is large, then Landau theory, which averages the interactions of many particles, works well.

So that was a little bit of a mouthful, but the important thing is that these quantities can be estimated quite well for many phases of matter. For instance, in liquid helium-4, the particle interactions are very short-ranged because the helium atom is closed-shell (this is what enables helium to remain a liquid all the way down to zero temperature at ambient pressure in the first place). Therefore, we can assume that \xi(1) \sim 1\textrm{\AA}, and hence t_L \sim 1, so deviations from Landau theory can easily be observed in experiment close to the transition temperature.

Despite the qualitative similarities between superfluid helium-4 and superconductivity, a topic I have addressed in the past, Landau theory works much better for superconductors. We can also use the Ginzburg criterion in this case to calculate how close to the transition temperature one has to be in order to observe deviations from Landau theory. In fact, the question as to why Ginzburg-Landau theory works so well for BCS superconductors is what awakened me to these issues in the first place. Anyway, we assume that \xi(1) is on the order of the Cooper pair size, which for BCS superconductors is on the order of 1000 \textrm{\AA}. There are about 10^8 particles in this volume and correspondingly, t_L \sim 10^{-16} and Landau theory fails so close to the transition temperature that this region is inaccessible to experiment. Landau theory is therefore considered to work well in this case.

For high-Tc superconductors, the Cooper pair size is of order 10\textrm{\AA}, and therefore deviations from Landau theory can be observed in experiment. The last thing to note about these formulas and approximations is that two parameters mainly determine whether Landau theory works in practice: the dimensionality and the range of the interactions (which sets \xi(1)).
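
As a sanity check on these orders of magnitude, here is a back-of-the-envelope Python sketch of the d = 3 counting argument. It assumes, as above, that \Delta C is roughly one k_B per degree of freedom, so that t_L \sim 1/N_\xi^2, where N_\xi is the number of degrees of freedom inside a correlation volume \xi(1)^3.

```python
# Ginzburg criterion in d = 3: t^{1/2} >> k_B / (DeltaC * xi(1)^3) ~ 1/N_xi,
# so the Landau description breaks down below t_L ~ 1/N_xi^2.

cases = {
    # N_xi = number of degrees of freedom in a correlation volume (orders of magnitude from the text)
    "liquid He-4 (xi(1) ~ 1 Angstrom)":           1e0,
    "BCS superconductor (xi(1) ~ 1000 Angstrom)": 1e8,
}

for name, N_xi in cases.items():
    t_L = 1.0 / N_xi**2
    print(f"{name:45s} t_L ~ {t_L:.0e}")
# Prints t_L ~ 1 for helium and t_L ~ 1e-16 for a conventional superconductor,
# matching the estimates quoted above.
```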

*Much of this post has been unabashedly pilfered from N. Goldenfeld’s book Lectures on Phase Transitions and the Renormalization Group, which I heartily recommend for further discussion of these topics.

Broken Symmetry and Degeneracy

Oftentimes, when I understand a basic concept I had struggled to understand for a long time, I wonder, “Why in the world couldn’t someone have just said that?!” A while later, I will return to a textbook or paper that actually says precisely what I wanted to hear. I will then realize that the concept just wouldn’t “stick” in my head and required some time of personal and thoughtful deliberation. It reminds me of a humorous quote by E. Rutherford:

All of physics is either impossible or trivial.  It is impossible until you understand it and then it becomes trivial.

I definitely experienced this when first studying the relationship between broken symmetry and degeneracy. As I usually do on this blog, I hope to illustrate the central points within a pretty rigorous, but mostly picture-based framework.

For the rest of this post, I’m going to follow P. W. Anderson’s article More is Different, where I think these ideas are best addressed without any equations. However, I’ll be adding a few details which I wish I had understood upon my first reading.

If you Google “ammonia molecule” and look at the images, you’ll get something that looks like this:

[Image: the ammonia molecule, with the nitrogen atom sitting above the plane of the triangle formed by the three hydrogen atoms.]

With the constraint that the nitrogen atom must sit on a line through the center formed by the triangular network of hydrogen atoms, we can approximate the potential to be one-dimensional. The potential along the line going through the center of the hydrogen triangle will look, in some crude approximation, something like this:

[Image: double-well potential for the nitrogen atom along the axis perpendicular to the plane of the hydrogen triangle.]

Notice that this potential has inversion (or parity) symmetry about the plane of the triangular hydrogen atom network. The non-degenerate stationary states must therefore be parity eigenstates. We expect the stationary states to look something like this for the ground state and first excited state, respectively:

[Image: symmetric wavefunction in the double-well potential.]

Ground State

[Image: anti-symmetric wavefunction in the double-well potential.]

First Excited State

The tetrahedral (pyramid-shaped) ammonia molecule in the image above is clearly not inversion symmetric, though. What does this mean? Well, it implies that the ammonia molecule in the image above cannot be an energy eigenstate. What has to happen, therefore, is that the ammonia molecule has to oscillate between the two configurations pictured below:

[Image: the two inverted pyramidal configurations of the ammonia molecule.]

The oscillation between the two states can be thought of as the nitrogen atom tunneling from one valley to the other in the potential energy diagram above. The oscillation occurs about 24 billion times per second or with a frequency of 24 GHz.

To those familiar with quantum mechanics, this is a classic two-state problem and there’s nothing particularly new here. Indeed, the tetrahedral structures can be written as linear combinations of the symmetric and anti-symmetric states as so:

| 1 \rangle = \frac{1}{\sqrt{2}} (e^{i \omega_S t}|S\rangle +e^{i \omega_A t}|A\rangle)

| 2 \rangle = \frac{1}{\sqrt{2}} (e^{i \omega_S t}|S\rangle -e^{i \omega_A t}|A\rangle)

One can see that an oscillation frequency of \omega_S-\omega_A will result from the interference between the symmetric and anti-symmetric states.
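
A minimal two-state sketch makes this interference explicit (the units here are purely illustrative, with \hbar = 1; the actual NH_3 splitting is of course the 24 GHz quoted above): start the molecule in one pyramidal configuration and watch the probability of remaining there oscillate at the splitting frequency.

```python
import numpy as np

Delta = 1.0                               # splitting E_A - E_S (illustrative units, hbar = 1)
H = np.array([[0.0, -Delta / 2],
              [-Delta / 2, 0.0]])         # |L>, |R> basis; tunneling mixes the two pyramids

energies, states = np.linalg.eigh(H)      # eigenstates are |S> (lower) and |A> (upper)

psi0 = np.array([1.0, 0.0])               # start in the "left" pyramid
for t in np.linspace(0.0, 2 * np.pi / Delta, 5):
    psi_t = states @ (np.exp(-1j * energies * t) * (states.T @ psi0))
    print(f"t = {t:5.2f}   P_left = {abs(psi_t[0])**2:.3f}")
# P_left oscillates as cos^2(Delta*t/2), i.e. at the frequency omega_A - omega_S.
```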

The interest in this problem, though, comes from examining a certain limit. First, consider what happens when one replaces the nitrogen atom with a phosphorus atom (PH3): the oscillation frequency decreases to about 0.14 MHz, about 200,000 times slower than NH3. If one were to do the same replacement with an arsenic atom instead (AsH3), the oscillation frequency slows down to about 160 microHz, i.e. roughly one oscillation every couple of hours!

This slowing down can be simply modeled in the picture above by imagining the raising of the barrier height between the two valleys like so:

[Image: the double-well potential with a much higher central barrier.]

In the case of an amino acid or a sugar, which are both known to be chiral, the period of oscillation is thought to be greater than the age of the universe. Basically, the molecules never invert!
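
To get a feel for how quickly the inversion shuts off as the barrier grows, here is a rough numerical sketch: it diagonalizes a one-dimensional Schrodinger equation for an illustrative quartic double well (not the real NH_3/PH_3/AsH_3 potential; arbitrary units with \hbar = m = 1) and prints the splitting E_A - E_S of the lowest doublet, which sets the inversion frequency, as the central barrier is raised.

```python
import numpy as np

def doublet_splitting(barrier, N=1200, L=8.0):
    """Splitting of the two lowest states of V(x) = barrier*(x^2 - 1)^2 on a finite-difference grid."""
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    V = barrier * (x**2 - 1.0)**2                 # double well, minima at x = +/-1
    diag = 1.0 / dx**2 + V                        # kinetic + potential on the diagonal
    off = -0.5 / dx**2 * np.ones(N - 1)           # hopping from the -(1/2) d^2/dx^2 term
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    E = np.linalg.eigvalsh(H)[:2]
    return E[1] - E[0]

for barrier in (2.0, 5.0, 10.0, 20.0):
    print(f"barrier height = {barrier:5.1f}   E_A - E_S = {doublet_splitting(barrier):.2e}")
# The splitting (and hence the inversion frequency) collapses rapidly as the barrier grows.
```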

So what is happening here? Don’t worry, we aren’t violating any laws of quantum mechanics.

As the barrier height goes to infinity, the symmetric and anti-symmetric states become degenerate. This degeneracy is key, because degenerate stationary states no longer have to be parity eigenstates. Graphically, we can illustrate this as so:

[Image: symmetric wavefunction in the infinite-barrier limit.]

Symmetric state, E=E_0

[Image: anti-symmetric wavefunction in the infinite-barrier limit.]

Anti-symmetric state, E=E_0

We can now write the two localized (pyramidal) configurations in terms of the symmetric and anti-symmetric states:

| 1 \rangle = e^{i\omega t} \frac{1}{\sqrt{2}} (|S\rangle + |A\rangle)

| 2 \rangle = e^{i\omega t} \frac{1}{\sqrt{2}} (|S\rangle - |A\rangle)

These are now bona-fide stationary states. There is therefore a deep connection between degeneracy and the broken symmetry of a ground state, as this example so elegantly demonstrates.

When there is a degeneracy, the ground state no longer has to obey the symmetry of the Hamiltonian.

Technically, the barrier height never reaches infinity and there is never true degeneracy unless the number of particles in the problem (or the mass of the nitrogen atom) approaches infinity, but let’s leave that story for another day.

*Private Communication*

Here is a correspondence that took place between Sarang and me, following his post last week about emergence and upward heritability, which was in response to a couple of my posts (here and here).

Subject: Post

Hey Sarang,

Great post — I’m glad that you wrote an article from the other side.

I do have a couple questions/comments though, as you are likely more knowledgeable on this subject than I.

To my understanding, GL theory is a coarse-grained version of BCS theory and many experimental properties of classic superconductors can be calculated using both methods. I also like this example because it is one of the few places I know of where one can derive the coarse-grained model from the underlying one (I am an experimentalist after all, so there may be many others I’m unaware of). Am I being misled in thinking that something “survived” in going from one level of the theory to another, or is this not a good example of “new principles at a different scale” that Laughlin and Pines referred to?

Anshul

Re:Post

This is a good question; I should have a succinct answer but don’t. Some thoughts:

1. I don’t see BCS as a “microscopic derivation” because a lot of coarse-grained ideas go into writing down the reduced BCS Hamiltonian — you throw out the Coulomb interaction b’se of Bohm-Pines and then you reduce the remaining interactions to Hartree-Fock and pairing channels because of Fermi surface kinematics, so what remains is an exactly solvable model. The BCS coupling is in practice a phenomenological parameter, which is backed out from the gap. (Otherwise we would not be so bad at computing Tc’s.) The symmetry-broken (i.e., non-number-conserving) BCS wavefunction violates “heritability” because particle number conservation is precisely the sort of symmetry/conservation law that is supposed to persist from one level to the next. So I see BCS as mostly a reverse-engineered microscopic justification (i.e. a way of saying, look, you can get superconductivity with just electrons) rather than an example of reasoning *from* microscopic considerations to macroscopic results.

2. More generally, when you write down a solvable, microscopically specified toy model that describes some emergent phenomenon, I do not think this counts as a *deduction* unless the decisions on what effects to include and to neglect are based on microscopic considerations. And the renormalization group tells you that such microscopically informed decisions about what effects are important/worth keeping will in general be wrong. “Relevant” and “irrelevant” are properties of the RG flow, not of the initial Hamiltonian.

3. Of course macro-stuff is made up of micro-stuff; the issue is whether the relevant conceptual architecture is inherited or distinct. I’m arguing that it is distinct whenever the thermodynamic limit is nontrivial: there are notions like fractionalized excitations and Goldstone modes that cannot be articulated without reference to the thermodynamic limit. In a finite system there is never a sharp distinction between collective and non-collective parts of the spectrum; “in practice” we know how to identify collective excitations even in relatively small systems, but when we do this we are invoking thermodynamic rather than microscopic concepts.

Re:Re:Post

Thanks for your response. I have been taking some time to think about your answers. I also have a few comments, to which you can reply if you have the time.

1. BCS is not a “microscopic derivation”, but a lot of the work leading up to the full formulation (such as the Bardeen-Pines interaction) required careful thought about how phonons could cause an attractive interaction. Ultimately, the theory is a toy, but one based on some (I would consider) meticulously thought out microscopic considerations. I would even consider the demonstration of the isotope effect a microscopic experiment in some sense. As for number conservation, Tony’s book Quantum Liquids shows that this is not necessary to formulate a theory of superconductivity, but only a trick to make calculations of experimental quantities sufficiently easier. I think to say that actual particle number is not conserved in real life would be quite unnerving.

2. This point, I perhaps don’t understand as well in light of the response given for part 1. It seems to me that the Bardeen-Pines thought process was extremely significant.

3. I personally find it quite stunning that e.g. the Aharonov-Bohm effect (a microscopic effect) and the Quantum Hall quantization (a macroscopic effect) bear such a striking resemblance. These are the kinds of phenomena I speculate (and I suppose Wilczek might also) may be upwardly heritable.

Ultimately, it may be impossible to know whether L&P are right because one cannot solve the Schroedinger equation for 10^23 particles or put it on a computer to see if one gets the right answer. I choose to believe, however, that it would because it is more “natural” (perhaps my mind is lazy). This is where I think L&P are radical. They may have reasons for saying that the Schroedinger equation does not capture some essential physics, but until it is definitively shown to be true, I don’t think I will accept it. This does not mean that I don’t think that there is different physics on different scales. Water is indisputably wet. But to get the Schroedinger equation to exhibit this macroscopic property is indeed futile, though it may be possible in principle — and hydrodynamics is undoubtedly a less unwieldy description.

Anderson himself in his More is Different article says “the concept of broken symmetry has been borrowed by particle physicists, but their use of the term is strictly an analogy, whether a deep or specious one remains to be understood.”

I guess you say perhaps specious and I say perhaps deep?

Anshul

Re:Re:Re:Post

1-2. I guess I think the isotope effect experiment is precisely the opposite of a deduction *from* microscale *to* macroscale. The approach in that kind of expt is to identify the relevant terms in the microscopic Hamiltonian by varying them separately and see if the “answer” changes. In other words what it actually is is a deduction *from* a macroscale fact (Tc depends on isotopic mass) *to* a microscale conclusion about what terms in the microscopic Hamiltonian are truly important. Similarly, a key ingredient in Bardeen-Pines is the previous (Bohm-Pines) result that the Coulomb interaction gets sufficiently renormalized by the Debye frequency that it can actually lose to the electron-phonon coupling. Again, there is nothing deductively “microscopic” about this result: you have to include renormalization due to electron-hole pairs at all energies above the Debye frequency in order to see it, so the infrared behavior of the theory is already smuggled in! Not that there is anything wrong with this, but I do not think it can be correctly interpreted as a deduction from small scales to large scales. The really hard part of the BCS problem, after all, was writing down the BCS Hamiltonian, and the data used to do this came almost entirely from (a) macroscale experiments and (b) theoretical considerations which included infrared/emergent physics. If you want to make deductions from solving the Hamiltonian at the scale of a few atoms it is hopeless because at that scale what you have is just this huge Coulomb interaction and nothing else, and there is no obvious path from there toward a paired state.

The number conserving v. of BCS is basically like the Schrodinger’s cat ground state (all up + all down) in the Ising model. The fact that using the usual variable-number wavefunction is even *possible* undermines the notion that microscopic symmetries/conservation laws tell you anything useful about macroscopic physics.

3. I think part of the problem is that if you define “upward heritability” vaguely enough, anything can be upwardly heritable. I read Wilczek as saying: given some set of specific, robust facts about the microscopic physics (e.g. symmetries, AB periodicities), you can derive strong constraints on the kinds of macroscopic physics that are possible. I think these constraints are actually weak to the point of barely existing [1] (because the microscopic symmetries can be broken and new symmetries can emerge; the AB periodicity can change…). Perhaps you want to argue that it’s surprising the macroscopic system can even be described using the language of symmetries and AB periodicity at all? I suppose I’m not surprised by that — clearly some symmetries are preserved under composition; the trouble for small -> large deduction is that a priori you don’t know which ones they will be.

I agree that if you solved the Sch. Eq. exactly for a huge system with precisely defined parameters you would get the right answer. However, (a) even slight imprecision in parameters or slight approximation can give you a wildly wrong answer, and (b) even wildly imprecise calculations can give you a qualitatively correct answer. Therefore, if you run a simulation starting from measured parameters with any uncertainty, in the thermodynamic limit you will have no reason (in principle) to trust your simulation. There are phenomena very like chaos that happen under coarse-graining.

[1] I should mention one sense in which Wilczek is right that there is a constraint. Whenever the original symmetry is global and continuous, the constraint says: “either the system will respect this symmetry in the thermodynamic limit or there will be gapless modes.” However this is not a very useful constraint, as there is no way even in principle to count the gapless modes.

Re:Re:Re:Re:Post

This most recent email makes a lot of sense to me.

I should just point out that Laughlin and Pines said that even in principle, one could not obtain certain effects from the Schrodinger equation, that there did not exist a deductive link at all. This is the statement that I found quite jarring. I can totally get on board with what you said below though, that makes a lot more sense.

Thanks again, I actually learned something from this conversation.

Re:Re:Re:Re:Re:Post

Thanks — yes it was useful having to write this stuff out. Again I don’t know that I agree with L&P — I read their paper a long time ago and was bothered by it because it seemed to be saying some crazy things among the many correct things.

Emergence is about the failure of upward heritability

I’ve been only tenuously on the internet this month, so I missed Anshul’s posts about emergence (here and here). I’m closer on this Q. to Pines and Laughlin than to Anshul and Wilczek (see e.g. here — I still stand by most of that I think). What’s weird about that Wilczek article is that he identifies the main question, but then suavely ignores it.

There are various things I want to say in response to these articles (none of which I entirely agree with), but this is the gist: 1. The thermodynamic limit destroys upward heritability. 2. “Emergence” is a result of this breakage.

According to Wilczek, the reason that particle physics concepts move up into the infrared is that microscopic laws, when “applied to large bodies, retain their character.” Let’s try to unpack that. Obviously, inferences from an approximate microscopic theory will generally not scale up to the macroscopic level (because of how errors propagate), but one might reasonably expect some structural properties of the microscopic theory to provide useful deductive guidance at higher levels — e.g., the idea that if the microscopic theory is invariant under some symmetry, then so will be any higher-level theory; or if (let’s get nonrelativistic at this point) the microscopic theory only has particles of charge e, then the macroscopic theory is constrained to have particles of charge ne where n is an integer. In fact, of course, neither of these arguments is true: spontaneous symmetry breaking is a thing, as is fractionalization, as is the presence of “emergent” symmetries at critical points that were never there in the underlying model.

These inferences are false, not at any intermediate step, but because of pathologies that arise when you take limits. In the present case, the relevant limit is the thermodynamic limit (more precisely, it is the way the thermodynamic limit commutes, or fails to commute, with other limits such as the quasistatic and linear response limits). Virtually all nontrivial emergent phenomena are due to these pathologies. For instance:

  1. In the transverse field Ising model, a finite-size system at T = 0 in the ferromagnetic phase flips between the all-up and all-down ground states at a rate 1/t, where t grows exponentially with N, the number of spins. The magnetization measured on times short compared with t is large and finite, but on much longer times it is zero. Depending on whether you take the averaging time or N to infinity first, you either find a phase transition or not (a small numerical sketch of this exponentially small splitting appears after this list). These noncommuting limits are in some sense the opposite of upward heritability, if you interpret that phrase as saying that the laws do not change their character: the microscopic dynamics obeys certain symmetries, but the large-scale behavior does not.
  2. Thouless and Gefen explained how the fractional quantum Hall effect similarly allows large systems to defy microscopic symmetries. The Byers-Yang theorem requires the ground-state energy of a system with “fundamental” charge e to be periodic in the enclosed flux with period h/e (i.e. a period that goes as 1/e). However, you can get around this by having multiple different ground-state branches that don’t mix (in the thermodynamic limit) — if you want quasiparticles with charge e/m then you just need m ground-state branches. The switchings between these ground states guarantee the validity of “the letter of” the underlying microscopic theorem while permitting its “spirit” to be violated.
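
To make point 1 concrete, here is a small exact-diagonalization sketch (open chain, illustrative couplings J = 1, h = 0.5) showing that the splitting between the two lowest states of the ferromagnetic transverse-field Ising chain, which sets the flipping rate 1/t, shrinks rapidly with the number of spins N:

```python
import numpy as np
from functools import reduce

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def site_op(op, site, N):
    """Embed a single-site operator at position `site` in an N-spin chain."""
    return reduce(np.kron, [op if i == site else I2 for i in range(N)])

def lowest_splitting(N, J=1.0, h=0.5):
    """Gap between the two lowest states of H = -J*sum sz.sz - h*sum sx (open chain)."""
    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):
        H -= J * site_op(sz, i, N) @ site_op(sz, i + 1, N)
    for i in range(N):
        H -= h * site_op(sx, i, N)
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

for N in (4, 6, 8, 10):
    print(f"N = {N:2d}   splitting of the lowest doublet = {lowest_splitting(N):.3e}")
# In the ferromagnetic phase (h < J) this splitting falls off exponentially with N,
# so the symmetric/antisymmetric "cat" doublet becomes degenerate in the thermodynamic limit.
```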

In all such cases, there remains some literal sense in which a “deductive path” exists from the microscopic to the macroscopic world. However, this deductive path is emphatically not a local path: the standard cond. mat. phenomena illustrate that inductive reasoning from systems of N particles to systems of N+1 (or 2N) particles will not help identify emergent phenomena. You need to know in advance where you are going to end up.

(I think Wilczek might say that “upward heritability” is really about the fact that both cond-mat and hep-th are about symmetry arguments although the symmetries are different in the two cases. I don’t buy this at all. If there is a puzzle here, and for L&P-type reasons I don’t think there is one, it can be resolved by arguing that “the unreasonable effectiveness of mathematics” separately explains both.)

A flip side is that a large family of microscopic possibilities end up at the same macroscopic model. (For instance the 1/3 Laughlin state, or minor deformations of it, is a true ground state for an enormous range of electron-electron interaction strengths and profiles.) When you change scales, some information is lost and other information is amplified; whether a particular piece of information is going to be lost or amplified is a property of the coarse-graining and not a property of the underlying microscopic theory.

Frank Wilczek’s Concept of ‘Upward Inheritance’

Yesterday, I happened upon an article entitled Why are there Analogies Between Condensed Matter and Particle Theory (pdf!) by Frank Wilczek. In it, he suggests an alternative view to the one espoused by Laughlin and Pines in their Theory of Everything paper. The views expressed in More is Different by P.W. Anderson, which is the most influential paper of the three, lie somewhere in between. The article by Wilczek is noteworthy because of the idea that he calls “upwardly heritable principles”.

He first addresses the issue of why ideas in condensed matter and particle physics bear such a resemblance (i.e. why the macroscopic reflects the microscopic). Here, he highlights examples of cross-fertilization between these two areas of physics to illustrate how it is not only ideas from particle physics that have influenced condensed matter but also vice versa.

The ones I found the most interesting were: (1) Einstein’s application of the Planck spectrum to obtain the specific heat of crystals, following Planck’s original work (particle physics \rightarrow condensed matter), and (2) Dirac’s interpretation of negative-energy states in analogy with the particle-hole excitations of the Fermi sea (condensed matter \rightarrow particle physics).

While Wilczek does hint at the notion that the cross-fertilization is perhaps an accident, he chooses to believe that a fundamental principle underlies these connections. He recognizes that precisely because there is no logical necessity for ideas to bridge the two realms, the fact that such a relationship exists suggests a deep reason for its occurrence. He speculates that the reason behind all this is “the upwardly heritable principles of locality and symmetry, together with the quasimaterial nature of apparently empty space”.

I like this paper because its views seem natural, are much less radical than those of Laughlin and Pines, and because Wilczek suggests a path forward to understanding why such cross-fertilization might occur. Moreover, the article hints that even though Anderson’s view of “new principles at each scale” may be true, the fact that it is possible to apply principles (e.g. broken symmetry) from higher up the scale (i.e. condensed matter) to lower on the scale (i.e. particle physics) is suggestive of a lingering connection between the two scales.

Just a quick (perhaps too quick) summary of the respective viewpoints:

1)   Wilczek \rightarrow Deep connection between microscopic and macroscopic.

2)   Anderson \rightarrow Different scales yield new physical principles, but still a connection between different scales.

3)   Laughlin and Pines \rightarrow Microscopic cannot, even in principle, explain phenomena on a macroscopic scale (such as the Josephson quantum).

In writing this post, I know that I have not presented the ideas in the three articles thoroughly, so let me link again Anderson’s article here (pdf!), Wilczek’s here (pdf!) and Laughlin’s and Pines’ here (pdf!) for your convenience.