# Monthly Archives: July 2015

## Kohn Anomalies and Fermi Surfaces

Kohn anomalies are dips in phonon dispersions that arise from the presence of a Fermi surface, which renormalizes the bare phonon frequencies and causes an anomaly in the phonon dispersion, as seen below for lead (taken from this paper):

Why this happens can be understood using a simplified physical picture. One can imagine that the ions form some sort of ionic plasma in the long-wavelength limit and we can use the classical harmonic oscillator equation of motion:

$m\frac{d^2\textbf{x}}{dt^2} = \frac{-NZ^2e^2\textbf{x}}{\epsilon_0}$

One can take into account the screening effect of the electrons by including an electronic dielectric function:

$m\frac{d^2\textbf{x}}{dt^2} = \frac{-NZ^2e^2\textbf{x}}{\epsilon(\textbf{q},\omega)\epsilon_0}$

The phonon frequencies will therefore be renormalized like so:

$\omega^2 = \frac{\Omega_{bare}^2}{\epsilon(\textbf{q},\omega)}$

and the derivative in the phonon frequency will have the form:

$\frac{d\omega}{d\textbf{q}} \propto -\frac{d\epsilon(\textbf{q},\omega)}{d\textbf{q}}$.

Therefore, any singularities that arise in the derivative of the dielectric function will also show up in the phonon spectra. It is known (using the Lindhard function) that there exists such a weak logarithmic singularity that shows up in 3D metals at $\textbf{q} = 2k_F$. This can be understood by noting that the ability of the electrons to screen the ions changes suddenly due to the change in the number of electron-hole pairs that can be generated below and above $\textbf{q}=2k_F$.
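To make this concrete, here is a quick numerical sketch (my own illustration, not taken from any of the linked papers) of the static Lindhard screening function in 3D, written as $\epsilon(q) = 1 + (k_{TF}^2/q^2)F(q/2k_F)$. The slope of $F$ is perfectly tame away from $q = 2k_F$ but diverges logarithmically right at it:

```python
import math

def lindhard_F(x):
    """Static Lindhard screening function F(x), with x = q / (2 k_F).

    In 3D, epsilon(q) = 1 + (k_TF**2 / q**2) * F(q / (2 k_F)).
    """
    if math.isclose(x, 1.0):
        return 0.5  # F is continuous at x = 1, but its slope diverges there
    return 0.5 + (1 - x**2) / (4 * x) * math.log(abs((1 + x) / (1 - x)))

def slope(f, x, h=1e-3):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

slope_away   = slope(lindhard_F, 0.5)  # q = k_F: gentle slope
slope_at_2kF = slope(lindhard_F, 1.0)  # q = 2 k_F: log-divergent slope

print(f"dF/dx at q =   k_F: {slope_away:+.3f}")
print(f"dF/dx at q = 2 k_F: {slope_at_2kF:+.3f}")
```

The finite-difference slope at $x = 1$ keeps growing (logarithmically) as you shrink the step size $h$ — this is precisely the weak singularity that imprints itself on the phonon dispersion as a Kohn anomaly.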

The dip in the phonon dispersion can be thought of as the phonon analogue of the “kinks” that are often seen in the electron dispersion relations using ARPES (e.g. see here). In the case here, the phonon dispersion is affected by the presence of the electrons, whereas in the “kink” case, the electronic dispersion is affected by the presence of the phonons (though kinks can arise for other reasons as well).

What is remarkable about all this is that before the advent of high-resolution ARPES, it was difficult to map out the Fermi surfaces of many metals (and still is for samples that don’t cleave well!). The usual method was to use quantum oscillations measurements. However, the group in this paper from the 60s actually tried to map out the Fermi surface of lead using just the Kohn anomalies! They also did it for aluminum. In both cases, they observed pretty good agreement with quantum oscillation measurements — quite a feat!

## *Private Communication*

Here is a correspondence that took place between Sarang and me, following his post last week about emergence and upward heritability, which was in response to a couple of my posts (here and here).

Subject: Post

Hey Sarang,

Great post — I’m glad that you wrote an article from the other side.

I do have a couple questions/comments though, as you are likely more knowledgeable on this subject than I.

To my understanding, GL theory is a coarse-grained version of BCS theory and many experimental properties of classic superconductors can be calculated using both methods. I also like this example because it is one of the few places I know of where one can derive the coarse-grained model from the underlying one (I am an experimentalist after all, so there may be many others I’m unaware of). Am I being misled in thinking that something “survived” in going from one level of the theory to another, or is this not a good example of “new principles at a different scale” that Laughlin and Pines referred to?

Anshul

Re:Post

This is a good question; I should have a succinct answer but don’t. Some thoughts:

1. I don’t see BCS as a “microscopic derivation” because a lot of coarse-grained ideas go into writing down the reduced BCS Hamiltonian — you throw out the Coulomb interaction b’se of Bohm-Pines and then you reduce the remaining interactions to Hartree-Fock and pairing channels because of Fermi surface kinematics, so what remains is an exactly solvable model. The BCS coupling is in practice a phenomenological parameter, which is backed out from the gap. (Otherwise we would not be so bad at computing Tc’s.) The symmetry-broken (i.e., non-number-conserving) BCS wavefunction violates “heritability” because particle number conservation is precisely the sort of symmetry/conservation law that is supposed to persist from one level to the next. So I see BCS as mostly a reverse-engineered microscopic justification (i.e. a way of saying, look, you can get superconductivity with just electrons) rather than an example of reasoning *from* microscopic considerations to macroscopic results.

2. More generally, when you write down a solvable, microscopically specified toy model that describes some emergent phenomenon, I do not think this counts as a *deduction* unless the decisions on what effects to include and to neglect are based on microscopic considerations. And the renormalization group tells you that such microscopically informed decisions about what effects are important/worth keeping will in general be wrong. “Relevant” and “irrelevant” are properties of the RG flow, not of the initial Hamiltonian.

3. Of course macro-stuff is made up of micro-stuff; the issue is whether the relevant conceptual architecture is inherited or distinct. I’m arguing that it is distinct whenever the thermodynamic limit is nontrivial: there are notions like fractionalized excitations and Goldstone modes that cannot be articulated without reference to the thermodynamic limit. In a finite system there is never a sharp distinction between collective and non-collective parts of the spectrum; “in practice” we know how to identify collective excitations even in relatively small systems, but when we do this we are invoking thermodynamic rather than microscopic concepts.

Re:Re:Post

1. BCS is not a “microscopic derivation”, but a lot of the work leading up to the full formulation (such as the Bardeen-Pines interaction) required careful thought about how phonons could cause an attractive interaction. Ultimately, the theory is a toy, but one based on some (I would consider) meticulously thought out microscopic considerations. I would even consider the demonstration of the isotope effect a microscopic experiment in some sense. As for number conservation, Tony’s book Quantum Liquids shows that this is not necessary to formulate a theory of superconductivity, but only a trick to make calculations of experimental quantities sufficiently easier. I think to say that actual particle number is not conserved in real life would be quite unnerving.

2. This point, I perhaps don’t understand as well in light of the response given for part 1. It seems to me that the Bardeen-Pines thought process was extremely significant.

3. I personally find it quite stunning that e.g. the Aharonov-Bohm effect (a microscopic effect) and the Quantum Hall quantization (a macroscopic effect) bear such a striking resemblance. These are the kinds of phenomena I speculate (and I suppose Wilczek might also) may be upwardly heritable.

Ultimately, it may be impossible to know whether L&P are right because one cannot solve the Schroedinger equation for 10^23 particles or put it on a computer to see if one gets the right answer. I choose to believe, however, that it would because it is more “natural” (perhaps my mind is lazy). This is where I think L&P are radical. They may have reasons for saying that the Schroedinger equation does not capture some essential physics, but until it is definitively shown to be true, I don’t think I will accept it. This does not mean that I don’t think that there is different physics on different scales. Water is indisputably wet. But to get the Schroedinger equation to exhibit this macroscopic property is indeed futile, though it may be possible in principle — and hydrodynamics is undoubtedly a less unwieldy description.

Anderson himself in his More is Different article says “the concept of broken symmetry has been borrowed by particle physicists, but their use of the term is strictly an analogy, whether a deep or specious one remains to be understood.”

I guess you say perhaps specious and I say perhaps deep?

Anshul

Re:Re:Re:Post

1-2. I guess I think the isotope effect experiment is precisely the opposite of a deduction *from* microscale *to* macroscale. The approach in that kind of expt is to identify the relevant terms in the microscopic Hamiltonian by varying them separately and see if the “answer” changes. In other words what it actually is is a deduction *from* a macroscale fact (Tc depends on isotopic mass) *to* a microscale conclusion about what terms in the microscopic Hamiltonian are truly important. Similarly, a key ingredient in Bardeen-Pines is the previous (Bohm-Pines) result that the Coulomb interaction gets sufficiently renormalized by the Debye frequency that it can actually lose to the electron-phonon coupling. Again, there is nothing deductively “microscopic” about this result: you have to include renormalization due to electron-hole pairs at all energies above the Debye frequency in order to see it, so the infrared behavior of the theory is already smuggled in! Not that there is anything wrong with this, but I do not think it can be correctly interpreted as a deduction from small scales to large scales. The really hard part of the BCS problem, after all, was writing down the BCS Hamiltonian, and the data used to do this came almost entirely from (a) macroscale experiments and (b) theoretical considerations which included infrared/emergent physics. If you want to make deductions from solving the Hamiltonian at the scale of a few atoms it is hopeless because at that scale what you have is just this huge Coulomb interaction and nothing else, and there is no obvious path from there toward a paired state.

The number conserving v. of BCS is basically like the Schrodinger’s cat ground state (all up + all down) in the Ising model. The fact that using the usual variable-number wavefunction is even *possible* undermines the notion that microscopic symmetries/conservation laws tell you anything useful about macroscopic physics.

3. I think part of the problem is that if you define “upward heritability” vaguely enough anything can be upwardly heritable. I read Wilczek as saying: given some set of specific, robust facts about the microscopic physics (e.g. symmetries, AB periodicities), you can derive strong constraints on the kinds of macroscopic physics that are possible. I think these constraints are actually weak to the point of barely existing [1] (because the microscopic symmetries can be broken and new symmetries can emerge; the AB periodicity can change…). Perhaps you want to argue that it’s surprising the macroscopic system can even be described using the language of symmetries and AB periodicity at all? I suppose I’m not surprised by that — clearly some symmetries are preserved under composition, the trouble for small -> large deduction is that a priori you don’t know which ones they will be.

I agree that if you solved the Sch. Eq. exactly for a huge system with precisely defined parameters you would get the right answer. However, (a) even slight imprecision in parameters or slight approximation can give you a wildly wrong answer, and (b) even wildly imprecise calculations can give you a qualitatively correct answer. Therefore, if you run a simulation starting from measured parameters with any uncertainty, in the thermodynamic limit you will have no reason (in principle) to trust your simulation. There are phenomena very like chaos that happen under coarse-graining.

[1] I should mention one sense in which Wilczek is right that there is a constraint. Whenever the original symmetry is global and continuous, the constraint says: “either the system will respect this symmetry in the thermodynamic limit or there will be gapless modes.” However this is not a very useful constraint, as there is no way even in principle to count the gapless modes.

Re:Re:Re:Re:Post

This most recent email makes a lot of sense to me.

I should just point out that Laughlin and Pines said that even in principle, one could not obtain certain effects from the Schrodinger equation, that there did not exist a deductive link at all. This is the statement that I found quite jarring. I can totally get on board with what you said below though, that makes a lot more sense.

Thanks again, I actually learned something from this conversation.

Re:Re:Re:Re:Re:Post

Thanks — yes it was useful having to write this stuff out. Again I don’t know that I agree with L&P — I read their paper a long time ago and was bothered by it because it seemed to be saying some crazy things among the many correct things.

## Screening, Plasmons and LO-TO Splitting: One Last Time

I hope you are not by now fed up with my posts on this topic, but there is a great paper by Mooradian and Wright, which I’ve actually linked before in a different context, that is worth tackling. In this paper, they discuss the concept of plasmon-longitudinal optical (LO) phonon coupling.

To my mind, there is a significant aspect of the data which they do not explicitly address in their paper. Of course, I’m referring to degeneracy, LO-TO splitting, and screening. An image of their data is shown below (click to enlarge):

A quick run-down of the experiment: they are using Raman scattering on several different samples of GaAs with different doping levels. The carrier density can be read off in the image above.

It can be seen that for the lower-doped samples of GaAs there is strong LO-TO splitting. This is because of the long-ranged nature of the Coulomb interaction, as detailed here. As the carrier concentration is increased beyond the plasmon-phonon mixing region, the LO-TO splitting starts to disappear. This observation is noteworthy because there exists a “critical carrier density”, beyond which the LO and TO phonons are degenerate.

One can think of this in the following way: the plasmon energy is a measure of how quickly the free carriers can respond to an electric field. Therefore, for the highly doped GaAs samples, where the plasmon is at a significantly higher energy than the phonons, the free carriers can quickly screen the Coulomb field set up by the polar lattice. The electric field that is set up by the phonons can hence be approximated by a screened electric field (of the Thomas-Fermi kind) in this limit, and the Coulomb interaction is hence no longer long-ranged.
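As a rough sanity check of where this crossover should sit, one can estimate the carrier density at which the plasmon energy overtakes the LO phonon energy. The numbers below use textbook GaAs parameters ($m^* \approx 0.067\,m_e$, $\epsilon_\infty \approx 10.9$, $\hbar\omega_{LO} \approx 36$ meV) and should be read as an illustrative estimate, not a fit to the Mooradian-Wright data:

```python
import math

# Physical constants (SI)
e    = 1.602176634e-19      # C
eps0 = 8.8541878128e-12     # F/m
me   = 9.1093837015e-31     # kg
hbar = 1.054571817e-34      # J s

# GaAs parameters (textbook values; treat these as assumptions)
m_eff   = 0.067 * me        # conduction-band effective mass
eps_inf = 10.9              # high-frequency dielectric constant
E_LO    = 36.2e-3 * e       # LO phonon energy, ~36 meV

def plasmon_energy(n_cm3):
    """Plasmon energy hbar*omega_p (in J) for carrier density n in cm^-3."""
    n = n_cm3 * 1e6  # convert to m^-3
    omega_p = math.sqrt(n * e**2 / (eps0 * eps_inf * m_eff))
    return hbar * omega_p

# Critical density where the plasmon crosses the LO phonon
n_crit = (E_LO / hbar)**2 * eps0 * eps_inf * m_eff / e**2 / 1e6  # cm^-3
print(f"hbar*omega_p = hbar*omega_LO at n ~ {n_crit:.1e} cm^-3")

for n in [1e17, 3e17, 1e18, 3e18]:
    ratio = plasmon_energy(n) / E_LO
    regime = "plasmon above LO phonon" if ratio > 1 else "plasmon below LO phonon"
    print(f"n = {n:.0e} cm^-3: omega_p/omega_LO = {ratio:.2f} ({regime})")
```

The estimate lands in the high-$10^{17}$ cm$^{-3}$ range, which is at least the right ballpark for the doping levels where the plasmon-phonon mixing region sits in the data above.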

While the points I have made above will be quite obvious to many of you, I still find the data and its implications from a historical perspective quite profound.

Aside: I was heartened by Sarang’s post on the concept of emergence and upward heritability. One tends to think harder about one’s stance when there is an opposing view. He made some extremely important points regarding this topic, though I have to admit that I still lean towards Wilczek-ian concepts at present.

## Emergence is about the failure of upward heritability

I’ve been only tenuously on the internet this month, so I missed Anshul’s posts about emergence (here and here). I’m closer on this Q. to Pines and Laughlin than to Anshul and Wilczek (see e.g. here — I still stand by most of that I think). What’s weird about that Wilczek article is that he identifies the main question, but then suavely ignores it.

There are various things I want to say in response to these articles (none of which I entirely agree with), but this is the gist: 1. The thermodynamic limit destroys upward heritability. 2. “Emergence” is a result of this breakage.

According to Wilczek, the reason that particle physics concepts move up into the infrared is that microscopic laws, when “applied to large bodies, retain their character.” Let’s try to unpack that. Obviously, inferences from an approximate microscopic theory will generally not scale up to the macroscopic level (because of how errors propagate) but one might reasonably expect some structural properties of the microscopic theory to provide useful deductive guidance at higher levels — e.g., the idea that if the microscopic theory is invariant under some symmetry, then so will any higher level; or if (let’s get nonrelativistic at this point) the microscopic theory only has particles of charge e, then the macroscopic theory is constrained to have particles of charge ne where n is an integer. In fact, of course, neither of these arguments is true: spontaneous symmetry breaking is a thing, as is fractionalization, as is the presence of “emergent” symmetries at critical points that were never there in the underlying model.

These inferences are false, not at any intermediate step, but because of pathologies that arise when you take limits. In the present case, the relevant limit is the thermodynamic limit (more precisely, it is the way the thermodynamic limit commutes, or fails to commute, with other limits such as the quasistatic and linear response limits). Virtually all nontrivial emergent phenomena are due to these pathologies. For instance:

1. In the transverse field Ising model, a finite-size system at T = 0 in the ferromagnetic phase flips between the all-up and all-down ground states at a rate $1/\tau$ that is exponentially small in N, the number of spins. The magnetization measured on times short compared with $\tau$ is large and finite, but on much longer times it is zero. Depending on whether you take the averaging time or N to infinity first, you either find a phase transition or not. These noncommuting limits are in some sense the opposite of upward heritability, if you interpret this as saying that the laws do not change their character: the microscopic dynamics obeys certain symmetries, but the large-scale behavior does not.
2. Thouless and Gefen explained how the fractional quantum Hall effect similarly allows large systems to defy microscopic symmetries. The Byers-Yang theorem requires the ground state of a system with “fundamental” charge e to be periodic in flux with period h/e. However, you can get around this by having multiple different ground state branches that don’t mix (in the thermodynamic limit) — if you want particles with a charge e/m then you just need m ground states. The switchings between these ground states guarantee the validity of “the letter of” the underlying microscopic theorem while permitting its “spirit” to be violated.

In all such cases, there remains some literal sense in which a “deductive path” exists from the microscopic to the macroscopic world. However, this deductive path is emphatically not a local path: the standard cond. mat. phenomena illustrate that inductive reasoning from systems of N particles to systems of N+1 (or 2N) particles will not help identify emergent phenomena. You need to know in advance where you are going to end up.

(I think Wilczek might say that “upward heritability” is really about the fact that both cond-mat and hep-th are about symmetry arguments although the symmetries are different in the two cases. I don’t buy this at all. If there is a puzzle here, and for L&P-type reasons I don’t think there is one, it can be resolved by arguing that “the unreasonable effectiveness of mathematics” separately explains both.)

A flip side is that a large family of microscopic possibilities end up at the same macroscopic model. (For instance the 1/3 Laughlin state, or minor deformations of it, is a true ground state for an enormous range of electron-electron interaction strengths and profiles.) When you change scales, some information is lost and other information is amplified; whether a particular piece of information is going to be lost or amplified is a property of the coarse-graining and not a property of the underlying microscopic theory.

## Transport Signatures in Charge Density Wave Systems

This post is inspired in part by Inna’s observation that a Josephson junction can act as a DC-AC converter. It turns out that CDWs can also act in a similar manner.

Sometimes I feel like quasi-1D charge density waves (CDWs) are like the lonely neglected child compared to superconductors, the popular, all-star athlete older sibling. Of course this is so because superconductors carry dissipationless current and exhibit perfect diamagnetism. However, quasi-1D CDWs can themselves exhibit pretty stunning transport signatures associated with the CDW condensate. Note that these spectacular properties are associated with incommensurate CDWs, as they break the translational symmetry of the crystal.

To make a comparison with superconductivity (even though no one likes to be compared to their older sibling), here is a cartoon of the frequency-dependent conductivity (taken from G. Gruner’s Review of Modern Physics article entitled Dynamics of Charge Density Waves):

Frequency-dependent conductivity for (a) a superconductor and (b) an incommensurate CDW

In the superconducting case, there is a delta function at zero frequency, indicative of dissipationless transport. For the CDW, there is also a collective charge transport mode, but in this case it is at finite energy (as it is pinned by impurities), and it is dissipative (indicated by the finite width).

This collective charge transport mode can be “depinned”, resulting in a nonlinear conductivity known as a sliding CDW. This is evidenced below in the I-V characteristics. Below a threshold electric field/voltage, usual Ohmic characteristics are observed, associated with the “normal” non-condensed electrons. However, above the threshold electric field/voltage the collective mode is depinned and contributes to the I-V characteristics.

Non-linear IV characteristics indicative of collective charge transport in the CDW phase

Even more amazingly, once this CDW has been depinned, applying a DC field results in an AC response. Below is an image from a famous paper by Fleming and Grimes showing the Fourier transformed AC response with several harmonics. As the voltage is turned up, the fundamental frequency increases markedly (the voltage is highest in (a) and is decreased slowly until (e) where the CDW is no longer sliding).

AC response to a DC applied voltage in NbSe3, in order of decreasing DC voltage: (a) V=5.81 mV, (b) V=5.05 mV, (c) V=4.07 mV, (d) V=3.40 mV, (e) V=0

The observed oscillation frequency is due to the collective mode getting depinned from its impurity site and then getting weakly pinned successively by impurities, though this picture is debated. N.P. Ong, who did some great early work on CDW transport, has noted that the CDW “sings”. A nice cartoon of this idea is presented in the ball-and-egg-crate model shown below. One can imagine the successive “hits in the road” at periodic time intervals resulting in the AC response seen above.

Ball and egg crate model of CDW transport
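A minimal caricature of this depinning physics is the overdamped particle-on-a-washboard equation, $\dot{x} = F - \sin x$ (dimensionless units), where $F$ plays the role of the applied field and the sinusoid the pinning potential. This is a toy sketch of the idea, not a model of NbSe3. Below threshold ($F < 1$) the particle sits in a well and there is no DC drift; above threshold it slides and oscillates at the washboard frequency, which for this toy equation is exactly $\sqrt{F^2 - 1}$:

```python
import math

def mean_velocity(F, dt=1e-3, t_max=400.0):
    """Euler-integrate the overdamped washboard equation dx/dt = F - sin(x)
    and return the average drift velocity over the second half of the run
    (the first half is discarded as a transient)."""
    steps = int(t_max / dt)
    x = 0.0
    x_mid = 0.0
    for n in range(steps):
        x += dt * (F - math.sin(x))
        if n == steps // 2:
            x_mid = x
    return (x - x_mid) / (t_max / 2)

for F in [0.8, 1.2, 2.0, 4.0]:
    v = mean_velocity(F)
    v_theory = math.sqrt(F**2 - 1) if F > 1 else 0.0
    print(f"F = {F}: <dx/dt> = {v:.3f}  (analytic: {v_theory:.3f})")
```

Turning up $F$ makes the washboard frequency climb, which is the toy-model analogue of the fundamental frequency increasing with voltage in the Fleming-Grimes data above.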

Hopefully this post will help people appreciate more the shy younger sibling that is the charge density wave.

All images taken from G. Gruner RMP 60, 1129 (1988).

## Let there be (THz) light

The applications of scientific discoveries are sometimes not what you would expect, and high temperature superconductivity is no different.  When high-temperature superconductivity was discovered in copper-oxides (cuprates) in 1986, the envisioned applications were power lines, electromagnets, and maglev trains, all cooled by cheap-as-milk liquid nitrogen.  While applications involving high-temperature superconductors’ dissipationless and diamagnetic properties are slowly coming online, there are other potential technologies which most people are less aware of.  The one I want to discuss here is using the layered structure of cuprate high temperature superconductors to produce coherent THz emitters.  Creating light sources and detectors for the THz portion of the electromagnetic spectrum—the notorious THz gap—has been a pressing challenge for decades.

The Josephson effect

The Josephson effect underlies many important applications of superconductors, such as sensitive magnetometers, qubits for quantum computing, and the SI definition of the volt.  The starting point for the Josephson effect is a superconductor’s complex order parameter, $\Psi=\Psi_0 e^{\imath\varphi}$.  The amplitude, $\Psi_0$, is related to some measure of the robustness of the superconducting state–either the superfluid density or the superconducting gap.  The phase, $\varphi$, reflects that a superconductor is a phase-coherent state–a condensate.  At $T_c$, a superconductor chooses an arbitrary phase, and a current in a superconductor (a supercurrent) corresponds to a gradient in this phase.

A Josephson junction, sketched below, consists of two superconductors separated by a non-superconducting barrier.  Because each superconductor chooses an arbitrary phase and the superconducting wavefunctions can penetrate into the barrier, a phase gradient develops in the barrier region, and a supercurrent can flow.  This supercurrent is given by $I_s=I_c\sin(\delta\varphi)$, where $I_c$ is the critical current which causes the Josephson junction to become resistive (different from the critical current which makes the superconductor resistive) and $\delta\varphi$ is the phase difference between the two superconductors.  This is the DC Josephson effect.

In the resistive regime ($I>I_c$), one encounters the AC Josephson effect, in which the Josephson junction supports an oscillating current with AC Josephson frequency $\omega=\frac{2\pi V}{\Phi_0}$, where V is the voltage across the junction and $\Phi_0$ is the magnetic flux quantum.  The current in this regime is given by: $I(t)=I_c\sin(\delta\varphi + \frac{2\pi V}{\Phi_0}t)$

Thus, a Josephson junction can convert a DC voltage to an AC current (and vice versa).

Schematic of a Josephson junction, consisting of two superconductors with a barrier in between. The barrier may be an insulator, a metal, or a constricted piece of superconductor. Each superconducting slab has a complex wavefunction with an arbitrarily chosen phase, $\varphi_{1,2}$. Supercurrent through a Josephson junction depends on the phase difference, $\delta\varphi=\varphi_1-\varphi_2$

Schematic of IV curve of Josephson junction (solid line), from Ref [1]. For sufficiently small bias currents, a supercurrent flows through the junction and no voltage is sustained–the regime of the DC Josephson effect. At currents exceeding $I_c$, the junction becomes resistive and is able to sustain a voltage across it, even though each superconducting slab remains superconducting. This is where the AC josephson effect is realized. The dashed line is an ohmic resistance, which a Josephson junction approaches in the limit of high bias voltage.
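Plugging in numbers makes it clear why the AC Josephson effect is interesting for the THz gap: since $\omega = 2\pi V/\Phi_0$, the frequency is $f = V/\Phi_0$, i.e. roughly 0.48 THz per millivolt. A quick back-of-the-envelope in Python:

```python
# Josephson voltage-to-frequency conversion, f = V / Phi_0
h = 6.62607015e-34       # Planck constant (J s)
e = 1.602176634e-19      # elementary charge (C)
Phi0 = h / (2 * e)       # magnetic flux quantum, ~2.07e-15 Wb

def josephson_frequency(V):
    """AC Josephson frequency (Hz) for a DC voltage V (volts) across the junction."""
    return V / Phi0

for V_mV in [0.5, 1.0, 2.0]:
    f = josephson_frequency(V_mV * 1e-3)
    print(f"V = {V_mV} mV -> f = {f / 1e9:.1f} GHz")
```

So a millivolt-scale voltage across a single junction already corresponds to a frequency of hundreds of GHz, squarely in the THz-gap territory.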

More is better

While a Josephson junction in its resistive regime is a perfect DC to AC converter with frequency proportional to voltage, the amount of power it can output is limited by the fact that device performance (and eventually superconductivity) degrade if you crank the voltage up too high.  However, it turns out that if you have multiple Josephson junctions in series, the available power scales with the number of junctions, and if all of these junctions oscillate in phase, they can form a coherent radiation source.  This is where high temperature superconductors come in.

Calling all cuprates

The crystal structure of cuprate high temperature superconductors consists of $CuO_2$ sheets where superconductivity originates, separated by insulating layers.  While the $CuO_2$ sheets are coupled to each other, the coupling can be weak in some cuprates, such that the material behaves like an array of intrinsic Josephson junctions in series.  Thus, the cuprates give for free a structure which has to be specially manufactured for other superconductors.

Crystal structure of the high temperature superconductor $Bi_2Sr_2CaCu_2O_{8+\delta}$ (BSCCO), which is most commonly used to make THz emitters. The layered structure of cuprates (superconducting $CuO_2$ layers separated by insulating intervening layers) permits the material itself to be a series of Josephson junctions. Adapted from Ref. [2].

The first step to making a cuprate superconductor into an emitter of coherent THz radiation is to pattern a single crystal into a smaller structure called a mesa. The mesa behaves as a resonant cavity such that an integer number of half-wavelengths ($\lambda/2$) of radiation fits into the width, w, of the device.  The lowest-order resonance condition is met when the AC Josephson frequency is equal to the frequency of a cavity mode, $\omega_c=\frac{\pi c_0}{n w}$, where $c_0/n$ is the mode propagation velocity in the medium and n is the far-infrared refractive index.  For a given mesa width, the resonance condition is met for a specific value of applied voltage for each Josephson junction, $V_{jj}=\frac{c_0 \Phi_0}{2 w n}$.  For a stack of Josephson junctions in series, the applied voltage scales with the number of junctions (N): $V=NV_{jj}$

A schematic of such a device is shown below.  The mesa, produced by ion milling, is 1-2 microns high (corresponding to ~1000 intrinsic Josephson junctions), 40-100 microns wide (setting the resonance emission frequencies), and several hundred microns in length.  A voltage is applied along the height of the stack and THz radiation is emitted out the side of the stack.  Devices have been fabricated with emissions at frequencies between 250 GHz and 1 THz.  Linewidths of ~10 MHz have been achieved as have radiation powers of 80 microwatts, though it is predicted that the latter figure can be pushed to 1 mW [2,3].  The emission frequency can be tuned either by fabricating a new device with a different width, or by fabricating a device shaped like a trapezoid or a stepped pyramid and varying the bias voltage [4].  The latter corresponds to different numbers of Josephson junctions in the stack oscillating coherently.

THz emitter made out of high-temperature superconducting cuprates. A ‘mesa’ is ion-milled from a single crystal of BSCCO with a restricted width dimension, w. THz radiation is emitted out of the side, with frequency depending on the width of the mesa and the applied voltage. From Refs [2-3].

Emission spectra of three devices with different widths,w, made out of high temperature superconductors operated at T~25K. Inset shows linear relationship between frequency and 1/w. From Ref [3]
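To see how these numbers hang together, here is a back-of-the-envelope calculation for a hypothetical mesa. The width and junction count are typical of the devices described above, while the far-infrared refractive index is an assumed illustrative value (the actual value depends on material and polarization):

```python
c0   = 2.99792458e8          # speed of light (m/s)
h    = 6.62607015e-34        # Planck constant (J s)
e    = 1.602176634e-19       # elementary charge (C)
Phi0 = h / (2 * e)           # magnetic flux quantum (Wb)

# Assumed device parameters (illustrative, not from a specific paper)
w = 80e-6     # mesa width (m)
n = 4.2       # far-infrared refractive index of the mesa (assumed value)
N = 1000      # number of intrinsic Josephson junctions in the stack

f_cavity = c0 / (2 * n * w)   # lowest cavity-mode frequency, f = c0/(2 n w)
V_jj     = f_cavity * Phi0    # per-junction voltage at resonance
V_total  = N * V_jj           # bias applied across the whole stack

print(f"cavity mode:        {f_cavity / 1e9:.0f} GHz")
print(f"voltage/junction:   {V_jj * 1e3:.2f} mV")
print(f"total bias voltage: {V_total:.2f} V")
```

With these assumed parameters, an 80-micron-wide mesa resonates in the several-hundred-GHz range with a total bias of order a volt (about a millivolt per junction), consistent with the frequency-per-voltage figure from the Josephson relation.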

Implications

Successful fabrication of coherent THz emitters out of high temperature superconductors is a relatively new achievement and there is additional progress to be made, particularly towards increasing the emitted power.  This technology is promising for filling in a portion of the THz gap outside the capabilities of quantum cascade lasers, whose lowest emission frequency is presently 1.6 THz.  In the future, one can imagine a light source consisting of an array of BSCCO mesas of different dimensions producing a narrow-bandwidth light source which is tunable between 250 GHz and 1.5 THz for security and research applications.

References

[1] J. Annett, Superconductivity, Superfluids, and Condensates, Oxford University Press (2003)

[2] U. Welp et al., Nature Photonics 7, 702 (2013)

[3] L. Ozyuzer et al., Science 318, 1291 (2007)

[4] T. M. Benseman et al., Phys. Rev. B 84, 064523 (2011)

## Frank Wilczek’s Concept of ‘Upward Inheritance’

Yesterday, I happened upon an article entitled Why are there Analogies Between Condensed Matter and Particle Theory (pdf!) by Frank Wilczek. In it, he suggests an alternative view to the one espoused by Laughlin and Pines in their Theory of Everything paper. The views expressed in More is Different by P.W. Anderson, which is the most influential paper of the three, lie somewhere in between. The article by Wilczek is noteworthy because of the idea that he calls “upwardly heritable principles”.

He first addresses the issue of why ideas in condensed matter and particle physics bear such a resemblance (i.e. why the macroscopic reflects the microscopic). Here, he highlights examples of cross-fertilization between these two areas of physics to illustrate how it is not only ideas from particle physics that have influenced condensed matter but also vice versa.

The ones I found the most interesting were: (1) Einstein’s application of the Planck spectrum to obtain the specific heat of crystals following Planck’s original work (particle physics $\rightarrow$ condensed matter) and (2) Dirac’s interpretation of negative-energy particles as analogous to the particle-hole spectrum of the Fermi sea (condensed matter $\rightarrow$ particle physics).

While Wilczek does hint at the notion that the cross-fertilization is perhaps an accident, he chooses to believe that a fundamental principle underlies these connections. He recognizes that precisely because there is no logical necessity for ideas to bridge the two realms, the fact that such a relationship exists suggests a deep reason for its occurrence. He speculates that the reason behind all this is “the upwardly heritable principles of locality and symmetry, together with the quasimaterial nature of apparently empty space”.

I like this paper because its views seem natural, are much less radical than Laughlin and Pines’s, and because Wilczek suggests a path forward to understanding why such a cross-fertilization might occur. Moreover, the article hints that even though Anderson’s view of “new principles at each scale” may be true, the fact that it is possible to apply principles (e.g. broken symmetry) from higher up the scale (i.e. condensed matter) to lower on the scale (i.e. particle physics) is suggestive of a lingering connection between the two scales.

Just a quick (perhaps too quick) summary of the respective viewpoints:

1)   Wilczek $\rightarrow$ Deep connection between microscopic and macroscopic.

2)   Anderson $\rightarrow$ Different scales yield new physical principles, but still a connection between different scales.

3)   Laughlin and Pines $\rightarrow$ Microscopic cannot, even in principle, explain phenomena on a macroscopic scale (such as the Josephson quantum).

In writing this post, I know that I have not presented the ideas in the three articles thoroughly, so let me link again Anderson’s article here (pdf!), Wilczek’s here (pdf!) and Laughlin’s and Pines’ here (pdf!) for your convenience.