Tag Archives: P.W. Anderson

Plasmons, the Coulomb Interaction and a Gap

In a famous 1962 paper entitled Plasmons, Gauge Invariance and Mass (pdf!), P.W. Anderson described the relationship between the gap in the plasmon spectrum and the idea of spontaneous symmetry breaking. It is an interesting historical note that Higgs cites Anderson’s paper in his landmark paper concerning the Higgs mechanism.

While there are many different formulations, including Anderson’s, of why the plasmon is gapped at zero momentum in a 3D solid, they all rely on one crucial element: the long-range nature of the Coulomb interaction (i.e. the electrons are charged particles). Of these formulations, I prefer one “cartoon-y” explanation which captures the essential physics well.

Before continuing, let me stress that it is quite unusual for a fluid medium (such as the electrons in a metal) to possess no zero-frequency excitations at long wavelengths. For instance, the dispersion relation for surface gravity waves on water (pdf!) is:

\omega^2(k)=gk \tanh kh.
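To see that this spectrum is gapless, expand in the long-wavelength limit, where \tanh kh \approx kh:

\omega^2(k) \approx ghk^2 \quad\Rightarrow\quad \omega \approx \sqrt{gh}\,k \rightarrow 0 \textrm{ as } k \rightarrow 0.

Shallow-water waves therefore cost vanishingly little energy at long wavelengths, which is exactly what the plasmon fails to do.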

Now, in 3D and in the long-wavelength limit, the plasmon sets up opposite charges on the surfaces of the solid as pictured below:

The long-wavelength plasmon therefore sets up the same electric field as in a capacitor. The electric field for a capacitor is \textbf{E} = \frac{\sigma\hat{x}}{\epsilon_0}. This expression is, perhaps surprisingly, independent of the distance separating the surfaces of the solid. Therefore, it costs a finite amount of energy to set up this electric field, even in the limit of infinite separation. This finite energy cost is what gaps the plasmon.
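This cartoon can be made quantitative in a few lines via the standard plasma-oscillation estimate (here n is the electron density, m the electron mass and -e the electron charge). Rigidly displacing the electron gas by x produces a surface charge density \sigma = nex, so

\textbf{E} = \frac{nex}{\epsilon_0}\hat{x}, \qquad m\ddot{x} = -eE = -\frac{ne^2}{\epsilon_0}x \quad\Rightarrow\quad \omega_p = \sqrt{\frac{ne^2}{m\epsilon_0}}.

The restoring force, and hence the oscillation frequency, remains finite as the wavelength goes to infinity: this is the plasmon gap.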

This cartoon can be extended to 2D and 1D solids. In the 2D case, the electric field from the 1D “lines of charge” bounding the solid falls off like \textbf{E}\sim\frac{1}{r}. Therefore, in the infinite-distance limit, it takes no energy to create this electric field and the plasmon is not gapped at \textbf{q}=0. Similarly, in the 1D case, the electric field from the points bounding the solid falls off as \frac{1}{r^2}, and the plasmon is again gapless.
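For reference, these cartoons agree with the standard long-wavelength (RPA) results, quoted here with n the 3D density, n_s the 2D sheet density and a a short-distance cutoff for the 1D wire:

\omega_{3D}^2(q\rightarrow 0) = \frac{ne^2}{m\epsilon_0}, \qquad \omega_{2D}^2(q\rightarrow 0) = \frac{n_s e^2}{2m\epsilon_0}\,q, \qquad \omega_{1D}(q\rightarrow 0) \sim q\sqrt{|\ln(qa)|},

so only the 3D plasmon is gapped at zero momentum.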

This reasoning can be applied further to the phenomenon known as LO-TO splitting in a polar solid. Here, the longitudinal optical phonon (LO) and the transverse optical phonon (TO) branches are non-degenerate down to the very lowest (but non-zero!) momenta. Group theory predicts these modes to be degenerate at \textbf{q}=0 for the zincblende crystal structure of typical semiconducting compounds. Below is the phonon dispersion for GaAs demonstrating this phenomenon:

Again, the splitting occurs due to the long-range nature of the Coulomb interaction. In this case, however, it is the polar ionic degree of freedom that sets up the electric field, as opposed to the electronic degrees of freedom. Using the same reasoning as above, one would predict that the LO-TO splitting should disappear in the 2D limit, and a quick check of the literature suggests this to be the case, as reported in this paper about monolayer boron nitride.
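In the 3D case, the size of this splitting is fixed by the Lyddane-Sachs-Teller relation, which ties it to the static and high-frequency dielectric constants:

\frac{\omega_{LO}^2}{\omega_{TO}^2} = \frac{\epsilon(0)}{\epsilon(\infty)}.

The more polar the crystal (the larger \epsilon(0) is relative to \epsilon(\infty)), the larger the gap between the LO and TO branches at small \textbf{q}.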

I very much appreciate toy models such as this that give one enough physical intuition to be able to predict the outcome of an experiment. It has its (very obvious!) limitations, but is valuable nonetheless.

Declining Basic Science in the US

Having been a student in the US for some time now, one constantly gets the feeling that US basic science research is in decline. This is expressed somewhat satirically (you can read the passage here) by P.W. Anderson in More and Different: Notes from a Thoughtful Curmudgeon.

More seriously, this is expressed concretely in the recently released MIT report called (pdf link!) The Future Postponed: Why Declining Investment in Basic Research Threatens a US Innovation Deficit. Since the report is long, it is summarized in a Physics Today article that is worth reading.

The Physics Today article provides a link to the plot below. It is a plot of government spending on research and development as a percentage of the federal budget between 1961 and 2015. It shows a seeming decline in R&D spending:

The Physics Today article did not, however, link the plot below, which provides more context. It is a plot of total nondefense R&D spending per year adjusted for inflation.

More interesting plots here.

There are a few standout features in this plot. One is the dramatic increase in funding for health related sciences. Another is the staggering amount of money spent in the 1960s on space-related sciences during the height of the Cold War Space Race. The most important point of this plot, though, is that when adjusted for inflation, the amount of money spent on science by the federal government has increased over time.

So why does the MIT report lament the lack of money for basic research?

This is my perspective: I agree with the report’s overall premise that basic science innovation is slowing in the US compared with other countries. I don’t agree, however, that this is because of a lack of federal funding. What is missing from both plots above is the amount of basic science research undertaken in the private sector.

The disappearance of funding (which was not federal) for industrial research facilities such as Bell Labs, IBM Research, Xerox PARC, and General Electric Research Laboratories has been extremely detrimental. In these facilities, scientists were able to work without having to worry about funding, teaching, training the next generation of scientists and other university-related commitments. Moreover, the basic research at these facilities was often conducted with a long-term goal in mind, and taking a tortuous route (even if it took many years) to a solution was acceptable.

These industrial facilities have been replaced with increased funding at universities and at national laboratories such as Argonne National Laboratory. However, it is not clear whether entities such as these, which still rely on federal spending for basic science research, can match the productivity of their industrial predecessors. At Bell Labs, there was more time, more money and fewer commitments for the employed scientists, as detailed in the great book The Idea Factory: Bell Labs and the Great Age of American Innovation. (Whether national labs can match their industrial predecessors is arguable, though, and it should be said that the merits of national laboratories far outweigh the negatives.)

Ultimately, in my opinion, the main inadequacy is not a lack of government spending; what is needed is structural reform. Today in the US, much of basic science research is conducted at universities, where professors offload most of the scientific legwork to graduate students in order to train them as future scientists. Professors rarely work alongside one another in the laboratory in the way the scientists at Bell once did. This is a major difference.

The lack of industrial facilities in the US undertaking basic science research (at least in physics) compared to years prior is, in my opinion, one of the major reasons for the decline in US innovation on this front. Throwing more money at the problem may not fix systemic flaws.

That being said, it’s not all bad. Companies like SpaceX, Tesla, Google, Apple and Intel are all doing great things for the American economy and the applied sciences. The US federal government needs to figure out a way, though, to further incentivize those companies that have the capability to create large-scale industrial laboratories (such as GoogleX and Tesla’s Gigafactory). This would spur long-term progress and leave a mark on the next generation’s technological landscape.

Insights from the Cooper Problem

In the lead-up to the full formulation of the Bardeen-Cooper-Schrieffer theory of superconductivity (BCS theory), Leon Cooper published a paper entitled Bound Electron Pairs in a Degenerate Fermi Gas (pdf), referred to colloquially as “The Cooper Problem”. Its utility is not always recognized, but it has been stressed by Leggett in his book Quantum Liquids, where he says:

It seems not always to be appreciated how useful this “toy” model and simple generalizations of it can be, in particular in giving one a physical feel for which kinds of effects are likely to inhibit (or not) the formation of the superconducting state.

Having solved the Cooper problem in many instances, I tend to agree with Leggett. The Cooper problem, generalized to include a Zeeman field, shows the detrimental effect of a magnetic field on a Cooper pair. When the problem is generalized to include a finite center-of-mass momentum, pair-breaking is again induced.
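For reference, the unperturbed Cooper result: an arbitrarily weak attraction V, acting in a shell of width \hbar\omega_D above the Fermi surface, always binds a pair, with

1 = V\sum_{k>k_F}\frac{1}{2\xi_k+\Delta} \quad\Rightarrow\quad \Delta \approx 2\hbar\omega_D\, e^{-2/N(0)V},

where \Delta is the binding energy, \xi_k is the single-particle energy measured from the Fermi level, and N(0) is the density of states at the Fermi level. The generalizations mentioned above amount to modifying the denominator (or the set of states summed over) and asking whether a solution survives.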

However, it can also give one an intuition concerning effects that do not inhibit superconductivity. Such a case is where the Zeeman field and the center-of-mass momentum effects “cancel out” to yield a superconducting state (known as the FFLO state). Also, one can realize Anderson’s Theorem (pdf), which states that Cooper pairs are formed from time-reversed partners (as opposed to strictly \textbf{k} and -\textbf{k} pairs), a result that is important in understanding the indifference of conventional superconductors to non-magnetic impurities.

Another instance of its usefulness is in understanding the “decoupling” of higher-order pairing channels (e.g. p-wave, d-wave, etc.). This is discussed in the first chapter of Introduction to Unconventional Superconductivity by Mineev and Samokhin. After solving the problem, one obtains a result for the binding energy, \Delta_l, similar to that of the Cooper Problem:

\Delta_l = -2\epsilon_l \exp(-2/N(0)V_l)

where l is the index labeling the symmetry channel (e.g. l=2 means d-wave) and \epsilon_l denotes an energy cutoff. The result demonstrates that a superconducting state will result when the interaction is attractive in any one of the angular momentum channels (at least for a spherical Fermi surface).
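As a quick numerical sanity check on this weak-coupling exponential, one can solve the single-channel Cooper condition directly. This is a minimal sketch of my own (the function name and the choice N(0)V_l = 0.25 are illustrative, not taken from the texts above); energies are measured in units of the cutoff \epsilon_l.

```python
import math

def cooper_binding_energy(N0V, cutoff=1.0):
    """Solve the single-channel Cooper condition
         1 = (N0V/2) * ln((2*cutoff + Delta) / Delta)
    for the pair binding energy Delta, by bisection on a log scale."""
    def f(delta):
        return 0.5 * N0V * math.log((2.0 * cutoff + delta) / delta) - 1.0

    lo, hi = 1e-15, 2.0 * cutoff   # f(lo) > 0 > f(hi), and f is decreasing
    for _ in range(200):
        mid = math.sqrt(lo * hi)   # geometric midpoint, since Delta is tiny
        if f(mid) > 0.0:
            lo = mid               # root lies above mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Compare the exact root with the weak-coupling formula 2*cutoff*exp(-2/N0V)
delta_num = cooper_binding_energy(0.25)
delta_weak = 2.0 * math.exp(-2.0 / 0.25)
```

For N(0)V_l = 0.25, the numerical root coincides with the closed form 2\epsilon_l/(e^{2/N(0)V_l}-1) and differs from the weak-coupling exponential by only a few hundredths of a percent, illustrating how quickly the asymptotic formula becomes accurate.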

The Cooper Problem: An instructive, easy-to-solve, insightful toy model.