Monthly Archives: December 2015

The Unswattable

One of the deepest results in all of quantum mechanics is Bell’s inequality. While it is remarkably profound, it is also an irksome, ever-present, unswattable fly. Nature’s violation of this inequality implies that nature is intrinsically non-local. Until the day Bell published his theorem, it was thought that everything in nature could be understood under the locality assumption.

While I continue with my research life day-to-day without thinking too much about it, every once in a while that pesky fly reappears, seemingly out of thin air. In this particular instance, it emerged while I was re-reading the excellent piece by N. David Mermin in Physics Today entitled Is the moon there when nobody looks? (pdf!) Alright, so not exactly thin air, perhaps this was self-inflicted.

Regardless, this article by Mermin is particularly pedagogical and explains Bell’s theorem in the simplest manner that I have found. It describes how in a sea of seemingly random data, there are correlations that cannot be explained without considering the existence of an entangled state. Any “local hidden variable” theories cannot explain the data.

What is so bothersome about Bell’s theorem

At some point in the article, Mermin challenges the reader to come up with a scheme that explains all the observed results using a purely local and deterministic picture. This appears, at first sight, not to be an impossibly difficult task. However (at least for me), one's schemes are quickly exhausted (very frustratingly!), and one has to face the reality that this may not be possible. In fact, Bell unequivocally showed that it is not possible.
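To get a feel for why the schemes run out, here is a minimal sketch in the spirit of Mermin's gedanken device (the device details follow his article; the enumeration and variable names are mine). A local, deterministic scheme amounts to each particle pair carrying an "instruction set" specifying a light color for each of the three detector settings, and no instruction set can make the lights agree less than 5/9 of the time when the settings are chosen at random, whereas the observed (quantum mechanical) agreement is 1/2.

```python
# Enumerate every "instruction set" (a fixed color for each of the three
# detector settings) and compute how often the two lights would agree when
# the settings on the two detectors are chosen independently at random.
from itertools import product

settings = (1, 2, 3)

agreements = {}
for instr in product("RG", repeat=3):          # e.g. ('R', 'R', 'G')
    agree = sum(instr[a - 1] == instr[b - 1]
                for a in settings for b in settings) / 9.0
    agreements["".join(instr)] = agree

for name, agree in agreements.items():
    print(f"{name}: agreement = {agree:.3f}")

print(f"minimum over all instruction sets = {min(agreements.values()):.3f}  (= 5/9)")
print("quantum mechanical prediction      = 0.500")
```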

However, there seems to me to be a (rather pathological) way out of Bell's constraints. It is possible that embedded in Bell's theorem is an assumption that we are unaware we are making. This would be analogous to the implicit assumption in Newton's formulation of gravity that gravitational influences propagate infinitely fast, an assumption that, looking at Newton's equations alone, we would not know we were making. If we are making such an assumption in the Bell experiments, it may be possible to salvage locality in some extremely contrived manner. While this situation seems unlikely even to me, I sincerely hope that there is such an assumption lurking somewhere rather than face up to the more probable idea that nature is intrinsically non-local.

One of the most unfortunate historical circumstances surrounding the publication of Bell’s inequality was that Einstein was not alive to see its formulation. One wonders what his reaction would have been and whether he would have taken The Unswattable to his grave.


A Much Needed Textbook Overhaul

It is well-accepted in the community that the quality of introductory textbooks in condensed matter physics was decent but not great until a few years ago. At US universities, it is common to be exposed to condensed matter physics for the first time through either Kittel's Intro to Solid State Physics or Ashcroft and Mermin's Solid State Physics.

While both books have a number of redeeming qualities, they don't possess the trifecta of (i) being modern, (ii) being easily accessible to a novice (aided by having a conversational tone), and (iii) targeting physical insight/perspective over an information glut. These books do have excellent problem sets, however, and are a valuable reference for those who are already well-acquainted with solid state physics. I will mention that Ziman's Theory of Solids, while infrequently used at US institutions, is a great little book (though again probably not appropriate for a complete novice). These books were all written by theorists.

In the past few years, however, there has been an excellent collection of books released under the Oxford Masters Series (OMS) umbrella. These books tend to be more pedagogical and conversational, shorter in length and necessarily more modern. They would be much more appropriately described as bedtime reading compared to the counterparts mentioned above. There are a few books from the OMS that I have read from cover to cover, and some where I have just read a few chapters. These include the following titles:

  1. Band Theory and Electronic Properties of Solids, J. Singleton
  2. Optical Properties of Solids, M. Fox
  3. Magnetism in Condensed Matter, S. Blundell
  4. Superfluids, Superconductors and Condensates, J. Annett
  5. Statistical Mechanics: Entropy, Order Parameters and Complexity, J. Sethna

There are two more great introductory-level books which, though not explicitly in the Oxford Masters Series collection, have been released through the Oxford University Press:

  1. The Oxford Solid State Basics, S. Simon
  2. Quantum Field Theory for the Gifted Amateur, T. Lancaster and S. Blundell

I have to say that I have been surprised by the consistent level of pedagogy maintained across the numerous authors in the series.

What these books are:

  1. Introductory level
  2. More data-driven (In particular, Fox’s, Singleton’s and Blundell’s books help one understand data from certain mainstream experimental techniques. This probably has to do with the fact that these authors are experimentalists.)
  3. Modern (e.g. there is a discussion of angle resolved photoemission spectroscopy and corresponding data in Singleton’s book)
  4. Focused

What these books are not:

  1. A complete and thorough treatment of the subjects (it could be argued that “less is more” in this case, however!)
  2. Mathematically involved
  3. Rigorous (sometimes almost appealing too much to intuition!)

Most of us learn in solitude with a good textbook/paper rather than in the classroom, and textbooks like these make it easier to get up to speed. I think that condensed matter physics will have greater appeal at the undergraduate level in the US and other English-speaking countries due to the clarity of the OMS textbooks. The authors of these books have done a service to our sub-field, and I much appreciate their effort. Lastly, the philosophical perspective of condensed matter physics has changed somewhat since the days of Kittel and Ashcroft and Mermin, and our textbooks needed to reflect this shift. They can now claim to do so.

Please feel free to comment on and recommend books, articles or papers that you found particularly useful. I am curious to know what else is out there, even if not originally an English-language text.

Just in case you thought otherwise: I was not paid by Oxford University Press to write this post.

The Relationship Between Causality and Kramers-Kronig Relations

The Kramers-Kronig relations tell us that the real and imaginary parts of causal response functions are related. The relations are of immense importance to optical spectroscopists, who use them to obtain, for example, the optical conductivity from the reflectivity. It is often said in textbooks that the Kramers-Kronig relations (KKRs) follow from causality. The proof of this statement usually uses contour integration and the role of causality is then a little obscured. Here, I hope to use intuitive arguments to show the meaning behind the KKRs.

If one imagines applying a sudden force to a simple harmonic oscillator (SHO) and then watching its response, one would expect that the response will look something like this:

[Figure: impulse response \chi(t) of a damped harmonic oscillator]

We expect the SHO to oscillate for a little while and eventually stop due to friction of some kind. Let’s call the function in the plot above \chi(t). Because \chi(t) is zero before we “kick” the system, we can play a little trick and write \chi(t) = \chi_s(t)+\chi_a(t) where the symmetrized and anti-symmetrized parts are plotted below:

[Figures: the symmetrized part \chi_s(t) and the anti-symmetrized part \chi_a(t)]

Since the symmetrized and anti-symmetrized parts cancel out perfectly for t<0 (and add back up to \chi(t) for t>0), we recover our original response. Just to convince you (as if you needed convincing!) that this works, I have explicitly plotted this:

[Figure: \chi_s(t) + \chi_a(t), reproducing the original \chi(t)]
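For those who prefer to convince themselves numerically, here is a minimal sketch of the decomposition (with oscillator parameters invented purely for illustration):

```python
import numpy as np

# Causal impulse response of a damped harmonic oscillator (illustrative parameters).
t = np.linspace(-20.0, 20.0, 4001)          # grid symmetric about t = 0
omega0, gamma = 1.0, 0.2
chi = np.where(t > 0, np.exp(-gamma * t / 2) * np.sin(omega0 * t), 0.0)

# Symmetrized and anti-symmetrized parts; chi[::-1] is chi(-t) on this grid.
chi_s = 0.5 * (chi + chi[::-1])
chi_a = 0.5 * (chi - chi[::-1])

# The two parts are individually nonzero for t < 0 but cancel there,
# and everywhere they add back up to the original chi(t).
print("max |chi_s| for t<0:        ", np.abs(chi_s[t < 0]).max())
print("max |chi_s + chi_a| for t<0:", np.abs((chi_s + chi_a)[t < 0]).max())
print("max |chi_s + chi_a - chi|:  ", np.abs(chi_s + chi_a - chi).max())
```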

Now let’s see what happens when we take this over to the frequency domain, where the KKRs apply, by doing a Fourier transform. We can write the following:

\tilde{\chi}(\omega) = \int_{-\infty}^\infty e^{i \omega t} \chi(t) \mathrm{d}t = \int_{-\infty}^\infty (\mathrm{cos}(\omega t) + i \mathrm{sin}(\omega t)) (\chi_s (t)+\chi_a(t))\mathrm{d}t

where in the last step I've used Euler's identity for the exponential and I've decomposed \chi(t) into its symmetrized and anti-symmetrized parts as before. Now, there is something immediately apparent in the last integral. Because the domain of integration runs from -\infty to \infty, the area under the curve of any odd (a.k.a. anti-symmetric) function will necessarily be zero. Lastly, noting that the product of a symmetric and an anti-symmetric function is anti-symmetric, while the product of two symmetric (or two anti-symmetric) functions is symmetric, we can rewrite the equation above as:

\tilde{\chi}(\omega) = \int_{-\infty}^\infty \mathrm{cos}(\omega t) \chi_s(t) \mathrm{d}t + i \int_{-\infty}^\infty \mathrm{sin}(\omega t) \chi_a(t) \mathrm{d}t = \tilde{\chi}_s(\omega) + i \tilde{\chi}_a(\omega)

Before I continue, some remarks are in order:

  1. Even though we now have two functions in the frequency domain (i.e. \tilde{\chi}_s(\omega) and  \tilde{\chi}_a(\omega)), they actually derive from one function in the time-domain, \chi(t). We just symmetrized and anti-symmetrized the function artificially.
  2. We actually know the relationship between the symmetric and anti-symmetric functions in the time-domain because of causality.
  3. The symmetrized part of \chi(t) corresponds to the real part of \tilde{\chi}(\omega). The anti-symmetrized part of \chi(t) corresponds to the imaginary part of \tilde{\chi}(\omega).
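Here is a quick numerical check of remark 3, in the same spirit as the sketch above (again with invented oscillator parameters), comparing the direct Fourier transform of \chi(t) with the transforms of its symmetrized and anti-symmetrized parts:

```python
import numpy as np

# Damped-oscillator response as before, on a grid symmetric about t = 0.
t = np.linspace(-200.0, 200.0, 40001)
dt = t[1] - t[0]
omega0, gamma = 1.0, 0.2
chi = np.where(t > 0, np.exp(-gamma * t / 2) * np.sin(omega0 * t), 0.0)
chi_s = 0.5 * (chi + chi[::-1])
chi_a = 0.5 * (chi - chi[::-1])

# Fourier transform (e^{i omega t} convention) evaluated at a few frequencies.
for omega in (0.5, 1.0, 2.0):
    phase = np.exp(1j * omega * t)
    chi_w   = np.sum(phase * chi)   * dt
    chi_s_w = np.sum(phase * chi_s) * dt      # purely real
    chi_a_w = np.sum(phase * chi_a) * dt      # purely imaginary
    print(f"omega = {omega}: Re chi = {chi_w.real:+.4f} vs FT[chi_s] = {chi_s_w.real:+.4f}; "
          f"Im chi = {chi_w.imag:+.4f} vs FT[chi_a]/i = {(chi_a_w / 1j).real:+.4f}")
```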

With this correspondence, the question then naturally arises:

How do we express the relationship between the real and imaginary parts of \tilde{\chi}(\omega), knowing the relationship between the symmetrized and anti-symmetrized functions in the time-domain?

This actually turns out to not be too difficult and involves just a little more math. First, let us express the relationship between the symmetrized and anti-symmetrized parts of \chi(t) mathematically.

\chi_s(t) = \mathrm{sgn}(t) \times \chi_a(t)

where \mathrm{sgn} (t) just changes the sign of the t<0 part of the plot and is shown below.

[Figure: the sign function \mathrm{sgn}(t)]

Now let’s take this expression over to the frequency domain. Here, we must use the convolution theorem. This theorem says that if we have two functions multiplied by each other, e.g. h(t) = f(t)g(t), the Fourier transform of this product is expressed as a convolution in the frequency domain as so:

\tilde{h}(\omega)=\mathcal{F}(f(t)g(t)) = \int \tilde{f}(\omega-\omega')\tilde{g}(\omega') \mathrm{d}\omega'

where \mathcal{F} means Fourier transform. Therefore, all we have left to do is figure out the Fourier transform of \mathrm{sgn}(t). The result is given here (in terms of frequency, not angular frequency!), but it is a fun exercise to work it out on your own. The answer is:

\mathcal{F}(\mathrm{sgn}(t)) = \frac{2}{i\omega}
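For the curious, one quick route to this result is to insert a convergence factor e^{-\epsilon |t|} and take \epsilon \rightarrow 0 at the end (the overall sign depends on which sign convention one adopts for the exponent; the e^{-i\omega t} convention reproduces the form quoted above):

\mathcal{F}(\mathrm{sgn}(t)) = \lim_{\epsilon \rightarrow 0}\left[\int_0^\infty e^{-i\omega t}e^{-\epsilon t}\mathrm{d}t - \int_{-\infty}^0 e^{-i\omega t}e^{\epsilon t}\mathrm{d}t\right] = \lim_{\epsilon \rightarrow 0}\left[\frac{1}{\epsilon + i\omega} - \frac{1}{\epsilon - i\omega}\right] = \frac{2}{i\omega}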

With this answer, and using the convolution theorem, we can write:

\tilde{\chi}_s(\omega) = \int_{-\infty}^{\infty} \frac{2}{i(\omega-\omega')} \tilde{\chi}_a(\omega')\mathrm{d}\omega'

Hence, up to some factors of 2\pi and i, we can now see what lies behind the KKRs without using contour integration. We can also see why it is always said that the KKRs are a result of causality. Thinking about the KKRs this way has definitely helped me think about response functions more generally.
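As a final sanity check, here is a small numerical sketch (my own, not taken from the notes linked below) that reconstructs the real part of the damped-oscillator response from its imaginary part using the properly normalized, principal-value form of the relation above, \mathrm{Re}\,\tilde{\chi}(\omega) = \frac{1}{\pi}\mathcal{P}\int_{-\infty}^{\infty}\frac{\mathrm{Im}\,\tilde{\chi}(\omega')}{\omega'-\omega}\mathrm{d}\omega':

```python
import numpy as np

# Analytic response of the damped oscillator used above (e^{i omega t} convention):
# chi(omega) = 1 / (omega0^2 - omega^2 - i gamma omega)
omega0, gamma = 1.0, 0.2
w = np.linspace(-50.0, 50.0, 100001)        # uniform grid, symmetric about 0
dw = w[1] - w[0]
chi = 1.0 / (omega0**2 - w**2 - 1j * gamma * w)

def kk_real_part(w_target):
    """Crude principal-value integral: drop the singular grid point."""
    i = np.argmin(np.abs(w - w_target))
    keep = np.ones_like(w, dtype=bool)
    keep[i] = False
    return np.sum(chi.imag[keep] / (w[keep] - w[i])) * dw / np.pi

for wt in (0.5, 1.0, 2.0):
    exact = chi[np.argmin(np.abs(w - wt))].real
    print(f"omega = {wt}: Re chi = {exact:+.3f}, KK reconstruction = {kk_real_part(wt):+.3f}")
```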

I hope to write a post in the future talking a little more about the connection between the imaginary part of the response function and dissipation. Let me know if you think this will be helpful.

A lot of this post was inspired by this set of notes, which I found to be very valuable.

Interactions, Collective Excitations and a Few Examples

Most researchers in our field (and many outside our field that study, e.g. ant colonies, traffic, fish schools, etc.) are acutely aware of the relationship between the microscopic interactions between constituent particles and the incipient collective modes. These can be as mundane as phonons in a solid that arise because of interactions between atoms in the lattice or magnons in an anti-ferromagnet that arise due to spin-spin interactions.

From a theoretical point of view, collective modes can be derived by examining the interparticle interactions. An example is the random phase approximation for an electron gas, which yields the plasmon dispersion (here are some of my own notes on this for those who are interested). In experiment, one usually takes the opposite view, where inter-particle interactions can be inferred from the collective modes. For instance, the force constants in a solid can often be deduced by studying the phonon spectrum, and the exchange interaction can be backed out by examining the magnon dispersions.
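As a toy illustration of the RPA example, here is a minimal sketch (Gaussian units; the electron density is a made-up number typical of a simple metal) that evaluates the plasma frequency and the leading small-q form of the RPA plasmon dispersion, \omega(q)^2 \approx \omega_p^2 + \frac{3}{5}v_F^2 q^2:

```python
import numpy as np

# Illustrative electron-gas numbers in Gaussian/cgs units; the density is roughly
# that of a simple metal and is chosen only to put numbers into the formulas.
e    = 4.803e-10      # electron charge (esu)
m    = 9.109e-28      # electron mass (g)
hbar = 1.055e-27      # erg s
n    = 2.5e22         # electron density (cm^-3)

omega_p = np.sqrt(4 * np.pi * n * e**2 / m)      # classical plasma frequency
k_F     = (3 * np.pi**2 * n) ** (1 / 3)          # Fermi wavevector
v_F     = hbar * k_F / m                         # Fermi velocity

print(f"hbar * omega_p ~ {hbar * omega_p / 1.602e-12:.1f} eV")

# Leading small-q RPA plasmon dispersion: omega(q)^2 ~ omega_p^2 + (3/5) (v_F q)^2
for q_over_kF in (0.05, 0.1, 0.2):
    q = q_over_kF * k_F
    omega_q = np.sqrt(omega_p**2 + 0.6 * (v_F * q)**2)
    print(f"q/k_F = {q_over_kF}: omega(q)/omega_p = {omega_q / omega_p:.4f}")
```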

In more exotic states of matter, these collective excitations can get a little bizarre. In a two-band superconductor, for instance, it was shown by Leggett that the two superfluids can oscillate out-of-phase resulting in a novel collective mode, first observed in MgB2 (pdf!) by Blumberg and co-workers. Furthermore, in 2H-NbSe2, there have been claims of an observed Higgs-like excitation which is made visible to Raman spectroscopy through its interaction with the charge density wave amplitude mode (see here and here for instance).

As I mentioned in the post about neutron scattering in the cuprates, a spin resonance mode is often observed below the superconducting transition temperature in unconventional superconductors. This mode has been observed in the cuprate, iron-based and heavy fermion superconducting families (see e.g. here for CeCoIn5), and is not (at least to me!) well-understood. In another rather stunning example, no less than four sub-gap collective modes, which are likely of electronic origin, show up below ~40K in SmB6 (see image below), which is in a class of materials known as Kondo insulators.

[Figure: sub-gap collective modes observed in SmB6]

Lastly, in a material class that we actually think we understand quite well, Peierls-type quasi-1D charge density wave materials, there is a collective mode that shows up in the far-infrared region that (to my knowledge) has so far eluded theoretical understanding. In this paper on blue bronze, the authors assume that the mode, which shows up at ~8 cm^{-1} in the energy loss function, is a pinned phase mode, but this assignment is likely incorrect in light of the fact that later microwave measurements demonstrated that the phase mode actually exists at a much lower energy scale (see Fig. 9). This example serves to show that even in material classes we think we understand quite well, there are often lurking unanswered questions.

In materials that we don't understand very well, such as the Kondo insulators and the unconventional superconductors mentioned above, it is therefore imperative to map out the collective modes, as they can yield critical insights into the interactions between constituent particles or couplings between different order parameters. To truly understand what is going on in these materials, every peak needs to be identified (especially the ones that show up below Tc!), quantified and understood satisfactorily.

As Lester Freamon says in The Wire:

All the pieces matter.

Matthias’ List — Check it Twice

Bernd Matthias was a prominent chemist/physicist who played a major role in the history of superconductivity. He discovered nearly 1,000 superconducting compounds in his career, most notably the NbTi and Nb_3Sn superconductors, which found commercial use in MRI instruments.

Using his vast experience in the synthesis of (classic BCS) superconducting materials, he made an empirical list of the rules he followed in searching for new superconductors.

  1. High symmetry is good; Cubic symmetry is best
  2. High density of electronic states is good
  3. Stay away from oxygen
  4. Stay away from magnetism
  5. Stay away from insulators
  6. Stay away from theorists

It is worth reflecting upon this list and thinking about how Matthias came to these conclusions (even the last item). First of all, many of the items in the list are not in any way embodied by BCS theory (1, 3, and 5 in particular). Secondly, the cuprates seem to disobey all of these items (except the last one)! It is interesting to note that Matthias was a vocal opponent of BCS theory for its failure to capture many aspects of superconductors he considered essential and for its inability to predict transition temperatures and new superconducting materials.

Matthias’ list has inspired many in the field of unconventional superconductivity to make lists similar to his. For those of you who read this blog regularly, you are probably aware that I am fond of lists (see here and here for instance), because they help synthesize key experimental observations. You are also likely to find out some of your own biases when making a list. Below is a Matthias-style list of unconventional superconductivity put forth by Igor Mazin, which contains items that I find myself generally agreeing with:

  1. Layered structures are good
  2. Carrier density should not be too high
  3. Transition metals of the fourth period are good
  4. Magnetism is essential
  5. Proper Fermi surface geometry is essential
    • Must match the spin excitation structure
  6. Enlist theorists, at least to compute Fermi surface structures

As we near the winter holiday period (in the northern hemisphere!), please feel free to share your own list, add an item to the one above, or even share some misgivings about Mazin’s list.

On the Lighter Side…

I have to say that I do sometimes get in the mood where I find myself truly enjoying horrible physics jokes. There is an undoubted inverse correlation between this enjoyment and my amount of sleep. And I happen to be in the middle of a beam time run right now.

Here is one I particularly liked from among quite a few others, which can be consumed here. I especially appreciated this one because of the use of the label-maker.

[Image: physics joke made with a label-maker]

Some Words on Sum Rules

In condensed matter physics, sum rules are used widely by both experimentalists and theorists. One can even go as far as to say that sum rules provide a framework within which theories must exist, i.e. theories cannot violate the constraints put forth by these sum rules. In this sense, they are of vast importance, and any theory of, for example, the dielectric function should be checked against these constraints.

Even though these sum rules are used often, their physical meaning is not always apparent because they can be written in many forms. Let me use the Thomas-Reiche-Kuhn sum rule (a.k.a. the f-sum rule) to illustrate some of these points. This sum rule can be formulated as so:

\sum_m(E_m - E_0)|\langle{m}|n(\textbf{q})|0\rangle|^2 = \frac{n\hbar^2q^2}{2m}

where n(\textbf{q}) is the Fourier-transformed number density operator. In this formulation, one can see the physical principles behind the sum rule most clearly:

If one adds up the energies of the transitions made from the ground state to higher energy states (in this case by perturbing the density), this should be equal to the total energy put into the system.

The TRK sum rule can be understood quite simply, therefore, as an energy conservation law for a many-body system. This is why these sum rules are so important — they are many-body manifestations of conservation laws.

The Thomas-Reiche-Kuhn sum rule is often written in the following way as well:

\int_0^\infty \omega S(\textbf{q},\omega) d \omega = \frac{n\hbar^2 q^2}{2m}

where S(\textbf{q},\omega) is the dynamic structure factor.
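To see that this is the same statement, one can use the zero-temperature spectral representation of the dynamic structure factor (the exact factors of \hbar and the per-particle versus per-volume normalization vary between textbooks, so the identification below is schematic):

S(\textbf{q},\omega) = \sum_m |\langle{m}|n(\textbf{q})|0\rangle|^2 \delta(\hbar\omega - (E_m - E_0))

so that multiplying by \omega and integrating simply picks out the same weighted sum over transition energies that appears in the first formulation.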

Furthermore, TRK can be formulated in terms of the inverse longitudinal dielectric function as so:

\int_0^\infty \omega \textrm{Im}(-1 /\epsilon_L(\textbf{q},\omega))d \omega = \frac{\pi}{2}\omega_p^2

where \omega_p is the plasma frequency. Also, it can be written in a form more familiar to optical spectroscopists, who often plot the optical conductivity:

\int_0^\infty \textrm{Re}(\sigma_L(\textbf{q},\omega))d \omega = \frac{\omega_p^2}{8}
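As a quick numerical illustration of this last form, here is a minimal sketch that integrates the real part of a Drude conductivity (Gaussian units; the plasma frequency and scattering time are made-up numbers) and compares the result against \omega_p^2/8:

```python
import numpy as np

# Drude model in Gaussian units: sigma(omega) = omega_p^2 tau / (4 pi (1 - i omega tau))
omega_p = 1.0     # plasma frequency (illustrative)
tau     = 3.0     # scattering time (illustrative)

w = np.linspace(0.0, 5000.0, 2_000_001)   # the long tail matters for the sum rule
dw = w[1] - w[0]
sigma = omega_p**2 * tau / (4 * np.pi * (1 - 1j * w * tau))

print(f"integral of Re[sigma] = {np.sum(sigma.real) * dw:.5f}")
print(f"omega_p^2 / 8         = {omega_p**2 / 8:.5f}")
```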

So while there are many sum rules (and many formulations of each sum rule as seen above for the TRK), one should always keep in mind that they derive from rather general physical principles, which are unfortunately sometimes hidden in the way they are written.