Acoustic Plasmon

Regarding the posts I’ve made about plasmons in the past (see here and here, for instance), it seems like the plasmon in a metal will always exist at a finite energy at $q=0$ due to the long-ranged nature of the Coulomb interaction. Back in 1956, D. Pines published a paper in which, in collaboration with P. Nozieres, he proposed a mechanism by which an acoustic plasmon could indeed exist.

The idea is actually quite simple from a conceptual standpoint, so a cartoonish description should suffice to explain how this is possible. The first important ingredient in realizing an acoustic plasmon is two types of charge carriers. Pines, in his original paper, chose $s$-electrons and $d$-electrons from two separate bands to illustrate his point. However, electrons from one band and holes from another could also suffice. The second important ingredient is that the masses of the two types of carriers must be very different (which is why Pines chose light $s$-electrons and heavy $d$-electrons).

Screening of heavy charge carrier by light charge carrier

So why are these two features necessary? Well, simply put, the light charge carriers can screen the heavy charge carriers, effectively reducing the range of the Coulomb interaction (see image above). Such a phenomenon is very familiar to all of us who study solids. If, for instance, the interaction between the ions on the lattice sites in a simple 3D monatomic solid were not screened by the electrons, the longitudinal acoustic phonon would necessarily be gapped because of the Coulomb interaction (forgetting, for the moment, about what the lack of charge neutrality would do to the solid!). In some sense, therefore, the longitudinal acoustic phonon is indeed such an acoustic plasmon. The ion acoustic wave in a classical plasma is similarly a manifestation of an acoustic plasmon.
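This screening picture can be made slightly more quantitative with a back-of-the-envelope sketch (my own simplification, not Pines’ full calculation): assume the light carriers respond essentially statically, providing Thomas–Fermi screening with screening length $\lambda$, while the heavy carriers, with bare plasma frequency $\omega_h$, oscillate. The total dielectric function is then approximately

$\epsilon(q,\omega) \approx 1 + \frac{1}{q^2\lambda^2} - \frac{\omega_h^2}{\omega^2}$

and setting $\epsilon(q,\omega)=0$ gives the collective mode

$\omega^2(q) = \frac{\omega_h^2\, q^2\lambda^2}{1+q^2\lambda^2} \approx (\omega_h\lambda)^2 q^2 \quad \textrm{for} \quad q\lambda \ll 1,$

which is linear in $q$, i.e. acoustic. This is precisely the logic behind the ion acoustic wave: the light electrons screen the heavy ions and turn the gapped ionic plasmon into a sound-like mode.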

This isn’t necessarily the kind of acoustic plasmon that has been so elusive to solid-state physicists, though. The original proposal and the subsequent search was conducted on systems where light electrons (or holes) would screen heavy electrons (or holes). Indeed, it was suspected that Landau damping into the particle-hole continuum was preventing the acoustic plasmon from being an observable excitation in a solid. However, there have been a few papers suggesting that the acoustic plasmon has indeed been observed at solid surfaces. Here is one paper from 2007 claiming that an acoustic plasmon exists on the surface of beryllium and here is another showing a similar phenomenon on the surface of gold.

To my knowledge, it is still an open question as to whether such a plasmon can exist in the bulk of a 3D solid. This has not stopped researchers from suggesting that electron-acoustic plasmon coupling could lead to the formation of Cooper pairs and superconductivity in the cuprates. Varma has suggested that a good place to look would be in mixed-valence compounds, where $f$-electron masses can get very heavy.

On the experimental side, the search continues…

A helpful picture: If one imagines light electrons and heavy holes in a compensated semimetal for instance, the in-phase motion of the electrons and holes would result in an acoustic plasmon while the out-of-phase motion would result in the gapped plasmon.

Trite but True

Correlation does not imply causation…

Data Representation and Trust

Though popular media often portrays science as purely objective, there are many subjective sides to it as well. One of these is the trust we place in our peers to tell the truth.

For instance, in most experimental papers, one can only present an illustrative portion of all the data taken because of the sheer volume of data usually acquired. What is presented is supposed to be a representative sample. However, as readers, we are never sure this is actually the case. We trust that our experimental colleagues have presented the data in a way that is honest, illustrative of all the data taken, and reproducible under similar conditions. It is increasingly becoming a trend to publish the remaining data in the supplemental section — but the sheer amount of data taken can easily overwhelm this section as well.

When writing a paper, an experimentalist also has to make certain choices about how to represent the data. Increasingly, the amount of data at the experimentalist’s disposal means that they often choose to show the data using some sort of color scheme in a contour or color density plot. Just flip through Nature Physics, for example, to see how popular this style of data representation has become. Almost every cover of Nature Physics features this kind of plot.

However, there are some dangers that come with color schemes if the colors are not chosen appropriately. There is a great post at medvis.org discussing the ills of using, e.g., the rainbow color scheme, and how misleading it can be in certain circumstances. Make sure to also take a look at the articles cited therein to get a flavor of what these schemes can do. In particular, there is a paper called “Rainbow Color Map (Still) Considered Harmful”, which has several noteworthy comparisons of different color schemes, including ones that are and are not perceptually linear. Take a look at the plots below and compare the different color schemes chosen to represent the same data set (taken from the “Rainbow Color Map (Still) Considered Harmful” paper):

The rainbow scheme appears to show more drastic gradients than the other color schemes. My point, though, is that by choosing certain color schemes, an experimentalist can artificially enhance an effect or obscure one he/she does not want the reader to notice.
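To see the perceptual non-uniformity concretely, here is a small self-contained sketch (the naive HSV “rainbow” below is my own stand-in for jet-like maps, not the exact colormap from the paper). It pushes a linear data ramp through a rainbow scheme and through grayscale, and checks whether perceived lightness (approximated by Rec. 709 luma) tracks the data monotonically:

```python
import colorsys

def rainbow(v):
    # naive rainbow stand-in: hue sweeps from blue (v=0) to red (v=1)
    return colorsys.hsv_to_rgb((1.0 - v) * 2.0 / 3.0, 1.0, 1.0)

def luma(rgb):
    # Rec. 709 luma: a rough proxy for perceived lightness
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def is_monotonic(xs):
    return all(b >= a for a, b in zip(xs, xs[1:]))

ramp = [i / 100.0 for i in range(101)]
print(is_monotonic([luma((v, v, v)) for v in ramp]))   # grayscale: True
print(is_monotonic([luma(rainbow(v)) for v in ramp]))  # rainbow: False
```

Equal steps in the data thus produce unequal, and even non-monotonic, steps in apparent brightness under the rainbow scheme — one way such maps manufacture spurious gradients.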

In fact, the experimentalist makes many choices when publishing a paper — the size of an image, the bounds of the axes, the scale of the axes (e.g. linear vs. log), the outliers omitted, etc. — all of which can have profound effects on the message of the paper. This is why there is an underlying issue of trust that lurks within the community. We trust that experimentalists choose to exhibit data in an attempt to be as honest as they can be. Of course, there are always subconscious biases lurking when these choices are made. But my hope is that experimentalists are mindful and introspective when representing data, doubting themselves to a healthy extent before publishing results.

To be a part of the scientific community means that, among other things, you are accepted for your honesty and that your work is (hopefully) trustworthy. A breach of this implicit contract is seen as a grave offence and is why cases of misconduct are taken so seriously.

Drought

Since the discovery of superconductivity, the record transition temperature held by a material has been shattered many times. Here is an excellent plot (taken from here) that shows the critical temperature vs. year of discovery:

Superconducting Transition Temperature vs. Year of Discovery

This is a pretty remarkable plot for many reasons. One is the dramatic increase in transition temperature ushered in by the cuprates after approximately 70 years of steady and slow increases. Another, more worrying, feature of the plot is that we are currently going through an unprecedented drought (not caused by climate change). The highest transition temperature has not been raised (at ambient pressure) for more than 23 years, the longest such stretch since the discovery of superconductivity.

It was always going to be difficult to increase the transition temperatures of superconductors once the materials in the cuprate class were (seemingly) exhausted. It is interesting to see, however, that the mode of discovery has altered markedly compared with years prior. Nowadays, vertical lines are more common, with a subsequent leveling out. Hopefully the vertical line will reach room temperature sooner rather than later. I, personally, hope to still be around when room-temperature superconductivity is achieved — it will be an interesting time to be alive.

What Life is Like in Grad School…

Graduate school is tough. It takes a lot of perseverance, and this can be emotionally and mentally draining. While it is important to work hard, it is also important to take care of oneself physically, mentally and emotionally. It is necessary to take vacations and, importantly, to do something else (sports, music, hobbies, socializing, etc.). Otherwise the scenes in this video will become an all too familiar reality…

Although this video is humorous, it does tickle a weird part of the graduate school experience that, unfortunately, too many can relate to!

Field Biologists

Perhaps my sense of humor is juvenile, but this makes me want to tag along with a biologist really bad…

Friedel Sum Rule and Phase Shifts

When I took undergraduate quantum mechanics, one of the most painful subjects to study was scattering theory, due to the usage of special functions, phase shifts and partial waves. To be honest, the sight of those words still makes me shudder a little.

If you have felt like that at some point, I hope that this post will help alleviate some fear of phase shifts. Phase shifts can be seen in many classical contexts, and I think that it is best to start thinking about them in that setting. Consider the following scenarios: a wave pulse on a rope is incident on (1) a fixed boundary and (2) a movable boundary. See below for a sketch, which was taken from here.

Fixed Boundary Reflection

Movable Boundary Reflection

Notice that in the fixed boundary case, one gets a phase shift of $\pi$, while in the movable boundary case, there is no phase shift. The reason that there is a phase shift of $\pi$ in the former case is that the wave amplitude must be zero at the boundary. Therefore, when the wave first comes in and reflects, the only way to enforce the zero is to have the wave reflect with a $\pi$ phase shift and interfere destructively with the incident pulse, cancelling it out perfectly.

The important thing to note is that for elastic scattering, the case we will be considering in this post, the amplitude of the reflected (or scattered) pulse is the same as the incident pulse. All that has changed is the phase.

Let’s now switch to quantum mechanics. If we consider the same setup, where an incident wave hits an infinitely high wall at $x=0$, we basically get the same result as in the classical case.

Elastic scattering from an infinite barrier

If the incident and scattered wavefunctions are:

$\psi_i = Ae^{ikx}$      and      $\psi_s=Be^{-ikx}$

then $B = -A = e^{i\pi}A$ because, as for the fixed boundary case above, the incident and scattered waves destructively interfere (i.e. $\psi_i(0) + \psi_s(0) =0$). The full wavefunction is then:

$\psi(x) = A(e^{ikx}-e^{-ikx}) \sim \textrm{sin}(kx)$

The last equality is a little misleading since the wavefunction is not normalizable, but let’s just pretend we have an infinite barrier at large but finite negative $x$. Now consider a similar-looking, but pretty arbitrary, potential:

Elastic scattering from an arbitrary potential

What happens in this case? Well, again, the scattering is elastic, so the incident and reflected amplitudes must be the same away from the region of the potential. All that can change, therefore, is the phase of the reflected (scattered) wavefunction. We can therefore write, similar to the case above:

$\psi(x) = A(e^{ikx}-e^{-i(kx+2\delta)}) \sim \textrm{sin}(kx+\delta)$

Notice that the sine term has now acquired a phase. What does this mean? It means that the energy of the wavefunction has changed, as would be expected for a different potential. If we had used box normalization for the infinite barrier case, $kL=n\pi$, then the energy eigenvalues would have been:

$E_n = \hbar^2n^2\pi^2/2mL^2$

Now, with the newly introduced potential, however, our box normalization leads to the condition, $kL+\delta(k)=n\pi$ so that the new energies are:

$E_n = \hbar^2(n\pi-\delta(k_n))^2/2mL^2 \approx \hbar^2n^2\pi^2/2mL^2 - \hbar^2n\pi\delta(k_n)/mL^2$

The energy eigenvalues move around, but since $\delta(k)$ can be a pretty complicated function of $k$, we don’t actually know how they move. What’s clear, though, is that the number of energy eigenvalues is going to be the same — we didn’t create or destroy any energy eigenstates.
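To make that bookkeeping concrete, here is a toy numerical check (purely illustrative: a constant phase shift $\delta$ and units with $\hbar=m=1$, my own choices): the levels move, but their number does not.

```python
import math

L = 50.0  # box size; units with hbar = m = 1

def energies(delta, n_max=20):
    # box-normalization condition k_n L + delta = n*pi  =>  k_n = (n*pi - delta)/L
    return [0.5 * ((n * math.pi - delta) / L) ** 2 for n in range(1, n_max + 1)]

free = energies(0.0)
shifted = energies(0.5)  # toy constant (positive) phase shift

print(len(free) == len(shifted))                      # True: no states created or destroyed
print(all(es < ef for es, ef in zip(shifted, free)))  # True: a positive delta pulls every level down
```

A positive phase shift lowers every $k_n$, and hence every $E_n$, exactly as an attractive potential should — but the state count is untouched.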

Let’s now move onto some solid state physics. In a metal, one usually fills up $N$ states in accordance with the Pauli principle up to $k_F$. If we introduce an impurity with a different number of valence electrons into the metal, we have effectively created a potential where the electrons of the Fermi gas/liquid can scatter. Just like in the cases above, this potential will cause a phase shift in the electron wavefunctions present in the metal, changing their energies. The amplitudes for the incoming and outgoing electrons again will be the same far from the scattering impurity.

This time, though, there is something to worry about — the phase shift and the corresponding energy shift can potentially move states from above the Fermi energy to below the Fermi energy, or vice versa. Suppose I introduced an impurity with an extra valence electron compared to that of the host metal. For instance, suppose I introduce a Zn impurity into Cu. Since Zn has an extra electron, the Fermi sea has to accommodate an extra energy state. I can illustrate the scenario away from, but in the neighborhood of, the Zn impurity schematically like so:

E=0 represents the Fermi Energy. An extra state below the Fermi energy because of the addition of a Zn impurity

It seems quite clear from the description above that the phase shift must be related somehow to the valence difference between the impurity and the host metal. Without the impurity, we fill up the available states up to the Fermi wavevector, $k_F=N_{max}\pi/L$, where $N_{max}$ is the highest occupied state. In the presence of the impurity, we now have $k_F=(N'_{max}\pi-\delta(k_F))/L$. Because the Fermi wavevector does not change (the density of the metal does not change), we have:

$N'_{max} = N_{max} + \delta(k_F)/\pi$

Therefore, the number of extra states needed now to fill up the states to the Fermi momentum is:

$N'_{max}-N_{max}=Z = \delta(k_F)/\pi$,

where $Z$ is the valence difference between the impurity and the host metal. Notice that in this case, each extra state that moves below the Fermi level gives rise to a phase shift of $\pi$. This is actually a simplified version of the Friedel sum rule. It means that the electrons at the Fermi energy have changed the structure of their wavefunctions, by acquiring a phase shift, to exactly screen out the impurity at large distances.

There is just one thing we are missing. I didn’t take into account the degeneracy of the energy levels of the Fermi sea electrons. If I do this, as Friedel did assuming a spherically symmetric potential in three dimensions, we get a degeneracy of $2(2l+1)$ for each $l$, where the factor of 2 comes from spin and $(2l+1)$ comes from the degeneracy of the $m$ quantum numbers in a spherically symmetric potential. We can then write the Friedel sum rule more precisely:

$Z = \frac{2}{\pi} \sum_l (2l+1)\delta_l(k_F)$.

We just had to take into consideration the fact that there is a high degeneracy of energy states in this system of spherical symmetry. What this means, informally, is that each energy level that moves below the Fermi energy has its $\pm\pi$ phase shift distributed across the relevant angular momentum channels. They all get a little slice of some phase shift.

An example: If I put Ni (which is primarily a $d$-wave, $l=2$, scatterer in this context) in an Al host, we get that $Z=-1$. This is because Ni has valence $3d^94s^1$. We can then obtain from the Friedel sum rule that the phase shift will be $\sim -\pi/10$. If we move on to Co, where $Z=-2$, we get $\sim -2\pi/10$, and so forth for Fe, Mn, Cr, etc. Only after all ten $d$-states shift above the Fermi energy do we acquire a phase shift of $-\pi$.

Note that when the phase shift is $\sim\pm\pi/2$, the impurity will scatter strongly, since the scattering cross section $\sigma \propto |\textrm{sin}(\delta_l)|^2$. This is referred to as resonance scattering from an impurity, and again bears a striking resemblance to the classical driven harmonic oscillator. In the series above, it corresponds to Cr impurities in the Al host, which have a phase shift of $\sim -5\pi/10$. Indeed, the resistivity of Al with Cr impurities is the highest among the first-row transition metal impurities, as shown below:

Hence, just by knowing the valence difference, we can get out a non-trivial phase shift! This is a pretty remarkable result, because we don’t have to use the (inexact and perturbative) Born approximation. And it comes from (I hope!) some pretty intuitive physics.
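The $d$-channel arithmetic above can be tabulated in a few lines (a sketch using the valence differences quoted in the text, keeping only the $l=2$ channel so that $\delta_2 = Z\pi/10$):

```python
import math

# valence difference Z relative to the Al host (values quoted in the text)
impurities = {"Ni": -1, "Co": -2, "Fe": -3, "Mn": -4, "Cr": -5}

results = {}
for name, Z in impurities.items():
    delta2 = Z * math.pi / 10              # Friedel sum rule, d-channel only: Z = (10/pi) * delta_2
    results[name] = math.sin(delta2) ** 2  # scattering strength, sigma ~ sin^2(delta_2)
    print(f"{name}: delta_2 = {Z}pi/10, sin^2(delta_2) = {results[name]:.3f}")

# Cr sits at resonance (|delta_2| = pi/2) and scatters most strongly
print(max(results, key=results.get))  # Cr
```

The $\sin^2\delta_2$ column peaks at Cr, consistent with Cr impurities producing the largest residual resistivity in the Al host.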