Is it really as bad as they say?

It’s been a little while since I attended A.J. Leggett’s March Meeting talk (see my review of it here), and some part of that talk still irks me. It is the portion where he referred to “the scourge of bibliometrics”, and how it prevents one from thinking about long-term problems.

I am not old enough to know what science was like when he was a graduate student or a young lecturer, but it seems like something was fundamentally different back then. The only evidence that I can present is the word of other scientists who lived through the same time period and witnessed the transformation (there seems to be a dearth of historical work on this issue).

It was easy for me to find articles corroborating Leggett’s views, unsurprisingly I suppose. In addition to the article I linked last week by P. Nozieres, I found interviews with Sydney Brenner and Peter Higgs, and a damning article by P.W. Anderson in his book More and Different entitled Could Modern America Have Invented Wave Mechanics? In that piece, Anderson also refers to an article by L. Kadanoff expressing a similar sentiment, which I was not able to find online (please let me know if you find it, and I’ll link it here!). The conditions described at Bell Labs in Jon Gertner’s book The Idea Factory also paint a rather stark contrast to the present state of condensed matter physics.

Since I wasn’t alive back then, I cannot know with any great certainty whether the current state of affairs has impeded me from pursuing a longer-term project or thinking about more fundamental problems in physics. I can only speak for myself, and at present I can openly admit that I am incentivized to work on problems I can solve in 2-3 years. I do have some concrete ideas for longer-term projects in mind, but I cannot pursue them at the present time because, as an experimentalist and a postdoc, I have neither the resources nor the permanent setting in which to complete this work.

While the above anecdote is personal and may corroborate the viewpoints of the aforementioned scientists, I don’t perceive all of these items as purely negative. I think it is important to publish a paper based on one’s graduate work. It should be something, however small, that no one has done before. It is important to be able to communicate with the scientific community through a technical paper — writing is an important part of science. I also don’t mind spending a few years (not more than four, hopefully!) as a postdoc, where I will pick up a few more tools to add to my current arsenal. This postdoc stage is something that Sydney Brenner, in particular, decried in his interview. However, it is likely that most of what was said in these articles was aimed at junior faculty.

Ultimately, the opinions expressed by these authors are concerning. However, I am uncertain how much of what they say is exaggeration and how much is true. Reading these articles has made me ask how the scientific environment I was trained in (US universities) has shaped my attitude and scientific outlook.

One thing is undoubtedly true, though. If one chooses to resist the publish-or-perish trend by working on long-term problems and not publishing, the likelihood of landing an academic job is close to null. Perhaps this is the most damning consequence. Nevertheless, there is still some outstanding experimental and theoretical science done today, some of it very fundamental, so one should not lose all hope.

Again, I haven’t lived through this academic transformation, so if anyone has any insight concerning these issues, please feel free to comment.

Paradigm Shifts and “The Scourge of Bibliometrics”

Yesterday, I attended an insightful talk by A.J. Leggett at the APS March Meeting entitled Reflection on the Past, Present and Future of Condensed Matter Physics. The talk was interesting in two regards. Firstly, he referred to specific points in the history of condensed matter physics that resulted in (Kuhn-type) paradigm shifts in our thinking of condensed matter. Of course these paradigm shifts were not as violent as special relativity or quantum mechanics, so he deemed them “velvet” paradigm shifts.

This list, which he acknowledged was personal, consisted of:

  1. Landau’s theory of the Fermi liquid
  2. BCS theory
  3. Renormalization group
  4. Fractional quantum Hall effect

Notable absentees from this list were superfluidity in 3He, the integer quantum Hall effect, the discovery of cuprate superconductivity and topological insulators. He argued that these latter advances did not result in major conceptual upheavals.

He went on to elaborate on the reasons for these velvet revolutions, numbered below to correspond to the list above:

  1. Abandonment of a fully microscopic theory in favor of Landau parameters, relating experimental properties to one another with the input of experiment
  2. Use of an effective low-energy Hamiltonian to describe a phase of matter
  3. Concept of universality and scaling
  4. Discovery of quasiparticles with fractional charge

It is informative to think about condensed matter physics in this way, as it demonstrates the conceptual advances that we almost take for granted in today’s work.

The second aspect of his talk that resonated strongly with the audience was what he dubbed “the scourge of bibliometrics”. He told the tale of his own formative years as a physicist. He published one single-page paper for his PhD work. Furthermore, once appointed as a lecturer at the University of Sussex, his job was to lecture and teach from Monday through Friday. If he did that well, it was considered a job well done. If research was something he wanted to partake in as a side project, he was encouraged to do so. He discussed how this atmosphere allowed him to develop as a physicist, without the requirement of publishing papers for career advancement.

Furthermore, he claimed, because of the current focus on metrics, burgeoning young scientists are now encouraged to seek out problems that they can solve in a time frame of two to three years. He saw this as a terrible trend. While it is often necessary to complete short-term projects, it is also important to think about problems that one may be able to solve in, say, twenty years, or maybe even never. He claimed that this is what is meant by doing real science — jumping into the unknown. In fact, he asserted that if he were to give any advice to graduate students, postdocs and young faculty in the audience, it would be to try to spend about 20% of one’s time committed to some of these long-term problems.

This raises a number of questions in my mind. It is well-acknowledged within the condensed matter physics community, and even in the blogosphere, that the focus on publication counts and short-termism is detrimental. Both Ross McKenzie and Doug Natelson have expressed this sentiment numerous times on their blogs. From speaking to almost every physicist I know, this is the consensus opinion. The natural question to ask, then, is: if this is the consensus opinion, why is the modern climate the way it is?

It seems to me that part of this comes from competition for funding among different research groups and from funding agencies needing a way to discriminate between them. This leads to the widespread use of metrics, such as h-indices and publication counts, to decide whether or not to allocate funding to a particular group. This doesn’t seem to be the only reason, however. Increasingly, young scientists are judged for hire by their publication output and the journals in which they publish.

Luckily, the situation is not all bad. Because so many people openly discuss this issue, I have noticed that there is a certain amount of push-back from individual scientists. On my recent postdoc interviews, the principal investigators were most interested in what I was going to bring to the table rather than in perusing my publication list. I appreciated this immensely, as I had spent a large part of my graduate years pursuing instrumentation development. Nonetheless, I still felt a great deal of pressure to publish papers toward the end of graduate school, and it is this feeling of pressure that needs to be alleviated.

Strangely, I often find myself working in spite of the powers that be rather than being encouraged by them. I highly doubt that I am the only one with this feeling.

Interactions, Collective Excitations and a Few Examples

Most researchers in our field (and many outside it who study, e.g., ant colonies, traffic or fish schools) are acutely aware of the relationship between the microscopic interactions between constituent particles and the emergent collective modes. These can be as mundane as phonons in a solid, which arise from interactions between atoms in the lattice, or magnons in an antiferromagnet, which arise from spin-spin interactions.

From a theoretical point of view, collective modes can be derived by examining the interparticle interactions. An example is the random phase approximation for an electron gas, which yields the plasmon dispersion (here are some of my own notes on this for those who are interested). In experiment, one usually takes the opposite approach, where the interparticle interactions are inferred from the collective modes. For instance, the force constants in a solid can often be deduced by studying the phonon spectrum, and the exchange interaction can be backed out by examining the magnon dispersions.
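To make the RPA example slightly more concrete, here is a schematic version of the standard textbook result (this is not lifted from my notes linked above; sign and unit conventions vary, and I use Gaussian units here). The RPA dielectric function of the electron gas is

\epsilon_{RPA}(\textbf{q},\omega) = 1 - V(\textbf{q})\,\chi_0(\textbf{q},\omega), \qquad V(\textbf{q}) = \frac{4\pi e^2}{q^2}

and the collective modes live at its zeros. Using the small-\textbf{q}, high-frequency limit of the Lindhard function, \chi_0 \approx nq^2/m\omega^2, the condition \epsilon_{RPA} = 0 gives the familiar plasma frequency

\omega_p^2 \approx \frac{4\pi n e^2}{m}

which is just the \textbf{q}\rightarrow 0 limit of the plasmon dispersion.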

In more exotic states of matter, these collective excitations can get a little bizarre. In a two-band superconductor, for instance, it was shown by Leggett that the two superfluids can oscillate out of phase, resulting in a novel collective mode, first observed in MgB2 (pdf!) by Blumberg and co-workers. Furthermore, in 2H-NbSe2, there have been claims of an observed Higgs-like excitation, which is made visible to Raman spectroscopy through its interaction with the charge density wave amplitude mode (see here and here for instance).

As I mentioned in the post about neutron scattering in the cuprates, a spin resonance mode is often observed below the superconducting transition temperature in unconventional superconductors. This mode has been observed in the cuprate, iron-based and heavy fermion superconducting families (see e.g. here for CeCoIn5), and is not (at least to me!) well understood. In another rather stunning example, no fewer than four sub-gap collective modes, which are likely of electronic origin, show up below ~40 K in SmB6, a member of the class of materials known as Kondo insulators (see image below).

[Image: sub-gap collective modes in SmB6]

Lastly, in a material class that we actually think we understand quite well, the Peierls-type quasi-1D charge density wave materials, there is a collective mode that shows up in the far-infrared region and that (to my knowledge) has so far eluded theoretical understanding. In this paper on blue bronze, the authors assume that the mode, which shows up at ~8 cm^{-1} in the energy loss function, is a pinned phase mode, but this assignment is likely incorrect in light of the fact that later microwave measurements demonstrated that the phase mode actually exists at a much lower energy scale (see Fig. 9). This example serves to show that even in material classes we think we understand quite well, there are often lurking unanswered questions.

In materials that we don’t understand very well, such as the Kondo insulators and the unconventional superconductors mentioned above, it is therefore imperative to map out the collective modes, as they can yield critical insights into the interactions between constituent particles or the couplings between different order parameters. To truly understand what is going on in these materials, every peak needs to be identified (especially the ones that show up below Tc!), quantified and understood satisfactorily.

As Lester Freamon says in The Wire:

All the pieces matter.

Macroscopic Wavefunctions, Off-Diagonal Long Range Order and U(1) Symmetry Breaking

Steven Weinberg wrote a piece a while ago entitled Superconductivity for Particular Theorists (pdf!). Although I have to admit that I have not followed the entire mathematical treatment in this document, I much appreciate the conceptual approach he takes in asking the following question:

How can one possibly use such approximations (BCS theory and Ginzburg-Landau theory) to derive predictions about superconducting phenomena that are essentially of unlimited accuracy?

He answers the question by stating that the general features of superconductivity can be explained using the fact that there is a spontaneous breakdown of electromagnetic gauge invariance. The general features he demonstrates to be consequences of broken gauge invariance are the following (a quick textbook-style sketch of item 2, flux quantization, follows the list):

  1. The Meissner Effect
  2. Flux Quantization
  3. Infinite Conductivity
  4. The AC Josephson Effect
  5. Vortex Lines
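To give a flavor of how such arguments go, here is the standard phase-winding sketch of flux quantization (the usual textbook route, not necessarily Weinberg’s own derivation; SI units). Writing the order parameter as \psi = |\psi|e^{i\theta} and noting that deep inside a thick superconducting ring the supercurrent vanishes, the phase gradient is locked to the vector potential,

\hbar\nabla\theta = 2e\textbf{A}

and single-valuedness of \psi requires \oint \nabla\theta \cdot d\textbf{l} = 2\pi n. Integrating around the ring then gives

\Phi = \oint \textbf{A}\cdot d\textbf{l} = n\,\frac{h}{2e}

i.e. the flux threading the ring is quantized in units of h/2e.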

Although not related to this post per se, he also makes the following (somewhat controversial) comment that I have to admit I am quoting a little out of context:

“…superconductivity is not macroscopic quantum mechanics; it is the classical field theory of a Nambu-Goldstone field”

Now, while it may be true that one can derive the phenomena in the list above using the formalism outlined by Weinberg, I do think that there are other ways to obtain similar results that may be just as general. One way to do this is to assume the existence of a macroscopic wave function. This method is outlined in this (illuminatingly simple) set of lecture notes by M. Beasley (pdf!).
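For orientation, here is a minimal sketch of what the macroscopic-wavefunction viewpoint buys you (in the spirit of such notes, though the notation here is mine and the factors and units should be checked against your favorite reference). Writing \psi(\textbf{r}) = \sqrt{n_s(\textbf{r})}\,e^{i\theta(\textbf{r})} with e^* = 2e and m^* = 2m, the supercurrent is

\textbf{J}_s = \frac{e^*}{m^*}|\psi|^2\left(\hbar\nabla\theta - e^*\textbf{A}\right)

For rigid |\psi| and \theta, the \textbf{A}-term alone reproduces the London equation \textbf{J}_s = -(n_s e^{*2}/m^*)\textbf{A} and hence the Meissner effect, while setting \textbf{J}_s = 0 inside a thick ring gives back the relation used in the flux-quantization sketch above.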

Another general formalism is outlined by C.N. Yang in this RMP, where he defines the concept of off-diagonal long range order (ODLRO) for a two-particle density matrix. ODLRO can be defined for a single-particle density matrix in the following way:

\lim_{|r-r'| \to \infty} \rho(r,r') \neq 0

This can be easily extended to the case of a two-particle density matrix appropriate for Cooper pairing (see Yang).
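Schematically (this is my shorthand for Yang’s criterion rather than his exact notation), the two-particle analogue says that the pair density matrix fails to decay when the unprimed pair of coordinates is taken far away from the primed pair:

\lim_{|\textbf{R}-\textbf{R}'| \to \infty} \rho_2(\textbf{r}_1,\textbf{r}_2;\textbf{r}_1',\textbf{r}_2') \approx \Phi^*(\textbf{r}_1,\textbf{r}_2)\,\Phi(\textbf{r}_1',\textbf{r}_2') \neq 0

where \textbf{R} and \textbf{R}' are the centers of mass of the two pairs and \Phi plays the role of the pair (macroscopic) wavefunction.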

Lastly, there is a formalism similar to Yang’s, outlined by Leggett in his book Quantum Liquids and first developed by Penrose and Onsager. They conclude that many properties of Bose-Einstein condensation can be obtained by again examining the diagonalized density matrix:

\rho(\textbf{r},\textbf{r}';t) = \sum_i n_i(t)\chi_i^*(\textbf{r},t)\chi_i(\textbf{r}',t)

Leggett then goes on to say:

“If there is exactly one eigenvalue of order N, with the rest all of order unity, then we say the system exhibits simple BEC.”

Again, this can easily be extended to the case of a two-particle density matrix when considering Cooper pairing.
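Spelled out, by direct analogy with the single-particle expression above (again, the notation is mine), the statement for pairing is that the two-particle density matrix

\rho_2(\textbf{r}_1,\textbf{r}_2;\textbf{r}_1',\textbf{r}_2';t) = \sum_i n_i(t)\,\chi_i^*(\textbf{r}_1,\textbf{r}_2,t)\,\chi_i(\textbf{r}_1',\textbf{r}_2',t)

has exactly one eigenvalue n_0 of order N, with the corresponding eigenfunction \chi_0 playing the role of the Cooper pair wavefunction.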

The 5-point list of properties of superconductors itemized above can then be derived using any of these general frameworks:

  1. Broken Electromagnetic Gauge Invariance
  2. Macroscopic Wavefunction
  3. Off-Diagonal Long Range Order in the Two-Particle Density Matrix
  4. Macroscopically Large Eigenvalue of Two-Particle Density Matrix

These are all model-independent formulations able to describe general properties associated with superconductivity. Items 3 and 4, and to some extent 2, overlap conceptually, whereas 1 seems quite different to me. Items 2, 3 and 4 more easily relate the concept of Bose-Einstein condensation to BCS-type condensation, and I appreciate this element of generality. However, I am not sure at this point which is the most general formulation and which is the most useful. I do have a preference for items 2 and 4, though, because they are the easiest for me to grasp intuitively.

Please feel free to comment, as this post was intended to raise a question rather than to answer it (which I cannot do at present!). I will continue to think about this question and will hopefully make a more thoughtful post with a concrete answer in the future.

Do We Effectively Disseminate “Gems of Insight”?

The content of this post extends beyond condensed matter physics, but I’ll discuss it (as I do most things) within this context.

When attending talks, lectures or reading a paper, sometimes one is struck by what I will refer to as a “gem of insight” (GOI). These tend to be explanations of physical phenomena that are quite well-understood, but can be viewed more intuitively or elegantly in a somewhat unorthodox way.

For instance, one of the ramifications of the Meissner effect is that there is a difference between the longitudinal and transverse response to the vector potential, even in the limit that \textbf{q}\rightarrow 0 . This is discussed here in the lecture notes by Leggett, and it is an effect I find quite profound and would call a GOI. Another example is the case of Brian Josephson, who was famously inspired by P.W. Anderson’s GOI on broken symmetry in superconductors to predict the effect that now bears his name. Here is a little set of notes by P.B. Allen discussing how the collective and single-particle properties of the electron gas are compatible, which also contains a few GsOI.
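To spell out the first example a little (this is a schematic, static-limit statement with factors of c and overall conventions suppressed; the notation is mine rather than Leggett’s): writing the linear current response to a vector potential as \textbf{J}(\textbf{q}) = -K(\textbf{q})\textbf{A}(\textbf{q}) and splitting K into longitudinal and transverse parts, gauge invariance forces the longitudinal limit to vanish in any system, while in a superconductor the transverse limit stays finite:

\lim_{\textbf{q}\to 0} K_L(\textbf{q}) = 0, \qquad \lim_{\textbf{q}\to 0} K_T(\textbf{q}) = \frac{n_s e^2}{m} \neq 0

That nonvanishing transverse piece is the London/Meissner response, and the difference between the two limits is precisely what distinguishes a superconductor from a normal metal.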

My concern in this post, though, is how such information is spread. It seems to me that most papers today are not necessarily concerned with spreading GsOI, but more with communicating results. Papers are used for “showing” rather than “explaining”. Part of this situation arises from the fact that the lengths of papers are constrained by many journals, limiting an author’s capacity to discuss physical ideas at length rather than just “writing down the answer”.

Another reason is that it sometimes takes a long time for ideas to sink in among the community, and the most profound way to understand a result is often found only after a period of deep reflection. By that point, publishing a paper on the topic is no longer appropriate because the topic is already considered solved. A paper offering only a physical explanation of an already understood phenomenon is “not new” and likely to be rejected by most journals. This is part of the reason why the literature on topological insulators contains some of the clearest expositions of the quantum Hall effect!

So how should we disseminate GsOI? It seems to me that GsOI tend to circulate in discussions between individual scientists or in lectures to graduate students — mostly informal settings. It is my personal opinion that these GsOI should be documented somewhere. I had the privilege of learning superconductivity from Tony Leggett, one of the authorities on the subject. Many ideas he expressed in class are hardly discussed in the usual superconductivity texts, and sometimes not anywhere! It would probably be extremely fruitful for lectures like his to be recorded and uploaded to a forum (such as YouTube) so that anyone interested could watch them.

This is a difficult problem to solve in general, but I think that one of the ways we can rectify this situation is to include more space in papers for physical explanations while cleaning up lengthy introductions. Furthermore, we should not necessarily be discouraged from writing papers on topics that “aren’t new” if they contain important GsOI.

Do you agree? I’m curious to know what others think.