Tag Archives: Theorems

Response and Dissipation: Part 1 of the Fluctuation-Dissipation Theorem

I’ve referred to the fluctuation-dissipation theorem many times on this blog (see here and here for instance), but I feel I have done it something of an injustice by never devoting a full post to it. A specialized form of the theorem was first formulated by Einstein in his 1905 paper on Brownian motion. It was then extended to electrical circuits by Nyquist and later generalized by several authors, including Callen and Welton (pdf!) and R. Kubo (pdf!). The Callen and Welton paper is particularly superlative, not just for its content but also for its lucid scientific writing. The fluctuation-dissipation theorem relates the fluctuations of a system (an equilibrium property) to the energy dissipated by a perturbing external source (a manifestly non-equilibrium property).

In this post, which is the first part of two, I’ll deal mostly with the non-equilibrium part. In particular, I’ll show that the response function of a system is related to the energy dissipation using the harmonic oscillator as an example. I hope that this post will provide a justification as to why it is the imaginary part of a response function that quantifies energy dissipated. I will also avoid the use of Green’s functions in these posts, which for some reason often tend to get thrown in when teaching linear response theory, but are absolutely unnecessary to understand the basic concepts.

Consider first a damped driven harmonic oscillator with the following equation (for consistency, I’ll use the conventions from my previous post about the phase change after a resonance):

\underbrace{\ddot{x}}_{inertial} + \overbrace{b\dot{x}}^{damping} + \underbrace{\omega_0^2 x}_{restoring} = \overbrace{F(t)}^{driving}

One way to solve this equation is to assume that the displacement, x(t), responds linearly to the applied force, F(t), in the following way:

x(t) = \int_{-\infty}^{\infty} \chi(t-t')F(t') dt'

Just in case this equation doesn’t make sense to you, you may want to reference this post about linear response.  In the Fourier domain, this equation can be written as:

\hat{x}(\omega) = \hat{\chi}(\omega) \hat{F}(\omega)

and one can solve this equation (as done in a previous post) to give:

\hat{\chi}(\omega) = (-\omega^2 + i\omega b + \omega_0^2 )^{-1}
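As a quick numerical sketch (the parameters \omega_0 = 1 and b = 0.1 below are assumptions chosen purely for illustration, with the mass normalized to one), one can evaluate this response function on a frequency grid and separate its real and imaginary parts; the magnitude of the imaginary part is sharply peaked near the resonance frequency \omega_0:

```python
# Hypothetical parameters for illustration: w0 = 1 (natural frequency),
# b = 0.1 (damping). The mass is normalized to 1, as in the equation of
# motion above.
import numpy as np

def chi(w, w0=1.0, b=0.1):
    """Response function chi(w) = 1 / (w0^2 - w^2 + i*b*w)."""
    return 1.0 / (w0**2 - w**2 + 1j * b * w)

w = np.linspace(0.01, 2.0, 2000)
chi_w = chi(w)
chi_real, chi_imag = chi_w.real, chi_w.imag

# |Im chi| peaks near the resonance at w = w0 (the overall sign of
# Im chi depends on the Fourier convention used)
w_peak = w[np.argmax(np.abs(chi_imag))]
```

Plotting chi_real and chi_imag against w reproduces the familiar dispersive and absorptive line shapes of a driven oscillator.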

It is useful to think about the response function, \chi, as how the harmonic oscillator responds to an external source. This can best be seen by writing the following suggestive relation:

\chi(t-t') = \delta x(t)/\delta F(t')

Response functions tend to measure how systems evolve after being perturbed by a point-source (i.e. a delta-function source) and therefore quantify how a system relaxes back to equilibrium after being thrown slightly off balance.
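To make this concrete, here is a small sketch (the parameters and the simple Euler integrator are assumptions for illustration). For the mass-normalized oscillator, a delta-function force at t = 0 simply imparts unit velocity, after which the oscillator evolves freely; the resulting motion should match the analytic underdamped impulse response \chi(t) = e^{-bt/2}\sin(\omega_1 t)/\omega_1 with \omega_1 = \sqrt{\omega_0^2 - b^2/4}:

```python
import math

# Assumed illustrative parameters (mass normalized to 1)
w0, b = 1.0, 0.2
w1 = math.sqrt(w0**2 - b**2 / 4)   # oscillation frequency shifted by damping

def chi_analytic(t):
    """Analytic impulse response of the underdamped oscillator."""
    return math.exp(-b * t / 2) * math.sin(w1 * t) / w1

# A delta-function force at t = 0 imparts unit velocity; afterwards the
# oscillator evolves freely: x'' + b x' + w0^2 x = 0
dt, t_end = 1e-4, 10.0
x, v, t = 0.0, 1.0, 0.0
samples = []                       # (t, x) recorded near t = 1, 2, ..., 9
next_sample = 1.0
while t < t_end:
    a = -b * v - w0**2 * x
    x += v * dt
    v += a * dt
    t += dt
    if t >= next_sample:
        samples.append((t, x))
        next_sample += 1.0

max_err = max(abs(xi - chi_analytic(ti)) for ti, xi in samples)
```

The numerically integrated trajectory agrees with the analytic \chi(t) to within the Euler discretization error, illustrating that the response function really is the relaxation following a point-source kick.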

Now, look at what happens when we examine the energy dissipated by the damped harmonic oscillator. In this system, the energy dissipated can be expressed as the time integral of the force multiplied by the velocity, which we can write in the Fourier domain as follows:

\Delta E \sim \int \dot{x}F(t) dt =  \int d\omega d\omega'dt (-i\omega) \hat{\chi}(\omega) \hat{F}(\omega)\hat{F}(\omega') e^{-i(\omega+\omega')t}

One can write this more simply as:

\Delta E \sim \int d\omega (-i\omega) \hat{\chi}(\omega) |\hat{F}(\omega)|^2

Noticing that the energy dissipated must be a real quantity, and that |\hat{F}(\omega)|^2 is also real, one finds that only the imaginary part of the response function can contribute to the dissipated energy (the term containing the real part, \hat{\chi}'(\omega), is purely imaginary and therefore cannot contribute), so that we can write:

\Delta E \sim  \int d \omega \omega\hat{\chi}''(\omega)|\hat{F}(\omega)|^2

Although I try to avoid heavy mathematics on this blog, I hope that this derivation was not too difficult to follow. It turns out that only the imaginary part of the response function is related to energy dissipation. 
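One can check this result numerically. The sketch below (all parameters are assumptions for illustration, with the mass normalized to one) drives the oscillator with F(t) = F_0\cos(\Omega t), waits for the transient to die out, and compares the time-averaged dissipated power \langle\dot{x}F\rangle against the prediction \frac{1}{2}F_0^2 \Omega |\hat{\chi}''(\Omega)| that follows from the formula above for a monochromatic drive:

```python
import math

# Assumed illustrative parameters (mass normalized to 1)
w0, b = 1.0, 0.3      # natural frequency and damping
Om, F0 = 0.8, 1.0     # drive frequency and amplitude

# Prediction: average dissipated power = (1/2) * F0^2 * Om * |Im chi(Om)|
chi = 1.0 / (w0**2 - Om**2 + 1j * b * Om)
p_pred = 0.5 * F0**2 * Om * abs(chi.imag)

# Direct check: integrate x'' + b x' + w0^2 x = F0 cos(Om t) with a
# simple Euler scheme, discard the transient, and time-average x_dot * F
# over whole drive periods.
dt = 1e-3
t_transient = 100.0                  # long enough for e^(-b t / 2) to vanish
T = 2 * math.pi / Om
t_avg = 10 * T                       # average over ten full periods

x, v, t = 0.0, 0.0, 0.0
power_sum, n = 0.0, 0
while t < t_transient + t_avg:
    F = F0 * math.cos(Om * t)
    if t >= t_transient:
        power_sum += v * F
        n += 1
    a = F - b * v - w0**2 * x
    x += v * dt
    v += a * dt
    t += dt

p_num = power_sum / n                # numerically measured dissipation
```

The two numbers agree to within the discretization error, confirming that it is indeed the imaginary part of the response function that sets the dissipated energy.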

Intuitively, one can see that the imaginary part of the response has to be related to dissipation, because it is the part of the response function that possesses a \pi/2 phase lag. The real part, on the other hand, is in phase with the driving force and does not possess a phase lag (i.e. \chi = \chi' +i \chi'' = \chi' +e^{i\pi/2}\chi''). One can see from the plot below that damping (i.e. dissipation) is quantified by a \pi/2 phase lag.


Damping is usually associated with a 90 degree phase lag
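The \pi/2 lag at resonance can also be verified directly from the response function (parameters again assumed for illustration): exactly at \omega = \omega_0, the restoring and inertial terms cancel, the response is purely imaginary, and the displacement lags the drive by exactly 90 degrees:

```python
import cmath
import math

w0, b = 1.0, 0.1  # assumed illustrative parameters (mass = 1)

def chi(w):
    """Response function of the damped harmonic oscillator."""
    return 1.0 / (w0**2 - w**2 + 1j * b * w)

# At w = w0 the inertial and restoring terms cancel, leaving
# chi(w0) = 1/(i*b*w0): a purely imaginary response, i.e. the
# displacement lags the driving force by exactly pi/2.
phase_lag = -cmath.phase(chi(w0))
```

Away from resonance the lag interpolates between 0 (far below \omega_0) and \pi (far above), which is the phase-change-through-resonance behavior discussed in the earlier post.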

Next up, I will show how the imaginary part of the response function is related to equilibrium fluctuations!

What Happens in 2D Stays in 2D.

There was a recent paper published in Nature Nanotechnology demonstrating that single-layer NbSe_2 exhibits a charge density wave transition at 145K and superconductivity at 2K. Bulk NbSe_2 has a CDW transition at ~34K and a superconducting transition at ~7.5K. The authors speculate (plausibly) that the enhanced CDW transition temperature occurs because of an increase in electron-phonon coupling due to the reduction in screening. An important detail is that the authors used a sapphire substrate for the experiments.

This paper is part of a general trend of papers that examine the physics of solids in the 2D limit, either in single-layer form or at the interface between two solids. This frontier was opened up by the discovery of graphene and by the discoveries of superconductivity and ferromagnetism in the 2D electron gas at the LAO/STO interface. The nature of these transitions at the LAO/STO interface is a prominent area of research in condensed matter physics. Part of the reason for this interest stems from the Mermin-Wagner theorem, which has long been ingrained in the minds of researchers. I have written before about the limitations of such theorems.

Nevertheless, it has now been found that the transition temperatures of materials can be significantly enhanced in single layer form. Besides the NbSe_2 case, it was found that the CDW transition temperature in single-layer TiSe_2 was also enhanced by about 40K in monolayer form. Probably most spectacularly, it was reported that single-layer FeSe on an STO substrate exhibited superconductivity at temperatures higher than 100K  (bulk FeSe only exhibits superconductivity at 8K). It should be mentioned that in bulk form the aforementioned materials are all quasi-2D and layered.

The phase transitions in these compounds naturally raise some fundamental questions about the nature of solids in 2D. One would naively expect the transition temperatures to be suppressed in reduced dimensions due to enhanced fluctuations. This is clearly not what is observed experimentally, so there must be a compensating boost from another parameter, such as the enhanced electron-phonon coupling in the NbSe_2 case, that needs to be taken into account.

I find this trend towards studying 2D compounds a particularly interesting avenue in the current condensed matter physics climate for a few reasons: (1) whether or not these phase transitions make sense within the Kosterlitz-Thouless paradigm (which works well to explain transitions in 2D superfluid and superconducting films) still needs to be investigated, (2) the need for adequate probes to study interfacial and monolayer compounds will necessarily lead to new experimental techniques, and (3) qualitatively different phenomena can occur in the 2D limit that do not necessarily occur in their 3D counterparts (the quantum Hall effect being a prime example).

Sometimes trends in condensed matter physics can lead to intellectual atrophy — I think that this one may lead to some fundamental and major discoveries in the years to come on the theoretical, experimental and perhaps even on the technological fronts.

Update: The day after I wrote this post, I also came upon an article demonstrating evidence for a ferroelectric phase transition in thin Strontium Titanate (STO), a material known to exhibit no ferroelectric phase transition in bulk form at all.

Thoughts on Consistency and Physical Approximations

Ever since the series of posts about the Theory of Everything (ToE) and Upward Heritability (see here, here, here and here), I have felt like perhaps my original perspective lacked some clarity of thought. I recently re-read the chapter entitled Physics on a Human Scale in A.J. Leggett’s The Problems of Physics. In it, he takes a bird’s eye view on the framework under which most of us in condensed matter physics operate, and in doing so, untied several of the persisting mental knots I had after that series of blog posts.

The basic message is this: in condensed matter physics, we create models that are not logically dependent on the underlying ToE. This means that one does not deduce the models in the mathematical sense, but the models must be consistent with the ToE. For example, in the study of magnetism, the models must be consistent with the microscopically-derived Bohr-van Leeuwen theorem.

When one goes from the ToE to an actual “physical” model, one is selecting relevant features, rather than making rigorous approximations (so-called physical approximations). This requires a certain amount of guesswork based on experience/inspiration. For example, in writing down the BCS Hamiltonian, one neglects all interaction terms apart from the “pairing interaction”.

Leggett then makes an intuitive analogy that provides further clarity. If one were building a transportation map of, say, Bangkok, Thailand, one could do this in two ways: (i) one could take a series of images from a plane/helicopter and then resize the images to fit on a map, or (ii) one could schematically draw the relevant transportation features on a piece of paper in a way that is consistent with the city’s topography. Generally, scheme (ii) will give a much better representation of the transportation routes in Bangkok than the complicated images of scheme (i). This process of selecting relevant features consistent with the underlying system is the one usually undertaken in condensed matter physics. Note that it is not a process of approximation, but one of selection while retaining consistency.

With respect to the previous posts on this topic, I stand by the following: (1) I still think that Laughlin and Pines’ claim that certain effects (such as the Josephson quantum) cannot be derived from the ToE is quite speculative. It is difficult to prove either viewpoint (mine or L&P’s), but I take the more conservative (and, I would think, simpler) option and suggest that in principle one could obtain such effects from the ToE. (2) Upward heritability, while also speculative, is a hunch that concepts shared by particle physics and condensed matter physics (such as broken symmetry) may result from a yet undiscovered connection between the two realms. I still consider this a plausible idea, though it could be just a coincidence.

Previously, I was under the impression that the views expressed in the L&P article and the Wilczek article were somehow mutually exclusive. However, upon further reflection, I no longer think this is so; in fact, they are quite compatible with each other. This is probably where my main misunderstanding lay in the previous posts, and I apologize for any confusion this may have caused.

Do “Theorems” in Condensed Matter Physics Limit the Imagination?

There are many so-called “theorems” in physics. The ones most famously quoted in the field of condensed matter are those associated with the names of Goldstone, Mermin-Wagner, and McMillan.

If you aren’t familiar with these often (mis)quoted theorems, then let me (mis)quote them for you:

1) Goldstone: For each continuous symmetry a phase of matter breaks, there is an associated collective excitation that is gapless for long wavelengths, usually referred to as a Nambu-Goldstone mode.

2) Mermin-Wagner: Continuous symmetries cannot be spontaneously broken at finite temperature in systems with sufficiently short-range interactions in dimensions d ≤ 2. (From Wikipedia)

3) McMillan (PDF link!): Electron-phonon induced superconductivity cannot have a higher Tc than approximately 40K.

All three of these theorems have been violated to some extent. My gut feeling, though, is that such theorems can have the adverse consequence of limiting one’s imagination. As an experimental physicist, I can see the value in such theorems, but I don’t think it is constructive to believe them outright. The number of times that nature has proven herself far more creative and elusive than our human minds suggests that we should use these theorems as guidance but always be wary of them.

For instance, had one believed the Mermin-Wagner theorem outright, would anyone have thought the existence of graphene possible? In a solid, which breaks translational symmetry in three directions and rotational symmetry in three directions, why are there only three acoustic phonons? McMillan’s formula still holds true for electron-phonon coupled superconductors (the marginal case being MgB_2, which has a T_c of ~40K), though a startling recent discovery may even shatter this claim. However, placed in its historical context (it was stated before the discovery of the high-temperature superconductors), one wonders whether McMillan’s formula disheartened some experimentalists from pursuing higher transition temperature superconductors.

My message: One may use the theorems as guidance, but they are really there to be broken.