
A Graphical Depiction of Linear Response

Most of the theory of linear response derives from the following equation:

y(t) = \int_{-\infty}^{\infty} \chi(t-t')h(t') dt'

I remember quite vividly first seeing an equation like this in Ashcroft and Mermin in the context of electric fields (i.e. \textbf{D}(\textbf{r}) = \int_{-\infty}^{\infty} \epsilon(\textbf{r}-\textbf{r}')\textbf{E}(\textbf{r}') d\textbf{r}') and wondering what it meant.

One way to think about \chi(t-t') in the equation above is as an impulse response function. What I mean by this is that if I were to apply a Dirac delta-function perturbation to my system, I would expect it to respond in some way, characteristic of the system, like so:

[Figure: impulse response \chi(t) to a delta-function kick at t = 0]

Mathematically, this can be expressed as:

y(t) = \int_{-\infty}^{\infty} \chi(t-t')h(t') dt'= \int_{-\infty}^{\infty} \chi(t-t')\delta(t') dt'=\chi(t)

This seems reasonable enough. Now, though this is going to sound like a tautology, what one means by “linear” in linear response theory is that the system responds linearly to the input. Most of us are familiar with the idea of linearity, but in this context it helps to understand it through two properties. First, the output scales in proportion to the strength of the input delta-function, and second, the responses to separate inputs combine additively. This means that if we apply a perturbation of the form:

h(t')=k_1\delta(t'-t_1) + k_2\delta(t'-t_2) +k_3\delta(t'-t_3)

We expect a response of the form:

y(t)=k_1\chi(t-t_1) + k_2\chi(t-t_2) +k_3\chi(t-t_3)
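To make this additivity concrete, here is a minimal numerical sketch in Python (the damped-oscillator form of \chi and all parameter values below are my own illustrative assumptions, not anything tied to a particular physical system):

import numpy as np

# An illustrative impulse response: zero before the kick, a damped
# oscillation afterwards (arbitrary choice of form and parameters)
def chi(t, omega0=5.0, gamma=1.0):
    return np.where(t >= 0, np.exp(-gamma * t) * np.sin(omega0 * t), 0.0)

t = np.linspace(-1.0, 10.0, 2000)

strengths = [1.0, 2.0, 0.5]   # k_1, k_2, k_3
times = [0.0, 2.0, 4.0]       # t_1, t_2, t_3

# Linearity: the total response is the sum of scaled, shifted copies of chi,
# i.e. y(t) = k_1 chi(t - t_1) + k_2 chi(t - t_2) + k_3 chi(t - t_3)
y = sum(k * chi(t - t0) for k, t0 in zip(strengths, times))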

This is most easily grasped (at least for me!) graphically in the following way:

[Figure: responses to three separate impulses and their sum]

One can see here that the responses to the three impulses simply add to give the total response. Finally, let’s consider what happens when the input is some sort of continuous function. One can imagine a continuous function as being composed of an infinite number of discrete points like so:

[Figure: a continuous input function h(t') discretized into slivers of width \epsilon_{t'}]

Now, the output of the discretized input function can be expressed as so:

y(t) = \sum_{n=-\infty}^{\infty} [\chi(t-n\epsilon_{t'})][h(n\epsilon_{t'})][\epsilon_{t'}]

This basically amounts to saying that we can treat each point on the function as a delta-function or impulse function. The strength of each “impulse” in this scenario is quantified by the area of its sliver, h(n\epsilon_{t'})\epsilon_{t'}. Each one of these areal slivers gives rise to an output function. We then add up the outputs from each of the input points, and this gives us the total response y(t) (which I haven’t plotted here). If we take the limit \epsilon_{t'} \rightarrow 0, we get back the following equation:

y(t) = \int_{-\infty}^{\infty} \chi(t-t')h(t') dt'
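As a quick numerical sanity check of this limiting procedure, here is a short Python sketch (the Gaussian input h(t') and the same assumed damped-oscillator \chi as above are illustrative choices):

import numpy as np

def chi(t, omega0=5.0, gamma=1.0):
    # the same illustrative damped-oscillator response as before
    return np.where(t >= 0, np.exp(-gamma * t) * np.sin(omega0 * t), 0.0)

eps = 0.01                        # grid spacing epsilon_{t'}
tp = np.arange(-5.0, 15.0, eps)   # discretized t' axis
h = np.exp(-tp**2)                # an illustrative smooth input h(t')

# Riemann-sum version of y(t) = sum_n chi(t - n*eps) h(n*eps) eps
def y(t):
    return np.sum(chi(t - tp) * h) * eps

# Halving eps and re-evaluating barely changes the answer, i.e. the sum
# has converged to the convolution integral
print(y(3.0))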

This kind of picture is helpful in thinking about the convolution integral in general, not just in the context of linear response theory.

Many experiments, especially scattering experiments, measure a quantity related to the imaginary part of the Fourier-transformed response function, \chi''(\omega). One can then use a Kramers-Kronig transform to obtain the real part and reconstruct the temporal response function \chi(t-t'). An analogous procedure can be done to obtain the real-space response function from the momentum-space one.

Note: I will be taking some vacation time for a couple weeks following this post and will not be blogging during that period.


Reflecting on General Ideas

In condensed matter physics, it is easy to get lost in the details of one’s day-to-day work. It is important sometimes to take the time to reflect on what one has done and learned and to think about what it all means. In this spirit, below is a list of some of the most important ideas related to condensed matter physics that I picked up during my time as an undergraduate and graduate student. This list is of course personal, and I hope that in time I will add to it.

  1. Relationship between measurements and correlation functions
  2. Relationship between equilibrium fluctuations and non-equilibrium dissipative channels (i.e. the fluctuation-dissipation theorem)
  3. Principle of entropy maximization/free-energy minimization for matter in equilibrium
  4. Concept of the quasi-particle and screening
  5. Concept of Berry phase and the corresponding topological and geometrical consequences
  6. Broken symmetry, the Landau paradigm of phase classification and the idea of an order parameter
  7. Sum rules and the corresponding constraints placed on both microscopic theories and experimental spectra
  8. Bose-Einstein and Cooper Pair condensation and their spectacular properties
  9. Logical independence of physical theories from the theory of everything
  10. Effects of long-range vs. short-range interactions on macroscopic properties of solids
  11. Role of dimensionality in observing qualitatively different physical properties and phases of matter

The first two items on the list are well-explained in Forster’s Hydrodynamics, Fluctuations, Broken Symmetry and Correlation Functions without the use of Green’s functions and other advanced theoretical techniques. Although not a condensed matter phenomenon per se, Bell’s theorem and non-locality rank among the most startling consequences of quantum mechanics that I learned about in graduate school. I suspect that their influence will be observed in a condensed matter setting in due time.

Please feel free to share your own ideas or concepts you would add to the list.

The Relationship Between Causality and Kramers-Kronig Relations

The Kramers-Kronig relations tell us that the real and imaginary parts of causal response functions are related. The relations are of immense importance to optical spectroscopists, who use them to obtain, for example, the optical conductivity from the reflectivity. It is often said in textbooks that the Kramers-Kronig relations (KKRs) follow from causality. The proof of this statement usually uses contour integration and the role of causality is then a little obscured. Here, I hope to use intuitive arguments to show the meaning behind the KKRs.

If one imagines applying a sudden force to a simple harmonic oscillator (SHO) and then watching its response, one would expect the response to look something like this:

[Figure: response \chi(t) of the SHO to a sudden kick at t = 0]

We expect the SHO to oscillate for a little while and eventually stop due to friction of some kind. Let’s call the function in the plot above \chi(t). Because \chi(t) is zero before we “kick” the system, we can play a little trick and write \chi(t) = \chi_s(t)+\chi_a(t), where \chi_s(t) = [\chi(t) + \chi(-t)]/2 and \chi_a(t) = [\chi(t) - \chi(-t)]/2 are the symmetrized and anti-symmetrized parts, plotted below:

[Figure: the symmetrized part \chi_s(t)]

[Figure: the anti-symmetrized part \chi_a(t)]

Since the symmetrized and anti-symmetrized parts cancel out perfectly for t<0 and add up to \chi(t) for t>0, we recover our original response. Just to convince you (as if you needed convincing!) that this works, I have explicitly plotted this:

[Figure: the sum \chi_s(t) + \chi_a(t), reproducing the original \chi(t)]

Now let’s see what happens when we take this over to the frequency domain, where the KKRs apply, by doing a Fourier transform. We can write the following:

\tilde{\chi}(\omega) = \int_{-\infty}^\infty e^{i \omega t} \chi(t) \mathrm{d}t = \int_{-\infty}^\infty (\mathrm{cos}(\omega t) + i \mathrm{sin}(\omega t)) (\chi_s (t)+\chi_a(t))\mathrm{d}t

where in the last step I’ve used Euler’s formula for the exponential and I’ve decomposed \chi(t) into its symmetrized and anti-symmetrized parts as before. Now, there is something immediately apparent in the last integral. Because the domain of integration runs from -\infty to \infty, the area under the curve of any odd (a.k.a. anti-symmetric) function will necessarily be zero. Lastly, noting that anti-symmetric \times symmetric = anti-symmetric, while symmetric \times symmetric and anti-symmetric \times anti-symmetric are both symmetric, we can write for the equation above:

\tilde{\chi}(\omega) = \int_{-\infty}^\infty \mathrm{cos}(\omega t) \chi_s(t) \mathrm{d}t + i \int_{-\infty}^\infty \mathrm{sin}(\omega t) \chi_a(t) \mathrm{d}t = \tilde{\chi}_s(\omega) + i \tilde{\chi}_a(\omega)
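One can check this correspondence numerically. Below is a small Python sketch (the causal damped-oscillator \chi(t), the grid, and the test frequency are all my own illustrative choices):

import numpy as np

dt = 0.001
t = np.arange(-20.0, 20.0 + dt / 2, dt)   # time grid, symmetric about zero

def chi(t):
    # an assumed causal response: zero for t < 0, damped oscillation after
    return np.where(t >= 0, np.exp(-t) * np.sin(5.0 * t), 0.0)

chi_s = 0.5 * (chi(t) + chi(-t))           # symmetrized part
chi_a = 0.5 * (chi(t) - chi(-t))           # anti-symmetrized part

w = 3.0                                    # a test angular frequency
full = np.sum(np.exp(1j * w * t) * chi(t)) * dt   # full Fourier transform
re_part = np.sum(np.cos(w * t) * chi_s) * dt      # cosine transform of chi_s
im_part = np.sum(np.sin(w * t) * chi_a) * dt      # sine transform of chi_a

print(full.real - re_part, full.imag - im_part)   # both differences ~ 0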

Before I continue, some remarks are in order:

  1. Even though we now have two functions in the frequency domain (i.e. \tilde{\chi}_s(\omega) and \tilde{\chi}_a(\omega)), they actually derive from one function in the time-domain, \chi(t). We just symmetrized and anti-symmetrized the function artificially.
  2. We actually know the relationship between the symmetric and anti-symmetric functions in the time-domain because of causality.
  3. The symmetrized part of \chi(t) corresponds to the real part of \tilde{\chi}(\omega). The anti-symmetrized part of \chi(t) corresponds to the imaginary part of \tilde{\chi}(\omega).

With this correspondence, the question then naturally arises:

How do we express the relationship between the real and imaginary parts of \tilde{\chi}(\omega), knowing the relationship between the symmetrized and anti-symmetrized functions in the time-domain?

This actually turns out not to be too difficult and involves just a little more math. First, let us express the relationship between the symmetrized and anti-symmetrized parts of \chi(t) mathematically.

\chi_s(t) = \mathrm{sgn}(t) \times \chi_a(t)

where \mathrm{sgn}(t), equal to +1 for t>0 and -1 for t<0, just changes the sign of the t<0 part of the plot and is shown below.

[Figure: the sign function \mathrm{sgn}(t)]

Now let’s take this expression over to the frequency domain. Here, we must use the convolution theorem. This theorem says that if we have two functions multiplied by each other, e.g. h(t) = f(t)g(t), the Fourier transform of this product is expressed as a convolution in the frequency domain as so:

\tilde{h}(\omega)=\mathcal{F}(f(t)g(t)) = \int_{-\infty}^{\infty} \tilde{f}(\omega-\omega')\tilde{g}(\omega') \mathrm{d}\omega'

where \mathcal{F} means Fourier transform. Therefore, all we have left to do is figure out the Fourier transform of \mathrm{sgn}(t). The answer is given here (in terms of frequency, not angular frequency!), but it is a fun exercise to work out on your own; a sketch of the standard trick follows below. The answer is:

\mathcal{F}(\mathrm{sgn}(t)) = \frac{2}{i\omega}
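For the sketch: insert a convergence factor e^{-\epsilon|t|} and take \epsilon \rightarrow 0^+ at the end (I use the e^{-i\omega t} sign convention here; the e^{+i\omega t} convention used earlier flips the overall sign, which falls under the “factors of i” caveat below):

\mathcal{F}(\mathrm{sgn}(t)) = \lim_{\epsilon \rightarrow 0^+}\left[\int_0^{\infty} e^{-(\epsilon + i\omega)t} \mathrm{d}t - \int_0^{\infty} e^{-(\epsilon - i\omega)t} \mathrm{d}t\right] = \lim_{\epsilon \rightarrow 0^+}\left[\frac{1}{\epsilon + i\omega} - \frac{1}{\epsilon - i\omega}\right] = \frac{2}{i\omega}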

With this answer, and using the convolution theorem, we can write:

\tilde{\chi}_s(\omega) = \int_{-\infty}^{\infty} \frac{2}{i(\omega-\omega')} \tilde{\chi}_a(\omega')\mathrm{d}\omega'

Hence, up to some factors of 2\pi and i, we can now see more clearly what is behind the KKRs without using contour integration. We can also see why it is always said that the KKRs are a result of causality. Thinking about the KKRs this way has definitely aided my thinking about response functions more generally.
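To see this at work, here is a small numerical sanity check in Python (the damped-oscillator response and the half-step offset used to handle the principal value are my own illustrative choices). For \chi(t) = \theta(t)e^{-\gamma t}\sin(\omega_0 t), the transform in the e^{i\omega t} convention is \tilde{\chi}(\omega) = \omega_0/((\gamma - i\omega)^2 + \omega_0^2), and the KKR \textrm{Re}\,\tilde{\chi}(\omega) = \frac{1}{\pi}\mathcal{P}\int_{-\infty}^{\infty} \textrm{Im}\,\tilde{\chi}(\omega')/(\omega' - \omega)\,\mathrm{d}\omega' can be checked directly:

import numpy as np

def chi_w(w, w0=5.0, gamma=0.5):
    # Fourier transform (e^{iwt} convention) of theta(t) e^{-gamma t} sin(w0 t)
    return w0 / ((gamma - 1j * w)**2 + w0**2)

dw = 0.001
wp = np.arange(-200.0, 200.0, dw)   # integration grid for omega'
w = 3.0 + dw / 2                    # test frequency, offset by half a step
                                    # so the grid never hits the pole

# principal-value sum for Re chi(w) built from Im chi(w')
re_kk = np.sum(chi_w(wp).imag / (wp - w)) * dw / np.pi

print(re_kk, chi_w(w).real)         # the two numbers agree closely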

I hope to write a post in the future talking a little more about the connection between the imaginary part of the response function and dissipation. Let me know if you think this will be helpful.

A lot of this post was inspired by this set of notes, which I found to be very valuable.

Some Words on Sum Rules

In condensed matter physics, sum rules are used widely by both experimentalists and theorists. One can even go so far as to say that sum rules provide us with a framework within which theories must exist, i.e. theories cannot violate the constraints put forth by these sum rules. In this sense, they are of vast importance, and any theory of, for example, the dielectric function should be checked against these constraints.

Even though these sum rules are used often, their physical meaning is not always apparent because they can be written in many forms. Let me use the Thomas-Reiche-Kuhn sum rule (a.k.a. the f-sum rule) to illustrate some of these points. This sum rule can be formulated as so:

\sum_\nu(E_\nu - E_0)|\langle{\nu}|n(\textbf{q})|0\rangle|^2 = \frac{n\hbar^2q^2}{2m}

where n(\textbf{q}) is the Fourier-transformed number density operator and m is the particle mass. In this formulation, one can see the physical principles behind the sum rule most clearly:

If one adds up the energies of the transitions made from the ground state to higher energy states (in this case by perturbing the density), weighted by their transition probabilities, this should be equal to the total energy put into the system.

The TRK sum rule can be understood quite simply, therefore, as an energy conservation law for a many-body system. This is why these sum rules are so important — they are many-body manifestations of conservation laws.

The Thomas-Reiche-Kuhn sum rule is often written in the following way as well:

\int_0^\infty \omega S(\textbf{q},\omega) d \omega = \frac{n\hbar^2 q^2}{2m}

where S(\textbf{q},\omega) is the dynamic structure factor.

Furthermore, TRK can be formulated in terms of the inverse longitudinal dielectric function as so:

\int_0^\infty \omega \textrm{Im}(-1 /\epsilon_L(\textbf{q},\omega))d \omega = \frac{\pi}{2}\omega_p^2

where \omega_p is the plasma frequency. Also, it can be written in a form more familiar to optical spectroscopists, who often plot the optical conductivity:

\int_0^\infty \textrm{Re}(\sigma_L(\textbf{q},\omega))d \omega = \frac{\omega_p^2}{8}
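As a quick illustration that these formulations encode a genuine constraint, here is a numerical check of the last version in the q \rightarrow 0 (optical) limit, using the simplest model conductivity I know of, the Drude form \sigma(\omega) = (\omega_p^2/4\pi)\tau/(1 - i\omega\tau) in Gaussian units (the parameter values are arbitrary):

import numpy as np

wp_sq = 7.0    # omega_p^2 (arbitrary illustrative value)
tau = 2.0      # scattering time (arbitrary illustrative value)

def re_sigma(w):
    # real part of the Drude conductivity (Gaussian units)
    return (wp_sq * tau / (4 * np.pi)) / (1 + (w * tau)**2)

dw = 0.01
w = np.arange(dw / 2, 5000.0, dw)   # midpoint grid up to a large cutoff
integral = np.sum(re_sigma(w)) * dw

print(integral, wp_sq / 8)          # both ~ 0.875, as the sum rule demands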

So while there are many sum rules (and many formulations of each sum rule as seen above for the TRK), one should always keep in mind that they derive from rather general physical principles, which are unfortunately sometimes hidden in the way they are written.

Just a Little Thought on TKNN

Thouless, Kohmoto, Nightingale and den Nijs in 1982 wrote a landmark paper relating the Hall conductivity to the Chern number (also now known as the TKNN invariant).

It is well-known that the Quantized Hall Effect (QHE) is an extremely robust phenomenon of topological origin (pdf!). One can think of the Hall conductivity as measuring the total Chern number of the occupied Landau levels. What baffles me about the TKNN result is that despite the robustness and topological character of the QHE, the authors are able to use linear response theory.

This must mean that second- and higher-order responses are somehow exponentially suppressed and that the response is perfectly linear. I have not come across a proof of this in the literature, though that may very well be a (boneheaded!) oversight on my part. This line of questioning also applies to the Quantum Spin Hall Effect and the Quantum Anomalous Hall Effect.
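Tangentially, for anyone who wants to play with Chern numbers numerically: below is a minimal Python sketch of the lattice method of Fukui, Hatsugai and Suzuki, applied not to the Landau-level problem of TKNN but to a simple two-band toy model (the Qi-Wu-Zhang model; the model choice and parameters are my own, and this is only meant to illustrate how a Chern number gets computed):

import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(kx, ky, m=1.0):
    # Qi-Wu-Zhang two-band model
    return (np.sin(kx) * sx + np.sin(ky) * sy
            + (m + np.cos(kx) + np.cos(ky)) * sz)

def lower_state(kx, ky):
    _, vecs = np.linalg.eigh(hamiltonian(kx, ky))
    return vecs[:, 0]                    # lower-band eigenvector

N = 60                                   # k-space grid size
ks = 2 * np.pi * np.arange(N) / N
u = [[lower_state(kx, ky) for ky in ks] for kx in ks]

# Fukui-Hatsugai-Suzuki: sum the Berry flux through each plaquette.
# The loop product is gauge invariant, so the arbitrary phases that
# eigh attaches to each eigenvector drop out.
flux = 0.0
for i in range(N):
    for j in range(N):
        ip, jp = (i + 1) % N, (j + 1) % N
        loop = (np.vdot(u[i][j], u[ip][j]) * np.vdot(u[ip][j], u[ip][jp])
                * np.vdot(u[ip][jp], u[i][jp]) * np.vdot(u[i][jp], u[i][j]))
        flux += np.angle(loop)

# quantized result: +/-1 for 0 < |m| < 2 (sign depends on conventions),
# 0 for |m| > 2
print(round(flux / (2 * np.pi)))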