Response and Dissipation: Part 1 of the Fluctuation-Dissipation Theorem

I’ve referred to the fluctuation-dissipation theorem many times on this blog (see here and here for instance), but it feels like something of an injustice that I have yet to devote a post to the topic itself. A specialized form of the theorem was first formulated by Einstein in his 1905 paper on Brownian motion. It was then extended to electrical circuits by Nyquist and later generalized by several authors, including Callen and Welton (pdf!) and R. Kubo (pdf!). The Callen and Welton paper is particularly worth reading, not just for its content but also for its lucid scientific writing. The fluctuation-dissipation theorem relates the fluctuations of a system (an equilibrium property) to the energy dissipated by a perturbing external source (a manifestly non-equilibrium property).

In this post, the first of two parts, I’ll deal mostly with the non-equilibrium side. In particular, I’ll show that the response function of a system is related to the energy it dissipates, using the harmonic oscillator as an example. I hope this post will justify why it is the imaginary part of a response function that quantifies the energy dissipated. I will also avoid the use of Green’s functions in these posts; for some reason they often get thrown in when teaching linear response theory, but they are absolutely unnecessary for understanding the basic concepts.

Consider first a damped driven harmonic oscillator with the following equation (for consistency, I’ll use the conventions from my previous post about the phase change after a resonance):

\underbrace{\ddot{x}}_{inertial} + \overbrace{b\dot{x}}^{damping} + \underbrace{\omega_0^2 x}_{restoring} = \overbrace{F(t)}^{driving}

One way to solve this equation is to assume that the displacement, x(t), responds linearly to the applied force, F(t), in the following way:

x(t) = \int_{-\infty}^{\infty} \chi(t-t')F(t') dt'

Just in case this equation doesn’t make sense to you, you may want to reference this post about linear response.  In the Fourier domain, this equation can be written as:

\hat{x}(\omega) = \hat{\chi}(\omega) \hat{F}(\omega)

and one can solve this equation (as done in a previous post) to give:

\hat{\chi}(\omega) = (\omega_0^2 -\omega^2 - i\omega b)^{-1}

It is useful to think about the response function, \chi, as how the harmonic oscillator responds to an external source. This can best be seen by writing the following suggestive relation:

\chi(t-t') = \delta x(t)/\delta F(t')

Response functions tend to measure how systems evolve after being perturbed by a point-source (i.e. a delta-function source) and therefore quantify how a system relaxes back to equilibrium after being thrown slightly off balance.
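As a quick numerical sanity check (a sketch with arbitrarily chosen parameters, not anything from a standard library), the delta-function response of the underdamped oscillator can be worked out in closed form, \chi(t) = e^{-bt/2}\sin(\omega_1 t)/\omega_1 for t > 0 with \omega_1 = \sqrt{\omega_0^2 - b^2/4}, and its Fourier transform reproduces the response function \hat{\chi}(\omega); note that the sign of the i\omega b term flips if the opposite Fourier kernel convention is used:

```python
import cmath, math

w0, b = 1.0, 0.3                      # natural frequency and damping (arbitrary)
w1 = math.sqrt(w0**2 - b**2 / 4)      # underdamped oscillation frequency

def chi(t):
    # response to a delta-function kick at t = 0 (zero for t < 0 by causality)
    return math.exp(-b * t / 2) * math.sin(w1 * t) / w1 if t > 0 else 0.0

# Fourier transform of chi(t) by brute-force numerical integration
w, dt, T = 1.2, 0.001, 100.0
chi_hat = sum(chi(k * dt) * cmath.exp(1j * w * k * dt)
              for k in range(int(T / dt))) * dt

# closed form of the response function in the same convention
closed = 1.0 / (w0**2 - w**2 - 1j * w * b)
print(chi_hat, closed)                # the two agree closely
```

The integration window T is long enough that the exponentially decaying tail of \chi(t) is negligible, which is why a simple Riemann sum suffices here.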

Now, look at what happens when we examine the energy dissipated by the damped harmonic oscillator. In this system, the energy dissipated can be expressed as the time integral of the force multiplied by the velocity, which we can write in the Fourier domain as follows:

\Delta E \sim \int \dot{x}F(t) dt =  \int d\omega d\omega'dt (-i\omega) \hat{\chi}(\omega) \hat{F}(\omega)\hat{F}(\omega') e^{-i(\omega+\omega')t}

The time integral yields a delta function, 2\pi\delta(\omega+\omega'), and since F(t) is real, \hat{F}(-\omega) = \hat{F}^*(\omega), so one can write this more simply as:

\Delta E \sim \int d\omega (-i\omega) \hat{\chi}(\omega) |\hat{F}(\omega)|^2

Since the energy dissipated has to be real, and |\hat{F}(\omega)|^2 is also real, only the imaginary part of the response function can contribute to the dissipated energy, so that we can write:

\Delta E \sim  \int d \omega \omega\hat{\chi}''(\omega)|\hat{F}(\omega)|^2

Although I try to avoid heavy mathematics on this blog, I hope that this derivation was not too difficult to follow. The punchline is that only the imaginary part of the response function is related to energy dissipation.
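For the skeptical, the punchline can be verified numerically. The sketch below (with arbitrarily chosen parameters, and using the convention in which \chi'' > 0, i.e. \hat{\chi}(\omega) = (\omega_0^2 - \omega^2 - i\omega b)^{-1}; the sign of the i\omega b term flips in the opposite Fourier convention) integrates the driven oscillator directly and compares the time-averaged dissipated power b\langle\dot{x}^2\rangle with the prediction \frac{1}{2}\omega\hat{\chi}''(\omega)F_0^2 for a drive F_0\cos(\omega t):

```python
import math

# parameters (arbitrary illustrative choices)
w0, b, w, F0 = 1.0, 0.3, 1.2, 1.0

# response function; its imaginary part sets the predicted dissipation
chi = 1.0 / (w0**2 - w**2 - 1j * w * b)
P_pred = 0.5 * w * chi.imag * F0**2        # predicted mean dissipated power

# time-domain check: integrate x'' + b x' + w0^2 x = F0 cos(wt) with RK4
# and average the dissipated power b*v^2 over the late, steady-state cycles
def accel(t, x, v):
    return F0 * math.cos(w * t) - b * v - w0**2 * x

T = 2 * math.pi / w
dt = T / 1000                              # 1000 steps per drive period
x = v = 0.0
dissipated = 0.0
for n in range(100 * 1000):                # 100 drive periods total
    t = n * dt
    k1x, k1v = v, accel(t, x, v)
    k2x, k2v = v + dt/2*k1v, accel(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
    k3x, k3v = v + dt/2*k2v, accel(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
    k4x, k4v = v + dt*k3v, accel(t + dt, x + dt*k3x, v + dt*k3v)
    x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
    v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    if n >= 50 * 1000:                     # transient has long since decayed
        dissipated += b * v * v * dt
P_num = dissipated / (50 * T)
print(P_pred, P_num)                       # agree to a fraction of a percent
```

The agreement holds for any drive frequency, which is the content of the statement that \omega\hat{\chi}''(\omega) quantifies dissipation.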

Intuitively, one can see that the imaginary part of the response has to be related to dissipation, because it is the part of the response function that possesses a \pi/2 phase lag. The real part, on the other hand, is in phase with the driving force and possesses no phase lag (i.e. \chi = \chi' +i \chi'' = \chi' +e^{i\pi/2}\chi''). One can see from the plot below that damping (i.e. dissipation) is quantified by a \pi/2 phase lag.

[Image: Argand-plane plot of the resonance]

Damping is usually associated with a 90 degree phase lag

Next up, I will show how the imaginary part of the response function is related to equilibrium fluctuations!

Research Topics and the LAQ Method

As a scientific researcher, the toughest part of the job is to come up with good scientific questions. A large part of my time is spent looking for such questions and every once in a while, I happen upon a topic that is worth spending the time to investigate further. The most common method of generating such questions is to come up with a lot of them and then sift through the list to find some that are worth pursuing.

One of the main criteria I use for selecting such questions/problems is what I refer to as the “largest answerable question” or LAQ method. Because the lifetime of a researcher is limited by the human lifespan, it is important to try to answer the largest answerable questions that fall within the window of your ability. Hence, this selection process is tied in with one’s self-confidence and takes a fair amount of introspection. I imagine the LAQ method looking a little bit like this:

[Image: a funnel narrowing from broad questions down to a specific, answerable one]

One starts by asking some really general questions about some scientific topic which eventually proceeds to a more specific, answerable, concrete question. If the question is answerable, it usually will have ramifications that will be broadly felt by many in the community.

I imagine that most readers of this blog will have no trouble coming up with examples of success stories where scientists employed the LAQ method. Just about every famous scientist you can think of has probably, at least to some extent, used this method fruitfully. However, there are counterexamples as well, where important questions are asked by one scientist but answered by others.

I am almost done reading Erwin Schrodinger’s book What is Life?, which was written in 1943. In it, Schrodinger asks deep questions about genetics and attempts to put physical constraints on information-carrying molecules (DNA was not known at the time to be the site of genetic information). It is an entertaining read in two regards. Firstly, Schrodinger, at the time of writing, introduces to physicists some of the most pertinent and probing questions in genetics. The book was, after all, one that was read by both Watson and Crick before they set about discovering the structure of DNA. Secondly, and more interestingly, Schrodinger gets almost everything he tries to answer wrong! For instance, he suggests that quantum mechanics may play a role in causing a mutation of certain genes. This is not to say that his reasoning was not sound, but at the time of writing, there were just not enough experimental constraints on some of the assumptions he made.

Nonetheless, I applaud Schrodinger for writing the book and exposing his ignorance. Even though he was not able to answer many of the questions himself, he was an inspiration to many others who eventually were able to shed light on many of the questions posed in the book. Here is an example where the LAQ method fails, but still pushes science forward in a tangible way.

What are your strategies for coming up with good research questions? I have to admit that while the LAQ method is useful, I sometimes pursue problems purely because I find them stunning and no other reason is needed!

DIY Garage Work

Recently, I heard about a string of YouTube videos where Ben Krasnow of the Applied Sciences YouTube Channel makes a series of scientific instruments in his garage. One of his particularly impressive achievements is a homemade Scanning Electron Microscope: a pretty decent instrument constructed for approximately $1,500. This is outstanding from an educational viewpoint — $1,500 will probably be affordable for many high schools and will enable students to see how to image objects with electrons.

Here are a couple videos showing this and another one of his projects where he uses a laser and a couple optical elements to construct a Raman spectroscopy setup:


Lastly, I’d like to point out that Christina Lee has put together an excellent set of Jupyter code (i.e. IPython Notebook code) to solve various condensed matter physics problems. It’s definitely worth having a look.

Jahn-Teller Distortion and Symmetry Breaking

The Jahn-Teller effect occurs in molecular systems, as well as solid state systems, where a molecular complex distorts, resulting in a lower symmetry. As a consequence, the energy of certain occupied molecular states is reduced. Let me first describe the phenomenon before giving you a little cartoon of the effect.

First, consider, just as an example, a manganese atom with valence 3d^4, surrounded by an octahedral cage of oxygen atoms like so (image taken from this thesis):

[Image: d-orbital energy levels for the 3d^4 configuration before (left) and after (right) a Jahn-Teller distortion]

The electrons are arranged such that each orbital of the lower triplet contains a single “up-spin” electron, while the higher doublet contains just one “up-spin” electron between its two degenerate orbitals, as shown in the image on the left. This scenario is ripe for a Jahn-Teller distortion, because the electronic energy can be lowered by splitting both the doublet and the triplet, as shown in the image on the right.

There is a very simple but quite elegant problem one can solve to describe this phenomenon at a cartoon level: a two-dimensional rectangular well with adjustable walls. Solving the Schrodinger equation shows that the two-dimensional infinite well has energy levels of the form:

E_{i,j} = \frac{h^2}{8m}(i^2/a^2 + j^2/b^2)                where i, j are positive integers.

Here, a and b denote the lengths of the sides of the 2D well. Since only the quantity in the parentheses determines the energy levels, let me hold the area of the well fixed, define the anisotropy parameter \gamma = a/b, and write the energy dependence in the following way:

E \sim i^2/\gamma + \gamma j^2

Note that \gamma is effectively an anisotropy parameter, a measure of the “squareness” of the well. Now, let’s consider filling up the levels with spinless electrons that obey the Pauli principle. These electrons will fill the levels in a “one-per-level” fashion in accordance with fermionic statistics. We can therefore write the total energy of the N-fermion problem as:

E_{tot} \sim \alpha^2/ \gamma + \gamma \beta^2

where \alpha^2 = \sum_k i_k^2 and \beta^2 = \sum_k j_k^2 are sums over the levels occupied by the N electrons.

Now, all of this has been pretty simple so far, and all that’s really been done is to re-write the 2D well problem in a different way. However, let’s just systematically look at what happens when we fill up the levels. At first, we fill up the E_{1,1} level, where \alpha^2 = \beta^2 = 1^2. In this case, taking the derivative of E_{tot} with respect to \gamma and setting it to zero gives \gamma_{min} = 1, and the well is a square.

For two electrons, however, the well is no longer a square! The next electron will fill up the E_{2,1} level and the total energy will therefore be:

E_{tot} \sim 1/\gamma (1+4) + \gamma (1+1),

which gives a \gamma_{min} = \sqrt{5/2}!

Why did this breaking of square symmetry occur? In fact, this is very closely related to the Jahn-Teller effect. Since the level is two-fold degenerate (i.e. E_{2,1} =  E_{1,2}), it is favorable for the 2D well to distort to lower its electronic energy.

Notice that when we add the third electron, we get that:

E_{tot} \sim 1/\gamma (1+4+1) + \gamma (1+1+4)

and \gamma_{min} = 1 again, and we return to the system with square symmetry! This is also quite similar to the Jahn-Teller problem, where, when all the states of the degenerate levels are filled up, there is no longer an energy to be gained from the symmetry-broken geometry.
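The whole filling sequence can be checked in a few lines. Setting the derivative of E_{tot} \sim \alpha^2/\gamma + \gamma \beta^2 to zero gives \gamma_{min} = \alpha/\beta. Here is a minimal Python sketch (the helper function is my own, and the fillings are the ones used in the text):

```python
import math

def gamma_min(levels):
    # E_tot(gamma) ~ A/gamma + B*gamma with A = sum(i^2), B = sum(j^2);
    # dE/dgamma = -A/gamma^2 + B = 0  =>  gamma_min = sqrt(A/B)
    A = sum(i * i for i, j in levels)
    B = sum(j * j for i, j in levels)
    return math.sqrt(A / B)

print(gamma_min([(1, 1)]))                  # 1 electron: 1.0, a square well
print(gamma_min([(1, 1), (2, 1)]))          # 2 electrons: sqrt(5/2) ~ 1.58
print(gamma_min([(1, 1), (2, 1), (1, 2)]))  # 3 electrons: back to 1.0
```

The distortion appears exactly when a degenerate level is partially filled, and disappears once the degenerate pair is fully occupied.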

This analogy is made more complete when looking at the following level scheme for different d-electron valence configurations, shown below (image taken from here).

[Image: high-spin d-orbital level diagrams for the different d-electron valence configurations]

The black configurations are Jahn-Teller active (i.e. prone to distortions of the oxygen octahedra), while the red are not.

In condensed matter physics, we usually think about spontaneous symmetry breaking in the context of the thermodynamic limit. What saves us here, though, is that the well will actually oscillate between the two rectangular configurations (i.e. horizontal vs. vertical), preserving the original symmetry! This is analogous to the case of the ammonia (NH_3) molecule I discussed in this post.

The Physicist’s Proof II: Limits and the Monty Hall Problem

As an undergraduate, I was taught the concept of the “physicist’s proof”, a sort of silly concept that was a professor’s attempt to get us students to think a little harder about some problems. Here, I give you a “physicist’s proof” of the famous Monty Hall problem, which (to me!) is easier to think about than the typical Bayesian approach.

The Monty Hall problem, which is based on a TV game show, goes something like this (if you already know the Monty Hall problem, you can skip the paragraphs in italics):

Suppose you are a contestant on a game show where there are three doors and a car behind one of them. You must select the correct door to win the car.


You therefore select one of the three doors. Now, the host of the show, who knows where the car is, opens a different door for you and shows you that there is no car behind that door.

There are two remaining unopened doors — the one you have chosen and one other. Now, before you find out whether or not you have guessed correctly, the host gives you the option to change your selection from the door you initially chose to the other remaining unopened door.

Should you switch, or should you remain with your initial selection?

When I first heard this problem, I remember thinking, like most people, that there was a 50/50 chance of the car being behind either door. However, there is a way to convince yourself that this is not so. This is by taking the limit of a large number of doors. I’ll explain what I mean in a second, but let me just emphasize that taking limits is a common and important technique that physicists must master to think about problems in general.

In Einstein’s book, Relativity, he describes using this kind of thinking to point out absurd consequences of Galilean relativity. Einstein imagined himself running away from a clock at the speed of light: in this scenario, the light from the clock would be matching his pace and he would therefore observe the hands of the clock remaining stationary and time standing still. Were he able to run just a little bit faster than the light emanating from the clock, he would see the hands of the clock start to wind backwards. This would enable him to violate causality!  However, Einstein held causality to be a dearer physical truth than Newton’s laws. Special relativity was Einstein’s answer to this contradiction, a conclusion he reached by considering a physical limit.

Now, let us return to the Monty Hall problem. And this time, instead of three doors, let’s think about the limit of, say, a million doors. In this scenario, suppose that you have to choose one door from one million doors instead of just three. For the sake of argument, suppose you select door number 999,983. The host, who knows where the car is, opens all the other doors, apart from door number 17. Should you stick to door 999,983 or should you switch to door 17?

Let’s think about this for a second: there are two scenarios. Either you were correct on your first guess and the car is behind door 999,983, or you were incorrect and the car is behind door 17. When you initially made your selection, the chance of having guessed correctly was 1 in 1,000,000, i.e. almost zero! Had you chosen any other door apart from door 17, you would have been faced with the same option: the door you chose vs. door 17, and there were 999,999 ways for your initial pick to be wrong. In some sense, by opening all the other doors, the host is practically telling you that the car is behind door 17 (there is a 99.9999% chance!).

To me, at least, the million door scenario demonstrates quite emphatically that switching from your initial choice is more logical. For some reason, the three door case appears to be more psychologically challenging, and the probabilities are not as obvious. Taking the appropriate limit of the Monty Hall problem is therefore (at least to me) much more intuitive!

Especially for those who are soon to take the physics GRE — remember to take appropriate limits, this will often eliminate at least a couple answers!

For completeness, I show below the more rigorous Bayesian method for the three-door case:

Bayes theorem says that:

P(A|B) = \frac{P(B|A) P(A)}{P(B)}

For the sake of argument, suppose that you select door 3. The host then shows you that there is no car behind door 2. The calculation goes something like this. Below, “car3” translates to “the car was behind door 3” and “opened2” translates to “the host opened door 2”

P(car3|opened2) = \frac{P(opened2 | car3) P(car3)}{P(opened2)}

The probabilities in the numerator are easy to obtain: P(opened2 | car3) = 1/2 and P(car3) = 1/3. However, P(opened2) is a little harder to calculate, and it helps to enumerate all the scenarios. Given that you have chosen door 3: if the car is behind door 1, the host must open door 2, so the probability is 1. If the car is behind door 3 (i.e. your guess was correct), the host can open either door 1 or door 2, so the probability of him opening door 2 is 1/2. And if the car is behind door 2, the host will never open it, so the probability is 0. Since each door has a 1/3 chance of hiding the car at the outset, the denominator becomes:

P(opened2) = 1/3*(1/2 + 1 + 0) = 1/2

and hence:

P(car3|opened2) = \frac{1/2*1/3}{1/2} = 1/3.

Likewise, the probability that the car is behind door 1 is:

P(car1|opened2) = \frac{P(opened2 | car1) P(car1)}{P(opened2)}

which can similarly be calculated:

P(car1|opened2) = \frac{1*1/3}{1/2} = 2/3.

It is a bizarre answer, but Bayesian results often are.
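If the arithmetic still feels bizarre, a brute-force simulation is a nice sanity check. Below is a minimal sketch (the door labeling and the helper function monty are my own, not part of any standard library) that plays the three-door game many times, with and without switching:

```python
import random

def monty(trials, switch, seed=1):
    # simulate the three-door game; the host always opens a goat door
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # the host opens a door that is neither the pick nor the car,
        # choosing at random when the first pick happens to be correct
        host = rng.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # switch to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != host)
        wins += (pick == car)
    return wins / trials

print(monty(100_000, switch=True))    # ~2/3
print(monty(100_000, switch=False))   # ~1/3
```

The switching strategy wins about twice as often, just as the Bayesian calculation predicts.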

An Undergraduate Optics Problem – The Brewster Angle

Recently, a lab-mate of mine asked me if there was an intuitive way to understand Brewster’s angle. After trying to remember how Brewster’s angle was explained to me from Griffiths’ E&M book, I realized that I did not have a simple picture in my mind at all! Griffiths’ E&M book uses the rather opaque Fresnel equations to obtain the Brewster angle. So I did a little bit of thinking and came up with a picture I think is quite easy to grasp.

First, let me briefly remind you what Brewster’s angle is, since many of you have probably not thought about the concept for a long time! Suppose my incident light beam has both s- and p-polarized components. (In case you don’t remember, p-polarization is parallel to the plane of incidence, while s-polarization is perpendicular to the plane of incidence.) If unpolarized light is incident on a medium, say water or glass, there is an angle, the Brewster angle, at which the reflected light comes out perfectly s-polarized.

An addendum to this statement is that if the incident beam was perfectly p-polarized to begin with, there is no reflection at the Brewster angle at all! A quick example of this is shown in this YouTube video:

So after that little introduction, let me give you the “intuitive explanation” as to why these weird polarization effects happen at the Brewster angle. First of all, it is important to note one important fact: at the Brewster angle, the refracted beam and the reflected beam are at 90 degrees with respect to each other. This is shown in the image below:

Why is this important? Well, you can think of the reflected beam as light arising from the electrons jiggling in the medium (i.e. the incident light comes in, strikes the electrons in the medium and these electrons re-radiate the light).

However, an oscillating charge emits no radiation along its axis of motion (the radiated power is strongest perpendicular to that axis). Therefore, when the light is purely p-polarized, there is no light to reflect when the reflected and refracted rays are orthogonal: the reflected beam would have to be emitted along the very axis in which the electrons are jiggling! This is shown in the right image above and is what gives rise to the reflectionless beam in the YouTube video.

This visual aid enables one to use Snell’s law to obtain the celebrated Brewster angle equation:

n_1 \textrm{sin}(\theta_B) = n_2 \textrm{sin}(\theta_2)

and

\theta_B + \theta_2 = 90^{\circ}

and, since \textrm{sin}(\theta_2) = \textrm{cos}(\theta_B), to obtain:

\textrm{tan}(\theta_B) = n_2/n_1.

The equations also suggest one more thing: when the incident light has an s-polarization component, the reflected beam must come out perfectly s-polarized at the Brewster angle. This is because only the s-polarized light jiggles the electrons in a way that lets them re-radiate in the direction of the outgoing beam. The image below shows the effect a polarizing filter can therefore have when looking at water near the Brewster angle, which is around 53 degrees for water.
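As a quick numerical check (a sketch; the indices n_1 = 1.0 and n_2 = 1.33 for air and water are the usual textbook values), the tangent formula reproduces the ~53 degree figure quoted above, and plugging that angle into the Fresnel reflection coefficient for p-polarized light confirms that the reflection vanishes there:

```python
import math

n1, n2 = 1.0, 1.33                      # air to water (illustrative indices)
theta_B = math.atan(n2 / n1)            # tan(theta_B) = n2/n1
print(math.degrees(theta_B))            # ~53.06 degrees

# cross-check against the Fresnel reflection coefficient for p-polarization,
# which should vanish exactly at the Brewster angle
theta_t = math.asin(n1 * math.sin(theta_B) / n2)    # Snell's law
r_p = (n2 * math.cos(theta_B) - n1 * math.cos(theta_t)) / \
      (n2 * math.cos(theta_B) + n1 * math.cos(theta_t))
print(abs(r_p))                         # ~0, within machine precision
```

So even though the Fresnel equations are opaque, they agree with the jiggling-electron picture on where the reflection disappears.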

To me, this is a much simpler way to think about the Brewster angle than dealing with the Fresnel equations.

Spot the Difference

A little while ago, I wrote a blog post concerning autostereograms, more commonly referred to as Magic Eye images. These are images that, at first sight, seem to possess nothing but a random-seeming pattern. However, looked at in a certain way, a three-dimensional image can actually be made visible. Below is an example of such an image (taken from Wikipedia):

Autostereogram of a shark

In my previous post about these stereograms, I pointed out that the best way to understand what is going on is to look at a two-image stereogram (see below). Here, the left eye looks at the left image while the right eye looks at the right image, and the brain is tricked into triangulating a distance because the two images are almost the same. The only difference is that part of the image has been displaced horizontally, which makes that part appear like it is at a different depth. This is explained at the bottom of this page, and an example is shown below:

Random Dot Stereogram

Boring old square

In this post, however, I would like to point out that this visual technique can be used to solve a different kind of puzzle. When I was in middle school, one of the most popular games to play was called Photo-Hunt, essentially a spot-the-difference puzzle. You probably know what I’m referring to, but here is an example just in case you don’t:

The bizarre thing about these images is that if you look at them as you would a Magic Eye image, the differences between the two images essentially “pop out” (or rather, they flicker noticeably). Because each of your eyes is looking at one of the images, your brain is tricked into thinking there is a single image at a certain depth. The parts of the two images that are identical fuse at a particular depth, but your eyes cannot triangulate the differing parts, so they appear to flicker. I wish I had learned this trick in middle school, when this game was all the rage.

While this may all seem a little silly, I realized recently, while zoning out during a rather dry seminar, that I can spot very minute defects in TEM images using this technique. Here is an example of an image of a bubble raft (there are some really cool videos of bubble rafts online — see here for instance), where the defects immediately emerge when viewed stereoscopically (i.e. like a Magic Eye image):

TEMBubbleRaft

Bubble raft image taken from here

I won’t tell you exactly where the defects are, but there are three quite major ones, which are the ones I’m referring to in the image; they’re fairly obvious even when not viewed stereoscopically.

Because so many concepts in solid state physics depend on crystal symmetries and periodicity, I can foresee entertaining myself during many more dry seminars in the future, be it a seminar with tons of TEM images or a wealth of diffraction data. I have even started viewing my own data this way to see if anything immediately jumps out, without any luck so far, but I suspect it is only a matter of time before I see something useful.