# Tag Archives: Harmonic Oscillator

## Response and Dissipation: Part 1 of the Fluctuation-Dissipation Theorem

I’ve referred to the fluctuation-dissipation theorem many times on this blog (see here and here for instance), but I feel it is somewhat of an injustice that I have yet to devote a post to the topic itself. A specialized form of the theorem was first formulated by Einstein in a paper about Brownian motion in 1905. It was then extended to electrical circuits by Nyquist and later generalized by several authors, including Callen and Welton (pdf!) and R. Kubo (pdf!). The Callen and Welton paper is a particularly superlative paper not just for its content but also for its lucid scientific writing. The fluctuation-dissipation theorem relates the fluctuations of a system (an equilibrium property) to the energy dissipated by a perturbing external source (a manifestly non-equilibrium property).

In this post, which is the first part of two, I’ll deal mostly with the non-equilibrium part. In particular, I’ll show that the response function of a system is related to the energy dissipation, using the harmonic oscillator as an example. I hope that this post will provide a justification as to why it is the imaginary part of a response function that quantifies the energy dissipated. I will also avoid the use of Green’s functions in these posts; for some reason they often get thrown into treatments of linear response theory, but they are unnecessary for understanding the basic concepts.

Consider first a damped driven harmonic oscillator with the following equation (for consistency, I’ll use the conventions from my previous post about the phase change after a resonance):

$\underbrace{\ddot{x}}_{inertial} + \overbrace{b\dot{x}}^{damping} + \underbrace{\omega_0^2 x}_{restoring} = \overbrace{F(t)}^{driving}$

One way to solve this equation is to assume that the displacement, $x(t)$, responds linearly to the applied force, $F(t)$ in the following way:

$x(t) = \int_{-\infty}^{\infty} \chi(t-t')F(t') dt'$

Just in case this equation doesn’t make sense to you, you may want to reference this post about linear response.  In the Fourier domain, this equation can be written as:

$\hat{x}(\omega) = \hat{\chi}(\omega) \hat{F}(\omega)$

and one can solve this equation (as done in a previous post) to give:

$\hat{\chi}(\omega) = (-\omega^2 + i\omega b + \omega_0^2 )^{-1}$

It is useful to think about the response function, $\chi$, as how the harmonic oscillator responds to an external source. This can best be seen by writing the following suggestive relation:

$\chi(t-t') = \delta x(t)/\delta F(t')$

Response functions tend to measure how systems evolve after being perturbed by a point-source (i.e. a delta-function source) and therefore quantify how a system relaxes back to equilibrium after being thrown slightly off balance.
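To make this concrete, here is a minimal numerical sketch (my own, not from the original derivation; all parameter values are arbitrary choices): kick the damped oscillator with a delta-function impulse and compare the resulting motion with the closed-form underdamped impulse response $\chi(t) = e^{-bt/2}\sin(\tilde{\omega}t)/\tilde{\omega}$, where $\tilde{\omega} = \sqrt{\omega_0^2 - b^2/4}$.

```python
import numpy as np

def chi_numeric(omega0, b, t):
    """Impulse response of x'' + b x' + omega0^2 x = delta(t), obtained by
    integrating the kicked oscillator (x=0, v=1 just after the kick) with RK4."""
    def deriv(s):
        x, v = s
        return np.array([v, -b * v - omega0**2 * x])
    dt = t[1] - t[0]
    s = np.array([0.0, 1.0])        # delta kick: unit velocity, zero displacement
    xs = [s[0]]
    for _ in range(len(t) - 1):
        k1 = deriv(s)
        k2 = deriv(s + 0.5 * dt * k1)
        k3 = deriv(s + 0.5 * dt * k2)
        k4 = deriv(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(s[0])
    return np.array(xs)

def chi_analytic(omega0, b, t):
    """Closed-form impulse response for the underdamped case (b < 2*omega0)."""
    wt = np.sqrt(omega0**2 - b**2 / 4)
    return np.exp(-b * t / 2) * np.sin(wt * t) / wt

t = np.linspace(0, 20, 4001)
num = chi_numeric(2.0, 0.5, t)
ana = chi_analytic(2.0, 0.5, t)
print(np.max(np.abs(num - ana)))    # small (RK4 discretization error)
```

The point of the comparison is that the response function really is "how the system rings down after a point-source perturbation," as described above.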

Now, look at what happens when we examine the energy dissipated by the damped harmonic oscillator. In this system the energy dissipated can be expressed as the time integral of the force multiplied by the velocity and we can write this in the Fourier domain as so:

$\Delta E \sim \int \dot{x}F(t) dt = \int d\omega d\omega'dt (-i\omega) \hat{\chi}(\omega) \hat{F}(\omega)\hat{F}(\omega') e^{i(\omega+\omega')t}$

One can write this more simply as:

$\Delta E \sim \int d\omega (-i\omega) \hat{\chi}(\omega) |\hat{F}(\omega)|^2$

Noticing that the energy dissipated has to be real, and that $|\hat{F}(\omega)|^2$ is also real, it turns out that only the imaginary part of the response function can contribute to the dissipated energy, so that we can write:

$\Delta E \sim \int d \omega \omega\hat{\chi}''(\omega)|\hat{F}(\omega)|^2$

Although I try to avoid heavy mathematics on this blog, I hope that this derivation was not too difficult to follow. It turns out that only the imaginary part of the response function is related to energy dissipation.
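As a sanity check on this result, here is a numerical sketch of my own (arbitrary parameter values): drive the oscillator at a single frequency, wait for the transient to decay, and compare the time-averaged power $\langle \dot{x}F \rangle$ against the frequency-domain prediction $\frac{1}{2}F_0^2\,\omega\,|\hat{\chi}''(\omega)|$, which involves only the imaginary part of the response.

```python
import numpy as np

# Arbitrary illustrative parameters
omega0, b, omega, F0 = 2.0, 0.5, 1.5, 1.0

def deriv(x, v, t):
    """State derivative for x'' + b x' + omega0^2 x = F0 cos(omega t)."""
    return v, F0 * np.cos(omega * t) - b * v - omega0**2 * x

# RK4 integration from rest; transients decay on a timescale ~ 2/b
dt, T = 0.001, 80.0
n = int(T / dt)
ts = np.arange(n) * dt
vs = np.empty(n)
x, v = 0.0, 0.0
for i, t in enumerate(ts):
    vs[i] = v
    k1x, k1v = deriv(x, v, t)
    k2x, k2v = deriv(x + 0.5*dt*k1x, v + 0.5*dt*k1v, t + 0.5*dt)
    k3x, k3v = deriv(x + 0.5*dt*k2x, v + 0.5*dt*k2v, t + 0.5*dt)
    k4x, k4v = deriv(x + dt*k3x, v + dt*k3v, t + dt)
    x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
    v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)

# Time-averaged dissipated power <x'(t) F(t)> over the last few drive periods
mask = ts > T - 3 * (2 * np.pi / omega)
P_num = np.mean(vs[mask] * F0 * np.cos(omega * ts[mask]))

# Frequency-domain prediction: only chi'' enters (up to sign convention)
chi = 1.0 / (omega0**2 - omega**2 + 1j * omega * b)
P_theory = 0.5 * F0**2 * omega * abs(chi.imag)
print(P_num, P_theory)    # the two agree closely
```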

Intuitively, one can see that the imaginary part of the response has to be related to dissipation, because it is the part of the response function that possesses a $\pi/2$ phase lag. The real part, on the other hand, is in phase with the driving force and does not possess a phase lag (i.e. $\chi = \chi' +i \chi'' = \chi' +e^{i\pi/2}\chi''$). One can see from the plot below that damping (i.e. dissipation) is quantified by a $\pi/2$ phase lag.

Damping is usually associated with a 90 degree phase lag

Next up, I will show how the imaginary part of the response function is related to equilibrium fluctuations!

## Nonlinear Response and Harmonics

Because we are so often solving problems in quantum mechanics, it is sometimes easy to forget that certain effects also show up in classical physics and are not “mysterious quantum phenomena”. One of these is the problem of avoided crossings or level repulsion, which can be much more easily understood in the classical realm. I would argue that the Fano resonance also represents a case where a classical model is more helpful in grasping the main idea. Perhaps not too surprisingly, a variant of the classical harmonic oscillator problem is used to illustrate the respective concepts in both cases.

There is also another cute example that illustrates why overtones of the natural frequency appear when the oscillations become slightly nonlinear. The solution to this problem shows why harmonic distortion often affects speakers, which sometimes emit frequencies that were not present in the original electrical signal. It also shows why second harmonic generation can result when intense light is incident on a material.

First, imagine a perfectly harmonic oscillator with a potential of the form $V(x) = \frac{1}{2} kx^2$. We know that such an oscillator, if displaced from its original position, will oscillate at the natural frequency $\omega_0 = \sqrt{k/m}$, with the position varying as $x(t) = A \textrm{cos}(\omega_0 t + \phi)$. The potential and the position of the oscillator as a function of time are shown below:

(Left) Harmonic potential as a function of position. (Right) Variation of the position of the oscillator with time

Now imagine that in addition to the harmonic part of the potential, we also have a small additional component such that $V(x) = \frac{1}{2} kx^2 + \frac{1}{3}\epsilon x^3$, so that the potential now looks like so:

The equation of motion is now nonlinear:

$\ddot{x} = -c_1x - c_2x^2$

where $c_1$ and $c_2$ are constants. It is easy to see that if the amplitude of oscillations is small enough, there will be very little difference between this case and the case of the perfectly harmonic potential.

However, if the amplitude of the oscillations gets a little larger, there will clearly be deviations from the pure sinusoid. So what does the position of the oscillator look like as a function of time? Perhaps not too surprisingly, considering the title, not only are there oscillations at $\omega_0$, but a harmonic component at $2\omega_0$ is also introduced.

While the differential equation can’t be solved exactly without resorting to numerical methods, the introduction of the harmonic component can be seen within the framework of perturbation theory. In this context, all we need to do is plug the solution to the simple harmonic oscillator, $x(t) = A\textrm{cos}(\omega_0t +\phi)$, into the nonlinear equation above. If we do this, the last term becomes:

$-c_2A^2\textrm{cos}^2(\omega_0t+\phi) = -c_2 \frac{A^2}{2}(1 + \textrm{cos}(2\omega_0t+2\phi))$,

showing that we get oscillatory components at twice the natural frequency. Although this explanation is a little crude, one can already start to see why nonlinearity often leads to higher frequency harmonics.
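The perturbative argument can be checked numerically. The sketch below (my own, with arbitrary coefficients) integrates the nonlinear equation of motion and takes a Fourier transform of the trajectory, revealing a small spectral peak near twice the fundamental frequency:

```python
import numpy as np

# Integrate x'' = -c1 x - c2 x^2 (weakly anharmonic) and inspect the spectrum
c1, c2 = 1.0, 0.3            # natural frequency omega0 = sqrt(c1) = 1
dt, T = 0.01, 400.0
n = int(T / dt)
x, v = 1.0, 0.0              # released from rest at moderate amplitude

def acc(x):
    return -c1 * x - c2 * x**2

xs = np.empty(n)
for i in range(n):
    xs[i] = x
    # velocity-Verlet step (symplectic, good for undamped oscillators)
    a = acc(x)
    x += v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + acc(x)) * dt

# Hann window reduces spectral leakage before the FFT
spec = np.abs(np.fft.rfft(xs * np.hanning(n)))
freqs = np.fft.rfftfreq(n, dt) * 2 * np.pi      # angular frequencies

k1 = 1 + np.argmax(spec[1:])                    # fundamental (skip the DC offset)
second = spec[2*k1 - 3:2*k1 + 4].max()          # peak near twice the fundamental
print(freqs[k1], second / spec[k1])             # second harmonic at a few percent
```

The second-harmonic weight comes out near the perturbative estimate $c_2 A^2/6$ relative to the fundamental, consistent with the $\textrm{cos}(2\omega_0 t)$ term derived above.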

With respect to optical second harmonic generation, there is also one important ingredient that should not be overlooked in this simplified model. This is the fact that frequency doubling is possible only when there is an $x^3$ component in the potential. This means that the potential needs to be inversion asymmetric. Indeed, second harmonic generation is only possible in inversion asymmetric materials (which is why ferroelectric materials are often used to produce second harmonic optical signals).

Because of its conceptual simplicity, it is often helpful to think about physical problems in terms of the classical harmonic oscillator. It would be interesting to count how many Nobel Prizes have been given out for problems that have been reduced to some variant of the harmonic oscillator!

## Angular Momentum and Harmonic Oscillators

There are many analogies that can be drawn between spin angular momentum and orbital angular momentum. This is because they obey identical commutation relations:

$[L_x, L_y] = i\hbar L_z$     &     $[S_x, S_y] = i\hbar S_z$.

One can circularly permute the indices to obtain the other commutation relations. However, there is one crucial difference between the orbital and spin angular momenta: components of the orbital angular momentum cannot take half-integer values, whereas this is permitted for spin angular momentum.

The forbidden half-integer quantization stems from the fact that orbital angular momentum can be expressed in terms of the position and momentum operators:

$\textbf{L} = \textbf{r} \times \textbf{p}$.

While in most textbooks the integer quantization of the orbital angular momentum is shown by appealing to the Schrodinger equation, Schwinger demonstrated that by mapping the angular momentum problem to that of two uncoupled harmonic oscillators (pdf!), integer quantization easily follows.

I’m just going to show this for the $z$-component of the angular momentum since the $x$- and $y$-components can easily be obtained by permuting indices. $L_z$ can be written as:

$L_z = xp_y - yp_x$

As Schwinger did so effectively throughout his career, he made a canonical transformation to a different basis and wrote:

$x_1 = \frac{1}{\sqrt{2}} [x+(a^2/\hbar)p_y]$

$x_2 = \frac{1}{\sqrt{2}} [x-(a^2/\hbar)p_y]$

$p_1 = \frac{1}{\sqrt{2}} [p_x-(\hbar/a^2)y]$

$p_2 = \frac{1}{\sqrt{2}} [p_x+(\hbar/a^2)y]$,

where $a$ is just some variable with units of length. Now, since the transformation is canonical, these new operators satisfy the same commutation relations, i.e. $[x_1,p_1]=i\hbar, [x_1,p_2]=0$, and so forth.

If we now write $L_z$ in terms of the new operators, we find something rather amusing:

$L_z = (\frac{a^2}{2\hbar}p_1^2 + \frac{\hbar}{2a^2}x_1^2) - ( \frac{a^2}{2\hbar}p_2^2 + \frac{\hbar}{2a^2}x_2^2)$.

With the substitution $\hbar/a^2 \rightarrow m$, $L_z$ can be written as so:

$L_z = (\frac{1}{2m}p_1^2 + \frac{m}{2}x_1^2) - ( \frac{1}{2m}p_2^2 + \frac{m}{2}x_2^2)$.

Each of the two terms in brackets can be identified as Hamiltonians for harmonic oscillators with angular frequency, $\omega$, equal to one. The eigenvalues of the harmonic oscillator problem can therefore be used to obtain the eigenvalues of the $z$-component of the orbital angular momentum:

$L_z|\psi\rangle = (H_1 - H_2)|\psi\rangle = \hbar(n_1 - n_2)|\psi\rangle = m\hbar|\psi\rangle$,

where $H_i$ denotes the Hamiltonian operator of the $i^{th}$ oscillator (and $m = n_1 - n_2$ is the magnetic quantum number, not a mass). Since the $n_i$ can only take integer values in the harmonic oscillator problem, integer quantization of the Cartesian components of the angular momentum naturally follows.
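Since every product appearing in the quadratic forms above involves commuting operators ($x$ with $p_y$, $p_x$ with $y$, and so on), the identity $L_z = H_1 - H_2$ can be spot-checked with ordinary numbers. Here is a quick sketch of my own, with an arbitrary value for the length scale $a$:

```python
import numpy as np

rng = np.random.default_rng(0)
hbar, a = 1.0, 1.3                      # a is an arbitrary length scale

# Random phase-space points; all cross terms in the squares involve
# commuting pairs, so plain numbers suffice for this check.
x, y, px, py = rng.normal(size=(4, 1000))

# Schwinger-style transformed coordinates and momenta
x1 = (x + (a**2 / hbar) * py) / np.sqrt(2)
x2 = (x - (a**2 / hbar) * py) / np.sqrt(2)
p1 = (px - (hbar / a**2) * y) / np.sqrt(2)
p2 = (px + (hbar / a**2) * y) / np.sqrt(2)

# Difference of the two "oscillator Hamiltonians" vs. x p_y - y p_x
lz_osc = (a**2 / (2*hbar)) * (p1**2 - p2**2) + (hbar / (2*a**2)) * (x1**2 - x2**2)
lz = x * py - y * px
print(np.max(np.abs(lz_osc - lz)))      # zero up to floating-point roundoff
```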

How do we interpret all of this? Let’s imagine that we have $n_1$ spins pointing up and $n_2$ spins pointing down. Now consider the angular momentum raising and lowering operators. The angular momentum raising operator in this example, $L_+ = \hbar a_1^\dagger a_2$, corresponds to flipping a spin of angular momentum $\hbar/2$ from down to up, where $a_1^\dagger$ ($a_2$) is the creation (annihilation) operator for oscillator 1 (2). The change in angular momentum is therefore $+\hbar/2 -(-\hbar/2) = \hbar$. It is this constraint, that we cannot “destroy” these spins but only flip them, that results in the integer quantization of orbital angular momentum.

I find this solution to the forbidden half-integer problem much more illuminating than with the use of the Schrodinger equation and spherical harmonics. The analogy between the uncoupled oscillators and angular momentum is very rich and actually extends much further than this simple example. It has even been used in papers on supersymmetry (which, needless to say, extends far beyond my realm of knowledge).

## Phase Difference after Resonance

If you take the keys out of your pocket and swing them very slowly back and forth from your key chain, emulating a driven pendulum, you’ll notice that the keys swing back and forth in phase with your hand. Now, if you slowly start to speed up the swinging, you’ll notice that eventually you’ll hit a resonance frequency, where the keys will swing back and forth with a much greater amplitude.

If you keep slowly increasing the frequency of your swing beyond the resonance frequency, you’ll see that the keys don’t swing up as high. Also, you will notice that the keys now seem to be swaying out of phase with your hand (i.e. your hand is going in one direction while the keys are moving in the opposite direction!). This change of phase by 180 degrees between the driving force and the position of the oscillator is a ubiquitous feature of damped harmonic motion at frequencies higher than the resonance frequency. Why does this happen?

To understand this phenomenon, it helps to write down the equation for damped, driven harmonic motion. This could be describing a mass on a spring, a pendulum, a resistor-inductor-capacitor circuit, or something more exotic. Anyway, the relevant equation looks like this:

$\underbrace{\ddot{x}}_{inertial} + \overbrace{b\dot{x}}^{damping} + \underbrace{\omega_0^2 x}_{restoring} = \overbrace{F(t)}^{driving}$

Let’s describe in words what each of the terms means. The first term describes the resistance to change or inertia of the system. The second term represents the damping of the system, which is usually quite small. The third term gives us the pullback or restoring force, while the last term on the right-hand side represents the external driving force.

With this nomenclature in place, let’s move on to what actually causes the phase change. First, we have to turn this differential equation into an algebraic equation by doing a Fourier transform (or similarly assuming a sinusoidal dependence of everything). This leaves us with the following equation:

$(-\omega^2 + i\omega b + \omega_0^2 )x_0e^{i\omega t} = F_0e^{i\omega t}$

Now we can more easily visualize what is going on if we concentrate on the left-hand side of the equation. Note that this equation can also suggestively be written as:

$(e^{i\pi}\omega^2 + e^{i\pi/2}\omega b + \omega_0^2 )x_0e^{i\omega t} = F_0e^{i\omega t}$

For small driving frequencies, $b \ll \omega \ll \omega_0$, we see that the restoring term is the largest. The phase difference can then be represented graphically on an Argand diagram, where we can draw the following picture:

Restoring term dominates for low frequency oscillations

Therefore, the restoring force dominates the other two terms and the phase difference between the external force and the position of the oscillator is small (approximately zero).

At resonance, however, the driving frequency is the same as the natural frequency. This causes the restoring and inertial terms to cancel each other out perfectly, resulting in an Argand diagram like this:

Equal contribution from the restoring and inertial terms

After adding the vectors, this results in the arrow pointing upward, which is equivalent to saying that there is a 90 degree phase difference between the driving force and position of the oscillator.

You can probably see where this is going now, but let’s keep going for the sake of completeness. For the case where the driving frequency exceeds the natural (or resonant) frequency, $b \ll \omega_0 \ll \omega$, we see that the inertial term starts to dominate, resulting in a phase shift of 180 degrees. This can again be represented with an Argand diagram, as seen below:

Inertial term dominates for high frequency oscillations

This expresses the fact that the inertia can no longer “keep up” with the driving force and it therefore begins to lag behind. If the mass in a mass-spring system were to be reduced, the oscillator would be able to keep up with the driver up to a higher frequency. In summary, the phase difference can be plotted against the driving frequency to yield:
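The three regimes can also be read off directly from the response function $\hat{\chi}(\omega) = (-\omega^2 + i\omega b + \omega_0^2)^{-1}$ used earlier; here is a minimal sketch of my own (parameter values arbitrary):

```python
import numpy as np

def phase_lag(omega, omega0=1.0, b=0.1):
    """Phase by which the oscillator's position lags the drive,
    from chi(omega) = 1/(omega0^2 - omega^2 + i*omega*b)."""
    chi = 1.0 / (omega0**2 - omega**2 + 1j * omega * b)
    return -np.angle(chi)       # lag taken as positive

print(phase_lag(0.1))    # ~0    : in phase well below resonance
print(phase_lag(1.0))    # ~pi/2 : 90 degree lag at resonance
print(phase_lag(10.0))   # ~pi   : 180 degree lag well above resonance
```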

This phase change can be observed in so many contexts that it would be near impossible to list them all. In condensed matter physics, for instance, when sweeping the incident frequency of light in a reflectivity experiment of a semiconductor, a phase difference arises between the photon and the phonon above the phonon frequency. The problem that actually brought me to this analysis was the ported speaker, where above the resonant frequency of the speaker cone, the air from the port and the pressure wave generated from the speaker go 180 degrees out of phase.

## Lessons from the Coupled Oscillator

In studying solid state physics, one of the first problems encountered is that of phonons. In the usual textbooks (such as Ashcroft and Mermin or Kittel), the physics is buried underneath formalism. Here is my attempt to explain the physics, while just quoting the main mathematical results. For the simple mass-spring oscillator system pictured below, we get the following equation of motion and oscillation frequency:

Simple harmonic oscillator

$\ddot{x} = -\omega^2x$

and      $\omega^2 = \frac{k}{m}$

If we couple two harmonic oscillators, such as in the situation below, we get two normal modes that obey the equations of motion identical to the single-oscillator case.

Coupled harmonic oscillator

The equations of motion for the normal modes are:

$\ddot{\eta_1} = -\omega^2_1\eta_1$      and

$\ddot{\eta_2} = -\omega^2_2\eta_2$,

where

$\omega_1^2 = \frac{k+2\kappa}{m}$

and   $\omega_2^2 = \frac{k}{m}$.

I should also mention that $\eta_1 = x_1 - x_2$ and $\eta_2 = x_1 + x_2$. The normal modes are pictured below, consisting of a symmetric and an antisymmetric oscillation:

Symmetric normal mode

Antisymmetric normal mode

The surprising thing about the equations for the normal modes is that they look exactly like the equations for two decoupled and independent harmonic oscillators. Any motion of the oscillators can therefore be written as a linear combination of the normal modes. When looking back at such results, it seems trivial — but I’m sure to whoever first solved this problem, the result was probably unexpected and profound.
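The normal-mode frequencies and shapes quoted above can be recovered by diagonalizing the 2×2 dynamical matrix of the coupled system; here is a small sketch of my own, with arbitrary spring constants:

```python
import numpy as np

m, k, kappa = 1.0, 4.0, 1.5
# Equations of motion in matrix form, x'' = -D x, for two masses each attached
# to a wall by spring k and to each other by coupling spring kappa
D = np.array([[(k + kappa) / m, -kappa / m],
              [-kappa / m, (k + kappa) / m]])
w2, modes = np.linalg.eigh(D)    # squared mode frequencies, mode shapes
print(w2)                         # ascending: [k/m, (k + 2*kappa)/m]
print(modes.T)                    # rows ~ (1,1)/sqrt(2), (1,-1)/sqrt(2) up to sign
```

The lower-frequency mode is the symmetric one (both masses moving together, coupling spring unstretched), matching $\omega_2^2 = k/m$ above.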

Now, let us briefly discuss the quantum case. If we have a single harmonic oscillator, we get that the Hamiltonian is:

$H = \hbar\omega (a^\dagger a +1/2)$

If we have many harmonic oscillators coupled together as pictured below, one would probably guess in light of the classical case that one could obtain the normal modes similarly.

Harmonic Chain

One would probably then naively guess that the Hamiltonian could be decoupled into many seemingly independent oscillators:

$H = \sum_k\hbar\omega_k (a^\dagger_k a_k+1/2)$

This intuition is exactly correct and this is indeed the Hamiltonian describing phonons, the normal modes of a lattice. The startling conclusion in the quantum mechanical case, though, is that the equations lend themselves to a quasiparticle description — but I wish to speak about quasiparticles another day. Many ideas in quantum mechanics, such as Anderson localization, are general wave phenomena and can be seen in classical systems as well. Studying and visualizing classical waves can therefore still yield interesting insights into quantum mechanics.
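As a small classical illustration of this decoupling (a sketch of my own, not from any particular textbook), one can diagonalize the dynamical matrix of a harmonic chain with periodic boundary conditions and check that the normal-mode frequencies follow the familiar phonon dispersion $\omega(q) = 2\sqrt{K/m}\,|\sin(q/2)|$, with the lattice constant set to one:

```python
import numpy as np

# N-site harmonic chain, nearest-neighbor springs K, periodic boundaries
N, K, m = 32, 1.0, 1.0
D = np.zeros((N, N))
for i in range(N):
    D[i, i] = 2 * K / m                              # on-site restoring term
    D[i, (i + 1) % N] = D[i, (i - 1) % N] = -K / m   # neighbor coupling

w2 = np.linalg.eigvalsh(D)        # squared normal-mode frequencies

# Compare with the textbook dispersion omega(q)^2 = 4*(K/m)*sin(q/2)^2
q = 2 * np.pi * np.arange(N) / N
w2_disp = 4 * (K / m) * np.sin(q / 2) ** 2
print(np.max(np.abs(np.sort(w2) - np.sort(w2_disp))))   # ~0
```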