
Nonlinear Response and Harmonics

Because we are so often solving problems in quantum mechanics, it is sometimes easy to forget that certain effects also show up in classical physics and are not “mysterious quantum phenomena”. One of these is the problem of avoided crossings or level repulsion, which can be much more easily understood in the classical realm. I would argue that the Fano resonance also represents a case where a classical model is more helpful in grasping the main idea. Perhaps not too surprisingly, a variant of the classical harmonic oscillator problem is used to illustrate the respective concepts in both cases.

There is also another cute example that illustrates why overtones of the natural frequency appear when an oscillator is subjected to a slightly nonlinear potential. The solution to this problem shows why harmonic distortion often affects speakers, which sometimes emit frequencies not present in the original electrical signal. It also shows why second harmonic generation can result when intense light is incident on a material.

First, imagine a perfectly harmonic oscillator with a potential of the form V(x) = \frac{1}{2} kx^2. We know that such an oscillator, if displaced from its equilibrium position, will oscillate at its natural frequency \omega_0 = \sqrt{k/m}, with the position varying as x(t) = A \textrm{cos}(\omega_0 t + \phi). The potential and the position of the oscillator as a function of time are shown below:

harmpotentialrepsonse

(Left) Harmonic potential as a function of position. (Right) Variation of the position of the oscillator with time

Now imagine that in addition to the harmonic part of the potential, we also have a small additional component such that V(x) = \frac{1}{2} kx^2 + \frac{1}{3}\epsilon x^3, so that the potential now looks like so:

nonlinearharm

The equation of motion is now nonlinear:

\ddot{x} = -c_1x - c_2x^2

where c_1 = k/m = \omega_0^2 and c_2 = \epsilon/m. It is easy to see that if the amplitude of oscillations is small enough, there will be very little difference between this case and the case of the perfectly harmonic potential.

However, if the amplitude of the oscillations gets a little larger, there will clearly be deviations from the pure sinusoid. So what does the position of the oscillator look like as a function of time? Perhaps not too surprisingly, considering the title, not only are there oscillations at \omega_0, but a harmonic component at 2\omega_0 is also introduced.

While the differential equation can’t be solved exactly without resorting to numerical methods, the emergence of the harmonic component can be seen within the framework of perturbation theory. In this context, all we need to do is plug the solution to the simple harmonic oscillator, x(t) = A\textrm{cos}(\omega_0t +\phi), into the nonlinear equation above. If we do this, the last term becomes:

-c_2A^2\textrm{cos}^2(\omega_0t+\phi) = -c_2 \frac{A^2}{2}(1 + \textrm{cos}(2\omega_0t+2\phi)),

showing that we get oscillatory components at twice the natural frequency. Although this explanation is a little crude, one can already start to see why nonlinearity often leads to higher-frequency harmonics.
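To see this without perturbation theory, here is a minimal numerical sketch (my own, not from the original post) that integrates \ddot{x} = -c_1x - c_2x^2 and inspects the Fourier spectrum of x(t); the parameter values are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the post): omega_0 = sqrt(c1) = 1, weak anharmonicity
c1, c2, A = 1.0, 0.1, 1.0

def rhs(t, y):
    x, v = y
    return [v, -c1 * x - c2 * x**2]   # the nonlinear equation of motion above

t = np.linspace(0, 200, 20000)
sol = solve_ivp(rhs, (t[0], t[-1]), [A, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)

# Fourier transform of x(t): expect a dominant peak near omega_0, plus a smaller
# peak near 2*omega_0 and a small DC offset, both generated by the x^2 term.
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0]) * 2 * np.pi   # angular frequencies
spectrum = np.abs(np.fft.rfft(sol.y[0]))
for w_target in (1.0, 2.0):
    idx = np.argmin(np.abs(freqs - w_target))
    print(f"relative spectral weight near omega = {w_target}: {spectrum[idx] / spectrum.max():.3f}")
```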

With respect to optical second harmonic generation, there is also one important ingredient that should not be overlooked in this simplified model. This is the fact that frequency doubling is possible only when there is an x^3 component in the potential. This means that the potential needs to be inversion asymmetric. Indeed, second harmonic generation is only possible in inversion asymmetric materials (which is why ferroelectric materials are often used to produce second harmonic optical signals).

Because of its conceptual simplicity, it is often helpful to think about physical problems in terms of the classical harmonic oscillator. It would be interesting to count how many Nobel Prizes have been given out for problems that have been reduced to some variant of the harmonic oscillator!

LST Relation – The Physical Picture

In 1941, Lyddane, Sachs and Teller wrote a paper entitled “On the Polar Vibrations of Alkali Halides”, where they derived a result now known as the Lyddane-Sachs-Teller (LST) relation. It has wide applicability for polar insulators. I reproduce the relation below:

\frac{\omega_{LO}^2}{\omega_{TO}^2} = \frac{\epsilon(0)}{\epsilon(\infty)}

In the equation above, \omega_{LO} and \omega_{TO} refer to the frequencies of the longitudinal and transverse optical phonons respectively. \epsilon(0) and \epsilon(\infty) refer to the static and high frequency (above the phonon frequencies, but below any electronic energy scale) dielectric constants. All these quantities are understood to be the values in the long-wavelength limit (i.e. q \approx 0).

The beautiful thing about the LST result is that it is independent of any microscopic description, which is quite unusual in solid-state physics. Therefore, the result can be derived from classical electrodynamics, without resorting to any quantum mechanics. It is an interesting question as to whether or not quantum mechanics plays a role in the long-wavelength optical response in general.

Regardless, it turns out that all quantities in the LST relation are experimentally accessible! I find this relation quite remarkable and deep. Not only that, the agreement with experiment in many polar semiconductors is excellent. Take a look at the table below to get an idea of how well this relation holds for a few materials (reproduced from Mark Fox’s textbook Optical Properties of Solids):

lstrelation

I have found that textbook derivations don’t give a good intuition of why this relation holds, so here is my attempt to rectify the situation. First, let me state an important assumption that goes into the LST relation:

The phonons are assumed to be in the harmonic limit (i.e. no phonon anharmonicity) and as a result, the dielectric constant has the following form:

\epsilon(\omega) = \epsilon(\infty) + \frac{C}{\omega_{TO}^2-\omega^2}

where C is a constant. This form of the dielectric constant can be arrived at using either classical electrodynamics or quantum mechanics (see e.g. Ashcroft and Mermin, Kittel or Ziman).

Now, with this result under our belts, it turns out that it is quite simple to understand why the LST relation holds. In a simple polar semiconductor, we have two atoms per unit cell that are oppositely charged.

Therefore, for the longitudinal optical phonon we have an extra polarization effect due to the long-range nature of the Coulomb interaction. This extra polarization results in an extra restoring force (in addition to the springy restoring force between the ions), yielding a higher longitudinal phonon frequency compared to the transverse optical phonon. I have discussed this a little more extensively in a previous post. This extra restoring force (which is only present for the longitudinal oscillation) is pictured below:

restoringforce

The longitudinal optical phonon is at a higher energy because of the extra Coulombic polarization effect

More precisely, we can write the following when including this extra restoring force:

\omega_{LO}^2 = \omega_{TO}^2 + \frac{C}{\epsilon(\infty)}

There is an \epsilon(\infty) in the formula above because this polarization will necessarily be screened by higher energy (electronic) processes. Dividing both sides by \omega_{TO}^2, we can write the above equation suggestively as:

\frac{\omega_{LO}^2}{\omega_{TO}^2} = \frac{\epsilon(\infty)+C/\omega_{TO}^2}{\epsilon(\infty)}

Since the dielectric constant from earlier gives \epsilon(0) = \epsilon(\infty) + C/\omega_{TO}^2, the right-hand side is just \epsilon(0)/\epsilon(\infty), and this is precisely the LST relation! In effect, the same ionic polarization that provides the extra restoring force on the LO phonon also provides the extra screening in the static limit, yielding, in my mind, a delightful little result.
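As a quick sanity check, here is a small numerical sketch (with made-up illustrative numbers, not the values from the table above) showing that the harmonic-phonon dielectric function automatically satisfies the LST relation once \omega_{LO}^2 = \omega_{TO}^2 + C/\epsilon(\infty), or equivalently \epsilon(\omega_{LO}) = 0:

```python
import numpy as np

# Hypothetical numbers, for illustration only:
eps_inf = 10.0      # high-frequency dielectric constant
omega_TO = 1.0      # transverse optical phonon frequency (arbitrary units)
C = 5.0             # oscillator strength of the TO phonon

def eps(omega):
    """Harmonic-phonon dielectric function used in the post."""
    return eps_inf + C / (omega_TO**2 - omega**2)

eps_0 = eps(0.0)                                  # static dielectric constant
omega_LO = np.sqrt(omega_TO**2 + C / eps_inf)     # TO frequency plus the extra Coulomb restoring force

print(np.isclose(eps(omega_LO), 0.0))                          # the LO mode sits where eps = 0
print(np.isclose(omega_LO**2 / omega_TO**2, eps_0 / eps_inf))  # LST relation holds
```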

Using the LST relation, we can deduce a property of ferroelectric materials. Namely, we know that at the transition temperature between the normal state and a ferroelectric ground state, the static dielectric constant, \epsilon(0), diverges. Therefore, we can surmise from the LST relation that a zone center transverse optical phonon must go to zero energy (soften) at the transition temperature (see here for PbTiO3). This is a totally non-trivial consequence of the LST relation, demonstrating again its far-reaching utility.

Did I mention that I think this result is pretty excellent?

I’d like to acknowledge Zhanybek Alpichshev for enlightening some aspects regarding this topic.

Lunar Eclipse and the 22 Degree Halo

The beautiful thing about atmospheric optics is that (almost) everyone can look up at the sky and see stunning optical phenomena from the sun, moon or some other celestial object. In this post I’ll focus on two particularly striking phenomena where the physical essence can be captured with relatively simple explanations.

The 22 degree halo is a ring around the sun or moon, which is often observed on cold days. Here are a couple images of the 22 degree halo around the sun and moon respectively:

22_degree_half_around_sun

22 degree halo around the sun

22_degree_halo_around_the_moon

22 degree halo around the moon

Note that the 22 degree halo is distinct from a corona, which occurs for a different reason: while coronae arise due to the presence of water droplets, the 22 degree halo arises specifically due to hexagonal ice crystals in the earth’s atmosphere. So why 22 degrees? Well, it turns out that one can answer the question using rather simple undergraduate-level physics. One of the most famous questions in undergraduate optics is that of light refraction through a prism, illustrated below:

prism

Fig. 1: The Snell’s Law Prism Problem

If there are hexagonal ice crystals in the atmosphere, the problem is exactly the same, as one can see below. This is because a hexagon is just an equilateral triangle with its corners chopped off. So as long as the light enters and exits on two sides of the hexagon that are spaced one side apart, the analysis is the same as for the triangle.

triangle_chop

Equilateral triangle with ends chopped off, making a hexagon

It turns out that \theta_4 in Fig. 1 can be solved for as a function of \theta_1 using Snell’s law and some simple trigonometry to yield (under the assumption that n_1 =1):

\theta_4 = \textrm{sin}^{-1}(n_2 \times \textrm{sin}(60-\textrm{sin}^{-1}(\textrm{sin}(\theta_1)/n_2)))

It is then pretty straightforward to obtain \delta, the difference in angle between the incident and refracted beam as a function of \theta_1. I have plotted this below for the index of refraction of ice crystals for three different colors of light, red, green and blue (n_2 = 1.306, 1.311 and 1.317 respectively):

22deghalo

The important thing to note in the plot above is that there is a minimum angle below which there is no refracted beam, and this angle is precisely 21.54, 21.92 and 22.37 degrees for red, green and blue light respectively. Because there is no refracted beam below 22 degrees, this region appears darker, and then there is a sudden appearance of the refracted beam at the angles listed above. This is what gives rise to the 22 degree halo and also to the reddish hue on the inside rim of the halo.
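For anyone who wants to reproduce these numbers, here is a short sketch (my own code, using the same Snell’s-law construction as Fig. 1) that computes the deviation angle and its minimum for the three indices of refraction quoted above:

```python
import numpy as np

def deviation(theta1_deg, n2, apex_deg=60.0):
    """Deviation angle (degrees) of a ray through a prism of the given apex angle."""
    theta1 = np.radians(theta1_deg)
    theta2 = np.arcsin(np.sin(theta1) / n2)        # refraction at the entrance face
    theta3 = np.radians(apex_deg) - theta2         # internal angle at the exit face
    theta4 = np.arcsin(n2 * np.sin(theta3))        # refraction at the exit face
    return np.degrees(theta1 + theta4) - apex_deg  # total deviation from the original direction

# Incidence angles below ~14 degrees give total internal reflection at the exit face,
# so scan a safe range and look for the minimum deviation.
theta1 = np.linspace(25, 89, 2000)
for name, n2 in [("red", 1.306), ("green", 1.311), ("blue", 1.317)]:
    print(name, round(deviation(theta1, n2).min(), 2))   # ~21.5, ~21.9, ~22.4 degrees
```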

Another rather spectacular celestial occurrence is the lunar eclipse, where the earth completely obscures the moon from direct sunlight. This is the geometry for the lunar eclipse:

lunar_eclipse

Geometry of the lunar eclipse

The question I wanted to address is the reddish hue of the moon, despite it lying in the earth’s shadow. It would naively seem like the moon should not be observable at all. However, there is a similar effect occurring here as with the halo. In this case, the earth’s atmosphere is the refracting medium. So just as light incident on the prism was going upward and then exited going downward, the sun’s rays similarly enter the atmosphere on a trajectory that would miss the moon, but then are bent towards the moon after interacting with the earth’s atmosphere.

But why red? Well, this has the same origin as the reddish hue of the sunset. Because light scatters from atmospheric particles as 1/\lambda^4, blue light gets scattered away much more easily than red light. Hence, by the time the light reaches the moon, it is primarily red.
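As a quick back-of-the-envelope check (using typical red and blue wavelengths as illustrative values, not numbers from the post):

```python
# Rayleigh scattering strength goes as 1/lambda^4, so compare blue (~450 nm) to red (~650 nm) light.
lambda_blue, lambda_red = 450e-9, 650e-9
ratio = (lambda_red / lambda_blue) ** 4
print(f"blue light is scattered ~{ratio:.1f}x more strongly than red")  # ~4.4x
```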

It is interesting to imagine what the earth looks like from the moon during a lunar eclipse — it likely looks completely dark apart from a spectacular red halo around the earth. Anyway, one should realize that Snell’s law was first formulated in 984 by Ibn Sahl in Baghdad, so it was possible to come to these conclusions more than a thousand years ago. Nothing new here!

Friedel Sum Rule and Phase Shifts

When I took undergraduate quantum mechanics, one of the most painful subjects to study was scattering theory, due to the usage of special functions, phase shifts and partial waves. To be honest, the sight of those words still makes me shudder a little.

If you have felt like that at some point, I hope that this post will help alleviate some fear of phase shifts. Phase shifts can be seen in many classical contexts, and I think that it is best to start thinking about them in that setting. Consider the following scenarios: a wave pulse on a rope is incident on (1) a fixed boundary and (2) a movable boundary. See below for a sketch, which was taken from here.

animation of wave pulse reflecting from hard boundary

Fixed Boundary Reflection

animation of wave pulse reflecting from soft boundary

Movable Boundary Reflection

Notice that in the fixed boundary case, one gets a phase shift of \pi, while in the movable boundary case, there is no phase shift. The reason that there is a phase shift of \pi in the former case is that the wave amplitude must be zero at the boundary. Therefore, when the wave first comes in and reflects, the only way to enforce the zero is to have the wave reflect with a \pi phase shift and interfere destructively with the incident pulse, cancelling it out perfectly.

The important thing to note is that for elastic scattering, the case we will be considering in this post, the amplitude of the reflected (or scattered) pulse is the same as the incident pulse. All that has changed is the phase.

Let’s now switch to quantum mechanics. If we consider the same setup, where an incident wave hits an infinitely high wall at x=0, we basically get the same result as in the classical case.

InfiniteWall

Elastic scattering from an infinite barrier

If the incident and scattered wavefunctions are:

\psi_i = Ae^{ikx}      and      \psi_s=Be^{-ikx}

then B = -A = e^{i\pi}A because, as for the fixed boundary case above, the incident and scattered waves destructively interfere (i.e. \psi_i(0) + \psi_s(0) =0). The full wavefunction is then:

\psi(x) = A(e^{ikx}-e^{-ikx}) \sim \textrm{sin}(kx)

The last step is a little misleading since the wavefunction is not normalizable, but let’s just pretend we have a second infinite barrier at large but not quite infinite negative x. Now consider a similar-looking, but pretty arbitrary potential:

scatteringPotential

Elastic scattering from an arbitrary potential

What happens in this case? Well, again, the scattering is elastic, so the incident and reflected amplitudes must be the same away from the region of the potential. All that can change, therefore, is the phase of the reflected (scattered) wavefunction. We can therefore write, similar to the case above:

\psi(x) = A(e^{ikx}-e^{i(2\delta-kx)}) \sim \textrm{sin}(kx-\delta)

Notice that the sine term has now acquired a phase. What does this mean? It means that the energy of the wavefunction has changed, as would be expected for a different potential. If we had used box normalization for the infinite barrier case, kL=n\pi, then the energy eigenvalues would have been:

E_n = \hbar^2n^2\pi^2/2mL^2

Now, with the newly introduced potential, however, our box normalization leads to the condition, kL+\delta(k)=n\pi so that the new energies are:

E_n = \hbar^2(n\pi-\delta(k))^2/2mL^2

The energy eigenvalues move around, but since \delta(k) can be a pretty complicated function of k, we don’t actually know how they move. What’s clear, though, is that the number of energy eigenvalues is going to be the same; we didn’t create or destroy any eigenvalues or energy eigenstates.

Let’s now move on to some solid state physics. In a metal, one usually fills up N states in accordance with the Pauli principle up to k_F. If we introduce an impurity with a different number of valence electrons into the metal, we have effectively created a potential off which the electrons of the Fermi gas/liquid can scatter. Just like in the cases above, this potential will cause a phase shift in the electron wavefunctions present in the metal, changing their energies. The amplitudes of the incoming and outgoing electrons will again be the same far from the scattering impurity.

This time, though, there is something to worry about — the phase shift and the corresponding energy shift can potentially move states from above the Fermi energy to below the Fermi energy, or vice versa. Suppose I introduced an impurity with an extra valence electron compared to that of the host metal. For instance, suppose I introduce a Zn impurity into Cu. Since Zn has an extra electron, the Fermi sea has to accommodate an extra energy state. I can illustrate the scenario away from, but in the neighborhood of, the Zn impurity schematically like so:

statesChanging

E=0 represents the Fermi Energy. An extra state below the Fermi energy because of the addition of a Zn impurity

It seems quite clear from the description above, that the phase shift must be related somehow to the valence difference between the impurity and the host metal. Without the impurity, we fill up the available states up to the Fermi wavevector, k_F=N_{max}\pi/L, where N_{max} is the highest occupied state. In the presence of the impurity, we now have k_F=(N'_{max}\pi-\delta(k_F))/L. Because the Fermi wavevector does not change (the density of the metal does not change), we have that:

N'_{max} = N_{max} + \delta(k_F)/\pi

Therefore, the number of extra states needed now to fill up the states to the Fermi momentum is:

N'_{max}-N_{max}=Z = \delta(k_F)/\pi,

where Z is the valence difference between the impurity and the host metal. Notice that in this case, each extra state that moves below the Fermi level gives rise to a phase shift of \pi. This is actually a simplified version of the Friedel sum rule. It means that the electrons at the Fermi energy have changed the structure of their wavefunctions, by acquiring a phase shift, to exactly screen out the impurity at large distances.

There is just one thing we are missing. I didn’t take into account the degeneracy of the energy levels of the Fermi sea electrons. If I do this, as Friedel did assuming a spherically symmetric potential in three dimensions, we get a degeneracy of 2(2l+1) for each l, where the factor of 2 comes from spin and (2l+1) comes from the degeneracy of the m levels in a spherically symmetric potential. We can then write the Friedel sum rule more precisely, which states:

Z = \frac{2}{\pi} \sum_l (2l+1)\delta_l(k_F).

We just had to take into consideration the fact that there is a high degeneracy of energy states in this system of spherical symmetry. What this means, informally, is that each energy level that moves below the Fermi energy has the \pm\pi phase shift distributed across each relevant angular momentum channel. They all get a little slice of some phase shift.

An example: If I put Ni (which is primarily a d-wave, l=2, scatterer in this context) in an Al host, we get Z=-1. This is because Ni has valence 3d^94s^1. Now, we can obtain from the Friedel sum rule that the phase shift will be \sim -\pi/10. If we move on to Co, where Z=-2, we get \sim -2\pi/10, and so forth for Fe, Mn, Cr, etc. Only after all ten d-states shift above the Fermi energy do we acquire a phase shift of -\pi.
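Here is a little sketch of these numbers (my own code, using the valence differences quoted above and assuming a purely l=2 scattering channel):

```python
import numpy as np

# Friedel sum rule for a pure l = 2 (d-wave) scatterer: Z = (2/pi) * (2l+1) * delta_2
impurities = {"Ni": -1, "Co": -2, "Fe": -3, "Mn": -4, "Cr": -5}   # valence differences in Al
l = 2

for name, Z in impurities.items():
    delta = Z * np.pi / (2 * (2 * l + 1))    # phase shift in the d channel
    strength = np.sin(delta) ** 2            # scattering cross section ~ sin^2(delta)
    print(f"{name}: delta_2 = {delta / np.pi:+.2f} pi, sin^2(delta) = {strength:.2f}")
# Cr (delta ~ -pi/2) gives the largest sin^2(delta), i.e. the strongest scattering.
```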

Note that when the phase shift is \sim\pm\pi/2, the impurity will scatter strongly, since the scattering cross section \sigma \propto |\textrm{sin}(\delta_l)|^2. This is referred to as resonance scattering from an impurity, and it again bears a striking resemblance to the classical driven harmonic oscillator. In the case above, this would correspond to Cr impurities in the Al host, which have a phase shift of \sim -5\pi/10. Indeed, the resistivity of Al with Cr impurities is the highest among the first-row transition metal impurities, as shown below:

aluminum impurties

Hence, just by knowing the valence difference, we can get out a non-trivial phase shift! This is a pretty remarkable result, because we don’t have to use the (inexact and perturbative) Born approximation. And it comes from (I hope!) some pretty intuitive physics.

Broken Symmetry and Degeneracy

Oftentimes, when I understand a basic concept I had struggled to understand for a long time, I wonder, “Why in the world couldn’t someone have just said that?!” A while later, I will return to a textbook or paper that says precisely what I wanted to hear. I will then realize that the concept just wouldn’t “stick” in my head and required some time of personal and thoughtful deliberation. It reminds me of a humorous quote by E. Rutherford:

All of physics is either impossible or trivial.  It is impossible until you understand it and then it becomes trivial.

I definitely experienced this when first studying the relationship between broken symmetry and degeneracy. As I usually do on this blog, I hope to illustrate the central points within a pretty rigorous, but mostly picture-based framework.

For the rest of this post, I’m going to follow P. W. Anderson’s article More is Different, where I think these ideas are best addressed without any equations. However, I’ll be adding a few details which I wished I had understood upon my first reading.

If you Google “ammonia molecule” and look at the images, you’ll get something that looks like this:

ammonia

With the constraint that the nitrogen atom must sit on the line through the center of the triangle formed by the hydrogen atoms, we can approximate the potential as one-dimensional. The potential along the line going through the center of the hydrogen triangle will look, in some crude approximation, something like this:

AmmoniaPotential

Notice that the molecule has inversion (or parity) symmetry about the triangular hydrogen atom network. For non-degenerate wavefunctions, the quantum stationary states must also be parity eigenstates. We expect, therefore, that the stationary states will look something like this for the ground state and first excited state respectively:

SymmetricDoubleWell

Ground State

AntiSymmetricDoubleWell

First Excited State

The tetrahedral (pyramid-shaped) ammonia molecule in the image above is clearly not inversion symmetric, though. What does this mean? Well, it implies that the ammonia molecule in the image above cannot be an energy eigenstate. What has to happen, therefore, is that the ammonia molecule has to oscillate between the two configurations pictured below:

ammoniaInversion

The oscillation between the two states can be thought of as the nitrogen atom tunneling from one valley to the other in the potential energy diagram above. The oscillation occurs about 24 billion times per second or with a frequency of 24 GHz.

To those familiar with quantum mechanics, this is a classic two-state problem and there’s nothing particularly new here. Indeed, the tetrahedral structures can be written as linear combinations of the symmetric and anti-symmetric states as so:

| 1 \rangle = \frac{1}{\sqrt{2}} (e^{i \omega_S t}|S\rangle +e^{i \omega_A t}|A\rangle)

| 2 \rangle = \frac{1}{\sqrt{2}} (e^{i \omega_S t}|S\rangle -e^{i \omega_A t}|A\rangle)

One can see that an oscillation frequency of \omega_S-\omega_A will result from the interference between the symmetric and anti-symmetric states.
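Here is a minimal sketch of that interference (my own illustration, with an arbitrary splitting): track the overlap of the time-evolved state with the t=0 configuration and watch it oscillate at |\omega_S-\omega_A|.

```python
import numpy as np

omega_S, omega_A = 0.0, 2 * np.pi          # splitting of 2*pi in arbitrary units (period = 1)
t = np.linspace(0, 2, 9)

# overlap of |1(t)> = (e^{i w_S t}|S> + e^{i w_A t}|A>)/sqrt(2) with |1(0)> = (|S> + |A>)/sqrt(2)
overlap = 0.5 * (np.exp(1j * omega_S * t) + np.exp(1j * omega_A * t))
prob_config_1 = np.abs(overlap) ** 2       # oscillates at frequency |omega_S - omega_A|

for ti, p in zip(t, prob_config_1):
    print(f"t = {ti:.2f}: P(still in configuration 1) = {p:.2f}")
```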

The interest in this problem, though, comes from examining a certain limit. First, consider what happens when one replaces the nitrogen atom with a phosphorus atom (PH3): the oscillation frequency decreases to about 0.14 MHz, about 200,000 times slower than NH3. If one were to do the same replacement with an arsenic atom instead (AsH3), the oscillation frequency slows down to 160 microHz, which is equivalent to about an oscillation every two years!

This slowing down can be simply modeled in the picture above by imagining the raising of the barrier height between the two valleys like so:

HighBarrierPotential

In the case of an amino acid or a sugar, which are both known to be chiral, the period of oscillation is thought to be greater than the age of the universe. Basically, the molecules never invert!

So what is happening here? Don’t worry, we aren’t violating any laws of quantum mechanics.

As the barrier height reaches infinity, the states in the well become degenerate. This degeneracy is key, because for degenerate states, the stationary states no longer have to be inversion-symmetric. Graphically, we can illustrate this as so:

SymmetricDegenerate

Symmetric state, E=E_0

Anti-symmetricDegenerate

Anti-symmetric state, E=E_0

We can now write the two pyramidal configurations in terms of the symmetric and anti-symmetric states:

| 1 \rangle = e^{i\omega t} \frac{1}{\sqrt{2}} (|S\rangle + |A\rangle)

| 2 \rangle = e^{i\omega t} \frac{1}{\sqrt{2}} (|S\rangle - |A\rangle)

These are now bona-fide stationary states. There is therefore a deep connection between degeneracy and the broken symmetry of a ground state, as this example so elegantly demonstrates.

When there is a degeneracy, the ground state no longer has to obey the symmetry of the Hamiltonian.

Technically, the barrier height never reaches infinity and there is never true degeneracy unless the number of particles in the problem (or the mass of the nitrogen atom) approaches infinity, but let’s leave that story for another day.

Visualization and Analogies

Like many, my favorite subject in high school mathematics was geometry. This was probably the case because it was one of the few subjects where I was able to directly visualize everything that was going on.

I find that I am prone to thinking in pictures or “visual analogies”, because this enables me to understand and remember concepts better. Solutions to certain problems may then become obvious. I’ve illustrated this kind of thinking on a couple occasions on this blog when addressing plasmons, LO-TO splitting and the long-range Coulomb interaction and also when speaking about the “physicist’s proof”.

Let me give another example, that of measurement probability. Suppose I have a spin-1/2 particle in the following initial state:

|\psi\rangle{}=\sqrt{\frac{1}{3}}|+\rangle{} + \sqrt{\frac{2}{3}}|-\rangle{}

In this case, when measuring S_z, we find that the probability of finding the particle in the spin-up state will be P(S_z = +) = 1/3.

Let us now consider a slightly more thought-provoking problem. Imagine that we have a spin-1 particle in the following state:

|\psi\rangle{}=\sqrt{\frac{1}{3}}|1\rangle{} + \sqrt{\frac{1}{2}}|0\rangle{} + \sqrt{\frac{1}{6}}|-1\rangle{}

Suppose now I measure S_z^2 and obtain +1. This measurement eliminates the possibility of measuring the |0\rangle{} state henceforth. The question is : what is the probability of now measuring -1 if I measure S_z, i.e. P(S_z = -1 | S_z^2 = 1)? (Have a go at solving this problem before reading my solution below.)

My favorite solution to this problem involves a visual interpretation (obviously!). Imagine axes labelled by the S_z kets, with the initial state represented by a vector in this space, like so:

BeforeMeasurement

Now, the key involves thinking of the measurement of S_z^2 as a projection onto the (1,-1) plane as so:

AfterMeasurement

After this projection, though, the wavefunction is unnormalized. Therefore, one needs to normalize (or re-scale) the wavefunction so that the probabilities still add up to one. This can be done quite simply and the new wavefunction is therefore:

|\psi_{new}\rangle{}=\sqrt{\frac{2}{3}}|1\rangle{} + \sqrt{\frac{1}{3}}|-1\rangle{}

Hence the probability of measuring S_z = -1 is now increased to 1/3, whereas it was only 1/6 before measuring S_z^2. It is important to note that the ratio of the probabilities, P(S_z=+1)/P(S_z=-1), and the relative phase e^{i\phi} between the |+1\rangle{} and |-1\rangle{} kets do not change after the projection.
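The same projection argument can be written in a few lines (a sketch of the procedure above, with the S_z basis ordered as |+1\rangle, |0\rangle, |-1\rangle):

```python
import numpy as np

psi = np.array([np.sqrt(1/3), np.sqrt(1/2), np.sqrt(1/6)])   # initial spin-1 state

# Measuring S_z^2 = 1 projects onto the subspace spanned by |+1> and |-1>.
P = np.diag([1.0, 0.0, 1.0])
psi_proj = P @ psi
psi_new = psi_proj / np.linalg.norm(psi_proj)   # re-normalize after the projection

print(psi_new**2)       # probabilities [2/3, 0, 1/3]
print(psi_new[2]**2)    # P(S_z = -1 | S_z^2 = 1) = 1/3
```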

I find this solution to the problem particularly illuminating because it permits a visual geometric interpretation and is actually helpful in solving the problem.

Please let me know if you find this kind of visualization as helpful as I do, because I hope to write posts in the future about the Anderson pseudo-spin representation of BCS theory and about the water analogy in electronic circuits.

Sounding Out Krakatoa

I recently watched an interesting documentary on Krakatoa, which is what inspired this post.

The 1883 eruption of Krakatoa, a volcanic Indonesian island, was one of the largest in recorded history, killing between 30,000 and 100,000 people. Wikipedia gives a good overview of its remarkable destructive power. The sound that emanated from the eruption was perhaps the loudest in recorded history — reports suggest that sailors ruptured their eardrums, and subsequently went deaf, up to 80 miles away from the island. The eruption was heard across huge distances, from Sri Lanka to Australia (see pg 80-87). People on Rodrigues Island, east of Madagascar, were reported to have heard the eruption from across the Indian Ocean. Rodrigues Island is about 5,700 km or 3,800 miles from Krakatoa. Here is a Google map showing their separation (click to enlarge):

krakatoa_map

Furthermore, acoustic signals inaudible to the human ear were said to have circled the earth up to seven times, and were detected as infrasound.

In this post, I intend to make a couple calculations to discuss the following:

  1. The approximate sound level at Rodrigues Island
  2. The maximum distance at which the volcano was probably heard
  3. The possibility of acoustic circumnavigation of the world

There are three facts that are needed to discuss the above points:

  1. Reports suggest that the sound level of the eruption was approximately a whopping 175 dB at 100 miles from the volcano.
  2. Sound intensity falls as 1/r^2.
  3. Acoustic damping in air is generally lower for lower frequencies. (A great little applet where one can calculate the sound absorption coefficient of air can be found here.)

In addition, we can make some decent estimates by using the sound level formula:

SL\,\textrm{(dB)} = 10\,\textrm{log}_{10}(I/I_0) - \alpha r,

where I_0 = 10^{-12} W/m^2 is the threshold of human hearing, \alpha is the sound absorption coefficient in dB/m and r is the distance in m.
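Here is a sketch of how I read this estimate (not the original script behind the plot below): anchor the level at the reported 175 dB at 100 miles, apply the 1/r^2 spreading as -20\textrm{log}_{10}(r/r_{ref}) in dB, and subtract a linear absorption term. The absorption coefficient below is an illustrative guess rather than a tabulated value.

```python
import numpy as np

MILE = 1609.34                      # meters per mile
r_ref, SL_ref = 100 * MILE, 175.0   # reported sound level 100 miles from the volcano

def sound_level(r, alpha):
    """Sound level in dB at distance r (m), for an absorption coefficient alpha (dB/m)."""
    return SL_ref - 20 * np.log10(r / r_ref) - alpha * (r - r_ref)

r_rodrigues = 5.7e6                 # distance to Rodrigues Island quoted above
alpha_guess = 1.3e-5                # illustrative low-frequency absorption (dB/m)

print(f"no damping:   {sound_level(r_rodrigues, 0.0):.0f} dB")          # ~144 dB (clearly too high)
print(f"with damping: {sound_level(r_rodrigues, alpha_guess):.0f} dB")  # ~70 dB, a noisy restaurant
```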

Here is a plot for the sound level in dB as a function of distance from the eruption site with Rodrigues Island marked by the dots (click to enlarge):

KrakatoaLogLog

There are immediately a couple things to note:

  1. I have plotted three curves: the blue curve is a calculation that does not consider any damping from air at all, the yellow curve considers a low but audible frequency sound that includes damping, while the green curve considers an infrasonic sound wave that also includes damping. (The damping coefficient was obtained from this link assuming a temperature of 20C, pressure of 1atm and humidity of 75%).
  2. Close to the volcano, this calculation gives us an unrealistically large value for the sound level. It turns out that sound cannot exceed ~194dB because this is the sound level at which rarefaction of air corresponds to a vacuum. Values greater than this correspond to a shock wave.

Keeping these things in mind, the yellow curve is probably the best estimate of the sound level on Rodrigues Island, since it includes damping for an audible signal (humans can’t hear below about 20 Hz). Therefore, we can estimate that the eruption was heard at about 70 dB on Rodrigues Island! This is approximately the sound level of a noisy restaurant.

If we follow the yellow line to about the 40 dB mark, which is an approximate value where someone may still notice the sound, this would be at a distance of about 4,800 miles! This is approximately the distance from Cape Town, South Africa to Baghdad, Iraq.

The last point to address is the seven-time infrasonic global circumnavigation. It turns out that if one follows the green line on the plot out to where it reaches the 0 dB level, this occurs at a distance of about 11 million meters. The earth’s circumference is approximately 40 million meters, however, and if we were to circumnavigate the world seven times, the required distance of travel would be 280 million meters. What went wrong in the calculation?

There is one major factor to consider. Very low frequency sound propagates through air with very little damping. For sub-Hz infrasonic sound, values for the absorption coefficient don’t seem to be very easy to find! (If you know of a database for these, please share and I’ll update this post.) Let us then consider the case of no damping (the blue curve). The blue curve actually crosses the 0 dB mark at a distance of approximately 85 trillion meters. This far overshoots the mark (corresponding to circumnavigating the earth 2.125 million times!). Even though this is a ridiculous estimate, at least it beats the 280 million meter mark, which suggests that with the right absorption coefficient, we may be in the right ballpark. A quick calculation shows that for realistic values of the absorption coefficient (about half the value of the 5 Hz sound absorption coefficient), we would be very close to the 280 million meter mark (in fact, I get about 225 million meters for this absorption coefficient). This tells us that it is indeed possible for low frequency sound to circumnavigate the planet in this way!
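To make the ballpark explicit, here is a small sketch using the same assumed model as above; the nonzero absorption coefficient is an illustrative guess rather than a measured sub-Hz value.

```python
import numpy as np
from scipy.optimize import brentq

MILE = 1609.34
r_ref, SL_ref = 100 * MILE, 175.0

def sound_level(r, alpha):
    return SL_ref - 20 * np.log10(r / r_ref) - alpha * (r - r_ref)

# Find the distance at which the sound level drops to 0 dB for a given absorption coefficient.
for label, alpha in [("no damping", 0.0), ("tiny infrasonic damping (guess)", 5e-7)]:
    r_zero = brentq(lambda r: sound_level(r, alpha), r_ref, 1e15)
    print(f"{label}: 0 dB reached at ~{r_zero:.1e} m")

# With alpha = 0 this lands near 9e13 m (the post quotes ~85 trillion m); the small
# nonzero alpha brings it down to ~2e8 m, comparable to the ~2.8e8 m needed for
# seven trips around the earth.
```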

Interestingly, we can learn quite a bit concerning the sound propagation of the Krakatoa eruption using relatively simple physics.

Note: Throughout, we have neglected one very important effect — that of reflection. Anyone who has been inside an anechoic chamber will be acutely aware of the effects of sound reflection. (In an anechoic chamber, one can actually stand at different spots and hear the interference pattern when playing a sine wave from a pair of speakers). Even with this oversight, it seems like we have been able to capture the essential points, though reflection probably had a non-trivial effect on the acoustic propagation as well.