# Tag Archives: History

## Book Review – The Gene

Following the March Meeting, I took a vacation for a couple weeks, returning home to Bangkok, Thailand. During my holiday, I was able to get a hold of and read Siddhartha Mukherjee’s new book entitled The Gene: An Intimate History.

I have to preface any commentary by saying that prior to reading the book, my knowledge of biology embarrassingly languished at the middle-school level. With that confession aside, The Gene was one of the best (and for me, most enlightening) popular science books I have ever read. This is aided considerably by Mukherjee’s fluid and beautiful writing style, from which scientists in all fields can learn a few lessons about scientific communication. The Gene is also touched with a humanity not usually associated with the popular science genre, which is often rather dry in recounting scientific and intellectual endeavors. This humanity is the book’s most powerful feature.

Since there are many glowing reviews of the book published elsewhere, I will just list here a few nuggets I took away from The Gene, which hopefully will serve to entice rather than spoil the book for you:

• Mukherjee compares the gene to an atom or a bit, evolution’s “indivisible” particle. Obviously, the gene is physically divisible in the sense that it is made of atoms, but what he means here is that the lower levels can be abstracted away and the gene is the relevant level at which geneticists work.
• It is worth thinking of what the parallel carriers of information are in condensed matter problems — my hunch is that most condensed matter physicists would contend that these are the quasiparticles in the relevant phase of matter.
• Gregor Mendel, whose work is nowadays recognized as giving birth to the entire field of genetics, went unrecognized during his own lifetime. It took another 40-50 years for scientists to rediscover his experiments and to see that he had localized, in those pea plants, the indivisible gene. One gets the feeling that his work went uncelebrated simply because it was so far ahead of its time.
• The history of genetics is harrowing and ugly. While the second World War was probably the pinnacle of obscene crimes committed in the name of genetics, humans seem unable to shake off ideas associated with eugenics even into the modern day.
• Through a large part of its history, the field of genetics has had to deal with a range of ethical questions. There is no sign of this trend abating in light of the recent discovery of CRISPR/Cas-9 technology. If you’re interested in learning more about this, RadioLab has a pretty good podcast about it.
• Schrodinger’s book What is Life? has inspired so much follow-up work that it is hard to overestimate the influence it has had on a generation of physicists that transitioned to studying biology in the middle of the twentieth century, including both Watson and Crick.

While I could go on and on with this list, I’ll stop ruining the book for you. I would just like to say that at the end of the book I got the feeling that humans are still just starting to scratch the surface of understanding what’s going on in a cell. There is much more to learn, and that’s an exciting feeling in any field of science.

Aside: In case you missed March Meeting, the APS has posted the lectures from the Kavli Symposium on YouTube, which includes lectures from Duncan Haldane and Michael Kosterlitz among others.

A couple days ago, Lawrence Livermore National Laboratory released a number of videos of nuclear test explosions. It is worth watching some of these to understand the magnitude of destruction that these can cause. Here is a link to the Lawrence Livermore playlist on YouTube, and below is a video explaining a bit of the background concerning the release of these videos:

Below is a helpful MinutePhysics video that talks about the actual dangers concerning nuclear weapons:

On a somewhat unrelated note, while at the APS March Meeting this past week, Peter Abbamonte mentioned this video to me, which I also found pretty startling:

Lastly, here is a tragicomedy that takes place in the wild — it seems like this orca was never told by its mother not to play with its food:

## An Excellent Intro To Physical Science

On a recent plane ride, I was able to catch an episode of the new PBS series Genius by Stephen Hawking. I was surprised by the quality of the show and in particular, its emphasis on experiment. Usually, shows like this fall into the trap of giving one the facts (or speculations) without an adequate explanation of how scientists come to such conclusions. However, this one is a little different and there is a large emphasis on experiment, which, at least to me, is much more inspirational.

Here is the episode I watched on the plane:

## Strontium Titanate – A Historical Tour

Like most ugly haircuts, materials tend to go in and out of style over time. Strontium titanate (SrTiO3), commonly referred to as STO, has, since its discovery, been somewhat timeless. And this is not just because it is often used as a substitute for diamonds. What I mean is that studying STO rarely seems to go out of style and the material always appears to have some surprises in store.

STO was first synthesized in the 1950s, before it was discovered naturally in Siberia. It didn’t take long for research on this material to take off. One of the first surprising results that STO had in store was that it became superconducting when reduced (electron-doped). This is not remarkable in and of itself, but this study and other follow-up ones showed that superconductivity can occur with a carrier density of only $\sim 5\times 10^{17}\ \textrm{cm}^{-3}$.

This is surprising in light of BCS theory, where the Fermi energy is assumed to be much greater than the Debye energy, which is clearly not the case here. There have been claims in the literature suggesting that the superconductivity may be plasmon-induced, since the plasma frequency lies in the phonon energy regime. L. Gor’kov recently put a paper up on the arXiv discussing the mechanism problem in STO.
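To put rough numbers to this inversion of the usual BCS hierarchy, here is a back-of-the-envelope sketch using the carrier density quoted above. Two assumptions are mine, not the post’s: a free-electron dispersion with the bare electron mass (the actual band mass in STO is heavier, which would make the Fermi energy even smaller), and a Debye temperature of roughly 400 K as an illustrative literature value.

```python
import math

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg; bare electron mass (assumption, STO's band mass is heavier)
k_B = 1.380649e-23       # J/K
eV = 1.602176634e-19     # J per eV

n = 5e17 * 1e6           # carrier density from the text, converted from cm^-3 to m^-3

# free-electron Fermi energy: E_F = hbar^2 (3 pi^2 n)^(2/3) / (2 m)
k_F = (3 * math.pi**2 * n) ** (1 / 3)
E_F_meV = hbar**2 * k_F**2 / (2 * m_e) / eV * 1000

# Debye energy for an assumed Debye temperature of ~400 K
E_D_meV = k_B * 400 / eV * 1000

print(f"E_F ~ {E_F_meV:.1f} meV, Debye energy ~ {E_D_meV:.1f} meV")
```

Even with these crude assumptions, the Fermi energy comes out at a couple of meV, an order of magnitude below the phonon energy scale, i.e. the reverse of the usual BCS assumption.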

Soon after the initial work on superconductivity in doped STO, Shirane, Yamada and others began studying pure STO in light of the predicted “soft mode” theory of structural phase transitions put forth by W. Cochran and others. Because of an antiferrodistortive structural phase transition at ~110K (depicted below), they were able to observe a corresponding soft phonon associated with this transition at the Brillouin zone boundary (shown below, taken from this paper). These results had vast implications for how we understand structural phase transitions today: it is now almost always assumed that a phonon softens at the transition temperature of a continuous structural phase transition.

Many materials similar to STO, such as BaTiO3 and PbTiO3, which also have a perovskite crystal structure motif, undergo a phase transition to a ferroelectric state at low (or not so low) temperatures. The transition to the ferroelectric state is accompanied by a diverging dielectric constant (and dielectric susceptibility) much in the way that the magnetic susceptibility diverges in the transition from a paramagnetic to a ferromagnetic state. In 1978, Muller (of Bednorz and Muller fame) and Burkard reported that at low temperature, the dielectric constant begins its ascent towards divergence, but then saturates at around 4K (the data is shown in the top panel below). Ferroelectricity is associated with a zone-center softening of a transverse phonon, and in the case of STO, this process begins, but doesn’t quite get there, as shown schematically in the image below (and you can see this in the data by Shirane and Yamada above as well).

Taken from Wikipedia
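For concreteness, the divergence referred to above follows the classical Curie-Weiss form, while the low-temperature saturation in STO is conventionally fit with Barrett’s quantum-mechanical modification of it (this is Barrett’s 1952 expression, which Muller and Burkard used; $C$, $T_c$, $T_0$ and $T_1$ are material-specific constants):

$\epsilon(T) \approx \frac{C}{T - T_c} \quad \textrm{(classical Curie-Weiss)}, \qquad \epsilon(T) = \frac{C}{\frac{T_1}{2}\textrm{coth}\left(\frac{T_1}{2T}\right) - T_0} \quad \textrm{(Barrett)}$

As $T \rightarrow 0$, $\textrm{coth}(T_1/2T) \rightarrow 1$ and the dielectric constant saturates at a finite value instead of diverging, which is precisely the behavior seen in the data above.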

The saturation of the large dielectric constant and the not-quite-softening of the zone center phonon has led authors to refer to STO as a quantum paraelectric (i.e. because of the zero-point motion of the transverse optical zone-center phonon, the material doesn’t gain enough energy to undergo the ferroelectric transition). As recently as 2004, however, it was reported that one can induce ferroelectricity in STO films at room temperature by straining the film.

In recent times, STO has found itself a common substrate material due to processes that can make its surface atomically flat. While this may not sound so exciting, it has had vast implications for the physics of thin films and interfaces. Firstly, this property has enabled researchers to grow high-quality thin films of cuprate superconductors using molecular beam epitaxy, which was a big challenge in the 1990s. More recently, it has led to the discovery of a two-dimensional electron gas, superconductivity and ferromagnetism at the LAO/STO interface, a startling finding given that both materials are electrically insulating. Most strikingly, when FeSe (a superconductor at around 7K in bulk) is grown as a monolayer film on STO, its transition temperature is boosted to around 100K (though subsequent experiments dispute the precise transition temperature, it remains remarkably high!). This has led to the idea that the FeSe somehow “borrows the pairing glue” from the underlying substrate.

STO is a gem of a material in many ways. I doubt that we are done with its surprises.

## Consistency in the Hierarchy

When writing on this blog, I try to share nuggets here and there of phenomena, experiments, sociological observations and other peoples’ opinions I find illuminating. Unfortunately, this format can leave readers wanting when it comes to some sort of coherent message. Precisely because of this, I would like to revisit a few blog posts I’ve written in the past and highlight the common vein running through them.

Condensed matter physicists of the last couple of generations have grown up with the ingrained idea that “More is Different”, a concept first coherently put forth by P. W. Anderson and carried further by others. Most discussions of these ideas tend to concentrate on the notion that there is a hierarchy of disciplines where each discipline is not logically dependent on the one beneath it. For instance, in solid state physics, we do not need to start out at the level of quarks and build up from there to obtain many properties of matter. More profoundly, one can observe phenomena which distinctly arise in the context of condensed matter physics, such as superconductivity, the quantum Hall effect and ferromagnetism, that one wouldn’t necessarily predict by just studying particle physics.

While I have no objection to these claims (and actually agree with them quite strongly), it seems to me that one rather (almost trivial) fact is infrequently mentioned when these concepts are discussed. That is the role of consistency.

While it is true that one does not necessarily require the lower level theory to describe the theories at the higher level, these theories do need to be consistent with each other. This is why, after the publication of BCS theory, there were a slew of theoretical papers that tried to come to terms with various aspects of the theory (such as the approximation of particle number non-conservation and features associated with gauge invariance (pdf!)).

This requirement of consistency is what makes concepts like the Bohr-van Leeuwen theorem and Gibbs paradox so important. They bridge two levels of the “More is Different” hierarchy, exposing inconsistencies between the higher level theory (classical mechanics) and the lower level (the micro realm).

In the case of the Bohr-van Leeuwen theorem, it shows that classical mechanics, when applied to the microscopic scale, is not consistent with the observation of ferromagnetism. In the Gibbs paradox case, classical mechanics, when not taking into consideration particle indistinguishability (a quantum mechanical concept), is inconsistent with the idea that the entropy must remain the same when dividing a gas tank into two equal partitions.

Today, we have the issue that ideas from the micro realm (quantum mechanics) appear to be inconsistent with our ideas on the macroscopic scale. This is why matter interference experiments are still carried out in the present time. It is imperative to know why it is possible for a C$_{60}$ molecule (or a 10,000 amu molecule) to be described with a single wavefunction in a Schrodinger-like scheme, whereas this seems implausible for, say, a cat. There does again appear to be some inconsistency here, and though there are frameworks, like decoherence, that attempt to get around it, none has achieved consensus. I also can’t help but mention that non-locality, à la Bell, also seems totally at odds with one’s intuition on the macro-scale.

What I want to stress is that the inconsistency theorems (or paradoxes) contained seeds of some of the most important theoretical advances in physics. This is itself not a radical concept, but it often gets neglected when a generation grows up with a deep-rooted “More is Different” scientific outlook. We sometimes forget to look for concepts that bridge disparate levels of the hierarchy and subsequently look for inconsistencies between them.

Thomas Kuhn, the famous philosopher of science, envisioned that scientific revolutions take place when “an increasing number of epicycles” arise, resulting in the untenability of a prevailing theory. Just in case you aren’t familiar, the “epicycles” are a reference to the Ptolemaic world-view with the earth at the center of the universe. To explain the trajectories of the other planets, Ptolemaic theory required that the planets orbit the earth in complicated trajectories called epicycles. These convoluted epicycles were no longer needed once the Copernican revolution took place, and it was realized that our solar system was heliocentric.

This post is specifically about the Gibbs paradox, which provided one of the first examples of an “epicycle” in classical mechanics. If you google Gibbs paradox, you will come up with several different explanations, which are all seemingly related, but don’t quite all tell the same story. So instead of following Gibbs’ original arguments, I’ll just go by the version which is the easiest (in my mind) to follow.

Imagine a large box that is partitioned in two, with volume V on either side, each side filled with N helium atoms at the same pressure, temperature, etc. and at equilibrium (i.e. the gases are identical). The total entropy in this scenario is $S + S = 2S$. Now, imagine that the partition is removed. The question Gibbs asked himself was: does the entropy increase?

Now, from our perspective, this might seem like an almost silly question, but Gibbs asked it in 1875, before the advent of quantum mechanics. This is relevant because in classical mechanics, particles are always distinguishable (i.e. they can be “tagged” by their trajectories). Hence, if one calculates the entropy increase assuming distinguishable particles, one gets the result that the entropy increases by $2Nk\,\textrm{ln}\,2$.
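The $2Nk\,\textrm{ln}\,2$ figure follows from treating the removal of the partition as a free expansion of each (distinguishable) gas from volume V into 2V, with N atoms on each side:

$\Delta S = Nk\,\textrm{ln}\left(\frac{2V}{V}\right) + Nk\,\textrm{ln}\left(\frac{2V}{V}\right) = 2Nk\,\textrm{ln}\,2$

Each term is just the standard ideal-gas entropy change $Nk\,\textrm{ln}(V_f/V_i)$ for an isothermal expansion.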

This is totally at odds with one’s intuition (if one has any intuition when it comes to entropy!) and the extensive nature of entropy (that entropy scales with the system size). Since the size of the larger container of volume $2V$ containing identical gases (i.e. same pressure and temperature) does not change when removing the partition, neither should the entropy. And most damningly, if one were to place the partition back where it was before, one would naively think that the entropy would return to $2S$, suggesting that the entropy decreased when returning the partition.

The resolution to this paradox is that the particles (helium atoms in this case) are completely indistinguishable. Gibbs had indeed recognized this as the resolution to the problem at the time, but considered it a counting problem.

Little did he know that the seeds giving rise to this seemingly benign problem required the complete overthrow of classical mechanics in favor of quantum mechanics. Only in quantum mechanics do truly identical particles exist. Note that nowhere in the Gibbs paradox does it suggest what the next theory will look like – it only points out a severe shortcoming of classical mechanics. Looked at in this light, it is amusing to think about what sorts of epicycles are hiding within our seemingly unshakable theories of quantum mechanics and general relativity, perhaps even in plain sight.

## Lunar Eclipse and the 22 Degree Halo

The beautiful thing about atmospheric optics is that (almost) everyone can look up at the sky and see stunning optical phenomena from the sun, moon or some other celestial object. In this post I’ll focus on two particularly striking phenomena where the physical essence can be captured with relatively simple explanations.

The 22 degree halo is a ring around the sun or moon, which is often observed on cold days. Here are a couple images of the 22 degree halo around the sun and moon respectively:

22 degree halo around the sun

22 degree halo around the moon

Note that the 22 degree halo is distinct from a corona, which occurs for a different reason: coronae arise from the diffraction of light by water droplets, whereas the 22 degree halo arises specifically from refraction through hexagonal ice crystals in the earth’s atmosphere. So why 22 degrees? Well, it turns out that one can answer the question using rather simple undergraduate-level physics. One of the most famous questions in undergraduate optics is that of light refraction through a prism, illustrated below:

Fig. 1: The Snell’s Law Prism Problem

But for hexagonal ice crystals in the atmosphere, the problem is exactly the same, as one can see below. This is because a hexagon is just an equilateral triangle with its corners chopped off. So as long as the light enters and exits on two sides of the hexagon that are spaced one side apart, the analysis is the same as for the triangle.

Equilateral triangle with corners chopped off, making a hexagon

It turns out that $\theta_4$ in Fig. 1 can be solved as a function of $\theta_1$ with Snell’s law and some simple trigonometry to yield (under the assumption that $n_1 =1$):

$\theta_4 = \sin^{-1}\left(n_2 \sin\left(60^{\circ} - \sin^{-1}\left(\frac{\sin\theta_1}{n_2}\right)\right)\right)$

It is then pretty straightforward to obtain $\delta$, the difference in angle between the incident and refracted beam as a function of $\theta_1$. I have plotted this below for the index of refraction of ice crystals for three different colors of light, red, green and blue ($n_2 =$ 1.306, 1.311 and 1.317 respectively):

The important thing to note in the plot above is that there is a minimum deviation angle below which there is no refracted beam, and this angle is approximately 21.54, 21.92 and 22.37 degrees for red, green and blue light respectively. Because there is no refracted beam below about 22 degrees, this region appears darker, and there is a sudden appearance of the refracted beam at the angles listed above. This is what gives rise to the 22 degree halo and also to the reddish hue on the inside rim of the halo.
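The minimum deviation angle can be checked numerically. Here is a short sketch (function and variable names are my own) that traces a ray through a 60 degree ice prism via Snell’s law and scans over incidence angles for red light:

```python
import math

def deviation(theta1_deg, n2=1.306):
    """Deviation (degrees) of a ray through a 60-degree ice prism, with n1 = 1."""
    t1 = math.radians(theta1_deg)
    t2 = math.asin(math.sin(t1) / n2)   # Snell's law at the entry face
    t3 = math.radians(60.0) - t2        # apex geometry: t2 + t3 = 60 degrees
    t4 = math.asin(n2 * math.sin(t3))   # Snell's law at the exit face
    return theta1_deg + math.degrees(t4) - 60.0

# scan incidence angles from 25 to 85 degrees for red light in ice (n = 1.306)
delta_min = min(deviation(25.0 + 0.001 * i) for i in range(60001))
print(round(delta_min, 2))   # about 21.5 degrees, in agreement with the plot
```

The same scan with n = 1.311 and 1.317 reproduces the green and blue minima quoted above.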

Another rather spectacular celestial occurrence is the lunar eclipse, where the earth completely obscures the moon from direct sunlight. This is the geometry for the lunar eclipse:

Geometry of the lunar eclipse

The question I wanted to address is the reddish hue of the moon, despite its lying in the earth’s shadow. Naively, it would seem like the moon should not be observable at all. However, an effect similar to the halo occurs here, with the earth’s atmosphere acting as the refracting medium. Just as light incident on the prism entered traveling upward and exited traveling downward, the sun’s rays enter the atmosphere on a trajectory that would miss the moon, but are bent towards the moon as they pass through the earth’s atmosphere.

But why red? Well, this has the same origin as the reddish hue of the sunset. Because light scatters from atmospheric particles as $1/\lambda^4$, blue light gets scattered away much more easily than red light. Hence, the light left over by the time it reaches the moon is primarily red.
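As a quick sanity check on the strength of the $1/\lambda^4$ dependence, here is the blue-to-red scattering ratio using illustrative wavelengths of 450 nm and 650 nm (the specific wavelengths are my choice, not the post’s):

```python
# Rayleigh scattering goes as 1/lambda^4, so the blue/red ratio is (lambda_red/lambda_blue)^4
lambda_blue, lambda_red = 450e-9, 650e-9   # meters (illustrative values)
ratio = (lambda_red / lambda_blue) ** 4
print(round(ratio, 2))   # ~4.35: blue scatters over four times more strongly than red
```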

It is interesting to imagine what the earth looks like from the moon during a lunar eclipse: it likely looks completely dark apart from a spectacular red halo around the earth. Anyway, one should realize that Snell’s law was first formulated in 984 by the mathematician Ibn Sahl in Baghdad, and so it was possible to come to these conclusions more than a thousand years ago. Nothing new here!