Category Archives: History

Book Review – The Gene

Following the March Meeting, I took a vacation for a couple weeks, returning home to Bangkok, Thailand. During my holiday, I was able to get a hold of and read Siddhartha Mukherjee’s new book entitled The Gene: An Intimate History.

I have to preface any commentary by saying that prior to reading the book, my knowledge of biology embarrassingly languished at the middle-school level. With that confession aside, The Gene was probably one of the best (and for me, most enlightening) popular science books I have ever read. This is aided in no small part by Mukherjee’s fluid and beautiful writing style, from which scientists in all fields can learn a few lessons about scientific communication. The Gene is also touched with a humanity that is not usually associated with the popular science genre, which tends to be rather dry in recounting scientific and intellectual endeavors. This humanity is the book’s most powerful feature.

Since there are many glowing reviews of the book published elsewhere, I will just list here a few nuggets I took away from The Gene, which hopefully will serve to entice rather than spoil the book for you:

  • Mukherjee compares the gene to an atom or a bit, evolution’s “indivisible” particle. Obviously, the gene is physically divisible in the sense that it is made of atoms, but what he means here is that the lower levels can be abstracted away and the gene is the relevant level at which geneticists work.
    • It is worth thinking of what the parallel carriers of information are in condensed matter problems — my hunch is that most condensed matter physicists would contend that these are the quasiparticles in the relevant phase of matter.
  • Gregor Mendel, whose work is nowadays recognized as giving birth to the entire field of genetics, was not recognized for it while he was alive. It took another 40-50 years for scientists to rediscover his experiments and to see that he had localized, in those pea plants, the indivisible gene. One gets the feeling that his work was not celebrated during his lifetime because it was far ahead of its time.
  • The history of genetics is harrowing and ugly. While the second World War was probably the pinnacle of obscene crimes committed in the name of genetics, humans seem unable to shake off ideas associated with eugenics even into the modern day.
  • Through a large part of its history, the field of genetics has had to deal with a range of ethical questions. There is no sign of this trend abating in light of the recent discovery of CRISPR/Cas9 technology. If you’re interested in learning more about this, RadioLab has a pretty good podcast about it.
  • Schrödinger’s book What is Life? has inspired so much follow-up work that it is hard to overestimate the influence it has had on the generation of physicists who transitioned to studying biology in the middle of the twentieth century, including both Watson and Crick.

While I could go on and on with this list, I’ll stop ruining the book for you. I would just like to say that at the end of the book I got the feeling that humans are still just starting to scratch the surface of understanding what’s going on in a cell. There is much more to learn, and that’s an exciting feeling in any field of science.

Aside: In case you missed the March Meeting, the APS has posted the lectures from the Kavli Symposium on YouTube, which include talks by Duncan Haldane and Michael Kosterlitz, among others.

YouTube Yikes!

A couple days ago, Lawrence Livermore National Laboratory released a number of videos of nuclear test explosions. It is worth watching some of these to understand the magnitude of destruction that these can cause. Here is a link to the Lawrence Livermore playlist on YouTube, and below is a video explaining a bit of the background concerning the release of these videos:

Below is a helpful MinutePhysics video that talks about the actual dangers concerning nuclear weapons:

On a somewhat unrelated note, while at the APS March Meeting this past week, Peter Abbamonte mentioned this video to me, which I also found pretty startling:

Lastly, here is a tragicomedy that takes place in the wild — it seems like this orca was never told by its mother not to play with its food:

An Excellent Intro To Physical Science

On a recent plane ride, I was able to catch an episode of the new PBS series Genius by Stephen Hawking. I was surprised by the quality of the show and, in particular, its emphasis on experiment. Usually, shows like this fall into the trap of presenting facts (or speculations) without an adequate explanation of how scientists come to such conclusions. This one is a little different, and its focus on how scientists actually arrive at their conclusions is, at least to me, much more inspirational.

Here is the episode I watched on the plane:

Electron-Hole Droplets

While some condensed matter physicists have moved on from studying semiconductors and consider them “boring”, the semiconductor community consistently produces surprises that suggest the opposite. Most notably, the integer and fractional quantum Hall effects were not only unexpected, but (especially the FQHE) have changed the way we think about matter. The development of semiconductor quantum wells and superlattices has played a large role in furthering the physics of semiconductors and has been central to the efforts in observing Bloch oscillations, the quantum spin Hall effect and exciton condensation in quantum Hall bilayers, among many other discoveries.

However, there was one development that apparently did not require much of a technological advance in semiconductor processing; it was simply overlooked. This was the discovery of electron-hole droplets in the late 60s and early 70s in crystalline germanium and silicon. A lot of work on this topic was done in the Soviet Union on both the theoretical and experimental fronts, and partly because of this, finding the relevant papers online is quite difficult! An excellent review on the topic was written by L. Keldysh, who also did a lot of theoretical work on electron-hole droplets and was probably the first to recognize them for what they were.

Before continuing, let me just emphasize that when I say electron-hole droplet, I literally mean something quite akin to water droplets in a fog. In a semiconductor, the exciton gas condenses into a mist-like substance with electron-hole droplets surrounded by a gas of free excitons. This is possible in a semiconductor because the time it takes for electron-hole recombination is orders of magnitude longer than the time it takes to undergo the transition to the electron-hole droplet phase. Therefore, the droplet can be treated as if it is in thermal equilibrium, even though it is clearly a non-equilibrium state of matter. Recombination takes longer in an indirect-gap semiconductor, which is why silicon and germanium were used for these experiments.

A bit of history: the field got started in 1968 when Asnin, Rogachev and Ryvkin in the Soviet Union observed a jump in the photoconductivity of germanium at low temperature when it was excited above a certain threshold radiation intensity (i.e. when the density of excitons exceeded \sim 10^{16}  \textrm{cm}^{-3}). The interpretation of this observation in terms of an electron-hole droplet was put on firm footing when a broad luminescence peak was observed by Pokrovski and Svistunova at ~709 meV, just below the free exciton line (~714 meV). The intensity of this peak increased dramatically upon lowering the temperature, with a substantial increase within just a tenth of a degree, an observation suggestive of a phase transition. I reproduce the luminescence spectrum from this paper by T.K. Lo showing the free exciton and electron-hole droplet peaks, because, as mentioned, the Soviet papers are difficult to find online.

Luminescence spectrum of germanium showing the free exciton and electron-hole droplet peaks (from the paper by T.K. Lo)

From my description so far, the most pressing questions remaining are: (1) why is there an increase in the photoconductivity in the presence of droplets? and (2) is there better evidence for the droplet than just the luminescence peak? Because free excitons are also known to form biexcitons (i.e. excitonic molecules), the peak may easily be interpreted as evidence of biexcitons rather than of an electron-hole droplet, and this was a point of much contention in the early days of studying the electron-hole droplet (see the Aside below).

Let me answer the second question first, since the answer is a little simpler. The most conclusive evidence (besides the excellent agreement between theory and experiment) consisted of literal pictures of the droplets! Because the electrons and holes within a droplet recombine, they emit the characteristic radiation shown in the luminescence spectrum above, centered at ~709 meV. This is in the infrared region, and J.P. Wolfe and collaborators were able to take pictures of the droplets in germanium (~4 microns in diameter) with an infrared-sensitive camera. Below is a picture of the droplet cloud; notice that the cloud is anisotropic, which is due to the crystal symmetry and the fact that phonons can propel the electron-hole liquid!

Infrared image of the electron-hole droplet cloud in germanium (J.P. Wolfe and collaborators)

The first question is a little tougher to answer, but it can be addressed with a qualitative description. When the excitons condense into the liquid, the density of “excitons” is much higher in this region. In fact, the inter-exciton distance becomes smaller than the distance between the electron and hole within an exciton in the gas. It is therefore no longer appropriate to speak of a specific electron as bound to a specific hole in the droplet; the electrons and holes move independently. Naively, one can rationalize this because at such high densities the exchange interaction becomes strong, so electrons and holes can easily switch partners. The electron-hole liquid is thus a multi-component degenerate plasma, similar to a Fermi liquid, and it even has a Fermi energy, on the order of 6 meV. Hence, the electron-hole droplet is metallic!
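To put that Fermi energy in perspective, here is a quick back-of-the-envelope conversion (using k_B \approx 0.086 \textrm{ meV/K}):

T_F = E_F/k_B \approx 6 \textrm{ meV} / 0.086 \textrm{ meV/K} \approx 70 \textrm{ K}

so at the low (liquid helium) temperatures at which these experiments are done, the carriers in the droplet are strongly degenerate, consistent with the metallic, Fermi-liquid-like picture.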

So why do the excitons form droplets at all? This is a question of kinetics and has to do with a delicate balance between evaporation, surface tension, electron-hole recombination and the probability of an exciton in the surrounding gas being absorbed by the droplet. Keldysh’s article, linked above, and the references therein are excellent for the details on this point.

In light of the finding that bismuth (also a compensated electron-hole liquid!) was recently discovered to be superconducting at ~530 microkelvin, one may ask whether electron-hole droplets can also become superconducting at similar or lower temperatures. From my brief searches online, it doesn’t seem like this question has been seriously addressed in the theoretical literature, and it would be an interesting route towards non-equilibrium superconductivity.

Just a couple years ago, a group also reported the existence of small droplet quanta in GaAs, demonstrating that research on this topic is still alive. To my knowledge, electron-hole drops have thus far not been observed in single-layer transition metal dichalcogenide semiconductors, which may present an interesting route to studying dimensional effects on the electron-hole droplet. However, this may be challenging since most of these materials are direct-gap semiconductors.

Aside: Sadly, it seems that evidence for the electron-hole droplet had actually been observed at Bell Labs by J.R. Haynes in 1966 in this paper, before the 1968 Soviet paper, though unbeknownst to Haynes himself. Haynes attributed his observation to the excitonic molecule (or biexciton), which, it turns out, he didn’t have the statistics to observe. Later experiments confirmed that it was indeed the electron-hole droplet that he had observed. Strangely, Haynes’ paper is still cited relatively frequently today in the context of biexcitons, since he provided quite a nice analysis of his results! It also so happened that Haynes died after his paper was submitted and never found out that he had actually discovered the electron-hole droplet.

Strontium Titanate – A Historical Tour

Like most ugly haircuts, materials tend to go in and out of style over time. Strontium titanate (SrTiO3), commonly referred to as STO, has, since its discovery, been somewhat timeless. And this is not just because it is often used as a substitute for diamonds. What I mean is that studying STO rarely seems to go out of style and the material always appears to have some surprises in store.

STO was first synthesized in the 1950s, before it was discovered to occur naturally in Siberia. It didn’t take long for research on this material to take off. One of the first surprises STO had in store was that it becomes superconducting when reduced (electron-doped). This is not remarkable in and of itself, but this study and follow-up work showed that superconductivity can occur at a carrier density of only ~5\times 10^{17} \textrm{cm}^{-3}.

This is surprising in light of BCS theory, where the Fermi energy is assumed to be much greater than the Debye energy, which is clearly not the case here. There have been claims in the literature that the superconductivity may be plasmon-induced, since the plasma frequency is in the phonon energy regime. L. Gorkov recently put a paper up on the arXiv discussing the mechanism problem in STO.
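To get a feel for the numbers, here is a minimal Python sketch estimating the Fermi energy at this carrier density. It assumes a single parabolic band with the free-electron mass, which is only a rough stand-in for the actual band structure of STO:

    import numpy as np

    hbar = 1.0545718e-34   # J*s
    m_e = 9.10938e-31      # kg (free-electron mass; the actual effective mass in STO differs)
    eV = 1.602177e-19      # J per eV

    n = 5e17 * 1e6         # carrier density ~5e17 cm^-3, converted to m^-3

    # Fermi energy of a degenerate electron gas: E_F = hbar^2 (3 pi^2 n)^(2/3) / (2 m)
    E_F = hbar**2 * (3 * np.pi**2 * n)**(2/3) / (2 * m_e)
    print(f"E_F ~ {1000 * E_F / eV:.1f} meV")

This gives a Fermi energy of only a few meV, far below typical optical phonon energies of tens of meV, which is what makes the applicability of the standard BCS picture questionable.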

Soon after the initial work on superconductivity in doped STO, Shirane, Yamada and others began studying pure STO in light of the predicted “soft mode” theory of structural phase transitions put forth by W. Cochran and others. Because of the antiferrodistortive structural phase transition at ~110K (depicted below), they were able to observe a corresponding soft phonon associated with this transition at the Brillouin zone boundary (shown below, taken from this paper). These results had vast implications for how we understand structural phase transitions today, where a softening phonon is almost always assumed to accompany a continuous structural phase transition.

Many materials similar to STO, such as BaTiO3 and PbTiO3, which also have a perovskite crystal structure motif, undergo a phase transition to a ferroelectric state at low (or not so low) temperatures. The transition to the ferroelectric state is accompanied by a diverging dielectric constant (and dielectric susceptibility), much in the way that the magnetic susceptibility diverges at the transition from a paramagnetic to a ferromagnetic state. In 1978, Müller (of Bednorz and Müller fame) and Burkard reported that at low temperature, the dielectric constant of STO begins its ascent towards divergence, but then saturates at around 4K (the data are shown in the top panel below). Ferroelectricity is associated with the softening of a transverse optical phonon at the zone center, and in the case of STO this softening begins but never quite completes, as shown schematically in the image below (and as you can see in the data by Shirane and Yamada above as well).

Signatures of quantum paraelectricity: saturation of the dielectric constant and incomplete softening of the zone-center phonon (taken from Wikipedia)

The saturation of the large dielectric constant and the not-quite-softening of the zone-center phonon have led authors to refer to STO as a quantum paraelectric (i.e. the zero-point motion of the transverse optical zone-center phonon prevents the material from ever undergoing the ferroelectric transition). As recently as 2004, however, it was reported that one can induce ferroelectricity in STO films at room temperature by straining them.

In recent times, STO has found itself a common substrate material due to processes that can make its surface atomically flat. While this may not sound so exciting, it has had vast implications for the physics of thin films and interfaces. Firstly, this property enabled researchers to grow high-quality thin films of cuprate superconductors using molecular beam epitaxy, which was a big challenge in the 1990s. More recently, it has led to the discovery of a two-dimensional electron gas, superconductivity and ferromagnetism at the LAO/STO interface, startling findings given that both materials are electrically insulating. Also remarkably, when FeSe (a superconductor at around 7K in bulk) is grown as a monolayer film on STO, its transition temperature is boosted to around 100K (though the precise value is disputed in subsequent experiments, it remains high!). This has led to the idea that the FeSe somehow “borrows the pairing glue” from the underlying substrate.

STO is a gem of a material in many ways. I doubt that we are done with its surprises.

Gibbs Paradox and Epicycles

Thomas Kuhn, the famous philosopher of science, envisioned that scientific revolutions take place when “an increasing number of epicycles” arise, rendering a prevailing theory untenable. Just in case you aren’t familiar, the “epicycles” are a reference to the Ptolemaic world-view with the earth at the center of the universe. To explain the trajectories of the other planets, Ptolemaic theory required that the planets circle the earth on complicated trajectories called epicycles. These convoluted epicycles were no longer needed once the Copernican revolution took place and it was realized that our solar system is heliocentric.

This post is specifically about the Gibbs paradox, which provided one of the first examples of an “epicycle” in classical mechanics. If you google Gibbs paradox, you will come up with several different explanations, which are all seemingly related, but don’t quite all tell the same story. So instead of following Gibbs’ original arguments, I’ll just go by the version which is the easiest (in my mind) to follow.

Imagine a large box that is partitioned in two, with volume V on either side, filled with helium gas of the same pressure, temperature, etc. and at equilibrium (i.e. the gases are identical). The total entropy in this scenario is S + S = 2S. Now, imagine that the partition is removed. The question Gibbs asked himself was: does the entropy increase?

Now, from our perspective, this might seem like an almost silly question, but Gibbs had asked himself this question in 1875, before the advent of quantum mechanics. This is relevant because in classical mechanics, particles are always distinguishable (i.e. they can be “tagged” by their trajectories). Hence, if one calculates the entropy increase assuming distinguishable particles, one gets the result that the entropy increases by 2Nk\textrm{ln}2.
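As a quick sketch of where this number comes from (using the standard classical ideal-gas entropy, up to terms that don’t change in the process):

S_{\textrm{distinguishable}} = Nk\left[\textrm{ln}V + \frac{3}{2}\textrm{ln}T + \textrm{const}\right]

Removing the partition lets each of the two gases of N particles expand from V to 2V at fixed temperature, so

\Delta S = 2\times Nk\,\textrm{ln}\frac{2V}{V} = 2Nk\,\textrm{ln}2

If one instead divides the number of microstates by N! (treating the particles as indistinguishable), the entropy becomes S = Nk\left[\textrm{ln}(V/N) + \frac{3}{2}\textrm{ln}T + \textrm{const}\right], and the same bookkeeping gives \Delta S = 2Nk\,\textrm{ln}\frac{2V/2N}{V/N} = 0.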

This is totally at odds with one’s intuition (if one has any intuition when it comes to entropy!) and with the extensive nature of entropy (i.e. that entropy scales with the system size). Since removing the partition between two identical gases (same pressure and temperature) changes nothing macroscopic about the combined container of volume 2V, the entropy should not change either. And most damningly, if one were to place the partition back where it was before, one would naively expect the entropy to return to 2S, implying that the entropy decreased upon reinserting the partition, in conflict with the second law.

The resolution to this paradox is that the particles (helium atoms in this case) are completely indistinguishable. Gibbs had indeed recognized this as the resolution to the problem at the time, but considered it a counting problem.

Little did he know that this seemingly benign problem carried the seeds of the complete overthrow of classical mechanics in favor of quantum mechanics. Only in quantum mechanics do truly identical particles exist. Note that the Gibbs paradox nowhere suggests what the next theory will look like; it only points out a severe shortcoming of classical mechanics. Looked at in this light, it is amusing to think about what sorts of epicycles are hiding within our seemingly unshakable theories of quantum mechanics and general relativity, perhaps even in plain sight.

Lunar Eclipse and the 22 Degree Halo

The beautiful thing about atmospheric optics is that (almost) everyone can look up at the sky and see stunning optical phenomena from the sun, moon or some other celestial object. In this post I’ll focus on two particularly striking phenomena where the physical essence can be captured with relatively simple explanations.

The 22 degree halo is a ring around the sun or moon, which is often observed on cold days. Here are a couple images of the 22 degree halo around the sun and moon respectively:

22 degree halo around the sun

22 degree halo around the moon

Note that the 22 degree halo is distinct from coronae, which occur for different reasons. While coronae arise due to the presence of water droplets, the 22 degree halo arises specifically from the presence of hexagonal ice crystals in the earth’s atmosphere. So why 22 degrees? It turns out that one can answer this question using rather simple undergraduate-level physics. One of the most famous problems in undergraduate optics is that of light refraction through a prism, illustrated below:

Fig. 1: The Snell’s Law Prism Problem

For hexagonal ice crystals in the atmosphere, the problem is exactly the same, as one can see below. This is because a hexagon is just an equilateral triangle with its corners chopped off. So as long as the light enters and exits through two faces of the hexagon that are spaced one face apart, the analysis is the same as for the triangle.

Equilateral triangle with its corners chopped off, making a hexagon

It turns out that \theta_4 in Fig. 1 can be solved as a function of \theta_1 with Snell’s law and some simple trigonometry to yield (under the assumption that n_1 =1):

\theta_4 = \textrm{sin}^{-1}(n_2 \times \textrm{sin}(60-\textrm{sin}^{-1}(\textrm{sin}(\theta_1)/n_2)))

It is then pretty straightforward to obtain the total deviation angle between the incident and outgoing beams, \delta = \theta_1 + \theta_4 - 60^{\circ}, as a function of \theta_1. I have plotted this below for the index of refraction of ice crystals for three different colors of light, red, green and blue (n_2 = 1.306, 1.311 and 1.317 respectively):

Deviation angle \delta versus incidence angle \theta_1 for red, green and blue light

The important thing to note in the plot above is that the deviation angle has a minimum, below which no refracted light emerges, and this minimum is precisely 21.54, 21.92 and 22.37 degrees for red, green and blue light respectively. Because no refracted light emerges below ~22 degrees, the sky just inside the ring appears darker, and the refracted light piles up near the angles listed above. This is what gives rise to the 22 degree halo and also to the reddish hue on the inside rim of the halo, since red has the smallest minimum deviation angle.
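For the curious, here is a minimal numerical sketch in Python (assuming n_1 = 1 and the 60 degree effective prism geometry described above) that scans the incidence angle and recovers the minimum deviation angles quoted above:

    import numpy as np

    def deviation(theta1_deg, n2, apex_deg=60.0):
        """Total deviation delta = theta1 + theta4 - apex for a prism with the
        given apex angle, applying Snell's law at both faces (n1 = 1)."""
        theta1 = np.radians(theta1_deg)
        theta2 = np.arcsin(np.sin(theta1) / n2)      # refraction at the entrance face
        theta3 = np.radians(apex_deg) - theta2       # internal angle at the exit face
        theta4 = np.arcsin(n2 * np.sin(theta3))      # refraction at the exit face
        return theta1_deg + np.degrees(theta4) - apex_deg

    # scan incidence angles (starting at 20 degrees to avoid total internal reflection)
    theta1 = np.linspace(20, 89, 5000)
    for color, n2 in [("red", 1.306), ("green", 1.311), ("blue", 1.317)]:
        print(color, round(deviation(theta1, n2).min(), 2), "degrees")

Running this prints roughly 21.54, 21.92 and 22.37 degrees, in agreement with the values above.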

Another rather spectacular celestial occurrence is the lunar eclipse, where the earth completely obscures the moon from direct sunlight. This is the geometry for the lunar eclipse:

lunar_eclipse

Geometry of the lunar eclipse

The question I wanted to address is why the moon takes on a reddish hue despite lying in the earth’s shadow. Naively, it would seem like the moon should not be visible at all. However, an effect similar to the one behind the halo is at work here, with the earth’s atmosphere acting as the refracting medium. Just as light incident on the prism enters traveling upward and exits traveling downward, the sun’s rays enter the atmosphere on a trajectory that would miss the moon, but are bent towards the moon by refraction in the earth’s atmosphere.

But why red? Well, this has the same origin as the reddish hue of the sunset. Because light scatters off atmospheric particles with a strength proportional to 1/\lambda^4, blue light is scattered away much more readily than red light. Hence, the light that survives the passage through the atmosphere and reaches the moon is primarily red.
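As a rough number, taking ~450 nm for blue light and ~650 nm for red light (typical values, assumed here just for illustration):

\frac{\textrm{blue scattering}}{\textrm{red scattering}} \approx \left(\frac{650}{450}\right)^4 \approx 4.4

so blue light is scattered more than four times as strongly as red over the same path.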

It is interesting to imagine what the earth looks like from the moon during a lunar eclipse — it likely looks completely dark apart from a spectacular red halo around the earth. Anyway, one should realize that Snell’s law was first formulated in 984 by Arab scientist Ibn Sahl, and so it was possible to come to these conclusions more than a thousand years ago. Nothing new here!