
Environmental negligence part 1: Leaded gasoline

Before I start this post, I just want to say that I hope you are all doing okay with regard to the spread of coronavirus. It is important that we all take this situation seriously in an effort to minimize the risk to ourselves and others.

As a professional scientist, I have to admit that I sometimes struggle with technology’s dual nature. I realize that studying physics and being able to come up with ideas to test in the laboratory is an enormous privilege. But scientific knowledge also comes with substantial weight.

Probably the biggest challenges facing humanity today, environmental damage and climate change, have arisen partly as an outgrowth of the technologies that followed from the study of electromagnetism and thermodynamics in the 19th century. Eliminating greenhouse gas emissions is a monumental task, and one laden with moral questions concerning the developing world. In particular, the developing world has contributed relatively little to the anthropogenic greenhouse gases in our atmosphere today, but will likely see its development stunted as the world attempts to cut emissions.

In this series of posts, I raise some questions about how we as a society here in the US have dealt with environmental issues in the past, and I identify a few recurring patterns of behavior. In particular, I ask whether there are deep-rooted structural problems and, if so, what options exist to fix them. I focus on three examples (among many!): leaded gasoline, chlorofluorocarbons (CFCs) and PFOA (used to make Teflon), to illustrate the difficulties in fighting environmental damage, but also why there is some hope in doing so.

For thousands of years, lead has been used for many purposes, most famously for the pipes and aqueducts of the Roman empire. Even though the Romans were aware of lead's toxicity, they continued to use it, so lead poisoning has been in the public consciousness for at least a couple thousand years. In the US, however, it was long thought that low levels of lead exposure did not pose a serious health risk; lead had been used in paint for centuries, for example. In the early 1920s, though, something changed. Lead made its way into the air we breathe through the automotive industry, as a gasoline additive prized for its anti-knock properties. It was this use, as a gasoline additive, that spread lead essentially everywhere in the atmosphere and in surface ocean water. Despite lead's well-known health risks, no studies were conducted by any government agency or by any of the companies involved before leaded gasoline was permitted on the market.

Just a couple years after tetraethyl lead (TEL) was first added to gasoline, workers at the company producing it (the Ethyl Gasoline Corporation, a joint venture between General Motors, DuPont and Standard Oil) began suffering the consequences. Some died, some "went crazy" (lead is a neurotoxin), and others showed significant mental deterioration. Alexander Gettler and Charles Norris, a toxicologist and the Chief Medical Examiner of New York respectively, were tasked with performing autopsies on four of the workers who had died in connection with their work at the company. Their report, published in 1925, showed significant levels of lead in the brain tissue of the victims, more than in patients who had exhibited conventional lead poisoning (i.e. not from TEL). They speculated that the lead in TEL was somehow attracted to brain tissue more strongly than ordinary lead.

Around the time of their report, New York, New Jersey and the city of Philadelphia all banned the sale of leaded gasoline. Also in 1925, the U.S. Surgeon General formed a task force to perform a more thorough investigation of the effects of lead on the population, though it excluded Gettler and Norris from the committee. Around the same time, Thomas Midgley Jr., one of the inventors of TEL and a scientist at the Ethyl Gasoline Corporation, published a paper on the lack of hazards posed by leaded gasoline to the general public. It should hardly need saying that this was a massive conflict of interest. In addition to this paper, there were heavy propaganda efforts aimed at convincing the public that leaded gasoline was a huge step forward for the automotive industry. Below are a couple examples of advertisements from 1927 and 1953 respectively (notice how lead is never mentioned):

The U.S. Surgeon General ultimately sided with Midgley and industry insiders, citing a single short seven-month study that showed a lack of evidence that TEL was causing harm. The federal government lifted all bans on the sale of leaded gasoline. In a rather foreboding gesture, the task force did acknowledge the possibility that with more cars on the road in the future, the issue would have to be re-visited. This kicking of the can down the lead-coated road would last about 60 years. More about the Surgeon General’s report can be read here (PDF!).

One thing I should mention about the U.S. Surgeon General's task force is that it abided by what became known as the Kehoe Rule, which places the burden of proof on those claiming that leaded gasoline is unsafe: the product may be sold until it is demonstrated to be harmful. This is in contrast to the precautionary principle, which places the burden of proof on showing that a product is safe before it is introduced into the public arena.

How did leaded gasoline ultimately get banned? This is where the story takes an unlikely turn. Enter Clair Patterson, a geochemist trying to determine the age of the Earth. At the recommendation of his PhD advisor, Patterson set out to measure the ratio of lead to uranium in old rocks, since uranium-238 decays to lead with a half-life of about 4.5 billion years. What Patterson found was startling: he kept finding lead contamination everywhere. Whatever rock he looked at, no matter how clean his laboratory was, the lead measurements came out too high. He eventually realized that everything in his lab, the distilled water, the glassware, you name it, was contaminated with lead, preventing him from obtaining the correct ratio.

Because of this contamination, Patterson spent years building the world's first "cleanroom," one that would be lead-free. Below is a rather inspiring image of Clair Patterson scrubbing the lab floor (taken from here):

[Image: Clair Patterson scrubbing his laboratory floor]

With his massive effort to create a lead-free zone, Clair Patterson was ultimately able to obtain the age of the Earth: 4.5 billion years. But that isn't what this story is about.

After going to such lengths to fight off lead contamination, Patterson realized where the lead was coming from. In 1965, he tried to convince the public that leaded gasoline was a major health hazard by publishing Contaminated and Natural Lead Environments of Man. Even though he was the world's foremost expert on the topic at the time, he was left off a National Research Council effort to study lead in the atmosphere in 1971. Notice a pattern (recall Gettler and Norris above)? Once Patterson turned his studies toward lead contamination in food, it became abundantly clear that lead was present in every facet of life on Earth.

For his efforts, Patterson was hounded by industry insiders and denied contracts by many research organizations. But ultimately, he did win his long-fought battle. He was greatly helped by Herbert Needleman, whose research showed that prolonged lead exposure in children likely resulted in diminished mental capacity. In 1986, the US phased out leaded gasoline, roughly 60 years after the first warnings were raised by scientific watchdogs.

There is much to learn from this particular story, but before I go on to draw conclusions, I would like to recount a couple more historical episodes in the days to come that I think we can learn from. More to follow…

 

*Much of this post draws on the following references:

https://jamanetwork.com/journals/jama/article-abstract/237366

https://en.wikipedia.org/wiki/Clair_Cameron_Patterson#Campaign_against_lead_pois

https://pubs.acs.org/doi/pdf/10.1021/ie50188a020

https://ajph.aphapublications.org/doi/pdf/10.2105/AJPH.75.4.344

Looney Gas and Lead Poisoning: A Short, Sad History

https://www.mentalfloss.com/article/94569/clair-patterson-scientist-who-determined-age-earth-and-then-saved-it

 

The photoelectric effect does not imply photons

When I first learned quantum mechanics, I was told that we knew that the photon existed because of Einstein’s explanation of the photoelectric effect in 1905. As the frequency of light impinging on the cathode material was increased, electrons came out with higher kinetic energies. This led to Einstein’s famous formula:

K.E. = \hbar\omega - W.F.

where K.E. is the kinetic energy of the outgoing electron, \hbar\omega is the photon energy and W.F. is the material-dependent work function.
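As a quick numerical sanity check, here is a minimal sketch of Einstein's relation in Python. The 250 nm wavelength and 4.3 eV work function are illustrative values I have assumed (roughly appropriate for ultraviolet light striking a typical metal), not numbers from any particular experiment:

import scipy.constants as const

wavelength = 250e-9           # assumed UV wavelength (m)
work_function_eV = 4.3        # assumed work function of a typical metal (eV)

photon_energy_eV = const.h * const.c / wavelength / const.e    # \hbar\omega in eV
kinetic_energy_eV = photon_energy_eV - work_function_eV        # K.E. = \hbar\omega - W.F.

print(round(photon_energy_eV, 2))    # ~4.96 eV
print(round(kinetic_energy_eV, 2))   # ~0.66 eV

Light below the threshold frequency (\hbar\omega < W.F.) gives a negative number here, corresponding to no photoemission no matter how intense the beam.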

Since the mid-1960s, however, we have known that the photoelectric effect does not definitively imply the existence of photons. From the photoelectric effect alone, it is actually ambiguous whether it is the electronic levels or the impinging radiation that should be quantized!

So, why do we still give the photon explanation to undergraduates? To be perfectly honest, I’m not sure whether we do this because of some sort of intellectual inertia or because many physicists don’t actually know that the photoelectric effect can be explained without invoking photons. It is worth noting that Willis E. Lamb, who played a large role in the development of quantum electrodynamics, implored other physicists to be more cautious when using the word photon (see for instance his 1995 article entitled Anti-Photon, which gives an interesting history of the photon nomenclature and his thoughts as to why we should be wary of its usage).

Let's return to 1905, when Einstein came up with his explanation of the photoelectric effect. Just five years prior, Planck had heuristically explained the blackbody radiation spectrum and, in the process, evaded the ultraviolet catastrophe that plagued explanations based on the classical equipartition theorem. Planck's distribution provided the first evidence of "packets of light," quantized in units of \hbar\omega. At the time, Bohr had yet to come up with his atomic model suggesting that electronic levels were quantized; that had to wait until 1913. Thus, from his vantage point in 1905, Einstein made the most reasonable assumption at the time: that it was the radiation that was quantized and not the electronic levels.

Today, however, we have the benefit of hindsight.

According to Lamb's aforementioned Anti-Photon article, in 1926, G. Wentzel and G. Beck showed that one could use a semi-classical theory (i.e. electronic energy levels are quantized, but light is treated classically) to reproduce Einstein's result. In the mid- to late 1960s, Lamb and Scully extended the original treatment and made a point of emphasizing that one could get the photoelectric effect without invoking photons. The main idea can be sketched if you're familiar with the Fermi golden rule treatment of a harmonic electric field perturbation of the form \textbf{E}(t) = \textbf{E}_0 e^{-i \omega t}, where \omega is the frequency of the incoming light. In the dipole approximation, we can write the perturbation as V(t) = -e\textbf{x}\cdot\textbf{E}(t), and we get that the transition rate is:

R_{i \rightarrow f} = \frac{1}{t} \frac{1}{\hbar^2}|\langle f|e\textbf{x}\cdot\textbf{E}_0|i \rangle|^2 \left[\frac{\sin((\omega_{fi}-\omega)t/2)}{(\omega_{fi}-\omega)/2}\right]^2

Here, \hbar\omega_{fi} = (E_f - E_i) is the energy difference between the final and initial states. Now, there are a couple things to note about the above expression. Firstly, the term in brackets (containing the sinusoidal function) peaks sharply when \omega_{fi} \approx \omega. This means that when the incoming light is resonant with the transition between the initial state and a higher energy level, the transition rate sharply increases.
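To make the connection with energy conservation explicit, note that in the long-time limit the bracketed term approaches a delta function (this is the usual step in deriving the Fermi golden rule):

\frac{1}{t}\left[\frac{\sin((\omega_{fi}-\omega)t/2)}{(\omega_{fi}-\omega)/2}\right]^2 \rightarrow 2\pi\delta(\omega_{fi}-\omega)

so that

R_{i \rightarrow f} = \frac{2\pi}{\hbar^2}|\langle f|e\textbf{x}\cdot\textbf{E}_0|i \rangle|^2 \delta(\omega_{fi}-\omega)

In other words, transitions only occur when E_f = E_i + \hbar\omega: the electron absorbs one quantum of energy \hbar\omega from the field, even though the field itself was treated entirely classically.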

Let us now interpret this expression with regard to the photoelectric effect. In this case, there exists a continuum of final states which are of the form \langle x|f\rangle \sim e^{i\textbf{k}\cdot\textbf{r}}, and as long as \hbar\omega > W.F., where W.F. is the work function of the material, we recover \hbar\omega = W.F. + K.E., where K.E. represents the energy given to the electron in excess of the work function. Thus, we recover Einstein’s formula from above!

In addition to this, however, we also see from the above expression that the photocurrent is proportional to \textbf{E}_0^2, i.e. to the intensity of the light impinging on the cathode. Therefore, this semi-classical treatment actually improves upon Einstein's in the sense that the relation between intensity and current also falls out naturally.

From this reasoning, we see that the photoelectric effect does not logically imply the existence of photons.

We do, of course, have many demonstrations that non-classical light exists and that quantum fluctuations of light play a significant role in experimental observations: photon anti-bunching, spontaneous emission, the Lamb shift, and so on. However, I do agree with Lamb and Scully that the notion of a photon is a challenging one and that caution is warranted!

A couple further interesting reads on this subject at a non-technical level can be found here: The Concept of the Photon in Physics Today by Scully and Sargent and The Concept of the Photon – Revisited in OPN Trends by Muthukrishnan, Scully and Zubairy (pdf!)

Book Review – The Gene

Following the March Meeting, I took a vacation for a couple weeks, returning home to Bangkok, Thailand. During my holiday, I was able to get a hold of and read Siddhartha Mukherjee’s new book entitled The Gene: An Intimate History.

I have to preface any commentary by saying that prior to reading the book, my knowledge of biology embarrassingly languished at the middle-school level. With that confession aside, The Gene was probably one of the best (and, for me, most enlightening) popular science books I have ever read. This is aided considerably by Mukherjee's fluid and beautiful writing style, from which scientists in all fields can learn a few lessons about scientific communication. The Gene is also touched with a humanity not usually associated with the popular science genre, which tends to be rather dry in recounting scientific and intellectual endeavors. This humanity is the book's most powerful feature.

Since there are many glowing reviews of the book published elsewhere, I will just list here a few nuggets I took away from The Gene, which hopefully will serve to entice rather than spoil the book for you:

  • Mukherjee compares the gene to an atom or a bit, evolution’s “indivisible” particle. Obviously, the gene is physically divisible in the sense that it is made of atoms, but what he means here is that the lower levels can be abstracted away and the gene is the relevant level at which geneticists work.
    • It is worth thinking of what the parallel carriers of information are in condensed matter problems — my hunch is that most condensed matter physicists would contend that these are the quasiparticles in the relevant phase of matter.
  • Gregor Mendel, whose work is nowadays recognized as giving birth to the entire field of genetics, was not recognized for it while he was alive. It took another 40-50 years for scientists to rediscover his experiments and to see that he had localized, in those pea plants, the indivisible gene. One gets the feeling that his work went uncelebrated during his lifetime because it was so far ahead of its time.
  • The history of genetics is harrowing and ugly. While the second World War was probably the pinnacle of obscene crimes committed in the name of genetics, humans seem unable to shake off ideas associated with eugenics even into the modern day.
  • Through a large part of its history, the field of genetics has had to deal with a range of ethical questions. There is no sign of this trend abating in light of the recent discovery of CRISPR/Cas-9 technology. If you’re interested in learning more about this, RadioLab has a pretty good podcast about it.
  • Schrödinger's book What is Life? has inspired so much follow-up work that it is hard to overstate the influence it has had on the generation of physicists who transitioned to studying biology in the middle of the twentieth century, including both Watson and Crick.

While I could go on and on with this list, I’ll stop ruining the book for you. I would just like to say that at the end of the book I got the feeling that humans are still just starting to scratch the surface of understanding what’s going on in a cell. There is much more to learn, and that’s an exciting feeling in any field of science.

Aside: In case you missed March Meeting, the APS has posted the lectures from the Kavli Symposium on YouTube, which includes lectures from Duncan Haldane and Michael Kosterlitz among others.

YouTube Yikes!

A couple days ago, Lawrence Livermore National Laboratory released a number of videos of nuclear test explosions. It is worth watching a few of them to grasp the magnitude of destruction these weapons can cause. Here is a link to the Lawrence Livermore playlist on YouTube, and below is a video explaining a bit of the background behind the release of these videos:

Below is a helpful MinutePhysics video that talks about the actual dangers concerning nuclear weapons:

On a somewhat unrelated note, while at the APS March Meeting this past week, Peter Abbamonte mentioned this video to me, which I also found pretty startling:

Lastly, here is a tragicomedy that takes place in the wild — it seems like this orca was never told by its mother not to play with its food:

An Excellent Intro To Physical Science

On a recent plane ride, I was able to catch an episode of the new PBS series Genius by Stephen Hawking. I was surprised by the quality of the show and, in particular, by its emphasis on experiment. Shows like this usually fall into the trap of presenting facts (or speculations) without an adequate explanation of how scientists came to such conclusions. This one is a little different, and its focus on experiment is, at least to me, much more inspirational.

Here is the episode I watched on the plane:

Electron-Hole Droplets

While some condensed matter physicists have moved on from studying semiconductors and consider them "boring," surprises from the semiconductor community consistently suggest the opposite. Most notably, the integer and fractional quantum Hall effects were not only unexpected, but (especially the FQHE) have changed the way we think about matter. The development of semiconductor quantum wells and superlattices has played a large role in furthering the physics of semiconductors and has been central to efforts to observe Bloch oscillations, the quantum spin Hall effect and exciton condensation in quantum Hall bilayers, among many other discoveries.

However, there was one development that apparently did not require much of a technological advance in semiconductor processing; it was simply overlooked. This was the discovery of electron-hole droplets in the late 60s and early 70s in crystalline germanium and silicon. A lot of work on this topic was done in the Soviet Union, on both the theoretical and experimental fronts, but because of this, finding the relevant papers online is quite difficult! An excellent review on the topic was written by L. Keldysh, who also did much of the theoretical work on electron-hole droplets and was probably the first to recognize them for what they were.

Before continuing, let me emphasize that when I say electron-hole droplet, I literally mean something quite akin to water droplets in a fog. In a semiconductor, the exciton gas condenses into a mist-like substance, with electron-hole droplets surrounded by a gas of free excitons. This is possible because the electron-hole recombination time is orders of magnitude longer than the time it takes to undergo the transition to the electron-hole droplet phase. The droplet can therefore be treated as if it were in thermal equilibrium, even though it is clearly a non-equilibrium state of matter. Recombination takes longer in an indirect-gap semiconductor, which is why silicon and germanium were used for these experiments.

A bit of history: the field got started in 1968 when Asnin, Rogachev and Ryvkin in the Soviet Union observed a jump in the photoconductivity of germanium at low temperature when it was excited above a certain threshold intensity (i.e. when the density of excitons exceeded \sim 10^{16} \textrm{cm}^{-3}). The interpretation of this observation in terms of an electron-hole droplet was put on firm footing when a broad luminescence peak was observed by Pokrovski and Svistunova at ~709 meV, below the exciton line at ~714 meV. The intensity of this peak increased dramatically upon lowering the temperature, with a substantial increase within just a tenth of a degree, an observation suggestive of a phase transition. I reproduce the luminescence spectrum from this paper by T.K. Lo, showing the free exciton and electron-hole droplet peaks, because, as mentioned, the Soviet papers are difficult to find online.

[Image: luminescence spectrum from T.K. Lo's paper showing the free exciton and electron-hole droplet peaks]

From my description so far, the most pressing remaining questions are: (1) why is there an increase in the photoconductivity due to the presence of droplets? and (2) is there better evidence for the droplet than just the luminescence peak? Because free excitons are also known to form biexcitons (i.e. excitonic molecules), the peak could easily be interpreted as evidence of biexcitons rather than of an electron-hole droplet, and this was a point of much contention in the early days of studying the electron-hole droplet (see the Aside below).

Let me answer the second question first, since the answer is a little simpler. The most conclusive evidence (besides the excellent agreement between theory and experiment) was, literally, pictures of the droplets! Because the electrons and holes within a droplet recombine, they emit the characteristic radiation shown in the luminescence spectrum above, centered at ~709 meV. This is in the infrared region, and J.P. Wolfe and collaborators were actually able to take pictures of the droplets in germanium (~4 microns in diameter) with an infrared-sensitive camera. Below is a picture of the droplet cloud. Notice that the cloud is anisotropic, which is due to the crystal symmetry and the fact that phonons can propel the electron-hole liquid!

[Image: infrared photograph of the electron-hole droplet cloud in germanium]

The first question is a little tougher to answer, but it can be addressed with a qualitative description. When the excitons condense into the liquid, the density of "excitons" in this region is much higher; in fact, the inter-exciton distance becomes smaller than the electron-hole separation within an exciton in the surrounding gas. It is therefore no longer appropriate to speak of a particular electron as being bound to a particular hole in the droplet: the electrons and holes move independently. Naively, one can rationalize this by noting that at such high densities the exchange interaction becomes strong, so electrons and holes can easily switch partners. The electron-hole liquid is thus a multi-component degenerate plasma, similar to a Fermi liquid, and it even has a Fermi energy, on the order of 6 meV. In other words, the electron-hole droplet is metallic! Because the carriers in the droplet are unbound and mobile, the droplets contribute to conduction, which is why the photoconductivity jumps when they form.
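To get a feel for the ~6 meV Fermi energy quoted above, here is a minimal back-of-the-envelope estimate in Python. The pair density (~2 \times 10^{17} \textrm{cm}^{-3}) and effective mass (~0.2 m_e) are illustrative values I have assumed for germanium rather than numbers taken from the papers above, so treat the result as an order-of-magnitude check:

import numpy as np
from scipy import constants as const

n = 2e17 * 1e6            # assumed electron-hole pair density in the droplet (m^-3)
m_eff = 0.2 * const.m_e   # assumed carrier effective mass (illustrative)

k_F = (3 * np.pi**2 * n) ** (1 / 3)           # Fermi wavevector of a degenerate gas
E_F = const.hbar**2 * k_F**2 / (2 * m_eff)    # Fermi energy in joules

print(round(E_F / const.e * 1000, 1))         # ~6 meV

The point is simply that a degenerate Fermi gas at these densities and masses naturally lands at the meV scale quoted above.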

So why do the excitons form droplets at all? This is a question of kinetics and has to do with a delicate balance between evaporation, surface tension, electron-hole recombination and the probability of an exciton in the surrounding gas being absorbed by the droplet. Keldysh’s article, linked above, and the references therein are excellent for the details on this point.

In light of the discovery that bismuth (also a compensated electron-hole liquid!) was recently found to be superconducting at ~530 microkelvin, one may ask whether electron-hole droplets could also become superconducting at similar or lower temperatures. From my brief searches online, it doesn't seem like this question has been seriously addressed in the theoretical literature, and it would be an interesting route towards non-equilibrium superconductivity.

Just a couple years ago, a group also reported the existence of small quantum droplets in GaAs, demonstrating that research on this topic is still alive. To my knowledge, electron-hole drops have thus far not been observed in single-layer transition metal dichalcogenide semiconductors, which may present an interesting route to studying dimensional effects on the electron-hole droplet. However, this may be challenging since most of these materials are direct-gap semiconductors.

Aside: Sadly, it seems that evidence for the electron-hole droplet had actually been found at Bell Labs by J.R. Haynes in 1966, in this paper, two years before the 1968 Soviet work, though Haynes did not recognize it as such. He attributed his observation to the excitonic molecule (or biexciton), which, it turns out, he did not have the statistics to observe. Later experiments confirmed that what he had seen was indeed the electron-hole droplet. Strangely, Haynes' paper is still cited relatively frequently in the context of biexcitons, since he provided quite a nice analysis of his results! It also so happened that Haynes died after his paper was submitted, so he never found out that he had actually discovered the electron-hole droplet.

Strontium Titanate – A Historical Tour

Like most ugly haircuts, materials tend to go in and out of style over time. Strontium titanate (SrTiO3), commonly referred to as STO, has, since its discovery, been somewhat timeless. And this is not just because it is often used as a substitute for diamonds. What I mean is that studying STO rarely seems to go out of style and the material always appears to have some surprises in store.

STO was first synthesized in the 1950s, before it was discovered naturally in Siberia. It didn’t take long for research on this material to take off. One of the first surprising results that STO had in store was that it became superconducting when reduced (electron-doped). This is not remarkable in and of itself, but this study and other follow-up ones showed that superconductivity can occur with a carrier density of only ~5\times 10^{17} cm^{-3}.

This is surprising in light of BCS theory, in which the Fermi energy is assumed to be much greater than the Debye energy (\hbar\omega_D), which is clearly not the case here. There have been claims in the literature suggesting that the superconductivity may be plasmon-induced, since the plasma frequency is in the phonon energy regime. L. Gorkov recently put a paper up on the arXiv discussing the mechanism problem in STO.
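To see just how inverted the usual BCS hierarchy is here, a rough estimate helps. In the sketch below, the carrier density is the ~5 \times 10^{17} \textrm{cm}^{-3} quoted above, while the effective mass (~1.8 m_e) and Debye temperature (~500 K) are typical values I have assumed for STO rather than numbers from the cited studies:

import numpy as np
from scipy import constants as const

n = 5e17 * 1e6            # carrier density quoted above (m^-3)
m_eff = 1.8 * const.m_e   # assumed conduction-band effective mass for STO
theta_D = 500             # assumed Debye temperature for STO (K)

k_F = (3 * np.pi**2 * n) ** (1 / 3)
E_F_meV = const.hbar**2 * k_F**2 / (2 * m_eff) / const.e * 1000
E_Debye_meV = const.k * theta_D / const.e * 1000

print(round(E_F_meV, 1))       # ~1 meV
print(round(E_Debye_meV, 1))   # ~43 meV

With any reasonable choice of parameters, the Fermi energy comes out one to two orders of magnitude below the Debye energy, the opposite of the regime in which BCS theory is usually justified.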

Soon after the initial work on superconductivity in doped STO, Shirane, Yamada and others began studying pure STO in light of the "soft mode" theory of structural phase transitions put forth by W. Cochran and others. Because STO undergoes an antiferrodistortive structural phase transition at ~110K (depicted below), they were able to observe the corresponding soft phonon at the Brillouin zone boundary (shown below, taken from this paper). These results had vast implications for how we understand structural phase transitions today, when it is almost always assumed that a phonon softens at the transition temperature of a continuous structural phase transition.

Many materials similar to STO, such as BaTiO3 and PbTiO3, which share the perovskite crystal structure motif, undergo a phase transition to a ferroelectric state at low (or not so low) temperatures. The transition to the ferroelectric state is accompanied by a diverging dielectric constant (and dielectric susceptibility), much in the way the magnetic susceptibility diverges at the transition from a paramagnetic to a ferromagnetic state. In 1978, Müller (of Bednorz and Müller fame) and Burkard reported that at low temperature, the dielectric constant of STO begins its ascent towards divergence, but then saturates at around 4K (the data are shown in the top panel below). Ferroelectricity is associated with the softening of a transverse optical phonon at the zone center, and in the case of STO this softening begins but never quite completes, as shown schematically in the image below (and as you can also see in the data by Shirane and Yamada above).

[Image: signatures of quantum paraelectricity in STO: saturation of the dielectric constant and incomplete softening of the zone-center phonon (taken from Wikipedia)]

The saturation of the large dielectric constant and the not-quite-softening of the zone-center phonon have led authors to refer to STO as a quantum paraelectric (i.e. the zero-point motion of the transverse optical zone-center phonon prevents the material from ever gaining enough energy to undergo the ferroelectric transition). As recently as 2004, however, it was reported that one can induce ferroelectricity in STO films at room temperature by straining them.

In recent times, STO has become a common substrate material thanks to processes that can make its surface atomically flat. While this may not sound so exciting, it has had vast implications for the physics of thin films and interfaces. Firstly, this property has enabled researchers to grow high-quality thin films of cuprate superconductors using molecular beam epitaxy, which was a big challenge in the 1990s. More recently, it has led to the discovery of a two-dimensional electron gas, superconductivity and ferromagnetism at the LAO/STO interface, a startling finding given that both materials are electrically insulating. Even more remarkably, when FeSe (a superconductor at around 7K) is grown as a monolayer film on STO, its transition temperature is boosted to around 100K (though the precise transition temperature is disputed in subsequent experiments, it remains high!). This has led to the idea that the FeSe somehow "borrows the pairing glue" from the underlying substrate.

STO is a gem of a material in many ways. I doubt that we are done with its surprises.