
Environmental negligence part 1: Leaded gasoline

Before I start this post, I just want to say that I hope you are all doing okay with regard to the spread of coronavirus. It is important that we take this situation seriously in an effort to minimize the risk to ourselves and others.

As a professional scientist, I have to admit that I sometimes struggle with technology’s dual nature. I realize that studying physics and being able to come up with ideas to test in the laboratory is an enormous privilege. But scientific knowledge also comes with substantial weight.

Probably the biggest challenges facing humanity today, environmental damage and climate change, have arisen partly as an outgrowth of technologies stemming from the study of electromagnetism and thermodynamics in the 19th century. Eliminating greenhouse gas emissions is a monumental task, nuanced with all sorts of moral issues relating to the developing world. In particular, the developing world has not been primarily responsible for most of the anthropogenic greenhouse gases in our atmosphere today, but will likely see its development stunted as the world attempts to cut emissions.

In this series of posts, I raise some questions pertaining to how we as a society here in the US have dealt with environmental issues in the past and identify a few patterns of behavior. Specifically, I ask whether there exist deep-rooted structural problems and whether there are options to fix them. I focus on three particular examples (among many!), those of leaded gasoline, chlorofluorocarbons (CFCs) and PFOA (used in making Teflon), to illustrate the difficulties in fighting environmental damage, but also why there is some hope in doing so.

For thousands of years, lead has been used for many purposes, most famously for pipes and aqueducts in the Roman empire. Although even the Romans were aware of the toxicity of lead, they continued to use it. Thus, lead poisoning has been in the public consciousness for at least a couple thousand years. In the US, however, it was thought that low levels of lead exposure did not pose a serious health risk; lead had been used in paint for centuries, for example. In the early 1920s, though, something changed. Lead made its way into the air we breathe through the automotive industry, as a gasoline additive prized for its anti-knock properties. It was this use of lead that put it basically everywhere in the atmosphere and in surface ocean water. Despite the well-known health risks of lead to the general public, no studies were conducted by any government agency or by any of the companies involved before sales of leaded gasoline were permitted in the marketplace.

Just a couple of years after tetraethyl lead (TEL) was included in gasoline, workers at the company producing it (the Ethyl Gasoline Corporation, a joint venture between General Motors, DuPont and Standard Oil) started suffering the consequences. Some died, some “went crazy” because lead is a neurotoxin, and others showed significant mental deterioration. Alexander Gettler and Charles Norris, a toxicologist and the Chief Medical Examiner of New York respectively, were tasked with performing autopsies on four of the workers who had died in relation to their work at the company. Their report, published in 1925, showed significant levels of lead in the brain tissue of the victims, more so than in patients who exhibited conventional lead poisoning (i.e. not from TEL). They speculated that the lead in TEL was somehow attracted to brain tissue more than regular lead.

Around the time of their report, New York, New Jersey and the city of Philadelphia all banned the sale of leaded gasoline. Also in 1925, the U.S. Surgeon General formed a task force with the intention of performing a more thorough investigation of the effects of lead on the population, though it excluded Gettler and Norris from the committee. Around the same time, Thomas Midgley Jr., one of the inventors of TEL and a scientist at the Ethyl Gasoline Corporation, published a paper on the lack of hazards posed by leaded gasoline to the general public. It almost goes without saying that this was a massive conflict of interest. In addition to this paper, there were heavy propaganda efforts aimed at making leaded gasoline seem like a huge step forward for the automotive industry. Below are a couple of examples of advertisements from 1927 and 1953 respectively (notice how lead is never mentioned):

The U.S. Surgeon General ultimately sided with Midgley and industry insiders, citing a single short seven-month study that showed a lack of evidence that TEL was causing harm. The federal government lifted all bans on the sale of leaded gasoline. In a rather foreboding gesture, the task force did acknowledge the possibility that with more cars on the road in the future, the issue would have to be re-visited. This kicking of the can down the lead-coated road would last about 60 years. More about the Surgeon General’s report can be read here (PDF!).

One thing I should mention about the U.S. Surgeon General’s task force is that it abided by what became known as the Kehoe Rule, which puts the burden of proof on those claiming that leaded gasoline is unsafe. This is in contrast to the precautionary rule, which would require a product like leaded gasoline to be shown safe before it could be introduced into the public arena.

How did leaded gasoline ultimately get banned from use? This is where the story takes an unlikely turn. Enter Clair Patterson, a geochemist working on determining the age of the Earth. At the recommendation of his PhD advisor, Patterson started working on measuring the ratio of lead to uranium in old rocks, as uranium-238 decays into lead with a half-life of about 4.5 billion years. What Patterson found was startling: he was finding lead contamination everywhere. Whatever rock he looked at, no matter how clean his laboratories were, was contaminated. Distilled water, glassware, you name it; everything in his lab carried traces of lead, and this prevented him from obtaining the correct ratio.

Because of this contamination, Patterson spent years building the world’s first “cleanroom”, one that would be lead-free. Below is a rather inspiring image of Clair Patterson scrubbing the lab floor (taken from here):

[Image: Clair Patterson scrubbing his laboratory floor]

With his massive effort to create a lead-free zone, Clair Patterson was ultimately able to determine the age of the earth: 4.5 billion years. But this isn’t what this story is about.

After going to such lengths to fight off lead contamination, Patterson realized where the lead was coming from. In 1965, Patterson tried to convince the public that leaded gasoline was a major health hazard by publishing Contaminated and Natural Lead Environments of Man. Even though he was the world’s foremost expert on the topic at the time, he was left off a National Research Council effort to study lead in the atmosphere in 1971. Notice a pattern (recall Gettler and Norris above)? Once Patterson turned his studies toward lead contamination in food, it became abundantly clear that lead was present in every facet of life on earth.

For his efforts, Patterson was hounded by industry insiders and denied contracts with many research organizations. But ultimately, he did win his long-fought battle. He was massively helped in this battle by Herbert Needleman, whose research showed that prolonged lead exposure in children likely resulted in lowered mental capacity. In 1986, the US phased out leaded gasoline, over six decades after the first warnings were raised by scientific watchdogs.

There is much to learn from this particular story, but before I conclude, I would like to relate a couple more historical anecdotes in the days to come that I think we can learn from. More to follow…

 

*Much of this post was learned through the following references:

https://jamanetwork.com/journals/jama/article-abstract/237366

https://en.wikipedia.org/wiki/Clair_Cameron_Patterson#Campaign_against_lead_poisoning

https://pubs.acs.org/doi/pdf/10.1021/ie50188a020

https://ajph.aphapublications.org/doi/pdf/10.2105/AJPH.75.4.344

Looney Gas and Lead Poisoning: A Short, Sad History

https://www.mentalfloss.com/article/94569/clair-patterson-scientist-who-determined-age-earth-and-then-saved-it

 

Whence we know the photon exists

In my previous post, I laid out the argument discussing why the photoelectric effect does not imply the existence of photons. In this post, I want to outline, not the first, but probably the conceptually simplest experiment that showed that photons do indeed exist. It was performed by Grangier, Roger and Aspect in 1986, and the paper can be found at this link (PDF!).

The idea can be described by considering the following simple experiment. Imagine I have light impinging on a 50/50 beamsplitter and detectors at both of the output ports, as pictured below. In this configuration, 50% of the light will be transmitted, labeled t below, and 50% of the light will be reflected, labeled r below.

[Figure: a 50/50 beamsplitter with detectors D1 and D2 at the two output ports]

Now, if a discrete and indivisible packet of light, i.e. a photon, is shone on the beam splitter, then it must either be reflected (and hit D1) or be transmitted (and hit D2). The detectors are forbidden from clicking in coincidence. However, there is one particularly tricky thing about this experiment. How do I ensure that I only fire a single photon at the beam splitter?

This is where Aspect, Roger and Grangier provide us with a rather ingenious solution. They used a two-photon cascade from a calcium atom to solve the issue. For the purpose of this post, one only needs to know that when a photon excites the calcium atom to an excited state, it emits two photons as it relaxes back down to the ground state. This is because it relaxes first to an intermediate state and then to the ground state. This process is so fast that the photons are essentially emitted simultaneously on experimental timescales.

Now, because the calcium atom relaxes in this way, the first photon can be used to trigger the detectors to turn them on, and the second photon can impinge on the beam splitter to determine whether there are coincidences among the detectors. A schematic of the experimental arrangement is shown below (image taken from here; careful, it’s a large PDF file!):

[Figure: schematic of the Grangier–Roger–Aspect experimental arrangement]

Famously, they were essentially able to extrapolate their results and show that the photons are perfectly anti-correlated, i.e. that when a photon reflects off of the beam splitter, there is no transmitted photon and vice versa. Behold, the photon!
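To make the logic of the coincidence test concrete, here is a minimal Monte Carlo sketch (my own toy model, not the analysis from the paper) that estimates the anticorrelation parameter \alpha = P_c/(P_1 P_2), where P_1 and P_2 are the singles probabilities at the two detectors and P_c is the coincidence probability. A true single-photon source gives \alpha = 0, while faint classical (Poissonian) light of the same mean photon number gives \alpha = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000

def anticorrelation(photon_counts):
    """Estimate alpha = P_c / (P_1 * P_2) at a 50/50 beamsplitter, where
    photon_counts holds the number of photons arriving per trigger window."""
    d1 = rng.binomial(photon_counts, 0.5)   # each photon picks a port at random
    d2 = photon_counts - d1
    p1 = np.mean(d1 > 0)                    # singles probability at D1
    p2 = np.mean(d2 > 0)                    # singles probability at D2
    pc = np.mean((d1 > 0) & (d2 > 0))       # coincidence probability
    return pc / (p1 * p2)

single = np.ones(n_trials, dtype=np.int64)   # exactly one photon per trigger
classical = rng.poisson(1.0, n_trials)       # faint Poissonian light, same mean

print("alpha, single photons:", anticorrelation(single))     # -> 0
print("alpha, Poissonian:    ", anticorrelation(classical))  # -> ~1
```

Any classical wave model of light predicts \alpha \geq 1; measuring \alpha well below 1, as Grangier, Roger and Aspect did, is what rules the classical picture out.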

However, they did not stop there. To show that quantum mechanical superposition applies to single photons, they sent these single photons through a Mach-Zehnder interferometer (depicted schematically below, image taken from here).

[Figure: Mach–Zehnder interferometer schematic]

They were able to show that single photons do indeed interfere; the fringes were observed with a visibility of about 98%. A truly triumphant experiment that not only showed the existence of photons cleanly, but also that their properties are non-classical and can be described by quantum mechanics!
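The interference part is a simple two-component amplitude calculation. Below is a short sketch (again my own, assuming the standard symmetric beamsplitter convention) propagating a single photon’s amplitudes through the interferometer; the two output probabilities oscillate as sin^2(\phi/2) and cos^2(\phi/2), i.e. full-visibility fringes even with one photon at a time:

```python
import numpy as np

# Symmetric 50/50 beamsplitter acting on the two-port amplitude vector.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

def mz_probabilities(phi):
    """Single-photon detection probabilities at the two output ports of a
    Mach-Zehnder interferometer with relative phase phi between the arms."""
    psi = np.array([1.0, 0.0])                   # photon enters one input port
    psi = BS @ psi                               # first beamsplitter
    psi = np.diag([np.exp(1j * phi), 1]) @ psi   # phase accumulated in one arm
    psi = BS @ psi                               # second beamsplitter
    return np.abs(psi) ** 2                      # [sin^2(phi/2), cos^2(phi/2)]

for phi in (0.0, np.pi / 2, np.pi):
    print(phi, mz_probabilities(phi))
```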

Critical Slowing Down

I realize that it’s been a long while since I’ve written a post, so the topic of this one, while unintentionally so, is quite apt.

Among the more universal themes in studying phase transitions is the notion of critical slowing down. Most students are introduced to this idea in the context of second order phase transitions, but it has turned out to be a useful concept in a wide range of systems beyond this narrow framework and into subjects well outside the purview of the average condensed matter physicist.

Stated simply, critical slowing down refers to the phenomenon observed near phase transitions where a slight perturbation or disturbance away from equilibrium takes a really long time to decay back to equilibrium. Why is this the case?

The main idea can be explained within the Landau theory of phase transitions, and I’ll take that approach here since it’s quite intuitive. As you can see in the images below, when the temperature is far from T_c, the potential well can be approximated by a parabolic form. However, this is not possible near T_c.

[Figure: Landau potentials far from and near T_c]

Mathematically, this can be explained by considering a simple form of the Landau potential:

V(x) = \alpha (T-T_c) x^2 + \beta x^4

Near T_c, the quadratic term vanishes, and we are left with only the quartic one. Although it’s clear from the images why the dynamics slow down near T_c, it helps to spell out the math a little.

Firstly, imagine that the potential is filled with some sort of viscous fluid, something akin to honey, and that the dynamics of the ball represent those of the order parameter. This puts us in the “overdamped” limit, where the order parameter reaches the equilibrium point without executing any sort of oscillatory motion. Far from T_c, as mentioned above, we can approximate the dynamics with the parabolic form of the potential (using the equation of motion for the overdamped limit, \dot{x} = -dV/dx):

\dot{x} = -\gamma(T) x, \qquad \gamma(T) = 2\alpha(T-T_c)

The solution to this differential equation is of exponential form, i.e. x(t) = x(0)e^{-\gamma(T) t}, and the relaxation back to equilibrium is therefore characterized by a temperature-dependent timescale \tau = 1/\gamma(T), which grows dramatically as T approaches T_c.

However, near T_c, the parabolic approximation breaks down as the quadratic term becomes very small, and we have to take the quartic term into consideration. The order parameter dynamics are then described by:

\dot{x} = -4\beta x^3,

which has a solution of the form x(t) \sim 1/\sqrt{\beta t}. Noticeably, the dynamics of the order parameter obey a much slower power law decay near T_c, as illustrated below:

[Figure: exponential vs. power-law relaxation of the order parameter]
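To see the same thing without any approximations, here’s a quick numerical sketch (my own, with arbitrarily chosen values \alpha = \beta = 1) that integrates the full overdamped equation with forward Euler stepping for the two cases:

```python
import numpy as np

alpha, beta = 1.0, 1.0      # arbitrary toy values
dt, n_steps = 1e-3, 20_000
t = np.arange(n_steps) * dt

def relax(T, Tc, x0=0.1):
    """Forward-Euler integration of the overdamped equation
    xdot = -dV/dx with V(x) = alpha*(T - Tc)*x**2 + beta*x**4."""
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        dVdx = 2 * alpha * (T - Tc) * x[i - 1] + 4 * beta * x[i - 1] ** 3
        x[i] = x[i - 1] - dt * dVdx
    return x

far = relax(T=2.0, Tc=1.0)   # gamma = 2*alpha*(T - Tc) = 2
near = relax(T=1.0, Tc=1.0)  # gamma = 0: only the quartic term survives

# Compare the endpoints to the analytic forms derived above:
print(far[-1] / (0.1 * np.exp(-2 * t[-1])))                       # ~1: exponential
print(near[-1] / (0.1 / np.sqrt(1 + 8 * beta * 0.1**2 * t[-1])))  # ~1: power law
```

The near-T_c trajectory takes dramatically longer to return to equilibrium, which is the whole point.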

At this point, one would naively think, “okay, so this is some weird thing that happens near a critical point at a phase transition…so what?”

Well, it turns out that critical slowing down can actually serve as a precursor of an oncoming phase transition in all sorts of contexts, and can even be predictive! Here is a pair of illuminating papers showing that critical slowing down occurs near a population collapse in microbial communities (from the Scheffer group and from the Gore group). As an aside, the Gore group used the budding yeast Saccharomyces cerevisiae in their experiments, which is the yeast used in most beers (I wonder if their lab has tasting parties, and if so, can I get an invitation?).

Here is another recent paper showing critical slowing down in a snap-through instability of an elastic rod. I could go on and on listing the different contexts where critical slowing down has been observed, but I think it’s better that I cite this review article.

Surprisingly, critical slowing down has been observed at continuous, first-order and far-from-equilibrium phase transitions! As a consequence of this generality, the observation of critical slowing down can be predictive. If the appropriate measurements could be made, one may be able to see how close the earth’s climate is to a “tipping point” from which it will be very difficult to return due to hysteretic effects (see this paper, which finds some form of critical slowing down in previous climatic changes in the earth’s history). But for now, it’s just interesting to look for critical slowing down in other contexts that are a little easier to predict and where perhaps the consequences aren’t as dire.

*Thanks to Alfred Zong who introduced me to many of the above papers

**Also, a shout out to Brian Skinner who caught repeated noise patterns in a recent preprint on room temperature superconductivity. Great courage and good job!

Pictures of Band Theory: A real space view of where bands and band gaps come from

In learning solid state physics, one of the most difficult conceptual hurdles to overcome is understanding band theory. This is partly due to the difficulty of thinking in reciprocal space, as highlighted on the Nanoscale Views blog in the post “The Tyranny of Reciprocal Space”. In this post, I will sacrifice accuracy in favor of an intuitive, real-space picture of band theory. Hopefully, this post will help newcomers overcome those scary feelings when first exposed to solid state physics.

Firstly, it is necessary to recount the mathematical form of a Bloch wavefunction:

\psi_{k}(r) = e^{ikr}u_{k}(r)

Let’s pause for a second to take a look at what this means — the Bloch wave consists of a plane wave portion multiplied by a periodic function. In this post, for illustration purposes, I’ll simplify this by treating both parts of the Bloch wave as real.1 Take a look  at the image below to see what this implies:


Fig 1: (a) The periodic potential. (b) The Bloch wavefunction. (c) The periodic part of the Bloch wave function. (d) The sinusoidal envelope part of the Bloch wavefunction.
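As a quick illustration of Fig. 1, here is a toy construction of my own (with an arbitrary cosine chosen as the periodic part, not a solution of any particular potential) building such a real “Bloch-like” function:

```python
import numpy as np

a, N = 1.0, 15                           # lattice constant, number of cells (toy values)
x = np.linspace(0, N * a, 3000)

u = np.cos(2 * np.pi * x / a)            # toy periodic part: u(x + a) = u(x)
envelope = np.sin(np.pi * x / (N * a))   # slowly varying sinusoidal envelope
psi = envelope * u                       # real "Bloch-like" wavefunction

# psi has the same rapid wiggle in every unit cell, with its overall
# amplitude modulated by the slow envelope, just as in Fig. 1(b)-(d).
```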

Within this seemingly simple picture, one can explain the origin of band structure and why band gaps appear.

Let’s see first how band structure arises. For ease, since most readers of this blog are likely familiar with the solution to the infinite square well problem, we shall start there. Pictured below is a periodic potential with infinitely high walls between the wells; the first two wavefunctions for each well look like so:


Fig. 2: n=1 and n=2 wavefunctions for the periodic infinite square well.

The wavefunctions from well to well don’t have to be in phase, but I’ve just drawn them that way for ease. Bands arise when we reduce the height of the walls and let the wavefunctions bleed over into the neighboring wells. This is most easily seen for the two-well potential case, as seen below:

[Image: n=1 levels of the two-well potential as the barrier height is reduced]

In the first row, I have just plotted the n=1 energy levels for each well. Once the barrier height has been reduced, the (formerly degenerate) energy levels split into a symmetric and an anti-symmetric state. I have not plotted the n=2 levels — this is just what happens when the n=1 levels interact! How much the energy levels split is determined by how much I reduce the barrier height: the more I reduce the barrier, the larger the splitting. In band language, as you’ll see below, this implies that the lower the barrier height, the greater the dispersion.

One important thing to take away from this picture is that both in the infinite and finite barrier cases, we can fit at most four electrons in these two levels (if we include spin). In the infinite barrier case, two electrons can fit in the n=1 level in each well, and in the finite barrier case, two electrons can go into the symmetric state and two in the anti-symmetric state.
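This splitting is easy to verify numerically. Here is a sketch (my own finite-difference toy model with \hbar = m = 1 and an arbitrary well geometry) that diagonalizes a double-well Hamiltonian for a few barrier heights:

```python
import numpy as np

def double_well_levels(barrier_height):
    """Two lowest eigenvalues of a 1D double well (hbar = m = 1),
    discretized by finite differences on a grid."""
    n, L = 400, 4.0
    x = np.linspace(0, L, n)
    dx = x[1] - x[0]
    V = np.where((x > 1.5) & (x < 2.5), barrier_height, 0.0)  # central barrier
    V = np.where((x < 0.5) | (x > 3.5), 1e6, V)               # hard outer walls
    # Kinetic energy -(1/2) d^2/dx^2 via the standard 3-point stencil:
    off = -np.ones(n - 1) / (2 * dx**2)
    H = np.diag(V + 1.0 / dx**2) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:2]

for Vb in (1e6, 20.0, 8.0):
    E0, E1 = double_well_levels(Vb)
    print(f"barrier={Vb:g}: E0={E0:.3f}, E1={E1:.3f}, splitting={E1 - E0:.4f}")
```

With the essentially infinite barrier the two levels are degenerate to numerical precision; as the barrier is lowered, they split into the symmetric/anti-symmetric pair, and the splitting grows, just as in the pictures above.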

Now, let’s return to the case where we have an infinite  number (okay, I only drew fifteen!) of finite potential wells. In analogy to the two-well problem, we can draw the states for the case where the heights of the potential wells have been reduced:


Fig. 3: n=1 and n=2 wavefunctions for the periodic finite square well. My lack of artistic skills is severely exposed for the n=2 level here, but imagine that the wavefunctions don’t look so discontinuous.

 

This is where things get interesting. How do we represent the n=1 states in analogy with the symmetric and anti-symmetric states in the two-well case? We can invoke Bloch’s theorem. It basically says that you just multiply this periodic part by a sinusoidal function!

The sinusoidal function ends up being an envelope function, just like in the very first figure above. Here is what the lowest energy level would look like for the periodic finite potential well:


Fig. 4: The lowest energy wavefunction for the n=1 level

This state is the analog of the symmetric state in the two-well case. To preserve the number of states in going from the infinite barrier case to the finite barrier case, I can only multiply the periodic part by N sinusoidal envelope functions, where N is the number of potential wells — in this case, fifteen!

Therefore the functions from the n=1 level end up looking like this:

 


Fig. 5: Wavefunctions that comprise the n=1 band

 

These are the wavefunctions that comprise a single band, that is, the band formed by the n=1 level. Interestingly, just from looking at the wavefunctions, you can see that the wavefunctions for the n=1 band increase in energy in going from the totally symmetric state to the totally antisymmetric state, as the number of nodes in the wavefunction increases. Notice here also how this connects to the reciprocal space picture — the totally antisymmetric wavefunction was multiplied by an envelope function of wavelength 2a, which is the state at the Brillouin zone boundary!

Now, in this picture, why do band gaps exist? Understanding this point requires applying the same envelope multiplication procedure to the n=2 levels. In particular, when one multiplies by the 2a envelope function, it essentially has the effect of flipping the wavefunction in each well, so that we get something that looks like this (again, imagine a continuous function here; my artistic skills fail me):


Fig. 6: The zone boundary (\pi/a) wavefunction for the n=2 level

 

Imagine for a second what this function would look like in the absence of a barrier (or with a very small barrier height). It turns out that it would look very similar to the highest energy wavefunction of the n=1 band! This is pictured below:

 


Fig. 7: The zone boundary (\pi/a) wavefunctions for the n=1 and n=2 energy levels with a negligible barrier height

What you can see here is that at the zone boundary, the wavefunctions look essentially the same and are essentially degenerate. This degeneracy is broken when the barriers are present. The barriers “mess up” the wavefunctions so that they are no longer perfect sinusoids, changing the energies of both the zone boundary blue n=1 and the orange n=2 curves so that their energies are no longer the same. In other words, a gap has opened between the wavelength-2a n=1 and n=2 energy levels! You can sort of use your eyes to interpolate between Fig. 6 and Fig. 7 to see that the energy of the n=2 level must increase as it loses its pure sinusoidal character and, by comparing Fig. 6 to the last image in Fig. 5, that the zone boundary wavefunction degeneracy has been lifted.

In this picture, you can also easily see that when the periodic part of the n=2 wavefunction is multiplied by the longest-wavelength sinusoidal envelope, it actually has the highest energy in the n=2 band. This can be seen by comparing the orange curves in Fig. 7 and Fig. 3: the curve in Fig. 3 has many more nodes. The lowest energy is actually obtained when the n=2 periodic function is multiplied by the sinusoidal function of wavelength 2a, i.e. at the zone boundary. This implies that, in contrast to the first band, the second one disperses downward from the center of the Brillouin zone.
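The whole picture above — N states per band and a gap between the n=1 and n=2 bands — can be checked by direct diagonalization, along the same lines as the double-well sketch earlier (again a toy potential of my own choosing, with hard walls at the ends of the chain):

```python
import numpy as np

n_wells, a, V0 = 15, 1.0, 40.0   # toy parameters
n = 1500
x = np.linspace(0, n_wells * a, n)
dx = x[1] - x[0]

# Periodic potential: a barrier of height V0 in the last 30% of each cell.
V = np.where((x % a) > 0.7 * a, V0, 0.0)

off = -np.ones(n - 1) / (2 * dx**2)
H = np.diag(V + 1.0 / dx**2) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(H)[:2 * n_wells]

band1, band2 = E[:n_wells], E[n_wells:]   # 15 states per band
print(f"n=1 band: {band1.min():.2f} to {band1.max():.2f}")
print(f"n=2 band: {band2.min():.2f} to {band2.max():.2f}")
print(f"band gap: {band2.min() - band1.max():.2f}")
```

The lowest fifteen eigenvalues cluster into a narrow n=1 band, the next fifteen into a wider n=2 band, and a clear gap separates the two.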

One more thing to note, which has been implicit in the discussion, is that the n=1 level essentially has the symmetry of an s-like wavefunction, whereas the n=2 level has that of a p-like wavefunction. If one keeps going with this picture, you can essentially get d- and f-like bands as well.

I hope this post helps bring an end to the so-called “tyranny of reciprocal space”. It is not difficult to imagine the wavefunctions in real space, and with this framework, band theory shouldn’t be so intimidating to newcomers!

I actually wonder what the limitations of this picture are — if anyone sees how to explain, for instance, the Berry phase within this picture, I’d be interested to hear it!

 

1 This is of course not strictly correct, but it helps tremendously in visualizing what is going on.

Bands Aren’t Only For Crystalline Solids

If one goes through most textbooks on solid state physics, such as Ashcroft and Mermin, one can easily forget that most of the solids in this world are not crystalline. If I look around my living room, I see a ceramic tea mug near a plastic pepper dispenser sitting on a wooden coffee table. In fact, it is very difficult to find something that we would call “crystalline” in the sense of solid state physics.

Because of this, one could almost be forgiven for thinking that bands are a property only of crystalline solids. That they are not can be seen within a picture-based framework. As is usual on this blog, let’s start with the wavefunctions of the infinite square well and the two-well potential. Take a look below at the wavefunctions for the infinite well and then at the first four pairs of wavefunctions for the double well (the images are taken from here and here):

[Figure: wavefunctions of the infinite square well]

[Figure: the first four pairs of wavefunctions of the double-well potential]

What you can already see forming within this simple picture is the notion of a “band”. Each “band” here contains only two energy levels, each of which can take two electrons when taking spin into consideration. If we generalize this picture, one can see that in going from two wells to N wells, one will get N energy levels per band.

However, there has been no explicit requirement that the wells be the same depth (although I did assume this above). It is quite easy to imagine potential wells like the ones below. The analogues of the symmetric and anti-symmetric states for the E1 level are shown below as well:

Again, this can be generalized to N potential wells whose depths vary from site to site, and one still gets a “band”, as the sketch below illustrates. The necessary requirement for band formation is that the electrons be allowed to tunnel from one site to the other, i.e. that they “feel” the presence of the neighboring potential wells. While the notion of a Brillouin zone won’t exist, nor will Bragg scattering of the electrons (which leads to the opening of gaps at the Brillouin zone boundaries), the notion of a band will persist within a non-crystalline framework.
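A minimal way to see this is with a tight-binding chain (a toy model of my own, with hopping amplitude t between neighboring wells and random on-site energies standing in for the varying well depths): the eigenvalues still bunch into a band of width roughly 4t whether or not the wells are identical:

```python
import numpy as np

rng = np.random.default_rng(1)
N, t = 100, 1.0   # number of wells and hopping amplitude (toy values)

def band_edges(disorder):
    """Eigenvalue range of a 1D tight-binding chain whose on-site
    energies (well depths) are drawn uniformly from [-disorder/2, disorder/2]."""
    eps = rng.uniform(-disorder / 2, disorder / 2, N)
    hop = -t * np.ones(N - 1)
    H = np.diag(eps) + np.diag(hop, 1) + np.diag(hop, -1)
    E = np.linalg.eigvalsh(H)
    return E.min(), E.max()

print("identical wells: ", band_edges(0.0))  # band spans roughly [-2t, 2t]
print("disordered wells:", band_edges(1.0))  # still an N-state band, slightly broadened
```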

Because solid state physics textbooks often don’t mention amorphous solids or glasses, one can easily forget which properties of solids are and are not limited to crystalline ones. We may not know how to apply these ideas mathematically to glasses with random potentials very well, but many of the concepts used in the framework describing crystalline solids are applicable to amorphous solids as well.