
Slight detour

I am still planning to follow up my previous post on environmental negligence with a post about CFCs in the near future. However, I recently saw this YouTube video and found it harrowing. The British government knew about the consequences of acute radiation poisoning, but chose to perform these tests anyhow. In addition to the lives irreversibly changed, there is also the remarkable fact that these people were able to see live images of their own bones and blood vessels with their eyes. Does anyone have a good explanation of how this would even be possible?


Environmental negligence part 1: Leaded gasoline

Before I start this post, I just want to say that I hope you are all doing okay with regard to the spread of coronavirus. It is important that we take this situation seriously in an effort to minimize the risk to ourselves and others.

As a professional scientist, I have to admit that I sometimes struggle with technology’s dual nature. I realize that studying physics and being able to come up with ideas to test in the laboratory is an enormous privilege. But scientific knowledge also comes with substantial weight.

Probably the biggest challenges facing humanity today, environmental damage and climate change, have arisen partly as an outgrowth of technologies related to the study of electromagnetism and thermodynamics in the 19th century. Eliminating greenhouse gas emissions is a monumental task, nuanced with all sorts of moral issues relating to the developing world. In particular, the developing world has not been primarily responsible for most of the anthropogenic greenhouse gases in our atmosphere today, but will likely see its development stunted as the world attempts to cut emissions.

In this series of posts, I raise some questions about how we as a society here in the US have dealt with environmental issues in the past and identify a few patterns of behavior. Specifically, I ask whether there exist deep-rooted structural problems and whether there are options to fix them. I focus on three particular examples (among many!), those of leaded gasoline, chlorofluorocarbons (CFCs) and PFOA (Teflon), to illustrate the difficulties in fighting environmental damage, but also why there is some hope in doing so.

For thousands of years, lead has been used for many purposes, most famously for pipes and aqueducts in the Roman empire. Although even the Romans were aware of the toxicity of lead, they continued to use it. Thus, lead poisoning has been in the public consciousness for at least a couple of thousand years. In the US, however, it was thought that low levels of lead exposure did not pose a serious health risk; lead had been used in paint for centuries, for example. In the early 1920s, though, something changed. Lead made its way into the air we breathe when the automotive industry began adding it to gasoline for its anti-knock properties. It was this use of lead, as a gasoline additive, that put lead basically everywhere in the atmosphere and in surface ocean water. Despite the well-known health risks of lead, no studies were conducted by any government agency or by any of the companies involved before sales of leaded gasoline were permitted in the marketplace.

Just a couple of years after tetraethyl lead (TEL) was first added to gasoline, workers at the company producing it (the Ethyl Gasoline Corporation, a joint venture between General Motors, DuPont and Standard Oil) started suffering the consequences. Some died, some “went crazy” because lead is a neurotoxin, and others showed significant mental deterioration. Alexander Gettler and Charles Norris, a toxicologist and the Chief Medical Examiner of New York respectively, were tasked with performing autopsies on four of the workers who had died in relation to their work at the company. Their report, published in 1925, showed significant levels of lead in the brain tissue of the victims, more so than in patients who exhibited conventional lead poisoning (i.e. not from TEL). They speculated that the lead in TEL was somehow attracted to brain tissue more than regular lead.

Around the time of their report, New York, New Jersey and the city of Philadelphia all banned the sale of leaded gasoline. Also in 1925, the U.S. Surgeon General formed a task force to perform a more thorough investigation of the effects of lead on the population, though it excluded Gettler and Norris from the committee. Around the same time, Thomas Midgley Jr., one of the inventors of TEL and a scientist at the Ethyl Gasoline Corporation, published a paper on the lack of hazards posed by leaded gasoline to the general public. It should go without saying that this was a massive conflict of interest. In addition to this paper, there were heavy propaganda efforts aimed at the public to make leaded gasoline seem like a huge step forward for the automotive industry; advertisements from 1927 and 1953, for example, never even mention lead.

The U.S. Surgeon General ultimately sided with Midgley and industry insiders, citing a single, short, seven-month study that showed a lack of evidence that TEL was causing harm. The federal government then lifted all bans on the sale of leaded gasoline. In a rather foreboding gesture, the task force did acknowledge that with more cars on the road in the future, the issue would have to be revisited. This kicking of the can down the lead-coated road would last about 60 years. More about the Surgeon General’s report can be read here (PDF!).

One thing I should mention about the U.S. Surgeon General’s task force is that it abided by what became known as the Kehoe Rule, which places the burden of proof on those claiming that leaded gasoline is unsafe. This is in contrast to the precautionary principle, which would have placed the burden of proof on the producers to show that leaded gasoline was safe before introducing it into the public arena.

How did leaded gasoline ultimately get banned? This is where the story takes an unlikely turn. Enter Clair Patterson, a geochemist trying to determine the age of the Earth. At the recommendation of his PhD advisor, Patterson started working on measuring the ratio of lead to uranium in old rocks, since uranium-238 decays into lead with a half-life of about 4.5 billion years. What Patterson found was startling: lead contaminants were everywhere. Whatever rock he looked at, no matter how clean his laboratories were, was always contaminated. Distilled water, glassware, you name it, everything in his lab was contaminated with lead, and this prevented him from obtaining the correct ratio.

Because of this contamination, Patterson spent years building the world’s first “cleanroom”, one that would be lead-free. Below is a rather inspiring image of Clair Patterson scrubbing the lab floor (taken from here):

[Image: Clair Patterson scrubbing the lab floor]

With his massive effort to create a lead-free zone, Clair Patterson was ultimately able to obtain the age of the Earth: 4.5 billion years. But that isn’t what this story is about.

After going to such lengths to fight off lead contamination, Patterson realized where the lead was coming from. In 1965, he tried to convince the public that leaded gasoline was a major health hazard by publishing Contaminated and Natural Lead Environments of Man. Even though he was the world’s foremost expert on the topic at the time, he was left off a 1971 National Research Council effort to study lead in the atmosphere. Notice a pattern (recall Gettler and Norris above)? Once Patterson turned his studies toward lead contamination in food, it became abundantly clear that lead was present in every facet of life on Earth.

For his efforts, Patterson was hounded by industry insiders and refused contracts with many research organizations. But ultimately, he did win his long-fought battle. He was helped massively in this battle by Herbert Needleman, whose research showed that prolonged lead exposure in children likely resulted in lowered mental capacity. In 1986, the US phased out leaded gasoline, more than 60 years after the first warnings were put out by scientific watchdogs.

There is much to learn from this particular story, but before I conclude, I would like to recount a couple more historical anecdotes in the days to come that I think we can learn from. More to follow…


*Much of this post was learned through the following references:

https://jamanetwork.com/journals/jama/article-abstract/237366

https://en.wikipedia.org/wiki/Clair_Cameron_Patterson#Campaign_against_lead_poisoning

https://pubs.acs.org/doi/pdf/10.1021/ie50188a020

https://ajph.aphapublications.org/doi/pdf/10.2105/AJPH.75.4.344

Looney Gas and Lead Poisoning: A Short, Sad History

https://www.mentalfloss.com/article/94569/clair-patterson-scientist-who-determined-age-earth-and-then-saved-it


The photoelectric effect does not imply photons

When I first learned quantum mechanics, I was told that we knew that the photon existed because of Einstein’s explanation of the photoelectric effect in 1905. As the frequency of light impinging on the cathode material was increased, electrons came out with higher kinetic energies. This led to Einstein’s famous formula:

K.E. = \hbar\omega - W.F.

where K.E. is the kinetic energy of the outgoing electron, \hbar\omega is the photon energy and W.F. is the material-dependent work function.

Since at least the mid-1960s, however, we have known that the photoelectric effect does not definitively imply the existence of photons. From the photoelectric effect alone, it is actually ambiguous whether it is the electronic levels or the impinging radiation that should be quantized!

So, why do we still give the photon explanation to undergraduates? To be perfectly honest, I’m not sure whether we do this because of some sort of intellectual inertia or because many physicists don’t actually know that the photoelectric effect can be explained without invoking photons. It is worth noting that Willis E. Lamb, who played a large role in the development of quantum electrodynamics, implored other physicists to be more cautious when using the word photon (see for instance his 1995 article entitled Anti-Photon, which gives an interesting history of the photon nomenclature and his thoughts as to why we should be wary of its usage).

Let’s return to 1905, when Einstein came up with his explanation of the photoelectric effect. Just five years prior, Planck had heuristically explained the blackbody radiation spectrum and, in the process, evaded the ultraviolet catastrophe that plagued explanations based on the classical equipartition theorem. Planck’s distribution consequently provided the first evidence of “packets of light”, quantized in units of \hbar\omega. At the time, Bohr had yet to come up with his atomic model suggesting that electronic levels were quantized; that had to wait until 1913. Thus, from his vantage point in 1905, Einstein made the most reasonable assumption at the time: that it was the radiation that was quantized and not the electronic levels.

Today, however, we have the benefit of hindsight.

According to Lamb’s aforementioned Anti-Photon article, G. Wentzel and G. Beck showed in 1926 that one could use a semi-classical theory (i.e. electronic energy levels are quantized, but light is treated classically) to reproduce Einstein’s result. In the mid- to late 1960s, Lamb and Scully extended the original treatment and made a point of emphasizing that one could get the photoelectric effect without invoking photons. The main idea can be sketched if you’re familiar with the Fermi golden rule treatment of a harmonic electric field perturbation of the form \textbf{E}(t) = \textbf{E}_0 e^{-i \omega t}, where \omega is the frequency of the incoming radiation. In the dipole approximation, we can write the perturbation as V(t) = -e\textbf{x}\cdot\textbf{E}(t), and we get that the transition rate is:

R_{i \rightarrow f} = \frac{1}{t} \frac{1}{\hbar^2}|\langle f|e\textbf{x}\cdot\textbf{E}_0|i \rangle|^2 \left[\frac{\sin((\omega_{fi}-\omega)t/2)}{(\omega_{fi}-\omega)/2}\right]^2

Here, \hbar\omega_{fi} = E_f - E_i is the difference in energy between the initial and final states. Now, there are a couple of things to note about the above expression. First, the term in brackets (containing the sinusoidal function) peaks when \omega_{fi} \approx \omega. This means that when the incoming light is resonant with a transition between the ground state and a higher energy level, the transition rate sharply increases.
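To make the resonance explicit, here is a small worked step (standard time-dependent perturbation theory, included for completeness): in the long-time limit, the bracketed factor divided by t approaches a delta function,

\lim_{t \rightarrow \infty} \frac{1}{t}\left[\frac{\sin((\omega_{fi}-\omega)t/2)}{(\omega_{fi}-\omega)/2}\right]^2 = 2\pi\,\delta(\omega_{fi}-\omega)

so that the transition rate takes the familiar Fermi golden rule form:

R_{i \rightarrow f} = \frac{2\pi}{\hbar^2}|\langle f|e\textbf{x}\cdot\textbf{E}_0|i \rangle|^2 \delta(\omega_{fi}-\omega)

The delta function says that the electron absorbs an energy \hbar\omega from the field even though the field itself is treated entirely classically; the \hbar enters only through the quantized electronic levels.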

Let us now interpret this expression with regard to the photoelectric effect. In this case, there exists a continuum of final states of the form \langle \textbf{r}|f\rangle \sim e^{i\textbf{k}\cdot\textbf{r}}, and as long as \hbar\omega > W.F., where W.F. is the work function of the material, we recover \hbar\omega = W.F. + K.E., where K.E. is the energy given to the electron in excess of the work function. Thus, we recover Einstein’s formula from above!
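To spell out that last step in a minimal sketch (with the simplifying assumption, not stated explicitly above, that the electron starts in a state at an energy W.F. below the vacuum level and ends in a free-electron state of momentum \hbar\textbf{k}), the resonance condition \hbar\omega_{fi} = \hbar\omega gives:

\hbar\omega = E_f - E_i = \frac{\hbar^2 k^2}{2m} - (-W.F.) \quad \Rightarrow \quad K.E. = \frac{\hbar^2 k^2}{2m} = \hbar\omega - W.F.

This is exactly Einstein’s relation, obtained with the radiation field treated entirely classically.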

In addition, we see from the above expression that the photocurrent is proportional to \textbf{E}_0^2, i.e. to the intensity of the light impinging on the cathode. The semi-classical treatment therefore improves upon Einstein’s in the sense that the relation between intensity and current also falls out naturally.

From this reasoning, we see that the photoelectric effect does not logically imply the existence of photons.

We do have many examples showing that non-classical light exists and that quantum fluctuations of light play a significant role in experimental observations: photon anti-bunching, spontaneous emission, the Lamb shift, etc. However, I do agree with Lamb and Scully that the notion of a photon is indeed a challenging one and that caution is needed!

A couple of further interesting reads on this subject at a non-technical level can be found here: The Concept of the Photon in Physics Today by Scully and Sargent, and The Concept of the Photon – Revisited in OPN Trends by Muthukrishnan, Scully and Zubairy (pdf!)

On Scientific Inevitability

If one looks through the history of human evolution, it is surprising to see that humanity has, on several independent occasions and in several different locations, figured out how to produce food, make pottery, write, invent the wheel, domesticate animals, build complex political societies, etc. It is almost as if these discoveries and inventions were an inevitable part of human evolution. More controversially, one may extend such arguments to include the development of science, mathematics, medicine and many other branches of knowledge (more on this point below).

The interesting part about these ancient inventions is that, because they originated in different parts of the world, the specifics varied geographically. For instance, native South Americans domesticated llamas; cultures in Southwest Asia (today’s Middle East) domesticated sheep, cows and horses; and the ancient Chinese domesticated chickens, among other animals. The reason different cultures domesticated different animals was that these animals were by and large native to the regions where they were domesticated.

Now, there are also many instances in human history where inventions were not made independently, but instead diffused geographically. For instance, writing was developed independently in at least a couple of locations (Mesoamerica and Southwest Asia), but likely diffused from Southwest Asia into Europe and other neighboring regions. While the peoples in these other places would likely have discovered writing on their own in due time, the diffusion from Southwest Asia made this unnecessary. These points are made well in Jared Diamond’s excellent book Guns, Germs and Steel.


At this point, you are probably wondering what I am trying to get at, and it is no more than the following musing. Consider a thought experiment: if two different civilizations were geographically isolated from each other for thousands of years, would they both have developed a similar form of scientific inquiry? Perhaps the questions asked and the answers obtained would have differed slightly, but my naive guess is that, given enough time, both would have developed a process we would recognize today as genuinely scientific. Obviously, this experiment is not possible to run, which makes it difficult to answer to what extent the development of science was inevitable, but I would consider it likely.

Because what we would call “modern science” was devised after the invention of the printing press, the process of scientific inquiry likely “diffused” rather than being invented independently in many places. The printing press accelerated the pace of information transfer and did not allow geographically separated areas to “invent” science on their own.

Today, we can communicate globally almost instantly, and information transfer across large geographic distances is easy. Scientific communication therefore works through a similar diffusive process: scientists from anywhere in the world can submit papers to journals and access them online. Looking at science this way, as an almost inevitable evolutionary process, downplays the role of individuals and suggests that, regardless of the contribution of any individual scientist, humankind would likely have reached the same destination eventually. The timescale to reach a particular scientific conclusion might have been somewhat different, but the conclusions would have been reached nonetheless.

There are some scientists who have contributed massively to the advancement of science, and their absence may have slowed progress, but it is hard to imagine that progress would have slowed very significantly. In today’s world, where individual genius is romanticized in the media and further still by prizes such as the Nobel, it is important to remember that no scientist is indispensable, no matter how great. Competing scientists were often working simultaneously on the biggest discoveries of the 20th century, such as the theory of general relativity and the structure of DNA. It is likely that had Einstein or Watson, Crick and Franklin not solved those problems, others would have.

So while the work of this year’s scientific Nobel winners is without a doubt praise-worthy and the recipients deserving, it is interesting to think about such prizes in this slightly different and less romanticized light.

Book Review – The Gene

Following the March Meeting, I took a vacation for a couple weeks, returning home to Bangkok, Thailand. During my holiday, I was able to get a hold of and read Siddhartha Mukherjee’s new book entitled The Gene: An Intimate History.

I have to preface any commentary by saying that, prior to reading the book, my knowledge of biology embarrassingly languished at the middle-school level. With that confession aside, The Gene was probably one of the best (and, for me, most enlightening) popular science books I have ever read. This is definitely aided by Mukherjee’s fluid and beautiful writing style, from which scientists in all fields can learn a few lessons about scientific communication. The Gene is also touched with a humanity not usually associated with the popular science genre, which tends to be rather dry in recounting scientific and intellectual endeavors. This humanity is the book’s most powerful feature.

Since there are many glowing reviews of the book published elsewhere, I will just list here a few nuggets I took away from The Gene, which hopefully will serve to entice rather than spoil the book for you:

  • Mukherjee compares the gene to an atom or a bit, evolution’s “indivisible” particle. Obviously, the gene is physically divisible in the sense that it is made of atoms, but what he means here is that the lower levels can be abstracted away and the gene is the relevant level at which geneticists work.
    • It is worth thinking of what the parallel carriers of information are in condensed matter problems — my hunch is that most condensed matter physicists would contend that these are the quasiparticles in the relevant phase of matter.
  • Gregor Mendel, whose work is nowadays recognized as giving birth to the entire field of genetics, was not recognized for it while he was alive. It took another 40-50 years for scientists to rediscover his experiments and to see that he had localized, in those pea plants, the indivisible gene. One gets the feeling that his work went uncelebrated during his lifetime because it was so far ahead of its time.
  • The history of genetics is harrowing and ugly. While the second World War was probably the pinnacle of obscene crimes committed in the name of genetics, humans seem unable to shake off ideas associated with eugenics even into the modern day.
  • Through a large part of its history, the field of genetics has had to deal with a range of ethical questions. There is no sign of this trend abating in light of the recent discovery of CRISPR/Cas-9 technology. If you’re interested in learning more about this, RadioLab has a pretty good podcast about it.
  • Schrödinger’s book What is Life? has inspired so much follow-up work that it is hard to overestimate the influence it has had on the generation of physicists who transitioned to studying biology in the middle of the twentieth century, including both Watson and Crick.

While I could go on and on with this list, I’ll stop ruining the book for you. I would just like to say that at the end of the book I got the feeling that humans are still just starting to scratch the surface of understanding what’s going on in a cell. There is much more to learn, and that’s an exciting feeling in any field of science.

Aside: In case you missed March Meeting, the APS has posted the lectures from the Kavli Symposium on YouTube, which includes lectures from Duncan Haldane and Michael Kosterlitz among others.