
On Scientific Inevitability

If one looks through the history of human evolution, it is surprising to see that humanity has on several independent occasions, in several different locations, figured out how to produce food, make pottery, write, invent the wheel, domesticate animals, build complex political societies, etc. It is almost as if these discoveries and inventions were an inevitable part of the evolution of humans. More controversially, one may extend such arguments to include the development of science, mathematics, medicine and many other branches of knowledge (more on this point below).

The interesting part about these ancient inventions is that because they originated in different parts of the world, the specifics varied geographically. For instance, native South Americans domesticated llamas; cultures in Southwest Asia (today's Middle East) domesticated sheep, cows, and horses; and the ancient Chinese domesticated chickens, among other animals. Different cultures domesticated different animals largely because those animals were native to the regions where they were domesticated.

Now, there are also many instances in human history where inventions were not made independently but instead diffused geographically. For instance, writing was developed independently in at least a couple of locations (Mesoamerica and Southwest Asia), but likely diffused from Southwest Asia into Europe and other neighboring regions. While the peoples in these other places would likely have discovered writing on their own in due time, the diffusion from Southwest Asia made this unnecessary. These points are well made in Jared Diamond's excellent book Guns, Germs and Steel.

If you've ever been to the US post-office, you'll realize very quickly that it's not the product of intelligent design.

At this point, you are probably wondering what I am trying to get at here, and it is no more than the following musing. Consider this thought experiment: if two civilizations were geographically isolated from each other for thousands of years, would both have developed a similar form of scientific inquiry? Perhaps the questions asked and the answers obtained would have differed, but my naive guess is that, given enough time, both would have developed a process we would recognize today as genuinely scientific. Obviously, this experiment cannot actually be carried out, which makes it difficult to answer to what extent the development of science was inevitable, but I would consider it likely.

Because what we would call "modern science" was devised after the invention of the printing press, the process of scientific inquiry likely "diffused" rather than being invented independently in many places. The printing press so accelerated the pace of information transfer that geographically separated areas never needed to "invent" science on their own.

Today, we can communicate globally almost instantly, and information transfer across large geographic distances is easy. Scientific communication therefore works through a similar diffusive process: papers are written in journals to which scientists from anywhere in the world can submit their work and access it online. Looking at science in this way, as an almost inevitable evolutionary process, downplays the role of individuals: without the contribution of any particular scientist, humankind would likely have reached the same destination eventually. The timescale to arrive at a given scientific conclusion might have been somewhat different, but the conclusion would have been reached nonetheless.

There are some scientists who have contributed massively to the advancement of science and whose absence might have slowed progress, but it is hard to imagine that progress would have slowed very significantly. In today's world, where individual genius is romanticized in the media and further still by prizes such as the Nobel, it is important to remember that no scientist is indispensable, no matter how great. Competing scientists were often working simultaneously on the biggest discoveries of the 20th century, such as the theory of general relativity and the structure of DNA. It is likely that had Einstein or Watson, Crick and Franklin not solved those problems, others would have.

So while the work of this year's scientific Nobel winners is without a doubt praiseworthy and the recipients deserving, it is interesting to think about such prizes in this slightly different and less romanticized light.


Book Review – The Gene

Following the March Meeting, I took a vacation for a couple weeks, returning home to Bangkok, Thailand. During my holiday, I was able to get a hold of and read Siddhartha Mukherjee’s new book entitled The Gene: An Intimate History.

I have to preface any commentary by saying that prior to reading the book, my knowledge of biology embarrassingly languished at the middle-school level. With that confession aside, The Gene is probably one of the best (and for me, most enlightening) popular science books I have ever read. This is aided by Mukherjee's fluid and beautiful writing style, from which scientists in all fields can learn a few lessons about scientific communication. The Gene is also touched with a humanity not usually associated with the popular science genre, which tends to be rather dry in recounting scientific and intellectual endeavors. This humanity is the book's most powerful feature.

Since there are many glowing reviews of the book published elsewhere, I will just list here a few nuggets I took away from The Gene, which hopefully will serve to entice rather than spoil the book for you:

  • Mukherjee compares the gene to an atom or a bit, evolution’s “indivisible” particle. Obviously, the gene is physically divisible in the sense that it is made of atoms, but what he means here is that the lower levels can be abstracted away and the gene is the relevant level at which geneticists work.
    • It is worth thinking of what the parallel carriers of information are in condensed matter problems — my hunch is that most condensed matter physicists would contend that these are the quasiparticles in the relevant phase of matter.
  • Gregor Mendel, whose work is nowadays recognized as giving birth to the entire field of genetics, went unrecognized during his lifetime. It took another 40-50 years for scientists to rediscover his experiments and to see that he had localized, in those pea plants, the indivisible gene. One gets the feeling that his work was not celebrated while he was alive because it was so far ahead of its time.
  • The history of genetics is harrowing and ugly. While the Second World War was probably the pinnacle of obscene crimes committed in the name of genetics, humans seem unable to shake off ideas associated with eugenics even today.
  • Through a large part of its history, the field of genetics has had to deal with a range of ethical questions. There is no sign of this trend abating in light of the recent discovery of CRISPR/Cas-9 technology. If you’re interested in learning more about this, RadioLab has a pretty good podcast about it.
  • Schrodinger's book What is Life? has inspired so much follow-up work that it is hard to overestimate its influence on the generation of physicists who transitioned to studying biology in the middle of the twentieth century, including both Watson and Crick.

While I could go on and on with this list, I’ll stop ruining the book for you. I would just like to say that at the end of the book I got the feeling that humans are still just starting to scratch the surface of understanding what’s going on in a cell. There is much more to learn, and that’s an exciting feeling in any field of science.

Aside: In case you missed March Meeting, the APS has posted the lectures from the Kavli Symposium on YouTube, which includes lectures from Duncan Haldane and Michael Kosterlitz among others.

YouTube Yikes!

A couple of days ago, Lawrence Livermore National Laboratory released a number of videos of nuclear test explosions. It is worth watching some of these to understand the magnitude of destruction these weapons can cause. Here is a link to the Lawrence Livermore playlist on YouTube, and below is a video explaining a bit of the background concerning the release of these videos:

Below is a helpful MinutePhysics video that talks about the actual dangers concerning nuclear weapons:

On a somewhat unrelated note, while at the APS March Meeting this past week, Peter Abbamonte mentioned this video to me, which I also found pretty startling:

Lastly, here is a tragicomedy that takes place in the wild — it seems like this orca was never told by its mother not to play with its food:

An Excellent Intro To Physical Science

On a recent plane ride, I was able to catch an episode of the new PBS series Genius by Stephen Hawking. I was surprised by the quality of the show and, in particular, by its emphasis on experiment. Shows like this usually fall into the trap of presenting facts (or speculations) without adequately explaining how scientists arrive at such conclusions. This one is a little different: its heavy emphasis on experiment makes it, at least to me, much more inspirational.

Here is the episode I watched on the plane:

Strontium Titanate – A Historical Tour

Like most ugly haircuts, materials tend to go in and out of style over time. Strontium titanate (SrTiO3), commonly referred to as STO, has, since its discovery, been somewhat timeless. And this is not just because it is often used as a substitute for diamonds. What I mean is that studying STO rarely seems to go out of style and the material always appears to have some surprises in store.

STO was first synthesized in the 1950s, before it was discovered to occur naturally in Siberia. It didn't take long for research on this material to take off. One of the first surprising results STO had in store was that it becomes superconducting when reduced (electron-doped). This is not remarkable in and of itself, but this study and follow-up work showed that superconductivity can occur at carrier densities as low as ~5\times 10^{17} cm^{-3}.

This is surprising in light of BCS theory, where the Fermi energy is assumed to be much greater than the Debye energy \hbar\omega_D, which is clearly not the case here. There have been claims in the literature suggesting that the superconductivity may be plasmon-induced, since the plasma frequency lies in the phonon energy regime. L. Gorkov recently put a paper up on the arXiv discussing the mechanism problem in STO.
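
To get a feel for the numbers, here is a rough back-of-the-envelope estimate. This is a minimal sketch assuming a free-electron dispersion; the bare electron mass and the Debye temperature used below are crude stand-ins rather than measured STO parameters (the actual effective mass in STO is larger, which would make the Fermi energy even smaller):

```python
import numpy as np

# Physical constants (SI units)
hbar = 1.0545718e-34   # J*s
m_e = 9.1093837e-31    # kg; bare electron mass (a crude assumption for STO)
k_B = 1.380649e-23     # J/K
eV = 1.602176634e-19   # J

# Carrier density of ~5e17 cm^-3, converted to m^-3
n = 5e17 * 1e6

# Free-electron Fermi energy: E_F = hbar^2 (3 pi^2 n)^(2/3) / (2 m)
E_F = hbar**2 * (3 * np.pi**2 * n)**(2 / 3) / (2 * m_e)

# A representative Debye temperature of a few hundred kelvin (assumed value)
T_D = 400  # K
E_Debye = k_B * T_D

print(f"E_F ~ {E_F / eV * 1e3:.1f} meV")         # comes out to a few meV
print(f"k_B T_D ~ {E_Debye / eV * 1e3:.0f} meV") # comes out to tens of meV
```

Even with these crude inputs, the Fermi energy lands an order of magnitude below the phonon energy scale, which is precisely the inversion of the usual BCS hierarchy.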

Soon after the initial work on superconductivity in doped STO, Shirane, Yamada and others began studying pure STO in light of the "soft mode" theory of structural phase transitions put forth by W. Cochran and others. Because of an antiferrodistortive structural phase transition at ~110K (depicted below), they were able to observe a corresponding soft phonon associated with this transition at the Brillouin zone boundary (shown below, taken from this paper). These results had vast implications for how we understand structural phase transitions today: it is now almost always assumed that a phonon softens at the transition temperature of a continuous structural phase transition.
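
For readers unfamiliar with the soft mode picture, its essential content can be written in one line. The following is a schematic mean-field form, not a fit to the STO data:

```latex
% Mean-field soft-mode behavior near a continuous structural transition:
% the frequency of the relevant phonon decreases on cooling,
\omega_s^2(T) \propto (T - T_c),
% so the restoring force for the corresponding lattice distortion vanishes
% at T_c and the distortion freezes in below the transition.
```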

Many materials similar to STO, such as BaTiO3 and PbTiO3, which share the perovskite crystal structure motif, undergo a phase transition to a ferroelectric state at low (or not so low) temperatures. The transition to the ferroelectric state is accompanied by a diverging dielectric constant (and dielectric susceptibility), much in the way that the magnetic susceptibility diverges at the transition from a paramagnetic to a ferromagnetic state. In 1978, Muller (of Bednorz and Muller fame) and Burkard reported that at low temperature, the dielectric constant begins its ascent towards divergence, but then saturates at around 4K (the data is shown in the top panel below). Ferroelectricity is associated with the softening of a zone-center transverse optical phonon, and in the case of STO, this softening begins but never quite completes, as shown schematically in the image below (and as you can also see in the data by Shirane and Yamada above).

[Figure: signatures of quantum paraelectricity in SrTiO3. Top panel: the saturating dielectric constant; bottom: schematic of the incomplete zone-center phonon softening. Taken from Wikipedia]

The saturation of the large dielectric constant and the not-quite-softening of the zone center phonon has led authors to refer to STO as a quantum paraelectric (i.e. because of the zero-point motion of the transverse optical zone-center phonon, the material doesn’t gain enough energy to undergo the ferroelectric transition). As recently as 2004, however, it was reported that one can induce ferroelectricity in STO films at room temperature by straining the film.
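
Two textbook formulas make this discussion concrete: the Lyddane-Sachs-Teller relation ties the zone-center transverse optical phonon to the static dielectric constant, and Barrett's formula describes how zero-point motion cuts off the classical Curie-Weiss divergence. The symbols A, C, T_1 and T_0 below are material-dependent parameters; this is a sketch, not a fit to the STO data:

```latex
% Lyddane-Sachs-Teller: a softening zone-center TO phonon (\omega_{TO} -> 0)
% implies a diverging static dielectric constant,
\frac{\epsilon(0)}{\epsilon(\infty)} = \frac{\omega_{LO}^2}{\omega_{TO}^2}.

% Barrett's formula for a quantum paraelectric:
\epsilon(T) = A + \frac{C}{\frac{T_1}{2}\coth\left(\frac{T_1}{2T}\right) - T_0}.
% For T >> T_1 this reduces to the Curie-Weiss form C/(T - T_0); as T -> 0,
% it saturates at a finite value, as in the Muller-Burkard data.
```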

In recent times, STO has become a common substrate material thanks to processes that can make its surface atomically flat. While this may not sound so exciting, it has had vast implications for the physics of thin films and interfaces. Firstly, this property has enabled researchers to grow high-quality thin films of cuprate superconductors using molecular beam epitaxy, which was a big challenge in the 1990s. More recently, it has led to the discovery of a two-dimensional electron gas, superconductivity and ferromagnetism at the LAO/STO interface, a startling finding given that both materials are electrically insulating. Perhaps most startlingly of all, when FeSe (a superconductor at around 7K in bulk) is grown as a monolayer film on STO, its transition temperature is boosted to around 100K (the precise transition temperature is disputed in subsequent experiments, but it remains remarkably high). This has led to the idea that the FeSe somehow "borrows the pairing glue" from the underlying substrate.

STO is a gem of a material in many ways. I doubt that we are done with its surprises.

Consistency in the Hierarchy

When writing on this blog, I try to share nuggets here and there of phenomena, experiments, sociological observations and other peoples’ opinions I find illuminating. Unfortunately, this format can leave readers wanting when it comes to some sort of coherent message. Precisely because of this, I would like to revisit a few blog posts I’ve written in the past and highlight the common vein running through them.

Condensed matter physicists of the last couple of generations have grown up with the ingrained idea that "More is Different", a concept first coherently put forth by P. W. Anderson and carried further by others. Most discussions of these ideas concentrate on the notion that there is a hierarchy of disciplines in which each discipline is not logically dependent on the one beneath it. For instance, in solid state physics, we do not need to start at the level of quarks and build up from there to obtain many properties of matter. More profoundly, one can observe phenomena that arise distinctly in the context of condensed matter physics, such as superconductivity, the quantum Hall effect and ferromagnetism, which one wouldn't necessarily predict by just studying particle physics.

While I have no objection to these claims (and actually agree with them quite strongly), it seems to me that one rather simple (almost trivial) fact is infrequently mentioned when these concepts are discussed: the role of consistency.

While it is true that one does not necessarily require the lower level theory to describe the theories at the higher level, these theories do need to be consistent with each other. This is why, after the publication of BCS theory, there was a slew of theoretical papers that tried to come to terms with various aspects of the theory (such as the approximation of particle number non-conservation and features associated with gauge invariance (pdf!)).

This requirement of consistency is what makes concepts like the Bohr-van Leeuwen theorem and Gibbs paradox so important. They bridge two levels of the “More is Different” hierarchy, exposing inconsistencies between the higher level theory (classical mechanics) and the lower level (the micro realm).

In the case of the Bohr-van Leeuwen theorem, classical mechanics, when applied at the microscopic scale, is not consistent with the observation of ferromagnetism. In the Gibbs paradox case, classical mechanics, when not taking into account particle indistinguishability (a quantum mechanical concept), is inconsistent with the idea that entropy must remain the same when dividing a gas tank into two equal partitions.
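
To see why the Bohr-van Leeuwen theorem is such a clean statement of this inconsistency, it is worth recalling the standard one-line argument, sketched here for classical charges q with some interaction potential U:

```latex
% Classical partition function for N charges in a field B = \nabla \times \mathbf{A}:
Z = \int \prod_{i=1}^{N} d^3r_i \, d^3p_i \;
    e^{-\beta \left( \sum_i \frac{(\mathbf{p}_i - q\mathbf{A}(\mathbf{r}_i))^2}{2m} + U(\{\mathbf{r}_i\}) \right)}.
% Shifting the momentum variables p_i -> p_i + qA(r_i) (unit Jacobian)
% eliminates A entirely, so the free energy F = -k_B T \ln Z cannot depend
% on the field, and the equilibrium magnetization vanishes:
M = -\frac{\partial F}{\partial B} = 0.
% Classical statistical mechanics thus permits no equilibrium magnetism at all.
```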

Today, we face the issue that ideas from the micro realm (quantum mechanics) appear to be inconsistent with our ideas about the macroscopic scale. This is why matter interference experiments are still carried out in the present time. It is imperative to know why it is possible for a C60 molecule (or even a 10,000 amu molecule) to be described by a single wavefunction in a Schrodinger-like scheme, whereas this seems implausible for, say, a cat. There again appears to be some inconsistency here, though there are frameworks, such as decoherence, that attempt to resolve it (with no consensus yet). I also can't help but mention that non-locality, à la Bell, seems totally at odds with one's intuition on the macro-scale.

What I want to stress is that these inconsistency theorems (or paradoxes) contained the seeds of some of the most important theoretical advances in physics. This is itself not a radical idea, but it often gets neglected when a generation grows up with a deep-rooted "More is Different" scientific outlook. We sometimes forget to look for concepts that bridge disparate levels of the hierarchy and then to look for inconsistencies between them.

Gibbs Paradox and Epicycles

Thomas Kuhn, the famous philosopher of science, envisioned that scientific revolutions take place when "an increasing number of epicycles" arise, rendering a prevailing theory untenable. Just in case you aren't familiar, the "epicycles" are a reference to the Ptolemaic world-view, with the earth at the center of the universe. To explain the observed trajectories of the other planets, Ptolemaic theory required that the planets move around the earth on complicated looping paths called epicycles, circles whose centers themselves orbit the earth. These convoluted epicycles were no longer needed once the Copernican revolution took place and it was realized that our solar system is heliocentric.

This post is specifically about the Gibbs paradox, which provided one of the first examples of an "epicycle" in classical mechanics. If you google Gibbs paradox, you will come up with several different explanations, all seemingly related but not quite telling the same story. So instead of following Gibbs' original arguments, I'll go with the version that is (to my mind) easiest to follow.

Imagine a large box that is partitioned in two, with volume V on either side, each side filled with helium gas at the same pressure, temperature, etc., and at equilibrium (i.e. the gases are identical). The total entropy in this scenario is S + S = 2S. Now, imagine that the partition is removed. The question Gibbs asked himself was: does the entropy increase?

Now, from our perspective, this might seem like an almost silly question, but Gibbs asked it of himself in 1875, before the advent of quantum mechanics. This is relevant because in classical mechanics, particles are always distinguishable (i.e. they can be "tagged" by their trajectories). Hence, if one calculates the entropy increase assuming distinguishable particles, one gets the result that the entropy increases by 2Nk\textrm{ln}2.
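
Here is a minimal version of that bookkeeping, using the classical ideal-gas entropy without the 1/N! correction (\lambda denotes the thermal de Broglie wavelength):

```latex
% Entropy of a classical ideal gas of N distinguishable particles in volume V:
S(N, V) = N k \left[ \ln\frac{V}{\lambda^3} + \frac{3}{2} \right].
% Before removing the partition: N particles in volume V on each side,
S_i = 2 N k \left[ \ln\frac{V}{\lambda^3} + \frac{3}{2} \right].
% After removal: 2N particles share the volume 2V,
S_f = 2 N k \left[ \ln\frac{2V}{\lambda^3} + \frac{3}{2} \right],
% so the entropy appears to increase by
\Delta S = S_f - S_i = 2 N k \ln 2.
```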

This is totally at odds with one's intuition (if one has any intuition when it comes to entropy!) and with the extensive nature of entropy (that entropy scales with system size). Since removing the partition changes nothing about the state of the gas in the combined container of volume 2V (the pressure and temperature are the same throughout), the entropy should not change either. And most damningly, if one were to place the partition back where it was before, one would naively expect the entropy to return to 2S, suggesting that the entropy decreased upon replacing the partition.

The resolution to this paradox is that the particles (helium atoms in this case) are completely indistinguishable. Gibbs had indeed recognized this as the resolution to the problem at the time, but considered it a counting problem.
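
In modern notation, that counting fix is the famous 1/N! factor in the partition function. A brief sketch of how it dissolves the paradox, using Stirling's approximation \ln N! \approx N \ln N - N:

```latex
% Dividing the partition function by N! adds -k \ln N! \approx -Nk(\ln N - 1)
% to the entropy, giving the extensive (Sackur-Tetrode-like) form
S(N, V) = N k \left[ \ln\frac{V}{N\lambda^3} + \frac{5}{2} \right].
% The volume now enters only through V/N, so removing the partition
% (N -> 2N and V -> 2V together) changes nothing:
\Delta S = S(2N, 2V) - 2 S(N, V) = 0.
```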

Little did he know that this seemingly benign problem contained seeds whose resolution required the complete overthrow of classical mechanics in favor of quantum mechanics. Only in quantum mechanics do truly identical particles exist. Note that nowhere does the Gibbs paradox suggest what the next theory will look like; it only points out a severe shortcoming of classical mechanics. Looked at in this light, it is amusing to think about what sorts of epicycles are hiding within our seemingly unshakable theories of quantum mechanics and general relativity, perhaps even in plain sight.