Tag Archives: History

An Excellent Intro To Physical Science

On a recent plane ride, I was able to catch an episode of the new PBS series Genius by Stephen Hawking. I was surprised by the quality of the show and, in particular, by its emphasis on experiment. Shows like this usually fall into the trap of presenting the facts (or speculations) without an adequate explanation of how scientists arrive at such conclusions. This one is different: experiments take center stage, which, at least to me, is far more inspirational.

Here is the episode I watched on the plane:

[Embedded video]

Strontium Titanate – A Historical Tour

Like most ugly haircuts, materials tend to go in and out of style over time. Strontium titanate (SrTiO3), commonly referred to as STO, has, since its discovery, been somewhat timeless. And this is not just because it is often used as a substitute for diamonds. What I mean is that studying STO rarely seems to go out of style and the material always appears to have some surprises in store.

STO was first synthesized in the 1950s, before it was discovered naturally in Siberia. It didn’t take long for research on this material to take off. One of the first surprising results that STO had in store was that it became superconducting when reduced (electron-doped). This is not remarkable in and of itself, but this study and other follow-up ones showed that superconductivity can occur with a carrier density of only ~5\times 10^{17} cm^{-3}.

This is surprising in light of BCS theory, where the Fermi energy is assumed to be much greater than the Debye energy, which is clearly not the case here. There have been claims in the literature suggesting that the superconductivity may be plasmon-induced, since the plasma frequency is in the phonon energy regime. L. Gorkov recently put a paper up on the arXiv discussing the mechanism problem in STO.
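
For a sense of scale (a rough parabolic-band estimate, using the free-electron mass for illustration even though the conduction band mass in STO is considerably heavier):

E_F = \frac{\hbar^2 (3\pi^2 n)^{2/3}}{2m^*} \approx 2\ \textrm{meV} \quad \textrm{for} \quad n = 5\times 10^{17}\ \textrm{cm}^{-3}

while a Debye temperature of a few hundred kelvin corresponds to a phonon energy scale of tens of meV. The usual BCS hierarchy is thus turned on its head.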

Soon after the initial work on superconductivity in doped STO, Shirane, Yamada and others began studying pure STO in light of the predicted “soft mode” theory of structural phase transitions put forth by W. Cochran and others. Because of an antiferrodistortive structural phase transition at ~110K, they were able to observe a corresponding soft phonon at the Brillouin zone boundary (see this paper). These results had vast implications for how we understand structural phase transitions today: it is now almost always assumed that a phonon softens at the transition temperature in a continuous structural phase transition.

Many materials similar to STO, such as BaTiO3 and PbTiO3, which also have a perovskite crystal structure motif, undergo a phase transition to a ferroelectric state at low (or not so low) temperatures. The transition to the ferroelectric state is accompanied by a diverging dielectric constant (and dielectric susceptibility) much in the way that the magnetic susceptibility diverges in the transition from a paramagnetic to a ferromagnetic state. In 1978, Muller (of Bednorz and Muller fame) and Burkard reported that at low temperature, the dielectric constant begins its ascent towards divergence, but then saturates at around 4K (the data is shown in the top panel below). Ferroelectricity is associated with a zone-center softening of a transverse phonon, and in the case of STO, this process begins, but doesn’t quite get there, as shown schematically in the image below (and you can see this in the data by Shirane and Yamada above as well).
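
For orientation, the classical expectation for such a transition (a schematic Curie-Weiss form, not the full analysis of the Muller-Burkard paper) is a dielectric constant that diverges at the transition temperature:

\epsilon(T) \approx \frac{C}{T - T_c}

In STO, \epsilon(T) follows this sort of rise on cooling but then levels off below ~4K at a very large value (of order 10^4) instead of diverging.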

[Image: signatures of quantum paraelectricity, taken from Wikipedia]

The saturation of the large dielectric constant and the not-quite-softening of the zone-center phonon have led authors to refer to STO as a quantum paraelectric (i.e. the zero-point motion of the transverse optical zone-center phonon prevents the material from ever condensing into the ferroelectric state). As recently as 2004, however, it was reported that one can induce ferroelectricity in STO films at room temperature by straining the film.

In recent times, STO has become a common substrate material, owing to processes that can make its surface atomically flat. While this may not sound so exciting, it has had vast implications for the physics of thin films and interfaces. Firstly, this property enabled researchers to grow high-quality thin films of cuprate superconductors using molecular beam epitaxy, which was a big challenge in the 1990s. More recently, it led to the discovery of a two-dimensional electron gas, superconductivity and ferromagnetism at the LAO/STO interface, a startling finding given that both materials are electrically insulating. Perhaps most remarkably, when FeSe (a superconductor at around 7K in bulk) is grown as a monolayer film on STO, its transition temperature is boosted to around 100K (the precise value is disputed in subsequent experiments, but it remains high!). This has led to the idea that the FeSe somehow “borrows the pairing glue” from the underlying substrate.

STO is a gem of a material in many ways. I doubt that we are done with its surprises.

Consistency in the Hierarchy

When writing on this blog, I try to share nuggets here and there of phenomena, experiments, sociological observations and other peoples’ opinions I find illuminating. Unfortunately, this format can leave readers wanting when it comes to some sort of coherent message. Precisely because of this, I would like to revisit a few blog posts I’ve written in the past and highlight the common vein running through them.

Condensed matter physicists of the last couple of generations have grown up with the ingrained idea that “More is Different”, a concept first coherently put forth by P. W. Anderson and carried further by others. Most discussions of these ideas concentrate on the notion that there is a hierarchy of disciplines, where each discipline cannot simply be logically derived from the one beneath it. For instance, in solid state physics, we do not need to start at the level of quarks and build up from there to obtain many properties of matter. More profoundly, one can observe phenomena that distinctly arise in the context of condensed matter physics, such as superconductivity, the quantum Hall effect and ferromagnetism, which one wouldn't necessarily predict by just studying particle physics.

While I have no objection to these claims (and actually agree with them quite strongly), it seems to me that one rather basic (almost trivial) point is infrequently mentioned when these concepts are discussed: the role of consistency.

While it is true that one does not necessarily require the lower level theory to describe the theories at the higher level, these theories do need to be consistent with each other. This is why, after the publication of BCS theory, there were a slew of theoretical papers that tried to come to terms with various aspects of the theory (such as the approximation of particle number non-conservation and features associated with gauge invariance (pdf!)).

This requirement of consistency is what makes concepts like the Bohr-van Leeuwen theorem and Gibbs paradox so important. They bridge two levels of the “More is Different” hierarchy, exposing inconsistencies between the higher level theory (classical mechanics) and the lower level (the micro realm).

In the case of the Bohr-van Leeuwen theorem, classical mechanics, when applied at the microscopic scale, turns out to be inconsistent with the observation of ferromagnetism (indeed, with any equilibrium magnetization at all). In the Gibbs paradox case, classical mechanics, when not taking into account particle indistinguishability (a quantum mechanical concept), is inconsistent with the requirement that the entropy remain unchanged when a container of gas is divided into two equal partitions.
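
The core of the Bohr-van Leeuwen theorem fits in one line (a schematic version, with q the charge and \mathbf{A} the vector potential). The classical partition function

Z \propto \int d^3x\, d^3p\ e^{-\beta\left[(\mathbf{p} - q\mathbf{A}(\mathbf{x}))^2/2m + V(\mathbf{x})\right]}

is unchanged by the substitution \mathbf{p}' = \mathbf{p} - q\mathbf{A}(\mathbf{x}) at each fixed \mathbf{x}, so Z, and with it the free energy, is independent of the magnetic field, and the equilibrium magnetization M = -\partial F/\partial B vanishes identically.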

Today, we have the issue that ideas from the micro realm (quantum mechanics) appear to be inconsistent with our ideas about the macroscopic scale. This is why matter interference experiments are still carried out in the present time. It is imperative to know why it is possible for a C60 molecule (or a 10,000 amu molecule) to be described by a single wavefunction in a Schrodinger-like scheme, whereas this seems implausible for, say, a cat. There again appears to be some inconsistency here, though there are frameworks, such as decoherence, that attempt to address it (with no consensus so far). I also can't help but mention that non-locality, à la Bell, seems totally at odds with one's intuition on the macro-scale.

What I want to stress is that the inconsistency theorems (or paradoxes) contained seeds of some of the most important theoretical advances in physics. This is itself not a radical concept, but it often gets neglected when a generation grows up with a deep-rooted “More is Different” scientific outlook. We sometimes forget to look for concepts that bridge disparate levels of the hierarchy and subsequently look for inconsistencies between them.

Gibbs Paradox and Epicycles

Thomas Kuhn, the famous philosopher of science, envisioned that scientific revolutions take place when “an increasing number of epicycles” arise, rendering the prevailing theory untenable. Just in case you aren't familiar, the “epicycles” are a reference to the Ptolemaic world-view with the earth at the center of the universe. To explain the trajectories of the other planets, Ptolemaic theory required that they move around the earth on complicated looping paths built from circles riding on other circles, called epicycles. These convoluted epicycles were no longer needed once the Copernican revolution took place and it was realized that our solar system is heliocentric.

This post is specifically about the Gibbs paradox, which provided one of the first examples of an “epicycle” in classical mechanics. If you google Gibbs paradox, you will come up with several different explanations, which are all seemingly related, but don’t quite all tell the same story. So instead of following Gibbs’ original arguments, I’ll just go by the version which is the easiest (in my mind) to follow.

Imagine a large box that is partitioned in two, with volume V on either side, each side filled with helium gas at the same pressure, temperature, etc. and at equilibrium (i.e. the gases are identical). The total entropy in this scenario is S + S = 2S. Now, imagine that the partition is removed. The question Gibbs asked himself was: does the entropy increase?

Now, from our perspective, this might seem like an almost silly question, but Gibbs asked it of himself in 1875, before the advent of quantum mechanics. This is relevant because in classical mechanics, particles are always distinguishable (i.e. they can be “tagged” by their trajectories). Hence, if one calculates the entropy increase assuming distinguishable particles, one gets the result that the entropy increases by 2Nk\,\textrm{ln}\,2.
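
To see where this number comes from (a quick sketch using the textbook result for the isothermal free expansion of an ideal gas): with distinguishable particles, each of the two gases effectively expands from V to 2V, so

\Delta S = Nk\,\textrm{ln}\frac{2V}{V} + Nk\,\textrm{ln}\frac{2V}{V} = 2Nk\,\textrm{ln}\,2

That is, classical counting treats the mingling of the two sides as a genuine expansion for every particle.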

This is totally at odds with one's intuition (if one has any intuition when it comes to entropy!) and with the extensive nature of entropy (that entropy scales with the system size). Since nothing about the macroscopic state of the combined container of volume 2V changes when the partition between identical gases (same pressure and temperature) is removed, neither should the entropy. And most damningly, if one were to place the partition back where it was before, one would naively expect the entropy to return to 2S, implying that the entropy decreased upon replacing the partition.

The resolution to this paradox is that the particles (helium atoms in this case) are completely indistinguishable. Gibbs had indeed recognized this as the resolution to the problem at the time, but considered it a counting problem.
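
In modern terms (a sketch based on the Sackur-Tetrode entropy, which already contains the 1/N! indistinguishability factor), the ideal gas entropy reads

S = Nk\left[\textrm{ln}\left(\frac{V}{N\lambda^3}\right) + \frac{5}{2}\right]

where \lambda is the thermal de Broglie wavelength. V and N enter only through the ratio V/N, so doubling both leaves the entropy per particle unchanged and removing the partition costs nothing. Without the 1/N!, the logarithm contains V instead of V/N and the entropy fails to be extensive.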

Little did he know that resolving this seemingly benign problem would ultimately require the complete overthrow of classical mechanics in favor of quantum mechanics. Only in quantum mechanics do truly identical particles exist. Note that nowhere does the Gibbs paradox suggest what the next theory will look like; it only points out a severe shortcoming of classical mechanics. Looked at in this light, it is amusing to think about what sorts of epicycles are hiding within our seemingly unshakable theories of quantum mechanics and general relativity, perhaps even in plain sight.

Lunar Eclipse and the 22 Degree Halo

The beautiful thing about atmospheric optics is that (almost) everyone can look up at the sky and see stunning optical phenomena from the sun, moon or some other celestial object. In this post I’ll focus on two particularly striking phenomena where the physical essence can be captured with relatively simple explanations.

The 22 degree halo is a ring around the sun or moon, which is often observed on cold days. Here are a couple images of the 22 degree halo around the sun and moon respectively:

[Image: 22 degree halo around the sun]

[Image: 22 degree halo around the moon]

Note that the 22 degree halo is distinct from the coronae, which occur for a different reason: while the coronae arise from diffraction by water droplets, the 22 degree halo arises specifically from refraction through hexagonal ice crystals in the earth's atmosphere. So why 22 degrees? It turns out that one can answer the question using rather simple undergraduate-level physics. One of the most famous problems in undergraduate optics is that of light refraction through a prism, illustrated below:

[Fig. 1: The Snell's Law Prism Problem]

If there are hexagonal ice crystals in the atmosphere, the problem is exactly the same, as one can see below. This is because a hexagon is just an equilateral triangle with its corners chopped off. So as long as the light enters and exits through two faces of the hexagon that are spaced one face apart, the analysis is the same as for the triangle.

[Image: an equilateral triangle with its corners chopped off, making a hexagon]

It turns out that \theta_4 in Fig. 1 can be solved as a function of \theta_1 with Snell’s law and some simple trigonometry to yield (under the assumption that n_1 =1):

\theta_4 = \textrm{sin}^{-1}\left(n_2\,\textrm{sin}\left(60^\circ - \textrm{sin}^{-1}\left(\textrm{sin}\,\theta_1/n_2\right)\right)\right)

It is then pretty straightforward to obtain the deviation angle \delta = \theta_1 + \theta_4 - 60^\circ between the incident and outgoing beams as a function of \theta_1. I have plotted this below for the index of refraction of ice for three different colors of light, red, green and blue (n_2 = 1.306, 1.311 and 1.317 respectively):

[Plot: deviation angle \delta vs. incident angle \theta_1 for red, green and blue light]

The important thing to note in the plot above is that there is a minimum deviation angle for each color, and this minimum is 21.54, 21.92 and 22.37 degrees for red, green and blue light respectively. Because no light is deviated by less than ~22 degrees, the sky just inside the ring appears darker, while the intensity piles up sharply at the minimum deviation angle. This is what gives rise to the 22 degree halo and also to the reddish hue on its inside rim.
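
As a sanity check, here is a minimal Python sketch (the function name and the scanned range of incidence angles are my own choices) that recovers these minimum deviation angles directly from Snell's law:

```python
import numpy as np

def deviation(theta1_deg, n, apex_deg=60.0):
    """Total deviation (in degrees) of a ray passing through a prism."""
    theta1 = np.radians(theta1_deg)
    theta2 = np.arcsin(np.sin(theta1) / n)     # refraction at the entry face
    theta3 = np.radians(apex_deg) - theta2     # geometry inside the prism
    theta4 = np.arcsin(n * np.sin(theta3))     # refraction at the exit face
    return theta1_deg + np.degrees(theta4) - apex_deg

# Scan incidence angles large enough that the ray actually exits the second
# face (no total internal reflection), then pick out the minimum deviation.
theta1 = np.linspace(25.0, 89.0, 10000)
for color, n in [("red", 1.306), ("green", 1.311), ("blue", 1.317)]:
    print(f"{color}: minimum deviation = {deviation(theta1, n).min():.2f} degrees")
```

The same numbers follow in closed form from the symmetric-ray condition for minimum deviation, \delta_{min} = 2\,\textrm{sin}^{-1}(n_2\,\textrm{sin}\,30^\circ) - 60^\circ.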

Another rather spectacular celestial occurrence is the lunar eclipse, where the earth completely obscures the moon from direct sunlight. This is the geometry for the lunar eclipse:

[Image: geometry of the lunar eclipse]

The question I wanted to address is the reddish hue of the moon, despite its lying in the earth's shadow; naively, it would seem that the moon should not be observable at all. However, there is an effect at work similar to the one behind the halo, with the earth's atmosphere as the refracting medium. Just as light incident on the prism enters traveling upward and exits traveling downward, the sun's rays enter the atmosphere on trajectories that would miss the moon and are bent towards it by refraction.

But why red? Well, this has the same origin as the reddish hue of the sunset. Because the intensity of light scattered by atmospheric particles goes as 1/\lambda^4 (Rayleigh scattering), blue light is scattered out of the beam much more readily than red light. Hence, by the time the light reaches the moon, it is primarily red.
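
To put a rough number on this (taking 650 nm and 450 nm as representative red and blue wavelengths), the Rayleigh ratio is

\left(\frac{650\ \textrm{nm}}{450\ \textrm{nm}}\right)^4 \approx 4.4

so blue light is scattered out of the beam roughly four times more strongly than red on its long, grazing path through the atmosphere.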

It is interesting to imagine what the earth looks like from the moon during a lunar eclipse; it likely looks completely dark apart from a spectacular red halo around the earth. Anyway, one should realize that Snell's law was first formulated by Ibn Sahl in 984, and so it was possible to come to these conclusions more than a thousand years ago. Nothing new here!

On Science and Meaning

The history of science has provided strong evidence to suggest that humanity's place in the universe is not very special. We have not existed for very long in the history of the universe, we are not at its center, and we likely will not exist for most of its future. This kind of sentiment can seem depressing to some, as can be seen in the response to the video made by Neil DeGrasse Tyson and MinutePhysics:

[Embedded video]

It appears that such ideas can make human life and our actions here on earth (and beyond) seem rather meaningless. As I have referenced in a previous post, this can especially be true for graduate students! However, on a more serious note, I would contend the exact opposite.

Because life on earth is so fragile and transient, and exists only in some far-flung corner of the universe, the best thing we can hope to do as humans is celebrate our existence through exploration, beauty, creation, truth and acts that enrich the lives of others and our environment.

When working in the lab or on a calculation that requires attention to small details, this larger context is often forgotten. To my mind, it is important not to lose sight of the basic reason why we are there in the first place, which is all too easy to do. The universe can seem meaningless, but not so when she lets us peer into her depths, usually revealing order of spectacular beauty.

I apologize if this post comes off as a little preachy or pretentious; I suspect I am really the one who needed this pep talk.

Drought

Since the discovery of superconductivity, the record transition temperature held by a material has been shattered many times. Here is an excellent plot (taken from here) that shows the critical temperature vs. year of discovery:

[Plot: superconducting transition temperature vs. year of discovery]

This is a pretty remarkable plot for many reasons. One is the dramatic increase in transition temperature ushered in by the cuprates after roughly 75 years of slow and steady increases. Another, more worrying, feature of the plot is that we are currently going through an unprecedented drought (not caused by climate change). The highest transition temperature (at ambient pressure) has not been raised for more than 23 years, the longest such stretch since the discovery of superconductivity.

It was always going to be difficult to increase the transition temperatures of superconductors once the materials in the cuprate class were (seemingly) exhausted. It is interesting to see, however, that the mode of discovery has changed markedly compared with earlier years. Nowadays, the plot advances in near-vertical jumps followed by a leveling off. Hopefully the next vertical line will reach room temperature sooner rather than later. I, personally, hope to still be around when room temperature superconductivity is achieved; it will be an interesting time to be alive.