# Tag Archives: Research

## From whence we know the photon exists

In my previous post, I laid out the argument for why the photoelectric effect does not imply the existence of photons. In this post, I want to outline what is probably not the first, but the conceptually simplest, experiment showing that photons do indeed exist. It was performed by Grangier, Roger and Aspect in 1986, and the paper can be found at this link (PDF!).

The idea can be described by considering the following experimental setup. Imagine I have light impinging on a 50/50 beamsplitter and detectors at both of the output ports, as pictured below. In this configuration, 50% of the light will be transmitted, labeled t below, and 50% of the light will be reflected, labeled r below.

Now, if a discrete and indivisible packet of light, i.e. a photon, is shone on the beam splitter, then it must either be reflected (and hit D1) or be transmitted (and hit D2). The detectors are forbidden from clicking in coincidence. However, there is one particularly tricky thing about this experiment. How do I ensure that I only fire a single photon at the beam splitter?

This is where Aspect, Roger and Grangier provide us with a rather ingenious solution. They used a two-photon cascade from a calcium atom to solve the issue. For the purpose of this post, one only needs to know that when the calcium atom is excited, it emits two photons as it relaxes back down to the ground state: it relaxes first to an intermediate state and then to the ground state, emitting one photon at each step. The cascade is so fast that the two photons are essentially emitted simultaneously on experimental timescales.

Now, because the calcium atom relaxes in this way, the first photon can be used to trigger the detectors to turn them on, and the second photon can impinge on the beam splitter to determine whether there are coincidences among the detectors. A schematic of the experimental arrangement is shown below (image taken from here; careful, it’s a large PDF file!):

Famously, they were essentially able to extrapolate their results and show that the detections are perfectly anti-correlated, i.e. that when a photon reflects off of the beam splitter, there is no transmitted photon and vice versa. Behold, the photon!
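To make the logic of the coincidence test concrete, here is a minimal Monte Carlo sketch (my own illustration, not the authors' analysis) of the anticorrelation parameter $\alpha = P_c/(P_1 P_2)$, which is essentially the quantity measured in this kind of experiment: indivisible single photons give $\alpha = 0$, while classical-like pulses with Poissonian photon statistics give $\alpha \approx 1$.

```python
import math
import random

def run_trials(photon_number, n_trials=100_000, seed=0):
    """Send pulses at a 50/50 beam splitter; photon_number(rng) returns the
    number of photons in each pulse. Tally singles and coincidence rates."""
    rng = random.Random(seed)
    n1 = n2 = nc = 0
    for _ in range(n_trials):
        n = photon_number(rng)
        # each photon independently reflects (toward D1) or transmits (toward D2)
        hits1 = sum(1 for _ in range(n) if rng.random() < 0.5)
        hits2 = n - hits1
        n1 += hits1 > 0
        n2 += hits2 > 0
        nc += (hits1 > 0) and (hits2 > 0)
    p1, p2, pc = n1 / n_trials, n2 / n_trials, nc / n_trials
    # anticorrelation parameter: alpha = P_coincidence / (P1 * P2)
    return pc / (p1 * p2) if p1 * p2 else 0.0

def poisson(rng, mean=1.0):
    """Sample a Poisson-distributed photon number (Knuth's algorithm)."""
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

alpha_single = run_trials(lambda rng: 1)              # one photon per pulse
alpha_poisson = run_trials(lambda rng: poisson(rng))  # classical-like pulses
print(alpha_single, alpha_poisson)
```

The single-photon case never produces a coincidence, so $\alpha = 0$ exactly, while the Poissonian case hovers around 1, which is the classical bound.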

However, they did not stop there. To show that quantum mechanical superposition applies to single photons, they sent these single photons through a Mach-Zehnder interferometer (depicted schematically below, image taken from here).

They were able to show that single photons do indeed interfere; the fringes were observed with a visibility of about 98%. A truly triumphant experiment that not only showed the existence of photons cleanly, but also that their properties are non-classical and can be described by quantum mechanics!
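The single-photon fringes follow from simple amplitude bookkeeping. Here is a small sketch (my own, using an assumed beam-splitter phase convention) showing that the exit probabilities of an ideal Mach-Zehnder interferometer oscillate with the relative path phase:

```python
import cmath
import math

def mz_probabilities(phase):
    """Single-photon amplitudes through an ideal Mach-Zehnder interferometer.
    Convention (an assumption): each 50/50 beam splitter maps an input
    amplitude a to transmitted a/sqrt(2) and reflected i*a/sqrt(2)."""
    s = 1 / math.sqrt(2)
    # first beam splitter: photon enters one port, splits into two arms
    upper, lower = s, 1j * s
    # relative phase accumulated in the upper arm
    upper *= cmath.exp(1j * phase)
    # second beam splitter recombines the two paths at each detector
    d1 = s * upper + 1j * s * lower   # transmitted upper + reflected lower
    d2 = 1j * s * upper + s * lower   # reflected upper + transmitted lower
    return abs(d1) ** 2, abs(d2) ** 2

p1_0, p2_0 = mz_probabilities(0.0)        # all photons exit toward D2
p1_pi, p2_pi = mz_probabilities(math.pi)  # all photons exit toward D1
print(p1_0, p2_0, p1_pi, p2_pi)
```

In the ideal case the fringe visibility is exactly 1; the measured ~98% reflects real-world imperfections in alignment and beam-splitter balance.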

## Critical Slowing Down

I realize that it’s been a long while since I’ve written a post, so the topic of this one, while unintentionally so, is quite apt.

Among the more universal themes in studying phase transitions is the notion of critical slowing down. Most students are introduced to this idea in the context of second order phase transitions, but it has turned out to be a useful concept in a wide range of systems beyond this narrow framework and into subjects well outside the purview of the average condensed matter physicist.

Stated simply, critical slowing down refers to the phenomenon observed near phase transitions where a slight perturbation or disturbance away from equilibrium takes a really long time to decay back to equilibrium. Why is this the case?

The main idea can be explained within the Landau theory of phase transitions, and I'll take that approach here since it's quite intuitive. As you can see in the images below, when the temperature is far from $T_c$, the potential well can be approximated by a parabolic form. However, this is not possible near $T_c$, where the bottom of the well becomes quartic and nearly flat.

Mathematically, this can be explained by considering a simple form of the Landau potential:

$V(x) = \alpha (T-T_c) x^2 + \beta x^4$

Near $T_c$, the coefficient of the parabolic term vanishes, and we are left with only the quartic one. Although it's clear from the images why the dynamics slow down near $T_c$, it helps to spell out the math a little.

Firstly, imagine that the potential is filled with some sort of viscous fluid, something akin to honey, and that the dynamics of the ball represent those of the order parameter. This puts us in the "overdamped" limit, where the order parameter reaches the equilibrium point without executing any sort of oscillatory motion. Far from $T_c$, as mentioned above, we can approximate the dynamics using the parabolic form of the potential (and the equation for the overdamped limit, $\dot{x} = -dV/dx$):

$\dot{x} = -\gamma(T) x$, where $\gamma(T) = 2\alpha(T-T_c)$.

The solution to this differential equation is of exponential form, i.e. $x(t) = x(0)e^{-\gamma(T) t}$, so the relaxation back to equilibrium is characterized by a temperature-dependent timescale $\tau = 1/\gamma(T)$. Since $\gamma(T)$ vanishes as $T \rightarrow T_c$, this relaxation time already diverges at the transition.

However, near $T_c$, the parabolic approximation breaks down, as its coefficient becomes very small, and we have to take the quartic term into consideration. The order parameter dynamics are then described by:

$\dot{x} = -4\beta x^3$,

which has a long-time solution of the form $x(t) \sim 1/\sqrt{\beta t}$. Noticeably, the order parameter now relaxes via a much slower power-law decay near $T_c$, as illustrated below:
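The two decay laws are easy to check numerically. Below is a minimal sketch (the parameter values $\gamma = \beta = 1$ are arbitrary, chosen only for illustration) that integrates the overdamped equation $\dot{x} = -dV/dx$ in both regimes and compares with the analytic solutions:

```python
import math

def integrate(dxdt, x0, dt=1e-3, t_max=20.0):
    """Forward-Euler integration of the overdamped equation x' = dxdt(x)."""
    x, traj = x0, []
    for _ in range(int(t_max / dt)):
        x += dt * dxdt(x)
        traj.append(x)
    return traj

# illustrative values (assumptions, chosen only to compare the two decay laws)
gamma, beta, x0, dt = 1.0, 1.0, 1.0, 1e-3
far = integrate(lambda x: -gamma * x, x0, dt)         # far from Tc: parabolic well
near = integrate(lambda x: -4 * beta * x**3, x0, dt)  # near Tc: quartic well

# compare with the analytic solutions at t = 10
t = 10.0
i = int(t / dt) - 1
exp_pred = x0 * math.exp(-gamma * t)                 # exponential decay
pow_pred = x0 / math.sqrt(1 + 8 * beta * x0**2 * t)  # power-law decay
print(far[i], exp_pred, near[i], pow_pred)
```

By $t = 10$ the exponential solution has decayed to ~$10^{-5}$ of its initial value while the power-law solution still sits near 0.1, which is the critical slowing down in miniature.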

Now, naively, at this point, one would think, “okay, so this is some weird thing that happens near a critical point at a phase transition…so what?”

Well, it turns out that critical slowing down can actually serve as a precursor of an oncoming phase transition in all sorts of contexts, and can even be predictive! Here are two illuminating papers which show that critical slowing down occurs near a population collapse in microbial communities (one from the Scheffer group and one from the Gore group). As an aside, the Gore group used the budding yeast Saccharomyces cerevisiae in their experiments, which is the yeast used in most beers (I wonder if their lab has tasting parties, and if so, can I get an invitation?).

Here is another recent paper showing critical slowing down in a snap-through instability of an elastic rod. I could go on and on listing the different contexts where critical slowing down has been observed, but I think it’s better that I cite this review article.

Surprisingly, critical slowing down has been observed at continuous, first-order and far-from-equilibrium phase transitions! As a consequence of this generality, the observation of critical slowing down can be predictive. If the appropriate measurements could be made, one may be able to see how close the earth's climate is to a "tipping point" from which it will be very difficult to return (due to hysteretic effects) (see this paper, which shows some form of critical slowing down in previous climatic changes in the earth's history). But for now, it's just interesting to look for critical slowing down in other contexts that are a little easier to study and where perhaps the consequences aren't as dire.

*Thanks to Alfred Zong who introduced me to many of the above papers

**Also, a shout out to Brian Skinner who caught repeated noise patterns in a recent preprint on room temperature superconductivity. Great courage and good job!

## The Graduate Student Tax

If you’re in the United States, you’ll probably have noticed that there is a bill dangerously close to passing that would increase the tax burden on graduate students dramatically. This bill would tax graduate students by counting their tuition waivers as part of their income, increasing their taxable income from somewhere in the $30k range to somewhere in the $70-80k range.

Carnegie Mellon and UC Berkeley have recently done calculations to estimate the extra taxes the graduate students will have to pay, and it does not provide happy reading. The Carnegie Mellon document can be found here and the UC Berkeley document can be found here. The UC Berkeley document also calculates the increase in the tax burden for MIT graduate students, as there can be large differences between public and private institutions (private institutions generally charge more for graduate education and have a larger tuition waiver, so graduate students at private institutions will be taxed more).

Most importantly, the document from UC Berkeley states:

> An MIT Ph.D. student who is an RA [Research Assistant] for all twelve months in 2017 will get a salary of approximately $37,128, and a health insurance plan valued at $3,000. The cost of a year of tuition at MIT is about $49,580. With these figures, we can estimate the student’s 2017 tax burden. We find that her federal income tax would be $3,993 under current law, and $13,577 under the TCJA [Tax Cuts and Jobs Act], or a 240% increase. We also note that her tax burden is about 37% of her salary.

This is a huge concern for those involved, but I think there are more dire long-term consequences at stake here for the STEM fields. I chose to pursue a graduate degree in physics in the US partly because it allowed me to pursue a degree without having to accrue student debt and to obtain a livable stipend to pay for food and housing (for me it was $20k/year). If I had to apply for graduate school in this current climate, I would probably apply to graduate schools in Canada and Europe to avoid the unpredictability of the current atmosphere and a possible effective cut to my stipend.
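As a quick sanity check on the quoted percentages (this is just arithmetic on the numbers above, not a tax calculation):

```python
# sanity check of the figures quoted from the UC Berkeley document
salary = 37_128      # MIT RA salary (12 months, 2017)
tax_current = 3_993  # estimated federal income tax under current law
tax_tcja = 13_577    # estimated federal income tax under the TCJA

increase = (tax_tcja - tax_current) / tax_current * 100  # percent increase
burden = tax_tcja / salary * 100                         # percent of salary
print(f"increase: {increase:.0f}%, burden: {burden:.0f}% of salary")
```

Both quoted figures (a 240% increase and a tax burden of about 37% of salary) check out.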

That is to say, I am sure that if this bill passes, it (and indeed the very fact that it could harm graduate students so heavily) will have the adverse side effect of driving talented graduate students to study in other countries, or of dissuading them from pursuing those degrees at all. It is important to remember that educated immigrants, especially those in the STEM fields, play a large role in spurring economic growth in the US.

Graduate students may not recognize that if they collectively quit their jobs, the US scientific research enterprise would grind to a quick halt. They are already a relatively hidden and cheap workforce in the US. It bemuses me that these students may be about to have their meager stipends for housing and food taxed further, to the point that they may not be able to afford these basic necessities.

## Research Topics and the LAQ Method

As a scientific researcher, the toughest part of the job is to come up with good scientific questions. A large part of my time is spent looking for such questions and every once in a while, I happen upon a topic that is worth spending the time to investigate further. The most common method of generating such questions is to come up with a lot of them and then sift through the list to find some that are worth pursuing.

One of the main criteria I use for selecting such questions/problems is what I refer to as the “largest answerable question” or LAQ method. Because the lifetime of a researcher is limited by the human lifespan, it is important to try to answer the largest answerable questions that fall within the window of your ability. Hence, this selection process is actually tied in with one’s self-confidence and actually takes a fair amount of introspection. I imagine the LAQ method looking a little bit like this:

One starts by asking some really general questions about some scientific topic which eventually proceeds to a more specific, answerable, concrete question. If the question is answerable, it usually will have ramifications that will be broadly felt by many in the community.

I imagine that most readers of this blog will have no trouble coming up with examples of success stories where scientists employed the LAQ method. Just about every famous scientist you can think of has probably, at least to some extent, used this method fruitfully. However, there are counterexamples as well, where important questions are asked by one scientist but answered by others.

I am almost done reading Erwin Schrodinger’s book What is Life?, which was written in 1943. In it, Schrodinger asks deep questions about genetics and attempts to put physical constraints on information-carrying molecules (DNA was not known at the time to be the site of genetic information). It is an entertaining read in two regards. Firstly, Schrodinger, at the time of writing, introduces to physicists some of the most pertinent and probing questions in genetics. The book was, after all, one that was read by both Watson and Crick before they set about discovering the structure of DNA. Secondly, and more interestingly, Schrodinger gets almost everything he tries to answer wrong! For instance, he suggests that quantum mechanics may play a role in causing a mutation of certain genes. This is not to say that his reasoning was not sound, but at the time of writing, there were just not enough experimental constraints on some of the assumptions he made.

Nonetheless, I applaud Schrodinger for writing the book and exposing his ignorance. Even though he was not able to answer many of the questions himself, he was an inspiration to many others who eventually were able to shed light on many of the questions posed in the book. Here is an example where the LAQ method fails, but still pushes science forward in a tangible way.

What are your strategies with coming up with good research questions? I have to admit that while the LAQ method is useful, I sometimes pursue problems purely because I find them stunning and no other reason is needed!

## Mott Switches and Resistive RAMs

Over the past few years, there have been some interesting developments concerning narrow gap correlated insulators. In particular, it has been found that it is particularly easy to induce an insulator to metal transition (in the very least, one can say that the resistivity changes by a few orders of magnitude!) in materials such as VO2, GaTa4Se8 and NiS2-xSx with an electric field. There appears to be a threshold electric field above which the material turns into a metal. Here is a plot demonstrating this rather interesting phenomenon in Ca2RuO4, taken from this paper:

It can be seen that the transition is hysteretic, thereby indicating that the insulator-metal transition as a function of field is first-order. It turns out that in most of the materials in which this kind of behavior is observed, there usually exists an insulator-metal transition as a function of temperature and pressure as well. Therefore, in cases such as in (V1-xCrx)2O3, it is likely that the electric field induced insulator-metal transition is caused by Joule heating. However, there are several other cases where it seems like Joule heating is likely not the culprit causing the transition.

While Zener breakdown has been put forth as a possible mechanism causing this transition when Joule heating has been ruled out, back-of-the-envelope calculations suggest that the electric field required to cause a Zener-type breakdown would be several orders of magnitude larger than that observed in these correlated insulators.
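Here is a hedged version of that back-of-the-envelope estimate (all numbers below are assumed, generic round values, not measurements from any particular material): taking a Zener tunneling length $\xi \sim Wa/\Delta$ set by the bandwidth $W$, lattice constant $a$ and gap $\Delta$, the threshold field $E \sim \Delta/e\xi$ comes out around the MV/cm scale, while the observed switching fields are typically of order kV/cm.

```python
# rough Landau-Zener threshold-field estimate; every number here is an
# assumed, generic value chosen only to set the scale
gap = 0.3        # Mott gap Delta (eV)
bandwidth = 1.0  # electronic bandwidth W (eV)
a = 5e-10        # lattice constant (m)

# a carrier must gain ~Delta over a tunneling length xi ~ W*a/Delta;
# working in eV and meters, Delta/xi comes out directly in V/m
xi = bandwidth * a / gap
E_zener = gap / xi                 # threshold field (V/m)
E_zener_kV_per_cm = E_zener / 1e5  # convert V/m -> kV/cm

E_observed = 10  # typical observed threshold field, ~10 kV/cm (assumption)
print(f"Zener estimate: {E_zener_kV_per_cm:.0f} kV/cm vs ~{E_observed} kV/cm observed")
```

Even with generous choices for the parameters, the estimate sits a couple of orders of magnitude above the observed thresholds, which is the mismatch referred to above.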

On the experimental side, things get even more interesting when applying pulsed electric fields. While the insulator-metal transition observed is usually hysteretic, as shown in the plot above, in some of these correlated insulators, electrical pulses can maintain the metallic state. What I mean is that when certain pulse profiles are applied to the material, it gets stuck in a metastable metallic state. This means that even when the applied voltage is turned off, the material remains a metal! This is shown here for instance for a 30 microsecond / 120V 7-pulse train with each pulse applied every 200 microseconds to GaV4S8 (taken from this paper):

Electric field pulses applied to GaV4S8. A single pulse induces an insulator-metal transition, but the sample reverts back to the insulating state after the pulse ends. A pulse train induces a transition to a metastable metallic state.

Now, if your thought process is similar to mine, you would be wondering if applying another voltage pulse would switch the material back to an insulator. The answer is that with a specific pulse profile this is possible. In the same paper as the one above, the authors apply a series of 500 microsecond pulses (up to 20V) to the same sample, and they don’t see any change. However, the application of a 12V/2ms pulse does indeed reset the sample back to (almost) its original state. In the paper, the authors attribute the need for a longer pulse to Joule heating, enabling the sample to revert back to the insulating state. Here is the image showing the data for the metastable-metal/insulator transition (taken from the same paper):

So, at the moment, it seems like the mechanism causing this transition is not very well understood (at least I don’t understand it very well!). It is thought that filamentary conducting channels form between the contacts, causing the insulator-metal transition. However, STM has revealed the existence of granular metallic islands in GaTa4Se8. The STM results, of course, should be taken with a grain of salt, since STM is surface sensitive and something different might be happening in the bulk. Anyway, some hypotheses have been put forth to explain what is going on microscopically in these materials. Here is a recent theoretical paper putting forth a plausible explanation for some of the observed phenomena.

Before concluding, I would just like to point out that the relatively recent (and remarkable) results on the hidden metallic state in TaS2 (see here as well), which again is a Mott-like insulator at low temperature, are likely related to the phenomena in the other materials. The relationship between the “hidden state” in TaS2 and the switching in the other insulators discussed here seems not to have been recognized in the literature.

Anyway, I heartily recommend reading this review article to gain more insight into these topics for those who are interested.

## Discovery vs. Q&A Experiments

When one looks through the history of condensed matter experiment, it is strange to see how many times discoveries were made in a serendipitous fashion (see here for instance). I would argue that most groundbreaking findings were unanticipated. The discoveries of superconductivity by Onnes, the Meissner effect, superfluidity in He-4, cuprate (and high temperature) superconductivity, the quantum Hall effect and the fractional quantum Hall effect were all unforeseen by the very experimentalists that were conducting the experiments! Theorists also did not anticipate these results. Of course, a whole slew of phases and effects were theoretically predicted and then experimentally observed as well, such as Bose-Einstein condensation, the Kosterlitz-Thouless transition, superfluidity in He-3 and the discovery of topological insulators, not to diminish the role of prediction.

For the condensed matter experimentalist, though, this presents a rather strange paradigm. Naively (and I would say that the general public by and large shares this view), science is perceived as working within a question-and-answer framework: you pose a concrete question, and then conduct an experiment to try to answer it. In condensed matter physics, this is often not the case, or at least only loosely the case. There are of course experiments that have been conducted to answer concrete questions, and when they are conducted, they usually end up being beautiful experiments (see here for example). But these kinds of experiments can only be conducted when a field reaches a point where concrete questions can be formulated. For exploratory studies, the questions are often not even clear. I would, therefore, consider these kinds of Q&A experiments to be the exception rather than the rule.

More often than not, discoveries are made by exploring uncharted territory, entering a space others have not explored before, and tempting fate. Questions are often not concrete but posed in the form, “What if I do this…?”. I know that this makes condensed matter physics sound like it lacks organization, clarity and structure, and that is not totally untrue. Most progress in the history of science did not proceed in the straight line that textbooks make it seem. When weird particles were popping up all over the place in particle physics in the 1930s and 40s, it was hard to see any organizing principles. Experimentalists were discovering new particles at a rate with which theory could not keep up. Only after a large number of particles had been discovered did Gell-Mann come up with his “Eightfold Way”, which ultimately led to the Standard Model.

This is all to say that scientific progress is tortuous, thought processes of scientists are highly nonlinear, and there is a lot of intuition required in deciding what problems to solve or what space is worth exploring. In condensed matter experiment, it is therefore important to keep pushing boundaries of what has been done before, explore, and do something unique in hope of finding something new!

Exposure to a wide variety of observations and methods is required to choose what boundaries to push and where to spend one’s time exploring. This is what makes diversity and avoiding “herd thinking” important to the scientific endeavor. Exploratory science without concrete questions makes some (especially younger graduate students) feel uncomfortable, since there is always the fear of finding nothing! This means that condensed matter physics, despite its tremendous progress over the last few decades, where certain general organizing principles have been identified, is still somewhat of a “wild west” in terms of science. But it is precisely this lack of structure that makes it particularly exciting — there are still plenty of rocks that need overturning, and it’s hard to foresee what is going to be found underneath them.

In experimental science, questions are important to formulate — but the adventure towards the answer usually ends up being more important than the answer itself.

## Electron-Hole Droplets

While some condensed matter physicists have moved on from studying semiconductors and consider them “boring”, there are consistently surprises from the semiconductor community that suggest the opposite. Most notably, the integer and fractional quantum Hall effects were not only unexpected, but (especially the FQHE) have changed the way we think about matter. The development of semiconductor quantum wells and superlattices has played a large role in furthering the physics of semiconductors and has been central to efforts to observe Bloch oscillations, the quantum spin Hall effect and exciton condensation in quantum Hall bilayers, among many other discoveries.

However, there was one development that apparently did not need much of a technological advancement in semiconductor processing; it was simply overlooked. This was the discovery of electron-hole droplets in the late 60s and early 70s in crystalline germanium and silicon. A lot of work on this topic was done in the Soviet Union on both the theoretical and experimental fronts, but because of this, finding the relevant papers online is quite difficult! An excellent review on the topic was written by L. Keldysh, who also did a lot of theoretical work on electron-hole droplets and was probably the first to recognize them for what they were.

Before continuing, let me just emphasize that when I say electron-hole droplet, I literally mean something quite akin to water droplets in a fog, for instance. In a semiconductor, the exciton gas condenses into a mist-like substance, with electron-hole droplets surrounded by a gas of free excitons. This is possible in a semiconductor because electron-hole recombination takes orders of magnitude longer than the transition into the electron-hole droplet phase. Therefore, the droplet can be treated as if it were in thermal equilibrium, although it is clearly a non-equilibrium state of matter. Recombination takes longer in an indirect-gap semiconductor, which is why silicon and germanium were used for these experiments.

A bit of history: the field got started in 1968, when Asnin, Rogachev and Ryvkin in the Soviet Union observed a jump in the photoconductivity of germanium at low temperature when it was excited above a certain threshold intensity (i.e. when the density of excitons exceeded $\sim 10^{16}\ \textrm{cm}^{-3}$). The interpretation of this observation as an electron-hole droplet was put on firm footing when a broad luminescence peak was observed by Pokrovski and Svistunova below the exciton line (~714 meV) at ~709 meV. The intensity of this peak increased dramatically upon lowering the temperature, with a substantial increase within just a tenth of a degree, an observation suggestive of a phase transition. I reproduce the luminescence spectrum from this paper by T.K. Lo, showing the free exciton and electron-hole droplet peaks, because, as mentioned, the Soviet papers are difficult to find online.

From my description so far, the most pressing remaining questions are: (1) why is there an increase in the photoconductivity due to the presence of droplets? and (2) is there better evidence for the droplet than just the luminescence peak? Because free excitons are also known to form biexcitons (i.e. excitonic molecules), the peak may easily be interpreted as evidence of biexcitons instead of an electron-hole droplet, and this was a point of much contention in the early days of studying the electron-hole droplet (see the Aside below).

Let me answer the second question first, since the answer is a little simpler. The most conclusive evidence (besides the excellent agreement between theory and experiment) was literally pictures of the droplet! Because the electrons and holes within the droplet recombine, they emit the characteristic radiation shown in the luminescence spectrum above centered at ~709 meV. This is in the infrared region and J.P. Wolfe and collaborators were actually able to take pictures of the droplets in germanium (~ 4 microns in diameter) with an infrared-sensitive camera. Below is a picture of the droplet cloud — notice how the droplet cloud is actually anisotropic, which is due to the crystal symmetry and the fact that phonons can propel the electron-hole liquid!

The first question is a little tougher to answer, but it can be accomplished with a qualitative description. When the excitons condense into the liquid, the density of “excitons” is much higher in this region. In fact, the inter-exciton distance is smaller than the distance between the electron and hole in the exciton gas. Therefore, it is not appropriate to refer to a specific electron as bound to a hole at all in the droplet. The electrons and holes are free to move independently. Naively, one can rationalize this because at such high densities, the exchange interaction becomes strong so that electrons and holes can easily switch partners with other electrons and holes respectively. Hence, the electron-hole liquid is actually a multi-component degenerate plasma, similar to a Fermi liquid, and it even has a Fermi energy which is on the order of 6 meV. Hence, the electron-hole droplet is metallic!
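As a sanity check on that Fermi energy, one can treat each carrier species in the droplet as a simple degenerate Fermi gas. This is a deliberately crude sketch: the density and effective mass below are assumed round numbers (not measured values), and I am ignoring the band degeneracies of real germanium, which lower the per-species Fermi energy.

```python
import math

# order-of-magnitude Fermi energy of the electron-hole liquid, treated as a
# simple degenerate Fermi gas; density and effective mass are assumptions
hbar = 1.055e-34   # J*s
m_e = 9.11e-31     # free electron mass (kg)
n = 2e17 * 1e6     # assumed carrier density: 2e17 cm^-3 converted to m^-3
m_eff = 0.2 * m_e  # assumed effective mass

k_F = (3 * math.pi**2 * n) ** (1 / 3)                     # Fermi wavevector
E_F_meV = hbar**2 * k_F**2 / (2 * m_eff) / 1.602e-19 * 1e3  # Fermi energy in meV
print(f"E_F ~ {E_F_meV:.1f} meV")
```

With these round numbers the estimate lands in the few-meV range, consistent with the ~6 meV scale quoted above; a proper treatment would split this between electrons and holes and account for germanium's multiple conduction valleys.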

So why do the excitons form droplets at all? This is a question of kinetics and has to do with a delicate balance between evaporation, surface tension, electron-hole recombination and the probability of an exciton in the surrounding gas being absorbed by the droplet. Keldysh’s article, linked above, and the references therein are excellent for the details on this point.

In light of the discovery that bismuth (also a compensated electron-hole liquid!) was recently found to be superconducting at ~530 microKelvin, one may ask whether electron-hole droplets can also become superconducting at similar or lower temperatures. From my brief searches online, it doesn’t seem like this question has been seriously addressed in the theoretical literature, and it would be an interesting route towards non-equilibrium superconductivity.

Just a couple years ago, a group also reported the existence of small droplet quanta in GaAs, demonstrating that research on this topic is still alive. To my knowledge, electron-hole drops have thus far not been observed in single-layer transition metal dichalcogenide semiconductors, which may present an interesting route to studying dimensional effects on the electron-hole droplet. However, this may be challenging since most of these materials are direct-gap semiconductors.

Aside: Sadly, it seems that evidence for the electron-hole droplet was actually discovered at Bell Labs by J.R. Haynes in 1966 in this paper, before the 1968 Soviet paper, unbeknownst to Haynes himself. Haynes attributed his observation to the excitonic molecule (or biexciton), which, it turns out, he didn’t have the statistics to observe. Later experiments confirmed that it was indeed the electron-hole droplet that he had observed. Strangely, Haynes’ paper is still cited relatively frequently today in the context of biexcitons, since he provided quite a nice analysis of his results! Also, it so happened that Haynes died after his paper was submitted and never found out that he had actually discovered the electron-hole droplet.