# Monthly Archives: January 2017

## Citizen First, Scientist Second

I have written previously in praise of the growing diversity of the scientific community. I emphasized its importance because people with different cultural backgrounds often synthesize ideas that are not commonly juxtaposed in other cultures. It is almost unquestionable that the US scientific enterprise has benefited greatly from the inclusion of scientists from around the world. As the scientific community has become more diverse over the past few decades, science (at least in the academic sense) has also become more open and international. As a member of the international community myself (I am a Thai citizen), recent events have been tough to watch as a scientist, an immigrant and a person.

This past week has seen some events, which I would consider unsavory, affecting the scientific and higher education communities in the US. The US government put in place a temporary ban barring citizens of seven Middle Eastern and African countries from entering the US. Some students are stranded outside the US, unable to return before the spring semester starts.

Day to day, science requires enormous attention to detail, patience in doing precise theoretical or experimental work, and time to work without distractions. It is easy to get wrapped up in one’s own work and forget to pick one’s head up to see what is going on around you. If events are not directly affecting you or someone close to you, it is easy to forget that they are happening at all.

In this spirit, I encourage you to attend (or organize!) department town hall meetings and to speak up in support of your international colleagues. A Scientists’ March is being organized, and I urge you to attend if there is a gathering near you. To be perfectly honest, like most scientists, I am a person of thought rather than a person of action, but it is always necessary to be a citizen first and a scientist second.

## Electron-Hole Droplets

While some condensed matter physicists have moved on from studying semiconductors and consider them “boring”, the semiconductor community consistently produces surprises suggesting otherwise. Most notably, the integer and fractional quantum Hall effects were not only unexpected, but (especially the FQHE) have changed the way we think about matter. The development of semiconductor quantum wells and superlattices has played a large role in furthering the physics of semiconductors and has been central to efforts to observe Bloch oscillations, the quantum spin Hall effect and exciton condensation in quantum Hall bilayers, among many other discoveries.

However, one development apparently did not require much of a technological advance in semiconductor processing; it was simply overlooked. This was the discovery of electron-hole droplets in the late 60s and early 70s in crystalline germanium and silicon. A lot of work on this topic was done in the Soviet Union on both the theoretical and experimental fronts, but partly because of this, finding the relevant papers online is quite difficult! An excellent review on the topic was written by L. Keldysh, who also did much of the theoretical work on electron-hole droplets and was probably the first to recognize them for what they were.

Before continuing, let me emphasize that when I say electron-hole droplet, I mean something quite akin to a water droplet in a fog, for instance. In a semiconductor, the exciton gas condenses into a mist-like substance: electron-hole droplets surrounded by a gas of free excitons. This is possible in a semiconductor because the electron-hole recombination time is orders of magnitude longer than the time it takes to undergo the transition to the electron-hole droplet phase. The droplet can therefore be treated as if it were in thermal equilibrium, although it is clearly a non-equilibrium state of matter. Recombination takes longer in an indirect-gap semiconductor, which is why silicon and germanium were used for these experiments.

A bit of history: The field got started in 1968 when Asnin, Rogachev and Ryvkin in the Soviet Union observed a jump in the photoconductivity of germanium at low temperature when it was excited above a certain radiation threshold (i.e. when the density of excitons exceeded $\sim 10^{16}\ \textrm{cm}^{-3}$). The interpretation of this observation as an electron-hole droplet was put on firm footing when Pokrovski and Svistunova observed a broad luminescence peak below the exciton line (~714 meV) at ~709 meV. The intensity of this peak increased dramatically upon lowering the temperature, with a substantial increase within just a tenth of a degree, an observation suggestive of a phase transition. I reproduce the luminescence spectrum from this paper by T.K. Lo showing the free exciton and electron-hole droplet peaks because, as mentioned, the Soviet papers are difficult to find online.

From my description so far, the most pressing remaining questions are: (1) why is there an increase in the photoconductivity in the presence of droplets? and (2) is there better evidence for the droplet than just the luminescence peak? Because free excitons are also known to form biexcitons (i.e. excitonic molecules), the peak could easily be interpreted as evidence of biexcitons instead of an electron-hole droplet, and this was a point of much contention in the early days of studying the electron-hole droplet (see the Aside below).

Let me answer the second question first, since the answer is a little simpler. The most conclusive evidence (besides the excellent agreement between theory and experiment) was literally pictures of the droplets! Because the electrons and holes within a droplet recombine, they emit the characteristic radiation shown in the luminescence spectrum above, centered at ~709 meV. This radiation is in the infrared, and J.P. Wolfe and collaborators were able to take pictures of the droplets in germanium (~4 microns in diameter) with an infrared-sensitive camera. Below is a picture of the droplet cloud; notice that the cloud is anisotropic, which is due to the crystal symmetry and the fact that phonons can propel the electron-hole liquid!

The first question is a little tougher to answer, but it can be addressed with a qualitative description. When the excitons condense into the liquid, the density of “excitons” is much higher in this region. In fact, the inter-exciton distance is smaller than the electron-hole separation within a free exciton. Therefore, it is no longer appropriate to regard a specific electron as bound to a specific hole in the droplet; the electrons and holes move independently. Naively, one can rationalize this because at such high densities the exchange interaction becomes strong, so electrons and holes can easily switch partners. Hence, the electron-hole liquid is actually a multi-component degenerate plasma, similar to a Fermi liquid, and it even has a Fermi energy, which is on the order of 6 meV. The electron-hole droplet is metallic!
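To get a feel for the numbers, here is a back-of-the-envelope estimate of the droplet’s Fermi energy from the standard degenerate-Fermi-gas formula. This is only a sketch: the carrier density ($\sim 2 \times 10^{17}\ \textrm{cm}^{-3}$) and effective mass ($0.2\, m_e$) are assumed values roughly representative of germanium, not taken from a specific paper.

```python
import math

# Physical constants (SI units)
hbar = 1.0545718e-34   # J*s
m_e = 9.10938e-31      # kg
eV = 1.602177e-19      # J

# Assumed rough values for the electron-hole liquid in germanium:
n = 2e23               # carrier density: ~2e17 cm^-3 expressed in m^-3 (assumed)
m_eff = 0.2 * m_e      # effective mass (assumed representative value)

# Fermi energy of a degenerate Fermi gas: E_F = hbar^2 (3 pi^2 n)^(2/3) / (2 m)
k_F = (3 * math.pi**2 * n) ** (1 / 3)
E_F = hbar**2 * k_F**2 / (2 * m_eff)

print(f"k_F ~ {k_F:.2e} m^-1")
print(f"E_F ~ {E_F / eV * 1000:.1f} meV")  # comes out on the order of 6 meV
```

With these inputs the estimate lands right around the few-meV scale quoted above, which is why a droplet at ~2 K can sensibly be called a degenerate (metallic) plasma.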

So why do the excitons form droplets at all? This is a question of kinetics and has to do with a delicate balance between evaporation, surface tension, electron-hole recombination and the probability of an exciton in the surrounding gas being absorbed by the droplet. Keldysh’s article, linked above, and the references therein are excellent for the details on this point.

In light of the recent discovery that bismuth (also a compensated electron-hole liquid!) superconducts at ~530 microkelvin, one may ask whether electron-hole droplets can also become superconducting at similar or lower temperatures. From my brief searches online, it doesn’t seem that this question has been seriously addressed in the theoretical literature, and it would be an interesting route towards non-equilibrium superconductivity.

Just a couple of years ago, a group reported the existence of small quantum droplets in GaAs, demonstrating that research on this topic is still alive. To my knowledge, electron-hole droplets have thus far not been observed in single-layer transition metal dichalcogenide semiconductors, which may present an interesting route to studying dimensional effects on the electron-hole droplet. However, this may be challenging since most of these materials are direct-gap semiconductors.

Aside: It seems that evidence for the electron-hole droplet was actually obtained at Bell Labs by J.R. Haynes in 1966 in this paper, before the 1968 Soviet work, unbeknownst to the author. Haynes attributed his observation to the excitonic molecule (or biexciton), which, it turns out, his data did not have the statistics to support. Later experiments confirmed that it was indeed the electron-hole droplet he had observed. Strangely, Haynes’ paper is still cited relatively frequently in the context of biexcitons, since he provided quite a nice analysis of his results! Sadly, Haynes died soon after his paper was submitted and never found out that he had actually discovered the electron-hole droplet.

## Disorganized Reflections

Recently, this blog has been concentrating on topics that have lacked a personal touch. A couple months ago, I started a postdoc position and it has gotten me thinking about a few questions related to my situation and some that are more general. I thought it would be a good time to share some of my thoughts and experiences. Here is just a list of some miscellaneous questions and introspections.

1. In a new role, doing new work, people often make mistakes while getting accustomed to their new surroundings. Since starting at my new position, I’ve been lucky enough to have patient colleagues who have forgiven my rather embarrassing blunders and guided me through uncharted territory. It’s sometimes deflating admitting your (usually) daft errors, but it’s a part of the learning process (at least it is for me).
2. There are a lot of reasons why people are drawn to doing science. One of them is perpetually doing something new, scary and challenging. I hope that, at least for me, science never gets monotonous and there is consistently some “fear” of the unknown at work.
3. In general, I am wary of working too much. It is important to take time to exercise and to take care of one’s mental and emotional health. One thing I have noticed is that some of the most driven and most intelligent graduate students suffer burnout because of the intense work schedules they keep at the beginning of graduate school.
4. Along with the previous point, I am also wary of spending too much time in the lab because it is important to have time to reflect. It is necessary to think about what you’ve done and what can be done tomorrow, and to conjure up experiments that one can possibly try, even if they may be lofty. It’s not a bad idea to set aside a little time each day or week to think about these kinds of things.
5. It is necessary to be resilient, not take things personally and know your limits. I know that I am not going to be the greatest physicist of my generation or anything like that, but what keeps me going is the hope that I can make a small contribution to the literature that some physicists and other scientists will appreciate. Maybe they might even say “Huh, that’s pretty cool” with some raised eyebrows.
6. Is physics my “passion”? I would say that I really like it, but I could just as easily have studied a host of other topics (such as literature, philosophy, economics, etc.), and I’m sure I would have enjoyed them just as much. I’ve always been more of a generalist; I was not singularly focused on physics as a kid or teenager. There are too many interesting things out there in the world to feel satiated studying only condensed matter physics. This is sometimes a drawback and sometimes an asset (i.e. I am sometimes less technically competent than my lab-mates, but I can probably write with less trouble).
7. For me, reading widely is valuable, but I need to be careful that it does not impede or become a substitute for active thought.
8. Overall, science can be intimidating and it can feel unrewarding. This is particularly true if you measure your success by publication rate or some other so-called “objective” metric. I would venture to say that a much better measure of success is whether you have grown during graduate school or a postdoc: whether you can think more independently, have picked up some valuable skills (both hard and soft) and have brought a multi-year project to fruition.

Please feel free to share thoughts from your own experiences! I am always eager to learn about people whose experiences and attitudes differ from mine.

A few nuggets on the internet this week:

1. For football/soccer fans:
http://www.espnfc.us/blog/the-toe-poke/65/post/3036987/bayern-munichs-thomas-muller-has-ingenious-way-of-dodging-journalists

2. Barack Obama’s piece in Science Magazine:
http://tinyurl.com/jmeoyz5

3. An interesting read on the history of physics education reform (Thanks to Rodrigo Soto-Garrido for sharing this with me):
http://aapt.scitation.org/doi/full/10.1119/1.4967888

4. I wonder if an experimentalist can get this to work:
http://www.bbc.com/news/uk-england-bristol-38573364

## Landau Theory and the Ginzburg Criterion

The Landau theory of second order phase transitions has probably been one of the most influential theories in all of condensed matter. It classifies phases by defining an order parameter — something that shows up only below the transition temperature, such as the magnetization in a paramagnetic to ferromagnetic phase transition. Landau theory has framed the way physicists think about equilibrium phases of matter, i.e. in terms of broken symmetries. Much current research is focused on transitions to phases of matter that possess a topological index, and a major research question is how to think about these phases which exist outside the Landau paradigm.

Despite its far-reaching influence, Landau theory actually doesn’t work quantitatively in most cases near a continuous phase transition. By this, I mean that it fails to predict the correct critical exponents. This is because Landau theory implicitly assumes that all the particles interact in some kind of average way and does not adequately take into account the fluctuations near a phase transition. Quite amazingly, Landau theory itself predicts that it is going to fail near a phase transition in many situations!

Let me give an example of its failure before discussing how it predicts its own demise. Landau theory predicts that the specific heat should exhibit a discontinuity like so at a phase transition:

However, if one examines the specific heat anomaly in liquid helium-4, for example, it looks more like a divergence as seen below:

So it clearly doesn’t predict the right critical exponent in that case. The Ginzburg criterion tells us how close to the transition temperature Landau theory will fail. The Ginzburg argument essentially goes like so: since Landau theory neglects fluctuations, we can see how accurate Landau theory is going to be by calculating the ratio of the fluctuations to the order parameter:

$E_R = |G(R)|/\eta^2$

where $E_R$ is the error in Landau theory, $|G(R)|$ quantifies the fluctuations and $\eta$ is the order parameter. Basically, if the error is small, i.e. $E_R \ll 1$, then Landau theory will work. However, if it approaches $\sim 1$, Landau theory begins to fail. One can actually calculate both the order parameter and the fluctuations (quantified by the two-point correlation function) within Landau theory itself, and therefore use Landau theory to calculate whether or not it will fail.

If one does carry out the calculation, one gets that Landau theory will work when:

$t^{(4-d)/2} \gg \frac{k_B}{\Delta C\, \xi(1)^d} \equiv t_L^{(4-d)/2}$

where $t$ is the reduced temperature, $d$ is the dimension, $\xi(1)$ is the dimensionless mean-field correlation length extrapolated (within Landau theory) to $t = 1$, i.e. $T = 2T_C$, and $\Delta C/k_B$ is the jump in the specific heat in units of $k_B$, which is usually about one per degree of freedom. In words, the formula essentially counts the number of degrees of freedom in a volume $\xi(1)^d$. If this number is large, then Landau theory, which averages the interactions of many particles, works well.
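For reference, solving the inequality above for the crossover scale $t_L$ itself (no new input, just rearranging, then specializing to $d = 3$):

```latex
t_L = \left[\frac{k_B}{\Delta C\, \xi(1)^d}\right]^{2/(4-d)}
\quad \xrightarrow{\; d = 3 \;} \quad
t_L = \left[\frac{k_B}{\Delta C\, \xi(1)^3}\right]^{2}
```

So in three dimensions, $t_L$ goes as the inverse square of the number of degrees of freedom in a coherence volume, which is what makes the estimates below so dramatic.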

So that was a bit of a mouthful, but the important thing is that these quantities can be estimated quite well for many phases of matter. For instance, in liquid helium-4, the particle interactions are very short-ranged because the helium atom is closed-shell (this is what enables helium to remain a liquid all the way down to zero temperature at ambient pressure in the first place). Therefore, we can assume that $\xi(1) \sim 1\textrm{\AA}$, and hence $t_L \sim 1$, so deviations from Landau theory can easily be observed in experiment close to the transition temperature.

Despite the qualitative similarities between superfluid helium-4 and superconductivity, a topic I have addressed in the past, Landau theory works much better for superconductors. We can again use the Ginzburg criterion to calculate how close to the transition temperature one must be to observe deviations from Landau theory. In fact, the question of why Ginzburg-Landau theory works so well for BCS superconductors is what awakened me to these issues in the first place. Anyway, we assume that $\xi(1)$ is on the order of the Cooper pair size, which for BCS superconductors is on the order of $1000 \textrm{\AA}$. There are about $10^8$ particles in this volume; correspondingly, $t_L \sim 10^{-16}$, and Landau theory fails only so close to the transition temperature that this region is inaccessible to experiment. Landau theory is therefore considered to work well in this case.

For high-Tc superconductors, the Cooper pair size is of order $10\textrm{\AA}$ and therefore deviations from Landau theory can be observed in experiment. The last thing to note about these formulas and approximations is that two parameters determine whether Landau theory works in practice: the number of dimensions and the range of interactions.

*Much of this post has been unabashedly pilfered from N. Goldenfeld’s book Lectures on Phase Transitions and the Renormalization Group, which I heartily recommend for further discussion of these topics.