
Mott Switches and Resistive RAMs

Over the past few years, there have been some interesting developments concerning narrow-gap correlated insulators. In particular, it has been found that it is remarkably easy to induce an insulator-to-metal transition (at the very least, one can say that the resistivity changes by a few orders of magnitude!) in materials such as VO2, GaTa4Se8 and NiS2-xSx with an electric field. There appears to be a threshold electric field above which the material turns into a metal. Here is a plot demonstrating this rather interesting phenomenon in Ca2RuO4, taken from this paper:

Ca2RuO4_Switch.PNG

It can be seen that the transition is hysteretic, indicating that the insulator-metal transition as a function of field is first-order. It turns out that in most of the materials in which this kind of behavior is observed, there also exists an insulator-metal transition as a function of temperature and pressure. Therefore, in cases such as (V1-xCrx)2O3, it is likely that the electric-field-induced insulator-metal transition is caused by Joule heating. However, there are several other cases where Joule heating is likely not the culprit.

While Zener breakdown has been put forth as a possible mechanism causing this transition when Joule heating has been ruled out, back-of-the-envelope calculations suggest that the electric field required to cause a Zener-type breakdown would be several orders of magnitude larger than that observed in these correlated insulators.
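
To get a feel for the mismatch, here is a minimal back-of-the-envelope sketch in the spirit of that argument. The gap, correlation length and threshold field below are my own illustrative numbers (not values taken from the papers above); a common criterion is that the field must supply roughly the gap energy over a correlation length:

```python
# Back-of-the-envelope Zener-breakdown estimate (illustrative numbers only):
# require e * E_th * xi ~ Delta, i.e. the field supplies the gap energy
# over a correlation length xi.

e     = 1.602e-19        # elementary charge (C)
Delta = 0.3 * e          # assumed Mott gap ~0.3 eV, in joules
xi    = 1.0e-9           # assumed correlation length ~1 nm (a few unit cells)

E_zener    = Delta / (e * xi)    # required breakdown field, V/m
E_observed = 1e5                 # assumed observed threshold ~1 kV/cm, in V/m

print(f"Zener-type threshold : {E_zener / 1e5:6.0f} kV/cm")
print(f"Observed threshold   : {E_observed / 1e5:6.0f} kV/cm")
print(f"Ratio                : {E_zener / E_observed:6.0f}x")
```

The exact ratio obviously depends on the assumed gap, correlation length and material, but the few-orders-of-magnitude mismatch survives any reasonable choice of numbers, which is why mechanisms beyond simple Zener tunneling are being sought.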

On the experimental side, things get even more interesting when applying pulsed electric fields. While the insulator-metal transition observed is usually hysteretic, as shown in the plot above, in some of these correlated insulators, electrical pulses can maintain the metallic state. What I mean is that when certain pulse profiles are applied to the material, it gets stuck in a metastable metallic state. This means that even when the applied voltage is turned off, the material remains a metal! This is shown here for instance for a 30 microsecond / 120V 7-pulse train with each pulse applied every 200 microseconds to GaV4S8 (taken from this paper):

GaVa4S8.PNG

Electric field pulses applied to GaV4S8. A single pulse induces an insulator-metal transition, but the material reverts to the insulating state once the pulse ends. A pulse train induces a transition to a metastable metallic state.

Now, if your thought process is similar to mine, you would be wondering whether applying another voltage pulse would switch the material back to an insulator. The answer is that with a specific pulse profile this is possible. In the same paper as the one above, the authors apply a series of 500 microsecond pulses (up to 20V) to the same sample and see no change. However, the application of a 12V/2ms pulse does indeed reset the sample back to (almost) its original state. In the paper, the authors attribute the need for a longer pulse to Joule heating, which enables the sample to revert to the insulating state. Here is the image showing the data for the metastable-metal-to-insulator transition (taken from the same paper):

gava4s8_reset

So, at the moment, it seems like the mechanism causing this transition is not very well understood (at least I don’t understand it very well!). It is thought that filamentary channels form between the contacts and cause the insulator-metal transition. However, STM has revealed the existence of granular metallic islands in GaTa4Se8. The STM results, of course, should be taken with a grain of salt, since STM is surface sensitive and something different might be happening in the bulk. Anyway, some hypotheses have been put forth to explain what is going on microscopically in these materials. Here is a recent theoretical paper putting forth a plausible explanation for some of the observed phenomena.

Before concluding, I would just like to point out that the relatively recent (and remarkable) results on the hidden metallic state in TaS2 (see here as well), which again is a Mott-like insulator in its low-temperature state, are likely related to the phenomena in the other materials. The relationship between the “hidden state” in TaS2 and the switching in the other insulators discussed here seems not to have been recognized in the literature.

Anyway, for those who are interested, I heartily recommend reading this review article to gain more insight into these topics.

Discovery vs. Q&A Experiments

When one looks through the history of condensed matter experiment, it is strange to see how many times discoveries were made in a serendipitous fashion (see here for instance). I would argue that most groundbreaking findings were unanticipated. The discoveries of superconductivity by Onnes, the Meissner effect, superfluidity in He-4, cuprate (and high-temperature) superconductivity, the quantum Hall effect and the fractional quantum Hall effect were all unforeseen by the very experimentalists who were conducting the experiments! Theorists did not anticipate these results either. Of course, a whole slew of phases and effects were theoretically predicted and then experimentally observed as well, such as Bose-Einstein condensation, the Kosterlitz-Thouless transition, superfluidity in He-3 and topological insulators, so none of this is meant to diminish the role of prediction.

For the condensed matter experimentalist, though, this presents a rather strange paradigm. Naively (and I would say that the general public by and large shares this view), science is perceived as working within a question-and-answer framework: you pose a concrete question and then conduct an experiment to try to answer it. In condensed matter physics, this is often not the case, or at least only loosely the case. There are of course experiments that have been conducted to answer concrete questions — and when they are conducted, they usually end up being beautiful experiments (see here for example). But these kinds of experiments can only be conducted when a field reaches a point where concrete questions can be formulated. For exploratory studies, the questions are often not even clear. I would, therefore, consider these kinds of Q&A experiments to be the exception rather than the norm.

More often than not, discoveries are made by exploring uncharted territory, entering a space others have not explored before, and tempting fate. Questions are often not concrete but posed in the form, “What if I do this…?”. I know that this makes condensed matter physics sound like it lacks organization, clarity and structure. But this is not totally untrue. Most progress in the history of science did not proceed in a straight line the way textbooks make it seem. When weird particles were popping up all over the place in particle physics in the 1940s and 50s, it was hard to see any organizing principles. Experimentalists were discovering new particles at a rate with which theory could not keep up. Only after a large number of particles had been discovered did Gell-Mann come up with his “Eightfold Way”, which ultimately led to the Standard Model.

This is all to say that scientific progress is tortuous, the thought processes of scientists are highly nonlinear, and a lot of intuition is required in deciding what problems to solve or what space is worth exploring. In condensed matter experiment, it is therefore important to keep pushing the boundaries of what has been done before, to explore, and to do something unique in the hope of finding something new!

Exposure to a wide variety of observations and methods is required to choose which boundaries to push and where to spend one’s time exploring. This is what makes diversity and avoiding “herd thinking” important to the scientific endeavor. Exploratory science without concrete questions makes some (especially younger graduate students) feel uncomfortable, since there is always the fear of finding nothing! This means that condensed matter physics, despite its tremendous progress over the last few decades, in which certain general organizing principles have been identified, is still somewhat of a scientific “wild west”. But it is precisely this lack of structure that makes it particularly exciting — there are still plenty of rocks that need overturning, and it’s hard to foresee what is going to be found underneath them.

In experimental science, questions are important to formulate — but the adventure towards the answer usually ends up being more important than the answer itself.

Electron-Hole Droplets

While some condensed matter physicists have moved on from studying semiconductors and consider them “boring”, there are consistently surprises from the semiconductor community that suggest the opposite. Most notably, the integer and fractional quantum Hall effects were not only unexpected, but (especially the FQHE) have changed the way we think about matter. The development of semiconductor quantum wells and superlattices has played a large role in furthering the physics of semiconductors and has been central to the efforts in observing Bloch oscillations, the quantum spin Hall effect and exciton condensation in quantum Hall bilayers, among many other discoveries.

However, there was one development that apparently did not need much of a technological advancement in semiconductor processing — it was simply overlooked. This was the discovery of electron-hole droplets in the late 60s and early 70s in crystalline germanium and silicon. A lot of work on this topic was done in the Soviet Union on both the theoretical and experimental fronts, but because of this, finding the relevant papers online is quite difficult! An excellent review on the topic was written by L. Keldysh, who also did a lot of theoretical work on electron-hole droplets and was probably the first to recognize them for what they were.

Before continuing, let me just emphasize that when I say electron-hole droplet, I literally mean something quite akin to water droplets in a fog. In a semiconductor, the exciton gas condenses into a mist-like substance, with electron-hole droplets surrounded by a gas of free excitons. This is possible in a semiconductor because the electron-hole recombination time is orders of magnitude longer than the time it takes to undergo the transition to the electron-hole droplet phase. Therefore, the droplet can be treated as if it is in thermal equilibrium, although it is clearly a non-equilibrium state of matter. Recombination takes longer in an indirect-gap semiconductor, which is why silicon and germanium were used for these experiments.

A bit of history: The field got started in 1968 when Asnin, Rogachev and Ryvkin in the Soviet Union observed a jump in the photoconductivity of germanium at low temperature when excited above a certain threshold radiation intensity (i.e. when the density of excitons exceeded \sim 10^{16} \textrm{cm}^{-3}). The interpretation of this observation in terms of an electron-hole droplet was put on firm footing when a broad luminescence peak was observed by Pokrovski and Svistunova below the exciton line (~714 meV) at ~709 meV. The intensity of this peak increased dramatically upon lowering the temperature, with a substantial increase within just a tenth of a degree, an observation suggestive of a phase transition. I reproduce the luminescence spectrum from this paper by T.K. Lo, showing the free exciton and electron-hole droplet peaks, because, as mentioned, the Soviet papers are difficult to find online.

EHD-Lo.JPG

From my description so far, the most pressing remaining questions are: (1) why is there an increase in the photoconductivity due to the presence of droplets? and (2) is there better evidence for the droplet than just the luminescence peak? Because free excitons are also known to form biexcitons (i.e. excitonic molecules), the peak could easily be interpreted as evidence of biexcitons instead of an electron-hole droplet, and this was a point of much contention in the early days of studying the electron-hole droplet (see the Aside below).

Let me answer the second question first, since the answer is a little simpler. The most conclusive evidence (besides the excellent agreement between theory and experiment) was, literally, pictures of the droplets! Because the electrons and holes within a droplet recombine, they emit the characteristic radiation shown in the luminescence spectrum above, centered at ~709 meV. This is in the infrared region, and J.P. Wolfe and collaborators were actually able to take pictures of the droplets in germanium (~4 microns in diameter) with an infrared-sensitive camera. Below is a picture of the droplet cloud — notice that the cloud is anisotropic, which is due to the crystal symmetry and the fact that phonons can propel the electron-hole liquid!

Pic_EHD.JPG

The first question is a little tougher to answer, but it can be addressed with a qualitative description. When the excitons condense into the liquid, the density of “excitons” is much higher in this region. In fact, the inter-exciton distance becomes smaller than the electron-hole separation within a single exciton in the gas. Therefore, it is no longer appropriate to speak of a specific electron as being bound to a specific hole in the droplet; the electrons and holes move independently. Naively, one can rationalize this because at such high densities the exchange interaction becomes strong, so that electrons and holes can easily switch partners with other holes and electrons respectively. The electron-hole liquid is therefore a multi-component degenerate plasma, similar to a Fermi liquid, and it even has a Fermi energy, which is on the order of 6 meV. Hence, the electron-hole droplet is metallic!
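
As a quick sanity check on that last number, here is a free-fermion estimate of the Fermi energy. The density and effective mass are representative values that I am assuming for illustration (they are not quoted in the post):

```python
import numpy as np

# Free-fermion Fermi energy for the electron-hole liquid (a sketch with
# assumed, representative numbers).

hbar = 1.0546e-34      # J s
m_e  = 9.109e-31       # kg
meV  = 1.602e-22       # J

n     = 2e17 * 1e6     # assumed carrier density ~2e17 cm^-3, converted to m^-3
m_eff = 0.2 * m_e      # assumed effective mass

k_F = (3 * np.pi**2 * n) ** (1 / 3)
E_F = hbar**2 * k_F**2 / (2 * m_eff)

print(f"k_F ~ {k_F:.1e} 1/m,  E_F ~ {E_F / meV:.1f} meV")
```

With these inputs the estimate lands at roughly 6 meV, consistent with the number above, and the corresponding Fermi temperature of several tens of kelvin is why the liquid is degenerate at liquid-helium temperatures.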

So why do the excitons form droplets at all? This is a question of kinetics and has to do with a delicate balance between evaporation, surface tension, electron-hole recombination and the probability of an exciton in the surrounding gas being absorbed by the droplet. Keldysh’s article, linked above, and the references therein are excellent for the details on this point.

In light of the recent discovery that bismuth (also a compensated electron-hole liquid!) superconducts at ~530 microkelvin, one may ask whether electron-hole droplets can also become superconducting at similar or lower temperatures. From my brief searches online, it doesn’t seem like this question has been seriously addressed in the theoretical literature, and it would be an interesting route towards non-equilibrium superconductivity.

Just a couple years ago, a group also reported the existence of small droplet quanta in GaAs, demonstrating that research on this topic is still alive. To my knowledge, electron-hole drops have thus far not been observed in single-layer transition metal dichalcogenide semiconductors, which may present an interesting route to studying dimensional effects on the electron-hole droplet. However, this may be challenging since most of these materials are direct-gap semiconductors.

Aside: Sadly, it seems that evidence for the electron-hole droplet was actually obtained at Bell Labs by J.R. Haynes in 1966, in this paper, before the 1968 Soviet paper, unbeknownst to the author. Haynes attributed his observation to the excitonic molecule (or biexciton), which, it turns out, he didn’t have the statistics to observe. Later experiments confirmed that it was indeed the electron-hole droplet that he had observed. Strangely, Haynes’ paper is still cited relatively frequently in the present day in the context of biexcitons, since he provided quite a nice analysis of his results! Also, it so happened that Haynes died after his paper was submitted and never found out that he had actually discovered the electron-hole droplet.

Disorganized Reflections

Recently, this blog has been concentrating on topics that have lacked a personal touch. A couple months ago, I started a postdoc position and it has gotten me thinking about a few questions related to my situation and some that are more general. I thought it would be a good time to share some of my thoughts and experiences. Here is just a list of some miscellaneous questions and introspections.

  1. In a new role, doing new work, people often make mistakes while getting accustomed to their new surroundings. Since starting at my new position, I’ve been lucky enough to have patient colleagues who have forgiven my rather embarrassing blunders and guided me through uncharted territory. It’s sometimes deflating admitting your (usually) daft errors, but it’s a part of the learning process (at least it is for me).
  2. There are a lot of reasons why people are drawn to doing science. One of them is perpetually doing something new, scary and challenging. I hope that, at least for me, science never gets monotonous and there is consistently some “fear” of the unknown at work.
  3. In general, I am wary of working too much. It is important to take time to exercise and take care of one’s mental and emotional health. One of the things I have noticed is that sometimes the most driven and most intelligent graduate students suffered from burnout due to their intense work schedules at the beginning of graduate school.
  4. Along with the previous point, I am also wary of spending too much time in the lab, because it is important to have time to reflect. It is necessary to think about what you’ve done, what can be done tomorrow, and to conjure up experiments that one might try, even if they are lofty. It’s not a bad idea to set aside a little time each day or week to think about these kinds of things.
  5. It is necessary to be resilient, not take things personally and know your limits. I know that I am not going to be the greatest physicist of my generation or anything like that, but what keeps me going is the hope that I can make a small contribution to the literature that some physicists and other scientists will appreciate. Maybe they might even say “Huh, that’s pretty cool” with some raised eyebrows.
  6. Is physics my “passion”? I would say that I really like it, but I could have just as easily studied a host of other topics (such as literature, philosophy, economics, etc.), and I’m sure I would have enjoyed them just as much. I’ve always been more of a generalist in contrast to being focused on physics since I was a kid or teenager. There are too many interesting things out there in the world to feel satiated just studying condensed matter physics. This is sometimes a drawback and sometimes an asset (i.e. I am sometimes less technically competent than my lab-mates, but I can probably write with less trouble).
  7. For me, reading widely is valuable, but I need to be careful that it does not impede or become a substitute for active thought.
  8. Overall, science can be intimidating and it can feel unrewarding. This can be particularly true if you measure your success using a publication rate or some other so-called “objective” measure. I would venture to say that a much better measure of success is whether you have grown during graduate school or during a postdoc: whether you are able to think more independently, have picked up some valuable skills (both hard and soft) and have brought a multi-year project to fruition.

Please feel free to share thoughts from your own experiences! I am always eager to learn about people whose experiences and attitudes differ from mine.

A few nuggets on the internet this week:

  1. For football/soccer fans:
    http://www.espnfc.us/blog/the-toe-poke/65/post/3036987/bayern-munichs-thomas-muller-has-ingenious-way-of-dodging-journalists

  2. Barack Obama’s piece in Science Magazine:
    http://tinyurl.com/jmeoyz5

  3. An interesting read on the history of physics education reform (Thanks to Rodrigo Soto-Garrido for sharing this with me):
    http://aapt.scitation.org/doi/full/10.1119/1.4967888

  4. I wonder if an experimentalist can get this to work:
    http://www.bbc.com/news/uk-england-bristol-38573364

Strontium Titanate – A Historical Tour

Like most ugly haircuts, materials tend to go in and out of style over time. Strontium titanate (SrTiO3), commonly referred to as STO, has, since its discovery, been somewhat timeless. And this is not just because it is often used as a substitute for diamonds. What I mean is that studying STO rarely seems to go out of style and the material always appears to have some surprises in store.

STO was first synthesized in the 1950s, before it was discovered naturally in Siberia. It didn’t take long for research on this material to take off. One of the first surprising results STO had in store was that it becomes superconducting when reduced (electron-doped). This is not remarkable in and of itself, but this study and other follow-up ones showed that superconductivity can occur at a carrier density of only ~5\times 10^{17} \textrm{cm}^{-3}.

This is surprising in light of BCS theory, where the Fermi energy is assumed to be much greater than the Debye energy — which is clearly not the case here. There have been claims in the literature suggesting that the superconductivity may be plasmon-induced, since the plasma frequency is in the phonon energy regime. L. Gorkov recently put a paper up on the arXiv discussing the mechanism problem in STO.
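
To put rough numbers on that statement, here is a sketch comparing a free-electron-like Fermi energy at this carrier density with a Debye energy scale. The effective mass and Debye temperature are assumptions chosen only for illustration:

```python
import numpy as np

# Compare a free-electron-like Fermi energy at n ~ 5e17 cm^-3 with a Debye
# energy scale (illustrative assumptions for the mass and Debye temperature).

hbar, m_e, k_B, meV = 1.0546e-34, 9.109e-31, 1.381e-23, 1.602e-22

n     = 5e17 * 1e6     # carrier density from the text, in m^-3
m_eff = 2.0 * m_e      # assumed heavy conduction-band mass
T_D   = 400.0          # assumed Debye temperature, K

k_F = (3 * np.pi**2 * n) ** (1 / 3)
E_F = hbar**2 * k_F**2 / (2 * m_eff)

print(f"E_F       ~ {E_F / meV:.1f} meV")
print(f"k_B * T_D ~ {k_B * T_D / meV:.1f} meV")
```

With anything like these numbers, the Fermi energy sits well below the phonon energy scale, inverting the usual BCS hierarchy; this is precisely what makes the pairing mechanism in STO such a puzzle.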

Soon after the initial work on superconductivity in doped STO, Shirane, Yamada and others began studying pure STO in light of the “soft mode” theory of structural phase transitions put forth by W. Cochran and others. Because STO undergoes an antiferrodistortive structural phase transition at ~110K (depicted below), they were able to observe a corresponding soft phonon at the Brillouin zone boundary (shown below, taken from this paper). These results had vast implications for how we understand structural phase transitions today, where it is almost always assumed that a phonon softens at the transition temperature of a continuous structural phase transition.

Many materials similar to STO, such as BaTiO3 and PbTiO3, which also have a perovskite crystal structure motif, undergo a phase transition to a ferroelectric state at low (or not so low) temperatures. The transition to the ferroelectric state is accompanied by a diverging dielectric constant (and dielectric susceptibility), much in the way the magnetic susceptibility diverges at the transition from a paramagnetic to a ferromagnetic state. In 1978, Muller (of Bednorz and Muller fame) and Burkard reported that at low temperature the dielectric constant begins its ascent towards divergence, but then saturates at around 4K (the data are shown in the top panel below). Ferroelectricity is associated with a zone-center softening of a transverse phonon, and in the case of STO this softening begins but never quite completes, as shown schematically in the image below (and you can see this in the data by Shirane and Yamada above as well).

quantumparaelectricity_signatures

Taken from Wikipedia

The saturation of the large dielectric constant and the not-quite-softening of the zone-center phonon have led authors to refer to STO as a quantum paraelectric (i.e. because of the zero-point motion of the transverse optical zone-center phonon, the material never gains enough energy to undergo the ferroelectric transition). As recently as 2004, however, it was reported that one can induce ferroelectricity in STO films at room temperature by straining them.
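
The standard way to parametrize this behavior is Barrett’s quantum-mechanical extension of the Curie-Weiss law, in which the zero-point motion of the soft mode cuts off the divergence. Here is a minimal numerical sketch comparing the two forms; the fit parameters are rough values of the kind quoted in the literature, assumed here purely for illustration:

```python
import numpy as np

# Classical Curie-Weiss law vs. Barrett's quantum-corrected form for the
# dielectric constant (assumed, approximate fit parameters).

M, T1, T0 = 8.0e4, 84.0, 38.0    # kelvin

def eps_curie_weiss(T):
    return M / (T - T0)

def eps_barrett(T):
    # saturates at M / (T1/2 - T0) as T -> 0 instead of diverging at T0
    return M / ((T1 / 2) / np.tanh(T1 / (2 * T)) - T0)

for T in (300, 100, 50, 40, 4, 1):
    cw = eps_curie_weiss(T) if T > T0 else float("inf")
    print(f"T = {T:>3} K   Curie-Weiss: {cw:>9.0f}   Barrett: {eps_barrett(T):>7.0f}")
```

At high temperature the two expressions coincide, but as T \rightarrow 0 the Barrett form saturates at M/(T_1/2 - T_0) (about 2\times 10^4 for these assumed parameters) instead of diverging at T_0, which is the quantum paraelectric behavior seen in the data above.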

In recent times, STO has found itself used as a common substrate material thanks to processes that can make its surface atomically flat. While this may not sound so exciting, it has had vast implications for the physics of thin films and interfaces. Firstly, this property has enabled researchers to grow high-quality thin films of cuprate superconductors using molecular beam epitaxy, which was a big challenge in the 1990s. Even more recently, it has led to the discovery of a two-dimensional electron gas, superconductivity and ferromagnetism at the LAO/STO interface, a startling finding given that both materials are electrically insulating. More strikingly still, when FeSe (a superconductor at around 7K) is grown as a monolayer film on STO, its transition temperature is boosted to around 100K (though the precise transition temperature is disputed in subsequent experiments, it is still high!). This has led to the idea that the FeSe somehow “borrows the pairing glue” from the underlying substrate.

STO is a gem of a material in many ways. I doubt that we are done with its surprises.

Wannier-Stark Ladder, Wavefunction Localization and Bloch Oscillations

Most people who study solid state physics are told at some point that in a totally pure sample where there is no scattering, one should observe an AC response to a DC electric field, with oscillations at the Bloch frequency (\omega_B). These are the so-called Bloch oscillations, which were predicted by C. Zener in this paper.

However, the actual observation of Bloch oscillations is not as simple as the textbooks make it seem. There is an excellent Physics Today article by E. Mendez and G. Bastard that outlines some of the challenges associated with observing Bloch oscillations (and which was written while this paper was being published!). Since textbook treatments often use semi-classical equations of motion to demonstrate the existence of Bloch oscillations in a periodic potential, they implicitly assume the transport of an electron wave-packet. Generating this wave-packet in a solid is non-trivial.
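
One of those challenges is easy to quantify: the electron has to complete a Bloch cycle before it scatters, i.e. the Bloch period T_B = 2\pi\hbar/(eEd) must beat the scattering time. Here is a rough feasibility sketch; the scattering time and the periods are my own illustrative assumptions rather than numbers from the articles linked above:

```python
import numpy as np

# Minimum field needed so that the Bloch period 2*pi*hbar/(e*E*d) is shorter
# than an assumed scattering time tau (illustrative numbers only).

hbar, e = 1.0546e-34, 1.602e-19
tau = 100e-15                        # assumed scattering/dephasing time ~100 fs

for label, d in [("bulk crystal, a ~ 0.5 nm", 0.5e-9),
                 ("superlattice, d ~ 10 nm ", 10e-9)]:
    E_min = 2 * np.pi * hbar / (e * d * tau)     # V/m
    print(f"{label}: need E > {E_min / 1e5:4.0f} kV/cm")
```

Estimates of this kind are part of why the effect was eventually chased in superlattices: the much larger period brings the required field down to something a sample can sustain without breaking down.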

In fact, if one undertakes a full quantum mechanical treatment of electrons in a periodic potential under the influence of an electric field, one arrives at the Wannier-Stark ladder, which shows that an electric field can localize electrons! It is this ladder and the corresponding localization which was key to observing Bloch oscillations in semiconductor superlattices.

Let me use the two-well potential to give you a picture of how this localization might occur. Imagine symmetric potential wells, where the lowest energy eigenstates look like so (where S and A label the symmetric and anti-symmetric states):

Now, imagine that I start to make the wells a little asymmetric. What happens in this case? Well, it turns out that the electrons start to localize in the following way (shown for the formerly symmetric and anti-symmetric states):

G. Wannier was able to solve the Schrodinger equation for a periodic potential with an applied electric field in full, and showed that the eigenstates of the problem form a Stark ladder. This means that the eigenstates are of identical functional form from quantum well to quantum well (unlike in the double well shown above) and that the energies of the eigenstates are spaced apart by \Delta E=\hbar \omega_B! The potential is shown schematically below. It is also shown that as the potential wells tilt more and more (i.e. with larger electric fields), the wavefunctions become more localized (the image is taken from here (pdf!)):

screenshot-from-2016-12-01-222719

A nice numerical solution from the same document shows the wavefunctions for a periodic potential well profile with a strong electric field, exhibiting a strong wavefunction localization. Notice that the wavefunctions are of identical form from well to well.

StarkLadder.png

What can be seen in this solution is that the stationary states are split by \hbar \omega_B, but much like the quantum harmonic oscillator (where the levels are split by \hbar \omega), nothing actually oscillates until one has a wavepacket (i.e. a linear superposition of eigenstates). Therefore, Bloch oscillations cannot be observed in the ground state (which includes the applied electric field) of a semiconducting superlattice, since it is an insulator! One must first generate a wavepacket in the solid.
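
If you want to see both of these features, the uniform \hbar \omega_B spacing and the field-induced localization, emerge from a calculation, a toy one-dimensional tight-binding chain with a linear on-site potential already does the job. Here is a minimal numerical sketch (the parameters are made up, and the model is a discrete caricature of the continuum problem above):

```python
import numpy as np

# Toy Wannier-Stark ladder: 1D tight-binding chain (hopping t) with a linear
# on-site potential eEa*n.  Made-up parameters, arbitrary units.

def stark_ladder(N=60, t=1.0, eEa=0.3):
    H = np.diag(eEa * np.arange(N)) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
    E, psi = np.linalg.eigh(H)
    n = np.arange(N)
    p = psi[:, N // 2] ** 2       # probability density of an interior eigenstate
    width = np.sqrt(np.sum(p * n**2) - np.sum(p * n) ** 2)
    return np.diff(E[N // 3 : 2 * N // 3]), width

for eEa in (0.15, 0.3, 0.6):
    spacings, width = stark_ladder(eEa=eEa)
    print(f"eEa = {eEa:4.2f}:  interior level spacing ~ {spacings.mean():.3f},"
          f"  rms extent ~ {width:.1f} sites")
```

Increasing the tilt eEa both widens the ladder spacing (which stays uniform at eEa, the lattice analogue of \hbar \omega_B) and squeezes each eigenstate onto fewer wells, which is precisely the field-induced localization described above.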

In the landmark paper that finally announced the existence of Bloch oscillations, Waschke et al. generated a wavepacket in a GaAs-GaAlAs superlattice using a laser pulse. The pulse was incident on a sample with an electric field applied along the superlattice direction, and they were able to observe radiation emitted from the sample due to the Bloch oscillations. I should mention that superlattices must be used to observe the Wannier-Stark ladder and Bloch oscillations because \omega_B, which scales with the superlattice period, needs to be fast enough that the electrons don’t scatter off impurities and phonons before completing a cycle. Here is the famous plot from the aforementioned paper showing that the frequency of the radiation emitted by the Bloch oscillations can be tuned with the electric field:

PRLBlochOscillations.png

This is a pretty remarkable experiment, one of those which took 60 years from its first proposal to finally be observed.

Kapitza-Dirac Effect

We are all familiar with the fact that light can diffract from two (or multiple) slits in a Young-type experiment. After the advent of quantum mechanics and de Broglie’s wave description of matter, Davisson and Germer showed that electrons could be diffracted by a crystal. In 1933, P. Kapitza and P. Dirac proposed that it should in principle be possible to diffract electrons from a standing wave of light, in effect using light as a diffraction grating.

In this scheme, the electrons would interact with the light through the ponderomotive potential. If you’re not familiar with the ponderomotive potential, you wouldn’t be the only one — this is something I was totally ignorant of until reading about the Kapitza-Dirac effect. In 1995, Anton Zeilinger and co-workers were able to demonstrate the Kapitza-Dirac effect with atoms, obtaining a beautiful diffraction pattern in the process, which you can take a look at in this paper. It probably took so long for the effect to be observed because it requires the use of high-powered lasers.
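
To get a feel for why high-powered lasers matter, here is a small sketch of the ponderomotive energy scale, U_p = e^2E_0^2/(4m\omega^2), at a few assumed intensities. The wavelength and intensities are illustrative choices, not the parameters of the experiments discussed here:

```python
import numpy as np

# Ponderomotive potential U_p = e^2 E0^2 / (4 m omega^2), written in terms of
# the intensity I = (1/2) eps0 c E0^2 (assumed wavelength and intensities).

e, m, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

lam   = 532e-9                       # assumed laser wavelength
omega = 2 * np.pi * c / lam

for I_Wcm2 in (1e8, 1e11, 1e14):     # intensities in W/cm^2
    I = I_Wcm2 * 1e4                 # convert to W/m^2
    U_p = e**2 * I / (2 * eps0 * c * m * omega**2)
    print(f"I = {I_Wcm2:.0e} W/cm^2  ->  U_p ~ {U_p / e * 1e3:.3g} meV")
```

Only at rather high intensities does the light grating become deep enough to diffract a particle appreciably, which gives a rough sense of why intense, pulsed lasers were needed.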

Later, in 2001, the experiment was pushed a little further and an electron beam was used to demonstrate the effect (as opposed to atoms), as Kapitza and Dirac originally proposed. Indeed, a diffraction pattern was again observed. The article is linked here and I reproduce the main result below:

dirac-kaptiza

(Top) The interference pattern observed in the presence of a standing light wave. (Bottom) The profile of the electron beam in the absence of the light wave.

Even though this experiment is conceptually quite simple, these basic quantum phenomena still manage to elicit awe (at least from me!).