Category Archives: States of Matter

Electron-Hole Droplets

While some condensed matter physicists have moved on from studying semiconductors and consider them “boring”, there are consistently surprises from the semiconductor community that suggest the opposite. Most notably, the integer and fractional quantum Hall effects were not only unexpected, but (especially the FQHE) have changed the way we think about matter. The development of semiconductor quantum wells and superlattices has played a large role in furthering the physics of semiconductors and has been central to the efforts to observe Bloch oscillations, the quantum spin Hall effect and exciton condensation in quantum Hall bilayers, among many other discoveries.

However, there was one development that apparently did not require much of a technological advancement in semiconductor processing — it was simply overlooked. This was the discovery of electron-hole droplets in the late 60s and early 70s in crystalline germanium and silicon. A lot of work on this topic was done in the Soviet Union on both the theoretical and experimental fronts, but because of this, finding the relevant papers online is quite difficult! An excellent review on the topic was written by L. Keldysh, who also did much of the theoretical work on electron-hole droplets and was probably the first to recognize them for what they were.

Before continuing, let me just emphasize that when I say electron-hole droplet, I literally mean something quite akin to water droplets in a fog, for instance. In a semiconductor, the exciton gas condenses into a mist-like substance, with electron-hole droplets surrounded by a gas of free excitons. This is possible in a semiconductor because the electron-hole recombination time is orders of magnitude longer than the time it takes to undergo the transition to the electron-hole droplet phase. The droplet can therefore be treated as if it were in thermal equilibrium, although it is clearly a non-equilibrium state of matter. Recombination takes longer in an indirect-gap semiconductor, which is why silicon and germanium were used for these experiments.

A bit of history: the field got started in 1968, when Asnin, Rogachev and Ryvkin in the Soviet Union observed a jump in the photoconductivity of germanium at low temperature when it was excited above a certain radiation threshold (i.e. when the density of excitons exceeded \sim 10^{16}  \textrm{cm}^{-3}). The interpretation of this observation as an electron-hole droplet was put on firm footing when a broad luminescence peak was observed by Pokrovski and Svistunova below the exciton line (~714 meV) at ~709 meV. The intensity of this peak increased dramatically upon lowering the temperature, with a substantial increase within just a tenth of a degree, an observation suggestive of a phase transition. I reproduce the luminescence spectrum from this paper by T.K. Lo, showing the free exciton and the electron-hole droplet peaks, because, as mentioned, the Soviet papers are difficult to find online.

EHD-Lo.JPG

From my description so far, the most pressing remaining questions are: (1) why is there an increase in the photoconductivity due to the presence of droplets? and (2) is there better evidence for the droplet than just the luminescence peak? Because free excitons are also known to form biexcitons (i.e. excitonic molecules), the peak could easily be interpreted as evidence of biexcitons instead of an electron-hole droplet, and this was a point of much contention in the early days of studying the electron-hole droplet (see the Aside below).

Let me answer the second question first, since the answer is a little simpler. The most conclusive evidence (besides the excellent agreement between theory and experiment) was, quite literally, pictures of the droplets! Because the electrons and holes within a droplet recombine, they emit the characteristic radiation shown in the luminescence spectrum above, centered at ~709 meV. This is in the infrared region, and J.P. Wolfe and collaborators were actually able to take pictures of the droplets in germanium (~4 microns in diameter) with an infrared-sensitive camera. Below is a picture of the droplet cloud — notice that the cloud is actually anisotropic, which is due to the crystal symmetry and the fact that phonons can propel the electron-hole liquid!

Pic_EHD.JPG

The first question is a little tougher to answer, but it can be addressed with a qualitative description. When the excitons condense into the liquid, the density of “excitons” is much higher in this region. In fact, the inter-exciton distance becomes smaller than the electron-hole separation within a free exciton. It is therefore no longer appropriate to speak of a specific electron as bound to a specific hole in the droplet: the electrons and holes move independently. Naively, one can rationalize this because at such high densities the exchange interaction becomes strong, so that electrons and holes can easily switch partners. The electron-hole liquid is thus a multi-component degenerate plasma, similar to a Fermi liquid, and it even has a Fermi energy, which is on the order of 6 meV. Hence, the electron-hole droplet is metallic!
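
As a sanity check on that number, one can plug a pair density and effective mass into the standard degenerate Fermi gas formula. The values below (~2\times 10^{17}  \textrm{cm}^{-3} and ~0.2 m_e) are typical figures quoted for the electron-hole liquid in germanium, not numbers from the original papers, so treat this as a rough sketch:

```python
import math

hbar = 1.0546e-34  # reduced Planck constant (J*s)
m_e = 9.109e-31    # free electron mass (kg)
eV = 1.602e-19     # 1 eV in joules

# Assumed parameters, typical figures for the droplet phase in Ge:
n = 2e23           # pair density in m^-3 (i.e. ~2e17 cm^-3)
m_eff = 0.2 * m_e  # assumed carrier effective mass

# Fermi energy of a degenerate Fermi gas: E_F = hbar^2 (3 pi^2 n)^(2/3) / (2m)
E_F = hbar**2 * (3 * math.pi**2 * n)**(2/3) / (2 * m_eff)
print(f"E_F ~ {E_F / eV * 1000:.1f} meV")  # ~6 meV, consistent with the text
```

Within these (admittedly rough) assumptions, the answer indeed comes out at around 6 meV.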

So why do the excitons form droplets at all? This is a question of kinetics and has to do with a delicate balance between evaporation, surface tension, electron-hole recombination and the probability of an exciton in the surrounding gas being absorbed by the droplet. Keldysh’s article, linked above, and the references therein are excellent for the details on this point.

In light of the discovery that bismuth (also a compensated electron-hole liquid!) becomes superconducting at ~530 microKelvin, one may ask whether electron-hole droplets can also become superconducting at similar or lower temperatures. From my brief searches online, it doesn’t seem like this question has been seriously addressed in the theoretical literature, and it would be an interesting route towards non-equilibrium superconductivity.

Just a couple of years ago, a group also reported the existence of small quantum droplets in GaAs, demonstrating that research on this topic is still alive. To my knowledge, electron-hole droplets have thus far not been observed in single-layer transition metal dichalcogenide semiconductors, which may present an interesting route to studying dimensional effects on the electron-hole droplet. However, this may be challenging, since most of these materials are direct-gap semiconductors.

Aside: Sadly, it seems that evidence for the electron-hole droplet was actually obtained at Bell Labs by J.R. Haynes in 1966, in this paper, before the 1968 Soviet work, unbeknownst to the author. Haynes attributed his observation to the excitonic molecule (or biexciton), which, it turns out, he didn’t have the statistics to observe. Later experiments confirmed that what he had observed was indeed the electron-hole droplet. Strangely, Haynes’ paper is still cited relatively frequently in the context of biexcitons, since he provided quite a nice analysis of his results! Also, it so happened that Haynes died after his paper was submitted and never found out that he had actually discovered the electron-hole droplet.


Landau Theory and the Ginzburg Criterion

The Landau theory of second order phase transitions has probably been one of the most influential theories in all of condensed matter. It classifies phases by defining an order parameter — something that shows up only below the transition temperature, such as the magnetization in a paramagnetic to ferromagnetic phase transition. Landau theory has framed the way physicists think about equilibrium phases of matter, i.e. in terms of broken symmetries. Much current research is focused on transitions to phases of matter that possess a topological index, and a major research question is how to think about these phases which exist outside the Landau paradigm.

Despite its far-reaching influence, Landau theory actually doesn’t work quantitatively in most cases near a continuous phase transition. By this, I mean that it fails to predict the correct critical exponents. This is because Landau theory implicitly assumes that all the particles interact in some kind of average way and does not adequately take into account the fluctuations near a phase transition. Quite amazingly, Landau theory itself predicts that it is going to fail near a phase transition in many situations!

Let me give an example of its failure before discussing how it predicts its own demise. Landau theory predicts that the specific heat should exhibit a discontinuity at a phase transition, like so:

specificheatlandau

However, if one examines the specific heat anomaly in liquid helium-4, for example, it looks more like a divergence as seen below:

lambda_transition

So Landau theory clearly doesn’t predict the right critical exponent in that case. The Ginzburg criterion tells us how close to the transition temperature Landau theory is expected to fail. The Ginzburg argument essentially goes as follows: since Landau theory neglects fluctuations, we can gauge its accuracy by calculating the ratio of the fluctuations to the order parameter:

E_R = |G(R)|/\eta^2

where E_R is the error in Landau theory, |G(R)| quantifies the fluctuations and \eta is the order parameter. Basically, if the error is small, i.e. E_R << 1, then Landau theory will work. However, if it approaches \sim 1, Landau theory begins to fail. One can actually calculate both the order parameter and the size of the fluctuations (quantified by the two-point correlation function) within Landau theory itself, and therefore use Landau theory to predict whether or not it will fail.

If one does carry out the calculation, one gets that Landau theory will work when:

t^{(4-d)/2} >> k_B/(\Delta C \, \xi(1)^d)  \equiv t_{L}^{(4-d)/2}

where t is the reduced temperature, d is the dimension, \xi(1) is the mean-field correlation length extrapolated to t = 1 (i.e. T = 2T_C) and \Delta C/k_B is the jump in the specific heat in units of k_B, which is usually about one per degree of freedom. In words, the formula essentially counts the number of degrees of freedom in a volume \xi(1)^d. If the number of degrees of freedom in that volume is large, then Landau theory, which averages the interactions of many particles, works well.

So that was a bit of a mouthful, but the important thing is that these quantities can be estimated quite well for many phases of matter. For instance, in liquid helium-4 the particle interactions are very short-ranged, because the helium atom is closed-shell (this is what enables helium to remain a liquid all the way down to zero temperature at ambient pressure in the first place). Therefore, we can assume that \xi(1) \sim 1\textrm{\AA}, and hence t_L \sim 1, so deviations from Landau theory can easily be observed in experiment close to the transition temperature.

Despite the qualitative similarities between superfluid helium-4 and superconductivity, a topic I have addressed in the past, Landau theory works much better for superconductors. We can again use the Ginzburg criterion to calculate how close to the transition temperature one must be in order to observe deviations from Landau theory. In fact, the question of why Ginzburg-Landau theory works so well for BCS superconductors is what awakened me to these issues in the first place. Anyway, we assume that \xi(1) is on the order of the Cooper pair size, which for BCS superconductors is on the order of 1000 \textrm{\AA}. There are about 10^8 particles in this volume, and correspondingly t_L \sim 10^{-16}, so Landau theory fails only so close to the transition temperature that this region is inaccessible to experiment. Landau theory is therefore considered to work well in this case.

For high-Tc superconductors, the Cooper pair size is of order 10\textrm{\AA} and therefore deviations from Landau theory can be observed in experiment. The last thing to note about these formulas and approximations is that two parameters determine whether Landau theory works in practice: the number of dimensions and the range of interactions.
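
These order-of-magnitude estimates are easy to reproduce. In d = 3, with \Delta C \sim k_B per particle, the criterion above reduces to t_L \sim N^{-2}, where N = n\xi(1)^3 is the number of particles in a coherence volume. The densities below are assumed round numbers for illustration, not values from any particular measurement:

```python
# Ginzburg reduced temperature t_L ~ N^{-2} in d = 3, where N = n * xi^3
# counts the particles in a coherence volume (assuming Delta C ~ k_B per particle).
def ginzburg_tL(n_cm3, xi_angstrom):
    N = n_cm3 * (xi_angstrom * 1e-8)**3  # particles per coherence volume
    return min(1.0, N**-2)               # t_L cannot meaningfully exceed 1

# (density in cm^-3, coherence length xi(1) in Angstroms) -- rough guesses:
cases = {
    "liquid helium-4":    (2e22, 1),     # atomic density, xi(1) ~ 1 A
    "BCS superconductor": (1e23, 1000),  # ~1e8 electrons per coherence volume
    "high-Tc cuprate":    (5e21, 10),
}
for name, (n, xi) in cases.items():
    print(f"{name}: t_L ~ {ginzburg_tL(n, xi):.0e}")
```

With these inputs, the three cases come out as t_L of order 1, 10^{-16} and 10^{-2} respectively, in line with the estimates quoted above.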

*Much of this post has been unabashedly pilfered from N. Goldenfeld’s book Lectures on Phase Transitions and the Renormalization Group, which I heartily recommend for further discussion of these topics.

Fractional quasiparticles and reality

As a condensed matter physicist, one of the central themes that one must become accustomed to is the idea of a quasiparticle. These quasiparticles are not particles as nature made them per se, but only exist inside matter. (Yes, nature made matter too, and therefore quasiparticles as well, but come on — you know what I mean!)

Probably the first formulation of a quasiparticle was in Einstein’s theory of specific heat in a solid at low temperature. He postulated that the sound vibrations in a solid, much like photons from a blackbody, obeyed the Planck distribution, implying some sort of particulate nature to sound. This introduction was quite indirect, and the first really explicit formulation of quasiparticles was presented by Landau in his theory of helium-4. Here, he proposed that most physical observables could be described in terms of “phonons” and “rotons”, quantized sound vibrations at low and high momenta respectively.

In solid state physics, one of the most common quasiparticles is the hole; in the study of magnetism, it is the magnon; in semiconductor physics, the exciton is ubiquitous; and there are many other examples as well. So let me ask a seemingly benign question: are these quasiparticles real (i.e. are they real particles)?

In my experience in the condensed matter community, I suspect that most would answer in the affirmative, and if not, at least claim that the particles observed in condensed matter are just as real as any particle observed in particle physics.

Part of the reason I bring this issue up is because of concerns raised soon after the discovery of the fractional quantum Hall effect (FQHE). When Laughlin formulated the theory of the FQHE, it was thought that his quasiparticles of charge e/3 might have been a mere oddity of the mathematical description. Do these particles carrying charge e/3 actually exist, or are they just a convenient mathematical device?

In two papers that appeared almost concurrently, linked here and here, it was shown using quantum shot noise experiments that these e/3 particles do indeed exist. Briefly, quantum shot noise arises because of the discrete nature of charge carriers and enables one to measure the charge of a current-carrying particle to a pretty good degree of accuracy. In comparing the results to models of particles carrying charge e versus charge e/3, the data shows no contest. Here is a plot showing this result quite emphatically:

FracCharge.PNG
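
The logic of the measurement rests on Schottky’s shot noise formula, S = 2qI: the noise power grows linearly with current, with a slope proportional to the carrier charge q, so fitting the slope distinguishes q = e from q = e/3. Here is a minimal sketch of that fit using synthetic numbers (illustrative only, not the data from those papers):

```python
import numpy as np

e = 1.602e-19  # elementary charge (C)

# Schottky formula for Poissonian shot noise: S = 2*q*I
def shot_noise(current, q):
    return 2 * q * current

# Synthetic "measurement": noise generated by e/3 quasiparticles with a few
# percent of scatter added, then a linear fit to extract the carrier charge.
rng = np.random.default_rng(seed=0)
I = np.linspace(1e-9, 1e-8, 20)  # backscattered current (A)
S = shot_noise(I, e / 3) * (1 + 0.02 * rng.standard_normal(I.size))

slope = np.polyfit(I, S, 1)[0]
q_fit = slope / 2
print(f"fitted carrier charge: {q_fit / e:.2f} e")  # close to 0.33 e
```

The actual experiments are of course far subtler (finite temperature, partial backscattering), but this is the core of how a charge of e/3 is read off from noise data.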

One may then pose the question: is there a true distinction between what really “exists out there” and a theory that conveniently describes and predicts nature? Is the physicist’s job complete once the equations have been written down (i.e. should he/she not care about questions like “are these fractional charges real?”)?

These are tough questions to answer, and are largely personal, but I lean towards answering ‘yes’ to the former and ‘no’ to the latter. I would contend that the quantum shot noise experiments outlined above wouldn’t have even been conducted if the questions posed above were not serious considerations. While asking if something is real may not always be answerable, when it is, it usually results in a deepened understanding.

This discussion reminds me of an (8-year-old!) YouTube video of David who, following oral surgery to remove a tooth, still feels the effects of anesthesia:

Strontium Titanate – A Historical Tour

Like most ugly haircuts, materials tend to go in and out of style over time. Strontium titanate (SrTiO3), commonly referred to as STO, has, since its discovery, been somewhat timeless. And this is not just because it is often used as a substitute for diamonds. What I mean is that studying STO rarely seems to go out of style and the material always appears to have some surprises in store.

STO was first synthesized in the 1950s, before it was discovered in natural form in Siberia. It didn’t take long for research on this material to take off. One of the first surprises STO had in store was that it becomes superconducting when reduced (electron-doped). This is not remarkable in and of itself, but this study and other follow-up ones showed that superconductivity can occur at a carrier density of only ~5\times 10^{17} cm^{-3}.

This is surprising in light of BCS theory, where the Fermi energy is assumed to be much greater than the Debye energy — which is clearly not the case here. There have been claims in the literature suggesting that the superconductivity may be plasmon-induced, since the plasma frequency is in the phonon energy regime. L. Gorkov recently put a paper up on the arXiv discussing the mechanism problem in STO.

Soon after the initial work on superconductivity in doped STO, Shirane, Yamada and others began studying pure STO in light of the predicted “soft mode” theory of structural phase transitions put forth by W. Cochran and others. Because of the antiferrodistortive structural phase transition at ~110K (depicted below), they were able to observe a corresponding soft phonon associated with this transition at the Brillouin zone boundary (shown below, taken from this paper). These results had vast implications for how we understand structural phase transitions today, when it is almost always assumed that a phonon softens at the transition temperature of a continuous structural phase transition.

Many materials similar to STO, such as BaTiO3 and PbTiO3, which also have the perovskite crystal structure motif, undergo a phase transition to a ferroelectric state at low (or not so low) temperatures. The transition to the ferroelectric state is accompanied by a diverging dielectric constant (and dielectric susceptibility), much in the way the magnetic susceptibility diverges at the transition from a paramagnet to a ferromagnet. In 1978, Müller (of Bednorz and Müller fame) and Burkard reported that at low temperature the dielectric constant begins its ascent towards divergence, but then saturates at around 4K (the data are shown in the top panel below). Ferroelectricity is associated with the softening of a transverse optical phonon at the zone center, and in the case of STO this softening begins but never quite completes, as shown schematically in the image below (you can also see this in the data by Shirane and Yamada above).

quantumparaelectricity_signatures

Taken from Wikipedia

The saturation of the large dielectric constant and the not-quite-softening of the zone-center phonon have led authors to refer to STO as a quantum paraelectric (i.e. because of the zero-point motion of the transverse optical zone-center phonon, the material never gains enough energy to undergo the ferroelectric transition). As recently as 2004, however, it was reported that one can induce ferroelectricity in STO films at room temperature by straining the films.

In recent times, STO has become a common substrate material, thanks to processes that can make it atomically flat. While this may not sound so exciting, it has had vast implications for the physics of thin films and interfaces. Firstly, this property enabled researchers to grow high-quality thin films of cuprate superconductors using molecular beam epitaxy, which was a big challenge in the 1990s. More recently, it led to the discovery of a two-dimensional electron gas, superconductivity and ferromagnetism at the LAO/STO interface — a startling finding, given that both materials are electrically insulating. Even more startlingly, when FeSe (a superconductor at around 7K in bulk) is grown as a monolayer film on STO, its transition temperature is boosted to around 100K (though the precise transition temperature is disputed in subsequent experiments, it remains high!). This has led to the idea that the FeSe somehow “borrows the pairing glue” from the underlying substrate.

STO is a gem of a material in many ways. I doubt that we are done with its surprises.

Precision in Many-Body Systems

Measurements of the quantum Hall effect give a conductance quantized in units of e^2/h. Measurements of the frequency of the AC current in a Josephson junction give a frequency of 2e/h times the applied voltage. Hydrodynamic circulation in liquid 4He is quantized in units of h/m_{4He}. These measurements (and similar ones, like flux quantization) are remarkable: they yield fundamental constants to a great degree of accuracy in a condensed matter setting — a setting which Murray Gell-Mann once referred to as “squalid state” systems. How is this possible?
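
To get a sense of the scales involved, one can evaluate these quanta directly from the fundamental constants (the 1 microvolt bias below is just an arbitrary example):

```python
h = 6.62607e-34     # Planck constant (J*s)
e = 1.60218e-19     # elementary charge (C)
m_he4 = 6.6465e-27  # mass of a helium-4 atom (kg)

# Conductance quantum from the quantum Hall effect:
G0 = e**2 / h
print(f"e^2/h = {G0 * 1e6:.2f} microsiemens")        # ~38.74 uS

# AC Josephson relation f = (2e/h) * V, for an example bias of 1 microvolt:
V = 1e-6
print(f"f at 1 uV = {2 * e * V / h / 1e6:.1f} MHz")  # ~483.6 MHz

# Quantum of circulation in superfluid helium-4:
print(f"h/m_4He = {h / m_he4:.3e} m^2/s")            # ~9.97e-8 m^2/s
```

Measure the Josephson frequency and the voltage, and you have measured 2e/h; this inversion is precise enough that it underpins the modern voltage standard.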

At first sight, it is stunning that physics of the solid or liquid state could yield a measurement so precise. When we consider the defects, impurities, surfaces and other imperfections in a macroscopic system, these results become even more astounding.

So where does this precision come from? It turns out that in all of these cases, one is measuring a quantity that depends on the single-valued nature of an (appropriately defined) complex scalar wavefunction. The aforementioned quantities are measured in integer units, n, usually referred to as the winding number. Because the winding number is a topological quantity, in the sense that it arises in a multiply-connected space, these measurements do not particularly care about small differences in their surroundings.

For instance, the leads used to measure the quantum Hall effect can be placed virtually anywhere on the sample, as long as the wires don’t cross each other. The sample can have any (two-dimensional) geometry, i.e. a square, a circle or some complicated corrugated shape. In the Josephson case, the weak link can be a constriction, an insulating oxide layer, a metal, etc. Imprecision in the experimental setup is not detrimental, as long as the experimental topology remains the same.

Another ingredient required for this precision is a large number of particles. This can seem counter-intuitive, since one expects quantization at a microscopic rather than a macroscopic level, but it is the large number of particles that makes these effects possible. For instance, both the Josephson effect and the hydrodynamic circulation in 4He depend on the existence of a macroscopic complex scalar wavefunction, or order parameter. In fact, if the superconductor becomes too small, effects like the Josephson effect, flux quantization and persistent currents all start to get washed out. There is a gigantic energy barrier preventing the decay from the n=1 current-carrying state to the n=0 non-current-carrying state, due to the large number of particles involved (i.e. the higher winding number state is metastable). As one decreases the number of particles, the energy barrier is lowered and the system can start to tunnel from the higher winding number state to the lower one.

In the quantum Hall effect, the samples need to be macroscopically large to prevent the states on opposite edges from interacting with each other. Once the edge states are able to interact, they may hybridize and the conductance quantization gets washed out. This has been visualized in the context of 3D topological insulators using angle-resolved photoemission spectroscopy, in this well-known paper. Again, a large sample is needed to observe the effect.

It is interesting to think about where else such a robust quantization may arise in condensed matter physics. I suspect that there exist similar kinds of effects in different settings that have yet to be uncovered.

Aside: If you are skeptical about the multiply-connected nature of the quantum Hall effect, you can read about Laughlin’s gauge argument in his Nobel lecture here. His argument critically depends on a multiply-connected geometry.

The Mystery of URu2Si2 – Experimental Dump

Heavy fermion compounds are known to exhibit a wide range of ground states, encompassing ferromagnetism, antiferromagnetism, superconductivity, insulating behavior and a host of others. A number of these compounds also exhibit more than one of these phases simultaneously.

There is one heavy fermion material that stands out among the rest, however, and that is URu2Si2. The reason is that an unidentified phase transition occurs in this compound at ~17.5K. What I mean by “unidentified” is that the order parameter is unknown, the elementary excitations are not understood, and a consensus is emerging that we may not currently have the experimental capability to identify this phase unambiguously. This has led researchers to refer to this phase of URu2Si2 as “hidden order”. Our inability to understand this phase has now persisted for three decades, and well over 600 papers have been written on this single material. For experimentalists and theorists who love a challenge, URu2Si2 presents a rather unique and interesting one.

Let me give a quick rundown of the experimental signatures of this phase. Firstly, to convince you that there actually is a thermodynamic phase transition that happens in URu2Si2, take a look at this plot of the specific heat as a function of temperature:

In the lower image, one can see two transitions: one into the hidden order phase at 17.5K and one into the superconducting phase at ~1.5K. There is a large entropy change at the transition into the hidden order phase, which makes it all the more remarkable that we don’t know what is going on! I should mention that the resistivity also shows an anomaly on entering the hidden order phase, along both the a- and c-axes (the unit cell is tetragonal).

Furthermore, the thermal expansion coefficient, \alpha = L^{-1}(\Delta L/\Delta T), has a peak in the in-plane direction and a smaller dip along the c-axis at the transition temperature. This implies that the volume of the unit cell gets larger through the transition, indicating that the hidden order phase couples strongly to the lattice degrees of freedom.

For those familiar with the other uranium-based heavy fermion compounds, one of the most natural questions to ask is whether the hidden order phase is associated with the onset of some sort of magnetism. Indeed, x-ray resonant magnetic scattering and neutron scattering experiments were carried out in the late 80s and early 90s to investigate this possibility. The structure found corresponded to a ferromagnetic arrangement in the a-b plane with antiferromagnetic stacking along the out-of-plane c-axis. However, this was not the whole story: the magnetic moments were extremely weak (0.02\mu_B per uranium atom) and the magnetic Bragg peaks were not resolution-limited (correlation length ~400 Angstroms). This means that the order was not of the true long-range variety!

Also, rather strangely, the integrated intensity of the magnetic Bragg peak was shown to be linear as a function of temperature, saturating at ~3K (shown below). All these results seemed to imply that the magnetism in the compound was of a rather unconventional kind.

The next logical question to ask is what the inelastic magnetic spectrum looks like. Below is an image exhibiting the dispersion of the magnetic modes. Two different modes can be identified: one at the magnetic Bragg peak wavevectors (e.g. (1, 0, 0)) and one at “incommensurate” positions (e.g. (1 \pm 0.4, 0, 0)). The “incommensurate” excitations exhibit a gap of approximately 4meV, while the gap at (1, 0, 0) is about 2meV. These excitations appear together with the hidden order and are thought to be closely associated with it. They have been shown to have longitudinal character.

The penultimate thing I will mention is that if one examines the optical conductivity of URu2Si2, a gap of ~5meV in the charge spectrum is also manifest. This is shown below:

And lastly, if one pressurizes the sample to about 0.5 GPa, URu2Si2 becomes a full-blown large-moment antiferromagnet with a moment of approximately 0.4\mu_B per uranium atom. The transition temperature into the Neel state is then about 18K.

So let me summarize the main observations concerning the hidden order phase:

  1. Weak short-range antiferromagnetism
  2. Strong coupling to the lattice
  3. Dispersive and gapped incommensurate and commensurate magnetic excitations
  4. Gapped charge excitations
  5. Lies in close proximity to antiferromagnetism
  6. Can coexist with superconductivity

I should stress that I am no expert on heavy fermion compounds, which is why this is my first real post on them, so please feel free to point out any oversights I may have made!

More information can be found in these two excellent review articles:

http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.1301

http://www.tandfonline.com/doi/abs/10.1080/14786435.2014.916428


A Matter of Definitions

When one unearths a new superconductor, there are three experimental signatures one hopes to observe to verify the discovery. These are:

  1. D.C. resistance is zero
  2. Meissner Effect (expulsion of magnetic field)
  3. Zero Peltier coefficient or thermopower

The last item is a little finicky, but bear with me for a second. The Peltier coefficient effectively measures the heat current that accompanies an electric current. So in a superconductor, there is no heat transport accompanying the electrical transport (the condensate carries zero entropy!). For instance, here is a plot of the thermopower for a few iron pnictides:

thermopower

Let us ask a similar, seemingly benign, question: what are the experimental signatures one hopes to observe when one discovers a charge density wave (CDW) material?

If we are to use the superconductor as a guide, one would probably say the following:

  1. Non-linear conductivity
  2. CDW satellite reflections in a diffraction pattern
  3. An almost zero Peltier coefficient or thermopower once the CDW has been depinned

I have posted about the non-linear I-V characteristics of CDWs previously. Associated with the formation of a charge density wave is, in all cases known to me, a periodic lattice distortion. This can be observed using X-rays, neutrons or electrons. Here is an image from 1T-TaS_2, taken from here:

PLD

Now, startlingly, once the charge density wave is depinned by a large enough electric field, the thermopower decreases dramatically. This is plotted below as a function of electric field, along with the differential conductivity:

thermopowerCDW

This indicates that there is very little entropy transport associated with the charge density wave condensate. Personally, I find this result quite stunning. I suspect that this was one of the several signatures that led John Bardeen to suggest that the charge density wave in low-dimensional materials was essentially quantum mechanical in origin.

Having outlined these three criteria, one should ask: do many of the materials we refer to as charge density waves actually exhibit these experimental signatures?

For many of the materials we refer to as charge density waves today, notably the transition metal dichalcogenides, such as 1T-TaS_2, 2H-NbSe_2, and 2H-TaSe_2, items (1) and (3) have not been observed! This is because it has not been possible to definitively depin the charge density wave. This probably has to do with the triple-q structure of the charge density wave in many of these materials, which doesn’t select a preferential direction.

There exist many clues that the latter materials do indeed exhibit a charge density wave transition similar to those in materials where depinning has been observed. It is interesting to note, though, that there are some glaring experimental absences in the transition metal dichalcogenides, which are often considered prototypical examples of a charge density wave transition.