Zener’s Electrical Breakdown Model

In my previous post about electric field induced metal-insulator transitions, I mentioned the notion of Zener breakdown. Since the idea is not likely to be familiar to everyone, I thought I’d use this post to explain the concept a little further.

Simply stated, Zener breakdown occurs when a DC electric field applied to an insulator is large enough that the insulator becomes conducting due to interband tunneling. Usually, when we imagine electrical conduction in a solid, we think of the mobile electrons moving only within one or more partially filled bands. Modelling electrical transport within a single band can already get quite complicated, so it was a major accomplishment that C. Zener was able to come up with a relatively simple and solvable model for interband tunneling.

To make the problem tractable, Zener came up with a hybrid real-space / reciprocal-space model where he could use the formalism of a 1D quantum mechanical barrier:

In Zener’s model, the barrier height is set by the band gap energy, $E_{g}$, between the valence and conduction bands in the insulator, while the barrier width is set by the distance over which the field can supply that energy. That is, the particle can gain enough kinetic energy to surmount the barrier when $e\mathcal{E}d = E_{g}$, in which case our barrier width would be:

$d = E_{g} / e\mathcal{E}$,

where $\mathcal{E}$ is the applied electric field and $e$ is the electron charge.
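To get a feel for the scales involved, here is a quick numerical check of the barrier width. The numbers are illustrative assumptions, not values for any particular material: a 1 eV gap and a field of $10^8$ V/m.

```python
# Barrier width d = E_g / (e * field) for Zener tunneling.
# Illustrative (assumed) numbers: a 1 eV gap, field of 1e8 V/m.
e = 1.602176634e-19      # electron charge (C)
E_g = 1.0 * e            # band gap: 1 eV in joules
field = 1e8              # applied electric field (V/m)

d = E_g / (e * field)    # barrier width in meters
print(f"d = {d*1e9:.1f} nm")
```

For these numbers the electron has to tunnel roughly 10 nm, i.e. tens of lattice constants, before the field has supplied the gap energy.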

Now, how do we solve this tunneling problem? If we use the WKB formalism, as Zener did, we find that the transmission probability is:

$P_T = e^{-2\gamma}$, where $\gamma = \int_0^d{k(x)\,dx}$.

Here, $k(x)$ is the magnitude of the (imaginary) wavevector inside the barrier. So, really, all that needs to be done is to obtain the correct functional form for the wavenumber and (hopefully) solve the integral. This turns out not to be too difficult; we just have to make sure that we include both bands in the calculation. This can be done in a similar way to the nearly free electron problem.
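As a warm-up, the WKB exponent can be evaluated numerically for the simplest possible model of the tilted-gap barrier: a triangular barrier $V(x) = E_g - e\mathcal{E}x$ with a single-band decay constant $\kappa = \sqrt{2m(V-\epsilon)}/\hbar$. This is an illustrative sketch only (Zener's actual two-band $\kappa$ is derived below), with assumed parameters of a 1 eV gap and a $10^9$ V/m field.

```python
import numpy as np

# Numerical WKB exponent gamma = integral of kappa(x) dx for a triangular
# barrier V(x) = E_g - e*field*x. The single-band decay constant
# kappa = sqrt(2m(V - eps))/hbar is used purely for illustration;
# Zener's two-band kappa is a refinement of this picture.
hbar = 1.054571817e-34   # J s
m = 9.1093837015e-31     # electron mass (kg)
e = 1.602176634e-19      # electron charge (C)

E_g = 1.0 * e            # 1 eV gap (assumed)
field = 1e9              # applied field in V/m (assumed)
d = E_g / (e * field)    # classical barrier width

x = np.linspace(0, d, 10001)
V = E_g - e * field * x                  # barrier seen by an electron at eps = 0
kappa = np.sqrt(2 * m * V) / hbar        # decay constant under the barrier
gamma = np.sum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))   # trapezoid rule
P_T = np.exp(-2 * gamma)
print(f"gamma = {gamma:.2f}, P_T = {P_T:.2e}")
```

Even at this enormous field, the exponent is of order a few, so the transmission is small but not negligible; at much weaker fields it is astronomically suppressed.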

Briefly, the nearly free electron problem considers the following $E-k$ relations in the extended zone scheme:

Near the zone boundary, one needs to apply degenerate perturbation theory due to Bragg diffraction of the electrons (or degeneracy of the bands from the next zone, or however you want to think about it). So if one now zooms into the hatched area in the figure above, one finds that a gap opens up by setting the following determinant to zero and solving for $\epsilon(k)$:

$\left| \begin{array}{cc} \lambda_k - \epsilon & E_g/2 \\ E_g/2 & \lambda_{k-G} - \epsilon \end{array} \right| = 0$,

where $\lambda_k$ is $\hbar^2k^2/2m$ in this problem, and the hatched area becomes gapped like so:

In the Zener model problem, we take a similar approach. Instead of solving for $\epsilon(k)$, we solve for $k(\epsilon)$. To focus on the zone boundary, we first let $k \rightarrow k_0 + \kappa$ and $\epsilon \rightarrow \epsilon_0 + \epsilon_1$, where $k_0 = \pi/a$ (the zone boundary) and $\epsilon_0 = \hbar^2k_0^2/2m$, under the assumption that $\kappa$ and $\epsilon_1$ are small. All this does is shift our reference point to the hatched region in the previous figure.

The trick now is to solve for  $k(\epsilon)$ to see if imaginary solutions are possible. Indeed, they are! I get that:

$\kappa^2 = \frac{2m}{\hbar^2} (\frac{\epsilon_1^2 - E_g^2/4}{4\epsilon_0})$,

so as long as $\epsilon_1^2 - E_g^2/4 < 0$, we get imaginary solutions for $\kappa$.
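The algebra leading to this $\kappa^2(\epsilon_1)$ relation can be checked symbolically. The sketch below (using sympy) expands the determinant condition about the zone boundary and keeps terms to $O(\kappa^2)$, dropping the small $\epsilon_1\kappa^2$ cross term, which is down by a factor of $\epsilon_1/\epsilon_0$ relative to the piece that is kept.

```python
import sympy as sp

# Symbolic check of the two-band relation kappa^2(eps_1).
# Expand the condition (lambda_k - eps)(lambda_{k-G} - eps) = E_g^2/4
# about the zone boundary k0 = pi/a, keeping terms to O(kappa^2) and
# dropping the small eps_1 * kappa^2 cross term.
hbar, m, k0, Eg = sp.symbols('hbar m k_0 E_g', positive=True)
kappa, eps1 = sp.symbols('kappa epsilon_1', real=True)

eps0 = hbar**2 * k0**2 / (2 * m)
lam_k  = hbar**2 * (k0 + kappa)**2 / (2 * m)    # lambda_k at k = k0 + kappa
lam_kG = hbar**2 * (kappa - k0)**2 / (2 * m)    # lambda_{k-G} with G = 2*k0
eps = eps0 + eps1

det = sp.expand((lam_k - eps) * (lam_kG - eps) - Eg**2 / 4)

# keep the constant piece and the dominant kappa^2 coefficient
det_trunc = det.coeff(kappa, 0) + det.coeff(kappa, 2).subs(eps1, 0) * kappa**2

sol = sp.solve(sp.Eq(det_trunc, 0), kappa**2)[0]
target = (2 * m / hbar**2) * (eps1**2 - Eg**2 / 4) / (4 * eps0)
print(sp.simplify(sol - target))   # 0: matches the expression in the text
```

The solved $\kappa^2$ agrees term-by-term with the expression above, confirming the sign structure: $\kappa$ goes imaginary precisely inside the gap.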

Although we have a function $\kappa(\epsilon_1)$, we still need to do a little work to obtain $\kappa(x)$, which is required for the WKB exponent. Here, Zener assumed the simplest relation he could: that $\epsilon_1$ depends linearly on the tunneling distance, $x$. The image I’ve drawn above (showing the potential profile) and the fact that the work done by the electric field is $e\mathcal{E}x$ demonstrate that this assumption is very reasonable.

Plugging all the numbers in and doing the integral, one gets that:

$P_T = \exp\left(-\pi^2 E_g^2/(4 \epsilon_0 e \mathcal{E} a)\right)$.
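Plugging illustrative numbers into this expression shows just how sharply the transmission turns on with field. The parameters below are assumptions (a 1 eV gap, a 3 Å lattice constant), not values for a specific material.

```python
import math

# Evaluate P_T = exp(-pi^2 E_g^2 / (4 eps_0 e field a)) from the text
# for illustrative (assumed) parameters: E_g = 1 eV, a = 3 angstroms.
hbar = 1.054571817e-34
m = 9.1093837015e-31
e = 1.602176634e-19

E_g = 1.0 * e
a = 3e-10
k0 = math.pi / a
eps0 = hbar**2 * k0**2 / (2 * m)   # free-electron energy at the zone boundary

def P_T(field):
    return math.exp(-math.pi**2 * E_g**2 / (4 * eps0 * e * field * a))

for field in (1e7, 1e8, 1e9):
    print(f"field = {field:.0e} V/m  ->  P_T = {P_T(field):.2e}")
```

Because the field appears in the denominator of the exponent, $P_T$ goes from utterly negligible to appreciable over a couple of decades of field, which is why one can meaningfully speak of a breakdown field.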

If you’re like me in any way, you’ll find the final answer to the problem pretty intuitive, but Zener’s methodology towards obtaining it pretty strange. To me, the solution is quite bizarre in how it moves between momentum space and real space, and I don’t have a good physical picture of how this happens in the problem. In particular, there is a seeming contradiction, which pervades the problem, between assuming lattice periodicity and applying an electric field that tilts the bands. I am apparently not the only one who is uncomfortable with this solution, seeing that it was controversial for a long time.

Nonetheless, it is a great achievement that with a couple of simple physical pictures (which, taken at face value, seem inconsistent), Zener was able to qualitatively explain one mechanism of electrical breakdown in insulators (there are a few others, such as avalanche breakdown).

Mott Switches and Resistive RAMs

Over the past few years, there have been some interesting developments concerning narrow-gap correlated insulators. In particular, it has been found that it is surprisingly easy to induce an insulator-metal transition (at the very least, the resistivity changes by a few orders of magnitude!) in materials such as VO2, GaTa4Se8 and NiS2-xSex with an electric field. There appears to be a threshold electric field above which the material turns into a metal. Here is a plot demonstrating this rather interesting phenomenon in Ca2RuO4, taken from this paper:

It can be seen that the transition is hysteretic, thereby indicating that the insulator-metal transition as a function of field is first-order. It turns out that in most of the materials in which this kind of behavior is observed, there usually exists an insulator-metal transition as a function of temperature and pressure as well. Therefore, in cases such as (V1-xCrx)2O3, it is likely that the electric field induced insulator-metal transition is caused by Joule heating. However, there are several other cases where it seems like Joule heating is likely not the culprit causing the transition.

While Zener breakdown has been put forth as a possible mechanism causing this transition when Joule heating has been ruled out, back-of-the-envelope calculations suggest that the electric field required to cause a Zener-type breakdown would be several orders of magnitude larger than that observed in these correlated insulators.
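That back-of-the-envelope comparison can be made concrete. The sketch below takes the Zener expression from the first section and asks at what field the WKB exponent drops to order unity, using assumed, illustrative parameters for a narrow-gap Mott insulator (a 0.3 eV gap, a 5 Å lattice constant).

```python
import math

# Rough Zener threshold estimate: the field at which the exponent
# pi^2 E_g^2 / (4 eps_0 e field a) falls to ~1, so tunneling is appreciable.
# All numbers are assumed/illustrative: a 0.3 eV Mott gap, a = 5 angstroms.
hbar = 1.054571817e-34
m = 9.1093837015e-31
e = 1.602176634e-19

E_g = 0.3 * e
a = 5e-10
eps0 = hbar**2 * (math.pi / a)**2 / (2 * m)

field_zener = math.pi**2 * E_g**2 / (4 * eps0 * e * a)   # exponent = 1 here
print(f"Zener-scale field ~ {field_zener:.1e} V/m")
```

This lands in the MV/cm range, whereas the switching fields reported in these correlated insulators are typically quoted in the kV/cm range, i.e. roughly three orders of magnitude smaller, which is the mismatch referred to above.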

On the experimental side, things get even more interesting when applying pulsed electric fields. While the insulator-metal transition observed is usually hysteretic, as shown in the plot above, in some of these correlated insulators, electrical pulses can maintain the metallic state. What I mean is that when certain pulse profiles are applied to the material, it gets stuck in a metastable metallic state. This means that even when the applied voltage is turned off, the material remains a metal! This is shown here for instance for a 30 microsecond / 120V 7-pulse train with each pulse applied every 200 microseconds to GaV4S8 (taken from this paper):

Electric field pulses applied to GaV4S8. A single pulse induces an insulator-metal transition, but the material reverts back to the insulating state after the pulse ends. A pulse train induces a transition to a metastable metallic state.

Now, if your thought process is similar to mine, you would be wondering if applying another voltage pulse would switch the material back to an insulator. The answer is that with a specific pulse profile this is possible. In the same paper as the one above, the authors apply a series of 500 microsecond pulses (up to 20V) to the same sample, and they don’t see any change. However, the application of a 12V/2ms pulse does indeed reset the sample back to (almost) its original state. In the paper, the authors attribute the need for a longer pulse to Joule heating, enabling the sample to revert back to the insulating state. Here is the image showing the data for the metastable-metal/insulator transition (taken from the same paper):

So, at the moment, it seems like the mechanism causing this transition is not very well understood (at least I don’t understand it very well!). It is thought that there are filamentary channels between the contacts causing the insulator-metal transition. However, STM has revealed the existence of granular metallic islands in GaTa4Se8. The STM results, of course, should be taken with a grain of salt since STM is surface sensitive and something different might be happening in the bulk. Anyway, some hypotheses have been put forth to figure out what is going on microscopically in these materials. Here is a recent theoretical paper putting forth a plausible explanation for some of the observed phenomena.

Before concluding, I would just like to point out that the relatively recent (and remarkable) results on the hidden metallic state in TaS2 (see here as well), which again is a Mott-like insulator in its low-temperature state, are likely related to the phenomena in the other materials. The relationship between the “hidden state” in TaS2 and the switching in the other insulators discussed here seems to not have been recognized in the literature.

Anyway, I heartily recommend reading this review article to gain more insight into these topics for those who are interested.

Discovery vs. Q&A Experiments

When one looks through the history of condensed matter experiment, it is strange to see how many times discoveries were made in a serendipitous fashion (see here for instance). I would argue that most groundbreaking findings were unanticipated. The discoveries of superconductivity by Onnes, the Meissner effect, superfluidity in He-4, cuprate (and high temperature) superconductivity, the quantum Hall effect and the fractional quantum Hall effect were all unforeseen by the very experimentalists that were conducting the experiments! Theorists also did not anticipate these results. Of course, a whole slew of phases and effects were theoretically predicted and then experimentally observed as well, such as Bose-Einstein condensation, the Kosterlitz-Thouless transition, superfluidity in He-3 and the discovery of topological insulators, not to diminish the role of prediction.

For the condensed matter experimentalist, though, this presents a rather strange paradigm. Naively (and I would say that the general public by and large shares this view), science is perceived as working within a question-and-answer framework. You pose a concrete question, and then conduct an experiment to try to answer said question. In condensed matter physics, this is often not the case, or at least only loosely the case. There are of course experiments that have been conducted to answer concrete questions, and when they are conducted, they usually end up being beautiful experiments (see here for example). But these kinds of experiments can only be conducted when a field reaches a point where concrete questions can be formulated. For exploratory studies, the questions are often not even clear. I would, therefore, consider these kinds of Q&A experiments to be the exception rather than the norm.

More often than not, discoveries are made by exploring uncharted territory, entering a space others have not explored before, and tempting fate. Questions are often not concrete but posed in the form, “What if I do this…?”. I know that this makes condensed matter physics sound like it lacks organization, clarity and structure, and that perception is not totally untrue. Most progress in the history of science did not proceed in a straight line like textbooks make it seem. When weird particles were popping up all over the place in particle physics in the 1930s and 40s, it was hard to see any organizing principles. Experimentalists were discovering new particles at a rate with which theory could not keep up. Only after a large number of particles had been discovered did Gell-Mann come up with his “Eightfold Way”, which ultimately led to the Standard Model.

This is all to say that scientific progress is tortuous, thought processes of scientists are highly nonlinear, and there is a lot of intuition required in deciding what problems to solve or what space is worth exploring. In condensed matter experiment, it is therefore important to keep pushing boundaries of what has been done before, explore, and do something unique in hope of finding something new!

Exposure to a wide variety of observations and methods is required to choose what boundaries to push and where to spend one’s time exploring. This is what makes diversity and avoiding “herd thinking” important to the scientific endeavor. Exploratory science without concrete questions makes some (especially younger graduate students) feel uncomfortable, since there is always the fear of finding nothing! This means that condensed matter physics, despite its tremendous progress over the last few decades, where certain general organizing principles have been identified, is still somewhat of a “wild west” in terms of science. But it is precisely this lack of structure that makes it particularly exciting — there are still plenty of rocks that need overturning, and it’s hard to foresee what is going to be found underneath them.

In experimental science, questions are important to formulate — but the adventure towards the answer usually ends up being more important than the answer itself.

An Excellent Intro To Physical Science

On a recent plane ride, I was able to catch an episode of the new PBS series Genius by Stephen Hawking. I was surprised by the quality of the show and in particular, its emphasis on experiment. Usually, shows like this fall into the trap of giving one the facts (or speculations) without an adequate explanation of how scientists come to such conclusions. However, this one is a little different and there is a large emphasis on experiment, which, at least to me, is much more inspirational.

Here is the episode I watched on the plane:

Some Gems

I am away this week on a beam time run — here are some masterpieces I’ve come across while trying to remain sane:

A theory is something nobody believes, except the person who made it. An experiment is something everybody believes, except the person who made it.

– Albert Einstein

Nonlinear Response and Harmonics

Because we are so often solving problems in quantum mechanics, it is sometimes easy to forget that certain effects also show up in classical physics and are not “mysterious quantum phenomena”. One of these is the problem of avoided crossings or level repulsion, which can be much more easily understood in the classical realm. I would argue that the Fano resonance also represents a case where a classical model is more helpful in grasping the main idea. Perhaps not too surprisingly, a variant of the classical harmonic oscillator problem is used to illustrate the respective concepts in both cases.

There is also another cute example that illustrates why overtones of the natural frequency appear when an oscillator is made slightly nonlinear. The solution to this problem shows why harmonic distortion often affects speakers: they sometimes emit frequencies not present in the original electrical signal. Furthermore, it shows why second harmonic generation can result when intense light is incident on a material.

First, imagine a perfectly harmonic oscillator with a potential of the form $V(x) = \frac{1}{2} kx^2$. We know that such an oscillator, if displaced from its original position, will oscillate at its natural frequency $\omega_0 = \sqrt{k/m}$, with the position varying as $x(t) = A \textrm{cos}(\omega_0 t + \phi)$. The potential and the position of the oscillator as a function of time are shown below:

(Left) Harmonic potential as a function of position. (Right) Variation of the position of the oscillator with time

Now imagine that in addition to the harmonic part of the potential, we also have a small additional component such that $V(x) = \frac{1}{2} kx^2 + \frac{1}{3}\epsilon x^3$, so that the potential now looks like so:

The equation of motion is now nonlinear:

$\ddot{x} = -c_1x - c_2x^2$

where $c_1 = k/m$ and $c_2 = \epsilon/m$. It is easy to see that if the amplitude of oscillations is small enough, there will be very little difference between this case and the case of the perfectly harmonic potential.

However, if the amplitude of the oscillations gets a little larger, there will clearly be deviations from the pure sinusoid. So then what does the position of the oscillator look like as a function of time? Perhaps not too surprisingly, considering the title, not only are there oscillations at $\omega_0$, but a harmonic component at $2\omega_0$ is also introduced.

While the differential equation can’t be solved exactly without resorting to numerical methods, the appearance of the harmonic component can be seen within the framework of perturbation theory. In this context, all we need to do is plug the solution of the simple harmonic oscillator, $x(t) = A\textrm{cos}(\omega_0t +\phi)$, into the nonlinear equation above. If we do this, the last term becomes:

$-c_2A^2\textrm{cos}^2(\omega_0t+\phi) = -c_2 \frac{A^2}{2}(1 + \textrm{cos}(2\omega_0t+2\phi))$,

showing that we get oscillatory components at twice the natural frequency. Although this explanation is a little crude, one can already start to see why nonlinearity often leads to higher frequency harmonics.
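The perturbative argument can also be checked numerically: integrate the nonlinear equation of motion and look at the Fourier spectrum of $x(t)$. The parameters below ($c_1 = 1$, $c_2 = 0.1$, unit amplitude) are assumed for illustration; a small but clear peak at $2\omega_0$ shows up alongside the fundamental.

```python
import numpy as np

# Integrate x'' = -c1*x - c2*x^2 (assumed c1 = 1, c2 = 0.1) with RK4,
# then look for a spectral component at twice the natural frequency.
c1, c2 = 1.0, 0.1
omega0 = np.sqrt(c1)

def accel(x):
    return -c1 * x - c2 * x**2

dt = 0.01
n_steps = 2**16          # ~100 periods of the natural oscillation
x, v = 1.0, 0.0          # released from x = A = 1 at rest
xs = np.empty(n_steps)
for i in range(n_steps):
    xs[i] = x
    # standard RK4 step for the pair (x, v)
    k1x, k1v = v, accel(x)
    k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x)
    k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x)
    k4x, k4v = v + dt*k3v, accel(x + dt*k3x)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6

# Hann window to suppress spectral leakage from the strong fundamental
window = np.hanning(n_steps)
amp = 2 * np.abs(np.fft.rfft(xs * window)) / window.sum()
freqs = 2 * np.pi * np.fft.rfftfreq(n_steps, d=dt)   # angular frequencies

def peak_near(w, width=0.1):
    """Largest spectral amplitude within `width` of angular frequency w."""
    return amp[np.abs(freqs - w) < width].max()

a1 = peak_near(omega0)       # fundamental
a2 = peak_near(2 * omega0)   # second harmonic from the x^2 term
print(f"fundamental ~ {a1:.3f}, second harmonic ~ {a2:.4f}")
```

Setting $c_2 = 0$ in the same script makes the $2\omega_0$ peak vanish, which is the numerical version of the statement that a purely harmonic potential produces no overtones.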

With respect to optical second harmonic generation, there is also one important ingredient that should not be overlooked in this simplified model. This is the fact that frequency doubling is possible only when there is an $x^3$ component in the potential. This means that the potential needs to be inversion asymmetric. Indeed, second harmonic generation is only possible in inversion asymmetric materials (which is why ferroelectric materials are often used to produce second harmonic optical signals).

Because of its conceptual simplicity, it is often helpful to think about physical problems in terms of the classical harmonic oscillator. It would be interesting to count how many Nobel Prizes have been given out for problems that have been reduced to some variant of the harmonic oscillator!

Citizen First, Scientist Second

I have written previously in praise of the scientific community becoming more diverse over time. I emphasized its importance because people with different cultural backgrounds often synthesize ideas that are sometimes not juxtaposed in other cultures. It is almost unquestionable that the US scientific enterprise has benefited greatly from the inclusion of scientists from around the world. Because the scientific community has become more diverse in the past few decades, it has also meant that science (at least in the academic sense) has become more open and international. As a member of the international community myself (I am a Thai citizen), recent events have been tough to watch as a scientist, immigrant and person.

This past week has seen some, I would consider, unsavory events affecting the scientific and higher education communities in the US. There was a temporary ban put in place by the US government barring citizens from seven Middle Eastern and African countries from entering the US. Some students are stranded outside the US, unable to return before the spring semester starts.

Day to day, science requires enormous attention to detail, patience doing precise theoretical or experimental work, and time to work without distractions. It is easy to get wrapped up in your own work, forgetting to pick your head up to look at what is going on around you. If events are not directly affecting you or someone close to you, it is easy to forget that these things are even happening.

In this spirit, I encourage you to attend (or organize!) department town hall meetings and speak up in support of your international colleagues. A Scientists’ March is being organized, and I urge you to attend if there is a gathering near you. To be perfectly honest (like most scientists), I am a person of thought rather than a person of action, but it is always necessary to be a citizen first and a scientist second.