Tag Archives: Review

Bands Aren’t Only For Crystalline Solids

If one goes through most textbooks on solid state physics, such as Ashcroft and Mermin, one can easily forget that most of the solids in this world are not crystalline. If I look around my living room, I see a ceramic tea mug next to a plastic pepper dispenser on a wooden coffee table. In fact, it is very difficult to find something that we would call “crystalline” in the sense of solid state physics.

Because of this, one could almost be forgiven for thinking that bands are a property only of crystalline solids. That they are not can be seen within a picture-based framework. As is usual on this blog, let’s start with the wavefunctions of the infinite square well and the two-well potential. Take a look below at the wavefunctions for the infinite well and then at the first four pairs of wavefunctions for the double well (the images are taken from here and here):

InfiniteWell

[Image: the first four pairs of wavefunctions for the double-well potential]

What you can already see forming within this simple picture is the notion of a “band”. Each “band” here only contains two energy levels, each of which can hold two electrons once spin is taken into account. If we generalize this picture from two wells to N wells, one can see that one will get N energy levels per band.

However, there has been no explicit requirement that the wells be the same depth, even though identical wells were used above. It is quite easy to imagine that the potential wells look like the ones below. The analogues of the symmetric and anti-symmetric states for the E1 level are shown below as well:

Again, this can be generalized to N potential wells that vary in depth from site to site, and one still gets a “band”. The necessary requirement for band formation is that the electrons be allowed to tunnel from one site to the other, i.e. for them to “feel” the presence of the neighboring potential wells. While the notion of a Brillouin zone won’t exist, nor will Bragg scattering of the electrons (which leads to the opening up of the gaps at the Brillouin zone boundaries), the notion of a band will persist within a non-crystalline framework.

Because solid state physics textbooks often don’t mention amorphous solids or glasses, one can easily forget which properties of solids are and are not limited to those that are crystalline. We may not know how to mathematically apply them to glasses with random potentials very well, but many ideas used in the framework to describe crystalline solids are applicable when looking at amorphous solids as well.
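For those who like to see this in practice, here is a small tight-binding sketch (my own illustration, not taken from any textbook): N wells, one level per well with a random depth, coupled by a nearest-neighbor tunneling amplitude t. Counting the eigenvalues with a Sturm sequence shows that all N levels still land inside a single band, disorder and all:

```python
import random

# A tight-binding sketch of N coupled wells (my own illustration):
# one level per well with on-site energy d[i] (a random "well depth")
# and a nearest-neighbor tunneling amplitude t. The Hamiltonian is a
# symmetric tridiagonal N x N matrix.
N, t = 12, 1.0
random.seed(0)
d = [0.5 * (random.random() - 0.5) for _ in range(N)]  # random depths

def count_below(lam):
    """Count eigenvalues below lam using the Sturm sequence of the
    tridiagonal Hamiltonian (no linear-algebra library needed)."""
    seq = [1.0, d[0] - lam]
    for i in range(1, N):
        seq.append((d[i] - lam) * seq[-1] - t * t * seq[-2])
    count = 0
    for a, b in zip(seq, seq[1:]):
        if b == 0.0:       # convention: a zero takes the opposite sign
            b = -a
        if a * b < 0:
            count += 1
    return count

# Despite the random well depths, all N levels fall inside a "band" of
# width ~4t (padded by the depth spread), just as for identical wells:
# it is the tunneling, not the periodicity, that makes the band.
in_band = count_below(2 * t + 0.5) - count_below(-2 * t - 0.5)
print(in_band)  # 12
```

Setting t = 0 collapses the band back to the N isolated well levels scattered over the depth distribution; increasing t widens it.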


Response and Dissipation: Part 1 of the Fluctuation-Dissipation Theorem

I’ve referred to the fluctuation-dissipation theorem many times on this blog (see here and here for instance), but I feel like it has been somewhat of an injustice that I have yet to devote a post to this topic. A specialized form of the theorem was first formulated by Einstein in a paper about Brownian motion in 1905. It was then extended to electrical circuits by Nyquist and then generalized by several authors including Callen and Welton (pdf!) and R. Kubo (pdf!). The Callen and Welton paper is a particularly superlative paper, not just for its content but also for its lucid scientific writing. The fluctuation-dissipation theorem relates the fluctuations of a system (an equilibrium property) to the energy dissipated by a perturbing external source (a manifestly non-equilibrium property).

In this post, which is the first part of two, I’ll deal mostly with the non-equilibrium part. In particular, I’ll show that the response function of a system is related to the energy dissipation using the harmonic oscillator as an example. I hope that this post will provide a justification as to why it is the imaginary part of a response function that quantifies energy dissipated. I will also avoid the use of Green’s functions in these posts, which for some reason often tend to get thrown in when teaching linear response theory, but are absolutely unnecessary to understand the basic concepts.

Consider first a damped driven harmonic oscillator with the following equation (for consistency, I’ll use the conventions from my previous post about the phase change after a resonance):

\underbrace{\ddot{x}}_{inertial} + \overbrace{b\dot{x}}^{damping} + \underbrace{\omega_0^2 x}_{restoring} = \overbrace{F(t)}^{driving}

One way to solve this equation is to assume that the displacement, x(t), responds linearly to the applied force, F(t) in the following way:

x(t) = \int_{-\infty}^{\infty} \chi(t-t')F(t') dt'

Just in case this equation doesn’t make sense to you, you may want to reference this post about linear response.  In the Fourier domain, this equation can be written as:

\hat{x}(\omega) = \hat{\chi}(\omega) \hat{F}(\omega)

and one can solve this equation (as done in a previous post) to give:

\hat{\chi}(\omega) = (-\omega^2 + i\omega b + \omega_0^2 )^{-1}

It is useful to think about the response function, \chi, as how the harmonic oscillator responds to an external source. This can best be seen by writing the following suggestive relation:

\chi(t-t') = \delta x(t)/\delta F(t')

Response functions tend to measure how systems evolve after being perturbed by a point-source (i.e. a delta-function source) and therefore quantify how a system relaxes back to equilibrium after being thrown slightly off balance.
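To make this relation concrete, here is a small numerical check (my own sketch, not part of the original derivation). For the damped oscillator above, the causal response function can be written in closed form, \chi(t) = e^{-bt/2}\sin(\omega_1 t)/\omega_1 for t > 0 with \omega_1 = \sqrt{\omega_0^2 - b^2/4}, and convolving it with a force pulse reproduces the directly integrated equation of motion:

```python
import math

# Check (my own sketch) that x(t) = int chi(t - t') F(t') dt' agrees
# with direct integration of x'' + b x' + w0^2 x = F(t). The causal
# response function of the damped oscillator is
# chi(t) = exp(-b t / 2) sin(w1 t) / w1 for t > 0.
b, w0 = 0.4, 1.0
w1 = math.sqrt(w0**2 - b**2 / 4)

def chi(t):
    return math.exp(-b * t / 2) * math.sin(w1 * t) / w1 if t > 0 else 0.0

def F(t):
    return math.exp(-(t - 5.0)**2)   # a Gaussian force pulse centered at t = 5

dt, n = 0.01, 2000                   # simulate t in [0, 20]
samples = (800, 1200, 1600)          # spot-check t = 8, 12, 16

# the convolution, discretized as a Riemann sum
x_conv = [sum(chi((i - k) * dt) * F(k * dt) * dt for k in range(n))
          for i in samples]

def accel(x_, v_, t_):
    return F(t_) - b * v_ - w0**2 * x_

# direct RK4 integration starting from rest, x(0) = x'(0) = 0
x, v, x_ode = 0.0, 0.0, []
for i in range(n):
    if i in samples:
        x_ode.append(x)
    t = i * dt
    k1x, k1v = v, accel(x, v, t)
    k2x, k2v = v + dt/2*k1v, accel(x + dt/2*k1x, v + dt/2*k1v, t + dt/2)
    k3x, k3v = v + dt/2*k2v, accel(x + dt/2*k2x, v + dt/2*k2v, t + dt/2)
    k4x, k4v = v + dt*k3v, accel(x + dt*k3x, v + dt*k3v, t + dt)
    x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
    v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)

print(all(abs(xc - xo) < 1e-2 for xc, xo in zip(x_conv, x_ode)))  # True
```

The oscillator rings down after the pulse passes, which is exactly the "relax back to equilibrium after being thrown slightly off balance" behavior described above.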

Now, look at what happens when we examine the energy dissipated by the damped harmonic oscillator. In this system, the energy dissipated can be expressed as the time integral of the force multiplied by the velocity, and we can write this in the Fourier domain like so:

\Delta E \sim \int \dot{x}F(t) dt =  \int d\omega d\omega'dt (-i\omega) \hat{\chi}(\omega) \hat{F}(\omega)\hat{F}(\omega') e^{-i(\omega+\omega')t}

One can write this more simply as:

\Delta E \sim \int d\omega (-i\omega) \hat{\chi}(\omega) |\hat{F}(\omega)|^2

Noticing that the energy dissipated has to be real, and that |\hat{F}(\omega)|^2 is also real, it turns out that only the imaginary part of the response function can contribute to the dissipated energy, so that we can write:

\Delta E \sim  \int d \omega \omega\hat{\chi}''(\omega)|\hat{F}(\omega)|^2

Although I try to avoid heavy mathematics on this blog, I hope that this derivation was not too difficult to follow. It turns out that only the imaginary part of the response function is related to energy dissipation. 
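One quick way to convince yourself of this result is numerically (a little sketch of my own; since the sign of \chi'' depends on the Fourier convention, I compare magnitudes): drive the damped oscillator at a single frequency and compare the time-averaged dissipated power to \frac{1}{2}F_0^2 \omega |\chi''(\omega)|:

```python
import math

# Numerical check (my own sketch): for a single-frequency drive
# F(t) = F0*cos(w*t), the time-averaged power dissipated by the damped
# oscillator should equal (1/2)*F0^2*w*|chi''(w)|, where
# chi(w) = 1/(w0^2 - w^2 + i*b*w) up to a sign convention in chi''.
b, w0, F0, w = 0.3, 1.0, 1.0, 1.2

def accel(t, x, v):
    return F0 * math.cos(w * t) - b * v - w0**2 * x

dt = 0.005
n = int(200 * math.pi / w / dt)   # about 100 drive periods
x = v = t = 0.0
work, n_avg = 0.0, 0
for i in range(n):
    # one RK4 step for x'' + b x' + w0^2 x = F(t)
    k1x, k1v = v, accel(t, x, v)
    k2x, k2v = v + dt/2*k1v, accel(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
    k3x, k3v = v + dt/2*k2v, accel(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
    k4x, k4v = v + dt*k3v, accel(t + dt, x + dt*k3x, v + dt*k3v)
    x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
    v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    t += dt
    if i > n // 2:                # skip the transient, then average F*v
        work += F0 * math.cos(w * t) * v * dt
        n_avg += 1

numeric_power = work / (n_avg * dt)
chi = 1 / complex(w0**2 - w**2, b * w)
predicted = 0.5 * F0**2 * w * abs(chi.imag)
print(abs(numeric_power / predicted - 1) < 1e-2)  # True
```

Note that only the damping constant b ends up setting the dissipated power; with b = 0 the imaginary part of \chi vanishes and no net work is done over a cycle.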

Intuitively, one can see that the imaginary part of the response has to be related to dissipation, because it is the part of the response function that possesses a \pi/2 phase lag. The real part, on the other hand, is in phase with the driving force and does not possess a phase lag (i.e. \chi = \chi' +i \chi'' = \chi' +e^{i\pi/2}\chi''). One can see from the plot below that damping (i.e. dissipation) is quantified by a \pi/2 phase lag.

ArgandPlaneResonance

Damping is usually associated with a 90 degree phase lag

Next up, I will show how the imaginary part of the response function is related to equilibrium fluctuations!

An Undergraduate Optics Problem – The Brewster Angle

Recently, a lab-mate of mine asked me if there was an intuitive way to understand Brewster’s angle. After trying to remember how Brewster’s angle was explained to me in Griffiths’ E&M book, I realized that I did not have a simple picture in my mind at all! Griffiths’ E&M book uses the rather opaque Fresnel equations to obtain the Brewster angle. So I did a little bit of thinking and came up with a picture that I think is quite easy to grasp.

First, let me briefly remind you what Brewster’s angle is, since many of you have probably not thought of the concept for a long time! Suppose my incident light beam has both components, s- and p-polarization. (In case you don’t remember, p-polarization is parallel to the plane of incidence, while s-polarization is perpendicular to the plane of incidence, as shown below.) If unpolarized light is incident on a medium, say water or glass, there is an angle, the Brewster angle, at which the reflected light comes out perfectly s-polarized.

An addendum to this statement is that if the incident beam was perfectly p-polarized to begin with, there is no reflection at the Brewster angle at all! A quick example of this is shown in this YouTube video:

So after that little introduction, let me give you the “intuitive explanation” as to why these weird polarization effects happen at the Brewster angle. First of all, it is important to note one important fact: at the Brewster angle, the refracted beam and the reflected beam are at 90 degrees with respect to each other. This is shown in the image below:

Why is this important? Well, you can think of the reflected beam as light arising from the electrons jiggling in the medium (i.e. the incident light comes in, strikes the electrons in the medium and these electrons re-radiate the light).

However, radiation from an oscillating charge only gets emitted in directions perpendicular to the axis of motion. Therefore, when the light is purely p-polarized, there is no light to reflect when the reflected and refracted rays are orthogonal: the reflected beam can’t be polarized along its own direction of propagation! This is shown in the right image above and is what gives rise to the reflectionless beam in the YouTube video.

This visual aid enables one to use Snell’s law to obtain the celebrated Brewster angle equation:

n_1 \textrm{sin}(\theta_B) = n_2 \textrm{sin}(\theta_2)

and

\theta_B + \theta_2 = 90^{\circ}

to obtain:

\textrm{tan}(\theta_B) = n_2/n_1.

The equations also suggest one more thing: when the incident light has an s-polarization component, the reflected beam must come out perfectly polarized at the Brewster angle. This is because only the s-polarized light jiggles the electrons in a way that they can re-radiate in the direction of the outgoing beam. The image below shows the effect a polarizing filter can therefore have when looking at water near the Brewster angle, which is around 53 degrees for water.

To me, this is a much simpler way to think about the Brewster angle than dealing with the Fresnel equations.
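As a quick numerical sanity check of the picture (my own sketch), one can verify with the standard Fresnel amplitude coefficient that the p-polarized reflection really does vanish at \textrm{tan}(\theta_B) = n_2/n_1, and that the reflected and refracted rays are then perpendicular:

```python
import math

# Quick check (my own sketch): Brewster angle from tan(theta_B) = n2/n1,
# plus verification that the Fresnel p-polarized reflection vanishes there.
n1, n2 = 1.0, 1.33                       # air -> water
theta_B = math.atan2(n2, n1)
print(round(math.degrees(theta_B), 1))   # 53.1 degrees for water

# Snell's law gives the refraction angle...
theta_2 = math.asin(n1 * math.sin(theta_B) / n2)
# ...and at Brewster incidence the two beams are perpendicular:
print(round(math.degrees(theta_B + theta_2), 6))  # 90.0

# Fresnel amplitude coefficient for p-polarized light
r_p = (n2 * math.cos(theta_B) - n1 * math.cos(theta_2)) / \
      (n2 * math.cos(theta_B) + n1 * math.cos(theta_2))
print(abs(r_p) < 1e-12)                  # True: no reflected p-polarization
```

The 53.1 degrees agrees with the polarizing-filter example above, and the vanishing r_p is the "no light to reflect" statement in Fresnel language.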

Zener’s Electrical Breakdown Model

In my previous post about electric field induced metal-insulator transitions, I mentioned the notion of Zener breakdown. Since the idea is not likely to be familiar to everyone, I thought I’d use this post to explain the concept a little further.

Simply stated, Zener breakdown occurs when a DC electric field applied to an insulator is large enough such that the insulator becomes conducting due to interband tunneling. Usually, when we imagine electrical conduction in a solid, we think of the mobile electrons moving only within one or more partially filled bands. Modelling electrical transport within a single band can already get quite complicated, so it was a major accomplishment that C. Zener was able to come up with a relatively simple and solvable model for interband tunneling.

To make the problem tractable, Zener came up with a hybrid real-space / reciprocal-space model where he could use the formalism of a 1D quantum mechanical barrier:

Tunneling.png

In Zener’s model, the barrier height is set by the band gap energy, E_{g}, between the valence and conduction bands in the insulator, while the barrier width is set by the distance over which the electric field does work equal to the gap. That is, the particle gains enough energy to surmount the barrier when e\mathcal{E}d = E_{g}, so the barrier width is:

d = E_{g} / e\mathcal{E},

where \mathcal{E} is the applied electric field and e is the electron charge.

Now, how do we solve this tunneling problem? If we use the WKB formalism, like Zener did, we get that the transmission probability is:

P_T = e^{-2\gamma}, \quad \textrm{where} \quad \gamma = \int_0^d{k(x) dx}.

Here, k(x) is the wavenumber. So, really, all that needs to be done is to obtain the correct functional form for the wavenumber and (hopefully) solve the integral. This turns out not to be too difficult; we just have to make sure that we include both bands in the calculation. This can be done in a similar way to the nearly free electron problem.

Briefly, the nearly free electron problem considers the following E-k relations in the extended zone scheme:

E-kRelation

Near the zone boundary, one needs to apply degenerate perturbation theory due to Bragg diffraction of the electrons (or degeneracy of the bands from the next zone, or however you want to think about it). So if one now zooms into the hatched area in the figure above, one finds that a gap opens up upon setting the following determinant to zero and solving for \epsilon(k):

\left( \begin{array}{cc} \lambda_k - \epsilon & E_g/2 \\ E_g/2 & \lambda_{k-G} - \epsilon \end{array} \right),

where \lambda_k is \hbar^2k^2/2m in this problem, and the hatched area becomes gapped like so:

ZoneBoundaryGap

In the Zener model problem, we take a similar approach. Instead of solving for \epsilon(k), we solve for k(\epsilon). To focus on the zone boundary, we first let k \rightarrow k_0 + \kappa and \epsilon \rightarrow \epsilon_0 + \epsilon_1, where k_0 = \pi/a (the zone boundary) and \epsilon_0 = \hbar^2k_0^2/2m, under the assumption that \kappa and \epsilon_1 are small. All this does is shift our reference point to the hatched region in the previous figure.

The trick now is to solve for  k(\epsilon) to see if imaginary solutions are possible. Indeed, they are! I get that:

\kappa^2 = \frac{2m}{\hbar^2} (\frac{\epsilon_1^2 - E_g^2/4}{4\epsilon_0}),

so as long as \epsilon_1^2 - E_g^2/4 < 0, we get imaginary solutions for \kappa.

Although we have a function \kappa(\epsilon_1), we still need to do a little work to obtain \kappa(x), which is required for the WKB exponent. Here, Zener just made the simplest assumption he could: that \epsilon_1 is related to the tunneling distance, x, linearly. The image I’ve drawn above (that shows the potential profile) and the fact that the work done by the electric field is e\mathcal{E}x demonstrate that this assumption is very reasonable.

Plugging all the numbers in and doing the integral, one gets that:

P_T = \exp\left(-\pi^2 E_g^2/(4 \epsilon_0 e \mathcal{E} a)\right).
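Though I won't grind through the integral here, the scaling of the WKB exponent is easy to check numerically (a sketch of my own, in units where \hbar = m = e = 1 and a = 1, so that \epsilon_0 = \pi^2/2; I only check the scaling, since the overall numerical prefactor depends on convention details):

```python
import math

# Numerical sketch of the WKB exponent in units hbar = m = e = 1, a = 1
# (so eps0 = pi^2/2). Inside the gap, |kappa|^2 = 2*(Eg^2/4 - eps1^2)/(4*eps0)
# in these units, and Zener's linear assumption is eps1(x) = E*x - Eg/2,
# which runs from -Eg/2 to +Eg/2 across the barrier width d = Eg/E.
eps0 = math.pi**2 / 2

def wkb_exponent(Eg, E, n=20000):
    """2*gamma = 2 * integral_0^d kappa(x) dx, by the midpoint rule."""
    d = Eg / E
    dx = d / n
    total = 0.0
    for i in range(n):
        eps1 = E * (i + 0.5) * dx - Eg / 2
        total += math.sqrt(2 * (Eg**2 / 4 - eps1**2) / (4 * eps0)) * dx
    return 2 * total

g = wkb_exponent(1.0, 0.1)
# The exponent scales as Eg^2 / E, just as in the final formula:
print(round(wkb_exponent(2.0, 0.1) / g, 3))   # 4.0 (doubling Eg quadruples it)
print(round(wkb_exponent(1.0, 0.2) / g, 3))   # 0.5 (doubling E halves it)
```

This is the essential physics of the answer: a wider gap suppresses tunneling exponentially, while a stronger field thins the effective barrier.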

If you’re like me in any way, you’ll find the final answer to the problem pretty intuitive, but Zener’s methodology towards obtaining it pretty strange. To me, the solution is quite bizarre in how it moves between momentum space and real space, and I don’t have a good physical picture of how this happens in the problem. In particular, a seeming contradiction pervades the problem: the lattice is assumed to be periodic, yet the applied electric field tilts that lattice potential. I am apparently not the only one who is uncomfortable with this solution, seeing that it was controversial for a long time.

Nonetheless, it is a great achievement that, with a couple of simple physical pictures (albeit ones that, taken at face value, seem inconsistent), Zener was able to qualitatively explain one mechanism of electrical breakdown in insulators (there are a few others, such as avalanche breakdown).

Mott Switches and Resistive RAMs

Over the past few years, there have been some interesting developments concerning narrow gap correlated insulators. In particular, it has been found that it is surprisingly easy to induce an insulator-to-metal transition (at the very least, one can say that the resistivity changes by a few orders of magnitude!) in materials such as VO2, GaTa4Se8 and NiS2-xSx with an electric field. There appears to be a threshold electric field above which the material turns into a metal. Here is a plot demonstrating this rather interesting phenomenon in Ca2RuO4, taken from this paper:

Ca2RuO4_Switch.PNG

It can be seen that the transition is hysteretic, thereby indicating that the insulator-metal transition as a function of field is first-order. It turns out that in most of the materials in which this kind of behavior is observed, there usually exists an insulator-metal transition as a function of temperature and pressure as well. Therefore, in cases such as (V1-xCrx)2O3, it is likely that the electric field induced insulator-metal transition is caused by Joule heating. However, there are several other cases where it seems like Joule heating is likely not the culprit causing the transition.

While Zener breakdown has been put forth as a possible mechanism causing this transition when Joule heating has been ruled out, back-of-the-envelope calculations suggest that the electric field required to cause a Zener-type breakdown would be several orders of magnitude larger than that observed in these correlated insulators.

On the experimental side, things get even more interesting when applying pulsed electric fields. While the insulator-metal transition observed is usually hysteretic, as shown in the plot above, in some of these correlated insulators, electrical pulses can maintain the metallic state. What I mean is that when certain pulse profiles are applied to the material, it gets stuck in a metastable metallic state. This means that even when the applied voltage is turned off, the material remains a metal! This is shown here for instance for a 30 microsecond / 120V 7-pulse train with each pulse applied every 200 microseconds to GaV4S8 (taken from this paper):

GaVa4S8.PNG

Electric field pulses applied to GaV4S8. A single pulse induces an insulator-metal transition, but the sample reverts back to the insulating state after the pulse disappears. A pulse train induces a transition to a metastable metallic state.

Now, if your thought process is similar to mine, you would be wondering if applying another voltage pulse would switch the material back to an insulator. The answer is that with a specific pulse profile this is possible. In the same paper as the one above, the authors apply a series of 500 microsecond pulses (up to 20V) to the same sample, and they don’t see any change. However, the application of a 12V/2ms pulse does indeed reset the sample back to (almost) its original state. In the paper, the authors attribute the need for a longer pulse to Joule heating, enabling the sample to revert back to the insulating state. Here is the image showing the data for the metastable-metal/insulator transition (taken from the same paper):

gava4s8_reset

So, at the moment, it seems like the mechanism causing this transition is not very well understood (at least I don’t understand it very well!). It is thought that there are filamentary channels between the contacts causing the insulator-metal transition. However, STM has revealed the existence of granular metallic islands in GaTa4Se8. The STM results, of course, should be taken with a grain of salt since STM is surface sensitive and something different might be happening in the bulk. Anyway, some hypotheses have been put forth to figure out what is going on microscopically in these materials. Here is a recent theoretical paper putting forth a plausible explanation for some of the observed phenomena.

Before concluding, I would just like to point out that the relatively recent (and remarkable) results on the hidden metallic state in TaS2 (see here as well), which again is a Mott-like insulator in the low temperature state, are likely related to the phenomena in the other materials. The relationship between the “hidden state” in TaS2 and the switching in the other insulators discussed here seems not to have been recognized in the literature.

Anyway, I heartily recommend reading this review article to gain more insight into these topics for those who are interested.

An Excellent Intro To Physical Science

On a recent plane ride, I was able to catch an episode of the new PBS series Genius by Stephen Hawking. I was surprised by the quality of the show and in particular, its emphasis on experiment. Usually, shows like this fall into the trap of giving one the facts (or speculations) without an adequate explanation of how scientists come to such conclusions. However, this one is a little different and there is a large emphasis on experiment, which, at least to me, is much more inspirational.

Here is the episode I watched on the plane:

Nonlinear Response and Harmonics

Because we are so often solving problems in quantum mechanics, it is sometimes easy to forget that certain effects also show up in classical physics and are not “mysterious quantum phenomena”. One of these is the problem of avoided crossings or level repulsion, which can be much more easily understood in the classical realm. I would argue that the Fano resonance also represents a case where a classical model is more helpful in grasping the main idea. Perhaps not too surprisingly, a variant of the classical harmonic oscillator problem is used to illustrate the respective concepts in both cases.

There is also another cute example, which illustrates why overtones of the natural frequency appear when an oscillator undergoes slightly nonlinear oscillations. The solution to this problem therefore shows why harmonic distortion often affects speakers; sometimes speakers emit frequencies that are not present in the original electrical signal. Furthermore, it shows why second harmonic generation can result when intense light is incident on a material.

First, imagine a perfectly harmonic oscillator with a potential of the form V(x) = \frac{1}{2} kx^2. We know that such an oscillator, if displaced from its original position, will oscillate at its natural frequency \omega_0 = \sqrt{k/m}, with the position varying as x(t) = A \textrm{cos}(\omega_0 t + \phi). The potential and the position of the oscillator as a function of time are shown below:

harmpotentialrepsonse

(Left) Harmonic potential as a function of position. (Right) Variation of the position of the oscillator with time

Now imagine that in addition to the harmonic part of the potential, we also have a small additional component such that V(x) = \frac{1}{2} kx^2 + \frac{1}{3}\epsilon x^3, so that the potential now looks like so:

nonlinearharm

The equation of motion is now nonlinear:

\ddot{x} = -c_1x - c_2x^2

where c_1 and c_2 are constants. It is easy to see that if the amplitude of oscillations is small enough, there will be very little difference between this case and the case of the perfectly harmonic potential.

However, if the amplitude of the oscillations gets a little larger, there will clearly be deviations from the pure sinusoid. So then what does the position of the oscillator look like as a function of time? Perhaps not too surprisingly, considering the title, not only are there oscillations at \omega_0, but a harmonic component at 2\omega_0 is also introduced.

While the differential equation can’t be solved in closed form, the appearance of the harmonic component can be seen within the framework of perturbation theory. In this context, all we need to do is plug the solution to the simple harmonic oscillator, x(t) = A\textrm{cos}(\omega_0t +\phi), into the nonlinear equation above. If we do this, the last term becomes:

-c_2A^2\textrm{cos}^2(\omega_0t+\phi) = -c_2 \frac{A^2}{2}(1 + \textrm{cos}(2\omega_0t+2\phi)),

showing that we get oscillatory components at twice the natural frequency. Although this explanation is a little crude, one can already start to see why nonlinearity often leads to higher frequency harmonics.
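To see the 2\omega_0 component appear without any perturbation theory, one can also just integrate the nonlinear equation numerically and look at the spectrum (a little sketch of my own; the constants and the initial amplitude below are arbitrary choices):

```python
import cmath, math

# Integrate x'' = -c1*x - c2*x^2 with RK4 and measure the Fourier
# amplitude of x(t) at a chosen frequency, using a Hann window to
# suppress leakage from the strong fundamental. With c2 = 0 there is
# no component at 2*w0; switching on c2 creates one.
def spectrum_at(c2, w, c1=1.0, x0=0.5, dt=0.02, n=16384):
    x, v = x0, 0.0                     # release from rest at amplitude x0

    def a(x_):                         # acceleration from the potential
        return -c1 * x_ - c2 * x_ * x_

    s = 0j
    for k in range(n):
        hann = 0.5 - 0.5 * math.cos(2 * math.pi * k / n)
        s += x * hann * cmath.exp(-1j * w * k * dt)
        k1x, k1v = v, a(x)             # one RK4 step
        k2x, k2v = v + dt/2*k1v, a(x + dt/2*k1x)
        k3x, k3v = v + dt/2*k2v, a(x + dt/2*k2x)
        k4x, k4v = v + dt*k3v, a(x + dt*k3x)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return abs(s) / n

# w0 = sqrt(c1) = 1, so we look for a spectral component at w = 2
second_harmonic = spectrum_at(0.2, 2.0)   # anharmonic: overtone present
second_linear = spectrum_at(0.0, 2.0)     # harmonic potential: none
print(second_harmonic > 50 * second_linear)  # True
```

Consistent with the perturbative argument above, the amplitude of the 2\omega_0 component is of order c_2A^2, which you can verify by playing with c2 and x0.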

With respect to optical second harmonic generation, there is also one important ingredient that should not be overlooked in this simplified model. This is the fact that frequency doubling is possible only when there is an x^3 component in the potential. This means that the potential needs to be inversion asymmetric. Indeed, second harmonic generation is only possible in inversion asymmetric materials (which is why ferroelectric materials are often used to produce second harmonic optical signals).

Because of its conceptual simplicity, it is often helpful to think about physical problems in terms of the classical harmonic oscillator. It would be interesting to count how many Nobel Prizes have been given out for problems that have been reduced to some variant of the harmonic oscillator!