Bands Aren’t Only For Crystalline Solids

If one goes through most textbooks on solid state physics, such as Ashcroft and Mermin, one can easily forget that most of the solids in this world are not crystalline. If I look around my living room, I see a ceramic tea mug next to a plastic pepper dispenser sitting on a wooden coffee table. In fact, it is very difficult to find something that we would call “crystalline” in the sense of solid state physics.

Because of this, one could almost be forgiven for thinking that bands are a property only of crystalline solids. That this is not the case can be seen within a picture-based framework. As is usual on this blog, let’s start with the wavefunctions of the infinite square well and the two-well potential. Take a look below at the wavefunctions for the infinite well and then at the first four pairs of wavefunctions for the double well (the images are taken from here and here):



What you can already see forming within this simple picture is the notion of a “band”. Each “band” here contains only two energy levels, each of which can hold two electrons when spin is taken into consideration. Generalizing this picture, one can see that in going from the two wells here to N wells, one will get N energy levels per band.
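A minimal numerical sketch makes the counting concrete. This toy model and its parameters are my own illustrative choices, not something taken from the figures above: represent N identical wells by a tight-binding chain with one level per well and a tunneling amplitude t between neighbors, then count the eigenvalues.

```python
import numpy as np

# Toy tight-binding chain: N identical wells, one level per well,
# with tunneling amplitude t coupling nearest-neighbor wells.
N, t, eps = 20, 1.0, 0.0

H = eps * np.eye(N) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
levels = np.linalg.eigvalsh(H)

print(len(levels))                   # exactly N levels in the band
print(levels.max() - levels.min())   # bandwidth, which approaches 4t as N grows
```

Diagonalizing the 20×20 matrix gives exactly 20 levels, spread over a bandwidth that approaches 4t for large N.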

However, there has been no explicit requirement that the wells be the same depth, even though equal depths were used above. It is quite easy to imagine potential wells like the ones below. The analogues of the symmetric and anti-symmetric states for the E1 level are shown below as well:

Again, this can be generalized to N potential wells that vary in depth from site to site, and one still gets a “band”. The necessary requirement for band formation is that the electrons be allowed to tunnel from one site to another, i.e. that they “feel” the presence of the neighboring potential wells. While the notion of a Brillouin zone won’t exist, nor will Bragg scattering of the electrons (which leads to the opening of gaps at the Brillouin zone boundaries), the notion of a band will persist in a non-crystalline framework.
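One can see this numerically with a toy tight-binding chain (again my own illustrative model, with an arbitrary disorder strength and random seed): give each well one level with a random on-site energy playing the role of the well depth, couple neighbors with tunneling amplitude t, and diagonalize.

```python
import numpy as np

rng = np.random.default_rng(0)
N, t = 20, 1.0
eps = rng.uniform(-0.5, 0.5, size=N)   # wells of randomly varying depth

H = np.diag(eps) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
levels = np.linalg.eigvalsh(H)

# Still N levels; as long as t is comparable to the spread in depths,
# they bunch into a band close to the ordered width of ~4t, even
# though there is no Brillouin zone here.
print(len(levels), levels.max() - levels.min())
```

As long as the tunneling amplitude is not small compared to the spread in well depths, the N levels still cluster into a band.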

Because solid state physics textbooks often don’t mention amorphous solids or glasses, one can easily forget which properties of solids are and are not limited to crystalline ones. We may not yet know how to apply them mathematically to glasses with random potentials, but many of the ideas used in the framework describing crystalline solids are applicable to amorphous solids as well.


Graduate Student Stipends

If you’re in the United States, you’ll probably have noticed that a bill is dangerously close to passing that would increase the tax burden on graduate students dramatically. The bill would count graduate students’ tuition waivers as part of their income, increasing their taxable income from somewhere in the $30k range to somewhere in the $70-80k range.

Carnegie Mellon and UC Berkeley have recently done calculations to estimate the extra taxes graduate students will have to pay, and it does not make for happy reading. The Carnegie Mellon document can be found here and the UC Berkeley document can be found here. The UC Berkeley document also calculates the increase in the tax burden for MIT graduate students, as there can be large differences between public and private institutions (private institutions generally charge more for graduate education and grant larger tuition waivers, so graduate students at private institutions would be taxed more).

Most importantly, the document from UC Berkeley states:

An MIT Ph.D. student who is an RA [Research Assistant] for all twelve months in 2017 will get a salary of approximately $37,128, and a health insurance plan valued at $3,000. The cost of a year of tuition at MIT is about $49,580. With these figures, we can estimate the student’s 2017 tax burden. We find that her federal income tax would be $3,993 under current law, and $13,577 under the TCJA [Tax Cuts and Jobs Act], or a 240% increase. We also note that her tax burden is about 37% of her salary.
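The percentages in the quote follow directly from the dollar figures it cites; as a quick check (arithmetic on the quoted numbers only, not an independent tax calculation):

```python
# Figures quoted in the UC Berkeley document for the MIT example.
current_tax, tcja_tax, salary = 3993, 13577, 37128

increase = (tcja_tax - current_tax) / current_tax   # fractional increase in tax
burden = tcja_tax / salary                          # tax as a fraction of salary

print(round(increase * 100))   # 240 (% increase)
print(round(burden * 100))     # 37 (% of salary)
```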

This is a huge concern for those involved, but I think there are more dire long-term consequences at stake here for the STEM fields.

I chose to pursue a graduate degree in physics in the US partly because it allowed me to pursue a degree without accruing student debt while receiving a livable stipend for food and housing (for me it was $20k/year). If I were applying to graduate school in the current climate, I would probably apply to schools in Canada and Europe to avoid the unpredictability of the current atmosphere and a possible cut to my stipend.

That is to say, I am sure that if this bill passes, it (and the very fact that it could harm graduate students so heavily) will have the adverse side effect of driving talented graduate students to study in other countries or dissuading them from pursuing those degrees at all. It is important to remember that educated immigrants, especially those in the STEM fields, play a large role in spurring economic growth in the US.

Graduate students may not recognize it, but if they collectively quit their jobs, the US scientific research enterprise would grind to a quick halt. They are already a relatively hidden and cheap workforce in the US. It bemuses me that these students may be about to have their meager stipends for housing and food taxed further, to the point that they may not be able to afford these basic necessities.

On Scientific Inevitability

If one looks through the history of human evolution, it is surprising to see that humanity has, on several independent occasions and in several different locations, figured out how to produce food, make pottery, write, invent the wheel, domesticate animals, build complex political societies, and so on. It is almost as if these discoveries and inventions were an inevitable part of the evolution of humans. More controversially, one may extend such arguments to include the development of science, mathematics, medicine, and many other branches of knowledge (more on this point below).

The interesting part about these ancient inventions is that, because they originated in different parts of the world, the specifics varied geographically. For instance, native South Americans domesticated llamas, cultures in Southwest Asia (today’s Middle East) domesticated sheep, cows, and horses, and the ancient Chinese domesticated chickens, among other animals. Different cultures domesticated different animals largely because those animals were native to the regions where they were domesticated.

Now, there are also many instances in human history where inventions were not made independently, but diffused geographically. For instance, writing was developed independently in at least a couple of locations (Mesoamerica and Southwest Asia), but likely diffused from Southwest Asia into Europe and other neighboring regions. While the peoples in these other places would likely have discovered writing on their own in due time, the diffusion from Southwest Asia made this unnecessary. These points are made well in Jared Diamond’s excellent book Guns, Germs and Steel.

If you've ever been to the US post-office, you'll realize very quickly that it's not the product of intelligent design.

At this point, you are probably wondering what I am trying to get at here, and it is no more than the following musing. Consider a thought experiment: if two different civilizations were geographically isolated from each other for thousands of years, would both have developed a similar form of scientific inquiry? Perhaps the questions asked and the answers obtained would have differed somewhat, but my naive guess is that, given enough time, both would have developed a process that we would recognize today as genuinely scientific. Obviously, this experiment cannot actually be performed, which makes it difficult to say to what extent the development of science was inevitable, but I would consider it likely.

Because what we would call “modern science” was devised after the invention of the printing press, the process of scientific inquiry likely “diffused” rather than being invented independently in many places. The printing press so accelerated the pace of information transfer that geographically separated areas never had the chance to “invent” science on their own.

Today, we can communicate globally almost instantly, and information transfer across large geographic distances is easy. Scientific communication therefore works through a similar diffusive process: scientists from anywhere in the world can submit papers to journals and access them online. Looking at science in this way, as an almost inevitable evolutionary process, downplays the role of individuals and suggests that, regardless of the contribution of any individual scientist, humankind would likely have reached the same conclusions eventually anyhow. The timescale to reach a particular scientific conclusion may have differed slightly, but the conclusion would have been reached nonetheless.

There are some scientists who have contributed massively to the advancement of science, and their absence may have slowed progress, but it is hard to imagine that progress would have slowed very significantly. In today’s world, where the idea of individual genius is romanticized in the media, and further so by prizes such as the Nobel, it is important to remember that no scientist is indispensable, no matter how great. There were often competing scientists simultaneously working on the biggest discoveries of the 20th century, such as general relativity and the structure of DNA. It is likely that had Einstein or Watson, Crick, and Franklin not solved those problems, others would have.

So while the work of this year’s scientific Nobel winners is without a doubt praise-worthy and the recipients deserving, it is interesting to think about such prizes in this slightly different and less romanticized light.


For some reason, the summer months always seem to get a little busy, and this summer has been no exception. I hope to write part 2 of the fluctuation-dissipation post soon, but in the meantime, here are a couple videos that I came across recently showing the rather strange properties of mercury.



Pretty weird, huh?

Response and Dissipation: Part 1 of the Fluctuation-Dissipation Theorem

I’ve referred to the fluctuation-dissipation theorem many times on this blog (see here and here for instance), but I feel like it has been somewhat of an injustice that I have yet to commit a post to this topic. A specialized form of the theorem was first formulated by Einstein in a paper about Brownian motion in 1905. It was then extended to electrical circuits by Nyquist and then generalized by several authors, including Callen and Welton (pdf!) and R. Kubo (pdf!). The Callen and Welton paper is particularly superlative, not just for its content but also for its lucid scientific writing. The fluctuation-dissipation theorem relates the fluctuations of a system (an equilibrium property) to the energy dissipated by a perturbing external source (a manifestly non-equilibrium property).

In this post, which is the first part of two, I’ll deal mostly with the non-equilibrium part. In particular, I’ll show that the response function of a system is related to the energy dissipation using the harmonic oscillator as an example. I hope that this post will provide a justification as to why it is the imaginary part of a response function that quantifies energy dissipated. I will also avoid the use of Green’s functions in these posts, which for some reason often tend to get thrown in when teaching linear response theory, but are absolutely unnecessary to understand the basic concepts.

Consider first a damped driven harmonic oscillator with the following equation (for consistency, I’ll use the conventions from my previous post about the phase change after a resonance):

\underbrace{\ddot{x}}_{\text{inertial}} + \overbrace{b\dot{x}}^{\text{damping}} + \underbrace{\omega_0^2 x}_{\text{restoring}} = \overbrace{F(t)}^{\text{driving}}

One way to solve this equation is to assume that the displacement, x(t), responds linearly to the applied force, F(t) in the following way:

x(t) = \int_{-\infty}^{\infty} \chi(t-t')F(t') dt'

Just in case this equation doesn’t make sense to you, you may want to reference this post about linear response.  In the Fourier domain, this equation can be written as:

\hat{x}(\omega) = \hat{\chi}(\omega) \hat{F}(\omega)

and one can solve this equation (as done in a previous post) to give:

\hat{\chi}(\omega) = (-\omega^2 + i\omega b + \omega_0^2 )^{-1}
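As a quick numerical sanity check (my own sketch, with arbitrary parameter values), one can substitute the steady-state solution x(t) = Re[χ̂(ω)F₀e^{iωt}] back into the equation of motion and confirm that it balances:

```python
import numpy as np

b, w0 = 0.3, 2.0                 # damping coefficient and natural frequency
w, F0 = 1.5, 1.0                 # drive frequency and amplitude

chi = 1.0 / (-w**2 + 1j*w*b + w0**2)       # response function from above

t = np.linspace(0.0, 20.0, 200001)
dt = t[1] - t[0]
x = np.real(chi * F0 * np.exp(1j*w*t))     # steady-state displacement

# Finite-difference check: x'' + b x' + w0^2 x should equal F0 cos(wt).
xd = np.gradient(x, dt)
xdd = np.gradient(xd, dt)
residual = xdd + b*xd + w0**2 * x - F0*np.cos(w*t)
print(np.max(np.abs(residual[100:-100])))  # tiny, limited by discretization
```

(The first and last few points are trimmed because the finite-difference derivative is less accurate at the edges of the grid.)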

It is useful to think about the response function, \chi, as how the harmonic oscillator responds to an external source. This can best be seen by writing the following suggestive relation:

\chi(t-t') = \delta x(t)/\delta F(t')

Response functions tend to measure how systems evolve after being perturbed by a point-source (i.e. a delta-function source) and therefore quantify how a system relaxes back to equilibrium after being thrown slightly off balance.

Now, look at what happens when we examine the energy dissipated by the damped harmonic oscillator. In this system, the energy dissipated can be expressed as the time integral of the force multiplied by the velocity, which we can write in the Fourier domain as follows:

\Delta E \sim \int \dot{x}F(t) dt =  \int d\omega d\omega'dt (-i\omega) \hat{\chi}(\omega) \hat{F}(\omega)\hat{F}(\omega') e^{i(\omega+\omega')t}

Integrating over t yields a delta function, \delta(\omega+\omega'), and using \hat{F}(-\omega) = \hat{F}^*(\omega) for a real force, one can write this more simply as:

\Delta E \sim \int d\omega (-i\omega) \hat{\chi}(\omega) |\hat{F}(\omega)|^2

Noticing that the energy dissipated has to be real, and that |\hat{F}(\omega)|^2 is also real, it turns out that only the imaginary part of the response function can contribute to the dissipated energy, so that we can write:

\Delta E \sim  \int d \omega \omega\hat{\chi}''(\omega)|\hat{F}(\omega)|^2

Although I try to avoid heavy mathematics on this blog, I hope that this derivation was not too difficult to follow. The upshot is that only the imaginary part of the response function is related to energy dissipation.
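The punchline can also be checked numerically for the damped oscillator (again my own sketch, with arbitrary parameters). For a steady-state drive F(t) = F₀cos(ωt), the energy dissipated per cycle, ∫Fẋ dt, should match the ωχ''-type expression; note that with the e^{+iωt} convention used for χ̂ above, χ'' comes out negative, so the dissipated energy per cycle appears as -ωχ''F₀²T/2:

```python
import numpy as np

b, w0, w, F0 = 0.3, 2.0, 1.5, 1.0
chi = 1.0 / (-w**2 + 1j*w*b + w0**2)       # response function from above

T = 2*np.pi / w                            # one drive period
t = np.linspace(0.0, T, 100001)
dt = t[1] - t[0]

F = F0 * np.cos(w*t)
xdot = np.real(1j*w * chi * F0 * np.exp(1j*w*t))   # steady-state velocity

# Energy dissipated per cycle, computed two ways: the direct integral of
# F*xdot over one period, and the -omega*chi''*F0^2*T/2 expression.
E_direct = np.sum((F * xdot)[:-1]) * dt    # Riemann sum over one full period
E_formula = -w * chi.imag * F0**2 / 2 * T

print(E_direct, E_formula)
```

The two numbers agree, and both are positive, as dissipated energy must be.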

Intuitively, one can see that the imaginary part of the response has to be related to dissipation, because it is the part of the response function that possesses a \pi/2 phase lag. The real part, on the other hand, is in phase with the driving force and possesses no phase lag (i.e. \chi = \chi' + i\chi'' = \chi' + e^{i\pi/2}\chi''). One can see from the plot below that damping (i.e. dissipation) is quantified by a \pi/2 phase lag.


Damping is usually associated with a 90 degree phase lag

Next up, I will show how the imaginary part of the response function is related to equilibrium fluctuations!

Research Topics and the LAQ Method

For a scientific researcher, the toughest part of the job is coming up with good scientific questions. A large part of my time is spent looking for such questions, and every once in a while I happen upon a topic that is worth investigating further. The most common method of generating such questions is to come up with a lot of them and then sift through the list to find some that are worth pursuing.

One of the main criteria I use for selecting such questions/problems is what I refer to as the “largest answerable question” or LAQ method. Because the lifetime of a researcher is limited by the human lifespan, it is important to try to answer the largest answerable questions that fall within the window of one’s ability. Hence, this selection process is tied in with one’s self-confidence and takes a fair amount of introspection. I imagine the LAQ method looking a little bit like this:


One starts by asking some really general questions about a scientific topic and eventually proceeds to a more specific, concrete, answerable question. If the question is answerable, the answer will usually have ramifications that are broadly felt by many in the community.

I imagine that most readers of this blog will have no trouble coming up with examples of success stories where scientists employed the LAQ method. Just about every famous scientist you can think of has probably, at least to some extent, used this method fruitfully. However, there are counterexamples as well, where important questions are asked by one scientist but answered by others.

I am almost done reading Erwin Schrodinger’s book What is Life?, which was written in 1943. In it, Schrodinger asks deep questions about genetics and attempts to put physical constraints on information-carrying molecules (DNA was not known at the time to be the site of genetic information). It is an entertaining read in two regards. Firstly, Schrodinger, at the time of writing, introduces to physicists some of the most pertinent and probing questions in genetics. The book was, after all, one that was read by both Watson and Crick before they set about discovering the structure of DNA. Secondly, and more interestingly, Schrodinger gets almost everything he tries to answer wrong! For instance, he suggests that quantum mechanics may play a role in causing a mutation of certain genes. This is not to say that his reasoning was not sound, but at the time of writing, there were just not enough experimental constraints on some of the assumptions he made.

Nonetheless, I applaud Schrodinger for writing the book and exposing his ignorance. Even though he was not able to answer many of the questions himself, he was an inspiration to many others who eventually were able to shed light on many of the questions posed in the book. Here is an example where the LAQ method fails, but still pushes science forward in a tangible way.

What are your strategies for coming up with good research questions? I have to admit that while the LAQ method is useful, I sometimes pursue problems purely because I find them stunning, and no other reason is needed!

DIY Garage Work

Recently, I heard about a string of YouTube videos in which Ben Krasnow of the Applied Science YouTube channel makes a series of scientific instruments in his garage. One particularly impressive achievement is his homemade scanning electron microscope, a pretty decent instrument he constructs for approximately $1500. This is outstanding from an educational viewpoint: $1500 will probably be affordable for many high schools and will let students see how objects can be imaged with electrons.

Here are a couple videos showing this and another one of his projects where he uses a laser and a couple optical elements to construct a Raman spectroscopy setup:




Lastly, I’d like to point out that Christina Lee has put together an excellent set of Jupyter code (i.e. IPython Notebook code) to solve various condensed matter physics problems. It’s definitely worth having a look.