# Monthly Archives: March 2016

## A Graphical Depiction of Linear Response

Most of the theory of linear response derives from the following equation:

$y(t) = \int_{-\infty}^{\infty} \chi(t-t')h(t') dt'$

I remember quite vividly first seeing an equation like this in Ashcroft and Mermin in the context of electric fields (i.e. $\textbf{D}(\textbf{r}) = \int_{-\infty}^{\infty} \epsilon(\textbf{r}-\textbf{r}')\textbf{E}(\textbf{r}') d\textbf{r}'$) and wondering what it meant.

One way to think about $\chi(t-t')$ in the equation above is as an impulse response function. What I mean by this is that if I were to apply a Dirac delta-function perturbation to my system, I would expect it to respond in some way, characteristic of the system, like so:

Mathematically, this can be expressed as:

$y(t) = \int_{-\infty}^{\infty} \chi(t-t')h(t') dt'= \int_{-\infty}^{\infty} \chi(t-t')\delta(t') dt'=\chi(t)$
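As a quick numerical check, one can verify that convolving $\chi$ with a discrete delta-function input simply returns $\chi$. The response function below is a made-up damped oscillation, purely for illustration:

```python
import numpy as np

# Time grid and a hypothetical causal response function:
# a damped oscillation (chosen only for illustration)
t = np.linspace(0, 10, 1000)
dt = t[1] - t[0]
chi = np.exp(-t) * np.sin(5 * t)

# Discrete stand-in for a delta-function input at t = 0:
# a single spike of height 1/dt, so its area is 1
h = np.zeros_like(t)
h[0] = 1.0 / dt

# y(t) = \int chi(t - t') h(t') dt'  ->  discrete convolution times dt
y = np.convolve(chi, h)[:len(t)] * dt

# The response to a unit impulse is just chi itself
assert np.allclose(y, chi)
```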

This seems reasonable enough. Now, though this is going to sound like a tautology, what one means by “linear” in linear response theory is that the system responds linearly to the input. Most of us are familiar with the idea of linearity, but in this context it helps to understand it through two properties: first, the strength of the input delta-function determines the strength of the output; and second, the response functions combine additively. This means that if we apply a perturbation of the form:

$h(t')=k_1\delta(t'-t_1) + k_2\delta(t'-t_2) +k_3\delta(t'-t_3)$

We expect a response of the form:

$y(t)=k_1\chi(t-t_1) + k_2\chi(t-t_2) +k_3\chi(t-t_3)$
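This superposition property is easy to verify numerically as well. Here is a sketch, again using an arbitrary damped-oscillator response function, with three impulses of made-up strengths and arrival times:

```python
import numpy as np

t = np.linspace(0, 20, 2000)
dt = t[1] - t[0]
chi = np.exp(-t) * np.sin(5 * t)   # hypothetical response function

# Three impulses with (arbitrary) strengths k_i and arrival times t_i
strengths = [1.0, 0.5, 2.0]
times = [2.0, 6.0, 11.0]

h = np.zeros_like(t)
for k, t0 in zip(strengths, times):
    h[int(round(t0 / dt))] += k / dt      # delta of weight k at t = t0

# Full response from the convolution
y = np.convolve(chi, h)[:len(t)] * dt

# By linearity, this equals a sum of shifted, scaled copies of chi
y_expected = np.zeros_like(t)
for k, t0 in zip(strengths, times):
    n0 = int(round(t0 / dt))
    y_expected[n0:] += k * chi[:len(t) - n0]

assert np.allclose(y, y_expected)
```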

This is most easily grasped (at least for me!) graphically in the following way:

One can see here that the response to the three impulses just add to give the total response. Finally, let’s consider what happens when the input is some sort of continuous function. One can imagine a continuous function as being composed of an infinite number of discrete points like so:

Now, the output for the discretized input function can be expressed as follows:

$y(t) = \sum_{n=-\infty}^{\infty} [\chi(t-n\epsilon_{t'})][h(n\epsilon_{t'})][\epsilon_{t'}]$

This basically amounts to saying that we can treat each point on the function as a delta-function or impulse function. The strength of each “impulse” in this scenario is quantified by the area under the curve. Each one of these areal slivers gives rise to an output function. We then add up the outputs from each of the input points, and this gives us the total response $y(t)$ (which I haven’t plotted here). If we take the limit $\epsilon_{t'} \rightarrow 0$, we get back the following equation:

$y(t) = \int_{-\infty}^{\infty} \chi(t-t')h(t') dt'$
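One can watch the discretized sum converge to this integral by shrinking $\epsilon_{t'}$. A small sketch, with an arbitrary causal $\chi$ and a Gaussian input pulse chosen purely for illustration:

```python
import numpy as np

def response_at(t, eps):
    """Riemann-sum version of y(t) = \int chi(t - t') h(t') dt' on the
    grid t' = n*eps, for a causal exponential chi and a Gaussian input."""
    chi = lambda s: np.where(s >= 0, np.exp(-s), 0.0)  # hypothetical chi
    h = lambda s: np.exp(-(s - 3.0) ** 2)              # smooth input pulse
    n = np.arange(int(-10 / eps), int(10 / eps) + 1)
    tp = n * eps                                       # the points t' = n*eps
    return np.sum(chi(t - tp) * h(tp)) * eps

coarse = response_at(5.0, eps=0.5)
fine = response_at(5.0, eps=0.01)
reference = response_at(5.0, eps=0.0001)

# The Riemann sum approaches the convolution integral as eps -> 0
assert abs(fine - reference) < abs(coarse - reference)
```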

This kind of picture is helpful in thinking about the convolution integral in general, not just in the context of linear response theory.

Many experiments, especially scattering experiments, measure a quantity related to the imaginary part of the Fourier-transformed response function, $\chi''(\omega)$. One can then use a Kramers-Kronig transform to obtain the real part and reconstruct the temporal response function $\chi(t-t')$. An analogous procedure can be done to obtain the real-space response function from the momentum-space one.
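As a minimal numerical sketch of the frequency-domain statement: take the damped-harmonic-oscillator susceptibility $\chi(\omega) = 1/(\omega_0^2 - \omega^2 - i\gamma\omega)$ (an illustrative choice, not from any particular experiment), feed only its imaginary part into the Kramers-Kronig relation $\chi'(\omega) = \frac{2}{\pi}\,P\!\int_0^\infty \frac{\omega'\chi''(\omega')}{\omega'^2-\omega^2}\,d\omega'$, and check that the real part is recovered:

```python
import numpy as np

w0, gamma = 1.0, 0.3            # oscillator parameters (illustrative)

def chi(w):
    """Damped-oscillator response: chi = 1 / (w0^2 - w^2 - i*gamma*w)."""
    return 1.0 / (w0**2 - w**2 - 1j * gamma * w)

# Dense grid for the integration variable w'
wp = np.linspace(0, 50, 500001)
dw = wp[1] - wp[0]
chi2 = chi(wp).imag             # the "measured" imaginary part chi''

def kk_real(w):
    """Kramers-Kronig: chi'(w) = (2/pi) P int w' chi''(w') / (w'^2 - w^2) dw'.
    Evaluating at w offset from the grid sidesteps the singularity, since
    grid points symmetric about w' = w cancel in the principal value."""
    return (2 / np.pi) * np.sum(wp * chi2 / (wp**2 - w**2)) * dw

# Evaluate halfway between grid points so w'^2 - w^2 never vanishes
w_test = 0.7 + dw / 2
assert abs(kk_real(w_test) - chi(w_test).real) < 1e-2
```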

Note: I will be taking some vacation time for a couple weeks following this post and will not be blogging during that period.

## Paradigm Shifts and “The Scourge of Bibliometrics”

Yesterday, I attended an insightful talk by A.J. Leggett at the APS March Meeting entitled “Reflection on the Past, Present and Future of Condensed Matter Physics”. The talk was interesting in two regards. Firstly, he referred to specific points in the history of condensed matter physics that resulted in (Kuhn-type) paradigm shifts in our thinking about condensed matter. Of course these paradigm shifts were not as violent as special relativity or quantum mechanics, so he deemed them “velvet” paradigm shifts.

This list, which he acknowledged was personal, consisted of:

1. Landau’s theory of the Fermi liquid
2. BCS theory
3. Renormalization group
4. Fractional quantum hall effect

Notable absentees from this list were superfluidity in 3He, the integer quantum hall effect, the discovery of cuprate superconductivity and topological insulators. He argued that these latter advances did not result in major conceptual upheavals.

He went on to elaborate the reasons for these velvet revolutions, which I enumerate to correspond to the list above:

1. Abandonment of microscopic theory in favor of Landau parameters, relating experimental properties to one another with experimental input
2. Use of an effective low-energy Hamiltonian to describe phase of matter
3. Concept of universality and scaling
4. Discovery of quasiparticles with fractional charge

It is informative to think about condensed matter physics in this way, as it demonstrates the conceptual advances that we almost take for granted in today’s work.

The second aspect of his talk that resonated strongly with the audience was what he dubbed “the scourge of bibliometrics”. He told the tale of his own formative years as a physicist. He published one single-page paper for his PhD work. Furthermore, once appointed as a lecturer at the University of Sussex, his job was to be a lecturer and teach from Monday through Friday. If he did this well, it was considered a job well done. If research was something he wanted to partake in as a side project, he was encouraged to do so. He discussed how this atmosphere allowed him to develop as a physicist, without the requirement of publishing papers for career advancement.

Furthermore, he claimed, because of the current focus on metrics, burgeoning young scientists are now encouraged to seek out problems that they can solve in a time frame of two to three years. He saw this as a terrible trend. While it is often necessary to complete short-term projects, it is also important to think about problems that one may be able to solve in, say, twenty years, or maybe even never. He claimed that this is what is meant by doing real science — jumping into the unknown. In fact, he asserted that if he were to give any advice to graduate students, postdocs and young faculty in the audience, it would be to try to spend about 20% of one’s time committed to some of these long-term problems.

This raises a number of questions in my mind. It is well-acknowledged within the community and even the blogosphere that the focus on publication number and short-termism within the condensed matter physics community is detrimental. Both Ross McKenzie and Doug Natelson have expressed such sentiments numerous times on their blogs as well. From speaking to almost every physicist I know, this is a consensus opinion. The natural question to ask then is: if this is the consensus opinion, why is the modern climate the way it is?

It seems to me like part of this comes from the competition for funding among different research groups and funding agencies needing a way to discriminate between them. This leads to the widespread use of metrics, such as h-indices and publication number, to decide whether or not to allocate funding to a particular group. This doesn’t seem to be the only reason, however. Increasingly, young scientists are judged for hire by their publication output and the journals in which they publish.

Luckily, the situation is not all bad. Because so many people openly discuss this issue, I have noticed that there is a certain amount of push-back from individual scientists. On my recent postdoc interviews, the principal investigators were most interested in what I was going to bring to the table rather than perusing my publication list. I appreciated this immensely, as I had spent a large part of my graduate years pursuing instrumentation development. Nonetheless, I still felt a great deal of pressure to publish papers towards the end of graduate school, and it is this feeling of pressure that needs to be alleviated.

Strangely, I often find myself in the situation of working despite the powers that be, rather than being encouraged by them. I highly doubt that I am the only one with this feeling.

## The Good, the Bad and the Ugly of STEM

There were a couple interesting articles in the New York Times in the past couple weeks that caught my eye. The first article, linked here, is about the government (in particular Governor Matt Bevin of Kentucky) trying to get more students to obtain college degrees in STEM fields as opposed to degrees in the humanities. This would be done by reducing or completely cutting the financial aid for some humanities/social science majors. Strangely, the article singled out French literature on more than one occasion as an example of a seemingly frivolous humanities degree.

Before I continue, let me reveal some of my biases here. I had originally chosen to major in comparative literature for my undergraduate degree and only decided to switch fields to physics after two and a half years as an undergraduate. This decision was made certain after I took my first course in literary theory (ick!!).

With that preamble, I can safely say that I greatly value a broad liberal arts education. The study of subjects like philosophy, history, linguistics and literature make us more culturally and morally aware, make us more open-minded and generally make us richer citizens. Furthermore, people are more likely to succeed in a field where their strongest motivations lie. They should not feel discouraged from pursuing these ideals. They already know that they are likely to make significantly less money than STEM majors over a lifetime, yet they choose to pursue those fields nonetheless. Overall, I don’t necessarily see financially biasing STEM fields as harmful, but we must be aware of the extent to which this is done. The humanities are important, and the perspective they bring should not be underestimated.

The second article, linked here, concerned sexual harassment in the STEM fields. Though the data on this is sparse, the anecdotal evidence suggests that this happens more often than we’d like to think. The article is worth reading, and one wonders whether we could learn something from humanities departments with regards to this matter.

Brian Greene was on the Late Show with Stephen Colbert a couple weeks ago to explain the LIGO detection of gravitational waves. He did a great job in simplifying the experiment and explaining the ideas behind the discovery and its importance.

## Private Sector Careers from a Physics PhD

There is an interesting document that was put out by the American Institute of Physics recently about the careers of physics PhDs who had decided to join the private sector.

One interesting note from this document concerns the question these physicists were asked: if you could go back in time, would you still accept your postdoc?

Most physicists in the private sector answered in the affirmative to this question. Perhaps a postdoc is not a bad idea even for those people who intend to later take a job in a company or elsewhere. Below is a table demonstrating these statistics with a more precise breakdown:

There are also numbers on salaries across different fields in the private sector. It is worth looking through the document if one is considering joining the private sector.

## Perception

I’ve long had an interest in those magic-eye images that I used to look at (and could never solve!) when I was a kid. These are images that look like a regular tiling, but turn out to contain some sort of embedded image when viewed in a certain manner. Here is one of these for instance:

An autostereogram of a lovely butterfly

Magic-eye images are actually part of a much larger set of images known as stereograms. Stereograms were discovered in 1959 by Bela Julesz, a scientist at Bell Labs, who invented random dot stereograms to study depth perception. Humans are particularly adept at depth perception because we have two eyes that are horizontally separated from one another. This allows us to triangulate, and our brain turns this information into depth. In fact, our depth perception is vastly superior to our ability to discern lateral displacements. Take a look at this image for instance:

If one looks at the image normally, it takes a while to figure out which tiles (i.e. black, pink or orange) are spaced further apart. However, if one looks at the image stereoscopically (i.e. how you would look at a magic-eye image), then one can immediately tell that from back to front, we have black, orange then pink, indicating the differences in spacing.

What one is doing when one views the tiling above stereoscopically, is that the left eye is seeing one element of the pattern while the right is seeing another. Since they are not focused on the same element, but the elements are physically the same, the brain is tricked into triangulating distance, resulting in the perception of depth.

Let’s now return to random dot stereograms. They are probably the easiest kinds of stereograms to generate. Here is an image of a random-dot stereogram that I made in MS Paint in about a minute:

Square

The idea behind making a random dot stereogram is outlined well at the bottom of this webpage. Again, a lateral movement results in the perception of depth because one is tricking the brain, which is using triangulation to calculate depth.
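For anyone who wants to go beyond MS Paint, a two-image random-dot stereogram of this sort takes only a few lines to generate. The sketch below (sizes and shift are arbitrary) copies a field of random dots and displaces a central square horizontally, which is exactly what encodes the depth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Left image: pure random black/white dots
left = rng.integers(0, 2, size=(200, 200))

# Right image: identical, except a central square is shifted
# horizontally by a few pixels; the shift encodes the depth
right = left.copy()
shift = 4
region = left[60:140, 60:140]
right[60:140, 60 - shift:140 - shift] = region
# Refill the strip uncovered on the right edge with fresh random dots
right[60:140, 140 - shift:140] = rng.integers(0, 2, size=(80, shift))

# Viewed stereoscopically, the shifted square appears to float
# above (or below) the random background
assert not np.array_equal(left, right)
assert np.array_equal(right[60:140, 56:136], left[60:140, 60:140])
```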

There are other types of stereograms as well. One is the single-image random dot stereogram (SIRDS). This type of stereogram is a little more sophisticated than the previous ones, and you can read about how to generate them in this paper (pdf!). Here is an example of a SIRDS:

Single-image random dot stereogram with an embedded annulus

If you are having trouble viewing the embedded images, I’m sure you’re not alone. However, let me prove to you that there really is something in there. The power spectrum (or power spectral density) of a random signal is known to be a constant. This is because there is no correlation between the random pixels in the image. Below is an image of random dots and its spectral density:

Random dots (left) and their power spectrum (right)
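The flatness of the spectrum is easy to check numerically. Here is a sketch that averages the horizontal power spectrum of a random dot image over rows (image size and the flatness threshold are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
dots = rng.integers(0, 2, size=(256, 256)).astype(float)
dots -= dots.mean()                      # remove the DC offset

# Power spectral density along the horizontal direction,
# averaged over rows to reduce statistical noise
power = np.abs(np.fft.rfft(dots, axis=1)) ** 2
avg = power.mean(axis=0)[1:]             # drop the zero-frequency bin

# White noise has a flat spectrum, so the low- and high-frequency
# halves should carry roughly the same average power
lo = avg[: len(avg) // 2].mean()
hi = avg[len(avg) // 2 :].mean()
assert abs(lo - hi) / (lo + hi) < 0.1
```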

However, look what happens when I look at the power spectrum for the SIRDS with the embedded annulus from above:

Not-so-random dots (left) and their power spectrum, demonstrating the non-randomness (right)

While this is not a “proof” that there is an image embedded, it suggests that there is some sort of periodicity in the supposedly random dots. If you look hard enough, you may even be able to see this. Note that the periodicity is in the horizontal direction only; the vertical direction is indeed quite random.

Now that we know the image is non-random, is there a way to reveal the embedded image? Indeed, there is. We can exploit the repeating pattern. We can take two identical copies of the not-so-random dots, put them on top of each other, and subtract the pixel intensity of one image from the other. Of course, this alone would just yield a black figure. However, when one starts to slide one image over the other (translating while subtracting!), the image reveals itself. Below is the solution for the annulus:

Annulus solution
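The translate-and-subtract trick can be demonstrated end to end in code. The sketch below builds a toy wallpaper-style stereogram (a horizontally repeating random strip whose period changes inside a hidden square, standing in for the annulus image above), then slides a copy by one period while subtracting: the periodic background cancels to black and the hidden region lights up.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, p, d = 120, 320, 32, 4        # period p; depth shift d inside the square

# Repeat a random seed strip with period p across the whole image...
img = np.zeros((H, W))
img[:, :p] = rng.random((H, p))
for x in range(p, W):
    img[:, x] = img[:, x - p]

# ...except inside a hidden square, where the period is p - d
inside = np.zeros((H, W), bool)
inside[30:90, 120:220] = True
for x in range(p, W):
    rows = inside[:, x]
    img[rows, x] = img[rows, x - (p - d)]

# "Translate while subtracting": slide a copy by p and take the difference
diff = np.abs(img[:, p:] - img[:, :-p])

# The periodic background cancels exactly; the hidden square does not
assert diff[10, 50] == 0                      # a background pixel
assert diff[60:80, 150:200].mean() > 0.01     # inside the hidden square
```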

I had actually posted a solution to the autostereogram on the Wolfram Demonstrations project website a little while ago, where you can download the very simple code if you want to play with it.