Monthly Archives: July 2016

Ruminations on Raman

The Raman effect concerns the inelastic scattering of light from molecules, liquids, or solids. Brian has written a post about it previously, and it is worth reading. Its use today is so commonplace that one almost forgets it was discovered back in the 1920s. As the story goes (whether it is apocryphal or not, I do not know), C.V. Raman became entranced by the question of why the ocean appeared blue while on a ship back from London to India in 1921. He apparently was not convinced by Rayleigh’s explanation that it was just the reflection of the sky.

When Raman got back to Calcutta, he began studying light scattering from liquids almost exclusively. Raman experiments nowadays are pretty much always undertaken with a laser. Obviously, Raman did not initially do this (the laser was not invented until 1960). Well, you might be thinking, he must therefore have conducted his experiments with a mercury lamp (or something similar). In fact, this is not correct either. Initially, Raman actually used sunlight!

If you have ever conducted a Raman experiment, you’ll know how difficult it can be to obtain a spectrum, even with a laser. Only about one in a million incident photons (and sometimes far fewer) is actually scattered with a change in wavelength! So for Raman to have originally conducted his experiments with sunlight is a truly remarkable achievement. It required patience, exactitude and a great deal of technical ingenuity to focus the sunlight.

Ultimately, Raman wrote up his results and submitted them to Nature magazine in 1928. Although these results were based on sunlight, he had by then obtained his first mercury lamp to begin more quantitative studies. The article made big news because it was a major result confirming the new “quantum theory”, but Raman immediately recognized the potential of this effect as a probe of matter as well. After many years of studying the effect, he came to realize that the reason water is blue is basically the same as the reason the sky is blue: Rayleigh scattering goes as 1/λ⁴.
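The 1/λ⁴ scaling is easy to check numerically. Here is a quick Python sketch comparing representative blue and red wavelengths (the specific values are my own choices for illustration, not from Raman’s story):

```python
# Rayleigh scattering intensity scales as 1/lambda^4, so shorter (bluer)
# wavelengths are scattered much more strongly than longer (redder) ones.
blue_nm = 450.0  # representative blue wavelength (illustrative choice)
red_nm = 650.0   # representative red wavelength (illustrative choice)

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered ~{ratio:.1f}x more strongly than red")  # ~4.4x
```

This factor of roughly four is why scattered sunlight, whether from air or from water, looks blue.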

Readers of this blog will notice that I have written about Raman scattering in several different contexts on this site: in measuring the Damon-Eshbach mode and the Higgs amplitude mode in superconductors, in illuminating the nature of LO-TO splitting in polar insulators, and in measuring unusual collective modes in Kondo insulators. These examples demonstrate its power as a probe of condensed matter even in the present day.

On this blog, one of the major themes I’ve tried to highlight is the technical ingenuity of experimentalists to observe certain phenomena. I find it quite amazing that the Raman effect had its rather meager origins in the use of sunlight!

Goodhart’s Law Gone Wrong…

I didn’t envision this when I first wrote the previous post…

Conditional Risk

Goodhart’s Law and Citation Metrics

According to Wikipedia, Goodhart’s law colloquially states that:

“When a measure becomes a target, it ceases to be a good measure.”

It was originally formulated as an economics principle, but has been found to be applicable in a much wider variety of circumstances. Let’s take a look at a few examples to understand what this principle means.

Police departments are often graded using crime statistics. In the US in particular, a combined index of eight categories constitutes a “crime index”. In 2014, Chicago magazine reported that the huge crime reduction seen in Chicago was merely due to the reclassification of certain crimes. Here is the summary plot they showed:

ChicagoCrime

Image reprinted from Chicago magazine

In effect, some felonies were labeled misdemeanors, etc. The manipulation of the “crime index” corrupted the way the police did their jobs.

Another famous example of Goodhart’s law is Google’s search algorithm, known as PageRank. Crudely, PageRank works in the following way as described by Wikipedia:

“PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.”

Knowing how PageRank works has obviously led to its manipulation. People seeking greater visibility and a higher ranking in Google searches have used several schemes to raise their rating. One of the most popular schemes is to post links to one’s own website in the comments sections of highly ranked websites in order to inflate one’s own ranking. You can read a little more about this and other schemes here (pdf!).
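The idea quoted above can be sketched with a few lines of power iteration. The tiny link graph and damping factor below are hypothetical, chosen only to illustrate the principle; this is not Google’s actual implementation:

```python
# A minimal power-iteration sketch of the PageRank idea: a page's score is a
# damped sum of the scores of the pages linking to it, with each linker's
# score split evenly across its outgoing links.
links = {            # page -> pages it links to (hypothetical graph)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
damping = 0.85       # conventional damping factor
rank = {p: 1.0 / len(links) for p in links}  # start with uniform scores

for _ in range(50):  # iterate until the scores settle
    new = {p: (1 - damping) / len(links) for p in links}
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)
        for target in outgoing:
            new[target] += damping * share
    rank = new

print(rank)  # "c" ends up highest: it receives links from both "a" and "b"
```

The manipulation schemes described above work precisely because, in this model, every inbound link adds to a page’s score.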

With the increased use of citation metrics in the academic community, it should come as no surprise that they, too, can become corrupted. Papers increasingly have many authors, since every author on a paper takes equal credit when the h-index is used as a yardstick. Many scientists also spend time emailing their colleagues to urge them to cite one of their papers (I only know of this happening anecdotally).
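For concreteness, here is a minimal sketch of how the h-index is computed (the function name and the citation counts are my own illustration): it is the largest h such that at least h of a researcher’s papers have at least h citations each. Note that nothing in the calculation distinguishes a sole author from one of fifty.

```python
# h-index: sort citation counts in descending order; h is the largest rank i
# at which the i-th paper still has at least i citations.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical researcher with six papers:
print(h_index([25, 8, 5, 4, 3, 1]))  # -> 4
```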

Since the academic example hits home for most of the readers of this blog, let me try to formulate a list of the beneficial and detrimental consequences of bean-counting:

Advantages:

  1. One learns how to write a technical paper early in one’s career.
  2. It can motivate some people to be more efficient with their time.
  3. It provides some sort of metric by which to measure scientific competence (though it can be argued that any currently existing index is wholly inadequate, and will always be inadequate in light of Goodhart’s law!).
  4. Please feel free to share any ideas in the comments section, because I honestly cannot think of any more!

Disadvantages:

  1. It makes researchers focus on short-term problems instead of long-term, moon-shot kinds of problems.
  2. The community loses good scientists because they are deemed as not being productive enough. A handful of the best students I came across in graduate school left physics because they didn’t want to “play the game”.
  3. It rewards those who may be more career-oriented and focus on short-term science, leading to an overpopulation of these kinds of people in the scientific community.
  4. It may lead scientists to cut corners and even go as far as to falsify data. I have addressed some of these concerns before in the context of psychology departments.
  5. It provides an incentive to flood the literature with papers of low quality. It is no secret that the number of publications has ballooned in the last couple of decades. Though quality is hard to quantify, I cannot imagine that scientists have been able to publish more without sacrificing quality in some way.
  6. It takes the focus of scientists’ jobs away from science, and makes scientists concerned with an almost meaningless number.
  7. It leads authors to overstate the importance of their results in an effort to publish in higher-profile journals.
  8. It does not value potential. Researchers who would excel in the later years of their careers, but not the earlier ones, are undervalued. Late bloomers therefore go under-appreciated.

Just by examining my own behavior with reference to the lists above, I can say that my actions have been altered by the existence of citation and publication metrics. Especially towards the end of graduate school, I started pursuing shorter-term problems so that they would result in publications. Obviously, I am not the only one who suffers from this syndrome. The best one can do in this scenario is to work on longer-term problems on the side while producing a steady stream of papers on shorter-term projects.

In light of the two-slit experiment, it seems ironic that physicists are altering their behavior due to the fact that they are being measured.

Approximate Humor

Approximations can be powerful, but we should be careful not to generalize too much…

comic

Transistors, Logic and Abstraction

A general theme of science that manifests itself in many different ways is the concept of abstraction. What this means is that one can understand something at a higher level without having to understand a buried lower level. For instance, one can understand the theory of evolution based on natural selection (higher level) without having to first comprehend quantum mechanics (lower level), even though the higher level must be consistent with the lower one.

To my mind, this idea is most aptly demonstrated with transistors, circuits and logic. Let’s start at the level of transistors and build a NAND gate in the following way:

NANDCircuit

NAND Circuit

The NAND gate has the following truth table:

NANDTruthTable

If you can’t immediately see why the transistor circuit above yields the corresponding truth table, it helps to appeal to the “water analogy”, where one imagines the current flow as water. Imagine that water is flowing from Vcc. If A and B are both high, the “dams” (transistors) are open, the water flows down to ground, and X is low. If either A or B is low (dam closed), the water flows to X instead, and X is high.

Why did I choose the NAND circuit instead of other logic gates? It turns out that all other logic gates can be built from the NAND alone, so it makes sense to choose it as a fundamental unit.
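The NAND truth table can be captured in a couple of lines. A minimal Python sketch (the function name `nand` is my own choice):

```python
# NAND gate: the output is low only when both inputs are high.
def nand(a, b):
    return 0 if (a and b) else 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
# 0 0 -> 1
# 0 1 -> 1
# 1 0 -> 1
# 1 1 -> 0
```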

Let’s now abstract away the circuit and draw the NAND gate like so:

NANDgate

NAND Gate

Having abstracted away the transistor circuit, we can now play with this NAND gate and build other logic gates out of it. For instance, let’s think about how to build an OR gate. Well, an OR gate is just a NOT gate applied to the two inputs of a NAND gate. Therefore, we just need to build a NOT gate. One way to do this would be:

NOTgate

NOT from NAND

Notice that whenever A is high, X is low and vice versa. Let us now abstract this circuit away and draw the NOT gate as:

 

NOTabstract

NOT Gate
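In code, the same trick is just a NAND with both of its inputs tied together, mirroring the circuit above. A minimal Python sketch (the function names are my own choice):

```python
# NAND gate: low only when both inputs are high.
def nand(a, b):
    return 0 if (a and b) else 1

# NOT built from NAND alone: feed the same signal into both inputs.
def not_(a):
    return nand(a, a)

print(not_(0), not_(1))  # 1 0
```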

And now the OR gate can be made in the following way:

ORGate

OR from NOT and NAND

 

and abstracted away to look like:

ORAbstract

OR Gate

Now, although building an OR gate from NAND gates is totally unnecessary, and it actually would just be easier to do this by working with the transistors directly, one can already start to see the power of abstracting away the underlying circuit. We can just work at higher levels, build the component we want and put the transistors back in at the end. Our understanding of what is going on is not compromised in any way and is in fact probably enhanced since we don’t have to think about the water analogy any more!
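The OR-from-NANDs construction can likewise be sketched in a few lines of Python (again, the function names are my own choice):

```python
# NAND gate: low only when both inputs are high.
def nand(a, b):
    return 0 if (a and b) else 1

# NOT from NAND: tie both inputs together.
def not_(a):
    return nand(a, a)

# OR from NOT and NAND: a NAND of the inverted inputs is low only when
# both original inputs are low, which is exactly OR.
def or_(a, b):
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", or_(a, b))
# 0 0 -> 0; all other input pairs -> 1
```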

Let’s now work with an example that really is much easier at the level of NANDs and NOTs, to demonstrate the power of this technique. Let’s make what is called a multiplexer. A multiplexer is a three-input, one-output circuit with the following truth table:

MultiplexorTruthTabl

Multiplexer Truth Table

Notice that in this truth table, the X serves as a selector. When X is 0, it selects B as the output (Y), whereas when X is 1, it selects A as the output. The multiplexer can be built in the following way:

MultiplexorCircuit

Multiplexer from NOT and NANDs

and is usually abstracted in the following way:

MultiplexorAbstract

Multiplexer Gate

At this level, it is no longer a simple task to come up with a transistor circuit that will operate as a multiplexer, but it is relatively straightforward at the level of NANDs and NOTs. Now, armed with the multiplexer, NAND, NOT and OR gates, we can build even more complex circuit components. In fact, doing this, one will eventually arrive at the hardware for a basic computer. Therefore, next time you’re looking at complex circuitry, know that the builders used abstraction to think about the meaning of the circuit and then put all the transistors back in later.
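The multiplexer can be sketched in the same style. The code below (function names my own choice) uses one standard NAND realization consistent with the truth table above, Y = NAND(NAND(A, X), NAND(B, NOT X)):

```python
# NAND gate: low only when both inputs are high.
def nand(p, q):
    return 0 if (p and q) else 1

# NOT from NAND: tie both inputs together.
def not_(p):
    return nand(p, p)

# Multiplexer: when x is 1 the output follows a; when x is 0 it follows b.
def mux(x, a, b):
    return nand(nand(a, x), nand(b, not_(x)))

print(mux(1, a=1, b=0))  # 1 (selects a)
print(mux(0, a=1, b=0))  # 0 (selects b)
```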

I’ll stop building circuits here; I think the idea I’m trying to communicate is becoming increasingly clear. We can work at a certain level, abstract it away and then work at a higher level. This concept is important in every field of science, and it is even true in particle physics. In condensed matter physics, we use it every day to think about what happens in materials: abstracting away complex electron-electron interactions into a quasiparticle using Fermi liquid theory, or abstracting away the interactions between the underlying electrons in a superconductor to understand vortex lattices (pdf!).