# Tag Archives: Perspective

## On Scientific Inevitability

If one looks through the history of human evolution, it is striking that humanity has, on several independent occasions and in several different locations, figured out how to produce food, make pottery, write, invent the wheel, domesticate animals, build complex political societies, etc. It is almost as if these discoveries and inventions were an inevitable part of the evolution of humans. More controversially, one may extend such arguments to include the development of science, mathematics, medicine and many other branches of knowledge (more on this point below).

The interesting part about these ancient inventions is that, because they originated in different parts of the world, the specifics varied geographically. For instance, native South Americans domesticated llamas, cultures in Southwest Asia (today’s Middle East) domesticated sheep, cows, and horses, and the ancient Chinese domesticated chickens, among other animals. Different cultures domesticated different animals largely because those animals were native to the regions where they were domesticated.

Now, there are also many instances in human history where inventions were not made independently, but instead diffused geographically. For instance, writing was developed independently in at least two locations (Mesoamerica and Southwest Asia), but likely diffused from Southwest Asia into Europe and other neighboring regions. While the peoples in these other places would likely have discovered writing on their own in due time, the diffusion from Southwest Asia made this unnecessary. These points are well made in Jared Diamond’s excellent book Guns, Germs, and Steel.

At this point, you are probably wondering what I am trying to get at here, and it is no more than the following musing. Consider the following thought experiment: if two different civilizations were geographically isolated without any contact for thousands of years, would they both have developed a similar form of scientific inquiry? Perhaps the questions asked and the answers obtained would have been slightly different, but my naive guess is that given enough time, both would have developed a process that we would recognize today as genuinely scientific. Obviously, this thought experiment is not possible, which makes it difficult to say to what extent the development of science was inevitable, but I would consider it plausible, even likely.

Because what we would call “modern science” was devised after the invention of the printing press, the process of scientific inquiry likely “diffused” rather than being invented independently in many places. The printing press so accelerated the pace of information transfer that geographically separated areas never had the chance to “invent” science on their own.

Today, we can communicate globally almost instantly and information transfer across large geographic distances is easy. Scientific communication therefore works through a similar diffusive process, through the writing of papers in journals, where scientists from anywhere in the world can submit papers and access them online. Looking at science in this way, as an almost inevitable evolutionary process, downplays the role of individuals and suggests that despite the contribution of any individual scientist, humankind would have likely reached that destination ultimately anyhow. The timescale to reach a particular scientific conclusion may have been slightly different, but those conclusions would have been made nonetheless.

There are some scientists out there who have contributed massively to the advancement of science and their absence may have slowed progress, but it is hard to imagine that progress would have slowed very significantly. In today’s world, where the idea of individual genius is romanticized in the media and further so by prizes such as the Nobel, it is important to remember that no scientist is indispensable, no matter how great. There were often competing scientists simultaneously working on the biggest discoveries of the 20th century, such as the theories of general relativity, the structure of DNA, and others. It is likely that had Einstein or Watson, Crick and Franklin not solved those problems, others would have.

So while the work of this year’s scientific Nobel winners is without a doubt praise-worthy and the recipients deserving, it is interesting to think about such prizes in this slightly different and less romanticized light.

## Research Topics and the LAQ Method

As a scientific researcher, the toughest part of the job is to come up with good scientific questions. A large part of my time is spent looking for such questions and every once in a while, I happen upon a topic that is worth spending the time to investigate further. The most common method of generating such questions is to come up with a lot of them and then sift through the list to find some that are worth pursuing.

One of the main criteria I use for selecting such questions/problems is what I refer to as the “largest answerable question” or LAQ method. Because the lifetime of a researcher is limited by the human lifespan, it is important to try to answer the largest answerable questions that fall within the window of your ability. This selection process is therefore tied to one’s self-confidence and takes a fair amount of introspection. I imagine the LAQ method looking a little bit like this:

One starts by asking some really general questions about a scientific topic and eventually proceeds to a more specific, concrete, answerable question. If the question is answerable, its answer will usually have ramifications that are broadly felt by many in the community.

I imagine that most readers of this blog will have no trouble coming up with examples of success stories where scientists employed the LAQ method. Just about every famous scientist you can think of has probably, at least to some extent, used this method fruitfully. However, there are counterexamples as well, where important questions are asked by one scientist but answered by others.

I am almost done reading Erwin Schrodinger’s book What is Life?, which was written in 1943. In it, Schrodinger asks deep questions about genetics and attempts to put physical constraints on information-carrying molecules (DNA was not known at the time to be the site of genetic information). It is an entertaining read in two regards. Firstly, Schrodinger, at the time of writing, introduces to physicists some of the most pertinent and probing questions in genetics. The book was, after all, one that was read by both Watson and Crick before they set about discovering the structure of DNA. Secondly, and more interestingly, Schrodinger gets almost everything he tries to answer wrong! For instance, he suggests that quantum mechanics may play a role in causing a mutation of certain genes. This is not to say that his reasoning was not sound, but at the time of writing, there were just not enough experimental constraints on some of the assumptions he made.

Nonetheless, I applaud Schrodinger for writing the book and exposing his ignorance. Even though he was not able to answer many of the questions himself, he was an inspiration to many others who eventually were able to shed light on many of the questions posed in the book. Here is an example where the LAQ method fails, but still pushes science forward in a tangible way.

What are your strategies for coming up with good research questions? I have to admit that while the LAQ method is useful, I sometimes pursue problems purely because I find them stunning, and no other reason is needed!

## Jahn-Teller Distortion and Symmetry Breaking

The Jahn-Teller effect occurs in molecular systems, as well as solid state systems, where a molecular complex distorts, resulting in a lower symmetry. As a consequence, the energy of certain occupied molecular states is reduced. Let me first describe the phenomenon before giving you a little cartoon of the effect.

First, consider, just as an example, a manganese atom with valence $3d^4$, surrounded by an octahedral cage of oxygen atoms like so (image taken from this thesis):

The electrons are arranged such that the lower triplet of orbital states each contain a single “up-spin”, while the higher doublet of orbitals only contains a single “up-spin”, as shown on the image to the left. This scenario is ripe for a Jahn-Teller distortion, because the electronic energy can be lowered by splitting both the doublet and the triplet as shown on the image on the right.

There is a very simple but quite elegant problem one can solve to describe this phenomenon at a cartoon level: the two-dimensional infinite square well with adjustable wall lengths. Solving the Schrodinger equation for this well gives energies of the form:

$E_{i,j} = \frac{h^2}{8m}\left(i^2/a^2 + j^2/b^2\right)$, where $i,j$ are positive integers.

Here, $a$ and $b$ denote the lengths of the sides of the 2D well. Since only the quantity in the brackets determines the ordering of the energy levels, let me hold the area of the well, $A = ab$, fixed and define $\gamma = a/b$, so that $a^2 = \gamma A$ and $b^2 = A/\gamma$. Up to a constant prefactor, the energy then takes the form:

$E \sim i^2/\gamma + \gamma j^2$

Note that $\gamma$ is effectively an anisotropy parameter, giving a measure of the “squareness” of the well. Now, let’s consider filling up the levels with spinless electrons that obey the Pauli principle. These electrons will fill up in a “one-per-level” fashion in accordance with Fermi statistics. We can therefore write the total energy of the $N$-fermion problem as:

$E_{tot} \sim \alpha^2/ \gamma + \gamma \beta^2$

where $\alpha^2 = \sum_k i_k^2$ and $\beta^2 = \sum_k j_k^2$ are sums over the quantum numbers of the $N$ occupied levels.
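
Minimizing this expression with respect to $\gamma$ is a one-line calculation that gets used repeatedly below:

$\frac{dE_{tot}}{d\gamma} = -\frac{\alpha^2}{\gamma^2} + \beta^2 = 0 \implies \gamma_{min} = \alpha/\beta$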

Now, all of this has been pretty simple so far, and all that’s really been done is to re-write the 2D well problem in a different way. However, let’s just systematically look at what happens when we fill up the levels. At first, we fill up the $E_{1,1}$ level, where $\alpha^2 = \beta^2 = 1$. In this case, taking the derivative of $E_{tot}$ with respect to $\gamma$ and setting it to zero gives $\gamma_{min} = 1$, and the well is a square.

For two electrons, however, the well is no longer a square! The next electron will fill up the $E_{2,1}$ level and the total energy will therefore be:

$E_{tot} \sim 1/\gamma (1+4) + \gamma (1+1)$,

which gives a $\gamma_{min} = \sqrt{5/2}$!

Why did this breaking of square symmetry occur? In fact, this is very closely related to the Jahn-Teller effect. Since the level is two-fold degenerate (i.e. $E_{2,1} = E_{1,2}$), it is favorable for the 2D well to distort to lower its electronic energy.

Notice that when we add the third electron, we get that:

$E_{tot} \sim 1/\gamma (1+4+1) + \gamma (1+1+4)$

and $\gamma_{min} = 1$ again, and we return to the system with square symmetry! This is also quite similar to the Jahn-Teller problem, where, when all the states of the degenerate levels are filled up, there is no longer an energy to be gained from the symmetry-broken geometry.
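To make the cartoon concrete, here is a minimal numerical sketch of the filling sequence above (the function and variable names are my own, not standard):

```python
import math

# Occupied (i, j) quantum numbers for the fillings discussed above:
# one, two, and three spinless fermions in the 2D well.
fillings = {
    1: [(1, 1)],
    2: [(1, 1), (2, 1)],
    3: [(1, 1), (2, 1), (1, 2)],
}

def gamma_min(levels):
    """Minimize E(gamma) = alpha^2/gamma + gamma*beta^2 over gamma > 0.

    Setting dE/dgamma = -alpha^2/gamma^2 + beta^2 = 0 gives
    gamma_min = alpha/beta.
    """
    alpha_sq = sum(i ** 2 for i, j in levels)  # alpha^2 = sum of i^2
    beta_sq = sum(j ** 2 for i, j in levels)   # beta^2  = sum of j^2
    return math.sqrt(alpha_sq / beta_sq)

for n, levels in fillings.items():
    print(f"N = {n}: gamma_min = {gamma_min(levels):.4f}")
# N = 1: gamma_min = 1.0000   (square)
# N = 2: gamma_min = 1.5811   (sqrt(5/2): distorted)
# N = 3: gamma_min = 1.0000   (square again)
```

The oscillation between square and rectangular optima as levels fill is exactly the Jahn-Teller activity/inactivity pattern described in the text.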

This analogy is made more complete when looking at the following level scheme for different $d$-electron valence configurations, shown below (image taken from here).

The black configurations are Jahn-Teller active (i.e. prone to distortions of the oxygen octahedra), while the red are not.

In condensed matter physics, we usually think about spontaneous symmetry breaking in the context of the thermodynamic limit. What saves us here, though, is that the well will actually oscillate between the two rectangular configurations (i.e. horizontal vs. vertical), preserving the original symmetry! This is analogous to the case of the ammonia ($NH_3$) molecule I discussed in this post.

## The Physicist’s Proof II: Limits and the Monty Hall Problem

As an undergraduate, I was taught the concept of the “physicist’s proof”, a sort of silly concept that was a professor’s attempt to get us students to think a little harder about some problems. Here, I give you a “physicist’s proof” of the famous Monty Hall problem, which (to me!) is easier to think about than the typical Bayesian approach.

The Monty Hall problem, which is based on a TV game show (and named after its host), goes something like this (if you already know the Monty Hall problem, you can skip the paragraphs in italics):

Suppose you are a contestant on a game show where there are three doors and a car behind one of them. You must select the correct door to win the car.

You therefore select one of the three doors. Now, the host of the show, who knows where the car is, opens a different door for you and shows you that there is no car behind that door.

There are two remaining unopened doors — the one you have chosen and one other. Now, before you find out whether or not you have guessed correctly, the host gives you the option to change your selection from the door you initially chose to the other remaining unopened door.

Should you switch or should you remain with your initial selection?

When I first heard this problem, I remember thinking, like most people, that there was a 50/50 chance of the car being behind either door. However, there is a way to convince yourself that this is not so. This is by taking the limit of a large number of doors. I’ll explain what I mean in a second, but let me just emphasize that taking limits is a common and important technique that physicists must master to think about problems in general.

In Einstein’s book, Relativity, he describes using this kind of thinking to point out absurd consequences of Galilean relativity. Einstein imagined himself running away from a clock at the speed of light: in this scenario, the light from the clock would be matching his pace and he would therefore observe the hands of the clock remaining stationary and time standing still. Were he able to run just a little bit faster than the light emanating from the clock, he would see the hands of the clock start to wind backwards. This would enable him to violate causality!  However, Einstein held causality to be a dearer physical truth than Newton’s laws. Special relativity was Einstein’s answer to this contradiction, a conclusion he reached by considering a physical limit.

Now, let us return to the Monty Hall problem. And this time, instead of three doors, let’s think about the limit of, say, a million doors. In this scenario, suppose that you have to choose one door from one million doors instead of just three. For the sake of argument, suppose you select door number 999,983. The host, who knows where the car is, opens all the other doors, apart from door number 17. Should you stick to door 999,983 or should you switch to door 17?

Let’s think about this for a second — there are two scenarios. Either you were correct on your first guess and the car is behind door 999,983, or you were incorrect and the car is behind door 17. When you initially made your selection, the chance of having guessed right was 1/1,000,000 — almost zero! Unless that first guess happened to be right, the host is forced to leave the car’s door closed, and there were 999,999 ways for that first guess to be wrong. In some sense, by opening all the other doors, the host is basically telling you that the car is behind door 17 (there is a 99.9999% chance!).

To me, at least, the million door scenario demonstrates quite emphatically that switching from your initial choice is more logical. For some reason, the three door case appears to be more psychologically challenging, and the probabilities are not as obvious. Taking the appropriate limit of the Monty Hall problem is therefore (at least to me) much more intuitive!
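For readers who prefer simulation to argument, here is a quick Monte Carlo sketch of the game (the function name and defaults are my own invention). It exploits the fact that, since the host opens every door except your pick and one other without ever revealing the car, switching wins exactly when the first guess was wrong:

```python
import random

def monty_hall_win_rate(switch, n_doors=3, trials=200_000, seed=0):
    """Empirical win rate for the Monty Hall game with n_doors doors."""
    rng = random.Random(seed)  # seeded for reproducibility
    wins = 0
    for _ in range(trials):
        car = rng.randrange(n_doors)
        first_pick = rng.randrange(n_doors)
        if switch:
            # The single door the host leaves closed hides the car
            # whenever your first pick missed it.
            wins += (first_pick != car)
        else:
            wins += (first_pick == car)
    return wins / trials

print(monty_hall_win_rate(switch=True))               # ~0.667
print(monty_hall_win_rate(switch=False))              # ~0.333
print(monty_hall_win_rate(switch=True, n_doors=100))  # ~0.99: the limit at work
```

Cranking up `n_doors` shows the win rate for switching approaching 1, just as the million-door argument suggests.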

Especially for those who are soon to take the physics GRE — remember to take appropriate limits, this will often eliminate at least a couple answers!

For completeness, I show below the more rigorous Bayesian method for the three-door case:

Bayes’ theorem says that:

$P(A|B) = \frac{P(B|A) P(A)}{P(B)}$

For the sake of argument, suppose that you select door 3. The host then shows you that there is no car behind door 2. The calculation goes something like this. Below, “car3” translates to “the car was behind door 3” and “opened2” translates to “the host opened door 2”

$P(car3|opened2) = \frac{P(opened2 | car3) P(car3)}{P(opened2)}$

The probabilities in the numerator are easy to obtain: $P(opened2 | car3) = 1/2$ (the host could equally well have opened door 1) and $P(car3) = 1/3$. However, $P(opened2)$ is a little harder to calculate, and it helps to enumerate the scenarios. Given that you have chosen door 3: if the car is behind door 1, the host must open door 2, so the probability is 1; if the car is behind door 3, the host opens door 1 or door 2 with probability 1/2 each; and since the host never reveals the car, if the car is behind door 2 the probability is 0. Considering that each door has a 1/3 prior probability of hiding the car, the denominator becomes:

$P(opened2) = 1/3*(1/2 + 1 + 0) = 1/2$

and hence:

$P(car3|opened2) = \frac{1/2*1/3}{1/2} = 1/3$.

Likewise, the probability that the car is behind door 1 is:

$P(car1|opened2) = \frac{P(opened2 | car1) P(car1)}{P(opened2)}$

which can similarly be calculated:

$P(car1|opened2) = \frac{1*1/3}{1/2} = 2/3$.

It is a bizarre answer, but Bayesian results often are.
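As a sanity check on the enumeration above, the same numbers can be reproduced with exact arithmetic (a small sketch; the bookkeeping scheme is my own):

```python
from fractions import Fraction

# Enumerate the three-door game as conditioned in the text: you pick
# door 3, and the host opens door 2. Each outcome is weighted by
# (prior for car position) x (host's choice rule).
outcomes = []  # list of (car_door, opened_door, probability)
for car in (1, 2, 3):
    prior = Fraction(1, 3)
    if car == 3:
        # Car behind your door: host opens door 1 or 2 with prob 1/2 each.
        outcomes += [(car, 1, prior / 2), (car, 2, prior / 2)]
    else:
        # Otherwise the host must open the single remaining goat door.
        opened = ({1, 2} - {car}).pop()
        outcomes += [(car, opened, prior)]

p_opened2 = sum(p for car, opened, p in outcomes if opened == 2)
p_car3_given = sum(p for car, opened, p in outcomes
                   if opened == 2 and car == 3) / p_opened2
p_car1_given = sum(p for car, opened, p in outcomes
                   if opened == 2 and car == 1) / p_opened2

print(p_opened2, p_car3_given, p_car1_given)  # 1/2 1/3 2/3
```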

## Spot the Difference

A little while ago, I wrote a blog post concerning autostereograms, more commonly referred to as Magic Eye images. These are images that, at first sight, appear to contain nothing but a random pattern. However, looked at in a certain way, a three-dimensional image can be made visible. Below is an example of such an image (taken from Wikipedia):

Autostereogram of a shark

In my previous post about these stereograms, I pointed out that the best way to understand what is going on is to look at a two-image stereogram (see below). Here, the left eye looks at the left image while the right eye looks at the right image, and the brain is tricked into triangulating a distance because the two images are almost the same. The only difference is that part of the image has been displaced horizontally, which makes that part appear like it is at a different depth. This is explained at the bottom of this page, and an example is shown below:

Boring old square

In this post, however, I would like to point out that this visual technique can be used to solve a different kind of puzzle. When I was in middle school, one of the most popular games to play was called Photo-Hunt, essentially a spot-the-difference puzzle. You probably know what I’m referring to, but here is an example just in case you don’t:

The bizarre thing about these images is that if you look at them like you would a Magic Eye image, the differences between the two images essentially “pop out” (or rather, they flicker noticeably). Because each of your eyes is looking at one of the images, your brain is tricked into perceiving a single image at a certain depth. The parts of the two images that are identical fuse at that depth, but your eyes cannot triangulate the differences, so they appear to flicker. I wish I had learned this trick in middle school, when the game was all the rage.

While this may all seem a little silly, I noticed recently, while zoning out during a rather dry seminar, that I could spot very minute defects in TEM images using this technique. Here is an example of an image of a bubble raft (there are some really cool videos of bubble rafts online — see here for instance), where the defects immediately emerge when viewed stereoscopically (i.e. like a Magic Eye image):

Bubble raft image taken from here

I won’t tell you where the defects are, but I will say that there are three quite major ones, which are the ones I’m referring to in the image; they’re quite obvious even when not viewed stereoscopically.

Because so many concepts in solid state physics depend on crystal symmetries and periodicity, I can foresee entertaining myself during many more dry seminars in the future, be it a seminar with tons of TEM images or a wealth of diffraction data. I have even started viewing my own data this way to see if anything immediately jumps out, without any luck so far, but I suspect it is only a matter of time before I see something useful.

## Book Review – The Gene

Following the March Meeting, I took a vacation for a couple weeks, returning home to Bangkok, Thailand. During my holiday, I was able to get a hold of and read Siddhartha Mukherjee’s new book entitled The Gene: An Intimate History.

I have to preface any commentary by saying that prior to reading the book, my knowledge of biology embarrassingly languished at the middle-school level. With that confession aside, The Gene was probably one of the best (and for me, most enlightening) popular science books I have ever read. This is definitely aided by Mukherjee’s fluid and beautiful writing style from which scientists in all fields can learn a few lessons about scientific communication. The Gene is also touched with a humanity that is not usually associated with the popular science genre, which is usually rather dry in recounting scientific and intellectual endeavors. This humanity is the book’s most powerful feature.

Since there are many glowing reviews of the book published elsewhere, I will just list here a few nuggets I took away from The Gene, which hopefully will serve to entice rather than spoil the book for you:

• Mukherjee compares the gene to an atom or a bit, evolution’s “indivisible” particle. Obviously, the gene is physically divisible in the sense that it is made of atoms, but what he means here is that the lower levels can be abstracted away and the gene is the relevant level at which geneticists work.
• It is worth thinking of what the parallel carriers of information are in condensed matter problems — my hunch is that most condensed matter physicists would contend that these are the quasiparticles in the relevant phase of matter.
• Gregor Mendel, whose work is nowadays recognized as giving birth to the entire field of genetics, was not recognized for it while he was alive. It took another 40-50 years for scientists to rediscover his experiments and to see that he had localized, in those pea plants, the indivisible gene. One gets the feeling that his work went uncelebrated during his lifetime because it was far ahead of its time.
• The history of genetics is harrowing and ugly. While the second World War was probably the pinnacle of obscene crimes committed in the name of genetics, humans seem unable to shake off ideas associated with eugenics even into the modern day.
• Through a large part of its history, the field of genetics has had to deal with a range of ethical questions. There is no sign of this trend abating in light of the recent discovery of CRISPR/Cas-9 technology. If you’re interested in learning more about this, RadioLab has a pretty good podcast about it.
• Schrodinger’s book What is Life? has inspired so much follow-up work that it is hard to overestimate the influence it has had on a generation of physicists that transitioned to studying biology in the middle of the twentieth century, including both Watson and Crick.

While I could go on and on with this list, I’ll stop ruining the book for you. I would just like to say that at the end of the book I got the feeling that humans are still just starting to scratch the surface of understanding what’s going on in a cell. There is much more to learn, and that’s an exciting feeling in any field of science.

Aside: In case you missed March Meeting, the APS has posted the lectures from the Kavli Symposium on YouTube, which includes lectures from Duncan Haldane and Michael Kosterlitz among others.

## Discovery vs. Q&A Experiments

When one looks through the history of condensed matter experiment, it is strange to see how many times discoveries were made in a serendipitous fashion (see here for instance). I would argue that most groundbreaking findings were unanticipated. The discoveries of superconductivity by Onnes, the Meissner effect, superfluidity in He-4, cuprate (and high temperature) superconductivity, the quantum Hall effect and the fractional quantum Hall effect were all unforeseen by the very experimentalists that were conducting the experiments! Theorists also did not anticipate these results. Of course, a whole slew of phases and effects were theoretically predicted and then experimentally observed as well, such as Bose-Einstein condensation, the Kosterlitz-Thouless transition, superfluidity in He-3 and the discovery of topological insulators, not to diminish the role of prediction.

For the condensed matter experimentalist, though, this presents a rather strange paradigm. Naively (and I would say that the general public by and large shares this view), science is perceived as working within a question-and-answer framework: you pose a concrete question, and then conduct an experiment to try to answer it. In condensed matter physics, this is often not the case, or at least only loosely the case. There are of course experiments that have been conducted to answer concrete questions — and when they are conducted, they usually end up being beautiful experiments (see here for example). But these kinds of experiments can only be conducted once a field reaches a point where concrete questions can be formulated. For exploratory studies, the questions are often not even clear. I would therefore consider these kinds of Q&A experiments to be the exception rather than the norm.

More often than not, discoveries are made by exploring uncharted territory, entering a space others have not explored before, and tempting fate. Questions are often not concrete but posed in the form, “What if I do this…?”. I know that this makes condensed matter physics sound like it lacks organization, clarity and structure, and that is not totally untrue. Most progress in the history of science did not proceed in a straight line the way textbooks make it seem. When weird particles were popping up all over the place in particle physics in the 1930s and 40s, it was hard to see any organizing principles. Experimentalists were discovering new particles at a rate with which theory could not keep up. Only after a large number of particles had been discovered did Gell-Mann come up with his “Eightfold Way”, which ultimately led to the Standard Model.

This is all to say that scientific progress is tortuous, thought processes of scientists are highly nonlinear, and there is a lot of intuition required in deciding what problems to solve or what space is worth exploring. In condensed matter experiment, it is therefore important to keep pushing boundaries of what has been done before, explore, and do something unique in hope of finding something new!

Exposure to a wide variety of observations and methods is required to choose what boundaries to push and where to spend one’s time exploring. This is what makes diversity and avoiding “herd thinking” important to the scientific endeavor. Exploratory science without concrete questions makes some (especially younger graduate students) feel uncomfortable, since there is always the fear of finding nothing! This means that condensed matter physics, despite its tremendous progress over the last few decades, where certain general organizing principles have been identified, is still somewhat of a “wild west” in terms of science. But it is precisely this lack of structure that makes it particularly exciting — there are still plenty of rocks that need overturning, and it’s hard to foresee what is going to be found underneath them.

In experimental science, questions are important to formulate — but the adventure towards the answer usually ends up being more important than the answer itself.