Category Archives: Education

Disorganized Reflections

Recently, this blog has concentrated on topics that lack a personal touch. A couple of months ago, I started a postdoc position, and it has gotten me thinking about a few questions related to my situation and some that are more general. I thought it would be a good time to share some of my thoughts and experiences. What follows is a list of miscellaneous questions and introspections.

  1. In a new role, doing new work, people often make mistakes while getting accustomed to their new surroundings. Since starting my new position, I’ve been lucky enough to have patient colleagues who have forgiven my rather embarrassing blunders and guided me through uncharted territory. It’s sometimes deflating to admit your (usually) daft errors, but it’s a part of the learning process (at least it is for me).
  2. There are a lot of reasons why people are drawn to doing science. One of them is perpetually doing something new, scary and challenging. I hope that, at least for me, science never gets monotonous and there is consistently some “fear” of the unknown at work.
  3. In general, I am wary of working too much. It is important to take time to exercise and take care of one’s mental and emotional health. One thing I have noticed is that some of the most driven and most intelligent graduate students suffer burnout due to the intense work schedules they keep at the beginning of graduate school.
  4. Along with the previous point, I am also wary of spending too much time in the lab, because it is important to have time to reflect. It is necessary to think about what you’ve done and what can be done tomorrow, and to conjure up experiments you might try, even if they are lofty. It’s not a bad idea to set aside a little time each day or week to think about these kinds of things.
  5. It is necessary to be resilient, not take things personally and know your limits. I know that I am not going to be the greatest physicist of my generation or anything like that, but what keeps me going is the hope that I can make a small contribution to the literature that some physicists and other scientists will appreciate. Maybe they might even say “Huh, that’s pretty cool” with some raised eyebrows.
  6. Is physics my “passion”? I would say that I really like it, but I could just as easily have studied a host of other topics (such as literature, philosophy, economics, etc.), and I’m sure I would have enjoyed them just as much. I’ve always been more of a generalist; I wasn’t single-mindedly focused on physics as a kid or teenager. There are too many interesting things out there in the world to feel satiated studying only condensed matter physics. This is sometimes a drawback and sometimes an asset (i.e. I am sometimes less technically competent than my lab-mates, but I can probably write with less trouble).
  7. For me, reading widely is valuable, but I need to be careful that it does not impede or become a substitute for active thought.
  8. Overall, science can be intimidating and it can feel unrewarding. This is particularly true if you measure your success by publication rate or some other so-called “objective” measure. I would venture to say that a much better measure of success is whether you have grown during graduate school or a postdoc: by learning to think more independently, by picking up some valuable skills (both hard and soft), and by bringing a multi-year project to fruition.

Please feel free to share thoughts from your own experiences! I am always eager to learn about people whose experiences and attitudes differ from mine.

A few nuggets on the internet this week:

  1. For football/soccer fans:

  2. Barack Obama’s piece in Science Magazine:

  3. An interesting read on the history of physics education reform (Thanks to Rodrigo Soto-Garrido for sharing this with me):

  4. I wonder if an experimentalist can get this to work:

Lift Off

Diffraction, Babinet and Optical Transforms

In an elementary wave mechanics course, the subject of Fraunhofer diffraction is usually addressed within the context of single-slit and double-slit interference. This is usually followed up with a discussion of diffraction from a grating. In these discussions, one usually has the picture that light is “coming through the slits” like in the image below:


Now, if you take a look at Ashcroft and Mermin or a book like Elements of Modern X-ray Physics by Als-Nielsen and McMorrow, one gets a somewhat different picture. These books make it seem like X-ray diffraction occurs when the “scattered radiation from the atoms add in phase”, as in the image below (from Ashcroft and Mermin):


So in one case it seems like the light is emanating from the free space between obstacles, whereas in the other case it seems like the obstacles are scattering the radiation. I remember being quite confused about this point when first learning X-ray diffraction in a solid-state physics class, because I had already learned Fraunhofer diffraction in a wave mechanics course. The two phenomena seemed different somehow. In their mathematical treatments, it almost seemed as if for optics, light “goes through the holes” but for X-rays “light bounces off the atoms”.

Of course, these two phenomena are indeed the same, so the question arises: which picture is correct? Well, they both give correct answers, so both are correct. The reason they can both be correct has to do with Babinet’s principle. Wikipedia summarizes Babinet’s principle, colloquially, as follows:

the diffraction pattern from an opaque body is identical to that from a hole of the same size and shape except for the overall forward beam intensity.

To get an idea of what this means, let’s look at an example. In the optical masks below, consider the white space as openings (or slits) and the black space as obstacles:


What would the diffraction pattern from these masks look like? Well, below are the results (taken from here):


Apart from minute differences close to the center, the two patterns are basically the same! If one looks closely enough at the two images, there are some other small differences, most of which are explained in this paper.

Hold on a second, you say. They can’t be the exact same thing! If I take the open space in the optical mask on the left and add it to the open space in the mask on the right, I just have “free space”, and in that case there is no diffraction! You don’t get the diffraction pattern with twice the intensity. This is of course correct; I have glossed over one subtlety. One needs to realize that intensity is related to amplitude as follows:

I \propto |A|^2

This implies that the optical mask on the left and the one on the right give the same diffraction intensity, but that their amplitudes are 180 degrees out of phase. This phase doesn’t affect the intensity because, as the formula above shows, the intensity depends only on the magnitude of the amplitude. So while the two masks give the same intensity pattern, their amplitudes are actually different, and those amplitudes cancel when the optically transparent parts of the two masks are added together. It’s strange to think that “free space” is just a bunch of diffraction patterns cancelling each other out!
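This cancellation is easy to check numerically. Here is a minimal one-dimensional sketch (my own illustration in Python/NumPy, not taken from any of the texts above) that treats the Fraunhofer amplitude as the Fourier transform of the mask’s transmission function: a slit and its complementary opaque strip give identical intensities away from the forward beam, while their amplitudes there are equal and opposite.

```python
import numpy as np

N = 1024
# Transmission masks: 1 = transparent, 0 = opaque
slit = np.zeros(N)
slit[N // 2 - 20 : N // 2 + 20] = 1.0  # a single open slit
strip = 1.0 - slit                     # Babinet complement: an opaque strip in free space

# The Fraunhofer (far-field) amplitude is proportional to the Fourier transform of the mask
A_slit = np.fft.fftshift(np.fft.fft(slit))
A_strip = np.fft.fftshift(np.fft.fft(strip))
I_slit, I_strip = np.abs(A_slit) ** 2, np.abs(A_strip) ** 2

# Exclude the central (forward-beam) pixel, where the two patterns legitimately differ
off_axis = np.ones(N, dtype=bool)
off_axis[N // 2] = False

print(np.allclose(I_slit[off_axis], I_strip[off_axis]))   # True: identical intensities
print(np.allclose(A_strip[off_axis], -A_slit[off_axis]))  # True: amplitudes 180 degrees out of phase
```

Adding the two transmission functions gives free space (`slit + strip` is all ones), whose Fourier transform is a delta function at the forward beam. Off axis, then, the two diffraction amplitudes must cancel exactly, which is the point made above.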

With all this in mind, the main message is clear: optical diffraction through slits and the Ashcroft and Mermin picture of “bouncing off atoms” are complementary descriptions of essentially the same diffraction phenomenon. The diffraction pattern obtained is the same in both cases because of Babinet’s principle.

This idea has been exploited to generate the excellent Atlas of Optical Transforms, where subtleties in crystal structures can be manipulated at the optical scale. Below is an example of such an exercise (taken from here). The two images in the first row are the optical masks, while the bottom row gives the respective diffraction patterns. In the first row, the white dots were obtained by poking holes in the optical masks.


Basically, what they are doing here is using Babinet’s principle to image the diffraction from a crystal with stacking faults along the vertical direction. The positions of the atoms are replaced with holes. One can clearly see that the effect of these stacking faults is to smear out and broaden some of the peaks in the diffraction pattern along the vertical direction. This actually turns out to give one a good intuition for how stacking faults in a crystal can distort a diffraction pattern.
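A toy version of this optical-transform exercise can be done numerically as well. The sketch below (my own Python/NumPy model, not taken from the Atlas; the lattice dimensions and fault positions are arbitrary choices) builds a 2D lattice of point “holes”, inserts two stacking faults that displace all subsequent rows by half an in-row period, and computes the diffraction pattern with a 2D FFT. A half-period shift multiplies a faulted row’s contribution to the h-th order peak by (-1)^h, so even-order Bragg peaks are untouched while odd-order peaks are suppressed at the exact Bragg position, their intensity smeared along the faulting direction.

```python
import numpy as np

N, a, b = 256, 8, 8        # grid size, in-row lattice period, row spacing
cols = np.arange(0, N, a)  # atom (hole) positions within a row
fault_after = {10, 20}     # rows after which a stacking fault shifts the lattice by a/2

mask_perfect = np.zeros((N, N))
mask_faulted = np.zeros((N, N))
shift = 0
for j, y in enumerate(range(0, N, b)):
    mask_perfect[y, cols] = 1.0
    mask_faulted[y, (cols + shift) % N] = 1.0
    if j in fault_after:   # all later rows are displaced by half a period
        shift = (shift + a // 2) % a

I_perfect = np.abs(np.fft.fft2(mask_perfect)) ** 2
I_faulted = np.abs(np.fft.fft2(mask_faulted)) ** 2

def bragg(I, h):
    """Intensity at the (h, 0) reciprocal-lattice point (h indexes the in-row direction)."""
    return I[0, (h * N // a) % N]

# A shift of a/2 adds a phase of pi*h per faulted row:
# even orders add in phase as before, odd orders partially cancel.
print(np.isclose(bragg(I_faulted, 2), bragg(I_perfect, 2)))  # True
print(bragg(I_faulted, 1) / bragg(I_perfect, 1))             # (12/32)**2, about 0.14
```

With these fault positions, 12 of the 32 rows’ contributions survive the cancellation at first order, and one can check that the missing odd-order intensity reappears as a streak along the vertical (faulting) direction, just like the broadened peaks in the Atlas images.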

In summary, the Ashcroft and Mermin picture and the Fraunhofer diffraction picture are really two ways to describe the same phenomenon. The link between the two explanations is Babinet’s principle.

Is it really as bad as they say?

It’s been a little while since I attended A.J. Leggett’s March Meeting talk (see my review of it here), and some part of that talk still irks me. It is the portion where he referred to “the scourge of bibliometrics”, and how it prevents one from thinking about long-term problems.

I am not old enough to know what science was like when he was a graduate student or a young lecturer, but it seems like something was fundamentally different back then. The only evidence that I can present is the word of other scientists who lived through the same time period and witnessed the transformation (there seems to be a dearth of historical work on this issue).


It was easy for me to find articles corroborating Leggett’s views, unsurprisingly I suppose. In addition to the article I linked last week by P. Nozieres, I found interviews with Sydney Brenner and Peter Higgs, and a damning article by P.W. Anderson in his book More and Different entitled Could Modern America Have Invented Wave Mechanics? In his opinion piece, Anderson also refers to an article by L. Kadanoff expressing a similar sentiment, which I was not able to find online (please let me know if you find it, and I’ll link it here!). The conditions described at Bell Labs in Jon Gertner’s book The Idea Factory also paint a rather stark contrast to the present state of condensed matter physics.

Since I wasn’t alive back then, I really cannot know with any great certainty whether the current state of affairs has impeded me from pursuing a longer-term project or thinking about more fundamental problems in physics. I can only speak for myself, and at present I can openly admit that I am incentivized to work on problems I can solve in two to three years. I do have some concrete ideas for longer-term projects, but I cannot pursue them at the present time because, as an experimentalist and postdoc, I have neither the resources nor the permanent setting in which to complete this work.

While the above anecdote is personal and may corroborate the viewpoints of the aforementioned scientists, I don’t perceive all of these pressures as purely negative. I think it is important to publish a paper based on one’s graduate work; it should be something, however small, that no one has done before. It is important to be able to communicate with the scientific community through a technical paper, as writing is an important part of science. I also don’t mind spending a few years (not more than four, hopefully!) as a postdoc, where I will pick up a few more tools to add to my current arsenal. This is something that Sydney Brenner, in particular, decried in his interview. However, it is likely that most of what was said in these articles was aimed at junior faculty.

Ultimately, the opinions expressed by these authors are concerning. However, I am uncertain as to how much of what is said is exaggeration and how much is true. Reading these articles has made me ask how the scientific environment I was trained in (US universities) has shaped my attitude and scientific outlook.

One thing is undoubtedly true, though. If one chooses to resist the publish-or-perish trend by working on long-term problems and not publishing, the likelihood of landing an academic job is close to null. Perhaps this is the most damning consequence. Nevertheless, there is still some outstanding experimental and theoretical science done today, some of it very fundamental, so one should not lose all hope.

Again, I haven’t lived through this academic transformation, so if anyone has any insight concerning these issues, please feel free to comment.

Paradigm Shifts and “The Scourge of Bibliometrics”

Yesterday, I attended an insightful talk by A.J. Leggett at the APS March Meeting entitled Reflection on the Past, Present and Future of Condensed Matter Physics. The talk was interesting in two regards. Firstly, he referred to specific points in the history of condensed matter physics that resulted in (Kuhn-type) paradigm shifts in our thinking of condensed matter. Of course these paradigm shifts were not as violent as special relativity or quantum mechanics, so he deemed them “velvet” paradigm shifts.

This list, which he acknowledged was personal, consisted of:

  1. Landau’s theory of the Fermi liquid
  2. BCS theory
  3. Renormalization group
  4. Fractional quantum hall effect

Notable absentees from this list were superfluidity in 3He, the integer quantum hall effect, the discovery of cuprate superconductivity and topological insulators. He argued that these latter advances did not result in major conceptual upheavals.

He went on to elaborate on the reasons for these velvet revolutions, which I enumerate here to correspond to the list above:

  1. Abandonment of microscopic theory, in particular through the use of Landau parameters; relating experimental properties to one another with input from experiment
  2. Use of an effective low-energy Hamiltonian to describe a phase of matter
  3. Concept of universality and scaling
  4. Discovery of quasiparticles with fractional charge

It is informative to think about condensed matter physics in this way, as it demonstrates the conceptual advances that we almost take for granted in today’s work.

The second aspect of his talk that resonated strongly with the audience was what he dubbed “the scourge of bibliometrics”. He told the tale of his own formative years as a physicist. He published one single-page paper for his PhD work. Furthermore, once appointed as a lecturer at the University of Sussex, his job was to lecture and teach from Monday through Friday; if he did that well, his job was considered well done. If he wanted to pursue research as a side project, he was encouraged to do so. He discussed how this atmosphere allowed him to develop as a physicist without the requirement of publishing papers for career advancement.

Furthermore, he claimed, because of the current focus on metrics, burgeoning young scientists are now encouraged to seek out problems that they can solve in a time frame of two to three years. He saw this as a terrible trend. While it is often necessary to complete short-term projects, it is also important to think about problems that one may be able to solve in, say, twenty years, or maybe even never. He claimed that this is what is meant by doing real science — jumping into the unknown. In fact, he asserted that if he were to give any advice to graduate students, postdocs and young faculty in the audience, it would be to try to spend about 20% of one’s time committed to some of these long-term problems.

This raises a number of questions in my mind. It is well-acknowledged within the community, and even in the blogosphere, that the focus on publication counts and short-termism within condensed matter physics is detrimental. Both Ross McKenzie and Doug Natelson have expressed this sentiment numerous times on their blogs. From speaking to almost every physicist I know, this is a consensus opinion. The natural question is: if this is the consensus opinion, why is the modern climate as it is?

It seems to me that part of this comes from the competition for funding among different research groups, and from funding agencies needing a way to discriminate between them. This leads to the widespread use of metrics, such as h-indices and publication counts, to decide whether or not to allocate funding to a particular group. This doesn’t seem to be the only reason, however. Increasingly, young scientists are judged for hire by their publication output and the journals in which they publish.

Luckily, the situation is not all bad. Because so many people openly discuss this issue, I have noticed a certain amount of push-back from individual scientists. On my recent postdoc interviews, the principal investigators were most interested in what I was going to bring to the table rather than in perusing my publication list. I appreciated this immensely, as I had spent a large part of my graduate years on instrumentation development. Nonetheless, I still felt a great deal of pressure to publish papers towards the end of graduate school, and it is this feeling of pressure that needs to be alleviated.

Strangely, I often find myself working in spite of the prevailing incentives rather than being encouraged by them. I highly doubt that I am the only one with this feeling.

Cost of a College Education

I’m not quite sure how it happened, but over the holiday period, I became a little fixated on why the cost of a college education in the US is so high, and I did quite a bit of reading on the issue. In this post, I intend to lay out some contributing factors to the ever-increasing cost of a college education. Let me stress that I am not any kind of expert on this topic (far from it, in fact!) and also that the lack of transparency when it comes to university budgets makes it difficult for anyone to get a good grasp on what is truly driving costs up. It will probably take a piece of investigative journalism akin to Steven Brill’s excellent TIME magazine article on the rising cost of health care in the US to answer this question effectively (pdf!).

First of all, here is a table showing the average “sticker price” of a university education in the US as a function of time (adjusted for inflation). The table was obtained from the College Board.

You can see that, startlingly, the cost of education as well as the cost of room and board have both increased dramatically since the 1975-76 school year in real terms. Schools often stress, however, that students rarely pay the “sticker price” after financial aid has been doled out. This is true, but it is difficult to know the exact numbers on this.

The $64k question is why the cost has increased over time. To try to answer it, I read the book Why Public Higher Education Should be Free by Robert Samuels over the holiday. Samuels also writes the popular blog Changing Universities. I have to say that I found the book quite partial to the author’s point of view, without adequate use of data. That being said, the book did have many redeeming qualities, and Samuels highlighted several particularly interesting problems with undergraduate education in the US. Its main thesis was that the primary drivers of cost increases at universities do not contribute to an improved undergraduate education, and that US public institutions need to refocus on educating undergraduates to drive costs down.

According to Samuels, the increasing costs are mainly due to the following (all of these reasons are debated by various authors, so it is difficult to know for sure whether these are correct):

  1. Room and board cost hikes due to amenities on college campuses such as recreational facilities
  2. Increased salaries for administrators and star faculty (here, Samuels does have figures to back his claim up, as the salaries for public institutions in the US are publicly available)
  3. Athletic programs, which on most college campuses actually lose money rather than turning a profit
  4. Spending on graduate students
  5. Administrative bloat
  6. Reduced state spending on higher education
  7. Running a university like a business

It may not seem obvious why the last item may contribute to rising costs of education. However, Samuels points out in the book that during the financial crisis of 2008, Harvard lost at least $8 billion of its endowment and the entire University of California system lost approximately $23 billion due to their investment strategies. Furthermore, the limited resources at universities are allocated away from the core mission of undergraduate education by hiring and paying people (such as financial analysts) to do jobs not central to this mission.

There needs to be a refocusing on undergraduate education at universities; it is surprisingly neglected on many research university campuses. Professors often complain about “having to teach”, and administrators are not trained educators. While research is of vast importance and should not be neglected, professors have almost no incentive to invest significant time in the education of undergraduates.

Finally, while other reasons have been cited as contributing to higher costs, such as the increase in technology and IT staff on college campuses, it strikes me as strange that undergraduates can still be taught tuition-free in countries like Germany, where similar technological expansions have occurred. If the priority at US public institutions is to educate undergraduates, the rising cost of education seems like a solvable problem.

Let me again stress that while I am no expert on how to run a college campus or on why the price of a university education is increasing, I agree with Samuels that the incentive structure at research universities does not adequately value educating undergraduates, who are ironically the ones paying the largest cost.