Tag Archives: Creativity

Diversity in and of Physics

When someone refers to a physicist from the early twentieth century, what kind of person do you imagine? Most people will picture an Einstein-like figure: most likely, a white man from Western Europe or the US.

Today, however, things have changed considerably; physics, both as a discipline and in the people who represent it, has become more diverse. That parallel is probably not an accident. To my mind, the increased diversity is an excellent development, but as with everything, it can be further improved. A couple of excellent podcasts I listened to recently have championed diversity in different contexts.

The first podcast was an episode of Reply All entitled Raising the Bar (which you should really start listening to at 11:52, after the rather cringe-worthy Yes-Yes-No segment!). The episode focuses on the lack of diversity at many companies in Silicon Valley. In it, they interview Leslie Miley, an African-American man who started out as a security guard at Apple and went on to work as a software developer and manager at Twitter, Apple, and Google, among other companies (i.e. a completely unorthodox background by Silicon Valley standards). He makes an interesting point about companies in general (while referring specifically to Twitter):

If you don’t have people of diverse backgrounds building your product, you’re going to get a very narrowly focused product.

He also goes on to say that including people from different backgrounds is not just appropriate from a moral standpoint, but also that:

Diverse teams have better outcomes.

There is plenty of research to support this viewpoint. In particular, Scott Page of the Santa Fe Institute and the University of Michigan, Ann Arbor is interviewed in the episode. He suggests that when teams of people are asked to perform a task, teams of “good people” from diverse backgrounds generally outperform teams of “excellent people” or experts from similar backgrounds (i.e. the same Ivy League schools, socio-economic status, age, etc.).

There is a caveat presented in this episode, however: it may take longer for a diverse team to gel, to communicate, and to understand one another. But again, the long-term outcomes are generally better.

There is an excellent episode of Hidden Brain that covers similar topics but focuses on building a better workplace. The host, Shankar Vedantam, interviews the (then) head of human resources at Google, Laszlo Bock, to gain some insight into how Google has built its talent pool. Of specific interest to physicists is how much Google borrows from places like Bell Labs in building a creative workplace environment. Again, Bock stresses the importance of diversity among Google's employees for the company's success.

I think physics departments across the country need to take a similar approach. Departments should strive to be diverse, hiring people of different backgrounds, genders, and nationalities, trained at a variety of schools. Graduate students with unorthodox backgrounds should also be welcomed. Again, this is important not just for the health of the department, but for the health of the discipline in general.

I strongly suspect that Michael Faraday was one of the greatest experimental physicists of the past few hundred years not in spite of his lack of mathematical acuity, but because of it. His mathematical ability famously did not extend much beyond basic algebra, not even as far as trigonometry.

Envisioning the Future Technological Landscape

I recently read the well-written and prescient piece entitled As We May Think by Vannevar Bush, published in The Atlantic magazine in July of 1945. With World War II coming to a close, and with many physicists and engineers involved in the war effort, Bush outlines what he sees as the future work of physical scientists when they return to their “day jobs”. Many of his predictions concentrate on technological advancements. Reading it today, one is struck by how visionary the article has turned out to be (though it may be argued that some of the prophecies were self-fulfilling). It should be pointed out that the article was written before the invention of the transistor by Bardeen and Brattain in 1947.

The most stunning of his predictions to my mind were the following:

  1. Personal computers
  2. Miniature storage capable of holding vast amounts of data (including encyclopedias)
  3. Something akin to digital photography, which he calls dry photography
  4. The internet and world wide web
  5. Speech recognition (though he foresaw it being used more widely than it is today)
  6. Portable or easily accessible encyclopedias with hyperlinked text
  7. Keyboard- and mouse-controlled computers

Reading about how he saw the future makes it less surprising that Bush was Claude Shannon’s thesis advisor. For those of you who don’t know, Shannon’s work gave rise to the field now known as information theory, and also to the idea that one could use switching circuits (first relays, later transistors) to implement Boolean algebra and thereby numerical relationships. His ideas underpin the language of the modern computer.
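As a toy illustration of Shannon's insight that switching logic can implement arithmetic, here is a small sketch (my own, not Shannon's original relay circuits) that adds two binary numbers using nothing but Boolean operations:

```python
def half_adder(a, b):
    """Add two bits using only Boolean operations."""
    return a ^ b, a & b  # (sum bit, carry bit)

def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length little-endian bit lists."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 (= [1,1,0]) + 5 (= [1,0,1]) gives 8 (= [0,0,0,1])
print(add_bits([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1]
```

Chaining enough of these gates together, entirely out of on/off switches, is essentially how a modern processor performs arithmetic.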

The clarity with which Bush saw the technological future is amazing. I heartily recommend the article as some eye-opening bedtime reading.

Does Frivolity Border on Creativity?

Sometimes, for the sake of letting one’s imagination run around a bit, it may be advisable to indulge in a seemingly frivolous endeavor. In the sciences, these undertakings can sometimes result in the winning of an Ig Nobel Prize.

This year’s winner of the Ig Nobel Prize in physics studied the “universal” urination time of mammals. The main conclusion of the paper is that mammals weighing more than 3 kg urinate for 21 ± 13 seconds per session. I will not comment on the rather large error bars.

I reprint the abstract to the paper below:

Many urological studies rely on models of animals, such as rats and pigs, but their relation to the human urinary system is poorly understood. Here, we elucidate the hydrodynamics of urination across five orders of magnitude in body mass. Using high-speed videography and flow-rate measurement obtained at Zoo Atlanta, we discover that all mammals above 3 kg in weight empty their bladders over nearly constant duration of 21 ± 13 s. This feat is possible, because larger animals have longer urethras and thus, higher gravitational force and higher flow speed. Smaller mammals are challenged during urination by high viscous and capillary forces that limit their urine to single drops. Our findings reveal that the urethra is a flow-enhancing device, enabling the urinary system to be scaled up by a factor of 3,600 in volume without compromising its function. This study may help to diagnose urinary problems in animals as well as inspire the design of scalable hydrodynamic systems based on those in nature.

I present a translation of the abstract in my own language below:

We don’t know if humans and other mammals pee in the same way. Here, we study how both big and little mammals pee. We creepily filmed a lot of mammals (that weigh more than 3 kg) pee with an unnecessarily high-speed camera and found that they all generally pee for 21 ± 13 seconds. Large mammals can push more pee through their pee-holes and gravity helps them out a bit. It’s harder for small animals to pee because they have smaller pee-holes. Surprisingly, pee-holes work for mammals with a range of sizes. We hope this study will help mammals with peeing problems.

I genuinely enjoyed reading their paper, and actually recommend it for a bit of fun. Here are some of the high-speed videos (which you may or may not want to watch) associated with the paper.
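The near-constancy of the duration can be rationalized with a back-of-the-envelope scaling argument (my own sketch of the idea in the paper, not the authors' full hydrodynamic model): if bladder volume scales with body mass M, urethra length as M^(1/3), urethra cross-section as M^(2/3), and the gravity-driven exit speed follows Torricelli's law v ~ sqrt(2gL), then the emptying time scales only as M^(1/6):

```python
def urination_duration_ratio(mass_ratio):
    """Scaling sketch: bladder volume V ~ M, urethra length L ~ M^(1/3),
    cross-section A ~ M^(2/3), exit speed v ~ sqrt(2 g L) ~ M^(1/6)
    (Torricelli), so duration T ~ V / (A * v) ~ M^(1/6)."""
    return mass_ratio ** (1.0 / 6.0)

# Three decades of body mass (e.g. a ~4 kg cat vs a ~4000 kg elephant)
print(f"{urination_duration_ratio(1e3):.2f}")  # ~3.16
# Even five decades of mass changes the duration by only ~6.8x,
# comparable to the quoted 21 +/- 13 s spread
print(f"{urination_duration_ratio(1e5):.2f}")  # ~6.81
```

The very weak M^(1/6) dependence is why a single "universal" duration, with generous error bars, can cover five orders of magnitude in body mass.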

Please feel free to experiment with your own “translations” in the comments.

What Happens in 2D Stays in 2D.

There was a recent paper published in Nature Nanotechnology demonstrating that single-layer NbSe_2 exhibits a charge density wave (CDW) transition at 145K and superconductivity at 2K. Bulk NbSe_2 has a CDW transition at ~34K and a superconducting transition at ~7.5K. The authors speculate (plausibly) that the enhanced CDW transition temperature results from increased electron-phonon coupling due to reduced screening in the single-layer limit. An important detail is that the authors used a sapphire substrate for the experiments.

This paper is part of a general trend of papers examining the physics of solids in the 2D limit, either in single-layer form or at the interface between two solids. This frontier was opened up by the discovery of graphene and by the discovery of superconductivity and ferromagnetism in the 2D electron gas at the LaAlO_3/SrTiO_3 (LAO/STO) interface. The nature of these transitions at the LAO/STO interface is a prominent area of research in condensed matter physics, partly because the Mermin-Wagner theorem is so ingrained in researchers' minds. I have written before about the limitations of such theorems.

Nevertheless, it has now been found that the transition temperatures of materials can be significantly enhanced in single-layer form. Besides the NbSe_2 case, the CDW transition temperature of TiSe_2 is enhanced by about 40K in monolayer form. Most spectacularly, single-layer FeSe on an STO substrate was reported to exhibit superconductivity at temperatures above 100K (bulk FeSe only superconducts below 8K). It should be mentioned that in bulk form, all of the aforementioned materials are layered and quasi-2D.

The phase transitions in these compounds raise some fundamental questions about the nature of solids in 2D. Naively, one would expect the transition temperatures to be suppressed in reduced dimensions due to enhanced fluctuations. Since this is not what is observed, some other parameter, such as the electron-phonon coupling in the NbSe_2 case, must provide a compensating boost.

I find this trend towards studying 2D compounds a particularly interesting avenue in the current condensed matter physics climate for a few reasons:

1) Whether these phase transitions make sense within the Kosterlitz-Thouless paradigm (which works well to explain transitions in 2D superfluid and superconducting films) still needs to be investigated.

2) The need for adequate probes of interfacial and monolayer compounds will necessarily lead to new experimental techniques.

3) Qualitatively different phenomena can occur in the 2D limit that do not necessarily occur in the 3D counterparts (the quantum Hall effect being a prime example).

Trends in condensed matter physics can sometimes lead to intellectual atrophy, but I think this one may lead to fundamental and major discoveries in the years to come on the theoretical, experimental, and perhaps even technological fronts.

Update: The day after I wrote this post, I came upon an article demonstrating evidence for a ferroelectric phase transition in thin films of strontium titanate (STO), a material that exhibits no ferroelectric transition in bulk form at all.

The Prescience of Ginzburg

In 1977, before the discovery of high-temperature superconductivity, V. Ginzburg wrote:

“On the basis of general theoretical considerations, we believe at present that the most reasonable estimate is T_c \lesssim 300 K, this estimate being, of course, for materials and systems under more or less normal conditions (equilibrium or quasi-equilibrium metallic systems in the absence of pressure or under relatively low pressures, etc.). In this case, if we exclude from consideration metallic hydrogen and, perhaps, organic metals, as well as semimetals in states near the region of electronic phase transitions, then it is suggested that we should use the exciton [electronic] mechanism of attraction between the conduction electrons.

In this scheme, the most promising – from the point of view of the possibility of raising T_c – materials are, apparently, layered compounds and dielectric-metal-dielectric sandwiches. However, the state of the theory, let alone of experiment, is still far from being such as to allow us to regard as closed other possible directions, in particular, the use of filamentary compounds. Furthermore, for the present state of the problem of high-temperature superconductivity, the soundest and most fruitful approach will be one that is not preconceived, in which attempts are made to move forward in the most diverse directions.”

I took the quote from this paper here, though many of the ideas are echoed from, and better expressed in, one of his earlier papers, linked here. It is amusing that for at least 15 years prior to the discovery of the cuprates, Ginzburg stressed looking for high-temperature superconductors (with T_c above the boiling point of liquid nitrogen) among layered, quasi-2D materials that could host superconductivity with electronically driven Cooper pairing.

The papers linked above are very readable, and he reached these conclusions on startlingly general grounds, by analyzing the inverse dielectric function.

The Value of “Wasting Time”

Science magazine this week published an article entitled Advice to a Young Scientist, with some encouraging words from Pedro Miguel Echenique. A lot of career advice nowadays is geared towards what I often refer to as “careerism”: advice that emphasizes one’s career but is not directly related to improving oneself scientifically.

The article highlights five points that are infrequently discussed when it comes to scientific career advice. I point out the two that are most rarely stressed:

1) Learn Broadly: Many times, a student of science gets pigeonholed in one particular aspect of a field and cannot see the broader picture. Studying different aspects of one’s scientific discipline (or even subjects outside it) can help open one’s eyes to other avenues of interest and also help frame one’s own work within a larger scientific context.

2) Allow Yourself to Waste Time: The point is made in the article that one should chat with one’s colleagues, enjoy a tea or coffee break, and attend seminars to stimulate one’s mind. In my own experience, talking to other scientists has been a large part of my personal scientific development and has also taught me where there are gaps in my knowledge that need to be filled.

I appreciate Echenique’s sentiment on these points, as this kind of career advice is rarely given out. Sometimes, aspects of careerism can be important, but few things can replace good scientific development and curiosity.

Do “Theorems” in Condensed Matter Physics Limit the Imagination?

There are many so-called “theorems” in physics. The most famously quoted in condensed matter are those associated with the names of Goldstone, Mermin and Wagner, and McMillan.

If you aren’t familiar with these often (mis)quoted theorems, then let me (mis)quote them for you:

1) Goldstone: For each continuous symmetry a phase of matter breaks, there is an associated collective excitation that is gapless at long wavelengths, usually referred to as a Nambu-Goldstone mode.

2) Mermin-Wagner: Continuous symmetries cannot be spontaneously broken at finite temperature in systems with sufficiently short-range interactions in dimensions d ≤ 2. (From Wikipedia)

3) McMillan (PDF link!): Electron-phonon induced superconductivity cannot have a T_c higher than approximately 40K.
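For intuition on the Mermin-Wagner result, the standard textbook heuristic (an estimate, not the full proof) is that thermal fluctuations of the would-be order parameter grow without bound in low dimensions:

```latex
\langle |\delta\theta|^2 \rangle \;\propto\;
k_B T \int \frac{d^d k}{(2\pi)^d}\,\frac{1}{\rho_s\, k^2}
\;\sim\; \int_{1/L}^{\Lambda} \frac{k^{d-1}}{k^2}\, dk
```

where \rho_s is a stiffness and \Lambda a short-distance cutoff. The integral diverges with system size L for d \leq 2 (logarithmically in d = 2), washing out true long-range order at any finite temperature.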

All three of these theorems have been violated to some extent. My gut feeling, though, is that such theorems can have the adverse consequence of limiting one's imagination. As an experimental physicist, I can see the value in these theorems, but I don't think it is constructive to believe them outright. Nature has proven time and again that she is much more creative and elusive than our human minds; we should use these theorems as guidance while always remaining wary of them.

For instance, had one believed the Mermin-Wagner theorem outright, would anyone have thought the existence of graphene possible? In a solid, which breaks translational symmetry in three directions and rotational symmetry in three directions, why are there only three acoustic phonons? McMillan's formula still holds true for electron-phonon coupled superconductors (the marginal case being MgB_2, with T_c ~ 40K), though a startling recent discovery may shatter even this claim. However, placed in its historical context (it was stated before the discovery of high-temperature superconductors), one wonders whether McMillan's formula discouraged some experimentalists from pursuing higher transition temperature superconductors.
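To see where the ~40K ceiling comes from, here is a sketch evaluating McMillan's formula, T_c = (\Theta_D/1.45) exp[-1.04(1+\lambda)/(\lambda - \mu^*(1+0.62\lambda))], for a few coupling strengths. The parameter values below are illustrative assumptions, not fits to any particular material:

```python
import math

def mcmillan_tc(theta_D, lam, mu_star=0.1):
    """McMillan's formula for the superconducting T_c:
    theta_D: Debye temperature (K), lam: electron-phonon coupling
    constant, mu_star: Coulomb pseudopotential (typically ~0.1)."""
    return (theta_D / 1.45) * math.exp(
        -1.04 * (1 + lam) / (lam - mu_star * (1 + 0.62 * lam)))

# Illustrative case: a fairly stiff phonon spectrum, Theta_D = 400 K
for lam in (0.5, 1.0, 1.5):
    print(f"lambda = {lam}: T_c ~ {mcmillan_tc(400, lam):.1f} K")
    # lambda = 0.5 gives ~4 K, 1.0 gives ~23 K, 1.5 gives ~38 K
```

The exponential suppression keeps T_c a small fraction of \Theta_D for realistic couplings. Formally one can push \lambda higher, but in real materials a large \lambda usually comes with soft phonons (a small \Theta_D), which is the physical origin of the empirical ~40K ceiling that the hydride discovery mentioned above calls into question.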

My message: One may use the theorems as guidance, but they are really there to be broken.