Monthly Archives: September 2015

Does Frivolity Border on the Edge of Creativity?

Sometimes, for the sake of letting one’s imagination run around a bit, it may be advisable to indulge in a seemingly frivolous endeavor. In the sciences, these undertakings can sometimes result in the winning of an Ig Nobel Prize.

This year’s winner of the Ig Nobel in physics studied the “universal” urination time of mammals. The main conclusion of this paper is that mammals that weigh more than 3kg urinate for 21 \pm 13 seconds per session. I will not comment on the rather large error bars.

I reprint the abstract to the paper below:

Many urological studies rely on models of animals, such as rats and pigs, but their relation to the human urinary system is poorly understood. Here, we elucidate the hydrodynamics of urination across five orders of magnitude in body mass. Using high-speed videography and flow-rate measurement obtained at Zoo Atlanta, we discover that all mammals above 3 kg in weight empty their bladders over nearly constant duration of 21 ± 13 s. This feat is possible, because larger animals have longer urethras and thus, higher gravitational force and higher flow speed. Smaller mammals are challenged during urination by high viscous and capillary forces that limit their urine to single drops. Our findings reveal that the urethra is a flow-enhancing device, enabling the urinary system to be scaled up by a factor of 3,600 in volume without compromising its function. This study may help to diagnose urinary problems in animals as well as inspire the design of scalable hydrodynamic systems based on those in nature.

I present a translation of the abstract in my own language below:

We don’t know if humans and other mammals pee in the same way. Here, we study how both big and little mammals pee. We creepily filmed a lot of mammals (that weigh more than 3kg) pee with an unnecessarily high-speed camera and found that they all generally pee for 21 \pm 13 seconds. Large mammals can push more pee through their pee-holes and gravity helps them out a bit. It’s harder for small animals to pee because they have smaller pee-holes. Surprisingly, pee-holes work for mammals with a range of sizes. We hope this study will help mammals with peeing problems.

I genuinely enjoyed reading their paper, and actually recommend it for a bit of fun. Here are some of the high-speed videos (which you may or may not want to watch) associated with the paper.

Please feel free to experiment with your own “translations” in the comments.

Condensed Matter Physics in the Eyes of the Public: A Note from N.P. Armitage

Note: This post is actually a comment on Jennifer Ouellette’s blog Cocktail Party Physics. It was in response to a post about why particle physics tends to generate more wonder and hype in the eyes of the media and public at large compared to condensed matter physics. The comment was originally posted in 2006 by N. P. Armitage, but it still rings true today. I reprint it below:

Nuestra culpa. You’re right, Jennifer. We condensed matter physicists (henceforth CMP) have not been good with providing a compelling narrative for our research. There may be many reasons for this, but I believe it comes in part from a misconception of how we should sell ourselves to the public (and thereby funding agencies).

As a field we can be justifiably proud to have discovered the physics that led to the transistor, NMR, superconducting electronics etc etc. But this boon has also been a curse. It has made us lazy and has stifled our capacity to think creatively about outreach in areas where we don’t have the crutch of technological promise to fall back on.

This is a luxury our cosmology colleagues don’t have. They feel passionately about their research and they have to (get to?) convey that passion to the public (with predictably good results). We feel passionately about our research, but then feel compelled to tell boring stories about this or that new technology we might develop (which predictably elicits yawns and perhaps only a mental note to take advantage of said technology when it is available in iPod form). We do this because we are bred and raised to think that technological promise is somehow a more legitimate motivation to the outside public than genuine fundamental scientific interest. It doesn’t have to be this way.

Due to our tremendous technological successes, there is also the feeling that at some level ALL our work should touch on technology. This is the easy strategy, but ultimately it hasn’t been good for the health of the field. This is because, for many of us, technology isn’t our passion and it shows. Moreover, the research or aspect of research that has the greatest chance of evoking feelings of real awe and wonderment is typically the precise research that has the least chance of creating viable products. Perhaps this last statement is one regarding human nature itself.

This current modus operandi has led to three things:

-A marginalization of some of the most exciting research (which may have not even a tenuous connection to commercialization).

-Big promises about technological directions when they aren’t warranted, and then consequences when results fail to live up to prognostications.

-And most relevant for the current discussion, a lack of focus on, and practice at, evoking awe and wonderment.

It is telling that virtually every Phys. Rev. Focus (short news release-style blurbs from the American Physical Society on notable discoveries) on CMP ends with a sentence or two about what technological impact said discovery will have. Sometimes these connections are tenuous at best. Obviously there is no similar onus in articles on cosmology and so those Focuses can focus on what it is that really excites the researchers (instead of the tenuous backstory technological connection). This is nothing against Phys. Rev. Focus, but serves to illustrate the prevailing philosophy in public outreach. The “public” can tell when we’re bluffing and they certainly can feel passion or lack thereof.

The reality is that many of us in CMP don’t have the inclination or interest to ‘make’ anything at all. For instance, we may pursue novel states of matter at low temperature and consider the concept of emergence and the appearance of collective effects to be just as fundamental and irreducible as anything in string theory. We should promote what excites us in the manner that it excites us.

The research that Jennifer cites on graphene is a case in point. Yes, perhaps (but perhaps not) there is technological promise in graphene, but there is also a remarkable (and awe-inspiring) fundamental side as well. Here we believe that the electrons in graphene are described by the same formalism that applies to the relativistic particles of the Dirac equation. One can simulate the rich structure of elementary particle physics in a tabletop experiment! I would posit that this kind of thing is much more likely to provoke enthusiasm from the public at large than any connection to graphene as yet another possible material in new computing devices.

Our cosmology and particle physics colleagues are raised academically to believe that knowledge for knowledge’s sake is a good thing. By and large they do a wonderful job of conveying these ideas to the general public. Although we believe the same thing, we CMP have presented ourselves not as people who also have access to wild and wonderful things, but as people who are discovering stuff to make stuff. We have that, but there is so so much more. We need a new business model and a new narrative.

Lessons from the Coupled Oscillator

In studying solid state physics, one of the first problems encountered is that of phonons. In the usual textbooks (such as Ashcroft and Mermin or Kittel), the physics is buried underneath formalism. Here is my attempt to explain the physics, while just quoting the main mathematical results. For the simple mass-spring oscillator system pictured below, we get the following equation of motion and oscillation frequency:

Simple harmonic oscillator

\ddot{x} = -\omega^2x

and      \omega^2 = \frac{k}{m}

If we couple two harmonic oscillators, such as in the situation below, we get two normal modes that obey equations of motion identical in form to the single-oscillator case.


Coupled harmonic oscillator

The equations of motion for the normal modes are:

\ddot{\eta_1} = -\omega^2_1\eta_1      and

\ddot{\eta_2} = -\omega^2_2\eta_2,

where

\omega_1^2 = \frac{k+2\kappa}{m}

and   \omega_2^2 = \frac{k}{m}.

I should also mention that \eta_1 = x_1 - x_2 and \eta_2 = x_1 + x_2. The normal modes are pictured below, consisting of a symmetric and an antisymmetric oscillation:


Anti-Symmetric normal mode


Symmetric normal mode

The surprising thing about the equations for the normal modes is that they look exactly like the equations for two decoupled and independent harmonic oscillators. Any motion of the oscillators can therefore be written as a linear combination of the normal modes. Looking back at such results, they seem trivial, but to whoever first solved this problem, the result must have been unexpected and profound.
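As a quick sanity check, here is a minimal numerical sketch of the two-mass problem (the values of m, k and \kappa are arbitrary, assumed only for illustration). Diagonalizing the force-constant matrix recovers the two normal-mode frequencies quoted above:

```python
import numpy as np

# Illustrative, assumed parameters (arbitrary units)
m, k, kappa = 1.0, 1.0, 0.5

# Equations of motion for two masses, each tied to a wall by a spring k
# and coupled to each other by a spring kappa:
#   m x1'' = -k x1 - kappa (x1 - x2)
#   m x2'' = -k x2 - kappa (x2 - x1)
K = np.array([[k + kappa, -kappa],
              [-kappa,    k + kappa]])

# The squared normal-mode frequencies are the eigenvalues of K/m
omega_sq, modes = np.linalg.eigh(K / m)

print(omega_sq)   # [1.0, 2.0] = [k/m, (k + 2*kappa)/m]
print(modes)      # columns are (1, 1)/sqrt(2) and (1, -1)/sqrt(2), up to sign
```

The eigenvectors are just the symmetric and antisymmetric combinations x_1 + x_2 and x_1 - x_2, i.e. the change of variables to \eta_1 and \eta_2 written above.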

Now, let us briefly discuss the quantum case. If we have a single harmonic oscillator, we get that the Hamiltonian is:

H = \hbar\omega (a^\dagger a +1/2)

If we have many harmonic oscillators coupled together as pictured below, one would probably guess in light of the classical case that one could obtain the normal modes similarly.

Harmonic Chain

One would probably then naively guess that the Hamiltonian could be decoupled into many seemingly independent oscillators:

H = \sum_k\hbar\omega_k (a^\dagger_k a_k+1/2)

This intuition is exactly correct and this is indeed the Hamiltonian describing phonons, the normal modes of a lattice. The startling conclusion in the quantum mechanical case, though, is that the equations lend themselves to a quasiparticle description — but I wish to speak about quasiparticles another day. Many ideas in quantum mechanics, such as Anderson localization, are general wave phenomena and can be seen in classical systems as well. Studying and visualizing classical waves can therefore still yield interesting insights into quantum mechanics.
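To make the chain case concrete, here is a small sketch (with assumed, arbitrary parameters; the spring constant is called K here to avoid clashing with the wavevector k) that diagonalizes the classical force-constant matrix of an N-site periodic chain and checks that the frequencies agree with the standard dispersion \omega_k = 2\sqrt{K/m}|\sin(ka/2)|, the same \omega_k that appear in the phonon Hamiltonian above:

```python
import numpy as np

# Assumed illustrative parameters: N masses m on a ring, nearest-neighbor springs K, spacing a
N, m, K, a = 8, 1.0, 1.0, 1.0

# Force-constant matrix of the periodic harmonic chain:
#   m x_j'' = -K (2 x_j - x_{j-1} - x_{j+1})
D = 2 * K * np.eye(N) \
    - K * np.roll(np.eye(N), 1, axis=0) \
    - K * np.roll(np.eye(N), -1, axis=0)

# Numerical normal-mode frequencies (clip tiny negative round-off at the k = 0 mode)
omega_numeric = np.sort(np.sqrt(np.clip(np.linalg.eigvalsh(D / m), 0, None)))

# Analytic phonon dispersion at the allowed wavevectors k_n = 2*pi*n/(N*a)
k_vals = 2 * np.pi * np.arange(N) / (N * a)
omega_analytic = np.sort(2 * np.sqrt(K / m) * np.abs(np.sin(k_vals * a / 2)))

print(np.allclose(omega_numeric, omega_analytic))   # True
```

In the quantum problem, each of these classical modes simply becomes an independent oscillator at the same \omega_k, which is all the Hamiltonian above is saying.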

Why are the quantum mechanical effects of sound observed in most solids but not most liquids?

Well, if liquids remained liquids down to low temperatures, then the quantum mechanical effects of sound would occur in them as well. There is in fact one example where these effects are important: liquid helium.

The appropriate questions to ask are therefore: (i) when are quantum mechanical effects significant in the description of sound? and (ii) when does quantum mechanics have any observable consequences in matter at all?

The answer to these questions is probably obvious to most people who read this blog. However, I still think it bears reiterating every once in a while. When does the wave nature of “particles” become relevant? Usually, when the wavelength, \lambda, is on the order of (or larger than) some characteristic length, d:

\lambda \gtrsim d

What is this characteristic length in a liquid or solid? One can approximate it by the interparticle spacing, which one can take to be the inverse cube root of the number density, d \sim n^{-1/3}. Therefore, quantum mechanical effects can be said to become important when:

\lambda \gtrsim n^{-1/3}

Now, lastly, we need an expression for the wavelength of the particles. One can use the de Broglie expression that relates the wavelength to the momentum:

\lambda \sim \frac{h}{p},

where h is Planck’s constant and p is the momentum. One can approximate the momentum of a particle at temperature T by:

p \sim \sqrt{mk_BT}    (massive)    OR      p \sim k_BT/v_s     (massless),

where k_B is Boltzmann’s constant, m is the mass of the particle in question, and v_s is the speed of sound. Therefore, we get that quantum mechanics becomes significant when:

n^{2/3}h^{2}/m \gtrsim k_BT   (massive)    OR     n^{1/3}h v_s \gtrsim k_BT     (massless).

Of course these expressions are just rough estimates, but they do tell us that most liquids end up freezing before quantum mechanical effects become relevant. Therefore sound, or phonons, express their quantum mechanical properties at low temperatures, usually below the freezing point of most materials. By the way, the most celebrated example of the quantum mechanical effects of sound in a solid is the C_v \sim T^3 law of the Debye model. Notice that the left-hand side of the formula above for massless particles is, within factors of order unity, the Boltzmann constant times the Debye temperature. Sound can exhibit quantum mechanical properties in liquids and gases, but these cases are rare: helium at low temperature is an example of a liquid, and Bose-condensed sodium is an example of a gas.
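For what it’s worth, here is a back-of-the-envelope sketch of these estimates with assumed, typical order-of-magnitude numbers for the densities, sound speed and masses. It only illustrates the scales involved; factors of order unity (such as the 2\pi in the usual degeneracy temperature) are ignored:

```python
# Crude crossover-temperature estimates; all input numbers are rough, assumed values
h, kB = 6.626e-34, 1.381e-23   # Planck and Boltzmann constants (SI units)

def T_massless(n, v_s):
    """n^(1/3) h v_s / kB : crossover for sound (massless phonons)."""
    return n**(1/3) * h * v_s / kB

def T_massive(n, m):
    """n^(2/3) h^2 / (m kB) : crossover for massive particles."""
    return n**(2/3) * h**2 / (m * kB)

print(T_massless(n=8.5e28, v_s=4.0e3))   # phonons in copper: ~800 K, same scale as the Debye temperature
print(T_massive(n=3.3e28, m=3.0e-26))    # water molecules: ~10 K, far below the 273 K freezing point
print(T_massive(n=2.2e28, m=6.6e-27))    # helium-4 atoms: ~40 K, and helium never freezes at ambient pressure
```

The copper number lands within a factor of a few of its Debye temperature, while the water number sits far below its freezing point, which is precisely the point made above: most liquids solidify long before their constituents become quantum mechanical.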

What Happens in 2D Stays in 2D.

There was a recent paper published in Nature Nanotechnology demonstrating that single-layer NbSe_2 exhibits a charge density wave transition at 145K and superconductivity at 2K. Bulk NbSe_2 has a CDW transition at ~34K and a superconducting transition at ~7.5K. The authors speculate (plausibly) that the enhanced CDW transition temperature occurs because of an increase in electron-phonon coupling due to the reduction in screening. An important detail is that the authors used a sapphire substrate for the experiments.

This paper is among a general trend of papers that examine the physics of solids in the 2D limit, either in single-layer form or at the interface between two solids. This frontier was opened up by the discovery of graphene and by the discovery of superconductivity and ferromagnetism in the 2D electron gas at the LAO/STO interface. The nature of these transitions at the LAO/STO interface is a prominent area of research in condensed matter physics. Part of the reason for this interest stems from the Mermin-Wagner theorem, which has been ingrained in most researchers. I have written before about the limitations of such theorems.

Nevertheless, it has now been found that the transition temperatures of materials can be significantly enhanced in single-layer form. Besides the NbSe_2 case, the CDW transition temperature of TiSe_2 was also found to be enhanced by about 40K in monolayer form. Probably most spectacularly, it was reported that single-layer FeSe on an STO substrate exhibits superconductivity at temperatures higher than 100K (bulk FeSe only exhibits superconductivity at 8K). It should be mentioned that in bulk form the aforementioned materials are all quasi-2D and layered.

The phase transitions in these compounds obviously raise some fundamental questions about the nature of solids in 2D. One would naively expect the transition temperature to be suppressed in reduced dimensions due to enhanced fluctuations. This is not what is observed experimentally, so a boost from another parameter, such as the enhanced electron-phonon coupling in the NbSe_2 case, must be taken into account.

I find this trend towards studying 2D compounds a particularly interesting avenue in the current condensed matter physics climate for a few reasons: (1) whether or not these phase transitions make sense within the Kosterlitz-Thouless paradigm (which works well to explain transitions in 2D superfluid and superconducting films) still needs to be investigated, (2) the need for adequate probes to study interfacial and monolayer compounds will necessarily lead to new experimental techniques and (3) qualitatively different phenomena can occur in the 2D limit that do not necessarily occur in their 3D counterparts (the quantum hall effect being a prime example).

Sometimes trends in condensed matter physics can lead to intellectual atrophy, but I think that this one may lead to some fundamental and major discoveries in the years to come on the theoretical, experimental and perhaps even the technological fronts.

Update: The day after I wrote this post, I also came upon an article presenting evidence for a ferroelectric phase transition in thin strontium titanate (STO), a material known to exhibit no ferroelectric phase transition in bulk form at all.