Tag Archives: Emergence

Thoughts on Consistency and Physical Approximations

Ever since the series of posts about the Theory of Everything (ToE) and Upward Heritability (see here, here, here and here), I have felt that perhaps my original perspective lacked some clarity of thought. I recently re-read the chapter entitled Physics on a Human Scale in A.J. Leggett’s The Problems of Physics. In it, he takes a bird’s-eye view of the framework under which most of us in condensed matter physics operate, and in doing so, untied several of the persisting mental knots I had after that series of blog posts.

The basic message is this: in condensed matter physics, we create models that are not logically dependent on the underlying ToE. This means that one does not deduce the models in the mathematical sense, but the models must be consistent with the ToE. For example, in the study of magnetism, the models must be consistent with the microscopically-derived Bohr-van Leeuwen theorem.
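As a reminder (my sketch here, not Leggett’s), the Bohr-van Leeuwen theorem is a one-line consequence of the classical partition function for charges in a magnetic field, described by a vector potential $\mathbf{A}$:

```latex
H = \sum_i \frac{\left(\mathbf{p}_i - \tfrac{e}{c}\mathbf{A}(\mathbf{r}_i)\right)^2}{2m} + V(\{\mathbf{r}_i\}),
\qquad
Z = \int \prod_i d^3p_i \, d^3r_i \; e^{-\beta H}.
```

Shifting each momentum integral by $\mathbf{p}_i \rightarrow \mathbf{p}_i + \tfrac{e}{c}\mathbf{A}(\mathbf{r}_i)$ at fixed $\mathbf{r}_i$ leaves the (infinite) integration range unchanged, so $Z$ is independent of $\mathbf{A}$ and $M = -\partial F/\partial B = 0$. Any consistent model of magnetism therefore has to get its moments from quantum mechanics.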

When one goes from the ToE to an actual “physical” model, one is selecting relevant features, rather than making rigorous approximations (so-called physical approximations). This requires a certain amount of guesswork based on experience/inspiration. For example, in writing down the BCS Hamiltonian, one neglects all interaction terms apart from the “pairing interaction”.
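To make the BCS example concrete, here is the textbook reduced Hamiltonian, where only the pairing channel has been kept and the attractive coupling $g$ acts between electrons in a thin shell around the Fermi surface:

```latex
H_{\mathrm{BCS}} = \sum_{\mathbf{k}\sigma} \xi_{\mathbf{k}}\, c^{\dagger}_{\mathbf{k}\sigma} c_{\mathbf{k}\sigma}
\;-\; g \sum_{\mathbf{k},\mathbf{k}'} c^{\dagger}_{\mathbf{k}\uparrow}\, c^{\dagger}_{-\mathbf{k}\downarrow}\, c_{-\mathbf{k}'\downarrow}\, c_{\mathbf{k}'\uparrow}.
```

All the other interaction terms present in the full problem have been discarded by hand — selected away, in Leggett’s sense — rather than shown to be small by a controlled expansion.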

Leggett then makes an intuitive analogy, which provides further clarity. If one is building a transportation map of, say, Bangkok, Thailand, one could do this in two ways: (i) one could take a series of images from a plane/helicopter and then resize the images to fit on a map, or (ii) one could schematically draw the relevant transportation features on a piece of paper in a way that is consistent with the city’s topography. Generally, scheme (ii) will give us a much better representation of the transportation routes in Bangkok than the complicated images in scheme (i). This is the process of selecting relevant features consistent with the underlying system, and it is usually the process undertaken in condensed matter physics. Note that this process is not one of approximation, but one of selection while retaining consistency.

With respect to the previous posts on this topic, I stand by the following: (1) I still think that Laughlin and Pines’ claim that certain effects (such as the Josephson quantum) cannot be derived from the ToE is quite speculative. It is difficult to prove either viewpoint (mine or L&P’s), but I take the more conservative (and, I would think, simpler) option and suggest that in principle one could obtain such effects from the ToE. (2) Upward heritability, while also speculative, is a hunch that claims that concepts shared by particle physics and condensed matter physics (such as broken symmetry) may result from a yet undiscovered connection between the two realms of physics. I still consider this a plausible idea, though it could be just a coincidence.

Previously, I was under the assumption that the views expressed in the L&P article and the Wilczek article were somehow mutually exclusive. However, upon further reflection, I no longer think this is so and have realized that they are in fact quite compatible with each other. This is probably where my main misunderstanding lay in the previous posts, and I apologize for any confusion this may have caused.

Theory of Everything – Laughlin and Pines

I recently re-visited a paper written in 2000 by Laughlin and Pines entitled The Theory of Everything (pdf!). The main claim in the paper is that what we call the theory of everything in condensed matter (the Hamiltonian below) does not capture “higher organizing principles”. Condensed Concepts blog has a nice summary of the article.
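For reference, the Hamiltonian in question is the ordinary nonrelativistic one for electrons (coordinates $\mathbf{r}_j$, mass $m$) and nuclei (coordinates $\mathbf{R}_\alpha$, masses $M_\alpha$, charges $Z_\alpha$) interacting via Coulomb forces:

```latex
H = -\sum_{j} \frac{\hbar^2}{2m}\nabla^2_{j}
\;-\; \sum_{\alpha} \frac{\hbar^2}{2M_\alpha}\nabla^2_{\alpha}
\;-\; \sum_{j,\alpha} \frac{Z_\alpha e^2}{|\mathbf{r}_j - \mathbf{R}_\alpha|}
\;+\; \sum_{j<k} \frac{e^2}{|\mathbf{r}_j - \mathbf{r}_k|}
\;+\; \sum_{\alpha<\beta} \frac{Z_\alpha Z_\beta e^2}{|\mathbf{R}_\alpha - \mathbf{R}_\beta|}.
```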


Because we can measure quantities like e^2/h and h/2e in quantum Hall experiments and superconducting rings respectively, it must be that the theory of everything does not capture some essential physics that emerges only on a different scale. In their words:

These things [e.g. that we can measure e^2/h] are clearly true, yet they cannot be deduced by direct calculation from the Theory of Everything, for exact results cannot be predicted by approximate calculations. This point is still not understood by many professional physicists, who find it easier to believe that a deductive link exists and has only to be discovered than to face the truth that there is no link. But it is true nonetheless. Experiments of this kind work because there are higher organizing principles in nature that make them work.

If I am perfectly honest, I am one of those people who “believes that a deductive link exists”. Let me take the example of the BCS Hamiltonian. I do think that it is reasonable to start with the theory of everything, make a series of approximations, and arrive at the BCS Hamiltonian. From BCS, one can then derive the Ginzburg-Landau (GL) equations, as shown by Gor’kov (pdf!). Not only that, one can obtain the Josephson effect (where one can measure h/2e) by using either a BCS or a GL approach.
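For completeness, the two Josephson relations — obtainable from either the BCS or the GL starting point — are, in terms of the phase difference $\Delta\varphi$ across the junction and the voltage $V$:

```latex
I = I_c \sin\Delta\varphi,
\qquad
\hbar \frac{d(\Delta\varphi)}{dt} = 2eV.
```

The factor of $2e$ in the second relation (from the charge of a Cooper pair) is precisely where the measured flux quantum $\Phi_0 = h/2e$ comes from.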

The reason I bring this example up is that I would rather believe that a deductive link does exist and that, even though approximations have been made, some topological property survives at each “higher” level. Said another way, in going from the TOE to BCS to GL, one keeps some fundamental topological characteristics intact.

It is totally possible that what I am saying is gobbledygook. But I do think that the Laughlin-Pines viewpoint is speculative, radical, and has perhaps taken the Anderson “more is different” perspective too far. It is a thought-provoking article, partly because of the weight the authors’ names carry and partly because of the self-assurance of its tone, but I am a little more conservative in my scientific outlook. The TOE may not always be useful, but I don’t think that means that “no deductive link exists” either.

I’m curious to know whether you see things like Laughlin and Pines.