Tag Archives: Quantum Mechanics

Precision in Many-Body Systems

Measurements of the quantum Hall effect give a precise conductance in units of e^2/h. Measurements of the frequency of the AC current in a Josephson junction give a frequency of 2e/h times the applied voltage. Hydrodynamic circulation in liquid 4He is quantized in units of h/m_{4He}. These measurements (and similar ones, like flux quantization) are remarkable: they yield fundamental constants to a great degree of accuracy in a condensed matter setting, a setting which Murray Gell-Mann once referred to as “squalid state” systems. How is this possible?

At first sight, it is stunning that physics of the solid or liquid state could yield a measurement so precise. When we consider the defects, impurities, surfaces and other imperfections in a macroscopic system, these results become even more astounding.

So where does this precision come from? It turns out that in all cases, one is measuring a quantity that depends on the single-valued nature of an (appropriately defined) complex scalar wavefunction. The aforementioned quantities are measured in integer units, n, usually referred to as the winding number. Because the winding number is a topological quantity, in the sense that it arises in a multiply-connected space, these measurements are largely insensitive to small differences in their surroundings.

For instance, the leads used to measure the quantum Hall effect can be placed virtually anywhere on the sample, as long as the wires don’t cross each other. The samples can have any (two-dimensional) geometry, i.e. a square, a circle or some complicated corrugated shape. In the Josephson case, the weak links can be constrictions, an insulating oxide layer, a metal, etc. Imprecision in the experimental setup is not detrimental, as long as the topology of the experimental geometry remains the same.

Another ingredient required for this precision is a large number of particles. This can seem counter-intuitive, since one expects quantization at a microscopic rather than a macroscopic level, but it is the large number of particles that makes these effects possible. For instance, both the Josephson effect and the hydrodynamic circulation in 4He depend on the existence of a macroscopic complex scalar wavefunction or order parameter. In fact, if the superconductor becomes too small, effects like the Josephson effect, flux quantization and persistent currents all start to get washed out. There is a gigantic energy barrier preventing the decay from the n=1 current-carrying state to the n=0 non-current-carrying state due to the large number of particles involved (i.e. the higher winding number state is meta-stable). As one decreases the number of particles, the energy barrier is lowered and the system can start to tunnel from the higher winding number state to the lower winding number state.

In the quantum Hall effect, the samples need to be macroscopically large to prevent the states on opposite edges from interacting with each other. Once the edge states are able to interact, they may hybridize and the conductance quantization gets washed out. This has been visualized in the context of 3D topological insulators using angle-resolved photoemission spectroscopy, in this well-known paper. Again, a large sample is needed to observe the effect.

It is interesting to think about where else such a robust quantization may arise in condensed matter physics. I suspect that there exist similar kinds of effects in different settings that have yet to be uncovered.

Aside: If you are skeptical about the multiply-connected nature of the quantum Hall effect, you can read about Laughlin’s gauge argument in his Nobel lecture here. His argument critically depends on a multiply-connected geometry.

Friedel Sum Rule and Phase Shifts

When I took undergraduate quantum mechanics, one of the most painful subjects to study was scattering theory, due to the usage of special functions, phase shifts and partial waves. To be honest, the sight of those words still makes me shudder a little.

If you have felt like that at some point, I hope that this post will help alleviate some of that fear of phase shifts. Phase shifts appear in many classical contexts, and I think it is best to start thinking about them in that setting. Consider the following scenarios: a wave pulse on a rope is incident on (1) a fixed boundary and (2) a movable boundary. See below for a sketch, which was taken from here.

animation of wave pulse reflecting from hard boundary

Fixed Boundary Reflection

animation of wave pulse reflecting from soft boundary

Movable Boundary Reflection

Notice that in the fixed boundary case, one gets a phase shift of \pi, while in the movable boundary case, there is no phase shift. The reason that there is a phase shift of \pi in the former case is that the wave amplitude must be zero at the boundary. Therefore, when the wave first comes in and reflects, the only way to enforce the zero is to have the wave reflect with a \pi phase shift and interfere destructively with the incident pulse, cancelling it out perfectly.
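This bookkeeping can be checked numerically. Below is a quick sketch (arbitrary units, complex-exponential convention, all values chosen only for illustration) superposing an incident wave with a \pi-phase-shifted reflection and confirming that the amplitude vanishes at a fixed boundary:

```python
import numpy as np

k = 2.0                       # wavenumber (arbitrary units)
x = np.linspace(-10, 0, 501)  # fixed boundary sits at x = 0

incident = np.exp(1j * k * x)                         # wave moving toward the boundary
reflected = np.exp(1j * np.pi) * np.exp(-1j * k * x)  # reflection with a pi phase shift

total = incident + reflected

assert abs(total[-1]) < 1e-12                   # perfect cancellation at the boundary
assert np.allclose(total, 2j * np.sin(k * x))   # what remains is a standing wave
```

The second assertion shows the point made below for the quantum case: the superposition of the incident and \pi-shifted reflected waves is just a sine with a node at the wall.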

The important thing to note is that for elastic scattering, the case we will be considering in this post, the amplitude of the reflected (or scattered) pulse is the same as the incident pulse. All that has changed is the phase.

Let’s now switch to quantum mechanics. If we consider the same setup, where an incident wave hits an infinitely high wall at x=0, we basically get the same result as in the classical case.

InfiniteWall

Elastic scattering from an infinite barrier

If the incident and scattered wavefunctions are:

\psi_i = Ae^{ikx}      and      \psi_s=Be^{-ikx}

then B = -A = e^{i\pi}A because, as for the fixed boundary case above, the incident and scattered waves destructively interfere (i.e. \psi_i(0) + \psi_s(0) =0). The full wavefunction is then:

\psi(x) = A(e^{ikx}-e^{-ikx}) \sim \textrm{sin}(kx)

The last equality is a little misleading, since the wavefunction is not normalizable, but let’s just pretend the system is also bounded by a wall at large, but not quite infinite, negative x. Now consider a similar-looking, but pretty arbitrary, potential:

scatteringPotential

Elastic scattering from an arbitrary potential

What happens in this case? Well, again, the scattering is elastic, so the incident and reflected amplitudes must be the same away from the region of the potential. All that can change, therefore, is the phase of the reflected (scattered) wavefunction. We can therefore write, similar to the case above:

\psi(x) = A(e^{ikx}-e^{i(2\delta-kx)}) \sim \textrm{sin}(kx+\delta)

Notice that the sine term has now acquired a phase. What does this mean? It means that the energy of the wavefunction has changed, as would be expected for a different potential. If we had used box normalization for the infinite barrier case, kL=n\pi, then the energy eigenvalues would have been:

E_n = \hbar^2n^2\pi^2/2mL^2

Now, with the newly introduced potential, however, our box normalization leads to the condition kL+\delta(k)=n\pi, so that the new energies are:

E_n = \hbar^2(n\pi-\delta(k))^2/2mL^2

The energy eigenvalues move around, but since \delta(k) can be a pretty complicated function of k, we don’t actually know how they move. What’s clear, though, is that the number of energy eigenvalues is going to be the same: we didn’t create or destroy any energy eigenstates.
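To make this concrete, here is a small numerical sketch. It assumes a toy phase shift \delta(k) = -\arctan(k), made up purely for illustration, solves the quantization condition kL+\delta(k)=n\pi for each n, and shows that each level shifts while the count of states is preserved:

```python
import numpy as np
from scipy.optimize import brentq

# Toy model: particle in a box of size L with a scatterer at one end.
# delta(k) is a made-up, illustrative phase shift, not derived from
# any particular potential.
hbar, m, L = 1.0, 1.0, 50.0
delta = lambda k: -np.arctan(k)

def k_n(n):
    """Solve the quantization condition k L + delta(k) = n pi for k."""
    return brentq(lambda k: k * L + delta(k) - n * np.pi, 1e-9, 10.0)

for n in (1, 2, 3):
    k_free = n * np.pi / L                    # no scatterer: k L = n pi
    k_shift = k_n(n)                          # with the scatterer
    E_free = (hbar * k_free)**2 / (2 * m)
    E_shift = (hbar * k_shift)**2 / (2 * m)
    # each level moves, but there is still exactly one state per n
    print(n, E_shift - E_free)
```

With this (negative) \delta, every k_n is pushed slightly upward, but the labeling by n survives untouched, which is the counting argument used in the Friedel discussion below.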

Let’s now move on to some solid state physics. In a metal, one fills up N states, in accordance with the Pauli principle, up to k_F. If we introduce an impurity with a different number of valence electrons into the metal, we have effectively created a potential off which the electrons of the Fermi gas/liquid can scatter. Just like in the cases above, this potential will cause a phase shift in the electron wavefunctions present in the metal, changing their energies. The amplitudes of the incoming and outgoing electrons will again be the same far from the scattering impurity.

This time, though, there is something to worry about: the phase shift and the corresponding energy shift can potentially move states from above the Fermi energy to below it, or vice versa. Suppose I introduce an impurity with one extra valence electron compared to the host metal, for instance, a Zn impurity in Cu. Since Zn has an extra electron, the Fermi sea has to accommodate an extra energy state. I can illustrate the scenario away from, but in the neighborhood of, the Zn impurity schematically like so:

statesChanging

E=0 represents the Fermi energy. An extra state appears below the Fermi energy because of the addition of a Zn impurity.

It seems quite clear from the description above that the phase shift must be related somehow to the valence difference between the impurity and the host metal. Without the impurity, we fill up the available states up to the Fermi wavevector, k_F=N_{max}\pi/L, where N_{max} indexes the highest occupied state. In the presence of the impurity, we now have k_F=(N'_{max}\pi-\delta(k_F))/L. Because the Fermi wavevector does not change (the density of the metal does not change), we have that:

N'_{max} = N_{max} + \delta(k_F)/\pi

Therefore, the number of extra states needed now to fill up the states to the Fermi momentum is:

N'_{max}-N_{max}=Z = \delta(k_F)/\pi,

where Z is the valence difference between the impurity and the host metal. Notice that in this case, each extra state that moves below the Fermi level gives rise to a phase shift of \pi. This is actually a simplified version of the Friedel sum rule. It means that the electrons at the Fermi energy have changed the structure of their wavefunctions, by acquiring a phase shift, to exactly screen out the impurity at large distances.

There is just one thing we are missing: I didn’t take into account the degeneracy of the energy levels of the Fermi sea electrons. If I do this, as Friedel did assuming a spherically symmetric potential in three dimensions, we get a degeneracy of 2(2l+1) for each l, where the factor of 2 comes from spin and (2l+1) comes from the azimuthal symmetry. We can then write the Friedel sum rule more precisely:

Z = \frac{2}{\pi} \sum_l (2l+1)\delta_l(k_F).

We just had to take into consideration the high degeneracy of energy states in this spherically symmetric system. What this means, informally, is that the \pm\pi phase shift from each energy level that moves below the Fermi energy gets distributed across the relevant angular momentum channels. They each get a little slice of the phase shift.

An example: if I put Ni (which is primarily a d-wave, l=2, scatterer in this context) in an Al host, we get that Z=-1. This is because Ni has valence 3d^94s^1. We can then obtain from the Friedel sum rule that the phase shift will be \sim -\pi/10. If we move on to Co, where Z=-2, we get \sim -2\pi/10, and so forth for Fe, Mn, Cr, etc. Only after all ten d-states shift above the Fermi energy do we acquire a phase shift of -\pi.

Note that when the phase shift is \sim\pm\pi/2, the impurity scatters strongly, since the scattering cross section \sigma \propto |\textrm{sin}(\delta_l)|^2. This is referred to as resonance scattering from an impurity, and again bears a striking resemblance to the classical driven harmonic oscillator. In the case above, it would correspond to Cr impurities in the Al host, which have a phase shift of \sim -5\pi/10. Indeed, the resistivity of Al with Cr impurities is the highest among the first-row transition metal impurities, as shown below:

aluminum impurities

Hence, just by knowing the valence difference, we can get out a non-trivial phase shift! This is a pretty remarkable result, because we don’t have to use the (inexact and perturbative) Born approximation. And it comes from (I hope!) some pretty intuitive physics.
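The trend above can be tabulated in a few lines. This is a sketch: the nominal valence differences Z and the assumption that only the l=2 channel contributes both follow the discussion above, not any first-principles calculation:

```python
import numpy as np

# Nominal valence differences Z of 3d impurities relative to the Al host,
# as in the text (d-wave scatterers)
Z = {"Ni": -1, "Co": -2, "Fe": -3, "Mn": -4, "Cr": -5}

l = 2  # assume pure d-wave scattering: only the l = 2 channel contributes

for name, z in Z.items():
    delta_2 = z * np.pi / (2 * (2 * l + 1))  # Friedel sum rule, single channel
    cross_section = np.sin(delta_2) ** 2     # proportional to impurity resistivity
    print(f"{name}: delta = {delta_2 / np.pi:+.2f} pi, sin^2(delta) = {cross_section:.2f}")
```

The sin^2 column peaks for Cr, where \delta_2 = -\pi/2, reproducing the resonance-scattering maximum in the resistivity plot.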

Angular Momentum and Harmonic Oscillators

There are many analogies that can be drawn between spin angular momentum and orbital angular momentum. This is because they obey identical commutation relations:

[L_x, L_y] = i\hbar L_z     &     [S_x, S_y] = i\hbar S_z.

One can circularly permute the indices to obtain the other commutation relations. However, there is one crucial difference between the orbital and spin angular momenta: components of the orbital angular momentum cannot take half-integer values, whereas this is permitted for spin angular momentum.

The forbidden half-integer quantization stems from the fact that orbital angular momentum can be expressed in terms of the position and momentum operators:

\textbf{L} = \textbf{r} \times \textbf{p}.

While in most textbooks the integer quantization of the orbital angular momentum is shown by appealing to the Schrodinger equation, Schwinger demonstrated that by mapping the angular momentum problem to that of two uncoupled harmonic oscillators (pdf!), integer quantization easily follows.

I’m just going to show this for the z-component of the angular momentum, since the x- and y-components can easily be obtained by permuting indices. L_z can be written as:

L_z = xp_y - yp_x

As Schwinger often did to great effect, he made a canonical transformation to a different basis and wrote:

x_1 = \frac{1}{\sqrt{2}} [x+(a^2/\hbar)p_y]

x_2 = \frac{1}{\sqrt{2}} [x-(a^2/\hbar)p_y]

p_1 = \frac{1}{\sqrt{2}} [p_x-(\hbar/a^2)y]

p_2 = \frac{1}{\sqrt{2}} [p_x+(\hbar/a^2)y],

where a is just some variable with units of length. Now, since the transformation is canonical, these new operators satisfy the same commutation relations, i.e. [x_1,p_1]=i\hbar, [x_1,p_2]=0, and so forth.

If we now write L_z in terms of the new operators, we find something rather amusing:

L_z = (\frac{a^2}{2\hbar}p_1^2 + \frac{\hbar}{2a^2}x_1^2) - ( \frac{a^2}{2\hbar}p_2^2 + \frac{\hbar}{2a^2}x_2^2).

With the substitution \hbar/a^2 \rightarrow m, L_z can be written as so:

L_z = (\frac{1}{2m}p_1^2 + \frac{m}{2}x_1^2) - ( \frac{1}{2m}p_2^2 + \frac{m}{2}x_2^2).

Each of the two terms in brackets can be identified as Hamiltonians for harmonic oscillators with angular frequency, \omega, equal to one. The eigenvalues of the harmonic oscillator problem can therefore be used to obtain the eigenvalues of the z-component of the orbital angular momentum:

L_z|\psi\rangle = (H_1 - H_2)|\psi\rangle = \hbar(n_1 - n_2)|\psi\rangle = m\hbar|\psi\rangle,

where H_i denotes the Hamiltonian operator of the i^{th} oscillator. Since the n_i can only take integer values in the harmonic oscillator problem, integer quantization of Cartesian components of the angular momentum also naturally follows.
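The algebra behind the canonical transformation is easy to verify symbolically. Here is a quick check using sympy; since x commutes with p_y and y commutes with p_x, ordinary commuting symbols are faithful for every product that appears:

```python
import sympy as sp

x, y, px, py, a, hbar = sp.symbols('x y p_x p_y a hbar', real=True)
# x commutes with p_y and y commutes with p_x, so commuting symbols
# faithfully capture all the cross terms below.

x1 = (x + (a**2 / hbar) * py) / sp.sqrt(2)
x2 = (x - (a**2 / hbar) * py) / sp.sqrt(2)
p1 = (px - (hbar / a**2) * y) / sp.sqrt(2)
p2 = (px + (hbar / a**2) * y) / sp.sqrt(2)

H1 = (a**2 / (2 * hbar)) * p1**2 + (hbar / (2 * a**2)) * x1**2
H2 = (a**2 / (2 * hbar)) * p2**2 + (hbar / (2 * a**2)) * x2**2

# The difference of the two oscillator Hamiltonians is exactly L_z
assert sp.simplify(H1 - H2 - (x * py - y * px)) == 0
```

The quadratic terms cancel between H1 and H2, and only the cross terms survive, leaving xp_y - yp_x.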

How do we interpret all of this? Let’s imagine that we have n_1 spins pointing up and n_2 spins pointing down, and consider the angular momentum raising and lowering operators. The angular momentum raising operator in this example, L_+ = \hbar a_1^\dagger a_2, corresponds to flipping a spin of angular momentum \hbar/2 from down to up. Here, a_1^\dagger (a_2) is the creation (annihilation) operator for oscillator 1 (2). The change in angular momentum is therefore +\hbar/2 -(-\hbar/2) = \hbar. It is this constraint, that we cannot “destroy” these spins but only flip them, that results in the integer quantization of orbital angular momentum.

I find this solution to the forbidden half-integer problem much more illuminating than the standard one using the Schrodinger equation and spherical harmonics. The analogy between the uncoupled oscillators and angular momentum is very rich and actually extends much further than this simple example. It has even been used in papers on supersymmetry (which, needless to say, extends far beyond my realm of knowledge).

Schrodinger’s Cat and Macroscopic Quantum Mechanics

A persisting question that we inherited from the forefathers of the quantum formalism is why quantum mechanics, which works emphatically well on the micro-scale, seems at odds with our intuition at the macro-scale. The micro/macro logical disconnect was famously captured by Schrodinger in his description of a cat in a superposition of alive and dead states, intended to demonstrate the absurdity of applying quantum mechanics on the macro-scale. There have been many attempts in the theoretical literature to come to grips with this apparent contradiction, the most popular of which goes under the umbrella of decoherence, where interaction with the environment results in a loss of information.

Back in 1999, Arndt, Zeilinger and co-workers observed two-slit interference of C60 molecules (i.e. buckyballs), then the largest molecules to exhibit such interference phenomena. The grating used in the experiment had a period of about 100 nm, while the approximate de Broglie wavelength of the C60 molecules was 2.5 picometers. This was a startling discovery for a few reasons:

  1. The beam of C60 molecules used here was far from being perfectly monochromatic. In fact, there was a pretty significant spread of initial velocities, with the full width at half maximum (\Delta v/v) getting to be as broad as 60%.
  2. The C60 molecules were not in their ground state. The initial beam was prepared by sublimating the molecules in an oven which was heated to 900-1000K. It is estimated, therefore, that there were likely 3 to 4 photons exchanged with the background blackbody field during the beam’s passage through the instrument. Hence the C60 molecules can be said to have been strongly interacting with the environment.
  3. The molecule consists of approximately 360 protons, 360 neutrons and 360 electrons (about 720 amu), which means that treating the C60 molecule as a purely classical object would be perfectly adequate for most purposes.
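The quoted wavelength is a quick back-of-the-envelope check away. The sketch below assumes a mean beam velocity of roughly 220 m/s, a plausible order of magnitude for a ~900 K source; that number is an assumption, not taken from the text:

```python
h = 6.62607015e-34   # Planck constant (J s)
u = 1.66053907e-27   # atomic mass unit (kg)

m_C60 = 720 * u      # C60: 60 carbon atoms at 12 u each
v = 220.0            # m/s, assumed mean beam velocity

lam = h / (m_C60 * v)        # de Broglie wavelength, lambda = h / (m v)
print(lam * 1e12, "pm")      # a few picometers, far below the 100 nm grating period
```

That the wavelength is four to five orders of magnitude smaller than the grating period is what makes resolving the interference fringes such a feat.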

The record set by the C60 molecule has since been smashed by larger molecules with masses up to about 10,000 amu. This is within one order of magnitude of the mass of a small virus. If I were a betting man, I wouldn’t put money against viruses exhibiting interference effects as well.

This of course raises the question of how far these experiments can go and to what extent they can be extended toward the human scale. Unfortunately, we will probably have to wait a while for a definitive answer to that question. Nevertheless, these experiments are a tour-de-force and make us face some of our deepest discomforts concerning the quantum formalism.

Broken Symmetry and Degeneracy

Oftentimes, when I finally understand a basic concept I had struggled with for a long time, I wonder, “Why in the world couldn’t someone have just said that?!” A while later, I will return to a textbook or paper that says precisely what I wanted to hear. I then realize that the concept just wouldn’t “stick” in my head and required some time of personal and thoughtful deliberation. It reminds me of a humorous quote by E. Rutherford:

All of physics is either impossible or trivial. It is impossible until you understand it and then it becomes trivial.

I definitely experienced this when first studying the relationship between broken symmetry and degeneracy. As I usually do on this blog, I hope to illustrate the central points within a pretty rigorous, but mostly picture-based framework.

For the rest of this post, I’m going to follow P. W. Anderson’s article More is Different, where I think these ideas are best addressed without any equations. However, I’ll be adding a few details which I wished I had understood upon my first reading.

If you Google “ammonia molecule” and look at the images, you’ll get something that looks like this:

ammonia

With the constraint that the nitrogen atom must sit on the line through the center of the triangle formed by the hydrogen atoms, we can approximate the potential as one-dimensional. Along this line, the potential will look, in some crude approximation, something like this:

AmmoniaPotential

Notice that the molecule has inversion (or parity) symmetry about the triangular hydrogen atom network. For non-degenerate wavefunctions, the quantum stationary states must also be parity eigenstates. We expect, therefore, that the stationary states will look something like this for the ground state and first excited state respectively:

SymmetricDoubleWell

Ground State

AntiSymmetricDoubleWell

First Excited State

The tetrahedral (pyramid-shaped) ammonia molecule in the image above is clearly not inversion symmetric, though. What does this mean? Well, it implies that the pyramidal configuration cannot be an energy eigenstate. What has to happen, therefore, is that the ammonia molecule has to oscillate between the two configurations pictured below:

ammoniaInversion

The oscillation between the two states can be thought of as the nitrogen atom tunneling from one valley to the other in the potential energy diagram above. The oscillation occurs about 24 billion times per second, i.e. with a frequency of 24 GHz.

To those familiar with quantum mechanics, this is a classic two-state problem and there’s nothing particularly new here. Indeed, the tetrahedral structures can be written as linear combinations of the symmetric and anti-symmetric states as so:

| 1 \rangle = \frac{1}{\sqrt{2}} (e^{i \omega_S t}|S\rangle +e^{i \omega_A t}|A\rangle)

| 2 \rangle = \frac{1}{\sqrt{2}} (e^{i \omega_S t}|S\rangle -e^{i \omega_A t}|A\rangle)

One can see that an oscillation frequency of \omega_S-\omega_A will result from the interference between the symmetric and anti-symmetric states.
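This interference is easy to see numerically. Below is a minimal sketch with toy frequencies (chosen only for illustration), starting in the pyramid state |1⟩ and tracking the probability of finding the inverted pyramid |2⟩:

```python
import numpy as np

# Two stationary states |S>, |A> with a small splitting (toy units)
w_S, w_A = 1.00, 1.05
dw = w_A - w_S                 # the splitting sets the inversion frequency

t = np.linspace(0, 4 * np.pi / dw, 2000)

# Start in the pyramid state |1> = (|S> + |A>)/sqrt(2); each stationary
# component simply accumulates its own phase
amp_S = np.exp(-1j * w_S * t) / np.sqrt(2)
amp_A = np.exp(-1j * w_A * t) / np.sqrt(2)

# Probability of finding the inverted pyramid |2> = (|S> - |A>)/sqrt(2)
P2 = np.abs((amp_S - amp_A) / np.sqrt(2)) ** 2

# The molecule inverts periodically at the beat frequency w_A - w_S
assert np.allclose(P2, np.sin(dw * t / 2) ** 2)
```

The probability oscillates as sin^2((\omega_A-\omega_S)t/2): the molecule fully inverts, then returns, at the beat frequency between the two stationary states.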

The interest in this problem, though, comes from examining a certain limit. First, consider what happens when one replaces the nitrogen atom with a phosphorus atom (PH3): the oscillation frequency decreases to about 0.14 MHz, about 200,000 times slower than in NH3. If one does the same replacement with an arsenic atom instead (AsH3), the oscillation frequency slows down to roughly 10^{-8} Hz, which is equivalent to about one oscillation every two years!

This slowing down can be simply modeled in the picture above by imagining the raising of the barrier height between the two valleys like so:

HighBarrierPotential

In the case of an amino acid or a sugar, which are both known to be chiral, the period of oscillation is thought to be greater than the age of the universe. Basically, the molecules never invert!

So what is happening here? Don’t worry, we aren’t violating any laws of quantum mechanics.

As the barrier height goes to infinity, the states in the two wells become degenerate. This degeneracy is key, because degenerate stationary states no longer have to be parity eigenstates. Graphically, we can illustrate this as so:

SymmetricDegenerate

Symmetric state, E=E_0

Anti-symmetricDegenerate

Anti-symmetric state, E=E_0

We can now write the two pyramidal states in terms of the symmetric and anti-symmetric states:

| 1 \rangle = e^{i\omega t} \frac{1}{\sqrt{2}} (|S\rangle + |A\rangle)

| 2 \rangle = e^{i\omega t} \frac{1}{\sqrt{2}} (|S\rangle - |A\rangle)

These are now bona fide stationary states. There is therefore a deep connection between degeneracy and the broken symmetry of a ground state, as this example so elegantly demonstrates.

When there is a degeneracy, the ground state no longer has to obey the symmetry of the Hamiltonian.

Technically, the barrier height never reaches infinity and there is never true degeneracy unless the number of particles in the problem (or the mass of the nitrogen atom) approaches infinity, but let’s leave that story for another day.

The Most Surprising Consequences of Quantum Mechanics

In science, today’s breakthroughs quickly become tomorrow’s monotony. Many of us use quantum mechanics everyday, but we don’t always think about its paradigm-shifting consequences and its remaining unanswered questions.

There are many online lists stating the most remarkable facts of quantum mechanics, but they often don’t adequately distinguish between the formalism and the interpretation of quantum mechanics. In my opinion, it is somewhat disingenuous to present interpretations of quantum mechanics as part of the formalism, though this line is not always clear. The many-worlds view of quantum mechanics, which often gets media coverage, is a prime example. Not only is it merely an interpretation of quantum mechanics, it does not even command a consensus within the scientific community.

Here is a list that attempts to discuss what I find to be some of the most remarkable consequences of quantum mechanics. Some of these items do require some interpretation, but they are at least consensus viewpoints.

1. The wavefunction is not a measurable quantity

Unlike in most other realms of physics (such as classical mechanics and general relativity), in quantum mechanics, one of the main quantities that physicists attempt to calculate cannot be directly measured. This is not because we don’t have adequate tools to do so. This is because it is not possible to do so. The wavefunction is a complex quantity, and as such, cannot be observed.

2. Quantum mechanics makes probabilistic, not deterministic, predictions

Within the realm of quantum theory, it is only possible to predict the probability of an outcome. In that sense, one does not know the trajectory of a single particle in a Young’s two-slit setup for instance, but one can predict the statistical distribution of many particles on the screen behind the slits.

3. Heisenberg uncertainty relations

This well-known theorem states that, for instance, one cannot measure the angular momentum of a particle along the x- and y-directions simultaneously without some inherent inaccuracy.

4. Identical particles, spin-statistics theorem and quantum statistics

All electrons are made the same. All photons (of the same frequency) are made the same. It turns out that there are two categories of identical particles in three dimensions: bosons, which have integer spin, and fermions, which have half-integer spin. The properties of bosons allow for superlative low-temperature phenomena like Bose-Einstein condensation. The existence of fermions and the Pauli exclusion principle, on the other hand, makes sure that your hand doesn’t go through a table when you put your hand on it!

5. Non-locality, Entanglement, Bell’s Theorem

I’ve written about this on several occasions, but I will just say that quantum theory is inherently non-local. Einstein spent a painstaking 15 years trying to make the theory of gravity local, only to see quantum mechanics pull the rug out from under his feet. See here for further details.

6. The scalar and vector potentials have measurable consequences and Berry phases

In classical electrodynamics, the quantities with measurable experimental consequences are the electric and magnetic fields. In quantum mechanics, the Aharonov-Bohm effect demonstrates that the vector potential can have experimental consequences even where the fields vanish. It is important to note that the measurable quantity, the integral of the vector potential around a closed loop (i.e. the enclosed flux), is gauge-invariant.


If you think I’ve left anything off, please let me know!

Visualization and Analogies

Like many, my favorite subject in high school mathematics was geometry. This was probably the case because it was one of the few subjects where I was able to directly visualize everything that was going on.

I find that I am prone to thinking in pictures or “visual analogies”, because this enables me to understand and remember concepts better. Solutions to certain problems may then become obvious. I’ve illustrated this kind of thinking on a couple occasions on this blog when addressing plasmons, LO-TO splitting and the long-range Coulomb interaction and also when speaking about the “physicist’s proof”.

Let me give another example, that of measurement probability. Suppose I have a spin-1/2 particle in the following initial state:

|\psi\rangle{}=\sqrt{\frac{1}{3}}|+\rangle{} + \sqrt{\frac{2}{3}}|-\rangle{}

In this case, when measuring S_z, we find that the probability of finding the particle in the spin-up state will be P(S_z = +) = 1/3.

Let us now consider a slightly more thought-provoking problem. Imagine that we have a spin-1 particle in the following state:

|\psi\rangle{}=\sqrt{\frac{1}{3}}|1\rangle{} + \sqrt{\frac{1}{2}}|0\rangle{} + \sqrt{\frac{1}{6}}|-1\rangle{}

Suppose now I measure S_z^2 and obtain +1. This measurement eliminates the possibility of subsequently measuring the |0\rangle{} state. The question is: what is the probability of now measuring -1 if I measure S_z, i.e. P(S_z = -1 | S_z^2 = 1)? (Have a go at solving this problem before reading my solution below.)

My favorite solution to this problem involves a visual interpretation (obviously!). Imagine axes labelled by the S_z kets and the initial state by a vector in this space as so:

BeforeMeasurement

Now, the key involves thinking of the measurement of S_z^2 as a projection onto the (1,-1) plane as so:

AfterMeasurement

After this projection, though, the wavefunction is unnormalized. Therefore, one needs to normalize (or re-scale) the wavefunction so that the probabilities again add up to one. This can be done quite simply, and the new wavefunction is:

|\psi_{new}\rangle{}=\sqrt{\frac{2}{3}}|1\rangle{} + \sqrt{\frac{1}{3}}|-1\rangle{}

Hence the probability of measuring S_z = -1 has increased to 1/3, whereas it was only 1/6 before measuring S_z^2. It is important to note that the ratio of the probabilities, P(S_z=1)/P(S_z=-1), and the relative phase e^{i\phi} between the |+1\rangle{} and the |-1\rangle{} kets do not change after the projection.
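The project-then-renormalize procedure is short enough to check numerically. A quick sketch (the basis ordering (|+1⟩, |0⟩, |−1⟩) is an arbitrary choice):

```python
import numpy as np

# Initial spin-1 state in the (|+1>, |0>, |-1>) basis
psi = np.array([np.sqrt(1 / 3), np.sqrt(1 / 2), np.sqrt(1 / 6)])

# Measuring S_z^2 = 1 projects onto the span of |+1> and |-1>
P = np.diag([1, 0, 1])
psi_proj = P @ psi

# Renormalize so that the probabilities sum to one again
psi_new = psi_proj / np.linalg.norm(psi_proj)

probs = np.abs(psi_new) ** 2
assert np.allclose(probs, [2 / 3, 0, 1 / 3])  # P(S_z = -1) rose from 1/6 to 1/3
```

Note that the renormalization multiplies both surviving amplitudes by the same real factor, which is why their ratio and relative phase are untouched.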

I find this solution to the problem particularly illuminating because it permits a visual geometric interpretation and is actually helpful in solving the problem.

Please let me know if you find this kind of visualization as helpful as I do, because I hope to write posts in the future about the Anderson pseudo-spin representation of BCS theory and about the water analogy in electronic circuits.