Quantum Mechanics

Richard Fitzpatrick
Professor of Physics
The University of Texas at Austin

Major Sources

The textbooks that I have consulted most frequently while developing course material are:

The Principles of Quantum Mechanics, P.A.M. Dirac, 4th Edition (revised) (Oxford University Press, Oxford UK, 1958).
The Feynman Lectures on Physics, R.P. Feynman, R.B. Leighton, and M. Sands, Volumes I–III (Addison-Wesley, Reading MA, 1965).
Quantum Mechanics, E. Merzbacher, 2nd Edition (John Wiley & Sons, New York NY, 1970).
Quantum Physics, S. Gasiorowicz, 2nd Edition (John Wiley & Sons, New York NY, 1996).
Modern Quantum Mechanics, J.J. Sakurai and J. Napolitano, 2nd Edition (Addison-Wesley, Boston MA, 2011).

Purpose of Course

Quantum mechanics was developed during the first few decades of the twentieth century via a series of inspired guesses made by various physicists, including Planck, Einstein, Bohr, Schrödinger, Heisenberg, Pauli, and Dirac. All of these scientists were trying to construct a self-consistent theory of microscopic dynamics that was compatible with experimental observations. The purpose of this course is to present quantum mechanics in a systematic fashion, starting from the fundamental postulates, and developing the theory in as logical a manner as possible.

1 Fundamental Concepts

1.1 Breakdown of Classical Physics

The necessity for a departure from classical physics is demonstrated by the following phenomena:

1. Anomalous Atomic and Molecular Stability. According to classical physics, an electron orbiting an atomic nucleus undergoes acceleration and should, therefore, lose energy via the continuous emission of electromagnetic radiation, causing it to gradually spiral in towards the nucleus. Experimentally, this is not observed to happen.

2. Anomalously Low Atomic and Molecular Specific Heats.
According to the equipartition theorem of classical physics, each degree of freedom of an atomic or molecular system should contribute R/2 to its molar specific heat capacity, where R is the molar ideal gas constant. In fact, only the translational, and some rotational, degrees of freedom seem to contribute. The vibrational degrees of freedom appear to make no contribution at all (except at high temperatures). Incidentally, this fundamental problem with classical physics was known and appreciated by the middle of the nineteenth century. Stories that physicists at the start of the twentieth century thought that classical physics explained everything, and that there was nothing left to discover, are largely apocryphal (see Feynman, Volume I, Chapter 40).

3. Ultraviolet Catastrophe. According to classical physics, the equilibrium energy density of an electromagnetic field contained within a vacuum cavity whose walls are held at a fixed temperature is infinite, due to a divergence of energy carried by short-wavelength modes. This divergence is called the ultraviolet catastrophe. Experimentally, there is no such divergence, and the total energy density is finite.

4. Wave-Particle Duality. Classical physics treats waves and particles as completely distinct phenomena. However, various experiments (e.g., the interference of light, the photoelectric effect, electron diffraction) demonstrate that waves sometimes act as if they were streams of particles, and streams of particles sometimes act as if they were waves. This behavior is completely inexplicable within the framework of classical physics.

1.2 Photon Polarization

It is known experimentally that if plane polarized light is used to eject photo-electrons then there is a preferred direction of emission of the electrons. Clearly, the polarization properties of light, which are more usually associated with its wave-like behavior, also extend to its particle-like behavior.
In particular, a polarization can be ascribed to each individual photon in a beam of light. Consider the following well-known experiment. A beam of plane polarized light is passed through a polarizing film, which is normal to the beam's direction of propagation, and which has the property that it is only transparent to light whose plane of polarization lies perpendicular to its optic axis (which is assumed to lie in the plane of the film). Classical electromagnetic wave theory tells us that if the beam is polarized perpendicular to the optic axis then all of the light is transmitted, if the beam is polarized parallel to the optic axis then none of the light is transmitted, and if the light is polarized at an angle α to the axis then a fraction sin²α of the beam energy is transmitted.

Let us try to account for these observations at the individual photon level. A beam of light that is plane polarized in a certain direction is made up of a stream of photons which are each plane polarized in that direction. This picture leads to no difficulty if the plane of polarization lies parallel or perpendicular to the optic axis of the polarizing film. In the former case, none of the photons are transmitted, and, in the latter case, all of the photons are transmitted. But, what happens in the case of an obliquely polarized incident beam?

The previous question is not very precise. Let us reformulate it as a question relating to the result of some experiment that we could perform. Suppose that we were to fire a single photon at a polarizing film, and then look to see whether or not it emerges on the other side. The possible results of the experiment are that either a whole photon (whose energy is equal to the energy of the incident photon) is observed, or no photon is observed. Any photon that is transmitted through the film must be polarized perpendicular to the film's optic axis.
Furthermore, it is impossible to imagine (in physics) finding part of a photon on the other side of the film. If we repeat the experiment a great number of times then, on average, a fraction sin²α of the photons are transmitted through the film, and a fraction cos²α are absorbed. Thus, given that the trials are statistically independent of one another, we must conclude that a photon has a probability sin²α of being transmitted as a photon polarized in the plane perpendicular to the optic axis, and a probability cos²α of being absorbed. These values for the probabilities lead to the correct classical limit for a beam containing a large number of photons.

Note that we have only been able to preserve the individuality of photons, in all cases, by abandoning the determinacy of classical theory, and adopting a fundamentally probabilistic approach. We have no way of knowing whether an individual obliquely polarized photon is going to be absorbed by, or transmitted through, a polarizing film. We only know the probability of each event occurring. This is a fairly sweeping statement. Recall, however, that the state of a photon is fully specified once its energy, direction of propagation, and polarization are known. If we imagine performing experiments using monochromatic light, normally incident on a polarizing film, with a particular oblique polarization, then the state of each individual photon in the beam is completely specified, and nothing remains to uniquely determine whether the photon is transmitted or absorbed by the film.

The above discussion about the results of an experiment with a single obliquely polarized photon incident on a polarizing film answers all that can be legitimately asked about what happens to the photon when it reaches the film. Questions as to what determines whether the photon is transmitted or not, or how it changes its direction of polarization, are illegitimate, because they do not relate to the outcome of a possible experiment.
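The statistical reasoning above can be illustrated with a minimal Monte Carlo sketch (assuming Python with numpy; the angle and photon count are arbitrary choices): each photon is independently transmitted with probability sin²α, and the transmitted fraction converges to the classical result for a large beam.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = np.deg2rad(30.0)     # angle between the polarization plane and the optic axis
n_photons = 200_000

# Each photon is independently transmitted with probability sin^2(alpha),
# as inferred in the text from the classical transmitted fraction.
transmitted = rng.random(n_photons) < np.sin(alpha) ** 2

# The transmitted fraction approaches sin^2(alpha), and the absorbed
# fraction approaches cos^2(alpha), recovering the classical limit.
print(transmitted.mean())    # close to sin^2(30 deg) = 0.25
```

Each trial is a whole photon or nothing; only the long-run fractions reproduce the classical energy split.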
Nevertheless, some further description is needed, in order to allow the results of this experiment to be correlated with the results of other experiments that can be performed using photons.

The further description provided by quantum mechanics is as follows. It is supposed that a photon polarized obliquely to the optic axis can be regarded as being partly in a state of polarization parallel to the axis, and partly in a state of polarization perpendicular to the axis. In other words, the oblique polarization state is some sort of superposition of two states of parallel and perpendicular polarization. Because there is nothing special about the orientation of the optic axis in our experiment, we deduce that any state of polarization can be regarded as a superposition of two mutually perpendicular states of polarization.

When we cause a photon to encounter a polarizing film, we are subjecting it to an observation. In fact, we are observing whether it is polarized parallel or perpendicular to the film's optic axis. The effect of making this observation is to force the photon entirely into a state of parallel or perpendicular polarization. In other words, the photon has to jump suddenly from being partly in each of these two states to being entirely in one or the other of them. Which of the two states it will jump into cannot be predicted, but is governed by probability laws. If the photon jumps into a state of parallel polarization then it is absorbed. Otherwise, it is transmitted. Note that, in this example, the introduction of indeterminacy into the problem is clearly connected with the act of observation. In other words, the indeterminacy is related to the inevitable disturbance of the system associated with the act of observation.

1.3 Fundamental Principles of Quantum Mechanics

There is nothing special about the transmission and absorption of photons through a polarizing film.
Exactly the same conclusions as those outlined above are obtained by studying other simple experiments, such as the interference of photons (see Dirac, Section I.3), and the Stern-Gerlach experiment (see Sakurai, Chapter 1; Feynman, Chapter 5). The study of these simple experiments leads us to formulate the following fundamental principles of quantum mechanics:

1. Dirac's Razor. Quantum mechanics can only answer questions regarding the outcome of possible experiments. Any other questions lie beyond the realms of physics.

2. Principle of the Superposition of States. Any microscopic system (i.e., an atom, molecule, or particle) in a given state can be regarded as being partly in each of two or more other states. In other words, any state can be regarded as a superposition of two or more other states. Such superpositions can be performed in an infinite number of different ways.

3. Principle of Indeterminacy. An observation made on a microscopic system causes it to jump into one or more particular states (which are related to the type of observation). It is impossible to predict into which final state a particular system will jump. However, the probability of a given system jumping into a given final state can be predicted.

The first of these principles was formulated by quantum physicists (such as Dirac) in the 1920's to fend off awkward questions such as "How can a system suddenly jump from one state into another?", or "How does a system decide which state to jump into?". As we shall see, the second principle is the basis for the mathematical formulation of quantum mechanics. The final principle is still rather vague. We need to extend it so that we can predict which possible states a system can jump into after a particular type of observation, as well as the probability of the system making a particular jump.

1.4 Ket Space

Consider a microscopic system composed of particles, or bodies, with specific properties (mass, moment of inertia, etc.)
interacting according to specific laws of force. There will be various possible motions of the particles, or bodies, consistent with these laws of force. Let us term each such motion a state of the system. According to the principle of superposition of states, any given state can be regarded as a superposition of two or more other states. Thus, states must be related to mathematical quantities of a kind that can be added together to give other quantities of the same kind. The most obvious examples of such quantities are vectors.

Let us consider a particular microscopic system in a particular state, which we label A: e.g., a photon with a particular energy, momentum, and polarization. We can represent this state as a particular vector, which we also label A, residing in some vector space, where the other elements of the space represent all of the other possible states of the system. Such a space is called a ket space (after Dirac). The state vector A is conventionally written

|A⟩. (1.1)

Suppose that state A is, in fact, the superposition of two different states, B and C. This interrelation is represented in ket space by writing

|A⟩ = |B⟩ + |C⟩, (1.2)

where |B⟩ is the vector relating to the state B, etc. For instance, state |B⟩ might represent a photon propagating in the z-direction, and plane polarized in the x-direction, and state |C⟩ might represent a similar photon plane polarized in the y-direction. In this case, the sum of these two states represents a photon whose plane of polarization makes an angle of 45° with both the x- and y-directions (by analogy with classical physics). This latter state is represented by |B⟩ + |C⟩ in ket space.

Suppose that we want to construct a state whose plane of polarization makes an arbitrary angle α with the x-direction. We can do this via a suitably weighted superposition of states B and C. By analogy with classical physics, we require cos α of state B, and sin α of state C.
This new state is represented by

cos α |B⟩ + sin α |C⟩ (1.3)

in ket space. Note that we cannot form a new state by superposing a state with itself. For instance, a photon polarized in the y-direction superposed with another photon polarized in the y-direction (with the same energy and momentum) gives the same photon. This implies that the ket vector

c1 |A⟩ + c2 |A⟩ = (c1 + c2) |A⟩ (1.4)

corresponds to the same state that |A⟩ does. Thus, ket vectors differ from conventional vectors in that their magnitudes, or lengths, are physically irrelevant. All the states of the system are in one to one correspondence with all the possible directions of vectors in the ket space, no distinction being made between the directions of the ket vectors |A⟩ and −|A⟩. There is, however, one caveat to the above statements. If c1 + c2 = 0 then the superposition process yields nothing at all: i.e., no state. The absence of a state is represented by the null vector |0⟩ in ket space. The null vector has the fairly obvious property that

|A⟩ + |0⟩ = |A⟩ (1.5)

for any vector |A⟩. The fact that ket vectors pointing in the same direction represent the same state relates ultimately to the quantization of matter: i.e., the fact that it comes in irreducible packets called photons, electrons, atoms, etc. If we observe a microscopic system then we either see a state (i.e., a photon, or an atom, or a molecule, etc.) or we see nothing—we can never see a fraction or a multiple of a state. In classical physics, if we observe a wave then the amplitude of the wave can take any value between zero and infinity. Thus, if we were to represent a classical wave by a vector then the magnitude, or length, of the vector would correspond to the amplitude of the wave, and the direction would correspond to the frequency and wavelength, so that two vectors of different lengths pointing in the same direction would represent different wave states.
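The vector-space rules above can be made concrete with a small sketch (assuming, ahead of the formal development, the obvious two-component representation of the photon polarization states, with numpy arrays standing in for kets):

```python
import numpy as np

# Assumed two-component representation of the basis polarization states:
# |B> (x-polarized) and |C> (y-polarized).
ket_B = np.array([1.0, 0.0])
ket_C = np.array([0.0, 1.0])

alpha = np.deg2rad(25.0)
ket_A = np.cos(alpha) * ket_B + np.sin(alpha) * ket_C   # Equation (1.3)

# Equation (1.4): superposing a state with itself merely rescales the
# vector, i.e. it gives a parallel vector and hence the same physical state.
c1, c2 = 0.3, 1.1
ket_sum = c1 * ket_A + c2 * ket_A
assert np.allclose(ket_sum, (c1 + c2) * ket_A)

# The caveat: if c1 + c2 = 0 the superposition yields the null vector,
# which represents no state at all.
assert np.allclose(0.5 * ket_A + (-0.5) * ket_A, np.zeros(2))
```

Only the direction of the array carries physical meaning here; its overall length does not.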
We have seen, in Equation (1.3), that any plane polarized state of a photon can be represented as a linear superposition of two orthogonal polarization states in which the weights are real numbers. Suppose that we want to construct a circularly polarized photon state. Well, we know from classical physics that a circularly polarized wave is a superposition of two waves of equal amplitude, plane polarized in orthogonal directions, which oscillate in phase quadrature. This suggests that a circularly polarized photon is the superposition of a photon polarized in the x-direction (state B) and a photon polarized in the y-direction (state C), with equal weights given to the two states, but with the proviso that state C oscillates 90° out of phase with state B. By analogy with classical physics, we can use complex numbers to simultaneously represent the weighting and relative phase in a linear superposition. Thus, a circularly polarized photon is represented by

|B⟩ + i |C⟩ (1.6)

in ket space. A general elliptically polarized photon is represented by

c1 |B⟩ + c2 |C⟩, (1.7)

where c1 and c2 are complex numbers. We conclude that a ket space must be a complex vector space if it is to properly represent the mutual interrelations between the possible states of a microscopic system.

Suppose that the ket |R⟩ is expressible linearly in terms of the kets |A⟩ and |B⟩, so that

|R⟩ = c1 |A⟩ + c2 |B⟩. (1.8)

We say that |R⟩ is dependent on |A⟩ and |B⟩. It follows that the state R can be regarded as a linear superposition of the states A and B. So, we can also say that state R is dependent on states A and B. In fact, any ket vector (or state) that is expressible linearly in terms of certain others is said to be dependent on them. Likewise, a set of ket vectors (or states) are termed independent if none of them are expressible linearly in terms of the others.

The dimensionality of a conventional vector space is defined as the number of independent vectors contained in that space.
Likewise, the dimensionality of a ket space is equivalent to the number of independent ket vectors it contains. Thus, the ket space that represents the possible polarization states of a photon propagating in the z-direction is two-dimensional (the two independent vectors correspond to photons plane polarized in the x- and y-directions, respectively). Some microscopic systems have a finite number of independent states (e.g., the spin states of an electron in a magnetic field). If there are N independent states then the possible states of the system are represented as an N-dimensional ket space. Some microscopic systems have a denumerably infinite number of independent states (e.g., a particle in an infinitely deep, one-dimensional, potential well). The possible states of such a system are represented as a ket space whose dimensions are denumerably infinite. Such a space can be treated in more or less the same manner as a finite-dimensional space. Unfortunately, some microscopic systems have a nondenumerably infinite number of independent states (e.g., a free particle). The possible states of such a system are represented as a ket space whose dimensions are nondenumerably infinite. This type of space requires a slightly different treatment to spaces of finite, or denumerably infinite, dimensions (see Section 1.15).

In conclusion, the states of a general microscopic system can be represented as a complex vector space of (possibly) infinite dimensions. Such a space is termed a Hilbert space by mathematicians.

1.5 Bra Space

A snack machine inputs coins plus some code entered on a key pad, and (hopefully) outputs a snack. It also does so in a deterministic manner: i.e., the same money plus the same code produces the same snack (or the same error message) time after time. Note that the input and output of the machine have completely different natures.
We can imagine building a rather abstract snack machine which inputs ket vectors and outputs complex numbers in a deterministic fashion. Mathematicians call such a machine a functional. Imagine a general functional, labeled F, acting on a general ket vector, labeled A, and spitting out a general complex number φ_A. This process is represented mathematically by writing

⟨F|(|A⟩) = φ_A. (1.9)

Let us narrow our focus to those functionals that preserve the linear dependencies of the ket vectors upon which they operate. Not surprisingly, such functionals are termed linear functionals. A general linear functional, labeled F, satisfies

⟨F|(|A⟩ + |B⟩) = ⟨F|(|A⟩) + ⟨F|(|B⟩), (1.10)

where |A⟩ and |B⟩ are any two kets in a given ket space.

Consider an N-dimensional ket space [i.e., a finite-dimensional, or denumerably infinite-dimensional (i.e., N → ∞), space]. Let the |i⟩ (where i runs from 1 to N) represent N independent ket vectors in this space. A general ket vector can be written¹

|A⟩ = Σ_{i=1,N} α_i |i⟩, (1.11)

where the α_i are an arbitrary set of complex numbers. The only way that the functional F can satisfy Equation (1.10) for all vectors in the ket space is if

⟨F|(|A⟩) = Σ_{i=1,N} f_i α_i, (1.12)

where the f_i are a set of complex numbers relating to the functional. Let us define N basis functionals ⟨i| which satisfy

⟨i|(|j⟩) = δ_ij. (1.13)

Here, the Kronecker delta symbol is defined such that δ_ij = 1 if i = j, and δ_ij = 0 otherwise. It follows from the previous three equations that

⟨F| = Σ_{i=1,N} f_i ⟨i|. (1.14)

But, this implies that the set of all possible linear functionals acting on an N-dimensional ket space is itself an N-dimensional vector space.

¹Actually, this is only strictly true for finite-dimensional spaces. Only a special subset of denumerably infinite-dimensional spaces have this property (i.e., they are complete). However, because a ket space must be complete if it is to represent the states of a microscopic system, we need only consider this special subset.
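The statement that a linear functional on an N-dimensional ket space is fixed by the N numbers f_i, as in Equations (1.11)–(1.13), can be checked numerically in a minimal sketch (assuming a component representation of kets as numpy arrays of the coefficients α_i):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

# A general ket |A> = sum_i alpha_i |i>, stored as its coefficients alpha_i.
alpha = rng.standard_normal(N) + 1j * rng.standard_normal(N)
beta = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# A linear functional F is fixed by the numbers f_i, as in Equation (1.12):
# F(|A>) = sum_i f_i alpha_i.
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def F(ket):
    return np.sum(f * ket)

# Linearity, Equation (1.10): F(|A> + |B>) = F(|A>) + F(|B>).
assert np.isclose(F(alpha + beta), F(alpha) + F(beta))

# The basis functional <i| picks out the i-th coefficient: <i|(|j>) = delta_ij.
def basis_functional(i, ket):
    e = np.zeros(N)
    e[i] = 1.0
    return np.sum(e * ket)

assert all(np.isclose(basis_functional(i, np.eye(N)[j]), float(i == j))
           for i in range(N) for j in range(N))
```

The N numbers f_i are exactly the components of the functional in the basis ⟨i|, which is why the functionals themselves form an N-dimensional vector space.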
This type of vector space is called a bra space (after Dirac), and its constituent vectors (which are actually functionals of the ket space) are called bra vectors. Note that bra vectors are quite different in nature to ket vectors (hence, these vectors are written in mirror image notation, ⟨···| and |···⟩, so that they can never be confused). Bra space is an example of what mathematicians call a dual vector space (i.e., it is dual to the original ket space). There is a one to one correspondence between the elements of the ket space and those of the related bra space. So, for every element A of the ket space, there is a corresponding element, which it is also convenient to label A, in the bra space. That is,

|A⟩ ⟷(DC) ⟨A|, (1.15)

where DC stands for dual correspondence. There are an infinite number of ways of setting up the correspondence between vectors in a ket space and those in the related bra space. However, only one of these has any physical significance. (See Section 1.11.) For a general ket vector A, specified by Equation (1.11), the corresponding bra vector is written

⟨A| = Σ_{i=1,N} α_i* ⟨i|, (1.16)

where the α_i* are the complex conjugates of the α_i. ⟨A| is termed the dual vector to |A⟩. It follows, from the above, that the dual to c|A⟩ is c*⟨A|, where c is a complex number. More generally,

c1 |A⟩ + c2 |B⟩ ⟷(DC) c1* ⟨A| + c2* ⟨B|. (1.17)

Recall that a bra vector is a functional that acts on a general ket vector, and spits out a complex number. Consider the functional which is dual to the ket vector

|B⟩ = Σ_{i=1,N} β_i |i⟩ (1.18)

acting on the ket vector |A⟩. This operation is denoted ⟨B|(|A⟩). Note, however, that we can omit the round brackets without causing any ambiguity, so the operation can also be written ⟨B||A⟩. This expression can be further simplified to give ⟨B|A⟩. According to Equations (1.11), (1.13), (1.16), and (1.18),

⟨B|A⟩ = Σ_{i=1,N} β_i* α_i. (1.19)

Mathematicians term ⟨B|A⟩ the inner product of a bra and a ket.² An inner product is (almost) analogous to a scalar product between covariant and contravariant vectors in curvilinear coordinates. It is easily demonstrated that

⟨B|A⟩ = ⟨A|B⟩*. (1.20)

Consider the special case where |B⟩ → |A⟩. It follows from Equations (1.19) and (1.20) that ⟨A|A⟩ is a real number, and that

⟨A|A⟩ ≥ 0. (1.21)

The equality sign only holds if |A⟩ is the null ket [i.e., if all of the α_i are zero in Equation (1.11)]. This property of bra and ket vectors is essential for the probabilistic interpretation of quantum mechanics, as will become apparent in Section 1.11.

Two kets |A⟩ and |B⟩ are said to be orthogonal if

⟨A|B⟩ = 0, (1.22)

which also implies that ⟨B|A⟩ = 0. Given a ket |A⟩, which is not the null ket, we can define a normalized ket |Ã⟩, where

|Ã⟩ = (1/√⟨A|A⟩) |A⟩, (1.23)

with the property

⟨Ã|Ã⟩ = 1. (1.24)

Here, √⟨A|A⟩ is known as the norm or "length" of |A⟩, and is analogous to the length, or magnitude, of a conventional vector. Because |A⟩ and c|A⟩ represent the same physical state, it makes sense to require that all kets corresponding to physical states have unit norms.

It is possible to define a dual bra space for a ket space of nondenumerably infinite dimensions in much the same manner as that described above. The main differences are that summations over discrete labels become integrations over continuous labels, Kronecker delta symbols become Dirac delta functions, completeness must be assumed (it cannot be proved), and the normalization convention is somewhat different. (See Section 1.15.)

²We can now appreciate the elegance of Dirac's notation. The combination of a bra and a ket yields a "bra(c)ket" (which is just a complex number).

1.6 Operators

We have seen that a functional is a machine that inputs a ket vector and spits out a complex number.
Consider a somewhat different machine that inputs a ket vector and spits out another ket vector in a deterministic fashion. Mathematicians call such a machine an operator. We are only interested in operators that preserve the linear dependencies of the ket vectors upon which they act. Such operators are termed linear operators. Consider an operator labeled X. Suppose that when this operator acts on a general ket vector |A⟩ it spits out a new ket vector which is denoted X|A⟩. Operator X is linear provided that

X(|A⟩ + |B⟩) = X|A⟩ + X|B⟩, (1.25)

for all ket vectors |A⟩ and |B⟩, and

X(c|A⟩) = c X|A⟩, (1.26)

for all complex numbers c. Operators X and Y are said to be equal if

X|A⟩ = Y|A⟩ (1.27)

for all kets in the ket space in question. Operator X is termed the null operator if

X|A⟩ = |0⟩ (1.28)

for all ket vectors in the space. Operators can be added together. Such addition is defined to obey a commutative and associative algebra: i.e.,

X + Y = Y + X, (1.29)
X + (Y + Z) = (X + Y) + Z. (1.30)

Operators can also be multiplied. The multiplication is associative:

X(Y|A⟩) = (X Y)|A⟩ = X Y|A⟩, (1.31)
X(Y Z) = (X Y) Z = X Y Z. (1.32)

However, in general, it is non-commutative: i.e.,

X Y ≠ Y X. (1.33)

So far, we have only considered linear operators acting on ket vectors. We can also give a meaning to their operation on bra vectors. Consider the inner product of a general bra ⟨B| with the ket X|A⟩. This product is a number that depends linearly on |A⟩. Thus, it may be considered to be the inner product of |A⟩ with some bra. This bra depends linearly on ⟨B|, so we may look on it as the result of some linear operator applied to ⟨B|. This operator is uniquely determined by the original operator X, so we might as well call it the same operator acting on ⟨B|. A suitable notation to use for the resulting bra when X operates on ⟨B| is ⟨B|X. The equation which defines this vector is

(⟨B|X)|A⟩ = ⟨B|(X|A⟩) (1.34)

for any |A⟩ and ⟨B|.
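In a finite-dimensional component representation (an assumption for illustration: kets as numpy column vectors, bras as their conjugates, operators as matrices), the inner-product and operator rules above reduce to familiar linear algebra and can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 3

def rand_ket():
    return rng.standard_normal(N) + 1j * rng.standard_normal(N)

def rand_op():
    return rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

A, B = rand_ket(), rand_ket()
X, Y = rand_op(), rand_op()

# Inner product, Equation (1.19): <B|A> = sum_i beta_i* alpha_i.
# (np.vdot conjugates its first argument, matching the bra.)
assert np.isclose(np.vdot(B, A), np.sum(np.conj(B) * A))

# Equation (1.20): <B|A> = <A|B>*.
assert np.isclose(np.vdot(B, A), np.conj(np.vdot(A, B)))

# Equation (1.21) and Equations (1.23)-(1.24): <A|A> is real and
# non-negative, and the normalized ket has unit norm.
norm_sq = np.vdot(A, A).real
A_tilde = A / np.sqrt(norm_sq)
assert norm_sq >= 0.0 and np.isclose(np.vdot(A_tilde, A_tilde).real, 1.0)

# Operator addition commutes, Eq. (1.29), and multiplication is
# associative, Eqs. (1.31)-(1.32)...
assert np.allclose(X + Y, Y + X)
assert np.allclose(X @ (Y @ A), (X @ Y) @ A)

# ...but multiplication is generally non-commutative, Eq. (1.33).
assert not np.allclose(X @ Y, Y @ X)

# Equation (1.34): (<B| X)|A> = <B|(X |A>), with the bra as conjugate row.
bra_B = np.conj(B)
assert np.isclose((bra_B @ X) @ A, bra_B @ (X @ A))
```

The last assertion is exactly the bracketing convention that makes ⟨B|X|A⟩ unambiguous.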
The triple product of ⟨B|, X, and |A⟩ can be written ⟨B|X|A⟩ without ambiguity, provided we adopt the convention that the bra vector always goes on the left, the operator in the middle, and the ket vector on the right.

Consider the dual bra to X|A⟩. This bra depends antilinearly on |A⟩ and must therefore depend linearly on ⟨A|. Thus, it may be regarded as the result of some linear operator applied to ⟨A|. This operator is termed the adjoint of X, and is denoted X†. Thus,

X|A⟩ ⟷(DC) ⟨A|X†. (1.35)

It is readily demonstrated that

⟨B|X†|A⟩ = ⟨A|X|B⟩*, (1.36)

plus

(X Y)† = Y† X†. (1.37)

It is also easily seen that the adjoint of the adjoint of a linear operator is equivalent to the original operator. An Hermitian operator ξ has the special property that it is its own adjoint: i.e.,

ξ = ξ†. (1.38)

1.7 Outer Product

So far, we have formed the following products: ⟨B|A⟩, X|A⟩, ⟨A|X, X Y, ⟨B|X|A⟩. Are there any other products we are allowed to form? How about

|B⟩⟨A|? (1.39)

This clearly depends linearly on the bra ⟨A| and the ket |B⟩. Suppose that we right-multiply the above product by the general ket |C⟩. We obtain

|B⟩⟨A|C⟩ = ⟨A|C⟩ |B⟩, (1.40)

since ⟨A|C⟩ is just a number. Thus, |B⟩⟨A| acting on a general ket |C⟩ yields another ket. Clearly, the product |B⟩⟨A| is a linear operator. This operator also acts on bras, as is easily demonstrated by left-multiplying the expression (1.39) by a general bra ⟨C|. It is also easily demonstrated that

(|B⟩⟨A|)† = |A⟩⟨B|. (1.41)

Mathematicians term the operator |B⟩⟨A| the outer product of |B⟩ and ⟨A|. The outer product should not be confused with the inner product, ⟨A|B⟩, which is just a number.

1.8 Eigenvalues and Eigenvectors

In general, the ket X|A⟩ is not a constant multiple of |A⟩. However, there are some special kets known as the eigenkets of operator X. These are denoted

|x′⟩, |x′′⟩, |x′′′⟩, … (1.42)

and have the property

X|x′⟩ = x′|x′⟩, X|x′′⟩ = x′′|x′′⟩, … (1.43)

where x′, x′′, … are numbers called eigenvalues.
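The adjoint and outer-product identities of Sections 1.6 and 1.7 can be checked numerically in the same assumed matrix representation (the adjoint becoming the conjugate transpose, and |B⟩⟨A| becoming the matrix B A†):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Y = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = rng.standard_normal(N) + 1j * rng.standard_normal(N)
B = rng.standard_normal(N) + 1j * rng.standard_normal(N)
C = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def dagger(M):
    # In the matrix representation the adjoint is the conjugate transpose.
    return np.conj(M.T)

# Equation (1.36): <B| X^dagger |A> = <A| X |B>*.
assert np.isclose(np.conj(B) @ dagger(X) @ A, np.conj(np.conj(A) @ X @ B))

# Equation (1.37): (X Y)^dagger = Y^dagger X^dagger.
assert np.allclose(dagger(X @ Y), dagger(Y) @ dagger(X))

# The adjoint of the adjoint recovers the original operator.
assert np.allclose(dagger(dagger(X)), X)

# Outer product |B><A|, expression (1.39), acting on |C>:
# Equation (1.40) gives <A|C> |B>.
outer_BA = np.outer(B, np.conj(A))
assert np.allclose(outer_BA @ C, (np.conj(A) @ C) * B)

# Equation (1.41): (|B><A|)^dagger = |A><B|.
assert np.allclose(dagger(outer_BA), np.outer(A, np.conj(B)))
```

The outer product is an N×N matrix (an operator), whereas the inner product is a single complex number, which is the distinction emphasized in the text.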
Clearly, applying X to one of its eigenkets yields the same eigenket multiplied by the associated eigenvalue.

Consider the eigenkets and eigenvalues of a Hermitian operator ξ. These are denoted

ξ|ξ′⟩ = ξ′|ξ′⟩, (1.44)

where |ξ′⟩ is the eigenket associated with the eigenvalue ξ′. Three important results are readily deduced:

(i) The eigenvalues are all real numbers, and the eigenkets corresponding to different eigenvalues are orthogonal. Since ξ is Hermitian, the dual equation to Equation (1.44) (for the eigenvalue ξ′′) reads

⟨ξ′′|ξ = ξ′′* ⟨ξ′′|. (1.45)

If we left-multiply Equation (1.44) by ⟨ξ′′|, right-multiply the above equation by |ξ′⟩, and take the difference, we obtain

(ξ′ − ξ′′*) ⟨ξ′′|ξ′⟩ = 0. (1.46)

Suppose that the eigenvalues ξ′ and ξ′′ are the same. It follows from the above that

ξ′ = ξ′*, (1.47)

where we have used the fact that |ξ′⟩ is not the null ket. This proves that the eigenvalues are real numbers. Suppose that the eigenvalues ξ′ and ξ′′ are different. It follows that

⟨ξ′′|ξ′⟩ = 0, (1.48)

which demonstrates that eigenkets corresponding to different eigenvalues are orthogonal.

(ii) The eigenvalues associated with eigenkets are the same as the eigenvalues associated with eigenbras. An eigenbra of ξ corresponding to an eigenvalue ξ′ is defined

⟨ξ′|ξ = ξ′⟨ξ′|. (1.49)

(iii) The dual of any eigenket is an eigenbra belonging to the same eigenvalue, and conversely.

1.9 Observables

We have developed a mathematical formalism that comprises three types of objects—bras, kets, and linear operators. We have already seen that kets can be used to represent the possible states of a microscopic system. However, there is a one to one correspondence between the elements of a ket space and its dual bra space, so we must conclude that bras could just as well be used to represent the states of a microscopic system. What about the dynamical variables of the system (e.g., its position, momentum, energy, spin, etc.)?
How can these be represented in our formalism? Well, the only objects we have left over are operators. We, therefore, assume that the dynamical variables of a microscopic system are represented as linear operators acting on the bras and kets that correspond to the various possible states of the system. Note that the operators have to be linear, otherwise they would, in general, spit out bras/kets pointing in different directions when fed bras/kets pointing in the same direction but differing in length. Since the lengths of bras and kets have no physical significance, it is reasonable to suppose that non-linear operators are also without physical significance.

We have seen that if we observe the polarization state of a photon, by placing a polarizing film in its path, then the result is to cause the photon to jump into a state of polarization parallel or perpendicular to the optic axis of the film. The former state is absorbed, and the latter state is transmitted (which is how we tell them apart). In general, we cannot predict into which state a given photon will jump (except in a statistical sense). However, we do know that if the photon is initially polarized parallel to the optic axis then it will definitely be absorbed, and if it is initially polarized perpendicular to the axis then it will definitely be transmitted. We also know that, after passing through the film, a photon must be in a state of polarization perpendicular to the optic axis (otherwise it would not have been transmitted). We can make a second observation of the polarization state of such a photon by placing an identical polarizing film (with the same orientation of the optic axis) immediately behind the first film. It is clear that the photon will definitely be transmitted through the second film.

There is nothing special about the polarization states of a photon.
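The two-film argument can be sketched as a simple simulation (assuming Python with numpy, and the sin²α transmission probability established in Section 1.2): the first film acts stochastically on obliquely polarized photons, but every survivor emerges polarized perpendicular to the optic axis, so an identical second film transmits them all.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha = np.deg2rad(40.0)     # initial polarization angle to the optic axis
n = 100_000

# First film: each obliquely polarized photon is transmitted with
# probability sin^2(alpha); transmitted photons emerge polarized
# perpendicular to the optic axis.
through_first = rng.random(n) < np.sin(alpha) ** 2
n_survivors = int(through_first.sum())

# Second, identically oriented film: the survivors' polarization now makes
# an angle of 90 degrees with the optic axis, so the transmission
# probability is sin^2(90 deg) = 1.
p_second = np.sin(np.pi / 2) ** 2
through_second = rng.random(n_survivors) < p_second

assert through_second.all()  # the second film never absorbs a photon
```

The first observation is probabilistic; the second, made immediately afterwards on the same degree of freedom, is deterministic.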
So, more generally, we can say that if a dynamical variable of a microscopic system is measured then the system is caused to jump into one of a number of independent states (note that the perpendicular and parallel polarization states of our photon are linearly independent). In general, each of these final states is associated with a different result of the measurement: i.e., a different value of the dynamical variable. Note that the result of the measurement must be a real number (there are no measurement machines that output complex numbers). Finally, if an observation is made, and the system is found to be in one particular final state, with one particular value for the dynamical variable, then a second observation, made immediately after the first one, will definitely find the system in the same state, and yield the same value for the dynamical variable.

How can we represent all of these facts in our mathematical formalism? Well, by a fairly non-obvious leap of intuition, we are going to assert that a measurement of a dynamical variable corresponding to an operator X in ket space causes the system to jump into a state corresponding to one of the eigenkets of X. Not surprisingly, such a state is termed an eigenstate. Furthermore, the result of the measurement is the eigenvalue associated with the eigenket into which the system jumps. The fact that the result of the measurement must be a real number implies that dynamical variables can only be represented by Hermitian operators (because only Hermitian operators are guaranteed to have real eigenvalues). The fact that the eigenkets of a Hermitian operator corresponding to different eigenvalues (i.e., different results of the measurement) are orthogonal is in accordance with our earlier requirement that the states into which the system jumps should be mutually independent.
We can conclude that the result of a measurement of a dynamical variable represented by a Hermitian operator ξ must be one of the eigenvalues of ξ. Conversely, every eigenvalue of ξ is a possible result of a measurement made on the corresponding dynamical variable. This gives us the physical significance of the eigenvalues. (From now on, the distinction between a state and its representative ket vector, and a dynamical variable and its representative operator, will be dropped, for the sake of simplicity.)

It is reasonable to suppose that if a certain dynamical variable ξ is measured with the system in a particular state then the states into which the system may jump on account of the measurement are such that the original state is dependent on them. This fairly innocuous statement has two very important corollaries. First, immediately after an observation whose result is a particular eigenvalue ξ′, the system is left in the associated eigenstate. However, this eigenstate is orthogonal to (i.e., independent of) any other eigenstate corresponding to a different eigenvalue. It follows that a second measurement, made immediately after the first one, must leave the system in an eigenstate corresponding to the eigenvalue ξ′. In other words, the second measurement is bound to give the same result as the first. Furthermore, if the system is in an eigenstate of ξ, corresponding to an eigenvalue ξ′, then a measurement of ξ is bound to give the result ξ′. This follows because the system cannot jump into an eigenstate corresponding to a different eigenvalue of ξ, because such a state is not dependent on the original state. Second, it stands to reason that a measurement of ξ must always yield some result. It follows that, no matter what the initial state of the system, it must always be able to jump into one of the eigenstates of ξ. In other words, a general ket must always be dependent on the eigenkets of ξ.
This can only be the case if the eigenkets form a complete set (i.e., if they span ket space). Thus, in order for a Hermitian operator ξ to be observable, its eigenkets must form a complete set. A Hermitian operator that satisfies this condition is termed an observable. Conversely, any observable quantity must be a Hermitian operator with a complete set of eigenstates.

1.10 Measurements

We have seen that a measurement of some observable ξ of a microscopic system causes the system to jump into one of the eigenstates of ξ. The result of the measurement is the associated eigenvalue (or some function of this quantity). It is impossible to determine into which eigenstate a given system will jump, but it is possible to predict the probability of such a transition. So, what is the probability that a system in some initial state |A⟩ makes a transition to an eigenstate |ξ′⟩ of an observable ξ, as a result of a measurement made on the system? Let us start with the simplest case. If the system is initially in an eigenstate |ξ′⟩ then the transition probability to an eigenstate |ξ′′⟩ corresponding to a different eigenvalue is zero, and the transition probability to the same eigenstate |ξ′⟩ is unity. It is convenient to normalize our eigenkets such that they all have unit norms. It follows from the orthogonality property of the eigenkets that

⟨ξ′|ξ′′⟩ = δ_ξ′ξ′′.    (1.50)

For the moment, we are assuming that the eigenvalues of ξ are all different. Note that the probability of a transition from an initial eigenstate |ξ′⟩ to a final eigenstate |ξ′′⟩ is the same as the value of the inner product ⟨ξ′|ξ′′⟩. Can we use this correspondence to obtain a general rule for calculating transition probabilities? Well, suppose that the system is initially in a state |A⟩ which is not an eigenstate of ξ. Can we identify the transition probability to a final eigenstate |ξ′⟩ with the inner product ⟨A|ξ′⟩?
In fact, we cannot, because ⟨A|ξ′⟩ is, in general, a complex number, and complex probabilities do not make any sense. Let us try again. Suppose that we identify the transition probability with the modulus squared of the inner product, |⟨A|ξ′⟩|²? This quantity is definitely a positive number (so it could be a probability). This guess also gives the right answer for the transition probabilities between eigenstates. In fact, it is the correct guess. Because the eigenstates of an observable ξ form a complete set, we can express any given state |A⟩ as a linear combination of them. It is easily demonstrated that

|A⟩ = Σ_ξ′ |ξ′⟩ ⟨ξ′|A⟩,    (1.51)

⟨A| = Σ_ξ′ ⟨A|ξ′⟩ ⟨ξ′|,    (1.52)

⟨A|A⟩ = Σ_ξ′ ⟨A|ξ′⟩ ⟨ξ′|A⟩ = Σ_ξ′ |⟨A|ξ′⟩|²,    (1.53)

where the summation is over all the different eigenvalues of ξ, and use has been made of Equation (1.20), as well as the fact that the eigenstates are mutually orthogonal. Note that all of the above results follow from the extremely useful (and easily proved) result

Σ_ξ′ |ξ′⟩ ⟨ξ′| = 1,    (1.54)

where 1 denotes the identity operator. The relative probability of a transition to an eigenstate |ξ′⟩, which is equivalent to the relative probability of a measurement of ξ yielding the result ξ′, is

P(ξ′) ∝ |⟨A|ξ′⟩|².    (1.55)

The absolute probability is clearly

P(ξ′) = |⟨A|ξ′⟩|² / Σ_ξ′ |⟨A|ξ′⟩|² = |⟨A|ξ′⟩|² / ⟨A|A⟩.    (1.56)

If the ket |A⟩ is normalized such that its norm is unity then this probability simply reduces to

P(ξ′) = |⟨A|ξ′⟩|².    (1.57)

1.11 Expectation Values

Consider an ensemble of microscopic systems prepared in the same initial state |A⟩. Suppose that a measurement of the observable ξ is made on each system. We know that each measurement yields the value ξ′ with probability P(ξ′). What is the mean value of the measurement?
This quantity, which is generally referred to as the expectation value of ξ, is given by

⟨ξ⟩ = Σ_ξ′ ξ′ P(ξ′) = Σ_ξ′ ξ′ |⟨A|ξ′⟩|² = Σ_ξ′ ξ′ ⟨A|ξ′⟩ ⟨ξ′|A⟩ = Σ_ξ′ ⟨A| ξ |ξ′⟩ ⟨ξ′|A⟩,    (1.58)

which reduces to

⟨ξ⟩ = ⟨A| ξ |A⟩    (1.59)

with the aid of Equation (1.54). Consider the identity operator, 1. All states are eigenstates of this operator with the eigenvalue unity. Thus, the expectation value of this operator is always unity: i.e.,

⟨A| 1 |A⟩ = ⟨A|A⟩ = 1,    (1.60)

for all |A⟩. Note that it is only possible to normalize a given ket |A⟩, such that Equation (1.60) is satisfied, because of the more general property (1.21) of the norm. This property depends on the previously adopted correspondence (1.16) between the elements of a ket space and those of its dual bra space.

1.12 Degeneracy

Suppose that two different eigenstates |ξ′_a⟩ and |ξ′_b⟩ of the observable ξ correspond to the same eigenvalue ξ′. These states are termed degenerate eigenstates. Degenerate eigenstates are necessarily orthogonal to any eigenstates corresponding to different eigenvalues, but, in general, they are not orthogonal to each other (i.e., the proof of orthogonality given in Section 1.8 does not work in this case). This is unfortunate, because much of the previous formalism depends crucially on the mutual orthogonality of the different eigenstates of an observable. Note, however, that any linear combination of |ξ′_a⟩ and |ξ′_b⟩ is also an eigenstate corresponding to the eigenvalue ξ′. It follows that we can always construct two mutually orthogonal degenerate eigenstates. For instance,

|ξ′_1⟩ = |ξ′_a⟩,    (1.61)

|ξ′_2⟩ = (|ξ′_b⟩ − ⟨ξ′_a|ξ′_b⟩ |ξ′_a⟩) / [1 − |⟨ξ′_a|ξ′_b⟩|²]^(1/2).    (1.62)

This result is easily generalized to the case of more than two degenerate eigenstates. We conclude that it is always possible to construct a complete set of mutually orthogonal eigenstates for any given observable.
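In a finite-dimensional ket space the results of the last few sections are easy to check numerically, with observables represented by Hermitian matrices and kets by complex column vectors. The following sketch (the operator, the state, and the pair of degenerate kets are arbitrary illustrative choices, not taken from the text) verifies the probability rule (1.57), the expectation-value formula (1.59), and the orthogonalization construction (1.62):

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary observable: a random Hermitian matrix (illustrative choice).
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
xi = (M + M.conj().T) / 2
vals, kets = np.linalg.eigh(xi)     # real eigenvalues, orthonormal eigenkets

# A normalized state |A>, also chosen arbitrarily.
A = rng.standard_normal(3) + 1j * rng.standard_normal(3)
A /= np.linalg.norm(A)

# Equation (1.57): P(xi') = |<A|xi'>|^2 for a unit-norm |A>.
P = np.abs(A.conj() @ kets) ** 2
assert np.isclose(P.sum(), 1.0)     # probabilities sum to unity (Eq. 1.54)

# Equations (1.58)-(1.59): the sum of xi' P(xi') equals <A| xi |A>.
assert np.isclose(np.sum(vals * P), (A.conj() @ xi @ A).real)

# Equation (1.62): build an orthogonal partner for a pair of non-orthogonal
# unit-norm kets |a>, |b> (standing in for two degenerate eigenstates).
a = np.array([1, 0, 0], dtype=complex)
b = np.array([1, 1, 0], dtype=complex) / np.sqrt(2)
ket2 = (b - (a.conj() @ b) * a) / np.sqrt(1 - abs(a.conj() @ b) ** 2)
assert np.isclose(abs(a.conj() @ ket2), 0)      # orthogonal to |a>
assert np.isclose(np.linalg.norm(ket2), 1.0)    # and of unit norm
```

Note that `np.linalg.eigh` returns an orthonormal set of eigenkets automatically, which is exactly the property guaranteed by the orthogonality theorem of Section 1.8.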
1.13 Compatible Observables

Suppose that we wish to simultaneously measure two observables, ξ and η, of a microscopic system. Let us assume that we possess an apparatus that is capable of measuring ξ, and another that can measure η. For instance, the two observables in question might be the projection in the x- and z-directions of the spin angular momentum of a spin one-half particle. These could be measured using appropriate Stern-Gerlach apparatuses (see Sakurai, Section 1.1). Suppose that we make a measurement of ξ, and the system is consequently thrown into one of the eigenstates of ξ, |ξ′⟩, with eigenvalue ξ′. What happens if we now make a measurement of η? Well, suppose that the eigenstate |ξ′⟩ is also an eigenstate of η, with eigenvalue η′. In this case, a measurement of η will definitely give the result η′. A second measurement of ξ will definitely give the result ξ′, and so on. In this sense, we can say that the observables ξ and η simultaneously have the values ξ′ and η′, respectively. Clearly, if all eigenstates of ξ are also eigenstates of η then it is always possible to make a simultaneous measurement of ξ and η. Such observables are termed compatible.

Suppose, however, that the eigenstates of ξ are not eigenstates of η. Is it still possible to measure both observables simultaneously? Let us again make an observation of ξ which throws the system into an eigenstate |ξ′⟩, with eigenvalue ξ′. We can now make a second observation to determine η. This will throw the system into one of the (many) eigenstates of η which depend on |ξ′⟩. In principle, each of these eigenstates is associated with a different result of the measurement. Suppose that the system is thrown into an eigenstate |η′⟩, with the eigenvalue η′. Another measurement of ξ will throw the system into one of the (many) eigenstates of ξ which depend on |η′⟩. Each eigenstate is again associated with a different possible result of the measurement.
It is clear that if the observables ξ and η do not possess simultaneous eigenstates then if the value of ξ is known (i.e., the system is in an eigenstate of ξ) then the value of η is uncertain (i.e., the system is not in an eigenstate of η), and vice versa. We say that the two observables are incompatible.

We have seen that the condition for two observables ξ and η to be simultaneously measurable is that they should possess simultaneous eigenstates (i.e., every eigenstate of ξ should also be an eigenstate of η). Suppose that this is the case. Let a general eigenstate of ξ, with eigenvalue ξ′, also be an eigenstate of η, with eigenvalue η′. It is convenient to denote this simultaneous eigenstate |ξ′η′⟩. We have

ξ |ξ′η′⟩ = ξ′ |ξ′η′⟩,    (1.63)

η |ξ′η′⟩ = η′ |ξ′η′⟩.    (1.64)

We can left-multiply the first equation by η, and the second equation by ξ, and then take the difference. The result is

(ξ η − η ξ) |ξ′η′⟩ = |0⟩    (1.65)

for each simultaneous eigenstate. Recall that the eigenstates of an observable must form a complete set. It follows that the simultaneous eigenstates of two observables must also form a complete set. Thus, the above equation implies that

(ξ η − η ξ) |A⟩ = |0⟩,    (1.66)

where |A⟩ is a general ket. The only way that this can be true is if

ξ η = η ξ.    (1.67)

Thus, the condition for two observables ξ and η to be simultaneously measurable is that they should commute.

1.14 Uncertainty Relation

We have seen that if ξ and η are two noncommuting observables then a determination of the value of ξ leaves the value of η uncertain, and vice versa. It is possible to quantify this uncertainty. For a general observable ξ, we can define a Hermitian operator

Δξ = ξ − ⟨ξ⟩,    (1.68)

where the expectation value is taken over the particular physical state under consideration. It is obvious that the expectation value of Δξ is zero. The expectation value of (Δξ)² ≡ Δξ Δξ is termed the variance of ξ, and is, in general, non-zero.
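The compatibility criterion (1.67), and the vanishing expectation value of the operator Δξ just defined, can both be illustrated with matrices. In the sketch below (all matrices and states are illustrative choices, not taken from the text), two observables diagonal in the same basis commute, whereas the standard spin-1/2 Pauli matrices σ_x and σ_z do not:

```python
import numpy as np

def commutator(A, B):
    return A @ B - B @ A

# Compatible observables: both diagonal in the same basis, so they commute
# and every eigenket of one is also an eigenket of the other (Eq. 1.67).
xi = np.diag([1.0, 2.0, 3.0])
eta = np.diag([5.0, -1.0, 0.5])
assert np.allclose(commutator(xi, eta), 0)

# Incompatible observables: the Pauli matrices sigma_x and sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
assert not np.allclose(commutator(sx, sz), 0)

# Equation (1.68): Delta_xi = xi - <xi> has vanishing expectation value.
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # arbitrary unit state
mean = (psi.conj() @ sx @ psi).real
d_xi = sx - mean * np.eye(2)
assert np.isclose(abs(psi.conj() @ d_xi @ psi), 0)
```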
In fact, it is easily demonstrated that

⟨(Δξ)²⟩ = ⟨ξ²⟩ − ⟨ξ⟩².    (1.69)

The variance of ξ is a measure of the uncertainty in the value of ξ for the particular state in question (i.e., it is a measure of the width of the distribution of likely values of ξ about the expectation value). If the variance is zero then there is no uncertainty, and a measurement of ξ is bound to give the expectation value, ⟨ξ⟩.

Consider the Schwarz inequality

⟨A|A⟩ ⟨B|B⟩ ≥ |⟨A|B⟩|²,    (1.70)

which is analogous to

|a|² |b|² ≥ |a · b|²    (1.71)

in Euclidean space. This inequality can be proved by noting that

(⟨A| + c* ⟨B|) (|A⟩ + c |B⟩) ≥ 0,    (1.72)

where c is any complex number. If c takes the special value −⟨B|A⟩/⟨B|B⟩ then the above inequality reduces to

⟨A|A⟩ ⟨B|B⟩ − |⟨A|B⟩|² ≥ 0,    (1.73)

which is the same as the Schwarz inequality. Let us substitute

|A⟩ = Δξ |⟩,    (1.74)

|B⟩ = Δη |⟩,    (1.75)

into the Schwarz inequality, where the blank ket |⟩ stands for any general ket. We find

⟨(Δξ)²⟩ ⟨(Δη)²⟩ ≥ |⟨Δξ Δη⟩|²,    (1.76)

where use has been made of the fact that Δξ and Δη are Hermitian operators. Note that

Δξ Δη = (1/2) [Δξ, Δη] + (1/2) {Δξ, Δη},    (1.77)

where the commutator, [Δξ, Δη], and the anti-commutator, {Δξ, Δη}, are defined

[Δξ, Δη] ≡ Δξ Δη − Δη Δξ,    (1.78)

{Δξ, Δη} ≡ Δξ Δη + Δη Δξ.    (1.79)

The commutator is clearly anti-Hermitian,

([Δξ, Δη])† = (Δξ Δη − Δη Δξ)† = Δη Δξ − Δξ Δη = −[Δξ, Δη],    (1.80)

whereas the anti-commutator is obviously Hermitian. Now, it is easily demonstrated that the expectation value of a Hermitian operator is a real number, whereas the expectation value of an anti-Hermitian operator is an imaginary number. It follows that the right-hand side of

⟨Δξ Δη⟩ = (1/2) ⟨[Δξ, Δη]⟩ + (1/2) ⟨{Δξ, Δη}⟩    (1.81)

consists of the sum of an imaginary and a real number. Taking the modulus squared of both sides gives

|⟨Δξ Δη⟩|² = (1/4) |⟨[ξ, η]⟩|² + (1/4) |⟨{Δξ, Δη}⟩|²,    (1.82)

where use has been made of ⟨Δξ⟩ = 0, etc.
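Both the Schwarz-inequality step (1.76) and the decomposition (1.82) can be verified numerically for a concrete pair of noncommuting observables. The sketch below uses the spin-1/2 Pauli operators σ_x and σ_y in the spin-up state, an illustrative choice not taken from the text:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)       # spin-up along z, unit norm

def expect(op):
    return psi.conj() @ op @ psi

I = np.eye(2)
d_xi = sx - expect(sx).real * I             # Delta xi, Eq. (1.68)
d_eta = sy - expect(sy).real * I            # Delta eta

# Equation (1.76): <(Dxi)^2> <(Deta)^2> >= |<Dxi Deta>|^2.
lhs = expect(d_xi @ d_xi).real * expect(d_eta @ d_eta).real
cross = expect(d_xi @ d_eta)
assert lhs >= abs(cross) ** 2 - 1e-12

# Equation (1.82):
# |<Dxi Deta>|^2 = (1/4)|<[xi, eta]>|^2 + (1/4)|<{Dxi, Deta}>|^2.
comm = expect(sx @ sy - sy @ sx)            # [Dxi, Deta] = [xi, eta]
anti = expect(d_xi @ d_eta + d_eta @ d_xi)
assert np.isclose(abs(cross) ** 2,
                  0.25 * abs(comm) ** 2 + 0.25 * abs(anti) ** 2)
```

For this particular state the Schwarz inequality is in fact saturated, since the anti-commutator term vanishes.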
The final term on the right-hand side of the above expression is positive definite, so we can write

⟨(Δξ)²⟩ ⟨(Δη)²⟩ ≥ (1/4) |⟨[ξ, η]⟩|²,    (1.83)

where use has been made of Equation (1.76). The above expression is termed the uncertainty relation. According to this relation, an exact knowledge of the value of ξ implies no knowledge whatsoever of the value of η, and vice versa. The one exception to this rule occurs when ξ and η commute, in which case exact knowledge of ξ does not necessarily imply no knowledge of η.

1.15 Continuous Spectra

Up to now, we have studiously avoided dealing with observables possessing eigenvalues that lie in a continuous range, rather than having discrete values. The reason for this is that continuous eigenvalues imply a ket space of nondenumerably infinite dimension. Unfortunately, continuous eigenvalues are unavoidable in quantum mechanics. In fact, the most important observables of all, namely position and momentum, generally have continuous eigenvalues. Fortunately, many of the results that we obtained previously for a finite-dimensional ket space with discrete eigenvalues can be generalized to ket spaces of nondenumerably infinite dimensions.

Suppose that ξ is an observable with continuous eigenvalues. We can still write the eigenvalue equation as

ξ |ξ′⟩ = ξ′ |ξ′⟩.    (1.84)

But, ξ′ can now take a continuous range of values. Let us assume, for the sake of simplicity, that ξ′ can take any value. The orthogonality condition (1.50) generalizes to

⟨ξ′|ξ′′⟩ = δ(ξ′ − ξ′′),    (1.85)

where δ(x) denotes the famous Dirac delta function, and satisfies

∫_{−∞}^{+∞} dx f(x) δ(x − x′) = f(x′)    (1.86)

for any function, f(x), that is well-behaved at x = x′. Note that there are clearly a nondenumerably infinite number of mutually orthogonal eigenstates of ξ. Hence, the dimensionality of ket space is nondenumerably infinite.
Furthermore, eigenstates corresponding to a continuous range of eigenvalues cannot be normalized such that they have unit norms. In fact, these eigenstates have infinite norms: i.e., they are infinitely long. This is the major difference between eigenstates in a finite-dimensional and an infinite-dimensional ket space. The extremely useful relation (1.54) generalizes to

∫ dξ′ |ξ′⟩ ⟨ξ′| = 1.    (1.87)

Note that a summation over discrete eigenvalues goes over into an integral over a continuous range of eigenvalues. The eigenstates |ξ′⟩ must form a complete set if ξ is to be an observable. It follows that any general ket can be expanded in terms of the |ξ′⟩. In fact, the expansions (1.51)–(1.53) generalize to

|A⟩ = ∫ dξ′ |ξ′⟩ ⟨ξ′|A⟩,    (1.88)

⟨A| = ∫ dξ′ ⟨A|ξ′⟩ ⟨ξ′|,    (1.89)

⟨A|A⟩ = ∫ dξ′ ⟨A|ξ′⟩ ⟨ξ′|A⟩ = ∫ dξ′ |⟨A|ξ′⟩|².    (1.90)

These results also follow simply from Equation (1.87). We have seen that it is not possible to normalize the eigenstates |ξ′⟩ such that they have unit norms. Fortunately, this convenient normalization is still possible for a general state vector. In fact, according to Equation (1.90), the normalization condition can be written

⟨A|A⟩ = ∫ dξ′ |⟨A|ξ′⟩|² = 1.    (1.91)

We have now studied observables whose eigenvalues take a discrete number of values, as well as those whose eigenvalues take a continuous range of values. There are a number of other cases we could look at. For instance, observables whose eigenvalues can take a (finite) continuous range of values, plus a set of discrete values. Such cases can be dealt with using a fairly straightforward generalization of the previous analysis (see Dirac, Chapters II and III).

Exercises

1.1 According to classical physics, a non-relativistic electron whose instantaneous acceleration is of magnitude a radiates electromagnetic energy at the rate

P = e² a² / (6π ε₀ c³),

where e is the magnitude of the electron charge, ε₀ the permittivity of the vacuum, and m_e the electron mass.
Consider a classical electron in a circular orbit of radius r around a proton. Demonstrate that the radiated energy would cause the orbital radius to decrease in time according to

(d/dt) (r/a₀)³ = −1/τ,

where a₀ = 4π ε₀ ħ²/(m_e e²) is the Bohr radius, ħ the reduced Planck constant, and

τ = a₀ / (4 α⁴ c).

Here, c is the velocity of light in a vacuum, and α = e²/(4π ε₀ ħ c) the fine structure constant. Deduce that the classical lifetime of a hydrogen atom is τ ≃ 1.6 × 10⁻¹¹ s.

1.2 Demonstrate that ⟨B|A⟩ = ⟨A|B⟩* in a finite-dimensional ket space.

1.3 Demonstrate that in a finite-dimensional ket space:
(a) ⟨B| X† |A⟩ = ⟨A| X |B⟩*.
(b) (X Y)† = Y† X†.
(c) (X†)† = X.
(d) (|B⟩⟨A|)† = |A⟩⟨B|.
Here, X, Y are general operators.

1.4 If A, B are Hermitian operators then demonstrate that A B is only Hermitian provided A and B commute. In addition, show that (A + B)ⁿ is Hermitian, where n is a positive integer.

1.5 Let A be a general operator. Show that A + A†, i (A − A†), and A A† are Hermitian operators.

1.6 Let H be an Hermitian operator. Demonstrate that the Hermitian conjugate of the operator exp(i H) ≡ Σ_{n=0,∞} (i H)ⁿ/n! is exp(−i H).

1.7 Let the |ξ′⟩ be the eigenkets of an observable ξ, whose corresponding eigenvalues, ξ′, are discrete. Demonstrate that

Σ_ξ′ |ξ′⟩ ⟨ξ′| = 1,

where the sum is over all eigenvalues, and 1 denotes the unity operator.

1.8 Let the |ξ′_i⟩, where i = 1, …, N, and N > 1, be a set of degenerate eigenkets of some observable ξ. Suppose that the |ξ′_i⟩ are not mutually orthogonal. Demonstrate that a set of mutually orthogonal (but unnormalized) degenerate eigenkets, |ξ′′_i⟩, for i = 1, …, N, can be constructed as follows:

|ξ′′_i⟩ = |ξ′_i⟩ − Σ_{j=1,i−1} (⟨ξ′′_j|ξ′_i⟩ / ⟨ξ′′_j|ξ′′_j⟩) |ξ′′_j⟩.

This process is known as Gram-Schmidt orthogonalization.

1.9 Demonstrate that the expectation value of a Hermitian operator is a real number. Show that the expectation value of an anti-Hermitian operator is an imaginary number.

1.10 Let H be an Hermitian operator.
Demonstrate that ⟨H²⟩ ≥ 0.

1.11 Consider an Hermitian operator, H, that has the property that H⁴ = 1, where 1 is the unity operator. What are the eigenvalues of H? What are the eigenvalues if H is not restricted to being Hermitian?

1.12 Let ξ be an observable whose eigenvalues, ξ′, lie in a continuous range. Let the |ξ′⟩, where

⟨ξ′|ξ′′⟩ = δ(ξ′ − ξ′′),

be the corresponding eigenkets. Demonstrate that

∫ dξ′ |ξ′⟩ ⟨ξ′| = 1,

where the integral is over the whole range of eigenvalues, and 1 denotes the unity operator.

2 Position and Momentum

2.1 Introduction

So far, we have considered general dynamical variables represented by general linear operators acting in ket space. However, in classical mechanics, the most important dynamical variables are those involving position and momentum. Let us investigate the role of such variables in quantum mechanics.

In classical mechanics, the position q and momentum p of some component of a dynamical system are represented as real numbers which, by definition, commute. In quantum mechanics, these quantities are represented as non-commuting linear Hermitian operators acting in a ket space that represents all of the possible states of the system. Our first task is to discover a quantum mechanical replacement for the classical result q p − p q = 0.

2.2 Poisson Brackets

Consider a dynamical system whose state at a particular time t is fully specified by N independent classical coordinates qᵢ (where i runs from 1 to N). Associated with each generalized coordinate qᵢ is a classical canonical momentum pᵢ. For instance, a Cartesian coordinate has an associated linear momentum, an angular coordinate has an associated angular momentum, etc. As is well-known, the behavior of a classical system can be specified in terms of Lagrangian or Hamiltonian dynamics. For instance, in Hamiltonian dynamics,

dqᵢ/dt = ∂H/∂pᵢ,    (2.1)

dpᵢ/dt = −∂H/∂qᵢ,    (2.2)

where the function H(qᵢ, pᵢ, t) is the system energy at time t expressed in terms of the classical coordinates and canonical momenta. This function is usually referred to as the Hamiltonian of the system.

We are interested in finding some construct in classical dynamics that consists of products of dynamical variables. If such a construct exists then we hope to generalize it somehow to obtain a rule describing how dynamical variables commute with one another in quantum mechanics. There is, indeed, one well-known construct in classical dynamics that involves products of dynamical variables. The classical Poisson bracket of two dynamical variables u and v is defined

[u, v]_cl = Σ_{i=1,N} (∂u/∂qᵢ ∂v/∂pᵢ − ∂u/∂pᵢ ∂v/∂qᵢ),    (2.3)

where u and v are regarded as functions of the coordinates and momenta, qᵢ and pᵢ. It is easily demonstrated that

[qᵢ, qⱼ]_cl = 0,    (2.4)

[pᵢ, pⱼ]_cl = 0,    (2.5)

[qᵢ, pⱼ]_cl = δᵢⱼ.    (2.6)

The time evolution of a dynamical variable can also be written in terms of a Poisson bracket by noting that

du/dt = Σ_{i=1,N} (∂u/∂qᵢ dqᵢ/dt + ∂u/∂pᵢ dpᵢ/dt) = Σ_{i=1,N} (∂u/∂qᵢ ∂H/∂pᵢ − ∂u/∂pᵢ ∂H/∂qᵢ) = [u, H]_cl,    (2.7)

where use has been made of Hamilton's equations, (2.1)–(2.2).

Can we construct a quantum mechanical Poisson bracket in which u and v are non-commuting operators, instead of functions? Well, the main properties of the classical Poisson bracket are as follows:

[u, v]_cl = −[v, u]_cl,    (2.8)

[u, c]_cl = 0,    (2.9)

[u₁ + u₂, v]_cl = [u₁, v]_cl + [u₂, v]_cl,    (2.10)

[u, v₁ + v₂]_cl = [u, v₁]_cl + [u, v₂]_cl,    (2.11)

[u₁ u₂, v]_cl = [u₁, v]_cl u₂ + u₁ [u₂, v]_cl,    (2.12)

[u, v₁ v₂]_cl = [u, v₁]_cl v₂ + v₁ [u, v₂]_cl,    (2.13)

[u, [v, w]_cl]_cl + [v, [w, u]_cl]_cl + [w, [u, v]_cl]_cl = 0.    (2.14)

The last relation is known as the Jacobi identity. In the above, u, v, w, etc., represent dynamical variables, and c represents a number. Can we find some combination of non-commuting operators u and v, etc., that satisfies all of the above relations?
We shall refer to such a combination as a quantum mechanical Poisson bracket. Actually, we can evaluate the quantum mechanical Poisson bracket [u₁ u₂, v₁ v₂]_qm in two different ways, because we can employ either of the formulae (2.12) or (2.13) first. Thus,

[u₁ u₂, v₁ v₂]_qm = [u₁, v₁ v₂]_qm u₂ + u₁ [u₂, v₁ v₂]_qm
= ([u₁, v₁]_qm v₂ + v₁ [u₁, v₂]_qm) u₂ + u₁ ([u₂, v₁]_qm v₂ + v₁ [u₂, v₂]_qm)
= [u₁, v₁]_qm v₂ u₂ + v₁ [u₁, v₂]_qm u₂ + u₁ [u₂, v₁]_qm v₂ + u₁ v₁ [u₂, v₂]_qm,    (2.15)

and

[u₁ u₂, v₁ v₂]_qm = [u₁ u₂, v₁]_qm v₂ + v₁ [u₁ u₂, v₂]_qm
= [u₁, v₁]_qm u₂ v₂ + u₁ [u₂, v₁]_qm v₂ + v₁ [u₁, v₂]_qm u₂ + v₁ u₁ [u₂, v₂]_qm.    (2.16)

Note that the order of the various factors has been preserved, because they now represent non-commuting operators. Equating the above two results yields

[u₁, v₁]_qm (u₂ v₂ − v₂ u₂) = (u₁ v₁ − v₁ u₁) [u₂, v₂]_qm.    (2.17)

Since this relation must hold for u₁ and v₁ quite independent of u₂ and v₂, it follows that

u₁ v₁ − v₁ u₁ = i ħ [u₁, v₁]_qm,    (2.18)

u₂ v₂ − v₂ u₂ = i ħ [u₂, v₂]_qm,    (2.19)

where ħ does not depend on u₁, v₁, u₂, v₂, and also commutes with (u₁ v₁ − v₁ u₁). Because u₁, etc., are general operators, it follows that ħ is just a number. We want the quantum mechanical Poisson bracket of two Hermitian operators to be a Hermitian operator itself, because the classical Poisson bracket of two real dynamical variables is real. This requirement is satisfied if ħ is a real number. Thus, the quantum mechanical Poisson bracket of two dynamical variables u and v is given by

[u, v]_qm = (u v − v u) / (i ħ),    (2.20)

where ħ is a new universal constant of nature. Quantum mechanics agrees with experiments provided that ħ takes the value h/2π, where

h = 6.6261 × 10⁻³⁴ J s    (2.21)

is Planck's constant. The notation [u, v] is conventionally reserved for the commutator u v − v u in quantum mechanics. Thus,

[u, v]_qm = [u, v] / (i ħ).    (2.22)

It is easily demonstrated that the quantum mechanical Poisson bracket, as defined above, satisfies all of the relations (2.8)–(2.14).
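The claim in the last sentence is easy to spot-check numerically: the commutator of matrices satisfies the antisymmetry, product-rule, and Jacobi relations (2.8), (2.12), and (2.14), and dividing by the number iħ does not disturb any of them. A sketch with random matrices (the sizes and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_op(n=4):
    # A random complex matrix standing in for a general linear operator.
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def comm(A, B):
    return A @ B - B @ A

u1, u2, v, w = rand_op(), rand_op(), rand_op(), rand_op()

# Antisymmetry, cf. Eq. (2.8):
assert np.allclose(comm(u1, v), -comm(v, u1))

# Product rule, cf. Eq. (2.12): [u1 u2, v] = [u1, v] u2 + u1 [u2, v].
assert np.allclose(comm(u1 @ u2, v), comm(u1, v) @ u2 + u1 @ comm(u2, v))

# Jacobi identity, cf. Eq. (2.14):
jac = comm(u1, comm(v, w)) + comm(v, comm(w, u1)) + comm(w, comm(u1, v))
assert np.allclose(jac, 0)
```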
The strong analogy we have found between the classical Poisson bracket, defined in Equation (2.3), and the quantum mechanical Poisson bracket, defined in Equation (2.22), leads us to assume that the quantum mechanical bracket has the same value as the corresponding classical bracket, at least for the simplest cases. In other words, we are assuming that Equations (2.4)–(2.6) hold for quantum mechanical as well as classical Poisson brackets. This argument yields the fundamental commutation relations

[qᵢ, qⱼ] = 0,    (2.23)

[pᵢ, pⱼ] = 0,    (2.24)

[qᵢ, pⱼ] = i ħ δᵢⱼ.    (2.25)

These results provide us with the basis for calculating commutation relations between general dynamical variables. For instance, if two dynamical variables, ξ and η, can both be written as a power series in the qᵢ and pᵢ then repeated application of Equations (2.8)–(2.13) allows [ξ, η] to be expressed in terms of the fundamental commutation relations (2.23)–(2.25).

Equations (2.23)–(2.25) provide the foundation for the analogy between quantum mechanics and classical mechanics. Note that the classical result (that everything commutes) is obtained in the limit ħ → 0. Thus, classical mechanics can be regarded as the limiting case of quantum mechanics when ħ goes to zero. In classical mechanics, each pair of a generalized coordinate and its conjugate momentum, qᵢ and pᵢ, corresponds to a different classical degree of freedom of the system. It is clear from Equations (2.23)–(2.25) that, in quantum mechanics, the dynamical variables corresponding to different degrees of freedom all commute. It is only those variables corresponding to the same degree of freedom that may fail to commute.

2.3 Wavefunctions

Consider a simple system with one classical degree of freedom, which corresponds to the Cartesian coordinate x. Suppose that x is free to take any value (e.g., x could be the position of a free particle).
The classical dynamical variable x is represented in quantum mechanics as a linear Hermitian operator which is also called x. Moreover, the operator x possesses eigenvalues x′ lying in the continuous range −∞ < x′ < +∞ (since the eigenvalues correspond to all the possible results of a measurement of x). We can span ket space using the suitably normalized eigenkets of x. An eigenket corresponding to the eigenvalue x′ is denoted |x′⟩. Moreover, [see Equation (1.85)]

⟨x′|x′′⟩ = δ(x′ − x′′).    (2.26)

The eigenkets satisfy the extremely useful relation [see Equation (1.87)]

∫_{−∞}^{+∞} dx′ |x′⟩ ⟨x′| = 1.    (2.27)

This formula expresses the fact that the eigenkets are complete, mutually orthogonal, and suitably normalized. A state ket |A⟩ (which represents a general state A of the system) can be expressed as a linear superposition of the eigenkets of the position operator using Equation (2.27). Thus,

|A⟩ = ∫_{−∞}^{+∞} dx′ ⟨x′|A⟩ |x′⟩.    (2.28)

The quantity ⟨x′|A⟩ is a complex function of the position eigenvalue x′. We can write

⟨x′|A⟩ = ψ_A(x′).    (2.29)

Here, ψ_A(x′) is the famous wavefunction of quantum mechanics. Note that state A is completely specified by its wavefunction ψ_A(x′) [because the wavefunction can be used to reconstruct the state ket |A⟩ using Equation (2.28)]. It is clear that the wavefunction of state A is simply the collection of the weights of the corresponding state ket |A⟩, when it is expanded in terms of the eigenkets of the position operator. Recall, from Section 1.10, that the probability of a measurement of a dynamical variable ξ yielding the result ξ′ when the system is in state A is given by |⟨ξ′|A⟩|², assuming that the eigenvalues of ξ are discrete. This result is easily generalized to dynamical variables possessing continuous eigenvalues. In fact, the probability of a measurement of x yielding a result lying in the range x′ to x′ + dx′ when the system is in a state |A⟩ is |⟨x′|A⟩|² dx′.
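On a computer, the continuous range of position eigenvalues has to be approximated by a finite grid, so that |⟨x′|A⟩|² dx′ becomes |ψ(x′)|² Δx on each grid cell. A sketch using an illustrative Gaussian wavefunction (not taken from the text):

```python
import numpy as np

# Discretize position: the eigenvalue x' is sampled on a uniform grid.
x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]

# An illustrative wavefunction psi_A(x'), normalized so probabilities sum to 1.
psi = np.exp(-x**2 / 2) * np.exp(1j * x)    # Gaussian with a phase factor
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# |<x'|A>|^2 dx' -> probability of finding the particle in each grid cell.
prob = np.abs(psi)**2 * dx
assert np.all(prob >= 0)
assert np.isclose(prob.sum(), 1.0)

# Probability of a measurement yielding a result in the window -1 < x' < 1:
p_window = prob[(x > -1) & (x < 1)].sum()
assert 0 < p_window < 1
```

Note that the phase factor e^{ix} drops out of the probabilities entirely, in keeping with the fact that only the modulus of the wavefunction is observable here.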
In other words, the probability of a measurement of position yielding a result in the range x′ to x′ + dx′ when the wavefunction of the system is ψ_A(x′) is

P(x′, dx′) = |ψ_A(x′)|² dx′.    (2.30)

This formula is only valid if the state ket |A⟩ is properly normalized: i.e., if ⟨A|A⟩ = 1. The corresponding normalization for the wavefunction is

∫_{−∞}^{+∞} dx′ |ψ_A(x′)|² = 1.    (2.31)

Consider a second state B represented by a state ket |B⟩ and a wavefunction ψ_B(x′). The inner product ⟨B|A⟩ can be written

⟨B|A⟩ = ∫_{−∞}^{+∞} dx′ ⟨B|x′⟩ ⟨x′|A⟩ = ∫_{−∞}^{+∞} dx′ ψ_B*(x′) ψ_A(x′),    (2.32)

where use has been made of Equations (2.27) and (2.29). Thus, the inner product of two states is related to the overlap integral of their wavefunctions.

Consider a general function f(x) of the observable x [e.g., f(x) = x²]. If |B⟩ = f(x) |A⟩ then it follows that

ψ_B(x′) = ⟨x′| f(x) ∫_{−∞}^{+∞} dx′′ ψ_A(x′′) |x′′⟩ = ∫_{−∞}^{+∞} dx′′ f(x′′) ψ_A(x′′) ⟨x′|x′′⟩,    (2.33)

giving

ψ_B(x′) = f(x′) ψ_A(x′),    (2.34)

where use has been made of Equation (2.26). Here, f(x′) is the same function of the position eigenvalue x′ that f(x) is of the position operator x: i.e., if f(x) = x² then f(x′) = x′². It follows, from the above result, that a general state ket |A⟩ can be written

|A⟩ = ψ_A(x) ⟩,    (2.35)

where ψ_A(x) is the same function of the operator x that the wavefunction ψ_A(x′) is of the position eigenvalue x′, and the ket ⟩ has the wavefunction ψ(x′) = 1. The ket ⟩ is termed the standard ket. The dual of the standard ket is termed the standard bra, and is denoted ⟨. It is easily seen that

⟨ ψ_A*(x)   ⟵DC⟶   ψ_A(x) ⟩.    (2.36)

Note, finally, that ψ_A(x) ⟩ is often shortened to ψ_A ⟩, leaving the dependence on the position operator x tacitly understood.

2.4 Schrödinger Representation

Consider the simple system described in the previous section. A general state ket can be written ψ(x) ⟩, where ψ(x) is a general function of the position operator x, and ψ(x′) is the associated wavefunction.
Consider the ket whose wavefunction is $d\psi(x')/dx'$. This ket is denoted $d\psi/dx\,\rangle$. The new ket is clearly a linear function of the original ket, so we can think of it as the result of some linear operator acting on $\psi\,\rangle$. Let us denote this operator $d/dx$. It follows that
$$\frac{d}{dx}\,\psi\,\rangle = \frac{d\psi}{dx}\,\rangle.\tag{2.37}$$
Any linear operator that acts on ket vectors can also act on bra vectors. Consider $d/dx$ acting on a general bra $\langle\,\phi(x)$. According to Equation (1.34), the bra $\langle\,\phi\,(d/dx)$ satisfies
$$\left(\langle\,\phi\,\frac{d}{dx}\right)\psi\,\rangle = \langle\,\phi\left(\frac{d}{dx}\,\psi\,\rangle\right).\tag{2.38}$$
Making use of Equations (2.27) and (2.29), we can write
$$\int_{-\infty}^{+\infty} dx'\,\langle\,\phi\,\frac{d}{dx}\,|x'\rangle\,\psi(x') = \int_{-\infty}^{+\infty} dx'\,\phi(x')\,\frac{d\psi(x')}{dx'}.\tag{2.39}$$
The right-hand side can be transformed via integration by parts to give
$$\int_{-\infty}^{+\infty} dx'\,\langle\,\phi\,\frac{d}{dx}\,|x'\rangle\,\psi(x') = -\int_{-\infty}^{+\infty} dx'\,\frac{d\phi(x')}{dx'}\,\psi(x'),\tag{2.40}$$
assuming that the contributions from the limits of integration vanish. It follows that
$$\langle\,\phi\,\frac{d}{dx}\,|x'\rangle = -\frac{d\phi(x')}{dx'},\tag{2.41}$$
which implies that
$$\langle\,\phi\,\frac{d}{dx} = -\langle\,\frac{d\phi}{dx}.\tag{2.42}$$
The neglect of contributions from the limits of integration in Equation (2.40) is reasonable because physical wavefunctions are square-integrable [see Equation (2.31)]. Note that
$$\frac{d}{dx}\,\psi\,\rangle = \frac{d\psi}{dx}\,\rangle\;\stackrel{\rm DC}{\longleftrightarrow}\;\langle\,\frac{d\psi^*}{dx} = -\langle\,\psi^*\,\frac{d}{dx},\tag{2.43}$$
where use has been made of Equation (2.42). It follows, by comparison with Equations (1.35) and (2.36), that
$$\left(\frac{d}{dx}\right)^{\!\dagger} = -\frac{d}{dx}.\tag{2.44}$$
Thus, $d/dx$ is an anti-Hermitian operator.

Let us evaluate the commutation relation between the operators $x$ and $d/dx$. We have
$$\frac{d}{dx}\,x\,\psi\,\rangle = \frac{d(x\,\psi)}{dx}\,\rangle = x\,\frac{d}{dx}\,\psi\,\rangle + \psi\,\rangle.\tag{2.45}$$
Since this holds for any ket $\psi\,\rangle$, it follows that
$$\frac{d}{dx}\,x - x\,\frac{d}{dx} = 1.\tag{2.46}$$
Let $p_x$ be the momentum conjugate to $x$ (for the simple system under consideration, $p_x$ is a straightforward linear momentum). According to Equation (2.25), $x$ and $p_x$ satisfy the commutation relation
$$x\,p_x - p_x\,x = i\,\hbar.\tag{2.47}$$
It can be seen, by comparison with Equation (2.46), that the Hermitian operator $-i\,\hbar\,d/dx$ satisfies the same commutation relation with $x$ that $p_x$ does.
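Both properties of $d/dx$ just established — its anti-Hermiticity (2.44) and its action as a derivative — can be illustrated with a finite-difference matrix. A central-difference approximation on a periodic grid is exactly antisymmetric, mirroring (2.44); the grid size and test function below are arbitrary choices in this sketch:

```python
import numpy as np

N = 200
L = 2.0 * np.pi
x = np.arange(N) * L / N          # periodic grid on [0, 2 pi)
dx = x[1] - x[0]

# Central-difference matrix approximating d/dx, with periodic wrap-around
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2.0 * dx)
D[0, -1] = -1.0 / (2.0 * dx)
D[-1, 0] = 1.0 / (2.0 * dx)

# Anti-Hermiticity, the discrete analogue of Eq. (2.44): D^T = -D exactly
print(np.allclose(D.T, -D))       # True

# Acting on a sampled wavefunction: D sin(x) ~ cos(x), up to O(dx^2) error
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
print(err)                        # small discretization error
```

The antisymmetry is exact for this discretization (it is the matrix analogue of the integration by parts used in Equation 2.40), whereas the derivative itself is only accurate to second order in the grid spacing.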
The most general conclusion that may be drawn from a comparison of Equations (2.46) and (2.47) is that
$$p_x = -i\,\hbar\,\frac{d}{dx} + f(x),\tag{2.48}$$
since (as is easily demonstrated) a general function $f(x)$ of the position operator automatically commutes with $x$.

We have chosen to normalize the eigenkets and eigenbras of the position operator such that they satisfy the normalization condition (2.26). However, this choice of normalization does not uniquely determine the eigenkets and eigenbras. Suppose that we transform to a new set of eigenbras, which are related to the old set via
$$\langle x'|_{\rm new} = {\rm e}^{\,i\,\gamma'}\,\langle x'|_{\rm old},\tag{2.49}$$
where $\gamma' \equiv \gamma(x')$ is a real function of $x'$. This transformation amounts to a rearrangement of the relative phases of the eigenbras. The new normalization condition is
$$\langle x'|x''\rangle_{\rm new} = \langle x'|\,{\rm e}^{\,i\,\gamma'}\,{\rm e}^{-i\,\gamma''}\,|x''\rangle_{\rm old} = {\rm e}^{\,i\,(\gamma'-\gamma'')}\,\langle x'|x''\rangle_{\rm old} = {\rm e}^{\,i\,(\gamma'-\gamma'')}\,\delta(x'-x'') = \delta(x'-x'').\tag{2.50}$$
Thus, the new eigenbras satisfy the same normalization condition as the old eigenbras.

By definition, the standard ket $\rangle$ satisfies $\langle x'|\,\rangle = 1$. It follows from Equation (2.49) that the new standard ket is related to the old standard ket via
$$\rangle_{\rm new} = {\rm e}^{-i\,\gamma}\,\rangle_{\rm old},\tag{2.51}$$
where $\gamma \equiv \gamma(x)$ is a real function of the position operator $x$. The dual of the above equation yields the transformation rule for the standard bra,
$$\langle_{\rm new} = \langle_{\rm old}\;{\rm e}^{\,i\,\gamma}.\tag{2.52}$$
The transformation rule for a general operator $A$ follows from Equations (2.51) and (2.52), plus the requirement that the triple product $\langle\,A\,\rangle$ remain invariant (this must be the case, otherwise the probability of a measurement yielding a certain result would depend on the choice of eigenbras). Thus,
$$A_{\rm new} = {\rm e}^{-i\,\gamma}\,A_{\rm old}\;{\rm e}^{\,i\,\gamma}.\tag{2.53}$$
Of course, if $A$ commutes with $x$ then $A$ is invariant under the transformation. In fact, $d/dx$ is the only operator (that we know of) that does not commute with $x$, so Equation (2.53) yields
$$\left(\frac{d}{dx}\right)_{\rm new} = {\rm e}^{-i\,\gamma}\,\frac{d}{dx}\;{\rm e}^{\,i\,\gamma} = \frac{d}{dx} + i\,\frac{d\gamma}{dx},\tag{2.54}$$
where the subscript "old" is taken as read.
It follows, from Equation (2.48), that the momentum operator $p_x$ can be written
$$p_x = -i\,\hbar\left(\frac{d}{dx}\right)_{\rm new} - \hbar\,\frac{d\gamma}{dx} + f(x).\tag{2.55}$$
Thus, the special choice
$$\gamma(x) = \hbar^{-1}\int^{\,x} dx'\,f(x')\tag{2.56}$$
yields
$$p_x = -i\,\hbar\left(\frac{d}{dx}\right)_{\rm new}.\tag{2.57}$$
Equation (2.56) fixes $\gamma$ to within an arbitrary additive constant: i.e., the special eigenkets and eigenbras for which Equation (2.57) is true are determined to within an arbitrary common phase-factor.

In conclusion, it is possible to find a set of basis eigenkets and eigenbras of the position operator $x$ that satisfy the normalization condition (2.26), and for which the momentum conjugate to $x$ can be represented as the operator
$$p_x = -i\,\hbar\,\frac{d}{dx}.\tag{2.58}$$
A general state ket is written $\psi(x)\,\rangle$, where the standard ket satisfies $\langle x'|\,\rangle = 1$, and where $\psi(x') = \langle x'|\,\psi(x)\,\rangle$ is the wavefunction. This scheme of things is known as the Schrödinger representation, and is the basis of wave mechanics.

2.5 Generalized Schrödinger Representation

In the preceding section, we developed the Schrödinger representation for the case of a single operator $x$ corresponding to a classical Cartesian coordinate. However, this scheme can easily be extended. Consider a system with $N$ generalized coordinates, $q_1\cdots q_N$, which can all be simultaneously measured. These are represented as $N$ commuting operators, $q_1\cdots q_N$, each with a continuous range of eigenvalues, $q_1'\cdots q_N'$. Ket space is conveniently spanned by the simultaneous eigenkets of $q_1\cdots q_N$, which are denoted $|q_1'\cdots q_N'\rangle$. These eigenkets must form a complete set, otherwise the $q_1\cdots q_N$ would not be simultaneously observable.

The orthogonality condition for the eigenkets [i.e., the generalization of Equation (2.26)] is
$$\langle q_1'\cdots q_N'|q_1''\cdots q_N''\rangle = \delta(q_1'-q_1'')\,\delta(q_2'-q_2'')\cdots\delta(q_N'-q_N'').\tag{2.59}$$
The completeness condition [i.e., the generalization of Equation (2.27)] is
$$\int_{-\infty}^{+\infty}\!\!\cdots\int_{-\infty}^{+\infty} dq_1'\cdots dq_N'\,|q_1'\cdots q_N'\rangle\langle q_1'\cdots q_N'| = 1.\tag{2.60}$$
The standard ket is defined such that
$$\langle q_1'\cdots q_N'|\,\rangle = 1.\tag{2.61}$$
The standard bra is the dual of the standard ket. A general state ket is written
$$\psi(q_1\cdots q_N)\,\rangle.\tag{2.62}$$
The associated wavefunction is
$$\psi(q_1'\cdots q_N') = \langle q_1'\cdots q_N'|\psi\rangle.\tag{2.63}$$
Likewise, a general state bra is written
$$\langle\,\phi(q_1\cdots q_N),\tag{2.64}$$
where
$$\phi(q_1'\cdots q_N') = \langle\phi|q_1'\cdots q_N'\rangle.\tag{2.65}$$
The probability of an observation of the system simultaneously finding the first coordinate in the range $q_1'$ to $q_1' + dq_1'$, the second coordinate in the range $q_2'$ to $q_2' + dq_2'$, etc., is
$$P(q_1'\cdots q_N';\,dq_1'\cdots dq_N') = |\psi(q_1'\cdots q_N')|^2\,dq_1'\cdots dq_N'.\tag{2.66}$$
Finally, the normalization condition for a physical wavefunction is
$$\int_{-\infty}^{+\infty}\!\!\cdots\int_{-\infty}^{+\infty} dq_1'\cdots dq_N'\,|\psi(q_1'\cdots q_N')|^2 = 1.\tag{2.67}$$
The $N$ linear operators $\partial/\partial q_i$ (where $i$ runs from 1 to $N$) are defined via
$$\frac{\partial}{\partial q_i}\,\psi\,\rangle = \frac{\partial\psi}{\partial q_i}\,\rangle.\tag{2.68}$$
These linear operators can also act on bras (provided the associated wavefunctions are square-integrable) in accordance with [see Equation (2.42)]
$$\langle\,\phi\,\frac{\partial}{\partial q_i} = -\langle\,\frac{\partial\phi}{\partial q_i}.\tag{2.69}$$
Corresponding to Equation (2.46), we can derive the commutation relations
$$\frac{\partial}{\partial q_i}\,q_j - q_j\,\frac{\partial}{\partial q_i} = \delta_{ij}.\tag{2.70}$$
It is also clear that
$$\frac{\partial}{\partial q_i}\,\frac{\partial}{\partial q_j}\,\psi\,\rangle = \frac{\partial^2\psi}{\partial q_i\,\partial q_j}\,\rangle = \frac{\partial}{\partial q_j}\,\frac{\partial}{\partial q_i}\,\psi\,\rangle,\tag{2.71}$$
showing that
$$\frac{\partial}{\partial q_i}\,\frac{\partial}{\partial q_j} = \frac{\partial}{\partial q_j}\,\frac{\partial}{\partial q_i}.\tag{2.72}$$
It can be seen, by comparison with Equations (2.23)–(2.25), that the linear operators $-i\,\hbar\,\partial/\partial q_i$ satisfy the same commutation relations with the $q$'s and with each other that the $p$'s do. The most general conclusion we can draw from this coincidence of commutation relations is (see Dirac)
$$p_i = -i\,\hbar\,\frac{\partial}{\partial q_i} + \frac{\partial F(q_1\cdots q_N)}{\partial q_i}.\tag{2.73}$$
However, the function $F$ can be transformed away via a suitable readjustment of the phases of the basis eigenkets (see Section 2.4, and Dirac). Thus, we can always construct a set of simultaneous eigenkets of $q_1\cdots q_N$ for which
$$p_i = -i\,\hbar\,\frac{\partial}{\partial q_i}.\tag{2.74}$$
This is the generalized Schrödinger representation.
It follows from Equations (2.61), (2.68), and (2.74) that
$$p_i\,\rangle = 0.\tag{2.75}$$
Thus, the standard ket in the Schrödinger representation is a simultaneous eigenket of all the momentum operators belonging to the eigenvalue zero. Note that
$$\langle q_1'\cdots q_N'|\,\frac{\partial}{\partial q_i}\,\psi\,\rangle = \langle q_1'\cdots q_N'|\,\frac{\partial\psi}{\partial q_i}\,\rangle = \frac{\partial\psi(q_1'\cdots q_N')}{\partial q_i'} = \frac{\partial}{\partial q_i'}\,\langle q_1'\cdots q_N'|\,\psi\,\rangle.\tag{2.76}$$
Hence,
$$\langle q_1'\cdots q_N'|\,\frac{\partial}{\partial q_i} = \frac{\partial}{\partial q_i'}\,\langle q_1'\cdots q_N'|,\tag{2.77}$$
so that
$$\langle q_1'\cdots q_N'|\,p_i = -i\,\hbar\,\frac{\partial}{\partial q_i'}\,\langle q_1'\cdots q_N'|.\tag{2.78}$$
The dual of the above equation gives
$$p_i\,|q_1'\cdots q_N'\rangle = i\,\hbar\,\frac{\partial}{\partial q_i'}\,|q_1'\cdots q_N'\rangle.\tag{2.79}$$

2.6 Momentum Representation

Consider a system with one degree of freedom, describable in terms of a coordinate $x$ and its conjugate momentum $p_x$, both of which have a continuous range of eigenvalues. We have seen that it is possible to represent the system in terms of the eigenkets of $x$. This is termed the Schrödinger representation. However, it is also possible to represent the system in terms of the eigenkets of $p_x$.

Consider the eigenkets of $p_x$ that belong to the eigenvalues $p_x'$. These are denoted $|p_x'\rangle$. The orthogonality relation for the momentum eigenkets is
$$\langle p_x'|p_x''\rangle = \delta(p_x'-p_x''),\tag{2.80}$$
and the corresponding completeness relation is
$$\int_{-\infty}^{+\infty} dp_x'\,|p_x'\rangle\langle p_x'| = 1.\tag{2.81}$$
A general state ket can be written
$$\phi(p_x)\,\rangle,\tag{2.82}$$
where the standard ket $\rangle$ satisfies
$$\langle p_x'|\,\rangle = 1.\tag{2.83}$$
Note that the standard ket in this representation is quite different to that in the Schrödinger representation. The momentum space wavefunction $\phi(p_x')$ satisfies
$$\phi(p_x') = \langle p_x'|\phi\rangle.\tag{2.84}$$
The probability that a measurement of the momentum yields a result lying in the range $p_x'$ to $p_x' + dp_x'$ is given by
$$P(p_x',\,dp_x') = |\phi(p_x')|^2\,dp_x'.\tag{2.85}$$
Finally, the normalization condition for a physical momentum space wavefunction is
$$\int_{-\infty}^{+\infty} dp_x'\,|\phi(p_x')|^2 = 1.\tag{2.86}$$
The fundamental commutation relations (2.23)–(2.25) exhibit a particular symmetry between coordinates and their conjugate momenta.
If all the coordinates are transformed into their conjugate momenta, and vice versa, and $i$ is then replaced by $-i$, then the commutation relations are unchanged. It follows from this symmetry that we can always choose the eigenkets of $p_x$ in such a manner that the coordinate $x$ can be represented as (see Section 2.4)
$$x = i\,\hbar\,\frac{d}{dp_x}.\tag{2.87}$$
This is termed the momentum representation.

The above result is easily generalized to a system with more than one degree of freedom. Suppose the system is specified by $N$ coordinates, $q_1\cdots q_N$, and $N$ conjugate momenta, $p_1\cdots p_N$. Then, in the momentum representation, the coordinates can be written as
$$q_i = i\,\hbar\,\frac{\partial}{\partial p_i}.\tag{2.88}$$
We also have
$$q_i\,\rangle = 0,\tag{2.89}$$
and
$$\langle p_1'\cdots p_N'|\,q_i = i\,\hbar\,\frac{\partial}{\partial p_i'}\,\langle p_1'\cdots p_N'|.\tag{2.90}$$
The momentum representation is less useful than the Schrödinger representation for a very simple reason. The energy operator (i.e., the Hamiltonian) of most simple systems takes the form of a sum of quadratic terms in the momenta (i.e., the kinetic energy) plus a complicated function of the coordinates (i.e., the potential energy). In the Schrödinger representation, the eigenvalue problem for the energy translates into a second-order differential equation in the coordinates, with a complicated potential function. In the momentum representation, the problem transforms into a high-order differential equation in the momenta, with a quadratic potential. With the mathematical tools at our disposal, we are far better able to solve the former type of problem than the latter. Hence, the Schrödinger representation is generally more useful than the momentum representation.

2.7 Uncertainty Relation

How is a momentum space wavefunction related to the corresponding coordinate space wavefunction? To answer this question, let us consider the representative $\langle x'|p_x'\rangle$ of the momentum eigenkets $|p_x'\rangle$ in the Schrödinger representation for a system with a single degree of freedom.
This representative satisfies
$$p_x'\,\langle x'|p_x'\rangle = \langle x'|\,p_x\,|p_x'\rangle = -i\,\hbar\,\frac{d}{dx'}\,\langle x'|p_x'\rangle,\tag{2.91}$$
where use has been made of Equation (2.78) (for the case of a system with one degree of freedom). The solution of the above differential equation is
$$\langle x'|p_x'\rangle = c'\,\exp(\,i\,p_x'\,x'/\hbar),\tag{2.92}$$
where $c' = c'(p_x')$. It is easily demonstrated that
$$\langle p_x'|p_x''\rangle = \int_{-\infty}^{+\infty} dx'\,\langle p_x'|x'\rangle\langle x'|p_x''\rangle = c'^{\,*}\,c''\int_{-\infty}^{+\infty} dx'\,\exp[-i\,(p_x'-p_x'')\,x'/\hbar].\tag{2.93}$$
The well-known mathematical result
$$\int_{-\infty}^{+\infty} dx\,\exp(\,i\,a\,x) = 2\pi\,\delta(a)\tag{2.94}$$
yields
$$\langle p_x'|p_x''\rangle = |c'|^2\,h\,\delta(p_x'-p_x'').\tag{2.95}$$
This is consistent with Equation (2.80), provided that $c' = h^{-1/2}$. Thus,
$$\langle x'|p_x'\rangle = h^{-1/2}\,\exp(\,i\,p_x'\,x'/\hbar).\tag{2.96}$$
Consider a general state ket $|A\rangle$ whose coordinate wavefunction is $\psi(x')$, and whose momentum wavefunction is $\Psi(p_x')$. In other words,
$$\psi(x') = \langle x'|A\rangle,\tag{2.97}$$
$$\Psi(p_x') = \langle p_x'|A\rangle.\tag{2.98}$$
It is easily demonstrated that
$$\psi(x') = \int_{-\infty}^{+\infty} dp_x'\,\langle x'|p_x'\rangle\langle p_x'|A\rangle = \frac{1}{h^{1/2}}\int_{-\infty}^{+\infty} dp_x'\,\Psi(p_x')\,\exp(\,i\,p_x'\,x'/\hbar)\tag{2.99}$$
and
$$\Psi(p_x') = \int_{-\infty}^{+\infty} dx'\,\langle p_x'|x'\rangle\langle x'|A\rangle = \frac{1}{h^{1/2}}\int_{-\infty}^{+\infty} dx'\,\psi(x')\,\exp(-i\,p_x'\,x'/\hbar),\tag{2.100}$$
where use has been made of Equations (2.27), (2.81), (2.94), and (2.96). Clearly, the momentum space wavefunction is the Fourier transform of the coordinate space wavefunction.

Consider a state whose coordinate space wavefunction is a wavepacket. In other words, the wavefunction only has non-negligible amplitude in some spatially localized region of extent $\Delta x$. As is well known, the Fourier transform of a wavepacket fills up a wavenumber band of approximate extent $\Delta k \sim 1/\Delta x$. Note that in Equation (2.99) the role of the wavenumber $k$ is played by the quantity $p_x'/\hbar$. It follows that the momentum space wavefunction corresponding to a wavepacket in coordinate space extends over a range of momenta $\Delta p_x \sim \hbar/\Delta x$. Clearly, a measurement of $x$ is almost certain to give a result lying in a range of width $\Delta x$. Likewise, a measurement of $p_x$ is almost certain to yield a result lying in a range of width $\Delta p_x$.
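The reciprocal widths just described are easy to check numerically. The following sketch (illustrative only; $\hbar = 1$ and the width $\sigma$ are arbitrary choices) builds a Gaussian wavepacket, evaluates the Fourier integral (2.100) by direct quadrature, and computes the two r.m.s. widths; for a Gaussian their product attains the minimum value $\hbar/2$:

```python
import numpy as np

hbar = 1.0
sigma = 0.7

x = np.linspace(-10.0, 10.0, 801); dx = x[1] - x[0]
p = np.linspace(-10.0, 10.0, 801); dp = p[1] - p[0]

# Gaussian coordinate-space wavepacket centred on x' = 0 (cf. Exercise 2.3)
psi = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

# Momentum-space wavefunction via direct quadrature of Eq. (2.100); h = 2 pi hbar
h = 2.0 * np.pi * hbar
Psi = np.exp(-1j * np.outer(p, x) / hbar) @ psi * dx / np.sqrt(h)

# r.m.s. widths: Delta x = sigma and Delta p = hbar/(2 sigma) for a Gaussian
Dx = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
Dp = np.sqrt(np.sum(p**2 * np.abs(Psi)**2) * dp)
print(Dx * Dp)   # ~hbar/2
```

Narrowing $\sigma$ shrinks `Dx` and broadens `Dp` by the same factor, leaving the product fixed, which is precisely the reciprocal scaling described above.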
The product of these two uncertainties is
$$\Delta x\,\Delta p_x \sim \hbar.\tag{2.101}$$
This result is called the Heisenberg uncertainty principle. Actually, it is possible to write the Heisenberg uncertainty principle more exactly by making use of Equation (1.83) and the commutation relation (2.47). We obtain
$$\langle(\Delta x)^2\rangle\,\langle(\Delta p_x)^2\rangle \geq \frac{\hbar^2}{4}\tag{2.102}$$
for any general state. It is easily demonstrated that the minimum uncertainty states, for which the equality sign holds in the above relation, correspond to Gaussian wavepackets in both coordinate and momentum space.

2.8 Displacement Operators

Consider a system with one degree of freedom corresponding to the Cartesian coordinate $x$. Suppose that we displace this system some distance along the $x$-axis. We could imagine that the system is on wheels, and we just give it a little push. The final state of the system is completely determined by its initial state, together with the direction and magnitude of the displacement. Note that the type of displacement we are considering is one in which everything to do with the system is displaced. So, if the system is subject to an external potential then the potential must be displaced.

The situation is not so clear with state kets. The final state of the system only determines the direction of the displaced state ket. Even if we adopt the convention that all state kets have unit norms, the final ket is still not completely determined, because it can be multiplied by a constant phase-factor. However, we know that the superposition relations between states remain invariant under the displacement. This follows because the superposition relations have a physical significance that is unaffected by a displacement of the system. Thus, if
$$|R\rangle = |A\rangle + |B\rangle\tag{2.103}$$
in the undisplaced system, and the displacement causes ket $|R\rangle$ to transform to ket $|R_d\rangle$, etc., then in the displaced system we have
$$|R_d\rangle = |A_d\rangle + |B_d\rangle.\tag{2.104}$$
Incidentally, this determines the displaced kets to within a single arbitrary phase-factor to be multiplied into all of them. The displaced kets cannot be multiplied by individual phase-factors, because this would wreck the superposition relations.

Since Equation (2.104) holds in the displaced system whenever Equation (2.103) holds in the undisplaced system, it follows that the displaced ket $|R_d\rangle$ must be the result of some linear operator acting on the undisplaced ket $|R\rangle$. In other words,
$$|R_d\rangle = D\,|R\rangle,\tag{2.105}$$
where $D$ is an operator that depends only on the nature of the displacement. The arbitrary phase-factor by which all displaced kets may be multiplied results in $D$ being undetermined to an arbitrary multiplicative constant of modulus unity.

We now adopt the ansatz that any combination of bras, kets, and dynamical variables that possesses a physical significance is invariant under a displacement of the system. The normalization condition
$$\langle A|A\rangle = 1\tag{2.106}$$
for a state ket $|A\rangle$ certainly has a physical significance. Thus, we must have
$$\langle A_d|A_d\rangle = 1.\tag{2.107}$$
Now, $|A_d\rangle = D\,|A\rangle$ and $\langle A_d| = \langle A|\,D^\dagger$, so
$$\langle A|\,D^\dagger\,D\,|A\rangle = 1.\tag{2.108}$$
Because this must hold for any state ket $|A\rangle$, it follows that
$$D^\dagger\,D = 1.\tag{2.109}$$
Hence, the displacement operator is unitary. Note that the above relation implies that
$$|A\rangle = D^\dagger\,|A_d\rangle.\tag{2.110}$$
The equation
$$v\,|A\rangle = |B\rangle,\tag{2.111}$$
where the operator $v$ represents a dynamical variable, has some physical significance. Thus, we require that
$$v_d\,|A_d\rangle = |B_d\rangle,\tag{2.112}$$
where $v_d$ is the displaced operator. It follows that
$$v_d\,|A_d\rangle = D\,|B\rangle = D\,v\,|A\rangle = D\,v\,D^\dagger\,|A_d\rangle.\tag{2.113}$$
Since this is true for any ket $|A_d\rangle$, we have
$$v_d = D\,v\,D^\dagger.\tag{2.114}$$
Note that the arbitrary numerical factor in $D$ does not affect either of the results (2.109) and (2.114).

Suppose, now, that the system is displaced an infinitesimal distance $\delta x$ along the $x$-axis. We expect that the displaced ket $|A_d\rangle$ should approach the undisplaced ket $|A\rangle$ in the limit as $\delta x \rightarrow 0$.
Thus, we expect the limit
$$\lim_{\delta x\rightarrow 0}\frac{|A_d\rangle - |A\rangle}{\delta x} = \lim_{\delta x\rightarrow 0}\left(\frac{D-1}{\delta x}\right)|A\rangle\tag{2.115}$$
to exist. Let
$$d_x = \lim_{\delta x\rightarrow 0}\frac{D-1}{\delta x},\tag{2.116}$$
where $d_x$ is denoted the displacement operator along the $x$-axis. The fact that $D$ can be replaced by $D\,\exp(\,i\,\gamma)$, where $\gamma$ is a real phase-angle, implies that $d_x$ can be replaced by
$$\lim_{\delta x\rightarrow 0}\frac{D\,\exp(\,i\,\gamma)-1}{\delta x} = \lim_{\delta x\rightarrow 0}\frac{D-1+i\,\gamma}{\delta x} = d_x + i\,a_x,\tag{2.117}$$
where $a_x$ is the limit of $\gamma/\delta x$. We have assumed, as seems reasonable, that $\gamma$ tends to zero as $\delta x\rightarrow 0$. It is clear that the displacement operator is undetermined to an arbitrary imaginary additive constant.

For small $\delta x$, we have
$$D = 1 + \delta x\,d_x.\tag{2.118}$$
It follows from Equation (2.109) that
$$(1+\delta x\,d_x^{\,\dagger})\,(1+\delta x\,d_x) = 1.\tag{2.119}$$
Neglecting order $(\delta x)^2$, we obtain
$$d_x^{\,\dagger} + d_x = 0.\tag{2.120}$$
Thus, the displacement operator is anti-Hermitian. Substituting into Equation (2.114), and again neglecting order $(\delta x)^2$, we find that
$$v_d = (1+\delta x\,d_x)\,v\,(1-\delta x\,d_x) = v + \delta x\,(d_x\,v - v\,d_x),\tag{2.121}$$
which implies
$$\lim_{\delta x\rightarrow 0}\frac{v_d - v}{\delta x} = d_x\,v - v\,d_x.\tag{2.122}$$
Let us consider a specific example. Suppose that a state has a wavefunction $\psi(x')$. If the system is displaced a distance $\delta x$ along the $x$-axis then the new wavefunction is $\psi(x'-\delta x)$ (i.e., the same shape shifted in the $x$-direction by a distance $\delta x$). Actually, the new wavefunction can be multiplied by an arbitrary number of modulus unity. It can be seen that the new wavefunction is obtained from the old wavefunction according to the prescription $x'\rightarrow x'-\delta x$. Thus,
$$x_d = x - \delta x.\tag{2.123}$$
A comparison with Equation (2.122), using $v = x$, yields
$$d_x\,x - x\,d_x = -1.\tag{2.124}$$
It follows that $i\,\hbar\,d_x$ obeys the same commutation relation with $x$ that $p_x$, the momentum conjugate to $x$, does [see Equation (2.25)]. The most general conclusion we can draw from this observation is that
$$p_x = i\,\hbar\,d_x + f(x),\tag{2.125}$$
where $f$ is Hermitian (since $p_x$ is Hermitian).
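Taking the identification $p_x = i\,\hbar\,d_x$ (the phase choice that eliminates $f$, made below), a finite displacement can be applied to a wavefunction numerically: transform to the momentum representation, multiply by $\exp(-i\,p_x\,a/\hbar)$, and transform back. The FFT-based sketch below (an illustration with arbitrary grid parameters and $\hbar = 1$) confirms that this reproduces $\psi(x'-a)$:

```python
import numpy as np

hbar = 1.0
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)   # wavenumbers; p = hbar k

def gaussian(centre):
    # Normalized Gaussian wavepacket on the discrete grid
    g = np.exp(-(x - centre)**2 / 2.0)
    return g / np.sqrt(np.sum(np.abs(g)**2) * dx)

psi = gaussian(0.0)

# Displace by a: multiply by exp(-i p a / hbar) in the momentum representation
a = 3.0
psi_d = np.fft.ifft(np.exp(-1j * (hbar * k) * a / hbar) * np.fft.fft(psi))

# The displaced wavefunction is psi(x' - a): the same shape shifted by a
err = np.max(np.abs(psi_d - gaussian(a)))
print(err)   # near machine precision for a well-contained wavepacket
```

The displacement need not be a multiple of the grid spacing: because the Gaussian is effectively band-limited on this grid, the Fourier-space phase factor shifts it essentially exactly.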
However, the fact that $d_x$ is undetermined to an arbitrary additive imaginary constant (which could be a function of $x$) enables us to transform the function $f$ out of the above equation, leaving
$$p_x = i\,\hbar\,d_x.\tag{2.126}$$
Thus, the displacement operator in the $x$-direction is proportional to the momentum conjugate to $x$. We say that $p_x$ is the generator of translations along the $x$-axis.

A finite translation along the $x$-axis can be constructed from a series of very many infinitesimal translations. Thus, the operator $D(\Delta x)$ which translates the system a distance $\Delta x$ along the $x$-axis is written
$$D(\Delta x) = \lim_{N\rightarrow\infty}\left(1 - i\,\frac{\Delta x}{N}\,\frac{p_x}{\hbar}\right)^{N},\tag{2.127}$$
where use has been made of Equations (2.118) and (2.126). It follows that
$$D(\Delta x) = \exp(-i\,p_x\,\Delta x/\hbar).\tag{2.128}$$
The unitary nature of the operator is now clearly apparent.

We can also construct displacement operators which translate the system along the $y$- and $z$-axes. Note that a displacement a distance $\Delta x$ along the $x$-axis commutes with a displacement a distance $\Delta y$ along the $y$-axis. In other words, if the system is moved $\Delta x$ along the $x$-axis, and then $\Delta y$ along the $y$-axis, then it ends up in the same state as if it were moved $\Delta y$ along the $y$-axis, and then $\Delta x$ along the $x$-axis. The fact that translations in independent directions commute is clearly associated with the fact that the conjugate momentum operators associated with these directions also commute [see Equations (2.24) and (2.128)].

Exercises

2.1 Demonstrate that
$$[q_i, q_j]_{\rm cl} = 0,\qquad [p_i, p_j]_{\rm cl} = 0,\qquad [q_i, p_j]_{\rm cl} = \delta_{ij},$$
where $[\cdots]_{\rm cl}$ represents a classical Poisson bracket. Here, the $q_i$ and $p_i$ are the coordinates and corresponding canonical momenta of a classical, many degree of freedom, dynamical system.
2.2 Verify that
$$[u, v] = -[v, u],$$
$$[u, c] = 0,$$
$$[u_1 + u_2, v] = [u_1, v] + [u_2, v],$$
$$[u, v_1 + v_2] = [u, v_1] + [u, v_2],$$
$$[u_1\,u_2, v] = [u_1, v]\,u_2 + u_1\,[u_2, v],$$
$$[u, v_1\,v_2] = [u, v_1]\,v_2 + v_1\,[u, v_2],$$
$$[u, [v, w]] + [v, [w, u]] + [w, [u, v]] = 0,$$
where $[\cdots]$ represents either a classical or a quantum mechanical Poisson bracket. Here, $u$, $v$, $w$, etc., represent dynamical variables (i.e., functions of the coordinates and canonical momenta of a dynamical system), and $c$ represents a number.

2.3 Consider a Gaussian wavepacket whose corresponding wavefunction is
$$\psi(x') = \psi_0\,\exp\!\left[-\frac{(x'-x_0)^2}{4\,\sigma^2}\right],$$
where $\psi_0$, $x_0$, and $\sigma$ are constants. Demonstrate that

(a) $\langle x\rangle = x_0$,
(b) $\langle(\Delta x)^2\rangle = \sigma^2$,
(c) $\langle p\rangle = 0$,
(d) $\langle(\Delta p)^2\rangle = \hbar^2/(4\,\sigma^2)$.

Here, $x$ and $p$ are a position operator and its conjugate momentum operator, respectively.

2.4 Suppose that we displace a one-dimensional quantum mechanical system a distance $a$ along the $x$-axis. The corresponding displacement operator is
$$D(a) = \exp(-i\,p_x\,a/\hbar),$$
where $p_x$ is the momentum conjugate to the position operator $x$. Demonstrate that
$$D(a)\,x\,D(a)^\dagger = x - a.$$
[Hint: Use the momentum representation, $x = i\,\hbar\,d/dp_x$.] Similarly, demonstrate that
$$D(a)\,x^m\,D(a)^\dagger = (x-a)^m.$$
Hence, deduce that
$$D(a)\,V(x)\,D(a)^\dagger = V(x-a),$$
where $V(x)$ is a general function of $x$. Let $k = p_x/\hbar$, and let $|k'\rangle$ denote an eigenket of the $k$ operator belonging to the eigenvalue $k'$. Demonstrate that
$$|A\rangle = \sum_{n=-\infty}^{\infty} c_n\,|k' + n\,k_a\rangle,$$
where the $c_n$ are arbitrary complex coefficients, and $k_a = 2\pi/a$, is an eigenket of the $D(a)$ operator belonging to the eigenvalue $\exp(-i\,k'\,a)$. Show that the corresponding wavefunction can be written
$$\psi_A(x') = {\rm e}^{\,i\,k'\,x'}\,u(x'),$$
where $u(x'+a) = u(x')$ for all $x'$.

3 Quantum Dynamics

3.1 Schrödinger Equation of Motion

Up to now, we have only considered systems at one particular instant of time. Let us now investigate the time evolution of quantum mechanical systems. Consider a system in a state $A$ that evolves in time.
At time $t$, the state of the system is represented by the ket $|At\rangle$. The label $A$ is needed to distinguish this ket from any other ket ($|Bt\rangle$, say) that is evolving in time. The label $t$ is needed to distinguish the different states of the system at different times.

The final state of the system at time $t$ is completely determined by its initial state at time $t_0$ plus the time interval $t - t_0$ (assuming that the system is left undisturbed during this time interval). However, the final state only determines the direction of the final state ket. Even if we adopt the convention that all state kets have unit norms, the final ket is still not completely determined, because it can be multiplied by an arbitrary phase-factor. However, we expect that if a superposition relation holds for certain states at time $t_0$ then the same relation should hold between the corresponding time-evolved states at time $t$, assuming that the system is left undisturbed between times $t_0$ and $t$. In other words, if
$$|Rt_0\rangle = |At_0\rangle + |Bt_0\rangle\tag{3.1}$$
for any three kets then we should have
$$|Rt\rangle = |At\rangle + |Bt\rangle.\tag{3.2}$$
This rule determines the time-evolved kets to within a single arbitrary phase-factor to be multiplied into all of them. The evolved kets cannot be multiplied by individual phase-factors, because this would invalidate the superposition relation at later times.

According to Equations (3.1) and (3.2), the final ket $|Rt\rangle$ depends linearly on the initial ket $|Rt_0\rangle$. Thus, the final ket can be regarded as the result of some linear operator acting on the initial ket: i.e.,
$$|Rt\rangle = T\,|Rt_0\rangle,\tag{3.3}$$
where $T$ is a linear operator that depends only on the times $t$ and $t_0$. The arbitrary phase-factor by which all time-evolved kets may be multiplied results in $T(t, t_0)$ being undetermined to an arbitrary multiplicative constant of modulus unity.
Because we have adopted a convention in which the norm of any state ket is unity, it makes sense to define the time evolution operator $T$ in such a manner that it preserves the length of any ket upon which it acts (i.e., if a ket is properly normalized at time $t_0$ then it will remain normalized at all subsequent times $t > t_0$). This is always possible, because the length of a ket possesses no physical significance. Thus, we require that
$$\langle At_0|At_0\rangle = \langle At|At\rangle\tag{3.4}$$
for any ket $A$, which immediately yields
$$T^\dagger\,T = 1.\tag{3.5}$$
Hence, the time evolution operator $T$ is unitary.

Up to now, the time evolution operator $T$ looks very much like the spatial displacement operator $D$ introduced in the previous section. However, there are some important differences between time evolution and spatial displacement. In general, we do expect the expectation value of some observable $\xi$ to evolve with time, even if the system is left in a state of undisturbed motion (after all, time evolution has no meaning unless something observable changes with time). The triple product $\langle A|\,\xi\,|A\rangle$ can evolve either because the ket $|A\rangle$ evolves and the operator $\xi$ stays constant, the ket $|A\rangle$ stays constant and the operator $\xi$ evolves, or both the ket $|A\rangle$ and the operator $\xi$ evolve. Because we are already committed to evolving state kets, according to Equation (3.3), let us assume that the time evolution operator $T$ can be chosen in such a manner that the operators representing the dynamical variables of the system do not evolve in time (unless they contain some specific time dependence).

We expect, from physical continuity, that if $t\rightarrow t_0$ then $|At\rangle\rightarrow|At_0\rangle$ for any ket $A$. Thus, the limit
$$\lim_{t\rightarrow t_0}\frac{|At\rangle - |At_0\rangle}{t-t_0} = \lim_{t\rightarrow t_0}\frac{T-1}{t-t_0}\,|At_0\rangle\tag{3.6}$$
should exist. Note that this limit is simply the derivative of $|At_0\rangle$ with respect to $t_0$. Let
$$\tau(t_0) = \lim_{t\rightarrow t_0}\frac{T(t, t_0)-1}{t-t_0}.\tag{3.7}$$
It is easily demonstrated from Equation (3.5) that $\tau$ is anti-Hermitian: i.e.,
$$\tau^\dagger + \tau = 0.\tag{3.8}$$
The fact that $T$ can be replaced by $T\,\exp(\,i\,\gamma)$ (where $\gamma$ is real) implies that $\tau$ is undetermined to an arbitrary imaginary additive constant (see Section 2.8). Let us define the Hermitian operator $H(t_0) = i\,\hbar\,\tau$. This operator is undetermined to an arbitrary real additive constant. It follows from Equations (3.6) and (3.7) that
$$i\,\hbar\,\frac{d|At_0\rangle}{dt_0} = i\,\hbar\lim_{t\rightarrow t_0}\frac{|At\rangle - |At_0\rangle}{t-t_0} = i\,\hbar\,\tau(t_0)\,|At_0\rangle = H(t_0)\,|At_0\rangle.\tag{3.9}$$
When written for general $t$, this equation becomes
$$i\,\hbar\,\frac{d|At\rangle}{dt} = H(t)\,|At\rangle.\tag{3.10}$$
Equation (3.10) gives the general law for the time evolution of a state ket in a scheme in which the operators representing the dynamical variables remain fixed. This equation is denoted the Schrödinger equation of motion. It involves a Hermitian operator $H(t)$ which is, presumably, a characteristic of the dynamical system under investigation.

We saw, in Section 2.8, that if the operator $D(x, x_0)$ displaces the system along the $x$-axis from $x_0$ to $x$ then
$$p_x = i\,\hbar\lim_{x\rightarrow x_0}\frac{D(x, x_0)-1}{x-x_0},\tag{3.11}$$
where $p_x$ is the operator representing the momentum conjugate to $x$. Furthermore, we have just shown that if the operator $T(t, t_0)$ evolves the system in time from $t_0$ to $t$ then
$$H(t_0) = i\,\hbar\lim_{t\rightarrow t_0}\frac{T(t, t_0)-1}{t-t_0}.\tag{3.12}$$
Thus, the dynamical variable corresponding to the operator $H$ stands to time $t$ as the momentum $p_x$ stands to the coordinate $x$. By analogy with classical physics, this suggests that $H(t)$ is the operator representing the total energy of the system. (Recall that, in classical physics, if the equations of motion of a system are invariant under an $x$-displacement of the system then this implies that the system conserves momentum in the $x$-direction. Likewise, if the equations of motion are invariant under a temporal displacement then this implies that the system conserves energy.) The operator $H(t)$ is usually called the Hamiltonian of the system.
The fact that the Hamiltonian is undetermined to an arbitrary real additive constant is related to the well-known phenomenon that energy is undetermined to an arbitrary additive constant in physics (i.e., the zero of potential energy is not well defined).

Substituting $|At\rangle = T\,|At_0\rangle$ into Equation (3.10) yields
$$i\,\hbar\,\frac{dT}{dt}\,|At_0\rangle = H(t)\,T\,|At_0\rangle.\tag{3.13}$$
Because this must hold for any initial state $|At_0\rangle$, we conclude that
$$i\,\hbar\,\frac{dT}{dt} = H(t)\,T.\tag{3.14}$$
This equation can be integrated to give
$$T(t, t_0) = \exp\!\left[-\frac{i}{\hbar}\int_{t_0}^{t} dt'\,H(t')\right],\tag{3.15}$$
where use has been made of Equations (3.5) and (3.6). (Here, we assume that Hamiltonian operators evaluated at different times commute with one another.) The fact that $H$ is undetermined to an arbitrary real additive constant leaves $T$ undetermined to a phase-factor. Incidentally, in the above analysis, time is not an operator (we cannot observe time, as such), it is just a parameter (or, more accurately, a continuous label).

3.2 Heisenberg Equation of Motion

We have seen that in the Schrödinger scheme the dynamical variables of the system remain fixed during a period of undisturbed motion, whereas the state kets evolve according to Equation (3.10). However, this is not the only way in which to represent the time evolution of the system.

Suppose that a general state ket $A$ is subject to the transformation
$$|A\rangle_t = T^\dagger(t, t_0)\,|A\rangle.\tag{3.16}$$
This is a time-dependent transformation, because the operator $T(t, t_0)$ obviously depends on time. The subscript $t$ is used to remind us that the transformation is time-dependent. The time evolution of the transformed state ket is given by
$$|At\rangle_t = T^\dagger(t, t_0)\,|At\rangle = T^\dagger(t, t_0)\,T(t, t_0)\,|At_0\rangle = |At_0\rangle_t,\tag{3.17}$$
where use has been made of Equations (3.3), (3.5), and the fact that $T(t_0, t_0) = 1$. Clearly, the transformed state ket does not evolve in time. Thus, the transformation (3.16) has the effect of bringing all kets representing states of undisturbed motion of the system to rest.
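For a time-independent Hamiltonian the integral in (3.15) collapses to $T = \exp[-i\,H\,(t-t_0)/\hbar]$, and the unitarity property (3.5) can be verified directly with a matrix exponential. The sketch below is illustrative only: a small random Hermitian matrix stands in for a physical Hamiltonian, and SciPy's `expm` is assumed to be available:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
rng = np.random.default_rng(1)

# Random Hermitian matrix standing in for a time-independent Hamiltonian
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
H = (A + A.conj().T) / 2.0

# Time evolution operator over an interval t - t0 = 0.8 [Eq. (3.15), H constant]
T = expm(-1j * H * 0.8 / hbar)

# Unitarity, Eq. (3.5): T†T = 1, so state-ket norms are preserved
print(np.allclose(T.conj().T @ T, np.eye(5)))   # True

ket = rng.standard_normal(5) + 1j * rng.standard_normal(5)
ket /= np.linalg.norm(ket)
print(np.linalg.norm(T @ ket))                  # ~1.0
```

Adding a real constant to $H$ multiplies $T$ by an overall phase, consistent with the phase-factor indeterminacy noted above.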
The transformation must also be applied to bras. The dual of Equation (3.16) yields
$$_t\langle A| = \langle A|\,T.\tag{3.18}$$
The transformation rule for a general observable $v$ is obtained from the requirement that the expectation value $\langle A|\,v\,|A\rangle$ should remain invariant. It is easily seen that
$$v_t = T^\dagger\,v\,T.\tag{3.19}$$
Thus, a dynamical variable, which corresponds to a fixed linear operator in Schrödinger's scheme, corresponds to a moving linear operator in this new scheme. It is clear that the transformation (3.16) leads us to a scenario in which the state of the system is represented by a fixed vector, and the dynamical variables are represented by moving linear operators. This is termed the Heisenberg picture, as opposed to the Schrödinger picture, which is outlined in Section 3.1.

Consider a dynamical variable $v$ corresponding to a fixed linear operator in the Schrödinger picture. According to Equation (3.19), we can write
$$T\,v_t = v\,T.\tag{3.20}$$
Differentiation with respect to time yields
$$\frac{dT}{dt}\,v_t + T\,\frac{dv_t}{dt} = v\,\frac{dT}{dt}.\tag{3.21}$$
With the help of Equation (3.14), this reduces to
$$H\,T\,v_t + i\,\hbar\,T\,\frac{dv_t}{dt} = v\,H\,T,\tag{3.22}$$
or
$$i\,\hbar\,\frac{dv_t}{dt} = T^\dagger\,v\,H\,T - T^\dagger\,H\,T\,v_t = v_t\,H_t - H_t\,v_t,\tag{3.23}$$
where
$$H_t = T^\dagger\,H\,T.\tag{3.24}$$
Equation (3.23) can be written
$$i\,\hbar\,\frac{dv_t}{dt} = [v_t, H_t].\tag{3.25}$$
Equation (3.25) shows how the dynamical variables of the system evolve in the Heisenberg picture. It is denoted the Heisenberg equation of motion. Note that the time-varying dynamical variables in the Heisenberg picture are usually called Heisenberg dynamical variables, to distinguish them from Schrödinger dynamical variables (i.e., the corresponding variables in the Schrödinger picture), which do not evolve in time.

According to Equation (2.22), the Heisenberg equation of motion can be written
$$\frac{dv_t}{dt} = [v_t, H_t]_{\rm qm},\tag{3.26}$$
where $[\cdots]_{\rm qm}$ denotes the quantum Poisson bracket.
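The Heisenberg equation of motion (3.25) can be spot-checked numerically. In the sketch below (an illustration only: small random Hermitian matrices stand in for the Hamiltonian and the observable, and SciPy's `expm` is assumed), the Hamiltonian is time-independent, so $T$ commutes with $H$ and $H_t = H$:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
rng = np.random.default_rng(2)

def random_hermitian(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2.0

H = random_hermitian(4)   # time-independent Hamiltonian
v = random_hermitian(4)   # a Schrodinger-picture observable

def v_t(t):
    T = expm(-1j * H * t / hbar)    # T(t, 0) for a constant Hamiltonian
    return T.conj().T @ v @ T       # Heisenberg observable, Eq. (3.19)

# Compare a central finite difference of v_t with the commutator [v_t, H];
# Eq. (3.25) asserts these are equal (here H_t = H)
t, eps = 0.5, 1e-6
lhs = 1j * hbar * (v_t(t + eps) - v_t(t - eps)) / (2.0 * eps)
rhs = v_t(t) @ H - H @ v_t(t)
print(np.max(np.abs(lhs - rhs)))    # small: limited by the finite difference
```

Replacing `v` by a matrix that commutes with `H` makes the right-hand side vanish identically, anticipating the constants of the motion discussed below.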
Let us compare this equation with the classical time evolution equation for a general dynamical variable $v$, which can be written in the form [see Equation (2.7)]
$$\frac{dv}{dt} = [v, H]_{\rm cl}.\tag{3.27}$$
Here, $[\cdots]_{\rm cl}$ is the classical Poisson bracket, and $H$ denotes the classical Hamiltonian. The strong resemblance between Equations (3.26) and (3.27) provides us with further justification for our identification of the linear operator $H$ with the energy of the system in quantum mechanics.

Note that if the Hamiltonian does not explicitly depend on time (i.e., if the system is not subject to some time-dependent external force) then Equation (3.15) yields
$$T(t,t_0) = \exp[-\mathrm{i}\,H\,(t-t_0)/\hbar].\tag{3.28}$$
This operator manifestly commutes with $H$, so
$$H_t = T^{\dagger}\,H\,T = H.\tag{3.29}$$
Furthermore, Equation (3.25) gives
$$\mathrm{i}\,\hbar\,\frac{dH}{dt} = [H,H] = 0.\tag{3.30}$$
Thus, if the energy of the system has no explicit time-dependence then it is represented by the same non-time-varying operator $H$ in both the Schrödinger and Heisenberg pictures.

Suppose that $v$ is an observable that commutes with the Hamiltonian (and, hence, with the time evolution operator $T$). It follows from Equation (3.19) that $v_t = v$. Heisenberg's equation of motion yields
$$\mathrm{i}\,\hbar\,\frac{dv}{dt} = [v,H] = 0.\tag{3.31}$$
Thus, any observable that commutes with the Hamiltonian is a constant of the motion (hence, it is represented by the same fixed operator in both the Schrödinger and Heisenberg pictures). Only those observables that do not commute with the Hamiltonian evolve in time in the Heisenberg picture.

3.3 Ehrenfest Theorem

We have now introduced all of the basic elements of quantum mechanics. The only thing lacking is some rule to determine the form of the quantum mechanical Hamiltonian. For a physical system that possesses a classical analogue, we generally assume that the Hamiltonian has the same form as in classical physics (i.e., we replace the classical coordinates and conjugate momenta by the corresponding quantum mechanical operators).
This scheme guarantees that quantum mechanics yields the correct classical equations of motion in the classical limit. Whenever an ambiguity arises because of non-commuting observables, it can usually be resolved by requiring the Hamiltonian $H$ to be an Hermitian operator. For instance, we would write the quantum mechanical analogue of the classical product $x\,p_x$, appearing in the Hamiltonian, as the Hermitian product $(1/2)(x\,p_x + p_x\,x)$. When the system in question has no classical analogue then we are reduced to guessing a form for $H$ that reproduces the observed behavior of the system.

Consider a three-dimensional system characterized by three independent Cartesian position coordinates $x_i$ (where $i$ runs from 1 to 3), with three corresponding conjugate momenta $p_i$. These are represented by three commuting position operators $x_i$, and three commuting momentum operators $p_i$, respectively. The commutation relations satisfied by the position and momentum operators are [see Equation (2.25)]
$$[x_i, p_j] = \mathrm{i}\,\hbar\,\delta_{ij}.\tag{3.32}$$
It is helpful to denote $(x_1,x_2,x_3)$ as $\mathbf{x}$, and $(p_1,p_2,p_3)$ as $\mathbf{p}$. The following useful formulae,
$$[x_i, F(\mathbf{x},\mathbf{p})] = \mathrm{i}\,\hbar\,\frac{\partial F}{\partial p_i},\tag{3.33}$$
$$[p_i, G(\mathbf{x},\mathbf{p})] = -\mathrm{i}\,\hbar\,\frac{\partial G}{\partial x_i},\tag{3.34}$$
where $F(\mathbf{x},\mathbf{p})$ and $G(\mathbf{x},\mathbf{p})$ are functions that can be expanded as power series, are easily proved using the fundamental commutation relations, (3.32).

Let us now consider the three-dimensional motion of a free particle of mass $m$ in the Heisenberg picture. The Hamiltonian is assumed to have the same form as in classical physics: i.e.,
$$H(\mathbf{x},\mathbf{p}) = \frac{p^2}{2\,m} = \frac{1}{2\,m}\sum_{i=1,3} p_i^{\,2}.\tag{3.35}$$
In the following, all dynamical variables are assumed to be Heisenberg dynamical variables, although we will omit the subscript $t$ for the sake of clarity. The time evolution of the momentum operator $p_i$ follows from the Heisenberg equation of motion, (3.25). We find that
$$\frac{dp_i}{dt} = \frac{1}{\mathrm{i}\,\hbar}\,[p_i, H] = 0,\tag{3.36}$$
because $p_i$ automatically commutes with any function of the momentum operators.
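The identity (3.33) can be illustrated in a special case. The following sketch, not from the text, checks $[x, p^2] = 2\,\mathrm{i}\,\hbar\,p$ (i.e., $F = p^2$) using finite matrix truncations of $x$ and $p$ built from harmonic-oscillator ladder operators, in units $\hbar = m = \omega = 1$. Truncating the infinite matrices corrupts only the last few rows and columns, so the comparison is restricted to upper-left blocks.

```python
import numpy as np

N = 30
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)      # truncated annihilation operator
x = (a + a.T) / np.sqrt(2.0)
p = 1j * (a.T - a) / np.sqrt(2.0)

comm_xp = x @ p - p @ x                           # should equal i * identity
comm_xp2 = x @ (p @ p) - (p @ p) @ x              # should equal 2 i p, by (3.33)
k = N - 5                                         # stay clear of the truncation edge
err1 = np.abs(comm_xp[:k, :k] - 1j * np.eye(k)).max()
err2 = np.abs(comm_xp2[:k, :k] - 2j * p[:k, :k]).max()
```

The same matrices also show why Hermitian symmetrization matters: `x @ p` alone is not Hermitian, whereas `(x @ p + p @ x) / 2` is.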
Thus, for a free particle, the momentum operators are constants of the motion, which means that $p_i(t) = p_i(0)$ at all times $t$ (for $i$ = 1 to 3). The time evolution of the position operator $x_i$ is given by
$$\frac{dx_i}{dt} = \frac{1}{\mathrm{i}\,\hbar}\,[x_i, H] = \frac{1}{\mathrm{i}\,\hbar}\,\frac{1}{2\,m}\,\mathrm{i}\,\hbar\,\frac{\partial}{\partial p_i}\left(\sum_{j=1,3} p_j^{\,2}\right) = \frac{p_i}{m} = \frac{p_i(0)}{m},\tag{3.37}$$
where use has been made of Equation (3.33). It follows that
$$x_i(t) = x_i(0) + \frac{p_i(0)}{m}\,t,\tag{3.38}$$
which is analogous to the equation of motion of a classical free particle. Note that even though
$$[x_i(0), x_j(0)] = 0,\tag{3.39}$$
where the position operators are evaluated at equal times, the $x_i$ do not commute when evaluated at different times. For instance,
$$[x_i(t), x_i(0)] = \left[\frac{p_i(0)\,t}{m},\, x_i(0)\right] = -\frac{\mathrm{i}\,\hbar\,t}{m}.\tag{3.40}$$
Combining this commutation relation with the uncertainty relation (1.83) yields
$$\langle (\Delta x_i)^2\rangle_t\,\langle (\Delta x_i)^2\rangle_{t=0} \geq \frac{\hbar^2\,t^2}{4\,m^2}.\tag{3.41}$$
This result implies that even if a particle is well-localized at $t=0$, its position becomes progressively more uncertain with time. This conclusion can also be obtained by studying the propagation of wavepackets in wave mechanics.

Let us now add a potential $V(\mathbf{x})$ to our free particle Hamiltonian:
$$H(\mathbf{x},\mathbf{p}) = \frac{p^2}{2\,m} + V(\mathbf{x}).\tag{3.42}$$
Here, $V$ is some (real) function of the $x_i$ operators. The Heisenberg equation of motion gives
$$\frac{dp_i}{dt} = \frac{1}{\mathrm{i}\,\hbar}\,[p_i, V(\mathbf{x})] = -\frac{\partial V(\mathbf{x})}{\partial x_i},\tag{3.43}$$
where use has been made of Equation (3.34). On the other hand, the result
$$\frac{dx_i}{dt} = \frac{p_i}{m}\tag{3.44}$$
still holds, because the $x_i$ all commute with the new term, $V(\mathbf{x})$, in the Hamiltonian. We can use the Heisenberg equation of motion a second time to deduce that
$$\frac{d^2x_i}{dt^2} = \frac{1}{\mathrm{i}\,\hbar}\left[\frac{dx_i}{dt}, H\right] = \frac{1}{\mathrm{i}\,\hbar}\left[\frac{p_i}{m}, H\right] = \frac{1}{m}\,\frac{dp_i}{dt} = -\frac{1}{m}\,\frac{\partial V(\mathbf{x})}{\partial x_i}.\tag{3.45}$$
In vectorial form, this equation becomes
$$m\,\frac{d^2\mathbf{x}}{dt^2} = \frac{d\mathbf{p}}{dt} = -\nabla V(\mathbf{x}).\tag{3.46}$$
This is the quantum mechanical equivalent of Newton's second law of motion.
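The spreading inequality (3.41) can be checked directly in wave mechanics. The sketch below, not part of the text, evolves a minimum-uncertainty Gaussian wavepacket exactly in momentum space (free-particle evolution multiplies each Fourier mode by $\mathrm{e}^{-\mathrm{i}\hbar k^2 t/2m}$) and compares the product of variances with the bound $\hbar^2 t^2/4m^2$. Units $\hbar = m = 1$; the grid parameters are illustrative.

```python
import numpy as np

hbar = m = 1.0
n = 2048
x = np.linspace(-40.0, 40.0, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

sigma0 = 1.0
psi0 = np.exp(-x**2 / (4 * sigma0**2)).astype(complex)  # minimum-uncertainty Gaussian
psi0 /= np.sqrt((np.abs(psi0)**2).sum() * dx)           # normalize

def width_sq(psi):
    """Variance <x^2> - <x>^2 of the probability density |psi|^2."""
    rho = np.abs(psi)**2
    xbar = (x * rho).sum() * dx
    return ((x - xbar)**2 * rho).sum() * dx

t = 3.0
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))
product = width_sq(psi_t) * width_sq(psi0)
bound = hbar**2 * t**2 / (4 * m**2)
```

For this initial state the product works out to $\sigma_0^{\,4} + \hbar^2 t^2/4m^2$, so the bound is approached, but never violated, as the initial localization is made sharper.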
Taking the expectation values of both sides with respect to a Heisenberg state ket that does not evolve in time, we obtain the so-called Ehrenfest theorem:
$$m\,\frac{d^2\langle\mathbf{x}\rangle}{dt^2} = \frac{d\langle\mathbf{p}\rangle}{dt} = -\langle\nabla V(\mathbf{x})\rangle.\tag{3.47}$$
When written in terms of expectation values, this result is independent of whether we are using the Heisenberg or Schrödinger picture. By contrast, the operator equation (3.46) only holds if $\mathbf{x}$ and $\mathbf{p}$ are understood to be Heisenberg dynamical variables. Note that Equation (3.47) has no dependence on $\hbar$. In fact, it guarantees that the centre of a wavepacket always moves like a classical particle.

3.4 Schrödinger Wave Equation

Consider the motion of a particle in three dimensions in the Schrödinger picture. The fixed dynamical variables of the system are the position operators, $\mathbf{x}\equiv(x_1,x_2,x_3)$, and the momentum operators, $\mathbf{p}\equiv(p_1,p_2,p_3)$. The state of the system is represented as some time evolving ket $|At\rangle$.

Let $|\mathbf{x}'\rangle$ represent a simultaneous eigenket of the position operators belonging to the eigenvalues $\mathbf{x}'\equiv(x_1',x_2',x_3')$. Note that, because the position operators are fixed in the Schrödinger picture, we do not expect the $|\mathbf{x}'\rangle$ to evolve in time. The wavefunction of the system at time $t$ is defined
$$\psi(\mathbf{x}',t) = \langle\mathbf{x}'|At\rangle.\tag{3.48}$$
The Hamiltonian of the system is taken to be
$$H(\mathbf{x},\mathbf{p}) = \frac{p^2}{2\,m} + V(\mathbf{x}).\tag{3.49}$$
The Schrödinger equation of motion, (3.10), yields
$$\mathrm{i}\,\hbar\,\frac{\partial\langle\mathbf{x}'|At\rangle}{\partial t} = \langle\mathbf{x}'|\,H\,|At\rangle,\tag{3.50}$$
where use has been made of the time independence of the $|\mathbf{x}'\rangle$. We adopt the Schrödinger representation in which the momentum conjugate to the position operator $x_i$ is written [see Equation (2.74)]
$$p_i = -\mathrm{i}\,\hbar\,\frac{\partial}{\partial x_i}.\tag{3.51}$$
Thus,
$$\left\langle\mathbf{x}'\left|\,\frac{p^2}{2\,m}\,\right|At\right\rangle = -\frac{\hbar^2}{2\,m}\,\nabla'^{\,2}\langle\mathbf{x}'|At\rangle,\tag{3.52}$$
where use has been made of Equation (2.78). Here, $\nabla'\equiv(\partial/\partial x', \partial/\partial y', \partial/\partial z')$ denotes the gradient operator written in terms of the position eigenvalues. We can also write
$$\langle\mathbf{x}'|\,V(\mathbf{x}) = V(\mathbf{x}')\,\langle\mathbf{x}'|,\tag{3.53}$$
where $V(\mathbf{x}')$ is a scalar function of the position eigenvalues.
Combining Equations (3.49), (3.50), (3.52), and (3.53), we obtain
$$\mathrm{i}\,\hbar\,\frac{\partial\langle\mathbf{x}'|At\rangle}{\partial t} = -\frac{\hbar^2}{2\,m}\,\nabla'^{\,2}\langle\mathbf{x}'|At\rangle + V(\mathbf{x}')\,\langle\mathbf{x}'|At\rangle,\tag{3.54}$$
which can also be written
$$\mathrm{i}\,\hbar\,\frac{\partial\psi(\mathbf{x}',t)}{\partial t} = -\frac{\hbar^2}{2\,m}\,\nabla'^{\,2}\psi(\mathbf{x}',t) + V(\mathbf{x}')\,\psi(\mathbf{x}',t).\tag{3.55}$$
This is Schrödinger's famous wave equation, and is the basis of wave mechanics. Note, however, that the wave equation is just one of many possible representations of quantum mechanics. It just happens to give a type of equation that we know how to solve. In deriving the wave equation, we have chosen to represent the system in terms of the eigenkets of the position operators, instead of those of the momentum operators. We have also fixed the relative phases of the $|\mathbf{x}'\rangle$ according to the Schrödinger representation, so that Equation (3.51) is valid. Finally, we have chosen to work in the Schrödinger picture, in which state kets evolve and dynamical variables are fixed, instead of the Heisenberg picture, in which the opposite is true.

Suppose that the ket $|At\rangle$ is an eigenket of the Hamiltonian belonging to the eigenvalue $H'$: i.e.,
$$H\,|At\rangle = H'\,|At\rangle.\tag{3.56}$$
The Schrödinger equation of motion, (3.10), yields
$$\mathrm{i}\,\hbar\,\frac{d|At\rangle}{dt} = H'\,|At\rangle.\tag{3.57}$$
This can be integrated to give
$$|At\rangle = \exp[-\mathrm{i}\,H'\,(t-t_0)/\hbar]\,|At_0\rangle.\tag{3.58}$$
Note that $|At\rangle$ only differs from $|At_0\rangle$ by a phase-factor. The direction of the vector remains fixed in ket space. This suggests that if the system is initially in an eigenstate of the Hamiltonian then it remains in this state for ever, as long as the system is undisturbed. Such a state is called a stationary state. The wavefunction of a stationary state satisfies
$$\psi(\mathbf{x}',t) = \psi(\mathbf{x}',t_0)\,\exp[-\mathrm{i}\,H'\,(t-t_0)/\hbar].\tag{3.59}$$
Substituting this relation into the Schrödinger wave equation, (3.55), we obtain
$$-\frac{\hbar^2}{2\,m}\,\nabla'^{\,2}\psi_0(\mathbf{x}') + [V(\mathbf{x}') - E]\,\psi_0(\mathbf{x}') = 0,\tag{3.60}$$
where $\psi_0(\mathbf{x}')\equiv\psi(\mathbf{x}',t_0)$, and $E = H'$ is the energy of the system. This is Schrödinger's time-independent wave equation.
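Equation (3.60) is an eigenvalue problem, and in one dimension it can be solved numerically by replacing the second derivative with a three-point finite difference. The following sketch, not from the text, does this for the harmonic potential $V = x^2/2$ in units $\hbar = m = \omega = 1$, where the exact eigenvalues are $E_n = n + 1/2$; grid parameters are illustrative.

```python
import numpy as np

N = 1000
x = np.linspace(-10.0, 10.0, N)
h = x[1] - x[0]
V = 0.5 * x**2

# -(1/2) psi'' + V psi = E psi becomes a symmetric tridiagonal matrix problem:
# diagonal 1/h^2 + V_i, off-diagonals -1/(2 h^2).
H = (np.diag(1.0 / h**2 + V)
     + np.diag(np.full(N - 1, -0.5 / h**2), k=1)
     + np.diag(np.full(N - 1, -0.5 / h**2), k=-1))
E = np.linalg.eigvalsh(H)[:4]          # lowest four eigenvalues, near 0.5, 1.5, 2.5, 3.5
```

The hard-wall boundary at $x = \pm 10$ implements the bound-state condition $\psi_0 \rightarrow 0$ at large $|x|$ to good accuracy for the low-lying states.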
A bound state solution of the above equation, in which the particle is confined within a finite region of space, satisfies the boundary condition
$$\psi_0(\mathbf{x}')\rightarrow 0 \quad\mbox{as}\quad |\mathbf{x}'|\rightarrow\infty.\tag{3.61}$$
Such a solution is only possible if
$$E < \lim_{|\mathbf{x}'|\rightarrow\infty} V(\mathbf{x}').\tag{3.62}$$
Because it is conventional to set the potential at infinity equal to zero, this relation implies that bound states are equivalent to negative energy states. The boundary condition (3.61) is sufficient to uniquely specify the solution of Equation (3.60).

The quantity $\rho(\mathbf{x}',t)$, defined by
$$\rho(\mathbf{x}',t) = |\psi(\mathbf{x}',t)|^{\,2},\tag{3.63}$$
is termed the probability density. Recall, from Equation (2.30), that the probability of observing the particle in some volume element $d^3\mathbf{x}'$ around position $\mathbf{x}'$ is proportional to $\rho(\mathbf{x}',t)\,d^3\mathbf{x}'$. The probability is equal to $\rho(\mathbf{x}',t)\,d^3\mathbf{x}'$ if the wavefunction is properly normalized, so that
$$\int d^3\mathbf{x}'\,\rho(\mathbf{x}',t) = 1.\tag{3.64}$$
The Schrödinger time-dependent wave equation, (3.55), can easily be transformed into a conservation equation for the probability density:
$$\frac{\partial\rho}{\partial t} + \nabla'\cdot\mathbf{j} = 0.\tag{3.65}$$
The probability current, $\mathbf{j}$, takes the form
$$\mathbf{j}(\mathbf{x}',t) = -\frac{\mathrm{i}\,\hbar}{2\,m}\left[\psi^{*}\,\nabla'\psi - (\nabla'\psi^{*})\,\psi\right] = \frac{\hbar}{m}\,{\rm Im}(\psi^{*}\,\nabla'\psi).\tag{3.66}$$
We can integrate Equation (3.65) over all space, using the divergence theorem, and the boundary condition $\rho\rightarrow 0$ as $|\mathbf{x}'|\rightarrow\infty$, to obtain
$$\frac{d}{dt}\int d^3\mathbf{x}'\,\rho(\mathbf{x}',t) = 0.\tag{3.67}$$
Thus, the Schrödinger wave equation conserves probability. In particular, if the wavefunction starts off properly normalized, according to Equation (3.64), then it remains properly normalized at all subsequent times. It is easily demonstrated that
$$\int d^3\mathbf{x}'\,\mathbf{j}(\mathbf{x}',t) = \frac{\langle\mathbf{p}\rangle_t}{m},\tag{3.68}$$
where $\langle\mathbf{p}\rangle_t$ denotes the expectation value of the momentum evaluated at time $t$. Clearly, the probability current is indirectly related to the particle momentum.

In deriving Equation (3.65), we have, naturally, assumed that the potential $V(\mathbf{x}')$ is real. Suppose, however, that the potential has an imaginary component.
In this case, Equation (3.65) generalizes to
$$\frac{\partial\rho}{\partial t} + \nabla'\cdot\mathbf{j} = \frac{2\,{\rm Im}(V)\,\rho}{\hbar},\tag{3.69}$$
giving
$$\frac{\partial}{\partial t}\int d^3\mathbf{x}'\,\rho(\mathbf{x}',t) = \frac{2}{\hbar}\int d^3\mathbf{x}'\,{\rm Im}[V(\mathbf{x}')]\,\rho(\mathbf{x}',t).\tag{3.70}$$
Thus, if ${\rm Im}(V)<0$ then the total probability of observing the particle anywhere in space decreases monotonically with time. Hence, an imaginary potential can be used to account for the disappearance of a particle. Such a potential is often employed to model nuclear reactions in which incident particles are absorbed by nuclei.

Exercises

3.1 Let $\mathbf{x}\equiv(x_1,x_2,x_3)$ be a set of Cartesian position operators, and let $\mathbf{p}\equiv(p_1,p_2,p_3)$ be the corresponding momentum operators. Demonstrate that
$$[x_i, F(\mathbf{x},\mathbf{p})] = \mathrm{i}\,\hbar\,\frac{\partial F}{\partial p_i},$$
$$[p_i, G(\mathbf{x},\mathbf{p})] = -\mathrm{i}\,\hbar\,\frac{\partial G}{\partial x_i},$$
where $i=1,3$, and $F(\mathbf{x},\mathbf{p})$, $G(\mathbf{x},\mathbf{p})$ are functions that can be expanded as power series.

3.2 Assuming that the potential $V(\mathbf{x})$ is complex, demonstrate that the Schrödinger time-dependent wave equation, (3.55), can be transformed to give
$$\frac{\partial\rho}{\partial t} + \nabla'\cdot\mathbf{j} = \frac{2\,{\rm Im}(V)\,\rho}{\hbar},$$
where $\rho(\mathbf{x}',t) = |\psi(\mathbf{x}',t)|^{\,2}$, and
$$\mathbf{j}(\mathbf{x}',t) = \frac{\hbar}{m}\,{\rm Im}(\psi^{*}\,\nabla'\psi).$$

3.3 Consider a one-dimensional quantum harmonic oscillator whose Hamiltonian is
$$H = \frac{p_x^{\,2}}{2\,m} + \frac{1}{2}\,m\,\omega^2\,x^2,$$
where $x$ and $p_x$ are conjugate position and momentum operators, respectively, and $m$, $\omega$ are positive constants.

(a) Demonstrate that the expectation value of $H$, for a general state, is positive definite.

(b) Let
$$A = \sqrt{\frac{m\,\omega}{2\,\hbar}}\,x + \mathrm{i}\,\frac{p_x}{\sqrt{2\,m\,\hbar\,\omega}}.$$
Deduce that
$$[A, A^{\dagger}] = 1,$$
$$H = \hbar\,\omega\left(\frac{1}{2} + A^{\dagger}\,A\right),$$
$$[H, A] = -\hbar\,\omega\,A,$$
$$[H, A^{\dagger}] = \hbar\,\omega\,A^{\dagger}.$$

(c) Suppose that $|E\rangle$ is an eigenket of the Hamiltonian whose corresponding energy is $E$: i.e., $H\,|E\rangle = E\,|E\rangle$. Demonstrate that
$$H\,A\,|E\rangle = (E-\hbar\,\omega)\,A\,|E\rangle,$$
$$H\,A^{\dagger}\,|E\rangle = (E+\hbar\,\omega)\,A^{\dagger}\,|E\rangle.$$
Hence, deduce that the allowed values of $E$ are
$$E_n = (n+1/2)\,\hbar\,\omega,$$
where $n = 0, 1, 2, \ldots$

(d) Let $|E_n\rangle$ be a properly normalized (i.e., $\langle E_n|E_n\rangle = 1$) energy eigenket corresponding to the eigenvalue $E_n$. Show that the kets can be defined such that
$$A\,|E_n\rangle = \sqrt{n}\,|E_{n-1}\rangle,$$
$$A^{\dagger}\,|E_n\rangle = \sqrt{n+1}\,|E_{n+1}\rangle.$$
Hence, deduce that
$$|E_n\rangle = \frac{1}{\sqrt{n!}}\,(A^{\dagger})^{\,n}\,|E_0\rangle.$$
(e) Let $\psi_n(x') = \langle x'|E_n\rangle$ be the wavefunctions of the properly normalized energy eigenkets. Given that $A\,|E_0\rangle = |0\rangle$, deduce that
$$\left(\frac{x'}{x_0} + x_0\,\frac{d}{dx'}\right)\psi_0(x') = 0,$$
where $x_0 = (\hbar/m\,\omega)^{1/2}$. Hence, show that
$$\psi_n(x') = \frac{1}{\pi^{1/4}\,(2^n\,n!)^{1/2}\,x_0^{\,n+1/2}}\left(x' - x_0^{\,2}\,\frac{d}{dx'}\right)^{n}\exp\left[-\frac{1}{2}\left(\frac{x'}{x_0}\right)^{2}\right].$$

3.4 Consider the one-dimensional quantum harmonic oscillator discussed in Exercise 3.3. Let $|n\rangle$ be a properly normalized energy eigenket belonging to the eigenvalue $E_n$. Show that
$$\langle n'|\,x\,|n\rangle = \sqrt{\frac{\hbar}{2\,m\,\omega}}\left(\sqrt{n}\,\delta_{n'\,n-1} + \sqrt{n+1}\,\delta_{n'\,n+1}\right),$$
$$\langle n'|\,p_x\,|n\rangle = \mathrm{i}\,\sqrt{\frac{m\,\hbar\,\omega}{2}}\left(-\sqrt{n}\,\delta_{n'\,n-1} + \sqrt{n+1}\,\delta_{n'\,n+1}\right),$$
$$\langle n'|\,x^2\,|n\rangle = \frac{\hbar}{2\,m\,\omega}\left[\sqrt{n\,(n-1)}\,\delta_{n'\,n-2} + \sqrt{(n+1)\,(n+2)}\,\delta_{n'\,n+2} + (2\,n+1)\,\delta_{n'\,n}\right],$$
$$\langle n'|\,p_x^{\,2}\,|n\rangle = \frac{m\,\hbar\,\omega}{2}\left[-\sqrt{n\,(n-1)}\,\delta_{n'\,n-2} - \sqrt{(n+1)\,(n+2)}\,\delta_{n'\,n+2} + (2\,n+1)\,\delta_{n'\,n}\right].$$
Hence, deduce that
$$\langle(\Delta x)^2\rangle\,\langle(\Delta p_x)^2\rangle = (n+1/2)^2\,\hbar^2$$
for the $n$th eigenstate.

3.5 Consider a particle in one dimension whose Hamiltonian is
$$H = \frac{p_x^{\,2}}{2\,m} + V(x).$$
By calculating $[[H,x],x]$, demonstrate that
$$\sum_{n'}|\langle n|\,x\,|n'\rangle|^{\,2}\,(E_{n'} - E_n) = \frac{\hbar^2}{2\,m},$$
where $|n\rangle$ is a properly normalized energy eigenket corresponding to the eigenvalue $E_n$, and the sum is over all eigenkets.

3.6 Consider a particle in one dimension whose Hamiltonian is
$$H = \frac{p_x^{\,2}}{2\,m} + V(x).$$
Suppose that the potential is periodic, such that
$$V(x-a) = V(x),$$
for all $x$. Deduce that
$$[D(a), H] = 0,$$
where $D(a)$ is the displacement operator defined in Exercise 2.4. Hence, show that the wavefunction of an energy eigenstate has the general form
$$\psi(x') = \mathrm{e}^{\,\mathrm{i}\,k'\,x'}\,u(x'),$$
where $k'$ is a real parameter, and $u(x'-a) = u(x')$ for all $x'$. This result is known as the Bloch theorem.

3.7 Consider the one-dimensional quantum harmonic oscillator discussed in Exercise 3.3. Show that the Heisenberg equations of motion of the ladder operators $A$ and $A^{\dagger}$ are
$$\frac{dA}{dt} = -\mathrm{i}\,\omega\,A,$$
$$\frac{dA^{\dagger}}{dt} = \mathrm{i}\,\omega\,A^{\dagger},$$
respectively. Hence, deduce that the momentum and position operators evolve in time as
$$p_x(t) = \cos(\omega\,t)\,p_x(0) - m\,\omega\,\sin(\omega\,t)\,x(0),$$
$$x(t) = \cos(\omega\,t)\,x(0) + \frac{\sin(\omega\,t)}{m\,\omega}\,p_x(0),$$
respectively, in the Heisenberg picture.

3.8 Consider a one-dimensional stationary bound state. Using the time-independent Schrödinger equation, prove that
$$\left\langle\frac{p_x^{\,2}}{2\,m}\right\rangle = E - \langle V\rangle,$$
and
$$\left\langle\frac{p_x^{\,2}}{2\,m}\right\rangle = -E + \left\langle V + x\,\frac{dV}{dx}\right\rangle.$$
[Hint: You can assume, without loss of generality, that the stationary wavefunction is real.] Hence, prove the Virial theorem,
$$\left\langle\frac{p_x^{\,2}}{2\,m}\right\rangle = \frac{1}{2}\left\langle x\,\frac{dV}{dx}\right\rangle.$$

4 Orbital Angular Momentum

4.1 Orbital Angular Momentum

Consider a particle described by the Cartesian coordinates $(x,y,z)\equiv\mathbf{x}$ and their conjugate momenta $(p_x,p_y,p_z)\equiv\mathbf{p}$. The classical definition of the orbital angular momentum of such a particle about the origin is $\mathbf{L} = \mathbf{x}\times\mathbf{p}$, giving
$$L_x = y\,p_z - z\,p_y,\tag{4.1}$$
$$L_y = z\,p_x - x\,p_z,\tag{4.2}$$
$$L_z = x\,p_y - y\,p_x.\tag{4.3}$$
Let us assume that the operators $(L_x, L_y, L_z)\equiv\mathbf{L}$ which represent the components of orbital angular momentum in quantum mechanics can be defined in an analogous manner to the corresponding components of classical angular momentum. In other words, we are going to assume that the above equations specify the angular momentum operators in terms of the position and linear momentum operators. Note that $L_x$, $L_y$, and $L_z$ are Hermitian, so they represent things which can, in principle, be measured. Note, also, that there is no ambiguity regarding the order in which operators appear in products on the right-hand sides of Equations (4.1)–(4.3), because all of the products consist of operators that commute.

The fundamental commutation relations satisfied by the position and linear momentum operators are [see Equations (2.23)–(2.25)]
$$[x_i, x_j] = 0,\tag{4.4}$$
$$[p_i, p_j] = 0,\tag{4.5}$$
$$[x_i, p_j] = \mathrm{i}\,\hbar\,\delta_{ij},\tag{4.6}$$
where $i$ and $j$ stand for either $x$, $y$, or $z$. Consider the commutator of the operators $L_x$ and $L_y$:
$$[L_x, L_y] = [(y\,p_z - z\,p_y),\,(z\,p_x - x\,p_z)] = y\,[p_z, z]\,p_x + x\,p_y\,[z, p_z] = \mathrm{i}\,\hbar\,(-y\,p_x + x\,p_y) = \mathrm{i}\,\hbar\,L_z.\tag{4.7}$$
The cyclic permutations of this result yield the fundamental commutation relations satisfied by the components of an orbital angular momentum:
$$[L_x, L_y] = \mathrm{i}\,\hbar\,L_z,\tag{4.8}$$
$$[L_y, L_z] = \mathrm{i}\,\hbar\,L_x,\tag{4.9}$$
$$[L_z, L_x] = \mathrm{i}\,\hbar\,L_y.\tag{4.10}$$
These can be summed up more succinctly by writing
$$\mathbf{L}\times\mathbf{L} = \mathrm{i}\,\hbar\,\mathbf{L}.\tag{4.11}$$
The three commutation relations (4.8)–(4.10) are the foundation for the whole theory of angular momentum in quantum mechanics. Whenever we encounter three operators having these commutation relations, we know that the dynamical variables that they represent have identical properties to those of the components of an angular momentum (which we are about to derive). In fact, we shall assume that any three operators that satisfy the commutation relations (4.8)–(4.10) represent the components of some sort of angular momentum.

Suppose that there are $N$ particles in the system, with angular momentum vectors $\mathbf{L}_i$ (where $i$ runs from 1 to $N$). Each of these vectors satisfies Equation (4.11), so that
$$\mathbf{L}_i\times\mathbf{L}_i = \mathrm{i}\,\hbar\,\mathbf{L}_i.\tag{4.12}$$
However, we expect the angular momentum operators belonging to different particles to commute, because they represent different degrees of freedom of the system. So, we can write
$$\mathbf{L}_i\times\mathbf{L}_j + \mathbf{L}_j\times\mathbf{L}_i = 0,\tag{4.13}$$
for $i\neq j$. Consider the total angular momentum of the system, $\mathbf{L} = \sum_{i=1,N}\mathbf{L}_i$. It is clear from Equations (4.12) and (4.13) that
$$\mathbf{L}\times\mathbf{L} = \sum_{i=1,N}\mathbf{L}_i\times\sum_{j=1,N}\mathbf{L}_j = \sum_{i=1,N}\mathbf{L}_i\times\mathbf{L}_i + \frac{1}{2}\sum_{\substack{i,j=1,N\\ i\neq j}}(\mathbf{L}_i\times\mathbf{L}_j + \mathbf{L}_j\times\mathbf{L}_i) = \mathrm{i}\,\hbar\sum_{i=1,N}\mathbf{L}_i = \mathrm{i}\,\hbar\,\mathbf{L}.\tag{4.14}$$
Thus, the sum of two or more angular momentum vectors satisfies the same commutation relation as a primitive angular momentum vector. In particular, the total angular momentum of the system satisfies the commutation relation (4.11).

The immediate conclusion which can be drawn from the commutation relations (4.8)–(4.10) is that the three components of an angular momentum vector cannot be specified (or measured) simultaneously.
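The commutation relations (4.8)–(4.10) can be verified concretely in a finite-dimensional representation. The sketch below, not from the text, uses the standard matrices for the components of angular momentum in the three-dimensional $l = 1$ basis (with $\hbar = 1$), and also confirms that $L_x^{\,2} + L_y^{\,2} + L_z^{\,2} = l\,(l+1) = 2$ times the identity.

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def comm(a, b):
    """Commutator [a, b]."""
    return a @ b - b @ a

err = max(np.abs(comm(Lx, Ly) - 1j * Lz).max(),
          np.abs(comm(Ly, Lz) - 1j * Lx).max(),
          np.abs(comm(Lz, Lx) - 1j * Ly).max())
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz       # = 2 * identity for l = 1
```

That `L2` is proportional to the identity, while no two of the component matrices commute, is the matrix counterpart of the statement that the magnitude, but not all three components, can be specified simultaneously.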
In fact, once we have specified one component, the values of the other two components become uncertain. It is conventional to specify the $z$-component, $L_z$.

Consider the magnitude squared of the angular momentum vector, $L^2\equiv L_x^{\,2}+L_y^{\,2}+L_z^{\,2}$. The commutator of $L^2$ and $L_z$ is written
$$[L^2, L_z] = [L_x^{\,2}, L_z] + [L_y^{\,2}, L_z] + [L_z^{\,2}, L_z].\tag{4.15}$$
It is easily demonstrated that
$$[L_x^{\,2}, L_z] = -\mathrm{i}\,\hbar\,(L_x\,L_y + L_y\,L_x),\tag{4.16}$$
$$[L_y^{\,2}, L_z] = +\mathrm{i}\,\hbar\,(L_x\,L_y + L_y\,L_x),\tag{4.17}$$
$$[L_z^{\,2}, L_z] = 0,\tag{4.18}$$
so
$$[L^2, L_z] = 0.\tag{4.19}$$
Because there is nothing special about the $z$-direction, we conclude that $L^2$ also commutes with $L_x$ and $L_y$. It is clear from Equations (4.8)–(4.10) and (4.19) that the best we can do in quantum mechanics is to specify the magnitude of an angular momentum vector along with one of its components (by convention, the $z$-component).

It is convenient to define the shift operators $L^+$ and $L^-$:
$$L^+ = L_x + \mathrm{i}\,L_y,\tag{4.20}$$
$$L^- = L_x - \mathrm{i}\,L_y.\tag{4.21}$$
It can easily be shown that
$$[L^+, L_z] = -\hbar\,L^+,\tag{4.22}$$
$$[L^-, L_z] = +\hbar\,L^-,\tag{4.23}$$
$$[L^+, L^-] = 2\,\hbar\,L_z,\tag{4.24}$$
and also that both shift operators commute with $L^2$.

4.2 Eigenvalues of Orbital Angular Momentum

Suppose that the simultaneous eigenkets of $L^2$ and $L_z$ are completely specified by two quantum numbers, $l$ and $m$. These kets are denoted $|l,m\rangle$. The quantum number $m$ is defined by
$$L_z\,|l,m\rangle = m\,\hbar\,|l,m\rangle.\tag{4.25}$$
Thus, $m$ is the eigenvalue of $L_z$ divided by $\hbar$. It is possible to write such an equation because $\hbar$ has the dimensions of angular momentum. Note that $m$ is a real number, because $L_z$ is an Hermitian operator. We can write
$$L^2\,|l,m\rangle = f(l,m)\,\hbar^2\,|l,m\rangle,\tag{4.26}$$
without loss of generality, where $f(l,m)$ is some real dimensionless function of $l$ and $m$. Later on, we will show that $f(l,m) = l\,(l+1)$. Now,
$$\langle l,m|\,L^2 - L_z^{\,2}\,|l,m\rangle = \langle l,m|\,f(l,m)\,\hbar^2 - m^2\,\hbar^2\,|l,m\rangle = [f(l,m) - m^2]\,\hbar^2,\tag{4.27}$$
assuming that the $|l,m\rangle$ have unit norms. However,
$$\langle l,m|\,L^2 - L_z^{\,2}\,|l,m\rangle = \langle l,m|\,L_x^{\,2} + L_y^{\,2}\,|l,m\rangle = \langle l,m|\,L_x^{\,2}\,|l,m\rangle + \langle l,m|\,L_y^{\,2}\,|l,m\rangle.\tag{4.28}$$
It is easily demonstrated that
$$\langle A|\,\xi^{\,2}\,|A\rangle \geq 0,\tag{4.29}$$
where $|A\rangle$ is a general ket, and $\xi$ is an Hermitian operator. The proof follows from the observation that
$$\langle A|\,\xi^{\,2}\,|A\rangle = \langle A|\,\xi^{\dagger}\,\xi\,|A\rangle = \langle B|B\rangle,\tag{4.30}$$
where $|B\rangle = \xi\,|A\rangle$, plus the fact that $\langle B|B\rangle\geq 0$ for a general ket $|B\rangle$ [see Equation (1.21)]. It follows from Equations (4.27)–(4.29) that
$$m^2 \leq f(l,m).\tag{4.31}$$
Consider the effect of the shift operator $L^+$ on the eigenket $|l,m\rangle$. It is easily demonstrated that
$$L^2\,(L^+\,|l,m\rangle) = \hbar^2\,f(l,m)\,(L^+\,|l,m\rangle),\tag{4.32}$$
where use has been made of Equation (4.26), plus the fact that $L^2$ and $L^+$ commute. It follows that the ket $L^+\,|l,m\rangle$ has the same eigenvalue of $L^2$ as the ket $|l,m\rangle$. Thus, the shift operator $L^+$ does not affect the magnitude of the angular momentum of any eigenket it acts upon. However,
$$L_z\,L^+\,|l,m\rangle = (L^+\,L_z + [L_z, L^+])\,|l,m\rangle = (L^+\,L_z + \hbar\,L^+)\,|l,m\rangle = (m+1)\,\hbar\,L^+\,|l,m\rangle,\tag{4.33}$$
where use has been made of Equation (4.22). The above equation implies that $L^+\,|l,m\rangle$ is proportional to $|l,m+1\rangle$. We can write
$$L^+\,|l,m\rangle = c^{+}_{l\,m}\,\hbar\,|l,m+1\rangle,\tag{4.34}$$
where $c^{+}_{l\,m}$ is a number. It is clear that if the operator $L^+$ acts on a simultaneous eigenstate of $L^2$ and $L_z$ then the eigenvalue of $L^2$ remains unchanged, but the eigenvalue of $L_z$ is increased by $\hbar$. For this reason, $L^+$ is called a raising operator. Using similar arguments to those given above, it is possible to demonstrate that
$$L^-\,|l,m\rangle = c^{-}_{l\,m}\,\hbar\,|l,m-1\rangle.\tag{4.35}$$
Hence, $L^-$ is called a lowering operator. The shift operators, $L^+$ and $L^-$, respectively step the value of $m$ up and down by unity each time they operate on one of the simultaneous eigenkets of $L^2$ and $L_z$. It would appear, at first sight, that any value of $m$ can be obtained by applying these operators a sufficient number of times. However, according to Equation (4.31), there is a definite upper bound to the values that $m^2$ can take. This bound is determined by the eigenvalue of $L^2$ [see Equation (4.26)].
It follows that there is a maximum and a minimum possible value which $m$ can take. Suppose that we attempt to raise the value of $m$ above its maximum value $m_{\rm max}$. Because there is no state with $m > m_{\rm max}$, we must have
$$L^+\,|l,m_{\rm max}\rangle = |0\rangle.\tag{4.36}$$
This implies that
$$L^-\,L^+\,|l,m_{\rm max}\rangle = |0\rangle.\tag{4.37}$$
However,
$$L^-\,L^+ = L_x^{\,2} + L_y^{\,2} + \mathrm{i}\,[L_x, L_y] = L^2 - L_z^{\,2} - \hbar\,L_z,\tag{4.38}$$
so Equation (4.37) yields
$$(L^2 - L_z^{\,2} - \hbar\,L_z)\,|l,m_{\rm max}\rangle = |0\rangle.\tag{4.39}$$
The above equation can be rearranged to give
$$L^2\,|l,m_{\rm max}\rangle = (L_z^{\,2} + \hbar\,L_z)\,|l,m_{\rm max}\rangle = m_{\rm max}\,(m_{\rm max}+1)\,\hbar^2\,|l,m_{\rm max}\rangle.\tag{4.40}$$
Comparison of this equation with Equation (4.26) yields the result
$$f(l, m_{\rm max}) = m_{\rm max}\,(m_{\rm max}+1).\tag{4.41}$$
But, when $L^-$ operates on $|l, m_{\rm max}\rangle$ it generates $|l, m_{\rm max}-1\rangle$, $|l, m_{\rm max}-2\rangle$, etc. Because the lowering operator does not change the eigenvalue of $L^2$, all of these states must correspond to the same value of $f$: namely, $m_{\rm max}\,(m_{\rm max}+1)$. Thus,
$$L^2\,|l,m\rangle = m_{\rm max}\,(m_{\rm max}+1)\,\hbar^2\,|l,m\rangle.\tag{4.42}$$
At this stage, we can give the unknown quantum number $l$ the value $m_{\rm max}$, without loss of generality. We can also write the above equation in the form
$$L^2\,|l,m\rangle = l\,(l+1)\,\hbar^2\,|l,m\rangle.\tag{4.43}$$
It is easily seen that
$$L^-\,L^+\,|l,m\rangle = (L^2 - L_z^{\,2} - \hbar\,L_z)\,|l,m\rangle = \hbar^2\,[l\,(l+1) - m\,(m+1)]\,|l,m\rangle.\tag{4.44}$$
Thus,
$$\langle l,m|\,L^-\,L^+\,|l,m\rangle = \hbar^2\,[l\,(l+1) - m\,(m+1)].\tag{4.45}$$
However, we also know that
$$\langle l,m|\,L^-\,L^+\,|l,m\rangle = \langle l,m|\,L^-\,c^{+}_{l\,m}\,\hbar\,|l,m+1\rangle = \hbar^2\,c^{+}_{l\,m}\,c^{-}_{l\,m+1},\tag{4.46}$$
where use has been made of Equations (4.34) and (4.35). It follows that
$$c^{+}_{l\,m}\,c^{-}_{l\,m+1} = l\,(l+1) - m\,(m+1).\tag{4.47}$$
Consider the following:
$$\langle l,m|\,L^-\,|l,m+1\rangle = \langle l,m|\,L_x\,|l,m+1\rangle - \mathrm{i}\,\langle l,m|\,L_y\,|l,m+1\rangle = \langle l,m+1|\,L_x\,|l,m\rangle^{*} - \mathrm{i}\,\langle l,m+1|\,L_y\,|l,m\rangle^{*} = (\langle l,m+1|\,L_x\,|l,m\rangle + \mathrm{i}\,\langle l,m+1|\,L_y\,|l,m\rangle)^{*} = \langle l,m+1|\,L^+\,|l,m\rangle^{*},\tag{4.48}$$
where use has been made of the fact that $L_x$ and $L_y$ are Hermitian. The above equation reduces to
$$c^{-}_{l\,m+1} = (c^{+}_{l\,m})^{*}\tag{4.49}$$
with the aid of Equations (4.34) and (4.35). Equations (4.47) and (4.49) can be combined to give
$$|c^{+}_{l\,m}|^{\,2} = l\,(l+1) - m\,(m+1).\tag{4.50}$$
The solution of the above equation is
$$c^{+}_{l\,m} = \sqrt{l\,(l+1) - m\,(m+1)}.\tag{4.51}$$
Note that $c^{+}_{l\,m}$ is undetermined to an arbitrary phase-factor [i.e., we can replace $c^{+}_{l\,m}$, given above, by $c^{+}_{l\,m}\,\exp(\mathrm{i}\,\gamma)$, where $\gamma$ is real, and still satisfy Equation (4.50)]. We have made the arbitrary, but convenient, choice that $c^{+}_{l\,m}$ is real and positive. This is equivalent to choosing the relative phases of the eigenkets $|l,m\rangle$. According to Equation (4.49),
$$c^{-}_{l\,m} = (c^{+}_{l\,m-1})^{*} = \sqrt{l\,(l+1) - m\,(m-1)}.\tag{4.52}$$
We have already seen that the inequality (4.31) implies that there is a maximum and a minimum possible value of $m$. The maximum value of $m$ is denoted $l$. What is the minimum value? Suppose that we try to lower the value of $m$ below its minimum value $m_{\rm min}$. Because there is no state with $m < m_{\rm min}$, we must have
$$L^-\,|l,m_{\rm min}\rangle = |0\rangle.\tag{4.53}$$
According to Equation (4.35), this implies that
$$c^{-}_{l\,m_{\rm min}} = 0.\tag{4.54}$$
It can be seen from Equation (4.52) that $m_{\rm min} = -l$. We conclude that $m$ can take a "ladder" of discrete values, each rung differing from its immediate neighbors by unity. The top rung is $l$, and the bottom rung is $-l$. There are only two possible choices for $l$. Either it is an integer (e.g., $l=2$, which allows $m$ to take the values $-2,-1,0,1,2$), or it is a half-integer (e.g., $l=3/2$, which allows $m$ to take the values $-3/2,-1/2,1/2,3/2$). We shall prove in the next section that an orbital angular momentum can only take integer values of $l$.

In summary, using just the fundamental commutation relations (4.8)–(4.10), plus the fact that $L_x$, $L_y$, and $L_z$ are Hermitian operators, we have shown that the eigenvalues of $L^2\equiv L_x^{\,2}+L_y^{\,2}+L_z^{\,2}$ can be written $l\,(l+1)\,\hbar^2$, where $l$ is an integer, or a half-integer. We have also demonstrated that the eigenvalues of $L_z$ can only take the values $m\,\hbar$, where $m$ lies in the range $-l, -l+1, \cdots, l-1, l$.
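The whole construction can be run in reverse as a numerical check. The sketch below, not from the text, builds $L_z$ and the shift operators for a given $l$ directly from the coefficient $c^{+}_{l\,m} = \sqrt{l\,(l+1)-m\,(m+1)}$ derived above (with $\hbar = 1$ and an illustrative choice $l = 2$), and confirms that the resulting matrices satisfy $L^2 = l\,(l+1)$ times the identity and $[L^+, L^-] = 2\,L_z$.

```python
import numpy as np

l = 2
m = np.arange(l, -l - 1, -1).astype(float)        # m = l, l-1, ..., -l
dim = m.size
Lz = np.diag(m)
Lp = np.zeros((dim, dim))
for i in range(1, dim):                           # L+ |l,m> = c+_{lm} |l,m+1>
    Lp[i - 1, i] = np.sqrt(l * (l + 1) - m[i] * (m[i] + 1))
Lm = Lp.T                                         # L- = (L+)^dagger (real entries)
Lx = (Lp + Lm) / 2
Ly = (Lp - Lm) / 2j
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz                  # should be l(l+1) * identity
```

Note that the top row of `Lp` annihilates the $m = l$ state and the bottom column annihilates nothing upward out of $m = -l$: the ladder terminates automatically because $c^{+}_{l\,l} = 0$.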
Let $|l,m\rangle$ denote a properly normalized simultaneous eigenket of $L^2$ and $L_z$, belonging to the eigenvalues $l\,(l+1)\,\hbar^2$ and $m\,\hbar$, respectively. We have shown that
$$L^+\,|l,m\rangle = \sqrt{l\,(l+1) - m\,(m+1)}\,\hbar\,|l,m+1\rangle,\tag{4.55}$$
$$L^-\,|l,m\rangle = \sqrt{l\,(l+1) - m\,(m-1)}\,\hbar\,|l,m-1\rangle,\tag{4.56}$$
where $L^{\pm} = L_x \pm \mathrm{i}\,L_y$ are the so-called shift operators.

4.3 Rotation Operators

Consider a particle whose position is described by the spherical polar coordinates $(r,\theta,\varphi)$. The classical momentum conjugate to the azimuthal angle $\varphi$ is the $z$-component of angular momentum, $L_z$. According to Section 2.5, in quantum mechanics we can always adopt the Schrödinger representation, for which ket space is spanned by the simultaneous eigenkets of the position operators $r$, $\theta$, and $\varphi$, and $L_z$ takes the form
$$L_z = -\mathrm{i}\,\hbar\,\frac{\partial}{\partial\varphi}.\tag{4.57}$$
We can do this because there is nothing in Section 2.5 which specifies that we have to use Cartesian coordinates; the representation (2.74) works for any well-defined set of coordinates.

Consider an operator $R(\Delta\varphi)$ that rotates the system through an angle $\Delta\varphi$ about the $z$-axis. This operator is very similar to the operator $D(\Delta x)$, introduced in Section 2.8, which translates the system a distance $\Delta x$ along the $x$-axis. We were able to demonstrate in Section 2.8 that
$$p_x = \mathrm{i}\,\hbar\,\lim_{\delta x\rightarrow 0}\frac{D(\delta x) - 1}{\delta x},\tag{4.58}$$
where $p_x$ is the linear momentum conjugate to $x$. There is nothing in our derivation of this result which specifies that $x$ has to be a Cartesian coordinate. Thus, the result should apply just as well to an angular coordinate. We conclude that
$$L_z = \mathrm{i}\,\hbar\,\lim_{\delta\varphi\rightarrow 0}\frac{R(\delta\varphi) - 1}{\delta\varphi}.\tag{4.59}$$
According to Equation (4.59), we can write
$$R(\delta\varphi) = 1 - \mathrm{i}\,L_z\,\delta\varphi/\hbar\tag{4.60}$$
in the limit $\delta\varphi\rightarrow 0$. In other words, the angular momentum operator $L_z$ can be used to rotate the system about the $z$-axis by an infinitesimal amount. We say that $L_z$ is the generator of rotations about the $z$-axis. The above equation implies that
$$R(\Delta\varphi) = \lim_{N\rightarrow\infty}\left(1 - \mathrm{i}\,\frac{\Delta\varphi}{N}\,\frac{L_z}{\hbar}\right)^{N},\tag{4.61}$$
which reduces to
$$R(\Delta\varphi) = \exp(-\mathrm{i}\,L_z\,\Delta\varphi/\hbar).\tag{4.62}$$
Note that $R(\Delta\varphi)$ has all of the properties we would expect of a rotation operator: i.e.,
$$R(0) = 1,\tag{4.63}$$
$$R(\Delta\varphi)\,R(-\Delta\varphi) = 1,\tag{4.64}$$
$$R(\Delta\varphi_1)\,R(\Delta\varphi_2) = R(\Delta\varphi_1 + \Delta\varphi_2).\tag{4.65}$$
Suppose that the system is in a simultaneous eigenstate of $L^2$ and $L_z$. As before, this state is represented by the eigenket $|l,m\rangle$, where the eigenvalue of $L^2$ is $l\,(l+1)\,\hbar^2$, and the eigenvalue of $L_z$ is $m\,\hbar$. We expect the wavefunction to remain unaltered if we rotate the system through $2\pi$ radians about the $z$-axis. Thus,
$$R(2\pi)\,|l,m\rangle = \exp(-\mathrm{i}\,L_z\,2\pi/\hbar)\,|l,m\rangle = \exp(-\mathrm{i}\,2\pi\,m)\,|l,m\rangle = |l,m\rangle.\tag{4.66}$$
We conclude that $m$ must be an integer. This implies, from the previous section, that $l$ must also be an integer. Thus, an orbital angular momentum can only take integer values of the quantum numbers $l$ and $m$.

Consider the action of the rotation operator $R(\Delta\varphi)$ on an eigenstate possessing zero angular momentum about the $z$-axis (i.e., an $m=0$ state). We have
$$R(\Delta\varphi)\,|l,0\rangle = \exp(0)\,|l,0\rangle = |l,0\rangle.\tag{4.67}$$
Thus, the eigenstate is invariant to rotations about the $z$-axis. Clearly, its wavefunction must be symmetric about the $z$-axis.

There is nothing special about the $z$-axis, so we can write
$$R_x(\Delta\varphi_x) = \exp(-\mathrm{i}\,L_x\,\Delta\varphi_x/\hbar),\tag{4.68}$$
$$R_y(\Delta\varphi_y) = \exp(-\mathrm{i}\,L_y\,\Delta\varphi_y/\hbar),\tag{4.69}$$
$$R_z(\Delta\varphi_z) = \exp(-\mathrm{i}\,L_z\,\Delta\varphi_z/\hbar),\tag{4.70}$$
by analogy with Equation (4.62). Here, $R_x(\Delta\varphi_x)$ denotes an operator that rotates the system by an angle $\Delta\varphi_x$ about the $x$-axis, etc. Suppose that the system is in an eigenstate of zero overall orbital angular momentum (i.e., an $l=0$ state). We know that the system is also in an eigenstate of zero orbital angular momentum about any particular axis. This follows because $l=0$ implies $m=0$, according to the previous section, and we can choose the $z$-axis to point in any direction. Thus,
$$R_x(\Delta\varphi_x)\,|0,0\rangle = \exp(0)\,|0,0\rangle = |0,0\rangle,\tag{4.71}$$
$$R_y(\Delta\varphi_y)\,|0,0\rangle = \exp(0)\,|0,0\rangle = |0,0\rangle,\tag{4.72}$$
$$R_z(\Delta\varphi_z)\,|0,0\rangle = \exp(0)\,|0,0\rangle = |0,0\rangle.\tag{4.73}$$
Clearly, a zero angular momentum state is invariant to rotations about any axis.
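The single-valuedness argument of Equation (4.66) amounts to a one-line computation, since $|l,m\rangle$ is an eigenket of $L_z$. The sketch below, not from the text, evaluates the phase $\mathrm{e}^{-2\pi\mathrm{i}\,m}$ acquired under a full rotation, for integer and (hypothetically) half-integer $m$, with $\hbar = 1$.

```python
import numpy as np

def phase_after_2pi(m):
    """Phase acquired by |l, m> under a rotation through 2*pi about the z-axis."""
    return np.exp(-2j * np.pi * m)

integer_phases = [phase_after_2pi(m) for m in (-2, -1, 0, 1, 2)]  # all equal +1
half_integer_phase = phase_after_2pi(0.5)                         # equals -1
```

The sign flip for half-integer $m$ is exactly why such values are excluded for orbital angular momentum, while remaining admissible for spin, whose states are not single-valued wavefunctions of $\varphi$.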
Such a state must possess a spherically symmetric wavefunction. Note that a rotation about the x-axis does not commute with a rotation about the y-axis. In other words, if the system is rotated an angle ??x about the x-axis, and then ??y about the y-axis, it ends up in a di?erent state to that obtained by rotating an angle ??y about the y-axis, and then ??x about the x-axis. In quantum mechanics, this implies that Ry(??y) Rx(??x) Rx(??x) Ry(??y), or Ly Lx Lx Ly, [see Equations (4.68)–(4.70)]. Thus, the noncommuting nature of the angular momentum operators is a direct consequence of the fact that rotations do not commute. Orbital Angular Momentum 65 4.4 Eigenfunctions of Orbital Angular Momentum In Cartesian coordinates, the three components of orbital angular momentum can be written Lx = ?i y ? ?z ? z ? ?y , (4.74) Ly = ?i z ? ?x ? x ? ?z , (4.75) Lz = ?i x ? ?y ? y ? ?x , (4.76) using the Schr¨ odinger representation. Transforming to standard spherical polar coordinates, x = r sin θ cos ?, (4.77) y = r sin θ sin ?, (4.78) z = r cos θ, (4.79) we obtain Lx = i sin ? ? ?θ + cot θ cos ? ? ?? , (4.80) Ly = ?i cos ? ? ?θ ? cot θ sin ? ? ?? , (4.81) Lz = ?i ? ?? . (4.82) Note that Equation (4.82) accords with Equation (4.57). The shift operators L± = Lx ± i Ly become L± = ± exp(±i ?) ? ?θ ± i cot θ ? ?? . (4.83) Now, L2 = L2 x + L2 y + L2 z = L2 z + (L+ L? + L? L+ )/2, (4.84) so L2 = ? 2 1 sin θ ? ?θ sin θ ? ?θ + 1 sin2 θ ?2 ??2 . (4.85) The eigenvalue problem for L2 takes the form L2 ψ = λ 2 ψ, (4.86) where ψ(r, θ, ?) is the wavefunction, and λ is a number. Let us write ψ(r, θ, ?) = R(r) Y(θ, ?). (4.87) 66 QUANTUM MECHANICS Equation (4.86) reduces to 1 sin θ ? ?θ sin θ ? ?θ + 1 sin2 θ ?2 ??2 Y + λ Y = 0, (4.88) where use has been made of Equation (4.85). As is well-known, square integrable solutions to this equation only exist when λ takes the values l (l + 1), where l is an integer. These solutions are known as spherical harmonics, and can be written Yl m(θ, ?) 
Y_{l m}(θ, φ) = √[ ((2 l + 1)/4π) ((l − m)!/(l + m)!) ] (−1)^m e^{i m φ} P_{l m}(cos θ), (4.89)

where m is a non-negative integer lying in the range 0 ≤ m ≤ l. Here, P_{l m}(ξ) is an associated Legendre function satisfying the equation

d/dξ [ (1 − ξ²) dP_{l m}/dξ ] − [m²/(1 − ξ²)] P_{l m} + l (l + 1) P_{l m} = 0. (4.90)

We define

Y_{l −m} = (−1)^m Y*_{l m}, (4.91)

which allows m to take the negative values −l ≤ m < 0. The spherical harmonics are orthogonal functions, and are properly normalized with respect to integration over the entire solid angle:

∫₀^π ∫₀^{2π} dθ dφ sin θ Y*_{l m}(θ, φ) Y_{l′ m′}(θ, φ) = δ_{l l′} δ_{m m′}. (4.92)

The spherical harmonics also form a complete set for representing general functions of θ and φ.

By definition,

L² Y_{l m} = l (l + 1) ℏ² Y_{l m}, (4.93)

where l is an integer. It follows from Equations (4.82) and (4.89) that

L_z Y_{l m} = m ℏ Y_{l m}, (4.94)

where m is an integer lying in the range −l ≤ m ≤ l. Thus, the wavefunction ψ(r, θ, φ) = R(r) Y_{l m}(θ, φ), where R is a general function, has all of the expected features of the wavefunction of a simultaneous eigenstate of L² and L_z belonging to the quantum numbers l and m. The well-known formula

dP_{l m}/dξ = −(1/√(1 − ξ²)) P_{l m+1} − [m ξ/(1 − ξ²)] P_{l m} = (l + m)(l − m + 1) (1/√(1 − ξ²)) P_{l m−1} + [m ξ/(1 − ξ²)] P_{l m} (4.95)

can be combined with Equations (4.83) and (4.89) to give

L₊ Y_{l m} = ℏ [l (l + 1) − m (m + 1)]^{1/2} Y_{l m+1}, (4.96)
L₋ Y_{l m} = ℏ [l (l + 1) − m (m − 1)]^{1/2} Y_{l m−1}. (4.97)

These equations are equivalent to Equations (4.55)–(4.56). Note that a spherical harmonic wavefunction is symmetric about the z-axis (i.e., independent of φ) whenever m = 0, and is spherically symmetric whenever l = 0 (since Y_{0 0} = 1/√(4π)).

In summary, by solving directly for the eigenfunctions of L² and L_z in the Schrödinger representation, we have been able to reproduce all of the results of Section 4.2. Nevertheless, the results of Section 4.2 are more general than those obtained in this section, because they still apply when the quantum number l takes on half-integer values.
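The orthonormality relation (4.92) is easy to verify numerically for the lowest harmonics. The sketch below (Python with NumPy; an illustrative script, not from the text) hard-codes Y₀₀, Y₁₀, and Y₁₁ from Equation (4.89) and evaluates the solid-angle integrals with a simple midpoint rule:

```python
import numpy as np

# lowest spherical harmonics, written out from Eq. (4.89)
def Y00(theta, phi): return np.full_like(theta, 1/np.sqrt(4*np.pi)) + 0j
def Y10(theta, phi): return np.sqrt(3/(4*np.pi))*np.cos(theta) + 0j
def Y11(theta, phi): return -np.sqrt(3/(8*np.pi))*np.sin(theta)*np.exp(1j*phi)

def overlap(Ya, Yb, n=600):
    """Midpoint-rule estimate of the solid-angle integral in Eq. (4.92)."""
    dt, dp = np.pi/n, 2*np.pi/n
    theta = (np.arange(n) + 0.5)*dt
    phi = (np.arange(n) + 0.5)*dp
    T, P = np.meshgrid(theta, phi, indexing="ij")
    return np.sum(np.conj(Ya(T, P))*Yb(T, P)*np.sin(T))*dt*dp

norm_00 = overlap(Y00, Y00)   # should be close to 1
norm_11 = overlap(Y11, Y11)   # should be close to 1
cross   = overlap(Y00, Y10)   # should be close to 0
```

The diagonal overlaps come out equal to unity and the off-diagonal overlap vanishes, in accordance with Equation (4.92).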
4.5 Motion in Central Field

Consider a particle of mass M moving in a spherically symmetric potential. The Hamiltonian takes the form

H = p²/(2 M) + V(r). (4.98)

Adopting the Schrödinger representation, we can write p = −i ℏ ∇. Hence,

H = −(ℏ²/2 M) ∇² + V(r). (4.99)

When written in spherical polar coordinates, the above equation becomes

H = −(ℏ²/2 M) [ (1/r²) ∂/∂r (r² ∂/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂/∂θ) + (1/(r² sin²θ)) ∂²/∂φ² ] + V(r). (4.100)

Comparing this equation with Equation (4.85), we find that

H = −(ℏ²/2 M) (1/r²) ∂/∂r (r² ∂/∂r) + L²/(2 M r²) + V(r). (4.101)

Now, we know that the three components of angular momentum commute with L² (see Section 4.1). We also know, from Equations (4.80)–(4.82), that L_x, L_y, and L_z take the form of partial derivative operators involving only angular coordinates, when written in terms of spherical polar coordinates using the Schrödinger representation. It follows from Equation (4.101) that all three components of the angular momentum commute with the Hamiltonian:

[L, H] = 0. (4.102)

It is also easily seen that L² (which can be expressed as a purely angular differential operator) commutes with the Hamiltonian:

[L², H] = 0. (4.103)

According to Section 3.2, the previous two equations ensure that the angular momentum L and its magnitude squared L² are both constants of the motion. This is as expected for a spherically symmetric potential.

Consider the energy eigenvalue problem

H ψ = E ψ, (4.104)

where E is a number. Since L² and L_z commute with each other and the Hamiltonian, it is always possible to represent the state of the system in terms of the simultaneous eigenstates of L², L_z, and H. But, we already know that the most general form for the wavefunction of a simultaneous eigenstate of L² and L_z is (see the previous section)

ψ(r, θ, φ) = R(r) Y_{l m}(θ, φ). (4.105)

Substituting Equation (4.105) into Equation (4.104), and making use of Equation (4.93), we obtain

{ −(ℏ²/2 M) [ (1/r²) d/dr (r² d/dr) − l (l + 1)/r² ] + V(r) − E } R = 0.
(4.106) This is a Sturm–Liouville equation for the function R(r). We know, from the general properties of this type of equation, that if R(r) is required to be well behaved at r = 0 and as r → ∞ then solutions only exist for a discrete set of values of E. These are the energy eigenvalues. In general, the energy eigenvalues depend on the quantum number l, but are independent of the quantum number m.

4.6 Energy Levels of Hydrogen Atom

Consider a hydrogen atom, for which the potential takes the specific form

V(r) = −e²/(4π ε₀ r). (4.107)

The radial eigenfunction R(r) satisfies Equation (4.106), which can be written

{ −(ℏ²/2 μ) [ (1/r²) d/dr (r² d/dr) − l (l + 1)/r² ] − e²/(4π ε₀ r) − E } R = 0. (4.108)

Here, μ = m_e m_p/(m_e + m_p) ≃ m_e is the reduced mass, which takes into account the fact that the electron (of mass m_e) and the proton (of mass m_p) both rotate about a common centre, which is equivalent to a particle of mass μ rotating about a fixed point. Let us write the product r R(r) as the function P(r). The above equation transforms to

d²P/dr² − (2 μ/ℏ²) [ l (l + 1) ℏ²/(2 μ r²) − e²/(4π ε₀ r) − E ] P = 0, (4.109)

which is the one-dimensional Schrödinger equation for a particle of mass μ moving in the effective potential

V_eff(r) = −e²/(4π ε₀ r) + l (l + 1) ℏ²/(2 μ r²). (4.110)

The effective potential has a simple physical interpretation. The first part is the attractive Coulomb potential, and the second part corresponds to the repulsive centrifugal force. Let

a = √( −ℏ²/(2 μ E) ), (4.111)

and y = r/a, with

P(r) = f(y) exp(−y). (4.112)

Here, it is assumed that the energy eigenvalue E is negative. Equation (4.109) transforms to

[ d²/dy² − 2 d/dy − l (l + 1)/y² + (2 μ e² a)/(4π ε₀ ℏ²) (1/y) ] f = 0. (4.113)

Let us look for a power series solution of the form

f(y) = ∑_n c_n yⁿ. (4.114)

Substituting this solution into Equation (4.113), we obtain

∑_n c_n [ n (n − 1) y^{n−2} − 2 n y^{n−1} − l (l + 1) y^{n−2} + (2 μ e² a)/(4π ε₀ ℏ²) y^{n−1} ] = 0. (4.115)

Equating the coefficients of y^{n−2} gives

c_n [n (n − 1) −
l (l + 1)] = c_{n−1} [ 2 (n − 1) − (2 μ e² a)/(4π ε₀ ℏ²) ]. (4.116)

Now, the power series (4.114) must terminate at small n, at some positive value of n, otherwise f(y) would behave unphysically as y → 0. This is only possible if [n_min (n_min − 1) − l (l + 1)] = 0, where the first term in the series is c_{n_min} y^{n_min}. There are two possibilities: n_min = −l or n_min = l + 1. The former predicts unphysical behavior of the wavefunction at y = 0. Thus, we conclude that n_min = l + 1. Note that for an l = 0 state there is a finite probability of finding the electron at the nucleus, whereas for an l > 0 state there is zero probability of finding the electron at the nucleus (i.e., |ψ|² = 0 at r = 0, except when l = 0). Note, also, that it is only possible to obtain sensible behavior of the wavefunction as r → 0 if l is an integer.

For large values of y, the ratio of successive terms in the series (4.114) is

c_n y/c_{n−1} = 2 y/n, (4.117)

according to Equation (4.116). This is the same as the ratio of successive terms in the series

∑_n (2 y)ⁿ/n!, (4.118)

which converges to exp(2 y). We conclude that f(y) → exp(2 y) as y → ∞. It follows from Equation (4.112) that R(r) → exp(r/a)/r as r → ∞. This does not correspond to physically acceptable behavior of the wavefunction, since ∫ d³x′ |ψ|² must be finite. The only way in which we can avoid this unphysical behavior is if the series (4.114) terminates at some maximum value of n. According to the recursion relation (4.116), this is only possible if

(μ e² a)/(4π ε₀ ℏ²) = n, (4.119)

where the last term in the series is c_n yⁿ. It follows from Equation (4.111) that the energy eigenvalues are quantized, and can only take the values

E = E₀/n², (4.120)

where

E₀ = −μ e⁴/(32π² ε₀² ℏ²) = −13.6 eV (4.121)

is the ground state energy. Here, n is a positive integer which must exceed the quantum number l, otherwise there would be no terms in the series (4.114).

The properly normalized wavefunction of a hydrogen atom is written
ψ(r, θ, φ) = R_{n l}(r) Y_{l m}(θ, φ), (4.122)

where

R_{n l}(r) = R_{n l}(r/a), (4.123)

and

a = n a₀. (4.124)

Here,

a₀ = 4π ε₀ ℏ²/(μ e²) = 5.3 × 10⁻¹¹ meters (4.125)

is the Bohr radius, and R_{n l}(x) is a well-behaved solution of the differential equation

[ (1/x²) d/dx (x² d/dx) − l (l + 1)/x² + 2 n/x − 1 ] R_{n l} = 0 (4.126)

that is consistent with the normalization constraint

∫₀^∞ dr r² [R_{n l}(r)]² = 1. (4.127)

Finally, the Y_{l m} are spherical harmonics. The restrictions on the quantum numbers are |m| ≤ l < n, where n is a positive integer, l a non-negative integer, and m an integer.

The ground state of hydrogen corresponds to n = 1. The only permissible values of the other quantum numbers are l = 0 and m = 0. Thus, the ground state is a spherically symmetric, zero angular momentum state. The next energy level corresponds to n = 2. The other quantum numbers are allowed to take the values l = 0, m = 0 or l = 1, m = −1, 0, 1. Thus, for n = 2 there are states with non-zero angular momentum. Note that the energy levels given in Equation (4.120) are independent of the quantum number l, despite the fact that l appears in the radial eigenfunction equation (4.126). This is a special property of a 1/r Coulomb potential.

In addition to the quantized negative energy states of the hydrogen atom, which we have just found, there is also a continuum of unbound positive energy states.

Exercises

4.1 Demonstrate directly from the fundamental commutation relations for angular momentum, (4.11), that [L², L_z] = 0, [L±, L_z] = ∓ℏ L±, and [L₊, L₋] = 2 ℏ L_z.

4.2 Demonstrate from Equations (4.74)–(4.79) that

L_x = i ℏ (sin φ ∂/∂θ + cot θ cos φ ∂/∂φ),
L_y = −i ℏ (cos φ ∂/∂θ − cot θ sin φ ∂/∂φ),
L_z = −i ℏ ∂/∂φ,

where θ, φ are conventional spherical polar angles.

4.3 A system is in the state ψ(θ, φ) = Y_{l m}(θ, φ). Evaluate ⟨L_x⟩, ⟨L_y⟩, ⟨L_x²⟩, and ⟨L_y²⟩.

4.4 Derive Equations (4.96) and (4.97) from Equation (4.95).

4.5 Find the eigenvalues and eigenfunctions (in terms of the angles θ and φ) of L_x.
4.6 Consider a beam of particles with l = 1. A measurement of L_x yields the result ℏ. What values will be obtained by a subsequent measurement of L_z, and with what probabilities? Repeat the calculation for the cases in which the measurement of L_x yields the results 0 and −ℏ.

4.7 The Hamiltonian for an axially symmetric rotator is given by

H = (L_x² + L_y²)/(2 I₁) + L_z²/(2 I₂).

What are the eigenvalues of H?

4.8 The expectation value of f(x, p) in any stationary state is a constant. Calculate

0 = d/dt ⟨x · p⟩ = (i/ℏ) ⟨[H, x · p]⟩

for a Hamiltonian of the form

H = p²/(2 m) + V(r).

Hence, show that

⟨p²/(2 m)⟩ = (1/2) ⟨r dV/dr⟩

in a stationary state. This is another form of the Virial theorem. (See Exercise 3.8.)

4.9 Use the Virial theorem of the previous exercise to prove that

⟨1/r⟩ = 1/(n² a₀)

for an energy eigenstate of the hydrogen atom.

4.10 Demonstrate that the first few properly normalized radial wavefunctions of the hydrogen atom take the form:

(a) R_{1 0}(r) = (2/a₀^{3/2}) exp(−r/a₀).

(b) R_{2 0}(r) = (2/(2 a₀)^{3/2}) (1 − r/(2 a₀)) exp(−r/(2 a₀)).

(c) R_{2 1}(r) = (1/(√3 (2 a₀)^{3/2})) (r/a₀) exp(−r/(2 a₀)).

5 Spin Angular Momentum

5.1 Introduction

Up to now, we have tacitly assumed that the state of a particle in quantum mechanics can be completely specified by giving the wavefunction ψ as a function of the spatial coordinates x, y, and z. Unfortunately, there is a wealth of experimental evidence that suggests that this simplistic approach is incomplete.

Consider an isolated system at rest, and let the eigenvalue of its total angular momentum be j (j + 1) ℏ². According to the theory of orbital angular momentum outlined in Sections 4.4 and 4.5, there are two possibilities. For a system consisting of a single particle, j = 0. For a system consisting of two (or more) particles, j is a non-negative integer. However, this does not agree with observations, because we often encounter systems that appear to be structureless, and yet have j ≠ 0.
Even worse, systems where j has half-integer values abound in nature. In order to explain this apparent discrepancy between theory and experiment, Goudsmit and Uhlenbeck (in 1925) introduced the concept of an internal, purely quantum mechanical, angular momentum called spin. For a particle with spin, the total angular momentum in the rest frame is non-vanishing.

5.2 Properties of Spin Angular Momentum

Let us denote the three components of the spin angular momentum of a particle by the Hermitian operators (S_x, S_y, S_z) ≡ S. We assume that these operators obey the fundamental commutation relations (4.8)–(4.10) for the components of an angular momentum. Thus, we can write

S × S = i ℏ S. (5.1)

We can also define the operator

S² = S_x² + S_y² + S_z². (5.2)

According to the quite general analysis of Section 4.1,

[S, S²] = 0. (5.3)

Thus, it is possible to find simultaneous eigenstates of S² and S_z. These are denoted |s, s_z⟩, where

S_z |s, s_z⟩ = s_z ℏ |s, s_z⟩, (5.4)
S² |s, s_z⟩ = s (s + 1) ℏ² |s, s_z⟩. (5.5)

According to the equally general analysis of Section 4.2, the quantum number s can, in principle, take integer or half-integer values, and the quantum number s_z can only take the values s, s − 1, …, −s + 1, −s.

Spin angular momentum clearly has many properties in common with orbital angular momentum. However, there is one vitally important difference. Spin angular momentum operators cannot be expressed in terms of position and momentum operators, as in Equations (4.1)–(4.3), because this identification depends on an analogy with classical mechanics, and the concept of spin is purely quantum mechanical: i.e., it has no analogue in classical physics. Consequently, the restriction that the quantum number of the overall angular momentum must take integer values is lifted for spin angular momentum, since this restriction (found in Sections 4.3 and 4.4) depends on Equations (4.1)–(4.3).
In other words, the spin quantum number s is allowed to take half-integer values.

Consider a spin one-half particle, for which

S_z |±⟩ = ±(ℏ/2) |±⟩, (5.6)
S² |±⟩ = (3 ℏ²/4) |±⟩. (5.7)

Here, the |±⟩ denote eigenkets of the S_z operator corresponding to the eigenvalues ±ℏ/2. These kets are mutually orthogonal (since S_z is an Hermitian operator), so

⟨+|−⟩ = 0. (5.8)

They are also properly normalized and complete, so that

⟨+|+⟩ = ⟨−|−⟩ = 1, (5.9)

and

|+⟩⟨+| + |−⟩⟨−| = 1. (5.10)

It is easily verified that the Hermitian operators defined by

S_x = (ℏ/2) (|+⟩⟨−| + |−⟩⟨+|), (5.11)
S_y = (i ℏ/2) (−|+⟩⟨−| + |−⟩⟨+|), (5.12)
S_z = (ℏ/2) (|+⟩⟨+| − |−⟩⟨−|), (5.13)

satisfy the commutation relations (4.8)–(4.10) (with the L_j replaced by the S_j). The operator S² takes the form

S² = (3 ℏ²/4) 1. (5.14)

It is also easily demonstrated that S² and S_z, defined in this manner, satisfy the eigenvalue relations (5.6)–(5.7). Equations (5.11)–(5.14) constitute a realization of the spin operators S and S² (for a spin one-half particle) in spin space (i.e., the Hilbert sub-space consisting of kets which correspond to the different spin states of the particle).

5.3 Wavefunction of Spin One-Half Particle

The state of a spin one-half particle is represented as a vector in ket space. Let us suppose that this space is spanned by the basis kets |x′, y′, z′, ±⟩. Here, |x′, y′, z′, ±⟩ denotes a simultaneous eigenstate of the position operators x, y, z, and the spin operator S_z, corresponding to the eigenvalues x′, y′, z′, and ±ℏ/2, respectively. The basis kets are assumed to satisfy the completeness relation

∫ dx′ dy′ dz′ ( |x′, y′, z′, +⟩⟨x′, y′, z′, +| + |x′, y′, z′, −⟩⟨x′, y′, z′, −| ) = 1. (5.15)

It is helpful to think of the ket |x′, y′, z′, +⟩ as the product of two kets—a position space ket |x′, y′, z′⟩, and a spin space ket |+⟩.
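The outer-product definitions (5.11)–(5.13) can be checked directly. A minimal sketch (Python with NumPy, in units where ℏ = 1; illustrative, not part of the text) builds S_x, S_y, and S_z from the column representations of |±⟩ and confirms the commutation relation S × S = iℏ S together with the eigenvalue relations (5.6)–(5.7):

```python
import numpy as np

hbar = 1.0
ket_p = np.array([[1.0], [0.0]], dtype=complex)   # |+>
ket_m = np.array([[0.0], [1.0]], dtype=complex)   # |->

# Eqs. (5.11)-(5.13): spin operators as sums of outer products
Sx = (hbar/2)*(ket_p @ ket_m.conj().T + ket_m @ ket_p.conj().T)
Sy = (1j*hbar/2)*(-ket_p @ ket_m.conj().T + ket_m @ ket_p.conj().T)
Sz = (hbar/2)*(ket_p @ ket_p.conj().T - ket_m @ ket_m.conj().T)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz                  # Eq. (5.14): (3/4) hbar^2 times 1

def comm(A, B):
    """Commutator [A, B]."""
    return A @ B - B @ A
```

The assertions below confirm, for example, that [S_x, S_y] = iℏ S_z, and that S² is proportional to the unit operator, as Equation (5.14) states.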
We assume that such a product obeys the commutative and distributive axioms of multiplication:

|x′, y′, z′⟩ |+⟩ = |+⟩ |x′, y′, z′⟩, (5.16)
( c′ |x′, y′, z′⟩ + c′′ |x′′, y′′, z′′⟩ ) |+⟩ = c′ |x′, y′, z′⟩ |+⟩ + c′′ |x′′, y′′, z′′⟩ |+⟩, (5.17)
|x′, y′, z′⟩ ( c₊ |+⟩ + c₋ |−⟩ ) = c₊ |x′, y′, z′⟩ |+⟩ + c₋ |x′, y′, z′⟩ |−⟩, (5.18)

where the c's are numbers. We can give meaning to any position space operator (such as L_z) acting on the product |x′, y′, z′⟩ |+⟩ by assuming that it operates only on the |x′, y′, z′⟩ factor, and commutes with the |+⟩ factor. Similarly, we can give meaning to any spin operator (such as S_z) acting on |x′, y′, z′⟩ |+⟩ by assuming that it operates only on |+⟩, and commutes with |x′, y′, z′⟩. This implies that every position space operator commutes with every spin operator. In this manner, we can give meaning to the equation

|x′, y′, z′, ±⟩ = |x′, y′, z′⟩ |±⟩. (5.19)

The multiplication in the above equation is of a quite different type to any that we have encountered previously. The ket vectors |x′, y′, z′⟩ and |±⟩ lie in two completely separate vector spaces, and their product |x′, y′, z′⟩ |±⟩ lies in a third vector space. In mathematics, the latter space is termed the product space of the former spaces, which are termed factor spaces. The number of dimensions of a product space is equal to the product of the numbers of dimensions of each of the factor spaces. A general ket of the product space is not of the form (5.19), but is instead a sum or integral of kets of this form.

A general state A of a spin one-half particle is represented as a ket ||A⟩ in the product of the spin and position spaces. This state can be completely specified by two wavefunctions:

ψ₊(x′, y′, z′) = ⟨x′, y′, z′| ⟨+||A⟩, (5.20)
ψ₋(x′, y′, z′) = ⟨x′, y′, z′| ⟨−||A⟩. (5.21)

The probability of observing the particle in the region x′ to x′ + dx′, y′ to y′ + dy′, and z′ to z′ + dz′, with s_z = +1/2 is |ψ₊(x′, y′, z′)|² dx′ dy′ dz′.
Likewise, the probability of observing the particle in the region x′ to x′ + dx′, y′ to y′ + dy′, and z′ to z′ + dz′, with s_z = −1/2 is |ψ₋(x′, y′, z′)|² dx′ dy′ dz′. The normalization condition for the wavefunctions is

∫ dx′ dy′ dz′ ( |ψ₊|² + |ψ₋|² ) = 1. (5.22)

5.4 Rotation Operators in Spin Space

Let us, for the moment, forget about the spatial position of the particle, and concentrate on its spin state. A general spin state A is represented by the ket

|A⟩ = ⟨+|A⟩ |+⟩ + ⟨−|A⟩ |−⟩ (5.23)

in spin space. In Section 4.3, we were able to construct an operator R_z(Δφ) that rotates the system through an angle Δφ about the z-axis in position space. Can we also construct an operator T_z(Δφ) that rotates the system through an angle Δφ about the z-axis in spin space? By analogy with Equation (4.62), we would expect such an operator to take the form

T_z(Δφ) = exp(−i S_z Δφ/ℏ). (5.24)

Thus, after rotation, the ket |A⟩ becomes

|A⟩_R = T_z(Δφ) |A⟩. (5.25)

To demonstrate that the operator (5.24) really does rotate the spin of the system, let us consider its effect on ⟨S_x⟩. Under rotation, this expectation value changes as follows:

⟨S_x⟩ → ⟨A|_R S_x |A⟩_R = ⟨A| T_z† S_x T_z |A⟩. (5.26)

Thus, we need to compute

exp(i S_z Δφ/ℏ) S_x exp(−i S_z Δφ/ℏ). (5.27)

This can be achieved in two different ways.

First, we can use the explicit formula for S_x given in Equation (5.11). We find that Equation (5.27) becomes

(ℏ/2) exp(i S_z Δφ/ℏ) ( |+⟩⟨−| + |−⟩⟨+| ) exp(−i S_z Δφ/ℏ), (5.28)

or

(ℏ/2) ( e^{i Δφ/2} |+⟩⟨−| e^{i Δφ/2} + e^{−i Δφ/2} |−⟩⟨+| e^{−i Δφ/2} ), (5.29)

which reduces to

S_x cos Δφ − S_y sin Δφ, (5.30)

where use has been made of Equations (5.11)–(5.13).

A second approach is to use the so-called Baker–Hausdorff lemma. This takes the form

exp(i G λ) A exp(−i G λ) = A + i λ [G, A] + (i² λ²/2!) [G, [G, A]] + (i³ λ³/3!) [G, [G, [G, A]]] + ⋯, (5.31)

where G is an Hermitian operator, and λ a real parameter. The proof of this lemma is left as an exercise. Applying the Baker–Hausdorff lemma to Equation (5.27), we obtain

S_x + (i Δφ/ℏ)
[S_z, S_x] + (1/2!) (i Δφ/ℏ)² [S_z, [S_z, S_x]] + ⋯, (5.32)

which reduces to

S_x [ 1 − Δφ²/2! + Δφ⁴/4! − ⋯ ] − S_y [ Δφ − Δφ³/3! + Δφ⁵/5! − ⋯ ], (5.33)

or

S_x cos Δφ − S_y sin Δφ, (5.34)

where use has been made of Equation (5.1). The second proof is more general than the first, because it only uses the fundamental commutation relation (5.1), and is, therefore, valid for systems with spin angular momentum higher than one-half.

For a spin one-half system, both methods imply that

⟨S_x⟩ → ⟨S_x⟩ cos Δφ − ⟨S_y⟩ sin Δφ (5.35)

under the action of the rotation operator (5.24). It is straightforward to show that

⟨S_y⟩ → ⟨S_y⟩ cos Δφ + ⟨S_x⟩ sin Δφ. (5.36)

Furthermore,

⟨S_z⟩ → ⟨S_z⟩, (5.37)

because S_z commutes with the rotation operator. Equations (5.35)–(5.37) demonstrate that the operator (5.24) rotates the expectation value of S by an angle Δφ about the z-axis. In fact, the expectation value of the spin operator behaves like a classical vector under rotation:

⟨S_k⟩ → ∑_l R_{k l} ⟨S_l⟩, (5.38)

where the R_{k l} are the elements of the conventional rotation matrix for the rotation in question. It is clear, from our second derivation of the result (5.35), that this property is not restricted to the spin operators of a spin one-half system. In fact, we have effectively demonstrated that

⟨J_k⟩ → ∑_l R_{k l} ⟨J_l⟩, (5.39)

where the J_k are the generators of rotation, satisfying the fundamental commutation relation J × J = i ℏ J, and the rotation operator about the kth axis is written R_k(Δφ) = exp(−i J_k Δφ/ℏ).

Consider the effect of the rotation operator (5.24) on the state ket (5.23). It is easily seen that

T_z(Δφ) |A⟩ = e^{−i Δφ/2} ⟨+|A⟩ |+⟩ + e^{i Δφ/2} ⟨−|A⟩ |−⟩. (5.40)

Consider a rotation by 2π radians. We find that

|A⟩ → T_z(2π) |A⟩ = −|A⟩. (5.41)

Note that a ket rotated by 2π radians differs from the original ket by a minus sign. In fact, a rotation by 4π radians is needed to transform a ket into itself. The minus sign does not affect the expectation value of S, since S is sandwiched between ⟨A| and |A⟩, both of which change sign.
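The curious minus sign in Equation (5.41) is easy to exhibit numerically. In the sketch below (Python with NumPy, ℏ = 1; an illustration, not part of the text), S_z is diagonal, so the rotation operator (5.24) is obtained by exponentiating the diagonal entries:

```python
import numpy as np

hbar = 1.0
Sz = (hbar/2)*np.array([[1, 0], [0, -1]], dtype=complex)

def Tz(dphi):
    """Spin-space rotation operator exp(-i Sz dphi/hbar), Eq. (5.24);
    Sz is diagonal, so the exponential acts on the diagonal entries."""
    return np.diag(np.exp(-1j*np.diag(Sz)*dphi/hbar))

ket = np.array([1.0, 0.0], dtype=complex)   # |+>
after_2pi = Tz(2*np.pi) @ ket               # expect -|+>
after_4pi = Tz(4*np.pi) @ ket               # expect |+>
```

Because the eigenvalues of S_z are half-integer multiples of ℏ, the diagonal phases after a 2π rotation are exp(∓iπ) = −1, and only a 4π rotation restores the ket.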
Nevertheless, the minus sign does give rise to observable consequences, as we shall see presently.

5.5 Magnetic Moments

Consider a particle of electric charge q and speed v performing a circular orbit of radius r in the x-y plane. The charge is equivalent to a current loop of radius r in the x-y plane carrying current I = q v/(2π r). The magnetic moment μ of the loop is of magnitude π r² I and is directed along the z-axis. Thus, we can write

μ = (q/2) x × v, (5.42)

where x and v are the vector position and velocity of the particle, respectively. However, we know that v = p/m, where p is the vector momentum of the particle, and m is its mass. We also know that L = x × p, where L is the orbital angular momentum. It follows that

μ = (q/2 m) L. (5.43)

Using the usual analogy between classical and quantum mechanics, we expect the above relation to also hold between the quantum mechanical operators, μ and L, which represent magnetic moment and orbital angular momentum, respectively. This is indeed found to be the case.

Spin angular momentum also gives rise to a contribution to the magnetic moment of a charged particle. In fact, relativistic quantum mechanics predicts that a charged particle possessing spin must also possess a corresponding magnetic moment (this was first demonstrated by Dirac—see Chapter 11). We can write

μ = (q/2 m) (L + g S), (5.44)

where g is called the gyromagnetic ratio. For an electron this ratio is found to be

g_e = 2 [ 1 + (1/2π) e²/(4π ε₀ ℏ c) + ⋯ ]. (5.45)

The factor 2 is correctly predicted by Dirac's relativistic theory of the electron (see Chapter 11). The small correction 1/(2π × 137), derived originally by Schwinger, is due to quantum field effects. We shall ignore this correction in the following, so

μ ≃ −(e/2 m_e) (L + 2 S) (5.46)

for an electron (here, e > 0).

5.6 Spin Precession

The Hamiltonian for an electron at rest in a z-directed magnetic field, B = B e_z, is

H = −μ · B = (e/m_e) S · B = ω S_z, (5.47)

where

ω = e B/m_e.
(5.48) According to Equation (3.28), the time evolution operator for this system is

T(t, 0) = exp(−i H t/ℏ) = exp(−i S_z ω t/ℏ). (5.49)

It can be seen, by comparison with Equation (5.24), that the time evolution operator is precisely the same as the rotation operator for spin, with Δφ set equal to ω t. It is immediately clear that the Hamiltonian (5.47) causes the electron spin to precess about the z-axis with angular frequency ω. In fact, Equations (5.35)–(5.37) imply that

⟨S_x⟩_t = ⟨S_x⟩_{t=0} cos(ω t) − ⟨S_y⟩_{t=0} sin(ω t), (5.50)
⟨S_y⟩_t = ⟨S_y⟩_{t=0} cos(ω t) + ⟨S_x⟩_{t=0} sin(ω t), (5.51)
⟨S_z⟩_t = ⟨S_z⟩_{t=0}. (5.52)

The time evolution of the state ket is given by analogy with Equation (5.40):

|A, t⟩ = e^{−i ω t/2} ⟨+|A, 0⟩ |+⟩ + e^{i ω t/2} ⟨−|A, 0⟩ |−⟩. (5.53)

Note that it takes a time t = 4π/ω for the state ket to return to its original state. By contrast, it only takes a time t = 2π/ω for the spin vector to point in its original direction.

We now describe an experiment to detect the minus sign in Equation (5.41). An almost monoenergetic beam of neutrons is split in two, sent along two different paths, A and B, and then recombined. Path A goes through a magnetic field free region. However, path B enters a small region where a static magnetic field is present. As a result, a neutron state ket going along path B acquires a phase-shift exp(∓i ω T/2) (the ∓ signs correspond to s_z = ±1/2 states). Here, T is the time spent in the magnetic field, and ω is the spin precession frequency

ω = g_n e B/m_p. (5.54)

This frequency is defined in an analogous manner to Equation (5.48). The gyromagnetic ratio for a neutron is found experimentally to be g_n = −1.91. (The magnetic moment of a neutron is entirely a quantum field effect.) When neutrons from path A and path B meet they undergo interference. We expect the observed neutron intensity in the interference region to exhibit a cos(±ω T/2 + δ) variation, where δ is the phase difference between paths A and B in the absence of a magnetic field.
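The precession formulae (5.50)–(5.52) can be verified directly against the state-ket evolution (5.53). The sketch below (Python with NumPy, ℏ = 1, an arbitrary illustrative value of ω; not part of the text) starts with the spin pointing along +x and compares the exact expectation value with the prediction of Equation (5.50):

```python
import numpy as np

hbar, omega = 1.0, 2.0
Sx = (hbar/2)*np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar/2)*np.array([[0, -1j], [1j, 0]], dtype=complex)

# spin initially along +x: the normalized eigenket of Sx with eigenvalue +hbar/2
psi0 = np.array([1.0, 1.0], dtype=complex)/np.sqrt(2)

def psi(t):
    """Eq. (5.53): the |+>, |-> amplitudes acquire phases exp(-/+ i omega t/2)."""
    return np.array([np.exp(-1j*omega*t/2)*psi0[0],
                     np.exp(+1j*omega*t/2)*psi0[1]])

def expect(op, state):
    """Expectation value <state|op|state> (real for Hermitian op)."""
    return float((state.conj() @ op @ state).real)

t = 0.7
sx_t = expect(Sx, psi(t))
# Eq. (5.50): <Sx>_t = <Sx>_0 cos(wt) - <Sy>_0 sin(wt)
sx_pred = expect(Sx, psi0)*np.cos(omega*t) - expect(Sy, psi0)*np.sin(omega*t)
```

As the text notes, the expectation values return to their initial values after t = 2π/ω, even though the ket itself only returns after t = 4π/ω.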
In experiments, the time of flight T through the magnetic field region is kept constant, while the field-strength B is varied. It follows that the change in magnetic field required to produce successive maxima is

ΔB = 4π ℏ/(e g_n λ̄ l), (5.55)

where l is the path-length through the magnetic field region, and λ̄ is the de Broglie wavelength over 2π of the neutrons. The above prediction has been verified experimentally to within a fraction of a percent. This prediction depends crucially on the fact that it takes a 4π rotation to return a state ket to its original state. If it only took a 2π rotation then ΔB would be half of the value given above, which does not agree with the experimental data.

5.7 Pauli Two-Component Formalism

We have seen, in Section 4.4, that the eigenstates of orbital angular momentum can be conveniently represented as spherical harmonics. In this representation, the orbital angular momentum operators take the form of differential operators involving only angular coordinates. It is conventional to represent the eigenstates of spin angular momentum as column (or row) matrices. In this representation, the spin angular momentum operators take the form of matrices. The matrix representation of a spin one-half system was introduced by Pauli in 1926.

Recall, from Section 5.4, that a general spin ket can be expressed as a linear combination of the two eigenkets of S_z belonging to the eigenvalues ±ℏ/2. These are denoted |±⟩. Let us represent these basis eigenkets as column vectors:

|+⟩ → (1, 0)ᵀ ≡ χ₊, (5.56)
|−⟩ → (0, 1)ᵀ ≡ χ₋. (5.57)

The corresponding eigenbras are represented as row vectors:

⟨+| → (1, 0) ≡ χ₊†, (5.58)
⟨−| → (0, 1) ≡ χ₋†. (5.59)

In this scheme, a general ket takes the form

|A⟩ = ⟨+|A⟩ |+⟩ + ⟨−|A⟩ |−⟩ → (⟨+|A⟩, ⟨−|A⟩)ᵀ, (5.60)

and a general bra becomes

⟨A| = ⟨A|+⟩ ⟨+| + ⟨A|−⟩ ⟨−| → (⟨A|+⟩, ⟨A|−⟩). (5.61)

The column vector (5.60) is called a two-component spinor, and can be written

χ ≡ (⟨+|A⟩, ⟨−|A⟩)ᵀ = (c₊, c₋)ᵀ = c₊ χ₊ + c₋
χ₋, (5.62)

where the c± are complex numbers. The row vector (5.61) becomes

χ† ≡ (⟨A|+⟩, ⟨A|−⟩) = (c₊*, c₋*) = c₊* χ₊† + c₋* χ₋†. (5.63)

Consider the ket obtained by the action of a spin operator on ket A:

|A′⟩ = S_k |A⟩. (5.64)

This ket is represented as

|A′⟩ → (⟨+|A′⟩, ⟨−|A′⟩)ᵀ ≡ χ′. (5.65)

However,

⟨+|A′⟩ = ⟨+| S_k |+⟩ ⟨+|A⟩ + ⟨+| S_k |−⟩ ⟨−|A⟩, (5.66)
⟨−|A′⟩ = ⟨−| S_k |+⟩ ⟨+|A⟩ + ⟨−| S_k |−⟩ ⟨−|A⟩, (5.67)

or, in matrix form,

( ⟨+|A′⟩ ; ⟨−|A′⟩ ) = ( ⟨+|S_k|+⟩  ⟨+|S_k|−⟩ ; ⟨−|S_k|+⟩  ⟨−|S_k|−⟩ ) ( ⟨+|A⟩ ; ⟨−|A⟩ ). (5.68)

It follows that we can represent the operator/ket relation (5.64) as the matrix relation

χ′ = (ℏ/2) σ_k χ, (5.69)

where the σ_k are the matrices of the ⟨±| S_k |±⟩ values divided by ℏ/2. These matrices, which are called the Pauli matrices, can easily be evaluated using the explicit forms for the spin operators given in Equations (5.11)–(5.13). We find that

σ₁ = ( 0  1 ; 1  0 ), (5.70)
σ₂ = ( 0  −i ; i  0 ), (5.71)
σ₃ = ( 1  0 ; 0  −1 ). (5.72)

Here, 1, 2, and 3 refer to x, y, and z, respectively. Note that, in this scheme, we are effectively representing the spin operators in terms of the Pauli matrices:

S_k → (ℏ/2) σ_k. (5.73)

The expectation value of S_k can be written in terms of spinors and the Pauli matrices:

⟨S_k⟩ = ⟨A| S_k |A⟩ = ∑_{±,±′} ⟨A|±⟩ ⟨±| S_k |±′⟩ ⟨±′|A⟩ = (ℏ/2) χ† σ_k χ. (5.74)

The fundamental commutation relation for angular momentum, Equation (5.1), can be combined with (5.73) to give the following commutation relation for the Pauli matrices:

σ × σ = 2 i σ. (5.75)

It is easily seen that the matrices (5.70)–(5.72) actually satisfy these relations (i.e., σ₁ σ₂ − σ₂ σ₁ = 2 i σ₃, plus all cyclic permutations). It is also easily seen that the Pauli matrices satisfy the anticommutation relations

{σ_i, σ_j} = 2 δ_{i j}. (5.76)

Here, {a, b} ≡ a b + b a.

Let us examine how the Pauli scheme can be extended to take into account the position of a spin one-half particle.
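The algebraic relations (5.75)–(5.76) are readily checked by brute force. A short sketch (Python with NumPy; illustrative, not part of the text) writes out the matrices (5.70)–(5.72) and tests every commutator and anticommutator:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]
I2 = np.eye(2, dtype=complex)

def comm(a, b): return a @ b - b @ a      # commutator [a, b]
def anti(a, b): return a @ b + b @ a      # anticommutator {a, b}

# Eq. (5.75): sigma x sigma = 2 i sigma, i.e. [s1, s2] = 2 i s3, plus cyclic permutations
comm_ok = all(np.allclose(comm(sigma[i], sigma[(i+1) % 3]), 2j*sigma[(i+2) % 3])
              for i in range(3))
# Eq. (5.76): {s_i, s_j} = 2 delta_ij
anti_ok = all(np.allclose(anti(sigma[i], sigma[j]), 2*(i == j)*I2)
              for i in range(3) for j in range(3))
```

In particular, the anticommutation relations imply that each Pauli matrix squares to the unit matrix, a fact used again in Equations (5.96)–(5.98).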
Recall, from Section 5.3, that we can represent a general basis ket as the product of basis kets in position space and spin space:

|x′, y′, z′, ±⟩ = |x′, y′, z′⟩ |±⟩. (5.77)

The ket corresponding to state A is denoted ||A⟩, and resides in the product space of the position and spin ket spaces. State A is completely specified by the two wavefunctions

ψ₊(x′, y′, z′) = ⟨x′, y′, z′| ⟨+||A⟩, (5.78)
ψ₋(x′, y′, z′) = ⟨x′, y′, z′| ⟨−||A⟩. (5.79)

Consider the operator relation

||A′⟩ = S_k ||A⟩. (5.80)

It is easily seen that

⟨x′, y′, z′| ⟨+||A′⟩ = ⟨+| S_k |+⟩ ⟨x′, y′, z′| ⟨+||A⟩ + ⟨+| S_k |−⟩ ⟨x′, y′, z′| ⟨−||A⟩, (5.81)
⟨x′, y′, z′| ⟨−||A′⟩ = ⟨−| S_k |+⟩ ⟨x′, y′, z′| ⟨+||A⟩ + ⟨−| S_k |−⟩ ⟨x′, y′, z′| ⟨−||A⟩, (5.82)

where use has been made of the fact that the spin operator S_k commutes with the eigenbras ⟨x′, y′, z′|. It is fairly obvious that we can represent the operator relation (5.80) as a matrix relation if we generalize our definition of a spinor by writing

||A⟩ → (ψ₊(x′), ψ₋(x′))ᵀ ≡ χ, (5.83)

and so on. The components of a spinor are now wavefunctions, instead of complex numbers. In this scheme, the operator equation (5.80) becomes simply

χ′ = (ℏ/2) σ_k χ. (5.84)

Consider the operator relation

||A′⟩ = p_k ||A⟩. (5.85)

In the Schrödinger representation, we have

⟨x′, y′, z′| ⟨+||A′⟩ = ⟨x′, y′, z′| p_k ⟨+||A⟩ = −i ℏ ∂/∂x′_k ⟨x′, y′, z′| ⟨+||A⟩, (5.86)
⟨x′, y′, z′| ⟨−||A′⟩ = ⟨x′, y′, z′| p_k ⟨−||A⟩ = −i ℏ ∂/∂x′_k ⟨x′, y′, z′| ⟨−||A⟩, (5.87)

where use has been made of Equation (2.78). The above equations reduce to

( ψ′₊(x′) ; ψ′₋(x′) ) = ( −i ℏ ∂ψ₊(x′)/∂x′_k ; −i ℏ ∂ψ₋(x′)/∂x′_k ). (5.88)

Thus, the operator equation (5.85) can be written

χ′ = p_k χ, (5.89)

where

p_k → −i ℏ (∂/∂x′_k) 1. (5.90)

Here, 1 is the 2 × 2 unit matrix. In fact, any position operator (e.g., p_k or L_k) is represented in the Pauli scheme as some differential operator of the position eigenvalues multiplied by the 2 × 2 unit matrix.

What about combinations of position and spin operators?
The most commonly occurring combination is a dot product: e.g., S · L = (ℏ/2) σ · L. Consider the hybrid operator σ · a, where a ≡ (a_x, a_y, a_z) is some vector position operator. This quantity is represented as a 2 × 2 matrix:

σ · a ≡ ∑_k a_k σ_k = ( a₃  a₁ − i a₂ ; a₁ + i a₂  −a₃ ). (5.91)

Since, in the Schrödinger representation, a general position operator takes the form of a differential operator in x′, y′, or z′, it is clear that the above quantity must be regarded as a matrix differential operator that acts on spinors of the general form (5.83). The important identity

(σ · a) (σ · b) = a · b + i σ · (a × b) (5.92)

follows from the commutation and anticommutation relations (5.75) and (5.76). Thus,

∑_j σ_j a_j ∑_k σ_k b_k = ∑_{j,k} [ (1/2) {σ_j, σ_k} + (1/2) [σ_j, σ_k] ] a_j b_k = ∑_{j,k} ( δ_{j k} + i ε_{j k l} σ_l ) a_j b_k = a · b + i σ · (a × b). (5.93)

A general rotation operator in spin space is written

T(Δφ) = exp(−i S · n Δφ/ℏ), (5.94)

by analogy with Equation (5.24), where n is a unit vector pointing along the axis of rotation, and Δφ is the angle of rotation. Here, n can be regarded as a trivial position operator. The rotation operator is represented

exp(−i S · n Δφ/ℏ) → exp(−i σ · n Δφ/2) (5.95)

in the Pauli scheme. The term on the right-hand side of the above expression is the exponential of a matrix. This can easily be evaluated using the Taylor series for an exponential, plus the rules

(σ · n)^k = 1 for k even, (5.96)
(σ · n)^k = σ · n for k odd. (5.97)

These rules follow trivially from the identity (5.92). Thus, we can write

exp(−i σ · n Δφ/2) = [ 1 − (σ · n)² (Δφ/2)²/2! + (σ · n)⁴ (Δφ/2)⁴/4! − ⋯ ] − i [ (σ · n) (Δφ/2) − (σ · n)³ (Δφ/2)³/3! + ⋯ ] = cos(Δφ/2) 1 − i sin(Δφ/2) σ · n. (5.98)

The explicit 2 × 2 form of this matrix is
(5.99)

Rotation matrices act on spinors in much the same manner as the corresponding rotation operators act on state kets. Thus,

χ′ = exp(−i σ · n Δφ/2) χ, (5.100)

where χ′ denotes the spinor obtained by rotating the spinor χ through an angle Δφ about the n-axis. The Pauli matrices remain unchanged under rotations. However, the quantity χ† σk χ is proportional to the expectation value of Sk [see Equation (5.74)], so we would expect it to transform like a vector under rotation (see Section 5.4). In fact, we require

(χ† σk χ)′ ≡ (χ†)′ σk χ′ = Σl Rkl (χ† σl χ), (5.101)

where the Rkl are the elements of a conventional rotation matrix. This is easily demonstrated, because

exp(i σ3 Δφ/2) σ1 exp(−i σ3 Δφ/2) = σ1 cos Δφ − σ2 sin Δφ, (5.102)

plus all cyclic permutations. The above expression is the 2×2 matrix analogue of (see Section 5.4)

exp(i Sz Δφ/ħ) Sx exp(−i Sz Δφ/ħ) = Sx cos Δφ − Sy sin Δφ. (5.103)

The previous two formulae can both be validated using the Baker-Hausdorff lemma, (5.31), which holds for Hermitian matrices, in addition to Hermitian operators.

5.8 Spin Greater Than One-Half Systems

In the absence of spin, the Hamiltonian can be written as some function of the position and momentum operators. Using the Schrödinger representation, in which p → −iħ∇, the energy eigenvalue problem,

H |E⟩ = E |E⟩, (5.104)

can be transformed into a partial differential equation for the wavefunction ψ(x′) ≡ ⟨x′|E⟩. This function specifies the probability density for observing the particle at a given position, x′. In general, we find

H ψ = E ψ, (5.105)

where H is now a partial differential operator. The boundary conditions (for a bound state) are obtained from the normalization constraint

∫ d³x′ |ψ|² = 1. (5.106)

This is all very familiar. However, we now know how to generalize this scheme to deal with a spin one-half particle. Instead of representing the state of the particle by a single wavefunction, we use two wavefunctions.
The first, ψ+(x′), specifies the probability density of observing the particle at position x′ with spin angular momentum +ħ/2 in the z-direction. The second, ψ−(x′), specifies the probability density of observing the particle at position x′ with spin angular momentum −ħ/2 in the z-direction. In the Pauli scheme, these wavefunctions are combined into a spinor, χ, which is simply the column vector of ψ+ and ψ−. In general, the Hamiltonian is a function of the position, momentum, and spin operators. Adopting the Schrödinger representation and the Pauli scheme, the energy eigenvalue problem reduces to

H χ = E χ, (5.107)

where χ is a spinor (i.e., a 2×1 matrix of wavefunctions) and H is a 2×2 matrix partial differential operator [see Equation (5.91)]. The above spinor equation can always be written out explicitly as two coupled partial differential equations for ψ+ and ψ−.

Suppose that the Hamiltonian has no dependence on the spin operators. In this case, the Hamiltonian is represented as a diagonal 2×2 matrix partial differential operator in the Schrödinger/Pauli scheme [see Equation (5.90)]. In other words, the partial differential equation for ψ+ decouples from that for ψ−. In fact, both equations have the same form, so there is really only one differential equation. In this situation, the most general solution to Equation (5.107) can be written

χ = ψ(x′) ( c+ )
           ( c− ). (5.108)

Here, ψ(x′) is determined by the solution of the differential equation, and the c± are arbitrary complex numbers. The physical significance of the above expression is clear. The Hamiltonian determines the relative probabilities of finding the particle at various different positions, but the direction of its spin angular momentum remains undetermined.

Suppose that the Hamiltonian depends only on the spin operators.
In this case, the Hamiltonian is represented as a 2×2 matrix of complex numbers in the Schrödinger/Pauli scheme [see Equation (5.73)], and the spinor eigenvalue equation (5.107) reduces to a straightforward matrix eigenvalue problem. The most general solution can again be written

χ = ψ(x′) ( c+ )
           ( c− ). (5.109)

Here, the ratio c+/c− is determined by the matrix eigenvalue problem, and the wavefunction ψ(x′) is arbitrary. Clearly, the Hamiltonian determines the direction of the particle's spin angular momentum, but leaves its position undetermined.

In general, of course, the Hamiltonian is a function of both position and spin operators. In this case, it is not possible to decompose the spinor as in Equations (5.108) and (5.109). In other words, a general Hamiltonian causes the direction of the particle's spin angular momentum to vary with position in some specified manner. This can only be represented by a spinor involving different wavefunctions, ψ+ and ψ−.

But what happens if we have a spin one or a spin three-halves particle? It turns out that we can generalize the Pauli two-component scheme in a fairly straightforward manner. Consider a spin-s particle: i.e., a particle for which the eigenvalue of S² is s (s + 1) ħ². Here, s is either an integer or a half-integer. The eigenvalues of Sz are written sz ħ, where sz is allowed to take the values s, s − 1, …, −s + 1, −s. In fact, there are 2s + 1 distinct allowed values of sz. Not surprisingly, we can represent the state of the particle by 2s + 1 different wavefunctions, denoted ψ_sz(x′). Here, ψ_sz(x′) specifies the probability density for observing the particle at position x′ with spin angular momentum sz ħ in the z-direction. More exactly,

ψ_sz(x′) = ⟨x′| ⟨s, sz||A⟩, (5.110)

where ||A⟩ denotes a state ket in the product space of the position and spin operators. The state of the particle can be represented more succinctly by a spinor, χ, which is simply the (2s + 1)-component column vector of the ψ_sz(x′).
Thus, a spin one-half particle is represented by a two-component spinor, a spin one particle by a three-component spinor, a spin three-halves particle by a four-component spinor, and so on.

In this extended Schrödinger/Pauli scheme, position space operators take the form of diagonal (2s + 1) × (2s + 1) matrix differential operators. Thus, we can represent the momentum operators as [see Equation (5.90)]

pk → −iħ ∂/∂x′k 1, (5.111)

where 1 is the (2s + 1) × (2s + 1) unit matrix. We represent the spin operators as

Sk → s ħ σk, (5.112)

where the (2s + 1) × (2s + 1) extended Pauli matrix σk has elements

(σk)jl = ⟨s, j| Sk |s, l⟩ / (s ħ). (5.113)

Here, j, l are integers or half-integers lying in the range −s to +s. But how can we evaluate the brackets ⟨s, j| Sk |s, l⟩ and, thereby, construct the extended Pauli matrices? In fact, it is trivial to construct the σz matrix. By definition,

Sz |s, j⟩ = j ħ |s, j⟩. (5.114)

Hence,

(σ3)jl = ⟨s, j| Sz |s, l⟩ / (s ħ) = (j/s) δjl, (5.115)

where use has been made of the orthonormality property of the |s, j⟩. Thus, σz is the suitably normalized diagonal matrix of the eigenvalues of Sz. The matrix elements of σx and σy are most easily obtained by considering the shift operators,

S± = Sx ± i Sy. (5.116)

We know, from Equations (4.55)–(4.56), that

S+ |s, j⟩ = [s (s + 1) − j (j + 1)]^(1/2) ħ |s, j + 1⟩, (5.117)
S− |s, j⟩ = [s (s + 1) − j (j − 1)]^(1/2) ħ |s, j − 1⟩. (5.118)

It follows from Equations (5.113) and (5.116)–(5.118) that

(σ1)jl = [s (s + 1) − j (j − 1)]^(1/2) / (2s) δj l+1 + [s (s + 1) − j (j + 1)]^(1/2) / (2s) δj l−1, (5.119)
(σ2)jl = [s (s + 1) − j (j − 1)]^(1/2) / (2is) δj l+1 − [s (s + 1) − j (j + 1)]^(1/2) / (2is) δj l−1. (5.120)

According to Equations (5.115) and (5.119)–(5.120), the Pauli matrices for a spin one-half (s = 1/2) particle are

σ1 = ( 0  1 )
     ( 1  0 ), (5.121)

σ2 = ( 0  −i )
     ( i   0 ), (5.122)

σ3 = ( 1   0 )
     ( 0  −1 ), (5.123)

as we have seen previously. For a spin one (s = 1) particle, we find that

σ1 = (1/√2) ( 0  1  0 )
            ( 1  0  1 )
            ( 0  1  0 ), (5.124)

σ2 = (1/√2) ( 0  −i   0 )
            ( i   0  −i )
            ( 0   i   0 ), (5.125)

σ3 = ( 1  0   0 )
     ( 0  0   0 )
     ( 0  0  −1 ). (5.126)

In fact, we can now construct the Pauli matrices for a particle of any spin. This means that we can convert the general energy eigenvalue problem for a spin-s particle, where the Hamiltonian is some function of position and spin operators, into 2s + 1 coupled partial differential equations involving the 2s + 1 wavefunctions ψ_sz(x′). Unfortunately, such a system of equations is generally too complicated to solve exactly.

Exercises

5.1 Demonstrate that the operators defined in Equations (5.11)–(5.13) are Hermitian, and satisfy the commutation relations (5.1).

5.2 Prove the Baker-Hausdorff lemma, (5.31).

5.3 Find the Pauli representations of the normalized eigenstates of Sx and Sy for a spin-1/2 particle.

5.4 Suppose that a spin-1/2 particle has a spin vector that lies in the x-z plane, making an angle θ with the z-axis. Demonstrate that a measurement of Sz yields ħ/2 with probability cos²(θ/2), and −ħ/2 with probability sin²(θ/2).

5.5 An electron is in the spin-state

χ = A ( 1 − 2i )
      (   2    )

in the Pauli representation. Determine the constant A by normalizing χ. If a measurement of Sz is made, what values will be obtained, and with what probabilities? What is the expectation value of Sz? Repeat the above calculations for Sx and Sy.

5.6 Consider a spin-1/2 system represented by the normalized spinor

χ = ( cos α          )
    ( sin α exp(iβ)  )

in the Pauli representation, where α and β are real. What is the probability that a measurement of Sy yields −ħ/2?

5.7 An electron is at rest in an oscillating magnetic field

B = B0 cos(ωt) ez,

where B0 and ω are real positive constants.

(a) Find the Hamiltonian of the system.
(b) If the electron starts in the spin-up state with respect to the x-axis, determine the spinor χ(t) that represents the state of the system in the Pauli representation at all subsequent times.

(c) Find the probability that a measurement of Sx yields the result −ħ/2 as a function of time.

(d) What is the minimum value of B0 required to force a complete flip in Sx?

6 Addition of Angular Momentum

6.1 Introduction

Consider a hydrogen atom whose orbiting electron is in an l = 1 state. The electron consequently possesses orbital angular momentum of magnitude ħ, and spin angular momentum of magnitude ħ/2. So, what is the electron's total angular momentum?

6.2 Analysis

In order to answer this question, we need to learn how to add angular momentum operators. Consider the most general case. Suppose that we have two sets of angular momentum operators, J1 and J2. By definition, these operators are Hermitian, and obey the fundamental commutation relations

J1 × J1 = iħ J1, (6.1)
J2 × J2 = iħ J2. (6.2)

Let us assume that the two groups of operators correspond to different degrees of freedom of the system, so that

[J1i, J2j] = 0, (6.3)

where i, j stand for either x, y, or z. For instance, J1 could be an orbital angular momentum operator, and J2 a spin angular momentum operator. Alternatively, J1 and J2 could be the orbital angular momentum operators of two different particles in a multi-particle system. We know, from the general properties of angular momentum, that the eigenvalues of J1² and J2² can be written j1 (j1 + 1) ħ² and j2 (j2 + 1) ħ², respectively, where j1 and j2 are either integers or half-integers. We also know that the eigenvalues of J1z and J2z take the form m1 ħ and m2 ħ, respectively, where m1 and m2 are numbers lying in the ranges j1, j1 − 1, …, −j1 + 1, −j1 and j2, j2 − 1, …, −j2 + 1, −j2, respectively.

Let us define the total angular momentum operator

J = J1 + J2.
(6.4)

Now, J is an Hermitian operator, because it is the sum of Hermitian operators. Moreover, according to Equations (4.11) and (4.14), J satisfies the fundamental commutation relation

J × J = iħ J. (6.5)

Thus, J possesses all of the expected properties of an angular momentum operator. It follows that the eigenvalue of J² can be written j (j + 1) ħ², where j is an integer or a half-integer. Moreover, the eigenvalue of Jz takes the form m ħ, where m lies in the range j, j − 1, …, −j + 1, −j. At this stage, however, we do not know the relationship between the quantum numbers of the total angular momentum, j and m, and those of the individual angular momenta, j1, j2, m1, and m2.

Now,

J² = J1² + J2² + 2 J1 · J2. (6.6)

Furthermore, we know that

[J1², J1i] = 0, (6.7)
[J2², J2i] = 0, (6.8)

and also that all of the J1i, J1² operators commute with the J2i, J2² operators. It follows from Equation (6.6) that

[J², J1²] = [J², J2²] = 0. (6.9)

This implies that the quantum numbers j1, j2, and j can all be measured simultaneously. In other words, we can know the magnitude of the total angular momentum together with the magnitudes of the component angular momenta. However, it is apparent from Equation (6.6) that

[J², J1z] ≠ 0, (6.10)
[J², J2z] ≠ 0. (6.11)

This suggests that it is not possible to measure the quantum numbers m1 and m2 simultaneously with the quantum number j. Thus, we cannot determine the projections of the individual angular momenta along the z-axis at the same time as the magnitude of the total angular momentum.

It is clear, from the preceding discussion, that we can form two alternate groups of mutually commuting operators. The first group is J1², J2², J1z, and J2z. The second group is J1², J2², J², and Jz. These two groups of operators are incompatible with one another. We can define simultaneous eigenkets of each operator group.
The simultaneous eigenkets of J1², J2², J1z, and J2z are denoted |j1, j2; m1, m2⟩, where

J1² |j1, j2; m1, m2⟩ = j1 (j1 + 1) ħ² |j1, j2; m1, m2⟩, (6.12)
J2² |j1, j2; m1, m2⟩ = j2 (j2 + 1) ħ² |j1, j2; m1, m2⟩, (6.13)
J1z |j1, j2; m1, m2⟩ = m1 ħ |j1, j2; m1, m2⟩, (6.14)
J2z |j1, j2; m1, m2⟩ = m2 ħ |j1, j2; m1, m2⟩. (6.15)

The simultaneous eigenkets of J1², J2², J², and Jz are denoted |j1, j2; j, m⟩, where

J1² |j1, j2; j, m⟩ = j1 (j1 + 1) ħ² |j1, j2; j, m⟩, (6.16)
J2² |j1, j2; j, m⟩ = j2 (j2 + 1) ħ² |j1, j2; j, m⟩, (6.17)
J² |j1, j2; j, m⟩ = j (j + 1) ħ² |j1, j2; j, m⟩, (6.18)
Jz |j1, j2; j, m⟩ = m ħ |j1, j2; j, m⟩. (6.19)

Each set of eigenkets is complete, mutually orthogonal (for eigenkets corresponding to different sets of eigenvalues), and normalized to unit norm. Because the operators J1² and J2² are common to both operator groups, we can assume that the quantum numbers j1 and j2 are known. In other words, we can always determine the magnitudes of the individual angular momenta. In addition, we can know either the quantum numbers m1 and m2, or the quantum numbers j and m, but we cannot know both pairs of quantum numbers at the same time. We can write a conventional completeness relation for both sets of eigenkets:

Σ_m1 Σ_m2 |j1, j2; m1, m2⟩ ⟨j1, j2; m1, m2| = 1, (6.20)
Σ_j Σ_m |j1, j2; j, m⟩ ⟨j1, j2; j, m| = 1, (6.21)

where the right-hand sides denote the identity operator in the ket space corresponding to states of given j1 and j2. The summations are over all allowed values of m1, m2, j, and m.

As we have seen, the operator group J1², J2², J², and Jz is incompatible with the group J1², J2², J1z, and J2z. This means that if the system is in a simultaneous eigenstate of the former group then, in general, it is not in an eigenstate of the latter. In other words, if the quantum numbers j1, j2, j, and m are known with certainty, then a measurement of the quantum numbers m1 and m2 will give a range of possible values.
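The compatibility statements above are easy to confirm numerically. The following sketch (not from the text; units with ħ = 1, and an illustrative choice j1 = 1, j2 = 1/2) builds the angular momentum matrices from the ladder-operator elements of Equations (4.55)–(4.56), embeds them in the product space with Kronecker products, and checks that J² commutes with J1² but not with J1z.

```python
import numpy as np

def spin_matrices(j):
    """Return (Jx, Jy, Jz) for angular momentum j, in units of hbar."""
    m = np.arange(j, -j - 1, -1)                   # m = j, j-1, ..., -j
    # <j, m+1| J+ |j, m> = sqrt(j(j+1) - m(m+1)), placed on the superdiagonal.
    jplus = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    Jx = (jplus + jplus.T) / 2
    Jy = (jplus - jplus.T) / 2j
    Jz = np.diag(m)
    return Jx, Jy, Jz

def comm(a, b):
    return a @ b - b @ a

# Two degrees of freedom, e.g. orbital j1 = 1 and spin j2 = 1/2, acting on
# the product space via Kronecker products (so [J1i, J2j] = 0 automatically).
j1, j2 = 1.0, 0.5
I1, I2 = np.eye(int(2 * j1 + 1)), np.eye(int(2 * j2 + 1))
J1 = [np.kron(o, I2) for o in spin_matrices(j1)]
J2 = [np.kron(I1, o) for o in spin_matrices(j2)]
J = [a + b for a, b in zip(J1, J2)]

J1sq = sum(o @ o for o in J1)
Jsq = sum(o @ o for o in J)

print(np.allclose(comm(Jsq, J1sq), 0))    # Equation (6.9):  [J^2, J1^2] = 0
print(np.allclose(comm(Jsq, J1[2]), 0))   # Equation (6.10): [J^2, J1z] != 0, so False
```

The first commutator vanishes identically (J1² is a multiple of the unit matrix on its factor of the product space), while the second does not, which is exactly the incompatibility of the two operator groups.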
We can use the completeness relation (6.20) to write

|j1, j2; j, m⟩ = Σ_m1 Σ_m2 ⟨j1, j2; m1, m2|j1, j2; j, m⟩ |j1, j2; m1, m2⟩. (6.22)

Thus, we can write the eigenkets of the second group of operators as a weighted sum of the eigenkets of the first group. The weights, ⟨j1, j2; m1, m2|j1, j2; j, m⟩, are called the Clebsch-Gordon coefficients. If the system is in a state in which a measurement of J1², J2², J², and Jz is bound to give the results j1 (j1 + 1) ħ², j2 (j2 + 1) ħ², j (j + 1) ħ², and m ħ, respectively, then a measurement of J1z and J2z will give the results m1 ħ and m2 ħ, respectively, with probability |⟨j1, j2; m1, m2|j1, j2; j, m⟩|².

The Clebsch-Gordon coefficients possess a number of very important properties. First, the coefficients are zero unless

m = m1 + m2. (6.23)

To prove this, we note that

(Jz − J1z − J2z) |j1, j2; j, m⟩ = 0. (6.24)

Forming the inner product with ⟨j1, j2; m1, m2|, we obtain

(m − m1 − m2) ⟨j1, j2; m1, m2|j1, j2; j, m⟩ = 0, (6.25)

which proves the assertion. Thus, the z-components of different angular momenta add algebraically. So, an electron in an l = 1 state, with orbital angular momentum ħ, and spin angular momentum ħ/2, projected along the z-axis, constitutes a state whose total angular momentum projected along the z-axis is 3ħ/2. What is uncertain is the magnitude of the total angular momentum.

Second, the coefficients vanish unless

|j1 − j2| ≤ j ≤ j1 + j2. (6.26)

We can assume, without loss of generality, that j1 ≥ j2. We know, from Equation (6.23), that for given j1 and j2 the largest possible value of m is j1 + j2 (because j1 is the largest possible value of m1, etc.). This implies that the largest possible value of j is j1 + j2 (since, by definition, the largest value of m is equal to j). Now, there are (2 j1 + 1) allowable values of m1 and (2 j2 + 1) allowable values of m2.
Thus, there are (2 j1 + 1) (2 j2 + 1) independent eigenkets, |j1, j2; m1, m2⟩, needed to span the ket space corresponding to fixed j1 and j2. Because the eigenkets |j1, j2; j, m⟩ span the same space, they must also form a set of (2 j1 + 1) (2 j2 + 1) independent kets. In other words, there can only be (2 j1 + 1) (2 j2 + 1) distinct allowed values of the quantum numbers j and m. For each allowed value of j, there are 2 j + 1 allowed values of m. We have already seen that the maximum allowed value of j is j1 + j2. It is easily seen that if the minimum allowed value of j is j1 − j2 then the total number of allowed values of j and m is (2 j1 + 1) (2 j2 + 1): i.e.,

Σ_{j = j1 − j2}^{j1 + j2} (2 j + 1) ≡ (2 j1 + 1) (2 j2 + 1). (6.27)

This proves our assertion.

Third, the sum of the modulus squared of all of the Clebsch-Gordon coefficients is unity: i.e.,

Σ_m1 Σ_m2 |⟨j1, j2; m1, m2|j1, j2; j, m⟩|² = 1. (6.28)

This assertion is proved as follows:

⟨j1, j2; j, m|j1, j2; j, m⟩ = Σ_m1 Σ_m2 ⟨j1, j2; j, m|j1, j2; m1, m2⟩ ⟨j1, j2; m1, m2|j1, j2; j, m⟩
                           = Σ_m1 Σ_m2 |⟨j1, j2; m1, m2|j1, j2; j, m⟩|² = 1, (6.29)

where use has been made of the completeness relation (6.20).

Finally, the Clebsch-Gordon coefficients obey two recursion relations. To obtain these relations, we start from

J± |j1, j2; j, m⟩ = (J1± + J2±) Σ_m1′ Σ_m2′ ⟨j1, j2; m1′, m2′|j1, j2; j, m⟩ |j1, j2; m1′, m2′⟩. (6.30)

Making use of the well-known properties of the shift operators, which are specified by Equations (4.55)–(4.56), we obtain

√[j (j + 1) − m (m ± 1)] |j1, j2; j, m ± 1⟩
  = Σ_m1′ Σ_m2′ { √[j1 (j1 + 1) − m1′ (m1′ ± 1)] |j1, j2; m1′ ± 1, m2′⟩
                + √[j2 (j2 + 1) − m2′ (m2′ ± 1)] |j1, j2; m1′, m2′ ± 1⟩ } ⟨j1, j2; m1′, m2′|j1, j2; j, m⟩. (6.31)

Taking the inner product with ⟨j1, j2; m1, m2|, and making use of the orthonormality property of the basis eigenkets, we obtain the desired recursion relations:

√[j (j + 1) − m (m ± 1)] ⟨j1, j2; m1, m2|j1, j2; j, m ± 1⟩
  = √[j1 (j1 + 1) − m1 (m1 ∓ 1)] ⟨j1, j2; m1 ∓ 1, m2|j1, j2; j, m⟩
  + √[j2 (j2 + 1) − m2 (m2 ∓ 1)] ⟨j1, j2; m1, m2 ∓ 1|j1, j2; j, m⟩. (6.32)

It is clear, from the absence of complex coupling coefficients in the above relations, that we can always choose the Clebsch-Gordon coefficients to be real numbers. This is convenient, because it ensures that the inverse Clebsch-Gordon coefficients, ⟨j1, j2; j, m|j1, j2; m1, m2⟩, are identical to the Clebsch-Gordon coefficients. In other words,

⟨j1, j2; j, m|j1, j2; m1, m2⟩ = ⟨j1, j2; m1, m2|j1, j2; j, m⟩. (6.33)

The inverse Clebsch-Gordon coefficients are the weights in the expansion of the |j1, j2; m1, m2⟩ in terms of the |j1, j2; j, m⟩:

|j1, j2; m1, m2⟩ = Σ_j Σ_m ⟨j1, j2; j, m|j1, j2; m1, m2⟩ |j1, j2; j, m⟩. (6.34)

It turns out that the recursion relations (6.32), together with the normalization condition (6.28), are sufficient to determine the Clebsch-Gordon coefficients completely, to within an arbitrary sign (multiplied into all of the coefficients). This sign is fixed by convention. The easiest way of demonstrating this assertion is by considering a specific example.

Let us add the angular momenta of two spin one-half systems: e.g., two electrons at rest. So, j1 = j2 = 1/2. We know, from general principles, that |m1| ≤ 1/2 and |m2| ≤ 1/2. We also know, from Equation (6.26), that 0 ≤ j ≤ 1, where the allowed values of j differ by integer amounts. It follows that either j = 0 or j = 1. Thus, two spin one-half systems can be combined to form either a spin zero system or a spin one system. It is helpful to arrange all of the possibly non-zero Clebsch-Gordon coefficients in a table:

  m1     m2   |  j = 1   j = 1   j = 1   j = 0
              |  m = 1   m = 0   m = −1  m = 0
  1/2    1/2  |    ?       ?       ?       ?
  1/2   −1/2  |    ?       ?       ?       ?
 −1/2    1/2  |    ?       ?       ?       ?
 −1/2   −1/2  |    ?       ?       ?       ?
 (j1 = 1/2, j2 = 1/2)

The box in this table corresponding to m1 = 1/2, m2 = 1/2, j = 1, m = 1 gives the Clebsch-Gordon coefficient ⟨1/2, 1/2; 1/2, 1/2|1/2, 1/2; 1, 1⟩, or the inverse Clebsch-Gordon coefficient ⟨1/2, 1/2; 1, 1|1/2, 1/2; 1/2, 1/2⟩.
All the boxes contain question marks because, at this stage, we do not know the values of any Clebsch-Gordon coefficients.

A Clebsch-Gordon coefficient is automatically zero unless m1 + m2 = m. In other words, the z-components of angular momentum have to add algebraically. Many of the boxes in the above table correspond to m1 + m2 ≠ m. We immediately conclude that these boxes must contain zeroes: i.e.,

  m1     m2   |  j = 1   j = 1   j = 1   j = 0
              |  m = 1   m = 0   m = −1  m = 0
  1/2    1/2  |    ?       0       0       0
  1/2   −1/2  |    0       ?       0       ?
 −1/2    1/2  |    0       ?       0       ?
 −1/2   −1/2  |    0       0       ?       0
 (j1 = 1/2, j2 = 1/2)

The normalization condition (6.28) implies that the sum of the squares of all the rows and columns of the above table must be unity. There are two rows and two columns that only contain a single non-zero entry. We conclude that these entries must be ±1, but we have no way of determining the signs at present. Thus,

  m1     m2   |  j = 1   j = 1   j = 1   j = 0
              |  m = 1   m = 0   m = −1  m = 0
  1/2    1/2  |   ±1       0       0       0
  1/2   −1/2  |    0       ?       0       ?
 −1/2    1/2  |    0       ?       0       ?
 −1/2   −1/2  |    0       0      ±1       0
 (j1 = 1/2, j2 = 1/2)

Let us evaluate the recursion relation (6.32) for j1 = j2 = 1/2, with j = 1, m = 0, m1 = m2 = ±1/2, taking the upper/lower sign. We find that

⟨1/2, −1/2|1, 0⟩ + ⟨−1/2, 1/2|1, 0⟩ = √2 ⟨1/2, 1/2|1, 1⟩ = ±√2, (6.35)

and

⟨1/2, −1/2|1, 0⟩ + ⟨−1/2, 1/2|1, 0⟩ = √2 ⟨−1/2, −1/2|1, −1⟩ = ±√2. (6.36)

Here, the j1 and j2 labels have been suppressed for ease of notation. We also know that

⟨1/2, −1/2|1, 0⟩² + ⟨−1/2, 1/2|1, 0⟩² = 1, (6.37)

from the normalization condition. The only real solutions to the above set of equations are

√2 ⟨1/2, −1/2|1, 0⟩ = √2 ⟨−1/2, 1/2|1, 0⟩ = ⟨1/2, 1/2|1, 1⟩ = ⟨−1/2, −1/2|1, −1⟩ = ±1. (6.38)

The choice of sign is arbitrary; the conventional choice is a positive sign. Thus, our table now reads

  m1     m2   |  j = 1   j = 1   j = 1   j = 0
              |  m = 1   m = 0   m = −1  m = 0
  1/2    1/2  |    1       0       0       0
  1/2   −1/2  |    0     1/√2      0       ?
 −1/2    1/2  |    0     1/√2      0       ?
 −1/2   −1/2  |    0       0       1       0
 (j1 = 1/2, j2 = 1/2)

We could fill in the remaining unknown entries of our table by using the recursion relation again.
However, an easier method is to observe that the rows and columns of the table must all be mutually orthogonal. That is, the dot product of a row with any other row must be zero, and likewise for the dot product of a column with any other column. This follows because the entries in the table give the expansion coefficients of one of our alternative sets of eigenkets in terms of the other set, and each set of eigenkets contains mutually orthogonal vectors with unit norms. The normalization condition tells us that the dot product of a row or column with itself must be unity. The only way that the dot product of the fourth column with the second column can be zero is if the unknown entries are equal and opposite. The requirement that the dot product of the fourth column with itself be unity tells us that the magnitudes of the unknown entries have to be 1/√2. The unknown entries are determined only up to an arbitrary sign multiplied into both of them. Thus, the final form of our table (with the conventional choice of arbitrary signs) is

  m1     m2   |  j = 1   j = 1   j = 1   j = 0
              |  m = 1   m = 0   m = −1  m = 0
  1/2    1/2  |    1       0       0       0
  1/2   −1/2  |    0     1/√2      0     1/√2
 −1/2    1/2  |    0     1/√2      0    −1/√2
 −1/2   −1/2  |    0       0       1       0
 (j1 = 1/2, j2 = 1/2)

The table can be read in one of two ways. The columns give the expansions of the eigenstates of overall angular momentum in terms of the eigenstates of the individual angular momenta of the two component systems. Thus, the second column tells us that

|1, 0⟩ = (1/√2) (|1/2, −1/2⟩ + |−1/2, 1/2⟩). (6.39)

The ket on the left-hand side is a |j, m⟩ ket, whereas those on the right-hand side are |m1, m2⟩ kets. The rows give the expansions of the eigenstates of individual angular momentum in terms of those of overall angular momentum. Thus, the second row tells us that

|1/2, −1/2⟩ = (1/√2) (|1, 0⟩ + |0, 0⟩). (6.40)

Here, the ket on the left-hand side is a |m1, m2⟩ ket, whereas those on the right-hand side are |j, m⟩ kets.
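The completed table can be checked directly. The following sketch (not from the text; units with ħ = 1) builds the two-spin product space with Kronecker products and verifies that the states read off from the second and fourth columns are indeed J² eigenstates with j = 1 and j = 0, respectively.

```python
import numpy as np

# Spin-1/2 matrices in units of hbar.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I = np.eye(2)

# Total angular momentum J = S1 + S2 on the product space.
J = [np.kron(s, I) + np.kron(I, s) for s in (sx, sy, sz)]
Jsq = sum(o @ o for o in J)

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
updn = np.kron(up, dn)                    # |m1 = +1/2, m2 = -1/2>
dnup = np.kron(dn, up)                    # |m1 = -1/2, m2 = +1/2>

triplet = (updn + dnup) / np.sqrt(2)      # second column of the table: |j=1, m=0>
singlet = (updn - dnup) / np.sqrt(2)      # fourth column of the table: |j=0, m=0>

print(np.allclose(Jsq @ triplet, 1 * (1 + 1) * triplet))  # j(j+1) = 2: True
print(np.allclose(Jsq @ singlet, 0 * singlet))            # j(j+1) = 0: True
```

Diagonalizing Jsq and J[2] simultaneously recovers the whole table (up to the conventional signs), which is a useful cross-check when constructing Clebsch-Gordon tables for larger j1 and j2.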
Note that our table is really a combination of two sub-tables, one involving j = 0 states, and one involving j = 1 states. The Clebsch-Gordon coefficients corresponding to two different choices of j are completely independent: i.e., there is no recursion relation linking Clebsch-Gordon coefficients corresponding to different values of j. Thus, for every choice of j1, j2, and j we can construct a table of Clebsch-Gordon coefficients corresponding to the different allowed values of m1, m2, and m (subject to the constraint that m1 + m2 = m). A complete knowledge of angular momentum addition is equivalent to knowing all possible tables of Clebsch-Gordon coefficients. These tables are listed (for moderate values of j1, j2, and j) in many standard reference books.

Exercises

6.1 Calculate the Clebsch-Gordon coefficients for adding spin one-half to spin one.

6.2 Calculate the Clebsch-Gordon coefficients for adding spin one to spin one.

6.3 An electron in a hydrogen atom occupies the combined spin and position state whose wavefunction is

ψ = R21(r) [√(1/3) Y1,0(θ, φ) χ+ + √(2/3) Y1,1(θ, φ) χ−].

(a) What values would a measurement of L² yield, and with what probabilities?
(b) Same for Lz.
(c) Same for S².
(d) Same for Sz.
(e) Same for J².
(f) Same for Jz.
(g) What is the probability density for finding the electron at r, θ, φ?
(h) What is the probability density for finding the electron in the spin up state (with respect to the z-axis) at radius r?

6.4 In a low energy neutron-proton system (with zero orbital angular momentum) the potential energy is given by

V(x) = V1(r) + V2(r) [3 (σn · x) (σp · x)/r² − σn · σp] + V3(r) σn · σp,

where r = |x|, σn denotes the vector of the Pauli matrices of the neutron, and σp denotes the vector of the Pauli matrices of the proton. Calculate the potential energy for the neutron-proton system:

(a) In the spin singlet (i.e., spin zero) state.
(b) In the spin triplet (i.e., spin one) state.
[Hint: Calculate the expectation value of V(x) with respect to the overall spin state.]

6.5 Consider two electrons in a spin singlet (i.e., spin zero) state.

(a) If a measurement of the spin of one of the electrons shows that it is in the state with Sz = ħ/2, what is the probability that a measurement of the z-component of the spin of the other electron yields Sz = ħ/2?
(b) If a measurement of the spin of one of the electrons shows that it is in the state with Sy = ħ/2, what is the probability that a measurement of the x-component of the spin of the other electron yields Sx = −ħ/2?
(c) Finally, if electron 1 is in a spin state described by cos α1 χ+ + sin α1 e^{iβ1} χ−, and electron 2 is in a spin state described by cos α2 χ+ + sin α2 e^{iβ2} χ−, what is the probability that the two-electron spin state is a triplet (i.e., spin one) state?

7 Time-Independent Perturbation Theory

7.1 Introduction

We have developed techniques by which the general energy eigenvalue problem can be reduced to a set of coupled partial differential equations involving various wavefunctions. Unfortunately, the number of such problems that yield exactly soluble equations is comparatively small. It is, therefore, necessary to develop techniques for finding approximate solutions to otherwise intractable problems.

Consider the following problem, which is very common. The Hamiltonian of a system is written

H = H0 + H1. (7.1)

Here, H0 is a simple Hamiltonian for which we know the exact eigenvalues and eigenstates. H1 introduces some interesting additional physics into the problem, but it is sufficiently complicated that when we add it to H0 we can no longer find the exact energy eigenvalues and eigenstates. However, H1 can, in some sense (which we shall specify more exactly later on), be regarded as small compared to H0.
Let us try to find approximate eigenvalues and eigenstates of the modified Hamiltonian, H0 + H1, by performing a perturbation analysis about the eigenvalues and eigenstates of the original Hamiltonian, H0.

7.2 Two-State System

Let us start by considering time-independent perturbation theory, in which the modification to the Hamiltonian, H1, has no explicit dependence on time. It is usually assumed that the unperturbed Hamiltonian, H0, is also time-independent. Consider the simplest non-trivial system, in which there are only two independent eigenkets of the unperturbed Hamiltonian. These are denoted

H0 |1⟩ = E1 |1⟩, (7.2)
H0 |2⟩ = E2 |2⟩. (7.3)

It is assumed that these states, and their associated eigenvalues, are known. Because H0 is, by definition, an Hermitian operator, its two eigenkets are mutually orthogonal and form a complete set. The lengths of these eigenkets are both normalized to unity. Let us now try to solve the modified energy eigenvalue problem

(H0 + H1) |E⟩ = E |E⟩. (7.4)

In fact, we can solve this problem exactly. Because the eigenkets of H0 form a complete set, we can write

|E⟩ = ⟨1|E⟩ |1⟩ + ⟨2|E⟩ |2⟩. (7.5)

Multiplying Equation (7.4) on the left by ⟨1| and ⟨2| yields two coupled equations, which can be written in matrix form:

( E1 − E + e11   e12           ) ( ⟨1|E⟩ )   ( 0 )
( e12*           E2 − E + e22  ) ( ⟨2|E⟩ ) = ( 0 ). (7.6)

Here,

e11 = ⟨1| H1 |1⟩, (7.7)
e22 = ⟨2| H1 |2⟩, (7.8)
e12 = ⟨1| H1 |2⟩. (7.9)

In the special (but common) case of a perturbing Hamiltonian whose diagonal matrix elements (in the unperturbed eigenstates) are zero, so that

e11 = e22 = 0, (7.10)

the solution of Equation (7.6) (obtained by setting the determinant of the matrix equal to zero) is

E = [(E1 + E2) ± √((E1 − E2)² + 4 |e12|²)] / 2. (7.11)

Let us expand in the supposedly small parameter

ε = |e12| / |E1 − E2|. (7.12)

We obtain

E ≈ ½ (E1 + E2) ± ½ (E1 − E2) (1 + 2 ε² + ···). (7.13)

The above expression yields the modifications to the energy eigenvalues due to the perturbing Hamiltonian:

E1′ = E1 + |e12|²/(E1 − E2) + ···, (7.14)
E2′ = E2 − |e12|²/(E1 − E2) + ···. (7.15)

Note that H1 causes the upper eigenvalue to rise, and the lower eigenvalue to fall. It is easily demonstrated that the modified eigenkets take the form

|1⟩′ = |1⟩ + [e12*/(E1 − E2)] |2⟩ + ···, (7.16)
|2⟩′ = |2⟩ − [e12/(E1 − E2)] |1⟩ + ···. (7.17)

Thus, the modified energy eigenstates consist of one of the unperturbed eigenstates with a slight admixture of the other. Note that the series expansion in Equation (7.13) only converges if 2 |ε| < 1. This suggests that the condition for the validity of the perturbation expansion is

|e12| < |E1 − E2| / 2. (7.18)

In other words, when we say that H1 needs to be small compared to H0, what we really mean is that the above inequality needs to be satisfied.

7.3 Non-Degenerate Perturbation Theory

Let us now generalize our perturbation analysis to deal with systems possessing more than two energy eigenstates. The energy eigenstates of the unperturbed Hamiltonian, H0, are denoted

H0 |n⟩ = En |n⟩, (7.19)

where n runs from 1 to N. The eigenkets |n⟩ are orthogonal, form a complete set, and have their lengths normalized to unity. Let us now try to solve the energy eigenvalue problem for the perturbed Hamiltonian:

(H0 + H1) |E⟩ = E |E⟩. (7.20)

We can express |E⟩ as a linear superposition of the unperturbed energy eigenkets,

|E⟩ = Σ_k ⟨k|E⟩ |k⟩, (7.21)

where the summation runs from k = 1 to N. Substituting the above equation into Equation (7.20), and multiplying on the left by ⟨m|, we obtain

(Em + emm − E) ⟨m|E⟩ + Σ_{k ≠ m} emk ⟨k|E⟩ = 0, (7.22)

where

emk = ⟨m| H1 |k⟩. (7.23)

Let us now develop our perturbation expansion. We assume that

|emk| / (Em − Ek) ~ O(ε) (7.24)

for all m ≠ k, where ε ≪ 1 is our expansion parameter. We also assume that

|emm| / Em ~ O(ε) (7.25)

for all m. Let us search for a modified version of the nth unperturbed energy eigenstate, for which

E = En + O(ε), (7.26)

and

⟨n|E⟩ = 1, (7.27)
⟨m|E⟩ ~ O(ε) (7.28)

for m ≠ n.
Suppose that we write out Equation (7.22) for $m \neq n$, neglecting terms that are $O(\epsilon^2)$ according to our expansion scheme. We find that

$(E_m - E_n)\,\langle m|E\rangle + e_{mn} \simeq 0,$ (7.29)

giving

$\langle m|E\rangle \simeq -\frac{e_{mn}}{E_m - E_n}.$ (7.30)

Substituting the above expression into Equation (7.22), evaluated for $m = n$, and neglecting $O(\epsilon^3)$ terms, we obtain

$(E_n + e_{nn} - E) - \sum_{k \neq n}\frac{|e_{nk}|^2}{E_k - E_n} \simeq 0.$ (7.31)

Thus, the modified $n$th energy eigenstate possesses an eigenvalue

$E_n' = E_n + e_{nn} + \sum_{k \neq n}\frac{|e_{nk}|^2}{E_n - E_k} + O(\epsilon^3),$ (7.32)

and an eigenket

$|n\rangle' = |n\rangle + \sum_{k \neq n}\frac{e_{kn}}{E_n - E_k}\,|k\rangle + O(\epsilon^2).$ (7.33)

Note that

${}'\langle m|n\rangle' = \delta_{mn} + \frac{e_{nm}^{\,\ast}}{E_m - E_n} + \frac{e_{mn}}{E_n - E_m} + O(\epsilon^2) = \delta_{mn} + O(\epsilon^2).$ (7.34)

Thus, the modified eigenkets remain orthogonal and properly normalized to $O(\epsilon^2)$.

7.4 Quadratic Stark Effect

Suppose that a one-electron atom [i.e., either a hydrogen atom, or an alkali metal atom (which possesses one valence electron orbiting outside a closed, spherically symmetric, shell)] is subjected to a uniform electric field in the positive $z$-direction. The Hamiltonian of the system can be split into two parts: the unperturbed Hamiltonian,

$H_0 = \frac{p^2}{2\,m_e} + V(r),$ (7.35)

and the perturbing Hamiltonian,

$H_1 = e\,|\mathbf{E}|\,z.$ (7.36)

Here, we are neglecting the small difference between the reduced mass, $\mu$, and the electron mass, $m_e$. It is assumed that the unperturbed energy eigenvalues and eigenstates are completely known. The electron spin is irrelevant in this problem (because the spin operators all commute with $H_1$), so we can ignore the spin degrees of freedom of the system. This implies that the system possesses no degenerate energy eigenvalues. Actually, this is not true for the $n > 1$ energy levels of the hydrogen atom, due to the special properties of a pure Coulomb potential. It is necessary to deal with this case separately, because the perturbation theory presented in Section 7.3 breaks down for degenerate unperturbed energy levels.
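Before proceeding, the non-degenerate formula (7.32) can be checked numerically against exact diagonalization. The sketch below uses an arbitrary randomly generated $N$-level system (the level spacings and perturbation strength are illustrative choices, not taken from the text):

```python
import numpy as np

# Randomly generated illustrative system: non-degenerate unperturbed levels
# 0, 1, ..., N-1 plus a small real symmetric perturbation of size ~eps.
rng = np.random.default_rng(0)
N = 6
E0 = np.arange(N, dtype=float)

eps = 1e-3
A = rng.standard_normal((N, N))
H1 = eps * (A + A.T) / 2.0

# Exact eigenvalues of H0 + H1
exact = np.linalg.eigvalsh(np.diag(E0) + H1)

# Second-order formula: E'_n = E_n + e_nn + sum_{k != n} |e_nk|^2 / (E_n - E_k)
second_order = np.empty(N)
for n in range(N):
    shift = H1[n, n]
    for k in range(N):
        if k != n:
            shift += H1[n, k]**2 / (E0[n] - E0[k])
    second_order[n] = E0[n] + shift

max_err = np.max(np.abs(np.sort(second_order) - exact))
```

With matrix elements of order $10^{-3}$ and level spacings of order unity, the residual error is $O(\epsilon^3)$, far below the second-order shifts themselves.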
An energy eigenket of the unperturbed Hamiltonian is characterized by three quantum numbers: the radial quantum number $n$, and the two angular quantum numbers $l$ and $m$ (see Section 4.6). Let us denote such a ket $|n, l, m\rangle$, and let its energy be $E_{nlm}$. According to Equation (7.32), the change in this energy induced by a small electric field is given by

$\Delta E_{nlm} = e\,|\mathbf{E}|\,\langle n,l,m|\,z\,|n,l,m\rangle + e^2\,|\mathbf{E}|^2\sum_{n',l',m'\,\neq\,n,l,m}\frac{|\langle n,l,m|\,z\,|n',l',m'\rangle|^2}{E_{nlm} - E_{n'l'm'}}.$ (7.37)

Now, since

$L_z = x\,p_y - y\,p_x,$ (7.38)

it follows that

$[L_z, z] = 0.$ (7.39)

Thus,

$\langle n,l,m|\,[L_z, z]\,|n',l',m'\rangle = 0,$ (7.40)

giving

$(m - m')\,\langle n,l,m|\,z\,|n',l',m'\rangle = 0,$ (7.41)

because $|n,l,m\rangle$ is, by definition, an eigenstate of $L_z$ with eigenvalue $m\,\hbar$. It is clear, from the above relation, that the matrix element $\langle n,l,m|\,z\,|n',l',m'\rangle$ is zero unless $m' = m$. This is termed the selection rule for the quantum number $m$.

Let us now determine the selection rule for $l$. We have

$[L^2, z] = [L_x^{\,2}, z] + [L_y^{\,2}, z] = L_x\,[L_x, z] + [L_x, z]\,L_x + L_y\,[L_y, z] + [L_y, z]\,L_y = {\rm i}\,\hbar\,(-L_x\,y - y\,L_x + L_y\,x + x\,L_y) = 2\,{\rm i}\,\hbar\,(L_y\,x - L_x\,y + {\rm i}\,\hbar\,z) = 2\,{\rm i}\,\hbar\,(L_y\,x - y\,L_x) = 2\,{\rm i}\,\hbar\,(x\,L_y - L_x\,y),$ (7.42)

where use has been made of Equations (4.1)–(4.6). Similarly,

$[L^2, y] = 2\,{\rm i}\,\hbar\,(L_x\,z - x\,L_z),$ (7.43)
$[L^2, x] = 2\,{\rm i}\,\hbar\,(y\,L_z - L_y\,z).$ (7.44)

Thus,

$[L^2, [L^2, z]] = 2\,{\rm i}\,\hbar\,\left[L^2,\;L_y\,x - L_x\,y + {\rm i}\,\hbar\,z\right] = 2\,{\rm i}\,\hbar\left(L_y\,[L^2, x] - L_x\,[L^2, y] + {\rm i}\,\hbar\,[L^2, z]\right) = -4\,\hbar^2\,L_y\,(y\,L_z - L_y\,z) + 4\,\hbar^2\,L_x\,(L_x\,z - x\,L_z) - 2\,\hbar^2\,(L^2\,z - z\,L^2).$ (7.45)

This reduces to

$[L^2, [L^2, z]] = -\hbar^2\left[4\,(L_x\,x + L_y\,y + L_z\,z)\,L_z - 4\,(L_x^{\,2} + L_y^{\,2} + L_z^{\,2})\,z + 2\,(L^2\,z - z\,L^2)\right].$ (7.46)

However, it is clear from Equations (4.1)–(4.3) that

$L_x\,x + L_y\,y + L_z\,z = 0.$ (7.47)

Hence, we obtain

$[L^2, [L^2, z]] = 2\,\hbar^2\,(L^2\,z + z\,L^2),$ (7.48)

which can be expanded to give

$L^4\,z - 2\,L^2\,z\,L^2 + z\,L^4 - 2\,\hbar^2\,(L^2\,z + z\,L^2) = 0.$ (7.49)

Equation (7.49) implies that

$\langle n,l,m|\,L^4\,z - 2\,L^2\,z\,L^2 + z\,L^4 - 2\,\hbar^2\,(L^2\,z + z\,L^2)\,|n',l',m'\rangle = 0.$ (7.50)

This expression yields

$\left[l^2\,(l+1)^2 - 2\,l\,(l+1)\,l'\,(l'+1) + l'^{\,2}\,(l'+1)^2 - 2\,l\,(l+1) - 2\,l'\,(l'+1)\right]\langle n,l,m|\,z\,|n',l',m'\rangle = 0,$ (7.51)

which reduces to

$(l + l' + 2)\,(l + l')\,(l - l' + 1)\,(l - l' - 1)\,\langle n,l,m|\,z\,|n',l',m'\rangle = 0.$ (7.52)

According to the above formula, the matrix element $\langle n,l,m|\,z\,|n',l',m'\rangle$ vanishes unless $l = l' = 0$ or $l' = l \pm 1$. This matrix element can be written

$\langle n,l,m|\,z\,|n',l',m'\rangle = \int dV'\,\psi_{nlm}^{\,\ast}(r',\theta',\varphi')\,r'\cos\theta'\,\psi_{n'l'm'}(r',\theta',\varphi'),$ (7.53)

where $\psi_{nlm}(\mathbf{x}') = \langle\mathbf{x}'|n,l,m\rangle$. Recall, however, that the wavefunction of an $l = 0$ state is spherically symmetric (see Section 4.3): i.e., $\psi_{n00}(\mathbf{x}') = \psi_{n00}(r')$. It follows from Equation (7.53) that the matrix element vanishes by symmetry when $l = l' = 0$. In conclusion, the matrix element $\langle n,l,m|\,z\,|n',l',m'\rangle$ is zero unless $l' = l \pm 1$. This is the selection rule for the quantum number $l$.

Application of the selection rules to Equation (7.37) yields

$\Delta E_{nlm} = e^2\,|\mathbf{E}|^2\sum_{n'}\sum_{l' = l\pm 1}\frac{|\langle n,l,m|\,z\,|n',l',m\rangle|^2}{E_{nlm} - E_{n'l'm}}.$ (7.54)

Note that all of the terms in Equation (7.37) that vary linearly with the electric field-strength vanish by symmetry, according to the selection rules. Only those terms that vary quadratically with the field-strength survive. The electrical polarizability, $\alpha$, of an atom is defined in terms of the electric-field induced energy-shift of a given atomic state as follows:

$\Delta E = -\frac{1}{2}\,\alpha\,|\mathbf{E}|^2.$ (7.55)

Consider the ground state of a hydrogen atom. (Recall that we cannot address the $n > 1$ excited states because they are degenerate, and our theory cannot handle this at present.) The polarizability of this state is given by

$\alpha = 2\,e^2\sum_{n>1}\frac{|\langle 1,0,0|\,z\,|n,1,0\rangle|^2}{E_{n00} - E_{100}}.$ (7.56)

Here, we have made use of the fact that $E_{n10} = E_{n00}$ for a hydrogen atom.

The sum in the above expression can be evaluated approximately by noting that [see Equation (4.120)]

$E_{n00} = -\frac{e^2}{8\pi\,\epsilon_0\,a_0\,n^2}$ (7.57)

for a hydrogen atom, where

$a_0 = \frac{4\pi\,\epsilon_0\,\hbar^2}{m_e\,e^2}$ (7.58)

is the Bohr radius. We can write

$E_{n00} - E_{100} \geq E_{200} - E_{100} = \frac{3}{4}\,\frac{e^2}{8\pi\,\epsilon_0\,a_0}.$ (7.59)

Thus,

$\alpha < \frac{16}{3}\,4\pi\,\epsilon_0\,a_0\sum_{n>1}|\langle 1,0,0|\,z\,|n,1,0\rangle|^2.$ (7.60)

However,

$\sum_{n>1}|\langle 1,0,0|\,z\,|n,1,0\rangle|^2 = \sum_{n',l',m'}\langle 1,0,0|\,z\,|n',l',m'\rangle\,\langle n',l',m'|\,z\,|1,0,0\rangle = \langle 1,0,0|\,z^2\,|1,0,0\rangle,$ (7.61)

where we have made use of the fact that the wavefunctions of a hydrogen atom form a complete set. It is easily demonstrated from the actual form of the ground-state wavefunction that

$\langle 1,0,0|\,z^2\,|1,0,0\rangle = a_0^{\,2}.$ (7.62)

Thus, we conclude that

$\alpha < \frac{16}{3}\,4\pi\,\epsilon_0\,a_0^{\,3} \simeq 5.3\;\,4\pi\,\epsilon_0\,a_0^{\,3}.$ (7.63)

The exact result is

$\alpha = \frac{9}{2}\,4\pi\,\epsilon_0\,a_0^{\,3} = 4.5\;\,4\pi\,\epsilon_0\,a_0^{\,3}.$ (7.64)

It is possible to obtain this result, without recourse to perturbation theory, by solving Schrödinger's equation in parabolic coordinates.

7.5 Degenerate Perturbation Theory

Let us now consider systems in which the eigenstates of the unperturbed Hamiltonian, $H_0$, possess degenerate energy levels. It is always possible to represent degenerate energy eigenstates as the simultaneous eigenstates of the Hamiltonian and some other Hermitian operator (or group of operators). Let us denote this operator (or group of operators) $L$. We can write

$H_0\,|n, l\rangle = E_n\,|n, l\rangle,$ (7.65)

and

$L\,|n, l\rangle = L_{n\,l}\,|n, l\rangle,$ (7.66)

where $[H_0, L] = 0$. Here, the $E_n$ and the $L_{n\,l}$ are real numbers that depend on the quantum numbers $n$, and $n$ and $l$, respectively. It is always possible to find a sufficient number of operators that commute with the Hamiltonian in order to ensure that the $L_{n\,l}$ are all different. In other words, we can choose $L$ such that the quantum numbers $n$ and $l$ uniquely specify each eigenstate. Suppose that for each value of $n$ there are $N_n$ different values of $l$: i.e., the $n$th energy eigenstate is $N_n$-fold degenerate. In general, $L$ does not commute with the perturbing Hamiltonian, $H_1$. This implies that the modified energy eigenstates are not eigenstates of $L$.
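As an aside, the ground-state expectation value $\langle 1,0,0|\,z^2\,|1,0,0\rangle = a_0^{\,2}$ quoted in Equation (7.62) is easy to confirm by direct numerical integration. A sketch in units where $a_0 = 1$, using $\psi_{100} \propto {\rm e}^{-r/a_0}$ and the fact that $\langle z^2\rangle = \langle r^2\rangle/3$ for a spherically symmetric state:

```python
import numpy as np

# Radial grid (units a0 = 1); the integrand is negligible beyond r ~ 40
r = np.linspace(0.0, 40.0, 200001)
dr = r[1] - r[0]

# Normalized radial probability density for the 1s state:
# 4 pi r^2 |psi_100|^2 = 4 r^2 exp(-2r)
radial_density = 4.0 * r**2 * np.exp(-2.0 * r)

norm = np.sum(radial_density) * dr            # should be ~1
r2_mean = np.sum(r**2 * radial_density) * dr  # <r^2> = 3 a0^2
z2_mean = r2_mean / 3.0                       # <z^2> = a0^2
```

The factor of $1/3$ follows from spherical symmetry, since $\langle x^2\rangle = \langle y^2\rangle = \langle z^2\rangle$.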
In this situation, we expect the perturbation to split the degeneracy of the energy levels, so that each modified eigenstate $|n, l\rangle'$ acquires a unique energy eigenvalue $E'_{nl}$. Let us naively attempt to use the standard perturbation theory of Section 7.3 to evaluate the modified energy eigenstates and energy levels. A direct generalization of Equations (7.32) and (7.33) yields

$E'_{nl} = E_n + e_{nl\,nl} + \sum_{n',l'\,\neq\,n,l}\frac{|e_{n'l'\,nl}|^2}{E_n - E_{n'}} + O(\epsilon^3),$ (7.67)

and

$|n, l\rangle' = |n, l\rangle + \sum_{n',l'\,\neq\,n,l}\frac{e_{n'l'\,nl}}{E_n - E_{n'}}\,|n', l'\rangle + O(\epsilon^2),$ (7.68)

where

$e_{n'l'\,nl} = \langle n', l'|\,H_1\,|n, l\rangle.$ (7.69)

It is fairly obvious that the summations in Equations (7.67) and (7.68) are not well-behaved if the $n$th energy level is degenerate. The problem terms are those involving unperturbed eigenstates labeled by the same value of $n$, but different values of $l$: i.e., those states whose unperturbed energies are $E_n$. These terms give rise to singular factors $1/(E_n - E_n)$ in the summations. Note, however, that this problem would not exist if the matrix elements, $e_{nl'\,nl}$, of the perturbing Hamiltonian between distinct, degenerate, unperturbed energy eigenstates corresponding to the eigenvalue $E_n$ were zero. In other words, if

$\langle n, l'|\,H_1\,|n, l\rangle = \lambda_{n\,l}\,\delta_{l\,l'},$ (7.70)

then all of the singular terms in Equations (7.67) and (7.68) would vanish.

In general, Equation (7.70) is not satisfied. Fortunately, we can always redefine the unperturbed energy eigenstates belonging to the eigenvalue $E_n$ in such a manner that Equation (7.70) is satisfied. Let us define $N_n$ new states that are linear combinations of the $N_n$ original degenerate eigenstates corresponding to the eigenvalue $E_n$:

$|n, l^{(1)}\rangle = \sum_{k=1,N_n}\langle n, k|n, l^{(1)}\rangle\,|n, k\rangle.$ (7.71)

Note that these new states are also degenerate energy eigenstates of the unperturbed Hamiltonian corresponding to the eigenvalue $E_n$. The $|n, l^{(1)}\rangle$ are chosen in such a manner that they are eigenstates of the perturbing Hamiltonian, $H_1$. Thus,

$H_1\,|n, l^{(1)}\rangle = \lambda_{n\,l}\,|n, l^{(1)}\rangle.$ (7.72)

The $|n, l^{(1)}\rangle$ are also chosen so that they are orthogonal, and have unit lengths. It follows that

$\langle n, l'^{(1)}|\,H_1\,|n, l^{(1)}\rangle = \lambda_{n\,l}\,\delta_{l\,l'}.$ (7.73)

Thus, if we use the new eigenstates, instead of the old ones, then we can employ Equations (7.67) and (7.68) directly, because all of the singular terms vanish. The only remaining difficulty is to determine the new eigenstates in terms of the original ones. Now,

$\sum_{l=1,N_n}|n, l\rangle\,\langle n, l| = 1,$ (7.74)

where $1$ denotes the identity operator in the sub-space of all unperturbed energy eigenkets corresponding to the eigenvalue $E_n$. Using this completeness relation, the operator eigenvalue equation (7.72) can be transformed into a straightforward matrix eigenvalue equation:

$\sum_{l''=1,N_n}\langle n, l'|\,H_1\,|n, l''\rangle\,\langle n, l''|n, l^{(1)}\rangle = \lambda_{n\,l}\,\langle n, l'|n, l^{(1)}\rangle.$ (7.75)

This can be written more transparently as

$\mathbf{U}\,\mathbf{x} = \lambda\,\mathbf{x},$ (7.76)

where the elements of the $N_n \times N_n$ Hermitian matrix $\mathbf{U}$ are

$U_{j\,k} = \langle n, j|\,H_1\,|n, k\rangle.$ (7.77)

Because $\mathbf{U}$ is Hermitian, Equation (7.76) can always be solved to give $N_n$ real eigenvalues $\lambda_{n\,l}$ (for $l = 1$ to $N_n$), with $N_n$ corresponding mutually orthogonal eigenvectors $\mathbf{x}_{n\,l}$. The eigenvectors specify the weights of the new eigenstates in terms of the original eigenstates: i.e.,

$(\mathbf{x}_{n\,l})_k = \langle n, k|n, l^{(1)}\rangle,$ (7.78)

for $k = 1$ to $N_n$. In our new scheme, Equations (7.67) and (7.68) yield

$E'_{nl} = E_n + \lambda_{n\,l} + \sum_{n'\neq n,\,l'}\frac{|e_{n'l'\,nl}|^2}{E_n - E_{n'}} + O(\epsilon^3),$ (7.79)

and

$|n, l^{(1)}\rangle' = |n, l^{(1)}\rangle + \sum_{n'\neq n,\,l'}\frac{e_{n'l'\,nl}}{E_n - E_{n'}}\,|n', l'\rangle + O(\epsilon^2).$ (7.80)

There are no singular terms in these expressions, because the summations are over $n' \neq n$: i.e., they specifically exclude the problematic, degenerate, unperturbed energy eigenstates corresponding to the eigenvalue $E_n$. Note that the first-order energy-shifts are equivalent to the eigenvalues of the matrix eigenvalue equation, (7.76).

7.6 Linear Stark Effect

Let us examine the effect of an electric field on the excited energy levels of a hydrogen atom. For instance, consider the $n = 2$ states.
There is a single $l = 0$ state, usually referred to as 2s, and three $l = 1$ states (with $m = -1, 0, 1$), usually referred to as 2p. All of these states possess the same energy, $E_{200} = -e^2/(32\pi\,\epsilon_0\,a_0)$. As in Section 7.4, the perturbing Hamiltonian is

$H_1 = e\,|\mathbf{E}|\,z.$ (7.81)

In order to apply perturbation theory, we have to solve the matrix eigenvalue equation

$\mathbf{U}\,\mathbf{x} = \lambda\,\mathbf{x},$ (7.82)

where $\mathbf{U}$ is the array of the matrix elements of $H_1$ between the degenerate 2s and 2p states. Thus,

$\mathbf{U} = e\,|\mathbf{E}|\begin{pmatrix} 0 & \langle 2,0,0|\,z\,|2,1,0\rangle & 0 & 0\\ \langle 2,1,0|\,z\,|2,0,0\rangle & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix},$ (7.83)

where the rows and columns correspond to the $|2,0,0\rangle$, $|2,1,0\rangle$, $|2,1,1\rangle$, and $|2,1,-1\rangle$ states, respectively. Here, we have made use of the selection rules, which tell us that the matrix element of $z$ between two hydrogen atom states is zero unless the states possess the same $m$ quantum number, and $l$ quantum numbers that differ by unity. It is easily demonstrated, from the exact forms of the 2s and 2p wavefunctions, that

$\langle 2,0,0|\,z\,|2,1,0\rangle = \langle 2,1,0|\,z\,|2,0,0\rangle = 3\,a_0.$ (7.84)

It can be seen, by inspection, that the eigenvalues of $\mathbf{U}$ are $\lambda_1 = 3\,e\,a_0\,|\mathbf{E}|$, $\lambda_2 = -3\,e\,a_0\,|\mathbf{E}|$, $\lambda_3 = 0$, and $\lambda_4 = 0$. The corresponding eigenvectors are

$\mathbf{x}_1 = \frac{1}{\sqrt{2}}\,(1,\ 1,\ 0,\ 0)^{\rm T},$ (7.85)
$\mathbf{x}_2 = \frac{1}{\sqrt{2}}\,(1,\ -1,\ 0,\ 0)^{\rm T},$ (7.86)
$\mathbf{x}_3 = (0,\ 0,\ 1,\ 0)^{\rm T},$ (7.87)
$\mathbf{x}_4 = (0,\ 0,\ 0,\ 1)^{\rm T}.$ (7.88)

It follows from Section 7.5 that the simultaneous eigenstates of the unperturbed Hamiltonian and the perturbing Hamiltonian take the form

$|1\rangle = \frac{|2,0,0\rangle + |2,1,0\rangle}{\sqrt{2}},$ (7.89)
$|2\rangle = \frac{|2,0,0\rangle - |2,1,0\rangle}{\sqrt{2}},$ (7.90)
$|3\rangle = |2,1,1\rangle,$ (7.91)
$|4\rangle = |2,1,-1\rangle.$ (7.92)

In the absence of an electric field, all of these states possess the same energy, $E_{200}$.
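These eigenvalues and eigenvectors can be reproduced numerically by diagonalizing the matrix (7.83), working in units where $e\,a_0\,|\mathbf{E}| = 1$:

```python
import numpy as np

# The n = 2 Stark matrix in units where e a0 |E| = 1.
# Basis ordering: |2,0,0>, |2,1,0>, |2,1,1>, |2,1,-1>.
U = np.array([[0.0, 3.0, 0.0, 0.0],
              [3.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])

evals, evecs = np.linalg.eigh(U)   # eigenvalues returned in ascending order

# Expect -3, 0, 0, +3, i.e. shifts of -+3 e a0 |E| plus a twofold zero.
# The eigenvector for +3 should be an equal-weight mix of 2s and 2p(m=0).
v_plus = evecs[:, 3]
```

Only the 2s and 2p($m=0$) states mix; the $m = \pm 1$ states span the twofold-degenerate null space.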
The first-order energy-shifts induced by an electric field are given by

$\Delta E_1 = +3\,e\,a_0\,|\mathbf{E}|,$ (7.93)
$\Delta E_2 = -3\,e\,a_0\,|\mathbf{E}|,$ (7.94)
$\Delta E_3 = 0,$ (7.95)
$\Delta E_4 = 0.$ (7.96)

Thus, the energies of states 1 and 2 are shifted upwards and downwards, respectively, by an amount $3\,e\,a_0\,|\mathbf{E}|$ in the presence of an electric field. States 1 and 2 are orthogonal linear combinations of the original 2s and 2p($m=0$) states. Note that the energy-shifts are linear in the electric field-strength, so this is a much larger effect than the quadratic effect described in Section 7.4. The energies of states 3 and 4 (which are equivalent to the original 2p($m=1$) and 2p($m=-1$) states, respectively) are not affected to first order. Of course, to second order the energies of these states are shifted by an amount that depends on the square of the electric field-strength. Note that the linear Stark effect depends crucially on the degeneracy of the 2s and 2p states. This degeneracy is a special property of a pure Coulomb potential, and, therefore, only applies to a hydrogen atom. Thus, alkali metal atoms do not exhibit the linear Stark effect.

7.7 Fine Structure

Let us now consider the energy levels of hydrogen-like atoms (i.e., alkali metal atoms) in more detail. The outermost electron moves in a spherically symmetric potential $V(r)$ due to the nuclear charge and the charges of the other electrons (which occupy spherically symmetric closed shells). The shielding effect of the inner electrons causes $V(r)$ to depart from the pure Coulomb form. This splits the degeneracy of states characterized by the same value of $n$, but different values of $l$. In fact, higher $l$ states have higher energies.

Let us examine a phenomenon known as fine structure, which is due to interaction between the spin and orbital angular momenta of the outermost electron. This electron experiences an electric field

$\mathbf{E} = \frac{\nabla V}{e}.$ (7.97)

However, a non-relativistic charge moving in an electric field also experiences an effective magnetic field

$\mathbf{B} = -\frac{\mathbf{v}\times\mathbf{E}}{c^2}.$ (7.98)

Now, an electron possesses a spin magnetic moment [see Equation (5.46)]

$\boldsymbol{\mu} = -\frac{e\,\mathbf{S}}{m_e}.$ (7.99)

We, therefore, expect a spin-orbit contribution to the Hamiltonian of the form

$H_{LS} = -\boldsymbol{\mu}\cdot\mathbf{B} = -\frac{e\,\mathbf{S}}{m_e\,c^2}\cdot\left(\mathbf{v}\times\frac{1}{e}\,\frac{\mathbf{x}}{r}\,\frac{dV}{dr}\right) = \frac{1}{m_e^{\,2}\,c^2}\,\frac{1}{r}\,\frac{dV}{dr}\,\mathbf{L}\cdot\mathbf{S},$ (7.100)

where $\mathbf{L} = m_e\,\mathbf{x}\times\mathbf{v}$ is the orbital angular momentum. Actually, when the above expression is compared to the observed spin-orbit interaction, it is found to be too large by a factor of two. There is a classical explanation for this, due to spin precession (the so-called Thomas precession), which we need not go into. The correct quantum mechanical explanation requires a relativistically covariant treatment of electron dynamics (this is achieved using the so-called Dirac equation; see Chapter 11).

Let us now apply perturbation theory to a hydrogen-like atom, using $H_{LS}$ as the perturbation (with $H_{LS}$ taking one half of the value given above), and

$H_0 = \frac{p^2}{2\,m_e} + V(r)$ (7.101)

as the unperturbed Hamiltonian. We have two choices for the energy eigenstates of $H_0$. We can adopt the simultaneous eigenstates of $H_0$, $L^2$, $S^2$, $L_z$ and $S_z$, or the simultaneous eigenstates of $H_0$, $L^2$, $S^2$, $J^2$, and $J_z$, where $\mathbf{J} = \mathbf{L} + \mathbf{S}$ is the total angular momentum. Although the departure of $V(r)$ from a pure $1/r$ form splits the degeneracy of same $n$, different $l$, states, those states characterized by the same values of $n$ and $l$, but different values of $m_l$, are still degenerate. (Here, $m_l$, $m_s$, and $m_j$ are the quantum numbers corresponding to $L_z$, $S_z$, and $J_z$, respectively.) Moreover, with the addition of spin degrees of freedom, each state is doubly degenerate due to the two possible orientations of the electron spin (i.e., $m_s = \pm 1/2$). Thus, we are still dealing with a highly degenerate system.
We know, from Section 7.6, that the application of perturbation theory to a degenerate system is greatly simplified if the basis eigenstates of the unperturbed Hamiltonian are also eigenstates of the perturbing Hamiltonian. Now, the perturbing Hamiltonian, $H_{LS}$, is proportional to $\mathbf{L}\cdot\mathbf{S}$, where

$\mathbf{L}\cdot\mathbf{S} = \frac{J^2 - L^2 - S^2}{2}.$ (7.102)

It is fairly obvious that the first group of operators ($H_0$, $L^2$, $S^2$, $L_z$ and $S_z$) does not commute with $H_{LS}$, whereas the second group ($H_0$, $L^2$, $S^2$, $J^2$, and $J_z$) does. In fact, $\mathbf{L}\cdot\mathbf{S}$ is just a combination of operators appearing in the second group. Thus, it is advantageous to work in terms of the eigenstates of the second group of operators, rather than those of the first group.

We now need to find the simultaneous eigenstates of $H_0$, $L^2$, $S^2$, $J^2$, and $J_z$. This is equivalent to finding the eigenstates of the total angular momentum resulting from the addition of two angular momenta: $j_1 = l$, and $j_2 = s = 1/2$. According to Equation (6.26), the allowed values of the total angular momentum are $j = l + 1/2$ and $j = l - 1/2$. We can write

$|l+1/2,\, m\rangle = \cos\alpha\,|m-1/2,\, 1/2\rangle + \sin\alpha\,|m+1/2,\, -1/2\rangle,$ (7.103)
$|l-1/2,\, m\rangle = -\sin\alpha\,|m-1/2,\, 1/2\rangle + \cos\alpha\,|m+1/2,\, -1/2\rangle.$ (7.104)

Here, the kets on the left-hand side are $|j, m_j\rangle$ kets, whereas those on the right-hand side are $|m_l, m_s\rangle$ kets (the $j_1$, $j_2$ labels have been dropped, for the sake of clarity). We have made use of the fact that the Clebsch-Gordan coefficients are automatically zero unless $m_j = m_l + m_s$. We have also made use of the fact that both the $|j, m_j\rangle$ and $|m_l, m_s\rangle$ kets are orthonormal, and have unit lengths. We now need to determine

$\cos\alpha = \langle m-1/2,\, 1/2|l+1/2,\, m\rangle,$ (7.105)

where the Clebsch-Gordan coefficient is written in $\langle m_l, m_s|j, m_j\rangle$ form.

Let us now employ the recursion relation for Clebsch-Gordan coefficients, Equation (6.32), with $j_1 = l$, $j_2 = 1/2$, $j = l + 1/2$, $m_1 = m - 1/2$, $m_2 = 1/2$ (lower sign). We obtain

$\left[(l+1/2)\,(l+3/2) - m\,(m+1)\right]^{1/2}\langle m-1/2,\, 1/2|l+1/2,\, m\rangle = \left[l\,(l+1) - (m-1/2)\,(m+1/2)\right]^{1/2}\langle m+1/2,\, 1/2|l+1/2,\, m+1\rangle,$ (7.106)

which reduces to

$\langle m-1/2,\, 1/2|l+1/2,\, m\rangle = \sqrt{\frac{l+m+1/2}{l+m+3/2}}\,\langle m+1/2,\, 1/2|l+1/2,\, m+1\rangle.$ (7.107)

We can use this formula to successively increase the value of $m_l$. For instance,

$\langle m-1/2,\, 1/2|l+1/2,\, m\rangle = \sqrt{\frac{l+m+1/2}{l+m+3/2}}\,\sqrt{\frac{l+m+3/2}{l+m+5/2}}\,\langle m+3/2,\, 1/2|l+1/2,\, m+2\rangle.$ (7.108)

This procedure can be continued until $m_l$ attains its maximum possible value, $l$. Thus,

$\langle m-1/2,\, 1/2|l+1/2,\, m\rangle = \sqrt{\frac{l+m+1/2}{2\,l+1}}\,\langle l,\, 1/2|l+1/2,\, l+1/2\rangle.$ (7.109)

Consider the situation in which $m_l$ and $m_s$ both take their maximum values, $l$ and $1/2$, respectively. The corresponding value of $m_j$ is $l + 1/2$. This value is possible when $j = l + 1/2$, but not when $j = l - 1/2$. Thus, the $|m_l, m_s\rangle$ ket $|l,\, 1/2\rangle$ must be equal to the $|j, m_j\rangle$ ket $|l+1/2,\, l+1/2\rangle$, up to an arbitrary phase-factor. By convention, this factor is taken to be unity, giving

$\langle l,\, 1/2|l+1/2,\, l+1/2\rangle = 1.$ (7.110)

It follows from Equation (7.109) that

$\cos\alpha = \langle m-1/2,\, 1/2|l+1/2,\, m\rangle = \sqrt{\frac{l+m+1/2}{2\,l+1}}.$ (7.111)

Hence,

$\sin^2\alpha = 1 - \frac{l+m+1/2}{2\,l+1} = \frac{l-m+1/2}{2\,l+1}.$ (7.112)

We now need to determine the sign of $\sin\alpha$. A careful examination of the recursion relation, Equation (6.32), shows that the plus sign is appropriate. Thus,

$|l+1/2,\, m\rangle = \sqrt{\frac{l+m+1/2}{2\,l+1}}\,|m-1/2,\, 1/2\rangle + \sqrt{\frac{l-m+1/2}{2\,l+1}}\,|m+1/2,\, -1/2\rangle,$ (7.113)

$|l-1/2,\, m\rangle = -\sqrt{\frac{l-m+1/2}{2\,l+1}}\,|m-1/2,\, 1/2\rangle + \sqrt{\frac{l+m+1/2}{2\,l+1}}\,|m+1/2,\, -1/2\rangle.$ (7.114)

It is convenient to define so-called spin-angular functions using the Pauli two-component formalism:

$\mathcal{Y}_{l\,m}^{\,j=l\pm 1/2} \equiv \mathcal{Y}_{l\,m}^{\pm} = \pm\sqrt{\frac{l\pm m+1/2}{2\,l+1}}\,Y_{l\,m-1/2}(\theta,\varphi)\,\chi_+ + \sqrt{\frac{l\mp m+1/2}{2\,l+1}}\,Y_{l\,m+1/2}(\theta,\varphi)\,\chi_- = \frac{1}{\sqrt{2\,l+1}}\begin{pmatrix} \pm\sqrt{l\pm m+1/2}\;Y_{l\,m-1/2}(\theta,\varphi)\\ \sqrt{l\mp m+1/2}\;Y_{l\,m+1/2}(\theta,\varphi) \end{pmatrix}.$ (7.115)

These functions are eigenfunctions of the total angular momentum for spin one-half particles, just as the spherical harmonics are eigenfunctions of the orbital angular momentum.
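The states (7.113)–(7.114) should diagonalize $\mathbf{L}\cdot\mathbf{S}$ within each $m_j$ subspace, with eigenvalues $l\,\hbar^2/2$ and $-(l+1)\,\hbar^2/2$, as follows from Equation (7.102) with $j = l \pm 1/2$. A numerical check (units $\hbar = 1$; the values $l = 2$, $m = 1/2$ are an arbitrary illustrative choice):

```python
import numpy as np

l, m = 2, 0.5    # illustrative choice with |m| <= l - 1/2

# In the basis {|m_l=m-1/2, m_s=+1/2>, |m_l=m+1/2, m_s=-1/2>}, the operator
# L.S = Lz Sz + (L+ S- + L- S+)/2 has diagonal entries from Lz Sz and an
# off-diagonal entry from the ladder operators (units hbar = 1).
off = 0.5 * np.sqrt(l*(l+1) - (m-0.5)*(m+0.5))
LS = np.array([[0.5*(m-0.5), off],
               [off, -0.5*(m+0.5)]])

evals = np.sort(np.linalg.eigvalsh(LS))   # ascending: -(l+1)/2, then l/2
lo, hi = evals[0], evals[1]

# Clebsch-Gordan coefficients from Equations (7.111)-(7.112)
cos_a = np.sqrt((l + m + 0.5) / (2*l + 1))
sin_a = np.sqrt((l - m + 0.5) / (2*l + 1))
v_plus = np.array([cos_a, sin_a])         # the |l+1/2, m> combination
residual = np.linalg.norm(LS @ v_plus - 0.5*l*v_plus)
```

The vanishing residual confirms that the Clebsch-Gordan combination is indeed the $j = l + 1/2$ eigenvector of $\mathbf{L}\cdot\mathbf{S}$.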
A general wavefunction for an energy eigenstate in a hydrogen-like atom is written

$\psi_{nlm\pm} = R_{n\,l}(r)\,\mathcal{Y}_{l\,m}^{\pm}.$ (7.116)

The radial part of the wavefunction, $R_{n\,l}(r)$, depends on the radial quantum number $n$ and the angular quantum number $l$. The wavefunction is also labeled by $m$, which is the quantum number associated with $J_z$. For a given choice of $l$, the quantum number $j$ (i.e., the quantum number associated with $J^2$) can take the values $l \pm 1/2$. The $|l\pm 1/2,\, m\rangle$ kets are eigenstates of $\mathbf{L}\cdot\mathbf{S}$, according to Equation (7.102). Thus,

$\mathbf{L}\cdot\mathbf{S}\,|j = l\pm 1/2,\, m_j = m\rangle = \frac{\hbar^2}{2}\left[j\,(j+1) - l\,(l+1) - 3/4\right]|j, m\rangle,$ (7.117)

giving

$\mathbf{L}\cdot\mathbf{S}\,|l+1/2,\, m\rangle = \frac{l\,\hbar^2}{2}\,|l+1/2,\, m\rangle,$ (7.118)
$\mathbf{L}\cdot\mathbf{S}\,|l-1/2,\, m\rangle = -\frac{(l+1)\,\hbar^2}{2}\,|l-1/2,\, m\rangle.$ (7.119)

It follows that

$\oint d\Omega\,(\mathcal{Y}_{l\,m}^{+})^{\dagger}\,\mathbf{L}\cdot\mathbf{S}\;\mathcal{Y}_{l\,m}^{+} = \frac{l\,\hbar^2}{2},$ (7.120)
$\oint d\Omega\,(\mathcal{Y}_{l\,m}^{-})^{\dagger}\,\mathbf{L}\cdot\mathbf{S}\;\mathcal{Y}_{l\,m}^{-} = -\frac{(l+1)\,\hbar^2}{2},$ (7.121)

where the integrals are over all solid angle, $d\Omega = \sin\theta\,d\theta\,d\varphi$.

Let us now apply degenerate perturbation theory to evaluate the shift in energy of a state whose wavefunction is $\psi_{nlm\pm}$ due to the spin-orbit Hamiltonian, $H_{LS}$. To first order, the energy-shift is given by

$\Delta E_{nlm\pm} = \int dV\,(\psi_{nlm\pm})^{\ast}\,H_{LS}\,\psi_{nlm\pm},$ (7.122)

where the integral is over all space, $dV = r^2\,dr\,d\Omega$. Equations (7.100) (remembering the factor of two), (7.116), and (7.120)–(7.121) yield

$\Delta E_{nlm+} = +\frac{1}{2\,m_e^{\,2}\,c^2}\left\langle\frac{1}{r}\frac{dV}{dr}\right\rangle\frac{l\,\hbar^2}{2},$ (7.123)
$\Delta E_{nlm-} = -\frac{1}{2\,m_e^{\,2}\,c^2}\left\langle\frac{1}{r}\frac{dV}{dr}\right\rangle\frac{(l+1)\,\hbar^2}{2},$ (7.124)

where

$\left\langle\frac{1}{r}\frac{dV}{dr}\right\rangle = \int_0^{\infty} dr\,r^2\,(R_{n\,l})^{\ast}\,\frac{1}{r}\frac{dV}{dr}\,R_{n\,l}.$ (7.125)

Let us now apply the above result to the case of a sodium atom. In chemist's notation, the ground state is written

$(1s)^2\,(2s)^2\,(2p)^6\,(3s).$ (7.126)

The inner ten electrons effectively form a spherically symmetric electron cloud. We are interested in the excitation of the eleventh electron from 3s to some higher energy state. The closest (in energy) unoccupied state is 3p. This state has a higher energy than 3s due to the deviations of the potential from the pure Coulomb form.
In the absence of spin-orbit interaction, there are six degenerate 3p states. The spin-orbit interaction breaks the degeneracy of these states. The modified states are labeled $(3p)_{1/2}$ and $(3p)_{3/2}$, where the subscript refers to the value of $j$. The four $(3p)_{3/2}$ states lie at a slightly higher energy level than the two $(3p)_{1/2}$ states, because the radial integral (7.125) is positive. The splitting of the (3p) energy levels of the sodium atom can be observed using a spectroscope. The well-known sodium D line is associated with transitions between the 3p and 3s states. The fact that there are two slightly different 3p energy levels (note that spin-orbit coupling does not split the 3s energy levels) means that the sodium D line actually consists of two very closely spaced spectroscopic lines. It is easily demonstrated that the ratio of the typical spacing of Balmer lines to the splitting brought about by spin-orbit interaction is about $1 : \alpha^2$, where

$\alpha = \frac{e^2}{2\,\epsilon_0\,h\,c} \simeq \frac{1}{137}$ (7.127)

is the fine structure constant. Actually, Equations (7.123)–(7.124) are not entirely correct, because we have neglected an effect (namely, the relativistic mass correction of the electron) that is the same order of magnitude as spin-orbit coupling. (See Exercises 7.3 and 7.4.)

7.8 Zeeman Effect

Consider a hydrogen-like atom placed in a uniform $z$-directed magnetic field. The change in energy of the outermost electron is

$H_B = -\boldsymbol{\mu}\cdot\mathbf{B},$ (7.128)

where

$\boldsymbol{\mu} = -\frac{e}{2\,m_e}\,(\mathbf{L} + 2\,\mathbf{S})$ (7.129)

is its magnetic moment, including both the spin and orbital contributions. Thus,

$H_B = \frac{e\,B}{2\,m_e}\,(L_z + 2\,S_z).$ (7.130)

Suppose that the energy-shifts induced by the magnetic field are much smaller than those induced by spin-orbit interaction. In this situation, we can treat $H_B$ as a small perturbation acting on the eigenstates of $H_0 + H_{LS}$. Of course, these states are the simultaneous eigenstates of $J^2$ and $J_z$. Let us consider one of these states, labeled by the quantum numbers $j$ and $m$, where $j = l \pm 1/2$.
From standard perturbation theory, the first-order energy-shift in the presence of a magnetic field is

$\Delta E_{nlm\pm} = \langle l\pm 1/2,\, m|\,H_B\,|l\pm 1/2,\, m\rangle.$ (7.131)

Because

$L_z + 2\,S_z = J_z + S_z,$ (7.132)

we find that

$\Delta E_{nlm\pm} = \frac{e\,B}{2\,m_e}\left(m\,\hbar + \langle l\pm 1/2,\, m|\,S_z\,|l\pm 1/2,\, m\rangle\right).$ (7.133)

Now, from Equations (7.113)–(7.114),

$|l\pm 1/2,\, m\rangle = \pm\sqrt{\frac{l\pm m+1/2}{2\,l+1}}\,|m-1/2,\, 1/2\rangle + \sqrt{\frac{l\mp m+1/2}{2\,l+1}}\,|m+1/2,\, -1/2\rangle.$ (7.134)

It follows that

$\langle l\pm 1/2,\, m|\,S_z\,|l\pm 1/2,\, m\rangle = \frac{\hbar}{2\,(2\,l+1)}\left[(l\pm m+1/2) - (l\mp m+1/2)\right] = \pm\frac{m\,\hbar}{2\,l+1}.$ (7.135)

Thus, we obtain the so-called Landé formula for the energy-shift induced by a weak magnetic field:

$\Delta E_{nlm\pm} = \frac{e\,\hbar\,B}{2\,m_e}\,m\left(1 \pm \frac{1}{2\,l+1}\right).$ (7.136)

Let us apply this theory to the sodium atom. We have already seen that the non-Coulomb potential splits the degeneracy of the 3s and 3p states, the latter states acquiring a higher energy. The spin-orbit interaction splits the six 3p states into two groups, with four $j = 3/2$ states lying at a slightly higher energy than the two $j = 1/2$ states. According to Equation (7.136), a magnetic field splits the $(3p)_{3/2}$ quadruplet of states, each state acquiring a different energy. In fact, the energy of each state becomes dependent on the quantum number $m$, which measures the projection of the total angular momentum along the $z$-axis. States with higher $m$ values have higher energies. A magnetic field also splits the $(3p)_{1/2}$ doublet of states. However, it is evident from Equation (7.136) that these states are split by a lesser amount than the $j = 3/2$ states.

Suppose that we increase the strength of the magnetic field, so that the energy-shift due to the magnetic field becomes comparable to the energy-shift induced by spin-orbit interaction. Clearly, in this situation, it does not make much sense to think of $H_B$ as a small interaction term operating on the eigenstates of $H_0 + H_{LS}$. In fact, this intermediate case is very difficult to analyze.
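The weak-field level pattern predicted by the Landé formula (7.136) for the sodium 3p states can be tabulated directly, with the shifts expressed in units of $e\,\hbar\,B/(2\,m_e)$:

```python
import numpy as np

l = 1   # the 3p level

# j = l + 1/2 = 3/2 quadruplet: shift factor m (1 + 1/(2l+1)) = (4/3) m
shifts_3_2 = [m * (1.0 + 1.0/(2*l + 1)) for m in (-1.5, -0.5, 0.5, 1.5)]

# j = l - 1/2 = 1/2 doublet: shift factor m (1 - 1/(2l+1)) = (2/3) m
shifts_1_2 = [m * (1.0 - 1.0/(2*l + 1)) for m in (-0.5, 0.5)]
```

The quadruplet spreads over four equally spaced levels at $\pm 2/3$ and $\pm 2$, while the doublet splits only to $\pm 1/3$, illustrating that the $j = 1/2$ states are split by a lesser amount than the $j = 3/2$ states.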
Let us, instead, consider the extreme limit in which the energy-shift due to the magnetic field greatly exceeds that induced by spin-orbit effects. This is called the Paschen-Back limit.

In the Paschen-Back limit, we can think of the spin-orbit Hamiltonian, $H_{LS}$, as a small interaction term operating on the eigenstates of $H_0 + H_B$. Note that the magnetic Hamiltonian, $H_B$, commutes with $L^2$, $S^2$, $L_z$, and $S_z$, but does not commute with $J^2$. Thus, in an intense magnetic field, the energy eigenstates of a hydrogen-like atom are approximate eigenstates of the spin and orbital angular momenta, but are not eigenstates of the total angular momentum. We can label each state by the quantum numbers $n$ (the energy quantum number), $l$, $m_l$, and $m_s$. Thus, our energy eigenkets are written $|n, l, m_l, m_s\rangle$. The unperturbed Hamiltonian, $H_0$, causes states with different values of the quantum numbers $n$ and $l$ to have different energies. However, states with the same value of $n$ and $l$, but different values of $m_l$ and $m_s$, are degenerate. The shift in energy due to the magnetic field is simply

$\Delta E_{n\,l\,m_l\,m_s} = \langle n, l, m_l, m_s|\,H_B\,|n, l, m_l, m_s\rangle = \frac{e\,\hbar\,B}{2\,m_e}\,(m_l + 2\,m_s).$ (7.137)

Thus, states with different values of $m_l + 2\,m_s$ acquire different energies.

Let us apply this result to a sodium atom. In the absence of a magnetic field, the six 3p states form two groups of four and two states, depending on the values of their total angular momentum. In the presence of an intense magnetic field, the 3p states are split into five groups. There is a state with $m_l + 2\,m_s = 2$, a state with $m_l + 2\,m_s = 1$, two states with $m_l + 2\,m_s = 0$, a state with $m_l + 2\,m_s = -1$, and a state with $m_l + 2\,m_s = -2$. These groups are equally spaced in energy, the energy difference between adjacent groups being $e\,\hbar\,B/2\,m_e$.

The energy-shift induced by the spin-orbit Hamiltonian is given by

$\Delta E_{n\,l\,m_l\,m_s} = \langle n, l, m_l, m_s|\,H_{LS}\,|n, l, m_l, m_s\rangle,$ (7.138)

where

$H_{LS} = \frac{1}{2\,m_e^{\,2}\,c^2}\,\frac{1}{r}\frac{dV}{dr}\,\mathbf{L}\cdot\mathbf{S}.$ (7.139)

Now,

$\langle\mathbf{L}\cdot\mathbf{S}\rangle = \langle L_z\,S_z\rangle + \langle L_+\,S_- + L_-\,S_+\rangle/2 = \hbar^2\,m_l\,m_s,$ (7.140)

since

$\langle L_\pm\rangle = \langle S_\pm\rangle = 0$ (7.141)

for expectation values taken between the simultaneous eigenkets of $L_z$ and $S_z$. Thus,

$\Delta E_{n\,l\,m_l\,m_s} = \frac{m_l\,m_s\,\hbar^2}{2\,m_e^{\,2}\,c^2}\left\langle\frac{1}{r}\frac{dV}{dr}\right\rangle.$ (7.142)

Let us apply the above result to a sodium atom. In the presence of an intense magnetic field, the 3p states are split into five groups with $(m_l, m_s)$ quantum numbers $(1, 1/2)$, $(0, 1/2)$, $(1, -1/2)$ or $(-1, 1/2)$, $(0, -1/2)$, and $(-1, -1/2)$, respectively, in order of decreasing energy. The spin-orbit term increases the energy of the highest energy state, does not affect the next highest energy state, decreases, but does not split, the energy of the doublet, does not affect the next lowest energy state, and increases the energy of the lowest energy state. The net result is that the five groups of states are no longer equally spaced in energy.

The typical magnetic field-strength needed to access the Paschen-Back limit is

$B_{PB} \sim \alpha^2\,\frac{e\,m_e}{\epsilon_0\,h\,a_0} \simeq 25\ {\rm tesla}.$ (7.143)

7.9 Hyperfine Structure

The proton in a hydrogen atom is a spin one-half charged particle, and therefore possesses a magnetic moment. By analogy with Equation (7.99), we can write

$\boldsymbol{\mu}_p = \frac{g_p\,e}{2\,m_p}\,\mathbf{S}_p,$ (7.144)

where $\boldsymbol{\mu}_p$ is the proton magnetic moment, $\mathbf{S}_p$ is the proton spin, and the proton gyromagnetic ratio $g_p$ is found experimentally to take the value 5.59. Note that the magnetic moment of a proton is much smaller (by a factor of order $m_e/m_p$) than that of an electron. According to classical electromagnetism, the proton's magnetic moment generates a magnetic field of the form

$\mathbf{B} = \frac{\mu_0}{4\pi\,r^3}\left[3\,(\boldsymbol{\mu}_p\cdot\mathbf{e}_r)\,\mathbf{e}_r - \boldsymbol{\mu}_p\right] + \frac{2\,\mu_0}{3}\,\boldsymbol{\mu}_p\,\delta(\mathbf{x}),$ (7.145)

where $\mathbf{e}_r = \mathbf{x}/r$, and $r = |\mathbf{x}|$. We can understand the origin of the delta-function term in the above expression by thinking of the proton as a tiny current loop centred on the origin. All magnetic field-lines generated by the loop must pass through the loop.
Hence, if the size of the loop goes to zero then the field will be infinite at the origin, and this contribution is what is reflected by the delta-function term. Now, the Hamiltonian of the electron in the magnetic field generated by the proton is simply

$H_1 = -\boldsymbol{\mu}_e\cdot\mathbf{B},$ (7.146)

where

$\boldsymbol{\mu}_e = -\frac{e}{m_e}\,\mathbf{S}_e.$ (7.147)

Here, $\boldsymbol{\mu}_e$ is the electron magnetic moment [see Equation (7.99)], and $\mathbf{S}_e$ the electron spin. Thus, the perturbing Hamiltonian is written

$H_1 = \frac{\mu_0\,g_p\,e^2}{8\pi\,m_p\,m_e}\,\frac{3\,(\mathbf{S}_p\cdot\mathbf{e}_r)\,(\mathbf{S}_e\cdot\mathbf{e}_r) - \mathbf{S}_p\cdot\mathbf{S}_e}{r^3} + \frac{\mu_0\,g_p\,e^2}{3\,m_p\,m_e}\,\mathbf{S}_p\cdot\mathbf{S}_e\,\delta(\mathbf{x}).$ (7.148)

Note that, because we have neglected coupling between the proton spin and the magnetic field generated by the electron's orbital motion, the above expression is only valid for $l = 0$ states.

According to standard first-order perturbation theory, the energy-shift induced by spin-spin coupling between the proton and the electron is the expectation value of the perturbing Hamiltonian. Hence,

$\Delta E = \frac{\mu_0\,g_p\,e^2}{8\pi\,m_p\,m_e}\left\langle\frac{3\,(\mathbf{S}_p\cdot\mathbf{e}_r)\,(\mathbf{S}_e\cdot\mathbf{e}_r) - \mathbf{S}_p\cdot\mathbf{S}_e}{r^3}\right\rangle + \frac{\mu_0\,g_p\,e^2}{3\,m_p\,m_e}\,\langle\mathbf{S}_p\cdot\mathbf{S}_e\rangle\,|\psi(0)|^2.$ (7.149)

For the ground-state of hydrogen, which is spherically symmetric, the first term in the above expression vanishes by symmetry. Moreover, it is easily demonstrated that $|\psi_{000}(0)|^2 = 1/(\pi\,a_0^{\,3})$. Thus, we obtain

$\Delta E_{000} = \frac{\mu_0\,g_p\,e^2}{3\pi\,m_p\,m_e\,a_0^{\,3}}\,\langle\mathbf{S}_p\cdot\mathbf{S}_e\rangle.$ (7.150)

Let

$\mathbf{S} = \mathbf{S}_e + \mathbf{S}_p$ (7.151)

be the total spin. We can show that

$\mathbf{S}_p\cdot\mathbf{S}_e = \frac{1}{2}\,(S^2 - S_e^{\,2} - S_p^{\,2}).$ (7.152)

Thus, the simultaneous eigenstates of the perturbing Hamiltonian and the main Hamiltonian are the simultaneous eigenstates of $S_e^{\,2}$, $S_p^{\,2}$, and $S^2$. However, both the proton and the electron are spin one-half particles. According to Chapter 6, when two spin one-half particles are combined (in the absence of orbital angular momentum) the net state has either spin 1 or spin 0. In fact, there are three spin 1 states, known as triplet states, and a single spin 0 state, known as the singlet state. For all states, the eigenvalues of $S_e^{\,2}$ and $S_p^{\,2}$ are $(3/4)\,\hbar^2$.
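The corresponding eigenvalues of $\mathbf{S}_p\cdot\mathbf{S}_e$ follow from Equation (7.152); they can also be obtained numerically by constructing $\mathbf{S}_p\cdot\mathbf{S}_e$ from the Pauli matrices via tensor products (a sketch in units $\hbar = 1$):

```python
import numpy as np

# Spin-1/2 operators (units hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Sp acts on the first tensor factor, Se on the second, so
# Sp . Se = sum_i kron(s_i, I) @ kron(I, s_i) = sum_i kron(s_i, s_i)
SpSe = sum(np.kron(s, I2) @ np.kron(I2, s) for s in (sx, sy, sz))

evals = np.sort(np.linalg.eigvalsh(SpSe))
# Expect a single singlet eigenvalue -3/4 and a threefold triplet +1/4.
```

The eigenvalue spectrum directly exhibits the singlet/triplet structure used in the hyperfine calculation.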
The eigenvalue of S² is 0 for the singlet state, and 2 ℏ² for the triplet states. Hence,

⟨Sp · Se⟩ = −(3/4) ℏ² (7.153)

for the singlet state, and

⟨Sp · Se⟩ = (1/4) ℏ² (7.154)

for the triplet states. It follows, from the above analysis, that spin-spin coupling breaks the degeneracy of the two (1s)1/2 states of the hydrogen atom, lifting the energy of the triplet configuration, and lowering that of the singlet. This splitting is known as hyperfine structure. The net energy difference between the singlet and the triplet states is

ΔE000 = (8/3) gp (me/mp) α² |E0| = 5.88 × 10⁻⁶ eV, (7.155)

where |E0| = 13.6 eV is the (magnitude of the) ground-state energy. Note that the hyperfine energy-shift is much smaller, by a factor me/mp, than a typical fine structure energy-shift (see Exercise 7.4). If we convert the above energy into a wavelength then we obtain

λ = 21.1 cm. (7.156)

This is the wavelength of the radiation emitted by a hydrogen atom that is collisionally excited from the singlet to the triplet state, and then decays back to the lower energy singlet state. The 21 cm line is famous in radio astronomy because it was used to map out the spiral structure of our galaxy in the 1950s.

Exercises

7.1 Calculate the energy-shift in the ground state of the one-dimensional harmonic oscillator when the perturbation

V = λ x⁴

is added to

H = px²/(2 m) + (1/2) m ω² x².

The properly normalized ground-state wavefunction is

ψ(x) = (m ω/(π ℏ))^{1/4} exp(−m ω x²/(2 ℏ)).

7.2 Calculate the energy-shifts due to the first-order Stark effect in the n = 3 state of a hydrogen atom. You do not need to perform all of the integrals, but you should construct the correct linear combinations of states.

7.3 The Hamiltonian of the valence electron in a hydrogen-like atom can be written

H = p²/(2 me) + V(r) − p⁴/(8 me³ c²).

Here, the final term on the right-hand side is the first-order correction due to the electron's relativistic mass increase.
Treating this term as a small perturbation, deduce that it causes an energy-shift in the energy eigenstate characterized by the standard quantum numbers n, l, m of

ΔE_{nlm} = −(1/(2 me c²)) (E_n² − 2 E_n ⟨V⟩ + ⟨V²⟩),

where E_n is the unperturbed energy.

7.4 Consider an energy eigenstate of the hydrogen atom characterized by the standard quantum numbers n, l, and m. Show that if the energy-shift due to spin-orbit coupling (see Section 7.7) is added to that due to the electron's relativistic mass increase (see previous exercise) then the net fine structure energy-shift can be written

ΔE_{nlm} = (α² E_n/n²) [n/(j + 1/2) − 3/4].

Here, E_n is the unperturbed energy, α the fine structure constant, and j = l ± 1/2 the quantum number associated with the magnitude of the sum of the electron's orbital and spin angular momenta. You will need to use the following standard results for a hydrogen atom:

⟨a0/r⟩ = 1/n²,
⟨a0²/r²⟩ = 1/[(l + 1/2) n³],
⟨a0³/r³⟩ = 1/[l (l + 1/2)(l + 1) n³].

Here, a0 is the Bohr radius. Assuming that the above formula for the energy shift is valid for l = 0 (which it is), show that fine structure causes the energy of the (2p)3/2 states of a hydrogen atom to exceed those of the (2p)1/2 and (2s)1/2 states by 4.5 × 10⁻⁵ eV.

8 Time-Dependent Perturbation Theory

8.1 Introduction

Suppose that the Hamiltonian of the system under consideration can be written

H = H0 + H1(t), (8.1)

where H0 does not contain time explicitly, and H1 is a small time-dependent perturbation. It is assumed that we are able to calculate the eigenkets of the unperturbed Hamiltonian:

H0 |n⟩ = En |n⟩. (8.2)

We know that if the system is in one of the eigenstates of H0 then, in the absence of the external perturbation, it remains in this state for ever.
However, the presence of a small time-dependent perturbation can, in principle, give rise to a finite probability that a system initially in some eigenstate |i⟩ of the unperturbed Hamiltonian is found in some other eigenstate at a subsequent time (because |i⟩ is no longer an exact eigenstate of the total Hamiltonian). In other words, a time-dependent perturbation allows the system to make transitions between its unperturbed energy eigenstates. Let us investigate such transitions.

8.2 General Analysis

Suppose that at t = t0 the state of the system is represented by

|A⟩ = Σn cn |n⟩, (8.3)

where the cn are complex numbers. Thus, the initial state is some linear superposition of the unperturbed energy eigenstates. In the absence of the time-dependent perturbation, the time evolution of the system is given by

|A, t0, t⟩ = Σn cn exp[−i En (t − t0)/ℏ] |n⟩. (8.4)

Now, the probability of finding the system in state |n⟩ at time t is

Pn(t) = |cn exp[−i En (t − t0)/ℏ]|² = |cn|² = Pn(t0). (8.5)

Clearly, with H1 = 0, the probability of finding the system in state |n⟩ at time t is exactly the same as the probability of finding the system in this state at the initial time t0. However, with H1 ≠ 0, we expect Pn(t) to vary with time. Thus, we can write

|A, t0, t⟩ = Σn cn(t) exp[−i En (t − t0)/ℏ] |n⟩, (8.6)

where Pn(t) = |cn(t)|². Here, we have carefully separated the fast phase oscillation of the eigenkets, which depends on the unperturbed Hamiltonian, from the slow variation of the amplitudes cn(t), which depends entirely on the perturbation (i.e., cn is constant if H1 = 0). Note that the eigenkets |n⟩, appearing in Equation (8.6), are time-independent (they are actually the eigenkets of H0 evaluated at the time t0). Schrödinger's time evolution equation yields

iℏ ∂/∂t |A, t0, t⟩ = H |A, t0, t⟩ = (H0 + H1) |A, t0, t⟩. (8.7)

It follows from Equation (8.6) that

(H0 + H1) |A, t0, t⟩ = Σm cm(t) exp[−i Em (t − t0)/ℏ] (Em + H1) |m⟩. (8.8)

We also have

iℏ ∂/∂t |A, t0, t⟩ = Σm [iℏ dcm/dt + cm(t) Em] exp[−i Em (t − t0)/ℏ] |m⟩, (8.9)

where use has been made of the time-independence of the kets |m⟩. According to Equation (8.7), we can equate the right-hand sides of the previous two equations to obtain

Σm iℏ (dcm/dt) exp[−i Em (t − t0)/ℏ] |m⟩ = Σm cm(t) exp[−i Em (t − t0)/ℏ] H1 |m⟩. (8.10)

Left-multiplication by ⟨n| yields

iℏ dcn/dt = Σm Hnm(t) exp[ i ωnm (t − t0)] cm(t), (8.11)

where

Hnm(t) = ⟨n| H1(t) |m⟩, (8.12)

and

ωnm = (En − Em)/ℏ. (8.13)

Here, we have made use of the standard orthonormality result, ⟨n|m⟩ = δnm. Suppose that there are N linearly independent eigenkets of the unperturbed Hamiltonian. According to Equation (8.11), the time variation of the coefficients cn, which specify the probability of finding the system in state |n⟩ at time t, is determined by N coupled first-order differential equations. Note that Equation (8.11) is exact—we have made no approximations at this stage. Unfortunately, we cannot generally find exact solutions to this equation, so we have to obtain approximate solutions via suitable expansions in small quantities. However, for the particularly simple case of a two-state system (i.e., N = 2), it is actually possible to solve Equation (8.11) without approximation. This solution is of great practical importance.

8.3 Two-State System

Consider a system in which the time-independent Hamiltonian possesses two eigenstates, denoted

H0 |1⟩ = E1 |1⟩, (8.14)
H0 |2⟩ = E2 |2⟩. (8.15)

Suppose, for the sake of simplicity, that the diagonal matrix elements of the interaction Hamiltonian, H1, are zero:

⟨1| H1 |1⟩ = ⟨2| H1 |2⟩ = 0. (8.16)

The off-diagonal matrix elements are assumed to oscillate sinusoidally at some frequency ω:

⟨1| H1 |2⟩ = ⟨2| H1 |1⟩* = γ exp( i ω t), (8.17)

where γ and ω are real. Note that it is only the off-diagonal matrix elements that give rise to the effect which we are interested in—namely, transitions between states 1 and 2.
For a two-state system, Equation (8.11) reduces to

iℏ dc1/dt = γ exp[+i (ω − ω21) t] c2, (8.18)
iℏ dc2/dt = γ exp[−i (ω − ω21) t] c1, (8.19)

where ω21 = (E2 − E1)/ℏ, and it is assumed that t0 = 0. Equations (8.18) and (8.19) can be combined to give a second-order differential equation for the time variation of the amplitude c2:

d²c2/dt² + i (ω − ω21) dc2/dt + (γ²/ℏ²) c2 = 0. (8.20)

Once we have solved for c2, we can use Equation (8.19) to obtain the amplitude c1. Let us look for a solution in which the system is certain to be in state 1 at time t = 0. Thus, our initial conditions are c1(0) = 1 and c2(0) = 0. It is easily demonstrated that the appropriate solutions are

c2(t) = −[ (i γ/ℏ)/(γ²/ℏ² + (ω − ω21)²/4)^{1/2} ] exp[−i (ω − ω21) t/2] sin{[γ²/ℏ² + (ω − ω21)²/4]^{1/2} t}, (8.21)

c1(t) = exp[ i (ω − ω21) t/2] cos{[γ²/ℏ² + (ω − ω21)²/4]^{1/2} t} − [ (i (ω − ω21)/2)/(γ²/ℏ² + (ω − ω21)²/4)^{1/2} ] exp[ i (ω − ω21) t/2] sin{[γ²/ℏ² + (ω − ω21)²/4]^{1/2} t}. (8.22)

The probability of finding the system in state 1 at time t is simply P1(t) = |c1|². Likewise, the probability of finding the system in state 2 at time t is P2(t) = |c2|². It follows that

P2(t) = [ (γ²/ℏ²)/(γ²/ℏ² + (ω − ω21)²/4) ] sin²{[γ²/ℏ² + (ω − ω21)²/4]^{1/2} t}, (8.23)

P1(t) = 1 − P2(t). (8.24)

Equation (8.23) exhibits all the features of a classic resonance. At resonance, when the oscillation frequency of the perturbation, ω, matches the frequency ω21, we find that

P1(t) = cos²(γ t/ℏ), (8.25)
P2(t) = sin²(γ t/ℏ). (8.26)

According to the above result, the system starts off at t = 0 in state 1. After a time interval π ℏ/(2 γ), it is certain to be in state 2. After a further time interval π ℏ/(2 γ), it is certain to be in state 1, and so on. In other words, the system periodically flip-flops between states 1 and 2 under the influence of the time-dependent perturbation. This implies that the system alternately absorbs and emits energy from the source of the perturbation.
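The solution quoted in Equations (8.21)-(8.24) can be spot-checked by integrating Equations (8.18)-(8.19) directly. The following sketch (Python; ℏ = 1, and the values of γ, the detuning, and the step count are arbitrary test choices) propagates the exact coupled amplitude equations with a fourth-order Runge-Kutta step and compares the result with Equation (8.23):

```python
import cmath, math

hbar = 1.0
gamma = 0.3   # off-diagonal coupling strength (arbitrary test value)
delta = 0.4   # detuning omega - omega21 (arbitrary test value)

def rhs(t, c1, c2):
    # Equations (8.18)-(8.19): i*hbar*dc1/dt = gamma*exp(+i*delta*t)*c2, etc.
    d1 = -1j * (gamma / hbar) * cmath.exp(+1j * delta * t) * c2
    d2 = -1j * (gamma / hbar) * cmath.exp(-1j * delta * t) * c1
    return d1, d2

def integrate(tmax, n=5000):
    # classic RK4, starting from c1(0) = 1, c2(0) = 0
    dt = tmax / n
    c1, c2 = 1.0 + 0j, 0.0 + 0j
    for k in range(n):
        t = k * dt
        k1 = rhs(t, c1, c2)
        k2 = rhs(t + dt/2, c1 + dt/2*k1[0], c2 + dt/2*k1[1])
        k3 = rhs(t + dt/2, c1 + dt/2*k2[0], c2 + dt/2*k2[1])
        k4 = rhs(t + dt, c1 + dt*k3[0], c2 + dt*k3[1])
        c1 += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        c2 += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return c1, c2

def P2_analytic(t):
    # Equation (8.23)
    Omega = math.sqrt((gamma / hbar)**2 + delta**2 / 4)
    return (gamma / hbar)**2 / Omega**2 * math.sin(Omega * t)**2

c1, c2 = integrate(5.0)
print(abs(c1)**2 + abs(c2)**2)        # probability is conserved (= 1)
print(abs(c2)**2, P2_analytic(5.0))   # numerics agree with Equation (8.23)
```

Setting delta = 0 in the same code reproduces the on-resonance flip-flop behaviour of Equations (8.25)-(8.26).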
The absorption-emission cycle also takes place away from the resonance, when ω ≠ ω21. However, the amplitude of oscillation of the coefficient c2 is reduced. This means that the maximum value of P2(t) is no longer unity, nor is the minimum value of P1(t) zero. In fact, if we plot the maximum value of P2(t) as a function of the applied frequency, ω, then we obtain a resonance curve whose maximum (unity) lies at the resonance, and whose full-width half-maximum (in frequency) is 4 γ/ℏ. Thus, if the applied frequency differs from the resonant frequency by substantially more than 2 γ/ℏ then the probability of the system jumping from state 1 to state 2 is very small. In other words, the time-dependent perturbation is only effective at causing transitions between states 1 and 2 if its frequency of oscillation lies in the approximate range ω21 ± 2 γ/ℏ. Clearly, the weaker the perturbation (i.e., the smaller γ becomes), the narrower the resonance.

8.4 Spin Magnetic Resonance

Consider a bound electron placed in a uniform z-directed magnetic field, and then subjected to a small time-dependent magnetic field rotating in the x-y plane. Thus,

B = B0 ez + B1 [cos(ω t) ex + sin(ω t) ey], (8.27)

where B0 and B1 are constants, with B1 ≪ B0. The rotating magnetic field usually represents the magnetic component of an electromagnetic wave propagating along the z-axis. In this system, the electric component of the wave has no effect. The Hamiltonian is written

H = −μ · B = H0 + H1, (8.28)

where

H0 = (e B0/me) Sz, (8.29)

and

H1 = (e B1/me) [cos(ω t) Sx + sin(ω t) Sy]. (8.30)

The eigenstates of the unperturbed Hamiltonian are the 'spin up' and 'spin down' states, denoted |+⟩ and |−⟩, respectively. Thus,

H0 |±⟩ = ±(e B0 ℏ/(2 me)) |±⟩. (8.31)

The time-dependent Hamiltonian can be written

H1 = (e B1/(2 me)) [exp( i ω t) S− + exp(−i ω t) S+], (8.32)

where S± = Sx ± i Sy are the conventional raising and lowering operators for spin angular momentum.
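The rewriting of (8.30) as (8.32), and the matrix elements it produces, are easy to confirm with explicit spin-1/2 matrices. A minimal sketch (Python; ℏ = 1 and the prefactor e B1/me is set to 1, both arbitrary normalizations, with ω and t arbitrary test values):

```python
import cmath, math

# spin-1/2 matrices with hbar = 1: S = sigma/2; basis index 0 = |+>, 1 = |->
Sx = [[0, 0.5], [0.5, 0]]
Sy = [[0, -0.5j], [0.5j, 0]]

def lincomb(a, A, b, B):
    # a*A + b*B for 2x2 matrices
    return [[a * A[i][j] + b * B[i][j] for j in range(2)] for i in range(2)]

Splus  = lincomb(1, Sx, 1j, Sy)    # S+ = Sx + i Sy
Sminus = lincomb(1, Sx, -1j, Sy)   # S- = Sx - i Sy

omega, t = 0.9, 1.3                # arbitrary test values

# (8.30):  cos(wt) Sx + sin(wt) Sy      (8.32): [e^{iwt} S- + e^{-iwt} S+]/2
lhs = lincomb(math.cos(omega*t), Sx, math.sin(omega*t), Sy)
rhs = lincomb(0.5*cmath.exp(1j*omega*t), Sminus,
              0.5*cmath.exp(-1j*omega*t), Splus)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))

print(lhs[0][0], lhs[1][1])   # diagonal matrix elements vanish
print(lhs[1][0])              # <-|H1|+> = (1/2) exp(i omega t) in these units
```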
It follows that

⟨+| H1 |+⟩ = ⟨−| H1 |−⟩ = 0, (8.33)

and

⟨−| H1 |+⟩ = ⟨+| H1 |−⟩* = (e B1 ℏ/(2 me)) exp( i ω t). (8.34)

It can be seen that this system is exactly the same as the two-state system discussed in the previous section, provided that we make the identifications

|1⟩ → |−⟩, (8.35)
|2⟩ → |+⟩, (8.36)
ω21 → e B0/me, (8.37)
γ → e B1 ℏ/(2 me). (8.38)

The resonant frequency, ω21, is simply the spin precession frequency for an electron in a uniform magnetic field of strength B0. In the absence of the perturbation, the expectation values of Sx and Sy oscillate because of the spin precession, but the expectation value of Sz remains invariant. If we now apply a magnetic perturbation rotating at the resonant frequency then, according to the analysis of the previous section, the system undergoes a succession of spin-flips, in addition to the spin precession. We also know that if the oscillation frequency of the applied field is very different from the resonant frequency then there is virtually zero probability of the field triggering a spin-flip. The width of the resonance (in frequency) is determined by the strength of the oscillating magnetic perturbation. Experimentalists are able to measure the magnetic moments of electrons, and other spin one-half particles, to a high degree of accuracy by placing the particles in a magnetic field, and subjecting them to an oscillating magnetic field whose frequency is gradually scanned. By determining the resonant frequency (i.e., the frequency at which the particles absorb energy from the oscillating field), it is possible to calculate the magnetic moment.

8.5 Dyson Series

Let us now try to find approximate solutions of Equation (8.11) for a general system. It is convenient to work in terms of the time evolution operator, U(t0, t), which is defined such that

|A, t0, t⟩ = U(t0, t) |A⟩. (8.39)

Here, |A, t0, t⟩ is the state ket of the system at time t, given that the state ket at the initial time t0 is |A⟩.
It is easily seen that the time evolution operator satis?es the di?erential equation i ?U(t0, t) ?t = (H0 + H1) U(t0, t), (8.40) subject to the initial condition U(t0, t0) = 1. (8.41) In the absence of the external perturbation, the time evolution operator reduces to U(t0, t) = exp[?i H0 (t ? t0)/ ]. (8.42) Let us switch on the perturbation and look for a solution of the form U(t0, t) = exp[?i H0 (t ? t0)/ ] UI(t0, t). (8.43) It is readily demonstrated that UI satis?es the di?erential equation i ?UI(t0, t) ?t = HI(t0, t) UI(t0, t), (8.44) where HI(t0, t) = exp[+i H0 (t ? t0)/ ] H1 exp[?i H0 (t ? t0)/ ], (8.45) subject to the initial condition UI(t0, t0) = 1. (8.46) Note that UI speci?es that component of the time evolution operator which is due to the time- dependent perturbation. Thus, we would expect UI to contain all of the information regarding transitions between di?erent eigenstates of H0 caused by the perturbation. Suppose that the system starts o? at time t0 in the eigenstate |i of the unperturbed Hamiltonian. The subsequent evolution of the state ket is given by Equation (8.6), |i, t0, t = m cm(t) exp[?i Em (t ? t0)/ ] |m . (8.47) However, we also have |i, t0, t = exp[?i H0 (t ? t0)/ ] UI(t0, t) |i . (8.48) It follows that cn(t) = n| UI(t0, t) |i , (8.49) where use has been made of n|m = δn m. Thus, the probability that the system is found in state |n at time t, given that it is de?nitely in state |i at time t0, is simply Pi→n(t0, t) = | n| UI(t0, t) |i |2 . (8.50) This quantity is usually termed the transition probability between states |i and |n . Time-Dependent Perturbation Theory 129 Note that the di?erential equation (8.44), plus the initial condition (8.46), are equivalent to the following integral equation, UI(t0, t) = 1 ? i t t0 dt′ HI(t0, t′ ) UI(t0, t′ ). (8.51) We can obtain an approximate solution to this equation by iteration: UI(t0, t) ? 1 ? i t t0 HI(t0, t′ ) 1 ? i t′ t0 dt′ HI(t0, t′′ ) UI(t0, t′′ ) ? 1 ? 
i t t0 HI(t0, t′ ) dt′ + ?i 2 t t0 dt′ t′ t0 dt′′ HI(t0, t′ ) HI(t0, t′′ 8.52) This expansion is known as the Dyson series. Let cn = c(0) n + c(1) n + c(2) n 8.53) where the superscript (1) refers to a ?rst-order term in the expansion, etc. It follows from Equa- tions (8.49) and (8.52) that c(0) n (t) = δi n, (8.54) c(1) n (t) = ? i t t0 dt′ n| HI(t0, t′ ) |i , (8.55) c(2) n (t) = ?i 2 t t0 dt′ t′ t0 dt′′ n| HI(t0, t′ ) HI(t0, t′′ ) |i . (8.56) These expressions simplify to c(0) n (t) = δin, (8.57) c(1) n (t) = ? i t t0 dt′ exp[ i ωni (t′ ? t0)] Hni(t′ ), (8.58) c(2) n (t) = ?i 2 m t t0 dt′ t′ t0 dt′′ exp[ i ωnm (t′ ? t0)] Hnm(t′ ) exp[ i ωmi (t′′ ? t0)] Hmi(t′′ ), (8.59) where ωnm = En ? Em , (8.60) and Hnm(t) = n| H1(t) |m . (8.61) The transition probability between states i and n is simply Pi→n(t0, t) = |c(0) n + c(1) n + c(2) n 2 . (8.62) 130 QUANTUM MECHANICS According to the above analysis, there is no chance of a transition between states |i and |n (where i n) to zeroth order (i.e., in the absence of the perturbation). To ?rst order, the transition probability is proportional to the time integral of the matrix element n| H1 |i , weighted by some oscillatory phase-factor. Thus, if the matrix element is zero then there is no chance of a ?rst-order transition between states |i and |n . However, to second order, a transition between states |i and |n is possible even when the matrix element n| H1 |i is zero. 8.6 Sudden Perturbations Consider, for example, a constant perturbation that is suddenly switched on at time t = 0: H1(t) = 0 for t < 0 H1(t) = H1 for t ≥ 0, (8.63) where H1 is time-independent, but is generally a function of the position, momentum, and spin operators. Suppose that the system is de?nitely in state |i at time t = 0. According to Equa- tions (8.57)–(8.59) (with t0 = 0), c(0) n (t) = δi n, (8.64) c(1) n (t) = ? i Hni t 0 dt′ exp[ i ωni (t′ ? t)] = Hni En ? Ei [1 ? exp( i ωni t)], (8.65) giving Pi→n(t) ? |c(1) n |2 = 4 |Hni|2 |En ? 
Ei|2 sin2 (En ? Ei) t 2 , (8.66) for i n. The transition probability between states |i and |n can be written Pi→n(t) = |Hni|2 t2 2 sinc2 (En ? Ei) t 2 , (8.67) where sinc(x) ≡ sin x x . (8.68) The sinc function is highly oscillatory, and decays like 1/|x| at large |x|. It is a good approximation to say that sinc(x) is small except when |x| < ? π. It follows that the transition probability, Pi→n, is small except when |En ? Ei| < ? 2π t . (8.69) Note that in the limit t → ∞ only those transitions that conserve energy (i.e., En = Ei) have an appreciable probability of occurrence. At ?nite t, is is possible to have transitions which do not exactly conserve energy, provided that ?E ?t < ? h, (8.70) Time-Dependent Perturbation Theory 131 where ?E = |En ?Ei| is the change in energy of the system associated with the transition, and ?t = t is the time elapsed since the perturbation was switched on. This result is just a manifestation of the well-known uncertainty relation for energy and time. Incidentally, the energy-time uncertainty relation is fundamentally di?erent to the position-momentum uncertainty relation, because (in non- relativistic quantum mechanics) position and momentum are operators, whereas time is merely a parameter. The probability of a transition that conserves energy (i.e., En = Ei) is Pi→n(t) = |Hin|2 t2 2 , (8.71) where use has been made of sinc(0) = 1. Note that this probability grows quadratically with time. This result is somewhat surprising, because it implies that the probability of a transition occurring in a ?xed time interval, t to t + dt, grows linearly with t, despite the fact that H1 is constant for t > 0. In practice, there is usually a group of ?nal states, all possessing nearly the same energy as the energy of the initial state |i . It is helpful to de?ne the density of states, ρ(E), where the number of ?nal states lying in the energy range E to E + dE is given by ρ(E) dE. 
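The claims just made about the sinc function in Equation (8.68) — that sinc²(x) is concentrated in the central lobe |x| ≲ π, and that its total integral is π — are easy to confirm numerically. A short sketch (Python; the grid spacing and cutoff are arbitrary choices):

```python
import math

def sinc(x):
    # Equation (8.68): sinc(x) = sin(x)/x, with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(x) / x

def integral(a, b, n=200000):
    # midpoint-rule integration of sinc^2 over [a, b]
    h = (b - a) / n
    return sum(sinc(a + (k + 0.5) * h) ** 2 for k in range(n)) * h

total = integral(-1000.0, 1000.0)     # sinc^2 decays like 1/x^2, so this converges
central = integral(-math.pi, math.pi)
print(total)             # ~ pi
print(central / total)   # ~ 0.90: most of the weight lies in |x| < pi
```

In other words, roughly 90 per cent of the transition probability comes from final states with |En − Ei| t/(2ℏ) ≲ π, in accordance with Equation (8.69).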
Thus, the probability of a transition from the initial state i to any of the continuum of possible ?nal states is Pi→(t) = dEn Pi→n(t) ρ(En), (8.72) giving Pi→(t) = 2 t dx |Hni|2 ρ(En) sinc2 (x), (8.73) where x = (En ? Ei) t/2 , (8.74) and use has been made of Equation (8.67). We know that in the limit t → ∞ the function sinc(x) is only non-zero in an in?nitesimally narrow range of ?nal energies centered on En = Ei. It follows that, in this limit, we can take ρ(En) and |Hni|2 out of the integral in the above formula to obtain Pi→[n](t) = 2π |Hni|2 ρ(En) t En?Ei , (8.75) where Pi→[n] denotes the transition probability between the initial state |i and all ?nal states |n that have approximately the same energy as the initial state. Here, |Hni|2 is the average of |Hni|2 over all ?nal states with approximately the same energy as the initial state. In deriving the above formula, we have made use of the result ∞ ?∞ dx sinc2 (x) = π. (8.76) Note that the transition probability, Pi→[n], is now proportional to t, instead of t2 . It is convenient to de?ne the transition rate, which is simply the transition probability per unit time. Thus, wi→[n] = dPi→[n] dt , (8.77) 132 QUANTUM MECHANICS giving wi→[n] = 2π |Hni|2 ρ(En) En?Ei . (8.78) This appealingly simple result is known as Fermi's golden rule. Note that the transition rate is constant in time (for t > 0): i.e., the probability of a transition occurring in the time interval t to t + dt is independent of t for ?xed dt. Fermi's golden rule is sometimes written wi→n = 2π |Hni|2 δ(En ? E), (8.79) where it is understood that this formula must be integrated with dEn ρ(En) to obtain the actual transition rate. Let us now calculate the second-order term in the Dyson series, using the constant perturbation (8.63). From Equation (8.59) we ?nd that c(2) n (t) = ?i 2 m HnmHmi t 0 dt′ exp( i ωnm t′ ) t′ 0 dt′′ exp( i ωmi t ) = i m Hnm Hmi Em ? Ei t 0 dt′ exp( i ωni t′ ) ? exp( i ωnm t′ ] = i t m HnmHmi Em ? 
Ei exp( i ωni t/2) sinc(ωni t/2) ? exp( i ωnm t/2) sinc(ωnm t/2) . (8.80) Thus, cn(t) = c(1) n + c(2) n = i t exp( i ωni t/2) ? ? ? ? ? ? ? ? ? ? ? ? ? ?Hni + m Hnm Hmi Em ? Ei ? ? ? ? ? ? ? sinc(ωni t/2) ? m Hnm Hmi Em ? Ei exp( i ωim t/2) sinc(ωnm t/2) ? ? ? ? ? ? ? , (8.81) where use has been made of Equation (8.65). It follows, by analogy with the previous analysis, that wi→[n] = 2π Hni + m Hnm Hmi Em ? Ei 2 ρ(En) En?Ei , (8.82) where the transition rate is calculated for all ?nal states, |n , with approximately the same energy as the initial state, |i , and for intermediate states, |m whose energies di?er from that of the initial state. The fact that Em Ei causes the last term on the right-hand side of Equation (8.81) to average to zero (due to the oscillatory phase-factor) during the evaluation of the transition probability. According to Equation (8.82), a second-order transition takes place in two steps. First, the system makes a non-energy-conserving transition to some intermediate state |m . Subsequently, the system makes another non-energy-conserving transition to the ?nal state |n . The net transition, Time-Dependent Perturbation Theory 133 from |i to |n , conserves energy. The non-energy-conserving transitions are generally termed virtual transitions, whereas the energy conserving ?rst-order transition is termed a real transition. The above formula clearly breaks down if Hnm Hmi 0 when Em = Ei. This problem can be avoided by gradually turning on the perturbation: i.e., H1 → exp(η t) H1 (where η is very small). The net result is to change the energy denominator in Equation (8.82) from Ei ?Em to Ei ?Em +i η. 8.7 Energy-Shifts and Decay-Widths We have examined how a state |n , other than the initial state |i , becomes populated as a result of some time-dependent perturbation applied to the system. Let us now consider how the initial state becomes depopulated. In this case, it is convenient to gradually turn on the perturbation from zero at t = ?∞. 
Thus,

H1(t) = exp(η t) H1, (8.83)

where η is small and positive, and H1 is a constant. In the remote past, t → −∞, the system is assumed to be in the initial state |i⟩. Thus, ci(t → −∞) = 1, and cn≠i(t → −∞) = 0. Basically, we want to calculate the time evolution of the coefficient ci(t). First, however, let us check that our previous Fermi golden rule result still applies when the perturbing potential is turned on slowly, instead of very suddenly. For cn≠i(t) we have from Equations (8.57)–(8.58) that

c(0)n(t) = 0, (8.84)

c(1)n(t) = −(i/ℏ) Hni ∫_{−∞}^{t} dt′ exp[(η + i ωni) t′] = −(i/ℏ) Hni exp[(η + i ωni) t]/(η + i ωni), (8.85)

where Hni = ⟨n| H1 |i⟩. It follows that, to first order, the transition probability from state |i⟩ to state |n⟩ is

Pi→n(t) = |c(1)n|² = (|Hni|²/ℏ²) exp(2 η t)/(η² + ωni²). (8.86)

The transition rate is given by

wi→n(t) = dPi→n/dt = (2 |Hni|²/ℏ²) η exp(2 η t)/(η² + ωni²). (8.87)

Consider the limit η → 0. In this limit, exp(η t) → 1, but

lim_{η→0} η/(η² + ωni²) = π δ(ωni) = π ℏ δ(En − Ei). (8.88)

Thus, Equation (8.87) yields the standard Fermi golden rule result

wi→n = (2π/ℏ) |Hni|² δ(En − Ei). (8.89)

It is clear that the delta-function in the above formula actually represents a function that is highly peaked at some particular energy. The width of the peak is determined by how fast the perturbation is switched on. Let us now calculate ci(t) using Equations (8.57)–(8.59). We have

c(0)i(t) = 1, (8.90)

c(1)i(t) = −(i/ℏ) Hii ∫_{−∞}^{t} exp(η t′) dt′ = −(i/ℏ) Hii exp(η t)/η, (8.91)

c(2)i(t) = (−i/ℏ)² Σm |Hmi|² ∫_{−∞}^{t} dt′ ∫_{−∞}^{t′} dt′′ exp[(η + i ωim) t′] exp[(η + i ωmi) t′′] = (−i/ℏ)² Σm |Hmi|² exp(2 η t)/[2 η (η + i ωmi)]. (8.92)

Thus, to second order we have

ci(t) ≈ 1 − (i/ℏ) Hii exp(η t)/η + (−i/ℏ)² |Hii|² exp(2 η t)/(2 η²) − (i/ℏ) Σ_{m≠i} |Hmi|² exp(2 η t)/[2 η (Ei − Em + i ℏ η)]. (8.93)

Let us now consider the ratio ċi/ci, where ċi ≡ dci/dt. Using Equation (8.93), we can evaluate this ratio in the limit η → 0. We obtain

ċi/ci ≈ [−(i/ℏ) Hii + (−i/ℏ)² |Hii|²/η − (i/ℏ) Σ_{m≠i} |Hmi|²/(Ei − Em + i ℏ η)] / [1 − (i/ℏ) Hii/η]
≈ −(i/ℏ) Hii + lim_{η→0} [−(i/ℏ) Σ_{m≠i} |Hmi|²/(Ei − Em + i ℏ η)]. (8.94)

This result is formally correct to second order in perturbed quantities. Note that the right-hand side of Equation (8.94) is independent of time. We can write

ċi/ci = −(i/ℏ) Δi, (8.95)

where

Δi = Hii + lim_{η→0} Σ_{m≠i} |Hmi|²/(Ei − Em + i ℏ η) (8.96)

is a constant. According to a well-known result in pure mathematics,

lim_{ε→0} 1/(x + i ε) = P(1/x) − i π δ(x), (8.97)

where ε > 0, and P denotes the principal part. It follows that

Δi = Hii + P Σ_{m≠i} |Hmi|²/(Ei − Em) − i π Σ_{m≠i} |Hmi|² δ(Ei − Em). (8.98)

It is convenient to normalize the solution of Equation (8.95) such that ci(0) = 1. Thus, we obtain

ci(t) = exp(−i Δi t/ℏ). (8.99)

According to Equation (8.6), the time evolution of the initial state ket |i⟩ is given by

|i, t⟩ = exp[−i (Δi + Ei) t/ℏ] |i⟩. (8.100)

We can rewrite this result as

|i, t⟩ = exp(−i [Ei + Re(Δi)] t/ℏ) exp[ Im(Δi) t/ℏ] |i⟩. (8.101)

It is clear that the real part of Δi gives rise to a simple shift in energy of state |i⟩, whereas the imaginary part of Δi governs the growth or decay of this state. Thus,

|i, t⟩ = exp[−i (Ei + ΔEi) t/ℏ] exp[−Γi t/(2 ℏ)] |i⟩, (8.102)

where

ΔEi = Re(Δi) = Hii + P Σ_{m≠i} |Hmi|²/(Ei − Em), (8.103)

and

Γi = −2 Im(Δi) = 2π Σ_{m≠i} |Hmi|² δ(Ei − Em). (8.104)

Note that the energy-shift ΔEi is the same as that predicted by standard time-independent perturbation theory. The probability of observing the system in state |i⟩ at time t > 0, given that it is definitely in state |i⟩ at time t = 0, is given by

Pi→i(t) = |ci|² = exp(−Γi t/ℏ), (8.105)

where

Γi/ℏ = Σ_{m≠i} wi→m. (8.106)

Here, use has been made of Equation (8.79). Clearly, the rate of decay of the initial state is a simple function of the transition rates to the other states. Note that the system conserves probability up to second order in perturbed quantities, because

|ci|² + Σ_{m≠i} |cm|² ≈ (1 − Γi t/ℏ) + Σ_{m≠i} wi→m t = 1. (8.107)

The quantity Γi is called the decay-width of state |i⟩. It is closely related to the mean lifetime of this state,

τi = ℏ/Γi, (8.108)

where

Pi→i = exp(−t/τi). (8.109)

According to Equation (8.101), the amplitude of state |i⟩ both oscillates and decays as time progresses. Clearly, state |i⟩ is not a stationary state in the presence of the time-dependent perturbation. However, we can still represent it as a superposition of stationary states (whose amplitudes simply oscillate in time). Thus,

exp[−i (Ei + ΔEi) t/ℏ] exp[−Γi t/(2 ℏ)] = ∫ dE f(E) exp(−i E t/ℏ), (8.110)

where f(E) is the weight of the stationary state with energy E in the superposition. The Fourier inversion theorem yields

|f(E)|² ∝ 1/{(E − [Ei + Re(Δi)])² + Γi²/4}. (8.111)

In the absence of the perturbation, |f(E)|² is basically a delta-function centered on the unperturbed energy Ei of state |i⟩. In other words, state |i⟩ is a stationary state whose energy is completely determined. In the presence of the perturbation, the energy of state |i⟩ is shifted by Re(Δi). The fact that the state is no longer stationary (i.e., it decays in time) implies that its energy cannot be exactly determined. Indeed, the energy of the state is smeared over some region of width (in energy) Γi centered around the shifted energy Ei + Re(Δi). The faster the decay of the state (i.e., the larger Γi), the more its energy is spread out. This effect is clearly a manifestation of the energy-time uncertainty relation ΔE Δt ∼ ℏ. One consequence of this effect is the existence of a natural width of spectral lines associated with the decay of some excited state to the ground state (or any other lower energy state). The uncertainty in energy of the excited state, due to its propensity to decay, gives rise to a slight smearing (in wavelength) of the spectral line associated with the transition. Strong lines, which correspond to fast transitions, are smeared out more than weak lines.
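The Lorentzian line-shape of Equation (8.111) can be checked directly: the Fourier weight of a decaying oscillation amplitude is a Lorentzian of full width Γ. A small numerical sketch (Python; ℏ = 1, and the values of E0 and Γ are arbitrary test choices):

```python
import cmath

# f(E) is proportional to the Fourier transform of the decaying amplitude
# exp(-i E0 t) exp(-Gamma t / 2) for t > 0  (hbar = 1)
E0, Gamma = 1.0, 0.05

def f(E, T=1000.0, n=100000):
    # crude rectangle-rule integration; exp(-Gamma*T/2) is negligible by t = T
    dt = T / n
    return sum(cmath.exp((1j * (E - E0) - Gamma / 2) * (k * dt)) * dt
               for k in range(n))

def lorentzian(E):
    # shape on the right-hand side of Equation (8.111)
    return 1.0 / ((E - E0) ** 2 + Gamma ** 2 / 4)

for E in (E0, E0 + Gamma / 2, E0 + 5 * Gamma):
    print(abs(f(E)) ** 2 / lorentzian(E))   # ~1 at each E: |f|^2 is Lorentzian
```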
For this reason, spectroscopists generally favor forbidden lines (see Section 8.10) for Doppler-shift measurements. Such lines are not as bright as those corresponding to allowed transitions, but they are a lot sharper. 8.8 Harmonic Perturbations Consider a perturbation that oscillates sinusoidally in time. This is usually called a harmonic perturbation. Thus, H1(t) = V exp( i ω t) + V? exp(?i ω t), (8.112) where V is, in general, a function of position, momentum, and spin operators. Let us initiate the system in the eigenstate |i of the unperturbed Hamiltonian, H0, and switch on the harmonic perturbation at t = 0. It follows from Equation (8.58) that c(1) n = ?i t 0 dt′ Vni exp(i ω t′ ) + V? ni exp(?i ω t′ ) exp( i ωni t′ ) = 1 1 ? exp[ i (ωni + ω) t] ωni + ω Vni + 1 ? exp[ i (ωni ? ω) t] ωni ? ω V ? ni , (8.113) Time-Dependent Perturbation Theory 137 where Vni = n| V |i , (8.114) V? ni = n| V? |i = i| V |n ? . (8.115) This formula is analogous to Equation (8.65), provided that ωni = En ? Ei → ωni ± ω. (8.116) Thus, it follows from the analysis of Section 8.6 that the transition probability Pi→n(t) = |c(1) n |2 is only appreciable in the limit t → ∞ if ωni + ω ? 0 or En ? Ei ? ω, (8.117) ωni ? ω ? 0 or En ? Ei + ω. (8.118) Clearly, (8.117) corresponds to the ?rst term on the right-hand side of Equation (8.113), and (8.118) corresponds to the second term. The former term describes a process by which the system gives up energy ω to the perturbing ?eld, while making a transition to a ?nal state whose energy level is less than that of the initial state by ω. This process is known as stimulated emission. The latter term describes a process by which the system gains energy ω from the perturbing ?eld, while making a transition to a ?nal state whose energy level exceeds that of the initial state by ω. This process is known as absorption. In both cases, the total energy (i.e., that of the system plus the perturbing ?eld) is conserved. 
By analogy with Equation (8.78),
$$w_{i\rightarrow[n]} = \frac{2\pi}{\hbar}\,|V_{ni}|^{\,2}\,\rho(E_n)\Big|_{E_n = E_i - \hbar\omega},\qquad(8.119)$$
$$w_{i\rightarrow[n]} = \frac{2\pi}{\hbar}\,|V^\dagger_{ni}|^{\,2}\,\rho(E_n)\Big|_{E_n = E_i + \hbar\omega}.\qquad(8.120)$$
Equation (8.119) specifies the transition rate for stimulated emission, whereas Equation (8.120) gives the transition rate for absorption. These equations are more usually written
$$w_{i\rightarrow n} = \frac{2\pi}{\hbar}\,|V_{ni}|^{\,2}\,\delta(E_n - E_i + \hbar\omega),\qquad(8.121)$$
$$w_{i\rightarrow n} = \frac{2\pi}{\hbar}\,|V^\dagger_{ni}|^{\,2}\,\delta(E_n - E_i - \hbar\omega).\qquad(8.122)$$
It is clear from Equations (8.114)–(8.115) that $|V^\dagger_{in}|^{\,2} = |V_{ni}|^{\,2}$. It follows from Equations (8.119)–(8.120) that
$$\frac{w_{i\rightarrow[n]}}{\rho(E_n)} = \frac{w_{n\rightarrow[i]}}{\rho(E_i)}.\qquad(8.123)$$
In other words, the rate of stimulated emission, divided by the density of final states for stimulated emission, equals the rate of absorption, divided by the density of final states for absorption. This result, which expresses a fundamental symmetry between absorption and stimulated emission, is known as detailed balancing, and is very important in statistical mechanics.

8.9 Absorption and Stimulated Emission of Radiation

Let us use some of the results of time-dependent perturbation theory to investigate the interaction of an atomic electron with classical (i.e., non-quantized) electromagnetic radiation. The unperturbed Hamiltonian is
$$H_0 = \frac{p^2}{2\,m_e} + V_0(r).\qquad(8.124)$$
The standard classical prescription for obtaining the Hamiltonian of a particle of charge $q$ in the presence of an electromagnetic field is
$$\mathbf{p} \rightarrow \mathbf{p} - q\,\mathbf{A},\qquad(8.125)$$
$$H \rightarrow H - q\,\phi,\qquad(8.126)$$
where $\mathbf{A}(\mathbf{r})$ is the vector potential and $\phi(\mathbf{r})$ is the scalar potential. Note that
$$\mathbf{E} = -\nabla\phi - \frac{\partial\mathbf{A}}{\partial t},\qquad(8.127)$$
$$\mathbf{B} = \nabla\times\mathbf{A}.\qquad(8.128)$$
This prescription also works in quantum mechanics. Thus, the Hamiltonian of an atomic electron (charge $-e$) placed in an electromagnetic field is
$$H = \frac{|\mathbf{p} + e\,\mathbf{A}|^{\,2}}{2\,m_e} - e\,\phi + V_0(r),\qquad(8.129)$$
where $\mathbf{A}$ and $\phi$ are real functions of the position operators. The above equation can be written
$$H = \frac{p^2 + e\,\mathbf{A}\cdot\mathbf{p} + e\,\mathbf{p}\cdot\mathbf{A} + e^2 A^2}{2\,m_e} - e\,\phi + V_0(r).\qquad(8.130)$$
Now,
$$\mathbf{p}\cdot\mathbf{A} = \mathbf{A}\cdot\mathbf{p},\qquad(8.131)$$
provided that we adopt the gauge $\nabla\cdot\mathbf{A} = 0$. Hence,
$$H = \frac{p^2}{2\,m_e} + \frac{e\,\mathbf{A}\cdot\mathbf{p}}{m_e} + \frac{e^2 A^2}{2\,m_e} - e\,\phi + V_0(r).\qquad(8.132)$$
Suppose that the perturbation corresponds to a monochromatic plane wave, for which
$$\phi = 0,\qquad(8.133)$$
$$\mathbf{A} = 2\,A_0\,\boldsymbol{\epsilon}\,\cos\!\left(\frac{\omega}{c}\,\mathbf{n}\cdot\mathbf{x} - \omega\,t\right),\qquad(8.134)$$
where $\boldsymbol{\epsilon}$ and $\mathbf{n}$ are unit vectors that specify the direction of polarization and the direction of propagation, respectively. Note that $\boldsymbol{\epsilon}\cdot\mathbf{n} = 0$. The Hamiltonian becomes
$$H = H_0 + H_1(t),\qquad(8.135)$$
with
$$H_0 = \frac{p^2}{2\,m_e} + V_0(r),\qquad(8.136)$$
and
$$H_1 \simeq \frac{e\,\mathbf{A}\cdot\mathbf{p}}{m_e},\qquad(8.137)$$
where the $A^2$ term, which is second order in $A_0$, has been neglected. The perturbing Hamiltonian can be written
$$H_1 = \frac{e\,A_0\,\boldsymbol{\epsilon}\cdot\mathbf{p}}{m_e}\left\{\exp[\,{\rm i}\,(\omega/c)\,\mathbf{n}\cdot\mathbf{x} - {\rm i}\,\omega\,t] + \exp[-{\rm i}\,(\omega/c)\,\mathbf{n}\cdot\mathbf{x} + {\rm i}\,\omega\,t]\right\}.\qquad(8.138)$$
This has the same form as Equation (8.112), provided that
$$V = \frac{e\,A_0\,\boldsymbol{\epsilon}\cdot\mathbf{p}}{m_e}\,\exp[-{\rm i}\,(\omega/c)\,\mathbf{n}\cdot\mathbf{x}].\qquad(8.139)$$
It is clear, by analogy with the previous analysis, that the first term on the right-hand side of Equation (8.138) describes the absorption of a photon of energy $\hbar\omega$, whereas the second term describes the stimulated emission of a photon of energy $\hbar\omega$. It follows from Equations (8.121) and (8.122) that the rates of absorption and stimulated emission are
$$w_{i\rightarrow n} = \frac{2\pi}{\hbar}\,\frac{e^2}{m_e^{\,2}}\,|A_0|^{\,2}\,|\langle n|\,\exp[\,{\rm i}\,(\omega/c)\,\mathbf{n}\cdot\mathbf{x}]\,\boldsymbol{\epsilon}\cdot\mathbf{p}\,|i\rangle|^{\,2}\,\delta(E_n - E_i - \hbar\omega),\qquad(8.140)$$
and
$$w_{i\rightarrow n} = \frac{2\pi}{\hbar}\,\frac{e^2}{m_e^{\,2}}\,|A_0|^{\,2}\,|\langle n|\,\exp[-{\rm i}\,(\omega/c)\,\mathbf{n}\cdot\mathbf{x}]\,\boldsymbol{\epsilon}\cdot\mathbf{p}\,|i\rangle|^{\,2}\,\delta(E_n - E_i + \hbar\omega),\qquad(8.141)$$
respectively. Now, the energy density of a radiation field is
$$U = \frac{1}{2}\left(\frac{\epsilon_0\,E_0^{\,2}}{2} + \frac{B_0^{\,2}}{2\,\mu_0}\right),\qquad(8.142)$$
where $E_0 = 2\,A_0\,\omega$ and $B_0 = E_0/c = 2\,A_0\,\omega/c$ are the peak electric and magnetic field-strengths, respectively. Hence,
$$U = 2\,\epsilon_0\,\omega^2\,|A_0|^{\,2},\qquad(8.143)$$
and expressions (8.140) and (8.141) become
$$w_{i\rightarrow n} = \frac{\pi\,e^2}{\hbar\,\epsilon_0\,m_e^{\,2}\,\omega^2}\,U\,|\langle n|\,\exp[\,{\rm i}\,(\omega/c)\,\mathbf{n}\cdot\mathbf{x}]\,\boldsymbol{\epsilon}\cdot\mathbf{p}\,|i\rangle|^{\,2}\,\delta(E_n - E_i - \hbar\omega),\qquad(8.144)$$
and
$$w_{i\rightarrow n} = \frac{\pi\,e^2}{\hbar\,\epsilon_0\,m_e^{\,2}\,\omega^2}\,U\,|\langle n|\,\exp[-{\rm i}\,(\omega/c)\,\mathbf{n}\cdot\mathbf{x}]\,\boldsymbol{\epsilon}\cdot\mathbf{p}\,|i\rangle|^{\,2}\,\delta(E_n - E_i + \hbar\omega),\qquad(8.145)$$
respectively.
Finally, if we imagine that the incident radiation has a range of different frequencies, so that
$$U = \int d\omega\,u(\omega),\qquad(8.146)$$
where $d\omega\,u(\omega)$ is the energy density of radiation whose frequency lies in the range $\omega$ to $\omega+d\omega$, then we can integrate our transition rates over $\omega$ to give
$$w_{i\rightarrow n} = \frac{\pi\,e^2}{\hbar^2\,\epsilon_0\,m_e^{\,2}\,\omega_{ni}^{\,2}}\,u(\omega_{ni})\,|\langle n|\,\exp[\,{\rm i}\,(\omega_{ni}/c)\,\mathbf{n}\cdot\mathbf{x}]\,\boldsymbol{\epsilon}\cdot\mathbf{p}\,|i\rangle|^{\,2}\qquad(8.147)$$
for absorption, and
$$w_{i\rightarrow n} = \frac{\pi\,e^2}{\hbar^2\,\epsilon_0\,m_e^{\,2}\,\omega_{in}^{\,2}}\,u(\omega_{in})\,|\langle n|\,\exp[-{\rm i}\,(\omega_{in}/c)\,\mathbf{n}\cdot\mathbf{x}]\,\boldsymbol{\epsilon}\cdot\mathbf{p}\,|i\rangle|^{\,2}\qquad(8.148)$$
for stimulated emission. Here, $\omega_{ni} = (E_n - E_i)/\hbar > 0$ and $\omega_{in} = (E_i - E_n)/\hbar > 0$. Furthermore, we are assuming that the radiation is incoherent, so that intensities can be added.

8.10 Electric Dipole Approximation

In general, the wavelength of the type of electromagnetic radiation that induces, or is emitted during, transitions between different atomic energy levels is much larger than the typical size of a light atom. Thus,
$$\exp[\,{\rm i}\,(\omega/c)\,\mathbf{n}\cdot\mathbf{x}] = 1 + {\rm i}\,\frac{\omega}{c}\,\mathbf{n}\cdot\mathbf{x} + \cdots\qquad(8.149)$$
can be approximated by its first term, unity (remember that $\omega/c = 2\pi/\lambda$). This approximation is known as the electric dipole approximation. It follows that
$$\langle n|\,\exp[\,{\rm i}\,(\omega/c)\,\mathbf{n}\cdot\mathbf{x}]\,\boldsymbol{\epsilon}\cdot\mathbf{p}\,|i\rangle \simeq \boldsymbol{\epsilon}\cdot\langle n|\,\mathbf{p}\,|i\rangle.\qquad(8.150)$$
It is readily demonstrated that
$$[\mathbf{x}, H_0] = \frac{{\rm i}\,\hbar\,\mathbf{p}}{m_e},\qquad(8.151)$$
so
$$\langle n|\,\mathbf{p}\,|i\rangle = -\frac{{\rm i}\,m_e}{\hbar}\,\langle n|\,[\mathbf{x}, H_0]\,|i\rangle = {\rm i}\,m_e\,\omega_{ni}\,\langle n|\,\mathbf{x}\,|i\rangle.\qquad(8.152)$$
Thus, making use of the electric dipole approximation, we obtain
$$w_{i\rightarrow n} = \frac{4\pi^2\,\alpha\,c}{\hbar}\,u(\omega_{ni})\,|\boldsymbol{\epsilon}\cdot\mathbf{f}_{ni}|^{\,2}\qquad(8.153)$$
for absorption, and
$$w_{i\rightarrow n} = \frac{4\pi^2\,\alpha\,c}{\hbar}\,u(\omega_{in})\,|\boldsymbol{\epsilon}\cdot\mathbf{f}_{in}|^{\,2}\qquad(8.154)$$
for stimulated emission, where
$$\mathbf{f}_{ni} = \langle n|\,\mathbf{x}\,|i\rangle,\qquad(8.155)$$
and $\alpha = e^2/(2\,\epsilon_0\,h\,c) \simeq 1/137$ is the fine structure constant. Suppose that the radiation is polarized in the $z$-direction, so that $\boldsymbol{\epsilon} = \mathbf{e}_z$. We have already seen, from Section 7.4, that $\langle n|\,z\,|i\rangle = 0$ unless the initial and final states satisfy
$$\Delta l = \pm 1,\qquad(8.156)$$
$$\Delta m = 0.\qquad(8.157)$$
Here, $l$ is the quantum number describing the total orbital angular momentum of the electron, and $m$ is the quantum number describing the projection of the orbital angular momentum along the $z$-axis.
It is easily demonstrated that $\langle n|\,x\,|i\rangle$ and $\langle n|\,y\,|i\rangle$ are only non-zero if
$$\Delta l = \pm 1,\qquad(8.158)$$
$$\Delta m = \pm 1.\qquad(8.159)$$
Thus, for generally directed radiation, $\boldsymbol{\epsilon}\cdot\mathbf{f}_{ni}$ is only non-zero if
$$\Delta l = \pm 1,\qquad(8.160)$$
$$\Delta m = 0, \pm 1.\qquad(8.161)$$
These are termed the selection rules for electric dipole transitions. It is clear, for instance, that the electric dipole approximation allows a transition from a 2p state to a 1s state, but disallows a transition from a 2s state to a 1s state. The latter transition is called a forbidden transition. Forbidden transitions are not strictly forbidden. Instead, they take place at a far lower rate than transitions that are allowed according to the electric dipole approximation. After electric dipole transitions, the next most likely type of transition is a magnetic dipole transition, which is due to the interaction between the electron spin and the oscillating magnetic field of the incident electromagnetic radiation. Magnetic dipole transitions are typically about $10^5$ times less likely than similar electric dipole transitions. The first-order term in Equation (8.149) yields so-called electric quadrupole transitions. These are typically about $10^8$ times less likely than electric dipole transitions. Magnetic dipole and electric quadrupole transitions satisfy different selection rules than electric dipole transitions: for instance, the selection rules for electric quadrupole transitions are $\Delta l = 0, \pm 2$. Thus, transitions that are forbidden as electric dipole transitions may well be allowed as magnetic dipole or electric quadrupole transitions.

8.11 Spontaneous Emission

So far, we have calculated the rates of radiation-induced transitions between two atomic states. This process is known as absorption when the energy of the final state exceeds that of the initial state, and stimulated emission when the energy of the final state is less than that of the initial state.
Now, in the absence of any external radiation, we would not expect an atom in a given state to spontaneously jump into a state with a higher energy. On the other hand, it should be possible for such an atom to spontaneously jump into a state with a lower energy via the emission of a photon whose energy is equal to the difference between the energies of the initial and final states. This process is known as spontaneous emission.

It is possible to derive the rate of spontaneous emission between two atomic states from a knowledge of the corresponding absorption and stimulated emission rates using a famous thermodynamic argument due to Einstein. Consider a very large ensemble of similar atoms placed inside a closed cavity whose walls (which are assumed to be perfect emitters and absorbers of radiation) are held at the constant temperature $T$. Let the system attain thermal equilibrium. According to statistical thermodynamics, the cavity is filled with so-called "black-body" electromagnetic radiation whose energy spectrum is
$$u(\omega) = \frac{\hbar}{\pi^2\,c^3}\,\frac{\omega^3}{\exp(\hbar\omega/k_B\,T) - 1},\qquad(8.162)$$
where $k_B$ is the Boltzmann constant. This well-known result was first obtained by Max Planck in 1900.

Consider two atomic states, labeled 2 and 1, with $E_2 > E_1$. One of the tenets of statistical thermodynamics is that in thermal equilibrium we have so-called detailed balance. This means that, irrespective of any other atomic states, the rate at which atoms in the ensemble leave state 2 due to transitions to state 1 is exactly balanced by the rate at which atoms enter state 2 due to transitions from state 1. The former rate (i.e., the number of transitions per unit time in the ensemble) is written
$$W_{2\rightarrow 1} = N_2\,(w^{spn}_{2\rightarrow 1} + w^{stm}_{2\rightarrow 1}),\qquad(8.163)$$
where $w^{spn}_{2\rightarrow 1}$ and $w^{stm}_{2\rightarrow 1}$ are the rates of spontaneous and stimulated emission, respectively (for a single atom), between states 2 and 1, and $N_2$ is the number of atoms in the ensemble in state 2.
Likewise, the latter rate takes the form
$$W_{1\rightarrow 2} = N_1\,w^{abs}_{1\rightarrow 2},\qquad(8.164)$$
where $w^{abs}_{1\rightarrow 2}$ is the rate of absorption (for a single atom) between states 1 and 2, and $N_1$ is the number of atoms in the ensemble in state 1. The above expressions describe how atoms in the ensemble make transitions from state 2 to state 1 due to a combination of spontaneous and stimulated emission, and make the opposite transition as a consequence of absorption. In thermal equilibrium, we have $W_{2\rightarrow 1} = W_{1\rightarrow 2}$, which gives
$$w^{spn}_{2\rightarrow 1} = \frac{N_1}{N_2}\,w^{abs}_{1\rightarrow 2} - w^{stm}_{2\rightarrow 1}.\qquad(8.165)$$
Equations (8.153) and (8.154) imply that
$$w^{spn}_{2\rightarrow 1} = \frac{4\pi^2\,\alpha\,c}{\hbar}\left(\frac{N_1}{N_2} - 1\right)u(\omega_{21})\,\big\langle|\boldsymbol{\epsilon}\cdot\mathbf{f}_{21}|^{\,2}\big\rangle,\qquad(8.166)$$
where $\omega_{21} = (E_2 - E_1)/\hbar$, and the large angle brackets denote an average over all possible directions of the incident radiation (because, in equilibrium, the radiation inside the cavity is isotropic). In fact, it is easily demonstrated that
$$\big\langle|\boldsymbol{\epsilon}\cdot\mathbf{f}_{21}|^{\,2}\big\rangle = \frac{f^{\,2}_{21}}{3},\qquad(8.167)$$
where $f^{\,2}_{21}$ stands for
$$f^{\,2}_{21} = |\langle 2|\,x\,|1\rangle|^{\,2} + |\langle 2|\,y\,|1\rangle|^{\,2} + |\langle 2|\,z\,|1\rangle|^{\,2}.\qquad(8.168)$$
Now, another famous result in statistical thermodynamics is that in thermal equilibrium the number of atoms in an ensemble occupying a state of energy $E$ is proportional to $\exp(-E/k_B\,T)$. This implies that
$$\frac{N_1}{N_2} = \frac{\exp(-E_1/k_B\,T)}{\exp(-E_2/k_B\,T)} = \exp(\hbar\omega_{21}/k_B\,T).\qquad(8.169)$$
Thus, it follows from Equations (8.162), (8.166), (8.167), and (8.169) that the rate of spontaneous emission between states 2 and 1 takes the form
$$w^{spn}_{2\rightarrow 1} = \frac{\omega^{\,3}_{21}\,e^2\,f^{\,2}_{21}}{3\pi\,\epsilon_0\,\hbar\,c^3}.\qquad(8.170)$$
Note that, although the above result has been derived for an atom in a radiation-filled cavity, it remains correct even in the absence of radiation.

Let us estimate the typical value of the spontaneous emission rate for a hydrogen atom. We expect the matrix element $f_{21}$ to be of order $a_0$, where $a_0$ is the Bohr radius. We also expect $\omega_{21}$ to be of order $|E_0|/\hbar$, where $E_0$ is the ground-state energy. It thus follows from Equation (8.170) that
$$w^{spn}_{2\rightarrow 1} \sim \alpha^3\,\omega_{21},\qquad(8.171)$$
where $\alpha \simeq 1/137$ is the fine structure constant. This is an important result, because our perturbation expansion is based on the assumption that the transition rate between different energy eigenstates is much slower than the frequency of phase oscillation of these states: i.e., that $w^{spn}_{2\rightarrow 1} \ll \omega_{21}$. This is indeed the case.

Exercises

8.1 Demonstrate that $\mathbf{p}\cdot\mathbf{A} = \mathbf{A}\cdot\mathbf{p}$ when $\nabla\cdot\mathbf{A} = 0$, where $\mathbf{p}$ is the momentum operator, and $\mathbf{A}(\mathbf{x})$ is a real function of the position operator, $\mathbf{x}$. Hence, show that the Hamiltonian (8.132) is Hermitian.

8.2 Find the selection rules for the matrix elements $\langle n,l,m|\,x\,|n',l',m'\rangle$, $\langle n,l,m|\,y\,|n',l',m'\rangle$, and $\langle n,l,m|\,z\,|n',l',m'\rangle$ to be non-zero. Here, $|n,l,m\rangle$ denotes an energy eigenket of a hydrogen-like atom corresponding to the conventional quantum numbers, $n$, $l$, and $m$.

8.3 Demonstrate that $\langle|\boldsymbol{\epsilon}\cdot\mathbf{f}_{21}|^{\,2}\rangle = f^{\,2}_{21}/3$, where the average is taken over all directions of the incident radiation.

8.4 Demonstrate that the spontaneous decay rate (via an electric dipole transition) from any 2p state to a 1s state of a hydrogen atom is
$$w_{2p\rightarrow 1s} = \left(\frac{2}{3}\right)^{\!8}\frac{\alpha^5\,m_e\,c^2}{\hbar} = 6.26\times 10^{8}\ {\rm s}^{-1},$$
where $\alpha$ is the fine structure constant. Hence, deduce that the natural line width of the associated spectral line is
$$\frac{\Delta\lambda}{\lambda} \sim 4\times 10^{-8}.$$
The only non-zero 1s–2p electric dipole matrix elements take the values
$$\langle 1,0,0|\,x\,|2,1,\pm 1\rangle = \pm\frac{2^7}{3^5}\,a_0,$$
$$\langle 1,0,0|\,y\,|2,1,\pm 1\rangle = {\rm i}\,\frac{2^7}{3^5}\,a_0,$$
$$\langle 1,0,0|\,z\,|2,1,0\rangle = \sqrt{2}\,\frac{2^7}{3^5}\,a_0,$$
where $a_0$ is the Bohr radius.

9 Scattering Theory

9.1 Introduction

Historically, data regarding quantum phenomena have been obtained from two main sources: the study of spectroscopic lines, and scattering experiments. We have already developed theories that account for some aspects of the spectra of hydrogen-like atoms. Let us now examine the quantum theory of scattering.
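The decay rate quoted in Exercise 8.4 is straightforward to check numerically. The following Python sketch evaluates $w = (2/3)^8\,\alpha^5\,m_e c^2/\hbar$ and the associated line width; the numerical constants below are CODATA values assumed by this sketch, not data from the text.

```python
import math

# Numerical check of the 2p -> 1s spontaneous decay rate of Exercise 8.4,
#   w = (2/3)^8 alpha^5 m_e c^2 / hbar.
# Constants are CODATA values (an assumption of this sketch).
alpha = 7.2973525693e-3       # fine structure constant
me_c2 = 8.1871057769e-14      # electron rest energy m_e c^2 (J)
hbar  = 1.054571817e-34       # reduced Planck constant (J s)
eV    = 1.602176634e-19       # electron volt (J)

w   = (2.0 / 3.0)**8 * alpha**5 * me_c2 / hbar    # decay rate (s^-1), ~6.26e8
tau = 1.0 / w                                     # 2p lifetime (s), ~1.6e-9

# Natural line width: Delta_lambda / lambda ~ w / omega_21, with
# hbar*omega_21 = (3/4) * 13.6057 eV (the Lyman-alpha transition energy).
omega_21 = 0.75 * 13.6057 * eV / hbar
width = w / omega_21                              # ~4e-8
```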
9.2 Fundamental Equations

Consider time-independent scattering theory, for which the Hamiltonian of the system is written
$$H = H_0 + H_1,\qquad(9.1)$$
where $H_0$ is the Hamiltonian of a free particle of mass $m$,
$$H_0 = \frac{p^2}{2\,m},\qquad(9.2)$$
and $H_1$ represents the non-time-varying source of the scattering. Let $|\phi\rangle$ be an energy eigenket of $H_0$,
$$H_0\,|\phi\rangle = E\,|\phi\rangle,\qquad(9.3)$$
whose wavefunction $\langle\mathbf{x}'|\phi\rangle$ is $\phi(\mathbf{x}')$. This state is assumed to be a plane-wave state or, possibly, a spherical-wave state. Schrödinger's equation for the scattering problem is
$$(H_0 + H_1)\,|\psi\rangle = E\,|\psi\rangle,\qquad(9.4)$$
where $|\psi\rangle$ is an energy eigenstate of the total Hamiltonian whose wavefunction $\langle\mathbf{x}'|\psi\rangle$ is $\psi(\mathbf{x}')$. In general, both $H_0$ and $H_0 + H_1$ have continuous energy spectra: i.e., their energy eigenstates are unbound. We require a solution of Equation (9.4) that satisfies the boundary condition $|\psi\rangle \rightarrow |\phi\rangle$ as $H_1 \rightarrow 0$. Here, $|\phi\rangle$ is a solution of the free-particle Schrödinger equation, (9.3), corresponding to the same energy eigenvalue.

Adopting the Schrödinger representation, we can write the scattering problem (9.4) in the form
$$(\nabla^2 + k^2)\,\psi(\mathbf{x}) = \frac{2\,m}{\hbar^2}\,\langle\mathbf{x}|\,H_1\,|\psi\rangle,\qquad(9.5)$$
where
$$E = \frac{\hbar^2 k^2}{2\,m}.\qquad(9.6)$$
Equation (9.5) is called the Helmholtz equation, and can be inverted using standard Green's function techniques. Thus,
$$\psi(\mathbf{x}) = \phi(\mathbf{x}) + \frac{2\,m}{\hbar^2}\int d^3x'\,G(\mathbf{x},\mathbf{x}')\,\langle\mathbf{x}'|\,H_1\,|\psi\rangle,\qquad(9.7)$$
where
$$(\nabla^2 + k^2)\,G(\mathbf{x},\mathbf{x}') = \delta(\mathbf{x} - \mathbf{x}').\qquad(9.8)$$
Note that the solution (9.7) satisfies the boundary condition $|\psi\rangle \rightarrow |\phi\rangle$ as $H_1 \rightarrow 0$. As is well known, the Green's function for the Helmholtz problem is given by
$$G(\mathbf{x},\mathbf{x}') = -\frac{\exp(\pm{\rm i}\,k\,|\mathbf{x}-\mathbf{x}'|)}{4\pi\,|\mathbf{x}-\mathbf{x}'|}.\qquad(9.9)$$
Thus, Equation (9.7) becomes
$$\psi^{\pm}(\mathbf{x}) = \phi(\mathbf{x}) - \frac{2\,m}{\hbar^2}\int d^3x'\,\frac{\exp(\pm{\rm i}\,k\,|\mathbf{x}-\mathbf{x}'|)}{4\pi\,|\mathbf{x}-\mathbf{x}'|}\,\langle\mathbf{x}'|\,H_1\,|\psi^{\pm}\rangle.\qquad(9.10)$$
Let us suppose that the scattering Hamiltonian, $H_1$, is only a function of the position operators. This implies that
$$\langle\mathbf{x}'|\,H_1\,|\mathbf{x}\rangle = V(\mathbf{x})\,\delta(\mathbf{x} - \mathbf{x}').\qquad(9.11)$$
We can write
$$\langle\mathbf{x}'|\,H_1\,|\psi^{\pm}\rangle = \int d^3x''\,\langle\mathbf{x}'|\,H_1\,|\mathbf{x}''\rangle\,\langle\mathbf{x}''|\psi^{\pm}\rangle = V(\mathbf{x}')\,\psi^{\pm}(\mathbf{x}').\qquad(9.12)$$
Thus, the integral equation (9.10) simplifies to
$$\psi^{\pm}(\mathbf{x}) = \phi(\mathbf{x}) - \frac{2\,m}{\hbar^2}\int d^3x'\,\frac{\exp(\pm{\rm i}\,k\,|\mathbf{x}-\mathbf{x}'|)}{4\pi\,|\mathbf{x}-\mathbf{x}'|}\,V(\mathbf{x}')\,\psi^{\pm}(\mathbf{x}').\qquad(9.13)$$
Suppose that the initial state $|\phi\rangle$ is a plane wave with wavevector $\mathbf{k}$ (i.e., a stream of particles of definite momentum $\mathbf{p} = \hbar\,\mathbf{k}$). The ket corresponding to this state is denoted $|\mathbf{k}\rangle$. The associated wavefunction takes the form
$$\langle\mathbf{x}|\mathbf{k}\rangle = \frac{\exp(\,{\rm i}\,\mathbf{k}\cdot\mathbf{x})}{(2\pi)^{3/2}}.\qquad(9.14)$$
The wavefunction is normalized such that
$$\langle\mathbf{k}|\mathbf{k}'\rangle = \int d^3x\,\langle\mathbf{k}|\mathbf{x}\rangle\,\langle\mathbf{x}|\mathbf{k}'\rangle = \int d^3x\,\frac{\exp[-{\rm i}\,\mathbf{x}\cdot(\mathbf{k}-\mathbf{k}')]}{(2\pi)^3} = \delta(\mathbf{k} - \mathbf{k}').\qquad(9.15)$$
Suppose that the scattering potential $V(\mathbf{x})$ is only non-zero in some relatively localized region centered on the origin ($\mathbf{x} = 0$). Let us calculate the wavefunction $\psi(\mathbf{x})$ a long way from the scattering region. In other words, let us adopt the ordering $r \gg r'$. It is easily demonstrated that
$$|\mathbf{x} - \mathbf{x}'| \simeq r - \mathbf{e}_r\cdot\mathbf{x}'\qquad(9.16)$$
to first order in $r'/r$, where
$$\mathbf{e}_r = \frac{\mathbf{x}}{r}\qquad(9.17)$$
is a unit vector that points from the scattering region to the observation point. Here, $r = |\mathbf{x}|$ and $r' = |\mathbf{x}'|$. Let us define
$$\mathbf{k}' = k\,\mathbf{e}_r.\qquad(9.18)$$
Clearly, $\mathbf{k}'$ is the wavevector for particles that possess the same energy as the incoming particles (i.e., $k' = k$), but propagate from the scattering region to the observation point. Note that
$$\exp(\pm{\rm i}\,k\,|\mathbf{x}-\mathbf{x}'|) \simeq \exp(\pm{\rm i}\,k\,r)\,\exp(\mp{\rm i}\,\mathbf{k}'\cdot\mathbf{x}').\qquad(9.19)$$
In the large-$r$ limit, Equation (9.13) reduces to
$$\psi^{\pm}(\mathbf{x}) \simeq \frac{\exp(\,{\rm i}\,\mathbf{k}\cdot\mathbf{x})}{(2\pi)^{3/2}} - \frac{m}{2\pi\,\hbar^2}\,\frac{\exp(\pm{\rm i}\,k\,r)}{r}\int d^3x'\,\exp(\mp{\rm i}\,\mathbf{k}'\cdot\mathbf{x}')\,V(\mathbf{x}')\,\psi^{\pm}(\mathbf{x}').\qquad(9.20)$$
The first term on the right-hand side is the incident wave. The second term represents a spherical wave centred on the scattering region. The plus sign (on $\psi^{\pm}$) corresponds to a wave propagating away from the scattering region, whereas the minus sign corresponds to a wave propagating towards the scattering region. It is obvious that the former represents the physical solution. Thus, the wavefunction a long way from the scattering region can be written
$$\psi(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}}\left[\exp(\,{\rm i}\,\mathbf{k}\cdot\mathbf{x}) + \frac{\exp(\,{\rm i}\,k\,r)}{r}\,f(\mathbf{k}',\mathbf{k})\right],\qquad(9.21)$$
where
$$f(\mathbf{k}',\mathbf{k}) = -\frac{(2\pi)^2\,m}{\hbar^2}\int d^3x'\,\frac{\exp(-{\rm i}\,\mathbf{k}'\cdot\mathbf{x}')}{(2\pi)^{3/2}}\,V(\mathbf{x}')\,\psi(\mathbf{x}') = -\frac{(2\pi)^2\,m}{\hbar^2}\,\langle\mathbf{k}'|\,H_1\,|\psi\rangle.\qquad(9.22)$$
Let us define the differential cross-section, $d\sigma/d\Omega$, as the number of particles per unit time scattered into an element of solid angle $d\Omega$, divided by the incident flux of particles. Recall, from Chapter 3, that the probability current (i.e., the particle flux) associated with a wavefunction $\psi$ is
$$\mathbf{j} = \frac{\hbar}{m}\,{\rm Im}(\psi^*\,\nabla\psi).\qquad(9.23)$$
Thus, the probability flux associated with the incident wavefunction,
$$\frac{\exp(\,{\rm i}\,\mathbf{k}\cdot\mathbf{x})}{(2\pi)^{3/2}},\qquad(9.24)$$
is
$$\mathbf{j}_{inc} = \frac{\hbar}{(2\pi)^3\,m}\,\mathbf{k}.\qquad(9.25)$$
Likewise, the probability flux associated with the scattered wavefunction,
$$\frac{\exp(\,{\rm i}\,k\,r)}{(2\pi)^{3/2}}\,\frac{f(\mathbf{k}',\mathbf{k})}{r},\qquad(9.26)$$
is
$$\mathbf{j}_{sca} = \frac{\hbar}{(2\pi)^3\,m}\,\frac{|f(\mathbf{k}',\mathbf{k})|^{\,2}}{r^2}\,k\,\mathbf{e}_r.\qquad(9.27)$$
Now,
$$\frac{d\sigma}{d\Omega}\,d\Omega = \frac{r^2\,d\Omega\,|\mathbf{j}_{sca}|}{|\mathbf{j}_{inc}|},\qquad(9.28)$$
giving
$$\frac{d\sigma}{d\Omega} = |f(\mathbf{k}',\mathbf{k})|^{\,2}.\qquad(9.29)$$
Thus, $|f(\mathbf{k}',\mathbf{k})|^{\,2}$ gives the differential cross-section for particles with incident momentum $\hbar\,\mathbf{k}$ to be scattered into states whose momentum vectors are directed in a range of solid angles $d\Omega$ about $\hbar\,\mathbf{k}'$. Note that the scattered particles possess the same energy as the incoming particles (i.e., $k' = k$). This is always the case for scattering Hamiltonians of the form specified in Equation (9.11).

9.3 Born Approximation

Equation (9.29) is not particularly useful, as it stands, because the quantity $f(\mathbf{k}',\mathbf{k})$ depends on the unknown ket $|\psi\rangle$. Recall that $\psi(\mathbf{x}) = \langle\mathbf{x}|\psi\rangle$ is the solution of the integral equation
$$\psi(\mathbf{x}) = \phi(\mathbf{x}) - \frac{m}{2\pi\,\hbar^2}\,\frac{\exp(\,{\rm i}\,k\,r)}{r}\int d^3x'\,\exp(-{\rm i}\,\mathbf{k}'\cdot\mathbf{x}')\,V(\mathbf{x}')\,\psi(\mathbf{x}'),\qquad(9.30)$$
where $\phi(\mathbf{x})$ is the wavefunction of the incident state. According to the above equation, the total wavefunction is a superposition of the incident wavefunction and a great many spherical waves emitted from the scattering region. The strength of the spherical wave emitted at a given point is proportional to the local value of the scattering potential, $V$, as well as the local value of the wavefunction, $\psi$. Suppose that the scattering is not particularly strong.
In this case, it is reasonable to suppose that the total wavefunction, $\psi(\mathbf{x})$, does not differ substantially from the incident wavefunction, $\phi(\mathbf{x})$. Thus, we can obtain an expression for $f(\mathbf{k}',\mathbf{k})$ by making the substitution
$$\psi(\mathbf{x}) \rightarrow \phi(\mathbf{x}) = \frac{\exp(\,{\rm i}\,\mathbf{k}\cdot\mathbf{x})}{(2\pi)^{3/2}}.\qquad(9.31)$$
This is called the Born approximation. The Born approximation yields
$$f(\mathbf{k}',\mathbf{k}) \simeq -\frac{m}{2\pi\,\hbar^2}\int d^3x'\,\exp[\,{\rm i}\,(\mathbf{k}-\mathbf{k}')\cdot\mathbf{x}']\,V(\mathbf{x}').\qquad(9.32)$$
Thus, $f(\mathbf{k}',\mathbf{k})$ is proportional to the Fourier transform of the scattering potential $V(\mathbf{x})$ with respect to the wavevector $\mathbf{q} \equiv \mathbf{k} - \mathbf{k}'$. For a spherically symmetric potential,
$$f(\mathbf{k}',\mathbf{k}) \simeq -\frac{m}{2\pi\,\hbar^2}\int_0^{\infty}\!\!\int_0^{\pi}\!\!\int_0^{2\pi} dr'\,d\theta'\,d\phi'\;r'^{\,2}\,\sin\theta'\,\exp(\,{\rm i}\,q\,r'\cos\theta')\,V(r'),\qquad(9.33)$$
giving
$$f(\mathbf{k}',\mathbf{k}) \simeq -\frac{2\,m}{\hbar^2\,q}\int_0^{\infty} dr'\,r'\,V(r')\,\sin(q\,r').\qquad(9.34)$$
Note that $f(\mathbf{k}',\mathbf{k})$ is just a function of $q$ for a spherically symmetric potential. It is easily demonstrated that
$$q \equiv |\mathbf{k} - \mathbf{k}'| = 2\,k\,\sin(\theta/2),\qquad(9.35)$$
where $\theta$ is the angle subtended between the vectors $\mathbf{k}$ and $\mathbf{k}'$. In other words, $\theta$ is the angle of scattering. Recall that the vectors $\mathbf{k}$ and $\mathbf{k}'$ have the same length, as a consequence of energy conservation.

Consider scattering by a Yukawa potential,
$$V(r) = \frac{V_0\,\exp(-\mu\,r)}{\mu\,r},\qquad(9.36)$$
where $V_0$ is a constant, and $1/\mu$ measures the "range" of the potential. It follows from Equation (9.34) that
$$f(\theta) = -\frac{2\,m\,V_0}{\hbar^2\,\mu}\,\frac{1}{q^2 + \mu^2},\qquad(9.37)$$
because
$$\int_0^{\infty} dr'\,\exp(-\mu\,r')\,\sin(q\,r') = \frac{q}{q^2 + \mu^2}.\qquad(9.38)$$
Thus, in the Born approximation, the differential cross-section for scattering by a Yukawa potential is
$$\frac{d\sigma}{d\Omega} \simeq \left(\frac{2\,m\,V_0}{\hbar^2\,\mu}\right)^{\!2}\frac{1}{[\,4\,k^2\sin^2(\theta/2) + \mu^2\,]^{\,2}}.\qquad(9.39)$$
The Yukawa potential reduces to the familiar Coulomb potential as $\mu \rightarrow 0$, provided that $V_0/\mu \rightarrow Z\,Z'\,e^2/4\pi\epsilon_0$. In this limit, the Born differential cross-section becomes
$$\frac{d\sigma}{d\Omega} \simeq \left(\frac{2\,m\,Z\,Z'\,e^2}{4\pi\epsilon_0\,\hbar^2}\right)^{\!2}\frac{1}{16\,k^4\sin^4(\theta/2)}.\qquad(9.40)$$
Recall that $\hbar\,k$ is equivalent to $|\mathbf{p}|$, so the above equation can be rewritten
$$\frac{d\sigma}{d\Omega} \simeq \left(\frac{Z\,Z'\,e^2}{16\pi\epsilon_0\,E}\right)^{\!2}\frac{1}{\sin^4(\theta/2)},\qquad(9.41)$$
where $E = p^2/2\,m$ is the kinetic energy of the incident particles.
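The integral (9.38), which underlies the Yukawa amplitude (9.37), can be checked by direct quadrature. The following Python sketch compares a composite-trapezoid estimate with the closed form $q/(q^2+\mu^2)$; the truncation radius and step count are arbitrary choices of this sketch.

```python
import math

# Numerical check of the integral used in the Born amplitude for the
# Yukawa potential, Equation (9.38):
#   ∫_0^∞ dr' exp(-mu r') sin(q r') = q / (q^2 + mu^2).
# The exponentially damped integrand is truncated at r = 40/mu.

def lhs(q, mu, steps=400000):
    rmax = 40.0 / mu
    dr = rmax / steps
    total = 0.0
    for n in range(1, steps):
        r = n * dr
        total += math.exp(-mu * r) * math.sin(q * r)
    return total * dr            # both endpoints contribute ~0

def rhs(q, mu):
    return q / (q * q + mu * mu)

approx = lhs(2.0, 1.0)
exact  = rhs(2.0, 1.0)           # = 0.4
```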
Equation (9.41) is identical to the classical Rutherford scattering cross-section formula.

The Born approximation is valid provided that $\psi(\mathbf{x})$ is not too different from $\phi(\mathbf{x})$ in the scattering region. It follows, from Equation (9.13), that the condition for $\psi(\mathbf{x}) \simeq \phi(\mathbf{x})$ in the vicinity of $\mathbf{x} = 0$ is
$$\left|\frac{m}{2\pi\,\hbar^2}\int d^3x'\,\frac{\exp(\,{\rm i}\,k\,r')}{r'}\,V(\mathbf{x}')\right| \ll 1.\qquad(9.42)$$
Consider the special case of the Yukawa potential. At low energies (i.e., $k \ll \mu$), we can replace $\exp(\,{\rm i}\,k\,r')$ by unity, giving
$$\frac{2\,m}{\hbar^2}\,\frac{|V_0|}{\mu^2} \ll 1\qquad(9.43)$$
as the condition for the validity of the Born approximation. The condition for the Yukawa potential to develop a bound state is
$$\frac{2\,m}{\hbar^2}\,\frac{|V_0|}{\mu^2} \geq 2.7,\qquad(9.44)$$
where $V_0$ is negative. Thus, if the potential is strong enough to form a bound state then the Born approximation is likely to break down. In the high-$k$ limit, Equation (9.42) yields
$$\frac{2\,m}{\hbar^2}\,\frac{|V_0|}{\mu\,k} \ll 1.\qquad(9.45)$$
This inequality becomes progressively easier to satisfy as $k$ increases, implying that the Born approximation is more accurate at high incident particle energies.

9.4 Partial Waves

We can assume, without loss of generality, that the incident wavefunction is characterized by a wavevector $\mathbf{k}$ that is aligned parallel to the $z$-axis. The scattered wavefunction is characterized by a wavevector $\mathbf{k}'$ that has the same magnitude as $\mathbf{k}$ but, in general, points in a different direction. The direction of $\mathbf{k}'$ is specified by the polar angle $\theta$ (i.e., the angle subtended between the two wavevectors) and an azimuthal angle $\varphi$ measured about the $z$-axis. Equation (9.34) strongly suggests that for a spherically symmetric scattering potential [i.e., $V(\mathbf{x}) = V(r)$] the scattering amplitude is a function of $\theta$ only: i.e.,
$$f(\theta,\varphi) = f(\theta).\qquad(9.46)$$
It follows that neither the incident wavefunction,
$$\phi(\mathbf{x}) = \frac{\exp(\,{\rm i}\,k\,z)}{(2\pi)^{3/2}} = \frac{\exp(\,{\rm i}\,k\,r\cos\theta)}{(2\pi)^{3/2}},\qquad(9.47)$$
nor the total wavefunction,
$$\psi(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}}\left[\exp(\,{\rm i}\,k\,r\cos\theta) + \frac{\exp(\,{\rm i}\,k\,r)\,f(\theta)}{r}\right],\qquad(9.48)$$
depend on the azimuthal angle $\varphi$.
Outside the range of the scattering potential, both $\phi(\mathbf{x})$ and $\psi(\mathbf{x})$ satisfy the free-space Schrödinger equation,
$$(\nabla^2 + k^2)\,\psi = 0.\qquad(9.49)$$
Consider the most general solution to this equation in spherical polar coordinates that does not depend on the azimuthal angle $\varphi$. Separation of variables yields
$$\psi(r,\theta) = \sum_{l=0,\infty} R_l(r)\,P_l(\cos\theta),\qquad(9.50)$$
since the Legendre polynomials $P_l(\cos\theta)$ form a complete set in $\theta$-space. The Legendre polynomials are related to the spherical harmonics introduced in Chapter 4 via
$$P_l(\cos\theta) = \sqrt{\frac{4\pi}{2\,l+1}}\;Y_l^{\,0}(\theta,\varphi).\qquad(9.51)$$
Equations (9.49) and (9.50) can be combined to give
$$r^2\,\frac{d^2R_l}{dr^2} + 2\,r\,\frac{dR_l}{dr} + [\,k^2 r^2 - l\,(l+1)]\,R_l = 0.\qquad(9.52)$$
The two independent solutions to this equation are the spherical Bessel function, $j_l(k\,r)$, and the Neumann function, $\eta_l(k\,r)$, where
$$j_l(y) = y^{\,l}\left(-\frac{1}{y}\frac{d}{dy}\right)^{\!l}\frac{\sin y}{y},\qquad(9.53)$$
$$\eta_l(y) = -y^{\,l}\left(-\frac{1}{y}\frac{d}{dy}\right)^{\!l}\frac{\cos y}{y}.\qquad(9.54)$$
Note that spherical Bessel functions are well behaved in the limit $y \rightarrow 0$, whereas Neumann functions become singular. The asymptotic behavior of these functions in the limit $y \rightarrow \infty$ is
$$j_l(y) \rightarrow \frac{\sin(y - l\,\pi/2)}{y},\qquad(9.55)$$
$$\eta_l(y) \rightarrow -\frac{\cos(y - l\,\pi/2)}{y}.\qquad(9.56)$$
We can write
$$\exp(\,{\rm i}\,k\,r\cos\theta) = \sum_{l=0,\infty} a_l\,j_l(k\,r)\,P_l(\cos\theta),\qquad(9.57)$$
where the $a_l$ are constants. Note that there are no Neumann functions in this expansion, because they are not well behaved as $r \rightarrow 0$. The Legendre polynomials are orthogonal,
$$\int_{-1}^{1} d\mu\,P_n(\mu)\,P_m(\mu) = \frac{\delta_{n\,m}}{n + 1/2},\qquad(9.58)$$
so we can invert the above expansion to give
$$a_l\,j_l(k\,r) = (l + 1/2)\int_{-1}^{1} d\mu\,\exp(\,{\rm i}\,k\,r\,\mu)\,P_l(\mu).\qquad(9.59)$$
It is well known that
$$j_l(y) = \frac{(-{\rm i})^{\,l}}{2}\int_{-1}^{1} d\mu\,\exp(\,{\rm i}\,y\,\mu)\,P_l(\mu),\qquad(9.60)$$
where $l = 0, \infty$. Thus,
$$a_l = {\rm i}^{\,l}\,(2\,l+1),\qquad(9.61)$$
giving
$$\exp(\,{\rm i}\,k\,r\cos\theta) = \sum_{l=0,\infty} {\rm i}^{\,l}\,(2\,l+1)\,j_l(k\,r)\,P_l(\cos\theta).\qquad(9.62)$$
The above expression tells us how to decompose a plane wave into a series of spherical waves (or "partial waves").
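Expansion (9.62) is easy to test numerically. The following Python sketch generates the spherical Bessel functions by stable downward recurrence (normalized against $j_0(y) = \sin y/y$) and the Legendre polynomials by their three-term recurrence, then sums the series; the truncation order $l_{max} = 40$ is an arbitrary choice of this sketch, adequate for $y = 5$.

```python
import cmath, math

# Numerical check of the partial-wave expansion of a plane wave, Eq. (9.62):
#   exp(i y mu) = sum_l i^l (2l+1) j_l(y) P_l(mu),   mu = cos(theta).

def spherical_jn(lmax, y):
    """[j_0(y), ..., j_lmax(y)] via downward recurrence (Miller's algorithm),
    normalized against the closed form j_0(y) = sin(y)/y."""
    top = lmax + 20                 # start well above lmax with arbitrary seed
    jp, j = 0.0, 1e-30
    out = [0.0] * (top + 1)
    for l in range(top, 0, -1):     # j_{l-1} = (2l+1)/y * j_l - j_{l+1}
        out[l] = j
        jp, j = j, (2 * l + 1) / y * j - jp
    out[0] = j
    scale = (math.sin(y) / y) / out[0]
    return [v * scale for v in out[: lmax + 1]]

def legendre(lmax, mu):
    """[P_0(mu), ..., P_lmax(mu)] by the standard three-term recurrence."""
    p = [1.0, mu]
    for l in range(1, lmax):
        p.append(((2 * l + 1) * mu * p[l] - l * p[l - 1]) / (l + 1))
    return p[: lmax + 1]

def plane_wave_sum(y, mu, lmax=40):
    j = spherical_jn(lmax, y)
    P = legendre(lmax, mu)
    return sum((1j)**l * (2 * l + 1) * j[l] * P[l] for l in range(lmax + 1))

z = plane_wave_sum(5.0, 0.3)        # should reproduce exp(i * 5 * 0.3)
```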
The most general solution for the total wavefunction outside the scattering region is
$$\psi(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}}\sum_{l=0,\infty}\big[A_l\,j_l(k\,r) + B_l\,\eta_l(k\,r)\big]\,P_l(\cos\theta),\qquad(9.63)$$
where the $A_l$ and $B_l$ are constants. Note that the Neumann functions are allowed to appear in this expansion, because its region of validity does not include the origin. In the large-$r$ limit, the total wavefunction reduces to
$$\psi(\mathbf{x}) \simeq \frac{1}{(2\pi)^{3/2}}\sum_{l=0,\infty}\left[A_l\,\frac{\sin(k\,r - l\,\pi/2)}{k\,r} - B_l\,\frac{\cos(k\,r - l\,\pi/2)}{k\,r}\right]P_l(\cos\theta),\qquad(9.64)$$
where use has been made of Equations (9.55)–(9.56). The above expression can also be written
$$\psi(\mathbf{x}) \simeq \frac{1}{(2\pi)^{3/2}}\sum_l C_l\,\frac{\sin(k\,r - l\,\pi/2 + \delta_l)}{k\,r}\,P_l(\cos\theta),\qquad(9.65)$$
where the sine and cosine functions have been combined to give a sine function that is phase-shifted by $\delta_l$. Equation (9.65) yields
$$\psi(\mathbf{x}) \simeq \frac{1}{(2\pi)^{3/2}}\sum_l C_l\left[\frac{\exp[\,{\rm i}\,(k\,r - l\,\pi/2 + \delta_l)] - \exp[-{\rm i}\,(k\,r - l\,\pi/2 + \delta_l)]}{2\,{\rm i}\,k\,r}\right]P_l(\cos\theta),\qquad(9.66)$$
which contains both incoming and outgoing spherical waves. What is the source of the incoming waves? Obviously, they must be part of the large-$r$ asymptotic expansion of the incident wavefunction. In fact, it is easily seen that
$$\phi(\mathbf{x}) \simeq \frac{1}{(2\pi)^{3/2}}\sum_{l=0,\infty} {\rm i}^{\,l}\,(2\,l+1)\left[\frac{\exp[\,{\rm i}\,(k\,r - l\,\pi/2)] - \exp[-{\rm i}\,(k\,r - l\,\pi/2)]}{2\,{\rm i}\,k\,r}\right]P_l(\cos\theta)\qquad(9.67)$$
in the large-$r$ limit. Now, Equations (9.47) and (9.48) give
$$(2\pi)^{3/2}\,[\psi(\mathbf{x}) - \phi(\mathbf{x})] = \frac{\exp(\,{\rm i}\,k\,r)}{r}\,f(\theta).\qquad(9.68)$$
Note that the right-hand side consists only of an outgoing spherical wave. This implies that the coefficients of the incoming spherical waves in the large-$r$ expansions of $\psi(\mathbf{x})$ and $\phi(\mathbf{x})$ must be equal. It follows from Equations (9.66) and (9.67) that
$$C_l = (2\,l+1)\,\exp[\,{\rm i}\,(\delta_l + l\,\pi/2)].\qquad(9.69)$$
Thus, Equations (9.66)–(9.68) yield
$$f(\theta) = \sum_{l=0,\infty}(2\,l+1)\,\frac{\exp(\,{\rm i}\,\delta_l)}{k}\,\sin\delta_l\,P_l(\cos\theta).\qquad(9.70)$$
Clearly, determining the scattering amplitude $f(\theta)$ via a decomposition into partial waves (i.e., spherical waves) is equivalent to determining the phase-shifts $\delta_l$.

9.5 Optical Theorem

The differential scattering cross-section $d\sigma/d\Omega$ is simply the modulus squared of the scattering amplitude $f(\theta)$. The total cross-section is given by
$$\sigma_{total} = \int d\Omega\,|f(\theta)|^{\,2} = \frac{1}{k^2}\oint d\varphi\int_{-1}^{1} d\mu\sum_l\sum_{l'}(2\,l+1)\,(2\,l'+1)\,\exp[\,{\rm i}\,(\delta_l - \delta_{l'})]\,\sin\delta_l\,\sin\delta_{l'}\,P_l(\mu)\,P_{l'}(\mu),\qquad(9.71)$$
where $\mu = \cos\theta$. It follows that
$$\sigma_{total} = \frac{4\pi}{k^2}\sum_{l=0,\infty}(2\,l+1)\,\sin^2\delta_l,\qquad(9.72)$$
where use has been made of Equation (9.58). A comparison of this result with Equation (9.70) yields
$$\sigma_{total} = \frac{4\pi}{k}\,{\rm Im}\big[f(0)\big],\qquad(9.73)$$
since $P_l(1) = 1$. This result is known as the optical theorem. It is a reflection of the fact that the very existence of scattering requires scattering in the forward ($\theta = 0$) direction, in order to interfere with the incident wave, and thereby reduce the probability current in this direction. It is usual to write
$$\sigma_{total} = \sum_{l=0,\infty}\sigma_l,\qquad(9.74)$$
where
$$\sigma_l = \frac{4\pi}{k^2}\,(2\,l+1)\,\sin^2\delta_l\qquad(9.75)$$
is the $l$th partial cross-section: i.e., the contribution to the total cross-section from the $l$th partial wave. Note that the maximum value for the $l$th partial cross-section occurs when the phase-shift $\delta_l$ takes the value $\pi/2$.

9.6 Determination of Phase-Shifts

Let us now consider how the phase-shifts $\delta_l$ can be evaluated. Consider a spherically symmetric potential $V(r)$ that vanishes for $r > a$, where $a$ is termed the range of the potential. In the region $r > a$, the wavefunction $\psi(\mathbf{x})$ satisfies the free-space Schrödinger equation (9.49). The most general solution that is consistent with no incoming spherical waves is
$$\psi(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}}\sum_{l=0,\infty} {\rm i}^{\,l}\,(2\,l+1)\,A_l(r)\,P_l(\cos\theta),\qquad(9.76)$$
where
$$A_l(r) = \exp(\,{\rm i}\,\delta_l)\,\big[\cos\delta_l\,j_l(k\,r) - \sin\delta_l\,\eta_l(k\,r)\big].\qquad(9.77)$$
Note that Neumann functions are allowed to appear in the above expression, because its region of validity does not include the origin (where $V \neq 0$). The logarithmic derivative of the $l$th radial wavefunction $A_l(r)$ just outside the range of the potential is given by
$$\beta_{l+} = k\,a\left[\frac{\cos\delta_l\,j_l'(k\,a) - \sin\delta_l\,\eta_l'(k\,a)}{\cos\delta_l\,j_l(k\,a) - \sin\delta_l\,\eta_l(k\,a)}\right],\qquad(9.78)$$
where $j_l'(x)$ denotes $dj_l(x)/dx$, etc.
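The optical theorem can be verified directly from the partial-wave formulas. In the following Python sketch, the wavenumber and phase shifts are arbitrary illustrative values; for any such set, the total cross-section (9.72) and $(4\pi/k)\,{\rm Im}[f(0)]$ from (9.73) must agree identically, because ${\rm Im}[\exp({\rm i}\,\delta_l)\sin\delta_l] = \sin^2\delta_l$.

```python
import math, cmath

# Consistency check of the optical theorem, Eq. (9.73), using the
# partial-wave amplitude (9.70) evaluated in the forward direction
# (P_l(1) = 1) and the partial cross-sections (9.75).

k = 1.7                                   # illustrative wavenumber
deltas = [0.8, 0.45, 0.2, 0.05, 0.01]     # delta_0 ... delta_4 (radians)

f0 = sum((2 * l + 1) * cmath.exp(1j * d) * math.sin(d)
         for l, d in enumerate(deltas)) / k          # forward amplitude f(0)
sigma = sum(4.0 * math.pi / k**2 * (2 * l + 1) * math.sin(d)**2
            for l, d in enumerate(deltas))           # sum of sigma_l
```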
The above equation can be inverted to give
$$\tan\delta_l = \frac{k\,a\,j_l'(k\,a) - \beta_{l+}\,j_l(k\,a)}{k\,a\,\eta_l'(k\,a) - \beta_{l+}\,\eta_l(k\,a)}.\qquad(9.79)$$
Thus, the problem of determining the phase-shift $\delta_l$ is equivalent to that of determining $\beta_{l+}$.

The most general solution to Schrödinger's equation inside the range of the potential ($r < a$) that does not depend on the azimuthal angle $\varphi$ is
$$\psi(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}}\sum_{l=0,\infty} {\rm i}^{\,l}\,(2\,l+1)\,R_l(r)\,P_l(\cos\theta),\qquad(9.80)$$
where
$$R_l(r) = \frac{u_l(r)}{r},\qquad(9.81)$$
and
$$\frac{d^2u_l}{dr^2} + \left[k^2 - \frac{2\,m}{\hbar^2}\,V - \frac{l\,(l+1)}{r^2}\right]u_l = 0.\qquad(9.82)$$
The boundary condition
$$u_l(0) = 0\qquad(9.83)$$
ensures that the radial wavefunction is well behaved at the origin. We can launch a well-behaved solution of the above equation from $r = 0$, integrate out to $r = a$, and form the (dimensionless) logarithmic derivative
$$\beta_{l-} = \left.\frac{a}{(u_l/r)}\,\frac{d(u_l/r)}{dr}\right|_{r=a}.\qquad(9.84)$$
Because $\psi(\mathbf{x})$ and its first derivatives are necessarily continuous for physically acceptable wavefunctions, it follows that
$$\beta_{l+} = \beta_{l-}.\qquad(9.85)$$
The phase-shift $\delta_l$ is then obtainable from Equation (9.79).

9.7 Hard Sphere Scattering

Let us test out this scheme using a particularly simple example. Consider scattering by a hard sphere, for which the potential is infinite for $r < a$, and zero for $r > a$. It follows that $\psi(\mathbf{x})$ is zero in the region $r < a$, which implies that $u_l = 0$ for all $l$. Thus,
$$\beta_{l-} = \beta_{l+} = \infty,\qquad(9.86)$$
for all $l$. It follows from Equation (9.79) that
$$\tan\delta_l = \frac{j_l(k\,a)}{\eta_l(k\,a)}.\qquad(9.87)$$
Consider the $l = 0$ partial wave, which is usually referred to as the s-wave. Equation (9.87) yields
$$\tan\delta_0 = \frac{\sin(k\,a)/k\,a}{-\cos(k\,a)/k\,a} = -\tan(k\,a),\qquad(9.88)$$
where use has been made of Equations (9.53)–(9.54). It follows that
$$\delta_0 = -k\,a.\qquad(9.89)$$
The s-wave radial wavefunction is
$$A_0(r) = \exp(-{\rm i}\,k\,a)\left[\frac{\cos(k\,a)\,\sin(k\,r) - \sin(k\,a)\,\cos(k\,r)}{k\,r}\right] = \exp(-{\rm i}\,k\,a)\,\frac{\sin[k\,(r-a)]}{k\,r}.\qquad(9.90)$$
The corresponding radial wavefunction for the incident wave takes the form
$$\tilde{A}_0(r) = \frac{\sin(k\,r)}{k\,r}.\qquad(9.91)$$
It is clear that the actual $l = 0$ radial wavefunction is similar to the incident $l = 0$ wavefunction, except that it is phase-shifted by $k\,a$.

Let us consider the low- and high-energy asymptotic limits of $\tan\delta_l$. Low energy corresponds to $k\,a \ll 1$. In this limit, the spherical Bessel functions and Neumann functions reduce to
$$j_l(k\,r) \simeq \frac{(k\,r)^{\,l}}{(2\,l+1)!!},\qquad(9.92)$$
$$\eta_l(k\,r) \simeq -\frac{(2\,l-1)!!}{(k\,r)^{\,l+1}},\qquad(9.93)$$
where $n!! = n\,(n-2)\,(n-4)\cdots 1$. It follows that
$$\tan\delta_l = \frac{-(k\,a)^{\,2l+1}}{(2\,l+1)\,[(2\,l-1)!!]^{\,2}}.\qquad(9.94)$$
It is clear that we can neglect $\delta_l$, with $l > 0$, with respect to $\delta_0$. In other words, at low energy only s-wave scattering (i.e., spherically symmetric scattering) is important. It follows from Equations (9.29), (9.70), and (9.89) that
$$\frac{d\sigma}{d\Omega} = \frac{\sin^2(k\,a)}{k^2} \simeq a^2\qquad(9.95)$$
for $k\,a \ll 1$. Note that the total cross-section,
$$\sigma_{total} = \oint d\Omega\,\frac{d\sigma}{d\Omega} = 4\pi\,a^2,\qquad(9.96)$$
is four times the geometric cross-section $\pi\,a^2$ (i.e., the cross-section for classical particles bouncing off a hard sphere of radius $a$). However, low-energy scattering implies relatively long wavelengths, so we do not expect to obtain the classical result in this limit.

Consider the high-energy limit $k\,a \gg 1$. At high energies, all partial waves up to $l_{max} = k\,a$ contribute significantly to the scattering cross-section. It follows from Equation (9.72) that
$$\sigma_{total} = \frac{4\pi}{k^2}\sum_{l=0,l_{max}}(2\,l+1)\,\sin^2\delta_l.\qquad(9.97)$$
With so many $l$ values contributing, it is legitimate to replace $\sin^2\delta_l$ by its average value $1/2$. Thus,
$$\sigma_{total} = \sum_{l=0,k\,a}\frac{2\pi}{k^2}\,(2\,l+1) \simeq 2\pi\,a^2.\qquad(9.98)$$
This is twice the classical result, which is somewhat surprising, because we might expect to obtain the classical result in the short-wavelength limit. For hard-sphere scattering, incident waves with impact parameters less than $a$ must be deflected.
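These hard-sphere results are simple to confirm numerically. The following Python sketch evaluates $\tan\delta_0$ from (9.87) using the closed forms of $j_0$ and $\eta_0$, and checks that the low-energy s-wave cross-section (9.95)–(9.96) approaches $4\pi a^2$; the parameter values are illustrative choices of this sketch.

```python
import math

# Hard-sphere s-wave phase shift, Eq. (9.87)-(9.88):
#   tan(delta_0) = j_0(ka) / eta_0(ka) = -tan(ka),
# and the low-energy (s-wave only) cross-section, Eq. (9.95)-(9.96).

def tan_delta0(ka):
    j0 = math.sin(ka) / ka
    eta0 = -math.cos(ka) / ka
    return j0 / eta0              # equals -tan(ka)

def sigma_s_wave(k, a):
    """sigma = (4 pi / k^2) sin^2(ka)  ->  4 pi a^2 as ka -> 0."""
    return 4.0 * math.pi * math.sin(k * a)**2 / k**2

k, a = 0.01, 1.0                  # ka << 1 (illustrative)
ratio = sigma_s_wave(k, a) / (4.0 * math.pi * a**2)   # -> 1 at low energy
```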
However, in order to produce a "shadow" behind the sphere, there must be scattering in the forward direction (recall the optical theorem) to produce destructive interference with the incident plane wave. In fact, the interference is not completely destructive, and the shadow has a bright spot in the forward direction. The effective cross-section associated with this bright spot is $\pi\,a^2$ which, when combined with the cross-section for classical reflection, $\pi\,a^2$, gives the actual cross-section of $2\pi\,a^2$.

9.8 Low Energy Scattering

At low energies (i.e., when $1/k$ is much larger than the range of the potential), partial waves with $l > 0$, in general, make a negligible contribution to the scattering cross-section. It follows that, at these energies, with a finite-range potential, only s-wave scattering is important.

As a specific example, let us consider scattering by a finite potential well, characterized by $V = V_0$ for $r < a$, and $V = 0$ for $r \geq a$. Here, $V_0$ is a constant. The potential is repulsive for $V_0 > 0$, and attractive for $V_0 < 0$. The external wavefunction is given by [see Equation (9.77)]
$$A_0(r) = \exp(\,{\rm i}\,\delta_0)\,\big[j_0(k\,r)\,\cos\delta_0 - \eta_0(k\,r)\,\sin\delta_0\big] = \frac{\exp(\,{\rm i}\,\delta_0)\,\sin(k\,r + \delta_0)}{k\,r},\qquad(9.99)$$
where use has been made of Equations (9.53)–(9.54). The internal wavefunction follows from Equation (9.82). We obtain
$$A_0(r) = B\,\frac{\sin(k'\,r)}{r},\qquad(9.100)$$
where use has been made of the boundary condition (9.83). Here, $B$ is a constant, and
$$E - V_0 = \frac{\hbar^2 k'^{\,2}}{2\,m}.\qquad(9.101)$$
Note that Equation (9.100) only applies when $E > V_0$. For $E < V_0$, we have
$$A_0(r) = B\,\frac{\sinh(\kappa\,r)}{r},\qquad(9.102)$$
where
$$V_0 - E = \frac{\hbar^2 \kappa^2}{2\,m}.\qquad(9.103)$$
Matching $A_0(r)$, and its radial derivative, at $r = a$ yields
$$\tan(k\,a + \delta_0) = \frac{k}{k'}\,\tan(k'\,a)\qquad(9.104)$$
for $E > V_0$, and
$$\tan(k\,a + \delta_0) = \frac{k}{\kappa}\,\tanh(\kappa\,a)\qquad(9.105)$$
for $E < V_0$.

Consider an attractive potential, for which $E > V_0$. Suppose that $|V_0| \gg E$ (i.e., the depth of the potential well is much larger than the energy of the incident particles), so that $k' \gg k$.
It follows from Equation (9.104) that, unless tan(k′ a) becomes extremely large, the right-hand side is much less than unity, so, replacing the tangent of a small quantity with the quantity itself, we obtain

k a + δ_0 ≃ (k/k′) tan(k′ a).  (9.106)

This yields

δ_0 ≃ k a [tan(k′ a)/(k′ a) − 1].  (9.107)

According to Equation (9.97), the scattering cross-section is given by

σ_total ≃ (4π/k²) sin²δ_0 = 4π a² [tan(k′ a)/(k′ a) − 1]².  (9.108)

Now,

k′ a = √(k² a² + 2 m |V_0| a²/ℏ²),  (9.109)

so for sufficiently small values of k a,

k′ a ≃ √(2 m |V_0| a²/ℏ²).  (9.110)

It follows that the total (s-wave) scattering cross-section is independent of the energy of the incident particles (provided that this energy is sufficiently small).

Note that there are values of k′ a (e.g., k′ a ≃ 4.49) at which δ_0 → π, and the scattering cross-section (9.108) vanishes, despite the very strong attraction of the potential. In reality, the cross-section is not exactly zero, because of contributions from l > 0 partial waves. But, at low incident energies, these contributions are small. It follows that there are certain values of V_0 and k that give rise to almost perfect transmission of the incident wave. This is called the Ramsauer-Townsend effect, and has been observed experimentally.

9.9 Resonance Scattering

There is a significant exception to the independence of the cross-section on energy. Suppose that the quantity √(2 m |V_0| a²/ℏ²) is slightly less than π/2. As the incident energy increases, k′ a, which is given by Equation (9.109), can reach the value π/2. In this case, tan(k′ a) becomes infinite, so we can no longer assume that the right-hand side of Equation (9.104) is small. In fact, at the value of the incident energy at which k′ a = π/2, it follows from Equation (9.104) that k a + δ_0 = π/2, or δ_0 ≃ π/2 (because we are assuming that k a ≪ 1). This implies that

σ_total = (4π/k²) sin²δ_0 = 4π a² [1/(k² a²)].  (9.111)

Note that the cross-section now depends on the energy.
Furthermore, the magnitude of the cross-section is much larger than that given in Equation (9.108) for k′ a ≠ π/2 (since k a ≪ 1).

The origin of this rather strange behavior is quite simple. The condition

√(2 m |V_0| a²/ℏ²) = π/2  (9.112)

is equivalent to the condition that a spherical well of depth V_0 possesses a bound state at zero energy. Thus, for a potential well that satisfies the above equation, the energy of the scattering system is essentially the same as the energy of the bound state. In this situation, an incident particle would like to form a bound state in the potential well. However, the bound state is not stable, because the system has a small positive energy. Nevertheless, this sort of resonance scattering is best understood as the capture of an incident particle to form a metastable bound state, and the subsequent decay of the bound state and release of the particle. The cross-section for resonance scattering is generally far higher than that for non-resonance scattering.

We have seen that there is a resonant effect when the phase-shift of the s-wave takes the value π/2. There is nothing special about the l = 0 partial wave, so it is reasonable to assume that there is a similar resonance when the phase-shift of the lth partial wave is π/2. Suppose that δ_l attains the value π/2 at the incident energy E_0, so that

δ_l(E_0) = π/2.  (9.113)

Let us expand cot δ_l in the vicinity of the resonant energy:

cot δ_l(E) = cot δ_l(E_0) + [d(cot δ_l)/dE]_(E=E_0) (E − E_0) + ··· = −[(1/sin²δ_l) (dδ_l/dE)]_(E=E_0) (E − E_0) + ···.  (9.114)

Defining

[dδ_l(E)/dE]_(E=E_0) = 2/Γ,  (9.115)

we obtain

cot δ_l(E) = −(2/Γ) (E − E_0) + ···.  (9.116)

Recall, from Equation (9.75), that the contribution of the lth partial wave to the scattering cross-section is

σ_l = (4π/k²) (2l+1) sin²δ_l = (4π/k²) (2l+1) [1/(1 + cot²δ_l)].  (9.117)

Thus,

σ_l ≃ (4π/k²) (2l+1) (Γ²/4)/[(E − E_0)² + Γ²/4].  (9.118)

This is the famous Breit-Wigner formula.
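As a quick numerical illustration (not part of the original development), the resonant factor in Equation (9.118) can be evaluated to confirm its two defining properties: it peaks at E = E_0, and falls to half its maximum at E = E_0 ± Γ/2. The values of E_0 and Γ below are arbitrary illustrative numbers.

```python
# Sketch: the dimensionless resonant factor of the Breit-Wigner formula (9.118),
# f(E) = (Gamma^2/4) / ((E - E0)^2 + Gamma^2/4).
# E0 and Gamma are arbitrary illustrative values (not from the text).

def breit_wigner_factor(E, E0, Gamma):
    """Resonant factor appearing in the partial cross-section sigma_l."""
    return (Gamma**2 / 4.0) / ((E - E0)**2 + Gamma**2 / 4.0)

E0, Gamma = 5.0, 0.4   # resonance energy and width (arbitrary units)

peak = breit_wigner_factor(E0, E0, Gamma)                  # maximum, at E = E0
half_left = breit_wigner_factor(E0 - Gamma / 2, E0, Gamma)   # half-maximum
half_right = breit_wigner_factor(E0 + Gamma / 2, E0, Gamma)  # half-maximum
```

This makes explicit why Γ is called the width of the resonance: the full width of the curve at half its maximum height is exactly Γ.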
The variation of the partial cross-section σ_l with the incident energy has the form of a classical resonance curve. The quantity Γ is the width of the resonance (in energy). We can interpret the Breit-Wigner formula as describing the absorption of an incident particle to form a metastable state, of energy E_0, and lifetime τ = ℏ/Γ.

Exercises

9.1 Consider a scattering potential of the form V(r) = V_0 exp(−r²/a²). Calculate the differential scattering cross-section, dσ/dΩ, using the Born approximation.

9.2 Consider a scattering potential that takes the constant value V_0 for r < R, and is zero for r > R, where V_0 may be either positive or negative. Using the method of partial waves, show that for |V_0| ≪ E = ℏ² k²/(2 m), and k R ≪ 1, the differential cross-section is isotropic, and that the total cross-section is

σ_tot = (16π/9) m² V_0² R⁶/ℏ⁴.

Suppose that the energy is slightly raised. Show that the angular distribution can then be written in the form

dσ/dΩ = A + B cos θ.

Obtain an approximate expression for B/A.

9.3 Consider scattering by a repulsive δ-shell potential:

V(r) = (ℏ²/2m) γ δ(r − R),

where γ > 0. Find the equation that determines the s-wave phase-shift, δ_0, as a function of k (where E = ℏ² k²/2m). Assume that γ ≫ R⁻¹, k. Show that if tan(k R) is not close to zero then the s-wave phase-shift resembles the hard sphere result discussed in the text. Furthermore, show that if tan(k R) is close to zero then resonance behavior is possible: i.e., cot δ_0 goes through zero from the positive side as k increases. Determine the approximate positions of the resonances (retaining terms up to order 1/γ). Compare the resonant energies with the bound state energies for a particle confined within an infinite spherical well of radius R. Obtain an approximate expression for the resonance width

Γ = −2 [d(cot δ_0)/dE]⁻¹_(E=E_r).

Show that the resonances become extremely sharp as γ → ∞.
9.4 Show that the differential cross-section for the elastic scattering of a fast electron by the ground-state of a hydrogen atom is

dσ/dΩ = [2 m_e e²/(4π ε_0 ℏ² q²)]² {1 − 16/[4 + (q a_0)²]²}²,

where q = |k − k′|, and a_0 is the Bohr radius.

10 Identical Particles

10.1 Permutation Symmetry

Consider a system consisting of a collection of identical particles. In classical mechanics, it is, in principle, possible to continuously monitor the position of each particle as a function of time. Hence, the constituent particles can be unambiguously labeled. In quantum mechanics, on the other hand, this is not possible because continuous position measurements would disturb the system. It follows that identical particles cannot be unambiguously labeled in quantum mechanics.

Consider a quantum system consisting of two identical particles. Suppose that one of the particles—particle 1, say—is characterized by the state ket |k′⟩. Here, k′ represents the eigenvalues of the complete set of commuting observables associated with the particle. Suppose that the other particle—particle 2—is characterized by the state ket |k′′⟩. The state ket for the whole system can be written in the product form

|k′⟩ |k′′⟩,  (10.1)

where it is understood that the first ket corresponds to particle 1, and the second to particle 2. We can also consider the ket

|k′′⟩ |k′⟩,  (10.2)

which corresponds to a state in which particle 1 has the eigenvalues k′′, and particle 2 the eigenvalues k′.

Suppose that we were to measure all of the simultaneously measurable properties of our two-particle system. We might obtain the results k′ for one particle, and k′′ for the other. However, we have no way of knowing whether the corresponding state ket is |k′⟩ |k′′⟩ or |k′′⟩ |k′⟩, or any linear combination of these two kets. In other words, all state kets of the form

c_1 |k′⟩ |k′′⟩ + c_2 |k′′⟩ |k′⟩  (10.3)

correspond to an identical set of results when the properties of the system are measured.
This phenomenon is known as exchange degeneracy. Such degeneracy is problematic because the specification of a complete set of observable eigenvalues in a system of identical particles does not seem to uniquely determine the corresponding state ket. Fortunately, nature has a way of avoiding this difficulty.

Consider the permutation operator P_12, which is defined such that

P_12 |k′⟩ |k′′⟩ = |k′′⟩ |k′⟩.  (10.4)

In other words, P_12 swaps the identities of particles 1 and 2. It is easily seen that

P_21 = P_12,  (10.5)

P_12² = 1.  (10.6)

Now, the Hamiltonian of a system of two identical particles must necessarily be a symmetric function of each particle's observables (because exchange of identical particles could not possibly affect the overall energy of the system). For instance,

H = p_1²/(2m) + p_2²/(2m) + V_pair(|x_1 − x_2|) + V_ext(x_1) + V_ext(x_2).  (10.7)

Here, we have separated the mutual interaction of the two particles from their interaction with an external potential. It follows that if

H |k′⟩ |k′′⟩ = E |k′⟩ |k′′⟩  (10.8)

then

H |k′′⟩ |k′⟩ = E |k′′⟩ |k′⟩,  (10.9)

where E is the total energy. Operating on both sides of (10.8) with P_12, and making use of Equation (10.6), we obtain

P_12 H P_12² |k′⟩ |k′′⟩ = E P_12 |k′⟩ |k′′⟩,  (10.10)

or

P_12 H P_12 |k′′⟩ |k′⟩ = E |k′′⟩ |k′⟩ = H |k′′⟩ |k′⟩,  (10.11)

where use has been made of (10.9). We deduce that

P_12 H P_12 = H,  (10.12)

which implies [from (10.6)] that

[H, P_12] = 0.  (10.13)

In other words, an eigenstate of the Hamiltonian is a simultaneous eigenstate of the permutation operator P_12. Now, according to Equation (10.6), the permutation operator possesses the eigenvalues +1 and −1, respectively. The corresponding properly normalized eigenstates are

|k′ k′′⟩_+ = (1/√2) (|k′⟩ |k′′⟩ + |k′′⟩ |k′⟩),  (10.14)

and

|k′ k′′⟩_− = (1/√2) (|k′⟩ |k′′⟩ − |k′′⟩ |k′⟩).  (10.15)

Here, it is assumed that ⟨k′|k′′⟩ = δ_(k′ k′′). Note that |k′ k′′⟩_+ is symmetric with respect to interchange of particles—i.e.,

|k′′ k′⟩_+ = +|k′ k′′⟩_+,  (10.16)

whereas |k′ k′′⟩_−
is antisymmetric—i.e.,

|k′′ k′⟩_− = −|k′ k′′⟩_−.  (10.17)

Let us now consider a system of three identical particles. We can represent the overall state ket as

|k′ k′′ k′′′⟩,  (10.18)

where k′, k′′, and k′′′ are the eigenvalues of particles 1, 2, and 3, respectively. We can also define the two-particle permutation operators

P_12 |k′ k′′ k′′′⟩ = |k′′ k′ k′′′⟩,  (10.19)

P_23 |k′ k′′ k′′′⟩ = |k′ k′′′ k′′⟩,  (10.20)

P_31 |k′ k′′ k′′′⟩ = |k′′′ k′′ k′⟩.  (10.21)

It is easily demonstrated that

P_21 = P_12,  (10.22)

P_32 = P_23,  (10.23)

P_13 = P_31,  (10.24)

and

P_12² = 1,  (10.25)

P_23² = 1,  (10.26)

P_31² = 1.  (10.27)

As before, the Hamiltonian of the system must be a symmetric function of the particles' observables: i.e.,

H |k′ k′′ k′′′⟩ = E |k′ k′′ k′′′⟩,  (10.28)

H |k′′ k′′′ k′⟩ = E |k′′ k′′′ k′⟩,  (10.29)

H |k′′′ k′ k′′⟩ = E |k′′′ k′ k′′⟩,  (10.30)

H |k′′ k′ k′′′⟩ = E |k′′ k′ k′′′⟩,  (10.31)

H |k′ k′′′ k′′⟩ = E |k′ k′′′ k′′⟩,  (10.32)

H |k′′′ k′′ k′⟩ = E |k′′′ k′′ k′⟩,  (10.33)

where E is the total energy. Using analogous arguments to those employed for the two-particle system, we deduce that

[H, P_12] = [H, P_23] = [H, P_31] = 0.  (10.34)

Hence, an eigenstate of the Hamiltonian is a simultaneous eigenstate of the permutation operators P_12, P_23, and P_31. However, according to Equations (10.25)–(10.27), the possible eigenvalues of these operators are ±1.

Let us define the cyclic permutation operator P_123, where

P_123 |k′ k′′ k′′′⟩ = |k′′′ k′ k′′⟩.  (10.35)

It follows that

P_123 = P_12 P_31 = P_23 P_12 = P_31 P_23.  (10.36)

It is also clear from Equations (10.28) and (10.33) that

[H, P_123] = 0.  (10.37)

Thus, an eigenstate of the Hamiltonian is a simultaneous eigenstate of the permutation operators P_12, P_23, P_31, and P_123. Let λ_12, λ_23, λ_31, and λ_123 represent the eigenvalues of these operators, respectively. We know that λ_12 = ±1, λ_23 = ±1, and λ_31 = ±1. Moreover, it follows from (10.36) that

λ_123 = λ_12 λ_31 = λ_23 λ_12 = λ_31 λ_23.
(10.38)

The above equations imply that

λ_123 = +1,  (10.39)

and either

λ_12 = λ_23 = λ_31 = +1,  (10.40)

or

λ_12 = λ_23 = λ_31 = −1.  (10.41)

In other words, the multi-particle state ket must be either totally symmetric, or totally antisymmetric, with respect to swapping the identities of any given pair of particles. Thus, in terms of properly normalized single particle kets, the properly normalized totally symmetric and totally antisymmetric kets are

|k′ k′′ k′′′⟩_+ = (1/√3!) (|k′⟩|k′′⟩|k′′′⟩ + |k′′′⟩|k′⟩|k′′⟩ + |k′′⟩|k′′′⟩|k′⟩ + |k′′′⟩|k′′⟩|k′⟩ + |k′⟩|k′′′⟩|k′′⟩ + |k′′⟩|k′⟩|k′′′⟩),  (10.42)

and

|k′ k′′ k′′′⟩_− = (1/√3!) (|k′⟩|k′′⟩|k′′′⟩ + |k′′′⟩|k′⟩|k′′⟩ + |k′′⟩|k′′′⟩|k′⟩ − |k′′′⟩|k′′⟩|k′⟩ − |k′⟩|k′′′⟩|k′′⟩ − |k′′⟩|k′⟩|k′′′⟩),  (10.43)

respectively. The above arguments can be generalized to systems of more than three identical particles in a straightforward manner.

10.2 Symmetrization Postulate

We have seen that the exchange degeneracy of a system of identical particles is such that a specification of a complete set of observable eigenvalues does not uniquely determine the corresponding state ket. However, we have also seen that there are only two possible state kets: i.e., a ket that is totally symmetric with respect to particle interchange, or a ket that is totally antisymmetric. It turns out that systems of identical particles possessing integer-spin (e.g., spin 0, or spin 1) always choose the totally symmetric ket, whereas systems of identical particles possessing half-integer-spin (e.g., spin 1/2) always choose the totally antisymmetric ket. This additional piece of information ensures that the specification of a complete set of observable eigenvalues of a system of identical particles does, in fact, uniquely determine the corresponding state ket.

Systems of identical particles whose state kets are totally symmetric with respect to particle interchange are said to obey Bose-Einstein statistics. Moreover, such particles are termed bosons.
On the other hand, systems of identical particles whose state kets are totally antisymmetric with respect to particle interchange are said to obey Fermi-Dirac statistics, and the constituent particles are called fermions. In non-relativistic quantum mechanics, the rule that all integer-spin particles are bosons, whereas all half-integer-spin particles are fermions, must be accepted as an empirical fact. However, in relativistic quantum mechanics, it is possible to prove that half-integer-spin particles cannot be bosons, and integer-spin particles cannot be fermions. Incidentally, electrons, protons, and neutrons are all fermions.

The Pauli exclusion principle is an immediate consequence of the fact that electrons obey Fermi-Dirac statistics. This principle states that no two electrons in a multi-electron system can possess identical sets of observable eigenvalues. For instance, in the case of a three-electron system, the state ket is [see Equation (10.43)]

|k′ k′′ k′′′⟩_− = (1/√3!) (|k′⟩|k′′⟩|k′′′⟩ + |k′′′⟩|k′⟩|k′′⟩ + |k′′⟩|k′′′⟩|k′⟩ − |k′′′⟩|k′′⟩|k′⟩ − |k′⟩|k′′′⟩|k′′⟩ − |k′′⟩|k′⟩|k′′′⟩).  (10.44)

Note, however, that

|k′ k′ k′′′⟩ = |k′ k′′ k′′⟩ = |k′′′ k′′ k′′′⟩ = |0⟩.  (10.45)

In other words, if two of the electrons in the system possess the same set of observable eigenvalues then the state ket becomes the null ket, which corresponds to the absence of a state.

10.3 Two-Electron System

Consider a system consisting of two electrons. Let x_1 and S_1 represent the position and spin operators of the first electron, respectively, and let x_2 and S_2 represent the corresponding operators for the second electron. Furthermore, let S = S_1 + S_2 represent the total spin operator for the system. Suppose that the Hamiltonian commutes with S², as is often the case. It follows that the state of the system is specified by the position eigenvalues x′_1 and x′_2, as well as the total spin quantum numbers s and m.
As usual, the eigenvalue of S² is s(s+1) ℏ², and the eigenvalue of S_z is m ℏ. Moreover, s, m can only take the values 1, 1 or 1, 0 or 1, −1 or 0, 0. (See Chapter 6.)

The overall wavefunction of the system can be written

ψ(x′_1, x′_2; s, m) = φ(x′_1, x′_2) χ(s, m),  (10.46)

where

χ(1, 1) = χ_+ χ_+,  (10.47)

χ(1, 0) = (1/√2) (χ_+ χ_− + χ_− χ_+),  (10.48)

χ(1, −1) = χ_− χ_−,  (10.49)

χ(0, 0) = (1/√2) (χ_+ χ_− − χ_− χ_+).  (10.50)

Here, the spinor χ_+ χ_− denotes a state in which m_1 = 1/2 and m_2 = −1/2, etc., where m_1 and m_2 are the eigenvalues of S_1z and S_2z, respectively. The three s = 1 spinors are usually referred to as triplet spinors, whereas the single s = 0 spinor is called the singlet spinor. Note that the triplet spinors are all symmetric with respect to exchange of particles, whereas the singlet spinor is antisymmetric.

Fermi-Dirac statistics requires the overall wavefunction to be antisymmetric with respect to exchange of particles. Now, according to (10.46), the overall wavefunction can be written as a product of a spatial wavefunction and a spinor. Moreover, when the system is in the spin triplet state (i.e., s = 1), the spinor is symmetric with respect to exchange of particles. On the other hand, when the system is in the spin singlet state (i.e., s = 0), the spinor is antisymmetric. It follows that, to maintain the overall antisymmetry of the wavefunction, the triplet spatial wavefunction must be antisymmetric with respect to exchange of particles, whereas the singlet spatial wavefunction must be symmetric. In other words, in the spin triplet state, the spatial wavefunction takes the form

φ(x′_1, x′_2) = (1/√2) [ω_A(x′_1) ω_B(x′_2) − ω_B(x′_1) ω_A(x′_2)],  (10.51)

whereas in the spin singlet state the spatial wavefunction is written

φ(x′_1, x′_2) = (1/√2) [ω_A(x′_1) ω_B(x′_2) + ω_B(x′_1) ω_A(x′_2)].
(10.52)

The probability of observing one electron in the volume element d³x_1 around position x_1, and the other in the volume element d³x_2 around position x_2, is |φ(x_1, x_2)|² d³x_1 d³x_2, or

(1/2) {|ω_A(x_1)|² |ω_B(x_2)|² + |ω_A(x_2)|² |ω_B(x_1)|² ± 2 Re[ω_A(x_1) ω_B(x_2) ω_A*(x_2) ω_B*(x_1)]} d³x_1 d³x_2.  (10.53)

Here, the plus sign corresponds to the spin singlet state, whereas the minus sign corresponds to the spin triplet state. We can immediately see that in the spin triplet state the probability of finding the two electrons at the same point in space is zero. In other words, the two electrons have a tendency to avoid one another in the triplet state. On the other hand, in the spin singlet state there is an enhanced probability of finding the two electrons at the same point in space, because of the final term in the previous expression. In other words, the two electrons are attracted to one another in the singlet state. Note, however, that the spatial probability distributions associated with the singlet and triplet states only differ substantially when the two single-particle spatial wavefunctions ω_A(x) and ω_B(x) overlap: i.e., when there exists a region of space in which the two wavefunctions are simultaneously non-negligible.

10.4 Helium Atom

Consider the helium atom, which is a good example of a two-electron system. The Hamiltonian is written

H = p_1²/(2 m_e) + p_2²/(2 m_e) − Z e²/(4π ε_0 r_1) − Z e²/(4π ε_0 r_2) + e²/(4π ε_0 r_12),  (10.54)

where Z = 2, r_1 = |x_1|, r_2 = |x_2|, and r_12 = |x_1 − x_2|. Suppose that the final term on the right-hand side of the above expression were absent. In this case, the overall spatial wavefunction can be formed from products of hydrogen atom wavefunctions calculated with Z = 2, instead of Z = 1. Each of these wavefunctions is characterized by the usual triplet of quantum numbers, n, l, and m.
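As a small numerical aside (not part of the original development), the hydrogenic ground-state orbital with Z = 2, ψ_100(r) = (1/√π)(Z/a_0)^(3/2) exp(−Z r/a_0) [see Equation (10.58) below], from which the helium product wavefunction is built, can be checked for unit normalization by integrating its radial probability density. The sketch below assumes units in which a_0 = 1.

```python
# Sketch (assumed units a0 = 1): check that the Z = 2 hydrogenic ground-state
# orbital psi_100(r) = (1/sqrt(pi)) Z^{3/2} exp(-Z r) is unit-normalized,
# i.e. that integral_0^infinity |psi_100|^2 4 pi r^2 dr = 1.
import math

def radial_density(r, Z):
    """|psi_100(r)|^2 * 4*pi*r^2, with a0 = 1."""
    return 4.0 * Z**3 * r**2 * math.exp(-2.0 * Z * r)

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

Z = 2
# The integrand is negligible beyond r = 10 for Z = 2 (exp(-40) tail).
norm = trapezoid(lambda r: radial_density(r, Z), 0.0, 10.0, 20000)
```

The same quadrature reproduces the analytic value ∫_0^∞ 4 Z³ r² e^(−2Zr) dr = 1 for any Z, which is why the product state ψ_100(x_1) ψ_100(x_2) of Equation (10.57) is automatically normalized.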
Now, the total spin of the system is a constant of the motion (since S obviously commutes with the Hamiltonian), so the overall spin state is either the singlet or the triplet state. The corresponding spatial wavefunction is symmetric in the former case, and antisymmetric in the latter. Suppose that one electron has the quantum numbers n, l, m, whereas the other has the quantum numbers n′, l′, m′. The corresponding spatial wavefunction is

φ(x_1, x_2) = (1/√2) [ψ_nlm(x_1) ψ_n′l′m′(x_2) ± ψ_nlm(x_2) ψ_n′l′m′(x_1)],  (10.55)

where the plus and minus signs correspond to the singlet and triplet spin states, respectively. Here, ψ_nlm(x) is a standard hydrogen atom wavefunction (calculated with Z = 2). For the special case in which the two sets of spatial quantum numbers, n, l, m and n′, l′, m′, are the same, the triplet spin state does not exist (because the associated spatial wavefunction is null). Hence, only the singlet spin state is allowed, and the spatial wavefunction reduces to

φ(x_1, x_2) = ψ_nlm(x_1) ψ_nlm(x_2).  (10.56)

In particular, the ground state (n = n′ = 1, l = l′ = 0, m = m′ = 0) can only exist as a singlet spin state (i.e., a state of overall spin 0), and has the spatial wavefunction

φ(x_1, x_2) = ψ_100(x_1) ψ_100(x_2) = (Z³/π a_0³) exp[−Z (r_1 + r_2)/a_0],  (10.57)

where a_0 is the Bohr radius. This follows because

ψ_100(x) = (1/√4π) (Z/a_0)^(3/2) 2 exp(−Z r/a_0).  (10.58)

The energy of this state is

E = 2 Z² E_0 = −108.8 eV,  (10.59)

where E_0 = −13.6 eV is the ground state energy of a hydrogen atom. In the above expression, the factor of 2 comes from the fact that there are two electrons in a helium atom.

The above estimate for the ground state energy of a helium atom completely ignores the final term on the right-hand side of Equation (10.54), which describes the mutual interaction between the two electrons. We can obtain a better estimate for the ground state energy by treating (10.57) as the unperturbed wavefunction, and e²/(4π ε_0 r_12) as a perturbation.
According to standard first-order perturbation theory, the correction to the ground state energy is

ΔE = ⟨e²/(4π ε_0 r_12)⟩ = ∫∫ d³x_1 d³x_2 (Z⁶/π² a_0⁶) exp[−2 Z (r_1 + r_2)/a_0] e²/(4π ε_0 r_12).  (10.60)

This can be written

ΔE/|E_0| = (2 Z⁶/π²) ∫∫ (d³x_1/a_0³) (d³x_2/a_0³) exp[−2 Z (r_1 + r_2)/a_0] (a_0/r_12),  (10.61)

since E_0 = −e²/(8π ε_0 a_0). Now,

1/r_12 = 1/(r_1² + r_2² − 2 r_1 r_2 cos γ)^(1/2) = Σ_(l=0)^∞ (r_<^l / r_>^(l+1)) P_l(cos γ),  (10.62)

where r_> (r_<) is the larger (smaller) of r_1 and r_2, and γ is the angle subtended between x_1 and x_2. Moreover, the so-called addition theorem for spherical harmonics states that

P_l(cos γ) = [4π/(2l+1)] Σ_(m=−l)^(+l) Y_lm*(θ_1, φ_1) Y_lm(θ_2, φ_2).  (10.63)

However,

∮ dΩ Y_lm(θ, φ) = √4π δ_l0 δ_m0,  (10.64)

so we obtain

ΔE/|E_0| = 32 Z ∫_0^∞ dx_1 x_1² [(1/x_1) ∫_0^(x_1) dx_2 x_2² e^(−2(x_1+x_2)) + ∫_(x_1)^∞ dx_2 x_2 e^(−2(x_1+x_2))] = 5Z/4 = 5/2.  (10.65)

Here, x_1 = Z r_1/a_0 and x_2 = Z r_2/a_0, and Z = 2. Thus, our improved estimate for the ground state energy of the helium atom is

E = (8 − 5/2) E_0 = −74.8 eV.  (10.66)

This is much closer to the experimental value of −78.8 eV than our previous estimate.

Consider an excited state of the helium atom in which one electron is in the ground state, while the other is in a state characterized by the quantum numbers n, l, m. We can write the energy of this state as

E = Z² E_100 + Z² E_nlm + ΔE,  (10.67)

where E_nlm is the energy of a hydrogen atom electron whose quantum numbers are n, l, m. According to first-order perturbation theory, ΔE is the expectation value of e²/(4π ε_0 r_12). It follows from (10.55) (with n, l, m = 1, 0, 0 and n′, l′, m′ = n, l, m) that

ΔE = I ± J,  (10.68)

where

I = ∫∫ d³x_1 d³x_2 |ψ_100(x_1)|² |ψ_nlm(x_2)|² e²/(4π ε_0 r_12),  (10.69)

J = ∫∫ d³x_1 d³x_2 ψ_100(x_1) ψ_nlm(x_2) [e²/(4π ε_0 r_12)] ψ_100*(x_2) ψ_nlm*(x_1).  (10.70)

Here, the plus sign in (10.68) corresponds to the spin singlet state, whereas the minus sign corresponds to the spin triplet state. The integral I—which is known as the direct integral—is obviously positive.
The integral J—which is known as the exchange integral—can be shown to also be positive. Hence, we conclude that in excited states of helium the spin singlet state has a higher energy than the spin triplet state. Incidentally, helium in the spin singlet state is known as para-helium, whereas helium in the triplet state is called ortho-helium. As we have seen, for the ground state, only para-helium is possible.

The fact that para-helium energy levels lie slightly above corresponding ortho-helium levels is interesting because our original Hamiltonian does not depend on spin. Nevertheless, there is a spin dependent effect—i.e., a helium atom has a lower energy when its electrons possess parallel spins—as a consequence of Fermi-Dirac statistics. To be more exact, the energy is lower in the spin triplet state because the corresponding spatial wavefunction is antisymmetric, causing the electrons to tend to avoid one another (thereby reducing their electrostatic repulsion).

Exercises

10.1 Demonstrate that the particle interchange operator, P_12, in a system of two identical particles is Hermitian.

10.2 Consider two identical spin-1/2 particles of mass m confined in a cubic box of dimension L. Find the possible energies and wavefunctions of this system in the case of no interaction between the particles.

10.3 Consider a system of two spin-1 particles with no orbital angular momentum (i.e., both particles are in s-states). What are the possible eigenvalues of the total spin angular momentum of the system, as well as its projection along the z-direction, in the cases in which the particles are non-identical and identical?

11 Relativistic Electron Theory

11.1 Introduction

The aim of this chapter is to develop a quantum mechanical theory of electron dynamics that is consistent with special relativity.
Such a theory is needed to explain the origin of electron spin (which is essentially a relativistic effect), and to account for the fact that the spin contribution to the electron's magnetic moment is twice what we would naively expect by analogy with (non-relativistic) classical physics (see Section 5.5). Relativistic electron theory is also required to fully understand the fine structure of the hydrogen atom energy levels (recall, from Section 7.7, and Exercises 7.3 and 7.4, that the modification to the energy levels due to spin-orbit coupling is of the same order of magnitude as the first-order correction due to the electron's relativistic mass increase).

In the following, we shall use x¹, x², x³ to represent the Cartesian coordinates x, y, z, respectively, and x⁰ to represent c t. The time dependent wavefunction then takes the form ψ(x⁰, x¹, x², x³). Adopting standard relativistic notation, we write the four x's as x^μ, for μ = 0, 1, 2, 3. A space-time vector with four components that transforms under Lorentz transformation in an analogous manner to the four space-time coordinates x^μ is termed a 4-vector, and its components are written like a^μ (i.e., with an upper Greek suffix). We can lower the suffix according to the rules

a_0 = a⁰,  (11.1)

a_1 = −a¹,  (11.2)

a_2 = −a²,  (11.3)

a_3 = −a³.  (11.4)

Here, the a^μ are called the contravariant components of the vector a, whereas the a_μ are termed the covariant components. Two 4-vectors a^μ and b^μ have the Lorentz invariant scalar product

a⁰ b⁰ − a¹ b¹ − a² b² − a³ b³ = a^μ b_μ = a_μ b^μ,  (11.5)

a summation being implied over a repeated letter suffix. The metric tensor g_μν is defined

g_00 = 1,  (11.6)

g_11 = −1,  (11.7)

g_22 = −1,  (11.8)

g_33 = −1,  (11.9)

with all other components zero. Thus,

a_μ = g_μν a^ν.  (11.10)

Likewise,

a^μ = g^μν a_ν,  (11.11)

where g⁰⁰ = 1, g¹¹ = g²² = g³³ = −1, with all other components zero. Finally, g^μ_ν = g_ν^μ = 1 if μ = ν, and g^μ_ν = g_ν^μ = 0 otherwise.
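The index-lowering rules (11.1)–(11.4) and the invariant scalar product (11.5) amount to nothing more than multiplication by the diagonal metric, which can be illustrated concretely (the sample component values below are arbitrary illustrative numbers, not from the text):

```python
# Sketch: index lowering a_mu = g_{mu nu} a^nu and the Lorentz invariant
# scalar product a^mu b_mu, with metric g = diag(1, -1, -1, -1).
# The components of a and b are arbitrary illustrative values.

G = [1.0, -1.0, -1.0, -1.0]   # diagonal of g_{mu nu} (same as g^{mu nu})

def lower(a_up):
    """Covariant components a_mu = g_{mu nu} a^nu (diagonal metric)."""
    return [G[mu] * a_up[mu] for mu in range(4)]

def dot(a_up, b_up):
    """Invariant scalar product a^mu b_mu = a0 b0 - a1 b1 - a2 b2 - a3 b3."""
    b_dn = lower(b_up)
    return sum(a_up[mu] * b_dn[mu] for mu in range(4))

a = [2.0, 1.0, 0.0, 3.0]
b = [1.0, 4.0, 1.0, 1.0]

s = dot(a, b)   # 2*1 - 1*4 - 0*1 - 3*1 = -5
```

Applying `lower` twice returns the original contravariant components, which is the numerical counterpart of the statement g^μν g_νλ = g^μ_λ (the identity).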
In the Schrödinger representation, the momentum of a particle, whose components are written p_x, p_y, p_z, or p¹, p², p³, is represented by the operators

p^i = −iℏ ∂/∂x^i,  (11.12)

for i = 1, 2, 3. Now, the four operators ∂/∂x^μ form the covariant components of a 4-vector whose contravariant components are written ∂/∂x_μ. So, to make expression (11.12) consistent with relativistic theory, we must first write it with its suffixes balanced,

p_i = iℏ ∂/∂x^i,  (11.13)

and then extend it to the complete 4-vector equation

p_μ = iℏ ∂/∂x^μ.  (11.14)

According to standard relativistic theory, the new operator p⁰ = iℏ ∂/∂x_0, which forms a 4-vector when combined with the momenta p^i, is interpreted as the energy of the particle divided by c, where c is the velocity of light in vacuum.

11.2 Dirac Equation

Consider the motion of an electron in the absence of an electromagnetic field. In classical relativity, electron energy, E, is related to electron momentum, p, according to the well-known formula

E/c = (p² + m_e² c²)^(1/2),  (11.15)

where m_e is the electron rest mass. The quantum mechanical equivalent of this expression is the wave equation

[p⁰ − (p¹ p¹ + p² p² + p³ p³ + m_e² c²)^(1/2)] ψ = 0,  (11.16)

where the p's are interpreted as differential operators according to Equation (11.14). The above equation takes into account the correct relativistic relation between electron energy and momentum, but is nevertheless unsatisfactory from the point of view of relativistic theory, because it is highly asymmetric between p⁰ and the other p's. This makes the equation difficult to generalize, in a manifestly Lorentz invariant manner, in the presence of an electromagnetic field. We must therefore look for a new equation.

If we multiply the wave equation (11.16) by the operator [p⁰ + (p¹ p¹ + p² p² + p³ p³ + m_e² c²)^(1/2)] then we obtain

[p⁰ p⁰ − p¹ p¹ − p² p² − p³ p³ − m_e² c²] ψ = (p^μ p_μ − m_e² c²) ψ = 0.
(11.17)

This equation is manifestly Lorentz invariant, and, therefore, forms a more convenient starting point for relativistic quantum mechanics. Note, however, that Equation (11.17) is not entirely equivalent to Equation (11.16), because, although every solution of (11.16) is also a solution of (11.17), the converse is not true. In fact, only those solutions of (11.17) belonging to positive values of p⁰ are also solutions of (11.16).

The wave equation (11.17) is quadratic in p⁰, and is thus not of the form required by the laws of quantum theory. (Recall that we showed, from general principles, in Chapter 3, that the time evolution equation for the wavefunction should be linear in the operator ∂/∂t, and, hence, in p⁰.) We, therefore, seek a wave equation that is equivalent to (11.17), but is linear in p⁰. In order to ensure that this equation transforms in a simple way under a Lorentz transformation, we shall require it to be rational and linear in p¹, p², p³, as well as p⁰. We are thus led to a wave equation of the form

[p⁰ − α_1 p¹ − α_2 p² − α_3 p³ − β m_e c] ψ = 0,  (11.18)

where the α's and β are dimensionless, and independent of the p's. Moreover, according to standard relativity, because we are considering the case of no electromagnetic field, all points in space-time must be equivalent. Hence, the α's and β must also be independent of the x's. This implies that the α's and β commute with the p's and the x's. We, therefore, deduce that the α's and β describe an internal degree of freedom that is independent of space-time coordinates. Actually, we shall show later that these operators are related to electron spin.

Multiplying (11.18) by the operator [p⁰ + α_1 p¹ + α_2 p² + α_3 p³ + β m_e c], we obtain

[p⁰ p⁰ − (1/2) Σ_(i,j=1,3) {α_i, α_j} p^i p^j − Σ_(i=1,3) {α_i, β} p^i m_e c − β² m_e² c²] ψ = 0,  (11.19)

where {a, b} ≡ a b + b a.
This equation is equivalent to (11.17) provided that

{α_i, α_j} = 2 δ_ij,  (11.20)

{α_i, β} = 0,  (11.21)

β² = 1,  (11.22)

for i, j = 1, 2, 3. It is helpful to define the γ^μ, for μ = 0, ..., 3, where

β = γ⁰,  (11.23)

α_i = γ⁰ γ^i,  (11.24)

for i = 1, 2, 3. Equations (11.20)–(11.22) can then be shown to reduce to

{γ^μ, γ^ν} = 2 g^μν.  (11.25)

One way of satisfying the above anti-commutation relations is to represent the operators γ^μ as matrices. However, it turns out that the smallest dimension in which the γ^μ can be realized is four. In fact, it is easily verified that the 4×4 matrices (written here in 2×2 block form, with rows separated by semicolons)

γ⁰ = (1 0; 0 −1),  (11.26)

γ^i = (0 σ_i; −σ_i 0),  (11.27)

for i = 1, 2, 3, satisfy the appropriate anti-commutation relations. Here, 0 and 1 denote 2×2 null and identity matrices, respectively, whereas the σ_i represent the 2×2 Pauli matrices introduced in Section 5.7. It follows from (11.23) and (11.24) that

β = (1 0; 0 −1),  (11.28)

α_i = (0 σ_i; σ_i 0).  (11.29)

Note that γ⁰, β, and the α_i, are all Hermitian matrices, whereas the γ^μ, for μ = 1, 2, 3, are anti-Hermitian. However, the matrices γ⁰ γ^μ, for μ = 0, ..., 3, are Hermitian. Moreover, it is easily demonstrated that

γ^μ† = γ⁰ γ^μ γ⁰,  (11.30)

for μ = 0, ..., 3.

Equation (11.18) can be written in the form

(γ^μ p_μ − m_e c) ψ = (iℏ γ^μ ∂_μ − m_e c) ψ = 0,  (11.31)

where ∂_μ ≡ ∂/∂x^μ. Alternatively, we can write

iℏ ∂ψ/∂t = (c α·p + β m_e c²) ψ,  (11.32)

where p = (p_x, p_y, p_z) = (p¹, p², p³), and α is the vector of the α_i matrices. The previous expression is known as the Dirac equation. Incidentally, it is clear that, corresponding to the four rows and columns of the γ^μ matrices, the wavefunction ψ must take the form of a 4×1 column matrix, each element of which is, in general, a function of the x^μ. We saw in Section 5.7 that the spin of the electron requires the wavefunction to have two components.
The reason our present theory requires the wavefunction to have four components is because the wave equation (11.17) has twice as many solutions as it ought to have, half of them corresponding to negative energy states.

We can incorporate an electromagnetic field into the above formalism by means of the standard prescription $E \rightarrow E + e\,\phi$, and $p_i \rightarrow p_i + e\,A_i$, where $e$ is the magnitude of the electron charge, $\phi$ the scalar potential, and $\mathbf{A}$ the vector potential. This prescription can be expressed in the Lorentz invariant form

$p_\mu \rightarrow p_\mu + \frac{e}{c}\,\Phi_\mu,$   (11.33)

where $\Phi^\mu = (\phi, c\,\mathbf{A})$ is the potential 4-vector. Thus, Equation (11.31) becomes

$\left[\gamma^\mu\left(p_\mu + \frac{e}{c}\,\Phi_\mu\right) - m_e c\right]\psi = \left[\gamma^\mu\left(\mathrm{i}\,\hbar\,\partial_\mu + \frac{e}{c}\,\Phi_\mu\right) - m_e c\right]\psi = 0,$   (11.34)

whereas Equation (11.32) generalizes to

$\mathrm{i}\,\hbar\,\frac{\partial\psi}{\partial t} = \left[-e\,\phi + c\,\boldsymbol{\alpha}\cdot(\mathbf{p} + e\,\mathbf{A}) + \beta\,m_e c^2\right]\psi.$   (11.35)

If we write the wavefunction in the spinor form

$\psi = \begin{pmatrix} \psi_0\\ \psi_1\\ \psi_2\\ \psi_3 \end{pmatrix}$   (11.36)

then the Hermitian conjugate of Equation (11.35) becomes

$-\mathrm{i}\,\hbar\,\frac{\partial\psi^\dagger}{\partial t} = \psi^\dagger\left[-e\,\phi + c\,\boldsymbol{\alpha}\cdot(\mathbf{p} + e\,\mathbf{A}) + \beta\,m_e c^2\right],$   (11.37)

where

$\psi^\dagger = \left(\psi_0^{\,*},\, \psi_1^{\,*},\, \psi_2^{\,*},\, \psi_3^{\,*}\right).$   (11.38)

Here, use has been made of the fact that the $\alpha_i$ and $\beta$ are Hermitian matrices that commute with the $p_i$ and $A_i$. (The momentum operators in the previous expression are understood to act backward on $\psi^\dagger$.) It follows from $\psi^\dagger\,\gamma^0$ times Equation (11.34) that

$\psi^\dagger\,\gamma^0\left[\gamma^\mu\left(\mathrm{i}\,\hbar\,\partial_\mu + \frac{e}{c}\,\Phi_\mu\right) - m_e c\right]\psi = 0.$   (11.39)

The Hermitian conjugate of this expression is

$\psi^\dagger\left[\left(-\mathrm{i}\,\hbar\,\partial_\mu + \frac{e}{c}\,\Phi_\mu\right)\gamma^0\,\gamma^\mu - m_e c\,\gamma^0\right]\psi = 0,$   (11.40)

where $\partial_\mu$ now acts backward on $\psi^\dagger$, and use has been made of the fact that the matrices $\gamma^0\,\gamma^\mu$ and $\gamma^0$ are Hermitian. Taking the difference between the previous two equations, we obtain

$\partial_\mu\,j^\mu = 0,$   (11.41)

where

$j^\mu = c\,\psi^\dagger\,\gamma^0\,\gamma^\mu\,\psi.$   (11.42)

Writing $j^\mu = (c\,\rho, \mathbf{j})$, where

$\rho = \psi^\dagger\,\psi,$   (11.43)
$j^i = c\,\psi^\dagger\,\gamma^0\,\gamma^i\,\psi = \psi^\dagger\,c\,\alpha_i\,\psi,$   (11.44)

Equation (11.41) becomes

$\frac{\partial\rho}{\partial t} + \nabla\cdot\mathbf{j} = 0.$   (11.45)

The above expression has the same form as the non-relativistic probability conservation equation (3.65).
This suggests that we can interpret the positive definite real scalar field $\rho(\mathbf{x}, t) = |\psi|^2$ as the relativistic probability density, and the vector field $\mathbf{j}(\mathbf{x}, t)$ as the relativistic probability current. Integration of the above expression over all space, assuming that $|\psi(\mathbf{x}, t)| \rightarrow 0$ as $|\mathbf{x}| \rightarrow \infty$, yields

$\frac{d}{dt}\int d^3x\,\rho(\mathbf{x}, t) = 0.$   (11.46)

This ensures that if the wavefunction is properly normalized at time $t = 0$, such that

$\int d^3x\,\rho(\mathbf{x}, 0) = 1,$   (11.47)

then the wavefunction remains properly normalized at all subsequent times, as it evolves in accordance with the Dirac equation. In fact, if this were not the case then it would be impossible to interpret $\rho$ as a probability density. Now, relativistic invariance demands that if the wavefunction is properly normalized in one particular inertial frame then it should be properly normalized in all inertial frames. This is the case provided that Equation (11.41) is Lorentz invariant (i.e., if it has the property that if it holds in one inertial frame then it holds in all inertial frames), which is true as long as the $j^\mu$ transform as the contravariant components of a 4-vector under Lorentz transformation (see Exercise 11.4).

11.3 Lorentz Invariance of Dirac Equation

Consider two inertial frames, $S$ and $S'$. Let the $x^\mu$ and $x^{\mu'}$ be the space-time coordinates of a given event in each frame, respectively. These coordinates are related via a Lorentz transformation, which takes the general form

$x^{\mu'} = a^\mu_{\ \nu}\,x^\nu,$   (11.48)

where the $a^\mu_{\ \nu}$ are real numerical coefficients that are independent of the $x^\mu$. We also have

$x_{\mu'} = a_\mu^{\ \nu}\,x_\nu.$   (11.49)

Now, since [see Equation (11.5)]

$x^{\mu'}\,x_{\mu'} = x^\mu\,x_\mu,$   (11.50)

it follows that

$a^\mu_{\ \nu}\,a_\mu^{\ \lambda} = g^\lambda_{\ \nu}.$   (11.51)

Moreover, it is easily shown that

$x^\mu = a_\nu^{\ \mu}\,x^{\nu'},$   (11.52)
$x_\mu = a^\nu_{\ \mu}\,x_{\nu'}.$   (11.53)

By definition, a 4-vector $p^\mu$ has analogous transformation properties to the $x^\mu$. Thus,

$p^{\mu'} = a^\mu_{\ \nu}\,p^\nu,$   (11.54)
$p^\mu = a_\nu^{\ \mu}\,p^{\nu'},$   (11.55)

etc. In frame $S$, the Dirac equation is written

$\left[\gamma^\mu\left(p_\mu + \frac{e}{c}\,\Phi_\mu\right) - m_e c\right]\psi = 0.$   (11.56)

Let $\psi'$ be the wavefunction in frame $S'$. Suppose that

$\psi' = A\,\psi,$   (11.57)

where $A$ is a $4\times 4$ transformation matrix that is independent of the $x^\mu$. (Hence, $A$ commutes with the $p_\mu$ and the $\Phi_\mu$.) Multiplying (11.56) by $A$, we obtain

$\left[A\,\gamma^\mu A^{-1}\left(p_\mu + \frac{e}{c}\,\Phi_\mu\right) - m_e c\right]\psi' = 0.$   (11.58)

Hence, given that the $p_\mu$ and $\Phi_\mu$ are the covariant components of 4-vectors, we obtain

$\left[A\,\gamma^\mu A^{-1}\,a^\nu_{\ \mu}\left(p_{\nu'} + \frac{e}{c}\,\Phi_{\nu'}\right) - m_e c\right]\psi' = 0.$   (11.59)

Suppose that

$A\,\gamma^\mu A^{-1}\,a^\nu_{\ \mu} = \gamma^\nu,$   (11.60)

which is equivalent to

$A^{-1}\,\gamma^\nu A = a^\nu_{\ \mu}\,\gamma^\mu.$   (11.61)

Here, we have assumed that the $a^\nu_{\ \mu}$ commute with $A$ and the $\gamma^\mu$ (since they are just numbers). If (11.60) holds then (11.59) becomes

$\left[\gamma^\mu\left(p_{\mu'} + \frac{e}{c}\,\Phi_{\mu'}\right) - m_e c\right]\psi' = 0.$   (11.62)

A comparison of this equation with (11.56) reveals that the Dirac equation takes the same form in frames $S$ and $S'$. In other words, the Dirac equation is Lorentz invariant. Incidentally, it is clear from (11.56) and (11.62) that the $\gamma^\mu$ matrices are the same in all inertial frames.

It remains to find a transformation matrix $A$ that satisfies (11.61). Consider an infinitesimal Lorentz transformation, for which

$a^\nu_{\ \mu} = g^\nu_{\ \mu} + \Delta\omega^\nu_{\ \mu},$   (11.63)

where the $\Delta\omega^\nu_{\ \mu}$ are real numerical coefficients that are independent of the $x^\mu$, and are also small compared to unity. To first order in small quantities, (11.51) yields

$\Delta\omega^{\mu\nu} + \Delta\omega^{\nu\mu} = 0.$   (11.64)

Let us write

$A = 1 - \frac{\mathrm{i}}{4}\,\sigma_{\mu\nu}\,\Delta\omega^{\mu\nu},$   (11.65)

where the $\sigma_{\mu\nu}$ are $O(1)$ $4\times 4$ matrices. To first order in small quantities,

$A^{-1} = 1 + \frac{\mathrm{i}}{4}\,\sigma_{\mu\nu}\,\Delta\omega^{\mu\nu}.$   (11.66)

Moreover, it follows from (11.64) that

$\sigma_{\mu\nu} = -\sigma_{\nu\mu}.$   (11.67)

To first order in small quantities, Equations (11.61), (11.63), (11.65), and (11.66) yield

$\Delta\omega^\nu_{\ \beta}\,\gamma^\beta = -\frac{\mathrm{i}}{4}\,\Delta\omega^{\alpha\beta}\left(\gamma^\nu\,\sigma_{\alpha\beta} - \sigma_{\alpha\beta}\,\gamma^\nu\right).$   (11.68)

Hence, making use of the symmetry property (11.64), we obtain

$\Delta\omega^{\alpha\beta}\left(g^\nu_{\ \alpha}\,\gamma_\beta - g^\nu_{\ \beta}\,\gamma_\alpha\right) = -\frac{\mathrm{i}}{2}\,\Delta\omega^{\alpha\beta}\left(\gamma^\nu\,\sigma_{\alpha\beta} - \sigma_{\alpha\beta}\,\gamma^\nu\right),$   (11.69)

where $\gamma_\mu = g_{\mu\nu}\,\gamma^\nu$. Since this equation must hold for arbitrary $\Delta\omega^{\alpha\beta}$, we deduce that

$2\,\mathrm{i}\left(g^\nu_{\ \alpha}\,\gamma_\beta - g^\nu_{\ \beta}\,\gamma_\alpha\right) = [\gamma^\nu, \sigma_{\alpha\beta}].$
(11.70) Making use of the anti-commutation relations (11.25), it can be shown that a suitable solution of the above equation is

$\sigma_{\mu\nu} = \frac{\mathrm{i}}{2}\,[\gamma_\mu, \gamma_\nu].$   (11.71)

Hence,

$A = 1 + \frac{1}{8}\,[\gamma_\mu, \gamma_\nu]\,\Delta\omega^{\mu\nu},$   (11.72)
$A^{-1} = 1 - \frac{1}{8}\,[\gamma_\mu, \gamma_\nu]\,\Delta\omega^{\mu\nu}.$   (11.73)

Now that we have found the correct transformation rules for an infinitesimal Lorentz transformation, we can easily find those for a finite transformation by building it up from a large number of successive infinitesimal transforms.

Making use of (11.30), as well as $\gamma^0\,\gamma^0 = 1$, the Hermitian conjugate of (11.72) can be shown to take the form

$A^\dagger = 1 - \frac{1}{8}\,\gamma^0\,[\gamma_\mu, \gamma_\nu]\,\gamma^0\,\Delta\omega^{\mu\nu} = \gamma^0\,A^{-1}\,\gamma^0.$   (11.74)

Hence, (11.61) yields

$A^\dagger\,\gamma^0\,\gamma^\mu\,A = a^\mu_{\ \nu}\,\gamma^0\,\gamma^\nu.$   (11.75)

It follows that

$\psi^\dagger\,A^\dagger\,\gamma^0\,\gamma^\mu\,A\,\psi = a^\mu_{\ \nu}\,\psi^\dagger\,\gamma^0\,\gamma^\nu\,\psi,$   (11.76)

or

$\psi'^{\,\dagger}\,\gamma^0\,\gamma^\mu\,\psi' = a^\mu_{\ \nu}\,\psi^\dagger\,\gamma^0\,\gamma^\nu\,\psi,$   (11.77)

which implies that

$j^{\mu'} = a^\mu_{\ \nu}\,j^\nu,$   (11.78)

where the $j^\mu$ are defined in Equation (11.42). This proves that the $j^\mu$ transform as the contravariant components of a 4-vector.

11.4 Free Electron Motion

According to Equation (11.32), the relativistic Hamiltonian of a free electron takes the form

$H = c\,\boldsymbol{\alpha}\cdot\mathbf{p} + \beta\,m_e c^2.$   (11.79)

Let us use the Heisenberg picture to investigate the motion of such an electron. For the sake of brevity, we shall omit the suffix $t$ that should be appended to dynamical variables that vary in time, according to the formalism of Section 3.2.

The above Hamiltonian is independent of $\mathbf{x}$. Hence, the momentum $\mathbf{p}$ commutes with the Hamiltonian, and is therefore a constant of the motion. The $x$ component of the velocity is

$\dot{x} = \frac{[x, H]}{\mathrm{i}\,\hbar} = c\,\alpha_1,$   (11.80)

where use has been made of the standard commutation relations between position and momentum operators. This result is rather surprising, since it implies a relationship between velocity and momentum that is quite different from that in classical mechanics. This relationship, however, is clearly connected to the expression $j_x = \psi^\dagger\,c\,\alpha_1\,\psi$ for the $x$ component of the probability current.

The operator $\dot{x}$, specified in the above equation, has the eigenvalues $\pm c$, corresponding to the eigenvalues $\pm 1$ of $\alpha_1$. Since $\dot{y}$ and $\dot{z}$ are similar to $\dot{x}$, we conclude that a measurement of a velocity component of a free electron is certain to yield the result $\pm c$. As is easily demonstrated, this conclusion also holds in the presence of an electromagnetic field. Of course, electrons are often observed to have velocities considerably less than that of light. Hence, the previous conclusion seems to be in conflict with experimental observations. The conflict is not real, however, because the theoretical velocity discussed above is the velocity at one instant in time, whereas observed velocities are always averages over a finite time interval. We shall find, on further examination of the equations of motion, that the velocity of a free electron is not constant, but oscillates rapidly about a mean value that agrees with the experimentally observed value.

In order to understand why a measurement of a velocity component must lead to the result $\pm c$ in a relativistic theory, consider the following argument. To measure the velocity we must measure the position at two slightly different times, and then divide the change in position by the time interval. (We cannot just measure the momentum and then apply a formula, because the ordinary connection between velocity and momentum is no longer valid.) In order that our measured velocity may approximate to the instantaneous velocity, the time interval between the two measurements of position must be very short, and the measurements themselves very accurate. However, the great accuracy with which the position of the electron is known during the time interval leads to an almost complete indeterminacy in its momentum, according to the Heisenberg uncertainty principle. This means that almost all values of the momentum are equally likely, so that the momentum is almost certain to be infinite.
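The assertion that the spectrum of the instantaneous velocity operator is $\pm c$ reduces to the statement that $\alpha_1$ has the eigenvalues $\pm 1$; a quick numerical check, using the representation (11.29):

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# alpha_1 in the representation (11.29)
alpha1 = np.block([[Z2, sigma1], [sigma1, Z2]])

# alpha_1 is Hermitian with alpha_1^2 = 1, so its eigenvalues are +/-1;
# each occurs twice, and dx/dt = c alpha_1 therefore has eigenvalues +/-c
eigs = np.linalg.eigvalsh(alpha1)
assert np.allclose(np.sort(eigs.real), [-1, -1, 1, 1])
assert np.allclose(alpha1 @ alpha1, np.eye(4))
print("alpha_1 eigenvalues:", np.sort(eigs.real))
```

The same computation with $\sigma_2$ or $\sigma_3$ in place of $\sigma_1$ confirms the statement for $\dot{y}$ and $\dot{z}$.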
But, an infinite value of a momentum component corresponds to the values $\pm c$ for the corresponding velocity component.

Let us now examine how the electron velocity varies in time. We have

$\mathrm{i}\,\hbar\,\dot{\alpha}_1 = \alpha_1\,H - H\,\alpha_1.$   (11.81)

Now, $\alpha_1$ anti-commutes with all terms in $H$ except $c\,\alpha_1\,p_1$, so

$\alpha_1\,H + H\,\alpha_1 = \alpha_1\,c\,\alpha_1\,p_1 + c\,\alpha_1\,p_1\,\alpha_1 = 2\,c\,p_1.$   (11.82)

Here, use has been made of the fact that $\alpha_1$ commutes with $p_1$, and also that $\alpha_1^{\,2} = 1$. Hence, we get

$\mathrm{i}\,\hbar\,\dot{\alpha}_1 = 2\,\alpha_1\,H - 2\,c\,p_1.$   (11.83)

Since $H$ and $p_1$ are constants of the motion, this equation yields

$\mathrm{i}\,\hbar\,\ddot{\alpha}_1 = 2\,\dot{\alpha}_1\,H,$   (11.84)

which can be integrated to give

$\dot{\alpha}_1(t) = \dot{\alpha}_1(0)\,\exp\!\left(\frac{-2\,\mathrm{i}\,H\,t}{\hbar}\right).$   (11.85)

It follows from Equations (11.80) and (11.83) that

$\dot{x}(t) = c\,\alpha_1(t) = \frac{\mathrm{i}\,c\,\hbar}{2}\,\dot{\alpha}_1(0)\,\exp\!\left(\frac{-2\,\mathrm{i}\,H\,t}{\hbar}\right)H^{-1} + c^2\,p_x\,H^{-1},$   (11.86)

and

$x(t) = x(0) - \frac{c\,\hbar^2}{4}\,\dot{\alpha}_1(0)\,\exp\!\left(\frac{-2\,\mathrm{i}\,H\,t}{\hbar}\right)H^{-2} + c^2\,p_x\,H^{-1}\,t.$   (11.87)

We can see that the $x$ component of velocity consists of two parts: a constant part, $c^2\,p_x\,H^{-1}$, connected with the momentum according to the classical relativistic formula, and an oscillatory part whose frequency, $2\,H/h$, is high (being at least $2\,m_e c^2/h$). Only the constant part would be observed in a practical measurement of velocity (i.e., an average over a short time interval that is still much longer than $h/2\,m_e c^2$). The oscillatory part ensures that the instantaneous value of $\dot{x}$ has the eigenvalues $\pm c$. Note, finally, that the oscillatory part of $x$ is small, being of order $\hbar/m_e c$.

11.5 Electron Spin

According to Equation (11.35), the relativistic Hamiltonian of an electron in an electromagnetic field is

$H = -e\,\phi + c\,\boldsymbol{\alpha}\cdot(\mathbf{p} + e\,\mathbf{A}) + \beta\,m_e c^2.$   (11.88)

Hence,

$\left(\frac{H}{c} + \frac{e}{c}\,\phi\right)^2 = \left[\boldsymbol{\alpha}\cdot(\mathbf{p} + e\,\mathbf{A}) + \beta\,m_e c\right]^2 = \left[\boldsymbol{\alpha}\cdot(\mathbf{p} + e\,\mathbf{A})\right]^2 + m_e^{\,2}\,c^2,$   (11.89)

where use has been made of Equations (11.21) and (11.22). Now, we can write

$\alpha_i = \gamma^5\,\Sigma_i,$   (11.90)

for $i = 1,3$, where

$\gamma^5 = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix},$   (11.91)

and

$\Sigma_i = \begin{pmatrix} \sigma_i & 0\\ 0 & \sigma_i \end{pmatrix}.$   (11.92)

Here, $0$ and $1$ denote $2\times 2$ null and identity matrices, respectively, whereas the $\sigma_i$ are conventional $2\times 2$ Pauli matrices. Note that $\gamma^5\,\gamma^5 = 1$, and

$[\gamma^5, \Sigma_i] = 0.$   (11.93)

It follows from (11.89) that

$\left(\frac{H}{c} + \frac{e}{c}\,\phi\right)^2 = \left[\boldsymbol{\Sigma}\cdot(\mathbf{p} + e\,\mathbf{A})\right]^2 + m_e^{\,2}\,c^2.$   (11.94)

Now, a straightforward generalization of Equation (5.92) gives

$(\boldsymbol{\Sigma}\cdot\mathbf{a})\,(\boldsymbol{\Sigma}\cdot\mathbf{b}) = \mathbf{a}\cdot\mathbf{b} + \mathrm{i}\,\boldsymbol{\Sigma}\cdot(\mathbf{a}\times\mathbf{b}),$   (11.95)

where $\mathbf{a}$ and $\mathbf{b}$ are any two three-dimensional vectors that commute with $\boldsymbol{\Sigma}$. It follows that

$\left[\boldsymbol{\Sigma}\cdot(\mathbf{p} + e\,\mathbf{A})\right]^2 = (\mathbf{p} + e\,\mathbf{A})^2 + \mathrm{i}\,\boldsymbol{\Sigma}\cdot(\mathbf{p} + e\,\mathbf{A})\times(\mathbf{p} + e\,\mathbf{A}).$   (11.96)

However,

$(\mathbf{p} + e\,\mathbf{A})\times(\mathbf{p} + e\,\mathbf{A}) = e\,(\mathbf{p}\times\mathbf{A} + \mathbf{A}\times\mathbf{p}) = -\mathrm{i}\,\hbar\,e\,\nabla\times\mathbf{A} = -\mathrm{i}\,\hbar\,e\,\mathbf{B},$   (11.97)

where $\mathbf{B} = \nabla\times\mathbf{A}$ is the magnetic field strength. (The operator identity $\mathbf{p}\times\mathbf{A} + \mathbf{A}\times\mathbf{p} = -\mathrm{i}\,\hbar\,\nabla\times\mathbf{A}$ follows because the terms involving gradients of the wavefunction cancel.) Hence, we obtain

$\left(\frac{H}{c} + \frac{e}{c}\,\phi\right)^2 = (\mathbf{p} + e\,\mathbf{A})^2 + m_e^{\,2}\,c^2 + e\,\hbar\,\boldsymbol{\Sigma}\cdot\mathbf{B}.$   (11.98)

Consider the non-relativistic limit. In this case, we can write

$H = m_e c^2 + \delta H,$   (11.99)

where $\delta H$ is small compared to $m_e c^2$. Substituting into (11.98), and neglecting $\delta H^{\,2}$, and other terms involving $c^{-2}$, we get

$\delta H \simeq -e\,\phi + \frac{(\mathbf{p} + e\,\mathbf{A})^2}{2\,m_e} + \frac{e\,\hbar}{2\,m_e}\,\boldsymbol{\Sigma}\cdot\mathbf{B}.$   (11.100)

This Hamiltonian is the same as the classical Hamiltonian of a non-relativistic electron, except for the final term. This term may be interpreted as arising from the electron having an intrinsic magnetic moment

$\boldsymbol{\mu} = -\frac{e\,\hbar}{2\,m_e}\,\boldsymbol{\Sigma}.$   (11.101)

In order to demonstrate that the electron's intrinsic magnetic moment is associated with an intrinsic angular momentum, consider the motion of an electron in a central electrostatic potential: i.e., $\phi = \phi(r)$ and $\mathbf{A} = 0$. In this case, the Hamiltonian (11.88) becomes

$H = -e\,\phi(r) + c\,\gamma^5\,\boldsymbol{\Sigma}\cdot\mathbf{p} + \beta\,m_e c^2.$   (11.102)

Consider the $x$ component of the electron's orbital angular momentum,

$L_x = y\,p_z - z\,p_y = -\mathrm{i}\,\hbar\left(y\,\frac{\partial}{\partial z} - z\,\frac{\partial}{\partial y}\right).$   (11.103)

The Heisenberg equation of motion for this quantity is

$\mathrm{i}\,\hbar\,\dot{L}_x = [L_x, H].$   (11.104)

However, it is easily demonstrated that

$[L_x, r] = 0,$   (11.105)
$[L_x, p_x] = 0,$   (11.106)
$[L_x, p_y] = \mathrm{i}\,\hbar\,p_z,$   (11.107)
$[L_x, p_z] = -\mathrm{i}\,\hbar\,p_y.$   (11.108)

Hence, we obtain

$[L_x, H] = \mathrm{i}\,\hbar\,c\,\gamma^5\,(\Sigma_2\,p_z - \Sigma_3\,p_y),$   (11.109)

which implies that

$\dot{L}_x = c\,\gamma^5\,(\Sigma_2\,p_z - \Sigma_3\,p_y).$   (11.110)

It can be seen that $L_x$ is not a constant of the motion. However, the $x$ component of the total angular momentum of the system must be a constant of the motion (because a central electrostatic potential exerts zero torque on the system). Hence, we deduce that the electron possesses additional angular momentum that is not connected with its motion through space. Now,

$\mathrm{i}\,\hbar\,\dot{\Sigma}_1 = [\Sigma_1, H].$   (11.111)

However,

$[\Sigma_1, \gamma^5] = 0,$   (11.112)
$[\Sigma_1, \Sigma_1] = 0,$   (11.113)
$[\Sigma_1, \Sigma_2] = 2\,\mathrm{i}\,\Sigma_3,$   (11.114)
$[\Sigma_1, \Sigma_3] = -2\,\mathrm{i}\,\Sigma_2,$   (11.115)

so

$[\Sigma_1, H] = 2\,\mathrm{i}\,c\,\gamma^5\,(\Sigma_3\,p_y - \Sigma_2\,p_z),$   (11.116)

which implies that

$\frac{\hbar}{2}\,\dot{\Sigma}_1 = -c\,\gamma^5\,(\Sigma_2\,p_z - \Sigma_3\,p_y).$   (11.117)

Hence, we deduce that

$\dot{L}_x + \frac{\hbar}{2}\,\dot{\Sigma}_1 = 0.$   (11.118)

Since there is nothing special about the $x$ direction, we conclude that the vector $\mathbf{L} + (\hbar/2)\,\boldsymbol{\Sigma}$ is a constant of the motion. We can interpret this result by saying that the electron has a spin angular momentum $\mathbf{S} = (\hbar/2)\,\boldsymbol{\Sigma}$, which must be added to its orbital angular momentum in order to obtain a constant of the motion. According to (11.101), the relationship between the electron's spin angular momentum and its intrinsic (i.e., non-orbital) magnetic moment is

$\boldsymbol{\mu} = -\frac{e\,g}{2\,m_e}\,\mathbf{S},$   (11.119)

where the gyromagnetic ratio $g$ takes the value

$g = 2.$   (11.120)

As explained in Section 5.5, this is twice the value one would naively predict by analogy with classical physics.

11.6 Motion in Central Field

To further study the motion of an electron in a central field, whose Hamiltonian is

$H = -e\,\phi(r) + c\,\boldsymbol{\alpha}\cdot\mathbf{p} + \beta\,m_e c^2,$   (11.121)

it is convenient to transform to polar coordinates. Let

$r = (x^2 + y^2 + z^2)^{1/2},$   (11.122)

and

$p_r = \frac{\mathbf{x}\cdot\mathbf{p}}{r}.$   (11.123)

It is easily demonstrated that

$[r, p_r] = \mathrm{i}\,\hbar,$   (11.124)

which implies that in the Schrödinger representation

$p_r = -\mathrm{i}\,\hbar\,\frac{\partial}{\partial r}.$   (11.125)

Now, by symmetry, an energy eigenstate in a central field is a simultaneous eigenstate of the total angular momentum

$\mathbf{J} = \mathbf{L} + \frac{\hbar}{2}\,\boldsymbol{\Sigma}.$   (11.126)

Furthermore, we know from general principles that the eigenvalues of $J^2$ are $j\,(j+1)\,\hbar^2$, where $j$ is a positive half-integer (since $j = l \pm 1/2$, where $l$ is the standard non-negative integer quantum number associated with orbital angular momentum).

It follows from Equation (11.95) that

$(\boldsymbol{\Sigma}\cdot\mathbf{L})\,(\boldsymbol{\Sigma}\cdot\mathbf{L}) = L^2 + \mathrm{i}\,\boldsymbol{\Sigma}\cdot(\mathbf{L}\times\mathbf{L}).$   (11.127)

However, because $\mathbf{L}$ is an angular momentum, its components satisfy the standard commutation relations

$\mathbf{L}\times\mathbf{L} = \mathrm{i}\,\hbar\,\mathbf{L}.$   (11.128)

Thus, we obtain

$(\boldsymbol{\Sigma}\cdot\mathbf{L})\,(\boldsymbol{\Sigma}\cdot\mathbf{L}) = L^2 - \hbar\,\boldsymbol{\Sigma}\cdot\mathbf{L} = J^2 - 2\,\hbar\,\boldsymbol{\Sigma}\cdot\mathbf{L} - \frac{\hbar^2}{4}\,\Sigma^2.$   (11.129)

However, $\Sigma^2 = 3$, so

$(\boldsymbol{\Sigma}\cdot\mathbf{L} + \hbar)^2 = J^2 + \frac{1}{4}\,\hbar^2.$   (11.130)

Further application of (11.95) yields

$(\boldsymbol{\Sigma}\cdot\mathbf{L})\,(\boldsymbol{\Sigma}\cdot\mathbf{p}) = \mathbf{L}\cdot\mathbf{p} + \mathrm{i}\,\boldsymbol{\Sigma}\cdot\mathbf{L}\times\mathbf{p} = \mathrm{i}\,\boldsymbol{\Sigma}\cdot\mathbf{L}\times\mathbf{p},$   (11.131)
$(\boldsymbol{\Sigma}\cdot\mathbf{p})\,(\boldsymbol{\Sigma}\cdot\mathbf{L}) = \mathbf{p}\cdot\mathbf{L} + \mathrm{i}\,\boldsymbol{\Sigma}\cdot\mathbf{p}\times\mathbf{L} = \mathrm{i}\,\boldsymbol{\Sigma}\cdot\mathbf{p}\times\mathbf{L}.$   (11.132)

However, it is easily demonstrated from the fundamental commutation relations between position and momentum operators that

$\mathbf{L}\times\mathbf{p} + \mathbf{p}\times\mathbf{L} = 2\,\mathrm{i}\,\hbar\,\mathbf{p}.$   (11.133)

Thus,

$(\boldsymbol{\Sigma}\cdot\mathbf{L})\,(\boldsymbol{\Sigma}\cdot\mathbf{p}) + (\boldsymbol{\Sigma}\cdot\mathbf{p})\,(\boldsymbol{\Sigma}\cdot\mathbf{L}) = -2\,\hbar\,\boldsymbol{\Sigma}\cdot\mathbf{p},$   (11.134)

which implies that

$\{\boldsymbol{\Sigma}\cdot\mathbf{L} + \hbar,\ \boldsymbol{\Sigma}\cdot\mathbf{p}\} = 0.$   (11.135)

Now, $\gamma^5\,\boldsymbol{\Sigma} = \boldsymbol{\alpha}$. Moreover, $\gamma^5$ commutes with $\mathbf{p}$, $\mathbf{L}$, and $\boldsymbol{\Sigma}$. Hence, we conclude that

$\{\boldsymbol{\Sigma}\cdot\mathbf{L} + \hbar,\ \boldsymbol{\alpha}\cdot\mathbf{p}\} = 0.$   (11.136)

Finally, since $\beta$ commutes with $\mathbf{p}$ and $\mathbf{L}$, but anti-commutes with the components of $\boldsymbol{\alpha}$, we obtain

$[\zeta, \boldsymbol{\alpha}\cdot\mathbf{p}] = 0,$   (11.137)

where

$\zeta = \beta\,(\boldsymbol{\Sigma}\cdot\mathbf{L} + \hbar).$   (11.138)

If we repeat the above analysis, starting at Equation (11.131), but substituting $\mathbf{x}$ for $\mathbf{p}$, and making use of the easily demonstrated result

$\mathbf{L}\times\mathbf{x} + \mathbf{x}\times\mathbf{L} = 2\,\mathrm{i}\,\hbar\,\mathbf{x},$   (11.139)

we find that

$[\zeta, \boldsymbol{\alpha}\cdot\mathbf{x}] = 0.$   (11.140)

Now, $r$ commutes with $\beta$, as well as the components of $\boldsymbol{\Sigma}$ and $\mathbf{L}$. Hence,

$[\zeta, r] = 0.$   (11.141)

Moreover, $\beta$ commutes with the components of $\mathbf{L}$, and can easily be shown to commute with all components of $\boldsymbol{\Sigma}$. It follows that

$[\zeta, \beta] = 0.$   (11.142)

Hence, Equations (11.121), (11.137), (11.141), and (11.142) imply that

$[\zeta, H] = 0.$   (11.143)

In other words, an eigenstate of the Hamiltonian is a simultaneous eigenstate of $\zeta$.
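The $\Sigma$-matrix identities used above, namely (11.95) and the commutators (11.114)–(11.115), can be verified directly for the representation (11.92); a brief sketch, with arbitrary numerical vectors standing in for a and b:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
Z2 = np.zeros((2, 2), dtype=complex)

# Sigma_i = diag(sigma_i, sigma_i), Equation (11.92)
Sigma = [np.block([[s, Z2], [Z2, s]]) for s in sigma]
I4 = np.eye(4, dtype=complex)

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

dot = lambda v: sum(vi * S for vi, S in zip(v, Sigma))  # Sigma . v

# identity (11.95): (Sigma.a)(Sigma.b) = a.b + i Sigma.(a x b)
assert np.allclose(dot(a) @ dot(b),
                   np.dot(a, b) * I4 + 1j * dot(np.cross(a, b)))

# commutators (11.114)-(11.115)
assert np.allclose(Sigma[0] @ Sigma[1] - Sigma[1] @ Sigma[0], 2j * Sigma[2])
assert np.allclose(Sigma[0] @ Sigma[2] - Sigma[2] @ Sigma[0], -2j * Sigma[1])

print("Sigma identities verified")
```

The identity (11.95) holds here because the numerical vectors trivially commute with the $\Sigma_i$; for operator-valued vectors such as $\mathbf{p} + e\,\mathbf{A}$ the same algebra applies component by component.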
Now,

$\zeta^2 = [\beta\,(\boldsymbol{\Sigma}\cdot\mathbf{L} + \hbar)]^2 = (\boldsymbol{\Sigma}\cdot\mathbf{L} + \hbar)^2 = J^2 + \frac{1}{4}\,\hbar^2,$   (11.144)

where use has been made of Equation (11.130), as well as $\beta^2 = 1$. It follows that the eigenvalues of $\zeta^2$ are $j\,(j+1)\,\hbar^2 + (1/4)\,\hbar^2 = (j + 1/2)^2\,\hbar^2$. Thus, the eigenvalues of $\zeta$ can be written $k\,\hbar$, where $k = \pm(j + 1/2)$ is a non-zero integer.

Equation (11.95) implies that

$(\boldsymbol{\Sigma}\cdot\mathbf{x})\,(\boldsymbol{\Sigma}\cdot\mathbf{p}) = \mathbf{x}\cdot\mathbf{p} + \mathrm{i}\,\boldsymbol{\Sigma}\cdot\mathbf{x}\times\mathbf{p} = r\,p_r + \mathrm{i}\,\boldsymbol{\Sigma}\cdot\mathbf{L} = r\,p_r + \mathrm{i}\,(\beta\,\zeta - \hbar),$   (11.145)

where use has been made of (11.123) and (11.138). It is helpful to define the new operator $\varepsilon$, where

$r\,\varepsilon = \boldsymbol{\alpha}\cdot\mathbf{x}.$   (11.146)

Moreover, it is evident that

$[\varepsilon, r] = 0.$   (11.147)

Hence,

$r^2\,\varepsilon^2 = (\boldsymbol{\alpha}\cdot\mathbf{x})^2 = \frac{1}{2}\sum_{i,j=1,3}\{\alpha_i, \alpha_j\}\,x^i\,x^j = \sum_{i=1,3}(x^i)^2 = r^2,$   (11.148)

where use has been made of (11.20). It follows that

$\varepsilon^2 = 1.$   (11.149)

We have already seen that $\zeta$ commutes with $\boldsymbol{\alpha}\cdot\mathbf{x}$ and $r$. Thus,

$[\zeta, \varepsilon] = 0.$   (11.150)

Now,

$(\boldsymbol{\Sigma}\cdot\mathbf{x})\,(\mathbf{x}\cdot\mathbf{p}) - (\mathbf{x}\cdot\mathbf{p})\,(\boldsymbol{\Sigma}\cdot\mathbf{x}) = \boldsymbol{\Sigma}\cdot\left[\mathbf{x}\,(\mathbf{x}\cdot\mathbf{p}) - (\mathbf{x}\cdot\mathbf{p})\,\mathbf{x}\right] = \mathrm{i}\,\hbar\,\boldsymbol{\Sigma}\cdot\mathbf{x},$   (11.151)

where use has been made of the fundamental commutation relations for position and momentum operators. However, $\mathbf{x}\cdot\mathbf{p} = r\,p_r$ and $\boldsymbol{\Sigma}\cdot\mathbf{x} = \gamma^5\,r\,\varepsilon$, so, multiplying through by $\gamma^5$, we get

$r^2\,\varepsilon\,p_r - r\,p_r\,r\,\varepsilon = \mathrm{i}\,\hbar\,r\,\varepsilon.$   (11.152)

Equation (11.124) then yields

$[\varepsilon, p_r] = 0.$   (11.153)

Equation (11.145) implies that

$(\boldsymbol{\alpha}\cdot\mathbf{x})\,(\boldsymbol{\alpha}\cdot\mathbf{p}) = r\,p_r + \mathrm{i}\,(\beta\,\zeta - \hbar).$   (11.154)

Making use of Equations (11.141), (11.146), (11.147), and (11.149), we get

$\boldsymbol{\alpha}\cdot\mathbf{p} = \varepsilon\,(p_r - \mathrm{i}\,\hbar/r) + \mathrm{i}\,\varepsilon\,\beta\,\zeta/r.$   (11.155)

Hence, the Hamiltonian (11.121) becomes

$H = -e\,\phi(r) + c\,\varepsilon\,(p_r - \mathrm{i}\,\hbar/r) + \mathrm{i}\,c\,\varepsilon\,\beta\,\zeta/r + \beta\,m_e c^2.$   (11.156)

Now, we wish to solve the energy eigenvalue problem

$H\,\psi = E\,\psi,$   (11.157)

where $E$ is the energy eigenvalue. However, we have already shown that an eigenstate of the Hamiltonian is a simultaneous eigenstate of the $\zeta$ operator belonging to the eigenvalue $k\,\hbar$, where $k$ is a non-zero integer. Hence, the eigenvalue problem reduces to

$\left[-e\,\phi(r) + c\,\varepsilon\,(p_r - \mathrm{i}\,\hbar/r) + \mathrm{i}\,c\,\hbar\,k\,\varepsilon\,\beta/r + \beta\,m_e c^2\right]\psi = E\,\psi,$   (11.158)

which only involves the radial coordinate $r$.
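The property (11.148)–(11.149), that the operator $\varepsilon = \boldsymbol{\alpha}\cdot\mathbf{x}/r$ squares to unity, follows purely from the $\alpha$-matrix algebra, and is easily confirmed numerically for an arbitrary numerical position vector standing in for $\mathbf{x}$:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
Z2 = np.zeros((2, 2), dtype=complex)

# alpha_i matrices, Equation (11.29)
alpha = [np.block([[Z2, s], [s, Z2]]) for s in sigma]

rng = np.random.default_rng(1)
x = rng.standard_normal(3)                  # arbitrary numerical position vector

ax = sum(xi * a for xi, a in zip(x, alpha))  # alpha . x
# (alpha.x)^2 = r^2, Equation (11.148)
assert np.allclose(ax @ ax, np.dot(x, x) * np.eye(4))

# hence eps = (alpha.x)/r satisfies eps^2 = 1, Equation (11.149)
eps = ax / np.linalg.norm(x)
assert np.allclose(eps @ eps, np.eye(4))
print("epsilon^2 = 1 verified")
```

(For the true operator $\varepsilon$ the components of $\mathbf{x}$ are operators, but since they commute with one another and with the $\alpha_i$, the matrix computation above captures the relevant algebra.)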
It is easily demonstrated that $\varepsilon$ anti-commutes with $\beta$. Hence, given that $\beta$ takes the form (11.28), and that $\varepsilon^2 = 1$, we can represent $\varepsilon$ as the matrix

$\varepsilon = \begin{pmatrix} 0 & -\mathrm{i}\\ \mathrm{i} & 0 \end{pmatrix}.$   (11.159)

Thus, writing $\psi$ in the spinor form

$\psi = \begin{pmatrix} \psi_a(r)\\ \psi_b(r) \end{pmatrix},$   (11.160)

and making use of (11.125), the energy eigenvalue problem for an electron in a central field reduces to the following two coupled radial differential equations:

$c\,\hbar\left(\frac{d}{dr} + \frac{1+k}{r}\right)\psi_b + (E - m_e c^2 + e\,\phi)\,\psi_a = 0,$   (11.161)

$c\,\hbar\left(\frac{d}{dr} - \frac{k-1}{r}\right)\psi_a - (E + m_e c^2 + e\,\phi)\,\psi_b = 0.$   (11.162)

11.7 Fine Structure of Hydrogen Energy Levels

For the case of a hydrogen atom,

$\phi(r) = \frac{e}{4\pi\,\epsilon_0\,r}.$   (11.163)

Hence, Equations (11.161) and (11.162) yield

$\left(\frac{1}{a_1} - \frac{\alpha}{y}\right)\psi_a - \left(\frac{d}{dy} + \frac{1+k}{y}\right)\psi_b = 0,$   (11.164)

$\left(\frac{1}{a_2} + \frac{\alpha}{y}\right)\psi_b - \left(\frac{d}{dy} - \frac{k-1}{y}\right)\psi_a = 0,$   (11.165)

where $y = r/a_0$, and

$a_1 = \frac{\alpha}{1 - \mathcal{E}},$   (11.166)
$a_2 = \frac{\alpha}{1 + \mathcal{E}},$   (11.167)

with $\mathcal{E} = E/(m_e c^2)$. Here, $a_0 = 4\pi\,\epsilon_0\,\hbar^2/(m_e\,e^2)$ is the Bohr radius, and $\alpha = e^2/(4\pi\,\epsilon_0\,\hbar\,c)$ the fine structure constant. Writing

$\psi_a(y) = \frac{\mathrm{e}^{-y/a}}{y}\,f(y),$   (11.168)
$\psi_b(y) = \frac{\mathrm{e}^{-y/a}}{y}\,g(y),$   (11.169)

where

$a = (a_1\,a_2)^{1/2} = \frac{\alpha}{\sqrt{1 - \mathcal{E}^2}},$   (11.170)

we obtain

$\left(\frac{1}{a_1} - \frac{\alpha}{y}\right)f - \left(\frac{d}{dy} - \frac{1}{a} + \frac{k}{y}\right)g = 0,$   (11.171)

$\left(\frac{1}{a_2} + \frac{\alpha}{y}\right)g - \left(\frac{d}{dy} - \frac{1}{a} - \frac{k}{y}\right)f = 0.$   (11.172)

Let us search for power law solutions of the form

$f(y) = \sum_s c_s\,y^s,$   (11.173)
$g(y) = \sum_s c'_s\,y^s,$   (11.174)

where successive values of $s$ differ by unity. Substitution of these solutions into Equations (11.171) and (11.172) leads to the recursion relations

$\frac{c_{s-1}}{a_1} - \alpha\,c_s - (s + k)\,c'_s + \frac{c'_{s-1}}{a} = 0,$   (11.175)

$\frac{c'_{s-1}}{a_2} + \alpha\,c'_s - (s - k)\,c_s + \frac{c_{s-1}}{a} = 0.$   (11.176)

Multiplying the first of these equations by $a$, and the second by $a_2$, and then subtracting, we eliminate both $c_{s-1}$ and $c'_{s-1}$, since $a/a_1 = a_2/a$. We are left with

$\left[a\,\alpha - a_2\,(s - k)\right]c_s + \left[a_2\,\alpha + a\,(s + k)\right]c'_s = 0.$   (11.177)

The physical boundary conditions at $y = 0$ require that $y\,\psi_a \rightarrow 0$ and $y\,\psi_b \rightarrow 0$ as $y \rightarrow 0$. Thus, it follows from (11.168) and (11.169) that $f \rightarrow 0$ and $g \rightarrow 0$ as $y \rightarrow 0$.
Consequently, the series (11.173) and (11.174) must terminate at small positive $s$. If $s_0$ is the minimum value of $s$ for which $c_s$ and $c'_s$ do not both vanish then it follows from (11.175) and (11.176), putting $s = s_0$ and $c_{s_0-1} = c'_{s_0-1} = 0$, that

$\alpha\,c_{s_0} + (s_0 + k)\,c'_{s_0} = 0,$   (11.178)
$\alpha\,c'_{s_0} - (s_0 - k)\,c_{s_0} = 0,$   (11.179)

which implies that

$\alpha^2 = -s_0^{\,2} + k^2.$   (11.180)

Since the boundary condition requires that the minimum value of $s_0$ be greater than zero, we must take

$s_0 = (k^2 - \alpha^2)^{1/2}.$   (11.181)

To investigate the convergence of the series (11.173) and (11.174) at large $y$, we shall determine the ratio $c_s/c_{s-1}$ for large $s$. In the limit of large $s$, Equations (11.176) and (11.177) yield

$s\,c_s \simeq \frac{c_{s-1}}{a} + \frac{c'_{s-1}}{a_2},$   (11.182)
$a_2\,c_s \simeq a\,c'_s,$   (11.183)

since $\alpha \simeq 1/137 \ll 1$. Thus,

$\frac{c_s}{c_{s-1}} \simeq \frac{2}{a\,s}.$   (11.184)

However, this is the ratio of the coefficients in the series expansion of $\exp(2\,y/a)$. Hence, we deduce that the series (11.173) and (11.174) diverge unphysically at large $y$ unless they terminate at large $s$.

Suppose that the series (11.173) and (11.174) terminate with the terms $c_s$ and $c'_s$, so that $c_{s+1} = c'_{s+1} = 0$. It follows from (11.175) and (11.176), with $s + 1$ substituted for $s$, that

$\frac{c_s}{a_1} + \frac{c'_s}{a} = 0,$   (11.185)
$\frac{c'_s}{a_2} + \frac{c_s}{a} = 0.$   (11.186)

These two expressions are equivalent, because $a^2 = a_1\,a_2$. When combined with (11.177) they give

$a_1\left[a\,\alpha - a_2\,(s - k)\right] = a\left[a_2\,\alpha + a\,(s + k)\right],$   (11.187)

which reduces to

$2\,a_1\,a_2\,s = a\,(a_1 - a_2)\,\alpha,$   (11.188)

or

$\mathcal{E} = \left(1 + \frac{\alpha^2}{s^2}\right)^{-1/2}.$   (11.189)

Here, $s$, which specifies the last term in the series, must be greater than $s_0$ by some non-negative integer $i$. Thus,

$s = i + (k^2 - \alpha^2)^{1/2} = i + \left[(j + 1/2)^2 - \alpha^2\right]^{1/2},$   (11.190)

where $j\,(j+1)\,\hbar^2$ is the eigenvalue of $J^2$. Hence, the energy eigenvalues of the hydrogen atom become

$\frac{E}{m_e c^2} = \left\{1 + \frac{\alpha^2}{\left(i + \left[(j + 1/2)^2 - \alpha^2\right]^{1/2}\right)^2}\right\}^{-1/2}.$   (11.191)

Given that $\alpha \simeq 1/137$, we can expand the above expression in $\alpha^2$ to give

$\frac{E}{m_e c^2} = 1 - \frac{\alpha^2}{2\,n^2} - \frac{\alpha^4}{2\,n^4}\left(\frac{n}{j + 1/2} - \frac{3}{4}\right) + O(\alpha^6),$   (11.192)

where $n = i + j + 1/2$ is a positive integer. Of course, the first term in the above expression corresponds to the electron's rest mass energy. The second term corresponds to the standard non-relativistic expression for the hydrogen energy levels, with $n$ playing the role of the radial quantum number (see Section 4.6). Finally, the third term corresponds to the fine structure correction to these energy levels (see Exercise 7.4). Note that this correction only depends on the quantum numbers $n$ and $j$. Now, we showed in Section 7.7 that the fine structure correction to the energy levels of the hydrogen atom is a combined effect of spin-orbit coupling and the electron's relativistic mass increase. Hence, it is evident that both of these effects are automatically taken into account in the Dirac equation.

11.8 Positron Theory

We have already mentioned that the Dirac equation admits twice as many solutions as it ought to, half of them belonging to states with negative values for the kinetic energy $c\,p^0 + e\,\Phi^0$. This difficulty was introduced when we passed from Equation (11.16) to Equation (11.17), and is inherent in any relativistic theory. Let us examine the negative energy solutions of the equation

$\left[\left(p^0 + \frac{e}{c}\,\Phi^0\right) - \alpha_1\left(p^1 + \frac{e}{c}\,\Phi^1\right) - \alpha_2\left(p^2 + \frac{e}{c}\,\Phi^2\right) - \alpha_3\left(p^3 + \frac{e}{c}\,\Phi^3\right) - \beta\,m_e c\right]\psi = 0$   (11.193)

a little more closely. For this purpose, it is convenient to use a representation of the $\alpha$'s and $\beta$ in which all the elements of the matrices $\alpha_1$, $\alpha_2$, and $\alpha_3$ are real, and all of those of the matrix representing $\beta$ are imaginary or zero. Such a representation can be obtained from the standard representation by interchanging the expressions for $\alpha_2$ and $\beta$. If Equation (11.193) is expressed as a matrix equation in this representation, and we then substitute $-\mathrm{i}$ for $\mathrm{i}$, we get [remembering the factor $\mathrm{i}$ in Equation (11.14)]

$\left[\left(p^0 - \frac{e}{c}\,\Phi^0\right) - \alpha_1\left(p^1 - \frac{e}{c}\,\Phi^1\right) - \alpha_2\left(p^2 - \frac{e}{c}\,\Phi^2\right) - \alpha_3\left(p^3 - \frac{e}{c}\,\Phi^3\right) - \beta\,m_e c\right]\psi^* = 0.$
(11.194) Thus, each solution, $\psi$, of the wave equation (11.193) has for its complex conjugate, $\psi^*$, a solution of the wave equation (11.194). Furthermore, if the solution, $\psi$, of (11.193) belongs to a negative value for $c\,p^0 + e\,\Phi^0$ then the corresponding solution, $\psi^*$, of (11.194) will belong to a positive value for $c\,p^0 - e\,\Phi^0$. But, the operator in (11.194) is just what we would get if we substituted $-e$ for $e$ in the operator in (11.193). It follows that each negative energy solution of (11.193) is the complex conjugate of a positive energy solution of the wave equation obtained from (11.193) by the substitution of $-e$ for $e$. The latter solution represents an electron of charge $+e$ (instead of $-e$, as we have had up to now) moving through the given electromagnetic field. We conclude that the negative energy solutions of (11.193) refer to the motion of a new type of particle having the mass of an electron, but the opposite charge. Such particles have been observed experimentally, and are called positrons.

Note that we cannot simply assert that the negative energy solutions represent positrons, since this would make the dynamical relations all wrong. For instance, it is certainly not true that a positron has a negative kinetic energy. Instead, we assume that nearly all of the negative energy states are occupied, with one electron in each state, in accordance with the Pauli exclusion principle. An unoccupied negative energy state will now appear as a particle with a positive energy, since to make it disappear we would have to add an electron with a negative energy to the system. We assume that these unoccupied negative energy states correspond to positrons.

The previous assumptions require there to be a distribution of electrons of infinite density everywhere in space. A perfect vacuum is a region of space in which all states of positive energy are unoccupied, and all of those of negative energy are occupied.
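Returning briefly to the result of Section 11.7: the exact eigenvalues (11.191) and the expansion (11.192) can be compared numerically. A sketch, assuming the CODATA values of the fine structure constant and of the electron rest mass energy (neither is quoted in the text):

```python
import numpy as np

ALPHA = 7.2973525693e-3      # fine structure constant (CODATA, assumed value)
ME_C2_EV = 510998.95         # electron rest mass energy in eV (assumed value)

def dirac_energy(i, j):
    """Exact Dirac eigenvalue E/(m_e c^2), Equation (11.191)."""
    s = i + np.sqrt((j + 0.5)**2 - ALPHA**2)
    return (1.0 + ALPHA**2 / s**2) ** -0.5

def expansion(i, j):
    """Expansion (11.192), through order alpha^4."""
    n = i + j + 0.5
    return 1 - ALPHA**2 / (2*n**2) - ALPHA**4 / (2*n**4) * (n/(j + 0.5) - 0.75)

# ground state: i = 0, j = 1/2, so n = 1
exact, approx = dirac_energy(0, 0.5), expansion(0, 0.5)
assert abs(exact - approx) < ALPHA**6          # agreement to the stated order

# binding energy E - m_e c^2, in eV: the familiar -13.6 eV, plus
# the O(alpha^4) fine structure shift
print((exact - 1) * ME_C2_EV)                  # approximately -13.6
```

The same two functions, evaluated at fixed $n$ but different $j$ (e.g. $i = 1$, $j = 1/2$ versus $i = 0$, $j = 3/2$), exhibit the fine structure splitting explicitly.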
In such a vacuum, the Maxwell equation

$\nabla\cdot\mathbf{E} = 0$   (11.195)

must be valid. This implies that the infinite distribution of negative energy electrons does not contribute to the electric field. Thus, only departures from the vacuum distribution contribute to the electric charge density $\rho$ in the Maxwell equation

$\nabla\cdot\mathbf{E} = \frac{\rho}{\epsilon_0}.$   (11.196)

In other words, there is a contribution $-e$ for each occupied state of positive energy, and a contribution $+e$ for each unoccupied state of negative energy.

The exclusion principle ordinarily prevents a positive energy electron from making transitions to states of negative energy. However, it is still possible for such an electron to drop into an unoccupied state of negative energy. In this case, we would observe an electron and a positron simultaneously disappearing, their energy being emitted in the form of radiation. The converse process would consist in the creation of an electron-positron pair from electromagnetic radiation.

Exercises

11.1 Noting that $\alpha_i = -\beta\,\alpha_i\,\beta$, prove that the $\alpha_i$ and $\beta$ matrices all have zero trace. Hence, deduce that each of these matrices has $n$ eigenvalues $+1$, and $n$ eigenvalues $-1$, where $2\,n$ is the dimension of the matrices.

11.2 Verify that the matrices (11.28) and (11.29) satisfy Equations (11.20)–(11.22).

11.3 Verify that the matrices (11.26) and (11.27) satisfy the anti-commutation relations (11.25).

11.4 Verify that if $\partial_\mu\,j^\mu = 0$, where $j^\mu$ is a 4-vector field, then $\int d^3x\,j^0$ is Lorentz invariant, where the integral is over all space, and it is assumed that $j^\mu \rightarrow 0$ as $|\mathbf{x}| \rightarrow \infty$.

11.5 Verify that (11.71) is a solution of (11.70).

11.6 Verify that the $4\times 4$ matrices $\Sigma_i$, defined in (11.92), satisfy the standard anti-commutation relations for Pauli matrices: i.e., $\{\Sigma_i, \Sigma_j\} = 2\,\delta_{ij}$.