Words to the reader about how to use this textbook
I. What This Book Does and Does Not Contain
This text is intended for use by beginning graduate students and advanced upper
division undergraduate students in all areas of chemistry.
It provides:
(i) An introduction to the fundamentals of quantum mechanics as they apply to chemistry,
(ii) Material that provides brief introductions to the subjects of molecular spectroscopy and
chemical dynamics,
(iii) An introduction to computational chemistry applied to the treatment of electronic
structures of atoms, molecules, radicals, and ions,
(iv) A large number of exercises, problems, and detailed solutions.
It does not provide much historical perspective on the development of quantum
mechanics. Subjects such as the photoelectric effect, black-body radiation, the dual nature
of electrons and photons, and the Davisson and Germer experiments are not even
discussed.
To provide a text that students can use to gain introductory level knowledge of
quantum mechanics as applied to chemistry problems, such a non-historical approach had
to be followed. This text immediately exposes the reader to the machinery of quantum
mechanics.
Sections 1 and 2 (i.e., Chapters 1-7), together with Appendices A, B, C and E,
could constitute a one-semester course for most first-year Ph. D. programs in the U. S. A.
Section 3 (Chapters 8-12) and selected material from other appendices or selections from
Section 6 would be appropriate for a second-quarter or second-semester course. Chapters
13- 15 of Sections 4 and 5 would be of use for providing a link to a one-quarter or one-
semester class covering molecular spectroscopy. Chapter 16 of Section 5 provides a brief
introduction to chemical dynamics that could be used at the beginning of a class on this
subject.
There are many quantum chemistry and quantum mechanics textbooks that cover
material similar to that contained in Sections 1 and 2; in fact, our treatment of this material
is generally briefer and less detailed than one finds in, for example, Quantum Chemistry ,
H. Eyring, J. Walter, and G. E. Kimball, J. Wiley and Sons, New York, N.Y. (1947),
Quantum Chemistry , D. A. McQuarrie, University Science Books, Mill Valley, Ca.
(1983), Molecular Quantum Mechanics , P. W. Atkins, Oxford Univ. Press, Oxford,
England (1983), or Quantum Chemistry , I. N. Levine, Prentice Hall, Englewood Cliffs,
N. J. (1991). Depending on the backgrounds of the students, our coverage of these first
two Sections may have to be supplemented.
By covering this introductory material in less detail, we are able, within the
confines of a text that can be used for a one-year or a two-quarter course, to introduce the
student to the more modern subjects treated in Sections 3, 5, and 6. Our coverage of
modern quantum chemistry methodology is not as detailed as that found in Modern
Quantum Chemistry , A. Szabo and N. S. Ostlund, McGraw-Hill, New York (1989),
which contains little or none of the introductory material of our Sections 1 and 2.
By combining both introductory and modern up-to-date quantum chemistry material
in a single book designed to serve as a text for one-quarter, one-semester, two-quarter, or
one-year classes for first-year graduate students, we offer a unique product.
It is anticipated that a course dealing with atomic and molecular spectroscopy will
follow the student's mastery of the material covered in Sections 1- 4. For this reason,
beyond these introductory sections, this text's emphasis is placed on electronic structure
applications rather than on vibrational and rotational energy levels, which are traditionally
covered in considerable detail in spectroscopy courses.
In brief summary, this book includes the following material:
1. The Section entitled The Basic Tools of Quantum Mechanics treats
the fundamental postulates of quantum mechanics and several applications to exactly
soluble model problems. These problems include the conventional particle-in-a-box (in one
and more dimensions), rigid-rotor, harmonic oscillator, and one-electron hydrogenic
atomic orbitals. The concept of the Born-Oppenheimer separation of electronic and
vibration-rotation motions is introduced here. Moreover, the vibrational and rotational
energies, states, and wavefunctions of diatomic, linear polyatomic and non-linear
polyatomic molecules are discussed here at an introductory level. This section also
introduces the variational method and perturbation theory as tools that are used to deal with
problems that cannot be solved exactly.
2. The Section Simple Molecular Orbital Theory deals with atomic and
molecular orbitals in a qualitative manner, including their symmetries, shapes, sizes, and
energies. It introduces bonding, non-bonding, and antibonding orbitals, delocalized,
hybrid, and Rydberg orbitals, and introduces Hückel-level models for the calculation of
molecular orbitals as linear combinations of atomic orbitals (a more extensive treatment of
several semi-empirical methods is provided in Appendix F). This section also develops
the Orbital Correlation Diagram concept that plays a central role in using Woodward-
Hoffmann rules to predict whether chemical reactions encounter symmetry-imposed
barriers.
3. The Electronic Configurations, Term Symbols, and States Section treats the spatial, angular momentum, and spin symmetries of the many-electron
wavefunctions that are formed as antisymmetrized products of atomic or molecular orbitals.
Proper coupling of angular momenta (orbital and spin) is covered here, and atomic and
molecular term symbols are treated. The need to include Configuration Interaction to
achieve qualitatively correct descriptions of certain species' electronic structures is treated
here. The role of the resultant Configuration Correlation Diagrams in the Woodward-
Hoffmann theory of chemical reactivity is also developed.
4. The Section on Molecular Rotation and Vibration provides an
introduction to how vibrational and rotational energy levels and wavefunctions are
expressed for diatomic, linear polyatomic, and non-linear polyatomic molecules whose
electronic energies are described by a single potential energy surface. Rotations of "rigid"
molecules and harmonic vibrations of uncoupled normal modes constitute the starting point
of such treatments.
5. The Time Dependent Processes Section uses time-dependent perturbation
theory, combined with the classical electric and magnetic fields that arise due to the
interaction of photons with the nuclei and electrons of a molecule, to derive expressions for
the rates of transitions among atomic or molecular electronic, vibrational, and rotational
states induced by photon absorption or emission. Sources of line broadening and time
correlation function treatments of absorption lineshapes are briefly introduced. Finally,
transitions induced by collisions rather than by electromagnetic fields are briefly treated to
provide an introduction to the subject of theoretical chemical dynamics.
6. The Section on More Quantitative Aspects of Electronic Structure Calculations introduces many of the computational chemistry methods that are used
to quantitatively evaluate molecular orbital and configuration mixing amplitudes. The
Hartree-Fock self-consistent field (SCF), configuration interaction (CI),
multiconfigurational SCF (MCSCF), many-body and Møller-Plesset perturbation theories,
coupled-cluster (CC), and density functional or Xα-like methods are included. The
strengths and weaknesses of each of these techniques are discussed in some detail. Having
mastered this section, the reader should be familiar with how potential energy
hypersurfaces, molecular properties, forces on the individual atomic centers, and responses
to externally applied fields or perturbations are evaluated on high speed computers.
II. How to Use This Book: Other Sources of Information and Building Necessary
Background
In most classroom settings, the group of students learning quantum mechanics as it
applies to chemistry has quite diverse backgrounds. In particular, the level of preparation
in mathematics is likely to vary considerably from student to student, as will the exposure
to symmetry and group theory. This text is organized in a manner that allows students to
skip material that is already familiar while providing access to most if not all necessary
background material. This is accomplished by dividing the material into sections, chapters
and Appendices which fill in the background, provide methodological tools, and provide
additional details.
The Appendices covering Point Group Symmetry and Mathematics Review are
especially important to master. Neither of these two Appendices provides a first-principles
treatment of their subject matter. The students are assumed to have fulfilled normal
American Chemical Society mathematics requirements for a degree in chemistry, so only a
review of the material especially relevant to quantum chemistry is given in the Mathematics
Review Appendix. Likewise, the student is assumed to have learned or to be
simultaneously learning about symmetry and group theory as applied to chemistry, so this
subject is treated in a review and practical-application manner here. If group theory is to be
included as an integral part of the class, then this text should be supplemented (e.g., by
using the text Chemical Applications of Group Theory , F. A. Cotton, Interscience, New
York, N. Y. (1963)).
The progression of sections leads the reader from the principles of quantum
mechanics and several model problems which illustrate these principles and relate to
chemical phenomena, through atomic and molecular orbitals, N-electron configurations,
states, and term symbols, vibrational and rotational energy levels, photon-induced
transitions among various levels, and eventually to computational techniques for treating
chemical bonding and reactivity.
At the end of each Section, a set of Review Exercises and fully worked out
answers are given. Attempting to work these exercises should allow the student to
determine whether he or she needs to pursue additional background building via the
Appendices .
In addition to the Review Exercises , sets of Exercises and Problems, and
their solutions, are given at the end of each section.
The exercises are brief and highly focused on learning a particular skill. They allow the
student to practice the mathematical steps and other material introduced in the section. The
problems are more extensive and require that numerous steps be executed. They illustrate
application of the material contained in the chapter to chemical phenomena and they help
teach the relevance of this material to experimental chemistry. In many cases, new material
is introduced in the problems, so all readers are encouraged to become actively involved in
solving all problems.
To further assist the learning process, readers may find it useful to consult other
textbooks or literature references. Several particular texts are recommended for additional
reading, further details, or simply an alternative point of view. They include the following
(in each case, the abbreviated name used in this text is given following the proper
reference):
1. Quantum Chemistry , H. Eyring, J. Walter, and G. E. Kimball, J. Wiley
and Sons, New York, N.Y. (1947)- EWK.
2. Quantum Chemistry , D. A. McQuarrie, University Science Books, Mill Valley, Ca.
(1983)- McQuarrie.
3. Molecular Quantum Mechanics , P. W. Atkins, Oxford Univ. Press, Oxford, England
(1983)- Atkins.
4. The Fundamental Principles of Quantum Mechanics , E. C. Kemble, McGraw-Hill, New
York, N.Y. (1937)- Kemble.
5. The Theory of Atomic Spectra , E. U. Condon and G. H. Shortley, Cambridge Univ.
Press, Cambridge, England (1963)- Condon and Shortley.
6. The Principles of Quantum Mechanics , P. A. M. Dirac, Oxford Univ. Press, Oxford,
England (1947)- Dirac.
7. Molecular Vibrations , E. B. Wilson, J. C. Decius, and P. C. Cross, Dover Pub., New
York, N. Y. (1955)- WDC.
8. Chemical Applications of Group Theory , F. A. Cotton, Interscience, New York, N. Y.
(1963)- Cotton.
9. Angular Momentum , R. N. Zare, John Wiley and Sons, New York, N. Y. (1988)-
Zare.
10. Introduction to Quantum Mechanics , L. Pauling and E. B. Wilson, Dover Publications,
Inc., New York, N. Y. (1963)- Pauling and Wilson.
11. Modern Quantum Chemistry , A. Szabo and N. S. Ostlund, McGraw-Hill, New York
(1989)- Szabo and Ostlund.
12. Quantum Chemistry , I. N. Levine, Prentice Hall, Englewood Cliffs, N. J. (1991)-
Levine.
13. Energetic Principles of Chemical Reactions , J. Simons, Jones and Bartlett, Portola
Valley, Calif. (1983).
Section 1 The Basic Tools of Quantum Mechanics
Chapter 1
Quantum Mechanics Describes Matter in Terms of Wavefunctions and Energy Levels.
Physical Measurements are Described in Terms of Operators Acting on Wavefunctions
I. Operators, Wavefunctions, and the Schrödinger Equation
The trends in chemical and physical properties of the elements described beautifully
in the periodic table and the ability of early spectroscopists to fit atomic line spectra by
simple mathematical formulas and to interpret atomic electronic states in terms of empirical
quantum numbers provide compelling evidence that some relatively simple framework
must exist for understanding the electronic structures of all atoms. The great predictive
power of the concept of atomic valence further suggests that molecular electronic structure
should be understandable in terms of those of the constituent atoms.
Much of quantum chemistry attempts to make more quantitative these aspects of
chemists' view of the periodic table and of atomic valence and structure. By starting from
'first principles' and treating atomic and molecular states as solutions of a so-called
Schrödinger equation, quantum chemistry seeks to determine what underlies the empirical
quantum numbers, orbitals, the aufbau principle and the concept of valence used by
spectroscopists and chemists, in some cases, even prior to the advent of quantum
mechanics.
Quantum mechanics is cast in a language that is not familiar to most students of
chemistry who are examining the subject for the first time. Its mathematical content and
how it relates to experimental measurements both require a great deal of effort to master.
With these thoughts in mind, the authors have organized this introductory section in a
manner that first provides the student with a brief introduction to the two primary
constructs of quantum mechanics, operators and wavefunctions that obey a Schrödinger
equation, then demonstrates the application of these constructs to several chemically
relevant model problems, and finally returns to examine in more detail the conceptual
structure of quantum mechanics.
By learning the solutions of the Schrödinger equation for a few model systems, the
student can better appreciate the treatment of the fundamental postulates of quantum
mechanics as well as their relation to experimental measurement, because the
wavefunctions of the known model problems can be used as concrete illustrations.
A. Operators
Each physically measurable quantity has a corresponding operator. The eigenvalues
of the operator tell the values of the corresponding physical property that can be observed
In quantum mechanics, any experimentally measurable physical quantity F (e.g.,
energy, dipole moment, orbital angular momentum, spin angular momentum, linear
momentum, kinetic energy) whose classical mechanical expression can be written in terms
of the cartesian positions {qi} and momenta {pi} of the particles that comprise the system
of interest is assigned a corresponding quantum mechanical operator F. Given F in terms
of the {qi} and {pi}, F is formed by replacing pj by -ih∂/∂qj and leaving qj untouched.
For example, if
F=Σl=1,N (pl2/2ml + 1/2 k(ql-ql0)2 + L(ql-ql0)),
then
F=Σl=1,N (- h2/2ml ∂2/∂ql2 + 1/2 k(ql-ql0)2 + L(ql-ql0))
is the corresponding quantum mechanical operator. Such an operator would occur when,
for example, one describes the sum of the kinetic energies of a collection of particles (the
Σl=1,N (pl2/2ml) term), plus the sum of "Hooke's Law" parabolic potentials (the 1/2 Σl=1,N
k(ql-ql0)2 term), plus (the last term in F) the interactions of the particles with an externally
applied field whose potential energy varies linearly as the particles move away from their
equilibrium positions {ql0}.
The sum of the z-components of angular momenta of a collection of N particles has
F=Σ j=1,N (xjpyj - yjpxj),
and the corresponding operator is
F=-ih Σ j=1,N (xj∂/∂yj - yj∂/∂xj).
The x-component of the dipole moment for a collection of N particles
has
F=Σ j=1,N Zjexj, and
F=Σ j=1,N Zjexj ,
where Zje is the charge on the jth particle.
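The substitution rule pj → -ih∂/∂qj can be checked symbolically. The following sketch (an added illustration, not part of the original text) uses the Python sympy library to apply the one-particle z-angular momentum operator to the function (x + iy)m, which equals rm exp(imφ) in polar form and should therefore return the eigenvalue m h (with h denoting h-bar, written hbar in the code):

```python
import sympy as sp

x, y, h = sp.symbols('x y hbar')

# One-particle z-angular-momentum operator from the text (N = 1 case):
# L_z f = -i h (x df/dy - y df/dx), with h denoting h-bar.
def Lz(f):
    return -sp.I * h * (x * sp.diff(f, y) - y * sp.diff(f, x))

# (x + i y)**m equals r**m exp(i m phi) in polar form, so it should be an
# eigenfunction of L_z with eigenvalue m*hbar.  Check m = 1, 2, 3 explicitly:
for m in (1, 2, 3):
    f = (x + sp.I * y)**m
    assert sp.expand(Lz(f) - m * h * f) == 0
print("L_z (x+iy)^m = m hbar (x+iy)^m for m = 1, 2, 3")
```

The same pattern can be used to test any of the operators formed by the cartesian substitution rule.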
The mapping from F to F is straightforward only in terms of cartesian coordinates.
To map a classical function F, given in terms of curvilinear coordinates (even if they are
orthogonal), into its quantum operator is not at all straightforward. Interested readers are
referred to Kemble's text on quantum mechanics which deals with this matter in detail. The
mapping can always be done in terms of cartesian coordinates after which a transformation
of the resulting coordinates and differential operators to a curvilinear system can be
performed. The corresponding transformation of the kinetic energy operator to spherical
coordinates is treated in detail in Appendix A. The text by EWK also covers this topic in
considerable detail.
The relationship of these quantum mechanical operators to experimental
measurement will be made clear later in this chapter. For now, suffice it to say that these
operators define equations whose solutions determine the values of the corresponding
physical property that can be observed when a measurement is carried out; only the values
so determined can be observed. This should suggest the origins of quantum mechanics'
prediction that some measurements will produce discrete or quantized values of certain
variables (e.g., energy, angular momentum, etc.).
B. Wavefunctions
The eigenfunctions of a quantum mechanical operator depend on the coordinates
upon which the operator acts; these functions are called wavefunctions
In addition to operators corresponding to each physically measurable quantity,
quantum mechanics describes the state of the system in terms of a wavefunction Ψ that is a
function of the coordinates {qj} and of time t. The function |Ψ(qj,t)|2 = Ψ*Ψ gives the
probability density for observing the coordinates at the values qj at time t. For a many-
particle system such as the H2O molecule, the wavefunction depends on many coordinates.
For the H2O example, it depends on the x, y, and z (or r,θ, and φ) coordinates of the ten
electrons and the x, y, and z (or r,θ, and φ) coordinates of the oxygen nucleus and of the
two protons; a total of thirty-nine coordinates appear in Ψ.
In classical mechanics, the coordinates qj and their corresponding momenta pj are
functions of time. The state of the system is then described by specifying qj(t) and pj(t). In
quantum mechanics, the concept that qj is known as a function of time is replaced by the
concept of the probability density for finding qj at a particular value at a particular time t:
|Ψ(qj,t)|2. Knowledge of the corresponding momenta as functions of time is also
relinquished in quantum mechanics; again, only knowledge of the probability density for
finding pj with any particular value at a particular time t remains.
C. The Schrödinger Equation
This equation is an eigenvalue equation for the energy or Hamiltonian operator; its
eigenvalues provide the energy levels of the system
1. The Time-Dependent Equation
If the Hamiltonian operator contains the time variable explicitly, one must solve the
time-dependent Schrödinger equation
How to extract from Ψ(qj,t) knowledge about momenta is treated below in Sec. III.
A, where the structure of quantum mechanics, the use of operators and wavefunctions to
make predictions and interpretations about experimental measurements, and the origin of
'uncertainty relations' such as the well known Heisenberg uncertainty condition dealing
with measurements of coordinates and momenta are also treated.
Before moving deeper into understanding what quantum mechanics 'means', it is
useful to learn how the wavefunctions Ψ are found by applying the basic equation of
quantum mechanics, the Schrödinger equation , to a few exactly soluble model problems.
Knowing the solutions to these 'easy' yet chemically very relevant models will then
facilitate learning more of the details about the structure of quantum mechanics because
these model cases can be used as 'concrete examples'.
The Schrödinger equation is a differential equation depending on time and on all of
the spatial coordinates necessary to describe the system at hand (thirty-nine for the H2O
example cited above). It is usually written
H Ψ = i h ∂Ψ/∂t
where Ψ(qj,t) is the unknown wavefunction and H is the operator corresponding to the
total energy physical property of the system. This operator is called the Hamiltonian and is
formed, as stated above, by first writing down the classical mechanical expression for the
total energy (kinetic plus potential) in cartesian coordinates and momenta and then replacing
all classical momenta pj by their quantum mechanical operators pj = - ih∂/∂qj .
For the H2O example used above, the classical mechanical energy of all thirteen
particles is
E = Σi { pi2/2me + 1/2 Σj e2/ri,j - Σa Zae2/ri,a }
+ Σa {pa2/2ma + 1/2 Σb ZaZbe2/ra,b },
where the indices i and j are used to label the ten electrons whose thirty cartesian
coordinates are {qi} and a and b label the three nuclei whose charges are denoted {Za}, and
whose nine cartesian coordinates are {qa}. The electron and nuclear masses are denoted me
and {ma}, respectively.
The corresponding Hamiltonian operator is
H = Σi { - (h2/2me) ∂2/∂qi2 + 1/2 Σj e2/ri,j - Σa Zae2/ri,a }
+ Σa { - (h2/2ma) ∂2/∂qa2+ 1/2 Σb ZaZbe2/ra,b }.
Notice that H is a second order differential operator in the space of the thirty-nine cartesian
coordinates that describe the positions of the ten electrons and three nuclei. It is a second
order operator because the momenta appear in the kinetic energy as pj2 and pa2, and the
quantum mechanical operator for each momentum p = -ih ∂/∂q is of first order.
The Schrödinger equation for the H2O example at hand then reads
Σi { - (h2/2me) ∂2/∂qi2 + 1/2 Σj e2/ri,j - Σa Zae2/ri,a } Ψ
+ Σa { - (h2/2ma) ∂2/∂qa2+ 1/2 Σb ZaZbe2/ra,b } Ψ
= i h ∂Ψ/∂t.
2. The Time-Independent Equation
If the Hamiltonian operator does not contain the time variable explicitly, one can
solve the time-independent Schrödinger equation
In cases where the classical energy, and hence the quantum Hamiltonian, do not
contain terms that are explicitly time dependent (e.g., interactions with time varying
external electric or magnetic fields would add to the above classical energy expression time
dependent terms discussed later in this text), the separations of variables techniques can be
used to reduce the Schrödinger equation to a time-independent equation.
In such cases, H is not explicitly time dependent, so one can assume that Ψ(qj,t) is
of the form
Ψ(qj,t) = Ψ(qj) F(t).
Substituting this 'ansatz' into the time-dependent Schrödinger equation gives
Ψ(qj) i h ∂F/∂t = H Ψ(qj) F(t) .
Dividing by Ψ(qj) F(t) then gives
F-1 (i h ∂F/∂t) = Ψ-1 (H Ψ(qj) ).
Since F(t) is only a function of time t, and Ψ(qj) is only a function of the spatial
coordinates {qj}, and because the left hand and right hand sides must be equal for all
values of t and of {qj}, both the left and right hand sides must equal a constant. If this
constant is called E, the two equations that are embodied in this separated Schrödinger
equation read as follows:
H Ψ(qj) = E Ψ(qj),
i h ∂F(t)/∂t = ih dF(t)/dt = E F(t).
The first of these equations is called the time-independent Schrödinger equation; it
is a so-called eigenvalue equation in which one is asked to find functions that yield a
constant multiple of themselves when acted on by the Hamiltonian operator. Such functions
are called eigenfunctions of H and the corresponding constants are called eigenvalues of H.
For example, if H were of the form H = - h2/2M ∂2/∂φ2 , then functions of the form
exp(imφ) would be eigenfunctions because
{ - h2/2M ∂2/∂φ2} exp(i mφ) = { m2 h2 /2M } exp(i mφ).
In this case, { m2 h2 /2M } is the eigenvalue.
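This eigenvalue relation is easy to confirm with a computer-algebra system; the brief sympy sketch below (an added illustration, assuming sympy is available) forms H ψ for ψ = exp(imφ) and divides by ψ to recover the constant eigenvalue:

```python
import sympy as sp

phi, h, M = sp.symbols('phi hbar M', positive=True)
m = sp.symbols('m', integer=True)

# Trial function exp(i m phi) acted on by H = -h^2/(2M) d^2/dphi^2:
psi = sp.exp(sp.I * m * phi)
H_psi = -h**2 / (2 * M) * sp.diff(psi, phi, 2)

# The ratio H psi / psi is the (constant) eigenvalue m^2 h^2 / (2M):
eigenvalue = sp.simplify(H_psi / psi)
print(eigenvalue)
```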
When the Schrödinger equation can be separated to generate a time-independent
equation describing the spatial coordinate dependence of the wavefunction, the eigenvalue
E must be returned to the equation determining F(t) to find the time dependent part of the
wavefunction. By solving
ih dF(t)/dt = E F(t)
once E is known, one obtains
F(t) = exp( -i Et/ h),
and the full wavefunction can be written as
Ψ(qj,t) = Ψ(qj) exp (-i Et/ h).
For the above example, the time dependence is expressed by
F(t) = exp ( -i t { m2 h2 /2M }/ h).
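Two properties of such 'stationary state' wavefunctions are worth verifying directly: the factor F(t) satisfies the separated time equation, and the probability density |Ψ|2 carries no time dependence at all. A short sympy check (an added illustration, with the spatial eigenfunction left as an abstract symbol):

```python
import sympy as sp

t, E, h = sp.symbols('t E hbar', positive=True)
q = sp.symbols('q')
psi = sp.Function('psi')(q)      # spatial eigenfunction; H psi = E psi assumed

F = sp.exp(-sp.I * E * t / h)    # the time-dependent factor derived above
Psi = psi * F

# F satisfies i h dF/dt = E F:
assert sp.simplify(sp.I * h * sp.diff(F, t) - E * F) == 0

# The probability density |Psi|^2 of a stationary state is time independent:
density = sp.simplify(Psi * sp.conjugate(Psi))
assert sp.diff(density, t) == 0
print(density)
```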
Having been introduced to the concepts of operators, wavefunctions, the
Hamiltonian and its Schrödinger equation, it is important to now consider several examples
of the applications of these concepts. The examples treated below were chosen to provide
the learner with valuable experience in solving the Schrödinger equation; they were also
chosen because the models they embody form the most elementary chemical models of
electronic motions in conjugated molecules and in atoms, rotations of linear molecules, and
vibrations of chemical bonds.
II. Examples of Solving the Schrödinger Equation
A. Free-Particle Motion in Two Dimensions
The number of dimensions depends on the number of particles and the number of
spatial (and other) dimensions needed to characterize the position and motion of each
particle
1. The Schrödinger Equation
Consider an electron of mass m and charge e moving on a two-dimensional surface
that defines the x,y plane (perhaps the electron is constrained to the surface of a solid by a
potential that binds it tightly to a narrow region in the z-direction), and assume that the
electron experiences a constant potential V0 at all points in this plane (on any real atomic or
molecular surface, the electron would experience a potential that varies with position in a
manner that reflects the periodic structure of the surface). The pertinent time independent
Schrödinger equation is:
- h2/2m (∂2/∂x2 +∂2/∂y2)ψ(x,y) +V0ψ(x,y) = E ψ(x,y).
Because there are no terms in this equation that couple motion in the x and y directions
(e.g., no terms of the form xayb or ∂/∂x ∂/∂y or x∂/∂y), separation of variables can be used
to write ψ as a product ψ(x,y)=A(x)B(y). Substitution of this form into the Schrödinger
equation, followed by collecting together all x-dependent and all y-dependent terms, gives:
- h2/2m A-1∂2A/∂x2 - h2/2m B-1∂2B/∂y2 =E-V0.
Since the first term contains no y-dependence and the second contains no x-dependence,
both must actually be constant (these two constants are denoted Ex and Ey, respectively),
which allows two separate Schrödinger equations to be written:
- h2/2m A-1∂2A/∂x2 =Ex, and
- h2/2m B-1∂2B/∂y2 =Ey.
The total energy E can then be expressed in terms of these separate energies Ex and Ey as
Ex + Ey =E-V0. Solutions to the x- and y- Schrödinger equations are easily seen to be:
A(x) = exp(ix(2mEx/h2)1/2) and exp(-ix(2mEx/h2)1/2) ,
B(y) = exp(iy(2mEy/h2)1/2) and exp(-iy(2mEy/h2)1/2).
Two independent solutions are obtained for each equation because the x- and y-space
Schrödinger equations are both second order differential equations.
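That both exponentials solve the x-equation with the same energy Ex can be confirmed symbolically. In the sympy sketch below (an added illustration), the wave number k is defined so that Ex = h2k2/2m, matching the exponents quoted above:

```python
import sympy as sp

x, h, m, Ex = sp.symbols('x hbar m E_x', positive=True)
k = sp.sqrt(2 * m * Ex) / h      # wave number, so that Ex = hbar^2 k^2 / (2m)

# Both exponentials solve  -hbar^2/(2m) d^2A/dx^2 = Ex A:
for A in (sp.exp(sp.I * k * x), sp.exp(-sp.I * k * x)):
    residual = sp.simplify(-h**2 / (2 * m) * sp.diff(A, x, 2) - Ex * A)
    assert residual == 0
print("exp(+ikx) and exp(-ikx) both solve the x-equation with energy Ex")
```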
2. Boundary Conditions
The boundary conditions, not the Schrödinger equation, determine whether the
eigenvalues will be discrete or continuous
If the electron is entirely unconstrained within the x,y plane, the energies Ex and Ey
can assume any value; this means that the experimenter can 'inject' the electron onto the x,y
plane with any total energy E and any components Ex and Ey along the two axes as long as
Ex + Ey = E. In such a situation, one speaks of the energies along both coordinates as
being 'in the continuum' or 'not quantized'.
In contrast, if the electron is constrained to remain within a fixed area in the x,y
plane (e.g., a rectangular or circular region), then the situation is qualitatively different.
Constraining the electron to any such specified area gives rise to so-called boundary
conditions that impose additional requirements on the above A and B functions.
These constraints can arise, for example, if the potential V0(x,y) becomes very large for
x,y values outside the region, in which case, the probability of finding the electron outside
the region is very small. Such a case might represent, for example, a situation in which the
molecular structure of the solid surface changes outside the enclosed region in a way that is
highly repulsive to the electron.
For example, if motion is constrained to take place within a rectangular region
defined by 0 ≤ x ≤ Lx; 0 ≤ y ≤ Ly, then the continuity property that all wavefunctions must
obey (because of their interpretation as probability densities, which must be continuous)
causes A(x) to vanish at 0 and at Lx. Likewise, B(y) must vanish at 0 and at Ly. To
implement these constraints for A(x), one must linearly combine the above two solutions
exp(ix(2mEx/h2)1/2) and exp(-ix(2mEx/h2)1/2) to achieve a function that vanishes at x=0:
A(x) = exp(ix(2mEx/h2)1/2) - exp(-ix(2mEx/h2)1/2).
One is allowed to linearly combine solutions of the Schrödinger equation that have the same
energy (i.e., are degenerate) because Schrödinger equations are linear differential
equations. An analogous process must be applied to B(y) to achieve a function that
vanishes at y=0:
B(y) = exp(iy(2mEy/h2)1/2) - exp(-iy(2mEy/h2)1/2).
Further requiring A(x) and B(y) to vanish, respectively, at x=Lx and y=Ly, gives
equations that can be obeyed only if Ex and Ey assume particular values:
exp(iLx(2mEx/h2)1/2) - exp(-iLx(2mEx/h2)1/2) = 0, and
exp(iLy(2mEy/h2)1/2) - exp(-iLy(2mEy/h2)1/2) = 0.
These equations are equivalent to
sin(Lx(2mEx/h2)1/2) = sin(Ly(2mEy/h2)1/2) = 0.
Knowing that sin(θ) vanishes at θ=nπ, for n=1,2,3,... (although sin(θ) also vanishes for
n=0, that choice makes the wavefunction vanish for all x or y and is therefore
unacceptable because it represents zero probability density at all points in space), one
concludes that the energies Ex and Ey can assume only values that obey:
Lx(2mEx/h2)1/2 =nxπ,
Ly(2mEy/h2)1/2 =nyπ, or
Ex = nx2π2 h2/(2mLx2), and
Ey = ny2π2 h2/(2mLy2), with nx and ny =1,2,3, ...
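To get a feel for the magnitudes these formulas imply, one can evaluate them numerically. The Python sketch below (an added illustration; the nanometer-scale box dimensions are arbitrary choices, not taken from the text) computes E - V0 = Ex + Ey for an electron in a 1 nm by 2 nm rectangle:

```python
import math

# Physical constants (SI units); hbar denotes h-bar, as in the text's h.
hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
eV   = 1.602176634e-19   # J per electron volt

def E_box(n, L, m=m_e):
    """One quantized level, E_n = n^2 pi^2 hbar^2 / (2 m L^2), in Joules."""
    return n**2 * math.pi**2 * hbar**2 / (2.0 * m * L**2)

# An electron in a (hypothetical) 1 nm x 2 nm rectangle; E - V0 = Ex + Ey:
Lx, Ly = 1.0e-9, 2.0e-9
for nx, ny in [(1, 1), (2, 1), (1, 2)]:
    E = (E_box(nx, Lx) + E_box(ny, Ly)) / eV
    print(f"nx={nx}, ny={ny}:  E - V0 = {E:.3f} eV")
```

The resulting level spacings are a few tenths of an electron volt, comparable to electronic excitation energies in conjugated molecules, which is one reason this model is chemically useful.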
It is important to stress that it is the imposition of boundary conditions, expressing the fact
that the electron is spatially constrained, that gives rise to quantized energies. In the absence
of spatial confinement, or with confinement only at x =0 or Lx or only
at y =0 or Ly, quantized energies would not be realized.
In this example, confinement of the electron to a finite interval along both the x and
y coordinates yields energies that are quantized along both axes. If the electron were
confined along one coordinate (e.g., between 0 ≤ x ≤ Lx) but not along the other (i.e., B(y)
is either restricted to vanish at y=0 or at y=Ly or at neither point), then the total energy E
lies in the continuum; its Ex component is quantized but Ey is not. Such cases arise, for
example, when a linear triatomic molecule has more than enough energy in one of its bonds
to rupture it but not much energy in the other bond; the first bond's energy lies in the
continuum, but the second bond's energy is quantized.
Perhaps more interesting is the case in which the bond with the higher dissociation
energy is excited to a level that is not enough to break it but that is in excess of the
dissociation energy of the weaker bond. In this case, one has two degenerate states- i. the
strong bond having high internal energy and the weak bond having low energy (ψ1), and
ii. the strong bond having little energy and the weak bond having more than enough energy
to rupture it (ψ2). Although an experiment may prepare the molecule in a state that contains
only the former component (i.e., ψ= C1ψ1 + C2ψ2 with C1>>C2), coupling between the
two degenerate functions (induced by terms in the Hamiltonian H that have been ignored in
defining ψ1 and ψ2) usually causes the true wavefunction Ψ = exp(-itH/h) ψ to acquire a
component of the second function as time evolves. In such a case, one speaks of internal
vibrational energy flow giving rise to unimolecular decomposition of the molecule.
3. Energies and Wavefunctions for Bound States
For discrete energy levels, the energies are specified functions that depend on
quantum numbers, one for each degree of freedom that is quantized
Returning to the situation in which motion is constrained along both axes, the
resultant total energies and wavefunctions (obtained by inserting the quantized energy levels
into the expressions for A(x) B(y)) are as follows:
Ex = nx2π2 h2/(2mLx2), and
Ey = ny2π2 h2/(2mLy2),
E = Ex + Ey ,
ψ(x,y) = (1/2Lx)1/2 (1/2Ly)1/2[exp(inxπx/Lx) -exp(-inxπx/Lx)]
[exp(inyπy/Ly) -exp(-inyπy/Ly)], with nx and ny =1,2,3, ... .
The two (1/2L)1/2 factors are included to guarantee that ψ is normalized:
∫ |ψ(x,y)|2 dx dy = 1.
Normalization allows |ψ(x,y)|2 to be properly identified as a probability density for finding
the electron at a point x, y.
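Because exp(inxπx/Lx) - exp(-inxπx/Lx) = 2i sin(nxπx/Lx), the wavefunction above is, up to a constant phase, (2/Lx)1/2 sin(nxπx/Lx) (2/Ly)1/2 sin(nyπy/Ly). A brief numerical sketch (the box lengths and quantum numbers are chosen arbitrarily for illustration) confirms that this function integrates to unit probability:

```python
import math

Lx, Ly = 1.0, 2.0   # arbitrary box lengths (any consistent units)
nx, ny = 2, 3       # arbitrary quantum numbers

def psi(x, y):
    """Normalized 2-D particle-in-a-box wavefunction (real sine form)."""
    return (math.sqrt(2.0 / Lx) * math.sin(nx * math.pi * x / Lx)
            * math.sqrt(2.0 / Ly) * math.sin(ny * math.pi * y / Ly))

# Midpoint-rule estimate of the integral of |psi|^2 over the box.
N = 200
dx, dy = Lx / N, Ly / N
norm = sum(psi((i + 0.5) * dx, (j + 0.5) * dy)**2 * dx * dy
           for i in range(N) for j in range(N))
print(norm)   # very close to 1
```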
4. Quantized Action Can Also be Used to Derive Energy Levels
There is another approach that can be used to find energy levels and is especially
straightforward to use for systems whose Schrödinger equations are separable. The so-
called classical action (denoted S) of a particle moving with momentum p along a path
leading from initial coordinate qi at initial time ti to a final coordinate qf at time tf is defined
by:
S = ∫[qi,ti → qf,tf] p•dq .
Here, the momentum vector p contains the momenta along all coordinates of the system,
and the coordinate vector q likewise contains the coordinates along all such degrees of
freedom. For example, in the two-dimensional particle in a box problem considered above,
q = (x, y) has two components, as does p = (px, py),
and the action integral is:
S = ∫[xi,yi,ti → xf,yf,tf] (px dx + py dy) .
In computing such actions, it is essential to keep in mind the sign of the momentum as the
particle moves from its initial to its final positions. An example will help clarify these
matters.
For systems such as the above particle in a box example for which the Hamiltonian
is separable, the action integral decomposes into a sum of such integrals, one for each
degree of freedom. In this two-dimensional example, the additivity of H:
H = Hx + Hy = px2/2m + py2/2m + V(x) + V(y)
= - h2/2m ∂2/∂x2 + V(x) - h2/2m ∂2/∂y2 + V(y)
means that px and py can be independently solved for in terms of the potentials V(x) and
V(y) as well as the energies Ex and Ey associated with each separate degree of freedom:
px = ± [2m(Ex - V(x))]1/2
py = ± [2m(Ey - V(y))]1/2 ;
the signs on px and py must be chosen to properly reflect the motion that the particle is
actually undergoing. Substituting these expressions into the action integral yields:
S = Sx + Sy
= ∫[xi,ti → xf,tf] ± [2m(Ex - V(x))]1/2 dx + ∫[yi,ti → yf,tf] ± [2m(Ey - V(y))]1/2 dy .
The relationship between these classical action integrals and the existence of quantized
energy levels has been shown to involve equating the classical action for motion on a closed
path (i.e., a path that starts and ends at the same place after undergoing motion away from
the starting point but eventually returning to the starting coordinate at a later time) to an
integral multiple of Planck's constant:
Sclosed = ∫[qi,ti → qf=qi,tf] p•dq = n h. (n = 1, 2, 3, 4, ...)
Applied to each of the independent coordinates of the two-dimensional particle in a box
problem, this expression reads:
nx h = ∫[x=0 → x=Lx] [2m(Ex - V(x))]1/2 dx + ∫[x=Lx → x=0] - [2m(Ex - V(x))]1/2 dx
ny h = ∫[y=0 → y=Ly] [2m(Ey - V(y))]1/2 dy + ∫[y=Ly → y=0] - [2m(Ey - V(y))]1/2 dy .
Notice that the signs of the momenta are positive in each of the first integrals appearing
above (because the particle is moving from x = 0 to x = Lx, and analogously for y-motion,
and thus has positive momentum) and negative in each of the second integrals (because the
motion is from x = Lx to x = 0 (and analogously for y-motion) and thus with negative
momentum). Within the region bounded by 0 ≤ x ≤ Lx; 0 ≤ y ≤ Ly, the potential vanishes,
so V(x) = V(y) = 0. Using this fact, and reversing the upper and lower limits, and thus the
sign, in the second integrals above, one obtains:
nx h = 2 ∫[x=0 → x=Lx] (2mEx)1/2 dx = 2 (2mEx)1/2 Lx
ny h = 2 ∫[y=0 → y=Ly] (2mEy)1/2 dy = 2 (2mEy)1/2 Ly .
Solving for Ex and Ey, one finds:
Ex = (nxh)2/(8mLx2)
Ey = (nyh)2/(8mLy2) .
These are the same quantized energy levels that arose when the wavefunction boundary
conditions were matched at x = 0, x = Lx and y = 0, y = Ly. In this case, one says that the
Bohr-Sommerfeld quantization condition:
n h = ∫[qi,ti → qf=qi,tf] p•dq
has been used to obtain the result.
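One can verify numerically that the Bohr-Sommerfeld route reproduces the boundary-condition energies exactly. In this sketch (the electron mass and box length are illustrative values, not from the text), the closed-path action 2L(2mE)1/2 is evaluated at E = (nh)2/(8mL2) and compared to nh, and the two energy formulas are compared directly:

```python
import math

h = 6.62607015e-34     # Planck's constant, J*s
hbar = h / (2.0 * math.pi)
m = 9.109e-31          # electron mass, kg
L = 1.0e-9             # illustrative box length, m

def closed_action(E):
    """Action for one round trip 0 -> L -> 0: S = 2 L (2 m E)^(1/2)."""
    return 2.0 * L * math.sqrt(2.0 * m * E)

def E_action(n):
    """Energy from S = n h, i.e. E = (n h)^2 / (8 m L^2)."""
    return (n * h)**2 / (8.0 * m * L**2)

def E_boundary(n):
    """Energy from the boundary-condition route: n^2 pi^2 hbar^2 / (2 m L^2)."""
    return n**2 * math.pi**2 * hbar**2 / (2.0 * m * L**2)

for n in (1, 2, 3):
    print(n, closed_action(E_action(n)) / h)   # equals n
```

Since h = 2πh−, (nh)2/(8mL2) = n2π2h−2/(2mL2), so the two routes agree term by term.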
B. Other Model Problems
1. Particles in Boxes
The particle-in-a-box problem provides an important model for several relevant
chemical situations
The above 'particle in a box' model for motion in two dimensions can obviously be
extended to three dimensions or to one.
For two and three dimensions, it provides a crude but useful picture for electronic states on
surfaces or in crystals, respectively. Free motion within a spherical volume gives rise to
eigenfunctions that are used in nuclear physics to describe the motions of neutrons and
protons in nuclei. In the so-called shell model of nuclei, the neutrons and protons fill
separate s, p, d, etc. orbitals, with each type of nucleon forced to obey the Pauli principle.
These orbitals are not the same in their radial 'shapes' as the s, p, d, etc. orbitals of atoms
because, in atoms, there is an additional radial potential V(r) = -Ze2/r present. However,
their angular shapes are the same as in atomic structure because, in both cases, the potential
is independent of θ and φ. This same spherical box model has been used to describe the
orbitals of valence electrons in clusters of mono-valent metal atoms such as Csn, Cun, Nan
and their positive and negative ions. Because of the metallic nature of these species, their
valence electrons are sufficiently delocalized to render this simple model rather effective
(see T. P. Martin, T. Bergmann, H. Göhlich, and T. Lange, J. Phys. Chem. 95 , 6421
(1991)).
One-dimensional free particle motion provides a qualitatively correct picture for π-
electron motion along the pπ orbitals of a delocalized polyene. The one cartesian dimension
then corresponds to motion along the delocalized chain. In such a model, the box length L
is related to the carbon-carbon bond length R and the number N of carbon centers involved
in the delocalized network by L = (N-1)R. Below, such a conjugated network involving nine
centers is depicted. In this example, the box length would be eight times the C-C bond
length.
[Figure: conjugated π network with 9 centers involved]
The eigenstates ψn(x) and their energies En represent orbitals into which electrons are
placed. In the example case, if nine π electrons are present (e.g., as in the 1,3,5,7-
nonatetraene radical), the ground electronic state would be represented by a total
wavefunction consisting of a product in which the lowest four ψ's are doubly occupied and
the fifth ψ is singly occupied:
Ψ = ψ1αψ1βψ2αψ2βψ3αψ3βψ4αψ4βψ5α.
A product wavefunction is appropriate because the total Hamiltonian involves the kinetic
plus potential energies of nine electrons. To the extent that this total energy can be
represented as the sum of nine separate energies, one for each electron, the Hamiltonian
allows a separation of variables
H ≅ Σj H(j)
in which each H(j) describes the kinetic and potential energy of an individual electron. This
(approximate) additivity of H implies that solutions of H Ψ = E Ψ are products of solutions
to H (j) ψ(rj) = Ej ψ(rj).
The two lowest π-excited states would correspond to states of the form
Ψ* = ψ1α ψ1β ψ2α ψ2β ψ3α ψ3β ψ4α ψ5β ψ5α , and
Ψ'* = ψ1α ψ1β ψ2α ψ2β ψ3α ψ3β ψ4α ψ4β ψ6α ,
where the spin-orbitals (orbitals multiplied by α or β) appearing in the above products
depend on the coordinates of the various electrons. For example,
ψ1α ψ1β ψ2α ψ2β ψ3α ψ3β ψ4α ψ5β ψ5α
denotes
ψ1α(r1) ψ1β (r2) ψ2α (r3) ψ2β (r4) ψ3α (r5) ψ3β (r6) ψ4α (r7) ψ5β
(r8) ψ5α (r9).
The electronic excitation energies within this model would be
∆E* = π2 h2/2m [ 52/L2 - 42/L2] and
∆E'* = π2 h2/2m [ 62/L2 - 52/L2], for the two excited-state functions described
above. It turns out that this simple model of π-electron energies provides a qualitatively
correct picture of such excitation energies.
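As a rough numerical illustration (a sketch only; the C-C distance of 1.40 Å is an assumed typical value, not a datum from the text), these two excitation energies can be evaluated for the nine-center network with L = 8 RCC:

```python
import math

hbar = 1.0545718e-34   # J*s
m = 9.109e-31          # electron mass, kg
R = 1.40e-10           # assumed typical C-C distance, m (~1.40 Angstrom)
L = 8.0 * R            # box length for the 9-center network
eV = 1.602176634e-19   # J per eV

def E(n):
    """Particle-in-a-box orbital energy n^2 pi^2 hbar^2 / (2 m L^2)."""
    return n**2 * math.pi**2 * hbar**2 / (2.0 * m * L**2)

dE1 = E(5) - E(4)      # psi4 -> psi5 excitation (Psi*)
dE2 = E(6) - E(5)      # psi5 -> psi6 excitation (Psi'*)
print(dE1 / eV, dE2 / eV)   # a few eV, i.e. visible/near-UV scale
```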
This simple particle-in-a-box model does not yield orbital energies that relate to
ionization energies unless the potential 'inside the box' is specified. Choosing the value of
this potential V0 such that V0 + π2 h2/2m [ 52/L2] is equal to minus the lowest ionization
energy of the 1,3,5,7-nonatetraene radical gives energy levels (as E = V0 + π2 h2/2m [
n2/L2]) whose negatives then approximate the ionization energies.
The individual π-molecular orbitals
ψn = (2/L)1/2 sin(nπx/L)
are depicted in the figure below for a model of the 1,3,5-hexatriene π-orbital system, for
which the 'box length' L is five times the distance RCC between neighboring pairs of
carbon atoms.
[Figure: the π-molecular orbitals (2/L)1/2 sin(nπx/L), n = 1-6, for L = 5 x RCC]
In this figure, positive amplitude is denoted by the clear spheres and negative amplitude is
shown by the darkened spheres; the magnitude of the kth C-atom centered atomic orbital in
the nth π-molecular orbital is given by (2/L)1/2 sin(nπkRCC/L).
This simple model allows one to estimate spin densities at each carbon center and
provides insight into which centers should be most amenable to electrophilic or nucleophilic
attack. For example, radical attack at the C5 carbon of the nine-atom system described
earlier would be more facile for the ground state Ψ than for either Ψ* or Ψ'*. In the
former, the unpaired spin density resides in ψ5, which has non-zero amplitude at the C5
site x=L/2; in Ψ* and Ψ'*, the unpaired density is in ψ4 and ψ6, respectively, both of
which have zero density at C5. These densities reflect the values (2/L)1/2 sin(nπkRCC/L) of
the amplitudes for this case in which L = 8 x RCC for n = 5, 4, and 6, respectively.
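The amplitude pattern underlying this argument is simple to reproduce. In the sketch below (illustrative only), the carbons are indexed k = 0, ..., 8 so that C5 corresponds to k = 4 (x = L/2) with L = 8 RCC; the amplitudes (2/L)1/2 sin(nπkRCC/L) then vanish at C5 for n = 4 and n = 6 but not for n = 5:

```python
import math

L = 8.0   # box length in units of R_CC for the nine-center network

def amp(n, k):
    """Amplitude (2/L)^(1/2) sin(n pi k R_CC / L) of carbon k in the n-th MO."""
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * k / L)

# C5 is the middle carbon, k = 4, i.e. x = L/2:
for n in (4, 5, 6):
    print(n, amp(n, 4))   # zero for n = 4 and 6, nonzero for n = 5
```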
2. One Electron Moving About a Nucleus
The Hydrogenic atom problem forms the basis of much of our thinking about
atomic structure. To solve the corresponding Schrödinger equation requires separation of
the r, θ, and φ variables
[Suggested Extra Reading- Appendix B: The Hydrogen Atom Orbitals]
The Schrödinger equation for a single particle of mass µ moving in a central potential (one that depends only on the radial coordinate r) can be written as
- h−2/2µ ( ∂2/∂x2 + ∂2/∂y2 + ∂2/∂z2 ) ψ + V((x2+y2+z2)1/2) ψ = Eψ.
This equation is not separable in cartesian coordinates (x, y, z) because of the way x, y, and z appear together in the square root. However, it is separable in spherical coordinates:
- h−2/2µ { (1/r2) ∂/∂r (r2 ∂ψ/∂r) + 1/(r2Sinθ) ∂/∂θ (Sinθ ∂ψ/∂θ) + 1/(r2Sin2θ) ∂2ψ/∂φ2 } + V(r)ψ = Eψ .
Subtracting V(r)ψ from both sides of the equation, multiplying by -2µr2/h−2, and then moving
the derivatives with respect to r to the right-hand side, one obtains
(1/Sinθ) ∂/∂θ (Sinθ ∂ψ/∂θ) + (1/Sin2θ) ∂2ψ/∂φ2 = -2µr2/h−2 (E - V(r)) ψ - ∂/∂r (r2 ∂ψ/∂r) .
Notice that the right-hand side of this equation is a function of r only; it contains no θ or φ dependence. Let's call the entire right-hand side F(r)ψ to emphasize this fact.
To further separate the θ and φ dependence, we multiply by Sin2θ and subtract the
θ derivative terms from both sides to obtain
∂2ψ/∂φ2 = F(r)ψ Sin2θ - Sinθ ∂/∂θ (Sinθ ∂ψ/∂θ) .
Now we have separated the φ dependence from the θ and r dependence. If we now
substitute ψ = Φ(φ) Q(r,θ) and divide by Φ Q, we obtain
(1/Φ) ∂2Φ/∂φ2 = (1/Q) { F(r) Sin2θ Q - Sinθ ∂/∂θ (Sinθ ∂Q/∂θ) } .
Now all of the φ dependence is isolated on the left-hand side; the right-hand side contains
only r and θ dependence.
Whenever one has isolated the entire dependence on one variable, as we have done
above for the φ dependence, one can easily see that the left and right hand sides of the
equation must equal a constant. For the above example, the left-hand side contains no r or
θ dependence and the right-hand side contains no φ dependence. Because the two sides are
equal, they both must actually contain no r, θ, or φ dependence; that is, they are constant.
For the above example, we therefore can set both sides equal to a so-called
separation constant that we call -m2. It will become clear shortly why we have chosen to
express the constant in this form.
a. The Φ Equation
The resulting Φ equation reads
Φ" + m2Φ = 0
which has as its most general solution
Φ = Aeimφ + Be-imφ .
We must require the function Φ to be single-valued, which means that
Φ(φ) = Φ(2π + φ) or,
Aeimφ (1 - e2imπ) + Be-imφ (1 - e-2imπ) = 0.
This is satisfied only when the separation constant is equal to an integer m = 0, ±1, ±2, ..., and provides another example of the rule that quantization comes from the boundary conditions on the wavefunction. Here m is restricted to certain discrete values because the wavefunction must be such that when you rotate through 2π about the z-axis, you must get back what you started with.
b. The Θ Equation
Now returning to the equation in which the φ dependence was isolated from the r
and θ dependence, and rearranging the θ terms to the left-hand side, we have
(1/Sinθ) ∂/∂θ (Sinθ ∂Q/∂θ) - m2Q/Sin2θ = F(r)Q.
In this equation we have separated θ and r variations so we can further decompose the
wavefunction by introducing Q = Θ(θ) R(r) , which yields
(1/Θ) { (1/Sinθ) ∂/∂θ (Sinθ ∂Θ/∂θ) - m2Θ/Sin2θ } = F(r)R/R = -λ,
where a second separation constant, -λ, has been introduced once the r and θ dependent
terms have been separated onto the right and left hand sides, respectively.
We now can write the θ equation as
(1/Sinθ) ∂/∂θ (Sinθ ∂Θ/∂θ) - m2Θ/Sin2θ = -λΘ,
where m is the integer introduced earlier. To solve this equation for Θ, we make the
substitutions z = Cosθ and P(z) = Θ(θ), so (1-z2)1/2 = Sinθ and
∂/∂θ = (∂z/∂θ) ∂/∂z = -Sinθ ∂/∂z .
The range of values for θ was 0 ≤ θ ≤ π, so the range for z is
-1 ≤ z ≤ 1. The equation for Θ, when expressed in terms of P and z, becomes
d/dz [ (1-z2) dP/dz ] - m2P/(1-z2) + λP = 0.
Now we can look for polynomial solutions for P, because z is restricted to be less than
unity in magnitude. If m = 0, we first let
P = Σk=0∞ ak zk ,
and substitute into the differential equation to obtain
Σk=0∞ (k+2)(k+1) ak+2 zk - Σk=0∞ (k+1) k ak zk + λ Σk=0∞ ak zk = 0.
Equating like powers of z gives
ak+2 = ak (k(k+1) - λ)/((k+2)(k+1)) .
Note that for large values of k,
ak+2/ak → k2(1 + 1/k) / [ k2(1 + 2/k)(1 + 1/k) ] = 1.
Since the coefficients do not decrease with k for large k, this series will diverge for z = ±1
unless it truncates at finite order. This truncation only happens if the separation constant λ
obeys λ = l(l+1), where l is an integer. So, once again, we see that a boundary condition
(i.e., that the wavefunction be normalizable in this case) gives rise to quantization. In this
case, the values of λ are restricted to l(l+1); before, we saw that m is restricted to 0, ±1,
±2, ... .
Since this recursion relation links every other coefficient, we can choose to solve for the even and odd functions separately. Choosing a0 and then determining all of the even ak in terms of this a0, followed by rescaling all of these ak to make the function normalized, generates an even solution. Choosing a1 and determining all of the odd ak in like manner generates an odd solution.
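The truncation behavior is easy to see by iterating the recursion directly. This short sketch (illustrative, not part of the text; it uses exact rational arithmetic) builds the series coefficients for the m = 0 equation with λ = l(l+1) and shows that they vanish beyond order l:

```python
from fractions import Fraction

def legendre_coeffs(l, kmax=12):
    """Series coefficients a_k for the m = 0 theta equation, built from
    a_{k+2} = a_k (k(k+1) - lam) / ((k+2)(k+1)) with lam = l(l+1).
    Start from a_0 = 1 (even l) or a_1 = 1 (odd l)."""
    lam = l * (l + 1)
    a = [Fraction(0)] * kmax
    a[l % 2] = Fraction(1)
    for k in range(kmax - 2):
        a[k + 2] = a[k] * (k * (k + 1) - lam) / ((k + 2) * (k + 1))
    return a

a2 = legendre_coeffs(2)
print(a2[:5])   # a0 = 1, a2 = -3, and all higher terms vanish
```

With a0 = 1 this gives 1 - 3z2, proportional to the Legendre polynomial P2 quoted in the text.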
For l = 0, the series truncates after one term and results in P0(z) = 1. For l = 1 the
same thing applies and P1(z) = z. For l = 2, a2 = -6a0/2 = -3a0, so one obtains P2 = 3z2-1,
and so on. These polynomials are called Legendre polynomials.
For the more general case where m ≠ 0, one can proceed as above to generate a
polynomial solution for the Θ function. Doing so results in the following solutions:
Plm(z) = (1-z2)|m|/2 d|m|Pl(z)/dz|m| .
These functions are called Associated Legendre polynomials, and they constitute the solutions to the Θ problem for non-zero m values.
The above P and eimφ functions, when re-expressed in terms of θ and φ, yield the full angular part of the wavefunction for any centrosymmetric potential. These solutions
are usually written as Yl,m(θ,φ) = Plm(Cosθ) (2π)-1/2 exp(imφ), and are called spherical
harmonics. They provide the angular solution of the r,θ,φ Schrödinger equation for any problem in which the potential depends only on the radial coordinate. Such situations include all one-electron atoms and ions (e.g., H, He+, Li++, etc.), the rotational motion of a diatomic molecule (where the potential depends only on bond length r), the motion of a nucleon in a spherically symmetrical "box" (as occurs in the shell model of nuclei), and the scattering of two atoms (where the potential depends only on interatomic distance).
c. The R Equation
Let us now turn our attention to the radial equation, which is the only place that the explicit form of the potential appears. Using our derived results and specifying V(r) to be the coulomb potential appropriate for an electron in the field of a nucleus of charge +Ze yields:
(1/r2) d/dr (r2 dR/dr) + { (2µ/h−2) (E + Ze2/r) - l(l+1)/r2 } R = 0.
We can simplify things considerably if we choose rescaled length and energy units, because
doing so removes the factors that depend on µ, h−, and e. We introduce a new radial
coordinate ρ and a quantity σ as follows:
ρ = (-8µE/h−2)1/2 r, and σ2 = -µZ2e4/(2Eh−2) .
Notice that if E is negative, as it will be for bound states (i.e., those states with energy below that of a free electron infinitely far from the nucleus and with zero kinetic energy), ρ is real. On the other hand, if E is positive, as it will be for states that lie in the continuum, ρ will be imaginary. These two cases will give rise to qualitatively different behavior in the solutions of the radial equation developed below.
We now define a function S such that S(ρ) = R(r) and substitute S for R to obtain:
(1/ρ2) d/dρ (ρ2 dS/dρ) + { -1/4 - l(l+1)/ρ2 + σ/ρ } S = 0.
The differential operator terms can be recast in several ways using
(1/ρ2) d/dρ (ρ2 dS/dρ) = d2S/dρ2 + (2/ρ) dS/dρ = (1/ρ) d2/dρ2 (ρS) .
It is useful to keep in mind these three embodiments of the derivatives that enter into the radial kinetic energy; in various contexts it will be useful to employ one or another of these.
The strategy that we now follow is characteristic of solving second-order differential equations. We will examine the equation for S at large and small ρ values.
Having found solutions at these limits, we will use a power series in ρ to "interpolate" between these two limits.
Let us begin by examining the solution of the above equation at small values of ρ to
see how the radial functions behave at small r. As ρ → 0, the second term in the brackets
will dominate. Neglecting the other two terms in the brackets, we find that, for small
values of ρ (or r), the solution should behave like ρL, and because the function must be
normalizable, we must have L ≥ 0. Since L can be any non-negative integer, this suggests
the following more general form for S(ρ):
S(ρ) ≈ ρL e-aρ.
This form will insure that the function is normalizable, since S(ρ) → 0 as r → ∞ for all L,
as long as ρ is a real quantity. If ρ is imaginary, such a form may not be normalizable (see
below for further consequences).
Turning now to the behavior of S for large ρ, we make the substitution of S(ρ) into
the above equation and keep only the terms with the largest power of ρ (e.g., the first term
in brackets). Upon so doing, we obtain the equation
a2 ρL e-aρ = (1/4) ρL e-aρ ,
which leads us to conclude that the exponent in the large-ρ behavior of S is a = 1/2.
Having found the small- and large-ρ behaviors of S(ρ), we can take S to have the
following form to interpolate between large and small ρ-values:
S(ρ) = ρL e-ρ/2 P(ρ),
where the function P is expanded in an infinite power series in ρ as P(ρ) = Σ ak ρk.
Substituting this expression for S into the above equation, we obtain
P"ρ + P'(2L+2-ρ) + P(σ-L-1) = 0,
and then substituting the power series expansion of P and solving for the ak's we arrive at
the recursion relation
ak+1 = (k - σ + L + 1) ak / ((k+1)(k+2L+2)) .
For large k, the ratio of expansion coefficients reaches the limit ak+1/ak = 1/k, which
has the same behavior as the power series expansion of eρ. Because the power series
expansion of P describes a function that behaves like eρ for large ρ, the resulting S(ρ)
function would not be normalizable, because the e-ρ/2 factor would be overwhelmed by this
eρ dependence. Hence, the series expansion of P must truncate in order to achieve a
normalizable S function. Notice that if ρ is imaginary, as it will be if E is in the continuum,
the argument that the series must truncate to avoid an exponentially diverging function no
longer applies. Thus, we see a key difference between bound (with ρ real) and continuum
(with ρ imaginary) states. In the former case, the boundary condition of non-divergence
arises; in the latter, it does not.
To truncate at a polynomial of order n', we must have n' - σ + L + 1 = 0. This
implies that the quantity σ introduced previously is restricted to σ = n' + L + 1, which is
certainly an integer; let us call this integer n. If we label states in order of increasing n =
1, 2, 3, ..., we see that doing so is consistent with specifying a maximum order (n') in the
P(ρ) polynomial, n' = 0, 1, 2, ..., after which the l-value can run from l = 0, in steps of unity,
up to l = n - 1.
Substituting the integer n for σ, we find that the energy levels are quantized
because σ is quantized (equal to n):
E = -µZ2e4/(2h−2n2) and ρ = Zr/(aon) .
Here, the length ao is the so-called Bohr radius ao = h−2/(µe2); it appears once the above E-expression is substituted into the equation for ρ. Using the recursion equation to solve for the polynomial's coefficients ak for any choice of n and l quantum numbers generates a so-called Laguerre polynomial, Pn-L-1(ρ). It contains powers of ρ from zero through n-L-1.
This energy quantization does not arise for states lying in the continuum, because the
condition that the expansion of P(ρ) terminate does not arise. The solutions of the radial
equation appropriate to these scattering states (which relate to the scattering motion of an
electron in the field of a nucleus of charge Z) are treated on p. 90 of EWK.
In summary, separation of variables has been used to solve the full r,θ,φ Schrödinger equation for one electron moving about a nucleus of charge Z. The θ and φ solutions are the spherical harmonics YL,m(θ,φ). The bound-state radial solutions
Rn,L(r) = S(ρ) = ρL e-ρ/2 Pn-L-1(ρ)
depend on the n and L quantum numbers and are given in terms of the Laguerre polynomials (see EWK for tabulations of these polynomials).
d. Summary
To summarize, the quantum numbers l and m arise through boundary conditions
requiring that ψ(θ) be normalizable (i.e., not diverge) and ψ(φ) = ψ(φ+2π). In the texts by
Atkins, EWK, and McQuarrie the differential equations obeyed by the θ and φ components
of Yl,m are solved in more detail and properties of the solutions are discussed. This
differential equation involves the three-dimensional Schrödinger equation's angular kinetic
energy operator. That is, the angular part of the above Hamiltonian is equal to L2/2mr2,
where L2 is the square of the total angular momentum operator for the electron.
The radial equation, which is the only place the potential energy enters, is found to
possess both bound-states (i.e., states whose energies lie below the asymptote at which the
potential vanishes and the kinetic energy is zero) and continuum states lying energetically
above this asymptote. The resulting hydrogenic wavefunctions (angular and radial) and
energies are summarized in Appendix B for principal quantum numbers n ranging from 1
to 3 and in Pauling and Wilson for n up to 5.
There are both bound and continuum solutions to the radial Schrödinger equation
for the attractive coulomb potential because, at energies below the asymptote the potential
confines the particle between r=0 and an outer turning point, whereas at energies above the
asymptote, the particle is no longer confined by an outer turning point (see the figure
below).
[Figure: the attractive coulomb potential -Ze2/r vs. r, showing the quantized bound states below the E = 0 asymptote and a continuum state above it]
The solutions of this one-electron problem form the qualitative basis for much of
atomic and molecular orbital theory. For this reason, the reader is encouraged to use
Appendix B to gain a firmer understanding of the nature of the radial and angular parts of
these wavefunctions. The orbitals that result are labeled by n, l, and m quantum numbers
for the bound states and by l and m quantum numbers and the energy E for the continuum
states. Much as the particle-in-a-box orbitals are used to qualitatively describe π- electrons
in conjugated polyenes, these so-called hydrogen-like orbitals provide qualitative
descriptions of orbitals of atoms with more than a single electron. By introducing the
concept of screening as a way to represent the repulsive interactions among the electrons of
an atom, an effective nuclear charge Zeff can be used in place of Z in the ψn,l,m and En,l to
generate approximate atomic orbitals to be filled by electrons in a many-electron atom. For
example, in the crudest approximation of a carbon atom, the two 1s electrons experience
the full nuclear attraction so Zeff=6 for them, whereas the 2s and 2p electrons are screened
by the two 1s electrons, so Zeff= 4 for them. Within this approximation, one then occupies
two 1s orbitals with Z=6, two 2s orbitals with Z=4 and two 2p orbitals with Z=4 in
forming the full six-electron wavefunction of the lowest-energy state of carbon.
3. Rotational Motion For a Rigid Diatomic Molecule
This Schrödinger equation relates to the rotation of diatomic and linear polyatomic
molecules. It also arises when treating the angular motions of electrons in any spherically
symmetric potential
A diatomic molecule with fixed bond length R rotating in the absence of any
external potential is described by the following Schrödinger equation:
- h2/2µ {(R2sinθ)-1∂/∂θ (sinθ ∂/∂θ) + (R2sin2θ)-1 ∂2/∂φ2 } ψ = E ψ
or
L2ψ/2µR2 = E ψ.
The angles θ and φ describe the orientation of the diatomic molecule's axis relative to a
laboratory-fixed coordinate system, and µ is the reduced mass of the diatomic molecule
µ=m1m2/(m1+m2). The differential operators can be seen to be exactly the same as those
that arose in the hydrogen-like-atom case, and, as discussed above, these θ and φ differential operators are identical to the L2 angular momentum operator whose general
properties are analyzed in Appendix G. Therefore, the same spherical harmonics that
served as the angular parts of the wavefunction in the earlier case now serve as the entire
wavefunction for the so-called rigid rotor: ψ = YJ,M(θ,φ). As detailed later in this text, the
eigenvalues corresponding to each such eigenfunction are given as:
EJ = h2 J(J+1)/(2µR2) = B J(J+1)
and are independent of M. Thus each energy level is labeled by J and is 2J+1-fold
degenerate (because M ranges from -J to J). The so-called rotational constant B (defined as
h2/2µR2) depends on the molecule's bond length and reduced mass. Spacings between
successive rotational levels (which are of spectroscopic relevance because angular
momentum selection rules often restrict ∆J to +1, 0, and -1) are given by
∆E = B (J+1)(J+2) - B J(J+1) = 2B(J+1).
These energy spacings are of relevance to microwave spectroscopy which probes the
rotational energy levels of molecules.
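A brief numerical sketch (the masses and bond length below are assumed round values for illustration, not data for any particular molecule) displays the 2B(J+1) spacing pattern and the 2J+1 degeneracies:

```python
hbar = 1.0545718e-34   # J*s
amu = 1.66054e-27      # kg per atomic mass unit

m1, m2 = 1.0 * amu, 19.0 * amu   # assumed round masses
mu = m1 * m2 / (m1 + m2)         # reduced mass
R = 1.0e-10                      # assumed bond length, m

B = hbar**2 / (2.0 * mu * R**2)  # rotational constant, in energy units

def E_rot(J):
    """Rigid-rotor energy B J(J+1)."""
    return B * J * (J + 1)

spacings = [E_rot(J + 1) - E_rot(J) for J in range(4)]
print([s / (2.0 * B) for s in spacings])   # ratios 1, 2, 3, 4: spacing = 2B(J+1)
print([2 * J + 1 for J in range(4)])       # degeneracies 1, 3, 5, 7
```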
The rigid rotor provides the most commonly employed approximation to the
rotational energies and wavefunctions of linear molecules. As presented above, the model
restricts the bond length to be fixed. Vibrational motion of the molecule gives rise to
changes in R which are then reflected in changes in the rotational energy levels. The
coupling between rotational and vibrational motion gives rise to rotational B constants that
depend on vibrational state, as well as dynamical couplings, called centrifugal distortions,
that cause the total ro-vibrational energy of the molecule to depend on rotational and
vibrational quantum numbers in a non-separable manner.
4. Harmonic Vibrational Motion
This Schrödinger equation forms the basis for our thinking about bond stretching and angle
bending vibrations as well as collective phonon motions in solids
The radial motion of a diatomic molecule in its lowest (J=0) rotational level can be
described by the following Schrödinger equation:
- h2/2µ r-2∂/∂r (r2∂/∂r) ψ +V(r) ψ = E ψ,
where µ is the reduced mass µ = m1m2/(m1+m2) of the two atoms.
By substituting ψ= F(r)/r into this equation, one obtains an equation for F(r) in which the
differential operators appear to be less complicated:
- h2/2µ d2F/dr2 + V(r) F = E F.
This equation is exactly the same as the equation seen above for the radial motion of the
electron in the hydrogen-like atoms except that the reduced mass µ replaces the electron
mass m and the potential V(r) is not the coulomb potential.
If the potential is approximated as a quadratic function of the bond displacement x =
r-re expanded about the point at which V is minimum:
V = 1/2 k(r-re)2,
the resulting harmonic-oscillator equation can be solved exactly. Because the potential V
grows without bound as x approaches ∞ or -∞, only bound-state solutions exist for this
model problem; that is, the motion is confined by the nature of the potential, so no
continuum states exist.
In solving the radial differential equation for this potential (see Chapter 5 of
McQuarrie), the large-r behavior is first examined. For large-r, the equation reads:
d2F/dx2 = 1/2 k x2 (2µ/h2) F,
where x = r-re is the bond displacement away from equilibrium. Defining ξ= (µk/h2)1/4 x
as a new scaled radial coordinate allows the solution of the large-r equation to be written as:
Flarge-r = exp(-ξ2/2).
The general solution to the radial equation is then taken to be of the form:
F = exp(-ξ2/2) Σn=0∞ ξn Cn ,
where the Cn are coefficients to be determined. Substituting this expression into the full
radial equation generates a set of recursion equations for the Cn amplitudes. As in the
solution of the hydrogen-like radial equation, the series described by these coefficients is
divergent unless the energy E happens to equal specific values. It is this requirement that
the wavefunction not diverge so it can be normalized that yields energy quantization. The
energies of the states that arise are given by:
En = h (k/µ)1/2 (n+1/2),
and the eigenfunctions are given in terms of the so-called Hermite polynomials Hn(y) as
follows:
ψn(x) = (n! 2n)-1/2 (α/π)1/4 exp(-αx2/2) Hn(α1/2 x),
where α =(kµ/h2)1/2. Within this harmonic approximation to the potential, the vibrational
energy levels are evenly spaced:
∆E = En+1 - En = h (k/µ)1/2 .
In experimental data such evenly spaced energy level patterns are seldom seen; most
commonly, one finds spacings En+1 - En that decrease as the quantum number n increases.
In such cases, one says that the progression of vibrational levels displays anharmonicity.
Because the Hn are odd or even functions of x (depending on whether n is odd or
even), the wavefunctions ψn(x) are odd or even. This splitting of the solutions into two
distinct classes is an example of the effect of symmetry; in this case, the symmetry is
caused by the symmetry of the harmonic potential with respect to reflection through the
origin along the x-axis. Throughout this text, many symmetries will arise; in each case,
symmetry properties of the potential will cause the solutions of the Schrödinger equation to
be decomposed into various symmetry groupings. Such symmetry decompositions are of
great use because they provide additional quantum numbers (i.e., symmetry labels) by
which the wavefunctions and energies can be labeled.
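The even/odd character of the ψn can be checked directly from the standard Hermite recursion H0 = 1, H1 = 2y, Hn+1 = 2yHn - 2nHn-1. This sketch (illustrative only; α is set to 1 purely for convenience) verifies that ψn(-x) = (-1)n ψn(x):

```python
import math

def hermite(n, y):
    """Hermite polynomial H_n(y) via the recursion
    H_0 = 1, H_1 = 2y, H_{n+1} = 2 y H_n - 2 n H_{n-1}."""
    h0, h1 = 1.0, 2.0 * y
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * y * h1 - 2.0 * k * h0
    return h1

def psi(n, x, alpha=1.0):
    """Harmonic-oscillator eigenfunction (alpha = 1 chosen for illustration)."""
    norm = (alpha / math.pi)**0.25 / math.sqrt(math.factorial(n) * 2.0**n)
    return norm * math.exp(-alpha * x**2 / 2.0) * hermite(n, math.sqrt(alpha) * x)

# Even n gives an even function, odd n an odd function:
x = 0.7
for n in range(4):
    print(n, psi(n, x), psi(n, -x))
```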
The harmonic oscillator energies and wavefunctions comprise the simplest
reasonable model for vibrational motion. Vibrations of a polyatomic molecule are often
characterized in terms of individual bond-stretching and angle-bending motions each of
which is, in turn, approximated harmonically. This results in a total vibrational
wavefunction that is written as a product of functions one for each of the vibrational
coordinates.
Two of the most severe limitations of the harmonic oscillator model, the lack of
anharmonicity (i.e., non-uniform energy level spacings) and lack of bond dissociation,
result from the quadratic nature of its potential. By introducing model potentials that allow
for proper bond dissociation (i.e., that do not increase without bound as r → ∞), the major
shortcomings of the harmonic oscillator picture can be overcome. The so-called Morse
potential (see the figure below)
V(r) = De (1 - exp(-a(r-re)))²,
is often used in this regard.
[Figure: the Morse potential, plotted as energy versus internuclear distance.]
Here, De is the bond dissociation energy, re is the equilibrium bond length, and a is a
constant that characterizes the 'steepness' of the potential and determines the vibrational
frequencies. The advantage of using the Morse potential to improve upon harmonic-
oscillator-level predictions is that its energy levels and wavefunctions are also known
exactly. The energies are given in terms of the parameters of the potential as follows:
En = h(k/µ)^1/2 { (n+1/2) - (n+1/2)² h(k/µ)^1/2/(4De) },
where the force constant k is k = 2De a². The Morse potential supports both bound states
(those lying below the dissociation threshold for which vibration is confined by an outer
turning point) and continuum states lying above the dissociation threshold. Its degree of
anharmonicity is governed by the ratio of the harmonic energy h(k/µ)1/2 to the dissociation
energy De.
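A numerical sketch (Python, reduced units with h = µ = 1; the values of De and a are assumed for illustration) shows the contrast with the harmonic case: the Morse spacings shrink steadily with n, and the ladder of bound levels terminates as the energies approach De:

```python
import math

def morse_levels(De, a, mu, hbar=1.0):
    """Bound Morse energies E_n = hbar*w*[(n+1/2) - (n+1/2)^2 * hbar*w/(4*De)],
    with w = sqrt(k/mu) and k = 2*De*a^2; collect levels while spacings stay positive."""
    k = 2.0 * De * a ** 2
    w = math.sqrt(k / mu)
    levels = []
    n = 0
    while True:
        e = hbar * w * ((n + 0.5) - (n + 0.5) ** 2 * hbar * w / (4.0 * De))
        if levels and e <= levels[-1]:   # spacing no longer positive: past the top of the well
            break
        levels.append(e)
        n += 1
    return levels

# Assumed reduced-unit parameters, chosen only for illustration:
levels = morse_levels(De=10.0, a=1.0, mu=1.0)
gaps = [e2 - e1 for e1, e2 in zip(levels, levels[1:])]
# gaps decrease with n: this is the anharmonicity the harmonic model lacks.
```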
III. The Physical Relevance of Wavefunctions, Operators and Eigenvalues
Having gained experience on the application of the Schrödinger equation to several
of the more important model problems of chemistry, it is time to return to the issue of how
the wavefunctions, operators, and energies relate to experimental reality.
In mastering the sections that follow, the reader should keep in mind that:
i. It is the molecular system that possesses a set of characteristic wavefunctions and energy
levels, but
ii. It is the experimental measurement that determines the nature by which these energy
levels and wavefunctions are probed.
This separation between the 'system' with its intrinsic set of energy levels and
'observation' or 'experiment' with its characteristic interaction with the system forms an
important point of view used by quantum mechanics. It gives rise to a point of view in
which the measurement itself can 'prepare' the system in a wavefunction Ψ that need not be
any single eigenstate but can still be represented as a combination of the complete set of
eigenstates. For the beginning student of quantum mechanics, these aspects of quantum
mechanics are among the more confusing. If it helps, one should rest assured that all of the
mathematical and 'rule' structure of this subject was created to permit the predictions of
quantum mechanics to replicate what has been observed in laboratory experiments.
Note to the Reader :
Before moving on to the next section, it would be very useful to work some of the
Exercises and Problems. In particular, Exercises 3, 5, and 12 as well as problems 6, 8, and
11 provide insight that would help when the material of the next section is studied. The
solution to Problem 11 is used throughout this section to help illustrate the concepts
introduced here.
A. The Basic Rules and Relation to Experimental Measurement
Quantum mechanics has a set of 'rules' that link operators, wavefunctions, and
eigenvalues to physically measurable properties. These rules have been formulated not in
some arbitrary manner nor by derivation from some higher subject. Rather, the rules were
designed to allow quantum mechanics to mimic the experimentally observed facts as
revealed in mother nature's data. The extent to which these rules seem difficult to
understand usually reflects the presence of experimental observations that do not fit in with
our common experience base.
[Suggested Extra Reading- Appendix C: Quantum Mechanical Operators and Commutation]
The structure of quantum mechanics (QM) relates the wavefunction Ψ and
operators F to the 'real world' in which experimental measurements are performed through
a set of rules (Dirac's text is an excellent source of reading concerning the historical
development of these fundamentals). Some of these rules have already been introduced
above. Here, they are presented in total as follows:
1. The time evolution of the wavefunction Ψ is determined by solving the time-dependent
Schrödinger equation (see pp 23-25 of EWK for a rationalization of how the Schrödinger
equation arises from the classical equation governing waves, Einstein's E=hν, and
deBroglie's postulate that λ=h/p)
ih∂Ψ/∂t = HΨ,
where H is the Hamiltonian operator corresponding to the total (kinetic plus potential)
energy of the system. For an isolated system (e.g., an atom or molecule not in contact with
any external fields), H consists of the kinetic and potential energies of the particles
comprising the system. To describe interactions with an external field (e.g., an
electromagnetic field, a static electric field, or the 'crystal field' caused by surrounding
ligands), additional terms are added to H to properly account for the system-field
interactions.
If H contains no explicit time dependence, then separation of space and time
variables can be performed on the above Schrödinger equation by writing Ψ = ψ exp(-itE/h) to give
Hψ=Eψ.
In such a case, the time dependence of the state is carried in the phase factor exp(-itE/h); the
spatial dependence appears in ψ(qj).
The so-called time-independent Schrödinger equation Hψ=Eψ must be solved to
determine the physically measurable energies Ek and wavefunctions ψk of the system. The
most general solution to the full Schrödinger equation ih∂Ψ/∂t = HΨ is then given by
applying exp(-iHt/h) to the wavefunction at some initial time (t=0) Ψ=Σk Ckψk to obtain
Ψ(t)=Σk Ckψk exp(-itEk/h). The relative amplitudes Ck are determined by knowledge of
the state at the initial time; this depends on how the system has been prepared in an earlier
experiment. Just as Newton's laws of motion do not fully determine the time evolution of a
classical system (i.e., the coordinates and momenta must be known at some initial time),
the Schrödinger equation must be accompanied by initial conditions to fully determine
Ψ(qj,t).
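For a finite-dimensional model, this prescription can be carried out explicitly. In the Python sketch below (hbar = 1; a random symmetric matrix stands in for H, so all numbers are illustrative assumptions), the initial state is expanded in the eigenvectors of H and each component is propagated with its phase factor exp(-iEkt); the populations |Ck|² are constants of the motion:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
H = A + A.T                                  # toy Hermitian (real symmetric) Hamiltonian

E, V = np.linalg.eigh(H)                     # energies E_k; eigenvectors psi_k as columns of V
psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)   # state prepared at t = 0
C = V.conj().T @ psi0                        # C_k = <psi_k | Psi(t=0)>

def propagate(t):
    """Psi(t) = sum_k C_k psi_k exp(-i E_k t), with hbar = 1."""
    return V @ (C * np.exp(-1j * E * t))

psi_t = propagate(1.7)
assert abs(np.linalg.norm(psi_t) - 1.0) < 1e-12            # the norm is conserved
assert np.allclose(np.abs(V.conj().T @ psi_t) ** 2, np.abs(C) ** 2)  # |C_k|^2 fixed in time
```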
Example :
Using the results of Problem 11 of this chapter to illustrate, the sudden ionization of N2 in
its v=0 vibrational state to generate N2+ produces a vibrational wavefunction
Ψ0 = (α/π)^1/4 e^(-αx²/2) = 3.53333 Å^(-1/2) e^(-(244.83 Å^(-2))(r - 1.09769 Å)²)
that was created by the fast ionization of N2. Subsequent to ionization, this N2 function is
not an eigenfunction of the new vibrational Schrödinger equation appropriate to N2+. As a
result, this function will time evolve under the influence of the N2+ Hamiltonian.
The time evolved wavefunction, according to this first rule, can be expressed in terms of
the vibrational functions {Ψv} and energies {Ev} of the N2+ ion as
Ψ (t) = Σv Cv Ψv exp(-i Ev t/h).
The amplitudes Cv, which reflect the manner in which the wavefunction is prepared (at
t=0), are determined by projecting out the component of each Ψv in the function Ψ at t=0. To
do this, one uses
∫ Ψv'* Ψ(t=0) dτ = Cv',
which is easily obtained by multiplying the above summation by Ψv'*, integrating, and
using the orthonormality of the {Ψv} functions.
For the case at hand, this result shows that by forming integrals involving
products of the N2 v=0 function Ψ(t=0)
Ψ0 = (α/π)^1/4 e^(-αx²/2) = 3.53333 Å^(-1/2) e^(-(244.83 Å^(-2))(r - 1.09769 Å)²)
and various N2+ vibrational functions Ψv, one can determine how Ψ will evolve in time
and the amplitudes of all {Ψv} that it will contain. For example, the N2 v=0 function, upon
ionization, contains the following amount of the N2+ v=0 function:
C0 = ∫ Ψ0*(N2+) Ψ0(N2) dτ
= ∫_{-∞}^{+∞} 3.47522 e^(-229.113(r - 1.11642)²) 3.53333 e^(-244.83(r - 1.09769)²) dr.
As demonstrated in Problem 11, this integral reduces to 0.959. This means that the N2 v=0
state, subsequent to sudden ionization, can be represented as containing a |0.959|² = 0.92
fraction of the v=0 state of the N2+ ion.
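This overlap can be verified by direct numerical quadrature of the two Gaussians quoted above (Python; the grid range is an assumption chosen to comfortably span both functions):

```python
import numpy as np

# N2+ v=0 : 3.47522 exp(-229.113 (r - 1.11642)^2)   (r in Angstroms)
# N2  v=0 : 3.53333 exp(-244.83  (r - 1.09769)^2)
r = np.linspace(0.8, 1.4, 200001)        # assumed grid spanning both Gaussians
dr = r[1] - r[0]
psi_ion = 3.47522 * np.exp(-229.113 * (r - 1.11642) ** 2)
psi_neu = 3.53333 * np.exp(-244.83 * (r - 1.09769) ** 2)

C0 = np.sum(psi_ion * psi_neu) * dr      # overlap <v=0(N2+) | v=0(N2)>
print(round(C0, 3), round(C0 ** 2, 2))   # prints: 0.959 0.92
```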
This example relates to the well-known Franck-Condon principle of spectroscopy in
which squares of 'overlaps' between the initial electronic state's vibrational wavefunction
and the final electronic state's vibrational wavefunctions allow one to estimate the
probabilities of populating various final-state vibrational levels.
In addition to initial conditions, solutions to the Schrödinger equation must obey
certain other constraints in form. They must be continuous functions of all of their spatial
coordinates and must be single-valued; these properties allow Ψ*Ψ to be interpreted as a
probability density (i.e., the probability of finding a particle at some position can not be
multivalued nor can it be 'jerky' or discontinuous). The derivative of the wavefunction
must also be continuous except at points where the potential function undergoes an infinite
jump (e.g., at the wall of an infinitely high and steep potential barrier). This condition
relates to the fact that the momentum must be continuous except at infinitely 'steep'
potential barriers where the momentum undergoes a 'sudden' reversal.
2. An experimental measurement of any quantity (whose corresponding operator is F) must
result in one of the eigenvalues fj of the operator F. These eigenvalues are obtained by
solving
Fφj =fj φj,
where the φj are the eigenfunctions of F. Once the measurement of F is made, for that sub-
population of the experimental sample found to have the particular eigenvalue fj, the
wavefunction becomes φj.
The equation Hψk=Ekψk is but a special case; it is an especially important case
because much of the machinery of modern experimental chemistry is directed at placing the
system in a particular energy quantum state by detecting its energy (e.g., by spectroscopic
means).
The reader is strongly urged to also study Appendix C to gain a more detailed and
illustrated treatment of this and subsequent rules of quantum mechanics.
3. The operators F corresponding to all physically measurable quantities are Hermitian; this
means that their matrix representations obey (see Appendix C for a description of the 'bra'
| > and 'ket' < | notation used below):
<χj|F|χk> = <χk|F|χj>*= <Fχj|χk>
in any basis {χj} of functions appropriate for the action of F (i.e., functions of the
variables on which F operates). As expressed through equality of the first and third
elements above, Hermitian operators are often said to 'obey the turn-over rule'. This means
that F can be allowed to operate on the function to its right or on the function to its left if F
is Hermitian.
Hermiticity assures that the eigenvalues {fj} are all real, that eigenfunctions {χj}
having different eigenvalues are orthogonal and can be normalized <χj|χk>=δj,k, and that
eigenfunctions having the same eigenvalues can be made orthonormal (these statements are
proven in Appendix C).
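These consequences of Hermiticity are easy to check in matrix form. The sketch below builds a random Hermitian matrix (purely illustrative, not tied to any particular physical operator) and confirms that its eigenvalues are real and its eigenvectors orthonormal:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
F = A + A.conj().T                        # any matrix plus its adjoint is Hermitian

assert np.allclose(F, F.conj().T)         # the 'turn-over rule' in matrix form: F_jk = F_kj*

evals, evecs = np.linalg.eigh(F)
assert np.allclose(evals.imag, 0.0)       # eigenvalues of a Hermitian matrix are real
assert np.allclose(evecs.conj().T @ evecs, np.eye(4))   # eigenvectors can be made orthonormal
```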
4. Once a particular value fj is observed in a measurement of F, this same value will be
observed in all subsequent measurements of F as long as the system remains undisturbed
by measurements of other properties or by interactions with external fields. In fact, once fj
has been observed, the state of the system becomes an eigenstate of F (if it already was, it
remains unchanged):
FΨ = fjΨ.
This means that the measurement process itself may interfere with the state of the system
and even determines what that state will be once the measurement has been made.
Example:
Again consider the v=0 N2 ionization treated in Problem 11 of this chapter. If,
subsequent to ionization, the N2+ ions produced were probed to determine their internal
vibrational state, a fraction of the sample equal to |<Ψ(N2; v=0) | Ψ(N2+; v=0)>|2 = 0.92
would be detected in the v=0 state of the N2+ ion. For this sub-sample, the vibrational
wavefunction becomes, and remains from then on,
Ψ (t) = Ψ(N2+; v=0) exp(-i t E+v=0/ h),
where E+v=0 is the energy of the N2+ ion in its v=0 state. If, at some later time, this sub-
sample is again probed, all species will be found to be in the v=0 state.
5. The probability Pk of observing a particular value fk when F is measured, given that the
system wavefunction is Ψ prior to the measurement, is given by expanding Ψ in terms of
the complete set of normalized eigenstates of F
Ψ=Σ j |φj> <φj|Ψ>
and then computing Pk =|<φk|Ψ>|2 . For the special case in which Ψ is already one of the
eigenstates of F (i.e., Ψ=φk), the probability of observing fj reduces to Pj =δj,k. The set
of numbers Cj = <φj|Ψ> are called the expansion coefficients of Ψ in the basis of the {φj}.
These coefficients, when collected together in all possible products as
Dj,i = Ci* Cj form the so-called density matrix Dj,i of the wavefunction Ψ within the {φj}
basis.
Example:
If F is the operator for momentum in the x-direction and Ψ(x,t) is the wave
function for x as a function of time t, then the above expansion corresponds to a Fourier
transform of Ψ
Ψ(x,t) = 1/2π ∫ exp(ikx) ∫ exp(-ikx') Ψ(x',t) dx' dk.
Here (1/2π)1/2 exp(ikx) is the normalized eigenfunction of F =-ih∂/∂x corresponding to
momentum eigenvalue hk. These momentum eigenfunctions are orthonormal:
1/2π ∫ exp(-ikx) exp(ik'x) dx = δ(k-k'),
and they form a complete set of functions in x-space
1/2π ∫ exp(-ikx) exp(ikx') dk = δ(x-x')
because F is a Hermitian operator. The function ∫ exp(-ikx') Ψ(x',t) dx' is called the
momentum-space transform of Ψ(x,t) and is denoted Ψ(k,t); it gives, when used as
Ψ*(k,t)Ψ(k,t), the probability density for observing momentum values hk at time t.
Another Example:
Take the initial ψ to be a superposition state of the form
ψ = a (2p0 + 2p-1 - 2p1) + b (3p0 - 3p-1),
where the a and b are amplitudes that describe the admixture of 2p and 3p functions in this
wavefunction. Then:
a. If L2 were measured, the value 2h2 would be observed with probability 3 |a|2 + 2 |b|2 =
1, since all of the functions in ψ are p-type orbitals. After said measurement, the
wavefunction would still be this same ψ because this entire ψ is an eigenfunction of L2 .
b. If Lz were measured for this
ψ = a (2p0 + 2p-1 - 2p1) + b (3p0 - 3p-1),
the values 0h, 1h, and -1h would be observed (because these are the only functions with
non-zero Cm coefficients for the Lz operator) with respective probabilities |a|² + |b|², |-a|²,
and |a|² + |-b|².
c. After Lz were measured, if the sub-population for which -1h had been detected were
subjected to measurement of L2 the value 2h2 would certainly be found because the new
wavefunction
ψ' = {a 2p-1 - b 3p-1} (|a|² + |b|²)^(-1/2)
is still an eigenfunction of L2 with this eigenvalue.
d. Again after Lz were measured, if the sub-population for which -1h
had been observed and for which the wavefunction is now
ψ' = {a 2p-1 - b 3p-1} (|a|² + |b|²)^(-1/2)
were subjected to measurement of the energy (through the Hamiltonian operator), two
values would be found. With probability
|a|² (|a|² + |b|²)^(-1) the energy of the 2p-1 orbital would be observed; with probability
|b|² (|a|² + |b|²)^(-1), the energy of the 3p-1 orbital would be observed.
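The bookkeeping in this example can be reproduced numerically. In the Python sketch below, the five basis functions 2p0, 2p-1, 2p+1, 3p0, 3p-1 are represented by unit vectors; the values a = 0.4 and b = 0.3 are assumed for illustration, and the state is then normalized so that 3|a|² + 2|b|² = 1:

```python
import numpy as np

# Basis ordering: [2p0, 2p-1, 2p+1, 3p0, 3p-1]; m value of each basis function:
m = np.array([0, -1, +1, 0, -1])
a, b = 0.4, 0.3                            # assumed amplitudes, normalized below
c = np.array([a, a, -a, b, -b])            # coefficients of psi in this basis
c = c / np.linalg.norm(c)                  # enforces 3|a|^2 + 2|b|^2 = 1

# L^2 measurement: every basis function is a p orbital, so 2h^2 is seen with certainty:
assert abs(np.sum(c ** 2) - 1.0) < 1e-12

# Lz measurement: probability of m*h is the summed squared amplitude with that m.
probs = {mv: float(np.sum(c[m == mv] ** 2)) for mv in (-1, 0, +1)}
# probs[0] = |a|^2 + |b|^2, probs[+1] = |a|^2, probs[-1] = |a|^2 + |b|^2
```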
If Ψ is a function of several variables (e.g., when Ψ describes more than one
particle in a composite system), and if F is a property that depends on a subset of these
variables (e.g., when F is a property of one of the particles in the composite system), then
the expansion Ψ=Σ j |φj> <φj|Ψ> is viewed as relating only to Ψ's dependence on the
subset of variables related to F. In this case, the integrals <φk|Ψ> are carried out over only
these variables; thus the probabilities Pk =|<φk|Ψ>|2 depend parametrically on the remaining
variables.
Example:
Suppose that Ψ(r,θ) describes the radial (r) and angular (θ) motion of a diatomic
molecule constrained to move on a planar surface. If an experiment were performed to
measure the component of the rotational angular momentum of the diatomic molecule
perpendicular to the surface (Lz = -ih ∂/∂θ), only values equal to mh (m = 0, ±1, ±2, ±3, ...)
could be observed, because these are the eigenvalues of Lz:
Lz φm= -ih ∂/∂θ φm = mh φm, where
φm = (1/2π)1/2 exp(imθ).
The quantization of Lz arises because the eigenfunctions φm(θ) must be periodic in θ:
φ(θ+2π) = φ(θ).
Such quantization (i.e., constraints on the values that physical properties can realize) will
be seen to occur whenever the pertinent wavefunction is constrained to obey a so-called
boundary condition (in this case, the boundary condition is φ(θ+2π) = φ(θ)).
Expanding the θ-dependence of Ψ in terms of the φm
Ψ =Σm <φm|Ψ> φm(θ)
allows one to write the probability that mh is observed if the angular momentum Lz is
measured as follows:
Pm = |<φm|Ψ>|2 = | ∫φm*(θ) Ψ(r,θ) dθ |2.
If one is interested in the probability that mh will be observed when Lz is measured, regardless
of what bond length r is involved, then it is appropriate to integrate this expression over the
r-variable about which one does not care. This, in effect, sums contributions from all r-
values to obtain a result that is independent of the r variable. As a result, the probability
reduces to:
Pm = ∫ φm*(θ') {∫ Ψ*(r,θ') Ψ(r,θ) r dr} φm(θ) dθ' dθ,
which is simply the above result integrated over r with a volume element r dr for the two-
dimensional motion treated here.
If, on the other hand, one were able to measure Lz values when r is equal to some specified
bond length (this is only a hypothetical example; there is no known way to perform such a
measurement), then the probability would equal:
Pm r dr = r dr∫ φm*(θ')Ψ*(r,θ')Ψ(r,θ)φm(θ)dθ' dθ = |<φm|Ψ>|2 r dr.
6. Two or more properties F,G, J whose corresponding Hermitian operators F, G, J
commute
FG-GF=FJ-JF=GJ-JG= 0
have complete sets of simultaneous eigenfunctions (the proof of this is treated in
Appendix C). This means that the set of functions that are eigenfunctions of one of the
operators can be formed into a set of functions that are also eigenfunctions of the others:
Fφj=fjφj ==> Gφj=gjφj ==> Jφj=jjφj.
Example:
The px, py and pz orbitals are eigenfunctions of the L2 angular momentum operator
with eigenvalues equal to L(L+1) h2 = 2 h2. Since L2 and Lz commute and act on the same
(angle) coordinates, they possess a complete set of simultaneous eigenfunctions.
Although the px, py and pz orbitals are not eigenfunctions of Lz , they can be
combined to form three new orbitals: p0 = pz,
p1= 2-1/2 [px + i py], and p-1= 2-1/2 [px - i py] that are still eigenfunctions of L2 but are
now eigenfunctions of Lz also (with eigenvalues 0h, 1h, and -1h, respectively).
It should be mentioned that if two operators do not commute, they may still have
some eigenfunctions in common, but they will not have a complete set of simultaneous
eigenfunctions. For example, the Lz and Lx components of the angular momentum operator
do not commute; however, a wavefunction with L=0 (i.e., an S-state) is an eigenfunction
of both operators.
The fact that two operators commute is of great importance. It means that once a
measurement of one of the properties is carried out, subsequent measurement of that
property or of any of the other properties corresponding to mutually commuting operators
can be made without altering the system's value of the properties measured earlier. Only
subsequent measurement of another property whose operator does not commute with F,
G, or J will destroy precise knowledge of the values of the properties measured earlier.
Example:
Assume that an experiment has been carried out on an atom to measure its total
angular momentum L2. According to quantum mechanics, only values equal to L(L+1) h2
will be observed. Further assume, for the particular experimental sample subjected to
observation, that values of L2 equal to 2 h2 and 0 h2 were detected in relative amounts of
64 % and 36 % , respectively. This means that the atom's original wavefunction ψ could be
represented as:
ψ = 0.8 P + 0.6 S,
where P and S represent the P-state and S-state components of ψ. The squares of the
amplitudes 0.8 and 0.6 give the 64 % and 36 % probabilities mentioned above.
Now assume that a subsequent measurement of the component of angular
momentum along the lab-fixed z-axis is to be measured for that sub-population of the
original sample found to be in the P-state. For that population, the wavefunction is now a
pure P-function:
ψ' = P.
However, at this stage we have no information about how much of this ψ' is of m = 1, 0,
or -1 character, nor do we know how much 2p, 3p, 4p, ... np character this state contains.
Because the property corresponding to the operator Lz is about to be measured, we
express the above ψ' in terms of the eigenfunctions of Lz:
ψ' = P = Σm=1,0,-1 C'm Pm.
When the measurement of Lz is made, the values 1 h, 0 h, and -1 h will be observed with
probabilities given by |C'1|2, |C'0|2, and |C'-1|2, respectively. For that sub-population found
to have, for example, Lz equal to -1 h, the wavefunction then becomes
ψ'' = P-1.
At this stage, we do not know how much of 2p-1, 3p-1, 4p-1, ... np-1 this wavefunction
contains. To probe this question another subsequent measurement of the energy
(corresponding to the H operator) could be made. Doing so would allow the amplitudes in
the expansion of the above ψ''= P-1
ψ''= P-1 = Σn C''n nP-1
to be found.
The kind of experiment outlined above allows one to find the content of each
particular component of an initial sample's wavefunction. For example, the original
wavefunction has
0.64 |C''n|2 |C'm|2 fractional content of the various nPm functions. It is analogous to the
other examples considered above because all of the operators whose properties are
measured commute.
Another Example:
Let us consider an experiment in which we begin with a sample (with wavefunction
ψ) that is first subjected to measurement of Lz and then subjected to measurement of L2 and
then of the energy. In this order, one would first find specific values (integer multiples of
h) of Lz and one would express ψ as
ψ = Σm Dm ψm.
At this stage, the nature of each ψm is unknown (e.g., the ψ1 function can contain np1,
n'd1, n''f1, etc. components); all that is known is that ψm has m h as its Lz value.
Taking that sub-population (|Dm|2 fraction) with a particular m h value for Lz and
subjecting it to subsequent measurement of L2 requires the current wavefunction ψm to be
expressed as
ψm = ΣL DL,m ψL,m.
When L2 is measured the value L(L+1) h² will be observed with probability |DL,m|², and
the wavefunction for that particular sub-population will become
ψ'' = ψL,m.
At this stage, we know the value of L and of m, but we do not know the energy of the
state. For example, we may know that the present sub-population has L=1, m=-1, but we
have no knowledge (yet) of how much 2p-1, 3p-1, ... np-1 the system contains.
To further probe the sample, the above sub-population with L=1 and m=-1 can be
subjected to measurement of the energy. In this case, the function ψ1,-1 must be expressed
as
ψ1,-1 = Σn Dn'' nP-1.
When the energy measurement is made, the state nP-1 will be found |Dn''|2 fraction of the
time.
The fact that Lz, L2, and H all commute with one another (i.e., are mutually
commutative) makes the series of measurements described in the above examples more
straightforward than if these operators did not commute.
In the first experiment, the fact that they are mutually commutative allowed us to
expand the 64 % probable L2 eigenstate with L=1 in terms of functions that were
eigenfunctions of the operator for which measurement was about to be made without
destroying our knowledge of the value of L2. That is, because L2 and Lz can have
simultaneous eigenfunctions , the L = 1 function can be expanded in terms of functions that
are eigenfunctions of both L2 and Lz. This in turn, allowed us to find experimentally the
sub-population that had, for example -1 h as its value of Lz while retaining knowledge that
the state remains an eigenstate of L2 (the state at this time had L = 1 and m = -1 and was
denoted P-1). Then, when this P-1 state was subjected to energy measurement, knowledge
of the energy of the sub-population could be gained without giving up knowledge of the L2
and Lz information; upon carrying out said measurement, the state became nP-1.
We therefore conclude that the act of carrying out an experimental measurement
disturbs the system in that it causes the system's wavefunction to become an eigenfunction
of the operator whose property is measured. If two properties whose corresponding
operators commute are measured, the measurement of the second property does not destroy
knowledge of the first property's value gained in the first measurement.
On the other hand, as detailed further in Appendix C, if the two properties (F and
G) do not commute, the second measurement destroys knowledge of the first property's
value. After the first measurement, Ψ is an eigenfunction of F; after the second
measurement, it becomes an eigenfunction of G. If the two non-commuting operators'
properties are measured in the opposite order, the wavefunction first is an eigenfunction of
G, and subsequently becomes an eigenfunction of F.
It is thus often said that 'measurements for operators that do not commute interfere
with one another'. The simultaneous measurement of the position and momentum along the
same axis provides an example of two incompatible measurements. The fact that the operators
x (multiplication by x) and px = -ih ∂/∂x do not commute is straightforward to demonstrate:
{x(-ih ∂/∂x)χ - (-ih ∂/∂x)(xχ)} = ih χ ≠ 0.
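This nonzero commutator can be exhibited numerically by applying both operator orderings to a test function on a grid, with d/dx approximated by central differences (Python, hbar = 1; the Gaussian test function is an arbitrary choice):

```python
import numpy as np

hbar = 1.0
x = np.linspace(-5.0, 5.0, 4001)
chi = np.exp(-x ** 2)                    # arbitrary smooth test function

def p_op(f):
    """Momentum operator -i*hbar d/dx, via central finite differences."""
    return -1j * hbar * np.gradient(f, x)

lhs = x * p_op(chi) - p_op(x * chi)      # [x, px] applied to chi
rhs = 1j * hbar * chi                    # the predicted result, i*hbar*chi
# Agreement (away from the grid edges, where the difference formula degrades)
# confirms that the commutator is nonzero:
assert np.allclose(lhs[100:-100], rhs[100:-100], atol=1e-3)
```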
Operators that commute with the Hamiltonian and with one another form a
particularly important class because each such operator permits each of the energy
eigenstates of the system to be labelled with a corresponding quantum number. These
operators are called symmetry operators. As will be seen later, they include angular
momenta (e.g., L2,Lz, S2, Sz, for atoms) and point group symmetries (e.g., planes and
rotations about axes). Every operator that qualifies as a symmetry operator provides a
quantum number with which the energy levels of the system can be labeled.
7. If a property F is measured for a large number of systems all described by the same Ψ,
the average value <F> of F for such a set of measurements can be computed as
<F> = <Ψ|F|Ψ>.
Expanding Ψ in terms of the complete set of eigenstates of F allows <F> to be rewritten as
follows:
<F> = Σj fj |<φj|Ψ>|²,
which expresses <F> as the sum over j of the probability Pj of obtaining the particular
value fj when the property F is measured, multiplied by the value fj itself. This same
result can be expressed in terms of the density matrix Di,j of the
state Ψ defined above as:
<F> = Σ i,j <Ψ|φi> <φi|F|φj> <φj|Ψ> = Σ i,j Ci* <φi|F|φj> Cj
= Σ i,j Dj,i <φi|F|φj> = Tr (DF).
Here, DF represents the matrix product of the density matrix Dj,i and the matrix
representation Fi,j = <φi|F|φj> of the F operator, both taken in the {φj} basis, and Tr
represents the matrix trace operation.
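The identity <F> = <Ψ|F|Ψ> = Tr(DF) can be confirmed for a randomly generated example (Python; the Hermitian matrix and coefficient vector below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
F = A + A.conj().T                          # Hermitian operator in the {phi_j} basis

C = rng.standard_normal(n) + 1j * rng.standard_normal(n)
C = C / np.linalg.norm(C)                   # expansion coefficients C_j = <phi_j|Psi>

D = np.outer(C, C.conj())                   # density matrix: D[j, i] = C_i* C_j
expval_direct = (C.conj() @ F @ C).real     # <Psi|F|Psi>
expval_trace = np.trace(D @ F).real         # Tr(D F)
assert abs(expval_direct - expval_trace) < 1e-12
```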
As mentioned at the beginning of this Section, this set of rules and their
relationships to experimental measurements can be quite perplexing. The structure of
quantum mechanics embodied in the above rules was developed in light of new scientific
observations (e.g., the photoelectric effect, diffraction of electrons) that could not be
interpreted within the conventional pictures of classical mechanics. Throughout its
development, these and other experimental observations placed severe constraints on the
structure of the equations of the new quantum mechanics as well as on their interpretations.
For example, the observation of discrete lines in the emission spectra of atoms gave rise to
the idea that the atom's electrons could exist with only certain discrete energies and that
light of specific frequencies would be given off as transitions among these quantized
energy states took place.
Even with the assurance that quantum mechanics has firm underpinnings in
experimental observations, students learning this subject for the first time often encounter
difficulty. Therefore, it is useful to again examine some of the model problems for which
the Schrödinger equation can be exactly solved and to learn how the above rules apply to
such concrete examples.
The examples examined earlier in this Chapter and those given in the Exercises and
Problems serve as useful models for chemically important phenomena: electronic motion in
polyenes, in solids, and in atoms as well as vibrational and rotational motions. Their study
thus far has served two purposes; it allowed the reader to gain some familiarity with
applications of quantum mechanics and it introduced models that play central roles in much
of chemistry. Their study now is designed to illustrate how the above seven rules of
quantum mechanics relate to experimental reality.
B. An Example Illustrating Several of the Fundamental Rules
The physical significance of the time independent wavefunctions and energies
treated in Section II as well as the meaning of the seven fundamental points given above
can be further illustrated by again considering the simple two-dimensional electronic motion
model.
If the electron were prepared in the eigenstate corresponding to nx =1, ny =2, its
total energy would be
E = π²h²/2m [ 1²/Lx² + 2²/Ly² ].
If the energy were experimentally measured, this and only this value would be observed,
and this same result would hold for all time as long as the electron is undisturbed.
If an experiment were carried out to measure the momentum of the electron along
the y-axis, according to the second postulate above, only values equal to the eigenvalues of
-ih∂/∂y could be observed. The py eigenfunctions (i.e., functions that obey py F =
-ih∂/∂y F = c F) are of the form
(1/Ly)^1/2 exp(iky y),
where the momentum hky can achieve any value; the (1/Ly)^1/2 factor is used to normalize
the eigenfunctions over the range 0 ≤ y ≤ Ly. It is useful to note that the y-dependence of ψ,
as expressed above [exp(i2πy/Ly) - exp(-i2πy/Ly)], is already written in terms of two such
eigenstates of -ih∂/∂y:
-ih∂/∂y exp(i2πy/Ly) = 2πh/Ly exp(i2πy/Ly), and
-ih∂/∂y exp(-i2πy/Ly) = -2πh/Ly exp(-i2πy/Ly).
Thus, the expansion of ψ in terms of eigenstates of the property being measured, dictated by
the fifth postulate above, is already accomplished. The only two terms in this expansion
correspond to momenta along the y-axis of 2πh/Ly and -2πh/Ly; the probabilities of
observing these two momenta are given by the squares of the expansion coefficients of ψ in
terms of the normalized eigenfunctions of -ih∂/∂y. The functions (1/Ly)^1/2 exp(i2πy/Ly)
and (1/Ly)^1/2 exp(-i2πy/Ly) are such normalized eigenfunctions; the expansion coefficients
of these functions in ψ are 2^(-1/2) and -2^(-1/2), respectively. Thus the momentum 2πh/Ly
will be observed with probability (2^(-1/2))² = 1/2 and -2πh/Ly will be observed with
probability (-2^(-1/2))² = 1/2. If the momentum along the x-axis were experimentally
measured, again only two values, πh/Lx and -πh/Lx, would be found, each with a
probability of 1/2.
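These probabilities can be recovered by direct quadrature. The Python sketch below projects the normalized y-factor onto the two momentum eigenfunctions (the box length Ly is an arbitrary assumed value; the coefficients emerge with unit-magnitude phases, so only their squared magnitudes matter here):

```python
import numpy as np

# Expand the normalized y-factor (2/Ly)^1/2 sin(2*pi*y/Ly) in the momentum
# eigenfunctions (1/Ly)^1/2 exp(+/- i 2*pi*y/Ly); each carries probability 1/2.
Ly = 2.0                                   # arbitrary assumed box length
y = np.linspace(0.0, Ly, 100001)
dy = y[1] - y[0]
psi_y = np.sqrt(2.0 / Ly) * np.sin(2.0 * np.pi * y / Ly)

def coeff(sign):
    """<(1/Ly)^1/2 exp(sign*i*2*pi*y/Ly) | psi_y>, by simple quadrature."""
    eig = np.sqrt(1.0 / Ly) * np.exp(sign * 2j * np.pi * y / Ly)
    return np.sum(eig.conj() * psi_y) * dy

p_plus, p_minus = abs(coeff(+1)) ** 2, abs(coeff(-1)) ** 2
assert abs(p_plus - 0.5) < 1e-6 and abs(p_minus - 0.5) < 1e-6
```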
The average value of the momentum along the x-axis can be computed either as the
sum of the probabilities multiplied by the momentum values:
<px> = 1/2 [πh/Lx - πh/Lx] = 0,
or as the so-called expectation value integral shown in the seventh postulate:
<px> = ∫ ∫ ψ* (-ih∂ψ/∂x) dx dy.
Inserting the full expression for ψ(x,y) and integrating over x and y from 0 to Lx and Ly,
respectively, this integral is seen to vanish. This means that the result of a large number of
measurements of px on electrons each described by the same ψ will yield zero net
momentum along the x-axis; half of the measurements will yield positive momenta and
half will yield negative momenta of the same magnitude.
The time evolution of the full wavefunction given above for the nx=1, ny=2 state is
easy to express because this ψ is an energy eigenstate:
Ψ(x,y,t) = ψ(x,y) exp(-iEt/h).
If, on the other hand, the electron had been prepared in a state ψ(x,y) that is not a pure
eigenstate (i.e., cannot be expressed as a single energy eigenfunction), then the time
evolution is more complicated. For example, if at t=0 ψ were of the form
ψ = (2/Lx)1/2 (2/Ly)1/2 [a sin(2πx/Lx) sin(1πy/Ly)
+ b sin(1πx/Lx) sin(2πy/Ly) ],
with a and b both real numbers whose squares give the probabilities of finding the system
in the respective states, then the time evolution operator exp(-iHt/h) applied to ψ would
yield the following time dependent function:
Ψ = (2/Lx)1/2 (2/Ly)1/2 [a exp(-iE2,1 t/h) sin(2πx/Lx)
sin(1πy/Ly) + b exp(-iE1,2 t/h) sin(1πx/Lx) sin(2πy/Ly) ],
where
E2,1 = π2 h2/2m [ 22/Lx2 + 12/Ly2 ], and
E1,2 = π2 h2/2m [ 12/Lx2 + 22/Ly2 ].
The probability of finding E2,1 if an experiment were carried out to measure energy would
be |a exp(-iE2,1 t/h)|2 = |a|2; the probability for finding E1,2 would be |b|2. The spatial
probability distribution for finding the electron at points x,y will, in this case, be given by:
|Ψ|2 = |a|2 |ψ2,1|2 + |b|2 |ψ1,2|2 + 2 ab ψ2,1ψ1,2 cos(∆Et/h),
where ∆E is E2,1 - E1,2,
ψ2,1 =(2/Lx)1/2 (2/Ly)1/2 sin(2πx/Lx) sin(1πy/Ly),
and
ψ1,2 =(2/Lx)1/2 (2/Ly)1/2 sin(1πx/Lx) sin(2πy/Ly).
This spatial distribution is not stationary but evolves in time. So in this case, one has a
wavefunction that is not a pure eigenstate of the Hamiltonian (one says that Ψ is a
superposition state or a non-stationary state) whose average energy remains constant
(E=E2,1 |a|2 + E1,2 |b|2) but whose spatial distribution changes with time.
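The beating of such a superposition state can be made concrete numerically. A minimal sketch, in assumed reduced units (h = 1, m = 1) with illustrative box sides Lx = 1 and Ly = 2 (chosen unequal so that E2,1 and E1,2 differ), compares |Ψ|2 computed directly from the time-evolved amplitudes with the cos(∆Et/h) formula quoted above:

```python
import numpy as np

# Beating of the superposition a*psi(2,1) + b*psi(1,2) in a 2D box,
# reduced units (hbar = 1, m = 1); Lx != Ly so that E(2,1) != E(1,2).
Lx, Ly = 1.0, 2.0
a = b = 1.0 / np.sqrt(2.0)

def E(nx, ny):
    # E(nx,ny) = pi^2/(2m) [nx^2/Lx^2 + ny^2/Ly^2]
    return 0.5 * np.pi**2 * (nx**2 / Lx**2 + ny**2 / Ly**2)

def psi(nx, ny, x, y):
    return (np.sqrt(2.0 / Lx) * np.sqrt(2.0 / Ly)
            * np.sin(nx * np.pi * x / Lx) * np.sin(ny * np.pi * y / Ly))

x, y = 0.3, 0.7                        # an arbitrary observation point
p21, p12 = psi(2, 1, x, y), psi(1, 2, x, y)
dE = E(2, 1) - E(1, 2)

direct, formula = [], []
for t in [0.0, 0.5, np.pi / dE]:       # the last t is half a beat period
    Psi = a * np.exp(-1j * E(2, 1) * t) * p21 + b * np.exp(-1j * E(1, 2) * t) * p12
    direct.append(abs(Psi)**2)         # density from the evolved amplitudes
    formula.append(a**2 * p21**2 + b**2 * p12**2
                   + 2 * a * b * p21 * p12 * np.cos(dE * t))
print(direct)
print(formula)
```

The two expressions agree at every t, and the density at the chosen point differs markedly between t = 0 and half a beat period later, while the norm and average energy stay fixed.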
Although it might seem that most spectroscopic measurements would be designed
to prepare the system in an eigenstate (e.g., by focusing on the sample light whose
frequency matches that of a particular transition), such need not be the case. For example,
if very short laser pulses are employed, the Heisenberg uncertainty broadening (∆E∆t ≥ h)
causes the light impinging on the sample to be very non-monochromatic (e.g., a pulse time
of 1 x10-12 sec corresponds to a frequency spread of approximately 5 cm-1). This, in turn,
removes any possibility of preparing the system in a particular quantum state with a
resolution of better than 5 cm-1 because the system experiences time-oscillating
electromagnetic fields whose frequencies range over at least 5 cm-1.
Essentially all of the model problems that have been introduced in this Chapter to
illustrate the application of quantum mechanics constitute widely used, highly successful
'starting-point' models for important chemical phenomena. As such, it is important that
students retain working knowledge of the energy levels, wavefunctions, and symmetries
that pertain to these models.
Thus far, exactly soluble model problems that represent one or more aspects of an
atom or molecule's quantum-state structure have been introduced and solved. For example,
electronic motion in polyenes was modeled by a particle-in-a-box. The harmonic oscillator
and rigid rotor were introduced to model vibrational and rotational motion of a diatomic
molecule.
As chemists, we are used to thinking of electronic, vibrational, rotational, and
translational energy levels as being (at least approximately) separable. On the other hand,
we are aware that situations exist in which energy can flow from one such degree of
freedom to another (e.g., electronic-to-vibrational energy flow occurs in radiationless
relaxation and vibration-rotation couplings are important in molecular spectroscopy). It is
important to understand how the simplifications that allow us to focus on electronic or
vibrational or rotational motion arise, how they can be obtained from a first-principles
derivation, and what their limitations and range of accuracy are.
Chapter 2
Approximation Methods Can be Used When Exact Solutions to the Schrödinger Equation
Can Not be Found.
In applying quantum mechanics to 'real' chemical problems, one is usually faced
with a Schrödinger differential equation for which, to date, no one has found an analytical
solution. This is equally true for electronic and nuclear-motion problems. It has therefore
proven essential to develop and efficiently implement mathematical methods which can
provide approximate solutions to such eigenvalue equations. Two methods are widely used
in this context- the variational method and perturbation theory. These tools, whose use
permeates virtually all areas of theoretical chemistry, are briefly outlined here, and the
details of perturbation theory are amplified in Appendix D.
I. The Variational Method
For the kind of potentials that arise in atomic and molecular structure, the
Hamiltonian H is a Hermitian operator that is bounded from below (i.e., it has a lowest
eigenvalue). Because it is Hermitian, it possesses a complete set of orthonormal
eigenfunctions {ψj}. Any function Φ that depends on the same spatial and spin variables
on which H operates and obeys the same boundary conditions that the {ψj} obey can be
expanded in this complete set
Φ = Σ j Cj ψj.
The expectation value of the Hamiltonian for any such function can be expressed in
terms of its Cj coefficients and the exact energy levels Ej of H as follows:
<Φ|H|Φ> = Σij Ci*Cj <ψi|H|ψj> = Σ j|Cj|2 Ej.
If the function Φ is normalized, the sum Σ j |Cj|2 is equal to unity. Because H is bounded
from below, all of the Ej must be greater than or equal to the lowest energy E0. Combining
the latter two observations allows the energy expectation value of Φ to be used to produce a
very important inequality:
<Φ|H|Φ> ≥ E0.
The equality can hold only if Φ is equal to ψ0; if Φ contains components along any of the
other ψj, the energy of Φ will exceed E0.
This upper-bound property forms the basis of the so-called variational method in
which 'trial wavefunctions' Φ are constructed:
i. To guarantee that Φ obeys all of the boundary conditions that the exact ψj do and
that Φ is of the proper spin and space symmetry and is a function of the same spatial and
spin coordinates as the ψj;
ii. With parameters embedded in Φ whose 'optimal' values are to be determined by
making <Φ|H|Φ> a minimum.
It is perfectly acceptable to vary any parameters in Φ to attain the lowest possible
value for <Φ|H|Φ> because the proof outlined above constrains this expectation value to be
above the true lowest eigenstate's energy E0 for any Φ. The philosophy then is that the Φ
that gives the lowest <Φ|H|Φ> is the best because its expectation value is closest to the exact
energy.
Quite often a trial wavefunction is expanded as a linear combination of other
functions
Φ = ΣJ CJ ΦJ.
In these cases, one says that a 'linear variational' calculation is being performed. The set of
functions {ΦJ} are usually constructed to obey all of the boundary conditions that the exact
state Ψ obeys, to be functions of the same coordinates as Ψ, and to be of the same
spatial and spin symmetry as Ψ. Beyond these conditions, the {ΦJ} are nothing more than
members of a set of functions that are convenient to deal with (e.g., convenient to evaluate
Hamiltonian matrix elements <ΦI|H|ΦJ>) and that can, in principle, be made complete if
more and more such functions are included.
For such a trial wavefunction, the energy depends quadratically on the 'linear
variational' CJ coefficients:
<Φ|H|Φ> = ΣIJ CICJ <ΦI|H|ΦJ>.
Minimization of this energy with the constraint that Φ remain normalized (<Φ|Φ> = 1 = ΣIJ
CICJ <ΦI|ΦJ>) gives rise to a so-called secular or eigenvalue-eigenvector problem:
ΣJ [<ΦI|H|ΦJ> - E <ΦI|ΦJ>] CJ = ΣJ [HIJ - E SIJ]CJ = 0.
If the functions {ΦJ} are orthonormal, then the overlap matrix S reduces to the unit
matrix and the above generalized eigenvalue problem reduces to the more familiar form:
ΣJ HIJ CJ = E CI.
The secular problem, in either form, has as many eigenvalues Ei and eigenvectors
{CiJ} as the dimension of the HIJ matrix. It can also be shown that between
successive pairs of the eigenvalues obtained by solving the secular problem at least one
exact eigenvalue must occur (i.e., Ei+1 > Eexact > Ei, for all i). This observation is
referred to as 'the bracketing theorem'.
Variational methods, in particular the linear variational method, are the most widely
used approximation techniques in quantum chemistry. To implement such a method one
needs to know the Hamiltonian H whose energy levels are sought and one needs to
construct a trial wavefunction in which some 'flexibility' exists (e.g., as in the linear
variational method where the CJ coefficients can be varied). In Section 6 this tool will be
used to develop several of the most commonly used and powerful molecular orbital
methods in chemistry.
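A minimal linear variational calculation can be sketched as follows (Python, in assumed reduced units h = m = 1). The model, chosen purely for illustration, is a particle in a unit box with an added tilt V(x) = 10x; the basis functions are the first N box eigenfunctions, which are orthonormal, so SIJ is the unit matrix and the simpler form of the secular equation applies. Because each smaller basis is contained in each larger one, the lowest eigenvalue can only move down (or stay fixed) as N grows, in accord with the variational bound:

```python
import numpy as np

# Linear variational sketch (hbar = m = 1): particle in a box 0 <= x <= 1
# with an added tilt V(x) = 10 x, expanded in the first N box eigenfunctions
# phi_n(x) = sqrt(2) sin(n pi x).  The basis is orthonormal, so S = 1.
grid = np.linspace(0.0, 1.0, 4001)
dxg = grid[1] - grid[0]

def phi(n):
    return np.sqrt(2.0) * np.sin(n * np.pi * grid)

def lowest_energy(N):
    H = np.zeros((N, N))
    for I in range(1, N + 1):
        for J in range(1, N + 1):
            # kinetic part is diagonal in this basis: n^2 pi^2 / 2
            kin = 0.5 * (I * np.pi)**2 if I == J else 0.0
            # potential matrix element <phi_I | 10 x | phi_J> by quadrature
            pot = np.sum(phi(I) * 10.0 * grid * phi(J)) * dxg
            H[I - 1, J - 1] = kin + pot
    return np.linalg.eigh(H)[0][0]

energies = [lowest_energy(N) for N in (1, 2, 4, 8)]
print(energies)   # non-increasing: enlarging the basis can only improve the bound
```

The monotonic improvement follows from the eigenvalue interlacing of nested Hamiltonian matrices, which is the same fact that underlies the bracketing theorem quoted above.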
II. Perturbation Theory
[Suggested Extra Reading- Appendix D; Time Independent Perturbation Theory]
Perturbation theory is the second most widely used approximation method in
quantum chemistry. It allows one to estimate the splittings and shifts in energy levels and
changes in wavefunctions that occur when an external field (e.g., an electric or magnetic
field, or a field due to a surrounding set of 'ligands'- a crystal field) or a previously-ignored
term in the Hamiltonian is applied to a species whose 'unperturbed' states are known.
These 'perturbations' in energies and wavefunctions are expressed in terms of the
(complete) set of unperturbed eigenstates.
Assuming that all of the wavefunctions Φk and energies Ek0 belonging to the
unperturbed Hamiltonian H0 are known
H0 Φk = Ek0 Φk ,
and given that one wishes to find eigenstates (ψk and Ek) of the perturbed Hamiltonian
H=H0+λV,
perturbation theory expresses ψk and Ek as power series in the perturbation strength λ:
ψk = Σn=0,∞ λn ψk(n) ,
Ek = Σn=0,∞ λn Ek(n) .
The systematic development of the equations needed to determine the Ek(n) and the ψk(n) is
presented in Appendix D. Here, we simply quote the few lowest-order results.
The zeroth-order wavefunctions and energies are given in terms of the solutions of
the unperturbed problem as follows:
ψk(0) = Φk and Ek(0) = Ek0.
This simply means that one must be willing to identify one of the unperturbed states as the
'best' approximation to the state being sought. This, of course, implies that one must
therefore strive to find an unperturbed model problem, characterized by H0 that represents
the true system as accurately as possible, so that one of the Φk will be as close as possible
to ψk.
The first-order energy correction is given in terms of the zeroth-order (i.e.,
unperturbed) wavefunction as:
Ek(1) = <Φk| V | Φk>,
which is identified as the average value of the perturbation taken with respect to the
unperturbed function Φk. The so-called first-order wavefunction ψk(1) expressed in terms
of the complete set of unperturbed functions {ΦJ} is:
ψk(1) = Σj≠k <Φj| V | Φk>/[ Ek0 - Ej0 ] | Φj> .
The second-order energy correction is expressed as follows:
Ek(2) = Σj≠k |<Φj| V | Φk>|2/[ Ek0 - Ej0 ] ,
and the second-order correction to the wavefunction is expressed as
ψk(2) = Σj≠k [ Ek0 - Ej0]-1 Σl≠k{<Φj| V | Φl> -δj,l Ek(1)}
<Φl| V | Φk> [ Ek0 - El0 ]-1 |Φj>.
An essential point about perturbation theory is that the energy corrections Ek(n) and
wavefunction corrections ψk(n) are expressed in terms of integrals over the unperturbed
wavefunctions Φk involving the perturbation (i.e., <Φj|V|Φl>) and the unperturbed
energies Ej0. Perturbation theory is most useful when one has, in hand, the solutions to an
unperturbed Schrödinger equation that is reasonably 'close' to the full Schrödinger
equation whose solutions are being sought. In such a case, it is likely that low-order
corrections will be adequate to describe the energies and wavefunctions of the full problem.
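The low-order formulas quoted above can be exercised on a simple model. The sketch below (Python, in assumed reduced units h = m = 1) perturbs a particle in a unit box with V(x) = x, an illustrative choice, and evaluates Ek(1) and Ek(2) for the ground state by quadrature; the second-order sum is truncated at an assumed 60 terms, which suffices here because the coupling integrals fall off rapidly with j:

```python
import numpy as np

# First- and second-order perturbation theory (hbar = m = 1) for a particle
# in a unit box perturbed by V(x) = x, using the formulas quoted above.
x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
phi = lambda n: np.sqrt(2.0) * np.sin(n * np.pi * x)   # unperturbed states
E0 = lambda n: 0.5 * (n * np.pi)**2                    # unperturbed energies
Vjk = lambda j, k: np.sum(phi(j) * x * phi(k)) * dx    # <phi_j| V |phi_k>

k = 1                                     # ground state
E1 = Vjk(k, k)                            # first order: <phi_k|V|phi_k> = 1/2
E2 = sum(Vjk(j, k)**2 / (E0(k) - E0(j))   # second order, truncated sum
         for j in range(2, 60))
print(E0(k), E1, E2)                      # E1 = 1/2; E2 is a small negative shift
```

The first-order shift is just the average of the perturbation (1/2 for V = x over the unit box), and the second-order correction is necessarily negative for the ground state because every denominator Ek0 - Ej0 is negative.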
It is important to stress that although the solutions to the full 'perturbed'
Schrödinger equation are expressed, as above, in terms of sums over all states of the
unperturbed Schrödinger equation, it is improper to speak of the perturbation as creating
excited-state species. For example, the polarization of the 1s orbital of the Hydrogen atom
caused by the application of a static external electric field of strength E along the z-axis is
described, in first-order perturbation theory, through the sum
Σn=2,∞ φnp0 <φnp0 | E e r cosθ | 1s> [ E1s - Enp0 ]-1
over all pz = p0 orbitals labeled by principal quantum number n. The coefficient multiplying
each p0 orbital depends on the energy gap corresponding to the 1s-to-np 'excitation' as well
as the electric dipole integral <φnp0 | E ercosθ | 1s> between the 1s orbital and the np0
orbital.
This sum describes the polarization of the 1s orbital in terms of functions that have
p0 symmetry; by combining an s orbital and p0 orbitals, one can form a 'hybrid-like' orbital
that is nothing but a distorted 1s orbital. The appearance of the excited np0 orbitals has
nothing to do with forming excited states; these np0 orbitals simply provide a set of
functions that can describe the response of the 1s orbital to the applied electric field.
The relative strengths and weaknesses of perturbation theory and the variational
method, as applied to studies of the electronic structure of atoms and molecules, are
discussed in Section 6.
Chapter 3
The Application of the Schrödinger Equation to the Motions of Electrons and Nuclei in a
Molecule Leads to the Chemists' Picture of Electronic Energy Surfaces on Which Vibration
and Rotation Occur and Among Which Transitions Take Place.
I. The Born-Oppenheimer Separation of Electronic and Nuclear Motions
Many elements of chemists' pictures of molecular structure hinge on the point of
view that separates the electronic motions from the vibrational/rotational motions and treats
couplings between these (approximately) separated motions as 'perturbations'. It is
essential to understand the origins and limitations of this separated-motions picture.
To develop a framework in terms of which to understand when such separability is
valid, one thinks of an atom or molecule as consisting of a collection of N electrons and M
nuclei each of which possesses kinetic energy and among which coulombic potential
energies of interaction arise. To properly describe the motions of all these particles, one
needs to consider the full Schrödinger equation HΨ = EΨ, in which the Hamiltonian H
contains the sum (denoted He) of the kinetic energies of all N electrons and the coulomb
potential energies among the N electrons and the M nuclei as well as the kinetic energy T of
the M nuclei
T = Σa=1,M ( - h2/2ma ) ∇a2,
H = He + T
He = Σ j { ( - h2/2me ) ∇ j2 - Σa Zae2/rj,a } + Σ j<k e2/rj,k
+ Σa < b Za Zb e2/Ra,b.
Here, ma is the mass of the nucleus a, Zae2 is its charge, and ∇a2 is the Laplacian with
respect to the three cartesian coordinates of this nucleus (this operator ∇a2 is given in
spherical polar coordinates in Appendix A); rj,a is the distance between the jth electron and
the ath nucleus, rj,k is the distance between the jth and kth electrons, me is the electron's
mass, and Ra,b is the distance from nucleus a to nucleus b.
The full Hamiltonian H thus contains differential operators over the 3N electronic
coordinates (denoted r as a shorthand) and the 3M nuclear coordinates (denoted R as a
shorthand). In contrast, the electronic Hamiltonian He is a Hermitian differential operator in
r-space but not in R-space. Although He is indeed a function of the R-variables, it is not a
differential operator involving them.
Because He is a Hermitian operator in r-space, its eigenfunctions Ψi (r|R) obey
He Ψi (r|R) = Ei (R) Ψi (r|R)
for any values of the R-variables, and form a complete set of functions of r for any values
of R. These eigenfunctions and their eigenvalues Ei (R) depend on R only because the
potentials appearing in He depend on R. The Ψi and Ei are the electronic wavefunctions
and electronic energies whose evaluations are treated in the next three Chapters.
The fact that the set of {Ψi} is, in principle, complete in r-space allows the full
(electronic and nuclear) wavefunction Ψ to have its r-dependence expanded in terms of the
Ψi:
Ψ(r,R) = Σ i Ψi (r|R) Ξi (R) .
The Ξi(R) functions carry the remaining R-dependence of Ψ and are determined by
insisting that Ψ as expressed here obey the full Schrödinger equation:
( He + T - E ) Σ i Ψi (r|R) Ξi (R) = 0.
Projecting this equation against < Ψj (r|R)| (integrating only over the electronic coordinates
because the Ψj are orthonormal only when so integrated) gives:
[ (Ej(R) - E) Ξj (R) + T Ξj(R) ] = - Σ i { < Ψj | T | Ψi > (R) Ξi(R)
+ Σa=1,M ( - h2/ma ) < Ψj | ∇a | Ψi >(R) . ∇a Ξi(R) },
where the (R) notation in < Ψj | T | Ψi > (R) and < Ψj | ∇a | Ψi >(R) has been used to
remind one that the integrals < ...> are carried out only over the r coordinates and, as a
result, still depend on the R coordinates.
In the Born-Oppenheimer (BO) approximation, one neglects the so-called non-
adiabatic or non-BO couplings on the right-hand side of the above equation. Doing so
yields the following equations for the Ξi(R) functions:
[ (Ej(R) - E) Ξj0 (R) + T Ξj0(R) ] = 0,
where the superscript in Ξi0(R) is used to indicate that these functions are solutions within
the BO approximation only.
These BO equations can be recognized as the equations for the translational,
rotational, and vibrational motion of the nuclei on the 'potential energy surface' Ej (R).
That is, within the BO picture, the electronic energies Ej(R), considered as functions of the
nuclear positions R, provide the potentials on which the nuclei move. The electronic and
nuclear-motion aspects of the Schrödinger equation are thereby separated.
A. Time Scale Separation
The physical parameters that determine under what circumstances the BO
approximation is accurate relate to the motional time scales of the electronic and
vibrational/rotational coordinates.
The range of accuracy of this separation can be understood by considering the
differences in time scales that relate to electronic motions and nuclear motions under
ordinary circumstances. In most atoms and molecules, the electrons orbit the nuclei at
speeds much in excess of even the fastest nuclear motions (the vibrations). As a result, the
electrons can adjust 'quickly' to the slow motions of the nuclei. This means it should be
possible to develop a model in which the electrons 'follow' smoothly as the nuclei vibrate
and rotate.
This picture is that described by the BO approximation. Of course, one should
expect large corrections to such a model for electronic states in which 'loosely held'
electrons exist. For example, in molecular Rydberg states and in anions, where the outer
valence electrons are bound by a fraction of an electron volt, the natural orbit frequencies of
these electrons are not much faster (if at all) than vibrational frequencies. In such cases,
significant breakdown of the BO picture is to be expected.
B. Vibration/Rotation States for Each Electronic Surface
The BO picture is what gives rise to the concept of a manifold of potential energy
surfaces on which vibrational/rotational motions occur.
Even within the BO approximation, motion of the nuclei on the various electronic
energy surfaces is different because the nature of the chemical bonding differs from surface
to surface. That is, the vibrational/rotational motion on the ground-state surface is certainly
not the same as on one of the excited-state surfaces. However, there is a complete set of
wavefunctions Ξ0j,m (R) and energy levels E0j,m for each surface Ej(R) because T + Ej(R)
is a Hermitian operator in R-space for each surface (labelled j):
[ T + Ej(R) ] Ξ0j,m (R) = E0j,m Ξ0j,m .
The eigenvalues E0j,m must be labelled by the electronic surface (j) on which the motion
occurs as well as to denote the particular state (m) on that surface.
II. Rotation and Vibration of Diatomic Molecules
For a diatomic species, the vibration-rotation (V/R) kinetic energy operator can be
expressed as follows in terms of the bond length R and the angles θ and φ that describe the
orientation of the bond axis relative to a laboratory-fixed coordinate system:
TV/R = - h2/2µ { R-2 ∂/∂R( R2 ∂/∂R) - R-2 h-2L2 },
where the square of the rotational angular momentum of the diatomic species is
L2 = h2{ (sinθ)-1 ∂/∂θ ((sinθ) ∂/∂θ ) + (sinθ)-2 ∂2/∂φ2}.
Because the potential Ej (R) depends on R but not on θ or φ, the V/R function Ξ0j,m can be
written as a product of an angular part and an R-dependent part; moreover, because L2
contains the full angle-dependence of TV/R , Ξ0j,n can be written as
Ξ0j,n = YJ,M (θ,φ) Fj,J,v (R).
The general subscript n, which had represented the state in the full set of 3M-3 R-space
coordinates, is replaced by the three quantum numbers J,M, and v (i.e., once one focuses
on the three specific coordinates R,θ, and φ , a total of three quantum numbers arise in
place of the symbol n).
Substituting this product form for Ξ0j,n into the V/R equation gives:
- h2/2µ { R-2 ∂/∂R( R2 ∂/∂R) - R-2 h-2 J(J+1) } Fj,J,v (R)
+ Ej(R) Fj,J,v (R) = E0j,J,v Fj,J,v
as the equation for the vibrational (i.e., R-dependent) wavefunction within electronic state j
and with the species rotating with J(J+1) h2 as the square of the total angular momentum
and a projection along the laboratory-fixed Z-axis of Mh. The fact that the Fj,J,v functions
do not depend on the M quantum number derives from the fact that the TV/R kinetic energy
operator does not explicitly contain JZ; only J2 appears in TV/R.
The solutions for which J=0 correspond to vibrational states in which the species
has no rotational energy; they obey
- h2/2µ { R-2 ∂/∂R( R2 ∂/∂R) } Fj,0,v (R)
+ Ej(R) Fj,0,v (R) = E0j,0,v Fj,0,v .
The differential-operator parts of this equation can be simplified somewhat by substituting
F= R-1χ and thus obtaining the following equation for the new function χ:
- h2/2µ ∂/∂R ∂/∂R χj,0,v (R) + Ej(R) χj,0,v (R) = E0j,0,v χj,0,v .
Solutions for which J≠0 require the vibrational wavefunction and energy to respond to the
presence of the 'centrifugal potential' given by h2 J(J+1)/(2µR2); these solutions obey the
full coupled V/R equations given above.
A. Separation of Vibration and Rotation
It is common, in developing the working equations of diatomic-molecule
rotational/vibrational spectroscopy, to treat the coupling between the two degrees of
freedom using perturbation theory as developed later in this chapter. In particular, one can
expand the centrifugal coupling h2J(J+1)/(2µR2) around the equilibrium geometry Re
(which depends, of course, on j), writing R = Re(1+∆R) with ∆R = (R-Re)/Re:
h2J(J+1)/(2µR2) = h2J(J+1)/(2µ[Re2 (1+∆R)2])
= h2 J(J+1)/(2µRe2) [1 - 2 ∆R + ... ],
and treat the terms containing powers of the dimensionless displacement ∆R as
perturbations. The zeroth-order equations read:
- h2/2µ { R-2 ∂/∂R( R2 ∂/∂R) } F0j,J,v (R) + Ej(R) F0j,J,v (R)
+ h2 J(J+1)/(2µRe2) F0j,J,v = E0j,J,v F0j,J,v ,
and have solutions whose energies separate
E0j,J,v = h2 J(J+1)/(2µRe2) + Ej,v
and whose wavefunctions are independent of J (because the coupling is not R-dependent in
zeroth order)
F0j,J,v (R) = Fj,v (R).
Perturbation theory is then used to express the corrections to these zeroth order solutions as
indicated in Appendix D.
B. The Rigid Rotor and Harmonic Oscillator
Treatment of the rotational motion at the zeroth-order level described above
introduces the so-called 'rigid rotor' energy levels EJ = h2 J(J+1)/(2µRe2) and
wavefunctions YJ,M (θ,φ); these same quantities arise when the diatomic molecule is
treated as a rigid rod of length Re. The spacings between successive rotational levels within
this approximation are
∆EJ+1,J = 2hcB(J+1),
where the so-called rotational constant B is given in cm-1 as
B = h/(8π2 cµRe2) .
The rotational level J is (2J+1)-fold degenerate because the energy EJ is independent of the
M quantum number of which there are (2J+1) values for each J: M= -J, -J+1, -J+2, ... J-2,
J-1, J.
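As a numerical illustration of B = h/(8π2 cµRe2), the sketch below (Python, SI constants) evaluates B for 12C16O using an approximate literature bond length of 1.128 Å; the result should land near the known value of about 1.93 cm-1, and the spacings 2B(J+1) grow linearly with J:

```python
import numpy as np

# Rigid-rotor rotational constant B = h/(8 pi^2 c mu Re^2) in cm^-1 for
# 12C16O; Re is an approximate literature value used for illustration.
h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e10         # speed of light, cm/s
amu = 1.66053907e-27      # atomic mass unit, kg

mu = 12.000 * 15.995 / (12.000 + 15.995) * amu   # reduced mass of 12C16O, kg
Re = 1.128e-10                                    # equilibrium bond length, m

B = h / (8.0 * np.pi**2 * c * mu * Re**2)         # rotational constant, cm^-1
print("B =", B, "cm-1")
for J in range(3):
    # spacing between levels J and J+1 is 2B(J+1)
    print(J, "->", J + 1, ":", 2.0 * B * (J + 1), "cm-1")
```
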
The explicit form of the zeroth-order vibrational wavefunctions and energy levels,
F0j,v and E0j,v, depends on the description used for the electronic potential energy surface
Ej(R). In the crudest useful approximation, Ej(R) is taken to be a so-called harmonic
potential
Ej(R) ≈ 1/2 kj (R-Re)2 ;
as a consequence, the wavefunctions and energy levels reduce to
E0j,v = Ej (Re) + h √k/µ ( v +1/2), and
F0j,v (R) = [2v v! ]-1/2 (α/π)1/4 exp(-α(R-Re)2/2) Hv (α1/2 (R-Re)),
where α = (kj µ)1/2/h and Hv (y) denotes the Hermite polynomial defined by:
Hv (y) = (-1)v exp(y2) dv/dyv exp(-y2).
The solution of the vibrational differential equation
- h2/2µ { R-2 ∂/∂R( R2 ∂/∂R) } Fj,v (R) + Ej(R) Fj,v (R) = Ej,v Fj,v
is treated in EWK, Atkins, and McQuarrie.
These harmonic-oscillator solutions predict evenly spaced energy levels (i.e., no
anharmonicity) that persist for all v. It is, of course, known that molecular vibrations
display anharmonicity (i.e., the energy levels move closer together as one moves to higher
v) and that quantized vibrational motion ceases once the bond dissociation energy is
reached.
C. The Morse Oscillator
The Morse oscillator model is often used to go beyond the harmonic oscillator
approximation. In this model, the potential Ej(R) is expressed in terms of the bond
dissociation energy De and a parameter a related to the second derivative k of Ej(R) at Re,
k = (d2Ej/dR2)Re = 2a2De, as follows:
Ej(R) - Ej(Re) = De { 1 - exp(-a(R-Re)) }2 .
The Morse oscillator energy levels are given by
E0j,v = Ej(Re) + h √k/µ (v+1/2) - h2/4 (k/µDe) ( v+1/2)2;
the corresponding eigenfunctions are also known analytically in terms of hypergeometric
functions (see, for example, Handbook of Mathematical Functions , M. Abramowitz and I.
A. Stegun, Dover, Inc. New York, N. Y. (1964)). Clearly, the Morse solutions display
anharmonicity as reflected in the negative term proportional to (v+1/2)2 .
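The contrast between the harmonic and Morse level formulas can be made concrete. In the sketch below (Python, in assumed reduced units with h = µ = k = 1, so ω = (k/µ)1/2 = 1, and an illustrative De = 10), the harmonic spacings are constant while the Morse spacings shrink linearly with v:

```python
import numpy as np

# Harmonic vs Morse vibrational levels in reduced units (hbar = mu = k = 1,
# so omega = sqrt(k/mu) = 1); De = 10 is an illustrative dissociation energy.
# Morse formula quoted above: E_v = omega(v+1/2) - (omega^2/4De)(v+1/2)^2.
omega, De = 1.0, 10.0
v = np.arange(0, 8)
E_harm = omega * (v + 0.5)
E_morse = omega * (v + 0.5) - (omega**2 / (4.0 * De)) * (v + 0.5)**2

print(np.diff(E_harm))    # harmonic spacings: all equal to omega
print(np.diff(E_morse))   # Morse spacings shrink with v (anharmonicity)
```
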
D. Perturbative Treatment of Vibration-Rotation Coupling
III. Rotation of Polyatomic Molecules
To describe the orientations of a diatomic or linear polyatomic molecule requires
only two angles (usually termed θ and φ). For any non-linear molecule, three angles
(usually α, β, and γ) are needed. Hence the rotational Schrödinger equation for a non-
linear molecule is a differential equation in three-dimensions.
There are 3M-6 vibrations of a non-linear molecule containing M atoms; a linear
molecule has 3M-5 vibrations. The linear molecule requires two angular coordinates to
describe its orientation with respect to a laboratory-fixed axis system; a non-linear molecule
requires three angles.
A. Linear Molecules
The rotational motion of a linear polyatomic molecule can be treated as an extension
of the diatomic molecule case. One obtains the YJ,M (θ,φ) as rotational wavefunctions and,
within the approximation in which the centrifugal potential is approximated at the
equilibrium geometry of the molecule (Re), the energy levels are:
E0J = J(J+1) h2/(2I) .
Here the total moment of inertia I of the molecule takes the place of µRe2 in the diatomic
molecule case
I = Σa ma (Ra - RCofM)2;
ma is the mass of atom a whose distance from the center of mass of the molecule is (Ra -
RCofM). The rotational level with quantum number J is (2J+1)-fold degenerate again
because there are (2J+1) M-values.
B. Non-Linear Molecules
For a non-linear polyatomic molecule, again with the centrifugal couplings to the
vibrations evaluated at the equilibrium geometry, the following terms form the rotational
part of the nuclear-motion kinetic energy:
Trot = Σ i=a,b,c (Ji2/2Ii).
Here, Ii is the eigenvalue of the moment of inertia tensor:
Ix,x = Σa ma [ (Ra-RCofM)2 -(xa - xCofM )2]
Ix,y = -Σa ma [ (xa - xCofM) ( ya -yCofM) ]
expressed originally in terms of the cartesian coordinates of the nuclei (a) and of the center
of mass in an arbitrary molecule-fixed coordinate system (and similarly for Iz,z , Iy,y , Ix,z
and Iy,z). The operator Ji corresponds to the component of the total rotational angular
momentum J along the direction belonging to the ith eigenvector of the moment of inertia
tensor.
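The construction and diagonalization of the moment of inertia tensor can be sketched numerically. The example below (Python) uses an approximate water geometry, chosen purely for illustration, builds the standard tensor (diagonal elements as above, off-diagonal elements equal to minus the products of inertia), and extracts the principal moments; for a planar molecule the out-of-plane moment must equal the sum of the other two, which serves as a check:

```python
import numpy as np

# Principal moments of inertia for a bent triatomic; geometry is an
# approximate water molecule (Angstroms, masses in amu), for illustration.
masses = np.array([15.999, 1.008, 1.008])            # O, H, H
coords = np.array([[0.000, 0.000, 0.0],
                   [0.757, 0.587, 0.0],
                   [-0.757, 0.587, 0.0]])

# shift coordinates to the center of mass
com = masses @ coords / masses.sum()
r = coords - com

I = np.zeros((3, 3))
for m, (x, y, z) in zip(masses, r):
    I += m * np.array([[y*y + z*z, -x*y,       -x*z],
                       [-x*y,      x*x + z*z,  -y*z],
                       [-x*z,      -y*z,       x*x + y*y]])

Ia, Ib, Ic = np.sort(np.linalg.eigvalsh(I))          # principal moments, amu A^2
print(Ia, Ib, Ic)
# planar-molecule check: the out-of-plane moment equals the sum of the others
print(np.isclose(Ic, Ia + Ib))
```
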
Molecules for which all three principal moments of inertia (the Ii's) are equal are
called 'spherical tops'. For these species, the rotational Hamiltonian can be expressed in
terms of the square of the total rotational angular momentum J2 :
Trot = J2 /2I,
as a consequence of which the rotational energies once again become
EJ = h2 J(J+1)/2I.
However, the YJ,M are not the corresponding eigenfunctions because the operator J2 now
contains contributions from rotations about three (no longer two) axes (i.e., the three
principal axes). The proper rotational eigenfunctions are the DJM,K (α,β,γ) functions
known as 'rotation matrices' (see Sections 3.5 and 3.6 of Zare's book on angular
momentum); these functions depend on three angles (the three Euler angles needed to
describe the orientation of the molecule in space) and three quantum numbers- J, M, and K.
The quantum number M labels the projection of the total angular momentum (as Mh) along
the laboratory-fixed z-axis; Kh is the projection along one of the internal principal axes (in
a spherical top molecule, all three axes are equivalent, so it does not matter which axis is
chosen).
The energy levels of spherical top molecules are (2J+1)2 -fold degenerate. Both the
M and K quantum numbers run from -J, in steps of unity, to J; because the energy is
independent of M and of K, the degeneracy is (2J+1)2.
Molecules for which two of the three principal moments of inertia are equal are
called symmetric top molecules. Prolate symmetric tops have Ia < Ib = Ic ; oblate symmetric
tops have Ia = Ib < Ic (it is conventional to order the moments of inertia as Ia ≤ Ib ≤ Ic ).
The rotational Hamiltonian can now be written in terms of J2 and the component of J
along the unique moment of inertia's axis as:
Trot = Ja2 ( 1/2Ia - 1/2Ib ) + J2 /2Ib
for prolate tops, and
Trot = Jc2 ( 1/2Ic - 1/2Ib ) + J2/2Ib
for oblate tops. Again, the DJM,K (α,β,γ) are the eigenfunctions, where the quantum
number K describes the component of the rotational angular momentum J along the unique
molecule-fixed axis (i.e., the axis of the unique moment of inertia). The energy levels are
now given in terms of J and K as follows:
EJ,K = h2J(J+1)/2Ib + h2 K2 (1/2Ia - 1/2Ib)
for prolate tops, and
EJ,K = h2J(J+1)/2Ib + h2K2 (1/2Ic - 1/2Ib)
for oblate tops.
Because the rotational energies now depend on K (as well as on J), the
degeneracies are lower than for spherical tops. In particular, because the energies do not
depend on M and depend on the square of K, the degeneracies are (2J+1) for states with
K=0 and 2(2J+1) for states with |K| > 0; the extra factor of 2 arises for |K| > 0 states
because pairs of states with K = +|K| and K = -|K| are degenerate.
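These degeneracy counts are easy to verify by direct enumeration. The sketch below (Python, in assumed reduced units h = 1, with illustrative moments Ia = 1 and Ib = Ic = 3, a prolate top) tabulates EJ,K for J = 2 over all (K, M) pairs and counts how many pairs share each energy:

```python
# Degeneracies of a prolate symmetric top (hbar = 1): E_{J,K} depends on
# K^2 but not on M, so levels with +/-|K| coincide.  Ia = 1, Ib = Ic = 3
# are illustrative values with Ia < Ib (prolate).
Ia, Ib = 1.0, 3.0

def E(J, K):
    # E_{J,K} = J(J+1)/2Ib + K^2 (1/2Ia - 1/2Ib)
    return J * (J + 1) / (2.0 * Ib) + K**2 * (1.0 / (2.0 * Ia) - 1.0 / (2.0 * Ib))

J = 2
levels = {}
for K in range(-J, J + 1):
    for M in range(-J, J + 1):        # E is independent of M
        key = round(E(J, K), 10)
        levels[key] = levels.get(key, 0) + 1

for e in sorted(levels):
    print(e, levels[e])               # degeneracy (2J+1) for K=0, 2(2J+1) for |K|>0
```

The counts sum to (2J+1)2, the spherical-top degeneracy, which is split but not reduced in total by the K-dependence.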
IV. Summary
This Chapter has shown how the solution of the Schrödinger equation governing
the motions and interparticle potential energies of the nuclei and electrons of an atom or
molecule (or ion) can be decomposed into two distinct problems: (i) solution of the
electronic Schrödinger equation for the electronic wavefunctions and energies, both of
which depend on the nuclear geometry and (ii) solution of the vibration/rotation
Schrödinger equation for the motion of the nuclei on any one of the electronic energy
surfaces. This decomposition into approximately separable electronic and nuclear-
motion problems remains an important point of view in chemistry. It forms the basis of
many of our models of molecular structure and our interpretation of molecular
spectroscopy. It also establishes how we approach the computational simulation of the
energy levels of atoms and molecules; we first compute electronic energy levels at a 'grid'
of different positions of the nuclei, and we then solve for the motion of the nuclei on a
particular energy surface using this grid of data.
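The grid strategy just described can be sketched in a few lines. The example below (Python, in assumed reduced units h = µ = 1) takes a model harmonic electronic surface E(R) = R2/2 sampled on a grid, builds a finite-difference Hamiltonian for the nuclear motion, and recovers the analytic vibrational levels ω(v + 1/2); the grid size and range are illustrative choices:

```python
import numpy as np

# 'Grid' solution of the nuclear-motion Schrodinger equation (hbar = mu = 1)
# on a model harmonic electronic surface E(R) = R^2/2 sampled pointwise,
# using a second-difference approximation to the kinetic energy.
N = 1500
R = np.linspace(-8.0, 8.0, N)
dR = R[1] - R[0]
V = 0.5 * R**2                             # sampled potential-energy surface

# -1/2 d^2/dR^2 -> diagonal 1/dR^2, off-diagonals -1/(2 dR^2)
T = (np.diag(np.full(N, 1.0 / dR**2))
     - 0.5 / dR**2 * (np.eye(N, k=1) + np.eye(N, k=-1)))
H = T + np.diag(V)

E = np.linalg.eigvalsh(H)[:4]
print(E)                                   # ~ 0.5, 1.5, 2.5, 3.5
```

In practice the sampled surface comes from electronic structure calculations at each grid point rather than from a closed-form model, but the nuclear-motion step is the same.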
Electronic motion is treated in detail in Sections 2, 3, and 6, where
molecular orbitals and configurations and their computer evaluation are covered. The
vibration/rotation motion of molecules on BO surfaces is introduced above, but should be
treated in more detail in a subsequent course in molecular spectroscopy.
Section Summary
This Introductory Section was intended to provide the reader with an overview of
the structure of quantum mechanics and to illustrate its application to several exactly
solvable model problems. The model problems analyzed play especially important roles in
chemistry because they form the basis upon which more sophisticated descriptions of the
electronic structure and rotational-vibrational motions of molecules are built. The variational
method and perturbation theory constitute the tools needed to make use of solutions of
simpler model problems as starting points in the treatment of Schrödinger equations that are
impossible to solve analytically.
In Sections 2, 3, and 6 of this text, the electronic structures of polyatomic
molecules, linear molecules, and atoms are examined in some detail. Symmetry, angular
momentum methods, wavefunction antisymmetry, and other tools are introduced as needed
throughout the text. The application of modern computational chemistry methods to the
treatment of molecular electronic structure is included. Given knowledge of the electronic
energy surfaces as functions of the internal geometrical coordinates of the molecule, it is
possible to treat vibrational-rotational motion on these surfaces. Exercises, problems,
and solutions are provided for each Chapter. Readers are strongly encouraged to work
these exercises and problems because new material that is used in other Chapters is often
developed within this context.