Approximation Techniques for Large Finite Quantum Many-Body
Systems
by
Shen Yong Ho
A thesis submitted in conformity with the requirements
for the degree of Doctor of Philosophy
Graduate Department of Physics
University of Toronto
Copyright © 2009 by Shen Yong Ho
Abstract
Approximation Techniques for Large Finite Quantum Many-Body Systems
Shen Yong Ho
Doctor of Philosophy
Graduate Department of Physics
University of Toronto
2009
In this thesis, we will show how certain classes of quantum many-body Hamiltonians with
su(2)1 ⊕ su(2)2 ⊕ . . . ⊕ su(2)k spectrum generating algebras can be approximated by multi-
dimensional shifted harmonic oscillator Hamiltonians. The dimensions of the Hilbert spaces of
such Hamiltonians usually depend exponentially on k. This can make obtaining eigenvalues
by diagonalization computationally challenging. The Shifted Harmonic Approximation (SHA)
developed here gives good predictions of properties such as ground state energies, excitation en-
ergies and the most probable states in the lowest eigenstates. This is achieved by solving only a
system of k equations and diagonalizing k×k matrices. The SHA gives accurate approximations
over wide domains of parameters and in many cases even across phase transitions.
The SHA is first illustrated using the Lipkin-Meshkov-Glick (LMG) model and the Canonical
Josephson Hamiltonian (CJH) which have su(2) spectrum generating algebras. Next, we extend
the technique to the non-compact su(1, 1) algebra, using the five-dimensional quartic oscillator
(5DQO) as an example. Finally, the SHA is applied to a k-level Bardeen-Cooper-Schrieffer
(BCS) pairing Hamiltonian with fixed particle number. The BCS model has an su(2)₁ ⊕ su(2)₂ ⊕
. . . ⊕ su(2)_k spectrum generating algebra.
An attractive feature of the SHA is that it also provides information to construct basis
states which yield very accurate eigenvalues for low-lying states by diagonalizing Hamiltonians
in small subspaces of huge Hilbert spaces. For Hamiltonians that involve a smaller number
of operators, accurate eigenvalues can be obtained using another technique developed in this
thesis: the generalized Rowe-Rosensteel-Kerman-Klein equations-of-motion method (RRKK).
The RRKK is illustrated using the LMG and the 5DQO. In RRKK, solving unknowns in a set
of 10× 10 matrices typically gives estimates of the lowest few eigenvalues to an accuracy of at
least eight significant figures. The RRKK involves optimization routines which require initial
guesses of the matrix representations of the operators. In many cases, very good initial guesses
can be obtained using the SHA.
The thesis concludes by exploring possible future developments of the SHA.
Dedication
This thesis is dedicated to my wife Pui Yee Lum,
to my mother Nwan Hiang Yap and father Yuit Meng Ho.
Acknowledgements
First and foremost, I wish to express my sincere thanks to my supervisor Professor David
Rowe for his unfailing support, guidance and encouragement. His wealth of knowledge and
research insights have always been the source of my inspiration and will continue to be. I have
been most privileged to work with him and I thank him for patiently imparting the skills of a
researcher. I would like to thank Professor Joe Repka for his insightful advice on the thesis. I
would also like to express my gratitude to Professor Stephen Julian for his encouragement and
for always being ready to help me in various aspects of condensed matter physics.
I would like to thank my friends and colleagues at university and elsewhere: Peter Turner,
especially for introducing me to Matlab and effectively shortening the period I need to complete
my PhD; Gabriela Thiamova, for teaching me aspects of Nuclear Physics; Professor George
Rosensteel (Tulane University), for collaborative work in RRKK; Trevor Welsh, for skilfully
sharing his knowledge of Mathematical Physics, especially the much needed combinatorics in
the thesis; Stijn De Baerdemacker (and Veerle Hellemans), especially for their help in refining
the SHA and making the formalism more rigorous; and J.R. Gonzales Alonso, for introducing
the Canonical Josephson Hamiltonian to me (at the CAM 2007 conference), leading to a re-
exploration of the SHA which formed the core of the thesis. I would like to thank Professor Juliana
Carvalho, David Brookes, Lindsey Shorser and Maria Wesslen for sharing their knowledge in
the weekly seminars. I would also like to thank Professor Arun Paramekanti and Professor Allan
Griffin for their discussions on many-body physics.
Thanks to Krystyna Biel and Marianne Khurana for their excellent support in the course
of my graduate studies. I am grateful for the financial support from the following sources:
University of Toronto; Edward Christie Stevens Award; E.F. Burton Fellowship; the Agency
for Science, Technology and Research (especially Mr Philip Yeo and Professor G. Rajagopal);
and the Lee Foundation. I would especially like to thank Professor K.K. Phua and the World
Scientific Publishing Company for making special arrangements for a home-based part-time job
for my wife. I would like to thank Mr Wee Hiong Ang (Principal) and Mrs Ai Chin Tan (then
Head of Science), from Hwa Chong Junior College where I previously taught, who helped seek
financial support for my graduate studies.
I would like to thank my brothers and sisters-in-law Shen Shyang Ho, Chunguang Yu, Shen Teng
Ho and Peiru Liao for their support, especially Shen Shyang and Chunguang who made the trip
from Virginia to help us when we first arrived in Toronto. I would also like to thank my parents
Yuit Meng Ho and Nwan Hiang Yap, my parents-in-law Yew Sum Lum and Ai Choo Lee, and
sister-in-law Pui Pui Lum for their love and support, and for bearing with our absence, especially
my parents who made the trip to Toronto during the birth of my daughter.
Most importantly, I would like to express my deepest thanks to my wife Pui Yee Lum who
gave up a good and comfortable career in Singapore but has never wavered in her support in
this arduous expedition. She has taken over a large part of my responsibilities at home so that
I can stay focused on my work. I would also like to thank my children Zhe Xi and Zhi Ling, who,
in their child-likeness, have continually refreshed my perspectives.
I thank God for His provisions.
Post defence: I would like to thank Professor Theodore Shepherd (convenor for departmental
oral examination) and Professor Hubert de Guise (external examiner for final oral examination)
for their suggestions in improving the thesis.
Contents

1 Introduction
2 Some Relevant Aspects of Many-body Physics
  2.1 Preliminaries
  2.2 The Lipkin-Meshkov-Glick (LMG) Model
  2.3 Two Traditional Approximation Techniques and the LMG Spectrum
3 The SHA and an Equations-of-Motion Approach
  3.1 The Shifted Harmonic Approximation
  3.2 Applying the SHA to the LMG Model
  3.3 Accuracy Improvement
  3.4 An Equations-of-Motion Approach: RRKK
  3.5 Numerical Results for RRKK
  3.6 Discussion
  Appendix 1: Issues related to the Zero Point Energy
4 The CJH and 5-D Quartic Oscillator
  4.1 Bose-Einstein Condensates and the Canonical Josephson Hamiltonian
  4.2 The 5-D Quartic Oscillator
5 The BCS Pairing Hamiltonian
  5.1 Outline of the BCS Theory
  5.2 Application of the SHA to the BCS Hamiltonian
  5.3 Comparison between the BCS approach and the SHA
  5.4 A sample 10-level model
6 Conclusion
Bibliography
Chapter 1
Introduction
“Professor G. E. Brown has pointed out that, for those interested in exact solutions,
this can be answered by a look at history. In eighteenth-century Newtonian mechanics,
the three-body problem was insoluble. With the birth of general relativity and
quantum electrodynamics in 1930, the two- and one-body problems became insoluble.
And within the modern quantum field theory, the problem of zero bodies (vacuum)
is insoluble. So, if we are out after exact solutions, no bodies at all is already too
many.”
taken from “A Guide to Feynman Diagrams in the Many-body Problem”
by Richard D. Mattuck, 1963 [47].
In this thesis, we are not so much interested in ‘exact’ solutions as in addressing the enormous
difficulty of solving very large interacting many-body problems to an acceptable level
of precision. In many situations, a good simple physical model will give us a sense of the
underlying physics in nature. However, it is usually not easy, and sometimes even impossible,
to solve simple physical models exactly. This is the subject of the thesis: to systematically obtain
good approximate solutions for a class of quantum many-body problems that pose a computational
challenge.
On the length scale that is of interest to most researchers in quantum many-body physics, we
have very good solutions for quantum two-body systems. The famous example is Schrödinger’s
treatment of the hydrogen atom. However, it is not easy to obtain numerical solutions for
models of multi-particle interacting systems, and even in cases where it is possible, it is often
computationally challenging. For example, as observed by X. G. Wen in his book “Quantum
Field Theory of Many-Body Systems” [78], a model of eleven interacting electrons described by
the corresponding Schrödinger equation could be solved with 32 Mbytes of RAM on a workstation
in the 1980s. Two decades later, with computational power increased one hundred times¹, we
can only solve a system with two more electrons. On the other hand, even if much larger
systems could be solved to arbitrary accuracy, we would not necessarily gain additional
physical insights.
In light of this, it may seem futile to study many-body Physics. However, when we examine
the low-lying spectrum of atomic, molecular or nuclear systems, we are not always faced with
something incomprehensible but rather we often see resemblances to something we already
know, such as the quantum harmonic oscillator or rotor spectrum. For example, in figure (1.1)
we see the part of the spectrum of the ¹⁵⁴₆₈Er nucleus identified as the ground state band [72]. This
is characteristic of the sequence of states observed for nuclei with mass number in the region
150 < A < 180. These low-lying levels can be closely fitted with a model of a quantum rotor,
which has energy levels given by E_J = (ℏ²/2I) J(J + 1), where I is the moment of inertia of the rotor
and J is the angular momentum quantum number. Using the first excited state to fix ℏ²/2I, we
list the values of the corresponding quantum rotor in figure (1.1) for comparison [38].

Jᵖ    Energy in keV
0⁺    0.0
2⁺    91.4      (E₂ = (ℏ²/2I)·2(2+1) = 91.4)
4⁺    299.43    (E₄ = 304.7)
6⁺    614.37    (E₆ = 639.8)
8⁺    1024.52   (E₈ = 1096.8)
10⁺   1518.06   (E₁₀ = 1675.7)
12⁺   2082.79   (E₁₂ = 2376.4)

Figure 1.1: Part of the spectrum of the ¹⁵⁴₆₈Er nucleus identified as the ground state band. The
spacings of the levels bear resemblance to those of a quantum rotor. A simple computation of
the energy levels based on a rotor model is given in parentheses for comparison.

¹For such numerical computations, it is usually the RAM that determines the size of the model the computer
can handle. The processor speed determines the duration of the computation.
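The rotor fit in figure (1.1) is a one-line computation; the following short sketch reproduces the bracketed column of the figure from the 2⁺ level alone:

```python
# Rigid-rotor fit to the ground-state band of 154Er (figure 1.1):
# E_J = (hbar^2/2I) J(J+1), with hbar^2/2I fixed by the 2+ level at 91.4 keV.
def rotor_energies(e2_kev=91.4, j_values=(0, 2, 4, 6, 8, 10, 12)):
    hbar2_over_2I = e2_kev / (2 * (2 + 1))      # J(J+1) = 6 for the 2+ state
    return {J: hbar2_over_2I * J * (J + 1) for J in j_values}

for J, E in rotor_energies().items():
    print(f"{J}+  {E:8.1f} keV")    # reproduces the bracketed column of figure 1.1
```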
Figure 1.2: Empirical values of the ratio R = E(4⁺)/E(2⁺) for even-even nuclei over a range of mass
number A. The lines connect sequences of isotopes. Taken from Scharff-Goldhaber [71]. Reprinted,
with permission, from the Annual Review of Nuclear and Particle Science, Volume 26 © 1976
by Annual Reviews.
In fact, if we study the ratio of the energies of the low-lying states, R = E(4⁺)/E(2⁺), obtained
from experiments for even-even nuclei over a range of mass number (as shown in figure (1.2)),
we observe some trends. For mass number A < 150, except for nuclei near closed shells, R is
approximately 2. Thus, nuclei in this region can generally be treated with models based on
vibrations about a spherical equilibrium shape. From the experimental results in figure (1.2),
we see that atomic nuclei in the domains 150 < A < 180 and A > 230 have R ≈ 10/3 and can be
modelled as rotors. Here, there is strong evidence that the nucleons behave as a collective entity
- like a suspended droplet of liquid. This simple analysis has revealed two main types of nuclear
collective motions - vibrations and rotations. A detailed theory based on the liquid drop model has
been developed by Bohr and Mottelson and many others from the Copenhagen school (see [9]).
In figure (1.2), for certain isotopes in the 150 < A < 180 region, we note a change in the ratio
from R ≈ 2 to R ≈ 10/3. This change from a vibrator-like spectrum to a rotor-like spectrum
can be regarded as a phase transition. Thus, instead of just looking at individual many-body
systems, we can study collections of many-body systems with some common parameters that
Figure 1.3: The temperature dependence of the energy gaps of lead and vanadium. The solid
curve is the prediction of the BCS theory. In both cases, Eg(0) was selected to fit the theoretical
curve [52]. Reprinted figure with permission from P. L. Richards and M. Tinkham, Physical
Review 119, 575-590, 1960. Copyright 1960 by the American Physical Society.
can be varied to get from one system or phase to another. Such studies will give us greater
insights into the nature of the interactions within the systems.
Next, we look at a condensed matter system where the number of interacting particles is
of the order of Avogadro’s constant. Since the number of particles involved is large, either
a statistical or a quantum field treatment is often appropriate. Typically, theoretical predic-
tions of macroscopic quantities such as specific heats can be compared against experimental
measurements. It is only in the last decade that experimental techniques have been devel-
oped to resolve the energy levels for condensed matter systems which are clustered very closely
in bands. Before that, it was mainly the characteristics of energy bands such as band gaps
that were studied. We show in figure (1.3) an illustration of the successful application of the
Bardeen-Cooper-Schrieffer (BCS) theory [5] in the description of the change in the energy gap
in the metal-superconductor phase transition in lead and vanadium. The metals cease to be
superconducting when the gap vanishes above the transition temperature Tc. In this case, we
have a single many-body system in the statistical limit and an external control parameter,
temperature, that is varied continuously.
In the past decade, spectroscopic techniques for condensed matter systems have advanced
tremendously and now it is possible to resolve low-lying energy levels of metallic nano-grains
using single-electron tunnelling (SET) [6] and scanning tunnelling microscopy (STM) techniques
(see [1] for example). At this size, the grains are only one or two orders of magnitude larger than
the atomic spacing and these ‘spectra’ are greatly dependent on the shape and dimensions of
the sample. The total number of conduction electrons in each sample can range from hundreds
to hundreds of thousands. A very small fraction of these electrons near the Fermi surface form
Cooper pairs when the conditions are right. These pairs are responsible for the superconducting
behaviour.
In the traditional BCS wave function of the pairing model that approximately describes the
collection of Cooper pairs, the number of pairs is allowed to fluctuate about a fixed mean value.
This small fluctuation is not a major concern in the statistical limit but needs to be taken care
of when the pair number is small. When particle number conservation becomes significant in a
small sample, this approximation may not work well for the BCS pairing Hamiltonian. While
the system is smaller, the Hamiltonians matrices defined on these Hilbert space are still often
too large to be diagonalized.
Our objectives in this thesis are
1. to systematically bring out the salient features of such a Hamiltonian across a wide range
of parameters, and
2. to accurately approximate the low-lying energy levels of a Hamiltonian that is computa-
tionally too challenging to diagonalize.
In this thesis, a technique to achieve the two objectives has been developed. The essence of
the technique is to approximate an eigenvalue problem for an algebraic Hamiltonian H({X_ν})
(where the X_ν are the operators of a Lie algebra, g), defined on a discrete basis, by a differential
equation for a Hamiltonian H(x, d/dx) on a continuous basis. Here x denotes a set of variables
constructed from the state labels. For example, in su(2), where the basis states are labelled
by |j, m⟩, we can define x = m/j and approximate it as a continuous variable. Essentially, an
algebraic Hamiltonian H({X_ν}) is mapped into a differential operator H(x, d/dx) such that its
i-th low-lying eigenfunction ψ_i(x) is a good approximation for the distribution of the components
of the i-th eigenvector |φ_i⟩:

    |m⟩ → x,
    ⟨m|H|φ_i⟩ → H ψ_i(x).
This approach, introduced by Chen et al. [13], is improved and extended in this thesis. The full
details will be given in chapter 3. For now, we just mention that the su(2) operators Jz and J±
can be mapped to differential operators, 𝒥_z and 𝒥_±, defined for relatively large values of j by

    ⟨m|J_z|φ_i⟩ → 𝒥_z ψ_i(x) = j x ψ_i(x),

    ⟨m|J_±|φ_i⟩ → 𝒥_± ψ_i(x) = j √[(1 ∓ x + 1/j)(1 ± x)] e^{∓(1/j) d/dx} ψ_i(x)
                             ≈ j √(1 − x²) [ 1 ∓ (1/j) d/dx + (1/2j²) d²/dx² ] ψ_i(x).
If the wave functions of H are localized about a value x_o (yet to be determined), it is reasonable
to expand √(1 − x²) about x = x_o up to the first few terms in x′ = x − x_o. For hermitian
Hamiltonians that involve either su(2) or su(1, 1) operators, such an approximation, when taken
up to bilinear terms in x′ and d/dx′, will often have the form of a shifted harmonic oscillator

    H(x′, d/dx′) ≈ H_SHA = −A (1/j²) d²/dx′² + B j² x′² + D j x′ + E.
Thus, we call this technique the Shifted Harmonic Approximation (SHA). Now, the origin of
the x′ coordinates is at xo, where the minimum of the SHA oscillator potential is. Thus, the
‘shift’ of the oscillator is zero in the x′ coordinates. To obtain xo, we can simply solve D = 0.
Similarly for Hamiltonians that have su(2)1 ⊕ su(2)2⊕ . . . su(2)k spectrum generating algebras,
we can approximate them as multi-dimensional shifted harmonic oscillators. The important
features of the Hamiltonian are captured in the expressions of xo, A, B, D and E. This is
how the first objective is achieved. The second objective of accurately approximating the low-
lying eigenenergies of the Hamiltonians can be achieved either by diagonalizing in a smaller but
appropriate basis based on xo, A and B, or by using the so-called RRKK equations-of-motion
technique based on methods by Rowe, Rosensteel, Kerman and Klein [33]. The details will be
discussed in chapter 3.
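When D = 0, H_SHA is a standard harmonic oscillator: identifying A/j² with ℏ²/2m_eff and Bj² with ½m_eff ω² gives ω = 2√(AB), so the levels sit at E + (2n + 1)√(AB), independent of j. This is easy to check numerically; the sketch below (the values A = B = 1, j = 10 and the grid are arbitrary illustrative choices, not tied to any model in the thesis) diagonalizes a finite-difference version of H_SHA:

```python
import numpy as np

# Finite-difference check of the SHA oscillator
#   H_SHA = -(A/j^2) d^2/dx'^2 + B j^2 x'^2    (D = 0 at the minimum; E dropped)
# whose analytic levels are (2n+1) sqrt(A*B), independent of j.
def sha_levels(A=1.0, B=1.0, j=10, L=1.0, npts=1500):
    x, dx = np.linspace(-L, L, npts, retstep=True)
    off = np.full(npts - 1, 1.0)
    # kinetic term: -(A/j^2) times the standard three-point second difference
    kin = -(A / j**2) / dx**2 * (np.diag(off, -1) - 2.0 * np.eye(npts) + np.diag(off, 1))
    pot = np.diag(B * j**2 * x**2)
    return np.linalg.eigvalsh(kin + pot)[:4]

print(sha_levels())   # approaches [1, 3, 5, 7] as the grid is refined
```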
As we have seen in the above two examples, typically when the parameter in the Hamiltonian
is varied, the corresponding many-body system can move from one phase to another through
the critical point(s). Typically, traditional approximation techniques work on one side of the
critical point(s) and break down at the critical point(s). In the examples in this thesis, we
will show that the SHA is able to make reasonably accurate predictions on both sides of the
critical point and it can serve as a useful tool to study phase transitions. Within the thesis,
the approximations obtained using the SHA or RRKK are sometimes compared with ‘exact’
results. These ‘exact’ results are obtained by diagonalizing the Hamiltonian matrices in the
full Hilbert spaces (for compact algebras like su(2)). In cases where the algebra is non-compact
(such as su(1, 1)) and the dimension of the Hamiltonian matrices is infinite, the Hamiltonian
matrices constructed using a subset of the basis states are diagonalized instead. The number of
basis states used is increased progressively to check for convergence to the precision required.
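As an illustration of such benchmark calculations, the following sketch diagonalizes an su(2) Hamiltonian of LMG type in the full (2j + 1)-dimensional basis. The specific form H = εJ_z + ½V(J₊² + J₋²) is chosen here purely for illustration (the LMG model itself is introduced in chapter 2):

```python
import numpy as np

def su2_matrices(j):
    """Jz and J+ in the basis |j, m>, m = -j, ..., j (in that order)."""
    m = np.arange(-j, j + 1, dtype=float)
    Jz = np.diag(m)
    # <m+1| J+ |m> = sqrt((j - m)(j + m + 1))
    Jp = np.diag(np.sqrt((j - m[:-1]) * (j + m[:-1] + 1.0)), -1)
    return Jz, Jp

def lmg_spectrum(j, eps=1.0, V=0.0):
    """'Exact' spectrum of H = eps*Jz + (V/2)(J+^2 + J-^2) on the full basis."""
    Jz, Jp = su2_matrices(j)
    Jm = Jp.T
    H = eps * Jz + 0.5 * V * (Jp @ Jp + Jm @ Jm)
    return np.linalg.eigvalsh(H)

print(lmg_spectrum(10, eps=1.0, V=0.0)[0])   # -10.0: the unperturbed ground state
```

Switching on V lowers the ground state (the trial state |j, −j⟩ bounds it from above), which is the kind of trend the SHA is designed to track.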
In chapter 2, we will review some relevant aspects of quantum many-body physics and in-
troduce the Lipkin-Meshkov-Glick (LMG) model which involves su(2) operators. We will also
compute the ground state and excitation energy of the LMG using two traditional approx-
imation techniques. In chapter 3, the SHA and RRKK are developed and illustrated using
the LMG. In chapter 4, we extend the application to the Canonical Josephson Hamiltonian
(CJH) which also involves su(2) operators but has more features. The CJH is of interest in the
study of Bose-Einstein condensate (BEC) systems. In this same chapter, some modifications
are made to adapt the SHA for Hamiltonians with the non-compact su(1, 1) spectrum gener-
ating algebra, i.e. the matrix representations are infinite dimensional. The many-body model
investigated is related to the vibrator-rotor transition mentioned earlier. In chapter 5, we will
discuss how the SHA can be applied to the multi-level BCS pairing Hamiltonian, which has
an su(2)₁ ⊕ su(2)₂ ⊕ . . . ⊕ su(2)_k spectrum generating algebra. For a k-level model, we will see
that we just need to solve a set of k equations and diagonalize two k×k matrices to extract the
most important information about the system. When the pair number in the system is fixed,
the SHA gives a (k − 1)-dimensional shifted coupled oscillator.
The dimensions of the Hilbert space for a model with an su(2)1⊕su(2)2⊕. . . su(2)p⊕. . . su(2)k
spectrum generating algebra can be extremely large and in such cases, the diagonalization of
the Hamiltonian matrix will be computationally intractable. If the total spin jp for each copy
of the su(2) algebra in the model is sufficiently large, the SHA is expected to give accurate ap-
proximations of the physical quantities associated with the model. Another possible application
of the SHA is in the Bose-Hubbard model which is of interest to both the condensed matter
physics and the quantum information physics community. The dimensions of the Hilbert space
associated with the Bose-Hubbard model can be extremely large and usually only small systems
are studied. A more detailed discussion of the Bose-Hubbard model is given in the conclusion
along with other possible developments of the SHA.
Parts of this thesis have been/will be published:
1. S.Y. Ho, G. Rosensteel and D.J. Rowe, Equations-of-Motion Approach to Quantum Me-
chanics: Application to a Model Phase Transition, Phys. Rev. Lett. 98 080401 (2007).
2. G. Rosensteel, D.J. Rowe and S.Y. Ho, Equations-of-Motion for a Spectrum Generating
Algebra: Lipkin-Meshkov-Glick model, J. Phys. A: Math. Theor. 41, 025208 (2008).
3. S.Y. Ho, D.J. Rowe and S. De Baerdemacker, The Shifted Harmonic Approximation and
multi-level pairing models, (in preparation).
4. S.Y. Ho, D.J. Rowe and S. De Baerdemacker, The Shifted Harmonic Approximation, (in
preparation).
Chapter 2
Some Relevant Aspects of
Many-body Physics
2.1 Preliminaries
A natural starting point when extending from a single particle quantum system to a many-body
quantum system is to first consider a system of N identical but non-interacting particles. In
this independent particle system, the total Hamiltonian H_o is simply the sum of the individual
one-body Hamiltonians

    H_o = Σ_{i=1}^{N} h_i = Σ_{i=1}^{N} [ −(ℏ²/2m) ∇_i² + U(r_i) ]    (2.1)

where U is the potential acting on the particles. H_o can be solved as easily as h_i alone. In
fact, if h ψ_{ν_i}(r_i) = ε_{ν_i} ψ_{ν_i}(r_i), where ν_i indexes the state occupied by particle i and the ψ_k(r) are a
set of eigenfunctions of h, the total energy of the system will simply be E = Σ_{i=1}^{N} ε_{ν_i}. However,
the corresponding wave function is not simply Ψ(r_1, . . . , r_i, . . . , r_N) = Π_{i=1}^{N} ψ_{ν_i}(r_i). This is
because we have to take into consideration the indistinguishability of the particles. Thus, for
an N -particle system, if the coordinates of two particles labeled by j and k are exchanged, the
wave functions differ only up to a sign,
Ψ(r1, . . . , rj , . . . , rk, . . . , rN ) = ±Ψ(r1, . . . , rk, . . . , rj , . . . , rN ). (2.2)
This permutation symmetry arises from the indistinguishability of identical quantum particles
and suggests the existence of two species of particles - bosons (corresponding to ‘+’ sign) and
fermions (corresponding to ‘−’ sign). Bosons are allowed to be in the same quantum state, while
fermions are not, because Ψ = 0 when r_j = r_k. Thus, fermions obey the Pauli exclusion
principle.
Depending on the type of particles involved, the correct wave function should be written as
an (anti-)symmetrized sum of products Π_{i=1}^{N} ψ_{ν_i}(r_i):

    Ψ(r_1, . . . , r_i, . . . , r_N) = (1/√N!) Σ_{π∈S_N} sign(π) Π_{i=1}^{N} ψ_{ν_i}(r_{π(i)})

where S_N is the set of all possible permutations π of N distinct objects. For fermions,
sign(π) = +1 for even permutations and −1 for odd permutations. For bosons, sign(π) = 1
for all permutations. The determinant/permanent expressions are rather cumbersome and we
will next introduce the more economical occupation notation in which the relevant information
for our purposes is encapsulated. Previously, we labelled indistinguishable particles and later
took the extra step of doing permutations to make them indistinguishable again. In the new
notation, we just track the number of particles occupying each single particle state directly
since the particles are indistinguishable. Thus,
    Ψ(r_1, . . . , r_i, . . . , r_N) → |Ψ_{ν_1,...,ν_N}⟩ = |n_{ν_1}, . . . , n_{ν_i}, . . .⟩

where n_{ν_i} is the number of particles occupying state ψ_{ν_i}. We define F_N as the space spanned
by the set of occupation number basis states |Ψ_{ν_1,...,ν_N}⟩ with Σ_i n_{ν_i} = N. The Fock space is
given by F = F_0 ⊕ F_1 ⊕ F_2 ⊕ . . .. In this space, we can define occupation number operators n̂_{ν_i}
as

    n̂_{ν_i}|n_{ν_1}, . . . , n_{ν_i}, . . .⟩ = n_{ν_i}|n_{ν_1}, . . . , n_{ν_i}, . . .⟩
where the occupation number basis states are their eigenvectors. For bosons, n_{ν_i} = 0, 1, . . . , N.
Borrowing the harmonic oscillator raising and lowering operators, we can write n̂_{ν_i} = b†_{ν_i} b_{ν_i}
where

    b_{ν_i}|n_{ν_1}, . . . , n_{ν_i}, . . .⟩ = √(n_{ν_i}) |n_{ν_1}, . . . , n_{ν_i} − 1, . . .⟩,
    b†_{ν_i}|n_{ν_1}, . . . , n_{ν_i}, . . .⟩ = √(n_{ν_i} + 1) |n_{ν_1}, . . . , n_{ν_i} + 1, . . .⟩,

and b_{ν_i}|n_{ν_1}, . . . , n_{ν_i} = 0, . . .⟩ = 0. For bosons, the raising and lowering operators follow the usual
commutation relations

    [b_{ν_j}, b_{ν_k}] = 0,   [b†_{ν_j}, b†_{ν_k}] = 0,   [b_{ν_j}, b†_{ν_k}] = δ_{ν_j ν_k}.
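These boson relations are easy to realize concretely with truncated matrices, with one caveat that matters whenever a boson problem is diagonalized in a finite basis: [b, b†] = 1 then fails in the top retained state. A small sketch:

```python
import numpy as np

# Truncated matrix realization of b and b† on span{|0>, ..., |nmax>},
# with b|n> = sqrt(n)|n-1>.  The commutator [b, b†] = 1 holds everywhere
# except in the top retained state, where the truncation spoils it.
def boson_ops(nmax):
    n = np.arange(1, nmax + 1, dtype=float)
    b = np.diag(np.sqrt(n), 1)      # entry <n-1| b |n> = sqrt(n)
    return b, b.T

b, bdag = boson_ops(6)
print(np.diag(b @ bdag - bdag @ b))   # 1, 1, ..., 1, then -6 in the top state
```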
For fermions, because not more than one particle can occupy a quantum state, n_{ν_i} = 0 or 1
only. Hence,

    f†_{ν_i}|n_{ν_1}, . . . , n_{ν_i} = 0, . . .⟩ = |n_{ν_1}, . . . , n_{ν_i} = 1, . . .⟩    (2.3)
    f†_{ν_i}|n_{ν_1}, . . . , n_{ν_i} = 1, . . .⟩ = 0    (2.4)
    f_{ν_i}|n_{ν_1}, . . . , n_{ν_i} = 1, . . .⟩ = |n_{ν_1}, . . . , n_{ν_i} = 0, . . .⟩    (2.5)
    f_{ν_i}|n_{ν_1}, . . . , n_{ν_i} = 0, . . .⟩ = 0    (2.6)
The fermionic operators follow the anti-commutation relations

    {f_{ν_j}, f_{ν_k}} = 0,   {f†_{ν_j}, f†_{ν_k}} = 0,   {f_{ν_j}, f†_{ν_k}} = δ_{ν_j ν_k}

where {A, B} = AB + BA. This is consistent with the exclusion principle. For example,
(f†_{ν_i})²| . . . , n_{ν_i} = 0, . . .⟩ = 0, preventing more than one particle from occupying the same state. We
also see that the order in which the operators act now matters: for example, f†_{ν_j} f†_{ν_k} = −f†_{ν_k} f†_{ν_j}.
Thus, a convention has to be defined in the way the states are labeled:

    |n_{ν_1}, . . . , n_{ν_i}, . . .⟩ := (f†_{ν_1})^{n_{ν_1}} . . . (f†_{ν_i})^{n_{ν_i}} . . . |0⟩,

i.e. the operators are ordered in the same way as the quantum numbers in the ket. Since
{f†_{ν_j}, f†_{ν_k}}|n_{ν_1}, . . . , n_{ν_j} = 0, . . . , n_{ν_k} = 0, . . .⟩ = 0, we get

    |n_{ν_1}, . . . , n_{ν_j} = 1, . . . , n_{ν_k} = 1, . . .⟩ = −|n_{ν_1}, . . . , n_{ν_k} = 1, . . . , n_{ν_j} = 1, . . .⟩
as required by fermions in equation (2.2). The approach that we have used in the treatment of
many-body systems is known as second quantization. Next, we will incorporate the interaction
term into the Hamiltonian in equation (2.1) in second quantized form. We will use a and
a† to represent the lowering and raising operators for concepts applicable to both bosons and
fermions. It is the (anti-)commutation relation that distinguishes the two types of particles.
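The sign bookkeeping above can be checked numerically by representing the f_ν as matrices on the 2^M-dimensional Fock space. The construction below, a string of sign operators in front of a single-mode lowering operator, is one standard way to realize equations (2.3)-(2.6) together with the anti-commutation relations; it is an illustration, not a construction taken from the thesis:

```python
import numpy as np

# Fermion annihilation operators f_1, ..., f_M on the 2^M-dimensional Fock
# space: a string of sign operators ensures {f_i, f_j} = 0 and
# {f_i, f_j^dagger} = delta_ij hold exactly.
def fermion_ops(M):
    lower = np.array([[0.0, 1.0], [0.0, 0.0]])  # |0><1| on a single mode
    sign = np.diag([1.0, -1.0])                 # a (-1) for each occupied earlier mode
    ident = np.eye(2)
    ops = []
    for i in range(M):
        f = np.eye(1)
        for factor in [sign] * i + [lower] + [ident] * (M - i - 1):
            f = np.kron(f, factor)
        ops.append(f)
    return ops

f = fermion_ops(3)
vac = np.zeros(8); vac[0] = 1.0
# f1† f2† |0> = - f2† f1† |0>, exactly the sign convention in the text:
print(np.allclose(f[0].T @ f[1].T @ vac, -(f[1].T @ f[0].T @ vac)))   # True
```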
Revisiting equation (2.1), we can rewrite H_o, expressed in its own eigenstates, as

    H_o = Σ_{i=1}^{N} ( −(ℏ²/2m) ∇_i² + U(r_i) )
        → H_o = Σ_{k,l} ⟨ψ_k| −(ℏ²/2m) ∇² + U(r) |ψ_l⟩ a†_k a_l = Σ_{k,l} ε_k δ_{kl} a†_k a_l = Σ_k ε_k a†_k a_k.    (2.7)
The interaction term of the Hamiltonian can be written as

    H_I = (1/2) Σ_{i, j≠i}^{N} V(r_i − r_j).    (2.8)

The factor of 1/2 eliminates the double counting when i and j are exchanged, while j ≠ i excludes
self-interaction terms. If we regard a†_k (a_k) as an operator that creates (annihilates) a particle in
a state labeled by k, we can consider a two-body interaction as two particles in initial states
labeled by, say, m and n, then scattering to final states labeled by, say, k and l, with transition
amplitude V_{klmn} given by

    V_{klmn} = ∫ d³r ∫ d³r′ ψ*_k(r) ψ*_l(r′) V(|r − r′|) ψ_m(r) ψ_n(r′) = V_{lknm}.
Figure 2.1: A schematic representation of a two-body interaction. The incoming and outgoing
arrows represent initial and final states respectively and the wiggly line represents the transition
amplitude.
This process is schematically represented by the Feynman diagram shown in figure (2.1). Both
momentum and energy are conserved in the process.
Thus, the interaction term can be written as

    H_I = (1/2) Σ_{i, j≠i}^{N} V(r_i − r_j) → (1/2) Σ_{k,l,m,n} V_{klmn} a†_k a†_l a_n a_m.    (2.9)
The order of the operators is important here. For example, a†_k a_m a†_l a_n is not correct, as the
interaction term should vanish when there is only one particle present, i.e. k = l and m = n.
The order in equation (2.9) with all creation operators on the left of the annihilation operators
is known as normal order. Thus, the general interacting Hamiltonian¹ can be written as

    H = H_o + H_I = Σ_k ε_k a†_k a_k + (1/2) Σ_{k,l,m,n} V_{klmn} a†_k a†_l a_n a_m.    (2.10)

If an additional external potential V′ is present, we can include an extra term Σ_{k,l} V′_{kl} a†_k a_l,
where V′_{kl} = ∫ d³r ψ*_k(r) V′(r, p) ψ_l(r).
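The normal-ordering requirement is easy to test numerically: the ordered product a†_k a†_l a_n a_m annihilates every one-particle state, while the wrongly ordered a†_k a_m a†_l a_n does not. A sketch (the matrix realization of the fermion operators and the constant matrix element V = 1 are illustrative assumptions):

```python
import numpy as np

def fermion_ops(M):
    """Fermion annihilation matrices on the 2^M-dimensional Fock space."""
    lower, sign, ident = np.array([[0., 1.], [0., 0.]]), np.diag([1., -1.]), np.eye(2)
    ops = []
    for i in range(M):
        f = np.eye(1)
        for factor in [sign] * i + [lower] + [ident] * (M - i - 1):
            f = np.kron(f, factor)
        ops.append(f)
    return ops

M = 2
f = fermion_ops(M)
V = 1.0   # a constant matrix element V_klmn, purely for illustration
HI_normal = 0.5 * sum(V * f[k].T @ f[l].T @ f[n] @ f[m]
                      for k in range(M) for l in range(M)
                      for n in range(M) for m in range(M))
HI_wrong = 0.5 * sum(V * f[k].T @ f[m] @ f[l].T @ f[n]
                     for k in range(M) for l in range(M)
                     for n in range(M) for m in range(M))
vac = np.zeros(2 ** M); vac[0] = 1.0
one_particle = f[0].T @ vac   # a single particle in mode 0
print(np.allclose(HI_normal @ one_particle, 0.0))   # True: no self-interaction
print(np.allclose(HI_wrong @ one_particle, 0.0))    # False
```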
The above Hamiltonian is applicable to both fermions and bosons. As we have seen earlier,
because of the Pauli exclusion principle, a†_{ν_i}| . . . , n_{ν_i} = 1, . . .⟩ = 0; thus, fermions cannot scatter
into states that are already occupied. This places many restrictions on the possible two-body
interactions in fermionic systems. Consider a system of N non-interacting fermions, with equal
numbers of particles of each spin ±1/2, confined in a cube of side length L. The Schrödinger equation
for one particle of mass m is given by
    HΨ = −(ℏ²/2m) ( ∂²/∂x² + ∂²/∂y² + ∂²/∂z² ) Ψ = EΨ.    (2.11)
¹Only in rare cases are three- or more-body interactions considered (for example, skyrmions in nuclear physics), so we have not included such interactions.
The possible single particle energies of this system are

    E_{n_x, n_y, n_z} = (ℏ²π²/2mL²) (n_x² + n_y² + n_z²)    (2.12)
where nx, ny, nz are integers. Each state |nx, ny, nz〉 can be occupied by a pair of fermions with
opposite spins. The ground state of this N particle system corresponds to all the available
lowest energy states filled up. When N is very large, the surface that encloses the filled states
is approximately spherical. This surface is known as the Fermi surface.
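The counting behind the Fermi surface can be sketched directly from equation (2.12): enumerate the levels, fill them pairwise from the bottom, and every orbital strictly inside the resulting sphere is occupied. The quantum-number convention n_x, n_y, n_z ≥ 1 and the cutoff nmax are assumptions of this illustration:

```python
# Fill N spin-1/2 fermions into the single-particle levels of equation (2.12),
# E ∝ nx^2 + ny^2 + nz^2 (in units of ħ²π²/2mL²), two fermions per orbital.
# nx, ny, nz >= 1 and the enumeration cutoff nmax are illustrative choices.
def fermi_fill(N, nmax=30):
    levels = sorted((nx * nx + ny * ny + nz * nz, (nx, ny, nz))
                    for nx in range(1, nmax + 1)
                    for ny in range(1, nmax + 1)
                    for nz in range(1, nmax + 1))
    filled = levels[: (N + 1) // 2]     # each orbital holds two opposite spins
    e_fermi = filled[-1][0]
    return e_fermi, [q for _, q in filled]

e_f, occupied = fermi_fill(N=1000)
print(len(occupied))     # 500 doubly occupied orbitals
# Every occupied orbital lies on or inside the sphere n^2 = e_f; for large N
# the filled region approaches an octant of a sphere.
print(all(n[0]**2 + n[1]**2 + n[2]**2 <= e_f for n in occupied))   # True
```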
Now consider that the fermions are interacting. Fermions with low momenta (such as those
indicated by A and B in figure (2.2)) which are deep within the Fermi surface will not scatter
out of their original state because the states which are energetically within reach are filled.
Thus, in this model, a large fraction of the fermions do not scatter off each other most of the
time. This is indicative of the behaviour of real systems of fermions at low temperature. Only
interacting fermions near the Fermi surface will contribute to the two-body interaction terms.
Therefore, it is reasonable that models involving fermions are based on states near the Fermi
surface. Both the LMG model and BCS model which we will discuss later have this feature.
Figure 2.2: The fermions fill up from the lowest available states all the way up to the Fermi
surface. The two states indicated by A and B are deep within the Fermi surface with low
momenta. Fermions with these momenta will not scatter from their original state.
2.2 The Lipkin-Meshkov-Glick (LMG) Model
The Lipkin-Meshkov-Glick Model [46, 48, 29] was proposed in 1965. In the words of the authors,
its purpose was to “test the validity of various techniques and formalisms developed for treating
many-particle systems”. The LMG model is simple enough to be solved exactly to arbitrary
accuracy and yet have features that are non-trivial. It includes a parameter, controlling a simple
two-body interaction term, that allows the system to move from one phase to another. It also
exhibits features associated with the transition from a few-body model to the thermodynamic
limit of an infinite system. Since its inception, it has been used as a test for many approximation
methods and has been described as the benchmark test problem [58] for new approximation
techniques. The literature on the LMG model is extensive and we will list a few here to illustrate:
Operator Boson Expansion [19], Extended Holstein-Primakoff Approximation [24], Extended
Coupled Cluster techniques [58] and more recently, the Continuous Unitary Transformation [22].
In this section, we will solve the LMG model using two standard many-body approximation
techniques: the Hartree-Fock (HF) method and the Random Phase Approximation (RPA), and
compare them with the techniques we introduce.
The LMG model consists of N identical fermions occupying two single particle energy levels.
The two levels are both N -fold degenerate and separated by an energy ε. The lower level is
labelled by σ = − and the upper level by σ = +. (We can regard this model as having just two
active levels - one with energy ε/2 above the Fermi level and the other with energy ε/2 below.)
There is also another quantum number p labelling the degenerate states at each level. (See
figure (2.3).) The nature of the two-body interaction is such that the particles scatter without
changing the value of p. Let a†pσ (apσ) be the creation (annihilation) operator for a particle in
the p state of the σ level. The Hamiltonian of such a system can be written as
H_L = (ε/2) Σ_{p=1}^{N} (a†_{p+} a_{p+} − a†_{p−} a_{p−}) − (V/2) Σ_{p,p′=1}^{N} (a†_{p+} a†_{p′+} a_{p′−} a_{p−} + a†_{p−} a†_{p′−} a_{p′+} a_{p+}) (2.13)
where the term proportional to V scatters a pair of particles in the same level to another level.
Each particle has a different value of p and the interaction V does not change the value of p for
each particle.
Since each particle can either be in the ‘+’ or ‘−’ level, the dimension of the Hilbert space
with N particles is 2N . However, because all the p states at each level correspond to the
same single particle energy, it is the number of particles at each level that is important. This
information is captured indirectly in the first term of the Hamiltonian which we will define as
Jz:
J_z = (1/2) Σ_{p=1}^{N} (a†_{p+} a_{p+} − a†_{p−} a_{p−}). (2.14)
Figure 2.3: Schematic sketch of the two level LMG model. Each level labeled by σ = ± has
degeneracy N . Each particle has a different value of p and the interaction V does not change
the value of p for each particle.
The operator Jz gives half the difference in the number of particles in level ‘+’ and ‘−’. Similarly,
we can define
J_+ = Σ_{p=1}^{N} a†_{p+} a_{p−} and J_− = Σ_{p=1}^{N} a†_{p−} a_{p+} = (J_+)†. (2.15)
The operators J± and Jz satisfy the su(2) angular momentum commutation relations
[J_+, J_−] = 2J_z and [J_z, J_±] = ±J_±. (2.16)
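These relations are easy to verify numerically with the standard su(2) matrices (whose explicit matrix elements are given shortly in equations (2.20)-(2.21)); the check below, with j = 5/2 chosen for illustration, is a minimal sketch.

```python
import numpy as np

# Build J+, J-, Jz in the |j m> basis and confirm the su(2) algebra (2.16).
j = 2.5
m = np.arange(-j, j + 1)
jp = np.sqrt((j - m[:-1]) * (j + m[:-1] + 1))   # <m+1|J+|j m>
Jp = np.diag(jp, -1)                            # J+ lowers the column index m by raising m
Jm = Jp.T                                       # J- = (J+)^dagger
Jz = np.diag(m)

def comm(X, Y):
    return X @ Y - Y @ X
```

With these definitions, `comm(Jp, Jm)` equals 2 Jz and `comm(Jz, Jp)` equals +Jp, exactly as in equation (2.16).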
Thus, they are called ‘quasi-spin’ operators. This is because while they follow the angular
momentum algebra, they are not related to rotation in three dimensional space. With these
operators, equation (2.13) can be written as²
H_L = εJ_z − (V/2)(J_+² + J_−²) (2.17)
    = εJ_z − V(J_x² − J_y²) (2.18)
since J± = Jx ± iJy. We note that equation (2.17) commutes with the su(2) Casimir operator
J² = (1/2)(J_+J_− + J_−J_+) + J_z². (2.19)
Thus, the basis states of the Hamiltonian corresponding to particle number N can be assigned
the same label j = N/2. This set of basis states is {|j,m〉} where m = −j . . . j. The symmetry
reduces the largest size of the Hamiltonian matrix that needs to be diagonalized from 2N to
N + 1.
²The original LMG Hamiltonian has a third term −(W/2)(J_+J_− + J_−J_+), where the term proportional to W scatters one particle up while the other is scattered down. However, this term is already diagonal in the quasi-spin representation and is excluded in almost all applications of the LMG. We will do the same for comparison with the rest of the literature.
The Hamiltonian matrix can be constructed by using the well-known su(2) representation
⟨j m|J_z|j′ n⟩ = m δ_{j,j′} δ_{n,m} (2.20)
⟨j m|J_±|j′ n⟩ = √((j ∓ m + 1)(j ± m)) δ_{j,j′} δ_{n,m∓1}. (2.21)
We see that the matrix representation of J_z is diagonal, and that m takes integer values when N
is even and half-integer values when N is odd.
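This construction can be made concrete with a short numerical sketch (parameter values chosen here to match the two regions of figure (2.4)): H_L of equation (2.17) is built directly in the |j m⟩ basis from the matrix elements (2.20)-(2.21), so the matrix is only (N+1) × (N+1) rather than 2^N.

```python
import numpy as np

def lmg_matrix(N, V, eps=1.0):
    # H_L = eps*Jz - (V/2)(J+^2 + J-^2) in the Jz-diagonal basis, j = N/2.
    j = N / 2
    m = np.arange(-j, j + 1)
    jp = np.sqrt((j - m) * (j + m + 1))        # <m+1|J+|j m>
    c = jp[:-2] * jp[1:-1]                     # <m+2|J+^2|j m>
    return (np.diag(eps * m)
            + np.diag(-0.5 * V * c, 2) + np.diag(-0.5 * V * c, -2))

N, eps = 200, 1.0
E_weak = np.linalg.eigvalsh(lmg_matrix(N, 0.5 * eps / N))    # NV/eps = 0.5
E_strong = np.linalg.eigvalsh(lmg_matrix(N, 3.0 * eps / N))  # NV/eps = 3.0
```

At NV/ε = 0.5 the low-lying levels are non-degenerate, while at NV/ε = 3.0 the lowest two levels coalesce into a degenerate pair, as in figure (2.4).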
We show in figure (2.4) the exact excitation spectrum obtained by diagonalization over
a range of interaction strength for N = 200 and N = 1000. We note two distinct regions:
(i) the small interaction region with a non-degenerate spectrum and (ii) the large interaction
region with energy levels corresponding to wave functions of opposite parity coalescing to form
a degenerate spectrum. (The points are connected by lines to show this transition more clearly.)
We note that when N is larger, the transition becomes sharper.
Figure 2.4: The exact excitation energy (in units of ε) for the low-lying states of the LMG
model obtained from diagonalization, for N = 200 (left) and N = 1000 (right). In the small
interaction region, the energy levels are non-degenerate. In the large interaction region, the
levels become degenerate. The points are connected by lines to show this transition more clearly.
In the next section, using the LMG as an example, we will illustrate how the ground state
energies and excitation energies can be estimated using the Hartree-Fock (HF) method and the
Random Phase Approximation (RPA).
2.3 Two Traditional Approximation Techniques and the LMG
Spectrum
The well-known Random Phase Approximation (RPA) [8, 3] is the prototype approximation that
treats many-body quantum dynamics as a system of coupled oscillators. The RPA involves
extracting the oscillator content from the Hamiltonian operator of interest. While both the RPA
and the SHA mentioned in chapter 1 are oscillator approximations, the two approaches are
different. The details of the comparison will be given at the end of chapter 3, after both techniques
have been introduced. Here, we will first give a brief outline of the RPA and show how it can be
used to solve the LMG [49].
The version of the RPA presented here is expressed in terms of equations of motion [64, 65]
and thus differs in format from the version of the RPA commonly known in condensed matter
physics. However, in both cases, the essence, as mentioned, is to extract the oscillator content
from the Hamiltonian. The equations of motion for a harmonic oscillator Hamiltonian H can
be expressed as
[H, O†] = ωO† and [H, O] = −ωO (2.22)
where ω is the excitation energy and we have taken ℏ = 1. O† and O are boson operators
satisfying the relation [O, O†] = 1. If we want to make an oscillator approximation of a
Hamiltonian H′({X_ν}), expressed as a polynomial in the elements {X_ν} of a Lie algebra g, O† is
first written as a linear combination of the step operators in {X_ν}. Typically, the approximation
is made by assuming that the lowest weight state of the corresponding Cartan subalgebra is
the ground state of the oscillator. We will look at an example in su(2). The step operators are
J_± and we can write
J± and we can write
O† = (Y J+ − ZJ−)/〈 |[
J−, J+
]
| 〉 12 (2.23)
O = (Y J− − ZJ+)/〈 |[
J−, J+
]
| 〉 12 (2.24)
where | 〉 = |j m = −j〉 is the lowest weight state and Y and Z are coefficients to be determined.
The denominator ensures that 〈 |[O†, O
]| 〉 = 1.
In general, unless H′ is already an oscillator, taking the commutator [H′, O†] will not yield
equation (2.22); rather, we can write
[H′, O†] = ωO† + P (2.25)
where P contains the parts of H′ that do not belong to an oscillator. We note here that J_−| ⟩ = 0. By
taking the expectation value of the commutator with J_−, we get
⟨ |[J_−, [H′, O†]]| ⟩ = ω⟨ |[J_−, O†]| ⟩ + ⟨ |[J_−, P]| ⟩. (2.26)
In equation (2.26), because we are equating expectation values (which are numbers), the
coefficients can be chosen so that ⟨ |[J_−, P]| ⟩ is identically zero. Following this
equations-of-motion formalism of the RPA [64], we can write
⟨ |[J_−, [H′, O†]]| ⟩ = ω⟨ |[J_−, O†]| ⟩, (2.27)
then, together with
⟨ |[J_−, [H′, O]]| ⟩ = −ω⟨ |[J_−, O]| ⟩, (2.28)
we get the RPA matrix equation
⎛ A − ω     B   ⎞ ⎛ Y ⎞
⎝   B     A + ω ⎠ ⎝ Z ⎠ = 0 (2.29)
where
A = ⟨ |[J_−, [H′, J_+]]| ⟩ / ⟨ |[J_−, J_+]| ⟩ (2.30)
B = −⟨ |[J_−, [H′, J_−]]| ⟩ / ⟨ |[J_−, J_+]| ⟩. (2.31)
The excitation energy ω can be obtained by requiring the determinant of the RPA matrix in
equation (2.29) to vanish.
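Written out for this 2 × 2 case, the vanishing-determinant condition is a one-line computation:

```latex
\det\begin{pmatrix} A-\omega & B \\ B & A+\omega \end{pmatrix}
   = (A-\omega)(A+\omega) - B^{2}
   = A^{2} - \omega^{2} - B^{2} = 0
\quad\Longrightarrow\quad
\omega = \sqrt{A^{2} - B^{2}},
```

consistent with the weak-interaction LMG result quoted in equation (2.32).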
If we apply the RPA to the LMG Hamiltonian, the approximation will work well if the
interaction V is weak, such that | ⟩ = |j, −j⟩ is a good estimate of the ground state. In the
limit of no interaction, P in equation (2.25) vanishes and the approximation becomes exact.
Following the procedure listed above, the RPA estimate of the excitation energy for the LMG model
in the weak interaction limit is [49]
ω = (ε² − V²(2j − 1)²)^{1/2} (2.32)
with
Y = ((ε + ω)/2ω)^{1/2} and Z = −((ε − ω)/2ω)^{1/2}. (2.33)
Next, to handle the large interaction limit, modifications have to be made because | ⟩ =
|j, −j⟩ is no longer a good estimate of the ground state. To get the approximate ground state
of Hamiltonians with large interaction, we can consider rotating | ⟩ about the y-axis in quasi-spin
space, through an angle θ, to another state |θ⟩ = e^{iθJ_y}| ⟩. The energy expectation value of
|θ⟩ is given by
⟨θ|H_L|θ⟩ = ⟨ |e^{−iθJ_y} H_L e^{iθJ_y}| ⟩. (2.34)
We note that it is more convenient to rotate the Hamiltonian H_L and the operators within. For
example,
J_z → J_z′ = e^{−iθJ_y} J_z e^{iθJ_y}
         = J_z + iθ[J_z, J_y] + (1/2!)(iθ)²[[J_z, J_y], J_y] + . . .
         = J_z[1 − θ²/2! + . . .] + J_x[θ − θ³/3! + . . .]
         = J_z cos θ + J_x sin θ. (2.35)
Similarly, J_x → J_x′ = J_x cos θ − J_z sin θ. Therefore, equation (2.18) becomes
H_L′(θ) = ε(J_z cos θ + J_x sin θ) − V[(J_x cos θ − J_z sin θ)² − J_y²] (2.36)
and the energy expectation
E_L(θ) = ⟨ |H_L′(θ)| ⟩ (2.37)
       = −εj cos θ − j(j − 1/2)V sin²θ (2.38)
since ⟨ |J_x²| ⟩ = (1/4)⟨ |J_−J_+| ⟩ = j/2, ⟨ |J_y²| ⟩ = j/2 and ⟨ |J_z²| ⟩ = j². Following the
Rayleigh-Ritz variational principle, we get an upper bound on the ground state energy. In the
approximation, a relationship between θ and V can be obtained by minimizing E_L(θ) with
respect to θ. Solving dE_L(θ)/dθ = 0, we get
sin θ = 0 or cos θ = ε/(V(2j − 1)). (2.39)
The solution sin θ = 0 corresponds to no rotation, which is appropriate in the no-interaction
limit. Since |cos θ| ≤ 1, we see that the second solution only exists if V ≥ V_c = ε/(2j − 1), where V_c
is the critical value for the approximation. Thus, the variational ground state energies E_o(ε, V)
are given by
for small V < V_c : E_o(ε, V) = −εj, (2.40)
for large V > V_c : E_o(ε, V) = −ε²j/(2V(2j − 1)) − j(j − 1/2)V. (2.41)
The technique used here is a variation of the Hartree-Fock (HF) approximation.
We write down the excitation energy for small interaction given in equation (2.32):
for small V < V_c : ω(ε, V) = √(ε² − V²(2j − 1)²) (2.42)
To obtain the excitation energy ω′ for the large interaction limit, we can apply the RPA as
before to equation (2.36) to get
for large V > V_c : ω′(ε, V) = √(2[V²(2j − 1)² − ε²]). (2.43)
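The comparisons discussed next can be reproduced numerically. The sketch below (parameter choices and tolerances ours) evaluates equations (2.40)-(2.43) against exact diagonalization for N = 1000, on both sides of the transition at NV/ε ~ 1.

```python
import numpy as np

N, eps = 1000, 1.0
j = N / 2

def lmg_matrix(V):
    # Same quasi-spin construction as in section 2.2: (N+1) x (N+1) matrix.
    m = np.arange(-j, j + 1)
    jp = np.sqrt((j - m) * (j + m + 1))
    c = jp[:-2] * jp[1:-1]
    return np.diag(eps * m) + np.diag(-0.5 * V * c, 2) + np.diag(-0.5 * V * c, -2)

results = {}
for chi in (0.5, 3.0):                         # chi = NV/eps
    V = chi * eps / N
    E = np.linalg.eigvalsh(lmg_matrix(V))
    if chi < 1.0:                              # weak interaction: (2.40), (2.42)
        E_hf = -eps * j
        omega = np.sqrt(eps**2 - V**2 * (2 * j - 1)**2)
        gap = E[1] - E[0]
    else:                                      # strong interaction: (2.41), (2.43)
        E_hf = -eps**2 * j / (2 * V * (2 * j - 1)) - j * (j - 0.5) * V
        omega = np.sqrt(2 * (V**2 * (2 * j - 1)**2 - eps**2))
        gap = E[2] - E[0]                      # lowest levels come in degenerate pairs
    results[chi] = (E[0], E_hf, gap, omega)
```

For N = 1000 and couplings away from V_c, both the HF ground state energy and the RPA excitation energy agree with the exact values to within a few per cent, as the figures below illustrate.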
In the following figures, we plot the exact values of Eo and excitation energies ∆E for
N = 20, 200 and 1000 for a range of interactions and make comparisons with results obtained
using HF and the RPA in equations (2.40)-(2.43). All values are given in units of ε. In figures
(2.5)-(2.6), the two phase regions are separated by NV/ε ∼ 1. The blue squares are the exact
results obtained by diagonalizing the Hamiltonian expressed in the full Hilbert space. Those
predicted by the approximations are given by the red line. The figure on the left compares ω
(ω′) predicted by RPA (HF-RPA) given in equations (2.42) and (2.43) with the exact lowest
excitation energies ∆E. We note that in the large interaction domain away from Vc, pairs of
exact energy levels coalesce and become degenerate. To check the accuracy of the approximated
excitation energies ω′ in the large interaction domain, we compare it with the energy gap
between the lowest pair of levels and the next lowest pair. For finite N, the exact energy levels
are not degenerate in the vicinity V ≳ V_c, as seen in figure (2.5). Thus, this comparison in the
vicinity V ≳ V_c is only meaningful when N is large, as shown in figure (2.6). The figures on the
right compare the ground state energy E_o predicted by HF with the exact values.
Figure 2.5: N = 20: Blue squares are exact results. (Left) The predictions of ω given by the
RPA equations (2.42)-(2.43) are indicated by the red lines. (Right) The predictions of Eo given
by HF approximations (2.40)-(2.41) are indicated by the red lines.
As seen in figure (2.5), the approximations are not accurate. We will see in the subsequent
figures that the predictions improve with increasing N . In fact, the RPA, which is a bosonic
approximation, gives the exact excitation energy in the limit N →∞.
A particular point of interest is that the ‘shape’ of the red lines in both ω and Eo remain
very much the same as N changes. The ground state energy appears to scale with N . This
point will be discussed in greater detail in chapter 3.
Figure 2.6: (Above) N = 200 and (below) N = 1000. Blue squares are exact results. (Left)
The predictions of ω given by equations (2.42)-(2.43) are indicated by the red lines. (Right)
The predictions of Eo given by equations (2.40)-(2.41) are indicated by the red lines.
The approximations in the vicinity of the critical point Vc are not accurate, especially for
small N . This is a common feature of approximation techniques for general many-body models.
In the next chapter, we will develop the formalism of the SHA and RRKK which can be used
to significantly improve the accuracy of the predictions of the excitation energies and ground
state energies over a wide range of interaction strengths. We will also be able to obtain good
predictions of the energies of the next few excited states. As we will see in chapter 5, the SHA
is also applicable in more complicated many-body models such as the finite BCS pairing model.
Chapter 3
The Shifted Harmonic
Approximation and an
Equations-of-Motion Approach
3.1 The Shifted Harmonic Approximation
We concluded chapter 2 by using the Hartree-Fock (HF) method and the Random Phase Ap-
proximations (RPA) for estimating the properties of the low-lying energy levels of the Lipkin-
Meshkov-Glick (LMG) model. In this chapter, we will develop the Shifted Harmonic Approxi-
mation (SHA) which is an alternative technique that is useful for predicting the properties of
the low-lying eigenvectors of the Hamiltonian. The LMG will be used again to illustrate the use
of the SHA. We will have a discussion at the end of the chapter to compare the differences of
the SHA with HF-RPA and explore areas where the SHA can perform better than the HF-RPA.
In the course of our discussion, operators will be denoted as O to distinguish from the matrix
representations O.
Consider a Hamiltonian H with an su(2) spectrum generating algebra expressed in a basis
for which the matrix representation of J_z is diagonal (hereafter the J_z-diagonal representation).
Then, the eigenvector |φ_i⟩ corresponding to the ith excited state of H can be expressed as
|φ_i⟩ = Σ_{m=−j}^{j} a^i_m |j m⟩. (3.1)
The su(2) basis states will be abbreviated as |m⟩ whenever appropriate. In figure (3.1), the
components of the eigenvectors a_m corresponding to the lowest three states of the LMG for a
particular interaction are shown. In the plot, the abscissa is redefined as x = m/j (such that
|x| ≤ 1).
Chapter 3. The SHA and an Equations-of-Motion Approach 22
Figure 3.1: The components of the eigenvector a_m belonging to the ground state (blue), first
(green) and second (red) degenerate excited states of an LMG Hamiltonian. In this plot,
N = 200 with interaction NV/ε = 3.0. To guide the eye, the points belonging to the same excited
state are connected by a line.
In figure (3.1), the profiles of the coefficients a_m resemble the eigenfunctions of a shifted
harmonic oscillator. This suggests that the eigenvectors of an algebraic Hamiltonian H({X_ν})
(where X_ν are the operators of a Lie algebra g), defined on a discrete basis, may be
approximated by the eigenfunctions of a differential equation for a Hamiltonian H(x, d/dx)
on a continuous basis. Here, x denotes a set of variables constructed from the state labels.
In the case of the su(2) algebra, we define x = m/j and anticipate H(x, d/dx) to be a
differential equation in the form of a shifted harmonic oscillator. Thus, this technique, introduced
by Chen et al. [13] for su(2) models, is known as the Shifted Harmonic Approximation.
In this thesis, we shall:
1. formalize the approximation technique;
2. extend the domain in which the SHA gives accurate approximations;
3. apply the SHA to Hamiltonians with the non-compact su(1, 1) spectrum generating alge-
bra (this will be illustrated using the five-dimensional quartic oscillator Hamiltonian
in chapter 4); and
4. apply the SHA to the BCS pairing Hamiltonians with an su(2)_1 ⊕ su(2)_2 ⊕ . . . ⊕ su(2)_k
spectrum generating algebra. This will be discussed in chapter 5.
The SHA
An important objective of the SHA is to seek the form of H(x, d/dx) such that its low-
lying eigenfunctions ψ_i(x) will have similar properties, such as mean and spread, as those of
the distributions of the components a^i_m of the eigenvectors |φ_i⟩ of the original Hamiltonian
H({X_ν}). A good place to start is to look at the operators that make up the Hamiltonian H.
For the LMG Hamiltonian, the su(2) operators {J_z, J_±} can be represented in the J_z-diagonal
basis as we have seen earlier,
⟨j m|J_z|j′ n⟩ = m δ_{j,j′} δ_{n,m} (3.2)
⟨j m|J_±|j′ n⟩ = √((j ∓ m + 1)(j ± m)) δ_{j,j′} δ_{n,m∓1}. (3.3)
In the SHA, the key is in finding the properties of the distribution a^i_m. Using the eigenvector
defined in equation (3.1), the components a^i_m are identified as follows:
⟨m|J_z|φ_i⟩ = m a^i_m (3.4)
and similarly
⟨m|J_±|φ_i⟩ = √((j ∓ m + 1)(j ± m)) a^i_{m∓1}. (3.5)
Here, if we define x = m/j and approximate x as a continuous variable, then ⟨m|φ_i⟩ = a^i_m →
ψ_i(x). (We use right arrows '→' to denote the approximations of discrete variables as continuous
ones and the continuous interpolations of discrete distributions¹.) The terms a^i_{m∓1} can be
regarded as
a^i_{m∓1} → ψ_i(x ∓ 1/j) = e^{∓(1/j) d/dx} ψ_i(x) (3.6)
≈ [1 ∓ (1/j) d/dx + (1/2j²) d²/dx²] ψ_i(x). (3.7)
Here, we have assumed the low-lying eigenfunctions ψ_i(x) to be sufficiently smooth that higher
derivatives beyond second order can be dropped. Thus, in general, the su(2) operators J_z and
J_± can be mapped to differential operators, 𝒥_z and 𝒥_±, defined for relatively large values of j
by
⟨m|J_z|φ_i⟩ → 𝒥_z ψ_i(x) = jx ψ_i(x) (3.8)
⟨m|J_±|φ_i⟩ → 𝒥_± ψ_i(x) = j √((1 ∓ x + 1/j)(1 ± x)) e^{∓(1/j) d/dx} ψ_i(x) (3.9)
≈ j √(1 − x²) [1 ∓ (1/j) d/dx + (1/2j²) d²/dx²] ψ_i(x). (3.10)
¹A technique due to Braun [11, 12] also uses similar continuous approximations for the variables. However, the SHA developed in this thesis differs in many aspects from Braun's approach. For example, Braun's technique sets an upper and lower bound for the potential of the approximate Hamiltonian while the SHA uses a shifted harmonic potential directly. Consequently, the SHA is able to provide more definite approximations. See Braun's treatment of the LMG [11].
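The quality of the mapping (3.9)-(3.10) can be checked numerically. The sketch below (the Gaussian test profile and the tolerance are our own illustrative choices) compares the exact action of J_+ + J_− on a smooth discrete profile a_m with the continuum approximation 2j√(1 − x²)[ψ + ψ″/(2j²)], in which the first-derivative terms of (3.10) cancel.

```python
import numpy as np

j = 200
m = np.arange(-j, j + 1)
x = m / j
xc, s = 0.1, 0.1                          # center and width of the test profile
psi = np.exp(-(x - xc) ** 2 / (2 * s ** 2))
d2psi = psi * ((x - xc) ** 2 / s ** 4 - 1.0 / s ** 2)   # analytic psi''(x)

# Exact: <m|(J+ + J-)|phi> = sqrt((j-m+1)(j+m)) a_{m-1} + sqrt((j+m+1)(j-m)) a_{m+1}
am1 = np.roll(psi, 1);  am1[0] = 0.0      # a_{m-1}
ap1 = np.roll(psi, -1); ap1[-1] = 0.0     # a_{m+1}
exact = (np.sqrt((j - m + 1.0) * (j + m)) * am1
         + np.sqrt((j + m + 1.0) * (j - m)) * ap1)

# Continuum approximation built from (3.10): the +-(1/j) d/dx terms cancel in the sum.
approx = 2.0 * j * np.sqrt(1.0 - x ** 2) * (psi + d2psi / (2.0 * j ** 2))

core = psi > 1e-3 * psi.max()             # compare only where psi is non-negligible
rel_err = np.max(np.abs(exact[core] - approx[core]) / np.abs(exact[core]))
```

For j = 200 the relative discrepancy in the bulk of the profile is at the sub-percent level, consistent with the dropped terms being O(1/j).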
If the eigenfunctions of H are localized about a value x_o (yet to be determined), it may be
reasonable to expand √(1 − x²) about x = x_o up to the first few terms in x′ = x − x_o. With this,
we get
𝒥_z ψ_i(x′) = j(x′ + x_o) ψ_i(x′) (3.11)
𝒥_± ψ_i(x′) ≈ j √(1 − x_o²) (1 − x_o x′/(1 − x_o²) − x′²/(2(1 − x_o²)²)) (1 ∓ (1/j) d/dx′ + (1/2j²) d²/dx′²) ψ_i(x′). (3.12)
Here, we require |x′| < (1 − x_o²) for convergence. For hermitian Hamiltonians that involve either
su(2) or su(1, 1) operators, such an approximation, when taken up to bilinear terms in x′ and
d/dx′, will generally have the form of a shifted harmonic oscillator
H(x′, d/dx′) ≈ H_SHA = −A (1/j²) d²/dx′² + B j² x′² + D j x′ + E. (3.13)
If the minimum of the SHA oscillator potential is located at x = x_o and the origin of x′ is
shifted to x = x_o, then we have D = 0 in equation (3.13). Conversely, setting D = 0 enables
us to solve for x_o. Equation (3.13) is the so-called Shifted Harmonic Approximation (SHA).
From equations (3.11) and (3.12), we can also see here that A,B and E in equation (3.13) will
be functions of xo and parameters of the model. We note that the radius of convergence of
x′ in the expansion in equation (3.12) does not directly affect the derivation of the quantities
A, B, D and E. Borrowing the analogy of taking the small oscillation approximation in classical
mechanics, the concern in the SHA is similar, i.e. locating the oscillator minimum in the
approximated SHA Hamiltonian (3.13) and finding quantities such as the spread associated with
the curvature at this minimum. This is the geometrical interpretation of the approximation.
On the other hand, the accuracy of ψ(x′) as an approximation for am is dependent on the
magnitude of x′ and the radius of convergence.
This move of performing the Taylor expansion about x_o (instead of x = 0) and then shifting
the origin to x_o plays a major role in enhancing the SHA of Chen et al. [13]. It extends the
domain in which the SHA gives good results. This will be discussed in greater detail in section
3.6.
Once we get a Hamiltonian in the form (3.13), its eigenfunctions are those of a simple
harmonic oscillator with excitation energies
ω = 2√(AB) (3.14)
and the approximate energies are given by
E_ν = (ν + 1/2)ω + E (3.15)
where E is the minimum of the SHA oscillator potential in equation (3.13). However, because
we excluded 1/j in the expansion of √((1 ∓ x + 1/j)(1 ± x)) for J_± (see equations (3.9)-(3.10)), it
is more consistent to write (see details in Appendix 1 at the end of this chapter)
E_ν = νω + E (3.16)
where ν = 0, 1, . . . , j.² The spread of the SHA eigenfunction is given by
σ = (A/B)^{1/4}. (3.17)
Figure 3.2: For the purpose of illustration, we use the Canonical Josephson Hamiltonian (CJH):
H = (K/2)J_z² − ∆µ J_z − ε_J J_x. This Hamiltonian will be discussed in greater detail in chapter 4.
In this example, K = 0.1, ∆µ = 0.5 and ε_J = 1. The components of the exact ground
state eigenvector a_m in the J_z-diagonal basis are indicated by black diamonds. The SHA
eigenfunction ψ(m) is given by the red curve. The red crosses indicate ψ(m) evaluated at
points m = −j, −j + 1, . . . , j − 1, j (= 50). The agreement with exact results is very good. Since
the curve is away from m = ±j, η = 1.
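The full SHA pipeline can be sketched for the CJH of figure (3.2). The coefficients A, B and D below are our own application of the recipe (3.8)-(3.13) to H = (K/2)J_z² − ∆µJ_z − ε_J J_x (the thesis treats the CJH properly in chapter 4), so they should be read as an illustrative derivation: mapping J_z → jx and J_x → j√(1 − x²)(1 + d²/dx²/(2j²)) and expanding about x_o gives A = ε_J j s_o/2, B = K/2 + ε_J/(2j s_o³) and D = Kjx_o − ∆µ + ε_J x_o/s_o, with s_o = √(1 − x_o²).

```python
import numpy as np

K, dmu, eJ = 0.1, 0.5, 1.0       # parameters of figure (3.2)
j = 50

def D_of(x0):
    # Coefficient of the linear term j*x' in H_SHA; setting D = 0 locates x_o.
    return K * j * x0 - dmu + eJ * x0 / np.sqrt(1.0 - x0 ** 2)

lo, hi = -0.999, 0.999           # D_of is monotonic here: bisect for the root
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if D_of(mid) < 0 else (lo, mid)
x0 = 0.5 * (lo + hi)
s0 = np.sqrt(1.0 - x0 ** 2)
A = eJ * j * s0 / 2.0
B = K / 2.0 + eJ / (2.0 * j * s0 ** 3)
m0 = j * x0
sigma = (A / B) ** 0.25                          # eq. (3.17)
omega = 2.0 * np.sqrt(A * B)                     # eq. (3.14)
E_min = 0.5 * K * m0 ** 2 - dmu * m0 - eJ * j * s0   # oscillator minimum E

# Exact diagonalization in the Jz-diagonal basis, for comparison.
m = np.arange(-j, j + 1)
jp = np.sqrt((j - m[:-1]) * (j + m[:-1] + 1.0))  # <m+1|J+|j m>
Jx = 0.5 * (np.diag(jp, 1) + np.diag(jp, -1))
H = np.diag(0.5 * K * m ** 2 - dmu * m) - eJ * Jx
evals, evecs = np.linalg.eigh(H)
p = evecs[:, 0] ** 2
mean = p @ m
spread = np.sqrt(p @ (m - mean) ** 2)  # std of |a_m|^2, i.e. sigma/sqrt(2)
```

For these parameters the SHA mean m_o, spread σ and excitation energy ω all agree closely with the exact ground-state quantities, as figure (3.2) shows. (Note that the standard deviation of |a_m|² is σ/√2, since σ is the width of the eigenfunction, not of its square.)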
In a harmonic approximation, the mean of the oscillator eigenfunction coincides with the
position of the minimum of the oscillator potential. Thus, the mean of the SHA eigenfunction
ψ(m) is given by mo = jxo. (Note: the conceptual difference between using x or m as the
variable is insignificant in the formalism. The choice of either x or m is a matter of ease of
comparison of quantities. So instead of introducing a new symbol for the eigenfunction, we will
continue to use ψ even though the variable used here is m and not x.) Using m = jx, the SHA
²We set the upper limit as j because the su(2) algebra is compact and its spectrum finite. It is observed that the profiles of the eigenfunctions ψ_ν(m) are similar to those of ψ_{2j−ν}(m), where ν = 0, 1, . . . , j.
eigenfunction ψ_i(m) for the ith excited state is given by
ψ_i(m) = η (2^i σ i! √π)^{−1/2} H_i((m − m_o)/σ) e^{−(1/2)((m − m_o)/σ)²} (3.18)
where H_i is a Hermite polynomial. The extra normalization term η has been included to cater
to situations where ψ_i(m) does not vanish at m = ±j. (See figure (3.3).) Whenever this
happens, we can compute η by requiring
Σ_m |ψ_i(m)|² = 1 (3.19)
where m = −j, −j + 1, . . . , j. From equation (3.19), we can also compute the mean of the
distribution
m̄ = Σ_m m |ψ_i(m)|². (3.20)
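Equations (3.19)-(3.20) amount to the following short computation for a ground-state (i = 0) profile; the parameter values here are our own illustrative choices, placing the Gaussian near m = j so that η ≠ 1 and the truncation visibly pulls the mean m̄ below m_o.

```python
import numpy as np

j, m0, sigma = 50, 49.0, 2.0     # illustrative values near the su(2) limit
m = np.arange(-j, j + 1)
psi = np.exp(-0.5 * ((m - m0) / sigma) ** 2)   # i = 0 profile; H_0 = 1, and any
                                               # constant prefactor is absorbed by eta
eta = 1.0 / np.sqrt(np.sum(psi ** 2))          # normalization from eq. (3.19)
psi = eta * psi
mean = np.sum(m * psi ** 2)                    # eq. (3.20)
```

Because the tail beyond m = j is cut off, `mean` comes out slightly below m_o = 49, which is exactly the effect equation (3.20) is meant to capture.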
To have a sense of the applicability of the SHA eigenfunction (3.18), we show in figures (3.2)
and (3.3) two possible ways the components am of the ground state eigenvector of an su(2)
Hamiltonian will distribute. The Hamiltonian used in these two examples is the Canonical
Josephson Hamiltonian (CJH) which will be discussed in detail in chapter 4. (At this point,
the LMG Hamiltonian is not used because the issues involving the parity of the states in the
LMG model will complicate the example and obscure the points we want to make here. The
LMG will be visited later in this chapter.) The parameters of the CJH are selected such that
the distribution in figure (3.2) belongs to one phase region and the distribution in figure (3.3)
belongs to a different phase region.
In figure (3.2), the Gaussian-like ground state SHA eigenfunction ψ(m) fits the components
am of the exact eigenvector very well. This good agreement can be attributed to the fact
that the contraction of the su(2) algebra to an oscillator algebra3 is a good approximation
of the exact representation for this choice of parameters for the CJH. On the other hand, in
figure (3.3), another set of parameters is selected such that the CJH is approximately a linear
combination of su(2) operators. In this ‘su(2) limit’, we note that the SHA continues to give
accurate predictions of the mean mo, spread σ (as shown in figure (3.3)) and also for ω and
E (not shown here). However, we see that the approximation a_m ≈ ψ(m) is not as accurate as
before. Later in section 3.6, we will introduce a technique that utilizes the properties of the
su(2) algebra to obtain an accurate approximation of am in the su(2) limit. In that same section,
we will also discuss how it is possible that the SHA continues to give accurate predictions away
from the present oscillator limit.
³In certain limits, the exact representations of J_± and J_z can be approximated as linear combinations of elements of an oscillator algebra, {a, a†, a†a, I} where [a, a†] = 1. This will be discussed in greater detail in section 3.6.
Figure 3.3: We use the CJH again for comparison. In this example, K = 0.1, ∆µ = 24.5
and εJ = 1. The components of the exact ground state eigenvector am are indicated by black
diamonds. The SHA eigenfunction ψ(m) is given by the red curve. The red crosses indicate
ψ(m) evaluated at points m = −j,−j + 1, . . . , j − 1, j(= 50). In this case, the SHA predicted
quantities mo and σ remain accurate. However, these quantities cannot be used to accurately
determine the components am because the approximation of the discrete distribution as a
continuous function is no longer a good one. In section 3.6, we will discuss why the SHA
quantities remain accurate and how the quantities mo and σ can be used to predict the ground
state a_m accurately using a different method. Here, η ≠ 1 because the eigenfunction ψ(m) does
not vanish at m = ±j.
The SHA: Key Approximations and Assumptions
We will consolidate the various approximations that have been used in the SHA and the
corresponding assumptions that have to be made. The approximations listed here will serve as
a reference for the later chapters.
Using a Hamiltonian H with an su(2) spectrum generating algebra as an example, the
approximations and assumptions (enclosed in [ ]) are summarized below:
(A) Approximating m (or x = m/j) as a continuous variable, as seen in equations (3.8) and
(3.9). The action of d/dx on continuous interpolations, such as ψ(x), is then well defined.
[This is based on the assumption that j is large.]
(B) The assumption that j is large enables us to drop the 1/j term in √((1 ∓ x + 1/j)(1 ± x)) when
performing the Taylor expansion for J_±, seen in the step from equation (3.9) to (3.10).
As the term 1/j will vanish in the limit where x is continuous, this omission is therefore
consistent with approximation (A). It implies that when J_± are mapped as operators on a
continuous domain in x (or m), the actions of the J_± operators differ only in the direction
of shift they generate (as given by e^{∓(1/j) d/dx}) and they both have the same multiplicative
factor √((1 ∓ x)(1 ± x)). [This is based on the assumption that j is large.]
(C) Keeping only bilinear terms in x′ and d/dx′ in the Taylor expansions of H, for example
in equations (3.10) to (3.12). [This is based on the assumption that j is large and that, over
any interval of m = jx, the distributions a^i_m are sufficiently slowly varying that higher
derivatives beyond the second order are insignificant. Furthermore, we have assumed that
near the minimum of the potential associated with H, the oscillator potential is a good
approximation (similar to the small oscillation approximation in classical mechanics).]
The approximations (A)-(C) are for the mapping H(J_z, J_±) → H(x, d/dx) ≈ H_SHA. A separate
but related approximation is used to estimate the components a^i_m of low-lying eigenvectors
of H using eigenfunctions ψ_i(m) of H_SHA:
(D) The discrete components of the eigenvectors a^i_m are estimated by evaluating the continuous
function ψ_i(m) at discrete points m = −j, −j + 1, . . . , j − 1, j. [Here, we use some simple
guidelines to demarcate this oscillator limit from the other limits. In most cases where
σ > 1 and |m_o ± 2σ| ≲ j (i.e. ψ_i(m) vanishes sufficiently at m = ±j), the Hamiltonian is
near the oscillator limit mentioned and the approximation a^i_m ≈ ψ_i(m) will yield accurate
results. In cases where σ < 1 or |m_o ± 2σ| ≳ j, the Hamiltonian is typically not in this
oscillator limit and a different treatment is required to estimate a_m. An elaboration of
how this can be done in the su(2) limit will be discussed in section 3.6.]
The procedure used in obtaining the SHA Hamiltonian from an su(2) Hamiltonian and
subsequently approximating the eigenvectors of the Hamiltonian H is summarized below with
the corresponding approximations:
H(J_z, J_±)  --(A) continuous approximation, x = m/j-->  H(x, d/dx)  ≈ (B), (C)  H_SHA
      |                                                                    |
      | diagonalization in                                    parameters: m_o = j x_o, σ
      | selected basis                                                     |
      ↓                                                                    ↓
Eigenvectors |φ_i⟩  <--(D) discretization: a^i_m = ⟨m|φ_i⟩ ≈ ψ_i(m)--  Eigenfunctions ψ_i(m)
                                                                               (3.21)
It is important to emphasize that there are two separate but related approximations here:
1. H(J_z, J_±) → H(x, d/dx) ≈ H_SHA, and
2. a^i_m ≈ ψ_i(m) near the oscillator limit we mentioned. A different approximation for a_m will
be required in other limits.
In subsequent examples, we will see that accurate approximations of m_o, σ, ω and E can
be obtained using the SHA for a wide range of parameters. The approximation of a_m requires
different techniques in different phase regions. Both the mean m_o and spread σ, together with
the parameters of the Hamiltonian, give an indication of the appropriate technique to use. We
next proceed to explore possible extensions of the SHA which can be used to obtain very accurate
eigenvalues.
Extending the SHA to improve accuracy of eigenvalues: ESHA(I) and ESHA(II)
The SHA we have used gives an approximation of the properties of the low-lying states of the exact spectrum. Depending on the parameters of the Hamiltonian, the approximate results will differ from the exact results to different extents. However, it is possible to extend the SHA to find nearly exact eigenvalues for the low-lying states. From the observations in figures (3.2) and (3.3), we see that the SHA can be used to identify the most important basis states of a Hamiltonian matrix for the purpose of computing low-lying eigenvalues. In figure (3.3), using the SHA prediction of $m_o$ and $\sigma$, we can identify basis states with components that are numerically significant in the ground state eigenvector. Therefore, if we are interested only in the low-lying states, we just need to diagonalize a sub-matrix of the Hamiltonian involving these states. We present this argument graphically in equation (3.23). Denoting the eigenvector of the $i$th excited state as $|\phi_i\rangle$,
〈φi|H|φi〉 = Ei (3.22)
where H is the Hamiltonian and Ei is the eigenvalue corresponding to eigenvector |φi〉. Using
figure (3.3) as an example, we identify the numerically significant components of the ground state eigenvector $|\phi_o\rangle$ as those with $m = 48, 49, 50$. In this case, only a small sub-matrix of the Hamiltonian contributes significantly to the ground state energy $E_o$. Denoting these three components by `$\surd$' in the eigenvector, we can identify the matrix elements of $H$ that are most important. This is illustrated schematically in equation (3.23) with these matrix elements marked out using `$*$',
$$\begin{bmatrix} 0 & \cdots & \surd & \surd & \surd \end{bmatrix}
\begin{bmatrix} \ddots & \ddots & \cdots & 0 & 0 \\ \ddots & \ddots & \cdots & 0 & 0 \\ \cdots & \cdots & * & * & * \\ 0 & \cdots & * & * & * \\ 0 & \cdots & * & * & * \end{bmatrix}
\begin{bmatrix} 0 \\ \vdots \\ \surd \\ \surd \\ \surd \end{bmatrix} = E_o. \qquad (3.23)$$
Therefore, we see that diagonalizing the bottom 3 × 3 sub-matrix will give us an accurate
eigenvalue for the ground state. If we need the eigenvalues for higher excited states, we can
increase the size of this sub-matrix accordingly. We can also check for convergence by gradually
increasing the size of the sub-matrix. This technique will be known as ESHA(I).
On the other hand, when the SHA gives a good prediction of the components $a^i_m$ of the eigenvectors, such as those in figure (3.2), we can adopt another strategy. From linear algebra, we know that for a Hamiltonian matrix $H$ with eigenvectors $v_i$ corresponding to eigenvalues $e_i$, i.e. $Hv_i = e_iv_i$, the unitary transformation matrix $U$ that diagonalizes $H$ has columns made up of the $v_i$, i.e. $U = [v_o|v_1|\ldots]$. If the Hamiltonian matrix is originally expressed in the su(2) basis states $|jm\rangle$, then the exact components $a^i_m$ of the low-lying eigenvectors $v_i$ can be approximated using the SHA eigenfunctions as $a^i_m \approx \psi_i(m)$. Essentially, this is the approximation (D) mentioned in equation (3.21). We denote these approximate eigenvectors as $\tilde{v}_i$. Since our purpose is just to obtain accurate eigenvalues for the low-lying states, we do not need to approximate the full unitary transformation. It is sufficient to consider a truncated matrix $U_t = [\tilde{v}_o|\tilde{v}_1|\ldots|\tilde{v}_k]$, where $k$ is the number of low-lying states to be considered. We next obtain
$$U_t^{-1} H U_t \to H_t^{\rm SHA} \qquad (3.24)$$
which is nearly diagonal (or block diagonal). We illustrate the above equation below:
$$\begin{bmatrix}
\psi_o(-j) & \psi_o(-j+1) & \cdots & \psi_o(j) \\
\psi_1(-j) & \psi_1(-j+1) & \cdots & \psi_1(j) \\
\vdots & \vdots & & \vdots \\
\psi_k(-j) & \psi_k(-j+1) & \cdots & \psi_k(j)
\end{bmatrix}
\; H \;
\begin{bmatrix}
\psi_o(-j) & \psi_1(-j) & \cdots & \psi_k(-j) \\
\psi_o(-j+1) & \psi_1(-j+1) & \cdots & \psi_k(-j+1) \\
\vdots & \vdots & & \vdots \\
\psi_o(j) & \psi_1(j) & \cdots & \psi_k(j)
\end{bmatrix}
\longrightarrow
\begin{bmatrix}
\sim E_o & \sim 0 & \cdots & 0 \\
\sim 0 & \sim E_1 & \cdots & \sim 0 \\
\vdots & & \ddots & \vdots \\
0 & \cdots & \sim 0 & \sim E_k
\end{bmatrix}. \qquad (3.25)$$
Next, we further diagonalize the smaller $k \times k$ Hamiltonian matrix $H_t^{\rm SHA}$, which is now essentially expressed in terms of the SHA eigenstates instead of the su(2) basis states. Diagonalizing $H_t^{\rm SHA}$ gives us the improved approximation of the eigenvalues we need. The number of low-lying states $k$ can also be gradually increased until satisfactory convergence of the eigenvalues is achieved. We call this technique ESHA(II).
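A sketch of ESHA(II) for the $NV/\varepsilon = 0.8$ example of section 3.3 (helper names are illustrative; a QR orthonormalization of the sampled columns, not part of the text's prescription, is added here so that the projected eigenvalues remain variational upper bounds):

```python
import numpy as np

def hermite_functions(m, m_o, sigma, k):
    """Sampled oscillator eigenfunctions psi_i(m), i = 0..k-1, via the stable
    three-term recurrence for normalized Hermite functions."""
    u = (m - m_o) / sigma
    psi = np.zeros((k, len(m)))
    psi[0] = np.pi ** -0.25 * np.exp(-0.5 * u ** 2)
    if k > 1:
        psi[1] = np.sqrt(2.0) * u * psi[0]
    for i in range(1, k - 1):
        psi[i + 1] = np.sqrt(2.0 / (i + 1)) * u * psi[i] - np.sqrt(i / (i + 1.0)) * psi[i - 1]
    return psi

def lmg_jx_diagonal(j, eps, V):
    """LMG Hamiltonian in the Jx-diagonal representation of eqs (3.29)-(3.30)."""
    m = np.arange(-j, j + 1)
    A = np.diag(np.sqrt((j - m[1:] + 1.0) * (j + m[1:])), -1)   # A[m, m-1]
    S, M = 0.5 * (A - A.T), np.diag(m.astype(float))
    Jz, Jp, Jm = -0.5 * (A + A.T), -S - M, S - M
    return eps * Jz - 0.5 * V * (Jp @ Jp + Jm @ Jm), m

def esha2(H, m, m_o, sigma, k):
    """ESHA(II): project H onto k sampled SHA eigenfunctions, then diagonalize
    the small k x k matrix."""
    Ut = np.linalg.qr(hermite_functions(m, m_o, sigma, k).T)[0]
    return np.linalg.eigvalsh(Ut.T @ H @ Ut)

j, eps, V = 100, 1.0, 0.8 / 200                                  # NV/eps = 0.8
H, m = lmg_jx_diagonal(j, eps, V)
sigma = (j ** 2 * (eps + 2 * V * j) / (eps - 2 * V * j)) ** 0.25  # eq (3.40)
E_low = esha2(H, m, 0.0, sigma, 15)
```

Only a $15 \times 15$ matrix is diagonalized at the end; $k$ can be increased to check convergence.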
In this section, we have developed the SHA and the accuracy improvement techniques
ESHA(I) and ESHA(II). In the next section, we will apply the SHA to the LMG.
3.2 Applying the SHA to the LMG Model
We write down the LMG Hamiltonian introduced in chapter 2,
$$H = \varepsilon J_z - \frac{V}{2}\left(J_+^2 + J_-^2\right). \qquad (3.26)$$
To apply the SHA to the LMG Hamiltonian, we first compute the non-vanishing terms of $H|\phi_i\rangle$ when an inner product is taken with $\langle m|$:
$$\begin{aligned}
\langle m|H|\phi_i\rangle \to{}& \varepsilon\langle m|J_z|m\rangle a^i_m - \frac{V}{2}\big(\langle m|J_+|m-1\rangle\langle m-1|J_+|m-2\rangle a^i_{m-2} \\
&\quad + \langle m|J_-|m+1\rangle\langle m+1|J_-|m+2\rangle a^i_{m+2}\big) \\
={}& \varepsilon m\, a^i_m - \frac{V}{2}\Big[\sqrt{(j-m+1)(j+m)(j-m+2)(j+m-1)}\, a^i_{m-2} \\
&\quad + \sqrt{(j+m+1)(j-m)(j+m+2)(j-m-1)}\, a^i_{m+2}\Big]
\end{aligned}$$
Now we can apply the SHA to the LMG. Using equations (3.8), (3.9) and assuming that j is
large, we get
$$\begin{aligned}
\langle m|H|\phi_i\rangle \to H\psi_i(x) &\approx \varepsilon j x\,\psi_i(x) - \frac{V}{2} j^2 (1-x)(1+x)\left(e^{-\frac{2}{j}\frac{d}{dx}} + e^{\frac{2}{j}\frac{d}{dx}}\right)\psi_i(x) \\
&\approx \varepsilon j x\,\psi_i(x) - V j^2 (1-x^2)\left[1 + \frac{2}{j^2}\frac{d^2}{dx^2}\right]\psi_i(x). \qquad (3.27)
\end{aligned}$$
Here, we have made use of the approximations (A)-(C) mentioned in equation (3.21). Finally,
we shift the coordinates to x′ = x− xo. Rearranging the terms, we get
$$H \approx -2Vj^2(1-x_o^2)\,\frac{1}{j^2}\frac{d^2}{dx'^2} + Vj^2x'^2 + (\varepsilon + 2jVx_o)\,jx' + j\big(\varepsilon x_o - jV(1-x_o^2)\big). \qquad (3.28)$$
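The three steps can be collected into a small routine that reads off the coefficients of equation (3.28) and evaluates the SHA predictions (an illustrative sketch of ours; the kinetic coefficient is taken with positive sign so that $\omega = 2\sqrt{AB}$ of equation (3.14) is real):

```python
import numpy as np

def sha_lmg_jz(eps, V, j):
    """SHA for the LMG in the Jz-diagonal representation, eq (3.28).
    Returns (x_o, E, omega); valid for NV/eps > 1 (see eq (3.32))."""
    x0 = -eps / (2 * j * V)                      # solves D = eps + 2jV*x_o = 0
    A = 2 * V * j ** 2 * (1 - x0 ** 2)           # |coefficient| of (1/j^2) d^2/dx'^2
    B = V                                        # coefficient of j^2 x'^2
    E = j * (eps * x0 - j * V * (1 - x0 ** 2))   # ground-state energy estimate
    omega = 2 * np.sqrt(A * B)                   # eq (3.14)
    return x0, E, omega
```

For $\varepsilon = 1$, $j = 100$, $V = 0.035$ ($NV/\varepsilon = 7$) this reproduces the values $E = -357.1428571$ and $\omega = 9.7979590$ quoted in section 3.3.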
Thus, we have obtained the SHA for the LMG Hamiltonian in three simple steps.
It is important to note that the matrix representations for the operators Jz, J± given in
equations (3.2) and (3.3) depend on the choice of basis. For example, it is also possible to define
$$\langle j\,m|J_z|j'\,n\rangle = -\frac{1}{2}\left(\sqrt{(j-m+1)(j+m)}\,\delta_{j,j'}\delta_{n,m-1} + \sqrt{(j+m+1)(j-m)}\,\delta_{j,j'}\delta_{n,m+1}\right) \qquad (3.29)$$
and
$$\langle j\,m|J_\pm|j'\,n\rangle = \mp\frac{1}{2}\left(\sqrt{(j-m+1)(j+m)}\,\delta_{j,j'}\delta_{n,m-1} - \sqrt{(j+m+1)(j-m)}\,\delta_{j,j'}\delta_{n,m+1}\right) - m\,\delta_{j,j'}\delta_{n,m}. \qquad (3.30)$$
In this definition, the matrix representation of $J_x = \frac{1}{2}(J_+ + J_-)$ is diagonal and we call this the $J_x$-diagonal representation.
Developing the SHA in the Jx-diagonal representation for the LMG Hamiltonian (3.26), we
get
$$\begin{aligned}
H(x) \approx{}& -\left(\frac{\varepsilon j}{2}\sqrt{1-x_o^2} + j^2V(1-x_o^2)\right)\frac{1}{j^2}\frac{d^2}{dx'^2} + \left(\frac{\varepsilon}{2j\sqrt{(1-x_o^2)^3}} - V\right)j^2x'^2 \\
&+ \left(\frac{\varepsilon x_o}{\sqrt{1-x_o^2}} - 2jVx_o\right)jx' - j\left(\varepsilon\sqrt{1-x_o^2} + jVx_o^2\right). \qquad (3.31)
\end{aligned}$$
Here, we will refer to the coefficients of $\frac{1}{j^2}\frac{d^2}{dx'^2}$, $j^2x'^2$, etc. in $H(x)$ as $A^{(x)}$, $B^{(x)}$, $\ldots$, as in equation (3.13). We will also write $x_o$ in equation (3.31) later as $x_o^{(x)}$.
Now, we can make a comparison between the predictions of $x_o$ ($x_o^{(x)}$), $\omega$ ($\omega^{(x)}$) and $E$ ($E^{(x)}$) from the two SHA expressions of the LMG. We first solve for $x_o$ using $D = 0$, since the origin is defined to be at $x_o$, where the minimum of the SHA oscillator potential is located. This approach can be similarly applied to finding $x_o^{(x)}$. In the $J_z$-diagonal representation (see equation (3.28)),
$$x_o = -\frac{\varepsilon}{2jV}, \quad \text{if } V > \frac{\varepsilon}{2j} \ \left(\text{or equivalently } \frac{NV}{\varepsilon} > 1\right). \qquad (3.32)$$
Here, the restriction on $V$ ensures that $|x_o| < 1$, in line with our definition $x = m/j$. The critical value of $\frac{\varepsilon}{2j}$ here is very close to $V_c = \frac{\varepsilon}{2j-1}$ in the RPA, previously defined in equation (2.40). We also note here the resemblance of the expression for $x_o$ to $\cos\theta$ in equation (2.39).
For the $J_x$-diagonal representation (see equation (3.31)),
$$x_o^{(x)} = \begin{cases} 0 & \text{if } \frac{NV}{\varepsilon} < 1 \\[4pt] \pm\sqrt{1 - \left(\frac{\varepsilon}{2jV}\right)^2} & \text{if } \frac{NV}{\varepsilon} > 1. \end{cases} \qquad (3.33)$$
The restriction on $V$ in the second solution ensures that $x_o^{(x)}$ is real. An interesting point to note here is that if $x_o$ in equation (3.32) is equivalent to $\cos\theta$ in equation (2.39), then $x_o^{(x)}$ for $\frac{NV}{\varepsilon} > 1$ will be $\pm\sin\theta$. They are just two different components of the same vector in quasi-spin space for the domain $\frac{NV}{\varepsilon} > 1$. For the interactions $\frac{NV}{\varepsilon} < 1$, we have a solution $x_o^{(x)} = 0$ in the $J_x$-diagonal representation but no valid solution for $x_o$ in the $J_z$-diagonal representation (see equation (3.32)). However, referring to figure (3.4, $J_z$ diagonal), we can see that the exact distribution of the components for $\frac{NV}{\varepsilon} = 0.80$ peaks at $x_o = -1$. This is similar for all distributions with $\frac{NV}{\varepsilon} < 1$. Therefore, we can still apply our interpretation of $x_o$ as $\cos\theta$ and $x_o^{(x)}$ as $\sin\theta$ in this domain. If we regard the choice of a particular matrix representation as equivalent to the choice of a projection axis in quasi-spin space, we see that it is possible that the SHA is able to extract more information from some choices of projection axis than others in the LMG. However, for the SHA approximations in either the $J_z$- or $J_x$-diagonal representations, we should expect the expressions for the excitation energies $\omega$ and $\omega^{(x)}$ to be similar or the
same. This similarly applies to $E$ and $E^{(x)}$. On the other hand, the means, $x_o$ and $x_o^{(x)}$, of the SHA eigenfunctions for the different representations are not expected to be the same, because the bases of the eigenfunctions are not the same. Similarly, the spreads $\sigma_o$ and $\sigma_o^{(x)}$ are also different.
Having solved for $x_o$ (or $x_o^{(x)}$), we can next most easily obtain the SHA ground state energy $E$ ($E^{(x)}$). Using equation (3.31) or (3.28) as appropriate, we get
$$E^{(x)} = -\varepsilon j \quad \text{if } \frac{NV}{\varepsilon} < 1 \qquad (3.34)$$
$$E = E^{(x)} = -\frac{\varepsilon^2}{4V} - j^2V \quad \text{if } \frac{NV}{\varepsilon} > 1. \qquad (3.35)$$
Here, we have used $x_o^{(x)} = 0$ for $\frac{NV}{\varepsilon} < 1$. These expressions are very similar to the ground state energy in equations (2.40)-(2.41) obtained using HF. The variation of the ground state energy with the interaction is as depicted in figure (2.6). The only notable difference is that the variational ground state energy $E$ obtained using HF exceeds that of the SHA by $\frac{1}{2}jV$ for $\frac{NV}{\varepsilon} > 1$.
We note an interesting relation between $D$ ($D^{(x)}$) and $E$ ($E^{(x)}$) in equations (3.28) and (3.31):
$$\frac{dE}{dx_o} = j\frac{d}{dx_o}\big[\varepsilon x_o - jV(1-x_o^2)\big] = j(\varepsilon + 2jVx_o) = Dj \qquad (3.36)$$
and
$$\frac{dE^{(x)}}{dx_o} = -j\frac{d}{dx_o}\left[\varepsilon\sqrt{1-x_o^2} + jVx_o^2\right] = j\left(\frac{\varepsilon x_o}{\sqrt{1-x_o^2}} - 2jVx_o\right) = D^{(x)}j. \qquad (3.37)$$
Earlier, we mentioned that $x_o$ (or $x_o^{(x)}$) can be obtained by setting $D = 0$ (or $D^{(x)} = 0$). We will see in all the subsequent models in chapters 4 and 5 that the two quantities $D$ and $E$ are indeed related by $D = \frac{1}{j}\frac{dE}{dx_o}$. We further establish the point that in the SHA, when solving $D = 0$, we are essentially solving for the location of the turning point of $E$ as $x_o$ varies. This brings to mind the variational technique used in equations (2.37) and (2.39) for determining the ground state energy in chapter 2.
Next, we look at the SHA prediction for the excitation energy $\omega$ ($\omega^{(x)}$), where $\omega = 2\sqrt{AB}$ (see equation (3.14)):
$$\omega^{(x)} = \sqrt{\varepsilon^2 - (2jV)^2} \quad \text{if } \frac{NV}{\varepsilon} < 1 \qquad (3.38)$$
$$\omega = \omega^{(x)} = \sqrt{2\big((2jV)^2 - \varepsilon^2\big)} \quad \text{if } \frac{NV}{\varepsilon} > 1. \qquad (3.39)$$
Here, we have used $x_o^{(x)} = 0$ for $\frac{NV}{\varepsilon} < 1$ such that $\omega^{(x)}$ is always real. Again, we note that the results obtained using the SHA are similar to those obtained using the RPA in equations (2.42) and (2.43). The variation of the excitation energy with the interaction is as depicted in figure (2.6). We can then use equation (3.16) to estimate the energies of the low-lying states of the LMG Hamiltonian.
At this point, we summarize some notable features of the SHA.
1. With the appropriate choice of matrix representation ($J_x$-diagonal in the case of the LMG), the SHA can be used to extract information about the system on both sides of the critical point.
2. The SHA gives a single expression for each of $E$, $\omega$ and $\sigma$ as functions of $x_o$ throughout the whole domain of interaction. Only the expressions for $x_o$ differ between phase regions, and this controls the behaviour of $E$, $\omega$ and $\sigma$ in the different phase regions.
3. As mentioned, the procedure of solving for $x_o$ using $D = \frac{1}{j}\frac{dE}{dx_o} = 0$ is analogous to the Hartree-Fock method used in chapter 2. This observation is supported by the fact that the SHA predicted ground state energy resembles the HF ground state energy. Furthermore, the SHA excitation energy obtained using $\omega = 2\sqrt{AB}$ is also similar to the excitation energy obtained using the RPA. Therefore, the SHA contains the information carried by both the HF and RPA in a single approximation. In addition, the formalism of the SHA allows the spreads of the approximate eigenfunctions to be easily obtained, i.e. $\sigma = \sqrt[4]{A/B}$. With the spread $\sigma$, we are able to apply ESHA(I) and ESHA(II) to improve the accuracy of approximate LMG eigenvalues. This is the subject of the next section.
3.3 Accuracy Improvement
We start this section by looking at the spreads ($\sqrt[4]{A/B}$) of the eigenfunctions predicted by the SHA. As we have seen, applying the SHA in the $J_x$-diagonal representation brings out the essential features of the LMG on both sides of the critical point. We can obtain $A^{(x)}$ and $B^{(x)}$ from the $J_x$-diagonal representation, equation (3.31) (evaluated using $x_o$ given in equation (3.33)), to compute the spreads of the eigenfunctions:
$$\sigma^{(x)} = \sqrt[4]{\frac{j^2(\varepsilon + 2Vj)}{\varepsilon - 2Vj}} \quad \text{if } \frac{NV}{\varepsilon} < 1 \qquad (3.40)$$
$$\sigma^{(x)} = \frac{\varepsilon}{2V}\sqrt[4]{\frac{2}{j^2}\,\frac{1}{\left(1 - \frac{\varepsilon^2}{4j^2V^2}\right)}} \quad \text{if } \frac{NV}{\varepsilon} > 1. \qquad (3.41)$$
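The spread formulas are easily evaluated numerically; a minimal sketch (function name illustrative):

```python
def sigma_x(eps, V, j):
    """SHA spreads in the Jx-diagonal representation, eqs (3.40)-(3.41)."""
    if 2 * j * V / eps < 1.0:                                     # NV/eps < 1
        return (j ** 2 * (eps + 2 * V * j) / (eps - 2 * V * j)) ** 0.25
    return (eps / (2 * V)) * (2.0 / (j ** 2 * (1.0 - eps ** 2 / (4 * j ** 2 * V ** 2)))) ** 0.25
```

For $\varepsilon = 1$, $j = 100$, this gives $\sigma^{(x)} \approx 17.3$ at $NV/\varepsilon = 0.8$ and $\sigma^{(x)} \approx 1.7$ at $NV/\varepsilon = 7$, the value used in the examples below; the spread grows without bound as the critical point is approached.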
Figure 3.4: Components of the exact ground state eigenvector (blue squares) obtained from diagonalizing the LMG in the $J_z$-diagonal (left column) and $J_x$-diagonal (right column) bases for various $\frac{NV}{\varepsilon}$. The corresponding SHA eigenfunctions are given by the red lines. As discussed in the text, there is no $J_z$-diagonal SHA solution for $\frac{NV}{\varepsilon} < 1$. [Rows: $\frac{NV}{\varepsilon} = 0.80$, $1.15$, $1.40$, $7.00$.]
We note here that $\sigma^{(x)}$ will be different from $\sigma$ (in the $J_z$-diagonal representation) because the representations, and hence the eigenvectors, are different. At the critical point, the spring-constant term $B^{(x)}$ vanishes and the spread $\sigma^{(x)}$ diverges. At this point, the SHA is no longer valid and equation (3.31) is not a shifted oscillator equation. Other approaches, such as regarding equation (3.31) as either a free-particle equation or an Airy equation with the appropriate boundary conditions, are possible avenues of exploration. However, in this thesis, we will use an equations-of-motion method to handle the Hamiltonian at the critical point. This is the subject of discussion in section 3.4.
While the spread $\sigma^{(x)}$ diverges at the critical point, in the $J_z$-diagonal representation the spread obtained from equation (3.28), which is given by
$$\sigma = j\sqrt[4]{\frac{2}{j^2}\left(1 - \frac{\varepsilon^2}{4j^2V^2}\right)} \quad \text{if } \frac{NV}{\varepsilon} > 1, \qquad (3.42)$$
vanishes at the critical point.
The means of the SHA eigenfunctions are given by $m_o = jx_o$ (or, in our case, $jx_o^{(x)}$). We note that there are two values of $x_o^{(x)}$ for $\frac{NV}{\varepsilon} > 1$. Therefore, the SHA eigenfunctions in the $J_x$-diagonal representation can be written as$^4$
$$\psi^{(x)}_i(m) = \eta_1\left(2^i\sigma^{(x)}i!\sqrt{\pi}\right)^{-\frac{1}{2}} H_i\!\left(\frac{m}{\sigma^{(x)}}\right)e^{-\frac{1}{2}\left(\frac{m}{\sigma^{(x)}}\right)^2} \quad \text{if } \frac{NV}{\varepsilon} < 1 \qquad (3.43)$$
$$\psi^{(x)}_i(m) = \eta_2\left(2^i\sigma^{(x)}i!\sqrt{\pi}\right)^{-\frac{1}{2}}\left(H_i\!\left(\frac{m-m_o^+}{\sigma^{(x)}}\right)e^{-\frac{1}{2}\left(\frac{m-m_o^+}{\sigma^{(x)}}\right)^2} \pm H_i\!\left(\frac{m-m_o^-}{\sigma^{(x)}}\right)e^{-\frac{1}{2}\left(\frac{m-m_o^-}{\sigma^{(x)}}\right)^2}\right) \quad \text{if } \frac{NV}{\varepsilon} > 1. \qquad (3.44)$$
Here, $m_o^\pm = \pm j\sqrt{1 - \left(\frac{\varepsilon}{2jV}\right)^2}$. We considered both roots in the eigenfunction for $\frac{NV}{\varepsilon} > 1$ because both roots are admissible. In the large interaction region, two neighbouring energy levels start to coalesce, becoming degenerate in the large interaction limit $\frac{NV}{\varepsilon} \to \infty$. In the vicinity of the transition, we have a superposition of two distinct distributions, one having a symmetric profile and the other an anti-symmetric profile, as shown in figure (3.5). For large interactions, $\frac{NV}{\varepsilon} \gg 1$, where the neighbouring eigenstates become paired as doublets, we no longer have a superposition; instead, one state of the doublet has components centred at $m_o$ and the other at $-m_o$, as shown in figure (3.6). The occurrence of multiple roots will be discussed in chapter 6 along with other examples.
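The $i = 0$ case of equation (3.44) can be sketched as follows (illustrative helper of ours; normalization constants are absorbed by renormalizing the sampled components):

```python
import numpy as np

def doublet_profiles(j, m_o, sigma):
    """Ground-doublet components from eq (3.44) with i = 0: symmetric (+) and
    antisymmetric (-) combinations of Gaussians centred at +/- m_o."""
    m = np.arange(-j, j + 1)
    g_plus = np.exp(-0.5 * ((m - m_o) / sigma) ** 2)
    g_minus = np.exp(-0.5 * ((m + m_o) / sigma) ** 2)
    sym, asym = g_plus + g_minus, g_plus - g_minus
    return m, sym / np.linalg.norm(sym), asym / np.linalg.norm(asym)
```

When the two Gaussians barely overlap ($NV/\varepsilon \gg 1$), the sum and difference of the two combinations reduce to single peaks at $\pm m_o$, matching figure (3.6).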
$^4$ The operators $J_+^2$ and $J_-^2$ in $H$ scatter pairs of particles from one level to another. Thus, because the basis states $|m\rangle$ are associated with the difference in particle number between two levels, the subset of basis states with odd $m$ does not mix with the subset with even $m$. Therefore, depending on whether $N$ is odd (even), the alternate $a_m$s corresponding to even (odd) $m$ vanish in the ground state eigenvector. In the next excited state, those of odd (even) $m$ vanish instead, and so on. In the large interaction limit, when the spectrum becomes degenerate, the components $a^i_{m+1}$ and $a^{i+1}_m$ corresponding to low-lying eigenstates become equal in magnitude.
Figure 3.5: Components of the exact eigenvectors (blue squares) for the ground state (left) and first excited state (right), obtained from diagonalizing the LMG in the $J_x$-diagonal basis for $\frac{NV}{\varepsilon} = 1.40$. Above the critical interaction $\frac{NV}{\varepsilon} = 1$, the eigenstates become a doublet and $x_o$ has two roots in the $J_x$-diagonal basis. The corresponding SHA eigenfunctions (see equation (3.44)) are given by the red lines. (Left) The symmetric combination of the two SHA distributions, as seen in figure (3.4). (Right) The anti-symmetric combination of the two SHA distributions.
In figure (3.4), we compare the components of the exact eigenvectors$^5$ $a_m$ with those of the SHA eigenfunctions $\psi(m)$ for $N = 200$ over a range of interactions. The predictions obtained using the $J_z$-diagonal representation for $\frac{NV}{\varepsilon} > 1$ have also been included in figure (3.4) for comparison, but there is no SHA eigenfunction for $\frac{NV}{\varepsilon} < 1$ in the $J_z$-diagonal representation. Hence, there is no SHA eigenfunction for $\frac{NV}{\varepsilon} = 0.8$ in the $J_z$-diagonal representation.
In the $J_z$-diagonal representation, the subset of basis states with odd $m$ does not mix with the subset of basis states with even $m$ (see footnote 4). Therefore, in figure (3.4), we note that the alternate $a_m$s corresponding to odd $m$ vanish in the ground state eigenvectors. Thus, to normalize the distribution, we set $\eta = \sqrt{2}$ for the SHA eigenfunction in the $J_z$-diagonal representation (left column of figure (3.4)).
From figure (3.4), we see that the SHA eigenfunctions $\psi(m)$ in the $J_x$-diagonal representation give good approximations of the distributions of the components of the exact ground state eigenvectors $a_m$ in both phases away from the critical point at $\frac{NV}{\varepsilon} = 1$. For each of the examples provided, it is often the tail end of the eigenfunction (furthest away from $m_o$) that differs most
$^5$The actual distributions for $\frac{NV}{\varepsilon} > 1$ will involve some phase factors for the excited states, but we will not include these contributions in our discussion here, as our concern is the goodness of fit for $m_o$ and $\sigma$ from the SHA.
from the exact results. This is in line with our understanding that the approximations are most accurate in the neighbourhood of $m_o$. As expected, the approximations are also generally not good in the vicinity of the critical point. Nonetheless, the profiles of the SHA eigenfunctions give a good indication of where the numerically most significant basis states are distributed.
Figure 3.6: Components of the exact eigenvectors for the two lowest states for $\frac{NV}{\varepsilon} = 7.00$ using the $J_x$-diagonal representation are indicated by blue squares. Here, we no longer have an (anti-)symmetric profile for each eigenstate as shown in figure (3.5). For this interaction, the exact components $a_m$ associated with $+m$ basis states now belong to one eigenstate and those associated with $-m$ basis states belong to another. These two eigenstates are degenerate in the limit $\frac{NV}{\varepsilon} \to \infty$. Considering the degeneracy, we use the SHA predicted $m_o = \pm 99.0$ and spread $\sigma^{(x)} = 1.7$ for both eigenstates. The SHA eigenfunctions $\psi(m)$ are given by the red lines, and the signs have been selected to match the phase of the exact results for comparison.
In figure (3.4), we see that when $\frac{NV}{\varepsilon} < 1$ in the $J_x$-diagonal representation and $\frac{NV}{\varepsilon} \gg 1$ in the $J_z$-diagonal representation, the SHA eigenfunctions $\psi(m)$ match the profile of the exact $a_m$. This indicates that the su(2) algebra contracts to an oscillator algebra in both phases.
On the other hand, for $\frac{NV}{\varepsilon} \gg 1$ in the $J_x$-diagonal representation, the match between the SHA eigenfunctions $\psi(m)$ and the exact components $a_m$ is not very good. However, the number of basis states that contribute significantly to the eigenvectors is rather small. The same is observed for $\frac{NV}{\varepsilon} \to 0$ in the $J_z$-diagonal representation. Therefore, it is possible to apply ESHA(I) to obtain more accurate eigenvalues in such cases.
We consider the example where $\frac{NV}{\varepsilon} = 7.00$ in the $J_x$-diagonal representation, as seen in figure (3.6). When $\frac{NV}{\varepsilon} \gg 1$, pairs of adjacent energy levels become nearly degenerate (see figure (2.4)). In the case where $\frac{NV}{\varepsilon} = 7.00$, we have $m_o = +99.0$ for the ground state and
$m_o = -99.0$ for the first excited state, both with spread $\sigma^{(x)} = 1.7$.
The profile of the SHA eigenfunction in figure (3.6) suggests that, in order to get accurate eigenvalues for the two lowest states using ESHA(I), we have to construct a Hamiltonian matrix $H'$ that involves two disconnected$^6$ subsets of basis states $\{|m\rangle\}$ which have numerically significant components. As seen in figure (3.6), an appropriate choice of basis states involves $\{|m\rangle\}$ with $m = -100, -99, \ldots, -82$ and $m = 82, 83, \ldots, 100$. Each range corresponds to about ten times the predicted $\sigma^{(x)} = 1.7$, and we will use this as a guideline when we apply ESHA(I). This is shown schematically below, with `$-$' indicating the sub-matrix corresponding to $m = -100, -99, \ldots, -82$ and `$+$' corresponding to $m = 82, 83, \ldots, 100$:
$$H = \begin{bmatrix} - & & 0 \\ & \ddots & \\ 0 & & + \end{bmatrix} \longrightarrow H' = \begin{bmatrix} - & 0 \\ 0 & + \end{bmatrix}. \qquad (3.45)$$
The other basis states, which are not relevant, are excluded from $H'$. Having done so, we diagonalize $H'$. The exact ground state energy and excitation energy (from one degenerate pair of states to another) are compared with the SHA predictions (from equations (3.39) and (3.35)) and ESHA(I):
$E_0$ (Exact) $= -355.7458733$, $E$ (SHA) $= -357.1428571$, $E$ (ESHA(I)) $= -355.7458731$;
$E_2 - E_0$ (Exact) $= 9.7383450$, $\omega$ (SHA) $= 9.7979590$, $E_2 - E_0$ (ESHA(I)) $= 9.7383554$.
The dimension of $H'$ used in ESHA(I) is less than 20% of the full Hilbert space, yet it yields an accuracy of four to five decimal places. A larger set of basis states can be used to obtain eigenvalues with higher accuracy or for checking convergence.
On the other hand, if we are interested in a more accurate estimate of the eigenvalues for $\frac{NV}{\varepsilon} = 0.80$, $N = 200$, the Gaussian-like profile of the SHA ground state eigenfunction in figure (3.4) for the $J_x$-diagonal representation indicates that ESHA(II) is the appropriate technique to use. Here, we will use the set of SHA eigenfunctions $\psi_i(m)$ to approximate the lowest
$^6$It is the way the basis states $m$ are ordered in the Hamiltonian that causes this `disconnect'. If the basis states are ordered by $|m|$ instead, the numerically significant basis states will be grouped together.
15 eigenvectors $a^i_m$. We compare the exact results with those obtained using the SHA and ESHA(II) below:
$E_0$ (Exact) $= -100.1962031$, $E$ (SHA) $= -100.000$, $E$ (ESHA(II)) $= -100.1962029$;
$E_1 - E_0$ (Exact) $= 0.61970$, $\omega$ (SHA) $= 0.600$, $E_1 - E_0$ (ESHA(II)) $= 0.61974$.
Diagonalizing the LMG Hamiltonian sub-matrix expressed in the 15 basis states gives the ground state energy and first excited state energy to an accuracy of four to five decimal places. Similar accuracy is achieved for the next few low-lying excited states. The computational resources required here are small compared to diagonalizing the Hamiltonian in the full Hilbert space with 201 basis states. However, near the critical point, many more basis states will be needed. In section 3.4, we will introduce a basis-independent technique that is useful for making approximations even in the vicinity of the critical interaction, i.e. $\frac{NV}{\varepsilon} \to 1$. Then, we will compare the effectiveness of the various techniques in different domains of interaction in section 3.5.
Matrix representations of operators in the eigenstate basis
We round off this section by discussing how the matrix representations of $\{J_z, J_\pm\}$ in the basis where the Hamiltonian is diagonal can be approximated. We will call this basis the eigenstate basis. First, we make a simple observation: if a unitary transformation $U$ diagonalizes a given Hamiltonian matrix $H(\{X_\nu\})$ expressed as a polynomial in the operators $X_\nu$, then $U^{-1}X_\nu U$ will also give the corresponding matrix representations of these operators in the eigenstate basis. For ESHA(I), the unitary transformation $U$ we can use for this purpose is the same transformation that diagonalizes $H'$ in equation (3.45). As for ESHA(II), $U$ can be obtained using the method described in equation (3.24). Below, we compare the lowest $4 \times 4$ $J_+$ matrix elements for $\frac{NV}{\varepsilon} = 0.80, 7.00$ with $N = 200$. For $\frac{NV}{\varepsilon} = 0.80$,
$$J_+(\text{Exact}) = \begin{bmatrix} 0 & -7.8726 & 0 & -0.0326 \\ -16.1659 & 0 & -10.6282 & 0 \\ 0 & -22.5497 & 0 & 12.4729 \\ 0.1286 & 0 & 27.2807 & 0 \end{bmatrix},$$
$$J_+(\text{ESHA(II)}) = \begin{bmatrix} 0 & -7.8725 & 0 & -0.0334 \\ -16.1658 & 0 & -10.6340 & 0 \\ 0 & -22.5523 & 0 & 12.4736 \\ 0.1265 & 0 & 27.2691 & 0 \end{bmatrix}.$$
For $\frac{NV}{\varepsilon} = 7.00$,
$$J_+(\text{Exact}) = \begin{bmatrix} -98.9263296 & 0 & 0 & -4.6854003 \\ 0 & 98.9263296 & -4.6854003 & 0 \\ 0 & 7.1296271 & 97.8171293 & 0 \\ 7.1296271 & 0 & 0 & -97.8171293 \end{bmatrix},$$
$$J_+(\text{ESHA(I)}) = \begin{bmatrix} 98.9263301 & 0 & 0 & 4.6854061 \\ 0 & -98.9263301 & -4.6854061 & 0 \\ 0 & 7.1296282 & -97.8171556 & 0 \\ -7.1296282 & 0 & 0 & 97.8171556 \end{bmatrix}.$$
As we have seen, the $J_+$ matrix representations in the eigenstate basis obtained using ESHA(I) and ESHA(II) are in good agreement with the exact results, up to an arbitrary phase.
Alternatively, when the contraction of the su(2) algebra to an oscillator algebra is a good approximation (also where the approximation $a_m \approx \psi(m)$ is good), an approximate matrix representation of $\{J_z, J_\pm\}$ in the eigenstate basis can be obtained directly. Using the matrix representation of the position $x$ of the harmonic oscillator,
$$x - x_o = x' = \frac{1}{\sqrt{2}}\frac{\sigma}{j}(a + a^\dagger) \qquad (3.46)$$
where $a^\dagger$ and $a$ are the matrix representations of the raising and lowering operators of the harmonic oscillator, we can use $J_z = jx$ to get
$$J_z = \frac{\sigma}{\sqrt{2}}(a + a^\dagger) + m_o. \qquad (3.47)$$
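A minimal sketch of this construction with truncated oscillator matrices (the dimension is illustrative; $\sigma = 1.7$, $m_o = 99$ echo the $NV/\varepsilon = 7$ example):

```python
import numpy as np

def ladder_ops(n):
    """n x n matrix representations of the oscillator lowering/raising operators."""
    a = np.diag(np.sqrt(np.arange(1.0, n)), 1)   # a|k> = sqrt(k)|k-1>
    return a, a.T

# Jz in the approximate eigenstate basis, eq (3.47)
n, sigma, m_o = 6, 1.7, 99.0
a, adag = ladder_ops(n)
Jz_eig = (sigma / np.sqrt(2.0)) * (a + adag) + m_o * np.eye(n)
```

Note that the truncation only affects the last row/column of $[a, a^\dagger]$, so the upper-left block of $J_z$ is reliable for the lowest states.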
Likewise, we can also write
$$\frac{1}{j}\frac{d}{dx'} = \frac{1}{\sqrt{2}\,\sigma}(a - a^\dagger) \qquad (3.48)$$
and, using equation (3.9),
$$\begin{aligned}
J_\pm &\approx j\sqrt{1-x^2}\left(1 \mp \frac{1}{j}\frac{d}{dx'} + \frac{1}{2}\left(\frac{1}{j}\frac{d}{dx'}\right)^2\right) \\
&\approx j\sqrt{1-x_o^2}\left(1 - \frac{x_ox'}{1-x_o^2} - \frac{x'^2}{2(1-x_o^2)^2}\right)\left(1 \mp \frac{1}{j}\frac{d}{dx'} + \frac{1}{2}\left(\frac{1}{j}\frac{d}{dx'}\right)^2\right). \qquad (3.49)
\end{aligned}$$
In the last line, we expanded $\sqrt{1-x^2}$ about $x_o$. Thus, in this domain, we can easily obtain the approximate matrix representations$^7$ of $J_\pm$ and $J_z$ in the eigenstate basis without diagonalization. Next, we introduce another approximation technique that is expected to give accurate results even in the vicinity of the critical point.
$^7$The diagonal of the SHA representation of $J_\pm$ differs from the exact one by approximately $\frac{1}{2}$, for reasons similar to those given in the appendix at the end of this chapter.
3.4 An Equations-of-Motion Approach: RRKK
The RRKK approach is based on three previous developments: a variational technique by
Rosensteel and Rowe [60, 61] for computing the irreducible representations (hereafter irreps)
of potentially difficult Lie algebras; Rowe’s double-commutator equations-of-motion formalism
[64, 65]; and the equations-of-motion method of Kerman and Klein [35, 45]. Thus, we refer to
it as the RRKK equations-of-motion method [33, 62]. The RRKK avoids the preliminary stage
of defining basis states and proceeds directly to the determination of a matrix representation
of operators in which the Hamiltonian is diagonal, i.e. the eigenstate basis.
RRKK and diagonalization differ in much the same way that the Heisenberg and Schrödinger treatments of quantum mechanics differ. Schrödinger regarded physical observables as operators, and the energy levels of the system are obtained by solving Hamiltonian equations as eigenvalue problems. Using an appropriate basis, the Hamiltonian $H$ can be represented as a matrix and the energy levels can be obtained by numerical diagonalization. On the other hand, Heisenberg's approach involves solving for the time evolution of the operators $X_\nu$ using the equations of motion,
$$i\frac{d}{dt}\langle X_\nu\rangle = \langle[X_\nu, H]\rangle. \qquad (3.50)$$
In the context of RRKK, we consider Hamiltonians $H$ which can be expressed in terms of a set of operators $\{X_\nu\}$ belonging to a Lie algebra $\mathfrak{g}$. The objective of the RRKK is to obtain the eigenvalues of $H$ and the matrix representations of the operators $\{X_\nu\}$ in the eigenstate basis numerically. In the RRKK, we need to use the set of commutation relations of the algebra, information on the Casimir invariant, and the equations of motion, $[X_\nu, H]$.
With established numerical algorithms, diagonalization is an efficient technique. Currently, on a laptop computer, a Hamiltonian matrix with dimensions of a few thousand can be diagonalized in just a few minutes. On supercomputers, Hamiltonian matrices with dimensions of the order $10^{12}$ can also be diagonalized; for example, see [63]. On the other hand, if we are only interested in a small number of low-lying states, a good numerical approximation can be obtained with RRKK, requiring only relatively modest computational capabilities.
The formalism of RRKK
We first consider a Hamiltonian $H$ expressed as a polynomial in the elements $\{X_\nu\}$ of a Lie algebra $\mathfrak{g}$ of operators with commutation relations $[X_\mu, X_\nu] = \sum_\sigma C^\sigma_{\mu\nu} X_\sigma$. The objective is to determine a unitary irrep in which each observable $X_\nu$ is represented by a matrix $X(\nu)$, with elements
$$X_{pq}(\nu) := \langle p|X_\nu|q\rangle, \qquad (3.51)$$
Chapter 3. The SHA and an Equations-of-Motion Approach 43
to be determined along with energy differences $\{E_p - E_q\}$, such that the sets of equations
$$f^{(1)}_{pq}(\mu,\nu) := \langle p|[X_\mu, X_\nu]|q\rangle - \sum_\sigma C^\sigma_{\mu\nu}\langle p|X_\sigma|q\rangle = 0, \qquad (3.52)$$
$$f^{(2)}_{pq}(\nu) := \langle p|[H, X_\nu]|q\rangle - (E_p - E_q)\langle p|X_\nu|q\rangle = 0, \qquad (3.53)$$
are satisfied. We can see here that these equations are a huge potential improvement over
the RPA (in equation (2.25)) where non-harmonic contributions are excluded. In fact, these
equations are exact until further approximations are made.
In addition to the above two sets of equations, it will also be necessary to include additional equations to ensure that the representation is the one desired. For example, if the Lie algebra has Casimir operators$^8$ $C_n(\mathfrak{g})$, equations for the diagonal of the matrix representation of the Casimir operator can be included, and we require these matrix elements to be equal to the Casimir invariants $c_n$. Together with equations (3.52) and (3.53), we can define the so-called objective function $F$ for a finite irrep of dimension $D$ as the sum of squares
$$F = \sum_{p,q}^{D}\left[\sum_{\mu\nu}\left|f^{(1)}_{pq}(\mu,\nu)\right|^2 + \sum_\nu\left|f^{(2)}_{pq}(\nu)\right|^2 + \sum_n\left|\langle p|(C_n(\mathfrak{g}) - c_n)\delta_{p,q}|q\rangle\right|^2\right]. \qquad (3.54)$$
$F$ cannot be negative and vanishes only when a solution to the system of equations has been obtained. Thus, for finite irreps, exact solutions to the above equations are obtained by minimizing $F$ as a function of the unknown matrix elements of the operators and the energy differences. If needed, the Hamiltonian matrix $H$ can also be evaluated using the solved $\{X(\nu)\}$ matrices to determine the eigenvalues.
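The content of equations (3.52)-(3.54) can be checked numerically on a small su(2) example: on an exact solution (obtained here by diagonalization, purely for verification, rather than by the RRKK minimization itself), the residuals and hence the objective function $F$ vanish to machine precision (parameter values are illustrative):

```python
import numpy as np

def su2(j):
    """Standard spin-j matrices Jz, J+, J- in the Jz-diagonal basis."""
    m = np.arange(-j, j + 1)
    jz = np.diag(m.astype(float))
    jp = np.diag(np.sqrt((j - m[:-1]) * (j + m[:-1] + 1.0)), -1)
    return jz, jp, jp.T

j, eps, V = 5, 1.0, 0.05
jz, jp, jm = su2(j)
H = eps * jz - 0.5 * V * (jp @ jp + jm @ jm)
E, U = np.linalg.eigh(H)
Xz, Xp, Xm = (U.T @ X @ U for X in (jz, jp, jm))   # operators in the eigenstate basis

# eq (3.52): [J+, J-] - 2 Jz evaluated in the eigenstate basis
f1 = Xp @ Xm - Xm @ Xp - 2.0 * Xz
# eq (3.53): <p|[H, Jz]|q> - (Ep - Eq) <p|Jz|q>
f2 = np.diag(E) @ Xz - Xz @ np.diag(E) - (E[:, None] - E[None, :]) * Xz
# Casimir constraint: diagonal of Jz^2 + (J+J- + J-J+)/2 equals j(j+1)
cas = np.diag(Xz @ Xz + 0.5 * (Xp @ Xm + Xm @ Xp))
F = np.sum(np.abs(f1) ** 2) + np.sum(np.abs(f2) ** 2) + np.sum((cas - j * (j + 1)) ** 2)
```

In the RRKK proper, the matrix elements $X_{pq}(\nu)$ and the differences $E_p - E_q$ are instead treated as unknowns and $F$ is driven to zero by a numerical optimizer.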
When the problem is solved in the full space, RRKK gives the same results as diagonalization, within computational errors, up to an arbitrary phase factor. However, diagonalization remains computationally more efficient than RRKK (which is based on optimization) for spaces with small to intermediate dimensions. On the other hand, techniques can be developed such that RRKK requires only a small number of unknowns in problems where diagonalization would require a huge space to get accurate results even for the lowest states. It is not hard to see this weakness of diagonalization, as the size of the space required depends on the suitability of the pre-determined basis. For example, consider diagonalizing the LMG model using the $J_x$-diagonal representation instead of the $J_z$-diagonal representation for interactions $\frac{NV}{\varepsilon} \to 1$ (see figure (3.4)). Almost all the basis states contribute significantly to the eigenvectors of the low-lying states. To obtain accurate eigenvalues even for the low-lying states, we have to diagonalize the Hamiltonian matrix using nearly the full Hilbert space.
8The LMG has an su(2) spectrum generating algebra which has only one Casimir operator.
Chapter 3. The SHA and an Equations-of-Motion Approach 44
RRKK in truncated space
On the other hand, in RRKK, there is an option of solving the model restricted to the
low-lying subspace. This can be done as long as additional unknowns are included in the
system of equations to account for the effects of solving the model in a subspace. We term these additional unknowns auxiliary variables. With this technique, the dimension of the subspace is determined only by the number of low-lying states of interest and the accuracy we require for the eigenvalues and matrix representations. Thus, the challenge is to obtain accurate
solutions for finite sub-matrices of the operators, corresponding to a subset of lowest-energy
eigenstates, when the irrep is very large or even infinite. A more familiar scenario of truncation
can also be found in mathematical analysis. Consider a differential equation defined over the
positive half of the real line, 0 < r < ∞. Such an equation can be solved precisely over a
finite interval 0 < r ≤ R, if one knows the boundary conditions at R. Similarly, matrices
{X(µ)} with very large dimensions D or even infinite dimensions can be truncated to smaller
d× d sub-matrices. Auxiliary variables in the outer rows and columns of the matrices serve as
boundaries. (See figure (3.7).) When solving for the truncated d × d matrix representations, the set of equations in (3.52), with the auxiliary unknowns included in the (d + 1)th rows and columns, is approximated as

f^(1)_pq(μ,ν) ≈ Σ_{r=1}^{d+1} [X_pr(μ)X_rq(ν) − X_pr(ν)X_rq(μ)] − Σ_σ C^σ_{μν} X_pq(σ).   (3.55)
Corresponding approximations for the evaluation of f^(2)_pq(ν) and the matrix elements of the Casimir invariants are applied. Equation (3.55) is illustrated in figure (3.7) for the commutation relation [J+, J−] = 2Jz, where the matrix representations of Jz and J± are denoted X(z) and X(±) respectively.
In general, before solving the equations, it is possible to anticipate what the matrix representations of the operators in the eigenstate basis will look like. When NV/ε → 0 in the LMG, the main contribution to the Hamiltonian (3.26) comes from the Jz term. Thus, it is reasonable to expect the matrix representations of the operators to be similar to those in the Jz-diagonal representation (see equations (3.2) and (3.3)). We can similarly predict that the matrix representations in the eigenstate basis when NV/ε ≫ 1 will be similar to those in the Jx-diagonal representation (see equations (3.29) and (3.30)). In either case, matrix elements near the diagonal are numerically significant while those away from the diagonal become numerically insignificant. It is reasonable to expect this band-diagonal character to persist over a wide range of interactions NV/ε, because the transition matrix elements connecting neighbouring eigenstates are usually the most significant numerically.
Figure 3.7: Illustration of how the truncated matrix representation of [J+, J−] = 2Jz is used to construct f^(1)(+,−). (See equation (3.55).) The auxiliary unknowns mentioned earlier are indicated. Every element of f^(1) is an expression involved in the minimization of F′ in equation (3.56). The elements closest to f^(1)_dd (the bottom-right corner of f^(1)) are the most severely affected by the truncation of the matrix representations and thus the least reliable. A weighting scheme is introduced such that the contributions from these expressions are de-emphasized.

One consequence of band-diagonality for f^(1) etc. is that the most significant errors in matrix multiplications due to truncation are confined to the lower-right corner of the product matrix. This suggests that the 'reliability' of the expressions in the truncated equations (3.52) and (3.53) diminishes approaching the lower-right corner of the product matrix, and a suitable weighting scheme, such as w_pq = 1/((p+1)(q+1)), can be used to reflect this. Thus, the objective function can be redefined as
F′ = Σ_{p,q=1}^{d} w²_pq [ Σ_{μν} |f^(1)_pq(μ,ν)|² + Σ_ν |f^(2)_pq(ν)|² + Σ_n |⟨p|(Cn(g) − cn)δ_{p,q}|q⟩|² ].   (3.56)
Minimization of F′ is then carried out iteratively, starting from a first guess. Such first guesses involve approximating the unknown matrix elements by their values in the no-interaction limit or in asymptotic limits of one of the two phases. In our example, the first guesses will be generated using the SHA we have developed.
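The claim that truncation errors concentrate in the lower-right corner can also be checked directly. In this sketch (Python; the couplings and dimensions are illustrative assumptions), the exact operators of an LMG-type model are transformed to the eigenstate basis, truncated to (d+1) × (d+1) blocks playing the role of the solved matrices plus the auxiliary border, and the residual of [J+, J−] − 2Jz over the inner d × d block is inspected:

```python
import numpy as np

j = 10                                  # D = 21, truncated to d = 8 (+1 border)
D, d = 2 * j + 1, 8
m = np.arange(-j, j + 1)

Jz = np.diag(m.astype(float))
Jp = np.zeros((D, D))
for i in range(D - 1):
    Jp[i + 1, i] = np.sqrt(j * (j + 1) - m[i] * (m[i] + 1))
eps, V = 1.0, 0.04                      # illustrative couplings
H = eps * Jz + V * (Jp @ Jp + Jp.T @ Jp.T)

E, U = np.linalg.eigh(H)                # eigenstate basis, energies ascending
Xz = U.T @ Jz @ U                       # band-diagonal in this basis
Xp = U.T @ Jp @ U
Xm = Xp.T

t = slice(0, d + 1)                     # keep a (d+1) x (d+1) block
res = (Xp[t, t] @ Xm[t, t] - Xm[t, t] @ Xp[t, t] - 2 * Xz[t, t])[:d, :d]

# Weights w_pq = 1/((p+1)(q+1)) (p, q = 1, ..., d) de-emphasize the corner.
w = 1.0 / np.outer(np.arange(2, d + 2), np.arange(2, d + 2))
print(abs(res[0, 0]), abs(res[-1, -1]))   # the error grows toward the corner
print(abs((w * res)[-1, -1]))             # and is suppressed by the weights
```

The residual at the top-left (ground-state) corner is orders of magnitude smaller than at the bottom-right, which is exactly the behaviour the weighting scheme exploits.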
Applying RRKK to the LMG
In the LMG model, we can apply the RRKK to estimate the low-lying eigenvalues {E1, …, Ed} (to keep the notation consistent, the ground state is denoted E1 here) and the corresponding matrix representations of the operators Jz and J± in the eigenstate basis. In this case, we denote the unknown matrix elements of the matrix representation of Jz by Xpq(z), using the notation in equation (3.51). The operator Jz is self-adjoint, so Xpq(z) = Xqp(z). Similarly, J− = J+†, so Xpq(+) = Xqp(−).
The su(2) commutation relations are

[J+, J−] = 2Jz   and   [Jz, J±] = ±J±,

and the su(2) Casimir invariant is given by

C[su(2)] = (1/2)(J+J− + J−J+) + Jz²,

taking the value

c = j(j + 1).   (3.57)

The equations of motion are given by

[H, J+] = εJ+ + V(J−Jz + JzJ−)   and   [H, Jz] = −V(J−J− − J+J+).

Since J+ and J− contain the same set of unknown matrix elements to be solved, we need only the commutation relations involving either J+ or J−. From the above equations, we get the set of expressions

f^(1)_pq(+,−) = Σ_{r=1}^{d+1} [Xpr(+)Xrq(−) − Xpr(−)Xrq(+)] − 2Xpq(z)   (3.58)

f^(1)_pq(z,+) = Σ_{r=1}^{d+1} [Xpr(z)Xrq(+) − Xpr(+)Xrq(z)] − Xpq(+)   (3.59)

f^(2)_pq(+) = εXpq(+) + V Σ_{r=1}^{d+1} [Xpr(−)Xrq(z) + Xpr(z)Xrq(−)] − (Ep − Eq)Xpq(+)   (3.60)

f^(2)_pq(z) = −V Σ_{r=1}^{d+1} [Xpr(−)Xrq(−) − Xpr(+)Xrq(+)] − (Ep − Eq)Xpq(z)   (3.61)

⟨p|(C(su(2)) − c)|p⟩ = Σ_{r=1}^{d+1} [(1/2)(Xpr(+)² + Xpr(−)²) + Xpr(z)²] − j(j + 1).   (3.62)
Here, we have more equations than variables, so the system is over-determined. The above five sets of expressions are used to construct the objective function F′ given in equation (3.56). F′ is then passed to an optimization routine such as 'lsqnonlin' in Matlab. We use the results from the SHA to generate the first guess for the optimization routine. The results of applying RRKK to the LMG are shown in the next section.
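For readers who prefer a runnable outline, the following Python sketch mimics the procedure, with scipy's least_squares standing in for Matlab's 'lsqnonlin'. It is a schematic full-space variant: the Hamiltonian H = εJz + V(J+² + J−²), the couplings, and the residual set (the commutators, the Casimir condition, and the requirement that H built from the unknown matrices be diagonal with entries Ep) are illustrative assumptions rather than the thesis formulation, which works with the equations of motion (3.58)-(3.61) in a truncated space:

```python
import numpy as np
from scipy.optimize import least_squares

j = 2                                   # small irrep so the demo runs fast
D = 2 * j + 1
eps, V = 1.0, 0.05                      # illustrative couplings (weak phase)
m = np.arange(-j, j + 1)

# Reference operators in the Jz-diagonal basis (used for the first guess).
Jz = np.diag(m.astype(float))
Jp = np.zeros((D, D))
for i in range(D - 1):
    Jp[i + 1, i] = np.sqrt(j * (j + 1) - m[i] * (m[i] + 1))
H_ref = eps * Jz + V * (Jp @ Jp + Jp.T @ Jp.T)
E_exact = np.linalg.eigvalsh(H_ref)     # "exact" spectrum, for comparison

def unpack(u):
    Xz = u[:D * D].reshape(D, D)
    Xp = u[D * D:2 * D * D].reshape(D, D)
    E = u[2 * D * D:]
    return Xz, Xp, E

def residuals(u):
    Xz, Xp, E = unpack(u)
    Xm = Xp.T                           # J- = (J+)^dagger; real matrices assumed
    f1a = Xp @ Xm - Xm @ Xp - 2 * Xz    # [J+, J-] = 2Jz
    f1b = Xz @ Xp - Xp @ Xz - Xp        # [Jz, J+] = J+
    C = 0.5 * (Xp @ Xm + Xm @ Xp) + Xz @ Xz
    fC = np.diag(C) - j * (j + 1)       # Casimir fixed to j(j+1)
    Hx = eps * Xz + V * (Xp @ Xp + Xm @ Xm)
    fH = Hx - np.diag(E)                # H must be diagonal in the eigenbasis
    return np.concatenate([f1a.ravel(), f1b.ravel(), fC, fH.ravel()])

# First guess: the non-interacting (V -> 0) solution plus a little noise.
rng = np.random.default_rng(0)
u0 = np.concatenate([Jz.ravel(), Jp.ravel(), eps * m.astype(float)])
sol = least_squares(residuals, u0 + 0.01 * rng.standard_normal(u0.size),
                    xtol=1e-14, ftol=1e-14, gtol=1e-14)
Xz, Xp, E = unpack(sol.x)
print(np.sort(E) - E_exact)             # agrees with diagonalization
```

Because the problem is solved in the full space here, the optimum reproduces diagonalization; the thesis's truncated version replaces the outer rows and columns with auxiliary variables and weights the residuals as in F′.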
3.5 Numerical Results for RRKK
In section 3.3, we saw the performance of the SHA at interactions away from the critical region, where NV/ε ≈ 1. From figure (3.4), we can see significant changes in the profile of the ground state distributions in the domain 0.8 ≲ NV/ε ≲ 1.4. In this region, as seen in figure (3.4), the SHA does not give accurate predictions of the eigenfunctions. Therefore, we do not expect the SHA to give accurate estimates of the eigenvalues or matrix representations. However, the SHA can still be used here to generate a first guess, which is a required input for the optimization algorithm used in RRKK. In this section, we show two results obtained using RRKK within the critical region 0.99 ≲ NV/ε ≲ 1.15 with N = 200. (See figure (2.6). For N = 200, the turning point of the exact excitation energy, which indicates the phase transition, falls within 0.99 ≲ NV/ε ≲ 1.15.)
For N = 200, the dimension of the full Hilbert space is D = 201. We solve for the lowest few states using a much smaller truncated space of dimension d = 13(+1), where (+1) refers to the extra outer columns and rows containing the auxiliary variables. Altogether, there are 167 unknowns to be solved. (E1 is set to zero, since it is energy differences that RRKK solves for.) This number of variables is smaller than expected because we have exploited the fact that matrix elements of J± and Jz connecting states of opposite parity vanish. (See footnote 4.) There are a total of 533 expressions in F′ to be minimized. In the large interaction limit, pairs of neighbouring levels coalesce and become degenerate. This has been taken into account in the first guess by modifying the matrix representation of the raising operator a† for large NV/ε. The details of generating the first guesses are given in table (3.1).
We show the lowest 6×6 sub-matrix of the J+ matrix representation for N = 200, NV/ε = 0.99. The first guesses, generated using the SHA with the Jx-diagonal representation (since NV/ε < 1), are included. We compare the results obtained using RRKK with the exact ones. The digits differing from the exact results are highlighted in red.
Table 3.1: Generation of the first guesses for F′.

For NV/ε ≲ 1 (SHA with Jx-diagonal representation):
eq. (3.33): mo = j xo^(x) = 0
eq. (3.40): σ^(x) = [ j²(ε + 2Vj)/(ε − 2Vj) ]^{1/4}

For NV/ε ≳ 1 (SHA with Jz-diagonal representation):
eq. (3.32): mo = j xo = −ε/(2V)
eq. (3.42): σ = j / [ 2j²(1 − ε²/(4j²V²)) ]^{1/4}

In both cases:
eq. (3.47): Jz = (σ/√2)(a + a†) + mo
eq. (3.49): J± ≈ j√(1 − xo²) (1 − xo x′/(1 − xo²) − x′²/(2(1 − xo²)²) + …)(1 ∓ (1/j) d/dx′ + (1/2)((1/j) d/dx′)² + …)
eqs. (3.46) and (3.48): x′ = (σ/(√2 j))(a + a†),   (1/j) d/dx′ = (1/(√2 σ))(a − a†)

For NV/ε ≲ 1, a† is the standard oscillator raising matrix, with entries √1, √2, … on the subdiagonal. For NV/ε ≳ 1, where pairs of neighbouring levels coalesce, a† is modified to have the blocks √1 I₂, √2 I₂, … on the subdiagonal, where I₂ (0) are 2 × 2 identity (zero) matrices.

For NV/ε ≲ 1, the rotated operators J′± = ∓(1/2)(J+ − J−) − Jz and J′z = −(1/2)(J+ + J−) are used; in this basis, J′x is diagonal. For NV/ε ≳ 1, J± and Jz are used as above.
J+(First Guess) =
0 −24.7419 −0.0035 −0.1626 0 0
−28.3744 0 −35.0842 −0.0061 −0.3253 0
0.0035 −40.0336 0 −43.0843 −0.0087 −0.5143
0.1626 0.0061 −48.9160 0 −49.8822 −0.0112
0 0.3253 0.0087 −56.3505 0 −55.9185
0 0 0.5143 0.0112 −62.8533 0
J+(RRKK) =
0 −15.058884466 0 −0.258595146 0 0.018141271
−20.565549629 0 −17.380411663 0 −0.158021848 0
0 −26.240813536 0 −19.0283812422 0 −0.058697944
−1.256677579 0 −30.574410496 0 −20.168494490 0
0 −1.500985835 0 −34.109757326 0 −21.011500467
−0.054919704 0 −1.690051658 0 −37.141499727 0
J+(Exact) =
0 15.058884466 0 −0.258595146 0 0.018141271
20.565549629 0 17.380411663 0 0.158021848 0
0 26.240813536 0 19.0283812422 0 −0.058697944
−1.256677579 0 −30.574410496 0 −20.168494490 0
0 1.500985835 0 −34.109757326 0 −21.011500466
−0.054919704 0 −1.690051658 0 −37.141499727 0
.
In the region of critical interaction, we note that the first guess for J+ etc. differs significantly from the exact results. Away from this critical region, the first guesses are typically within a few percent of the exact results, and the solution is then obtained in a shorter time. The results presented here have F′ of the order of 10⁻²⁰ when the searches for the minima terminate. As we can see, the lowest 6×6 elements of the matrix representations obtained using RRKK are accurate to 8 to 9 decimal places even though the interaction is NV/ε = 0.99. The poor first guesses did not influence the accuracy of the solution. We also note that the signs of some matrix elements obtained using RRKK differ from those obtained from diagonalization. The signs are arbitrary to the extent that J± and Jz give the correct eigenvalues and obey the commutation relations. Therefore, the difference in signs is not an issue.
The solved J± and Jz matrix representations are substituted back into the LMG Hamiltonian, and the six lowest eigenvalues are compared against the exact results. Again, we include the first guesses of the energies, generated using the SHA with the Jx-diagonal representation, for comparison.
E(First Guess) =
−99.92993
−99.79574
−99.66822
−99.54684
−99.43108
−99.32041
, E(RRKK) =
−100.3780314798271
−100.0719926850164
−99.6723782982393
−99.2167143895793
−98.7162690725088
−98.1782526947687
, E(Exact) =
−100.3780314798271
−100.0719926850163
−99.6723782982393
−99.2167143895794
−98.7162690725050
−98.1782526948669
The results obtained from RRKK are in good agreement with exact ones. In the absence
of exact values, the accuracy of the results obtained from RRKK can be checked by gradually
increasing the dimension d of the truncated unknown matrices until satisfactory convergence of
the solutions is obtained.
We next look at another approximation, with NV/ε = 1.15, which lies in the other phase within the critical region. This result could have been obtained by successive application of RRKK from NV/ε = 0.99 in steps up to NV/ε = 1.15, using the solution at each interaction as the first guess for the next. However, instead of using this approach, we will check the usefulness of the first guess generated using the SHA in the Jz-diagonal representation.
J+(First Guess) =
0 48.6305 −0 5.9926 0 −3.0162
−48.6305 0 5.9926 0 −3.0162 0
0 14.2531 0 45.3980 0 8.5957
14.2531 0 45.3980 0 8.5957 0
0 −0.5846 0 19.9307 0 42.1863
−0.5846 0 19.9307 0 42.1863 0
J+(RRKK) =
0 45.795487162 0 6.566783594 0 0.176785174
46.021582805 0 8.646826166 0 1.466423113 0
0 15.567474977 0 30.458857563 0 3.637370785
13.913182945 0 34.599922974 0 22.055472107 0
0 5.567799754 0 32.274437275 0 25.638926565
2.408729664 0 9.838854206 0 37.188691269 0
J+(Exact) =
0 −45.795487162 0 −6.566783589 0 0.176785143
−46.021582805 0 −8.646826168 0 −1.466423129 0
0 −15.567474973 0 30.458857528 0 −3.637370639
−13.913182946 0 34.599922971 0 −22.055472234 0
0 5.567799761 0 −32.274437269 0 25.638925827
2.408729654 0 −9.838854192 0 37.188691378 0
E(First Guess) =
−101.0556
−101.0556
−100.3119
−100.3119
−99.7225
−99.7225
, E(RRKK) =
−101.1705555551
−101.1653704969
−100.5795385103
−100.4458705191
−100.0551216302
−99.6733246508
, E(Exact) =
−101.1705555551
−101.1653704969
−100.5795385101
−100.4458705198
−100.0551216251
−99.6733246924
When compared with the exact solutions, the results obtained from RRKK are a little less accurate than those in the previous example. However, the accuracy is still very good for most purposes. The typical time taken ranges from a few seconds to a few minutes, depending on the quality of the first guess and the accuracy required. We expect the computation time needed to obtain results of similar accuracy to increase with N.

From the examples shown, we see that RRKK remains effective in the neighbourhood of the critical interaction for N = 200. As mentioned, the RRKK approximation for NV/ε = 1.00, where the SHA fails, can be obtained by using the results for NV/ε = 0.99 as the first guess. The results obtained using RRKK for NV/ε = 1.00 are similar in accuracy to the two examples we have seen. This good accuracy can be attributed to the fact that the matrix representations of the operators in the eigenstate basis remain band-diagonal, with the band still relatively narrow compared with d. Thus, RRKK remains effective even in the critical region.
We will conclude this chapter by discussing some of the issues raised earlier and making
comparisons with other approximation techniques.
3.6 Discussion
In making approximations, a problem is either simplified or modified into one that is familiar or has an analytic solution. Often, we are only interested in part of the solution of the original problem. In our case, this part comprises the eigenvalues in the low-lying spectrum and the associated matrix representations of the operators. We have seen two new approximation techniques, the SHA and RRKK, in the previous sections. In fact, the two can be combined into a single technique, SHA-RRKK, when the number of operators in the spectrum generating algebra of the Hamiltonian is small. The small number of operators keeps the number of unknowns and expressions to be minimized manageable for the optimization routine. Under most circumstances, ESHA(I) or ESHA(II) can be used to improve the accuracy of eigenvalues obtained using the SHA. However, when the only available bases give eigenfunctions with large spreads, e.g. σ ∼ j, it is best to consider using RRKK. We will next revisit some assumptions and issues related to the SHA that were raised earlier.
How large does j have to be for the SHA to be valid?
One of the important assumptions in the SHA for su(2) models is that j = N/2 has to be large. Here, we illustrate that the SHA can remain valid even for small j. However, there is no simple answer to the above question, and the requirements will differ for different Hamiltonians.

We study LMG models with NV/ε = 0.5 for N = 50, 10 and 2. In figure (3.8), we compare the exact components of the ground state eigenvectors, a_m, with those approximated using ψ(m). The exact and SHA-predicted excitation energy ω, ground state energy E and spread of the ground state eigenfunction σ for each N are also included for comparison.
Figure 3.8: The components of the exact ground state eigenvector a_m are indicated by black diamonds. The red crosses indicate the SHA eigenfunction ψ(m) evaluated at the points m = −j, −j + 1, …, j − 1, j. The interaction strength is set to NV/ε = 0.5 for all three plots. The particle numbers in the systems considered are N = 50, 10 and 2.

N = 50:  Eo = −25.06 (exact), E = −25.00 (SHA);  E1 − Eo = 0.88 (exact), ω = 0.87 (SHA);  σ = 6.50 (exact), 6.58 (SHA)
N = 10:  Eo = −5.06 (exact), E = −5.00 (SHA);  E1 − Eo = 0.92 (exact), ω = 0.87 (SHA);  σ = 2.80 (exact), 2.94 (SHA)
N = 2:   Eo = −1.03 (exact), E = −1.00 (SHA);  E1 − Eo = 1.03 (exact), ω = 0.87 (SHA);  σ = 1.11 (exact), 1.31 (SHA)
As seen from figure (3.8), the results confirm the requirement that j must be sufficiently large (approximations (A)-(C) in equation (3.21)) for the SHA predictions to be accurate. While the accuracy of the results deteriorates when j is small, it is notable that the SHA does not fail even when j ∼ 1 in this example. In this case, σ > j and the eigenfunction does not vanish at m = ±j. Generally, the SHA yields more accurate results as the Hilbert space of the model becomes large, which is precisely where it becomes computationally more challenging to solve the model.
The results in figure (3.8) also indicate some approximate scaling properties of the LMG predicted by the SHA. For example, the SHA excitation energy ω is constant for all N. This is because NV/ε is fixed at 0.5 for the three plots here (see equation (3.38)). As j → ∞, the SHA excitation energy ω becomes exact. The SHA ground state energy E scales approximately as j, and the fractional error reduces significantly with increasing j. The SHA spread σ scales as √j, becoming more accurate with larger j, like the other quantities. We see that in the LMG, the quantities ω, σ and E scale approximately with the size of the system, even for interactions away from the critical points. In the next chapter, we will see such scaling properties again.

Next, we make a comparison of Chen's SHA and the present SHA.
Comparing Chen’s SHA [13] and the present SHA:
The simple but subtle move of performing the SHA in the x′(= x − xo) coordinates will
yield more information about the operators J±. This information is crucial in ensuring that
the SHA gives good approximations on the both phases separated by the critical point. One
major difference between the version of the SHA in Chen [13] and the present SHA is the way
in which the expansion of√
(1− x2) in J± is done. We will illustrate this using a simple su(2)
Hamiltonian H with adjustable parameters α and β ≥ 0:
H = αJz − βJx= αJz −
β
2(J+ + J−). (3.63)
This Hamiltonian, which is an element of the su(2) algebra, is exactly solvable. We write down the SHA representations of J± from both treatments in the Jz-diagonal representation:

J± ψ_i(x) ≈ j(1 − x²/2)(1 ∓ (1/j) d/dx + (1/(2j²)) d²/dx²) ψ_i(x).   (Chen's version)

J± ψ_i(x′) ≈ j√(1 − xo²) (1 − xo x′/(1 − xo²) − x′²/(2(1 − xo²)²)) (1 ∓ (1/j) d/dx′ + (1/(2j²)) d²/dx′²) ψ_i(x′).   (Present version)
In Chen’s version, x is assumed to be small and√
(1− x2) is expanded about x = 0. In the
present version,√
(1− x2) is expanded about an unknown fixed point x = xo which is solved
by setting the coefficient of the displacement x′ in the Hamiltonian as zero9. The origin of the
coordinate system in Chen’s treatment is at x = 0 while the origin in the present version is at
x = xo. The SHA representation of H from both treatments are
H ≈ −βj (1/(2j²)) d²/dx² + (β/(2j)) j²x² + αjx − βj
  = −βj (1/(2j²)) d²/dx² + (β/(2j)) j²(x + α/β)² − α²j/(2β) − βj,   (Chen's version)

H ≈ −βj√(1 − xo²) (1/(2j²)) d²/dx′² + βj²x′²/(2j(1 − xo²)^{3/2}) + (α + βxo/√(1 − xo²)) jx′ − βj√(1 − xo²) + αjxo.   (Present version)

⁹ Whenever the Hamiltonian does not involve expanding √(1 − x²) at all, the two treatments are more similar; for example, in the LMG or the su(2) pairing model in Chen's paper [13].
The means of the eigenfunctions given by the two treatments are

xo = −α/β   (Chen's version);   xo = −α/√(α² + β²)   (Present version).   (3.64)

The mean value xo of the present treatment agrees with the exact value for all α and β. The extra shift term, (xo/√(1 − xo²)) jx′, generated in the present SHA is responsible for ensuring that the SHA gives accurate predictions of xo in the different phase regions. In contrast to Chen's SHA, the present SHA xo complies with the condition |xo| ≤ 1. The expressions for xo, σ, ω and E from the present treatment agree with the exact results for the whole range of α and β and are independent of the magnitude of j. On the other hand, those from Chen's treatment only become exact when α → 0.
The eigenvector of the Hamiltonian in the su(2) algebra

The above example suggests something very important: in the limit where a Hamiltonian is just a linear combination of Jz and Jx in the su(2) algebra (hereafter the su(2) limit), the SHA yields exact results for xo, σ, ω and E.

In the su(2) limit, the quantity xo predicted by the SHA gives the exact rotation of the basis states about an axis (in this case, Jy) in quasi-spin space. The same exact results for xo (and hence ω and E) can be obtained using HF-RPA.
We write down, without derivation (see [79]), the exact ground state eigenvector for Hamiltonian (3.63), obtained by rotating the basis states in quasi-spin space:

|φ⟩ = Σ_{m=−j}^{j} √[ (2j)! / ((j + m)!(j − m)!) ] sin^{j+m}(θ/2) cos^{j−m}(θ/2) |m⟩   (3.65)

where tan θ = β/α. This state can also be obtained exactly using the HF method. (See equation (2.34).) From equation (3.64), we can connect the present SHA xo with the HF prediction: xo = −cos θ.
Near the su(2) limit, we can approximate the components of the ground state eigenvector of a Hamiltonian by

a_m ≈ ⟨m|φ⟩ = √[ (2j)! / ((j + m)!(j − m)!) ] (√((1 + xo)/2))^{j+m} (√((1 − xo)/2))^{j−m}   (3.66)

where xo is obtained using the SHA. We can regard this as a binomial approximation because equation (3.66) is essentially equivalent to

|a_m|² ≈ [ N! / ((j + m)!(N − (j + m))!) ] p^{j+m} (1 − p)^{N−(j+m)}   (3.67)
where N, the number of trials, and p, the success probability, are the parameters of the binomial distribution B(N, p). Here, we can take N = 2j in the su(2) algebra. The two binomial parameters N and p can be approximately related to the Gaussian parameters mo and σ by

Np = mo + j ;   Np(1 − p) = σ².   (3.68)

For any Hamiltonian in the su(2) limit, the binomial approximation (3.67) gives exact results regardless of the magnitude of j, whereas the SHA eigenfunction ψ(m) in equation (3.18) only yields exact results in the limit j → ∞. This is noteworthy because it contrasts with the SHA predictions of xo, σ, ω and E, which are exact for all j.
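This exactness is easy to verify numerically. The sketch below (Python; j, α and β are arbitrary illustrative values) diagonalizes H = αJz − βJx and compares the ground-state components with equation (3.66), using xo = −α/√(α² + β²) from equation (3.64):

```python
import numpy as np
from math import comb, sqrt

j = 10
D = 2 * j + 1
alpha, beta = 1.3, 0.7                       # arbitrary su(2)-limit parameters
m = np.arange(-j, j + 1)

Jz = np.diag(m.astype(float))
Jp = np.zeros((D, D))
for i in range(D - 1):
    Jp[i + 1, i] = np.sqrt(j * (j + 1) - m[i] * (m[i] + 1))
Jx = 0.5 * (Jp + Jp.T)

H = alpha * Jz - beta * Jx                   # Hamiltonian (3.63)
E, U = np.linalg.eigh(H)
gs = U[:, 0]
gs = gs * np.sign(gs[np.argmax(np.abs(gs))]) # fix the arbitrary overall sign

xo = -alpha / sqrt(alpha ** 2 + beta ** 2)   # SHA mean, equation (3.64)
a = np.array([sqrt(comb(2 * j, j + mm))
              * ((1 + xo) / 2) ** ((j + mm) / 2)
              * ((1 - xo) / 2) ** ((j - mm) / 2) for mm in m])
print(np.max(np.abs(gs - a)))                # agreement to machine precision
```

The binomial components reproduce the exact eigenvector to numerical precision for any j, in line with the statement above.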
Next, we revisit figure (3.3), but apply the binomial approximation instead to predict the components a_m of the ground state eigenvector. The fit with the binomial approximation is given in figure (3.9).
Figure 3.9: The components of the exact ground state eigenvector a_m are indicated by black diamonds. The components a_m fitted using the binomial approximation, with the same mean and spread as in figure (3.3), are indicated by red diagonal crosses. Apart from this, the figure is the same as figure (3.3). The SHA eigenfunction ψ(m) is given by the red curve. The blue crosses indicate ψ(m) evaluated at the points m = −j, −j + 1, …, j − 1, j (= 50).
In figure (3.9), we see that the exact components a_m, indicated by the black diamonds, are well fitted by the red diagonal crosses obtained from the binomial approximation. In general, it is observed that the binomial approximation yields more accurate predictions of the components a_m than the SHA eigenfunction (3.18) when the parameters of the model are such that the Hamiltonian H is near the su(2) limit. We will see further applications of this technique in chapters 4 and 5.
On the su(2) algebra and the augmented hw(1) contraction limit

Next, we study what the ground state eigenvectors of Hamiltonian (3.63) and the matrix representations of the su(2) operators become when either α or β vanishes:

β → 0 ⇒ θ → 0:   xo → −1, σ → 0, |φ⟩ → |−j⟩;
α → 0 ⇒ θ → π/2:   xo → 0, σ → √j, |φ⟩ is distributed with spread σ = √j about m = 0.

As expected, in the Jz-diagonal representation, when only the Jz operator is present in the Hamiltonian (β = 0), the ground state components a_m distribute like a delta function at the lowest weight state |−j⟩. On the other hand, when only the Jx operator is present in the Hamiltonian, the ground state components a_m distribute like a Gaussian function, similar to figure (3.2).
The two cases where either α or β vanishes correspond to two contraction limits. Here, the su(2) algebra, together with the number operator N (where N|j,m⟩ = 2j|j,m⟩), contracts to the augmented hw(1) algebra¹⁰ when j → ∞. This algebra comprises the set of operators {a, a†, a†a, I}, where a†, a are harmonic oscillator raising and lowering operators satisfying [a, a†] = I. The recognition of these contraction limits is important because the hw(1) algebra is an oscillator algebra, and it gives us an indication of where the SHA eigenfunction will predict the components a_m accurately. The contraction limits of the su(2) operators, characterized by xo and σ (with j → ∞), are as follows:
xo → ±1, σ → 0 (augmented hw(1)):

Jz → a†a − jI,   J+ → √(2j) a†,   J− → √(2j) a.

xo → 0, σ → √j (rotated augmented hw(1), hw(1)_R):

Jz → (σ/√2)(a + a†) + mo = √(j/2)(a + a†)   (see eq. (3.47)),

J+ → j(1 − x′²/(2(1 − xo²)²) − (1/j) d/dx′ + (1/2)((1/j) d/dx′)²)   (see eq. (3.49))
   = j(1 − (1/√(2j))(a − a†) − (1/(2j))(1 + 2a†a)),

J− → j(1 + (1/√(2j))(a − a†) − (1/(2j))(1 + 2a†a)).
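The first contraction can be checked element by element: near the bottom of the ladder (n = j + m ≪ j), the exact matrix elements ⟨m + 1|J+|m⟩ = √(j(j+1) − m(m+1)) = √((n + 1)(2j − n)) approach the oscillator values √(2j)√(n + 1). A small sketch (Python; the value of j is an arbitrary illustration):

```python
import numpy as np

j = 500                        # large j, so the contraction is accurate
n = np.arange(0, 10)           # lowest ladder levels, n = j + m
m = n - j

exact = np.sqrt(j * (j + 1) - m * (m + 1.0))  # <m+1| J+ |m> = sqrt((n+1)(2j-n))
osc = np.sqrt(2.0 * j) * np.sqrt(n + 1)       # sqrt(2j) a^dagger matrix element
print(np.max(np.abs(exact / osc - 1)))        # relative error of order n/(4j)
```

The relative error grows with n, so the contraction is a controlled approximation only for states near the bottom of the irrep.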
The representation in the xo → ±1 limit is given by expanding the Holstein-Primakoff representation (which we will see shortly) but is not obtainable using the SHA, though the SHA predicts mo and σ accurately in the su(2) limit. Having accurate approximations of these two quantities only ensures that the matrix elements corresponding to the ground state and first excited state are accurate. This will be discussed in chapter 6. In the rotated augmented hw(1) algebra, we note that the su(2) operators contract to linear combinations of the elements of the augmented hw(1) algebra. In fact, this algebra can accommodate other bilinear elements, such as aa and a†a†, which are present in the SHA representations of the Jz and J± operators in equations (3.47) to (3.49). These bilinear elements can be transformed into another augmented hw(1) algebra using the Bogoliubov transformation. This involves finding new bosonic operators b† and b as linear combinations of a† and a:

¹⁰ This algebra is also known as the solvable h4 algebra.
b† = cosh η a† + sinh η a
b = cosh η a + sinh η a†

where [b, b†] = I. The process brings the set {a, a†, a†a, aa, a†a†, I} into the set {b, b†, b†b, I}. The inclusion of aa and a†a† in the augmented hw(1) algebra implies that all the SHA representations of the su(2) operators can be expressed as elements of the oscillator algebra. In these cases, the oscillator limits are not restricted to the two cases (xo → 0, σ → √j and xo → ±1, σ → 0, both with j → ∞) just mentioned. Instead, the contraction to the oscillator limits can take place over a continuous range of xo and σ. For the models discussed in this thesis, the oscillator contraction limits are often good approximations of the exact results over a wide range of parameters. Therefore, the SHA, which predicts xo and σ accurately in most cases, is an effective approximation of the exact results. We summarize the appropriate strategies for the different limits of su(2) Hamiltonians in table (3.2).
Table 3.2: Strategies for different limits.

Limits | Guidelines | Approximating xo, σ, ω and E | Approximating components a_m
Oscillator limits (Aug. hw(1) and variants) | σ > 1 and |mo ± 2σ| ≲ j | use the SHA | a_m ≈ ψ(m)
The su(2) limit | e.g. H ≈ αJz − βJx | use the SHA | binomial approximation
Different limits of SHA predictions:

It is possible to exhaust all the different scenarios and contraction limits that are covered by the SHA. We write down the SHA equation for easy reference:

H(x′, d/dx′) ≈ −A (1/j²) d²/dx′² + Bj²x′² + Djx′ + E.   (3.69)

In table (3.3), we list the four different scenarios in which A and/or B vanish. These scenarios will be useful for categorizing the models we have.
Table 3.3: Domains of applicability of the SHA.

Phase | A | B | D | Limits | Examples so far
P1 | ≫ 0 | ≫ 0 | set to 0 | Aug. hw(1)_R or su(2) | LMG with NV/ε → 0 or ≫ 1
P2 | ≫ 0 | ∼ 0 | set to 0 | SHA fails | LMG at NV/ε = 1, Jx-diagonal
P3 | ∼ 0 | ∼ 0 | ≠ 0; set xo = 0 or ±1 as appropriate | Aug. hw(1) or su(2) | equation (3.63) as β → 0
P4 | ∼ 0 | ≫ 0 | set to 0 | Standing wave / rotor limit | See the CJH in chapter 4.
For example, we can classify the LMG model in the Jx-diagonal representation as being of the P1-P2-P1 type, with the SHA failing at the critical point. The Hamiltonian (3.63) in the Jz-diagonal representation is of the type P3-P1, and the SHA gives predictions across the whole domain of interactions. From the above table and the examples seen in this thesis, we form a simple hypothesis: as long as either A or B does not vanish alone and σ > 1, the SHA will give accurate predictions. We will see more examples in the subsequent chapters.

We next look at other approximation techniques that are similar to the SHA. Bosonic approximations of su(2) operators are already common in the literature [37], the most famous being the Holstein-Primakoff transformation [34] and the Dyson representation [23].
Holstein-Primakoff (HP) Transformation:
J+ → a† √(2j) √(1 − a†a/(2j))
J− → √(2j) √(1 − a†a/(2j)) a
Jz → −j + a†a.
Here, we require [a, a†] = 1, and the oscillator levels run only up to 2j. This transformation preserves the hermiticity property (J−)† = J+ and obeys the su(2) commutation relations. In most applications, a†a/j is assumed to be small and the approximation is made by Taylor expansion: √(1 − a†a/(2j)) ≈ 1 − a†a/(4j) + …. An approximation of the Hamiltonian with only up to bilinear terms in a† and a would be just a non-interacting bosonic approximation and can easily be diagonalized. Higher order terms need to be incorporated to include the interactions. We next look at the Dyson representation.
Dyson Representation:

(I): J+ → (2j − a†a) a,   J− → a†,   Jz → j − a†a.
(II): J+ → (2j − x d/dx) d/dx,   J− → x,   Jz → j − x d/dx.

This representation is simple, and the operators obey the su(2) commutation relations. However, (I) violates the hermiticity property (J−)† = J+ when a† is interpreted as the hermitian adjoint of a. For (II), with an appropriately selected measure, it can be shown that the hermiticity property is preserved [31].
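Both statements can be verified directly: the finite-dimensional HP operators (with the square root kept exact) are hermitian conjugates and satisfy su(2) exactly, while Dyson (I) satisfies the commutation relations but violates hermiticity. A short sketch (Python; j is an arbitrary illustrative value):

```python
import numpy as np

j = 5
D = 2 * j + 1                                     # boson levels n = 0, ..., 2j
n = np.arange(D)
ad = np.diag(np.sqrt(n[1:].astype(float)), k=-1)  # truncated a^dagger
a = ad.T
comm = lambda A, B: A @ B - B @ A

# Holstein-Primakoff with the square root kept exact: hermitian and su(2)-exact.
f = np.diag(np.sqrt(1 - n / (2 * j)))             # sqrt(1 - a^dagger a / (2j))
Jp = np.sqrt(2 * j) * ad @ f
Jm = Jp.T
Jz = np.diag((n - j).astype(float))
print(np.allclose(comm(Jp, Jm), 2 * Jz),          # su(2) relations hold
      np.allclose(comm(Jz, Jp), Jp))

# Dyson (I): su(2) relations hold, but (J-)^dagger != J+.
Kp = (2 * j * np.eye(D) - np.diag(n.astype(float))) @ a
Km = ad
Kz = np.diag((j - n).astype(float))
print(np.allclose(comm(Kp, Km), 2 * Kz),          # su(2) relations hold
      np.allclose(comm(Kz, Kp), Kp),
      np.allclose(Km, Kp.T))                      # False: hermiticity violated
```

Note that the top level n = 2j causes no boundary trouble: the HP square root and the Dyson factor (2j − a†a) both vanish there, so the commutation relations close exactly on the (2j + 1)-dimensional space.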
In the case of the SHA representation of the su(2) operators given in equations (3.8) and (3.9), the operators obey the hermiticity property and the su(2) commutation relations before the Taylor expansion in equation (3.9). Even after the Taylor expansions, if the 1/j term is excluded from the representation of J±, as in equation (3.10), the SHA Hamiltonians remain hermitian. The SHA includes only up to bilinear terms in the boson operators a† and a.
In contrast to the SHA, both the HP and Dyson representations (I) are expressed directly in terms of a† and a, and it is not always easy to decide which terms are the most important if one intends to construct a differential equation in terms of x and d/dx. In the case of the LMG, if an approximation is made with a†a/(2j) excluded from both the HP and Dyson representations (I), the predictions of the excitation energies obtained will be similar to those of the RPA. The approximated Hamiltonian, bilinear in a† and a, can be diagonalized exactly using a Bogoliubov transformation, mentioned earlier.
Replacing the operators in the Hamiltonian with their Dyson representation certainly gives a differential equation, but it does not always guarantee the existence of an analytic solution, and even if one exists, it may not be easy to extract useful information from the differential equation, as was done in the SHA. The SHA, which operates in the contraction limit to an oscillator algebra, is able to offer simple and accurate approximations in many situations. Furthermore, using the Dyson representation, we do not get a direct interpretation of how x is related to the su(2) algebra and its basis states. Thus, we can expect the SHA to provide more information on the eigenvectors of a given Hamiltonian than the Dyson representation (II).
Hartree-Fock and RPA:
To compare the SHA with HF and RPA, we make the following observations:
1. the resemblance between x_o in the SHA and cos θ in the HF method,
2. that the SHA also involves finding a minimal energy expectation, i.e. D = (1/j) dE/dx_o = 0, and
3. the dependency of the SHA excitation energy on x_o and of the RPA excitation energy on θ.
Earlier in section 3.2, we observed that the SHA acts as a single technique encapsulating the predictions of both the HF and the RPA:
H\left(x', \frac{d}{dx'}\right) \approx \underbrace{-A\frac{1}{j^2}\frac{d^2}{dx'^2} + Bj^2x'^2}_{\text{similar to RPA}} + \underbrace{Djx' + E}_{\text{similar to HF and its variants}}.   (3.70)
Here, the HF and its variants are used to locate the minimum of the variational ground state energy, while the RPA uses the location of this minimum to obtain the corresponding excitation energies. In the SHA, the approximate spread of the eigenfunction can be read off from the differential equation above (σ = \sqrt[4]{A/B}), allowing us to construct the approximate eigenvectors for the low-lying states of the original Hamiltonian. We will also see in chapter 4 that the SHA remains applicable in certain non-vibrational phases. In chapter 5, we will see that for the BCS pairing Hamiltonians, the SHA is able to preserve the number-conserving property of the original Hamiltonian - something not captured in the conventional RPA or most other approximation techniques. More interestingly, the well-known BCS gap equations can be derived from the SHA. Thus, we see a few advantages of the SHA over traditional approximation techniques.
Technique of Ulyanov and Zaslavskii:
We end this section by looking at the works of Ulyanov and Zaslavskii (UZ) [75, 80], which are in some ways similar to the strategy employed in the SHA. As in the SHA, the Hamiltonian of interest is expressed in the same form as equations (3.4) and (3.5). In the SHA, we approximated m as a continuous variable and the eigenvector as an eigenfunction ⟨m|φ_i⟩ = a^i_m → ψ_i(x). In the UZ approach, a^i_m is not used as an eigenfunction but is used to construct other eigenfunctions in a different basis

\Phi(x) = \sum_{m=-j}^{j} \frac{a_m}{\sqrt{(j-m)!(j+m)!}}\, e^{mx}.   (3.71)
Whenever a_m ≈ a_{−m}, we can interpret these eigenfunctions as a linear combination of hyperbolic functions obtained by pairwise addition of the exponentials with m and −m. Here, we have changed from a distribution on a discrete basis (a^i_m on m, as we interpret in the SHA) to analytic functions on a continuous basis (Φ(x) on e^{mx}). The variable x also takes a different meaning from the SHA, and it is not immediately obvious how it is related to the su(2) algebra. In the next step, the first and second derivatives of Φ(x) are taken and the a_m are incorporated in Φ(x).
In the example given by UZ in the study of a uniaxial paramagnet, the Hamiltonian of interest is of the form

H = -\alpha J_z^2 - B J_x   (3.72)

where α > 0 is the anisotropy constant and B is the magnetic field up to a constant factor. Using the UZ method, the above Hamiltonian can be expressed as a differential equation of the form

-\alpha\frac{d^2\Phi}{dx^2} + B\sinh x\,\frac{d\Phi}{dx} - Bj\cosh x\,\Phi = E\Phi.   (3.73)
The term in the first derivative can be eliminated using the Sturm-Liouville transformation

\Psi = \Phi\,\exp\!\left(-\frac{B}{2\alpha}\cosh x\right)   (3.74)

to get

-\alpha\frac{d^2\Psi}{dx^2} + U(x)\Psi = E\Psi   (3.75)

where

U(x) = \frac{B^2}{4\alpha}\sinh^2 x - B\left(j + \frac{1}{2}\right)\cosh x.   (3.76)
Thus, whether an exact solution of the differential equation can be found depends on the form of U(x). If the value of j (≲ 2) is not too large, the Schrödinger equation with U(x) in equation (3.76) has a simple analytic expression. However, such expressions do not exist for systems with larger j. On the other hand, we can still make approximations for U(x) to obtain analytic solutions. The harmonic oscillator potential can be obtained easily from approximations of hyperbolic potentials. Therefore, though this technique is exact, it is somewhat limited by the form of the Hamiltonian and the associated potential U(x).
In figure (3.10), we see the shapes of the potential U(x) given in equation (3.76). Though this problem is different from the LMG, the shapes of the potential U(x) - transitioning from double-well potentials to an oscillator-like potential - give us some insight into the profile of the distributions in figure (3.4). It is possible to approximate the potentials seen in figure (3.10) using harmonic oscillator potentials or Morse potentials.
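This transition can be explored numerically. The sketch below diagonalizes −α d²/dx² + U(x) of equations (3.75)-(3.76) on a finite-difference grid; the grid range and resolution and the use of numpy are our choices, while α = 2 and j = 200 follow the figure. In the deep double-well case (B = 50) the lowest two levels are quasi-degenerate, while in the near-oscillator case (B = 800) a clear gap opens:

```python
import numpy as np

# Parameters follow figure 3.10 (alpha = 2, j = 200); the grid is our choice.
alpha, j = 2.0, 200
x = np.linspace(-5, 5, 1500)
h = x[1] - x[0]

def lowest_gap(B):
    # U(x) of equation (3.76)
    U = (B**2 / (4 * alpha)) * np.sinh(x)**2 - B * (j + 0.5) * np.cosh(x)
    # finite-difference matrix for -alpha d^2/dx^2 + U(x)
    off = -(alpha / h**2) * np.ones(len(x) - 1)
    H = np.diag(2 * alpha / h**2 + U) + np.diag(off, 1) + np.diag(off, -1)
    E = np.linalg.eigvalsh(H)          # eigenvalues in ascending order
    return E[1] - E[0]

# deep double well: quasi-degenerate doublet; near-oscillator well: open gap
print(lowest_gap(50.0), lowest_gap(800.0))
```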
As mentioned, the meaning of the variable x in U(x) is not immediately obvious in the context of the su(2) model. On the other hand, x is clearly defined in the SHA in the Jz-diagonal representation. As we have seen in the LMG model, the nature of the oscillator is quite different when the Jx-diagonal representation is used instead. Earlier, the way the variables x_o^{(z)} and x_o^{(x)} are related suggests that the choice of the diagonal axes amounts to two descriptions of a single quantity - like two different projections of a vector. In other
Figure 3.10: Plot of the potential U(x) for j = 200, α = 2 and B = 50, 400 and 800. The critical value for B is given by Bc = α(2j + 1) = 402. (Panels, left to right: B = 50, B = 400, B = 800.)
approximations such as the HP and HF-RPA, the physical meaning of the coordinate in which the oscillation is taking place is not obvious. However, as we have seen, this knowledge is not required in the implementation of these approximation techniques.
Whatever the case, if our interest is confined to making accurate approximations of low-lying eigenvalues and the associated matrix elements of the operators, SHA-RRKK remains a powerful tool with a simple numerical implementation. In cases where the algebra has many operators, we still have the SHA to provide good approximations. The philosophy behind the SHA can also be easily adapted to other algebras; for example, see [67]. Its versatility also extends beyond techniques involving exactly solvable models, such as the Richardson and Gaudin models for Hamiltonians with su(2) ⊕ su(2) ⊕ … ⊕ su(2) spectrum generating algebras [54], [53]. We will see this in greater detail in Chapter 5 in the discussion of the BCS pairing model.
Appendix 1: Issues related to the Zero Point Energy.
In the formalism of the SHA, the term 1/j in J_± in equation (3.12) has been dropped based on the assumption that j is large. Another important rationale for excluding 1/j is to make terms such as x d/dx and d/dx vanish in the approximation, thus making the Hamiltonian hermitian and the SHA simpler. To study the effect of omitting 1/j on the ground state energy E, we first write down J_± as in equation (3.12), but now with the term 1/j included:
\mathcal{J}^*_\pm\psi^i(x') \approx j\sqrt{\left(1 \mp x_o + \frac{1}{j}\right)(1 \pm x_o)}\left(1 - \frac{2jx_o \mp 1}{2(j \mp jx_o + 1)(1 \pm x_o)}\,x' - \frac{(2j+1)^2}{8(j \mp jx_o + 1)^2(1 \pm x_o)^2}\,x'^2\right)\left(1 \mp \frac{1}{j}\frac{d}{dx'} + \frac{1}{2j^2}\frac{d^2}{dx'^2}\right)\psi^i(x')

\approx j\sqrt{\left(1 \mp x_o + \frac{1}{j}\right)(1 \pm x_o)}\left(1 - \frac{2jx_o \mp 1}{2(j \mp jx_o + 1)(1 \pm x_o)}\,x' - \frac{(2j+1)^2}{8(j \mp jx_o + 1)^2(1 \pm x_o)^2}\,x'^2 + \frac{1}{2j^2}\frac{d^2}{dx'^2}\right)\psi^i(x').   (3.77)
We distinguish this expansion from the one with 1/j excluded by defining it as \mathcal{J}^*_\pm. In the last line of equation (3.77), for the purpose of comparing with \mathcal{J}_\pm, we did not include the terms involving x d/dx' and d/dx'. We write down the shifted oscillator Hamiltonian for our reference:

H\left(x', \frac{d}{dx'}\right) \approx -A\frac{1}{j^2}\frac{d^2}{dx'^2} + Bj^2x'^2 + Djx' + E.   (3.78)
It is clear that the contributions to d²/dx'² come from the operators \mathcal{J}_\pm. As the Hamiltonians we deal with are hermitian, we consider the sum -(\mathcal{J}_+ + \mathcal{J}_-). (Here, the minus sign ensures that A and B are both positive.) Then, we compare the expression for the term E in -(\mathcal{J}_+ + \mathcal{J}_-) with the term E* from -(\mathcal{J}^*_+ + \mathcal{J}^*_-). We write down the expression for \mathcal{J}_\pm for our reference:
\mathcal{J}_\pm\psi^i(x') \approx j\sqrt{1 - x_o^2}\left(1 - \frac{x_o x'}{1 - x_o^2} - \frac{x'^2}{2(1 - x_o^2)^2} + \frac{1}{2j^2}\frac{d^2}{dx'^2}\right)\psi^i(x').   (3.79)
The expression for the ‘excitation energy’ of -(\mathcal{J}_+ + \mathcal{J}_-) is

\omega = 2\sqrt{AB} = \frac{2}{\sqrt{1 - x_o^2}}.   (3.80)

The expression for the ground state energy with 1/j excluded is given by

E = -2j\sqrt{1 - x_o^2}.   (3.81)
As we have seen earlier in equation (3.64), the quantity x_o is exact in the SHA treatment with the term 1/j excluded. So, we assume for simplicity that x_o in E* is the same. The expression for the ground state energy with 1/j included is given by
E^* = -j\left(\sqrt{\left(1 - x_o + \tfrac{1}{j}\right)(1 + x_o)} + \sqrt{\left(1 + x_o + \tfrac{1}{j}\right)(1 - x_o)}\right)

= -j\sqrt{1 - x_o^2}\left(\sqrt{1 + \tfrac{1}{j(1 - x_o)}} + \sqrt{1 + \tfrac{1}{j(1 + x_o)}}\right)

\approx -j\sqrt{1 - x_o^2}\left(1 + \tfrac{1}{2j(1 - x_o)} + 1 + \tfrac{1}{2j(1 + x_o)}\right)

= -j\sqrt{1 - x_o^2}\left(2 + \tfrac{1}{j(1 - x_o^2)}\right)

= -2j\sqrt{1 - x_o^2} - \frac{1}{\sqrt{1 - x_o^2}}

= E - \frac{1}{2}\omega.   (3.82)
Thus, E ≈ E* + (1/2)ω.
The error associated with omitting 1/j results in an extra contribution of approximately (1/2)ω in E. Therefore, it is more consistent to write

E_\nu = \nu\omega + E,   (3.83)

where ν = 0, 1, …, j in equation (3.16). This can be generalized to other Hamiltonians, as the error comes from a single source, i.e. J_±, through the omission of the contributions of 1/j.
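The net effect derived above is easy to verify numerically. The sketch below (with an arbitrary x_o = 0.6 and a few values of j, both our choices) checks that E* − (E − ω/2) → 0 as j grows:

```python
import numpy as np

xo = 0.6                    # arbitrary shift; any |xo| < 1 works
diffs = []
for j in (10, 100, 1000):
    E = -2 * j * np.sqrt(1 - xo**2)        # equation (3.81)
    omega = 2 / np.sqrt(1 - xo**2)         # equation (3.80)
    Estar = -j * (np.sqrt((1 - xo + 1 / j) * (1 + xo))
                  + np.sqrt((1 + xo + 1 / j) * (1 - xo)))   # first line of (3.82)
    diffs.append(abs(Estar - (E - omega / 2)))
print(diffs)                # decreases roughly as 1/j
```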
Chapter 4
The Canonical Josephson Hamiltonian and the 5-D Quartic Oscillator
In chapter 3, we have seen that the SHA is applicable to approximating the low-lying eigen-
values and eigenvectors of the LMG. In the first half of this chapter, we will apply the SHA
to the Canonical Josephson Hamiltonian (CJH) which is a simplified model of Bose-Einstein
condensate atoms trapped in a two-well potential. In the second half of this chapter, the SHA
will be applied to a model with a non-compact spectrum generating algebra which has matrix
representations that are infinite dimensional. The model we will study is the radial part of
the Bohr-Mottelson Hamiltonian mentioned in the introduction. It is a model for the nuclear
vibration-rotation transition and the Hamiltonian has an su(1, 1) spectrum generating algebra.
Again, we will show that the SHA consistently gives accurate predictions over a wide domain
of interaction strengths.
4.1 Bose-Einstein Condensates and the Canonical Josephson
Hamiltonian
In chapter 2, we mentioned two species of particles - bosons and fermions. Unlike fermions,
bosons are allowed to be in the same quantum state (see equation (2.2)). Thus, when the total
energy of a confined system of weakly interacting bosons is sufficiently low, it is possible that a
large fraction of the bosons will reside in the lowest possible single particle energy level. This
state of matter, known as the Bose-Einstein condensate (BEC), was first predicted by Einstein in 1925. Bose was the originator of the idea that statistics different from Maxwell-Boltzmann statistics (for classical systems) are required to deal with the ‘counting’ of indistinguishable particles [41], [4].
Chapter 4. The CJH and 5-D Quartic Oscillator 66
When Helium-4 is cooled below the critical temperature of 2.17 K, it exhibits many unusual
properties different from those of the normal fluid, such as having zero viscosity. Thus, this
type of fluid is aptly termed a superfluid. Shortly after its discovery in 1938, it was recognized
that the phenomenon of superfluidity can be attributed to the Bose-Einstein condensation of
the Helium-4 atoms. Ideal non-interacting bosons at T = 0 K are all expected to occupy the lowest single particle energy level. In the case of liquid Helium-4 atoms, which are strongly interacting, only a small fraction of the whole system occupies the lowest single particle energy level in the ground state¹.
About fifty years passed before cryogenic techniques were developed to cool gases to very low
temperatures. Techniques such as laser cooling and magnetic traps can be used to cool samples
consisting of millions of gaseous alkali atoms (such as 87Rb) to temperatures of about 10⁻⁷ K [15], [16], [17], [39]. Typically, laser beams can exert forces up to two orders of magnitude larger than the weight of the atoms. The setup involves six lasers (pairs pointing in opposite directions along each coordinate axis) intersecting at the sample. The lasers are tuned to a frequency that slows down the atoms. A second-stage cooling involves a magnetic confinement of the sample where the
most energetic atoms are allowed to escape, by evaporation. The remaining low energy atoms
are sufficiently cold for the Bose-Einstein condensate to be formed. The wavelength of the
atoms increases as the atoms slow down. When the temperature is lowered to the critical value,
the wave packets of the atoms coalesce with their neighbours and a single macroscopic packet is
formed. All the atoms in this Bose-Einstein condensate are indistinguishable from each other.
The BEC can be considered as a macroscopic quantum object with wave-like properties. This
macroscopic wave packet which is barely moving can be manipulated to perform experiments
on quantum mechanics, such as those related to interference.
The Canonical Josephson Hamiltonian (CJH)
In this section, we consider a Josephson junction type of setup for the BEC [28], [73], [42]. In
condensed matter physics, the Josephson effect is a phenomenon in which a current can tunnel
from one superconductor across a very thin insulating barrier to another superconductor even
when no external voltage is applied. This barrier is known as the Josephson junction. The
application of a direct external voltage will not lead to a direct current but rather give rise to a
rapidly oscillating current. This is known as the a.c. Josephson effect. A similar phenomenon
¹Though liquid Helium-4 is a system of strongly interacting particles, this strong interaction only reinforces the formation of the BEC.
Figure 4.1: The experimentally observed time evolution of the atomic density distribution in
a symmetric two-well potential with a Josephson junction that separates two weakly linked
Bose-Einstein condensates. The time evolution of the population of the left and right potential
well is directly visible in the absorption images. This is the first realization of a single bosonic
Josephson junction with the condensate exhibiting Josephson oscillations [2]. Reprinted figure with permission from M. Albiez, R. Gati, et al., Physical Review Letters 95, 010402, 2005. Copyright 2005 by the American Physical Society.
has also been observed [2], [44] (see figure (4.1)) in a BEC trapped in a two-well potential where
the barrier between the two wells is analogous to a Josephson junction. The Josephson effect
of the BEC is important in the study of macroscopic superposition of states [28], [26].
To study the Josephson effect of BECs experimentally, we can split an elongated cylindrical
trap containing the BEC atoms along the centre using a strongly detuned laser. The frequency of the laser is detuned away from the resonant frequency of the atoms, which ensures that the individual atoms in the BEC do not get excited. Furthermore, the laser is tuned such that its
electric field exerts a repulsive force on the individual atoms. The intensity of this laser is then
substantially reduced such that the tunnelling of the BEC atoms between the wells cannot be
neglected. This laser barrier acts like the Josephson junction. A bias can be created between
the two wells by altering the magnetic potential of each well. This results in a difference in the
chemical potential ∆µ between the wells. This bias is analogous to the direct voltage applied
across the superconducting Josephson junction. Consequently, the BEC will oscillate between
the two wells similar to the oscillating current in the a.c. Josephson effect [44].
To model the Josephson effect of BECs2, we assume that the BEC atoms can only occupy
2In this case where the BEC tunnels between two states |1〉 and |2〉 which are physically separated, it is known
the ground states of either of the two wells, denoted by |1⟩ and |2⟩. We consider the BEC atoms as a system of N identical bosons. The corresponding single particle creation operators in states |1⟩ and |2⟩ are denoted by a_1† and a_2† respectively. Thus, the Hilbert space for the N-particle system is spanned by an (N+1)-dimensional Fock basis

|j = \tfrac{N}{2}, m\rangle = \frac{1}{\sqrt{(j+m)!(j-m)!}}\,(a_1^\dagger)^{j+m}(a_2^\dagger)^{j-m}|\mathrm{vac}\rangle   (4.1)

where ‘vac’ denotes the vacuum and m = −j, −j+1, …, j is half the difference in the number of atoms between the two states³.
The Canonical Josephson Hamiltonian⁴ (CJH), given by [42]

H(j, K, \Delta\mu, \varepsilon_J) = \frac{K}{8}(a_1^\dagger a_1 - a_2^\dagger a_2)^2 - \frac{1}{2}\Delta\mu\,(a_1^\dagger a_1 - a_2^\dagger a_2) - \frac{1}{2}\varepsilon_J(a_1^\dagger a_2 + a_2^\dagger a_1),   (4.2)

is used to model a system of N bosons in a two-well potential. The difference in energy between the two ground states |1⟩ and |2⟩ is related to the bias ∆µ. See figure (4.2). The single particle tunnelling amplitude from one well to the other is given by ε_J. In the context of alkaline BEC atoms, K can be regarded as the ‘bulk modulus’ ∂µ/∂N for a single well. The quantity K is related to the density of the BEC gas and the inter-atomic scattering interaction within each well.
The operators above can be identified as su(2) operators with J_z = \tfrac{1}{2}(a_1^\dagger a_1 - a_2^\dagger a_2) and J_x = \tfrac{1}{2}(a_1^\dagger a_2 + a_2^\dagger a_1), so that

H(j, K, \Delta\mu, \varepsilon_J) = \frac{K}{2}J_z^2 - \Delta\mu\,J_z - \varepsilon_J J_x.   (4.3)
Here, as in the LMG, Jz and Jx = 12(J+ + J−) are quasi-spin operators. In our treatment,
the main concern is obtaining accurate approximations of the eigenvalues and eigenvectors.
Therefore, we will include the whole domain of parameters of the model in our discussion, regardless of whether they are applicable in real-life experiments⁵.
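The identification of Jz and Jx can be verified directly on a small fixed-N two-mode Fock space. This is a numerical sketch; N = 6 and the use of numpy are illustrative choices, not from the text:

```python
import numpy as np

N = 6                       # illustrative particle number; j = N/2
j = N / 2
n1 = np.arange(N + 1)       # basis |n1, n2 = N - n1>, indexed by n1
m = n1 - j                  # m = (n1 - n2)/2

# <n1+1, n2-1| a1†a2 |n1, n2> = sqrt((n1+1) n2)
Jp = np.diag(np.sqrt((n1[:-1] + 1) * (N - n1[:-1])), k=-1)   # J+ = a1†a2
Jm = Jp.T                                                    # J- = a2†a1
Jz = np.diag(m)                                              # (a1†a1 - a2†a2)/2

# standard spin-j matrix of J+ for comparison
Jp_spin = np.diag(np.sqrt((j - m[:-1]) * (j + m[:-1] + 1)), k=-1)

print(np.allclose(Jp, Jp_spin))                   # Schwinger mapping reproduces spin-j
print(np.allclose(Jp @ Jm - Jm @ Jp, 2 * Jz))     # [J+, J-] = 2 Jz
```

The fixed-N sector of the two-mode boson space thus carries exactly the (N+1)-dimensional spin-j = N/2 representation used in equation (4.1).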
Applying the SHA to CJH
We will see next how the SHA can be applied to the CJH. We write down the shifted harmonic oscillator Hamiltonian again for easy reference:

H\left(x', \frac{d}{dx'}\right) \approx -A\frac{1}{j^2}\frac{d^2}{dx'^2} + Bj^2x'^2 + Djx' + E.   (4.4)
as the “external” Josephson effect. Alternatively, we can consider a system of alkali BEC atoms, in a single-well magnetic trap, coupled between two hyperfine states |1⟩ and |2⟩ using a laser. Here, where the two states are not physically separated, it is known as the “internal” Josephson effect. In this thesis, we will confine our discussion to just the “external” effect.
³The original definition of m in Leggett [42] distinguishes between odd and even N. For simplicity, we just confine the discussion to even N.
⁴In some literature, this is known as the Bose-Hubbard Hamiltonian.
⁵The CJH is not a good approximation of experimental results when ∆µ is large.
Figure 4.2: A schematic sketch of a two-well potential for Bose-Einstein condensate atoms. The BEC atoms are assumed to occupy only the ground states, with zero point energies E_o1 and E_o2, on each side of the well. The bias ∆µ is related to the difference in the zero point energies. The tunnelling amplitude ε_J is associated with the barrier between the two wells.
Table (4.1) shows the expressions A, B, D and E in the SHA expansions of the operators J_z, J_x, etc. using the J_z-diagonal representation. For example, the first two rows correspond to the contributions of the SHA representations of J_z (equation (3.11)) and J_+ + J_− (equation (3.12)).
Operator O            A               B                       D                    E
J_z                   -               -                       1                    j x_o
J_+ + J_− (= 2J_x)    j√(1−x_o²)      −1/(j√((1−x_o²)³))      −2x_o/√(1−x_o²)      2j√(1−x_o²)
J_z²                  -               1                       2j x_o               j² x_o²
J_+² + J_−²           4j²(1−x_o²)     2                       4j x_o               2j²(1−x_o²)
J_+J_− + J_−J_+       -               2                       4j x_o               2j²(1−x_o²)

Table 4.1: The expressions A, B, D and E (in equation (4.4)) in the SHA expansions of the operators J_z, J_x, etc. using the J_z-diagonal representation.
Using table (4.1), we can apply the SHA to the CJH Hamiltonian (4.3) in the J_z-diagonal representation to get

H_J \approx -\left(\frac{j\varepsilon_J}{2}\sqrt{1 - x_o^2}\right)\frac{1}{j^2}\frac{d^2}{dx'^2} + \left(\frac{K}{2} + \frac{\varepsilon_J}{2j\sqrt{(1 - x_o^2)^3}}\right)j^2x'^2 + \left(Kjx_o + \frac{\varepsilon_J x_o}{\sqrt{1 - x_o^2}} - \Delta\mu\right)jx' + j\left(\frac{K}{2}jx_o^2 - \Delta\mu\,x_o - \varepsilon_J\sqrt{1 - x_o^2}\right).   (4.5)
The minimum of the SHA oscillator potential is located at coordinates (x_o, E). As mentioned earlier, we shifted the origin from x = 0 to x = x_o in the x' coordinate system. Therefore, setting D = 0 in equation (4.4), the corresponding term in equation (4.5) gives

Kjx_o + \frac{\varepsilon_J x_o}{\sqrt{1 - x_o^2}} - \Delta\mu = 0,   (4.6)

and we can solve for x_o. Over the range of parameters studied, a unique solution satisfying |x_o| < 1 always exists. However, unlike the LMG example, we do not have a simple expression for x_o. The analytic expression for x_o is given by one of the roots of the quartic equation obtained by expanding equation (4.6). This expression is too complicated to offer any insight into the physical problem. Therefore, we will instead study the behaviour of x_o when x_o ∼ 0. Making the approximation 1/\sqrt{1 - x_o^2} \approx 1 in the second term of equation (4.6), we get

x_o \approx \frac{\Delta\mu}{\varepsilon_J + jK}.   (4.7)
This simple expression shows how the parameters of the model are related. In the CJH, there are no restrictions on the signs of the physical quantities K, ε_J and ∆µ. However, we will confine our discussion to conventional quantities with K and ε_J positive, while ∆µ can be either positive or negative. Given our restrictions on the signs of K, ε_J and ∆µ, we see that equation (4.5) is a bona fide oscillator for the whole range of parameters of the model. In this case, the coordinate axis x is associated with the difference in the number of atoms between the two wells. In addition, following our definition of x = m/j, we expect |x_o| < 1, and x_o cannot increase indefinitely with ∆µ. Therefore, for equation (4.6), when |∆µ| is large, we expect |x_o| → 1 if the SHA is valid. This change of x_o from an approximately linear dependency on |∆µ| to becoming nearly constant should correspond to some significant transition of the system. See figure (4.3).
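As a rough numerical sketch of this procedure, one can solve equation (4.6) for x_o by bisection and read off ω = 2√(AB) from equation (4.5), comparing both against exact diagonalization of the CJH. The specific parameter values (N = 200 and, in units of ε_J, K = 1 and ∆µ = 50) are our illustrative choices, not taken from the text:

```python
import numpy as np

# illustrative parameters, in units of eps_J: H = (K/2)Jz^2 - dmu*Jz - Jx
N, K, dmu = 200, 1.0, 50.0
j = N // 2
m = np.arange(-j, j + 1, dtype=float)

Jz = np.diag(m)
cp = np.sqrt((j - m[:-1]) * (j + m[:-1] + 1))    # <m+1|J+|m>
Jx = 0.5 * (np.diag(cp, 1) + np.diag(cp, -1))
H = 0.5 * K * Jz @ Jz - dmu * Jz - Jx

# SHA, equation (4.6): K*j*xo + xo/sqrt(1-xo^2) - dmu = 0, solved by bisection
f = lambda x: K * j * x + x / np.sqrt(1 - x * x) - dmu
lo, hi = -1 + 1e-12, 1 - 1e-12
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
xo = 0.5 * (lo + hi)

# SHA excitation energy from equation (4.5): omega = 2*sqrt(A*B)
A = 0.5 * j * np.sqrt(1 - xo**2)
B = 0.5 * K + 1.0 / (2 * j * (1 - xo**2) ** 1.5)
omega = 2.0 * np.sqrt(A * B)

E, V = np.linalg.eigh(H)
x_exact = (V[:, 0] ** 2) @ m / j                 # mean of the exact ground state
print(xo, x_exact)                               # should agree closely (cf. figure 4.3)
print(omega, E[1] - E[0])                        # SHA vs exact excitation energy
```

Only the single scalar equation (4.6) and closed-form coefficients are needed on the SHA side, versus a (2j+1)-dimensional diagonalization for the exact result.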
As in the LMG, the SHA expression of the CJH in equation (4.5) is not unique. It is also possible to apply the J_x-diagonal representation for J_± and J_z, as given in equations (3.29) and (3.30), to the CJH in equation (4.3). Applying the SHA to the CJH expressed in the J_x-diagonal representation gives

H' \approx j^2\left(\frac{K}{2}(1 - x_o^2) + \frac{\Delta\mu}{2j}\sqrt{1 - x_o^2}\right)\frac{1}{j^2}\frac{d^2}{dx'^2} - \left(\frac{K}{2} + \frac{\Delta\mu}{2j\sqrt{(1 - x_o^2)^3}}\right)j^2x'^2 - \left(Kjx_o + \frac{\Delta\mu\,x_o}{\sqrt{1 - x_o^2}} - \varepsilon_J\right)jx' + j\left(\frac{Kj}{2}(1 - x_o^2) + \Delta\mu\sqrt{1 - x_o^2} + \varepsilon_J x_o\right).   (4.8)
We see that for positive K and ε_J, equation (4.8) is a standard oscillator (with the correct signs for the kinetic and potential energies) only when ∆µ is sufficiently negative⁶. Since equations (4.5) and (4.6) already take into account the whole range of parameters for our discussion of the CJH, it suffices to use the SHA in the J_z-diagonal representation alone.
The three regimes of CJH and some results
The CJH can be divided into three regimes of interest [40], characterized by the ratio K/εJ .
To get to the different regimes, we can either adjust K, which is dependent on the density and
the inter-atomic interactions of the gaseous BEC atoms7, or the tunnelling amplitude εJ . The
tunnelling amplitude εJ can be altered by adjusting the intensity of the laser that splits the
trap confining the BEC. A high laser intensity will generate a large potential barrier between
the two wells. This results in a small tunnelling amplitude εJ . We can see from equation (4.5)
that εJ contributes to both the ‘inverse mass’ and ‘spring constant’ of the oscillator. When the
laser intensity is low, εJ is large and hence the ‘frequency’ of the oscillator will be high and vice
versa. This can be related to the oscillation of population in each well as shown in figure (4.1).
Such oscillations are generated by creating an initial imbalance of population between the two
wells by manipulating the two-well potentials8. However, in our discussion, our concern will be
focused on the properties of the eigenvalues and eigenvectors of the CJH rather than a study
of the time-evolution of the BEC in the two-well potential.
When the inter-atomic interactions are negligible compared to the tunnelling amplitude, such that K/ε_J ≪ 1/N, the BEC oscillates between the states |1⟩ and |2⟩ at a tunnelling frequency predominantly dependent on ε_J (see equation (4.5)). Such oscillations are reminiscent of Rabi oscillations in atomic physics, where atoms illuminated by a coherent beam of photons cyclically become excited and de-excite between two states. Thus, the range K/ε_J ≪ 1/N is known as the Rabi regime. When the inter-atomic interaction is more significant compared to the tunnelling amplitude ε_J, such that 1/N ≪ K/ε_J ≪ N, we have the Josephson regime. Finally, when K/ε_J ≫ N, tunnelling between the two wells becomes negligible and the condensates in the two wells can be regarded as being in two separate Fock states where the particle number in each well is considered as fixed. This is known as the Fock regime. For simplicity, in our subsequent
discussions, we will express all quantities in units of εJ . Comparisons will be made for K/εJ
in each regime over a range of ∆µ/εJ . Because K/εJ spans over a few orders of magnitude,
⁶Otherwise, it involves negative masses and inverted shifted oscillator potentials.
⁷This interaction is related to the scattering length, which is in turn dependent on the details of the atomic potential. Very small changes in the atomic potential, such as the presence of an external magnetic field, can result in significant changes in the scattering length and even its sign [43].
⁸When K/ε ∼ 1/N, there exists a critical limit of imbalance beyond which the population will remain in the initial well and not oscillate. This is known as self-trapping.
we will use ∆µ/(1+jK) (≈ x_o, as seen in equation (4.7), with ∆µ and K in units of ε_J here) on the abscissa in figures (4.3) - (4.5) for ease of comparison. We will use K/ε_J = 0.01/N, representing the Rabi regime, and K/ε_J = 1, representing the Josephson regime, for the comparisons.
For the quantities x_o, ω and E in figures (4.3) - (4.4), the Fock regime has been excluded because these quantities in the Fock regime vary with ∆µ/(1+jK) in ways similar to those in the Josephson regime, albeit on different scales. Furthermore, the quality (in fractional error terms) of the SHA predictions for these quantities is similar in both the Fock and Josephson regimes. However, the fractional errors of the SHA predictions for the spreads σ of the eigenfunctions in the Fock regime are comparatively large, so they have been included in figure (4.5).
Figure 4.3: The mean values x of the components of the exact ground state eigenvectors are
compared with the means xo of the SHA eigenfunctions. Here, N = 200 with K/εJ = 1 and
K/εJ = 0.01/N . The exact results are indicated by the diamonds and boxes while the SHA
predicted ones by crosses. The agreement between the two sets of results is good. A numerical
check indicates that the two quantities typically differ only in the second or third decimal place.
From figure (4.3), we see that the means x_o of the SHA eigenfunctions given in equation (4.6) are in good agreement with the mean values x of the components of the exact ground state eigenvectors for both regimes. In fact, a numerical check indicates that the two typically differ only in the second or third decimal place. The quantity x_o also gives the SHA predicted mean difference of the population between the two wells. Therefore, whenever x_o ≠ 0, we expect an asymmetric oscillation of the population about m_o = jx_o. In the Rabi regime, x (and x_o) varies relatively smoothly throughout the whole domain of ∆µ. In contrast, in the Josephson regime, we see a distinctive change in x (and x_o) when |∆µ/(1+jK)| ∼ 1. When |∆µ/(1+jK)| ≳ 1, only one of the wells is occupied as |x| → 1. The two-well potential (such as the one in figure (4.2)) is so asymmetric that once an atom tunnels to the lower potential well, the bias is so large that it is improbable that the atom can tunnel back into the higher well. This transition at ∆µ/(1+jK) ∼ 1 becomes more distinct as K/ε_J becomes larger. This can be easily verified by plotting the solution of equation (4.6) for a range of K/ε_J.
In figure (4.4), we see that the SHA predictions of the ground state energy and excitation energies are again in good agreement with the exact results. If greater accuracy of the eigenvalues is required, we can use either ESHA(I), ESHA(II) or RRKK, as mentioned in chapter 3. As seen in figure (4.4), the ground state energies vary continuously with ∆µ/(1+jK), and there is no significant change in the vicinity of |∆µ/(1+jK)| ∼ 1 for both the Rabi and Josephson regimes. On the other hand, a noticeable change occurs in both the excitation energy ω in figure (4.4) and the spread of the eigenfunction σ in figure (4.5) (which are both associated with the dynamics of the system) in the Josephson regime when |∆µ/(1+jK)| ∼ 1. When the bias is large, i.e. |∆µ/(1+jK)| > 1, the spreads of the eigenfunctions σ → 0, signifying that the most important components of the eigenvectors are highly localized at just a few basis states centred at m_o = jx_o. As we have seen earlier, |x_o| ∼ 1 in this domain. Thus, we can summarize that in the Josephson (and Fock) regimes, when |∆µ/(1+jK)| > 1, all the particles are in the lower well of the two-well potential. The excitation energy varies approximately linearly with ∆µ/(1+jK) in this domain.
In figure (4.5), we see that the SHA predicted spreads for the eigenfunctions agree with the exact results, except in the Fock regime where |∆µ/(1+jK)| < 1. In the Rabi and Josephson regimes, the spreads σ are large in the domain |∆µ/(1+jK)| < 1. A larger spread σ indicates a greater uncertainty in the difference in particle number between the two wells. This also suggests oscillatory
behaviour in the population in the wells. Previously, for the LMG, in the domain where the spread is large, the contraction of the su(2) algebra to the hw(1) algebra is a good approximation. For the CJH, a comparison between the components of the exact eigenvectors and the SHA eigenfunction (such as in figure (4.6)) confirms that the su(2) algebra also contracts to the hw(1) algebra for the CJH in these domains. The accuracy of the SHA predicted spreads allows us to use them as a guide for the choice of techniques to obtain eigenvalues of higher accuracy. In the Rabi or Josephson regime, domains with large spread suggest the use of ESHA(II) or RRKK. However, in the Josephson regime, the spread has an intermediate value σ ≲ 3 for our example. Thus, the use of ESHA(I) is also possible. For the Fock regime, the small spreads make ESHA(I) the natural choice.
Figure 4.4: The SHA predictions of (top) the ground state energies E/ε_J and (bottom) the excitation energies ω/ε_J are indicated by crosses. The exact results are indicated by diamonds and boxes. Here, N = 200. The vertical axes on the left correspond to K/ε_J = 1 (in the Josephson regime) and those on the right correspond to K/ε_J = 0.01/N (in the Rabi regime). Depending on the experiment, K/ε_J can be varied such that ε_J differs between regimes, and E or ω have to be scaled accordingly before comparisons are made between different regimes.
Figure 4.5: We compare the SHA prediction (crosses) of the spread σ of the eigenfunctions with
the exact results (diamonds). The three regimes, Rabi (R), Josephson (J) and Fock (F), are
indicated.
Figure 4.6: We compare the SHA predictions (crosses, diagonal crosses) of the spread σ of the
eigenfunctions with the exact results (diamonds, boxes). The boundaries K/εJ = j are
indicated, together with the techniques appropriate for each regime.
We confirm our strategies by studying the variation of widths σ with K/εJ in figure (4.6) for
systems with N = 200 and N = 1000. A non-vanishing bias of ∆µ = 0.04K has been included
for the purposes of checking the expressions later in tables (4.2) and (4.3).
In figures (4.7)-(4.8), we compare the components of the exact eigenvectors
(diamonds) with the SHA predicted components (crosses and diagonal crosses). We consider
two examples for each regime: one with a small bias (∆µ/(1+jK) = 0.05) and one with a large bias
(∆µ/(1+jK) = 1.50). The components of the eigenvectors for ∆µ/(1+jK) = 0.05 can be approximated
using am ≈ ψ(m). The SHA predicted components agree with the exact ones for all three
regimes. Although the SHA predicted spreads in the Fock regime are generally not accurate
(see figures (4.5) and (4.8)), they do not affect the prediction significantly in this case because
σ < 1 and x is accurately determined. For ∆µ/(1+jK) = 1.50, we also include results obtained using
the binomial approximation (3.67). As we can see in figures (4.7)-(4.8), this technique gives a
better approximation than using am ≈ ψ(m) for all three regimes when ∆µ/(1+jK) = 1.50.
Panels (left to right): ∆µ/(1+jK) = 0.05 and ∆µ/(1+jK) = 1.50, with K/εJ = 0.01/N (Rabi).
Figure 4.7: The components of the exact ground state eigenvector am are indicated by black
diamonds. The SHA wave function ψ(m) is given by the red curve. The crosses (‘+’) indicate
ψ(m) evaluated at points m = −j,−j + 1, . . . , j(= 100). (Right) The diagonal crosses (‘x’)
indicate components of the eigenvector obtained using the binomial approximation (3.67).
Panels (left to right): ∆µ/(1+jK) = 0.05 and ∆µ/(1+jK) = 1.50, with K/εJ = 1 (Josephson, top row) and K/εJ = 100N (Fock, bottom row).
Figure 4.8: The components of the exact ground state eigenvector am are indicated by black
diamonds. The SHA wave function ψ(m) is given by the red curve. The crosses (‘+’) indicate
ψ(m) evaluated at points m = −j,−j + 1, . . . , j(= 100). (Right) The diagonal crosses (‘x’)
indicate components of the eigenvector obtained using the binomial approximation (3.67).
For the Rabi and Josephson regimes, the kinetic and potential terms of the SHA Hamiltonian
do not vanish over a wide range of parameters. However, when |xo| → 1, the kinetic
term vanishes. This happens when ∆µ/(1+jK) is large. Therefore, according to table (3.3), the
transition from small to large bias ∆µ in the Rabi and Josephson regimes can be classified as
P1-P3. On the other hand, the transition from the Rabi to the Fock regime can be classified
as P1-P4. We will discuss this in greater detail next.
The SHA expressions for the excitation and ground state energies
Before proceeding to examine the effectiveness of ESHA(I) and ESHA(II), we first
study in greater detail the SHA expressions that gave good approximate results (such as in figure (4.4)).
In particular, the approximation for the excitation energy is:
∆E (= E2 − E1) ≈ ω = 2√[ (jεJ/2)√(1 − xo²) ( K/2 + εJ/(2j(1 − xo²)^(3/2)) ) ] (4.9)
where xo is given by the solution of equation (4.6). When the bias ∆µ is zero, xo = 0 also. In
this case, the expression for ω agrees with that given by the Gross-Pitaevskii mean-field theory
[30], [50], [28], [26]. (We note that because ℏ = 1, ω also gives the tunnelling frequency of
the oscillation in the Rabi and Josephson regimes.) The SHA predicted ground state energy is
given by
Eo ≈ j( (K/2) j xo² − ∆µ xo − εJ √(1 − xo²) ). (4.10)
Using equation (4.9), we will derive some simple scaling relationships between the excitation
energies and the size of a system.
We assume that the number of particles N in the system is relatively large. In the
absence of a bias, i.e. ∆µ = 0 and xo = 0, we can express the ratio K/εJ as a multiple of 1/j, i.e.
K/εJ = aR/j for the Rabi regime, where aR ∼ 1. Using equation (4.9), we get
ωRabi/εJ ≈ √(aR + 1). (4.11)
We see that the excitation energy is approximately independent of the number of particles.
Similarly, for the Josephson regime, we can write K/εJ = aJ j where aJ ≲ 1. Thus, using
equation (4.9), we get
ωJosephson/εJ ≈ j√aJ. (4.12)
The excitation energy scales approximately with the size of the system in the Josephson regime.
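The two scaling relations above can be cross-checked numerically. The sketch below (a minimal illustration assuming only equation (4.9) with the zero-bias solution xo = 0; the function name and parameter values are choices of this sketch) evaluates ω for representative Rabi and Josephson parameters and compares it with √(aR + 1) and j√aJ:

```python
import math

def omega_sha(j, K, eps_J, xo=0.0):
    """Excitation energy from equation (4.9)."""
    s = math.sqrt(1.0 - xo**2)
    return 2.0 * math.sqrt((j * eps_J / 2.0) * s
                           * (K / 2.0 + eps_J / (2.0 * j * s**3)))

j, eps_J = 100, 1.0                  # N = 200 particles, energies in units of eps_J

# Rabi regime: K/eps_J = aR/j with aR ~ 1  ->  omega/eps_J ~ sqrt(aR + 1)
aR = 1.0
print(omega_sha(j, aR / j, eps_J), math.sqrt(aR + 1.0))

# Josephson regime: K/eps_J = aJ*j with aJ <~ 1  ->  omega/eps_J ~ j*sqrt(aJ)
aJ = 0.5
print(omega_sha(j, aJ * j, eps_J), j * math.sqrt(aJ))
```

The Rabi value is size independent, while the Josephson value grows linearly with j, as the text states.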
In the Fock regime, where K/εJ ≫ N, we can regard this as εJ → 0, i.e. the laser field intensity
between the wells is very high and consequently the tunnelling amplitude is very low. In such a
case, the excitation energy given by equation (4.9) vanishes. We can interpret this as indicating that the system
is no longer oscillatory. We can find an alternative approach for estimating the eigenvalues by
looking at equation (4.5). Applying the approximation εJ → 0 to equation (4.5), the SHA
representation of the Hamiltonian simplifies to
HFock ≈ (K/2) j² x′² + (Kjxo − ∆µ) j x′ + j( (K/2) j xo² − ∆µ xo ). (4.13)
Though we no longer have an expression for a shifted oscillator, we still have an expression for
the eigenvalue of the Hamiltonian. Allowing the coefficient of x′ to vanish as before, we get
xo ≈ ∆µ/(jK). (4.14)
Thus, as long as |xo| ∼ 0, the eigenvalues can be approximated as
E ≈ (K/2)( m − ∆µ/K )² − ∆µ²/(2K) (4.15)
where m = 0, ±1, ±2, . . . , ±j. Here, we have used x′ = m/j − xo to obtain equation (4.15).
We note that the ground state eigenvalue does not necessarily correspond to m = 0 but rather
m ∼ [∆µ/K], where [. . .] indicates the nearest integer. Equation (4.15) resembles the expression for
a particle-in-box type of spectrum (see equation (2.12) and apply to one dimension). Thus, we
will call it the Standing Wave Approximation (SWA). This phase region has been introduced as
P4 in table (3.3). As mentioned earlier, in the Fock regime, the tunnelling amplitude εJ between
the two wells is negligible compared to the inter-atomic interaction and hence the quantity K.
Thus, we have two separate BECs, one in each well. The BEC in each well exhibits a particle-
in-box type of spectrum. If the bias is zero, the spectra from the two wells are degenerate. If
the bias is not zero, each pair of energy levels differs slightly (see figure (4.9)),
with the difference determined by the bias.
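The SWA spectrum of equation (4.15) is simple enough to enumerate directly. A minimal sketch (parameter values are illustrative only, chosen so that xo = ∆µ/(jK) ∼ 0 as the approximation requires); it confirms that the lowest level corresponds to the nearest integer m ∼ [∆µ/K]:

```python
def swa_levels(K, dmu, j):
    """Standing Wave Approximation spectrum, equation (4.15)."""
    return sorted(0.5 * K * (m - dmu / K) ** 2 - dmu**2 / (2.0 * K)
                  for m in range(-j, j + 1))

K, dmu, j = 1.0, 2.3, 50        # illustrative values; xo = dmu/(j*K) ~ 0.05
levels = swa_levels(K, dmu, j)
m_best = round(dmu / K)         # nearest integer [dmu/K]
E_best = 0.5 * K * (m_best - dmu / K) ** 2 - dmu**2 / (2.0 * K)
print(levels[0], E_best)        # lowest SWA level corresponds to m ~ [dmu/K]
```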
From figure (4.9), we can see that as K/εJ increases, the system makes a transition from a vibrator to
a particle-in-box type of system. Interestingly, in this standing wave limit, we see from equation
(4.15) that the excitation energy is again approximately independent of the size of the system,
as in the Rabi regime.
Our approximations in equations (4.11) and (4.12) agree with the plots given in figure (4.10).
As predicted, we see the plots for N = 200 and N = 1000 converging in both the Rabi and
Fock regimes, indicative of j independence. In the Rabi regime, ω is nearly constant while in
the Fock regime, ω varies linearly with K/εJ . In the Josephson regime, we can use a simple
approximation: when K/εJ ≈ j, the excitation energy ω/εJ ∝ j. From these approximations,
we see that for su(2), when the total spin j can be associated with the total number of particles
in the system, the SHA gives approximate relations between physical quantities and the size of
the system. The findings on the excitation energy here agree with those reported in [26] and
[73].
Rabi: K/εJ = 0.01/j = 10⁻⁴. Fock: K/εJ = 100j = 10⁴.
Figure 4.9: We compare the exact spectra with those obtained using the SHA (Rabi regime)
and the SWA (Fock regime) for a system with N = 200 and ∆µ = 0.04K. The spectrum in
the Rabi regime is vibrator-like while that of the Fock regime is similar to the spectrum of a
particle-in-box system.
The excitation energies presented in figure (4.10) are tabulated in tables (4.2) and (4.3). We
set a non-vanishing bias, ∆µ = 0.04K, to test all the terms in the approximations. The results
obtained using ESHA(I) and ESHA(II) are included. In the Fock regime, we continue to use
xo as before to locate the most relevant basis states for performing accuracy improvement. In
this regime, the spread σ of the eigenfunction is typically small as we have seen in figure (4.6).
We see again in tables (4.2) and (4.3) that very accurate results are obtained. The eigenvalues
of the low-lying excited states (which are not shown) are also obtained to similar accuracy.
Figure 4.10: We compare the SHA predictions (crosses) of the excitation energies ω with the
exact results (diamonds) for N = 200, 1000 with ∆µ = 0.04K. The domains where the SWA
is used are indicated by the arrows. The numerical results are given in tables (4.2) and (4.3).
log(K/εJ)   Exact                  SHA [SWA]             Improved (E2 − E1)/εJ   Technique used (matrix size)
−4.0        1.0049628084298        1.004995              1.0049628084298         ESHA(II) 21 by 21
−3.0        1.0485812372548        1.048809              1.0485812372551         ”
−2.0        1.4128554992607        1.414214              1.4128554992611         ”
−1.0        3.3082186560930        3.316628              3.3082186560936         ”
0.0         9.9444647577184        10.050                9.9444647577175         ”
1.0         30.40867052246806      31.639                30.408957471293         ESHA(II) 24 by 24
                                                         30.40867052494311       ESHA(I) 21 by 21
*2.0        87.96472353210262      100.005               87.96472353210268       ESHA(I) 17 by 17
3.0         473.33549299801725     316.229 [460.0]       473.33549299801720      ”
4.0         4601.39189808456102    1000.000 [4600.0]     4601.39189808455769     ”
5.0         46000.13925016221988   3162.278 [46000.0]    46000.13925016224192    ”
6.0         460000.01392507642659  9999.999 [460000.0]   460000.01392507654259   ”
Table 4.2: N = 200, ∆µ = 0.04K. The exact results are obtained by diagonalizing a 201 × 201
Hamiltonian matrix. An ‘*’ is used to indicate K/εJ = j. The techniques used to obtain the
more accurate excitation energy (E2 − E1)/εJ are indicated in the last column.
log(K/εJ)   Exact                SHA [SWA]             Improved (E2 − E1)/εJ   Technique used (matrix size)
−4.0        1.024671259022       1.024695              1.024671259024          ESHA(II) 21 by 21
−3.0        1.224573972767       1.224745              1.224573972767          ”
−2.0        2.448635031862       2.449490              2.448635031867          ”
−1.0        7.131672850083       7.141428              7.131672850084          ”
0.0         22.267743827801      22.38303              22.267743827804         ”
1.0         69.479176541388      70.718                69.479176541384         ESHA(II) 24 by 24
                                                       69.479176541386         ESHA(I) 17 by 17
2.0         210.392834675586     223.609               210.39283498148         ”
*2.7        438.912759159224     500.001               438.912759159224        ”
3.0         642.850063474911     707.107 [460.0]       642.850063474909        ”
4.0         4634.164052312388    2236.068 [4600.0]     4634.164052312387       ”
5.0         46003.453238280126   7071.068 [46000.0]    46003.453238280116      ”
6.0         460000.345360962898  22360.680 [460000.0]  460000.345360962664     ”
Table 4.3: N = 1000, ∆µ = 0.04K. The exact results are obtained by diagonalizing a 1001 × 1001
Hamiltonian matrix. An ‘*’ is used to indicate K/εJ = j. The techniques used to obtain the
more accurate excitation energy (E2 − E1)/εJ are indicated in the last column.
Putting ESHA(I) and ESHA(II) to the test
We end this section by exploiting the accuracy improvement techniques to solve for the excitation
energies of systems with N = 10² . . . 10⁹ atoms for K/εJ = 0.1N, which is in the Josephson
regime. To obtain exact results, we would have to diagonalize an (N+1)×(N+1) Hamiltonian matrix.
(We will continue to use the bias ∆µ = 0.04K as before.)
The SHA predicts that mo ∼ 0 and that the width remains relatively constant at σ ∼ 1.5 over this
range of N. Thus, we expect ESHA(I) to work well. See figure (4.11). We use 41 × 41
Hamiltonian matrices to obtain the low-lying eigenvalues and check for convergence using 51 × 51
matrices. The excitation energies (E2 − E1)/εJ are given, to the precision at which the two
results agree, in table (4.4). We see that the first excitation energy is obtained to a high degree
of accuracy, with a huge improvement in computational efficiency compared to getting
exact results by diagonalizing (N + 1) × (N + 1) matrices.
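This truncation strategy can be sketched as follows, assuming the CJH takes the su(2) form H = (K/2)Jz² − ∆µJz − εJJx (an assumption of this sketch, though it is consistent with the SHA expressions (4.9)-(4.10)); the sub-matrix is built from the basis states |j m⟩ centred on mo ∼ 0:

```python
import numpy as np

def cjh_submatrix(j, K, dmu, eps_J, m_center, size):
    """Sub-block of H = (K/2)Jz^2 - dmu*Jz - eps_J*Jx in the |j,m> basis,
    centred on m_center: an ESHA(I)-style truncation of the (2j+1)-dim matrix."""
    lo = max(-j, m_center - size // 2)
    ms = np.arange(lo, min(j, lo + size - 1) + 1)
    H = np.diag(0.5 * K * ms.astype(float) ** 2 - dmu * ms)
    for i in range(len(ms) - 1):          # off-diagonal from Jx = (J+ + J-)/2
        m = ms[i]
        H[i, i + 1] = H[i + 1, i] = -0.5 * eps_J * np.sqrt((j - m) * (j + m + 1.0))
    return H

j, eps_J = 50, 1.0                        # N = 100, energies in units of eps_J
K = 0.1 * 2 * j                           # K/eps_J = 0.1N (Josephson regime)
dmu = 0.04 * K
gap41 = np.diff(np.linalg.eigvalsh(cjh_submatrix(j, K, dmu, eps_J, 0, 41)))[0]
gap51 = np.diff(np.linalg.eigvalsh(cjh_submatrix(j, K, dmu, eps_J, 0, 51)))[0]
print(gap41, gap51)                       # converged first excitation energy
```

Agreement between the 41 × 41 and 51 × 51 results signals convergence, exactly as in the procedure described above.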
In table (4.4), we see a very obvious trend in how the excitation energy varies with N. This
concurs with the prediction in equation (4.12), which indicates that in the Josephson regime,
Figure 4.11: The full Hamiltonian matrix of the CJH can vary in size depending on N . In the
Josephson regime, the most important information on the low-lying part of the spectrum is
contained in the Hamiltonian sub-matrix H ′, depicted by the grey box. The dimension of H ′
remains relatively constant at a small value even for N ranging over a few orders of magnitude.
Thus, ESHA(I) remains applicable for models with very large N in this regime.
log(N) ω/εJ (SHA) (E2 − E1)/εJ ESHA(I)
2 22.38302574 21.155212983576
3 223.6090334 210.392834675584
4 2236.068201 2102.9028183199
5 22360.67980 21028.016070793
6 223606.7978 210279.1499375
7 2236067.978 2102790.4887389
8 22360679.77 21027903.876766
9 223606797.7 210279037.757040
Table 4.4: The table above shows how the excitation energy varies with particle number N for
systems with K/εJ = 0.1N (Josephson), ∆µ = 0.04K. The second column shows the prediction
of ω/εJ given by the SHA. Exact results would require diagonalizing (N + 1) × (N + 1)
Hamiltonian matrices; for example, for log(N) = 9, N = 10⁹. However, using ESHA(I),
we only need to diagonalize 41 × 41 Hamiltonian matrices, and we check for convergence using
51 × 51 matrices. The results listed under the column (E2 − E1)/εJ are given to the precision
at which the two results agree. We see an obvious trend in how the excitation energy varies with
N.
the excitation energy is proportional to the size of the system. This simple scaling is achieved by
expressing K/εJ as a multiple of j in equation (4.12). Thus, if we need to know the approximate
excitation energy of a huge system, we can first work out the excitation energy of a small system
and then scale it accordingly, as we have seen. This works similarly for other values of K/εJ
in this regime. It is noteworthy that in the CJH, when a parameter is expressed as a function of
the size of the system (such as K/εJ = aJ j), such approximate scaling behaviour can be found
even away from critical points.
In this section, we saw that for the CJH, very good estimates of the low-lying spectrum
of large systems can be obtained using the SHA techniques, which require relatively little
computational resources. We can use the SHA or ESHA(II) for the Rabi regime, ESHA(I) for
the Josephson regime, and the SWA or ESHA(I) for the Fock regime. We have covered all domains,
and the techniques mentioned are relatively independent of the size of the system. In the next
section, we will proceed to study a different system in which the spectrum generating algebra
is the non-compact su(1, 1).
4.2 The 5-D Quartic Oscillator
The Bohr-Mottelson model applies to a sub-dynamics of the nucleus in which the nucleons
behave collectively as a liquid drop. In the collective model, one assumes that the low-lying
states of the nucleus can be described in terms of vibrations and rotations of the nuclear-matter
distribution. The dynamics of the collective motion is associated with the ‘shape’ of the nuclear
surface. Here, it is assumed that the size of the nucleus is sufficiently large that the size of
the individual nucleons can be ignored. Thus, collective models work best for medium to heavy
nuclei. It is convenient to model the shape of the surface as a variation of the radius R at different
angles (θ, φ), using a multi-pole expansion [9], [57] and [66]:
R(θ, φ, t) = Ro( 1 + α00(t) + Σ_{λ=1}^{∞} Σ_{µ=−λ}^{λ} α*λµ(t) Yλµ(θ, φ) ) (4.16)
where the αλµ are parameters that describe the deviation from a sphere of radius Ro. Here,
Yλµ(θ, φ) are the spherical harmonics. As mentioned, our interest lies in the low excitations.
Therefore, the contributions from higher order harmonics with λ ≥ 3 are assumed to be insignificant
in our consideration. For the monopole mode λ = 0, the parameter α00 corresponds
to a change in the radius of the sphere. However, the energy associated with this ‘breathing mode’,
a uniform variation of the volume of the nucleus, is also too high to be of interest in our
consideration. For small dipole deformations associated with λ = 1, the parameters α1µ are
mainly associated with a translation of the whole system. Thus, we are not interested in this
contribution either.
The most important terms in our consideration come from the quadrupole deformations
(λ = 2), which comprise five components. For the model we are considering, three of these
components are related to determining the orientation of the drop and two are associated
with the shape of the drop. We can transform this set of coordinates to the body-fixed set
which coincides with the coordinates of the principal axes of the mass distribution of the drop.
In this set of coordinates, we are reduced to just two independent variables, a20 and a22 = a2−2.
The coordinates a2−1 = a21 vanish. It is more convenient to express a20 and a22 in terms
of the so-called Hill-Wheeler [32] coordinates β(> 0) and γ, where
a20 = β cos γ , a22 = (β/√2) sin γ (4.17)
satisfying
Σµ |α2µ|² = a20² + 2 a22² = β². (4.18)
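The invariant (4.18) follows directly from the parametrization (4.17); a minimal numerical sketch (function name and values are illustrative):

```python
import math

def hill_wheeler(beta, gamma):
    """Body-fixed quadrupole coordinates, equation (4.17)."""
    a20 = beta * math.cos(gamma)
    a22 = beta / math.sqrt(2.0) * math.sin(gamma)
    return a20, a22

beta, gamma = 1.3, 0.7
a20, a22 = hill_wheeler(beta, gamma)
# Invariant of equation (4.18): a20^2 + 2*a22^2 = beta^2 for any gamma
print(a20**2 + 2.0 * a22**2, beta**2)
```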
Writing equation (4.16) in full with just the quadrupole deformation gives
R(θ, φ) ≈ Ro[ 1 + β √(5/16π) ( cos γ (3 cos²θ − 1) + √3 sin γ sin²θ cos 2φ ) ]. (4.19)
For small deformation, the multi-pole moments Qλµ, of a nucleus with density distribution
ρ(r), are approximately proportional to the deformation parameters αλµ
Qλµ ≈ −kλ αλµ ∫ r^(λ+3) (dρ/dr) dr ∝ αλµ. (4.20)
The details of the derivation can be found in [66].
Therefore, we can introduce phenomenological collective Hamiltonians (involving just the quadrupole
deformation) as
Hcol = Σ_{µ=−2}^{2} (1/M)|P2µ|² + U(Q) (4.21)
where M is a mass parameter and P2µ is a set of momentum coordinates canonical to Q2µ. The
collective potential U(Q) should be some rotationally invariant function of the multi-pole moments.
Coupling the quadrupole moments Q2 to zero angular momentum yields
(Q2 × Q2)0 ∝ β² (4.22)
(Q2 × Q2 × Q2)0 ∝ β³ cos 3γ. (4.23)
For a simple spherical vibrator, we have
U(β, γ) = c2 β² (4.24)
while for a nucleus with an axially symmetric deformed equilibrium shape, a suitable potential is
given by
U(β, γ) = c2 β² + c3 β³ cos 3γ + c4 β⁴. (4.25)
In quantizing the Hamiltonian in equation (4.21), we assume the Heisenberg commutation
relations
[Qλµ, Pλ′µ′] = iℏ δλλ′ δµµ′ , [Qλµ, Qλ′µ′] = [Pλµ, Pλ′µ′] = 0. (4.26)
The 5-D Quartic Oscillator
The model we consider has a γ-independent potential [67], [74], [18]
U(β; α) = V(α) = (M/2)[ (1 − α)β² + α(−β² + β⁴) ] (4.27)
where M is a dimensionless mass parameter and α is a control parameter (not to be confused
with the deformation parameters) such that when α < 0.5, the minimum of the potential
is at βo = 0 and we have a spherical vibrator. When α > 0.5, the minimum is at βo = √((2α − 1)/(2α)).
This is commonly known as a γ-soft rotor. See figure (4.12).
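The location of the minimum βo can be verified directly from equation (4.27); a small sketch (brute-force scan, with illustrative parameter values):

```python
import math

def V(beta, alpha, M=10.0):
    """Gamma-independent potential, equation (4.27):
    (M/2)[(1-alpha)beta^2 + alpha(-beta^2 + beta^4)] = (M/2)[(1-2a)b^2 + a*b^4]."""
    return 0.5 * M * ((1.0 - 2.0 * alpha) * beta**2 + alpha * beta**4)

alpha = 2.0
betas = [i * 1e-4 for i in range(0, 20001)]           # scan beta in [0, 2]
beta_min = min(betas, key=lambda b: V(b, alpha))
print(beta_min, math.sqrt((2 * alpha - 1) / (2 * alpha)))   # both ~0.866
```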
Figure 4.12: The potential V(α) = (M/2)( (1 − 2α)β² + αβ⁴ ) is plotted for values of α =
0.1, 0.5 (dashed line), 1.0 and 2.0. The vertical axis scales with M. In this case, M = 10.
Essentially, the Hamiltonian we have is a five-dimensional quartic oscillator
H = −(1/2M)∇² + (M/2)( (1 − 2α)β² + αβ⁴ ) (4.28)
where β is a radial coordinate given in terms of the five spherical coordinates Qν by β² = Q·Q =
Σν QνQν. The radial part of equation (4.28) can be expressed in terms of su(1, 1) operators
S± = (1/4)[ (1/M)∇²ν + Mβ² ∓ ( 2β ∂/∂β + 1 ) ] (4.29)
So = (1/4)[ −(1/M)∇²ν + Mβ² ] (4.30)
where ∇²ν = d²/dβ² − (ν+1)(ν+2)/β². The quantity ν = 0, 1, . . . is the ‘angular momentum’ in the
SO(5) group, known as seniority. The su(1, 1) operators satisfy the commutation relations
[Sνo, Sν±] = ±Sν±, (4.31)
[Sν−, Sν+] = 2Sνo. (4.32)
We will work with models with the same ν. Thus, the irrep label ν on the operators will be
dropped from this point. The Hamiltonian in eq. (4.28) can be written as
H = 2So − αM( β² − (1/2)β⁴ ) (4.33)
with Mβ² = 2So + S+ + S−.
For su(1, 1), the matrix representation of the operators written in the basis where So is
diagonal is given by
⟨λn|So|λ′m⟩ = (1/2)(λ + 2n) δλ,λ′ δn,m (4.34)
⟨λn|S+|λ′m⟩ = √((λ + n − 1)n) δλ,λ′ δm,n−1 (4.35)
⟨λn|S−|λ′m⟩ = √((λ + n)(n + 1)) δλ,λ′ δm,n+1 (4.36)
where λ = ν + 5/2 characterizes an irreducible representation (irrep). The basis states are
labelled by n = 0, 1, 2, . . .. We will next explore the possibility of applying the SHA to the
quartic oscillator.
Applying the SHA to the 5-D Quartic Oscillator
Unlike su(2), the su(1, 1) algebra is non-compact, and the irrep label λ cannot be used to
assess the ‘granularity’ of distributions before applying the SHA to a problem. A preliminary
inspection of ground state eigenvectors such as those in figure (4.13) suggests that it may
be possible to adapt the previous SHA approach for su(1, 1) models. However, in this
case, we will simply approximate n as a continuous variable directly (using approximation (A)
in equation (3.21)). There will be no other variable such as x = m/j in su(2).
As before, we seek the form of the SHA Hamiltonian H such that
⟨λn|H|φiλ⟩ = Ei aiλ,n → H ψiλ(n) = Ei ψiλ(n) (4.37)
where the eigenvector in the ith excited state, corresponding to the eigenvalue Ei, is given by
|φiλ⟩ = Σ_{n=0}^{∞} aiλ,n |λn⟩ (4.38)
where the |λn⟩ can be realized as spherical harmonic functions. In figure (4.13), we show the
ground state eigenfunctions of the quartic oscillator for M = 100, α = 0.7, 2.0 and ν = 2.
In both cases, especially for α = 2.0, the components aλ,n are distributed like the ground state
eigenfunction of a shifted oscillator in the n coordinate system. This indicates that the su(1, 1)
algebra contracts to the augmented hw(1) algebra in this domain and the SHA should give
accurate approximations. Thus, we proceed as before by first deriving the SHA representations
for the su(1, 1) operators.
α = 0.7 (left), α = 2.0 (right)
Figure 4.13: The components of the ground state eigenvector an obtained from diagonalization
are indicated by black diamonds. The SHA wave function ψ(n) (which will be derived later)
is given by the red curve. The red crosses indicate ψ(n) evaluated at the points n = 0, 1, . . ..
In this example, M = 100, α = 0.7, 2.0 and ν = 2. The location no of the minimum of the SHA
oscillator potential is indicated in the plot.
As in equations (3.4) to (3.12) for the su(2) operators, we will first identify aiλ,n from the action
of various su(1, 1) operators on the eigenvectors. Using equations (4.34)-(4.36), we can write
⟨λn|So|φiλ⟩ = (1/2)(λ + 2n) aiλ,n (4.39)
and similarly
⟨λn|S+|φiλ⟩ = √((λ + n − 1)n) aiλ,n−1. (4.40)
Here, if we approximate n as a continuous variable, then ⟨λn|φiλ⟩ = aiλ,n → ψiλ(n). The terms
aiλ,n∓1 can be regarded as
aiλ,n∓1 → ψiλ(n ∓ 1) = e^(∓d/dn) ψiλ(n) (4.41)
≈ [ 1 ∓ d/dn + (1/2) d²/dn² ] ψiλ(n). (4.42)
Thus, in general, the su(1, 1) operators So and S± can be mapped into differential operators,
So and S±, defined by
⟨λn|So|φiλ⟩ → So ψiλ(n) = (1/2)(λ + 2n) ψiλ(n) := No(n) ψiλ(n) (4.43)
⟨λn|S+|φiλ⟩ → S+ ψiλ(n) = √((λ + n − 1)n) e^(−d/dn) ψiλ(n) (4.44)
≈ √((λ + n − 1)n) [ 1 − d/dn + (1/2) d²/dn² ] ψiλ(n). (4.45)
If the eigenfunctions of H are localized about a value no (yet to be determined), it is reasonable
to expand √((λ + n − 1)n) about no up to the first few terms in n′ = n − no. In the n′ coordinates,
So is given by
So ψiλ(n′) = (1/2)(λ + 2n′ + 2no) ψiλ(n′). (4.46)
For S±, we make the approximation √((λ + n − 1)n) ≈ √((λ + n)(n + 1)) ≈ √((λ + n)n). This is
similar to what is done for J± in su(2) in approximation (B) (see equation (3.21)). Essentially,
the action of the shift operators S± on a basis state |λn⟩ will only be distinguished by the
direction of the shift (to either |λn + 1⟩ or |λn − 1⟩) but not by the multiplicative factor⁹. This
approximation plays an important role in allowing the technique to give accurate results for distributions
over a wide range of σ, including σ < 1. On the other hand, for σ > 1, not making this
approximation yields more accurate results. However, since our priority is to have a SHA that
yields good approximations over the wide range of σ, we will exclude the ‘1’ in the expansions of
√((λ + n − 1)n) and √((λ + n)(n + 1)). Next, we perform a Taylor expansion about n = no to
bilinear terms (approximation (C)) to get¹⁰
S± ψiλ(n′) ≈ √((λ + no)no) ( 1 + (1/2) (λ + 2no)/((λ + no)no) n′ − (1/8) λ²/((λ + no)no)² n′² )( 1 ∓ d/dn′ + (1/2) d²/dn′² ) ψiλ(n′). (4.47)
⁹ From the examples in the thesis, it is imperative to make the multiplicative factors of raising and lowering operations the same when the variable is continuous in order for the SHA to give valid results over a wide range of parameters. An analogy can be found by comparing the expressions for the centrifugal potential in classical and quantum physics: L²/2mr² in classical physics, where the quantity can vary continuously, and l(l + 1)/2mr² in quantum physics, where the quantity can only take discrete values. We see that the ‘+1’ is not included in the continuous quantity.
¹⁰ We note that in Hermitian Hamiltonians, the contributions from the operators S± will always be such that the d/dn′ and n′ d/dn′ terms vanish. Thus, the net contributions of S+ and S− are the same in Hermitian Hamiltonians. The same applies to S²± following approximations (B)-(C).
We note that this expansion would require n′/no to be small for the series to converge. However,
as in equation (3.12), obtaining accurate information about the SHA potential does not depend
on the range of n′ over which the expansions converge. We can expect, as before, that quantities such
as the excitation energy and the spread of the eigenfunctions will depend simply on no and
the parameters used. Thus, all the information needed is already in the neighbourhood of no
and captured in the coefficients of the expansion.
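The expansion coefficients in equation (4.47) can be cross-checked against finite differences of f(n) = √((λ + n)n); a minimal sketch with illustrative values of λ and no:

```python
import math

lam, n0 = 4.5, 20.0
f = lambda n: math.sqrt((lam + n) * n)

# Analytic expansion coefficients appearing in equation (4.47)
c1 = 0.5 * (lam + 2 * n0) / ((lam + n0) * n0)
c2 = -0.125 * lam**2 / ((lam + n0) * n0) ** 2

# Central finite differences of f about n0
h = 1e-3
fp = (f(n0 + h) - f(n0 - h)) / (2 * h)
fpp = (f(n0 + h) - 2 * f(n0) + f(n0 - h)) / h**2
print(fp / f(n0), c1)            # linear coefficient: ~ equal
print(0.5 * fpp / f(n0), c2)     # quadratic coefficient: ~ equal
```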
We will next illustrate how the SHA can be applied to products of operators, such as S²± and
S±S∓, in greater detail:
⟨λn|S²±|φiλ⟩ → S²± ψiλ(n) = ⟨n|S±|n ∓ 1⟩⟨n ∓ 1|S±|φiλ⟩
= ⟨n|S±|n ∓ 1⟩⟨n ∓ 1|S±|n ∓ 2⟩ aiλ,n∓2
= ⟨n|S±|n ∓ 1⟩⟨n ∓ 1|S±|n ∓ 2⟩ e^(∓2 d/dn) ψiλ(n)
≈ N±(n) N±(n ∓ 1) [ 1 ∓ 2 d/dn + 2 d²/dn² ] ψiλ(n). (4.48)
(We will abbreviate |λn⟩ as |n⟩ etc. wherever it is unambiguous.) Here, we will not distinguish
n ± 1 from n in N±(n ∓ 1), for the reasons mentioned earlier in footnote (9). Therefore,
N±(n)N±(n ∓ 1) → N²±(n) ≈ (λ + n)n. For the products of operators S±S∓, whose actions on
any state |n⟩ do not result in any shift in n, the approximations are done in a slightly different
way:
⟨λn|S±S∓|φiλ⟩ → S±S∓ ψiλ(n) = ⟨n|S±|n ∓ 1⟩⟨n ∓ 1|S∓|n⟩ aiλ,n
= ⟨n|S±|n ∓ 1⟩⟨n|S±|n ∓ 1⟩ ψiλ(n)
= N²±(n) ψiλ(n) (4.49)
where N²+(n) = (λ + n − 1)n and N²−(n) = (λ + n)(n + 1). Here, we adopted a consistent principle
of not distinguishing the multiplicative factors (such as √((λ + n)(n + 1)) and √((λ + n − 1)n)) of
raising and lowering actions (see footnote 9). On the other hand, whenever the successive
actions of two operators are such that there is no net raising or lowering (for example, S±S∓),
we use the exact multiplicative factors without making any approximations.
We now map the full equation (4.33) into SHA operators to get
H = 2So − α( 2So + S+ + S− )
+ (α/2M)( 4S²o + (S²+ + S²−) + 2So(S+ + S−) + 2(S+ + S−)So + (S+S− + S−S+) ). (4.50)
Approximating n as a continuous variable and arranging terms by their shift actions on |n⟩, we
get
H ψiλ(n) = [ { 2No(n) − 2αNo(n) + (α/2M)( 4N²o(n) + N²+(n) + N²−(n) ) }
+ { −αN+(n) + (α/M)( No(n)N+(n) + N+(n)No(n − 1) ) } e^(−d/dn)
+ { −αN−(n) + (α/M)( No(n)N−(n) + N−(n)No(n + 1) ) } e^(d/dn)
+ (α/2M){ N+(n)N+(n − 1) e^(−2d/dn) + N−(n)N−(n + 1) e^(2d/dn) } ] ψiλ(n). (4.51)
In the next step, approximations like those in equations (4.48) and (4.49) are incorporated into
the above equation, and the remaining terms are Taylor expanded about n = no. Finally, we
shift the coordinates to n′ (= n − no) to get
H ≈ −A d²/dn′² + B n′² + D n′ + E (4.52)
where
A = α√((λ + no)no) − (α/M)[ 2(λ + no)no + √((λ + no)no)(λ + 2no) ] (4.53)
B = (1/4) α√((λ + no)no) ( λ²/(n²o(λ + no)²) )
+ (α/M)[ 4 − √((λ + no)no)( (λ + 2no)( (1/4) λ²/(n²o(λ + no)²) ) − 2(λ + 2no)/((λ + no)no) ) ] (4.54)
D = 2 − α( 2 + (λ + 2no)/√((λ + no)no) )
+ (α/M)[ 4(λ + 2no) + (λ + 2no)²/√((λ + no)no) + 4√((λ + no)no) ] (4.55)
E = λ + 2no − α( λ + 2no + 2√((λ + no)no) ) + (1/2)(α/M)[ (λ + 2no) + 2√((λ + no)no) ]². (4.56)
It is important to distinguish at this point between the SHA oscillator potential (in equation
(4.52)), defined on the n′ coordinate system, and the quartic oscillator potential, defined in β
coordinates (as seen in figure (4.12)). The SHA n′ coordinates are related to the su(1, 1) basis
states while the β coordinates relate to a component of the quadrupole deformation. Our
subsequent discussions will be focused mainly on the SHA oscillator potential defined on the n′
coordinate system.
In equations (4.53) and (4.54), a simple check indicates that if α/M is not too large, then the
kinetic and potential terms will not vanish and the model can be classified as P1 for α > 0.5.
When α → 0, the model can be classified as P3. Therefore, we expect the SHA to give accurate
predictions as long as α/M is not too large. In fact, we will confine our discussions to the domain
0 < α ≲ 2, which is of primary physical relevance. We will discuss a case in which α is relatively
large near the end of this chapter. As we have done for the LMG and CJH before, we will solve for
no first.
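In practice, solving D = 0 and then forming ω = 2√(AB) and σ = (A/B)^(1/4) is a one-dimensional root-finding problem. The sketch below implements equations (4.53)-(4.55) directly, using simple bisection with an assumed bracket for the positive root, and compares the root with the large-M estimate of equation (4.57) below:

```python
import math

def coeffs(n0, lam, alpha, M):
    """A, B, D of equations (4.53)-(4.55)."""
    g = (lam + n0) * n0
    r = math.sqrt(g)
    A = alpha * r - alpha / M * (2 * g + r * (lam + 2 * n0))
    B = (0.25 * alpha * r * lam**2 / g**2
         + alpha / M * (4 - r * ((lam + 2 * n0) * 0.25 * lam**2 / g**2
                                 - 2 * (lam + 2 * n0) / g)))
    D = (2 - alpha * (2 + (lam + 2 * n0) / r)
         + alpha / M * (4 * (lam + 2 * n0) + (lam + 2 * n0) ** 2 / r + 4 * r))
    return A, B, D

def solve_n0(lam, alpha, M, lo=1e-6, hi=1e4):
    """Bisect D(n0) = 0 for its positive root (bracket [lo, hi] assumed)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if coeffs(lo, lam, alpha, M)[2] * coeffs(mid, lam, alpha, M)[2] <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam, alpha, M = 4.5, 2.0, 100.0              # nu = 2; illustrative alpha, M
n0 = solve_n0(lam, alpha, M)
A, B, _ = coeffs(n0, lam, alpha, M)
omega = 2 * math.sqrt(A * B)                 # excitation energy
sigma = (A / B) ** 0.25                      # spread
print(n0, omega, sigma)
# Large-M estimates for comparison: n0 ~ (M/4)(1 - 1/2a), omega ~ 2*sqrt(a - 1/2)
print(M / 4 * (1 - 1 / (2 * alpha)), 2 * math.sqrt(alpha - 0.5))
```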
The two limits of the 5-D quartic oscillator and some results
The minimum of the SHA oscillator potential is located at the coordinates (no, E). As mentioned
earlier, we shifted the origin from n = 0 to n = no in the n′ coordinate system. Therefore, the
value of no can be obtained by solving D = 0 (equation (4.55)). Over the range of parameters
of interest, the equation D = 0 has only one real positive root. We do not gain much physical
insight by just examining equation (4.55). However, if we restrict to the domain where no ≫ λ,
a simple expression for no can be obtained:
D ≈ 2 − 4α + (16α/M) no = 0
no ≈ (M/4)( 1 − 1/(2α) ). (4.57)
Since we require no to be positive, the above approximation requires α > αc = 0.5. In addition,
since no ≫ λ, α cannot be too close to the critical point αc. In equation (4.57), we see that
a large no implies a large M. Therefore, we are essentially dealing with the large-M limit.
Substituting no from equation (4.57) into equations (4.53)-(4.56) with no ≫ λ, we get
AL ≈ (M/8)[ 1 − 1/(2α) ] (4.58)
BL ≈ 8α/M (4.59)
EL ≈ (1 − α)λ − (M/4) (2α − 1)²/(2α). (4.60)
Therefore, an approximate excitation energy ωL and spread σL can be obtained:
ωL ≈ 2√( α(1 − 1/(2α)) ) (4.61)
σL ≈ [ (M²/64α)( 1 − 1/(2α) ) ]^(1/4). (4.62)
The approximate excitation energy ωL in the large-M limit is in agreement with the excitation
energy in the asymptotic limit predicted in [33]. With expressions for no and σ known, the matrix
representations of the su(1, 1) operators So and S± in this domain can be approximated
in the same way as is done in equations (3.46)-(3.49) for the su(2) operators. Using the matrix
representation of the ‘position’ (n) of the harmonic oscillator, we can write
n − no = n′ = (σL/√2)(a + a†) (4.63)
where a† and a are the raising and lowering operators of the harmonic oscillator. Using equation
(4.46), we get
So = (σL/√2)(a + a†) + ( (1/2)λ + no ). (4.64)
Likewise, we can also write
d/dn′ = (1/(√2 σL))(a − a†) (4.65)
and we can obtain the approximate representations of S± using equation (4.47),
S± ≈ √((λ + no)no) ( 1 + (1/2) (λ + 2no)/((λ + no)no) n′ − (1/8) λ²/((λ + no)no)² n′² )( 1 ∓ d/dn′ + (1/2)(d/dn′)² ).
These representations are good approximations of the exact matrix representations in the limit
where the contraction of the su(1, 1) algebra to the augmented hw(1) algebra is a good
approximation. The above matrix representations can also be used as first guesses if we solve
this problem using RRKK.
Next, we will make approximations in the α ≪ 0.5 domain for relatively large M. A
harmonic approximation can be made by leaving out the quartic contributions (in β coordinates)
in equation (4.28). Similarly, for equations (4.53)-(4.56), the quartic contributions can be easily
identified as those multiplied by the factor α/M. These terms will be left out in our
approximation here. For example, in this limit, no is given by the solution of the equation

    D ≈ 2 − α[2 + (λ + 2no)/√((λ + no)no)] = 0.    (4.66)
Solving the above equation, we get

    no ≈ (λ/2)[(1 − α)/√(1 − 2α) − 1].    (4.67)
Then, substituting the above expression of no into equations (4.53)-(4.56) and approximating
α/M → 0, we have

    AS ≈ α²λ/(2√(1 − 2α))    (4.68)

    BS ≈ 2(1 − 2α)^(3/2)/(α²λ)    (4.69)

    ES ≈ λ√(1 − 2α).    (4.70)
Therefore, an approximate excitation energy ωS and spread σS can be obtained:

    ωS ≈ 2√(1 − 2α)    (4.71)

    σS ≈ α√(λ/(2(1 − 2α))).    (4.72)
The excitation energy is the same as that derived using the RPA [33].
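The two branch expressions for the excitation energy can be collected into a small numerical sketch (Python; note that α(1 − 1/(2α)) simplifies to α − 1/2, so both branches vanish like a square root as α approaches the critical value αc = 0.5):

```python
from math import sqrt

def omega_sha(alpha):
    """SHA excitation energy: equation (4.71) for alpha < 0.5 and
    equation (4.61) for alpha > 0.5; independent of M in both domains."""
    if alpha < 0.5:
        return 2.0 * sqrt(1.0 - 2.0 * alpha)                 # omega_S, eq. (4.71)
    return 2.0 * sqrt(alpha * (1.0 - 1.0 / (2.0 * alpha)))   # omega_L, eq. (4.61)
```

Both branches give a soft mode at αc = 0.5, which is where figure (4.16) shows the largest discrepancy between the SHA and the exact results for small M.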
As mentioned earlier, the model can be classified as P3 in the limit when α→ 0. We check
the predictions by directly setting α = 0 in equations (4.53)-(4.56). However, in this approach,
D does not vanish when we set α = 0. (See equation (4.66).) Thus, H in equation (4.52)
trivially becomes

    Ho ≈ Dn′ + E = 2(n − no) + λ = 2n + λ    (4.73)

since no → 0 when α → 0. Here, with n = 0, 1, . . ., we obtain the spectrum in the no-interaction
limit. As for the critical point, at α = 0.5 where we have a pure quartic oscillator, we do not
have simple expressions for no, ω etc., but all the approximations needed can be obtained by
substituting α = 0.5 into equations (4.53)-(4.56). Alternatively, we can make use of the critical-
point scaling symmetry [70] at α = 0.5. This symmetry means that if the results are known for
one value of M, they can simply be inferred for any other M.
We will next compare the SHA predictions no, ω, Eo and σ (obtained using equations (4.53)-
(4.56)) with exact results. In the plots, we will use M = 100 and M = 1000 for 0 < α < 2,
with ν = 0 and ν = 4.
Figure 4.14 (left panel: M = 100; right panel: M = 1000): The mean values n̄ of the components
of the exact ground state eigenvectors compared with the means no of the SHA eigenfunction.
The exact results are indicated by the crosses (ν = 0) and circles (ν = 4) while the SHA
predicted ones by lines. The agreement between the exact and SHA results is good. A numerical
check indicates that the two quantities typically differ only in the first decimal place for α > 0.5.
Figure 4.15 (left panel: M = 100; right panel: M = 1000): The exact ground state energies are
indicated by the crosses (ν = 0) and circles (ν = 4) while the SHA predicted ones by lines. The
agreement between the exact and the SHA results is good.
We can see from figures (4.14)-(4.17) that the predictions of equations (4.53)-(4.56) are
generally in good agreement with the exact results. Thus, again the SHA expressions work well
even across the critical point, though this is where the discrepancy is the largest (especially for
small M). For example, see the excitation energy for M = 100 in figure (4.16).
Again, we note some interesting scaling properties that can be associated with the size of
the system. For example, as given in equations (4.61) and (4.71), the SHA predicts that the
excitation energy is independent of mass M . On the other hand, the SHA predicts that the
ground state energy is independent of M for α < 0.5 (see equation (4.70)) but varies with M
when α > 0.5 (see equation (4.60)). As we have seen, these SHA predictions concur with exact
results. Thus, we have another example of approximate scaling occurring away from the
critical point.
Approximation of the eigenvectors for small α
The accurate approximation of excitation energy ω, ground state energy E, the mean of the
eigenfunction no and its spread σ indicates that the approximations (A)-(C) are applicable to
the quartic oscillator model with an su(1, 1) spectrum generating algebra. We have also seen
in figure (4.13) that the approximation ain ≈ ψi(n) also applies when α > 0.5. However, a
Figure 4.16 (left panel: M = 100; right panel: M = 1000): The exact excitation energies
(E1 − Eo) are indicated by the crosses (ν = 0) and circles (ν = 4) while ω predicted by the
SHA is indicated by lines. The discrepancies between the exact results and those of the SHA
are largest at about α = 0.5, the critical value.
Figure 4.17 (left panel: M = 100; right panel: M = 1000): The spread σ of the components of
the exact ground state eigenvectors compared with those of the SHA. The exact results are
indicated by the crosses (ν = 0) and circles (ν = 4) while the SHA predicted ones by lines.
slight modification of the conditions is required since the basis label has a different range, i.e.
0 ≤ n < ∞. Now ψ(n) is evaluated at n = 0, 1, . . . , nmax where nmax ∼ no + 2σ. We will adapt
the conditions used in the LMG and CJH. For example, we expect the conditions σ > 1 and
no − 2σ > 0 to be fulfilled for the approximation aiλ,n ≈ ψiλ(n) to work well. As seen in figure
(4.17), for α ≪ 0.5, the SHA predicted spreads are small (σ < 1) and the SHA eigenfunctions
do not vanish at n = 0 because no − 2σ < 0. Previously for the CJH, we successfully
applied the binomial approximation (3.67) to obtain the components am when the Hamiltonian
is in the su(2) limit with σ < 1. In this model, for the domain α ≪ 0.5 where the Hamiltonian is
approximately an element of the su(1, 1) algebra, we will explore the possibility of adapting the
binomial approximation by making some modifications. For a binomial distribution B(N, p),
we now choose N to be a sufficiently large integer such that the binomial distribution becomes
independent of N. Having done that, we set p = no/N. The result obtained is shown in figure
(4.18). We can see that the binomial approximation (3.67), with this adaptation, gives accurate
results in the α ≪ 0.5 domain.
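The adapted binomial approximation can be sketched numerically. In the Python illustration below, the value no = 20 used in the test is hypothetical, not taken from the thesis; N is doubled until the distribution stops changing, which amounts to taking the Poisson limit of B(N, no/N):

```python
import math

def binomial_pmf(N, p, n):
    """B(N, p) evaluated at n, as in the binomial approximation (3.67)."""
    return math.comb(N, n) * p ** n * (1.0 - p) ** (N - n)

def adapted_binomial(no, n, tol=1e-6):
    """Adaptation for the su(1,1) case: increase N until B(N, no/N)
    becomes independent of N, then return its value at n."""
    N, prev = 1000, None
    while True:
        cur = binomial_pmf(N, no / N, n)
        if prev is not None and abs(cur - prev) < tol:
            return cur
        prev, N = cur, 2 * N
```

For narrow distributions (σ < 1) this reproduces the components an far better than the continuous SHA Gaussian, as figure (4.18) shows for M = 100, α = 0.2 and ν = 2.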
Figure 4.18: The components of the exact ground state eigenvector an are indicated by black
diamonds. The SHA wave function ψ(n) is given by the red curve. The crosses ('+') indicate
ψ(n) evaluated at the points n = 0, 1, . . .. In this example, M = 100, α = 0.2 and ν = 2. As
we can see, when the width of the distribution an is narrow, the continuous wave function ψ(n)
with the same mean and spread will not coincide with it. We apply the binomial distribution
(3.67) to estimate an. The results are indicated using diagonal crosses ('x').
Some Numerical Results
There is much we can gain from the knowledge of how no and σ vary with the parameters.
As the matrix representations of su(1, 1) operators are infinite dimensional, the eigenvalues of
su(1, 1) Hamiltonians are obtained by diagonalizing a truncated d× d Hamiltonian matrix. In
this procedure, d is gradually increased until the required eigenvalues converge to the accuracy
required. (In this section, we have used the term exact to mean results obtained this way.) The
quantities no and σ give us a good indication of d. In fact, the size of the Hamiltonian matrix
needed can be estimated as d ∼ no + tσ where t depends on the accuracy required for the
lowest few eigenvalues. For example, for the lowest ten states, we can set t ∼ 10.
much smaller than no, then we just need to use the set of basis states with no−tσ < n < no+tσ
to construct the Hamiltonian matrix for finding the eigenvalues. From equations (4.57) and
(4.62), we see that for α > 0.5, σ is indeed always smaller than no because no ∝ M and
σ ∝√M . In cases where d is large, diagonalizing a smaller 2 t σ × 2 t σ Hamiltonian matrix
will yield low-lying eigenvalues of similar accuracy. This technique we are suggesting here is
essentially ESHA(I). Defining exact results as those obtained by diagonalizing the Hamiltonian
matrix written in basis states n = 0 . . . no + tσ and ESHA(I) results as those obtained using
no − tσ < n < no + tσ, we plot a graph to see how the size of the matrix required varies over a
range of masses M with interaction α = 2 and ν = 0. Here, we set t = 10.
It is clear from figure (4.19) that ESHA(I) is more effective than conventional computation
techniques. In the table in figure (4.19), we show the six lowest eigenvalues for M = 100000,
α = 2.0 and ν = 0. These results are obtained by diagonalizing the 1800 × 1800 Hamiltonian
matrix using the basis states n = 17850 . . . 19650, identified by using SHA. These results are
checked against those obtained from diagonalizing an even larger Hamiltonian matrix and the
results are quoted to the precision the two agree. We have illustrated that the SHA and ESHA(I)
are relatively general techniques which are also applicable to this quartic oscillator with su(1, 1)
algebra. While ESHA(I) greatly reduced the computational resources required, there exists
more effective methods, such as the Algebraic Collective Model [69], which has been designed
to obtaining accurate eigenvalues for such models using just a small number of optimized basis
states.
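The ESHA(I) window selection can be illustrated on a toy model. In the Python sketch below, the tridiagonal Hamiltonian is a discretized oscillator centred at a hypothetical no = 500 with spread σ ≈ 2.2; it stands in for the actual 5DQO matrix, which is not reproduced here. The lowest eigenvalues of the full matrix are compared with those of the submatrix built from the window no − tσ < n < no + tσ:

```python
def count_eigs_below(diag, off, x):
    """Sturm-sequence count of eigenvalues of a symmetric tridiagonal
    matrix lying below x (via the LDL^T factorization of T - xI)."""
    count, d = 0, 1.0
    for i in range(len(diag)):
        d = diag[i] - x - (off[i - 1] ** 2 / d if i > 0 else 0.0)
        if d == 0.0:
            d = 1e-300          # avoid division by zero at exact eigenvalues
        if d < 0.0:
            count += 1
    return count

def lowest_eigs(diag, off, k):
    """The k smallest eigenvalues, each located by bisection
    inside the Gershgorin bounds."""
    r = max(abs(b) for b in off) if off else 0.0
    lo0, hi0 = min(diag) - 2 * r, max(diag) + 2 * r
    eigs = []
    for m in range(1, k + 1):
        lo, hi = lo0, hi0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if count_eigs_below(diag, off, mid) >= m:
                hi = mid
            else:
                lo = mid
        eigs.append(0.5 * (lo + hi))
    return eigs

# Toy 'shifted oscillator' in the n basis:
# H = c * (discrete Laplacian) + spring * (n - no)^2
no, c, spring, dim = 500, 1.0, 0.01, 1000
diag = [2.0 * c + spring * (n - no) ** 2 for n in range(dim)]
off = [-c] * (dim - 1)

# ESHA(I): keep only basis states within t*sigma of no
t = 10
sigma = (0.5 * (c / spring) ** 0.5) ** 0.5      # Gaussian width, about 2.2
nlo, nhi = int(no - t * sigma), int(no + t * sigma)
e_full = lowest_eigs(diag, off, 3)
e_win = lowest_eigs(diag[nlo:nhi + 1], off[nlo:nhi], 3)
```

For this example the windowed 46 × 46 matrix reproduces the lowest eigenvalues of the full 1000 × 1000 matrix essentially to machine precision, mirroring the savings reported in figure (4.19).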
So far, we have avoided ESHA(II) because the spreads σ are relatively small and ESHA(I)
can be used. In addition, we cannot predict when α/M will become too large, such that the
distribution aon starts to become non-Gaussian. We see an example of this type of eigenfunction
in figure (4.20) where M = 100, α = 60 and ν = 0.
Despite the mismatch in the SHA eigenfunction with the exact distribution, the SHA ground
state energies and excitation energies are unexpectedly similar to those of the exact values. In
Six lowest eigenvalues for α = 2.0, M = 100000 and ν = 0 obtained using ESHA(I):
-56248.7752451284
-56246.325775386
-56243.876325643
-56241.426895902
-56238.9774861626
-56236.5280964252

Figure 4.19: The estimated dimensions of the Hamiltonian matrix required using conventional
diagonalization and ESHA(I) to obtain exact results. The parameters are α = 2 with ν = 0
for M = 1 . . . 100000. The tabulated results are obtained by diagonalizing the 1800 × 1800
Hamiltonian matrix using the basis states n = 17850 . . . 19650.
this case, where α = 60,
Eo(Exact) = −2942.49, Eo(SHA) = −2949.43;
E1 − Eo(Exact) = 15.412, ω(SHA) = 15.427.
This can be attributed to the fact that when α/M is large, the Hamiltonian probably corresponds
to some other type of oscillator limit11. However, we will not pursue these possibilities further
but will explore the use of RRKK in this model instead.
but will explore the use of RRKK in this model instead.
Applying RRKK to the 5-D Quartic Oscillator
We briefly recall that in RRKK, the matrix elements of the operators (in this case, So
and S+) and the eigenvalues corresponding to the lowest N eigenstates are defined as un-
knowns to be solved. This set of unknowns is related by the su(1, 1) commutation relations,
Casimir equations and commutators taken between the Hamiltonian and the operators, i.e.
11The exact low-lying spectrum resembles that of an oscillator. The eigenfunction may involve the Laguerre polynomial.
Figure 4.20: The components of the ground state eigenvector aon for M = 100, α = 60.0 with
ν = 0 is shown. The exact results are indicated by blue crosses and connected by a dotted line.
The SHA eigenfunction is shown in the red line.
the equations-of-motion. The same technique for handling truncated finite representations (as
discussed in section 3.4) can be applied to this case where infinite-dimensional matrix repre-
sentations are truncated.
The Casimir invariant for su(1, 1), given by

    Cν[su(1, 1)] = S0(S0 − 1) − S+S−,    (4.74)

takes the value

    cν = (1/4)(ν + 5/2)(ν + 1/2).    (4.75)
With this, the commutation relations (4.31) and (4.32) and the equations-of-motion, we have
the complete set of equations needed.
We apply RRKK to a range of M and α with ν = 2 in the quartic oscillator model. Our
interest is confined to the lowest N = 10 eigenstates. The lowest m states (of the N = 10
used) achieving an accuracy of at least eight decimal places are compared against the sizes of the
Hamiltonian matrices that need to be diagonalized to obtain the same accuracy. The results
are shown in table (4.5) (taken from [33]).
Minimization of F′ is carried out iteratively starting from a first guess. Such first guesses
can involve approximating the unknown matrix elements as those in the limit involving only
the quadratic (β) potential (for α ≪ 0.5) or the large M limit (for α > 0.5) mentioned earlier.
α      M = 5    M = 50   M = 500   M = 5000
0.2    7 (25)   8 (15)   9 (20)    9 (20)
0.5    7 (30)   7 (20)   7 (45)    7 (95)
0.7    7 (30)   6 (20)   6 (75)    7 (490)
1.0    7 (35)   5 (25)   7 (100)   8 (770)
2.0    7 (45)   5 (35)   7 (135)   8 (1065)
5.0    6 (65)   6 (60)   8 (175)   8 (1235)
10.0   6 (80)   6 (85)   8 (200)   8 (1295)
50.0   5 (150)  7 (180)  8 (310)   8 (> 1500)

Table 4.5: Values of m for the m × m submatrices, obtained with N = 10 for ν = 2, whose
elements are accurate to at least 8 decimal places. The numbers in brackets are the estimated
sizes of the Hamiltonian matrix required to achieve similar accuracy by diagonalization.
Subsequently, the solved unknowns can serve as initial guesses for other values of α and this
process can be iterated to the intended interaction α. We note here that the RRKK gives
accurate approximations in the domain where α ∼ M (see columns M = 5, 50 in table (4.5)).
This is where ESHA(I) is inadequate.
In the quartic oscillator model, we have illustrated how the SHA, ESHA(I) and RRKK
can be used to obtain accurate eigenvalues and matrix representations which would have been
computationally challenging to obtain by diagonalization. In this model, the domains where
diagonalization is computationally challenging may not be of physical interest. However, this
study has given us greater insight into the techniques, especially their applicability to models
with non-compact spectrum generating algebras. We have seen that the SHA and RRKK give
accurate approximations for models with either su(2) or su(1, 1) spectrum generating algebras
over a wide range of parameters. In the next chapter, we will deal with the multi-level pairing
model for a finite number of particles. In the pairing model, the dimension of the Hilbert space
that we generally deal with can be extremely large. We will see how the SHA can again reduce
the problem to something that is computationally manageable even on a laptop computer.
Chapter 5
The BCS Pairing Hamiltonian
5.1 Outline of the BCS Theory
Introduction
Kamerlingh Onnes discovered the phenomenon of superconductivity in 1911 shortly after he
liquefied Helium-4. This was before Quantum Mechanics was formulated in 1925-6. Subsequently,
it was also recognized that superconductivity in metals is related to superfluidity in
Helium-4. Both are uninhibited coherent flows of particles. However, in superconductivity, the
particles carry an electric charge while Helium-4 atoms are neutral. Another difference is that
Helium-4 atoms are bosons and the principles of Bose-Einstein condensation apply but elec-
trons are fermions, and a different type of coherent state is needed. Many attempts had been
made to explain superconductivity using Quantum Mechanics, but it was not until 1957 when
Bardeen, Cooper and Schrieffer (hereafter BCS) [5] successfully developed a theory to explain
the mechanism in which superconductivity emerges. The BCS theory also correctly explained
the thermodynamic and transport properties of superconductors.
One of the reasons it took so long to discover the mechanism for superconductivity is that
it involves the formation of bound pairs of electrons. This is counter-intuitive because the
Coulomb interaction between electrons is repulsive. In 1950, Frohlich [25] suggested that the
exchange of a phonon between two electrons via the metallic lattice can result in such an at-
tractive interaction. Essentially, an electron passing a region of the metallic lattice distorts
it and results in a net attractive force on the partner electron that follows. We note that in
ordinary superconductors the physical separation of these pairs is in fact orders of magnitude larger than
the average inter-electron spacing. Subsequently, Cooper [14] realized that pairs of electrons,
with approximately opposite momentum and opposite spin, slightly above the Fermi surface1
can form a bound state no matter how weak the attractive interaction is. Thus, at sufficiently
low temperatures when such electron-phonon-electron interactions are no longer interrupted
by thermal excitation, the so-called Cooper pairs are formed. This state of matter is energet-
ically more favoured compared to the normal metallic state. The idea of Cooper pairs was
instrumental in explaining superconductivity and it also formed the basis of the BCS theory.
The idea of fermions pairing is not restricted to electrons in metals. The pairing force
is also important in explaining the stability of atomic nuclei. In nuclear systems, the physical
separations of the pairs are limited to the confines of the nucleus, significantly shorter than those
in metallic superconductors. Bohr, Mottelson and Pines [10] suggested a possible connection
between the excitation spectra of the metallic superconductors and nuclear systems. The energy
gap associated with the breaking of a pair can be found in both types of spectra. In the nuclear
system, the effects of the pairing interaction are most easily studied in even-even nuclei in the
vicinity of doubly closed shells, which correspond to filled Fermi spheres. The core of such nuclei
can be treated as inert while pairs of nucleons with the same total angular momentum J in time-
reversed states (i.e. |J,M〉 and |J, M̄〉) just outside the closed shells form the superconducting
pair2.
The BCS Theory
The BCS Hamiltonian can be written in terms of creation and annihilation operators:

    H = Σk εk a†k ak − Σk,k′ Gk,k′ a†k′ a†k̄′ ak̄ ak.    (5.1)

Here, k and k′ denote the momenta of the particles. The interaction strength Gk,k′ is only
non-vanishing when the momenta are close to the Fermi surface. In condensed matter physics,
the k are linear momenta and k̄ = −k with opposite spin. In nuclear systems, k = (J,M) where
J are the angular momenta of the particles, M the z components, and k̄ = (J, M̄) are the time-
reversed states. The operators a†k and ak follow the anti-commutation relations

    {ak, a†k′} = δk,k′ ,    {ak, ak′} = {a†k, a†k′} = 0.    (5.2)
There is no known exact solution to the BCS Hamiltonian (5.1) and we have to resort to
approximate methods. The approximate many-body ground state wave function proposed by
1In our discussion in chapter 2, we mentioned that fermions deep within the Fermi surface have no available states to scatter to. See figure (2.2). This is one of the reasons why only pairing interactions near the Fermi surface contribute to the Hamiltonian.
2Applies similarly to pairs of holes in the vicinity of the closed shell.
Schrieffer for the BCS Hamiltonian is given by

    |ΦBCSo〉 = Πk>0 (uk + vk a†k a†k̄)|vac〉    (5.3)

where 'vac' denotes the zero-fermion vacuum and uk² + vk² = 1. The quantity vk² can be
interpreted as the probability that the pair state (k, k̄) is occupied. The expression above can
be used as a trial wave function in a variational approximation where the constants uk and vk
minimize the energy expectation value 〈ΦBCSo|H|ΦBCSo〉 of the system. However, we note that
the above wave function does not have a definite particle number, since it is a superposition of
n = 0, 2, 4, . . . particles. Therefore, to solve the problem, the energy expectation value has to
be minimized with a constraint such that the average number of pairs corresponds to a definite
value N:

    δ〈ΦBCSo|H − λN|ΦBCSo〉 = 0    (5.4)

where λ is the Lagrange multiplier. Physically, λ is the Fermi energy.
The BCS Theory and Quasi-spin Operators
If the single particle energy levels (essentially kinetic energies when there is no pairing
interaction) are degenerate, the BCS formalism can be simplified and the quasi-spin language
we used earlier can be adopted. In a metallic superconductor, we can use a simple model for the
distribution of the available states using the particles-in-a-box analysis mentioned earlier in
chapter 2. In this model, the states in momentum space will distribute in a Fermi sphere (see
figure (2.2)). It is possible for a relatively large kinetic energy |k|²/(2m) to have many combinations
of momenta (kx, ky, kz). Thus, near the Fermi surface, each energy level corresponding to a
particular kinetic energy can be highly degenerate. In the BCS theory, the Cooper pairs are
only formed very near the Fermi surface. Therefore, there are just a few levels below and above
the Fermi surface that we need to include in our model. Thus, in this treatment, we can think
of the BCS model as a multi-level model3. This is an extension of the two-level models (LMG
and CJH) we have seen in chapters 3 and 4. For the atomic nuclei, we can regard each angular
momentum Jp as corresponding to a single particle energy level p. With these considerations, it is
possible to express the BCS pairing Hamiltonian using quasi-spin operators as before. Following
3In ultra-small superconducting grains where the magnitude of the Fermi wave vector kF is similar to the dimensions of the system, the shape of the boundary of such grains will affect the distribution of levels. The levels are much closer and less degenerate than in our consideration here [51]. The level separations are approximated by δE ∼ 2π²ℏ²/(mkFV) where m is the mass of the electrons and V is the size of the grains.
Kerman et al. [36] and Anderson [3], we define

    Jp+ = (Jp−)† = Σkp>0 a†kp a†k̄p,    (5.5)

    Jpz = (1/2) Σkp>0 (a†kp akp − ak̄p a†k̄p)    (5.6)

where Jp+ creates a pair of particles with (angular) momenta (kp, k̄p) at some single particle
energy level labelled p. (All (angular) momenta (kp, k̄p) at level p have the same magnitude.)
The quasi-spin operators belong to an su(2)1 ⊕ su(2)2 ⊕ . . . ⊕ su(2)r algebra with commutation
relations

    [Jp+, Jq−] = 2Jpz δpq    (5.7)

    [Jpz, Jq±] = ±Jp± δpq.    (5.8)
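These definitions and commutation relations can be checked numerically for a minimal system of two levels carrying one (kp, k̄p) pair of modes each. The Python sketch below uses a Jordan-Wigner construction on a 16-dimensional Fock space; the mode ordering is a choice of this illustration, not of the thesis:

```python
from itertools import product

def a_matrix(M, i):
    """Annihilation operator a_i on the 2^M-dimensional Fock space,
    with Jordan-Wigner phase (-1)^(n_0 + ... + n_{i-1})."""
    dim = 2 ** M
    A = [[0.0] * dim for _ in range(dim)]
    for idx, occ in enumerate(product((0, 1), repeat=M)):
        if occ[i]:
            sign = -1.0 if sum(occ[:i]) % 2 else 1.0
            new = list(occ)
            new[i] = 0
            jdx = sum(b << (M - 1 - pos) for pos, b in enumerate(new))
            A[jdx][idx] = sign
    return A

def mul(A, B):
    return [[sum(x * B[k][j] for k, x in enumerate(row))
             for j in range(len(B[0]))] for row in A]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def dag(A):                       # real matrices, so dagger = transpose
    return [list(col) for col in zip(*A)]

def comm(A, B):
    return sub(mul(A, B), mul(B, A))

def maxabs(A):
    return max(abs(x) for row in A for x in row)

M = 4                             # modes 0,1 = (k1, k1bar); modes 2,3 = (k2, k2bar)
a = [a_matrix(M, i) for i in range(M)]
ad = [dag(x) for x in a]

def J_plus(p):                    # equation (5.5)
    return mul(ad[2 * p], ad[2 * p + 1])

def J_z(p):                       # equation (5.6)
    half = sub(mul(ad[2 * p], a[2 * p]), mul(a[2 * p + 1], ad[2 * p + 1]))
    return [[0.5 * x for x in row] for row in half]
```

All of (5.7)-(5.8) hold to machine precision, including the vanishing of commutators between different levels.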
Thus, the BCS pairing Hamiltonian (5.1) can also be written as

    H = Σp=1..r εp np − Σp,q Gpq Jp+ Jq−.    (5.9)
The particle number operator, for level p with single particle energy εp, is given by np =
2(Jpz + jp). The operator Jp+Jq− scatters a pair of particles from level q to level p and Gpq is the
corresponding interaction strength4. Without loss of generality, we will assume that there are
no unpaired particles present in the system. Thus, the number of pairs will always be half the
number of particles. Therefore, we can give a physical interpretation to every su(2) irreducible
representation |jp,mp〉 in terms of the occupancy of level p. A state without pairs corresponds
to |jp,mp = −jp〉, and the quasi-spin projection quantum number mp increases with every
added pair to reach the value mp = jp when the level is completely filled. For an r-level pairing
model, we can construct a basis |m〉 = |m1,m2, . . . ,mr〉 (the quantum numbers jp have been
omitted for brevity). The zero-particle vacuum in this notation is thus |−j1,−j2, . . . ,−jr〉.
An outline of the important steps for solving the BCS Hamiltonian will be given so that a
comparison with the SHA approach can be made subsequently. Regrouping the terms in the
BCS wave function given in equation (5.3), we can write
    |ΦBCSo〉 = Πp=1..r Πip=1..2jp (up + vp a†p,ip a†p,īp)|vac〉.    (5.10)
Without going through the details, we write down the expectation value of H − λN:

    〈ΦBCSo|H − λN|ΦBCSo〉 = Σp 2jp vp² [2(εp − λ) − Gpp vp²] − 4 Σpq Gpq jp jq up vp uq vq.    (5.11)
4Here, we have assumed Gpq to be the same for all scatterings from k at level q to k′ at level p. This simplification is less restrictive than in integrable models where we would require the scattering between any two levels to be fixed, i.e. Gpq = G.
The quartic term 2jpGpp vp⁴ is often dropped because it is not very important and its effects are
only limited to adjustments of the single particle energy levels. Its inclusion complicates
the treatment without giving any additional physical insight. We will omit it for the time
being, until the section where we compare the BCS theory and the SHA approach. Minimizing
equation (5.11) with respect to vq gives us an expression in terms of the parameters of the
model with vp and up (= √(1 − vp²)):

    2(εp − λ) up vp − Σq 2jq Gpq (up² − vp²) uq vq = 0.    (5.12)
The above equation can be simplified by using

    up = cos θp,    vp = sin θp    (5.13)

where 0 ≤ θp ≤ π/2 such that up, vp > 0. Thus, the variational equations (5.12) can be written
as

    (εp − λ) tan 2θp = Σq jq Gpq sin 2θq.    (5.14)

Defining the gap energy for level p as

    Δp = Σq Gpq jq sin 2θq = 2 Σq Gpq jq uq vq,    (5.15)

we can write equation (5.14) as

    tan 2θp = Δp/(εp − λ).    (5.16)
It follows that

    cos 2θp = (εp − λ)/√((εp − λ)² + Δp²)    (5.17)

    sin 2θp = Δp/√((εp − λ)² + Δp²).    (5.18)

Therefore, equation (5.15) can be written as

    Δp = Σq Gpq jq Δq/√((εq − λ)² + Δq²).    (5.19)
The approximate solution of the BCS Hamiltonian can be obtained by solving for {Δp, λ} using
the above set of equations simultaneously with

    N = 〈ΦBCSo|N|ΦBCSo〉 ≈ Σq jq [1 − (εq − λ)/√((εq − λ)² + Δq²)]    (5.20)
where the mean number of pairs is given by N. With {Δp, λ} known, we can easily obtain vp
using equation (5.16). The approximate mean ground state energy is given by

    〈ΦBCSo|E|ΦBCSo〉 = Σq 4jq|vq|² εq − Σpq Gpq jp jq (Δp/Ep)(Δq/Eq)    (5.21)

where we define the so-called quasi-particle energy as

    Ep = √((εp − λ)² + Δp²).    (5.22)
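The coupled system (5.19)-(5.20) can be solved by simple iteration. The sketch below is an illustrative Python implementation; the damping factor, bisection bounds and the single degenerate-level test case are choices of this sketch, not taken from the thesis:

```python
import math

def pair_number(lam, eps, j, Delta):
    """Mean number of pairs, equation (5.20)."""
    return sum(jq * (1.0 - (eq - lam) / math.hypot(eq - lam, dq))
               for eq, jq, dq in zip(eps, j, Delta))

def solve_bcs(eps, j, G, N, tol=1e-12, max_iter=5000):
    """Solve the gap equations (5.19) self-consistently with the pair-number
    constraint (5.20) for {Delta_p, lambda}.  eps, j are length-r sequences
    and G is the r x r matrix of strengths G_pq."""
    r = len(eps)
    Delta, lam = [1.0] * r, sum(eps) / r
    for _ in range(max_iter):
        # Fix lambda by bisection: pair_number is monotone increasing in lambda.
        span = 10.0 * (max(eps) - min(eps) + max(Delta) + 1.0)
        lo, hi = min(eps) - span, max(eps) + span
        for _ in range(200):
            lam = 0.5 * (lo + hi)
            if pair_number(lam, eps, j, Delta) < N:
                lo = lam
            else:
                hi = lam
        # One fixed-point update of the gaps, equation (5.19), with damping.
        E = [math.hypot(eps[q] - lam, Delta[q]) for q in range(r)]
        new = [sum(G[p][q] * j[q] * Delta[q] / E[q] for q in range(r))
               for p in range(r)]
        if max(abs(x - y) for x, y in zip(new, Delta)) < tol:
            return new, lam
        Delta = [0.5 * (x + y) for x, y in zip(new, Delta)]
    return Delta, lam

def ground_state_energy(eps, j, G, Delta, lam):
    """Approximate mean ground state energy, equation (5.21)."""
    r = len(eps)
    E = [math.hypot(eps[p] - lam, Delta[p]) for p in range(r)]
    v2 = [0.5 * (1.0 - (eps[p] - lam) / E[p]) for p in range(r)]
    pairing = sum(G[p][q] * j[p] * j[q] * (Delta[p] / E[p]) * (Delta[q] / E[q])
                  for p in range(r) for q in range(r))
    return sum(4.0 * j[p] * v2[p] * eps[p] for p in range(r)) - pairing
```

For a single degenerate level (ε = 0, quasi-spin j, constant G), equation (5.19) gives Ep = Gj, so Δ = Gj at half filling (λ = 0); the solver reproduces this limit.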
The BCS treatment of metallic superconductors, which involve many electrons with weak pair-
ing interactions, often works well. The approximate ground state BCS wave function, where
the pair number is allowed to fluctuate about a fixed mean value N , also correctly predicts
many physical properties. This pair number fluctuation is not a major concern in the statisti-
cal limit but needs to be taken care of when the pair number is small. For example, the spread
(and hence error) associated with the fluctuation of a system of N pairs is of the order √N.
Thus, the fractional error 1/√N is largest for systems with small pair number. Quantities such as
the mean ground state energy in equation (5.21) are dependent on this fluctuation [65]. Other
quantities such as vq and hence the gap energy ∆q are not affected by this fluctuation. For finite
systems such as the atomic nuclei, the treatment involving the ground state BCS wave function
does not always work well. Often, the eigenvalues of the BCS Hamiltonian are obtained by
diagonalization if it is tractable.
Such finite systems can also be found in metallic superconductors. Since the 1960s, metallic
superconducting nanograins have been fabricated [27]. In the 1990s, techniques were developed
to measure their spectroscopic properties [6], [1], [77]. These grains are small enough
to be considered as finite systems where diagonalization can be applied directly to obtain the
eigen-energies. However, the Hilbert space involved can be large (as we will see later), making
diagonalization computationally challenging. This revived interest in the so-called Richardson-
Gaudin method [54], [55], [56] which is based on an algebraic Bethe ansatz. The pairing model
is integrable and exactly solvable if the magnitude of the pairing interaction is the same for all
pairs, i.e. Gpq = G. However, pairing models with general interaction strengths cease to be
integrable, leaving only a limited number of exactly solvable special cases [20], [21]. Moreover,
in order to compute the associated eigenvalues and Bethe ansatz eigenfunctions, one needs to
solve the computationally challenging Richardson-Gaudin equations [59]. In contrast to the
original BCS wave function and the Richardson-Gaudin methods, the SHA allows for a number-
conserving treatment of pairing models with general interaction strengths, while keeping the
computational effort minimal.
On Hilbert Spaces and their Dimensions
The eigenstates of the pairing Hamiltonian in equation (5.9), corresponding to pair number
N, are linear combinations of basis states |m〉 = |m1,m2, . . . ,mr〉 in the Hilbert space. The
maximum possible number of pairs Nmax in a model is given by

    Nmax = 2 Σp jp.    (5.23)

For a system with N pairs, the components of |m〉 are related such that

    Σp=1..r mp = MT = N − (1/2)Nmax.    (5.24)

If we plot the set of allowed basis states |m〉 as points on a {m1,m2, . . . ,mr} grid, these
points will lie on an (r − 1)-dimensional hyperplane with unit normal vector u = (1/√r)[1, 1, . . . , 1]T.
See figure (5.1) for an example in a 2-level system.
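Constructing this basis explicitly is straightforward. A Python sketch (the doubled quantum numbers 2mp are an implementation choice so that half-integer jp stay integral, not thesis notation):

```python
from itertools import product

def basis_states(j_list, N):
    """All |m> = |m1,...,mr> with sum_p m_p = M_T = N - Nmax/2 and
    -j_p <= m_p <= j_p, equation (5.24).  Works in doubled units 2*m_p."""
    two_j = [int(round(2 * j)) for j in j_list]
    MT2 = 2 * N - sum(two_j)              # 2*M_T, using Nmax = sum(2*j_p)
    return [tuple(m / 2.0 for m in two_m)
            for two_m in product(*[range(-tj, tj + 1, 2) for tj in two_j])
            if sum(two_m) == MT2]
```

The length of the returned list is the Hilbert-space dimension; e.g. for jp = (2, 2) and N = 4 (half filling) there are 5 states, all lying on the MT = 0 hyperplane.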
It is useful to know the dimensions of the N-pair Hilbert spaces before diagonalizing the
Hamiltonian matrix. On a 32-bit system with 2 gigabytes of memory, a numerical computing
system such as Matlab can diagonalize a matrix with dimensions of a few thousand. Counting
the number of basis states in a multi-level problem with fixed pair number N is essentially a
combinatorial problem5:
finding the number of sets {m1, . . . ,mr} where m1 + . . . + mr = MT with −jp ≤ mp ≤ jp, ∀p.
If r = 2, we can exhaust all possible combinations (m1,m2) by defining the following
generating function,

    R = (t⁰ + t¹ + . . . + t^(2j1))(t⁰ + t¹ + . . . + t^(2j2))    (5.25)

      = Σi=0..2j1+2j2 di tⁱ.    (5.26)

When the expression R is evaluated in polynomial form in the second line, it can be verified
that the coefficient dN gives the number of ways N can be obtained: this gives the dimension
of the Hilbert space for a system with N pairs.
This simple technique can be generalized to an r-level system by redefining

    R = Πp=1..r (Σμ=0..2jp t^μ) = Σi=0..2(j1+...+jr) di tⁱ    (5.27)

and extracting the coefficient of a selected t^N as before gives the dimension of the Hilbert
space. Such a mathematical operation can be performed on a computer algebra system such
5The author acknowledges a combinatorics tutorial given by Dr. T. Welsh.
Configurations (j1, j2, . . . , jr)      Nmax    N       Dimension, d
(100, 100)                              400     200     201
(50, 60, 70)                            360     60      1891
                                                180     10581
                                                300     1891
(50, 60, 70, 80)                        520     60      39711
                                                260     1379761
                                                500     1771
(jp = 2, 1 ≤ p ≤ 12)                    48      8       71214
                                                24      19611175
                                                44      1365
(5, 4, 3, 1, 7)                         22      2       20
(configuration for pfh-shell)                   11      622
                                                16      256

Table 5.1: The dimensions of the Hilbert space for various configurations. When the system is
half-filled, i.e. N = (1/2)Nmax, the dimensions are maximal. The dimension of a Hilbert space
depends both on the maximum pair occupancy 2jp of each level p and the total number of levels
r. The last row contains the number of orbitals in the pfh-shell of the atomic nucleus.
as Maple. In table (5.1), we show some typical dimensions of the Hilbert spaces of the BCS
pairing Hamiltonian, expressed in this coupled su(2) scheme.
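The coefficient extraction in (5.27) does not actually require a computer algebra system; integer polynomial multiplication suffices. A Python sketch (the helper name is hypothetical, not from the thesis):

```python
def hilbert_dim(two_j, N):
    """Dimension of the N-pair Hilbert space: the coefficient of t^N in
    prod_p (1 + t + ... + t^(2*j_p)), equation (5.27).  two_j lists the
    maximum pair occupancies 2*j_p of the levels."""
    poly = [1]                              # polynomial in t, lowest order first
    for cap in two_j:
        new = [0] * (len(poly) + cap)
        for i, coeff in enumerate(poly):    # multiply by (1 + t + ... + t^cap)
            for mu in range(cap + 1):
                new[i + mu] += coeff
        poly = new
    return poly[N] if 0 <= N < len(poly) else 0
```

This reproduces the entries of table (5.1), e.g. d = 201 for (j1, j2) = (100, 100) with N = 200 and d = 10581 for (50, 60, 70) with N = 180.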
We see from table (5.1) that the dimension of the Hilbert space depends both on the
maximum pair occupancy 2jp of each level p and the total number of levels r. The last row of table
(5.1) contains the number of orbitals in the pfh-shell of the atomic nucleus. This can be used
to estimate the dimensions of some nuclear pairing models. For example, we can regard ²¹²₈₂Pb
as a system of 2 pairs of neutrons outside the ²⁰⁸₈₂Pb inert core. (The ²⁰⁸₈₂Pb nucleus is doubly
closed.) For low-lying excitations, it suffices for present purposes to consider just the five levels
and their respective occupancies (from table (5.1)) available for these 2 pairs. The dimension of
the Hilbert space for this simple model is just 20 as seen from the table. However, if the core
is not treated as inert, d will be enormous.
The same technique for computing the dimensions of the Hilbert space can in fact be used to
calculate the exact degeneracies of the levels within the Fermi sphere of particles-in-box systems6.
Generally, however, the degeneracy and separation of the levels are treated more simplistically,
as mentioned in footnote (3).
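The degeneracy count described in footnote (6) can be sketched as follows. The implementation and names are ours; it expands the generating function (5.28) for the simple cubic momentum lattice described there, with the factor of 2 inside each bracket allowing ±n along each axis and the overall pre-factor of 2 accounting for spin.

```python
# Sketch of footnote (6): degeneracies d_{eta^2} from the generating
# function (5.28).  Each Cartesian direction contributes the polynomial
# 1 + 2*sum_z r^{z^2}  (n = 0 and +/-z), and the overall factor of 2
# accounts for the two spin states.  `fermi_degeneracies` is our name.
from collections import Counter

def fermi_degeneracies(zmax):
    """Return {eta^2: degeneracy} for integer triples (nx, ny, nz), |n_i| <= zmax."""
    axis = Counter({0: 1})                 # one direction: 1 + 2*sum_z r^{z^2}
    for z in range(1, zmax + 1):
        axis[z * z] += 2                   # +z and -z carry the same z^2
    poly = Counter({0: 1})
    for _ in range(3):                     # cube it: three Cartesian directions
        new = Counter()
        for e1, c1 in poly.items():
            for e2, c2 in axis.items():
                new[e1 + e2] += c1 * c2
        poly = new
    return {eta2: 2 * c for eta2, c in poly.items()}   # pre-factor 2: spin

d = fermi_degeneracies(5)
print(d[1], d[2])   # 12 24: six and twelve lattice points, doubled for spin
```

Summing the coefficients cumulatively up to the Fermi level then reproduces the total particle number at 0 K, as stated in the footnote.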
5.2 Application of the SHA to the BCS Hamiltonian
Applying the SHA
As we have seen, the pairing model has an su(2)_1 ⊕ su(2)_2 ⊕ . . . ⊕ su(2)_r spectrum generating
algebra. Therefore, we expect that the SHA for this model will be an r-dimensional oscillator.
For a multi-dimensional oscillator, we extend our earlier definition of m = jx to m_p = j_p x_p
where 1 ≤ p ≤ r. Likewise, we introduce the vector x which corresponds to |m⟩. To apply
the SHA to the pairing Hamiltonian (5.9), we first map the Hamiltonian from the discrete |m⟩
basis to the continuous x basis (approximation (A)):

⟨m|H|Φ_i⟩ → HΨ_i(x) = ∑_{p=1}^{r} 2ε_p j_p (x_p + 1) Ψ_i(x) − ∑_{p=1}^{r} G_pp j_p² M₊(x_p) M₋(x_p − 1/j_p) Ψ_i(x)
                      − ∑_{p≠q}^{r} G_pq j_p j_q M₊(x_p) M₋(x_q) exp( −(1/j_p) ∂/∂x_p + (1/j_q) ∂/∂x_q ) Ψ_i(x)    (5.29)
where M_±(x_r) = √( (1 ∓ x_r + 1/j_r)(1 ± x_r) ). We note that M₋(x_p − 1/j_p) = M₊(x_p). Extending
equations (3.11) and (3.12) to the multi-dimensional case, we get
J_z^p Ψ_i(x′) = j_p (x′_p + x_op) Ψ_i(x′)    (5.30)

J_±^p Ψ_i(x′) ≈ j_p √(1 − x_op²) [ 1 − x_op x′_p/(1 − x_op²) − x′_p²/(2(1 − x_op²)²) ] [ 1 ∓ (1/j_p) ∂/∂x′_p + (1/(2j_p²)) ∂²/∂x′_p² ] Ψ_i(x′).    (5.31)
6The equation (5.27) can be easily modified for computing the degeneracy of each level in the vicinity of
the Fermi surface. In a simple non-interacting particles-in-box model, the momentum states within the Fermi
sphere can be labelled by triplets of integers (n_x, n_y, n_z) as shown in equation (2.12). States with the same
η² = n_x² + n_y² + n_z² belong to the same degenerate single particle energy level. Thus, equation (5.27) can be
modified such that

R_F = 2 ∏_{p=1}^{3} ( 1 + 2 ∑_{z=1}^{z_max} r^{z²} ) = ∑_{η²} d_{η²} r^{η²}    (5.28)

where provisions have also been made to allow particles to be assigned negative momentum. The term z_max
is determined by the radius of the Fermi sphere under consideration. The pre-factor of ‘2’ accounts for the
fact that each state can accommodate an ‘up’ and a ‘down’ spin. The coefficient d_{η²} of each r^{η²} gives the
degeneracy of the level with the kinetic energy associated with η². At 0 K, the cumulative sum of these
coefficients gives the total number of particles if the level at the Fermi surface is fully filled.
Next, we assume the low-lying eigenfunctions Ψ_i(x) to be sufficiently smooth around a point
x_o that higher derivatives beyond the second order can be dropped. As before, we expand
equation (5.29) up to bilinear terms in x′_p (= x_p − x_op) and ∂/∂x′_p to get
H ≈ H_SHA = ∑_{p,q}^{r} [ −(1/j_p) ∂/∂x′_p A_pq (1/j_q) ∂/∂x′_q + j_p x′_p B_pq j_q x′_q ] + ∑_{p}^{r} D_p j_p x′_p + E,    (5.32)
which is essentially the Hamiltonian for an r-dimensional shifted coupled harmonic oscillator.
Here, we have applied approximations (B) and (C). The SHA Hamiltonian contains an inverse
mass tensor A, a spring constant tensor B, a set of shifts D_p and a constant E. Their
components, defined in terms of κ_t = (1 − x_ot²) and T_pq = G_pq j_p j_q √(κ_p κ_q), are given by
A_pq = δ_pq ∑_{s}^{r} T_ps − T_pq,    (5.33)

B_pq = δ_pq ( ∑_{s}^{r} T_ps ) / (j_p² κ_p²) − T_pq x_op x_oq / (j_p j_q κ_p κ_q),    (5.34)

D_p = 2ε_p − G_pp + (2x_op / (j_p κ_p)) ∑_{s}^{r} T_ps,    (5.35)

E = ∑_{p}^{r} j_p (2ε_p − G_pp)(1 + x_op) − ∑_{p,q}^{r} T_pq.    (5.36)
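Equations (5.33)-(5.36) are straightforward to assemble numerically. The sketch below is our own (the function `sha_tensors` is not from the text): it builds A, B, D and E from a trial set {x_op} and checks the exact zero eigenvalue of A along [1, 1, . . . , 1], which is discussed in the next subsection as the spurious direction.

```python
# Numerical sketch of equations (5.33)-(5.36).  numpy only; the variable
# names follow the text, the function and parameters are ours.
import numpy as np

def sha_tensors(eps, j, G, xo):
    """Inverse mass A, spring tensor B, shifts D, constant E; eqs (5.33)-(5.36)."""
    eps, j, xo = np.asarray(eps), np.asarray(j), np.asarray(xo)
    kappa = 1.0 - xo**2                                    # kappa_t = 1 - x_ot^2
    T = G * np.outer(j, j) * np.sqrt(np.outer(kappa, kappa))
    rowsum = T.sum(axis=1)
    A = np.diag(rowsum) - T                                          # (5.33)
    B = np.diag(rowsum / (j**2 * kappa**2)) \
        - T * np.outer(xo / (j * kappa), xo / (j * kappa))           # (5.34)
    D = 2 * eps - np.diag(G) + 2 * xo / (j * kappa) * rowsum         # (5.35)
    E = np.sum(j * (2 * eps - np.diag(G)) * (1 + xo)) - T.sum()      # (5.36)
    return A, B, D, E

eps = np.array([0.5, 2.3, 6.1, 7.3])
j = np.array([7.0, 8.0, 9.0, 10.0])
G = 0.2 * (2.0 - 0.1 * np.abs(eps[:, None] - eps[None, :]))   # rule (5.49), g = 0.2
A, B, D, E = sha_tensors(eps, j, G, np.zeros(4))
print(np.allclose(A @ np.ones(4), 0.0))   # True: row sums of A vanish exactly
```

Because each row of A is the row sum of T on the diagonal minus the row of T itself, A annihilates the uniform vector for any {x_op}, not just at the solution.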
Solving the SHA equations
The pairing Hamiltonian (5.9) is number conserving and this property is preserved by the
SHA. It can be readily verified that the SHA Hamiltonian (5.32) commutes with the number
operator N in the SHA description:

[ H_SHA, N ] = [ H_SHA, ∑_{p=1}^{r} j_p (x′_p + x_op + 1) ] = 0.    (5.37)
Thus, our earlier description using the {m_1, m_2, . . . , m_r} grid and the constant number (r − 1)-
dimensional hyperplane remains valid for the SHA Hamiltonian (5.32). Another important
observation is that the inverse mass tensor A has an exact eigenvalue of zero with eigenvector
in the direction normal to the hyperplane, i.e. u = (1/√r)[1, 1, . . . , 1]^T. We call this the spurious
direction as there is no dynamics associated with this direction when N is fixed. It follows
that the unitary transformation U_A that diagonalizes A will decouple H_SHA into the spurious
direction and the (r − 1)-dimensional hyperplane. This means that particle number is conserved
in the SHA and we can select a hyperplane by choosing N.
As before, the inverse mass tensor, spring constant tensor and ground state energy will be
functions of {x_op}. So, to proceed any further, we need to solve for {x_op}. If a pair number N is
selected for the Hamiltonian, our definition of the x′_p coordinate system will require the energy
minimum to be located at {x_op} on the selected hyperplane. Since N is fixed, the variables
{m′_p (= j_p x′_p)} are no longer linearly independent and are related by (see equation (5.24))

∑_{p}^{r} (m′_p + m_op) = N − ∑_{p}^{r} j_p = M_T.    (5.38)
From the above equation, we see that setting the two equations ∑_{p}^{r} m′_p = 0 and ∑_{p}^{r} m_op = M_T
is a convenient and logical choice. As before, we start solving the SHA equations from the
displacement term ∑_{p}^{r} D_p j_p x′_p. Geometrically, the equation ∑_{p}^{r} m′_p = 0 confines the problem
to a (r − 1)-dimensional hyperplane that contains the origin. Using the equation ∑_{p}^{r} m′_p = 0,
we can eliminate any one variable to make the remaining variables independent again. If, say,
the variable m′_r is eliminated by substituting m′_r = −∑_{p=1}^{r−1} m′_p into ∑_{p}^{r} D_p m′_p, we get
∑_{p=1}^{r} D_p m′_p = ∑_{p=1}^{r−1} D_p m′_p + D_r ( −∑_{p=1}^{r−1} m′_p ) = ∑_{p=1}^{r−1} (D_p − D_r) m′_p.    (5.39)
Since the variables {m′_p} are now independent on the hyperplane, we can use the r − 1 equations
(5.35)

D_p − D_r = 0 where 1 ≤ p ≤ r − 1.    (5.40)

Essentially,

D_1 = D_2 = . . . = D_r = Λ    (5.41)

where Λ is a constant. This result is independent of the variable we eliminated earlier using
∑_{p}^{r} m′_p = 0. The pair number N can be selected using
∑_{p=1}^{r} j_p x_op = N − (1/2) N_max = M_T.    (5.42)
Thus, the r unknowns {x_op} can be solved for using equations (5.40) and (5.42). Over the whole
range of parameters where real solutions exist, the solution satisfying |x_op| < 1 is always found
to be unique in the cases studied. With {x_op} known, we can proceed to diagonalize A and
obtain the transformation U_A that can be used to decouple the spurious direction from the
(r − 1)-dimensional hyperplane for the remaining terms in equation (5.32).
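As an illustration of this step, here is a minimal sketch of solving equations (5.40) and (5.42) for a 2-level model. The parameters are illustrative, not from the text; for these symmetric values the shift equation reduces analytically to −2 + 8x_o1 = 0, so the root is x_o1 = 0.25.

```python
# Minimal 2-level sketch of the SHA equations: the number constraint (5.42)
# eliminates x_o2, and D_1 - D_2 = 0 (equation (5.40)) is solved for x_o1
# by bisection.  Illustrative parameters, chosen by us.
import numpy as np

eps = np.array([1.0, 2.0])
j = np.array([10.0, 10.0])
G = 0.1 * np.ones((2, 2))                 # constant pairing strength
N = 20.0                                  # pairs; N_max = 2*(j1 + j2) = 40
MT = N - j.sum()                          # right-hand side of equation (5.42)

def shift_residual(x1):
    """D_1 - D_2 of equation (5.40), with x_o2 eliminated via (5.42)."""
    x2 = (MT - j[0] * x1) / j[1]
    xo = np.array([x1, x2])
    s = np.sqrt(1.0 - xo**2)
    cs = G @ (j * s)                      # sum_q G_pq j_q sqrt(1 - x_oq^2)
    D = 2 * eps - np.diag(G) + 2 * xo / s * cs     # shifts, equation (5.35)
    return D[0] - D[1]

lo, hi = -0.999, 0.999                    # bisect on x_o1, keeping |x_op| < 1
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if shift_residual(lo) * shift_residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x1 = 0.5 * (lo + hi)
x2 = (MT - j[0] * x1) / j[1]
print(round(x1, 6), round(x2, 6))         # 0.25 -0.25 for these parameters
```

For r > 2 the same structure carries over: r − 1 shift equations (5.40) plus the constraint (5.42) for the r unknowns, solved with any standard root-finder.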
From there, we can apply normal-mode theory7 to diagonalize both the transformed (r − 1)-dimensional
tensors A and B. Denoting {ξ_i} as the coordinates after all transformations, we have

H_SHA = ∑_{i=1}^{r−1} ( −α_i ∂²/∂ξ′_i² + β_i ξ′_i² ) + E,    (5.43)

where ξ′_i = ξ_i − ξ_oi and the eigenvalues are

E_{ν_i} = ∑_{i=1}^{r−1} (ν_i + 1/2) ω_i + E.    (5.44)

7We use the fact that A and B respectively act on canonically conjugate pairs, say ∂/∂y_p and y_p. The inverse
mass tensor A is first diagonalized and the corresponding unitary matrix is U_A. Then, the coordinates U_A^{−1} y_p
(less the spurious direction) are scaled such that the diagonal (r − 1)-dimensional A in H_SHA becomes a unit
matrix. Having done the scaling, the transformed U_A^{−1} B U_A is diagonalized. Finally, when both tensors
are diagonal, we unscale the coordinates so that the volume element on the hyperplane is preserved.
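The normal-mode procedure of footnote (7) can be sketched as follows. The implementation is our own; the decoupled check uses the standard oscillator relation ω_i = 2√(α_i β_i) for a Hamiltonian of the form −α ∂² + β ξ².

```python
# Sketch of footnote (7): diagonalize A with U_A, drop the spurious zero
# mode, rescale so A becomes the identity, then diagonalize the scaled B.
# The normal-mode frequencies are omega_i = 2*sqrt(lambda_i).  Ours.
import numpy as np

def normal_mode_frequencies(A, B, drop_zero=True):
    """Frequencies of H = -grad.A.grad + y.B.y via the footnote-7 steps."""
    a, UA = np.linalg.eigh(A)                 # step 1: diagonalize A with U_A
    if drop_zero:                             # discard the spurious direction
        keep = a > 1e-10 * a.max()
        a, UA = a[keep], UA[:, keep]
    S = np.sqrt(a)                            # step 2: scale so A -> unit matrix
    Bt = S[:, None] * (UA.T @ B @ UA) * S[None, :]
    lam = np.linalg.eigvalsh(Bt)              # step 3: diagonalize the scaled B
    return 2.0 * np.sqrt(np.maximum(lam, 0.0))

# Decoupled check: A = diag(alpha), B = diag(beta) must give 2*sqrt(alpha*beta).
alpha, beta = np.array([1.0, 2.0]), np.array([3.0, 0.5])
w = normal_mode_frequencies(np.diag(alpha), np.diag(beta), drop_zero=False)
print(np.allclose(np.sort(w), np.sort(2.0 * np.sqrt(alpha * beta))))   # True
```

Because the scaling is a congruence that leaves the (now unit) kinetic tensor invariant, the eigenvalues of the scaled B give the squared frequencies directly, without the final unscaling affecting them.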
Here, ω_i = 2√(α_i β_i) and {ν_i} are sets of non-negative integers. However, as we have mentioned in equation
(3.16) in chapter 3, it is more consistent to write

E_{ν_i} = ∑_{i=1}^{r−1} ν_i ω_i + E    (5.45)

since (1/2)∑_i ω_i is already incorporated in the ground state energy. The widths of the ground
state eigenfunction are σ_i = (α_i/β_i)^{1/4}. The set of eigenfunctions, which we will call the SHA basis
states, are

Ψ_ν(ξ′) = η ∏_{i=1}^{r−1} ( 2^{ν_i} σ_i ν_i! √π )^{−1/2} H_{ν_i}(ξ′_i/σ_i) e^{−(1/2)(ξ′_i/σ_i)²}    (5.46)
and H_{ν_i} are Hermite polynomials. We include the normalization factor η if Ψ_ν(ξ′) is evaluated
at discrete points (ξ′_1, . . . , ξ′_{r−1}) transformed from the original basis |m_1, . . . , m_r⟩. The
normalization factor η can be obtained from the discrete sum

∑_{ξ′_1,...,ξ′_{r−1}} |Ψ_ν(ξ′)|² = 1.    (5.47)
Each point on the hyperplane at which Ψ_ν(ξ′) is evaluated has a volume element of √r associated
with it. To understand this better, consider the r = 2 example in figure (5.1). Along each
constant-N hyperplane (which in this example is just a line), the points on ξ′_1 are spaced at an
interval of √2. We identify √2 as the width of each elemental strip of rectangular area that
contributes to the discrete Riemann sum in equation (5.47) that approximates, for example,
the area under the curve in (a) of figure (5.1). We can regard η² as the width, √2. The same
can be generalized8 to models with larger r. Thus, we can take η² as √r. In cases where the
eigenfunctions do not vanish at |m_p| = j_p, η has to be computed numerically as before. From
equation (5.47), the mean x̄_p is given by

x̄_p = ∑_{ξ′_1,...,ξ′_{r−1}} x_p |Ψ_ν(ξ′)|².    (5.48)
8We make use of the fact that for an r-level model, the spacing between hyperplanes is 1/√r. Thus, if the
volume associated with each point in r-dimensional space is 1, then the volume element of each point on the
(r − 1)-dimensional hyperplane is √r.
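A one-dimensional sketch of the discrete normalization (5.47) for r = 2 (grid spacing √2): evaluating the ν = 0 basis state of equation (5.46) on the grid recovers η² ≈ √2, as argued above. The helper `sha_basis` is ours; the physicists' Hermite polynomials come from numpy, and σ is an illustrative value.

```python
# Sketch of equations (5.46)-(5.47) in one hyperplane coordinate (r = 2):
# the eta-less Hermite-Gaussian factor, sampled on a grid of spacing
# sqrt(2), has a discrete norm of 1/sqrt(2), so eta^2 ~ sqrt(2).
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def sha_basis(nu, xi, sigma):
    """One factor of equation (5.46) without eta (1-D hyperplane case)."""
    c = np.zeros(nu + 1)
    c[nu] = 1.0                                   # select H_nu
    norm = (2.0**nu * sigma * math.factorial(nu) * np.sqrt(np.pi))**-0.5
    return norm * hermval(xi / sigma, c) * np.exp(-0.5 * (xi / sigma)**2)

sigma = 3.0
xi = np.arange(-40, 41) * np.sqrt(2.0)            # grid points spaced sqrt(2) apart
psi0 = sha_basis(0, xi, sigma)
eta2 = 1.0 / np.sum(psi0**2)                      # discrete normalization (5.47)
print(round(eta2 / np.sqrt(2.0), 6))              # 1.0: eta^2 is the sqrt(2) width
```

The agreement is essentially exact here because the eigenfunction vanishes well inside the grid; near |m_p| = j_p the text's numerical evaluation of η is needed instead.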
For large interaction, x̄_p has nearly the same value as x_op. Since x = m/j, the quantity x_op
gives us an indication of the most probable proportion of a level p that is filled. For example,
x_op = 0 implies that level p is most probably half filled on average. We note here that x_op is
confined to the interval −1 ≤ x_op ≤ 1 in the SHA, and the fractional occupancy |v_p|² in BCS
is confined to 0 ≤ |v_p|² ≤ 1. Thus, we can easily identify (x_op + 1)/2 as having the same physical
meaning as |v_p|² in BCS. Therefore, it is possible to compute the gap energy (5.15) using
{x_op}. A more detailed comparison will be done later. But first, we further illustrate the ideas
developed using the 2-level pairing model shown in figure (5.1).
Illustration using a 2-level Model and Accuracy Improvement Strategies
Figure 5.1: The components of the exact ground state eigenvectors (a_{m1,m2}) plotted on the
m1–m2 plane for (a) large and (b) near-critical interactions are indicated with dots. For (a)
and (b), N = 90. (c) shows the eigenvector for a different N. The solid and dashed lines are
the respective SHA eigenfunctions. The transformed ξ′ coordinates for (a) are also indicated.
The coordinate ξ′_sp indicates the spurious direction while ξ′_1 lies on the hyperplane, which in this
case is just a line. We also note that the spacing between the basis states is √2. This gives the
volume element if we do the discrete sum to find the area under curves. This volume element
generalizes to √r for an r-level model.
In figure (5.1), the ground state eigenfunctions of a model with (a) large interaction and (b)
near-critical interaction are plotted on the {m1, m2} grid. The set of points labeled as (c) refers
to the same model but with a different N. We show the transformed coordinates in (a): ξ′_sp, in
the spurious direction; and ξ′_1, along the constant-N hyperplane, which in this 2-level model
is just a line. In the SHA, the origin (O) of the ξ′-coordinates is located at the peak of
the SHA ground state eigenfunction. The coordinates of the origin are given by (m_o1, m_o2)
on the {m1, m2} grid. The point (x_o1 = m_o1/j_1, x_o2 = m_o2/j_2) is important in the evaluation
of the quantities A, B, D and E.
In figure (5.1), the SHA eigenfunction corresponding to a large interaction, indicated as (a),
fits the set of points am1,m2 obtained from exact results very well. The fits for the next few
low-lying states (not shown) are also very good. Thus, the approximate low-lying eigenvectors
of H can be constructed easily and a near diagonal, smaller Hamiltonian sub-matrix can be
obtained. Diagonalizing this sub-matrix gives improved results. This technique is ESHA(II) as
we have seen earlier.
For near-critical interactions, as in (b), the SHA eigenfunction does not fit as well as in (a).
However, the number of basis states that contribute significantly is very small. Therefore, we
just need to diagonalize the Hamiltonian sub-matrix built from the lowest subset of su(2) ⊕ su(2)
basis states, sorted in the no-interaction scheme. This technique is essentially ESHA(I).
In figure (5.2), we see an example of applying the binomial approximation (3.67) to estimate
the ground state eigenvector a_{m1,m2} corresponding to a Hamiltonian with a weak interaction.
The agreement with the exact results is good. However, further exploration will be required to
check whether the binomial approximation can be simply extended to a multinomial approximation
for models with r > 2. Next, we will see some numerical results from a 4-level model.
Some Results from a 4-level Model
As seen in eqs. (5.33)-(5.36), the quantities A, B, D and E depend on {x_op}. For example,
in figure (5.1), we see that the widths σ determined using {x_op} in both (a) and (b) are accurate.
This enables us to decide which technique to use for finding more accurate eigenvalues. We will
illustrate this using a sample 4-level model with j = [7, 8, 9, 10], ε = [0.5, 2.3, 6.1, 7.3] and
N = 28 pairs. It is reasonable to expect the pairing interaction to be larger when the single
particle energy levels are nearer. A simple arbitrary rule for G is used:

G_pq = g(2.0 − 0.1|ε_p − ε_q|)    (5.49)

where g controls the interaction strength. Exact results are obtained by diagonalizing the
3231 × 3231 Hamiltonian matrix.
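For reference, such an exact diagonalization can be sketched as follows. Equation (5.9) is not reproduced in this section, so we assume the standard quasispin form H = ∑_p 2ε_p(J_z^p + j_p) − ∑_{p,q} G_pq J_+^p J_-^q, which is consistent with the mapping (5.29); the function and the small 2-level toy parameters below are ours.

```python
# Exact-diagonalization sketch (ours), assuming the quasispin pairing form
# H = sum_p 2*eps_p*(J^z_p + j_p) - sum_{p,q} G_pq J^+_p J^-_q, consistent
# with the x-basis mapping (5.29).
import itertools
import math
import numpy as np

def build_pairing_H(eps, two_j, G, N):
    r = len(two_j)
    # basis: pair occupations (n_1, ..., n_r) with sum n_p = N, 0 <= n_p <= 2j_p
    basis = [n for n in itertools.product(*[range(c + 1) for c in two_j])
             if sum(n) == N]
    index = {n: a for a, n in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for a, n in enumerate(basis):
        # diagonal: 2 eps_p (m_p + j_p) = 2 eps_p n_p, plus the J+_p J-_p term
        H[a, a] += sum(2 * eps[p] * n[p] for p in range(r))
        H[a, a] -= sum(G[p][p] * n[p] * (two_j[p] - n[p] + 1) for p in range(r))
        # off-diagonal: J+_p J-_q moves one pair from level q to level p
        for p in range(r):
            for q in range(r):
                if p != q and n[q] >= 1 and n[p] < two_j[p]:
                    m = list(n); m[q] -= 1; m[p] += 1
                    amp = math.sqrt(n[q] * (two_j[q] - n[q] + 1)
                                    * (two_j[p] - n[p]) * (n[p] + 1))
                    H[index[tuple(m)], a] -= G[p][q] * amp
    return H, basis

# 2-level toy model: j = (2, 2), N = 4 pairs -> a 5-dimensional Hilbert space
H, basis = build_pairing_H([1.0, 2.0], [4, 4], 0.05 * np.ones((2, 2)), 4)
print(len(basis), np.allclose(H, H.T))   # 5 True

# dimension for the 4-level model of the text: j = [7, 8, 9, 10], N = 28
dim = sum(1 for n in itertools.product(range(15), range(17), range(19), range(21))
          if sum(n) == 28)
print(dim)   # 3231, the matrix size quoted above
```

Only the basis enumeration changes with the configuration; the matrix build itself is the same for any r.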
In figure (5.3), we see that the means {x_op} of the SHA eigenfunction are in good agreement
with the means {x̄_p} of the components of the exact ground state eigenvector for the whole range of
g tested. What is notable here is that the predictions {x_op} are accurate even though the quasi-
spins j_p and the pair number N are quite small. We expect the SHA to give accurate predictions
for other pairing model configurations. The profile of the curves in figure (5.3) indicates that
Figure 5.2: The components of the exact ground state eigenvector a_{m1,m2} are indicated by black
diamonds. The coordinates (m1, m2) (see figure (5.1)) are labelled by 1, 2, . . .. The diagonal
crosses (‘x’) indicate the approximate components of the eigenvector obtained using the binomial
approximation.
when the interaction strength g is small, only the low-lying single-particle energy levels are
occupied9 (x̄_1 ∼ 1.0 and x̄_2 ∼ 0.88) while the higher single-particle energy levels are empty
(x̄_3, x̄_4 ∼ −1.0). When the interaction strength g is larger, the fractional occupancies of the
4 levels become more similar.
To get a sense of the profile of the eigenfunction (equation (5.46)), the SHA predictions of
the spreads of the eigenfunctions also need to be accurate. The spreads given in figure (5.4)
are in the transformed coordinates ξ′ where both A and B are diagonal. For comparison, the
corresponding transformation is performed on the basis states before the spreads of the exact
eigenfunction are computed. Again, we see the SHA predicted spreads are in good agreement
with the exact ones. For g < 0.1, the spreads increase with increasing interaction strength, and in
the large interaction domain, the spreads level off. This suggests that ESHA(I) can be used
when g is small, up until diagonalization becomes intractable. In the large interaction
domain, we can use ESHA(II). Using x_oi and σ_i, we can have a geometric picture of how the
9In the g = 0 limit, with j = [7, 8, 9, 10] and N = 28, the first level is fully filled with 14 pairs and the second
level is filled with the remaining 14 pairs because it can accommodate a total of 16 pairs.
Figure 5.3: The crosses are the mean values xop of SHA ground state functions for each level.
The diamonds show the mean values xp of components of the exact ground state eigenvectors
for each level. The two results are in good agreement. We indicate three points of interest
(g = 0.010, 0.045, 0.200) in three different regions of interaction strengths.
eigenfunction is changing as the interaction changes. As g increases, the eigenfunction moves
away from the ‘boundaries’, mp = −jp, with spread increasing. In the large interaction domain,
we can deduce from figure (5.3) and figure (5.4) that the eigenfunction approaches a point on
the hyperplane with spreads nearly constant even as g increases10. This behaviour for the
4-level model is similar to what we have seen in the 2-level model in figure (5.1).
The lowest SHA predicted excitation energies ωi are compared against exact results in figure
(5.5). The lowest SHA excitation energies: ∆E{1,0,0}, ∆E{0,1,0} and ∆E{0,0,1}, are indicated using
crosses. When g is small, the values for ∆E{2,0,0} are included and indicated with red crosses.
The results at the three selected points (1) g = 0.010, (2) g = 0.045 and (3) g = 0.200 are
tabulated in table (5.2) and compared with exact ones. We see that the low-lying SHA excitation
energies are in good agreement with exact results. The largest discrepancies can be found in
the transition region (g ∼ 0.05). The SHA approximations of ωi for both points (1) and (3)
are accurate to about 1%. For point (2), the largest discrepancy with exact results is 15%.
The accuracy of these excitation energies can be improved using the accuracy improvement
10There is an interesting kink in two of the spreads σ_hyp at g ∼ 0.18, where the overall spread remains relatively fixed: the increase in spread in one of the directions is matched by a decrease in another. We can think of this as some form of ‘rotation’ of the multi-dimensional eigenfunction in the hyperplane. The projected cross section of the eigenfunction is ellipsoidal, and a rotation causes an increase in spread in one direction with a corresponding reduction in the other.
Figure 5.4: The crosses are the spreads σ of the components of the SHA ground state eigenvectors
for each level. The squares show the same quantities obtained from exact results, and the lines
connecting them are included to guide the eye. The set of three points/lines labelled σ_hyp
refers to the spreads of the eigenfunction on the hyperplane. The set labelled σ_sp is the spread
in the spurious direction; it vanishes, as predicted, since the pair number associated with the
eigenfunction is exact.
techniques, ESHA(I) and ESHA(II). We have applied ESHA(I) for (1) and (2) using the lowest
50 and 300 su(2)_1 ⊕ . . . ⊕ su(2)_4 basis states respectively. For (3), we have used ESHA(II) with the
lowest 286 SHA basis states. This is small in contrast to the 3231 basis states needed for the full
Hilbert space. The number of basis states used in both techniques can be gradually increased
to check for convergence. The reduction in the size of the matrix we need to diagonalize will
be even greater for configurations with larger pair number N and quasi-spins j_p. We see in
table (5.2) that significant improvements have been achieved for all three points. All results
are now within 0.05% of the exact ones. From table (5.2), we note that the eigenvalues are also
clustered according to the degeneracy of a 3-dimensional oscillator (in groups of 1, 3, 6, etc.)
when the interaction is large.

We have seen that the SHA is a versatile and accurate approximation technique, extracting
the most important information from an r-level model with a relatively large Hilbert space.
Essentially, we only need to solve a system of r equations and subsequently diagonalize two
r × r matrices, regardless of the dimension of the Hilbert space. If more accurate eigenvalues
are required, we can use ESHA(I) and ESHA(II). As we can see, the use of ESHA(I) will be
Figure 5.5: The crosses are the SHA predicted excitation energies ω_i for the states {1, 0, 0}, {0, 1, 0}
and {0, 0, 1}; see equation (5.44). The crosses marked red are the oscillator states {2, 0, 0}. The diamonds show
the same quantities obtained from exact results.
(1) g = 0.010 (2) g = 0.045 (3) g = 0.200
ESHA(I) ESHA(I) ESHA(II)
77.1767459 [.1767459] 69.1912 [.1911] -166.430 [.430]
80.8270588 [.8270586] 72.7942 [.7928] -146.090 [.094]
84.1251463 [.1251415] 74.4495 [.4493] -144.512 [.514]
84.5572474 [.5572468] 76.8397 [.8385] -142.478 [.484]
86.5977586 [.5977517] 77.0467 [.0345] -126.391 [.413]
87.7298973 [.7298812] 77.5620 [.5598] -124.895 [.909]
90.2033869 [.2033397] 80.0067 [.0064] -123.344 [.349]
91.1697702 [.1694111] 80.9987 [.9887] -122.723 [.751]
91.4155253 [.4153708] 81.4723 [.4455] -121.203 [.218]
93.5991577 [.5979537] 81.7621 [.7611] -119.198 [.237]
Table 5.2: The ten lowest eigenvalues for (1) g = 0.010, (2) g = 0.045 and (3) g = 0.200 obtained
using ESHA(I) and ESHA(II). Digits of the exact eigenvalues are enclosed in [. . .].
Figure 5.6: The diamonds are the exact ground state energies. The line shows the same quantities
obtained from E of the SHA (see equations (5.32) and (5.36)). The two results are in good
agreement.
limited by the spread of the eigenfunction. If there are too many levels or the spreads are too
large, ESHA(I) will also become intractable. On the other hand, the application of ESHA(II)
can also be computationally challenging because transforming the BCS Hamiltonian H from
the su(2)_1 ⊕ su(2)_2 ⊕ . . . ⊕ su(2)_r basis to the SHA basis involves a unitary transformation U
where U^{−1}HU → H′. Each column of U is essentially an approximate eigenvector of H given
by the SHA. The dimension d of each eigenvector is the same as that of the full Hilbert space.
If we are only interested in the lowest d′ < d states, then essentially, U is a d × d′ rectangular
matrix. Therefore, the Hamiltonian we need to diagonalize is a d′ × d′ matrix. In most cases,
d′ ≪ d. This is where the computational reduction comes from. However, if d is too large,
U may also be too large for the computer to handle. Thus, without further approximations,
ESHA(II) is also limited. However, we note that the SHA works best for large interactions
(where the spreads are large), so we can revert to the SHA whenever ESHA(II) fails.
Next, we will make a comparison between the BCS variational approach of solving the
pairing Hamiltonian (5.9) using the BCS wave function (5.10) and applying the SHA.
5.3 Comparison between the BCS approach and the SHA
Most models will have Hilbert spaces with dimensions much larger than those that we have
seen in the 4-level model, making diagonalization, ESHA(I) or ESHA(II) intractable. We will
have to rely on the BCS approach or the SHA at some point. As mentioned earlier, the BCS
approximation is accurate when the number of particles is large, and the SHA is accurate when
the quasi-spin j_p of each level is sufficiently large. In the 4-level model, we saw that even for
7 ≤ j_p ≤ 10, the SHA gives accurate approximations. Subsequently, we will make a comparison
between the two techniques using numerical examples. Before that, we will first compare some
common physical quantities and equations derived using the BCS approach and the SHA.
Fractional Occupancy of each level

As mentioned earlier, the fractional occupancies of each level given by the BCS approach and
the SHA are respectively

v_q²;    (x_oq + 1)/2.    (5.50)

Earlier, we defined

v_q = sin θ_q^BCS,    (5.51)

and making provisions for x_oq = 2v_q² − 1, we define

x_oq = −cos 2φ_q^SHA = 2 sin² φ_q^SHA − 1.    (5.52)

If indeed the two quantities v_q² and (x_oq + 1)/2 are equivalent, then θ_q^BCS = ±φ_q^SHA.
Wave functions, eq. (5.10), eq. (5.46)

In solving the BCS Hamiltonian, the BCS theory starts with an approximate ground state
wave function from which the set of variational equations is deduced. On the other hand, in
the SHA, an approximate shifted harmonic oscillator Hamiltonian is first obtained. It is from
this approximate Hamiltonian that we get the set of SHA eigenfunctions. The BCS ground state
wave function and the SHA eigenfunctions (including those for excited states) are as follows:

|Φ_o^BCS⟩ = ∏_{p}^{r} ∏_{i_p=1}^{2j_p} ( u_p + v_p a†_{p,i_p} a†_{p,ī_p} ) |vac⟩;    Ψ_ν(ξ′) = η ∏_{i=1}^{r−1} ( 2^{ν_i} σ_i ν_i! √π )^{−1/2} H_{ν_i}(ξ′_i/σ_i) e^{−(1/2)(ξ′_i/σ_i)²}.
Equations for determining v_q and x_op, and selecting N

In the BCS approach, a set of equations is deduced by minimizing the BCS Hamiltonian
constrained by the pair number N. The solution to this set of equations is given by the
parameters ∆_q and the Lagrange multiplier λ. From there, other quantities like v_q and u_q can
be obtained. In the SHA, a set of shift expressions D_p is obtained from the SHA Hamiltonian.
By recognizing the interdependency of the variables m′_p and selecting a pair number N, the set
of unknowns x_op can be obtained.
For BCS, see eq. (5.19), eq. (5.20):

∆_p = ∑_{q}^{r} G_pq j_q ∆_q / √((ε_q − λ)² + ∆_q²)    where ∆_p = 2 ∑_{q}^{r} G_pq j_q u_q v_q,

N = ⟨Φ_o^BCS|N|Φ_o^BCS⟩ ≈ ∑_{q}^{r} j_q [ 1 − (ε_q − λ)/√((ε_q − λ)² + ∆_q²) ].

For ease of comparison with the SHA, we express the above two equations in terms of θ_q^BCS.
From equations (5.14) and (5.17), the gap equation and the equation constraining the mean
pair number to N are obtained from

(ε_p − λ) tan 2θ_p^BCS = ∑_{q}^{r} G_pq j_q sin 2θ_q^BCS    (5.53)

and

N = ∑_{q}^{r} j_q [ 1 − cos 2θ_q^BCS ].    (5.54)
For SHA, see eq. (5.35), eq. (5.41), eq. (5.42):

D_p = 2ε_p − G_pp + (2x_op/√(1 − x_op²)) ∑_{q}^{r} G_pq j_q √(1 − x_oq²) = Λ

and

N − (1/2) N_max = M_T = ∑_{p=1}^{r} j_p x_op.

Similarly, we express the above two equations in terms of φ_q^SHA, using equation (5.52):

2ε_p − G_pp − Λ + (2x_op/√(1 − x_op²)) ∑_{q}^{r} G_pq j_q √(1 − x_oq²) = 0    (5.55)

(ε_p − (1/2)G_pp − (1/2)Λ) tan 2φ_p^SHA = ∑_{q}^{r} G_pq j_q sin 2φ_q^SHA    (5.56)

and

N = ∑_{p}^{r} j_p − ∑_{p}^{r} j_p cos 2φ_p^SHA.    (5.57)
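The equivalence of (5.55) and (5.56) can be spot-checked numerically. The script below is our own, with arbitrary illustrative parameters: it defines Λ from (5.55) at one level p for an arbitrary trial {x_op}, and verifies that (5.56) then holds at that level with φ from (5.52).

```python
# Numerical consistency check (ours) that the shift condition (5.55) and
# its trigonometric form (5.56) agree, using x_oq = -cos(2 phi_q) from (5.52).
import numpy as np

rng = np.random.default_rng(0)
r = 4
eps = rng.uniform(0.5, 5.0, r)            # arbitrary single-particle energies
j = rng.uniform(2.0, 10.0, r)             # arbitrary quasi-spins
G = 0.1 * (2.0 - 0.1 * np.abs(eps[:, None] - eps[None, :]))
xo = rng.uniform(0.1, 0.8, r)             # arbitrary trial x_op in (-1, 1)

s = np.sqrt(1.0 - xo**2)                  # sqrt(1 - x_op^2) = sin(2 phi_p)
cs = G @ (j * s)                          # sum_q G_pq j_q sqrt(1 - x_oq^2)
p = 0
Lam = 2 * eps[p] - G[p, p] + 2 * xo[p] / s[p] * cs[p]   # (5.55) solved for Lambda

phi = 0.5 * np.arccos(-xo)                # equation (5.52)
lhs = (eps[p] - 0.5 * G[p, p] - 0.5 * Lam) * np.tan(2 * phi[p])
rhs = (G[p] * j * np.sin(2 * phi)).sum()  # right-hand side of (5.56)
print(np.isclose(lhs, rhs))               # True
```

The check works for any trial {x_op} because the two forms are related by the identities sin 2φ = √(1 − x_o²) and cos 2φ = −x_o, not by any property of the solution itself.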
We see that equations (5.53) and (5.56) are equivalent if the diagonal elements G_pp
are constant, i.e. G_pp = G′ ∀ p. Then λ = (1/2)(G′ + Λ). Equations (5.54) and (5.57) are
equivalent. Though the origins of the two techniques are different, the solutions will be identical
if G_pp = G′. Thus, when G_pp = G′, as in our 4-level model example, the BCS and SHA
predictions of the fractional occupancy of each level, the gap energies ∆_p and the quasi-particle
energies E_p will all be the same. The term (1/2)G_pp in the SHA equation is reminiscent of the
quartic term ∑_p 2j_p G_pp v_p⁴ that was dropped in the BCS treatment in equation (5.11). If this term
is included, then the SHA will not be identical to the BCS theory even if G_pp = G′. However,
the difference in the solution of the BCS gap equations will only be marginal. In table (5.3),
we compare how the BCS results (with the quartic term included) fare against the
predictions of the SHA and the exact results for the 4-level model at the three points of interest.
g       p    Exact results    x_op from SHA    x_op = 2v_q² − 1 from BCS    x_op = 2v_q² − 1 from BCS
             (x̄_p)                             (no quartic term)            (with quartic term)
0.010   1    0.9982           0.9984           1.0000                       0.9984
        2    0.7505           0.7508           0.7500                       0.7508
        3    -0.9994          -0.9996          -1.0000                      -0.9996
        4    -0.9997          -0.9998          -1.0000                      -0.9998
0.045   1    0.9342           0.9360           0.9204                       0.9396
        2    0.6788           0.6790           0.6644                       0.6885
        3    -0.9259          -0.9258          -0.9103                      -0.9333
        4    -0.9637          -0.9651          -0.9566                      -0.9685
0.200   1    0.1664           0.1612           0.1612                       0.1628
        2    -0.0048          -0.0071          -0.0072                      0.0086
        3    -0.3243          -0.3220          -0.3220                      -0.3288
        4    -0.4208          -0.4173          -0.4173                      -0.4249

Table 5.3: We compare the x_op and x̄_p of the 4-level model at the three points of interest.
For the three points tested, the inclusion of the quartic term in the BCS treatment does not
improve the accuracy.
While we cannot generalize the impact of the inclusion of the quartic term in the BCS
treatment, because the equations are highly non-linear, it is reasonable to expect the results we
see in table (5.3) to be representative of other configurations. We see that for the three points
representing the three domains, the inclusion of the quartic term in the BCS treatment only
affects the computation of x_o marginally.
Ground state energies, eq. (5.21), eq. (5.36)

The BCS and SHA ground state energies are respectively expressed in terms of the solutions
of the variational equations (∆_q, λ) and the shift equations (x_oq):

⟨Φ_o^BCS|E|Φ_o^BCS⟩ = ∑_{q} 4j_q |v_q|² ε_q − ∑_{pq} G_pq j_p j_q (∆_p/E_p)(∆_q/E_q);

E = ∑_{p}^{r} [ 2ε_p j_p (1 + x_op) − G_pp j_p (1 + x_op) ] − ∑_{p,q}^{r} G_pq j_p j_q √((1 − x_op²)(1 − x_oq²)).
Again, we will express the equations in terms of θ_q^BCS and φ_q^SHA for easy comparison of the
two expressions. Using equations (5.18) and (5.52), we get

⟨Φ_o^BCS|E|Φ_o^BCS⟩ = ∑_{q} 4j_q ε_q sin² θ_q^BCS − ∑_{pq} G_pq j_p j_q sin 2θ_p^BCS sin 2θ_q^BCS;    (5.58)

E = ∑_{p}^{r} [ 4ε_p j_p sin² φ_p^SHA − 2G_pp j_p sin² φ_p^SHA ] − ∑_{p,q}^{r} G_pq j_p j_q sin 2φ_p^SHA sin 2φ_q^SHA.    (5.59)
We saw that if G_pp = G′ then φ_q^SHA = θ_q^BCS because the gap equations and the equations
for fixing N are equivalent. Therefore, the two expressions for the ground state energy are the
same except for the term −∑_{p}^{r} 2G_pp j_p sin² φ_p^SHA in the SHA. In fact, this term is responsible for
a more accurate approximation. Again, this is reminiscent of the term ∑_p 2j_p G_pp v_p⁴ which was
dropped in the BCS treatment in equation (5.11). In table (5.4), we have included the numerical
results for the ground state energy of the 4-level model, obtained using different techniques, for
comparison.

Through the whole range of g, the SHA prediction is the best approximation among the
three techniques. The agreement with the exact results is very good even in the phase transition
region 0.04 < g < 0.06. The inclusion of the quartic term −∑_p 2j_p G_pp v_p⁴ improves the
results predicted by the BCS but does not make them better than the SHA prediction. Thus, for
approximating the ground state energy, the SHA is better than the traditional BCS approach.
Next, we will discuss collective excitation energies, which can be approximated using the
SHA but not with the traditional BCS theory.
Points of    g        Exact results    E from SHA    ⟨Φ_o^BCS|E|Φ_o^BCS⟩    ⟨Φ_o^BCS|E|Φ_o^BCS⟩ − ∑_p 2j_p G_pp v_p⁴
Interest
(1) →        0.010    77.18            77.21         77.77                  77.24
             0.020    75.69            75.82         76.94                  75.89
             0.030    73.78            74.11         75.79                  74.23
             0.040    71.09            71.76         74.00                  71.97
(2) →        0.045    69.19            70.03         72.55                  70.34
             0.050    66.67            67.55         70.35                  68.03
             0.055    63.36            64.17         67.25                  64.87
             0.060    59.24            59.96         63.32                  60.88
             0.080    36.61            37.00         41.48                  38.77
             0.100    7.87             8.02          13.62                  10.60
             0.120    -24.19           -24.24        -17.52                 -20.89
             0.150    -75.74           -76.05        -67.53                 -71.60
(3) →        0.200    -166.43          -167.09       -155.89                -160.85
             0.250    -260.02          -260.98       -246.98                -251.45

Table 5.4: We compare the exact ground state energy of the 4-level model with various approximations.
The exact results and E from the SHA are plotted in figure (5.6). The three points of interest from our
previous discussion are indicated in the first column. We compare against the exact results using (a) E
from the SHA, eq. (5.59); (b) ⟨Φ_o^BCS|E|Φ_o^BCS⟩ from BCS, eq. (5.58); and (c) the BCS expectation
value of the ground state energy including the quartic term.
Excitation Energies

It is important here to distinguish between the excitation energies in the SHA and the
excitation energies in BCS theory, which are commonly associated with the breaking of Cooper
pairs. The excitation energies ω in the SHA, with N fixed, are related to collective excitations
of the whole system that do not break any of the Cooper pairs.

We can construct a state with a broken pair quite easily by (i) excluding a particle pair
with momentum k′ from the original BCS ground state wave function |Φ_o^BCS⟩ given in equation
(5.10) and then (ii) creating a single particle with momentum k′_1 and another with momentum
k′_2:
|Φ_1^BCS⟩ = ∏_{k>0, k≠k′_1,k′_2} ( u_k + v_k a†_k a†_{k̄} ) a†_{k′_1} a†_{k′_2} |vac⟩.    (5.60)
Essentially, we have a system with N − 1 pairs and two particles, one with momentum k′_1 and the
other with momentum k′_2. Without going into details, we state that the energy difference compared
to the BCS ground state can be approximated by

⟨Φ_1^BCS|H − λN|Φ_1^BCS⟩ − ⟨Φ_o^BCS|H − λN|Φ_o^BCS⟩ ≈ E_{k′_1} + E_{k′_2} = √((ε_{k′_1} − λ)² + ∆²_{k′_1}) + √((ε_{k′_2} − λ)² + ∆²_{k′_2}).    (5.61)
The term E_{k′}, known as the quasi-particle energy, was introduced earlier in equations (5.19)-
(5.21). The excitation energy for a pair scattering into two different single particle energy levels
s and t (see figure (5.7)) can be written as

∆E = E_s + E_t = √((ε_s − λ)² + ∆_s²) + √((ε_t − λ)² + ∆_t²) ≥ ∆_s + ∆_t    (5.62)

with the minimum value being ∆_s + ∆_t.
Figure 5.7: A quasi-particle treatment of excitation: (left) the BCS ground state is regarded as a vacuum; (right) a pair is broken, with one particle scattering into single particle energy level 2 and the other into level 3. The levels into which the particles are scattered each have their occupancy reduced by one. This is equivalent to a reduction of 1/2 in the quasi-spin.
This energy associated with pair breaking can be obtained either exactly or by using the SHA-approximated ground state energies. To compute exactly the energy associated with breaking a pair in a system of N pairs, we first solve for the ground state energy EN of an N-pair system with j = [j1, . . . , js, . . . , jt, . . . , jr]. If a pair of particles breaks and scatters into single particle energy levels s and t, then these two levels can each accommodate one less pair. The energy for this new configuration can be obtained by solving for the ground state energy E′N−1 of an (N − 1)-pair system with j = [j1, . . . , js − 1/2, . . . , jt − 1/2, . . . , jr]. Thus, in the SHA the excitation energy due to pair breaking is given by
∆E_pair break = (E′N−1 − EN) + εs + εt. (5.63)
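A minimal numerical sketch of the quasi-particle expressions (5.61)-(5.62), with hypothetical values of the single particle energies, chemical potential λ and gaps (none taken from the thesis):

```python
import math

# Quasi-particle energy E_k = sqrt((eps_k - lambda)^2 + Delta_k^2), as in eq. (5.61)
def quasiparticle_energy(eps, lam, gap):
    return math.sqrt((eps - lam) ** 2 + gap ** 2)

# Pair-breaking excitation Delta E = E_s + E_t, eq. (5.62)
def pair_breaking_energy(eps_s, eps_t, lam, gap_s, gap_t):
    return quasiparticle_energy(eps_s, lam, gap_s) + quasiparticle_energy(eps_t, lam, gap_t)

# Hypothetical illustration values
lam, gap = 1.0, 0.5
dE = pair_breaking_energy(0.25, 0.75, lam, gap, gap)
print(dE >= 2 * gap)  # True: the lower bound Delta_s + Delta_t of eq. (5.62)
print(pair_breaking_energy(lam, lam, lam, gap, gap) == 2 * gap)  # True: bound saturated at eps = lambda
```

The bound is saturated exactly when both levels sit at the chemical potential, which is why ∆s + ∆t is the minimum pair-breaking cost.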
The same technique can be extended to excitations due to the breaking of two or more pairs.
Equation (5.60) can be rewritten in terms of the so-called Valatin-Bogoliubov [7], [76] quasi-particle operators
α†_k = u_k a†_k − v_k a_{−k} (5.64)
α†_{−k} = u_k a†_{−k} + v_k a_k (5.65)
α_{−k} = u_k a_{−k} + v_k a†_k (5.66)
α_k = u_k a_k − v_k a†_{−k}. (5.67)
These quasi-particle operators obey the fermion anti-commutation relations
{α_k, α†_{k′}} = δ_{k,k′}. (5.68)
The BCS ground state can be regarded as the quasi-particle vacuum since
α_k|Φ^BCS_o〉 = 0. (5.69)
The wave function |Φ^BCS_1〉 can be regarded as an excited state built on the BCS ground state:
|Φ^BCS_1〉 = α†_{k′1} α†_{k′2} |Φ^BCS_o〉. (5.70)
The energy associated with this excitation is given by the quasi-particle energy mentioned earlier in equation (5.61).
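As a small numerical illustration (our own sketch, with an arbitrary choice u = 0.8, v = 0.6 satisfying u² + v² = 1, and a two-mode Jordan-Wigner representation of the pair (k, −k)), one can check that the operators (5.64)-(5.67) obey the anti-commutation relation (5.68) and annihilate the BCS pair state as in (5.69):

```python
import numpy as np

# Two fermionic modes (k and -k) via the Jordan-Wigner construction
I2 = np.eye(2)
a = np.array([[0., 1.], [0., 0.]])   # single-mode annihilation operator
Z = np.diag([1., -1.])               # parity string

a_k  = np.kron(a, I2)                # annihilates mode k
a_mk = np.kron(Z, a)                 # annihilates mode -k

u, v = 0.8, 0.6                      # arbitrary, with u**2 + v**2 = 1
alpha_k  = u * a_k  - v * a_mk.T.conj()   # eq. (5.67)
alpha_mk = u * a_mk + v * a_k.T.conj()    # eq. (5.66)

# Anti-commutation relation, eq. (5.68)
anti = alpha_k @ alpha_k.T.conj() + alpha_k.T.conj() @ alpha_k
assert np.allclose(anti, np.eye(4))

# BCS pair state (u + v a_k^dag a_{-k}^dag)|vac>, annihilated by both alphas
vac = np.array([1., 0., 0., 0.])
phi = u * vac + v * (a_k.T.conj() @ a_mk.T.conj() @ vac)
assert np.allclose(alpha_k @ phi, 0) and np.allclose(alpha_mk @ phi, 0)
print("quasi-particle vacuum verified")
```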
In comparison, the SHA ground state is given by the (r−1)-dimensional oscillator eigenfunction (5.46). This eigenfunction Ψν(ξ′) is evaluated at discrete points (ξ′1, . . . , ξ′r−1) (transformed from the original basis |m1, . . . , mr〉). It has (r−1) oscillator creation (annihilation) operators b†i (bi), with [bi, b†j] = δij, that act collectively on the low-lying states on the constant-N hyperplane. Therefore, as mentioned earlier, the SHA excitation energy is conceptually different from the quasi-particle excitations, which pertain to the scattering of individual pairs out of their bound states.
We will use the 4-level model again to see how these collective excitations compare with the quasi-particle excitations. The calculation in figure (5.8) is done for g = 0.2. We compare the low-lying SHA excitation energies for a system with N = 28 against some examples of quasi-particle excitations involving one broken pair where
1. both particles scatter into the same level εp, with 1 ≤ p ≤ 4;
2. one particle scatters into the level with ε1 and the other into ε3; and
3. one particle scatters into the level with ε2 and the other into ε4.
Figure 5.8: The exact collective energies of the 4-level model for g = 0.2 are labelled (0, 1, 2,
3). The exact quasi-particle excitation energies of the broken pairs scattering into levels with
single particle energies εp, εq are indicated by [p, q].
In figure (5.8), we note that the SHA excitation energies are similar in magnitude to the quasi-particle excitation energies. Both depend on the interaction strength, and we will see in the 10-level example that this similarity persists in both the weak and strong interaction domains. We see that it is energetically possible for some collective excited states (such as 2) to de-excite by breaking a pair to a state (such as [1,1]). To generalize this, we write:
System_N --(collective excitation)--> System*_N --(de-excitation)--> System_{N−1} + εp + εq + ℏν (5.71)
where ℏν is the energy carried away by a phonon (photon). Here, we hypothesize that the collective excited state can be an intermediate state before a pair is broken. As for the collective excited state 3, it is possible for it to de-excite by breaking a pair into either [2,2], [1,3] or [2,4]. From the argument of energetics alone, the two other quasi-particle excitations considered here, [3,3] and [4,4], can only be formed from higher excited collective states.
In table (5.5), we make a comparison of different techniques used for the computation of quasi-particle excitation energies: (i) using the expressions for the quasi-particle energies, equation (5.61), and (ii) taking differences of the energies of systems with N and N − 1 pairs, as mentioned in the caption of figure (5.7) and equation (5.63). Again, we see that the SHA approximations are in good agreement with the exact results.
Broken pairs scatter to εp, εq [p, q]   Exact Results   SHA/BCS using eq. (5.61)   SHA (differences in E_{0,0,0})   BCS (differences in 〈Φo|E|Φo〉)
[1, 1]   21.311   21.336   21.356   21.162
[2, 2]   22.346   22.404   22.385   22.142
[3, 3]   24.076   24.097   24.113   23.869
[4, 4]   24.367   24.346   24.407   24.173
[1, 3]   22.731   22.717   22.771   22.537
[2, 4]   23.390   23.375   23.429   23.224
Table 5.5: A comparison of different techniques used for the computation of quasi-particle excitation energies. For the BCS prediction in the last column, we included the quartic term.
The subject of collective excitation in a metallic superconducting system has also been
studied in Anderson’s [3] RPA treatment of the theory of superconductivity. However, these
collective excitations are not predicted within the framework of the BCS treatment.
Therefore, in addition to yielding more accurate approximations than the BCS theory, the SHA also provides two quantities not given within the formalism of the BCS approach:
1. a set of number conserving eigenfunctions; and
2. their corresponding collective excitation energies.
5.4 A sample 10-level model
We end this chapter with a sample calculation for a multi-level model and compare the predic-
tions of the BCS treatment and the SHA treatment of the Hamiltonian (5.9). In particular, we
will check if the quasi-particle excitations from the BCS theory are numerically similar to the
SHA predictions of the collective excitation energies.
In our simple model, all the single particle energy levels are equally spaced (as mentioned in footnote (3)). Here, we observe that in momentum space, the momenta (±kx, ±ky, ±kz) correspond to the same kinetic energy (and hence the same single particle energy). Therefore, each single particle energy level can accommodate a multiple of eight pairs of electrons. Hence, the quasi-spins jp will be in multiples of 4.
We will use a simple 10-level model with configurations given by:
jp = 8, εp = 0.25p for 1 ≤ p ≤ 10; (5.72)
Gpq = g(2.0 − 0.1|εp − εq|), N = 80. (5.73)
The exact result for this configuration can only be obtained by diagonalizing a Hamiltonian matrix of dimension 51125645317. This is computationally intractable, so instead we will use the SHA and the BCS approach to approximate the low-lying eigenvalues of the model. For the BCS approach, we include the quartic term, as appropriate based on our observations in the 4-level model.
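The quoted dimension is consistent with a simple counting argument: assuming each level p with quasi-spin jp = 8 can hold up to 2jp = 16 pairs, the Hilbert-space dimension at fixed pair number is the number of occupancy vectors (n1, . . . , n10) with 0 ≤ np ≤ 16 and Σp np = 80. A short dynamic-programming count (our own sketch, not from the thesis) reproduces the figure:

```python
def pairing_dim(capacities, n_pairs):
    """Count occupancy vectors (n_1, ..., n_r) with 0 <= n_p <= cap_p
    and sum n_p = n_pairs, via dynamic programming over the levels."""
    ways = [0] * (n_pairs + 1)
    ways[0] = 1
    for cap in capacities:
        new = [0] * (n_pairs + 1)
        for s, w in enumerate(ways):
            if w:
                for n in range(min(cap, n_pairs - s) + 1):
                    new[s + n] += w
        ways = new
    return ways[n_pairs]

# 10 levels, each with quasi-spin j_p = 8, i.e. up to 2*j_p = 16 pairs; N = 80 pairs
print(pairing_dim([16] * 10, 80))  # -> 51125645317
```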
We will make a comparison between the approximations obtained from the two approaches
for an example with weak interaction (g = 0.005) and another with large interaction (g = 0.200).
g = 0.005
Using the SHA, we get the fractional level occupancy as
v² = (xo + 1)/2 = [0.957, 0.932, 0.887, 0.794, 0.617, 0.383, 0.206, 0.113, 0.068, 0.043]. (5.74)
As mentioned earlier, the same result will be obtained using the BCS (less the quartic term).
The ground state energies are given by
E = 111.355 (SHA), E = 112.155 (BCS), 111.548 (with quartic term). (5.75)
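As a consistency check (our own, not performed in the thesis), the occupancies in (5.74) satisfy the pair-number constraint Σp 2jp vp² = N = 80 and a particle-hole symmetry vp² + v{11−p}² = 1 that follows from the symmetric level scheme (5.72):

```python
# Quoted occupancies from eq. (5.74)
v2 = [0.957, 0.932, 0.887, 0.794, 0.617, 0.383, 0.206, 0.113, 0.068, 0.043]

# Number constraint: each level holds up to 2*j_p = 16 pairs and N = 80 pairs
n_pairs = sum(16 * x for x in v2)
print(round(n_pairs, 6))  # -> 80.0

# Particle-hole symmetry about the middle of the equally spaced level scheme
print(all(abs(v2[p] + v2[9 - p] - 1.0) < 1e-9 for p in range(10)))  # -> True
```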
The collective excitation energies of the nine lowest eigenstates obtained using the SHA are
given by
ω = [1.033, 1.167, 1.172, 1.496, 1.499, 1.912, 1.913, 2.370, 2.371]. (5.76)
Here, the excitation energies ω are computed in the transformed coordinates (with both mass
and spring constant tensor diagonal) on the constant-N hyperplane. The lowest ten quasi-
particle excitation energies of the BCS theory are
2E = [1.066, 1.066, 1.276, 1.276, 1.617, 1.617, 2.023, 2.023, 2.464, 2.464]. (5.77)
As mentioned earlier, this same result can be obtained using the SHA. The energies are in pairs
because broken pairs scattering to level [p,q] will have the same energies as those scattering
to [q,p]. We note that the quasi-particle excitation energies are indeed similar numerically
to the collective excitation energies ω. Finally, the SHA eigenfunction can be constructed
(using equation (5.46)) with the SHA predicted spread σ and the mean ξo in the transformed
coordinates,
σ = [1.843, 2.681, 1.866, 1.974, 2.105, 1.375, 1.455, 1.528, 1.665] (5.78)
ξo = [10.940, −8.945, 10.730, −10.302, −9.848, 0.000, 0.000, 0.000, 0.000, 0.000]. (5.79)
g = 0.200
Using the SHA, we get the fractional level occupancy as
v² = (xo + 1)/2 = [0.519, 0.514, 0.510, 0.506, 0.502, 0.498, 0.494, 0.490, 0.486, 0.481]. (5.80)
The ground state energies are given by
E = −2267.756 (SHA), E = −2235.756 (BCS), −2251.733 (with quartic term). (5.81)
The collective excitation energies for the nine lowest eigenstates obtained using the SHA are
given by
ω = [59.883, 60.530, 60.827, 61.139, 61.374, 61.583, 61.744, 61.868, 61.951]. (5.82)
The lowest ten quasi-particle excitation energies of the BCS theory are
2Ep = [60.425, 60.425, 61.048, 61.048, 61.516, 61.516, 61.828, 61.828, 61.984, 61.984]. (5.83)
The SHA approximate eigenfunction can be constructed (using equation (5.46)) with the spread
σ and the mean ξo in the transformed coordinates,
σ = [2.876, 2.825, 2.848, 2.823, 2.826, 2.839, 2.801, 2.800, 2.781] (5.84)
ξo = [−0.571, 0.012, 0.184, 0.048, 0.102, 0.000, 0.000, 0.000, 0.000, 0.000]. (5.85)
From the good agreement with exact results in the 4-level model discussed earlier, we expect the SHA predictions to similarly agree with exact results, although the exact results are not available here for comparison. From the check with the quasi-particle excitations, the collective excitation energies are similar in magnitude, as we saw in figure (5.8).
Thus, it is reasonable to conclude that the SHA is an effective approximation technique for solving the BCS Hamiltonian. For an r-level problem, the most important information about the low-lying states can be obtained by solving a system of r equations and diagonalizing two r × r matrices. More importantly, though starting from a totally different approach, the SHA gives results similar to the predictions of the BCS theory without making a priori assumptions about the ground state wave function. Therefore, the SHA can be used to systematically make accurate approximations of many-body models with an su(2) ⊕ su(2) ⊕ . . . ⊕ su(2) spectrum generating algebra, as we have seen in the BCS model.
Chapter 6
Conclusion
The central theme of the thesis involves the development of a technique that can bring out the
salient features of the low-lying states of algebraic Hamiltonians without diagonalization. It has
been shown that this can be achieved using the Shifted Harmonic Approximation (SHA) for
certain classes of models with su(2), su(1, 1) and su(2)1 ⊕ su(2)2 . . . su(2)r spectrum generating
algebras. The SHA also provides information to construct basis states which yield accurate
eigenvalues for low-lying states by diagonalizing Hamiltonians in small subspaces of what may
be huge Hilbert spaces. This technique has been explicitly illustrated using the CJH in table
(4.4), where the dimensions of the Hilbert spaces are up to 10⁹. We can also see a similar application in the quartic oscillator in figure (4.19), where sufficiently precise eigenvalues require diagonalization in a Hilbert space with at least 20000 basis states. In both cases, we are able to check that the low-lying eigenvalues converge to a precision beyond ten significant figures.
Another notable example is the r-level BCS pairing Hamiltonian which has an su(2)1 ⊕su(2)2⊕ . . .⊕ su(2)r spectrum generating algebra. The dimensions of the Hilbert spaces usually
depend exponentially on r and the SHA provides accurate approximations obtained by just
solving systems of r equations and performing matrix operations on two r × r matrices. Fur-
thermore, we can systematically construct an accurate set of eigenvectors corresponding to the
low-lying collective states.
From these illustrations, we can see a great potential in the SHA as an approximation for
similar many-body models which may be computationally intractable to solve by diagonaliza-
tion. We will recapitulate some of the other findings in the thesis and discuss possible extensions
of the SHA to a wider class of problems.
The location of the SHA oscillator minima and the identification of phases
As mentioned, the SHA provides accurate approximations of low-lying eigenvalues. Another notable feature of the SHA is its accurate prediction of the most significant
Chapter 6. Conclusion 134
basis states in the eigenvectors. This information is encapsulated in the quantity xo (or mo),
which gives the location of the SHA oscillator minima; and σ, which gives the width of the SHA
eigenfunction. The effectiveness of the SHA in predicting the location of the minima across the
whole domain of parameters can be seen in figure (3.4) for xo in the LMG and in figure (4.3) for
the CJH; in figure (4.14) for no in the quartic oscillator; and in figure (5.3) for xop in the BCS
pairing model. Subsequent discussions on the oscillator minima xo (or mo) of su(2) Hamiltonians can be similarly extended to the other spectrum generating algebras we have discussed. As we
will also see, the variation of xo with the parameters of the model gives us insights into how
the system will transit through different phase regions as the parameter changes.
In every one of the examples in this thesis, we have seen that when xo remains constant¹
as the parameters are varied, the system remains in one phase. When xo starts to change with
the parameters, the system starts to transit to another phase. Thus, given a good knowledge of
how the minima xo for a given Hamiltonian varies with the whole range of parameters, we can
easily identify the different phases. It is reasonable to contemplate that identifying phases using the rate of change of xo with the parameters, e.g. dxo/dV in the LMG, can be related to the concept of order parameters, which are normally zero in the phase with a higher degree of symmetry and non-zero in another phase with a lower degree of symmetry. While the expressions for xo may also be obtained using other techniques in certain cases (such as HF for su(2))², we have
seen that the expressions for the minima can be systematically derived by using the shift terms
in the SHA for Hamiltonians with su(2), su(2)1 ⊕ su(2)2 . . . su(2)r and even the non-compact
su(1, 1) algebra.
As seen in the thesis, certain phase transitions associated with the breaking of symmetries
can also be identified from the emergence of multiple roots in xo. An example is the LMG in the
Jx-diagonal basis. We have seen in section 3.3 that the SHA correctly predicted the existence of
double roots above the critical interaction, i.e. NV/εJ > 1. In this region, the eigenstates are
paired as doublets (see figure (2.6)), with one of the eigenstates having a symmetric distribution of components and the other an anti-symmetric one. This is as given by the SHA eigenfunction
in equation (3.44). The occurrence of double roots can be associated with the emergence of a
new degree of freedom in parity when the su(2) symmetry is broken as seen in figure (3.5) and
figure (3.6).
We find a similar example in the su(1, 1) quartic oscillator. We note that in the spherical
oscillator phase given by α < 0.5, Hamiltonians with different irrep labels λ but the same M
and α have very similar roots, no ≈ 0. See figure (4.14). In the rotor-vibrator phase given by
¹But with width changing.
²We saw in section 3.6 that the quantity xo in the SHA is related to the HF angle of rotation θ in quasi-spin space by xo = −cos θ.
α > 0.5, the roots no corresponding to different λ become more distinct from each other. This
can be viewed as the system moving from a phase region with only a single root no (for all λ) to
another with multiple roots where each root is associated with a particular degree of freedom
in rotation labelled by the quantity λ (related to the so(5) angular momentum). This can be
viewed as the emergence of a new degree of freedom in λ when the su(1, 1) symmetry is broken.
It is important that the SHA gives not only an accurate prediction of xo but also the complete set of roots, as we have seen in the LMG.
In the case of the BCS pairing model, the SHA treatment of the BCS Hamiltonian is gauge
invariant (i.e. pair number conserving) and we do not see the breaking of symmetry (in pair
numbers) associated with using the traditional BCS ansatz ground state wave function. It is
notable that all the predictions of the traditional BCS treatment are captured in the SHA, even
though it is a very different approach. For example, the set of SHA shift equations is nearly
identical to the BCS gap equations. As discussed in chapter 5, the SHA multi-dimensional
minima {xop} such as those shown in figure (5.3) are in fact the fractional occupancies {vp} of
each level p in the traditional treatment. We see that the quantities xop can also offer important
physical insights into a model. A further investigation involving the application of the SHA to a
BCS model with the same set of parameters but with different pair numbers may provide some
perspectives of symmetry breaking associated with the degree of freedom in pair numbers.
A gauge invariant set of eigenfunctions for the BCS Hamiltonian
Another important perspective gained by analyzing the BCS Hamiltonian using the gauge
invariant SHA is that the sub-dynamics (associated with the low-lying eigenstates) of an r-level
pairing model can be regarded as an (r−1)-dimensional oscillator in the superconducting limit.
As seen in table (5.2) (under the column with g = 0.20) and figure (5.5), the degeneracy of an
(r− 1)-dimensional oscillator can be used to explain the manner in which the low-lying excited
eigenstates are clustered.
We can see that the SHA overlaps with and complements the traditional methods in many
aspects. As we have seen in chapter 5, the traditional BCS treatments only give the approximate
ground state wave function and quasi-particle excitations. On the other hand, the SHA provides
accurate predictions for both a set of low-lying eigenfunctions for the BCS Hamiltonian and
quasi-particle excitations across the whole domain of interaction strengths. A possible candidate
associated with these low-lying excitations given by the SHA is the exciton. This is still a subject
of investigation at the point of writing.
Next, we will re-visit a slightly different approach for analyzing phases which we have
discussed earlier.
Strategies at different contraction limits
In section 3.6, we introduced the P1-P4 scheme for the classification of phases in a given
model. Instead of using xo directly as we have just discussed, the P1-P4 scheme uses the
inverse masses A and the spring constants B of the oscillators (which are functions of both xo
and parameters of the models), to predict the various phase regions. The vanishing of A alone
signifies the eigenfunction approaching a δ-function because the width σ = (A/B)^(1/4) also vanishes.
This corresponds to the eigenvector localizing on certain basis states. It can be interpreted
that the model is approaching a contraction or an algebraic limit. On the other hand, if B
vanishes alone, we no longer have an oscillator equation. We have an example of this in section
3.3 where the LMG (expressed in the Jx-diagonal basis) is near the critical point. With B = 0,
the differential equation that remains can be regarded either as a free particle equation or as an Airy equation. These possibilities have not been discussed in the thesis. A further investigation into
these possibilities using the appropriate boundary conditions should give more insights into the
nature of such critical points.
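To make the roles of A and B concrete, consider the one-dimensional shifted oscillator H = −(A/2) d²/dξ² + (B/2)(ξ − ξo)² suggested by this discussion: its excitation energy is ω = √(AB) and its ground-state width is σ = (A/B)^(1/4). A finite-difference check (our own sketch, with arbitrary values of A, B and ξo, not parameters from the thesis):

```python
import numpy as np

# Shifted harmonic oscillator H = -(A/2) d^2/dxi^2 + (B/2)(xi - xi_o)^2
A, B, xi_o = 0.5, 2.0, 1.0
omega, sigma = np.sqrt(A * B), (A / B) ** 0.25   # analytic predictions

xi, d = np.linspace(xi_o - 6, xi_o + 6, 1201, retstep=True)
# Central-difference second derivative on the grid
T = (np.diag(np.full(1201, -2.0)) + np.diag(np.ones(1200), 1)
     + np.diag(np.ones(1200), -1)) / d**2
H = -(A / 2) * T + np.diag((B / 2) * (xi - xi_o) ** 2)

E, psi = np.linalg.eigh(H)
print(E[1] - E[0])   # excitation energy, close to omega = sqrt(A*B) = 1.0
rms = np.sqrt(np.sum(psi[:, 0] ** 2 * (xi - xi_o) ** 2))
print(rms)           # ground-state rms spread, close to sigma/sqrt(2)
```

When A → 0 with B fixed, σ → 0 and the ground state collapses onto a single grid point, the δ-function (contraction) limit described above; when B → 0 the oscillator structure disappears altogether.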
We have seen that the SHA predictions are most accurate in the oscillator limit where the
spectrum generating algebras contract to the augmented hw(1) algebra. With the accurately
predicted mo and σ using the SHA, a good approximation of the eigenstates of the Hamilto-
nian can be constructed. Thus, as mentioned earlier, the dimension of the space needed for
diagonalization to obtain highly accurate eigenvalues of low-lying states is greatly reduced. In
the complementary algebraic limits (such as in su(2)) where the ground state eigenvector is
dominated by the lowest weight state, the matrix representations of Jz and J± are usually
obtained by the standard contractions such as by taking the limit of the Dyson expansion (see
section (3.6)). The spectral properties can be obtained using the RPA. We see that the SHA is
complementary to these techniques.
From the examples given in the thesis, this oscillator limit corresponds to the domain where both A and B are non-vanishing and σ > 1. We list below the occurrences of this oscillator limit in the examples we have seen in the thesis:
LMG: NV/ε ≪ 1 (Jz-diagonal basis) and NV/ε ≫ 1 (Jx-diagonal basis);
CJH: using the Jz-diagonal basis, in the Rabi and Josephson regimes (i.e. K/εJ < N) where the bias is small, Δμ/((1 + j)K) < 1;
Quartic Oscillator: using the Sz-diagonal basis, in the rotor-vibrator phase, 0.5 < α ≲ 2.0, provided α/M ≪ 1;
BCS: using the Jpz-diagonal basis, in the superconducting regime, where the interactions Gpq are at least comparable to the spacing of the single particle energy levels εp.
In the oscillator limit, we can use the SHA minimum mo and the spread σ to generate a set of accurate low-lying eigenfunctions. However, in the su(2) (or su(1, 1)) limit where σ < 1, the SHA minimum mo and spread σ remain applicable only for the ground state eigenfunction, whose components am can be obtained using the binomial approximation. The SHA predicted mo and σ become exact when the Hamiltonian is an element of the su(2) algebra, as we have seen in section (3.6). However, the quantities mo and σ cannot be used directly in the prediction of eigenvectors of excited states.
In other cases where σ < 1 and the Hamiltonian is not in the su(2) limit, a simple modi-
fication can be made such that the SHA equation can be used in a different way. We see an
example of this in the Fock regime of the CJH in figure (4.5). In this case, because the inverse
mass term A is small, the kinetic term can be excluded and the shifted oscillator equation (4.5)
becomes an eigenvalue equation (4.15). From there, we can continue to obtain accurate eigen-
values in the Fock regime. See figure (4.9). Furthermore, the numerically most significant basis states are highly localized about mo because the spread is very small. This facilitates the accurate approximation of the lowest eigenvalues and eigenstates by diagonalizing the Hamiltonian in a small relevant set of basis states, as we have seen in ESHA(I).
As seen in section 3.3, the SHA predicted mo and σ can also be used to construct matrix
representations. Generally, in the oscillator limit, the SHA predicted matrix elements of Jz and
J± are accurate. However, in the su(2) limit, the SHA is only applicable for the lowest matrix
elements of Jz and J±. This is consistent with the fact that the quantities mo and σ can only be used in the prediction of the ground state eigenvectors, and not of the excited states, in the su(2) limit. However, because the Hamiltonian is approximately an element of the algebra, its spectral properties can be easily deduced.
Next, we will recapitulate how the various scaling properties associated with the size of the
system can be deduced.
On scaling
We observe that for the su(2) Hamiltonians discussed in the thesis, the size of the system is
given by the irrep label j. We have seen that properties, such as excitation energies and ground
state energies, scale approximately with the size of the system. Such scaling relations can
be systematically deduced using the SHA or the other approximation techniques mentioned.
With this knowledge, we can study the variation of these physical properties as the system
changes from a finite one to an infinite one. Table (4.4) gives a simple illustration of a possible
investigation of the CJH. There, we see how closely the excitation energy scales with the size
of the system for a given set of parameters. Using the ESHA, accurate estimations of the
excitation energies can still be obtained when exact diagonalization becomes computationally
intractable.
In the case of the su(1, 1) quartic oscillator, the size of the system is given by the mass
parameter M and notably the excitation energies are approximately independent of M for both
the spherical vibrator³ and the rotor-vibrator phases. See equations (4.61) and (4.71). On the other
hand, the ground state energy is also independent of M in the spherical vibrator phase but is
dependent on M in the rotor-vibrator phase. See equations (4.60) and (4.70).
For the BCS Hamiltonian, the excitation energies are relatively constant as the number
of pairs in the system is varied with the rest of the parameters kept constant. There was
no discussion on this observation within the thesis but this can be verified by some simple
calculations using the SHA equations.
In the examples just mentioned, we see some potential use of the SHA in the investigation
of how physical properties scale with the size of the system, especially when the dimensions of
the Hilbert space increases with the size of the system.
Possible extensions of the SHA to Hamiltonians with other Spectrum Generating Algebras
The applicability of the SHA in the pairing model with an su(2)1⊕ su(2)2 . . . su(2)r algebra
suggests that the SHA can also be applied to algebras of higher ranks. Thus, for example, we
can explore extending the application of the SHA to models with algebras such as so(4), su(3)
by adapting the present set of approximations and assumptions.
The so(4) algebra has a similar root diagram to su(2)1 ⊕ su(2)2, so the SHA can be applied immediately. On the other hand, for su(3) the angle between the roots is 60°, so extra consideration is needed. As with the su(2) algebra, which contracts to an oscillator algebra, the su(3) algebra also has a contraction limit, but of the rotor-vibrator type [68]. This contraction limit is a possible starting point for a preliminary investigation.
In general, many-body systems which are spatially confined are expected to be restricted to translational, rotational and vibrational types of motion. We can expect their corresponding spectrum generating algebras to contract to limits associated with these motions. These contraction limits will give an indication of how an approximation such as the SHA can be made for such algebras.
The successful application of the SHA to the multi-level BCS pairing model also indicates how we can systematically approximate the low-lying eigenstates of models with su(2)1 ⊕ su(2)2 ⊕ . . . ⊕ su(2)r spectrum generating algebras with large spins. A large class of models in condensed matter physics and statistical physics have the same spectrum generating algebra, but
³We note that when the Hamiltonian is a pure vibrator, i.e. α = 0 in equation (4.28), the excitation energy is independent of M. This independence is due to a choice of parametrization, in which the excitation energy is a separate parameter.
each copy of the su(2) algebra, associated with a site in a lattice, typically has a spin j = 1/2. As in the pairing model, the Hilbert spaces of these models are extremely large. Therefore, if the SHA can be adapted to make the approximations for models with non-degenerate small spins, it will be applicable to a wide class of problems.
There also exist other models which explicitly involve both su(2) and hw(1) operators. One such model, of current interest, is the Bose-Hubbard model. Essentially, this model is similar to the CJH in chapter 4, except that the bosonic atoms are in a multi-well potential. The Bose-Hubbard model under consideration can comprise arrays of multi-well potentials in one, two or three dimensions. The dimension of the Hilbert space of the Bose-Hubbard model is huge and grows exponentially with the number of atoms and wells. In this model, the tunnelling terms between neighbouring wells p and q are given by a†_p a_q + a†_q a_p, where a and a† are oscillator creation and annihilation operators. In the CJH, with just a two-well potential, the operators a†_1 a_2 + a†_2 a_1 in equation (4.2) can simply be regarded as an su(2) Jx operator in the Hamiltonian. However, this simplification is not applicable in a multi-well model. To apply the present scheme of the SHA to the Bose-Hubbard model, a further approximation can be made by deforming the operators a_p to J_{p−} and a†_q to J_{q+}. This is another possible way of extending the SHA to a wider class of models.
Other possible applications of the SHA
The application of the SHA is not limited to approximating eigenvalues or eigenvectors. For example, the SHA can be used to approximate Clebsch-Gordan coefficients. This can be done by combining the angular momenta of two systems, say J1 and J2. The total angular momentum of the entire system is given by J = J1 + J2. The basis states for the entire system can be expressed either as |j1j2; m1m2〉 or as |j1j2; jm〉, where j labels the angular momentum and m its z-component. Essentially, we have an su(2) ⊕ su(2) system, and the SHA can be used to approximate the Clebsch-Gordan coefficients that relate the two different bases.
Through the examples in this thesis, we have seen the effectiveness and versatility of the
SHA and in this conclusion, we have also seen many possibilities for future developments.
Bibliography
[1] L. L. A. Adams, B. W. Lang, Y. Chen, and A. M. Goldman, Signatures of Random Matrix
Theory in the Discrete Energy Spectra of Subnanosize Metallic Clusters, Phys. Rev. B 75
(2007), 205107.
[2] M. Albiez, R. Gati, J. Folling, S. Hunsmann, M. Cristiani, and M. K. Oberthaler, Di-
rect Observation of Tunneling and Nonlinear Self-Trapping in a Single Bosonic Josephson
Junction, Phys. Rev. Lett. 95 (2005), no. 1, 010402.
[3] P. W. Anderson, Random-Phase Approximation in the Theory of Superconductivity, Phys.
Rev. 112 (1958), no. 6, 1900–1916.
[4] J. A. Annett, Superconductivity, Superfluids and Condensates, Oxford University Press,
UK, 2006.
[5] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Theory of Superconductivity, Phys. Rev.
108 (1957), no. 5, 1175–1204.
[6] C. T. Black, D. C. Ralph, and M. Tinkham, Spectroscopy of the Superconducting Gap in
Individual Nanometer-Scale Aluminum Particles, Phys. Rev. Lett. 76 (1996), 688–691.
[7] N. N. Bogoliubov, A New Method in the Theory of Superconductivity 1, Soviet Phys. JETP-
USSR 7 (1958).
[8] D. Bohm and D. Pines, A Collective Description of Electron Interactions: III. Coulomb
Interactions in a Degenerate Electron Gas, Phys. Rev. 92 (1953), no. 3, 609–625.
[9] A. Bohr and B. R. Mottelson, Nuclear Structure (Vol I): Single-Particle Motion, (Vol II):
Nuclear Deformation, World Scientific, Singapore, Reprinted 1999.
[10] A. Bohr, B. R. Mottelson, and D. Pines, Possible Analogy between the Excitation Spectra
of Nuclei and Those of the Superconducting Metallic State, Phys. Rev. 110 (1958), no. 4,
936–938.
[11] P. A. Braun, Discrete Semiclassical Methods in the Theory of Rydberg Atoms in External
Fields, Rev. Mod. Phys. 65 (1993), no. 1, 115–161.
[12] P. A. Braun, A. M. Shirokov, and Y. F. Smirnov, On the Level Clustering in the Spectra of
the Non-Rigid Spherical Top Molecules, Mol. Phys. 56 (1985), no. 3, 573–587.
[13] H. Chen, J. R. Brownstein, and D. J. Rowe, Model of a Superconducting Phase-Transition,
Phys. Rev. C 42 (1990), 1422–1431.
[14] L. N. Cooper, Bound Electron Pairs in a Degenerate Fermi Gas, Phys. Rev. 104 (1956),
no. 4, 1189–1190.
[15] E. A. Cornell and C. E. Wieman, Nobel Lecture: Bose-Einstein Condensation in a Dilute
Gas, the First 70 years and Some Recent Experiments, Rev. Mod. Phys. 74 (2002), no. 3,
875–893.
[16] E. A. Cornell and C. E. Wieman, The Bose-Einstein Condensate, Scientific American
March (1998), 40–45.
[17] F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, Theory of Bose-Einstein con-
densation in Trapped Gases, Rev. Mod. Phys. 71 (1999), no. 3, 463–512.
[18] S. De Baerdemacker, The Geometrical Bohr-Mottelson Model: Analytic Solutions and an
Algebraic Cartan-Weyl Perspective, Universiteit Gent, Belgium, 2007.
[19] R. M. Dreizler and A. Klein, Concept of Transition - Operator Boson and Its Application
to an Exactly Soluble Model, Phys. Rev. C 7 (1973), 512–521.
[20] J. Dukelsky, C. Esebbag, and P. Schuck, Class of Exactly Solvable Pairing Models, Phys.
Rev. Lett. 87 (2001), no. 6, 066403.
[21] J. Dukelsky, S. Pittel, and G. Sierra, Colloquium: Exactly solvable Richardson-Gaudin
models for many-body quantum systems, Rev. Mod. Phys. 76 (2004), no. 3, 643–662.
[22] S. Dusuel and J. Vidal, Continuous Unitary Transformations and Finite-size Scaling Ex-
ponents in the Lipkin-Meshkov-Glick model, Phys. Rev. B 71 (2005), 224420.
[23] F. J. Dyson, General Theory of Spin-Wave Interactions, Phys. Rev. 102 (1956), no. 5,
1217–1230.
[24] A. Dzhioev, Z. Aouissat, A. Storozhenko, A. Vdovin, and J. Wambach, Extended Holstein-
Primakoff Mapping for the Next-to-leading Order of the 1/N Expansion at Finite Temper-
ature, Phys. Rev. C 69 (2004), 014318.
[25] H. Fröhlich, Isotope Effect in Superconductivity, Proc. Phys. Soc., Sect. A 63 (1950),
no. 7, 778.
[26] R. Gati and M. K. Oberthaler, A Bosonic Josephson Junction, J. Phys. B 40 (2007),
no. 10, R61–R89.
[27] I. Giaever and H. R. Zeller, Superconductivity of Small Tin Particles Measured by Tunnel-
ing, Phys. Rev. Lett. 20 (1968), no. 26, 1504–1507.
[28] S. Giovanazzi, A. Smerzi, and S. Fantoni, Josephson Effects in Dilute Bose-Einstein Con-
densates, Phys. Rev. Lett. 84 (2000), no. 20, 4521–4524.
[29] A. J. Glick, H. J. Lipkin, and N. Meshkov, Validity of Many-body Approximation Methods
for a Solvable Model. III. Diagram Summations, Nuclear Physics 62 (1965), 211–224.
[30] E. P. Gross, Structure of a Quantized Vortex in Boson Systems, Il Nuovo Cimento Series
10 20 (1961), no. 3, 454–477.
[31] K. T. Hecht, The Vector Coherent State Method and its Application to Problems of Higher
Symmetries, Springer-Verlag, Berlin-Heidelberg, 1987.
[32] D. L. Hill and J. A. Wheeler, Nuclear Constitution and the Interpretation of Fission
Phenomena, Phys. Rev. 89 (1953), no. 5, 1102–1145.
[33] S. Y. Ho, G. Rosensteel, and D. J. Rowe, Equations-of-Motion Approach to Quantum
Mechanics: Application to a Model Phase Transition, Phys. Rev. Lett. 98 (2007), 080401.
[34] T. Holstein and H. Primakoff, Field Dependence of the Intrinsic Domain Magnetization of
a Ferromagnet, Phys. Rev. 58 (1940), no. 12, 1098–1113.
[35] A. K. Kerman and A. Klein, Generalized Hartree-Fock Approximation for the Calculation
of Collective States of a Finite Many-Particle System, Phys. Rev. 132 (1963), no. 3, 1326–
1342.
[36] A. K. Kerman, R. D. Lawson, and M. H. Macfarlane, Accuracy of the Superconductivity
Approximation for Pairing Forces in Nuclei, Phys. Rev. 124 (1961), no. 1, 162–167.
[37] A. Klein and E. R. Marshalek, Boson Realizations of Lie algebras with Applications to
Nuclear Physics, Rev. Mod. Phys. 63 (1991), no. 2, 375–558.
[38] K. S. Krane, Introductory Nuclear Physics, John Wiley & Sons, USA, 1987.
[39] M. Le Bellac, Quantum Physics, Cambridge University Press, UK, 2006.
[40] A. J. Leggett, BEC: The Alkali Gases from the Perspective of Research on Liquid Helium,
The sixteenth international conference on atomic physics 477 (1999), no. 1, 154–169.
[41] A. J. Leggett, Quantum Liquids, Oxford University Press, UK, 2006.
[42] A. J. Leggett, Bose-Einstein Condensation in the Alkali Gases: Some fundamental con-
cepts, Rev. Mod. Phys. 73 (2001), no. 2, 307–356.
[43] V. S. Letokhov, Laser Control of Atoms and Molecules, Oxford University Press, UK, 2007.
[44] S. Levy, E. Lahoud, I. Shomroni, and J. Steinhauer, The a.c. and d.c. Josephson effects in
a Bose-Einstein condensate, Nature 449 (2007), no. 7162, 579.
[45] C. T. Li, A. Klein, and F. Krejs, Matrix Mechanics as a Practical Tool in Quantum Theory:
The Anharmonic Oscillator, Phys. Rev. D 12 (1975), no. 8, 2311–2324.
[46] H. J. Lipkin, N. Meshkov, and A. J. Glick, Validity of Many-body Approximation Methods
for a Solvable Model. I. Exact Solutions and Perturbation Theory, Nuclear Physics 62
(1965), 188–198.
[47] R. D. Mattuck, A Guide to Feynman Diagrams in the Many-Body Problem, McGraw-Hill
Book Company, Berkshire, England, 1967.
[48] N. Meshkov, A. J. Glick, and H. J. Lipkin, Validity of Many-body Approximation Methods
for a Solvable Model. II. Linearization Procedures, Nuclear Physics 62 (1965), 199–210.
[49] J. C. Parikh and D. J. Rowe, Investigation of Ground-State Correlations for a Model
Hamiltonian of the Nucleus, Phys. Rev. 175 (1968), no. 4, 1293–1300.
[50] L. P. Pitaevskii, Vortex Lines in an Imperfect Bose Gas, Soviet Phys. JETP-USSR 13
(1961), no. 2, 451–454.
[51] D. C. Ralph, S. Guéron, C. T. Black, and M. Tinkham, Electron Energy Levels in Super-
conducting and Magnetic Nanoparticles, Physica B 280 (2000), no. 1-4, 420–424.
[52] P. L. Richards and M. Tinkham, Far-Infrared Energy Gap Measurements in Bulk Super-
conducting In, Sn, Hg, Ta, V, Pb, and Nb, Phys. Rev. 119 (1960), 575–590.
[53] R. W. Richardson, Application of the Exact Theory of the Pairing Model to Some Even
Isotopes of Lead, Phys. Lett. 5 (1963), no. 1, 82–84.
[54] R. W. Richardson, A Restricted Class of Exact Eigenstates of the Pairing-force Hamilto-
nian, Phys. Lett. 3 (1963), no. 6, 277–279.
[55] R. W. Richardson, Exactly Solvable Many-Boson Model, J. Math. Phys. 9 (1968), no. 9,
1327–1343.
[56] R. W. Richardson and N. Sherman, Exact Eigenstates of the Pairing-Force Hamiltonian,
Nucl. Phys. 52 (1964), 221–238.
[57] P. Ring and P. Schuck, The Nuclear Many-Body Problem, Springer-Verlag, New York,
1980.
[58] N. I. Robinson, R. F. Bishop, and J. Arponen, Extended Coupled-cluster Method. IV.
An Excitation Energy Functional and Applications to the Lipkin Model, Phys. Rev. A 40
(1989), 4256–4276.
[59] S. Rombouts, D. Van Neck, and J. Dukelsky, Solving the Richardson Equations for
Fermions, Phys. Rev. C 69 (2004), no. 6, 061303.
[60] G. Rosensteel, Analytic Formulae for Interacting Boson Model Matrix Elements in the
SU(3) basis, Phys. Rev. C 41 (1990), no. 2, 730–735.
[61] G. Rosensteel and D. J. Rowe, Seniority-conserving Forces and USp(2j + 1) Partial Dy-
namical Symmetry, Phys. Rev. C 67 (2003), no. 1, 014303.
[62] G. Rosensteel, D. J. Rowe, and S. Y. Ho, Equations-of-Motion for a Spectrum-generating
Algebra: Lipkin-Meshkov-Glick model, J. Phys. A 41 (2008), 025208.
[63] R. Roth and P. Navratil, Ab Initio Study of 40Ca with an Importance-Truncated No-Core
Shell Model, Phys. Rev. Lett. 99 (2007), no. 9, 092501.
[64] D. J. Rowe, Equations-of-Motion Method and the Extended Shell Model, Rev. Mod. Phys.
40 (1968), no. 1, 153–166.
[65] D. J. Rowe, Nuclear Collective Motion: Models and Theory, Methuen and Co. Ltd., Lon-
don, 1970.
[66] D. J. Rowe, Microscopic Theory of the Nuclear Collective Model, Rep. Prog. Phys. 48
(1985), no. 10, 1419–1480.
[67] D. J. Rowe, Phase Transitions and Quasidynamical Symmetry in Nuclear Collective Mod-
els: I. The U(5) to O(6) Phase Transition in the IBM, Nucl. Phys. A 745 (2004), no. 1-2,
47–78.
[68] D. J. Rowe, R. Le Blanc, and J. Repka, A Rotor Expansion of the su(3) Lie algebra, J.
Phys. A: Gen. Phys. 22 (1989), no. 8, L309–L316.
[69] D. J. Rowe and P. S. Turner, The Algebraic Collective Model, Nuclear Physics A 753
(2005), no. 1-2, 94–105.
[70] D. J. Rowe, P. S. Turner, and G. Rosensteel, Scaling Properties and Asymptotic Spectra of
Finite Models of Phase Transitions as they approach Macroscopic Limits, Phys. Rev. Lett.
93 (2004), no. 23.
[71] G. Scharff-Goldhaber, C. B. Dover, and A. L. Goodman, The Variable Moment of Inertia
(VMI) Model and Theories of Nuclear Collective Motion, Annu. Rev. Nucl. Sci. 26 (1976),
239–317.
[72] B. Singh, Nuclear data sheets for A = 164, Nuclear Data Sheets 93 (2001), 243–445.
[73] A. Smerzi, S. Fantoni, S. Giovanazzi, and S. R. Shenoy, Quantum Coherent Atomic Tunnel-
ing between Two Trapped Bose-Einstein Condensates, Phys. Rev. Lett. 79 (1997), no. 25,
4950–4953.
[74] P. S. Turner and D. J. Rowe, Phase Transitions and Quasidynamical Symmetry in Nuclear
Collective Models. II. The Spherical Vibrator to Gamma-soft Rotor Transition in an SO(5)-
invariant Bohr Model, Nucl. Phys. A 756 (2005), no. 3-4, 333–355.
[75] V. V. Ulyanov and O. B. Zaslavskii, New Methods in the Theory of Quantum Spin Systems,
Phys. Rep. 216 (1992), no. 4, 179–251.
[76] J. G. Valatin, Comments on the Theory of Superconductivity, Il Nuovo Cimento Series 10
7 (1958), no. 6, 843–857.
[77] J. Von Delft and D. C. Ralph, Spectroscopy of Discrete Energy Levels in Ultrasmall Metallic
Grains, Phys. Rep. 345 (2001), no. 2-3, 61–173.
[78] X. G. Wen, Quantum Field Theory of Many-body Systems, Oxford Graduate Texts, Oxford,
2004.
[79] E. P. Wigner, Group Theory and its Application to the Quantum Mechanics of Atomic
Spectra, Academic Press, New York and London, 1959.
[80] O. B. Zaslavskii and V. V. Ulyanov, Periodic Effective Potentials for Spin Systems and
New Exact Solutions of the One-Dimensional Schrödinger Equation for the Energy Bands,
Theor. Math. Phys. 71 (1987), no. 2, 520–528.