Numerical Simulation in Statistical Physics

Common lecture for the Master 2 programs "Theoretical Physics of Complex Systems" and "Modeling, Statistics and Algorithms for Out-of-equilibrium Systems"

Pascal Viot
Laboratoire de Physique Théorique de la Matière Condensée, Boîte 121, 4, Place Jussieu, 75252 Paris Cedex 05
Email: [email protected]

December 6, 2010





These lecture notes provide an introduction to the methods of numerical simulation in classical statistical physics.

Based on simple models, the first part covers the basics of the Monte Carlo method and of Molecular Dynamics. A second part is devoted to the introduction of the basic microscopic quantities available in simulation methods, and to the description of methods for studying phase transitions. In a third part, we consider the study of out-of-equilibrium systems as well as the characterization of the dynamics: in particular, aging phenomena are presented.



Chapter 1

Statistical mechanics and numerical simulation

Contents
1.1 Brief History of simulation
1.2 Ensemble averages
    1.2.1 Microcanonical ensemble
    1.2.2 Canonical ensemble
    1.2.3 Grand canonical ensemble
    1.2.4 Isothermal-isobaric ensemble
1.3 Model systems
    1.3.1 Introduction
    1.3.2 Simple liquids
    1.3.3 Ising model and lattice gas. Equivalence
1.4 Conclusion
1.5 Exercises
    1.5.1 ANNNI Model
1.6 Blume-Capel model
    1.6.1 Potts model

1.1 Brief History of simulation

Numerical simulation started in the fifties, when computers were used for the first time for peaceful purposes. In particular, the MANIAC computer started in 1952 [1] at Los Alamos. Simulation provides a complementary approach to theoretical methods [2]. Areas of physics where perturbative approaches are efficient (dilute gases, vibrations in quasi-harmonic solids) do not require simulation methods. Conversely, liquid state physics, where few exact results are known and where the theoretical developments are not always under control, has been developed largely through simulation. The first Monte Carlo simulation of liquids was performed by Metropolis et al. in 1953 [3].

[1] MANIAC is an acronym for "Mathematical Analyzer, Numerical Integrator And Computer". MANIAC I started on March 15, 1952.

The first Molecular Dynamics simulation was performed on the hard disk model by Alder and Wainwright in 1957 (Alder and Wainwright [1957]). The first Molecular Dynamics simulation of a simple liquid (argon) was performed by Rahman in 1964.

In the last two decades, the increasing power of computers, together with their decreasing cost, has made numerical simulation possible on personal computers. Even if supercomputers remain necessary for extensive simulations, it has become possible to perform simulations on low-cost machines. The power of a computer is commonly measured in GFlops (billions of floating-point operations per second). Nowadays, a personal computer (for instance, with an Intel Core i7) has a processor with four cores (floating-point units) and offers a power of 24 GFlops. Whereas the clock frequency of processors seems to have plateaued in the last few years, the power of computers continues to increase because the number of cores per processor grows; processors with 6 or 8 cores are now available. In general, the performance of a program is not a linear function of the number of cores. For scientific codes, this parallelism can be exploited by developing programs that incorporate libraries for parallel computing (OpenMP, MPI). The GNU compilers are free software allowing for parallel computing. For massive parallelism, MPI (Message Passing Interface) is a library which spreads the computing load over many cores.

Table 1.1 gives the characteristics and power of the most powerful computers in the world.

It is worth noting the rapid evolution of the power of computers. In 2009, only one computer exceeded the PFlops, whereas there are four this year. In the same period, the IDRIS computer went from rank 9 last year to rank 38 this year. Note that China owns the second most powerful computer in the world. A last remark concerns the operating system of the 500 fastest computers: Linux and associated distributions, 484; Windows, 5; others, 11.

[2] Sometimes, theories are in their infancy and numerical simulation is the sole manner of studying models.

[3] Metropolis, Nicholas Constantine (1915-1999), both a mathematician and a physicist by education, was hired by J. Robert Oppenheimer at the Los Alamos National Laboratory in April 1943. He was one of the scientists of the Manhattan Project and collaborated with Enrico Fermi and Edward Teller on the first nuclear reactors. After the war, Metropolis went back to Chicago as an assistant professor, and returned to Los Alamos in 1948, creating the Theoretical Division. He built the MANIAC computer in 1952, then MANIAC II five years later. From 1957 to 1965 he was back in Chicago, where he founded the Computer Research division, and he finally returned to Los Alamos.


Rank  Machine / Cores        Rmax (TFlops)  Rpeak (TFlops)  Site, Country           Year
1     Cray XT5 / 224162      1759           2331            Oak Ridge Nat. Lab., USA  2009
2     Dawning / 120640       1271           2984            NSCS, China               2010
3     Roadrunner / 122400    1042           1375            DOE, USA                  2009
4     Cray XT5 / 98928       831            1028            NICS, USA                 2009
5     Blue Gene/P / 212992   825            1002            FZJ, Germany              2009
...   ...                    ...            ...             ...                       ...
18    SGI / 23040            237            267             France                    2010

Table 1.1 – June 2010 ranking of supercomputers

1.2 Ensemble averages

Knowledge of the partition function of a system allows one to obtain all thermodynamic quantities. We first briefly review the main ensembles used in Statistical Mechanics. We assume that all ensembles lead to the same quantities in the thermodynamic limit, a property that holds for systems where the interactions between particles are not long-ranged and for systems without quenched disorder.

For finite-size systems (which are those studied in computer simulation), there are differences that we will analyze in the following.

1.2.1 Microcanonical ensemble

The system is characterized by the set of macroscopic variables: volume V, total energy E and number of particles N. This ensemble is not appropriate for experimental studies, where one generally has:

• a fixed number of particles N, pressure P and temperature T; this corresponds to the set of variables (N, P, T) and to the isothermal-isobaric ensemble;

• a fixed chemical potential µ, volume V and temperature T; the set (µ, V, T) corresponds to the grand canonical ensemble;

• a fixed number of particles N, volume V and temperature T; the set (N, V, T) corresponds to the canonical ensemble.

There is a Monte Carlo method for the microcanonical ensemble, but it is rarely used, in particular for molecular systems. Conversely, the microcanonical ensemble is the natural ensemble for the Molecular Dynamics of Hamiltonian systems, where the total energy is conserved during the simulation.

The variables conjugate to the global quantities defining the ensemble fluctuate in time. For the microcanonical ensemble, these are the pressure P, conjugate to the volume V; the temperature T, conjugate to the energy E; and the chemical potential µ, conjugate to the total number of particles N.

1.2.2 Canonical ensemble

The system is characterized by the following set of variables: the volume V, the temperature T and the total number of particles N. If we denote by H the Hamiltonian of the system, the partition function reads

    Q(V, β, N) = Σ_α exp(−βH(α))    (1.1)

where β = 1/k_B T (k_B is the Boltzmann constant). The sum runs over all configurations of the system, and α denotes the index of these configurations. If the configurations form a continuum, the sum is replaced with an integral. The free energy F(V, β, N) of the system is given by

    βF(V, β, N) = − ln(Q(V, β, N)).    (1.2)

One defines the probability of having a configuration α as

    P(V, β, N; α) = exp(−βH(α)) / Q(V, β, N).    (1.3)

One easily checks that the basic properties of a probability are satisfied, i.e. Σ_α P(V, β, N; α) = 1 and P(V, β, N; α) > 0.

The derivatives of the free energy are related to the moments of this probability distribution, which gives a microscopic interpretation of the macroscopic thermodynamic quantities. The mean energy and the specific heat are then given by:

• Mean energy

    U(V, β, N) = ∂(βF(V, β, N))/∂β    (1.4)
               = Σ_α H(α) P(V, β, N; α)    (1.5)
               = ⟨H(α)⟩    (1.6)

• Specific heat

    C_v(V, β, N) = −k_B β² ∂U(V, β, N)/∂β    (1.7)
                 = k_B β² [ Σ_α H²(α) P(V, β, N; α) − ( Σ_α H(α) P(V, β, N; α) )² ]    (1.8)
                 = k_B β² ( ⟨H(α)²⟩ − ⟨H(α)⟩² )    (1.9)
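As a concrete illustration (a sketch, not part of the original notes), the sums in Eqs. (1.1)-(1.9) can be evaluated exactly for a very small system. The Python snippet below enumerates all configurations of a one-dimensional Ising ring (the model of Eq. (1.30), with k_B = 1) and computes the mean energy and the specific heat from the fluctuation formula (1.9); the system size and parameters are arbitrary choices.

```python
import itertools
import math

def canonical_stats(L, J, beta):
    """Exact canonical averages for a 1D Ising ring of L spins,
    obtained by enumerating all 2^L configurations (the sum over
    alpha of Eq. (1.1))."""
    Z = sum_H = sum_H2 = 0.0
    for spins in itertools.product((-1, 1), repeat=L):
        # H(alpha) = -J sum_i S_i S_{i+1}, periodic boundary conditions
        H = -J * sum(spins[i] * spins[(i + 1) % L] for i in range(L))
        w = math.exp(-beta * H)        # Boltzmann weight of configuration alpha
        Z += w
        sum_H += H * w
        sum_H2 += H * H * w
    U = sum_H / Z                        # <H>, Eq. (1.6)
    Cv = beta**2 * (sum_H2 / Z - U**2)   # Eq. (1.9) with k_B = 1
    return U, Cv

U, Cv = canonical_stats(L=8, J=1.0, beta=0.5)
```

A convenient check of the enumeration: for L = 2 the ring reduces to a doubled bond, H = −2J S₁S₂, for which U = −2J tanh(2βJ).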


1.2.3 Grand canonical ensemble

The system is characterized by the following set of variables: the volume V, the temperature T and the chemical potential µ. Denoting by H_N the Hamiltonian of N particles, the grand partition function Ξ(V, β, µ) reads

    Ξ(V, β, µ) = Σ_{N=0}^{∞} Σ_{α_N} exp(−β(H_N(α_N) − µN))    (1.10)

where β = 1/k_B T (k_B is the Boltzmann constant); the sums run over all configurations α_N of N particles and over all numbers of particles from 0 to infinity. The grand potential is given by

    βΩ(V, β, µ) = − ln(Ξ(V, β, µ)).    (1.11)

In a similar way, one defines the probability distribution P(V, β, µ; α_N) of having a configuration α_N (with N particles) by the relation

    P(V, β, µ; α_N) = exp(−β(H_N(α_N) − µN)) / Ξ(V, β, µ).    (1.12)

The derivatives of the grand potential can be expressed as moments of the probability distribution:

• Mean number of particles

    ⟨N(V, β, µ)⟩ = −∂(βΩ(V, β, µ))/∂(βµ)    (1.13)
                 = Σ_N Σ_{α_N} N P(V, β, µ; α_N)    (1.14)

• Susceptibility

    χ(V, β, µ) = β/(⟨N(V, β, µ)⟩ρ) ∂⟨N(V, β, µ)⟩/∂(βµ)    (1.15)
               = β/(⟨N(V, β, µ)⟩ρ) [ Σ_N Σ_{α_N} N² P(V, β, µ; α_N) − ( Σ_N Σ_{α_N} N P(V, β, µ; α_N) )² ]    (1.16)
               = β/(⟨N(V, β, µ)⟩ρ) ( ⟨N²(V, β, µ)⟩ − ⟨N(V, β, µ)⟩² )    (1.17)


1.2.4 Isothermal-isobaric ensemble

The system is characterized by the following set of variables: the pressure P, the temperature T and the total number of particles N. Because this ensemble is generally devoted to molecular systems and is not used for lattice models, we only consider continuous systems here. The partition function reads

    Q(P, β, N) = (βP / (Λ^{3N} N!)) ∫_0^∞ dV exp(−βPV) ∫_V dr^N exp(−βU(r^N))    (1.18)

where β = 1/k_B T (k_B is the Boltzmann constant). The Gibbs potential is given by

    βG(P, β, N) = − ln(Q(P, β, N)).    (1.19)

One defines the probability Π(P, β, N; α_V) [4] of having a configuration α_V ≡ r^N (particle positions r^N) at temperature T and pressure P:

    Π(P, β, N; α_V) = exp(−βPV) exp(−βU(r^N)) / Q(P, β, N).    (1.20)

The derivatives of the Gibbs potential are expressed as moments of this probability distribution. For instance:

• Mean volume

    ⟨V(P, β, N)⟩ = ∂(βG(P, β, N))/∂(βP)    (1.21)
                 = ∫_0^∞ dV V ∫_V dr^N Π(P, β, N; α_V).    (1.22)

This ensemble is appropriate for simulations which aim at determining the equation of state of a system.
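As a sanity check (a sketch, not from the notes), Eq. (1.22) can be evaluated numerically for an ideal gas (U = 0), where the configurational integral gives Z_N(V) = V^N and the V-integrals reduce to Gamma integrals, so that ⟨V⟩ = (N + 1)/(βP). The quadrature parameters below are arbitrary choices.

```python
import math

def npt_mean_volume_ideal(N, betaP, Vmax, M=100_000):
    """<V> for an ideal gas (U = 0) in the NPT ensemble, Eq. (1.22):
    with Z_N(V) = V^N one has
    <V> = int_0^inf dV V^{N+1} e^{-beta P V} / int_0^inf dV V^N e^{-beta P V},
    evaluated here by a simple Riemann sum truncated at Vmax."""
    dV = Vmax / M
    num = den = 0.0
    for k in range(1, M + 1):
        V = k * dV
        w = V**N * math.exp(-betaP * V)   # weight of volume V
        num += V * w
        den += w
    return num / den
```

For N = 5 and βP = 2 this returns ⟨V⟩ ≈ 3.0 = (N + 1)/(βP): the "+1" is a finite-size effect of the ensemble, and only for large N does one recover the ideal gas law ⟨V⟩ ≈ N/(βP).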

Let us recall that a statistical ensemble cannot be defined from a set of intensive variables only. However, we will see in Chapter 7 that a technique called the Gibbs ensemble method is close in spirit to such an ensemble (with the difference that, in simulation, we always consider finite systems).

1.3 Model systems

1.3.1 Introduction

We restrict these lecture notes to classical statistical mechanics, which means that quantum systems are not considered here. In order to provide illustrations of the methods introduced later, we present several basic models that will be used repeatedly in the following.

[4] In order to avoid confusion with the pressure, this probability is denoted Π.


1.3.2 Simple liquids

A simple liquid is a system of N point particles, labeled from 1 to N, of identical mass m, interacting through an external potential U_1(r_i) and among themselves through a pairwise potential U_2(r_i, r_j) (i.e. a potential in which particles interact only by pairs). The Hamiltonian of this system reads

    H = Σ_{i=1}^{N} [ p_i²/(2m) + U_1(r_i) ] + (1/2) Σ_{i≠j} U_2(r_i, r_j),    (1.23)

where p_i is the momentum of particle i.

For instance, in the grand canonical ensemble, the partition function Ξ(µ, β, V) is given by

    Ξ(µ, β, V) = Σ_{N=0}^{∞} (1/N!) ∫ Π_{i=1}^{N} (d^d p_i d^d r_i) / h^{dN} exp(−β(H − µN))    (1.24)

where h is the Planck constant and d the space dimension.

The integral over the momenta can be computed analytically, because the multidimensional integral factorizes over the variables p_i, and the one-dimensional integral over each component of the momentum is a Gaussian integral. Introducing the thermal de Broglie length

    Λ_T = h / √(2πm k_B T),    (1.25)

one has

    ∫ (d^d p / h^d) exp(−βp²/(2m)) = 1/Λ_T^d.    (1.26)

The partition function can then be rewritten as

    Ξ(µ, β, V) = Σ_{N=0}^{∞} (1/N!) (e^{βµ}/Λ_T^d)^N Z_N(β, N, V)    (1.27)

where Z_N(β, N, V) is called the configuration integral:

    Z_N(β, N, V) = ∫ dr^N exp(−βU(r^N)).    (1.28)

One defines the fugacity z = e^{βµ}. The thermodynamic potential associated with the partition function, Ω(µ, β, V), is

    Ω(µ, β, V) = −(1/β) ln(Ξ(µ, β, V)) = −PV    (1.29)

where P is the pressure.

Note that, for classical systems, only the part of the partition function involving the potential energy is non-trivial. Moreover, the kinetic and potential parts decouple, contrary to quantum statistical mechanics.


1.3.3 Ising model and lattice gas. Equivalence

The Ising model is a lattice model where sites are occupied by particles with very limited degrees of freedom. The particle is characterized by a spin, which is a two-state variable (−1, +1). Each spin interacts with its nearest neighbors and with an external field H, if present. The Ising model, initially introduced for describing the behavior of para-ferromagnetic systems, can be used in many physical situations. This simple model can be solved analytically in several situations (one and two dimensions), and accurate results can be obtained in higher dimensions.

The Hamiltonian reads

    H = −J Σ_{<i,j>} S_i S_j    (1.30)

where < i, j > denotes a summation over nearest-neighbor sites and J is the interaction strength. If J > 0 the interaction is ferromagnetic and, conversely, if J < 0 the interaction is antiferromagnetic. The analytical solution in one dimension shows that there is no phase transition at finite temperature. In two dimensions, Onsager (1944) obtained the solution in the absence of an external field. In three dimensions, no analytical solution has been obtained, but theoretical developments and numerical simulations give the properties of the system in the magnetization-temperature phase diagram.
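The one-dimensional solution mentioned above is easy to check numerically. The sketch below (not part of the original notes) compares a brute-force enumeration of Eq. (1.30) on a ring with the standard transfer-matrix result Z = λ₊^L + λ₋^L, where λ± = 2cosh(βJ), 2sinh(βJ) in zero field; the free energy per spin, −(1/β) ln(2cosh βJ) in the thermodynamic limit, is analytic at all finite temperatures, hence no transition.

```python
import itertools
import math

def logZ_enumeration(L, J, beta):
    """ln Z for a 1D Ising ring by summing exp(-beta H) over all 2^L states."""
    Z = sum(
        math.exp(beta * J * sum(s[i] * s[(i + 1) % L] for i in range(L)))
        for s in itertools.product((-1, 1), repeat=L)
    )
    return math.log(Z)

def logZ_transfer(L, J, beta):
    """ln Z from the eigenvalues of the 2x2 transfer matrix (zero field):
    Z = lam_p**L + lam_m**L with lam_p = 2 cosh(beta J), lam_m = 2 sinh(beta J)."""
    lam_p = 2.0 * math.cosh(beta * J)
    lam_m = 2.0 * math.sinh(beta * J)
    return math.log(lam_p**L + lam_m**L)
```

Both routes agree to machine precision for any small L, β and J.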

The lattice gas model was introduced by Lee and Yang. The basic idea, greatly extended later, consists of assuming that the macroscopic properties of a system with a large number of particles do not depend crucially on the microscopic details of the interaction. By performing a coarse-graining of the microscopic system, one builds an effective model with a smaller number of degrees of freedom. This idea is often used in statistical physics, because it is often necessary to reduce the complexity of the original system, for two kinds of reasons: 1) practical: in a simpler model, theoretical treatments are more tractable and simulations can be performed with larger system sizes; 2) theoretical: macroscopic properties are almost independent of some microscopic degrees of freedom, and a local average is an efficient method for obtaining an effective model for the physical properties of the system. This approach underlies the existence of a certain universality, which is appealing to many physicists.

To go from the Hamiltonian of a simple liquid to a lattice gas model, we proceed in three steps. The first consists of rewriting the Hamiltonian by introducing a microscopic variable: this step is exact. The second step consists of performing a local average to define the lattice Hamiltonian; several approximations are made in this step, and it is essential to determine their validity. In the third step, several changes of variables are performed in order to transform the lattice gas Hamiltonian into a spin model Hamiltonian: this last step is again exact.


Rewriting of the Hamiltonian

First, let us express the Hamiltonian of the simple liquid as a function of the microscopic density [5]

    ρ(r) = Σ_{i=1}^{N} δ(r − r_i).    (1.31)

By using the property of the Dirac distribution

    ∫ f(x) δ(x − a) dx = f(a),    (1.32)

one obtains

    Σ_{i=1}^{N} U_1(r_i) = Σ_{i=1}^{N} ∫_V U_1(r) δ(r − r_i) d^d r = ∫_V U_1(r) ρ(r) d^d r    (1.33)

and, in a similar way,

    Σ_{i≠j} U_2(r_i, r_j) = Σ_{i≠j} ∫_V U_2(r, r_j) δ(r − r_i) d^d r    (1.34)
                          = Σ_{i≠j} ∫_V ∫_V U_2(r, r′) δ(r − r_i) δ(r′ − r_j) d^d r d^d r′    (1.35)
                          = ∫_V ∫_V U_2(r, r′) ρ(r) ρ(r′) d^d r d^d r′.    (1.36)

Local average

The volume V of the simple liquid is divided into N_c cells such that the probability of finding more than one particle center per cell is negligible [6] (typically, this means that the diagonal of each cell is slightly smaller than the particle diameter; see Fig. 1.1). Let us denote by a the linear size of a cell; for each particle one then has

    ∫_V d^d r_i = a^d Σ_{α=1}^{N_c},    (1.37)

which gives N_c = V/a^d. The lattice Hamiltonian is

    H = Σ_{α=1}^{N_c} U_1(α) n_α + (1/2) Σ_{α,β} U_2(α, β) n_α n_β    (1.38)

[5] The local density is obtained by performing a local average that leads to a smooth function.
[6] The particle is an atom or a molecule with its own characteristic length scale.


Figure 1.1 – Particle configuration of a two-dimensional simple liquid. The grid represents the cells used for the local average. Each cell can accommodate zero or one particle center.

where n_α is a Boolean variable, namely n_α = 1 when a particle center is within cell α, and 0 otherwise. Note that the index α of this new Hamiltonian is associated with cells, whereas the index of the original Hamiltonian is associated with particles. One obviously has U_2(α, α) = 0 (no self-energy), because there is no particle overlap. Since the interaction between particles is short-ranged, U_2(r) is also short-ranged, and it is replaced with an interaction between nearest-neighbor cells:

    H = Σ_{α=1}^{N_c} U_1(α) n_α + U_2 Σ_{<α,β>} n_α n_β.    (1.39)

The factor 1/2 disappears because the bracket < α, β > counts each distinct pair only once.


Equivalence with the Ising model

We consider the previous lattice gas model in the grand canonical ensemble. The relevant quantity is then

    H − µN = Σ_α (U_1(α) − µ) n_α + U_2 Σ_{<α,β>} n_α n_β.    (1.40)

Let us introduce the spin variables

    S_α = 2n_α − 1.    (1.41)

As expected, the spin variable is equal to +1 when a site is occupied by a particle (n_α = 1) and −1 when the site is unoccupied (n_α = 0). One then obtains

    Σ_α (U_1(α) − µ) n_α = (1/2) Σ_α (U_1(α) − µ) S_α + (1/2) Σ_α (U_1(α) − µ)    (1.42)

and

    U_2 Σ_{<α,β>} n_α n_β = (U_2/4) Σ_{<α,β>} (1 + S_α)(1 + S_β)    (1.43)
                          = (U_2/4) ( N_c c/2 + c Σ_α S_α + Σ_{<α,β>} S_α S_β )    (1.44)

where c is the coordination number (number of nearest neighbors) of the lattice. Therefore, this gives

    H − µN = E_0 − Σ_α H_α S_α − J Σ_{<α,β>} S_α S_β    (1.45)

with

    E_0 = N_c ( (⟨U_1(α)⟩ − µ)/2 + U_2 c/8 ),    (1.46)

where ⟨U_1(α)⟩ is the average of U_1 over the sites,

    H_α = (µ − U_1(α))/2 − cU_2/4    (1.47)

and

    J = −U_2/4,    (1.48)

where J is the interaction strength. Finally, one gets

    Ξ_gas(µ, V, β) = e^{−βE_0} Q_Ising(H, β, J, N_c).    (1.49)


This analysis confirms that the partition function of the Ising model in the canonical ensemble is in one-to-one correspondence with the partition function of the lattice gas in the grand canonical ensemble. One can easily show that the partition function of the Ising model with the constraint of a constant total magnetization corresponds to the canonical partition function of the lattice gas model.

Some comments on these results. First, one checks that if the interaction is attractive, U_2 < 0, one has J > 0, which corresponds to a ferromagnetic interaction. Because the interaction is divided by 4, the critical temperature of the Ising model (in units of J) is four times higher than that of the lattice gas model (in units of U_2). Note also that the equivalence concerns the configuration integral and not the initial partition function. This means that, as for the Ising model, the lattice gas does not have an intrinsic microscopic dynamics, unlike the original liquid model. Moreover, the coarse-graining has added a symmetry between particles and holes which does not exist in the liquid model: for the lattice gas, this leads to a symmetric coexistence curve, ρ_liq = 1 − ρ_gas, and to a critical density equal to 1/2. By comparison, the Lennard-Jones liquid has a critical packing fraction close to 0.3 in three dimensions, and the coexistence curves of real liquids are not symmetric between the liquid and gas phases.
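The equivalence (1.49) can also be verified numerically on a small system. The following sketch (not part of the notes; parameters are arbitrary and U_1 = 0) enumerates a ring of N_c cells (coordination c = 2) and compares the lattice-gas grand partition function with e^{−βE_0} Q_Ising built from Eqs. (1.46)-(1.48).

```python
import itertools
import math

def check_mapping(Nc=6, U2=-1.0, mu=0.3, beta=0.8):
    """Check Eq. (1.49) on a ring of Nc cells (coordination c = 2, U1 = 0)."""
    c = 2
    J = -U2 / 4.0                          # Eq. (1.48)
    h = mu / 2.0 - c * U2 / 4.0            # field H_alpha, Eq. (1.47) with U1 = 0
    E0 = Nc * (-mu / 2.0 + U2 * c / 8.0)   # Eq. (1.46) with <U1> = 0
    # Lattice gas: grand partition function from Eqs. (1.39)-(1.40)
    Xi = 0.0
    for n in itertools.product((0, 1), repeat=Nc):
        E = U2 * sum(n[i] * n[(i + 1) % Nc] for i in range(Nc))
        Xi += math.exp(-beta * (E - mu * sum(n)))
    # Ising model: canonical partition function with field h, Eq. (1.45) without E0
    Q = 0.0
    for s in itertools.product((-1, 1), repeat=Nc):
        E = -h * sum(s) - J * sum(s[i] * s[(i + 1) % Nc] for i in range(Nc))
        Q += math.exp(-beta * E)
    return Xi, math.exp(-beta * E0) * Q
```

Both return values coincide to machine precision, confirming the mapping term by term.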

Before starting a simulation, it is useful to have an estimate of the phase diagram, in order to choose the simulation parameters correctly. Mean-field theory gives a first approximation, for instance, of the critical temperature. For the Ising model (lattice gas), one obtains

    T_c = cJ = −cU_2/4.    (1.50)

As expected, the mean-field approximation overestimates the critical temperature, because fluctuations are neglected: this neglect allows order to persist up to a temperature higher than the exact critical temperature of the system. Unlike the critical exponents, which do not depend on the lattice, the critical temperature depends on the details of the system. However, the larger the coordination number, the closer the mean-field value is to the exact critical temperature.
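For the two-dimensional square lattice (c = 4, k_B = 1), the overestimate can be quantified against Onsager's exact result T_c = 2J/ln(1 + √2) ≈ 2.269 J. A minimal sketch (not from the notes):

```python
import math

J = 1.0
tc_mean_field = 4 * J                             # Eq. (1.50) with c = 4
tc_onsager = 2 * J / math.log(1 + math.sqrt(2))   # exact 2D result, ~2.269 J
# The mean-field estimate exceeds the exact value by a factor of about 1.76.
```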

1.4 Conclusion

In this chapter, we have outlined the preliminary scheme to follow before a simulation: 1) select the appropriate ensemble for studying the model; 2) when the microscopic model is very complicated, perform a coarse-graining procedure leading to an effective model that is more amenable to theoretical treatment and/or more efficient for simulation. As we will see in the following chapters, refined Monte Carlo methods consist of including more and more knowledge of Statistical Mechanics, which finally leads to more efficient simulation. I therefore recommend reading several textbooks on Statistical Physics: Chandler [1987], Goldenfeld [1992], Hansen and McDonald [1986], Young [1998]. In addition to these lecture notes, several textbooks or reviews are available on simulation: Binder [1997], Frenkel and Smit [1996], Krauth [2006], Landau and Binder [2000].

1.5 Exercises

1.5.1 ANNNI Model

Frustrated systems are generally characterized by a large degeneracy of the ground state. This property can be illustrated with a simple model, the ANNNI model (Axial Next Nearest Neighbor Ising). One considers a one-dimensional lattice of unit spacing and length N, with periodic boundary conditions and an Ising variable on each lattice site. Spins interact through a ferromagnetic nearest-neighbor interaction J_1 > 0 and an antiferromagnetic next-nearest-neighbor interaction J_2 < 0. The Hamiltonian reads

    H = −J_1 Σ_{i=1}^{N} S_i S_{i+1} − J_2 Σ_{i=1}^{N} S_i S_{i+2}    (1.51)

with S_{N+1} = S_1 and S_{N+2} = S_2.

♣ Q. 1.5.1-1 Consider the staggered spin configuration S_i = (−1)^i. Calculate the energy per spin of this configuration. Infer that this configuration cannot be the ground state of the ANNNI model, whatever the values of J_1 and J_2. The calculation should be done for a lattice with an even number of sites.

♣ Q. 1.5.1-2 Calculate the energy per spin for a configuration of aligned spins (configuration A) and for a periodic configuration of alternated spins of period 4 (configuration B), corresponding to a sequence of two spins up, two spins down, two spins up, etc. The calculation should be done for a lattice whose length is a multiple of 4.

♣ Q. 1.5.1-3 Show that configuration A has a lower energy than configuration B for a ratio κ = −J_2/J_1 to be determined.

In the case where κ = 1/2, one admits that the ground state is strongly degenerate and that the associated configurations are sequences of aligned spins of length k ≥ 2.

♣ Q. 1.5.1-4 Starting from one of these configurations on a lattice of length L − 1, one inserts at the end of the lattice one site with a spin S_L. What must the sign of this spin be with respect to S_{L−1} in order that the new configuration belongs to the ground state of a lattice of length L?


♣ Q. 1.5.1-5 Starting from one of these configurations on a lattice of length L − 2, one inserts at the end of the lattice two sites with two spins S_{L−1} and S_L of the same sign. What must the sign of these two spins be with respect to S_{L−2} in order that the new configuration belongs to the ground state of a lattice of length L and differs from any configuration generated by the previous process?

♣ Q. 1.5.1-6 Let us denote by D_L the ground-state degeneracy of a lattice of size L. By using the previous results, justify the relation

    D_L = D_{L−1} + D_{L−2}.    (1.52)

♣ Q. 1.5.1-7 Show that, for large values of L, one has

    D_L ∼ (a_F)^L    (1.53)

where a_F is a constant to be determined.

The entropy per spin of the ground state in the thermodynamic limit is defined as

    S_0 = lim_{L→∞} (k_B/L) ln(D_L).    (1.54)

♣ Q. 1.5.1-8 Calculate this value numerically.

♣ Q. 1.5.1-9 Justify that S_0 is less than k_B ln(2). What can one say about the degeneracy of the ground state for the value κ = 1/2?

1.6 Blume-Capel model

The Blume-Capel model describes a lattice spin system where the spin variable can take three values: −1, 0 and +1. This model describes metallic ferromagnetism as well as superfluidity in helium ³He-⁴He mixtures. Here we consider a mean-field approach, which allows tractable calculations, by assuming that all lattice spins interact with one another (long-ranged interactions). The Hamiltonian is

    H = Δ Σ_{i=1}^{N} S_i² − (1/(2N)) ( Σ_{i=1}^{N} S_i )²    (1.55)

where the second term of the Hamiltonian describes a ferromagnetic coupling and Δ > 0 is the energy difference between the magnetic states (S_i = ±1) and the non-magnetic state (S_i = 0).

♣ Q. 1.6.0-10 Determine the ground states and the associated energies. Verify that the ground-state energy is extensive. Infer a phase transition at T = 0 for a value of Δ to be determined.


♣ Q. 1.6.0-11 Let us denote by N_+, N_− and N_0 the numbers of sites occupied by spins +1, −1 and 0, respectively. Show that the total energy of the system can be expressed as a function of Δ, N, M = N_+ − N_− and Q = N_+ + N_−.

♣ Q. 1.6.0-12 Justify that the number of available microstates is given by

    Ω = N! / (N_+! N_−! N_0!).    (1.56)

♣ Q. 1.6.0-13 Using Stirling's formula, show that the microcanonical entropy per spin s = S/N can be written as a function of m = M/N and q = Q/N (one sets k_B = 1).

♣ Q. 1.6.0-14 Introducing the energy per site ε = E/N, show that

    q = 2Kε + Km²    (1.57)

where K is a parameter to be determined.

♣ Q. 1.6.0-15 Show that the entropy per spin reads

    s(m, ε) = −(1 − 2Kε − Km²) ln(1 − 2Kε − Km²)
              − (1/2)(2Kε + Km² + m) ln(2Kε + Km² + m)
              − (1/2)(2Kε + Km² − m) ln(2Kε + Km² − m)
              + (2Kε + Km²) ln 2.    (1.58)

In the following, we determine the transition line in the (T, Δ) diagram.

♣ Q. 1.6.0-16 Show that the expansion of the entropy per spin to 4th order in m can be written as

    s = s_0 + Am² + Bm⁴ + O(m⁶)    (1.59)

with

    A = −K ln( Kε / (1 − 2Kε) ) − 1/(4Kε)    (1.60)

and

    B = −K/(4ε(1 − 2Kε)) + 1/(8Kε²) − 1/(96(Kε)³).    (1.61)

♣ Q. 1.6.0-17 When A and B are negative, what is the value of m for which the entropy is maximum? What is the corresponding phase?


♣ Q. 1.6.0-18 By using the definition of the microcanonical temperature, show that in this same phase

    β = 2K ln( (1 − 2Kε) / (Kε) ).    (1.62)

♣ Q. 1.6.0-19 The transition line corresponds to A = 0 and B < 0. Infer that the transition temperature is given by the implicit equation

    β = exp(β/(2K))/2 + 1.    (1.63)

♣ Q. 1.6.0-20 The transition line ends at a tricritical point when the coefficient B vanishes. Show that at this point the temperature is given by

    (K²/(2β²)) [ 1 + 2 exp(−β/(2K)) ] − K/(2β) + 1/12 = 0.    (1.64)

♣ Q. 1.6.0-21 At lower temperature, where it is necessary to carry the expansion beyond 4th order, explain why the transition line becomes first order.

1.6.1 Potts model

The Potts model is a lattice model where the spin variable, denoted σ_i, takes integer values from 1 to q. The interaction between nearest-neighbor sites is given by the Hamiltonian

    H = −J Σ_{<i,j>} δ_{σ_i σ_j}    (1.65)

where δ_{ij} denotes the Kronecker symbol, equal to 1 when i = j and 0 when i ≠ j.

♣ Q. 1.6.1-1 For q = 2, show that this model is equivalent to an Ising model (H = −J_I Σ_{<i,j>} S_i S_j) with a coupling constant J = 2J_I.

To estimate the phase diagram of this model, one can use a mean-field approach. Let x_i be the fraction of spins in state i, where i = 1, 2, ..., q.

♣ Q. 1.6.1-2 Assuming that the spins are independent, give a simple expression for the entropy per site s as a function of the x_i and the Boltzmann constant.


♣ Q. 1.6.1-3 Assuming that the spins are independent, justify that the energy per site is given by

    e = −(cJ/2) Σ_i x_i²    (1.66)

where c is the lattice coordination number.

♣ Q. 1.6.1-4 Infer the dimensionless free energy per site βf .

♣ Q. 1.6.1-5 A natural guess is x_1 = (1 + (q − 1)m)/q and x_i = (1 − m)/q for i = 2, ..., q. Infer the difference βf(m) − βf(0).

♣ Q. 1.6.1-6 Show that m = 0 is always a solution of the equation βf(m) − βf(0) = 0.

♣ Q. 1.6.1-7 Consider first the case q = 2. Determine the transition temperature by solving ∂βf(m)/∂m = 0.

♠ Q. 1.6.1-8 For q > 2, show that

    cβJm = ln( (1 + (q − 1)m) / (1 − m) ).    (1.67)

One admits that m_c = (q − 2)/(q − 1); determine the transition "temperature" 1/β_c. Show that βf(m_c) = βf(0). Show that the order of the transition changes for q > q_c, with q_c to be determined.

♠ Q. 1.6.1-9 Calculate the latent heat of the transition when the transition becomes first order.

One now wishes to simulate the two-dimensional Potts model. It has been shown that in two dimensions the critical value of q is 4.

♣ Q. 1.6.1-10 Write a Metropolis algorithm for the simulation of the q-state Potts model.

♣ Q. 1.6.1-11 Describe briefly the convergence problems of the Metropolis algorithm for q ≤ 4 and q > 4.

♣ Q. 1.6.1-12 Give a method adapted for studying the transition in the two cases. Justify your answer.


Chapter 2

Monte Carlo method

Contents

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Uniform and weighted sampling . . . . . . . . . . . . . . 22
2.3 Markov chain for sampling an equilibrium system . . . . 23
2.4 Metropolis algorithm . . . . . . . . . . . . . . . . . . . . 25
2.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . 26
    2.5.1 Ising model . . . . . . . . . . . . . . . . . . . . . 26
    2.5.2 Simple liquids . . . . . . . . . . . . . . . . . . . . 29
2.6 Random number generators . . . . . . . . . . . . . . . . 31
    2.6.1 Generating non uniform random numbers . . . . . 33
2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 38
    2.7.1 Inverse transformation . . . . . . . . . . . . . . . 38
    2.7.2 Detailed balance . . . . . . . . . . . . . . . . . . 39
    2.7.3 Acceptance probability . . . . . . . . . . . . . . . 39
    2.7.4 Random number generator . . . . . . . . . . . . . 41

2.1 Introduction

Once a model of a physical system has been chosen, the statistical properties of the model can be determined by performing a simulation. If we are interested in the static properties of the model, we have seen in the previous chapter that the computation of the partition function consists of performing a multidimensional integral or a multivariable summation similar to

Z = \sum_i \exp(-\beta U(i))    (2.1)


where i is an index running over all configurations available to the system [1]. If one considers the simple example of a lattice gas in three dimensions with a linear size of 10, the total number of configurations is equal to 2^1000 ≈ 10^301, for which it is impossible to compute the sum, Eq. (2.1), exactly. For a continuous system, calculation of the integral starts by a discretization. Choosing 10 points for each space coordinate, with 100 particles evolving in a three-dimensional space, the number of points is equal to 10^300, which is of the same order of magnitude as the previous lattice system with a larger number of sites. It is therefore necessary to have specific methods for evaluating such multidimensional integrals. The specific method used is the Monte Carlo method with an importance sampling algorithm.

2.2 Uniform and weighted sampling

To understand the interest of a weighted sampling, we first consider a basic example, a one-dimensional integral

I = \int_a^b dx\, f(x).    (2.2)

This integral can be recast as

I = (b-a) \langle f(x) \rangle    (2.3)

where ⟨f(x)⟩ denotes the average of the function f on the interval [a, b]. By choosing N_r points randomly and uniformly along the interval [a, b] and by evaluating the function at all these points, one gets an estimate of the integral

I_{N_r} = \frac{b-a}{N_r} \sum_{i=1}^{N_r} f(x_i).    (2.4)

The convergence of this method can be estimated by calculating the variance σ² of the sum I_{N_r} [2]. Therefore, one has

\sigma^2 = \frac{1}{N_r^2} \sum_{i=1}^{N_r} \sum_{j=1}^{N_r} \left\langle (f(x_i) - \langle f(x_i) \rangle)(f(x_j) - \langle f(x_j) \rangle) \right\rangle.    (2.5)

The points being chosen independently, the crossed terms vanish, and one obtains

\sigma^2 = \frac{1}{N_r} \left( \langle f(x)^2 \rangle - \langle f(x) \rangle^2 \right).    (2.6)

[1] We will use in this chapter a roman index for denoting configurations.
[2] The points x_i being chosen uniformly on the interval [a, b], the central limit theorem is valid and the integral converges towards the exact value according to a Gaussian distribution.


The 1/N_r dependence of the variance gives an a priori slow convergence, and there is no simple modification for obtaining a faster one. However, one can modify the variance in a significant manner. It is worth noting that the function f may take significant values only on small regions of the interval [a, b], and it is useless to calculate the function where its values are small. By using a random but non-uniform distribution with a weight w(x), the integral is given by

I = \int_a^b dx\, \frac{f(x)}{w(x)}\, w(x).    (2.7)

If w(x) is always positive, one can define du = w(x)dx, with u(a) = a and u(b) = b, and

I = \int_a^b du\, \frac{f(x(u))}{w(x(u))},    (2.8)

and the estimate of the integral is given by

I \simeq \frac{b-a}{N_r} \sum_{i=1}^{N_r} \frac{f(x(u_i))}{w(x(u_i))},    (2.9)

where the points x(u_i) are now distributed with the weight w(x). Similarly, the variance of the estimate then becomes

\sigma^2 = \frac{1}{N_r} \left( \left\langle \left(\frac{f(x(u))}{w(x(u))}\right)^2 \right\rangle - \left\langle \frac{f(x(u))}{w(x(u))} \right\rangle^2 \right).    (2.10)

By choosing the weight distribution w proportional to the original function f, the variance vanishes. This trick is only possible in one dimension. In higher dimensions, the change of variables in a multidimensional integral involves the absolute value of a Jacobian, and one cannot find in an intuitive manner the change of variable leading to a good weight function.
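As an illustration of Eqs. (2.4) and (2.9), here is a minimal sketch (not from the notes; the function names and the choice f(x) = e^{-x} on [0, 1] are ours). The weight is taken exactly proportional to f, so the estimator f/w is constant and the variance vanishes, as stated above:

```python
import math
import random

EXACT = 1.0 - math.exp(-1.0)   # I = integral of e^{-x} over [0, 1]

def uniform_mc(n, rng):
    """Plain uniform sampling, Eq. (2.4), on [a, b] = [0, 1]."""
    return sum(math.exp(-rng.random()) for _ in range(n)) / n

def importance_mc(n, rng):
    """Weighted sampling, Eq. (2.9), with w(x) = e^{-x}/(1 - e^{-1}),
    sampled by inverse transformation; since w is proportional to f,
    the estimator f/w is constant and the variance vanishes."""
    total = 0.0
    for _ in range(n):
        u = rng.random()
        x = -math.log(1.0 - u * EXACT)          # x distributed with weight w
        total += math.exp(-x) / (math.exp(-x) / EXACT)
    return total / n

rng = random.Random(42)
print(uniform_mc(20_000, rng))   # fluctuates around 1 - 1/e
print(importance_mc(10, rng))    # equals 1 - 1/e up to rounding
```
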

2.3 Markov chain for sampling an equilibrium system

Let us return to statistical mechanics: very often, we are interested in the computation of the thermal average of a quantity, but not in the partition function itself:

\langle A \rangle = \frac{\sum_i A_i \exp(-\beta U_i)}{Z}.    (2.11)

Let us recall that

P_i = \frac{\exp(-\beta U_i)}{Z}    (2.12)

defines the probability of having the configuration i (at equilibrium). The basic properties of a probability distribution are satisfied: P_i is strictly positive and


\sum_i P_i = 1. If one were able to generate configurations with this weight, the thermal average of A would be given by

\langle A \rangle \simeq \frac{1}{N_r} \sum_i^{N_r} A_i    (2.13)

where N_r is the total number of configurations where A is evaluated. In this way, the thermal average becomes an arithmetic average.

The trick proposed by Metropolis, Rosenbluth and Teller in 1953 consists of introducing a stochastic Markovian process between successive configurations, which converges towards the equilibrium distribution P_eq.

First, we introduce some useful definitions. To follow the sequence of configurations, one defines a time t equal to the number of "visited" configurations divided by the system size. This time has no relation with the real time of the system. Let us denote P(i, t) the probability of having the configuration i at time t.

Let us explain the meaning of the dynamics: "stochastic" means that going from one configuration to another does not obey an ordinary differential equation but is a random process determined by probabilities; "Markovian" means that the probability of having a configuration j at time t + dt (dt = 1/N, where N is the particle number of the system) only depends on the configuration i at time t, and not on previous configurations (the memory is limited to the time t); this conditional probability is denoted by W(i → j)dt. The master equation of the system is then given (in the thermodynamic limit) by:

P(i, t+dt) = P(i, t) + \sum_j \left( W(j \to i) P(j, t) - W(i \to j) P(i, t) \right) dt    (2.14)

This equation corresponds to the fact that, at time t + dt, the probability that the system is in the configuration i is equal to the probability of being in the same state at time t, minus the probability of leaving the configuration i towards a configuration j, plus the probability of arriving at the configuration i from a configuration j.

At time t = 0, the system is in an initial configuration i_0: the initial probability distribution is P(i, 0) = δ_{i_0,i}, which means that we are far from the equilibrium distribution.

At equilibrium, according to the master equation Eq. (2.14), one obtains the set of conditions

\sum_j W(j \to i) P_{eq}(j) = P_{eq}(i) \sum_j W(i \to j)    (2.15)

A simple solution of these equations is given by

W(j \to i) P_{eq}(j) = W(i \to j) P_{eq}(i)    (2.16)


This equation, (2.16), is known as the condition of detailed balance. It expresses that, in a stationary state (or equilibrium state), the probability that the system goes from a state i to a state j is the same as for the reverse move. Let us add that this condition is not a necessary condition: we have not proved that the solution of the set of equations (2.15) is unique, but Eq. (2.16) is a solution with a simple physical interpretation. Because it is difficult to prove that a Monte Carlo algorithm converges towards equilibrium, most algorithms use the detailed balance condition. As we will see later, a few algorithms violating detailed balance have recently been proposed that converge asymptotically towards equilibrium (or a stationary state).

Equation (2.16) can be re-expressed as

\frac{W(i \to j)}{W(j \to i)} = \frac{P_{eq}(j)}{P_{eq}(i)}    (2.17)
= \exp(-\beta(U(j) - U(i))).    (2.18)

This implies that W(i → j) does not depend on the partition function Z, but only on the Boltzmann factor.

2.4 Metropolis algorithm

The choice of a Markovian process in which detailed balance is satisfied is a solution of Eqs. (2.16). We will discuss other choices later. In order to obtain solutions of Eqs. (2.16) or, in other words, to obtain the transition matrix W(i → j), note that a stochastic process is the sequence of two elementary steps:

1. From a configuration i, one selects randomly a new configuration j, with a priori probability α(i → j).

2. This new configuration is accepted with a probability Π(i → j).

Therefore, one has

W(i \to j) = \alpha(i \to j) \Pi(i \to j).    (2.19)

In the original algorithm developed by Metropolis (and in many other Monte Carlo algorithms), one chooses α(i → j) = α(j → i); we restrict ourselves to this situation in the remainder of this chapter.

Eqs. (2.18) are then re-expressed as

\frac{\Pi(i \to j)}{\Pi(j \to i)} = \exp(-\beta(U(j) - U(i)))    (2.20)

The choice introduced by Metropolis et al. is

\Pi(i \to j) = \exp(-\beta(U(j) - U(i)))   if U(j) > U(i)    (2.21)
            = 1                            if U(j) ≤ U(i)    (2.22)


As we will see later, this solution is efficient in phases far from transitions. Moreover, the implementation is simple and can be used as a benchmark for more sophisticated methods. In chapter 5, specific methods for studying phase transitions will be discussed.
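A minimal sketch of the acceptance rule of Eqs. (2.21)-(2.22) (the function name and signature are illustrative choices, not from the notes):

```python
import math
import random

def metropolis_accept(delta_u, beta, rng=random):
    """Metropolis acceptance rule, Eqs. (2.21)-(2.22): accept with
    probability min(1, exp(-beta * delta_u)), where delta_u = U(j) - U(i)."""
    if delta_u <= 0.0:
        return True                       # downhill moves always accepted
    return rng.random() < math.exp(-beta * delta_u)
```

Downhill moves are always accepted; uphill moves are accepted with the Boltzmann factor of the energy difference.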

Additional comments for implementing a Monte Carlo algorithm:

1. Computation of a thermal average only starts when the system reaches equilibrium, namely when P ≈ P_eq. Therefore, in a Monte Carlo simulation, there are generally two time scales: a first one where, starting from an initial configuration, one performs a first dynamics in order to lead the system close to equilibrium; a second one where the system evolves in the vicinity of equilibrium and where the computation of averages is performed. In the absence of a precise criterion, the duration of the first stage is not easily predictable.

2. A naive approach consists of following the evolution of the instantaneous energy of the system and considering that equilibrium is reached when the energy has stabilized around a quasi-stationary value.

3. A more precise method estimates the relaxation time of a correlation function, and one chooses a time significantly larger than the relaxation time. For disordered systems, and at temperatures above the phase transition, this criterion is reasonable in a first approach. More sophisticated estimates will be discussed in the lecture.

We now consider how the Metropolis algorithm can be implemented for several basic models.

2.5 Applications

2.5.1 Ising model

Some known results

Let us consider the Ising model (or equivalently a lattice gas model) defined by the Hamiltonian

H = -J \sum_{<i,j>} S_i S_j - H \sum_{i=1}^{N} S_i    (2.23)

where the summation <i,j> means that the interaction is restricted to distinct pairs of nearest neighbors and H denotes an external uniform field. If J > 0, the interaction is ferromagnetic, and if J < 0, the interaction is antiferromagnetic.

In one dimension, this model can be solved analytically, and one shows that the critical temperature is zero.


In two dimensions, Onsager (1944) solved this model in the absence of an external field and showed that there exists a para-ferromagnetic transition at a finite temperature. For the square lattice, this critical temperature is given by

T_c = \frac{2J}{\ln(1 + \sqrt{2})}    (2.24)

with a numerical value equal to T_c ≈ 2.269185314... J.

In three dimensions, the model cannot be solved analytically, but numerical simulations have been performed with different algorithms, and the values of the critical temperature obtained for various lattices and from different theoretical methods are very accurate (see Table 2.1).

A useful, but crude, estimate of the critical temperature is given by the mean-field theory,

T_c = cJ    (2.25)

where c is the coordination number.

D | Lattice    | T_c/J (exact) | T_c,MF/J
1 | —          | 0             | 2
2 | square     | 2.269185314   | 4
2 | triangular | 3.6410        | 6
2 | honeycomb  | 1.5187        | 3
3 | cubic      | 4.515         | 6
3 | bcc        | 6.32          | 8
3 | diamond    | 2.7040        | 4
4 | hypercube  | 6.68          | 8

Table 2.1 – Critical temperature of the Ising model for different lattices in 1 to 4 dimensions.

Table 2.1 illustrates that the critical temperatures given by the mean-field theory are always an upper bound of the exact values, and that the quality of the approximation improves when the spatial dimension of the system and/or the coordination number is large.

Metropolis algorithm

Since simulation cells are always of finite size, the ratio of the surface times the linear dimension of the particle over the volume of the system is quite large. To avoid large boundary effects, simulations are performed using periodic boundary conditions. In two dimensions, this means that the simulation cell must tile the plane: due to geometric constraints, two types of cell are possible, an elementary square and a regular hexagon, with the condition that a


particle which exits by a given edge returns into the simulation cell by the opposite edge. Note that this condition can bias the simulation when the symmetry of the simulation cell is incompatible with the symmetry of the low-temperature phase, e.g. a crystal phase, a modulated phase, ...

For starting a Monte Carlo simulation, one needs to define an initial configuration, which can be:

1. The ground state, with all spins aligned at +1 or −1,

2. An infinite-temperature configuration: on each site, one chooses randomly and uniformly a number between 0 and 1. If this number is between 0 and 0.5, the site spin is taken equal to +1; if it is between 0.5 and 1, the site spin is taken equal to −1.

As mentioned previously, Monte Carlo dynamics involves two elementary steps: first, one selects randomly a trial configuration, and secondly, one considers acceptance or rejection by using the Metropolis condition.

In order that the trial configuration can be accepted, this configuration needs to be "close" to the previous one; indeed, the acceptance probability is proportional to the exponential of the energy difference between the two configurations. If this difference is large and positive, the probability of acceptance becomes very small and the system can stay "trapped" for a long time in a local minimum. For the Metropolis algorithm, the single spin flip is generally used, which leads to a dynamics where energy changes are reasonable. Because the dynamics must remain stochastic, a spin must be chosen randomly for each trial configuration, and not according to a regular sequence.

In summary, an iteration of the Metropolis dynamics for an Ising model consists of the following:

1. A site is selected by choosing at random an integer i between 1 and the total number of lattice sites.

2. One computes the energy difference between the trial configuration (in which the spin i is flipped) and the old configuration. For short-range interactions, this energy difference involves only a few terms and is "local", because only the selected site and its nearest neighbors are considered.

3. If the trial configuration has a lower energy, it is accepted. Otherwise, a uniform random number is chosen between 0 and 1, and if this number is less than exp(−βΔE), where ΔE is the energy difference between the trial and the old configuration, the trial configuration is accepted. If not, the system stays in the old configuration.
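The three steps above can be sketched for the two-dimensional Ising model as follows (a minimal illustration with J = 1, zero field and periodic boundary conditions; the array layout and names are our choices):

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng):
    """One Monte Carlo sweep (L*L trial flips) for the 2D Ising model,
    H = -sum_<i,j> S_i S_j, with periodic boundary conditions."""
    for _ in range(L * L):
        # 1. select a site at random
        i, j = rng.randrange(L), rng.randrange(L)
        # 2. local energy difference for flipping spin (i, j)
        s = spins[i][j]
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        delta_e = 2.0 * s * nb
        # 3. Metropolis acceptance
        if delta_e <= 0.0 or rng.random() < math.exp(-beta * delta_e):
            spins[i][j] = -s

# Example: start from the ground state and run a few sweeps at T = 1 < T_c
L = 16
rng = random.Random(1)
spins = [[1] * L for _ in range(L)]
for _ in range(10):
    metropolis_sweep(spins, L, beta=1.0, rng=rng)
m = sum(map(sum, spins)) / (L * L)
print(m)   # stays close to 1 well below T_c ~ 2.269
```
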

Computation of thermodynamic quantities (mean energy, specific heat, magnetization, susceptibility, ...) can be easily performed. Indeed, if a new configuration is accepted, the update of these quantities does not require additional


calculation: E_n = E_o + ΔE, where E_n and E_o denote the energies of the new and the old configuration; similarly, the magnetization is given by M_n = M_o + 2 sgn(S_i), where sgn(S_i) is the sign of spin S_i in the new configuration. In the case where the old configuration is kept, instantaneous quantities are unchanged, but time must be incremented.

As we discussed in the previous chapter, thermodynamic quantities are expressed as moments of the energy distribution and/or the magnetization distribution. The thermal average can be performed at the end of the simulation when one builds a histogram along the simulation, and eventually one stores these histograms for further analysis.

Note that, for the Ising model, the energy spectrum is discrete, and computation of thermal quantities can be performed after the simulation run. For continuous systems, it is not equivalent to perform a thermal average "on the fly" and by using a histogram. However, by choosing a sufficiently small bin size, the average performed by using the histogram converges to the "on the fly" value.

Practically, the implementation of a histogram method consists of defining an array with a dimension corresponding to the number of available values of the quantity; for instance, the magnetization of the Ising model is stored in an array histom[2N + 1], where N is the total number of spins. (The size of the array could be divided by two, because the magnetization changes by steps of 2, but state relabeling is then necessary.) Starting with an array whose elements are set to zero at the initial time, the magnetization update is performed at each simulation time by the formal code line

histom[magne] = histom[magne] + 1    (2.26)

where magne is the variable storing the instantaneous magnetization.
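In an actual implementation, the instantaneous magnetization runs from −N to +N, so the array index of Eq. (2.26) must be shifted to be non-negative; a sketch (the names are illustrative):

```python
def record_magnetization(histom, magne, n_spins):
    """Update of Eq. (2.26) with the index shifted by n_spins, so that
    magne = -N, ..., +N maps onto the array indices 0, ..., 2N."""
    histom[magne + n_spins] += 1

N = 4
histom = [0] * (2 * N + 1)
record_magnetization(histom, -N, N)   # all spins down
record_magnetization(histom, +N, N)   # all spins up
print(histom)  # [1, 0, 0, 0, 0, 0, 0, 0, 1]
```
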

2.5.2 Simple liquids

Brief overview

Let us recall that a simple liquid is a system of point particles interacting by a pairwise potential. When the phase diagram is composed of different regions (liquid, gas and solid phases), the interaction between particles must contain a repulsive short-range part and an attractive long-range part. The physical interpretation of these contributions is the following: at short distance, quantum mechanics prevents atoms from overlapping, which explains the repulsive part of the potential; at long distance, even for neutral atoms, the London forces (whose origin is also quantum mechanical) are attractive. A potential satisfying these two criteria is the Lennard-Jones potential

u_{ij}(r) = 4\epsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right]    (2.27)


where ε defines a microscopic energy scale and σ denotes the atom diameter.

In a simulation, all quantities are expressed in dimensionless units: the temperature is T* = k_B T/ε, where k_B is the Boltzmann constant; the distance is r* = r/σ; and the energy is u* = u/ε. The three-dimensional Lennard-Jones model has a critical point at T*_c = 1.3 and ρ*_c = 0.3, and a triple point at T*_t = 0.6 and ρ*_t = 0.8.

Metropolis algorithm

For an off-lattice system, a trial configuration is generated by moving a randomly chosen particle. In a three-dimensional space, a trial move is given by

x'_i → x_i + Δ(rand − 0.5)    (2.28)
y'_i → y_i + Δ(rand − 0.5)    (2.29)
z'_i → z_i + Δ(rand − 0.5)    (2.30)

with the condition that (x'_i − x_i)² + (y'_i − y_i)² + (z'_i − z_i)² ≤ Δ²/4 (this condition corresponds to considering isotropic moves), where rand denotes a uniform random number between 0 and 1. Δ is the maximum displacement per step. The value of this quantity must be set at the beginning of the simulation, and generally its value is chosen in order to keep a reasonable ratio of accepted configurations over the total number of trial configurations.

Note that x'_i → x_i + Δ·rand would be incorrect, because only positive moves would then be allowed and detailed balance would not be satisfied (see Eq. (2.16)).
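Eqs. (2.28)-(2.30), together with the isotropy condition, can be sketched as follows (the redraw loop is one possible way to enforce the spherical cutoff; names are illustrative):

```python
import random

def trial_move(pos, delta, rng):
    """Trial displacement of Eqs. (2.28)-(2.30): each coordinate is shifted
    by delta*(rand - 0.5); draws are repeated until the displacement lies
    inside the sphere of radius delta/2 (isotropic moves)."""
    while True:
        d = [delta * (rng.random() - 0.5) for _ in range(3)]
        if d[0] ** 2 + d[1] ** 2 + d[2] ** 2 <= (delta / 2.0) ** 2:
            return [x + dx for x, dx in zip(pos, d)]
```
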

Calculation of the energy of a new configuration is more sophisticated than for lattice models. Indeed, because the interaction involves all particles of the system, the energy update contains N terms, where N is the total number of particles in the simulation cell. Moreover, periodic boundary conditions cover space by replicating the original simulation cell. Therefore, with periodic boundary conditions, the interaction now involves not only the particles inside the simulation cell, but also the interactions between particles of the simulation cell and particles in all the replicas:

U_{tot} = \frac{1}{2} \sum_{i,j,n}' u(|r_{ij} + nL|)    (2.31)

where L is the length of the simulation cell and n is a vector with integer components. For interaction potentials decreasing sufficiently rapidly (practically, for potentials such that ∫ dV u(r) is finite, where dV is the infinitesimal volume), the summation is restricted to the nearest cells, the so-called minimum image convention. For calculating the energy between the particle i and the particle j, a test is performed on each spatial coordinate: one checks whether |x_i − x_j| (|y_i − y_j| and |z_i − z_j|, respectively) is less than one-half of the length of the simulation cell, L/2. If


|x_i − x_j| (|y_i − y_j| and |z_i − z_j|, respectively) is greater than L/2, one calculates x_i − x_j mod L (y_i − y_j mod L and z_i − z_j mod L, respectively), which selects the image of the particle in the nearest replica of the simulation box.
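The coordinate test described above amounts to folding each separation into [−L/2, L/2]; a compact sketch using a rounding formulation equivalent to the mod-L operation of the text:

```python
def minimum_image(dx, box):
    """Fold a coordinate separation into [-box/2, box/2] (minimum image
    convention), equivalent to the mod-L test described in the text."""
    return dx - box * round(dx / box)

def pair_distance2(ri, rj, box):
    """Squared distance between two particles, with the minimum image
    convention applied to each coordinate."""
    return sum(minimum_image(a - b, box) ** 2 for a, b in zip(ri, rj))
```

For two particles at x = 0.05 and x = 0.95 in a box of length 1, the minimum-image separation is 0.1, not 0.9.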

This first step is time consuming, because the energy update requires the calculation of N²/2 terms. When the potential decreases rapidly (for instance, the Lennard-Jones potential), the density ρ(r) becomes uniform at long distance, and one can estimate this contribution to the mean energy by the following formula

u_i = \frac{1}{2} \int_{r_c}^{\infty} 4\pi r^2 dr\, u(r) \rho(r)    (2.32)
    \simeq \frac{\rho}{2} \int_{r_c}^{\infty} 4\pi r^2 dr\, u(r)    (2.33)

This means that the potential is replaced with a truncated potential

u_{trunc}(r) = \begin{cases} u(r) & r \le r_c, \\ 0 & r > r_c. \end{cases}    (2.34)

For a finite-range potential, the energy update only involves a summation over a finite number of particles at each step, which becomes independent of the system size. For a Monte Carlo simulation, the computation time then becomes proportional to the number of particles N (because one keeps the same number of moves per particle for all system sizes). In the absence of truncation of the potential, the energy update would be proportional to N². Note that the truncated potential introduces a discontinuity of the potential at r_c. A correction can be added to the simulation results, which gives an "impulse" contribution to the pressure. For Molecular Dynamics, we will see that this procedure is not sufficient.
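A sketch of the truncated Lennard-Jones potential of Eqs. (2.27) and (2.34) (the cutoff value r_c = 2.5σ is a common choice, not prescribed by the notes):

```python
def lj_truncated(r, eps=1.0, sigma=1.0, rc=2.5):
    """Truncated Lennard-Jones potential, Eqs. (2.27) and (2.34):
    u(r) for r <= rc and 0 beyond the cutoff rc."""
    if r > rc:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)
```

Note the discontinuity mentioned in the text: lj_truncated jumps from u(r_c) ≠ 0 to 0 at the cutoff.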

In a similar manner to the Ising model (or lattice models), it is useful to store the successive instantaneous energies in a histogram. However, the energy spectrum being continuous, it is necessary to introduce an adapted bin size ΔE. If one denotes N_h the dimension of the histogram array, the energies are stored from E_min to E_min + ΔE(N_h − 1). If E(t) is the energy of the system at time t, the index of the array element is given by the relation

i = Int((E - E_{min})/\Delta E)    (2.35)

In this case, the numerical values of the moments calculated from the histogram are not exact, unlike for the Ising model, and it is necessary to choose the bin size in order to minimize this bias.

2.6 Random number generators

Monte Carlo simulation uses random numbers extensively, and it is useful to examine the quality of these generators. The basic generator is a procedure that


gives sequences of uniform pseudo-random numbers. The quality of a generator depends on a large number of criteria: it is obviously necessary to have all moments of a uniform distribution satisfied, but also, since the numbers must be independent, the correlation between successive trials must be as weak as possible.

As the numbers are coded on a finite number of bytes, a generator is characterized by a period, which we expect to be very large, and more precisely much larger than the total number of random numbers required in a simulation run. In the early stages of computational physics, random number generators used a one-byte coding, leading to very short periods, which biased the first simulations. We are now beyond this time, since the coding is performed with 8 or 16 byte words. As we will see below, there exist nowadays several random number generators of high quality.

A generator relies on an initial number (or several numbers). If the seed(s) of the random number generator is (are) not set, the procedure has a default seed. However, when one restarts a run without setting the seed, the same sequence of random numbers is generated; while this feature is useful for debugging, production runs require different seeds every time one performs a new simulation. Otherwise, dangerous biases can result from the absence of a correct initialization of the seeds.

Two kinds of algorithms are at the origin of random number generators. The first one is based on a linear congruence relation

x_{n+1} = (a x_n + c) \bmod m    (2.36)

This relation generates a sequence of pseudo-random integers between 0 and m − 1; m corresponds to the generator period. Among generators using a linear congruence relation, one finds the functions randu (IBM), ranf (Cray), drand48 on Unix computers, ran (Numerical Recipes, Knuth), etc. The periods of these generators go from 2^29 (randu, IBM) to 2^48 (ranf).

Let us recall that 2^30 ≈ 10^9; if one considers an Ising spin lattice in three dimensions with 100^3 sites, only 10^3 spin flips per site can be done within the smallest period. For a lattice with 10^3 sites, the number of allowed spin flips per site is multiplied by a thousand, which can be used for a preliminary study of the phase diagram (outside of the critical region, where a large number of configurations is required).

The generator rng_cmrg (L'Ecuyer) [3] provides a sequence of numbers from the relation:

z_n = (x_n - y_n) \bmod m_1,    (2.37)

where x_n and y_n are given by the following relations

x_n = (a_1 x_{n-1} + a_2 x_{n-2} + a_3 x_{n-3}) \bmod m_1    (2.38)
y_n = (b_1 y_{n-1} + b_2 y_{n-2} + b_3 y_{n-3}) \bmod m_2.    (2.39)

[3] P. L'Ecuyer contributed many times to the development of several generators based either on linear congruence relations or on shift registers.


The generator period is 2^205 ≈ 10^61.

The second class of random number generators is based on the register shift through the logical operation "exclusive or". An example is provided by the Kirkpatrick and Stoll generator,

x_n = x_{n-103} \oplus x_{n-250}    (2.40)

Its period is large, 2^250, but it needs to store 250 words. The generator with the largest period is likely that of Matsumoto and Nishimura, known by the name MT19937 (Mersenne Twister generator). Its period is 10^6000! It uses 624 words and it is equidistributed in 623 dimensions!
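A sketch of the shift-register recurrence Eq. (2.40) with a ring buffer of 250 words (the seeding from Python's own generator is a simplification we introduce here; production implementations seed the register more carefully):

```python
import random

class R250:
    """Kirkpatrick-Stoll shift-register generator, Eq. (2.40):
    x_n = x_{n-103} XOR x_{n-250}, stored in a 250-word ring buffer."""

    def __init__(self, seed=0):
        # Simplified seeding from Python's own generator (illustrative).
        rng = random.Random(seed)
        self.state = [rng.getrandbits(32) for _ in range(250)]
        self.idx = 0   # position of x_{n-250}, the oldest word

    def next32(self):
        s, i = self.state, self.idx
        s[i] ^= s[(i + 147) % 250]   # x_{n-250} XOR x_{n-103}; 147 = 250 - 103
        self.idx = (i + 1) % 250
        return s[i]
```
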

2.6.1 Generating non uniform random numbers

Introduction

If, as we have seen in the above section, it is possible to obtain a "good" generator of uniform random numbers, there exist physical situations where it is necessary to generate non-uniform random numbers, namely random numbers defined by a probability distribution f(x) on an interval I such that ∫_I dx f(x) = 1 (condition of probability normalization). We here introduce several methods which generate random numbers with a specific probability distribution.

Inverse transformation

If f(x) denotes the probability distribution on the interval I, one defines thecumulative distribution F as

F (x) =

∫ x

f(t)dt (2.41)

If there exists an inverse function F−1, then u = F−1(x) define a cumulativedistribution for random numbers with a uniform distribution on the interval [0, 1].

For instance, for an exponential probability distribution with parameter λ, usually denoted E(λ), one has

F(x) = \int_0^x dt\, \lambda e^{-\lambda t} = 1 - e^{-\lambda x}.    (2.42)

Therefore, inverting the relation u = F(x), one obtains

x = -\frac{\ln(1-u)}{\lambda}.    (2.43)

Considering the cumulative distribution F(x) = 1 − x, one immediately obtains that if u is a uniform random variable defined on the unit interval, then 1 − u is


also a uniform random variable defined on the same interval. Therefore, Eq. (2.43) can be re-expressed as

x = -\frac{\ln(u)}{\lambda}.    (2.44)

Box-Muller method

The Gaussian distribution is frequently used in simulations. Unfortunately, its cumulative distribution is an error function. For a Gaussian distribution of unit variance centered on zero, denoted by N(0, 1) (N corresponds to a normal distribution), one has

F(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} dt\, e^{-t^2/2}    (2.45)

This function is not invertible, and one cannot apply the previous method. However, if one considers a couple of independent Gaussian random variables (x, y), the joint probability distribution is given by f(x, y) = exp(−(x² + y²)/2)/(2π). Using the change of variables (x, y) → (r, θ), namely polar coordinates, the joint probability becomes

f(r^2)\, r\, dr\, d\theta = \exp\left(-\frac{r^2}{2}\right) \frac{dr^2}{2}\, \frac{d\theta}{2\pi}.    (2.46)

The variable r² is a random variable with an exponential probability distribution of parameter 1/2, or E(1/2), and θ is a random variable with a uniform probability distribution on the interval [0, 2π].

If u and v are two uniform random variables on the interval [0, 1], or U_[0,1], then

x = \sqrt{-2\ln(u)} \cos(2\pi v)
y = \sqrt{-2\ln(u)} \sin(2\pi v)    (2.47)

are independent random variables with a Gaussian distribution.
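A sketch of the Box-Muller transform, Eq. (2.47):

```python
import math
import random

def box_muller(rng=random):
    """Box-Muller transform, Eq. (2.47): two independent N(0, 1) variates
    from two independent uniforms; 1 - rng.random() avoids log(0)."""
    u = 1.0 - rng.random()
    v = rng.random()
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2.0 * math.pi * v), r * math.sin(2.0 * math.pi * v)
```

Each call consumes two uniforms and returns two independent Gaussian numbers, so nothing is wasted.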

Acceptance rejection method

The inverse method or the Box-Muller method cannot be used for general probability distributions. The acceptance-rejection method is a general method which gives random numbers with various probability distributions.

This method uses the following property:

f(x) = \int_0^{f(x)} dt.    (2.48)

Therefore, by considering a couple of random variables with a uniform distribution, one defines the joint probability g(x, u) with the condition 0 < u < f(x);


Figure 2.1 – Computation of the probability distribution B(4, 8) with the acceptance-rejection method with 5000 random points: 1624 accepted points (black) and 3376 rejected points (red), giving a numerical ratio 0.324 (exact ratio 1/3).

the function f(x) is the marginal distribution (of the x variable) of the joint distribution,

f(x) = \int du\, g(x, u).    (2.49)

By using a Monte Carlo method with a uniform sampling, one obtains independent random variables with a distribution that is not necessarily invertible. The price to pay is illustrated in Fig. 2.1, where the probability distribution x^α(1-x)^β (more formally, a distribution B(α, β)) is sampled. For a good efficiency of the method, it is necessary to choose for the variable u a uniform distribution on an interval U_[0,m], with m greater than or equal to the maximum of f(x). Therefore, the maximum of f(x) must be determined on the definition interval before selecting the range of the random variables. Choosing the maximum value of u equal to the maximum of the function optimizes the method.

More precisely, the efficiency of the method is asymptotically given by a ratio of areas: the area below the curve (bounded by the curve, the x-axis and the two vertical axes) over that of the rectangle defined by the horizontal and vertical axes (in Fig. 2.1, the area of the rectangle is equal to 3).


For the probability distribution β(3, 7), choosing a maximum value of u equal to 3 gives a ratio of 1/3 (in the limit of an infinitely large number of trials) between the number of accepted random numbers and the total number of trials (5000 in Fig. 2.1, with 1624 acceptances and 3376 rejections).

The limitations of this method are easy to identify: only probability distributions with compact support can be sampled well. Otherwise, the distribution must be truncated and, when the distribution decays slowly, the acceptance ratio becomes small, which deteriorates the efficiency of the method.
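A minimal Python sketch of the basic acceptance-rejection method (function names are ours; x³(1 − x)⁷ is the unnormalized B(4, 8) density of Fig. 2.1):

```python
import random

def rejection_sample(f, m, a=0.0, b=1.0, rng=random.random):
    """Draw one number with density proportional to f on [a, b].

    m must be an upper bound of f on [a, b]; taking m equal to the
    maximum of f maximizes the acceptance rate."""
    while True:
        x = a + (b - a) * rng()   # candidate, uniform on [a, b]
        u = m * rng()             # ordinate, uniform on [0, m]
        if u < f(x):              # accept the candidate with probability f(x)/m
            return x
```

For f(x) = x³(1 − x)⁷ with m = f(0.3) (its maximum), about one third of the candidates is accepted, consistent with the exact ratio quoted in the text.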

In order to improve this method, the rectangle can be replaced with an area whose upper boundary is defined by a curve. If g is a function that is simple to sample and if g > f for all values of the interval, one samples random numbers with the distribution g and the acceptance-rejection test is applied to the ratio f(x)/g(x). The efficiency of the method then becomes the ratio of the areas defined by the two functions, and the number of rejections decreases.

Method of the Ratio of uniform numbers

This method generates random variables for various probability distributions by using the ratio of two uniform random variables. Let us denote z = a_1 + a_2 y/x, where x and y are uniform random numbers.

If one considers an integrable function r normalized to one, if x is uniform on the interval [0, x*] and y on the interval [y_*, y*], and if, introducing w = x² and z = a_1 + a_2 y/x, one has w ≤ r(z), then z has the distribution r(z), for −∞ < z < ∞.

In order to show this property, let us consider the joint distribution f_{X,Y}(x, y), which is uniform in the domain D contained within the rectangle [0, x*] × [y_*, y*]. The joint distribution f_{W,Z}(w, z) is given by

f_{W,Z}(w, z) = J f_{X,Y}(√w, (z − a_1)√w / a_2) (2.50)

where J is the Jacobian of the change of variables. Therefore, one has

J = | ∂x/∂w  ∂x/∂z ; ∂y/∂w  ∂y/∂z | (2.51)
  = | 1/(2√w)  0 ; (z − a_1)/(2a_2√w)  √w/a_2 | (2.52)
  = 1/(2a_2) (2.53)

By calculating

f_Z(z) = ∫ dw f_{W,Z}(w, z) (2.54)

one infers f_Z(z) = r(z).


To determine the equations of the boundaries of D, one needs to solve the following equations:

x(z) = √(r(z)) (2.55)
y(z) = (z − a_1) x(z)/a_2 (2.56)

Let us consider the Gaussian distribution. We restrict the interval to positive values. The bounds of the domain are given by the equations

x* = sup(√(r(z))) (2.57)
y_* = inf((z − a_1)√(r(z))/a_2) (2.58)
y* = sup((z − a_1)√(r(z))/a_2) (2.59)

One chooses a_1 = 0 and a_2 = 1. Since r(z) = e^{−z²/2}, by using Eqs. (2.57)-(2.59), one infers that x* = 1, y_* = 0 and y* = √(2/e). One can show that the ratio of the area of the domain D to that of the rectangle of available values of x and y is equal to √(πe)/4 ≈ 0.7306. The test to perform is x ≤ e^{−(y/x)²/4}, which can be reexpressed as y² ≤ −4x² ln(x). Because this computation involves logarithms, which are time consuming to compute, the interest of the method could seem limited. But it is possible to improve the performance by avoiding the calculation of transcendental functions as follows: on the interval ]0, 1], the logarithm function is bounded by the functions

(x − 1)/x ≤ ln(x) ≤ x − 1 (2.60)

To limit the calculation of logarithms, one performs the following pretests:

• y² ≤ 4x²(1 − x). If the test is true, the number is accepted (this corresponds to a domain inside that of the logarithm test); otherwise, the second pretest is performed.

• y² ≤ 4x(1 − x). If the test is false, the number is rejected; otherwise, the full logarithm test y² ≤ −4x² ln(x) is performed.

With this procedure, one can show that computing time is saved. A simple explanation comes from the fact that a significant fraction of numbers is accepted by the first pretest and only a few accepted numbers need the logarithm test. Although some numbers need the three tests before acceptance, this is compensated by the fact that the pretests handle the largest fraction of random numbers.
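The ROU sampling of the half-Gaussian with the two pretests can be sketched as follows (a hedged sketch; the function name and the guard against x = 0 are ours):

```python
import math
import random

def rou_gaussian(rng=random.random):
    """Sample the half-Gaussian r(z) = exp(-z^2/2), z >= 0, by the
    ratio-of-uniforms method with a1 = 0, a2 = 1 and the two pretests."""
    ymax = math.sqrt(2.0 / math.e)            # y* of Eq. (2.59)
    while True:
        x = rng()
        if x == 0.0:
            continue                          # avoid division by zero
        y = ymax * rng()
        y2 = y * y
        if y2 <= 4.0 * x * x * (1.0 - x):     # pretest 1: quick accept
            return y / x
        if y2 > 4.0 * x * (1.0 - x):          # pretest 2: quick reject
            continue
        if y2 <= -4.0 * x * x * math.log(x):  # full logarithm test
            return y / x
```

A full Gaussian deviate is obtained by attaching a random sign to the result.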

Beyond this efficiency, the ROU method allows the sampling of probability distributions whose support is not compact (see the Gaussian distribution above). This represents a significant advantage over the acceptance-rejection method, where the distribution needs to be truncated.

It is worth noting that many software libraries nowadays implement a large set of probability distributions: for instance, the GNU Scientific Library provides Gaussian, Gamma and Cauchy distributions, among others. This feature is not restricted to compiled languages; scripting languages like Python and Perl also implement these functions.

2.7 Exercises

2.7.1 Inverse transformation

Random numbers provided by software libraries are basically uniform random numbers on the interval [0, 1]. For Monte Carlo simulations, it is necessary to sample random numbers with non-uniform distributions. There exist several methods for generating such non-uniform random numbers.

♣ Q. 2.7.1-1 Briefly describe two methods that generate random numbers with a non-uniform distribution.

In order to simulate a uniform rotation of a vector in three dimensions, one can use the spherical coordinates with the usual angles θ and φ, which are related to the Cartesian coordinates by (x = cos(φ) sin(θ), y = sin(φ) sin(θ), z = cos(θ)). The probability distribution is then given by f(θ, φ) = sin(θ)/(4π), with θ between 0 and π and φ between 0 and 2π.

♣ Q. 2.7.1-2 Using the inverse transformation method, show that one can find two uniform random variables u and v, with θ = h(u) and φ = g(v), where h and g are two functions to be determined.

For rotating a solid in three dimensions, unit quaternions are often used. Quaternions are a four-dimensional generalization of complex numbers and are expressed as

Q = q_0 1 + q_1 i + q_2 j + q_3 k, (2.61)

where 1, i, j, k are the elementary quaternions; q_0 is called the real part, and (q_1, q_2, q_3) are the imaginary components, forming a vector.

Quaternion multiplication is not commutative (like the composition of rotations in three dimensions) and is given by the following rules: 1·a = a for any elementary quaternion a, i·i = j·j = k·k = −1, i·j = −j·i = k, j·k = −k·j = i and k·i = −i·k = j.

One defines the conjugate of Q by

Q* = q_0 1 − q_1 i − q_2 j − q_3 k. (2.62)
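The multiplication table and the conjugate above translate directly into code; in the following minimal sketch (function names are ours), a quaternion is represented as a 4-tuple (q_0, q_1, q_2, q_3):

```python
def qmult(p, q):
    """Hamilton product p.q, expanded from the rules for 1, i, j, k."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0 * q0 - p1 * q1 - p2 * q2 - p3 * q3,
            p0 * q1 + p1 * q0 + p2 * q3 - p3 * q2,
            p0 * q2 - p1 * q3 + p2 * q0 + p3 * q1,
            p0 * q3 + p1 * q2 - p2 * q1 + p3 * q0)

def qconj(q):
    """Conjugate Q* of Eq. (2.62)."""
    q0, q1, q2, q3 = q
    return (q0, -q1, -q2, -q3)
```

One checks for instance that qmult((0, 1, 0, 0), (0, 0, 1, 0)) gives k = (0, 0, 0, 1), i.e. i·j = k.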

♣ Q. 2.7.1-3 Calculate the squared modulus Q·Q* of a quaternion.

♣ Q. 2.7.1-4 Justify that unit quaternions can be interpreted as points belonging to the hypersphere of a 4-dimensional space.


2.7.2 Detailed balance

To obtain a system at equilibrium, the dynamics is given by a Monte Carlo simulation. The goal of this exercise is to prove some properties of algorithms driving the system to equilibrium.

In the first three questions, one assumes that the algorithm satisfies detailedbalance.

♣ Q. 2.7.2-1 Let us denote by α and β two states of the phase space; write the detailed balance condition satisfied by this algorithm, where Pα→β is the transition probability of the Markovian dynamics from state α towards state β, and ρα is the equilibrium density.

♣ Q. 2.7.2-2 Give two Monte Carlo algorithms which satisfy detailed balance.

♣ Q. 2.7.2-3 Consider 3 states, denoted α, β and γ; show that there exists a simple relation between Pα→β Pβ→γ Pγ→α and Pα→γ Pγ→β Pβ→α in which the equilibrium densities are absent.

We now want to prove the converse of the above result: given an algorithm verifying the relation of question 3, one wants to show that it satisfies detailed balance.

♣ Q. 2.7.2-4 By assuming that there is at least one state α such that Pα→δ ≠ 0 for each state δ, and that the equilibrium probability associated with α is ρα, show that one can infer the detailed balance equation between two states β and δ by setting the equilibrium density ρδ equal to

ρδ = ( ∑α Pδ→α / (Pα→δ ρα) )⁻¹ (2.63)

♣ Q. 2.7.2-5 Calculate the right-hand side of Eq. (2.63) assuming that detailed balance is satisfied by the Monte Carlo algorithm.

2.7.3 Acceptance probability

Consider a particle of mass m moving in a one-dimensional potential v(x). In a Monte Carlo simulation with the Metropolis algorithm, at each step of the simulation a random and uniform move of the particle is proposed, with an amplitude of displacement comprised between −δ and δ.
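As an illustration of such a move (a sketch that is not part of the exercise; names are ours), one elementary Metropolis step with the standard acceptance probability W(x → x + h) = min(1, exp(−β(v(x + h) − v(x)))) reads:

```python
import math
import random

def metropolis_step(x, v, beta, delta, rng=random.random):
    """Propose x -> x + h with h uniform in [-delta, delta] and accept it
    with the Metropolis probability min(1, exp(-beta (v(x+h) - v(x))))."""
    h = delta * (2.0 * rng() - 1.0)
    dv = v(x + h) - v(x)
    if dv <= 0.0 or rng() < math.exp(-beta * dv):
        return x + h     # move accepted
    return x             # move rejected: the old configuration is kept
```

Hard walls can be represented by a potential returning math.inf, for which exp(−β ∆v) is exactly 0.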


♣ Q. 2.7.3-1 Write the master equation of the Monte Carlo algorithm for the probability P(x, t) of finding the particle at x at time t, as a function of

W(x → x + h) = min(1, exp(−β(v(x + h) − v(x))))

and of W(x + h → x). Verify that the system goes to equilibrium.

♣ Q. 2.7.3-2 The acceptance rate is defined as the ratio of the number of accepted configurations to the total number of configurations. Justify that this ratio is given by the expression

Pacc(δ) = (1/(2δR)) ∫_{−∞}^{+∞} dx exp(−βv(x)) ∫_{−δ}^{δ} dh W(x → x + h) (2.64)

where R = ∫_{−∞}^{+∞} dx exp(−βv(x)).

♣ Q. 2.7.3-3 Show that the above equation can be reexpressed as

d(δPacc(δ))/dδ = (1/(2R)) ∫_{−∞}^{+∞} dx exp(−βv(x)) ( W(x → x + δ) + W(x → x − δ) ) (2.65)

Let us consider the following potential:

v(x) = { 0,    |x| ≤ a
       { +∞,   |x| > a (2.66)

One restricts to the case where δ < a. Show that

Pacc(δ) = 1− Aδ (2.67)

where A is a constant to be determined.

♣ Q. 2.7.3-4 The potential is now:

v(x) = { −ε,   |x| ≤ b
       { 0,    b < |x| ≤ a
       { +∞,   |x| > a (2.68)

Let us denote α = exp(βε).

♣ Q. 2.7.3-5 Assuming that a > 2b, show that

Pacc(δ) = { 1 − Bδ,   δ ≤ b
          { C − Dδ,   b ≤ δ < 2b (2.69)

where B, C and D are quantities to be expressed as functions of a, b and α.


2.7.4 Random number generator

Random number generators available on computers are uniform on the interval ]0, 1]. For Monte Carlo simulations, it is necessary to have random numbers with non-uniform distributions. One seeks to obtain random numbers between −1 and +1 such that the probability distribution f(x) is given by the equations

f(x) = 1 + x for −1 < x < 0

f(x) = 1− x for 0 < x < 1 (2.70)

♣ Q. 2.7.4-1 Determine the cumulative distribution function F(x) associated with f(x). You should consider the two cases x < 0 and x > 0 separately.

♣ Q. 2.7.4-2 Denoting η as a uniform random variable between −1 and 1, de-termine the relation between x and η. Consider the two cases again.

We now have two generators of independent uniform random numbers between −1/2 and +1/2. Let us denote by η_1 and η_2 two numbers drawn from these generators. Only pairs (η_1, η_2) such that η_1² + η_2² ≤ 1/4 are selected.

♣ Q. 2.7.4-3 Considering the joint probability distribution f(η_1, η_2), show that the sampling of the disc of radius 1/2 is uniform.

♣ Q. 2.7.4-4 Calculate the acceptance probability of pairs of random numbersfor generating this distribution.

♣ Q. 2.7.4-5 One considers the ratio z = η_1/η_2. Determine the probability distribution f(z). Hint: use polar coordinates.

♣ Q. 2.7.4-6 From the preceding distribution f(z), one computes the numbers u = α + βz; determine the probability distribution g(u) as a function of the distribution f.


Chapter 3

Molecular Dynamics

Contents

3.1 Introduction
3.2 Equations of motion
3.3 Discretization. Verlet algorithm
3.4 Symplectic algorithms
3.4.1 Liouville formalism
3.4.2 Discretization of the Liouville equation
3.5 Hard sphere model
3.6 Molecular Dynamics in other ensembles
3.6.1 Andersen algorithm
3.6.2 Nose-Hoover algorithm
3.7 Brownian dynamics
3.7.1 Different timescales
3.7.2 Smoluchowski equation
3.7.3 Langevin equation. Discretization
3.7.4 Consequences
3.8 Conclusion
3.9 Exercises
3.9.1 Multi timescale algorithm

3.1 Introduction

Monte Carlo simulation has an intrinsic limitation: its dynamics does not correspond to the “real” dynamics. For continuous systems defined from a classical Hamiltonian, it is possible to solve the differential equations of motion of all particles simultaneously. This method provides the possibility of precisely obtaining the dynamical properties (temporal correlations) of equilibrium systems, quantities which are available in experiments, which allows one to test, for instance, the quality of the model.

The efficient use of a new tool requires knowledge of its capabilities. First, let us estimate the largest physical time accessible, as well as the particle numbers that can be considered for a given system, with Molecular Dynamics. For a three-dimensional system, one can solve the equations of motion for systems of up to several hundred thousand particles. Note that for a simulation with a moderate number of particles, namely 10⁴, the average number of particles along an edge of the simulation box is (10⁴)^(1/3) ≈ 21. This means that, for a thermodynamic study, one needs to use periodic boundary conditions, as in Monte Carlo simulations.

For atomic systems far from the critical region, the correlation length is smaller than this dimensionless length of 21, but Molecular Dynamics is not the best simulation method for studying phase transitions. The situation is worse if one considers more sophisticated molecular structures, in particular biological systems.

By using a Lennard-Jones potential, it is possible to define a typical time scale from the microscopic parameters of the system (m the mass of a particle, σ its diameter, and ε the energy scale of the interaction potential):

τ = σ √(m/ε) (3.1)

This time corresponds to the duration for an atom to move over a distance equal to its linear size with a velocity equal to the mean velocity in the liquid. For instance, for argon, one has the numerical values σ = 3 Å, m = 6.63 × 10⁻²⁶ kg and ε = 1.64 × 10⁻²⁰ J, which gives τ = 2.8 × 10⁻¹⁴ s. For solving the equations of motion, the integration step must be much smaller than the typical time scale τ, typically ∆t = 10⁻¹⁵ s or even smaller. The total number of steps performed in a run is typically of order 10⁵ to 10⁷; therefore, the duration of the simulation for an atomic system is of order 10⁻⁸ s.

For many atomic systems, relaxation times are much smaller than 10⁻⁸ s, and Molecular Dynamics is a very good tool for investigating the dynamic and thermodynamic properties of such systems. For glassformers, in particular supercooled liquids, one observes relaxation times which increase by several orders of magnitude on approaching the glass transition, so that Molecular Dynamics cannot be equilibrated. For conformational changes of proteins, for instance in contact with a solid surface, typical relaxation times are of order of a millisecond. In these situations, it is necessary to coarse-grain some of the microscopic degrees of freedom, allowing the typical time scale of the simulation to be increased.


3.2 Equations of motion

We consider below the typical example of a Lennard-Jones liquid. The equations of motion of particle i are given by

m d²r_i/dt² = −∑_{j≠i} ∇_{r_i} u(r_{ij}). (3.2)

For simulating an “infinite” system, periodic boundary conditions are used. The calculation of the force between two particles i and j can be performed by using the minimum image convention (if the potential decreases sufficiently fast to 0 at large distance). This means that, for a given particle i, one needs to find whether j or one of its images is the nearest neighbor of i.

Similarly to a Monte Carlo simulation, the force calculation for particle i involves the computation of the (N − 1) elementary forces between each other particle and particle i. By using a truncated potential, one restricts the summation to the particles within a sphere whose radius corresponds to the potential truncation.

In order that the forces remain finite whatever the particle distance, the truncated Lennard-Jones potential used in Molecular Dynamics is the following:

u_trunc(r) = { u(r) − u(r_c),   r < r_c
             { 0,               r ≥ r_c (3.3)

One has to account for the bias introduced in the potential when comparing thermodynamic quantities with those of the original Lennard-Jones potential.
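A sketch of Eq. (3.3) in Python (reduced units; the cutoff r_c = 2.5σ is a common choice, not prescribed by the text):

```python
def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones potential u(r) = 4 eps ((sigma/r)^12 - (sigma/r)^6)."""
    s6 = (sigma / r) ** 6
    return 4.0 * epsilon * (s6 * s6 - s6)

def lj_truncated(r, rc=2.5):
    """Truncated and shifted potential of Eq. (3.3): continuous at r = rc."""
    return lj(r) - lj(rc) if r < rc else 0.0
```

The shift by u(r_c) makes the potential continuous at the cutoff, so the forces stay finite there.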

3.3 Discretization. Verlet algorithm

For obtaining a numerical solution of the equations of motion, it is necessary to discretize them. Different choices are a priori possible but, as we will see below, it is crucial that the total energy, which is conserved for an isolated Hamiltonian system, remains constant along the simulation (let us recall that the ensemble is microcanonical). Verlet's algorithm is one of the first methods and remains one of the most used nowadays.

For the sake of simplicity, let us consider a Hamiltonian system of N identical particles and let r be a vector with 3N components: r = (r_1, r_2, ..., r_N), where r_i denotes the position of particle i. The system formally evolves as

m d²r/dt² = f(r(t)). (3.4)

A Taylor series expansion gives

r(t + ∆t) = r(t) + v(t)∆t + (f(r(t))/(2m))(∆t)² + (1/3!)(d³r/dt³)(∆t)³ + O((∆t)⁴) (3.5)


and similarly

r(t − ∆t) = r(t) − v(t)∆t + (f(r(t))/(2m))(∆t)² − (1/3!)(d³r/dt³)(∆t)³ + O((∆t)⁴). (3.6)

Adding these two equations, one obtains

r(t + ∆t) + r(t − ∆t) = 2r(t) + (f(r(t))/m)(∆t)² + O((∆t)⁴). (3.7)

The position updates are thus performed with an accuracy of order (∆t)⁴. This algorithm does not use the particle velocities for calculating the new positions. One can however determine them as follows:

v(t) = (r(t + ∆t) − r(t − ∆t))/(2∆t) + O((∆t)²) (3.8)

Let us note that most of the computation time is spent on the force calculation, and only marginally on the position updates. As we will see below, one can improve the accuracy of the simulation by using a higher-order time expansion, but this requires the computation of spatial derivatives of the forces, and the computation time increases rapidly compared to the same simulation performed with the Verlet algorithm.
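A minimal sketch of the Verlet scheme, Eq. (3.7), for a single degree of freedom (bootstrapping the first step with a second-order Taylor expansion is our choice; in a real simulation r is a 3N-component vector):

```python
def verlet(force, r0, v0, m, dt, nsteps):
    """Integrate m d2r/dt2 = f(r) with the Verlet algorithm, Eq. (3.7).

    Returns the list of positions at times 0, dt, ..., nsteps*dt."""
    r_prev = r0
    # bootstrap: second-order Taylor expansion for the first step
    r = r0 + v0 * dt + force(r0) * dt * dt / (2.0 * m)
    traj = [r0, r]
    for _ in range(nsteps - 1):
        r_next = 2.0 * r - r_prev + force(r) * dt * dt / m
        r_prev, r = r, r_next
        traj.append(r)
    return traj
```

Velocities, if needed, follow from the centered difference of Eq. (3.8).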

For the Verlet algorithm, the accuracy of the trajectories is roughly given by

∆t⁴ N_t (3.9)

where N_t is the total number of integration steps; the total simulation time is given by ∆t N_t. It is worth noting that the actual accuracy decreases with simulation time, due to round-off errors, which are not included in Eq. (3.9).

There are more sophisticated algorithms involving force derivatives; they improve the short-time accuracy, but the long-time precision may deteriorate faster than with the Verlet algorithm. In addition, the computational efficiency is significantly lower.

Time-reversal symmetry is an important property of the equations of motion. Note that Verlet's algorithm preserves this symmetry: if we change ∆t → −∆t, Eq. (3.7) is unchanged. As a consequence, if at a given time t of the simulation one inverts the arrow of time, the Molecular Dynamics trajectories are retraced. The round-off errors accumulating in the simulation limit this reversibility when the total number of integration steps becomes very large.

For Hamiltonian systems, the volume of phase space is conserved with time, and a numerical simulation must preserve this property. Conversely, if an algorithm does not have this property, the total energy is no longer conserved, even for a short simulation. In order to keep this property, it is necessary that the Jacobian of the transformation in phase space, namely between the old and new coordinates in phase space, is equal to one. We will see below that the Verlet algorithm satisfies this property.

A variant of the Verlet algorithm, known as the leapfrog algorithm, is based on the following procedure: velocities are calculated at half-integer time steps, and positions are obtained at integer time steps. Let us define the velocities at t + ∆t/2 and t − ∆t/2:

v(t + ∆t/2) = (r(t + ∆t) − r(t))/∆t, (3.10)
v(t − ∆t/2) = (r(t) − r(t − ∆t))/∆t. (3.11)

One immediately obtains

r(t + ∆t) = r(t) + v(t + ∆t/2)∆t (3.12)

and similarly

r(t − ∆t) = r(t) − v(t − ∆t/2)∆t. (3.13)

By using Eq. (3.7), one gets

v(t + ∆t/2) = v(t − ∆t/2) + (f(t)/m)∆t + O((∆t)³). (3.14)

Because the leapfrog algorithm leads to Eq. (3.7), the trajectories are identical to those calculated with Verlet's algorithm. The velocities at half-integer times are temporary variables, but the positions are identical. Some care is necessary when calculating the thermodynamic quantities, because the mean potential energy, which is calculated at integer times, and the kinetic energy, which is calculated at half-integer times, do not add up to a constant total energy.

Many attempts to obtain better algorithms have been made and, in order to go beyond the previous heuristic derivation, a more rigorous approach is necessary. From the Liouville formalism, we are going to derive symplectic algorithms, namely algorithms conserving the phase-space volume and consequently the total energy of the system. This method provides algorithms with greater accuracy than the Verlet algorithm for a fixed time step ∆t; therefore, more precise trajectories can be obtained.

3.4 Symplectic algorithms

3.4.1 Liouville formalism

In the Gibbs formalism, the phase-space distribution is given by an N-particle probability distribution f^(N)(r^N, p^N, t) (N is the total number of particles of the system), where r^N denotes the set of coordinates and p^N that of the impulses. If f^(N)(r^N, p^N, t) is known, the average of any macroscopic quantity can be calculated Frenkel and Smit [1996], Tsai et al. [2005].

The time evolution of this distribution is given by the Liouville equation

∂f^(N)(r^N, p^N, t)/∂t + ∑_{i=1}^{N} ( (dr_i/dt)·(∂f^(N)/∂r_i) + (dp_i/dt)·(∂f^(N)/∂p_i) ) = 0, (3.15)

or

∂f^(N)(r^N, p^N, t)/∂t − {H_N, f^(N)} = 0, (3.16)

where H_N denotes the Hamiltonian of the system of N particles, and the bracket {A, B} corresponds to the Poisson bracket¹. The Liouville operator is defined as

iL = {·, H_N} (3.18)
   = ∑_{i=1}^{N} ( (∂H_N/∂p_i)·∂/∂r_i − (∂H_N/∂r_i)·∂/∂p_i ) (3.19)

with

∂H_N/∂p_i = v_i = dr_i/dt (3.20)
∂H_N/∂r_i = −f_i = −dp_i/dt. (3.21)

Therefore, Eq. (3.16) is reexpressed as

∂f^(N)(r^N, p^N, t)/∂t = −iLf^(N), (3.23)

and one gets the formal solution

f^(N)(r^N, p^N, t) = exp(−iLt) f^(N)(r^N, p^N, 0). (3.24)

Similarly, if A is a function of the positions r^N and the impulses p^N (but without explicit time dependence), it obeys

dA/dt = ∑_{i=1}^{N} ( (∂A/∂r_i)·(dr_i/dt) + (∂A/∂p_i)·(dp_i/dt) ). (3.25)

¹ If A and B are two functions of r^N and p^N, the Poisson bracket between A and B is defined as

{A, B} = ∑_{i=1}^{N} ( (∂A/∂r_i)·(∂B/∂p_i) − (∂B/∂r_i)·(∂A/∂p_i) ) (3.17)


Using the Liouville operator, this equation becomes

dA/dt = iLA, (3.26)

which formally gives

A(r^N(t), p^N(t)) = exp(iLt) A(r^N(0), p^N(0)). (3.27)

Unfortunately (or fortunately because, if not, the world would be too simple), an exact expression of the exponential operator is not available in the general case. However, one can obtain it in two important cases. First, express the Liouville operator as the sum of two operators,

L = L_r + L_p (3.28)

where

iL_r = ∑_i (dr_i/dt)·∂/∂r_i (3.29)

and

iL_p = ∑_i (dp_i/dt)·∂/∂p_i (3.30)

In the first case, one assumes that the operator L_p is equal to zero. Physically, this corresponds to situations where the particle impulses are conserved during the evolution of the system, and

iL⁰_r = ∑_i (dr_i/dt)(0)·∂/∂r_i (3.31)

The time evolution of A(t) is then given by

A(r(t), p(t)) = exp(iL⁰_r t) A(r(0), p(0)) (3.32)

Expanding the exponential, one obtains

A(r(t), p(t)) = exp( ∑_i (dr_i/dt)(0) t ∂/∂r_i ) A(r(0), p(0)) (3.33)
= A(r(0), p(0)) + iL⁰_r t A(r(0), p(0)) + ((iL⁰_r t)²/2!) A(r(0), p(0)) + . . . (3.34)
= ∑_{n=0}^{∞} ∑_i ( ((dr_i/dt)(0) t)ⁿ / n! ) (∂ⁿ/∂r_iⁿ) A(r(0), p(0)) (3.35)
= A( (r_i + (dr_i/dt)(0) t)^N, p^N(0) ) (3.36)


The solution is a simple translation of the spatial coordinates, which corresponds to a free streaming of the particles without interaction, as expected.

The second case uses a Liouville operator where L⁰_r = 0. Applying to A the operator L⁰_p, defined like L⁰_r in the first case, one obviously obtains the solution of the Liouville equation, which corresponds to a simple translation of the impulses.

3.4.2 Discretization of the Liouville equation

In the previous section, we expressed the Liouville operator as the sum of two operators L_r and L_p. These two operators do not commute:

exp(Lt) ≠ exp(L_r t) exp(L_p t). (3.37)

The key point of the reasoning uses the Trotter identity

exp(B + C) = lim_{P→∞} ( exp(B/(2P)) exp(C/P) exp(B/(2P)) )^P. (3.38)

For a finite number P of iterations, one obtains

exp(B + C) = ( exp(B/(2P)) exp(C/P) exp(B/(2P)) )^P exp(O(1/P²)) (3.39)

When truncating after P iterations, the approximation is of order 1/P². Therefore, by using ∆t = t/P, by replacing the formal solution of the Liouville equation with a discretized version, and by introducing

B/P = iL_p t/P (3.40)

and

C/P = iL_r t/P, (3.41)

one obtains for an elementary step

e^{iL_p ∆t/2} e^{iL_r ∆t} e^{iL_p ∆t/2}. (3.42)

Because the operators L_r and L_p are Hermitian, the exponential operators are unitary and, by using the Trotter formula, one gets a method from which symplectic algorithms, namely algorithms preserving the volume of phase space, can be derived. Applying the first operator gives

e^{iL_p ∆t/2} A(r^N(0), p^N(0)) = A( r^N(0), (p(0) + (∆t/2) dp(0)/dt)^N ) (3.43)


then

e^{iL_r ∆t} A( r^N(0), (p(0) + (∆t/2) dp(0)/dt)^N ) = A( (r(0) + ∆t dr(∆t/2)/dt)^N, (p(0) + (∆t/2) dp(0)/dt)^N ) (3.44)

and lastly, applying e^{iL_p ∆t/2}, one obtains

A( (r(0) + ∆t dr(∆t/2)/dt)^N, (p(0) + (∆t/2) dp(0)/dt + (∆t/2) dp(∆t)/dt)^N ) (3.45)

In summary, one obtains the global transformations

p(∆t) = p(0) + (∆t/2)( f(r(0)) + f(r(∆t)) ) (3.46)
r(∆t) = r(0) + ∆t dr(∆t/2)/dt (3.47)

By using Eq. (3.46) with the impulses (defined at half-integer times) and Eq. (3.47) for the initial and final times 0 and −∆t, one removes the velocity dependence and recovers the Verlet algorithm at the lowest order of the Trotter expansion. Note that it is possible to restart the derivation by inverting the roles of the operators L_r and L_p in the Trotter formula, which yields new evolution equations; once again, the discretized position trajectories are those of the Verlet algorithm.
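The elementary step (3.43)-(3.45), half impulse update, free streaming, half impulse update, is the velocity Verlet scheme; a one-degree-of-freedom sketch in reduced units (names are ours):

```python
def velocity_verlet(force, r, v, m, dt, nsteps):
    """Half kick / drift / half kick, i.e. Eqs. (3.43)-(3.45); the momentum
    update over a full step reproduces Eq. (3.46)."""
    f = force(r)
    for _ in range(nsteps):
        v += 0.5 * dt * f / m   # e^{iLp dt/2}
        r += dt * v             # e^{iLr dt}
        f = force(r)            # force at the new position
        v += 0.5 * dt * f / m   # e^{iLp dt/2}
    return r, v
```

Only one force evaluation per step is needed, since the force at the end of one step is reused at the beginning of the next.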

In summary, the product of the three unitary operators is also a unitary operator, which leads to a symplectic algorithm. In other words, the Jacobian of the transformation, Eq. (3.42), is the product of three Jacobians and is equal to 1. The expansion can be pushed to the next order in the Trotter formula: more accurate algorithms can be derived, which are still symplectic, but they involve force derivatives. Since the force calculation is always very demanding, the calculation of force derivatives would add a large penalty to the computing performance; such algorithms are only used when short-time accuracy is a crucial need.

3.5 Hard sphere model

The hard sphere model is defined by the Hamiltonian

H_N = ∑_{i=1}^{N} (1/2) m v_i² + ∑_{i<j} u(r_i − r_j), (3.48)


where the interaction potential is

u(r_i − r_j) = { +∞,   |r_i − r_j| ≤ σ
              { 0,    |r_i − r_j| > σ (3.49)

From this definition, one sees that the Boltzmann factors involved in the configurational integral, exp(−βu(r_i − r_j)), are equal either to 0 (when two spheres overlap) or to 1 otherwise, which means that this integral does not depend on the temperature, but only on the density. As for all systems with a purely repulsive interaction, the hard sphere model does not undergo a liquid-gas transition, but a liquid-solid transition exists.

This model has been intensely studied both theoretically and numerically. A renewal of interest in the last two decades comes from its non-equilibrium version, commonly used for granular gases. Indeed, when the packing fraction is small, the inelastic hard sphere is a good reference model. The interaction potential remains the same, but the dissipation occurring during collisions is accounted for by a modified collision rule (see Chapter 7).

Note that the impulsive character of the forces between particles prevents the use of the Verlet algorithm and its variants: all the previous algorithms assume that forces are continuous functions of distance, which is not the case here.

For hard spheres, particle velocities are only modified during collisions. Between two collisions, particles follow rectilinear trajectories. Moreover, since a collision is instantaneous, the probability of having three spheres in contact at the same time is infinitesimal: the dynamics is a sequence of binary collisions.

Consider two spheres of identical mass: during the collision, the total momentum is conserved,

v_1 + v_2 = v′_1 + v′_2 (3.50)

For an elastic collision, the normal component of the relative velocity at contact is inverted:

(v′_2 − v′_1)·(r_2 − r_1) = −(v_2 − v_1)·(r_2 − r_1), (3.51)

whereas the tangential component of the relative velocity is unchanged:

(v′_2 − v′_1)·n_12 = (v_2 − v_1)·n_12, (3.52)

where n_12 is a vector normal to the collision axis and belonging to the plane defined by the two incoming velocities.

By using Eqs. (3.51) and (3.52), one obtains the post-collisional velocities

v′_1 = v_1 + ( (v_2 − v_1)·(r_1 − r_2) / (r_1 − r_2)² ) (r_1 − r_2) (3.53)
v′_2 = v_2 + ( (v_1 − v_2)·(r_1 − r_2) / (r_1 − r_2)² ) (r_1 − r_2) (3.54)


It is easy to check that the total kinetic energy is conserved during the simulation. Since the propagation between and during collisions is exact (discarding round-off errors for the moment), the algorithm is obviously symplectic.

Practically, the hard sphere algorithm is as follows: starting from an initial (non-overlapping) configuration, all possible binary collisions are considered, namely, for each pair of particles, the collision time is calculated.

Between collisions, the positions of particles i and j are given by

r_i = r⁰_i + v_i t (3.55)
r_j = r⁰_j + v_j t (3.56)

where r⁰_i and r⁰_j are the positions of particles i and j just after their previous collision. The contact condition between the two particles is given by the relation

σ² = (r_i − r_j)² = (r⁰_i − r⁰_j)² + (v_i − v_j)² t² + 2(r⁰_i − r⁰_j)·(v_i − v_j) t (3.57)

Because the collision time is a solution of a quadratic equation, several cases must be considered: if the roots are complex, the collision time is set to a very large value in the simulation. If the two roots are real and negative, the collision time is again set to a very large value. If only one root is positive, the collision time corresponds to this value. If the two roots are positive, the smallest one is selected as the collision time.

A necessary condition for having a positive solution for t (a collision in the future) is that

(r⁰_i − r⁰_j)·(v_i − v_j) < 0 (3.58)

Once all pairs of particles have been examined, the shortest time is selected, namely the first collision. The particle trajectories evolve rectilinearly until this collision occurs (the calculation of the trajectories is exact, because the forces between particles are only contact forces). At the collision time, the velocities of the two colliding particles are updated, and the search for the next collision starts again.
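The search for the next collision of a pair, Eqs. (3.57)-(3.58), can be sketched as follows (the function name and the use of math.inf for "no collision" are ours):

```python
import math

def collision_time(ri, rj, vi, vj, sigma):
    """Time to contact of two freely moving hard spheres, from Eq. (3.57).

    Solves |rij + vij t|^2 = sigma^2 and returns the smallest positive
    root, or math.inf when the spheres never touch."""
    rij = [a - b for a, b in zip(ri, rj)]
    vij = [a - b for a, b in zip(vi, vj)]
    b = sum(r * v for r, v in zip(rij, vij))   # rij . vij
    if b >= 0.0:                               # Eq. (3.58) violated: moving apart
        return math.inf
    v2 = sum(v * v for v in vij)
    r2 = sum(r * r for r in rij)
    disc = b * b - v2 * (r2 - sigma * sigma)
    if disc < 0.0:                             # complex roots: trajectories miss
        return math.inf
    return (-b - math.sqrt(disc)) / v2         # smallest positive root
```

The global algorithm keeps the minimum of these times over all pairs and advances every particle ballistically up to that instant.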

The algorithm provides trajectories with a high accuracy, bounded only by round-off errors. The integration step is not constant, contrary to Verlet's algorithm. This quasi-exact method has some limitations when the density becomes significant, because the mean time between two collisions decreases rapidly and the system then evolves slowly. In addition, rattling effects occur in isolated regions of the system, which decreases the efficiency of the method.

3.6 Molecular Dynamics in other ensembles

As discussed above, Molecular Dynamics corresponds to a simulation in the microcanonical ensemble. Generalizations exist for other ensembles, for instance the canonical ensemble. Let us note that the thermal bath introduces random forces which change the dynamics through an ad hoc procedure. Because the dynamics then depends on an adjustable parameter, different dynamics can be generated that differ from the "real" dynamics obtained in the microcanonical ensemble. The methods consist of modifying the equations of motion so that the velocity distribution reaches a Boltzmann distribution. However, this condition is not sufficient for characterizing a canonical ensemble. Therefore, after introducing a heuristic method, we will see how to build a more systematic method for performing Molecular Dynamics in generalized ensembles.

3.6.1 Andersen algorithm

The Andersen algorithm is a method where the coupling of the system with a bath is done as follows: a stochastic process modifies the particle velocities through instantaneous forces. This mechanism can be interpreted as Monte Carlo-like moves between isoenergy surfaces. Between these stochastic "collisions", the system evolves with the usual Newtonian dynamics. The coupling strength is controlled by the collision frequency, denoted by ν. In addition, one assumes that the stochastic "collisions" are totally uncorrelated, which leads to a Poissonian distribution of collision times

P(ν, t) = ν e^(−νt)    (3.59)

where P(ν, t) dt denotes the probability that the next collision between a particle and the bath occurs in the time interval [t, t + dt].

Practically, the algorithm is composed of three steps:

• The system follows Newtonian dynamics over one or several time steps, denoted ∆t.

• One chooses randomly a number of particles for stochastic collisions. The probability of choosing a particle in a time interval ∆t is ν∆t.

• When a particle is selected, its velocity is drawn randomly from a Maxwell distribution at temperature T. The other velocities are not updated.
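The stochastic part of these three steps can be sketched as follows (a minimal illustration; the function name and the unit choice k_B = 1 are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def andersen_collisions(v, m, T, nu, dt):
    """Stochastic step of the Andersen thermostat: each particle undergoes
    a bath 'collision' with probability nu*dt; the velocity of a selected
    particle is redrawn from the Maxwell distribution at temperature T
    (units with k_B = 1), while the other velocities are left untouched."""
    v = v.copy()
    hit = rng.random(v.shape[0]) < nu * dt       # which particles collide
    sigma = np.sqrt(T / m)                       # width of the Maxwell distribution
    v[hit] = rng.normal(0.0, sigma, size=(hit.sum(), v.shape[1]))
    return v
```

Between two such calls, positions and velocities are propagated with the usual (Verlet) Newtonian dynamics.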

This procedure ensures that the velocity distribution goes to equilibrium but, obviously, the real dynamics of the system is deeply disturbed. Moreover, the full distribution obtained with the Andersen thermostat is not a canonical distribution, as can be shown by computing the fluctuations. In particular, correlation functions relax at a rate which strongly depends on the coupling with the thermostat.

3.6.2 Nose-Hoover algorithm

As discussed above, the Andersen algorithm biases the dynamics of the simulation, which is undesirable because the main purpose of Molecular Dynamics is to provide a realistic dynamics. To overcome this problem, a more systematic approach introduced by Nosé enlarges the system with an additional degree of freedom. This procedure modifies the Hamiltonian dynamics by adding friction forces which increase or decrease the particle velocities.

Let us consider the Lagrangian

L = Σ_{i=1}^N (m_i s^2/2) (dr_i/dt)^2 − U(r^N) + (Q/2)(ds/dt)^2 − (L/β) ln(s)    (3.60)

where L is a free parameter and U(r^N) is the interaction potential between the N particles. Q represents the effective mass of the one-dimensional variable s. The conjugate momenta read

p_i = ∂L/∂ṙ_i = m_i s^2 ṙ_i    (3.61)
p_s = ∂L/∂ṡ = Q ṡ    (3.62)

By using a Legendre transformation, one obtains the Hamiltonian

H = Σ_{i=1}^N p_i^2/(2 m_i s^2) + U(r^N) + p_s^2/(2Q) + (L/β) ln(s)    (3.63)

Let us write the microcanonical partition function of this system, made of the N interacting particles and of the additional degree of freedom:

Q = (1/N!) ∫ dp_s ds dp^N dr^N δ(H − E)    (3.64)

This partition function corresponds to the N indistinguishable particles and the additional variable in the microcanonical ensemble, where microstates of equal energy are equiprobable.

Let us introduce p′ = p/s. One obtains that

Q = (1/N!) ∫ dp_s ds s^{3N} dp′^N dr^N δ( Σ_{i=1}^N (p′_i)^2/(2m_i) + U(r^N) + p_s^2/(2Q) + (L/β) ln(s) − E )    (3.65)

One defines H′ as

H′ = Σ_{i=1}^N (p′_i)^2/(2m_i) + U(r^N)    (3.66)


and the partition function becomes

Z = (1/N!) ∫ dp_s ds s^{3N} dp′^N dr^N δ( H′ + p_s^2/(2Q) + (L/β) ln(s) − E )    (3.67)

By using the property

δ(h(s)) = δ(s − s_0)/|h′(s_0)|    (3.68)

where h(s) is a function with a single zero at s = s_0, one integrates over the variable s and obtains

δ( H′ + p_s^2/(2Q) + (L/β) ln(s) − E ) = (βs/L) δ( s − exp(−(β/L)(H′ + p_s^2/(2Q) − E)) )    (3.69)

This gives for the partition function

Z = (β exp(βE(3N + 1)/L) / (L N!)) ∫ dp_s dp′^N dr^N exp( −(β(3N + 1)/L)(H′ + p_s^2/(2Q)) )    (3.70)

Setting L = 3N + 1 and integrating over the additional degree of freedom, the partition function Q of the microcanonical ensemble of the full system (N particles plus the additional degree of freedom) becomes a canonical partition function for a system of N particles.

It is worth noting that the variables used are r, p′ and t′. Discretizing the equations of motion would give a variable time step, which is not easy to control in Molecular Dynamics. It is possible to return to a constant time step by using additional changes of variables. This work was done by Hoover. Finally, the equations of motion are

ṙ_i = p_i/m_i    (3.71)
ṗ_i = −∂U(r^N)/∂r_i − ξ p_i    (3.72)
ξ̇ = ( Σ_{i=1}^N p_i^2/(2m_i) − L/β ) / Q    (3.73)
ṡ/s = ξ    (3.74)

The first two equations form a closed set of equations for the N particles. The last equation controls the evolution of the additional degree of freedom during the simulation.
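A minimal explicit-Euler sketch of Eqs. (3.71)-(3.73) for a one-dimensional particle; the harmonic potential U(r) = r^2/2 and all parameter values are illustrative assumptions, and a production code would use a time-reversible integrator instead of plain Euler:

```python
def nose_hoover_step(r, p, xi, dt, m=1.0, Q=1.0, kT=1.0, L=1.0):
    """One explicit Euler step of the Nose-Hoover equations (3.71)-(3.73)
    for a 1d particle in the toy potential U(r) = r**2/2 (hypothetical)."""
    force = -r                                           # -dU/dr for U = r^2/2
    r_new = r + dt * p / m                               # Eq. (3.71)
    p_new = p + dt * (force - xi * p)                    # Eq. (3.72): friction -xi*p
    xi_new = xi + dt * (p * p / (2 * m) - L * kT) / Q    # Eq. (3.73), with beta = 1/kT
    return r_new, p_new, xi_new
```

When the kinetic term exceeds L/β, the friction variable ξ grows and damps the momenta; when it falls below, ξ decreases (possibly turning negative) and accelerates the particle, which is how the thermostat steers the velocity distribution.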


3.7 Brownian dynamics

3.7.1 Different timescales

Until now, we have considered simple systems with a unique microscopic time. For a mixture of particles whose sizes are very different (one order of magnitude is sufficient), two different microscopic timescales are present (the smaller one is associated with the smallest particles). To obtain the statistical properties of the large particles, the required simulation time exceeds computational capabilities. This situation is not exceptional: it corresponds, for instance, to the physical situation of biological molecules in water. Diluted nanoparticles, like ferrofluids or other colloids, are other examples where the presence of several microscopic timescales prohibits simulation with standard Molecular Dynamics. Brownian dynamics (which refers to Brownian motion) corresponds to a dynamics where the smallest particles are replaced with a continuous medium. The basic idea is to average over the degrees of freedom associated with the smallest relaxation times. The timescales associated with the moves of the large particles are assumed to be much larger than the relaxation times of their velocities.

3.7.2 Smoluchowski equation

The Liouville equation describes the evolution of the system at a microscopic scale. After averaging over the smallest particles (the solvent) and over the velocity distribution of the largest particles, the joint probability distribution P(r^N, t) of finding the N particles at positions r^N is given by a Smoluchowski equation:

∂P(r^N, t)/∂t = ∂/∂r^N · D ∂P(r^N, t)/∂r^N − β ∂/∂r^N · D F(r^N) P(r^N, t)    (3.75)

where β = 1/k_B T and D is a 3N × 3N matrix which depends on the particle configuration and contains the hydrodynamic interactions.

3.7.3 Langevin equation. Discretization

A Smoluchowski equation corresponds to a description in terms of a probability distribution; it is also possible to describe the same process in terms of trajectories, which corresponds to the Langevin equation

dr^N/dt = βD F + ∂/∂r^N · D + ξ    (3.76)

where D is the diffusion matrix, which depends on the particle configuration in the solvent, and ξ is a Gaussian white noise.

By discretizing the stochastic differential equation, Eq. (3.76), with a constant time step ∆t, one obtains with a Euler algorithm the following discretized equation

∆r^N = ( βD F + ∂/∂r^N · D ) ∆t + R    (3.77)

∆t is a time step small enough that the force changes remain weak over one step, but large enough for the velocity distribution to have equilibrated. R is a random displacement drawn from a Gaussian distribution with zero mean, 〈R〉 = 0, and with a variance given by

〈R R^T〉 = 2D ∆t    (3.78)

R^T denotes the transpose of the vector R. A main difficulty with this dynamics is to obtain an expression for the diffusion matrix as a function of the particle configuration. For an infinitely dilute system, the diffusion matrix becomes diagonal and the nonzero matrix elements are all equal to the diffusion constant in the infinite-dilution limit. In the general case, the solvent plays a more active role by adding hydrodynamic forces. These forces are generally long-ranged and their effects cannot be neglected even at modest packing fractions. In the dilute limit of spherical particles with translational degrees of freedom, the diffusion matrix has the asymptotic expression

D = D_0 I + O(1/r_ij^2)    (3.79)

with

D_0 = 1/(3πησβ)    (3.80)

where η is the solvent viscosity, σ is the diameter of the large particles, and β is the inverse temperature. This corresponds to the Stokes-Einstein relation.
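In the infinite-dilution limit where D = D_0 I and the divergence term vanishes, one Euler step of Eq. (3.77) can be sketched as follows (function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def brownian_step(r, force, D0, dt, beta=1.0):
    """One Euler step of Eq. (3.77) with a diagonal diffusion matrix
    D = D0*I (so the divergence term dD/dr^N vanishes).  The random
    displacement R is Gaussian with zero mean and variance
    <R R^T> = 2*D0*dt, Eq. (3.78)."""
    R = rng.normal(0.0, np.sqrt(2.0 * D0 * dt), size=np.shape(r))
    return r + beta * D0 * np.asarray(force) * dt + R
```

For free particles (force = 0), iterating this step reproduces the diffusive law 〈∆r^2〉 = 2 d D_0 t in d dimensions, a useful sanity check of an implementation.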

3.7.4 Consequences

This stochastic differential equation does not satisfy the time-reversal symmetry observed in Hamiltonian systems. Therefore, the time correlation functions calculated from Brownian dynamics differ on short time scales. On the other hand, one finds glassy behavior in many systems studied with Brownian dynamics. This universal behavior is also observed in Molecular Dynamics with molecular systems (orthoterphenyl, salol, glycerol, ...) and nanomolecules (ferrofluids, polymers, ...).

Let us note that Brownian dynamics, which combines aspects of Molecular Dynamics (deterministic forces) and of stochastic processes, is an intermediate method between Molecular Dynamics and Monte Carlo simulation. Optimization techniques and hydrodynamic forces are topics which go beyond these lecture notes, but they must be considered for efficient and accurate simulation of complex systems.


3.8 Conclusion

The Molecular Dynamics method has developed considerably during the last decades. Initially devoted to the simulation of atomic and molecular systems, it now allows, through generalized ensembles and/or Brownian dynamics, the study of systems containing large molecules, like biological molecules.

3.9 Exercises

3.9.1 Multi timescale algorithm

When implementing a Molecular Dynamics simulation, the standard algorithm is the Verlet algorithm (or leap-frog). The time step of the simulation is chosen such that the variations of the different quantities (forces and velocities) are small over one time step. However, the interaction forces contain short-range and long-range contributions

F = F_short + F_long    (3.81)

The Liouville operator iL for a particle reads

iL = iL_r + iL_F    (3.82)
   = v ∂/∂r + (F/m) ∂/∂v    (3.83)

♣ Q. 3.9.1-1 Using the Trotter formula at first order in the time step, recall the equations of motion. The time step is denoted by ∆t.

The Liouville operator can be separated in two parts as follows

iL = (iL_r + iL_{F_short}) + iL_{F_long}    (3.84)

with

iL_{F_short} = (F_short/m) ∂/∂v    (3.85)
iL_{F_long} = (F_long/m) ∂/∂v    (3.86)

♣ Q. 3.9.1-2 By taking a shorter time step δt = ∆t/n, a sub-multiple of the original time step ∆t, for the short-range forces, show that the propagator e^{iL∆t} can be expressed at first order as

e^{iL_{F_long}∆t/2} ( e^{iL_{F_short}δt/2} e^{iL_r δt} e^{iL_{F_short}δt/2} )^n e^{iL_{F_long}∆t/2}    (3.87)

♣ Q. 3.9.1-3 Describe the principle of this algorithm. What can one say about the reversibility of the algorithm and the conservation of volume in phase space?
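For illustration, the propagator (3.87) translates into an integrator in which two long-range-force half "kicks" enclose n velocity-Verlet sub-steps driven by the short-range force (a sketch; the function name and the one-dimensional toy forces in the usage below are ours):

```python
def respa_step(r, v, f_short, f_long, dt, n, m=1.0):
    """One step of the multiple-time-step propagator of Eq. (3.87):
    half kick with F_long over dt/2, then n velocity-Verlet sub-steps
    of length dt/n with F_short, then the closing F_long half kick."""
    v = v + 0.5 * dt * f_long(r) / m
    h = dt / n
    for _ in range(n):
        v = v + 0.5 * h * f_short(r) / m
        r = r + h * v
        v = v + 0.5 * h * f_short(r) / m
    v = v + 0.5 * dt * f_long(r) / m
    return r, v
```

The design motivation is that the expensive long-range forces are evaluated only once per ∆t, while the cheap, rapidly varying short-range forces are evaluated n times; the symmetric splitting keeps the scheme time-reversible.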


Chapter 4

Correlation functions

Contents
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 62

4.2 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . 62

4.2.1 Radial distribution function . . . . . . . . . . . . . . . 62

4.2.2 Structure factor . . . . . . . . . . . . . . . . . . . . . . 66

4.3 Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 67

4.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 67

4.3.2 Time correlation functions . . . . . . . . . . . . . . . . 67

4.3.3 Computation of the time correlation function . . . . . 68

4.3.4 Linear response theory: results and transport coefficients 69

4.4 Space-time correlation functions . . . . . . . . . . . . 70

4.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 70

4.4.2 Van Hove function . . . . . . . . . . . . . . . . . . . . 70

4.4.3 Intermediate scattering function . . . . . . . . . . . . 72

4.4.4 Dynamic structure factor . . . . . . . . . . . . . . . . 72

4.5 Dynamic heterogeneities . . . . . . . . . . . . . . . . . 72

4.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 72

4.5.2 4-point correlation function . . . . . . . . . . . . . . . 73

4.5.3 4-point susceptibility and dynamic correlation length . 73

4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 74

4.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

4.7.1 "The structure factor in all its states!" . . . . . . . . 74

4.7.2 Van Hove function and intermediate scattering function 75


4.1 Introduction

With Monte Carlo or Molecular Dynamics simulation, one can calculate thermodynamic quantities, as seen in the previous chapters, but also spatial correlation functions. These functions characterize the structure of the system and provide more detailed information than thermodynamic quantities. Correlation functions can be directly compared to light or neutron scattering experiments, and/or to several theoretical approaches developed during the last decades (for instance, the integral equations of the liquid state).

Time correlation functions can be compared to experiments if the simulation dynamics corresponds to a "real" dynamics, namely Molecular Dynamics. A Monte Carlo dynamics strongly depends on the chosen rule. However, although the Metropolis algorithm does not correspond to a real dynamics, the global trends of a real dynamics are generally captured (slowing down at a continuous transition, glassy behavior in glass-forming liquids, ...). Using linear response theory, the transport coefficients can be obtained by integrating the time correlation functions.

In the second part of this chapter, we consider glassy behavior, where simulations have recently shown that the dynamics of the system is characterized by dynamical heterogeneities. Indeed, while many particles seem to be frozen, isolated regions contain mobile particles. This phenomenon evolves continuously with time: mobile particles become frozen, and conversely. This feature can be characterized by a higher-order correlation function (a 4-point correlation function). Specific behavior can therefore be characterized by more or less sophisticated correlation functions.

4.2 Structure

4.2.1 Radial distribution function

We now derive the equilibrium correlation functions as functions of the local density (already defined in Chap. 1). The main interest is to show that these expressions are well adapted to simulation. In addition, we pay special attention to finite-size effects: since the simulation box contains a finite number of particles, the quantities of interest display some differences with the same quantities in the thermodynamic limit, and one must account for this.

Let us consider a system of N particles defined by the Hamiltonian

H = Σ_{i=1}^N p_i^2/(2m) + V(r^N).    (4.1)

In the canonical ensemble, the density distribution associated with the probability of finding n particles in the elementary volume Π_{i=1}^n dr_i, denoted in the following as dr^n, is given by

ρ_N^{(n)}(r^n) = (N!/(N − n)!) ∫...∫ exp(−βV(r^N)) dr^{N−n} / Z_N(V, T)    (4.2)

where

Z_N(V, T) = ∫...∫ exp(−βV(r^N)) dr^N.    (4.3)

The normalization of ρ_N^{(n)}(r^n) is such that

∫...∫ ρ_N^{(n)}(r^n) dr^n = N!/(N − n)!,    (4.4)

which corresponds to the number of occurrences of finding n particles among N. In particular, one has

∫ ρ_N^{(1)}(r_1) dr_1 = N    (4.5)
∫∫ ρ_N^{(2)}(r_1, r_2) dr_1 dr_2 = N(N − 1)    (4.6)

which means, for Eqs. (4.5) and (4.6) respectively, that one can find N particles and N(N − 1) pairs of particles in the total volume.

The pair distribution function is defined as

g_N^{(2)}(r_1, r_2) = ρ_N^{(2)}(r_1, r_2)/(ρ_N^{(1)}(r_1) ρ_N^{(1)}(r_2)).    (4.7)

For a homogeneous system (which is not the case of liquids in the vicinity of interfaces), one has ρ_N^{(1)}(r) = ρ and the pair distribution function only depends on the relative distance between particles:

g_N^{(2)}(r_1, r_2) = g(|r_1 − r_2|) = ρ_N^{(2)}(|r_1 − r_2|)/ρ^2    (4.8)

where g(r) is called the radial distribution function. When the distance |r_1 − r_2| is large, the radial distribution function goes to 1 − 1/N. In the thermodynamic limit, one recovers that the radial distribution function goes to 1, but at finite size a correction appears which changes the large-distance limit.

Noting that

〈δ(r − r_1)〉 = (1/Z_N(V, T)) ∫...∫ δ(r − r_1) exp(−βV_N(r^N)) dr^N    (4.9)
             = (1/Z_N(V, T)) ∫...∫ exp(−βV_N(r, r_2, ..., r_N)) dr_2 ... dr_N    (4.10)


the probability density is reexpressed as

ρ_N^{(1)}(r) = 〈 Σ_{i=1}^N δ(r − r_i) 〉.    (4.11)

Similarly, one infers

ρ_N^{(2)}(r, r′) = 〈 Σ_{i=1}^N Σ_{j=1, j≠i}^N δ(r − r_i) δ(r′ − r_j) 〉.    (4.12)

By using the microscopic densities, the radial distribution function is expressed as

g^{(2)}(r, r′) ρ(r) ρ(r′) = 〈 Σ_{i=1}^N Σ_{j=1, j≠i}^N δ(r − r_i) δ(r′ − r_j) 〉.    (4.13)

For a homogeneous and isotropic system, the radial distribution function only depends on the modulus of the relative distance, |r − r′|. Integrating over the volume, one obtains

ρ g(|r|) = (1/N) 〈 Σ_{i=1}^N Σ_{j=1, j≠i}^N δ(r − r_i + r_j) 〉.    (4.14)

In a simulation using periodic boundary conditions (with cubic symmetry), one cannot obtain the structure of the system beyond a distance equal to L/2, because of the spatial periodicity in each main direction of the box. Computation of the radial distribution function relies on a discretization of space. Practically, one starts with an empty distance histogram of bin width ∆r, calculates the number of pairs whose distance lies between r and r + ∆r, and performs the integration of Eq. (4.14) on the shell of index j and thickness ∆r by using the approximate formula:

ρ g(∆r(j + 0.5)) = (2 N_p(r)) / (N (V_{j+1} − V_j)),    (4.15)

where N_p(r) is the number of distinct pairs whose center-to-center distances lie between j∆r and (j + 1)∆r, and where V_j = (4π/3)(j∆r)^3.

This formula is accurate if the radial distribution function varies smoothly between the distances r and r + ∆r. By decreasing the spatial step, one can check a posteriori that this condition is satisfied. However, decreasing the bin width of the histogram diminishes the statistical accuracy in each bin. A compromise must therefore be found for the step, based in part on a rule of thumb.
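The histogram procedure of Eq. (4.15) can be sketched as follows for a cubic periodic box (a minimal implementation; the function name and the minimum-image handling are our choices):

```python
import numpy as np

def radial_distribution(pos, box, dr, rmax):
    """g(r) from Eq. (4.15) for N particles in a cubic box of side `box`
    with periodic boundary conditions; rmax should not exceed box/2."""
    N = len(pos)
    nbins = int(rmax / dr)
    hist = np.zeros(nbins)
    for i in range(N - 1):
        rij = pos[i + 1:] - pos[i]
        rij -= box * np.rint(rij / box)        # minimum-image convention
        d = np.sqrt((rij ** 2).sum(axis=1))
        idx = (d[d < rmax] / dr).astype(int)
        np.add.at(hist, idx, 1.0)              # each distinct pair counted once
    rho = N / box ** 3
    j = np.arange(nbins)
    shell = 4.0 * np.pi / 3.0 * ((((j + 1) * dr) ** 3) - ((j * dr) ** 3))  # V_{j+1} - V_j
    return (j + 0.5) * dr, 2.0 * hist / (N * rho * shell)
```

The factor 2 converts the count of distinct pairs into the double sum of Eq. (4.14); for an ideal gas the result fluctuates around 1, a convenient check.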

Last, note that the computation of this correlation function (for a given configuration) involves all pairs, namely the square of the particle number, an intrinsic property of the correlation function. Because a correct average needs a sufficient number of different configurations, only configurations in which all particles have been moved are considered, instead of all configurations. The part of the computation time spent on the correlation function is then divided by N. In summary, the calculation of the correlation function scales as N and is comparable to that of the simulation dynamics. The computation time can be decreased if only the short-distance structure is significant or of interest.

[Figure 4.1 – Basics of the computation of the radial distribution function: starting from a given particle, one counts the number of pairs whose distance falls between the successive shells built from the discretization given by Eq. (4.15).]

Note that by expressing the radial distribution function in terms of the local density, the distribution function is defined in a statistical manner. The ensemble average is not restricted to equilibrium. We will see later (Chapter 7) that it is possible to calculate this function for out-of-equilibrium systems (random sequential addition) where the process is not described by a Gibbs distribution.


4.2.2 Structure factor

The static structure factor, S(k), is a quantity available experimentally from light or neutron scattering; it is the Fourier transform of the radial distribution function g(r).

For a simple liquid, the static structure factor is defined as

S(k) = (1/N) 〈ρ_k ρ_{−k}〉    (4.16)

where ρk is the Fourier transform of the microscopic density ρ(r). This reads

S(k) = (1/N) 〈 Σ_{i=1}^N Σ_{j=1}^N exp(−ik·r_i) exp(ik·r_j) 〉.    (4.17)

Using delta distributions, S(k) is expressed as

S(k) = 1 + (1/N) 〈 ∫∫ exp(−ik·(r − r′)) Σ_{i=1}^N Σ_{j=1, j≠i}^N δ(r − r_i) δ(r′ − r_j) dr dr′ 〉    (4.18)

which gives

S(k) = 1 + (1/N) ∫∫ exp(−ik·(r − r′)) ρ^{(2)}(r, r′) dr dr′.    (4.19)

For a homogeneous fluid (isotropic and uniform)

S(k) = 1 + (ρ^2/N) ∫∫ exp(−ik·(r − r′)) g(r, r′) dr dr′    (4.20)

For an isotropic fluid, the radial distribution function only depends on |r − r′|. One then obtains

S(k) = 1 + ρ ∫ exp(−ik·r) g(r) dr.    (4.21)

Because the system is isotropic, the Fourier transform only depends on the modulus k = |k| of the wave vector, which gives in three dimensions

S(k) = 1 + 2πρ ∫ r^2 g(r) ∫_0^π exp(−ikr cos(θ)) sin(θ) dθ dr    (4.22)

and, after some calculation,

S(k) = 1 + 4πρ ∫_0^∞ r^2 g(r) (sin(kr)/(kr)) dr.    (4.23)

Finally, the calculation of the static structure factor requires a one-dimensional sine Fourier transform on the positive axis. Efficient fast Fourier transform codes calculate S(k) very rapidly.
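A direct numerical version of Eq. (4.23) can be sketched as follows; for a finite sample one integrates over h(r) = g(r) − 1 up to the maximal available distance, which only discards the forward-scattering (k → 0) contribution (the implementation and the uniform-grid assumption are ours):

```python
import numpy as np

def structure_factor(r, g, rho, k):
    """S(k) from Eq. (4.23), with g(r) replaced by g(r) - 1 to make the
    truncated integral convergent; simple uniform-grid quadrature."""
    k = np.atleast_1d(np.asarray(k, dtype=float))
    dr = r[1] - r[0]
    # sin(k r)/(k r) written with np.sinc, since np.sinc(x) = sin(pi x)/(pi x)
    kernel = np.sinc(np.outer(k, r) / np.pi)
    integrand = (r ** 2) * (g - 1.0) * kernel
    return 1.0 + 4.0 * np.pi * rho * integrand.sum(axis=1) * dr
```

For an ideal gas (g = 1 at all distances) the integrand vanishes and S(k) = 1 for every k, which provides an immediate consistency check.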


In a simulation, one can either calculate g(r) and obtain S(k) by a Fourier transform, or calculate S(k) from Eq. (4.17) and obtain g(r) by an inverse Fourier transform. In both cases, knowledge of the correlations is limited to a distance equal to half the length of the simulation box.

For the static structure factor, this yields an infrared cut-off (π/L): the simulation cannot provide information for smaller wave vectors. Because the computation of the spatial correlation function runs over pairs of particles for a given configuration, the number of terms in the summation is proportional to N^2 (times the number of configurations used for the average in the simulation). In the direct calculation of the static structure factor S(k), the number of terms to consider is proportional to N multiplied by the number of wave vectors for which the structure factor is computed (and finally multiplied by the number of configurations used in the simulation).

Repeated evaluation of trigonometric functions in Eq. (4.17) can be avoided by tabulating the complex exponentials in an array at the beginning of the simulation.

Basically, the computation of g(r) followed by a Fourier transform should give the same result as the direct computation of the structure factor. However, because of the finite duration of the simulation as well as the finite size of the simulation box, statistical and round-off errors can produce differences between the two methods of computation.

4.3 Dynamics

4.3.1 Introduction

Spatial correlation functions are relevant probes of the presence or absence of order at many lengthscales. Similarly, knowledge of equilibrium fluctuations is provided by time correlation functions. Indeed, let us recall that equilibrium does not mean absence of particle dynamics: the system continuously undergoes fluctuations, and how a fluctuation is able to disappear is an essential characteristic associated with macroscopic changes, observed in particular in the vicinity of a phase transition.

4.3.2 Time correlation functions

Let us consider two examples of time correlation functions using simple modelsintroduced in the first chapter of these lecture notes.

For the Ising model, one defines the spin-spin time correlation function

C(t) = (1/N) Σ_{i=1}^N 〈S_i(0) S_i(t)〉    (4.24)


where the brackets denote an equilibrium average and S_i(t) is the spin variable of site i at time t. This function measures how the system loses memory of a configuration after a time t. At time t = 0, this function is equal to 1 (a value which cannot be exceeded, by definition); it then decreases and goes to 0 when the elapsed time is much larger than the equilibrium relaxation time.

For a simple liquid made of point particles, one can monitor the density autocorrelation function

C(t) = (1/V) ∫ dr 〈δρ(r, t) δρ(r, 0)〉    (4.25)

where δρ(r, t) denotes the local density fluctuation. This autocorrelation function measures the evolution of the local density in time. It goes to zero when the elapsed time is much larger than the relaxation times of the system.

4.3.3 Computation of the time correlation function

We now go into the details of implementing a time autocorrelation function for a basic example, namely the spin correlation of the Ising model. This function is defined in Eq. (4.24)1.

The time average of a correlation function (or of other quantities) in an equilibrium simulation (Molecular Dynamics or Monte Carlo) is calculated by using a fundamental property of equilibrium systems, namely time-translational invariance: in other words, if one calculates 〈S_i(t′) S_i(t′ + t)〉, the result is independent of t′. In order to have a well-defined statistical average, one must iterate this computation for many different t′:

C(t) = (1/(N M)) Σ_{j=1}^M Σ_{i=1}^N S_i(t_j) S_i(t_j + t),    (4.26)

where the initial instant is a time where the system is already at equilibrium.

We have assumed that the time step is constant. When the latter is variable, one can choose a constant time step for the correlation function (in general, slightly smaller than the mean value of the variable time step) and the above method can then be applied.

For the practical computation of C(t), once the system is equilibrated, one first defines Nc vectors of N components (N is the total number of spins), one real vector of Nc components and one integer vector of Nc components. All vectors are initially set to 0.

1The Ising model does not own a Hamiltonian dynamics and must be simulated by a Monte Carlo simulation, but the method described here can be applied to all sorts of dynamics, Monte Carlo or Molecular Dynamics.


• At t = t_0, the spin configuration is stored in the first N-component vector, and the scalar product Σ_i S_i(t_0)S_i(t_0) is performed and added to the first component of the real Nc-component vector.

• At t = t_1, the spin configuration is stored in the second N-component vector, and the scalar products Σ_i S_i(t_1)S_i(t_1) and Σ_i S_i(t_0)S_i(t_1) are performed and added to the first and second components of the real Nc-component vector, respectively.

• For t = t_k with k < Nc, the same procedure is iterated and k + 1 scalar products are performed and added to the first k + 1 components of the real Nc-component vector.

• When t = t_{Nc−1}, the first N-component vector is erased and replaced with the new configuration. Nc scalar products can be performed and added to the Nc-component vector. For t = t_{Nc}, the second vector is replaced by the new configuration, and so on.

• This procedure is stopped at time t = T_f. Because the numbers of configurations involved in averaging the different components of the correlation function are not equal, the integer vector of Nc components is used as a histogram counting the configurations entering each average.

• Finally, Eq. (4.26) is used for calculating the correlation function.
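The bookkeeping described above amounts to a circular buffer of the last Nc configurations plus a counter histogram; a compact sketch (the function name and array layout are ours):

```python
import numpy as np

def spin_autocorrelation(configs, Nc):
    """C(t), Eq. (4.26), from a sequence of equilibrated spin
    configurations, using a circular buffer of the last Nc configurations
    and an integer histogram counting the products entering each C(t)."""
    N = len(configs[0])
    buf = np.zeros((Nc, N))
    corr = np.zeros(Nc)
    counts = np.zeros(Nc, dtype=int)
    for step, s in enumerate(configs):
        s = np.asarray(s, dtype=float)
        buf[step % Nc] = s                       # overwrite the oldest slot
        for t in range(min(step, Nc - 1) + 1):
            corr[t] += np.dot(buf[(step - t) % Nc], s) / N
            counts[t] += 1
    return corr / counts
```

The histogram `counts` plays the role of the integer Nc-component vector: early times t contribute more products than late ones, and dividing by it yields the correctly normalized average.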

A last remark concerns the delicate choice of the time step ∆tc as well as the number of components Nc of the correlation function. The correlation function is calculated over a duration equal to (Nc − 1)∆tc, which is bounded by

τ_eq << Nc ∆tc << T_sim    (4.27)

Indeed, configuration averaging requires that Nc ∆tc be much smaller than the total simulation time, but much larger than the equilibrium relaxation time. Therefore ∆tc, which is a free parameter, is chosen as a multiple of the time step of the simulation.

4.3.4 Linear response theory: results and transport coefficients

When an equilibrium system is slightly perturbed by an external field, linear response theory provides the following result: the response of the system and its return to equilibrium are directly related to the ability of the equilibrium system (namely, in the absence of the external field) to respond to fluctuations (Onsager's regression hypothesis). The derivation of the relation between the response function and the fluctuation correlation is given in Appendix A.

In a simulation, it is easier to calculate a correlation function than a linear response function, for several reasons: i) the computation of a correlation function is done at equilibrium; practically, the correlation function is obtained simultaneously with the other thermodynamic quantities along the simulation; ii) for computing response functions, it is necessary to perform a different simulation for each response function. Moreover, to obtain the linear part of the response function (the response of a quantity B(t) to an external field ∆F), one needs to ensure that the ratio 〈∆B(t)〉/∆F is independent of ∆F: this constraint leads to choosing a small value of ∆F, but if the perturbing field is too weak, the function 〈∆B(t)〉 will be small and could be of the same order of magnitude as the statistical fluctuations of the simulation. In the last chapter of these lecture notes, we will introduce a recent method allowing one to calculate response functions without perturbing the system. However, this method is restricted to Monte Carlo simulations. For Molecular Dynamics, fluctuations grow very rapidly with time2.

In summary, by integrating the correlation function C(t) over time, one obtains the system susceptibility.

4.4 Space-time correlation functions

4.4.1 Introduction

To go further in the study of correlations, one can follow them both in space and time. This information is richer, but the price to pay is the calculation of a correlation function of at least two variables. We introduce these functions because they are also available in experiments, essentially from neutron scattering.

4.4.2 Van Hove function

Let us consider the density correlation function, which depends both on space and time:

ρ G(r, r′; t) = 〈ρ(r′ + r, t) ρ(r′, 0)〉    (4.28)

This function can be expressed in terms of the microscopic densities as

ρ G(r, r′; t) = 〈 Σ_{i=1}^N Σ_{j=1}^N δ(r′ + r − r_i(t)) δ(r′ − r_j(0)) 〉    (4.29)

For a homogeneous system, G(r, r′; t) only depends on the relative distance. Integrating over the volume, one obtains

G(r, t) = (1/N) 〈 Σ_{i=1}^N Σ_{j=1}^N δ(r − r_i(t) + r_j(0)) 〉    (4.30)

2There exist some cases where the response function is easier to calculate than the correlation function, for instance the viscosity of liquids.


At t = 0, the Van Hove function G(r, t) simplifies and one obtains

G(r, 0) = (1/N) 〈 Σ_{i=1}^N Σ_{j=1}^N δ(r + r_i(0) − r_j(0)) 〉    (4.31)
        = δ(r) + ρ g(r)    (4.32)

Apart from a singularity at the origin, the Van Hove function at t = 0 is proportional to the pair distribution function g(r). One can separate this function into two parts, self and distinct:

G(r, t) = G_s(r, t) + G_d(r, t)    (4.33)

with

G_s(r, t) = (1/N) 〈 Σ_{i=1}^N δ(r + r_i(0) − r_i(t)) 〉    (4.34)
G_d(r, t) = (1/N) 〈 Σ_{i≠j} δ(r + r_i(0) − r_j(t)) 〉    (4.35)

These correlation functions have the following physical interpretation: the Van Hove function is the probability density of finding a particle i in the vicinity of r at time t knowing that a particle j was in the vicinity of the origin at time t = 0. Gs(r, t) is the probability density of finding a particle i at time t knowing that this same particle was at the origin at time t = 0; finally, Gd(r, t) corresponds to the probability density of finding a particle j, different from i, at time t knowing that the particle i was at the origin at time t = 0.

The normalization of these two functions reads

∫ dr Gs(r, t) = 1 (4.36)

and expresses the certainty of finding the particle somewhere in the volume at any time (namely, particle conservation).

Similarly, one has

∫ dr Gd(r, t) = N − 1 (4.37)

with the following physical meaning: by integrating over space, the distinct correlation function counts the N − 1 remaining particles.

In the long time limit, the system loses memory of the initial configuration and the correlation functions become independent of the distance r:

lim_{r→∞} Gs(r, t) = lim_{t→∞} Gs(r, t) ≃ 1/V ≃ 0 (4.38)

lim_{r→∞} Gd(r, t) = lim_{t→∞} Gd(r, t) ≃ (N − 1)/V ≃ ρ (4.39)


4.4.3 Intermediate scattering function

Instead of considering correlations in real space, one can work in reciprocal space, namely with Fourier components. Let us define the so-called intermediate scattering function, which is the spatial Fourier transform of the Van Hove function,

F(k, t) = ∫ dr G(r, t) e^{−ik·r} (4.40)

In a similar manner, one defines self and distinct parts of the function

Fs(k, t) = ∫ dr Gs(r, t) e^{−ik·r} (4.41)

Fd(k, t) = ∫ dr Gd(r, t) e^{−ik·r} (4.42)

The physical interest of splitting the function comes from the fact that coherent and incoherent neutron scattering provide the total, F(k, t), and self, Fs(k, t), intermediate scattering functions, respectively.
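The self intermediate scattering function is often computed directly from its definition, without passing through Gs(r, t), by averaging exp(−ik·∆r) over particles. A hedged sketch (for an isotropic system the imaginary part averages to zero, and one may average cos(k ∆r) over the axis directions):

```python
import numpy as np

def self_isf(pos0, pos_t, k):
    """F_s(k, t) = <exp(-i k . dr)> averaged over particles; for an isotropic
    system the imaginary part vanishes, so cos(k dr) is averaged over the
    three axis directions."""
    dr = pos_t - pos0
    return np.mean([np.mean(np.cos(k * dr[:, axis])) for axis in range(3)])

# At t = 0 the displacements vanish, so F_s(k, 0) = 1 for any k.
pos = np.zeros((100, 3))
fs0 = self_isf(pos, pos, k=2.0)
print(fs0)  # 1.0
```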

4.4.4 Dynamic structure factor

The dynamic structure factor is defined as the time Fourier transform of the intermediate scattering function,

S(k, ω) = ∫ dt F(k, t) e^{iωt} (4.43)

Obviously, one has the following sum rule: integrating the dynamic structure factor over ω gives the static structure factor,

∫ dω S(k, ω) = S(k). (4.44)

4.5 Dynamic heterogeneities

4.5.1 Introduction

In the previous sections, we have seen that accurate information on how the system evolves can be obtained by monitoring space and time correlation functions. However, there exist important physical situations where the relevant information is not provided by the previously defined functions. Indeed, many systems have relaxation times growing in a spectacular manner upon a rather small variation of a control parameter, typically the temperature. This corresponds to a glassy behavior, where no significant change in the structure appears (spatial correlation functions do not change when the control parameter is changed) and no phase transition occurs. Numerical


simulations as well as experimental results have shown that the particles of the system, although assumed identical, behave very differently at the same time: while most of the particles are characterized by an extremely slow evolution, a small fraction evolves more rapidly. In order to know whether these regions have a collective behavior, one needs to build more suitable correlation functions. Indeed, the previous functions are irrelevant for characterizing this heterogeneous dynamics.

4.5.2 4-point correlation function

The heterogeneities cannot be characterized by a two-point correlation function; one needs a higher-order correlation function. Let us define the 4-point correlation function

C4(r, t) = 〈(1/V) ∫ dr′ δρ(r′, 0) δρ(r′, t) δρ(r′ + r, 0) δρ(r′ + r, t)〉
         − 〈(1/V) ∫ dr′ δρ(r′, 0) δρ(r′, t)〉 〈(1/V) ∫ dr′ δρ(r′ + r, 0) δρ(r′ + r, t)〉 (4.45)

where δρ(r, t) is the density fluctuation at position r at time t. The physical meaning of this function is as follows: when a fluctuation occurs at r′ at time t = 0, how long will this heterogeneity survive, and what is its spatial extension?

4.5.3 4-point susceptibility and dynamic correlation length

In order to measure the strength of the correlation defined above, one performs the spatial integration of the 4-point correlation function. This defines the 4-point susceptibility

χ4(t) = ∫ dr C4(r, t) (4.46)

This function can be reexpressed as

χ4(t) = N(〈C(t)²〉 − 〈C(t)〉²) (4.47)

where one defines C(t)

C(t) = (1/V) ∫ dr δρ(r, t) δρ(r, 0) (4.48)

as the instantaneous (non-averaged) correlation function. For glassy systems, this 4-point susceptibility displays a maximum at the typical relaxation time. Assuming an algebraic behavior in terms of a quantity ξ(t), the latter is interpreted as a dynamic correlation length which characterizes the spatial heterogeneities.
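Equation (4.47) is convenient in practice: one stores, for each independent run (or time origin), the instantaneous value of C(t), and χ4(t) follows from its fluctuations. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def chi4(c_samples, n_particles):
    """chi4(t) = N (<C(t)^2> - <C(t)>^2), Eq. (4.47), estimated from
    independent samples of the instantaneous correlation C(t)."""
    c = np.asarray(c_samples, dtype=float)
    return n_particles * (np.mean(c**2) - np.mean(c)**2)

# Sanity check: if C(t) does not fluctuate from run to run, chi4 vanishes.
print(chi4([0.5, 0.5, 0.5], n_particles=1000))  # 0.0
```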


4.6 Conclusion

Correlation functions are tools allowing for a detailed investigation of the spatial and temporal behavior of equilibrium systems. As seen above with the 4-point functions, collective properties of systems can be revealed by building ad hoc correlation functions, even when thermodynamic quantities as well as two-point correlation functions do not change.

4.7 Exercises

4.7.1 “The structure factor in all its states!”

The characterization of spatial correlations can be done through the static structure factor. We propose to study how particle correlations modify the shape of the static structure factor in different physical situations.

Let N particles be placed within a simulation box of volume V. One defines the microscopic density as

ρ(r) = ∑_{i=1}^{N} δ(r − r_i) (4.49)

where r_i denotes the position of particle i. One defines the structure factor as

S(k) = 〈ρ(k)ρ(−k)〉 / N (4.50)

where ρ(k) is the Fourier transform of the microscopic density and the brackets 〈...〉 denote the average over the configuration ensemble of the system.
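In a simulation, Eq. (4.50) is estimated by accumulating the Fourier modes of the density over configurations, for wave vectors compatible with the periodic box. A hedged sketch (for uncorrelated, ideal-gas positions the average structure factor is 1 for any k ≠ 0):

```python
import numpy as np

def structure_factor(positions, k_vectors):
    """S(k) = rho(k) rho(-k) / N for one configuration; average over
    configurations to estimate Eq. (4.50)."""
    n = positions.shape[0]
    phases = k_vectors @ positions.T              # (M, N) array of k . r_i
    rho_k = np.exp(-1j * phases).sum(axis=1)      # Fourier modes of the density
    return (rho_k * rho_k.conj()).real / n

# Ideal gas in a cubic box of side L = 10, with k = 2*pi*n/L.
rng = np.random.default_rng(1)
ks = (2 * np.pi / 10.0) * np.array([[1, 0, 0], [0, 2, 0], [3, 0, 0]], dtype=float)
s_avg = np.mean([structure_factor(rng.uniform(0.0, 10.0, size=(500, 3)), ks)
                 for _ in range(400)], axis=0)
print(s_avg)  # each component close to 1
```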

♣ Q. 4.7.1-1 Express the structure factor S(k) as a function of terms of the form 〈e^{−ik·r_ij}〉, where r_ij = r_i − r_j with i ≠ j.

♣ Q. 4.7.1-2 Show that S(k)→ 1 when k→ +∞.

♣ Q. 4.7.1-3 The simulation box is a cube of linear dimension L with periodic boundary conditions. What is the minimum modulus of the wave vectors k available in the simulation?

♣ Q. 4.7.1-4 Consider a gas model defined on a cubic lattice with a spatial step a (with periodic boundary conditions), where particles occupy lattice nodes. Show that there is a maximum modulus of the wave vectors. Count all wave vectors available in the simulation.

One now considers a one-dimensional lattice with a step a and N nodes, where each site is occupied by a particle.


♣ Q. 4.7.1-5 Calculate ρ(k) and infer the structure factor associated with this configuration. By taking the thermodynamic limit (N → ∞), show that the structure factor is equal to zero in the Brillouin zone (−π/a < k < π/a).

Each particle is now split into two identical particles shifted symmetrically on each side of the lattice node by a random distance u with probability distribution p(u). The microscopic density of the system then becomes

ρd(r) = ∑_{i=1}^{N} (δ(r − r_i − u_i) + δ(r − r_i + u_i)) (4.51)

and the structure factor is given by

Sd(k) = 〈ρd(k)ρd(−k)〉 / (2N). (4.52)

♣ Q. 4.7.1-6 Show that Sd(k) = 2〈cos²(ku)〉 + 2〈cos(ku)〉²(S(k) − 1).

♣ Q. 4.7.1-7 Assuming that the moments 〈u²〉 and 〈u⁴〉 are finite, calculate the expansion of Sd(k) to order k⁴. Configurations corresponding to a structure factor which goes to zero when k → 0 are called “super-uniform”. Justify this definition.

♣ Q. 4.7.1-8 Recent simulations have shown that the structure factor of dense granular particles behaves as k at small wave vectors. Why does the simulation need a large number of particles (10⁶)?

We now consider the structure factor of a simple liquid in the vicinity of the critical point. The critical correlation function is given by the scaling law g(r) ∼ r^{−(1+η)} for r > r_c.

♣ Q. 4.7.1-9 By using the relation between the correlation function and the structure factor, show that the latter diverges at small wave vectors as k^{−A}, where A is an exponent to be determined (the integral ∫₀^∞ du u^{−η} sin(u) is finite).

4.7.2 Van Hove function and intermediate scattering function

The goal of this problem is to understand how the behavior of the self Van Hove function Gs(r, t) (or of its spatial Fourier transform, the self intermediate scattering function Fs(k, t)) changes when a system goes from a usual dense phase to a glassy phase. Without loss of generality, one writes the self Van Hove correlation function as Gs(r, t) = 〈δ(r + r_i(0) − r_i(t))〉, where i is a tagged particle.

One first considers the long time and long distance behavior of the self Van Hove function.


♣ Q. 4.7.2-1 Write the conservation of mass of the tagged particles, which relates the local density ρ^(s)(r, t) and the particle flux j^(s)(r, t).

♣ Q. 4.7.2-2 Assuming that particles undergo a diffusive motion, described by the relation j^(s)(r, t) = −D∇ρ^(s)(r, t), what is the time-dependent equation satisfied by ρ^(s)(r, t)?

♣ Q. 4.7.2-3 Defining the Fourier transform of ρ^(s)(r, t) as ρ^(s)(k, t) = ∫ d³r ρ^(s)(r, t) e^{−ik·r}, obtain the differential equation satisfied by ρ^(s)(k, t). Solve this equation. The initial condition is denoted ρ^(s)(k, 0).

♣ Q. 4.7.2-4 One admits that Fs(k, t) = 〈ρ^(s)(k, t)ρ^(s)(−k, 0)〉, where the bracket denotes the average over the initial conditions. Infer from the above question that Fs(k, t) ∼ exp(−Dk²t).

♣ Q. 4.7.2-5 By using the inverse Fourier transform, obtain the expression of Gs(r, t).
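The diffusive result of Q. 4.7.2-4 can be checked numerically: for free Brownian particles the displacements are Gaussian with variance 2Dt per coordinate, and the measured Fs(k, t) should match exp(−Dk²t). A minimal sketch (the parameter values are arbitrary):

```python
import numpy as np

# Free Brownian motion: displacements after time t are Gaussian with variance
# 2 D t per coordinate, so F_s(k, t) = exp(-D k^2 t) (k taken along x).
rng = np.random.default_rng(2)
D, t, k = 0.3, 2.0, 1.5
dx = rng.normal(0.0, np.sqrt(2.0 * D * t), size=200000)  # x-displacements
fs_measured = np.mean(np.cos(k * dx))                     # F_s for k along x
fs_theory = np.exp(-D * k**2 * t)
print(fs_measured, fs_theory)  # both close to 0.259
```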

One now considers the situation where the dynamics is glassy, one of whose characteristics is the presence of heterogeneities. Experiments and numerical simulations reveal that the long distance decay of the Van Hove correlation function is slower than the usual Gaussian and close to an exponential. We suggest a simple model for the particle dynamics. Neglecting all vibration modes, one assumes that a particle undergoes stochastic jumps whose waiting-time distribution is given by φ(t) = τ^{−1}e^{−t/τ} (one also assumes that the probability distribution associated with the first jump after the initial time is φ(t)). The self Van Hove function then reads

Gs(r, t) = ∑_{n=0}^{∞} p(n, t) f(n, r) (4.53)

where p(n, t) is the probability of having jumped n times up to time t and f(n, r) the probability of moving a distance r after n jumps.

♣ Q. 4.7.2-6 Show that p(0, t) = 1 − ∫₀^t dt′ φ(t′). Calculate p(0, t) explicitly.

♣ Q. 4.7.2-7 Find the relation between p(n, t) and p(n− 1, t).

♣ Q. 4.7.2-8 Same question between f(n, r) and f(n− 1, r).

♣ Q. 4.7.2-9 Show that f(n, k) = f(k)^n and that p(n, s) = p(0, s)φ(s)^n.

One defines the Fourier-Laplace transform as

G(k, s) = ∫ d³r e^{−ik·r} ∫₀^∞ dt e^{−st} Gs(r, t) (4.54)


♣ Q. 4.7.2-10 Show that

G(k, s) = p(0, s) / (1 − φ(s)f(k)) (4.55)

♣ Q. 4.7.2-11 Let us denote G0(k, s) = p(0, s). Calculate G0(r, t). What is the physical meaning of this quantity?

♠ Q. 4.7.2-12 Show that

G(r, t) − G0(r, t) = (e^{−t/τ}/(2π)³) ∫ d³k [e^{tf(k)/τ} − 1] e^{ik·r} (4.56)

♠ Q. 4.7.2-13 If f(r) = (2πd²)^{−3/2} e^{−r²/(2d²)}, calculate f(k). One considers the instant t = τ. Expanding the exponential of Eq. (4.56) and performing the inverse Fourier transform, show that

G(r, τ) − G0(r, τ) ∼ e^{−r/d} (4.57)

Give a physical meaning of this result.
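The jump model above can also be simulated directly: with exponential waiting times of mean τ, the number of jumps up to time t is Poisson distributed, p(n, t) = e^{−t/τ}(t/τ)ⁿ/n!, which can be checked for p(0, t). A minimal sketch:

```python
import numpy as np

# Exponential waiting times with mean tau make the number of jumps up to time t
# Poisson distributed: p(n, t) = exp(-t/tau) (t/tau)^n / n!. Check p(0, tau) = 1/e.
rng = np.random.default_rng(3)
tau, t_obs, n_walkers = 1.0, 1.0, 100000
waits = rng.exponential(tau, size=(n_walkers, 20))   # 20 waiting times are ample for t_obs = tau
n_jumps = (np.cumsum(waits, axis=1) <= t_obs).sum(axis=1)
p0_measured = np.mean(n_jumps == 0)
print(p0_measured, np.exp(-t_obs / tau))  # both close to 0.368
```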


Chapter 5

Phase transitions

Contents

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 79

5.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . . 81

5.2.1 Critical exponents . . . . . . . . . . . . . . . . . . . . 81

5.2.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . 82

5.3 Finite size scaling analysis . . . . . . . . . . . . . . . . 87

5.3.1 Specific heat . . . . . . . . . . . . . . . . . . . . . . . 87

5.3.2 Other quantities . . . . . . . . . . . . . . . . . . . . . 88

5.4 Critical slowing down . . . . . . . . . . . . . . . . . . . 90

5.5 Cluster algorithm . . . . . . . . . . . . . . . . . . . . . 90

5.6 Reweighting Method . . . . . . . . . . . . . . . . . . . 93

5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 95

5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 96

5.8.1 Finite size scaling for continuous transitions: logarithm corrections . . . 96

5.8.2 Some aspects of the finite size scaling: first-order transition . . . 98

5.1 Introduction

A very interesting application of simulations is the study of phase transitions, even though they can occur only in infinite systems (namely, in the thermodynamic limit). This apparent paradox can be resolved as follows: in a continuous phase transition (unlike a first-order transition), fluctuations occur on increasing length scales as the temperature approaches the critical point. In a simulation


box, when the fluctuations are smaller than the linear dimension of the simulation cell, the system does not realize that the simulation cell is finite, and its behavior (and the thermodynamic quantities) is close to that of an infinite system. When the fluctuations are comparable to the system size, the behavior differs from that of the infinite system close to the critical temperature, but the finite size behavior is similar to that of the infinite system when the temperature is far from the critical point.

Finite size scaling analysis, which involves simulations with different system sizes1, provides an efficient tool for determining the critical exponents of the system.

Many simulations investigating critical phenomena (continuous phase transitions) were performed with lattice models; studies of continuous systems are less common. The theory of critical phenomena shows that phase transitions are characterized by scaling laws (algebraic laws) whose exponents are universal quantities independent of the underlying lattice. This is due to the fact that a continuous phase transition is associated with fluctuations on all length scales at the critical point. Therefore, microscopic details are irrelevant to the universal behavior of the system, but the space dimension of the system as well as the symmetry group of the order parameter are essential.

Phase transitions are characterized by critical exponents which define universality classes. Studies can therefore be carried out accurately on simple models. For example, the liquid-gas transition for liquids interacting through a pairwise potential belongs to the same universality class as the para-ferromagnetic transition of the Ising model.

As we will see below, simulations must be performed with different sizes in order to determine the critical exponents. If we compare lattice and continuous systems, the former require less simulation time. Indeed, varying the size of a continuous system over several orders of magnitude is a challenging problem, whereas it is possible with lattice systems.

Finally, let us note that the critical temperature obtained in simulations varies weakly with the system size in continuous systems, whereas a strong dependence can be observed in lattice models. Consequently, with a modest size, a rather good estimate of the critical temperature can be obtained from a simulation.

1For a good accuracy of the critical exponents, one must choose large simulation cells (the accuracy of the critical exponents is restricted by the available computer power). Indeed, the scaling laws used for the calculation of the critical exponents correspond to asymptotic behaviors, which are valid only for large simulation cells. To obtain accurate critical exponents, corrections to scaling can be included, which introduces additional parameters to be estimated.


5.2 Scaling laws

5.2.1 Critical exponents

Scaling law assumptions were made in the sixties and were subsequently precisely derived within the renormalization group theory, which provides a general framework for studying critical phenomena. A detailed presentation of this theory goes beyond the scope of these lecture notes, and we only introduce the basic concepts useful for studying phase transitions in simulations.

For the sake of simplicity, we consider the Ising model, whose Hamiltonian is given by

H = −J ∑_{<i,j>} S_i S_j − H ∑_{i=1}^{N} S_i (5.1)

where < i, j > denotes a summation over pairs of nearest sites, J is the ferromagnetic interaction strength, and H a uniform external field.

In the vicinity of the critical point (namely, for the Ising model, H = 0 and kBTc/J = 2.2691... in two dimensions), thermodynamic quantities as well as spatial correlation functions behave according to scaling laws. The magnetization per spin is defined by

m_N(t, h) = (1/N) ∑_{i=1}^{N} 〈S_i〉, (5.2)

where t = (T − Tc)/Tc is the dimensionless temperature and h = H/kBT the dimensionless external field.

The magnetization (Eq. (5.2)) in the thermodynamic limit is defined as

m(t, h) = lim_{N→∞} m_N(t, h). (5.3)

In the absence of an external field, the scaling law is

m(t, h = 0) = { 0 if t > 0 ; A|t|^β if t < 0 } (5.4)

where the exponent β characterizes the spontaneous magnetization in the ferromagnetic phase.

Similarly, along the critical isotherm, one has

m(t = 0, h) = { −B|h|^{1/δ} if h < 0 ; B|h|^{1/δ} if h > 0 } (5.5)

where δ is the exponent of the magnetization in the presence of an external field.


The specific heat cv is given by

c_v(t, h = 0) = { C|t|^{−α} if t < 0 ; C′|t|^{−α′} if t > 0 } (5.6)

where α and α′ are the exponents associated with the specific heat. Experimentally, one always observes that α = α′. The amplitude ratio C/C′ is also a universal quantity. Similarly, one can define a pair of exponents for each quantity, for positive or negative dimensionless temperature, but the exponents are always identical, and we henceforth consider a single exponent for each quantity. The isothermal susceptibility in zero external field diverges at the critical point as

χT(h = 0) ∼ |t|^{−γ}, (5.7)

where γ is the susceptibility exponent.

The spatial correlation function, denoted by g(r), behaves in the vicinity of the critical point as

g(r) ∼ exp(−r/ξ) / r^{d−2+η}, (5.8)

where ξ is the correlation length, which behaves as

ξ ∼ |t|^{−ν}, (5.9)

where ν is the exponent associated with the correlation length.

This correlation function decreases algebraically at the critical point as follows:

g(r) ∼ 1 / r^{d−2+η} (5.10)

where η is the exponent associated with the correlation function.

These six exponents (α, β, γ, δ, ν, η) are not independent! Assuming that the free energy per unit volume and the pair correlation function obey scaling functions, it can be shown that only two exponents are independent.

5.2.2 Scaling laws

Landau theory neglects the fluctuations of the order parameter (which are relevant in the vicinity of the phase transition) and expresses the free energy as an analytical function of the order parameter. The theory of phase transitions assumes that the fluctuations neglected in the mean-field approach make a non-analytical contribution to the thermodynamic quantities, e.g. to the singular free energy density, denoted fs(t, h):

fs(t, h) = |t|^{2−α} F±f(h/|t|^∆) (5.11)


where F±f are functions defined below and above the critical temperature, which approach a non-zero value when h → 0 and have an algebraic behavior when the scaling variable goes to infinity,

F±f(x) ∼ x^{λ+1}, x → ∞. (5.12)

The properties of this non-analytical term give relations between critical exponents. The magnetization is obtained by taking the derivative of the free energy density with respect to the external field h,

m(h, t) = −(1/kBT) ∂fs/∂h ∼ |t|^{2−α−∆} F±′f(h/|t|^∆). (5.13)

For h→ 0, one identifies exponents of the algebraic dependence in temperature:

β = 2− α−∆. (5.14)

Similarly, by taking the second derivative of the free energy density with respect to the field h, one obtains the isothermal susceptibility

χT(t, h) ∼ |t|^{2−α−2∆} F±′′f(h/|t|^∆). (5.15)

For h→ 0, one identifies exponents of the algebraic dependence in temperature:

−γ = 2− α− 2∆. (5.16)

By eliminating ∆, one obtains a first relation between the exponents α, β and γ, called the Rushbrooke scaling law:

α + 2β + γ = 2. (5.17)

This relation does not depend on the space dimension d. Moreover, one has

∆ = β + γ. (5.18)

Let us now consider the limit t → 0 with h ≠ 0. One then obtains for the magnetization, along the critical isotherm:

m(t, h) ∼ |t|^β (h/|t|^∆)^λ (5.19)
       ∼ |t|^{β−∆λ} h^λ. (5.20)

In order to recover Eq. (5.5), the magnetization must remain finite when t → 0, i.e. it must neither diverge nor vanish. This yields

β = ∆λ, (5.21)


By identifying the exponents of h in Eqs. (5.5) and (5.20), one infers

λ = 1/δ. (5.22)

Eliminating ∆ and λ in Eqs. (5.21), (5.22) and (5.18), one infers the following relation:

βδ = β + γ. (5.23)

The next two relations are inferred from the scaling form of the free energy density and of the spatial correlation function g(r). By considering that the relevant macroscopic length scale of the system is the correlation length, the singular free energy density has the following expansion:

fs/(kBT) ∼ ξ^{−d} (A + B₁(l₁/ξ) + . . .) (5.24)

where l₁ is a microscopic length scale. When t → 0, the subdominant corrections can be neglected and one has

fs/(kBT) ∼ ξ^{−d} ∼ |t|^{νd}. (5.25)

Taking the second derivative of this equation with respect to the temperature, one obtains for the specific heat

c_v = −T ∂²fs/∂T² ∼ |t|^{νd−2}. (5.26)

By using Eq. (5.6), one infers the Josephson relation (the so-called hyperscaling relation),

2− α = dν, (5.27)

which involves the space dimension. Knowing that the space integral of g(r) is proportional to the susceptibility, one performs the integration of g(r) over a volume whose linear dimension is ξ, which gives

∫₀^ξ d^d r g(r) = ∫₀^ξ d^d r exp(−r/ξ)/r^{d−2+η} (5.28)

With the change of variable u = r/ξ in Eq. (5.28), one obtains

∫₀^ξ d^d r g(r) = ξ^{2−η} ∫₀^1 d^d u exp(−u)/u^{d−2+η} (5.29)

On the right-hand side of Eq. (5.29), the integral has a finite value. By using the relation between the correlation length ξ and the dimensionless temperature t (Eq. (5.9)), one obtains

χT(h = 0) ∼ |t|^{−(2−η)ν}, (5.30)


Figure 5.1 – Spin configuration of the Ising model in 2 dimensions at high temperature.

By considering Eq. (5.7), one infers

γ = (2− η)ν. (5.31)

Finally, there are four relations between the six critical exponents, which implies that only two of them are independent2.
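As a consistency check, the four scaling relations derived above can be verified numerically with the exactly known exponents of the two-dimensional Ising model (α = 0, β = 1/8, γ = 7/4, δ = 15, ν = 1, η = 1/4):

```python
# Exact critical exponents of the two-dimensional Ising model
alpha, beta, gamma, delta, nu, eta, d = 0.0, 1/8, 7/4, 15.0, 1.0, 1/4, 2

assert alpha + 2 * beta + gamma == 2.0      # Rushbrooke, Eq. (5.17)
assert beta * delta == beta + gamma         # Eq. (5.23)
assert 2.0 - alpha == d * nu                # Josephson, Eq. (5.27)
assert gamma == (2.0 - eta) * nu            # Eq. (5.31)
print("all four scaling relations hold")
```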

A significant feature of the theory of critical phenomena is the existence of upper and lower critical dimensions: at the upper critical dimension d_sup and above, the mean-field theory (up to some logarithmic subdominant corrections) describes the critical phase transition. Below the lower critical dimension d_inf, no phase transition can occur. For the Ising model, d_inf = 1 and d_sup = 4. This implies that in three-dimensional Euclidean space, a mean-field theory cannot describe accurately the para-ferromagnetic phase transition, in particular the critical exponents. Because the liquid-gas transition belongs to the universality class of the Ising model, a similar conclusion applies to this phase transition.

After this brief survey of critical phenomena, we now apply scaling laws to finite size systems, in order to develop a method which gives the critical exponents of the phase transition as well as non-universal quantities in the thermodynamic limit.

2 This property does not apply to quenched disordered systems.


Figure 5.2 – Spin configuration of the Ising model in 2 dimensions close to the critical temperature.

Figure 5.3 – Spin configuration of the Ising model in 2 dimensions below the critical temperature.


5.3 Finite size scaling analysis

5.3.1 Specific heat

A close examination of the spin configurations in Figs. 5.1-5.3 illustrates that a system close to the phase transition shows domains of spins of the same sign. The size of the domains increases as the transition is approached. The renormalization group theory showed that, close to the critical point, the thermodynamic quantities of a finite system of linear size L, at dimensionless temperature t and dimensionless field h, ..., are the same as those of a system of size L/l at dimensionless temperature t l^{y_t} and dimensionless field h l^{y_h}, ..., which gives

fs(t, h, . . . , L^{−1}) = l^{−d} fs(t l^{y_t}, h l^{y_h}, . . . , (L/l)^{−1}). (5.32)

where y_t and y_h are the exponents associated with the fields.

A phase transition occurs when all the arguments of fs go to 0. In zero field, one has

fs(t, . . . , L^{−1}) = |t|^{2−α} F±f(|t|^{−ν}/L). (5.33)

If the correlation length ξ is smaller than L, the system behaves like an infinite system. But in a simulation the limit t → 0 can be taken before the limit L → ∞, and ξ can then become larger than L, the linear dimension of the simulation cell; in other words, the argument of the function F_f in Eq. (5.33) goes to infinity, which means that one moves away from the critical point. Below and above the critical point of the infinite system, and outside a vicinity which shrinks with increasing simulation cell size, the system behaves like an infinite system, whereas in the region where the correlation length is comparable to the size of the simulation cell, one observes a size dependence (see Fig. 5.4). If we denote by F_c the scaling function associated with the specific heat, which depends on the scaling variable |t|^{−ν}/L, one has

c_v(t, L^{−1}) = |t|^{−α} F±c(|t|^{−ν}/L). (5.34)

Because |t|^{−α} goes to infinity when t → 0, F±c(x) must compensate this divergence when its argument x goes to infinity. Let us reexpress this function as

F±c(|t|^{−ν}/L) = (|t|^{−ν}/L)^{−κ} D±(L|t|^ν) (5.35)

with D±(0) finite. Since the specific heat does not diverge when |t| goes to zero in a finite system, one requires that

κ = α/ν (5.36)

which gives for the specific heat

c_v(t, L^{−1}) = L^{α/ν} D(L|t|^ν). (5.37)

The function D goes to zero when the scaling variable is large, and is always finite and positive. D is a continuous function which displays a maximum at a finite


Figure 5.4 – Specific heat versus temperature of the Ising model in two dimensions. The simulation results correspond to different system sizes (L = 16, 24, 32, 48), where L is the linear dimension of the lattice.

value of the scaling variable, denoted x0. Therefore, for a finite size system, one obtains the following results:

• The maximum of the specific heat occurs at a temperature Tc(L) which is shifted with respect to that of the infinite system:

Tc(L) − Tc ∼ L^{−1/ν}. (5.38)

• The maximum of the specific heat of a finite size system of linear dimension L is given by the scaling law

Cv(Tc(L), L^{−1}) ∼ L^{α/ν}. (5.39)
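The shift law (5.38) can be illustrated with a synthetic example: choosing an arbitrary scaling function D and arbitrary (non-Ising) exponents, the location of the specific heat maximum indeed moves as L^{−1/ν}. A sketch in which all numerical values are made up for illustration:

```python
import numpy as np

# Synthetic scaling function D(x) = x e^{-x} (maximum at x0 = 1) and made-up
# exponents alpha/nu = 0.4, nu = 0.8 -- NOT the 2D Ising values.
alpha_over_nu, nu = 0.4, 0.8

def D(x):
    return x * np.exp(-x)

t = np.linspace(1e-4, 0.5, 20000)     # distance to the critical point (t > 0 branch)
sizes = np.array([16, 32, 64, 128])
shifts = [t[np.argmax(L**alpha_over_nu * D(L * t**nu))] for L in sizes]

# Eq. (5.38): Tc(L) - Tc ~ L^{-1/nu}; the log-log slope of the shifts gives -1/nu.
slope = np.polyfit(np.log(sizes), np.log(shifts), 1)[0]
print(-1.0 / slope)  # close to nu = 0.8
```

In a real study the shifts Tc(L) − Tc come from simulation data, and corrections to scaling must be considered for small L.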

5.3.2 Other quantities

Similar results can be obtained for other thermodynamic quantities available in simulations. For instance, let us consider the absolute value of the magnetization

〈|m|〉 = (1/N) 〈|∑_{i=1}^{N} S_i|〉 (5.40)

and the isothermal susceptibility

kBTχ = N(〈m²〉 − 〈m〉²). (5.41)


It is possible to calculate a second susceptibility

kBTχ′ = N(〈m²〉 − 〈|m|〉²) (5.42)

χ increases as N when T → 0, due to the existence of two peaks in the magnetization distribution, and does not display a maximum at the transition temperature. Conversely, χ′ goes to 0 when T → 0 and has a maximum at the critical point. At high temperature, both susceptibilities are related in the thermodynamic limit by the relation χ′ = χ(1 − 2/π). At the critical point, both susceptibilities χ and χ′ diverge with the same exponent, but χ′ has a smaller amplitude.

Another quantity, useful in simulation, is the Binder parameter:

U = 1 − 〈m⁴〉/(3〈m²〉²) (5.43)

The reasoning leading to the scaling function of the specific heat can be applied to other thermodynamic quantities, and the scaling laws for finite size systems are given by the relations

〈|m(t, 0, L^{−1})|〉 = L^{−β/ν} F±m(tL^{1/ν}) (5.44)

kBTχ(t, 0, L^{−1}) = L^{γ/ν} F±χ(tL^{1/ν}) (5.45)

kBTχ′(t, 0, L^{−1}) = L^{γ/ν} F±χ′(tL^{1/ν}) (5.46)

U(t, 0, L^{−1}) = F±U(tL^{1/ν}) (5.47)

where F±m, F±χ, F±χ′, and F±U are eight scaling functions (with different maxima).

In practice, by plotting the Binder parameter as a function of temperature, all the curves U(t, 0, L^{−1}) intersect at the same abscissa (within statistical errors), which determines the critical temperature of the system in the thermodynamic limit. Once the transition temperature is obtained, one can compute β/ν from 〈|m|〉, then γ/ν from χ (or χ′). By considering the maximum of Cv or of other quantities, one derives the value of the exponent 1/ν.
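A minimal sketch of the Binder parameter, Eq. (5.43): in the high-temperature phase the magnetization distribution is Gaussian around zero, giving U → 0, while in the low-temperature phase m ≈ ±m0 gives U → 2/3; the curves for different L therefore cross near Tc. The sample generators below are illustrative stand-ins for actual simulation data:

```python
import numpy as np

def binder(m_samples):
    """Binder parameter U = 1 - <m^4> / (3 <m^2>^2), Eq. (5.43)."""
    m = np.asarray(m_samples)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2)**2)

rng = np.random.default_rng(4)
# High-temperature phase: m Gaussian around 0, <m^4> = 3 <m^2>^2, so U -> 0.
u_high = binder(rng.normal(0.0, 0.1, size=100000))
# Low-temperature phase: m = +/- m0, <m^4> = <m^2>^2, so U -> 2/3.
u_low = binder(rng.choice([-0.8, 0.8], size=100000))
print(u_high, u_low)  # close to 0 and 2/3
```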

Note that in a simulation more information is available than the two independent exponents and the critical temperature. Indeed, because of the uncertainties and of sub-dominant corrections to the scaling laws3, comparing results obtained from different quantities allows one to determine the critical exponents more accurately.

Monte Carlo simulation, combined with finite size scaling, is a powerful method for calculating critical exponents. Moreover, one also obtains non-universal quantities, like the critical temperature, that no analytical treatment has so far been able to provide with sufficient accuracy. Indeed, the renormalization group can only give critical exponents and is unable to provide a reasonable value of the critical temperature. Only the functional or non-perturbative approaches can give quantitative results, but with a high price to pay in terms of computation time.

3Scaling laws correspond to asymptotic behaviors for very large system sizes: in order to increase the accuracy of the simulation, one needs to incorporate subdominant corrections.


5.4 Critical slowing down

The nice method developed in the previous section relies on the idea that the Metropolis algorithm remains efficient even in the vicinity of the critical region. This is not the case: in order to reach equilibrium, fluctuations must be sampled at all scales. But close to the critical point, large scale fluctuations are present and the relaxation times increase accordingly; these times are related to the growing correlation length by the scaling relation

τ ∼ (ξ(t))^z (5.48)

where z is a new exponent, the so-called dynamical exponent. It typically varies between 2 and 5 (for a Metropolis algorithm). For an infinite system, knowing that ξ ∼ |t|^{−ν}, the relaxation time then diverges as

τ ∼ |t|^{−νz}. (5.49)

For a finite-size system, the correlation length is bounded by the linear system size L, and the relaxation time increases as

τ ∼ L^z. (5.50)

Simulation times then become prohibitive!
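In practice, the relaxation time of an observable is estimated from its integrated autocorrelation time. A hedged sketch, tested on a synthetic AR(1) series with known autocorrelation (the window and parameters are arbitrary choices):

```python
import numpy as np

def integrated_autocorr_time(x, window=200):
    """tau_int = 1/2 + sum_{t>=1} rho(t), with the sum truncated at `window`."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c0 = np.dot(x, x) / len(x)                   # variance
    tau = 0.5
    for lag in range(1, window):
        tau += np.dot(x[:-lag], x[lag:]) / len(x) / c0
    return tau

# Synthetic AR(1) series: rho(t) = a^t, hence tau_int = 1/2 + a/(1-a) = 9.5 for a = 0.9.
rng = np.random.default_rng(5)
a, n = 0.9, 400000
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = noise[0]
for i in range(1, n):
    x[i] = a * x[i - 1] + noise[i]
tau_est = integrated_autocorr_time(x)
print(tau_est)  # close to 9.5
```

The growth of this quantity with L near Tc is precisely the critical slowing down of Eq. (5.50).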

5.5 Cluster algorithm

Let us recall that the Metropolis algorithm is a Markovian dynamics for particle motion (or spin flips on lattices). A reasonable acceptance ratio is obtained by choosing a single particle (or spin), so that the product |β∆E| remains relatively small. A major advance, some twenty years ago, was the introduction of methods in which a large number of particles (or spins) are moved (or flipped) simultaneously, while keeping a quite high acceptance ratio (or even a ratio equal to 1) and still reaching an equilibrium state Swendsen and Wang [1987].

We consider below the Ising model, but the method can be generalized to any Hamiltonian with a local symmetry (for the Ising model, this corresponds to the up/down symmetry).

Ideally, a global spin flip should have an acceptance ratio equal to 1. The proposed method adds a new step to the Metropolis algorithm: from a given spin configuration, one characterizes the configuration by means of bonds between spins. The detailed balance equation (2.18) of Chap. 2 is modified as follows:

Pgc(o)W(o → n) / [Pgc(n)W(n → o)] = exp(−β(U(n) − U(o))) (5.51)


where Pgc(o) is the probability of generating the bond configuration from the old spin configuration o (o and n denote the old and new configurations, respectively). If one wishes to accept all new configurations (W(i → j) = 1), whatever the configurations i and j, one must have the following ratio

Pgc(o) / Pgc(n) = exp(−β(U(n) − U(o))) (5.52)

In order to build such an algorithm, let us note that the energy is simply expressed as

U = (Na −Np)J (5.53)

where Np is the number of pairs of nearest neighbor spins of the same sign and Na the number of pairs of nearest neighbor spins of opposite signs. This formula, beyond its simplicity, has the advantage of being exact whatever the dimensionality of the system. The method relies on a rewriting of the partition function given by Fortuin and Kasteleyn (1969), but exploiting this idea for simulation occurred almost twenty years later, with the work of Swendsen and Wang [1987].

For building clusters, one defines bonds between nearest neighbor spins with the following rules

1. If two nearest neighbor spins have opposite signs, they are not connected.

2. If two nearest neighbor spins have the same sign, they are connected with a probability p and disconnected with a probability (1 − p).

This rule assumes that J is positive (ferromagnetic system). When J is negative, spins of the same sign are always disconnected and spins of opposite signs are connected with a probability p and disconnected with a probability (1 − p). Let us consider a configuration with Np pairs of spins of the same sign; the probability of having nc pairs connected (and consequently nb = Np − nc pairs of spins disconnected) is given by the relation

Pgc(o) = p^{nc} (1 − p)^{nb}. (5.54)

Once the bonds are set, a cluster is a set of spins connected by at least one bond. If one flips a cluster (o → n), the number of bonds between pairs of spins of same and opposite signs is changed, and one has

Np(n) = Np(o) + ∆ (5.55)

and similarly

Na(n) = Na(o) − ∆. (5.56)

The energy of the new configuration, U(n), is simply related to the energy of the old configuration, U(o), by the equation:

U(n) = U(o)− 2J∆. (5.57)


Let us now consider the probability of the inverse process. One wishes to generate the same cluster structure, but starting from a configuration with Np + ∆ parallel pairs and Na − ∆ antiparallel pairs. Antiparallel bonds are assumed broken. One requires the bond number n′c to be equal to nc, because the same number of connected bonds is needed to generate the same clusters. The difference with the previous configuration is the number of bonds to break. Indeed, one obtains

Np(n) = n′c + n′b (5.58)

Np(o) = nc + nb (5.59)

Np(n) = Np(o) + ∆ (5.60)

= nc + nb + ∆, (5.61)

which immediately gives

n′b = nb + ∆. (5.62)

The probability of generating the new bond configuration from the old configuration is given by

Pgc(n) = p^{nc} (1 − p)^{nb+∆} (5.63)

Inserting Eqs. (5.54) and (5.63) in Eq. (5.52), one obtains

(1 − p)^{−∆} = exp(2βJ∆) (5.64)

One can solve this equation for p, which gives

(1− p) = exp(−2βJ), (5.65)

Finally, the probability p is given by

p = 1− exp(−2βJ). (5.66)

Therefore, if the probability p is chosen according to Eq. (5.66), the new configuration is always accepted!

The virtue of this algorithm is not only its ideal acceptance rate: one can also show that, in the vicinity of the critical point, the critical slowing down drastically decreases. For example, the dynamical exponent of the two-dimensional Ising model is equal to 2.1 with the Metropolis algorithm, while it is 0.2 with a cluster algorithm. Far from the critical point, however, a cluster algorithm becomes quite similar to the Metropolis algorithm in terms of relaxation time.
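The procedure above (activate bonds between parallel nearest-neighbor spins with probability p = 1 − exp(−2βJ), then flip every cluster independently) can be sketched for the two-dimensional Ising model. The following code is an illustrative sketch, not taken from the lecture notes; the function name and the union-find bookkeeping are choices made for this example.

```python
import numpy as np

def swendsen_wang_step(spins, beta, J=1.0, rng=None):
    """One Swendsen-Wang update of a 2D Ising configuration with periodic
    boundaries: bonds between equal nearest-neighbor spins are activated
    with probability p = 1 - exp(-2*beta*J) (Eq. 5.66), and every
    resulting cluster is flipped with probability 1/2."""
    rng = rng or np.random.default_rng()
    L = spins.shape[0]
    p = 1.0 - np.exp(-2.0 * beta * J)

    parent = list(range(L * L))        # union-find over the L*L sites

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for x in range(L):
        for y in range(L):
            i = x * L + y
            # bonds to the right and below (each bond counted once)
            for nx, ny in ((x, (y + 1) % L), ((x + 1) % L, y)):
                if spins[x, y] == spins[nx, ny] and rng.random() < p:
                    ri, rj = find(i), find(nx * L + ny)
                    if ri != rj:
                        parent[ri] = rj

    flip = {}                          # one coin toss per cluster root
    new = spins.copy()
    for x in range(L):
        for y in range(L):
            root = find(x * L + y)
            if root not in flip:
                flip[root] = rng.random() < 0.5
            if flip[root]:
                new[x, y] = -new[x, y]
    return new
```

Near the critical coupling the activated bonds percolate, so entire equilibrium fluctuations are flipped in a single step, which is precisely what suppresses the critical slowing down discussed above.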

More generally, Monte Carlo dynamics is accelerated when one finds a method able to flip large clusters corresponding to the underlying physics. In the Swendsen-Wang method, one selects spin sets which correspond to equilibrium fluctuations; conversely, in a Metropolis method, by choosing a spin randomly, one generally creates a defect in the bulk of a large fluctuation, and this defect must diffuse


from the bulk to the cluster surface in order for the system to reach equilibrium. Knowing that diffusion is like a random walk and that the typical cluster size is comparable to the correlation length, the relaxation time is then given by τ ∼ ξ², which gives τ ∼ L² for a finite-size system and explains why the dynamical exponent is close to 2.

Note that the necessary ingredient for defining a cluster algorithm is the existence of a symmetry. C. Dress and W. Krauth Dress and Krauth [1995] generalized these methods to cases where the symmetry of the Hamiltonian has a geometric origin. For lattice Hamiltonians where the up-down symmetry does not exist, J. Heringa and H. Blote generalized the Dress and Krauth method by using the discrete symmetry of the lattice Heringa and Blote [1998a,b].

5.6 Reweighting Method

To obtain the complete phase diagram of a given system, a large number of simulations is required for scanning the entire range of temperatures, or the range of external fields. When one attempts to locate a phase transition precisely, the number of simulation runs can become prohibitive. The reweighting method is a powerful technique that estimates the thermodynamic properties of the system for a set of temperatures in [T − ∆T, T + ∆T] by using the simulation data obtained at a single temperature T (or, for an external field H, within a set of values of the field [H − ∆H, H + ∆H]).

The method relies on the following result. Consider the partition function Ferrenberg and Swendsen [1989, 1988], Ferrenberg et al. [1995],

Z(β, N) = Σ_α exp(−βH(α)) (5.67)

= Σ_i g(i) exp(−βE(i)) (5.68)

where α denotes all available states, the index i runs over all available energies of the system, and g(i) denotes the density of states of energy E(i). For a given inverse temperature β′ = 1/(kB T′), the partition function can be expressed as

Z(β′, N) = Σ_i g(i) exp(−β′E(i)) (5.69)

= Σ_i g(i) exp(−βE(i)) exp(−(β′ − β)E(i)) (5.70)

= Σ_i g(i) exp(−βE(i)) exp(−(∆β)E(i)), (5.71)

where ∆β = (β′ − β).


Similarly, a thermal average, like the mean energy, is expressed as:

〈E(β′, N)〉 = Σ_i g(i)E(i) exp(−β′E(i)) / Σ_i g(i) exp(−β′E(i)) (5.72)

= Σ_i g(i)E(i) exp(−βE(i)) exp(−(∆β)E(i)) / Σ_i g(i) exp(−βE(i)) exp(−(∆β)E(i)). (5.73)

In a Monte Carlo simulation at equilibrium, one can record an energy histogram of the visited configurations. This histogram, denoted Dβ(i), is proportional to the density of states g(i) weighted by the Boltzmann factor,

Dβ(i) = C(β)g(i) exp(−βE(i)) (5.74)

where C(β) is a temperature-dependent constant. One can easily see that Eq. (5.73) can be reexpressed as

〈E(β′, N)〉 = Σ_i Dβ(i)E(i) exp(−(∆β)E(i)) / Σ_i Dβ(i) exp(−(∆β)E(i)). (5.75)

A naive (or optimistic) view would be that a single simulation at a given temperature could give the thermodynamic quantities over the entire range of temperatures⁴. As shown in the sketch of Fig. 5.5, when the histogram is reweighted at a temperature lower than that of the simulation (β1 > β), the energy histogram is shifted towards lower energies. Conversely, when the energy histogram is reweighted at a higher temperature (β2 < β), it is shifted towards higher energies.

In a Monte Carlo simulation, the number of steps is finite, which means that, for a given temperature, states of energy significantly lower or higher than the mean energy are weakly sampled, or not sampled at all. Reweighting then leads to an exponential increase of the errors with the temperature difference. Practically, when the reweighted histogram has its maximum at a distance larger than the standard deviation of the simulation data, the accuracy of the reweighted histogram, and of the resulting thermodynamic quantities, is poor.
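Eq. (5.75) translates directly into a few lines of code. The sketch below is illustrative and not part of the original notes: it reweights an energy histogram recorded at β to a nearby β′, shifting the exponents by their maximum before exponentiating so that the ratio is computed without overflow.

```python
import numpy as np

def reweighted_mean_energy(energies, counts, beta, beta_prime):
    """Estimate <E> at beta_prime from a histogram recorded at beta,
    following Eq. (5.75):
      <E> = sum_i D(i) E(i) exp(-dbeta E(i)) / sum_i D(i) exp(-dbeta E(i)).
    """
    energies = np.asarray(energies, dtype=float)
    counts = np.asarray(counts, dtype=float)
    dbeta = beta_prime - beta
    # log of the histogram; empty bins get weight zero (log -> -inf)
    log_w = np.where(counts > 0.0, np.log(np.maximum(counts, 1e-300)), -np.inf)
    log_w = log_w - dbeta * energies
    log_w -= log_w.max()          # harmless shift: it cancels in the ratio
    w = np.exp(log_w)
    return float(np.sum(w * energies) / np.sum(w))
```

As stressed above, the estimate is only reliable while |∆β| keeps the reweighted histogram inside the sampled energy window.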

We briefly summarize the results of a more refined analysis of the reweighting method.

For a system with a continuous phase transition, the energy histogram Dβ(i) is bell-shaped, and can be approximated by a Gaussian far from a phase transition. This is associated with the fact that, at equilibrium, the available energy states are around the mean energy value. Far from the critical point, the histogram width corresponds to fluctuations and is then proportional to 1/√N, where N is the number of sites or particles. This shows that the efficiency of this reweighting

⁴By using a system of small dimensions, one can obtain an estimate over a large range of temperatures.



Figure 5.5 – Energy histogram Dβ(E) of a Monte Carlo simulation at inverse temperature β (middle curve); Dβ1(E) and Dβ2(E) are obtained by the reweighting method.

method decreases when the system size increases. In the critical region, the width of the energy histogram increases and, if the simulation sampling is done correctly, an increasing range of temperatures can be estimated by the reweighting method.

In summary, this method is accurate in the vicinity of the simulation temperature and allows one to build a complete phase diagram rapidly, because the number of simulations to perform is quite limited. By combining several energy histograms (multiple reweighting method), one can improve the estimates of the thermodynamic quantities over a large range of temperatures.

The principle of the method, developed here by considering the energy and its conjugate variable β, can be generalized to any pair of conjugate variables, e.g. the magnetization M and an external uniform field H.

5.7 Conclusion

Phase transitions can be studied by simulation, using several results of statistical physics. Indeed, the renormalization group theory has provided finite size analysis, which permits the extrapolation of the critical exponents of the system


in the thermodynamic limit by using simulation results for different system sizes. Cluster methods are required for decreasing the dynamical exponent associated with the phase transition. By combining simulation results with a reweighting method, one drastically reduces the number of simulations. While the accuracy of simulation results is comparable to that of theoretical methods for simple systems like the Ising model, it is superior for most more complicated models.

5.8 Exercises

5.8.1 Finite size scaling for continuous transitions: logarithmic corrections

The goal of this problem is to recover the results of finite size analysis with a simple method.

♣ Q. 5.8.1-1 Give the scaling laws, for an infinite system, of the specific heat C(t), of the correlation length ξ(t) and of the susceptibility χ(t) as functions of the usual critical exponents α, ν, γ and of the dimensionless temperature t = (T − Tc)/Tc, where Tc is the critical temperature.

♣ Q. 5.8.1-2 In a simulation, explain why the previously defined quantities do not diverge.

Under the finite size scaling assumption, one has the relation

CL(0)/C(t) = FC(ξL(0)/ξ(t)) (5.76)

where FC is a scaling function and CL(0) the maximum value of the specific heat obtained in a simulation of a finite system with linear dimension L.

♣ Q. 5.8.1-3 By assuming that ξL(0) = L and that the ratio ξL(0)/ξ(t) is finite and independent of L, infer that t ∼ L^x, where x is an exponent to be determined.

♣ Q. 5.8.1-4 By using the above result and Eq. (5.76), show that

CL(0) ∼ L^y (5.77)

where y is an exponent to calculate.

♣ Q. 5.8.1-5 By assuming that

χL(0)/χ(t) = Fχ(ξL(0)/ξ(t)) (5.78)


show that

χL(0) ∼ L^z (5.79)

where z is an exponent to be determined.

Various physical situations occur where the scaling laws must be modified to account for logarithmic corrections. The rest of the problem consists of obtaining the relations between the exponents associated with these corrections. We now assume that, for an infinite system,

ξ(t) ∼ |t|^{−ν} |ln |t||^{ν̂} (5.80)

C(t) ∼ |t|^{−α} |ln |t||^{α̂} (5.81)

χ(t) ∼ |t|^{−γ} |ln |t||^{γ̂} (5.82)

♣ Q. 5.8.1-6 By assuming that ξL(0) ∼ L(ln(L))^q and that the finite size scaling relation, Eq. (5.76), remains valid for the specific heat, show that

CL(0) ∼ L^x (ln(L))^y (5.83)

where y is expressed as a function of α, α̂, ν, ν̂ and q.

Hint: consider the equation y ln(y)^c = x^{−a} |ln(x)|^b; for y > 0 and going to infinity, the asymptotic solution is given by

x ∼ y^{−1/a} ln(y)^{(b−c)/a} (5.84)

A finite size analysis of the partition function (the details are beyond the scope of this problem) shows that, if α ≠ 0, the specific heat of a finite size system behaves as

CL(0) ∼ L^{−d+2/ν} (ln(L))^{−2(ν̂−q)/ν} (5.85)

♣ Q. 5.8.1-7 By using the results of question 5.8.1-6, show that one recovers the hyperscaling relation and an additional relation between α, q, ν, ν̂ and α̂.

♣ Q. 5.8.1-8 By using the hyperscaling relation, show that the new relation can be expressed as a function of α̂, q, ν̂ and d.

When α = 0, the specific heat of a finite size system behaves as

CL(0) ∼ (ln(L))^{1−2(ν̂−q)/ν} (5.86)


♣ Q. 5.8.1-9 What is the relation between α̂, q, ν̂ and d?

Let us now consider the logarithmic corrections of the correlation function

g(r, t) = [(ln(r))^{η̂} / r^{d−2+η}] D(r/ξ(t)) (5.87)

♣ Q. 5.8.1-10 By calculating the susceptibility χ(t) from the correlation function, recover Fisher's law, as well as a new relation between the exponents η̂, γ̂, ν̂ and η.

5.8.2 Some aspects of finite size scaling: first-order transitions

Let us consider discontinuous (or first-order) transitions. One can show that the partition function of a finite size system of linear size L, with periodic boundary conditions, is expressed as

Z = Σ_{i=1}^{k} exp(−βc fi(βc) L^d) (5.88)

where k is the number of coexisting phases (often k = 2, but not necessarily) and fi(βc) is the free energy per site of the i-th phase in the thermodynamic limit. At the transition temperature, the free energies of all the phases are equal.

♣ Q. 5.8.2-1 Assume only two coexisting phases. Expand the free energies to first order in the deviation from the transition temperature,

β fi(β) = βc fi(βc) − βc ei t + O(t²) (5.89)

where ei is the energy per site of phase i and t = 1 − Tc/T. Moreover, one assumes that the partition function, expressed as the sum over the free energies of the different phases, remains valid in the vicinity of the transition temperature.

Express the partition function keeping the first-order terms in t. Infer that the two phases coexist only if tL^d ≪ 1.

One can show that the probability of finding an energy E per site is given, for large L, by

P(E) = [K δ(E − e1) + δ(E − e2)] / (1 + K) (5.90)

where K is a function of the temperature, K = K(tL^d), such that K(x) → ∞ when x → −∞ and K(x) → 0 when x → +∞.


♣ Q. 5.8.2-2 Knowing that the specific heat per site C(L, T) for a lattice of size L is given by

C(L, T) L^{−d} = β²(〈E²〉 − 〈E〉²), (5.91)

express C(L, T) as a function of e1, e2, β and K.

♣ Q. 5.8.2-3 Show that C(L, T) goes to zero for temperatures much larger and much smaller than the transition temperature.

♣ Q. 5.8.2-4 Determine the value of K where C(L, T ) is maximal.

♣ Q. 5.8.2-5 Show that there exists a maximum of C(L, T ).

♣ Q. 5.8.2-6 Determine the corresponding value of C(L, T), expressed as a function of e1, e2 and βc.

♣ Q. 5.8.2-7 For characterizing a first-order transition, one can consider the Binder parameter

V4(L, T) = 1 − 〈E⁴〉/〈E²〉². (5.92)

Express V4 as a function of e1, e2, β and K.

♣ Q. 5.8.2-8 Determine the value of K where V4 is maximal. Calculate the corresponding value of V4.

♣ Q. 5.8.2-9 What can one say about the specific heat and V4 when e1 is close to e2?


Chapter 6

Monte Carlo Algorithms based on the density of states

Contents

6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 101

6.2 Density of states . . . . . . . . . . . . . . . . . . . . . . 102

6.2.1 Definition and physical meaning . . . . . . . . . . . . 102

6.2.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . 103

6.3 Wang-Landau algorithm . . . . . . . . . . . . . . . . . 105

6.4 Thermodynamics recovered! . . . . . . . . . . . . . . . 106

6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 108

6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

6.6.1 Some properties of the Wang-Landau algorithm . . . . 108

6.6.2 Wang-Landau and statistical temperature algorithms . 109

6.1 Introduction

When a system undergoes a continuous phase transition, Monte Carlo methods which consist of flipping (or moving) a single spin (or particle) converge more slowly in the critical region than cluster algorithms (when they exist). The origin of this slowing down is associated with real physical phenomena, even if Monte Carlo dynamics can never be considered as a true dynamics. The rapid increase of the relaxation time is related to the existence of large-scale fluctuations (or the presence of domains) which are difficult to sample by a local dynamics: indeed, cluster algorithms accelerate the convergence in the critical region, but not far away from it.


For a first-order transition, the convergence of Monte Carlo methods based on a local dynamics is poor because, close to the transition, there exist two or more metastable states: the barriers between metastable states increase with the system size, and the relaxation time depends exponentially on the system size, rapidly exceeding any reasonable computing time. Finally, the system is unable to explore the phase space and is trapped in a metastable region. The replica exchange method provides an efficient way of crossing metastable regions, but it requires a fine tuning of many parameters (temperatures, exchange frequency between replicas).

New approaches have been proposed in the last decade based on the computation of the density of states. These methods are general in the sense that the details of the dynamics are not imposed; only a condition similar to detailed balance is given. The goal of these new methods is to obtain the key quantity, which is the density of states.

We first see why knowledge of this quantity is essential for obtaining the thermodynamic quantities.

6.2 Density of states

6.2.1 Definition and physical meaning

For a classical system characterized by a Hamiltonian H, the microcanonical partition function reads (see Chapter 1)

Z(E) = Σ_α δ(H − E) (6.1)

where E denotes the energy of the system and the index α runs over all available configurations. The symbol δ denotes the Kronecker symbol when the summation is discrete; an analogous formula can be written for the microcanonical partition function when the energy spectrum is continuous: the sum is replaced with an integral and the Kronecker symbol with a δ distribution.

Equation (6.1) means that the microcanonical partition function is a sum over all microstates of total energy E, each state having an identical weight. Introducing the degeneracy of the system for a given energy E, the function g(E) is defined as follows: in the case of a continuous energy spectrum, g(E)dE is the number of states available to the system for a total energy between E and E + dE. The microcanonical partition function is then given as

Z(E) = ∫ du g(u) δ(u − E) (6.2)

= g(E) (6.3)

Therefore, the density of states is nothing else than the microcanonical partition function of the system.



Figure 6.1 – Logarithm of the density of states, ln(g(E)), as a function of the energy E for a two-dimensional Ising model on a square lattice of linear size L = 32.

6.2.2 Properties

The integral (or the sum) over all energies of the density of states gives the following rule

∫ dE g(E) = 𝒩 (6.4)

where 𝒩 is the total number of configurations (for a lattice model). For an Ising model, this number 𝒩 is equal to 2^N, where N is the total number of spins of the system. This value is very large and grows exponentially with the system size. From a numerical point of view, one must use the logarithm of this function, ln(g(E)), to avoid overflow problems.

For simple models, one can obtain analytical expressions for small or moderate system sizes, but the combinatorics rapidly becomes a tedious task. Some results can nevertheless be obtained easily: for the Ising model, there are two ground states, corresponding to the fully ordered configurations. The first nonzero value of the density of states above the ground level is equal to 2N, which corresponds to the number of ways of flipping one spin on a lattice of N sites, starting from either ground state.
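These counting rules are easy to check by brute-force enumeration on a very small lattice. The sketch below is illustrative (not part of the original notes); it computes g(E) exactly for the two-dimensional Ising model with J = 1 and periodic boundaries, which is only practical for a handful of spins.

```python
import itertools
from collections import Counter

def ising_density_of_states(L):
    """Exact density of states g(E) of the 2D Ising model (J = 1,
    periodic boundaries) on an L x L lattice, obtained by enumerating
    all 2^(L*L) spin configurations."""
    N = L * L
    # Each site carries one bond to the right and one below, so every
    # nearest-neighbor bond is counted exactly once (2N bonds in total).
    bonds = [(x * L + y, x * L + (y + 1) % L) for x in range(L) for y in range(L)]
    bonds += [(x * L + y, ((x + 1) % L) * L + y) for x in range(L) for y in range(L)]
    g = Counter()
    for config in itertools.product((-1, 1), repeat=N):
        E = -sum(config[a] * config[b] for a, b in bonds)
        g[E] += 1
    return dict(g)
```

On a 3 × 3 lattice one recovers Σ_E g(E) = 2⁹, a doubly degenerate ground state at E = −18, and a first excited level of degeneracy 2N = 18, as stated above.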

Figure 6.1 shows the logarithm of the density of states of the two-dimensional Ising model on a square lattice with a total number of sites equal to 1024. The


shape of this curve is the consequence of specific and universal features that we now detail. For all systems where the energy spectrum is discrete, the range of energy has a lower bound given by the ground state of the system. In the case of simple liquids, the density of states does not have an upper bound, because overlaps between particles interacting with a Lennard-Jones potential are always possible, which gives very large values of the energy; however, the equilibrium probability of having such configurations is weak.

The shape of the curve, which displays a maximum, is characteristic of all simple systems, but the symmetry about the vertical axis is a consequence of the up-down symmetry of the corresponding Hamiltonian. This symmetry is also present in thermodynamic quantities, like the magnetization versus temperature (see Chapter 1).

For Monte Carlo algorithms based on the density of states, one can make a simple remark: with the Metropolis algorithm, the acceptance probability of a new configuration satisfies detailed balance, namely Π(i → j)Peq(i) = Π(j → i)Peq(j), where Peq(j) is the probability of having the configuration j at equilibrium and Π(i → j) is the transition probability of going from the state i towards the state j. One can reexpress detailed balance in the energy variable:

Π(E → E ′) exp(−βE) = Π(E ′ → E) exp(−βE ′) (6.5)

Let us denote by Hβ(E) the energy histogram associated with the successive visits of the simulation. Consequently, one has

Hβ(E) ∝ g(E) exp(−βE) (6.6)

Assume that one imposes the following balance

Π(E → E ′) exp(−βE + w(E)) = Π(E ′ → E) exp(−βE ′ + w(E ′)) (6.7)

The energy histogram then becomes proportional to

Hβ(E) ∝ g(E) exp(−βE + w(E)) (6.8)

It immediately appears that, when w(E) ∼ βE − ln(g(E)), the energy histogram becomes flat and independent of the temperature.

The stochastic process then corresponds to a random walk in energy space. The convergence of this method is related to the fact that the available interval of the walker is bounded. For the Ising model, these bounds exist; for other models, it is possible to restrict the range of the energy interval in a simulation.

This very interesting result is at the origin of the multicanonical methods (introduced for the study of first-order transitions). However, it is necessary to know the function w(E), namely to know the function g(E). In the multicanonical method, a guess for w(E) is provided by the density of states obtained (by simulation) for small system sizes. By extrapolating this expression to larger system sizes, the simulation is performed and, by checking the deviations from flatness of the


energy histogram, one can correct the weight function in order to flatten the energy histogram in a second simulation run.

This kind of procedure amounts to suppressing the free energy barriers which exist in a Metropolis algorithm; they vanish here through the introduction of an appropriate weight function. The difficulty of the method is to obtain a good extrapolation of the weight function for large system sizes. Indeed, a bias in the weight function can introduce new free energy barriers, and simulations then converge slowly again.

For obtaining a simulation method that computes the density of states without a good initial guess, one needs to add an ingredient to the multicanonical algorithm. Such a method was proposed in 2001 by F. Wang and D.P. Landau Wang and Landau [2001a,b]; we detail it below.

6.3 Wang-Landau algorithm

The basics of the algorithm are:

1. A change of the initial configuration is proposed (generally a local modification).

2. One calculates the energy E′ associated with the modified configuration. This configuration is accepted with a Metropolis rule, Π(E → E′) = Min(1, g(E)/g(E′)); if not, the configuration is rejected and the old configuration is kept (and counted again, as in a usual Metropolis algorithm).

3. The retained configuration modifies the density of states as follows: ln(g(E′)) ← ln(g(E′)) + ln(f), where E′ denotes the energy of the retained configuration.

This iterative scheme is repeated until the energy histogram is sufficiently flat: in practice, the original algorithm considers the histogram flat when each of its values is larger than 80% of its mean value.
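The three steps and the flatness test can be sketched as follows for the two-dimensional Ising model. This is an illustrative implementation, not the original authors' code; the sweep count, flatness level and stopping value of ln(f) are arbitrary parameters chosen for the example.

```python
import numpy as np

def wang_landau_ising(L=8, flatness=0.8, ln_f_final=1e-6, seed=0):
    """Wang-Landau estimate of ln g(E) for the 2D Ising model (J = 1,
    periodic boundaries) on an L x L lattice. Returns {E: ln g(E)},
    known only up to an additive constant."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    E = -sum(s[x, y] * (s[(x + 1) % L, y] + s[x, (y + 1) % L])
             for x in range(L) for y in range(L))
    ln_g, hist = {}, {}
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(2000 * L * L):
            x, y = rng.integers(L), rng.integers(L)
            h = (s[(x + 1) % L, y] + s[(x - 1) % L, y]
                 + s[x, (y + 1) % L] + s[x, (y - 1) % L])
            E_new = E + 2 * s[x, y] * h
            # step 2: accept with probability min(1, g(E)/g(E_new))
            if np.log(rng.random()) < ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0):
                s[x, y] = -s[x, y]
                E = E_new
            # step 3: update the retained energy, accepted or not
            ln_g[E] = ln_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        counts = np.array(list(hist.values()))
        if counts.min() > flatness * counts.mean():   # flatness criterion
            hist = {}                                 # reset the histogram
            ln_f /= 2.0                               # i.e. f <- sqrt(f)
    return ln_g
```

A short run on a 4 × 4 lattice already qualitatively reproduces the features of Fig. 6.1: the energy range is bounded (−2N ≤ E ≤ 2N) and ln g(E) is roughly symmetric about E = 0.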

f is called the modification factor. Because the density of states changes continuously in time, it is not a stationary quantity. The accuracy of the density of states depends on the statistics accumulated for each value of g(E). Because the density of states increases with time, the asymptotic accuracy is given by √ln(f). At this first stage, the precision is not sufficient for calculating the thermodynamic quantities.

Wang and Landau completed the algorithm as follows: one resets the energy histogram (but, obviously, not the estimate of the density of states) and restarts a simulation as described above, with a new modification factor equal to √f. Once this second stage is achieved, the density of states is refined with respect to the previous iteration, because the accuracy is now given by √(ln(f)/2).


In order to obtain an accurate density of states, the simulation is continued while diminishing the modification factor through the relation

f ← √f (6.9)

When ln(f) ∼ 10⁻⁸, the density of states no longer evolves, the dynamics becomes a random walk in energy space, and the simulation is then stopped.

Once the density of states is obtained, one can easily derive all thermodynamic quantities from this single function. An additional virtue of this kind of simulation is that, if the desired accuracy is not reached, the simulation can easily be prolonged, using the previous density of states, until a sufficient accuracy is reached. Contrary to other methods based on one or several values of the temperature, one can here calculate the thermodynamic quantities for all temperatures (corresponding to the energy scale used in the simulation).

Considering that the motion in energy space is purely diffusive (which is only true for the last iterations of the algorithm, namely for values of the modification factor close to one), the necessary computer time is proportional to (∆E)², where ∆E represents the energy interval. This means that the computer time increases as the square of the system size. The computer time of this kind of simulation is then larger than that of the Metropolis algorithm; but while the latter yields the thermodynamic quantities at a single temperature (or, with a reweighting method, over a small interval of temperature), a single Wang-Landau simulation yields these quantities over a large range of temperatures (as shown below), and with much better accuracy.

For decreasing the computing time, it is judicious to reduce the energy interval of the simulation, and possibly to perform several simulations over different (overlapping) energy ranges. In the latter case, the total duration of all simulations is divided by two compared to a unique simulation over the complete energy range. One can then match the densities of states of the individual simulations. However, this task is subtle, even impossible, if an interval boundary falls in the region of a transition: errors occur in the density of states at the ends of each interval, forbidding a complete reconstruction of the total density of states. With these limitations, the method is very efficient for lattice systems with discrete energy spectra (additional problems occur with continuous energy spectra). In any case, free energy barriers disappear for first-order transitions, which allows for an efficient exploration of phase space.

6.4 Thermodynamics recovered!

Once the density of states is obtained (up to an additive constant), one can calculate the mean energy a posteriori by computing the ratio of the following integrals

〈E〉 = ∫ E exp(−βE + ln(g(E))) dE / ∫ exp(−βE + ln(g(E))) dE (6.10)


The practical calculation of these integrals requires some caution. The arguments of the exponentials can be very large and lead to overflows. To avoid these problems, let us note that, for a given temperature (or β), the expression

−βE + ln(g(E)) (6.11)

is a function which admits one or several (close) maxima, and which decreases rapidly toward negative values when the energy is much smaller or much larger than these maxima. Without changing the result, one can multiply each integrand by exp(−Max(−βE + ln(g(E))))¹, so that the exponentials have an argument which is always negative or zero. One can then truncate the bounds of each integral where the argument of the exponential becomes smaller than −100, and the computation of each integral is safe, without any overflows (or underflows).

The computation of the specific heat is done with the following formula

Cv = kB β² (〈E²〉 − 〈E〉²) (6.12)

It is easy to see that the procedure sketched for the computation of the energy can be transposed identically to the computation of the specific heat. One can calculate the thermodynamic quantities for any value of the temperature and obtain an accurate estimate of the maximum of the specific heat, as well as of the associated temperature.
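The prescription above — shift the argument −βE + ln(g(E)) by its maximum before exponentiating — can be sketched as follows. The function is illustrative (kB = 1, discrete energies, names chosen for this example), not code from the notes.

```python
import numpy as np

def canonical_averages(E, ln_g, beta):
    """Mean energy (Eq. 6.10) and specific heat (Eq. 6.12) from a density
    of states known up to an additive constant, with k_B = 1. The
    exponent is shifted by its maximum so all weights lie in (0, 1]."""
    E = np.asarray(E, dtype=float)
    ln_g = np.asarray(ln_g, dtype=float)
    arg = -beta * E + ln_g                 # expression (6.11)
    w = np.exp(arg - arg.max())            # the shift cancels in every ratio
    Z = w.sum()
    E_mean = np.sum(w * E) / Z
    E2_mean = np.sum(w * E * E) / Z
    Cv = beta ** 2 * (E2_mean - E_mean ** 2)
    return E_mean, Cv
```

Scanning beta over a grid then yields E(T) and Cv(T) over the whole temperature range from a single density of states.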

A last main advantage of this method is that it gives the free energy of the system for all temperatures, as well as the entropy, a situation quite different from a Monte Carlo simulation in the canonical ensemble.

One can also obtain the magnetic thermodynamic quantities with a modest additional computational effort. Indeed, once the density of states is obtained, it is possible to run a simulation where the modification factor is equal to 1 (the density of states no longer evolves), so that the simulation corresponds to a random walk in energy space: one stores a magnetization histogram M(E), as well as the squared magnetization M²(E), and even higher moments Mⁿ(E). From these histograms and the density of states g(E), one can calculate the mean magnetization of the system as a function of the temperature by using the following formula

〈M(β)〉 = ∫ (M(E)/H(E)) exp(−βE + ln(g(E))) dE / ∫ exp(−βE + ln(g(E))) dE (6.13)

and one can obtain the critical exponent associated with the magnetization via finite size scaling.

Similarly, one can calculate the magnetic susceptibility as a function of the temperature.

Therefore, one can write a minimal program for studying a system, calculating the quantities involving the moments of the energy in order to determine the

1When there are several maxima, one chooses the maximum which gives the largest value.

107

Page 108: Numerical Simulation in Statistical Physics

Monte Carlo Algorithms based on the density of states

region of the phase diagram where the transition takes place and what is the orderof the transition. Secondly, one can calculate other thermodynamic quantities byperforming an additional simulation in which the modification factor is equal to1. Conversely, for a simulation in a canonical ensemble, a complete program mustbe written for calculating all quantities from the beginning.

6.5 Conclusion

To summarize simply the Monte Carlo simulations based on the density of states, one can say that an efficient method for obtaining the thermodynamics of a system consists in performing a simulation where thermodynamics is not involved.

Indeed, the simulation is focused on the density of states, which is not a thermodynamic quantity, but a statistical quantity intrinsic to the system.

It is possible to perform Wang-Landau simulations with a varying number of particles. This allows one to obtain the thermodynamic potential associated with a grand canonical ensemble. One can also perform simulations of simple liquids and build the coexistence curves accurately. Let us note that additional difficulties occur when the energy spectrum is continuous, and some refinements with respect to the original algorithm have been proposed by several authors, correcting in part the drawbacks of the method. Because this method is very recent, improvements are expected in the near future, but it represents a breakthrough and can allow studies of systems where standard methods fail to obtain reliable thermodynamic quantities. This is an active field, with modifications to the original method still being proposed (see e.g. the references Poulain et al. [2006], Zhou and Bhatt [2005]).

6.6 Exercises

6.6.1 Some properties of the Wang-Landau algorithm

♣ Q. 6.6.1-1 The Wang-Landau algorithm allows calculating the density of states g(E) of a system described by a Hamiltonian H. Knowing that the logarithm of the partition function in the microcanonical ensemble is equal to the entropy S(E), where E is the total energy, derive the relation between g(E) and S(E). For most systems, far away from the ground state, the entropy is an extensive quantity; how does the density of states g(E) depend on the particle number?

♣ Q. 6.6.1-2 A Monte Carlo simulation of Wang-Landau type gives the density of states up to a multiplicative constant (Cg(E), where C is a constant). Show that quantities in the canonical ensemble, such as the mean energy and the specific heat, do not depend on C.


♣ Q. 6.6.1-3 Knowing that the canonical equilibrium distribution Pβ(E) ∼ g(E) exp(−βE) at an inverse temperature β behaves (far from a phase transition) as a Gaussian around the mean energy E0, expand the logarithm around E0. Show that the first-order term is equal to zero. Infer that one obtains a Gaussian approximation for the equilibrium distribution. In a simulation, in a plot of ln(g(E)) versus E, what is the physical meaning of the slope at a given energy E?

One wants to study the convergence of the Wang-Landau algorithm.

♣ Q. 6.6.1-4 Denoting by Hk(E) the energy histogram of the k-th simulation run, namely the run in which the modification factor is fk, show that the density of states is given by the relation

ln(gk(E)) = Σ_{i=1}^{k} Hi(E) ln(fi).  (6.14)

♣ Q. 6.6.1-5 Noting that H̃k(E) = Hk(E) − Min_E Hk(E), explain why

ln(g̃k(E)) = Σ_{i=1}^{k} H̃i(E) ln(fi)  (6.15)

is equal to the above equation up to an additive constant.

♣ Q. 6.6.1-6 Several simulations have shown that δH̃k = Σ_E H̃k(E) behaves as 1/√(ln(fk)). Knowing that fk = √(f_{k−1}), infer that the density of states goes to a finite value.

6.6.2 Wang-Landau and statistical temperature algorithms

The Wang-Landau algorithm is a Monte Carlo method which allows obtaining the density of states g(E) of a system in a finite interval of energy.

♣ Q. 6.6.2-1 Why is it necessary to decrease the modification factor f at each iteration? Justify your answer.

♣ Q. 6.6.2-2 Why is it numerically interesting to work with the logarithm of the density of states?

♣ Q. 6.6.2-3 At the beginning of the simulation, the density of states g(E) is generally chosen equal to 1. If one takes another initial condition, does one obtain the same density of states at the end of the simulation?


♣ Q. 6.6.2-4 Wang-Landau dynamics is generally performed by using elementary moves of a single particle, in a manner similar to the Metropolis algorithm. If the dynamics involves moves of several particles, why does the efficiency of the Metropolis algorithm decrease rapidly with the system size, while that of the Wang-Landau algorithm does not?

For models with a continuous energy spectrum, the Wang-Landau algorithm is generally less efficient and sometimes does not converge toward the equilibrium density of states. We are going to determine the origin of this problem. To calculate the density of states, it is necessary to discretize the energy spectrum; let us denote by ∆E the bin size of the histogram of the density of states.

♣ Q. 6.6.2-5 If the mean energy change of a configuration is δEc ≪ ∆E, and if the initial configuration has an energy located within the interval of the mean energy E, what can one say about the acceptance probability of the new configuration? Why does this imply a certain bias in the convergence of the Wang-Landau method?

To correct this drawback of the Wang-Landau method, Kim and his collaborators proposed to update the effective temperature T(E) = ∂E/∂S, where S(E) is the microcanonical entropy, rather than the density of states.

♣ Q. 6.6.2-6 By discretizing the derivative of the entropy with respect to the energy (with kB = 1):

∂S/∂E |_{E=Ei} = 1/Ti = βi ≃ (S_{i+1} − S_{i−1}) / (2∆E),  (6.16)

show that for an elementary move whose new configuration has an energy Ei, the two neighboring statistical temperatures must be changed according to the formula

β′_{i±1} = β_{i±1} ∓ δf  (6.17)

where δf = ln(f)/(2∆E) and f is the modification factor. Show that

T′_{i±1} = α_{i±1} T_{i±1}

where α_{i±1} is a parameter to be determined. How do T_{i+1} and T_{i−1} evolve along the simulation? What can one conclude about the variation of the temperature with the energy?
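A minimal sketch of this update (illustrative only; `beta` is assumed to be an array of discretized inverse statistical temperatures):

```python
import math

def update_statistical_temperatures(beta, i, f, dE):
    """After a visit to energy bin i, shift the inverse temperatures
    of the two neighbouring bins by delta_f = ln(f)/(2*dE), as in
    Eq. (6.17): beta[i+1] decreases and beta[i-1] increases, because
    a visit raises S_i, which enters beta[i-1] and beta[i+1] with
    opposite signs in the centered difference of Eq. (6.16)."""
    delta_f = math.log(f) / (2.0 * dE)
    beta[i + 1] -= delta_f
    beta[i - 1] += delta_f
```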


♣ Q. 6.6.2-7 One then calculates the entropy of each configuration by using a linear interpolation of the temperature between two successive intervals i and i+1. Show that the entropy is then given by the equation

S(E) = S(E0) + Σ_{j=1}^{i} ∫_{E_{j−1}}^{E_j} dE′ / (T_{j−1} + λ_{j−1}(E′ − E_{j−1})) + ∫_{E_i}^{E} dE′ / (T_i + λ_i(E′ − E_i))  (6.18)

where λi is a parameter to be determined as a function of ∆E and of the temperatures Ti and T_{i+1}.

♣ Q. 6.6.2-8 What can one say about the entropy difference between two configurations belonging to the same interval of the statistical temperature and separated by an energy δEc? What can one conclude by comparing this algorithm with the original Wang-Landau algorithm?


Chapter 7

Monte Carlo simulation in different ensembles

Contents
7.1 Isothermal-isobaric ensemble . . . . . . . . . . . . . . 113

7.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 113

7.1.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . 114

7.2 Grand canonical ensemble . . . . . . . . . . . . . . . . 116

7.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 116

7.2.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . . 116

7.3 Liquid-gas transition and coexistence curve . . . . . . 119

7.4 Gibbs ensemble . . . . . . . . . . . . . . . . . . . . . . . 121

7.4.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . . 121

7.4.2 Acceptance rules . . . . . . . . . . . . . . . . . . . . . 121

7.5 Monte Carlo method with multiple Markov chains . 123

7.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 127

7.1 Isothermal-isobaric ensemble

7.1.1 Introduction

The isothermal-isobaric ensemble (N, P, T) is widely used because it corresponds to the vast majority of experimental situations, in which the pressure and the temperature are imposed on the system under study. Moreover, when the interaction potential between particles is not pairwise additive, the pressure cannot be obtained from the virial equation, and a simulation in the (N, P, T) ensemble directly yields the equation of state of the system.


A second use consists in performing simulations at zero pressure, starting at high density and progressively increasing the temperature. This yields a quick estimate of the liquid-gas coexistence curve.

7.1.2 Principle

The system under consideration consists of an ideal-gas reservoir containing M − N particles occupying a volume V0 − V, and of N particles occupying a volume V.

The partition function of the total system is given by

Q(N, M, V, V0, T) = 1/(Λ^{3M} N!(M−N)!) ∫_0^{L′} dr^{M−N} ∫_0^{L} dr^{N} exp(−βU(r^{N}))  (7.1)

where L³ = V and L′³ = V0 − V. One introduces the following notation:

ri = si L.  (7.2)

After the change of variables, the partition function reads

Q(N, M, V, V0, T) = [V^N (V0 − V)^{M−N} / (Λ^{3M} N!(M−N)!)] ∫ ds^N exp(−βU(s^N; L))  (7.3)

where the notation U(s^N; L) recalls that the potential generally does not have a simple expression after the change of variables r^N → s^N.

Taking the thermodynamic limit of the reservoir (V0 → ∞, M → ∞ and (M − N)/V0 → ρ), one has

(V0 − V)^{M−N} = V0^{M−N} (1 − V/V0)^{M−N} → V0^{M−N} exp(−ρV).  (7.4)

Noting that the density ρ is that of the ideal gas, one has ρ = βP.

The system that one wishes to study by simulation is the subsystem made of the N particles, independently of the configurations of the ideal-gas particles placed in the second box. Using equation (7.3), the partition function of the N-particle system is then given by

Q′(N, P, V, T) = lim_{V0→∞, (M−N)/V0→ρ} Q(N, M, V, V0, T) / Qid(M − N, V0, T)  (7.5)

where Qid(M − N, V0, T) is the partition function of an ideal gas of M − N particles occupying a volume V0 at temperature T.

Using equation (7.4), one obtains

Q′(N, P, V, T) = [V^N / (Λ^{3N} N!)] exp(−βPV) ∫ ds^N exp(−βU(s^N; L)).  (7.6)


If the system is allowed to change its volume V, the partition function associated with the subsystem of particles interacting through the potential U(r^N) is obtained by summing the partition functions over the different volumes V. This function is then equal to

Q(N, P, T) = ∫_0^{+∞} dV (βP) Q′(N, P, V, T)  (7.7)
           = ∫_0^{∞} dV βP [V^N exp(−βPV) / (Λ^{3N} N!)] ∫ ds^N exp(−βU(s^N)).  (7.8)

The probability P(V, s^N) that the N particles occupy a volume V with the particles located at the points s^N is given by

P(V, s^N) ∼ V^N exp(−βPV − βU(s^N; L))  (7.9)
          ∼ exp(−βPV − βU(s^N; L) + N ln(V)).  (7.10)

Using the detailed balance relation, it is then possible to write the transition probabilities for a volume change according to the Metropolis rule.

For a change of the volume of the simulation box, the acceptance rate of the Metropolis algorithm is therefore

Π(o → n) = Min(1, exp(−β[U(s^N, Vn) − U(s^N, Vo) + P(Vn − Vo)] + N ln(Vn/Vo))).  (7.11)

Except when the potential has a simple form (U(r^N) = L^N U(s^N)), the volume change is a costly computation in a numerical simulation. On average, one volume change is attempted once all the particles have been moved once. For performance reasons, one prefers to perform a logarithmic, instead of linear, change of scale in the volume. In this case, the partition function (Eq. (7.6)) must be rewritten as

Q(N, P, T) = [βP / (Λ^{3N} N!)] ∫ d ln(V) V^{N+1} exp(−βPV) ∫ ds^N exp(−βU(s^N; L))  (7.12)

which gives a Metropolis acceptance rate modified as follows:

Π(o → n) = Min(1, exp(−β[U(s^N, Vn) − U(s^N, Vo) + P(Vn − Vo)] + (N + 1) ln(Vn/Vo))).  (7.13)

The simulation algorithm is built as follows:

1. Draw a random integer between 0 and the number of particles N.


2. If this number is different from zero, select the particle corresponding to the drawn number and attempt a move inside the box of volume V.

3. If this number is equal to 0, draw a random number between 0 and 1 and compute the volume of the new box according to the formula

vn = vo exp(ln(vmax)(rand − 0.5)).  (7.14)

Then rescale the center of mass of each molecule (for point particles, this amounts to rescaling the coordinates) through the relation

rn^N = ro^N (vn/vo)^{1/3}.  (7.15)

Compute the energy of the new configuration (if the potential is not simple, this step requires a very significant computing time) and evaluate the Metropolis factor given by relation (7.13). If this number is larger than a random number uniformly drawn between 0 and 1, accept the new box; otherwise keep the old box (and do not forget to count the old configuration again).

7.2 Grand canonical ensemble

7.2.1 Introduction

The grand canonical ensemble (µ, V, T) is adapted to a system in contact with a particle reservoir at constant temperature. This ensemble is appropriate for describing the physical situations corresponding to studies of adsorption isotherms (fluids in zeolites or porous media) and, to some extent, the study of a coexistence curve for pure systems.

7.2.2 Principle

The system considered in this case consists of an ideal-gas reservoir containing M − N particles occupying a volume V0 − V, and of N particles occupying a volume V. Instead of exchanging volume as in the previous section, particles are exchanged.

The partition function is

Q(M, N, V, V0, T) = 1/(Λ^{3M} N!(M−N)!) ∫_0^{L′} dr^{M−N} ∫_0^{L} dr^{N} exp(−βU(r^{N})).  (7.16)


One can perform the same change of variables on the coordinates r^N and r^{M−N} as in the previous section, and one then obtains

Q(M, N, V, V0, T) = [V^N (V0 − V)^{M−N} / (Λ^{3M} N!(M−N)!)] ∫ ds^N exp(−βU(s^N)).  (7.17)

Taking the thermodynamic limit for the ideal-gas reservoir, M → ∞, (V0 − V) → ∞ with M/(V0 − V) → ρ, one obtains

M! / [(V0 − V)^N (M − N)!] → ρ^N.  (7.18)

One can also express the partition function associated with the subsystem of the simulation box as the ratio of the partition function of the total system (Eq. (7.16)) to that of the ideal-gas contribution Qid(M, V0 − V, T) of M particles in the volume V0 − V:

Q′(N, V, T) = lim_{M,(V0−V)→∞} Q(N, M, V, V0, T) / Qid(M, V0 − V, T)  (7.19)

For an ideal gas, the chemical potential is given by the relation

µ = kBT ln(Λ³ρ).  (7.20)

One can then rewrite the partition function of the subsystem defined by the volume V, in the thermodynamic limit, as

Q′(µ, N, V, T) = [exp(βµN) V^N / (Λ^{3N} N!)] ∫ ds^N exp(−βU(s^N)).  (7.21)

It only remains to sum over all configurations with a number N of particles going from 0 to infinity:

Q(µ, V, T) = lim_{M→∞} Σ_{N=0}^{M} Q′(µ, N, V, T)  (7.22)

that is,

Q(µ, V, T) = Σ_{N=0}^{∞} [exp(βµN) V^N / (Λ^{3N} N!)] ∫ ds^N exp(−βU(s^N)).  (7.23)

The probability P(µ, s^N) that N particles occupy the volume V with the particles located at the points s^N is given by

P(µ, s^N) ∼ [exp(βµN) V^N / (Λ^{3N} N!)] exp(−βU(s^N)).  (7.24)

Using the detailed balance relation, it is then possible to write the transition probabilities for a change of the number of particles in the simulation box according to the Metropolis rule.

The acceptance rates of the Metropolis algorithm are the following:


1. To insert a particle into the system,

Π(N → N + 1) = Min(1, [V / (Λ³(N + 1))] exp(β(µ − U(N + 1) + U(N)))).  (7.25)

2. To remove a particle from the system,

Π(N → N − 1) = Min(1, [Λ³N / V] exp(−β(µ + U(N − 1) − U(N)))).  (7.26)

The simulation algorithm is built as follows:

1. Draw a random integer between 1 and N1, with N1 = Nt + nexc (the ratio nexc/⟨N⟩ sets the mean frequency of the exchanges with the reservoir), where Nt is the number of particles in the box at time t.

2. If this number is between 1 and Nt, select the particle corresponding to the drawn number and attempt a move of this particle according to the standard Metropolis algorithm.

3. If this number is larger than Nt, draw a random number between 0 and 1.

• If this number is smaller than 0.5, attempt the removal of a particle. To do so, compute the energy with the Nt − 1 remaining particles and perform the Metropolis test of equation (7.26).

  – If the new configuration is accepted, remove the particle by writing

    r[removed] = r[Nt]  (7.27)
    Nt = Nt − 1.  (7.28)

  – Otherwise keep the old configuration.

• If the number is larger than 0.5, attempt the insertion of a particle: compute the energy with Nt + 1 particles and perform the Metropolis test of equation (7.25).

  – If the new configuration is accepted, add the particle by writing

    Nt = Nt + 1  (7.29)
    r[Nt] = r[inserted].  (7.30)

  – Otherwise keep the old configuration.

The limitation of this method concerns very dense media, for which the insertion probability becomes very small.
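A sketch of the insertion/removal step (hypothetical helper; `U` is an assumed function returning the total energy of a configuration, and `lam` stands for the de Broglie wavelength Λ):

```python
import math
import random

def gcmc_exchange(x, U, mu, V, beta, lam):
    """One exchange step of a grand canonical Monte Carlo simulation,
    using the acceptance rates of Eqs. (7.25)-(7.26).  x: list of
    particle positions, U(x): total potential energy (assumed
    provided).  Returns the accepted configuration."""
    N = len(x)
    L = V ** (1.0 / 3.0)
    if random.random() < 0.5 and N > 0:
        # removal attempt: Eq. (7.26)
        i = random.randrange(N)
        x_new = x[:i] + x[i + 1:]
        acc = (lam**3 * N / V) * math.exp(-beta * (mu + U(x_new) - U(x)))
    else:
        # insertion attempt at a uniform random position: Eq. (7.25)
        x_new = x + [tuple(random.uniform(0.0, L) for _ in range(3))]
        acc = (V / (lam**3 * (N + 1))) * math.exp(beta * (mu - U(x_new) + U(x)))
    return x_new if random.random() < acc else x
```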


7.3 Liquid-gas transition and coexistence curve

While for lattice models the study of the phase diagram is often limited to the vicinity of the critical points, the determination of the phase diagram of continuous systems concerns a much wider region than the critical region(s). The reason is that, even for a simple liquid, the liquid-gas coexistence curve¹ displays an asymmetry between the liquid region and the gas region, reflecting a non-universal character tied to the system under study, whereas for a lattice model the particle-hole symmetry leads to a temperature-density coexistence curve that is perfectly symmetric on either side of the critical density.

The condition for coexistence between two phases is that the pressures, the temperatures and the chemical potentials must be equal. The natural idea would be to use a (µ, P, T) ensemble. Unfortunately, such an ensemble does not exist, because an ensemble cannot be defined by intensive parameters only. To obtain a well-defined ensemble, it is necessary to have at least one extensive variable, either the volume or the number of particles. The method introduced by Panagiotopoulos in 1987 allows one to study the equilibrium of two phases, because it suppresses the interfacial energy which exists when two phases coexist inside the same box. Such an interface implies an energy barrier which must be crossed to pass from one phase to the other, and it requires computing times which grow exponentially with the interfacial free energy.

¹For a liquid, below the liquid-gas transition temperature, there appears a region of the phase diagram where the liquid and the gas can coexist; the boundary of this region defines what is called the coexistence curve.


Figure 7.1 – Liquid-gas coexistence curve (temperature-density diagram) of the Lennard-Jones model. The critical density is close to 0.3, to be compared with the lattice gas model, which gives 0.5, and the curve displays an asymmetry below the critical point.


7.4 Gibbs ensemble

7.4.1 Principle

One considers a system of N particles occupying a volume V = V1 + V2, in contact with a thermal reservoir at temperature T.

The partition function of this system reads

Q(N, V, T) = Σ_{N1=0}^{N} 1/(Λ^{3N} N1!(N−N1)!) ∫_0^{V} dV1 V1^{N1} (V − V1)^{N−N1} ∫ ds1^{N1} exp(−βU(s1^{N1})) ∫ ds2^{N−N1} exp(−βU(s2^{N−N1})).  (7.31)

Comparing with the system used for the derivation of the grand canonical ensemble method, one sees that the particles occupying the volume V2 are here not ideal-gas particles, but particles interacting through the same potential as the particles located in the other box.

One can easily deduce the probability P(N1, V1, s1^{N1}, s2^{N−N1}) of finding a configuration with N1 particles in box 1 at the positions s1^{N1} and the N2 = N − N1 remaining particles in box 2 at the positions s2^{N2}:

P(N1, V1, s1^{N1}, s2^{N−N1}) ∼ [V1^{N1} (V − V1)^{N−N1} / (N1!(N − N1)!)] exp(−β(U(s1^{N1}) + U(s2^{N−N1}))).  (7.32)

7.4.2 Acceptance rules

There are three types of moves in this algorithm: individual particle moves inside each box, volume changes under the constraint that the total volume is conserved, and box changes for a particle.

For the moves inside a simulation box, the Metropolis rule is the one defined for a set of particles in the canonical ensemble.

For the volume change, the choice of a linear scale change in volume (Vn = Vo + ∆V · rand) leads to an acceptance rate equal to

π(o → n) = Min(1, [(V_{n,1})^{N1} (V − V_{n,1})^{N−N1} exp(−βU(s_n^N))] / [(V_{o,1})^{N1} (V − V_{o,1})^{N−N1} exp(−βU(s_o^N))]).  (7.33)

It is generally more advantageous to perform a logarithmic scale change of the volume ratio, ln(V1/V2), in a manner analogous to what was done in the (N, P, T) ensemble.

To determine the new acceptance rate, one rewrites the partition function with this change of variable:


Figure 7.2 – Representation of a Gibbs ensemble simulation, where the global system is in contact with an isothermal reservoir and the two subsystems (of volumes V1 and V − V1, containing N1 and N − N1 particles) can exchange both particles and volume.

Q(N, V, T) = Σ_{N1=0}^{N} ∫_0^{V} d ln(V1/(V − V1)) [V1(V − V1)/V] [V1^{N1} (V − V1)^{N−N1} / (Λ^{3N} N1!(N − N1)!)] ∫ ds1^{N1} exp(−βU(s1^{N1})) ∫ ds2^{N−N1} exp(−βU(s2^{N−N1})).  (7.34)

One then deduces the new probability with the logarithmic move of the ratio of the box volumes,

N(N1, V1, s1^{N1}, s2^{N−N1}) ∼ [V1^{N1+1} (V − V1)^{N−N1+1} / (V N1!(N − N1)!)] exp(−β(U(s1^{N1}) + U(s2^{N−N1})))  (7.35)

which leads to an acceptance rate of the form

(7.35)ce qui conduit a un taux d’acceptation de la forme

π(o→ n) = Min

(1,

(Vn,1)N1+1(V − Vn,1)N−N1+1 exp(−βU(sNn ))

(Vo,1)N1+1(V − Vo,1)N−N1+1 exp(−βU(sNo ))

). (7.36)


The last possible move is the change of box for a particle. To move a particle from box 1 to box 2, the ratio of the Boltzmann weights is given by

N(n)/N(o) = [N1!(N − N1)! (V1)^{N1−1} (V − V1)^{N−(N1−1)}] / [(N1 − 1)!(N − (N1 − 1))! (V1)^{N1} (V − V1)^{N−N1}] exp(−β(U(s_n^N) − U(s_o^N)))
          = [N1(V − V1) / ((N − (N1 − 1)) V1)] exp(−β(U(s_n^N) − U(s_o^N)))  (7.37)

which leads to an acceptance rate

Π(o → n) = Min(1, [N1(V − V1) / ((N − (N1 − 1)) V1)] exp(−β(U(s_n^N) − U(s_o^N)))).  (7.38)

The algorithm corresponding to the Gibbs ensemble method is a generalization of the (µ, V, T) ensemble algorithm, in which the insertion and removal of particles are replaced by the exchanges between boxes, and in which the volume change seen in the (N, P, T) algorithm is replaced by the simultaneous change of the volumes of the two boxes.
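The acceptance probability of the particle-transfer move, Eq. (7.38), reduces to a one-line helper (illustrative sketch; `dU` denotes the total energy change of the move):

```python
import math

def gibbs_transfer_prob(N1, N, V1, V, dU, beta):
    """Acceptance probability (Eq. (7.38)) for moving one particle
    from box 1 (N1 particles, volume V1) to box 2, in a Gibbs
    ensemble simulation with N particles and total volume V."""
    ratio = (N1 * (V - V1)) / ((N - (N1 - 1)) * V1)
    return min(1.0, ratio * math.exp(-beta * dU))
```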

7.5 Monte Carlo method with multiple Markov chains

Free energy barriers prevent the system from exploring the whole phase space; indeed, a Metropolis-type algorithm is strongly slowed down by the presence of high barriers.

To compute a coexistence curve, or to obtain an equilibrated system below the critical temperature, one must perform a succession of simulations for successive values of the temperature (and possibly, simultaneously, of the chemical potential in the case of a liquid). The "parallel tempering" method seeks to exploit the following fact: while it is harder and harder to equilibrate a system as the temperature is lowered, one can exploit the ease with which the system changes region of phase space at high temperature by "propagating" configurations toward lower temperatures.

Consider, for simplicity, the case of a system where only the temperature is changed (the method is easily generalized to other intensive quantities: chemical potential, change of the Hamiltonian, ...): the partition function of a system at temperature Ti is given by the relation

Qi = Σ_α exp(−βi U(α))  (7.39)


where α is an index running over the set of configurations accessible to the system and βi is the inverse temperature.

If one considers the direct product of the set of these systems evolving at all different temperatures, one can write the partition function of this ensemble as

Q_total = Π_{i=1}^{N} Qi  (7.40)

where N denotes the total number of temperatures used in the simulations.

A great advantage can be gained by performing the simulations for the different temperatures simultaneously; the parallel tempering method introduces an additional Markov chain on top of those present in each simulation box; this chain proposes to exchange, while respecting the conditions of detailed balance, the whole set of particles between two consecutive boxes chosen at random among all of them.

The detailed balance condition of this second chain can be expressed as

Π((i, βi), (j, βj) → (j, βi), (i, βj)) / Π((j, βi), (i, βj) → (i, βi), (j, βj)) = [exp(−βj U(i)) exp(−βi U(j))] / [exp(−βi U(i)) exp(−βj U(j))]  (7.41)

where U(i) and U(j) represent the energies of the particles of boxes i and j.

The simulation algorithm consists of two types of moves:

• Particle moves in each box (which can be performed in parallel) according to the Metropolis algorithm. The acceptance probability for each simulation step inside box i is given by

min(1, exp(−βi(Ui(n) − Ui(o))))  (7.42)

where the labels n and o correspond to the new and the old configuration, respectively.

• Exchange of the particles between two consecutive boxes. The acceptance probability is then given by

min(1, exp((βi − βj)(Ui − Uj))).  (7.43)

The proportion of these exchanges remains to be fixed, in order to optimize the simulation of the whole system.
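One replica-exchange step can be sketched as follows (illustrative fragment; the configurations, energies and inverse temperatures of the boxes are assumed to be stored in parallel lists):

```python
import math
import random

def tempering_swap(configs, energies, betas):
    """Attempt to swap the configurations of a randomly chosen pair
    of neighbouring temperatures, with the acceptance probability
    min(1, exp((beta_i - beta_j)(U_i - U_j))) derived from the
    detailed balance condition (7.41)."""
    i = random.randrange(len(betas) - 1)        # neighbour pair (i, i+1)
    arg = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
    if arg >= 0 or random.random() < math.exp(arg):
        configs[i], configs[i + 1] = configs[i + 1], configs[i]
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
```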

To illustrate this concept very simply, we shall study the following model system: a particle in a one-dimensional space subjected to the external potential V(x) given below.

Figure 7.3 – The one-dimensional potential to which the particle is subjected.

V(x) =
  +∞              if x < −2
  1 + sin(2πx)    if −2 ≤ x ≤ −1.25
  2 + 2 sin(2πx)  if −1.25 ≤ x ≤ −0.25
  3 + 3 sin(2πx)  if −0.25 ≤ x ≤ 0.75
  4 + 4 sin(2πx)  if 0.75 ≤ x ≤ 1.75
  5 + 5 sin(2πx)  if 1.75 ≤ x ≤ 2.0
(7.44)

The equilibrium probabilities Peq(x, β) for this system are simply expressed as

Peq(x, β) = exp(−βV(x)) / ∫_{−2}^{2} dx exp(−βV(x)).  (7.45)
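The potential of Eq. (7.44) and the equilibrium density of Eq. (7.45) are easy to tabulate numerically (illustrative sketch; the normalization integral is evaluated with a simple midpoint rule):

```python
import math

def V(x):
    """One-dimensional potential of Eq. (7.44); each piece is
    n*(1 + sin(2*pi*x)), so all the wells have the same depth V = 0."""
    if x < -2.0:
        return math.inf
    for upper, n in [(-1.25, 1), (-0.25, 2), (0.75, 3), (1.75, 4), (2.0, 5)]:
        if x <= upper:
            return n * (1.0 + math.sin(2.0 * math.pi * x))
    return math.inf

def p_eq(x, beta, nbins=4000):
    """Equilibrium probability density of Eq. (7.45)."""
    dx = 4.0 / nbins
    Z = sum(math.exp(-beta * V(-2.0 + (k + 0.5) * dx)) * dx
            for k in range(nbins))
    return math.exp(-beta * V(x)) / Z
```

Since each well has the same depth, the peaks of `p_eq` are equal at every temperature; at low temperature, however, the probability between two wells becomes tiny, which is the regime where independent Metropolis runs get stuck in the first well.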

Figure 7.4 shows these equilibrium probabilities. Note that since the minima of the potential energy are identical, the maxima of the equilibrium probabilities are the same. When the temperature is lowered, the probability between two minima of the potential energy becomes extremely small.

Figure 7.4 also shows the result of independent Metropolis Monte Carlo simulations, while figure 7.5 shows the result obtained with a


Figure 7.4 – Equilibrium probabilities (full curves) and probabilities obtained by a Metropolis simulation, for the particle subjected to the one-dimensional potential, at three temperatures T = 0.1, 0.3, 2.


Figure 7.5 – Same as figure 7.4, except that the simulation uses parallel tempering.

parallel tempering simulation. The total number of configurations generated by the simulation remains unchanged.

With a Metropolis method, the system is equilibrated only at the highest temperature. For T = 0.3, the system can explore only the first two potential wells (the particle being located in the first well at the initial time), and for T = 0.1 the particle is unable to cross a single barrier. With the parallel tempering method, the system becomes equilibrated at every temperature, in particular at very low temperature.

For N-particle systems, the parallel tempering method requires a number of boxes larger than that used in this very simple example. One can even show that, for a given temperature interval, the number of boxes needed to obtain an equilibrated system increases with the system size.

7.6 Conclusion

We have seen that the Monte Carlo method can be used in a wide variety of situations. This quick overview of a few variants cannot do justice to the very vast literature on the subject. I hope it provides elements for building, depending on the system under study, a more specifically adapted method.


Chapter 8

Out of equilibrium systems

Contents
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 130

8.2 Random sequential addition . . . . . . . . . . . . . . . 131

8.2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . 131

8.2.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 131

8.2.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 132

8.2.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 134

8.3 Avalanche model . . . . . . . . . . . . . . . . . . . . . . 137

8.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 137

8.3.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 137

8.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 139

8.4 Inelastic hard sphere model . . . . . . . . . . . . . . . 141

8.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 141

8.4.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 141

8.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 142

8.4.4 Some properties . . . . . . . . . . . . . . . . . . . . . 143

8.5 Exclusion models . . . . . . . . . . . . . . . . . . . . . . 144

8.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 144

8.5.2 Random walk on a ring . . . . . . . . . . . . . . . . . 146

8.5.3 Model with open boundaries . . . . . . . . . . . . . . 147

8.6 Kinetic constraint models . . . . . . . . . . . . . . . . 149

8.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 149

8.6.2 Facilitated spin models . . . . . . . . . . . . . . . . . 152

8.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 154


8.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . 154

8.8.1 Haff Law . . . . . . . . . . . . . . . . . . . . . . . . . 154

8.8.2 Domain growth and kinetically constrained model . . . . 156

8.8.3 Diffusion-coagulation model . . . . . . . . . . . . . . . 158

8.8.4 Random sequential addition . . . . . . . . . . . . . . . 159

8.1 Introduction

In these last two chapters, we address the vast domain of out-of-equilibrium statistical mechanics. Situations in which a system evolves without relaxing toward an equilibrium state are by far the most frequent in nature. The reasons are diverse: i) a system is not necessarily isolated, in contact with a thermostat, or surrounded by one or several particle reservoirs, which prevents evolution toward an equilibrium state; ii) relaxation times may far exceed microscopic times (adsorption of proteins at interfaces, polymers, structural glasses, spin glasses); iii) the phenomenon may possess an intrinsic irreversibility (linked to dissipative effects) that requires fundamentally out-of-equilibrium modeling (granular media, fragmentation, growth phenomena, avalanches, earthquakes) Hinrichsen [2000], Talbot et al. [2000], Turcotte [1999].

Concerning the evolution of a system in an out-of-equilibrium regime, one can distinguish three broad classes of systems: in the first, the systems exhibit very long relaxation times (much larger than the characteristic observation time); in the second, the systems evolve toward a stationary state far from equilibrium; in the last, the systems are subjected to one or several external constraints and evolve toward a stationary state, the microscopic dynamics being Hamiltonian or not.

On four model systems, we illustrate the relevance of a statistical approach to out-of-equilibrium phenomena and, in particular, the interest of studying these models by numerical simulation.

This numerical approach is all the more justified since, in the absence of a formulation analogous to thermodynamics, the theoretical methods of statistical mechanics are less powerful. Numerical simulation is thus both an essential means of understanding the physics of the models and a support for the development of new theoretical tools.


8.2 Random sequential addition

8.2.1 Motivation

When a solution containing macromolecules (proteins, colloids) is brought into contact with a solid surface, adsorption onto the surface is observed, in the form of a monolayer of macromolecules. This adsorption process is slow, requiring times ranging from a few minutes to several weeks. Once adsorption is complete, if the solution containing the macromolecules is replaced by a buffer solution, no (or little) change is observed in the adsorbed monolayer. When the experiment is repeated with solutions of different concentrations, the density of particles adsorbed on the surface is almost always the same Talbot et al. [2000].

Given the time scales and the microscopic complexity of the molecules, Molecular Dynamics cannot be used to describe the process. A first step is to model it on a "mesoscopic" scale. Given the size of the macromolecules (from 10 nm to a few mm), the range of the interparticle interactions can be considered much smaller than the particle size. These particles are also hardly deformable and cannot easily interpenetrate. To a first approximation, the interparticle potential can be taken as that of hard objects (for colloids, for example, hard spheres).

When a particle is in the immediate vicinity of the surface, the interaction with the surface is strongly attractive over a short distance, and strongly repulsive when the particle penetrates the surface. The interaction potential between a particle and the surface can be taken as a strongly negative contact potential. Once adsorbed, the particles neither desorb during the time of the experiment nor diffuse on the surface. Given the Brownian motion of the particles in solution, the particles can be considered to arrive at random on the surface. If an incoming particle comes into contact with a previously adsorbed particle, it is repelled and "returns" to the solution. If it comes into contact with the solid surface, without overlapping any of the particles already present, it "sticks" definitively to the surface.

8.2.2 Definition

The modeling thus defined leads naturally to the following model, called Random Sequential Adsorption (or Addition), RSA.

In a space of dimension d, hard particles are placed sequentially at random positions, with the following condition: if the introduced particle does not overlap any previously deposited particle, it is placed definitively at that position; otherwise, the particle is removed and a new trial is made.

Such a model contains the two ingredients considered essential for characterizing the adsorption kinetics: the steric hindrance between particles and the irreversibility of the phenomenon.

For spherical particles, this means that in one dimension segments are deposited on a line, in two dimensions disks are deposited on a plane, and in three dimensions balls are deposited in space.

As stated in Chapter 2, the dynamics of the Monte Carlo method is that of a Markov process converging toward an equilibrium state. The RSA model is defined by a Markov process (random insertion of particles), without any condition of convergence toward an equilibrium state. We can thus easily describe the numerical simulation algorithm adapted to this model.

8.2.3 Algorithm

The principle of the basic algorithm does not depend on the dimension of the problem. Since the model is physically motivated by adsorption on a surface, we consider here hard disks (representing the projection of a sphere onto the plane). Given the particle diameter σ (the simulation box is chosen as a square of unit side), one array per space dimension is reserved to store each position coordinate of the particles adsorbed during the process. The array size is necessarily bounded by 4/(πσ²).

These arrays contain no particle positions at the initial time (the surface is assumed initially empty). An elementary step of the kinetics is as follows:

1. Draw at random, with a uniform distribution, the position of a particle in the simulation box:

x0 = rand
y0 = rand    (8.1)

Increment the time by 1: t = t + 1.

2. Test whether the particle thus chosen overlaps any particle already present. One must check that (r0 − ri)² > σ², with i running from 1 to the index of the last particle adsorbed at time t, denoted na. As soon as this test fails, the loop over indices is stopped and one returns to step 1 for a new trial. If the test is satisfied for all particles, the index na is incremented (na = na + 1), and the position of the new particle is added to the arrays of position coordinates:

x[na] = x0
y[na] = y0.    (8.2)

Figure 8.1 – Disks adsorbed by RSA kinetics: the particle at the center of the figure is the one that has just been chosen. The cell corresponding to its center is shown in red. The 24 cells to be considered for the overlap test are shown in orange.

Since one does not know in advance how many particles can be placed on the surface, the maximum number of trials, or in other words the maximum simulation time, is fixed at the start of the simulation.
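The elementary step above can be turned into a short program. The sketch below is a minimal illustration in Python (the function name `rsa_basic` and the parameters are mine, not the author's code); it implements steps 1 and 2 with the naive overlap test:

```python
import random

def rsa_basic(sigma, max_trials, seed=0):
    """Random sequential addition of hard disks of diameter sigma in a
    unit square: draw a uniform position (Eq. 8.1) and accept it only
    if it overlaps no previously adsorbed disk (Eq. 8.2)."""
    random.seed(seed)
    x, y = [], []                       # positions of adsorbed particles
    for _ in range(max_trials):
        x0, y0 = random.random(), random.random()      # step 1
        # step 2: overlap test against every adsorbed particle
        if all((x0 - xi) ** 2 + (y0 - yi) ** 2 > sigma ** 2
               for xi, yi in zip(x, y)):
            x.append(x0)
            y.append(y0)
    return x, y
```

The covered fraction at time t is simply len(x) π σ²/4.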

The algorithm presented above is correct, but very slow. Indeed, at each time step, na tests are needed to determine whether the new particle can be inserted. When the filling approaches its maximum, much time is spent performing useless tests, since almost all trial particles are rejected.

Knowing that a hard disk can be surrounded by at most six other disks, it is easy to see that one should avoid performing overlap tests with particles that are, for the most part, very far from the particle just chosen.


One solution is to use a list of cells. If one chooses a grid whose elementary square is small enough, each square can contain at most one disk center (a side smaller than σ/√2 is required; a side close to σ/2 is consistent with the 24 neighboring cells of Fig. 8.1). One therefore creates an additional array whose two indices, in X and Y, label the elementary squares. This array is initialized to 0. The algorithm is then modified as follows:

1. The first step is unchanged.

2. Determine the indices of the elementary cell in which the particle was chosen:

i0 = Int(x0/σ)
j0 = Int(y0/σ).    (8.3)

If the cell is occupied (cell[i0][j0] ≠ 0), return to step 1. If the cell is empty, the 24 cells surrounding the central cell must be tested to see whether they are empty or occupied (see Fig. 8.1). Whenever a cell is occupied (cell[i][j] ≠ 0), an overlap test between the new particle and the adsorbed one is needed. If there is overlap, return to step 1; otherwise, continue testing the next cell. If all tested cells are either empty or occupied by a particle that does not overlap the particle to be inserted, the new particle is added as specified in the previous algorithm, and the cell of the newly added particle is filled as follows:

cell[i0][j0] = na.    (8.4)

With this new algorithm, the number of tests is at most 24 whatever the number of adsorbed particles. The cost is now proportional to the number of particles rather than to its square. Comparable methods are used in Molecular Dynamics to limit the number of particles to be tested in the force calculation.
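A minimal sketch of the cell-list variant (again illustrative code, not the author's; here the cell side is taken close to σ/2, so that the 5 × 5 block of cells, the 24 neighbors of Fig. 8.1 plus the central cell, suffices for the overlap test and each cell holds at most one disk center):

```python
import random

def rsa_cell_list(sigma, max_trials, seed=0):
    """RSA of hard disks in the unit square, accelerated by a cell list.
    cell[i][j] stores the particle index + 1, 0 meaning empty (Eq. 8.4)."""
    random.seed(seed)
    M = int(2.0 / sigma)           # cells per side, side 1/M close to sigma/2
    a = 1.0 / M
    cell = [[0] * M for _ in range(M)]
    x, y = [], []
    for _ in range(max_trials):
        x0, y0 = random.random(), random.random()      # step 1
        i0, j0 = int(x0 / a), int(y0 / a)              # cell indices, cf. Eq. (8.3)
        if cell[i0][j0]:                               # cell already full
            continue
        ok = True
        for di in range(-2, 3):                        # 5x5 block of cells
            for dj in range(-2, 3):
                i, j = i0 + di, j0 + dj
                if 0 <= i < M and 0 <= j < M and cell[i][j]:
                    k = cell[i][j] - 1
                    if (x0 - x[k]) ** 2 + (y0 - y[k]) ** 2 <= sigma ** 2:
                        ok = False
        if ok:
            x.append(x0)
            y.append(y0)
            cell[i0][j0] = len(x)                      # Eq. (8.4)
    return x, y
```

Note that the text's Eq. (8.3) divides by σ itself; dividing by the cell side a is the corresponding operation here, where a is taken near σ/2 to match the 24-cell neighborhood.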

8.2.4 Results

In one dimension, a system of particles at equilibrium has no phase transition at finite temperature. The solution of such models therefore brings little useful information for studying the same system in higher dimensions (where there generally are phase transitions, but no analytical solutions). On the contrary, in the case of RSA (as well as of other out-of-equilibrium models), the solution of the one-dimensional model provides much useful information for understanding the same model in higher dimensions.

Qualitatively, the adsorption phenomenon proceeds in several stages: at first, the introduced particles are adsorbed without rejection and the density increases rapidly. As the surface fills up, more and more insertion attempts are rejected. When the number of attempts becomes of the order of a few times the number of adsorbed particles, the filling fraction is within 10% of the maximum value that can be reached. A slow regime then appears, in which the spaces still available for particle insertion are isolated on the surface and represent a very small fraction of the initial surface.

The one-dimensional version of the model, also called the car-parking model, can be solved analytically (see Appendix C). Here we simply exploit these results to infer generic behaviors in higher dimensions.

In one dimension, taking the segment length equal to unity, one obtains the following expression for the filling fraction of the line, when the line is initially empty:

ρ(t) = ∫_0^t du exp( −2 ∫_0^u dv (1 − e^{−v})/v ).    (8.5)

When t → ∞, the density tends to 0.7476... In this dynamics, particle rearrangements are forbidden, unlike in an equilibrium situation. At saturation, empty spaces therefore remain on the line, each necessarily shorter than one particle diameter. Note also that the maximum density at saturation depends strongly on the initial configuration. This is again a notable difference from an equilibrium process, which never depends on the chosen thermodynamic path but only on the imposed thermodynamic conditions.
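Equation (8.5) can be checked by elementary quadrature. The sketch below (a plain trapezoidal rule with a finite cutoff T, my choice of parameters) accumulates the inner and outer integrals on the same grid; since the integrand decays as e^{−2γ}/u², the truncation error at T is of order e^{−2γ}/T:

```python
import math

def parking_density(T=2000.0, h=0.01):
    """Trapezoidal evaluation of Eq. (8.5):
    rho(t) = int_0^t du exp(-2 int_0^u dv (1 - e^{-v})/v)."""
    f = lambda v: 1.0 if v == 0.0 else (1.0 - math.exp(-v)) / v
    inner = 0.0                      # running value of the inner integral
    rho = 0.0
    prev_f, prev_g = f(0.0), 1.0     # integrands at the previous grid point
    for i in range(1, int(T / h) + 1):
        u = i * h
        fu = f(u)
        inner += 0.5 * (prev_f + fu) * h     # inner integral up to u
        g = math.exp(-2.0 * inner)           # outer integrand at u
        rho += 0.5 * (prev_g + g) * h
        prev_f, prev_g = fu, g
    return rho

print(parking_density())   # close to the saturation density 0.7476...
```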

This leads to the following apparently paradoxical situation: the stochastic filling dynamics is stationary Markovian, i.e., the particle-selection process has no memory; but since the adsorbed particles are fixed, the system keeps the memory of the initial configuration until the end of the process.

Near saturation, the asymptotic expansion (t → ∞) of equation (8.5) gives

ρ(∞) − ρ(t) ≃ e^{−2γ}/t,    (8.6)

where γ is Euler's constant. The approach to saturation is thus slow, following an algebraic law. In higher dimensions, one can show that for spherical particles saturation is approached as ρ(∞) − ρ(t) ≃ 1/t^{1/d}, where d is the space dimension. This gives an exponent equal to 1/2 for d = 2.

Among the other differences one may note with the same particle system considered at equilibrium are, for example, the particle-number fluctuations of a finite-size system. At equilibrium, in a Monte Carlo simulation one can easily compute the particle fluctuations from the moments of the particle distribution, 〈N〉 and 〈N²〉, in the grand canonical ensemble. In an RSA simulation, these fluctuations are accessible as follows: performing a series of simulations with identical system size and initial configuration, one records a histogram of the density at a given time of the process. One then computes the mean values 〈N〉 and 〈N²〉, the brackets denoting an average over different realizations of the process.

These quantities can thus be defined in the absence of a Gibbs measure for the system. For a thermodynamic system, one may compute these averages in the same way; but given the uniqueness of the equilibrium state, a statistical average over different realizations is equivalent to an equilibrium average over a single realization. This is obviously why one does not generally average over realizations in the equilibrium case: changing realizations makes the required computing time considerably larger, since each realization must be re-equilibrated before the computation of the average can begin.

The magnitude of these fluctuations at a given density differs between an RSA process and a system at equilibrium. In one dimension, these fluctuations can be computed analytically in both cases, and their amplitude is smaller in RSA. This phenomenon persists in higher dimensions, where a numerical computation then becomes necessary; it provides an experimental means of distinguishing configurations generated by an irreversible process from equilibrium configurations.

As we saw in Chapter 4, the correlation function g(r) can be defined from probabilistic geometry considerations (for hard particles), and not necessarily from an underlying Gibbs distribution. This implies that pair correlations can be computed for an RSA process. Using the statistical average introduced for the computation of the fluctuations, one can compute the correlation function g(r) at a given time of the dynamics by considering a large number of realizations.

A significant result concerning this function is that, at the saturation density, it diverges logarithmically at contact:

g(r) ∼ ln((r − σ)/σ).    (8.7)

Such behavior differs strongly from an equilibrium situation, where the correlation function remains finite at contact at the density corresponding to RSA saturation.


8.3 Avalanche model

8.3.1 Introduction

About fifteen years ago, Bak, Tang and Wiesenfeld proposed a model to explain the behavior observed in sand piles. If sand grains are dropped, at a very low flux, above a given point of a surface, a pile progressively forms, whose shape is roughly a cone and whose angle cannot exceed a limiting value. Once this state is reached, if sand keeps being added at the same rate, avalanches of grains of very different sizes are observed at different instants; the grains suddenly rush down the slope. This phenomenon does not seem, a priori, to depend on the microscopic nature of the interparticle interactions.

8.3.2 Definition

Consider a two-dimensional square lattice (N × N). At each lattice point, z(i, j) denotes the integer associated with site (i, j). By analogy with the sand pile, this number will be regarded as the local slope of the pile. The elementary steps of the model are as follows:

1. At time t, a lattice site, denoted (i0, j0), is chosen at random and its value is increased by one:

z(i0, j0) → z(i0, j0) + 1
t → t + 1.    (8.8)

2. If z(i0, j0) ≥ 4, the slope difference becomes too large and particles are redistributed to the nearest-neighbor sites with the following rules:

z(i0, j0) → z(i0, j0) − 4
z(i0 ± 1, j0) → z(i0 ± 1, j0) + 1
z(i0, j0 ± 1) → z(i0, j0 ± 1) + 1
t → t + 1.    (8.9)

3. The redistribution to neighboring sites may push some of those sites above the critical threshold. If a site finds itself in a critical state, it triggers step 2 in turn, and so on.


For sites at the edge of the box, the particles are considered lost and the update rules are as follows:

z(0, 0) → z(0, 0) − 4
z(1, 0) → z(1, 0) + 1
z(0, 1) → z(0, 1) + 1
t → t + 1.    (8.10)

In this case, two particles leave the box.

For i between 1 and N − 2:

z(0, i) → z(0, i) − 4
z(1, i) → z(1, i) + 1
z(0, i ± 1) → z(0, i ± 1) + 1
t → t + 1.    (8.11)

In this case, a single particle leaves the box.

Variants of these boundary rules are also possible, in which the toppling thresholds are modified for the sites located on the boundary. For example, for a corner site the threshold is 2 and one has

z(0, 0) → z(0, 0) − 2
z(1, 0) → z(1, 0) + 1
z(0, 1) → z(0, 1) + 1
t → t + 1.    (8.12)

For i between 1 and N − 2, the threshold is 3:

z(0, i) → z(0, i) − 3
z(1, i) → z(1, i) + 1
z(0, i ± 1) → z(0, i ± 1) + 1
t → t + 1.    (8.13)

The boundary conditions on the four sides of the lattice can be written similarly.

This cycle is continued until no lattice site exceeds the critical threshold¹. At that point, step 1 is started again.

¹Since the sites that become critical at a given time t are not nearest neighbors of one another, the order in which the updates are performed does not affect the final state of the cycle. In other words, this model is Abelian.


This algorithm can easily be adapted to different lattices and to different space dimensions.

The quantities that can be recorded during the simulation are the following: the distribution of the number of particles involved in an avalanche, i.e., the number of lattice sites that found themselves in a critical situation (and had to be updated) before the addition of particles resumed (step 1). One can also consider the distribution of the number of particles lost by the lattice (through its edges) during the process. Finally, one can examine the periodicity of the avalanches over time.
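The rules above can be sketched as follows; this is an illustrative implementation with the uniform threshold of Eqs. (8.8)–(8.9) and open boundaries where grains pushed outside the box are simply lost (i.e. the first set of boundary rules, not the modified thresholds of Eqs. (8.12)–(8.13)). Here the avalanche size is counted as the number of topplings following one grain addition:

```python
import random

def sandpile(N, n_grains, seed=0):
    """Sandpile model on an N x N lattice with open boundaries: add a
    grain at a random site (Eq. 8.8), then relax every site with z >= 4
    by giving one grain to each nearest neighbour (Eq. 8.9).  Returns
    the final slopes z and the list of avalanche sizes."""
    random.seed(seed)
    z = [[0] * N for _ in range(N)]
    sizes = []
    for _ in range(n_grains):
        i0, j0 = random.randrange(N), random.randrange(N)
        z[i0][j0] += 1                               # step 1
        unstable = [(i0, j0)] if z[i0][j0] >= 4 else []
        s = 0                                        # topplings in this avalanche
        while unstable:
            i, j = unstable.pop()
            if z[i][j] < 4:                          # may have relaxed already
                continue
            z[i][j] -= 4                             # step 2: toppling
            s += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:      # grains off the edge are lost
                    z[ni][nj] += 1
                    if z[ni][nj] >= 4:
                        unstable.append((ni, nj))
        sizes.append(s)
    return z, sizes
```

A histogram of `sizes` gives the avalanche-size distribution N(s) discussed below.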

8.3.3 Results

The essential results of this model with very simple rules are the following:

• The number of avalanches involving s particles follows a power law

N(s) ∼ s^{−τ}    (8.14)

where τ ≃ 1.03 in two dimensions.

• Avalanches occur at random, and their frequency of occurrence depends on their size s according to a power law

D(s) ∼ s^{−2}.    (8.15)

• If T is the time associated with the number of elementary steps in an avalanche, the number of avalanches with T steps is given by

N ∼ T^{−1}.    (8.16)

It is now interesting to compare these results with those encountered in the study of equilibrium systems.

One of the first unusual features of this model is that it evolves spontaneously toward a state that can be called critical since, both spatially (distribution of avalanches) and temporally, it strongly resembles a usual critical point in a usual thermodynamic system. Conversely, whereas in a thermodynamic system reaching a critical point requires tuning one or several control parameters (temperature, pressure, ...), there is no control parameter here: the flux of deposited particles can be as small as one wishes, and the distribution of avalanches remains unchanged. The authors of this model, who identified these features, called this state self-organized criticality (SOC) Bak et al. [1987], Turcotte [1999].

The specificity of this model has fascinated a scientific community extending well beyond statistical physics. Many variants of the model have been studied, and it has turned out that the behaviors observed in the original model are found in a large number of physical situations. Seismology is one example. Earthquakes result from the accumulation of stresses in the Earth's mantle, over time scales exceeding hundreds of years. These stresses result from the very slow motion of tectonic plates, and earthquakes occur in regions where stresses accumulate. A universal feature of earthquakes is the following: whatever the region of the Earth's surface where they occur, it is very difficult to predict the date and intensity of a future earthquake, but statistically the distribution of intensities is observed to obey a power law with an exponent α ≃ 2. By comparison, the SOC model predicts an exponent α = 1, which shows that its description of earthquakes is qualitatively correct, but that more refined models are needed for more quantitative agreement.

Forest fires provide a second example for which the analogy with the SOC model is interesting. Over a long period the forest grows, and it can then be partially or almost totally destroyed by a fire. In the models used to describe this phenomenon, there are two time scales: the first is associated with the random deposition of trees on a given lattice; the second is the inverse of the frequency with which one attempts to start a fire at a random site². The fire then propagates by contaminating neighboring sites, if those are occupied by a tree. The distribution of burned forest areas is experimentally described by a power law with α ≃ 1.3–1.4, close to the result predicted by SOC.

We close this section by returning to the initial objectives of the model. Are the avalanches observed in sand piles well described by this type of model? The answer is rather negative. In most of the experimental setups used, avalanches do not occur at random times but rather periodically, and the size distribution is not a power law either. These disagreements are due to inertial and cohesive effects, which cannot in general be neglected. The system that most likely satisfies the conditions of SOC is an assembly of rice grains, for which a power law is indeed observed³.

²I remind you, of course, that carrying out such an experiment without authorization from the competent authorities is reprehensible both morally and legally.

³I leave you the responsibility of setting up the experiment in your kitchen to check the validity of these results, but I cannot be held liable for the resulting damage.


8.4 Inelastic hard sphere model

8.4.1 Introduction

Granular media (powders, sand, ...) are characterized by the following properties: the particle size is far larger than atomic scales (at least several hundred micrometers); the interparticle interactions act over distances much smaller than the particle size; during collisions between particles, part of the kinetic energy is converted into local heating at the contact; and given the particle mass, the thermal agitation energy is negligible compared with the gravitational energy and with the energy injected into the system (generally in the form of vibrations).

In the absence of a continuous energy supply, a granular medium very quickly converts the kinetic energy it receives into heat. At the macroscopic level (that of observation), the system appears dissipative. Given the large separation between time scales, and between microscopic and macroscopic lengths, the collision time can be considered negligible compared with the flight time of a particle, i.e., collisions between particles can be assumed instantaneous. Since the interactions are very short-ranged and the particles hardly deformable (for "not too violent" collisions), the interaction potential can be taken as a hard-core potential.

Momentum is, of course, conserved during the collision:

v1 + v2 = v′1 + v′2.    (8.17)

During the collision, the relative velocity of the contact point changes as follows. Assuming that the tangential component of the contact velocity is conserved (which corresponds to a tangential restitution coefficient equal to 1), one has

(v′1 − v′2)·n = −e (v1 − v2)·n    (8.18)

where e is called the normal restitution coefficient and n is a unit vector along the line joining the centers of the two colliding particles. The value of this coefficient lies between 0 and 1, the latter case corresponding to a system of perfectly elastic hard spheres.

8.4.2 Definition

The inelastic hard sphere system is thus defined as a set of spherical particles interacting through an infinitely hard repulsive potential. Using the above relations, one obtains the following collision rules:

v′i,j = vi,j ± (1 + e)/2 [(vj − vi)·n] n    (8.19)

with

n = (r1 − r2)/|r1 − r2|.    (8.20)

One thus obtains a model particularly well suited to a Molecular Dynamics simulation study. We saw in Chapter 3 the principle of Molecular Dynamics for (elastic) hard spheres; the simulation algorithm for inelastic hard spheres is fundamentally the same. A first difference lies in the collision rules, which must obviously satisfy equation (8.19). A second difference appears during the simulation, since the system steadily loses energy.
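The collision rule (8.19)–(8.20) for two equal-mass spheres can be sketched as follows (illustrative code, not the author's; only the normal component of the relative velocity is affected):

```python
def collide(r1, r2, v1, v2, e):
    """Inelastic hard-sphere collision, Eqs. (8.19)-(8.20), for two
    equal-mass spheres in contact: the normal component of the relative
    velocity is multiplied by -e, the tangential one is unchanged."""
    # unit vector along the line of centres, Eq. (8.20)
    d = [a - b for a, b in zip(r1, r2)]
    norm = sum(c * c for c in d) ** 0.5
    n = [c / norm for c in d]
    g = sum((v2[k] - v1[k]) * n[k] for k in range(len(n)))   # (v2 - v1).n
    fac = 0.5 * (1.0 + e) * g
    v1p = [v1[k] + fac * n[k] for k in range(len(n))]        # Eq. (8.19), particle 1
    v2p = [v2[k] - fac * n[k] for k in range(len(n))]        # Eq. (8.19), particle 2
    return v1p, v2p
```

One can verify on the output that v1 + v2 is conserved, Eq. (8.17), and that the normal relative velocity is reversed and reduced by the factor e, Eq. (8.18).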

8.4.3 Results

This energy loss has several consequences: on the one hand, to keep the energy constant in Molecular Dynamics, external energy must be supplied to the system; on the other hand, this energy loss is at the origin of particle aggregation. Even if one starts from a uniform distribution of particles in space, the system spontaneously evolves at long times toward a situation where the particles gather in clusters. The local density of these clusters is far larger than the initial uniform density; correspondingly, regions of space appear in which the particle density becomes very small.

When the particle density increases, a phenomenon occurs that quickly blocks the simulation: inelastic collapse. Since the particles lose energy in each collision, one may reach a situation where three (nearly aligned) particles undergo a sequence of successive collisions. In this cluster of three particles, the time between collisions decreases steadily, tending to zero. Asymptotically, two particles effectively "stick" together⁴.

The effect that may occur in a simulation for a particle "sandwiched" between two neighbors can be understood by considering the elementary phenomenon of a single inelastic ball bouncing on a plane under gravity. Let h be the height from which the ball is released. At the first contact with the plane, the elapsed time is t1 = √(2h/g) and the velocity before the collision is √(2gh). Just after the first contact with the plane, the velocity becomes e√(2gh). At the second contact, the elapsed time is

t2 = t1 + 2e t1    (8.21)

and for the nth contact,

tn = t1 ( 1 + 2 ∑_{i=1}^{n−1} e^i ).    (8.22)

⁴This is not true sticking, since the simulation has stopped before it is reached.


Thus, for an infinite number of collisions, one has

t∞ = t1 (1 + 2e/(1 − e))    (8.23)
   = t1 (1 + e)/(1 − e).    (8.24)

A finite time thus elapses up to the final situation in which the ball is at rest on the plane, even though this requires an infinite number of collisions.
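The geometric series (8.21)–(8.24) is easily checked numerically; the sketch below lists the successive contact times and compares their limit with Eq. (8.24) (my choice of parameters: with t1 = 1 and e = 0.8, t∞ = (1 + e)/(1 − e) = 9):

```python
def contact_times(t1, e, n):
    """Times of the first n contacts of an inelastic ball with the
    plane, Eq. (8.22): t_n = t1 (1 + 2 sum_{i=1}^{n-1} e^i)."""
    times = [t1]
    dt = 2.0 * e * t1          # interval between the first two contacts
    for _ in range(n - 1):
        times.append(times[-1] + dt)
        dt *= e                # each interval is reduced by the factor e
    return times

t1, e = 1.0, 0.8
t_inf = t1 * (1.0 + e) / (1.0 - e)   # Eq. (8.24): finite total time
```

After a few hundred contacts, `contact_times(t1, e, n)[-1]` is indistinguishable from `t_inf`, illustrating that infinitely many collisions occur in a finite time.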

In a simulation, such a phenomenon brings the evolution of the dynamics to a halt. Beyond being a nuisance for the simulation, our physical assumptions about the collision are no longer valid at very small velocities. When the particle velocity becomes very small, it falls below the thermal agitation velocity, which is weak but finite in reality. Physically, the restitution coefficient in fact depends on the collision velocity and is constant only for relative velocities at impact that remain macroscopic. As the velocities tend to zero, the restitution coefficient tends to 1, and collisions between particles become elastic again.

Finalement, meme si on s’attend que des spheres inelastiques finissent parvoir decroıtre leur energie cinetique jusqu’a une valeur egale a zero (en l’absenced’une source exterieure qui apporterait de l’energie pour compenser la dissipationliee aux collisions), la simulation des spheres inelastiques ne permet d’atteindrecet etat d’energie nulle, car celle-ci est stoppee bien avant, des qu’une sphere setrouve dans la situation de collisions multiples entre deux partenaires.

Si l’on souhaite avoir une modelisation proche de l’experience, il est necessairede tenir compte des conditions aux limites proches de l’experience: pour fournirde l’energie de maniere permanente au systeme, il y a generalement un mur quieffectue un mouvement vibratoire. A cause des collisions entre les particules et lemur qui vibre, cela se traduit par un bilan net positif d’energie cinetique injecteedans le systeme, ce qui permet de maintenir une agitation des particules malgreles collisions inelastiques entre particules.

En conclusion, le modele des spheres inelastiques est un bon outil pour la mod-elisation des milieux granulaires, si ceux-ci conservent une agitation suffisante.En l’absence d’injection d’energie, ce modele ne peut decrire que les phenomenesprenant place dans un intervalle de temps limite.

8.4.4 Some properties

In the absence of energy input, the system can only cool down. Before aggregates form, there exists a scaling regime in which the system remains globally homogeneous and the velocity distribution obeys a scaling law. To understand this regime simply, consider the kinetic energy lost in a collision between two particles. Using equation (8.19), one obtains for the kinetic-energy loss

ΔE = −((1 − α²)/4) m ((v_j − v_i)·n)² (8.25)

Obviously, if α = 1, the case of elastic spheres, kinetic energy is conserved, as we saw in Chapter 3. Conversely, the energy loss is maximal when the restitution coefficient equals 0.

Assuming the medium remains homogeneous, the typical time between collisions is of order l/Δv. The mean energy loss per collision is ΔE = −ε(Δv)², where ε = 1 − α², which gives the evolution equation

dT/dt = −ε T^{3/2} (8.26)

and hence the following cooling law

T(t) ≃ 1/(1 + Aεt)² (8.27)

This property is known as Haff's law (1983).

When the granular medium receives energy continuously, the system reaches a steady state whose statistical properties depart from those of equilibrium. In particular, the velocity distribution is never Gaussian. Among other things, this implies that the granular temperature, defined as the mean square velocity, differs from the temperature associated with the width of the velocity distribution. It has been shown that the tail of the velocity distribution does not in general decay in a Gaussian manner, and that its asymptotic form is intimately related to the way energy is injected into the system.

Among the properties that differ from those at equilibrium: in a mixture of two species of granular particles (for example, spheres of different sizes), the granular temperature of each species is different; there is thus no equipartition of energy as there is at equilibrium.
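The cooling law (8.27) can be checked by integrating (8.26) numerically. A minimal sketch (not from the original notes; T0, eps and dt are arbitrary choices) using a simple Euler scheme, compared with the exact solution of (8.26), which has the form (8.27) with A = √T(0)/2:

```python
def haff_cooling(T0=1.0, eps=0.1, dt=1e-3, t_max=50.0):
    """Integrate dT/dt = -eps * T**(3/2) (Eq. 8.26) with an Euler scheme."""
    T, t, traj = T0, 0.0, []
    while t < t_max:
        traj.append((t, T))
        T += -eps * T ** 1.5 * dt
        t += dt
    return traj

traj = haff_cooling()
t_end, T_num = traj[-1]
T0, eps = 1.0, 0.1
# exact solution of (8.26): T(t) = T0 / (1 + (eps * sqrt(T0) / 2) * t)**2
T_exact = T0 / (1.0 + 0.5 * eps * T0 ** 0.5 * t_end) ** 2
print(T_num, T_exact)
```

At long times the temperature decays algebraically as t^{-2}, in contrast with the exponential decay in the number of collisions found in exercise 8.8.1.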

8.5 Exclusion models

8.5.1 Introduction

While equilibrium statistical mechanics has the Gibbs-Boltzmann distribution to characterize the statistical distribution of the configurations of a system at equilibrium, there is no general method for constructing the distribution of states of a system in a steady state. Exclusion models are examples of simple models introduced over the last fifteen years, on which a large body of theoretical work has been done and which lend themselves fairly easily to numerical simulation for testing the various theoretical approaches. They can serve as simple models of car traffic along a road, in particular to describe traffic jams as well as the flow fluctuations that appear spontaneously as the density of cars increases.

There is a large number of variants of these models, and we restrict ourselves to the simplest examples in order to bring out their most significant properties. To allow analytical treatment, the one-dimensional versions of these models have been the most widely studied.

Consider a regular one-dimensional lattice of N sites on which n particles are placed. Each particle has a probability p of hopping to the adjacent site on its right if that site is empty, and a probability 1 − p of hopping to the site on its left if that site is empty. Variants of this model correspond to different treatments of sites 1 and N (the boundary conditions).

For out-of-equilibrium phenomena, the specifics of the stochastic dynamics enter more or less strongly into the observed behavior. For these models, at least three different methods are available to realize the dynamics.

• Parallel update. The rules are applied to all particles at the same instant; the term synchronous update is also used. This type of dynamics generally induces the strongest correlations between particles. The corresponding master equation is discrete in time.5

• Asynchronous update. The most commonly used mechanism consists of choosing a particle at random, uniformly, at time t and applying the evolution rules of the model. The corresponding master equation is continuous in time.

• There are also situations where the update is sequential and ordered. One can consider successively either the sites of the lattice or the particles. The order can be chosen from left to right or the reverse. The corresponding master equation is again discrete in time.

5For the ASEP model, to prevent two particles, one coming from the left and one from the right, from choosing the same site at the same time, the parallel dynamics is carried out first on the sublattice of even sites and then on the sublattice of odd sites.


8.5.2 Random walk on a ring

Stationarity and detailed balance

With periodic boundary conditions, a walker arriving at site N has a probability p of moving to site 1. It is easy to see that such a system has a stationary state in which the probability of being at any given site is the same for all sites, P_st(i) = 1/N. Thus one has

Π(i → i+1) P_st(i) = p/N (8.28)
Π(i+1 → i) P_st(i+1) = (1 − p)/N (8.29)

Except when the probabilities of hopping right and left are identical, i.e. p = 1/2, the detailed balance condition is not satisfied even though stationarity is. This is no surprise, but it illustrates the fact that the stationarity conditions of the master equation are much more general than detailed balance. The latter is generally used in equilibrium statistical mechanics simulations.
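This distinction can be seen directly in a simulation: the occupation histogram of a biased walker on a ring is uniform, while a nonzero net current flows, which is incompatible with detailed balance. A minimal sketch (not from the original notes; N, p and the number of steps are arbitrary choices):

```python
import random

def biased_ring_walk(N=10, p=0.7, steps=200_000, seed=42):
    """Single walker on a ring of N sites: hop right with probability p,
    left with probability 1 - p. Returns the site-occupation histogram
    and the mean current per step."""
    rng = random.Random(seed)
    site, visits, current = 0, [0] * N, 0
    for _ in range(steps):
        if rng.random() < p:
            site = (site + 1) % N
            current += 1
        else:
            site = (site - 1) % N
            current -= 1
        visits[site] += 1
    return [v / steps for v in visits], current / steps

occ, j = biased_ring_walk()
# occupation is uniform, P_st(i) = 1/N, but for p != 1/2 a net
# current j ~ 2p - 1 flows around the ring: stationarity without detailed balance
print(occ, j)
```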

Totally asymmetric exclusion process

To allow the analytical treatment to be pushed quite far while already illustrating part of the phenomenology of this model, one can forbid particles from moving to the left. In this case, p = 1. With a continuous-time algorithm, a particle at site i has a probability dt of moving to the right if that site is empty; otherwise it stays on the same site. Consider P particles on a lattice of N sites, with P < N. We introduce variables τ_i(t) representing the occupation of site i at time t: such a variable equals 1 if a particle is present at site i at time t, and 0 if the site is empty.

It is possible to write an evolution equation for the variable τ_i(t). If site i is occupied, it can empty if the particle present at time t can move to site i + 1, which must then be vacant. If site i is empty, it can fill if there is a particle on site i − 1 at time t. Thus the state τ_i(t + dt) equals τ_i(t) if neither the bond to the right nor the bond to the left of site i has changed:

τ_i(t+dt) =
  τ_i(t)                                               probability of no jump: 1 − 2dt
  τ_i(t) + τ_{i−1}(t)(1 − τ_i(t))                      probability of a jump in from the left: dt
  τ_i(t) − τ_i(t)(1 − τ_{i+1}(t)) = τ_i(t)τ_{i+1}(t)   probability of a jump out to the right: dt
                                                       (8.30)


Averaging over the history of the dynamics, i.e. from time 0 to time t, one obtains the following evolution equation:

d〈τ_i(t)〉/dt = −〈τ_i(t)(1 − τ_{i+1}(t))〉 + 〈τ_{i−1}(t)(1 − τ_i(t))〉 (8.31)

One sees that the evolution of the state of a site depends on the evolution of the pair of neighboring sites. By a reasoning similar to the one just made for the evolution of the variable τ_i(t), the equation for a pair of adjacent sites depends on the state of the sites upstream and downstream of the pair considered. One thus has

d〈τ_i(t)τ_{i+1}(t)〉/dt = −〈τ_i(t)τ_{i+1}(t)(1 − τ_{i+2}(t))〉 + 〈τ_{i−1}(t)τ_{i+1}(t)(1 − τ_i(t))〉 (8.32)

One obtains a hierarchy of equations involving clusters of sites of larger and larger size. A simple solution exists in the stationary state, since one can show that all configurations have equal weight, given by

P_st = P!(N − P)!/N! (8.33)

which is the inverse of the number of ways of placing P particles on N sites. The stationary averages are

〈τ_i〉 = P/N (8.34)
〈τ_i τ_j〉 = P(P − 1)/(N(N − 1)) (8.35)
〈τ_i τ_j τ_k〉 = P(P − 1)(P − 2)/(N(N − 1)(N − 2)) (8.36)
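The stationary averages (8.34)-(8.35) are easy to verify by direct simulation of the TASEP on a ring. A minimal random-sequential sketch (not from the original notes; N, P and the number of sweeps are arbitrary choices):

```python
import random

def tasep_ring(N=20, P=8, sweeps=20_000, seed=1):
    """Random-sequential TASEP on a ring: pick a site at random; if it holds
    a particle and the next site is empty, the particle hops to the right."""
    rng = random.Random(seed)
    tau = [1] * P + [0] * (N - P)
    rng.shuffle(tau)
    occ, pair, samples = [0.0] * N, 0.0, 0
    for sweep in range(sweeps):
        for _ in range(N):                      # one sweep = N attempted moves
            i = rng.randrange(N)
            j = (i + 1) % N
            if tau[i] == 1 and tau[j] == 0:
                tau[i], tau[j] = 0, 1
        if sweep >= sweeps // 2:                # discard the first half as transient
            samples += 1
            for i in range(N):
                occ[i] += tau[i]
            pair += tau[0] * tau[1]
    occ = [o / samples for o in occ]
    return occ, pair / samples

occ, pair = tasep_ring()
N, P = 20, 8
print(sum(occ) / N, P / N)                  # Eq. (8.34): <tau_i> = P/N
print(pair, P * (P - 1) / (N * (N - 1)))    # Eq. (8.35): <tau_i tau_{i+1}>
```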

8.5.3 Model with open boundaries

By opening the boundaries, the system can exchange particles with particle "reservoirs". For site 1, if the site is empty at time t, there is a probability (or probability density, for continuous-time dynamics) α that a particle is injected at site 1, and if the site is occupied, there is a probability γ that the particle at site 1 leaves the lattice through the left. Similarly, if site N is occupied, there is a probability β that the particle is ejected from the lattice, and if site N is empty, there is a probability δ that a particle is injected at site N.

One can write evolution equations in the general case, but they cannot be solved in general. For simplicity, we restrict ourselves to the totally asymmetric case. The evolution equations for the occupation variables obtained for a periodic system are modified for sites 1 and N as follows: site 1 can, if empty, receive a particle from the reservoir with probability α and, if occupied, lose its particle either back into the reservoir with probability γ or by a hop to the site on its right. One then obtains

d〈τ_1(t)〉/dt = α(1 − 〈τ_1(t)〉) − γ〈τ_1(t)〉 − 〈τ_1(t)(1 − τ_2(t))〉 (8.37)

Similarly, for site N, one has

d〈τ_N(t)〉/dt = 〈τ_{N−1}(t)(1 − τ_N(t))〉 + δ〈(1 − τ_N(t))〉 − β〈τ_N(t)〉 (8.38)

Figure 8.2 – Phase diagram of the ASEP model. The dashed line is a first-order transition line and the two solid lines are continuous transition lines.

It is possible to obtain a solution of these equations in the stationary case. We content ourselves with summarizing their essential features by drawing the phase diagram in the case δ = γ = 0, see Figure 8.2.

The LDP region (low density phase) corresponds to a stationary density (far from the boundaries) equal to α when α < 1/2 and α < β. The HDP region (high density phase) corresponds to a stationary density 1 − β and appears for β < 0.5 and α > β. The last region of the diagram is called the maximal current phase and corresponds to values of α and β both strictly greater than 1/2. The dashed line separating the LDP and HDP phases is a first-order transition line. Indeed, in the "thermodynamic" limit, upon crossing the transition line the density undergoes a discontinuity

Δρ = ρ_HDP − ρ_LDP = 1 − β − α (8.39)
                   = 1 − 2α (8.40)

which is nonzero for all α < 0.5. In a simulation of a finite system, one observes a sharp variation of the density at the site in the middle of the lattice, a variation that is more pronounced the larger the lattice.
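The two density phases can be observed directly in a small simulation with open boundaries. A minimal sketch (not from the original notes; N, the sweeps count and the two (α, β) pairs are arbitrary choices) measuring the time-averaged density at the middle site:

```python
import random

def tasep_open(N=32, alpha=0.3, beta=0.8, sweeps=20_000, seed=7):
    """Open-boundary TASEP (delta = gamma = 0) with random-sequential updates.
    Returns the time-averaged density at the middle site."""
    rng = random.Random(seed)
    tau = [0] * N
    mid, acc, samples = N // 2, 0.0, 0
    for sweep in range(sweeps):
        for _ in range(N + 1):
            i = rng.randrange(-1, N)            # -1 stands for the entry move
            if i == -1:
                if tau[0] == 0 and rng.random() < alpha:
                    tau[0] = 1                  # injection at the left boundary
            elif i == N - 1:
                if tau[N - 1] == 1 and rng.random() < beta:
                    tau[N - 1] = 0              # ejection at the right boundary
            elif tau[i] == 1 and tau[i + 1] == 0:
                tau[i], tau[i + 1] = 0, 1       # bulk hop to the right
        if sweep >= sweeps // 2:                # discard the transient
            acc += tau[mid]
            samples += 1
    return acc / samples

rho_ldp = tasep_open(alpha=0.3, beta=0.8)   # low density phase: rho = alpha
rho_hdp = tasep_open(alpha=0.8, beta=0.3)   # high density phase: rho = 1 - beta
print(rho_ldp, rho_hdp)
```

For a lattice this small the bulk values α and 1 − β are only approached up to finite-size and statistical corrections.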

8.6 Kinetic constraint models

8.6.1 Introduction

The problem posed by the approach to the glass transition in structural glasses is to explain the appearance of phenomena such as the extremely strong growth of the relaxation times toward equilibrium, while no feature associated with a phase transition (divergence of a correlation length) is present. To gather the data of many structural glasses, it is customary to plot the rapid growth of the relaxation times (or of the viscosity) of these systems as a function of the inverse temperature normalized, for each substance, by its glass transition temperature6.

Figure 8.3 shows the Angell plot for several systems. When a straight line is obtained in this diagram, the liquid (glass) is said to be "strong", whereas when curvature is observed one speaks of a "fragile" liquid (glass). The fragility is more pronounced the larger the curvature; in this diagram the most fragile liquid is thus orthoterphenyl.

For strong glasses, the temperature dependence of the relaxation time is evident from Figure 8.3 and reads

τ = τ_0 exp(E/T) (8.41)

where E is an activation energy that does not depend on temperature over the range considered.

For fragile glasses, several fitting formulas have been proposed; since their introduction they have been associated with the existence, or the absence, of an underlying transition temperature, a transition that is not observed because it would lie in a region inaccessible to experiment. To cite only the best known, there is the Vogel-Fulcher-Tammann (VFT) law, which reads

τ = τ_0 exp(A/(T − T_0)). (8.42)

6This temperature is defined by an experimental condition: the relaxation time reaches the value of 1000 s. It depends on the quench rate imposed on the system.

Figure 8.3 – Decimal logarithm of the viscosity (or of the relaxation time) as a function of the inverse temperature normalized by the glass transition temperature of each substance. Germanium oxide is considered a strong glass, while orthoterphenyl is the most fragile one.

The temperature T_0 is often interpreted as a transition temperature that cannot be reached. One may also regard the ratio A/(T − T_0) as an activation energy, which therefore increases sharply as the temperature is lowered. Note that a law of the form

τ = τ_0 exp(B/T²) (8.43)

reproduces the curvature of several fragile glasses quite well and presupposes nothing about the existence of an underlying nonzero transition temperature.

Another generic characteristic of glasses is that correlation functions decay in a form that generally cannot be fitted by a simple exponential. Again, this slow decay is often fitted by a Kohlrausch-Williams-Watts (KWW) law. This law reads

φ(t) = exp(−a t^b) (8.44)

where φ(t) is the correlation function of the measured quantity. Quite generally, b is a decreasing function of temperature, typically starting from 1 at high temperature (the region where the decay of correlation functions is exponential) and going down to values between 0.5 and 0.3. One can also write the correlation function as

φ(t) = exp(−(t/τ)^b) (8.45)

where τ = a^{−1/b}. One often speaks of a stretched exponential, the stretching being more pronounced the smaller the exponent b.

To determine the characteristic relaxation time of the correlation function, there are three ways of measuring this time:

• Consider that the characteristic time is given by the equation φ(τ) = 0.1 φ(0). This criterion is used both in simulation and experimentally: the value of the correlation function can then be determined precisely, because (statistical) fluctuations do not yet dominate it, which would be the case with a criterion φ(τ) = 0.01 φ(0), for simulation data as well as for experimental data.

• More rigorously, one can say that the characteristic time is given by the integral of the correlation function, τ = ∫_0^∞ dt φ(t). This approach is useful when working with analytical expressions, but it remains difficult to apply to numerical or experimental results.

• One can also determine this time from the fitting law, Eq. (8.45). This method suffers from having to determine simultaneously the stretched-exponential exponent and the value of the time τ.

If the correlation function were described by a simple exponential, the characteristic times determined by each of these methods would coincide up to a multiplicative factor. In the general case, the three times are not simply related to one another; however, as long as the slow decay is described by a dominant KWW behavior and the exponent b is not too close to 0 (which corresponds to the experimental situation of structural glasses), the relation between the three times remains almost linear and the behaviors essentially the same. Thus the first method is the one retained in simulation studies.
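For the KWW form (8.45) the first two definitions can be written in closed form: the 10% criterion gives t* = τ (ln 10)^{1/b}, and the time integral gives τ Γ(1 + 1/b). A minimal sketch (not from the original notes; τ = 1 and b = 0.6 are arbitrary choices) comparing the two and checking the integral numerically:

```python
import math

def kww_times(tau=1.0, b=0.6):
    """Two estimates of the relaxation time of phi(t) = exp(-(t/tau)**b):
    the 10% threshold criterion and the time integral."""
    # threshold criterion phi(t*) = 0.1 phi(0)  =>  t* = tau * (ln 10)**(1/b)
    t_threshold = tau * math.log(10.0) ** (1.0 / b)
    # exact integral: int_0^inf exp(-(t/tau)**b) dt = tau * Gamma(1 + 1/b)
    t_integral = tau * math.gamma(1.0 + 1.0 / b)
    return t_threshold, t_integral

t_thr, t_int = kww_times()
# crude numerical check of the integral with a trapezoidal rule
dt, t, s = 1e-3, 0.0, 0.0
while t < 60.0:
    s += 0.5 * dt * (math.exp(-t ** 0.6) + math.exp(-(t + dt) ** 0.6))
    t += dt
print(t_thr, t_int, s)
```

The two estimates differ only by an O(1) factor as long as b stays away from 0, which is what makes the cheap threshold criterion acceptable in practice.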

A final characteristic observed in glasses is the existence of so-called heterogeneous relaxation; in other words, the supercooled liquid would be divided into regions, each with its own characteristic time. The observed global relaxation would then be the superposition of "individual" relaxations. An interpretation of this phenomenon leading to a KWW law is possible: with each domain relaxing exponentially at a typical relaxation rate ν, the global relaxation is given by

φ(t) = ∫ dν exp(−tν) D(ν) (8.46)

Mathematically, this is the Laplace transform of the density of relaxation rates of the different domains.

8.6.2 Facilitated spin models

Introduction

Given the universal nature of the glassy behavior observed in physical situations ranging from structural glasses to polymers and even granular media, it is tempting to turn to approaches that average locally over the microscopic details of the interactions. Kinetically constrained models are built on this assumption. Considering, moreover, that heterogeneous dynamics is the source of the slow relaxation observed in glassy systems, many lattice models with discrete variables have been proposed. These models have the virtue of showing that, even with a dynamics satisfying detailed balance and a Hamiltonian of non-interacting particles, one can create relaxation dynamics whose behavior is highly nontrivial and reproduce part of the observed phenomena (Ritort and Sollich [2003]). As this is still a rapidly evolving field, we take no position on the pertinence of these models, but consider them as significant examples of a dynamics that is drastically altered compared to an evolution governed by a classical Metropolis rule. We will see in the next chapter another example (the adsorption-desorption model) where the absence of the diffusive moves present in a usual dynamics leads to a very slow and likewise nontrivial dynamics.

Fredrickson-Andersen model

The Fredrickson-Andersen model, introduced by these two authors some twenty years ago, is the following: consider discrete objects on a lattice (in this course we restrict ourselves to a one-dimensional lattice, but generalizations to higher dimensions are possible). The discrete variables are denoted n_i and take the value 0 or 1. The interaction Hamiltonian is given by

H = Σ_{i=1}^{N} n_i (8.47)


At temperature T (with β = 1/k_B T), the density of variables in state 1, denoted n, is given by

n = 1/(1 + exp(β)) (8.48)

Variables in this state are regarded as the mobile regions, and regions in state 0 as weakly mobile regions. This model attempts to reproduce a heterogeneous dynamics. At low temperature the density of mobile regions is small, n ∼ exp(−β). With a usual Metropolis dynamics, four situations for changing the state of a site must be considered; the transition rule for a site depends on the states of its two neighbors. The four situations are

...000... ↔ ...010... (8.49)
...100... ↔ ...110... (8.50)
...001... ↔ ...011... (8.51)
...101... ↔ ...111... (8.52)

The Fredrickson-Andersen model consists of forbidding the first of the four situations, which forbids both the creation and the destruction of a mobile region inside an immobile region. Despite this restriction, the dynamics can explore almost the whole phase space, as with the usual Metropolis rules; only a single configuration can never be reached, namely the configuration containing only sites with value 0.

In this case, one can show that in one dimension the characteristic relaxation time grows as

τ = exp(3β) (8.53)

which corresponds to a strong glass and to a dynamics driven by activated processes.
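Since the constrained dynamics still satisfies detailed balance with respect to (8.47), the equilibrium density must remain the one given by (8.48). A minimal Metropolis sketch (not from the original notes; N, β and the sweep count are arbitrary choices) checking this:

```python
import math, random

def fa_density(N=200, beta=1.0, sweeps=2000, seed=3):
    """One-dimensional Fredrickson-Andersen model with Metropolis rates.
    A spin may flip only if at least one of its neighbors is in state 1,
    which forbids the move ...000... <-> ...010... (Eq. 8.49)."""
    rng = random.Random(seed)
    n = [1 if rng.random() < 1.0 / (1.0 + math.exp(beta)) else 0
         for _ in range(N)]
    if sum(n) == 0:
        n[0] = 1                      # keep at least one mobile region
    acc = 0.0
    for _ in range(sweeps):
        for _ in range(N):
            i = rng.randrange(N)
            if n[(i - 1) % N] == 0 and n[(i + 1) % N] == 0:
                continue              # kinetic constraint: move forbidden
            if n[i] == 1:
                n[i] = 0              # 1 -> 0 lowers the energy, always accepted
            elif rng.random() < math.exp(-beta):
                n[i] = 1              # 0 -> 1 costs Delta E = 1
        acc += sum(n) / N
    return acc / sweeps

density = fa_density()
print(density, 1.0 / (1.0 + math.exp(1.0)))   # compare with Eq. (8.48)
```

The static properties are thus trivial; only the relaxation toward them is slow.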

East Model

This model is inspired by the previous one in that the Hamiltonian is identical, but the dynamics is more constrained, introducing a breaking of the left-right symmetry of the model. The dynamical rules are those of the previous model minus rule (8.51): the creation of a mobile region to the left of a mobile region is forbidden. One can easily check that all configurations of the previous model remain accessible with this additional constraint, but the characteristic relaxation time becomes

τ = exp(1/(T² ln 2)) (8.54)


With a relaxation time growing faster than an Arrhenius law, this model is an example of a so-called fragile glass, whereas the FA model is an example of a strong glass. With such simple rules, it is possible to perform very precise simulations and to obtain analytical results in a large number of situations.

8.7 Conclusion

These few examples show the diversity, and hence the richness, of the behavior of systems away from equilibrium. In the absence of a general theory for out-of-equilibrium systems, numerical simulation remains a powerful tool for studying their behavior, and makes it possible to test the theoretical developments that can be obtained for the simplest models.

8.8 Exercises

8.8.1 Haff Law

Consider a system of inelastic hard spheres of identical mass m, constrained to move along a straight line. The spheres are initially placed at random on the line (without overlapping), with a velocity distribution p(v, t = 0).

Consider first a collision between two spheres with velocities v_1 and v_2. At the moment of the inelastic impact, the collision rule is

v'_1 − v'_2 = −α(v_1 − v_2) (8.55)

where v'_1 and v'_2 are the post-collision velocities and α is called the restitution coefficient (0 ≤ α < 1).

♣ Q. 8.8.1-1 Which conservation law is satisfied in the collision? Give the corresponding equation.

♣ Q. 8.8.1-2 Compute v'_1 and v'_2 as functions of v_1, v_2 and α.

♣ Q. 8.8.1-3 Compute the kinetic energy lost in the collision, denoted Δe_c, as a function of ε = 1 − α², of Δv = (v_2 − v_1), and of m.

In the following, we assume an absence of correlations in the collisions between particles, which implies that the joint probability p(v_1, v_2, t) (the probability of finding particle 1 and particle 2 with velocities v_1 and v_2) factorizes:

p(v_1, v_2, t) = p(v_1, t) p(v_2, t) (8.56)


♣ Q. 8.8.1-4 Show that the mean value <(Δv)²> lost per collision,

<(Δv)²> = ∫∫ dv_1 dv_2 p(v_1, v_2, t) (Δv)² (8.57)

can be expressed in terms of the granular temperature T, defined by the relation T = m <v²>, and of the mass m.

♣ Q. 8.8.1-5 The system undergoes successive collisions over time, and its total kinetic energy (hence its temperature) decreases. Show that

dT/dτ = −ε T/N (8.58)

where τ is the mean number of collisions and N the total number of particles in the system.

♣ Q. 8.8.1-6 Defining n = τ/N, the mean number of collisions per particle, show that

T(n) = T(0) e^{−εn} (8.59)

where T(0) is the initial temperature of the system.

♣ Q. 8.8.1-7 The mean collision frequency ω(T) per ball is given by

ω(T) = ρ √(T/m) (8.60)

where ρ is the density of the system. Establish the differential equation relating dT/dt, T(t), m and ρ.

♣ Q. 8.8.1-8 Integrate this differential equation and show that the solution behaves at long times as T(t) ∼ t^β, where β is an exponent to be determined. This decay of the energy is called Haff's law.

♣ Q. 8.8.1-9 Assume that the velocity distribution p(v, t) takes the scaling form

p(v, t) = A(t) P(v/v̄(t)) (8.61)

Show that A(t) is a simple function of v̄(t), called the characteristic velocity.

♣ Q. 8.8.1-10 Using the expression of the probability defined in equation (8.61), compute e_p(t) = (m/2) ∫ dv v² p(v, t) in terms of the characteristic velocity and of constants.


♣ Q. 8.8.1-11 Is the scaling law for the velocity distribution compatible with Haff's law? Justify your answer.

♣ Q. 8.8.1-12 For n_c ∼ 1/ε, the kinetic energy no longer decays algebraically and aggregates begin to form, associated with the appearance of an inelastic collapse. Describe the phenomenon of inelastic collapse in simple terms.

8.8.2 Domain growth and kinetically constrained models

Many equilibrium and out-of-equilibrium phenomena are characterized by the formation of spatial regions, called domains, inside which a local order is present. The characteristic length of these domains is denoted l. Their time evolution proceeds by the motion of the boundaries, called domain walls. The mechanism of motion depends on the physical system considered. For an out-of-equilibrium phenomenon, the evolution equation for the length l is

dl/dt = l/T(l) (8.62)

where T(l) is the characteristic time of wall motion.

♥ Q. 8.8.2-1 When the wall moves freely by diffusion, T(l) = l². Solve the differential equation and determine the growth of l.

More generally, moving a wall requires crossing a (free-)energy barrier, and the characteristic time is then given by the relation

T(l) = l² exp(βΔE(l)) (8.63)

where ΔE(l) is an activation energy.

♣ Q. 8.8.2-2 When ΔE(l) = l^m, what is the time evolution of l? Is it faster or slower than in the previous case? Interpret this result.

When domain growth occurs in a dynamics close to equilibrium, the evolution equation is modified as follows

dl/dt = l/T(l) − l_eq/T(l_eq) (8.64)

where l_eq denotes the typical domain length at equilibrium.


♥ Q. 8.8.2-3 Solve this differential equation for l ∼ l_eq. Deduce the characteristic time of return to equilibrium as a function of T(l_eq) and of (d ln T(l)/d ln l) evaluated at l_eq.

Kinetically constrained models are examples of systems where relaxation toward equilibrium is very slow, even though the thermodynamics remains elementary. We consider the example called the East model. Its Hamiltonian is

H = (1/2) Σ_i S_i (8.65)

♥ Q. 8.8.2-4 Consider a system of N spins. Compute the partition function Z of this system.

♣ Q. 8.8.2-5 Compute the equilibrium concentration c of spins with value +1. At very low temperature, show that this concentration can be approximated by e^{−β}.

The dynamics of this model is the Metropolis dynamics, except that a spin flip is possible only if the spin's left-hand neighbor is a +1 spin.

♥ Q. 8.8.2-6 Explain why the state in which all spins are negative cannot be reached starting from an arbitrary configuration. A domain is defined as a sequence of −1 spins bounded on its right by a +1 spin. The smallest domain has size 1 and consists of a single +1 spin.

♣ Q. 8.8.2-7 Given the imposed dynamics, give the path (i.e. the sequence of configurations) allowing a wall of size 2 located to the right of a +1 spin to be merged. In other words, we look for the sequence of configurations going from +1 −1 +1 to +1 −1 −1.

Flipping a −1 spin into a +1 spin is unfavorable at low temperature and costs an activation energy +1. More generally, creating a sequence of n +1 spins to the right of an arbitrary wall (i.e. a sequence of n domains of size 1) costs an activation energy n.

One possible path to destroy a wall of size n consists in finding a sequence of consecutive single-spin-flip configurations going from +1 −1 ... −1 +1 to +1 −1 ... −1.

♣ Q. 8.8.2-8 Give this path (the sequence of configurations). This path is not the most favorable, since it requires creating a very long sequence of +1 spins. We now show that a more economical path exists.

♣ Q. 8.8.2-9 Consider a wall of size l = 2^n. If h(l) denotes the smallest sequence that must be created to make a wall of size l disappear, show by induction that

h(2^n) = h(2^{n−1}) + 1 (8.66)


♣ Q. 8.8.2-10 Deduce that the activation energy for destroying a wall of size l is ln(l)/ln(2).

♠ Q. 8.8.2-11 Assuming that l_eq = e^β and inserting the previous result into the one obtained in question 8.8.2-3, show that the relaxation time of this model behaves asymptotically at low temperature as exp(β²/ln(2)).

8.8.3 Diffusion-coagulation model

Consider the following stochastic model: on a one-dimensional lattice of size L, at time t = 0, N particles denoted A are placed at random, with N < L. The dynamics of the model consists of two mechanisms: a particle can hop randomly to a neighboring site on its right or left; if that site is empty, the particle occupies it and frees the previous site; if the site is occupied, the particle "coagulates" with the particle already present to give a single particle. The two mechanisms are represented by the following reactions

A + · ↔ · + A    (8.67)

A + A → A    (8.68)

where the dot denotes an empty site.
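As an aside, the dynamics just described is straightforward to simulate. The sketch below is a minimal sequential implementation under the rules above (periodic boundary conditions; the function name and the parameter choices are illustrative, not taken from the text):

```python
import random

def simulate_coagulation(L, n_particles, n_steps, seed=0):
    """Sequential dynamics for A + . <-> . + A and A + A -> A on a ring of L sites."""
    rng = random.Random(seed)
    occupied = [False] * L
    for s in rng.sample(range(L), n_particles):
        occupied[s] = True
    for _ in range(n_steps):
        s = rng.randrange(L)                      # pick a site at random
        if not occupied[s]:
            continue                              # nothing to move here
        target = (s + rng.choice((-1, 1))) % L    # hop left or right
        occupied[s] = False
        # empty target: diffusion; occupied target: coagulation (A + A -> A)
        occupied[target] = True
    return sum(occupied)                          # number of surviving particles
```

Tracking the number of surviving particles as a function of the number of attempted moves gives direct access to the density ρ_L(t) asked about in the questions below.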

♣ Q. 8.8.3-1 For a lattice of size L with periodic boundary conditions, what is the final state of the dynamics? Deduce the density ρ_L(∞). What is the value of this limit when L → ∞?

♣ Q. 8.8.3-2 We wish to simulate this stochastic model numerically with sequential dynamics. Propose an algorithm for simulating this model on a lattice of size L.

♣ Q. 8.8.3-3 Does the proposed algorithm satisfy detailed balance? Justify your answer.

Let P(l, t) denote the probability of finding an interval of length l containing no particle.

♣ Q. 8.8.3-4 Express the probability Q(l, t) of finding an empty interval of length l with at least one particle on one of the two sites bordering the interval, in terms of P(l, t) and P(l + 1, t).

♣ Q. 8.8.3-5 Show that the evolution of the probability P(l, t) is given by the following differential equation

∂P(l, t)/∂t = Q(l − 1, t) − Q(l, t)    (8.69)


♣ Q. 8.8.3-6 Express the particle density ρ(t) on the lattice in terms of P(0, t) and P(1, t).

♣ Q. 8.8.3-7 Justify the fact that P(0, t) = 1.

♣ Q. 8.8.3-8 By taking the continuum limit, that is, by writing P(l + 1, t) = P(l, t) + ∂P(l, t)/∂l + (1/2) ∂²P(l, t)/∂l² + ..., show that one obtains the following differential equation

∂P(l, t)/∂t = ∂²P(l, t)/∂l²    (8.70)

and that

ρ(t) = − ∂P(l, t)/∂l |_{l=0}    (8.71)

♣ Q. 8.8.3-9 Verify that 1 − erf(l/(2√t)) is a solution for P(l, t) in the continuum limit, where erf is the error function. (Hint: erf(x) = (2/√π) ∫₀^x dt exp(−t²).) Deduce the density ρ(t).

We now consider a mean-field approach to the evolution of the dynamics. The evolution of the mean local density ρ(x, t) is then given by

∂ρ(x, t)/∂t = ∂²ρ(x, t)/∂x² − ρ(x, t)²    (8.72)

We look for a spatially homogeneous solution of this equation, ρ(x, t) = ρ(t).

♣ Q. 8.8.3-10 Solve the resulting differential equation. Compare this result with the exact result above. What is the origin of the difference?
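A quick numerical cross-check of Q. 8.8.3-9 is easy to set up: one can verify by finite differences that the proposed trial function satisfies the diffusion equation (8.70), and evaluate the density via (8.71). This is only a sanity-check sketch; the function names are illustrative:

```python
import math

def P(l, t):
    """Trial solution P(l, t) = 1 - erf(l / (2*sqrt(t)))."""
    return 1.0 - math.erf(l / (2.0 * math.sqrt(t)))

def diffusion_residual(l, t, h=1e-4):
    """Difference between dP/dt and d2P/dl2, estimated by central differences."""
    dP_dt = (P(l, t + h) - P(l, t - h)) / (2.0 * h)
    d2P_dl2 = (P(l + h, t) - 2.0 * P(l, t) + P(l - h, t)) / h**2
    return dP_dt - d2P_dl2

def rho(t, h=1e-5):
    """Density rho(t) = -dP/dl at l = 0 (forward difference)."""
    return -(P(h, t) - P(0.0, t)) / h
```

The residual sits at the level of the discretization error, and rho(t) reproduces the 1/√(πt) decay obtained analytically by differentiating the error function.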

8.8.4 Random sequential addition

Consider the one-dimensional parking model, in which hard particles of unit size are inserted sequentially at random. The positions of the centers of the particles adsorbed on the line are stored as abscissas x_i, ordered increasingly, where i is an index running from 1 to n(t), and n(t) is the number of adsorbed particles at time t (this array obviously does not correspond to the order of insertion of the particles). The system is bounded on the left by a wall at x = 0 and on the right by a wall at x = L. We also define an array of gaps h_i; each gap is defined as the uncovered part of the line lying either between two consecutive particles or, for the two gaps at the ends of the system, between a wall and a particle. This array contains n(t) + 1 elements.

♣ Q. 8.8.4-1 Express the density of the line covered by particles in terms of n(t) and L.


♣ Q. 8.8.4-2 For a gap h, justify the fact that Max(h − 1, 0) is the portion available for the insertion of a new particle. Deduce the fraction Φ of the line available for inserting a new particle, first in terms of the h_i, then in terms of the x_i.

♣ Q. 8.8.4-3 Each new particle inserted inside a gap of length h creates two gaps: one of length h′ and another of length h′′. Give the relation between h, h′ and h′′. (Hint: draw a sketch if needed.)

To simulate this system numerically, one algorithm, denoted A, consists in choosing the position of a particle at random and testing whether this new particle overlaps one or more previously adsorbed particles. If the particle overlaps no adsorbed particle, it is placed; otherwise it is rejected and a new trial is made. At each trial, the time is incremented by ∆t. At long times, the probability of inserting a new particle becomes very small and successful events become rare.
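Algorithm A translates almost line for line into code; here is a minimal sketch for unit rods on a line of length L (names and the seed handling are illustrative):

```python
import random

def rsa_algorithm_A(L, n_trials, seed=0):
    """Random sequential addition of unit hard rods on [0, L].
    Each trial picks a random center and rejects it on overlap;
    time advances by dt = 1/L per trial."""
    rng = random.Random(seed)
    centers = []
    t = 0.0
    for _ in range(n_trials):
        t += 1.0 / L
        x = rng.uniform(0.5, L - 0.5)             # rod must fit between the walls
        if all(abs(x - y) >= 1.0 for y in centers):
            centers.append(x)                     # accepted: no overlap
    return sorted(centers), t
```

At long times almost every trial is rejected, which is precisely what motivates the rejection-free algorithm B described next.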

♣ Q. 8.8.4-4 By considering the evolution of the system at short times, for simulation boxes of different sizes, justify the choice ∆t = 1/L so that the result given by a simulation is independent of the box size.

We propose to use another algorithm, denoted B, defined as follows: at the start of the simulation, a number η is drawn randomly and uniformly between 0.5 and L − 0.5. A particle is placed at η. The initial gap is replaced by two new gaps. The time is incremented by 1/L and one then considers the gaps of this new generation.

If a gap is smaller than 1, it is considered blocked and is no longer examined during the rest of the simulation. For each gap of length h greater than 1, a number is drawn uniformly between the lower bound of the gap increased by 0.5 and the upper bound decreased by 0.5; two new gaps are thereby created for the next generation, and the time is incremented by ∆τ = 1/L for each inserted particle. Once this operation has been carried out on all gaps of the same generation, the same procedure is repeated on the gaps of the next generation. The simulation stops when no gap of length greater than 1 remains.
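This generation-by-generation procedure can be sketched as follows (a unit rod centered at x covers [x − 0.5, x + 0.5], so a gap (lo, hi) admits centers in [lo + 0.5, hi − 0.5]; names and data layout are illustrative):

```python
import random

def rsa_algorithm_B(L, seed=0):
    """Rejection-free RSA of unit rods: each generation inserts one rod
    into every gap longer than 1, splitting it into two new gaps; time
    advances by 1/L per inserted particle."""
    rng = random.Random(seed)
    gaps = [(0.0, L)]                    # uncovered intervals (lower, upper)
    centers, t = [], 0.0
    while gaps:
        next_generation = []
        for lo, hi in gaps:
            if hi - lo <= 1.0:
                continue                 # blocked gap: no unit rod can fit
            x = rng.uniform(lo + 0.5, hi - 0.5)   # admissible rod centers
            centers.append(x)
            t += 1.0 / L
            next_generation += [(lo, x - 0.5), (x + 0.5, hi)]
        gaps = next_generation
    return sorted(centers), t
```

The simulation ends when only blocked gaps (length below 1) remain, i.e., when the line is jammed.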

♣ Q. 8.8.4-5 Justify the fact that, for a system of finite size, the simulation time is then finite.

♣ Q. 8.8.4-6 Give an upper bound on the simulation time.

♣ Q. 8.8.4-7 Is the maximum density reached higher than, lower than, or identical to that obtained with algorithm A? Justify your answer.


♣ Q. 8.8.4-8 How does the density evolve with simulation time?

♣ Q. 8.8.4-9 Establish the relation between ∆t (algorithm A), Φ and ∆τ (algorithm B).

♣ Q. 8.8.4-10 Propose a perfect algorithm by defining, for algorithm B, a new ∆τ₁ in terms of a quantity accessible in the simulation, to be determined.


Chapter 9

Slow kinetics, aging.

Contents

9.1 Introduction . . . 163
9.2 Formalism . . . 164
    9.2.1 Two-time correlation and response functions . . . 164
    9.2.2 Aging and scaling laws . . . 165
    9.2.3 Interrupted aging . . . 166
    9.2.4 "Violation" of the fluctuation-dissipation theorem . . . 166
9.3 Adsorption-desorption model . . . 167
    9.3.1 Introduction . . . 167
    9.3.2 Definition . . . 168
    9.3.3 Kinetics . . . 169
    9.3.4 Equilibrium linear response . . . 170
    9.3.5 Hard rods age! . . . 171
    9.3.6 Algorithm . . . 171
9.4 Kovacs effect . . . 173
9.5 Conclusion . . . 174

9.1 Introduction

We saw in the previous chapter that statistical physics can describe phenomena that lie outside thermodynamic equilibrium. The physical situations considered there drove the system to evolve toward an out-of-equilibrium state. Another class of physical situations, frequently encountered experimentally, concerns systems whose return to an equilibrium state is very slow.


When the relaxation time is much larger than the observation time, the quantities that can be measured in an experiment (or a numerical simulation) depend more or less strongly on the state of the system. This means that if two experiments are performed on the same system, a few days or even a few years apart, the measured quantities differ. For a long time, such a situation hindered the understanding of the behavior of these systems.

Over the last decade or so, thanks to the accumulation of experimental results (spin glasses, aging of plastic polymers, granular media, ...), to the theoretical study of domain-growth phenomena (simulations and exact results), and to the dynamics of spin models (with quenched disorder), it has become apparent that a certain form of universality of out-of-equilibrium properties exists.

Scientific activity on this out-of-equilibrium statistical physics is currently very intense. A large fraction of the proposed laws, coming from theoretical developments, remain conjectures to this day. Numerical simulation, which makes it possible to determine the precise evolution of these systems, is therefore an important contribution toward a better knowledge of their physics.

9.2 Formalism

9.2.1 Two-time correlation and response functions.

When the relaxation time of the system exceeds the observation time, the system cannot be considered at equilibrium at the start of the experiment, nor, often, while it is being carried out.

It is therefore necessary to consider not only the time τ elapsed during the experiment (as in an equilibrium situation), but also to keep the memory of the time tw elapsed between the initial preparation of the system and the start of the experiment. Suppose for instance that the system is at equilibrium at time t = 0 at temperature T; it then undergoes a rapid quench to a lower temperature T₁. We denote by tw the time elapsed between this quench and the start of the experiment. For a quantity A, a function of the microscopic variables of the system (magnetizations for a spin model, velocities and positions of the particles in a liquid, ...), one defines the two-time correlation function C_A(t, tw) as

C_A(t, tw) = ⟨A(t)A(tw)⟩ − ⟨A(t)⟩⟨A(tw)⟩.    (9.1)

The brackets denote an average over a large number of identically prepared systems¹.

¹For example, one considers a system equilibrated "rapidly" at high temperature. One then applies a quench at time t = 0 and lets the system relax during a time tw, after which the measurement of the correlation function begins.
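In a simulation, the average in Eq. (9.1) is estimated over a finite ensemble of independent runs. A minimal estimator might look as follows (the data layout, one list of values of A per run, is an assumption of this sketch):

```python
def two_time_correlation(trajectories, t, tw):
    """Estimate C_A(t, tw) = <A(t)A(tw)> - <A(t)><A(tw)> over an ensemble
    of identically prepared runs; trajectories[k][s] is A at time s in run k."""
    n = len(trajectories)
    a_t = [traj[t] for traj in trajectories]
    a_tw = [traj[tw] for traj in trajectories]
    mean_t = sum(a_t) / n
    mean_tw = sum(a_tw) / n
    return sum(x * y for x, y in zip(a_t, a_tw)) / n - mean_t * mean_tw
```

With trajectories stored this way, scanning t at fixed tw gives the two-time curves discussed in the rest of this section.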


Similarly, one can define a response function for the variable A introduced above. Let h be the field thermodynamically conjugate to the variable A (δE = −hA); the two-time impulse response function is the functional derivative of the variable A with respect to the field h:

R_A(t, tw) = ( δ⟨A(t)⟩ / δh(tw) )_{h=0}.    (9.2)

In simulations and in experiments, it is easier to compute the integrated response function, defined as

χ_A(t, tw) = ∫_{tw}^{t} dt′ R_A(t, t′)    (9.3)

In other words, in terms of the impulse response function one can write

⟨A(t)⟩ = ∫_{tw}^{t} ds R_A(t, s) h(s) + O(h²)    (9.4)

To respect causality, the response function satisfies

R_A(t, tw) = 0,  t < tw.    (9.5)

9.2.2 Aging and scaling laws

For any system, at least two time scales characterize the dynamics: a local reorganization time (the mean flipping time of a spin for spin models), which one may call the microscopic time t0, and an equilibration time teq. When the waiting time tw and the observation time τ lie in the following range

t0 << tw << teq (9.6)

t0 << tw + τ << teq, (9.7)

the system is not yet equilibrated and will not be able to reach equilibrium on the time scale of the experiment.

In a large number of experiments, as well as for solvable models (generally in the mean-field approximation), it appears that the correlation function can be written as the sum of two functions,

C_A(t, tw) = C_ST(t − tw) + C_AG( ξ(tw) / ξ(t) )    (9.8)



where ξ(tw) is a non-universal function that depends on the system. This means that, over a short period, the system reacts essentially as if it were already at equilibrium; this regime is described by the function C_ST(τ = t − tw) and does not depend on the waiting time tw, just as in an already equilibrated system.

Take the example of the growth of a ferromagnetic domain in a spin model quenched to a temperature below the critical temperature. The first term of equation (9.8) corresponds to the relaxation of the spins inside a domain. The second term depends on the waiting time and describes the relaxation of the domain walls (ξ(t) being the characteristic domain size at time t).

The term "aging" is justified by the fact that the longer one waits before making a measurement, the more time the system needs to lose the memory of its initial configuration.

The function ξ(tw) is not known in general, which means that one must propose simple trial functions and check whether the different correlation functions C_AG(ξ(tw)/ξ(t)) collapse onto a single master curve.

9.2.3 Interrupted aging

If the second inequality of equation (9.7) is not satisfied, that is, if the time t becomes much larger than the equilibration time teq while tw remains smaller than it, one finds oneself in the situation known as "interrupted aging".

Beyond short times, the system relaxes like an out-of-equilibrium system, that is, with a non-stationary part of the correlation function. At very long times, the system returns to equilibrium; its correlation function becomes time-translation invariant again and once more satisfies the fluctuation-dissipation theorem.

9.2.4 “Violation” of the fluctuation-dissipation theorem

For a system at equilibrium, the response and correlation functions are both time-translation invariant, R_A(t, tw) = R_A(τ = t − tw) and C_A(t, tw) = C_A(τ), and are related to each other by the fluctuation-dissipation theorem (see Appendix A).

R_A(τ) = −β dC_A(τ)/dτ,  τ ≥ 0
R_A(τ) = 0,  τ < 0    (9.9)

Such a relation cannot be established for a system evolving out of equilibrium². One speaks, somewhat abusively, of a violation of the

²The key point in the proof of the fluctuation-dissipation theorem is the knowledge of the equilibrium probability distribution.


fluctuation-dissipation theorem, to emphasize that the relation established at equilibrium no longer holds out of equilibrium; but it is not a genuine violation, since the hypotheses of the theorem are then not fulfilled. A genuine violation would consist in observing a failure of the theorem under equilibrium conditions, which is never observed.

In view of the analytical results obtained on solvable models and of numerical simulation results, one formally defines a relation between the response function and the associated correlation function,

R_A(t, tw) = −β X(t, tw) ∂C_A(t, tw)/∂tw    (9.10)

where the function X(t, tw) is defined by this relation: it is therefore not known a priori.

When the system is at equilibrium, one has

X(t, tw) = 1.    (9.11)

At short times, the decays of the correlation function and of the response function are given by stationary functions that do not depend on the waiting time; one then has

X(t, tw) = 1,  τ = t − tw ≪ tw.    (9.12)

Beyond this regime, the system ages and the function X differs from 1. In mean-field-type models, the fluctuation-dissipation ratio depends only on the correlation function, X(t, tw) = X(C(t, tw)), at large times. In that case, one may assume that the aging system is characterized by an effective temperature

T_eff(t, tw) = T / X(t, tw)    (9.13)

9.3 Adsorption-desorption model

9.3.1 Introduction

We now illustrate the concepts introduced above on an adsorption-desorption model. This model has several advantages: it is an extension of the parking model seen in the previous chapter, and, at very long times (which may exceed the duration of the numerical simulation), the system returns to an equilibrium state. Consequently, by changing a single control parameter (essentially the chemical potential of the reservoir, which fixes the balance between the adsorption and desorption rates), one


can pass progressively from the observation of equilibrium properties to that of out-of-equilibrium properties.

The dynamics of this system corresponds to a form of grand-canonical Monte Carlo simulation in which only particle exchanges with a reservoir are allowed, and no particle moves within the system are performed. Even though, in the long-time limit, the system converges to the equilibrium state, its relaxation can become extremely slow, and the aging of the system can then be observed.

Finally, this model helps one to understand a large number of experimental results on dense granular media.

9.3.2 Definition

Hard rods are placed at random on a line with a rate constant k⁺. If the new particle does not overlap any previously adsorbed particle, it is accepted; otherwise it is rejected and a new trial is made. In addition, all adsorbed particles are desorbed at random with a rate constant k⁻. When k⁻ = 0, this model reduces to the parking model, whose maximum density is ρ∞ ≃ 0.7476... (for an initially empty line). When k⁻ ≠ 0 but very small, the system converges very slowly toward an equilibrium state.

Noting that the properties of this model depend only on the ratio K = k⁺/k⁻, and after rescaling the unit of time, the evolution of the system can be written as

dρ/dt = Φ(t) − ρ/K,    (9.14)

where Φ(t) is the probability of inserting a particle at time t. At equilibrium, one has

Φ(ρ_eq) = ρ_eq / K    (9.15)

The insertion probability is given by

Φ(ρ) = (1 − ρ) exp( −ρ / (1 − ρ) ).    (9.16)

Inserting equation (9.16) into equation (9.15), one obtains the expression of the equilibrium density,

ρ_eq = L_W(K) / (1 + L_W(K)),    (9.17)

where L_W(x), the Lambert-W function, is the solution of the equation x = y e^y. In the limit of small K, ρ_eq ∼ K/(1 + K), while for very large values of K,

ρ_eq ∼ 1 − 1/ln(K).    (9.18)
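Equation (9.17) is easy to evaluate numerically. A short sketch with a hand-rolled Newton solver for the Lambert-W function (to avoid any library dependence; scipy.special.lambertw would do the same job):

```python
import math

def lambert_w(x, tol=1e-12):
    """Solve y * exp(y) = x for y >= 0 (valid for x >= 0) by Newton iteration."""
    y = math.log(1.0 + x)                         # decent starting guess
    for _ in range(100):
        ey = math.exp(y)
        y_next = y - (y * ey - x) / (ey * (1.0 + y))
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    return y

def rho_eq(K):
    """Equilibrium density of the adsorption-desorption model, Eq. (9.17)."""
    w = lambert_w(K)
    return w / (1.0 + w)
```

One checks the limits quoted in the text: rho_eq(K) ≈ K/(1 + K) for small K, and a slow 1 − 1/ln(K) approach to full coverage at large K.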


Figure 9.1 – Time evolution of the density in the adsorption-desorption model. Schematically, the kinetics of the process divides into three regimes at high density: as 1/t, then as 1/ln(t), and finally exponentially, as exp(−Γt).

The equilibrium density ρ_eq tends to 1 when K → ∞. Note that there is a discontinuity between the limit K → ∞ and the case K = ∞, since the maximum densities are 1 and 0.7476..., respectively.

9.3.3 Kinetics

Unlike the parking model, the kinetics of the adsorption-desorption model cannot be obtained analytically. It is nevertheless possible to analyze the dynamics qualitatively by dividing the kinetics into several regimes. In what follows we consider the case where desorption is very weak, 1/K ≪ 1.

1. Until the density comes close to the saturation density of the parking model, desorption can be neglected, and before a time t ∼ 5 the density grows like that of the parking model, that is, as 1/t.

2. When the time interval between two adsorptions becomes comparable to the characteristic time of an adsorption, the density increases only when,


Figure 9.2 – Integrated response function χ(τ) at equilibrium, normalized by its equilibrium value, versus the correlation function C(τ), also normalized, for K = 300 (the waiting time is tw = 3000 > teq).

following a desorption that frees a large enough space, two new particles are adsorbed. In this regime, one can show that the density growth is essentially of order 1/ln(t) (see figure 9.1).

3. At very long times, the system returns to equilibrium; the number of desorptions balances the number of adsorptions, and the approach to the equilibrium density is exponential, as exp(−Γt).

9.3.4 Equilibrium linear response

In the limit where the waiting time is much larger than the time of return to equilibrium, one must recover the fluctuation-dissipation theorem.

At equilibrium, the adsorption-desorption model corresponds to a system of hard rods in a grand-canonical ensemble with chemical potential given by βμ = ln(K).

The integrated response function is easily computed:

χ_eq = ρ_eq(1 − ρ_eq)².    (9.19)


Similarly, one can compute the value of the correlation function at equilibrium,

C_eq = ρ_eq(1 − ρ_eq)².    (9.20)

The fluctuation-dissipation theorem gives

χ(τ) = C(0) − C(τ).    (9.21)

For hard objects, the temperature is not an essential parameter, which explains why the fluctuation-dissipation theorem is slightly modified. Figure 9.2 shows the plot of χ(τ)/χ_eq versus C(τ)/C_eq. The diagonal corresponds to the value given by the fluctuation-dissipation theorem.

9.3.5 Hard rods age!

We are interested in the non-equilibrium properties of this model in the 1/ln(t) kinetic regime, that is, in a time domain where thermodynamic equilibrium has not yet been reached. The interesting quantity of this model is the (normalized) density autocorrelation function, defined as

C(t, tw) = 〈ρ(t)ρ(tw)〉 − 〈ρ(t)〉〈ρ(tw)〉 (9.22)

where the brackets denote an average over a set of independent simulations.

When τ and tw are large enough, but smaller than the time of return to equilibrium τ_eq, aging is described by a scaling function. It appears empirically here that this function describes simple aging; one has indeed

C(t, tw) = f(tw/t) (9.23)

9.3.6 Algorithm

The two-time integrated response function is defined as

χ(t, tw) = δρ(t) / δ ln(K(tw)).    (9.24)

In a simulation, computing the integrated response function a priori requires adding a field from time tw onward. Such a procedure suffers from the fact that a weak field must be used to stay within linear response; but if the field is too small, the response function (obtained by subtracting the zero-field simulation results) becomes extremely noisy. The first method used consisted in running three simulations: one at zero field, a second at positive field, and a third at negative field. By a linear combination of the results, it is possible to cancel the


nonlinear terms quadratic in the field, but cubic and higher-order terms remain. The method we present here is very recent (2007); it was proposed by L. Berthier Berthier [2007] and allows one to run a single zero-field simulation.

The basic idea of the method is to express the average in terms of the probabilities of observing the system at time t,

⟨ρ(t)⟩ = (1/N) Σ_{k=1}^{N} ρ(t) P_k(tw → t)    (9.25)

where P_k(tw → t) is the probability that the k-th trajectory (or simulation) goes from tw to t, and k is an index running over the N independent trajectories.

Owing to the Markovian character of the dynamics, the probability P_k(tw → t) can be expressed as the product of the transition probabilities of the successive configurations taking the system from tw to t,

P_k(tw → t) = Π_{t′=tw}^{t−1} W_{C_{t′}^k → C_{t′+1}^k}    (9.26)

where W_{C_{t′}^k → C_{t′+1}^k} is the transition probability from the configuration C_{t′}^k of the k-th trajectory at time t′ to the configuration C_{t′+1}^k of the k-th trajectory at time t′ + 1. For a standard Monte Carlo dynamics, this transition probability reads

W_{C_t^k → C_{t+1}^k} = δ_{C_{t+1}^k, C_t′} Π(C_t^k → C_t′) + δ_{C_{t+1}^k, C_t^k} (1 − Π(C_t^k → C_t′))    (9.27)

where Π(C_t^k → C_t′) is the acceptance probability of the move from the configuration C_t^k at time t to the configuration C_t′ at time t + 1. The second term on the right-hand side of equation (9.27) accounts for the fact that, if the configuration C_t′ is rejected, the trajectory keeps the configuration of the previous time with the complementary probability.

The integrated susceptibility reads

χ(t, tw) = ∂ρ(t) / ∂h(tw)    (9.28)

where, in the case of the parking model, h = ln(K).

Using the trajectory probabilities, the susceptibility can be expressed as

χ(t, tw) = (1/N) Σ_{k=1}^{N} ρ(t) ∂P_k(tw → t)/∂h    (9.29)


If one now adds an infinitesimal field, the transition probabilities are modified, and taking the logarithmic derivative of the trajectory probability gives

∂P_k(tw → t)/∂h = P_k(tw → t) Σ_{t′=tw}^{t−1} ∂ ln(W_{C_{t′}^k → C_{t′+1}^k}) / ∂h    (9.30)

Thus the integrated response function can be computed as the average

χ(t, tw) = ⟨ρ(t) H(tw → t)⟩    (9.31)

where the average is taken over unperturbed trajectories, and the function H_k(tw → t) is given by

H_k(tw → t) = Σ_{t′} ∂ ln(W_{C_{t′}^k → C_{t′+1}^k}) / ∂h    (9.32)

The two-time correlation function can also easily be computed on the same unperturbed trajectories. To estimate the fluctuation-dissipation ratio, one uses the relation

∂χ(t, tw)/∂tw = −(X(t, tw)/T) ∂C(t, tw)/∂tw    (9.33)

By plotting, in a parametric diagram, the integrated susceptibility versus the two-time correlation function at fixed t and for different values of tw, one can compute the function X(t, tw)/T as the local slope of this plot. If this function is piecewise constant, X(t, tw) is interpreted as an effective temperature. Let me stress that the parametric plot must be made at constant t and for different values of tw; this is a consequence of equation (9.33): the differentiation is with respect to tw, not t. The first studies were very often done incorrectly, and substantial corrections appeared when the calculations were redone with the method described above.

9.4 Kovacs effect

For systems whose relaxation times are extremely long, the response to a succession of perturbations applied to the system often leads to behavior very different from what is observed for a system relaxing quickly toward equilibrium. Here too there seems to be a great universality in the response behavior of such systems. The Kovacs effect is an example, which we illustrate on the parking model. Historically, this phenomenon was discovered in the sixties and consists in applying to polymers


Figure 9.3 – Time evolution of the excess volume of the adsorption-desorption model, 1/ρ(t) − 1/ρ_eq(K = 500), as a function of t − tw. In the top curve K is changed from K = 5000 to K = 500 at tw = 240, in the middle curve K is changed from K = 2000 to K = 500 at tw = 169, and in the bottom curve K goes from 1000 to 500 at tw = 139.

a rapid quench followed by a waiting time, then a rapid heating to an intermediate temperature. When the volume reached by the system at the moment of reheating is equal to the volume the system would have at equilibrium, one first observes an increase of the volume, followed by a decrease toward the asymptotic equilibrium volume. This out-of-equilibrium phenomenon is in fact very universal, and it can be observed in the adsorption-desorption model. In figure 9.3, one observes that the maximum of the volume is larger the deeper the "quench" of the system. This phenomenon has also been observed in simulations of fragile glasses such as orthoterphenyl.

9.5 Conclusion

The study of out-of-equilibrium phenomena is still largely in its infancy compared with that of systems at equilibrium. For systems more


complex than those described above, for which there exists a whole hierarchy of time scales, one must consider an aging sequence of the type

C_A(tw + τ, tw) = C_ST(τ) + Σ_i C_AG,i( ξ_i(tw) / ξ_i(tw + τ) )    (9.34)

The identification of the sequence then becomes more and more delicate, but it reflects the intrinsic complexity of the phenomenon. This is probably the case for experimental spin glasses. In recent years, several protocols have been proposed, applicable both experimentally and numerically, in which the system is perturbed several times without being allowed to return to equilibrium. The diversity of the observed behaviors makes it possible to progressively understand the structure of these systems through the study of their dynamical properties.


Appendix A

Reference models

After the considerable development of statistical physics over the last century, several Hamiltonians have become model systems, since they describe many phenomena on the basis of simple hypotheses. The purpose of this appendix is to give a (non-exhaustive but fairly broad) list of these models, together with a brief description of their essential properties.

A.1 Lattice models

A.1.1 XY model

The XY model is a lattice spin model in which the spins are two-dimensional vectors. The lattice supporting these spins can be of any dimension. The Hamiltonian of this system reads

H = −J Σ_{⟨i,j⟩} S_i · S_j    (A.1)

The notation ⟨i, j⟩ denotes a double sum in which i runs over the sites and j over the nearest neighbors of i. S_i · S_j is the scalar product of the two vectors S_i and S_j. The critical dimension of this model is 4, and the Mermin-Wagner theorem Mermin and Wagner [1966] forbids, in two dimensions, a finite-temperature transition associated with the order parameter, namely the magnetization. In fact, a finite-temperature phase transition does occur, associated with topological defects, which undergo a bound-state/dissociated-state transition. (Concerning topological defects in condensed matter, one may still consult the somewhat old but remarkably complete review by Mermin [1979].) This transition was first described by Kosterlitz and Thouless [1973].

In two dimensions, this transition is not marked by any discontinuity in the derivatives of the thermodynamic potential. In simulations, its signature is found in the quantity called the helicity modulus.
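As a simple illustration, the energy (A.1) of a two-dimensional XY configuration can be evaluated directly; the snippet below is a minimal sketch (the function name, lattice size, and the choice of periodic boundaries via `np.roll` are ours):

```python
import numpy as np

def xy_energy(theta, J=1.0):
    """Energy of the 2D XY model, eq. (A.1), with periodic boundary
    conditions: S_i . S_j = cos(theta_i - theta_j) for nearest neighbours."""
    return -J * (np.cos(theta - np.roll(theta, 1, axis=0)).sum() +
                 np.cos(theta - np.roll(theta, 1, axis=1)).sum())

L = 8
aligned = np.zeros((L, L))      # fully ordered configuration
print(xy_energy(aligned))       # ground state energy: -2 * J * L**2 = -128.0
```

Any disordered configuration has a higher energy, since each bond contributes at best -J.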


In three dimensions, the transition becomes more conventional again, with critical exponents characteristic of a continuous transition.

A.1.2 Heisenberg model

The Heisenberg model is a lattice model in which the spins are three-dimensional vectors. Formally, the Hamiltonian is given by the equation

H = -J\sum_{\langle i,j\rangle} \mathbf{S}_i\cdot\mathbf{S}_j \qquad (A.2)

where S_i is a three-dimensional vector, the other notations being identical to those defined in the previous paragraph. The lower critical dimension of this model is two, which means that the critical temperature is zero: as soon as the temperature differs from zero, the system loses its magnetization. Unlike the XY model, the system has no topological defects, since the vectors have access to a third dimension Mermin [1979]. In three dimensions, the system undergoes a continuous transition between a paramagnetic phase and a ferromagnetic phase at finite temperature.

A.1.3 O(n) model

The generalization of the preceding models is straightforward, since one can consider spins described by a vector of arbitrary dimension n. Besides the fact that physical realizations with n = 4 exist, the generalization to arbitrary n is of theoretical interest: as n tends to infinity, the model converges to the spherical model, which can generally be solved analytically and which possesses a nontrivial phase diagram. Moreover, there exist analytical methods for working around the spherical model (1/n expansions) which, by extrapolation, give interesting predictions at fixed n.

A.2 Off-lattice models

A.2.1 Introduction

In these lectures we have used two simple-liquid models, namely the hard-sphere model and the Lennard-Jones model. The former accounts exclusively for geometric packing constraints, and its phase diagram accordingly exhibits a liquid-solid transition (for dimension three or higher). The Lennard-Jones model, which includes a long-range attractive potential (representing the van der Waals forces), has a richer phase diagram: there is a liquid-gas transition with an associated critical point, and a liquid-gas coexistence curve at


low temperature. In the region where the system is dense, one again finds a liquid-solid transition line (first order for a three-dimensional system) ending at a triple point at low temperature.

A.2.2 Stockmayer model

The Lennard-Jones model is a good model as long as the molecules or atoms considered carry no charge or permanent dipole. This situation is obviously too restrictive to describe the solutions that make up our everyday environment. In particular, water, which essentially consists of H2O molecules (neglecting ionic dissociation for the moment), has a strong dipole moment. At a very elementary level of description, the Stockmayer Hamiltonian is the Lennard-Jones Hamiltonian supplemented by a dipolar interaction term:

H = \sum_{i=1}^{N}\frac{\mathbf{p}_i^2}{2m} + \frac{1}{2}\sum_{i\neq j}^{N}\left\{\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12}-\left(\frac{\sigma}{r_{ij}}\right)^{6}\right] + \frac{1}{r_{ij}^{3}}\Bigl(\boldsymbol{\mu}_i\cdot\boldsymbol{\mu}_j - 3\,(\boldsymbol{\mu}_i\cdot\hat{\mathbf{r}}_{ij})(\boldsymbol{\mu}_j\cdot\hat{\mathbf{r}}_{ij})\Bigr)\right\} \qquad (A.3)
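The angular dependence of the dipolar term in (A.3) is easy to check numerically; the helper below is an illustrative sketch (the function name and the choice of dipoles along z are ours):

```python
import numpy as np

def dipole_dipole(mu_i, mu_j, r_vec):
    """Dipolar part of eq. (A.3): (mu_i.mu_j - 3(mu_i.r^)(mu_j.r^)) / r^3,
    where r^ is the unit vector joining the two dipoles."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return (mu_i @ mu_j - 3.0 * (mu_i @ rhat) * (mu_j @ rhat)) / r**3

mu = np.array([0.0, 0.0, 1.0])
print(dipole_dipole(mu, mu, np.array([2.0, 0.0, 0.0])))  # side by side: +0.125
print(dipole_dipole(mu, mu, np.array([0.0, 0.0, 2.0])))  # head to tail: -0.25
```

Parallel dipoles placed side by side repel (+mu^2/r^3), while the head-to-tail arrangement attracts (-2 mu^2/r^3), as expected.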

A.2.3 Kob-Andersen model

Supercooled liquids have the essential characteristic of displaying a spectacular increase of the relaxation time as the temperature decreases. This increase reaches 15 orders of magnitude, up to a point where, failing to crystallize, the system freezes and, for temperatures below the so-called glass transition temperature, finds itself in an out-of-equilibrium (amorphous) state. This transition temperature is not unique for a given system, since it depends in part on the quench rate, and part of the usual features associated with phase transitions is absent. Since this situation concerns a very large number of systems, it is tempting to think that simple liquids could display this kind of phenomenology. However, in a three-dimensional space, pure simple liquids (hard spheres, Lennard-Jones) always undergo a liquid-solid transition, and it is very difficult to observe supercooling of these liquids, even when the quench rate is reduced as much as possible (in simulations it nevertheless remains large compared with experimental protocols). One of the best-known simple-liquid models exhibiting a substantial supercooled regime (with crystallization avoided) is the Kob-Andersen model Kob and Andersen [1994, 1995a,b]. This model is a mixture of two species of Lennard-Jones particles with the interaction potential

v_{\alpha\beta}(r) = 4\epsilon_{\alpha\beta}\left[\left(\frac{\sigma_{\alpha\beta}}{r}\right)^{12}-\left(\frac{\sigma_{\alpha\beta}}{r}\right)^{6}\right] \qquad (A.4)


The two species are denoted A and B, and the mixture contains 80% of A and 20% of B. The interaction parameters are εAA = 1, εAB = 1.5 and εBB = 0.5, and the sizes are given by σAA = 1, σAB = 0.8 and σBB = 0.88.
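These parameters translate directly into code; the sketch below (the dictionary layout and function name are ours) evaluates eq. (A.4) for the Kob-Andersen mixture:

```python
# Kob-Andersen binary Lennard-Jones parameters (80% A, 20% B)
EPS = {('A', 'A'): 1.0, ('A', 'B'): 1.5, ('B', 'B'): 0.5}
SIG = {('A', 'A'): 1.0, ('A', 'B'): 0.8, ('B', 'B'): 0.88}

def v(r, a, b):
    """Pair potential v_ab(r) of eq. (A.4); species order is irrelevant."""
    key = (a, b) if (a, b) in EPS else (b, a)
    eps, sig = EPS[key], SIG[key]
    return 4.0 * eps * ((sig / r)**12 - (sig / r)**6)

# the minimum of v_ab lies at r = 2**(1/6) * sigma_ab, with depth -eps_ab
print(v(2**(1/6) * 0.8, 'A', 'B'))   # -1.5
```

The strong A-B attraction (epsilon_AB = 1.5) combined with the small sigma_AB is what frustrates crystallization in this mixture.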


Appendix B

Linear response theory

Consider a system at equilibrium described by the Hamiltonian \mathcal{H}_0, and suppose that at time t = 0 this system is perturbed by an external action described by the Hamiltonian \mathcal{H}',

\mathcal{H}' = -A(\mathbf{r}^N)\,F(t) \qquad (B.1)

where F(t) is a perturbing force that depends only on time and A(\mathbf{r}^N) is the variable conjugate to the force F. We assume that this force decays as t \to \infty in such a way that the system returns to equilibrium. It would be possible to consider a space-dependent force, but for simplicity we restrict ourselves here to a uniform one.

The evolution of the system can be described by the Liouville equation

\frac{\partial f^{(N)}(\mathbf{r}^N,\mathbf{p}^N,t)}{\partial t} = -i\mathcal{L} f^{(N)}(\mathbf{r}^N,\mathbf{p}^N,t) \qquad (B.2)
= \{\mathcal{H}_0+\mathcal{H}',\; f^{(N)}(\mathbf{r}^N,\mathbf{p}^N,t)\} \qquad (B.3)
= -i\mathcal{L}_0 f^{(N)}(\mathbf{r}^N,\mathbf{p}^N,t) - \{A,\; f^{(N)}(\mathbf{r}^N,\mathbf{p}^N,t)\}\,F(t) \qquad (B.4)

where \mathcal{L}_0 denotes the Liouville operator associated with \mathcal{H}_0,

\mathcal{L}_0\,\cdot\, = i\{\mathcal{H}_0,\,\cdot\,\}. \qquad (B.5)

Since the system was initially at equilibrium, we have

f^{(N)}(\mathbf{r}^N,\mathbf{p}^N,0) = C\exp\bigl(-\beta\mathcal{H}_0(\mathbf{r}^N,\mathbf{p}^N)\bigr), \qquad (B.6)

where C is a normalization constant. Since the external field is assumed to be weak, we perform a perturbative expansion of equation (B.4), keeping only the first non-vanishing term. We write

f^{(N)}(\mathbf{r}^N,\mathbf{p}^N,t) = f^{(N)}_0(\mathbf{r}^N,\mathbf{p}^N) + f^{(N)}_1(\mathbf{r}^N,\mathbf{p}^N,t). \qquad (B.7)

To lowest order one then obtains:

\frac{\partial f^{(N)}_1(\mathbf{r}^N,\mathbf{p}^N,t)}{\partial t} = -i\mathcal{L}_0 f^{(N)}_1(\mathbf{r}^N,\mathbf{p}^N,t) - \{A(\mathbf{r}^N),\; f^{(N)}_0(\mathbf{r}^N,\mathbf{p}^N)\}\,F(t). \qquad (B.8)


Equation (B.8) is solved formally with the initial condition given by equation (B.6). This leads to

f^{(N)}_1(\mathbf{r}^N,\mathbf{p}^N,t) = -\int_{-\infty}^{t} \exp(-i(t-s)\mathcal{L}_0)\,\{A,\; f^{(N)}_0\}\,F(s)\,ds. \qquad (B.9)

Thus the variable \langle\Delta B(t)\rangle = \langle B(t)\rangle - \langle B(-\infty)\rangle evolves as

\langle\Delta B(t)\rangle = \int\!\!\int d\mathbf{r}^N d\mathbf{p}^N\,\bigl(f^{(N)}(\mathbf{r}^N,\mathbf{p}^N,t) - f^{(N)}_0(\mathbf{r}^N,\mathbf{p}^N)\bigr)\,B(\mathbf{r}^N). \qquad (B.10)

To lowest order, the difference between the Liouville distributions is given by equation (B.9), and equation (B.10) becomes

\langle\Delta B(t)\rangle = -\int\!\!\int d\mathbf{r}^N d\mathbf{p}^N \int_{-\infty}^{t} \exp(-i(t-s)\mathcal{L}_0)\,\{A,\; f^{(N)}_0\}\,B(\mathbf{r}^N)\,F(s)\,ds \qquad (B.11)
= -\int\!\!\int d\mathbf{r}^N d\mathbf{p}^N \int_{-\infty}^{t} \{A,\; f^{(N)}_0\}\,\exp(i(t-s)\mathcal{L}_0)\,B(\mathbf{r}^N)\,F(s)\,ds \qquad (B.12)

using the hermiticity of the Liouville operator. Computing the Poisson bracket, one finds that

\{A,\; f^{(N)}_0\} = \sum_{i=1}^{N}\left(\frac{\partial A}{\partial\mathbf{r}_i}\frac{\partial f^{(N)}_0}{\partial\mathbf{p}_i} - \frac{\partial A}{\partial\mathbf{p}_i}\frac{\partial f^{(N)}_0}{\partial\mathbf{r}_i}\right) \qquad (B.13)
= -\beta\sum_{i=1}^{N}\left(\frac{\partial A}{\partial\mathbf{r}_i}\frac{\partial\mathcal{H}_0}{\partial\mathbf{p}_i} - \frac{\partial A}{\partial\mathbf{p}_i}\frac{\partial\mathcal{H}_0}{\partial\mathbf{r}_i}\right) f^{(N)}_0 \qquad (B.14)
= -\beta\, i\mathcal{L}_0 A\; f^{(N)}_0 \qquad (B.15)
= -\beta\,\frac{dA(0)}{dt}\, f^{(N)}_0 \qquad (B.16)

Inserting equation (B.16) into equation (B.12), we obtain

\langle\Delta B(t)\rangle = \beta\int\!\!\int d\mathbf{r}^N d\mathbf{p}^N\, f^{(N)}_0 \int_{-\infty}^{t} \frac{dA(0)}{dt}\,\exp(-i(t-s)\mathcal{L}_0)\,B(\mathbf{r}^N)\,F(s)\,ds \qquad (B.17)

Using the fact that

B(\mathbf{r}^N(t)) = \exp(it\mathcal{L}_0)\,B(\mathbf{r}^N(0)) \qquad (B.18)

equation (B.17) becomes

\langle\Delta B(t)\rangle = \beta\int_{-\infty}^{t} ds\,\left\langle\frac{dA(0)}{dt}\,B(t-s)\right\rangle F(s) \qquad (B.19)


The linear response function of B with respect to F is defined by

\langle\Delta B(t)\rangle = \int_{-\infty}^{\infty} ds\,\chi(t,s)\,F(s) + O(F^2) \qquad (B.20)

Given equation (B.19), the following properties hold:

1. Identifying equations (B.19) and (B.20) yields the fluctuation-dissipation theorem (FDT),

\chi(t) = \begin{cases} -\beta\,\dfrac{d}{dt}\langle A(0)B(t)\rangle & t > 0 \\ 0 & t < 0 \end{cases} \qquad (B.21)

2. A system cannot respond to a perturbation before this perturbation has occurred. This property is called causality:

\chi(t,s) = 0, \qquad t-s \le 0 \qquad (B.22)

3. The response function (near equilibrium) is invariant under time translation:

\chi(t,s) = \chi(t-s) \qquad (B.23)

When A = B, we denote the autocorrelation function by

C_A(t) = \langle A(0)A(t)\rangle, \qquad (B.24)

and we have

\chi(t) = \begin{cases} -\beta\,\dfrac{dC_A(t)}{dt} & t > 0 \\ 0 & t < 0 \end{cases} \qquad (B.25)

Defining the integrated response

R(t) = \int_0^t \chi(s)\,ds, \qquad (B.26)

the fluctuation-dissipation theorem then reads

R(t) = \begin{cases} \beta\,\bigl(C_A(0)-C_A(t)\bigr) & t > 0 \\ 0 & t < 0 \end{cases} \qquad (B.27)
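Relations (B.25)-(B.27) can be checked numerically once an autocorrelation function is given; the sketch below assumes, purely for illustration, an exponential relaxation C_A(t) = C_A(0) exp(-t/tau) (the values of beta, tau and C_A(0) are arbitrary choices):

```python
import numpy as np

beta, tau, C0 = 2.0, 1.5, 0.7
t = np.linspace(0.0, 4.0, 100001)
C = C0 * np.exp(-t / tau)            # autocorrelation C_A(t)
chi = -beta * np.gradient(C, t)      # response function, eq. (B.25)

# integrated response R(t) = int_0^t chi(s) ds, eq. (B.26), by trapezoids
dt = t[1] - t[0]
R = np.concatenate(([0.0], np.cumsum(0.5 * (chi[1:] + chi[:-1]) * dt)))

# fluctuation-dissipation theorem in integrated form, eq. (B.27)
print(np.max(np.abs(R - beta * (C0 - C))))   # small (discretization error only)
```

The same comparison, with C_A measured in a simulation and R measured under a small applied field, is precisely how FDT violations are detected in aging systems.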


Appendix C

Ewald sum for the Coulomb potential

Ewald sum

Consider a set of N charged particles such that the total charge of the system vanishes, \sum_i q_i = 0, in a cubic box of side L with periodic boundary conditions. To a particle i located at \mathbf{r}_i in the reference box corresponds an infinite set of images located in the copies of this initial box, with coordinates \mathbf{r}_i + \mathbf{n}L, where \mathbf{n} is a vector whose components (n_x, n_y, n_z) are integers. The total energy of the system reads

U_{\mathrm{coul}} = \frac{1}{2}\sum_{i=1}^{N} q_i\,\phi(\mathbf{r}_i), \qquad (C.1)

where \phi(\mathbf{r}_i) is the electrostatic potential at site i,

\phi(\mathbf{r}_i) = \sum_{j,\mathbf{n}}^{*}\frac{q_j}{|\mathbf{r}_{ij}+\mathbf{n}L|}. \qquad (C.2)

The star indicates that the sum runs over all boxes and over all particles, except j = i when \mathbf{n} = 0.

For short-range potentials (decaying faster than 1/r^3), the interaction between two particles i and j is, to a very good approximation, computed as the interaction between particle i and the image of j closest to i (minimum image convention). This means that this image is not necessarily in the initial box. For long-range potentials this approximation is not sufficient, because the interaction energy between two particles decays too slowly for one to restrict oneself to the first image. For Coulomb potentials it is even necessary to take into account the whole set of boxes, as well as the nature of the boundary conditions at infinity.


To overcome this difficulty, the Ewald summation method consists in splitting (C.1) into several parts: a short-range part, obtained by screening each particle with a charge distribution (often taken Gaussian) of the same intensity but of opposite sign to that of the particle (its contribution can then be computed with the minimum image convention), and a long-range part, due to the introduction of a charge distribution symmetric to the previous one, whose contribution is computed in the reciprocal space of the cubic lattice. The convergence of the sum in reciprocal space depends on how diffuse the charge distribution is.

If one introduces a Gaussian charge distribution,

\rho(\mathbf{r}) = \sum_{j=1}^{N}\sum_{\mathbf{n}} q_j\left(\frac{\alpha}{\pi}\right)^{3/2}\exp\bigl[-\alpha\,|\mathbf{r}-(\mathbf{r}_j+\mathbf{n}L)|^2\bigr], \qquad (C.3)

the short-range part is given by

U_1 = \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\sum_{\mathbf{n}} q_i q_j\,\frac{\mathrm{erfc}\bigl(\sqrt{\alpha}\,|\mathbf{r}_{ij}+\mathbf{n}L|\bigr)}{|\mathbf{r}_{ij}+\mathbf{n}L|}, \qquad (C.4)

where the sum over \mathbf{n} is truncated at the first image and erfc is the complementary error function, and the long-range part by

U_2 = \frac{1}{2\pi L^3}\sum_{i=1}^{N}\sum_{j=1}^{N}\sum_{\mathbf{k}\neq 0} q_i q_j\,\frac{4\pi^2}{k^2}\,\exp\left(-\frac{k^2}{4\alpha}\right)\cos(\mathbf{k}\cdot\mathbf{r}_{ij}). \qquad (C.5)

From this last expression one must subtract a so-called self-interaction term, due to the interaction of each charge distribution q_j with the point charge located at the center of the Gaussian. The term to subtract is equal to

\left(\frac{\alpha}{\pi}\right)^{1/2}\sum_{i=1}^{N} q_i^2 = -U_3. \qquad (C.6)

The Coulomb interaction energy therefore becomes:

U_{\mathrm{coul}} = U_1 + U_2 + U_3. \qquad (C.7)

For charges (or spins, or particles) placed on the sites of a lattice, the sums in reciprocal space can be carried out once and for all at the beginning of each simulation. A very large number of wave vectors can therefore be taken into account, which ensures very good accuracy in the computation of the Coulomb energy.

For continuous systems, the reciprocal-space computation must be redone every time the particles are moved, so the algorithm becomes penalizing for large systems. More efficient algorithms then exist, such as those based on a multipole expansion.
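As a consistency check of the decomposition (C.4)-(C.7), one can compute the electrostatic potential at an ion site of a small periodic crystal. The script below is only an illustrative sketch (the cell setup, the value of alpha and the cutoffs are our own choices); it recovers the Madelung constant of the NaCl structure, about 1.7476, for a cubic cell of side L = 2 with unit nearest-neighbour distance:

```python
import itertools
import math

import numpy as np

# rock-salt (NaCl) cell of side L = 2: four +1 and four -1 charges
L = 2.0
pos = np.array([(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1),
                (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)], dtype=float)
q = np.array([1, 1, 1, 1, -1, -1, -1, -1], dtype=float)
alpha = 1.0                        # width parameter of the screening Gaussians

# electrostatic potential phi at the ion sitting at the origin
phi = 0.0
for n in itertools.product(range(-2, 3), repeat=3):      # real-space part (C.4)
    for j in range(len(q)):
        r = np.linalg.norm(pos[j] + L * np.array(n))
        if r > 1e-12:                                    # skip j = i when n = 0
            phi += q[j] * math.erfc(math.sqrt(alpha) * r) / r
for m in itertools.product(range(-4, 5), repeat=3):      # reciprocal part (C.5)
    k = (2.0 * math.pi / L) * np.array(m)
    k2 = float(k @ k)
    if k2 > 1e-12:                                       # k = 0 term omitted
        phi += (4.0 * math.pi / L**3) * math.exp(-k2 / (4.0 * alpha)) / k2 \
               * float(np.sum(q * np.cos(pos @ k)))
phi -= 2.0 * math.sqrt(alpha / math.pi) * q[0]           # self term (C.6)

print(-phi)    # Madelung constant of NaCl, about 1.74756
```

Both sums converge quickly here: alpha = 1 keeps the real-space sum within a couple of image cells, while the Gaussian factor kills the reciprocal sum beyond a few wave vectors.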


Remarks

The sum in expression (C.5) is performed only for \mathbf{k} \neq 0. This results from the conditional convergence of the Ewald sums and has important physical consequences. In a periodic Coulomb system, the form of the energy indeed depends on the nature of the boundary conditions at infinity, and neglecting the contribution to the energy corresponding to \mathbf{k} = 0 amounts to considering that the system is embedded in a medium of infinite dielectric constant (i.e., a good conductor). This is the convention used in simulations of ionic systems. In the opposite case, i.e., if the system lies in a dielectric medium, the fluctuations of the dipole moment of the system create surface charges responsible for the existence of a depolarizing field. The latter adds a term to the energy, which is precisely the contribution corresponding to \mathbf{k} = 0.


Appendix D

Hard rod model

D.1 Equilibrium properties

Consider the system made of N impenetrable rods of identical length σ, constrained to move on a line. The Hamiltonian of this system reads

H = \sum_{i=1}^{N}\frac{m}{2}\left(\frac{dx_i}{dt}\right)^2 + \frac{1}{2}\sum_{i\neq j} v(x_i-x_j), \qquad (D.1)

where

v(x_i-x_j) = \begin{cases} +\infty & |x_i-x_j| < \sigma \\ 0 & |x_i-x_j| \ge \sigma. \end{cases} \qquad (D.2)

As for all classical systems, the computation of the partition function factorizes into two parts: the first corresponds to the kinetic part and is easily computed (Gaussian integral); the second is the configuration integral, which for this one-dimensional system can also be computed exactly. One has

Q_N(L,N) = \int\ldots\int \prod_i dx_i\,\exp\Bigl(-\frac{\beta}{2}\sum_{i\neq j} v(x_i-x_j)\Bigr). \qquad (D.3)

Since the potential is either zero or infinite, the configuration integral does not depend on temperature. Since two particles cannot overlap, one can rewrite the integral Q_N by ordering the particles (the N! equivalent orderings give a combinatorial prefactor):

Q_N(L,N) = N!\int_0^{L-N\sigma} dx_1 \int_{x_1+\sigma}^{L-(N-1)\sigma} dx_2 \ldots \int_{x_{N-1}+\sigma}^{L-\sigma} dx_N. \qquad (D.4)

Successive integration of equation (D.4) gives

Q_N(L,N) = (L-N\sigma)^N. \qquad (D.5)


The canonical partition function of the system is thus determined, and the equation of state is given by the relation

\beta P = \frac{\partial\ln Q_N(L,N)}{\partial L}, \qquad (D.6)

which gives

\frac{\beta P}{\rho} = \frac{1}{1-\sigma\rho}, \qquad (D.7)

where \rho = N/L. The excess chemical potential is given by the relation

\exp(-\beta\mu_{\mathrm{ex}}) = (1-\rho)\exp\left(-\frac{\rho}{1-\rho}\right). \qquad (D.8)

The correlation functions can also be computed analytically.
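Equations (D.7) and (D.8) are straightforward to evaluate; the helpers below are a small sketch (the function names are ours; densities are expressed in units of sigma) that also checks the low-density limit beta*P -> rho:

```python
import math

def tonks_pressure(rho, sigma=1.0):
    """Dimensionless equation of state (D.7): beta * P = rho / (1 - sigma*rho)."""
    return rho / (1.0 - sigma * rho)

def mu_excess_factor(rho):
    """exp(-beta * mu_ex) of eq. (D.8), with the density in units of sigma."""
    return (1.0 - rho) * math.exp(-rho / (1.0 - rho))

print(tonks_pressure(0.5))     # 1.0: beta*P/rho = 2 at half packing
print(tonks_pressure(1e-4))    # ~1e-4: ideal-gas limit beta*P -> rho
```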

D.2 Parking model

Random sequential addition is a stochastic process in which hard particles are added sequentially, at random positions, in a space of dimension D, subject to the conditions that no new particle may overlap a previously inserted one and that, once inserted, the particles are immobile.

The one-dimensional version of the model is known as the car-parking problem and was introduced by the Hungarian mathematician A. Renyi in 1963 Renyi [1963].

Hard rods of length σ are dropped randomly and sequentially onto a line, subject to the above conditions. If ρ(t) denotes the density of particles on the line at time t, the kinetics of the process is governed by the following master equation:

\frac{\partial\rho(t)}{\partial t} = k_a\,\Phi(t), \qquad (D.9)

where k_a is a rate constant per unit length (which one can set equal to unity by rescaling the unit of time) and Φ(t), the insertion probability at time t, is also the fraction of the line available for the insertion of a new particle at time t. The particle diameter is taken as the unit of length.

It is useful to introduce the gap distribution function G(h, t), defined such that G(h, t)dh is the density of voids of length between h and h + dh at time t. For a gap of length h, the void available for inserting a new particle is h − 1, and consequently the fraction of


the available line Φ(t) is simply the sum of (h − 1) over the set of available gaps, i.e. G(h, t):

\Phi(t) = \int_{1}^{\infty} dh\,(h-1)\,G(h,t). \qquad (D.10)

Since each gap corresponds to one particle, the particle density ρ(t) is expressed as

\rho(t) = \int_0^{\infty} dh\,G(h,t), \qquad (D.11)

while the uncovered fraction of the line is related to G(h, t) by the relation

1-\rho(t) = \int_0^{\infty} dh\,h\,G(h,t). \qquad (D.12)

The two preceding equations represent sum rules for the gap distribution function. During the process, this function G(h, t) evolves as

\frac{\partial G(h,t)}{\partial t} = -H(h-1)\,(h-1)\,G(h,t) + 2\int_{h+1}^{\infty} dh'\,G(h',t), \qquad (D.13)

where H(x) is the Heaviside function. The first term on the right-hand side of equation (D.13) (loss term) corresponds to the insertion of a particle inside a gap of length h (for h ≥ 1), while the second term (gain term) corresponds to the insertion of a particle in a gap of length h' > h + 1. The factor 2 accounts for the two ways of creating a gap of length h from a larger gap of length h'. Note that the time evolution of the gap distribution function G(h, t) is entirely determined by the gaps larger than h.

We now have a complete set of equations, which results from the property that inserting a particle in a given gap has no effect on the other gaps (screening property). The equations can be solved using the following ansatz, for h > 1,

G(h,t) = F(t)\exp\bigl(-(h-1)t\bigr), \qquad (D.14)

which leads to

F(t) = t^2\exp\left(-2\int_0^t du\,\frac{1-e^{-u}}{u}\right). \qquad (D.15)

Then, integrating equation (D.13) with the solution for G(h, t) for h > 1, one obtains G(h, t) for 0 < h < 1,

G(h,t) = 2\int_0^t du\,\exp(-uh)\,\frac{F(u)}{u}. \qquad (D.16)


The three equations (D.10), (D.11) and (D.12) of course lead to the same result for the density ρ(t),

\rho(t) = \int_0^t du\,\exp\left(-2\int_0^u dv\,\frac{1-e^{-v}}{v}\right). \qquad (D.17)

This result was first obtained by Renyi. A non-obvious property of the model is that the process reaches a jamming limit (as t → ∞) at which the density saturates at the value ρ∞σ = 0.7476...; this is significantly smaller than the maximum geometric packing density (ρ∞ = 1), which is obtained when the particles can diffuse along the line. Moreover, it is easy to see that the jamming limit depends on the initial conditions: here, we started from an empty line. At equilibrium, by contrast, the final state of the system is determined solely by the chemical potential and keeps no memory of the initial state.
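The jamming density quoted above can be recovered by integrating (D.17) numerically, using the asymptotic law (D.18) below to correct for the finite upper bound; the sketch that follows (grid parameters are arbitrary choices) returns ρ∞ ≈ 0.7476:

```python
import numpy as np

t = np.linspace(0.0, 30.0, 30001)
dt = t[1] - t[0]

# inner integrand (1 - e^-v)/v of eq. (D.17), with its limit 1 at v = 0
f = np.ones_like(t)
f[1:] = -np.expm1(-t[1:]) / t[1:]

g = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dt)))
phi_t = np.exp(-2.0 * g)                     # insertion probability Phi(t)

# rho(30) by trapezoids, plus the asymptotic tail e^{-2 gamma}/t
rho_inf = np.sum(0.5 * (phi_t[1:] + phi_t[:-1]) * dt) \
          + np.exp(-2.0 * np.euler_gamma) / t[-1]
print(rho_inf)     # Renyi's jamming density, about 0.74760
```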

The long-time kinetics can be obtained from equation (D.17):

\rho_{\infty}-\rho(t) \simeq \frac{e^{-2\gamma}}{t}, \qquad (D.18)

where γ is the Euler constant, which shows that the approach to saturation is algebraic.

The structure of the configurations generated by this irreversible process has a number of unusual properties. At saturation, the gap distribution has a logarithmic divergence (integrable, of course) at contact, h → 0,

G(h,\infty) \simeq -e^{-2\gamma}\ln(h). \qquad (D.19)

Moreover, the pair correlations between particles are extremely weak at large distance,

g(r)-1 \propto \frac{1}{\Gamma(r)}\left(\frac{2}{\ln r}\right)^{r}, \qquad (D.20)

where Γ(x) is the Gamma function: the decay of g(r) is said to be super-exponential and thus differs from the equilibrium situation, where an exponential decay is obtained.


Bibliography

B. J. Alder and T. E. Wainwright. Phase transition for a hard sphere system. The Journal of Chemical Physics, 27(5):1208–1209, 1957. doi: 10.1063/1.1743957. URL http://link.aip.org/link/?JCP/27/1208/1.

Per Bak, Chao Tang, and Kurt Wiesenfeld. Self-organized criticality: An explanation of the 1/f noise. Phys. Rev. Lett., 59(4):381–384, Jul 1987. doi: 10.1103/PhysRevLett.59.381.

Ludovic Berthier. Efficient measurement of linear susceptibilities in molecular simulations: Application to aging supercooled liquids. Physical Review Letters, 98(22):220601, 2007. doi: 10.1103/PhysRevLett.98.220601. URL http://link.aps.org/abstract/PRL/v98/e220601.

K. Binder. Applications of Monte Carlo methods to statistical physics. Reports on Progress in Physics, 60(5):487–559, 1997. URL http://stacks.iop.org/0034-4885/60/487.

D. Chandler. Introduction to Modern Statistical Mechanics. Oxford University Press, New York, USA, 1987.

C. Dress and W. Krauth. Cluster algorithm for hard spheres and related systems. Journal of Physics A: Mathematical and General, 28(23):L597–L601, December 1995.

Alan M. Ferrenberg and Robert H. Swendsen. Optimized Monte Carlo data analysis. Phys. Rev. Lett., 63(12):1195–1198, 1989. URL http://link.aps.org/abstract/PRL/v63/p1195.

Alan M. Ferrenberg and Robert H. Swendsen. New Monte Carlo technique for studying phase transitions. Phys. Rev. Lett., 61:2635, 1988.

Alan M. Ferrenberg, D. P. Landau, and Robert H. Swendsen. Statistical errors in histogram reweighting. Phys. Rev. E, 51(5):5092–5100, 1995. URL http://link.aps.org/abstract/PRE/v51/p5092.

D. Frenkel and B. Smit. Understanding Molecular Simulation: From Algorithms to Applications. Academic Press, London, UK, 1996.

Nigel Goldenfeld. Lectures on Phase Transitions and the Renormalization Group. Addison-Wesley, New York, USA, 1992.

J. P. Hansen and I. R. McDonald. Theory of Simple Liquids. Academic Press, London, UK, 1986.

J. R. Heringa and H. W. J. Blote. Geometric cluster Monte Carlo simulation. Physical Review E, 57(5):4976–4978, May 1998a.

J. R. Heringa and H. W. J. Blote. Geometric symmetries and cluster simulations. Physica A, 254(1-2):156–163, May 1998b.

Haye Hinrichsen. Non-equilibrium critical phenomena and phase transitions into absorbing states. Advances in Physics, 49(7):815–958, 2000. URL http://www.informaworld.com/10.1080/00018730050198152.

W. Kob and H. C. Andersen. Scaling behavior in the beta-relaxation regime of a supercooled Lennard-Jones mixture. Physical Review Letters, 73(10):1376–1379, September 1994.

W. Kob and H. C. Andersen. Testing mode-coupling theory for a supercooled binary Lennard-Jones mixture: The van Hove correlation function. Phys. Rev. E, 51(5):4626–4641, May 1995a.

W. Kob and H. C. Andersen. Testing mode-coupling theory for a supercooled binary Lennard-Jones mixture. II. Intermediate scattering function and dynamic susceptibility. Phys. Rev. E, 52(4):4134–4153, October 1995b.

J. M. Kosterlitz and D. J. Thouless. Metastability and phase transitions in two-dimensional systems. J. Phys. C, 6:1181, 1973.

Werner Krauth. Statistical Mechanics: Algorithms and Computations. Oxford University Press, London, UK, 2006.

David P. Landau and Kurt Binder. A Guide to Monte Carlo Simulations in Statistical Physics. Cambridge University Press, New York, USA, 2000.

N. D. Mermin. The topological theory of defects in ordered media. Rev. Mod. Phys., 51:591, 1979.

N. D. Mermin and H. Wagner. Absence of ferromagnetism and antiferromagnetism in one or two dimensional isotropic Heisenberg models. Phys. Rev. Lett., 17:1133, 1966.

P. Poulain, F. Calvo, R. Antoine, M. Broyer, and Ph. Dugourd. Performances of Wang-Landau algorithms for continuous systems. Phys. Rev. E, 73(5):056704, 2006. URL http://link.aps.org/abstract/PRE/v73/e056704.

A. Renyi. Sel. Trans. Math. Stat. Prob., 4:205, 1963.

F. Ritort and P. Sollich. Glassy dynamics of kinetically constrained models. Advances in Physics, 52(4):219–342, 2003. URL http://www.informaworld.com/10.1080/0001873031000093582.

Robert H. Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58(2):86–88, 1987. URL http://link.aps.org/abstract/PRL/v58/p86.

J. Talbot, G. Tarjus, P. R. Van Tassel, and P. Viot. From car parking to protein adsorption: an overview of sequential adsorption processes. Colloids and Surfaces A: Physicochemical and Engineering Aspects, 165(1-3):287–324, May 2000. URL http://www.sciencedirect.com/science/article/B6TFR-3YF9X8C-M/2/a91f10d760e37c8def4d51f17c548c3e.

S. H. Tsai, H. K. Lee, and D. P. Landau. Molecular and spin dynamics simulations using modern integration methods. American Journal of Physics, 73(7):615–624, July 2005.

Donald L. Turcotte. Self-organized criticality. Reports on Progress in Physics, 62(10):1377–1429, 1999. URL http://stacks.iop.org/0034-4885/62/1377.

Fugao Wang and D. P. Landau. Efficient, multiple-range random walk algorithm to calculate the density of states. Phys. Rev. Lett., 86(10):2050–2053, 2001a. URL http://link.aps.org/abstract/PRL/v86/p2050.

Fugao Wang and D. P. Landau. Determining the density of states for classical statistical models: A random walk algorithm to produce a flat histogram. Phys. Rev. E, 64(5):056101, 2001b. URL http://link.aps.org/abstract/PRE/v64/e056101.

A. P. Young. Spin Glasses and Random Fields. World Scientific, Singapore, 1998.

Chenggang Zhou and R. N. Bhatt. Understanding and improving the Wang-Landau algorithm. Phys. Rev. E, 72(2):025701, 2005. URL http://link.aps.org/abstract/PRE/v72/e025701.


Contents

1 Statistical mechanics and numerical simulation 31.1 Brief History of simulation . . . . . . . . . . . . . . . . . . . . . . 31.2 Ensemble averages . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.2.1 Microcanonical ensemble . . . . . . . . . . . . . . . . . . . 51.2.2 Canonical ensemble . . . . . . . . . . . . . . . . . . . . . . 61.2.3 Grand canonical ensemble . . . . . . . . . . . . . . . . . . 71.2.4 Isothermal-isobaric ensemble . . . . . . . . . . . . . . . . . 8

1.3 Model systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 81.3.2 Simple liquids . . . . . . . . . . . . . . . . . . . . . . . . . 91.3.3 Ising model and lattice gas. Equivalence . . . . . . . . . . 10

1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

1.5.1 ANNNI Model . . . . . . . . . . . . . . . . . . . . . . . . 151.6 Blume-Capel model . . . . . . . . . . . . . . . . . . . . . . . . . . 16

1.6.1 Potts model . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2 Monte Carlo method 212.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212.2 Uniform and weighted sampling . . . . . . . . . . . . . . . . . . . 222.3 Markov chain for sampling an equilibrium system . . . . . . . . . 232.4 Metropolis algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 252.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

2.5.1 Ising model . . . . . . . . . . . . . . . . . . . . . . . . . . 262.5.2 Simple liquids . . . . . . . . . . . . . . . . . . . . . . . . . 29

2.6 Random number generators . . . . . . . . . . . . . . . . . . . . . 312.6.1 Generating non uniform random numbers . . . . . . . . . 33

2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382.7.1 Inverse transformation . . . . . . . . . . . . . . . . . . . . 382.7.2 Detailed balance . . . . . . . . . . . . . . . . . . . . . . . 392.7.3 Acceptance probability . . . . . . . . . . . . . . . . . . . . 392.7.4 Random number generator . . . . . . . . . . . . . . . . . . 41

197

Page 198: Numerical Simulation in Statistical Physics

CONTENTS

3 Molecular Dynamics 433.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433.2 Equations of motion . . . . . . . . . . . . . . . . . . . . . . . . . 453.3 Discretization. Verlet algorithm . . . . . . . . . . . . . . . . . . . 453.4 Symplectic algorithms . . . . . . . . . . . . . . . . . . . . . . . . 47

3.4.1 Liouville formalism . . . . . . . . . . . . . . . . . . . . . . 473.4.2 Discretization of the Liouville equation . . . . . . . . . . . 50

3.5 Hard sphere model . . . . . . . . . . . . . . . . . . . . . . . . . . 513.6 Molecular Dynamics in other ensembles . . . . . . . . . . . . . . . 53

3.6.1 Andersen algorithm . . . . . . . . . . . . . . . . . . . . . . 543.6.2 Nose-Hoover algorithm . . . . . . . . . . . . . . . . . . . . 54

3.7 Brownian dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 573.7.1 Different timescales . . . . . . . . . . . . . . . . . . . . . . 573.7.2 Smoluchowski equation . . . . . . . . . . . . . . . . . . . . 573.7.3 Langevin equation. Discretization . . . . . . . . . . . . . . 573.7.4 Consequences . . . . . . . . . . . . . . . . . . . . . . . . . 58

3.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

3.9.1 Multi timescale algorithm . . . . . . . . . . . . . . . . . . 59

4 Correlation functions 614.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624.2 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

4.2.1 Radial distribution function . . . . . . . . . . . . . . . . . 624.2.2 Structure factor . . . . . . . . . . . . . . . . . . . . . . . . 66

4.3 Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 674.3.2 Time correlation functions . . . . . . . . . . . . . . . . . . 674.3.3 Computation of the time correlation function . . . . . . . 684.3.4 Linear response theory: results and transport coefficients . 69

4.4 Space-time correlation functions . . . . . . . . . . . . . . . . . . . 704.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 704.4.2 Van Hove function . . . . . . . . . . . . . . . . . . . . . . 704.4.3 Intermediate scattering function . . . . . . . . . . . . . . . 724.4.4 Dynamic structure factor . . . . . . . . . . . . . . . . . . . 72

4.5 Dynamic heterogeneities . . . . . . . . . . . . . . . . . . . . . . . 724.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 724.5.2 4-point correlation function . . . . . . . . . . . . . . . . . 734.5.3 4-point susceptibility and dynamic correlation length . . . 73

4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

4.7.1 “The structure factor in all its states!” . . . . . . . . . . . 74
4.7.2 Van Hove function and intermediate scattering function . . 75


5 Phase transitions 79
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

5.2.1 Critical exponents . . . . . . . . . . . . . . . . . . . . . . 81
5.2.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . . . . 82

5.3 Finite size scaling analysis . . . . . . . . . . . . . . . . . . . . . . 87
5.3.1 Specific heat . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.3.2 Other quantities . . . . . . . . . . . . . . . . . . . . . . . . 88

5.4 Critical slowing down . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.5 Cluster algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.6 Reweighting method . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

5.8.1 Finite size scaling for continuous transitions: logarithmic corrections . . . . . . . . . . . . . . . . . . . . . . . . . . 96

5.8.2 Some aspects of the finite size scaling: first-order transition 98

6 Monte Carlo algorithms based on the density of states 101
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 Density of states . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

6.2.1 Definition and physical meaning . . . . . . . . . . . . . . . 102
6.2.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

6.3 Wang-Landau algorithm . . . . . . . . . . . . . . . . . . . . . . . 105
6.4 Thermodynamics recovered! . . . . . . . . . . . . . . . . . . . . . 106
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

6.6.1 Some properties of the Wang-Landau algorithm . . . . . . 1086.6.2 Wang-Landau and statistical temperature algorithms . . . 109

7 Monte Carlo simulation in different ensembles 113
7.1 Isothermal-isobaric ensemble . . . . . . . . . . . . . . . . . . . . . 113

7.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 113
7.1.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

7.2 Grand canonical ensemble . . . . . . . . . . . . . . . . . . . . . . 116
7.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.2.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

7.3 Liquid-gas transition and coexistence curve . . . . . . . . . . . . . 119
7.4 Gibbs ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

7.4.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.4.2 Acceptance rules . . . . . . . . . . . . . . . . . . . . . . . 121

7.5 Monte Carlo method with multiple Markov chains . . . . . . . . . 123
7.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127


8 Out of equilibrium systems 129
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
8.2 Random sequential addition . . . . . . . . . . . . . . . . . . . . . 131

8.2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
8.2.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

8.3 Avalanche model . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

8.4 Inelastic hard sphere model . . . . . . . . . . . . . . . . . . . . . 141
8.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.4.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.4.4 Some properties . . . . . . . . . . . . . . . . . . . . . . . . 143

8.5 Exclusion models . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.5.2 Random walk on a ring . . . . . . . . . . . . . . . . . . . . 146
8.5.3 Model with open boundaries . . . . . . . . . . . . . . . . . 147

8.6 Kinetically constrained models . . . . . . . . . . . . . . . . . . . 149
8.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.6.2 Facilitated spin models . . . . . . . . . . . . . . . . . . . . 152

8.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154

8.8.1 Haff's law . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8.2 Domain growth and kinetically constrained model . . . . . 156
8.8.3 Diffusion-coagulation model . . . . . . . . . . . . . . . . . 158
8.8.4 Random sequential addition . . . . . . . . . . . . . . . . . 159

9 Slow kinetics, aging 163
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.2 Formalism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

9.2.1 Two-time correlation and response functions . . . . . . . . 164
9.2.2 Aging and scaling laws . . . . . . . . . . . . . . . . . . . . 165
9.2.3 Interrupted aging . . . . . . . . . . . . . . . . . . . . . . . 166
9.2.4 “Violation” of the fluctuation-dissipation theorem . . . . . 166

9.3 Adsorption-desorption model . . . . . . . . . . . . . . . . . . . . . 167
9.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 167
9.3.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
9.3.3 Kinetics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
9.3.4 Equilibrium linear response . . . . . . . . . . . . . . . . . 170
9.3.5 Hard rods age! . . . . . . . . . . . . . . . . . . . . . . . . 171


9.3.6 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
9.4 Kovacs effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
9.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

A Reference models 177
A.1 Lattice models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

A.1.1 XY model . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
A.1.2 Heisenberg model . . . . . . . . . . . . . . . . . . . . . . . 178
A.1.3 O(n) model . . . . . . . . . . . . . . . . . . . . . . . . . . 178

A.2 Off-lattice models . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
A.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 178
A.2.2 Stockmayer model . . . . . . . . . . . . . . . . . . . . . . 179
A.2.3 Kob-Andersen model . . . . . . . . . . . . . . . . . . . . . 179

B Linear response theory 181

C Ewald sum for the Coulomb potential 185

D Hard rod model 189
D.1 Equilibrium properties . . . . . . . . . . . . . . . . . . . . . . . . 189
D.2 Parking model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
