non-equilibrium statistical physics (incl. adsorption).pdf

Upload: masse

Post on 11-Oct-2015


  • Contents

    1 APERITIFS 5

    1.1 Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

    1.2 Single-Species Annihilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

    1.3 Two-Species Annihilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

    2 RANDOM WALK/DIFFUSION 15

    2.1 Langevin Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

    2.2 Master Equation for the Probability Distribution . . . . . . . . . . . . . . . . . . . . . . . . . 16

    2.3 Connection to First-Passage Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

    2.4 The Reaction Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

    3 AGGREGATION 27

    3.1 Exact Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

    3.2 Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

    3.3 Aggregation with Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

    3.4 Exchange . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

    3.5 Finite Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

    4 FRAGMENTATION 47

    4.1 Binary Breakup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

    4.2 Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

    4.3 Fragmentation with Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

    4.4 Geometrical Fragmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

    5 ADSORPTION 61

    5.1 k-mer Adsorption in One Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

    5.2 Cooperative adsorption of monomers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

    5.3 Adsorption on a one-dimensional line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

    5.4 Adsorption on higher-dimensional substrates . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

    5.5 Post-Adsorption Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

    5.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

    6 SPIN DYNAMICS 83

    6.1 Glauber Spin-Flip Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

    6.2 Kawasaki spin-exchange dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

    6.3 Cluster dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

    6.4 Extremal dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

    6.5 Voter Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

    6.6 Disordered Spin Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

    6.7 Disordered Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110



7 ANOMALOUS TRANSPORT 113

7.1 The Asymmetric Exclusion Process . . . . . . . . . . . . . . . . . . . . . . . . . 113

7.2 Random Walks in Random Environments . . . . . . . . . . . . . . . . . . . . . 118

7.3 Random Walks in Random Velocity Fields . . . . . . . . . . . . . . . . . . . . . 122

8 HYSTERESIS 125

8.1 Homogeneous Ferromagnets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

8.2 Disordered Ferromagnets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

9 REACTIONS 143

9.1 Single-Species . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

9.2 Two-Species Annihilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

9.3 The Trapping Reaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

9.3.1 Exact Solution in One Dimension . . . . . . . . . . . . . . . . . . . . . . 156

9.3.2 Lifshitz Argument for General Spatial Dimension . . . . . . . . . . . . . 158

10 COARSENING 163

10.1 The Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

10.2 Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

10.3 Non-conservative Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

11 COLLISIONS 173

11.1 Inelastic Gases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

11.2 Lorentz Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

11.3 Traffic Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

11.4 Sticky Gases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196

11.5 Ballistic Annihilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198

12 GROWING NETWORKS 201

12.1 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202

12.2 Structure of the Growing Network . . . . . . . . . . . . . . . . . . . . . . . . . 203

12.3 Global Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207

    13 REFERENCES 211

A MATTERS OF TECHNIQUE 221

A.1 Relation between Laplace Transforms and Real Time Quantities . . . . . . . . 221

A.2 Asymptotic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

A.3 Scaling Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

A.4 Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

A.5 Partial Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

A.6 Extreme Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

A.7 Probability Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226

B Formulas & Distributions 227

B.1 Useful Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227

  • Preface

Statistical physics is an unusual branch of physics because it is not really a well-defined field in a formal sense; rather, statistical physics is a viewpoint: indeed, the most appropriate viewpoint to investigate systems with many degrees of freedom. Part of the appeal of statistical physics is that it can be applied to a disparate range of systems that are in the mainstream of physics, as well as to problems that might appear to be outside physics, such as econophysics, quantitative biology, and social organization phenomena. Many of the basic features of these systems involve fluctuations or explicit time evolution rather than equilibrium distributions. Thus the approaches of non-equilibrium statistical physics are needed to discuss such systems.

While the tools of equilibrium statistical physics are well developed, the statistical description of systems that are out of equilibrium is still relatively primitive. In spite of more than a century of effort in constructing an over-arching approach for non-equilibrium phenomena, there still do not exist canonical formulations, such as the Boltzmann factor or the partition function in equilibrium statistical physics. At the present time, some of the most important theoretical approaches for non-equilibrium systems are either technical, such as deriving hydrodynamics from the Boltzmann equation, or somewhat removed from the underlying phenomena that are being described, such as non-equilibrium thermodynamics.

Because of this disconnect between fundamental theory and applications, our view is that it is more instructive to illustrate non-equilibrium statistical physics by presenting a number of current and paradigmatic examples of systems that are out of equilibrium, and to elucidate, as completely as possible, the range of techniques available to solve these systems. By this approach, we believe that readers can gain general insights more quickly than through formal approaches, and, further, will be well equipped to understand many other topics in non-equilibrium statistical physics. We have attempted to make our treatment as self-contained and user-friendly as possible, so that an interested reader can work through the book without encountering unresolved methodological mysteries or hidden calculational pitfalls. Thus while much of the material is mathematical in nature, we have tried to present it as pedagogically as possible. Our target audience is graduate students with a one-course background in equilibrium statistical physics. Each of the main chapters is intended to be self-contained. We have also made an effort to supplement the chapters with research exercises and open questions, in the hope of stimulating further research.

The specific examples presented in this book are primarily based on irreversible stochastic processes. This branch of statistical physics is in many ways the natural progression of the kinetic theory that was initially used to describe the dynamics of simple gases and fluids. We will discuss the development of basic kinetic approaches to more complex and contemporary systems. Among the large menu of stochastic and irreversible processes, we chose the ones that we consider to be among the most important and most instructive in leading to generic understanding. Our main emphasis is on exact analytical results, but we also spend time developing heuristic and scaling methods. We largely avoid presenting numerical simulation results because these are less definitive and instructive than analytical results. An appealing (at least to us) aspect of these examples is that they are broadly accessible. One needs little background to appreciate the systems being studied and the ideas underlying the methods of solution. Many of these systems naturally suggest new and non-trivial questions that an interested reader can easily pursue.

We begin our exposition with a few aperitifs: an abbreviated qualitative discussion of basic problems and a general hint at the approaches that are available to solve these systems. Chapter 2 provides a basic introduction to diffusion phenomena because of the central role played by diffusion in many non-equilibrium statistical systems. These preliminary chapters serve as an introduction to the rest of the book.

The main body of the book is then devoted to working out specific examples. In the next three chapters, we discuss the fundamental kinetic processes of aggregation, fragmentation, and adsorption (chapters 3–5).



Aggregation is the process by which two clusters irreversibly combine in a mass-conserving manner to form a larger cluster. This classic process very nicely demonstrates the role of conservation laws, the utility of exact solutions, the emergence of scaling in cluster-size distributions, and the power of heuristic derivations. Many of these technical lessons will be applied throughout this book.

We then turn to the complementary process of fragmentation, which involves the repeated breakup of an element into smaller fragments. While this phenomenon again illustrates the utility of exact and scaling solutions, fragmentation also exposes important new concepts, such as multiscaling and the lack of self-averaging, and methods such as traveling waves and velocity selection. All of these are concepts that are used extensively in non-equilibrium statistical physics. We then discuss the phenomenon of irreversible adsorption where, again, the exact solutions of the underlying master equations for the occupancy probability distributions provide a comprehensive picture of the basic phenomena.

The next two chapters (6 & 7) discuss the time evolution of systems that involve the competition between multiple phases. We first treat classical spin systems, in particular, the kinetic Ising model and the voter model. The kinetic Ising model occupies a central role in statistical physics because of its broad applicability to spin systems and many other dynamic critical phenomena. The voter model is perhaps not as well known in the physics literature, but it is an even simpler model of an evolving spin system that is exactly soluble in all dimensions. In chapter 7, we study phase-ordering kinetics. Here the natural descriptions are in terms of continuum differential equations, rather than master equations.

In chapter 8, we discuss collision-driven phenomena. Our aim is to present the Boltzmann equation in the context of explicitly soluble examples. These include traffic models, and aggregation and annihilation processes in which the particles move at constant velocity between collisions. The final chapter (chapter 9) presents several applications to contemporary problems, such as the structure of growing networks and models of self-organized criticality.

In an appendix, we present the fundamental techniques that are used throughout our book. These include various types of integral transforms, generating functions, asymptotic analysis, extreme statistics, and scaling approaches. Each of these methods is explained fully upon its first appearance in the book itself, and the appendix is a brief compendium of these methods that can be used either as a reference or a study guide, depending on one's technical preparation.

  • TENTATIVE PY 896 COURSE OUTLINE: FUNDAMENTAL KINETIC PROCESSES

    Preface:

While equilibrium statistical physics is well developed, the statistical description of non-equilibrium systems is still not mature. In spite of much effort, an over-arching formalism, such as the Boltzmann factor or the partition function, does not yet exist for non-equilibrium phenomena. Because of this disconnect between fundamentals and applications, the approach of this course is to present paradigmatic yet pedagogical examples as a user-friendly way to gain general insights. I will attempt to make this course as self-contained and accessible as possible, so that an interested student can appreciate the main results by carefully working through the lecture notes. While PY 541 and 542 would provide helpful background, a motivated graduate student should be able to enjoy this course without these background classes.

A major focus of the course will be on the fundamental kinetic processes of aggregation, fragmentation, and adsorption. Aggregation is the process by which two clusters irreversibly combine in a mass-conserving manner to form a larger cluster. This classic process nicely demonstrates the role of conservation laws, the utility of exact solutions, the emergence of scaling in cluster-size distributions, and the power of heuristic derivations. Fragmentation is the complementary process in which an object undergoes repeated breakup into smaller fragments. This phenomenon again illustrates the utility of exact and scaling solutions. Finally, in irreversible adsorption, the kinetic approach provides a simple understanding of the final density of adsorbed particles, a quantity that is difficult to obtain by direct means.

Another major portion of the course will be devoted to the time evolution of systems that involve the competition between multiple phases. Examples will include classical spin systems, such as the kinetic Ising model and the voter model. The kinetic Ising model occupies a central role in statistical physics because of its broad applicability to spin systems and many other dynamic critical phenomena. While the voter model is not as well known, it is even simpler and is exactly soluble in all dimensions. We will then discuss a number of fundamental examples associated with kinetic phenomena in phase-ordering problems.

If there is time, I will discuss the structure of growing networks and collision-driven phenomena. For the former, I plan to present the master equation to determine the degree distribution of growing networks and other fundamental geometrical features. For the latter, I'll present the Boltzmann equation in the context of explicitly soluble examples. These include traffic models, and aggregation and annihilation processes in which particles move at constant velocity between collisions.

    Basic Course Information:

The text for the course is based on a book that I am currently writing in collaboration with Eli Ben-Naim and Paul Krapivsky. Relevant sections of the book will be posted on the course website, physics.bu.edu/redner/896.html, periodically during the semester.

As befits an advanced graduate course, the grading requirements will be informal and the details will be announced at the first lecture.

  • Tentative Course Outline

    1. DIFFUSION

A Fundamentals of Random Walks
The probability distribution; central limit theorem; transience and recurrence

B The Diffusion Equation
Basic solution methods; first-passage processes; connection with electrostatics

    2. BASIC TOOLS

A Langevin Equation
B Fokker-Planck Equation
C Master Equation

    3. AGGREGATION

A Theory of the Reaction Rate
B Exact Solutions: reaction rates K_ij = 1, K_ij = ij, K_ij = i + j
C Scaling Theory
D Extensions: aggregation with input, exchange processes, finite systems

    4. FRAGMENTATION

A Examples of Exact Solutions
constant & linear breakup rate, shattering transition

B Scaling Theory
C Applications: steady material input & geometrical fragmentation

    5. ADSORPTION

A Dimer Adsorption in One Dimension: jamming and final coverage
B Adsorption of Sticks on a One-Dimensional Line
C Adsorption-Desorption Processes

    6. KINETICS OF SPIN SYSTEMS

A Ising Spin Systems with Glauber Dynamics
exact results in one dimension, domain size distribution

B Ising Spin Systems with Kawasaki Dynamics
C Disordered Systems
D Voter Model
E Domain Evolution Processes

    7. PHASE ORDERING KINETICS

A Kolmogorov-Avrami Model
B Landau-Ginzburg Equation/Curvature-Driven Growth
C Lifshitz-Slyozov Theory

    8. GROWING NETWORKS

A The Basic Models
B Master Equation and Degree Distribution
C Structural Properties

    9. COLLISION PROCESSES

A The Maxwell-Boltzmann Distribution
B Boltzmann Transport Equation
C Lorentz Gas
D Inelastic Collisions
E Traffic Models
F Annihilation

  • Chapter 1

    APERITIFS

Broadly speaking, non-equilibrium statistical physics describes the time-dependent evolution of many-particle systems. The individual particles are elemental interacting entities which, in some situations, can change in the process of interaction. In the most interesting cases, interactions between particles are strong, and hence the deterministic description of even few-particle systems is beyond the reach of exact theoretical approaches. On the other hand, many-particle systems often admit an analytical statistical description when their number becomes large, and in that sense they are simpler than few-particle systems. This feature has several different names (the law of large numbers, ergodicity, etc.), and it is one of the reasons for the spectacular successes of statistical physics and probability theory.

Non-equilibrium statistical physics is quite different from other branches of physics, such as the fundamental fields of electrodynamics, gravity, and elementary-particle physics that involve a reductionist description of few-particle systems, and applied fields, such as hydrodynamics and elasticity, that are primarily concerned with the consequences of fundamental governing equations. Some of the key and distinguishing features of non-equilibrium statistical physics include:

no basic equations (like the Maxwell equations in electrodynamics or the Navier-Stokes equations in hydrodynamics) from which the rest follows;

a position intermediate between fundamental and applied physics;

    the existence of common underlying techniques and concepts in spite of the wide diversity of the field;

non-equilibrium statistical physics naturally leads to the creation of methods that are quite useful in applications far removed from physics (for example, the Monte Carlo method and simulated annealing).

Our guiding philosophy is that, in the absence of underlying principles or governing equations, non-equilibrium statistical physics should be oriented toward explicit and illustrative examples rather than attempting to develop a theoretical formalism that is still incomplete.

Let's start by looking briefly at the random walk to illustrate a few key ideas and to introduce several useful analysis tools that can be applied to more general problems.

    1.1 Diffusion

For the symmetric diffusion on a line, the probability density

Prob[particle ∈ (x, x + dx)] ≡ P(x, t) dx      (1.1)

satisfies the diffusion equation

∂P/∂t = D ∂²P/∂x² .      (1.2)

As we discuss soon, this equation describes the continuum limit of an unbiased random walk. The diffusion equation must be supplemented by an initial condition that we take to be P(x, 0) = δ(x), corresponding to a walk that starts at the origin.



    Dimensional Analysis

Let's pretend that we don't know how to solve (1.2) and try to understand the behavior of the walker without an explicit solution. What is the mean displacement? There is no bias, so clearly

⟨x⟩ ≡ ∫ x P(x, t) dx = 0 .

The next moment, the mean-square displacement,

⟨x²⟩ ≡ ∫ x² P(x, t) dx ,

is non-trivial. Obviously, it should depend on the diffusion coefficient D and the time t. We now apply dimensional analysis to determine these dependences. If L denotes the unit of length and T denotes the unit of time, then from (1.2) the dimensions of ⟨x²⟩, D, and t are

[⟨x²⟩] = L² ,  [D] = L²/T ,  [t] = T .

The ratio ⟨x²⟩/Dt is dimensionless and thus must be constant, as a dimensionless quantity cannot depend on dimensional quantities. Hence

⟨x²⟩ = C Dt .      (1.3)

Equation (1.3) is one of the central results in non-equilibrium statistical physics, and we derived it using just dimensional analysis! To determine the numerical constant C = 2 in (1.3), one must work a bit harder (e.g., by solving (1.2), or by multiplying Eq. (1.2) by x² and integrating over the spatial coordinate to give d⟨x²⟩/dt = 2D). We shall therefore use the power of dimensional analysis whenever possible.

    Scaling

Let's now apply dimensional analysis to the probability density P(x, t|D); here D is explicitly displayed to remind us that the density does depend on the diffusion coefficient. Since [P] = 1/L, the quantity √(Dt) P(x, t|D) is dimensionless, so it must depend on dimensionless quantities only. From the variables x, t, and D we can form a single dimensionless quantity, x/√(Dt). Therefore the most general dependence of the density on the basic variables that is allowed by dimensional analysis is

P(x, t) = (Dt)^(−1/2) 𝒫(ξ) ,  ξ = x/√(Dt) .      (1.4)

The density depends on a single scaling variable ξ rather than on the two basic variables x and t. This remarkable feature greatly simplifies the analysis of the typical partial differential equations that describe non-equilibrium systems. Equation (1.4) is often referred to as the scaling ansatz. Finding the right scaling ansatz for a physical problem often represents a large step toward a solution. For the diffusion equation (1.2), substituting in the ansatz (1.4) reduces this partial differential equation to the ordinary differential equation

2𝒫″ + ξ𝒫′ + 𝒫 = 0 .

Integrating twice and invoking both symmetry (𝒫′(0) = 0) and normalization, we obtain 𝒫 = (4π)^(−1/2) e^(−ξ²/4), and finally the Gaussian probability distribution

P(x, t) = (4πDt)^(−1/2) exp(−x²/4Dt) .      (1.5)

In this example, the scaling form was rigorously derived from simple dimensional reasoning. In more complicated situations, arguments in favor of scaling are less rigorous, and scaling is usually achieved only in some asymptotic limit. The above example, where scaling applies for all t, is an exception; for the diffusion equation with an initial condition on a finite rather than a point support, scaling holds only in the limit x, t → ∞ with the scaling variable ξ kept finite. Nevertheless, we shall see that, where applicable, scaling provides a significant step toward the understanding of a problem.
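The scaled distribution 𝒫(ξ) = (4π)^(−1/2) e^(−ξ²/4) should be normalized and should reproduce ⟨ξ²⟩ = 2, the scaled counterpart of ⟨x²⟩ = 2Dt. A minimal quadrature sketch (the grid limits and spacing are arbitrary choices) confirms both properties:

```python
import math

# Scaled distribution from the scaling ansatz: P(xi) = (4*pi)^(-1/2) * exp(-xi^2/4).
def scaled_P(xi):
    return math.exp(-xi * xi / 4) / math.sqrt(4 * math.pi)

# Check normalization and the second moment <xi^2> = 2 by a simple
# Riemann/trapezoid sum over a wide interval; the Gaussian tails make
# the truncation error negligible.
a, b, n = -20.0, 20.0, 4000
h = (b - a) / n
norm = sum(scaled_P(a + i * h) for i in range(n + 1)) * h
second = sum((a + i * h) ** 2 * scaled_P(a + i * h) for i in range(n + 1)) * h
print(round(norm, 6), round(second, 6))  # → 1.0 2.0
```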

  • 1.2. SINGLE-SPECIES ANNIHILATION/COALESCENCE 7

    Renormalization

The strategy of the renormalization group method is to understand the behavior on a large scale (here, large time) iteratively in terms of the behavior on smaller scales. For the diffusion equation, we start with the identity

P(x, 2t) = ∫ P(y, t) P(x − y, t) dy ,      (1.6)

which reflects the fact that the random walk is a Markov process. Namely, to reach x at time 2t, the walk first reaches some intermediate point y at time t and then completes the journey from y to x in the remaining time t. (Equation (1.6) is also the basis for the path-integral treatment of diffusion processes, but we will not delve into this subject here.)

The convolution form of Eq. (1.6) calls out for applying the Fourier transform,

P(k, t) = ∫ e^(ikx) P(x, t) dx ,      (1.7)

which recasts (1.6) into the algebraic relation P(k, 2t) = [P(k, t)]². The scaling form (1.4) shows that P(k, t) = 𝒫(λ) with λ = k√(Dt), so the renormalization group equation is

𝒫(√2 λ) = [𝒫(λ)]² .

Taking logarithms and making the definitions z ≡ λ², Q(z) ≡ ln 𝒫(λ), we arrive at Q(2z) = 2Q(z), whose solution is Q(z) = −Cz, or P(k, t) = e^(−Ck²Dt). (The constant C = 1 may be found, e.g., by expanding (1.7) for small k, P(k, t) ≈ 1 − ½k²⟨x²⟩, and recalling that ⟨x²⟩ = 2Dt; hence P(k, t) = e^(−k²Dt).) Performing the inverse Fourier transform we recover (1.5). Thus the Gaussian probability distribution represents an exact solution to a renormalization group equation. Our derivation shows that the renormalization group is ultimately related to scaling.

    1.2 Single-Species Annihilation/Coalescence

In non-equilibrium statistical physics, we study systems that contain a macroscopic number of interacting particles. To understand collective behaviors it is useful to ignore complications resulting from finiteness, i.e., to focus on situations where the number of particles is infinite. Perhaps the simplest interacting infinite-particle systems of this kind are single-species annihilation, where particles diffuse freely and annihilate instantaneously upon contact, and single-species coalescence, where the reactants merge upon contact. These processes have played an important role in the development of non-equilibrium statistical physics, and they provide excellent illustrations of techniques that can be applied to other infinite-particle systems.

The annihilation process is symbolically represented by the reaction scheme

A + A → ∅ ,      (1.8)

while the coalescence reaction is represented by

A + A → A .      (1.9)

    The density n(t) of A particles for both reactions obviously decays with time; the question is how.

    Hydrodynamics

In the hydrodynamic approach, one assumes that the reactants are perfectly mixed at all times. This means that the density at every site is the same and that every particle has the same probability to react at the next instant. In this well-mixed limit, and also assuming the continuum limit, the global particle density n for both annihilation and coalescence decays with time according to the rate equation

dn/dt = −Kn² .      (1.10)


This equation reflects the fact that two particles are needed for a reaction to occur and that the probability for two particles to be at the same location is proportional to the density squared. Here K is the reaction rate that describes the propensity for two diffusing particles to interact; the computation of this rate requires a detailed microscopic treatment (see chapter 3). The rate equation (1.10) is a typical hydrodynamic-like equation, whose solution is

n(t) = n₀/(1 + Kn₀t) ≃ (Kt)^(−1) .      (1.11)

However, simulations show more interesting long-time behavior that depends on the spatial dimension d:

n(t) ∼  t^(−1/2)      d = 1;
        t^(−1) ln t   d = 2;
        t^(−1)        d > 2.      (1.12)

The sudden change at d_c = 2 illustrates the important notion of the critical dimension: above d_c, the rate equation leads to asymptotically correct behavior; below d_c, the rate equation is wrong; at d_c, the rate equation approach is almost correct, being typically in error by a logarithmic correction term.

To obtain a complete theory of the reaction, one might try to write formally exact equations for correlation functions. That is, if ρ(r, t) is the microscopic density, the true dynamical equation for n(t) ≡ ⟨ρ(r, t)⟩ involves the second-order correlators ⟨ρ(r, t)ρ(r′, t)⟩. Then an equation for the second-order correlation functions involves third-order correlators, etc. These equations are hierarchical, and the only way to proceed is to impose some sort of closure scheme in which higher-order correlators are factorized in terms of lower-order correlators. In particular, the hydrodynamic equation (1.10) is recovered if we assume that second-order correlators factorize, that is, ⟨ρ(r, t)ρ(r′, t)⟩ = ⟨ρ(r, t)⟩⟨ρ(r′, t)⟩ = n(t)². Thus Eq. (1.10) is the factorized version of the Boltzmann equation for the annihilation process (1.8). Attempts to describe this reaction scheme more faithfully by higher-order correlators have not been fruitful. Thus the revered kinetic theory approach is helpless for the innocent-looking process (1.8)! Let's try some other approaches.

    Dimensional Analysis

Let's determine the dependence of the rate K on the fundamental parameters of the reaction, i.e., on the diffusion coefficient D of the reactants and the radius R of each particle. From Eq. (1.10), [K] = L^d/T, and the only possible dependence is¹

K = D R^(d−2) .      (1.13)

Using (1.13) in (1.10) and solving this equation yields

n(t) ≃ 1/(R^(d−2) D t) .      (1.14)

We anticipate that the density ought to decay more quickly when the radius of the particles is increased. According to (1.14), this is true only when d > 2. Thus the rate equation could be correct in this regime. Surprisingly, however, the reaction rate is not proportional to the cross-sectional area, R^(d−1), but rather to R^(d−2); this feature stems from the vagaries of diffusive motion. For d = 2, the decay is independent of the size of the particles, already a bit of a surprising result. However, for d < 2, we obtain the obviously wrong result that the density decays more slowly if the particles are larger.

The density is actually independent of R for d < 2. This fact is easy to see for d = 1 because all that matters is the spacing between particles. If we now seek, on dimensional grounds, the density in the R-independent form n(D, t), we find that the only possibility is n ≃ (Dt)^(−d/2), in agreement with the prediction of (1.12) in one dimension. In the context of the reaction rate, this slow decay is equivalent to a reaction rate that decreases with time. We will return to this point in the next chapter.

¹Here we omit a numerical factor of order one; in the future, we shall often ignore such factors without explicit warning.

  • 1.2. SINGLE-SPECIES ANNIHILATION/COALESCENCE 9

    Heuristic Arguments

Dimensional analysis often gives correct dependences but does not really explain why these behaviors are correct. For the annihilation process (1.8), we can understand the one-dimensional asymptotic, n ≃ (Dt)^{−1/2}, in a physical way by using a basic feature (1.3) of random walks: in a time interval (0, t), each particle explores a region of size ℓ ≃ √(Dt), and therefore a typical separation between surviving particles is of order ℓ, from which n ≃ ℓ^{−1} ≃ (Dt)^{−1/2} follows.

Guided by this understanding, let's try to understand (1.12) for all dimensions. First, we slightly modify the process so that particles undergo diffusion on a lattice in d dimensions (the lattice spacing plays the role of the radius). What is the average number of distinct sites ⟨S_N⟩ visited by a random walker after N steps? This question has a well-known and beautiful answer:

    ⟨S_N⟩ ∼ { N^{1/2},    d = 1;
              N / ln N,   d = 2;
              N,          d > 2.                              (1.15)

With a little contemplation, one should be convinced that the density in single-species annihilation scales as the inverse of the average number of sites visited by a random walker; if there were more than one particle in the visited region, it would have been annihilated previously. Thus (1.15) is essentially equivalent to (1.12).
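To see the d = 1 entry of (1.15) concretely, here is a small seeded Monte Carlo sketch (the function name, seed, and trial counts are our own illustrative choices, not from the text): it averages the number of distinct sites visited by a symmetric walk and checks that quadrupling the number of steps roughly doubles the average, which is the signature of √N growth.

```python
import random

def distinct_sites(num_steps, rng):
    """Number of distinct sites visited by a symmetric 1d walk of num_steps steps."""
    x, visited = 0, {0}
    for _ in range(num_steps):
        x += rng.choice((-1, 1))
        visited.add(x)
    return len(visited)

rng = random.Random(12345)
trials = 100
avg_2500 = sum(distinct_sites(2500, rng) for _ in range(trials)) / trials
avg_10000 = sum(distinct_sites(10000, rng) for _ in range(trials)) / trials

# In d = 1 the exact asymptotics is <S_N> ~ sqrt(8N/pi), so quadrupling N
# should roughly double the average, while <S_N>/N keeps shrinking.
print(avg_2500, avg_10000, avg_10000 / avg_2500)
```

The ratio of the two averages is close to 2 rather than 4, in contrast with the ballistic d > 2 behavior ⟨S_N⟩ ∼ N.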

    Exact Solution in One Dimension

The diffusion-controlled annihilation process admits an exact solution in one dimension. This is an exceptional feature: most infinite-particle systems cannot be solved even in one dimension. Moreover, for these solvable cases, we can usually compute only a limited number of quantities. For one-dimensional annihilation, for example, while the density is known exactly, the distribution P(ℓ, t) of distances ℓ between adjacent particles is unknown even in the scaling limit ℓ → ∞ and t → ∞, with λ = ℓ/√(Dt) finite. Although numerical simulations strongly indicate that the interval length distribution approaches the scaling form P(ℓ, t) ≃ (Dt)^{−1/2} 𝒫(λ), nobody yet knows how to compute the scaled length distribution 𝒫(λ).

Exact results for the diffusion-controlled annihilation process will be presented later, when we develop the necessary technical tools. However, to illustrate a simple exact solution, let's consider diffusion-controlled coalescence, A + A → A, which is readily soluble in one dimension because it can be reduced to a two-particle problem. To compute the density it is convenient to define particle labels so that in each collision the left particle disappears and the right particle survives. Then to compute the survival probability of a test particle we may ignore all particles to its left. Such a reduction of the original two-sided problem to a one-sided one is extremely helpful. Furthermore, only the closest particle to the right of the test particle is relevant: the right neighbor can merge with other particles further to the right, but these reactions never affect the fate of the test particle. Thus the system reduces to a soluble two-particle problem.

The interparticle distance between the test particle and its right neighbor undergoes diffusion with diffusivity 2D, because the spacing diffuses at twice the rate of each particle. Consequently, the probability density ρ(ℓ, t) that the test particle is separated from its right neighbor by distance ℓ satisfies the diffusion equation subject to an absorbing boundary condition:

    ∂ρ/∂t = 2D ∂²ρ/∂ℓ²,    ρ(0, t) = 0.                       (1.16)

The solution of (1.16) for an arbitrary initial condition ρ₀(ℓ) is

    ρ(ℓ, t) = 1/√(8πDt) ∫₀^∞ ρ₀(y) [e^{−(ℓ−y)²/8Dt} − e^{−(ℓ+y)²/8Dt}] dy

            = 1/√(2πDt) exp(−ℓ²/8Dt) ∫₀^∞ ρ₀(y) exp(−y²/8Dt) sinh(ℓy/4Dt) dy.   (1.17)

In the first line, the solution is expressed as the superposition of a Gaussian and an image anti-Gaussian that automatically satisfies the absorbing boundary condition. In the long-time limit, sinh(ℓy/4Dt) → ℓy/4Dt, so the integral on the second line tends to (ℓ/4Dt) ∫₀^∞ ρ₀(y) y dy = ℓ/(4Dt n₀), since the mean initial spacing ∫₀^∞ y ρ₀(y) dy equals 1/n₀. Therefore

    ρ(ℓ, t) ≃ [ℓ/(4Dt n₀ √(2πDt))] exp(−ℓ²/8Dt),

    so that the survival probability is

    S(t) = ∫₀^∞ ρ(ℓ, t) dℓ ≃ n₀^{−1} (2πDt)^{−1/2},

and the density n(t) = n₀ S(t) decays as

    n(t) ≃ (2πDt)^{−1/2}    when t → ∞.                       (1.18)

To summarize, the interval length distribution P(ℓ, t) is just the probability density ρ(ℓ, t) conditioned on the survival of the test particle. Hence

    P(ℓ, t) ≃ ρ(ℓ, t)/S(t) ≃ (ℓ/4Dt) exp(−ℓ²/8Dt).

The average interparticle spacing grows as √(Dt), which is equivalent to the particle density decaying as 1/√(Dt).
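The decay (1.18) can be checked by a direct numerical average. For a Poissonian initial condition, the gap to the right neighbor is exponentially distributed, ρ₀(y) = n₀ e^{−n₀y}, and the survival probability of a gap of initial size y with an absorbing origin and diffusivity 2D is erf(y/√(8Dt)). The sketch below (a simple trapezoid-rule quadrature; the cutoff, step size, and parameter values are illustrative choices) averages this over ρ₀ and compares with n₀^{−1}(2πDt)^{−1/2}:

```python
import math

def survival(t, n0=1.0, D=1.0, y_max=40.0, dy=0.001):
    """S(t) = integral of rho0(y) * erf(y/sqrt(8Dt)) over the exponential
    initial gap distribution rho0(y) = n0*exp(-n0*y), by the trapezoid rule."""
    n = int(y_max / dy)
    total = 0.0
    for i in range(n + 1):
        y = i * dy
        w = 0.5 if i in (0, n) else 1.0
        total += w * n0 * math.exp(-n0 * y) * math.erf(y / math.sqrt(8 * D * t))
    return total * dy

t = 1000.0
S_num = survival(t)
S_asym = 1.0 / math.sqrt(2 * math.pi * t)   # Eq. (1.18) with n0 = 1
print(S_num, S_asym)
```

At Dt ≫ 1 the quadrature and the asymptotic formula agree to a fraction of a percent.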

    1.3 Two-Species Annihilation

Consider two diffusing species A and B that are initially distributed at random with equal concentrations nA(0) = nB(0) = n₀. When two particles of opposite species approach within the reaction radius, they immediately annihilate:

    A + B → ∅.                                                (1.19)

For this reaction, the density decreases as

    n(t) ∼ { t^{−d/4},   d ≤ 4;
             t^{−1},     d > 4,                               (1.20)

as t → ∞, so the critical dimension is d_c = 4. This result shows that the hydrodynamic description is wrong even in the most relevant three-dimensional case.

In this striking example, neither a hydrodynamic description (which gives n ∼ t^{−1}) nor dimensional analysis can explain the decay of the density. Here, a simple heuristic argument helps us determine the density decay of Eq. (1.20). To understand why the naive approaches fail, consider a snapshot of a two-dimensional system at some time t ≫ 1 (Fig. 1.1). We see that the system spontaneously organizes into a mosaic of alternating domains. Because of this organization, annihilation can occur only along domain boundaries rather than throughout the system. This screening effect explains why the density is much larger than in the hydrodynamic picture, where particles are assumed to be well mixed.

To turn this picture into a semi-quantitative estimate for the density, note that in a spatial region of linear size ℓ, the initial number of A particles is N_A = n₀ℓ^d ± (n₀ℓ^d)^{1/2}, and similarly for B particles. Here the ± term signifies that the particle number in a finite region is a stochastic variable that typically fluctuates in a range of order (n₀ℓ^d)^{1/2} about the mean value n₀ℓ^d. The typical value of the difference in this d-dimensional region,

    N_A − N_B ≃ ±(n₀ℓ^d)^{1/2},

arises because of initial fluctuations and is not affected by annihilation events. Therefore after the minority species in a given region is eliminated, the local density becomes n ≃ (n₀ℓ^d)^{1/2}/ℓ^d. Because of the diffusive spreading (1.3), the average domain size scales as ℓ ≃ √(Dt), and thus n ≃ √(n₀) (Dt)^{−d/4}. Finally, notice that the density decay cannot be obtained by dimensional analysis alone because there are now at least two independent length scales, the domain size √(Dt) and the interparticle spacing. Additional physical input, here in the form of the domain picture, is needed to obtain n(t).


    Figure 1.1: Snapshot of the particle positions in two-species annihilation in two dimensions.

    Notes

There is a large literature on the topics discussed in this introductory chapter. Among the great many books on random walks we mention [3; 8], which contain numerous further references. Dimensional analysis and scaling are especially popular in hydrodynamics; see e.g. the excellent review in [4] and the classical book by Barenblatt [1], which additionally emphasizes the connection of scaling, especially intermediate asymptotics, and the renormalization group. This latter connection has been further explored by many authors, particularly Goldenfeld and co-workers (see [6]). The kinetics of single-species and two-species annihilation processes were understood in pioneering works of Zeldovich, Ovchinnikov, Burlatskii, Toussaint, Wilczek, Bramson, Lebowitz, and many others; a review of this work is given in [5; 2]. The one-dimensional diffusion-controlled coalescence process is one of the very few examples which can justifiably be called completely solvable; numerous exact solutions of this model (and generalizations thereof), found by ben-Avraham, Doering, and others, are presented in Ref. [2].


Bibliography

[1] G. I. Barenblatt, Scaling, Self-Similarity, and Intermediate Asymptotics (Cambridge University Press, Cambridge, 1996).

[2] D. ben-Avraham and S. Havlin, Diffusion and Reactions in Fractals and Disordered Systems (Cambridge University Press, Cambridge, 2000).

[3] W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 1 (Wiley, New York, 1968).

[4] Hydrodynamic review.

[5] S. Redner and F. Leyvraz, Kinetics and Spatial Organization in Competitive Reactions, in Fractals and Disordered Systems, Vol. II, eds. A. Bunde and S. Havlin (Springer-Verlag, 1993).

[6] N. Goldenfeld, Lectures on Phase Transitions and the Renormalization Group (Addison-Wesley, 1992).

[7] V. Privman (editor), Nonequilibrium Statistical Mechanics in One Dimension (Cambridge University Press, Cambridge, 1997).

[8] S. Redner, A Guide to First-Passage Processes (Cambridge University Press, Cambridge, 2001).

[9] N. G. Van Kampen, Stochastic Processes in Physics and Chemistry, 3rd edition (North-Holland, Amsterdam, 2001).

Chapter 2

    RANDOM WALK/DIFFUSION

Because the random walk and its continuum diffusion limit underlie so many fundamental processes in non-equilibrium statistical physics, we give a brief introduction to this central topic. There are several complementary ways to describe random walks and diffusion, each with its own advantages.

    2.1 Langevin Equation

We begin with the phenomenological Langevin equation, which represents a minimalist description of the stochastic motion of a random walk. We mostly restrict ourselves to one dimension, but the generalization to higher dimensions is straightforward. Random-walk motion arises, for example, when a microscopic bacterium is placed in a fluid. The bacterium is constantly buffeted on a very short time scale by random collisions with fluid molecules. In the Langevin approach the effect of these rapid collisions is represented by an effective, but stochastic, external force ξ(t). On the other hand, if the bacterium had a non-zero velocity in the fluid, there would be a systematic frictional force, −γv, proportional to the velocity, that would bring the bacterium to rest. Under the influence of these two forces, Newton's second law for the bacterium gives the Langevin equation

    m dv/dt = −γv + ξ(t).                                     (2.1)

This equation is very different from the deterministic equations of motion that one normally encounters in mechanics. Because the stochastic force changes so rapidly with time, the actual trajectory of the particle contains too much information. The velocity changes every time there is a collision between the bacterium and a fluid molecule; for a particle of linear dimension 1 μm, there are of the order of 10²⁰ collisions per second, and it is pointless to follow the motion on such a short time scale. For this reason, it is more meaningful physically to study the trajectory that is averaged over longer times. To this end, we need to specify the statistical properties of the random force. Because the force is the result of molecular collisions, it is natural to assume that ξ(t) is a random function of time with zero mean, ⟨ξ(t)⟩ = 0. Here the angle brackets denote the time average. Because of the rapidly fluctuating nature of the force, we also assume that there is no correlation between the force at two different times, so that ⟨ξ(t)ξ(t′)⟩ = 2Dγ² δ(t − t′). As a result, the product of the forces at two different times has a mean value of zero, while the amplitude of the delta-function correlation, set by D, characterizes the average magnitude of the force.

In the limit where the mass of the bacterium is sufficiently small that it may be neglected, we obtain an even simpler equation for the position of the bacterium:

    dx/dt = (1/γ) ξ(t) ≡ η(t).                                (2.2)

In this limit of no inertia (m = 0), the instantaneous velocity is proportional to the instantaneous force. In spite of this strange feature, Eq. (2.2) has a simple interpretation: the change in position is a randomly fluctuating variable. This corresponds to a naive view of what a random walk actually does; at each step the position changes by a random amount.


One of the advantages of the Langevin description is that average values of the moments of the position can be obtained quite simply. Formally integrating Eq. (2.2), we obtain

    x(t) = ∫₀^t η(t′) dt′.                                    (2.3)

Because ⟨η(t)⟩ = 0, we have ⟨x(t)⟩ = 0. However, the mean-square displacement is non-trivial. Formally,

    ⟨x(t)²⟩ = ∫₀^t ∫₀^t ⟨η(t′)η(t″)⟩ dt′ dt″.                 (2.4)

Using ⟨η(t′)η(t″)⟩ = 2D δ(t′ − t″), it immediately follows that ⟨x(t)²⟩ = 2Dt. Thus we recover the classical result that the mean-square displacement grows linearly in time. Furthermore, we can identify D as the diffusion coefficient. The time dependence of the mean-square displacement can also be obtained by dimensional analysis of the Langevin equation. Because the delta function δ(t) has units of 1/t (since the integral ∫δ(t) dt = 1), the statement ⟨η(t)η(t′)⟩ = 2D δ(t − t′) means that η has the units of √(D/t). Thus from Eq. (2.3), x(t) must have units of √(Dt).
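The result ⟨x²⟩ = 2Dt is easy to confirm by integrating Eq. (2.2) numerically. In the simplest Euler discretization, the noise accumulated over a step Δt is a Gaussian of variance 2DΔt. The sketch below is a seeded Monte Carlo estimate with illustrative parameter choices (D, step size, ensemble size), not an exact calculation:

```python
import random

rng = random.Random(2024)
D, dt, steps, walkers = 0.5, 0.01, 100, 2000   # total time t = 1

def final_position(rng):
    """One Euler realization of dx/dt = eta(t) with <eta eta'> = 2D delta."""
    x = 0.0
    sigma = (2 * D * dt) ** 0.5   # std of the integrated noise per step
    for _ in range(steps):
        x += rng.gauss(0.0, sigma)
    return x

msd = sum(final_position(rng) ** 2 for _ in range(walkers)) / walkers
print(msd)   # should be close to 2*D*t = 1.0 up to sampling error
```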

The Langevin equation has the great advantage of simplicity. With a bit more work, it is possible to determine higher moments of the position. Furthermore, there is a standard prescription to determine the underlying and more fundamental probability distribution of positions. This prescription involves writing a continuum Fokker-Planck equation for the evolution of this probability distribution. The Fokker-Planck equation has the form of a convection-diffusion equation, namely, the diffusion equation augmented by a term that accounts for a global bias in the stochastic motion. The coefficients in this Fokker-Planck equation are directly related to the parameters in the original Langevin equation. The Fokker-Planck equation can be naturally viewed as the continuum limit of the master equation, which represents perhaps the most fundamental way to describe a stochastic process. We will not pursue this conventional approach because we are generally more interested in developing direct approaches to writing the master equation.

    2.2 Master Equation for the Probability Distribution

    Discrete space and time

Consider a random walker on a one-dimensional lattice that hops to the right with probability p or to the left with probability q = 1 − p in a single step. Let P(x, N) be the probability that the particle is at site x at the N-th time step. The evolution of this occupation probability is described by the master equation

    P(x, N + 1) = p P(x − 1, N) + q P(x + 1, N).              (2.5)

Because of translational invariance in both space and time, it is expedient to solve this equation by transform techniques. One strategy is to Fourier transform in space and write the generating function (sometimes called the z-transform) in time. Thus multiplying the master equation by z^{N+1} e^{ikx} and summing over all N and x gives

    Σ_{N=0}^∞ Σ_{x=−∞}^∞ z^{N+1} e^{ikx} [P(x, N + 1) = p P(x − 1, N) + q P(x + 1, N)].   (2.6)

We now define the joint transform, the Fourier transform of the generating function,

    P(k, z) = Σ_{N=0}^∞ z^N Σ_{x=−∞}^∞ e^{ikx} P(x, N).

In what follows, either the arguments of a function or the context (when obvious) will be used to distinguish transforms from the function itself. The left-hand side of (2.6) is just the joint transform P(k, z), except that the term P(x, N = 0) is missing. Similarly, the right-hand side involves the generating function at x − 1 and at x + 1, times an extra factor of z. The Fourier transform then converts these shifts of ∓1 in the spatial argument into the phase factors e^{±ik}, respectively. Thus

    P(k, z) − Σ_{x=−∞}^∞ P(x, N = 0) e^{ikx} = z u(k) P(k, z),   (2.7)

where u(k) = p e^{ik} + q e^{−ik} is the Fourier transform of the single-step hopping probability. For the initial condition of a particle at the origin, P(x, N = 0) = δ_{x,0}, the joint transform becomes

    P(k, z) = 1/[1 − z u(k)].                                 (2.8)

We now invert the transform to reconstruct the probability distribution. Expanding P(k, z) in a Taylor series in z, the Fourier transform of the probability distribution is simply P(k, N) = u(k)^N. Then the inverse Fourier transform is

    P(x, N) = (1/2π) ∫_{−π}^{π} e^{−ikx} u(k)^N dk.           (2.9)

To evaluate the integral, we expand u(k)^N = (p e^{ik} + q e^{−ik})^N in a binomial series. This gives

    P(x, N) = (1/2π) ∫_{−π}^{π} e^{−ikx} Σ_{m=0}^{N} \binom{N}{m} p^m e^{ikm} q^{N−m} e^{−ik(N−m)} dk.   (2.10)

The only non-zero term is the one with m = (N + x)/2, for which all the phase factors cancel. This leads to the classical binomial probability distribution of a discrete random walk,

    P(x, N) = N! / [((N+x)/2)! ((N−x)/2)!] p^{(N+x)/2} q^{(N−x)/2}.   (2.11)

Finally, using Stirling's approximation, the binomial approaches the Gaussian probability distribution in the long-time limit,

    P(x, N) ≃ (2πNpq)^{−1/2} e^{−[x − N(p−q)]²/8Npq}.         (2.12)

Here x has the same parity as N; the variance of a single ±1 step is 4pq, and the prefactor contains an extra factor of 2 because only every other lattice site is accessible after N steps.

This result is a particular realization of the central-limit theorem: the asymptotic probability distribution of an N-step random walk is independent of the form of the single-step distribution, as long as the mean displacement ⟨x⟩ and the mean-square displacement ⟨x²⟩ in a single step are finite; we will present the central limit theorem in Sec. 2.3.
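The approach of the binomial distribution (2.11) to its Gaussian limit can be checked directly with exact binomial coefficients. The sketch below (symmetric case p = q = 1/2; the Gaussian used has prefactor (2πNpq)^{−1/2} and exponent denominator 8Npq, valid at sites x of the same parity as N) compares the two at N = 100:

```python
import math

def p_exact(x, N, p=0.5):
    """Binomial occupation probability of a +-1 random walk, Eq. (2.11)."""
    if (N + x) % 2:
        return 0.0                      # x must have the parity of N
    m = (N + x) // 2
    return math.comb(N, m) * p**m * (1 - p)**(N - m)

def p_gauss(x, N, p=0.5):
    """Gaussian limit of the binomial at sites of the correct parity."""
    q = 1 - p
    return (math.exp(-(x - N * (p - q)) ** 2 / (8 * N * p * q))
            / math.sqrt(2 * math.pi * N * p * q))

N = 100
for x in (0, 10, 20):
    print(x, p_exact(x, N), p_gauss(x, N))
```

Already at N = 100 the two expressions agree to better than one percent near the peak.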

    Continuous time

Alternatively, we can treat the random walk in continuous time by replacing N by the continuous time t and the increment N → N + 1 by t → t + Δt, and then Taylor expanding the master equation (2.5) to first order in Δt. These steps give

    ∂P(x, t)/∂t = w₊ P(x − 1, t) + w₋ P(x + 1, t) − w₀ P(x, t),   (2.13)

where w₊ = p/Δt and w₋ = q/Δt are the hopping rates to the right and to the left, respectively, and w₀ = 1/Δt is the total hopping rate from each site. This hopping process conserves probability, as the total hopping rate into a site equals the total hopping rate out of the same site.

Again, the simple structure of Eq. (2.13) calls out for applying the Fourier transform. After doing so, the master equation becomes

    dP(k, t)/dt = (w₊ e^{ik} + w₋ e^{−ik} − w₀) P(k, t) ≡ w(k) P(k, t).   (2.14)

For the initial condition P(x, t = 0) = δ_{x,0}, the corresponding Fourier transform is P(k, t = 0) = 1, and the solution to Eq. (2.14) is P(k, t) = e^{w(k)t}. To invert this Fourier transform, let's consider the symmetric case where w± = 1/2 and w₀ = 1. Then w(k) = cos k − 1, and we use the generating function representation for the modified Bessel function of the first kind of order x, e^{z cos k} = Σ_{x=−∞}^∞ e^{ikx} I_x(z), to give

    P(k, t) = e^{−t} Σ_{x=−∞}^∞ e^{ikx} I_x(t),               (2.15)

from which we immediately obtain

    P(x, t) = e^{−t} I_x(t).                                  (2.16)
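Equation (2.16) can be verified against a direct integration of the master equation (2.13). In the sketch below, I_x(t) is computed from its power series and (2.13) is stepped forward with a plain Euler scheme; the step size, lattice cutoff, and number of series terms are illustrative choices that introduce small, controlled errors:

```python
import math

def bessel_i(x, t, terms=60):
    """Modified Bessel function I_x(t) from its power series (x >= 0)."""
    return sum((t / 2) ** (2 * m + x) / (math.factorial(m) * math.factorial(m + x))
               for m in range(terms))

def euler_master(t_final, dt=1e-3, L=40):
    """Euler integration of dP(x)/dt = P(x-1)/2 + P(x+1)/2 - P(x) on [-L, L]."""
    P = {x: 0.0 for x in range(-L, L + 1)}
    P[0] = 1.0
    for _ in range(int(round(t_final / dt))):
        P = {x: P[x] + dt * (0.5 * P.get(x - 1, 0.0)
                             + 0.5 * P.get(x + 1, 0.0) - P[x])
             for x in P}
    return P

t = 4.0
P = euler_master(t)
for x in (0, 1, 3):
    print(x, P[x], math.exp(-t) * bessel_i(x, t))
```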

To determine the probability distribution in the scaling limit where x and t both diverge but x²/t remains finite, it is more useful to Laplace transform the master equation (2.13) to give

    s P(x, s) − P(x, t = 0) = ½ P(x + 1, s) + ½ P(x − 1, s) − P(x, s).   (2.17)

For x ≠ 0, we solve the resulting difference equation, P(x, s) = a[P(x + 1, s) + P(x − 1, s)], with a = 1/[2(s + 1)], by assuming the exponential solution P(x, s) = A λ^x for x > 0; by symmetry, P(x, s) = A λ^{−x} for x < 0. Substituting this form into the recursion gives a quadratic characteristic equation for λ whose solutions are λ± = (1 ± √(1 − 4a²))/2a. For all s > 0, both roots are real and positive, with λ₊ > 1 and λ₋ < 1. We reject the solution that grows exponentially with x, thus giving P(x, s) = A λ₋^{|x|}. Finally, we obtain the constant A from the master equation at the boundary site x = 0,

    s P(0, s) − 1 = ½ P(1, s) + ½ P(−1, s) − P(0, s) = P(1, s) − P(0, s).   (2.18)

The 1 on the left-hand side arises from the initial condition, and the second equality follows by spatial symmetry. Substituting P(x, s) = A λ₋^{|x|} into Eq. (2.18) gives A, from which we finally obtain

    P(x, s) = λ₋^{|x|} / (s + 1 − λ₋).                        (2.19)

This Laplace transform diverges as s → 0; consequently, we may easily obtain the interesting asymptotic behavior by considering the limiting form of P(x, s) as s → 0. Since λ₋ ≃ 1 − √(2s) as s → 0, we find

    P(x, s) ≃ (1 − √(2s))^x / (√(2s) + s) ≃ e^{−x√(2s)} / √(2s).   (2.20)

We now invert the Laplace transform, P(x, t) = (1/2πi) ∫_{s₀−i∞}^{s₀+i∞} P(x, s) e^{st} ds, by using the integration variable u = √s. This immediately leads to the Gaussian probability distribution quoted in Eq. (2.26) for the case ⟨x⟩ = 0 and ⟨x²⟩ = 1.

Continuous space and time

When both space and time are continuous, we expand the master equation (2.5) in a Taylor series to lowest non-vanishing order (second order in the spatial increment Δx and first order in the temporal increment Δt) and obtain the fundamental convection-diffusion equation,

    ∂P(x, t)/∂t + v ∂P(x, t)/∂x = D ∂²P(x, t)/∂x²,            (2.21)

for the probability density P(x, t). Here v = (p − q)Δx/Δt is the bias velocity and D = Δx²/2Δt is the diffusion coefficient. Notice that the ratio v/D diverges as 1/Δx in the continuum limit. The convective term ∂P/∂x would then invariably dominate over the diffusion term ∂²P/∂x². To construct a non-pathological continuum limit, the bias p − q must be proportional to Δx as Δx → 0, so that the first- and second-order spatial derivative terms are simultaneously finite. For the diffusion equation, we obtain a non-singular continuum limit merely by ensuring that the ratio Δx²/Δt remains finite as both Δx and Δt approach zero.

To solve the convection-diffusion equation, we introduce the Fourier transform P(k, t) = ∫ P(x, t) e^{ikx} dx, which reduces the convection-diffusion equation to dP(k, t)/dt = (ikv − Dk²) P(k, t), with solution

    P(k, t) = P(k, 0) e^{(ikv − Dk²)t} = e^{(ikv − Dk²)t},    (2.22)


for the initial condition P(x, t = 0) = δ(x). We then obtain the probability distribution by inverting the Fourier transform and completing the square in the exponential, to give

    P(x, t) = 1/√(4πDt) e^{−(x − vt)²/4Dt}.                   (2.23)
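The Gaussian solution (2.23) provides a convenient check on a direct finite-difference integration of the convection-diffusion equation. The sketch below uses forward-Euler time stepping with central differences; the grid parameters are illustrative and chosen to respect the explicit-scheme stability condition D Δt/Δx² ≤ 1/2:

```python
import math

def solve_cd(v=1.0, D=1.0, t_final=2.0, x_min=-10.0, x_max=20.0,
             dx=0.1, dt=0.002):
    """Forward-Euler / central-difference solution of P_t + v P_x = D P_xx."""
    n = int(round((x_max - x_min) / dx)) + 1
    P = [0.0] * n
    P[int(round(-x_min / dx))] = 1.0 / dx   # discrete approximation to delta(x)
    for _ in range(int(round(t_final / dt))):
        new = P[:]
        for i in range(1, n - 1):
            conv = -v * (P[i + 1] - P[i - 1]) / (2 * dx)
            diff = D * (P[i + 1] - 2 * P[i] + P[i - 1]) / dx**2
            new[i] = P[i] + dt * (conv + diff)
        P = new
    return P, x_min, dx

P, x_min, dx = solve_cd()
i_peak = int(round((2.0 - x_min) / dx))       # packet drifts to x = v*t = 2
peak_exact = 1.0 / math.sqrt(4 * math.pi * 1.0 * 2.0)   # Eq. (2.23) at x = vt
print(P[i_peak], peak_exact)
```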

Alternatively, we may first Laplace transform in the time domain. For the convection-diffusion equation, this yields the ordinary differential equation

    s P(x, s) − δ(x) + v P′(x, s) = D P″(x, s),               (2.24)

where the delta function reflects the initial condition. This equation may be solved separately in the half-spaces x > 0 and x < 0. In each subdomain, Eq. (2.24) reduces to a homogeneous constant-coefficient equation that has exponential solutions. The corresponding solution for the entire line has the form P(x, s) = A₊ e^{α₋ x} for x > 0 and P(x, s) = A₋ e^{α₊ x} for x < 0, where α± = (v ± √(v² + 4Ds))/2D are the roots of the characteristic polynomial. We join these two solutions at the origin by requiring continuity of P(x, s) at x = 0 and by imposing a discontinuity in ∂P/∂x at x = 0 whose magnitude is determined by integrating Eq. (2.24) over an infinitesimal domain that includes the origin. The continuity condition trivially gives A₊ = A₋ ≡ A, and the discontinuity condition is D(P′|_{x=0⁺} − P′|_{x=0⁻}) = −1. This gives A = 1/√(v² + 4Ds). Thus the Laplace transform of the probability distribution is

    P(x, s) = 1/√(v² + 4Ds) exp[(vx − |x|√(v² + 4Ds))/2D].    (2.25)

For zero bias (and D = 1/2, the value appropriate to the lattice walk), this coincides with Eq. (2.20) and thus recovers the Gaussian probability distribution.

    2.3 Central Limit Theorem

The central limit theorem states that the asymptotic (N → ∞) probability distribution of an N-step random walk is the universal Gaussian function

    P(x, N) ≃ 1/√(2πNσ²) e^{−(x − N⟨x⟩)²/2Nσ²},               (2.26)

where ⟨x⟩ and ⟨x²⟩ are respectively the mean and the mean-square displacement for a single step of the walk, and σ² = ⟨x²⟩ − ⟨x⟩². A necessary condition for the central limit theorem to hold is that each step of the walk is an independent, identically distributed random variable drawn from a distribution p(x) such that ⟨x⟩ and ⟨x²⟩ are both finite. We now give a simple derivation of this fundamental result. For simplicity we work in one dimension, but the derivation immediately extends to any dimension.

When the steps of the random walk are independent, the probability distribution after N steps is related to the distribution after N − 1 steps by the recursion (also known as the Chapman-Kolmogorov equation)

    P_N(x) = ∫ P_{N−1}(x′) p(x − x′) dx′.                     (2.27)

This equation merely states that to reach x in N steps, the walk first reaches an arbitrary point x′ in N − 1 steps and then makes a transition from x′ to x with probability p(x − x′). It is now useful to introduce the Fourier transforms

    f(k) = ∫ f(x) e^{ikx} dx,    f(x) = (1/2π) ∫ f(k) e^{−ikx} dk,

which turn Eq. (2.27) into the algebraic equation P_N(k) = P_{N−1}(k) p(k), which we iterate to give P_N(k) = P₀(k) p(k)^N. At this stage, there is another mild condition for the central limit theorem to hold: the initial condition cannot be too long-ranged in space. The natural choice is for the random walk to start at the origin, P₀(x) = δ(x), for which the Fourier transform of the initial probability distribution is simply P₀(k) = 1. Then the Fourier transform of the probability distribution is simply

    P_N(k) = p(k)^N,                                          (2.28)


so that

    P_N(x) = (1/2π) ∫ p(k)^N e^{−ikx} dk.                     (2.29)

To invert the Fourier transform, we now use the fact that the first two moments of p(x) are finite to write the Fourier transform p(k) as

    p(k) = ∫ p(x) e^{ikx} dx
         = ∫ p(x) [1 + ikx − ½k²x² + …] dx
         = 1 + ik⟨x⟩ − ½k²⟨x²⟩ + … .

Now the probability distribution is

    P_N(x) ≃ (1/2π) ∫ [1 + ik⟨x⟩ − ½k²⟨x²⟩]^N e^{−ikx} dk

           ≃ (1/2π) ∫ e^{N ln[1 + ik⟨x⟩ − ½k²⟨x²⟩]} e^{−ikx} dk

           ≃ (1/2π) ∫ e^{N[ik⟨x⟩ − ½k²(⟨x²⟩ − ⟨x⟩²)]} e^{−ikx} dk.   (2.30)

We now complete the square in the exponent and perform the resulting Gaussian integral to arrive at the fundamental result

    P_N(x) ≃ 1/√(2πNσ²) e^{−(x − N⟨x⟩)²/2Nσ²}.                (2.31)
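The universality expressed by (2.31) is easy to observe numerically. The sketch below (seeded Monte Carlo; step number and sample size are illustrative) sums N uniform steps on (0, 1), with single-step mean 1/2 and variance 1/12, and checks a standard Gaussian signature: about 68.3% of the samples fall within one standard deviation √(N/12) of the mean N/2.

```python
import math
import random

rng = random.Random(99)
N, samples = 50, 20000
mean, sd = N * 0.5, math.sqrt(N / 12.0)   # N<x> and sqrt(N sigma^2)

inside = 0
for _ in range(samples):
    s = sum(rng.random() for _ in range(N))   # one N-step walk
    if abs(s - mean) < sd:
        inside += 1

frac = inside / samples
print(frac)   # close to erf(1/sqrt(2)) ~ 0.683 for a Gaussian
```

Any other single-step distribution with finite first two moments gives the same fraction, which is the content of the central limit theorem.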

    2.4 Connection to First-Passage Properties

An intriguing property of random walks is the transition between recurrence and transience as a function of the spatial dimension d. Recurrence means that a random walk is certain to return to its starting point; this occurs for d ≤ 2. Conversely, for d > 2 the random walk is transient: there is a positive probability for a random walk never to return to its starting point. It is striking that the spatial dimension, and not any other feature of the random walk, is the only parameter that determines this transition.

The qualitative explanation for this transition is quite simple. Consider the trajectory of a typical random walk. After a time t, a random walk explores a roughly spherical domain of radius √(Dt), while the total number of sites visited during this walk is at most of order t. Therefore the density of visited sites within the exploration sphere is ρ ∼ t/t^{d/2} ∼ t^{1−d/2} in d dimensions. For d < 2 this density grows with time; thus a random walk visits each site within the sphere infinitely often and is certain to return to its starting point. On the other hand, for d > 2, the density decreases with time, and so some points within the exploration sphere never get visited. The case d = 2 is more delicate but turns out to be barely recurrent.


    Figure 2.1: Diagrammatic relation between the occupation probability of a random walk (propagation isrepresented by a wavy line) and the first-passage probability (straight line).


We now present a simple-minded approach to understand this transition between recurrence and transience. Let P(r, t) be the probability that a random walk is at r at time t when it starts at the origin. Similarly, let F(r, t) be the first-passage probability, namely, the probability that the random walk visits r for the first time at time t with the same initial condition.

For a random walk to be at r at time t, the walk must first reach r at some earlier time t′ and then return to r after an additional time t − t′ (Fig. 2.1). This connection between F(r, t) and P(r, t) may therefore be expressed as the convolution

    P(r, t) = δ_{r,0} δ_{t,0} + ∫₀^t F(r, t′) P(0, t − t′) dt′.   (2.32)

The delta function term accounts for the initial condition. The second term accounts for the ways that a walk can be at r at time t: the walk must first reach r at some time t′ ≤ t. Once this first passage has occurred, the walk must return to r exactly at time t (the walk may also return to r at intermediate times, as long as it is at r at time t). Because of the possibility of multiple visits to r between times t′ and t, the return factor involves P rather than F. This convolution equation is most conveniently solved in terms of the Laplace transform, which gives P(r, s) = δ_{r,0} + F(r, s) P(0, s). Thus we obtain the fundamental connection

    F(r, s) = P(r, s)/P(0, s),      r ≠ 0;
    F(0, s) = 1 − 1/P(0, s),        r = 0,                    (2.33)

in which the Laplace transform of the first-passage probability is determined by the corresponding transform of the occupation probability P(r, t).
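In discrete time, the convolution (2.32) becomes the renewal equation P(0, t) = δ_{t,0} + Σ_{t′=1}^{t} F(0, t′) P(0, t − t′), which can be inverted recursively for the first-return probabilities. For the symmetric 1d walk, P(0, 2m) = C(2m, m)/4^m, and the recursion reproduces the known first-return values F(0, 2) = 1/2, F(0, 4) = 1/8, F(0, 6) = 1/16. The sketch below implements this identity directly (time-domain route rather than the Laplace-transform route used in the text):

```python
import math

T = 40
# Occupation probability of the symmetric 1d walk at the origin:
# P(0, t) = C(t, t/2) / 2^t for even t, and 0 for odd t.
P = [math.comb(t, t // 2) / 2**t if t % 2 == 0 else 0.0 for t in range(T + 1)]

# Invert the renewal equation P(t) = delta_{t,0} + sum_{t'=1}^{t} F(t') P(t-t').
F = [0.0] * (T + 1)
for t in range(1, T + 1):
    F[t] = P[t] - sum(F[tp] * P[t - tp] for tp in range(1, t))

print(F[2], F[4], F[6])   # 0.5 0.125 0.0625
```

The cumulative sum Σ F(0, t) creeps toward 1, consistent with the walk being recurrent but with slowly decaying first-return probabilities.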

We now use the techniques of Section A.1 to determine the time dependence of the first-passage probability in terms of the Laplace transform of the occupation probability. For isotropic diffusion, P(r = 0, t) = (4πDt)^{−d/2} in d dimensions, and the Laplace transform is P(0, s) = ∫₀^∞ P(0, t) e^{−st} dt. As discussed in Section A.1, this integral has two fundamentally different behaviors, depending on whether ∫^∞ P(0, t) dt diverges or converges. In the former case, we apply the last step in Eq. (A.1) to obtain

    P(0, s) ∼ ∫^{t* = 1/s} (4πDt)^{−d/2} dt ∼ { A_d (t*)^{1−d/2} = A_d s^{d/2−1},   d < 2;
                                                A₂ ln t* = A₂ ln(1/s),              d = 2,   (2.34)

where the dimension-dependent prefactor A_d is of order one and does not play any role in the asymptotic behavior.

    where the dimension-dependent prefactor Ad is of the order of 1 and does not play any role in the asymptoticbehavior.

For d > 2, the integral ∫^∞ P(0, t) dt converges, and one has to be more careful and extract the asymptotic behavior by studying P(0, s = 0) − P(0, s). By such an approach, it is possible to show that P(0, s) has the asymptotic behavior

    P(0, s) ≃ (1 − R)^{−1} − B_d s^{d/2−1} + …,    d > 2,     (2.35)

where R is the eventual return probability, namely, the probability that the random walk ultimately returns to the origin, and B_d is another dimension-dependent constant of order one. Using these results in Eq. (2.33), we infer that the Laplace transform of the first-passage probability has the asymptotic behaviors

    F(0, s) ≃ { 1 − A_d^{−1} s^{1−d/2},          d < 2;
                1 − [A₂ ln(1/s)]^{−1},           d = 2;
                R − B_d (1 − R)² s^{d/2−1},      d > 2.       (2.36)

From this Laplace transform, we determine the time dependence of the survival probability by the approximation (A.4); that is,

    F(0, s = 1/t) ≈ ∫₀^t F(0, t′) dt′ ≡ T(t),                 (2.37)

where T(t) is the probability that the particle gets trapped (reaches the origin) by time t, and S(t) = 1 − T(t) is the survival probability, namely, the probability that the particle has not reached the origin by time t. Here the trick of replacing an exponential cutoff by a sharp cutoff provides an extremely easy way to invert the Laplace transform. From Eqs. (2.36) and (2.37) we thus find

    S(t)

    Adt1d/2

    , d < 2

    A2(ln t)1

    , d = 2

    (1R) + Cd(1R)2t1d/2

    , d > 2.

    (2.38)

where C_d is another d-dependent constant of the order of 1. Finally, the time dependence of the first-passage probability may be obtained from the basic relation 1 - S(t) = \int_0^t F(0, t')\, dt' to give

F(0, t) = -\frac{\partial S(t)}{\partial t} \sim
\begin{cases}
t^{d/2-2}, & d < 2\\
t^{-1} (\ln t)^{-2}, & d = 2\\
t^{-d/2}, & d > 2.
\end{cases}
\qquad (2.39)

It is worth emphasizing several important physical ramifications of these first-passage properties. First, the asymptotic behavior is determined by the spatial dimension only, and there is a dramatic change in behavior at d = 2. For d \le 2, the survival probability S(t) ultimately decays to zero. This means that a random walk is recurrent: it is certain to eventually return to its starting point, and indeed to visit any site of an infinite lattice. Finally, because a random walk has no memory, it is renewed every time a specific lattice site is reached. Thus recurrence also implies that every lattice site is visited infinitely often.
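The d = 1 prediction S(t) \sim t^{d/2-1} = t^{-1/2} is easy to check numerically. The following is a minimal Monte Carlo sketch (not from the text; the starting point, trial count, and times are illustrative choices): quadrupling the observation time should roughly halve the fraction of surviving walks.

```python
import random

def survival_probability(steps, trials=5000, start=5, seed=1):
    """Fraction of symmetric nearest-neighbor walks started at `start`
    that have not yet reached the origin after `steps` steps (d = 1)."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(trials):
        x = start
        for _ in range(steps):
            x += 1 if rng.random() < 0.5 else -1
            if x == 0:
                break          # absorbed at the origin
        else:
            alive += 1         # loop finished without absorption
    return alive / trials

# S(t) ~ t^(-1/2) in d = 1: the ratio below should be close to sqrt(4) = 2
s_early = survival_probability(400)
s_late = survival_probability(1600)
print(s_early, s_late, s_early / s_late)
```

The same experiment in d = 3 would instead show S(t) saturating at the nonzero value 1 - R, in line with Eq. (2.38).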

We can give a simple physical explanation for this efficient visitation of sites. After a time t, a random walk explores a roughly spherical domain of radius \sqrt{Dt}, while the walk makes of the order of t site visits during this time. Consequently, in d dimensions the density \rho of visits within this exploration sphere is \rho \sim t/t^{d/2} = t^{1-d/2}. For d < 2, \rho diverges as t \to \infty and a random walk visits each site within the sphere infinitely often. This feature is termed compact exploration. Paradoxically, although every site is visited with certainty, these visitations take forever because the mean time to return to the origin, \langle t \rangle = \int^\infty t\, F(0, t)\, dt, diverges for all d \le 2.
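Compact exploration in d = 1 can be seen directly in a short simulation (an illustrative sketch, not part of the text): a walk of t steps covers only of order \sqrt{t} distinct sites, so each site in the explored region is revisited roughly \sqrt{t} times.

```python
import random

def mean_distinct_sites(steps, trials=300, seed=2):
    """Average number of distinct sites visited by a 1d symmetric walk;
    in one dimension the visited set is just the interval [min, max]."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = lo = hi = 0
        for _ in range(steps):
            x += 1 if rng.random() < 0.5 else -1
            lo, hi = min(lo, x), max(hi, x)
        total += hi - lo + 1
    return total / trials

# Distinct sites grow as sqrt(t): quadrupling t should double the count,
# even though the number of site visits grows by a factor of 4.
n1, n2 = mean_distinct_sites(1000), mean_distinct_sites(4000)
print(n1, n2, n2 / n1)
```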

Finally, we outline a useful technique to compute where on a boundary a diffusing particle is absorbed and when this absorption occurs. This method will prove helpful in understanding finite-size effects in reaction kinetics. For simplicity, consider a symmetric nearest-neighbor random walk in the finite interval [0, 1]. Let E_+(x) be the probability that a particle, which starts at x, eventually hits x = 1 without hitting x = 0. This eventual hitting probability E_+(x) is obtained by summing the probabilities of all paths that start at x and reach 1 without touching 0. Thus

E_+(x) = \sum_p P_p(x), \qquad (2.40)

where P_p(x) denotes the probability of a path from x to 1 that does not touch 0. The sum over all such paths can be decomposed into the outcome after one step (the factors of 1/2 below) and the sum over all path remainders from the location after one step to 1. This gives

E_+(x) = \sum_p \left[\tfrac{1}{2} P_p(x+\Delta x) + \tfrac{1}{2} P_p(x-\Delta x)\right] = \tfrac{1}{2}\left[E_+(x+\Delta x) + E_+(x-\Delta x)\right]. \qquad (2.41)

By a simple rearrangement, this equation is equivalent to

\Delta^{(2)} E_+(x) = 0, \qquad (2.42)

where \Delta^{(2)} is the second-difference operator. Notice the opposite sense of this recursion formula compared to the master equation Eq. (2.5) for the probability distribution. Here E_+(x) is expressed in terms of output from x, while in the master equation, the occupation probability at x is expressed in terms of input to x. For this reason, Eq. (2.41) is sometimes referred to as a backward master equation. This backward equation is just the discrete Laplace equation and gives a hint of the deep relation between first-passage properties, such as the exit probability, and electrostatics. Equation (2.42) is subject to the boundary conditions E_+(0) = 0

  • 2.5. THE REACTION RATE 23

and E_+(1) = 1; namely, if the walk starts at 1 it surely exits at 1, and if it starts at 0 it has no chance to exit at 1. In the continuum limit, Eq. (2.42) becomes the Laplace equation \nabla^2 E_+ = 0, subject to appropriate boundary conditions. We can now transcribe well-known results from electrostatics to solve for the exit probability. For the one-dimensional interval, the result is remarkably simple: E_+(x) = x!
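The linear result E_+(x) = x is easy to corroborate with a minimal Monte Carlo sketch (the lattice spacing dx and trial count below are illustrative choices, not from the text):

```python
import random

def exit_right_probability(x, dx=0.05, trials=20000, seed=3):
    """Probability that a symmetric walk on the lattice 0, dx, ..., 1,
    started at x, reaches 1 before reaching 0."""
    rng = random.Random(seed)
    n, top = round(x / dx), round(1.0 / dx)
    wins = 0
    for _ in range(trials):
        k = n
        while 0 < k < top:
            k += 1 if rng.random() < 0.5 else -1
        wins += (k == top)
    return wins / trials

for x in (0.25, 0.5, 0.75):
    print(x, exit_right_probability(x))  # estimate should be close to x
```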

This exit probability also represents the solution to the classic gambler's ruin problem: let x represent your wealth, which changes by a small amount dx with equal probability in a single bet with a casino. You continue to bet as long as you have money. You lose if your wealth hits zero, while you break the casino if your wealth reaches 1. The exit probability to x = 1 is the same as the probability that you break the casino.

Let's now determine the mean time for a random walk to exit a domain. We focus on the unconditional exit time, namely, the time for a particle to reach any point on the absorbing boundary of the domain. For the symmetric random walk, let the time increment between successive steps be \Delta t, and let t(x) denote the average exit time from the interval [0, 1] when a particle starts at x. The exit time is simply the time for each exit path times the probability of the path, averaged over all trajectories, and leads to the analog of Eq. (2.40),

t(x) = \sum_p P_p(x)\, t_p(x), \qquad (2.43)

where t_p(x) is the exit time of a specific path to the boundary that starts at x. In analogy with Eq. (2.41), this mean exit time obeys the recursion

t(x) = \tfrac{1}{2}\left[\big(t(x+\Delta x) + \Delta t\big) + \big(t(x-\Delta x) + \Delta t\big)\right]. \qquad (2.44)

This recursion expresses the mean exit time starting at x in terms of the outcome one step in the future, for which the initial walk can be viewed as restarting at either x+\Delta x or x-\Delta x, each with probability 1/2, but also with the time incremented by \Delta t. This equation is subject to the boundary conditions t(0) = t(1) = 0; the exit time equals zero if the particle starts at the boundary. In the continuum limit, this recursion formula reduces to the Poisson equation D\, t''(x) = -1. For diffusion in a d-dimensional domain with absorption on a boundary B, the corresponding Poisson equation for the exit time is D \nabla^2 t(\vec r) = -1, subject to the boundary condition t(\vec r) = 0 for \vec r \in B. Thus the determination of the mean exit time has been recast as a time-independent electrostatic problem! For the example of the unit interval, the solution to this Poisson equation is just a second-order polynomial in x. Imposing the boundary conditions immediately leads to the classic result

t(x) = \frac{1}{2D}\, x(1-x). \qquad (2.45)
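The classic result (2.45) can also be verified by simulation. In the hedged sketch below the lattice spacing dx and time step dt = dx^2 are numerical choices, not from the text; they give D = dx^2/(2\,dt) = 1/2, so the prediction becomes t(x) = x(1-x).

```python
import random

def mean_exit_time(x, dx=0.05, trials=20000, seed=4):
    """Monte Carlo mean exit time from [0, 1] for a walk started at x;
    each step moves +/- dx and takes time dt = dx**2, so D = 1/2."""
    rng = random.Random(seed)
    n, top, dt = round(x / dx), round(1.0 / dx), dx * dx
    total = 0
    for _ in range(trials):
        k, steps = n, 0
        while 0 < k < top:
            k += 1 if rng.random() < 0.5 else -1
            steps += 1
        total += steps
    return total / trials * dt

# With D = 1/2, Eq. (2.45) predicts t(x) = x(1 - x)
x = 0.3
print(mean_exit_time(x), x * (1 - x))
```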

    2.5 The Reaction Rate

Suppose that you wanted to hit the side of a barn using an ensemble of blind riflemen who fire bullets in random directions as your incident beam. What is the rate at which the barn is hit? Theorists that we are, let's model the barn as a sphere of radius R. A patently obvious fact is that if the radius of the barn is increased, the number of bullets that hit our theoretical barn increases as its cross-sectional area. In d spatial dimensions, the cross section therefore scales as R^{d-1}. Now suppose that we take away the rifles from our blind marksmen and give them the task of hitting the barn simply by wandering around. Surprisingly, the rate at which the blind riflemen diffuse to the barn is proportional to R^{d-2} for d > 2. Thus in the physical case of 3 dimensions, the absorption rate is proportional to the sphere radius rather than to its cross section! Even more striking, for d \le 2 the absorption rate no longer depends on the radius of the absorbing sphere. The rate at which diffusing particles hit an absorbing sphere is the underlying mechanism of diffusion-controlled reactions. Because of the centrality of this topic to reaction kinetics, and because it represents a nice application of first-passage ideas, we now determine this reaction rate.

As in the original Smoluchowski theory for the reaction rate, we fix a spherical absorbing particle of mass m_i and radius R_i at the origin, while a gas of non-interacting particles, each of mass m_j and radius R_j, freely diffuses outside the sphere. The separation between the absorbing sphere and a background particle diffuses with diffusion coefficient D_i + D_j, where D_i is the diffusion coefficient of a droplet of radius R_i. When the


    separation first reaches a = Ri + Rj , reaction occurs. The reaction rate is then identified as the flux to anabsorbing sphere of radius a by an effective particle with diffusivity D = Di + Dj .

The concentration of background particles around the absorbing sphere thus obeys the diffusion equation

\frac{\partial c(\vec r, t)}{\partial t} = D \nabla^2 c(\vec r, t), \qquad (2.46)

subject to the initial condition c(\vec r, t=0) = 1 for r > a and the boundary conditions c(r=a, t) = 0 and c(r \to \infty, t) = 1. The reaction rate is then identified with the integral of the flux over the sphere surface,

K(t) = D \oint_S \left.\frac{\partial c(\vec r, t)}{\partial r}\right|_{r=a} d\Omega. \qquad (2.47)

    There are two regimes of behavior as a function of the spatial dimension. For d > 2, the loss of reactantsat the absorbing sphere is sufficiently slow that it is replenished by the re-supply from larger distances. Asteady state is thus reached and the reaction rate K is finite. In this case, the reaction rate can be determinedmore simply by solving the time-independent Laplace equation, rather than the diffusion equation (2.46).

The solution to the Laplace equation with the above boundary conditions is

c(r) = 1 - \left(\frac{a}{r}\right)^{d-2}.

The flux is then D\, \partial c/\partial r|_{r=a} = D(d-2)/a, and the total current is the integral of this flux over the surface of the sphere, K = (d-2)\, \Omega_d\, D\, a^{d-2}, where \Omega_d = 2\pi^{d/2}/\Gamma(d/2) is the area of a unit sphere in d dimensions. We translate this flux into the reaction kernel for aggregation by expressing a and D in terms of the parameters of the constituent reactants to give

K_{ij} = (d-2)\, \Omega_d\, (D_i + D_j)(R_i + R_j)^{d-2}.

We can express this result as a function of reactant masses only. For the physical case of three dimensions, we use R_i \propto i^{1/3}, while for the diffusion coefficient we use the Einstein-Stokes relation D_i = kT/(6\pi\eta R_i) \propto i^{-1/3}, where kT is the thermal energy and \eta is the viscosity coefficient, to obtain

K_{ij} \approx \frac{2kT}{3\eta}\, \big(R_i^{-1} + R_j^{-1}\big)\big(R_i + R_j\big). \qquad (2.48)
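A small numerical sketch (the units, with 2kT/(3\eta) = 1 and a unit monomer radius, are assumptions for illustration, not from the text) shows why this Brownian kernel depends only weakly on the masses: the decrease of the diffusivity with mass compensates the growth of the radius.

```python
def brownian_kernel(i, j):
    """Mass dependence of Eq. (2.48) in units where 2kT/(3*eta) = 1,
    with R_i = i**(1/3) (unit monomer radius assumed)."""
    ri, rj = i ** (1 / 3), j ** (1 / 3)
    return (1 / ri + 1 / rj) * (ri + rj)

# Equal masses: K_ii = 4 for every i, so the kernel is effectively constant
print(brownian_kernel(1, 1), brownian_kernel(1000, 1000))  # both close to 4
# Mass asymmetry enters only through (i/j)^(1/3): still a weak dependence
print(brownian_kernel(1, 1000))
```

This weak mass dependence is what motivates the constant-kernel model studied in Chapter 3.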

Figure 2.2: Sketch of the concentration c(r, t) about an absorbing sphere of radius a according to the quasi-static approximation. The near- and far-zone concentrations match at r = \sqrt{Dt}.

What happens for d < 2? We could solve the diffusion equation with the absorbing boundary condition and the unit initial condition, from which the time-dependent flux, and thereby a time-dependent reaction rate, can be deduced. However, it is simpler and more revealing to apply the general quasi-static approximation.


The basis of this approximation is that the region exterior to the absorbing sphere naturally divides into near and far zones. In the near zone, which extends to a distance \sqrt{Dt} from the sphere, diffusing particles have ample time to explore the zone thoroughly, and the concentration is nearly time independent. In the complementary far zone, there is negligible depletion.

Based on this picture, we solve the Laplace equation in the near zone with the time-dependent boundary condition c(r = \sqrt{Dt}) = 1, as well as c(a) = 0. By elementary methods, the solution is

c(r, t) \approx
\begin{cases}
\dfrac{(r/a)^{2-d} - 1}{(\sqrt{Dt}/a)^{2-d} - 1}, & d < 2,\\[1ex]
\dfrac{\ln(r/a)}{\ln(\sqrt{Dt}/a)}, & d = 2.
\end{cases}
\qquad (2.49)

Using the definition of the time-dependent reaction rate from Eq. (2.47), we then obtain

K(t) \sim
\begin{cases}
D\, (Dt)^{(d-2)/2}, & d < 2;\\
4\pi D/\ln(Dt/a^2), & d = 2;\\
D\, a^{d-2}, & d > 2.
\end{cases}
\qquad (2.50)

Notice that the rate does not depend on the cluster radius for d \le 2. This surprising fact arises because of the recurrence of diffusion in d \le 2, so that two diffusing particles are guaranteed to eventually meet, independent of their radii.

    Problems

1. Find the generating function for the Fibonacci sequence, F_n = F_{n-1} + F_{n-2}, with the initial condition F_0 = F_1 = 1; that is, determine F(z) = \sum_{n \ge 0} F_n z^n. Invert the generating function to find a closed-form expression for F_n.

2. Consider a random walk in one dimension in which a step to the right of length 2 occurs with probability 1/3 and a step to the left of length 1 occurs with probability 2/3. Investigate the corrections to the isotropic Gaussian that characterizes the probability distribution in the long-time limit. Hint: Consider the behavior of moments beyond second order, \langle x^k \rangle with k > 2.

3. Solve the gambler's ruin problem when the probability of winning in a single bet is p. The betting game is repeated until either you are broke or the casino is broken. Take the total amount of capital to be $N and suppose you start with $n. What is the probability that you will break the casino? Also determine the mean time until the betting is over (either you are broke or the casino is broken). More advanced: Determine the mean time until betting is over conditioned on: (i) you going broke, and (ii) you breaking the casino. Solve this problem both for fair and for biased betting.

4. Consider the gambler's ruin problem under the assumptions that you win each bet with probability p \ne 1/2, but that the casino has an infinite reserve of money. What is the probability that you break the casino as a function of p? For those values of p where you break the casino, what is the average time for this event to occur?

    Notes

The fields of random walks, diffusion, and first-passage processes are classic areas of applied probability theory, and there is a correspondingly large literature. For the more probabilistic aspects of random walks and probability theory in general, we recommend [8; 3; 2; 9]. For the theory of random walks and diffusion from a physicist's perspective, we recommend [W; RG; MW]. For first-passage properties, please consult [8; 9].

Chapter 3

AGGREGATION

Aggregation is a fundamental non-equilibrium process in which reactive clusters join together irreversibly when they meet (Fig. 3.1), so that the typical mass of a collection of aggregates grows monotonically with time. Developing an understanding of aggregation is important both because of its ubiquitous applications, such as the gelation of jello, the curdling of milk, the coagulation of blood, and the formation of stars by gravitational accretion, and because aggregation is an ideal setting to illustrate theoretical analysis tools. We schematically write aggregation as

A_i + A_j \xrightarrow{K_{ij}} A_{i+j}.

That is, a cluster of mass i+j is created at an intrinsic rate K_{ij} by the aggregation of a cluster of mass i and a cluster of mass j. The fundamental observables of the system are c_k(t), the concentrations of clusters of mass k at time t.

Figure 3.1: Schematic representation of aggregation. A cluster of mass i and a cluster of mass j react to create a cluster of mass i+j.

The primary goal of this chapter is to elucidate the basic features of the mass distribution c_k(t) and to understand which features of the reaction rate K_{ij} influence this distribution. In pursuit of this goal, we will write the master equations that describe the evolution of the cluster mass distribution in an infinite system and discuss some of their elementary properties. Next, we work out exact solutions for specific examples to illustrate both the wide range of phenomenology and the many useful techniques for analyzing master equations. We will then discuss the reaction rate in general terms and show how to calculate this rate. This discussion naturally leads to the notion of dynamical scaling, which provides a simple way to obtain a general understanding of aggregation. Finally, we will discuss several important extensions of aggregation that exhibit rich kinetic properties.

    The Master Equations

The traditional starting point for treating aggregation is the infinite set of equations that describe how the cluster mass distribution changes with time. For a general reaction rate K_{ij}, the master equations are:

\dot c_k(t) = \frac{1}{2} \sum_{i+j=k} K_{ij}\, c_i(t)\, c_j(t) - c_k(t) \sum_{i=1}^{\infty} K_{ik}\, c_i(t). \qquad (3.0.1)



Here the overdot denotes the time derivative. For conciseness, we will typically not write the arguments of basic variables henceforth. The first term describes the gain in the concentration of clusters of mass k = i+j due to the coalescence of a cluster of mass i with a cluster of mass j. This gain process occurs at the rate K_{ij} c_i c_j: the product c_i c_j gives the rate at which i-mers and j-mers meet, and the factor K_{ij}, the reaction kernel, is the rate at which k-mers are formed when i-mers and j-mers do meet. The second (loss) term accounts for the loss of clusters of mass k due to their reaction with clusters of arbitrary mass i. The prefactor of 1/2 in the gain term ensures the correct counting of the relative contributions of the gain and loss terms. To truly appreciate that the counting is correct, write out the first few master equations explicitly.

An important feature of the master equations is that the total mass (generally) is conserved. That is,

\frac{d}{dt} \sum_k k\, c_k = \frac{1}{2} \sum_k \sum_{i+j=k} K_{ij}\, (i+j)\, c_i c_j - \sum_i \sum_k K_{ik}\, k\, c_i c_k = 0. \qquad (3.0.2)

In the first term, the sum over k causes the sums over i and j to become independent and unrestricted. Thus the gain and loss terms become identical and the total mass is manifestly conserved.

While the master equation approach is the starting point in almost all studies of aggregation, its underlying assumptions and approximations should be noted at the outset, including:

• Bimolecular reactions. This assumes a dilute system so that higher-body interactions are negligible.

• Spatial homogeneity. The cluster densities are independent of spatial position. This is the mean-field assumption of a well-mixed system.

• Shape independence. The aggregate mass is the only dynamical variable; the role of cluster shape in the evolution is not considered. Example: the coalescence of liquid droplets that always remain spherical.

• Thermodynamic limit. The system is assumed to be sufficiently large that discreteness effects can be ignored and cluster concentrations are continuous functions.

    In spite of these limitations, the master equations capture many of the essential kinetic mechanisms ofaggregation as we now discuss.

    3.1 Exact Solutions

    Although the master equations may appear formidable, they are exactly soluble for a number of prototypicalspecial cases for which many incisive techniques have been developed. We discuss several of these approachesand apply them first to the case of the constant reaction kernel. Because of its simplicity, the constant kernelreaction is an ideal playground with which to build our understanding. We then turn to the more realistic(and more interesting) cases of the product and sum kernels, Kij = ij and Kij = i + j, respectively. It isworth mentioning that the bilinear kernel, Kij = A+B(i+ j) + Cij, is also exactly soluble.

    Constant Reaction Rates

It turns out to be convenient to choose K = 2. In this case the master equations are:

\dot c_k = \sum_{i+j=k} c_i c_j - 2 c_k \sum_{i=1}^{\infty} c_i \equiv \sum_{i+j=k} c_i c_j - 2 c_k N, \qquad (3.1.1)


where N(t) = \sum_i c_i, the zeroth moment of the mass distribution, is the concentration of clusters of any mass. The first few of these equations are, explicitly:

\dot c_1 = -2 c_1 N
\dot c_2 = c_1^2 - 2 c_2 N
\dot c_3 = 2 c_1 c_2 - 2 c_3 N
\dot c_4 = 2 c_1 c_3 + c_2^2 - 2 c_4 N
\dot c_5 = 2 c_1 c_4 + 2 c_2 c_3 - 2 c_5 N
\dot c_6 = 2 c_1 c_5 + 2 c_2 c_4 + c_3^2 - 2 c_6 N
\vdots \qquad (3.1.2)

Let's solve these equations subject to the monomer-only initial condition, c_k(t=0) = \delta_{k,1}. Naively, these equations can be solved one by one, because N(t) can be determined separately, and then the master equations have a recursive structure. Thus, as the necessary preliminary, we determine N(t). Summing Eq. (3.1.1) over all k, we find \dot N = -N^2, whose solution is

N(t) = \frac{N(0)}{1 + N(0)\, t} \to \frac{1}{t} \quad \text{as } t \to \infty. \qquad (3.1.3)

Notice that the concentration does not depend on the initial concentration as t \to \infty. Substituting N(t) into the first of Eqs. (3.1.2) and integrating gives c_1(t) = 1/(1+t)^2. Having found c_1, the master equation for c_2 becomes

\dot c_2 = \frac{1}{(1+t)^4} - \frac{2 c_2}{1+t}.

This again can be integrated by elementary methods, and the result is c_2(t) = t/(1+t)^3. In principle, this approach can be continued straightforwardly to yield c_k(t) for all k.

Before leaving this pedestrian method, it is worth mentioning that much useful information can often be obtained from the much simpler equations for the moments of the mass distribution, M_n(t) \equiv \sum_k k^n c_k(t).

From the master equations (3.1.1) we have already obtained the zeroth moment M_0 = N, and so we turn to higher moments. The moment equations are

\dot M_n = \sum_{k \ge 1} k^n \dot c_k = \sum_{k \ge 1} k^n \Big[ \sum_{i+j=k} c_i c_j - 2 c_k \sum_{i \ge 1} c_i \Big] = \sum_{i,j} (i+j)^n c_i c_j - 2 M_n M_0, \qquad (3.1.4)

where the sums over i and j are unrestricted in the final equality. The explicit equations for the first few n are

\dot M_0 = \sum_{i,j} c_i c_j - 2 M_0^2 = -M_0^2
\dot M_1 = \sum_{i,j} (i+j)\, c_i c_j - 2 M_1 M_0 = 0
\dot M_2 = \sum_{i,j} (i^2 + 2ij + j^2)\, c_i c_j - 2 M_2 M_0 = 2 M_1^2
\vdots \qquad (3.1.5)

Solving these equations one by one, we obtain M_2 \simeq 2t, M_3 \simeq 6t^2, M_4 \simeq 24t^3, etc.; in general, M_n \sim t^{n-1}. Because M_1 = const. by mass conservation, the natural measure of the typical cluster mass is M_2 \simeq 2t.


We now turn to more holistic and elegant approaches for dealing with the master equations. One method is to consider an appropriately scaled concentration rather than the concentration itself. Thus, rewriting the master equation as

\dot c_k + 2 c_k N = \sum_{i+j=k} c_i c_j,

introducing the integrating factor I \equiv \exp\big[2 \int_0^t N(t')\, dt'\big], the variable \phi_k \equiv c_k I, and the rescaled time variable defined by dx = dt/I(t), we recast the master equation as

\phi_k' = \sum_{i+j=k} \phi_i \phi_j, \qquad (3.1.6)

where the prime denotes differentiation with respect to x. This equation contains only gain terms, a feature that makes it easier to develop intuition about the form of the solution.

Solving for the first few \phi_k, it becomes immediately clear that the solution is \phi_k = x^{k-1}. Since c_1 and N may be determined separately, from which we obtain \phi_1 = 1, we can then unfold the transformations from (I, x) back to (N, t). The rescaled time variable is

x = \int_0^t \frac{dt'}{(1+t')^2} = \frac{t}{1+t},

and the integrating factor is I(t) = \exp\big(2 \int_0^t N(t')\, dt'\big) = (1+t)^2, from which we obtain the exact solution

c_k(t) = \frac{t^{k-1}}{(1+t)^{k+1}} \to \frac{1}{t^2}\, e^{-k/t}, \qquad t \to \infty. \qquad (3.1.7)

Several points deserve emphasis. First, for fixed k, each c_k(t) approaches a common limit that decays as t^{-2} as t \to \infty (Fig. 3.2). Thus the mass distribution becomes flat for k < s(t) \sim t, as seen on the right side of the figure. The area under the mass distribution is therefore t^{-2} \times t = t^{-1}, which reproduces the time dependence of the total concentration of clusters. Finally, the limiting behaviors of c_k for short and long times can be determined by elementary considerations. The early-time behavior can be inferred by ignoring the loss terms in the master equations. The resulting equations have the same form as Eqs. (3.1.6), from which we immediately deduce that c_k(t) \sim t^{k-1} for t \ll 1. Conversely, for t \to \infty there is no production of k-mers for fixed k. We may therefore ignore the gain terms in the master equation to give \dot c_k \approx -2 c_k N, whose solution is c_k \sim t^{-2}.
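The exact solution (3.1.7) is easy to corroborate by integrating the truncated master equations (3.1.1) directly. The following is a minimal Euler sketch (the mass cutoff kmax and step dt are numerical choices, not from the text):

```python
def integrate_constant_kernel(t_max, kmax=60, dt=1e-3):
    """Euler-integrate the K = 2 master equations (3.1.1), truncated at
    mass kmax, from the monomer-only initial condition c_k(0) = delta_{k,1}."""
    c = [0.0] * (kmax + 1)   # c[k] is the concentration of mass-k clusters
    c[1] = 1.0
    for _ in range(int(t_max / dt)):
        N = sum(c)           # zeroth moment: total cluster concentration
        new = c[:]
        for k in range(1, kmax + 1):
            gain = sum(c[i] * c[k - i] for i in range(1, k))
            new[k] += dt * (gain - 2.0 * c[k] * N)
        c = new
    return c

def exact_ck(k, t):
    # Eq. (3.1.7): c_k(t) = t**(k-1) / (1 + t)**(k+1)
    return t ** (k - 1) / (1 + t) ** (k + 1)

c = integrate_constant_kernel(2.0)
for k in (1, 2, 5):
    print(k, c[k], exact_ck(k, 2.0))  # numerical vs. exact, nearly equal
```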

As an alternative, consider the concentration ratio c_k/c_1 rather than the concentration itself. Since c_1 can be found separately, the master equation for c_k/c_1 has the simpler form

\frac{d}{dt}\left(\frac{c_k}{c_1}\right) = c_1 \sum_{i+j=k} \frac{c_i}{c_1}\, \frac{c_j}{c_1}. \qquad (3.1.8)

It is now expedient to define the rescaled time variable by dx = c_1\, dt, so that the master equation becomes \phi_k' = \sum_{i+j=k} \phi_i \phi_j, with \phi_1 = 1 by definition. We've seen that the solution is \phi_k = x^{k-1}. From c_1 = (1+t)^{-2}, we deduce that x = 1 - (1+t)^{-1} and, substituting into c_k = \phi_k c_1, we recover (3.1.7).


Figure 3.2: Left: cluster concentrations c_k(t) versus time for constant-kernel aggregation for k = 1, 2, 3, 4, 5 (top to bottom). The concentrations approach a common limit as t \to \infty, as predicted by the scaling form in Eq. (3.1.7). Right: c_k(t) versus k on a double-logarithmic scale for t = 1, 2, 5, 10, 20, 50, and 100 (upper left to lower right).

    Almost exponential solutions

    The solutions to the master equations often have an almost exp