
SPH 404 STATISTICAL PHYSICS

Purpose: The main purpose of this course is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.

Specific objectives (Learning outcomes)

At the end of the course you should be able to:

Define and give examples of micro and macro states of systems

Define the concepts of micro – canonical, canonical and grand canonical ensembles and use them in the calculations of thermal averages of various systems;

Derive the Maxwell – Boltzmann distribution functions,

Use the Maxwell – Boltzmann distribution functions to derive the equation of state of an ideal gas and compute averages of any function that depends on the coordinates and/or the velocities.

State and prove the Equipartition Theorem

State and prove the Virial Theorem

Establish the connection between statistical mechanics and thermodynamics

Derive the laws of phenomenological thermodynamics from Statistical Physics

Explain the statistical interpretation of the entropy.

Derive and manipulate the Fermi – Dirac distribution function, and the Bose – Einstein distribution function.

Use the F – D distribution to clarify the electrical conductivity and other physical properties of metals.

Use the quantum statistics of Bose – Einstein to derive Planck's radiation formula (the black body radiation spectrum).


CONTENT

Chapter one: Fundamentals of statistical physics

1.1 Introduction to statistical methods: background to the Course Unit; basic ideas and concepts of probability theory; statistical descriptions of systems: macro and micro – states;

1.2 Representative points, phase space, density of states;

1.3 Ignorance, Entropy

1.4 Statistical ensembles and Partition Functions

Chapter Two: Kinetic theory of gases and Boltzmann/Maxwell – Boltzmann distribution

2.1 Velocity and Position Distributions of Molecules in a gas

The kinetic calculation of the pressure and equation of state of an ideal gas;

Temperature and thermal equilibrium

Theorem of Equipartition of Energy;

The density in an isothermal atmosphere – the Boltzmann factor;

The Maxwell – Boltzmann distribution; averages and distributions

2.2 Introduction to the transport theory and irreversible thermodynamics

The mean free path and free time

Self – diffusion

Viscosity and thermal conductivity of a gas

Electrical conductivity

Chapter Three: Statistical Thermodynamics and some simple applications

3.1 Background on classical thermodynamics: the first law; the second law and the entropy;

statistical interpretation of the entropy.

3.2 The partition function; thermodynamic potentials and free energies; connection between

the partition function and the Helmholtz free energy;


3.3 Harmonic oscillator and Einstein solid: microscopic states, partition function for oscillators;

Einstein solid

3.4 Statistical Mechanics of an ideal gas.

Chapter Four: Quantum statistics of identical particles

4.1 Fermi – Dirac Statistics and Applications: derivation of the Fermi –Dirac distribution from the micro canonical ensemble and the grand canonical ensemble; thermodynamics of fermions; average occupation numbers; applications: conduction electrons in a metal (density of states), filling the allowed states (the Fermi energy), electrical conduction in metals (The F –D probability function) ;

4.2 Bose – Einstein statistics and applications: derivation of the Bose – Einstein distribution function; thermodynamics of bosons; average occupation numbers. Applications: the Bose – Einstein distribution and Planck's radiation formula; the photon gas and Planck's radiation formula.

References

1. Amit, D. J. and Verbin, Y. (2006). Statistical Physics – An Introductory Course. World Scientific Publishing Co. Pte. Ltd., Toh Tuck Link, Singapore.

2. Beck, A. H. W. (1976). Statistical Mechanics, Fluctuations and Noise. Edward Arnold (Publishers) Ltd, London, U.K.

3. Pal, P. B. (2008). An Introductory Course to Statistical Mechanics. Alpha Science International Ltd, Oxford, U.K.

4. Peebles, P. J. E. (2003). Quantum Mechanics (Chapter one: Historical Development, pp. 3–22). Prentice – Hall of India, New Delhi.

5. Chandler, D. (1987). Introduction to Modern Statistical Mechanics. Oxford University Press, Inc.

6. Rossel, J. (1974). Precis de Physique Experimentale et Theorique. Editions du Griffon, Neuchatel – Suisse.

7. Garanin, D. (2012). Statistical Physics.

8. Widom, B. (2002). Statistical Mechanics – A Concise Introduction for Chemists. Cambridge University Press, U.K.

9. Andrews, F. C. (1975). Equilibrium Statistical Mechanics (2nd Ed.). John Wiley & Sons, N.Y.

10. Terletskii, Ya. P. (1971). Statistical Physics. North – Holland Publishing Company, Amsterdam – London.

11. Scott Pratt (2007). Lecture Notes on Statistical Mechanics (PHY 831, Michigan State University).

12. Fitzpatrick, Richard (2006-02-02). "http://farside.ph.utexas.edu/teachng /sm1/lectures/no" (downloaded on October 15, 2015).

SPH 404 STATISTICAL PHYSICS

CHAPTER ONE: Fundamentals of Statistical Physics

1.1 Introduction to Statistical Methods

1.1.1 Background to the Course Unit

The main concern of Statistical Physics is to calculate average quantities and probability predictions of macroscopic systems and relate those averages to the system’s atomic structure.

Since all macroscopic systems dealt with by the theory consist of a very large number of entities (molecules, atoms, photons, spins, etc.), it is impossible to find an exact solution of the classical or quantum mechanical equations of motion. We will therefore not be interested in all the details of a system's microscopic dynamics, because the information resulting from these equations would be irrelevant. This course unit will instead focus on macroscopic average values, and statistical physics is often illustrated by applying it to equilibrium thermodynamic quantities. One of the goals of this course unit is thus to derive the laws of phenomenological thermodynamics from statistical mechanics.

When the number of particles is very large (for example, of the order of 10^19 atoms or molecules of an ideal gas in a cubic centimeter), new types of regularity appear: only a few variables are needed to characterize the macroscopic system. Thus, we confine our attention to macroscopic average values. The detailed microscopic dynamics of the system is of little value; the only relevant information is: "how many atoms or molecules have a particular energy (or a particular velocity)?" From this one can calculate the observable thermodynamic values. That is, one has to know the distribution function of the particles over energies (or over velocities), from which the macroscopic properties can be calculated. This is the scope of statistical physics.


For instance the knowledge of the distribution of molecules in velocity space enables one to compute averages of any function that depends on the velocities. For electrons and many other microscopic particles such as protons, neutrons, photons, etc. quantum statistical probability distribution functions will be used.

Statistical Physics is subdivided into two parts: the theory of equilibrium states and the theory of non – equilibrium processes. The theory of equilibrium states deals with probabilities and mean values which do not depend on time, while the theory of non – equilibrium processes deals with probabilities and mean values that depend on time. The branch of physics studying non – equilibrium processes is called kinetics.

This Course Unit comprises two parts: classical statistical physics and quantum statistical physics. In classical mechanics, the position and momentum coordinates of a particle can vary continuously, so the set of microstates is actually uncountable. In quantum statistical mechanics a physical system has a discrete set of energy eigenstates (states i corresponding to energies Ei). Both the classical and the quantum statistical theories are accurate for systems in which the interactions between atoms/molecules are particularly simple, or absent. In this course unit we will study the ideal gas using both quantum statistics and "classical statistics".

When densities are very high the quantum nature of the system becomes important and one must use quantum statistics (Fermi – Dirac and Bose – Einstein statistics).

The resulting energy distribution and the calculation of observables are simpler in the classical case. However, the use of quantum statistical mechanics presents some advantages: the absolute value of the entropy, including its correct value as T → 0, can only be obtained in the quantum case. This course unit will be mainly concerned with the classical and quantum statistics of equilibrium states.

This Course Unit is organized into four Chapters. The first Chapter deals with the "Fundamentals of Statistical Physics" in four sections: introduction to statistical methods; statistical description of systems (macro and micro – states); representative points, phase space and density of states; ignorance and entropy; and statistical ensembles and partition functions. Chapter Two, which presents the basic ideas of the "Kinetic Theory of Gases and the Maxwell – Boltzmann distribution", is an illustration of the statistical method; this chapter includes an introduction to transport theory and irreversible thermodynamics. The kinetic theory of gases is considered as an intermediate level between thermodynamics and statistical mechanics; it is also viewed as a tool for the calculation of important quantities such as transport coefficients. Chapter Three is about "Statistical Thermodynamics"; here, we first recapitulate essential ideas from phenomenological thermodynamics and thereafter establish the connection between Gibbs' statistical mechanics and thermodynamics; we derive the statistical entropy and focus on the statistical formulation of the laws of thermodynamics.


The Fourth Chapter deals with the "Quantum statistics of identical particles". Not only do we derive the Fermi – Dirac and Bose – Einstein probability distribution functions, but we also describe some of their applications. Black body radiation is treated among the applications of Bose – Einstein statistics.

1.1.2 Recall of basic ideas and concepts of Probability Theory

Probability theory is a well – defined branch of mathematics. Like any other branch of mathematics, it is a self – consistent way of defining and thinking about certain idealizations. To the scientist, mathematics is simply one of his logical tools – broadly speaking, the logic of quantities. The study of statistical physics is impossible without some knowledge of probability theory. In this section we recall the simpler ideas of classical probability theory. By emphasizing the concept of "Ensembles and Probabilities", the approach used in this section differs sharply from the way probability is used in mathematical statistics. Of course, statistical physics/mechanics deals with an enormous number of particles/systems, with probability distribution functions which are continuous functions of the independent variables. For instance, the density of free electrons in metals is of the order of 10^28 – 10^29 electrons/m^3, and typical quantum statistical mechanical calculations on the properties of the electrons involve such big numbers. Due to the continuous nature of the probability distribution functions, calculations involving series are replaced by integrations and differentiations. As a result, statistical mechanical predictions, which are stochastic or probabilistic in nature, are extremely precise.

1.1.2.1 The idea of probability or Probability of an event

There are three ways in which probability can be defined.

(a) Frequency definition of probabilities

The probability may be defined on the basis of the frequency with which an event occurs when a series of hypothetical identical experiments is carried out. For example, suppose we know that an urn contains red, white and blue counters; the experiment is to draw a counter, record its color and return the counter to the urn. After a large number (N) of trials, the number of red counters drawn is r, of white is w, and of blue is b. The probability, on the frequency basis, of drawing


red is: Pr = r/N (probability of the event "red")

white is: Pw = w/N (probability of the event "white")

blue is: Pb = b/N (probability of the event "blue").

Since we must withdraw either red, white or blue, the sum must yield the probability of certainty, which is equal to unity:

Pr + Pw + Pb =1 (1.1)

In general, it is found that the sum of the probabilities of all possible outcomes of an experiment is unity.

Example 2: A coin tossed many times

A coin is tossed a very large number of times in the air; each toss records either "head" or "tail". Let N be the number of tosses, N1 the number of heads and N2 the number of tails; then Ph = N1/N and Pt = N2/N.

(b) Classical definition of probability

Experiments are considered as tests which may have several possible outcomes or results. The probability of a specific result or event x is given by:

Px = Nx/N (1.2)

where Nx is the number of outcomes yielding x, and N the total number of possible results.

Example: consider a fair die. Tossing the die can yield the six results 1, 2, 3, 4, 5, 6. The probability of throwing a 4 is 1/6. The total probability is 6 × (1/6) = 1.

(c) Ensembles and Probabilities

Probability theory treats the properties of an "ensemble". By definition, an ensemble is a collection of systems/objects/members which has certain interesting properties. Consider as an ensemble a certain hypothetical collection of cats/rabbits. Some of their properties include color, sex, age at last birthday, number of teeth, weight, and length. Here color and sex are qualitative properties characterizing each cat/rabbit. Age and number of teeth are quantitative properties that only take discrete values. Weight and length are continuous properties that characterize each cat/rabbit.

The probability Pi of a certain property is defined to be the fraction of members/objects/systems having that property:

Pi = Ni/N (1.3)

where Ni is the number of members/objects/systems of the ensemble having property i, and N the total number of members/objects/systems of the ensemble.

For example, the probability P28 of a rabbit with 28 teeth is the ratio between the number N28 of such rabbits in the ensemble and the total number N of rabbits it contains.

Most of the properties of probabilities follow directly from the definition (1.3).

If property i occurs in no members of the ensemble, its probability is zero:

Pi = Ni/N= 0/N=0 (1.4)

If property i appears in all members of the ensemble, its probability is unity:

Pi = Ni/N= N/N=1 (1.5)

If all rabbits in the ensemble are alive, the probability of a living rabbit is unity (Pliving = 1) and the probability of a dead rabbit is zero.

Properties i and j are said to be mutually exclusive if no member/object/system of the ensemble can have both properties i and j. For mutually exclusive properties, the probability of having either property i or property j is the sum of the two probabilities:

Pi or j = Pi + Pj

Example: P29 or 30 = P29 + P30 (since no rabbit can have two different numbers of teeth).

Remarks: three definitions of probability have been given above. In statistical mechanics, the frequency and ensemble definitions are the important ones.

1.1.2.2 Discrete variables


We consider properties that are quantitative (discrete values), for example the number of teeth or the age at last birthday of our rabbits. The properties i and j are mutually exclusive (no rabbit can be both 6 and 7 years old). In this case the sum of Pi over all possible values i of the property yields unity:

Σi Pi = 1 (1.6)

If the subscripts refer to the ages of the rabbits, then Σi Pi is the probability that a rabbit has some value of the age; since we know every rabbit has some age, the sum must yield the probability of certainty, which is unity. The sum involves all the rabbits in the ensemble. A probability that satisfies equation (1.6) is said to be normalized, and Eq. (1.6) is often called the normalization condition. Often, the probability Pi is proportional to some function of i: Pi = c f(i). From equation (1.6), we deduce that

c Σi f(i) = 1, and therefore Pi = f(i) / Σj f(j) (1.7)

If properties i and j are uncorrelated, Pi and Pj are said to be independent probabilities. The joint probability Pij is then the product of Pi and Pj:

Pij = Pi Pj (1.8)

1.1.2.3 Ensemble averages

Consider a quantitative, discrete property i, represented by an ensemble. Let g(i) be a function of i. Each member of the ensemble has a value for the function g. By definition, the average value of g over the entire ensemble is given by:

<g> = (1/N) Σi Ni g(i) = Σi Pi g(i) (1.9)

It is important to note that Ni g(i) is the contribution to the sum from the group of Ni members having property i.

Mean square deviation or variance

The mean square deviation, or variance, of g is the ensemble average of the squared deviation from the mean:

<(Δg)^2> = <(g − <g>)^2> = <g^2> − <g>^2 (1.10)

The square root of the variance is called the root mean square deviation, or the standard deviation σ:

σ = [<g^2> − <g>^2]^(1/2) (1.11)

A good measure of how the values of g for the members of the ensemble are spread out around <g> is the ratio σ/<g>; this is a dimensionless measure that is very small for tightly clustered g's.
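The probabilities (1.3), the average (1.9) and the spread (1.10)–(1.11) are easy to compute for a small discrete ensemble. The following minimal Python sketch uses made-up counts Ni (rabbits grouped by number of teeth, purely for illustration), not data from the text.

```python
# Hypothetical ensemble: Ni rabbits having i teeth (illustrative numbers only).
counts = {26: 5, 27: 12, 28: 20, 29: 8}    # i -> Ni
N = sum(counts.values())                    # total number of members

# Eq. (1.3): probability of property i
P = {i: Ni / N for i, Ni in counts.items()}
assert abs(sum(P.values()) - 1.0) < 1e-12   # normalization, Eq. (1.6)

g = lambda i: i          # example property g(i); here simply the number of teeth

# Eq. (1.9): ensemble average <g> = sum_i Pi g(i)
g_avg = sum(P[i] * g(i) for i in counts)

# Eq. (1.10): variance <g^2> - <g>^2;  Eq. (1.11): standard deviation sigma
g2_avg = sum(P[i] * g(i) ** 2 for i in counts)
variance = g2_avg - g_avg ** 2
sigma = variance ** 0.5

print(f"<g> = {g_avg:.3f}, variance = {variance:.3f}, sigma = {sigma:.3f}")
print(f"relative spread sigma/<g> = {sigma / g_avg:.4f}")
```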

1.1.2.4 Continuous variables.

In many cases of interest in physics and engineering, the probability of the event x, P(x), is a continuous function of x. It is then important to introduce the probability density, or distribution function, f(x), associated with x and fulfilling the following conditions:

(i) f(x) ≥ 0 for all values of x in the range of x, R(x);

(ii) f(x) is normalized over the range R(x):

∫_R(x) f(x) dx = 1 (1.12a)

(iii) the probability of finding a value of x between a and b is:

P(a ≤ x ≤ b) = ∫_a^b f(x) dx (1.12b)

The terminology probability density for f is the analogue of the mass density ρ, defined by the mass per unit volume: ρ(x, y, z) dx dy dz is the mass contained in the volume element dx dy dz.
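As a numerical illustration of conditions (1.12a) and (1.12b), the sketch below uses an exponential density f(x) = λ exp(−λx) on x ≥ 0 (an arbitrary example, not a distribution discussed in the text) and checks the normalization and the probability of finding x between two chosen limits.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (kept local to avoid depending on a NumPy version)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

lam = 2.0                                  # illustrative rate parameter
f = lambda x: lam * np.exp(-lam * x)       # probability density on x >= 0

x = np.linspace(0.0, 20.0, 200_001)        # the range 0..20/lam holds essentially all the probability
norm = trapezoid(f(x), x)                  # condition (1.12a): should be ~1

a, b = 0.5, 1.5                            # illustrative limits
xa = np.linspace(a, b, 10_001)
prob_ab = trapezoid(f(xa), xa)             # condition (1.12b)

print(f"integral of f over R(x) = {norm:.6f}")
print(f"P({a} <= x <= {b})      = {prob_ab:.6f}")
print(f"exact value             = {np.exp(-lam*a) - np.exp(-lam*b):.6f}")
```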

1.2 Statistical description of systems: micro – states and macro – states

1.2.1 Introduction

The purpose of Statistical Physics is to study systems composed of many particles (for example, 1 cubic centimeter of a gas contains roughly 10^19 atoms or molecules). We can solve the Schrodinger equation for one particle (for example, one electron in the hydrogen atom) to find the energy and the wave function of the electron. For many particles, the total wave function will be a linear combination of one – electron wave functions:

(1.13)


where φα(ri) is the wave function of particle i in state α with energy Eα. For non – interacting particles, like the atoms and molecules of an ideal gas, each particle is characterized by its own energy levels. At a temperature T different from zero, the system of many particles possesses a total energy E. Some of the questions that Statistical Physics should answer are the following:

How is the total energy distributed among the particles? How are the particles distributed over the energies?

One of the most important requirements of phenomenological thermodynamics is that the entropy should be maximum at equilibrium.

1.2.2 Definitions of micro – state and macro – state

1.2.2.1 Microstate

A microstate is defined as a state of the system where all parameters of constituent particles are specified.

Classical thermodynamics describes macroscopic systems in terms of a few macroscopic variables (T, V, N, …). We can use two approaches to describe such a macroscopic system: a classical mechanical one and a quantum mechanical one.

(a) Classical Mechanics

The state of a single particle is specified by its position coordinates (q1, q2, q3) and its momentum coordinates (p1, p2, p3). For N particles, one needs 6N coordinates (3N positions and 3N momenta) to describe the system in the phase space representation. Thus, from the point of view of classical mechanics, the state of a system of N particles is described by 6N variables. This description of matter on a microscopic scale is called a microscopic state, or microstate. In summary, a microstate is defined by the representative point in phase space.

(b) Quantum Mechanics

The energy levels and the state of particles in terms of quantum numbers are used to specify the parameters of a microstate.

1.2.2.2 Macrostate

A macrostate is defined as a state of the system in which the distribution of particles over the energy levels is specified. Therefore, the set of numbers of particles, Ni, in each quantum state i with energy εi specifies a macrostate. In fact, if these numbers are known, the mean energy and other average quantities of the system can be calculated. A macrostate contains a huge number of microstates.

The equilibrium thermodynamic macrostate is described by 3 macroscopic variables, (P, V, T) or (P, V, N) or (E, V, N). These macroscopic variables are related by an equation of state, which for the case of an ideal gas is given by:

PV = NkT


In statistical mechanics, a system in equilibrium tends towards the most stable macrostate. The stability of a macrostate is understood from the perspective of its microstates: one of the most important assumptions of statistical mechanics is that the most stable macrostate is the one that contains the maximum possible number of microstates.

For a large number of particles, each macrostate k can be realized by a very large number Ωk of microstates. The main assumption of statistical physics is that all microstates occur with the same probability; that is, from the dynamical point of view, they are equally probable.

It may appear easier to use a quantum description which specifies the quantum states of all the atoms, that is, the microstate. Since the atoms interact, this state changes very rapidly; but the observed macrostate does not change. Therefore, a macrostate contains a huge number of different microstates.

The theory of quantum statistics for systems of non – interacting particles provides more accurate results. In the absence of interactions, each particle has its own set of quantum states, with corresponding discrete energy values, in which it can be at any moment, and for identical particles these sets of states are identical. The particles can be distributed over their own quantum states in a great number of different ways, the so – called realizations. Each realization of this distribution is called a microstate of the system.

The true probability pk of the macrostate k is normalized: Σk pk = 1. It is important to highlight that both Ωk and pk depend on the whole set of Ni:

pk =pk (N1, N2,…….); Ωk = Ωk (N1, N2,……) (1.13b)

For an isolated system the number of particles N and the total energy E are conserved; we have the following constraints involving Ni

Σi Ni = N,   Σi Ni εi = E (1.13c)

where εi is the energy of a particle in state i. The average number of particles in state i over all macrostates k can be calculated using the true probability pk and the number of particles in state i in each macrostate k. In the case of large N, the dominating true probability pk is found by maximizing the "ignorance" with respect to all the Ni, taking into account the constraints of equation (1.13c) [see Section 1.4].


To illustrate the concepts of micro – states and macro – states, we consider the example of two – state particles, namely coin tossing.

A tossed coin (a coin thrown in air) can land on the ground in two positions: Head up or tail up. Considering the coin as a particle, one can say that this particle has two “quantum” states, 1 corresponding to the head up and 2 corresponding to the tail up. If N coins are thrown in air, this can be considered as a system of N particles with two quantum states each. The microstates of the system are specified by the states occupied by each coin. As each coin has 2 states, there are altogether

Ω = 2^N (1.13)

microstates. The macrostates of this system are defined by the numbers of particles in each state. Let N1 be the number of particles in state 1, and N2 the number of particles in state 2. N1 and N2 satisfy the following constraint condition:

N1 + N2 = N (1.14)

Thus, one can take N1 (or N2) as the number k labeling the macrostates. The number of microstates in one macrostate, that is, the number of different ways of picking N1 particles to be in state 1 (all others being in state 2) out of N indistinguishable particles, avoiding multiple counting of the same microstates, is, according to combinatorial analysis, given by:

Ω(N1) = N!/[N1!(N − N1)!] = N!/(N1! N2!) (1.15a)

It can be shown that:

(1.15b)

The thermodynamic probability Ω(N1) has a maximum at N1 = N/2, that is, when half of the coins are in state 1 (head up) and half of the coins are in state 2 (tail up). The corresponding macrostate is the most probable state. Indeed, for an individual coin the probabilities to land head up and tail up are both equal to 0.5.
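The multiplicity (1.15a) can be explored directly. The short sketch below, for an arbitrary illustrative choice of N = 20 coins, tabulates Ω(N1) and confirms that the total number of microstates is 2^N and that Ω peaks at N1 = N/2, as stated above.

```python
from math import comb

N = 20  # illustrative number of coins

# Multiplicity of the macrostate with N1 heads, Eq. (1.15a)
omega = [comb(N, N1) for N1 in range(N + 1)]
total = sum(omega)            # equals 2**N, the total number of microstates, Eq. (1.13)

assert total == 2 ** N

N1_max = omega.index(max(omega))
print(f"total microstates 2^N          = {total}")
print(f"most probable macrostate at N1 = {N1_max} (= N/2 = {N // 2})")
print(f"probability of that macrostate = {max(omega) / total:.4f}")
print(f"probability all coins heads    = {1 / total:.2e}")
```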


1.3 Phase space, representative points and density of states

1.3.1 Introduction

In this section, we introduce the concept of "phase space", a key concept in Statistical Physics which combines the position coordinates and the momentum coordinates. The idea of representative points in phase space is also introduced, and the expression for the density of states for free particles will be established and discussed.

1.3.2 Phase space, representative point and density of states

1.3.2.1 Phase space and representative point

Let us consider a system of N mass points, moving according to the laws of classical mechanics. In statistical physics the equations of motion of such a system are given in the Hamiltonian form as:

dqk/dt = ∂H/∂pk and dpk/dt = −∂H/∂qk, for all k = 1, 2, …, f (1.16)

In these equations f is the number of degrees of freedom (f = 3N in this case) and H(qk, pk, t) is the Hamiltonian function. If H does not depend explicitly on time, it represents the total energy of the system. Hamilton's equations of motion constitute a set of 2f first – order differential equations. If all the initial conditions are known, the 6N equations above can be solved and qk(t) and pk(t) can be obtained, that is, the generalized trajectory of the system. If we use the so – called canonical variables X1, X2, …, X6N, these constitute the coordinates of an abstract 6N – dimensional space, called the phase space of the system. In statistical mechanics, it is important to think in terms of phase space rather than simple configuration space.

At any instant, a particle's state is characterized by its position and momentum coordinates. Therefore, the dynamic state of a single particle is determined by six parameters, three for the position and three for the momentum, corresponding to a representative point in a six – dimensional phase space (μ). Each point in phase space determines a unique phase trajectory. For the case of one particle, the phase space (μ) is divided into small cells of size dω = d^3p d^3q, which is a six – dimensional volume element. The probability for the particle to be in one of the cells is proportional to the volume of the cell and to the "Boltzmann factor". For a system of N particles there are 3N position coordinates and 3N momentum coordinates. The phase space is then a 6N – dimensional space in which the axes are the q's and the p's. The dynamic state of such a system is fixed by 6N parameters defining a representative point in the 6N – dimensional phase space (Γ).
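As a concrete illustration of Hamilton's equations (1.16) and of a phase trajectory, the following sketch integrates a one – dimensional harmonic oscillator with H = p^2/(2m) + (1/2)mω^2q^2 (the mass, frequency and time step are arbitrary illustrative values) and checks that the representative point (q, p) stays on a curve of essentially constant H.

```python
m, omega = 1.0, 2.0          # illustrative mass and angular frequency
dt, steps = 1e-3, 10_000

def H(q, p):
    """Hamiltonian of a 1D harmonic oscillator."""
    return p**2 / (2*m) + 0.5 * m * omega**2 * q**2

q, p = 1.0, 0.0              # initial representative point in the (q, p) phase plane
E0 = H(q, p)

for _ in range(steps):
    # Hamilton's equations: dq/dt = dH/dp = p/m,  dp/dt = -dH/dq = -m*omega^2*q
    p -= 0.5 * dt * m * omega**2 * q      # half kick
    q += dt * p / m                       # drift
    p -= 0.5 * dt * m * omega**2 * q      # half kick

# The phase trajectory is an ellipse of constant energy; the drift should be tiny.
print(f"relative energy drift |H - H0|/H0 = {abs(H(q, p) - E0) / E0:.2e}")
```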


1.3.2.2 Density of states

The number of dynamic states available in the phase space μ of a single particle depends on the six – dimensional volume element, dω = d^3p d^3q, and on the "minimum cell" whose extension is given by dωmin = h^3, according to the Heisenberg uncertainty principle. Therefore, the maximum number of cells corresponding to dynamic states with energy between ε and ε + dε (see Figure 1) is given by:

g(ε) dε = (1/h^3) ∫_{ε ≤ H ≤ ε+dε} d^3q d^3p (1.17)

g(ε) is the density of states per unit energy, and g(ε)dε is the number of possible dynamic states in the energy range between ε and ε + dε.

Example: Density of states for free particles in a volume V

To the six – dimensional volume element dω = dq1 dq2 dq3 dp1 dp2 dp3 = d^3q d^3p corresponds the number of states dZ' = dω/h^3 = d^3q d^3p / h^3. In many cases, we are only interested in the magnitude of the linear momentum, p, and then in the energy. The volume element in momentum space is then d^3p = 4πp^2 dp. On the other hand, for free particles the interactions are neglected, so that U(qi, qj) = 0. Since the total energy equals the kinetic energy, ε = p^2/(2m), one easily obtains the following expression for the density of dynamic states with energy between ε and ε + dε:

g(ε) = (2πV/h^3) (2m)^(3/2) ε^(1/2) (1.18)


For particles having a spin angular momentum, there are additional dynamic states related to the orientation of the spin. Even if these are generally degenerate, they contribute to the density of states by a factor (2s + 1). Therefore, in such a case the density of states is given by:

g(ε) = (2s + 1) (2πV/h^3) (2m)^(3/2) ε^(1/2) (1.20)
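To get a feel for the magnitude of g(ε) in equation (1.20), here is a minimal numerical sketch for conduction electrons (s = 1/2, so 2s + 1 = 2) in a volume of 1 cm^3, evaluated at an energy of a few electron-volts; the volume and the energy are arbitrary illustrative choices, not values from the text.

```python
import math

h   = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31      # electron mass, kg
eV  = 1.602e-19      # joules per electron-volt

def g(eps, V, s=0.5, m=m_e):
    """Density of states of Eq. (1.20), in states per joule."""
    return (2*s + 1) * (2 * math.pi * V / h**3) * (2*m)**1.5 * math.sqrt(eps)

V = 1e-6             # 1 cm^3 expressed in m^3 (illustrative)
eps = 5.0 * eV       # illustrative energy, of the order of a metallic Fermi energy

print(f"g(5 eV) = {g(eps, V):.3e} states per joule")
print(f"        = {g(eps, V) * eV:.3e} states per eV")
```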


In the general case of a multi – dimensional space, the volume element of phase space is written as dΓ = dq1 dq2 … dq3N dp1 dp2 … dp3N (1.21)

The density of dynamic states is also called the number of representative points. Thus, for a system of N particles, where f = 3N degrees of freedom are involved, the volume element dΓ cannot contain more than dΓ/h^f representative points, according to the Heisenberg uncertainty principle. Transforming this to energy, the maximum number of representative points (or density of dynamic states) is given by:

g(E) dE = (1/h^f) ∫_{E ≤ H ≤ E+dE} dΓ (1.22)

1.4 Ignorance and Entropy

1.4.1 Introduction

In this section, we first introduce the concept of ignorance which is a measure of the number of ways a system can be realized. From Ignorance, we go on with the statistical definition of the entropy. The maximization of the ignorance, or equivalently entropy leads to the fundamental principle of statistical mechanics, that is “all micro states in phase space are equally probable” and to the Boltzmann distribution. The partition function (Z), which is a key function of statistical Physics appears as the normalizing factor of the Boltzmann probability. The partition function is used to calculate various thermal averages (average thermal energy, average charge, and entropy).

1.4.2 Ignorance

(a) Two –state particles

We consider a container holding an N – atom assembly, divided into two parts by a barrier that has a small hole in it (see Figure 1a). Figure 1a shows the rather unlikely situation in which all the N points representing the positions and velocities of the individual atoms are in the same half of the container.


Figure 1a : Unlikely situation – the N representative points are in the same half of the container.

The figure below (Fig.1b), by contrast, shows a more probable situation in which half of the points are in one part of the container and the other half are in the other part.

Figure 1b: The most probable situation – half of the representative points are in one part of the container and the other half are in the other part.

Actually, the distribution of atoms in the above container is a combinatorial problem of the type: "find the number of ways, Ω, in which the N atoms can be distributed into two groups comprising, respectively, N1 and N2 particles". Ω is given by:

Ω = N!/(N1! N2!) (1.23)

It can be shown that the ignorance Ω is maximum when N1 = N2 (the most probable situation). Therefore, after a sufficiently long time an equilibrium condition will be reached in which equal numbers of atoms/molecules are distributed between the two halves. The most probable situation is the one for which the entropy of the system is maximal. According to Boltzmann's treatment of the entropy (S = k ln Ω), maximizing the ignorance is equivalent to maximizing the entropy. The maximization of the entropy leads to the fact that all microstates in phase space are equally probable. This is equivalent to stating that there will always be a tendency for the states of a system to occupy as much of phase space as they are permitted. In the next paragraph we consider the more general case.

(b) Many – state particles

Consider a large number of systems described by a set of states 1, 2, 3, …, i, … with corresponding energies ε1, ε2, ε3, …, εi, … . Furthermore, let the number of systems in state 1 be N1, in state 2 be N2, and so on. We want to find the number of ways of distributing N systems over n boxes so that there are Ni particles in the i-th box; that is, we look for the number of microstates in the macrostate described by the numbers Ni. We define the ignorance Ω as a measure of the number of ways to do this; we have

Ω = N!/(N1! N2! ⋯ Nn!) (1.24)

The following constraint applies:

Σi Ni = N (1.25)

Our goal is to maximize ignorance while satisfying this constraint.

1.4.3 Entropy and its statistical interpretation

Entropy, the most fundamental quantity of Statistical Mechanics, is a very difficult concept. There are two different ways of introducing the entropy. One is due to Boltzmann. In his study, Boltzmann focused on the distribution of the possible states of a system, which include the states of motion. Boltzmann's statistical treatment defined the entropy by the following relationship:

S = k ln Ω (1.26)

where k is Boltzmann's constant, k = 1.38 × 10^(-23) J/K. Equation (1.26) is Boltzmann's statistical definition of the entropy. Ω is related to the probability of a state, as measured by the number of ways in which it can be realized. It is very important to note that the word "state" here involves both the positions and the velocities of all the atoms in the system. The state referred to in the above definition is a state in phase space rather than in simple configuration space.


The entropy S will be maximized when Ω is maximized. We have divided by N (number of systems) so that we can speak of the entropy in an individual system. Therefore, the entropy of a macroscopic system is proportional to the natural logarithm of the number of quantum states available at mean energy.

From Eqs. (1.24) and (1.26), the entropy S is given by:

S = (k/N) ln Ω = (k/N) ln [N!/(N1! N2! ⋯)] (1.27)

The analysis of expression (1.27) for large values of N (N ≫ 1) is simplified by using the Stirling formula:

ln x! ≈ x ln x − x (for x ≫ 1) (1.28)

ln N! ≈ N ln N − N (1.29)

ln Ni! ≈ Ni ln Ni − Ni (1.30)

Substituting Equations (1.29) and (1.30) into Eq. (1.27) and rearranging, we find easily that

S = −k Σi pi ln pi (as N → ∞) (1.31)

where pi = Ni/N is the probability for a given system to be in state i. Since 0 ≤ pi ≤ 1 (so that ln pi ≤ 0), the entropy is never negative. If the system were known with certainty to be in a particular N – particle quantum state j, then all members of the ensemble would be in state j; we would have pj = 1, and S would be zero.
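Equation (1.31) is easy to test numerically. The sketch below, for a hypothetical system with M = 4 states, evaluates S = −k Σ pi ln pi for a state known with certainty (S = 0), for a uniform distribution (the maximum, S = k ln M), and for an intermediate case.

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K

def entropy(p):
    """Statistical entropy S = -k * sum_i p_i ln p_i of Eq. (1.31); 0*ln(0) is taken as 0."""
    return -k * sum(pi * math.log(pi) for pi in p if pi > 0.0)

M = 4
certain      = [1.0, 0.0, 0.0, 0.0]
uniform      = [1.0 / M] * M
intermediate = [0.5, 0.3, 0.15, 0.05]

print(f"S(certain)      = {entropy(certain):.3e} J/K  (zero)")
print(f"S(intermediate) = {entropy(intermediate):.3e} J/K")
print(f"S(uniform)      = {entropy(uniform):.3e} J/K = k ln M = {k*math.log(M):.3e} J/K")
```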

Our goal is to maximize S. Maximizing a multi – dimensional function (in this case a function of N1, N2, …) subject to a constraint is often done with Lagrange multipliers. In that case, one extremizes the quantity S − λC with respect to all the variables and with respect to λ (it is convenient to work with S/k, which has the same maximum). Here the constraint C must be some function of the variables that is constrained to zero; in our case we put C = Σi pi − 1. The coefficient λ is called the Lagrange multiplier. Writing the extremization conditions,


∂/∂pi [ S/k − λ (Σj pj − 1) ] = 0 (1.32a)

∂/∂λ [ S/k − λ (Σj pj − 1) ] = 0 (1.32b)

The second expression leads directly to the constraint Σi pi = 1, while the first expression, using Eq. (1.31), gives −ln pi − 1 − λ = 0, and therefore

ln pi = −(1 + λ), or pi = exp(−1 − λ) (1.33)

The parameter λ is then chosen to normalize the distribution: Σi pi = Σi exp(−1 − λ) = 1.

The important result here is that all microstates are equally probable. This is the consequence of stating that you know nothing about which states are populated: maximizing ignorance is equivalent to stating that all states are equally populated. This can be considered as a fundamental principle: disorder (or entropy) is maximized. All of statistical mechanics derives from this principle; Statistical Mechanics is based on the principle that all states in phase space are equally probable. In other words, each cell of phase space has, from the dynamical point of view, the same probability of being occupied. It is on this principle that the micro – canonical ensemble is based. Constraints such as fixing E and/or N can be incorporated in this treatment of ignorance and entropy by applying additional Lagrange multipliers. For example, conservation of the average energy is handled by adding an extra Lagrange multiplier β in equation (1.33). After a few calculations, it is found that the probability pi for being in state i is given by:

pi = exp (- 1 – λ - βεi) (1.34a)

Thus, the probability for a state i to be populated is proportional to the Boltzmann factor exp(−βεi). This is the Boltzmann distribution, with β being identified as 1/(kT). Again, the parameter λ is then chosen to normalize the probability. The Boltzmann distribution can be used to give the relationship between the populations of

two different particle states. Let Ni and Nj be the numbers of elements in states with energies εi and εj, respectively. The ratio of the probabilities of occupancy of the two states is given by:


Ni/Nj = exp[−(εi − εj)/kT] (1.34b)

Equation (1.34b) shows that it is most unlikely for particles to be in state i if (εi − εj) ≫ kT; in fact, in such a case (Ni/Nj) ≪ 1. Therefore, lower energy levels are populated first. Sometimes several distinct particle states have the same energy; they are said to be degenerate. Let gn and gm be the degeneracies of the energy levels n and m, respectively. It can be shown that the ratio of the occupancies of the two levels is:

Nn/Nm = (gn/gm) exp[−(εn − εm)/kT] (1.34c)

The difference between equations (1.34b) and (1.34c) is the presence of the degeneracies, also called statistical weights, in equation (1.34c).

The Boltzmann distribution can also be used to calculate the probability for a particle to surmount an energy barrier of height ΔE. The following is the result obtained (see exercise):

P(ΔE) ∝ exp(−ΔE/kT) (1.34d)

Therefore, the probability to surmount the barrier is proportional to the Boltzmann factor. Thus, the probability of surmounting the energy barrier decreases as the height of the energy barrier increases, and increases as the temperature increases.
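As a quick numerical illustration of the population ratios (1.34b) and (1.34c), the following sketch evaluates the ratio for a hypothetical pair of levels separated by 0.1 eV at two temperatures, with and without degeneracy factors; all numbers are illustrative and not taken from the text.

```python
import math

k_eV = 8.617e-5          # Boltzmann constant in eV/K

def ratio(dE, T, g_upper=1, g_lower=1):
    """Population ratio N_upper/N_lower = (g_upper/g_lower) exp(-dE/kT), Eq. (1.34c)."""
    return (g_upper / g_lower) * math.exp(-dE / (k_eV * T))

dE = 0.1                 # illustrative level spacing, eV
for T in (300.0, 1000.0):
    print(f"T = {T:6.0f} K: N_up/N_low = {ratio(dE, T):.4f}  "
          f"(with g_up = 3, g_low = 1: {ratio(dE, T, 3, 1):.4f})")
```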

For any quantity which is conserved on the average, one needs to use a corresponding Lagrange multiplier. For example, a multiplier α could be used to restrict the average particle number or charge. The probability for the system to be in state i would be:

pi = exp (- 1 – λ - βεi - αQi) (1.35)

The chemical potential μ is related to the multiplier α by:

α = - (μ/kT) (1.36)

In many textbooks, the charge is replaced by the number of particles N. This is correct if the number of particles is conserved.

1.5 Statistical Ensembles and Partition Functions


1.5.1 Statistical Ensembles

According to Gibbs (1902), "ensembles" (also statistical ensembles) are collections of very large numbers of identical systems, which may be microscopic or macroscopic. An ensemble is an idealization consisting of a large number of virtual copies of a system, considered all at once, each of which represents a possible state that the real system might be in. In other words, a statistical ensemble is a probability distribution for the state of the system. Gibbs used probability calculations to predict average quantities. He assumed that the behavior of an ensemble is the same as the long – time average behavior of a single system. In this section we discuss the effects of fixing the energy and/or the particle number. Gibbs defined three (3) important thermodynamic ensembles:

(i) The Micro – canonical Ensemble: a statistical ensemble in which the total energy and the number and type of particles in the system are each fixed to particular values; it can be viewed as a system carefully insulated from the surrounding objects so that the total energy is constant. The system must remain totally isolated (unable to exchange energy or particles with its environment) in order to stay in statistical equilibrium. It is therefore completely described by N, V, and E.

(ii) The canonical Ensemble (N fixed, E varies): the number of systems or microparticles in the Canonical ensemble is constant, while the energy varies. In place of energy, the temperature is specified; the system is in thermal contact with a very large heat reservoir, which fixes the temperature T. The canonical ensemble is appropriate for describing a closed system, which is in, or has been in, weak thermal contact with a heat bath. In order to be in statistical equilibrium the system must remain totally closed (unable to exchange particles with its environment), and may come into weak thermal contact with other systems that are described by ensembles with the same temperature. The equilibrium state of such an ensemble is completely described by N, V, and T. We can use mathematical techniques to obtain average quantities about a canonical ensemble.

(iii) The Grand Canonical Ensemble (E varies, N varies): the system consists of the material contained in a volume V. Its temperature T is fixed by its thermal contact with a large heat reservoir. This is a statistical ensemble in which neither the energy nor the particle number is fixed; in their place, the temperature and the chemical potential are specified. The GCE is appropriate for describing an open system: one which is, or has been, in weak contact with a reservoir (thermal contact, chemical contact, electrical contact, etc.). The GCE remains in statistical equilibrium if the system comes into weak contact with other systems that are described by ensembles with the same temperature and chemical potential. The wall separating the system from the reservoir allows the passage of both heat and particles. From the mathematical point of view, the GCE is the most useful and flexible of the three.

The above ensembles differ by which quantities vary, as in the following table (Table 1).

Ensemble Energy Charges/Number of particles

Micro - canonical Fixed Fixed

Canonical Varies Fixed

Grand Canonical Varies Varies

Table 1: Three Types of Statistical Ensembles. V, N, and T are fixed for the micro- canonical and canonical ensembles, while μ, V, and T are fixed for the grand canonical ensemble.

1.5.2 Partition Functions

The probability for the system to be in state i (with energy εi) is given by:

pi = exp(−1 − λ − βεi) (1.37a)

Since the probabilities are normalized,

Σi pi = 1 (1.37b)

Therefore, Equation (1.37a) can be written as follows:

pi = exp(−βεi) / Z (1.38a)

Z = Σi exp(−βεi) (1.38b)

Z is known as the partition function according to Darwin and Fowler or “Zustandssumme”, meaning sum – over – states, according to Planck. The probability given in equation (1.38a) which is the Boltzmann probability distribution is applicable when the system is in thermal equilibrium. It is also known as “Gibbs probability distribution” or the “canonical (standard) distribution”.


The partition functions may be used in the calculation of average energy or charge. For instance, the average energy is given by:

<E> = Σi εi exp(−βεi) / Z = −(1/Z) ∂Z/∂β = −∂(ln Z)/∂β (1.39)

The last equality on the right hand side of equation (1.39) is justified by the fact that the numerator of that expression is the negative derivative of Z with respect to β; this is why we have expressed the average energy as the negative derivative of ln Z with respect to β. It is very important to stress that equation (1.39) constitutes the statistical mechanical expression for the energy of the system.

Statistical mechanics enables one to determine the entropy from the partition function. According to equations (1.31) and (1.38a), we have:

S = −k Σi pi ln pi = k ln Z + kβ<E> = k ln Z + <E>/T (1.40)

Equation (1.40) is one form of the statistical definition of the entropy.
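The chain partition function → probabilities → averages → entropy of equations (1.38)–(1.40) can be verified directly. The sketch below uses a hypothetical three – level system with energies 0, ε and 2ε (illustrative values only; this is not the solution of Exercise 3, whose energies are not specified above) and checks that <E> agrees with the numerical derivative −∂(ln Z)/∂β and that S = k ln Z + <E>/T.

```python
import math

k = 1.380649e-23                    # Boltzmann constant, J/K
eps = 0.02 * 1.602e-19              # illustrative level spacing: 0.02 eV in joules
levels = [0.0, eps, 2.0 * eps]      # hypothetical three-level system

def thermo(T):
    """Return (Z, <E>, S) for the level scheme above at temperature T, Eqs. (1.38)-(1.40)."""
    beta = 1.0 / (k * T)
    boltz = [math.exp(-beta * e) for e in levels]
    Z = sum(boltz)                                   # Eq. (1.38b)
    p = [b / Z for b in boltz]                       # Eq. (1.38a)
    E_avg = sum(pi * e for pi, e in zip(p, levels))  # Eq. (1.39)
    S = -k * sum(pi * math.log(pi) for pi in p)      # Eq. (1.40)
    return Z, E_avg, S

T = 300.0
Z, E_avg, S = thermo(T)

# Check <E> = -d(ln Z)/d(beta) with a central finite difference in beta
beta = 1.0 / (k * T)
db = 1e-4 * beta
lnZ = lambda b: math.log(sum(math.exp(-b * e) for e in levels))
E_num = -(lnZ(beta + db) - lnZ(beta - db)) / (2.0 * db)

print(f"Z = {Z:.4f}, <E> = {E_avg:.3e} J, S = {S:.3e} J/K")
print(f"-d(lnZ)/d(beta) = {E_num:.3e} J   (agrees with <E>)")
print(f"k ln Z + <E>/T  = {k*math.log(Z) + E_avg/T:.3e} J/K (agrees with S)")
```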

Exercise 1: The energy levels of a one – dimensional quantum SHO are given by En = (n + 1/2) ħω, where n = 0, 1, 2, 3, … is the vibrational quantum number and ω is the angular frequency. Calculate the partition function of such a SHO and deduce its average energy.

Exercise 2: Using two different ways, show that <n> = 1/[exp(ħω/kT) − 1], where n is the degree of excitation of the simple harmonic oscillator.

Exercise 3: Consider a three – level system with energies As a function of T find (i) the partition function (ii) the average energy and (iii) the entropy.

Exercise 4: Consider two single – particle levels whose energies are -ε and + ε. Into these levels, we place two electrons (no more than one electron of the same spin per level). As a function of β (or T) find: (a) the partition function, (b) the average energy and (c) the Entropy. Calculate the limits of <E> and S when T →0 and when T→∞.


Chapter Two: Kinetic Theory of Gases and the Boltzmann/Maxwell – Boltzmann distributions

2.1 Introduction

The kinetic theory of gases is the molecular approach to the study of gases. The kinetic theory of gases is considered as an intermediate level between thermodynamics and statistical mechanics; it is also viewed as a tool for the calculation of important quantities such as transport coefficients. The kinetic theory was developed by Maxwell, Boltzmann and Gibbs in the 19th century. The theory enabled Gibbs to formulate statistical mechanics. In this Chapter we will see that the results derived from the kinetic theory are compatible with the laws of thermodynamics. In Chapter three we will see that statistical mechanics agrees with the kinetic theory as well as with thermodynamics.

We may look at a gas as made up of moving atoms or molecules. The goal of any molecular theory of matter is to understand the link between the macroscopic properties of matter and its atomic and molecular structure. To show how the microscopic properties are related to measurable macroscopic quantities (P and T) of an ideal gas, we need to examine the dynamics of molecular motion. The pressure exerted by a gas results from the collisions of its molecules with the walls of the container. The momentum of a molecule is changed during such a collision. This leads to a change of momentum imparted to the wall. The pressure is equal to the force per unit area, or the average rate of change of the momentum transferred to the wall per unit area.

In the next section we show that we can obtain a relationship between the pressure and the microscopic properties by estimating the rate of change of linear momentum resulting from elastic collisions between the gas molecules and the walls of the container. This simple kinetic molecular model enables us to understand how the ideal gas equation of state is related to Newton's laws.

We can also deduce that the average translational kinetic energy per molecule is related to the temperature by KEavg = (3/2)kT. We see that the temperature of a gas is related to the kinetic energy of its molecules. From this relation we embark on the general statement and proof of the equipartition of energy theorem.

2.2 Kinetic calculation of the pressure and equation of state of an ideal gas

The Maxwell – Boltzmann distribution law is a fundamental principle of statistical mechanics which can be used to derive the ideal gas equation of state.

(i) Avogadro’s law: at equal pressure and temperature, equal volumes of gases contain an equal number of molecules.

(ii) We start by showing that Avogadro's law is a direct consequence of Newton's laws and the Maxwell – Boltzmann distribution law.

A microscopic description of a dilute gas begins with the clarification of the concept of the pressure.

Assumptions: all the molecules of the gas have the same mass, and the collisions between the gas molecules and the walls of the container are elastic.

By definition, the pressure of a gas is the force which the gas exerts on a unit area of the walls of the container (or the force which must be applied on a wall in order to keep it stationary). In such an elastic collision, the amount of momentum imparted to the wall is

Δpx = 2p'x = 2mvx (2.1)

The number of molecules with a velocity component vx along the x axis that will strike a wall of area A during a very short time interval Δt is equal to the number of molecules with such a velocity inside a cylinder of area A and length vxΔt. This number is given by the product of the volume ΔV = A vxΔt and the density. The amount of momentum transferred to the piston during the time Δt by this type of molecule (with velocity component vx along x) is

(Δp'x)tot = n(vx) · 2mvx · vxΔt A (2.2)

where n(vx) is the number of molecules per unit volume with velocity component vx. The force exerted on the piston is given by the amount of momentum transferred to the piston per unit time, and the pressure is the force per unit area.


Therefore, the contribution of the molecules with velocity component vx to the pressure is

P(vx) = 2m vx^2 n(vx) (2.3)

Clearly, not all the gas molecules have the same velocity component along x, and we must sum over all the possible values of vx. Using the Maxwell – Boltzmann distribution for the velocity, we find the following expression for the total pressure exerted on the walls:

P = n m <vx^2> = nkT = (N/V)kT (2.4a)

Rearranging equation (2.4a), we have:

PV = NkT (2.4b)

Equation (2.4b) is the equation of state of N molecules of an ideal gas.
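The kinetic result P = n m <vx^2> = nkT can be checked by direct sampling: in equilibrium the Maxwell – Boltzmann distribution makes each velocity component Gaussian with variance kT/m. The sketch below, assuming an argon-like molecular mass at 300 K in 1 cm^3 (all numbers illustrative), samples the velocities and also verifies the equipartition value <KE> = (3/2)kT.

```python
import numpy as np

k = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                   # temperature, K
m = 6.63e-26                # illustrative molecular mass (~argon), kg
N = 2.5e19                  # illustrative number of molecules
V = 1e-6                    # 1 cm^3 in m^3
n = N / V                   # number density

rng = np.random.default_rng(0)
# Maxwell-Boltzmann: each velocity component is Gaussian with variance kT/m
v = rng.normal(0.0, np.sqrt(k * T / m), size=(1_000_000, 3))

P_kinetic = n * m * np.mean(v[:, 0] ** 2)        # P = n m <vx^2>, Eq. (2.4a)
P_ideal   = n * k * T                            # P = n k T
KE_avg    = 0.5 * m * np.mean(np.sum(v**2, axis=1))

print(f"n m <vx^2> = {P_kinetic:.4e} Pa")
print(f"n k T      = {P_ideal:.4e} Pa")
print(f"<KE> = {KE_avg:.4e} J,  (3/2)kT = {1.5 * k * T:.4e} J")
```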

a. Theorem of equipartition of energy

In a state of equilibrium the averages of vx^2, vy^2 and vz^2 have the same value; therefore, the pressure of a gas is given by:

P = nkT = (N/V)kT = (1/3) n m <v^2> (2.5a)

PV = NkT = (2/3) U (2.5b)

In Equations (2.5a) and (2.5b), U is the total kinetic energy of the gas, N is the total number of molecules and V its volume. These equations state that two gases kept at the same pressure, and whose molecules have the same average kinetic energies, will occupy equal volumes if they contain the same number of molecules. This important result comes from the direct use of the atomic assumption, Newton's laws and the Maxwell – Boltzmann distribution. If we identify "temperature" with "average kinetic energy" (up to a constant factor), we obtain the equation of state of an ideal gas. Treating molecules as rigid objects devoid of internal structure, which exchange momentum via elastic collisions, we found that the average kinetic energy in any gas at a given temperature is given by:

KEavg = (1/2) m <v^2> = (3/2) kT (2.6)

The total energy of a monatomic gas is given by U = (3/2)NkT. This result is general and is known as the theorem of equipartition of energy, which states that molecules in thermal equilibrium have the same average energy associated with each independent degree of freedom of their motion, and that this energy is (1/2)kT per molecule or (1/2)RT per mole. The average energy for the translational degrees of freedom, such as those of an ideal monatomic gas, is therefore (3/2)kT per molecule or (3/2)RT per mole. The equipartition of energy result (Eq. 2.6) serves well in the definition of kinetic temperature, since that involves just the translational degrees of freedom, but it fails to predict the specific heat of polyatomic gases, because the increase in internal energy associated with heating such gases adds energy to rotational and perhaps vibrational degrees of freedom. Each vibrational mode will get kT/2 for kinetic energy and kT/2 for potential energy; the equality of kinetic and potential energy is addressed in the virial theorem.

A more accurate statement of the equipartition theorem is the following: "every variable of phase space on which the energy depends quadratically contributes (1/2)kT to the average energy" (the variables may be any component of the coordinate or the momentum of any particle). The proof will be done in class.

Remarks: For the translational degrees of freedom only, equipartition can be shown to follow from the Boltzmann distribution law. Equipartition of energy also has implications for electromagnetic radiation in equilibrium with matter, each mode of radiation having kT of energy in the Rayleigh – Jeans law.

2.3 Temperature and thermal equilibrium

If we adopt the equation KEavg = (3/2)kT we obtain the ideal gas equation of state. If we express the number of molecules in terms of the number of moles ν and Avogadro's number NA, N = NAν, the equation of state takes the form:

PV = νRT,  R = NAk = 8.31 J/(mol·K)

One of the properties of the temperature is the role it plays in determining the equilibrium between systems which interact thermally as well as mechanically.

State of equilibrium: the distribution of the velocity is independent of direction (there is no preferred direction for the velocity v after many collisions).

The condition immediately implies that;

<vx> = <vy> = <vz > = 0,

and therefore < a . v > = 0 for every constant vector a. Using this property, we can show easily that in a state of equilibrium, the average kinetic energy per molecule, in a mixture of two


gases, is equal. The equivalent conclusion is: "If two ideal gases are in equilibrium, their temperatures are equal."

Another very interesting property which results from this state of thermal equilibrium is Dalton's law: "The pressure of a mixture of gases is equal to the sum of the pressures that the gases would exert if each of them were separately in the same conditions as the mixture."

2.4 The density in an isothermal atmosphere – the Boltzmann factor in the potential energy

How are molecules distributed in the gravitational field force?

We assume that a volume of gas is at a uniform temperature (T), in a closed cylinder.

We want to calculate the density of the gas as a function of height z, at thermal equilibrium.

We divide the volume of the cylinder into layers of thickness Δz. Thermal equilibrium guarantees that the velocity distribution is identical in all layers. However, due to the gravitational force, the pressure differs at each height (the pressure of the gas is determined by the weight of the gas above it). For Δz small, all the molecules in one layer experience an identical gravitational force; the density is also assumed to be constant within a given layer.

Additional assumption: the gravitational force does not change with height.

Let A be the area of the base of the cylinder; the volume of a layer is AΔz. The gravitational force on the layer is

ΔF = mg [n(z) A Δz] (2.7)

where m is the mass of one molecule, n(z) the density at height z and g the gravitational acceleration. In a state of equilibrium, this force (Eq. 2.7) is balanced by the difference between the pressure beneath the layer and above it:

[P(z) – P(z + Δz)]A = ΔF (2.8)

–ΔP · A = ΔF = mg n(z) AΔz, so that (dP/dz) = – mg n(z) (2.9)

Since our layer is thick enough, we can attribute an equation of state to the gas in the layer:

P(z) = n(z)kT (2.10)


(dn/dz) = -(mg/kT) n(z) (2.11)

We integrate Eq. (2.11) to find n(z): n(z) = C exp(–mgz/kT). With the "boundary condition" n = n(0) at z = 0, we find that C = n(0):

n(z) = n(0) exp(–mgz/kT) (2.12)

This equation gives the Boltzmann distribution law: U(z) = mgz is the potential energy and exp(–mgz/kT) is the Boltzmann factor.

The density of molecules at z=0, is related to the total number of molecules N, by the following relation (exercise):

n(0) = (Nmg/AkT) [1 – exp(–mgH/kT)]⁻¹ (2.13)

where H is the height of the cylinder.
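As a numerical illustration (a sketch added here, not part of the original notes), the scale height kT/mg of the isothermal atmosphere and the density ratio n(z)/n(0) of Eq. (2.12) can be evaluated for an example molecule; the mass and temperature used below are illustrative values.

import numpy as np

k = 1.380649e-23      # Boltzmann constant, J/K
g = 9.81              # gravitational acceleration, m/s^2
m = 4.8e-26           # example molecular mass, kg
T = 300.0             # example temperature, K

H_scale = k * T / (m * g)                 # height over which the density falls by 1/e
for z in (0.0, 1e3, 5e3, 1e4):            # heights in metres
    ratio = np.exp(-m * g * z / (k * T))  # n(z)/n(0) from Eq. (2.12)
    print(f"z = {z:7.0f} m   n(z)/n(0) = {ratio:.3f}")
print("scale height kT/mg ≈", H_scale, "m")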

2.6 Case where the force field is not uniform

We assume that the force field derives from a potential:

F(r) = – ∇U(r) (2.14)

F(r) is the force acting on a gas molecule at the point r. When U is the gravitational potential energy, U = mgz, Eq. (2.14) reduces to:

F = – (∂U/∂z) ez = – mg ez (2.15)

The equation for the pressure is obtained by generalizing Eq. (2.9):

∇P(r) = n(r) F(r) (2.16)

Let P(r) = n(r)kT be the local equation of state of the gas. Since T is independent of r, we obtain from Eqs. (2.14) – (2.16):

kT ∇n(r) = – n(r) ∇U(r) (2.17)

Eq. (2.17) can be written in the form:


∇n(r)/n(r) = – ∇U(r)/kT (2.18)

The solution of Eq. (2.18) is n(r) = c exp[–U(r)/kT], where c is a constant. Choosing a reference point r0 at which the potential energy vanishes, c = n(r0) and

n(r) = n(r0) exp[–U(r)/kT] (2.19)

where n(r0) is the density at the reference point where the potential energy vanishes.

Remark: Our aim is to arrive at statistical mechanics via kinetic theory, so we describe the result in Eq. (2.19) in a slightly different language. Our system contains many molecules (N >> 1). Therefore, [n(r)/N]dV represents the probability for a molecule to be in an infinitesimal volume dV around the point r:

P(r)dV = [n(r)/N]dV (2.20)

Now we can write down the probability of a given configuration of the system, i.e., the probability that N particles will be in the volumes dV1, dV2, …, dVN around the points r1, r2, …, rN, respectively, in space with potential field U(r). Since the positions of the different particles are independent (ideal gas), the probability is the product of the individual probabilities:

P(r1, r2, …, rN) dV1 dV2 … dVN = P(r1) dV1 · P(r2) dV2 · … · P(rN) dVN

= [n(r0)/N]^N exp[ –(1/kT) Σi U(ri) ] dV1 dV2 … dVN (2.21)

In conclusion, the probability density is proportional to the Boltzmann factor exp (-U/kT).

The Maxwell – Boltzmann distribution

The particle density in a small volume around the point r is made up of particles that are moving at different velocities. The distribution of the velocities is independent of r, just as the coordinate distribution P(r) is independent of v. We want to know how many particles, out of the n(r) dV, have a velocity inside the volume element dτ = dvx dvy dvz in velocity space around v.

Let f(v) be the probability per unit volume of velocity space for a particle to have a velocity v in the volume element dτ:

n(r) dV f(v) dτ = N P(r) f(v) dV dτ (2.22)


Equation (2.22) gives the number of molecules, located in a volume dV around the position r, with a velocity in the volume element dτ around v. Note that f(v) is the probability per unit volume of velocity space for a molecule to have a velocity in the range [v, v + dv].

Derivation of the Maxwell – Boltzmann distribution [f (v)]

The Maxwellian distribution functions give the probabilities of finding a specified molecule in a specified range of energy or momentum or speed or velocity. Our system is then a single molecule in a heat reservoir. The form of f(v) is a special case of the general assertion of the Boltzmann distribution. The Maxwell distribution function can be obtained from two very simple assumptions:

1. In a state of equilibrium there is no preferred direction;

2. Orthogonal motions are independent of one another.

From assumption (1), we deduce that f(v) must be a function of v² only (it has no dependence on the sign of the velocity):

f(v) = h(v²) (2.23)

The second assumption implies that f(v) must be a product of the form:

f(v) = g(vx²) g(vy²) g(vz²) (2.24)

From equations (2.23) and (2.24), we deduce that:

h(vx² + vy² + vz²) = g(vx²) g(vy²) g(vz²) (2.25)

Eq. (2.25) is a functional equation that determines the forms of h and g. From the form of that equation we suspect that h and g are exponential functions; actually, this is the only solution. In fact, if we set

vx² = X;  vy² = Y;  vz² = Z; (2.26)

W = X + Y + Z

We want, therefore, to solve the equation


h (W) = g(X) g(Y) g(Z) (2.27)

Note that h depends on X, Y, Z only through the variable W. In order to solve Eq. (2.27) we differentiate both sides with respect to X. On the left hand side we use the chain rule and the fact that ∂W/∂X = 1:

h′(W) = g′(X) g(Y) g(Z) (2.28)

We divide both sides of Eq. (2.28) by Eq. (2.27):

h′(W)/h(W) = g′(X)/g(X) (2.29)

Note that the left hand side of Eq. (2.29) can in principle depend on all three variables X, Y and Z through W. However, the right hand side depends only on X. Therefore, the function h′(W)/h(W) cannot depend on Y and Z. If we differentiate Eq. (2.27) with respect to Y and repeat the arguments, we reach the conclusion that h′(W)/h(W) cannot depend on X either, so that h′(W)/h(W) = g′(X)/g(X) = λ (a constant). Integrating this relation we obtain

h(W) = C exp(λW) (2.30)

where C is a constant. From the above considerations, we obtain, using Eq. (2.23),

f(v) = h(v²) = C exp(λv²) (2.31)

Since f dτ is a probability, f is positive, and its integral over all possible vectors v must be 1. To achieve this, we must have C > 0 and λ < 0. If we denote the negative constant by –α, we can write:

f(v) = C exp(–αv²) (2.32)

Equation (2.32) is the Maxwell velocity distribution. C is determined as a function of α from the normalization condition:

∫ f(v) dτ = C ∫∫∫ exp[ –α(vx² + vy² + vz²) ] dvx dvy dvz = 1 (2.33)


From Eq. (2.33), we deduce that C = (α/π)^(3/2). Using the equipartition of energy theorem, it can be shown that α = m/(2kT).
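The normalization C = (α/π)^(3/2) and the identification α = m/2kT can be checked numerically (a sketch added for illustration; the mass and temperature are arbitrary example values):

import numpy as np
from scipy import integrate

k = 1.380649e-23   # J/K
m = 6.63e-26       # example molecular mass, kg
T = 300.0          # example temperature, K
alpha = m / (2 * k * T)
C = (alpha / np.pi) ** 1.5

# Check the normalization of f(v) = C exp(-alpha v^2) via one Cartesian component;
# the full 3D integral factorizes into the cube of this 1D integral.
one_d, _ = integrate.quad(lambda v: np.exp(-alpha * v**2), -np.inf, np.inf)
print("C * (1D integral)^3 =", C * one_d**3)          # should be 1

# Check <v^2> = 3kT/m by sampling each component as a Gaussian of variance kT/m
rng = np.random.default_rng(1)
v = rng.normal(0.0, np.sqrt(k * T / m), size=(1_000_000, 3))
print(np.mean(np.sum(v**2, axis=1)), 3 * k * T / m)   # the two numbers agree closely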

Position and velocity distributions

From what has been said in the previous paragraphs, the probability for a molecule to be near r and for its velocity to be near v is given by:

P(r) dV f(v) dτ

In an ideal gas of N molecules, the coordinates and velocities of each molecule are independent of the coordinates and velocities of every other molecule. Therefore, the probability of a configuration in which molecule No. 1 is near location r1 with velocity near v1, molecule No. 2 near r2 with velocity near v2, and so on, is:

P(r1, …, rN) dV1 … dVN f(v1, …, vN) dτ1 dτ2 … dτN = C exp(–Etot/kT) dV1 … dVN dτ1 dτ2 … dτN (2.34)

where

Etot = Σi [ (1/2) m vi² + U(ri) ]

is the total energy of the configuration. Therefore, given a complete description of the state of the system, the probability of finding such a state is proportional to the Maxwell – Boltzmann factor exp(–Etot/kT). The distribution (Eq. 2.34) is called the Maxwell – Boltzmann distribution.

Exercise1

Show that the normalized Maxwell velocity distribution (probability in velocity space) is given by:

f(v) dτ = (m/2πkT)^(3/2) exp(–mv²/2kT) dτ (2.34)

Exercise 2

Calculate the average height of a molecule in the isothermal atmosphere whose density is given by

n(z) = n(0) exp (-mgz/kT)

Exercise 3

Show that the average square deviation (the variance) of the height of the molecules in the isothermal atmosphere whose density is given by n(z) = n(0) exp (-mgz/kT) is:


(Δz)² = <(z – <z>)²> = d²[ln Z(α)]/dα² = (kT/mg)²

where Z(α) = ∫ exp(–αz) dz (integral from 0 to ∞) = 1/α, with α = mg/kT.

Exercise 4

Show that <v²> = 3kT/m.

Exercise 5

The distribution of molecular speeds (Eq. 2.34) can be transformed into the following distribution of speeds:

F(v) dv = 4π v² (m/2πkT)^(3/2) exp(–mv²/2kT) dv

Show that the most probable speed is given by:

vp = (2kT/m)^(1/2)
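A short numerical sketch (illustrative only, with an arbitrary example gas) comparing the most probable, mean and root-mean-square speeds that follow from the speed distribution above:

import numpy as np

k = 1.380649e-23    # J/K
m = 6.63e-26        # example molecular mass, kg
T = 300.0           # example temperature, K

v_p   = np.sqrt(2 * k * T / m)            # most probable speed
v_avg = np.sqrt(8 * k * T / (np.pi * m))  # mean speed <v>
v_rms = np.sqrt(3 * k * T / m)            # root-mean-square speed
print(v_p, v_avg, v_rms)                  # v_p < v_avg < v_rms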

2.2 INTRODUCTION TO TRANSPORT THEORY AND IRREVERSIBLE THERMODYNAMICS

2.2.1 INTRODUCTION

Irreversible thermodynamics is a branch of physics which studies the general regularities of transport phenomena (heat transfer, mass transfer, etc.) and the transition of non-equilibrium systems to the thermodynamic equilibrium state. As in reversible thermodynamics (also known as thermostatics), it is possible to use for this purpose phenomenological approaches, based on the generalization of experimental facts, as well as the methods of statistical physics, which establish the links between molecular models and the behavior of substances on the macroscopic scale.

The starting points of irreversible thermodynamics are the first and second laws of thermodynamics in local formulation.


In science, a process that is not reversible is called irreversible. This concept arises most frequently in thermodynamics.

An irreversible process increases the entropy of the universe. However, because entropy is a state function, the change in entropy of a system is the same whether the process is reversible or irreversible. The second law of thermodynamics can be used to determine whether a process is reversible or not.

Examples of irreversible processes

In practice, many irreversible processes are present, to which the inability to achieve 100% efficiency in energy transfer can be attributed. The following are examples of irreversible processes:

Heat transfer through a finite temperature difference

Friction

Plastic deformation

Flow of electric current through a conductor

Magnetization with a hysteresis

Spontaneous chemical reactions

Spontaneous mixing of matter of varying composition/states

2.2.2 Transport processes

The transport coefficients describe the behavior of systems when there is a slight deviation from an equilibrium state (slight deviation from the Maxwellian equilibrium).

Such a deviation results from an applied external force (in the case of mobility and viscosity) or concentration gradients (in the case of diffusion) or temperature gradients (in the case of thermal conductivity).

Such deviations from equilibrium create currents, which drive the system back to equilibrium. The ratios between the currents and the disturbances that create them are the transport coefficients.

Mean free path

The mean free path is the average distance traversed by a molecule in the gas between two collisions; this is a very central concept of the kinetic theory of gases.


The clarification of the concept of mean free path and its quantitative evaluation led to the calculation of many important quantities (transport coefficients) such as the mobility, diffusion coefficients, viscosity, thermal conductivity, etc…

The mean free time is the average time between two consecutive collisions of a given molecule in the gas. Between two collisions, it is assumed that the molecule moves as a free particle.

Expression of the mean free path

The mean free path can be estimated from the kinetic theory. For simplicity we assume that molecules are rigid balls, of radius a (diameter d) so that between two collisions they travel in a straight line. A molecule moving in a straight line will collide with every molecule whose center is found in a cylinder along its direction of motion whose radius is twice the radius of the molecule.

The magnitude of the mean free path depends on the characteristics of the system the molecule is in:

l = 1/(σn) (2.35)

where l is the mean free path, n is the density and σ is the effective cross-sectional area for collision. For molecules of diameter d, the effective cross-section for collision can be modeled by using a circle of diameter 2d to represent a molecule's effective collision area, while treating the target molecules as point masses. In time t, the circle would sweep out the volume (πd² · vt). The number of collisions can be estimated from the number of gas molecules that were in that volume, that is, N′ = n (πd² vt). The mean free path would be:

l = vt / (n πd² vt) = 1/(n πd²) (2.36)

Corrections need to be made on expression (2.36) because the target molecules are also moving . The frequency of collisions depends upon the average relative velocity and not on the average molecular velocity.

Since the average relative speed is <vrel> = √2 <v>, the expression for the effective volume swept out should be revised. The resulting expression of the mean free path is:

l = 1/(√2 n πd²) (2.37)

The number of molecules per unit volume (n) can be determined from Avogadro's number and the ideal gas equation of state:


n = N/V = ν NA/V = NA P/(RT) (2.38)

In Eq. (2.38), ν is the number of moles. Substituting Eq. (2.38) into Eq. (2.37) we find the following expression of l in terms of d, P and T:

l = RT / (√2 πd² NA P) (2.39)
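A numerical sketch of Eq. (2.39) (illustrative values: a nitrogen-like molecular diameter of about 3.7 × 10⁻¹⁰ m, atmospheric pressure and room temperature):

import numpy as np

R = 8.314            # gas constant, J/(mol K)
N_A = 6.022e23       # Avogadro's number, 1/mol
d = 3.7e-10          # example molecular diameter, m
P = 1.013e5          # example pressure, Pa
T = 300.0            # example temperature, K

l = R * T / (np.sqrt(2) * np.pi * d**2 * N_A * P)   # Eq. (2.39)
print("mean free path ≈", l, "m")                   # of the order of 10^-7 m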

2.2.3 Diffusion

2.2.3.1 Introduction

Diffusion is caused by the motion of particles through space. In some cases (for example in biological processes), it can be regarded as mixing of particles amongst one another.

The phenomenon was investigated by the British botanist Robert Brown (1827 – 1828) with pollen grain particles dispersed in water. Using an optical microscope he noticed that the pollen grain particles suspended in a glass of water undergo chaotic, apparently erratic movements. These movements are known as Brownian motion.

It is only between 1905 and 1908 that Einstein published a series of papers which first adequately explained Brownian motion. He showed that it is caused by the impacts, on the pollen particles, of the much smaller (and thus more mobile) water molecules. In fact, according to the equipartition of energy theorem, each degree of freedom of a pollen particle which is in equilibrium with the water molecules has an average kinetic energy of kT/2. If the mass of a pollen particle is Mp ≈ 10⁻¹⁶ kg and that of each surrounding water molecule is mw ≈ 10⁻²⁶ kg, the average velocity of the pollen particles at room temperature is approximately 10⁻² m/s, and that of each water molecule is approximately 10³ m/s. Since vw is much greater than vp, the water molecules are much more mobile than the pollen particles. Therefore, Brownian motion takes place because the light but relatively large pollen particles are under constant bombardment by the water molecules. Einstein's explanation of Brownian motion was the first convincing proof of the existence of atoms and molecules.

2.2.3.2 The diffusion equation


The common diffusion problem deals with a mixture of two materials, whose relative density varies from place to place (gradient of density). Materials move from higher density to lower density. This flow of materials gives rise to current densities that drive the system to equilibrium.

Jx = nvx (2.40)

The theory of diffusion can be developed from two simple and basic assumptions:

(i) First assumption: the substance will move down its concentration gradient (density gradient); the steeper the gradient, the greater the flow of material. Let n1 be the density of one of the constituents. We assume that n1 varies in one direction only (for example the x direction: n1 = n1(x)). If the relation between the gradient and the flux is linear, then in one dimension we have:

Jx = – D (∂n1/∂x) (2.41)

In Eq. (2.41), x is the position, n1 the density at that position and D the diffusion coefficient. The variable Jx is the flux, or amount of material passing across the point x (through a unit area perpendicular to the direction of flow) per unit time. The minus sign signifies that the current flows from high density to low density.

In three dimensions, where n1 varies along an arbitrary direction (n1 = n1(r)), Eq. (2.41) becomes:

J = – D ∇n1 (2.42)

(ii) The second assumption concerns the conservation of matter (conservation of the number of particles). In an element of length dx, if the flux into the element from the left is different from the flux out of the element to the right, then the density within the element will change. The difference between the two fluxes determines how much material will accumulate within the region bounded by x and (x + dx) in a time interval dt. The conservation of mass (conservation of the number of particles) gives the following equation of continuity:

∂n1/∂t = – ∂Jx/∂x (2.43a)


In three dimensions,

∂n1/∂t = – ∇ · J (2.43b)

The equation of continuity expresses the fact that the density changes must come from the balance of the current entering around a given point and the current that is leaving it. Substituting Eq. (2.41) into Eq. (2.43a), we have:

∂n1/∂t = D (∂²n1/∂x²) (2.44)

In the one-dimensional case, n1 is a function only of x and t. Assuming that at the time t = 0 all particles of type 1 are concentrated at x = 0, a typical solution of Eq. (2.44) is:

n1(x, t) = (C/√t) exp(–x²/4Dt) (2.45)

where C is a normalization factor fixed by the total amount of material of type 1. It can be shown that the average square displacement of the particles of type 1, computed from Eq. (2.45), is:

<x²> = 2Dt (2.46)

In the more realistic case of a three-dimensional system the diffusion equation has the form:

∂n1/∂t = D ∇²n1 (2.47)

The solution of Eq. (2.47) is:

n1(x, y, z, t) = [ N1/(4πDt)^(3/2) ] exp(–r²/4Dt) (2.48)

where N1 is the total number of particles of type 1: N1 = ∫ n1(r, t) dV. The average square distance for the three-dimensional case is:

<r²> = 6Dt (2.49)
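A minimal random-walk sketch (not from the notes; the step size and time step are arbitrary) illustrating the diffusive law <x²> = 2Dt in one dimension:

import numpy as np

rng = np.random.default_rng(2)
n_walkers = 100_000
n_steps = 1_000
step = 1.0e-6            # example step length, m
dt = 1.0e-3              # example time per step, s
D = step**2 / (2 * dt)   # diffusion coefficient of this simple walk

# Each walker makes n_steps of +/- step; the final displacement is their sum.
x = step * rng.choice([-1.0, 1.0], size=(n_walkers, n_steps)).sum(axis=1)
t = n_steps * dt
print(np.mean(x**2), 2 * D * t)   # both of the order of 1.0e-9 m^2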


Viscosity and Thermal conductivity

1. Viscosity

The viscosity is that property of a fluid that indicates its internal friction. The more viscous a fluid, the greater the force required to cause one layer of fluid to slide past another. Viscosity is what prevents objects from moving freely through a fluid or a fluid from flowing freely in a pipe.

The viscosity of gases is less than that of liquids, and the viscosity of water and light oils is less than that of molasses and heavy oils.

We know from experience that the viscosity of some liquids such as motor oil increases with decreasing temperature.

During cold weather it is hard to start a car engine because the oil is thick and flows slowly, whereas the same car starts easily in hot weather because the oil is warm and flows readily.

The viscosity results from a velocity gradient between different layers of a fluid. Each layer in the gas, for example, advances at a different speed. The force per unit area needed to maintain the velocity gradient is:

F/A = η (du/dz) (2.50)

where u is the flow velocity of a layer and z the direction perpendicular to the layers. Due to the existence of the velocity gradient, the molecules in two different layers have different flow speeds. This speed exists in addition to the thermal velocities. Thus, in two different layers the molecules have different momenta. But thermal motion transfers molecules from one layer to another. The result is that a certain amount of net momentum is transferred between two layers per unit time per unit area, and this is the viscosity force.

It can be shown that the coefficient of viscosity of a gas is given by:

η = (1/3) n m <v> l (2.51)

In Equation (2.51), m is the mass of one molecule, <v> is the mean speed of the molecules, l is the mean free path (the average distance between two consecutive collisions) and n is the density.

2. Thermal conductivity

Heat energy can be transferred in three ways: conduction, convection and radiation.


(i) Conduction: when one end of a metallic rod is heated, the other end gets warm. In conduction, thermal energy is transferred without any net movement of the material itself. Conduction electrons are responsible for this process. Conduction is a relatively slow process.

(ii) Convection: heat transfer results from the mass motion or flow of some fluid, such as air or water. Examples: warm air flowing about a room; hot and cold liquid being poured together. This process of heat transfer is more rapid than the conduction process.

(iii) Radiation: this process requires neither contact nor mass flow. The energy from the sun comes to us by radiation. We also feel radiation from warm stoves, fires and radiators.

Mathematically, conduction and radiation are easier to treat than convection.

Law of heat conduction

The law of heat conduction states that the time rate at which heat flows through a given material is proportional to its area and to the temperature gradient. Mathematically this can be summarized in the following two equations called heat flow equations:

(∆Q/∆t) = KA (T2 – T1) /L (2.52a)

(dQ/dt) = - KA (dT/dz) (2.52b)

In the above equations, the constant K is called the coefficient of thermal conductivity. In SI units, heat flows are in J/s or W; therefore, the units of K are J/(s·m·°C), i.e. W/(m·°C).

Radiation

A hot object also loses energy by radiation. This radiation is known as "black body radiation" and, as electromagnetic radiation, it can pass through empty space (a vacuum). The warmth we feel when we warm ourselves by a fire is due to that radiation. The rate at which an object radiates energy is proportional to its surface area A and to the fourth power of its absolute temperature T. The total energy radiated from an object per unit time (radiated power) is found experimentally to be (Stefan – Boltzmann law):

P = σeAT⁴ (2.53)

where σ is the Stefan – Boltzmann constant (σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴) and e is the emissivity of the object. The emissivity is a dimensionless number such that 0 ≤ e ≤ 1 and describes the nature of the emitting surface. The emissivity is larger for dark, rough surfaces and smaller for smooth, shiny ones.


According to the Stefan – Boltzmann law, all objects radiate energy, no matter what their temperature happens to be. However, they do not lose all their thermal energy by radiation and cool down to 0 K, because they also absorb radiation from surrounding objects and eventually come to thermal equilibrium with their environment. If an object which is radiating (for example, this book) is at a temperature T and its surroundings are at a different temperature TS, the net power radiated by the object is given by

Pnet = σeA(T⁴ – TS⁴), (2.54)

where A is the surface area of the object. Notice that we have used the same value of the emissivity for absorption as for radiation. This must be correct, because the net heat exchange must go to zero when T = TS. Thus a good radiator is also a good absorber.

Because the radiated power is proportional to T⁴ [Eq. (2.53)], the total power radiated grows rapidly as the temperature increases. The distribution of the radiation, which is composed of many different wavelengths, is also a function of temperature. It is the change in this distribution that accounts for the change of color of a glowing hot object as its temperature is raised.
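A small numerical sketch of Eq. (2.54) (illustrative values: an object of area 1.5 m², emissivity 0.9, at a skin-like 307 K in 293 K surroundings):

sigma = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
e = 0.9                # example emissivity
A = 1.5                # example surface area, m^2
T = 307.0              # example object temperature, K
T_s = 293.0            # example surroundings temperature, K

P_net = sigma * e * A * (T**4 - T_s**4)    # net radiated power, Eq. (2.54)
print("net radiated power ≈", P_net, "W")  # of the order of 100 W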

Thermal conductivity of a gas

When a gas is contained between two parallel planes kept at different temperatures, heat flows through the gas.

If the heat transferred by convection (when parts of the gas move with respect to one another) is negligible, it is observed experimentally that the amount of heat that has to be supplied per unit time per unit area in order to maintain the temperature gradient is given by:

Jz = – K (dT/dz) (2.55)

Jz is the heat flux in the z direction (the amount of energy that is transferred per unit time across a unit area).

The role of the kinetic theory is to calculate the coefficient of thermal conductivity (K).

Assumptions: the density is uniform in the container, and the temperature varies slowly enough that the pressure is constant over the region considered. The temperature is taken to vary appreciably only over a distance of the order of 2l; as the temperature changes, the average energy per molecule changes from one layer to another along the z direction.

The amount of heat passing through a unit area in unit time, along the z direction, is (see the diagram drawn in the classroom):


Jz = (1/6) n <v> [ ε(–l) – ε(l) ] (2.56)

where ε(±l) is the average energy per molecule in the layer a distance l above (below) the plane considered. To first order in l, when the temperature gradient is not too large:

ε(–l) – ε(l) = – 2l (dε/dT)(dT/dz) (2.57a)

Substituting Eq. (2.57a) into Eq. (2.56), we have

Jz = – (1/3) n <v> l (dε/dT)(dT/dz) (2.57b)

Comparing (2.55) and (2.57b), the coefficient of thermal conductivity is given by:

K = (1/3) n <v> l c (2.58)

In Eq. (2.58), n is the density of molecules, <v> is the mean speed of the energy-transporting molecules, l is the mean free path of the molecules and c = dε/dT is the specific heat per molecule (at constant volume).

Remark: The kinetic theory also enables one to establish a relationship between the coefficient of thermal conductivity and the coefficient of viscosity

K = αηc (2.59)

Where the numerical factor α is found to lie between 1 and 3 for gases.

The diffusion coefficient is given by:

D ≈ (1/3) <v> l (2.60)

The present lack of an adequate theory of the liquid state makes it difficult to use the above formulas in the study of biological substances in a liquid environment.
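Putting the kinetic-theory estimates together, a short sketch (illustrative values for a nitrogen-like gas at room conditions; the 1/3 prefactors are the rough estimates quoted above):

import numpy as np

k = 1.380649e-23
m = 4.65e-26          # example molecular mass, kg
d = 3.7e-10           # example molecular diameter, m
T = 300.0             # K
P = 1.013e5           # Pa

n = P / (k * T)                            # number density from the ideal gas law
v_avg = np.sqrt(8 * k * T / (np.pi * m))   # mean speed
l = 1.0 / (np.sqrt(2) * np.pi * d**2 * n)  # mean free path, Eq. (2.37)

eta = (1.0 / 3.0) * n * m * v_avg * l      # viscosity, Eq. (2.51)
c = 1.5 * k                                # specific heat per molecule (monatomic estimate)
K = (1.0 / 3.0) * n * v_avg * l * c        # thermal conductivity, Eq. (2.58)
D = (1.0 / 3.0) * v_avg * l                # diffusion coefficient, Eq. (2.60)
print(eta, K, D)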

Chapter Three: Statistical Thermodynamics


3.1 Introduction

The chapter starts with a brief overview of the laws of macroscopic thermodynamics; thereafter we continue by establishing the connection between statistical mechanics and thermodynamics, which enables us to relate the microscopic laws of the system to its thermodynamic properties. Statistical thermodynamics combines statistical mechanics and thermodynamics, that is, it investigates heat and work via statistical arguments. Throughout this chapter we will make use of the canonical ensemble to investigate the properties of several simple systems, including a collection of quantum simple harmonic oscillators and ideal gases. The assumptions of the canonical ensemble (variable energy at controlled temperature, number of particles and volume) fit with the laws of phenomenological thermodynamics.

3.2 Laws of phenomenological thermodynamics

Phenomenological (macroscopic) thermodynamics, or the theory of heat and work, is based on the following laws.

(i) The zeroth law of thermodynamics

If two adiabatically isolated systems are in thermal equilibrium with a third system at temperature T, they must also be in equilibrium with one another at the same temperature.

(ii) First law of Thermodynamics

The first law of thermodynamics is concerned with the internal energy (U), which is defined as the total energy of the system. It is an extensive quantity (additive), and therefore depends on the size of the system. The average change in internal energy resulting from a change in an external macroscopic parameter is the thermodynamic work. The first law is a thermodynamic generalization of the energy conservation law of classical mechanics. We note that if the energy of a system changes, it must be as a result of doing something to the system – that is, allowing some form of energy to flow into or out of the system. Empirically, this is done either by performing work on the system or by allowing heat to flow into the system. The sum dU + dW is identified with the heat transferred to the system in a quasi-static process (dQ). Thus, the first law of thermodynamics takes the form:

dU + dW = dQ (3.1)

Where dW is the differential work done by the system, and dQ is the differential heat flow into the system.

dW = F.dX (3.2)


where F is the applied "force" and X is a mechanical extensive variable. For the expansion of a gas, dW = Pext dV, where Pext is the external pressure and V is the volume.

The definition of heat is really not complete unless we postulate a means to control it. Adiabatic walls are the constraints that prohibit the passage of heat into the system.

(iii) The second law of thermodynamics.

The 2nd law introduces the concept of the entropy. The amount of heat absorbed in a reversible process is given by:

dQrev = TdS (3.3)

Where dS is the increment of entropy and dQrev the amount of heat absorbed by the system in a reversible process. The second law of thermodynamics may be stated as follows “heat moves spontaneously from hot to cold” which is equivalent to stating that the entropy S must increase.

For a reversible process the fundamental relation comprising the first and second laws is:

TdS = dU + dW (3.4)

In the case where the work done by the system against the surroundings is of a mechanical nature, the relation TdS = dU + PdV enables one to determine the variables on which the various free energies depend (see Section 3.4).

(iv) Third law of thermodynamics, or Nernst's heat theorem

The experimental failure to reach the absolute zero of temperature led to the following formulation: "As the temperature tends to zero, the magnitude of the entropy change in any reversible process tends to zero" (third law).

3.3 Thermal interactions between macroscopic systems

Let us consider two macroscopic systems S and S′ which can interact by a purely thermal interaction, that is, they can only exchange heat energy. Suppose that the energies of the two systems are E and E′, respectively. Furthermore, the external parameters, such as the volume of each system, are held fixed, so that S and S′ cannot do work on one another. Assume that we can divide the energy scale into small subdivisions of width δE. Let Ω(E) be the number of microstates of system S consistent with a macrostate in which the energy lies in the range between E and E + δE. Similarly, the number of microstates of system S′ consistent with a macrostate in which the energy lies in the range between E′ and E′ + δE is denoted Ω′(E′). We assume that the combined system S + S′ is thermally isolated, so that the total energy remains constant; in the case of a sufficiently weak mutual interaction, we can set:


E + E′ = E(0) = constant (3.5)

If there is no interaction between S and S′, E + E′ = E(0) exactly. However, a small residual interaction is required to enable the two systems to exchange heat energy and eventually reach thermal equilibrium. If the interaction between S and S′ is strong, the energies are no longer additive; it is then no longer realistic to consider each system in isolation, since the presence of one system clearly strongly perturbs the other. In this case only the combined system S + S′ can be considered as an isolated system. According to equation (3.5), if the energy of S lies in the range between E and E + δE, then the energy of system S′ must lie between E(0) – E – δE and E(0) – E.

Thus, the number of microstates accessible to each system is given by Ω(E) and Ω′(E(0) – E), respectively. Since every possible state of S can be combined with every possible state of S′ to form a distinct microstate, the total number of distinct states accessible to S + S′ is:

Ωtot(E) = Ω(E) Ω′(E(0) – E) (3.6)

Consider an ensemble of two thermally interacting systems S and S′ which are left undisturbed for enough time to achieve thermal equilibrium. The principle of statistical physics which states that all accessible microstates occur with the same probability is applicable in this case. According to this principle, the probability of occurrence of a given macrostate is proportional to the number of accessible microstates. Therefore, the probability that the system S has an energy lying in the range between E and E + δE is given by:

P(E) = C Ω(E) Ω′(E(0) – E) (3.7)

where C is a constant which is independent of E. It can be shown that the number of accessible microstates is proportional to E^f, where f is the number of degrees of freedom; for a macroscopic system of N particles, f = 3N, usually a very large number. It follows from equation (3.6) that the probability P(E) is the product of an extremely rapidly increasing function of E and an extremely rapidly decreasing function of E. Thus, it is expected that the probability exhibits a very pronounced maximum at some particular value of the energy. It can be shown [Fitzpatrick Richard, 2006-02-02, http://farside.ph.utexas.edu/teachng/sm1/lectures/no] that near the maximum

P(E) = P(E*) exp[ – λ0 (E – E*)² ] (3.8)

Since the probability is positive and normalized to one, the parameter λ0 must be positive. According to equation (3.8), the probability distribution function P(E) is a Gaussian. This is not surprising, since the central limit theorem ensures that the probability distribution for any macroscopic variable, such as E, is Gaussian in nature. The standard deviation of the distribution (Eq. 3.8) is

ΔE = E*/√f (3.9)

In equation (3.9), f is the number of degrees of freedom and E* is the value of the energy for which the probability distribution is maximum. Therefore, the fractional width of the probability distribution function is given by:

ΔE/E* = f^(–1/2) (3.10)

Example: If S contains 1 mole of particles, then f ≈ NA ≈ 6 × 10²³ and ΔE/E* ≈ 10⁻¹². We conclude that the probability distribution for E has an extremely sharp maximum. Experimental measurements of this energy will almost always give the mean value; that is, the statistical nature of the distribution may not be apparent.
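A one-line numerical sketch of Eq. (3.10) (illustrative particle numbers) showing how quickly the relative energy fluctuation shrinks with system size:

import numpy as np

for f in (10, 1e6, 6.022e23):                 # number of degrees of freedom (example values)
    print(f, "ΔE/E* ≈", 1.0 / np.sqrt(f))     # Eq. (3.10): relative width f^(-1/2)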

3.4 Free energies

When one is concerned with changes in the state of a system, a full description requires knowledge of what are known as the free energies, or thermodynamic potentials. The most commonly used thermodynamic potentials, with the appropriate independent variables for each, are the internal energy U(S, V, N), the Helmholtz free energy F(V, T, N), the Gibbs free energy G(P, T, N) and the enthalpy H(S, P, N). The description in this chapter is based on the canonical ensemble, a system in which the number of particles N, the volume V and the temperature T are known.

(a) The Helmholtz free energy, F

The Helmholtz free energy F (named for Hermann von Helmholtz) is the most appropriate free energy in the canonical description. It is defined by:

F = U – TS

dF = dU – TdS – SdT = dU – (dU + PdV) – SdT = – PdV – SdT (3.11)

Therefore, F = F(V, T) and


P = – (∂F/∂V)T ;  S = – (∂F/∂T)V (3.12a)

Since F = – kT lnZ (the connection between the Helmholtz free energy and the partition function), equations (3.12a) are equivalent to:

P = kT (∂lnZ/∂V)T ;  S = k lnZ + kT (∂lnZ/∂T)V (3.12b)

If changes in a system occur at constant temperature and volume, it is usually most convenient to work in terms of the Helmholtz free energy.

(b) The Gibbs free energy, G

The Gibbs free energy G (named for Josiah Willard Gibbs) is defined as follows:

G = F + PV = U – TS + PV (3.13a)

dG = dU – TdS – SdT + PdV + VdP = – SdT + VdP (3.13b)

Therefore, G = G(P, T). If changes in a system occur at constant temperature and pressure, it is usually most convenient to work in terms of the Gibbs free energy. For example, reactions that occur in the body usually take place at constant pressure, so it is more common to work with the Gibbs free energy. From Equation (3.13b), we can calculate S and V from the partial derivatives of G:

S = – (∂G/∂T)P ;  V = (∂G/∂P)T (3.14)

The second relation in equation (3.14) is the "equation of state".

(c) The enthalpy, H

The enthalpy is defined by the equation H = U + PV, which gives, after differentiation and use of the 1st and 2nd laws of thermodynamics,

dH = dU + PdV + VdP = TdS + VdP (3.15)

T = (∂H/∂S)P ;  V = (∂H/∂P)S (3.16)

3.4 Connection between the Helmholtz free energy and the partition function

In order to realize the connection between statistical mechanics and thermodynamics, we must determine a statistical quantity which has the same properties as one of the free energies described above. Note that the Helmholtz free energy is given by:


F = U – TS = U + T (∂F/∂T)V (3.17a)

Equation (3.17a) represents a differential equation for F (if U is constant). The connection between statistical mechanics and thermodynamics is achieved by using the partition function Z (Zustandssumme). That function can be derived using a micro-canonical ensemble (energy and charge/number of particles fixed), but it is preferable to deduce it from a canonical ensemble (the energy varies but the number of particles/charge is fixed – only states of the same particle number/charge are considered). The Boltzmann probability used in chapter one is characteristic of the canonical ensemble. According to equation (1.40) we have:

(3.17b)

where β = 1/kT. Assuming the last term on the right hand side is negligible and setting <E> = U, we have after rearrangement, using the definition of the Helmholtz free energy:

F = – kT lnZ (3.18a)

For a system of N particles, (3.18b)

Equations (3.18a, b) could also be deduced from the fact that the probability that the system is in the ith quantum state can be written as:

Pi = exp[ β(F – Ei) ] = exp(–βEi)/Z (3.18c)

We know from thermodynamics that given the Helmholtz free energy F, it is possible to completely describe the macroscopic properties of the system. Since according to equations (3.18a,b) it is possible to calculate F from the partition function, Z, the calculation of Z in the canonical distribution enables us to obtain all information on the system.
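Since all thermodynamic information follows from Z, here is a compact sketch (a hypothetical two-level system with level spacing ε, chosen purely for illustration) showing F, U and S computed from the partition function:

import numpy as np

k = 1.380649e-23
eps = 1.0e-21          # example level spacing, J
T = 300.0              # example temperature, K
beta = 1.0 / (k * T)

E = np.array([0.0, eps])            # energies of the two states
Z = np.sum(np.exp(-beta * E))       # canonical partition function
p = np.exp(-beta * E) / Z           # Boltzmann probabilities, Eq. (3.18c)

F = -k * T * np.log(Z)              # Helmholtz free energy, Eq. (3.18a)
U = np.sum(p * E)                   # internal energy <E>
S = (U - F) / T                     # entropy from F = U - TS
print(F, U, S)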

3.5 Connection between the laws of thermodynamics and other thermodynamic quantities and the partition function

3.5.1 Relationship between the partition function and internal energy

From equation (1.39), the internal energy is given by:


U = <E> = – (∂ lnZ/∂β)X = kT² (∂ lnZ/∂T)X (3.19)

The internal energy is a state function and depends on the volume (V), the number of particles (N), and the temperature (T). Since for a canonical ensemble N is fixed, U = U(X, T), where X is an external parameter and, in this case, stands for the volume. The exact differential of U is given by:

dU = (∂U/∂X)T dX + (∂U/∂T)X dT (3.20)

Substituting equation (3.19) into (3.20) we have:

(3.21)

3.5.2 Relationship between thermodynamic work and the partition function

The work performed by a system on its surroundings is related to the variation of external parameters. We denote the external parameter by X. This external parameter will, in most cases, depend on an external body which affects the energy of the system's states. The work done by the system will be performed on the external body. The energy of each microscopic state depends on X: Ei = Ei(X). The force that performs work derives from Ei by:

Fi = – (∂Ei/∂X) (3.22)

Suppose that the system is in a state i. The work done by the system on the external body, when the external parameter changes from X to X + dX, is given by:

dWi = Fi dX = – (∂Ei/∂X) dX (3.23)

The macroscopic work performed by the system is the average of dWi over the canonical ensemble. If we refer to equation (1.39), this average is given by:

dW = Σi Pi dWi = [ Σi exp(–βEi) (–∂Ei/∂X) dX ] / Z (3.24)


where Z is the partition function defined by equation (1.38b). The "numerator" of the last term on the right hand side of equation (3.24) is proportional to the derivative of Z with respect to X, since ∂Z/∂X = – β Σi (∂Ei/∂X) exp(–βEi). Therefore, the macroscopic work is given by:

dW = (1/βZ) (∂Z/∂X) dX = kT (∂lnZ/∂X) dX (3.25)

Equation (3.25) tells us that we can calculate the thermodynamic work accompanying every change of the external parameter in a quasi-static process once we have determined the partition function Z. It is important to highlight that equation (3.25) gives the thermodynamic work performed by a system in terms of the changes of the internal energy of the individual microscopic states.

3.5.3 Relationship between the first law of thermodynamics and the partition function Z

The first law of thermodynamics is the generalization of energy conservation. We recall that the heat absorbed by the system in a quasi-static process, dQ, is given by dQ = dU + dW. Using equations (3.19) and (3.25), we can write the first law of thermodynamics in a "statistical form" using the partition function Z. We have,

dQ = dU + dW = d[ – (∂lnZ/∂β)X ] + (1/β)(∂lnZ/∂X) dX (3.26)

In equation (3.26), the first law of thermodynamics is related to the change of the external parameter dX and a change in temperature expressed by dβ, in a quasi –static process. According to Gibbs one of the convincing successes of statistical mechanics is that it created a link between the conservation of energy in the microscopic theory and the first law of thermodynamics.

3.5.4 Relationship between the second law of thermodynamics and the partition function Z

According to the 2nd law of thermodynamics, for a reversible process the exact differential of the entropy is given by:

dS = dQrev/T (3.27a)

dS = k β dQrev (3.27b)


Using equation (3.26), we have:

dS = k d( lnZ + βU ) (3.28)

We can also find equation (3.28) using the following expressions:

S = k( lnZ + βU ) ;  U = – (∂lnZ/∂β)X (3.29)

If we differentiate equation (3.29) with respect to X and β we obtain equation (3.28). In the previous sections we have shown that we can formulate the first and second laws of thermodynamics using the partition function. The key idea is the connection between the Helmholtz free energy and the partition function. In fact, we can relate many other thermodynamic quantities to the partition function.

Table 3.1 Connection between free energies and the partition function

Quantity | Formula | Relationship with Z

Helmholtz free energy (F) | F = U – TS;  dF = – SdT – PdV + μdN | F = – kT lnZ

Internal energy (U) | U = F + TS;  dU = TdS – PdV + μdN | U = kT² (∂lnZ/∂T)V,N

Pressure (P) | P = – (∂F/∂V)T,N | P = kT (∂lnZ/∂V)T,N

Entropy (S) | S = – (∂F/∂T)V,N | S = k lnZ + kT (∂lnZ/∂T)V,N

Gibbs free energy (G) | G = F + PV | G = – kT lnZ + kT V (∂lnZ/∂V)T,N

The relations given in Table 3.1, and many others not indicated here, show that the partition function is widely used to calculate thermodynamic quantities from the properties of the molecules. If the system under study is described by a canonical ensemble and if exp(–βEi) can be summed over all states to yield Z, the values of all thermodynamic quantities can be obtained. However, the partition function Z can only be obtained exactly for simple systems, like ideal gases, for which interactions are limited or nonexistent.

3.6 Applications of statistical thermodynamics

3.6.1 Statistical mechanics of an ideal gas

An ideal gas is a system whose collection of particles is confined to a given volume, and the energy is purely kinetic (Interactions between particles are neglected). Such an approximation is reasonable for a dilute gas of neutral particles.

3.6.1.1 Partition Function

The Hamiltonian function of N molecules of an ideal gas is given by:

H = Σi pi²/(2m),  i = 1, …, N (3.30)

Z = [ 1/(N! h^(3N)) ] ∫ exp(–βH) d³r1 … d³rN d³p1 … d³pN (3.31)


Z = [ V^N/(N! h^(3N)) ] [ ∫ exp(–βp²/2m) dp ]^(3N) (3.32)

Tables of integrals give the following value for a definite integral of the type inside the square bracket:

∫ exp(–a x²) dx = (π/a)^(1/2)  (integral from –∞ to +∞) (3.33)

In this case, a = β/(2m) = 1/(2mkT). Therefore, substituting Eq. (3.33) into (3.32) we have:

Z = (V^N/N!) (2πmkT/h²)^(3N/2) (3.34)

As shown in the next section, many thermodynamic quantities can be derived from this expression of the partition function.

3.6.1.2 Helmholtz free energy, pressure and entropy

(i) Helmholtz free energy

F = – kT lnZ = – kT [ N lnV – lnN! + (3N/2) ln(2πmkT/h²) ] (3.35a)

Using Stirling's approximation, that is, lnN! ≈ N lnN – N, we have:

F = – NkT [ ln(V/N) + 1 + (3/2) ln(2πmkT/h²) ] (3.35b)

(ii) Pressure

From the definition of F, that is, F = U - TS, using the first and second laws of thermodynamics for reversible processes, we have:


P = – (∂F/∂V)T = NkT/V (3.36)

For a mole of ideal gas (N = NA) we have PV = RT, which is the equation of state.

(iii) Entropy

S = – (∂F/∂T)V (3.37)

Differentiating Eq. (3.35b) with respect to T we find, after rearranging,

S = Nk [ ln(V/N) + (3/2) ln(2πmkT/h²) + 5/2 ] = N Smol (3.38)

In Eq. (3.38), Smol formally represents the entropy of a single molecule without internal structure. We note that the entropy is an extensive quantity because it is proportional to N. Thus, statistical mechanics provides an absolute value of the entropy, whereas thermodynamics determines the entropy only up to an additive constant.
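A numerical sketch of Eq. (3.38) (an illustrative monatomic gas with an argon-like atomic mass, at roughly standard temperature and pressure; the value printed is the molar entropy NA Smol):

import numpy as np

k = 1.380649e-23     # J/K
h = 6.626e-34        # Planck constant, J s
N_A = 6.022e23       # 1/mol
m = 6.63e-26         # example atomic mass, kg
T = 298.0            # K
P = 1.013e5          # Pa

V_per_N = k * T / P                                   # V/N from the ideal gas law
S_per_molecule = k * (np.log(V_per_N)
                      + 1.5 * np.log(2 * np.pi * m * k * T / h**2)
                      + 2.5)                          # Eq. (3.38) per molecule
print("molar entropy ≈", N_A * S_per_molecule, "J/(mol K)")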

Proof

Integrating the thermodynamic relation dS = dQ/T = (CV dT + P dV)/T for one mole of an ideal gas, we find


S = CV lnT + R lnV + S0 (3.39)

For a monoatomic gas the specific heat at constant volume is CV = (3/2)R; also, the gas constant is R = NAk. Therefore,

S = (3/2) R lnT + R lnV + S0 (3.40)

Eq. (3.40) can be written as follows:

S = R ln( V T^(3/2) ) + S0 (3.41)

Comparing this result of thermodynamics [Eq. (3.41)] with the result obtained from statistical mechanics [Eq. (3.38) for one mole], we see that the constant of integration should have the following value:

S0 = R [ (3/2) ln(2πmk/h²) – lnNA + 5/2 ] (3.42)

3.6.2 Einstein’s Solid

We consider a crystal whose atoms are oscillating about their equilibrium positions. A solid or a lattice of atoms stores energy in the vibration of atoms about their equilibrium positions. If the oscillations are small, their motion is approximately harmonic. We use the expression of the energy levels of single quantum harmonic oscillators to estimate the energy of the crystal. Einstein assumed that all SHOs have the same natural angular frequency ω. If the crystal is made up of N atoms, the motion of each of them is described by 3 independent coordinates along the x, y, and z directions.

The possible vibrations of each atom are described by a model of 3 SHOs. Therefore, the motion of a crystal made up of N atoms is described by a model of 3N simple harmonic oscillators (SHOs). If the 3N SHOs do not affect one another, we may describe a microscopic state of the system by 3N quantum numbers nα (α = 1, 2, 3, …, 3N).

(a) Partition function, average energy, molar specific heat, Helmholtz free energy and entropy at thermal equilibrium


(i) Partition function

Z = Σ over {n1, …, n3N} exp[ –β E(n1, …, n3N) ] (3.43)

εn = (n + 1/2) ħω (3.44)

E = Σα εnα = Σα (nα + 1/2) ħω (3.45)

In Equation (3.44), εn represents the energy levels of a one-dimensional quantum SHO and n = 0, 1, 2, 3, … is the quantum number. Note that the values of εn are positive, just as for the classical simple harmonic oscillator.

Substituting equations (3.44) and (3.45) into (3.43) and using the properties of exponentials, we have:

Z = Σ over {n1, …, n3N} Πα exp[ –β(nα + 1/2)ħω ] (3.46)

We can write the sum of products as a product of sums:

Z = Πα { Σ over nα exp[ –β(nα + 1/2)ħω ] } (3.47)

Since all the factors in the product on the right hand side of Eq. (3.47) are identical, we can write the partition function of a system of 3N SHOs in the form

Z = z^(3N) (3.48)

where z may be thought of as the partition function of a one-dimensional SHO. It can easily be shown that


z = exp(–βħω/2) / [ 1 – exp(–βħω) ] (3.49)

(ii) Internal energy

We recall the expression for the average thermal energy of a SHO:

<ε> = ħω/2 + ħω / [ exp(ħω/kT) – 1 ]

The internal energy of the system made of 3N SHOs is given by:

U = 3N <ε> = 3N { ħω/2 + ħω / [ exp(ħω/kT) – 1 ] } (3.50)

Discussions

At high temperatures (kT >> ħω), we can make a series expansion of the exponential term. Neglecting also the zero-point energy, which does not play any role in heat exchange, we are left simply with

U ≈ 3NkT (3.51)

Equation (3.51) is the equipartition-of-energy law of classical kinetic theory. In fact, each degree of freedom of a harmonic oscillator has a kinetic energy which is a quadratic function of the linear momentum and a potential energy which is a quadratic function of the position coordinate, and each of these terms contributes (kT/2). Hence, 3N SHOs have an average energy of 3NkT.

At low temperatures (ħω >> kT), it can easily be shown that the average thermal energy is given by:


U ≈ 3N [ ħω/2 + ħω exp(–ħω/kT) ] (3.52)

According to Equation (3.52), at low temperatures the average energy tends very rapidly

to the minimum value of (3Nћω/2), allowed by quantum theory for “frozen vibrations”.

(iii) Molar specific heat (N =NA)

The specific heat at constant volume for the Einstein solid is given by:

CV = (∂U/∂T)V = 3Nk (ħω/kT)² exp(ħω/kT) / [ exp(ħω/kT) – 1 ]² (3.53)

If we set TC = ħω/k, then for a mole of matter (N = NA atoms, i.e. 3NA independent oscillators) we have:

CV = 3R (TC/T)² exp(TC/T) / [ exp(TC/T) – 1 ]² (3.54)

where R = NAk is the gas constant and TC is called the Einstein temperature.

Discussions

(i) High temperatures limit: If the temperature T is much larger than TC (T>>TC),


CV ≈ 3R (3.55)

This is the classical behavior of the specific heat of monoatomic solids at high temperatures, when quantum effects are negligible. According to the Dulong – Petit empirical law, the molar specific heat of solids should have a constant value of 3R at relatively high temperatures.

(ii) Low-temperature limit (frozen vibrations): if T << TC, using equation (3.54) and neglecting the 1 in the denominator of the second term, we have:

CV ≈ 3R (TC/T)² exp(–TC/T)  ( → 0 as T → 0) (3.56)

Therefore, according to Einstein's model, the molar specific heat of solids should decrease with decreasing temperature and tend to zero as T → 0. The figure below presents CV as a function of (T/TC) for high, intermediate and very low temperatures.
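A short sketch evaluating Eq. (3.54) at a few temperatures (TC is given an arbitrary illustrative value), showing the approach to the Dulong – Petit value 3R at high T and the exponential freeze-out at low T:

import numpy as np

R = 8.314           # gas constant, J/(mol K)
T_C = 300.0         # example Einstein temperature, K

def c_v_einstein(T):
    """Molar specific heat of the Einstein solid, Eq. (3.54)."""
    x = T_C / T
    return 3 * R * x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

for T in (30.0, 150.0, 300.0, 1500.0):
    print(T, c_v_einstein(T) / (3 * R))   # ratio tends to 1 at high T, to 0 at low T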

(iv) Helmholtz free energy

F = – kT lnZ = – 3NkT ln z (3.57a)

where z is the partition function of a single SHO:

z = exp(–ħω/2kT) / [ 1 – exp(–ħω/kT) ] (3.57b)

Substituting Equation (3.57b) into Equation (3.57a), we have:


F = 3N { ħω/2 + kT ln[ 1 – exp(–ħω/kT) ] } (3.58)

(v) Entropy

The entropy is easily calculated from the Helmholtz free energy:

S = – (∂F/∂T)V = 3Nk { (ħω/kT) / [ exp(ħω/kT) – 1 ] – ln[ 1 – exp(–ħω/kT) ] } (3.59)


Chapter Four: Quantum statistics

4.1 The Fermi – Dirac statistics

4.1.1 Introduction


The Fermi – Dirac distribution applies to particles known as fermions (electrons, protons, neutrons, positrons, etc.) whose spin angular momentum, in units of ħ, assumes half-integer values: s = 1/2, 3/2, 5/2, … The wave function describing a system of fermions is completely antisymmetric, that is, if we exchange two particles with one another, the wave function changes sign. Fermions are subject to the Pauli exclusion principle (no more than one particle per quantum state). In a multi-electron atom, the Pauli exclusion principle states that no two electrons can have the same set of the four quantum numbers (n, l, ml, ms). Therefore, the occupation numbers will be nk = 0 or 1.

There are two important differences between classical statistics and quantum statistics: micro-particles of the same species are indistinguishable from one another and in quantum theory the energy levels are discrete.

In this section, we derive the Fermi - Dirac distribution and describe some of its most important applications.

4.1.2 Thermodynamics of fermions –Derivation of the FD distribution from

Gibbs Grand Canonical Ensemble

In the grand canonical ensemble we use the grand canonical partition function ZGC, which is defined by:

ZGC = ΣN Σi exp[ –β(Ei – μN) ] (4.1)

ZGC is again the normalization constant of the probabilities Pα. The summation consists of two parts: a sum over the particle number N and for each particle number, a sum over all microscopic states I of a system with that number of particles. Equation (4.1) may be written as follows,

Z_GC = Σ_N e^{βμN} Z_N        (4.2a)

where   Z_N = Σ_i e^{-βE_i} = e^{-βF_N}        (4.2b)


where F_N is the Helmholtz free energy and Z_N the canonical partition function of the N-particle system.

Substituting equation (4.2b) into (4.2a) we can write

Z_GC = Σ_N e^{β(μN - F_N)}        (4.3)

The grand canonical potential Ω is related to Z_GC by

Ω = -kT ln Z_GC        (4.4)

Calculating the derivative of Ω with respect to T, we have

∂Ω/∂T = -k ln Z_GC - kT ∂(ln Z_GC)/∂T        (4.5)

Using equation (4.1) and calculating the derivative of ln(ZGC) with respect to T, one easily finds after rearrangement,

(∂Ω/∂T)_{V,μ} = (Ω - U + μ<N>)/T        (4.6)

Where U and <N> are the average energy and average particle number, respectively. Since

Ω = U - TS - μ<N>        (4.7)

Differentiating and using the fundamental thermodynamic relation, TdS = dU + PdV - μdN, one easily finds

dΩ = -S dT - P dV - <N> dμ        (4.8)

Assuming that dΩ is an exact differential, we have the following relations:


S = -(∂Ω/∂T)_{V,μ} ;   P = -(∂Ω/∂V)_{T,μ} ;   <N> = -(∂Ω/∂μ)_{T,V}        (4.9)

We assume that n_1 fermions are in the state with energy ε_1, n_2 particles in the state with energy ε_2, ..., n_k particles in the state with energy ε_k. All the particles are identical and we have the following constraints:

N = Σ_k n_k   and   E = Σ_k n_k ε_k        (4.10)

Substituting equation (4.10) into equation (4.1) we have,

Z_GC = Σ_{n_1, n_2, ...} e^{-β Σ_k n_k (ε_k - μ)}        (4.11)

Since the summation is over all possible values of the occupation numbers in an independent manner, we may replace the exponential of a sum by a product of various terms. Therefore,

Z_GC = Π_k Σ_{n_k} e^{-β n_k (ε_k - μ)}        (4.12)

For fermions, n_k = 0 or 1; therefore,

Z_GC = Π_k (1 + e^{-β(ε_k - μ)})        (4.13)

In the Fermi expression, 1 + e^{-β(ε_k - μ)}, the first term represents the contribution for having zero particles in the state and the second term, the contribution for having one particle in the level.
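A small numerical check of the factorization in equations (4.11)–(4.13) may be useful; this is only a sketch with arbitrary illustrative level energies, chemical potential and temperature:

    import itertools
    import math

    eps = [0.0, 0.5, 1.3, 2.1]   # illustrative single-particle energies
    mu, kT = 0.8, 1.0
    beta = 1.0 / kT

    # Direct sum over all fermionic occupation configurations (n_k = 0 or 1), eq. (4.11)
    Z_direct = sum(
        math.exp(-beta * sum(n * (e - mu) for n, e in zip(ns, eps)))
        for ns in itertools.product([0, 1], repeat=len(eps))
    )

    # Product form of equation (4.13)
    Z_product = math.prod(1.0 + math.exp(-beta * (e - mu)) for e in eps)

    print(Z_direct, Z_product)   # the two numbers agree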

For fermions, the grand canonical potential is given by:


Ω = -kT ln Z_GC = -kT Σ_k ln(1 + e^{-β(ε_k - μ)})        (4.14)

If the spacing between energy levels is very much less than kT, we introduce the density of states, g(ε)dε, and we replace the summation by an integration,

Ω = -kT (2s + 1) ∫ g(ε) ln(1 + e^{-β(ε - μ)}) dε        (4.15)

Here, s is the intrinsic spin of the particles.

4.1.3 Average occupation numbers for fermions

In the grand canonical ensemble the number of particles is not fixed, but the probability distribution over the different states is such that the number actually fluctuates very little around an average value, which can be calculated from the last of relations (4.9),

<N> = -(∂Ω/∂μ)_{T,V} = kT ∂(ln Z_GC)/∂μ        (4.16)

The average number of fermions in a state of thermodynamic equilibrium can be calculated by differentiating the grand canonical potential of equation (4.14) with respect to μ,

<N> = Σ_k 1 / (e^{β(ε_k - μ)} + 1)        (4.17)

Comparing equations (4.17) and (4.16), we conclude that the average occupation number for fermions, for a state with energy ε_k, which is identical to the Fermi – Dirac probability distribution function, is given by:


<n_k> = f(ε_k) = 1 / (e^{(ε_k - μ)/kT} + 1)        (4.18)

From equation (4.13) we can calculate the charge density and the energy density of a gas of fermions using the following relations (see exercises……):

(4.19)

(4.20)

where f(ε) = 1/(e^{(ε - μ)/kT} + 1) is the Fermi – Dirac probability distribution function (occupation probability) defined by equation (4.18).

4.1.4 Derivation of the Fermi – Dirac distribution from the MCE

Consider a certain distribution of a very large number of free electrons. Let n_k electrons have energy ε_k and let the number of states available for occupation be g_k. Electrons are subject to the Pauli Exclusion Principle (no more than one electron per quantum state – there are more states than particles). Therefore, n_k ≤ g_k. Then, in the k-th level, n_k states are filled while g_k - n_k are empty. The number of ways in which this can happen with identical particles is the number of combinations of g_k objects – without repetitions – in groups of n_k:


W_k = g_k! / [n_k! (g_k - n_k)!]        (4.21)

A similar result is obtained for every other energy range. Each combination may occur together with any other combination for a different energy level. If W_k is the number of combinations of g_k objects – without repetitions – in groups of n_k, we can write

W = Π_k W_k        (4.22)

ln W = Σ_k ln W_k        (4.23)

ln W = Σ_k [ln(g_k!) - ln(n_k!) - ln((g_k - n_k)!)]        (4.24a)

Inserting Stirling's approximation, ln(n!) ≈ n ln(n) – n (for n → ∞), we have:

ln W ≈ Σ_k [g_k ln g_k - n_k ln n_k - (g_k - n_k) ln(g_k - n_k)]        (4.24b)
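A quick numerical illustration of Stirling's approximation (a sketch using math.lgamma, since ln(n!) = lgamma(n + 1)):

    import math

    for n in [10, 100, 1000, 10000]:
        exact = math.lgamma(n + 1)          # ln(n!)
        stirling = n * math.log(n) - n      # n ln n - n
        print(n, exact, stirling, (exact - stirling) / exact)
    # The relative error shrinks as n grows, which is why the approximation
    # is safe for macroscopic occupation numbers.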

At thermal equilibrium, the average occupation numbers n_k will be defined by the maximum value of ln W corresponding to the most probable arrangement, taking into account the two subsidiary conditions:

Σ_k n_k = N = constant        (4.25)


Σ_k n_k ε_k = E = constant        (4.26)

The problem of finding the maximum (or the minimum) of ln W with the constraints given in equations (4.25) and (4.26) is solved by the method of Lagrangian multipliers α and β:

d[ln W - α Σ_k n_k - β Σ_k n_k ε_k] = 0        (4.27a)

Differentiating with respect to the n_k and substituting into equation (4.27a), we find:

ln[(g_k - n_k)/n_k] - α - β ε_k = 0        (4.27b)

or   ln[(g_k - n_k)/n_k] = α + β ε_k        (4.27c)

Exponentiating both sides and rearranging gives

n_k = g_k / (e^{α + β ε_k} + 1)        (4.28)

Determination of the Lagrangian multipliers

In the limit of very small concentrations (n_k << g_k), which happens when e^{α + β ε_k} >> 1, equation (4.28) becomes


n_k ≈ g_k e^{-α} e^{-β ε_k}        (4.29)

Equation (4.29) is the Boltzmann distribution. Therefore, the Fermi – Dirac statistics coincides with the Boltzmann statistics at very small concentrations. Comparing equation (4.29) with the Maxwell – Boltzmann distribution for an ideal gas, for which n_k ∝ g_k e^{-ε_k/kT}, enables us to identify the parameter β with 1/kT. Another way of explaining the limit of equation (4.29) is the following: at the limit of weak concentrations (n_k << g_k), the chances of superposition of two particles in the same state are so small that the Pauli Exclusion Principle plays no role; the FD distribution coincides with the Boltzmann distribution. It can be shown that α = -μ/kT with μ = E_F, where E_F is the Fermi energy. Substituting this expression of μ into equation (4.28), we have,

n_k = g_k / (e^{(ε_k - E_F)/kT} + 1)        (4.30a)

The probability of finding a filled state at energy ε_k is:

f(ε_k) = n_k/g_k = 1 / (e^{(ε_k - E_F)/kT} + 1)        (4.30b)

Equation (4.30b) is called the “Fermi – Dirac probability distribution function,” or “F – D function” for short.

Figure ( ) presents n_k versus E_k for T = 0 and T > 0. It is observed that at T = 0 all states below the Fermi energy are filled and all states above E_F are vacant.
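A brief numerical sketch of the F – D function (4.30b) around the Fermi energy (the Fermi energy and temperatures below are illustrative values only):

    import numpy as np
    from scipy.special import expit

    k_B = 8.617e-5          # Boltzmann constant, eV/K
    E_F = 5.0               # illustrative Fermi energy, eV

    def f_FD(E, T):
        # expit(x) = 1/(1 + exp(-x)), so this equals 1/(exp((E - E_F)/kT) + 1)
        return expit(-(E - E_F) / (k_B * T))

    E = np.array([4.8, 4.99, 5.0, 5.01, 5.2])   # energies near E_F, eV
    for T in [10.0, 300.0, 3000.0]:             # temperatures, K
        print(T, np.round(f_FD(E, T), 3))
    # At low T the occupation drops sharply from 1 to 0 across E_F;
    # at higher T the step is smeared over a width of order k_B*T.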


From equations (4.5), (4.6), (4.8) and (4.30a), the number of particles (N) and the total energy

(E) can be calculated using the Fermi – Dirac distribution function f(ε_k):

N = Σ_k n_k = Σ_k g_k f(ε_k)        (4.31a)

E = Σ_k n_k ε_k = Σ_k g_k ε_k f(ε_k)        (4.31b)

<ε> = E/N = Σ_k g_k ε_k f(ε_k) / Σ_k g_k f(ε_k)        (4.31c)

We can use equations (4.31a) and (4.31c) to determine the Fermi energy and the mean energy.

When dealing with a continuous distribution, as for electrons in a metal, we can use expressions analogous to equations (4.31a) and (4.31c) to calculate the number of particles and the total energy, by replacing the “sum” with the “integral over the volume”:

N = ∫ g(ε) f(ε) dε        (4.32)

E = ∫ ε g(ε) f(ε) dε        (4.33)

where g(ε) dε is the density of states in the energy range between ε and ε + dε. It is given by equation (1.3) for free particles. Equation (4.32) enables one to determine the density of electrons at absolute zero temperature, a value from which the Fermi energy is obtained (see exercise…..):


E_F = (ħ²/2m) (3π² n)^{2/3}        (4.34)

Notice that the Fermi energy is also called the chemical potential at absolute zero temperature (μ_0). The density of electrons in any metal is very high (of the order of 10^28 m⁻³) and the Fermi energy E_F is of the order of 10 eV; this energy is equivalent to a temperature of the order of 10^5 K. Therefore, E_F is much greater than kT for the temperatures which can be realized in any laboratory.
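A small numerical sketch of equation (4.34), using a free-electron density typical of a metal (the value n = 8.5 × 10^28 m⁻³, roughly that of copper, is an illustrative assumption):

    import math

    hbar = 1.054571817e-34   # J s
    m_e  = 9.1093837015e-31  # kg
    k_B  = 1.380649e-23      # J/K
    eV   = 1.602176634e-19   # J

    n = 8.5e28               # assumed electron density, m^-3 (roughly copper)

    E_F = (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n) ** (2.0 / 3.0)
    T_F = E_F / k_B          # the equivalent "Fermi temperature"

    print(E_F / eV, T_F)     # about 7 eV and about 8e4 K, so E_F >> kT at room temperature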

4.2 Quantum Statistics: The Bose – Einstein Distribution

4.2.1 Introduction

The Bose – Einstein distribution applies to bosons (photons, helium-4 atoms, etc.), particles whose total spin angular momentum, in units of ħ, has integral values: s = 0, 1, 2, ... The wave function describing bosons is completely symmetric, that is, if we interchange two particles with one another, the wave function remains unchanged. Bosons are not subject to the Pauli Exclusion Principle. Therefore, many bosons may occupy the same quantum state, and the occupation numbers can be n_k = 0, 1, 2, 3, 4, … Like fermions, bosons are indistinguishable from one another.

In this lecture, we derive the Bose – Einstein distribution function using the micro – canonical ensemble and the grand canonical ensemble and describe some of its most important applications.

4.2.2 Derivation of the Bose – Einstein distribution from the MCE

Let g(s) be the number of available states. We must calculate the number of ways in which the n_s particles can be distributed among the g(s) states. The problem is identical to the following one: n_s balls are to be arranged in g(s) boxes, allowing any number of balls in a box.

The number of distinguishable arrangements is the number of combinations of g_s objects in groups of n_s (taking n_s at a time) with repetitions:


W_s = (n_s + g_s - 1)! / [n_s! (g_s - 1)!]        (4.35)

Then, summing over all values of s,

ln W = Σ_s ln W_s = Σ_s ln{(n_s + g_s - 1)! / [n_s! (g_s - 1)!]}        (4.36)

Since g(s) is in all normal cases a large number, compared with both n_s and unity,

ln W ≈ Σ_s ln{(n_s + g_s)! / [n_s! g_s!]}        (4.37)
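A brute-force check of the counting formula (4.35) for small numbers (the values of n_s and g_s below are arbitrary illustrative choices):

    from itertools import combinations_with_replacement
    from math import comb

    n_s, g_s = 4, 3   # illustrative: 4 bosons distributed over 3 states

    # Enumerate multisets of size n_s drawn from g_s states: each multiset is
    # one distinguishable arrangement of indistinguishable balls in boxes.
    direct = sum(1 for _ in combinations_with_replacement(range(g_s), n_s))

    formula = comb(n_s + g_s - 1, n_s)   # (n_s + g_s - 1)! / [n_s! (g_s - 1)!]

    print(direct, formula)   # both give 15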

(The counting in equation (4.35) can be pictured as follows: lay out the n_s balls and the g_s boxes in a line in random order, choosing the arrangement to start with a box, and then move those balls which are immediately to the right of a given box into that box; the number of distinguishable orderings of the remaining n_s balls and g_s - 1 boxes is then (n_s + g_s - 1)!/[n_s!(g_s - 1)!], which is equation (4.35).)

At thermal equilibrium, the average occupation numbers n_s will be defined by the maximum value of ln W corresponding to the most probable arrangement, taking into account the two subsidiary conditions:

Σ_s n_s = N ;   Σ_s n_s ε_s = E        (4.38)

The problem of finding a maximum (or a minimum) with specified conditions is usually solved by the method of Lagrangian multipliers (α and β):


d[ln W - α Σ_s n_s - β Σ_s n_s ε_s] = 0        (4.39)

We estimate ln W using Stirling's approximation:

ln W ≈ Σ_s [(n_s + g_s) ln(n_s + g_s) - n_s ln n_s - g_s ln g_s]        (4.40)

Carrying out the variation in equation (4.39) then yields n_s = g_s / (e^{α + β ε_s} - 1), which, with β = 1/kT and α = -μ/kT, is the Bose – Einstein distribution obtained again in equation (4.46) below.

4.2.3 Derivation from the grand canonical ensemble

(a) The Grand canonical partition function and the grand potential


The calculations done in section 4.1.2 are applicable. In particular, the expression derived for the grand canonical partition function will be used for bosons, but the occupation number will now take any value from 0 to ∞. We recall equation (4.12):

Z_GC = Π_k Σ_{n_k} e^{-β n_k (ε_k - μ)}        (4.41)

For bosons the grand partition function is calculated taking into account that nk may have any value from 0 to ∞,

Z_GC = Π_k Σ_{n_k=0}^{∞} [e^{-β(ε_k - μ)}]^{n_k}        (4.42a)

Z_GC = Π_k 1 / (1 - e^{-β(ε_k - μ)})        (4.42b)

Using the well-known property that the natural logarithm of a product of terms is equal to the sum of the natural logarithms of the various terms, we obtain for the grand canonical potential the following expression,

Ω = -kT ln Z_GC = kT Σ_k ln(1 - e^{-β(ε_k - μ)})        (4.43)

(b) Average occupation number or Bose – Einstein distribution function

From the relations (4.9), we can calculate the average number of bosons in a state of thermodynamic equilibrium just by differentiating the grand canonical potential with respect to the chemical potential (μ):

<N> = -(∂Ω/∂μ)_{T,V}        (4.44)


<N> = Σ_k 1 / (e^{β(ε_k - μ)} - 1)        (4.45)

For a state with energy ε_k, the average occupation number, which is identical to the Bose – Einstein probability distribution function, is given by:

<n_k> = f_BE(ε_k) = 1 / (e^{(ε_k - μ)/kT} - 1)        (4.46)

Diagram: fBE(ε, T) versus ε – see classroom. Note that μ is either zero or negative.
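A short numerical sketch of equation (4.46) for a negative chemical potential (all numbers are illustrative):

    import numpy as np

    def f_BE(eps, mu, kT):
        """Bose-Einstein occupation number, equation (4.46); requires eps > mu."""
        return 1.0 / np.expm1((eps - mu) / kT)

    kT = 1.0
    eps = np.array([0.05, 0.1, 0.5, 1.0, 5.0])   # illustrative level energies
    for mu in [-0.01, -0.5]:                     # mu must be zero or negative
        print(mu, np.round(f_BE(eps, mu, kT), 3))
    # As mu -> 0 from below, the occupation of the lowest levels grows rapidly,
    # which is the precursor of Bose-Einstein condensation.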

4.3 Applications of the Bose – Einstein distribution function

4.3.1 Derivation of Planck's radiation formula

Planck's radiation formula marked one of the origins of the quantum theory. Quantum ideas were introduced to obtain agreement between the theory and the observed “black body radiation spectrum”. About 10 years later, Planck's radiation formula was derived from the Bose – Einstein distribution function. We consider a cavity resonator, isolated from the outside world except for a small hole through which some radiation is emitted. Since the hole is very small, the amount of radiation escaping from it will be negligible. The resonator is maintained in a state of thermodynamic equilibrium at temperature T (see figure…). The number of photons present in the cavity is not fixed: photons are absorbed and re-emitted by the walls of the cavity at temperature T. We must therefore abandon the constraint Σ_k n_k = N = constant. The Lagrangian multiplier μ is no longer required: we set μ = 0. However, the total energy is conserved. The only Lagrangian multiplier that remains is β. The radiation in thermal equilibrium consists of an ensemble of photons, quanta of the electromagnetic field (bosons), subject to Bose – Einstein statistics. The number of photons in the state with energy ε = hν is:

<n(ν)> = 1 / (e^{hν/kT} - 1)        (4.47)


The momentum of a photon is given by p = hν/c. Using equation (1.3) for the density of states and the fact that there are two states of polarization for electromagnetic waves, we find that the number of quantum states per unit volume in the frequency range between ν and ν + dν is given by:

g(ν) dν = (8π/c³) ν² dν        (4.48)

The energy density in the frequency range between ν and ν + dν is given by:

u(ν) dν = hν <n(ν)> g(ν) dν        (4.49a)

u(ν) dν = (8πh/c³) ν³ dν / (e^{hν/kT} - 1)        (4.49b)

Equation (4.49b) is known as Planck's radiation formula. This law accounts very well for all the properties of the black body radiation.

Discussions

(i) Classical limit (Rayleigh – Jeans theory)

For (hν/kT) << 1:   e^{hν/kT} - 1 ≈ [1 + (hν/kT) + … ] - 1 ≈ hν/kT

u(ν) dν ≈ (8πν²/c³) kT dν        (4.50a)

Equation (4.50a) is in agreement with the classical equipartition theorem. The total energy would be:

U = ∫_0^∞ (8πν²/c³) kT dν → ∞        (4.50b)

The divergence of the total energy as predicted by equation (4.50b) in the Rayleigh – Jeans theory is known as the ultraviolet catastrophe. In practice this means that the classical theory failed to explain the black body radiation spectrum.


(ii) Quantum limit (hν/kT) >> 1: this is the case of visible and ultraviolet optics, corresponding to Wien's experimental law. Neglecting the 1 in the denominator of equation (4.49b), the density of energy becomes,

u(ν) dν ≈ (8πh/c³) ν³ e^{-hν/kT} dν        (4.51)
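A compact numerical comparison of the Planck formula (4.49b) with its Rayleigh – Jeans (4.50a) and Wien (4.51) limits (T = 5000 K is an arbitrary illustrative choice):

    import numpy as np

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI units
    T = 5000.0                                # illustrative temperature, K

    def planck(nu):          # equation (4.49b)
        return (8 * np.pi * h / c**3) * nu**3 / np.expm1(h * nu / (k * T))

    def rayleigh_jeans(nu):  # equation (4.50a)
        return (8 * np.pi * nu**2 / c**3) * k * T

    def wien(nu):            # equation (4.51)
        return (8 * np.pi * h / c**3) * nu**3 * np.exp(-h * nu / (k * T))

    for nu in [1e12, 1e14, 1e15]:   # hv/kT roughly 0.01, 1 and 10
        print(f"{nu:.0e}", planck(nu), rayleigh_jeans(nu), wien(nu))
    # Rayleigh-Jeans matches Planck at low frequencies but diverges at high
    # frequencies (ultraviolet catastrophe); Wien matches at high frequencies.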

4.3.2 Laws derived from Planck's radiation formula

Two important laws related to the black body radiation can be deduced easily from Planck's radiation formula, namely Wien's displacement law and the Stefan – Boltzmann law.

4.3.2.1 Wien's displacement law

We can find the energy density in the wavelength range between λ and λ + dλ using the relationship between ν and λ, that is, ν = c/λ and dν = -(c/λ²) dλ. Substituting these relations into equation (4.49b), Planck's radiation formula becomes:

u(λ) dλ = (8πhc/λ⁵) dλ / (e^{hc/λkT} - 1)        (4.52)

The wavelength at which this function reaches its maximum value is found by setting du(λ)/dλ = 0 at λ = λ_max. Using the quantum limit (hc/λkT >> 1), one establishes easily:

λ_max T ≈ hc/5k ≈ 2.9 × 10⁻³ m K        (4.53)

where λ_max is the wavelength at which the maximum of u(λ) is observed. Equation (4.53) is known as Wien's displacement law. The law says that the position of the maximum of the radiation intensity shifts to shorter wavelengths as the temperature increases.
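The exact maximum of (4.52) can also be checked numerically: setting du/dλ = 0 leads to the transcendental equation x = 5(1 - e^{-x}) with x = hc/λ_max kT, whose root is x ≈ 4.965 (a sketch using scipy):

    import math
    from scipy.optimize import brentq

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23

    # Maximising u(lambda) in equation (4.52) leads to x = 5*(1 - exp(-x)),
    # where x = hc/(lambda_max * k * T).
    x = brentq(lambda t: t - 5.0 * (1.0 - math.exp(-t)), 1.0, 10.0)

    print(x)                 # about 4.965
    print(h * c / (x * k))   # lambda_max * T, about 2.9e-3 m K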

4.3.2.2 Stefan – Boltzmann law


The total radiated energy, per unit volume, by a black body in thermal equilibrium at a temperature T is given by:

u = ∫_0^∞ u(ν) dν = (8π⁵k⁴ / 15h³c³) T⁴        (4.54)

One of the more interesting quantities in the study of the black body radiation is the proportion of the intensity emitted or absorbed in a particular range of wavelengths. Defining the spectral radiancy R (λ) so that R(λ)dλ represents the radiated power per unit area within a range of wavelengths between λ and λ + dλ, we have:

R = ∫_0^∞ R(λ) dλ = (c/4) u = σ T⁴        (4.55)

where σ = 2π⁵k⁴/(15h³c²) ≈ 5.67 × 10⁻⁸ W m⁻² K⁻⁴ is the Stefan – Boltzmann constant. Equation (4.55) is the Stefan – Boltzmann law. It is clear that the radiancy of a black body increases sharply with increasing temperature.
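A one-line numerical check that the constant in (4.55), σ = 2π⁵k⁴/(15h³c²), has the familiar value:

    import math

    h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

    sigma = 2 * math.pi**5 * k**4 / (15 * h**3 * c**2)
    print(sigma)   # about 5.670e-8 W m^-2 K^-4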

4.4 The photon gas and Planck's radiation formula

(a) The partition function of a photon gas

The derivation of the Planck radiation formula may be achieved using the idea of electromagnetic radiation as a gas of photons. The number of photons present in a cavity resonator is not fixed: photons are absorbed and re-emitted by the walls of the cavity at temperature T. Since the number of photons is not conserved, we must abandon the condition Σ_k n_k = N = constant. We also choose the Lagrangian multiplier μ equal to zero. Since each state with energy ε_k can contain n_k photons, the partition function is given by:

Z = Σ_{n_1, n_2, ...} e^{-β(n_1 ε_1 + n_2 ε_2 + …)}        (4.56)


In equation (4.56), ε_k = ħω_k is the energy of mode k. Since the different mode oscillators are independent of one another, we can write equation (4.56) as a product of terms of the type,

z_k = Σ_{n_k=0}^{∞} e^{-β n_k ε_k} = 1 / (1 - e^{-β ε_k})        (4.57)

Z = Π_k z_k = Π_k 1 / (1 - e^{-β ε_k})        (4.58)

ln Z = -Σ_k ln(1 - e^{-β ε_k})        (4.59)

(b) Average number of photons with energy εi

<n_i> = -(1/β) ∂(ln Z)/∂ε_i        (4.60a)

By differentiating equation (4.59) with respect to ε_i we obtain, after rearranging,

<n_i> = 1 / (e^{β ε_i} - 1)        (4.60b)

Equation (4.60b) is the expression of the Bose – Einstein distribution function for photons (bosons for which μ= 0).

(c) Internal energy of the photon gas

U = Σ_i ε_i <n_i> = Σ_i ε_i / (e^{β ε_i} - 1)        (4.61)

(d) Planck radiation formula


We use the expression derived in equation (1.3); the energy of a photon is equal to ε = ħω, and there are two states of polarization for electromagnetic waves. Thus, the density of states for each energy is given by:

g(ε) dε = (8πV / h³c³) ε² dε        (4.62a)

ε = ħω ;   dε = ħ dω        (4.62b)

Substituting equation (4.62b) into (4.62a), we have:

g(ω) dω = (V / π²c³) ω² dω        (4.63)

Now the total energy in the cavity is given by,

U = Σ_i ε_i <n_i> = ∫_0^∞ ħω <n(ω)> g(ω) dω = (Vħ / π²c³) ∫_0^∞ ω³ dω / (e^{ħω/kT} - 1)        (4.64)

The density of energy in the frequency range between ω and ω + dω is:

u(ω) dω = (ħ / π²c³) ω³ dω / (e^{ħω/kT} - 1)        (4.65)

The latter equation is Planck's radiation formula.

(e) The Helmholtz free energy

We recall the connection between the Helmholtz free energy and the partition function,

F = -kT ln Z.

Introducing the density of states in the frequency range between ω and ω + dω (equation (4.63)) and replacing the sum by an integral in equation (4.59), we have,


F = kT (V/π²c³) ∫_0^∞ ω² ln(1 - e^{-ħω/kT}) dω        (4.66)

With the change of variable x = ħω/kT, expression (4.66) transforms to

F = (V (kT)⁴ / π²c³ħ³) ∫_0^∞ x² ln(1 - e^{-x}) dx        (4.67)

Integrating by parts the integral in equation (4.67), we obtain

∫_0^∞ x² ln(1 - e^{-x}) dx = -(1/3) ∫_0^∞ x³ dx / (e^x - 1) = -π⁴/45        (4.68)

Substituting equation (4.68) into equation (4.67), we obtain:

F = -(π² V (kT)⁴ / 45 ħ³c³) = -(4σ/3c) V T⁴        (4.69)

where σ = 2π⁵k⁴/(15h³c²) = π²k⁴/(60ħ³c²) is the Stefan – Boltzmann constant.
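A quick numerical confirmation of the integral used in equation (4.68) (a sketch using scipy's quadrature):

    import math
    from scipy.integrate import quad

    val, err = quad(lambda x: x**3 / math.expm1(x), 0, math.inf)
    print(val, math.pi**4 / 15)   # both are about 6.4939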

(f) The Entropy and pressure of a photon gas

S = -(∂F/∂T)_V = (16σ/3c) V T³        (4.70)

P = -(∂F/∂V)_T = (4σ/3c) T⁴        (4.71)

From equations (4.70) and (4.71), we can obtain the total energy radiated by a black body. From the definition of the Helmholtz free energy, the total internal energy in the volume V is given by:


U = F + TS = -(4σ/3c) V T⁴ + (16σ/3c) V T⁴ = (4σ/c) V T⁴        (4.72a)

If V = one unit volume, equation (4.72a) becomes

u = (4σ/c) T⁴        (4.72b)
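A final numerical sketch of equations (4.71)–(4.72b) at an illustrative temperature of T = 5800 K (roughly the solar surface):

    sigma = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    c = 2.998e8           # speed of light, m/s
    T = 5800.0            # illustrative temperature, K

    u = 4 * sigma * T**4 / c   # energy density, J/m^3, equation (4.72b)
    P = u / 3                  # radiation pressure, Pa, equation (4.71)
    print(u, P)                # about 0.86 J/m^3 and about 0.29 Pa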
