TURBULENCE, ENTROPY AND DYNAMICS Lecture Notes, UPC 2014 Jose M. Redondo



Contents

1 Turbulence
   1.1 Features
   1.2 Examples of turbulence
   1.3 Heat and momentum transfer
   1.4 Kolmogorov's theory of 1941
   1.5 See also
   1.6 References and notes
   1.7 Further reading
      1.7.1 General
      1.7.2 Original scientific research papers and classic monographs
   1.8 External links

2 Turbulence modeling
   2.1 Closure problem
   2.2 Eddy viscosity
   2.3 Prandtl's mixing-length concept
   2.4 Smagorinsky model for the sub-grid scale eddy viscosity
   2.5 Spalart–Allmaras, k–ε and k–ω models
   2.6 Common models
   2.7 References
      2.7.1 Notes
      2.7.2 Other

3 Reynolds stress equation model
   3.1 Production term
   3.2 Pressure-strain interactions
   3.3 Dissipation term
   3.4 Diffusion term
   3.5 Pressure-strain correlation term
   3.6 Rotational term
   3.7 Advantages of RSM
   3.8 Disadvantages of RSM
   3.9 See also
   3.10 See also
   3.11 References
   3.12 Bibliography

4 Boundary layer
   4.1 Aerodynamics
   4.2 Naval architecture
   4.3 Boundary layer equations
   4.4 Turbulent boundary layers
   4.5 Heat and mass transfer
   4.6 Convective transfer constants from boundary layer analysis
   4.7 Boundary layer turbine
   4.8 See also
   4.9 References
   4.10 External links

5 Similitude (model)
   5.1 Overview
   5.2 An example
   5.3 Typical applications
   5.4 Notes
   5.5 See also
   5.6 References
   5.7 External links

6 Lagrangian and Eulerian specification of the flow field
   6.1 Description
   6.2 Substantial derivative
   6.3 See also
   6.4 Notes
   6.5 References

7 Lagrangian mechanics
   7.1 Conceptual framework
      7.1.1 Generalized coordinates
      7.1.2 D'Alembert's principle and generalized forces
      7.1.3 Kinetic energy relations
      7.1.4 Lagrangian and action
      7.1.5 Hamilton's principle of stationary action
   7.2 Lagrange equations of the first kind
   7.3 Lagrange equations of the second kind
      7.3.1 Euler–Lagrange equations
      7.3.2 Derivation of Lagrange's equations
      7.3.3 Dissipation function
      7.3.4 Examples
   7.4 Extensions of Lagrangian mechanics
   7.5 See also
   7.6 References
   7.7 Further reading
   7.8 External links

8 Hamiltonian mechanics
   8.1 Overview
      8.1.1 Basic physical interpretation
      8.1.2 Calculating a Hamiltonian from a Lagrangian
   8.2 Deriving Hamilton's equations
   8.3 As a reformulation of Lagrangian mechanics
   8.4 Geometry of Hamiltonian systems
   8.5 Generalization to quantum mechanics through Poisson bracket
   8.6 Mathematical formalism
   8.7 Riemannian manifolds
   8.8 Sub-Riemannian manifolds
   8.9 Poisson algebras
   8.10 Charged particle in an electromagnetic field
   8.11 Relativistic charged particle in an electromagnetic field
   8.12 See also
   8.13 References
      8.13.1 Footnotes
      8.13.2 Sources
   8.14 External links

9 Classical mechanics
   9.1 History
   9.2 Description of the theory
      9.2.1 Position and its derivatives
      9.2.2 Forces; Newton's second law
      9.2.3 Work and energy
      9.2.4 Beyond Newton's laws
   9.3 Limits of validity
      9.3.1 The Newtonian approximation to special relativity
      9.3.2 The classical approximation to quantum mechanics
   9.4 Branches
   9.5 See also
   9.6 Notes
   9.7 References
   9.8 Further reading
   9.9 External links

10 Entropy (information theory)
   10.1 Introduction
   10.2 Definition
   10.3 Example
   10.4 Rationale
   10.5 Aspects
      10.5.1 Relationship to thermodynamic entropy
      10.5.2 Entropy as information content
      10.5.3 Data compression
      10.5.4 World's technological capacity to store and communicate entropic information
      10.5.5 Limitations of entropy as information content
      10.5.6 Limitations of entropy as a measure of unpredictability
      10.5.7 Data as a Markov process
      10.5.8 b-ary entropy
   10.6 Efficiency
   10.7 Characterization
      10.7.1 Continuity
      10.7.2 Symmetry
      10.7.3 Maximum
      10.7.4 Additivity
   10.8 Further properties
   10.9 Extending discrete entropy to the continuous case
      10.9.1 Differential entropy
      10.9.2 Relative entropy
   10.10 Use in combinatorics
      10.10.1 Loomis-Whitney inequality
      10.10.2 Approximation to binomial coefficient
   10.11 See also
   10.12 References
   10.13 Further reading
      10.13.1 Textbooks on information theory
   10.14 External links

11 Topological entropy
   11.1 Definition
      11.1.1 Definition of Adler, Konheim, and McAndrew
      11.1.2 Definition of Bowen and Dinaburg
   11.2 Properties
   11.3 Examples
   11.4 Notes
   11.5 See also
   11.6 References

12 Measure-preserving dynamical system
   12.1 Definition
   12.2 Examples
   12.3 Homomorphisms
   12.4 Generic points
   12.5 Symbolic names and generators
   12.6 Operations on partitions
   12.7 Measure-theoretic entropy
   12.8 See also
   12.9 References
   12.10 Examples

13 List of Feynman diagrams

14 Canonical quantization
   14.1 History
   14.2 First quantization
      14.2.1 Single particle systems
      14.2.2 Many-particle systems
   14.3 Issues and limitations
   14.4 Second quantization: field theory
      14.4.1 Field operators
      14.4.2 Condensates
   14.5 Mathematical quantization
   14.6 See also
   14.7 References
      14.7.1 Historical References
      14.7.2 General Technical References
   14.8 External links
   14.9 Text and image sources, contributors, and licenses
      14.9.1 Text
      14.9.2 Images
      14.9.3 Content license


Chapter 1

Turbulence

[Figure: Flow visualization of a turbulent jet, made by laser-induced fluorescence. The jet exhibits a wide range of length scales, an important characteristic of turbulent flows.]

In fluid dynamics, turbulence or turbulent flow is a flow regime characterized by chaotic changes in flow properties, including low momentum diffusion, high momentum convection, and rapid variation of pressure and velocity in space and time. Flow in which the kinetic energy dies out due to the action of fluid molecular viscosity is called laminar flow. While there is no theorem relating the non-dimensional Reynolds number (Re) to turbulence, flows at Reynolds numbers larger than 5000 are typically (but not necessarily) turbulent, while those at low Reynolds numbers usually remain laminar. In Poiseuille flow, for example, turbulence can first be sustained if the Reynolds number is larger than a critical value of about 2040;[1] moreover, the turbulence is generally interspersed with laminar flow until a larger Reynolds number of about 4000.

[Figure: Laminar and turbulent water flow over the hull of a submarine.]

[Figure: Turbulence in the tip vortex from an airplane wing.]

In turbulent flow, unsteady vortices appear on many scales and interact with each other. Drag due to boundary-layer skin friction increases, and the structure and location of boundary-layer separation often change, sometimes resulting in a reduction of overall drag. Although the laminar-turbulent transition is not governed by a single value of the Reynolds number, the same transition occurs if the size of the object is gradually increased, the viscosity of the fluid is decreased, or the density of the fluid is increased. Nobel laureate Richard Feynman described turbulence as “the most important unsolved problem of classical physics.”[2]
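
The Reynolds-number thresholds quoted above can be wrapped in a small helper. This is a minimal sketch (the function names and the water example are illustrative additions, not from the notes), using Re = U·L/ν together with the approximate 2040 and 4000 pipe-flow values cited in the text:

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = U * L / nu for characteristic velocity U [m/s],
    length L [m] and kinematic viscosity nu [m^2/s]."""
    return velocity * length / kinematic_viscosity

def classify_pipe_flow(re):
    """Rough regime classification for pipe (Poiseuille) flow,
    using the ~2040 and ~4000 thresholds quoted above."""
    if re < 2040:
        return "laminar"
    elif re < 4000:
        return "intermittent (laminar/turbulent)"
    return "turbulent"

# Water (nu ~ 1e-6 m^2/s) flowing at 1 m/s through a 5 cm pipe:
re = reynolds_number(1.0, 0.05, 1.0e-6)  # ~50000
print(classify_pipe_flow(re))
```

Note that such thresholds are flow-specific: as the text says, no theorem ties a single Reynolds number to the onset of turbulence.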

1.1 Features

Turbulence is characterized by the following features:

• Irregularity: Turbulent flows are always highly irregular. For this reason, turbulence problems are normally treated statistically rather than deterministically. Turbulent flow is chaotic; however, not all chaotic flows are turbulent.

• Diffusivity: The readily available supply of energy in turbulent flows tends to accelerate the homogenization (mixing) of fluid mixtures. The characteristic responsible for the enhanced mixing and increased rates of mass, momentum and energy transport in a flow is called “diffusivity”.

• Rotationality: Turbulent flows have non-zero vorticity and are characterized by a strong three-dimensional vortex generation mechanism known as vortex stretching. In fluid dynamics, they are essentially vortices subjected to stretching, with a corresponding increase of the component of vorticity in the stretching direction due to the conservation of angular momentum. Vortex stretching is also the core mechanism on which the turbulence energy cascade relies to establish the structure function. In general, the stretching mechanism implies thinning of the vortices in the direction perpendicular to the stretching direction, due to volume conservation of fluid elements. As a result, the radial length scale of the vortices decreases and the larger flow structures break down into smaller structures. The process continues until the small-scale structures are small enough that their kinetic energy can be transformed by the fluid’s molecular viscosity into heat. This is why turbulence is always rotational and three-dimensional. For example, atmospheric cyclones are rotational, but their substantially two-dimensional shapes do not allow vortex generation, so they are not turbulent. On the other hand, oceanic flows are dispersive but essentially non-rotational, and therefore are not turbulent.

• Dissipation: To sustain turbulent flow, a persistent source of energy supply is required, because turbulence dissipates rapidly as kinetic energy is converted into internal energy by viscous shear stress.
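
The vortex-stretching argument in the Rotationality item rests on the two conservation laws it invokes: for a fluid cylinder of radius r and length h aligned with the stretch, volume conservation gives r²h = const and angular-momentum conservation gives ωr² = const, so lengthening the tube shrinks its radius and amplifies its vorticity. A toy illustration under those assumptions (the function is hypothetical, not from the notes):

```python
def stretch_vortex(omega, r, h, stretch_factor):
    """Stretch a cylindrical vortex tube (vorticity omega, radius r,
    length h) by stretch_factor, conserving the element's volume
    (r^2 * h = const) and angular momentum (omega * r^2 = const)."""
    h_new = h * stretch_factor
    r_new = r / stretch_factor ** 0.5       # from r^2 * h = const
    omega_new = omega * (r / r_new) ** 2    # from omega * r^2 = const
    return omega_new, r_new, h_new

# Doubling the tube length halves r^2 and doubles the vorticity:
omega, r, h = stretch_vortex(1.0, 1.0, 1.0, 2.0)
print(omega, r, h)
```

The amplification ω ∝ h, with the radial scale shrinking, is exactly the "thinning of the vortices" and breakdown to smaller scales described above.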

Turbulent diffusion is usually described by a turbulent diffusion coefficient. This turbulent diffusion coefficient is defined in a phenomenological sense, by analogy with the molecular diffusivities, but it does not have a true physical meaning, being dependent on the flow conditions rather than a property of the fluid itself. In addition, the turbulent diffusivity concept assumes a constitutive relation between a turbulent flux and the gradient of a mean variable, similar to the relation between flux and gradient that exists for molecular transport. In the best case, this assumption is only an approximation. Nevertheless, the turbulent diffusivity is the simplest approach for quantitative analysis of turbulent flows, and many models have been postulated to calculate it. For instance, in large bodies of water like oceans this coefficient can be found using Richardson's four-thirds power law and is governed by the random walk principle. In rivers and large ocean currents, the diffusion coefficient is given by variations of Elder's formula.

Turbulence causes the formation of eddies of many different length scales. Most of the kinetic energy of the turbulent motion is contained in the large-scale structures. The energy “cascades” from these large-scale structures to smaller-scale structures by an inertial and essentially inviscid mechanism. This process continues, creating smaller and smaller structures and producing a hierarchy of eddies. Eventually this process creates structures that are small enough that molecular diffusion becomes important and viscous dissipation of energy finally takes place. The scale at which this happens is the Kolmogorov length scale.

Via this energy cascade, turbulent flow can be realized as a superposition of a spectrum of velocity fluctuations and eddies upon a mean flow. The eddies are loosely defined as coherent patterns of velocity, vorticity and pressure. Turbulent flows may be viewed as made of an entire hierarchy of eddies over a wide range of length scales, and the hierarchy can be described by the energy spectrum, which measures the energy in velocity fluctuations for each length scale (wavenumber). The scales in the energy cascade are generally uncontrollable and highly non-symmetric. Nevertheless, based on these length scales the eddies can be divided into three categories.

1. Integral length scales: The largest scales in the energy spectrum. These eddies obtain energy from the mean flow and also from each other. Thus, these are the energy-production eddies, which contain most of the energy. They have large velocity fluctuations and low frequency. Integral scales are highly anisotropic and are defined in terms of the normalized two-point velocity correlations. The maximum length of these scales is constrained by the characteristic length of the apparatus. For example, the largest integral length scale of pipe flow is equal to the pipe diameter. In the case of atmospheric turbulence, this length can reach up to the order of several hundred kilometers.

2. Kolmogorov length scales: The smallest scales in the spectrum, which form the viscous sub-layer range. In this range, the energy input from nonlinear interactions and the energy drain from viscous dissipation are in exact balance. The small scales have high frequency, causing turbulence to be locally isotropic and homogeneous.

3. Taylor microscales: The intermediate scales between the largest and the smallest, which make up the inertial subrange. Taylor microscales are not dissipative scales but pass energy down from the largest to the smallest without dissipation. Some of the literature does not consider Taylor microscales a characteristic length scale, taking the energy cascade to contain only the largest and smallest scales, with the latter accommodating both the inertial sub-range and the viscous sub-layer. Nevertheless, Taylor microscales are often used to describe turbulence more conveniently, as they play a dominant role in energy and momentum transfer in wavenumber space.
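
The scale hierarchy above can be made concrete. In Kolmogorov's dimensional analysis, the smallest (Kolmogorov) length, time and velocity scales follow from the kinematic viscosity ν and the mean dissipation rate ε alone; the Taylor microscale of isotropic turbulence sits between them; and Richardson's four-thirds power law, mentioned earlier for oceanic diffusion, gives a scale-dependent eddy diffusivity K(l) ∝ ε^(1/3) l^(4/3). A minimal sketch (the numerical values are illustrative, not from the notes):

```python
def kolmogorov_scales(nu, epsilon):
    """Kolmogorov length, time and velocity scales from kinematic
    viscosity nu [m^2/s] and dissipation rate epsilon [m^2/s^3]."""
    eta = (nu ** 3 / epsilon) ** 0.25    # length [m]
    tau = (nu / epsilon) ** 0.5          # time [s]
    u_eta = (nu * epsilon) ** 0.25       # velocity [m/s]
    return eta, tau, u_eta

def taylor_microscale(nu, u_rms, epsilon):
    """Taylor microscale for homogeneous isotropic turbulence:
    lambda = sqrt(15 * nu * u'^2 / epsilon)."""
    return (15.0 * nu * u_rms ** 2 / epsilon) ** 0.5

def richardson_diffusivity(epsilon, length, alpha=1.0):
    """Richardson four-thirds law: K(l) = alpha * epsilon^(1/3) * l^(4/3);
    alpha is an order-one empirical constant (assumed here)."""
    return alpha * epsilon ** (1.0 / 3.0) * length ** (4.0 / 3.0)

# Illustrative values for water: nu = 1e-6 m^2/s, epsilon = 1e-4 m^2/s^3
eta, tau, u_eta = kolmogorov_scales(1e-6, 1e-4)
print(f"eta = {eta:.2e} m, tau = {tau:.2e} s, u_eta = {u_eta:.2e} m/s")
```

A built-in consistency check: the Reynolds number formed from the Kolmogorov scales, u_η·η/ν, is exactly 1, which is what "energy input and viscous drain in exact balance" means at the bottom of the cascade.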

Although it is possible to find some particular solutions of the Navier-Stokes equations governing fluid motion, all such solutions are unstable to finite perturbations at large Reynolds numbers. Sensitive dependence on the initial and boundary conditions makes fluid flow irregular both in time and in space, so that a statistical description is needed. The Russian mathematician Andrey Kolmogorov proposed the first statistical theory of turbulence, based on the aforementioned notion of the energy cascade (an idea originally introduced by Richardson) and the concept of self-similarity. As a result, the Kolmogorov microscales were named after him. It is now known that the self-similarity is broken, so the statistical description has since been modified.[3] Still, a complete description of turbulence remains one of the unsolved problems in physics.

According to an apocryphal story, Werner Heisenberg was asked what he would ask God, given the opportunity. His reply was: “When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first.”[4] A similar witticism has been attributed to Horace Lamb (who had published a noted textbook on Hydrodynamics), his choice being quantum electrodynamics (instead of relativity) and turbulence. Lamb was quoted as saying in a speech to the British Association for the Advancement of Science, “I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic.”[5][6]

A more detailed presentation of turbulence, with emphasis on high-Reynolds-number flow, intended for a general readership of physicists and applied mathematicians, is found in the Scholarpedia articles by R. Benzi and U. Frisch[7] and by G. Falkovich.[8]
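
The self-similarity Kolmogorov assumed leads to his well-known 1941 prediction for the inertial-range energy spectrum, E(k) = C_K ε^(2/3) k^(-5/3). A short sketch (the Kolmogorov constant C_K ≈ 1.5 is an empirical value assumed here, not taken from the notes); self-similarity shows up as a wavenumber ratio fixing the spectrum ratio, independent of ε:

```python
def k41_spectrum(k, epsilon, c_k=1.5):
    """Kolmogorov 1941 inertial-range energy spectrum:
    E(k) = C_K * epsilon^(2/3) * k^(-5/3),
    with the empirical Kolmogorov constant C_K ~ 1.5 (assumed)."""
    return c_k * epsilon ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

# Self-similarity check: doubling the wavenumber reduces E(k) by the
# universal factor 2^(5/3), whatever the dissipation rate epsilon is.
ratio = k41_spectrum(100.0, 1e-3) / k41_spectrum(200.0, 1e-3)
print(round(ratio, 3))  # 2**(5/3) ≈ 3.175
```

The experimentally observed departures from this exact scaling (intermittency) are the "broken self-similarity" referred to above.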

There are many scales of meteorological motions; in thiscontext turbulence affects small-scale motions.[9]

1.2 Examples of turbulence• Smoke rising from a cigarette is turbulent flow.For the first few centimeters, the flow is certainlylaminar. Then smoke becomes turbulent as itsReynolds number increases, as its velocity and char-acteristic length are both increasing.

• Flow over a golf ball. (This can be best understoodby considering the golf ball to be stationary, withair flowing over it.) If the golf ball were smooth,the boundary layer flow over the front of the spherewould be laminar at typical conditions. However,the boundary layer would separate early, as the pres-sure gradient switched from favorable (pressure de-creasing in the flow direction) to unfavorable (pres-sure increasing in the flow direction), creating alarge region of low pressure behind the ball that cre-ates high form drag. To prevent this from happen-ing, the surface is dimpled to perturb the boundarylayer and promote transition to turbulence. This re-sults in higher skin friction, but moves the point ofboundary layer separation further along, resulting inlower form drag and lower overall drag.

• The mixing of warm and cold air in the atmo-sphere by wind, which causes clear-air turbulenceexperienced during airplane flight, as well as poorastronomical seeing (the blurring of images seenthrough the atmosphere.)

• Most of the terrestrial atmospheric circulation

• The oceanic and atmospheric mixed layers and in-tense oceanic currents.

• The flow conditions in many industrial equipment(such as pipes, ducts, precipitators, gas scrubbers,dynamic scraped surface heat exchangers, etc.) andmachines (for instance, internal combustion enginesand gas turbines).

• The external flow over all kinds of vehicles, such as cars, airplanes, ships and submarines.

• The motions of matter in stellar atmospheres.

• A jet exhausting from a nozzle into a quiescent fluid. As the flow emerges into this external fluid, shear layers originating at the lips of the nozzle are created. These layers separate the fast-moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence.

• Snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence.

• Bridge supports (piers) in water. In the late summer and fall, when river flow is slow, water flows smoothly around the support legs. In the spring, when the flow is faster, a higher Reynolds number is associated with the flow. The flow may start off laminar but is quickly separated from the leg and becomes turbulent.

• In many geophysical flows (rivers, atmospheric boundary layer), the flow turbulence is dominated by coherent structure activities and associated turbulent events. A turbulent event is a series of turbulent fluctuations that contain more energy than the average flow turbulence.[10][11] Turbulent events are associated with coherent flow structures such as eddies and turbulent bursting, and they play a critical role in sediment scour, accretion and transport in rivers, as well as in contaminant mixing and dispersion in rivers and estuaries, and in the atmosphere.

• In the medical field of cardiology, a stethoscope is used to detect heart sounds and bruits, which are due to turbulent blood flow. In normal individuals, heart sounds are a product of turbulent flow as heart valves close. However, in some conditions turbulent flow can be audible for other reasons, some of them pathological. For example, in advanced atherosclerosis, bruits (and therefore turbulent flow) can be heard in some vessels that have been narrowed by the disease process.

1.3 Heat and momentum transfer

When flow is turbulent, particles exhibit additional transverse motion which enhances the rate of energy and momentum exchange between them, thus increasing the heat transfer and the friction coefficient.

Assume for a two-dimensional turbulent flow that one were able to locate a specific point in the fluid and measure the actual velocity $v = (v_x, v_y)$ of every particle that passed through that point at any given time. Then one would find the actual velocity fluctuating about a mean value:

$$v_x = \underbrace{\bar{v}_x}_{\text{mean value}} + \underbrace{v'_x}_{\text{fluctuation}}\,, \qquad v_y = \bar{v}_y + v'_y$$

and similarly for temperature ($T = \bar{T} + T'$) and pressure ($P = \bar{P} + P'$), where the primed quantities denote fluctuations superposed on the mean. This decomposition of a flow variable into a mean value and a turbulent fluctuation was originally proposed by Osborne Reynolds in 1895, and is considered to be the beginning of the systematic mathematical analysis of turbulent flow, as a sub-field of fluid dynamics. While the mean values are taken as predictable variables determined by the laws of dynamics, the turbulent fluctuations are regarded as stochastic variables.

The heat flux and momentum transfer (represented by the shear stress $\tau$) in the direction normal to the flow for a given time are

$$q = \overline{v'_y \rho c_P T'} = -k_{\mathrm{turb}} \frac{\partial \bar{T}}{\partial y}$$

$$\tau = -\rho\, \overline{v'_y v'_x} = \mu_{\mathrm{turb}} \frac{\partial \bar{v}_x}{\partial y}$$

where $c_P$ is the heat capacity at constant pressure, $\rho$ is the density of the fluid, $\mu_{\mathrm{turb}}$ is the coefficient of turbulent viscosity and $k_{\mathrm{turb}}$ is the turbulent thermal conductivity.[12]
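The Reynolds decomposition can be made concrete with a short numerical sketch. The snippet below draws synthetic velocity samples at a single point (the mean, fluctuation levels, component correlation and density are all assumed, illustrative values, not measurements) and estimates the turbulent shear stress $\tau = -\rho\,\overline{v'_y v'_x}$ directly from its definition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic velocity samples at one point of a 2-D turbulent flow.
# Mean, fluctuation levels, correlation and density are assumed values.
n = 100_000
vx = 10.0 + rng.normal(0.0, 1.5, n)        # streamwise samples, m/s
vy = rng.normal(0.0, 1.0, n)               # transverse samples, m/s
vy += 0.3 * (vx - vx.mean())               # correlate components -> nonzero stress

rho = 1.2  # density of air, kg/m^3

# Reynolds decomposition: v = mean + fluctuation
vx_mean, vy_mean = vx.mean(), vy.mean()
vx_p, vy_p = vx - vx_mean, vy - vy_mean

# Turbulent shear stress tau = -rho * <v'_y v'_x>
tau = -rho * np.mean(vy_p * vx_p)
print(f"mean vx = {vx_mean:.3f} m/s, tau = {tau:.3f} Pa")
```

A positive correlation between the fluctuating components yields a negative modeled shear stress, as the formula above requires.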

1.4 Kolmogorov’s theory of 1941

Richardson’s notion of turbulence was that a turbulent flow is composed of “eddies” of different sizes. The sizes define a characteristic length scale for the eddies, which are also characterized by velocity scales and time scales (turnover time) dependent on the length scale. The large eddies are unstable and eventually break up, originating smaller eddies, and the kinetic energy of the initial large eddy is divided among the smaller eddies that stemmed from it. These smaller eddies undergo the same process, giving rise to even smaller eddies which inherit the energy of their predecessor eddy, and so on. In this way, the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy.

In his original theory of 1941, Kolmogorov postulated that for very high Reynolds numbers, the small-scale turbulent motions are statistically isotropic (i.e. no preferential spatial direction can be discerned). In general, the large scales of a flow are not isotropic, since they are determined by the particular geometrical features of the boundaries (the size characterizing the large scales will be denoted L). Kolmogorov’s idea was that in Richardson’s energy cascade this geometrical and directional information is lost as the scale is reduced, so that the statistics of the small scales have a universal character: they are the same for all turbulent flows when the Reynolds number is sufficiently high.

Thus, Kolmogorov introduced a second hypothesis: for very high Reynolds numbers the statistics of small scales are universally and uniquely determined by the viscosity ($\nu$) and the rate of energy dissipation ($\varepsilon$). With only these two parameters, the unique length that can be formed by dimensional analysis is

$$\eta = \left( \frac{\nu^3}{\varepsilon} \right)^{1/4}$$

This is today known as the Kolmogorov length scale (see Kolmogorov microscales).
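The microscales follow from $\nu$ and $\varepsilon$ alone by dimensional analysis. The snippet below evaluates them for assumed, illustrative values of $\nu$ (air) and $\varepsilon$; it also checks the defining property that the Reynolds number built on these scales is unity:

```python
# Kolmogorov microscales from nu and epsilon alone (dimensional analysis).
# nu for air and the dissipation rate are assumed, illustrative values.
nu = 1.5e-5       # kinematic viscosity, m^2/s
epsilon = 1.0     # dissipation rate per unit mass, m^2/s^3

eta = (nu**3 / epsilon) ** 0.25      # Kolmogorov length scale, m
tau_eta = (nu / epsilon) ** 0.5      # Kolmogorov time scale, s
u_eta = (nu * epsilon) ** 0.25       # Kolmogorov velocity scale, m/s

# At this scale the local Reynolds number is unity: viscosity takes over.
Re_eta = eta * u_eta / nu
print(f"eta = {eta:.2e} m, Re_eta = {Re_eta:.1f}")
```

For these values $\eta \approx 2.4 \times 10^{-4}$ m: dissipation happens at sub-millimetre scales even for a moderate dissipation rate.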



A turbulent flow is characterized by a hierarchy of scales through which the energy cascade takes place. Dissipation of kinetic energy takes place at scales of the order of the Kolmogorov length $\eta$, while the input of energy into the cascade comes from the decay of the large scales, of order L. These two scales at the extremes of the cascade can differ by several orders of magnitude at high Reynolds numbers. In between there is a range of scales (each one with its own characteristic length r) that has formed at the expense of the energy of the large ones. These scales are very large compared with the Kolmogorov length, but still very small compared with the large scale of the flow (i.e. $\eta \ll r \ll L$). Since eddies in this range are much larger than the dissipative eddies that exist at Kolmogorov scales, kinetic energy is essentially not dissipated in this range; it is merely transferred to smaller scales until viscous effects become important as the order of the Kolmogorov scale is approached. Within this range inertial effects are still much larger than viscous effects, and it is possible to assume that viscosity does not play a role in their internal dynamics (for this reason this range is called the “inertial range”).

Hence, a third hypothesis of Kolmogorov was that at very high Reynolds number the statistics of scales in the range $\eta \ll r \ll L$ are universally and uniquely determined by the scale r and the rate of energy dissipation $\varepsilon$.

The way in which the kinetic energy is distributed over the multiplicity of scales is a fundamental characterization of a turbulent flow. For homogeneous turbulence (i.e., statistically invariant under translations of the reference frame) this is usually done by means of the energy spectrum function $E(k)$, where k is the modulus of the wavevector corresponding to some harmonics in a Fourier representation of the flow velocity field u(x):

$$\mathbf{u}(\mathbf{x}) = \iiint_{\mathbb{R}^3} \hat{\mathbf{u}}(\mathbf{k})\, e^{i \mathbf{k} \cdot \mathbf{x}}\, \mathrm{d}^3 \mathbf{k}$$

where û(k) is the Fourier transform of the velocity field. Thus, $E(k)\,\mathrm{d}k$ represents the contribution to the kinetic energy from all the Fourier modes with $k < |\mathbf{k}| < k + \mathrm{d}k$, and therefore,

$$\frac{1}{2} \langle u_i u_i \rangle = \int_0^\infty E(k)\, \mathrm{d}k$$

where $\frac{1}{2}\langle u_i u_i \rangle$ is the mean turbulent kinetic energy of the flow. The wavenumber k corresponding to length scale r is $k = 2\pi/r$. Therefore, by dimensional analysis, the only possible form for the energy spectrum function consistent with Kolmogorov’s third hypothesis is

$$E(k) = C \varepsilon^{2/3} k^{-5/3}$$

where C would be a universal constant. This is one of the most famous results of the Kolmogorov 1941 theory, and considerable experimental evidence has accumulated that supports it.[13]

In spite of this success, Kolmogorov theory is at present under revision. The theory implicitly assumes that the turbulence is statistically self-similar at different scales. This essentially means that the statistics are scale-invariant in the inertial range. A usual way of studying turbulent velocity fields is by means of velocity increments:

δu(r) = u(x+ r)− u(x)

that is, the difference in velocity between points separated by a vector r (since the turbulence is assumed isotropic, the velocity increment depends only on the modulus of r). Velocity increments are useful because they emphasize the effects of scales of the order of the separation r when statistics are computed. Statistical scale-invariance implies that the scaling of velocity increments should occur with a unique scaling exponent $\beta$, so that when r is scaled by a factor $\lambda$,

δu(λr)

should have the same statistical distribution as

$$\lambda^{\beta} \delta u(r)$$

with $\beta$ independent of the scale r. From this fact, and other results of Kolmogorov 1941 theory, it follows that the statistical moments of the velocity increments (known as structure functions in turbulence) should scale as

$$\langle [\delta u(r)]^n \rangle = C_n \varepsilon^{n/3} r^{n/3}$$

where the brackets denote the statistical average, and the $C_n$ would be universal constants.

There is considerable evidence that turbulent flows deviate from this behavior. The scaling exponents deviate from the n/3 value predicted by the theory, becoming a non-linear function of the order n of the structure function. The universality of the constants has also been questioned. For low orders the discrepancy with the Kolmogorov n/3 value is very small, which explains the success of Kolmogorov theory with regard to low-order statistical moments. In particular, it can be shown that when the energy spectrum follows a power law

E(k) ∝ k−p

with $1 < p < 3$, the second-order structure function also follows a power law, of the form



$$\langle [\delta u(r)]^2 \rangle \propto r^{p-1}$$

Since the experimental values obtained for the second-order structure function only deviate slightly from the 2/3 value predicted by Kolmogorov theory, the value of p is very near 5/3 (differences are about 2%[14]). Thus the “Kolmogorov −5/3 spectrum” is generally observed in turbulence. However, for high-order structure functions the difference from the Kolmogorov scaling is significant, and the breakdown of statistical self-similarity is clear. This behavior, and the lack of universality of the $C_n$ constants, are related to the phenomenon of intermittency in turbulence. This is an important area of research in this field, and a major goal of the modern theory of turbulence is to understand what is really universal in the inertial range.
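The relation between the spectral slope p and the second-order structure function can be checked numerically without any flow data: for a 1-D periodic signal with prescribed Fourier amplitudes $|\hat{u}(k)|^2 \sim k^{-p}$, Parseval's theorem gives the spatially averaged structure function exactly, independent of the (random) phases. The sketch below uses this identity with assumed, illustrative sizes:

```python
import numpy as np

# Check that a spectrum ~ k^-p implies <[du(r)]^2> ~ r^(p-1). For a 1-D
# periodic signal with |u_hat(k)|^2 = k^-p, Parseval's theorem gives exactly
#   S2(r) = 4 * sum_k |u_hat(k)|^2 * sin^2(pi k r / N),
# so no synthetic signal needs to be generated at all.
N = 2**18                                  # assumed (periodic) domain size
p = 5.0 / 3.0                              # Kolmogorov spectral slope
k = np.arange(1, N // 2, dtype=float)
Pk = k ** (-p)                             # model 1-D power spectrum

r = 2 ** np.arange(5, 12)                  # separations 32 .. 2048 grid points
S2 = np.array([4.0 * np.sum(Pk * np.sin(np.pi * k * ri / N) ** 2) for ri in r])

zeta2, _ = np.polyfit(np.log(r), np.log(S2), 1)
print(f"measured exponent = {zeta2:.3f}; prediction p - 1 = {p - 1:.3f}")
```

The fitted exponent lands close to $p - 1 = 2/3$; the small residual bias comes from the finite wavenumber range, mimicking the finite inertial range of a real flow.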

1.5 See also

• Astronomical seeing

• Atmospheric dispersion modeling

• Chaos theory

• Clear-air turbulence

• Constructal theory

• Downdrafts

• Eddy covariance

• Fluid dynamics

• Darcy–Weisbach equation

• Eddy

• Navier-Stokes equations

• Large eddy simulation

• Poiseuille’s law

• Lagrangian coherent structure

• Turbulence kinetic energy

• Mesocyclones

• Navier-Stokes existence and smoothness

• Reynolds Number

• Swing bowling

• Taylor microscale

• Turbulence modeling

• Velocimetry

• Vortex

• Vortex generator

• Wake turbulence

• Wave turbulence

• Wingtip vortices

• Wind tunnel

• Different types of boundary conditions in fluid dynamics

1.6 References and notes

[1] Avila, K.; D. Moxey; A. de Lozar; M. Avila; D. Barkley; B. Hof (July 2011). “The Onset of Turbulence in Pipe Flow”. Science 333 (6039): 192–196. Bibcode:2011Sci...333..192A. doi:10.1126/science.1203223.

[2] “Turbulence theory gets a bit choppy”. USA Today.September 10, 2006.

[3] weizmann.ac.il

[4] Marshak, Alex (2005). 3D Radiative Transfer in Cloudy Atmospheres. Springer. p. 76. ISBN 978-3-540-23958-1.

[5] Mullin, Tom (11 November 1989). “Turbulent times forfluids”. New Scientist.

[6] Davidson, P. A. (2004). Turbulence: An Introduction forScientists and Engineers. Oxford University Press. ISBN978-0-19-852949-1.

[7] scholarpedia.org; R. Benzi and U. Frisch, Scholarpedia,“Turbulence”.

[8] scholarpedia.org; G. Falkovich, Scholarpedia, “Cascadeand scaling”.

[9] Stull, Roland B. (1994). An Introduction to BoundaryLayer Meteorology (1st ed., repr. ed.). Dordrecht [u.a.]:Kluwer. p. 20. ISBN 978-90-277-2769-5.

[10] Narasimha R, Rudra Kumar S, Prabhu A, Kailas SV (2007). “Turbulent flux events in a nearly neutral atmospheric boundary layer”. Philosophical Transactions of the Royal Society A 365 (1852): 841–858. Bibcode:2007RSPTA.365..841N. doi:10.1098/rsta.2006.1949.

[11] Trevethan M, Chanson H (2010). “Turbulence and Turbulent Flux Events in a Small Estuary”. Environmental Fluid Mechanics 10 (3): 345–368. doi:10.1007/s10652-009-9134-7.

[12] H. Tennekes and J. L. Lumley, “A First Course in Turbu-lence”, The MIT Press, (1972).

[13] U. Frisch. Turbulence: The Legacy of A. N. Kolmogorov.Cambridge University Press, 1995.

[14] J. Mathieu and J. Scott, An Introduction to Turbulent Flow. Cambridge University Press, 2000.



1.7 Further reading

1.7.1 General

• G. Falkovich and K. R. Sreenivasan. Lessons from hydrodynamic turbulence, Physics Today, vol. 59, no. 4, pages 43–49 (April 2006).

• U. Frisch. Turbulence: The Legacy of A. N. Kolmogorov. Cambridge University Press, 1995.

• P. A. Davidson. Turbulence – An Introduction for Scientists and Engineers. Oxford University Press, 2004.

• J. Cardy, G. Falkovich and K. Gawedzki (2008). Non-equilibrium Statistical Mechanics and Turbulence. Cambridge University Press.

• P. A. Durbin and B. A. Pettersson Reif. Statistical Theory and Modeling for Turbulent Flows. John Wiley & Sons, 2001.

• T. Bohr, M. H. Jensen, G. Paladin and A. Vulpiani. Dynamical Systems Approach to Turbulence. Cambridge University Press, 1998.

• J. M. McDonough (2007). Introductory Lectures on Turbulence – Physics, Mathematics, and Modeling.

1.7.2 Original scientific research papersand classic monographs

• Kolmogorov, Andrey Nikolaevich (1941). “The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers”. Proceedings of the USSR Academy of Sciences (in Russian) 30: 299–303. Translated into English by V. Levin: Proceedings of the Royal Society A 434 (1991): 9–13. Bibcode:1991RSPSA.434....9K. doi:10.1098/rspa.1991.0075.

• Kolmogorov, Andrey Nikolaevich (1941). “Dissipation of Energy in the Locally Isotropic Turbulence”. Proceedings of the USSR Academy of Sciences (in Russian) 32: 16–18. Translated into English: Proceedings of the Royal Society A 434 (1991): 15–17. Bibcode:1991RSPSA.434...15K. doi:10.1098/rspa.1991.0076.

• G. K. Batchelor, The theory of homogeneous turbu-lence. Cambridge University Press, 1953.

1.8 External links

• Center for Turbulence Research, Stanford University

• Scientific American article

• Air Turbulence Forecast

• international CFD database iCFDdatabase

• Turbulent flow in a pipe on YouTube

• Fluid Mechanics website with movies, Q&A, etc

• Johns Hopkins public database with direct numerical simulation data


Chapter 2

Turbulence modeling

Turbulence modeling is the construction and use of a model to predict the effects of turbulence. Averaging is often used to simplify the solution of the governing equations of turbulence, but models are needed to represent scales of the flow that are not resolved.[1]

2.1 Closure problem

A closure problem arises in the Reynolds-averaged Navier-Stokes (RANS) equations because of the non-linear term $-\rho \overline{\upsilon'_i \upsilon'_j}$ arising from the convective acceleration, known as the Reynolds stress,

$$R_{ij} = -\rho \overline{\upsilon'_i \upsilon'_j}$$ [2]

Closing the RANS equations requires modeling the Reynolds stress $R_{ij}$.

2.2 Eddy viscosity

Joseph Boussinesq was the first practitioner of this approach (i.e. modeling the Reynolds stress), introducing the concept of eddy viscosity. In 1887 Boussinesq proposed relating the turbulence stresses to the mean flow to close the system of equations. Here the Boussinesq hypothesis is applied to model the Reynolds stress term. Note that a new proportionality constant $\nu_t > 0$, the turbulence eddy viscosity, has been introduced. Models of this type are known as eddy viscosity models, or EVMs.

$$-\overline{\upsilon'_i \upsilon'_j} = \nu_t \left( \frac{\partial \bar{\upsilon}_i}{\partial x_j} + \frac{\partial \bar{\upsilon}_j}{\partial x_i} \right) - \frac{2}{3} K \delta_{ij}$$

which can be written in shorthand as

$$-\overline{\upsilon'_i \upsilon'_j} = 2 \nu_t S_{ij} - \frac{2}{3} K \delta_{ij}$$

where $S_{ij}$ is the mean rate-of-strain tensor, $\nu_t$ is the turbulence eddy viscosity, $K = \frac{1}{2} \overline{\upsilon'_i \upsilon'_i}$ is the turbulence kinetic energy, and $\delta_{ij}$ is the Kronecker delta.

In this model, the additional turbulence stresses are given by augmenting the molecular viscosity with an eddy viscosity.[3] This can be a simple constant eddy viscosity (which works well for some free shear flows such as axisymmetric jets, 2-D jets, and mixing layers).
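A minimal sketch of how the Boussinesq relation above is evaluated in practice, for an assumed simple-shear mean flow and assumed values of $\nu_t$ and K (all numbers illustrative, not from a real flow):

```python
import numpy as np

# Boussinesq closure: given the mean velocity gradient, an eddy viscosity nu_t
# and turbulence kinetic energy K, model the Reynolds stresses as
#   -<u'_i u'_j> = 2 nu_t S_ij - (2/3) K delta_ij.
grad_U = np.array([[0.0, 10.0, 0.0],    # dU_i/dx_j: simple shear dU/dy = 10 /s
                   [0.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])
nu_t = 1e-3   # eddy viscosity, m^2/s (assumed)
K = 0.5       # turbulence kinetic energy, m^2/s^2 (assumed)

S = 0.5 * (grad_U + grad_U.T)                    # mean rate-of-strain tensor
minus_uu = 2.0 * nu_t * S - (2.0 / 3.0) * K * np.eye(3)

# Off-diagonal: -<u'v'> = nu_t * dU/dy = 0.01; trace = -2K by construction.
print(minus_uu)
```

Note the built-in limitations visible even here: the model ties all stress components to a single scalar $\nu_t$, and the normal stresses come out equal (isotropic), which real shear flows do not satisfy.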

2.3 Prandtl’s mixing-length concept

Later, Ludwig Prandtl introduced the additional concept of the mixing length, along with the idea of a boundary layer. For wall-bounded turbulent flows, the eddy viscosity must vary with distance from the wall, hence the addition of the concept of a ‘mixing length’. In the simplest wall-bounded flow model, the eddy viscosity is given by the equation:

$$\nu_t = l_m^2 \left| \frac{\partial u}{\partial y} \right|$$

where $\partial u / \partial y$ is the gradient of the mean velocity normal to the wall and $l_m$ is the mixing length.

This simple model is the basis for the “law of the wall”, which is a surprisingly accurate model for wall-bounded, attached (not separated) flow fields with small pressure gradients.

More general turbulence models have evolved over time, with most modern turbulence models given by field equations similar to the Navier-Stokes equations.
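The connection to the law of the wall can be sketched numerically. Taking $l_m = \kappa y$ (a standard near-wall assumption) together with the log-law mean gradient $\partial u/\partial y = u_\tau/(\kappa y)$ makes the modelled turbulent stress $\nu_t\,\partial u/\partial y$ constant and equal to $u_\tau^2$; the friction velocity below is an assumed value:

```python
import numpy as np

# Prandtl mixing length near a wall: l_m = kappa * y, nu_t = l_m^2 |du/dy|.
# With the log-law gradient du/dy = u_tau / (kappa y), the modelled turbulent
# stress nu_t * du/dy is constant and equals u_tau^2 (constant-stress layer).
kappa = 0.41          # von Karman constant
u_tau = 0.05          # friction velocity, m/s (assumed)

y = np.logspace(-4, -2, 50)        # wall distances, m
dudy = u_tau / (kappa * y)         # log-law mean velocity gradient, 1/s
l_m = kappa * y                    # mixing length, m
nu_t = l_m**2 * np.abs(dudy)       # mixing-length eddy viscosity, m^2/s

stress = nu_t * dudy               # modelled kinematic stress, m^2/s^2
print(stress[:3])                  # each entry = u_tau^2 = 2.5e-3
```

The eddy viscosity itself grows linearly with wall distance ($\nu_t = \kappa u_\tau y$), which is exactly the variation the mixing-length concept was introduced to capture.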

2.4 Smagorinsky model for the sub-grid scale eddy viscosity

Among many others, Joseph Smagorinsky (1964) proposed a useful formula for the eddy viscosity in numerical models, based on the local derivatives of the velocity field and the local grid size:




$$\nu_t = \Delta x\, \Delta y \sqrt{ \left( \frac{\partial u}{\partial x} \right)^2 + \left( \frac{\partial v}{\partial y} \right)^2 + \frac{1}{2} \left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \right)^2 }$$
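The formula above is straightforward to evaluate on a grid: compute the local velocity derivatives by finite differences and scale by the cell area. Here it is applied to a synthetic, divergence-free 2-D velocity field (the field and grid spacing are illustrative assumptions):

```python
import numpy as np

# Smagorinsky-type sub-grid eddy viscosity on a 2-D grid:
#   nu_t = dx*dy * sqrt((du/dx)^2 + (dv/dy)^2 + 0.5*(du/dy + dv/dx)^2)
nx, ny = 64, 64
dx = dy = 0.1
x = np.arange(nx) * dx
y = np.arange(ny) * dy
X, Y = np.meshgrid(x, y, indexing="ij")

u = np.sin(X) * np.cos(Y)          # toy resolved velocity field (divergence-free)
v = -np.cos(X) * np.sin(Y)

dudx, dudy = np.gradient(u, dx, dy)    # finite-difference local derivatives
dvdx, dvdy = np.gradient(v, dx, dy)

nu_t = dx * dy * np.sqrt(dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx) ** 2)
print(nu_t.shape, float(nu_t.max()))
```

In a large eddy simulation this field would then be added to the molecular viscosity cell by cell; note that $\nu_t$ shrinks with the grid spacing, so the sub-grid model switches itself off as the resolution improves.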

2.5 Spalart–Allmaras, k–ε and k–ω models

The Boussinesq hypothesis is employed in the Spalart–Allmaras (S–A), k–ε (k–epsilon), and k–ω (k–omega) models and offers a relatively low-cost computation of the turbulence viscosity $\nu_t$. The S–A model uses only one additional equation to model turbulence viscosity transport, while the k models use two.

2.6 Common models

The following is a list of commonly employed models in modern engineering applications.

• Spalart–Allmaras (S–A)

• k–ε (k–epsilon)

• k–ω (k–omega)

• SST (Menter’s Shear Stress Transport)

• Reynolds stress equation model

2.7 References

2.7.1 Notes

[1] Ching Jen Chen, Shenq-Yuh Jaw (1998), Fundamentals of Turbulence Modeling, Taylor & Francis.

[2] Andersson, Bengt et al. (2012). Computational Fluid Dynamics for Engineers. Cambridge: Cambridge University Press. p. 83. ISBN 978-1-107-01895-2.

[3] John J. Bertin, Jacques Periaux, Josef Ballmann, Advances in Hypersonics: Modeling Hypersonic Flows.

2.7.2 Other

• Townsend, A.A. (1980) “The Structure of TurbulentShear Flow” 2nd Edition (Cambridge Monographson Mechanics)

• Bradshaw, P. (1971) “An introduction to turbulenceand its measurement” (Pergamon Press)

• Wilcox C. D., (1998), “Turbulence Modeling for CFD” 2nd Ed., (DCW Industries, La Cañada)


Chapter 3

Reynolds stress equation model

The Reynolds stress equation model (RSM), also known as the second-order or second-moment closure model, is the most complex classical turbulence model. Several shortcomings of the k-epsilon turbulence model were observed when it was used to predict flows with complex strain fields or substantial body forces. Under those conditions the individual Reynolds stresses were not accurately predicted by the formula

$$-\rho \overline{u'_i u'_j} = \mu_t \left( \frac{\partial U_i}{\partial x_j} + \frac{\partial U_j}{\partial x_i} \right) - \frac{2}{3} \rho k \delta_{ij} = 2 \mu_t E_{ij} - \frac{2}{3} \rho k \delta_{ij}$$

The equation for the transport of the kinematic Reynolds stress $R_{ij} = \overline{u'_i u'_j} = -\tau_{ij}/\rho$ is [1]

$$\frac{D R_{ij}}{D t} = D_{ij} + P_{ij} + \Pi_{ij} + \Omega_{ij} - \varepsilon_{ij}$$

that is: the rate of change of $R_{ij}$ plus its transport by convection equals its transport by diffusion ($D_{ij}$), plus its rate of production ($P_{ij}$), plus its transport due to turbulent pressure-strain interactions ($\Pi_{ij}$), plus its transport due to rotation ($\Omega_{ij}$), minus its rate of dissipation ($\varepsilon_{ij}$).

The six partial differential equations above represent the six independent Reynolds stresses. The models needed to close the above equation are derived from the work of Launder, Reece and Rodi (1975).

3.1 Production term

The production term used in CFD computations with Reynolds stress transport equations is

$$P_{ij} = -\left( R_{im} \frac{\partial U_j}{\partial x_m} + R_{jm} \frac{\partial U_i}{\partial x_m} \right)$$
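The production term needs no modelling because it involves only $R_{ij}$ and the mean velocity gradients, both available during the solution. A sketch with assumed, illustrative values, here for simple shear $\partial U/\partial y$:

```python
import numpy as np

# Production tensor for the Reynolds stress transport equation:
#   P_ij = -(R_im dU_j/dx_m + R_jm dU_i/dx_m)
R = np.array([[0.04, -0.01, 0.0],
              [-0.01, 0.02, 0.0],
              [0.0,   0.0,  0.02]])       # assumed kinematic stresses <u'_i u'_j>
grad_U = np.zeros((3, 3))
grad_U[0, 1] = 5.0                        # simple shear: dU/dy = 5 /s

# grad_U[i, m] holds dU_i/dx_m, so (R @ grad_U.T)[i, j] = R_im dU_j/dx_m
P = -(R @ grad_U.T + (R @ grad_U.T).T)

# In simple shear, production feeds the streamwise normal stress:
# P_11 = -2 R_12 dU/dy = 0.1, while P_22 = 0 for this mean flow.
print(P)
```

This illustrates why RSM can represent stress anisotropy: each stress component receives its own exact production, whereas eddy-viscosity models lump everything into a single scalar.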

3.2 Pressure-strain interactions

Pressure-strain interactions affect the Reynolds stresses through two different physical processes: pressure fluctuations due to eddies interacting with one another, and pressure fluctuations due to the interaction of an eddy with a region of different mean velocity. This redistributes energy among the normal Reynolds stresses and thus makes them more isotropic. It also reduces the Reynolds shear stresses.

It is observed that the wall effect increases the anisotropy of the normal Reynolds stresses and decreases the Reynolds shear stresses. A comprehensive model that takes these effects into account was given by Launder and Rodi (1975).

3.3 Dissipation term

The modelling of the dissipation rate $\epsilon_{ij}$ assumes that the small dissipative eddies are isotropic. This term affects only the normal Reynolds stresses. [2]

$$\epsilon_{ij} = \frac{2}{3} \epsilon \delta_{ij}$$

where $\epsilon$ is the dissipation rate of turbulent kinetic energy, and $\delta_{ij} = 1$ when $i = j$ and $0$ when $i \neq j$.

3.4 Diffusion term

The modelling of the diffusion term $D_{ij}$ is based on the assumption that the rate of transport of Reynolds stresses by diffusion is proportional to the gradients of the Reynolds stresses. The simplest form of $D_{ij}$, followed by commercial CFD codes, is

$$D_{ij} = \frac{\partial}{\partial x_m} \left( \frac{\nu_t}{\sigma_k} \frac{\partial R_{ij}}{\partial x_m} \right) = \operatorname{div} \left( \frac{\nu_t}{\sigma_k} \nabla R_{ij} \right)$$

where $\nu_t = C_\mu \frac{k^2}{\epsilon}$, $\sigma_k = 1.0$ and $C_\mu = 0.9$.

3.5 Pressure-strain correlationterm

The pressure-strain correlation term promotes isotropy of the turbulence by redistributing energy among the normal Reynolds stresses. The pressure-strain interactions are the most important term to model correctly. Their effect on the Reynolds stresses is caused by pressure fluctuations due to the interaction of eddies with each other, and by pressure fluctuations due to the interaction of an eddy with a region of flow having a different mean velocity. The correction term is given as [3]

$$\Pi_{ij} = -C_1 \frac{\epsilon}{k} \left( R_{ij} - \frac{2}{3} k \delta_{ij} \right) - C_2 \left( P_{ij} - \frac{2}{3} P \delta_{ij} \right)$$




3.6 Rotational term

The rotational term is given as [4]

$$\Omega_{ij} = -2 \omega_k \left( R_{jm} e_{ikm} + R_{im} e_{jkm} \right)$$

where $\omega_k$ is the rotation vector, $e_{ijk} = 1$ if i, j, k are in cyclic order and are different, $e_{ijk} = -1$ if i, j, k are in anti-cyclic order and are different, and $e_{ijk} = 0$ if any two indices are the same.
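A small sketch evaluating the rotational term with an explicitly constructed Levi-Civita symbol; the rotation vector and Reynolds stresses below are assumed, illustrative values:

```python
import numpy as np

# Rotational term Omega_ij = -2 omega_k (R_jm e_ikm + R_im e_jkm),
# with e_ijk the Levi-Civita symbol built from its definition.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:   # cyclic orders -> +1
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0                              # anti-cyclic orders -> -1

omega = np.array([0.0, 0.0, 1.0])      # rotation about the z-axis, rad/s (assumed)
R = np.diag([0.04, 0.02, 0.02])        # assumed Reynolds stress tensor
R[0, 1] = R[1, 0] = -0.01

Omega = -2.0 * (np.einsum("k,jm,ikm->ij", omega, R, eps)
                + np.einsum("k,im,jkm->ij", omega, R, eps))
print(Omega)
```

By construction the result is symmetric and traceless: rotation redistributes stress among components but neither creates nor destroys turbulence kinetic energy.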

3.7 Advantages of RSM

1) Unlike the k-ε model, it does not rely on an isotropic eddy viscosity.

2) It is the most general of all classical turbulence models and works reasonably well for a large number of engineering flows.

3) It requires only the initial and/or boundary conditions to be supplied.

4) Since the production terms need not be modelled, it can selectively damp the stresses due to buoyancy, curvature effects, etc.

3.8 Disadvantages of RSM

1) It incurs very large computing costs.

2) It is not as widely validated as the k-ε and mixing-length models.

3) Due to identical problems with the ε-equation modelling, it performs just as poorly as the k-ε model in some problems.

4) Because of its isotropic dissipation modelling, it is not good at predicting normal stresses and is unable to account for irrotational strains.

3.9 See also

• Reynolds Stress

• Isotropy

• Turbulence Modeling

• Eddy

• k-epsilon turbulence model

• Mixing length model

3.11 References

[1] Bengt Andersson, Ronnie Andersson (2012). Computational Fluid Dynamics for Engineers (First ed.). Cambridge University Press, New York. p. 97. ISBN 9781107018952.

[2] Peter S. Bernard & James M. Wallace (2002). TurbulentFlow: Analysis, Measurement & Prediction. JohnWiley &Sons. p. 324. ISBN 0471332194.

[3] Magnus Hallback (1996). Turbulence and TransitionModelling (First ed.). Kluwer Academic Publishers. p.117. ISBN 0792340604.

[4] H. Versteeg & W. Malalasekera (2013). An Introduction to Computational Fluid Dynamics (Second ed.). Pearson Education Limited. p. 96. ISBN 9788131720486.

3.12 Bibliography

• “An Introduction to Computational Fluid Dynamics”, Second Edition, by Versteeg & Malalasekera, published by Pearson Education Limited.

• “Turbulence: An Introduction for Scientists and Engineers” by P. A. Davidson.

• “Turbulence Models & Their Applications” by Tuncer Cebeci, published by Horizons Publications Inc.


Chapter 4

Boundary layer

In physics and fluid mechanics, a boundary layer is the layer of fluid in the immediate vicinity of a bounding surface where the effects of viscosity are significant. In the Earth’s atmosphere, the atmospheric boundary layer is the air layer near the ground affected by diurnal heat, moisture or momentum transfer to or from the surface. On an aircraft wing the boundary layer is the part of the flow close to the wing, where viscous forces distort the surrounding non-viscous flow. See Reynolds number.

[Figure: Boundary layer visualization, showing transition from the laminar to the turbulent condition.]

Laminar boundary layers can be loosely classified according to their structure and the circumstances under which they are created. The thin shear layer which develops on an oscillating body is an example of a Stokes boundary layer, while the Blasius boundary layer refers to the well-known similarity solution near an attached flat plate held in an oncoming unidirectional flow. When a fluid rotates and viscous forces are balanced by the Coriolis effect (rather than convective inertia), an Ekman layer forms. In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously.

4.1 Aerodynamics

The aerodynamic boundary layer was first defined by Ludwig Prandtl in a paper presented on August 12, 1904 at the third International Congress of Mathematicians in Heidelberg, Germany. It simplifies the equations of fluid flow by dividing the flow field into two areas: one inside the boundary layer, dominated by viscosity and creating the majority of the drag experienced by the body; and one outside the boundary layer, where viscosity can be neglected without significant effects on the solution. This allows a closed-form solution for the flow in both areas, a significant simplification of the full Navier–Stokes equations. The majority of the heat transfer to and from a body also takes place within the boundary layer, again allowing the equations to be simplified in the flow field outside the boundary layer. The pressure distribution throughout the boundary layer in the direction normal to the surface (such as an airfoil) remains constant throughout the boundary layer, and is the same as on the surface itself.

[Figure: Laminar boundary layer velocity profile u(y), approaching the freestream velocity u0.]

The thickness of the velocity boundary layer is normally defined as the distance from the solid body at which the viscous flow velocity is 99% of the freestream velocity (the surface velocity of an inviscid flow). Displacement thickness is an alternative definition stating that the boundary layer represents a deficit in mass flow compared to an inviscid flow with slip at the wall. It is the distance by which the wall would have to be displaced in the inviscid case to give the same total mass flow as the viscous case. The no-slip condition requires the flow velocity at the surface of a solid object to be zero and the fluid temperature to be equal to the temperature of the surface. The flow velocity then increases rapidly within the boundary layer, governed by the boundary layer equations, below.

The thermal boundary layer thickness is similarly the distance from the body at which the temperature is 99% of the temperature found from an inviscid solution. The ratio of the two thicknesses is governed by the Prandtl number. If the Prandtl number is 1, the two boundary layers




are the same thickness. If the Prandtl number is greater than 1, the thermal boundary layer is thinner than the velocity boundary layer. If the Prandtl number is less than 1, which is the case for air at standard conditions, the thermal boundary layer is thicker than the velocity boundary layer.

In high-performance designs, such as gliders and commercial aircraft, much attention is paid to controlling the behavior of the boundary layer to minimize drag. Two effects have to be considered. First, the boundary layer adds to the effective thickness of the body through the displacement thickness, hence increasing the pressure drag. Secondly, the shear forces at the surface of the wing create skin friction drag.

At high Reynolds numbers, typical of full-sized aircraft, it is desirable to have a laminar boundary layer. This results in lower skin friction due to the characteristic velocity profile of laminar flow. However, the boundary layer inevitably thickens and becomes less stable as the flow develops along the body, and eventually becomes turbulent, a process known as boundary layer transition. One way of dealing with this problem is to suck the boundary layer away through a porous surface (see boundary layer suction). This can reduce drag, but is usually impractical due to its mechanical complexity and the power required to move the air and dispose of it. Natural laminar flow techniques push the boundary layer transition aft by reshaping the aerofoil or fuselage so that its thickest point is further aft and less thick. This reduces the velocities in the leading part, and the same Reynolds number is achieved with a greater length.

At lower Reynolds numbers, such as those seen with model aircraft, it is relatively easy to maintain laminar flow. This gives low skin friction, which is desirable. However, the same velocity profile which gives the laminar boundary layer its low skin friction also causes it to be badly affected by adverse pressure gradients.
As the pressure begins to recover over the rear part of the wing chord, a laminar boundary layer will tend to separate from the surface. Such flow separation causes a large increase in the pressure drag, since it greatly increases the effective size of the wing section. In these cases, it can be advantageous to deliberately trip the boundary layer into turbulence at a point prior to the location of laminar separation, using a turbulator. The fuller velocity profile of the turbulent boundary layer allows it to sustain the adverse pressure gradient without separating. Thus, although the skin friction is increased, overall drag is decreased. This is the principle behind the dimpling on golf balls, as well as vortex generators on aircraft. Special wing sections have also been designed which tailor the pressure recovery so that laminar separation is reduced or even eliminated. This represents an optimum compromise between the pressure drag from flow separation and the skin friction from induced turbulence.

When using half-models in wind tunnels, a peniche is sometimes used to reduce or eliminate the effect of the boundary layer.

4.2 Naval architecture

Many of the principles that apply to aircraft also apply to ships, submarines, and offshore platforms.

For ships, unlike aircraft, one deals with incompressible flows, where the change in water density is negligible (a pressure rise close to 1000 kPa leads to a change of only 2–3 kg/m³). This field of fluid dynamics is called hydrodynamics. A ship engineer designs for hydrodynamics first, and for strength only later. The boundary layer development, breakdown, and separation become critical because the high viscosity of water produces high shear stresses. Another consequence of high viscosity is the slipstream effect, in which the ship moves like a spear tearing through a sponge at high velocity.

4.3 Boundary layer equations

The deduction of the boundary layer equations was one of the most important advances in fluid dynamics (Anderson, 2005). Using an order-of-magnitude analysis, the well-known governing Navier–Stokes equations of viscous fluid flow can be greatly simplified within the boundary layer. Notably, the character of the partial differential equations (PDE) becomes parabolic, rather than the elliptic form of the full Navier–Stokes equations. This greatly simplifies the solution of the equations. Making the boundary layer approximation divides the flow into an inviscid portion (which is easy to solve by a number of methods) and the boundary layer, which is governed by an easier-to-solve PDE. The continuity and Navier–Stokes equations for a two-dimensional steady incompressible flow in Cartesian coordinates are given by

∂u/∂x + ∂v/∂y = 0

u ∂u/∂x + v ∂u/∂y = −(1/ρ) ∂p/∂x + ν (∂²u/∂x² + ∂²u/∂y²)

u ∂v/∂x + v ∂v/∂y = −(1/ρ) ∂p/∂y + ν (∂²v/∂x² + ∂²v/∂y²)

where u and v are the velocity components, ρ is the density, p is the pressure, and ν is the kinematic viscosity of the fluid at a point.

The approximation states that, for a sufficiently high Reynolds number, the flow over a surface can be divided into an outer region of inviscid flow unaffected by viscosity (the majority of the flow) and a region close to the surface where viscosity is important (the boundary layer).


Let u and v be the streamwise and transverse (wall-normal) velocities respectively inside the boundary layer. Using scale analysis, it can be shown that the above equations of motion reduce within the boundary layer to

∂u/∂x + ∂v/∂y = 0

u ∂u/∂x + v ∂u/∂y = −(1/ρ) ∂p/∂x + ν ∂²u/∂y²

and if the fluid is incompressible (as liquids are under standard conditions):

(1/ρ) ∂p/∂y = 0

The asymptotic analysis also shows that v, the wall-normal velocity, is small compared with u, the streamwise velocity, and that variations in properties in the streamwise direction are generally much lower than those in the wall-normal direction.

Since the static pressure p is independent of y, the pressure at the edge of the boundary layer is the pressure throughout the boundary layer at a given streamwise position. The external pressure may be obtained through an application of Bernoulli’s equation. Let u0 be the fluid velocity outside the boundary layer, where u and u0 are both parallel. Substituting for p gives the following result:

u ∂u/∂x + v ∂u/∂y = u0 du0/dx + ν ∂²u/∂y²

with the boundary condition

∂u/∂x + ∂v/∂y = 0

For a flow in which the static pressure p also does not change in the direction of the flow,

∂p/∂x = 0

so u0 remains constant.

Therefore, the equation of motion simplifies to become

u ∂u/∂x + v ∂u/∂y = ν ∂²u/∂y²

These approximations are used in a variety of practical flow problems of scientific and engineering interest. The above analysis is for any instantaneous laminar or turbulent boundary layer, but it is used mainly in laminar flow studies, since there the mean flow is also the instantaneous flow because no velocity fluctuations are present.

4.4 Turbulent boundary layers

The treatment of turbulent boundary layers is far more difficult due to the time-dependent variation of the flow properties. One of the most widely used techniques for tackling turbulent flows is Reynolds decomposition, in which the instantaneous flow properties are decomposed into a mean and a fluctuating component. Applying this technique to the boundary layer equations gives the full turbulent boundary layer equations, not often given in the literature (here u, v and p denote mean quantities, primes denote fluctuations, and products such as u′v′ are time-averaged):

∂u/∂x + ∂v/∂y = 0

u ∂u/∂x + v ∂u/∂y = −(1/ρ) ∂p/∂x + ν (∂²u/∂x² + ∂²u/∂y²) − ∂(u′v′)/∂y − ∂(u′²)/∂x

u ∂v/∂x + v ∂v/∂y = −(1/ρ) ∂p/∂y + ν (∂²v/∂x² + ∂²v/∂y²) − ∂(u′v′)/∂x − ∂(v′²)/∂y

Using the same order-of-magnitude analysis as for the instantaneous equations, these turbulent boundary layer equations generally reduce to their classical form:

∂u/∂x + ∂v/∂y = 0

u ∂u/∂x + v ∂u/∂y = −(1/ρ) ∂p/∂x + ν ∂²u/∂y² − ∂(u′v′)/∂y

∂p/∂y = 0

The additional term u′v′ in the turbulent boundary layer equations is known as the Reynolds shear stress and is unknown a priori. Solving the turbulent boundary layer equations therefore requires the use of a turbulence model, which aims to express the Reynolds shear stress in terms of known flow variables or their derivatives. The lack of accuracy and generality of such models is a major obstacle to the successful prediction of turbulent flow properties in modern fluid dynamics.

A laminar sub-layer exists within the turbulent zone; it occurs because the fluid in the immediate proximity of the surface, where the shear stress is maximum and the fluid velocity approaches zero, remains effectively laminar.
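Reynolds decomposition itself is easy to illustrate numerically: split velocity signals into mean and fluctuating parts and estimate the averaged product u′v′ that appears above. A minimal sketch, with synthetic correlated samples standing in for measured turbulent velocity time series (the numbers are arbitrary assumptions, not data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic instantaneous velocities: a mean part plus correlated fluctuations,
# standing in for a time series measured at one point near a wall.
u_fluc = rng.normal(0.0, 1.0, n)
v_fluc = -0.4 * u_fluc + rng.normal(0.0, 0.5, n)  # anti-correlated, as near a wall
u = 10.0 + u_fluc   # instantaneous u = mean + fluctuation
v = 0.1 + v_fluc

# Reynolds decomposition: compute the mean, then the fluctuation about it.
u_mean, v_mean = u.mean(), v.mean()
u_prime, v_prime = u - u_mean, v - v_mean

# The averaged product u'v' is the kinematic Reynolds shear stress.
reynolds_stress = np.mean(u_prime * v_prime)

print(u_mean, reynolds_stress)  # roughly 10 and about -0.4 for these samples
```

By construction the fluctuations here carry a covariance of about −0.4; a turbulence model’s job, in effect, is to predict this quantity without access to the instantaneous signals.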

4.5 Heat and mass transfer

In 1928, the French engineer André Lévêque observed that convective heat transfer in a flowing fluid is affected only by the velocity values very close to the surface.[1][2] For flows of large Prandtl number, the temperature/mass transition from surface to freestream temperature takes place across a very thin region close to the surface. Therefore, the most important fluid velocities are those inside this very thin region, in which the change in velocity can be considered linear with normal distance from the surface. In this way, for

u(y) = u0 [1 − (y − h)²/h²] = u0 (y/h) [2 − y/h],

when y → 0, then

u(y) ≈ 2 u0 y/h = θ y

where θ is the tangent of the Poiseuille parabola intersecting the wall. Although Lévêque’s solution was specific to heat transfer into a Poiseuille flow, his insight helped lead other scientists to an exact solution of the thermal boundary-layer problem.[3] Schuh observed that in a boundary layer, u is again a linear function of y, but that in this case the wall tangent is a function of x.[4] He expressed this with a modified version of Lévêque’s profile,

u(y) = θ(x) y

This results in a very good approximation, even for low Pr numbers, so that only liquid metals with Pr much less than 1 cannot be treated this way.[3] In 1962, Kestin and Persen published a paper describing solutions for heat transfer when the thermal boundary layer is contained entirely within the momentum layer and for various wall temperature distributions.[5] For the problem of a flat plate with a temperature jump at x = x0, they propose a substitution that reduces the parabolic thermal boundary-layer equation to an ordinary differential equation. The solution to this equation, the temperature at any point in the fluid, can be expressed as an incomplete gamma function.[2] Schlichting proposed an equivalent substitution that reduces the thermal boundary-layer equation to an ordinary differential equation whose solution is the same incomplete gamma function.[6]

4.6 Convective transfer constants from boundary layer analysis

Paul Richard Heinrich Blasius derived an exact solution to the above laminar boundary layer equations.[7] The thickness of the boundary layer δ is a function of the Reynolds number for laminar flow:

δ ≈ 5.0 x / √Re

where δ is the thickness of the boundary layer: the region of flow where the velocity is less than 99% of the far-field velocity v∞; x is the position along the semi-infinite plate, and Re is the Reynolds number, given by ρv∞x/μ (ρ = density, μ = dynamic viscosity).

The Blasius solution uses boundary conditions in a dimensionless form:

(vx − vS)/(v∞ − vS) = vx/v∞ = vy/v∞ = 0 at y = 0

(vx − vS)/(v∞ − vS) = vx/v∞ = 1 at y = ∞ and at x = 0

[Figure: the velocity boundary layer (top, orange) and the temperature boundary layer (bottom, green) share a functional form due to the similarity of the momentum and energy balances and their boundary conditions.]

Note that in many cases the no-slip boundary condition holds that vS, the fluid velocity at the surface of the plate, equals the velocity of the plate at all locations. If the plate is not moving, then vS = 0. A much more complicated derivation is required if fluid slip is allowed.[8]
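The thickness formula above is easy to evaluate numerically. A short sketch, assuming representative fluid properties for air at room temperature (the numbers are illustrative, not from the text):

```python
import math

def blasius_thickness(x, v_inf, rho, mu):
    """Laminar boundary layer thickness: delta ~ 5.0 x / sqrt(Re_x)."""
    re_x = rho * v_inf * x / mu      # local Reynolds number at position x
    return 5.0 * x / math.sqrt(re_x)

# Assumed properties for air at ~20 C: rho = 1.2 kg/m^3, mu = 1.8e-5 Pa.s
delta = blasius_thickness(x=0.5, v_inf=10.0, rho=1.2, mu=1.8e-5)
print(f"delta = {delta * 1000:.1f} mm")  # a few millimetres at Re_x ~ 3e5
```

Note that the thickness grows like √x, since Re_x itself is proportional to x; doubling the distance along the plate increases δ by a factor of √2.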

In fact, the Blasius solution for the laminar velocity profile in the boundary layer above a semi-infinite plate can easily be extended to describe thermal and concentration boundary layers for heat and mass transfer, respectively. Rather than the differential x-momentum balance (equation of motion), this uses similarly derived energy and mass balances:

Energy: vx ∂T/∂x + vy ∂T/∂y = (k/ρCp) ∂²T/∂y²

Mass: vx ∂cA/∂x + vy ∂cA/∂y = DAB ∂²cA/∂y²

For the momentum balance, the kinematic viscosity ν can be considered to be the momentum diffusivity. In the energy balance it is replaced by the thermal diffusivity α = k/ρCP, and in the mass balance by the mass diffusivity DAB. In the thermal diffusivity of a substance, k is its thermal conductivity, ρ is its density and CP is its heat capacity. The subscript AB denotes the diffusivity of species A diffusing into species B.


Under the assumption that α = DAB = ν, these equations become equivalent to the momentum balance. Thus, for Prandtl number Pr = ν/α = 1 and Schmidt number Sc = ν/DAB = 1, the Blasius solution applies directly.

Accordingly, this derivation uses a related form of the boundary conditions, replacing v with T or cA (absolute temperature or concentration of species A). The subscript S denotes a surface condition:

(vx − vS)/(v∞ − vS) = (T − TS)/(T∞ − TS) = (cA − cAS)/(cA∞ − cAS) = 0 at y = 0

(vx − vS)/(v∞ − vS) = (T − TS)/(T∞ − TS) = (cA − cAS)/(cA∞ − cAS) = 1 at y = ∞ and at x = 0

Using the streamline function, Blasius obtained the following solution for the shear stress at the surface of the plate:

τ0 = (∂vx/∂y) at y = 0 = 0.332 (v∞/x) Re^(1/2)

And via the boundary conditions, it is known that

(vx − vS)/(v∞ − vS) = (T − TS)/(T∞ − TS) = (cA − cAS)/(cA∞ − cAS)

We are given the following relations for the heat/mass flux out of the surface of the plate:

(∂T/∂y) at y = 0 = 0.332 ((T∞ − TS)/x) Re^(1/2)

(∂cA/∂y) at y = 0 = 0.332 ((cA∞ − cAS)/x) Re^(1/2)

So for Pr = Sc = 1,

δ = δT = δc = 5.0 x / √Re

where δT and δc are the regions of flow where T and cA are less than 99% of their far-field values.[9]

Because the Prandtl number of a particular fluid is often not unity, the German engineer E. Pohlhausen, who worked with Ludwig Prandtl, attempted to empirically extend these equations to apply for Pr ≠ 1. His results can be applied to Sc as well.[10] He found that for Prandtl numbers greater than 0.6, the thermal boundary layer thickness is approximately given by

δ/δT = Pr^(1/3)   and therefore   δ/δc = Sc^(1/3)

From this solution, it is possible to characterize the convective heat/mass transfer constants based on the region of boundary layer flow. Fourier’s law of conduction and Newton’s law of cooling are combined with the flux term derived above and the boundary layer thickness:

qA = −k (∂T/∂y) at y = 0 = hx (TS − T∞)

hx = 0.332 (k/x) Rex^(1/2) Pr^(1/3)

This gives the local convective constant hx at one point on the semi-infinite plane. Integrating over the length of the plate gives an average:

hL = 0.664 (k/x) ReL^(1/2) Pr^(1/3)

[Plot: the relative thickness of the thermal boundary layer versus the velocity boundary layer for various Prandtl numbers; for Pr = 1 the two are equal.]

Following the derivation with mass transfer terms (k′ = convective mass transfer constant, DAB = diffusivity of species A into species B, Sc = ν/DAB), the following solutions are obtained:

k′x = 0.332 (DAB/x) Rex^(1/2) Sc^(1/3)

k′L = 0.664 (DAB/x) ReL^(1/2) Sc^(1/3)

These solutions apply for laminar flow with a Prandtl/Schmidt number greater than 0.6.[9]
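The local and average coefficients above can be sketched in a few lines of code. The fluid properties below are assumed values representative of air; note that the 0.664 prefactor makes the average coefficient exactly twice the local value evaluated at the plate length:

```python
import math

def local_h(x, v_inf, rho, mu, k, pr):
    """Local convective heat transfer coefficient
    h_x = 0.332 (k/x) Re_x^(1/2) Pr^(1/3), laminar flow, Pr > 0.6."""
    re_x = rho * v_inf * x / mu
    return 0.332 * (k / x) * math.sqrt(re_x) * pr ** (1.0 / 3.0)

def average_h(L, v_inf, rho, mu, k, pr):
    """Average coefficient over a plate of length L: the 0.664 prefactor
    makes this twice the local value at x = L."""
    return 2.0 * local_h(L, v_inf, rho, mu, k, pr)

# Assumed properties for air at ~25 C (illustrative values):
props = dict(v_inf=5.0, rho=1.18, mu=1.85e-5, k=0.026, pr=0.71)
hx = local_h(0.2, **props)       # local coefficient at x = 0.2 m, W/(m^2 K)
hL = average_h(0.2, **props)     # average over a 0.2 m plate
print(hx, hL)
```

The mass transfer coefficients k′x and k′L follow the same pattern, with k replaced by DAB and Pr by Sc.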

4.7 Boundary layer turbine

This effect was exploited in the Tesla turbine, patented by Nikola Tesla in 1913. It is referred to as a bladeless turbine because it uses the boundary layer effect rather than a fluid impinging upon blades as in a conventional turbine. Boundary layer turbines are also known as cohesion-type turbines, bladeless turbines, and Prandtl-layer turbines (after Ludwig Prandtl).

4.8 See also

• Boundary layer separation

• Boundary-layer thickness

• Boundary layer suction

• Boundary layer control

• Coandă effect

• Facility for Airborne Atmospheric Measurements

• Logarithmic law of the wall

• Planetary boundary layer


• Shape factor (boundary layer flow)

• Shear stress

4.9 References

[1] Lévêque, A. (1928). “Les lois de la transmission de chaleur par convection”. Annales des Mines ou Recueil de Mémoires sur l'Exploitation des Mines et sur les Sciences et les Arts qui s'y Rattachent, Mémoires (in French) XIII (13): 201–239.

[2] Niall McMahon. “André Lévêque p285, a review of his velocity profile approximation”.

[3] Martin, H. (2002). “The generalized Lévêque equation and its practical use for the prediction of heat and mass transfer rates from pressure drop”. Chemical Engineering Science 57 (16): 3217–3223. doi:10.1016/S0009-2509(02)00194-X.

[4] Schuh, H. (1953). “On asymptotic solutions for the heat transfer at varying wall temperatures in a laminar boundary layer with Hartree’s velocity profiles”. Jour. Aero. Sci. 20 (2): 146–147.

[5] Kestin, J. and Persen, L.N. (1962). “The transfer of heat across a turbulent boundary layer at very high Prandtl numbers”. Int. J. Heat Mass Transfer 5: 355–371. doi:10.1016/0017-9310(62)90026-1.

[6] Schlichting, H. (1979). Boundary-Layer Theory (7th ed.). New York: McGraw-Hill.

[7] Blasius, H. (1908). “Grenzschichten in Flüssigkeiten mit kleiner Reibung”. Z. Math. Phys. 56: 1–37. (English translation)

[8] Martin, Michael J. (2001). “Blasius boundary layer solution with slip flow conditions”. AIP Conference Proceedings 585 (1): 518–523. American Institute of Physics.

[9] Geankoplis, Christie J. (2003). Transport Processes and Separation Process Principles (Includes Unit Operations) (4th ed.). Upper Saddle River, NJ: Prentice Hall Professional Technical Reference.

[10] Pohlhausen, E. (1921). “Der Wärmeaustausch zwischen festen Körpern und Flüssigkeiten mit kleiner Reibung und kleiner Wärmeleitung”. Z. Angew. Math. Mech. 1: 115–121. doi:10.1002/zamm.19210010205.

• Chanson, H. (2009). Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows. CRC Press, Taylor & Francis Group, Leiden, The Netherlands, 478 pages. ISBN 978-0-415-49271-3.

• Polyanin, A.D. and Zaitsev, V.F., Handbook of Nonlinear Partial Differential Equations, Chapman & Hall/CRC Press, Boca Raton – London, 2004. ISBN 1-58488-355-3.

• Polyanin, A.D., Kutepov, A.M., Vyazmin, A.V., and Kazenin, D.A., Hydrodynamics, Mass and Heat Transfer in Chemical Engineering, Taylor & Francis, London, 2002. ISBN 0-415-27237-8.

• Schlichting, Hermann; Gersten, Klaus; Krause, E.; Oertel, H. Jr.; Mayes, C., Boundary-Layer Theory (8th ed.), Springer, 2004. ISBN 3-540-66270-7.

• Anderson, John D. Jr., “Ludwig Prandtl’s Boundary Layer”, Physics Today, December 2005.

• Anderson, John (1992). Fundamentals of Aerodynamics (2nd ed.). pp. 711–714. ISBN 0-07-001679-8.

• Tennekes, H. and Lumley, J.L., A First Course in Turbulence, The MIT Press, 1972.

4.10 External links

• National Science Digital Library – Boundary Layer

• Moore, Franklin K., “Displacement effect of a three-dimensional boundary layer”. NACA Report 1124, 1953.

• Benson, Tom, “Boundary layer”. NASA Glenn Learning Technologies.

• Boundary layer separation

• Boundary layer equations: Exact Solutions – from EqWorld

• Jones, T.V., Boundary Layer Heat Transfer


Chapter 5

Similitude (model)

For other uses, see Similitude (disambiguation).

[Figure: a full-scale X-43 wind tunnel test. The test is designed to have dynamic similitude with the real application to ensure valid results.]

Similitude is a concept applicable to the testing of engineering models. A model is said to have similitude with the real application if the two share geometric similarity, kinematic similarity and dynamic similarity. Similarity and similitude are interchangeable in this context.

The term dynamic similitude is often used as a catch-all because it implies that geometric and kinematic similitude have already been met.

Similitude’s main application is in hydraulic and aerospace engineering to test fluid flow conditions with scaled models. It is also the primary theory behind many textbook formulas in fluid mechanics.

5.1 Overview

Engineering models are used to study complex fluid dynamics problems where calculations and computer simulations aren't reliable. Models are usually smaller than the final design, but not always. Scale models allow testing of a design prior to building, and in many cases are a critical step in the development process.

Construction of a scale model, however, must be accompanied by an analysis to determine the conditions under which it is tested. While the geometry may be simply scaled, other parameters, such as pressure, temperature or the velocity and type of fluid, may need to be altered. Similitude is achieved when testing conditions are created such that the test results are applicable to the real design.

[Figure: the three conditions required for a model to have similitude with an application.]

The following criteria are required to achieve similitude:

• Geometric similarity – the model is the same shape as the application, usually scaled.

• Kinematic similarity – the fluid flows of both the model and the real application must undergo similar rates of change of motion (fluid streamlines are similar).

• Dynamic similarity – the ratios of all forces acting on corresponding fluid particles and boundary surfaces in the two systems are constant.

To satisfy the above conditions, the application is analyzed:

1. All parameters required to describe the system are identified using principles from continuum mechanics.

2. Dimensional analysis is used to express the system with as few independent variables and as many dimensionless parameters as possible.


3. The values of the dimensionless parameters are held to be the same for both the scale model and the application. This can be done because they are dimensionless and will ensure dynamic similitude between the model and the application. The resulting equations are used to derive scaling laws which dictate model testing conditions.

It is often impossible to achieve strict similitude during a model test. The greater the departure from the application’s operating conditions, the more difficult achieving similitude becomes. In these cases some aspects of similitude may be neglected, focusing on only the most important parameters.

The design of marine vessels remains more of an art than a science in large part because dynamic similitude is especially difficult to attain for a vessel that is partially submerged: a ship is affected by wind forces in the air above it, by hydrodynamic forces within the water under it, and especially by wave motions at the interface between the water and the air. The scaling requirements for each of these phenomena differ, so models cannot replicate what happens to a full-sized vessel nearly so well as can be done for an aircraft or submarine, each of which operates entirely within one medium.

Similitude is a term used widely in fracture mechanics relating to the strain-life approach. Under given loading conditions, the fatigue damage in an un-notched specimen is comparable to that of a notched specimen. Similitude suggests that the component fatigue life of the two objects will also be similar.

5.2 An example

Consider a submarine modeled at 1/40th scale. The application operates in sea water at 0.5 °C, moving at 5 m/s. The model will be tested in fresh water at 20 °C. Find the power required for the submarine to operate at the stated speed.

A free body diagram is constructed and the relevant relationships of force and velocity are formulated using techniques from continuum mechanics. The variables which describe the system are the velocity V, a characteristic length L, the fluid density ρ, the dynamic viscosity μ, and the force F.

This example has five independent variables and three fundamental units. The fundamental units are: metre, kilogram, second.[1]

Invoking the Buckingham π theorem shows that the system can be described with two dimensionless numbers and one independent variable.[2]

Dimensional analysis is used to re-arrange the units to form the Reynolds number (Re) and the pressure coefficient (Cp). These dimensionless numbers account for all the variables listed above except F, which will be the test measurement. Since the dimensionless parameters will stay constant for both the test and the real application, they will be used to formulate scaling laws for the test.

Scaling laws:

Re = ρVL/μ  →  Vmodel = Vapplication × (ρa/ρm) × (La/Lm) × (μm/μa)

Cp = 2Δp/(ρV²),  F = Δp L²  →  Fapplication = Fmodel × (ρa/ρm) × (Va/Vm)² × (La/Lm)²

The pressure (p) is not one of the five variables, but the force (F) is. The pressure difference (Δp) has thus been replaced with (F/L²) in the pressure coefficient. This gives a required test velocity of:

Vmodel = Vapplication × 21.9

A model test is then conducted at that velocity, and the force measured in the model (Fmodel) is scaled to find the force that can be expected for the real application (Fapplication):

Fapplication = Fmodel × 3.44

The power P in watts required by the submarine is then:

P [W] = Fapplication × Vapplication = Fmodel [N] × 17.2 m/s

Note that even though the model is scaled smaller, the water velocity needs to be increased for testing. This remarkable result shows how similitude in nature is often counterintuitive.
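The factors 21.9, 3.44 and 17.2 can be reproduced directly from the scaling laws above. A sketch, with assumed textbook values for the fluid properties (density in kg/m³, dynamic viscosity in Pa·s):

```python
# Assumed fluid properties (typical textbook values):
rho_a, mu_a = 1028.0, 1.88e-3   # application: sea water at 0.5 C
rho_m, mu_m = 998.0, 1.00e-3    # model: fresh water at 20 C
V_a = 5.0                        # application speed, m/s
scale = 40.0                     # length ratio L_a / L_m

# Reynolds-number matching gives the required test velocity:
V_m = V_a * (rho_a / rho_m) * scale * (mu_m / mu_a)

# Pressure-coefficient matching gives the force scale factor:
force_factor = (rho_a / rho_m) * (V_a / V_m) ** 2 * scale ** 2

# Power scale: P = F_application * V_application = F_model * power_factor
power_factor = force_factor * V_a

print(V_m / V_a, force_factor, power_factor)  # ~21.9, ~3.44, ~17.2
```

Small differences from the quoted factors simply reflect the precision of the assumed water properties.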

5.3 Typical applications

See also: List of dimensionless numbers

Similitude has been well documented for a large number of engineering problems and is the basis of many textbook formulas and dimensionless quantities. These formulas and quantities are easy to use without having to repeat the laborious task of dimensional analysis and formula derivation. Simplification of the formulas (by neglecting some aspects of similitude) is common, and needs to be reviewed by the engineer for each application.

Similitude can be used to predict the performance of a new design based on data from an existing, similar design. In this case, the model is the existing design. Another use of similitude and models is in the validation of computer simulations, with the ultimate goal of eliminating the need for physical models altogether.

Another application of similitude is to replace the operating fluid with a different test fluid. Wind tunnels, for example, have trouble with air liquefying in certain conditions, so helium is sometimes used. Other applications may operate in dangerous or expensive fluids, so the testing is carried out in a more convenient substitute.

Some common applications of similitude and associated dimensionless numbers:

5.4 Notes

[1] In the SI system of units, newtons can be expressed in terms of kg·m/s².

[2] 5 variables − 3 fundamental units ⇒ 2 dimensionless numbers.

5.5 See also

• Dimensionless number

• Buckingham π theorem

• Dimensional analysis

• MKS system of fundamental units

• Dynamic similarity (Reynolds and Womersley numbers)

• Similitude of ship models

5.6 References

• Binder, Raymond C., Fluid Mechanics (5th ed.), Prentice-Hall, Englewood Cliffs, N.J., 1973.

• Howarth, L. (ed.), Modern Developments in Fluid Mechanics, High Speed Flow, Oxford at the Clarendon Press, 1953.

• Kline, Stephen J., Similitude and Approximation Theory, Springer-Verlag, New York, 1986. ISBN 0-387-16518-5.

• Chanson, Hubert, “Turbulent Air-water Flows in Hydraulic Structures: Dynamic Similarity and Scale Effects”, Environmental Fluid Mechanics, 2009, Vol. 9, No. 2, pp. 125–142. doi:10.1007/s10652-008-9078-3.

• Heller, V., “Scale Effects in Physical Hydraulic Engineering Models”, Journal of Hydraulic Research, 2011, Vol. 49, No. 3, pp. 293–306. doi:10.1080/00221686.2011.578914.

5.7 External links

• MIT OpenCourseWare lecture notes on similitude for marine engineering (PDF)


Chapter 6

Lagrangian and Eulerian specification of the flow field

This article is about fluid mechanics. For the use of generalized coordinates in classical mechanics, see generalized coordinates, Lagrangian mechanics and Hamiltonian mechanics.

In fluid dynamics and finite-deformation plasticity, the Lagrangian specification of the flow field is a way of looking at fluid motion where the observer follows an individual fluid parcel as it moves through space and time.[1][2] Plotting the position of an individual parcel through time gives the pathline of the parcel. This can be visualized as sitting in a boat and drifting down a river.

The Eulerian specification of the flow field is a way of looking at fluid motion that focuses on specific locations in the space through which the fluid flows as time passes.[1][2] This can be visualized by sitting on the bank of a river and watching the water pass the fixed location.

The Lagrangian and Eulerian specifications of the flow field are sometimes loosely denoted as the Lagrangian and Eulerian frames of reference. However, in general both the Lagrangian and the Eulerian specification of the flow field can be applied in any observer’s frame of reference, and in any coordinate system used within the chosen frame of reference.

6.1 Description

In the Eulerian specification of the flow field, the flow quantities are depicted as a function of position x and time t. Specifically, the flow is described by a function

v(x, t)

giving the flow velocity at position x at time t.

On the other hand, in the Lagrangian specification, individual fluid parcels are followed through time. The fluid parcels are labelled by some (time-independent) vector field a. (Often, a is chosen to be the center of mass of the parcels at some initial time t0. It is chosen in this particular manner to account for the possible changes of the shape over time. Therefore the center of mass is a good parametrization of the velocity v of the parcel.)[1] In the Lagrangian description, the flow is described by a function

X(a, t)

giving the position of the parcel labeled a at time t.

The two specifications are related as follows:[2]

v(X(a, t), t) = ∂X/∂t (a, t)

because both sides describe the velocity of the parcel labeled a at time t.

Within a chosen coordinate system, a and x are referred to as the Lagrangian coordinates and Eulerian coordinates of the flow.
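The relation above lets one recover Lagrangian pathlines from an Eulerian velocity field by integrating dX/dt = v(X, t). A minimal sketch for the solid-body rotation field v = (−y, x), an assumed example whose exact pathlines are circles, using a fourth-order Runge–Kutta step:

```python
import math

def v(x, y, t):
    """Eulerian velocity field: solid-body rotation, v = (-y, x)."""
    return -y, x

def rk4_step(x, y, t, dt):
    """One Runge-Kutta (RK4) step of the pathline ODE dX/dt = v(X, t)."""
    k1 = v(x, y, t)
    k2 = v(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
    k3 = v(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
    k4 = v(x + dt * k3[0], y + dt * k3[1], t + dt)
    x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return x, y

# Follow the parcel labelled by its initial position a = (1, 0)
# for a quarter turn (t from 0 to pi/2).
x, y, t = 1.0, 0.0, 0.0
n = 1000
dt = (math.pi / 2) / n
for _ in range(n):
    x, y = rk4_step(x, y, t, dt)
    t += dt

print(x, y)  # the parcel should end up very near (0, 1)
```

The label a here is exactly the parcel's initial position, which is the common choice of Lagrangian coordinate mentioned in the text.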

6.2 Substantial derivative

Main article: Material derivative

The Lagrangian and Eulerian specifications of the kinematics and dynamics of the flow field are related by the substantial derivative (also called the Lagrangian derivative, convective derivative, material derivative, or particle derivative).[1]

Suppose we have a flow field with Eulerian specification v, and we are also given some function F(x, t) defined for every position x and every time t. (For instance, F could be an external force field, or temperature.) One might then ask about the total rate of change of F experienced by a specific flow parcel. This can be computed as

DF/Dt = ∂F/∂t + (v · ∇)F

(where ∇ denotes the gradient with respect to x, and the operator v·∇ is to be applied to each component of F). This tells us that the total rate of change of the function F, as the fluid parcel moves through a flow field described by its Eulerian specification v, is equal to the sum of the local rate of change and the convective rate of change of F. This is a consequence of the chain rule, since we are differentiating the function F(X(a, t), t) with respect to t.

Conservation laws for a unit mass have a Lagrangian form, which together with mass conservation produce Eulerian conservation; on the contrary, when fluid particles can exchange a quantity (like energy or momentum), only an Eulerian conservation law exists; see Falkovich.
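The identity DF/Dt = ∂F/∂t + (v·∇)F can be checked numerically for a concrete case. In the sketch below, the stagnation-point flow v = (ax, −ay) and the steady scalar F = x²y are assumed example choices; for them the convective term works out analytically to a·x²y, which is compared against a finite-difference rate of change following the parcel:

```python
import math

# Example flow v = (a x, -a y); the parcel starting at (x0, y0) follows
# X(t) = (x0 e^{a t}, y0 e^{-a t}).  For the steady scalar F = x^2 y,
# the Eulerian formula gives DF/Dt = 0 + a*x*2xy - a*y*x^2 = a x^2 y.
a = 0.7
x0, y0 = 1.3, 0.8

def parcel(t):
    """Lagrangian position of the parcel labelled (x0, y0)."""
    return x0 * math.exp(a * t), y0 * math.exp(-a * t)

def F(x, y):
    return x * x * y

t, h = 0.5, 1e-6
# Finite-difference rate of change of F following the parcel:
xp, yp = parcel(t + h)
xm, ym = parcel(t - h)
dF_dt_following = (F(xp, yp) - F(xm, ym)) / (2 * h)

# Eulerian material-derivative formula (local term vanishes, F is steady):
x, y = parcel(t)
material_derivative = a * x * x * y

print(dF_dt_following, material_derivative)  # the two should agree closely
```

The agreement is exactly the content of the chain-rule argument in the text: differentiating F(X(a, t), t) in t reproduces the local plus convective terms.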

6.3 See also

• Contour advection

• Coordinate system

• Equivalent latitude

• Fluid dynamics

• Frame of reference

• Generalized Lagrangian mean

• Lagrangian particle tracking

• Semi-Lagrangian scheme

• Streamlines, streaklines, and pathlines

• Trajectory (fluid mechanics)

6.4 Notes

[1] Batchelor (1973), pp. 71–73.

[2] Lamb (1994), §3–§7 and §13–§16.

6.5 References

• Batchelor, G.K. (1973), An Introduction to Fluid Dynamics, Cambridge University Press, ISBN 0-521-09817-3

• Lamb, H. (1994) [1932], Hydrodynamics (6th ed.), Cambridge University Press, ISBN 978-0-521-45868-9

• Falkovich, Gregory (2011), Fluid Mechanics (A Short Course for Physicists), Cambridge University Press, ISBN 978-1-107-00575-4


Chapter 7

Lagrangian mechanics

Lagrangian mechanics is a reformulation of classical mechanics using the principle of stationary action (also called the principle of least action).[1] Lagrangian mechanics applies to systems whether or not they conserve energy or momentum, and it provides conditions under which energy, momentum or both are conserved.[2] It was introduced by the Italian-French mathematician Joseph-Louis Lagrange in 1788.

In Lagrangian mechanics, the trajectory of a system of particles is derived by solving the Lagrange equations in one of two forms: either the Lagrange equations of the first kind,[3] which treat constraints explicitly as extra equations, often using Lagrange multipliers,[4][5] or the Lagrange equations of the second kind, which incorporate the constraints directly by a judicious choice of generalized coordinates.[3][6] The fundamental lemma of the calculus of variations shows that solving the Lagrange equations is equivalent to finding the path for which the action functional is stationary; the action is the integral of the Lagrangian over time.

The use of generalized coordinates may considerably simplify a system’s analysis. For example, consider a small frictionless bead traveling in a groove. If one is tracking the bead as a particle, calculating the motion of the bead using Newtonian mechanics would require solving for the time-varying constraint force required to keep the bead in the groove. For the same problem using Lagrangian mechanics, one looks at the path of the groove and chooses a set of independent generalized coordinates that completely characterize the possible motion of the bead. This choice eliminates the need for the constraint force to enter into the resulting system of equations. There are fewer equations, since one is not directly calculating the influence of the groove on the bead at a given moment.

7.1 Conceptual framework

7.1.1 Generalized coordinates

Concepts and terminology

For one particle acted on by external forces, Newton's second law forms a set of 3 second-order ordinary differential equations, one for each dimension. Therefore, the motion of the particle can be completely described by 6 independent variables: 3 initial position coordinates and 3 initial velocity coordinates. Given these, the general solutions to Newton's second law become particular solutions that determine the time evolution of the particle's behaviour after its initial state (t = 0).

The most familiar set of variables for position r = (r1, r2, r3) and velocity ṙ = (ṙ1, ṙ2, ṙ3) are Cartesian coordinates and their time derivatives (i.e. position (x, y, z) and velocity (vx, vy, vz) components). Determining forces in terms of standard coordinates can be complicated, and usually requires much labour.

An alternative and more efficient approach is to use only as many coordinates as are needed to define the position of the particle, while incorporating the constraints on the system and writing down the kinetic and potential energies. In other words, one determines the number of degrees of freedom of the particle, i.e. the number of independent ways the system can move subject to the constraints (forces that prevent it moving along certain paths). Energies are much easier to write down and calculate than forces, since energy is a scalar while forces are vectors.

These coordinates are generalized coordinates, denoted qj, and there is one for each degree of freedom. Their corresponding time derivatives are the generalized velocities, q̇j. The number of degrees of freedom is usually not equal to the number of spatial dimensions: multi-body systems in 3-dimensional space (such as Barton's pendulums, planets in the solar system, or atoms in molecules) can have many more degrees of freedom, incorporating rotations as well as translations. This contrasts with the number of spatial coordinates used with Newton's laws above.

Mathematical formulation

The position vector r in a standard coordinate system (like Cartesian, spherical, etc.) is related to the generalized coordinates by some transformation equation:



r = r(qi, t).

where there are as many qi as needed (the number of degrees of freedom of the system); likewise for velocities and generalized velocities.

For example, for a simple pendulum of length ℓ, there is the constraint of the pendulum bob's suspension (rod/wire/string etc.). The position r depends on the x and y coordinates at time t, that is, r(t) = (x(t), y(t)); however, x and y are coupled to each other by a constraint equation (if x changes, y must change, and vice versa). A logical choice for the generalized coordinate is the angle of the pendulum from the vertical, θ, so we have r = (x(θ), y(θ)) = r(θ), in which θ = θ(t). The transformation equation is then

$r(\theta(t)) = (\ell\sin\theta,\ -\ell\cos\theta)$

and so

$\dot{r}(\theta(t), \dot{\theta}(t)) = (\ell\dot{\theta}\cos\theta,\ \ell\dot{\theta}\sin\theta),$

which corresponds to the one degree of freedom the pendulum has. The term "generalized coordinates" is really a holdover from the period when Cartesian coordinates were the default coordinate system.

In general, from m independent generalized coordinates qj, the following transformation equations hold for a system composed of n particles:[7]:260

$r_1 = r_1(q_1, q_2, \dots, q_m, t)$
$r_2 = r_2(q_1, q_2, \dots, q_m, t)$
$\vdots$
$r_n = r_n(q_1, q_2, \dots, q_m, t)$

where m indicates the total number of generalized coordinates. An expression for the (infinitesimal) virtual displacement δri of the system, for time-independent constraints or "velocity-dependent constraints", has the same form as a total differential:[7]:264

$\delta r_i = \sum_{j=1}^{m}\frac{\partial r_i}{\partial q_j}\,\delta q_j,$

where j is an integer label corresponding to a generalized coordinate.

The generalized coordinates form a discrete set of variables that define the configuration of a system. The continuum analogue for defining a field uses field variables, say ϕ(r, t), a density function varying with position and time.
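The pendulum transformation above and its velocity can be checked with a computer algebra system; a minimal sketch using sympy (the symbol and function names are illustrative, not from the text):

```python
import sympy as sp

# Minimal check of the pendulum transformation r(θ) = (ℓ sin θ, -ℓ cos θ)
# and its time derivative; symbol/function names are illustrative.
t, l = sp.symbols('t ell', positive=True)
theta = sp.Function('theta')

# position of the bob in terms of the single generalized coordinate θ(t)
x = l * sp.sin(theta(t))
y = -l * sp.cos(theta(t))

# the generalized velocity enters through the chain rule
xdot = sp.diff(x, t)   # ℓ θ' cos θ
ydot = sp.diff(y, t)   # ℓ θ' sin θ
```

Differentiating the transformation automatically produces the chain-rule factors θ̇ that appear in the velocity formula.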

7.1.2 D'Alembert's principle and generalized forces

D'Alembert's principle introduces the concept of virtual work due to applied forces Fi and inertial forces, acting on a three-dimensional accelerating system of n particles whose motion is consistent with its constraints.[7]:269

Mathematically, the virtual work δW done on the particles of mass mi through virtual displacements δri (consistent with the constraints) is

$\delta W = \sum_{i=1}^{n}\left(F_i - m_i a_i\right)\cdot\delta r_i = 0,$

where ai are the accelerations of the particles in the system and i = 1, 2, ..., n simply labels the particles. In terms of generalized coordinates,

$\delta W = \sum_{j=1}^{m}\sum_{i=1}^{n}\left(F_i - m_i a_i\right)\cdot\frac{\partial r_i}{\partial q_j}\,\delta q_j = 0.$

This expression suggests that the applied forces may be expressed as generalized forces Qj. Dividing by δqj gives the definition of a generalized force:[7]:265

$Q_j = \frac{\delta W}{\delta q_j} = \sum_{i=1}^{n} F_i\cdot\frac{\partial r_i}{\partial q_j}.$

If the forces Fi are conservative, there is a scalar potential field V whose gradient gives the force:[7]:266, 270

$F_i = -\nabla V \quad\Rightarrow\quad Q_j = -\sum_{i=1}^{n}\nabla V\cdot\frac{\partial r_i}{\partial q_j} = -\frac{\partial V}{\partial q_j},$

i.e. generalized forces can be reduced to a potential gradient in terms of generalized coordinates. The previous result may be easier to see by recognizing that V is a function of the ri, which are in turn functions of the qj, and then applying the chain rule to the derivative of V with respect to qj.
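This chain-rule identity can be checked on the simple pendulum; a hedged sketch in sympy, with gravity F = (0, −mg) and generalized coordinate θ (all symbol names are illustrative):

```python
import sympy as sp

# Hedged check of Q_j = Σ F_i · ∂r_i/∂q_j = -∂V/∂q_j for the simple
# pendulum: gravity F = (0, -m g), generalized coordinate θ.
m, g, l, th = sp.symbols('m g ell theta', positive=True)

r = sp.Matrix([l * sp.sin(th), -l * sp.cos(th)])  # bob position
F = sp.Matrix([0, -m * g])                        # applied (gravity) force

Q = (F.T * r.diff(th))[0]       # generalized force F · ∂r/∂θ
V = -m * g * l * sp.cos(th)     # potential energy V = m g y

# Q and -∂V/∂θ agree: both equal -m g ℓ sin θ
```

Both routes, projecting the force or differentiating the potential, yield the same generalized force.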

7.1.3 Kinetic energy relations

The kinetic energy, T, for the system of particles is defined by[7]:269

$T = \frac{1}{2}\sum_{i=1}^{n} m_i\,\dot{r}_i\cdot\dot{r}_i.$

The partial derivatives of T with respect to the generalized coordinates qj and generalized velocities q̇j are:[7]:269


$\frac{\partial T}{\partial q_j} = \sum_{i=1}^{n} m_i\,\dot{r}_i\cdot\frac{\partial \dot{r}_i}{\partial q_j}, \qquad \frac{\partial T}{\partial \dot{q}_j} = \sum_{i=1}^{n} m_i\,\dot{r}_i\cdot\frac{\partial \dot{r}_i}{\partial \dot{q}_j}.$

Because $\dot{q}_j$ and $q_j$ are independent variables:

$\frac{\partial \dot{r}_i}{\partial \dot{q}_j} = \frac{\partial r_i}{\partial q_j}.$

Then:

$\frac{\partial T}{\partial \dot{q}_j} = \sum_{i=1}^{n} m_i\,\dot{r}_i\cdot\frac{\partial r_i}{\partial q_j}.$

The total time derivative of this equation is

$\frac{d}{dt}\frac{\partial T}{\partial \dot{q}_j} = \sum_{i=1}^{n} m_i\,\ddot{r}_i\cdot\frac{\partial r_i}{\partial q_j} + \sum_{i=1}^{n} m_i\,\dot{r}_i\cdot\frac{\partial \dot{r}_i}{\partial q_j} = Q_j + \frac{\partial T}{\partial q_j},$

resulting in the generalized equation of motion

$\frac{d}{dt}\frac{\partial T}{\partial \dot{q}_j} - \frac{\partial T}{\partial q_j} = Q_j.$

Newton's laws are contained in it, yet there is no need to find the constraint forces, because virtual work and generalized coordinates (which account for constraints) are used. This equation is not itself used in practice, but it is a step towards deriving Lagrange's equations (see below).[8]

7.1.4 Lagrangian and action

The core element of Lagrangian mechanics is the Lagrangian function, which summarizes the dynamics of the entire system in a very simple expression. The physics of analyzing a system is reduced to choosing the most convenient set of generalized coordinates, determining the kinetic and potential energies of the constituents of the system, then writing down the Lagrangian to use in Lagrange's equations. It is defined by[9]

L = T − V

where T is the total kinetic energy and V is the total potential energy of the system.

The next fundamental element is the action S, defined as the time integral of the Lagrangian:[8]

$S = \int_{t_1}^{t_2} L\, dt.$

This also contains the dynamics of the system, and has deep theoretical implications (discussed below). Technically, the action is a functional: it maps the full Lagrangian function for all times between t1 and t2 to a scalar value. Its dimensions are the same as those of angular momentum.

In classical field theory, the physical system is not a set of discrete particles but a continuous field defined over a region of 3d space. Associated with the field is a Lagrangian density $\mathcal{L}(r, t)$ defined in terms of the field and its derivatives at a location r. The total Lagrangian is then the integral of the Lagrangian density over 3d space (see volume integral):

$L(t) = \int \mathcal{L}(r, t)\, d^3 r,$

where d³r is a 3d differential volume element. The action becomes an integral over space and time:

$S = \int_{t_1}^{t_2}\!\int \mathcal{L}(r, t)\, d^3 r\, dt.$

7.1.5 Hamilton's principle of stationary action

Let q0 and q1 be the coordinates at the respective initial and final times t0 and t1. Using the calculus of variations, it can be shown that Lagrange's equations are equivalent to Hamilton's principle:

The trajectory of the system between t0 and t1has a stationary action S.

By stationary, we mean that the action does not vary to first order under infinitesimal deformations of the trajectory, with the end-points (q0, t0) and (q1, t1) fixed. Hamilton's principle can be written as:

δS = 0.

Thus, instead of thinking about particles accelerating in response to applied forces, one might think of them picking out the path with a stationary action.

Hamilton's principle is sometimes referred to as the principle of least action; however, the action functional need only be stationary, not necessarily a maximum or a minimum value.


We can use this principle instead of Newton's laws as the fundamental principle of mechanics; this allows us to use an integral principle as the basis for mechanics (Newton's laws are based on differential equations, so they form a differential principle). However, it is not widely stated that Hamilton's principle is a variational principle only with holonomic constraints; if we are dealing with nonholonomic systems, the variational principle should be replaced with one involving d'Alembert's principle of virtual work. Working only with holonomic constraints is the price we have to pay for using an elegant variational formulation of mechanics.

7.2 Lagrange equations of the first kind

Lagrange introduced an analytical method for finding stationary points using the method of Lagrange multipliers, and also applied it to mechanics.

For a system subject to the (holonomic) constraint equation on the generalized coordinates:

F (r1, r2, r3) = A

where A is a constant, the Lagrange equations of the first kind are:

$\left[\frac{\partial L}{\partial r_j} - \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{r}_j}\right)\right] + \lambda\frac{\partial F}{\partial r_j} = 0,$

where λ is the Lagrange multiplier. By analogy with the mathematical procedure, we can write

$\frac{\delta L}{\delta r_j} + \lambda\frac{\partial F}{\partial r_j} = 0,$

where

$\frac{\delta L}{\delta r_j} = \frac{\partial L}{\partial r_j} - \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{r}_j}\right)$

denotes the variational derivative.

For e constraint equations F1, F2, ..., Fe, there is a Lagrange multiplier for each constraint equation, and Lagrange's equations of the first kind generalize to:

$\left[\frac{\partial L}{\partial r_j} - \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{r}_j}\right)\right] + \sum_{i=1}^{e}\lambda_i\frac{\partial F_i}{\partial r_j} = 0.$

This procedure does increase the number of equations, but there are enough to solve for all of the multipliers. The number of equations generated is the number of constraint equations plus the number of coordinates, i.e. e + m. The advantage of the method is that (potentially complicated) substitution and elimination of variables linked by constraint equations can be bypassed.

There is a connection between the constraint equations Fj and the constraint forces Nj acting in the conservative system (the forces are conservative):

$N_j = \sum_{i=1}^{e}\lambda_i\frac{\partial F_i}{\partial r_j},$

which is derived below.

7.3 Lagrange equations of the second kind

7.3.1 Euler–Lagrange equations

For any system with m degrees of freedom, the Lagrange equations include m generalized coordinates and m generalized velocities. Below, we sketch out the derivation of the Lagrange equations of the second kind. In this context, V is used rather than U for potential energy and T replaces K for kinetic energy. See the references for more detailed and more general derivations.

The equations of motion in Lagrangian mechanics are the Lagrange equations of the second kind, also known as the Euler–Lagrange equations:[8][10]

$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_j}\right) = \frac{\partial L}{\partial q_j},$

where j = 1, 2, ..., m represents the jth degree of freedom, the qj are the generalized coordinates, and the q̇j are the generalized velocities.

Although the mathematics required for Lagrange's equations appears significantly more complicated than Newton's laws, this does point to deeper insights into classical mechanics than Newton's laws alone, in particular symmetry and conservation. In practice it is often easier to solve a problem using the Lagrange equations than Newton's laws, because a minimal set of generalized coordinates qi can be chosen by convenience to exploit symmetries in the system, and constraint forces are incorporated into the geometry of the problem. There is one Lagrange equation for each generalized coordinate qi.

For a system of many particles, each particle can have a different number of degrees of freedom from the others. In each of the Lagrange equations, T is the total kinetic energy of the system, and V the total potential energy.

7.3.2 Derivation of Lagrange’s equations


Hamilton’s principle

The Euler–Lagrange equations follow directly from Hamilton's principle, and are mathematically equivalent. From the calculus of variations, any functional of the form

$J = \int_{x_1}^{x_2} F(x, y, y')\, dx$

leads to the general Euler–Lagrange equation for a stationary value of J (see the main article for the derivation):

$\frac{d}{dx}\frac{\partial F}{\partial y'} = \frac{\partial F}{\partial y}.$

Then making the replacements

$x \to t, \quad y \to q, \quad y' \to \dot{q}, \quad F \to L, \quad J \to S$

yields the Lagrange equations of mechanics. Since Hamilton's equations can be derived from Lagrange's equations (by a Legendre transformation), and Lagrange's equations can be derived from Newton's laws, and all of these are equivalent and summarize classical mechanics, classical mechanics is fundamentally governed by a variational principle (Hamilton's principle above).

Generalized forces

For a conservative system, since the potential field is only a function of position, not velocity, Lagrange's equations also follow directly from the equation of motion above:

$Q_j = \frac{d}{dt}\left(\frac{\partial (L+V)}{\partial \dot{q}_j}\right) - \frac{\partial (L+V)}{\partial q_j} = \left[\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_j}\right) + 0\right] - \left[\frac{\partial L}{\partial q_j} + \frac{\partial V}{\partial q_j}\right] = \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_j}\right) - \frac{\partial L}{\partial q_j} + Q_j,$

simplifying to

$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_j}\right) = \frac{\partial L}{\partial q_j}.$

This is consistent with the results derived above, and may be seen by differentiating the right side of the Lagrangian with respect to q̇j and time, and solely with respect to qj, adding the results and associating terms with the equations for Fi and Qj.

Newton’s laws

As the following derivation shows, no new physics is introduced, so the Lagrange equations can describe the dynamics of a classical system equivalently to Newton's laws.

When qi = ri (i.e. the generalized coordinates are simplythe Cartesian coordinates), it is straightforward to checkthat Lagrange’s equations reduce to Newton’s second law.

7.3.3 Dissipation function

Main article: Rayleigh dissipation function

In a more general formulation, the forces could be both potential and viscous. If an appropriate transformation can be found from the Fi, Rayleigh suggests using a dissipation function D of the following form:[7]:271

$D = \frac{1}{2}\sum_{j=1}^{m}\sum_{k=1}^{m} C_{jk}\,\dot{q}_j\dot{q}_k,$

where the Cjk are constants related to, though not necessarily equal to, the damping coefficients in the physical system. If D is defined this way, then[7]:271

$Q_j = -\frac{\partial V}{\partial q_j} - \frac{\partial D}{\partial \dot{q}_j}$

and

$0 = \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_j}\right) - \frac{\partial L}{\partial q_j} + \frac{\partial D}{\partial \dot{q}_j}.$
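As a sketch of how the dissipation function is used, consider a one-coordinate damped oscillator; the choices L = mq̇²/2 − kq²/2 and D = cq̇²/2 below are illustrative assumptions, not an example from the text:

```python
import sympy as sp

# Hedged sketch: Rayleigh dissipation for a one-coordinate damped
# oscillator. L = m q'^2/2 - k q^2/2 and D = c q'^2/2 are illustrative.
t, m, k, c = sp.symbols('t m k c', positive=True)
q = sp.Function('q')
qd = sp.diff(q(t), t)

L = m * qd**2 / 2 - k * q(t)**2 / 2
D = c * qd**2 / 2

# modified Lagrange equation: d/dt(∂L/∂q') - ∂L/∂q + ∂D/∂q' = 0
eom = sp.diff(sp.diff(L, qd), t) - sp.diff(L, q(t)) + sp.diff(D, qd)
# eom collects to m q'' + c q' + k q, the damped-oscillator equation
```

The ∂D/∂q̇ term supplies exactly the linear viscous force −cq̇ in the equation of motion.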

7.3.4 Examples

In this section two examples are provided in which theabove concepts are applied. The first example establishesthat in a simple case, the Newtonian approach and theLagrangian formalism agree. The second case illustratesthe power of the above formalism, in a case that is hardto solve with Newton’s laws.

Falling mass

Consider a point mass m falling freely from rest. Gravity exerts a force F = mg on the mass (assuming g constant during the motion). Filling in the force in Newton's law, we find ẍ = g, from which the solution

$x(t) = \frac{1}{2} g t^2$

follows (by integrating twice with respect to time and choosing the origin as the starting point). This result


can also be derived through the Lagrangian formalism. Take x to be the (downward) coordinate, which is 0 at the starting point. The kinetic energy is T = ½mv² and the potential energy is V = −mgx; hence

$L = T - V = \frac{1}{2} m\dot{x}^2 + m g x.$

Then

$0 = \frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = m g - m\frac{d\dot{x}}{dt},$

which can be rewritten as ẍ = g, yielding the same result as earlier.
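The falling-mass example can also be verified symbolically; a sketch using sympy's euler_equations and dsolve (variable names are illustrative):

```python
import sympy as sp

# Hedged sketch: recover x(t) = g t^2 / 2 from the Lagrangian
# L = m x'^2 / 2 + m g x (x measured downward, x(0) = x'(0) = 0).
t, m, g = sp.symbols('t m g', positive=True)
x = sp.Function('x')

L = m * sp.diff(x(t), t)**2 / 2 + m * g * x(t)

# Lagrange equation of the second kind for the coordinate x
eq = sp.euler_equations(L, x(t), t)[0]

# solve with the initial conditions of release from rest at the origin
sol = sp.dsolve(eq, x(t), ics={x(0): 0, sp.diff(x(t), t).subs(t, 0): 0})
```

The solver returns the same parabolic solution obtained from Newton's law above.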

Pendulum on a movable support

Consider a pendulum of mass m and length ℓ attached to a support of mass M, which can move along a line in the x-direction. Let x be the coordinate along the line of the support, and denote the position of the pendulum by the angle θ from the vertical.

[Figure: sketch of the situation, with definition of the coordinates.]

The kinetic energy can then be shown to be

$T = \frac{1}{2} M\dot{x}^2 + \frac{1}{2} m\left(\dot{x}_{\mathrm{pend}}^2 + \dot{y}_{\mathrm{pend}}^2\right) = \frac{1}{2} M\dot{x}^2 + \frac{1}{2} m\left[\left(\dot{x} + \ell\dot{\theta}\cos\theta\right)^2 + \left(\ell\dot{\theta}\sin\theta\right)^2\right],$

and the potential energy of the system is

$V = m g\, y_{\mathrm{pend}} = -m g \ell\cos\theta.$

The Lagrangian is therefore

$L = T - V = \frac{1}{2} M\dot{x}^2 + \frac{1}{2} m\left[\left(\dot{x} + \ell\dot{\theta}\cos\theta\right)^2 + \left(\ell\dot{\theta}\sin\theta\right)^2\right] + m g \ell\cos\theta = \frac{1}{2}(M+m)\dot{x}^2 + m\dot{x}\,\ell\dot{\theta}\cos\theta + \frac{1}{2} m\ell^2\dot{\theta}^2 + m g \ell\cos\theta.$

Now carrying out the differentiations gives for the supportcoordinate x

$\frac{d}{dt}\left[(M+m)\dot{x} + m\ell\dot{\theta}\cos\theta\right] = 0,$

therefore

$(M+m)\ddot{x} + m\ell\ddot{\theta}\cos\theta - m\ell\dot{\theta}^2\sin\theta = 0,$

indicating the presence of a constant of motion. Performing the same procedure for the variable θ yields:

$\frac{d}{dt}\left[m\left(\dot{x}\ell\cos\theta + \ell^2\dot{\theta}\right)\right] + m\ell\left(\dot{x}\dot{\theta} + g\right)\sin\theta = 0;$

therefore

$\ddot{\theta} + \frac{\ddot{x}}{\ell}\cos\theta + \frac{g}{\ell}\sin\theta = 0.$

These equations may look quite complicated, but finding them with Newton's laws would have required carefully identifying all forces, which would have been much more laborious and prone to error. By considering limit cases, the correctness of this system can be verified: for example, ẍ → 0 should give the equations of motion for a pendulum at rest in some inertial frame, while θ̈ → 0 should give the equations for a pendulum in a constantly accelerating system, etc. Furthermore, given suitable starting conditions and a chosen time step, it is straightforward to obtain the results numerically by stepping through the equations iteratively.
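That last remark can be illustrated numerically. The sketch below (parameter values are illustrative assumptions) solves the two coupled equations above for ẍ and θ̈ and steps them with a classical Runge–Kutta scheme; the conserved quantity (M + m)ẋ + mℓθ̇ cos θ from the x equation should stay constant:

```python
import math

# Hedged numerical sketch of the pendulum on a movable support.
M, m, l, g = 2.0, 1.0, 1.0, 9.81   # illustrative parameter values

def accelerations(x, xd, th, thd):
    """Solve the two coupled Euler-Lagrange equations for (xdd, thdd)."""
    s, c = math.sin(th), math.cos(th)
    xdd = m * s * (g * c + l * thd**2) / (M + m * s**2)
    thdd = -(xdd * c + g * s) / l
    return xdd, thdd

def rk4_step(state, dt):
    """One classical Runge-Kutta step for the state (x, xd, th, thd)."""
    def deriv(s):
        x, xd, th, thd = s
        xdd, thdd = accelerations(x, xd, th, thd)
        return (xd, xdd, thd, thdd)
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def momentum(state):
    """The constant of motion (M+m)*xdot + m*l*thdot*cos(theta)."""
    x, xd, th, thd = state
    return (M + m) * xd + m * l * thd * math.cos(th)

state = (0.0, 0.0, 0.5, 0.0)   # released from rest, pendulum displaced
p0 = momentum(state)
for _ in range(2000):
    state = rk4_step(state, 1e-3)
```

After 2 s of simulated time the total horizontal momentum matches its initial value to within the integrator's (very small) error, a quick consistency check on the derived equations.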

Two-body central force problem

The basic problem is that of two bodies in orbit about each other, attracted by a central force. The Jacobi coordinates are introduced: the location of the center of mass R and the separation of the bodies r (the relative position). The Lagrangian is then[11][12]

$L = T - U = \frac{1}{2} M\dot{R}^2 + \left(\frac{1}{2}\mu\dot{r}^2 - U(r)\right) = L_{\mathrm{cm}} + L_{\mathrm{rel}},$

where M is the total mass, μ is the reduced mass, and U the potential of the radial force. The Lagrangian is


divided into a center-of-mass term and a relative-motion term. The R equation from the Euler–Lagrange system is simply

$M\ddot{R} = 0,$

resulting in simple motion of the center of mass in a straight line at constant velocity. The relative motion is expressed in polar coordinates (r, θ):

$L = \frac{1}{2}\mu\left(\dot{r}^2 + r^2\dot{\theta}^2\right) - U(r),$

which does not depend upon θ; θ is therefore an ignorable (cyclic) coordinate. The Lagrange equation for θ is then

$\frac{\partial L}{\partial \dot{\theta}} = \mu r^2\dot{\theta} = \text{constant} = \ell,$

where ℓ is the conserved angular momentum. The Lagrange equation for r is

$\frac{\partial L}{\partial r} = \frac{d}{dt}\frac{\partial L}{\partial \dot{r}},$

or:

$\mu r\dot{\theta}^2 - \frac{dU}{dr} = \mu\ddot{r}.$

This equation is identical to the radial equation obtained using Newton's laws in a co-rotating reference frame, that is, a frame rotating with the reduced mass so it appears stationary. If the angular velocity is replaced by its value in terms of the angular momentum,

$\dot{\theta} = \frac{\ell}{\mu r^2},$

the radial equation becomes:[13]

$\mu\ddot{r} = -\frac{dU}{dr} + \frac{\ell^2}{\mu r^3},$

which is the equation of motion for a one-dimensional problem in which a particle of mass μ is subjected to the inward central force −dU/dr and a second, outward force, called in this context the centrifugal force:

$F_{\mathrm{cf}} = \mu r\dot{\theta}^2 = \frac{\ell^2}{\mu r^3}.$

Of course, if one remains entirely within the one-dimensional formulation, ℓ enters only as some imposed parameter of the external outward force, and its interpretation as angular momentum depends upon the more general two-dimensional problem from which the one-dimensional problem originated.

If one arrives at this equation using Newtonian mechanics in a co-rotating frame, the interpretation is evident as the centrifugal force in that frame due to the rotation of the frame itself. If one arrives at this equation directly by using the generalized coordinates (r, θ) and simply following the Lagrangian formulation without thinking about frames at all, the interpretation is that the centrifugal force is an outgrowth of using polar coordinates. As Hildebrand says:[14] "Since such quantities are not true physical forces, they are often called inertia forces. Their presence or absence depends, not upon the particular problem at hand, but upon the coordinate system chosen." In particular, if Cartesian coordinates are chosen, the centrifugal force disappears, and the formulation involves only the central force itself, which provides the centripetal force for curved motion.

This viewpoint, that fictitious forces originate in the choice of coordinates, often is expressed by users of the Lagrangian method. This view arises naturally in the Lagrangian approach, because the frame of reference is (possibly unconsciously) selected by the choice of coordinates.[15] Unfortunately, this usage of "inertial force" conflicts with the Newtonian idea of an inertial force. In the Newtonian view, an inertial force originates in the acceleration of the frame of observation (the fact that it is not an inertial frame of reference), not in the choice of coordinate system. To keep matters clear, it is safest to refer to the Lagrangian inertial forces as generalized inertial forces, to distinguish them from the Newtonian vector inertial forces. That is, one should avoid following Hildebrand when he says (p. 155) "we deal always with generalized forces, velocities, accelerations, and momenta. For brevity, the adjective 'generalized' will be omitted frequently."

It is known that the Lagrangian of a system is not unique. Within the Lagrangian formalism the Newtonian fictitious forces can be identified by the existence of alternative Lagrangians in which the fictitious forces disappear, sometimes found by exploiting the symmetry of the system.[16]
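The polar-coordinate reduction of the two-body problem can be reproduced symbolically; a sketch using the Kepler potential U(r) = −k/r as an illustrative assumption (it is not fixed by the text):

```python
import sympy as sp

# Hedged sketch of the two-body reduction in polar coordinates, with the
# illustrative Kepler potential U(r) = -k/r.
t, mu, k = sp.symbols('t mu k', positive=True)
r, th = sp.Function('r'), sp.Function('theta')
rd, thd = sp.diff(r(t), t), sp.diff(th(t), t)

L = mu * (rd**2 + r(t)**2 * thd**2) / 2 + k / r(t)

# θ is ignorable: its conjugate momentum ∂L/∂θ' = μ r^2 θ' is conserved
ell = sp.diff(L, thd)

# radial Lagrange equation ∂L/∂r - d/dt(∂L/∂r') = 0
eq_r = sp.diff(L, r(t)) - sp.diff(sp.diff(L, rd), t)
# eq_r collects to μ r θ'^2 - k/r^2 - μ r''
```

Differentiating with respect to θ̇ immediately exposes the conserved angular momentum, and the radial equation matches the one derived in the text with dU/dr = k/r².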

7.4 Extensions of Lagrangian mechanics

The Hamiltonian, denoted by H, is obtained by performing a Legendre transformation on the Lagrangian, which introduces new variables canonically conjugate to the original ones. This doubles the number of variables, but makes the differential equations first order. The Hamiltonian is the basis for an alternative formulation of classical mechanics known as Hamiltonian mechanics. It is a particularly ubiquitous quantity in quantum mechanics (see Hamiltonian (quantum mechanics)).

In 1948, Feynman discovered the path integral formulation, extending the principle of least action to quantum mechanics for electrons and photons. In this formulation, particles travel every possible path between the initial and final states; the probability of a specific final state is obtained by summing over all possible trajectories leading to it. In the classical regime, the path integral formulation cleanly reproduces Hamilton's principle, and Fermat's principle in optics.

Dissipation (i.e. non-conservative systems) can also be treated with an effective Lagrangian formulated by a certain doubling of the degrees of freedom; see [17][18][19][20].

7.5 See also

• Canonical coordinates

• Functional derivative

• Generalized coordinates

• Hamiltonian mechanics

• Hamiltonian optics

• Lagrangian analysis (applications of Lagrangian mechanics)

• Lagrangian point

• Non-autonomous mechanics

• Restricted three-body problem

7.6 References

[1] Goldstein, H. (2001). Classical Mechanics (3rd ed.). Addison-Wesley. p. 35.

[2] Goldstein, H. (2001). Classical Mechanics (3rd ed.). Addison-Wesley. p. 54.

[3] R. Dvorak, Florian Freistetter (2005). "§3.2 Lagrange equations of the first kind". Chaos and Stability in Planetary Systems. Birkhäuser. p. 24. ISBN 3-540-28208-4.

[4] H. Haken (2006). Information and Self-Organization (3rd ed.). Springer. p. 61. ISBN 3-540-33021-6.

[5] Cornelius Lanczos (1986). "II §5 Auxiliary conditions: the Lagrangian λ-method". The Variational Principles of Mechanics (reprint of University of Toronto 1970 4th ed.). Courier Dover. p. 43. ISBN 0-486-65067-7.

[6] Henry Zatzkis (1960). "§1.4 Lagrange equations of the second kind". In D. H. Menzel. Fundamental Formulas of Physics 1 (2nd ed.). Courier Dover. p. 160. ISBN 0-486-60595-7.

[7] Torby, Bruce (1984). "Energy Methods". Advanced Dynamics for Engineers. HRW Series in Mechanical Engineering. United States of America: CBS College Publishing. ISBN 0-03-063366-4.

[8] Hand, L. N.; Finch, J. D. (2008). Analytical Mechanics. Cambridge University Press. ISBN 978-0-521-57572-0.

[9] Torby 1984, p. 270.

[10] Penrose, Roger (2007). The Road to Reality. Vintage Books. ISBN 0-679-77631-1.

[11] John Robert Taylor (2005). Classical Mechanics. University Science Books. p. 297. ISBN 1-891389-22-X.

[12] The Lagrangian also can be written explicitly for a rotating frame. See Thanu Padmanabhan (2000). "§2.3.2 Motion in a rotating frame". Theoretical Astrophysics: Astrophysical Processes (3rd ed.). Cambridge University Press. p. 48. ISBN 0-521-56632-0.

[13] Louis N. Hand, Janet D. Finch (1998). Analytical Mechanics. Cambridge University Press. pp. 140–141. ISBN 0-521-57572-9.

[14] Francis Begnaud Hildebrand (1992). Methods of Applied Mathematics (reprint of Prentice-Hall 1965 2nd ed.). Courier Dover. p. 156. ISBN 0-486-67002-3.

[15] For example, see Michail Zak, Joseph P. Zbilut, Ronald E. Meyers (1997). From Instability to Intelligence. Springer. p. 202. ISBN 3-540-63055-4, for a comparison of Lagrangians in an inertial and in a noninertial frame of reference. See also the discussion of "total" and "updated" Lagrangian formulations in Ahmed A. Shabana (2008). Computational Continuum Mechanics. Cambridge University Press. pp. 118–119. ISBN 0-521-88569-8.

[16] Terry Gannon (2006). Moonshine Beyond the Monster: The Bridge Connecting Algebra, Modular Forms and Physics. Cambridge University Press. p. 267. ISBN 0-521-83531-3.

[17] B. P. Kosyakov, Introduction to the Classical Theory of Particles and Fields. Berlin: Springer (2007).

[18] "Classical Mechanics of Nonconservative Systems" by Chad Galley.

[19] "Radiation reaction at the level of the action" by Ofek Birnholtz, Shahar Hadar, and Barak Kol.

[20] "Theory of post-Newtonian radiation and reaction" by Ofek Birnholtz, Shahar Hadar, and Barak Kol.

7.7 Further reading

• Landau, L. D. and Lifshitz, E. M. Mechanics. Pergamon Press.

• Gupta, Kiran Chandra, Classical Mechanics of Particles and Rigid Bodies (Wiley, 1988).

• Goldstein, Herbert, Classical Mechanics. Addison-Wesley.


• Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013.

7.8 External links

• Tong, David, Classical Dynamics, Cambridge lecture notes

• Principle of least action interactive: an interactive explanation webpage

• Joseph Louis de Lagrange - Œuvres complètes(Gallica-Math)


Chapter 8

Hamiltonian mechanics

Hamiltonian mechanics is a theory developed as a reformulation of classical mechanics; it predicts the same outcomes as non-Hamiltonian classical mechanics. It uses a different mathematical formalism, providing a more abstract understanding of the theory. Historically, it was an important reformulation of classical mechanics, which later contributed to the formulation of quantum mechanics.

Hamiltonian mechanics was first formulated by William Rowan Hamilton in 1833, starting from Lagrangian mechanics, a previous reformulation of classical mechanics introduced by Joseph Louis Lagrange in 1788.

8.1 Overview

In Hamiltonian mechanics, a classical physical system is described by a set of canonical coordinates r = (q, p), where each component of the coordinates qi, pi is indexed to the frame of reference of the system.

The time evolution of the system is uniquely defined by Hamilton's equations:[1]

$\frac{dp_i}{dt} = -\frac{\partial H}{\partial q_i}, \qquad \frac{dq_i}{dt} = +\frac{\partial H}{\partial p_i},$

where H = H(q, p, t) is the Hamiltonian, which often corresponds to the total energy of the system.[2] For a closed system, it is the sum of the kinetic and potential energy of the system.

In classical mechanics, the time evolution is obtained by computing the total force exerted on each particle of the system and using Newton's second law to compute the time evolution of both position and velocity. In contrast, in Hamiltonian mechanics, the time evolution is obtained by computing the Hamiltonian of the system in the generalized coordinates and inserting it into Hamilton's equations. It is important to point out that this approach is equivalent to the one used in Lagrangian mechanics. In fact, as will be shown below, the Hamiltonian is the Legendre transform of the Lagrangian, and thus both approaches give the same equations for the same generalized momentum. The main motivation to use Hamiltonian mechanics instead of Lagrangian mechanics comes from the symplectic structure of Hamiltonian systems.

While Hamiltonian mechanics can be used to describe simple systems such as a bouncing ball, a pendulum or an oscillating spring, in which energy changes from kinetic to potential and back again over time, its strength is shown in more complex dynamic systems, such as planetary orbits in celestial mechanics.[3] Naturally, the more degrees of freedom the system has, the more complicated its time evolution is; in most cases it becomes chaotic.

8.1.1 Basic physical interpretation

A simple interpretation of Hamiltonian mechanics comes from its application to a one-dimensional system consisting of one particle of mass m with no external forces applied. The Hamiltonian represents the total energy of the system, which is the sum of kinetic and potential energy, traditionally denoted T and V, respectively. Here q is the coordinate and p = mv is the momentum. Then

$H = T + V, \qquad T = \frac{p^2}{2m}, \qquad V = V(q).$

Note that T is a function of p alone, while V is a function of q alone (i.e., T and V are scleronomic).

In this example, the time derivative of the momentum p equals the Newtonian force, so the first Hamilton equation means that the force equals the negative gradient of potential energy. The time derivative of q is the velocity, so the second Hamilton equation means that the particle's velocity equals the derivative of its kinetic energy with respect to its momentum.
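Hamilton's equations ṗ = −∂H/∂q, q̇ = ∂H/∂p can be stepped numerically; a sketch for a single particle with the illustrative choice V(q) = kq²/2 (all parameter values are assumptions), using a symplectic leapfrog step so that H stays nearly constant:

```python
import math

# Hedged numerical sketch of Hamilton's equations for one particle,
# with the illustrative potential V(q) = k q^2 / 2.
m, k = 1.0, 4.0

def dVdq(q):
    return k * q

def leapfrog(q, p, dt, steps):
    """Symplectic (leapfrog) integration of q' = dH/dp, p' = -dH/dq."""
    for _ in range(steps):
        p -= 0.5 * dt * dVdq(q)   # half kick: p' = -dH/dq
        q += dt * p / m           # drift:     q' =  dH/dp
        p -= 0.5 * dt * dVdq(q)   # half kick
    return q, p

def H(q, p):
    """Total energy H = T + V = p^2/(2m) + k q^2/2."""
    return p * p / (2 * m) + k * q * q / 2

q0, p0 = 1.0, 0.0
E0 = H(q0, p0)
q1, p1 = leapfrog(q0, p0, 1e-3, 10000)
```

The leapfrog scheme is chosen because it respects the symplectic structure mentioned above, so the energy error stays bounded instead of drifting.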

8.1.2 Calculating a Hamiltonian from a Lagrangian

Given a Lagrangian in terms of the generalized coordinates qi, the generalized velocities q̇i, and time:

1. The momenta are calculated by differentiating the Lagrangian with respect to the (generalized) velocities: $p_i(q_i, \dot{q}_i, t) = \frac{\partial L}{\partial \dot{q}_i}$.

2. The velocities q̇i are expressed in terms of the momenta pi by inverting the expressions in the previous step.

3. The Hamiltonian is calculated using the usual definition of H as the Legendre transformation of L: $H = \sum_i \dot{q}_i\frac{\partial L}{\partial \dot{q}_i} - L = \sum_i \dot{q}_i p_i - L$. Then the velocities are substituted in using the previous results.
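These three steps can be carried out symbolically; a sketch for the illustrative Lagrangian L = mq̇²/2 − kq²/2 of a harmonic oscillator (an assumed example, not one from the text):

```python
import sympy as sp

# Hedged sketch of the three steps, for the illustrative Lagrangian
# L = m*qdot**2/2 - k*q**2/2 of a harmonic oscillator.
m, k = sp.symbols('m k', positive=True)
q, qdot, p = sp.symbols('q qdot p')

L = m * qdot**2 / 2 - k * q**2 / 2

# Step 1: conjugate momentum p = ∂L/∂qdot
p_expr = sp.diff(L, qdot)                        # m*qdot

# Step 2: invert for the velocity in terms of the momentum
qdot_of_p = sp.solve(sp.Eq(p, p_expr), qdot)[0]  # p/m

# Step 3: Legendre transform H = p*qdot - L, with qdot eliminated
H = sp.simplify(p * qdot_of_p - L.subs(qdot, qdot_of_p))
# H collects to p**2/(2*m) + k*q**2/2, the expected total energy
```

The result is the familiar kinetic-plus-potential form of the Hamiltonian, as stated in the overview.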

8.2 Deriving Hamilton’s equations

Hamilton's equations can be derived by looking at how the total differential of the Lagrangian depends on time, the generalized positions qi and the generalized velocities q̇i:[4]

$dL = \sum_i\left(\frac{\partial L}{\partial q_i}\, dq_i + \frac{\partial L}{\partial \dot{q}_i}\, d\dot{q}_i\right) + \frac{\partial L}{\partial t}\, dt.$

Now the generalized momenta were defined as

\[ p_i = \frac{\partial L}{\partial \dot{q}_i} . \]

If this is substituted into the total differential of the Lagrangian, one gets

\[ dL = \sum_i \left( \frac{\partial L}{\partial q_i}\, dq_i + p_i\, d\dot{q}_i \right) + \frac{\partial L}{\partial t}\, dt . \]

We can rewrite this as

\[ dL = \sum_i \left( \frac{\partial L}{\partial q_i}\, dq_i + d(p_i \dot{q}_i) - \dot{q}_i\, dp_i \right) + \frac{\partial L}{\partial t}\, dt \]

and rearrange again to get

\[ d\left( \sum_i p_i \dot{q}_i - L \right) = \sum_i \left( -\frac{\partial L}{\partial q_i}\, dq_i + \dot{q}_i\, dp_i \right) - \frac{\partial L}{\partial t}\, dt . \]

The term on the left-hand side is just the Hamiltonian that we defined before, so we find that

\[ dH = \sum_i \left( -\frac{\partial L}{\partial q_i}\, dq_i + \dot{q}_i\, dp_i \right) - \frac{\partial L}{\partial t}\, dt . \]

We can also calculate the total differential of the Hamiltonian $H$ with respect to time directly, as we did with the Lagrangian $L$ above, yielding:

\[ dH = \sum_i \left( \frac{\partial H}{\partial q_i}\, dq_i + \frac{\partial H}{\partial p_i}\, dp_i \right) + \frac{\partial H}{\partial t}\, dt . \]

It follows from the previous two independent equations that their right-hand sides are equal. Thus we obtain the equation

\[ \sum_i \left( -\frac{\partial L}{\partial q_i}\, dq_i + \dot{q}_i\, dp_i \right) - \frac{\partial L}{\partial t}\, dt = \sum_i \left( \frac{\partial H}{\partial q_i}\, dq_i + \frac{\partial H}{\partial p_i}\, dp_i \right) + \frac{\partial H}{\partial t}\, dt . \]

Since this calculation was done off-shell, we can associate corresponding terms from both sides of this equation to yield:

\[ \frac{\partial H}{\partial q_i} = -\frac{\partial L}{\partial q_i} , \qquad \frac{\partial H}{\partial p_i} = \dot{q}_i , \qquad \frac{\partial H}{\partial t} = -\frac{\partial L}{\partial t} . \]

On-shell, Lagrange’s equations tell us that

\[ \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0 . \]

We can rearrange this to get

\[ \frac{\partial L}{\partial q_i} = \dot{p}_i . \]

Thus Hamilton’s equations hold on-shell:

\[ \frac{\partial H}{\partial q_j} = -\dot{p}_j , \qquad \frac{\partial H}{\partial p_j} = \dot{q}_j , \qquad \frac{\partial H}{\partial t} = -\frac{\partial L}{\partial t} . \]
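As a numerical illustration of using Hamilton's equations in practice, the sketch below integrates them with a leapfrog (kick-drift-kick) scheme, a standard symplectic integrator; the unit-mass harmonic oscillator H = (q² + p²)/2 and the step sizes are illustrative choices, not from the text:

```python
# Leapfrog integration of Hamilton's equations
#   dq/dt = dH/dp = p,   dp/dt = -dH/dq = -q
# for H = (q^2 + p^2)/2.  The scheme respects the symplectic structure,
# so the energy stays close to its initial value over long times.
def leapfrog(q, p, dt, steps):
    for _ in range(steps):
        p -= 0.5 * dt * q    # half kick: dp/dt = -q
        q += dt * p          # drift:     dq/dt = p
        p -= 0.5 * dt * q    # half kick
    return q, p

q, p = leapfrog(1.0, 0.0, dt=0.01, steps=10000)   # integrate to t = 100
energy_drift = abs(0.5 * (q * q + p * p) - 0.5)
print(energy_drift)   # small: energy is nearly conserved
```

A non-symplectic method such as forward Euler would show a steadily growing energy over the same interval; this bounded drift is one practical payoff of the first-order canonical form.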

8.3 As a reformulation of Lagrangian mechanics

Starting with Lagrangian mechanics, the equations of motion are based on generalized coordinates

\[ \{ q_j \mid j = 1, \ldots, N \} \]

and matching generalized velocities

\[ \{ \dot{q}_j \mid j = 1, \ldots, N \} . \]

We write the Lagrangian as

\[ L(q_j, \dot{q}_j, t) \]


with the subscripted variables understood to represent all N variables of that type. Hamiltonian mechanics aims to replace the generalized velocity variables with generalized momentum variables, also known as conjugate momenta. By doing so, it is possible to handle certain systems, such as aspects of quantum mechanics, that would otherwise be even more complicated. For each generalized velocity, there is one corresponding conjugate momentum, defined as:

\[ p_j = \frac{\partial L}{\partial \dot{q}_j} . \]

In Cartesian coordinates, the generalized momenta are precisely the physical linear momenta. In circular polar coordinates, the generalized momentum corresponding to the angular velocity is the physical angular momentum. For an arbitrary choice of generalized coordinates, it may not be possible to obtain an intuitive interpretation of the conjugate momenta. One thing which is not too obvious in this coordinate-dependent formulation is that different generalized coordinates are really nothing more than different coordinate patches on the same symplectic manifold (see Mathematical formalism, below). The Hamiltonian is the Legendre transform of the Lagrangian:

\[ H(q_j, p_j, t) = \sum_i \dot{q}_i p_i - L(q_j, \dot{q}_j, t) . \]

If the transformation equations defining the generalized coordinates are independent of t, and the Lagrangian is a sum of products of functions (in the generalized coordinates) which are homogeneous of order 0, 1 or 2, then it can be shown that H is equal to the total energy E = T + V. Each side in the definition of H produces a differential:

\[ dH = \sum_i \left[ \left(\frac{\partial H}{\partial q_i}\right) dq_i + \left(\frac{\partial H}{\partial p_i}\right) dp_i \right] + \left(\frac{\partial H}{\partial t}\right) dt = \sum_i \left[ \dot{q}_i\, dp_i + p_i\, d\dot{q}_i - \left(\frac{\partial L}{\partial q_i}\right) dq_i - \left(\frac{\partial L}{\partial \dot{q}_i}\right) d\dot{q}_i \right] - \left(\frac{\partial L}{\partial t}\right) dt . \]

Substituting the previous definition of the conjugate momenta into this equation and matching coefficients, we obtain the equations of motion of Hamiltonian mechanics, known as the canonical equations of Hamilton:

\[ \frac{\partial H}{\partial q_j} = -\dot{p}_j , \qquad \frac{\partial H}{\partial p_j} = \dot{q}_j , \qquad \frac{\partial H}{\partial t} = -\frac{\partial L}{\partial t} . \]

Hamilton's equations consist of 2n first-order differential equations, while Lagrange's equations consist of n second-order equations. However, Hamilton's equations usually do not reduce the difficulty of finding explicit solutions. They still offer some advantages, since important theoretical results can be derived because coordinates and momenta are independent variables with nearly symmetric roles. Hamilton's equations have another advantage over Lagrange's equations: if a system has a symmetry, such that a coordinate does not occur in the Hamiltonian, the corresponding momentum is conserved, and that coordinate can be ignored in the other equations of the set. Effectively, this reduces the problem from n coordinates to (n − 1) coordinates. In the Lagrangian framework, the result that the corresponding momentum is conserved of course still follows immediately, but all the generalized velocities still occur in the Lagrangian; we still have to solve a system of equations in n coordinates.[2]
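The conserved-momentum property of a coordinate absent from the Hamiltonian can be checked numerically. In the sketch below (an illustrative example; the Kepler-style potential V(r) = −1/r, unit mass and the initial data are assumptions of this sketch), the polar angle does not appear in the Hamiltonian of planar central-force motion, so the angular momentum x p_y − y p_x should stay constant along the flow:

```python
# Planar motion in a central potential V(r) = -1/r, integrated with a
# kick-drift-kick leapfrog step.  The polar angle is cyclic, so its
# conjugate momentum, the angular momentum L = x*py - y*px, is conserved.
def force(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3          # attraction toward the origin

def step(x, y, px, py, dt):
    fx, fy = force(x, y)
    px += 0.5 * dt * fx; py += 0.5 * dt * fy   # half kick
    x += dt * px; y += dt * py                  # drift (unit mass)
    fx, fy = force(x, y)
    px += 0.5 * dt * fx; py += 0.5 * dt * fy   # half kick
    return x, y, px, py

x, y, px, py = 1.0, 0.0, 0.0, 1.2   # a bound, slightly elliptical orbit
L0 = x * py - y * px
for _ in range(5000):
    x, y, px, py = step(x, y, px, py, dt=0.001)
print(abs(x * py - y * px - L0))   # ~ 0: the cyclic angle's momentum is conserved
```

Because each kick is along the radius, the angular momentum is preserved to round-off, mirroring the analytic statement that a cyclic coordinate's momentum is a constant of the motion.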

The Lagrangian and Hamiltonian approaches provide the groundwork for deeper results in the theory of classical mechanics, and for formulations of quantum mechanics.

8.4 Geometry of Hamiltonian systems

A Hamiltonian system may be understood as a fiber bundle E over time R, with the fibers Et, t ∈ R, being the position space. The Lagrangian is thus a function on the jet bundle J over E; taking the fiberwise Legendre transform of the Lagrangian produces a function on the dual bundle over time whose fiber at t is the cotangent space T*Et, which comes equipped with a natural symplectic form, and this latter function is the Hamiltonian.

8.5 Generalization to quantum mechanics through Poisson bracket

Hamilton's equations above work well for classical mechanics, but not for quantum mechanics, since the differential equations discussed assume that one can specify the exact position and momentum of the particle simultaneously at any point in time. However, the equations can be further generalized to then be extended to apply to quantum mechanics as well as to classical mechanics, through the deformation of the Poisson algebra over p and q to the algebra of Moyal brackets. Specifically, the more general form of Hamilton's equation reads


\[ \frac{df}{dt} = \{ f, H \} + \frac{\partial f}{\partial t} , \]

where f is some function of p and q, and H is the Hamiltonian. To find out the rules for evaluating a Poisson bracket without resorting to differential equations, see Lie algebra; a Poisson bracket is the name for the Lie bracket in a Poisson algebra. These Poisson brackets can then be extended to Moyal brackets comporting to an inequivalent Lie algebra, as proven by H. Groenewold, and thereby describe quantum mechanical diffusion in phase space (see the phase space formulation and Weyl quantization). This more algebraic approach not only permits ultimately extending probability distributions in phase space to Wigner quasi-probability distributions but, at the mere Poisson-bracket classical setting, also provides more power in helping analyze the relevant conserved quantities in a system.

8.6 Mathematical formalism

Any smooth real-valued function H on a symplectic manifold can be used to define a Hamiltonian system. The function H is known as the Hamiltonian or the energy function. The symplectic manifold is then called the phase space. The Hamiltonian induces a special vector field on the symplectic manifold, known as the Hamiltonian vector field.

The Hamiltonian vector field (a special type of symplectic vector field) induces a Hamiltonian flow on the manifold. This is a one-parameter family of transformations of the manifold (the parameter of the curves is commonly called the time); in other words, an isotopy of symplectomorphisms, starting with the identity. By Liouville's theorem, each symplectomorphism preserves the volume form on the phase space. The collection of symplectomorphisms induced by the Hamiltonian flow is commonly called the Hamiltonian mechanics of the Hamiltonian system.

The symplectic structure induces a Poisson bracket. The Poisson bracket gives the space of functions on the manifold the structure of a Lie algebra. Given a function f,

\[ \frac{d}{dt} f = \frac{\partial}{\partial t} f + \{ f, H \} . \]
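The evolution law df/dt = ∂f/∂t + {f, H} can be verified numerically in a simple case. The sketch below (illustrative; the choice f = qp and the unit harmonic oscillator H = (q² + p²)/2, whose exact flow is a rotation of phase space, are assumptions of this sketch) compares a finite-difference time derivative of f along the flow with the analytically computed bracket:

```python
import math

# Check df/dt = {f, H} for f(q, p) = q*p along the flow of
# H = (q^2 + p^2)/2; the exact flow is a rotation of the (q, p) plane.
def flow(q0, p0, t):
    c, s = math.cos(t), math.sin(t)
    return q0 * c + p0 * s, p0 * c - q0 * s

def f(q, p):
    return q * p

def poisson_bracket_f_H(q, p):
    # {f, H} = (df/dq)(dH/dp) - (df/dp)(dH/dq) = p*p - q*q
    return p * p - q * q

q0, p0, t, h = 0.7, -0.4, 1.3, 1e-6
q, p = flow(q0, p0, t)
df_dt = (f(*flow(q0, p0, t + h)) - f(*flow(q0, p0, t - h))) / (2 * h)
print(abs(df_dt - poisson_bracket_f_H(q, p)))   # ~ 0
```

The centred finite difference agrees with the bracket to numerical precision, which is exactly the content of the evolution equation for a time-independent f.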

If we have a probability distribution ρ, then (since the phase-space velocity $(\dot{p}_i, \dot{q}_i)$ has zero divergence, and probability is conserved) its convective derivative can be shown to be zero, and so

\[ \frac{\partial}{\partial t} \rho = - \{ \rho, H \} . \]

This is called Liouville's theorem. Every smooth function G over the symplectic manifold generates a one-parameter family of symplectomorphisms, and if {G, H} = 0, then G is conserved and the symplectomorphisms are symmetry transformations. A Hamiltonian may have multiple conserved quantities Gi. If the symplectic manifold has dimension 2n and there are n functionally independent conserved quantities Gi which are in involution (i.e., {Gi, Gj} = 0), then the Hamiltonian is Liouville integrable. The Liouville–Arnold theorem says that, locally, any Liouville-integrable Hamiltonian can be transformed via a symplectomorphism into a new Hamiltonian with the conserved quantities Gi as coordinates; the new coordinates are called action-angle coordinates. The transformed Hamiltonian depends only on the Gi, and hence the equations of motion have the simple form

\[ \dot{G}_i = 0 , \qquad \dot{\varphi}_i = F(G) , \]

for some function F (Arnol'd et al., 1988). There is an entire field focusing on small deviations from integrable systems, governed by the KAM theorem. The integrability of Hamiltonian vector fields is an open question. In general, Hamiltonian systems are chaotic; concepts of measure, completeness, integrability and stability are poorly defined. At this time, the study of dynamical systems is primarily qualitative, and not a quantitative science.

8.7 Riemannian manifolds

An important special case consists of those Hamiltonians that are quadratic forms, that is, Hamiltonians that can be written as

\[ H(q, p) = \frac{1}{2} \langle p, p \rangle_q , \]

where $\langle \cdot, \cdot \rangle_q$ is a smoothly varying inner product on the fibers $T^*_q Q$, the cotangent space to the point q in the configuration space, sometimes called a cometric. This Hamiltonian consists entirely of the kinetic term.

If one considers a Riemannian manifold or a pseudo-Riemannian manifold, the Riemannian metric induces a linear isomorphism between the tangent and cotangent bundles (see Musical isomorphism). Using this isomorphism, one can define a cometric. (In coordinates, the matrix defining the cometric is the inverse of the matrix defining the metric.) The solutions to the Hamilton–Jacobi equations for this Hamiltonian are then the same as the geodesics on the manifold. In particular, the Hamiltonian flow in this case is the same thing as the geodesic flow. The existence of such solutions, and the completeness of the set of solutions, are discussed in detail in the article on geodesics. See also Geodesics as Hamiltonian flows.

8.8 Sub-Riemannian manifolds

When the cometric is degenerate, it is not invertible. In this case, one does not have a Riemannian manifold, as one does not have a metric. However, the Hamiltonian still exists. In the case where the cometric is degenerate at every point q of the configuration-space manifold Q, so that the rank of the cometric is less than the dimension of the manifold Q, one has a sub-Riemannian manifold. The Hamiltonian in this case is known as a sub-Riemannian Hamiltonian. Every such Hamiltonian uniquely determines the cometric, and vice versa. This implies that every sub-Riemannian manifold is uniquely determined by its sub-Riemannian Hamiltonian, and that the converse is true: every sub-Riemannian manifold has a unique sub-Riemannian Hamiltonian. The existence of sub-Riemannian geodesics is given by the Chow–Rashevskii theorem. The continuous, real-valued Heisenberg group provides a simple example of a sub-Riemannian manifold. For the Heisenberg group, the Hamiltonian is given by

\[ H(x, y, z, p_x, p_y, p_z) = \frac{1}{2} \left( p_x^2 + p_y^2 \right) . \]

pz is not involved in the Hamiltonian.

8.9 Poisson algebras

Hamiltonian systems can be generalized in various ways. Instead of simply looking at the algebra of smooth functions over a symplectic manifold, Hamiltonian mechanics can be formulated on general commutative unital real Poisson algebras. A state is a continuous linear functional on the Poisson algebra (equipped with some suitable topology) such that for any element A of the algebra, A² maps to a nonnegative real number. A further generalization is given by Nambu dynamics.

8.10 Charged particle in an electromagnetic field

A good illustration of Hamiltonian mechanics is given by the Hamiltonian of a charged particle in an electromagnetic field. In Cartesian coordinates (i.e., qi = xi), the Lagrangian of a non-relativistic classical particle in an electromagnetic field is (in SI units):

\[ L = \sum_i \tfrac{1}{2} m \dot{x}_i^2 + \sum_i e \dot{x}_i A_i - e \phi , \]

where e is the electric charge of the particle (not necessarily the elementary charge), ϕ is the electric scalar potential, and the Ai are the components of the magnetic vector potential (these may be modified through a gauge transformation). This is called minimal coupling. The generalized momenta are given by:

\[ p_i = \frac{\partial L}{\partial \dot{x}_i} = m \dot{x}_i + e A_i . \]

Rearranging, the velocities are expressed in terms of the momenta:

\[ \dot{x}_i = \frac{p_i - e A_i}{m} . \]

If we substitute the definition of the momenta, and the definitions of the velocities in terms of the momenta, into the definition of the Hamiltonian given above, and then simplify and rearrange, we get:

\[ H = \sum_i \dot{x}_i p_i - L = \sum_i \frac{(p_i - e A_i)^2}{2m} + e \phi . \]

This equation is used frequently in quantum mechanics.
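A short numerical sketch of this Hamiltonian: for a uniform magnetic field B = B e_z we may pick the Landau-gauge potential A = (0, Bx, 0) with φ = 0 (an illustrative gauge choice, not specified in the text), so that H = (p_x² + (p_y − eBx)²)/(2m); integrating Hamilton's equations should then conserve H along the cyclotron orbit:

```python
# Charged particle in a uniform magnetic field, Landau gauge A = (0, B*x, 0).
# H = (px^2 + (py - e*B*x)^2) / (2m); y is cyclic, so py is constant,
# and H itself is conserved along the motion.
m, e, B = 1.0, 1.0, 1.0

def H(x, px, py):
    return (px ** 2 + (py - e * B * x) ** 2) / (2 * m)

def rhs(x, y, px, py):
    vy = (py - e * B * x) / m
    # xdot = dH/dpx, ydot = dH/dpy, pxdot = -dH/dx, pydot = -dH/dy = 0
    return px / m, vy, e * B * vy, 0.0

def rk4(state, dt):
    def shifted(s, k, c):
        return [si + c * ki for si, ki in zip(s, k)]
    k1 = rhs(*state)
    k2 = rhs(*shifted(state, k1, dt / 2))
    k3 = rhs(*shifted(state, k2, dt / 2))
    k4 = rhs(*shifted(state, k3, dt))
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

state = [0.0, 0.0, 1.0, 0.0]            # x, y, px, py: a cyclotron orbit
H0 = H(state[0], state[2], state[3])
for _ in range(2000):
    state = rk4(state, 0.005)
print(abs(H(state[0], state[2], state[3]) - H0))  # ~ 0: H is conserved
```

Note that the conserved canonical momentum p_y differs from the kinetic momentum m·ẏ by the gauge term eA_y, which is the main subtlety of minimal coupling.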

8.11 Relativistic charged particle in an electromagnetic field

The Lagrangian for a relativistic charged particle is given by:

\[ L(t) = -mc^2 \sqrt{1 - \frac{\dot{\vec{x}}(t)^2}{c^2}} - e \phi(\vec{x}(t), t) + e\, \dot{\vec{x}}(t) \cdot \vec{A}(\vec{x}(t), t) . \]

Thus the particle’s canonical (total) momentum is

\[ \vec{P}(t) = \frac{\partial L(t)}{\partial \dot{\vec{x}}(t)} = \frac{m \dot{\vec{x}}(t)}{\sqrt{1 - \frac{\dot{\vec{x}}(t)^2}{c^2}}} + e \vec{A}(\vec{x}(t), t) , \]

that is, the sum of the kinetic momentum and the potential momentum. Solving for the velocity, we get

\[ \dot{\vec{x}}(t) = \frac{\vec{P}(t) - e \vec{A}(\vec{x}(t), t)}{\sqrt{m^2 + \frac{1}{c^2} \left( \vec{P}(t) - e \vec{A}(\vec{x}(t), t) \right)^2}} . \]


So the Hamiltonian is

\[ H(t) = \dot{\vec{x}}(t) \cdot \vec{P}(t) - L(t) = c \sqrt{m^2 c^2 + \left( \vec{P}(t) - e \vec{A}(\vec{x}(t), t) \right)^2} + e \phi(\vec{x}(t), t) . \]

From this we get the force equation (equivalent to the Euler–Lagrange equation)

\[ \dot{\vec{P}} = -\frac{\partial H}{\partial \vec{x}} = e (\nabla \vec{A}) \cdot \dot{\vec{x}} - e \nabla \phi , \]

from which one can derive

\[ \frac{d}{dt} \left( \frac{m \dot{\vec{x}}}{\sqrt{1 - \frac{\dot{\vec{x}}^2}{c^2}}} \right) = e \vec{E} + e\, \dot{\vec{x}} \times \vec{B} . \]

An equivalent expression for the Hamiltonian, as a function of the relativistic (kinetic) momentum $\vec{p} = \gamma m \dot{\vec{x}}(t)$, is

\[ H(t) = \dot{\vec{x}}(t) \cdot \vec{p}(t) + \frac{mc^2}{\gamma} + e \phi(\vec{x}(t), t) = \gamma mc^2 + e \phi(\vec{x}(t), t) = E + V . \]

This has the advantage that the kinetic momentum $\vec{p}$ can be measured experimentally, whereas the canonical momentum $\vec{P}$ cannot. Notice that the Hamiltonian (total energy) can be viewed as the sum of the relativistic energy (kinetic + rest), $E = \gamma mc^2$, plus the potential energy, $V = e\phi$.
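The equivalence of the two forms of the Hamiltonian can be checked with a quick computation. The numbers below are arbitrary illustrative values (one spatial dimension, units with an arbitrary c):

```python
import math

# Consistency check: with kinetic momentum p = gamma*m*v and canonical
# momentum P = p + e*A, the square-root form c*sqrt(m^2 c^2 + (P - e*A)^2)
# should equal gamma*m*c^2, so both Hamiltonian expressions agree.
m, c, e, A, phi = 2.0, 3.0, 1.5, 0.4, 0.7
v = 0.8 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

p = gamma * m * v            # kinetic momentum
P = p + e * A                # canonical momentum

H_canonical = c * math.sqrt(m ** 2 * c ** 2 + (P - e * A) ** 2) + e * phi
H_kinetic = gamma * m * c ** 2 + e * phi
print(abs(H_canonical - H_kinetic))   # ~ 0
```

Since P − eA is exactly the kinetic momentum γmv, the identity c²(1 + γ²v²/c²... ) reduces algebraically to (γmc²)², which is what the computation confirms numerically.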

8.12 See also

• Canonical transformation

• Classical field theory

• Covariant Hamiltonian field theory

• Classical mechanics

• Dynamical systems theory

• Hamilton–Jacobi equation

• Hamilton–Jacobi–Einstein equation

• Lagrangian mechanics

• Maxwell’s equations

• Hamiltonian (quantum mechanics)

• Quantum Hamilton’s equations

• Quantum field theory

• Hamiltonian optics

• De Donder–Weyl theory

• Geometric Mechanics

8.13 References

8.13.1 Footnotes

[1] Analytical Mechanics, L.N. Hand, J.D. Finch, Cambridge University Press, 2008, ISBN 978-0-521-57572-0

[2] Goldstein, Herbert; Poole, Charles P., Jr.; Safko, John L. (2002), Classical Mechanics (3rd ed.), San Francisco, CA: Addison Wesley, pp. 347–349, ISBN 0-201-65702-3

[3] “16.3 The Hamiltonian”, MIT OpenCourseWare website 18.013A, retrieved February 2007

[4] This derivation is along the lines given in Arnol'd 1989, pp. 65–66

8.13.2 Sources

• Arnol'd, V. I. (1989), Mathematical Methods of Classical Mechanics, Springer-Verlag, ISBN 0-387-96890-3

• Abraham, R.; Marsden, J.E. (1978), Foundations of Mechanics, London: Benjamin-Cummings, ISBN 0-8053-0102-X

• Arnol'd, V. I.; Kozlov, V. V.; Neĩshtadt, A. I. (1988), “Mathematical aspects of classical and celestial mechanics”, Encyclopaedia of Mathematical Sciences, Dynamical Systems III 3, Springer-Verlag

• Vinogradov, A. M.; Kupershmidt, B. A. (1981), The structure of Hamiltonian mechanics (DjVu), London Math. Soc. Lect. Notes Ser. 60, London: Cambridge Univ. Press

8.14 External links

• Binney, James J., Classical Mechanics (lecture notes), University of Oxford, retrieved 27 October 2010

• Tong, David, Classical Dynamics (Cambridge lecture notes), University of Cambridge, retrieved 27 October 2010

• Hamilton, William Rowan, On a General Method in Dynamics, Trinity College Dublin


Chapter 9

Classical mechanics

Diagram of orbital motion of a satellite around the Earth, showing perpendicular velocity and acceleration (force) vectors.

In physics, classical mechanics and quantum mechanics are the two major sub-fields of mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the action of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. It is also widely known as Newtonian mechanics.

Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. Besides this, many specializations within the subject deal with solids, liquids and gases and other specific sub-topics. Classical mechanics also provides extremely accurate results as long as the domain of study is restricted to large objects and the speeds involved do not approach the speed of light. When the objects being dealt with become sufficiently small, it becomes necessary to introduce the other major sub-field of mechanics, quantum mechanics, which reconciles the macroscopic laws of physics with the atomic nature of matter and handles the wave–particle duality of atoms and molecules. However, when both quantum mechanics and classical mechanics cannot apply, such as at the quantum level with many degrees of freedom, quantum field theory (QFT) becomes applicable. QFT deals with small distances and large speeds with many degrees of freedom, as well as the possibility of any change in the number of particles throughout the interaction. To deal with large degrees of freedom at the macroscopic level, statistical mechanics becomes valid. Statistical mechanics explores the large number of particles and their interactions as a whole in everyday life. Statistical mechanics is mainly used in thermodynamics. In the case of high-velocity objects approaching the speed of light, classical mechanics is enhanced by special relativity. General relativity unifies special relativity with Newton's law of universal gravitation, allowing physicists to handle gravitation at a deeper level.

The term classical mechanics was coined in the early 20th century to describe the system of physics begun by Isaac Newton and many contemporary 17th-century natural philosophers, building upon the earlier astronomical theories of Johannes Kepler, which in turn were based on the precise observations of Tycho Brahe and the studies of terrestrial projectile motion of Galileo. Since these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude Einstein's theory of relativity from this category. However, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and most accurate form.[note 1]

The initial stage in the development of classical mechanics is often referred to as Newtonian mechanics, and is associated with the physical concepts employed by, and the mathematical methods invented by, Newton himself, in parallel with Leibniz, and others. This is further described in the following sections. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. Ultimately, the mathematics developed for these was central to the creation of quantum mechanics.


9.1 History

Main article: History of classical mechanics
See also: Timeline of classical mechanics

Some Greek philosophers of antiquity, among them Aristotle, founder of Aristotelian physics, may have been the first to maintain the idea that “everything happens for a reason” and that theoretical principles can assist in the understanding of nature. While, to a modern reader, many of these preserved ideas come forth as eminently reasonable, there is a conspicuous lack of both mathematical theory and controlled experiment as we know it. These both turned out to be decisive factors in forming modern science, and they started out with classical mechanics. In his Elementa super demonstrationem ponderum, the medieval mathematician Jordanus de Nemore introduced the concept of “positional gravity” and the use of component forces.

Three-stage theory of impetus according to Albert of Saxony.

The first published causal explanation of the motions of planets was Johannes Kepler's Astronomia nova, published in 1609. He concluded, based on Tycho Brahe's observations of the orbit of Mars, that the orbits were ellipses. This break with ancient thought was happening around the same time that Galileo was proposing abstract mathematical laws for the motion of objects. He may (or may not) have performed the famous experiment of dropping two cannonballs of different weights from the tower of Pisa, showing that they both hit the ground at the same time. The reality of this experiment is disputed but, more importantly, he did carry out quantitative experiments by rolling balls on an inclined plane. His theory of accelerated motion derived from the results of such experiments, and forms a cornerstone of classical mechanics.

Sir Isaac Newton (1643–1727), an influential figure in the history of physics, whose three laws of motion form the basis of classical mechanics

As foundation for his principles of natural philosophy, Isaac Newton proposed three laws of motion: the law of inertia, his second law of acceleration (mentioned above), and the law of action and reaction; and hence laid the foundations for classical mechanics. Both Newton's second and third laws were given proper scientific and mathematical treatment in Newton's Philosophiæ Naturalis Principia Mathematica, which distinguishes them from earlier attempts at explaining similar phenomena, which were either incomplete, incorrect, or given little accurate mathematical expression. Newton also enunciated the principles of conservation of momentum and angular momentum. In mechanics, Newton was also the first to provide the first correct scientific and mathematical formulation of gravity, in Newton's law of universal gravitation. The combination of Newton's laws of motion and gravitation provides the fullest and most accurate description of classical mechanics. He demonstrated that these laws apply to everyday objects as well as to celestial objects. In particular, he obtained a theoretical explanation of Kepler's laws of motion of the planets.

Newton had previously invented the calculus and used it to perform the mathematical calculations. For acceptability, his book, the Principia, was formulated entirely in terms of the long-established geometric methods, which were soon eclipsed by his calculus. However, it was Leibniz who developed the notation of the derivative and integral preferred[1] today. Newton, and most of his contemporaries, with the notable exception of Huygens, worked on the assumption that classical mechanics would be able to explain all phenomena, including light, in the form of geometric optics. Even when discovering the so-called Newton's rings (a wave interference phenomenon), his explanation remained with his own corpuscular theory of light.

Hamilton's greatest contribution is perhaps the reformulation of Newtonian mechanics, now called Hamiltonian mechanics.

After Newton, classical mechanics became a principal field of study in mathematics as well as physics. Several re-formulations progressively allowed finding solutions to a far greater number of problems. The first notable re-formulation was in 1788 by Joseph Louis Lagrange. Lagrangian mechanics was in turn re-formulated in 1833 by William Rowan Hamilton.

Some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. Some of these difficulties related to compatibility with electromagnetic theory, and the famous Michelson–Morley experiment. The resolution of these problems led to the special theory of relativity, often included in the term classical mechanics.

A second set of difficulties was related to thermodynamics. When combined with thermodynamics, classical mechanics leads to the Gibbs paradox of classical statistical mechanics, in which entropy is not a well-defined quantity. Black-body radiation was not explained without the introduction of quanta. As experiments reached the atomic level, classical mechanics failed to explain, even approximately, such basic things as the energy levels and sizes of atoms and the photoelectric effect. The effort at resolving these problems led to the development of quantum mechanics.

Since the end of the 20th century, the place of classical mechanics in physics has been no longer that of an independent theory. Instead, classical mechanics is now considered an approximate theory to the more general quantum mechanics. Emphasis has shifted to understanding the fundamental forces of nature, as in the Standard Model and its more modern extensions into a unified theory of everything.[2] Classical mechanics is a theory for the study of the motion of non-quantum mechanical, low-energy particles in weak gravitational fields. In the 21st century classical mechanics has been extended into the complex domain, and complex classical mechanics exhibits behaviors very similar to quantum mechanics.[3]

9.2 Description of the theory

The analysis of projectile motion is a part of classical mechanics.

The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles, objects with negligible size. The motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn.

In reality, the kind of objects that classical mechanics can describe always have a non-zero size. (The physics of very small particles, such as the electron, is more accurately described by quantum mechanics.) Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom: a baseball can spin while it is moving, for example. However, the results for point particles can be used to study such objects by treating them as composite objects, made up of a large number of interacting point particles. The center of mass of a composite object behaves like a point particle.

Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that matter and energy have definite, knowable attributes such as where an object is in space and its speed. It also assumes that objects may be directly influenced only by their immediate surroundings, known as the principle of locality. In quantum mechanics, an object may have either its position or velocity undetermined.

9.2.1 Position and its derivatives

Main article: Kinematics

The position of a point particle is defined with respect to an arbitrary fixed reference point, O, in space, usually accompanied by a coordinate system, with the reference point located at the origin of the coordinate system. It is defined as the vector r from O to the particle. In general, the point particle need not be stationary relative to O, so r is a function of t, the time elapsed since an arbitrary initial time. In pre-Einstein relativity (known as Galilean relativity), time is considered an absolute, i.e., the time interval between any given pair of events is the same for all observers.[4] In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space.[5]

Velocity and speed

Main articles: Velocity and speed

The velocity, or the rate of change of position with time, is defined as the derivative of the position with respect to time:

\[ \vec{v} = \frac{d\vec{r}}{dt} . \]

In classical mechanics, velocities are directly additive and subtractive. For example, if one car traveling east at 60 km/h passes another car traveling east at 50 km/h, then from the perspective of the slower car, the faster car is traveling east at 60 − 50 = 10 km/h. Whereas, from the perspective of the faster car, the slower car is moving 10 km/h to the west. Velocities are directly additive as vector quantities; they must be dealt with using vector analysis. Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector $\vec{u} = u\vec{d}$ and the velocity of the second object by the vector $\vec{v} = v\vec{e}$, where u is the speed of the first object, v is the speed of the second object, and $\vec{d}$ and $\vec{e}$ are unit vectors in the directions of motion of each particle respectively, then the velocity of the first object as seen by the second object is

\[ \vec{u}' = \vec{u} - \vec{v} . \]

Similarly,

\[ \vec{v}' = \vec{v} - \vec{u} . \]

When both objects are moving in the same direction, this equation can be simplified to

\[ \vec{u}' = (u - v)\,\vec{d} . \]

Or, by ignoring direction, the difference can be given in terms of speed only:

\[ u' = u - v . \]
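The relative-velocity rule is a plain vector subtraction; here is a minimal sketch using the two-car example from the text:

```python
# Relative velocity in classical mechanics: the velocity of object 1
# as seen from object 2 is u' = u - v (componentwise vector subtraction).
def relative_velocity(u, v):
    return tuple(ui - vi for ui, vi in zip(u, v))

u = (60.0, 0.0)   # car heading east at 60 km/h (x axis points east)
v = (50.0, 0.0)   # car heading east at 50 km/h
print(relative_velocity(u, v))   # (10.0, 0.0): 10 km/h eastward
print(relative_velocity(v, u))   # (-10.0, 0.0): 10 km/h westward
```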

Acceleration

Main article: Acceleration

The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time (the second derivative of the position with respect to time):

\[ \vec{a} = \frac{d\vec{v}}{dt} = \frac{d^2\vec{r}}{dt^2} . \]

Acceleration represents the velocity's change over time: either of the velocity's magnitude or direction, or both. If only the magnitude v of the velocity decreases, this is sometimes referred to as deceleration, but generally any change in the velocity with time, including deceleration, is simply referred to as acceleration.

Frames of reference

Main articles: Inertial frame of reference and Galileantransformation

While the position, velocity and acceleration of a particle can be referred to any observer in any state of motion, classical mechanics assumes the existence of a special family of reference frames in terms of which the mechanical laws of nature take a comparatively simple form. These special reference frames are called inertial frames. An inertial frame is such that, when an object without any force interactions (an idealized situation) is viewed from it, it appears either to be at rest or in a state of uniform motion in a straight line. This is the fundamental definition of an inertial frame. They are characterized by the requirement that all forces entering the observer's physical laws originate in identifiable sources (charges, gravitational bodies, and so forth). A non-inertial reference frame is one accelerating with respect to an inertial one, and in such a non-inertial frame a particle is subject to acceleration by fictitious forces that enter the equations of motion solely as a result of its accelerated motion, and do not originate in identifiable sources. These fictitious forces are in addition to the real forces recognized in an inertial frame. A key concept of inertial frames is the method for identifying them. For practical purposes, reference frames that are unaccelerated with respect to the distant stars (an extremely distant point) are regarded as good approximations to inertial frames.

Consider two reference frames S and S′. For observers in each of the reference frames, an event has space-time coordinates (x, y, z, t) in frame S and (x′, y′, z′, t′) in frame S′. Assuming time is measured the same in all reference frames, and if we require x = x′ when t = 0, then the relation between the space-time coordinates of the same event observed from the reference frames S′ and S, which are moving at a relative velocity u in the x direction, is:

x′ = x − ut
y′ = y
z′ = z
t′ = t.

This set of formulas defines a group transformation known as the Galilean transformation (informally, the Galilean transform). This group is a limiting case of the Poincaré group used in special relativity. The limiting case applies when the velocity u is very small compared to c, the speed of light.

The transformations have the following consequences:

• v′ = v − u (the velocity v′ of a particle from the perspective of S′ is slower by u than its velocity v from the perspective of S)

• a′ = a (the acceleration of a particle is the same in any inertial reference frame)

• F′ = F (the force on a particle is the same in any inertial reference frame)

• the speed of light is not a constant in classical mechanics, nor does the special position given to the speed of light in relativistic mechanics have a counterpart in classical mechanics.

For some problems, it is convenient to use rotating coordinates (reference frames). Thereby one can either keep a mapping to a convenient inertial frame, or introduce additionally a fictitious centrifugal force and Coriolis force.
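The Galilean transformation above is straightforward to express in code. The following sketch (the function names are our own, chosen for illustration) maps an event's coordinates and a particle's velocity from frame S into a frame S′ moving at speed u along x:

```python
def galilean_event(x, y, z, t, u):
    """Transform event coordinates from frame S to frame S',
    where S' moves at velocity u along the x axis (and t' = t)."""
    return (x - u * t, y, z, t)

def galilean_velocity(v, u):
    """Velocity measured in S' is the velocity in S minus u (v' = v - u)."""
    return v - u

# An event at x = 10 m, t = 3 s, seen from a frame moving at u = 2 m/s:
xp, yp, zp, tp = galilean_event(10.0, 0.0, 0.0, 3.0, u=2.0)
# A particle moving at 5 m/s in S appears to move at 3 m/s in S':
vp = galilean_velocity(5.0, u=2.0)
```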

9.2.2 Forces; Newton’s second law

Main articles: Force and Newton’s laws of motion

Newton was the first to mathematically express the relationship between force and momentum. Some physicists interpret Newton's second law of motion as a definition of force and mass, while others consider it a fundamental postulate, a law of nature. Either interpretation has the same mathematical consequences, historically known as “Newton's Second Law":

F = dp/dt = d(mv)/dt.

The quantity mv is called the (canonical) momentum. The net force on a particle is thus equal to the rate of change of the momentum of the particle with time. Since the definition of acceleration is a = dv/dt, the second law can be written in the simplified and more familiar form:

F = ma.

So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion.

As an example, assume that friction is the only force acting on the particle, and that it may be modeled as a function of the velocity of the particle, for example:

FR = −λv,

where λ is a positive constant. Then the equation of motion is

−λv = ma = m dv/dt.

This can be integrated to obtain

v = v0 e^(−λt/m)

where v0 is the initial velocity. This means that the velocity of this particle decays exponentially to zero as time progresses. In this case, an equivalent viewpoint is that the kinetic energy of the particle is absorbed by friction (which converts it to heat energy in accordance with the conservation of energy), and the particle is slowing down. This expression can be further integrated to obtain the position r of the particle as a function of time.

Important forces include the gravitational force and the Lorentz force for electromagnetism. In addition, Newton's third law can sometimes be used to deduce the forces acting on a particle: if it is known that particle A exerts a force F on another particle B, it follows that B must exert an equal and opposite reaction force, −F, on A. The strong form of Newton's third law requires that F and −F act along the line connecting A and B, while the weak form does not. Illustrations of the weak form of Newton's third law are often found for magnetic forces.
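The exponential decay v = v0 e^(−λt/m) can be checked numerically by integrating the equation of motion m dv/dt = −λv directly. A simple forward-Euler sketch (the step size and parameter values are arbitrary choices for illustration):

```python
import math

def integrate_friction(v0, lam, m, t_end, dt=1e-4):
    """Forward-Euler integration of m dv/dt = -lam * v."""
    v, t = v0, 0.0
    while t < t_end:
        v += dt * (-lam * v / m)   # a = F/m = -lam*v/m
        t += dt
    return v

v_numeric = integrate_friction(v0=10.0, lam=0.5, m=1.0, t_end=2.0)
v_exact = 10.0 * math.exp(-0.5 * 2.0 / 1.0)   # v0 * exp(-lam*t/m)
```

With a small enough step the numerical solution tracks the analytic exponential closely, confirming the integration above.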


9.2.3 Work and energy

Main articles: Work (physics), kinetic energy and potential energy

If a constant force F is applied to a particle that achieves a displacement Δr,[note 2] the work done by the force is defined as the scalar product of the force and displacement vectors:

W = F ·∆r .

More generally, if the force varies as a function of position as the particle moves from r1 to r2 along a path C, the work done on the particle is given by the line integral

W = ∫C F(r) · dr.

If the work done in moving the particle from r1 to r2 is the same no matter what path is taken, the force is said to be conservative. Gravity is a conservative force, as is the force due to an idealized spring, as given by Hooke's law. The force due to friction is non-conservative.

The kinetic energy Ek of a particle of mass m travelling at speed v is given by

Ek = ½ mv².

For extended objects composed of many particles, the kinetic energy of the composite body is the sum of the kinetic energies of the particles.

The work–energy theorem states that for a particle of constant mass m the total work W done on the particle from position r1 to r2 is equal to the change in kinetic energy Ek of the particle:

W = ΔEk = Ek,2 − Ek,1 = ½ m (v2² − v1²).

Conservative forces can be expressed as the gradient of a scalar function, known as the potential energy and denoted Ep:

F = −∇Ep .

If all the forces acting on a particle are conservative, and Ep is the total potential energy (defined as the work of the involved forces to rearrange the mutual positions of bodies), obtained by summing the potential energies corresponding to each force, then

F · Δr = −∇Ep · Δr = −ΔEp  ⇒  −ΔEp = ΔEk  ⇒  Δ(Ek + Ep) = 0.

This result is known as conservation of energy and states that the total energy,

∑E = Ek + Ep ,

is constant in time. It is often useful, because many commonly encountered forces are conservative.
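For a particle in free fall under gravity, a conservative force with Ep = mgy, the sum Ek + Ep stays constant. A quick numerical check with assumed values for the mass and initial height:

```python
def total_energy(m, g, y, v):
    """Total mechanical energy E = Ek + Ep = 1/2 m v^2 + m g y."""
    return 0.5 * m * v**2 + m * g * y

# Free fall from rest at height y0: v(t) = -g t, y(t) = y0 - 1/2 g t^2
m, g, y0 = 2.0, 9.8, 100.0
t = 3.0
v = -g * t
y = y0 - 0.5 * g * t**2
E0 = total_energy(m, g, y0, 0.0)   # energy at release
Et = total_energy(m, g, y, v)      # energy 3 seconds later
```

Algebraically, ½m(gt)² + mg(y0 − ½gt²) = mgy0, so E0 and Et agree exactly (up to floating-point rounding).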

9.2.4 Beyond Newton’s laws

Classical mechanics also includes descriptions of the complex motions of extended non-pointlike objects. Euler's laws provide extensions to Newton's laws in this area. The concepts of angular momentum rely on the same calculus used to describe one-dimensional motion. The rocket equation extends the notion of rate of change of an object's momentum to include the effects of an object “losing mass”.

There are two important alternative formulations of classical mechanics: Lagrangian mechanics and Hamiltonian mechanics. These, and other modern formulations, usually bypass the concept of “force”, instead referring to other physical quantities, such as energy, speed and momentum, for describing mechanical systems in generalized coordinates.

The expressions given above for momentum and kinetic energy are only valid when there is no significant electromagnetic contribution. In electromagnetism, Newton's second law for current-carrying wires breaks down unless one includes the electromagnetic field contribution to the momentum of the system as expressed by the Poynting vector divided by c², where c is the speed of light in free space.

9.3 Limits of validity

[Figure: Domain of validity for classical mechanics. Axes: speed (far less than 3×10⁸ m/s vs. comparable to 3×10⁸ m/s) and size (near or less than 10⁻⁹ m vs. far larger than 10⁻⁹ m). Quadrants: classical mechanics, relativistic mechanics, quantum mechanics, quantum field theory.]

Many branches of classical mechanics are simplifications or approximations of more accurate forms; two of the most accurate being general relativity and relativistic statistical mechanics. Geometric optics is an approximation to the quantum theory of light, and does not have a superior “classical” form.

9.3.1 The Newtonian approximation to special relativity

In special relativity, the momentum of a particle is given by

p = mv / √(1 − v²/c²),

where m is the particle’s rest mass, v its velocity, and c isthe speed of light.If v is very small compared to c, v2/c2 is approximatelyzero, and so

p ≈ mv .

Thus the Newtonian equation p = mv is an approximation of the relativistic equation for bodies moving with low speeds compared to the speed of light.

For example, the relativistic cyclotron frequency of a cyclotron, gyrotron, or high voltage magnetron is given by

f = fc · m0 / (m0 + T/c²),

where fc is the classical frequency of an electron (or other charged particle) with kinetic energy T and (rest) mass m0 circling in a magnetic field. The (rest) mass of an electron is 511 keV. So the frequency correction is 1% for a magnetic vacuum tube with a 5.11 kV direct current accelerating voltage.
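Under the stated numbers (electron rest energy 511 keV, kinetic energy 5.11 keV from a 5.11 kV accelerating voltage), the fractional frequency shift works out to about 1%. A sketch that expresses masses as rest energies in keV, so c² cancels:

```python
def cyclotron_correction(T_keV, m0c2_keV=511.0):
    """Fractional frequency shift 1 - f/fc for f = fc * m0 / (m0 + T/c^2),
    with masses given as rest energies in keV so that c^2 drops out."""
    return 1.0 - m0c2_keV / (m0c2_keV + T_keV)

shift = cyclotron_correction(5.11)   # roughly 0.01, i.e. about a 1% correction
```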

9.3.2 The classical approximation to quantum mechanics

The ray approximation of classical mechanics breaks down when the de Broglie wavelength is not much smaller than other dimensions of the system. For non-relativistic particles, this wavelength is

λ = h/p

where h is Planck’s constant and p is the momentum.Again, this happens with electrons before it happens withheavier particles. For example, the electrons used byClinton Davisson and Lester Germer in 1927, accelerated

by 54 volts, had a wavelength of 0.167 nm, which waslong enough to exhibit a single diffraction side lobe whenreflecting from the face of a nickel crystal with atomicspacing of 0.215 nm. With a larger vacuum chamber, itwould seem relatively easy to increase the angular resolu-tion from around a radian to a milliradian and see quan-tum diffraction from the periodic patterns of integratedcircuit computer memory.More practical examples of the failure of classicalmechanics on an engineering scale are conduction byquantum tunneling in tunnel diodes and very narrowtransistor gates in integrated circuits.Classical mechanics is the same extreme high frequencyapproximation as geometric optics. It is more often ac-curate because it describes particles and bodies with restmass. These have more momentum and therefore shorterDe Broglie wavelengths than massless particles, such aslight, with the same kinetic energies.
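The Davisson–Germer figure quoted above can be reproduced from λ = h/p with p = √(2mE) for a non-relativistic electron accelerated through 54 V (the constants below are rounded standard values):

```python
import math

H = 6.626e-34        # Planck's constant, J s
M_E = 9.109e-31      # electron rest mass, kg
EV = 1.602e-19       # joules per electron volt

def de_broglie_wavelength(energy_eV, mass=M_E):
    """lambda = h / p, with p = sqrt(2 m E) for a non-relativistic particle."""
    p = math.sqrt(2.0 * mass * energy_eV * EV)
    return H / p

lam = de_broglie_wavelength(54.0)   # ~0.167 nm, matching Davisson-Germer
```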

9.4 Branches

Classical mechanics was traditionally divided into three main branches:

• Statics, the study of equilibrium and its relation to forces

• Dynamics, the study of motion and its relation to forces

• Kinematics, dealing with the implications of observed motions without regard for circumstances causing them

Another division is based on the choice of mathematical formalism:

• Newtonian mechanics

• Lagrangian mechanics

• Hamiltonian mechanics

Alternatively, a division can be made by region of application:

• Celestial mechanics, relating to stars, planets and other celestial bodies

• Continuum mechanics, for materials modelled as a continuum, e.g., solids and fluids (i.e., liquids and gases).

• Relativistic mechanics (i.e. including the special and general theories of relativity), for bodies whose speed is close to the speed of light.


• Statistical mechanics, which provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk thermodynamic properties of materials.

9.5 See also

• Dynamical systems

• History of classical mechanics

• List of equations in classical mechanics

• List of publications in classical mechanics

• Molecular dynamics

• Newton’s laws of motion

• Special theory of relativity

9.6 Notes

[1] The notion of “classical” may be somewhat confusing, insofar as this term usually refers to the era of classical antiquity in European history. While many discoveries within the mathematics of that period remain in full force today, and of the greatest use, much of the science that emerged then has since been superseded by more accurate models. This in no way detracts from the science of that time, though, as most of modern physics is built directly upon the important developments, especially within technology, which took place in antiquity and during the Middle Ages in Europe and elsewhere. However, the emergence of classical mechanics was a decisive stage in the development of science, in the modern sense of the term. What characterizes it, above all, is its insistence on mathematics (rather than speculation), and its reliance on experiment (rather than observation). With classical mechanics it was established how to formulate quantitative predictions in theory, and how to test them by carefully designed measurement. The emerging globally cooperative endeavor increasingly provided for much closer scrutiny and testing, both of theory and experiment. This was, and remains, a key factor in establishing certain knowledge, and in bringing it to the service of society. History shows how closely the health and wealth of a society depends on nurturing this investigative and critical approach.

[2] The displacement Δr is the difference of the particle's initial and final positions: Δr = r_final − r_initial.

9.7 References

[1] Jesseph, Douglas M. (1998). “Leibniz on the Foundations of the Calculus: The Question of the Reality of Infinitesimal Magnitudes”. Perspectives on Science. 6.1&2: 6–40. Retrieved 31 December 2011.

[2] Page 2-10 of the Feynman Lectures on Physics says “For already in classical mechanics there was indeterminability from a practical point of view.” The past tense here implies that classical physics is no longer fundamental.

[3] Complex Elliptic Pendulum, Carl M. Bender, Daniel W. Hook, Karta Kooner

[4] Mughal, Muhammad Aurang Zeb. 2009. Time, absolute. Birx, H. James (ed.), Encyclopedia of Time: Science, Philosophy, Theology, and Culture, Vol. 3. Thousand Oaks, CA: Sage, pp. 1254–1255.

[5] MIT physics 8.01 lecture notes (page 12) (PDF)

9.8 Further reading

• Feynman, Richard (1996). Six Easy Pieces. Perseus Publishing. ISBN 0-201-40825-2.

• Feynman, Richard; Phillips, Richard (1998). Six Easy Pieces. Perseus Publishing. ISBN 0-201-32841-0.

• Feynman, Richard (1999). Lectures on Physics. Perseus Publishing. ISBN 0-7382-0092-1.

• Landau, L.D.; Lifshitz, E.M. (1972). Mechanics. Course of Theoretical Physics, Vol. 1. Franklin Book Company. ISBN 0-08-016739-X.

• Eisberg, Robert Martin (1961). Fundamentals of Modern Physics. John Wiley and Sons.

• M. Alonso; J. Finn. Fundamental University Physics. Addison-Wesley.

• Gerald Jay Sussman; Jack Wisdom (2001). Structure and Interpretation of Classical Mechanics. MIT Press. ISBN 0-262-19455-4.

• D. Kleppner; R.J. Kolenkow (1973). An Introduction to Mechanics. McGraw-Hill. ISBN 0-07-035048-5.

• Herbert Goldstein; Charles P. Poole; John L. Safko (2002). Classical Mechanics (3rd ed.). Addison Wesley. ISBN 0-201-65702-3.

• Thornton, Stephen T.; Marion, Jerry B. (2003). Classical Dynamics of Particles and Systems (5th ed.). Brooks Cole. ISBN 0-534-40896-6.

• Kibble, Tom W.B.; Berkshire, Frank H. (2004). Classical Mechanics (5th ed.). Imperial College Press. ISBN 978-1-86094-424-6.

• Morin, David (2008). Introduction to Classical Mechanics: With Problems and Solutions (1st ed.). Cambridge, UK: Cambridge University Press. ISBN 978-0-521-87622-3.


9.9 External links

• Crowell, Benjamin. Newtonian Physics (an introductory text, uses algebra with optional sections involving calculus)

• Fitzpatrick, Richard. Classical Mechanics (uses calculus)

• Hoiland, Paul (2004). Preferred Frames of Reference & Relativity

• Horbatsch, Marko, "Classical Mechanics Course Notes".

• Rosu, Haret C., "Classical Mechanics". Physics Education. 1999. [arxiv.org : physics/9909035]

• Shapiro, Joel A. (2003). Classical Mechanics

• Sussman, Gerald Jay & Wisdom, Jack & Mayer, Meinhard E. (2001). Structure and Interpretation of Classical Mechanics

• Tong, David. Classical Dynamics (Cambridge lecture notes on Lagrangian and Hamiltonian formalism)

• Kinematic Models for Design Digital Library (KMODDL). Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering.

• MIT OpenCourseWare 8.01: Classical Mechanics. Free videos of actual course lectures with links to lecture notes, assignments and exams.

• Alejandro A. Torassa, On Classical Mechanics


Chapter 10

Entropy (information theory)

[Figure: 2 bits of entropy.]

In information theory, entropy is the average amount of information contained in each message received. Here, message stands for an event, sample or character drawn from a distribution or data stream. Entropy thus characterizes our uncertainty about our source of information. (Entropy is best understood as a measure of uncertainty rather than certainty, since entropy is larger for more random sources.) The source is also characterized by the probability distribution of the samples drawn from it. The idea here is that the less likely an event is, the more information it provides when it occurs. For some other reasons (explained below) it makes sense to define information as the negative of the logarithm of the probability distribution. The probability distribution of the events, coupled with the information amount of every event, forms a random variable whose average (a.k.a. expected) value is the average amount of information, a.k.a. entropy, generated by this distribution. Because entropy is average information, it is also measured in shannons, nats, or hartleys, depending on the base of the logarithm used to define it.

The logarithm of the probability distribution is useful as a measure of information because it is additive. For instance, flipping a coin provides 1 shannon of information, whereas m tosses gather m bits. Generally, you need log2(n) bits to represent a variable that can take one of n values. Since 1 of n outcomes is possible when you apply a scale graduated with n marks, you receive log2(n) bits of information with every such measurement. The log2(n) rule holds only as long as all outcomes are equally probable. If one of the events occurs more often than others, observation of that event is less informative. Conversely, observing rarer events compensates by providing more information when observed. Since observation of less probable events occurs more rarely, the net effect is that the entropy (thought of as the average information) received from non-uniformly distributed data is less than log2(n). Entropy is zero when only one certain outcome is expected. Shannon entropy quantifies all these considerations exactly when a probability distribution of the source is provided. It is important to note that the meaning of the events observed (a.k.a. the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.

Generally, “entropy” stands for “disorder” or uncertainty. The entropy we talk about here was introduced by Claude E. Shannon in his 1948 paper "A Mathematical Theory of Communication".[1] We also call it Shannon entropy to distinguish it from other occurrences of the term, which appears in various parts of physics in different forms. Shannon entropy provides an absolute limit on the best possible average length of lossless encoding or compression of any communication, assuming that[2] the communication may be represented as a sequence of independent and identically distributed random variables.

10.1 Introduction

Entropy is a measure of unpredictability of information content. To get an informal, intuitive understanding of the connection between these three English terms, consider the example of a poll on some political issue. Usually, such polls happen because the outcome of the poll isn't already known. In other words, the outcome of the poll is relatively unpredictable, and actually performing the poll and learning the results gives some new information; these are just different ways of saying that the entropy of the poll results is large. Now, consider the case that the same poll is performed a second time shortly after the first poll. Since the result of the first poll is already known, the outcome of the second poll can be predicted well and the results should not contain much new information; in this case the entropy of the second poll result relative to the first is small.

Now consider the example of a coin toss. When the coin is fair, that is, when the probability of heads is the same as the probability of tails, then the entropy of the coin toss is as high as it could be. This is because there is no way to predict the outcome of the coin toss ahead of time: the best we can do is predict that the coin will come up heads, and our prediction will be correct with probability 1/2. Such a coin toss has one bit of entropy since there are two possible outcomes that occur with equal probability, and learning the actual outcome contains one bit of information. Contrarily, a coin toss with a coin that has two heads and no tails has zero entropy since the coin will always come up heads, and the outcome can be predicted perfectly.

English text has fairly low entropy. In other words, it is fairly predictable. Even if we don't know exactly what is going to come next, we can be fairly certain that, for example, there will be many more e's than z's, that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. Uncompressed, English text has between 0.6 and 1.3 bits of entropy for each character of message.[3][4]

If a compression scheme is lossless (that is, you can always recover the entire original message by decompressing), then a compressed message has the same quantity of information as the original, but communicated in fewer characters. That is, it has more information, or a higher entropy, per character. This means a compressed message has less redundancy. Roughly speaking, Shannon's source coding theorem says that a lossless compression scheme cannot compress messages, on average, to have more than one bit of information per bit of message, but that any value less than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains.

Shannon's theorem also implies that no lossless compression scheme can shorten all messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because we are usually only interested in compressing certain types of messages, for example English documents as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger. However, the problem can still arise even in everyday use when applying a compression algorithm to already compressed data: for example, making a ZIP file of music that is already in the FLAC audio format is unlikely to achieve much extra saving in space.
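The asymmetry between redundant and effectively random data is easy to see with any off-the-shelf lossless compressor. A small sketch using Python's standard-library zlib (DEFLATE, i.e. LZ77 plus Huffman coding):

```python
import random
import zlib

# Highly redundant input: low entropy per character, compresses very well.
redundant = b"AB" * 500                      # 1000 bytes
comp_redundant = zlib.compress(redundant, 9)

# Incompressible input: seeded pseudo-random bytes look maximally random
# to the compressor, so the "compressed" form is no shorter (pigeonhole).
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(1000))
comp_noise = zlib.compress(noise, 9)
```

The repetitive input shrinks to a handful of bytes, while the pseudo-random input comes out slightly larger than it went in, illustrating both the entropy bound and the pigeonhole argument above.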

10.2 Definition

Named after Boltzmann’s H-theorem, Shannon definedthe entropy H (Greek letter Eta) of a discrete ran-dom variable X with possible values x1, ..., xn andprobability mass function P(X) as:

H(X) = E[I(X)] = E[− ln(P (X))].

Here E is the expected value operator, and I is the information content of X.[5][6] I(X) is itself a random variable.

When taken from a finite sample, the entropy can explicitly be written as

H(X) = ∑i P(xi) I(xi) = −∑i P(xi) logb P(xi)

where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the unit of entropy is the shannon for b = 2, the nat for b = e, and the hartley for b = 10.[7]

In the case of P(xi) = 0 for some i, the value of the corresponding summand 0 logb(0) is taken to be 0, which is consistent with the well-known limit:

lim(p→0+) p log(p) = 0.

One may also define the conditional entropy of two events X and Y taking values xi and yj respectively, as

H(X|Y) = ∑i,j p(xi, yj) log( p(yj) / p(xi, yj) )

where p(xi, yj) is the probability that X = xi and Y = yj. This quantity should be understood as the amount of randomness in the random variable X given that you know the value of Y.
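The finite-sample formula above is only a few lines of code. A sketch using the base-2 logarithm, so the result is in shannons (bits):

```python
import math

def entropy(probs, base=2):
    """H(X) = -sum_i P(xi) log_b P(xi); summands with P(xi) = 0 contribute 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

h_coin = entropy([0.5, 0.5])     # fair coin: 1 shannon
h_die = entropy([1/6] * 6)       # fair die: log2(6) ≈ 2.585 shannons
h_certain = entropy([1.0])       # a certain outcome: 0 shannons
```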

10.3 Example

Main article: Binary entropy function
Main article: Bernoulli process

Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this is known as the Bernoulli process.

The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty, as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information.

[Figure: Entropy H(X) (i.e. the expected surprisal) of a coin flip, measured in shannons, graphed versus the fairness of the coin Pr(X=1), where X=1 represents a result of heads. Note that the maximum of the graph depends on the distribution. Here, the entropy is at most 1 shannon, and communicating the outcome of a fair coin flip (2 possible values) requires an average of at most 1 bit. The result of a fair die (6 possible values) would require on average log2 6 bits.]

However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, where p ≠ q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information.

The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain. In this respect, entropy can be normalized by dividing it by information length. This ratio is called metric entropy and is a measure of the randomness of the information.
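All of the coin-toss cases discussed here fall out of the binary entropy function Hb(p) = −p log2 p − (1−p) log2(1−p). A small sketch:

```python
import math

def binary_entropy(p):
    """Entropy in shannons of a coin with P(heads) = p; Hb(0) = Hb(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    q = 1.0 - p
    return -p * math.log2(p) - q * math.log2(q)

h_fair = binary_entropy(0.5)            # maximum uncertainty: 1 bit per toss
h_biased = binary_entropy(0.9)          # biased coin: less than 1 bit per toss
h_double_headed = binary_entropy(1.0)   # no uncertainty: 0 bits per toss
```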

10.4 Rationale

To understand the meaning of ∑i pi log(1/pi), first try to define an information function, I, in terms of an event i with probability pi. How much information is acquired due to the observation of event i? Shannon's solution follows from the fundamental properties of information:[8]

1. I(p) ≥ 0 – information is a non-negative quantity

2. I(1) = 0 – events that always occur do not communicate information

3. I(p1 p2) = I(p1) + I(p2) – information due to independent events is additive

The latter is a crucial property. It states that joint probability communicates as much information as two individual events separately. Particularly, if the first event can yield one of n equiprobable outcomes and another has one of m equiprobable outcomes then there are mn possible outcomes of the joint event. This means that if log2(n) bits are needed to encode the first value and log2(m) to encode the second, one needs log2(mn) = log2(m) + log2(n) to encode both. Shannon discovered that the proper choice of function to quantify information, preserving this additivity, is logarithmic, i.e.,

I(p) = log(1/p)

The base of the logarithm does not matter; any can be used. The different units of information (bits for log2, trits for log3, nats for ln and so on) are just constant multiples of each other. For instance, in the case of a fair coin toss, heads provides log2(2) = 1 bit of information. Because of additivity, n tosses provide n bits of information.

Now, suppose we have a distribution where event i can happen with probability pi. Suppose we have sampled it N times and outcome i was, accordingly, seen ni = Npi times. The total amount of information we have received is

∑i ni I(pi) = ∑i Npi log(1/pi).

The average amount of information that we receive with every event is therefore

∑i pi log(1/pi).

10.5 Aspects

10.5.1 Relationship to thermodynamic entropy

Main article: Entropy in thermodynamics and information theory

The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics.

In statistical thermodynamics the most general formula for the thermodynamic entropy S of a thermodynamic system is the Gibbs entropy,


S = −kB ∑i pi ln pi

where kB is the Boltzmann constant, and pi is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Boltzmann (1872).[9]

The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy, introduced by John von Neumann in 1927,

S = −kB Tr(ρ ln ρ)

where ρ is the density matrix of the quantum mechanical system and Tr is the trace.

At an everyday practical level the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the minuteness of Boltzmann's constant kB indicates, the changes in S/kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. Furthermore, in classical thermodynamics the entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy.

At a multidisciplinary level, however, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, thus making any complete state description longer. (See article: maximum entropy thermodynamics.)

Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle has implications on the amount of heat a computer must dissipate to process a given amount of information, though modern computers are nowhere near the efficiency limit.
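Landauer's principle puts the minimum dissipation at kB·T·ln 2 joules per bit erased. At room temperature this is a tiny number, which is why practical hardware is nowhere near the limit. A quick calculation:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def landauer_limit(temperature_K):
    """Minimum heat dissipated per erased bit: k_B * T * ln(2) joules."""
    return K_B * temperature_K * math.log(2)

e_bit = landauer_limit(300.0)   # ~2.9e-21 J per bit at room temperature
```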

10.5.2 Entropy as information content

Main article: Shannon’s source coding theorem

Entropy is defined in the context of a probabilistic model. Independent fair coin flips have an entropy of 1 bit per flip. A source that always generates a long string of B's has an entropy of 0, since the next character will always be a 'B'.

The entropy rate of a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English;[10] the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text.

From the preceding example, note the following points:

1. The amount of entropy is not always an integer number of bits.

2. Many data bits may not convey information. For example, data structures often store information redundantly, or have identical sections regardless of the information in the data structure.
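The two boundary cases above (fair coin, constant source) can be checked directly. Here is a small Python sketch of the Shannon entropy formula (our own helper, not from the text):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits: sum of p * log2(1/p), zero-probability terms dropped."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 — independent fair coin flips: 1 bit per flip
print(shannon_entropy([1.0]))       # 0.0 — a source that always emits 'B'
```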

Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits (see caveat below in italics). The formula can be derived by calculating the mathematical expectation of the amount of information contained in a digit from the information source. See also the Shannon–Hartley theorem.

Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets, etc. See Markov chain.

10.5.3 Data compression

Main article: Data compression

Entropy effectively bounds the performance of the strongest lossless compression possible, which can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. The performance of existing data compression algorithms is often used as a rough estimate of the entropy of a block of data.[11][12] See also Kolmogorov complexity. In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors.
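As a rough illustration of using a compressor as an entropy estimate, the sketch below (our own, using Python's standard zlib module) compares redundant and random data; DEFLATE output size is only an upper bound on the true entropy:

```python
import os
import zlib

def compressed_bits_per_byte(data: bytes) -> float:
    """Upper estimate of entropy: bits of DEFLATE output per input byte."""
    return 8 * len(zlib.compress(data, 9)) / len(data)

low = b"AB" * 50_000        # highly redundant text: compresses very well
high = os.urandom(100_000)  # random bytes: essentially incompressible

print(compressed_bits_per_byte(low))   # a small fraction of a bit per byte
print(compressed_bits_per_byte(high))  # close to 8 bits per byte (plus container overhead)
```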

10.5.4 World's technological capacity to store and communicate entropic information

A 2011 study in Science estimates the world's technological capacity to store and communicate optimally compressed information, normalized on the most effective compression algorithms available in the year 2007, thereby estimating the entropy of the technologically available sources.[13]

The authors estimate humankind's technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories: storing information on a medium, receiving information through one-way broadcast networks, and exchanging information through two-way telecommunication networks.[13]

10.5.5 Limitations of entropy as information content

There are a number of entropy-related concepts that mathematically quantify information content in some way:

• the self-information of an individual message or symbol taken from a given probability distribution,

• the entropy of a given probability distribution of messages or symbols, and

• the entropy rate of a stochastic process.

(The “rate of self-information” can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information.

It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the “entropy” of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy rate.

Although entropy is often used as a characterization of the information content of a data source, this information content is not absolute: it depends crucially on the probabilistic model. A source that always generates the same symbol has an entropy rate of 0, but the definition of what a symbol is depends on the alphabet. Consider a source that produces the string ABABABABAB... in which A is always followed by B and vice versa. If the probabilistic model considers individual letters as independent, the entropy rate of the sequence is 1 bit per character. But if the sequence is considered as “AB AB AB AB AB...” with symbols as two-character blocks, then the entropy rate is 0 bits per character.

However, if we use very large blocks, then the estimate of per-character entropy rate may become artificially low. This is because in reality, the probability distribution of the sequence is not knowable exactly; it is only an estimate. For example, suppose one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book. If there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is −log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered.

Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence.
A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest.

For example, the Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, ... . Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2(n). So the first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol. However, the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2) for n = 3, 4, 5, ..., F(1) = 1, F(2) = 1], and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.
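The model dependence in the ABABAB... example can be checked empirically. The sketch below (our own, in Python) fits an i.i.d. model over non-overlapping blocks of the string:

```python
from collections import Counter
from math import log2

def empirical_entropy_per_char(text: str, block: int) -> float:
    """Entropy in bits/char of an i.i.d. model fitted over non-overlapping blocks."""
    chunks = [text[i:i + block] for i in range(0, len(text) - block + 1, block)]
    n = len(chunks)
    h_block = sum(c / n * log2(n / c) for c in Counter(chunks).values())
    return h_block / block

s = "AB" * 1000
print(empirical_entropy_per_char(s, 1))  # 1.0 — 'A' and 'B' each appear half the time
print(empirical_entropy_per_char(s, 2))  # 0.0 — every two-character block is "AB"
```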

10.5.6 Limitations of entropy as a measure of unpredictability

In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key. For example, a 128-bit key that is randomly generated has 128 bits of entropy. It takes (on average) $2^{128-1}$ guesses to break by brute force. If the key's first digit is 0, and the others random, then the entropy is 127 bits, and it takes (on average) $2^{127-1}$ guesses.

However, entropy fails to capture the number of guesses required if the possible keys are not of equal probability.[14][15] If the key is half the time “password” and half the time a true random 128-bit key, then the entropy is approximately 65 bits. Yet half the time the key may be guessed on the first try, if your first guess is “password”; and on average, it takes around $2^{126}$ guesses (not $2^{65-1}$) to break this password.

Similarly, consider a 1000000-digit binary one-time pad. If the pad has 1000000 bits of entropy, it is perfect. If the pad has 999999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy), it may still be considered very good. But if the pad has 999999 bits of entropy, where the first digit is fixed and the remaining 999999 digits are perfectly random, then the first digit of the ciphertext will not be encrypted at all.
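The 65-bit figure and the $2^{126}$ average guess count can be reproduced by grouping the $2^{128}$ equally likely keys instead of summing them one by one (a Python sketch of our own):

```python
from math import log2

# Distribution from the text: "password" with probability 1/2, otherwise one of
# 2**128 equally likely random keys (each with probability 2**-129).
n_keys = 2 ** 128
p_password = 0.5
p_key = 0.5 / n_keys

# Group the 2**128 identical terms of the entropy sum:
H = p_password * log2(1 / p_password) + n_keys * p_key * log2(1 / p_key)
print(H)  # 65.0 bits

# Optimal guessing: try "password" first, then brute-force the key space
# (average position of a uniformly random key is about n_keys / 2).
expected_guesses = p_password * 1 + 0.5 * (n_keys / 2)
print(log2(expected_guesses))  # 126.0
```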

10.5.7 Data as a Markov process

A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independently of the previous characters), the binary entropy is:

$H(\mathcal{S}) = -\sum_i p_i \log_2 p_i ,$

where $p_i$ is the probability of symbol $i$. For a first-order Markov source (one in which the probability of selecting a character depends only on the immediately preceding character), the entropy rate is:

$H(\mathcal{S}) = -\sum_i p_i \sum_j p_i(j) \log_2 p_i(j) ,$

where $i$ is a state (certain preceding characters) and $p_i(j)$ is the probability of $j$ given $i$ as the previous character. For a second-order Markov source, the entropy rate is

$H(\mathcal{S}) = -\sum_i p_i \sum_j p_i(j) \sum_k p_{i,j}(k) \log_2 p_{i,j}(k) .$
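A minimal Python sketch (our own) of fitting the first-order formula to a text, estimating $p_i$ and $p_i(j)$ by counting character pairs:

```python
from collections import Counter
from math import log2

def markov1_entropy_rate(text: str) -> float:
    """Entropy rate in bits/char of a first-order Markov model fitted to text:
    H = sum_i p_i sum_j p_i(j) log2(1 / p_i(j))."""
    pair_counts = Counter(zip(text, text[1:]))
    state_counts = Counter(text[:-1])
    n = len(text) - 1
    h = 0.0
    for (i, j), c in pair_counts.items():
        p_pair = c / n                # joint weight p_i * p_i(j)
        p_cond = c / state_counts[i]  # conditional probability p_i(j)
        h += p_pair * log2(1 / p_cond)
    return h

print(markov1_entropy_rate("AB" * 5000))    # 0.0 — 'A' always followed by 'B' and vice versa
print(markov1_entropy_rate("AABB" * 2500))  # ≈ 1.0 — each state has two equally likely successors
```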

10.5.8 b-ary entropy

In general the b-ary entropy of a source $\mathcal{S} = (S, P)$ with source alphabet $S = \{a_1, \ldots, a_n\}$ and discrete probability distribution $P = \{p_1, \ldots, p_n\}$, where $p_i$ is the probability of $a_i$ (say $p_i = p(a_i)$), is defined by:

$H_b(\mathcal{S}) = -\sum_{i=1}^{n} p_i \log_b p_i .$

Note: the b in "b-ary entropy” is the number of different symbols of the ideal alphabet used as a standard yardstick to measure source alphabets. In information theory, two symbols are necessary and sufficient for an alphabet to encode information. Therefore, the default is to let b = 2 (“binary entropy”). Thus, the entropy of the source alphabet, with its given empiric probability distribution, is a number equal to the number (possibly fractional) of symbols of the “ideal alphabet”, with an optimal probability distribution, necessary to encode each symbol of the source alphabet. Also note that “optimal probability distribution” here means a uniform distribution: a source alphabet with n symbols has the highest possible entropy (for an alphabet with n symbols) when the probability distribution of the alphabet is uniform. This optimal entropy turns out to be $\log_b(n)$.

10.6 Efficiency

A source alphabet with a non-uniform distribution will have less entropy than if those symbols had a uniform distribution (i.e. the “optimized alphabet”). This deficiency in entropy can be expressed as a ratio called efficiency:

$\eta(X) = \frac{-\sum_{i=1}^{n} p(x_i) \log_b p(x_i)}{\log_b(n)} .$

Efficiency has utility in quantifying the effective use of a communications channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy $\log_b(n)$.
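Both quantities are one-liners; the following Python sketch (our own function names) computes the b-ary entropy and the efficiency ratio:

```python
from math import log

def b_ary_entropy(probs, b=2):
    """H_b(S) = sum of p * log_b(1/p) over the nonzero probabilities."""
    return sum(p * log(1 / p, b) for p in probs if p > 0)

def efficiency(probs, b=2):
    """Normalized entropy: H_b(S) divided by its maximum log_b(n)."""
    return b_ary_entropy(probs, b) / log(len(probs), b)

print(b_ary_entropy([1/3, 1/3, 1/3], b=3))    # ≈ 1.0 — a uniform ternary source is one trit/symbol
print(efficiency([0.5, 0.25, 0.125, 0.125]))  # ≈ 0.875 — 1.75 bits out of a possible 2
```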

10.7 Characterization

Shannon entropy is characterized by a small number of criteria, listed below. Any definition of entropy satisfying these assumptions has the form

$-K \sum_{i=1}^{n} p_i \log(p_i) ,$

where K is a constant corresponding to a choice of measurement units.

In the following, $p_i = \Pr(X = x_i)$ and $H_n(p_1, \ldots, p_n) = H(X)$.

10.7.1 Continuity

The measure should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount.


10.7.2 Symmetry

The measure should be unchanged if the outcomes $x_i$ are re-ordered.

$H_n(p_1, p_2, \ldots) = H_n(p_2, p_1, \ldots)$

10.7.3 Maximum

The measure should be maximal if all the outcomes are equally likely (uncertainty is highest when all possible events are equiprobable).

$H_n(p_1, \ldots, p_n) \le H_n\!\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) = \log_b(n).$

For equiprobable events the entropy should increase with the number of outcomes.

$H_n\!\left(\underbrace{\tfrac{1}{n}, \ldots, \tfrac{1}{n}}_{n}\right) = \log_b(n) < \log_b(n+1) = H_{n+1}\!\left(\underbrace{\tfrac{1}{n+1}, \ldots, \tfrac{1}{n+1}}_{n+1}\right).$

10.7.4 Additivity

The amount of entropy should be independent of how the process is regarded as being divided into parts.

This last functional relationship characterizes the entropy of a system with sub-systems. It demands that the entropy of a system can be calculated from the entropies of its sub-systems if the interactions between the sub-systems are known.

Given an ensemble of n uniformly distributed elements that are divided into k boxes (sub-systems) with $b_1, \ldots, b_k$ elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box.

For positive integers $b_i$ where $b_1 + \cdots + b_k = n$,

$H_n\!\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) = H_k\!\left(\tfrac{b_1}{n}, \ldots, \tfrac{b_k}{n}\right) + \sum_{i=1}^{k} \tfrac{b_i}{n}\, H_{b_i}\!\left(\tfrac{1}{b_i}, \ldots, \tfrac{1}{b_i}\right).$

Choosing $k = n$, $b_1 = \cdots = b_n = 1$, this implies that the entropy of a certain outcome is zero: $H_1(1) = 0$. This implies that the efficiency of a source alphabet with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory).
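The grouping identity can be verified numerically; here is a Python sketch (our own) for n = 6 uniform elements divided into boxes of sizes 2 and 4:

```python
from math import log2

def H(probs):
    """Shannon entropy in bits."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

# n = 6 uniform elements split into k = 2 boxes with b = (2, 4) elements:
n, boxes = 6, [2, 4]
lhs = H([1 / n] * n)  # H_6(1/6, ..., 1/6)
rhs = H([b / n for b in boxes]) + sum((b / n) * H([1 / b] * b) for b in boxes)
print(abs(lhs - rhs) < 1e-9)  # True
```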

10.8 Further properties

The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the amount of information learned (or uncertainty eliminated) by revealing the value of a random variable X:

• Adding or removing an event with probability zero does not contribute to the entropy:

$H_{n+1}(p_1, \ldots, p_n, 0) = H_n(p_1, \ldots, p_n)$

• It can be confirmed using Jensen's inequality that

$H(X) = \mathbb{E}\!\left[\log_b \tfrac{1}{p(X)}\right] \le \log_b \mathbb{E}\!\left[\tfrac{1}{p(X)}\right] = \log_b(n).$

This maximal entropy of $\log_b(n)$ is effectively attained by a source alphabet having a uniform probability distribution: uncertainty is maximal when all possible events are equiprobable.

• The entropy or the amount of information revealed by evaluating (X, Y) (that is, evaluating X and Y simultaneously) is equal to the information revealed by conducting two consecutive experiments: first evaluating the value of Y, then revealing the value of X given that you know the value of Y. This may be written as

$H(X, Y) = H(X|Y) + H(Y) = H(Y|X) + H(X).$

• If Y = f(X) where f is deterministic, then H(f(X)|X) = 0. Applying the previous formula to H(X, f(X)) yields

$H(X) + H(f(X)|X) = H(f(X)) + H(X|f(X)),$

so $H(f(X)) \le H(X)$: the entropy of a variable can never increase when the variable is passed through a deterministic function.

• If X and Y are two independent experiments, then knowing the value of Y doesn't influence our knowledge of the value of X (since the two don't influence each other by independence):

$H(X|Y) = H(X).$


• The entropy of two simultaneous events is no more than the sum of the entropies of each individual event, with equality if and only if the two events are independent. More specifically, if X and Y are two random variables on the same probability space, and (X, Y) denotes their Cartesian product, then

$H(X, Y) \le H(X) + H(Y).$

This follows easily from the previous two properties of entropy.
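The subadditivity inequality is easy to check numerically; a Python sketch (our own) with a correlated joint distribution on two bits:

```python
from math import log2

def H(probs):
    """Shannon entropy in bits."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

# A correlated joint distribution on {0,1} x {0,1}:
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = [sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)]
py = [sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)]

h_joint = H(joint.values())
print(h_joint)                   # < 2 bits: the correlation removes uncertainty
print(h_joint <= H(px) + H(py))  # True — H(X,Y) <= H(X) + H(Y)
```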

10.9 Extending discrete entropy to the continuous case

10.9.1 Differential entropy

Main article: Differential entropy

The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function f(x), with finite or infinite support X on the real line, is defined by analogy, using the above form of the entropy as an expectation:

$h[f] = \mathbb{E}[-\ln f(x)] = -\int_X f(x) \ln f(x)\,dx.$

This formula is usually referred to as the continuous entropy, or differential entropy. A precursor of the continuous entropy h[f] is the expression for the functional H in the H-theorem of Boltzmann.

Although the analogy between both functions is suggestive, the following question must be posed: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and thus corrections have been suggested, notably the limiting density of discrete points.

To answer this question, we must establish a connection between the two functions. We wish to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by $p_n$. As we generalize to the continuous domain, we must make this width explicit.

To do this, start with a continuous function f discretized into bins of size $\Delta$. By the mean-value theorem there exists a value $x_i$ in each bin such that

$f(x_i)\Delta = \int_{i\Delta}^{(i+1)\Delta} f(x)\,dx,$

and thus the integral of the function f can be approximated (in the Riemannian sense) by

$\int_{-\infty}^{\infty} f(x)\,dx = \lim_{\Delta \to 0} \sum_{i=-\infty}^{\infty} f(x_i)\Delta,$

where this limit and “bin size goes to zero” are equivalent. We will denote

$H^{\Delta} := -\sum_{i=-\infty}^{\infty} f(x_i)\Delta \log\left(f(x_i)\Delta\right),$

and expanding the logarithm, we have

$H^{\Delta} = -\sum_{i=-\infty}^{\infty} f(x_i)\Delta \log f(x_i) - \sum_{i=-\infty}^{\infty} f(x_i)\Delta \log \Delta.$

As $\Delta \to 0$, we have

$\sum_{i=-\infty}^{\infty} f(x_i)\Delta \to \int_{-\infty}^{\infty} f(x)\,dx = 1,$

$\sum_{i=-\infty}^{\infty} f(x_i)\Delta \log f(x_i) \to \int_{-\infty}^{\infty} f(x)\log f(x)\,dx.$

But note that $\log \Delta \to -\infty$ as $\Delta \to 0$; therefore we need a special definition of the differential or continuous entropy:

$h[f] = \lim_{\Delta \to 0} \left( H^{\Delta} + \log \Delta \right) = -\int_{-\infty}^{\infty} f(x)\log f(x)\,dx,$

which is, as said before, referred to as the differential entropy. This means that the differential entropy is not a limit of the Shannon entropy for n → ∞; rather, it differs from the limit of the Shannon entropy by an infinite offset. It turns out as a result that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous coordinate transformations.
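The infinite offset can be seen numerically: $H^{\Delta} + \log \Delta$ stabilizes near the differential entropy as the bin width shrinks. A Python sketch (our own, using the standard normal density, whose differential entropy is $\tfrac{1}{2}\log_2(2\pi e) \approx 2.047$ bits):

```python
from math import e, exp, log2, pi, sqrt

def f(x):
    """Standard normal density."""
    return exp(-x * x / 2) / sqrt(2 * pi)

def discretized_entropy(delta, lo=-10.0, hi=10.0):
    """H_Delta = -sum f(x_i)*Delta*log2(f(x_i)*Delta) over bins of width delta (midpoint rule)."""
    h, x = 0.0, lo
    while x < hi:
        p = f(x + delta / 2) * delta  # approximate probability mass of the bin
        h += p * log2(1 / p)
        x += delta
    return h

h_true = 0.5 * log2(2 * pi * e)  # differential entropy of N(0,1), about 2.047 bits
for delta in (0.5, 0.1, 0.01):
    print(discretized_entropy(delta) + log2(delta), h_true)  # offset-corrected sum approaches h_true
```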

10.9.2 Relative entropy

Main article: Generalized relative entropy

Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure m, as follows. Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e. is of the form p(dx) = f(x)m(dx) for some non-negative m-integrable function f with m-integral 1; then the relative entropy can be defined as

$D_{\mathrm{KL}}(p \,\|\, m) = \int \log(f(x))\,p(dx) = \int f(x)\log(f(x))\,m(dx).$

In this form the relative entropy generalises (up to change in sign) both the discrete entropy, where the measure m is the counting measure, and the differential entropy, where the measure m is the Lebesgue measure. If the measure m is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate independent and invariant under coordinate reparameterizations if one properly takes into account the transformation of the measure m. The relative entropy, and implicitly entropy and differential entropy, do depend on the “reference” measure m.

10.10 Use in combinatorics

Entropy has become a useful quantity in combinatorics.

10.10.1 Loomis-Whitney inequality

A simple example of this is an alternate proof of the Loomis-Whitney inequality: for every subset A ⊆ Z^d, we have

$|A|^{d-1} \le \prod_{i=1}^{d} |P_i(A)|,$

where $P_i$ is the orthogonal projection in the ith coordinate:

$P_i(A) = \{(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d) : (x_1, \ldots, x_d) \in A\}.$

The proof follows as a simple corollary of Shearer's inequality: if $X_1, \ldots, X_d$ are random variables and $S_1, \ldots, S_n$ are subsets of $\{1, \ldots, d\}$ such that every integer between 1 and d lies in exactly r of these subsets, then

$H[(X_1, \ldots, X_d)] \le \frac{1}{r} \sum_{i=1}^{n} H\left[(X_j)_{j \in S_i}\right],$

where $(X_j)_{j \in S_i}$ is the Cartesian product of random variables $X_j$ with indexes j in $S_i$ (so the dimension of this vector is equal to the size of $S_i$).

We sketch how Loomis-Whitney follows from this. Indeed, let X be a uniformly distributed random variable with values in A, so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) $H(X) = \log|A|$, where |A| denotes the cardinality of A. Let $S_i = \{1, 2, \ldots, i-1, i+1, \ldots, d\}$. The range of $(X_j)_{j \in S_i}$ is contained in $P_i(A)$, and hence $H[(X_j)_{j \in S_i}] \le \log|P_i(A)|$. Now use this to bound the right side of Shearer's inequality and exponentiate both sides of the resulting inequality.

10.10.2 Approximation to binomial coefficient

For integers 0 < k < n let q = k/n. Then

$\frac{2^{nH(q)}}{n+1} \le \binom{n}{k} \le 2^{nH(q)},$

where

$H(q) = -q \log_2(q) - (1-q)\log_2(1-q).$ [16]

Here is a sketch proof. Note that $\binom{n}{k} q^{qn} (1-q)^{n-qn}$ is one term of the expression

$\sum_{i=0}^{n} \binom{n}{i} q^i (1-q)^{n-i} = (q + (1-q))^n = 1.$

Rearranging gives the upper bound. For the lower bound one first shows, using some algebra, that it is the largest term in the summation. But then

$\binom{n}{k} q^{qn} (1-q)^{n-qn} \ge \frac{1}{n+1},$

since there are n + 1 terms in the summation. Rearranging gives the lower bound.

A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately $2^{nH(k/n)}$.[17]
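The two bounds are easy to verify for small cases (a Python sketch of our own, using math.comb):

```python
from math import comb, log2

def H2(q):
    """Binary entropy function H(q) in bits, for 0 < q < 1."""
    return q * log2(1 / q) + (1 - q) * log2(1 / (1 - q))

for n, k in [(10, 3), (50, 25), (100, 10)]:
    q = k / n
    upper = 2 ** (n * H2(q))
    lower = upper / (n + 1)
    assert lower <= comb(n, k) <= upper  # the sandwich from the text
    print(n, k, comb(n, k), upper)
```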

10.11 See also

• Conditional entropy

• Cross entropy – a measure of the average number of bits needed to identify an event from a set of possibilities between two probability distributions

• Entropy (arrow of time)

• Entropy encoding – a coding scheme that assignscodes to symbols so as to match code lengths withthe probabilities of the symbols.


• Entropy estimation

• Entropy power inequality

• Entropy rate

• Fisher information

• Hamming distance

• History of entropy

• History of information theory

• Information geometry

• Joint entropy – the measure of how much entropy is contained in a joint system of two random variables.

• Kolmogorov-Sinai entropy in dynamical systems

• Levenshtein distance

• Mutual information

• Negentropy

• Perplexity

• Qualitative variation – other measures of statistical dispersion for nominal distributions

• Quantum relative entropy – a measure of distinguishability between two quantum states.

• Rényi entropy – a generalisation of Shannon entropy; it is one of a family of functionals for quantifying the diversity, uncertainty or randomness of a system.

• Shannon index

• Theil index

• Typoglycemia

10.12 References

[1] Shannon, Claude E. (July–October 1948). “A Mathematical Theory of Communication”. Bell System Technical Journal 27 (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x. (PDF)

[2] Goise, François & Olla, Stefano (2008). Entropy methods for the Boltzmann equation: lectures from a special semester at the Centre Émile Borel, Institut H. Poincaré, Paris, 2001. Springer. p. 14. ISBN 978-3-540-73704-9.

[3] Schneier, B: Applied Cryptography, Second edition, page 234. John Wiley and Sons.

[4] Shannon, C. E. (January 1951). “Prediction and Entropy of Printed English”. Bell System Technical Journal 30 (1): 50–64. doi:10.1002/j.1538-7305.1951.tb01366.x. Retrieved 30 March 2014.

[5] Borda, Monica (2011). Fundamentals in Information Theory and Coding. Springer. p. 11. ISBN 978-3-642-20346-6.

[6] Han, Te Sun & Kobayashi, Kingo (2002). Mathematics of Information and Coding. American Mathematical Society. pp. 19–20. ISBN 978-0-8218-4256-0.

[7] Schneider, T.D., Information theory primer with an appendix on logarithms, National Cancer Institute, 14 April 2007.

[8] Carter, Tom (March 2014). An introduction to information theory and entropy. Santa Fe. Retrieved Aug 2014.

[9] Compare: Boltzmann, Ludwig (1896, 1898). Vorlesungen über Gastheorie: 2 Volumes – Leipzig 1895/98 UB: O 5262-6. English version: Lectures on gas theory. Translated by Stephen G. Brush (1964) Berkeley: University of California Press; (1995) New York: Dover. ISBN 0-486-68455-5.

[10] Mark Nelson (24 August 2006). “The Hutter Prize”. Retrieved 2008-11-27.

[11] T. Schürmann and P. Grassberger, Entropy Estimation of Symbol Sequences, CHAOS, Vol. 6, No. 3 (1996) 414–427.

[12] T. Schürmann, Bias Analysis in Entropy Estimation. J. Phys. A: Math. Gen. 37 (2004) L295–L301.

[13] “The World’s Technological Capacity to Store, Communicate, and Compute Information”, Martin Hilbert and Priscila López (2011), Science (journal), 332(6025), 60–65; free access to the article through martinhilbert.net/WorldInfoCapacity.html

[14] Massey, James (1994). Proc. IEEE International Symposium on Information Theory. Retrieved December 31, 2013.

[15] Malone, David; Sullivan, Wayne (2005). Proceedings of the Information Technology & Telecommunications Conference. Retrieved December 31, 2013.

[16] Aoki, New Approaches to Macroeconomic Modeling, page 43.

[17] Probability and Computing, M. Mitzenmacher and E. Upfal, Cambridge University Press.

This article incorporates material from Shannon’s entropy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.

10.13 Further reading

10.13.1 Textbooks on information theory

• Arndt, C. Information Measures, Information andits Description in Science and Engineering (SpringerSeries: Signals and Communication Technology),2004, ISBN 978-3-540-40855-0


• Ash, RB. Information Theory. New York: Interscience, 1965. ISBN 0-470-03445-9. New York: Dover 1990. ISBN 0-486-66521-6

• Gallager, R. Information Theory and Reliable Communication. New York: John Wiley and Sons, 1968. ISBN 0-471-29048-3

• Goldman, S. Information Theory. New York: Prentice Hall, 1953. New York: Dover 1968 ISBN 0-486-62209-6, 2005 ISBN 0-486-44271-3

• Cover, TM, Thomas, JA. Elements of Information Theory, 1st Edition. New York: Wiley-Interscience, 1991. ISBN 0-471-06259-6. 2nd Edition. New York: Wiley-Interscience, 2006. ISBN 0-471-24195-4.

• MacKay, DJC. Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1

• Martin, Nathaniel F.G. & England, James W. (2011). Mathematical Theory of Entropy. Cambridge University Press. ISBN 978-0-521-17738-2.

• Mansuripur, M. Introduction to Information Theory. New York: Prentice Hall, 1987. ISBN 0-13-484668-0

• Pierce, JR. “An introduction to information theory: symbols, signals and noise”. Dover (2nd Edition). 1961 (reprinted by Dover 1980).

• Reza, F. An Introduction to Information Theory. New York: McGraw-Hill 1961. New York: Dover 1994. ISBN 0-486-68210-2

• Shannon, CE & Warren Weaver. The Mathematical Theory of Communication. Univ of Illinois Press, 1949. ISBN 0-252-72548-4

• Stone, JV. Chapter 1 of book “Information Theory: A Tutorial Introduction”, University of Sheffield, England, 2014. ISBN 978-0956372857.

10.14 External links

• Hazewinkel, Michiel, ed. (2001), “Entropy”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Introduction to entropy and information on Principia Cybernetica Web

• Entropy, an interdisciplinary journal on all aspects of the entropy concept. Open access.

• Information is not entropy, information is not uncertainty! – a discussion of the use of the terms “information” and “entropy”.

• I'm Confused: How Could Information Equal Entropy? – a similar discussion on the bionet.info-theory FAQ.

• Description of information entropy from “Tools for Thought” by Howard Rheingold

• A Java applet representing Shannon's experiment to calculate the entropy of English

• Slides on information gain and entropy

• An Intuitive Guide to the Concept of Entropy Arising in Various Sectors of Science – a wikibook on the interpretation of the concept of entropy.

• Calculator for Shannon entropy estimation and interpretation

• A Light Discussion and Derivation of Entropy

• Network Event Detection With Entropy Measures, Dr. Raimund Eimann, University of Auckland, PDF; 5993 kB – a PhD thesis demonstrating how entropy measures may be used in network anomaly detection.


Chapter 11

Topological entropy

This article is about entropy in geometry and topology. For other uses, see Entropy (disambiguation).

In mathematics, the topological entropy of a topological dynamical system is a nonnegative real number that is a measure of the complexity of the system. Topological entropy was first introduced in 1965 by Adler, Konheim and McAndrew. Their definition was modelled after the definition of the Kolmogorov–Sinai, or metric, entropy. Later, Dinaburg and Rufus Bowen gave a different, weaker definition reminiscent of the Hausdorff dimension. The second definition clarified the meaning of the topological entropy: for a system given by an iterated function, the topological entropy represents the exponential growth rate of the number of distinguishable orbits of the iterates. An important variational principle relates the notions of topological and measure-theoretic entropy.

11.1 Definition

A topological dynamical system consists of a Hausdorff topological space X (usually assumed to be compact) and a continuous self-map f. Its topological entropy is a nonnegative real number that can be defined in various ways, which are known to be equivalent.

11.1.1 Definition of Adler, Konheim, and McAndrew

Let X be a compact Hausdorff topological space. For any finite open cover C of X, let H(C) be the logarithm (usually to base 2) of the smallest number of elements of C that cover X.[1] For two covers C and D, let

$C \vee D$

be their (minimal) common refinement, which consists of all the non-empty intersections of a set from C with a set from D, and similarly for multiple covers. For any continuous map f: X → X, the following limit exists:

$H(C, f) = \lim_{n \to \infty} \frac{1}{n} H\left(C \vee f^{-1}C \vee \cdots \vee f^{-n+1}C\right).$

Then the topological entropy of f, denoted h(f), is defined to be the supremum of H(C, f) over all possible finite covers C of X.

Interpretation

The parts of C may be viewed as symbols that (partially) describe the position of a point x in X: all points x ∈ C_i are assigned the symbol C_i. Imagine that the position of x is (imperfectly) measured by a certain device and that each part of C corresponds to one possible outcome of the measurement. The integer $H(C \vee f^{-1}C \vee \cdots \vee f^{-n+1}C)$ then represents the minimal number of “words” of length n needed to encode the points of X according to the behavior of their first n − 1 iterates under f, or, put differently, the total number of “scenarios” of the behavior of these iterates, as “seen” by the partition C. Thus the topological entropy is the average (per iteration) amount of information needed to describe long iterations of the map f.

11.1.2 Definition of Bowen and Dinaburg

This definition uses a metric on X (actually, a uniform structure would suffice). This is a weaker definition than that of Adler, Konheim, and McAndrew, as it requires additional, unnecessary structure on the topological space. However, in practice, the Bowen–Dinaburg topological entropy is usually much easier to calculate.

Let (X, d) be a compact metric space and f: X → X a continuous map. For each natural number n, a new metric $d_n$ is defined on X by the formula

$d_n(x, y) = \max\{\, d(f^i(x), f^i(y)) : 0 \le i < n \,\}.$

Given any ε > 0 and n≥ 1, two points ofX are ε-close withrespect to this metric if their first n iterates are ε-close.This metric allows one to distinguish in a neighborhood

58

Page 65: TURBULENCE, ENTROPY AND DYNAMICS...TURBULENCE, ENTROPY AND DYNAMICS Lecture Notes, UPC 2014 Jose M. Redondo

11.3. EXAMPLES 59

of an orbit the points that move away from each otherduring the iteration from the points that travel together.A subset E of X is said to be (n, ε)-separated if eachpair of distinct points of E is at least ε apart in the metricdn. Denote by N(n, ε) the maximum cardinality of an (n,ε)-separated set. The topological entropy of the map fis defined by

h(f) = lim_{ε→0} lim sup_{n→∞} (1/n) log N(n, ε).

Interpretation

Since X is compact, N(n, ε) is finite and represents the number of distinguishable orbit segments of length n, assuming that we cannot distinguish points within ε of one another. A straightforward argument shows that the limit defining h(f) always exists in the extended real line (but could be infinite). This limit may be interpreted as the measure of the average exponential growth of the number of distinguishable orbit segments. In this sense, it measures the complexity of the topological dynamical system (X, f). Rufus Bowen extended this definition of topological entropy in a way which permits X to be noncompact.
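This definition can be made concrete numerically. Below is a small sketch (not from the source; the greedy construction and all function names are mine) that estimates the growth rate of N(n, ε) for the doubling map T(x) = 2x mod 1 on the circle, whose topological entropy is log 2:

```python
import math

def d_circle(x, y):
    """Distance on the circle R/Z."""
    t = abs(x - y) % 1.0
    return min(t, 1.0 - t)

def d_n(x, y, n):
    """Bowen's metric d_n for the doubling map T(x) = 2x mod 1."""
    return max(d_circle((2 ** i * x) % 1.0, (2 ** i * y) % 1.0) for i in range(n))

def n_separated(n, eps, grid=2048):
    """Size of a greedily built (n, eps)-separated set of grid points;
    this lower-bounds the true maximum cardinality N(n, eps)."""
    kept = []
    for j in range(grid):
        x = j / grid
        if all(d_n(x, y, n) >= eps for y in kept):
            kept.append(x)
    return len(kept)

# N(n, eps) roughly doubles with each extra iterate, so (1/n) log N -> log 2.
growth = n_separated(5, 0.1) / n_separated(4, 0.1)
```

The greedy set is maximal but not necessarily maximum, so it only lower-bounds N(n, ε); nevertheless, the near-doubling of the count per extra iterate reflects h = log 2.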

11.2 Properties

• Let f be an expansive homeomorphism of a compact metric space X and let C be a topological generator. Then the topological entropy of f relative to C is equal to the topological entropy of f, i.e.

h(f) = H(f, C)

• Let f : X → X be a continuous transformation of a compact metric space X, let hμ(f) be the measure-theoretic entropy of f with respect to μ, and let M(X, f) be the set of all f-invariant Borel probability measures. Then

h(f) = sup_{μ ∈ M(X, f)} hμ(f).

• In general the supremum of the functions hμ over the set M(X, f) is not attained, but if additionally the entropy map μ ↦ hμ(f) : M(X, f) → ℝ is upper semi-continuous, then a measure of maximal entropy exists.

• If f has a unique measure of maximal entropy μ, then f is ergodic with respect to μ.

11.3 Examples

• Let σ : Σₖ → Σₖ, (xₙ) ↦ (xₙ₋₁), denote the full two-sided k-shift on the symbols {1, …, k}. Let C = {[1], …, [k]} denote the partition of Σₖ into cylinders of length 1. Then

⋁_{j=0}^{n−1} σ⁻ʲ(C) is a partition of Σₖ for all n ∈ ℕ, and the number of its sets is kⁿ. The partitions are open covers and C is a topological generator. Hence

h(σ) = h(σ, C) = lim_{n→∞} (1/n) log kⁿ = log k.

The measure-theoretic entropy of the Bernoulli (1/k, …, 1/k)-measure is also log k; hence it is a measure of maximal entropy. It can further be shown that no other measures of maximal entropy exist.

• Let A be an irreducible k × k matrix with entries in {0, 1}, and let σ : Σ_A → Σ_A be the corresponding subshift of finite type. Then h(σ) = log λ, where λ is the largest positive eigenvalue of A.
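The eigenvalue formula in the last example can be checked directly. A minimal sketch (assuming NumPy is available) for the golden-mean shift, whose transition matrix forbids the word 11:

```python
import numpy as np

A = np.array([[1, 1], [1, 0]])        # golden-mean subshift: the block "11" is forbidden
lam = max(abs(np.linalg.eigvals(A)))  # Perron root, the largest positive eigenvalue
h = float(np.log(lam))                # topological entropy h(sigma) = log lambda
```

Here λ is the golden ratio (1 + √5)/2, so h(σ) = log((1 + √5)/2) ≈ 0.4812.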

11.4 Notes

[1] Since X is compact, H(C) is always finite, even for an infinite cover C. The use of arbitrary covers yields the same value of entropy.

11.5 See also

• Milnor–Thurston kneading theory

• For the measure of correlations in systems with topological order, see Topological entanglement entropy

11.6 References

• Adler, R. L.; Konheim, Allan G.; McAndrew, M. H. (1965). “Topological entropy”. Transactions of the American Mathematical Society 114 (2): 309–319. doi:10.2307/1994177. Zbl 0127.13102.

• Dmitri Anosov (2001), “T/t093040”, in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Roy Adler, Tomasz Downarowicz, Michał Misiurewicz, Topological entropy at Scholarpedia

• Walters, Peter (1982). An Introduction to Ergodic Theory. Graduate Texts in Mathematics 79. Springer-Verlag. ISBN 0-387-95152-0. Zbl 0475.28009.


This article incorporates material from Topological Entropy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.


Chapter 12

Measure-preserving dynamical system

In mathematics, a measure-preserving dynamical system is an object of study in the abstract formulation of dynamical systems, and ergodic theory in particular.

12.1 Definition

A measure-preserving dynamical system is defined as a probability space and a measure-preserving transformation on it. In more detail, it is a system

(X, B, μ, T)

with the following structure:

• X is a set,
• B is a σ-algebra over X,
• μ : B → [0, 1] is a probability measure, so that μ(X) = 1 and μ(∅) = 0,

• T : X → X is a measurable transformation which preserves the measure μ, i.e., for every A ∈ B, μ(T⁻¹(A)) = μ(A).

This definition can be generalized to the case in which T is not a single transformation that is iterated to give the dynamics of the system, but instead is a monoid (or even a group) of transformations Tₛ : X → X parametrized by s ∈ ℤ (or ℝ, or ℕ ∪ {0}, or [0, +∞)), where each transformation Tₛ satisfies the same requirements as T above. In particular, the transformations obey the rules:

• T₀ = id_X : X → X, the identity function on X;
• Tₛ ∘ Tₜ = Tₜ₊ₛ, whenever all the terms are well-defined;
• Tₛ⁻¹ = T₋ₛ, whenever all the terms are well-defined.

The earlier, simpler case fits into this framework by defining Tₛ = Tˢ for s ∈ ℕ. The existence of invariant measures for certain maps and Markov processes is established by the Krylov–Bogolyubov theorem.

12.2 Examples

[Figure: a (Lebesgue measure)-preserving map T : [0,1) → [0,1), x ↦ 2x mod 1, showing an interval A and its preimage T⁻¹(A).]

Examples include:

• μ could be the normalized angle measure dθ/2π on the unit circle, and T a rotation. See the equidistribution theorem;

• the Bernoulli scheme;

• the interval exchange transformation;

• with the definition of an appropriate measure, a subshift of finite type;

• the base flow of a random dynamical system.
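For the doubling map T(x) = 2x mod 1 pictured above, the measure-preserving property can be verified exactly: the preimage of an interval [a, b) is a union of two intervals of half the length. A minimal sketch (the helper name `preimage` is my own), using exact rational arithmetic:

```python
from fractions import Fraction as F

def preimage(a, b):
    """T^{-1}([a, b)) for T(x) = 2x mod 1: two disjoint intervals of half the length."""
    return [(a / 2, b / 2), ((a + 1) / 2, (b + 1) / 2)]

a, b = F(1, 5), F(7, 10)
total = sum(hi - lo for lo, hi in preimage(a, b))  # Lebesgue measure of the preimage
assert total == b - a                              # measure is preserved exactly
```

Since intervals generate the Borel σ-algebra, this identity on intervals already characterizes invariance of Lebesgue measure.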

12.3 Homomorphisms

The concepts of a homomorphism and of an isomorphism may be defined.


Consider two dynamical systems (X, A, μ, T) and (Y, B, ν, S). Then a mapping

φ : X → Y

is a homomorphism of dynamical systems if it satisfies the following three properties:

1. The map φ is measurable,

2. For each B ∈ B, one has μ(φ⁻¹B) = ν(B),

3. For μ-almost all x ∈ X, one has φ(Tx) = S(φx).

The system (Y, B, ν, S) is then called a factor of (X, A, μ, T).

The map φ is an isomorphism of dynamical systems if, in addition, there exists another mapping

ψ : Y → X

that is also a homomorphism, which satisfies

1. For μ-almost all x ∈ X, one has x = ψ(φx);

2. For ν-almost all y ∈ Y, one has y = φ(ψy).

Hence, one may form a category of dynamical systems and their homomorphisms.

12.4 Generic points

A point x ∈ X is called a generic point if the orbit of the point is distributed uniformly according to the measure.

12.5 Symbolic names and generators

Consider a dynamical system (X, B, T, μ), and let Q = {Q₁, …, Q_k} be a partition of X into k measurable pairwise disjoint pieces. Given a point x ∈ X, clearly x belongs to only one of the Qᵢ. Similarly, the iterated point Tⁿx can belong to only one of the parts as well. The symbolic name of x, with regard to the partition Q, is the sequence of integers {aₙ} such that

Tⁿx ∈ Q_{aₙ}.

The set of symbolic names with respect to a partition is called the symbolic dynamics of the dynamical system. A partition Q is called a generator or generating partition if μ-almost every point x has a unique symbolic name.
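For the doubling map T(x) = 2x mod 1 with the partition Q = {[0, 1/2), [1/2, 1)}, the symbolic name of a point is exactly its binary expansion; a small sketch (the function name is my own):

```python
def symbolic_name(x, n):
    """First n symbols of x under T(x) = 2x mod 1 and Q = {[0, 1/2), [1/2, 1)}.

    The symbol sequence reproduces the binary expansion of x, so this
    partition is a generator for the doubling map."""
    name = []
    for _ in range(n):
        name.append(0 if x < 0.5 else 1)
        x = (2 * x) % 1.0
    return name

digits = symbolic_name(0.3125, 5)   # 0.3125 = 0.01010... in binary
```

Since distinct points have distinct binary expansions (apart from the countable set of dyadic rationals), almost every point has a unique symbolic name here.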

12.6 Operations on partitions

Given a partition Q = {Q₁, …, Q_k} and a dynamical system (X, B, T, μ), we define the T-pullback of Q as

T⁻¹Q = {T⁻¹Q₁, …, T⁻¹Q_k}.

Further, given two partitions Q = {Q₁, …, Q_k} and R = {R₁, …, R_m}, we define their refinement as

Q ∨ R = {Qᵢ ∩ Rⱼ | i = 1, …, k, j = 1, …, m, μ(Qᵢ ∩ Rⱼ) > 0}.

With these two constructs, we may define the refinement of an iterated pullback,

⋁_{n=0}^{N} T⁻ⁿQ = {Q_{i₀} ∩ T⁻¹Q_{i₁} ∩ ⋯ ∩ T⁻ᴺQ_{i_N} | iₗ = 1, …, k; ℓ = 0, …, N; μ(Q_{i₀} ∩ T⁻¹Q_{i₁} ∩ ⋯ ∩ T⁻ᴺQ_{i_N}) > 0},

which plays a crucial role in the construction of the measure-theoretic entropy of a dynamical system.

12.7 Measure-theoretic entropy

The entropy of a partition Q is defined as[1][2]

H(Q) = −∑_{m=1}^{k} μ(Q_m) log μ(Q_m).

The measure-theoretic entropy of a dynamical system (X, B, T, μ) with respect to a partition Q = {Q₁, …, Q_k} is then defined as

h_μ(T, Q) = lim_{N→∞} (1/N) H(⋁_{n=0}^{N} T⁻ⁿQ).

Finally, the Kolmogorov–Sinai, metric, or measure-theoretic entropy of a dynamical system (X, B, T, μ) is defined as

h_μ(T) = sup_Q h_μ(T, Q),

where the supremum is taken over all finite measurable partitions. A theorem of Yakov G. Sinai in 1959 shows that the supremum is actually attained on partitions that are generators. Thus, for example, the entropy of the Bernoulli process is log 2, since almost every real number has a unique binary expansion. That is, one may partition the unit interval into the intervals [0, 1/2) and [1/2, 1). Every real number x is either less than 1/2 or not; and likewise so is the fractional part of 2ⁿx.

If the space X is compact and endowed with a topology, or is a metric space, then the topological entropy may also be defined.
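The log 2 computation can be sketched numerically (helper names are my own). For the doubling map with the two-interval partition Q, the refinement under N pullbacks consists of 2^(N+1) dyadic intervals of equal Lebesgue measure, so (1/N)·H of the refinement tends to log 2:

```python
import math

def partition_entropy(probs):
    """H(Q) = -sum over m of mu(Q_m) log mu(Q_m), with the convention 0 log 0 = 0."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def doubling_entropy_estimate(N):
    """(1/N) H(Q v T^{-1}Q v ... v T^{-N}Q) for T(x) = 2x mod 1 and
    Q = {[0, 1/2), [1/2, 1)}: the refinement is 2**(N+1) equal dyadic cells."""
    cells = 2 ** (N + 1)
    return partition_entropy([1.0 / cells] * cells) / N

est = doubling_entropy_estimate(16)   # tends to log 2 ≈ 0.6931 as N grows
```

Here the finite-N value is ((N + 1)/N) log 2, which converges to log 2 from above.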


12.8 See also

• Krylov–Bogolyubov theorem on the existence of invariant measures

• Poincaré recurrence theorem

12.9 References

[1] Ya. G. Sinai (1959), “On the Notion of Entropy of a Dynamical System”, Doklady of Russian Academy of Sciences 124, pp. 768–771.

[2] Ya. G. Sinai (2007), “Metric Entropy of Dynamical System”

• Michael S. Keane, “Ergodic theory and subshifts of finite type” (1991), appearing as Chapter 2 in Ergodic Theory, Symbolic Dynamics and Hyperbolic Spaces, Tim Bedford, Michael Keane and Caroline Series, eds. Oxford University Press, Oxford (1991). ISBN 0-19-853390-X. (Provides an expository introduction, with exercises, and extensive references.)

• Lai-Sang Young, “Entropy in Dynamical Systems” (pdf; ps), appearing as Chapter 16 in Entropy, Andreas Greven, Gerhard Keller, and Gerald Warnecke, eds. Princeton University Press, Princeton, NJ (2003). ISBN 0-691-11338-6

12.10 Examples

• T. Schürmann and I. Hoffmann, “The entropy of strange billiards inside n-simplexes”, J. Phys. A 28, p. 5033ff, 1995.


Chapter 13

List of Feynman diagrams

This is a list of common Feynman diagrams.


Chapter 14

Canonical quantization

In physics, canonical quantization is a procedure for quantizing a classical theory, while attempting to preserve the formal structure, such as symmetries, of the classical theory, to the greatest extent possible.

Historically, this was not quite Werner Heisenberg's route to obtaining quantum mechanics, but Paul Dirac introduced it in his 1926 doctoral thesis, the “method of classical analogy” for quantization,[1] and detailed it in his classic text.[2] The word canonical arises from the Hamiltonian approach to classical mechanics, in which a system's dynamics is generated via canonical Poisson brackets, a structure which is only partially preserved in canonical quantization.

This method was further used in the context of quantum field theory by Paul Dirac, in his construction of quantum electrodynamics. In the field theory context, it is also called second quantization, in contrast to the semi-classical first quantization for single particles.

14.1 History

Quantum physics first dealt only with the quantization of the motion of particles, leaving the electromagnetic field classical, hence the name quantum mechanics.[3]

Later the electromagnetic field was also quantized, and even the particles themselves were represented through quantized fields, resulting in the development of quantum electrodynamics (QED) and quantum field theory in general.[4] Thus, by convention, the original form of particle quantum mechanics is denoted first quantization, while quantum field theory is formulated in the language of second quantization.

14.2 First quantization

Main article: First quantization

14.2.1 Single particle systems

The following exposition is based on Dirac's treatise on quantum mechanics.[2] In the classical mechanics of a particle, there are dynamic variables which are called coordinates (x) and momenta (p). These specify the state of a classical system. The canonical structure (also known as the symplectic structure) of classical mechanics consists of Poisson brackets between these variables, such as {x, p} = 1. All transformations of variables which preserve these brackets are allowed as canonical transformations in classical mechanics. Motion itself is such a canonical transformation.

By contrast, in quantum mechanics, all significant features of a particle are contained in a state |ψ⟩, called a quantum state. Observables are represented by operators acting on a Hilbert space of such quantum states. The (eigen)value of an operator acting on one of its eigenstates represents the value of a measurement on the particle thus represented. For example, the energy is read off by the Hamiltonian operator Ĥ acting on a state |ψₙ⟩, yielding

Ĥ|ψₙ⟩ = Eₙ|ψₙ⟩,

where Eₙ is the characteristic energy associated to this |ψₙ⟩ eigenstate. Any state could be represented as a linear combination of eigenstates of energy; for example,

|ψ⟩ = ∑_{n=0}^{∞} aₙ|ψₙ⟩,

where aₙ are constant coefficients.

As in classical mechanics, all dynamical operators can be represented by functions of the position and momentum ones, X and P, respectively. The connection between this representation and the more usual wavefunction representation is given by the eigenstate of the position operator X representing a particle at position x, which is denoted by an element |x⟩ in the Hilbert space, and which satisfies X|x⟩ = x|x⟩. Then, ψ(x) = ⟨x|ψ⟩.


Likewise, the eigenstates |p⟩ of the momentum operator P specify the momentum representation: ψ(p) = ⟨p|ψ⟩. The central relation between these operators is a quantum analog of the above Poisson bracket of classical mechanics, the canonical commutation relation,

[X, P] = XP − PX = iℏ.

This relation encodes (and formally leads to) the uncertainty principle, in the form Δx Δp ≥ ħ/2. This algebraic structure may thus be considered as the quantum analog of the canonical structure of classical mechanics.
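As an illustrative sketch not taken from the source (NumPy assumed), the canonical commutation relation can be realized with finite matrices in a truncated harmonic-oscillator number basis; the relation then holds exactly except in the last diagonal entry, an unavoidable artifact of truncation, since [X, P] = iħ has no finite-dimensional representations:

```python
import numpy as np

N = 12                                     # truncation dimension (an approximation)
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator in the number basis
X = (a + a.T) / np.sqrt(2)                 # position, in units with hbar = m = omega = 1
P = 1j * (a.T - a) / np.sqrt(2)            # momentum
C = X @ P - P @ X                          # the canonical commutator [X, P]
# C equals i times the identity, except for the bottom-right corner entry
```

Taking the trace of C shows why the corner defect is necessary: tr[X, P] = 0 for finite matrices, while tr(iħ·1) is not.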

14.2.2 Many-particle systems

When turning to N-particle systems, i.e., systems containing N identical particles (particles characterized by the same quantum numbers such as mass, charge and spin), it is necessary to extend the single-particle state function ψ(r) to the N-particle state function ψ(r₁, r₂, …, r_N). A fundamental difference between classical and quantum mechanics concerns the concept of indistinguishability of identical particles. Only two species of particles are thus possible in quantum physics, the so-called bosons and fermions, which obey the rules:

ψ(r₁, …, rⱼ, …, rₖ, …, r_N) = +ψ(r₁, …, rₖ, …, rⱼ, …, r_N) (bosons),

ψ(r₁, …, rⱼ, …, rₖ, …, r_N) = −ψ(r₁, …, rₖ, …, rⱼ, …, r_N) (fermions),

where we have interchanged two coordinates (rⱼ, rₖ) of the state function. The usual wave function is obtained using the Slater determinant and the identical particles theory. Using this basis, it is possible to solve various many-particle problems.

14.3 Issues and limitations

Dirac’s book[2] details his popular rule of supplanting Poisson brackets by commutators:

{A, B} ⟼ (1/iħ)[A, B].

This rule is not as simple or well-defined as it appears. It is ambiguous when products of classical observables are involved which correspond to noncommuting products of the analog operators, and it fails in polynomials of sufficiently high order. For example, the reader is encouraged to check the following pair of equalities invented by Groenewold,[5] assuming only the commutation relation [x, p] = iħ:

{x³, p³} + (1/12){{p², x³}, {x², p³}} = 0,

(1/iħ)[x³, p³] + (1/(12iħ))[(1/iħ)[p², x³], (1/iħ)[x², p³]] = −3ħ².
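Groenewold's pair of equalities can be checked numerically in a truncated harmonic-oscillator basis (a sketch I am adding, not from the source; NumPy assumed, ħ = 1, and the inspected block is chosen well inside the matrix so that truncation edge effects do not contaminate it):

```python
import numpy as np

N = 24                                     # truncated number basis; interior entries are exact
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
x = (a + a.T) / np.sqrt(2)                 # position, hbar = 1 units
p = 1j * (a.T - a) / np.sqrt(2)            # momentum

def comm(A, B):
    return A @ B - B @ A

lhs = (comm(x @ x @ x, p @ p @ p) / 1j
       + comm(comm(p @ p, x @ x @ x) / 1j, comm(x @ x, p @ p @ p) / 1j) / (12 * 1j))
block = lhs[:N - 10, :N - 10]              # entries unaffected by the truncation
# block equals -3 times the identity: the -3*hbar**2 anomaly with hbar = 1
```

The classical combination of Poisson brackets vanishes identically, so the constant −3 appearing here is purely a quantum ordering effect.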

The right-hand-side “anomaly” term −3ħ² is not predicted by application of the above naive quantization rule. In order to make this procedure more rigorous, one might hope to take an axiomatic approach to the problem. If Q represents the quantization map that acts on functions f in classical phase space, then the following properties are usually considered desirable:[6]

1. Q_x ψ = xψ and Q_p ψ = −iħ ∂ₓψ (elementary position/momentum operators)

2. f ⟼ Q_f is a linear map

3. [Q_f, Q_g] = iħ Q_{{f,g}} (Poisson bracket)

4. Q_{g∘f} = g(Q_f) (von Neumann rule).

However, not only are these four properties mutually inconsistent; any three of them are also inconsistent![7] As it turns out, the only pairs of these properties that lead to self-consistent, nontrivial solutions are 2+3, and possibly 1+3 or 1+4. Accepting properties 1+2 along with a weaker condition that 3 be true only asymptotically in the limit ħ→0 (see Moyal bracket) is deformation quantization, and some extraneous information must be provided, as in the standard theories utilized in most of physics. Accepting properties 1+2+3 but restricting the space of quantizable observables to exclude terms such as the cubic ones in the above example amounts to geometric quantization.

14.4 Second quantization: field theory

Main article: Second quantization

Quantum mechanics was successful at describing non-relativistic systems with fixed numbers of particles, but a new framework was needed to describe systems in which particles can be created or destroyed, for example, the electromagnetic field, considered as a collection of photons. It was eventually realized that special relativity was inconsistent with single-particle quantum mechanics, so that all particles are now described relativistically by quantum fields.

When the canonical quantization procedure is applied to a field, such as the electromagnetic field, the classical field variables become quantum operators. Thus, the normal modes comprising the amplitude of the field become quantized, and the quanta are identified with individual particles or excitations. For example, the quanta of the electromagnetic field are identified with photons. Unlike first quantization, conventional second quantization is completely unambiguous, in effect a functor.

Historically, quantizing the classical theory of a single particle gave rise to a wavefunction. The classical equations of motion of a field are typically identical in form to the (quantum) equations for the wave-function of one of its quanta. For example, the Klein–Gordon equation is the classical equation of motion for a free scalar field, but also the quantum equation for a scalar particle wave-function. This meant that quantizing a field appeared to be similar to quantizing a theory that was already quantized, leading to the fanciful term second quantization in the early literature, which is still used to describe field quantization, even though the modern interpretation detailed is different.

One drawback to canonical quantization for a relativistic field is that by relying on the Hamiltonian to determine time dependence, relativistic invariance is no longer manifest. Thus it is necessary to check that relativistic invariance is not lost. Alternatively, the Feynman integral approach is available for quantizing relativistic fields, and is manifestly invariant. For non-relativistic field theories, such as those used in condensed matter physics, Lorentz invariance is not an issue.

14.4.1 Field operators

Quantum mechanically, the variables of a field (such as the field's amplitude at a given point) are represented by operators on a Hilbert space. In general, all observables are constructed as operators on the Hilbert space, and the time-evolution of the operators is governed by the Hamiltonian, which must be a positive operator. A state |0⟩ annihilated by the Hamiltonian must be identified as the vacuum state, which is the basis for building all other states. In a non-interacting (free) field theory, the vacuum is normally identified as a state containing zero particles. In a theory with interacting particles, identifying the vacuum is more subtle, due to vacuum polarization, which implies that the physical vacuum in quantum field theory is never really empty. For further elaboration, see the articles on the quantum mechanical vacuum and the vacuum of quantum chromodynamics. The details of the canonical quantization depend on the field being quantized, and whether it is free or interacting.

Real scalar field

A scalar field theory provides a good example of the canonical quantization procedure.[8] Classically, a scalar field is a collection of an infinity of oscillator normal modes. For simplicity, the quantization can be carried out in a 1+1-dimensional space-time ℝ×S¹, in which the spatial direction is compactified to a circle of circumference 2π, rendering the momenta discrete. The classical Lagrangian density is then

L(φ) = ½(∂ₜφ)² − ½(∂ₓφ)² − ½m²φ² − V(φ),

where V(φ) is a potential term, often taken to be a polynomial or monomial of degree 3 or higher. The action functional is

S(φ) = ∫ L(φ) dx dt = ∫ L(φ, ∂ₜφ) dt.

The canonical momentum obtained via the Legendre transform using the action L is π = ∂ₜφ, and the classical Hamiltonian is found to be

H(φ, π) = ∫ dx [½π² + ½(∂ₓφ)² + ½m²φ² + V(φ)].

Canonical quantization treats the variables φ(x) and π(x) as operators with canonical commutation relations at time t = 0, given by

[φ(x), φ(y)] = 0,  [π(x), π(y)] = 0,  [φ(x), π(y)] = iħ δ(x − y).

Operators constructed from φ and π can then formally be defined at other times via the time-evolution generated by the Hamiltonian:

O(t) = e^{itH} O e^{−itH}.

However, since φ and π do not commute, this expression is ambiguous at the quantum level. The problem is to construct a representation of the relevant operators O on a Hilbert space H and to construct a positive operator H as a quantum operator on this Hilbert space in such a way that it gives this evolution for the operators O as given by the preceding equation, and to show that H contains a vacuum state |0⟩ on which H has zero eigenvalue. In practice, this construction is a difficult problem for interacting field theories, and has been solved completely only in a few simple cases via the methods of constructive quantum field theory. Many of these issues can be sidestepped using the Feynman integral as described for a particular V(φ) in the article on scalar field theory.

In the case of a free field, with V(φ) = 0, the quantization procedure is relatively straightforward. It is convenient to Fourier transform the fields, so that

φₖ = ∫ φ(x) e^{−ikx} dx,  πₖ = ∫ π(x) e^{−ikx} dx.

The reality of the fields implies that


φ₋ₖ = φₖ†,  π₋ₖ = πₖ†.

The classical Hamiltonian may be expanded in Fourier modes as

H = ½ ∑_{k=−∞}^{∞} [πₖ πₖ† + ωₖ² φₖ φₖ†],

where ωₖ = √(k² + m²).

This Hamiltonian is thus recognizable as an infinite sum of classical normal mode oscillator excitations φₖ, each one of which is quantized in the standard manner, so the free quantum Hamiltonian looks identical. It is the φₖ that have become operators obeying the standard commutation relations, [φₖ, πₖ†] = [φₖ†, πₖ] = iħ, with all others vanishing. The collective Hilbert space of all these oscillators is thus constructed using creation and annihilation operators constructed from these modes,

aₖ = (1/√(2ħωₖ)) (ωₖφₖ + iπₖ),  aₖ† = (1/√(2ħωₖ)) (ωₖφₖ† − iπₖ†),

for which [aₖ, aₖ†] = 1 for all k, with all other commutators vanishing.

The vacuum |0⟩ is taken to be annihilated by all of the aₖ, and H is the Hilbert space constructed by applying any combination of the infinite collection of creation operators aₖ† to |0⟩. This Hilbert space is called Fock space. For each k, this construction is identical to a quantum harmonic oscillator. The quantum field is an infinite array of quantum oscillators. The quantum Hamiltonian then amounts to

H = ∑_{k=−∞}^{∞} ħωₖ aₖ†aₖ = ∑_{k=−∞}^{∞} ħωₖ Nₖ,

where Nₖ may be interpreted as the number operator giving the number of particles in a state with momentum k. This Hamiltonian differs from the previous expression by the subtraction of the zero-point energy ħωₖ/2 of each harmonic oscillator. This satisfies the condition that H must annihilate the vacuum, without affecting the time-evolution of operators via the above exponentiation operation. This subtraction of the zero-point energy may be considered to be a resolution of the quantum operator ordering ambiguity, since it is equivalent to requiring that all creation operators appear to the left of annihilation operators in the expansion of the Hamiltonian. This procedure is known as Wick ordering or normal ordering.
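For a single mode, the normal-ordered Hamiltonian is just ħω times the number operator, whose spectrum is the nonnegative integers; a truncated sketch I am adding (NumPy assumed):

```python
import numpy as np

N = 10
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # one-mode annihilation operator, number basis
num = a.T @ a                             # number operator a† a (diagonal in this basis)
occupations = np.diag(num)                # its eigenvalues: 0, 1, ..., N-1
# the normal-ordered one-mode Hamiltonian is H = hbar * omega * num,
# which annihilates the vacuum (occupation 0) as required
```

Since `num` is diagonal with entries 0, 1, …, N−1, the vacuum column is the zero eigenvector of H, in line with the normal-ordering convention above.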

Other fields

All other fields can be quantized by a generalization of this procedure. Vector or tensor fields simply have more components, and independent creation and destruction operators must be introduced for each independent component. If a field has any internal symmetry, then creation and destruction operators must be introduced for each component of the field related to this symmetry as well. If there is a gauge symmetry, then the number of independent components of the field must be carefully analyzed to avoid over-counting equivalent configurations, and gauge-fixing may be applied if needed.

It turns out that commutation relations are useful only for quantizing bosons, for which the occupancy number of any state is unlimited. To quantize fermions, which satisfy the Pauli exclusion principle, anti-commutators are needed. These are defined by {A, B} = AB + BA.

When quantizing fermions, the fields are expanded in creation and annihilation operators, θₖ†, θₖ, which satisfy

{θₖ, θₗ†} = δₖₗ,  {θₖ, θₗ} = 0,  {θₖ†, θₗ†} = 0.

The states are constructed on a vacuum |0⟩ annihilated by the θₖ, and the Fock space is built by applying all products of creation operators θₖ† to |0⟩. Pauli's exclusion principle is satisfied because (θₖ†)²|0⟩ = 0, by virtue of the anti-commutation relations.
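For a single fermionic mode, these anticommutation relations are realized by 2×2 matrices; a sketch I am adding (NumPy assumed):

```python
import numpy as np

theta = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilation operator for one fermionic mode
dag = theta.T                               # creation operator theta†
anti = theta @ dag + dag @ theta            # {theta, theta†}: equals the identity
squared = dag @ dag                         # (theta†)² = 0 encodes Pauli exclusion
```

The two basis states are the empty and singly occupied mode; `squared` being the zero matrix is exactly the statement that the mode cannot be occupied twice.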

14.4.2 Condensates

The construction of the scalar field states above assumed that the potential was minimized at φ = 0, so that the vacuum minimizing the Hamiltonian satisfies ⟨φ⟩ = 0, indicating that the vacuum expectation value (VEV) of the field is zero. In cases involving spontaneous symmetry breaking, it is possible to have a non-zero VEV, because the potential is minimized for a value φ = v. This occurs, for example, if V(φ) = gφ⁴ and m² < 0, for which the minimum energy is found at v = ±m/√g. The value of v in one of these vacua may be considered as a condensate of the field φ. Canonical quantization then can be carried out for the shifted field φ(x,t) − v, and particle states with respect to the shifted vacuum are defined by quantizing the shifted field. This construction is utilized in the Higgs mechanism in the standard model of particle physics.

14.5 Mathematical quantization

The classical theory is described using a spacelike foliation of spacetime, with the state at each slice being described by an element of a symplectic manifold, and with the time evolution given by the symplectomorphism generated by a Hamiltonian function over the symplectic manifold. The quantum algebra of “operators” is an ħ-deformation of the algebra of smooth functions over the symplectic space such that the leading term in the Taylor expansion over ħ of the commutator [A, B] expressed in the phase space formulation is iħ{A, B}. (Here, the curly braces denote the Poisson bracket. The subleading terms are all encoded in the Moyal bracket, the suitable quantum deformation of the Poisson bracket.) In general, for the quantities (observables) involved, and providing the arguments of such brackets, ħ-deformations are highly nonunique; quantization is an “art”, and is specified by the physical context. (Two different quantum systems may represent two different, inequivalent, deformations of the same classical limit, ħ → 0.)

Now, one looks for unitary representations of this quantum algebra. With respect to such a unitary representation, a symplectomorphism in the classical theory would now deform to a (metaplectic) unitary transformation. In particular, the time evolution symplectomorphism generated by the classical Hamiltonian deforms to a unitary transformation generated by the corresponding quantum Hamiltonian.

A further generalization is to consider a Poisson manifold instead of a symplectic space for the classical theory and perform an ħ-deformation of the corresponding Poisson algebra or even Poisson supermanifolds.

14.6 See also

• Correspondence principle

• Creation and annihilation operators

• Dirac bracket

• Moyal bracket

• Weyl quantization

• Geometric quantization

14.7 References

[1] Dirac, P. A. M. (1925). “The Fundamental Equations of Quantum Mechanics”. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 109 (752): 642. Bibcode:1925RSPSA.109..642D. doi:10.1098/rspa.1925.0150.

[2] Dirac, P. A. M. (1982). Principles of Quantum Mechanics. USA: Oxford University Press. ISBN 0-19-852011-5.

[3] van der Waerden, B. L. (1968). Sources of Quantum Mechanics. New York: Dover Publications. ISBN 0486618811.

[4] Schweber, S. S. (1983). QED and the Men Who Made It. Princeton: Princeton University Press. ISBN 0691033277.

[5] H. J. Groenewold, “On the Principles of elementary quantum mechanics”, Physica 12 (1946) pp. 405–460. doi:10.1016/S0031-8914(46)80059-4

[6] J. R. Shewell, “On the Formation of Quantum-Mechanical Operators”. Am. J. Phys. 27 (1959). doi:10.1119/1.1934740

[7] S. T. Ali, M. Engliš, “Quantization Methods: A Guide for Physicists and Analysts”. Rev. Math. Phys. 17 (2005) pp. 391–490. doi:10.1142/S0129055X05002376

[8] This treatment is based primarily on Ch. 1 in Connes, Alain; Marcolli, Matilde (2008). Noncommutative Geometry, Quantum Fields, and Motives. American Mathematical Society. ISBN 0-8218-4210-2.

14.7.1 Historical References

• Silvan S. Schweber: QED and the Men Who Made It, Princeton Univ. Press, 1994, ISBN 0-691-03327-7

14.7.2 General Technical References

• James D. Bjorken, Sidney D. Drell: Relativistic Quantum Mechanics, New York, McGraw-Hill, 1964

• Alexander Altland, Ben Simons: Condensed Matter Field Theory, Cambridge Univ. Press, 2009, ISBN 978-0-521-84508-3

• Franz Schwabl: Advanced Quantum Mechanics, Berlin and elsewhere, Springer, 2009, ISBN 978-3-540-85061-8

• An Introduction to Quantum Field Theory, by M. E. Peskin and D. V. Schroeder, ISBN 0-201-50397-2

14.8 External links

• What is “Relativistic Canonical Quantization”?

• Pedagogic Aides to Quantum Field Theory. Click on the links for Chaps. 1 and 2 at this site to find an extensive, simplified introduction to second quantization. See Sect. 1.5.2 in Chap. 1. See Sect. 2.7 and the chapter summary in Chap. 2.

Page 76: TURBULENCE, ENTROPY AND DYNAMICS...TURBULENCE, ENTROPY AND DYNAMICS Lecture Notes, UPC 2014 Jose M. Redondo

70 CHAPTER 14. CANONICAL QUANTIZATION

14.9 Text and image sources, contributors, and licenses

14.9.1 Text

• Turbulence Source: http://en.wikipedia.org/wiki/Turbulence?oldid=635120614 Contributors: Bryan Derksen, The Anome, Ap, XJaM, Peterlin, Heron, Stevertigo, Nealmcb, Michael Hardy, JakeVortex, Sannse, Ronz, Docu, Aarchiba, Glenn, Cimon Avaro, Hashar, Emperorbma, Charles Matthews, Wik, DJ Clayworth, Taxman, Chris 73, Sverdrup, Moink, Wjbeaty, Giftlite, BenFrantzDale, Tom harrison, Avsa, Dmmaus, Gunnar Larsson, Karol Langner, Rdsmith4, Karl-Henner, Sonett72, Deglr6328, Kate, Discospinster, Marsian, Pban92, Dungodung, Linuxlad, Alansohn, Polarscribe, XB-70, Saga City, Dirac1933, Drbreznjev, Tobyc75, Novacatz, Nuno Tavares, Woohookitty, RHaworth, Lensovet, GregorB, Isnow, RuM, Rnt20, Graham87, Rjwilmsi, Nneonneo, Margosbot, Efficacy, Chobot, Jaraalbe, Wavelength, Zaidpjd, Nathoq, Anuran, Koffieyahoo, Shell Kinney, Dtrebbien, Ojcit, JocK, Gbm, Raven4x4x, Uriel.frisch, Light current, Phelixxx, Besselfunctions, SmackBot, Nscheffey, HalfShadow, Commander Keane bot, Gilliam, SchfiftyThree, Moshe Constantine Hassan Al-Silverburg, Complexica, Gbuffett, Skully Collins, KittyRainbow, Bigmantonyd, Akriasas, Just plain Bill, Guyjohnston, Vgy7ujm, Mbeychok, Thijs Elenbaas, Vitousek, Yashkochar, Yms, Werdan7, McPolu, Spiel496, Bobsagat, Abel Cavaşi, Joseph Solis in Australia, Beve, Nkayesmith, Fnfal, Cydebot, W.F.Galway, MC10, Kefisher, Lugnuts, Pascal.Tesson, Miketwardos, Christian75, Omicronpersei8, Thijs!bot, Headbomb, Vertium, Mailseth, AntiVandalBot, Orionus, Myanw, MikeLynch, JAnDbot, Charlesreid1, David Eppstein, LOSTSTAR, RReis, Salih, McSly, Belovedfreak, Dhaluza, KylieTastic, Idioma-bot, Philip Trueman, Gavoth, Omcnew, Corvus coronoides, Spiral5800, Ludwigo, D. Recorder, MasterHD, SieBot, I Like Cheeseburgers, Matthew Yeager, Snideology, Chansonh, RogueTeddy, Hamiltondaniel, Dolphin51, Bvdano, ClueBot, GorillaWarfare, Angelshide, Mild Bill Hiccup, Jdruedi, Wikisteff, Donebythesecondlaw, Reedjr5746, Djr32, Alexbot, Danmichaelo, Sun Creator, Ss785, Crowsnest, Ftoschi, Addbot, DOI bot, Hatashe, Hk44.44, Romanskolduns, Arbitrarily0, HerculeBot, Legobot, Luckas-bot, Yobot, Legobot II, Amirobot, KamikazeBot, Coulatssa, AnomieBOT, Daniele Pugliesi, Vanakaris, Mostiquera, Obersachsebot, Xqbot, Drilnoth, Nickkid5, BookWormHR, Rickproser, FrescoBot, Vyas.phy, Styxpaint, Goodbye Galaxy, Dankarl, Citation bot 1, Htmlvb, Gryllida, DixonDBot, Vp bongolan, Vrenator, Walter wiki 2009, RjwilmsiBot, EmausBot, KurtLC, Gimmetoo, Lequi7, Dcirovic, TheGGoose, Wayne Slam, Lgoodfriend, Mentibot, Ems2715, Napoleese, Xonqnopp, ClueBot NG, Jaceaerojock, Senthilvel32, Ng Jian Rong, Esnascosta, Mahmoudi Mehrizi, Helpful Pixie Bot, Calabe1992, Bibcode Bot, BG19bot, 2pem, Zhuding, Jay8g, Thal1989, Hashem sfarim, Techocontra, Constantine.wolski, Francislands, Duxwing, ChrisGualtieri, Prj1991, DrGeorge59, Whitezak, SassyLilNugget, Ugog Nizdast, Tbrandt20, Balljust, Anrnusna, Kevinholst, Monkbot, Link2778, Zwicker david, Zjz8868, Fimatic and Anonymous: 172

• Turbulence modeling Source: http://en.wikipedia.org/wiki/Turbulence%20modeling?oldid=634432149 Contributors: Charles Matthews, Karol Langner, Velella, Rjwilmsi, Salsb, PM Poon, SmackBot, Commander Keane bot, Andy M. Wang, Runcorn, Colonel Warden, Benoitroisin, Ebyabe, Thijs!bot, S Marshall, AntiVandalBot, Charlesreid1, EagleFan, Jaboles, HeyYo1988, Salih, Boeing767, MasterHD, JL-Bot, LAX, Djr32, Crowsnest, Addbot, Luckas-bot, Yobot, TaBOT-zerem, Some standardized rigour, Gia224, ArnaudContet, ClueBot NG, Makecat-bot, LarsSchmidtPedersen, S122389 and Anonymous: 26

• Reynolds stress equation model Source: http://en.wikipedia.org/wiki/Reynolds%20stress%20equation%20model?oldid=634973964 Contributors: Myasuda, SchreiberBike, MenoBot II, LittleWink, BG19bot, BattyBot, JPaquim and Reynolds15

• Boundary layer Source: http://en.wikipedia.org/wiki/Boundary%20layer?oldid=636749524 Contributors: Gareth Owen, SimonP, Maury Markowitz, Jdpipe, Michael Hardy, Klaus, GTBacchus, Rboatright, Cherkash, Charles Matthews, Reddi, Wik, Praveen, Moink, Giftlite, Wolfkeeper, BenFrantzDale, Abqwildcat, Mboverload, Dj245, Rich Farmbrough, Ericamick, Oleg Alexandrov, Woohookitty, Rtdrury, Ligar, Saperaud, Rjwilmsi, Jmcc150, Andrei Polyanin, Chobot, Bgwhite, Kummi, RaYmOnD, Nathoq, RussBot, Marcus Cyron, David R. Ingham, Grafen, Rubextablet, Knotnic, Sscomp2004, SmackBot, Deon Steyn, Trekphiler, Robma, DMacks, Voytek s, Jaganath, Richard77, CmdrObot, Fnfal, Gregbard, Cydebot, Meghaljani, Thijs!bot, Davidhorman, Ben pcc, Akradecki, .anacondabot, Rich257, JaGa, Salih, Mikael Häggström, Dhaluza, SgT LemMinG, FlyingBanana, VolkovBot, JohnBlackburne, TXiKiBoT, Kyle, Balaji.hrrn, Raymondwinn, Insanity Incarnate, Cj1340, SieBot, Andrew.Ainsworth, Chansonh, Lguinc, André Neves, Ariadacapo, Awickert, Alexbot, Machinyang, Crowsnest, SilvonenBot, Addbot, Medich1985, Moc5007, Jncraton, Girolamous, Ginosbot, Luckas-bot, Daniele Pugliesi, Killiondude, Rudolf.hellmuth, Citation bot, GrouchoBot, A.amitkumar, Hulk1986, Alxeedo, Smm5164, Pinethicket, Iwfyita, EmausBot, Rami radwan, ZéroBot, Suslindisambiguator, Andmok, Dohn joe, Rcsprinter123, Llightex, ClueBot NG, , CitationCleanerBot, Co6aka, Hunterrc95, Vijek, Frozenice2013, Rager12345 and Anonymous: 75

• Similitude (model) Source: http://en.wikipedia.org/wiki/Similitude%20(model)?oldid=635886444 Contributors: Patrick, Michael Hardy, Lexor, GTBacchus, Charles Matthews, David Shay, Longhair, Duk, Mdd, Keenan Pepper, PAR, Gene Nygaard, BD2412, Rjwilmsi, RussBot, DelftUser, Kewp, Marra, SmackBot, Ddcampayo, Bluebot, Neo-Jay, Tsca.bot, Fuhghettaboutit, Ohconfucius, Freshacconci, Wwmbes, JaGa, Robprain, Guillaume2303, Pinin, Mrinsuperable, Ncowan, Chansonh, Artreve, Crowsnest, Addbot, Denispir, Daniele Pugliesi, Stefano2046, MastiBot, EmausBot, Dai bach, Lusilier, Tuc62662 and Anonymous: 17

• Lagrangian and Eulerian specification of the flow field Source: http://en.wikipedia.org/wiki/Lagrangian%20and%20Eulerian%20specification%20of%20the%20flow%20field?oldid=615269489 Contributors: AxelBoldt, BenFrantzDale, Chadernook, robot, Rex the first, Bluebot, Angrist, Javalenok, Mooseo, Fnfal, CBM, Gamebm, Dream Focus, Peteymills, GermanX, Salih, Falcon8765, Dolphin51, StewartMH, Perturbationist, Brews ohare, Crowsnest, Addbot, Asymptotic wiki, Arbitrarily0, Yobot, FrescoBot, Tweet7, TheSenkel, Helpful Pixie Bot, Peshenator and Anonymous: 16

• Lagrangian mechanics Source: http://en.wikipedia.org/wiki/Lagrangian%20mechanics?oldid=636950943 Contributors: Derek Ross, CYD, Tarquin, AstroNomer, Andre Engels, Roadrunner, Peterlin, Isis, Michael Hardy, Tim Starling, Pit, Wapcaplet, Karada, Looxix, Stevan White, AugPi, Charles Matthews, Phys, Raul654, Robbot, Giftlite, Wolfkeeper, Lethe, Tom harrison, Dratman, Ajgorhoe, Zhen Lin, Jason Quinn, DefLog, Icairns, AmarChandra, CALR, R6144, Laurascudder, Chairboy, Bobo192, Haham hanuka, Helixblue, Oleg Alexandrov, Linas, StradivariusTV, Mpatel, Isnow, SeventyThree, Graham87, K3wq, Rjwilmsi, Mathbot, Chobot, ChrisChiasson, DVdm, Wavelength, RobotE, RussBot, Robert Turner, Ksyrie, SmackBot, InverseHypercube, KnowledgeOfSelf, KocjoBot, Frédérick Lacasse, Silly rabbit, Colonies Chris, Jgates, Elzair, Lambiam, Xenure, Luis Sanchez, JRSpriggs, Szabolcs Nagy, Grj23, Gregbard, Cydebot, Rwmcgwier, Xaariz, Headbomb, Jomoal99, Sbandrews, JAnDbot, Hamsterlopithecus, Cstarknyc, Dream Focus, Pcp071098, Soulbot, JBdV, Pixel ;-), First Harmonic, ANONYMOUS COWARD0xC0DE, E104421, Hoyabird8, R'n'B, ChrisfromHouston, LordAnubisBOT, Plasticup, CompuChip, Lseixas, VolkovBot, Camrn86, BertSen, Michael H 34, Aither, StevenBell, Filos96, ClueBot, PipepBot, Razimantv, Laudak, Ultinate, Zen Mind, DragonBot, Shsteven, Mleconte, Brews ohare, Crowsnest, SilvonenBot, Addbot, Favonian, Numbo3-bot, Lightbot, OlEnglish, سعی, Yobot, Galoubet, Materialscientist, RibotBOT, Gsard, Ct529, Kxx, Dwightfowler, Sławomir Biały, Craig Pemberton, Tal physdancer, RedBot, Jordgette, Obsidian Soul, RjwilmsiBot, Mathematici6n, Edouard.darchimbaud, EmausBot, Manastra, JSquish, ZéroBot, Druzhnik, SporkBot, AManWithNoPlan, Maschen, ClueBot NG, Manubot, Snotbot, Helpful Pixie Bot, Jcc2011, CedricMC, BG19bot, F=q(E+v^B), Syedosamaali786, Khazar2, Themeasureoftruth, Cgabe6, Hublolly, Minime12358, Mark viking, Kevincassel, Dabarton91, Sauhailam, Yikkayaya, Sofia Koutsouveli, Julian ceaser and Anonymous: 141

• Hamiltonian mechanics Source: http://en.wikipedia.org/wiki/Hamiltonian%20mechanics?oldid=637423137 Contributors: CYD, Zundark, Mjb, David spector, Michael Hardy, Pit, Cyde, Looxix, Stevan White, AugPi, Charles Matthews, Reddi, Jitse Niesen, Phys, Bevo, Chuunen Baka, Robbot, Sverdrup, Tobias Bergemann, Snobot, Giftlite, Lethe, Dratman, Zhen Lin, Jason Quinn, HorsePunchKid, Chris Howard, D6, Bender235, Laurascudder, Army1987, Rephorm, Linuxlad, Jheald, Gene Nygaard, Linas, PhoenixPinion, Isnow, K3wq, RE, Rbeas, Mathbot, Srleffler, Kri, Chobot, Sanpaz, YurikBot, Borgx, RobotE, RussBot, KSmrq, Archelon, David R. Ingham, Bachrach44, Hyandat, Crasshopper, Reyk, Gbmaizol, Darrel francis, Mebden, Teply, Samuel Blanning, SmackBot, Errarel, 7segment, Frédérick Lacasse, TimBentley, Movementarian, MK8, Complexica, Akriasas, Wybot, Atoll, Xenure, JRSpriggs, OS2Warp, Mct mht, Cydebot, Rwmcgwier, BobQQ, Bb vb, Dchristle, Ebyabe, Mbell, Headbomb, Paquitotrek, Sbandrews, Rico402, JAnDbot, Felix116, Epq, Andrej.westermann, Maurice Carbonaro, LordAnubisBOT, Hessammehr, STBotD, Sheliak, Cuzkatzimhut, Gazok, VolkovBot, JohnBlackburne, Barbacana, Thurth, BertSen, Red Act, Voorlandt, Cgwaldman, Geometry guy, YohanN7, SieBot, JerrySteal, JerroldPease-Atlanta, Commutator, Anchor Link Bot, PerryTachett, StewartMH, UrsusArctosL71, AstroMark, Razimantv, HHHEB3, Alexey Muranov, DS1000, Crowsnest, Addbot, EjsBot, SPat, Zorrobot, Luckas-bot, Ptbotgourou, KamikazeBot, Freeskyman, Stefansquintet, Citation bot, Frederic Y Bois, Omnipaedista, RibotBOT, Craig Pemberton, DrilBot, Vrenator, Doctor Zook, EmausBot, Helptry, Netheril96, Gerasime, SporkBot, Maschen, Zueignung, Jorgecarleitao, Helpful Pixie Bot, Jcc2011, BG19bot, Dzustin, Jamontaldi, F=q(E+v^B), ChrisGualtieri, Kylarnys, Hublolly, Dimoroi, Epic Wink, Zmilne and Anonymous: 107

• Classical mechanics Source: http://en.wikipedia.org/wiki/Classical%20mechanics?oldid=636023444 Contributors: AxelBoldt, CYD, Mav, Tarquin, AstroNomer, Ap, Josh Grosse, XJaM, William Avery, Roadrunner, Peterlin, Maury Markowitz, Florian Marquardt, Camembert, Isis, Lir, Patrick, Michael Hardy, Tim Starling, Grahamp, Bcrowell, TakuyaMurata, Looxix, Stevenj, Lupinoid, Glenn, Bogdangiusca, Rossami, Denny, Pizza Puzzle, Charles Matthews, Aravindet, Reddi, Dandrake, The Anomebot, Jeepien, Furrykef, Phys, Raul654, BenRG, RadicalBender, Phil Boswell, Robbot, F3meyer, Mayooranathan, Moink, Hadal, Papadopc, Fuelbottle, Anthony, Tobias Bergemann, Giftlite, Wolfkeeper, Tom harrison, Wwoods, Wgmccallum, Jorge Stolfi, Dan Gardner, PlatinumX, Mobius, Quadell, Antandrus, Beland, Karol Langner, APH, Gauss, Icairns, Zfr, Muijz, Guanabot, FT2, Dave souza, Paul August, SpookyMulder, Bender235, JoeSmack, Brian0918, MBisanz, Surachit, Bobo192, Nigelj, John Vandenberg, BrokenSegue, Haham hanuka, LucaB, Mlessard, Sun King, Batmanand, Orionix, Velella, Evil Monkey, Dirac1933, Woodstone, Gene Nygaard, RandomWalk, Oleg Alexandrov, Nuno Tavares, Linas, StradivariusTV, Drostie, Ruud Koot, Dodiad, Jeff3000, Ulcph, Mayz, XaosBits, Phlebas, Leapfrog314, Graham87, Magister Mathematicae, Qwertyus, FreplySpang, Yurik, Seidenstud, Kinu, MarSch, Thechamelon, RE, Bhadani, Cethegus, DirkvdM, FlaBot, Mathbot, RexNL, Srleffler, Chobot, Krishnavedala, Sharkface217, Sanpaz, Gwernol, Wavelength, Hairy Dude, Deeptrivia, Retodon8, RussBot, Carl T, JabberWok, David R. Ingham, Johann Wolfgang, Ragesoss, Chichui, Enormousdude, Covington, Thou shalt not have any gods before Willy on Wheels, RG2, Timothyarnold85, Sbyrnes321, SmackBot, Tom Lougheed, Hydrogen Iodide, Jagged 85, Ptpare, Harald88, Squiddy, Frédérick Lacasse, Saros136, Bluebot, TimBentley, SMP, Pieter Kuiper, Silly rabbit, Complexica, DHN-bot, Salmar, Foxjwill, Berland, Rsm99833, Cybercobra, Chrylis, Dr. Sunglasses, Sure kr06, Vgy7ujm, Loodog, Farid2053, Phancy Physicist, Xunex, SirFozzie, Mets501, Ssiruuk25, Anjor, Tawkerbot2, RSido, Sketch051, GeorgeLouis, Matthew Auger, Gregbard, Logicus, Cydebot, Rushbie, Rracecarr, Thijs!bot, Barticus88, AndrewDressel, Kahriman, MrXow, Imusade, Headbomb, James086, Memayer, Austin Maxwell, Seaphoto, JAnDbot, CosineKitty, Db099221, Yill577, Magioladitis, VoABot II, Ling.Nut, Dfalcantara, Ryeterrell, David Eppstein, User A1, MarcusMaximus, JaGa, Ekotkie, Euneirophrenia, Rohan Ghatak, Nigholith, AtholM, Bcartolo, C quest000, CompuChip, Juliancolton, Treisijs, Useight, Idioma-bot, Pafcu, VolkovBot, JohnBlackburne, TXiKiBoT, The Original Wildbear, BertSen, GroveGuy, Hqb, Sankalpdravid, Anna Lincoln, Costela, Windrixx, BotKung, Amd628, Gnf1, Tom Atwood, Synthebot, AlleborgoBot, Neparis, SieBot, ToePeu.bot, JerrySteal, Paolo.dL, Lisatwo, Duae Quartunciae, Tomasz Prochownik, ClueBot, DeepBlueDiamond, Luke490, CyrilThePig4, Razimantv, Mild Bill Hiccup, Niceguyedc, Djr32, Excirial, Jomsborg, Gulmammad, Brews ohare, Arjayay, PhySusie, BOTarate, Crowsnest, XLinkBot, Rror, Saeed.Veradi, Andeasling, Truthnlove, Cholewa, Addbot, Willking1979, Atethnekos, Dgroseth, Njaelkies Lea, Fluffernutter, SpillingBot, Cst17, EconoPhysicist, Bassbonerocks, CUSENZA Mario, LinkFA-Bot, Tide rolls, Lightbot, Lrrasd, Luckas-bot, Bunnyhop11, Tannkrem, AnomieBOT, Rubinbot, Keithbob, Jpc4031, Citation bot, Xqbot, Tripodian, Amareto2, Charvest, Aaron Kauppi, Thehelpfulbot, Dan6hell66, LucienBOT, Tobby72, Steve Quinn, Machine Elf 1735, Pinethicket, Codwiki, SpaceFlight89, Corinne68, TobeBot, Wdanbae, Lotje, Dinamik-bot, JLincoln, Diannaa, Onel5969, RjwilmsiBot, EmausBot, Syncategoremata, Elementaro, Wikipelli, JSquish, Cogiati, Knight1993, Stanford96, Empty Buffer, Vramasub, Maschen, ChuispastonBot, RockMagnetist, Wakebrdkid, ClueBot NG, Satellizer, SusikMkr, Enopet, Frietjes, Braincricket, Widr, ساجد امجد ساجد, Lincoln Josh, Helpful Pixie Bot, පසඳ කාවනද, IzackN, Prof McCarthy, Brian Tomasik, BlueMist, Sparkie82, Snow Blizzard, StopTheCrackpots, YFdyh-bot, Khazar2, Dexbot, Thatguy1234352, Rahulsehwag, Reatlas, Devinray1991, Fidasty, Jburnett63, Arachmen, ElectronicKing888, Peterzipfel37, Mars wanderer, Jarjarbinks123455555 and Anonymous: 220

• Entropy (information theory) Source: http://en.wikipedia.org/wiki/Entropy%20(information%20theory)?oldid=637488115 Contributors: Tobias Hoevekamp, Derek Ross, Bryan Derksen, The Anome, Ap, PierreAbbat, Rade Kutil, Waveguy, B4hand, Youandme, Olivier, Stevertigo, Michael Hardy, Kku, Mkweise, Ahoerstemeier, Snoyes, AugPi, Rick.G, Ww, Sbwoodside, Dysprosia, Jitse Niesen, Fibonacci, Paul-L, Omegatron, Jeffq, Noeckel, Robbot, Tomchiukc, Benwing, Netpilot43556, Rursus, Bkell, Tobias Bergemann, Stirling Newberry, Giftlite, Boaz, Peruvianllama, Brona, Romanpoet, Jabowery, Christopherlin, Neilc, Gubbubu, Beland, OverlordQ, MarkSweep, Karol Langner, Wiml, Sctfn, Zeman, Abdull, TheObtuseAngleOfDoom, Rich Farmbrough, ArnoldReinhold, ESkog, MisterSheik, Jough, Guettarda, Cretog8, Army1987, Foobaz, Flammifer, Sligocki, PAR, Cburnett, Jheald, Tomash, Oleg Alexandrov, Linas, Shreevatsa, LOL, Bkwillwm, Male1979, Ryan Reich, Btyner, Marudubshinki, Graham87, BD2412, Jetekus, Grammarbot, Nanite, Sjö, Rjwilmsi, Thomas Arelatensis, Nneonneo, Erkcan, Alejo2083, Mfeadler, Srleffler, Chobot, Flashmorbid, Wavelength, Alpt, Kymacpherson, Ziddy, Kimchi.sg, Afelton, Buster79, Brandon, Hakeem.gadi, DmitriyV, GrinBot, SmackBot, InverseHypercube, Fulldecent, IstvanWolf, Diegotorquemada, Mcld, Gilliam, Ohnoitsjamie, Dauto, Kurykh, Gutworth, Nbarth, DHN-bot, Colonies Chris, Jdthood, Javalenok, CorbinSimpson, Robma, Radagast83, Cybercobra, Mrander, DMacks, FilippoSidoti, Daniel.Cardenas, Michael Rogers, Andrei Stroe, Ohconfucius, Snowgrouse, Dmh, Ninjagecko, JoseREMY, Severoon, Nonsuch, Phancy Physicist, Seanmadsen, Shockem, Ryan256, Dan Gluck, Kencf0618, Dwmalone, AlainD, Ylloh, CmdrObot, Hanspi, CBM, Mcstrother, Citrus538, Neonleonb, FilipeS, Tkircher, Farzaneh, Blaisorblade, Ignoramibus, Michael C Price, Alexnye, SteveMcCluskey, Nearfar, Thijs!bot, WikiC, Edchi, EdJohnston, D.H, Phy1729, Jvstone, Seaphoto, Heysan, Zylorian, Dougher, Husond, OhanaUnited, Time3000, Shaul1, Coffee2theorems, Magioladitis, RogierBrussee, VoABot II, Albmont, Swpb, First Harmonic, JaGa, Kestasjk, Tommy Herbert, Pax:Vobiscum, R'n'B, CommonsDelinker, Coppertwig, Policron, Jobonki, Jvpwiki, Ale2006, Idioma-bot, Cuzkatzimhut, Trevorgoodchild, Aelkiss, Trachten, Saigyo, Kjells, DragonLord, Mermanj, Spinningspark, PhysPhD, Bowsmand, Michel.machado, TimProof, Maxlittle2007, Hirstormandy, Neil Smithline, Dailyknowledge, Flyer22, Mdsam2, EnOreg, Algorithms, Svick, AlanUS, Melcombe, Rinconsoleao, Alksentrs, Schuermann, Vql, Djr32, Blueyeru, TedDunning, Musides, Ra2007, Qwfp, Johnuniq, Kace7, Porphyro, Addbot, Deepmath, Landon1980, Olli Niemitalo, Hans de Vries, Mv240, MrVanBot, Jill-Jênn, Favonian, ChenzwBot, Wikomidia, Numbo3-bot, Ehrenkater, Tide rolls, Lightbot, Fryed-peach, Eastereaster, Luckas-bot, Yobot, Sobec, Cassandra Cathcart, AnomieBOT, Jim1138, Zandr4, Mintrick, Informationtheory, Belkovich, ArthurBot, Xqbot, Gusshoekey, Br77rino, Almabot, GrouchoBot, Omnipaedista, RibotBOT, Ortvolute, Entropeter, Constructive editor, FrescoBot, Hobsonlane, GEBStgo, Mhadi.afrasiabi, Orubt, Rc3002, HRoestBot, Cesarth73, RedBot, Cfpcompte, Pmagrass, Mduteil, Lotje, BlackAce48, Angelorf, 777sms, CobraBot, Duoduoduo, Aoidh, Spakin, Jann.poppinga, Mitch.mcquoid, Fitoschido, Gopsy, Racerx11, Mo ainm, Hhhippo, Purplie, Quondum, SporkBot, Music Sorter, Erianna, Elsehow, ChuispastonBot, Sigma0 1, DASHBotAV, ClueBot NG, Tschijnmotschau, Mesoderm, Helpful Pixie Bot, Bibcode Bot, BG19bot, Guy vandegrift, Eli of Athens, Hushaohan, Trombonechamp, Manoguru, Muhammad Shuaib Nadwi, BattyBot, ChrisGualtieri, Marek marek, VLReeder77, Jrajniak89, Cerabot, Fourshade, Frosty, SFK2, Szzoli, Chrislgarry, I am One of Many, Jamesmcmahon0, Altroware, OhGodItsSoAmazing, Suderpie, Orehet, Monkbot, Visme, Donen1937, WikiRambala, Oisguad and Anonymous: 299

• Topological entropy Source: http://en.wikipedia.org/wiki/Topological%20entropy?oldid=632716309 Contributors: Zundark, TakuyaMurata, Tobias Bergemann, Phils, Jheald, Linas, Rjwilmsi, JosephSilverman, Vina-iwbot, YK Times, Unused0030, Arcfrk, Addbot, Mpfiz, Yobot, Charvest, Sławomir Biały, 777sms, Chaslsullivan, Ripchip Bot, Woottonjames, Wpathooper, Deltahedron, Mark viking, Jondaal and Anonymous: 4

• Measure-preserving dynamical system Source: http://en.wikipedia.org/wiki/Measure-preserving%20dynamical%20system?oldid=635492807 Contributors: Michael Hardy, Kku, GTBacchus, Charles Matthews, Jitse Niesen, Jheald, Alai, Oleg Alexandrov, Linas, Cmk5b, Feodor, Rhetth, Erzbischof, Headbomb, Wluh, YK Times, Sullivan.t.j, Arcfrk, JohnManuel, Alexbot, Protony, Addbot, Lusile, Favonian, Lightbot, Luckas-bot, Yobot, Tamtamar, False vacuum, Blh3321, Imaginary13, Daviddwd, Ivan Ukhov, YiFeiBot, Kamsa Hapnida and Anonymous: 21

• List of Feynman diagrams Source: http://en.wikipedia.org/wiki/List%20of%20Feynman%20diagrams?oldid=595495523 Contributors: Welsh, SmackBot, Headbomb, Yobot and Anonymous: 1

• Canonical quantization Source: http://en.wikipedia.org/wiki/Canonical%20quantization?oldid=622621247 Contributors: TakuyaMurata, Phys, Shizhao, Rursus, Fastfission, Dratman, Alison, Wmahan, Edsanville, CALR, MuDavid, Gauge, Cmdrjameson, Leoadec, Oleg Alexandrov, Linas, -Ril-, SeventyThree, MarkHudson, FlaBot, Roboto de Ajvol, Bambaiah, Salsb, Rmky87, Teply, KasugaHuang, SmackBot, Melchoir, Chris the speller, Colonies Chris, Lambiam, Zarniwoot, CBM, Dr.enh, Headbomb, Second Quantization, RobHar, Brrant, Nachital, Bogni, Wendil, Deftini, Yill577, Andrej.westermann, Jianglaipku, LordAnubisBOT, P.wormer, Haseldon, Cuzkatzimhut, Concertmusic, TXiKiBoT, Lejarrag, Pamputt, SophomoricPedant, Henry Delforn (old), Lisatwo, Bobathon71, Mild Bill Hiccup, SchreiberBike, Sebbie88, DumZiBoT, YouRang?, Uzdzislaw, Truthnlove, Addbot, Mathieu Perrin, Cesiumfrog, Kalkühl, WikiDreamer Bot, Yobot, Citation bot, Qiushi, LilHelpa, Almabot, False vacuum, Gsard, Fcametti, Kirsim, Meier99, Chronulator, Xnn, Dewritech, Rafi5749, Preon, Antichristos, Clearlyfakeusername, Helpful Pixie Bot, NotWith, The1337gamer, Mogism, Cesaranieto, Faizan, JakeArkinstall, W. P. Uzer, Janus Antoninus and Anonymous: 48

14.9.2 Images

• File:Airplane_vortex_edit.jpg Source: http://upload.wikimedia.org/wikipedia/commons/f/fe/Airplane_vortex_edit.jpg License: Public domain Contributors: This image or video was catalogued by Langley Research Center of the United States National Aeronautics and Space Administration (NASA) under Photo ID: EL-1996-00130 AND Alternate ID: L90-5919. Original artist: NASA Langley Research Center (NASA-LaRC), Edited by Fir0002

• File:Ambox_important.svg Source: http://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public domain Contributors: Own work, based off of Image:Ambox scales.svg Original artist: Dsmurat (talk · contribs)

• File:Beta_Negative_Decay.svg Source: http://upload.wikimedia.org/wikipedia/commons/8/89/Beta_Negative_Decay.svg License: Public domain Contributors: This vector image was created with Inkscape Original artist: Joel Holdsworth (Joelholdsworth)

• File:Binary_entropy_plot.svg Source: http://upload.wikimedia.org/wikipedia/commons/2/22/Binary_entropy_plot.svg License: CC-BY-SA-3.0 Contributors: original work by Brona, published on Commons at Image:Binary entropy plot.png. Converted to SVG by Alessio Damato Original artist: Brona and Alessio Damato

• File:BosonFusion-Higgs.svg Source: http://upload.wikimedia.org/wikipedia/commons/7/78/BosonFusion-Higgs.svg License: CC-BY-SA-3.0 Contributors:

• BosonFusion-Higgs.png Original artist: BosonFusion-Higgs.png: User:Harp 12:43, 28 March 2007

• File:Boundarylayer.png Source: http://upload.wikimedia.org/wikipedia/commons/3/38/Boundarylayer.png License: CC BY 3.0 Contributors: http://www.symscape.com/node/447 Original artist: Syguy

• File:Commons-logo.svg Source: http://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: ? Contributors: ? Original artist: ?

• File:Crypto_key.svg Source: http://upload.wikimedia.org/wikipedia/commons/6/65/Crypto_key.svg License: CC-BY-SA-3.0 Contributors: Own work based on image:Key-crypto-sideways.png by MisterMatt originally from English Wikipedia Original artist: MesserWoland

• File:Double_beta_decay_feynman.svg Source: http://upload.wikimedia.org/wikipedia/commons/3/34/Double_beta_decay_feynman.svg License: Public domain Contributors: Own work Original artist: JabberWok2

• File:Electron-positron-annihilation.svg Source: http://upload.wikimedia.org/wikipedia/commons/a/a4/Electron-positron-annihilation.svg License: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons. Original artist: JabberWok at English Wikipedia

• File:Entropy_flip_2_coins.jpg Source: http://upload.wikimedia.org/wikipedia/commons/d/d4/Entropy_flip_2_coins.jpg License: CC BY-SA 3.0 Contributors: File:Ephesos_620-600_BC.jpg Original artist: http://www.cngcoins.com/

• File:Exampleergodicmap.svg Source: http://upload.wikimedia.org/wikipedia/commons/6/68/Exampleergodicmap.svg License: GFDL Contributors: Own work Original artist: Erzbischof

• File:Fisher_iris_versicolor_sepalwidth.svg Source: http://upload.wikimedia.org/wikipedia/commons/4/40/Fisher_iris_versicolor_sepalwidth.svg License: CC BY-SA 3.0 Contributors: en:Image:Fisher iris versicolor sepalwidth.png Original artist: en:User:Qwfp (original); Pbroks13 (talk) (redraw)

• File:Gluon-top-higgs.svg Source: http://upload.wikimedia.org/wikipedia/commons/8/87/Gluon-top-higgs.svg License: CC-BY-SA-3.0 Contributors: http://en.wikipedia.org/wiki/Image:Gluon-top-higgs.svg Original artist: http://en.wikipedia.org/wiki/User:JabberWok

• File:Jet.jpg Source: http://upload.wikimedia.org/wikipedia/commons/f/fb/Jet.jpg License: CC BY 3.0 Contributors: Own work Original artist: C. Fukushimaand J. Westerweel, Technical University of Delft, The Netherlands

• File:Kkbar.png Source: http://upload.wikimedia.org/wikipedia/en/5/54/Kkbar.png License: Cc-by-sa-3.0 Contributors: ? Original artist: ?

• File:Laminar_boundary_layer_scheme.svg Source: http://upload.wikimedia.org/wikipedia/commons/0/0e/Laminar_boundary_layer_scheme.svg License: CC-BY-SA-3.0 Contributors: Own work Original artist: F l a n k e r

• File:Los_Angeles_attack_sub_2.jpg Source: http://upload.wikimedia.org/wikipedia/commons/8/8d/Los_Angeles_attack_sub_2.jpg License: Public domain Contributors: [1] (from page [2]) Original artist: user:

• File:Nuvola_apps_katomic.png Source: http://upload.wikimedia.org/wikipedia/commons/7/73/Nuvola_apps_katomic.png License: LGPL Contributors: http://icon-king.com Original artist: David Vignoni / ICON KING

• File:Orbital_motion.gif Source: http://upload.wikimedia.org/wikipedia/commons/4/4e/Orbital_motion.gif License: GFDL Contributors:

• Earth derived from this image (public domain) Original artist: Own work

• File:PendulumWithMovableSupport.svg Source: http://upload.wikimedia.org/wikipedia/commons/a/a7/PendulumWithMovableSupport.svg License: CC-BY-SA-3.0 Contributors: from Wikipedia en:Image:PendulumWithMovableSupport.svg Original artist: CompuChip

• File:Penguin_diagram.JPG Source: http://upload.wikimedia.org/wikipedia/commons/c/c5/Penguin_diagram.JPG License: CC BY-SA 2.5 Contributors: own work derived from a LaTeX source code given in http://cnlart.web.cern.ch/cnlart/221/node63.html (slightly modified) and Image:Pygoscelis papua.jpg by User:Stan Shebs Original artist: Quilbert

• File:Physicsdomains.svg Source: http://upload.wikimedia.org/wikipedia/commons/f/f0/Physicsdomains.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Loodog (talk), SVG conversion by User:Surachit

• File:Primakoff_effect_diagram.GIF Source: http://upload.wikimedia.org/wikipedia/en/4/46/Primakoff_effect_diagram.GIF License: PD Contributors: self-made Original artist: Georgios Choudalakis

• File:Quad_cancellation.png Source: http://upload.wikimedia.org/wikipedia/en/3/34/Quad_cancellation.png License: PD Contributors: Phys (talk) (Uploads) Original artist: Phys (talk) (Uploads)

• File:Question_book-new.svg Source: http://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0 Contributors: Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist: Tkgd2007

• File:Similitude_(model).png Source: http://upload.wikimedia.org/wikipedia/commons/c/c8/Similitude_%28model%29.png License: CC-BY-SA-3.0 Contributors: ? Original artist: ?

• File:Sir_Isaac_Newton_(1643-1727).jpg Source: http://upload.wikimedia.org/wikipedia/commons/8/83/Sir_Isaac_Newton_%281643-1727%29.jpg License: Public domain Contributors: http://www.phys.uu.nl/~vgent/astrology/images/newton1689.jpg Original artist: Sir Godfrey Kneller

• File:Stylised_Lithium_Atom.svg Source: http://upload.wikimedia.org/wikipedia/commons/e/e1/Stylised_Lithium_Atom.svg License: CC-BY-SA-3.0Contributors: ? Original artist: ?

• File:Symbol_template_class.svg Source: http://upload.wikimedia.org/wikipedia/en/5/5c/Symbol_template_class.svg License: Public domain Contributors: ? Original artist: ?

• File:Text_document_with_red_question_mark.svg Source: http://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg from the Tango project. Original artist: Benjamin D. Esham (bdesham)

• File:Theory_of_impetus.svg Source: http://upload.wikimedia.org/wikipedia/commons/6/68/Theory_of_impetus.svg License: CC0 Contributors: Own work Original artist: Krishnavedala

• File:Thermal_Boundary_Layer_Thickness.png Source: http://upload.wikimedia.org/wikipedia/commons/f/ff/Thermal_Boundary_Layer_Thickness.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Vijek

• File:Tir_parabòlic.png Source: http://upload.wikimedia.org/wikipedia/commons/7/72/Tir_parab%C3%B2lic.png License: CC-BY-SA-3.0 Contributors: ? Original artist: ?

• File:Velocity_and_Temperature_boundary_layer_similarity.png Source: http://upload.wikimedia.org/wikipedia/commons/1/13/Velocity_and_Temperature_boundary_layer_similarity.png License: GFDL Contributors: Created using Microsoft Powerpoint. Previously published: N/A Original artist: Vijek

• File:Wiki_letter_w_cropped.svg Source: http://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License: CC-BY-SA-3.0 Contributors:

• Wiki_letter_w.svg Original artist: Wiki_letter_w.svg: Jarkko Piiroinen

• File:WilliamRowanHamilton.jpeg Source: http://upload.wikimedia.org/wikipedia/commons/8/81/WilliamRowanHamilton.jpeg License: Public domain Contributors: http://mathematik-online.de/F77.htm Original artist: Unknown

• File:Wind_tunnel_x-43.jpg Source: http://upload.wikimedia.org/wikipedia/commons/a/a7/Wind_tunnel_x-43.jpg License: Public domain Contributors: NASA http://www.dfrc.nasa.gov/Gallery/Photo/X-43A/Medium/ED04-0082-2.jpg Original artist: Jeff Caplan/NASA Langley

14.9.3 Content license

• Creative Commons Attribution-Share Alike 3.0