Recursion acglz
From Wikipedia, the free encyclopedia

Posted 12-Jan-2016


Contents

1 Anonymous recursion .... 1
    1.1 Use .... 1
    1.2 Alternatives .... 1
        1.2.1 Named functions .... 1
        1.2.2 Passing functions as arguments .... 2
    1.3 Examples .... 3
        1.3.1 JavaScript .... 3
        1.3.2 Perl .... 3
    1.4 References .... 3

2 Single recursion .... 4
    2.1 Recursive functions and algorithms .... 4
    2.2 Recursive data types .... 5
        2.2.1 Inductively defined data .... 6
        2.2.2 Coinductively defined data and corecursion .... 6
    2.3 Types of recursion .... 6
        2.3.1 Single recursion and multiple recursion .... 6
        2.3.2 Indirect recursion .... 7
        2.3.3 Anonymous recursion .... 7
        2.3.4 Structural versus generative recursion .... 7
    2.4 Recursive programs .... 8
        2.4.1 Recursive procedures .... 8
        2.4.2 Recursive data structures (structural recursion) .... 10
    2.5 Implementation issues .... 12
        2.5.1 Wrapper function .... 12
        2.5.2 Short-circuiting the base case .... 12
        2.5.3 Hybrid algorithm .... 13
    2.6 Recursion versus iteration .... 13
        2.6.1 Expressive power .... 14
        2.6.2 Performance issues .... 14
        2.6.3 Stack space .... 14
        2.6.4 Multiply recursive problems .... 14
    2.7 Tail-recursive functions .... 15


    2.8 Order of execution .... 15
        2.8.1 Function 1 .... 15
        2.8.2 Function 2 with swapped lines .... 15
    2.9 Time-efficiency of recursive algorithms .... 15
        2.9.1 Shortcut rule .... 16
    2.10 See also .... 16
    2.11 Notes and references .... 16
    2.12 Further reading .... 17
    2.13 External links .... 17

3 Bar recursion .... 18
    3.1 Technical Definition .... 18
    3.2 References .... 18

4 Corecursion .... 19
    4.1 Examples .... 19
        4.1.1 Factorial .... 19
        4.1.2 Fibonacci sequence .... 20
        4.1.3 Tree traversal .... 21
    4.2 Definition .... 22
    4.3 Discussion .... 22
    4.4 History .... 23
    4.5 See also .... 23
    4.6 Notes .... 23
    4.7 References .... 24

5 Course-of-values recursion .... 25
    5.1 Definition and examples .... 25
    5.2 Equivalence to primitive recursion .... 26
    5.3 Application to primitive recursive functions .... 26
    5.4 References .... 27

6 Single recursion .... 28
    6.1 Recursive functions and algorithms .... 28
    6.2 Recursive data types .... 29
        6.2.1 Inductively defined data .... 30
        6.2.2 Coinductively defined data and corecursion .... 30
    6.3 Types of recursion .... 30
        6.3.1 Single recursion and multiple recursion .... 30
        6.3.2 Indirect recursion .... 31
        6.3.3 Anonymous recursion .... 31
        6.3.4 Structural versus generative recursion .... 31
    6.4 Recursive programs .... 32


        6.4.1 Recursive procedures .... 32
        6.4.2 Recursive data structures (structural recursion) .... 34
    6.5 Implementation issues .... 36
        6.5.1 Wrapper function .... 36
        6.5.2 Short-circuiting the base case .... 36
        6.5.3 Hybrid algorithm .... 37
    6.6 Recursion versus iteration .... 37
        6.6.1 Expressive power .... 38
        6.6.2 Performance issues .... 38
        6.6.3 Stack space .... 38
        6.6.4 Multiply recursive problems .... 38
    6.7 Tail-recursive functions .... 39
    6.8 Order of execution .... 39
        6.8.1 Function 1 .... 39
        6.8.2 Function 2 with swapped lines .... 39
    6.9 Time-efficiency of recursive algorithms .... 39
        6.9.1 Shortcut rule .... 40
    6.10 See also .... 40
    6.11 Notes and references .... 40
    6.12 Further reading .... 41
    6.13 External links .... 41

7 Droste effect .... 42
    7.1 Origin .... 42
    7.2 Examples .... 42
    7.3 See also .... 43
    7.4 References .... 43
    7.5 External links .... 43

8 Fixed-point combinator .... 46
    8.1 Introduction .... 46
        8.1.1 Values and domains .... 47
        8.1.2 Function versus implementation .... 47
        8.1.3 What is a "combinator"? .... 48
    8.2 Usage .... 48
        8.2.1 The factorial function .... 48
    8.3 Fixed point combinators in lambda calculus .... 49
        8.3.1 Equivalent definition of a fixed-point combinator .... 49
        8.3.2 Derivation of the Y combinator .... 50
        8.3.3 Other fixed-point combinators .... 50
        8.3.4 Strict fixed point combinator .... 51
        8.3.5 Non-standard fixed-point combinators .... 51


    8.4 Implementation in other languages .... 52
        8.4.1 Lazy functional implementation .... 52
        8.4.2 Strict functional implementation .... 52
        8.4.3 Imperative language implementation .... 52
    8.5 Typing .... 52
        8.5.1 Type for the Y combinator .... 53
    8.6 General information .... 53
    8.7 See also .... 54
    8.8 Notes .... 54
    8.9 References .... 54
    8.10 External links .... 55

9 Fold (higher-order function) .... 56
    9.1 Folds as structural transformations .... 56
    9.2 Folds on lists .... 57
        9.2.1 Linear vs. tree-like folds .... 58
        9.2.2 Special folds for non-empty lists .... 58
        9.2.3 Implementation .... 58
        9.2.4 Evaluation order considerations .... 59
        9.2.5 Examples .... 59
    9.3 Folds in various languages .... 60
    9.4 Universality .... 60
    9.5 See also .... 60
    9.6 References .... 60
    9.7 External links .... 61

10 Single recursion .... 62
    10.1 Recursive functions and algorithms .... 62
    10.2 Recursive data types .... 63
        10.2.1 Inductively defined data .... 64
        10.2.2 Coinductively defined data and corecursion .... 64
    10.3 Types of recursion .... 64
        10.3.1 Single recursion and multiple recursion .... 64
        10.3.2 Indirect recursion .... 65
        10.3.3 Anonymous recursion .... 65
        10.3.4 Structural versus generative recursion .... 65
    10.4 Recursive programs .... 66
        10.4.1 Recursive procedures .... 66
        10.4.2 Recursive data structures (structural recursion) .... 68
    10.5 Implementation issues .... 70
        10.5.1 Wrapper function .... 70
        10.5.2 Short-circuiting the base case .... 70


        10.5.3 Hybrid algorithm .... 71
    10.6 Recursion versus iteration .... 71
        10.6.1 Expressive power .... 72
        10.6.2 Performance issues .... 72
        10.6.3 Stack space .... 72
        10.6.4 Multiply recursive problems .... 72
    10.7 Tail-recursive functions .... 73
    10.8 Order of execution .... 73
        10.8.1 Function 1 .... 73
        10.8.2 Function 2 with swapped lines .... 73
    10.9 Time-efficiency of recursive algorithms .... 73
        10.9.1 Shortcut rule .... 74
    10.10 See also .... 74
    10.11 Notes and references .... 74
    10.12 Further reading .... 75
    10.13 External links .... 75

11 Single recursion .... 76
    11.1 Recursive functions and algorithms .... 76
    11.2 Recursive data types .... 77
        11.2.1 Inductively defined data .... 78
        11.2.2 Coinductively defined data and corecursion .... 78
    11.3 Types of recursion .... 78
        11.3.1 Single recursion and multiple recursion .... 78
        11.3.2 Indirect recursion .... 79
        11.3.3 Anonymous recursion .... 79
        11.3.4 Structural versus generative recursion .... 79
    11.4 Recursive programs .... 80
        11.4.1 Recursive procedures .... 80
        11.4.2 Recursive data structures (structural recursion) .... 82
    11.5 Implementation issues .... 84
        11.5.1 Wrapper function .... 84
        11.5.2 Short-circuiting the base case .... 84
        11.5.3 Hybrid algorithm .... 85
    11.6 Recursion versus iteration .... 85
        11.6.1 Expressive power .... 86
        11.6.2 Performance issues .... 86
        11.6.3 Stack space .... 86
        11.6.4 Multiply recursive problems .... 86
    11.7 Tail-recursive functions .... 87
    11.8 Order of execution .... 87
        11.8.1 Function 1 .... 87


        11.8.2 Function 2 with swapped lines .... 87
    11.9 Time-efficiency of recursive algorithms .... 87
        11.9.1 Shortcut rule .... 88
    11.10 See also .... 88
    11.11 Notes and references .... 88
    11.12 Further reading .... 89
    11.13 External links .... 89

12 Infinite loop .... 90
    12.1 Intended vs unintended looping .... 90
        12.1.1 Intentional looping .... 90
        12.1.2 Unintentional looping .... 90
    12.2 Interruption .... 91
    12.3 Language support .... 91
    12.4 Examples of intentional infinite loops .... 91
    12.5 Examples of unintentional infinite loops .... 92
        12.5.1 Mathematical errors .... 92
        12.5.2 Variable handling errors .... 92
    12.6 Multi-party loops .... 93
    12.7 Pseudo-infinite loops .... 93
        12.7.1 Impossible termination condition .... 93
        12.7.2 Infinite recursion .... 93
        12.7.3 Break statement .... 93
        12.7.4 Alderson loop .... 94
    12.8 See also .... 94
    12.9 References .... 94

13 Single recursion .... 95
    13.1 Recursive functions and algorithms .... 95
    13.2 Recursive data types .... 96
        13.2.1 Inductively defined data .... 97
        13.2.2 Coinductively defined data and corecursion .... 97
    13.3 Types of recursion .... 97
        13.3.1 Single recursion and multiple recursion .... 97
        13.3.2 Indirect recursion .... 98
        13.3.3 Anonymous recursion .... 98
        13.3.4 Structural versus generative recursion .... 98
    13.4 Recursive programs .... 99
        13.4.1 Recursive procedures .... 99
        13.4.2 Recursive data structures (structural recursion) .... 101
    13.5 Implementation issues .... 103
        13.5.1 Wrapper function .... 103


        13.5.2 Short-circuiting the base case .... 103
        13.5.3 Hybrid algorithm .... 104
    13.6 Recursion versus iteration .... 104
        13.6.1 Expressive power .... 105
        13.6.2 Performance issues .... 105
        13.6.3 Stack space .... 105
        13.6.4 Multiply recursive problems .... 105
    13.7 Tail-recursive functions .... 106
    13.8 Order of execution .... 106
        13.8.1 Function 1 .... 106
        13.8.2 Function 2 with swapped lines .... 106
    13.9 Time-efficiency of recursive algorithms .... 106
        13.9.1 Shortcut rule .... 107
    13.10 See also .... 107
    13.11 Notes and references .... 107
    13.12 Further reading .... 108
    13.13 External links .... 108

14 Mutual recursion .... 109
    14.1 Examples .... 109
        14.1.1 Data types .... 109
        14.1.2 Computer functions .... 109
        14.1.3 Mathematical functions .... 111
    14.2 Prevalence .... 111
    14.3 Terminology .... 111
    14.4 Conversion to direct recursion .... 111
    14.5 See also .... 112
    14.6 References .... 112
    14.7 External links .... 112

15 Nonrecursive filter .... 113

16 this (computer programming) .... 114
    16.1 Object-oriented programming .... 114
    16.2 Subtleties and difficulties .... 114
        16.2.1 Open recursion .... 115
    16.3 Implementations .... 115
        16.3.1 C++ .... 115
        16.3.2 Java .... 115
        16.3.3 C# .... 116
        16.3.4 D .... 116
        16.3.5 Dylan .... 116


        16.3.6 Eiffel .... 116
        16.3.7 JavaScript .... 117
        16.3.8 Python .... 117
        16.3.9 Self .... 117
        16.3.10 Xbase++ .... 117
    16.4 See also .... 117
    16.5 References .... 118
    16.6 Further reading .... 118
    16.7 External links .... 118

17 Polymorphic recursion .... 119
    17.1 Example .... 119
        17.1.1 Nested datatypes .... 119
        17.1.2 Higher-ranked types .... 119
    17.2 Applications .... 119
        17.2.1 Program analysis .... 119
        17.2.2 Data structures, error detection, graph solutions .... 120
    17.3 See also .... 120
    17.4 Notes .... 120
    17.5 Further reading .... 120
    17.6 External links .... 121

18 Primitive recursive function .... 122
    18.1 Definition .... 122
        18.1.1 Role of the projection functions .... 123
        18.1.2 Converting predicates to numeric functions .... 123
        18.1.3 Computer language definition .... 123
    18.2 Examples .... 123
        18.2.1 Addition .... 123
        18.2.2 Subtraction .... 124
        18.2.3 Other operations on natural numbers .... 124
        18.2.4 Operations on integers and rational numbers .... 124
    18.3 Relationship to recursive functions .... 124
    18.4 Limitations .... 125
    18.5 Some common primitive recursive functions .... 126
    18.6 Additional primitive recursive forms .... 127
    18.7 Finitism and consistency results .... 127
    18.8 History .... 128
    18.9 See also .... 128
    18.10 References .... 128

19 Primitive recursive set function .... 130


   19.1 Definition . . . 130
   19.2 References . . . 130

20 Recursion 131
   20.1 Formal definitions . . . 131
   20.2 Informal definition . . . 132
   20.3 In language . . . 132
      20.3.1 Recursive humor . . . 133
   20.4 In mathematics . . . 133
      20.4.1 Recursively defined sets . . . 133
      20.4.2 Finite subdivision rules . . . 134
      20.4.3 Functional recursion . . . 134
      20.4.4 Proofs involving recursive definitions . . . 134
      20.4.5 Recursive optimization . . . 134
   20.5 In computer science . . . 134
   20.6 In art . . . 135
   20.7 The recursion theorem . . . 135
      20.7.1 Proof of uniqueness . . . 135
      20.7.2 Examples . . . 135
   20.8 See also . . . 136
   20.9 Bibliography . . . 136
   20.10 References . . . 137
   20.11 External links . . . 137

21 Single recursion 142
   21.1 Recursive functions and algorithms . . . 142
   21.2 Recursive data types . . . 143
      21.2.1 Inductively defined data . . . 144
      21.2.2 Coinductively defined data and corecursion . . . 144
   21.3 Types of recursion . . . 144
      21.3.1 Single recursion and multiple recursion . . . 144
      21.3.2 Indirect recursion . . . 145
      21.3.3 Anonymous recursion . . . 145
      21.3.4 Structural versus generative recursion . . . 145
   21.4 Recursive programs . . . 146
      21.4.1 Recursive procedures . . . 146
      21.4.2 Recursive data structures (structural recursion) . . . 148
   21.5 Implementation issues . . . 150
      21.5.1 Wrapper function . . . 150
      21.5.2 Short-circuiting the base case . . . 150
      21.5.3 Hybrid algorithm . . . 151
   21.6 Recursion versus iteration . . . 151


      21.6.1 Expressive power . . . 152
      21.6.2 Performance issues . . . 152
      21.6.3 Stack space . . . 152
      21.6.4 Multiply recursive problems . . . 152
   21.7 Tail-recursive functions . . . 153
   21.8 Order of execution . . . 153
      21.8.1 Function 1 . . . 153
      21.8.2 Function 2 with swapped lines . . . 153
   21.9 Time-efficiency of recursive algorithms . . . 153
      21.9.1 Shortcut rule . . . 154
   21.10 See also . . . 154
   21.11 Notes and references . . . 154
   21.12 Further reading . . . 155
   21.13 External links . . . 155

22 Recursion termination 156
   22.1 Examples . . . 156
      22.1.1 Fibonacci function . . . 156
      22.1.2 C++ . . . 156
   22.2 References . . . 156
   22.3 External links . . . 156

23 Recursive definition 157
   23.1 Form of recursive definitions . . . 158
   23.2 Examples of recursive definitions . . . 158
      23.2.1 Elementary functions . . . 158
      23.2.2 Prime numbers . . . 159
      23.2.3 Non-negative even numbers . . . 159
      23.2.4 Well formed formulas . . . 159
   23.3 See also . . . 160
   23.4 References . . . 160

24 Recursive function 161
   24.1 See also . . . 161

25 Recursive language 162
   25.1 Definitions . . . 162
   25.2 Examples . . . 162
   25.3 Closure properties . . . 163
   25.4 See also . . . 163
   25.5 References . . . 163

26 Reentrancy (computing) 164


   26.1 Example . . . 164
   26.2 Background . . . 165
   26.3 Rules for reentrancy . . . 165
   26.4 Reentrant interrupt handler . . . 165
   26.5 Further examples . . . 165
   26.6 See also . . . 166
   26.7 References . . . 166
   26.8 Further reading . . . 166
   26.9 External links . . . 166

27 Single recursion 167
   27.1 Recursive functions and algorithms . . . 167
   27.2 Recursive data types . . . 168
      27.2.1 Inductively defined data . . . 169
      27.2.2 Coinductively defined data and corecursion . . . 169
   27.3 Types of recursion . . . 169
      27.3.1 Single recursion and multiple recursion . . . 169
      27.3.2 Indirect recursion . . . 170
      27.3.3 Anonymous recursion . . . 170
      27.3.4 Structural versus generative recursion . . . 170
   27.4 Recursive programs . . . 171
      27.4.1 Recursive procedures . . . 171
      27.4.2 Recursive data structures (structural recursion) . . . 173
   27.5 Implementation issues . . . 175
      27.5.1 Wrapper function . . . 175
      27.5.2 Short-circuiting the base case . . . 175
      27.5.3 Hybrid algorithm . . . 176
   27.6 Recursion versus iteration . . . 176
      27.6.1 Expressive power . . . 177
      27.6.2 Performance issues . . . 177
      27.6.3 Stack space . . . 177
      27.6.4 Multiply recursive problems . . . 177
   27.7 Tail-recursive functions . . . 178
   27.8 Order of execution . . . 178
      27.8.1 Function 1 . . . 178
      27.8.2 Function 2 with swapped lines . . . 178
   27.9 Time-efficiency of recursive algorithms . . . 178
      27.9.1 Shortcut rule . . . 179
   27.10 See also . . . 179
   27.11 Notes and references . . . 179
   27.12 Further reading . . . 180
   27.13 External links . . . 180


28 Tail call 181
   28.1 Description . . . 181
   28.2 Syntactic form . . . 182
   28.3 Example programs . . . 182
   28.4 Tail recursion modulo cons . . . 183
   28.5 History . . . 183
   28.6 Implementation methods . . . 183
      28.6.1 In assembly . . . 184
      28.6.2 Through trampolining . . . 184
   28.7 Relation to while construct . . . 185
   28.8 By language . . . 185
   28.9 See also . . . 185
   28.10 Notes . . . 186
   28.11 References . . . 186

29 Transfinite induction 187
   29.1 Transfinite recursion . . . 187
   29.2 Relationship to the axiom of choice . . . 189
   29.3 See also . . . 189
   29.4 Notes . . . 189
   29.5 References . . . 189
   29.6 External links . . . 190

30 Tree traversal 191
   30.1 Types . . . 191
      30.1.1 Depth-first . . . 193
      30.1.2 Breadth-first . . . 195
      30.1.3 Other types . . . 195
   30.2 Applications . . . 195
   30.3 Implementations . . . 195
      30.3.1 Depth-first . . . 195
      30.3.2 Breadth-first . . . 196
   30.4 Infinite trees . . . 196
   30.5 References . . . 197
   30.6 External links . . . 197

31 Walther recursion 198
   31.1 See also . . . 198
   31.2 References . . . 198
   31.3 Text and image sources, contributors, and licenses . . . 199
      31.3.1 Text . . . 199
      31.3.2 Images . . . 203


      31.3.3 Content license . . . 204


Chapter 1

Anonymous recursion

In computer science, anonymous recursion is recursion which does not explicitly call a function by name. This can be done either explicitly, by using a higher-order function – passing in a function as an argument and calling it – or implicitly, via reflection features which allow one to access certain functions depending on the current context, especially “the current function” or sometimes “the calling function of the current function”.

In programming practice, anonymous recursion is notably used in JavaScript, which provides reflection facilities to support it. In general programming practice, however, this is considered poor style, and recursion with named functions is suggested instead. Anonymous recursion via explicitly passing functions as arguments is possible in any language that supports functions as arguments, though this is rarely used in practice, as it is longer and less clear than explicitly recursing by name.

In theoretical computer science, anonymous recursion is important, as it shows that one can implement recursion without requiring named functions. This is particularly important for the lambda calculus, which has anonymous unary functions, but is able to compute any recursive function. This anonymous recursion can be produced generically via fixed-point combinators.

1.1 Use

Anonymous recursion is primarily of use in allowing recursion for anonymous functions, particularly when they form closures or are used as callbacks, to avoid having to bind the name of the function.

Anonymous recursion primarily consists of calling “the current function”, which results in direct recursion. Anonymous indirect recursion is possible, such as by calling “the caller (the previous function)", or, more rarely, by going further up the call stack, and this can be chained to produce mutual recursion. The self-reference of “the current function” is a functional equivalent of the "this" keyword in object-oriented programming, allowing one to refer to the current context.

Anonymous recursion can also be used for named functions, rather than calling them by name, say to specify that one is recursing on the current function, or to allow one to rename the function without needing to change the name where it calls itself. However, as a matter of programming style this is generally not done.

1.2 Alternatives

1.2.1 Named functions

The usual alternative is to use named functions and named recursion. Given an anonymous function, this can be done either by binding a name to the function, as in named function expressions in JavaScript, or by assigning the function to a variable and then calling the variable, as in function statements in JavaScript. Since languages that allow anonymous functions generally allow assigning these functions to variables (if not first-class functions), many languages do not provide a way to refer to the function itself, and explicitly reject anonymous recursion; examples include Go.[1]

1

Page 16: Recursion acglz

2 CHAPTER 1. ANONYMOUS RECURSION

For example, in JavaScript the factorial function can be defined via anonymous recursion as follows:[2]

[1, 2, 3, 4, 5].map(function(n) {
    return (!(n > 1)) ? 1 : arguments.callee(n - 1) * n;
});

Rewritten to use a named function expression yields:

[1, 2, 3, 4, 5].map(function factorial(n) {
    return (!(n > 1)) ? 1 : factorial(n - 1) * n;
});

1.2.2 Passing functions as arguments

Even without mechanisms to refer to the current function or calling function, anonymous recursion is possible in a language that allows functions as arguments. This is done by adding another parameter to the basic recursive function and using this parameter as the function for the recursive call. This creates a higher-order function, and passing this higher-order function itself allows anonymous recursion within the actual recursive function. This can be done purely anonymously by applying a fixed-point combinator to this higher-order function. This is mainly of academic interest, particularly to show that the lambda calculus has recursion, as the resulting expression is significantly more complicated than the original named recursive function. Conversely, the use of fixed-point combinators may be generically referred to as “anonymous recursion”, as this is a notable use of them, though they have other applications.[3][4]

This is illustrated below using Python. First, a standard named recursion:

def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)

Using a higher-order function so the top-level function recurses anonymously on an argument, but still needing the standard recursive function as an argument:

def fact0(n0):
    if n0 == 0:
        return 1
    return n0 * fact0(n0 - 1)

fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(n1 - 1)
fact = lambda n: fact1(fact0, n)

We can eliminate the standard recursive function by passing the function argument into the call:

fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(f, n1 - 1)
fact = lambda n: fact1(fact1, n)

The second line can be replaced by a generic higher-order function called a combinator:

F = lambda f: (lambda x: f(f, x))
fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(f, n1 - 1)
fact = F(fact1)

Written anonymously:[5]

(lambda f: (lambda x: f(f, x))) \
    (lambda g, n1: 1 if n1 == 0 else n1 * g(g, n1 - 1))

In the lambda calculus, which only uses functions of a single variable, this can be done via the Y combinator. First make the higher-order function of two variables be a function of a single variable, which directly returns a function, by currying:

fact1 = lambda f: (lambda n1: 1 if n1 == 0 else n1 * f(f)(n1 - 1))
fact = fact1(fact1)

There are two “applying a higher-order function to itself” operations here: f(f) in the first line and fact1(fact1) in the second. Factoring out the second double application into a combinator yields:

C = lambda x: x(x)
fact1 = lambda f: (lambda n1: 1 if n1 == 0 else n1 * f(f)(n1 - 1))
fact = C(fact1)

Factoring out the other double application yields:

C = lambda x: x(x)
D = lambda f: (lambda x: f(lambda v: x(x)(v)))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = C(D(fact1))

Combining the two combinators into one yields the Y combinator:

C = lambda x: x(x)
D = lambda f: (lambda x: f(lambda v: x(x)(v)))
Y = lambda y: C(D(y))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = Y(fact1)

Expanding out the Y combinator yields:

Y = lambda f: (lambda x: f(lambda v: x(x)(v))) \
              (lambda x: f(lambda v: x(x)(v)))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = Y(fact1)

Combining these yields a recursive definition of the factorial in lambda calculus (anonymous functions of a single variable):[6]

(lambda f: (lambda x: f(lambda v: x(x)(v)))
           (lambda x: f(lambda v: x(x)(v)))) \
    (lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1)))

1.3 Examples

1.3.1 JavaScript

In JavaScript, the current function is accessible via arguments.callee, while the calling function is accessible via arguments.caller. These allow anonymous recursion, such as in this implementation of the factorial:[2]

[1, 2, 3, 4, 5].map(function(n) {
    return (!(n > 1)) ? 1 : arguments.callee(n - 1) * n;
});

1.3.2 Perl

Starting with Perl 5.16, the current subroutine is accessible via the __SUB__ token, which returns a reference to the current subroutine, or undef outside a subroutine.[7] This allows anonymous recursion, such as in the following implementation of the factorial:

use feature ":5.16";
sub {
    my $x = shift;
    $x == 0 ? 1 : $x * __SUB__->( $x - 1 );
}

1.4 References

[1] Issue 226: It’s impossible to recurse an anonymous function in Go without workarounds.

[2] answer by olliej, Oct 25 '08 to "Why was the arguments.callee.caller property deprecated in JavaScript?", StackOverflow

[3] This terminology appears to be largely folklore, but it does appear in the following:

• Trey Nash, Accelerated C# 2008, Apress, 2007, ISBN 1-59059-873-3, pp. 462–463. Derived substantially from Wes Dyer's blog (see next item).

• Wes Dyer, Anonymous Recursion in C#, February 02, 2007, contains a substantially similar example found in the book above, but accompanied by more discussion.

[4] The If Works, Deriving the Y combinator, January 10th, 2008

[5] Hugo Walter’s answer to "Can a lambda function call itself recursively in Python?"

[6] Nux’s answer to "Can a lambda function call itself recursively in Python?"

[7] Perldoc, "The 'current_sub' feature", perldoc feature


Chapter 2

Single recursion

This article is about recursive approaches to solving problems. For recursion in computer science acronyms, see Recursive acronym#Computer-related examples.

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).[1] The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[2]

“The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.”[3]

Most computer programming languages support recursion by allowing a function to call itself within the program text. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. Computability theory proves that these recursive-only languages are Turing complete; they are as computationally powerful as Turing complete imperative languages, meaning they can solve the same kinds of problems as imperative languages even without iterative control structures such as “while” and “for”.
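As a small illustration of replacing a loop with recursion, here is a sketch in Python (which does have loops, so both forms can be compared; the function names are illustrative, not from the article):

```python
def sum_to_iterative(n):
    # Iterative version: repetition via an explicit loop construct.
    total = 0
    while n > 0:
        total += n
        n -= 1
    return total

def sum_to_recursive(n):
    # Recursive version: the loop's exit test becomes the base case,
    # and the loop body becomes the recursive case.
    return 0 if n == 0 else n + sum_to_recursive(n - 1)
```

Both compute the sum 1 + 2 + ... + n; a recursion-only language would simply have no way to write the first form.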

2.1 Recursive functions and algorithms

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the “terminating case”.

The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances—for example, some system and server processes—are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by corecursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say “compute the nth term (nth partial sum)".


Tree created using the Logo programming language and relying heavily on recursion

2.2 Recursive data types

Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

Further information: Algebraic data type


2.2.1 Inductively defined data

Main article: Recursive data type

An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.

Another example of inductive definition is the natural numbers (or positive integers):

A natural number is either 1 or n+1, where n is a natural number.

Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus-Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

<expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>)

This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complex arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.
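A structurally recursive evaluator for this grammar might look as follows in Python (a sketch; the nested-tuple representation of parsed expressions is an assumption, not part of the grammar above):

```python
def evaluate(expr):
    # Base case: a bare number evaluates to itself.
    if isinstance(expr, (int, float)):
        return expr
    # Recursive cases mirror the two recursive grammar productions.
    op, left, right = expr  # e.g. ('*', e1, e2) or ('+', e1, e2)
    if op == '*':
        return evaluate(left) * evaluate(right)
    return evaluate(left) + evaluate(right)

# (5 * ((3 * 6) + 8))
result = evaluate(('*', 5, ('+', ('*', 3, 6), 8)))
```

The recursion of the evaluator follows the recursion of the grammar: each recursive production gets a recursive call.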

2.2.2 Coinductively defined data and corecursion

Main articles: Coinduction and Corecursion

A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might look like this:

A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via the accessor functions head and tail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.

Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program’s output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program (e.g. here).
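In Python, generators can stand in for lazy streams, roughly separating the infinite definition from the finite observation (a sketch, not the program the article links to):

```python
from itertools import count, islice

def primes():
    # An "infinite" stream: each yielded value extends the output,
    # using only the primes produced so far (trial division).
    found = []
    for candidate in count(2):
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate

# Taking a finite portion of the infinite result:
first_ten = list(islice(primes(), 10))
```

The definition of primes() never terminates on its own; islice supplies the mechanism for taking a finite portion.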

2.3 Types of recursion

2.3.1 Single recursion and multiple recursion

Recursion that contains only a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search, or computing the Fibonacci sequence.

Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being replaceable by iteration without an explicit stack.

Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively is multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, tracking at each step two successive values; see corecursion: examples. A more sophisticated example is using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.
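The Fibonacci conversion described above can be sketched in C (the names fib_naive, fib_pair, and fib are ours, for illustration):

```c
// Naive Fibonacci: multiple recursion, two self-calls per step.
unsigned long fib_naive(unsigned n) {
    return n < 2 ? n : fib_naive(n - 1) + fib_naive(n - 2);
}

// The same sequence by single recursion: carry the pair of successive
// values (a, b) = (F(k), F(k+1)) as parameters, as described above.
unsigned long fib_pair(unsigned n, unsigned long a, unsigned long b) {
    return n == 0 ? a : fib_pair(n - 1, b, a + b);
}

unsigned long fib(unsigned n) { return fib_pair(n, 0, 1); }
```

The single-recursive version makes one tail call per step, so it runs in linear time, whereas the naive version takes exponential time.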

2.3.2 Indirect recursion

Main article: Mutual recursion

Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.

Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, g is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly a set of three or more functions that call each other can be called a set of mutually recursive functions.
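A minimal illustration in C (a hypothetical example, not from the text): two parity tests that recurse mutually, each calling the other:

```c
// is_even and is_odd call each other: from either function's point of
// view, the recursion is indirect; taken together, they are mutually
// recursive. (Function names are ours, for illustration.)
int is_even(unsigned n);
int is_odd(unsigned n);

int is_even(unsigned n) { return n == 0 ? 1 : is_odd(n - 1); }
int is_odd(unsigned n)  { return n == 0 ? 0 : is_even(n - 1); }
```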

2.3.3 Anonymous recursion

Main article: Anonymous recursion

Recursion is usually done by explicitly calling a function by name. However, recursion can also be done by implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.

2.3.4 Structural versus generative recursion

See also: Structural recursion

Some authors classify recursion as either “structural” or “generative”. The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:

[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.[4]

Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.


Generative recursion is the alternative:

Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How to Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton's method, fractals, and adaptive integration.[5]

This distinction is important in proving termination of a function.

• All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.

• Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions (each step generates the new data, such as successive approximation in Newton's method), and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.

• In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.

• By contrast, generative recursion is when there is no such obvious loop variant, and termination depends on a function, such as “error of approximation”, that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.

2.4 Recursive programs

2.4.1 Recursive procedures

Factorial

A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:

fact(n) = 1                 if n = 0
fact(n) = n · fact(n − 1)   if n > 0

The function can also be written as a recurrence relation:

b_n = n · b_(n−1)

b_0 = 1

This evaluation of the recurrence relation demonstrates the computation that would be performed in evaluating the pseudocode above.

This factorial function can also be described without using recursion by making use of the typical looping constructs found in imperative programming languages.

The imperative code above is equivalent to this mathematical definition using an accumulator variable t:

fact(n) = fact_acc(n, 1)

fact_acc(n, t) = t                        if n = 0
fact_acc(n, t) = fact_acc(n − 1, n · t)   if n > 0

The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.
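The accumulator definition can also be transcribed directly into C (a sketch; the names fact_acc and fact are ours):

```c
// Direct transcription of fact_acc: t accumulates the product,
// and the recursive call is in tail position.
unsigned long fact_acc(unsigned n, unsigned long t) {
    return n == 0 ? t : fact_acc(n - 1, n * t);
}

unsigned long fact(unsigned n) { return fact_acc(n, 1); }
```

Because the recursive call is the last operation performed, a compiler that eliminates tail calls can execute this in constant stack space.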


Greatest common divisor

The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.

Function definition:

gcd(x, y) = x                         if y = 0
gcd(x, y) = gcd(y, remainder(x, y))   if y > 0

Recurrence relation for the greatest common divisor, where x % y expresses the remainder of x / y:

gcd(x, y) = gcd(y, x % y)   if y ≠ 0
gcd(x, 0) = x

The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and the computation shown above shows the steps of evaluation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.

The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.
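A sketch in C of both forms, the tail-recursive definition and its explicit-iteration equivalent (function names ours):

```c
// Tail-recursive gcd, transcribing the definition above.
unsigned gcd(unsigned x, unsigned y) {
    return y == 0 ? x : gcd(y, x % y);
}

// The equivalent explicit iteration, which needs a temporary variable
// to hold the remainder while x and y are updated.
unsigned gcd_iter(unsigned x, unsigned y) {
    while (y != 0) {
        unsigned t = x % y;  // temporary holds the remainder
        x = y;
        y = t;
    }
    return x;
}
```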

Towers of Hanoi


Main article: Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[6][7] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?

Function definition:

hanoi(n) = 1                      if n = 1
hanoi(n) = 2 · hanoi(n − 1) + 1   if n > 1


Recurrence relation for hanoi:

h_n = 2 · h_(n−1) + 1

h_1 = 1

Example implementations:

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.[8]
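One possible implementation in C (a sketch; function names ours) computes the move count from the recurrence and prints an actual move sequence:

```c
#include <stdio.h>

// Number of moves for n disks, from the recurrence h(n) = 2·h(n−1) + 1.
unsigned long hanoi_count(unsigned n) {
    return n <= 1 ? n : 2 * hanoi_count(n - 1) + 1;
}

// Print one legal solution: move n disks from peg 'from' to peg 'to',
// using peg 'via' as scratch space.
void hanoi_moves(unsigned n, char from, char to, char via) {
    if (n == 0) return;                               // base case
    hanoi_moves(n - 1, from, via, to);                // clear the way
    printf("move disk %u: %c -> %c\n", n, from, to);  // move the largest
    hanoi_moves(n - 1, via, to, from);                // restack on top
}
```

For three disks, hanoi_count(3) is 7, matching the explicit formula 2^n − 1.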

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

int binary_search(int *data, int toFind, int start, int end);

/* Call binary_search with proper initial conditions.
   INPUT: data is an array of integers SORTED in ASCENDING order,
          toFind is the integer to search for,
          count is the total number of elements in the array
   OUTPUT: result of binary_search */
int search(int *data, int toFind, int count)
{
    // Start = 0 (beginning index)
    // End = count - 1 (top index)
    return binary_search(data, toFind, 0, count - 1);
}

/* Binary Search Algorithm.
   INPUT: data is an array of integers SORTED in ASCENDING order,
          toFind is the integer to search for,
          start is the minimum array index,
          end is the maximum array index
   OUTPUT: position of the integer toFind within array data, -1 if not found */
int binary_search(int *data, int toFind, int start, int end)
{
    // Get the midpoint.
    int mid = start + (end - start) / 2;  // Integer division

    // Stop condition.
    if (start > end)
        return -1;
    else if (data[mid] == toFind)         // Found?
        return mid;
    else if (data[mid] > toFind)          // Data is greater than toFind, search lower half
        return binary_search(data, toFind, start, mid - 1);
    else                                  // Data is less than toFind, search upper half
        return binary_search(data, toFind, mid + 1, end);
}

2.4.2 Recursive data structures (structural recursion)

Main article: Recursive data type

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.

“Recursive algorithms are particularly appropriate when the underlying problem or the data to betreated are defined in recursive terms.”[9]

The examples in this section illustrate what is known as “structural recursion”. This term refers to the fact that the recursive procedures are acting on data that is defined recursively.

As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function's body consume some immediate piece of a given compound value.[5]


Linked lists

Main article: Linked list

Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The “next” element of struct node is a pointer to another struct node, effectively creating a list type.

struct node {
    int data;           // some integer data
    struct node *next;  // pointer to another struct node
};

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

void list_print(struct node *list)
{
    if (list != NULL)               // base case
    {
        printf("%d ", list->data);  // print integer data followed by a space
        list_print(list->next);     // recursive call on the next node
    }
}

Binary trees

Main article: Binary tree

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).

struct node {
    int data;            // some integer data
    struct node *left;   // pointer to the left subtree
    struct node *right;  // pointer to the right subtree
};

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

// Test if tree_node contains i; return 1 if so, 0 if not.
int tree_contains(struct node *tree_node, int i)
{
    if (tree_node == NULL)
        return 0;  // base case
    else if (tree_node->data == i)
        return 1;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

At most two recursive calls will be made for any given call to tree_contains as defined above.

// Inorder traversal:
void tree_print(struct node *tree_node)
{
    if (tree_node != NULL)               // base case
    {
        tree_print(tree_node->left);     // go left
        printf("%d ", tree_node->data);  // print the integer followed by a space
        tree_print(tree_node->right);    // go right
    }
}

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order.

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, therefore the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem.

import java.io.*;

public class FileSystem {

    public static void main(String[] args) {
        traverse();
    }

    /**
     * Obtains the filesystem roots
     * Proceeds with the recursive filesystem traversal
     */
    private static void traverse() {
        File[] fs = File.listRoots();
        for (int i = 0; i < fs.length; i++) {
            if (fs[i].isDirectory() && fs[i].canRead()) {
                rtraverse(fs[i]);
            }
        }
    }

    /**
     * Recursively traverse a given directory
     *
     * @param fd indicates the starting point of traversal
     */
    private static void rtraverse(File fd) {
        File[] fss = fd.listFiles();
        for (int i = 0; i < fss.length; i++) {
            System.out.println(fss[i]);
            if (fss[i].isDirectory() && fss[i].canRead()) {
                rtraverse(fss[i]);
            }
        }
    }
}


This code blends the lines, at least somewhat, between recursion and iteration. It is, essentially, a recursive implementation, which is the best way to traverse a filesystem. It is also an example of direct and indirect recursion. The method “rtraverse” is purely a direct example; the method “traverse” is the indirect one, which calls “rtraverse”. This example needs no “base case” scenario because there will always be some fixed number of files or directories in a given filesystem.

2.5 Implementation issues

In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step),a number of modifications may be made, for purposes of clarity or efficiency. These include:

• Wrapper function (at top)

• Short-circuiting the base case, aka “Arm’s-length recursion” (at bottom)

• Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough

On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm's-length recursion is a special case of this.

2.5.1 Wrapper function

A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.

Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as “level of recursion” or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.
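A small sketch in C of the parameter-validation use (the names checked_fact and fact_do are hypothetical, chosen for illustration):

```c
// The wrapper validates its argument once; the auxiliary function
// does the actual recursion and can assume the argument is valid.
unsigned long fact_do(unsigned n) {       // assumes n is valid
    return n == 0 ? 1 : n * fact_do(n - 1);
}

long checked_fact(int n) {
    if (n < 0)
        return -1;                        // validate once, in the wrapper
    return (long)fact_do((unsigned)n);
}
```

Because the check happens only in the wrapper, the recursive calls avoid re-validating on every step.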

2.5.2 Short-circuiting the base case

Short-circuiting the base case, also known as arm's-length recursion, consists of checking the base case before making a recursive call, i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short-circuit, and may miss 0; this can be mitigated by a wrapper function.

Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.

Conceptually, short-circuiting can be considered to either have the same base case and recursive step, only checking the base case before the recursion, or it can be considered to have a different base case (one step removed from the standard base case) and a more complex recursive step, namely “check valid then recurse”, as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.


Depth-first search

A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see the binary trees section above for standard recursive discussion.

The standard recursive algorithm for a DFS is:

• base case: If current node is Null, return false

• recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children

In short-circuiting, this is instead:

• check value of current node, return true if match,

• otherwise, on children, if not Null, then recurse.

In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).

In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.

In C, the standard recursive algorithm may be implemented as (using bool from <stdbool.h>):

bool tree_contains(struct node *tree_node, int i)
{
    if (tree_node == NULL)
        return false;  // base case
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

The short-circuited algorithm may be implemented as:

// Wrapper function to handle empty tree
bool tree_contains(struct node *tree_node, int i)
{
    if (tree_node == NULL)
        return false;  // empty tree
    else
        return tree_contains_do(tree_node, i);  // call auxiliary function
}

// Assumes tree_node != NULL
bool tree_contains_do(struct node *tree_node, int i)
{
    if (tree_node->data == i)
        return true;  // found
    else  // recurse
        return (tree_node->left  && tree_contains_do(tree_node->left, i)) ||
               (tree_node->right && tree_contains_do(tree_node->right, i));
}

Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is only made if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a bool, so the overall expression evaluates to a bool. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.

2.5.3 Hybrid algorithm

Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.

2.6 Recursion versus iteration

Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead, and sometimes explicit iteration is not available.

Compare the templates to compute x_n defined by x_n = f(n, x_(n−1)) from x_base: for an imperative language the overhead is to define the function, for a functional language the overhead is to define the accumulator variable x.

For example, the factorial function may be implemented iteratively in C by assigning to a loop index variable and an accumulator variable, rather than passing arguments and returning values by recursion:

unsigned int factorial(unsigned int n)
{
    unsigned int product = 1;  // empty product is 1
    while (n) {
        product *= n;
        --n;
    }
    return product;
}

2.6.1 Expressive power

Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program's runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.[10][11]

Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and do loops are routinely rewritten in recursive form in functional languages.[12][13] However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, may cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack.

2.6.2 Performance issues

In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.

As a concrete example, the difference in performance between recursive and iterative implementations of the “factorial” example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

2.6.3 Stack space

In some programming languages, the stack space available to a thread is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language.[14] Note the caveat below regarding the special case of tail recursion.

2.6.4 Multiply recursive problems

Multiply recursive problems are inherently recursive, because of prior state they need to track. One example is tree traversal as in depth-first search; contrast this with list traversal and linear search in a list, which is singly recursive and thus naturally iterative. Other examples include divide-and-conquer algorithms such as quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.


2.7 Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (shown above) is tail-recursive. In contrast, the factorial function (also above) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the “for” and “while” loops.

The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller's return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.

2.8 Order of execution

In the simple case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion depth has been reached. Consider this example:

2.8.1 Function 1

void recursiveFunction(int num)
{
    printf("%d\n", num);
    if (num < 4)
        recursiveFunction(num + 1);
}

2.8.2 Function 2 with swapped lines

void recursiveFunction(int num)
{
    if (num < 4)
        recursiveFunction(num + 1);
    printf("%d\n", num);
}

Called with num = 1, Function 1 prints 1 2 3 4 (one number per line), while Function 2 prints 4 3 2 1, since its printing happens as the recursion unwinds.

2.9 Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed in a recurrence relation involving Big O notation. The recurrence can (usually) then be simplified into a single Big O term.


2.9.1 Shortcut rule

Main article: Master theorem

If the time-complexity of the function is in the form

T(n) = a · T(n/b) + O(n^k)

then the Big O of the time-complexity is:

• If a > b^k, then the time-complexity is O(n^(log_b a))

• If a = b^k, then the time-complexity is O(n^k · log n)

• If a < b^k, then the time-complexity is O(n^k)

where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and n^k represents the work the function does independent of any recursion (e.g. partitioning, recombining) at each level of recursion.
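As a worked example of this rule (the recurrences below are standard ones, stated here for illustration rather than drawn from this chapter):

```latex
% Merge sort: T(n) = 2T(n/2) + O(n), so a = 2, b = 2, k = 1.
% Here a = b^k, giving T(n) = O(n^k \log n) = O(n \log n).
%
% Binary search: T(n) = T(n/2) + O(1), so a = 1, b = 2, k = 0.
% Again a = b^k, giving T(n) = O(n^0 \log n) = O(\log n).
```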

2.10 See also

• Ackermann function

• Corecursion

• Functional programming

• Hierarchical and recursive queries in SQL

• Kleene–Rosser paradox

• McCarthy 91 function

• Memoization

• μ-recursive function

• Open recursion

• Primitive recursive function

• Recursion

• Sierpiński curve

• Takeuchi function

2.11 Notes and references

[1] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1: Recurrent Problems.

[2] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). p. 427.

[3] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 126.

[4] Felleisen, Matthias; Robert Bruce Findler; Matthew Flatt; Shriram Krishnamurthi (2001). How to Design Programs: An Introduction to Computing and Programming. Cambridge, Mass.: MIT Press. Part V: “Generative Recursion”.

[5] Felleisen, Matthias (2002). “Developing Interactive Web Programs”. In Jeuring, Johan. Advanced Functional Programming: 4th International School. Oxford, UK: Springer. p. 108.

[6] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1, Section 1.1: The Tower of Hanoi.


[7] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 427–430: The Tower of Hanoi.

[8] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 447–448: An Explicit Formula for the Tower of Hanoi Sequence.

[9] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 127.

[10] Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 79, ISBN 9781430232384.

[11] Drozdek, Adam (2012), Data Structures and Algorithms in C++ (4th ed.), Cengage Learning, p. 197, ISBN 9781285415017.

[12] Shivers, Olin. “The Anatomy of a Loop - A story of scope and control” (PDF). Georgia Institute of Technology. Retrieved 2012-09-03.

[13] Lambda the Ultimate. “The Anatomy of a Loop”. Lambda the Ultimate. Retrieved 2012-09-03.

[14] “27.1. sys — System-specific parameters and functions — Python v2.7.3 documentation”. Docs.python.org. Retrieved 2012-09-03.

2.12 Further reading

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

2.13 External links

• Harold Abelson and Gerald Sussman: “Structure and Interpretation of Computer Programs”

• Jonathan Bartlett: “Mastering Recursive Programming”

• David S. Touretzky: “Common Lisp: A Gentle Introduction to Symbolic Computation”

• Matthias Felleisen: “How To Design Programs: An Introduction to Computing and Programming”

• Owen L. Astrachan: “Big-Oh for Recursive Functions: Recurrence Relations”


Chapter 3

Bar recursion

Bar recursion is a generalized form of recursion developed by C. Spector in his 1962 paper.[1] It is related to bar induction in the same fashion that primitive recursion is related to ordinary induction, or transfinite recursion is related to transfinite induction.

3.1 Technical Definition

Let V, R, and O be types, and i be any natural number, representing a sequence of parameters taken from V. Then the function sequence f of functions fn from V^(i+n) → R to O is defined by bar recursion from the functions Ln : R → O and B with Bn : ((V^(i+n) → R) × (V^n → R)) → O if:

• fn((λα:V^(i+n))r) = Ln(r) for any r long enough that Ln₊k on any extension of r equals Ln. Assuming L is a continuous sequence, there must be such r, because a continuous function can use only finitely much data.

• fn(p) = Bn(p, (λx:V)fn₊₁(cat(p, x))) for any p in V^(i+n) → R.

Here “cat” is the concatenation function, sending p, x to the sequence which starts with p and has x as its last term. (This definition is based on the one by Escardó and Oliva.[2])

Provided that for every sufficiently long function (λα)r of type V^i → R, there is some n with Ln(r) = Bn((λα)r, (λx:V)Ln₊₁(r)), the bar induction rule ensures that f is well-defined.

The idea is that one extends the sequence arbitrarily, using the recursion term B to determine the effect, until a sufficiently long node of the tree of sequences over V is reached; then the base term L determines the final value of f. The well-definedness condition corresponds to the requirement that every infinite path must eventually pass through a sufficiently long node: the same requirement that is needed to invoke a bar induction.

The principles of bar induction and bar recursion are the intuitionistic equivalents of the axiom of dependent choices.[3]

3.2 References

[1] C. Spector (1962). “Provably recursive functionals of analysis: a consistency proof of analysis by an extension of principles in current intuitionistic mathematics”. In F. D. E. Dekker. Recursive Function Theory: Proc. Symposia in Pure Mathematics 5. American Mathematical Society. pp. 1–27.

[2] Martín Escardó, Paulo Oliva. “Selection functions, Bar recursion, and Backwards Induction”. Math. Struct. in Comp. Science.

[3] Jeremy Avigad, Solomon Feferman (1999). “VI: Gödel’s functional (“Dialectica”) interpretation”. In S. R. Buss. Handbook of Proof Theory.


Chapter 4

Corecursion

In computer science, corecursion is a type of operation that is dual to recursion. Whereas recursion works analytically, starting on data further from a base case, breaking it down into smaller data and repeating until one reaches a base case, corecursion works synthetically, starting from a base case and building it up, iteratively producing data further removed from a base case. Put simply, corecursive algorithms use the data that they themselves produce, bit by bit, as they become available and needed, to produce further bits of data. A similar but distinct concept is generative recursion, which may lack a definite “direction” inherent in corecursion and recursion.

Where recursion allows programs to operate on arbitrarily complex data, so long as they can be reduced to simple data (base cases), corecursion allows programs to produce arbitrarily complex and potentially infinite data structures, such as streams, so long as they can be produced from simple data (base cases). Where recursion may not terminate, never reaching a base state, corecursion starts from a base state, and thus produces subsequent steps deterministically, though it may proceed indefinitely (and thus not terminate under strict evaluation), or it may consume more than it produces and thus become non-productive. Many functions that are traditionally analyzed as recursive can alternatively, and arguably more naturally, be interpreted as corecursive functions that are terminated at a given stage, for example recurrence relations such as the factorial.

Corecursion can produce both finite and infinite data structures as results, and may employ self-referential data structures. Corecursion is often used in conjunction with lazy evaluation, to produce only a finite subset of a potentially infinite structure (rather than trying to produce an entire infinite structure at once). Corecursion is a particularly important concept in functional programming, where corecursion and codata allow total languages to work with infinite data structures.

4.1 Examples

Corecursion can be understood by contrast with recursion, which is more familiar. While corecursion is primarily of interest in functional programming, it can be illustrated using imperative programming, which is done below using the generator facility in Python. In these examples local variables are used and assigned values imperatively (destructively), though these are not necessary in corecursion in pure functional programming. In pure functional programming, rather than assigning to local variables, these computed values form an invariable sequence, and prior values are accessed by self-reference (later values in the sequence reference earlier values in the sequence to be computed). The assignments simply express this in the imperative paradigm and explicitly specify where the computations happen, which serves to clarify the exposition.

4.1.1 Factorial

A classic example of recursion is computing the factorial, which is defined recursively as 0! := 1 and n! := n × (n−1)!

To recursively compute its result on a given input, a recursive function calls (a copy of) itself with a different (“smaller” in some way) input and uses the result of this call to construct its result. The recursive call does the same, unless the base case has been reached. Thus a call stack develops in the process. For example, to compute fac(3), this recursively calls in turn fac(2), fac(1), fac(0) (“winding up” the stack), at which point recursion terminates with fac(0) = 1, and then the stack unwinds in reverse order and the results are calculated on the way back along the call stack to the initial


call frame fac(3), where the final result is calculated as 3 × 2 =: 6 and finally returned. In this example a function returns a single value.

This stack unwinding can be explicated, defining the factorial corecursively, as an iterator, where one starts with the case of 1 =: 0!, then from this starting value constructs factorial values for increasing numbers 1, 2, 3... as in the above recursive definition with the “time arrow” reversed, as it were, by reading it backwards as n! × (n+1) =: (n+1)!. The corecursive algorithm thus defined produces a stream of all factorials. This may be concretely implemented as a generator. Symbolically, noting that computing the next factorial value requires keeping track of both n and f (a previous factorial value), this can be represented as:

n, f = (0, 1) : (n + 1, f × (n + 1))

or in Haskell,

(\(n,f) -> (n+1, f*(n+1))) `iterate` (0,1)

meaning, “starting from n, f = 0, 1, on each step the next values are calculated as n + 1, f × (n + 1)”. This is mathematically equivalent and almost identical to the recursive definition, but the +1 emphasizes that the factorial values are being built up, going forwards from the starting case, rather than being computed after first going backwards, down to the base case, with a −1 decrement. Note also that the direct output of the corecursive function does not simply contain the factorial n! values, but also includes for each value the auxiliary data of its index n in the sequence, so that any one specific result can be selected among them all, as and when needed.

Note the connection with denotational semantics, where the denotations of recursive programs are built up corecursively in this way.

In Python, a recursive factorial function can be defined as:[lower-alpha 1]

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

This could then be called, for example, as factorial(5) to compute 5!.

A corresponding corecursive generator can be defined as:

def factorials():
    n, f = 0, 1
    while True:
        yield f
        n, f = n + 1, f * (n + 1)

This generates an infinite stream of factorials in order; a finite portion of it can be produced by:

def n_factorials(k):
    n, f = 0, 1
    while n <= k:
        yield f
        n, f = n + 1, f * (n + 1)

This could then be called to produce the factorials up to 5! via:

for f in n_factorials(5):
    print(f)

If we're only interested in a certain factorial, just the last value can be taken, or we can fuse the production and the access into one function:

def nth_factorial(k):
    n, f = 0, 1
    while n < k:
        n, f = n + 1, f * (n + 1)
    yield f

As can be readily seen here, this is practically equivalent (just by substituting return for the only yield there) to the accumulator argument technique for tail recursion, unwound into an explicit loop. Thus it can be said that the concept of corecursion is an explication of the embodiment of iterative computation processes by recursive definitions, where applicable.
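The equivalence noted above can be made concrete with a side-by-side sketch; the function names are illustrative, not part of the article's examples:

```python
def factorial_acc(n, f=1):
    # Tail-recursive factorial: the accumulator f plays the role of
    # the value built up step by step by the corecursive generator.
    if n == 0:
        return f
    return factorial_acc(n - 1, f * n)

def factorial_loop(n):
    # The same computation unwound into an explicit loop, mirroring
    # the body of the corecursive generator above.
    f = 1
    for i in range(1, n + 1):
        f *= i
    return f
```

Both compute the same values; the accumulator argument of the tail-recursive version corresponds exactly to the loop variable of the iterative one.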

4.1.2 Fibonacci sequence

In the same way, the Fibonacci sequence can be represented as:

a, b = (0, 1) : (b, a + b)


Note that because the Fibonacci sequence is a recurrence relation of order 2, the corecursive relation must track two successive terms, with the (b, −) corresponding to a shift forward by one step, and the (−, a + b) corresponding to computing the next term. This can then be implemented as follows (using parallel assignment):

def fibonacci_sequence():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

In Haskell,

map fst ( (\(a,b) -> (b,a+b)) `iterate` (0,1) )

4.1.3 Tree traversal

Tree traversal via a depth-first approach is a classic example of recursion. Dually, breadth-first traversal can very naturally be implemented via corecursion.

Without using recursion or corecursion, one may traverse a tree by starting at the root node, placing the child nodes in a data structure, then removing the nodes in the data structure in turn and iterating over each one's children.[lower-alpha 2] If the data structure is a stack (LIFO), this yields depth-first traversal, while if the data structure is a queue (FIFO), this yields breadth-first traversal.

Using recursion, a (post-order)[lower-alpha 3] depth-first traversal can be implemented by starting at the root node and recursively traversing each child subtree in turn (the subtree based at each child node) – the second child subtree does not start processing until the first child subtree is finished. Once a leaf node is reached or the children of a branch node have been exhausted, the node itself is visited (e.g., the value of the node itself is output). In this case, the call stack (of the recursive functions) acts as the stack that is iterated over.

Using corecursion, a breadth-first traversal can be implemented by starting at the root node, outputting its value,[lower-alpha 4]

then breadth-first traversing the subtrees – i.e., passing on the whole list of subtrees to the next step (not a single subtree, as in the recursive approach) – at the next step outputting the values of all of their root nodes, then passing on their child subtrees, etc.[lower-alpha 5] In this case the generator function, indeed the output sequence itself, acts as the queue. As in the factorial example (above), where the auxiliary information of the index (which step one was at, n) was pushed forward, in addition to the actual output of n!, in this case the auxiliary information of the remaining subtrees is pushed forward, in addition to the actual output. Symbolically:

v, t = ([], FullTree) : (RootValues, ChildTrees)

meaning that at each step, one outputs the list of values of root nodes, then proceeds to the child subtrees. Generating just the node values from this sequence simply requires discarding the auxiliary child tree data, then flattening the list of lists (values are initially grouped by level (depth); flattening (ungrouping) yields a flat linear list).

These can be compared as follows. The recursive traversal handles a leaf node (at the bottom) as the base case (when there are no children, just output the value), and analyzes a tree into subtrees, traversing each in turn, eventually resulting in just leaf nodes – actual leaf nodes, and branch nodes whose children have already been dealt with (cut off below). By contrast, the corecursive traversal handles a root node (at the top) as the base case (given a node, first output the value), treats a tree as being synthesized of a root node and its children, then produces as auxiliary output a list of subtrees at each step, which are then the input for the next step – the child nodes of the original root are the root nodes at the next step, as their parents have already been dealt with (cut off above). Note also that in the recursive traversal there is a distinction between leaf nodes and branch nodes, while in the corecursive traversal there is no distinction, as each node is treated as the root node of the subtree it defines.

Notably, given an infinite tree,[lower-alpha 6] the corecursive breadth-first traversal will traverse all nodes, just as for a finite tree, while the recursive depth-first traversal will go down one branch and not traverse all nodes, and indeed if traversing post-order, as in this example (or in-order), it will visit no nodes at all, because it never reaches a leaf. This shows the usefulness of corecursion rather than recursion for dealing with infinite data structures.

In Python, this can be implemented as follows.[lower-alpha 7] The usual post-order depth-first traversal can be defined as:[lower-alpha 8]

def df(node):
    if node is not None:
        df(node.left)
        df(node.right)
        print(node.value)

This can then be called by df(t) to print the values of the nodes of the tree in post-order depth-first order.

The breadth-first corecursive generator can be defined as:[lower-alpha 9]


def bf(tree):
    tree_list = [tree]
    while tree_list:
        new_tree_list = []
        for tree in tree_list:
            if tree is not None:
                yield tree.value
                new_tree_list.append(tree.left)
                new_tree_list.append(tree.right)
        tree_list = new_tree_list

This can then be called to print the values of the nodes of the tree in breadth-first order:

for i in bf(t):
    print(i)

4.2 Definition

Initial data types can be defined as being the least fixpoint (up to isomorphism) of some type equation; the isomorphism is then given by an initial algebra. Dually, final (or terminal) data types can be defined as being the greatest fixpoint of a type equation; the isomorphism is then given by a final coalgebra.

If the domain of discourse is the category of sets and total functions, then final data types may contain infinite, non-wellfounded values, whereas initial types do not.[1][2] On the other hand, if the domain of discourse is the category of complete partial orders and continuous functions, which corresponds roughly to the Haskell programming language, then final types coincide with initial types, and the corresponding final coalgebra and initial algebra form an isomorphism.[3]

Corecursion is then a technique for recursively defining functions whose range (codomain) is a final data type, dual to the way that ordinary recursion recursively defines functions whose domain is an initial data type.[4]

The discussion below provides several examples in Haskell that distinguish corecursion. Roughly speaking, if one were to port these definitions to the category of sets, they would still be corecursive. This informal usage is consistent with existing textbooks about Haskell.[5] Also note that the examples used in this article predate the attempts to define corecursion and explain what it is.

4.3 Discussion

The rule for primitive corecursion on codata is the dual to that for primitive recursion on data. Instead of descending on the argument by pattern-matching on its constructors (that were called up before, somewhere, so we receive a ready-made datum and get at its constituent sub-parts, i.e. “fields”), we ascend on the result by filling-in its “destructors” (or “observers”, that will be called afterwards, somewhere - so we're actually calling a constructor, creating another bit of the result to be observed later on). Thus corecursion creates (potentially infinite) codata, whereas ordinary recursion analyses (necessarily finite) data. Ordinary recursion might not be applicable to the codata because it might not terminate. Conversely, corecursion is not strictly necessary if the result type is data, because data must be finite.

Here is an example in Haskell. The following definition produces the list of Fibonacci numbers in linear time:

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

This infinite list depends on lazy evaluation; elements are computed on an as-needed basis, and only finite prefixes are ever explicitly represented in memory. This feature allows algorithms on parts of codata to terminate; such techniques are an important part of Haskell programming.

This can be done in Python as well:[6]

from itertools import tee, chain, islice, imap

def add(x, y):
    return x + y

def fibonacci():
    def deferred_output():
        for i in output:
            yield i
    result, c1, c2 = tee(deferred_output(), 3)
    paired = imap(add, c1, islice(c2, 1, None))
    output = chain([0, 1], paired)
    return result

for i in islice(fibonacci(), 20):
    print(i)

The definition of zipWith can be inlined, leading to this:

fibs = 0 : 1 : next fibs
  where next (a : t@(b:_)) = (a+b) : next t

This example employs a self-referential data structure. Ordinary recursion makes use of self-referential functions, but does not accommodate self-referential data. However, this is not essential to the Fibonacci example. It can be rewritten as follows:


fibs = fibgen (0,1)
fibgen (x,y) = x : fibgen (y,x+y)

This employs only a self-referential function to construct the result. If it were used with a strict list constructor it would be an example of runaway recursion, but with a non-strict list constructor this guarded recursion gradually produces an indefinitely defined list.

Corecursion need not produce an infinite object; a corecursive queue[7] is a particularly good example of this phenomenon. The following definition produces a breadth-first traversal of a binary tree in linear time:

data Tree a b = Leaf a | Branch b (Tree a b) (Tree a b)

bftrav :: Tree a b -> [Tree a b]
bftrav tree = queue where
  queue = tree : gen 1 queue
  gen 0 p = []
  gen len (Leaf _ : p) = gen (len-1) p
  gen len (Branch _ l r : p) = l : r : gen (len+1) p

This definition takes an initial tree and produces a list of subtrees. This list serves a dual purpose as both the queue and the result (gen len p produces its output len notches after its input back-pointer, p, along the queue). It is finite if and only if the initial tree is finite. The length of the queue must be explicitly tracked in order to ensure termination; this can safely be elided if this definition is applied only to infinite trees.

Another particularly good example gives a solution to the problem of breadth-first labeling.[8] The function label visits every node in a binary tree in a breadth-first fashion, and replaces each label with an integer, each subsequent integer one greater than the last. This solution employs a self-referential data structure, and the binary tree can be finite or infinite.

label :: Tree a b -> Tree Int Int
label t = t′ where (t′, ns) = label′ t (1:ns)

label′ :: Tree a b -> [Int] -> (Tree Int Int, [Int])
label′ (Leaf _)       (n:ns) = (Leaf n, n+1 : ns)
label′ (Branch _ l r) (n:ns) = (Branch n l′ r′, n+1 : ns′′)
  where (l′, ns′)  = label′ l ns
        (r′, ns′′) = label′ r ns′
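For contrast, the same breadth-first relabeling can be sketched imperatively in Python with an explicit FIFO queue instead of the self-referential label stream. This finite-tree-only sketch uses a hypothetical Node class and function names, and mutates the tree in place:

```python
from collections import deque

class Node:
    """Minimal binary-tree node; absent children are None."""
    def __init__(self, label, left=None, right=None):
        self.label = label
        self.left = left
        self.right = right

def bf_label(root):
    """Relabel a finite binary tree in breadth-first order with 1, 2, 3, ...
    An explicit queue plays the role that the self-referential list of
    labels plays in the Haskell version."""
    n = 1
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node is not None:
            node.label = n
            n += 1
            queue.append(node.left)
            queue.append(node.right)
    return root
```

Unlike the Haskell solution, this version cannot handle an infinite tree, since the loop must terminate before the relabeled tree is returned.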

An apomorphism (such as an anamorphism, such as unfold) is a form of corecursion in the same way that a paramorphism (such as a catamorphism, such as fold) is a form of recursion.

The Coq proof assistant supports corecursion and coinduction using the CoFixpoint command.

4.4 History

Corecursion, referred to as circular programming, dates at least to (Bird 1984), who credits John Hughes and Philip Wadler; more general forms were developed in (Allison 1989). The original motivations included producing more efficient algorithms (allowing one pass over data in some cases, instead of requiring multiple passes) and implementing classical data structures, such as doubly linked lists and queues, in functional languages.

4.5 See also

• Bisimulation

• Coinduction

• Recursion

• Anamorphism

4.6 Notes

[1] Not validating input data.

[2] More elegantly, one can start by placing the root node itself in the structure and then iterating.

[3] Post-order is to make “leaf node is base case” explicit for exposition, but the same analysis works for pre-order or in-order.

[4] Breadth-first traversal, unlike depth-first, is unambiguous, and visits a node value before processing children.


[5] Technically, one may define a breadth-first traversal on an ordered, disconnected set of trees – first the root node of each tree, then the children of each tree in turn, then the grandchildren in turn, etc.

[6] Assume fixed branching factor (e.g., binary), or at least bounded, and balanced (infinite in every direction).

[7] First defining a tree class, say via:

class Tree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

    def __str__(self):
        return str(self.value)

and initializing a tree, say via:

t = Tree(1, Tree(2, Tree(4), Tree(5)), Tree(3, Tree(6), Tree(7)))

In this example nodes are labeled in breadth-first order: 1 2 3 4 5 6 7

[8] Intuitively, the function iterates over subtrees (possibly empty), then once these are finished, all that is left is the node itself, whose value is then returned; this corresponds to treating a leaf node as basic.

[9] Here the argument (and loop variable) is considered as a whole, possibly infinite tree, represented by (identified with) its root node (tree = root node), rather than as a potential leaf node, hence the choice of variable name.

4.7 References

[1] Barwise and Moss 1996.

[2] Moss and Danner 1997.

[3] Smyth and Plotkin 1982.

[4] Gibbons and Hutton 2005.

[5] Doets and van Eijck 2004.

[6] Hettinger 2009.

[7] Allison 1989; Smith 2009.

[8] Jones and Gibbons 1992.

• Bird, Richard Simpson (1984). “Using circular programs to eliminate multiple traversals of data”. Acta Informatica 21 (3): 239–250. doi:10.1007/BF00264249.

• Lloyd Allison (April 1989). “Circular Programs and Self-Referential Structures”. Software Practice and Experience 19 (2): 99–109. doi:10.1002/spe.4380190202.

• Geraint Jones and Jeremy Gibbons (1992). Linear-time breadth-first tree algorithms: An exercise in the arithmetic of folds and zips (Technical report). Dept of Computer Science, University of Auckland.

• Jon Barwise and Lawrence S Moss (June 1996). Vicious Circles. Center for the Study of Language and Information. ISBN 978-1-57586-009-1.

• Lawrence S Moss and Norman Danner (1997). “On the Foundations of Corecursion”. Logic Journal of the IGPL 5 (2): 231–257. doi:10.1093/jigpal/5.2.231.

• Kees Doets and Jan van Eijck (May 2004). The Haskell Road to Logic, Maths, and Programming. King’s College Publications. ISBN 978-0-9543006-9-2.

• David Turner (2004-07-28). “Total Functional Programming”. Journal of Universal Computer Science 10 (7): 751–768. doi:10.3217/jucs-010-07-0751.

• Jeremy Gibbons and Graham Hutton (April 2005). “Proof methods for corecursive programs”. Fundamenta Informaticae Special Issue on Program Transformation 66 (4): 353–366.

• Leon P Smith (2009-07-29). “Lloyd Allison’s Corecursive Queues: Why Continuations Matter”. The Monad Reader (14): 37–68.

• Raymond Hettinger (2009-11-19). “Recipe 576961: Technique for cyclical iteration”.

• M. B. Smyth and G. D. Plotkin (1982). “The Category-Theoretic Solution of Recursive Domain Equations”. SIAM Journal on Computing 11 (4): 761–783. doi:10.1137/0211062.


Chapter 5

Course-of-values recursion

In computability theory, course-of-values recursion is a technique for defining number-theoretic functions by recursion. In a definition of a function f by course-of-values recursion, the value of f(n+1) is computed from the sequence ⟨f(1), f(2), . . . , f(n)⟩. The fact that such definitions can be converted into definitions using a simpler form of recursion is often used to prove that functions defined by course-of-values recursion are primitive recursive. This article uses the convention that the natural numbers are the set {1, 2, 3, 4, ...}.

5.1 Definition and examples

The factorial function n! is recursively defined by the rules

0! = 1,
(n+1)! = (n+1)*(n!).

This recursion is a primitive recursion because it computes the next value (n+1)! of the function based on the value of n and the previous value n! of the function. On the other hand, the function Fib(n), which returns the nth Fibonacci number, is defined with the recursion equations

Fib(0) = 0,
Fib(1) = 1,
Fib(n+2) = Fib(n+1) + Fib(n).

In order to compute Fib(n+2), the last two values of the Fib function are required. Finally, consider the function g defined with the recursion equations

g(0) = 0,
g(n+1) = ∑_{i=0}^{n} g(i)^(n−i).

To compute g(n+1) using these equations, all the previous values of g must be computed; no fixed finite number of previous values is sufficient in general for the computation of g. The functions Fib and g are examples of functions defined by course-of-values recursion.

In general, a function f is defined by course-of-values recursion if there is a fixed primitive recursive function h such that for all n,

f(n) = h(n, ⟨f(0), f(1), . . . , f(n− 1)⟩)

where ⟨f(0), f(1), . . . , f(n− 1)⟩ is a Gödel number encoding the indicated sequence. In particular


f(0) = h(0, ⟨⟩),

provides the initial value of the recursion. The function h might test its first argument to provide explicit initial values, for instance for Fib one could use the function defined by

h(n, s) = n                   if n < 2,
h(n, s) = s[n−2] + s[n−1]     if n ≥ 2,

where s[i] denotes extraction of the element i from an encoded sequence s; this is easily seen to be a primitive recursive function (assuming an appropriate Gödel numbering is used).
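Using ordinary Python lists in place of Gödel-numbered sequences, this step function and the resulting course-of-values computation of Fib can be sketched as follows; the names and the list representation are illustrative, not part of the formal development:

```python
def h_fib(n, s):
    # Course-of-values step function for Fib: s plays the role of the
    # encoded sequence <f(0), ..., f(n-1)>, here simply a Python list.
    if n < 2:
        return n
    return s[n - 2] + s[n - 1]

def f(n):
    # f(n) = h(n, <f(0), ..., f(n-1)>), computed bottom-up by
    # accumulating the history of all previous values.
    history = []
    for i in range(n + 1):
        history.append(h_fib(i, history))
    return history[n]
```

Each call to h_fib sees the entire history of earlier values, which is exactly what distinguishes course-of-values recursion from ordinary primitive recursion.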

5.2 Equivalence to primitive recursion

In order to convert a definition by course-of-values recursion into a primitive recursion, an auxiliary (helper) function is used. Suppose that one wants to have

f(n) = h(n, ⟨f(0), f(1), . . . , f(n− 1)⟩)

To define f using primitive recursion, first define the auxiliary course-of-values function that should satisfy

f̄(n) = ⟨f(0), f(1), . . . , f(n− 1)⟩.

Thus f̄(n) encodes the first n values of f. The function f̄ can be defined by primitive recursion because f̄(n+1) is obtained by appending to f̄(n) the new element h(n, f̄(n)):

f̄(0) = ⟨⟩

f̄(n+ 1) = append(n, f̄(n), h(n, f̄(n))),

where append(n, s, x) computes, whenever s encodes a sequence of length n, a new sequence t of length n + 1 such that t[n] = x and t[i] = s[i] for all i < n (again this is a primitive recursive function, under the assumption of an appropriate Gödel numbering).

Given f̄, the original function f can be defined by f(n) = f̄(n+1)[n], which shows that it is also a primitive recursive function.
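The conversion can be sketched in Python, again with plain lists standing in for Gödel-numbered sequences; the function names are hypothetical:

```python
def f_bar(n, h):
    # Auxiliary course-of-values function: f_bar(n) encodes the
    # sequence <f(0), ..., f(n-1)>. Primitive recursion:
    #   f_bar(0)   = <>
    #   f_bar(n+1) = append(n, f_bar(n), h(n, f_bar(n)))
    s = []
    for i in range(n):
        s = s + [h(i, s)]   # append the new element h(i, f_bar(i))
    return s

def f_from_fbar(n, h):
    # Recover the original function: f(n) = f_bar(n+1)[n].
    return f_bar(n + 1, h)[n]
```

Note that f_bar only ever consults its own previous value, which is the shape of an ordinary primitive recursion.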

5.3 Application to primitive recursive functions

In the context of primitive recursive functions, it is convenient to have a means to represent finite sequences of natural numbers as single natural numbers. One such method, Gödel’s encoding, represents a sequence ⟨n₁, n₂, . . . , nₖ⟩ as

∏_{i=1}^{k} p_i^{n_i}

where p_i represents the ith prime. It can be shown that, with this representation, the ordinary operations on sequences are all primitive recursive. These operations include

• Determining the length of a sequence,

• Extracting an element from a sequence given its index,


• Concatenating two sequences.
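This prime-power encoding can be sketched directly (naive trial division for the primes; the function names are illustrative):

```python
def nth_prime(i):
    # Return the i-th prime, 1-indexed: p_1 = 2, p_2 = 3, p_3 = 5, ...
    count, candidate = 0, 1
    while count < i:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def encode(seq):
    # Goedel-encode <n_1, ..., n_k> as the product of p_i ** n_i.
    code = 1
    for i, n in enumerate(seq, start=1):
        code *= nth_prime(i) ** n
    return code

def extract(code, i):
    # Element i of the encoded sequence is the exponent of p_i in code.
    p, n = nth_prime(i), 0
    while code % p == 0:
        code //= p
        n += 1
    return n
```

Note that encode([0]) == encode([]) == 1: zero exponents are invisible, which is exactly the ambiguity that the shifted exponents n_i + 1, discussed below, are designed to remove.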

Using this representation of sequences, it can be seen that if h(m) is primitive recursive then the function

f(n) = h(⟨f(1), f(2), . . . , f(n− 1)⟩)

is also primitive recursive.

When the natural numbers are taken to begin with zero, the sequence ⟨n₁, n₂, . . . , nₖ⟩ is instead represented as

∏_{i=1}^{k} p_i^{(n_i + 1)}

which makes it possible to distinguish the codes for the sequences ⟨0⟩ and ⟨0, 0⟩ .

5.4 References

• Hinman, P. G., 2006, Fundamentals of Mathematical Logic, A K Peters.

• Odifreddi, P.G., 1989, Classical Recursion Theory, North Holland; second edition, 1999.


Chapter 6

Single recursion

This article is about recursive approaches to solving problems. For recursion in computer science acronyms, see Recursive acronym#Computer-related examples.

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).[1] The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[2]

“The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.”[3]

Most computer programming languages support recursion by allowing a function to call itself within the program text. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. Computability theory proves that these recursive-only languages are Turing complete; they are as computationally powerful as Turing complete imperative languages, meaning they can solve the same kinds of problems as imperative languages even without iterative control structures such as “while” and “for”.

6.1 Recursive functions and algorithms

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the “terminating case”.

The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances, for example some system and server processes, are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by co-recursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say “compute the nth term (nth partial sum)”.


Page 43: Recursion acglz


Tree created using the Logo programming language and relying heavily on recursion

6.2 Recursive data types

Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

Further information: Algebraic data type


6.2.1 Inductively defined data

Main article: Recursive data type

An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.

Another example of inductive definition is the natural numbers (or positive integers):

A natural number is either 1 or n+1, where n is a natural number.

Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

<expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>)

This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complex arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.
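The grammar's self-reference maps directly onto a recursive evaluator: the code recurs exactly where the grammar refers to <expr>. The following C sketch is an illustration, not part of the original article; the function names and the assumption of fully parenthesized input are ours.

```c
#include <ctype.h>
#include <stdlib.h>

/* Skip whitespace between tokens. */
static void skip_ws(const char **s) {
    while (isspace((unsigned char)**s))
        (*s)++;
}

/* Evaluate <expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>).
   The function calls itself exactly where the grammar mentions <expr>. */
static long eval_expr(const char **s) {
    skip_ws(s);
    if (**s == '(') {                    /* recursive cases */
        (*s)++;                          /* consume '(' */
        long left = eval_expr(s);        /* first sub-expression */
        skip_ws(s);
        char op = *(*s)++;               /* '*' or '+' */
        long right = eval_expr(s);       /* second sub-expression */
        skip_ws(s);
        (*s)++;                          /* consume ')' */
        return op == '*' ? left * right : left + right;
    }
    return strtol(*s, (char **)s, 10);   /* base case: <number> */
}

long eval(const char *text) {
    return eval_expr(&text);
}
```

For instance, eval("(5 * ((3 * 6) + 8))") evaluates the article's sample expression to 130.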

6.2.2 Coinductively defined data and corecursion

Main articles: Coinduction and Corecursion

A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might look like this:

A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure (namely, via the accessor functions head and tail) and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.

Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program’s output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program.
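The head/tail style of definition can be sketched even in a strict language such as C; the Stream type and function names below are our illustration, not from the article. Each element of a conceptually infinite stream of natural numbers is produced only when tail is applied, and take extracts a finite portion.

```c
/* A stream of integers, described coinductively: we specify only how to
   observe it -- .head is the current element, .tail computes the rest. */
typedef struct Stream Stream;
struct Stream {
    int head;                   /* head(s) */
    Stream (*tail)(Stream);     /* tail(s), computed on demand */
};

/* The conceptually infinite stream 0, 1, 2, ... */
static Stream nat_tail(Stream s) {
    Stream next = { s.head + 1, nat_tail };
    return next;
}

Stream naturals(void) {
    Stream s = { 0, nat_tail };
    return s;
}

/* The finite-observation mechanism: copy the first n elements out. */
void take(Stream s, int n, int *out) {
    for (int i = 0; i < n; i++) {
        out[i] = s.head;
        s = s.tail(s);          /* next element exists only once requested */
    }
}
```

No element beyond the n requested is ever computed, which is the essence of taking a finite portion of an infinite object.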

6.3 Types of recursion

6.3.1 Single recursion and multiple recursion

Recursion that only contains a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal,


such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search, or computing the Fibonacci sequence.

Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack.

Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively is multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, tracking at each step two successive values; see corecursion: examples. A more sophisticated example is using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.
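The Fibonacci conversion just described can be sketched in C; the helper name fib_pair and its parameter convention (carrying two successive Fibonacci values) are our choices, not fixed by the text.

```c
/* Multiple recursion: two self-calls per step, exponential time. */
unsigned long fib_naive(unsigned int n) {
    if (n < 2)
        return n;                        /* base cases: F(0) = 0, F(1) = 1 */
    return fib_naive(n - 1) + fib_naive(n - 2);
}

/* Single recursion: pass the two most recent values as parameters.
   One self-call per step, linear time; the call is also in tail position. */
static unsigned long fib_pair(unsigned int n, unsigned long a, unsigned long b) {
    return n == 0 ? a : fib_pair(n - 1, b, a + b);  /* (a, b) -> (b, a + b) */
}

unsigned long fib(unsigned int n) {
    return fib_pair(n, 0, 1);
}
```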

6.3.2 Indirect recursion

Main article: Mutual recursion

Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.

Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, then from the point of view of f alone, f is indirectly recursing; from the point of view of g alone, g is indirectly recursing; and from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions.
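A minimal sketch of mutual recursion in C, using the standard textbook parity example (not taken from this article): is_even calls is_odd, which calls is_even back, so each function recurs only indirectly.

```c
#include <stdbool.h>

bool is_odd(unsigned int n);  /* forward declaration: each function needs the other */

/* is_even calls is_odd, which calls is_even again: indirect recursion. */
bool is_even(unsigned int n) {
    if (n == 0)
        return true;          /* base case */
    return is_odd(n - 1);
}

bool is_odd(unsigned int n) {
    if (n == 0)
        return false;         /* base case */
    return is_even(n - 1);
}
```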

6.3.3 Anonymous recursion

Main article: Anonymous recursion

Recursion is usually done by explicitly calling a function by name. However, recursion can also be done by implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.

6.3.4 Structural versus generative recursion

See also: Structural recursion

Some authors classify recursion as either “structural” or “generative”. The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:

[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.[4]

Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.


Generative recursion is the alternative:

Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How to Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton’s method, fractals, and adaptive integration.[5]

This distinction is important in proving termination of a function.

• All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.

• Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions (each step generates the new data, such as the successive approximations in Newton’s method), and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.

• In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.

• By contrast, generative recursion is when there is no such obvious loop variant, and termination depends on a function, such as “error of approximation”, that does not necessarily decrease to zero, so termination is not guaranteed without further analysis.

6.4 Recursive programs

6.4.1 Recursive procedures

Factorial

A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:

fact(n) = 1                  if n = 0
fact(n) = n · fact(n − 1)    if n > 0
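The piecewise definition transcribes almost verbatim into code; a minimal C sketch:

```c
/* fact(n): direct transcription of the piecewise definition above. */
unsigned long fact(unsigned int n) {
    if (n == 0)
        return 1;               /* base case: 0! = 1 */
    return n * fact(n - 1);     /* recursive case: n! = n * (n-1)! */
}
```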

The function can also be written as a recurrence relation:

b_n = n · b_{n−1}

b_0 = 1

Evaluating the recurrence relation step by step demonstrates the computation performed by the recursive definition above.

This factorial function can also be described without using recursion, by making use of the typical looping constructs found in imperative programming languages. Such imperative code is equivalent to this mathematical definition using an accumulator variable t:

fact(n) = fact_acc(n, 1)

fact_acc(n, t) = t                        if n = 0
fact_acc(n, t) = fact_acc(n − 1, n · t)   if n > 0

The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.
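In C the accumulator version looks like the sketch below; whether the tail call actually runs in constant space depends on the compiler, so this illustrates the shape of the definition rather than guaranteeing iterative behavior.

```c
/* fact_acc(n, t): t accumulates the product. The self-call is in tail
   position, so a compiler that eliminates tail calls turns it into a loop. */
static unsigned long fact_acc(unsigned int n, unsigned long t) {
    if (n == 0)
        return t;                   /* base case: return the accumulator */
    return fact_acc(n - 1, n * t);  /* tail call */
}

unsigned long fact_tail(unsigned int n) {
    return fact_acc(n, 1);          /* fact(n) = fact_acc(n, 1) */
}
```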


Greatest common divisor

The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.

Function definition:

gcd(x, y) = x                         if y = 0
gcd(x, y) = gcd(y, remainder(x, y))   if y > 0

Recurrence relation for greatest common divisor, where x % y expresses the remainder of x/y:

gcd(x, y) = gcd(y, x % y)   if y ≠ 0

gcd(x, 0) = x

The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and evaluating it step by step shows the computation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.

The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.
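Both forms can be sketched in C (the variable names are ours); the iterative version makes the temporary variable explicit:

```c
/* Tail-recursive form, following the definition above. */
unsigned int gcd(unsigned int x, unsigned int y) {
    if (y == 0)
        return x;              /* base case */
    return gcd(y, x % y);      /* tail call */
}

/* Explicit iteration: the same steps, with a temporary variable. */
unsigned int gcd_iter(unsigned int x, unsigned int y) {
    while (y != 0) {
        unsigned int tmp = x % y;  /* remainder of x / y */
        x = y;
        y = tmp;
    }
    return x;
}
```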

Towers of Hanoi


Main article: Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[6][7] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?

Function definition:

hanoi(n) = 1                       if n = 1
hanoi(n) = 2 · hanoi(n − 1) + 1    if n > 1


Recurrence relation for hanoi:

h_n = 2 · h_{n−1} + 1

h_1 = 1

Example implementations:

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.[8]
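A typical pair of implementations in C, as a sketch (the peg labels and function names are ours): hanoi_count computes the move count directly from the recurrence, and hanoi_moves prints an actual solution.

```c
#include <stdio.h>

/* Number of moves needed for n disks: h(1) = 1, h(n) = 2*h(n-1) + 1. */
unsigned long hanoi_count(unsigned int n) {
    if (n == 1)
        return 1;
    return 2 * hanoi_count(n - 1) + 1;
}

/* Print the moves: shift n disks from peg 'from' to peg 'to',
   using peg 'via' as scratch space. */
void hanoi_moves(unsigned int n, char from, char to, char via) {
    if (n == 0)
        return;                             /* base case: nothing to move */
    hanoi_moves(n - 1, from, via, to);      /* clear the n-1 smaller disks */
    printf("move disk %u: %c -> %c\n", n, from, to);
    hanoi_moves(n - 1, via, to, from);      /* re-stack them on top */
}
```

hanoi_count(n) evaluates to 2^n − 1, matching the explicit formula referred to above.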

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array’s size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

/* Call binary_search with proper initial conditions.
   INPUT:  data is an array of integers SORTED in ASCENDING order,
           toFind is the integer to search for,
           count is the total number of elements in the array
   OUTPUT: result of binary_search */
int search(int *data, int toFind, int count) {
    // Start = 0 (beginning index)
    // End = count - 1 (top index)
    return binary_search(data, toFind, 0, count - 1);
}

/* Binary Search Algorithm.
   INPUT:  data is an array of integers SORTED in ASCENDING order,
           toFind is the integer to search for,
           start is the minimum array index,
           end is the maximum array index
   OUTPUT: position of the integer toFind within array data, -1 if not found */
int binary_search(int *data, int toFind, int start, int end) {
    // Get the midpoint.
    int mid = start + (end - start) / 2;  // Integer division

    // Stop condition.
    if (start > end)
        return -1;
    else if (data[mid] == toFind)  // Found?
        return mid;
    else if (data[mid] > toFind)   // Data is greater than toFind, search lower half
        return binary_search(data, toFind, start, mid - 1);
    else                           // Data is less than toFind, search upper half
        return binary_search(data, toFind, mid + 1, end);
}

6.4.2 Recursive data structures (structural recursion)

Main article: Recursive data type

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.

“Recursive algorithms are particularly appropriate when the underlying problem or the data to betreated are defined in recursive terms.”[9]

The examples in this section illustrate what is known as “structural recursion”. This term refers to the fact that the recursive procedures are acting on data that is defined recursively.

As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function’s body consume some immediate piece of a given compound value.[5]


Linked lists

Main article: Linked list

Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The “next” element of struct node is a pointer to another struct node, effectively creating a list type.

struct node {
    int data;           // some integer data
    struct node *next;  // pointer to another struct node
};

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

void list_print(struct node *list) {
    if (list != NULL) {             // base case
        printf("%d ", list->data);  // print integer data followed by a space
        list_print(list->next);     // recursive call on the next node
    }
}

Binary trees

Main article: Binary tree

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).

struct node {
    int data;            // some integer data
    struct node *left;   // pointer to the left subtree
    struct node *right;  // pointer to the right subtree
};

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

// Test if tree_node contains i; return 1 if so, 0 if not.
int tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return 0;  // base case
    else if (tree_node->data == i)
        return 1;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

At most two recursive calls will be made for any given call to tree_contains as defined above.

// Inorder traversal:
void tree_print(struct node *tree_node) {
    if (tree_node != NULL) {             // base case
        tree_print(tree_node->left);     // go left
        printf("%d ", tree_node->data);  // print the integer followed by a space
        tree_print(tree_node->right);    // go right
    }
}

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order.

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, so the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem.

import java.io.*;

public class FileSystem {

    public static void main(String[] args) {
        traverse();
    }

    /**
     * Obtains the filesystem roots
     * Proceeds with the recursive filesystem traversal
     */
    private static void traverse() {
        File[] fs = File.listRoots();
        for (int i = 0; i < fs.length; i++) {
            if (fs[i].isDirectory() && fs[i].canRead()) {
                rtraverse(fs[i]);
            }
        }
    }

    /**
     * Recursively traverse a given directory
     *
     * @param fd indicates the starting point of traversal
     */
    private static void rtraverse(File fd) {
        File[] fss = fd.listFiles();
        for (int i = 0; i < fss.length; i++) {
            System.out.println(fss[i]);
            if (fss[i].isDirectory() && fss[i].canRead()) {
                rtraverse(fss[i]);
            }
        }
    }
}


This code blends the lines, at least somewhat, between recursion and iteration. It is, essentially, a recursive implementation, which is the best way to traverse a filesystem. It is also an example of direct and indirect recursion. The method “rtraverse” is purely a direct example; the method “traverse” is the indirect, which calls “rtraverse”. This example needs no “base case” scenario because there will always be some fixed number of files or directories in a given filesystem, so the recursion is guaranteed to stop.

6.5 Implementation issues

In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:

• Wrapper function (at top)

• Short-circuiting the base case, aka “Arm’s-length recursion” (at bottom)

• Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough

On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm’s-length recursion is a special case of this.

6.5.1 Wrapper function

A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.

Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as “level of recursion” or partial computations for memoization, and to handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.

6.5.2 Short-circuiting the base case

Short-circuiting the base case, also known as arm’s-length recursion, consists of checking the base case before making a recursive call, i.e., checking whether the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short-circuit, and may miss 0; this can be mitigated by a wrapper function.

Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.

Conceptually, short-circuiting can be considered to either have the same base case and recursive step, only checking the base case before the recursion, or it can be considered to have a different base case (one step removed from the standard base case) and a more complex recursive step, namely “check valid then recurse”, as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.


Depth-first search

A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see the binary trees section above for the standard recursive discussion.

The standard recursive algorithm for a DFS is:

• base case: If current node is Null, return false

• recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children

In short-circuiting, this is instead:

• check value of current node, return true if match,

• otherwise, on children, if not Null, then recurse.

In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).

In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.

In C, the standard recursive algorithm may be implemented as:

bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;  // base case
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

The short-circuited algorithm may be implemented as:

// Wrapper function to handle empty tree
bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;  // empty tree
    else
        return tree_contains_do(tree_node, i);  // call auxiliary function
}

// Assumes tree_node != NULL
bool tree_contains_do(struct node *tree_node, int i) {
    if (tree_node->data == i)
        return true;  // found
    else              // recurse
        return (tree_node->left  && tree_contains_do(tree_node->left,  i)) ||
               (tree_node->right && tree_contains_do(tree_node->right, i));
}

Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is only made if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a bool, so the overall expression evaluates to a bool. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.

6.5.3 Hybrid algorithm

Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.

6.6 Recursion versus iteration

Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple


recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead, and sometimes explicit iteration is not available.

Compare the templates to compute x_n defined by x_n = f(n, x_{n−1}) from x_base: for the imperative language the overhead is to define the function, while for the functional language the overhead is to define the accumulator variable x.

For example, the factorial function may be implemented iteratively in C by assigning to a loop index variable and an accumulator variable, rather than passing arguments and returning values by recursion:

unsigned int factorial(unsigned int n) {
    unsigned int product = 1;  // empty product is 1
    while (n) {
        product *= n;
        --n;
    }
    return product;
}

6.6.1 Expressive power

Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program’s runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.[10][11]

Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and do loops are routinely rewritten in recursive form in functional languages.[12][13] However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack.
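The mechanical rewriting works in both directions. As a hedged C sketch (function names ours), here is a while loop computing the sum 1 + 2 + ... + n, and the tail-recursive form it corresponds to, with the loop variables turned into parameters:

```c
/* Iterative sum of 1..n with a while loop. */
unsigned long sum_iter(unsigned int n) {
    unsigned long acc = 0;
    while (n > 0) {
        acc += n;
        --n;
    }
    return acc;
}

/* The same loop rewritten as tail recursion: the loop variables (n, acc)
   become parameters, and the loop body becomes the tail call. */
static unsigned long sum_go(unsigned int n, unsigned long acc) {
    if (n == 0)
        return acc;                 /* loop exit condition */
    return sum_go(n - 1, acc + n);  /* one loop iteration */
}

unsigned long sum_rec(unsigned int n) {
    return sum_go(n, 0);
}
```

As the surrounding text notes, in C the recursive form is only safe for modest n unless the compiler eliminates the tail call.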

6.6.2 Performance issues

In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.

As a concrete example, the difference in performance between recursive and iterative implementations of the “factorial” example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

6.6.3 Stack space

In some programming languages, the stack space available to a thread is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language.[14] Note the caveat below regarding the special case of tail recursion.

6.6.4 Multiply recursive problems

Multiply recursive problems are inherently recursive, because of the prior state they need to track. One example is tree traversal, as in depth-first search; contrast this with list traversal and linear search in a list, which is singly recursive and thus naturally iterative. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.


6.7 Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (defined above) is tail-recursive. In contrast, the factorial function is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the “for” and “while” loops.

The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller’s return position need not be saved on the call stack; when the recursive call returns, it branches directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.

6.8 Order of execution

In the simple case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached. Consider this example:

6.8.1 Function 1

void recursiveFunction(int num) {
    printf("%d\n", num);
    if (num < 4)
        recursiveFunction(num + 1);
}

6.8.2 Function 2 with swapped lines

void recursiveFunction(int num) {
    if (num < 4)
        recursiveFunction(num + 1);
    printf("%d\n", num);
}
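The execution order the two C functions exhibit can be made visible in a Python transcription that collects output in a list instead of printing (the function names are our own):

```python
def function1(num, out=None):
    # Mirrors Function 1: the output step comes before the recursive call,
    # so values appear on the way "down": 1, 2, 3, 4.
    out = [] if out is None else out
    out.append(num)
    if num < 4:
        function1(num + 1, out)
    return out

def function2(num, out=None):
    # Mirrors Function 2: the output step is deferred until after the
    # recursive call, so values appear on the way back "up": 4, 3, 2, 1.
    out = [] if out is None else out
    if num < 4:
        function2(num + 1, out)
    out.append(num)
    return out

print(function1(1))  # [1, 2, 3, 4]
print(function2(1))  # [4, 3, 2, 1]
```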

6.9 Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed in a recurrence relation of Big O notation. They can (usually) then be simplified into a single Big-Oh term.


6.9.1 Shortcut rule

Main article: Master theorem

If the time-complexity of the function is in the form

T(n) = a · T(n/b) + O(n^k)

then the Big-Oh of the time-complexity is as follows:

• If a > b^k, then the time-complexity is O(n^(log_b a))

• If a = b^k, then the time-complexity is O(n^k · log n)

• If a < b^k, then the time-complexity is O(n^k)

where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and n^k represents the work the function does independent of any recursion (e.g. partitioning, recombining) at each level of recursion.
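The three cases of the shortcut rule can be sketched as a small classifier. The function name and output strings are our own; the case analysis is the rule as stated above.

```python
import math

def master_theorem(a, b, k):
    """Classify T(n) = a*T(n/b) + O(n^k) per the shortcut rule."""
    if a > b ** k:
        c = math.log(a, b)  # log_b a
        return f"O(n^{c:.2f})"
    elif a == b ** k:
        return f"O(n^{k} log n)"
    else:
        return f"O(n^{k})"

print(master_theorem(2, 2, 1))  # merge sort: O(n^1 log n)
print(master_theorem(1, 2, 0))  # binary search: O(n^0 log n), i.e. O(log n)
print(master_theorem(4, 2, 1))  # O(n^2.00)
```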

6.10 See also

• Ackermann function

• Corecursion

• Functional programming

• Hierarchical and recursive queries in SQL

• Kleene–Rosser paradox

• McCarthy 91 function

• Memoization

• μ-recursive function

• Open recursion

• Primitive recursive function

• Recursion

• Sierpiński curve

• Takeuchi function

6.11 Notes and references

[1] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1: Recurrent Problems.

[2] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). p. 427.

[3] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 126.

[4] Felleisen, Matthias; Robert Bruce Findler; Matthew Flatt; Shriram Krishnamurthi (2001). How to Design Programs: An Introduction to Computing and Programming. Cambridge, MA: MIT Press. Part V: “Generative Recursion”.

[5] Felleisen, Matthias (2002). “Developing Interactive Web Programs”. In Jeuring, Johan. Advanced Functional Programming: 4th International School. Oxford, UK: Springer. p. 108.

[6] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1, Section 1.1: The Tower of Hanoi.


[7] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 427–430: The Tower of Hanoi.

[8] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 447–448: An Explicit Formula for the Tower of Hanoi Sequence.

[9] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 127.

[10] Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 79, ISBN 9781430232384.

[11] Drozdek, Adam (2012), Data Structures and Algorithms in C++ (4th ed.), Cengage Learning, p. 197, ISBN 9781285415017.

[12] Shivers, Olin. “The Anatomy of a Loop - A story of scope and control” (PDF). Georgia Institute of Technology. Retrieved 2012-09-03.

[13] Lambda the Ultimate. “The Anatomy of a Loop”. Lambda the Ultimate. Retrieved 2012-09-03.

[14] “27.1. sys — System-specific parameters and functions — Python v2.7.3 documentation”. Docs.python.org. Retrieved 2012-09-03.

6.12 Further reading

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

6.13 External links

• Harold Abelson and Gerald Sussman: “Structure and Interpretation of Computer Programs”

• Jonathan Bartlett: “Mastering Recursive Programming”

• David S. Touretzky: “Common Lisp: A Gentle Introduction to Symbolic Computation”

• Matthias Felleisen: “How To Design Programs: An Introduction to Computing and Programming”

• Owen L. Astrachan: “Big-Oh for Recursive Functions: Recurrence Relations”


Chapter 7

Droste effect

The Droste effect — known as mise en abyme in art — is the effect of a picture appearing within itself, in a place where a similar picture would realistically be expected to appear.[1] The appearance is recursive: the smaller version contains an even smaller version of the picture, and so on. Only in theory could this go on forever; practically, it continues only as long as the resolution of the picture allows, which is relatively short, since each iteration geometrically reduces the picture’s size. It is a visual example of a strange loop, a self-referential system of instancing which is the cornerstone of fractal geometry.

7.1 Origin

The effect is named after the image on the tins and boxes of Droste cocoa powder, one of the main Dutch brands, which displayed a nurse carrying a serving tray with a cup of hot chocolate and a box with the same image.[2] This image, introduced in 1904 and maintained for decades with slight variations, became a household notion. Reportedly, poet and columnist Nico Scheepmaker introduced wider usage of the term in the late 1970s.[3]

The Droste effect was used by Giotto di Bondone in 1320, in his Stefaneschi Triptych. The polyptych altarpiece portrays in its center panel Cardinal Giacomo Gaetani Stefaneschi offering the triptych itself to St. Peter.[4] There are also several examples from medieval times of books featuring images containing the book itself, or window panels in churches depicting miniature copies of the window panel itself.[5]

7.2 Examples

Land O'Lakes Butter packaging has an Indian maiden carrying a package of butter with a picture of herself.

Bottles of Clicquot Club soda showed brand mascot “Clicquot” the Eskimo boy holding a bottle of Clicquot Club soda.

The cover of the vinyl album Ummagumma by Pink Floyd shows a band member sitting, with a picture on the wall. The picture shows the same scene with a different band member, and the effect continues for all four band members, with the picture for the fourth being the cover of their previous album A Saucerful of Secrets.

In the 1971 science fiction film Escape from the Planet of the Apes, the character Dr. Otto Hasslein (Eric Braeden) attempts to explain the appearance in the present day of intelligent apes from Earth’s future. Hasslein uses a painting on a CRT monitor to illustrate an effect he refers to as “infinite regression.” The demonstration consists of a camera pulling away from a picture of an artist painting a picture, in a suggested infinite loop.

The logo of cheese spread brand The Laughing Cow bears the Droste effect.

In the 1980s, Chicago Cubs announcer Harry Caray regularly appeared on television station WGN-TV standing next to a television set that was tuned to WGN showing Caray and the TV, thus forming a Droste effect.

The title sequence of the eighth series of the British TV series Doctor Who features the show’s TARDIS emerging from a spiraling Droste effect clock face.[6]

This effect can also occur on a computer if the copy of the entire monitor screen is reproduced within one window.


For example, the accompanying image was created by connecting a computer to itself using Chrome Remote Desktop.

7.3 See also

7.4 References

[1] Nänny, Max and Fischer, Olga, The Motivated Sign: Iconicity in Language and Literature, p. 37, John Benjamins Publishing Company (2001) ISBN 90-272-2574-5

[2] Törnqvist, Egil. Ibsen: A Doll’s House, p. 105, Cambridge University Press (1995) ISBN 0-521-47866-9

[3] “Droste, altijd welkom”. cultuurarchief.nl.

[4] “Giotto di Bondone and assistants: Stefaneschi triptych”. vatican.va.

[5] See the collection of articles Medieval 'mise-en-abyme': the object depicted within itself for examples and opinions on how this effect was used symbolically.

[6] Title sequence by Billy Hanshaw adapted from his design as a fan of the show.

7.5 External links

• Escher and the Droste effect

• The Math Behind the Droste Effect (article by Jos Leys summarizing the results of the Leiden study and article)

• Droste Effect with Mathematica

• Droste Effect from Wolfram Demonstrations Project

• HyperDroste : Create Droste Effect animations on an iPhone


The woman holds an object bearing a smaller image of her holding the same object, which in turn bears a smaller image of her holding the same object, and so on.


Example of the Droste effect in a spiraling format

Example of Droste effect using Chrome Remote Desktop.


Chapter 8

Fixed-point combinator

In computer science, a fixed-point combinator (or fixpoint combinator[1]) is a higher-order function y that satisfies the equation,

y f = f (y f)

It is so named because, by setting x = y f, it represents a solution to the fixed point equation,

x = f x

A fixed point of a function f is a value that doesn’t change under the application of the function f. Consider the function f x = x². The values 0 and 1 are fixed points of this function, because 0 = 0² and 1 = 1². This function has no other fixed points.

A fixed-point combinator need not exist for all functions. Also, if f is a function of more than one parameter, the fixed point of the function need not be a total function.

Functions that satisfy the equation for y expand as,

y f = f (. . . f (y f) . . .)

A particular implementation of y is Curry’s paradoxical combinator Y, represented in lambda calculus by,

λf.(λx.f (x x)) (λx.f (x x))

This combinator may be used in implementing Curry’s paradox. The heart of Curry’s paradox is that lambda calculus is unsound as a deductive system, and the Y combinator demonstrates that by allowing an anonymous expression to represent zero, or even many, values. This is inconsistent in mathematical logic.

Applied to a function with one variable, the Y combinator usually does not terminate. More interesting results are obtained by applying the Y combinator to functions of two or more variables. The second variable may be used as a counter, or index. The resulting function behaves like a while or a for loop in an imperative language.

Used in this way, the Y combinator implements simple recursion. In the lambda calculus it is not possible to refer to the definition of a function in a function body. Recursion may only be achieved by passing in a function as a parameter. The Y combinator demonstrates this style of programming.
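The style of achieving recursion purely by passing a function in as a parameter can be sketched in Python. This is plain self-application in the spirit of the discussion, not the Y combinator itself; the function name is our own.

```python
def fact_step(self, n):
    # 'self' is the function to use for the recursive call; fact_step
    # never refers to its own name in its body, only to its parameter.
    return 1 if n == 0 else n * self(self, n - 1)

# Recursion is obtained by handing the function to itself as an argument.
print(fact_step(fact_step, 5))  # 120
```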

8.1 Introduction

The Y combinator is an implementation of the fixed-point combinator in lambda calculus. Fixed-point combinators may also be easily defined in other functional and imperative languages. The implementation in lambda calculus is more difficult due to limitations in lambda calculus.


The fixed-point combinator may be used in a number of different areas:

• General mathematics

• Untyped lambda calculus

• Typed lambda calculus

• Functional programming

• Imperative programming

Fixed-point combinators may be applied to a range of different functions, but normally will not terminate unless there is an extra parameter. Even with lazy evaluation, when the function to be fixed refers to its parameter, another call to the function is invoked. The calculation never gets started. The extra parameter is needed to trigger the start of the calculation.

The type of the fixed point is the return type of the function being fixed. This may be a real or a function or any other type.

In the untyped lambda calculus, the function to apply the fixed-point combinator to may be expressed using an encoding, like Church encoding. In this case particular lambda terms (which define functions) are considered as values. “Running” (beta reducing) the fixed-point combinator on the encoding gives a lambda term for the result, which may then be interpreted as a fixed-point value.

Alternately, a function may be considered as a lambda term defined purely in lambda calculus.

These different approaches affect how a mathematician and a programmer may regard a fixed-point combinator. A lambda calculus mathematician may see the Y combinator applied to a function as being an expression satisfying the fixed-point equation, and therefore a solution.

In contrast, a person only wanting to apply a fixed-point combinator to some general programming task may see it only as a means of implementing recursion.

8.1.1 Values and domains

Every expression has one value. This is true in general mathematics and it must be true in lambda calculus. This means that in lambda calculus, applying a fixed-point combinator to a function gives you an expression whose value is the fixed point of the function.

However, this is a value in the lambda calculus domain; it may not correspond to any value in the domain of the function, so in a practical sense it is not necessarily a fixed point of the function, and only in the lambda calculus domain is it a fixed point of the equation.

For example, consider,

x² = −1  ⇒  x = −1/x  ⇒  f x = −1/x  ∧  Y f = x

Division of signed numbers may be implemented in the Church encoding, so f may be represented by a lambda term.

This equation has no solution in the real numbers. But in the domain of the complex numbers, i and −i are solutions. This demonstrates that there may be solutions to an equation in another domain. However, the lambda term for the solution of the above equation is weirder than that. The lambda term Y f represents the state where x could be either i or −i, as one value. The information distinguishing these two values has been lost in the change of domain.

For the lambda calculus mathematician, this is a consequence of the definition of lambda calculus. For the programmer, it means that the beta reduction of the lambda term will loop forever, never reaching a normal form.

8.1.2 Function versus implementation

The fixed-point combinator may be defined in mathematics and then implemented in other languages. General mathematics defines a function based on its extensional properties.[2] That is, two functions are equal if they perform the same mapping. Lambda calculus and programming languages regard function identity as an intensional property. A function’s identity is based on its implementation.

A lambda calculus function (or term) is an implementation of a mathematical function. In the lambda calculus there are a number of combinators (implementations) that satisfy the mathematical definition of a fixed-point combinator.

8.1.3 What is a “combinator"?

A combinator is a particular type of higher-order function that may be used in defining functions without using variables. The combinators may be combined to direct values to their correct places in the expression without ever naming them as variables.

8.2 Usage

Usually, when applied to functions of one parameter, implementations of the fixed-point combinator fail to terminate. Functions with extra parameters are more interesting.

The Y combinator is an example of what makes the lambda calculus inconsistent, so it should be regarded with suspicion. However, it is safe to consider the Y combinator when defined in mathematical logic only. The definition is,

y f = f (y f)

It is easy to see how f may be applied to one variable. Applying it to two or more variables requires adding them to the equation,

y f x = f (y f) x

This version of the equation must be shown consistent with the previous by the definition for equality of functions,

(∀x. f x = g x) ≡ f = g

This definition allows the two equations for y to be regarded as equivalent, provided that the domain of x is well defined. So if f has multiple parameters, y f may still be regarded as a fixed point, with some restrictions.

8.2.1 The factorial function

The factorial function provides a good example of how the fixed-point combinator may be applied to functions of two variables. The result demonstrates simple recursion, as would be implemented in a single loop in an imperative language. The definition of numbers used is explained in Church encoding. The fixed point function is,

F f n = (IsZero n) 1 (multiply n (f (pred n)))

so y F is,

y F n = F (y F ) n

or

y F n = (IsZero n) 1 (multiply n ((y F ) (pred n)))

Setting y F = fact gives,


fact n = (IsZero n) 1 (multiply n (fact (pred n)))

This definition is equivalent to the mathematical definition of factorial,

fact n = if n = 0 then 1 else n ∗ fact (n − 1)

This definition puts F in the role of the body of a loop to be iterated.
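The role of F as the body of the loop can be sketched in Python, with ordinary integers standing in for the Church numerals the text assumes. The fix function here uses Python’s named recursion purely to model the equation y F n = F (y F) n; it is an illustrative sketch, not a combinator.

```python
def F(f):
    # F f n = (IsZero n) 1 (multiply n (f (pred n))), transcribed with
    # ordinary Python arithmetic in place of Church encoding.
    return lambda n: 1 if n == 0 else n * f(n - 1)

def fix(f):
    # The extra parameter n delays the expansion of fix(f), matching
    # the equation y F n = F (y F) n.
    return lambda n: f(fix(f))(n)

fact = fix(F)
print(fact(5))  # 120
```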

8.3 Fixed point combinators in lambda calculus

The Y combinator, discovered by Haskell B. Curry, is defined as:

Y = λf.(λx.f (x x)) (λx.f (x x))

Beta reduction of this gives Y g = g (Y g). By repeatedly applying this equality we get,

Y g = g (Y g) = g (g (Y g)) = g (. . . g (Y g) . . .)

8.3.1 Equivalent definition of a fixed-point combinator

This fixed-point combinator may be defined as y in,

x = f x ∧ y f = x

An expression for y may be derived using rules from the definition of a let expression. Firstly using the rule,

(∃x. E ∧ F) ⇐⇒ let x : E in F

gives,

let x = f x in y f = x

Also using,

x ∉ FV(E) ∧ x ∈ FV(F) → let x : G in E F = E (let x : G in F)

gives

y f = let x = f x in x

Then using the eta reduction rule,

f x = y ⇐⇒ f = λx.y

gives,

y = λf. let x = f x in x


8.3.2 Derivation of the Y combinator

Curry’s Y combinator may be readily obtained from the definition of y.[3] Starting with,

λf. let x = f x in x

A lambda abstraction does not support reference to the variable name in the applied expression, so x must be passed in as a parameter to x. We can think of this as replacing x by x x, but formally this is not correct. Instead, defining y by ∀z. y z = x gives,

λf. let y z = f (y z) in y z

The let expression may be regarded as the definition of the function y, where z is the parameter. Instantiating z as y in the call gives,

λf. let y z = f (y z) in y y

And because the parameter z always passes the function y,

λf. let y z = f (z z) in y y

Using the eta reduction rule,

f x = y ≡ f = λx.y

gives,

λf. let y = λz.f (z z) in y y

A let expression may be expressed as a lambda abstraction using,

n ∉ FV(E) → (let n = E in L ≡ (λn.L) E)

gives,

λf.(λy.y y) (λz.f (z z))

This is possibly the simplest implementation of a fixed-point combinator in lambda calculus. However, one beta reduction gives the more symmetrical form of Curry’s Y combinator.

λf.(λz.f (z z)) (λz.f (z z))

See also translating between let and lambda expressions.

8.3.3 Other fixed-point combinators

In untyped lambda calculus fixed-point combinators are not especially rare. In fact there are infinitely many of them.[4]

In 2005 Mayer Goldberg showed that the set of fixed-point combinators of untyped lambda calculus is recursively enumerable.[5]

The Y combinator can be expressed in the SKI-calculus as


Y = S (K (S I I)) (S (S (K S) K) (K (S I I)))

The simplest fixed point combinator in the SK-calculus, found by John Tromp, is

Y' = S S K (S (K (S S (S (S S K)))) K)

which corresponds to the lambda expression

Y' = (λx. λy. x y x) (λy. λx. y (x y x))

The following fixed-point combinator is simpler than the Y combinator, and β-reduces into the Y combinator; it is sometimes cited as the Y combinator itself:

X = λf.(λx.x x) (λx.f (x x))

Another common fixed-point combinator is the Turing fixed-point combinator (named after its discoverer, Alan Turing):

Θ = (λx. λy. (y (x x y))) (λx. λy. (y (x x y)))

It also has a simple call-by-value form:

Θᵥ = (λx. λy. (y (λz. x x y z))) (λx. λy. (y (λz. x x y z)))

The analog for mutual recursion is a polyvariadic fixed-point combinator,[6][7][8] which may be denoted Y*.

8.3.4 Strict fixed point combinator

The Z combinator will work in strict languages (also called eager languages, where applicative-order evaluation is applied). The Z combinator has the next argument defined explicitly, preventing the expansion of Z g in the right-hand side of the definition:

Z g v = g (Z g) v

and in lambda calculus is an eta-expansion:

Z = λf.(λx.f (λv.((x x) v))) (λx.f (λv.((x x) v)))
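The Z combinator transcribes directly into Python, itself a strict language where the plain Y combinator would recurse forever. The factorial step function is our own example input.

```python
# Z = λf.(λx.f (λv.((x x) v))) (λx.f (λv.((x x) v))), transcribed
# literally; the inner lambda v delays (x x) until it is applied,
# which is what makes this work under eager evaluation.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# One step of factorial with the recursive call abstracted out.
fact_step = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)

print(Z(fact_step)(6))  # 720
```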

8.3.5 Non-standard fixed-point combinators

In untyped lambda calculus there are terms that have the same Böhm tree as a fixed-point combinator, that is, they have the same infinite extension λx.x (x (x ... )). These are called non-standard fixed-point combinators. Any fixed-point combinator is also a non-standard one, but not all non-standard fixed-point combinators are fixed-point combinators, because some of them fail to satisfy the equation that defines the “standard” ones. These strange combinators are called strictly non-standard fixed-point combinators; an example is the following combinator:

N = B M (B (B M) B)

where,

B = λx,y,z. x (y z)
M = λx. x x

The set of non-standard fixed-point combinators is not recursively enumerable.[5]


8.4 Implementation in other languages

Note that the Y combinator is a particular implementation of a fixed-point combinator in lambda calculus. Its structure is determined by the limitations of lambda calculus. It is not necessary or helpful to use this structure in implementing the fixed-point combinator in other languages.

Simple examples of fixed-point combinators implemented in some programming paradigms are given below.

For examples of implementations of the fixed-point combinators in various languages, see:

• Rosetta code - Y combinator

• Java code.

• C++ code.

8.4.1 Lazy functional implementation

In a language that supports lazy evaluation, like Haskell, it is possible to define a fixed-point combinator using the defining equation of the fixed-point combinator, which is conventionally named fix. The definition is given here, followed by some usage examples.

fix :: (a -> a) -> a
fix f = f (fix f)             -- Lambda lifted
-- alternative:
-- fix f = let x = f x in x  -- Lambda dropped

fix (\x -> 9)                 -- this evaluates to 9

factabs fact 0 = 1            -- factabs is F from the lambda calculus example
factabs fact x = x * fact (x-1)

(fix factabs) 5               -- evaluates to 120

8.4.2 Strict functional implementation

In a strict functional language the argument to f is expanded beforehand, yielding an infinite call sequence,

f (f ... f (fix f) ... ) x

This may be resolved by defining fix with an extra parameter.

let rec fix f x = f (fix f) x  (* note the extra x; here fix f = \x -> f (fix f) x *)

let factabs fact = function    (* factabs has an extra level of lambda abstraction *)
    0 -> 1
  | x -> x * fact (x-1)

let _ = (fix factabs) 5        (* evaluates to “120” *)

8.4.3 Imperative language implementation

This example is a slightly interpretive implementation of a fixed-point combinator. A class is used to contain the fix function, called fixer. The function to be fixed is contained in a class that inherits from fixer. The fix function accesses the function to be fixed as a virtual function. As for the strict functional definition, fix is explicitly given an extra parameter x, which means that lazy evaluation is not needed.

template <typename R, typename D>
class fixer {
public:
    R fix(D x) { return f(x); }
private:
    virtual R f(D) = 0;
};

class fact : public fixer<long, long> {
    virtual long f(long x) {
        if (x == 0) { return 1; }
        return x * fix(x - 1);
    }
};

long result = fact().fix(5);

8.5 Typing

In polymorphic lambda calculus (System F) a polymorphic fixed-point combinator has type:

∀a.(a → a) → a


where a is a type variable. That is, fix takes a function which maps a → a and uses it to return a value of type a.

In the simply typed lambda calculus extended with recursive types, fixed-point operators can be written, but the type of a “useful” fixed-point operator (one whose application always returns) may be restricted.

In the simply typed lambda calculus, the fixed-point combinator Y cannot be assigned a type[9] because at some point it would deal with the self-application sub-term x x by the application rule:

Γ ⊢ x : t1 → t2    Γ ⊢ x : t1
─────────────────────────────
Γ ⊢ x x : t2

where x has the infinite type t1 = t1 → t2. No fixed-point combinator can in fact be typed; in those systems, any support for recursion must be explicitly added to the language.

8.5.1 Type for the Y combinator

In programming languages that support recursive types, it is possible to type the Y combinator by appropriately accounting for the recursion at the type level. The need to self-apply the variable x can be managed using a type (Rec a), which is defined so as to be isomorphic to (Rec a -> a).

For example, in the following Haskell code, we have In and out being the names of the two directions of the isomorphism, with types:[10]

In  :: (Rec a -> a) -> Rec a
out :: Rec a -> (Rec a -> a)

which lets us write:

newtype Rec a = In { out :: Rec a -> a }

y :: (a -> a) -> a
y = \f -> (\x -> f (out x x)) (In (\x -> f (out x x)))

Or equivalently in OCaml:

type 'a recc = In of ('a recc -> 'a)
let out (In x) = x
let y f = (fun x a -> f (out x x) a) (In (fun x a -> f (out x x) a))

8.6 General information

The function for which any input is a fixed point is called the identity function. Formally:

∀x. f x = x

Other functions have the special property that, after being applied once, further applications don’t have any effect. More formally:

∀x. f (f x) = f x

Such functions are called idempotent. An example of such a function is the function that returns 0 for all even integers, and 1 for all odd integers.

Fixed-point combinators do not necessarily exist in more restrictive models of computation. For instance, they do not exist in simply typed lambda calculus.

The Y combinator allows recursion to be defined as a set of rewrite rules,[11] without requiring native recursion support in the language.[12]
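The identity and idempotency properties just described can be checked directly; this small Python sketch uses the even/odd example from the text.

```python
def identity(x):
    # Every input is a fixed point: identity(x) == x for all x.
    return x

def parity(n):
    # Returns 0 for even integers and 1 for odd integers.
    return n % 2

# identity: any input is a fixed point.
print(all(identity(x) == x for x in range(-5, 6)))            # True

# parity is idempotent: f (f x) = f x for every x.
print(all(parity(parity(n)) == parity(n) for n in range(-5, 6)))  # True
```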

The recursive join in relational databases implements a fixed point, by recursively adding records to a set until no more may be added.

In programming languages that support anonymous functions, fixed-point combinators allow the definition and use of anonymous recursive functions, i.e. without having to bind such functions to identifiers. In this setting, the use of fixed-point combinators is sometimes called anonymous recursion.[13][14]


8.7 See also

• Fixed-point iteration

• Anonymous function

• Lambda calculus

• Let expression

• Lambda lifting

8.8 Notes

[1] Peyton Jones, Simon L. (1987). The Implementation of Functional Programming. Prentice Hall International.

[2] Selinger, Peter. “Lecture Notes on Lambda Calculus” (PDF). p. 6.

[3] http://math.stackexchange.com/questions/51246/can-someone-explain-the-y-combinator

[4] Bimbó, Katalin. Combinatory Logic: Pure, Applied and Typed. p. 48.

[5] Goldberg, 2005

[6] Poly-variadic fix-point combinators

[7] Polyvariadic Y in pure Haskell98, lang.haskell.cafe, October 28, 2003

[8] Fixed point combinator for mutually recursive functions?

[9] An Introduction to the Lambda Calculus

[10] Haskell mailing list thread on How to define Y combinator in Haskell, 15 Sep 2006

[11] Daniel P. Friedman, Matthias Felleisen (1986). “Chapter 9 - Lambda The Ultimate”. The Little Lisper. Science Research Associates. p. 179. “In the chapter we have derived a Y-combinator which allows us to write recursive functions of one argument without using define.”

[12] Mike Vanier. “The Y Combinator (Slight Return) or: How to Succeed at Recursion Without Really Recursing”. “More generally, Y gives us a way to get recursion in a programming language that supports first-class functions but that doesn't have recursion built in to it.”

[13] This terminology appears to be largely folklore, but it does appear in the following:

• Trey Nash, Accelerated C# 2008, Apress, 2007, ISBN 1-59059-873-3, pp. 462–463. Derived substantially from Wes Dyer's blog (see next item).

• Wes Dyer, Anonymous Recursion in C#, February 2, 2007, contains a substantially similar example found in the book above, but accompanied by more discussion.

[14] The If Works Deriving the Y combinator, January 10th, 2008

8.9 References

• Werner Kluge, Abstract computing machines: a lambda calculus perspective, Springer, 2005, ISBN 3-540-21146-2, pp. 73–77

• Mayer Goldberg (2005), On the Recursive Enumerability of Fixed-Point Combinators, BRICS Report RS-05-1, University of Aarhus

• Matthias Felleisen. A Lecture on the Why of Y.


8.10 External links

• http://www.latrobe.edu.au/philosophy/phimvt/joy/j05cmp.html

• http://okmij.org/ftp/Computation/fixed-point-combinators.html

• “Fixed-point combinators in Javascript”

• http://www.cs.brown.edu/courses/cs173/2002/Lectures/2002-10-28-lc.pdf

• http://www.mactech.com/articles/mactech/Vol.07/07.05/LambdaCalculus/

• http://www.csse.monash.edu.au/~{}lloyd/tildeFP/Lambda/Examples/Y/ (executable)

• http://www.ece.uc.edu/~{}franco/C511/html/Scheme/ycomb.html

• an example and discussion of a perl implementation

• “A Lecture on the Why of Y”

• “A Use of the Y Combinator in Ruby”

• “Functional programming in Ada”

• “Y Combinator in Erlang”

• “The Y Combinator explained with JavaScript”

• “The Y Combinator (Slight Return)" (detailed derivation)

• “The Y Combinator in C#"

• Rosetta code - Y combinator


Chapter 9

Fold (higher-order function)

In functional programming, fold – also known variously as reduce, accumulate, aggregate, compress, or inject – refers to a family of higher-order functions that analyze a recursive data structure and, through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Typically, a fold is presented with a combining function, a top node of a data structure, and possibly some default values to be used under certain conditions. The fold then proceeds to combine elements of the data structure’s hierarchy, using the function in a systematic way.

Folds are in a sense dual to unfolds, which take a “seed” value and apply a function corecursively to decide how to progressively construct a corecursive data structure, whereas a fold recursively breaks that structure down, replacing it with the results of applying a combining function at each node on its terminal values and the recursive results (catamorphism, as opposed to anamorphism of unfolds).

9.1 Folds as structural transformations

Folds can be regarded as consistently replacing the structural components of a data structure with functions and values. Lists, for example, are built up in many languages from two primitives: any list is either an empty list, commonly called nil ([]), or is constructed by prepending an element in front of another list, creating what is called a cons node ( Cons(X1, Cons(X2, Cons(...(Cons(Xn, nil))))) ), resulting from application of a cons function, written down as (:) (colon) in Haskell. One can view a fold on lists as replacing the nil at the end of the list with a specific value, and replacing each cons with a specific function. These replacements can be viewed as a diagram:

There’s another way to perform the structural transformation in a consistent manner, with the order of the two links of each node flipped when fed into the combining function:

These pictures illustrate right and left fold of a list visually. They also highlight the fact that foldr (:) [] is the identity function on lists (a shallow copy in Lisp parlance), as replacing cons with cons and nil with nil will not change the result. The left fold diagram suggests an easy way to reverse a list, foldl (flip (:)) []. Note that the parameters to cons must be flipped, because the element to add is now the right-hand parameter of the combining function. Another easy result to see from this vantage point is to write the higher-order map function in terms of foldr, by composing the function to act on the elements with cons, as:

map f = foldr ((:) . f) []

where the period (.) is an operator denoting function composition.

This way of looking at things provides a simple route to designing fold-like functions on other algebraic data structures, like various sorts of trees. One writes a function which recursively replaces the constructors of the datatype with provided functions, and any constant values of the type with provided values. Such a function is generally referred to as a catamorphism.
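As a sketch of such a catamorphism beyond lists, consider a simple binary leaf tree (the Tree type and function names below are hypothetical, introduced only for illustration):

```haskell
-- A hypothetical leaf-valued binary tree and its catamorphism: foldTree
-- replaces the Leaf and Node constructors with the supplied functions,
-- just as foldr replaces (:) and [] on lists.
data Tree a = Leaf a | Node (Tree a) (Tree a)

foldTree :: (a -> b) -> (b -> b -> b) -> Tree a -> b
foldTree leaf _    (Leaf x)   = leaf x
foldTree leaf node (Node l r) = node (foldTree leaf node l) (foldTree leaf node r)

-- Summing all the leaves is then a one-liner:
sumTree :: Tree Int -> Int
sumTree = foldTree id (+)
```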

9.2 Folds on lists

The folding of the list [1,2,3,4,5] with the addition operator would result in 15, the sum of the elements of the list [1,2,3,4,5]. To a rough approximation, one can think of this fold as replacing the commas in the list with the + operation, giving 1 + 2 + 3 + 4 + 5.

In the example above, + is an associative operation, so the final result will be the same regardless of parenthesization, although the specific way in which it is calculated will be different. In the general case of non-associative binary functions, the order in which the elements are combined may influence the final result's value. On lists, there are two obvious ways to carry this out: either by combining the first element with the result of recursively combining the rest (called a right fold), or by combining the result of recursively combining all elements but the last one with the last element (called a left fold). This corresponds to a binary operator being either right-associative or left-associative, in Haskell's or Prolog's terminology. With a right fold, the sum would be parenthesized as 1 + (2 + (3 + (4 + 5))), whereas with a left fold it would be parenthesized as (((1 + 2) + 3) + 4) + 5.

In practice, it is convenient and natural to have an initial value which, in the case of a right fold, is used when one reaches the end of the list, and, in the case of a left fold, is what is initially combined with the first element of the list. In the example above, the value 0 (the additive identity) would be chosen as an initial value, giving 1 + (2 + (3 + (4 + (5 + 0)))) for the right fold, and ((((0 + 1) + 2) + 3) + 4) + 5 for the left fold.
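A small Haskell sketch makes the difference concrete, using subtraction, which is not associative (the names rightFold and leftFold are illustrative):

```haskell
-- With a non-associative operator, the two folds give different answers:
rightFold :: Int
rightFold = foldr (-) 0 [1,2,3]   -- 1 - (2 - (3 - 0)) = 2

leftFold :: Int
leftFold = foldl (-) 0 [1,2,3]    -- ((0 - 1) - 2) - 3 = -6
```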


58 CHAPTER 9. FOLD (HIGHER-ORDER FUNCTION)

9.2.1 Linear vs. tree-like folds

The use of an initial value is necessary when the combining function f is asymmetrical in its types, i.e. when the type of its result is different from the type of the list's elements. Then an initial value must be used, with the same type as that of f's result, for a linear chain of applications to be possible. Whether it will be left- or right-oriented will be determined by the types expected of its arguments by the combining function – if it is the second argument that has to be of the same type as the result, then f could be seen as a binary operation that associates on the right, and vice versa.

When the function is symmetrical in its types and the result type is the same as the list elements' type, the parentheses may be placed in arbitrary fashion, thus creating a tree of nested sub-expressions, e.g. ((1 + 2) + (3 + 4)) + 5. If the binary operation f is associative this value will be well-defined, i.e. the same for any parenthesization, although the operational details of how it is calculated will be different. This can have significant impact on efficiency if f is non-strict.

Whereas linear folds are node-oriented and operate in a consistent manner for each node of a list, tree-like folds are whole-list oriented and operate in a consistent manner across groups of nodes.
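For instance, computing a list's length uses a combining function that is asymmetrical in its types (elements of any type a, result of type Int), so an initial value of the result type is required. A sketch (the names lengthR and lengthL are illustrative):

```haskell
-- The combining function has the asymmetric type a -> Int -> Int, so an
-- initial Int value 0 is required, and the chain leans to the right:
lengthR :: [a] -> Int
lengthR = foldr (\_ n -> 1 + n) 0

-- The left-leaning mirror image: the accumulator is the first argument.
lengthL :: [a] -> Int
lengthL = foldl (\n _ -> n + 1) 0
```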

9.2.2 Special folds for non-empty lists

One often wants to choose the identity element of the operation f as the initial value z. When no initial value seems appropriate, for example, when one wants to fold the function which computes the maximum of its two parameters over a non-empty list to get the maximum element of the list, there are variants of foldr and foldl which use the last and first element of the list respectively as the initial value. In Haskell and several other languages, these are called foldr1 and foldl1, the 1 making reference to the automatic provision of an initial element, and the fact that the lists they are applied to must have at least one element.

These folds use a type-symmetrical binary operation: the types of both its arguments, and its result, must be the same. Richard Bird in his 2010 book proposes[1] "a general fold function on non-empty lists" foldrn which transforms its last element, by applying an additional argument function to it, into a value of the result type before starting the folding itself, and is thus able to use a type-asymmetrical binary operation like the regular foldr to produce a result of a type different from the list's elements type.
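A minimal sketch of the non-empty-list variants (maxElem and minElem are illustrative names):

```haskell
-- Folding max over a non-empty list needs no separate initial value:
-- foldr1 seeds the fold from the last element of the list itself.
maxElem :: Ord a => [a] -> a
maxElem = foldr1 max

-- Likewise foldl1 seeds the fold from the first element.
minElem :: Ord a => [a] -> a
minElem = foldl1 min
```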

9.2.3 Implementation

Linear folds

Using Haskell as an example, foldl and foldr can be formulated in a few equations.

foldl :: (b -> a -> b) -> b -> [a] -> b
foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs

If the list is empty, the result is the initial value. If not, fold the tail of the list, using as the new initial value the result of applying f to the old initial value and the first element.

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)

If the list is empty, the result is the initial value z. If not, apply f to the first element and the result of folding the rest.

Tree-like folds

Lists can be folded over in a tree-like fashion, both for finite and for indefinitely defined lists:

foldt f z []  = z
foldt f _ [x] = x
foldt f z xs  = foldt f z (pairs f xs)

foldi f z []     = z
foldi f z (x:xs) = f x (foldi f z (pairs f xs))

pairs f (x:y:t) = f x y : pairs f t
pairs _ t       = t

In the case of the foldi function, to avoid its runaway evaluation on indefinitely defined lists, the function f must not always demand its second argument's value, at least not all of it, and/or not immediately (example below).


Folds for non-empty lists

foldl1 f [x]      = x
foldl1 f (x:y:xs) = foldl1 f (f x y : xs)

foldr1 f [x]    = x
foldr1 f (x:xs) = f x (foldr1 f xs)

foldt1 f [x]      = x
foldt1 f (x:y:xs) = foldt1 f (f x y : pairs f xs)

foldi1 f [x]    = x
foldi1 f (x:xs) = f x (foldi1 f (pairs f xs))

9.2.4 Evaluation order considerations

In the presence of lazy, or non-strict, evaluation, foldr will immediately return the application of f to the head of the list and the recursive case of folding over the rest of the list. Thus, if f is able to produce some part of its result without reference to the recursive case on its "right", i.e. in its second argument, and the rest of the result is never demanded, then the recursion will stop (e.g. head == foldr (\a b -> a) (error "empty list")). This allows right folds to operate on infinite lists. By contrast, foldl will immediately call itself with new parameters until it reaches the end of the list. This tail recursion can be efficiently compiled as a loop, but cannot deal with infinite lists at all; it will recurse forever in an infinite loop.

Having reached the end of the list, an expression is in effect built by foldl of nested left-deepening f-applications, which is then presented to the caller to be evaluated. Were the function f to refer to its second argument first here, and be able to produce some part of its result without reference to the recursive case (here, on its "left", i.e. in its first argument), then the recursion would stop. This means that while foldr recurses "on the right", it allows a lazy combining function to inspect the list's elements from the left; and conversely, while foldl recurses "on the left", it allows a lazy combining function to inspect the list's elements from the right, if it so chooses (e.g. last == foldl (\a b -> b) (error "empty list")).

Reversing a list is also tail-recursive (it can be implemented using rev = foldl (\ys x -> x : ys) []). On finite lists, that means that left-fold and reverse can be composed to perform a right fold in a tail-recursive way (cf. 1+>(2+>(3+>0)) == ((0<+3)<+2)<+1), with a modification to the function f so it reverses the order of its arguments (i.e. foldr f z == foldl (flip f) z . foldl (flip (:)) []), tail-recursively building a representation of the expression that the right fold would build. The extraneous intermediate list structure can be eliminated with the continuation-passing technique, foldr f z xs == foldl (\k x -> k . f x) id xs z; similarly, foldl f z xs == foldr (\x k -> k . flip f x) id xs z (flip is only needed in languages like Haskell, with its flipped order of arguments to the combining function of foldl, unlike e.g. Scheme, where the same order of arguments is used for the combining functions of both foldl and foldr).

Another technical point to be aware of in the case of left folds using lazy evaluation is that the new initial parameter is not evaluated before the recursive call is made. This can lead to stack overflows when one reaches the end of the list and tries to evaluate the resulting potentially gigantic expression. For this reason, such languages often provide a stricter variant of left folding which forces the evaluation of the initial parameter before making the recursive call. In Haskell this is the foldl' (note the apostrophe, pronounced 'prime') function in the Data.List library (one needs to be aware, though, that forcing a value built with a lazy data constructor won't force its constituents automatically by itself). Combined with tail recursion, such folds approach the efficiency of loops, ensuring constant-space operation, when lazy evaluation of the final result is impossible or undesirable.
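These evaluation-order points can be demonstrated with a short sketch (the function names are illustrative):

```haskell
import Data.List (foldl')

-- foldr can stop early: with a combining function that ignores its second
-- argument, only the head of an infinite list is ever demanded.
firstElem :: [a] -> a
firstElem = foldr (\a _ -> a) (error "empty list")

-- foldl' forces the accumulator at each step, folding a long list in
-- constant space where plain foldl would build up a huge thunk.
strictSum :: [Integer] -> Integer
strictSum = foldl' (+) 0
```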

9.2.5 Examples

Using a Haskell interpreter, we can show the structural transformation which fold functions perform by constructing a string as follows:

λ> putStrLn $ foldr (\x y -> concat ["(",x,"+",y,")"]) "0" (map show [1..13])
(1+(2+(3+(4+(5+(6+(7+(8+(9+(10+(11+(12+(13+0)))))))))))))
λ> putStrLn $ foldl (\x y -> concat ["(",x,"+",y,")"]) "0" (map show [1..13])
(((((((((((((0+1)+2)+3)+4)+5)+6)+7)+8)+9)+10)+11)+12)+13)
λ> putStrLn $ foldt (\x y -> concat ["(",x,"+",y,")"]) "0" (map show [1..13])
((((1+2)+(3+4))+((5+6)+(7+8)))+(((9+10)+(11+12))+13))
λ> putStrLn $ foldi (\x y -> concat ["(",x,"+",y,")"]) "0" (map show [1..13])
(1+((2+3)+(((4+5)+(6+7))+((((8+9)+(10+11))+(12+13))+0))))

Infinite tree-like folding is demonstrated e.g. in recursive primes production by an unbounded sieve of Eratosthenes in Haskell:

primes = 2 : _Y ((3 :) . minus [5,7..]
               . foldi (\(x:xs) ys -> x : union xs ys) []
               . map (\p -> [p*p, p*p+2*p..]))
_Y g = g (_Y g)   -- = g . g . g . g . ...

where the function union operates on ordered lists in a local manner to efficiently produce their set union, and minus their set difference.

For finite lists, e.g., merge sort (and its duplicates-removing variety, nubsort) could be easily defined using tree-like folding as

mergesort xs = foldt merge [] [[x] | x <- xs]
nubsort xs   = foldt union [] [[x] | x <- xs]

with the function merge a duplicates-preserving variant of union.

Functions head and last could have been defined through folding as

head = foldr (\a b -> a) (error "head: Empty list")
last = foldl (\a b -> b) (error "last: Empty list")

9.3 Folds in various languages

9.4 Universality

Fold is a polymorphic function. For any g having a definition

g []     = v
g (x:xs) = f x (g xs)

then g can be expressed as[4]

g = foldr f v

We can also implement a fixed point combinator using fold,[5] proving that iterations can be reduced to folds:

fix f = foldr (\_ -> f) undefined (repeat undefined)
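As a concrete instance of this universal property, the recursively defined sum below matches the shape of g with f = (+) and v = 0, and is therefore equal to foldr (+) 0 (the names sumRec and sumFold are illustrative):

```haskell
-- Direct recursive definition, matching the two-equation shape of g:
sumRec :: [Integer] -> Integer
sumRec []     = 0               -- g []     = v,        with v = 0
sumRec (x:xs) = x + sumRec xs   -- g (x:xs) = f x (g xs), with f = (+)

-- By the universal property, the same function as a fold:
sumFold :: [Integer] -> Integer
sumFold = foldr (+) 0           -- g = foldr f v
```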

9.5 See also

• Aggregate function

• Iterated binary operation

• Catamorphism, a generalization of fold

• Homomorphism

• Map (higher-order function)

• Prefix sum

• Recursive data type

• Structural recursion

9.6 References

[1] Richard Bird, "Pearls of Functional Algorithm Design", Cambridge University Press 2010, ISBN 978-0-521-51338-8, p. 42.

[2] For functools.reduce: import functools; for reduce: from functools import reduce.

[3] Odersky, Martin (2008-01-05). "Re: Blog: My verdict on the Scala language". Newsgroup: comp.scala.lang. Retrieved 14 October 2013.

[4] Hutton, Graham. "A tutorial on the universality and expressiveness of fold" (PDF). Journal of Functional Programming 9 (4): 355–372. Retrieved March 26, 2009.


[5] Pope, Bernie. “Getting a Fix from the Right Fold” (PDF). The Monad.Reader (6): 5–16. Retrieved May 1, 2011.

9.7 External links

• "Higher order functions — map, fold and filter"

• “Unit 6: The Higher-order fold Functions”

• “Fold”

• “Constructing List Homomorphism from Left and Right Folds”

• “The magic foldr”


Chapter 10

Single recursion

This article is about recursive approaches to solving problems. For recursion in computer science acronyms, see Recursive acronym#Computer-related examples.

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).[1] The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[2]

“The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.”[3]

Most computer programming languages support recursion by allowing a function to call itself within the program text. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. Computability theory proves that these recursive-only languages are Turing complete; they are as computationally powerful as Turing-complete imperative languages, meaning they can solve the same kinds of problems as imperative languages even without iterative control structures such as "while" and "for".

10.1 Recursive functions and algorithms

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the "terminating case".

The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances, for example, some system and server processes, are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by corecursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say "compute the nth term (nth partial sum)".


Tree created using the Logo programming language and relying heavily on recursion

10.2 Recursive data types

Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

Further information: Algebraic data type


10.2.1 Inductively defined data

Main article: Recursive data type

An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.

Another example of inductive definition is the natural numbers (or positive integers):

A natural number is either 1 or n+1, where n is a natural number.

Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

<expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>)

This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complex arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.
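Returning to the ListOfStrings type defined earlier, a structurally recursive function can consume any such list, however long; this sketch counts its strings (countStrings is an illustrative name):

```haskell
-- Repeating the inductive type from above:
data ListOfStrings = EmptyList | Cons String ListOfStrings

-- The recursion mirrors the self-reference in the data definition.
countStrings :: ListOfStrings -> Int
countStrings EmptyList   = 0
countStrings (Cons _ xs) = 1 + countStrings xs
```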

10.2.2 Coinductively defined data and corecursion

Main articles: Coinduction and Corecursion

A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might look like this:

A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure (namely, via the accessor functions head and tail) and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.

Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program (e.g. here).
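A minimal Haskell sketch of corecursion: the infinite list of natural numbers is defined in terms of itself, and take provides the mechanism for extracting a finite portion (the name nats is illustrative):

```haskell
-- A corecursive definition: each element is produced before the
-- recursive part is demanded, so consuming a finite prefix terminates.
nats :: [Integer]
nats = 0 : map (+1) nats
```

For example, take 5 nats demands only the first five elements of the infinite result.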

10.3 Types of recursion

10.3.1 Single recursion and multiple recursion

Recursion that only contains a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search, or computing the Fibonacci sequence.

Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack.

Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively is multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, tracking at each step two successive values – see corecursion: examples. A more sophisticated example involves using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.
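The Fibonacci conversion described above can be sketched in Haskell (the function names are illustrative):

```haskell
-- Naive multiple recursion: two self-references, exponential time.
fibNaive :: Int -> Integer
fibNaive 0 = 0
fibNaive 1 = 1
fibNaive n = fibNaive (n - 1) + fibNaive (n - 2)

-- Single recursion: pass two successive values as parameters; one
-- self-reference, linear time, and tail-recursive.
fib :: Int -> Integer
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)
```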

10.3.2 Indirect recursion

Main article: Mutual recursion

Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.

Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, then from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, g is indirectly recursing, and from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions.
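A classic minimal example of mutual (indirect) recursion, as a sketch:

```haskell
-- isEven calls isOdd, which calls isEven: neither function calls itself
-- directly, yet each indirectly recurses through the other.
isEven :: Int -> Bool
isEven 0 = True
isEven n = isOdd (n - 1)

isOdd :: Int -> Bool
isOdd 0 = False
isOdd n = isEven (n - 1)
```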

10.3.3 Anonymous recursion

Main article: Anonymous recursion

Recursion is usually done by explicitly calling a function by name. However, recursion can also be done by implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.

10.3.4 Structural versus generative recursion

See also: Structural recursion

Some authors classify recursion as either "structural" or "generative". The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:

[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.[4]

Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.


Generative recursion is the alternative:

Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How To Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton's method, fractals, and adaptive integration.[5]

This distinction is important in proving termination of a function.

• All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.

• Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions – each step generates the new data, such as the successive approximations in Newton's method – and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.

• In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.

• By contrast, generative recursion is when there is not such an obvious loop variant, and termination depends on a function, such as "error of approximation", that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.

10.4 Recursive programs

10.4.1 Recursive procedures

Factorial

A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:

fact(n) = 1                   if n = 0
fact(n) = n · fact(n − 1)     if n > 0
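A direct Haskell transcription of this definition might look like:

```haskell
-- The factorial function, with its base case and recursive case.
fact :: Integer -> Integer
fact 0 = 1                  -- base case
fact n = n * fact (n - 1)   -- recursive case
```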

The function can also be written as a recurrence relation:

b_n = n · b_(n−1)
b_0 = 1

This evaluation of the recurrence relation demonstrates the computation that would be performed in evaluating the pseudocode above.

This factorial function can also be described without using recursion, by making use of the typical looping constructs found in imperative programming languages.

The imperative code above is equivalent to this mathematical definition using an accumulator variable t:

fact(n) = fact_acc(n, 1)

fact_acc(n, t) = t                            if n = 0
fact_acc(n, t) = fact_acc(n − 1, n · t)       if n > 0

The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.
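A sketch of the accumulator version in Haskell (a functional language, like Scheme; factAcc is an illustrative name):

```haskell
-- Tail-recursive factorial with accumulator t, mirroring fact_acc(n, t):
-- each call carries the running product, so this is iteration expressed
-- recursively.
factAcc :: Integer -> Integer
factAcc n = go n 1
  where
    go 0 t = t                  -- fact_acc(0, t) = t
    go k t = go (k - 1) (k * t) -- fact_acc(k, t) = fact_acc(k - 1, k * t)
```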


Greatest common divisor

The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.

Function definition:

gcd(x, y) = x                            if y = 0
gcd(x, y) = gcd(y, remainder(x, y))      if y > 0

Recurrence relation for greatest common divisor, where x % y expresses the remainder of x / y:

gcd(x, y) = gcd(y, x % y)    if y ≠ 0
gcd(x, 0) = x

The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and the computation shown above shows the steps of evaluation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.

The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.
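The tail-recursive definition above can be sketched in Haskell (gcd' is an illustrative name chosen to avoid a clash with the Prelude's gcd; rem gives the remainder of integer division):

```haskell
-- Tail-recursive Euclidean algorithm, following the function definition:
-- gcd(x, 0) = x;  gcd(x, y) = gcd(y, x mod y) otherwise.
gcd' :: Integer -> Integer -> Integer
gcd' x 0 = x
gcd' x y = gcd' y (x `rem` y)
```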

Towers of Hanoi

Towers of Hanoi

Main article: Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[6][7] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?

Function definition:

hanoi(n) = 1                         if n = 1
hanoi(n) = 2 · hanoi(n − 1) + 1      if n > 1


Recurrence relation for hanoi:

h_n = 2 · h_(n−1) + 1
h_1 = 1

Example implementations:

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.[8]
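A sketch of the move-counting function in Haskell, following the definition above (the explicit formula is 2^n − 1):

```haskell
-- Number of moves for n disks, following h(n) = 2 * h(n - 1) + 1.
hanoi :: Integer -> Integer
hanoi 1 = 1
hanoi n = 2 * hanoi (n - 1) + 1
```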

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

/* Call binary_search with proper initial conditions.
   INPUT: data is an array of integers SORTED in ASCENDING order,
          toFind is the integer to search for,
          count is the total number of elements in the array
   OUTPUT: result of binary_search */
int search(int *data, int toFind, int count) {
    // Start = 0 (beginning index), End = count - 1 (top index)
    return binary_search(data, toFind, 0, count - 1);
}

/* Binary search algorithm.
   INPUT: data is an array of integers SORTED in ASCENDING order,
          toFind is the integer to search for,
          start is the minimum array index,
          end is the maximum array index
   OUTPUT: position of the integer toFind within array data,
           -1 if not found */
int binary_search(int *data, int toFind, int start, int end) {
    // Get the midpoint.
    int mid = start + (end - start) / 2;  // Integer division

    // Stop condition.
    if (start > end)
        return -1;
    else if (data[mid] == toFind)  // Found?
        return mid;
    else if (data[mid] > toFind)   // Data is greater than toFind, search lower half
        return binary_search(data, toFind, start, mid - 1);
    else                           // Data is less than toFind, search upper half
        return binary_search(data, toFind, mid + 1, end);
}

10.4.2 Recursive data structures (structural recursion)

Main article: Recursive data type

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.

“Recursive algorithms are particularly appropriate when the underlying problem or the data to betreated are defined in recursive terms.”[9]

The examples in this section illustrate what is known as "structural recursion". This term refers to the fact that the recursive procedures are acting on data that is defined recursively.

As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function's body consume some immediate piece of a given compound value.[5]


Linked lists

Main article: Linked list

Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The "next" element of struct node is a pointer to another struct node, effectively creating a list type.

struct node {
    int data;           // some integer data
    struct node *next;  // pointer to another struct node
};

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

void list_print(struct node *list) {
    if (list != NULL) {             // the empty list is the base case
        printf("%d ", list->data);  // print integer data followed by a space
        list_print(list->next);     // recursive call on the next node
    }
}

Binary trees

Main article: Binary tree

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left subtree) and right (pointing to the right subtree).

struct node {
    int data;            // some integer data
    struct node *left;   // pointer to the left subtree
    struct node *right;  // pointer to the right subtree
};

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

// Test if tree_node contains i; return 1 if so, 0 if not.
int tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return 0;  // base case
    else if (tree_node->data == i)
        return 1;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

At most two recursive calls will be made for any given call to tree_contains as defined above.

// Inorder traversal:
void tree_print(struct node *tree_node) {
    if (tree_node != NULL) {             // base case
        tree_print(tree_node->left);     // go left
        printf("%d ", tree_node->data);  // print the integer followed by a space
        tree_print(tree_node->right);    // go right
    }
}

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order.
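In a binary search tree, the ordering lets a search recurse into only one subtree rather than both, unlike the general tree_contains above. A minimal sketch, reusing the struct node definition from this section (the name bst_contains is introduced here for illustration):

```c
#include <stddef.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

/* Search a binary search tree: the ordering invariant means at most
   one recursive call per step, instead of two as in tree_contains. */
int bst_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return 0;                                 /* base case: empty subtree */
    else if (i < tree_node->data)
        return bst_contains(tree_node->left, i);  /* target is smaller: go left */
    else if (i > tree_node->data)
        return bst_contains(tree_node->right, i); /* target is larger: go right */
    else
        return 1;                                 /* found */
}
```

Because only one subtree is visited per level, the search takes time proportional to the tree's height rather than its size.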

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, so the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem.

import java.io.*;

public class FileSystem {

    public static void main(String[] args) {
        traverse();
    }

    /**
     * Obtains the filesystem roots
     * Proceeds with the recursive filesystem traversal
     */
    private static void traverse() {
        File[] fs = File.listRoots();
        for (int i = 0; i < fs.length; i++) {
            if (fs[i].isDirectory() && fs[i].canRead()) {
                rtraverse(fs[i]);
            }
        }
    }

    /**
     * Recursively traverse a given directory
     *
     * @param fd indicates the starting point of traversal
     */
    private static void rtraverse(File fd) {
        File[] fss = fd.listFiles();
        if (fss == null) return;  // listFiles returns null if the directory cannot be read
        for (int i = 0; i < fss.length; i++) {
            System.out.println(fss[i]);
            if (fss[i].isDirectory() && fss[i].canRead()) {
                rtraverse(fss[i]);
            }
        }
    }
}


This code blurs the lines, at least somewhat, between recursion and iteration. It is, essentially, a recursive implementation, which is the best way to traverse a filesystem. It is also an example of direct and indirect recursion. The method “rtraverse” is purely a direct example; the method “traverse” is the indirect, which calls “rtraverse.” This example needs no “base case” scenario because there will always be some fixed number of files or directories in a given filesystem.

10.5 Implementation issues

In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:

• Wrapper function (at top)

• Short-circuiting the base case, aka “Arm’s-length recursion” (at bottom)

• Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough

On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm’s-length recursion is a special case of this.

10.5.1 Wrapper function

A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.

Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as “level of recursion” or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.
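The wrapper pattern can be sketched on the factorial function: the wrapper validates the parameter and initializes an auxiliary accumulator, so the recursive function can skip both (the names factorial and factorial_do and the error convention are assumptions of this sketch):

```c
/* Auxiliary function that actually recurses; it assumes its arguments
   have already been validated and initialized by the wrapper. */
static int factorial_do(int n, int acc) {
    if (n == 0)
        return acc;                   /* base case: accumulator holds the product */
    return factorial_do(n - 1, acc * n);
}

/* Wrapper: directly called, does not recurse itself. It validates the
   parameter and supplies the initial accumulator value. */
int factorial(int n) {
    if (n < 0)
        return -1;                    /* error signal for invalid input (a convention
                                         chosen for this sketch) */
    return factorial_do(n, 1);
}
```

In a language with nested functions, factorial_do would typically be nested inside factorial; in C it is instead a separate static (file-private) function.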

10.5.2 Short-circuiting the base case

Short-circuiting the base case, also known as arm’s-length recursion, consists of checking the base case before making a recursive call, i.e., checking whether the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short circuit, and may miss 0; this can be mitigated by a wrapper function.

Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.

Conceptually, short-circuiting can be considered to either have the same base case and recursive step, only checking the base case before the recursion, or it can be considered to have a different base case (one step removed from the standard base case) and a more complex recursive step, namely “check valid then recurse”, as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.
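The factorial case mentioned above can be sketched as follows, with the wrapper handling the 0! input that the short-circuited step would miss (the names fact and fact_do are introduced here for illustration):

```c
/* Short-circuited recursive step: the base case (n == 1) is checked
   before recursing, so no call is made that immediately returns. */
static unsigned int fact_do(unsigned int n) {
    if (n == 1)
        return 1;                 /* one step removed from the true base case 0! = 1 */
    return n * fact_do(n - 1);
}

/* Wrapper: required because short-circuiting on 1! would miss 0! = 1. */
unsigned int fact(unsigned int n) {
    if (n == 0)
        return 1;                 /* handle the true base case directly */
    return fact_do(n);
}
```

Since factorial has only a single base case, this saves only one function call per computation, the O(1) savings noted above.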


Depth-first search

A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see the binary trees section for the standard recursive discussion.

The standard recursive algorithm for a DFS is:

• base case: If current node is Null, return false

• recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children

In short-circuiting, this is instead:

• check value of current node, return true if match,

• otherwise, on children, if not Null, then recurse.

In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).

In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.

In C, the standard recursive algorithm may be implemented as:

bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;  // base case
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

The short-circuited algorithm may be implemented as:

// Wrapper function to handle empty tree
bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;                           // empty tree
    else
        return tree_contains_do(tree_node, i);  // call auxiliary function
}

// Assumes tree_node != NULL
bool tree_contains_do(struct node *tree_node, int i) {
    if (tree_node->data == i)
        return true;   // found
    else               // recurse
        return (tree_node->left  && tree_contains_do(tree_node->left, i)) ||
               (tree_node->right && tree_contains_do(tree_node->right, i));
}

Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is only made if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a bool, so the overall expression evaluates to a bool. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.

10.5.3 Hybrid algorithm

Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.
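The merge sort/insertion sort hybrid can be sketched as follows; the threshold value SMALL and the function names are assumptions of this sketch, and real implementations tune the threshold empirically:

```c
#include <string.h>

#define SMALL 8  /* threshold below which insertion sort takes over (tuning assumption) */

static void insertion_sort(int a[], int lo, int hi) {
    for (int i = lo + 1; i <= hi; i++) {
        int key = a[i], j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

static void merge(int a[], int lo, int mid, int hi) {
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp, (hi - lo + 1) * sizeof(int));
}

/* Hybrid merge sort: recurse while the range is large, but switch to
   the non-recursive insertion sort once the range is small enough. */
void hybrid_sort(int a[], int lo, int hi) {
    if (hi - lo < SMALL) {
        insertion_sort(a, lo, hi);   /* small case: avoid recursion overhead */
        return;
    }
    int mid = lo + (hi - lo) / 2;
    hybrid_sort(a, lo, mid);         /* recursive case: divide... */
    hybrid_sort(a, mid + 1, hi);
    merge(a, lo, mid, hi);           /* ...and conquer */
}
```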

10.6 Recursion versus iteration

Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead, and sometimes explicit iteration is not available.

Compare the templates to compute x_n defined by x_n = f(n, x_(n−1)) from x_base: for an imperative language the overhead is to define the function; for a functional language the overhead is to define the accumulator variable x.

For example, the factorial function may be implemented iteratively in C by assigning to a loop index variable and an accumulator variable, rather than passing arguments and returning values by recursion:

unsigned int factorial(unsigned int n) {
    unsigned int product = 1;  // empty product is 1
    while (n) {
        product *= n;
        --n;
    }
    return product;
}
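For comparison, the same recurrence x_n = f(n, x_(n−1)) written in the recursive template, where the call stack plays the role of the accumulator variable (the name factorial_rec is introduced here for illustration):

```c
/* Recursive template for x_n = f(n, x_(n-1)): the call stack holds the
   intermediate states that the iterative version keeps in 'product'. */
unsigned int factorial_rec(unsigned int n) {
    if (n == 0)
        return 1;                     /* base case: x_0 = 1 (empty product) */
    return n * factorial_rec(n - 1);  /* recursive step: f(n, x_(n-1)) = n * x_(n-1) */
}
```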

10.6.1 Expressive power

Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program’s runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.[10][11]
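The transformation can be sketched on a tree sum: the recursive version uses the call stack implicitly, while the iterative version replaces it with a stack the program manages itself (the fixed capacity here is a simplifying assumption; a real implementation would grow the stack dynamically):

```c
#include <stddef.h>

struct node { int data; struct node *left; struct node *right; };

/* Recursive version: the call stack holds the pending subtrees. */
int tree_sum(struct node *t) {
    if (t == NULL) return 0;
    return t->data + tree_sum(t->left) + tree_sum(t->right);
}

/* Iterative version: the recursion is replaced by a loop driven by an
   explicitly managed stack of pending subtrees. */
int tree_sum_iter(struct node *t) {
    struct node *stack[64];          /* simplifying assumption: depth <= 64 */
    int top = 0, sum = 0;
    if (t != NULL) stack[top++] = t; /* push the root */
    while (top > 0) {
        struct node *n = stack[--top];  /* pop the next pending subtree */
        sum += n->data;
        if (n->left)  stack[top++] = n->left;
        if (n->right) stack[top++] = n->right;
    }
    return sum;
}
```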

Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and do loops are routinely rewritten in recursive form in functional languages.[12][13] However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack.

10.6.2 Performance issues

In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.

As a concrete example, the difference in performance between recursive and iterative implementations of the “factorial” example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

10.6.3 Stack space

In some programming languages, the stack space available to a thread is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language.[14] Note the caveat below regarding the special case of tail recursion.

10.6.4 Multiply recursive problems

Multiply recursive problems are inherently recursive, because they need to keep track of prior state. One example is tree traversal as in depth-first search; contrast with list traversal and linear search in a list, which is singly recursive and thus naturally iterative. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.


10.7 Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the “for” and “while” loops.

The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller’s return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.
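The contrast can be sketched in C (the gcd and fact definitions follow the mathematical definitions given elsewhere in this text):

```c
/* gcd is tail-recursive: the recursive call is the last action, so no
   deferred work accumulates. A compiler that treats tail calls as
   jumps runs this in constant stack space. */
unsigned int gcd(unsigned int x, unsigned int y) {
    if (y == 0)
        return x;             /* base case */
    return gcd(y, x % y);     /* tail call: nothing remains to do after it */
}

/* fact is NOT tail-recursive: the multiplication by n is deferred
   until after the recursive call returns. */
unsigned int fact(unsigned int n) {
    if (n == 0)
        return 1;
    return n * fact(n - 1);   /* not a tail call: the '* n' is still pending */
}
```

Note that C compilers are not required to eliminate tail calls, so even gcd may consume stack space in practice; in a language such as Scheme, which guarantees proper tail calls, it cannot.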

10.8 Order of execution

In the simple case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion, before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion depth has been reached. Consider this example:

10.8.1 Function 1

void recursiveFunction(int num) {
    printf("%d\n", num);
    if (num < 4)
        recursiveFunction(num + 1);
}

10.8.2 Function 2 with swapped lines

void recursiveFunction(int num) {
    if (num < 4)
        recursiveFunction(num + 1);
    printf("%d\n", num);
}
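Called with 1, function 1 prints 1 2 3 4 (on the way down), while function 2 prints 4 3 2 1 (while unwinding). A testable sketch of the two orderings, writing into a string buffer instead of to standard output (the names before and after are introduced here for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Variant 1: the output step precedes the recursive call, so numbers
   are emitted on the way down to maximum depth. */
void before(int num, char *out) {
    sprintf(out + strlen(out), "%d ", num);
    if (num < 4) before(num + 1, out);
}

/* Variant 2: the output step follows the recursive call, so numbers
   are emitted while unwinding from maximum depth. */
void after(int num, char *out) {
    if (num < 4) after(num + 1, out);
    sprintf(out + strlen(out), "%d ", num);
}
```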

10.9 Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed in a recurrence relation of Big O notation. They can (usually) then be simplified into a single Big O term.


10.9.1 Shortcut rule

Main article: Master theorem

If the time complexity of the function is of the form

T(n) = a · T(n/b) + O(n^k)

then the Big O of the time complexity is:

• If a > b^k, then the time complexity is O(n^(log_b a))

• If a = b^k, then the time complexity is O(n^k · log n)

• If a < b^k, then the time complexity is O(n^k)

where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and n^k represents the work the function does independent of any recursion (e.g. partitioning, recombining) at each level of recursion.
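Two worked instances of the shortcut rule, using recurrences that follow directly from the algorithms' structure:

```latex
\begin{align*}
\text{Merge sort:} \quad & T(n) = 2\,T(n/2) + O(n^1), & a=2,\ b=2,\ k=1 \\
& a = b^k \ \Rightarrow\ T(n) = O(n^1 \log n) = O(n \log n) \\[4pt]
\text{Binary search:} \quad & T(n) = 1\,T(n/2) + O(n^0), & a=1,\ b=2,\ k=0 \\
& a = b^k \ \Rightarrow\ T(n) = O(n^0 \log n) = O(\log n)
\end{align*}
```

Merge sort makes two recursive calls on halves with linear merging work; binary search makes one recursive call on half the input with constant comparison work. Both fall into the middle case, a = b^k.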

10.10 See also

• Ackermann function

• Corecursion

• Functional programming

• Hierarchical and recursive queries in SQL

• Kleene–Rosser paradox

• McCarthy 91 function

• Memoization

• μ-recursive function

• Open recursion

• Primitive recursive function

• Recursion

• Sierpiński curve

• Takeuchi function

10.11 Notes and references

[1] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1: Recurrent Problems.

[2] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). p. 427.

[3] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 126.

[4] Felleisen, Matthias; Robert Bruce Findler; Matthew Flatt; Shriram Krishnamurthi (2001). How to Design Programs: An Introduction to Computing and Programming. Cambridge, MA: MIT Press. Part V: “Generative Recursion”.

[5] Felleisen, Matthias (2002). “Developing Interactive Web Programs”. In Jeuring, Johan. Advanced Functional Programming: 4th International School. Oxford, UK: Springer. p. 108.

[6] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1, Section 1.1: The Tower of Hanoi.


[7] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 427–430: The Tower of Hanoi.

[8] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 447–448: An Explicit Formula for the Tower of Hanoi Sequence.

[9] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 127.

[10] Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 79, ISBN 9781430232384.

[11] Drozdek, Adam (2012), Data Structures and Algorithms in C++ (4th ed.), Cengage Learning, p. 197, ISBN 9781285415017.

[12] Shivers, Olin. “The Anatomy of a Loop - A story of scope and control” (PDF). Georgia Institute of Technology. Retrieved 2012-09-03.

[13] Lambda the Ultimate. “The Anatomy of a Loop”. Lambda the Ultimate. Retrieved 2012-09-03.

[14] “27.1. sys — System-specific parameters and functions — Python v2.7.3 documentation”. Docs.python.org. Retrieved 2012-09-03.

10.12 Further reading

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

10.13 External links

• Harold Abelson and Gerald Sussman: “Structure and Interpretation of Computer Programs”

• Jonathan Bartlett: “Mastering Recursive Programming”

• David S. Touretzky: “Common Lisp: A Gentle Introduction to Symbolic Computation”

• Matthias Felleisen: “How To Design Programs: An Introduction to Computing and Programming”

• Owen L. Astrachan: “Big-Oh for Recursive Functions: Recurrence Relations”


Chapter 11

Single recursion

This article is about recursive approaches to solving problems. For recursion in computer science acronyms, see Recursive acronym#Computer-related examples.

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).[1] The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[2]

“The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.”[3]

Most computer programming languages support recursion by allowing a function to call itself within the program text. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. Computability theory proves that these recursive-only languages are Turing complete; they are as computationally powerful as Turing complete imperative languages, meaning they can solve the same kinds of problems as imperative languages even without iterative control structures such as “while” and “for”.

11.1 Recursive functions and algorithms

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the “terminating case”.

The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances, for example some system and server processes, are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a “stopping criterion” that establishes the base case. Such an example is more naturally treated by corecursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say “compute the nth term (nth partial sum)”.
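The series example can be sketched with an explicit term-count parameter supplying the base case (the name e_approx is introduced here; recomputing each term from scratch keeps the sketch short at the cost of efficiency):

```c
/* Approximate e = 1/0! + 1/1! + 1/2! + ... using 'terms' terms.
   The term count is the added parameter that provides the stopping
   criterion: it establishes the base case of an empty sum. */
double e_approx(unsigned int terms) {
    if (terms == 0)
        return 0.0;                    /* base case: empty sum */
    /* The last term of this call is 1/(terms-1)!, computed iteratively. */
    double term = 1.0;
    for (unsigned int k = 1; k < terms; k++)
        term /= k;
    return term + e_approx(terms - 1); /* recursive case: add the remaining terms */
}
```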


[Figure: Tree created using the Logo programming language and relying heavily on recursion]

11.2 Recursive data types

Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

Further information: Algebraic data type


11.2.1 Inductively defined data

Main article: Recursive data type

An inductively defined recursive data definition is one that specifies how to construct instances of the data. Forexample, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.

Another example of inductive definition is the natural numbers (or positive integers):

A natural number is either 1 or n+1, where n is a natural number.

Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

<expr> ::= <number>
         | (<expr> * <expr>)
         | (<expr> + <expr>)

This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complex arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.
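The grammar transcribes directly into a recursive data type, and an evaluator then recurses exactly where the grammar does. A sketch in C (the type and function names are introduced here for illustration):

```c
#include <stddef.h>

/* An expression is either a number, a product, or a sum:
   a direct transcription of the three grammar alternatives. */
enum kind { NUM, MUL, ADD };

struct expr {
    enum kind kind;
    int value;                  /* used when kind == NUM */
    struct expr *left, *right;  /* used when kind is MUL or ADD */
};

/* Evaluation is structurally recursive over the expression tree. */
int eval(struct expr *e) {
    switch (e->kind) {
    case NUM: return e->value;
    case MUL: return eval(e->left) * eval(e->right);
    default:  return eval(e->left) + eval(e->right);  /* ADD */
    }
}
```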

11.2.2 Coinductively defined data and corecursion

Main articles: Coinduction and Corecursion

A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might look like this:

A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure, namely via the accessor functions head and tail, and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.

Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program’s output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program (e.g. here).

11.3 Types of recursion

11.3.1 Single recursion and multiple recursion

Recursion that only contains a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search, or computing the Fibonacci sequence.

Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack.

Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively entails multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, tracking at each step two successive values; see corecursion: examples. A more sophisticated example involves using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.
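The Fibonacci conversion can be sketched as follows (the names fib_naive, fib_pair, and fib are introduced here for illustration):

```c
/* Naive Fibonacci: multiple recursion, two self-calls per step,
   taking exponential time. */
unsigned int fib_naive(unsigned int n) {
    if (n <= 1) return n;
    return fib_naive(n - 1) + fib_naive(n - 2);
}

/* Single recursion: carry the two most recent values as parameters,
   so only one self-call is needed; this runs in linear time. */
static unsigned int fib_pair(unsigned int n, unsigned int a, unsigned int b) {
    if (n == 0) return a;
    return fib_pair(n - 1, b, a + b);  /* shift the two-value window forward */
}

unsigned int fib(unsigned int n) {
    return fib_pair(n, 0, 1);          /* initial values F(0) = 0, F(1) = 1 */
}
```

The single-recursive version is also tail-recursive, so it converts directly to iteration.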

11.3.2 Indirect recursion

Main article: Mutual recursion

Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.

Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, g is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly a set of three or more functions that call each other can be called a set of mutually recursive functions.
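A classic minimal example of mutual recursion is a pair of even/odd predicates, each defined in terms of the other; neither calls itself directly (a sketch; the forward declaration is needed in C so each function can see the other):

```c
int is_odd(unsigned int n);  /* forward declaration for the mutual call */

/* is_even recurses indirectly: its self-reference passes through is_odd. */
int is_even(unsigned int n) {
    if (n == 0) return 1;    /* base case: 0 is even */
    return is_odd(n - 1);
}

int is_odd(unsigned int n) {
    if (n == 0) return 0;    /* base case: 0 is not odd */
    return is_even(n - 1);
}
```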

11.3.3 Anonymous recursion

Main article: Anonymous recursion

Recursion is usually done by explicitly calling a function by name. However, recursion can also be done by implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.

11.3.4 Structural versus generative recursion

See also: Structural recursion

Some authors classify recursion as either “structural” or “generative”. The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:

[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (structurally) recursive functions.[4]

Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.


Generative recursion is the alternative:

Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How to Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton’s method, fractals, and adaptive integration.[5]

This distinction is important in proving termination of a function.

• All structurally recursive functions on finite (inductively defined) data structures can easily be shown to termi-nate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a basecase is reached.

• Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, soproof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. Thesegeneratively recursive functions can often be interpreted as corecursive functions – each step generates the newdata, such as successive approximation in Newton’s method – and terminating this corecursion requires thatthe data eventually satisfy some condition, which is not necessarily guaranteed.

• In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.

• By contrast, generative recursion is when there is no such obvious loop variant, and termination depends on a function, such as “error of approximation”, that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.

11.4 Recursive programs

11.4.1 Recursive procedures

Factorial

A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:

fact(n) = 1 if n = 0

fact(n) = n · fact(n − 1) if n > 0

The function can also be written as a recurrence relation:

bn = nbn−1

b0 = 1

This evaluation of the recurrence relation demonstrates the computation that would be performed in evaluating the pseudocode above.

This factorial function can also be described without using recursion by making use of the typical looping constructs found in imperative programming languages.

The imperative code above is equivalent to this mathematical definition using an accumulator variable t:

fact(n) = factacc(n, 1)

factacc(n, t) = t if n = 0

factacc(n, t) = factacc(n − 1, n·t) if n > 0

The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.


Greatest common divisor

The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.

Function definition:

gcd(x, y) = x if y = 0

gcd(x, y) = gcd(y, remainder(x, y)) if y > 0

Recurrence relation for greatest common divisor, where x%y expresses the remainder of x/y:

gcd(x, y) = gcd(y, x%y) if y ≠ 0

gcd(x, 0) = x

The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and the computation shown above shows the steps of evaluation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.

The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.

Towers of Hanoi


Main article: Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[6][7] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?

Function definition:

hanoi(n) = 1 if n = 1

hanoi(n) = 2 · hanoi(n − 1) + 1 if n > 1


Recurrence relation for hanoi:

hn = 2hn−1 + 1

h1 = 1

Example implementations:

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.[8]

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array’s size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

int binary_search(int *data, int toFind, int start, int end);

/* Call binary_search with proper initial conditions.
   INPUT: data is an array of integers SORTED in ASCENDING order,
          toFind is the integer to search for,
          count is the total number of elements in the array
   OUTPUT: result of binary_search */
int search(int *data, int toFind, int count)
{
    // Start = 0 (beginning index)
    // End = count - 1 (top index)
    return binary_search(data, toFind, 0, count - 1);
}

/* Binary Search Algorithm.
   INPUT: data is an array of integers SORTED in ASCENDING order,
          toFind is the integer to search for,
          start is the minimum array index,
          end is the maximum array index
   OUTPUT: position of the integer toFind within array data,
           -1 if not found */
int binary_search(int *data, int toFind, int start, int end)
{
    // Get the midpoint.
    int mid = start + (end - start) / 2;  // Integer division

    // Stop condition.
    if (start > end)
        return -1;
    else if (data[mid] == toFind)  // Found?
        return mid;
    else if (data[mid] > toFind)   // Data is greater than toFind, search lower half
        return binary_search(data, toFind, start, mid - 1);
    else                           // Data is less than toFind, search upper half
        return binary_search(data, toFind, mid + 1, end);
}

11.4.2 Recursive data structures (structural recursion)

Main article: Recursive data type

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.

“Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms.”[9]

The examples in this section illustrate what is known as “structural recursion”. This term refers to the fact that the recursive procedures are acting on data that is defined recursively.

As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function’s body consume some immediate piece of a given compound value.[5]


Linked lists

Main article: Linked list

Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The “next” element of struct node is a pointer to another struct node, effectively creating a list type.

struct node {
    int data;           // some integer data
    struct node *next;  // pointer to another struct node
};

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

void list_print(struct node *list)
{
    if (list != NULL)               // base case
    {
        printf("%d ", list->data);  // print integer data followed by a space
        list_print(list->next);     // recursive call on the next node
    }
}

Binary trees

Main article: Binary tree

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).

struct node {
    int data;            // some integer data
    struct node *left;   // pointer to the left subtree
    struct node *right;  // pointer to the right subtree
};

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

// Test if tree_node contains i; return 1 if so, 0 if not.
int tree_contains(struct node *tree_node, int i)
{
    if (tree_node == NULL)
        return 0;  // base case
    else if (tree_node->data == i)
        return 1;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

At most two recursive calls will be made for any given call to tree_contains as defined above.

// Inorder traversal:
void tree_print(struct node *tree_node)
{
    if (tree_node != NULL)               // base case
    {
        tree_print(tree_node->left);     // go left
        printf("%d ", tree_node->data);  // print the integer followed by a space
        tree_print(tree_node->right);    // go right
    }
}

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order.

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, therefore the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem.

import java.io.*;

public class FileSystem {

    public static void main(String[] args) {
        traverse();
    }

    /**
     * Obtains the filesystem roots
     * Proceeds with the recursive filesystem traversal
     */
    private static void traverse() {
        File[] fs = File.listRoots();
        for (int i = 0; i < fs.length; i++) {
            if (fs[i].isDirectory() && fs[i].canRead()) {
                rtraverse(fs[i]);
            }
        }
    }

    /**
     * Recursively traverse a given directory
     *
     * @param fd indicates the starting point of traversal
     */
    private static void rtraverse(File fd) {
        File[] fss = fd.listFiles();
        for (int i = 0; i < fss.length; i++) {
            System.out.println(fss[i]);
            if (fss[i].isDirectory() && fss[i].canRead()) {
                rtraverse(fss[i]);
            }
        }
    }
}


This code blurs the line, at least somewhat, between recursion and iteration. It is, essentially, a recursive implementation, which is the best way to traverse a filesystem. It is also an example of direct and indirect recursion. The method “rtraverse” is purely a direct example; the method “traverse” is the indirect, which calls “rtraverse”. This example needs no “base case” scenario due to the fact that there will always be some fixed number of files or directories in a given filesystem.

11.5 Implementation issues

In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:

• Wrapper function (at top)

• Short-circuiting the base case, aka “Arm’s-length recursion” (at bottom)

• Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough

On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm’s-length recursion is a special case of this.

11.5.1 Wrapper function

A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.

Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as “level of recursion” or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.

11.5.2 Short-circuiting the base case

Short-circuiting the base case, also known as arm’s-length recursion, consists of checking the base case before making a recursive call – i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short-circuit, and may miss 0; this can be mitigated by a wrapper function.

Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.

Conceptually, short-circuiting can be considered to either have the same base case and recursive step, only checking the base case before the recursion, or it can be considered to have a different base case (one step removed from the standard base case) and a more complex recursive step, namely “check valid then recurse”, as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.


Depth-first search

A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see the binary trees section for the standard recursive discussion.

The standard recursive algorithm for a DFS is:

• base case: If current node is Null, return false

• recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children

In short-circuiting, this is instead:

• check value of current node, return true if match,

• otherwise, on children, if not Null, then recurse.

In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).

In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.

In C, the standard recursive algorithm may be implemented as:

bool tree_contains(struct node *tree_node, int i)
{
    if (tree_node == NULL)
        return false;  // base case
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

The short-circuited algorithm may be implemented as:

// Wrapper function to handle empty tree
bool tree_contains(struct node *tree_node, int i)
{
    if (tree_node == NULL)
        return false;  // empty tree
    else
        return tree_contains_do(tree_node, i);  // call auxiliary function
}

// Assumes tree_node != NULL
bool tree_contains_do(struct node *tree_node, int i)
{
    if (tree_node->data == i)
        return true;  // found
    else  // recurse
        return (tree_node->left  && tree_contains_do(tree_node->left, i)) ||
               (tree_node->right && tree_contains_do(tree_node->right, i));
}

Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is only made if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a bool, so the overall expression evaluates to a bool. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.

11.5.3 Hybrid algorithm

Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.

11.6 Recursion versus iteration

Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead, and sometimes explicit iteration is not available.

Compare the templates to compute x_n defined by x_n = f(n, x_(n−1)) from x_base: for an imperative language the overhead is to define the function, for a functional language the overhead is to define the accumulator variable x.

For example, the factorial function may be implemented iteratively in C by assigning to a loop index variable and an accumulator variable, rather than passing arguments and returning values by recursion:

unsigned int factorial(unsigned int n)
{
    unsigned int product = 1;  // empty product is 1
    while (n) {
        product *= n;
        --n;
    }
    return product;
}

11.6.1 Expressive power

Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program’s runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.[10][11]

Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and do loops are routinely rewritten in recursive form in functional languages.[12][13] However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack.

11.6.2 Performance issues

In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.

As a concrete example, the difference in performance between recursive and iterative implementations of the “factorial” example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

11.6.3 Stack space

In some programming languages, the stack space available to a thread is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language.[14] Note the caveat below regarding the special case of tail recursion.

11.6.4 Multiply recursive problems

Multiply recursive problems are inherently recursive, because of prior state they need to track. One example is tree traversal as in depth-first search; contrast with list traversal and linear search in a list, which is singly recursive and thus naturally iterative. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.


11.7 Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the “for” and “while” loops.

The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller’s return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.

11.8 Order of execution

In the simple case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached. Consider this example:

11.8.1 Function 1

void recursiveFunction(int num)
{
    printf("%d\n", num);
    if (num < 4)
        recursiveFunction(num + 1);
}

11.8.2 Function 2 with swapped lines

void recursiveFunction(int num)
{
    if (num < 4)
        recursiveFunction(num + 1);
    printf("%d\n", num);
}

11.9 Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed in a recurrence relation of Big O notation. They can (usually) then be simplified into a single Big-Oh term.


11.9.1 Shortcut rule

Main article: Master theorem

If the time-complexity of the function is in the form

T(n) = a · T(n/b) + O(n^k)

then the Big-Oh of the time-complexity is:

• If a > b^k, then the time-complexity is O(n^(log_b a))

• If a = b^k, then the time-complexity is O(n^k · log n)

• If a < b^k, then the time-complexity is O(n^k)

where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and n^k represents the work the function does independent of any recursion (e.g. partitioning, recombining) at each level of recursion.
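As worked instances of the shortcut rule, using the recurrences of two algorithms discussed earlier (binary search and mergesort):

```latex
% Binary search: one recursive call on half the input, constant extra work.
% a = 1, b = 2, k = 0; a = b^k since 1 = 2^0, so the middle case applies:
T(n) = 1 \cdot T(n/2) + O(n^0) \;\Rightarrow\; T(n) = O(n^0 \log n) = O(\log n)

% Mergesort: two recursive calls on halves, linear merge.
% a = 2, b = 2, k = 1; a = b^k since 2 = 2^1, so again the middle case:
T(n) = 2 \cdot T(n/2) + O(n^1) \;\Rightarrow\; T(n) = O(n \log n)
```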

11.10 See also

• Ackermann function

• Corecursion

• Functional programming

• Hierarchical and recursive queries in SQL

• Kleene–Rosser paradox

• McCarthy 91 function

• Memoization

• μ-recursive function

• Open recursion

• Primitive recursive function

• Recursion

• Sierpiński curve

• Takeuchi function

11.11 Notes and references

[1] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1: Recurrent Problems.

[2] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). p. 427.

[3] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 126.

[4] Felleisen, Matthias; Robert Bruce Findler; Matthew Flatt; Shriram Krishnamurthi (2001). How to Design Programs: An Introduction to Computing and Programming. Cambridge, Mass.: MIT Press. Part V: “Generative Recursion”.

[5] Felleisen, Matthias (2002). “Developing Interactive Web Programs”. In Jeuring, Johan. Advanced Functional Programming: 4th International School. Oxford, UK: Springer. p. 108.

[6] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1, Section 1.1: The Tower of Hanoi.


[7] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 427–430: The Tower of Hanoi.

[8] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 447–448: An Explicit Formula for the Tower of Hanoi Sequence.

[9] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 127.

[10] Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 79, ISBN 9781430232384.

[11] Drozdek, Adam (2012), Data Structures and Algorithms in C++ (4th ed.), Cengage Learning, p. 197, ISBN 9781285415017.

[12] Shivers, Olin. “The Anatomy of a Loop – A story of scope and control” (PDF). Georgia Institute of Technology. Retrieved 2012-09-03.

[13] Lambda the Ultimate. “The Anatomy of a Loop”. Lambda the Ultimate. Retrieved 2012-09-03.

[14] “27.1. sys — System-specific parameters and functions — Python v2.7.3 documentation”. Docs.python.org. Retrieved 2012-09-03.

11.12 Further reading

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

11.13 External links

• Harold Abelson and Gerald Sussman: “Structure and Interpretation of Computer Programs”

• Jonathan Bartlett: “Mastering Recursive Programming”

• David S. Touretzky: “Common Lisp: A Gentle Introduction to Symbolic Computation”

• Matthias Felleisen: “How To Design Programs: An Introduction to Computing and Programming”

• Owen L. Astrachan: “Big-Oh for Recursive Functions: Recurrence Relations”


Chapter 12

Infinite loop

This article is about the programming term. For the street on which Apple Inc.'s campus is located, see Infinite Loop(street). For the book by Michael S. Malone, see Infinite Loop (book).

An infinite loop (also known as an endless loop or unproductive loop) is a sequence of instructions in a computer program which loops endlessly, either due to the loop having no terminating condition, having one that can never be met, or one that causes the loop to start over. In older operating systems with cooperative multitasking, infinite loops normally caused the entire system to become unresponsive. With the now-prevalent preemptive multitasking model, infinite loops usually cause the program to consume all available processor time, but can usually be terminated by the user. Busy wait loops are also sometimes called “infinite loops”. One possible cause of a computer "freezing" is an infinite loop; others include thrashing, deadlock, and access violations.

12.1 Intended vs unintended looping

Looping is repeating a set of instructions until a specific condition is met. An infinite loop occurs when the condition will never be met, due to some inherent characteristic of the loop.

12.1.1 Intentional looping

There are a few situations when this is desired behavior. For example, the games on cartridge-based game consoles typically have no exit condition in their main loop, as there is no operating system for the program to exit to; the loop runs until the console is powered off.

Antique punch card-reading unit record equipment would literally halt once a card processing task was completed, since there was no need for the hardware to continue operating, until a new stack of program cards was loaded.

By contrast, modern interactive computers require that the computer constantly be monitoring for user input or device activity, so at some fundamental level there is an infinite processing idle loop that must continue until the device is turned off or reset. In the Apollo Guidance Computer, for example, this outer loop was contained in the Exec program, and if the computer had absolutely no other work to do it would loop, running a dummy job that would simply turn off the “computer activity” indicator light.

Modern computers also typically do not halt the processor or motherboard circuit-driving clocks when they crash. Instead they fall back to an error condition displaying messages to the operator, and enter an infinite loop waiting for the user to either respond to a prompt to continue, or to reset the device.

12.1.2 Unintentional looping

Most often, the term is used for those situations when this is not the intended result; that is, when this is a bug. Such errors are most common among novice programmers, but can be made by experienced programmers as well, because their causes can be quite subtle.


One common cause, for example, is that the programmer intends to iterate over a collection of items such as a linked list, executing the loop code once for each item. Improperly formed links can create a reference loop in the list, where one list element links to one that occurred earlier in the list. This joins part of the list into a circle, causing the program to loop forever.

While most infinite loops can be found by close inspection of the code, there is no general method to determine whether a given program will ever halt or will run forever; this is the undecidability of the halting problem.

12.2 Interruption

So long as the system is responsive, infinite loops can often be interrupted by sending a signal to the process (such as SIGINT in Unix), or an interrupt to the processor, causing the current process to be aborted. This can be done in a task manager, in a terminal with the Control-C command, or by using the kill command or system call. However, this does not always work, as the process may not be responding to signals or the processor may be in an uninterruptible state, such as in the Cyrix coma bug (caused by overlapping uninterruptible instructions in an instruction pipeline). In some cases other signals such as SIGKILL can work, as they do not require the process to be responsive, while in other cases the loop cannot be terminated short of system shutdown.

12.3 Language support

See also: Control flow

Infinite loops can be implemented using various control flow constructs. Most commonly, in unstructured programming this is a jump back up (goto), while in structured programming this is an indefinite loop (while loop) set to never end, either by omitting the condition or explicitly setting it to true, as while (true) ....

Some languages have special constructs for infinite loops, typically by omitting the condition from an indefinite loop. Examples include Ada (loop ... end loop),[1] Fortran (DO ... END DO), Go (for { ... }), and Ruby (loop do ... end).

12.4 Examples of intentional infinite loops

The simplest example (in C):

    int main() { for (;;); // or while (1); }

The form for (;;) for an infinite loop is traditional, appearing in the standard reference The C Programming Language, and is often punningly pronounced “forever”.[2]

A similar example in BASIC, a loop that will print “INFINITE LOOP” without halting:

    10 PRINT "INFINITE LOOP"
    20 GOTO 10

A similar example in x86 assembly language:

    loop:
        ; Code to loop here
        jmp loop

Another example is in a DOS batch file:

    :A
    goto :A

Here the loop is quite obvious, as the last line unconditionally sends execution back to the first.

An example in Python:

    while True:
        print("Infinite Loop")


An example in Bash:

    while true; do echo "Infinite Loop"; done

An example in Perl:

    print "Infinite Loop\n" while 1;

12.5 Examples of unintentional infinite loops

12.5.1 Mathematical errors

Here is one example of an infinite loop in Visual Basic:

    Dim x As Integer
    Do While x < 5
        x = 1
        x = x + 1
    Loop

This creates a situation where x will never reach 5: at the start of each pass through the loop, x is set to 1 and then incremented to 2, so the loop condition never fails. The loop could be fixed by moving the x = 1 instruction outside the loop. Essentially, this infinite loop instructs the computer to keep adding 1 to 1 until 5 is reached; since 1 + 1 always equals 2, that will never happen.

In some languages, programmer confusion about the mathematical symbols may lead to an unintentional infinite loop. For example, here is a snippet in C:

    #include <stdio.h>

    int main(void)
    {
        int a = 0;
        while (a < 10) {
            printf("%d\n", a);
            if (a = 5)
                printf("a equals 5!\n");
            a++;
        }
        return 0;
    }

The expected output is the numbers 0 through 9, with an interjected “a equals 5!" between 5 and 6. However, in the line “if (a = 5)" above, the programmer has confused the = (assignment) operator with the == (equality test) operator. Instead, this will assign the value of 5 to a at this point in the program. Thus, a will never be able to advance to 10, and this loop cannot terminate.

12.5.2 Variable handling errors

Unexpected behavior in evaluating the terminating condition can also cause this problem. Here is an example (in C):

    float x = 0.1;
    while (x != 1.1) {
        printf("x = %f\n", x);
        x = x + 0.1;
    }

On some systems, this loop will execute ten times as expected, but on other systems it will never terminate. The problem is that the loop terminating condition (x != 1.1) tests for exact equality of two floating-point values, and the way floating-point values are represented in many computers will make this test fail, because they cannot represent the value 1.1 exactly.

The same can happen in Python:

    x = 0.1
    while x != 1:
        print(x)
        x += 0.1

Because of the likelihood of tests for equality or not-equality failing unexpectedly, it is safer to use greater-than or less-than tests when dealing with floating-point values. For example, instead of testing whether x equals 1.1, one might test whether (x <= 1.0), or (x < 1.1), either of which would be certain to exit after a finite number of iterations. Another way to fix this particular example would be to use an integer as a loop index, counting the number of iterations that have been performed.

A similar problem occurs frequently in numerical analysis: in order to compute a certain result, an iteration is intended to be carried out until the error is smaller than a chosen tolerance. However, because of rounding errors during the iteration, the specified tolerance can never be reached, resulting in an infinite loop.
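As a sketch of the safer comparison, here is a hypothetical fix for the Python loop above, replacing the not-equality test with a strictly bounded less-than test:

```python
# Floating-point accumulation: 0.1 cannot be represented exactly in binary,
# so an equality test against 1.1 may never succeed. A less-than test is
# guaranteed to terminate.
x = 0.1
steps = 0
while x < 1.1:      # safe: strictly bounded, unlike (x != 1.1)
    x += 0.1
    steps += 1
# The loop exits after finitely many iterations even though x never
# takes on the exact value 1.1.
```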


12.6 Multi-party loops

Although infinite loops in a single program are usually easy to predict, a loop caused by several entities interacting is much harder to foresee. Consider a server that always replies with an error message if it does not understand the request. Apparently, there is no possibility for an infinite loop in the server, but if there are two such servers (A and B), and A receives a message of unknown type from B, then A replies with an error message to B, B does not understand the error message and replies to A with its own error message, A does not understand the error message from B and sends yet another error message, and so on ad infinitum. One common example of such a situation is an e-mail loop.

12.7 Pseudo-infinite loops

A pseudo-infinite loop is a loop that appears infinite but is really just a very long loop.

12.7.1 Impossible termination condition

An example for loop in C:

    unsigned int i;
    for (i = 1; i != 0; i++) {
        /* loop code */
    }

It appears that this will go on indefinitely, but in fact the value of i will eventually reach the maximum value storable in an unsigned int, and adding 1 to that number will wrap around to 0, breaking the loop. The actual limit of i depends on the details of the system and compiler used. With arbitrary-precision arithmetic, this loop would continue until the computer's memory could no longer contain i. If i were a signed integer, rather than an unsigned integer, overflow would be undefined. In this case, the compiler could optimize the loop into an infinite loop.
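The wrap-around can be sketched in Python by simulating a small unsigned type; Python's own integers are arbitrary-precision, so the masking must be explicit:

```python
# Simulate an 8-bit unsigned counter: i wraps from 255 back to 0,
# so the condition (i != 0) eventually fails and the loop ends.
i = 1
iterations = 0
while i != 0:
    i = (i + 1) % 256    # the "% 256" plays the role of hardware wrap-around
    iterations += 1
```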

12.7.2 Infinite recursion

Infinite recursion is a special case of an infinite loop that is caused by recursion. The most trivial example of this is the term Ω in the lambda calculus, shown below in Scheme:

    (define Ω (let ([ω (lambda (f) (f f))]) (ω ω)))

Ω is an infinite recursion, and therefore has no normal form. When using structural recursion, infinite recursions are usually caused by a missing base case or by a faulty inductive step. An example of such a faulty structural recursion:

    (define (sum-from-1-to n) (+ n (sum-from-1-to (sub1 n))))

The function sum-from-1-to will run out of stack space, as the recursion never stops; it is infinite. To correct the problem, a base case is added:

    (define (sum-from-1-to' n) (cond [(= n 1) 1] [else (+ n (sum-from-1-to' (sub1 n)))]))

This revised function will only run out of stack space if n is less than 1 or n is too large; error checking would remove the first case. For information on recursive functions which never run out of stack space, see tail recursion.

See also: Recursion, for an alternate explanation of infinite recursion.
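A sketch of the same fix in Python (a hypothetical port of the Scheme function above):

```python
def sum_from_1_to(n):
    """Sum the integers 1..n using structural recursion.

    The base case (n == 1) stops the recursion; without it, the
    function would recurse past zero and exhaust the stack.
    """
    if n == 1:                          # base case: breaks the chain of recursion
        return 1
    return n + sum_from_1_to(n - 1)     # recursive case: strictly smaller input
```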

12.7.3 Break statement

A “while (true)” loop looks infinite at first glance, but there may be a way to escape the loop through a break statement or return statement. Example in PHP:

    while (true) {
        if ($foo->bar()) {
            return;
        }
    }


12.7.4 Alderson loop

An Alderson loop is a rare slang or jargon term for an infinite loop where an exit condition is available but inaccessible in the current implementation of the code, typically due to a programmer's error. These are most common and visible while debugging user interface code.

A C-like pseudocode example of an Alderson loop, where the program is supposed to sum numbers given by the user until zero is given, but where the programmer has used the wrong operator:

    sum = 0;
    while (true) {
        printf("Input a number to add to the sum or 0 to quit");
        i = getUserInput();
        if (i * 0) {
            // if i times 0 is true, add i to the sum
            sum += i;   // this never happens because (i * 0) is 0 for any i;
                        // it would work if we had != in the condition instead of *
        }
        if (sum > 100) {
            break;      // terminate the loop; the exit condition exists but is
                        // never reached because sum is never added to
        }
    }

The term allegedly received its name from a programmer who had coded a modal message box in Microsoft Access without either an OK or Cancel button, thereby disabling the entire program whenever the box came up.[3]

12.8 See also

• Cycle detection

• Deadlock

• Divergence (computer science)

• Goto (command)

• Recursion (computer science)

12.9 References

[1] Ada Programming: Control: Endless Loop

[2] Endless loop in C/C++

[3] Alderson Loop The Jargon File, Version 4.4.7. Accessed 5/21/2006. (public domain)


Chapter 13

Single recursion

This article is about recursive approaches to solving problems. For recursion in computer science acronyms, see Recursive acronym § Computer-related examples.

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).[1] The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[2]

“The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.”[3]

Most computer programming languages support recursion by allowing a function to call itself within the program text. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. Computability theory proves that these recursive-only languages are Turing complete; they are as computationally powerful as Turing-complete imperative languages, meaning they can solve the same kinds of problems as imperative languages even without iterative control structures such as “while” and “for”.

13.1 Recursive functions and algorithms

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the “terminating case”.

The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case is reached. (Functions that are not intended to terminate under normal circumstances, for example some system and server processes, are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by corecursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say “compute the nth term (nth partial sum)”.
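A minimal Python sketch of the two-part factorial definition described above, with the base case and recursive case made explicit:

```python
def factorial(n):
    """Factorial via structural recursion on the natural numbers."""
    if n == 0:                      # base case: 0! = 1
        return 1
    return n * factorial(n - 1)     # recursive case: n! = n * (n - 1)!
```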


Tree created using the Logo programming language and relying heavily on recursion

13.2 Recursive data types

Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

Further information: Algebraic data type


13.2.1 Inductively defined data

Main article: Recursive data type

An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.

Another example of an inductive definition is the natural numbers (or positive integers):

A natural number is either 1 or n+1, where n is a natural number.

Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

    <expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>)

This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complex arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.
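The recursive structure of the grammar maps directly onto a recursive evaluator. The sketch below is hypothetical (it assumes the input is already tokenized and fully parenthesized, exactly as the grammar requires):

```python
def evaluate(tokens):
    """Recursively evaluate <expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>).

    `tokens` is a list of token strings; the function consumes them from the front.
    """
    tok = tokens.pop(0)
    if tok == '(':                      # recursive case: parenthesized expression
        left = evaluate(tokens)         # evaluate left sub-expression
        op = tokens.pop(0)              # '*' or '+'
        right = evaluate(tokens)        # evaluate right sub-expression
        tokens.pop(0)                   # discard the closing ')'
        return left * right if op == '*' else left + right
    return int(tok)                     # base case: a number

# Example: the expression (5 * ((3 * 6) + 8)) as a token list.
expr = ['(', '5', '*', '(', '(', '3', '*', '6', ')', '+', '8', ')', ')']
```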

13.2.2 Coinductively defined data and corecursion

Main articles: Coinduction and Corecursion

A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might look like this:

    A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure (namely, via the accessor functions head and tail) and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.

Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program.
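In Python, generators give a rough analogue of this idea: the sketch below defines the unbounded stream of primes (simple trial division, not an efficient sieve) and then takes a finite portion of it:

```python
from itertools import count, islice

def primes():
    """Yield prime numbers one at a time, without an upper bound.

    The definition describes the whole infinite stream; consumers
    decide how much of it to observe.
    """
    found = []
    for n in count(2):
        if all(n % p for p in found):   # n is divisible by no earlier prime
            found.append(n)
            yield n

# Take a finite portion of the infinite stream: the first 5 primes.
first_five = list(islice(primes(), 5))
```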

13.3 Types of recursion

13.3.1 Single recursion and multiple recursion

Recursion that contains only a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search, or computing the Fibonacci sequence.

Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being replaceable by iteration without an explicit stack.

Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively is multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, tracking at each step two successive values – see corecursion: examples. A more sophisticated example is using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.
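A sketch of this conversion in Python: the naive multiply-recursive Fibonacci versus a singly recursive version that passes two successive values as parameters:

```python
def fib_naive(n):
    """Multiple recursion: two self-references per call (exponential time)."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_single(n, a=0, b=1):
    """Single recursion: one self-reference, carrying two successive
    Fibonacci values (a, b) as parameters (linear time)."""
    if n == 0:
        return a
    return fib_single(n - 1, b, a + b)
```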

13.3.2 Indirect recursion

Main article: Mutual recursion

Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.

Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, it is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions.
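A standard mutual-recursion sketch in Python, using the usual even/odd example (for illustration only; real code would use n % 2):

```python
def is_even(n):
    """n is even iff n == 0 or n - 1 is odd."""
    if n == 0:
        return True
    return is_odd(n - 1)    # indirect: is_even -> is_odd -> is_even -> ...

def is_odd(n):
    """n is odd iff n != 0 and n - 1 is even."""
    if n == 0:
        return False
    return is_even(n - 1)
```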

13.3.3 Anonymous recursion

Main article: Anonymous recursion

Recursion is usually done by explicitly calling a function by name. However, recursion can also be done by implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.
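Python has no built-in construct for this, but the effect can be sketched by passing an anonymous function to itself (a hypothetical illustration of self-application, not an idiom to imitate):

```python
# An anonymous factorial: the lambda receives itself as `f` and recurses
# by calling f(f, ...), never referring to its own name.
anon_fact = (lambda f, n: 1 if n == 0 else n * f(f, n - 1))
result = anon_fact(anon_fact, 5)    # 120
```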

13.3.4 Structural versus generative recursion

See also: Structural recursion

Some authors classify recursion as either “structural” or “generative”. The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:

[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.[4]

Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.


Generative recursion is the alternative:

Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How to Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton's method, fractals, and adaptive integration.[5]

This distinction is important in proving termination of a function.

• All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.

• Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions – each step generates the new data, such as successive approximation in Newton's method – and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.

• In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.

• By contrast, generative recursion is when there is no such obvious loop variant, and termination depends on a function, such as “error of approximation”, that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.

13.4 Recursive programs

13.4.1 Recursive procedures

Factorial

A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:

    fact(n) = 1                  if n = 0
    fact(n) = n · fact(n − 1)    if n > 0

The function can also be written as a recurrence relation:

    b_n = n · b_{n−1}
    b_0 = 1

This evaluation of the recurrence relation demonstrates the computation that would be performed in evaluating the pseudocode above.

This factorial function can also be described without using recursion by making use of the typical looping constructs found in imperative programming languages.

The imperative code above is equivalent to this mathematical definition using an accumulator variable t:

    fact(n) = fact_acc(n, 1)

    fact_acc(n, t) = t                       if n = 0
    fact_acc(n, t) = fact_acc(n − 1, n·t)    if n > 0

The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.
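A sketch of the accumulator version in Python (standing in for the Scheme translation the text refers to):

```python
def fact_acc(n, t=1):
    """Tail-recursive factorial: t accumulates the product, so that
    fact(n) = fact_acc(n, 1)."""
    if n == 0:
        return t                       # t holds the finished product
    return fact_acc(n - 1, n * t)      # iterate by recursing on (n - 1, n * t)
```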


Greatest common divisor

The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.

Function definition:

    gcd(x, y) = x                          if y = 0
    gcd(x, y) = gcd(y, remainder(x, y))    if y > 0

Recurrence relation for greatest common divisor, where x % y expresses the remainder of x/y:

    gcd(x, y) = gcd(y, x % y)    if y ≠ 0
    gcd(x, 0) = x

The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and the computation shown above shows the steps of evaluation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.

The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.
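Both forms can be sketched in Python; the tail-recursive definition and the loop maintain the same (x, y) state:

```python
def gcd_rec(x, y):
    """Tail-recursive Euclidean algorithm."""
    if y == 0:
        return x
    return gcd_rec(y, x % y)    # the recursive call is the last action

def gcd_iter(x, y):
    """Same algorithm with explicit iteration: state lives in x and y."""
    while y != 0:
        x, y = y, x % y         # one step of the Euclidean algorithm
    return x
```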

Towers of Hanoi

Towers of Hanoi

Main article: Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[6][7] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?

Function definition:

    hanoi(n) = 1                       if n = 1
    hanoi(n) = 2 · hanoi(n − 1) + 1    if n > 1


Recurrence relation for hanoi:

    h_n = 2·h_{n−1} + 1
    h_1 = 1

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.[8]
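A sketch of one such implementation in Python, counting the moves and matching the recurrence h_n = 2·h_{n−1} + 1 (the peg names are illustrative):

```python
def hanoi(n, source='A', target='C', spare='B'):
    """Return the number of moves needed to shift n disks from source to target."""
    if n == 1:
        return 1                                    # base case: one move
    # Move n-1 disks out of the way, move the largest disk, move them back.
    return (hanoi(n - 1, source, spare, target)
            + 1
            + hanoi(n - 1, spare, target, source))

# The explicit (closed-form) solution is 2**n - 1 moves.
```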

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

    /* Call binary_search with proper initial conditions.
       INPUT:  data is an array of integers SORTED in ASCENDING order,
               toFind is the integer to search for,
               count is the total number of elements in the array
       OUTPUT: result of binary_search */
    int search(int *data, int toFind, int count)
    {
        // Start = 0 (beginning index), End = count - 1 (top index)
        return binary_search(data, toFind, 0, count - 1);
    }

    /* Binary search algorithm.
       INPUT:  data is an array of integers SORTED in ASCENDING order,
               toFind is the integer to search for,
               start is the minimum array index,
               end is the maximum array index
       OUTPUT: position of the integer toFind within array data,
               -1 if not found */
    int binary_search(int *data, int toFind, int start, int end)
    {
        // Get the midpoint (integer division).
        int mid = start + (end - start) / 2;

        if (start > end)                // stop condition: not found
            return -1;
        else if (data[mid] == toFind)   // found?
            return mid;
        else if (data[mid] > toFind)    // data is greater than toFind: search lower half
            return binary_search(data, toFind, start, mid - 1);
        else                            // data is less than toFind: search upper half
            return binary_search(data, toFind, mid + 1, end);
    }

13.4.2 Recursive data structures (structural recursion)

Main article: Recursive data type

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.

“Recursive algorithms are particularly appropriate when the underlying problem or the data to betreated are defined in recursive terms.”[9]

The examples in this section illustrate what is known as “structural recursion”. This term refers to the fact that the recursive procedures are acting on data that is defined recursively.

As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function's body consume some immediate piece of a given compound value.[5]


Linked lists

Main article: Linked list

Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The “next” element of struct node is a pointer to another struct node, effectively creating a list type.

    struct node {
        int data;           // some integer data
        struct node *next;  // pointer to another struct node
    };

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

    void list_print(struct node *list)
    {
        if (list != NULL)               // base case
        {
            printf("%d ", list->data);  // print integer data followed by a space
            list_print(list->next);     // recursive call on the next node
        }
    }

Binary trees

Main article: Binary tree

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).

    struct node {
        int data;            // some integer data
        struct node *left;   // pointer to the left subtree
        struct node *right;  // pointer to the right subtree
    };

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

    // Test if tree_node contains i; return 1 if so, 0 if not.
    int tree_contains(struct node *tree_node, int i)
    {
        if (tree_node == NULL)
            return 0;                   // base case
        else if (tree_node->data == i)
            return 1;
        else
            return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
    }

At most two recursive calls will be made for any given call to tree_contains as defined above.

    // Inorder traversal:
    void tree_print(struct node *tree_node)
    {
        if (tree_node != NULL)                 // base case
        {
            tree_print(tree_node->left);       // go left
            printf("%d ", tree_node->data);    // print the integer followed by a space
            tree_print(tree_node->right);      // go right
        }
    }

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order.

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, therefore the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem.

    import java.io.*;

    public class FileSystem {

        public static void main(String[] args) {
            traverse();
        }

        /**
         * Obtains the filesystem roots.
         * Proceeds with the recursive filesystem traversal.
         */
        private static void traverse() {
            File[] fs = File.listRoots();
            for (int i = 0; i < fs.length; i++) {
                if (fs[i].isDirectory() && fs[i].canRead()) {
                    rtraverse(fs[i]);
                }
            }
        }

        /**
         * Recursively traverse a given directory.
         *
         * @param fd indicates the starting point of traversal
         */
        private static void rtraverse(File fd) {
            File[] fss = fd.listFiles();
            for (int i = 0; i < fss.length; i++) {
                System.out.println(fss[i]);
                if (fss[i].isDirectory() && fss[i].canRead()) {
                    rtraverse(fss[i]);
                }
            }
        }
    }


This code blends the lines, at least somewhat, between recursion and iteration. It is, essentially, a recursive implementation, which is the best way to traverse a filesystem. It is also an example of direct and indirect recursion. The method “rtraverse” is purely a direct example; the method “traverse” is the indirect one, which calls “rtraverse”. This example needs no “base case” scenario due to the fact that there will always be some fixed number of files or directories in a given filesystem.

13.5 Implementation issues

In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:

• Wrapper function (at top)

• Short-circuiting the base case, aka “Arm’s-length recursion” (at bottom)

• Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough

On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm's-length recursion is a special case of this.

13.5.1 Wrapper function

A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.

Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as “level of recursion” or partial computations for memoization, and to handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.
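A small Python sketch of the pattern (the names are hypothetical): the wrapper validates input once, and the auxiliary function does the actual recursion without re-checking on every call:

```python
def factorial(n):
    """Wrapper: validate once, then delegate to the recursive helper."""
    if not isinstance(n, int) or n < 0:
        raise ValueError("factorial requires a non-negative integer")
    return _fact_helper(n)

def _fact_helper(n):
    """Auxiliary function: does the recursion, skipping re-validation."""
    if n == 0:
        return 1
    return n * _fact_helper(n - 1)
```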

13.5.2 Short-circuiting the base case

Short-circuiting the base case, also known as arm’s-length recursion, consists of checking the base case before making a recursive call – i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short circuit, and may miss 0; this can be mitigated by a wrapper function.

Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.

Conceptually, short-circuiting can be considered to either have the same base case and recursive step, only checking the base case before the recursion, or it can be considered to have a different base case (one step removed from the standard base case) and a more complex recursive step, namely “check valid then recurse”, as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.


Depth-first search

A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see the binary trees section for the standard recursive discussion.

The standard recursive algorithm for a DFS is:

• base case: If current node is Null, return false

• recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children

In short-circuiting, this is instead:

• check value of current node, return true if match,

• otherwise, on children, if not Null, then recurse.

In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).

In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.

In C, the standard recursive algorithm may be implemented as:

    bool tree_contains(struct node *tree_node, int i) {
        if (tree_node == NULL)
            return false;  // base case
        else if (tree_node->data == i)
            return true;
        else
            return tree_contains(tree_node->left, i) ||
                   tree_contains(tree_node->right, i);
    }

The short-circuited algorithm may be implemented as:

    // Wrapper function to handle empty tree
    bool tree_contains(struct node *tree_node, int i) {
        if (tree_node == NULL)
            return false;  // empty tree
        else
            return tree_contains_do(tree_node, i);  // call auxiliary function
    }

    // Assumes tree_node != NULL
    bool tree_contains_do(struct node *tree_node, int i) {
        if (tree_node->data == i)
            return true;  // found
        else  // recurse
            return (tree_node->left  && tree_contains_do(tree_node->left,  i)) ||
                   (tree_node->right && tree_contains_do(tree_node->right, i));
    }

Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is only made if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a bool, so the overall expression evaluates to a bool. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.

13.5.3 Hybrid algorithm

Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.

13.6 Recursion versus iteration

Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead, and sometimes explicit iteration is not available.

Compare the templates to compute x_n defined by x_n = f(n, x_{n−1}) from x_base: for an imperative language the overhead is to define the function, for a functional language the overhead is to define the accumulator variable x.

For example, the factorial function may be implemented iteratively in C by assigning to a loop index variable and an accumulator variable, rather than passing arguments and returning values by recursion:

    unsigned int factorial(unsigned int n) {
        unsigned int product = 1;  // empty product is 1
        while (n) {
            product *= n;
            --n;
        }
        return product;
    }
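The functional-style template can be sketched as follows (the name factorial_acc is hypothetical): the loop’s accumulator becomes an extra parameter, and the recursive call becomes a tail call that a tail-call-optimizing compiler can turn back into the same loop:

```cpp
#include <cassert>

// Hypothetical accumulator-passing sketch: the state the while loop
// kept in `product` is threaded through a parameter instead, and the
// recursive call is in tail position (nothing happens after it).
unsigned long factorial_acc(unsigned n, unsigned long product = 1) {
    if (n == 0) return product;                // accumulator holds the answer
    return factorial_acc(n - 1, product * n);  // tail call
}
```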

13.6.1 Expressive power

Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program’s runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.[10][11]

Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and do loops are routinely rewritten in recursive form in functional languages.[12][13] However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack.

13.6.2 Performance issues

In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.

As a concrete example, the difference in performance between recursive and iterative implementations of the “factorial” example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

13.6.3 Stack space

In some programming languages, the stack space available to a thread is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language.[14] Note the caveat below regarding the special case of tail recursion.

13.6.4 Multiply recursive problems

Multiply recursive problems are inherently recursive, because of prior state they need to track. One example is tree traversal as in depth-first search; contrast with list traversal and linear search in a list, which is singly recursive and thus naturally iterative. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.


13.7 Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the “for” and “while” loops.

The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller’s return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.
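The contrast can be sketched as follows (these particular definitions are illustrative, under the usual Euclidean-gcd formulation, rather than an earlier listing from this book):

```cpp
#include <cassert>

// gcd: the recursive call is the last action, so nothing is deferred.
unsigned gcd(unsigned x, unsigned y) {
    if (y == 0) return x;
    return gcd(y, x % y);          // tail call
}

// factorial: the multiply happens AFTER the recursive call returns,
// so each level leaves a deferred multiplication on the stack.
unsigned long factorial(unsigned n) {
    if (n == 0) return 1;
    return n * factorial(n - 1);   // not a tail call
}
```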

13.8 Order of execution

In the simple case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached. Consider this example:

13.8.1 Function 1

    void recursiveFunction(int num) {
        printf("%d\n", num);
        if (num < 4)
            recursiveFunction(num + 1);
    }

13.8.2 Function 2 with swapped lines

    void recursiveFunction(int num) {
        if (num < 4)
            recursiveFunction(num + 1);
        printf("%d\n", num);
    }
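To make the difference concrete, the two orderings can be reproduced as string-building sketches (the names before and after are hypothetical) that collect the same numbers the printf calls would print:

```cpp
#include <cassert>
#include <string>

// before(): appends num BEFORE recursing, like Function 1.
std::string before(int num) {
    std::string out = std::to_string(num);
    if (num < 4) out += " " + before(num + 1);
    return out;
}

// after(): appends num only AFTER the recursion unwinds, like Function 2.
std::string after(int num) {
    std::string out;
    if (num < 4) out = after(num + 1) + " ";
    return out + std::to_string(num);
}
```

Starting from 1, before(1) yields “1 2 3 4”, while after(1) yields “4 3 2 1”: instructions after the recursive call run only as the recursion unwinds.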

13.9 Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed as a recurrence relation in Big O notation, which can then (usually) be simplified into a single Big-O term.


13.9.1 Shortcut rule

Main article: Master theorem

If the time-complexity of the function is in the form

    T(n) = a · T(n/b) + O(n^k)

then the Big-O time-complexity is:

• If a > b^k, then the time-complexity is O(n^(log_b a))

• If a = b^k, then the time-complexity is O(n^k · log n)

• If a < b^k, then the time-complexity is O(n^k)

where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e., the number of pieces you divide the problem into), and n^k represents the work the function does independent of any recursion (e.g., partitioning, recombining) at each level of recursion.
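As a worked application of the rule, recursive binary search makes a = 1 recursive call on half the input (b = 2) and does constant non-recursive work (k = 0); since a = b^k, the rule gives O(n^k · log n) = O(log n). A sketch (bsearch_rec is a hypothetical name):

```cpp
#include <cassert>

// Recurrence: T(n) = 1 * T(n/2) + O(n^0), so a = 1, b = 2, k = 0.
// a = b^k, hence O(log n) by the shortcut rule above.
int bsearch_rec(const int *a, int lo, int hi, int key) {
    if (lo > hi) return -1;                   // base case: not found
    int mid = lo + (hi - lo) / 2;             // O(1) non-recursive work
    if (a[mid] == key) return mid;
    if (a[mid] < key)
        return bsearch_rec(a, mid + 1, hi, key);  // one half-size subproblem
    return bsearch_rec(a, lo, mid - 1, key);
}
```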

13.10 See also

• Ackermann function

• Corecursion

• Functional programming

• Hierarchical and recursive queries in SQL

• Kleene–Rosser paradox

• McCarthy 91 function

• Memoization

• μ-recursive function

• Open recursion

• Primitive recursive function

• Recursion

• Sierpiński curve

• Takeuchi function

13.11 Notes and references

[1] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1: Recurrent Problems.

[2] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). p. 427.

[3] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 126.

[4] Felleisen, Matthias; Robert Bruce Findler; Matthew Flatt; Shriram Krishnamurthi (2001). How to Design Programs: An Introduction to Computing and Programming. Cambridge, Mass.: MIT Press. Part V: “Generative Recursion”.

[5] Felleisen, Matthias (2002). “Developing Interactive Web Programs”. In Jeuring, Johan. Advanced Functional Programming: 4th International School. Oxford, UK: Springer. p. 108.

[6] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1, Section 1.1: The Tower of Hanoi.


[7] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 427–430: The Tower of Hanoi.

[8] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 447–448: An Explicit Formula for the Tower of Hanoi Sequence.

[9] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 127.

[10] Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 79, ISBN 9781430232384.

[11] Drozdek, Adam (2012), Data Structures and Algorithms in C++ (4th ed.), Cengage Learning, p. 197, ISBN 9781285415017.

[12] Shivers, Olin. “The Anatomy of a Loop – A story of scope and control” (PDF). Georgia Institute of Technology. Retrieved 2012-09-03.

[13] Lambda the Ultimate. “The Anatomy of a Loop”. Lambda the Ultimate. Retrieved 2012-09-03.

[14] “27.1. sys — System-specific parameters and functions — Python v2.7.3 documentation”. Docs.python.org. Retrieved 2012-09-03.

13.12 Further reading

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

13.13 External links

• Harold Abelson and Gerald Sussman: “Structure and Interpretation of Computer Programs”

• Jonathan Bartlett: “Mastering Recursive Programming”

• David S. Touretzky: “Common Lisp: A Gentle Introduction to Symbolic Computation”

• Matthias Felleisen: “How To Design Programs: An Introduction to Computing and Programming”

• Owen L. Astrachan: “Big-Oh for Recursive Functions: Recurrence Relations”


Chapter 14

Mutual recursion

In mathematics and computer science, mutual recursion is a form of recursion where two mathematical or computational objects, such as functions or data types, are defined in terms of each other.[1] Mutual recursion is very common in functional programming and in some problem domains, such as recursive descent parsers, where the data types are naturally mutually recursive, but is uncommon in other domains.

14.1 Examples

14.1.1 Data types

Further information: Recursive data type

The most important basic example of a data type that can be defined by mutual recursion is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically:

    f: [t[1], ..., t[k]]
    t: (v, f)

A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types. Further, it matches many algorithms on trees, which consist of doing one thing with the value, and another thing with the children.

This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest:

    t: (v, [t[1], ..., t[k]])

A tree t consists of a pair of a value v and a list of trees (its children). This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which require disentangling to prove results about.

In Standard ML, the tree and forest data types can be mutually recursively defined as follows, allowing empty trees:[2]

    datatype 'a tree   = Empty | Node of 'a * 'a forest
    and      'a forest = Nil   | Cons of 'a tree * 'a forest

14.1.2 Computer functions

Just as algorithms on recursive data types can naturally be given by recursive functions, algorithms on mutually recursive data structures can naturally be given by mutually recursive functions. Common examples include algorithms on trees, and recursive descent parsers. As with direct recursion, tail call optimization is necessary if the recursion depth is large or unbounded, such as using mutual recursion for multitasking. Note that tail call optimization in general (when the function called is not the same as the original function, as in tail-recursive calls) may be more difficult to implement than the special case of tail-recursive call optimization, and thus efficient implementation of mutual tail recursion may be absent from languages that only optimize tail-recursive calls. In languages such as Pascal that require declaration before use, mutually recursive functions require forward declaration, as a forward reference cannot be avoided when defining them.

As with directly recursive functions, a wrapper function may be useful, with the mutually recursive functions defined as nested functions within its scope if this is supported. This is particularly useful for sharing state across a set of functions without having to pass parameters between them.

Basic examples

A standard example of mutual recursion, which is admittedly artificial, is determining whether a non-negative number is even or odd by having two separate functions that call each other, decrementing each time.[3] In C:

    bool is_even(unsigned int n) {
        if (n == 0)
            return true;
        else
            return is_odd(n - 1);
    }

    bool is_odd(unsigned int n) {
        if (n == 0)
            return false;
        else
            return is_even(n - 1);
    }

These functions are based on the observation that the question is 4 even? is equivalent to is 3 odd?, which is in turn equivalent to is 2 even?, and so on down to 0. This example is mutual single recursion, and could easily be replaced by iteration. In this example, the mutually recursive calls are tail calls, and tail call optimization would be necessary for this to execute in constant stack space; in C this would take O(n) stack space, unless rewritten to use jumps instead of calls.[4] This could be reduced to a single recursive function is_even, with is_odd calling is_even, but is_even only calling itself, with is_odd inlined.

As a more general class of examples, an algorithm on a tree can be decomposed into its behavior on a value and its behavior on children, and can be split up into two mutually recursive functions, one specifying the behavior on a tree, calling the forest function for the forest of children, and one specifying the behavior on a forest, calling the tree function for each tree in the forest. In Python:

    def f_tree(tree):
        f_value(tree.value)
        f_forest(tree.children)

    def f_forest(forest):
        for tree in forest:
            f_tree(tree)

In this case the tree function calls the forest function by single recursion, but the forest function calls the tree function by multiple recursion.

Using the Standard ML data type above, the size of a tree (number of nodes) can be computed via the following mutually recursive functions:[5]

    fun size_tree Empty = 0
      | size_tree (Node (_, f)) = 1 + size_forest f
    and size_forest Nil = 0
      | size_forest (Cons (t, f')) = size_tree t + size_forest f'

A more detailed example in Scheme, counting the leaves of a tree:[6]

    (define (count-leaves tree)
      (if (leaf? tree)
          1
          (count-leaves-in-forest (children tree))))

    (define (count-leaves-in-forest forest)
      (if (null? forest)
          0
          (+ (count-leaves (car forest))
             (count-leaves-in-forest (cdr forest)))))

These examples reduce easily to a single recursive function by inlining the forest function in the tree function, which is commonly done in practice: directly recursive functions that operate on trees sequentially process the value of the node and recurse on the children within one function, rather than dividing these into two separate functions.

Advanced examples

A more complicated example is given by recursive descent parsers, which can be naturally implemented by having one function for each production rule of a grammar, which then mutually recurse; this will in general be multiple recursion, as production rules generally combine multiple parts. This can also be done without mutual recursion, for example by still having separate functions for each production rule, but having them called by a single controller function, or by putting all the grammar in a single function.

Mutual recursion can also implement a finite-state machine, with one function for each state, and single recursion in changing state; this requires tail call optimization if the number of state changes is large or unbounded. This can be used as a simple form of cooperative multitasking. A similar approach to multitasking is to instead use coroutines which call each other, where rather than terminating by calling another routine, one coroutine yields to another but does not terminate, and then resumes execution when it is yielded back to. This allows individual coroutines to hold state, without it needing to be passed by parameters or stored in shared variables.
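The finite-state-machine idea can be sketched with two hypothetical state functions, state_even and state_odd, recognizing strings that contain an even number of '1' characters; each state function consumes one character and “changes state” by calling the other (or itself):

```cpp
#include <cassert>

bool state_odd(const char *s);  // forward declaration, as mutual recursion requires

// State: an even number of '1's seen so far (the accepting state).
bool state_even(const char *s) {
    if (*s == '\0') return true;
    return *s == '1' ? state_odd(s + 1) : state_even(s + 1);
}

// State: an odd number of '1's seen so far.
bool state_odd(const char *s) {
    if (*s == '\0') return false;
    return *s == '1' ? state_even(s + 1) : state_odd(s + 1);
}
```

Each call here is a tail call, so with tail call optimization this runs in constant stack space regardless of input length.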

Page 125: Recursion acglz

14.2. PREVALENCE 111

There are also some algorithms which naturally have two phases, such as minimax (min and max), and these can be implemented by having each phase in a separate function with mutual recursion, though they can also be combined into a single function with direct recursion.

14.1.3 Mathematical functions

In mathematics, the Hofstadter Female and Male sequences are an example of a pair of integer sequences defined in a mutually recursive manner.

Fractals can be computed (up to a given resolution) by recursive functions. This can sometimes be done more elegantly via mutually recursive functions; the Sierpiński curve is a good example.
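The Female and Male sequences can be transcribed directly from their standard definition, F(0) = 1, M(0) = 0, F(n) = n - M(F(n - 1)), M(n) = n - F(M(n - 1)):

```cpp
#include <cassert>

unsigned M(unsigned n);  // forward declaration for the mutual recursion

// Hofstadter Female and Male sequences, each defined via the other.
unsigned F(unsigned n) { return n == 0 ? 1 : n - M(F(n - 1)); }
unsigned M(unsigned n) { return n == 0 ? 0 : n - F(M(n - 1)); }
```

The first few values are F: 1, 1, 2, 2, 3, ... and M: 0, 0, 1, 2, 2, ...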

14.2 Prevalence

Mutual recursion is very common in the functional programming style, and is often used for programs written in LISP, Scheme, ML, and similar languages. In languages such as Prolog, mutual recursion is almost unavoidable. Some programming styles discourage mutual recursion, claiming that it can be confusing to distinguish the conditions which will return an answer from the conditions that would allow the code to run forever without producing an answer. Peter Norvig points to a design pattern which discourages the use entirely, stating:[7]

If you have two mutually-recursive functions that both alter the state of an object, try to move almost all the functionality into just one of the functions. Otherwise you will probably end up duplicating code.

14.3 Terminology

Mutual recursion is also known as indirect recursion, by contrast with direct recursion, where a single function calls itself directly. This is simply a difference of emphasis, not a different notion: “indirect recursion” emphasises an individual function, while “mutual recursion” emphasises the set of functions, and does not single out an individual function. For example, if f calls itself, that is direct recursion. If instead f calls g and then g calls f, which in turn calls g again, then from the point of view of f alone, f is indirectly recursing; from the point of view of g alone, g is indirectly recursing; and from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions.

14.4 Conversion to direct recursion

Mathematically, a set of mutually recursive functions are primitive recursive, which can be proven by course-of-values recursion,[8] building a single function F that lists the values of the individual recursive functions in order:

    F = f1(0), f2(0), f1(1), f2(1), ...

and rewriting the mutual recursion as a primitive recursion.

Any mutual recursion between two procedures can be converted to direct recursion by inlining the code of one procedure into the other.[9] If there is only one site where one procedure calls the other, this is straightforward, though if there are several it can involve code duplication. In terms of the call stack, two mutually recursive procedures yield a stack ABABAB..., and inlining B into A yields the direct recursion (AB)(AB)(AB)...

Alternately, any number of procedures can be merged into a single procedure that takes as argument a variant record (or algebraic data type) representing the selection of a procedure and its arguments; the merged procedure then dispatches on its argument to execute the corresponding code and uses direct recursion to call itself as appropriate. This can be seen as a limited application of defunctionalization.[10] This translation may be useful when any of the mutually recursive procedures can be called by outside code, so there is no obvious case for inlining one procedure into the other. Such code then needs to be modified so that procedure calls are performed by bundling arguments into a variant record as described; alternately, wrapper procedures may be used for this task.
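Applied to the even/odd pair from the examples above, the variant-record translation might look as follows (the tag and function names are hypothetical):

```cpp
#include <cassert>

// A tag selecting which of the two original procedures to run.
enum Which { IS_EVEN, IS_ODD };

// The merged procedure dispatches on the tag; "calling the other
// procedure on n - 1" becomes a direct recursive call with the other tag.
bool parity(Which w, unsigned n) {
    if (n == 0) return w == IS_EVEN;   // base cases of both procedures
    return parity(w == IS_EVEN ? IS_ODD : IS_EVEN, n - 1);
}
```

Outside callers that previously invoked is_even(n) now call parity(IS_EVEN, n), or a thin wrapper that does so.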


14.5 See also

• Recursion (computer science)

• Circular dependency

14.6 References

[1] Manuel Rubio-Sánchez, Jaime Urquiza-Fuentes, Cristóbal Pareja-Flores (2008), “A Gentle Introduction to Mutual Recursion”, Proceedings of the 13th annual conference on Innovation and technology in computer science education, June 30–July 2, 2008, Madrid, Spain.

[2] Harper 2000, "Data Types".

[3] Hutton 2007, 6.5 Mutual recursion, pp. 53–55.

[4] "Mutual Tail-Recursion" and "Tail-Recursive Functions", A Tutorial on Programming Features in ATS, Hongwei Xi, 2010

[5] Harper 2000, "Datatypes".

[6] Harvey & Wright 1999, V. Abstraction: 18. Trees: Mutual Recursion, pp. 310–313.

[7] Solving Every Sudoku Puzzle

[8] "mutual recursion", PlanetMath

[9] On the Conversion of Indirect to Direct Recursion by Owen Kaser, C. R. Ramakrishnan, and Shaunak Pawagi at StateUniversity of New York, Stony Brook (1993)

[10] Reynolds, John (August 1972). “Definitional Interpreters for Higher-Order Programming Languages” (PDF). Proceedings of the ACM Annual Conference. Boston, Massachusetts. pp. 717–740.

• Harper, Robert (2000), Programming in Standard ML

• Harvey, Brian; Wright, Matthew (1999). Simply Scheme: Introducing Computer Science. MIT Press. ISBN 978-0-26208281-5.

• Hutton, Graham (2007). Programming in Haskell. Cambridge University Press. ISBN 978-0-52169269-4.

14.7 External links

• Mutual recursion at Rosetta Code

• "Example demonstrating good use of mutual recursion", "Are there any example of Mutual recursion?", StackOverflow


Chapter 15

Nonrecursive filter

A non-recursive filter only uses input values like x[n − i], unlike a recursive filter, which also uses previous output values. Non-recursive digital filters are often known as Finite Impulse Response (FIR) filters, as a non-recursive digital filter has a finite number of coefficients in the impulse response h[n].

Example: y[n] = 0.5x[n − 1] + 0.5x[n].
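A sketch of this example filter (the function name fir_average is hypothetical); note that each output sample is computed from inputs only, never from earlier outputs:

```cpp
#include <cassert>
#include <vector>

// Two-tap FIR filter y[n] = 0.5*x[n-1] + 0.5*x[n].
// x[-1] is taken to be 0 (zero initial condition).
std::vector<double> fir_average(const std::vector<double> &x) {
    std::vector<double> y(x.size());
    double prev = 0.0;                 // holds x[n-1]
    for (std::size_t n = 0; n < x.size(); ++n) {
        y[n] = 0.5 * prev + 0.5 * x[n];  // depends on inputs only
        prev = x[n];
    }
    return y;
}
```

Because no y term appears on the right-hand side, the impulse response is finite: an input impulse affects only two output samples.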


Chapter 16

this (computer programming)

this, self, and Me are keywords used in some computer programming languages to refer to the object, class, or other entity that the currently running code is part of. The entity referred to by these keywords thus depends on the execution context (such as which object is having its method called). Different programming languages use these keywords in slightly different ways. In languages where a keyword like “this” is mandatory, the keyword is the only way to access data and methods stored in the current object. Where optional, they can disambiguate variables and functions with the same name.

16.1 Object-oriented programming

In many object-oriented programming languages, this (also called self or Me) is a variable that is used in instance methods to refer to the object on which they are working. C++ and languages which derive in style from it (such as Java, C#, D, and PHP) generally use this. Smalltalk and others, such as Object Pascal, Perl, Python, Ruby, Objective-C, and Swift, use self. Microsoft’s Visual Basic uses Me.

The concept is similar in all languages: this is usually an immutable reference or pointer which refers to the current object; the current object often being the code that acts as 'parent' to the property, method, sub-routine or function that contains the this keyword. After an object is properly constructed, or instantiated, this is always a valid reference. Some languages require it explicitly; others use lexical scoping to use it implicitly to make symbols within their class visible. Alternatively, the current object referred to by this may be an independent code object that has called the function or method containing the keyword this. Such a thing happens, for example, when a JavaScript event handler attached to an HTML tag in a web page calls a function containing the keyword this stored in the global space outside the document object; in that context, this will refer to the page element within the document object, not the enclosing window object.[1]

In some languages, for example C++ and Java, this or self is a keyword, and the variable automatically exists in instance methods. In others, for example Python and Perl 5, the first parameter of an instance method is such a reference. It needs to be specified explicitly. In that case, the parameter need not necessarily be named this or self; it can be named freely by the programmer like any other parameter. However, by informal convention, the first parameter of an instance method in Perl or Python is named self.

Static methods in C++ or Java are not associated with instances but classes, and so cannot use this, because there is no object. In other languages, such as Python, Ruby, Smalltalk, Objective-C, or Swift, the method is associated with a class object that is passed as this, and they are called class methods.

16.2 Subtleties and difficulties

When lexical scoping is used to infer this, the use of this in code, while not illegal, may raise warning bells to a maintenance programmer, although there are still legitimate uses of this in this case, such as referring to instance variables hidden by local variables of the same name, or if the method wants to return a reference to the current object, i.e. this, itself.


In some compilers (for example GCC), pointers to C++ instance methods can be directly cast to a pointer of another type, with an explicit this pointer parameter.[2]

16.2.1 Open recursion

The dispatch semantics of this, namely that method calls on this are dynamically dispatched, is known as open recursion, and means that these methods can be overridden by derived classes or objects. By contrast, direct named recursion or anonymous recursion of a function uses closed recursion, with early binding. For example, in the following Perl code for the factorial, the token __SUB__ is a reference to the current function:

    use feature ":5.16";
    sub {
        my $x = shift;
        $x == 0 ? 1 : $x * __SUB__->( $x - 1 );
    }

By contrast, in C++ (using an explicit this for clarity) the this binds to the object itself, but the method is resolved via dynamic dispatch (late binding):
unsigned int factorial(unsigned int n)
{
    if (n == 0)
        return 1;
    else
        return n * this->factorial(n - 1);
}

This example is artificial, since this is direct recursion, so overriding the factorial method would override this function; more natural examples are when a method in a derived class calls the same method in a base class, or in cases of mutual recursion.[3][4]
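The more natural case, a base-class method calling an overridable method through this, can be sketched in Python (an illustrative example; the class names are invented for this sketch):

```python
class Base:
    def describe(self):
        # The call through self is late-bound (open recursion): an
        # override of name() in a subclass is picked up here too.
        return "I am " + self.name()

    def name(self):
        return "Base"

class Derived(Base):
    def name(self):
        return "Derived"

print(Base().describe())     # I am Base
print(Derived().describe())  # I am Derived -- describe() lives in Base,
                             # yet dispatches to the overridden name()
```

If the call were closed (early-bound), Derived().describe() would still print "I am Base".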

The fragile base class problem has been blamed on open recursion, with the suggestion that invoking methods on this default to closed recursion (static dispatch, early binding) rather than open recursion (dynamic dispatch, late binding), only using open recursion when it is specifically requested; external calls (not using this) would be dynamically dispatched as usual.[5][6] The way this is solved in practice in the JDK is through a certain programmer discipline; this discipline has been formalized by C. Ruby and G. T. Leavens; it basically consists of the following rules:[7]

• No code invokes public methods on this.

• Code that can be reused internally (by invocation from other methods of the same class) is encapsulated in a protected or private method; if it needs to be exposed directly to the users as well, then a wrapper public method calls the internal method.

• The previous recommendation can be relaxed for pure methods.

16.3 Implementations

16.3.1 C++

For more details on this topic, see C++ classes.

Early versions of C++ would let the this pointer be changed; by doing so a programmer could change which object a method was working on. This feature was eventually removed, and now this in C++ is an r-value.[8]

Early versions of C++ did not include references, and it has been suggested that had references been in C++ from the beginning, this would have been a reference, not a pointer.[9]

C++ lets objects destroy themselves with the source code statement: delete this.

16.3.2 Java

For more details on this topic, see Java (programming language).

The keyword this is a Java language keyword that represents the current instance of the class in which it appears. It is used to access class variables and methods.
Since all instance methods are virtual in Java, this can never be null.[10]


116 CHAPTER 16. THIS (COMPUTER PROGRAMMING)

16.3.3 C#

For more details on this topic, see C Sharp (programming language).

The keyword this in C# works the same way as in Java, for reference types. However, within C# value types, this has quite different semantics, being similar to an ordinary mutable variable reference, and can even occur on the left side of an assignment.
One use of this in C# is to allow reference to an outer field variable within a method that contains a local variable that has the same name. In such a situation, for example, the statement var n = localAndFieldname; within the method will assign the type and value of the local variable localAndFieldname to n, whereas the statement var n = this.localAndFieldname; will assign the type and value of the outer field variable to n.[11]
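Python has a rough analogue of this disambiguation (a loose analogy only, not C# semantics): inside a method, a bare name resolves to the local variable, while qualifying it with self reaches the same-named instance attribute. The names below are invented for illustration:

```python
class Example:
    def __init__(self):
        self.value = "field"

    def demo(self):
        value = "local"
        # The bare name resolves to the local variable;
        # self.value resolves to the instance attribute.
        return (value, self.value)

print(Example().demo())  # ('local', 'field')
```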

16.3.4 D

In D, this in a class, struct or union method refers to an immutable reference of the instance of the enclosing aggregate. Classes are reference types, structs and unions are value types. In the first version of D, the keyword this is used as a pointer to the instance of the object the method is bound to, while in D2 it has the character of an implicit ref function argument.

16.3.5 Dylan

In the programming language Dylan, which is an object-oriented language that supports multimethods and doesn't have a concept of this, sending a message to an object is still kept in the syntax. The two forms below work in the same way; the differences are just syntactic sugar.
object.method(param1, param2)
and
method (object, param1, param2)

16.3.6 Eiffel

Within a class text, the current type is the type obtained from the current class. Within features (routines, commands and queries) of a class, one may use the keyword Current to reference the current class and its features. The use of the keyword Current is optional, as the keyword Current is implied by simply referring to the name of the current class feature openly. For example: One might have a feature `foo' in a class MY_CLASS and refer to it by:
1 class
2 MY_CLASS
3
4 feature -- Access
5
6 foo: INTEGER
7
8 my_function: INTEGER
9 do
10 Result := foo
11 end
12
13 end

[12]

Line #10 (above) has the implied reference to Current by the call to simple `foo'.
Line #10 (below) has the explicit reference to Current by the call to `Current.foo'.
1 class
2 MY_CLASS
3
4 feature -- Access
5
6 foo: INTEGER
7
8 my_function: INTEGER
9 do
10 Result := Current.foo
11 end
12
13 end

Either approach is acceptable to the compiler, but the implied version (e.g. x := foo) is preferred as it is less verbose.
As with other languages, there are times when the use of the keyword Current is mandated, such as:
1 class
2 MY_CLASS
3
4 feature -- Access
5
6 my_command
7 -- Create MY_OTHER_CLASS with `Current'
8 local
9 x: MY_OTHER_CLASS
10 do
11 create x.make_with_something (Current)
12 end
13
14 end

In the case of the code above, the call on line #11 to make_with_something is passing the current object by explicitly passing the keyword Current.


16.3.7 JavaScript

For more details on this topic, see JavaScript.

In JavaScript, which is a programming or scripting language used extensively in web browsers, this is an important keyword, although what it evaluates to depends on where it is used.

• When used outside any function, in global space, this refers to the enclosing object, which in this case is the enclosing browser window, the window object.

• When used in a function defined in the global space, what the keyword this refers to depends on how the function is called. When such a function is called directly (e.g. f(x)), this will refer back to the global space in which the function is defined, and in which other global functions and variables may exist as well (or in strict mode, it is undefined). If a global function containing this is called as part of the event handler of an element in the document object, however, this will refer to the calling HTML element.

• When a function is attached as a property of an object and called as a method of that object (e.g. obj.f(x)), this will refer to the object that the function is contained within.[13][14] It is even possible to manually specify this when calling a function, by using the .call() or .apply() methods of the function object.[15] For example, the method call obj.f(x) could also be written as obj.f.call(obj, x).

To work around the different meaning of this in nested functions such as DOM event handlers, it is a common idiom in JavaScript to save the this reference of the calling object in a variable (commonly called that or self), and then use the variable to refer to the calling object in nested functions. For example (using jQuery):
$(".element").hover(function() {
    // Here, both this and that point to the element under the mouse cursor.
    var that = this;
    $(this).find('.elements').each(function() {
        // Here, this points to the DOM element being iterated.
        // However, that still points to the element under the mouse cursor.
        $(this).addClass("highlight");
    });
});

16.3.8 Python

In Python, there is no keyword for this. When a member function is called on an object, it invokes the member function with the same name on the object's class object, with the object automatically bound to the first argument of the function. Thus, the obligatory first parameter of instance methods serves as this; this parameter is conventionally named self, but can be named anything.
In class methods (created with the classmethod decorator), the first argument refers to the class object itself, and is conventionally called cls; these are primarily used for inheritable constructors,[16] where the use of the class as a parameter allows subclassing the constructor. In static methods (created with the staticmethod decorator), no special first argument exists.
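The three kinds of methods can be sketched side by side (the class and its names are invented for illustration):

```python
class Shape:
    default_sides = 4

    def __init__(self, sides):
        self.sides = sides          # self is the instance

    def describe(self):             # instance method: receives self
        return f"shape with {self.sides} sides"

    @classmethod
    def default(cls):               # class method: receives the class as cls;
        # usable as an inheritable alternative constructor, since a
        # subclass calling default() constructs the subclass.
        return cls(cls.default_sides)

    @staticmethod
    def is_polygon(n):              # static method: no special first argument
        return n >= 3

print(Shape.default().describe())  # shape with 4 sides
print(Shape.is_polygon(2))         # False
```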

16.3.9 Self

The Self language is named after this use of “self”.

16.3.10 Xbase++

Self is strictly used within methods of a class. Another way to refer to Self is to use ::.

16.4 See also

• Anonymous recursion

• Self-reference


• Inheritance (object-oriented programming)

16.5 References

[1] Powell, Thomas A, and Schneider, Fritz, 2012. JavaScript: The Complete Reference, Third Edition. McGraw-Hill. Chapter 11, Event Handling, p. 428. ISBN 978-0-07-174120-0

[2] Using the GNU Compiler Collection (GCC) – Bound member functions

[3] "Closed and Open Recursion", Ralf Hinze, July 2007

[4] Open Recursion, Lambda the Ultimate

[5] "Selective Open Recursion: A Solution to the Fragile Base Class Problem", Jonathan Aldrich

[6] "Selective Open Recursion: A Solution to the Fragile Base Class Problem", Lambda the Ultimate

[7] Aldrich, Jonathan, and Kevin Donnelly. "Selective open recursion: Modular reasoning about components and inheritance." SAVCBS 2004 Specification and Verification of Component-Based Systems (2004): 26. Citing, for the JDK-adopted solution, C. Ruby and G. T. Leavens. "Safely Creating Correct Subclasses without Seeing Superclass Code". In Object-Oriented Programming Systems, Languages, and Applications, October 2000. doi:10.1145/353171.353186. Also available as technical report TR #00-05d.

[8] ISO/IEC 14882:2003(E): Programming Languages - C++. ISO/IEC. 2003.

[9] Stroustrup: C++ Style and Technique FAQ

[10] Barnes, D. and Kölling, M. Objects First with Java. "...the reason for using this construct [this] is that we have a situation that is known as name overloading - the same name being used for two different entities... It is important to understand that the fields and the parameters are separate variables that exist independently of each other, even though they share similar names. A parameter and a field sharing a name is not really a problem in Java."

[11] De Smet, Bart, 2011. C# 4.0 Unleashed. Sams Publishing, Indianapolis, USA. Chapter 4, Language Essentials, p 210.ISBN 978-0-672-33079-7

[12] NOTE: The line numbers are for reference purposes only. Eiffel does not have line numbers in the class text. However, there is a line number option in the Eiffel Studio IDE, which can be optionally turned on for reference purposes (e.g. pair programming, etc).

[13] Crockford, Douglas, 2008. JavaScript: The Good Parts. O'Reilly Media Inc. and Yahoo! Inc. Chapter 4, Functions, p 28.ISBN 978-0-596-51774-8

[14] Powell, Thomas A, and Schneider, Fritz, 2012. JavaScript: The Complete Reference, Third Edition. McGraw-Hill. Chapter 5, Functions, pp 170–1. ISBN 978-0-07-174120-0

[15] Goodman, Danny, with Morrison, Michael, 2004. Javascript Bible, 5th Edition. Wiley Publishing, Inc., Indianapolis, USA.Chapter 33, Functions and Custom Objects, p 987. ISBN 0-7645-5743-2

[16] Unifying types and classes in Python 2.2, Guido van Rossum, "Overriding the __new__ method"

16.6 Further reading

• Meyers, Scott, 1995. More Effective C++: 35 New Ways to Improve Your Programs and Designs. ISBN 0-201-63371-X

• Stroustrup, Bjarne, 1994. The Design and Evolution of C++. Addison-Wesley Pub. Co. ISBN 0-201-54330-3

16.7 External links

• Java this
• *this in C++
• Java This Keyword


Chapter 17

Polymorphic recursion

In computer science, polymorphic recursion (also referred to as Milner–Mycroft typability or the Milner–Mycroft calculus) refers to a recursive parametrically polymorphic function where the type parameter changes with each recursive invocation made instead of staying constant. Type inference for polymorphic recursion is equivalent to semi-unification and is therefore undecidable, requiring the use of a semi-algorithm or programmer-supplied type annotations.[1]

17.1 Example

17.1.1 Nested datatypes

Consider the following nested datatype:
data Nested a = a :<: (Nested [a]) | Epsilon
infixr 5 :<:
nested = 1 :<: [2,3,4] :<: [[4,5],[7],[8,9]] :<: Epsilon

A length function defined over this datatype will be polymorphically recursive, as the type of the argument changes from Nested a to Nested [a] in the recursive call:
length :: Nested a -> Int
length Epsilon = 0
length (_ :<: xs) = 1 + length xs

Haskell normally infers the type signature for a function as simple-looking as this, but here it cannot be omitted without triggering a type error.

17.1.2 Higher-ranked types

See also: Higher-ranked type

17.2 Applications

17.2.1 Program analysis

In type-based program analysis polymorphic recursion is often essential in gaining high precision of the analysis. Notable examples of systems employing polymorphic recursion include Dussart, Henglein and Mossin's binding-time analysis[2] and the Tofte–Talpin region-based memory management system.[3] As these systems assume the expressions have already been typed in an underlying type system (not necessarily employing polymorphic recursion), inference can be made decidable again.


17.2.2 Data structures, error detection, graph solutions

Functional programming data structures often use polymorphic recursion to simplify type error checks and solve problems with nasty "middle" temporary solutions that devour memory in more traditional data structures such as trees. In the two citations that follow, Okasaki (pp. 144-146) gives a CONS example in Haskell wherein the polymorphic type system automatically flags programmer errors.[4] The recursive aspect is that the type definition assures that the outermost constructor has a single element, the second a pair, the third a pair of pairs, etc. recursively, setting an automatic error finding pattern in the data type. Roberts (p. 171) gives a related example in Java, using a Class to represent a stack frame. The example given is a solution to the Tower of Hanoi problem wherein a stack simulates polymorphic recursion with a beginning, temporary and ending nested stack substitution structure.[5]

17.3 See also

• Higher-ranked polymorphism

17.4 Notes

[1] Henglein 1993.

[2] Dussart, Dirk; Henglein, Fritz; Mossin, Christian. "Polymorphic Recursion and Subtype Qualifications: Polymorphic Binding-Time Analysis in Polynomial Time". Proceedings of the 2nd International Static Analysis Symposium (SAS) (Springer-Verlag).

[3] Tofte, Mads; Talpin, Jean-Pierre (1994). “Implementation of the Typed Call-by-Value λ-calculus using a Stack of Regions”.POPL '94: Proceedings of the 21st ACM SIGPLAN-SIGACT symposium on Principles of programming languages. New York,NY, USA: ACM. pp. 188–201. doi:10.1145/174675.177855. ISBN 0-89791-636-0.

[4] Chris Okasaki (1999). Purely Functional Data Structures. New York: Cambridge. p. 144. ISBN 978-0521663502.

[5] Eric Roberts (2006). Thinking Recursively with Java. New York: Wiley. p. 171. ISBN 978-0471701460.

17.5 Further reading

• Meertens, Lambert (1983). "Incremental polymorphic type checking in B" (PDF). ACM Symposium on Principles of Programming Languages (POPL), Austin, Texas.

• Mycroft, Alan (1984). "Polymorphic type schemes and recursive definitions". International Symposium on Programming, Toulouse, France. Lecture Notes in Computer Science 167: 217–228. doi:10.1007/3-540-12925-1_41.

• Henglein, Fritz (1993). "Type inference with polymorphic recursion". ACM Transactions on Programming Languages and Systems 15 (2). doi:10.1145/169701.169692.

• Kfoury, A. J.; Tiuryn, J.; Urzyczyn, P. (April 1993). "Type reconstruction in the presence of polymorphic recursion". ACM Transactions on Programming Languages and Systems (New York, NY, USA: ACM) 15 (2): 290–311. doi:10.1145/169701.169687. ISSN 0164-0925.

• Michael I. Schwartzbach (June 1995). “Polymorphic type inference”. Technical Report BRICS-LS-95-3 (BRICS).

• Emms, Martin; Leiß, Hans (1996). "Extending the type checker for SML by polymorphic recursion—A correctness proof". Technical Report 96-101 (Centrum für Informations- und Sprachverarbeitung, Universität München).

• Richard Bird and Lambert Meertens (1998). “Nested Datatypes”.

• C. Vasconcellos, L. Figueiredo, C. Camarao (2003). "Practical Type Inference for Polymorphic Recursion: an Implementation in Haskell". Journal of Universal Computer Science.

• L. Figueiredo, C. Camarao. “Type Inference for Polymorphic Recursive Definitions: a Specification in Haskell”.


• Hallett, J. J; Kfoury, A. J. (July 2005). "Programming Examples Needing Polymorphic Recursion". Electronic Notes in Theoretical Computer Science (Amsterdam, The Netherlands: Elsevier Science Publishers B. V.) 136: 57–102. doi:10.1016/j.entcs.2005.06.014. ISSN 1571-0661.

17.6 External links

• Standard ML with polymorphic recursion by Hans Leiß, University of Munich


Chapter 18

Primitive recursive function

In computability theory, primitive recursive functions are a class of functions that are defined using primitive recursion and composition as central operations and are a strict subset of the total µ-recursive functions (µ-recursive functions are also called partial recursive). Primitive recursive functions form an important building block on the way to a full formalization of computability. These functions are also important in proof theory.
Most of the functions normally studied in number theory are primitive recursive. For example: addition, division, factorial, exponential and the nth prime are all primitive recursive. So are many approximations to real-valued functions.[1] In fact, it is difficult to devise a total recursive function that is not primitive recursive, although some are known (see the section on Limitations below). The set of primitive recursive functions is known as PR in computational complexity theory.

18.1 Definition

The primitive recursive functions are among the number-theoretic functions, which are functions from the natural numbers (nonnegative integers) {0, 1, 2, ...} to the natural numbers. These functions take n arguments for some natural number n and are called n-ary.
The basic primitive recursive functions are given by these axioms:

1. Constant function: The 0-ary constant function 0 is primitive recursive.

2. Successor function: The 1-ary successor function S, which returns the successor of its argument (see Peano postulates), is primitive recursive. That is, S(k) = k + 1.

3. Projection function: For every n ≥ 1 and each i with 1 ≤ i ≤ n, the n-ary projection function Pᵢⁿ, which returns its i-th argument, is primitive recursive.

More complex primitive recursive functions can be obtained by applying the operations given by these axioms:

1. Composition: Given f, a k-ary primitive recursive function, and k m-ary primitive recursive functions g₁, ..., gₖ, the composition of f with g₁, ..., gₖ, i.e. the m-ary function h(x₁, ..., xₘ) = f(g₁(x₁, ..., xₘ), ..., gₖ(x₁, ..., xₘ)), is primitive recursive.

2. Primitive recursion: Given f, a k-ary primitive recursive function, and g, a (k+2)-ary primitive recursive function, the (k+1)-ary function h is defined as the primitive recursion of f and g, i.e. the function h is primitive recursive when

h(0, x1, . . . , xk) = f(x1, . . . , xk)

h(S(y), x1, . . . , xk) = g(y, h(y, x1, . . . , xk), x1, . . . , xk) .

The primitive recursive functions are the basic functions and those obtained from the basic functions by applying these operations a finite number of times.
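The two formation rules can be transcribed directly into Python as an illustrative encoding (the helper names compose, primitive_recursion, S and P are inventions of this sketch, not standard notation):

```python
def compose(f, *gs):
    # h(x1..xm) = f(g1(x1..xm), ..., gk(x1..xm))
    return lambda *xs: f(*(g(*xs) for g in gs))

def primitive_recursion(f, g):
    # h(0, x...)    = f(x...)
    # h(S(y), x...) = g(y, h(y, x...), x...)
    def h(y, *xs):
        acc = f(*xs)
        for i in range(y):
            acc = g(i, acc, *xs)
        return acc
    return h

S = lambda k: k + 1                     # successor
P = lambda n, i: lambda *xs: xs[i - 1]  # projection P_i^n

# add(0, x) = P_1^1(x);  add(S(n), x) = S(P_2^3(n, add(n, x), x))
add = primitive_recursion(P(1, 1), compose(S, P(3, 2)))
print(add(3, 4))  # 7
```

This mirrors the addition example worked out formally in the Examples section below.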


18.1.1 Role of the projection functions

The projection functions can be used to avoid the apparent rigidity in terms of the arity of the functions above; by using compositions with various projection functions, it is possible to pass a subset of the arguments of one function to another function. For example, if g and h are 2-ary primitive recursive functions then

f(a, b, c) = g(h(c, a), h(a, b))

is also primitive recursive. One formal definition using projection functions is

f(a, b, c) = g(h(P₃³(a, b, c), P₁³(a, b, c)), h(P₁³(a, b, c), P₂³(a, b, c))).

18.1.2 Converting predicates to numeric functions

In some settings it is natural to consider primitive recursive functions that take as inputs tuples that mix numbers with truth values { t = true, f = false }, or that produce truth values as outputs (see Kleene [1952 pp. 226–227]). This can be accomplished by identifying the truth values with numbers in any fixed manner. For example, it is common to identify the truth value t with the number 1 and the truth value f with the number 0. Once this identification has been made, the characteristic function of a set A, which literally returns 1 or 0, can be viewed as a predicate that tells whether a number is in the set A. Such an identification of predicates with numeric functions will be assumed for the remainder of this article.

18.1.3 Computer language definition

An example of a primitive recursive programming language is one that contains basic arithmetic operators (e.g. + and −, or ADD and SUBTRACT), conditionals and comparison (IF-THEN, EQUALS, LESS-THAN), and bounded loops, such as the basic for loop, where there is a known or calculable upper bound to all loops (FOR i FROM 1 TO n, with neither i nor n modifiable by the loop body). No control structures of greater generality, such as while loops or IF-THEN plus GOTO, are admitted in a primitive recursive language. Douglas Hofstadter's BlooP in Gödel, Escher, Bach is one such. Adding unbounded loops (WHILE, GOTO) makes the language partially recursive, or Turing-complete; FlooP is such, as are almost all real-world computer languages.
Arbitrary computer programs, or Turing machines, cannot in general be analyzed to see if they halt or not (the halting problem). However, all primitive recursive functions halt. This is not a contradiction; primitive recursive programs are a non-arbitrary subset of all possible programs, constructed specifically to be analyzable.
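The bounded-loop restriction can be illustrated in Python (Python itself is of course Turing-complete; the point is only the shape of the control flow):

```python
def factorial(n):
    # BlooP-style bounded loop: the trip count n is fixed on entry,
    # and neither i nor n is modified inside the body, so the loop
    # is guaranteed to terminate.
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

print(factorial(5))  # 120
```

A while loop with a data-dependent exit condition would take the sketch outside the primitive recursive fragment.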

18.2 Examples

Most number-theoretic functions definable using recursion on a single variable are primitive recursive. Basic examples include the addition and "limited subtraction" functions.

18.2.1 Addition

Intuitively, addition can be recursively defined with the rules:

add(0, x) = x,
add(n+1, x) = add(n, x) + 1.

To fit this into a strict primitive recursive definition, define:

add(0, x) = P₁¹(x),
add(S(n), x) = S(P₂³(n, add(n, x), x)).


Here S(n) is "the successor of n" (i.e., n+1), P₁¹ is the identity function, and P₂³ is the projection function that takes 3 arguments and returns the second one. The functions f and g required by the above definition of the primitive recursion operation are respectively played by P₁¹ and the composition of S and P₂³.

18.2.2 Subtraction

Because primitive recursive functions use natural numbers rather than integers, and the natural numbers are not closed under subtraction, a limited subtraction function (also called "proper subtraction") is studied in this context. This limited subtraction function sub(a, b) [or b ∸ a] returns b − a if this is nonnegative and returns 0 otherwise.
The predecessor function acts as the opposite of the successor function and is recursively defined by the rules:

pred(0) = 0,
pred(n+1) = n.

These rules can be converted into a more formal definition by primitive recursion:

pred(0) = 0,
pred(S(n)) = P₁²(n, pred(n)).

The limited subtraction function is definable from the predecessor function in a manner analogous to the way addition is defined from successor:

sub(0, x) = P₁¹(x),

sub(S(n), x) = pred(P₂³(n, sub(n, x), x)).

Here sub(a, b) corresponds to b ∸ a; for the sake of simplicity, the order of the arguments has been switched from the "standard" definition to fit the requirements of primitive recursion. This could easily be rectified using composition with suitable projections.
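The predecessor and limited-subtraction definitions above translate almost line for line into Python (an illustrative transcription; sub(a, b) computes b ∸ a, matching the argument order in the text):

```python
def pred(n):
    # pred(0) = 0; pred(S(n)) = P_1^2(n, pred(n)) = n
    p = 0
    for i in range(n):   # primitive recursion on n
        p = i
    return p

def sub(a, b):
    # sub(0, x) = x;  sub(S(n), x) = pred(sub(n, x))
    r = b
    for _ in range(a):   # primitive recursion on the first argument
        r = pred(r)
    return r

print(sub(3, 10))  # 7
print(sub(10, 3))  # 0 -- proper subtraction bottoms out at zero
```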

18.2.3 Other operations on natural numbers

Exponentiation and primality testing are primitive recursive. Given primitive recursive functions e, f, g, and h, a function that returns the value of g when e ≤ f and the value of h otherwise is primitive recursive.

18.2.4 Operations on integers and rational numbers

By using Gödel numberings, the primitive recursive functions can be extended to operate on other objects such as integers and rational numbers. If integers are encoded by Gödel numbers in a standard way, the arithmetic operations including addition, subtraction, and multiplication are all primitive recursive. Similarly, if the rationals are represented by Gödel numbers then the field operations are all primitive recursive.

18.3 Relationship to recursive functions

The broader class of partial recursive functions is defined by introducing an unbounded search operator. The use of this operator may result in a partial function, that is, a relation with at most one value for each argument, but one that does not necessarily have a value for every argument (see domain). An equivalent definition states that a partial recursive function is one that can be computed by a Turing machine. A total recursive function is a partial recursive function that is defined for every input.
Every primitive recursive function is total recursive, but not all total recursive functions are primitive recursive. The Ackermann function A(m,n) is a well-known example of a total recursive function that is not primitive recursive. There is a characterization of the primitive recursive functions as a subset of the total recursive functions using the Ackermann function. This characterization states that a function is primitive recursive if and only if there is a natural number m such that the function can be computed by a Turing machine that always halts within A(m,n) or fewer steps, where n is the sum of the arguments of the primitive recursive function.[2]

An important property of the primitive recursive functions is that they are a recursively enumerable subset of the set of all total recursive functions (which is not itself recursively enumerable). This means that there is a single computable function f(e,n) such that:

• For every primitive recursive function g, there is an e such that g(n) = f(e,n) for all n, and

• For every e, the function h(n) = f(e,n) is primitive recursive.

However, the primitive recursive functions are not the largest recursively enumerable set of total computable functions.

18.4 Limitations

Primitive recursive functions tend to correspond very closely with our intuition of what a computable function must be. Certainly the initial functions are intuitively computable (in their very simplicity), and the two operations by which one can create new primitive recursive functions are also very straightforward. However, the set of primitive recursive functions does not include every possible total computable function — this can be seen with a variant of Cantor's diagonal argument. This argument provides a total computable function that is not primitive recursive. A sketch of the proof is as follows:

The primitive recursive functions of one argument (i.e., unary functions) can be computably enumerated. This enumeration uses the definitions of the primitive recursive functions (which are essentially just expressions with the composition and primitive recursion operations as operators and the basic primitive recursive functions as atoms), and can be assumed to contain every definition once, even though the same function will occur many times on the list (since many definitions define the same function; indeed, simply composing by the identity function generates infinitely many definitions of any one primitive recursive function). This means that the n-th definition of a primitive recursive function in this enumeration can be effectively determined from n. Indeed, if one uses some Gödel numbering to encode definitions as numbers, then this n-th definition in the list is computed by a primitive recursive function of n. Let fₙ denote the unary primitive recursive function given by this definition.

Now define the "evaluator function" ev with two arguments, by ev(i,j) = fᵢ(j). Clearly ev is total and computable, since one can effectively determine the definition of fᵢ, and being a primitive recursive function fᵢ is itself total and computable, so fᵢ(j) is always defined and effectively computable. However a diagonal argument will show that the function ev of two arguments is not primitive recursive.

Suppose ev were primitive recursive. Then the unary function g defined by g(i) = S(ev(i,i)) would also be primitive recursive, as it is defined by composition from the successor function and ev. But then g occurs in the enumeration, so there is some number n such that g = fₙ. But now g(n) = S(ev(n,n)) = S(fₙ(n)) = S(g(n)) gives a contradiction.

This argument can be applied to any class of computable (total) functions that can be enumerated in this way, as explained in the article Machines that always halt. Note however that the partial computable functions (those that need not be defined for all arguments) can be explicitly enumerated, for instance by enumerating Turing machine encodings.
Other examples of total recursive but not primitive recursive functions are known:

• The function that takes m to Ackermann(m,m) is a unary total recursive function that is not primitive recursive.

• The Paris–Harrington theorem involves a total recursive function that is not primitive recursive. Because this function is motivated by Ramsey theory, it is sometimes considered more "natural" than the Ackermann function.

• The Sudan function

• The Goodstein function


18.5 Some common primitive recursive functions

The following examples and definitions are from Kleene (1952) pp. 223-231 — many appear with proofs. Most also appear with similar names, either as proofs or as examples, in Boolos-Burgess-Jeffrey 2002 pp. 63-70; they add #22 the logarithm lo(x, y) or lg(x, y) depending on the exact derivation.

In the following we observe that primitive recursive functions can be of four types:

1. functions for short: “number-theoretic functions” from { 0, 1, 2, ...} to { 0, 1, 2, ...},

2. predicates: from { 0, 1, 2, ...} to truth values { t =true, f =false },

3. propositional connectives: from truth values { t, f } to truth values { t, f },

4. representing functions: from truth values { t, f } to numbers { 0, 1, 2, ... }. Many times a predicate requires a representing function to convert the predicate's output { t, f } to { 0, 1 } (note the order "t" to "0" and "f" to "1" matches with ~sg( ) defined below). By definition, a function φ(x) is a "representing function" of the predicate P(x) if φ takes only values 0 and 1 and produces 0 when P is true.

In the following, the mark " ' ", e.g. a', is the primitive mark meaning "the successor of", usually thought of as "+1", e.g. a + 1 =def a'. The functions 16-20 and #G are of particular interest with respect to converting primitive recursive predicates to, and extracting them from, their "arithmetical" form expressed as Gödel numbers.

1. Addition: a+b2. Multiplication: a×b3. Exponentiation: ab

4. Factorial a! : 0! = 1, a'! = a!×a'5. pred(a): (Predecessor or decrement): If a > 0 then a-1 else 06. Proper subtraction a ∸ b: If a ≥ b then a-b else 07. Minimum(a1, ... a )8. Maximum(a1, ... a )9. Absolute difference: | a-b | = ₑ (a ∸ b) + (b ∸ a)

10. ~sg(a): NOT[signum(a)]: If a=0 then 1 else 011. sg(a): signum(a): If a=0 then 0 else 112. a | b: (a divides b): If b=k×a for some k then 0 else 113. Remainder(a, b): the leftover if b does not divide a “evenly”. Also called MOD(a, b)14. a = b: sg | a - b | (Kleene’s convention was to represent true by 0 and false by 1; presently, especially

in computers, the most common convention is the reverse, namely to represent true by 1 and falseby 0, which amounts to changing sg into ~sg here and in the next item)

15. a < b: sg( a' ∸ b )
16. Pr(a): a is a prime number: Pr(a) =def a > 1 & NOT(Exists c)_{1<c<a} [ c | a ]
17. pᵢ: the i+1-st prime number
18. (a)ᵢ: exponent of pᵢ in a: the unique x such that pᵢ^x | a & NOT(pᵢ^(x') | a)
19. lh(a): the “length” or number of non-vanishing exponents in a
20. lo(a, b): logarithm of a to the base b
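Several of the listed functions are short enough to state directly in code. The following sketch (ours, not Kleene's) renders items 5, 6, 10, 11, 14 and 15 in Python, keeping Kleene's convention that a predicate's representing function returns 0 for true and 1 for false:

```python
def pred(a):
    """#5: predecessor; pred(0) is defined to be 0."""
    return a - 1 if a > 0 else 0

def monus(a, b):
    """#6: proper subtraction a ∸ b (never goes below 0)."""
    return a - b if a >= b else 0

def not_sg(a):
    """#10: ~sg(a): 1 if a == 0, else 0."""
    return 1 if a == 0 else 0

def sg(a):
    """#11: sg(a): 0 if a == 0, else 1."""
    return 0 if a == 0 else 1

def eq(a, b):
    """#14: a = b, as sg(|a − b|); 0 means true in Kleene's convention."""
    return sg(abs(a - b))

def lt(a, b):
    """#15: a < b, as sg(a' ∸ b); again 0 means true."""
    return sg(monus(a + 1, b))
```

For example, lt(2, 5) is 0 (true) while lt(3, 3) is 1 (false), matching the convention noted at item 14.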

In the following, the abbreviation x =def x₁, ..., xₙ; subscripts may be applied if the meaning requires.

• #A: A function φ definable explicitly from functions Ψ and constants q₁, ..., qₙ is primitive recursive in Ψ.

• #B: The finite sum Σ_{y<z} ψ(x, y) and product Π_{y<z} ψ(x, y) are primitive recursive in ψ.


• #C: A predicate P obtained by substituting functions χ₁, ..., χₘ for the respective variables of a predicate Q is primitive recursive in χ₁, ..., χₘ, Q.

• #D: The following predicates are primitive recursive in Q and R:

• NOT_Q(x)
• Q OR R: Q(x) ∨ R(x)
• Q AND R: Q(x) & R(x)
• Q IMPLIES R: Q(x) → R(x)
• Q is equivalent to R: Q(x) ≡ R(x)

• #E: The following predicates are primitive recursive in the predicate R:

• (Ey)_{y<z} R(x, y), where (Ey)_{y<z} denotes “there exists at least one y that is less than z such that”

• (y)_{y<z} R(x, y), where (y)_{y<z} denotes “for all y less than z it is true that”
• μy_{y<z} R(x, y). The operator μy_{y<z} R(x, y) is a bounded form of the so-called minimization- or mu-operator, defined as “the least value of y less than z such that R(x, y) is true; or z if there is no such value.”
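The bounded mu-operator is straightforward to express in ordinary code. A minimal sketch (the name bounded_mu is ours):

```python
def bounded_mu(R, x, z):
    """Return the least y < z such that R(x, y) is true, or z if there is none."""
    for y in range(z):
        if R(x, y):
            return y
    return z
```

For example, bounded_mu(lambda x, y: x * y >= 7, 3, 10) returns 3, the least y with 3y ≥ 7, while an unsatisfiable predicate simply returns the bound z itself.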

• #F: Definition by cases: The function defined thus, where Q₁, ..., Qₘ are mutually exclusive predicates (or “φ(x) shall have the value given by the first clause that applies”), is primitive recursive in φ₁, ..., φₘ₊₁, Q₁, ..., Qₘ:

φ(x) =
• φ₁(x) if Q₁(x) is true,
• . . . . . . . . . . . . . . . . . . .
• φₘ(x) if Qₘ(x) is true,
• φₘ₊₁(x) otherwise

• #G: If φ satisfies the equation:

φ(y, x₂, ..., xₙ) = χ(y, φ̄(y; x₂, ..., xₙ), x₂, ..., xₙ), then φ is primitive recursive in χ. “So, in a sense the knowledge of the value φ̄(y; x₂, ..., xₙ) of the course-of-values function is equivalent to the knowledge of the sequence of values φ(0; x₂, ..., xₙ), ..., φ(y−1; x₂, ..., xₙ) of the original function.”
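The idea behind #G can be sketched in code: the course-of-values function hands χ the entire history of earlier values. In this illustration (names ours) the history is a plain Python tuple rather than a Gödel number:

```python
def course_of_values(chi, y):
    """Compute φ(y), where φ(i) = chi(i, (φ(0), ..., φ(i−1)))."""
    history = []
    for i in range(y + 1):
        # history holds exactly the sequence of earlier values of φ
        history.append(chi(i, tuple(history)))
    return history[y]

def chi_fib(i, hist):
    """Fibonacci stated via course-of-values: each value looks two steps back."""
    return i if i < 2 else hist[i - 1] + hist[i - 2]
```

course_of_values(chi_fib, 10) evaluates to 55, and knowing the final history tuple is exactly knowing the sequence of values φ(0), ..., φ(y−1), as the quotation says.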

18.6 Additional primitive recursive forms

Some additional forms of recursion also define functions that are in fact primitive recursive. Definitions in these forms may be easier to find or more natural for reading or writing. Course-of-values recursion defines primitive recursive functions. Some forms of mutual recursion also define primitive recursive functions.

The functions that can be programmed in the LOOP programming language are exactly the primitive recursive functions. This gives a different characterization of the power of these functions. The main limitation of the LOOP language, compared to a Turing-complete language, is that in the LOOP language the number of times that each loop will run is specified before the loop begins to run.
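The restriction can be mimicked in any language by using only counted loops whose bounds are computed before the loop starts. A sketch in Python, imitating the LOOP discipline (this is an illustration, not the LOOP language itself):

```python
def loop_add(a, b):
    """x := a; repeat b times: x := x + 1."""
    x = a
    for _ in range(b):   # the bound b is fixed before the loop runs
        x = x + 1
    return x

def loop_mul(a, b):
    """x := 0; repeat b times: x := x + a."""
    x = 0
    for _ in range(b):
        x = loop_add(x, a)
    return x

def loop_factorial(n):
    """x := 1; repeat n times: x := x * (i + 1)."""
    x = 1
    for i in range(n):
        x = loop_mul(x, i + 1)
    return x
```

No while-loops and no early exits appear; every loop's iteration count is known on entry, which is why such programs always terminate.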

18.7 Finitism and consistency results

The primitive recursive functions are closely related to mathematical finitism, and are used in several contexts in mathematical logic where a particularly constructive system is desired. Primitive recursive arithmetic (PRA), a formal axiom system for the natural numbers and the primitive recursive functions on them, is often used for this purpose.

PRA is much weaker than Peano arithmetic, which is not a finitistic system. Nevertheless, many results in number theory and in proof theory can be proved in PRA. For example, Gödel’s incompleteness theorem can be formalized into PRA, giving the following theorem:


If T is a theory of arithmetic satisfying certain hypotheses, with Gödel sentence G_T, then PRA proves the implication Con(T) → G_T.

Similarly, many of the syntactic results in proof theory can be proved in PRA, which implies that there are primitive recursive functions that carry out the corresponding syntactic transformations of proofs.

In proof theory and set theory, there is an interest in finitistic consistency proofs, that is, consistency proofs that themselves are finitistically acceptable. Such a proof establishes that the consistency of a theory T implies the consistency of a theory S by producing a primitive recursive function that can transform any proof of an inconsistency from S into a proof of an inconsistency from T. One sufficient condition for a consistency proof to be finitistic is the ability to formalize it in PRA. For example, many consistency results in set theory that are obtained by forcing can be recast as syntactic proofs that can be formalized in PRA.

18.8 History

Recursive definitions had been used more or less formally in mathematics before, but the construction of primitive recursion is traced back to Richard Dedekind's theorem 126 of his Was sind und was sollen die Zahlen? (1888). This work was the first to give a proof that a certain recursive construction defines a unique function.[3][4][5]

The current terminology was coined by Rózsa Péter (1934) after Ackermann had proved in 1928 that the function which today is named after him was not primitive recursive, an event which prompted the need to rename what until then were simply called recursive functions.[4][5]

18.9 See also

• Course-of-values recursion

• Grzegorczyk hierarchy

• Machine that always halts

• Recursion (computer science)

• Primitive recursive functional

• Double recursion

• Primitive recursive set function

• Primitive recursive ordinal function

18.10 References

• Brainerd, W.S., Landweber, L.H. (1974), Theory of Computation, Wiley, ISBN 0-471-09585-0

• Robert I. Soare, Recursively Enumerable Sets and Degrees, Springer-Verlag, 1987. ISBN 0-387-15299-7

• Stephen Kleene (1952), Introduction to Metamathematics, North-Holland Publishing Company, New York, 11th reprint 1971 (2nd edition notes added on 6th reprint). In Chapter XI, General Recursive Functions, §57.

• George Boolos, John Burgess, Richard Jeffrey (2002), Computability and Logic: Fourth Edition, Cambridge University Press, Cambridge, UK. Cf. pp. 70–71.

• Robert I. Soare (1995), Computability and Recursion, http://www.people.cs.uchicago.edu/~soare/History/compute.pdf

• Daniel Severin (2008), Unary primitive recursive functions, J. Symbolic Logic, Volume 73, Issue 4, pp. 1122–1138.


[1] Brainerd and Landweber, 1974

[2] This follows from the facts that the functions of this form are the most quickly growing primitive recursive functions, and that a function is primitive recursive if and only if its time complexity is bounded by a primitive recursive function. For the former, see Linz, Peter (2011), An Introduction to Formal Languages and Automata, Jones & Bartlett Publishers, p. 332, ISBN 9781449615529. For the latter, see Moore, Cristopher; Mertens, Stephan (2011), The Nature of Computation, Oxford University Press, p. 287, ISBN 9780191620805.

[3] Peter Smith (2013). An Introduction to Gödel’s Theorems (2nd ed.). Cambridge University Press. pp. 98–99. ISBN 978-1-107-02284-3.

[4] George Tourlakis (2003). Lectures in Logic and Set Theory: Volume 1, Mathematical Logic. Cambridge University Press. p. 129. ISBN 978-1-139-43942-8.

[5] Rod Downey, ed. (2014). Turing’s Legacy: Developments from Turing’s Ideas in Logic. Cambridge University Press. p. 474. ISBN 978-1-107-04348-0.


Chapter 19

Primitive recursive set function

In mathematics, primitive recursive set functions or primitive recursive ordinal functions are analogs of primitive recursive functions, defined for sets or ordinals rather than natural numbers. They were introduced by Jensen & Karp (1971).

19.1 Definition

A primitive recursive set function is a function from sets to sets that can be obtained from the following basic functions by repeatedly applying the following rules of substitution and recursion.

The basic functions are:

• Projection: P_{n,m}(x₁, ..., xₙ) = xₘ
• Zero: F(x) = 0
• Adjoining an element to a set: F(x, y) = x ∪ {y}
• Testing membership: C(x, y, u, v) = x if u ∈ v, and y otherwise

The rules for generating new functions by substitution are

• F(x, y) = G(x, H(x), y)
• F(x, y) = G(H(x), y)

where x and y are finite sequences of variables.

The rule for generating new functions by recursion is

• F(z, x) = G(∪_{u∈z} F(u, x), z, x)

A primitive recursive ordinal function is defined in the same way, except that the initial function F(x, y) = x ∪ {y} is replaced by F(x) = x ∪ {x} (the successor of x). The primitive recursive ordinal functions are the same as the primitive recursive set functions that map ordinals to ordinals.

One can also add more initial functions to obtain a larger class of functions. For example, the ordinal function ω^α is not primitive recursive, because the constant function with value ω (or any other infinite set) is not primitive recursive, so one might want to add this constant function to the initial functions.
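The recursion rule can be sketched on hereditarily finite sets, modeled as Python frozensets. The helper recurse and the transitive-closure example are our illustration (with the parameter list x dropped for brevity), chosen because TC(z) = z ∪ ⋃_{u∈z} TC(u) has exactly the required form:

```python
def recurse(G):
    """Build F satisfying F(z) = G(union of F(u) for u in z, z)."""
    def F(z):
        # the union over elements is empty when z is the empty set
        union = frozenset().union(*[F(u) for u in z])
        return G(union, z)
    return F

# Transitive closure: G(union, z) = z ∪ union.
tc = recurse(lambda union, z: z | union)

zero = frozenset()             # the von Neumann ordinal 0 = {}
one = frozenset({zero})        # 1 = {0}
two = frozenset({zero, one})   # 2 = {0, 1}
```

Here tc(frozenset({one})) evaluates to frozenset({zero, one}): the recursion pulls in the elements of elements, exactly as the rule prescribes.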

19.2 References

• Jensen, Ronald B.; Karp, Carol (1971), “Primitive recursive set functions”, Axiomatic Set Theory, Proc. Sympos. Pure Math., XIII, Part I, Providence, R.I.: Amer. Math. Soc., pp. 143–176, ISBN 9780821802458, MR 0281602


Chapter 20

Recursion

For other uses, see Recursion (disambiguation).

Recursion is the process of repeating items in a self-similar way. For instance, when the surfaces of two mirrors are exactly parallel with each other, the nested images that occur are a form of infinite recursion. The term has a variety of meanings specific to a variety of disciplines, ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, in which it refers to a method of defining functions in which the function being defined is applied within its own definition. Specifically, this defines an infinite number of instances (function values), using a finite expression that for some instances may refer to other instances, but in such a way that no loop or infinite chain of references can occur. The term is also used more generally to describe a process of repeating objects in a self-similar way.

20.1 Formal definitions

In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties:

1. A simple base case (or cases)—a terminating scenario that does not use recursion to produce an answer

2. A set of rules that reduce all other cases toward the base case

For example, the following is a recursive definition of a person’s ancestors:

• One’s parents are one’s ancestors (base case).

• The ancestors of one’s ancestors are also one’s ancestors (recursion step).

The Fibonacci sequence is a classic example of recursion:

Fib(0) = 0 as base case 1,
Fib(1) = 1 as base case 2,
For all integers n > 1, Fib(n) := Fib(n − 1) + Fib(n − 2).
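The definition translates directly into code; a minimal Python rendering:

```python
def fib(n):
    if n == 0:
        return 0                         # base case 1
    if n == 1:
        return 1                         # base case 2
    return fib(n - 1) + fib(n - 2)       # recursive case, for n > 1
```

This naive version recomputes subproblems exponentially often; storing already-computed values removes the redundancy.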

Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: 0 is a natural number, and each natural number has a successor, which is also a natural number. By this base case and recursive rule, one can generate the set of all natural numbers.

Recursively defined mathematical objects include functions, sets, and especially fractals.

There are various more tongue-in-cheek “definitions” of recursion; see recursive humor.


20.2 Informal definition

Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'.

To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules. The running of a procedure involves actually following the rules and performing the steps. An analogy: a procedure is like a written recipe; running a procedure is like actually preparing the meal.

Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth. However, a recursive procedure is one where (at least) one of its steps calls for a new instance of the very same procedure, like a sourdough recipe calling for some dough left over from the last time the same recipe was made. This of course immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete, like a sourdough recipe that also tells you how to get some starter dough in case you've never made it before. Even if properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old (partially executed) invocation of the procedure; this requires some administration of how far various simultaneous instances of the procedures have progressed. For this reason recursive definitions are very rare in everyday situations.

An example could be the following procedure to find a way through a maze. Proceed forward until reaching either an exit or a branching point (a dead end is considered a branching point with 0 branches). If the point reached is an exit, terminate. Otherwise try each branch in turn, using the procedure recursively; if every trial fails by reaching only dead ends, return on the path that led to this branching point and report failure. Whether this actually defines a terminating procedure depends on the nature of the maze: it must not allow loops. In any case, executing the procedure requires carefully recording all currently explored branching points, and which of their branches have already been exhaustively tried.
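The maze procedure can be sketched as a recursive function on a character grid ('#' wall, ' ' open, 'E' exit); the grid encoding and names are our illustration, not part of the description above. The visited set is the "administration" of branches already tried, and it also guards against loops:

```python
def solve(maze, pos, visited=None):
    """Return True if an exit is reachable from pos, else False."""
    if visited is None:
        visited = set()
    r, c = pos
    if maze[r][c] == 'E':
        return True                              # reached an exit: terminate
    visited.add(pos)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(maze) and 0 <= nc < len(maze[nr])
                and maze[nr][nc] != '#' and (nr, nc) not in visited):
            if solve(maze, (nr, nc), visited):   # try each branch recursively
                return True
    return False                                 # every trial failed: report failure

maze = [
    "#####",
    "#  E#",
    "# ###",
    "#####",
]
```

Starting from the open square at (2, 1), solve(maze, (2, 1)) backtracks out of dead ends until it reaches the exit and returns True.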

20.3 In language

Linguist Noam Chomsky, among many others, has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length (beyond practical constraints such as the time available to utter one), can be explained as the consequence of recursion in natural language.[1][2] This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous, in which the sentence witches are dangerous occurs in the larger one. So a sentence can be defined recursively (very roughly) as something with a structure that includes a noun phrase, a verb, and optionally another sentence. This is really just a special case of the mathematical definition of recursion.

This provides a way of understanding the creativity of language: the unbounded number of grammatical sentences. It immediately predicts that sentences can be of arbitrary length: Dorothy thinks that Toto suspects that Tin Man said that.... Of course, there are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another. Over the years, languages in general have proved amenable to this kind of analysis.

Recently, however, the generally accepted idea that recursion is an essential property of human language has been challenged by Daniel Everett on the basis of his claims about the Pirahã language. Andrew Nevins, David Pesetsky and Cilene Rodrigues are among many who have argued against this.[3] Literary self-reference can in any case be argued to be different in kind from mathematical or logical recursion.[4]

Recursion plays a crucial role not only in syntax, but also in natural language semantics. The word and, for example, can be construed as a function that can apply to sentence meanings to create new sentences, and likewise for noun phrase meanings, verb phrase meanings, and others. It can also apply to intransitive verbs, transitive verbs, or ditransitive verbs. To provide a single denotation for it that is suitably flexible, it is typically defined so that it can take any of these different types of meanings as arguments. This can be done by defining it for a simple case in which it combines sentences, and then defining the other cases recursively in terms of the simple one.[5]

Page 147: Recursion acglz

20.4. IN MATHEMATICS 133

20.3.1 Recursive humor

Recursion is sometimes used humorously in computer science, programming, philosophy, or mathematics textbooks, generally by giving a circular definition or self-reference, in which the putative recursive step does not get closer to a base case, but instead leads to an infinite regress. It is not unusual for such books to include a joke entry in their glossary along the lines of:

Recursion, see Recursion.[6]

A variation is found on page 269 in the index of some editions of Brian Kernighan and Dennis Ritchie's book The C Programming Language; the index entry recursively references itself (“recursion 86, 139, 141, 182, 202, 269”). The earliest version of this joke was in Software Tools by Kernighan and Plauger, and it also appears in The UNIX Programming Environment by Kernighan and Pike. It did not appear in the first edition of The C Programming Language.

Another joke is that “To understand recursion, you must understand recursion.”[6] In the English-language version of the Google web search engine, when a search for “recursion” is made, the site suggests “Did you mean: recursion.” An alternative form is the following, from Andrew Plotkin: “If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is.”

Recursive acronyms can also be examples of recursive humor. PHP, for example, stands for “PHP Hypertext Preprocessor”, WINE stands for “Wine Is Not an Emulator”, and GNU stands for “GNU's not Unix”.

20.4 In mathematics

20.4.1 Recursively defined sets

Main article: Recursive definition

Example: the natural numbers

See also: Closure (mathematics)

The canonical example of a recursively defined set is given by the natural numbers:

0 is in N.
If n is in N, then n + 1 is in N.
The set of natural numbers is the smallest set satisfying the previous two properties.

Example: The set of true reachable propositions

Another interesting example is the set of all “true reachable” propositions in an axiomatic system.

• If a proposition is an axiom, it is a true reachable proposition.

• If a proposition can be obtained from true reachable propositions by means of inference rules, it is a true reachable proposition.

• The set of true reachable propositions is the smallest set of propositions satisfying these conditions.

This set is called 'true reachable propositions' because in non-constructive approaches to the foundations of mathematics, the set of true propositions may be larger than the set recursively constructed from the axioms and rules of inference. See also Gödel’s incompleteness theorems.


20.4.2 Finite subdivision rules

Main article: Finite subdivision rule

Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard 'middle thirds' technique for creating the Cantor set is a subdivision rule, as is barycentric subdivision.

20.4.3 Functional recursion

A function may be partly defined in terms of itself. A familiar example is the Fibonacci number sequence: F(n) = F(n − 1) + F(n − 2). For such a definition to be useful, it must lead to non-recursively defined values, in this case F(0) = 0 and F(1) = 1.

A famous recursive function is the Ackermann function, which, unlike the Fibonacci sequence, cannot easily be expressed without recursion.
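The Ackermann function (here in its common Ackermann–Péter form) makes the point concrete; the recursion nests in a way no fixed stack of primitive-recursive loops can match:

```python
def ackermann(m, n):
    if m == 0:
        return n + 1                                 # base case
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))     # nested recursive call
```

ackermann(3, 3) is 61, but ackermann(4, 2) already has 19,729 decimal digits, so even small arguments are infeasible to compute this way.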

20.4.4 Proofs involving recursive definitions

Applying the standard technique of proof by cases to recursively defined sets or functions, as in the preceding sections, yields structural induction, a powerful generalization of mathematical induction widely used to derive proofs in mathematical logic and computer science.

20.4.5 Recursive optimization

Dynamic programming is an approach to optimization that restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is the Bellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step).

20.5 In computer science

Main article: Recursion (computer science)

A common method of simplification is to divide a problem into subproblems of the same type. As a computer programming technique, this is called divide and conquer and is key to the design of many important algorithms. Divide and conquer serves as a top-down approach to problem solving, where problems are solved by solving smaller and smaller instances. A contrary approach is dynamic programming. This approach serves as a bottom-up approach, where problems are solved by solving larger and larger instances, until the desired size is reached.

A classic example of recursion is the definition of the factorial function, given here in C code:

unsigned int factorial(unsigned int n)
{
    if (n == 0) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

The function calls itself recursively on a smaller version of the input (n - 1) and multiplies the result of the recursive call by n, until reaching the base case, analogously to the mathematical definition of factorial.

Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem. One example application of recursion is in parsers for programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program.

Recurrence relations are equations to define one or more sequences recursively. Some specific kinds of recurrence relation can be “solved” to obtain a non-recursive definition.


Use of recursion in an algorithm has both advantages and disadvantages. The main advantage is usually simplicity. The main disadvantage is often that the algorithm may require large amounts of memory if the depth of the recursion is very large.

20.6 In art

The Russian Doll or Matryoshka Doll is a physical artistic example of the recursive concept.

20.7 The recursion theorem

In set theory, this is a theorem guaranteeing that recursively defined functions exist. Given a set X, an element a of X and a function f : X → X, the theorem states that there is a unique function F : N → X (where N denotes the set of natural numbers including zero) such that

F (0) = a

F (n+ 1) = f(F (n))

for any natural number n.
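In code, the F guaranteed by the theorem is obtained by iterating f exactly n times starting from a; a sketch (the names are ours):

```python
def make_F(a, f):
    """Return the F with F(0) = a and F(n + 1) = f(F(n))."""
    def F(n):
        value = a            # F(0) = a
        for _ in range(n):   # apply f once per step: F(k + 1) = f(F(k))
            value = f(value)
        return value
    return F

powers_of_two = make_F(1, lambda x: 2 * x)   # here F(n) = 2**n
```

The uniqueness proved in the next subsection shows that any function satisfying the two equations must agree with this one at every n.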

20.7.1 Proof of uniqueness

Take two functions F : N → X and G : N → X such that:

F (0) = a

G(0) = a

F (n+ 1) = f(F (n))

G(n+ 1) = f(G(n))

where a is an element of X.

It can be proved by mathematical induction that F(n) = G(n) for all natural numbers n:

Base Case: F (0) = a = G(0) so the equality holds for n = 0 .

Inductive Step: Suppose F(k) = G(k) for some k ∈ N. Then F(k + 1) = f(F(k)) = f(G(k)) = G(k + 1).

Hence F(k) = G(k) implies F(k+1) = G(k+1).

By induction, F (n) = G(n) for all n ∈ N .

20.7.2 Examples

Some common recurrence relations are:

• Golden Ratio: ϕ = 1 + (1/ϕ) = 1 + (1/(1 + (1/(1 + 1/...))))

• Factorial: n! = n(n− 1)! = n(n− 1) · · · 1

• Fibonacci numbers: f(n) = f(n− 1) + f(n− 2)


• Catalan numbers: C₀ = 1, Cₙ₊₁ = (4n + 2)Cₙ/(n + 2)

• Computing compound interest

• The Tower of Hanoi

• Ackermann function
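Several of these recurrences translate directly into short programs; for instance the Catalan numbers (a sketch):

```python
def catalan(n):
    """C(0) = 1; shifting the recurrence C(n+1) = (4n + 2)C(n)/(n + 2)
    down by one gives C(n) = (4n − 2)C(n − 1)/(n + 1)."""
    if n == 0:
        return 1
    return (4 * n - 2) * catalan(n - 1) // (n + 1)
```

The first few values are 1, 1, 2, 5, 14, 42; the division is always exact, so integer division is safe here.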

20.8 See also

• Corecursion

• Course-of-values recursion

• Digital infinity

• Fixed point combinator

• Infinite loop

• Infinitism

• Iterated function

• Mise en abyme

• Reentrant (subroutine)

• Self-reference

• Strange loop

• Tail recursion

• Tupper’s self-referential formula

• Turtles all the way down

20.9 Bibliography

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

• Johnsonbaugh, Richard (2004). Discrete Mathematics. Prentice Hall. ISBN 0-13-117686-2.

• Hofstadter, Douglas (1999). Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books. ISBN 0-465-02656-7.

• Shoenfield, Joseph R. (2000). Recursion Theory. A K Peters Ltd. ISBN 1-56881-149-7.

• Causey, Robert L. (2001). Logic, Sets, and Recursion. Jones & Bartlett. ISBN 0-7637-1695-2.

• Cori, Rene; Lascar, Daniel; Pelletier, Donald H. (2001). Recursion Theory, Gödel’s Theorems, Set Theory, Model Theory. Oxford University Press. ISBN 0-19-850050-5.

• Barwise, Jon; Moss, Lawrence S. (1996). Vicious Circles. Stanford Univ Center for the Study of Language and Information. ISBN 0-19-850050-5. Offers a treatment of corecursion.

• Rosen, Kenneth H. (2002). Discrete Mathematics and Its Applications. McGraw-Hill College. ISBN 0-07-293033-0.

• Cormen, Thomas H.; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2001). Introduction to Algorithms. MIT Press. ISBN 0-262-03293-7.


• Kernighan, B.; Ritchie, D. (1988). The C Programming Language. Prentice Hall. ISBN 0-13-110362-8.

• Stokey, Nancy; Robert Lucas; Edward Prescott (1989). Recursive Methods in Economic Dynamics. Harvard University Press. ISBN 0-674-75096-9.

• Hungerford (1980). Algebra. Springer. ISBN 978-0-387-90518-1., first chapter on set theory.

20.10 References

[1] Pinker, Steven (1994). The Language Instinct. William Morrow.

[2] Pinker, Steven; Jackendoff, Ray (2005). “The faculty of language: What’s so special about it?". Cognition 95 (2): 201–236. doi:10.1016/j.cognition.2004.08.004. PMID 15694646.

[3] Nevins, Andrew; Pesetsky, David; Rodrigues, Cilene (2009). “Evidence and argumentation: A reply to Everett (2009)"(PDF). Language 85 (3): 671–681. doi:10.1353/lan.0.0140.

[4] Drucker, Thomas (4 January 2008). Perspectives on the History of Mathematical Logic. Springer Science & Business Media. p. 110. ISBN 978-0-8176-4768-1.

[5] Barbara Partee and Mats Rooth. 1983. In Rainer Bäuerle et al., Meaning, Use, and Interpretation of Language. Reprinted in Paul Portner and Barbara Partee, eds. 2002. Formal Semantics: The Essential Readings. Blackwell.

[6] Hunter, David (2011). Essentials of Discrete Mathematics. Jones and Bartlett. p. 494.

20.11 External links

• Recursion - tutorial by Alan Gauld

• A Primer on Recursion - contains pointers to recursion in Formal Languages, Linguistics, Math and Computer Science

• Zip Files All The Way Down

• Nevins, Andrew; David Pesetsky; Cilene Rodrigues. Evidence and Argumentation: A Reply to Everett (2009). Language 85.3: 671–681 (2009)


A visual form of recursion known as the Droste effect. The woman in this image holds an object that contains a smaller image of her holding an identical object, which in turn contains a smaller image of herself holding an identical object, and so forth. Advertisement for Droste cocoa, c. 1900


Ouroboros, an ancient symbol depicting a serpent or dragon eating its own tail.


Recently refreshed sourdough, bubbling through fermentation: the recipe calls for some sourdough left over from the last time the same recipe was made.


The Sierpinski triangle—a confined recursion of triangles that form a fractal


Chapter 21

Single recursion

This article is about recursive approaches to solving problems. For recursion in computer science acronyms, see Recursive acronym#Computer-related examples.

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).[1] The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[2]

“The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.”[3]

Most computer programming languages support recursion by allowing a function to call itself within the program text. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. Computability theory proves that these recursive-only languages are Turing complete; they are as computationally powerful as Turing complete imperative languages, meaning they can solve the same kinds of problems as imperative languages even without iterative control structures such as “while” and “for”.

21.1 Recursive functions and algorithms

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the “terminating case”.

The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances—for example, some system and server processes—are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by corecursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say “compute the nth term (nth partial sum)”.
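The series example can be sketched with the number of terms as the added "stopping criterion" parameter:

```python
import math

def e_approx(terms):
    """Sum the first `terms` terms of 1/0! + 1/1! + 1/2! + ...
    terms == 0 is the added base case: the empty sum."""
    if terms == 0:
        return 0.0
    return e_approx(terms - 1) + 1.0 / math.factorial(terms - 1)
```

e_approx(15) already agrees with math.e to roughly 12 decimal places; without the terms parameter the recursion would have no base case at all.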


Tree created using the Logo programming language and relying heavily on recursion

21.2 Recursive data types

Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

Further information: Algebraic data type


21.2.1 Inductively defined data

Main article: Recursive data type

An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.

Another example of an inductive definition is the natural numbers (or positive integers):

A natural number is either 1 or n+1, where n is a natural number.

Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

<expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>)

This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third alternatives, the grammar permits arbitrarily complex arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.

21.2.2 Coinductively defined data and corecursion

Main articles: Coinduction and Corecursion

A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might look like this:

A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure (namely, via the accessor functions head and tail) and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.

Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program.

21.3 Types of recursion

21.3.1 Single recursion and multiple recursion

Recursion that only contains a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal,


such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search, or computing the Fibonacci sequence.

Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack.

Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively is multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, tracking at each step two successive values; see corecursion: examples. A more sophisticated example is using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.

21.3.2 Indirect recursion

Main article: Mutual recursion

Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.

Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, it is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions.

21.3.3 Anonymous recursion

Main article: Anonymous recursion

Recursion is usually done by explicitly calling a function by name. However, recursion can also be done by implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.

21.3.4 Structural versus generative recursion

See also: Structural recursion

Some authors classify recursion as either “structural” or “generative”. The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:

[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.[4]

Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.


Generative recursion is the alternative:

Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How To Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton’s method, fractals, and adaptive integration.[5]

This distinction is important in proving termination of a function.

• All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.

• Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions (each step generates the new data, such as successive approximation in Newton’s method), and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.

• In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.

• By contrast, generative recursion is when there is no such obvious loop variant, and termination depends on a function, such as “error of approximation”, that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.

21.4 Recursive programs

21.4.1 Recursive procedures

Factorial

A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:

fact(n) = 1                  if n = 0
fact(n) = n · fact(n − 1)    if n > 0

The function can also be written as a recurrence relation:

bₙ = n · bₙ₋₁

b₀ = 1

Evaluating the recurrence relation step by step demonstrates the computation performed by the recursive definition; for b₅, for example:

b₅ = 5·b₄ = 5·(4·b₃) = 5·(4·(3·b₂)) = 5·(4·(3·(2·b₁))) = 5·(4·(3·(2·(1·b₀)))) = 120

The factorial function can also be described without using recursion, by making use of the typical looping constructs found in imperative programming languages. Such imperative code is equivalent to this mathematical definition using an accumulator variable t:

fact(n) = fact_acc(n, 1)

fact_acc(n, t) = t                        if n = 0
fact_acc(n, t) = fact_acc(n − 1, n·t)     if n > 0

The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.


Greatest common divisor

The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively. Function definition:

gcd(x, y) = x                              if y = 0
gcd(x, y) = gcd(y, remainder(x, y))        if y > 0

Recurrence relation for the greatest common divisor, where x % y expresses the remainder of x / y:

gcd(x, y) = gcd(y, x % y)    if y ≠ 0

gcd(x, 0) = x

The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and in a language that eliminates tail calls it executes in constant stack space. For example, gcd(111, 259) evaluates as gcd(111, 259) = gcd(259, 111) = gcd(111, 37) = gcd(37, 0) = 37, which shows the steps that such a language would perform. The same algorithm can be written with explicit iteration, suitable for a language that does not eliminate tail calls: by maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack. The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.

Towers of Hanoi


Main article: Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[6][7] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack? Function definition:

hanoi(n) = 1                          if n = 1
hanoi(n) = 2 · hanoi(n − 1) + 1       if n > 1


Recurrence relation for hanoi:

hₙ = 2·hₙ₋₁ + 1

h₁ = 1

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula: hₙ = 2ⁿ − 1.[8]

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array’s size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

/* Call binary_search with proper initial conditions.
   INPUT:  data is an array of integers SORTED in ASCENDING order,
           toFind is the integer to search for,
           count is the total number of elements in the array
   OUTPUT: result of binary_search */
int search(int *data, int toFind, int count)
{
    // Start = 0 (beginning index), End = count - 1 (top index)
    return binary_search(data, toFind, 0, count - 1);
}

/* Binary search algorithm.
   INPUT:  data is an array of integers SORTED in ASCENDING order,
           toFind is the integer to search for,
           start is the minimum array index, end is the maximum array index
   OUTPUT: position of the integer toFind within array data, -1 if not found */
int binary_search(int *data, int toFind, int start, int end)
{
    // Get the midpoint (integer division).
    int mid = start + (end - start) / 2;

    if (start > end)                 // stop condition
        return -1;
    else if (data[mid] == toFind)    // found?
        return mid;
    else if (data[mid] > toFind)     // data is greater than toFind, search lower half
        return binary_search(data, toFind, start, mid - 1);
    else                             // data is less than toFind, search upper half
        return binary_search(data, toFind, mid + 1, end);
}

21.4.2 Recursive data structures (structural recursion)

Main article: Recursive data type

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.

“Recursive algorithms are particularly appropriate when the underlying problem or the data to betreated are defined in recursive terms.”[9]

The examples in this section illustrate what is known as “structural recursion”. This term refers to the fact that the recursive procedures are acting on data that is defined recursively.

As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function’s body consume some immediate piece of a given compound value.[5]


Linked lists

Main article: Linked list

Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The “next” element of struct node is a pointer to another struct node, effectively creating a list type.

struct node {
    int data;           // some integer data
    struct node *next;  // pointer to another struct node
};

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

void list_print(struct node *list)
{
    if (list != NULL)                // base case
    {
        printf("%d ", list->data);   // print integer data followed by a space
        list_print(list->next);      // recursive call on the next node
    }
}

Binary trees

Main article: Binary tree

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).

struct node {
    int data;            // some integer data
    struct node *left;   // pointer to the left subtree
    struct node *right;  // pointer to the right subtree
};

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

// Test if tree_node contains i; return 1 if so, 0 if not.
int tree_contains(struct node *tree_node, int i)
{
    if (tree_node == NULL)
        return 0;  // base case
    else if (tree_node->data == i)
        return 1;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

At most two recursive calls will be made for any given call to tree_contains as defined above.

// Inorder traversal:
void tree_print(struct node *tree_node)
{
    if (tree_node != NULL)                // base case
    {
        tree_print(tree_node->left);      // go left
        printf("%d ", tree_node->data);   // print the integer followed by a space
        tree_print(tree_node->right);     // go right
    }
}

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order.

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, therefore the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem.

import java.io.*;

public class FileSystem {

    public static void main(String[] args) {
        traverse();
    }

    /**
     * Obtains the filesystem roots.
     * Proceeds with the recursive filesystem traversal.
     */
    private static void traverse() {
        File[] fs = File.listRoots();
        for (int i = 0; i < fs.length; i++) {
            if (fs[i].isDirectory() && fs[i].canRead()) {
                rtraverse(fs[i]);
            }
        }
    }

    /**
     * Recursively traverse a given directory.
     *
     * @param fd indicates the starting point of traversal
     */
    private static void rtraverse(File fd) {
        File[] fss = fd.listFiles();
        for (int i = 0; i < fss.length; i++) {
            System.out.println(fss[i]);
            if (fss[i].isDirectory() && fss[i].canRead()) {
                rtraverse(fss[i]);
            }
        }
    }
}


This code blends the lines, at least somewhat, between recursion and iteration. It is, essentially, a recursive implementation, which is the best way to traverse a filesystem. It is also an example of direct and indirect recursion. The method “rtraverse” is purely a direct example; the method “traverse” is the indirect, which calls “rtraverse”. This example needs no “base case” scenario because there will always be some fixed number of files or directories in a given filesystem.

21.5 Implementation issues

In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step),a number of modifications may be made, for purposes of clarity or efficiency. These include:

• Wrapper function (at top)

• Short-circuiting the base case, aka “Arm’s-length recursion” (at bottom)

• Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough

On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm’s-length recursion is a special case of this.

21.5.1 Wrapper function

A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.

Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as “level of recursion” or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.

21.5.2 Short-circuiting the base case

Short-circuiting the base case, also known as arm’s-length recursion, consists of checking the base case before making a recursive call, i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short-circuit, and may miss 0; this can be mitigated by a wrapper function.

Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.

Conceptually, short-circuiting can be considered to either have the same base case and recursive step, only checking the base case before the recursion, or it can be considered to have a different base case (one step removed from the standard base case) and a more complex recursive step, namely “check valid then recurse”, as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.


Depth-first search

A basic example of short-circuiting is given in the depth-first search (DFS) of a binary tree; see the binary trees section for the standard recursive discussion. The standard recursive algorithm for a DFS is:

• base case: If current node is Null, return false

• recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children

In short-circuiting, this is instead:

• check value of current node, return true if match,

• otherwise, on children, if not Null, then recurse.

In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).

In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.

In C, the standard recursive algorithm may be implemented as:

bool tree_contains(struct node *tree_node, int i)
{
    if (tree_node == NULL)
        return false;  // base case
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

The short-circuited algorithm may be implemented as:

// Wrapper function to handle empty tree
bool tree_contains(struct node *tree_node, int i)
{
    if (tree_node == NULL)
        return false;  // empty tree
    else
        return tree_contains_do(tree_node, i);  // call auxiliary function
}

// Assumes tree_node != NULL
bool tree_contains_do(struct node *tree_node, int i)
{
    if (tree_node->data == i)
        return true;  // found
    else  // recurse
        return (tree_node->left  && tree_contains_do(tree_node->left, i)) ||
               (tree_node->right && tree_contains_do(tree_node->right, i));
}

Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is only made if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a bool, so the overall expression evaluates to a bool. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.

21.5.3 Hybrid algorithm

Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.

21.6 Recursion versus iteration

Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead, and sometimes explicit iteration is not available.

Compare the templates to compute xₙ defined by xₙ = f(n, xₙ₋₁) from x_base: for an imperative language the overhead is to define the function, for a functional language the overhead is to define the accumulator variable x.

For example, the factorial function may be implemented iteratively in C by assigning to a loop index variable and an accumulator variable, rather than passing arguments and returning values by recursion:

unsigned int factorial(unsigned int n)
{
    unsigned int product = 1;  // empty product is 1
    while (n) {
        product *= n;
        --n;
    }
    return product;
}

21.6.1 Expressive power

Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program’s runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.[10][11]

Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and do loops are routinely rewritten in recursive form in functional languages.[12][13] However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack.

21.6.2 Performance issues

In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.

As a concrete example, the difference in performance between recursive and iterative implementations of the “factorial” example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

21.6.3 Stack space

In some programming languages, the stack space available to a thread is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language.[14] Note the caveat below regarding the special case of tail recursion.

21.6.4 Multiply recursive problems

Multiply recursive problems are inherently recursive, because of prior state they need to track. One example is tree traversal as in depth-first search; contrast with list traversal and linear search in a list, which is singly recursive and thus naturally iterative. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.


21.7 Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function above is tail-recursive. In contrast, the factorial function is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the “for” and “while” loops.

The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller’s return position need not be saved on the call stack; when the recursive call returns, it will branch directly on the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.

21.8 Order of execution

In the simple case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached. Consider this example:

21.8.1 Function 1

void recursiveFunction(int num)
{
    printf("%d\n", num);
    if (num < 4)
        recursiveFunction(num + 1);
}

21.8.2 Function 2 with swapped lines

void recursiveFunction(int num) {
    if (num < 4)
        recursiveFunction(num + 1);
    printf("%d\n", num);
}

21.9 Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed as a recurrence relation in big-O notation. Such recurrences can usually be simplified into a single big-O term.


21.9.1 Shortcut rule

Main article: Master theorem

If the time complexity of the function is of the form

T(n) = a · T(n/b) + O(n^k)

then the big-O time complexity is:

• If a > b^k, then the time complexity is O(n^(log_b a))

• If a = b^k, then the time complexity is O(n^k · log n)

• If a < b^k, then the time complexity is O(n^k)

where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and n^k represents the work the function does independent of any recursion (e.g. partitioning, recombining) at each level of recursion.
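As standard illustrations of the shortcut rule (these are textbook cases, not drawn from the chapter above):

```latex
\begin{align*}
\text{Binary search: } & T(n) = T(n/2) + O(1)
    & a = 1,\ b = 2,\ k = 0:\ a = b^k \Rightarrow O(\log n) \\
\text{Merge sort: }    & T(n) = 2\,T(n/2) + O(n)
    & a = 2,\ b = 2,\ k = 1:\ a = b^k \Rightarrow O(n \log n) \\
\text{Karatsuba: }     & T(n) = 3\,T(n/2) + O(n)
    & a = 3 > b^k = 2 \Rightarrow O(n^{\log_2 3})
\end{align*}
```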

21.10 See also

• Ackermann function

• Corecursion

• Functional programming

• Hierarchical and recursive queries in SQL

• Kleene–Rosser paradox

• McCarthy 91 function

• Memoization

• μ-recursive function

• Open recursion

• Primitive recursive function

• Recursion

• Sierpiński curve

• Takeuchi function

21.11 Notes and references

[1] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1: Recurrent Problems.

[2] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). p. 427.

[3] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 126.

[4] Felleisen, Matthias; Robert Bruce Findler; Matthew Flatt; Shriram Krishnamurthi (2001). How to Design Programs: An Introduction to Computing and Programming. Cambridge, MA: MIT Press. Part V: “Generative Recursion”.

[5] Felleisen, Matthias (2002). “Developing Interactive Web Programs”. In Jeuring, Johan. Advanced Functional Programming: 4th International School. Oxford, UK: Springer. p. 108.

[6] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1, Section 1.1: The Tower of Hanoi.


[7] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 427–430: The Tower of Hanoi.

[8] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 447–448: An Explicit Formula for the Tower of Hanoi Sequence.

[9] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 127.

[10] Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 79, ISBN 9781430232384.

[11] Drozdek, Adam (2012), Data Structures and Algorithms in C++ (4th ed.), Cengage Learning, p. 197, ISBN 9781285415017.

[12] Shivers, Olin. “The Anatomy of a Loop - A story of scope and control” (PDF). Georgia Institute of Technology. Retrieved 2012-09-03.

[13] Lambda the Ultimate. “The Anatomy of a Loop”. Lambda the Ultimate. Retrieved 2012-09-03.

[14] “27.1. sys — System-specific parameters and functions — Python v2.7.3 documentation”. Docs.python.org. Retrieved 2012-09-03.

21.12 Further reading

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

21.13 External links

• Harold Abelson and Gerald Sussman: “Structure and Interpretation of Computer Programs”

• Jonathan Bartlett: “Mastering Recursive Programming”

• David S. Touretzky: “Common Lisp: A Gentle Introduction to Symbolic Computation”

• Matthias Felleisen: “How To Design Programs: An Introduction to Computing and Programming”

• Owen L. Astrachan: “Big-Oh for Recursive Functions: Recurrence Relations”


Chapter 22

Recursion termination

In computing, recursion termination occurs when certain conditions are met and the recursive algorithm ceases calling itself and begins to return values.[1] This happens only if, with every recursive call, the recursive algorithm changes its state and moves toward the base case. Cases that satisfy the definition without being defined in terms of the definition itself are called base cases. They are small enough to solve directly.[2]

22.1 Examples

22.1.1 Fibonacci function

The Fibonacci function, fibonacci(n), which takes an integer n (n >= 0) as input, has three conditions:

1. if n is 0, return 0.

2. if n is 1, return 1.

3. otherwise, return fibonacci(n − 1) + fibonacci(n − 2).

This recursive function terminates if either condition 1 or condition 2 is satisfied. We see that the function’s recursive call reduces the value of n (by passing n − 1 or n − 2 to the function), ensuring that n eventually reaches either condition 1 or 2.

22.1.2 C++

C++ Example:[3]

int factorial(int number) {
    if (number == 0)
        return 1;
    else
        return number * factorial(number - 1);
}

Here we see that in the recursive call, the number passed in the recursive step is reduced by 1. This again ensures that the number will at some point reduce to 0, which in turn terminates the recursive algorithm.

22.2 References[1] http://www.itl.nist.gov/div897/sqg/dads/HTML/recursiontrm.html

[2] Recursion Lecture, Introduction to Computer Science, p. 3 (PDF).

[3] An Introduction to the Imperative Part of C++.

22.3 External links

• Princeton University: “An introduction to computer science in the context of scientific, engineering, and commercial applications”

• University of Toronto: “Introduction to Computer Science”


Chapter 23

Recursive definition

Four stages in the construction of a Koch snowflake. As with many other fractals, the stages are obtained via a recursive definition.

A recursive definition (or inductive definition) in mathematical logic and computer science is used to define the elements in a set in terms of other elements in the set (Aczel 1978:740ff).

A recursive definition of a function defines values of the function for some inputs in terms of the values of the same function for other inputs. For example, the factorial function n! is defined by the rules

0! = 1

(n + 1)! = (n + 1) · n!

This definition is valid for all n, because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure describing how to construct the function n!, starting from n = 0 and proceeding onwards with n = 1, n = 2, n = 3 etc. That such a definition indeed defines a function can be proved by induction.

An inductive definition of a set describes the elements in a set in terms of other elements in the set. For example, one definition of the set N of natural numbers is:

1. 1 is in N.

2. If an element n is in N then n+1 is in N.

3. N is the smallest set satisfying (1) and (2).

There are many sets that satisfy (1) and (2) - for example, the set {1, 1.649, 2, 2.649, 3, 3.649, ...} satisfies the definition. However, condition (3) specifies the set of natural numbers by removing the sets with extraneous members.

Properties of recursively defined functions and sets can often be proved by an induction principle that follows the recursive definition. For example, the definition of the natural numbers presented here directly implies the principle of mathematical induction for natural numbers: if a property holds of the natural number 0, and the property holds of n+1 whenever it holds of n, then the property holds of all natural numbers (Aczel 1978:742).

23.1 Form of recursive definitions

Most recursive definitions have three foundations: a base case (basis), an inductive clause, and an extremal clause.

The difference between a circular definition and a recursive definition is that a recursive definition must always have base cases, cases that satisfy the definition without being defined in terms of the definition itself, and all other cases comprising the definition must be “smaller” (closer to those base cases that terminate the recursion) in some sense. In contrast, a circular definition may have no base case, and may define the value of a function in terms of that value itself, rather than in terms of other values of the function. Such a situation would lead to an infinite regress.

23.2 Examples of recursive definitions

23.2.1 Elementary functions

Addition is defined recursively based on counting

0 + a = a

(1 + n) + a = 1 + (n + a)

Multiplication is defined recursively

0a = 0

(1 + n)a = a + na

Exponentiation is defined recursively


a^0 = 1

a^(1+n) = a · a^n

Binomial coefficients are defined recursively

(a choose 0) = 1

(1+a choose 1+n) = (1+a) · (a choose n) / (1+n)

23.2.2 Prime numbers

The set of prime numbers can be defined as the unique set of positive integers satisfying

• 1 is not a prime number

• any other positive integer is a prime number if and only if it is not divisible by any prime number smaller than itself

The primality of the integer 1 is the base case; checking the primality of any larger integer X by this definition requires knowing the primality of every integer between 1 and X, which is well defined by this definition. That last point can be proved by induction on X, for which it is essential that the second clause says “if and only if”; if it had said just “if”, the primality of, for instance, 4 would not be clear, and the further application of the second clause would be impossible.

23.2.3 Non-negative even numbers

The even numbers can be defined as consisting of

• 0 is in the set E of non-negative evens (basis clause)

• For any element x in the set E, x+2 is in E (inductive clause)

• Nothing is in E unless it is obtained from the basis and inductive clauses (extremal clause).

23.2.4 Well formed formulas

It is chiefly in logic or computer programming that recursive definitions are found. For example, a well-formed formula (wff) can be defined as:

1. a symbol which stands for a proposition - like p means “Connor is a lawyer.”

2. The negation symbol, followed by a wff - like Np means “It is not true that Connor is a lawyer.”

3. Any of the four binary connectives (C, A, K, or E) followed by two wffs. The symbol K means “both are true”, so Kpq may mean “Connor is a lawyer and Mary likes music.”

The value of such a recursive definition is that it can be used to determine whether any particular string of symbols is “well formed”.

• Kpq is well formed, because it’s K followed by the atomic wffs p and q.

• NKpq is well formed, because it’s N followed by Kpq, which is in turn a wff.

• KNpNq is K followed by Np and Nq; and Np is a wff, etc.


23.3 See also

• Recursive data types

• Recursion

• Mathematical induction

23.4 References

• Paul Halmos: Naive set theory, van Nostrand, 1960

• P. Aczel (1977), “An introduction to inductive definitions”, Handbook of Mathematical Logic, J. Barwise (ed.), ISBN 0-444-86388-5.

• James L. Hein (2009), Discrete Structures, Logic, and Computability. ISBN 0-7637-7206-2.


Chapter 24

Recursive function

Recursive function may refer to:

• Recursion (computer science), a procedure or subroutine, implemented in a programming language, whose implementation references itself

• A total computable function, a function which is defined for all possible inputs

• Primitive recursive function

24.1 See also

• μ-recursive function, defined from a particular formal model of computable functions using primitive recursion and the μ operator

• Recurrence relation, in mathematics, an equation that defines a sequence recursively


Chapter 25

Recursive language

This article is about a class of formal languages as they are studied in mathematics and theoretical computer science. For computer languages that allow a function to call itself recursively, see Recursion (computer science).

In mathematics, logic and computer science, a formal language (a set of finite sequences of symbols taken from a fixed alphabet) is called recursive if it is a recursive subset of the set of all possible finite sequences over the alphabet of the language. Equivalently, a formal language is recursive if there exists a total Turing machine (a Turing machine that halts for every given input) that, when given a finite sequence of symbols as input, accepts it if it belongs to the language and rejects it otherwise. Recursive languages are also called decidable.

The concept of decidability may be extended to other models of computation. For example, one may speak of languages decidable on a non-deterministic Turing machine. Therefore, whenever an ambiguity is possible, the synonym for “recursive language” used is Turing-decidable language, rather than simply decidable.

The class of all recursive languages is often called R, although this name is also used for the class RP.

This type of language was not defined in the Chomsky hierarchy of (Chomsky 1959). All recursive languages are also recursively enumerable. All regular, context-free and context-sensitive languages are recursive.

25.1 Definitions

There are two equivalent major definitions for the concept of a recursive language:

1. A recursive formal language is a recursive subset of the set of all possible words over the alphabet of the language.

2. A recursive language is a formal language for which there exists a Turing machine that, when presented with any finite input string, halts and accepts if the string is in the language, and halts and rejects otherwise. The Turing machine always halts: it is known as a decider and is said to decide the recursive language.

By the second definition, any decision problem can be shown to be decidable by exhibiting an algorithm for it that terminates on all inputs. An undecidable problem is a problem that is not decidable.

25.2 Examples

As noted above, every context-sensitive language is recursive. Thus, a simple example of a recursive language is the set L = {abc, aabbcc, aaabbbccc, ...}; more formally, the set L = {w ∈ {a, b, c}* | w = a^n b^n c^n for some n ≥ 1} is context-sensitive and therefore recursive.

Examples of decidable languages that are not context-sensitive are more difficult to describe. For one such example, some familiarity with mathematical logic is required: Presburger arithmetic is the first-order theory of the natural numbers with addition (but without multiplication). While the set of well-formed formulas in Presburger arithmetic is context-free, every deterministic Turing machine accepting the set of true statements in Presburger arithmetic has a worst-case runtime of at least 2^(2^(cn)), for some constant c > 0 (Fischer & Rabin 1974). Here, n denotes the length of the given formula. Since every context-sensitive language can be accepted by a linear bounded automaton, and such an automaton can be simulated by a deterministic Turing machine with worst-case running time at most c^n for some constant c, the set of valid formulas in Presburger arithmetic is not context-sensitive. On the positive side, it is known that there is a deterministic Turing machine running in time at most triply exponential in n that decides the set of true formulas in Presburger arithmetic (Oppen 1978). Thus, this is an example of a language that is decidable but not context-sensitive.

25.3 Closure properties

Recursive languages are closed under the following operations. That is, if L and P are two recursive languages, then the following languages are recursive as well:

• The Kleene star L∗

• The image φ(L) under an e-free homomorphism φ

• The concatenation L ◦ P

• The union L ∪ P

• The intersection L ∩ P

• The complement of L

• The set difference L− P

The last property follows from the fact that the set difference can be expressed in terms of intersection and complement.

25.4 See also

• Recursively enumerable language

• Recursion

25.5 References

• Michael Sipser (1997). “Decidability”. Introduction to the Theory of Computation. PWS Publishing. pp. 151–170. ISBN 0-534-94728-X.

• Chomsky, Noam (1959). “On certain formal properties of grammars”. Information and Control 2 (2): 137–167. doi:10.1016/S0019-9958(59)90362-6.

• Fischer, Michael J.; Rabin, Michael O. (1974). “Super-Exponential Complexity of Presburger Arithmetic”. Proceedings of the SIAM-AMS Symposium in Applied Mathematics 7: 27–41.

• Oppen, Derek C. (1978). “A 2^(2^(2^(pn))) Upper Bound on the Complexity of Presburger Arithmetic” (PDF). J. Comput. Syst. Sci. 16 (3): 323–332. doi:10.1016/0022-0000(78)90021-1.


Chapter 26

Reentrancy (computing)

In computing, a computer program or subroutine is called reentrant if it can be interrupted in the middle of its execution and then safely called again (“re-entered”) before its previous invocations complete execution. The interruption could be caused by an internal action such as a jump or call, or by an external action such as a hardware interrupt or signal. Once the reentered invocation completes, the previous invocations will resume correct execution.

This definition originates from single-threaded programming environments where the flow of control could be interrupted by a hardware interrupt and transferred to an interrupt service routine (ISR). Any subroutine used by the ISR that could potentially have been executing when the interrupt was triggered should be reentrant. Often, subroutines accessible via the operating system kernel are not reentrant. Hence, interrupt service routines are limited in the actions they can perform; for instance, they are usually restricted from accessing the file system and sometimes even from allocating memory.

A subroutine that is directly or indirectly recursive should be reentrant. This policy is partially enforced by structured programming languages. However, a subroutine can fail to be reentrant if it relies on a global variable remaining unchanged, but that variable is modified when the subroutine is recursively invoked.

This definition of reentrancy differs from that of thread-safety in multi-threaded environments. A reentrant subroutine can achieve thread-safety,[1] but being reentrant alone might not be sufficient to be thread-safe in all situations. Conversely, thread-safe code does not necessarily have to be reentrant (see below for examples).

Other terms used for reentrant programs include “pure procedure” or “sharable code”.[2]

26.1 Example

This is an example of a swap() function that fails to be reentrant (as well as failing to be thread-safe). As such, it should not have been used in the interrupt service routine isr():

int t;

void swap(int *x, int *y) {
    t = *x;
    *x = *y;   // hardware interrupt might invoke isr() here!
    *y = t;
}

void isr() {
    int x = 1, y = 2;
    swap(&x, &y);
}

swap() could be made thread-safe by making t thread-local. It still fails to be reentrant, and this will continue to cause problems if isr() is called in the same context as a thread already executing swap().

The following (somewhat contrived) modification of the swap function, which is careful to leave the global data in a consistent state at the time it exits, is perfectly reentrant; however, it is not thread-safe, since it does not ensure the global data is in a consistent state during execution:

int t;

void swap(int *x, int *y) {
    int s;
    s = t;       // save global variable
    t = *x;
    *x = *y;     // hardware interrupt might invoke isr() here!
    *y = t;
    t = s;       // restore global variable
}

void isr() {
    int x = 1, y = 2;
    swap(&x, &y);
}
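The simplest fix, as a sketch, removes the shared state altogether: with the temporary held in a local (automatic) variable, each invocation, whether from a thread or an interrupt, gets its own copy, making the function both reentrant and thread-safe:

```c
/* Reentrant AND thread-safe: the temporary lives on the stack of the
 * current invocation, so interrupts and other threads cannot corrupt it. */
void swap(int *x, int *y) {
    int t = *x;   /* local: each invocation has its own t */
    *x = *y;      /* an interrupt invoking swap() here is harmless */
    *y = t;
}
```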


26.2 Background

Reentrancy is not the same thing as idempotence, in which the function may be called more than once yet generate exactly the same output as if it had only been called once. Generally speaking, a function produces output data based on some input data (though both are optional, in general). Shared data could be accessed by anybody at any time. If data can be changed by anybody (and nobody keeps track of those changes), then there is no guarantee for those who share a datum whether that datum is the same as at any time before.

Data has a characteristic called scope, which describes where in a program the data may be used. Data scope is either global (outside the scope of any function and with an indefinite extent) or local (created each time a function is called and destroyed upon exit).

Local data are not shared by any routines, re-entering or not; therefore, they do not affect re-entrance. Global data are defined outside functions and can be accessed by more than one function, either in the form of global variables (data shared between all functions) or as static variables (data shared by all functions of the same name). In object-oriented programming, global data are defined in the scope of a class and can be private, making them accessible only to functions of that class. There is also the concept of instance variables, where a class variable is bound to a class instance. For these reasons, in object-oriented programming this distinction is usually reserved for the data accessible outside of the class (public), and for the data independent of class instances (static).

Reentrancy is distinct from, but closely related to, thread-safety. A function can be thread-safe and still not reentrant. For example, a function could be wrapped all around with a mutex (which avoids problems in multithreading environments), but if that function were used in an interrupt service routine, it could starve waiting for the first execution to release the mutex. The key to avoiding confusion is that reentrancy refers to only one thread executing; it is a concept from the time when no multitasking operating systems existed.

26.3 Rules for reentrancy

Reentrant code may not hold any static (or global) non-constant data.

Reentrant functions can work with global data. For example, a reentrant interrupt service routine could grab a piece of hardware status to work with (e.g. a serial port read buffer), which is not only global, but volatile. Still, typical use of static variables and global data is not advised, in the sense that only atomic read-modify-write instructions should be used on these variables (it should not be possible for an interrupt or signal to come during the execution of such an instruction).

Reentrant code may not modify its own code. The operating system might allow a process to modify its code. There are various reasons for this (e.g., blitting graphics quickly), but this would cause a problem with reentrancy, since the code might not be the same next time.

It may, however, modify itself if it resides in its own unique memory. That is, if each new invocation uses a different physical machine code location where a copy of the original code is made, it will not affect other invocations even if it modifies itself during execution of that particular invocation (thread).

Reentrant code may not call non-reentrant computer programs or routines.

Multiple levels of 'user/object/process priority' and/or multiprocessing usually complicate the control of reentrant code. It is important to keep track of any accesses or side effects that are done inside a routine designed to be reentrant.

26.4 Reentrant interrupt handler

A “reentrant interrupt handler” is an interrupt handler that re-enables interrupts early in the interrupt handler. This may reduce interrupt latency.[3] In general, while programming interrupt service routines, it is recommended to re-enable interrupts as soon as possible in the interrupt handler. This practice helps to avoid losing interrupts.[4]

26.5 Further examples

In the following piece of C code, neither of the functions f and g is reentrant.


int g_var = 1;

int f() {
    g_var = g_var + 2;
    return g_var;
}

int g() {
    return f() + 2;
}

In the above, f depends on a non-constant global variable g_var; thus, if two threads execute it and access g_var concurrently, then the result varies depending on the timing of the execution. Hence, f is not reentrant. Neither is g; it calls f, which is not reentrant.

These slightly altered versions are reentrant:

int f(int i) {
    return i + 2;
}

int g(int i) {
    return f(i) + 2;
}

In the following piece of C code, the function is thread-safe, but not reentrant.

int function() {
    mutex_lock();
    // ... function body
    mutex_unlock();
}

In the above, function can be called by different threads without any problem. But if the function is used in a reentrant interrupt handler and a second interrupt arises inside the function, the second routine will hang forever. As interrupt servicing can disable other interrupts, the whole system could suffer.

26.6 See also

• Referential transparency

• Idempotence

26.7 References

[1] Kerrisk 2010, p. 657.

[2] Ralston, Anthony, ed. (2000). “Reentrant program”. Encyclopedia of Computer Science. Fourth edition. Nature Publishing Group. pp. 1514–1515.

[3] Andrew N. Sloss, Dominic Symes, Chris Wright, John Rayfield (2004). ARM System Developer’s Guide. p. 342.

[4] John Regehr (2006). Safe and structured use of interrupts in real-time and embedded software (PDF).

26.8 Further reading

• Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press.

26.9 External links

• Article "Use reentrant functions for safer signal handling" by Dipak K Jha

• "Writing Reentrant and Thread-Safe Code,” from AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs, 2nd edition, 1999.

• Jack Ganssle (2001). "Introduction to Reentrancy". Embedded.

• Raymond Chen (2004). The difference between thread-safety and re-entrancy. The Old New Thing.


Chapter 27

Single recursion

This article is about recursive approaches to solving problems. For recursion in computer science acronyms, see Recursive acronym#Computer-related examples.

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).[1] The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[2]

“The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.”[3]

Most computer programming languages support recursion by allowing a function to call itself within the program text. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. Computability theory proves that these recursive-only languages are Turing complete; they are as computationally powerful as Turing-complete imperative languages, meaning they can solve the same kinds of problems as imperative languages even without iterative control structures such as “while” and “for”.

27.1 Recursive functions and algorithms

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the “terminating case”.

The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances, for example some system and server processes, are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by corecursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say “compute the nth term (nth partial sum)”.


Tree created using the Logo programming language and relying heavily on recursion

27.2 Recursive data types

Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

Further information: Algebraic data type


27.2.1 Inductively defined data

Main article: Recursive data type

An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.

Another example of inductive definition is the natural numbers (or positive integers):

A natural number is either 1 or n+1, where n is a natural number.

Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

<expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>)

This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complex arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.

27.2.2 Coinductively defined data and corecursion

Main articles: Coinduction and Corecursion

A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might look like this:

A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via the accessor functions head and tail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.

Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program’s output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program.

27.3 Types of recursion

27.3.1 Single recursion and multiple recursion

Recursion that only contains a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search, or computing the Fibonacci sequence.

Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive: it cannot be replaced by iteration without an explicit stack.

Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively is multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, tracking at each step two successive values – see corecursion: examples. A more sophisticated example is using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.
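The Fibonacci conversion described above can be sketched in C; this is an illustrative example, and the function names are ours, not from the text:

```c
// Naive multiple recursion: two self-references per call, exponential time.
unsigned long fib_naive(unsigned int n) {
    if (n < 2) return n;
    return fib_naive(n - 1) + fib_naive(n - 2);
}

// Single recursion: pass the two most recent values as parameters.
// Only one self-reference remains, so this runs in linear time
// (and, being a tail call, can be eliminated into a loop).
unsigned long fib_pair(unsigned int n, unsigned long prev, unsigned long curr) {
    if (n == 0) return prev;
    return fib_pair(n - 1, curr, prev + curr);
}

unsigned long fib(unsigned int n) {
    return fib_pair(n, 0, 1);  // initial values F(0) = 0, F(1) = 1
}
```

Passing the pair (prev, curr) is exactly the corecursive "build up from the initial values" framing the text mentions.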

27.3.2 Indirect recursion

Main article: Mutual recursion

Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.

Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, it is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions.
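A standard illustration of mutual recursion, not taken from this text, is a pair of functions that decide parity by calling each other:

```c
#include <stdbool.h>

bool is_odd(unsigned int n);  // forward declaration for the mutual call

// is_even and is_odd call each other: from the point of view of either
// function alone, this is indirect recursion.
bool is_even(unsigned int n) {
    if (n == 0) return true;
    return is_odd(n - 1);
}

bool is_odd(unsigned int n) {
    if (n == 0) return false;
    return is_even(n - 1);
}
```

Neither function calls itself directly, yet each recursion eventually returns to the function that started it.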

27.3.3 Anonymous recursion

Main article: Anonymous recursion

Recursion is usually done by explicitly calling a function by name. However, recursion can also be done by implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.

27.3.4 Structural versus generative recursion

See also: Structural recursion

Some authors classify recursion as either “structural” or “generative”. The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:

[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.[4]

Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.


Generative recursion is the alternative:

Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How To Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton’s method, fractals, and adaptive integration.[5]

This distinction is important in proving termination of a function.

• All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.

• Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions – each step generates the new data, such as successive approximation in Newton’s method – and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.

• In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.

• By contrast, generative recursion is when there is not such an obvious loop variant, and termination depends on a function, such as “error of approximation”, that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.

27.4 Recursive programs

27.4.1 Recursive procedures

Factorial

A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:

fact(n) = 1                  if n = 0
fact(n) = n · fact(n − 1)    if n > 0

The function can also be written as a recurrence relation:

bₙ = n · bₙ₋₁

b₀ = 1

This evaluation of the recurrence relation demonstrates the computation that would be performed in evaluating the pseudocode above.

This factorial function can also be described without using recursion by making use of the typical looping constructs found in imperative programming languages.

The imperative code above is equivalent to this mathematical definition using an accumulator variable t:

fact(n) = fact_acc(n, 1)

fact_acc(n, t) = t                        if n = 0
fact_acc(n, t) = fact_acc(n − 1, n · t)   if n > 0

The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.
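The Scheme translation itself is not reproduced in this extract; as a sketch, the accumulator definition above can be written directly in C, alongside the loop it corresponds to (function names are ours):

```c
// Accumulator (tail-recursive) formulation: fact_acc(n, t).
unsigned int fact_acc(unsigned int n, unsigned int t) {
    if (n == 0) return t;
    return fact_acc(n - 1, n * t);  // the pending multiplication moves into t
}

unsigned int fact(unsigned int n) {
    return fact_acc(n, 1);
}

// The equivalent imperative loop, maintaining the same accumulator t.
unsigned int fact_loop(unsigned int n) {
    unsigned int t = 1;
    while (n > 0) {
        t = n * t;
        n = n - 1;
    }
    return t;
}
```

Both versions perform the same sequence of multiplications; the recursive call in fact_acc is in tail position, so a compiler that eliminates tail calls produces essentially the loop.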


Greatest common divisor

The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.

Function definition:

gcd(x, y) = x                           if y = 0
gcd(x, y) = gcd(y, remainder(x, y))     if y > 0

Recurrence relation for greatest common divisor, where x % y expresses the remainder of x / y:

gcd(x, y) = gcd(y, x % y)   if y ≠ 0
gcd(x, 0) = x

The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and the computation shown above shows the steps of evaluation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.

The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.
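The iterative version referred to above is missing from this extract; a plausible C sketch, using the temporary variable the text mentions, is:

```c
// Iterative Euclidean algorithm: state lives entirely in x and y,
// and the call stack never grows.
unsigned int gcd_iter(unsigned int x, unsigned int y) {
    while (y != 0) {
        unsigned int t = y;  // the temporary variable the text refers to
        y = x % y;
        x = t;
    }
    return x;
}
```

Each loop iteration performs exactly the state change that one tail-recursive call gcd(y, x % y) would perform.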

Towers of Hanoi

Main article: Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[6][7] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?

Function definition:

hanoi(n) = 1                         if n = 1
hanoi(n) = 2 · hanoi(n − 1) + 1      if n > 1


Recurrence relation for hanoi:

hₙ = 2hₙ₋₁ + 1

h₁ = 1

Example implementations:

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.[8]
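The example implementations are not reproduced in this extract; a minimal C sketch of both the recursive count of moves and the explicit formula h(n) = 2^n − 1 is (function names are ours):

```c
// Recursive count of moves needed to transfer n disks (n >= 1),
// following the recurrence h(n) = 2 h(n-1) + 1, h(1) = 1.
unsigned long hanoi(unsigned int n) {
    if (n == 1) return 1;
    return 2 * hanoi(n - 1) + 1;
}

// Explicit (closed-form) solution: h(n) = 2^n - 1.
unsigned long hanoi_explicit(unsigned int n) {
    return (1UL << n) - 1;
}
```

The two functions agree for every n, which is exactly the content of the explicit-formula result cited above.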

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array’s size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

int binary_search(int *data, int toFind, int start, int end);  // forward declaration

/* Call binary_search with proper initial conditions.
   INPUT: data is an array of integers SORTED in ASCENDING order,
          toFind is the integer to search for,
          count is the total number of elements in the array
   OUTPUT: result of binary_search */
int search(int *data, int toFind, int count) {
    // Start = 0 (beginning index)
    // End = count - 1 (top index)
    return binary_search(data, toFind, 0, count - 1);
}

/* Binary Search Algorithm.
   INPUT: data is an array of integers SORTED in ASCENDING order,
          toFind is the integer to search for,
          start is the minimum array index,
          end is the maximum array index
   OUTPUT: position of the integer toFind within array data, -1 if not found */
int binary_search(int *data, int toFind, int start, int end) {
    // Get the midpoint.
    int mid = start + (end - start) / 2;  // Integer division
    // Stop condition.
    if (start > end)
        return -1;
    else if (data[mid] == toFind)  // Found?
        return mid;
    else if (data[mid] > toFind)   // Data is greater than toFind, search lower half
        return binary_search(data, toFind, start, mid - 1);
    else                           // Data is less than toFind, search upper half
        return binary_search(data, toFind, mid + 1, end);
}

27.4.2 Recursive data structures (structural recursion)

Main article: Recursive data type

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.

“Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms.”[9]

The examples in this section illustrate what is known as “structural recursion”. This term refers to the fact that the recursive procedures are acting on data that is defined recursively.

As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function’s body consume some immediate piece of a given compound value.[5]


Linked lists

Main article: Linked list

Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The “next” element of struct node is a pointer to another struct node, effectively creating a list type.

struct node {
    int data;           // some integer data
    struct node *next;  // pointer to another struct node
};

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

void list_print(struct node *list) {
    if (list != NULL) {             // base case
        printf("%d ", list->data);  // print integer data followed by a space
        list_print(list->next);     // recursive call on the next node
    }
}

Binary trees

Main article: Binary tree

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).

struct node {
    int data;            // some integer data
    struct node *left;   // pointer to the left subtree
    struct node *right;  // pointer to the right subtree
};

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

// Test if tree_node contains i; return 1 if so, 0 if not.
int tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return 0;  // base case
    else if (tree_node->data == i)
        return 1;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

At most two recursive calls will be made for any given call to tree_contains as defined above.

// Inorder traversal:
void tree_print(struct node *tree_node) {
    if (tree_node != NULL) {             // base case
        tree_print(tree_node->left);     // go left
        printf("%d ", tree_node->data);  // print the integer followed by a space
        tree_print(tree_node->right);    // go right
    }
}

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order.

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, so the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below is an example of a preorder traversal of a filesystem.

import java.io.*;

public class FileSystem {

    public static void main(String[] args) {
        traverse();
    }

    /**
     * Obtains the filesystem roots
     * Proceeds with the recursive filesystem traversal
     */
    private static void traverse() {
        File[] fs = File.listRoots();
        for (int i = 0; i < fs.length; i++) {
            if (fs[i].isDirectory() && fs[i].canRead()) {
                rtraverse(fs[i]);
            }
        }
    }

    /**
     * Recursively traverse a given directory
     *
     * @param fd indicates the starting point of traversal
     */
    private static void rtraverse(File fd) {
        File[] fss = fd.listFiles();
        if (fss == null) return;  // listFiles() returns null on an I/O error
        for (int i = 0; i < fss.length; i++) {
            System.out.println(fss[i]);
            if (fss[i].isDirectory() && fss[i].canRead()) {
                rtraverse(fss[i]);
            }
        }
    }
}


This code blends the lines, at least somewhat, between recursion and iteration. It is, essentially, a recursive implementation, which is the best way to traverse a filesystem. It is also an example of direct and indirect recursion. The method “rtraverse” is purely a direct example; the method “traverse” is the indirect, which calls “rtraverse”. This example needs no “base case” scenario because there will always be some fixed number of files or directories in a given filesystem.

27.5 Implementation issues

In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:

• Wrapper function (at top)

• Short-circuiting the base case, aka “Arm’s-length recursion” (at bottom)

• Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough

On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm’s-length recursion is a special case of this.

27.5.1 Wrapper function

A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.

Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as “level of recursion” or partial computations for memoization, and to handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead separate functions, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.

27.5.2 Short-circuiting the base case

Short-circuiting the base case, also known as arm’s-length recursion, consists of checking the base case before making a recursive call – i.e., checking whether the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is done primarily for efficiency, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short-circuit, and may miss 0; this can be mitigated by a wrapper function.

Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.

Conceptually, short-circuiting can be considered to have either the same base case and recursive step, only checking the base case before the recursion, or a different base case (one step removed from the standard base case) and a more complex recursive step, namely “check valid then recurse”, as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.


Depth-first search

A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see the binary trees section for the standard recursive discussion.

The standard recursive algorithm for a DFS is:

• base case: If current node is Null, return false

• recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children

In short-circuiting, this is instead:

• check value of current node, return true if match,

• otherwise, on children, if not Null, then recurse.

In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).

In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.

In C, the standard recursive algorithm may be implemented as:

bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;  // base case
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

The short-circuited algorithm may be implemented as:

bool tree_contains_do(struct node *tree_node, int i);  // forward declaration

// Wrapper function to handle empty tree
bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;  // empty tree
    else
        return tree_contains_do(tree_node, i);  // call auxiliary function
}

// Assumes tree_node != NULL
bool tree_contains_do(struct node *tree_node, int i) {
    if (tree_node->data == i)
        return true;  // found
    else  // recurse
        return (tree_node->left  && tree_contains_do(tree_node->left,  i)) ||
               (tree_node->right && tree_contains_do(tree_node->right, i));
}

Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is made only if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a bool, so the overall expression evaluates to a bool. This is a common idiom in recursive short-circuiting. It is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to check the right child only if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.

27.5.3 Hybrid algorithm

Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.
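As a sketch of this technique (the threshold value and helper names are ours, not from the text), a hybrid merge sort in C might switch to insertion sort for small runs:

```c
#include <string.h>

#define SMALL 16  // threshold below which insertion sort takes over (tunable)

// Insertion sort on a[lo..hi]; efficient for short runs.
static void insertion_sort(int *a, int lo, int hi) {
    for (int i = lo + 1; i <= hi; i++) {
        int key = a[i], j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

// Merge the sorted halves a[lo..mid] and a[mid+1..hi] using scratch space tmp.
static void merge(int *a, int *tmp, int lo, int mid, int hi) {
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo + 1) * sizeof(int));
}

// Hybrid merge sort: recurse while the run is large, else insertion sort.
void hybrid_merge_sort(int *a, int *tmp, int lo, int hi) {
    if (hi - lo + 1 <= SMALL) { insertion_sort(a, lo, hi); return; }
    int mid = lo + (hi - lo) / 2;
    hybrid_merge_sort(a, tmp, lo, mid);
    hybrid_merge_sort(a, tmp, mid + 1, hi);
    merge(a, tmp, lo, mid, hi);
}
```

The cutoff of 16 is a typical but arbitrary choice; real implementations tune it empirically.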

27.6 Recursion versus iteration

Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead, and sometimes explicit iteration is not available.

Compare the templates to compute xₙ defined by xₙ = f(n, xₙ₋₁) from x_base: for an imperative language the overhead is to define the function, and for a functional language the overhead is to define the accumulator variable x.

For example, the factorial function may be implemented iteratively in C by assigning to a loop index variable and an accumulator variable, rather than passing arguments and returning values by recursion:

unsigned int factorial(unsigned int n) {
    unsigned int product = 1;  // empty product is 1
    while (n) {
        product *= n;
        --n;
    }
    return product;
}

27.6.1 Expressive power

Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program’s runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.[10][11]
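As an illustration of this transformation (our example, not from the text), the tree_contains function from earlier can be rewritten with an explicitly managed stack of pending nodes:

```c
#include <stddef.h>

struct node { int data; struct node *left; struct node *right; };

// Iterative version of tree_contains: the recursion is simulated with an
// explicit stack. The fixed capacity is enough for this sketch; a real
// implementation would grow the stack dynamically.
int tree_contains_iter(struct node *root, int i) {
    struct node *stack[64];
    int top = 0;
    if (root != NULL) stack[top++] = root;
    while (top > 0) {
        struct node *n = stack[--top];  // pop the next pending node
        if (n->data == i) return 1;
        if (n->left  != NULL) stack[top++] = n->left;   // defer the children,
        if (n->right != NULL) stack[top++] = n->right;  // as recursion would
    }
    return 0;
}
```

The stack plays exactly the role the call stack plays in the recursive version: it remembers which subtrees still have to be visited.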

Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and do loops are routinely rewritten in recursive form in functional languages.[12][13] However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack.

27.6.2 Performance issues

In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.

As a concrete example, the difference in performance between recursive and iterative implementations of the “factorial” example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

27.6.3 Stack space

In some programming languages, the stack space available to a thread is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language.[14] Note the caveat below regarding the special case of tail recursion.

27.6.4 Multiply recursive problems

Multiply recursive problems are inherently recursive, because of the prior state they need to track. One example is tree traversal as in depth-first search; contrast this with list traversal and linear search in a list, which is singly recursive and thus naturally iterative. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.


27.7 Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the “for” and “while” loops.

The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller’s return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.

27.8 Order of execution

In the simple case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion, before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion depth has been reached. Consider this example:

27.8.1 Function 1

void recursiveFunction(int num) {
    printf("%d\n", num);
    if (num < 4)
        recursiveFunction(num + 1);
}

Called as recursiveFunction(0), this prints 0 1 2 3 4: each number is printed before its recursive call is made.

27.8.2 Function 2 with swapped lines

void recursiveFunction(int num) {
    if (num < 4)
        recursiveFunction(num + 1);
    printf("%d\n", num);
}

Called as recursiveFunction(0), this prints 4 3 2 1 0: each number is printed only after its recursive call returns.

27.9 Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed in a recurrence relation in Big O notation. The recurrence can usually then be simplified into a single Big O term.


27.9.1 Shortcut rule

Main article: Master theorem

If the time-complexity of the function is in the form

T(n) = a · T(n/b) + O(n^k)

then the Big O of the time-complexity is:

• If a > b^k, then the time-complexity is O(n^(log_b a))

• If a = b^k, then the time-complexity is O(n^k · log n)

• If a < b^k, then the time-complexity is O(n^k)

where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and n^k represents the work the function does independent of any recursion (e.g. partitioning, recombining) at each level of recursion.
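As a worked check of the three cases (standard examples, not from the text):

```latex
\begin{align*}
\text{Merge sort:}   \quad & T(n) = 2\,T(n/2) + O(n)   & a = 2,\ b = 2,\ k = 1,\ a = b^k
  &\ \Rightarrow\ O(n^k \log n) = O(n \log n) \\
\text{Binary search:}\quad & T(n) = T(n/2) + O(1)      & a = 1,\ b = 2,\ k = 0,\ a = b^k
  &\ \Rightarrow\ O(n^0 \log n) = O(\log n) \\
\text{Block matrix multiply:}\quad & T(n) = 8\,T(n/2) + O(n^2) & a = 8,\ b = 2,\ k = 2,\ a > b^k
  &\ \Rightarrow\ O(n^{\log_2 8}) = O(n^3)
\end{align*}
```

In each row, comparing a against b^k selects the matching bullet above and yields the stated bound.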

27.10 See also

• Ackermann function

• Corecursion

• Functional programming

• Hierarchical and recursive queries in SQL

• Kleene–Rosser paradox

• McCarthy 91 function

• Memoization

• μ-recursive function

• Open recursion

• Primitive recursive function

• Recursion

• Sierpiński curve

• Takeuchi function

27.11 Notes and references

[1] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1: Recurrent Problems.

[2] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). p. 427.

[3] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 126.

[4] Felleisen, Matthias; Robert Bruce Findler; Matthew Flatt; Shriram Krishnamurthi (2001). How to Design Programs: An Introduction to Computing and Programming. Cambridge, MA: MIT Press. Part V: “Generative Recursion”.

[5] Felleisen, Matthias (2002). “Developing Interactive Web Programs”. In Jeuring, Johan. Advanced Functional Programming: 4th International School. Oxford, UK: Springer. p. 108.

[6] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1, Section 1.1: The Tower of Hanoi.


[7] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 427–430: The Tower of Hanoi.

[8] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 447–448: An Explicit Formula for the Tower of Hanoi Sequence.

[9] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 127.

[10] Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 79, ISBN 9781430232384.

[11] Drozdek, Adam (2012), Data Structures and Algorithms in C++ (4th ed.), Cengage Learning, p. 197, ISBN 9781285415017.

[12] Shivers, Olin. “The Anatomy of a Loop – A story of scope and control” (PDF). Georgia Institute of Technology. Retrieved 2012-09-03.

[13] Lambda the Ultimate. “The Anatomy of a Loop”. Lambda the Ultimate. Retrieved 2012-09-03.

[14] “27.1. sys — System-specific parameters and functions — Python v2.7.3 documentation”. Docs.python.org. Retrieved 2012-09-03.

27.12 Further reading

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

27.13 External links

• Harold Abelson and Gerald Sussman: “Structure and Interpretation of Computer Programs”

• Jonathan Bartlett: “Mastering Recursive Programming”

• David S. Touretzky: “Common Lisp: A Gentle Introduction to Symbolic Computation”

• Matthias Felleisen: “How To Design Programs: An Introduction to Computing and Programming”

• Owen L. Astrachan: “Big-Oh for Recursive Functions: Recurrence Relations”


Chapter 28

Tail call

In computer science, a tail call is a subroutine call performed as the final action of a procedure. If a tail call might lead to the same subroutine being called again later in the call chain, the subroutine is said to be tail-recursive, which is a special case of recursion. Tail recursion (or tail-end recursion) is particularly useful, and often easy to handle in implementations.

Tail calls can be implemented without adding a new stack frame to the call stack. Most of the frame of the current procedure is no longer needed, and it can be replaced by the frame of the tail call, modified as appropriate (similar to overlay for processes, but for function calls). The program can then jump to the called subroutine. Producing such code instead of a standard call sequence is called tail call elimination. Tail call elimination allows procedure calls in tail position to be implemented as efficiently as goto statements, thus allowing efficient structured programming. In the words of Guy L. Steele, “in general procedure calls may be usefully thought of as GOTO statements which also pass parameters, and can be uniformly coded as [machine code] JUMP instructions.”

Traditionally, tail call elimination is optional. However, in functional programming languages, tail call elimination is often guaranteed by the language standard, and this guarantee allows using recursion, in particular tail recursion, in place of loops. In such cases, it is not correct (though it may be customary) to refer to it as an optimization. The special case of tail-recursive calls, when a function calls itself, may be more amenable to call elimination than general tail calls.

28.1 Description

When a function is called, the computer must “remember” the place it was called from, the return address, so that it can return to that location with the result once the call is complete. Typically, this information is saved on the call stack, a simple list of return locations in order of the times that the call locations they describe were reached. For tail calls, there is no need to remember the place we are calling from; instead, we can perform tail call elimination by leaving the stack alone (except possibly for function arguments and local variables[1]), and the newly called function will return its result directly to the original caller. Note that the tail call doesn't have to appear lexically after all other statements in the source code; it is only important that the calling function return immediately after the tail call, returning the tail call's result if any, since the calling function will never get a chance to do anything after the call if the optimization is performed.

For non-recursive function calls, this is usually an optimization that saves little time and space, since there are not that many different functions available to call. When dealing with recursive or mutually recursive functions where recursion happens through tail calls, however, the stack space and the number of returns saved can grow to be very significant, since a function can call itself, directly or indirectly, creating a new call stack frame for each iteration. In fact, tail call elimination often asymptotically reduces stack space requirements from linear, or O(n), to constant, or O(1). Tail call elimination is thus required by the standard definitions of some programming languages, such as Scheme,[2][3] and languages in the ML family among others. In the case of Scheme, the language definition formalizes the intuitive notion of tail position exactly, by specifying which syntactic forms allow having results in tail context.[4] Implementations allowing an unlimited number of tail calls to be active at the same moment, thanks to tail call elimination, can also be called 'properly tail-recursive'.[2]

Besides space and execution efficiency, tail call elimination is important in the functional programming idiom known as continuation passing style (CPS), which would otherwise quickly run out of stack space.

28.2 Syntactic form

A tail call can be located just before the syntactical end of a subroutine:

    function foo(data) {
        a(data);
        return b(data);
    }

Here, both a(data) and b(data) are calls, but b is the last thing the procedure executes before returning and is thus in tail position. However, not all tail calls are necessarily located at the syntactical end of a subroutine. Consider:

    function bar(data) {
        if ( a(data) ) {
            return b(data);
        }
        return c(data);
    }

Here, both calls to b and c are in tail position. This is because each of them lies at the end of its respective branch, even though the first one is not syntactically at the end of bar's body. Now consider this code:

    function foo1(data) {
        return a(data) + 1;
    }

    function foo2(data) {
        var ret = a(data);
        return ret;
    }

    function foo3(data) {
        var ret = a(data);
        return (ret == 0) ? 1 : ret;
    }

Here, the call to a(data) is in tail position in foo2, but it is not in tail position either in foo1 or in foo3, because control must return to the caller to allow it to inspect or modify the return value before returning it.

28.3 Example programs

Take this Scheme program as an example:

    ;; factorial : number -> number
    ;; to calculate the product of all positive
    ;; integers less than or equal to n.
    (define (factorial n)
      (if (= n 0)
          1
          (* n (factorial (- n 1)))))

This program is not written in a tail recursion style. Now take this Scheme program as an example:

    ;; factorial : number -> number
    ;; to calculate the product of all positive
    ;; integers less than or equal to n.
    (define (factorial n)
      (let fact ([i n] [acc 1])
        (if (zero? i)
            acc
            (fact (- i 1) (* acc i)))))

The inner procedure fact calls itself last in the control flow. This allows an interpreter or compiler to reorganize the execution which would ordinarily look like this:

    call factorial (3)
     call fact (3 1)
      call fact (2 3)
       call fact (1 6)
        call fact (0 6)
        return 6
       return 6
      return 6
     return 6
    return 6

into the more efficient variant, in terms of both space and time:

    call factorial (3)
    call fact (3 1)
    replace arguments with (2 3)
    replace arguments with (1 6)
    replace arguments with (0 6)
    return 6
    return 6

This reorganization saves space because no state except for the calling function's address needs to be saved, either on the stack or on the heap, and the call stack frame for fact is reused for the intermediate results storage. This also means that the programmer need not worry about running out of stack or heap space for extremely deep recursions. It is also worth noting that, in typical implementations, the tail-recursive variant will be substantially faster than the other variant, but only by a constant factor.

Some programmers working in functional languages will rewrite recursive code to be tail-recursive so they can take advantage of this feature. This often requires the addition of an “accumulator” argument (acc in the above example) to the function. In some cases (such as filtering lists) and in some languages, full tail recursion may require a function that was previously purely functional to be written such that it mutates references stored in other variables.
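The same accumulator rewrite can be sketched in Python (an illustration of the code transformation only; stock Python does not perform tail call elimination, so both versions still grow the interpreter stack):

```python
def factorial(n):
    # Plain recursion: the multiplication happens after the
    # recursive call returns, so the call is not in tail position.
    if n == 0:
        return 1
    return n * factorial(n - 1)

def factorial_acc(n, acc=1):
    # Tail-recursive form: the recursive call is the final action;
    # the pending multiplication is carried in the accumulator.
    if n == 0:
        return acc
    return factorial_acc(n - 1, acc * n)
```

Both compute the same function; only the second has its recursive call in tail position, which is what permits a frame-reusing implementation.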


28.4 Tail recursion modulo cons

Tail recursion modulo cons is a generalization of tail recursion optimization introduced by David H. D. Warren[5] in the context of compilation of Prolog, seen as an explicitly set-once language. It was described (though not named) by Daniel P. Friedman and David S. Wise in 1974[6] as a LISP compilation technique. As the name suggests, it applies when the only operation left to perform after a recursive call is to prepend a known value in front of the list returned from it (or to perform a constant number of simple data-constructing operations, in general). This call would thus be a tail call save for the said cons operation. But prefixing a value at the start of a list on exit from a recursive call is the same as appending this value at the end of the growing list on entry into the recursive call, thus building the list as a side effect, as if in an implicit accumulator parameter.

In a tail-recursive translation, such a call is transformed into first creating a new list node and setting its first field, and then making a tail call with the pointer to the node's rest field as argument, to be filled recursively.

As another example, consider a recursive function in C that duplicates a linked list. In its natural form the function is not tail-recursive, because control returns to the caller after the recursive call duplicates the rest of the input list. Even if it were to allocate the head node before duplicating the rest, it would still need to plug the result of the recursive call into the next field after the call.[lower-alpha 1] So the function is almost tail-recursive. Warren's method pushes the responsibility of filling the next field into the recursive call itself, which thus becomes a tail call. The callee now appends to the end of the growing list, rather than have the caller prepend to the beginning of the returned list. The work is now done on the way forward from the list's start, before the recursive call which then proceeds further, as opposed to backward from the list's end, after the recursive call has returned its result. It is thus similar to the accumulating parameter technique, turning a recursive computation into an iterative one.

Characteristically for this technique, a parent frame is created here on the execution call stack, which calls the tail-recursive callee, which can reuse its own call frame if the tail-call optimization is present. The properly tail-recursive implementation can now be converted into an explicitly iterative form, i.e. an accumulating loop.
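The list-duplication example described above can be sketched in Python (the article gives it in C; the class and function names here are ours, and the second version shows the end result of the transformation as an explicit accumulating loop):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def duplicate(ls):
    # Not tail-recursive: the node is constructed *after* the
    # recursive call returns, so the call is not the final action.
    if ls is None:
        return None
    return Node(ls.value, duplicate(ls.next))

def duplicate_loop(ls):
    # Accumulating-loop form of Warren's transformation: each step
    # allocates a node first, then fills in its next field on the
    # way forward through the list.
    if ls is None:
        return None
    head = Node(ls.value)
    tail = head
    ls = ls.next
    while ls is not None:
        tail.next = Node(ls.value)
        tail = tail.next
        ls = ls.next
    return head
```

The loop version builds the copy front-to-back, exactly as the tail-recursive-modulo-cons translation would, using constant stack space.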

28.5 History

In a paper delivered to the ACM conference in Seattle in 1977, Guy L. Steele summarized the debate over the GOTO and structured programming, and observed that procedure calls in the tail position of a procedure can be best treated as a direct transfer of control to the called procedure, typically eliminating unnecessary stack manipulation operations.[7] Since such “tail calls” are very common in Lisp, a language where procedure calls are ubiquitous, this form of optimization considerably reduces the cost of a procedure call compared to other implementations. Steele argued that poorly implemented procedure calls had led to an artificial perception that the GOTO was cheap compared to the procedure call. Steele further argued that “in general procedure calls may be usefully thought of as GOTO statements which also pass parameters, and can be uniformly coded as [machine code] JUMP instructions”, with the machine code stack manipulation instructions “considered an optimization (rather than vice versa!)".[7] Steele cited evidence that well optimized numerical algorithms in Lisp could execute faster than code produced by then-available commercial Fortran compilers because the cost of a procedure call in Lisp was much lower. In Scheme, a Lisp dialect developed by Steele with Gerald Jay Sussman, tail call elimination is mandatory.[8]

28.6 Implementation methods

Tail recursion is important to some high-level languages, especially functional and logic languages and members of the Lisp family. In these languages, tail recursion is the most commonly used way (and sometimes the only way available) of implementing iteration. The language specification of Scheme requires that tail calls are to be optimized so as not to grow the stack. Tail calls can be made explicitly in Perl, with a variant of the “goto” statement that takes a function name: goto &NAME;[9]

Implementing tail call elimination only for tail recursion, rather than for all tail calls, is significantly easier. For example, in the Java Virtual Machine (JVM), tail-recursive calls can be eliminated (as this reuses the existing call stack), but general tail calls cannot be (as this changes the call stack).[10][11] As a result, functional languages such as Scala that target the JVM can efficiently implement direct tail recursion, but not mutual tail recursion.

Various implementation methods are available.

28.6.1 In assembly

Tail calls are often optimized by interpreters and compilers of functional programming and logic programming languages to more efficient forms of iteration. For example, Scheme programmers commonly express while loops as calls to procedures in tail position and rely on the Scheme compiler or interpreter to substitute the tail calls with more efficient jump instructions.[12]

For compilers generating assembly directly, tail call elimination is easy: it suffices to replace a call opcode with a jump one, after fixing parameters on the stack. From a compiler's perspective, the first example above is initially translated into pseudo-assembly language (in fact, this is valid x86 assembly):

    foo:
        call B
        call A
        ret

Tail call elimination replaces the last two lines with a single jump instruction:

    foo:
        call B
        jmp  A

After subroutine A completes, it will then return directly to the return address of foo, omitting the unnecessary ret statement.

Typically, the subroutines being called need to be supplied with parameters. The generated code thus needs to make sure that the call frame for A is properly set up before jumping to the tail-called subroutine. For instance, on platforms where the call stack does not just contain the return address, but also the parameters for the subroutine, the compiler may need to emit instructions to adjust the call stack. On such a platform, consider the code:

    function foo(data1, data2)
        B(data1)
        return A(data2)

where data1 and data2 are parameters. A compiler might translate that to the following pseudo assembly code:[lower-alpha 2]

    foo:
        mov  reg,[sp+data1]   ; fetch data1 from stack (sp) parameter into a scratch register
        push reg              ; put data1 on stack where B expects it
        call B                ; B uses data1
        pop                   ; remove data1 from stack
        mov  reg,[sp+data2]   ; fetch data2 from stack (sp) parameter into a scratch register
        push reg              ; put data2 on stack where A expects it
        call A                ; A uses data2
        pop                   ; remove data2 from stack
        ret

A tail call optimizer could then change the code to:

    foo:
        mov  reg,[sp+data1]   ; fetch data1 from stack (sp) parameter into a scratch register
        push reg              ; put data1 on stack where B expects it
        call B                ; B uses data1
        pop                   ; remove data1 from stack
        mov  reg,[sp+data2]   ; fetch data2 from stack (sp) parameter into a scratch register
        mov  [sp+data1],reg   ; put data2 where A expects it
        jmp  A                ; A uses data2 and returns immediately to caller

This changed code is more efficient both in terms of execution speed and use of stack space.

28.6.2 Through trampolining

However, since many Scheme compilers use C as an intermediate target code, the problem comes down to coding tail recursion in C without growing the stack, even if the back-end compiler does not optimize tail calls. Many implementations achieve this by using a device known as a trampoline, a piece of code that repeatedly calls functions. All functions are entered via the trampoline. When a function has to call another, instead of calling it directly it provides the address of the function to be called, the arguments to be used, and so on, to the trampoline. This ensures that the C stack does not grow and iteration can continue indefinitely.

It is possible to implement trampolining using higher-order functions in languages that support them, such as Groovy, Visual Basic .NET and C#.[13]
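A minimal trampoline can be sketched in Python using higher-order functions (the names trampoline, even, and odd are ours, not from any particular implementation; functions return thunks instead of making tail calls directly):

```python
def trampoline(fn, *args):
    # Repeatedly invoke returned thunks until a non-callable result
    # appears, so the mutual recursion runs in constant stack depth.
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def even(n):
    # Trampolined style: instead of tail-calling odd() directly,
    # return a zero-argument thunk for the trampoline to invoke.
    return True if n == 0 else (lambda: odd(n - 1))

def odd(n):
    return False if n == 0 else (lambda: even(n - 1))
```

Calling trampoline(even, 100000) bounces through 100000 thunks without deepening the Python stack, whereas a directly mutually recursive definition would exceed CPython's default recursion limit.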

Using a trampoline for all function calls is rather more expensive than the normal C function call, so at least one Scheme compiler, Chicken, uses a technique first described by Henry Baker from an unpublished suggestion by Andrew Appel,[14] in which normal C calls are used but the stack size is checked before every call. When the stack reaches its maximum permitted size, objects on the stack are garbage-collected using the Cheney algorithm by moving all live data into a separate heap. Following this, the stack is unwound (“popped”) and the program resumes from the state saved just before the garbage collection. Baker says “Appel's method avoids making a large number of small trampoline bounces by occasionally jumping off the Empire State Building.”[14] The garbage collection ensures that mutual tail recursion can continue indefinitely. However, this approach requires that no C function call ever returns, since there is no guarantee that its caller's stack frame still exists; therefore, it involves a much more dramatic internal rewriting of the program code: continuation-passing style.

28.7 Relation to while construct

Tail recursion can be related to the while control flow operator by means of a transformation such as the following:

    function foo(x) is:
        if predicate(x) then
            return foo(bar(x))
        else
            return baz(x)

The above construct transforms to:

    function foo(x) is:
        while predicate(x) do:
            x ← bar(x)
        return baz(x)

In the preceding, x may be a tuple involving more than one variable: if so, care must be taken in designing the assignment statement x ← bar(x) so that dependencies are respected. One may need to introduce auxiliary variables or use a swap construct.

More general uses of tail recursion may be related to control flow operators such as break and continue, as in the following:

    function foo(x) is:
        if p(x) then
            return bar(x)
        else if q(x) then
            return baz(x)
        ...
        else if t(x) then
            return foo(quux(x))
        ...
        else
            return foo(quuux(x))

where bar and baz are direct return calls, whereas quux and quuux involve a recursive tail call to foo. A translation is given as follows:

    function foo(x) is:
        do:
            if p(x) then
                x ← bar(x)
                break
            else if q(x) then
                x ← baz(x)
                break
            ...
            else if t(x) then
                x ← quux(x)
                continue
            ...
            else
                x ← quuux(x)
                continue
        loop
        return x
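As a concrete illustration of the first transformation (our example, not the article's), a tail-recursive greatest-common-divisor function maps directly onto a while loop:

```python
def gcd_recursive(a, b):
    # Tail-recursive Euclidean algorithm: the recursive call
    # is the final action of the function.
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    # Mechanical translation to a while loop: the recursive call
    # becomes a simultaneous reassignment of the parameters.
    while b != 0:
        a, b = b, a % b
    return a
```

Here x is the tuple (a, b), and Python's simultaneous assignment `a, b = b, a % b` handles exactly the dependency concern noted for the assignment x ← bar(x).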

28.8 By language

• JavaScript - ECMAScript 6.0 will have tail calls.[15]

• Lua - Tail recursion is performed by the reference implementation.[16]

• Python - Stock Python implementations do not perform tail-call optimization, though a third-party module is available to do this.[17] Language inventor Guido van Rossum contends that stack traces are altered by tail call elimination, making debugging harder, and prefers that programmers use explicit iteration instead.[18]

• Scheme - Required by the language definition.[19][20]

• Tcl - Since Tcl 8.6, Tcl has a tailcall command.[21]

28.9 See also

• Course-of-values recursion

• Recursion (computer science)

• Inline expansion

• Leaf subroutine

• Corecursion


28.10 Notes

[1] Like this:

    if (ls != NULL) {
        head = malloc(sizeof *head);
        head->value = ls->value;
        head->next = duplicate(ls->next);
    }

[2] The call instruction first pushes the current code location onto the stack and then performs an unconditional jump to the code location indicated by the label. The ret instruction first pops a code location off the stack, then performs an unconditional jump to the retrieved code location.

28.11 References

[1] “recursion - Stack memory usage for tail calls - Theoretical Computer Science”. Cstheory.stackexchange.com. 2011-07-29. Retrieved 2013-03-21.

[2] “Revised^6 Report on the Algorithmic Language Scheme”. R6rs.org. Retrieved 2013-03-21.

[3] “Revised^6 Report on the Algorithmic Language Scheme - Rationale”. R6rs.org. Retrieved 2013-03-21.

[4] “Revised^6 Report on the Algorithmic Language Scheme”. R6rs.org. Retrieved 2013-03-21.

[5] D. H. D. Warren, DAI Research Report 141, University of Edinburgh, 1980.

[6] Daniel P. Friedman and David S. Wise, Technical Report TR19: Unwinding Structured Recursions into Iterations, IndianaUniversity, Dec. 1974.

[7] Steele, Guy Lewis (1977). “Debunking the “expensive procedure call” myth or, procedure call implementations considered harmful or, LAMBDA: The Ultimate GOTO”. Proceedings of the 1977 annual conference - ACM '77. pp. 153–162. doi:10.1145/800179.810196. ISBN 978-1-4503-2308-6. hdl:1721.1/5753.

[8] R5RS Sec. 3.5, Richard Kelsey, William Clinger, Jonathan Rees et al. (August 1998). “Revised5 Report on the Algorithmic Language Scheme”. Higher-Order and Symbolic Computation 11 (1): 7–105. doi:10.1023/A:1010051815785.

[9] “goto”. perldoc.perl.org. Retrieved 2013-03-21.

[10] "What is difference between tail calls and tail recursion?", Stack Overflow

[11] "What limitations does the JVM impose on tail-call optimization", Programmers Stack Exchange

[12] Probst, Mark (20 July 2000). “proper tail recursion for gcc”. GCC Project. Retrieved 10 March 2015.

[13] Samuel Jack, Bouncing on your tail. Functional Fun. April 9, 2008.

[14] Henry Baker, “CONS Should Not CONS Its Arguments, Part II: Cheney on the M.T.A.”

[15] http://bdadam.com/blog/video-douglas-crockford-about-the-new-good-parts.html

[16] http://www.lua.org/manual/5.2/manual.html#3.4.9

[17] https://github.com/baruchel/tco

[18] http://neopythonic.blogspot.com/2009/04/tail-recursion-elimination.html

[19] http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-6.html#%_sec_3.5

[20] http://www.r6rs.org/final/html/r6rs/r6rs-Z-H-8.html#node_sec_5.11

[21] http://www.tcl.tk/man/tcl/TclCmd/tailcall.htm

This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the “relicensing” terms of the GFDL, version 1.3 or later.


Chapter 29

Transfinite induction

Transfinite induction is an extension of mathematical induction to well-ordered sets, for example to sets of ordinal numbers or cardinal numbers.

Let P(α) be a property defined for all ordinals α. Suppose that whenever P(β) is true for all β < α, then P(α) is also true (including the case that P(0) is true, given the vacuously true statement that P(α) is true for all α ∈ ∅). Then transfinite induction tells us that P is true for all ordinals.

That is, if P(α) is true whenever P(β) is true for all β < α, then P(α) is true for all α. Or, more practically: in order to prove a property P for all ordinals α, one can assume that it is already known for all smaller β < α.

Usually the proof is broken down into three cases:

• Zero case: Prove that P (0) is true.

• Successor case: Prove that for any successor ordinal α+1, P(α+1) follows from P(α) (and, if necessary, P(β)for all β < α).

• Limit case: Prove that for any limit ordinal λ, P(λ) follows from [P(β) for all β < λ].

Notice that all three cases are identical except for the type of ordinal considered. They do not formally need to be considered separately, but in practice the proofs are typically so different as to require separate presentations. Zero is sometimes considered a limit ordinal and then may sometimes be treated in proofs in the same case as limit ordinals.
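In symbols, the three cases together establish the single induction principle stated earlier (our restatement):

if ∀α [ (∀β < α, P(β)) → P(α) ], then ∀α P(α),

with the zero, successor, and limit cases simply being the three ways the hypothesis ∀β < α, P(β) can arise, depending on whether α is 0, a successor, or a limit.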

29.1 Transfinite recursion

Transfinite recursion is similar to transfinite induction; however, instead of proving that something holds for all ordinal numbers, we construct a sequence of objects, one for each ordinal.

As an example, a basis for a (possibly infinite-dimensional) vector space can be created by choosing a vector v0 and, for each ordinal α, choosing a vector that is not in the span of the vectors {vβ | β < α}. This process stops when no vector can be chosen.

More formally, we can state the Transfinite Recursion Theorem as follows:

• Transfinite Recursion Theorem (version 1). Given a class function[1] G: V → V (where V is the class of all sets), there exists a unique transfinite sequence F: Ord → V (where Ord is the class of all ordinals) such that

F(α) = G(F ↾ α) for all ordinals α, where ↾ denotes the restriction of F's domain to ordinals < α.

As in the case of induction, we may treat different types of ordinals separately: another formulation of transfinite recursion is the following:



Figure: Representation of the ordinal numbers up to ω^ω. Each turn of the spiral represents one power of ω. Transfinite induction requires proving a base case (used for 0), a successor case (used for those ordinals which have a predecessor), and a limit case (used for ordinals which don't have a predecessor).

• Transfinite Recursion Theorem (version 2). Given a set g1, and class functions G2, G3, there exists a unique function F: Ord → V such that

• F(0) = g1,

• F(α + 1) = G2(F(α)), for all α ∈ Ord,

• F(λ) = G3(F ↾ λ), for all limit λ ≠ 0.

Note that we require the domains of G2, G3 to be broad enough to make the above properties meaningful. The uniqueness of the sequence satisfying these properties can be proven using transfinite induction.

More generally, one can define objects by transfinite recursion on any well-founded relation R. (R need not even be a set; it can be a proper class, provided it is a set-like relation; that is, for any x, the collection of all y such that y R x must be a set.)

29.2 Relationship to the axiom of choice

Proofs or constructions using induction and recursion often use the axiom of choice to produce a well-ordered relation that can be treated by transfinite induction. However, if the relation in question is already well-ordered, one can often use transfinite induction without invoking the axiom of choice.[2] For example, many results about Borel sets are proved by transfinite induction on the ordinal rank of the set; these ranks are already well-ordered, so the axiom of choice is not needed to well-order them.

The following construction of the Vitali set shows one way that the axiom of choice can be used in a proof by transfinite induction:

First, well-order the real numbers (this is where the axiom of choice enters via the well-ordering theorem), giving a sequence ⟨rα | α < β⟩, where β is an ordinal with the cardinality of the continuum. Let v0 equal r0. Then let v1 equal rα₁, where α₁ is least such that rα₁ − v0 is not a rational number. Continue; at each step use the least real from the r sequence that does not have a rational difference with any element thus far constructed in the v sequence. Continue until all the reals in the r sequence are exhausted. The final v sequence will enumerate the Vitali set.

The above argument uses the axiom of choice in an essential way at the very beginning, in order to well-order the reals. After that step, the axiom of choice is not used again.

Other uses of the axiom of choice are more subtle. For example, a construction by transfinite recursion frequently will not specify a unique value for Aα₊₁, given the sequence up to α, but will specify only a condition that Aα₊₁ must satisfy, and argue that there is at least one set satisfying this condition. If it is not possible to define a unique example of such a set at each stage, then it may be necessary to invoke (some form of) the axiom of choice to select one such at each step. For inductions and recursions of countable length, the weaker axiom of dependent choice is sufficient. Because there are models of Zermelo–Fraenkel set theory of interest to set theorists that satisfy the axiom of dependent choice but not the full axiom of choice, the knowledge that a particular proof only requires dependent choice can be useful.

29.3 See also

• Mathematical induction

• ∈-induction

• Well-founded induction

29.4 Notes

[1] A class function is a rule (specifically, a logical formula) assigning each element in the lefthand class to an element in the righthand class. It is not a function because its domain and codomain are not sets.

[2] In fact, the domain of the relation does not even need to be a set. It can be a proper class, provided that the relation R is set-like: for any x, the collection of all y such that y R x must be a set.

29.5 References

• Suppes, Patrick (1972), “Section 7.1”, Axiomatic set theory, Dover Publications, ISBN 0-486-61630-4


29.6 External links

• Emerson, Jonathan; Lezama, Mark; and Weisstein, Eric W., “Transfinite Induction”, MathWorld.


Chapter 30

Tree traversal

In computer science, tree traversal (also known as tree search) is a form of graph traversal and refers to the process of visiting (examining and/or updating) each node in a tree data structure, exactly once, in a systematic way. Such traversals are classified by the order in which the nodes are visited. The following algorithms are described for a binary tree, but they may be generalized to other trees as well.

30.1 Types

Figure: Pre-order: F, B, A, D, C, E, G, I, H

Figure: In-order: A, B, C, D, E, F, G, H, I

Compared to linear data structures like linked lists and one-dimensional arrays, which have a canonical method of traversal (namely in linear order), tree structures can be traversed in many different ways. Starting at the root of a binary tree, there are three main steps that can be performed, and the order in which they are performed defines the traversal type. These steps (in no particular order) are: performing an action on the current node (referred to as “visiting” the node), traversing to the left child node, and traversing to the right child node.

Traversing a tree involves iterating over all nodes in some manner. Because from a given node there is more than one possible next node (it is not a linear data structure), then, assuming sequential computation (not parallel), some nodes must be deferred – stored in some way for later visiting. This is often done via a stack (LIFO) or queue (FIFO). As a tree is a self-referential (recursively defined) data structure, traversal can be defined by recursion or, more subtly, corecursion, in a very natural and clear fashion; in these cases the deferred nodes are stored implicitly in the call stack.

The name given to a particular style of traversal comes from the order in which nodes are visited. Most simply, does one go down first (depth-first: first child, then grandchild before second child) or across first (breadth-first: first child, then second child before grandchildren)? Depth-first traversal is further classified by position of the root element with regard to the left and right nodes. Imagine that the left and right nodes are constant in space; then the root node could be placed to the left of the left node (pre-order), between the left and right node (in-order), or to the right of the right node (post-order). There is no equivalent variation in breadth-first traversal – given an ordering of children, “breadth-first” is unambiguous.

For the purpose of illustration, it is assumed that left nodes always have priority over right nodes. This ordering can be reversed as long as the same ordering is assumed for all traversal methods.

Depth-first traversal is easily implemented via a stack, including recursively (via the call stack), while breadth-first traversal is easily implemented via a queue, including corecursively.

Beyond these basic traversals, various more complex or hybrid schemes are possible, such as depth-limited searches


such as iterative deepening depth-first search.

[Figure: an example binary tree rooted at F. Post-order: A, C, E, D, B, H, I, G, F]

30.1.1 Depth-first

See also: Depth-first search

There are three types of depth-first traversal: pre-order,[1] in-order,[1] and post-order.[1] For a binary tree, they are defined as operations performed recursively at each node, starting with the root node, as follows:[2][3]

Pre-order

1. Display the data part of the root element (or current element).

2. Traverse the left subtree by recursively calling the pre-order function.

3. Traverse the right subtree by recursively calling the pre-order function.

In-order (symmetric)

1. Traverse the left subtree by recursively calling the in-order function.

2. Display the data part of the root element (or current element).


3. Traverse the right subtree by recursively calling the in-order function.

[Figure: an example binary tree rooted at F. Level-order: F, B, G, A, D, I, C, E, H]

Post-order

1. Traverse the left subtree by recursively calling the post-order function.

2. Traverse the right subtree by recursively calling the post-order function.

3. Display the data part of the root element (or current element).

The trace of a traversal is called a sequentialisation of the tree. The traversal trace is a list of each visited root node. No one sequentialisation according to pre-, in- or post-order describes the underlying tree uniquely. Given a tree with distinct elements, either pre-order or post-order paired with in-order is sufficient to describe the tree uniquely. However, pre-order with post-order leaves some ambiguity in the tree structure.[4]
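The uniqueness claim can be checked constructively: given the pre-order and in-order sequentialisations of a tree with distinct elements, the tree can be rebuilt. A minimal Python sketch (the Node class and function name are illustrative, not from the article):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def rebuild(preorder, inorder):
    # The first pre-order element is the root; its position in the
    # in-order sequence splits the remaining elements into the left
    # and right subtrees. Elements must be distinct for index() to
    # be unambiguous.
    if not preorder:
        return None
    root = preorder[0]
    k = inorder.index(root)
    return Node(root,
                rebuild(preorder[1:k + 1], inorder[:k]),
                rebuild(preorder[k + 1:], inorder[k + 1:]))
```

The same construction fails for pre-order paired with post-order: a node with a single child produces identical sequentialisations whether that child is left or right, which is exactly the ambiguity noted above.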

Generic tree

To traverse any tree in depth-first order, perform the following operations recursively at each node:

1. Perform pre-order operation

2. For each i (with i = 1 to n) do:

(a) Visit the i-th child, if present.

(b) Perform the in-order operation.


3. Perform post-order operation

where n is the number of child nodes. Depending on the problem at hand, the pre-order, in-order or post-order operations may be void, or you may only want to visit a specific child node, so these operations are optional. Also, in practice more than one of the pre-order, in-order and post-order operations may be required. For example, when inserting into a ternary tree, a pre-order operation is performed by comparing items. A post-order operation may be needed afterwards to re-balance the tree.
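The generic scheme above can be sketched in Python with optional hooks for the three operations (the NTree class and hook names are illustrative):

```python
class NTree:
    def __init__(self, data, children=()):
        self.data = data
        self.children = list(children)

def traverse(node, pre=None, mid=None, post=None):
    # Generic depth-first traversal of an n-ary tree, mirroring the
    # steps above: pre-order operation, then for each child recurse
    # and perform the in-order operation, then the post-order
    # operation. Any hook may be omitted (left as None).
    if pre:
        pre(node)
    for child in node.children:
        traverse(child, pre, mid, post)
        if mid:
            mid(node)
    if post:
        post(node)
```

Passing only `pre` gives a pre-order walk; passing only `post` gives a post-order walk, matching the binary-tree special cases when each node has at most two children.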

30.1.2 Breadth-first

See also: Breadth-first search

Trees can also be traversed in level-order, where we visit every node on a level before going to a lower level. This search is referred to as breadth-first search, as the search tree is broadened as much as possible on each depth before going to the next depth.

30.1.3 Other types

There are also tree traversal algorithms that classify as neither depth-first search nor breadth-first search. One such algorithm is Monte Carlo tree search, which concentrates on analyzing the most promising moves, basing the expansion of the search tree on random sampling of the search space.

30.2 Applications

Pre-order traversal, while duplicating nodes and edges, can make a complete duplicate of a binary tree. It can also be used to make a prefix expression (Polish notation) from an expression tree: simply traverse the expression tree in pre-order.

In-order traversal is very commonly used on binary search trees because it returns values from the underlying set in order, according to the comparator that set up the binary search tree (hence the name).

Post-order traversal, while deleting or freeing nodes and values, can delete or free an entire binary tree. It can also generate a postfix representation of a binary tree.
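The prefix and postfix applications can be sketched directly; the expression tree below, for (1 + 2) * 3, is a hypothetical example, and the Node class is illustrative:

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def prefix(n):
    # pre-order walk of an expression tree yields Polish notation
    return [] if n is None else [n.data] + prefix(n.left) + prefix(n.right)

def postfix(n):
    # post-order walk yields postfix (reverse Polish) notation
    return [] if n is None else postfix(n.left) + postfix(n.right) + [n.data]

# the expression (1 + 2) * 3 as a tree
expr = Node('*', Node('+', Node('1'), Node('2')), Node('3'))
```

Here `prefix(expr)` produces `* + 1 2 3` and `postfix(expr)` produces `1 2 + 3 *`.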

30.3 Implementations

30.3.1 Depth-first

Pre-order

In-order


Post-order
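The code listings under these three headings did not survive extraction; minimal recursive Python sketches, assuming a Node with `data`, `left` and `right` attributes, might read:

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def preorder(node, visit):
    if node is None:
        return
    visit(node.data)             # root first
    preorder(node.left, visit)
    preorder(node.right, visit)

def inorder(node, visit):
    if node is None:
        return
    inorder(node.left, visit)
    visit(node.data)             # root between the subtrees
    inorder(node.right, visit)

def postorder(node, visit):
    if node is None:
        return
    postorder(node.left, visit)
    postorder(node.right, visit)
    visit(node.data)             # root last
```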

All the above implementations require call stack space proportional to the height of the tree. In a poorly balanced tree, this can be considerable. We can remove the stack requirement by maintaining parent pointers in each node, or by threading the tree (next section).

Morris in-order traversal using threading

Main article: Threaded binary tree


A binary tree is threaded by making every left child pointer (that would otherwise be null) point to the in-order predecessor of the node (if it exists) and every right child pointer (that would otherwise be null) point to the in-order successor of the node (if it exists).

Advantages:

1. Avoids recursion, which uses a call stack and consumes memory and time.

2. The node keeps a record of its parent.

Disadvantages:

1. The tree is more complex.

2. We can make only one traversal at a time.

3. It is more prone to errors when both children are absent and both pointers of a node point to its ancestors.

Morris traversal is an implementation of in-order traversal that uses threading:[5]

1. Create links to the in-order successor

2. Print the data using these links

3. Revert the changes to restore the original tree.
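The three steps can be sketched in Python (the Node class is illustrative; nodes are assumed to carry `data`, `left` and `right`):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def morris_inorder(root):
    """In-order traversal in O(1) extra space: temporarily thread each
    node to its in-order predecessor, visit via the thread, then
    remove the thread to restore the tree."""
    result, node = [], root
    while node is not None:
        if node.left is None:
            result.append(node.data)
            node = node.right
        else:
            # find the in-order predecessor of node
            pred = node.left
            while pred.right is not None and pred.right is not node:
                pred = pred.right
            if pred.right is None:
                pred.right = node         # step 1: create the thread
                node = node.left
            else:
                pred.right = None         # step 3: revert the change
                result.append(node.data)  # step 2: visit via the thread
                node = node.right
    return result
```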

30.3.2 Breadth-first

Listed below is pseudocode for a simple queue-based level-order traversal; it requires space proportional to the maximum number of nodes at a given depth, which can be as much as half the total number of nodes. A more space-efficient approach for this type of traversal can be implemented using an iterative deepening depth-first search.

levelorder(root)
    q = empty queue
    q.enqueue(root)
    while not q.empty do
        node := q.dequeue()
        visit(node)
        if node.left ≠ null then
            q.enqueue(node.left)
        if node.right ≠ null then
            q.enqueue(node.right)
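The pseudocode translates directly to Python using collections.deque as the queue (node attribute names `data`, `left`, `right` are assumed):

```python
from collections import deque

def levelorder(root, visit):
    # direct transcription of the pseudocode above
    if root is None:
        return
    q = deque([root])
    while q:
        node = q.popleft()
        visit(node.data)
        if node.left is not None:
            q.append(node.left)
        if node.right is not None:
            q.append(node.right)
```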

30.4 Infinite trees

While traversal is usually done for trees with a finite number of nodes (and hence finite depth and finite branching factor) it can also be done for infinite trees. This is of particular interest in functional programming (particularly with lazy evaluation), as infinite data structures can often be easily defined and worked with, though they are not (strictly) evaluated, as this would take infinite time. Some finite trees are too large to represent explicitly, such as the game tree for chess or go, and so it is useful to analyze them as if they were infinite.

A basic requirement for traversal is to visit every node. For infinite trees, simple algorithms often fail this. For example, given a binary tree of infinite depth, a depth-first traversal will go down one side (by convention the left side) of the tree, never visiting the rest, and indeed an in-order or post-order traversal will never visit any nodes, as it has not reached a leaf (and in fact never will). By contrast, a breadth-first (level-order) traversal will traverse a binary tree of infinite depth without problem, and indeed will traverse any tree with bounded branching factor.

On the other hand, given a tree of depth 2, where the root node has infinitely many children, and each of these children has two children, a depth-first traversal will visit all nodes, as once it exhausts the grandchildren (children of children of one node), it will move on to the next (assuming it is not post-order, in which case it never reaches the root). By contrast, a breadth-first traversal will never reach the grandchildren, as it seeks to exhaust the children first.

A more sophisticated analysis of running time can be given via infinite ordinal numbers; for example, the breadth-first traversal of the depth-2 tree above will take ω·2 steps: ω for the first level, and then another ω for the second level.

Thus, simple depth-first or breadth-first searches do not traverse every infinite tree, and are not efficient on very large trees. However, hybrid methods can traverse any (countably) infinite tree, essentially via a diagonal argument (“diagonal” – a combination of vertical and horizontal – corresponds to a combination of depth and breadth).
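With lazy evaluation, the level-order claim can be made concrete: a generator-based breadth-first traversal yields every node of a tree with finite branching, even one of infinite depth, as long as the consumer only demands finitely many nodes. A sketch (the node labelling, with node n having children 2n and 2n+1, is hypothetical and chosen for illustration):

```python
from collections import deque
from itertools import islice

def bfs(root, children):
    # Lazy breadth-first traversal: nodes are produced on demand, so
    # the tree may be infinitely deep provided each node has finitely
    # many children.
    q = deque([root])
    while q:
        node = q.popleft()
        yield node
        q.extend(children(node))

# the infinite binary tree under the hypothetical labelling
first_seven = list(islice(bfs(1, lambda n: (2 * n, 2 * n + 1)), 7))
```

A depth-first version of the same generator would recurse forever down the left spine and never yield a second value.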


Concretely, given the infinitely branching tree of infinite depth, label the root node (), the children of the root node (1), (2), …, the grandchildren (1, 1), (1, 2), …, (2, 1), (2, 2), …, and so on. The nodes are thus in a one-to-one correspondence with finite (possibly empty) sequences of positive numbers, which are countable and can be placed in order first by sum of entries, and then by lexicographic order within a given sum (only finitely many sequences sum to a given value, so all entries are reached – formally there are a finite number of compositions of a given natural number, specifically 2^(n−1) compositions of n ≥ 1), which gives a traversal. Explicitly:

0: ()
1: (1)
2: (1,1) (2)
3: (1,1,1) (1,2) (2,1) (3)
4: (1,1,1,1) (1,1,2) (1,2,1) (1,3) (2,1,1) (2,2) (3,1) (4)

etc.

This can be interpreted as mapping the infinite depth binary tree onto this tree and then applying breadth-first traversal: replace the “down” edges connecting a parent node to its second and later children with “right” edges from the 1st child to the 2nd child, 2nd child to 3rd child, etc. Thus at each step one can either go down (append a 1 to the end) or go right (add 1 to the last number) (except the root, which is extra and can only go down), which shows the correspondence between the infinite binary tree and the above numbering; the sum of the entries (minus 1) corresponds to the distance from the root, which agrees with the 2^(n−1) nodes at depth n−1 in the infinite binary tree (2 corresponds to binary).
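The sum-then-lexicographic ordering above can be sketched as a Python generator over compositions (function name illustrative):

```python
def compositions(n):
    # ordered sequences of positive integers summing to n, in
    # lexicographic order; there are 2**(n - 1) of them for n >= 1
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

# enumerate the first few "levels" of the diagonal traversal
levels = {n: list(compositions(n)) for n in range(5)}
```

Concatenating `levels[0]`, `levels[1]`, … reproduces the listing above and visits every node of the infinitely branching, infinitely deep tree.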

30.5 References

[1] “Lecture 8 - Tree Traversal”. Retrieved 2 May 2015.

[2] http://www.cise.ufl.edu/~sahni/cop3530/slides/lec216.pdf

[3] “Preorder Traversal Algorithm”. Retrieved 2 May 2015.

[4] “Which combinations of pre-, post- and in-order sequentialisation are unique?”. Computer Science Stack Exchange. Retrieved 2 May 2015.

[5] Morris, Joseph M. (1979). “Traversing binary trees simply and cheaply”. Information Processing Letters 9 (5). doi:10.1016/0020-0190(79)90068-1.

General

• Dale, Nell; Lilly, Susan D. “Pascal Plus Data Structures”. D. C. Heath and Company. Lexington, MA. 1995. Fourth edition.

• Drozdek, Adam. “Data Structures and Algorithms in C++”. Brooks/Cole. Pacific Grove, CA. 2001. Second edition.

• http://www.math.northwestern.edu/~mlerma/courses/cs310-05s/notes/dm-treetran

30.6 External links

• Animation Applet of Binary Tree Traversal

• The Adjacency List Model for Processing Trees with SQL

• Storing Hierarchical Data in a Database with traversal examples in PHP

• Managing Hierarchical Data in MySQL

• Working with Graphs in MySQL

• Non-recursive traversal of DOM trees in JavaScript

• Sample code for recursive and iterative tree traversal implemented in C.

• Sample code for recursive tree traversal in C#.

• See tree traversal implemented in various programming language on Rosetta Code

• Tree traversal without recursion


Chapter 31

Walther recursion

In computer programming, Walther recursion (named after Christoph Walther) is a method of analysing recursive functions that can determine whether a function definitely terminates, given finite inputs. It allows a more natural style of expressing computation than simply using primitive recursive functions.

Since the halting problem cannot be solved in general, there must still be programs that terminate but which Walther recursion cannot prove to terminate. Walther recursion may be used in total functional languages in order to allow a more liberal style of definition than primitive recursion alone.
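Roughly, a Walther-recursion analysis looks for an argument that strictly shrinks under a well-founded measure at every recursive call. A hand-written version of that reasoning (the example function is illustrative, not drawn from the literature on Walther recursion):

```python
def gcd(a, b):
    # Not primitive recursive in form (the recursive call is not on an
    # immediate predecessor of an argument), yet clearly terminating:
    # the second argument strictly decreases, since a % b < b whenever
    # b > 0. This is the kind of measure argument a Walther-style
    # termination analysis aims to discover mechanically.
    if b == 0:
        return a
    return gcd(b, a % b)
```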

31.1 See also

• BlooP and FlooP

• Termination analysis

• Total Turing machine

31.2 References

• Walther, Christoph (1991). “On Proving the Termination of Algorithms by Machine” (PDF). Artificial Intelligence 70 (1).

• Wu, Alexander (1994). Automated termination proofs using Walther recursion (PDF) (Thesis). Massachusetts Institute of Technology. Retrieved 2014-09-15.

• McAllester, David A.; Arkoudas, Kostas (1996). McRobbie, Michael A.; Slaney, J. K., eds. “Walther Recursion”. Proceedings of the 13th International Conference on Automated Deduction. New Brunswick, NJ, USA: Springer-Verlag. pp. 643–657. ISBN 3-540-61511-3.

198

Page 213: Recursion acglz

31.3. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 199

31.3 Text and image sources, contributors, and licenses

31.3.1 Text• Anonymous recursion Source: https://en.wikipedia.org/wiki/Anonymous_recursion?oldid=628476829 Contributors: Michael Hardy,

AugPi, Jitse Niesen, Salix alba, Bisqwit, Donhalcon, Pintman, Chris the speller, Nbarth, Chrisamaphone, CBM, Abednigo, Quadrescence,Pcap, AvicBot and Anonymous: 3

• Arm’s-length recursion Source: https://en.wikipedia.org/wiki/Recursion_(computer_science)?oldid=677153969 Contributors: Edward,Michael Hardy, Kku, Dwo, Aenar, Thilo, Mattflaschen, Tobias Bergemann, Giftlite, Macrakis, Derek Parnell, Andreas Kaufmann, Abdull,MMSequeira, Paul August, Nigelj, R. S. Shaw, Liao, Diego Moya, Jeltz, Kenyon, Mahanga, Linas, Mindmatrix, MattGiuca, Ruud Koot,Graham87, Raymond Hill, Jclemens, Rjwilmsi, Leeyc0, Salix alba, Essayemyoung4009, CalPaterson, Quuxplusone, Ver, Chobot, Wave-length, Grafen, Dijxtra, Trovatore, Jpbowen, Rwalker, Tcsetattr, Cedar101, JLaTondre, RandallZ, Brahle, Linkminer, SmackBot, In-verseHypercube, McGeddon, Bigbluefish, Philiprogers, Gilliam, LinguistAtLarge, Thumperward, Timneu22, Nbarth, David Morón, Jon-Harder, Tobyink, Clements, T-borg, Almkglor, Loadmaster, Valepert, Tmcw, AlainD, Shabbirbhimani, CRGreathouse, CBM, Pierre deLyon, Cydebot, NotQuiteEXPComplete, BetacommandBot, Thijs!bot, Escarbot, PhiLiP, Gioto, Salgueiro~enwiki, Lfstevens, JAnDbot,Magioladitis, Walkeraj, Abednigo, David Eppstein, R'n'B, Mange01, Maurice Carbonaro, Coppertwig, Shoessss, Mythrill, DaoKaioshin,Nxavar, Awl, Viggio, Pi is 3.14159, Volkan YAZICI, Sara.noorafkan, Hariva, Martarius, ClueBot, Cchantep~enwiki, Jeshan, Mild BillHiccup, Erudecorp, Tim32, Eboyjr, XLinkBot, Brentsmith101, Drolz09, Addbot, Ghettoblaster, Poco a poco, Merarischroeder, Btx40,Mohamed Magdy, Tide rolls, Fryed-peach, Luckas-bot, Quadrescence, Yobot, Pcap, KamikazeBot, Tempodivalse, AnomieBOT, Cita-tion bot, GB fan, ArthurBot, GrouchoBot, RibotBOT, Constructive editor, Medanat, Citation bot 1, DrilBot, XxTimberlakexx, RedBot,Babayagagypsies, MoreNet, WillNess, John lindgren, Forgefight, EmausBot, Tranhungnghiep, Chharvey, Bxj, RenaissanceBug, Simur-gia, Arnaud741, Carmichael, Brian Tansley, Walfredo424-NJITWILL, ClueBot NG, Widr, BG19bot, Uri-Levy, Chmarkine, Jon.weldon,TheJJJunk, Splendor78, Jochen Burghardt, Mark viking, Gratimax, Carrot Lord, Comp.arch, 
Monkbot, Peturb, Jacob p12, Lomotos10,Phạm Nguyễn Trường An, Paul STice, Rakshit93 and Anonymous: 181

• Bar recursion Source: https://en.wikipedia.org/wiki/Bar_recursion?oldid=602701644 Contributors: Bearcat, Ben Standeven, Myasuda,Blaisorblade, WereSpielChequers, Yobot, Qetuth, Wert7 and Anonymous: 1

• Corecursion Source: https://en.wikipedia.org/wiki/Corecursion?oldid=675326208 Contributors: The Anome, Malcohol, Greenrd, Walt-pohl, Rich Farmbrough, Ruud Koot, Seliopou, Gurch, Roboto de Ajvol, Hairy Dude, Piet Delport, SmackBot, Imz, Betacommand,Vvarkey, Nbarth, Furby100, Dreadstar, Lambiam, Macha, Jenovazero, PhilKnight, Rhwawn, Gwern, R'n'B, Icktoofay, Thefrob, Classi-calecon, Addbot, Ghettoblaster, Yobot, Jordsan, Obscuranym, Denispir, AnomieBOT, Citation bot, VladimirReshetnikov, Citation bot1, WillNess, H3llBot, Helpful Pixie Bot, BattyBot, ChrisGualtieri, Pimp slap the funk and Anonymous: 14

• Course-of-values recursion Source: https://en.wikipedia.org/wiki/Course-of-values_recursion?oldid=533176028Contributors: MichaelHardy, Andreas Kaufmann, Rajah, Ruud Koot, SmackBot, JonHarder, Lambiam, Macha, Iridescent, JoeBot, CBM, Cydebot, MarshBot,WinBot, Marc van Leeuwen, Brentsmith101 and Anonymous: 4

• Direct recursion Source: https://en.wikipedia.org/wiki/Recursion_(computer_science)?oldid=677153969Contributors: Edward, MichaelHardy, Kku, Dwo, Aenar, Thilo, Mattflaschen, Tobias Bergemann, Giftlite, Macrakis, Derek Parnell, Andreas Kaufmann, Abdull, MM-Sequeira, Paul August, Nigelj, R. S. Shaw, Liao, Diego Moya, Jeltz, Kenyon, Mahanga, Linas, Mindmatrix, MattGiuca, Ruud Koot,Graham87, Raymond Hill, Jclemens, Rjwilmsi, Leeyc0, Salix alba, Essayemyoung4009, CalPaterson, Quuxplusone, Ver, Chobot, Wave-length, Grafen, Dijxtra, Trovatore, Jpbowen, Rwalker, Tcsetattr, Cedar101, JLaTondre, RandallZ, Brahle, Linkminer, SmackBot, In-verseHypercube, McGeddon, Bigbluefish, Philiprogers, Gilliam, LinguistAtLarge, Thumperward, Timneu22, Nbarth, David Morón, Jon-Harder, Tobyink, Clements, T-borg, Almkglor, Loadmaster, Valepert, Tmcw, AlainD, Shabbirbhimani, CRGreathouse, CBM, Pierre deLyon, Cydebot, NotQuiteEXPComplete, BetacommandBot, Thijs!bot, Escarbot, PhiLiP, Gioto, Salgueiro~enwiki, Lfstevens, JAnDbot,Magioladitis, Walkeraj, Abednigo, David Eppstein, R'n'B, Mange01, Maurice Carbonaro, Coppertwig, Shoessss, Mythrill, DaoKaioshin,Nxavar, Awl, Viggio, Pi is 3.14159, Volkan YAZICI, Sara.noorafkan, Hariva, Martarius, ClueBot, Cchantep~enwiki, Jeshan, Mild BillHiccup, Erudecorp, Tim32, Eboyjr, XLinkBot, Brentsmith101, Drolz09, Addbot, Ghettoblaster, Poco a poco, Merarischroeder, Btx40,Mohamed Magdy, Tide rolls, Fryed-peach, Luckas-bot, Quadrescence, Yobot, Pcap, KamikazeBot, Tempodivalse, AnomieBOT, Cita-tion bot, GB fan, ArthurBot, GrouchoBot, RibotBOT, Constructive editor, Medanat, Citation bot 1, DrilBot, XxTimberlakexx, RedBot,Babayagagypsies, MoreNet, WillNess, John lindgren, Forgefight, EmausBot, Tranhungnghiep, Chharvey, Bxj, RenaissanceBug, Simur-gia, Arnaud741, Carmichael, Brian Tansley, Walfredo424-NJITWILL, ClueBot NG, Widr, BG19bot, Uri-Levy, Chmarkine, Jon.weldon,TheJJJunk, Splendor78, Jochen Burghardt, Mark viking, Gratimax, Carrot Lord, Comp.arch, Monkbot, 
Peturb, Jacob p12, Lomotos10,Phạm Nguyễn Trường An, Paul STice, Rakshit93 and Anonymous: 181

• Droste effect Source: https://en.wikipedia.org/wiki/Droste_effect?oldid=673522611 Contributors: Michael Hardy, Rp, Ixfd64, Olego,Nv8200pa, Wetman, Postdlf, AlistairMcMillan, Jossi, DragonflySixtyseven, AndrewKeenanRichardson, Pavel Vozenilek, Bellczar, STHay-den, Hkroger, Mdd, Scarecroe, Oliphaunt, Bkkbrad, Ruud Koot, SqueakBox, Pfau, Sparkit, JIP, TobyJ, Nightscream, TheRingess,Preslethe, Mongreilf, Wavelength, Borgx, Hairy Dude, Anetode, Raven4x4x, Mistercow, Saric, Garion96, Destin, Attilios, Smack-Bot, McGeddon, Zekkelley, Apeloverage, Schwallex, Murder1, Nahum Reduta, Pcgomes, Ser Amantio di Nicolao, Alpha Omicron,Loadmaster, BillFlis, Dfred, Hossenfeffer, CRGreathouse, CBM, John259, Sopoforic, A876, Quibik, Headinajar, Astavats, .alyn.post.,Rothorpe, Froid, R'n'B, J.delanoy, MONODA, STBotD, Mhopgood, Anonymous Dissident, Katyism, Salvar, Broadbot, Mallardtheduck,Cgwaldman, BotKung, Sputniktilt, Joseph A. Spadaro, Thatother1dude, EverGreg, AHMartin, SieBot, Sylwan~enwiki, Goustien, Atif.t2,Cartman0052007, Trivialist, Opitergino, Jackpot777, Hidro, 7&6=thirteen, Mlaffs, DumZiBoT, XLinkBot, Addbot, DrJos, LaaknorBot,Lightbot, Zorrobot, Goregore~enwiki, Luckas-bot, Yobot, JoshSommers, Punkyfish3000, Jeffwang, Nouill, Thayts, Borbolia777, Beao,Fama Clamosa, Grandenchilada, Lotje, Spazzychalk, Calcyman, Sverigekillen, JayLarsons1, Tuankiet65, Ὁ οἶστρος, Jrg1000, Sage321,Helpful Pixie Bot, Brison09, FrauBella, Roll-Morton, Filedelinkerbot, Kartik26, Joeleoj123, GoldDrake and Anonymous: 97

• Fixed-point combinator Source: https://en.wikipedia.org/wiki/Fixed-point_combinator?oldid=664047929 Contributors: Dominus, Ci-phergoth, AugPi, Andres, Rl, Charles Matthews, Espertus, Tobias Bergemann, Giftlite, BenFrantzDale, Jorend, Fanf, Aminorex, Positron,Ehamberg, Matthew Vaughan, Mernen, Mormegil, Mvanier, Rich Farmbrough, Avriette, Qutezuce, Indil, Edward Z. Yang, Spoon!,Tromp, Diego Moya, Flata, Hammertime, Dhartung, Danhash, Oleg Alexandrov, Bkkbrad, MattGiuca, Ruud Koot, Smartech~enwiki,Venullian, ErikHaugen, Vegaswikian, Mathbot, NekoDaemon, Sderose, Psantora, Wavelength, Hairy Dude, Innovationcreation, MattRyall, Tony1, Sth~enwiki, Liyang, Cedar101, Donhalcon, JLaTondre, AndrewWTaylor, Kslays, BiT, UrsaFoot, Mhss, Bluebot, Nbarth,Dfletter, Cícero, Derek R Bullamore, The S show, Jbolden1517, Wfi, CBM, Cydebot, Boemanneke, JMatthews, Nick Number, Dereckson,

Page 214: Recursion acglz

200 CHAPTER 31. WALTHER RECURSION

Abcarter, JamesBWatson, Cat-five, A3nm, Kestasjk, Gwern, R'n'B, Inquam, Daniel5Ko, Nirajanrajkarnikar, Anandram, Hqb, Amigo-Nico, Santhosh.thottingal, Fatnickc, Toddst1, Svick, Nankezhishi, Krzysz00, Ottawahitech, Alexey Muranov, Afarnen, XLinkBot, Wik-Head, Addbot, Tsunanet, Se'taan, Hash150, Mdnahas, Haklo, Sutambe, Luckas-bot, Pcap, PMLawrence, Subwy, AnomieBOT, SophusBie, ChrisKuklewicz, LittleWink, MastiBot, WillNess, WikitanvirBot, Wgunther, GoingBatty, Zahnradzacken, Serketan, HolyCookie,Jcubic, ClueBot NG, Thepigdog, Helpful Pixie Bot, BG19bot, Yinwang0, Derekleebeatty, Romario333, Czech is Cyrillized, Nomoteretes,Cosmia Nebula, K001q and Anonymous: 99

• Fold (higher-order function) Source: https://en.wikipedia.org/wiki/Fold_(higher-order_function)?oldid=676226824 Contributors: TheAnome, Frank Shearar, Greenrd, RRM, Tobias Bergemann, Matt Crypto, Qutezuce, Mecanismo, Bdoserror, Spoon!, Daf, Trylks, Cbur-nett, Cgibbard, MattGiuca, Ruud Koot, Qwertyus, JVz, Bgwhite, Taejo, Guslacerda, Tony1, Mike92591, Pooryorick~enwiki, Googl,Cedar101, JLaTondre, TuukkaH, SmackBot, Stemail23, BiT, Sam Pointon, Masklinn, Torzsmokus, Frap, Zoonfafer, Rnapier, Cyber-cobra, Tomcjohn, Jbolden1517, Codeculturist, Pmoura, Simeon, Thijs!bot, Hervegirod, Oubiwann, Jack quack, Rriegs, Antic-Hay, Ma-gioladitis, Rhwawn, Soelinn, Stoneice02, David Eppstein, Gwern, MwGamera, J.delanoy, Netytan, SparsityProblem, Grshiplett, Eyelid-lessness, Qseep, EvanCarroll, Mewp~enwiki, Wykypydya, Fourthark, Nolens Volens, OKBot, Classicalecon, XLinkBot, Addbot, Btx40,LaaknorBot, Tfogal, Jarble, Yobot, Masharabinovich, AnomieBOT, Rjanag, Onthewings, ArthurBot, FrescoBot, Bigweeboy, WillNess,RjwilmsiBot, EmausBot, 10mcleod, ZéroBot, Cadadr, WinerFresh, Chiph588, Wikih101, Arturbies, Da cameron, Pimp slap the funk,Bubbleben, Monkbot, Ertyupoi and Anonymous: 96

• Generative recursion Source: https://en.wikipedia.org/wiki/Recursion_(computer_science)?oldid=677153969 Contributors: Edward,Michael Hardy, Kku, Dwo, Aenar, Thilo, Mattflaschen, Tobias Bergemann, Giftlite, Macrakis, Derek Parnell, Andreas Kaufmann, Abdull,MMSequeira, Paul August, Nigelj, R. S. Shaw, Liao, Diego Moya, Jeltz, Kenyon, Mahanga, Linas, Mindmatrix, MattGiuca, Ruud Koot,Graham87, Raymond Hill, Jclemens, Rjwilmsi, Leeyc0, Salix alba, Essayemyoung4009, CalPaterson, Quuxplusone, Ver, Chobot, Wave-length, Grafen, Dijxtra, Trovatore, Jpbowen, Rwalker, Tcsetattr, Cedar101, JLaTondre, RandallZ, Brahle, Linkminer, SmackBot, In-verseHypercube, McGeddon, Bigbluefish, Philiprogers, Gilliam, LinguistAtLarge, Thumperward, Timneu22, Nbarth, David Morón, Jon-Harder, Tobyink, Clements, T-borg, Almkglor, Loadmaster, Valepert, Tmcw, AlainD, Shabbirbhimani, CRGreathouse, CBM, Pierre deLyon, Cydebot, NotQuiteEXPComplete, BetacommandBot, Thijs!bot, Escarbot, PhiLiP, Gioto, Salgueiro~enwiki, Lfstevens, JAnDbot,Magioladitis, Walkeraj, Abednigo, David Eppstein, R'n'B, Mange01, Maurice Carbonaro, Coppertwig, Shoessss, Mythrill, DaoKaioshin,Nxavar, Awl, Viggio, Pi is 3.14159, Volkan YAZICI, Sara.noorafkan, Hariva, Martarius, ClueBot, Cchantep~enwiki, Jeshan, Mild BillHiccup, Erudecorp, Tim32, Eboyjr, XLinkBot, Brentsmith101, Drolz09, Addbot, Ghettoblaster, Poco a poco, Merarischroeder, Btx40,Mohamed Magdy, Tide rolls, Fryed-peach, Luckas-bot, Quadrescence, Yobot, Pcap, KamikazeBot, Tempodivalse, AnomieBOT, Cita-tion bot, GB fan, ArthurBot, GrouchoBot, RibotBOT, Constructive editor, Medanat, Citation bot 1, DrilBot, XxTimberlakexx, RedBot,Babayagagypsies, MoreNet, WillNess, John lindgren, Forgefight, EmausBot, Tranhungnghiep, Chharvey, Bxj, RenaissanceBug, Simur-gia, Arnaud741, Carmichael, Brian Tansley, Walfredo424-NJITWILL, ClueBot NG, Widr, BG19bot, Uri-Levy, Chmarkine, Jon.weldon,TheJJJunk, Splendor78, Jochen Burghardt, Mark viking, Gratimax, Carrot Lord, Comp.arch, Monkbot, 
Peturb, Jacob p12, Lomotos10,Phạm Nguyễn Trường An, Paul STice, Rakshit93 and Anonymous: 181

• Indirect recursion Source: https://en.wikipedia.org/wiki/Recursion_(computer_science)?oldid=677153969Contributors: Edward, MichaelHardy, Kku, Dwo, Aenar, Thilo, Mattflaschen, Tobias Bergemann, Giftlite, Macrakis, Derek Parnell, Andreas Kaufmann, Abdull, MM-Sequeira, Paul August, Nigelj, R. S. Shaw, Liao, Diego Moya, Jeltz, Kenyon, Mahanga, Linas, Mindmatrix, MattGiuca, Ruud Koot,Graham87, Raymond Hill, Jclemens, Rjwilmsi, Leeyc0, Salix alba, Essayemyoung4009, CalPaterson, Quuxplusone, Ver, Chobot, Wave-length, Grafen, Dijxtra, Trovatore, Jpbowen, Rwalker, Tcsetattr, Cedar101, JLaTondre, RandallZ, Brahle, Linkminer, SmackBot, In-verseHypercube, McGeddon, Bigbluefish, Philiprogers, Gilliam, LinguistAtLarge, Thumperward, Timneu22, Nbarth, David Morón, Jon-Harder, Tobyink, Clements, T-borg, Almkglor, Loadmaster, Valepert, Tmcw, AlainD, Shabbirbhimani, CRGreathouse, CBM, Pierre deLyon, Cydebot, NotQuiteEXPComplete, BetacommandBot, Thijs!bot, Escarbot, PhiLiP, Gioto, Salgueiro~enwiki, Lfstevens, JAnDbot,Magioladitis, Walkeraj, Abednigo, David Eppstein, R'n'B, Mange01, Maurice Carbonaro, Coppertwig, Shoessss, Mythrill, DaoKaioshin,Nxavar, Awl, Viggio, Pi is 3.14159, Volkan YAZICI, Sara.noorafkan, Hariva, Martarius, ClueBot, Cchantep~enwiki, Jeshan, Mild BillHiccup, Erudecorp, Tim32, Eboyjr, XLinkBot, Brentsmith101, Drolz09, Addbot, Ghettoblaster, Poco a poco, Merarischroeder, Btx40,Mohamed Magdy, Tide rolls, Fryed-peach, Luckas-bot, Quadrescence, Yobot, Pcap, KamikazeBot, Tempodivalse, AnomieBOT, Cita-tion bot, GB fan, ArthurBot, GrouchoBot, RibotBOT, Constructive editor, Medanat, Citation bot 1, DrilBot, XxTimberlakexx, RedBot,Babayagagypsies, MoreNet, WillNess, John lindgren, Forgefight, EmausBot, Tranhungnghiep, Chharvey, Bxj, RenaissanceBug, Simur-gia, Arnaud741, Carmichael, Brian Tansley, Walfredo424-NJITWILL, ClueBot NG, Widr, BG19bot, Uri-Levy, Chmarkine, Jon.weldon,TheJJJunk, Splendor78, Jochen Burghardt, Mark viking, Gratimax, Carrot Lord, Comp.arch, Monkbot, 
Peturb, Jacob p12, Lomotos10,Phạm Nguyễn Trường An, Paul STice, Rakshit93 and Anonymous: 181

• Infinite loop Source: https://en.wikipedia.org/wiki/Infinite_loop?oldid=676610304 Contributors: AxelBoldt, Lee Daniel Crocker, Wes-ley, Ed Poor, Hari, Ortolan88, Edward, Norm, Eric119, Stan Shebs, Nikai, Jiang, Cherkash, Nikola Smolenski, Timwi, Magnus.de,Greenrd, Furrykef, Topbanana, Fredrik, Xiaopo, Kadin2048, Lowellian, Enceladus, Caknuck, Rebrane, Mattflaschen, Tobias Berge-mann, Alerante, Kim Bruning, Scurra, Toytoy, Antandrus, Andreas Kaufmann, M1ss1ontomars2k4, HedgeHog, Mattman723, Pluke,Notinasnaid, Paul August, Tgies, Addaone, Tverbeek, Smalljim, Redquark, Alazoral, David Gale, Hooperbloob, Ekevu, DanielLC, Cyber-Skull, Titanium Dragon, Bart133, Danhash, LFaraone, Oleg Alexandrov, MattGiuca, Ruud Koot, P3nguinP0wered, GregorB, Radiant!,Eyu100, Silvaran, Mathbot, Ewlyahoocom, Scottinglis, Intgr, Tardis, YurikBot, Hairy Dude, DMahalko, Sandpiper, Sliggy, Fang Aili,Airconswitch, Harthacnut, SmackBot, The Monster, Mgreenbe, BiT, Bugs5382, GBL, Ranting Martian, Hieros, Nbarth, Gracenotes,Brainblaster52, Can't sleep, clown will eat me, Mrrealtime, MattOates, TechPurism, Crd721, Hkmaly, Nicholas.Tan, Sjock, AmineBrikci N, Tktktk, Skalman, Lenoxus, CmdrObot, CBM, Cydebot, Omicronpersei8, Thijs!bot, Marvin300, Dajagr, JAnDbot, MER-C,Bongwarrior, Rav Chand Ray, David Eppstein, Gwern, Oren0, Stephenchou0722, Naohiro19, Francis Tyers, Erkan Yilmaz, RockMFR,Shields415, CWii, A.Ou, Chaos5023, Mercurywoodrose, Caster23, Wrldwzrd89, SieBot, Beeawwb, Oxymoron83, Smaug123, Perry-Tachett, Denisarona, GorillaWarfare, Lampak, Brooknet, Trivialist, Coinmanj, Mitch Ames, MystBot, Yes, not yes, Addbot, Basili-cofresco, Proxima Centauri, LaaknorBot, Tassedethe, Luckas-bot, Yobot, Homer, Ptbotgourou, Amirobot, Nyat, Jim1138, SCFilm29,FTWFTWFTWFTW, Xcvista, RedBot, DReifGalaxyM31, Aoidh, Specs112, Reach Out to the Truth, Xnn, EmausBot, Mage24365, Ti-sane, K6ka, Kelvari, Peterh5322, Donner60, Ego White Tray, Kale31899, ClueBot NG, Matthiaspaul, Zakblade2000, Widr, 
Zakhalesh,Helpful Pixie Bot, TheRealTeln, BG19bot, Vjr04jf0fr4, Clommlon Fiepss, Ninthecloud, Mysterytrey, LionelTabre, Noobnubcakes, Tenti-nator, Chrsimon, Woofmao, OctaviaYounger and Anonymous: 144

• Multiple recursion Source: https://en.wikipedia.org/wiki/Recursion_(computer_science)?oldid=677153969Contributors: Edward, MichaelHardy, Kku, Dwo, Aenar, Thilo, Mattflaschen, Tobias Bergemann, Giftlite, Macrakis, Derek Parnell, Andreas Kaufmann, Abdull, MM-Sequeira, Paul August, Nigelj, R. S. Shaw, Liao, Diego Moya, Jeltz, Kenyon, Mahanga, Linas, Mindmatrix, MattGiuca, Ruud Koot,

Page 215: Recursion acglz

31.3. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 201

Graham87, Raymond Hill, Jclemens, Rjwilmsi, Leeyc0, Salix alba, Essayemyoung4009, CalPaterson, Quuxplusone, Ver, Chobot, Wave-length, Grafen, Dijxtra, Trovatore, Jpbowen, Rwalker, Tcsetattr, Cedar101, JLaTondre, RandallZ, Brahle, Linkminer, SmackBot, In-verseHypercube, McGeddon, Bigbluefish, Philiprogers, Gilliam, LinguistAtLarge, Thumperward, Timneu22, Nbarth, David Morón, Jon-Harder, Tobyink, Clements, T-borg, Almkglor, Loadmaster, Valepert, Tmcw, AlainD, Shabbirbhimani, CRGreathouse, CBM, Pierre deLyon, Cydebot, NotQuiteEXPComplete, BetacommandBot, Thijs!bot, Escarbot, PhiLiP, Gioto, Salgueiro~enwiki, Lfstevens, JAnDbot,Magioladitis, Walkeraj, Abednigo, David Eppstein, R'n'B, Mange01, Maurice Carbonaro, Coppertwig, Shoessss, Mythrill, DaoKaioshin,Nxavar, Awl, Viggio, Pi is 3.14159, Volkan YAZICI, Sara.noorafkan, Hariva, Martarius, ClueBot, Cchantep~enwiki, Jeshan, Mild BillHiccup, Erudecorp, Tim32, Eboyjr, XLinkBot, Brentsmith101, Drolz09, Addbot, Ghettoblaster, Poco a poco, Merarischroeder, Btx40,Mohamed Magdy, Tide rolls, Fryed-peach, Luckas-bot, Quadrescence, Yobot, Pcap, KamikazeBot, Tempodivalse, AnomieBOT, Cita-tion bot, GB fan, ArthurBot, GrouchoBot, RibotBOT, Constructive editor, Medanat, Citation bot 1, DrilBot, XxTimberlakexx, RedBot,Babayagagypsies, MoreNet, WillNess, John lindgren, Forgefight, EmausBot, Tranhungnghiep, Chharvey, Bxj, RenaissanceBug, Simur-gia, Arnaud741, Carmichael, Brian Tansley, Walfredo424-NJITWILL, ClueBot NG, Widr, BG19bot, Uri-Levy, Chmarkine, Jon.weldon,TheJJJunk, Splendor78, Jochen Burghardt, Mark viking, Gratimax, Carrot Lord, Comp.arch, Monkbot, Peturb, Jacob p12, Lomotos10,Phạm Nguyễn Trường An, Paul STice, Rakshit93 and Anonymous: 181

• Mutual recursion Source: https://en.wikipedia.org/wiki/Mutual_recursion?oldid=666309842Contributors: Tobias Hoevekamp, LC~enwiki,TomCerul, Michael Hardy, Minesweeper, LittleDan, Gandalf61, Sietse, Nabla, Spayrard, Oleg Alexandrov, LOL, Ruud Koot, NickyM-cLean, Nbarth, JonHarder, CBM, Gwern, Jamelan, AlanUS, Brentsmith101, Jarble, Legobot, Yobot, Arjun G. Menon, Materialscientist,Ade oshineye, ClueBot NG, BattyBot, Ldionne92, LeviNotik, Elfsternberg, Alakzi and Anonymous: 9

• Nonrecursive filter Source: https://en.wikipedia.org/wiki/Nonrecursive_filter?oldid=676712306 Contributors: Mojo Hand, PostcardCathy, Ottawahitech, AnomieBOT, Vgenapl and Sandeshad

• Open recursion Source: https://en.wikipedia.org/wiki/This_(computer_programming)?oldid=677267463 Contributors: Pnm, Rfc1394,Bkell, Tobias Bergemann, Beland, Nickptar, Int19h, Andreas Kaufmann, D6, AlexChurchill, Max Terry, Jaberwocky6669, Spoon!,Bobo192, John Vandenberg, AKGhetto, Deryck Chan, Woohookitty, Male1979, Sjö, Mjm1964, Hairy Dude, Sikon, EngineerScotty,Cleared as filed, Elkman, Whitejay251, Cedar101, JLaTondre, SmackBot, Kulandai, Anwar saadat, Chris the speller, Thumperward,Nbarth, Colonies Chris, Cybercobra, Decltype, Pemu, Dreftymac, FatalError, Green caterpillar, Torc2, Thijs!bot, Widefox, Ioeth, Kev-inmon, Gwern, Wiki Raja, RHaden, Jerryobject, SimonTrew, DumZiBoT, C. A. Russell, VIRGOPTREX, Addbot, Ghettoblaster, Btx40,Yobot, Vanished user rt41as76lk, AnomieBOT, Omnipaedista, Vaxquis, FrescoBot, Jkr2255, EmausBot, John of Reading, Wikitanvir-Bot, Richard asr, Ocaasi, Wingman4l7, ClueBot NG, Matthiaspaul, Vanished user 28lq93pq34ms, Helpful Pixie Bot, KLBot2, SongO,Chmarkine, EdwardH, BattyBot, ChrisGualtieri, Cmutuku, Comp.arch, HiTryx, Hoyawiki, Blueclaude and Anonymous: 45

• Polymorphic recursion Source: https://en.wikipedia.org/wiki/Polymorphic_recursion?oldid=672498637 Contributors: Rich Farmbrough, Woohookitty, Ruud Koot, BuffaloBob, Thrasibule, Nic Waller, Katharineamy, Niceguyedc, Arjayay, Yobot, MuffledThud, Miyagawa, SchreyP, Ashtonfedler, Vcunat, Helpful Pixie Bot, Pdecalculus and Anonymous: 5

• Primitive recursive function Source: https://en.wikipedia.org/wiki/Primitive_recursive_function?oldid=675966980 Contributors: AxelBoldt, Tobias Hoevekamp, Zundark, The Anome, Verloren, Darius Bacon, Wmorgan, Mjb, Michael Hardy, Modster, Chinju, Ciphergoth, Schneelocke, Timwi, Dcoetzee, Zoicon5, Markhurd, Hyacinth, Populus, Fredrik, Altenmann, MathMartin, Bethenco, Salty-horse, T2l3r, Tobias Bergemann, Giftlite, Kelson, Luqui, Ascánder, Foobaz, AshtonBenson, Sligocki, Suruena, RJFJR, Artur adib, Oleg Alexandrov, Ruud Koot, Ryan Reich, Graham87, Josh Parris, Mike Segal, NeonMerlin, R.e.b., NekoDaemon, Roboto de Ajvol, YurikBot, Hairy Dude, Trovatore, Długosz, TheSwami, Bota47, Ott2, Mofoburrell, Ilmari Karonen, Nahaj, SmackBot, Pkirlin, Mhss, Cybercobra, Adamarthurryan, Derek farn, Wvbailey, Loadmaster, Phuzion, Blehfu, JRSpriggs, CRGreathouse, CBM, Gregbard, Cydebot, Juansempere, LaGrange, JAnDbot, David Eppstein, TechnoFaye, Christian Storm, Crisperdue, Serprex, Tcamps42, SieBot, Pauldrowe, Cyfal, Anchor Link Bot, Adrianwn, Razimantv, Mindstalk, Marc van Leeuwen, Brentsmith101, Legobot, V4us, AnomieBOT, Joule36e5, Rubinbot, FrescoBot, HRoestBot, EmausBot, SpectatorRah, ZéroBot, AvicAWB, D.Lazard, Daniseverin, Bezik, Widr, Byron.hawkins, Lfss2, Mark viking, Alturkestani and Anonymous: 59

• Primitive recursive set function Source: https://en.wikipedia.org/wiki/Primitive_recursive_set_function?oldid=621081734 Contributors: Michael Hardy, R.e.b., JRSpriggs and Solarra

• Recursion Source: https://en.wikipedia.org/wiki/Recursion?oldid=677154277 Contributors: Damian Yerrick, AxelBoldt, The Anome, Tarquin, Taw, Grouse, Andre Engels, Eclecticology, XJaM, Toby Bartels, Fubar Obfusco, Miguel~enwiki, Boleslav Bobcik, Mjb, Youandme, Stevertigo, Quintessent, Patrick, Michael Hardy, Mic, Ixfd64, Graue, TakuyaMurata, Rochus, Minesweeper, Pcb21, Tregoweth, Stevenj, JWSchmidt, Александър, Salsa Shark, Poor Yorick, Ghewgill, Mxn, Hashar, Revolver, Dcoetzee, Dino, Dysprosia, Malcohol, Greenrd, Jogloran, Wik, Furrykef, Hyacinth, Head, Wernher, Jonhays0, Khym Chanur, Rls, Jeanmichel~enwiki, Ldo, Robbot, Ruinia, Boffy b, Scarlet, Donreed, Bernhard Bauer, Altenmann, Chancemill, Gandalf61, Orthogonal, Ashley Y, DHN, Wlievens, Hadal, Netjeff, Diberri, Tobias Bergemann, David Gerard, Jimpaz, SimonMayer, Ancheta Wis, Giftlite, Mshonle~enwiki, N12345n, Kim Bruning, Wolfkeeper, Vfp15, Itay2222~enwiki, Wikiwikifast, Guanaco, Sundar, Khalid hassani, DemonThing, Knutux, Antandrus, Beland, Robert Brockway, AndrewKeenanRichardson, Icairns, Lumidek, Derek Parnell, Jh51681, Sonett72, Ratiocinate, Andreas Kaufmann, Zondor, Mernen, Alkivar, Freakofnurture, Slady, Lifefeed, Discospinster, Guanabot, Leibniz, Antaeus Feldspar, Deelkar, Paul August, Crtrue, PutzfetzenORG, SgtThroat, Noren, Bobo192, Nigelj, Marco Polo, Ray Dassen, Blonkm, R. S. Shaw, Obradovic Goran, Officiallyover, GatesPlusPlus, Jumbuck, Liao, Jezmck, Tabor, Yamla, MarkGallagher, Echuck215, Malo, ReyBrujo, Dominic, DV8 2XL, Mattbrundage, Kenyon, Oleg Alexandrov, Crosbiesmith, Feezo, Weyes, Cyclotronwiki, Bjones, Linas, Mindmatrix, Camw, David Haslam, PoccilScript, Oliphaunt, Bkkbrad, Ruud Koot, Robertwharvey, Srborlongan, Ralfipedia, DeweyQ, Marudubshinki, GSlicer, SqueakBox, Graham87, Qwertyus, Kbdank71, TobyJ, Jclemens, Paul13~enwiki, Rjwilmsi, Koavf, Quiddity, Salix alba, Robmods, Toresbe, Windchaser, Mathbot, Nihiltres, Itinerant1, Fragglet, NekoDaemon, RexNL, Alvin-cs, BMF81, Mongreilf, Gwernol, Kakurady, Uriah923, Wavelength, RobotE, BillG, Jlittlet, RussBot, Michael Slone, Cmore, Piet Delport, Stephenb, DragonHawk, Deskana, Catamorphism, Nick, Jamesmcguigan, Larrylaptop, MarkSG, Harry Mudd, Dbfirs, The divine bovine, Fang Aili, JoanneB, WikiFew, That Guy, From That Show!, Robertd, SmackBot, RDBury, InverseHypercube, McGeddon, Unyoyega, Wegesrand, Jagged 85, Firstrock, Cachedio, NickGarvey, Persian Poet Gal, Thumperward, Neurodivergent, Stevage, HubHikari, Nbarth, Darth Panda, Signed in, Quaque, GeeksHaveFeelings, Jahiegel, Tamfang, Jefffire, Writtenright, Nixeagle, Snowmanradio, JonHarder, Caseydk, Jon Awbrey, Tethros, John Reid, TenPoundHammer, Lambiam, Derek farn, Agradman, Dlibennowell, MagnaMopus, Malixsys, Miketomasello, 16@r, A. Parrot, Remigiu, Davemcarlson, Mets501, Hilverd, Tawkerbot2, Shirahadasha, Wolfdog, Gamma57, CRGreathouse, Ahy1, CBM, INVERTED, Myasuda, Gregbard, Dragon’s Blood, Tawkerbot4, DumbBOT, Thijs!bot, Epbr123, N5iln, Mojo Hand, Zyrxil, RobHar, Uruiamme, Escarbot, PhiLiP, Thadius856, SpongeSebastian, AntiVandalBot, Bm gub, Kaini, JAnDbot, Kaobear, Albany NY, Hut 8.5, Greensburger, Gavia immer, Acroterion, JNW, JamesBWatson, Soulbot, Karl432, David Eppstein, Bwildasi, Slimeknight, Falcor84, Bitbit, Flaxmoore, Dennisthe2, Prgrmr@wrk, Meatbites,

Patar knight, Trusilver, Milan95, Geehbee, Sanjay742, MONODA, Jdoubleu, Coppertwig, Chiswick Chap, Joshua Issac, Lxix77, Tiggerjay, Deor, Jehan60188, Yugsdrawkcabeht, Technopat, Zurishaddai, Una Smith, Ferengi, Martin451, BotKung, Hyrulio, SpecMode, Jesin, Sliskisty, Anishsane, Plusdo, Wassamatta, MCTales, Mallerd, Jantaro, Dogah, Ivan Štambuk, AlphaPyro, Malcolmxl5, Ham Pastrami, The.ravenous.llama, Taemyr, Todoslocos, Prestonmag, PhilMacD, Thehotelambush, Knavex, AlanUS, CBM2, Rinconsoleao, Escape Orbit, Classicalecon, Luatha, Ricklaman, SlackerMom, ClueBot, Pi zero, SuperHamster, Boing! said Zebedee, Alpcr, Mijo34, DragonBot, BobManPerson, Vanmaple, M4gnum0n, Lantzy, Cenarium, Hidro, Botsjeh, Resuna, ChrisHodgesUK, Aoe3lover, Franklin.vp, Doriftu, XLinkBot, Marc van Leeuwen, Tombraider007, Ost316, Avoided, WikHead, Ziggy Sawdust, Addbot, Laudan08, Some jerk on the Internet, Fluffernutter, Watzit, Mohamed Magdy, Download, CarsracBot, NittyG, AnnaFrance, Favonian, Ozob, Tide rolls, Jarble, Weganwock, Legobot, Luckas-bot, Yobot, Systemizer, Rockfan.by, Maldrasen, Obscuranym, Pcap, KamikazeBot, MJM74, AnomieBOT, Materialscientist, Maarwaan, RobertEves92, Citation bot, Obersachsebot, Xqbot, Apothecia, Zargontapel, Hydrated Wombat, Dushycom, Shadowjams, Kamitsaha, Constructive editor, Spazturtle, Prari, Altg20April2nd, Thayts, Skychildandsonofthesun, OgreBot, Drjaye, DrilBot, Pinethicket, Σ, Coolaery, Xxx3xxx, Tachophile, Varmin, MusicNewz, کاشف عقیل, Jonkerz, ArbitUsername, Genezistan, Benimation, Aoidh, Reaper Eternal, Suffusion of Yellow, Tbhotch, Reach Out to the Truth, Acu192, AleHitch, EmausBot, Kyxzme, Avenue X at Cicero, JeffreyAylesworth, ScottyBerg, RA0808, IceMarioman, Wikipelli, MithrandirAgain, PiemanLK, FinalRapture, Nightsideoflife, Coasterlover1994, Cookiefonster, L Kensington, Chewings72, Scientific29, Vikram360, Sven Manguard, Steveswikiedits, ClueBot NG, Josephshanak, JohnsonL623, Fourmi volage, Brainbelly, Snotbot, Frietjes, Parcly Taxel, Sage321, MerlIwBot, Helpful Pixie Bot, Barravian, Kinaro, Picklebobdogflog, Siddhesh33, Maharshi91, LionelTabre, Ashwiniborse, Mark Arsten, CottontailOfChrist, Altaïr, Aw.alatiqi, Mushi no onna, ChiisaiTenshi, Davidfreesefan23, Cheetahs1990, Khazar2, Tony Heffernan, Mogism, TalhaIrfanKhan, Lugia2453, Sriharsh1234, Brirush, Johnnypeebuckets, A.entropy, Lyxkg007, Carrot Lord, Pdecalculus, TheWisestOfFools, Vande957, Pizzakingme, Cyborg1981, Abhikpal2509, Aa508186 and Anonymous: 624

• Recursion (computer science) Source: https://en.wikipedia.org/wiki/Recursion_(computer_science)?oldid=677153969 Contributors: Edward, Michael Hardy, Kku, Dwo, Aenar, Thilo, Mattflaschen, Tobias Bergemann, Giftlite, Macrakis, Derek Parnell, Andreas Kaufmann, Abdull, MMSequeira, Paul August, Nigelj, R. S. Shaw, Liao, Diego Moya, Jeltz, Kenyon, Mahanga, Linas, Mindmatrix, MattGiuca, Ruud Koot, Graham87, Raymond Hill, Jclemens, Rjwilmsi, Leeyc0, Salix alba, Essayemyoung4009, CalPaterson, Quuxplusone, Ver, Chobot, Wavelength, Grafen, Dijxtra, Trovatore, Jpbowen, Rwalker, Tcsetattr, Cedar101, JLaTondre, RandallZ, Brahle, Linkminer, SmackBot, InverseHypercube, McGeddon, Bigbluefish, Philiprogers, Gilliam, LinguistAtLarge, Thumperward, Timneu22, Nbarth, David Morón, JonHarder, Tobyink, Clements, T-borg, Almkglor, Loadmaster, Valepert, Tmcw, AlainD, Shabbirbhimani, CRGreathouse, CBM, Pierre de Lyon, Cydebot, NotQuiteEXPComplete, BetacommandBot, Thijs!bot, Escarbot, PhiLiP, Gioto, Salgueiro~enwiki, Lfstevens, JAnDbot, Magioladitis, Walkeraj, Abednigo, David Eppstein, R'n'B, Mange01, Maurice Carbonaro, Coppertwig, Shoessss, Mythrill, DaoKaioshin, Nxavar, Awl, Viggio, Pi is 3.14159, Volkan YAZICI, Sara.noorafkan, Hariva, Martarius, ClueBot, Cchantep~enwiki, Jeshan, Mild Bill Hiccup, Erudecorp, Tim32, Eboyjr, XLinkBot, Brentsmith101, Drolz09, Addbot, Ghettoblaster, Poco a poco, Merarischroeder, Btx40, Mohamed Magdy, Tide rolls, Fryed-peach, Luckas-bot, Quadrescence, Yobot, Pcap, KamikazeBot, Tempodivalse, AnomieBOT, Citation bot, GB fan, ArthurBot, GrouchoBot, RibotBOT, Constructive editor, Medanat, Citation bot 1, DrilBot, XxTimberlakexx, RedBot, Babayagagypsies, MoreNet, WillNess, John lindgren, Forgefight, EmausBot, Tranhungnghiep, Chharvey, Bxj, RenaissanceBug, Simurgia, Arnaud741, Carmichael, Brian Tansley, Walfredo424-NJITWILL, ClueBot NG, Widr, BG19bot, Uri-Levy, Chmarkine, Jon.weldon, TheJJJunk, Splendor78, Jochen Burghardt, Mark viking, Gratimax, Carrot Lord, Comp.arch, Monkbot, Peturb, Jacob p12, Lomotos10, Phạm Nguyễn Trường An, Paul STice, Rakshit93 and Anonymous: 181

• Recursion termination Source: https://en.wikipedia.org/wiki/Recursion_termination?oldid=590539880 Contributors: GTBacchus, Ruud Koot, Kesla, SmackBot, Flyer22, Dekart, Yobot, Rish.saksena, Jesse V., Ashleyleia and Anonymous: 2

• Recursive definition Source: https://en.wikipedia.org/wiki/Recursive_definition?oldid=669880349 Contributors: Ed Poor, Charles Matthews, Hyacinth, Giftlite, Antaeus Feldspar, Paul August, El C, RoyBoy, Cnelson, Keenan Pepper, RJFJR, Velho, BD2412, Mahahahaneapneap, Mukkakukaku, RussBot, Moe Epsilon, Bo Jacoby, SmackBot, Mets501, CBM, Gregbard, Spewin, Thijs!bot, Connor Behan, Pharaoh of the Wizards, ClueBot, Andrewbt, LAX, Wounder, Mlaffs, Marc van Leeuwen, OlenWhitaker, Subversive.sound, WikiDao, Addbot, Luckas-bot, Yobot, Ptbotgourou, Pcap, Puzzlenut, Jordaan12, Erik9bot, François Bry, Raiden09, Horcrux92, KurtSchwitters, ZéroBot, Vubb, FCoMtl, Helpful Pixie Bot, YFdyh-bot, Namrehs24 and Anonymous: 25

• Recursive function Source: https://en.wikipedia.org/wiki/Recursive_function?oldid=612348461 Contributors: Michael Hardy, Jason Quinn, Ruud Koot, Johndburger, Xyzzyplugh, CBM, Epbr123, JohnShep, DMCer, Adrianwn, Addbot, Boleyn3, Bgeron, ZéroBot, Diogoan and Anonymous: 4

• Recursive language Source: https://en.wikipedia.org/wiki/Recursive_language?oldid=651118683 Contributors: Bryan Derksen, Seb, Drahflow, Michael Hardy, Ahoerstemeier, Arthur Frayn, Charles Matthews, Dcoetzee, Big Bob the Finder, MathMartin, Hadal, Saforrest, Tobias Bergemann, Jason Quinn, MarkSweep, Tyler McHenry, Spayrard, Jonsafari, Obradovic Goran, LOL, Graham87, B6s~enwiki, Chris Pressey, Mathbot, Margosbot~enwiki, NekoDaemon, YurikBot, Hairy Dude, RedDwarf~enwiki, Trovatore, Theda, That Guy, From That Show!, SmackBot, Pkirlin, Hugo-cs, Bluebot, DMacks, CBM, Gregbard, Misof, Cydebot, Salgueiro~enwiki, Hermel, JAnDbot, LordAnubisBOT, Jamelan, EmxBot, SieBot, Svick, Hans Adler, Addbot, DOI bot, Pcap, AnomieBOT, Materialscientist, Citation bot, Obersachsebot, Vanoco, Citation bot 1, John of Reading, Zahnradzacken, ClueBot NG, Max Longint, Jochen Burghardt and Anonymous: 31

• Reentrancy (computing) Source: https://en.wikipedia.org/wiki/Reentrancy_(computing)?oldid=675085602 Contributors: Dachshund, B4hand, TOGoS, TakuyaMurata, Darkwind, Spikey, Ruakh, Tobias Bergemann, Zigger, Marc Mongenet, Cvalente, Andreas Kaufmann, Lornova~enwiki, Alansohn, Msa11usec, Suruena, CloudNine, Alba7~enwiki, Kenyon, Hyperfusion, Linas, Ruud Koot, Roktas~enwiki, Prashanthns, Marudubshinki, Pmj, FlaBot, Eubot, Intgr, Chobot, Bgwhite, YurikBot, Splintercellguy, Mipadi, Zargulon, Allens, SmackBot, Kharker, KiloByte, Emurphy42, Ian Burnet~enwiki, DvG~enwiki, Tsca.bot, Hikrammohan, JonHarder, Normxxx, Cybercobra, Pcgomes, SashatoBot, JohnI, Catstail, Wikidrone, Kvng, Dgquintas, Cydebot, Hnassif, Ebrahim, Thijs!bot, Darkuranium, Un brice, NocNokNeo, Jmkelly, Lklundin, Kilrothi, Destynova, Abednigo, VolkovBot, Nxavar, Una Smith, AlleborgoBot, Mccoyst, ClueBot, Rahul81mohan, Alexbot, Wilsone9, Rajputrajat, Holygoat, Dsimic, Addbot, Ghettoblaster, Luckas-bot, Jordsan, AnomieBOT, LilHelpa, QLown, Thehelpfulbot, FrescoBot, Chturne, RedBot, DSNVLKQWE, CobraBot, Dalant019, Arkenflame, ClueBot NG, JulianRDWinter, Gurghet, Heliosonde, OmarMOthman, Mkerrisk, Wilywampa, Fronx, UOResearch and Anonymous: 74

• Single recursion Source: https://en.wikipedia.org/wiki/Recursion_(computer_science)?oldid=677153969 Contributors: Edward, Michael Hardy, Kku, Dwo, Aenar, Thilo, Mattflaschen, Tobias Bergemann, Giftlite, Macrakis, Derek Parnell, Andreas Kaufmann, Abdull, MMSequeira, Paul August, Nigelj, R. S. Shaw, Liao, Diego Moya, Jeltz, Kenyon, Mahanga, Linas, Mindmatrix, MattGiuca, Ruud Koot, Graham87, Raymond Hill, Jclemens, Rjwilmsi, Leeyc0, Salix alba, Essayemyoung4009, CalPaterson, Quuxplusone, Ver, Chobot, Wavelength, Grafen, Dijxtra, Trovatore, Jpbowen, Rwalker, Tcsetattr, Cedar101, JLaTondre, RandallZ, Brahle, Linkminer, SmackBot, InverseHypercube, McGeddon, Bigbluefish, Philiprogers, Gilliam, LinguistAtLarge, Thumperward, Timneu22, Nbarth, David Morón, JonHarder, Tobyink, Clements, T-borg, Almkglor, Loadmaster, Valepert, Tmcw, AlainD, Shabbirbhimani, CRGreathouse, CBM, Pierre de Lyon, Cydebot, NotQuiteEXPComplete, BetacommandBot, Thijs!bot, Escarbot, PhiLiP, Gioto, Salgueiro~enwiki, Lfstevens, JAnDbot, Magioladitis, Walkeraj, Abednigo, David Eppstein, R'n'B, Mange01, Maurice Carbonaro, Coppertwig, Shoessss, Mythrill, DaoKaioshin, Nxavar, Awl, Viggio, Pi is 3.14159, Volkan YAZICI, Sara.noorafkan, Hariva, Martarius, ClueBot, Cchantep~enwiki, Jeshan, Mild Bill Hiccup, Erudecorp, Tim32, Eboyjr, XLinkBot, Brentsmith101, Drolz09, Addbot, Ghettoblaster, Poco a poco, Merarischroeder, Btx40, Mohamed Magdy, Tide rolls, Fryed-peach, Luckas-bot, Quadrescence, Yobot, Pcap, KamikazeBot, Tempodivalse, AnomieBOT, Citation bot, GB fan, ArthurBot, GrouchoBot, RibotBOT, Constructive editor, Medanat, Citation bot 1, DrilBot, XxTimberlakexx, RedBot, Babayagagypsies, MoreNet, WillNess, John lindgren, Forgefight, EmausBot, Tranhungnghiep, Chharvey, Bxj, RenaissanceBug, Simurgia, Arnaud741, Carmichael, Brian Tansley, Walfredo424-NJITWILL, ClueBot NG, Widr, BG19bot, Uri-Levy, Chmarkine, Jon.weldon, TheJJJunk, Splendor78, Jochen Burghardt, Mark viking, Gratimax, Carrot Lord, Comp.arch, Monkbot, Peturb, Jacob p12, Lomotos10, Phạm Nguyễn Trường An, Paul STice, Rakshit93 and Anonymous: 181

• Tail call Source: https://en.wikipedia.org/wiki/Tail_call?oldid=677267566 Contributors: Damian Yerrick, Wapcaplet, Selket, Kwi, Saraphim, Ruakh, Tobias Bergemann, Beland, Andreas Kaufmann, Jkl, Discospinster, Rich Farmbrough, Liberatus, Tony Sidaway, Gpvos, MattGiuca, Ruud Koot, Qwertyus, Buster79, Cedar101, Janto, LinguistAtLarge, Nbarth, Bertport, Dgw, Pgr94, Cydebot, Blaisorblade, Ebrahim, Lightmouse, AlanUS, DFRussia, Wprlh, Addbot, Download, Twimoki, Yobot, PMLawrence, AnomieBOT, Royote, JackieBot, Kotika98, LilHelpa, Control.valve, Ivan Shmakov, Erik9bot, Krinkle, Cnwilliams, Rywebb, WillNess, AManWithNoPlan, Helpful Pixie Bot, BG19bot, IkamusumeFan, Chrsimon, Hollylilholly, Monkbot and Anonymous: 48

• Transfinite induction Source: https://en.wikipedia.org/wiki/Transfinite_induction?oldid=655962451 Contributors: AxelBoldt, The Anome, Tarquin, Twilsonb, Patrick, Chas zzz brown, Michael Hardy, Charles Matthews, Dysprosia, Doradus, Markhurd, Fuzzy Logic, HorsePunchKid, Quarl, Æ, Elwikipedista~enwiki, Pt, EmilJ, Jim Slim, Oleg Alexandrov, Netluck, Ruud Koot, Rjwilmsi, Maxim Razin, Chobot, Algebraist, Grubber, Trovatore, BOT-Superzerocool, SmackBot, Melchoir, Tesseran, Stotr~enwiki, Zero sharp, JRSpriggs, Alexey Feldgendler, CBM, Blaisorblade, LachlanA, WinBot, Albmont, Epsilon0, Bananas29, Ttwo, Dessources, DorganBot, Danwills, Geometry guy, Thehotelambush, Addbot, Somebody9973, דוד שי, TotientDragooned, Luckas-bot, Yobot, Freebirth Toad, ViolaPlayer, Utkarshl, Constructive editor, Undsoweiter, Paine Ellsworth, Lotje, ZéroBot, DASHBotAV, Brirush and Anonymous: 27

• Tree traversal Source: https://en.wikipedia.org/wiki/Tree_traversal?oldid=675432807 Contributors: Julesd, Dcoetzee, Dysprosia, Greenrd, Chealer, Altenmann, DHN, Miles, Tobias Bergemann, Giftlite, DavidCary, Beryllium, Matt Crypto, Pgan002, Beland, Andreas Kaufmann, Discospinster, T Long, ZeroOne, CanisRufus, Edward Z. Yang, Mypa4me, R. S. Shaw, Cwolfsheep, Zultan, Sam Korn, Hooperbloob, Arthena, Zippanova, Velella, Oleg Alexandrov, Zntrip, Ruud Koot, Isnow, Akavel~enwiki, Kesla, Alienus, Qwertyus, Rjwilmsi, AySz88, Jjhat1, Quuxplusone, Kri, Kickboy, Snailwalker, The Rambling Man, Wavelength, Hairy Dude, Akamad, Slarson, Salrizvy, Closedmouth, Cedar101, SmackBot, Meinhard Benn, Incnis Mrsi, Abeb leu, Konstantin Veretennicov, Neurodivergent, GregRM, Nbarth, DHN-bot~enwiki, Dcallen, Short Circuit, Cybercobra, Mlpkr, PsychoAlienDog, Benash, Yan Kuligin, Aleenf1, Hu12, Rory O'Kane, Shoeofdeath, Pmussler, Suanshsinghal, Simeon, Cydebot, B, Nowhere man, Feanor981, Dale Gerdemann, Gimmetrow, Jaxelrod, WinBot, Widefox, Uselesswarrior, R27182818, CountingPine, Shwaathi, Ahmad87, GermanX, Arthuralbano, Aimal.rextin, Gwern, Kaladis, ScaledLizard, Themania, Pomte, Jeffbadge, KylieTastic, Maghnus, Cedric dlb, Gregsinclair, S t hathliss, Jamelan, Manik762007, Cmbay, Jochgem, Mameisam, LungZeno, Flyer22, Djcollom, Steven Crossin, Ctxppc, Amol2960, Myth010101, Stygiansonic, Arakunem, Excirial, Halawu, Promethean, Pwp333, Pierzz, Addbot, BasVanSchaik, Atethnekos, GSMR, V-Teq~enwiki, CanadianLinuxUser, Dyadron, West.andrew.g, Rssr sy, Teles, Jarble, Kapildalwani, Yobot, Bunnyhop11, Jordsan, Crispmuncher, Totan420, AnomieBOT, DemocraticLuntz, Materialscientist, Kc03, Xonev, Lewix, WissensDürster, FrescoBot, Mark Renier, Pinethicket, I dream of horses, Mandeepsandhu, Yadra~enwiki, MaxRadin, KillerGardevoir, Vrenator, Adityazutshi, Figod, Shafaet, Mark Lentczner, Wtuvell, Van Hohenheim, Muditjai, Demonkoryu, Wagino 20100516, Bobogoobo, 28bot, ClueBot NG, Jack Greenmaven, Baartvaark, Catlemur, SixelaNoegip, Snotbot, Jmhain, Anupmehra, Wiki.phylo, HMSSolent, BG19bot, Ramesh Ramaiah, BPositive, ThePiston, ElWatchDog, RJK1984, Khazar2, Alphama, Vigi90, Akerbos, Jamesx12345, Will Faught, Mark viking, Epicgenius, Sohomdeep, François Robere, Thecoolsops, Tentinator, Andrey.a.mitin, Babitaarora, Raghuv.adhepalli, Paul2520, Manul, OmegaTwiddle, Touyats, Xitaowen, Nbhatia911, Victorliuwiki, Vaibhav.misra27, Shaun richard, Pranav.deotale and Anonymous: 352

• Walther recursion Source: https://en.wikipedia.org/wiki/Walther_recursion?oldid=675284609 Contributors: Tobias Bergemann, Almkglor, Bazzargh, The Elves Of Dunsimore, ConcernedVancouverite and Jochen Burghardt

31.3.2 Images

• File:Ambox_important.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public domain Contributors: Own work, based off of Image:Ambox scales.svg Original artist: Dsmurat (talk · contribs)

• File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: ? Contributors: ? Original artist: ?

• File:Disambig_gray.svg Source: https://upload.wikimedia.org/wikipedia/en/5/5f/Disambig_gray.svg License: Cc-by-sa-3.0 Contributors: ? Original artist: ?

• File:Drost_Effect_using_remote_desktop_connection.png Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Drost_Effect_using_remote_desktop_connection.png License: CC BY-SA 4.0 Contributors: Own work Original artist: Kartik26

• File:Droste.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/62/Droste.jpg License: Public domain Contributors: [4] [5] Original artist: Jan (Johannes) Musset?

• File:Droste_effect_1260359-3.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/31/Droste_effect_1260359-3.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Nevit Dilmen (talk)

• File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The Tango! Desktop Project. Original artist: The people from the Tango! project. And according to the meta-data in the file, specifically: “Andreas Nilsson, and Jakub Steiner (although minimally).”

• File:Fractal_fern_explained.png Source: https://upload.wikimedia.org/wikipedia/commons/4/4b/Fractal_fern_explained.png License: Public domain Contributors: Own work Original artist: António Miguel de Campos

• File:KochFlake.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d9/KochFlake.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?

• File:Left-fold-transformation.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5a/Left-fold-transformation.png License: Public domain Contributors: English Wikipedia, created by Cale Gibbard in Inkscape who released it into public domain Original artist: Cale Gibbard

• File:Nuvola_apps_edu_mathematics_blue-p.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Nuvola_apps_edu_mathematics_blue-p.svg License: GPL Contributors: Derivative work from Image:Nuvola apps edu mathematics.png and Image:Nuvola apps edu mathematics-p.svg Original artist: David Vignoni (original icon); Flamurai (SVG convertion); bayo (color)

• File:Omega-exp-omega-labeled.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e6/Omega-exp-omega-labeled.svg License: CC0 Contributors: Own work Original artist: Pop-up casket (talk); original by User:Fool

• File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0 Contributors: Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist: Tkgd2007

• File:RecursiveFunction1_execution.png Source: https://upload.wikimedia.org/wikipedia/commons/8/8a/RecursiveFunction1_execution.png License: Public domain Contributors: I made it myself Original artist: User:Maxtremus

• File:RecursiveFunction2_execution.png Source: https://upload.wikimedia.org/wikipedia/commons/0/0d/RecursiveFunction2_execution.png License: Public domain Contributors: I made it myself Original artist: User:Maxtremus

• File:RecursiveTree.JPG Source: https://upload.wikimedia.org/wikipedia/commons/f/f7/RecursiveTree.JPG License: Public domain Contributors: Own work Original artist: Brentsmith101

• File:Right-fold-transformation.png Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Right-fold-transformation.png License: Public domain Contributors: Taken from English Wikipedia, created by Cale Gibbard in Inkscape who released it into public domain. Original artist: Cale Gibbard

• File:Serpiente_alquimica.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/71/Serpiente_alquimica.jpg License: Public domain Contributors: cf. scan of entire page here. Original artist: anonymous medieval illuminator; uploader Carlos adanero

• File:Sierpinski_triangle.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/45/Sierpinski_triangle.svg License: CC BY-SA 3.0 Contributors: ? Original artist: ?

• File:Software_spanner.png Source: https://upload.wikimedia.org/wikipedia/commons/8/82/Software_spanner.png License: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia; transfer was stated to be made by User:Rockfang. Original artist: Original uploader was CharlesC at en.wikipedia

• File:Sorted_binary_tree_breadth-first_traversal.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d1/Sorted_binary_tree_breadth-first_traversal.svg License: CC0 Contributors: edited public domain file http://commons.wikimedia.org/wiki/File:Sorted_binary_tree_inorder.svg using Adobe Illustrator Original artist: Rory O'Kane

• File:Sorted_binary_tree_inorder.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/77/Sorted_binary_tree_inorder.svg License: Public domain Contributors: Sorted_binary_tree.svg Original artist: Sorted_binary_tree.svg: Miles

• File:Sorted_binary_tree_postorder.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9d/Sorted_binary_tree_postorder.svg License: Public domain Contributors: Sorted_binary_tree.svg Original artist: Sorted_binary_tree.svg: Miles

• File:Sorted_binary_tree_preorder.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d4/Sorted_binary_tree_preorder.svg License: Public domain Contributors: Sorted_binary_tree.svg Original artist: Sorted_binary_tree.svg: Miles

• File:Sourdough.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/0a/Sourdough.jpg License: CC BY 4.0 Contributors: Own work Original artist: Janus Sandsgaard

• File:Text_document_with_red_question_mark.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg from the Tango project. Original artist: Benjamin D. Esham (bdesham)

• File:Tower_of_Hanoi.jpeg Source: https://upload.wikimedia.org/wikipedia/commons/0/07/Tower_of_Hanoi.jpeg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?

• File:Venn_A_intersect_B.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Venn_A_intersect_B.svg License: Public domain Contributors: Own work Original artist: Cepheus

• File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License: CC-BY-SA-3.0 Contributors: Wiki_letter_w.svg Original artist: Wiki_letter_w.svg: Jarkko Piiroinen

• File:Wikibooks-logo-en-noslogan.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Wikibooks-logo-en-noslogan.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: User:Bastique, User:Ramac et al.

• File:Wiktionary-logo-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Wiktionary-logo-en.svg License: Public domain Contributors: Vector version of Image:Wiktionary-logo-en.png. Original artist: Vectorized by Fvasconcellos (talk · contribs), based on original logo tossed together by Brion Vibber

31.3.3 Content license

• Creative Commons Attribution-Share Alike 3.0