
Recursion acg

From Wikipedia, the free encyclopedia

Contents

1 Anonymous recursion
  1.1 Use
  1.2 Alternatives
    1.2.1 Named functions
    1.2.2 Passing functions as arguments
  1.3 Examples
    1.3.1 JavaScript
    1.3.2 Perl
  1.4 References

2 Bar recursion
  2.1 Technical Definition
  2.2 References

3 Corecursion
  3.1 Examples
    3.1.1 Factorial
    3.1.2 Fibonacci sequence
    3.1.3 Tree traversal
  3.2 Definition
  3.3 Discussion
  3.4 History
  3.5 See also
  3.6 Notes
  3.7 References

4 Course-of-values recursion
  4.1 Definition and examples
  4.2 Equivalence to primitive recursion
  4.3 Application to primitive recursive functions
  4.4 References

5 Droste effect
  5.1 Origin
  5.2 Examples
  5.3 See also
  5.4 References
  5.5 External links

6 Fixed-point combinator
  6.1 Introduction
    6.1.1 Values and domains
    6.1.2 Function versus implementation
    6.1.3 What is a “combinator”?
  6.2 Usage
    6.2.1 The factorial function
  6.3 Fixed point combinators in lambda calculus
    6.3.1 Equivalent definition of a fixed-point combinator
    6.3.2 Derivation of the Y combinator
    6.3.3 Other fixed-point combinators
    6.3.4 Strict fixed point combinator
    6.3.5 Non-standard fixed-point combinators
  6.4 Implementation in other languages
    6.4.1 Lazy functional implementation
    6.4.2 Strict functional implementation
    6.4.3 Imperative language implementation
  6.5 Typing
    6.5.1 Type for the Y combinator
  6.6 General information
  6.7 See also
  6.8 Notes
  6.9 References
  6.10 External links

7 Fold (higher-order function)
  7.1 Folds as structural transformations
  7.2 Folds on lists
    7.2.1 Linear vs. tree-like folds
    7.2.2 Special folds for non-empty lists
    7.2.3 Implementation
    7.2.4 Evaluation order considerations
    7.2.5 Examples
  7.3 Folds in various languages
  7.4 Universality
  7.5 See also
  7.6 References
  7.7 External links

8 Recursion
  8.1 Formal definitions
  8.2 Informal definition
  8.3 In language
    8.3.1 Recursive humor
  8.4 In mathematics
    8.4.1 Recursively defined sets
    8.4.2 Finite subdivision rules
    8.4.3 Functional recursion
    8.4.4 Proofs involving recursive definitions
    8.4.5 Recursive optimization
  8.5 In computer science
  8.6 In art
  8.7 The recursion theorem
    8.7.1 Proof of uniqueness
    8.7.2 Examples
  8.8 See also
  8.9 Bibliography
  8.10 References
  8.11 External links

9 Recursion (computer science)
  9.1 Recursive functions and algorithms
  9.2 Recursive data types
    9.2.1 Inductively defined data
    9.2.2 Coinductively defined data and corecursion
  9.3 Types of recursion
    9.3.1 Single recursion and multiple recursion
    9.3.2 Indirect recursion
    9.3.3 Anonymous recursion
    9.3.4 Structural versus generative recursion
  9.4 Recursive programs
    9.4.1 Recursive procedures
    9.4.2 Recursive data structures (structural recursion)
  9.5 Implementation issues
    9.5.1 Wrapper function
    9.5.2 Short-circuiting the base case
    9.5.3 Hybrid algorithm
  9.6 Recursion versus iteration
    9.6.1 Expressive power
    9.6.2 Performance issues
    9.6.3 Stack space
    9.6.4 Multiply recursive problems
  9.7 Tail-recursive functions
  9.8 Order of execution
    9.8.1 Function 1
    9.8.2 Function 2 with swapped lines
  9.9 Time-efficiency of recursive algorithms
    9.9.1 Shortcut rule
  9.10 See also
  9.11 Notes and references
  9.12 Further reading
  9.13 External links
  9.14 Text and image sources, contributors, and licenses
    9.14.1 Text
    9.14.2 Images
    9.14.3 Content license


Chapter 1

Anonymous recursion

In computer science, anonymous recursion is recursion which does not explicitly call a function by name. This can be done either explicitly, by using a higher-order function – passing in a function as an argument and calling it – or implicitly, via reflection features which allow one to access certain functions depending on the current context, especially “the current function” or sometimes “the calling function of the current function”.

In programming practice, anonymous recursion is notably used in JavaScript, which provides reflection facilities to support it. In general programming practice, however, this is considered poor style, and recursion with named functions is suggested instead. Anonymous recursion via explicitly passing functions as arguments is possible in any language that supports functions as arguments, though this is rarely used in practice, as it is longer and less clear than explicitly recursing by name.

In theoretical computer science, anonymous recursion is important, as it shows that one can implement recursion without requiring named functions. This is particularly important for the lambda calculus, which has anonymous unary functions, but is able to compute any recursive function. This anonymous recursion can be produced generically via fixed-point combinators.

1.1 Use

Anonymous recursion is primarily of use in allowing recursion for anonymous functions, particularly when they form closures or are used as callbacks, to avoid having to bind the name of the function.

Anonymous recursion primarily consists of calling “the current function”, which results in direct recursion. Anonymous indirect recursion is possible, such as by calling “the caller (the previous function)”, or, more rarely, by going further up the call stack, and this can be chained to produce mutual recursion. The self-reference of “the current function” is a functional equivalent of the “this” keyword in object-oriented programming, allowing one to refer to the current context.

Anonymous recursion can also be used for named functions, rather than calling them by name, say to specify that one is recursing on the current function, or to allow one to rename the function without needing to change the name where it calls itself. However, as a matter of programming style this is generally not done.

1.2 Alternatives

1.2.1 Named functions

The usual alternative is to use named functions and named recursion. Given an anonymous function, this can be done either by binding a name to the function, as in named function expressions in JavaScript, or by assigning the function to a variable and then calling the variable, as in function statements in JavaScript. Since languages that allow anonymous functions generally allow assigning these functions to variables (if not first-class functions), many languages do not provide a way to refer to the function itself, and explicitly reject anonymous recursion; examples include Go.[1]


For example, in JavaScript the factorial function can be defined via anonymous recursion as such:[2]

[1,2,3,4,5].map(function(n) { return (!(n>1))? 1 : arguments.callee(n-1)*n; });

Rewritten to use a named function expression yields:

[1,2,3,4,5].map(function factorial(n) { return (!(n>1))? 1 : factorial(n-1)*n; });

1.2.2 Passing functions as arguments

Even without mechanisms to refer to the current function or calling function, anonymous recursion is possible in a language that allows functions as arguments. This is done by adding another parameter to the basic recursive function and using this parameter as the function for the recursive call. This creates a higher-order function, and passing this higher-order function itself allows anonymous recursion within the actual recursive function. This can be done purely anonymously by applying a fixed-point combinator to this higher-order function. This is mainly of academic interest, particularly to show that the lambda calculus has recursion, as the resulting expression is significantly more complicated than the original named recursive function. Conversely, the use of fixed-point combinators may be generically referred to as “anonymous recursion”, as this is a notable use of them, though they have other applications.[3][4]

This is illustrated below using Python. First, a standard named recursion:

def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)

Using a higher-order function so the top-level function recurses anonymously on an argument, but still needing the standard recursive function as an argument:

def fact0(n0):
    if n0 == 0:
        return 1
    return n0 * fact0(n0 - 1)

fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(n1 - 1)
fact = lambda n: fact1(fact0, n)

We can eliminate the standard recursive function by passing the function argument into the call:

fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(f, n1 - 1)
fact = lambda n: fact1(fact1, n)

The second line can be replaced by a generic higher-order function called a combinator:

F = lambda f: (lambda x: f(f, x))
fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(f, n1 - 1)
fact = F(fact1)

Written anonymously:[5]

(lambda f: (lambda x: f(f, x))) \
    (lambda g, n1: 1 if n1 == 0 else n1 * g(g, n1 - 1))

In the lambda calculus, which only uses functions of a single variable, this can be done via the Y combinator. First make the higher-order function of two variables be a function of a single variable, which directly returns a function, by currying:

fact1 = lambda f: (lambda n1: 1 if n1 == 0 else n1 * f(f)(n1 - 1))
fact = fact1(fact1)

There are two “applying a higher order function to itself” operations here: f(f) in the first line and fact1(fact1) in the second. Factoring out the second double application into a combinator yields:

C = lambda x: x(x)
fact1 = lambda f: (lambda n1: 1 if n1 == 0 else n1 * f(f)(n1 - 1))
fact = C(fact1)

Factoring out the other double application yields:

C = lambda x: x(x)
D = lambda f: (lambda x: f(lambda v: x(x)(v)))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = C(D(fact1))

Combining the two combinators into one yields the Y combinator:

C = lambda x: x(x)
D = lambda f: (lambda x: f(lambda v: x(x)(v)))
Y = lambda y: C(D(y))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = Y(fact1)

Expanding out the Y combinator yields:

Y = lambda f: (lambda x: f(lambda v: x(x)(v))) \
              (lambda x: f(lambda v: x(x)(v)))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = Y(fact1)

Combining these yields a recursive definition of the factorial in lambda calculus (anonymous functions of a single variable):[6]

(lambda f: (lambda x: f(lambda v: x(x)(v))) (lambda x: f(lambda v: x(x)(v)))) \
    (lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1)))
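The resulting expression can be applied directly. For instance (a usage note added here for illustration, not part of the original text), evaluating the following in Python returns 120, i.e. 5!, without any function ever having been given a name:

((lambda f: (lambda x: f(lambda v: x(x)(v))) (lambda x: f(lambda v: x(x)(v))))
 (lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))))(5)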

1.3 Examples

1.3.1 JavaScript

In JavaScript, the current function is accessible via arguments.callee, while the calling function is accessible via arguments.caller. These allow anonymous recursion, such as in this implementation of the factorial:[2]

[1,2,3,4,5].map(function(n) { return (!(n>1))? 1 : arguments.callee(n-1)*n; });

1.3.2 Perl

Starting with Perl 5.16, the current subroutine is accessible via the __SUB__ token, which returns a reference to the current subroutine, or undef outside a subroutine.[7] This allows anonymous recursion, such as in the following implementation of the factorial:

use feature ":5.16";
sub {
    my $x = shift;
    $x == 0 ? 1 : $x * __SUB__->( $x - 1 );
}

1.4 References

[1] Issue 226: It’s impossible to recurse a anonymous function in Go without workarounds.

[2] answer by olliej, Oct 25 '08 to "Why was the arguments.callee.caller property deprecated in JavaScript?", StackOverflow

[3] This terminology appears to be largely folklore, but it does appear in the following:

• Trey Nash, Accelerated C# 2008, Apress, 2007, ISBN 1-59059-873-3, pp. 462–463. Derived substantially from Wes Dyer's blog (see next item).

• Wes Dyer, Anonymous Recursion in C#, February 02, 2007, contains a substantially similar example found in the book above, but accompanied by more discussion.

[4] The If Works Deriving the Y combinator, January 10th, 2008

[5] Hugo Walter’s answer to "Can a lambda function call itself recursively in Python?"

[6] Nux’s answer to "Can a lambda function call itself recursively in Python?"

[7] Perldoc, "The 'current_sub' feature", perldoc feature


Chapter 2

Bar recursion

Bar recursion is a generalized form of recursion developed by C. Spector in his 1962 paper.[1] It is related to bar induction in the same fashion that primitive recursion is related to ordinary induction, or transfinite recursion is related to transfinite induction.

2.1 Technical Definition

Let V, R, and O be types, and i be any natural number, representing a sequence of parameters taken from V. Then the function sequence f of functions f_n from V^(i+n) → R to O is defined by bar recursion from the functions L_n : R → O and B with B_n : ((V^(i+n) → R) × (V^n → R)) → O if:

• f_n((λα:V^(i+n))r) = L_n(r) for any r long enough that L_{n+k} on any extension of r equals L_n. Assuming L is a continuous sequence, there must be such an r, because a continuous function can use only finitely much data.

• f_n(p) = B_n(p, (λx:V)f_{n+1}(cat(p, x))) for any p in V^(i+n) → R.

Here “cat” is the concatenation function, sending p, x to the sequence which starts with p, and has x as its last term. (This definition is based on the one by Escardó and Oliva.[2])

Provided that for every sufficiently long function (λα)r of type V^i → R, there is some n with L_n(r) = B_n((λα)r, (λx:V)L_{n+1}(r)), the bar induction rule ensures that f is well-defined.

The idea is that one extends the sequence arbitrarily, using the recursion term B to determine the effect, until a sufficiently long node of the tree of sequences over V is reached; then the base term L determines the final value of f. The well-definedness condition corresponds to the requirement that every infinite path must eventually pass through a sufficiently long node: the same requirement that is needed to invoke a bar induction.

The principles of bar induction and bar recursion are the intuitionistic equivalents of the axiom of dependent choices.[3]

2.2 References

[1] C. Spector (1962). “Provably recursive functionals of analysis: a consistency proof of analysis by an extension of principles in current intuitionistic mathematics”. In F. D. E. Dekker. Recursive Function Theory: Proc. Symposia in Pure Mathematics 5. American Mathematical Society. pp. 1–27.

[2] Martín Escardó, Paulo Oliva. “Selection functions, Bar recursion, and Backwards Induction”. Math. Struct. in Comp. Science.

[3] Jeremy Avigad, Solomon Feferman (1999). “VI: Gödel’s functional (“Dialectica”) interpretation”. In S. R. Buss. Handbook of Proof Theory.


Chapter 3

Corecursion

In computer science, corecursion is a type of operation that is dual to recursion. Whereas recursion works analytically, starting on data further from a base case and breaking it down into smaller data and repeating until one reaches a base case, corecursion works synthetically, starting from a base case and building it up, iteratively producing data further removed from a base case. Put simply, corecursive algorithms use the data that they themselves produce, bit by bit, as they become available and needed, to produce further bits of data. A similar but distinct concept is generative recursion, which may lack a definite “direction” inherent in corecursion and recursion.

Where recursion allows programs to operate on arbitrarily complex data, so long as they can be reduced to simple data (base cases), corecursion allows programs to produce arbitrarily complex and potentially infinite data structures, such as streams, so long as they can be produced from simple data (base cases). Where recursion may not terminate, never reaching a base state, corecursion starts from a base state, and thus produces subsequent steps deterministically, though it may proceed indefinitely (and thus not terminate under strict evaluation), or it may consume more than it produces and thus become non-productive. Many functions that are traditionally analyzed as recursive can alternatively, and arguably more naturally, be interpreted as corecursive functions that are terminated at a given stage, for example recurrence relations such as the factorial.

Corecursion can produce both finite and infinite data structures as results, and may employ self-referential data structures. Corecursion is often used in conjunction with lazy evaluation, to produce only a finite subset of a potentially infinite structure (rather than trying to produce an entire infinite structure at once). Corecursion is a particularly important concept in functional programming, where corecursion and codata allow total languages to work with infinite data structures.

3.1 Examples

Corecursion can be understood by contrast with recursion, which is more familiar. While corecursion is primarily of interest in functional programming, it can be illustrated using imperative programming, which is done below using the generator facility in Python. In these examples local variables are used, and assigned values imperatively (destructively), though these are not necessary in corecursion in pure functional programming. In pure functional programming, rather than assigning to local variables, these computed values form an invariable sequence, and prior values are accessed by self-reference (later values in the sequence reference earlier values in the sequence to be computed). The assignments simply express this in the imperative paradigm and explicitly specify where the computations happen, which serves to clarify the exposition.

3.1.1 Factorial

A classic example of recursion is computing the factorial, which is defined recursively as 0! := 1 and n! := n × (n−1)!.

To recursively compute its result on a given input, a recursive function calls (a copy of) itself with a different (“smaller” in some way) input and uses the result of this call to construct its result. The recursive call does the same, unless the base case has been reached. Thus a call stack develops in the process. For example, to compute fac(3), this recursively calls in turn fac(2), fac(1), fac(0) (“winding up” the stack), at which point recursion terminates with fac(0) = 1, and then the stack unwinds in reverse order and the results are calculated on the way back along the call stack to the initial call frame fac(3), where the final result is calculated as 3 × 2 = 6 and finally returned. In this example a function returns a single value.

This stack unwinding can be explicated, defining the factorial corecursively, as an iterator, where one starts with the case of 1 =: 0!, then from this starting value constructs factorial values for increasing numbers 1, 2, 3... as in the above recursive definition with the “time arrow” reversed, as it were, by reading it backwards as n! × (n+1) =: (n+1)!. The corecursive algorithm thus defined produces a stream of all factorials. This may be concretely implemented as a generator. Symbolically, noting that computing the next factorial value requires keeping track of both n and f (a previous factorial value), this can be represented as:

n, f = (0, 1) : (n + 1, f × (n + 1))

or in Haskell,

(\(n,f) -> (n+1, f*(n+1))) `iterate` (0,1)

meaning, “starting from n, f = 0, 1, on each step the next values are calculated as n + 1, f × (n + 1)”. This is mathematically equivalent and almost identical to the recursive definition, but the +1 emphasizes that the factorial values are being built up, going forwards from the starting case, rather than being computed after first going backwards, down to the base case, with a −1 decrement. Note also that the direct output of the corecursive function does not simply contain the factorial n! values, but also includes for each value the auxiliary data of its index n in the sequence, so that any one specific result can be selected among them all, as and when needed.

Note the connection with denotational semantics, where the denotations of recursive programs are built up corecursively in this way.

In Python, a recursive factorial function can be defined as:[note 1]

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

This could then be called for example as factorial(5) to compute 5!.

A corresponding corecursive generator can be defined as:

def factorials():
    n, f = 0, 1
    while True:
        yield f
        n, f = n + 1, f * (n + 1)

This generates an infinite stream of factorials in order; a finite portion of it can be produced by:

def n_factorials(k):
    n, f = 0, 1
    while n <= k:
        yield f
        n, f = n + 1, f * (n + 1)

This could then be called to produce the factorials up to 5! via:

for f in n_factorials(5):
    print(f)

If we're only interested in a certain factorial, just the last value can be taken, or we can fuse the production and the access into one function,

def nth_factorial(k):
    n, f = 0, 1
    while n < k:
        n, f = n + 1, f * (n + 1)
    yield f

As can be readily seen here, this is practically equivalent (just by substituting return for the only yield there) to the accumulator argument technique for tail recursion, unwound into an explicit loop. Thus it can be said that the concept of corecursion is an explication of the embodiment of iterative computation processes by recursive definitions, where applicable.
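For comparison, the following is a minimal sketch, added here for illustration and not part of the original text, of the accumulator-argument, tail-recursive factorial that the preceding paragraph alludes to; the function and parameter names are made up for the example.

def fact_acc(k, n=0, f=1):
    # Accumulator-style tail recursion: n and f play the same roles as the
    # loop variables in nth_factorial above.
    if n == k:
        return f
    return fact_acc(k, n + 1, f * (n + 1))

# fact_acc(5) == 120, matching factorial(5) above.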

3.1.2 Fibonacci sequence

In the same way, the Fibonacci sequence can be represented as:

a, b = (0, 1) : (b, a + b)


Note that because the Fibonacci sequence is a recurrence relation of order 2, the corecursive relation must track two successive terms, with the (b, −) corresponding to the shift forward by one step, and the (−, a + b) corresponding to computing the next term. This can then be implemented as follows (using parallel assignment):

def fibonacci_sequence():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

In Haskell,

map fst ( (\(a,b) -> (b,a+b)) `iterate` (0,1) )

3.1.3 Tree traversal

Tree traversal via a depth-first approach is a classic example of recursion. Dually, breadth-first traversal can very naturally be implemented via corecursion.

Without using recursion or corecursion, one may traverse a tree by starting at the root node, placing the child nodes in a data structure, then removing the nodes in the data structure in turn and iterating over its children.[note 2] If the data structure is a stack (LIFO), this yields depth-first traversal, while if the data structure is a queue (FIFO), this yields breadth-first traversal.

Using recursion, a (post-order)[note 3] depth-first traversal can be implemented by starting at the root node and recursively traversing each child subtree in turn (the subtree based at each child node) – the second child subtree does not start processing until the first child subtree is finished. Once a leaf node is reached or the children of a branch node have been exhausted, the node itself is visited (e.g., the value of the node itself is outputted). In this case, the call stack (of the recursive functions) acts as the stack that is iterated over.

Using corecursion, a breadth-first traversal can be implemented by starting at the root node, outputting its value,[note 4]

then breadth-first traversing the subtrees – i.e., passing on the whole list of subtrees to the next step (not a single subtree, as in the recursive approach) – at the next step outputting the value of all of their root nodes, then passing on their child subtrees, etc.[note 5] In this case the generator function, indeed the output sequence itself, acts as the queue. As in the factorial example (above), where the auxiliary information of the index (which step one was at, n) was pushed forward, in addition to the actual output of n!, in this case the auxiliary information of the remaining subtrees is pushed forward, in addition to the actual output. Symbolically:

v, t = ([], FullTree) : (RootValues, ChildTrees)

meaning that at each step, one outputs the list of values of root nodes, then proceeds to the child subtrees. Generating just the node values from this sequence simply requires discarding the auxiliary child tree data, then flattening the list of lists (values are initially grouped by level (depth); flattening (ungrouping) yields a flat linear list).

These can be compared as follows. The recursive traversal handles a leaf node (at the bottom) as the base case (when there are no children, just output the value), and analyzes a tree into subtrees, traversing each in turn, eventually resulting in just leaf nodes – actual leaf nodes, and branch nodes whose children have already been dealt with (cut off below). By contrast, the corecursive traversal handles a root node (at the top) as the base case (given a node, first output the value), treats a tree as being synthesized of a root node and its children, then produces as auxiliary output a list of subtrees at each step, which are then the input for the next step – the child nodes of the original root are the root nodes at the next step, as their parents have already been dealt with (cut off above). Note also that in the recursive traversal there is a distinction between leaf nodes and branch nodes, while in the corecursive traversal there is no distinction, as each node is treated as the root node of the subtree it defines.

Notably, given an infinite tree,[note 6] the corecursive breadth-first traversal will traverse all nodes, just as for a finite tree, while the recursive depth-first traversal will go down one branch and not traverse all nodes, and indeed if traversing post-order, as in this example (or in-order), it will visit no nodes at all, because it never reaches a leaf. This shows the usefulness of corecursion rather than recursion for dealing with infinite data structures.

In Python, this can be implemented as follows.[note 7] The usual post-order depth-first traversal can be defined as:[note 8]

def df(node):
    if node is not None:
        df(node.left)
        df(node.right)
        print(node.value)

This can then be called by df(t) to print the values of the nodes of the tree in post-order depth-first order.

The breadth-first corecursive generator can be defined as:[note 9]


def bf(tree):
    tree_list = [tree]
    while tree_list:
        new_tree_list = []
        for tree in tree_list:
            if tree is not None:
                yield tree.value
                new_tree_list.append(tree.left)
                new_tree_list.append(tree.right)
        tree_list = new_tree_list

This can then be called to print the values of the nodes of the tree in breadth-first order:

for i in bf(t):
    print(i)

3.2 Definition

Initial data types can be defined as being the least fixpoint (up to isomorphism) of some type equation; the isomorphism is then given by an initial algebra. Dually, final (or terminal) data types can be defined as being the greatest fixpoint of a type equation; the isomorphism is then given by a final coalgebra.

If the domain of discourse is the category of sets and total functions, then final data types may contain infinite, non-wellfounded values, whereas initial types do not.[1][2] On the other hand, if the domain of discourse is the category of complete partial orders and continuous functions, which corresponds roughly to the Haskell programming language, then final types coincide with initial types, and the corresponding final coalgebra and initial algebra form an isomorphism.[3]

Corecursion is then a technique for recursively defining functions whose range (codomain) is a final data type, dual to the way that ordinary recursion recursively defines functions whose domain is an initial data type.[4]

The discussion below provides several examples in Haskell that distinguish corecursion. Roughly speaking, if one were to port these definitions to the category of sets, they would still be corecursive. This informal usage is consistent with existing textbooks about Haskell.[5] Also note that the examples used in this article predate the attempts to define corecursion and explain what it is.
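As an informal illustration of defining a function by what it produces rather than by what it consumes, here is a small Python sketch of a generic unfold that builds a stream from a seed. It is added here for illustration only (it is not part of the original article), and the names unfold, step and seed are made up for the example.

from typing import Callable, Iterator, Optional, Tuple, TypeVar

S = TypeVar("S")  # seed (state) type
A = TypeVar("A")  # element type

def unfold(step: Callable[[S], Optional[Tuple[A, S]]], seed: S) -> Iterator[A]:
    # Corecursively produce elements: each step emits a value and a new seed.
    # Returning None stops the stream; never returning None gives an infinite stream.
    state = seed
    while True:
        result = step(state)
        if result is None:
            return
        value, state = result
        yield value

# Example: the factorial stream of section 3.1.1, rebuilt with unfold.
factorials = unfold(lambda nf: (nf[1], (nf[0] + 1, nf[1] * (nf[0] + 1))), (0, 1))
# Successive next(factorials) calls yield 1, 1, 2, 6, 24, ...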

3.3 Discussion

The rule for primitive corecursion on codata is the dual to that for primitive recursion on data. Instead of descending on the argument by pattern-matching on its constructors (that were called up before, somewhere, so we receive a ready-made datum and get at its constituent sub-parts, i.e. “fields”), we ascend on the result by filling in its “destructors” (or “observers”, that will be called afterwards, somewhere – so we're actually calling a constructor, creating another bit of the result to be observed later on). Thus corecursion creates (potentially infinite) codata, whereas ordinary recursion analyses (necessarily finite) data. Ordinary recursion might not be applicable to the codata because it might not terminate. Conversely, corecursion is not strictly necessary if the result type is data, because data must be finite.

Here is an example in Haskell. The following definition produces the list of Fibonacci numbers in linear time:

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

This infinite list depends on lazy evaluation; elements are computed on an as-needed basis, and only finite prefixes are ever explicitly represented in memory. This feature allows algorithms on parts of codata to terminate; such techniques are an important part of Haskell programming.

This can be done in Python as well:[6]

from itertools import tee, chain, islice

def add(x, y):
    return x + y

def fibonacci():
    def deferred_output():
        for i in output:
            yield i
    result, c1, c2 = tee(deferred_output(), 3)
    # The cited recipe used itertools.imap (Python 2); in Python 3 the built-in map is lazy.
    paired = map(add, c1, islice(c2, 1, None))
    output = chain([0, 1], paired)
    return result

for i in islice(fibonacci(), 20):
    print(i)

The definition of zipWith can be inlined, leading to this:

fibs = 0 : 1 : next fibs
  where next (a : t@(b:_)) = (a+b) : next t

This example employs a self-referential data structure. Ordinary recursion makes use of self-referential functions, but does not accommodate self-referential data. However, this is not essential to the Fibonacci example. It can be rewritten as follows:


fibs = fibgen (0,1)
fibgen (x,y) = x : fibgen (y,x+y)

This employs only a self-referential function to construct the result. If it were used with a strict list constructor it would be an example of runaway recursion, but with a non-strict list constructor this guarded recursion gradually produces an indefinitely defined list.

Corecursion need not produce an infinite object; a corecursive queue[7] is a particularly good example of this phenomenon. The following definition produces a breadth-first traversal of a binary tree in linear time:

data Tree a b = Leaf a | Branch b (Tree a b) (Tree a b)

bftrav :: Tree a b -> [Tree a b]
bftrav tree = queue
  where
    queue = tree : gen 1 queue
    gen 0   p                  = []
    gen len (Leaf _ : p)       = gen (len-1) p
    gen len (Branch _ l r : p) = l : r : gen (len+1) p

This definition takes an initial tree and produces a list of subtrees. This list serves a dual purpose as both the queue and the result (gen len p produces its output len notches after its input back-pointer, p, along the queue). It is finite if and only if the initial tree is finite. The length of the queue must be explicitly tracked in order to ensure termination; this can safely be elided if this definition is applied only to infinite trees.

Another particularly good example gives a solution to the problem of breadth-first labeling.[8] The function label visits every node in a binary tree in a breadth-first fashion, and replaces each label with an integer, each subsequent integer being bigger than the last by one. This solution employs a self-referential data structure, and the binary tree can be finite or infinite.

label :: Tree a b -> Tree Int Int
label t = t′
  where (t′, ns) = label′ t (1:ns)

label′ :: Tree a b -> [Int] -> (Tree Int Int, [Int])
label′ (Leaf _)       (n:ns) = (Leaf n, n+1 : ns)
label′ (Branch _ l r) (n:ns) = (Branch n l′ r′, n+1 : ns′′)
  where (l′, ns′)  = label′ l ns
        (r′, ns′′) = label′ r ns′
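For readers more comfortable with the imperative style used earlier in this chapter, the following Python sketch, added here for illustration, shows what breadth-first labeling computes. It is not from the original article, does not reproduce the self-referential lazy technique above, uses an explicit queue and the Tree class from note [7], and assumes a finite tree.

from collections import deque

def label_bfs(root):
    # Relabel node values with 1, 2, 3, ... in breadth-first order (in place).
    counter = 1
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node is not None:
            node.value = counter
            counter += 1
            queue.append(node.left)
            queue.append(node.right)
    return root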

An apomorphism (such as an anamorphism, such as unfold) is a form of corecursion in the same way that a paramorphism (such as a catamorphism, such as fold) is a form of recursion.

The Coq proof assistant supports corecursion and coinduction using the CoFixpoint command.

3.4 History

Corecursion, referred to as circular programming, dates at least to (Bird 1984), who credits John Hughes and Philip Wadler; more general forms were developed in (Allison 1989). The original motivations included producing more efficient algorithms (allowing one pass over data in some cases, instead of requiring multiple passes) and implementing classical data structures, such as doubly linked lists and queues, in functional languages.

3.5 See also

• Bisimulation

• Coinduction

• Recursion

• Anamorphism

3.6 Notes

[1] Not validating input data.

[2] More elegantly, one can start by placing the root node itself in the structure and then iterating.

[3] Post-order is to make “leaf node is base case” explicit for exposition, but the same analysis works for pre-order or in-order.

[4] Breadth-first traversal, unlike depth-first, is unambiguous, and visits a node value before processing children.


[5] Technically, one may define a breadth-first traversal on an ordered, disconnected set of trees – first the root node of each tree, then the children of each tree in turn, then the grandchildren in turn, etc.

[6] Assume fixed branching factor (e.g., binary), or at least bounded, and balanced (infinite in every direction).

[7] First defining a tree class, say via:

    class Tree:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

        def __str__(self):
            return str(self.value)

and initializing a tree, say via:

    t = Tree(1, Tree(2, Tree(4), Tree(5)), Tree(3, Tree(6), Tree(7)))

In this example nodes are labeled in breadth-first order: 1 2 3 4 5 6 7

[8] Intuitively, the function iterates over subtrees (possibly empty), then once these are finished, all that is left is the node itself, whose value is then returned; this corresponds to treating a leaf node as basic.

[9] Here the argument (and loop variable) is considered as a whole, possibly infinite tree, represented by (identified with) its root node (tree = root node), rather than as a potential leaf node, hence the choice of variable name.

3.7 References

[1] Barwise and Moss 1996.

[2] Moss and Danner 1997.

[3] Smyth and Plotkin 1982.

[4] Gibbons and Hutton 2005.

[5] Doets and van Eijck 2004.

[6] Hettinger 2009.

[7] Allison 1989; Smith 2009.

[8] Jones and Gibbons 1992.

• Bird, Richard Simpson (1984). “Using circular programs to eliminate multiple traversals of data”. Acta Informatica 21 (3): 239–250. doi:10.1007/BF00264249.

• Lloyd Allison (April 1989). “Circular Programs and Self-Referential Structures”. Software Practice and Experience 19 (2): 99–109. doi:10.1002/spe.4380190202.

• Geraint Jones and Jeremy Gibbons (1992). Linear-time breadth-first tree algorithms: An exercise in the arithmetic of folds and zips (Technical report). Dept of Computer Science, University of Auckland.

• Jon Barwise and Lawrence S Moss (June 1996). Vicious Circles. Center for the Study of Language and Information. ISBN 978-1-57586-009-1.

• Lawrence S Moss and Norman Danner (1997). “On the Foundations of Corecursion”. Logic Journal of the IGPL 5 (2): 231–257. doi:10.1093/jigpal/5.2.231.

• Kees Doets and Jan van Eijck (May 2004). The Haskell Road to Logic, Maths, and Programming. King’s College Publications. ISBN 978-0-9543006-9-2.

• David Turner (2004-07-28). “Total Functional Programming”. Journal of Universal Computer Science 10 (7): 751–768. doi:10.3217/jucs-010-07-0751.

• Jeremy Gibbons and Graham Hutton (April 2005). “Proof methods for corecursive programs”. Fundamenta Informaticae Special Issue on Program Transformation 66 (4): 353–366.

• Leon P Smith (2009-07-29), “Lloyd Allison’s Corecursive Queues: Why Continuations Matter”, The Monad Reader (14): 37–68

• Raymond Hettinger (2009-11-19). “Recipe 576961: Technique for cyclical iteration”.

• M. B. Smyth and G. D. Plotkin (1982). “The Category-Theoretic Solution of Recursive Domain Equations”. SIAM Journal on Computing 11 (4): 761–783. doi:10.1137/0211062.


Chapter 4

Course-of-values recursion

In computability theory, course-of-values recursion is a technique for defining number-theoretic functions by recursion. In a definition of a function f by course-of-values recursion, the value of f(n+1) is computed from the sequence ⟨f(1), f(2), . . . , f(n)⟩. The fact that such definitions can be converted into definitions using a simpler form of recursion is often used to prove that functions defined by course-of-values recursion are primitive recursive. This article uses the convention that the natural numbers are the set {1,2,3,4,...}.

4.1 Definition and examples

The factorial function n! is recursively defined by the rules

0! = 1,
(n+1)! = (n+1) × n!.

This recursion is a primitive recursion because it computes the next value (n+1)! of the function based on the value of n and the previous value n! of the function. On the other hand, the function Fib(n), which returns the nth Fibonacci number, is defined with the recursion equations

Fib(0) = 0,
Fib(1) = 1,
Fib(n+2) = Fib(n+1) + Fib(n).

In order to compute Fib(n+2), the last two values of the Fib function are required. Finally, consider the function g defined with the recursion equations

g(0) = 0,
g(n+1) = ∑_{i=0}^{n} g(i)^(n−i).

To compute g(n+1) using these equations, all the previous values of g must be computed; no fixed finite number of previous values is sufficient in general for the computation of g. The functions Fib and g are examples of functions defined by course-of-values recursion.

In general, a function f is defined by course-of-values recursion if there is a fixed primitive recursive function h such that for all n,

f(n) = h(n, ⟨f(0), f(1), . . . , f(n− 1)⟩)

where ⟨f(0), f(1), . . . , f(n− 1)⟩ is a Gödel number encoding the indicated sequence. In particular


f(0) = h(0, ⟨⟩),

provides the initial value of the recursion. The function h might test its first argument to provide explicit initial values, for instance for Fib one could use the function defined by

h(n, s) = n                    if n < 2
h(n, s) = s[n−2] + s[n−1]      if n ≥ 2

where s[i] denotes extraction of the element i from an encoded sequence s; this is easily seen to be a primitive recursive function (assuming an appropriate Gödel numbering is used).
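As a concrete, unencoded illustration of this scheme, the following Python sketch uses an ordinary list in place of a Gödel-encoded sequence. It is added here for illustration and is not part of the original article; the names h and f mirror the text, but the code itself is only a sketch.

def h(n, s):
    # s is the list [f(0), ..., f(n-1)] of all previously computed values.
    if n < 2:
        return n
    return s[n - 2] + s[n - 1]

def f(n):
    # Course-of-values recursion: build up the whole history of values.
    history = []
    for k in range(n + 1):
        history.append(h(k, history))
    return history[n]

# f(0), f(1), f(2), ... == 0, 1, 1, 2, 3, 5, ... (the Fibonacci numbers)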

4.2 Equivalence to primitive recursion

In order to convert a definition by course-of-values recursion into a primitive recursion, an auxiliary (helper) function is used. Suppose that one wants to have

f(n) = h(n, ⟨f(0), f(1), . . . , f(n− 1)⟩)

To define f using primitive recursion, first define the auxiliary course-of-values function that should satisfy

f̄(n) = ⟨f(0), f(1), . . . , f(n− 1)⟩.

Thus f̄(n) encodes the first n values of f. The function f̄ can be defined by primitive recursion because f̄(n+1) is obtained by appending to f̄(n) the new element h(n, f̄(n)):

f̄(0) = ⟨⟩,
f̄(n+1) = append(n, f̄(n), h(n, f̄(n))),

where append(n, s, x) computes, whenever s encodes a sequence of length n, a new sequence t of length n + 1 such that t[n] = x and t[i] = s[i] for all i < n (again this is a primitive recursive function, under the assumption of an appropriate Gödel numbering).

Given f̄, the original function f can be defined by f(n) = f̄(n + 1)[n], which shows that it is also a primitive recursive function.
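The same construction can be sketched in Python, again with plain lists standing in for Gödel-encoded sequences and reusing the h sketched above (an illustration added here, not part of the article; the helper names are made up):

def append_seq(n, s, x):
    # Models append(n, s, x): s encodes a sequence of length n; the result has length n + 1.
    assert len(s) == n
    return s + [x]

def f_bar(n):
    # Primitive recursion on n: f_bar(0) = <>, f_bar(n+1) = append(n, f_bar(n), h(n, f_bar(n))).
    s = []
    for k in range(n):
        s = append_seq(k, s, h(k, s))
    return s

def f_from_bar(n):
    # f(n) = f_bar(n + 1)[n]
    return f_bar(n + 1)[n]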

4.3 Application to primitive recursive functions

In the context of primitive recursive functions, it is convenient to have a means to represent finite sequences of natural numbers as single natural numbers. One such method, Gödel’s encoding, represents a sequence ⟨n1, n2, . . . , nk⟩ as

∏_{i=1}^{k} p_i^(n_i)

where p_i represents the ith prime. It can be shown that, with this representation, the ordinary operations on sequences are all primitive recursive. These operations include

• Determining the length of a sequence,

• Extracting an element from a sequence given its index,


• Concatenating two sequences.
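A small Python sketch of this prime-power encoding, added here for illustration (not part of the article; the helper names are made up, and trial division is used only for simplicity):

from itertools import islice

def primes():
    # Unbounded prime generator by trial division (adequate for illustration).
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    # <n1, ..., nk>  |->  product over i of p_i ** n_i  (the convention used above)
    code = 1
    for p, n in zip(primes(), seq):
        code *= p ** n
    return code

def extract(code, i):
    # Exponent of the (i+1)-th prime in code, i.e. the element at index i (0-based).
    p = next(islice(primes(), i, None))
    n = 0
    while code % p == 0:
        code //= p
        n += 1
    return n

# encode([3, 1, 2]) == 2**3 * 3**1 * 5**2 == 600, and extract(600, 2) == 2

Note that with exponents n_i a trailing zero contributes a factor of p^0 = 1, so the codes for ⟨0⟩ and ⟨0, 0⟩ coincide; this is exactly the issue that the modified encoding with exponents n_i + 1, discussed below, addresses.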

Using this representation of sequences, it can be seen that if h(m) is primitive recursive then the function

f(n) = h(⟨f(1), f(2), . . . , f(n− 1)⟩)

is also primitive recursive. When the natural numbers are taken to begin with zero, the sequence ⟨n1, n2, . . . , nk⟩ is instead represented as

∏_{i=1}^{k} p_i^(n_i + 1)

which makes it possible to distinguish the codes for the sequences ⟨0⟩ and ⟨0, 0⟩ .

4.4 References

• Hinman, P.G., 2006, Fundamentals of Mathematical Logic, A K Peters.

• Odifreddi, P.G., 1989, Classical Recursion Theory, North Holland; second edition, 1999.


Chapter 5

Droste effect

The Droste effect – known as mise en abyme in art – is the effect of a picture appearing within itself, in a place where a similar picture would realistically be expected to appear.[1] The appearance is recursive: the smaller version contains an even smaller version of the picture, and so on. Only in theory could this go on forever; practically, it continues only as long as the resolution of the picture allows, which is relatively short, since each iteration geometrically reduces the picture’s size. It is a visual example of a strange loop, a self-referential system of instancing which is the cornerstone of fractal geometry.

5.1 Origin

The effect is named after the image on the tins and boxes of Droste cocoa powder, one of the main Dutch brands, which displayed a nurse carrying a serving tray with a cup of hot chocolate and a box with the same image.[2] This image, introduced in 1904 and maintained for decades with slight variations, became a household notion. Reportedly, poet and columnist Nico Scheepmaker introduced wider usage of the term in the late 1970s.[3]

The Droste effect was used by Giotto di Bondone in 1320, in his Stefaneschi Triptych. The polyptych altarpiece portrays in its center panel Cardinal Giacomo Gaetani Stefaneschi offering the triptych itself to St. Peter.[4] There are also several examples from medieval times of books featuring images containing the book itself or window panels in churches depicting miniature copies of the window panel itself.[5]

5.2 Examples

Land O'Lakes Butter packaging has an Indian maiden carrying a package of butter with a picture of herself.

Bottles of Clicquot Club soda showed brand mascot “Clicquot” the Eskimo boy holding a bottle of Clicquot Club soda.

The cover of the vinyl album Ummagumma by Pink Floyd shows a band member sitting, with a picture on the wall. The picture shows the same scene with a different band member and the effect continues for all four band members, with the picture for the fourth being the cover of their previous album A Saucerful of Secrets.

In the 1971 science fiction film Escape from the Planet of the Apes, the character Dr. Otto Hasslein (Eric Braeden) attempts to explain the appearance in present day of intelligent apes from Earth’s future. Hasslein uses a painting on a CRT monitor to illustrate an effect he refers to as “infinite regression”. The demonstration consists of a camera pulling away from a picture of an artist painting a picture, on a suggested infinite loop.

The logo of cheese spread brand The Laughing Cow bears the Droste effect.

In the 1980s, Chicago Cubs announcer Harry Caray regularly appeared on television station WGN-TV standing next to a television set that was tuned to WGN showing Caray and the TV, thus forming a Droste effect.

The title sequence of the eighth series of the British TV series Doctor Who features the show’s TARDIS emerging from a spiraling Droste effect clockface.[6]

This effect can also occur on a computer if the copy of the entire monitor screen is reproduced within one window.


For example, the image on the right was created by creating a connection from a computer to itself using Chrome Remote Desktop.

5.3 See also

5.4 References

[1] Nänny, Max and Fischer, Olga, The Motivated Sign: Iconicity in Language and Literature, p. 37, John Benjamins Publishing Company (2001) ISBN 90-272-2574-5

[2] Törnqvist, Egil. Ibsen: A Doll’s House, p. 105, Cambridge University Press (1995) ISBN 0-521-47866-9

[3] “Droste, altijd welkom”. cultuurarchief.nl.

[4] “Giotto di Bondone and assistants: Stefaneschi triptych”. vatican.va.

[5] See the collection of articles Medieval 'mise-en-abyme': the object depicted within itself for examples and opinions on how this effect was used symbolically.

[6] Title sequence by Billy Hanshaw adapted from his design as a fan of the show.

5.5 External links

• Escher and the Droste effect

• The Math Behind the Droste Effect (article by Jos Leys summarizing the results of the Leiden study and article)

• Droste Effect with Mathematica

• Droste Effect from Wolfram Demonstrations Project

• HyperDroste : Create Droste Effect animations on an iPhone


The woman holds an object bearing a smaller image of her holding the same object, which in turn bears a smaller image of her holding the same object, and so on.


Example of the Droste effect in a spiraling format

Example of Droste effect using Chrome Remote Desktop.


Chapter 6

Fixed-point combinator

In computer science, a fixed-point combinator (or fixpoint combinator[1]) is a higher-order function y that satisfies the equation,

y f = f (y f)

It is so named because, by setting x = y f , it represents a solution to the fixed point equation,

x = f x

A fixed point of a function f is a value that doesn't change under the application of the function f. Consider the function f x = x². The values 0 and 1 are fixed points of this function, because 0 = 0² and 1 = 1². This function has no other fixed points.

A fixed point combinator need not exist for all functions. Also if f is a function of more than one parameter, the fixed point of the function need not be a total function.

Functions that satisfy the equation for y expand as,

y f = f (. . . f (y f) . . .)

A particular implementation of y is Curry’s paradoxical combinator Y, represented in lambda calculus by,

λf.(λx.f (x x)) (λx.f (x x))

This combinator may be used in implementing Curry’s paradox. The heart of Curry’s paradox is that lambda calculus is unsound as a deductive system, and the Y combinator demonstrates that by allowing an anonymous expression to represent zero, or even many values. This is inconsistent in mathematical logic.

Applied to a function with one variable the Y combinator usually does not terminate. More interesting results are obtained by applying the Y combinator to functions of two or more variables. The second variable may be used as a counter, or index. The resulting function behaves like a while or a for loop in an imperative language.

Used in this way the Y combinator implements simple recursion. In the lambda calculus it is not possible to refer to the definition of a function in a function body. Recursion may only be achieved by passing in a function as a parameter. The Y combinator demonstrates this style of programming.
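A brief illustration of this style in Python, added here as a sketch (not from the original text): because Python evaluates arguments strictly, the plain Y combinator would loop forever, so the sketch uses the eta-expanded (“strict”) variant discussed later in this chapter; the names Z and fact_step are made up for the example.

# Strict (eta-expanded) fixed-point combinator, often called Z:
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive step function: given an approximation of factorial,
# it returns a slightly better one.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(fact_step)
print(factorial(5))  # 120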

6.1 Introduction

The Y combinator is an implementation of the fixed-point combinator in lambda calculus. Fixed-point combinators may also be easily defined in other functional and imperative languages. The implementation in lambda calculus is more difficult due to limitations in lambda calculus.


The fixed-point combinator may be used in a number of different areas:

• General mathematics

• Untyped lambda calculus

• Typed lambda calculus

• Functional programming

• Imperative programming

Fixed point combinators may be applied to a range of different functions, but normally will not terminate unless there is an extra parameter. Even with lazy evaluation, when the function to be fixed refers to its parameter, another call to the function is invoked; the calculation never gets started. The extra parameter is needed to trigger the start of the calculation.

The type of the fixed point is the return type of the function being fixed. This may be a real number, a function, or any other type.

In the untyped lambda calculus, the function to apply the fixed point combinator to may be expressed using an encoding, like Church encoding. In this case particular lambda terms (which define functions) are considered as values. “Running” (beta reducing) the fixed point combinator on the encoding gives a lambda term for the result, which may then be interpreted as a fixed point value. Alternatively, a function may be considered as a lambda term defined purely in lambda calculus.

These different approaches affect how a mathematician and a programmer may regard a fixed point combinator. A lambda calculus mathematician may see the Y combinator applied to a function as being an expression satisfying the fixed point equation, and therefore a solution. In contrast, a person only wanting to apply a fixed point combinator to some general programming task may see it only as a means of implementing recursion.

6.1.1 Values and domains

Every expression has one value. This is true in general mathematics and it must be true in lambda calculus. This means that in lambda calculus, applying a fixed point combinator to a function gives you an expression whose value is the fixed point of the function.

However, this is a value in the lambda calculus domain; it may not correspond to any value in the domain of the function, so in a practical sense it is not necessarily a fixed point of the function, and only in the lambda calculus domain is it a fixed point of the equation.

For example, consider,

x² = −1 ⇒ x = −1/x ⇒ f x = −1/x ∧ Y f = x

Division of signed numbers may be implemented in the Church encoding, so f may be represented by a lambda term. This equation has no solution in the real numbers. But in the domain of the complex numbers, i and −i are solutions. This demonstrates that there may be solutions to an equation in another domain. However, the lambda term for the solution of the above equation is weirder than that. The lambda term Y f represents the state where x could be either i or −i, as one value. The information distinguishing these two values has been lost in the change of domain.

For the lambda calculus mathematician, this is a consequence of the definition of lambda calculus. For the programmer, it means that the beta reduction of the lambda term will loop forever, never reaching a normal form.

6.1.2 Function versus implementation

The fixed-point combinator may be defined in mathematics and then implemented in other languages. General mathematics defines a function based on its extensional properties.[2] That is, two functions are equal if they perform the same mapping. Lambda calculus and programming languages regard function identity as an intensional property. A function's identity is based on its implementation.

A lambda calculus function (or term) is an implementation of a mathematical function. In the lambda calculus there are a number of combinators (implementations) that satisfy the mathematical definition of a fixed-point combinator.

6.1.3 What is a “combinator”?

A combinator is a particular type of higher-order function that may be used in defining functions without using variables. The combinators may be combined to direct values to their correct places in the expression without ever naming them as variables.

6.2 Usage

Usually, when applied to functions of one parameter, implementations of the fixed point combinator fail to terminate. Functions with extra parameters are more interesting.

The Y combinator is an example of what makes the lambda calculus inconsistent, so it should be regarded with suspicion. However, it is safe to consider the Y combinator when defined in mathematical logic only. The definition is,

y f = f (y f)

It is easy to see how f may be applied to one variable. Applying it to two or more variables requires adding them to the equation,

y f x = f (y f) x

This version of the equation must be shown consistent with the previous by the definition for equality of functions,

(∀x. f x = g x) ≡ f = g

This definition allows the two equations for y to be regarded as equivalent, provided that the domain of x is well defined. So if f has multiple parameters, the y f may still be regarded as a fixed point, with some restrictions.

6.2.1 The factorial function

The factorial function provides a good example of how the fixed point combinator may be applied to functions of two variables. The result demonstrates simple recursion, as would be implemented in a single loop in an imperative language. The definition of numbers used is explained in Church encoding. The fixed point function is,

F f n = (IsZero n) 1 (multiply n (f (pred n)))

so y F is,

y F n = F (y F ) n

or

y F n = (IsZero n) 1 (multiply n ((y F ) (pred n)))

Setting y F = fact gives,


fact n = (IsZero n) 1 (multiply n (fact (pred n)))

This definition is equivalent to the mathematical definition of factorial,

fact n = if n = 0 then 1 else n ∗ fact (n − 1)

This definition puts F in the role of the body of a loop to be iterated.
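As a concrete illustration, here is a minimal Haskell sketch of the same idea (the names factF and factorial are illustrative, not from the text; fix from Data.Function plays the role of y):

import Data.Function (fix)

-- factF plays the role of F: it takes "the rest of the recursion" as its
-- first argument and never refers to itself.
factF :: (Integer -> Integer) -> Integer -> Integer
factF f n = if n == 0 then 1 else n * f (n - 1)

factorial :: Integer -> Integer
factorial = fix factF    -- factorial 5 evaluates to 120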

6.3 Fixed point combinators in lambda calculus

The Y combinator, discovered by Haskell B. Curry, is defined as:

Y = λf.(λx.f (x x)) (λx.f (x x))

Beta reduction of this gives Y g = g (Y g). By repeatedly applying this equality we get,

Y g = g (Y g) = g (g (Y g)) = g (. . . g (Y g) . . .)

6.3.1 Equivalent definition of a fixed-point combinator

This fixed-point combinator may be defined as y in,

x = f x ∧ y f = x

An expression for y may be derived using rules from the definition of a let expression. Firstly using the rule,

(∃x. E ∧ F) ⇐⇒ let x : E in F

gives,

let x = f x in y f = x

Also using,

x ∉ FV(E) ∧ x ∈ FV(F) → let x : G in E F = E (let x : G in F)

gives

y f = let x = f x in x

Then using the eta reduction rule,

f x = y ⇐⇒ f = λx.y

gives,

y = λf. let x = f x in x


6.3.2 Derivation of the Y combinator

Curry’s Y combinator may be readily obtained from the definition of y.[3] Starting with,

λf. let x = f x in x

A lambda abstraction does not support reference to the variable name in the applied expression, so x must be passed in as a parameter to x. We can think of this as replacing x by x x, but formally this is not correct. Instead, defining y by ∀z. y z = x gives,

λf. let y z = f (y z) in y z

The let expression may be regarded as the definition of the function y, where z is the parameter. Instantiating z as y in the call gives,

λf. let y z = f (y z) in y y

And because the parameter z is always bound to the function y,

λf. let y z = f (z z) in y y

Using the eta reduction rule,

f x = y ≡ f = λx.y

gives,

λf. let y = λz.f (z z) in y y

A let expression may be expressed as a lambda abstraction using,

n ∉ FV(E) → (let n = E in L ≡ (λn.L) E)

gives,

λf.(λy.y y) (λz.f (z z))

This is possibly the simplest implementation of a fixed point combinator in lambda calculus. However, one beta reduction gives the more symmetrical form of Curry’s Y combinator.

λf.(λz.f (z z)) (λz.f (z z))

See also translating between let and lambda expressions.

6.3.3 Other fixed-point combinators

In untyped lambda calculus fixed-point combinators are not especially rare. In fact there are infinitely many of them.[4]

In 2005 Mayer Goldberg showed that the set of fixed-point combinators of untyped lambda calculus is recursively enumerable.[5]

The Y combinator can be expressed in the SKI-calculus as


Y = S (K (S I I)) (S (S (K S) K) (K (S I I)))

The simplest fixed point combinator in the SK-calculus, found by John Tromp, is

Y' = S S K (S (K (S S (S (S S K)))) K)

which corresponds to the lambda expression

Y' = (λx. λy. x y x) (λy. λx. y (x y x))

The following fixed-point combinator is simpler than the Y combinator, and β-reduces into the Y combinator; it is sometimes cited as the Y combinator itself:

X = λf.(λx.x x) (λx.f (x x))

Another common fixed point combinator is the Turing fixed-point combinator (named after its discoverer, Alan Turing):

Θ = (λx. λy. (y (x x y))) (λx. λy. (y (x x y)))

It also has a simple call-by-value form:

Θᵥ = (λx. λy. (y (λz. x x y z))) (λx. λy. (y (λz. x x y z)))

The analog for mutual recursion is a polyvariadic fixed-point combinator,[6][7][8] which may be denoted Y*.

6.3.4 Strict fixed point combinator

The Z combinator will work in strict languages (that is, where applicative order is applied). The Z combinator has the next argument defined explicitly, preventing the expansion of Z g in the right-hand side of the definition:

Z g v = g (Z g) v

and in lambda calculus is an eta-expansion:

Z = λf.(λx.f (λv.((x x) v))) (λx.f (λv.((x x) v)))

6.3.5 Non-standard fixed-point combinators

In untyped lambda calculus there are terms that have the same Böhm tree as a fixed-point combinator, that is, they have the same infinite extension λx.x (x (x ... )). These are called non-standard fixed-point combinators. Any fixed-point combinator is also a non-standard one, but not all non-standard fixed-point combinators are fixed-point combinators because some of them fail to satisfy the equation that defines the “standard” ones. These strange combinators are called strictly non-standard fixed-point combinators; an example is the following combinator:

N = B M (B (B M) B)

where,

B = λx,y,z. x (y z)
M = λx. x x

The set of non-standard fixed-point combinators is not recursively enumerable.[5]


6.4 Implementation in other languages

Note that the Y combinator is a particular implementation of a fixed point combinator in lambda calculus. Its structure is determined by the limitations of lambda calculus. It is not necessary or helpful to use this structure in implementing the fixed point combinator in other languages.

Simple examples of fixed point combinators implemented in some programming paradigms are given below. For examples of implementations of the fixed point combinators in various languages see,

• Rosetta code - Y combinator

• Java code.

• C++ code.

6.4.1 Lazy functional implementation

In a language that supports lazy evaluation, like Haskell, it is possible to define a fixed-point combinator using the defining equation of the fixed-point combinator, which is conventionally named fix. The definition is given here, followed by some usage examples.

fix :: (a -> a) -> a
fix f = f (fix f)             -- Lambda lifted
-- alternative:
-- fix f = let x = f x in x   -- Lambda dropped

fix (\x -> 9)                 -- this evaluates to 9

factabs fact 0 = 1            -- factabs is F from the lambda calculus example
factabs fact x = x * fact (x-1)

(fix factabs) 5               -- evaluates to 120

6.4.2 Strict functional implementation

In a strict functional language the argument to f is expanded beforehand, yielding an infinite call sequence,

f (f ... (f (fix f)) ... ) x

This may be resolved by defining fix with an extra parameter.

let rec fix f x = f (fix f) x   (* note the extra x; here fix f = \x -> f (fix f) x *)

let factabs fact = function     (* factabs has an extra level of lambda abstraction *)
    0 -> 1
  | x -> x * fact (x-1)

let _ = (fix factabs) 5         (* evaluates to “120” *)

6.4.3 Imperative language implementation

This example is a slightly interpretive implementation of a fixed point combinator. A class is used to contain the fix function, called fixer. The function to be fixed is contained in a class that inherits from fixer. The fix function accesses the function to be fixed as a virtual function. As for the strict functional definition, fix is explicitly given an extra parameter x, which means that lazy evaluation is not needed.

template <typename R, typename D>
class fixer {
public:
    R fix(D x) { return f(x); }
private:
    virtual R f(D) = 0;
};

class fact : public fixer<long, long> {
    virtual long f(long x) {
        if (x == 0) { return 1; }
        return x * fix(x - 1);
    }
};

long result = fact().fix(5);

6.5 Typing

In polymorphic lambda calculus (System F) a polymorphic fixed-point combinator has type;

∀a.(a → a) → a


where a is a type variable. That is, fix takes a function which maps a → a and uses it to return a value of type a.

In the simply typed lambda calculus extended with recursive types, fixed-point operators can be written, but the type of a “useful” fixed-point operator (one whose application always returns) may be restricted.

In the simply typed lambda calculus, the fixed-point combinator Y cannot be assigned a type[9] because at some point it would deal with the self-application sub-term x x by the application rule:

Γ ⊢ x : t₁ → t₂    Γ ⊢ x : t₁
─────────────────────────────
        Γ ⊢ x x : t₂

where x has the infinite type t₁ = t₁ → t₂. No fixed-point combinator can in fact be typed; in those systems, any support for recursion must be explicitly added to the language.

6.5.1 Type for the Y combinator

In programming languages that support recursive types, it is possible to type the Y combinator by appropriately accounting for the recursion at the type level. The need to self-apply the variable x can be managed using a type (Rec a), which is defined so as to be isomorphic to (Rec a -> a).

For example, in the following Haskell code, we have In and out being the names of the two directions of the isomorphism, with types:[10]

In  :: (Rec a -> a) -> Rec a
out :: Rec a -> (Rec a -> a)

which lets us write:

newtype Rec a = In { out :: Rec a -> a }

y :: (a -> a) -> a
y = \f -> (\x -> f (out x x)) (In (\x -> f (out x x)))

Or equivalently in OCaml:

type 'a recc = In of ('a recc -> 'a)
let out (In x) = x
let y f = (fun x a -> f (out x x) a) (In (fun x a -> f (out x x) a))
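As a usage sketch (assuming the Haskell definitions of Rec and y above are in scope; fact is an illustrative name, not from the article), the typed combinator behaves like the untyped one:

fact :: Integer -> Integer
fact = y (\rec n -> if n == 0 then 1 else n * rec (n - 1))
-- fact 5 evaluates to 120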

6.6 General information

The function for which any input is a fixed point is called the identity function. Formally:

∀x. f x = x

Other functions have the special property that, after being applied once, further applications don't have any effect. More formally:

∀x. f (f x) = f x

Such functions are called idempotent. An example of such a function is the function that returns 0 for all even integers, and 1 for all odd integers.

Fixed-point combinators do not necessarily exist in more restrictive models of computation. For instance, they do not exist in simply typed lambda calculus.

The Y combinator allows recursion to be defined as a set of rewrite rules,[11] without requiring native recursion support in the language.[12]

The recursive join in relational databases implements a fixed point, by recursively adding records to a set until no more may be added.

In programming languages that support anonymous functions, fixed-point combinators allow the definition and use of anonymous recursive functions, i.e. without having to bind such functions to identifiers. In this setting, the use of fixed-point combinators is sometimes called anonymous recursion.[13][14]


6.7 See also

• Fixed-point iteration

• Anonymous function

• Lambda calculus

• Let expression

• Lambda lifting

6.8 Notes

[1] Peyton Jones, Simon L. (1987). The Implementation of Functional Programming Languages. Prentice Hall International.

[2] Selinger, Peter. “Lecture Notes on Lambda Calculus” (PDF). p. 6.

[3] http://math.stackexchange.com/questions/51246/can-someone-explain-the-y-combinator

[4] Bimbó, Katalin. Combinatory Logic: Pure, Applied and Typed. p. 48.

[5] Goldberg, 2005

[6] Poly-variadic fix-point combinators

[7] Polyvariadic Y in pure Haskell98, lang.haskell.cafe, October 28, 2003

[8] Fixed point combinator for mutually recursive functions?

[9] An Introduction to the Lambda Calculus

[10] Haskell mailing list thread on How to define Y combinator in Haskell, 15 Sep 2006

[11] Daniel P. Friedman, Matthias Felleisen (1986). “Chapter 9 - Lambda The Ultimate”. The Little Lisper. Science Research Associates. p. 179. “In the chapter we have derived a Y-combinator which allows us to write recursive functions of one argument without using define.”

[12] Mike Vanier. “The Y Combinator (Slight Return) or: How to Succeed at Recursion Without Really Recursing”. “More generally, Y gives us a way to get recursion in a programming language that supports first-class functions but that doesn't have recursion built in to it.”

[13] This terminology appears to be largely folklore, but it does appear in the following:

• Trey Nash, Accelerated C# 2008, Apress, 2007, ISBN 1-59059-873-3, pp. 462–463. Derived substantially from Wes Dyer's blog (see next item).

• Wes Dyer, Anonymous Recursion in C#, February 02, 2007, contains a substantially similar example found in the book above, but accompanied by more discussion.

[14] The If Works, Deriving the Y combinator, January 10th, 2008

6.9 References

• Werner Kluge, Abstract computing machines: a lambda calculus perspective, Springer, 2005, ISBN 3-540-21146-2, pp. 73–77

• Mayer Goldberg (2005), On the Recursive Enumerability of Fixed-Point Combinators, BRICS Report RS-05-1, University of Aarhus

• Matthias Felleisen. A Lecture on the Why of Y.


6.10 External links

• http://www.latrobe.edu.au/philosophy/phimvt/joy/j05cmp.html

• http://okmij.org/ftp/Computation/fixed-point-combinators.html

• “Fixed-point combinators in Javascript”

• http://www.cs.brown.edu/courses/cs173/2002/Lectures/2002-10-28-lc.pdf

• http://www.mactech.com/articles/mactech/Vol.07/07.05/LambdaCalculus/

• http://www.csse.monash.edu.au/~{}lloyd/tildeFP/Lambda/Examples/Y/ (executable)

• http://www.ece.uc.edu/~{}franco/C511/html/Scheme/ycomb.html

• an example and discussion of a perl implementation

• “A Lecture on the Why of Y”

• “A Use of the Y Combinator in Ruby”

• “Functional programming in Ada”

• “Y Combinator in Erlang”

• “The Y Combinator explained with JavaScript”

• “The Y Combinator (Slight Return)" (detailed derivation)

• “The Y Combinator in C#"

• Rosetta code - Y combinator


Chapter 7

Fold (higher-order function)

In functional programming, fold – also known variously as reduce, accumulate, aggregate, compress, or inject – refers to a family of higher-order functions that analyze a recursive data structure and, through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Typically, a fold is presented with a combining function, a top node of a data structure, and possibly some default values to be used under certain conditions. The fold then proceeds to combine elements of the data structure’s hierarchy, using the function in a systematic way.

Folds are in a sense dual to unfolds, which take a “seed” value and apply a function corecursively to decide how to progressively construct a corecursive data structure, whereas a fold recursively breaks that structure down, replacing it with the results of applying a combining function at each node on its terminal values and the recursive results (catamorphism, as opposed to the anamorphism of unfolds).

7.1 Folds as structural transformations

Folds can be regarded as consistently replacing the structural components of a data structure with functions and values. Lists, for example, are built up in many languages from two primitives: any list is either an empty list, commonly called nil ([]), or is constructed by prepending an element in front of another list, creating what is called a cons node ( Cons(X1,Cons(X2,Cons(...(Cons(Xn,nil))))) ), resulting from application of a cons function, written down as (:) (colon) in Haskell. One can view a fold on lists as replacing the nil at the end of the list with a specific value, and replacing each cons with a specific function. These replacements can be viewed as a diagram:
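For instance, a minimal Haskell sketch of this replacement (sumExample is an illustrative name): replacing each cons with (+) and nil with 0 turns the list [1,2,3] into the sum 1 + (2 + (3 + 0)).

-- foldr f z replaces each (:) with f and the final [] with z:
--   foldr f z (1 : (2 : (3 : [])))  ==  f 1 (f 2 (f 3 z))
sumExample :: Bool
sumExample = foldr (+) 0 [1, 2, 3] == 1 + (2 + (3 + 0))   -- True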

There’s another way to perform the structural transformation in a consistent manner, with the order of the two links of each node flipped when fed into the combining function:

These pictures illustrate right and left fold of a list visually. They also highlight the fact that foldr (:) [] is the identity function on lists (a shallow copy in Lisp parlance), as replacing cons with cons and nil with nil will not change the result. The left fold diagram suggests an easy way to reverse a list, foldl (flip (:)) []. Note that the parameters to cons must be flipped, because the element to add is now the right hand parameter of the combining function. Another easy result to see from this vantage-point is to write the higher-order map function in terms of foldr, by composing the function to act on the elements with cons, as:

map f = foldr ((:) . f) []

where the period (.) is an operator denoting function composition.

This way of looking at things provides a simple route to designing fold-like functions on other algebraic data structures, like various sorts of trees. One writes a function which recursively replaces the constructors of the datatype with provided functions, and any constant values of the type with provided values. Such a function is generally referred to as a catamorphism.
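A sketch of such a catamorphism for a simple binary tree in Haskell (the Tree type and foldTree are illustrative names, not from the article):

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Replace every Node with the supplied function and every Leaf with the supplied value.
foldTree :: (b -> a -> b -> b) -> b -> Tree a -> b
foldTree node leaf Leaf         = leaf
foldTree node leaf (Node l x r) = node (foldTree node leaf l) x (foldTree node leaf r)

-- e.g. summing a tree of numbers: foldTree (\l x r -> l + x + r) 0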

7.2 Folds on lists

The folding of the list [1,2,3,4,5] with the addition operator would result in 15, the sum of the elements of the list [1,2,3,4,5]. To a rough approximation, one can think of this fold as replacing the commas in the list with the + operation, giving 1 + 2 + 3 + 4 + 5.

In the example above, + is an associative operation, so the final result will be the same regardless of parenthesization, although the specific way in which it is calculated will be different. In the general case of non-associative binary functions, the order in which the elements are combined may influence the final result’s value. On lists, there are two obvious ways to carry this out: either by combining the first element with the result of recursively combining the rest (called a right fold), or by combining the result of recursively combining all elements but the last one, with the last element (called a left fold). This corresponds to a binary operator being either right-associative or left-associative, in Haskell's or Prolog's terminology. With a right fold, the sum would be parenthesized as 1 + (2 + (3 + (4 + 5))), whereas with a left fold it would be parenthesized as (((1 + 2) + 3) + 4) + 5.

In practice, it is convenient and natural to have an initial value which in the case of a right fold is used when one reaches the end of the list, and in the case of a left fold is what is initially combined with the first element of the list. In the example above, the value 0 (the additive identity) would be chosen as an initial value, giving 1 + (2 + (3 + (4 + (5 + 0)))) for the right fold, and ((((0 + 1) + 2) + 3) + 4) + 5 for the left fold.


7.2.1 Linear vs. tree-like folds

The use of an initial value is necessary when the combining function f is asymmetrical in its types, i.e. when the type of its result is different from the type of the list’s elements. Then an initial value must be used, with the same type as that of f's result, for a linear chain of applications to be possible. Whether it will be left- or right-oriented will be determined by the types expected of its arguments by the combining function – if it is the second argument that has to be of the same type as the result, then f could be seen as a binary operation that associates on the right, and vice versa.

When the function is symmetrical in its types and the result type is the same as the list elements’ type, the parentheses may be placed in arbitrary fashion, thus creating a tree of nested sub-expressions, e.g. ((1 + 2) + (3 + 4)) + 5. If the binary operation f is associative this value will be well-defined, i.e. same for any parenthesization, although the operational details of how it is calculated will be different. This can have significant impact on efficiency if f is non-strict.

Whereas linear folds are node-oriented and operate in a consistent manner for each node of a list, tree-like folds are whole-list oriented and operate in a consistent manner across groups of nodes.

7.2.2 Special folds for non-empty lists

One often wants to choose the identity element of the operation f as the initial value z. When no initial value seems appropriate, for example, when one wants to fold the function which computes the maximum of its two parameters over a non-empty list to get the maximum element of the list, there are variants of foldr and foldl which use the last and first element of the list respectively as the initial value. In Haskell and several other languages, these are called foldr1 and foldl1, the 1 making reference to the automatic provision of an initial element, and the fact that the lists they are applied to must have at least one element.

These folds use a type-symmetrical binary operation: the types of both its arguments, and its result, must be the same. Richard Bird in his 2010 book proposes[1] “a general fold function on non-empty lists” foldrn which transforms its last element, by applying an additional argument function to it, into a value of the result type before starting the folding itself, and is thus able to use a type-asymmetrical binary operation like the regular foldr to produce a result of type different from the list’s elements type.
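A minimal sketch of such a function in Haskell (the name foldrn and the argument order are assumptions; Bird's formulation may differ in detail): the extra function g converts the last element into the result type, after which the fold proceeds as usual.

foldrn :: (a -> b -> b) -> (a -> b) -> [a] -> b
foldrn f g [x]    = g x
foldrn f g (x:xs) = f x (foldrn f g xs)

-- e.g. foldrn (:) (\x -> [x, x]) [1,2,3] evaluates to [1,2,3,3]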

7.2.3 Implementation

Linear folds

Using Haskell as an example, foldl and foldr can be formulated in a few equations.

foldl :: (b -> a -> b) -> b -> [a] -> b
foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs

If the list is empty, the result is the initial value. If not, fold the tail of the list using as new initial value the result of applying f to the old initial value and the first element.

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)

If the list is empty, the result is the initial value z. If not, apply f to the first element and the result of folding the rest.

Tree-like folds

Lists can be folded over in a tree-like fashion, both for finite and for indefinitely defined lists:

foldt f z []     = z
foldt f _ [x]    = x
foldt f z xs     = foldt f z (pairs f xs)

foldi f z []     = z
foldi f z (x:xs) = f x (foldi f z (pairs f xs))

pairs f (x:y:t)  = f x y : pairs f t
pairs _ t        = t

In the case of the foldi function, to avoid its runaway evaluation on indefinitely defined lists, the function f must not always demand its second argument's value, at least not all of it, and/or not immediately (see example below).


Folds for non-empty lists

foldl1 f [x]      = x
foldl1 f (x:y:xs) = foldl1 f (f x y : xs)

foldr1 f [x]      = x
foldr1 f (x:xs)   = f x (foldr1 f xs)

foldt1 f [x]      = x
foldt1 f (x:y:xs) = foldt1 f (f x y : pairs f xs)

foldi1 f [x]      = x
foldi1 f (x:xs)   = f x (foldi1 f (pairs f xs))

7.2.4 Evaluation order considerations

In the presence of lazy, or non-strict evaluation, foldr will immediately return the application of f to the head of the list and the recursive case of folding over the rest of the list. Thus, if f is able to produce some part of its result without reference to the recursive case on its “right”, i.e. in its second argument, and the rest of the result is never demanded, then the recursion will stop (e.g. head == foldr (\a b -> a) (error “empty list”) ). This allows right folds to operate on infinite lists. By contrast, foldl will immediately call itself with new parameters until it reaches the end of the list. This tail recursion can be efficiently compiled as a loop, but can't deal with infinite lists at all — it will recurse forever in an infinite loop.

Having reached the end of the list, an expression is in effect built by foldl of nested left-deepening f-applications, which is then presented to the caller to be evaluated. Were the function f to refer to its second argument first here, and be able to produce some part of its result without reference to the recursive case (here, on its “left”, i.e. in its first argument), then the recursion would stop. This means that while foldr recurses “on the right” it allows for a lazy combining function to inspect the list’s elements from the left; and conversely, while foldl recurses “on the left” it allows for a lazy combining function to inspect the list’s elements from the right, if it so chooses (e.g. last == foldl (\a b -> b) (error “empty list”) ).

Reversing a list is also tail-recursive (it can be implemented using rev = foldl (\ys x -> x : ys) [] ). On finite lists, that means that left-fold and reverse can be composed to perform a right fold in a tail-recursive way (cf. 1+>(2+>(3+>0)) == ((0<+3)<+2)<+1 ), with a modification to the function f so it reverses the order of its arguments (i.e. foldr f z == foldl (flip f) z . foldl (flip (:)) [] ), tail-recursively building a representation of the expression that right-fold would build. The extraneous intermediate list structure can be eliminated with the continuation-passing technique, foldr f z xs == foldl (\k x -> k . f x) id xs z ; similarly, foldl f z xs == foldr (\x k -> k . flip f x) id xs z ( flip is only needed in languages like Haskell with its flipped order of arguments to the combining function of foldl, unlike e.g. in Scheme where the same order of arguments is used for combining functions to both foldl and foldr ).

Another technical point to be aware of in the case of left folds using lazy evaluation is that the new initial parameter is not being evaluated before the recursive call is made. This can lead to stack overflows when one reaches the end of the list and tries to evaluate the resulting potentially gigantic expression. For this reason, such languages often provide a stricter variant of left folding which forces the evaluation of the initial parameter before making the recursive call. In Haskell this is the foldl' (note the apostrophe, pronounced 'prime') function in the Data.List library (one needs to be aware of the fact though that forcing a value built with a lazy data constructor won't force its constituents automatically by itself). Combined with tail recursion, such folds approach the efficiency of loops, ensuring constant space operation, when lazy evaluation of the final result is impossible or undesirable.
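A small sketch contrasting the two left folds in Haskell (assuming GHC; the exact point at which the lazy version overflows depends on the stack settings):

import Data.List (foldl')

lazySum, strictSum :: Integer
lazySum   = foldl  (+) 0 [1 .. 1000000]   -- builds one million nested thunks before evaluating
strictSum = foldl' (+) 0 [1 .. 1000000]   -- forces the accumulator at each step; constant space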

7.2.5 Examples

Using a Haskell interpreter, we can show the structural transformation which fold functions perform by constructing a string as follows:

λ> putStrLn $ foldr (\x y -> concat ["(",x,"+",y,")"]) "0" (map show [1..13])
(1+(2+(3+(4+(5+(6+(7+(8+(9+(10+(11+(12+(13+0)))))))))))))
λ> putStrLn $ foldl (\x y -> concat ["(",x,"+",y,")"]) "0" (map show [1..13])
(((((((((((((0+1)+2)+3)+4)+5)+6)+7)+8)+9)+10)+11)+12)+13)
λ> putStrLn $ foldt (\x y -> concat ["(",x,"+",y,")"]) "0" (map show [1..13])
((((1+2)+(3+4))+((5+6)+(7+8)))+(((9+10)+(11+12))+13))
λ> putStrLn $ foldi (\x y -> concat ["(",x,"+",y,")"]) "0" (map show [1..13])
(1+((2+3)+(((4+5)+(6+7))+((((8+9)+(10+11))+(12+13))+0))))

Infinite tree-like folding is demonstrated e.g. in recursive primes production by unbounded sieve of Eratosthenes in Haskell:

primes = 2 : _Y ((3 :) . minus [5,7..]
                       . foldi (\(x:xs) ys -> x : union xs ys) []
                       . map (\p -> [p*p, p*p+2*p..]))
_Y g = g (_Y g)    -- = g . g . g . g . ...

where the function union operates on ordered lists in a local manner to efficiently produce their set union, and minus their set difference.

For finite lists, e.g. merge sort (and its duplicates-removing variety, nubsort) could be easily defined using tree-like folding as

mergesort xs = foldt merge [] [[x] | x <- xs]
nubsort   xs = foldt union [] [[x] | x <- xs]

with the function merge a duplicates-preserving variant of union.

Functions head and last could have been defined through folding as

head = foldr (\a b -> a) (error "head: Empty list")
last = foldl (\a b -> b) (error "last: Empty list")

7.3 Folds in various languages

7.4 Universality

Fold is a polymorphic function. For any g having a definition

g []     = v
g (x:xs) = f x (g xs)

then g can be expressed as[4]

g = foldr f v

We can also implement a fixed point combinator using fold,[5] proving that iterations can be reduced to folds:

fix f = foldr (\_ -> f) undefined (repeat undefined)

7.5 See also

• Aggregate function

• Iterated binary operation

• Catamorphism, a generalization of fold

• Homomorphism

• Map (higher-order function)

• Prefix sum

• Recursive data type

• Structural recursion

7.6 References

[1] Richard Bird, “Pearls of Functional Algorithm Design”, Cambridge University Press 2010, ISBN 978-0-521-51338-8, p. 42

[2] For reference functools.reduce: import functools; for reference reduce: from functools import reduce

[3] Odersky, Martin (2008-01-05). “Re: Blog: My verdict on the Scala language”. Newsgroup: comp.scala.lang. Retrieved 14 October 2013.

[4] Hutton, Graham. “A tutorial on the universality and expressiveness of fold” (PDF). Journal of Functional Programming 9 (4): 355–372. Retrieved March 26, 2009.


[5] Pope, Bernie. “Getting a Fix from the Right Fold” (PDF). The Monad.Reader (6): 5–16. Retrieved May 1, 2011.

7.7 External links

• “Higher order functions — map, fold and filter”

• “Unit 6: The Higher-order fold Functions”

• “Fold”

• “Constructing List Homomorphism from Left and Right Folds”

• “The magic foldr”


Chapter 8

Recursion

For other uses, see Recursion (disambiguation).

Recursion is the process of repeating items in a self-similar way. For instance, when the surfaces of two mirrors are exactly parallel with each other, the nested images that occur are a form of infinite recursion. The term has a variety of meanings specific to a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, in which it refers to a method of defining functions in which the function being defined is applied within its own definition. Specifically, this defines an infinite number of instances (function values), using a finite expression that for some instances may refer to other instances, but in such a way that no loop or infinite chain of references can occur. The term is also used more generally to describe a process of repeating objects in a self-similar way.

8.1 Formal definitions

In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties:

1. A simple base case (or cases)—a terminating scenario that does not use recursion to produce an answer

2. A set of rules that reduce all other cases toward the base case

For example, the following is a recursive definition of a person’s ancestors:

• One’s parents are one’s ancestors (base case).

• The ancestors of one’s ancestors are also one’s ancestors (recursion step).

The Fibonacci sequence is a classic example of recursion:

Fib(0) = 0 as base case 1,
Fib(1) = 1 as base case 2,
For all integers n > 1, Fib(n) := Fib(n − 1) + Fib(n − 2).

Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: 0 is a natural number, and each natural number has a successor, which is also a natural number. By this base case and recursive rule, one can generate the set of all natural numbers.

Recursively defined mathematical objects include functions, sets, and especially fractals.

There are various more tongue-in-cheek “definitions” of recursion; see recursive humor.


8.2 Informal definition

Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'.

To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules. The running of a procedure involves actually following the rules and performing the steps. An analogy: a procedure is like a written recipe; running a procedure is like actually preparing the meal.

Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth. However, a recursive procedure is where (at least) one of its steps calls for a new instance of the very same procedure, like a sourdough recipe calling for some dough left over from the last time the same recipe was made. This of course immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete, like a sourdough recipe that also tells you how to get some starter dough in case you've never made it before. Even if properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old (partially executed) invocation of the procedure; this requires some administration of how far various simultaneous instances of the procedures have progressed. For this reason recursive definitions are very rare in everyday situations.

An example could be the following procedure to find a way through a maze. Proceed forward until reaching either an exit or a branching point (a dead end is considered a branching point with 0 branches). If the point reached is an exit, terminate. Otherwise try each branch in turn, using the procedure recursively; if every trial fails by reaching only dead ends, return on the path that led to this branching point and report failure. Whether this actually defines a terminating procedure depends on the nature of the maze: it must not allow loops. In any case, executing the procedure requires carefully recording all currently explored branching points, and which of their branches have already been exhaustively tried.

8.3 In language

Linguist Noam Chomsky, among many others, has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length (beyond practical constraints such as the time available to utter one), can be explained as the consequence of recursion in natural language.[1][2] This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous, in which the sentence witches are dangerous occurs in the larger one. So a sentence can be defined recursively (very roughly) as something with a structure that includes a noun phrase, a verb, and optionally another sentence. This is really just a special case of the mathematical definition of recursion.

This provides a way of understanding the creativity of language—the unbounded number of grammatical sentences—because it immediately predicts that sentences can be of arbitrary length: Dorothy thinks that Toto suspects that Tin Man said that.... Of course, there are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another. Over the years, languages in general have proved amenable to this kind of analysis.

Recently, however, the generally accepted idea that recursion is an essential property of human language has been challenged by Daniel Everett on the basis of his claims about the Pirahã language. Andrew Nevins, David Pesetsky and Cilene Rodrigues are among many who have argued against this.[3] Literary self-reference can in any case be argued to be different in kind from mathematical or logical recursion.[4]

Recursion plays a crucial role not only in syntax, but also in natural language semantics. The word and, for example, can be construed as a function that can apply to sentence meanings to create new sentences, and likewise for noun phrase meanings, verb phrase meanings, and others. It can also apply to intransitive verbs, transitive verbs, or ditransitive verbs. A single denotation for it can be provided that is suitably flexible, and it is typically defined so that it can take any of these different types of meanings as arguments. This can be done by defining it for a simple case in which it combines sentences, and then defining the other cases recursively in terms of the simple one.[5]


8.3.1 Recursive humor

Recursion is sometimes used humorously in computer science, programming, philosophy, or mathematics textbooks, generally by giving a circular definition or self-reference, in which the putative recursive step does not get closer to a base case, but instead leads to an infinite regress. It is not unusual for such books to include a joke entry in their glossary along the lines of:

Recursion, see Recursion.[6]

A variation is found on page 269 in the index of some editions of Brian Kernighan and Dennis Ritchie's book The C Programming Language; the index entry recursively references itself (“recursion 86, 139, 141, 182, 202, 269”). The earliest version of this joke was in “Software Tools” by Kernighan and Plauger, and also appears in “The UNIX Programming Environment” by Kernighan and Pike. It did not appear in the first edition of The C Programming Language.

Another joke is that “To understand recursion, you must understand recursion.”[6] In the English-language version of the Google web search engine, when a search for “recursion” is made, the site suggests “Did you mean: recursion.” An alternative form is the following, from Andrew Plotkin: “If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is.”

Recursive acronyms can also be examples of recursive humor. PHP, for example, stands for “PHP Hypertext Preprocessor”, WINE stands for “Wine Is Not an Emulator”, and GNU stands for “GNU's Not Unix”.

8.4 In mathematics

8.4.1 Recursively defined sets

Main article: Recursive definition

Example: the natural numbers

See also: Closure (mathematics)

The canonical example of a recursively defined set is given by the natural numbers:

0 is in N
if n is in N, then n + 1 is in N
The set of natural numbers is the smallest set satisfying the previous two properties.
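This recursive definition can be transcribed directly as a data type, for example in Haskell (Nat, Zero and Succ are illustrative names):

data Nat = Zero | Succ Nat    -- 0 is in N; if n is in N, then n + 1 is in N

-- e.g. Succ (Succ (Succ Zero)) represents the number 3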

Example: The set of true reachable propositions

Another interesting example is the set of all “true reachable” propositions in an axiomatic system.

• If a proposition is an axiom, it is a true reachable proposition.

• If a proposition can be obtained from true reachable propositions by means of inference rules, it is a true reachable proposition.

• The set of true reachable propositions is the smallest set of propositions satisfying these conditions.

This set is called 'true reachable propositions' because in non-constructive approaches to the foundations of mathematics, the set of true propositions may be larger than the set recursively constructed from the axioms and rules of inference. See also Gödel's incompleteness theorems.


8.4.2 Finite subdivision rules

Main article: Finite subdivision rule

Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard 'middle thirds' technique for creating the Cantor set is a subdivision rule, as is barycentric subdivision.

8.4.3 Functional recursion

A function may be partly defined in terms of itself. A familiar example is the Fibonacci number sequence: F(n) = F(n − 1) + F(n − 2). For such a definition to be useful, it must lead to non-recursively defined values, in this case F(0) = 0 and F(1) = 1.

A famous recursive function is the Ackermann function, which—unlike the Fibonacci sequence—cannot easily be expressed without recursion.
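In Haskell, for example, the Fibonacci definition can be transcribed almost literally (fib is an illustrative name):

fib :: Integer -> Integer
fib 0 = 0                          -- non-recursive base values
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)  -- the recursive rule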

8.4.4 Proofs involving recursive definitions

Applying the standard technique of proof by cases to recursively defined sets or functions, as in the preceding sections, yields structural induction, a powerful generalization of mathematical induction widely used to derive proofs in mathematical logic and computer science.

8.4.5 Recursive optimization

Dynamic programming is an approach to optimization that restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is the Bellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step).

8.5 In computer science

Main article: Recursion (computer science)

A common method of simplification is to divide a problem into subproblems of the same type. As a computer programming technique, this is called divide and conquer and is key to the design of many important algorithms. Divide and conquer serves as a top-down approach to problem solving, where problems are solved by solving smaller and smaller instances. A contrary approach is dynamic programming. This approach serves as a bottom-up approach, where problems are solved by solving larger and larger instances, until the desired size is reached.

A classic example of recursion is the definition of the factorial function, given here in C code:

unsigned int factorial(unsigned int n)
{
    if (n == 0) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

The function calls itself recursively on a smaller version of the input (n - 1) and multiplies the result of the recursive call by n, until reaching the base case, analogously to the mathematical definition of factorial.

Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem. One example application of recursion is in parsers for programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program.

Recurrence relations are equations to define one or more sequences recursively. Some specific kinds of recurrence relation can be “solved” to obtain a non-recursive definition.


Use of recursion in an algorithm has both advantages and disadvantages. The main advantage is usually simplicity. The main disadvantage is often that the algorithm may require large amounts of memory if the depth of the recursion is very large.

8.6 In art

The Russian Doll or Matryoshka Doll is a physical artistic example of the recursive concept.

8.7 The recursion theorem

In set theory, this is a theorem guaranteeing that recursively defined functions exist. Given a set X, an element a of X and a function f : X → X, the theorem states that there is a unique function F : N → X (where N denotes the set of natural numbers including zero) such that

F (0) = a

F (n+ 1) = f(F (n))

for any natural number n.
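The function F whose existence the theorem guarantees can be sketched in Haskell (illustrative names; Integer stands in for the natural numbers and negative inputs are ignored):

-- recDef a f n computes F(n), given F(0) = a and F(n + 1) = f(F(n)).
recDef :: x -> (x -> x) -> Integer -> x
recDef a f 0 = a
recDef a f n = f (recDef a f (n - 1))

-- e.g. recDef 1 (*2) k computes 2^k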

8.7.1 Proof of uniqueness

Take two functions F : N → X and G : N → X such that:

F (0) = a

G(0) = a

F (n+ 1) = f(F (n))

G(n+ 1) = f(G(n))

where a is an element of X.

It can be proved by mathematical induction that F(n) = G(n) for all natural numbers n:

Base Case: F (0) = a = G(0) so the equality holds for n = 0 .

Inductive Step: Suppose F(k) = G(k) for some k ∈ N. Then F(k + 1) = f(F(k)) = f(G(k)) = G(k + 1).

Hence F(k) = G(k) implies F(k+1) = G(k+1).

By induction, F (n) = G(n) for all n ∈ N .

8.7.2 Examples

Some common recurrence relations are:

• Golden Ratio: ϕ = 1 + (1/ϕ) = 1 + (1/(1 + (1/(1 + 1/...))))

• Factorial: n! = n(n− 1)! = n(n− 1) · · · 1

• Fibonacci numbers: f(n) = f(n− 1) + f(n− 2)


• Catalan numbers: C0 = 1 , Cn+1 = (4n+ 2)Cn/(n+ 2)

• Computing compound interest

• The Tower of Hanoi

• Ackermann function

8.8 See also

• Corecursion

• Course-of-values recursion

• Digital infinity

• Fixed point combinator

• Infinite loop

• Infinitism

• Iterated function

• Mise en abyme

• Reentrant (subroutine)

• Self-reference

• Strange loop

• Tail recursion

• Tupper’s self-referential formula

• Turtles all the way down

8.9 Bibliography

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

• Johnsonbaugh, Richard (2004). Discrete Mathematics. Prentice Hall. ISBN 0-13-117686-2.

• Hofstadter, Douglas (1999). Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books. ISBN 0-465-02656-7.

• Shoenfield, Joseph R. (2000). Recursion Theory. A K Peters Ltd. ISBN 1-56881-149-7.

• Causey, Robert L. (2001). Logic, Sets, and Recursion. Jones & Bartlett. ISBN 0-7637-1695-2.

• Cori, Rene; Lascar, Daniel; Pelletier, Donald H. (2001). Recursion Theory, Gödel's Theorems, Set Theory, Model Theory. Oxford University Press. ISBN 0-19-850050-5.

• Barwise, Jon; Moss, Lawrence S. (1996). Vicious Circles. Stanford Univ Center for the Study of Language and Information. ISBN 0-19-850050-5. - offers a treatment of corecursion.

• Rosen, Kenneth H. (2002). Discrete Mathematics and Its Applications. McGraw-Hill College. ISBN 0-07-293033-0.

• Cormen, Thomas H.; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2001). Introduction to Algorithms. MIT Press. ISBN 0-262-03293-7.


• Kernighan, B.; Ritchie, D. (1988). The C Programming Language. Prentice Hall. ISBN 0-13-110362-8.

• Stokey, Nancy; Robert Lucas; Edward Prescott (1989). Recursive Methods in Economic Dynamics. Harvard University Press. ISBN 0-674-75096-9.

• Hungerford (1980). Algebra. Springer. ISBN 978-0-387-90518-1., first chapter on set theory.

8.10 References

[1] Pinker, Steven (1994). The Language Instinct. William Morrow.

[2] Pinker, Steven; Jackendoff, Ray (2005). “The faculty of language: What's so special about it?". Cognition 95 (2): 201–236. doi:10.1016/j.cognition.2004.08.004. PMID 15694646.

[3] Nevins, Andrew; Pesetsky, David; Rodrigues, Cilene (2009). “Evidence and argumentation: A reply to Everett (2009)" (PDF). Language 85 (3): 671–681. doi:10.1353/lan.0.0140.

[4] Drucker, Thomas (4 January 2008). Perspectives on the History of Mathematical Logic. Springer Science & Business Media. p. 110. ISBN 978-0-8176-4768-1.

[5] Barbara Partee and Mats Rooth. 1983. In Rainer Bäuerle et al., Meaning, Use, and Interpretation of Language. Reprinted in Paul Portner and Barbara Partee, eds. 2002. Formal Semantics: The Essential Readings. Blackwell.

[6] Hunter, David (2011). Essentials of Discrete Mathematics. Jones and Bartlett. p. 494.

8.11 External links

• Recursion - tutorial by Alan Gauld

• A Primer on Recursion - contains pointers to recursion in Formal Languages, Linguistics, Math and Computer Science

• Zip Files All The Way Down

• Nevins, Andrew and David Pesetsky and Cilene Rodrigues. Evidence and Argumentation: A Reply to Everett (2009). Language 85.3: 671–681 (2009)


A visual form of recursion known as the Droste effect. The woman in this image holds an object that contains a smaller image of her holding an identical object, which in turn contains a smaller image of herself holding an identical object, and so forth. Advertisement for Droste cocoa, c. 1900


Ouroboros, an ancient symbol depicting a serpent or dragon eating its own tail.


Recently refreshed sourdough, bubbling through fermentation: the recipe calls for some sourdough left over from the last time the same recipe was made.


The Sierpinski triangle—a confined recursion of triangles that form a fractal


Chapter 9

Recursion (computer science)

This article is about recursive approaches to solving problems. For recursion in computer science acronyms, see Recursive acronym#Computer-related examples.

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).[1] The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[2]

“The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.”[3]

Most computer programming languages support recursion by allowing a function to call itself within the program text. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. Computability theory proves that these recursive-only languages are Turing complete; they are as computationally powerful as Turing complete imperative languages, meaning they can solve the same kinds of problems as imperative languages even without iterative control structures such as “while” and “for”.

9.1 Recursive functions and algorithms

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the “terminating case”.

The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances—for example, some system and server processes—are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by co-recursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say “compute the nth term (nth partial sum)".
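A sketch of this 'extra parameter as stopping criterion' idea in Haskell (eApprox and the use of Double are assumptions, not from the text): the parameter k says how many terms of the series to add, which gives the recursion its base case.

-- eApprox k sums the first k + 1 terms 1/0! + 1/1! + ... + 1/k!.
eApprox :: Integer -> Double
eApprox 0 = 1
eApprox k = eApprox (k - 1) + 1 / fromIntegral (product [1 .. k])

-- eApprox 10 is approximately 2.71828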


Tree created using the Logo programming language and relying heavily on recursion

9.2 Recursive data types

Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

Further information: Algebraic data type


9.2.1 Inductively defined data

Main article: Recursive data type

An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.

Another example of an inductive definition is the natural numbers (or positive integers):

A natural number is either 1 or n+1, where n is a natural number.

Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus-Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

<expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>)

This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third alternatives, the grammar permits arbitrarily complex arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.

9.2.2 Coinductively defined data and corecursion

Main articles: Coinduction and Corecursion

A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might look like this:

A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via the accessor functions head and tail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.

Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program’s output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program.

9.3 Types of recursion

9.3.1 Single recursion and multiple recursion

Recursion that only contains a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal,


such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search, or computing the Fibonacci sequence.

Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack.

Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively is multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, tracking at each step two successive values – see corecursion: examples. A more sophisticated example is using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.
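A sketch of this conversion in C (the function names are illustrative):

/* Multiple recursion: each call makes two further calls, taking exponential time. */
unsigned long fib_naive(unsigned int n) {
    return (n < 2) ? n : fib_naive(n - 1) + fib_naive(n - 2);
}

/* Single (tail) recursion: the two most recent values are passed as parameters. */
static unsigned long fib_step(unsigned int n, unsigned long prev, unsigned long curr) {
    return (n == 0) ? prev : fib_step(n - 1, curr, prev + curr);
}

unsigned long fib(unsigned int n) {
    return fib_step(n, 0, 1);   /* fib(0) = 0, fib(1) = 1, ... */
}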

9.3.2 Indirect recursion

Main article: Mutual recursion

Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.

Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, then from the point of view of f alone, f is indirectly recursing, from the point of view of g alone, g is indirectly recursing, and from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions.
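A minimal C sketch of mutual recursion (a standard even/odd example, not taken from the text):

int is_odd(unsigned int n);   /* forward declaration, since the two functions call each other */

int is_even(unsigned int n) {
    return (n == 0) ? 1 : is_odd(n - 1);    /* is_even calls is_odd ... */
}

int is_odd(unsigned int n) {
    return (n == 0) ? 0 : is_even(n - 1);   /* ... and is_odd calls is_even */
}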

9.3.3 Anonymous recursion

Main article: Anonymous recursion

Recursion is usually done by explicitly calling a function by name. However, recursion can also be done by implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.

9.3.4 Structural versus generative recursion

See also: Structural recursion

Some authors classify recursion as either “structural” or “generative”. The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:

[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.[4]

Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.


Generative recursion is the alternative:

Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How To Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton’s method, fractals, and adaptive integration.[5]

This distinction is important in proving termination of a function.

• All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.

• Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions – each step generates the new data, such as successive approximation in Newton’s method (sketched in the example after this list) – and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.

• In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.

• By contrast, generative recursion is when there is not such an obvious loop variant, and termination depends on a function, such as “error of approximation”, that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.
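A small C sketch of generative recursion in this sense, using Newton’s method for square roots (the function names, starting guess, and tolerance are illustrative assumptions, not taken from the text):

#include <math.h>

/* Each call generates a new approximation rather than consuming part of its input;
   termination depends on the error shrinking below eps, which has to be argued
   separately (here it holds for x >= 0). */
double newton_sqrt_step(double x, double guess, double eps) {
    double next = (guess + x / guess) / 2.0;   /* generate new data */
    if (fabs(next - guess) < eps)
        return next;                           /* stopping condition, not a structural base case */
    return newton_sqrt_step(x, next, eps);     /* recur on the newly generated value */
}

double newton_sqrt(double x) {
    return newton_sqrt_step(x, x > 1.0 ? x : 1.0, 1e-12);
}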

9.4 Recursive programs

9.4.1 Recursive procedures

Factorial

A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:

fact(n) = 1                   if n = 0
fact(n) = n · fact(n − 1)     if n > 0

The function can also be written as a recurrence relation:

b_n = n · b_(n−1)
b_0 = 1

Evaluating this recurrence relation step by step demonstrates the computation that a direct recursive implementation would perform. The factorial function can also be described without using recursion, by making use of the typical looping constructs found in imperative programming languages (both versions are sketched at the end of this subsection). The imperative, loop-based computation is equivalent to the following mathematical definition using an accumulator variable t:

fact(n) = fact_acc(n, 1)

fact_acc(n, t) = t                          if n = 0
fact_acc(n, t) = fact_acc(n − 1, n · t)     if n > 0

The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.
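For concreteness, the three formulations might be rendered in C as follows (a sketch; the names fact, fact_loop, and fact_acc are illustrative, and unsigned int overflows for n greater than 12):

/* Recursive version, following the definition of fact(n) directly. */
unsigned int fact(unsigned int n) {
    if (n == 0)
        return 1;                 /* base case */
    return n * fact(n - 1);       /* recursive case */
}

/* Iterative version using a loop and an accumulator variable t. */
unsigned int fact_loop(unsigned int n) {
    unsigned int t = 1;
    for (unsigned int i = n; i > 0; --i)
        t *= i;
    return t;
}

/* Accumulator (tail-recursive) version, following fact_acc(n, t). */
static unsigned int fact_acc(unsigned int n, unsigned int t) {
    if (n == 0)
        return t;                       /* base case */
    return fact_acc(n - 1, n * t);      /* tail call: iteration implemented recursively */
}

unsigned int fact_with_acc(unsigned int n) {
    return fact_acc(n, 1);
}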


Greatest common divisor

The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.

Function definition:

gcd(x, y) = x                            if y = 0
gcd(x, y) = gcd(y, remainder(x, y))      if y > 0

Recurrence relation for greatest common divisor, where x % y expresses the remainder of x/y:

gcd(x, y) = gcd(y, x % y)   if y ≠ 0

gcd(x, 0) = x

The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and a trace of the computation shows the steps of evaluation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.

The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.
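A sketch of the explicitly iterative version in C (the function name is illustrative):

unsigned int gcd_iterative(unsigned int x, unsigned int y) {
    while (y != 0) {
        unsigned int t = x % y;   /* temporary variable holding the remainder */
        x = y;                    /* the state lives entirely in x and y */
        y = t;
    }
    return x;
}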

Towers of Hanoi


Main article: Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[6][7] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?

Function definition:

hanoi(n) = 1                        if n = 1
hanoi(n) = 2 · hanoi(n − 1) + 1     if n > 1


Recurrence relation for hanoi:

h_n = 2 · h_(n−1) + 1
h_1 = 1

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.[8]

Example implementation:
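One possible C implementation of a solver that prints the moves (a sketch; the peg labels and the function signature are illustrative):

#include <stdio.h>

/* Move n disks from peg 'from' to peg 'to', using peg 'via' as intermediate storage. */
void hanoi(unsigned int n, char from, char to, char via) {
    if (n == 0)
        return;                                   /* base case: nothing to move */
    hanoi(n - 1, from, via, to);                  /* move the n-1 smaller disks out of the way */
    printf("move disk %u from %c to %c\n", n, from, to);
    hanoi(n - 1, via, to, from);                  /* move them onto the target peg */
}

/* Calling hanoi(n, 'A', 'C', 'B') prints the h_n = 2^n - 1 moves given by the explicit formula. */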

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array’s size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

int binary_search(int *data, int toFind, int start, int end);   /* forward declaration */

/* Call binary_search with proper initial conditions.
   INPUT:  data is an array of integers SORTED in ASCENDING order,
           toFind is the integer to search for,
           count is the total number of elements in the array
   OUTPUT: result of binary_search */
int search(int *data, int toFind, int count) {
    // Start = 0 (beginning index)
    // End = count - 1 (top index)
    return binary_search(data, toFind, 0, count - 1);
}

/* Binary Search Algorithm.
   INPUT:  data is an array of integers SORTED in ASCENDING order,
           toFind is the integer to search for,
           start is the minimum array index,
           end is the maximum array index
   OUTPUT: position of the integer toFind within array data, -1 if not found */
int binary_search(int *data, int toFind, int start, int end) {
    // Get the midpoint.
    int mid = start + (end - start) / 2;   // Integer division

    // Stop condition.
    if (start > end)
        return -1;
    else if (data[mid] == toFind)          // Found?
        return mid;
    else if (data[mid] > toFind)           // Data is greater than toFind, search lower half
        return binary_search(data, toFind, start, mid - 1);
    else                                   // Data is less than toFind, search upper half
        return binary_search(data, toFind, mid + 1, end);
}

9.4.2 Recursive data structures (structural recursion)

Main article: Recursive data type

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.

“Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms.”[9]

The examples in this section illustrate what is known as “structural recursion”. This term refers to the fact that the recursive procedures are acting on data that is defined recursively.

As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function’s body consume some immediate piece of a given compound value.[5]


Linked lists

Main article: Linked list

Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The “next” element of struct node is a pointer to another struct node, effectively creating a list type.

struct node {
    int data;            // some integer data
    struct node *next;   // pointer to another struct node
};

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

void list_print(struct node *list)
{
    if (list != NULL)                  // base case
    {
        printf("%d ", list->data);     // print integer data followed by a space
        list_print(list->next);        // recursive call on the next node
    }
}

Binary trees

Main article: Binary tree

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).

struct node {
    int data;             // some integer data
    struct node *left;    // pointer to the left subtree
    struct node *right;   // pointer to the right subtree
};

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

// Test if tree_node contains i; return 1 if so, 0 if not.
int tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return 0;   // base case
    else if (tree_node->data == i)
        return 1;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

At most two recursive calls will be made for any given call to tree_contains as defined above.

// Inorder traversal:
void tree_print(struct node *tree_node) {
    if (tree_node != NULL) {                    // base case
        tree_print(tree_node->left);            // go left
        printf("%d ", tree_node->data);         // print the integer followed by a space
        tree_print(tree_node->right);           // go right
    }
}

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree, in which the data elements of each node are in order.

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, therefore the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem.

import java.io.*;

public class FileSystem {

    public static void main(String[] args) {
        traverse();
    }

    /**
     * Obtains the filesystem roots
     * Proceeds with the recursive filesystem traversal
     */
    private static void traverse() {
        File[] fs = File.listRoots();
        for (int i = 0; i < fs.length; i++) {
            if (fs[i].isDirectory() && fs[i].canRead()) {
                rtraverse(fs[i]);
            }
        }
    }

    /**
     * Recursively traverse a given directory
     *
     * @param fd indicates the starting point of traversal
     */
    private static void rtraverse(File fd) {
        File[] fss = fd.listFiles();
        for (int i = 0; i < fss.length; i++) {
            System.out.println(fss[i]);
            if (fss[i].isDirectory() && fss[i].canRead()) {
                rtraverse(fss[i]);
            }
        }
    }
}


This code blends the lines, at least somewhat, between recursion and iteration. It is, essentially, a recursive implementation, which is the best way to traverse a filesystem. It is also an example of direct and indirect recursion. The method “rtraverse” is purely a direct example; the method “traverse” is the indirect one, which calls “rtraverse”. This example needs no “base case” scenario because there will always be some fixed number of files or directories in a given filesystem.

9.5 Implementation issues

In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:

• Wrapper function (at top)

• Short-circuiting the base case, aka “Arm’s-length recursion” (at bottom)

• Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough

On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm’s-length recursion is a special case of this.

9.5.1 Wrapper function

A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.

Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as “level of recursion” or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, the auxiliary function is instead a separate function, if possible private (as it is not called directly), and information is shared with the wrapper function by using pass-by-reference.
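A minimal C sketch of the pattern (the node type is the one from the linked-list section above; the function names and the auxiliary accumulator are illustrative):

struct node { int data; struct node *next; };   /* as in the linked-list example above */

/* Auxiliary function that actually recurses; acc counts the nodes seen so far. */
static unsigned int count_nodes_do(const struct node *list, unsigned int acc) {
    if (list == NULL)
        return acc;                                 /* base case */
    return count_nodes_do(list->next, acc + 1);     /* recursive case */
}

/* Wrapper: called directly, does not recurse itself, and hides the auxiliary parameter. */
unsigned int count_nodes(const struct node *list) {
    return count_nodes_do(list, 0);
}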

9.5.2 Short-circuiting the base case

Short-circuiting the base case, also known as arm’s-length recursion, consists of checking the base case before making a recursive call – i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short circuit, and may miss 0; this can be mitigated by a wrapper function.

Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.

Conceptually, short-circuiting can be considered to either have the same base case and recursive step, only checking the base case before the recursion, or it can be considered to have a different base case (one step removed from the standard base case) and a more complex recursive step, namely “check valid then recurse”, as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.


Depth-first search

A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see the binary trees section for the standard recursive discussion.

The standard recursive algorithm for a DFS is:

• base case: If current node is Null, return false

• recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children

In short-circuiting, this is instead:

• check value of current node, return true if match,

• otherwise, on children, if not Null, then recurse.

In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).

In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.

In C, the standard recursive algorithm may be implemented as:

#include <stdbool.h>   /* for bool, true, false */

bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;   // base case
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

The short-circuited algorithm may be implemented as:

// Wrapper function to handle empty tree
bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;                            // empty tree
    else
        return tree_contains_do(tree_node, i);   // call auxiliary function
}

// Assumes tree_node != NULL
bool tree_contains_do(struct node *tree_node, int i) {
    if (tree_node->data == i)
        return true;   // found
    else               // recurse
        return (tree_node->left  && tree_contains_do(tree_node->left,  i)) ||
               (tree_node->right && tree_contains_do(tree_node->right, i));
}

Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is only made if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a bool, so the overall expression evaluates to a bool. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.

9.5.3 Hybrid algorithm

Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.
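A sketch of the idea in C (the cutoff value, the function names, and the caller-supplied tmp buffer are illustrative choices):

#include <string.h>

#define CUTOFF 16   /* threshold chosen for illustration only */

static void insertion_sort(int a[], int lo, int hi) {            /* sorts a[lo..hi] */
    for (int i = lo + 1; i <= hi; i++) {
        int key = a[i], j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

static void merge(int a[], int tmp[], int lo, int mid, int hi) {
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo + 1) * sizeof(int));
}

void hybrid_merge_sort(int a[], int tmp[], int lo, int hi) {
    if (hi - lo < CUTOFF) {                   /* small input: switch algorithms */
        insertion_sort(a, lo, hi);
        return;
    }
    int mid = lo + (hi - lo) / 2;
    hybrid_merge_sort(a, tmp, lo, mid);       /* recursive case */
    hybrid_merge_sort(a, tmp, mid + 1, hi);
    merge(a, tmp, lo, mid, hi);
}

The tmp array must be at least as long as a; choosing the cutoff is an empirical tuning decision.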

9.6 Recursion versus iteration

Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple


recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead, and sometimes explicit iteration is not available.

Compare the templates to compute x_n defined by x_n = f(n, x_(n−1)) from x_base: for the imperative language the overhead is to define the function, and for the functional language the overhead is to define the accumulator variable x.

For example, the factorial function may be implemented iteratively in C by assigning to a loop index variable and an accumulator variable, rather than passing arguments and returning values by recursion:

unsigned int factorial(unsigned int n) {
    unsigned int product = 1;   // empty product is 1
    while (n) {
        product *= n;
        --n;
    }
    return product;
}

9.6.1 Expressive power

Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program’s runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.[10][11]

Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and do loops are routinely rewritten in recursive form in functional languages.[12][13] However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack.
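As a sketch of such a transformation, the tree_contains function from above might be rewritten with an explicitly managed stack (the fixed capacity is an illustrative simplification):

struct node { int data; struct node *left; struct node *right; };   /* binary-tree node from above */

#define STACK_MAX 1024   /* illustrative fixed capacity */

int tree_contains_iterative(struct node *root, int i) {
    struct node *stack[STACK_MAX];   /* explicitly managed stack replacing the call stack */
    int top = 0;
    if (root != NULL)
        stack[top++] = root;
    while (top > 0) {
        struct node *n = stack[--top];                                      /* pop */
        if (n->data == i)
            return 1;                                                       /* found */
        if (n->left  != NULL && top < STACK_MAX) stack[top++] = n->left;    /* push children */
        if (n->right != NULL && top < STACK_MAX) stack[top++] = n->right;
    }
    return 0;                                                               /* not found */
}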

9.6.2 Performance issues

In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.

As a concrete example, the difference in performance between recursive and iterative implementations of the “factorial” example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

9.6.3 Stack space

In some programming languages, the stack space available to a thread is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language.[14] Note the caveat below regarding the special case of tail recursion.

9.6.4 Multiply recursive problems

Multiply recursive problems are inherently recursive, because of prior state they need to track. One example is tree traversal as in depth-first search; contrast with list traversal and linear search in a list, which is singly recursive and thus naturally iterative. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.


9.7 Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the “for” and “while” loops.

The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller’s return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.
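For comparison, a sketch of the two functions in C:

/* Tail-recursive: the recursive call is the last thing gcd does, so no work is
   deferred and the call frame can be reused (given tail-call elimination). */
unsigned int gcd(unsigned int x, unsigned int y) {
    return (y == 0) ? x : gcd(y, x % y);
}

/* Not tail-recursive: the multiplication by n still has to happen after
   fact(n - 1) returns, so each call leaves a deferred operation behind. */
unsigned int fact(unsigned int n) {
    return (n == 0) ? 1 : n * fact(n - 1);
}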

9.8 Order of execution

In the simple case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion, before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached. Consider this example:

9.8.1 Function 1

void recursiveFunction(int num) {
    printf("%d\n", num);
    if (num < 4)
        recursiveFunction(num + 1);
}

9.8.2 Function 2 with swapped lines

void recursiveFunction(int num) {
    if (num < 4)
        recursiveFunction(num + 1);
    printf("%d\n", num);
}
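The difference is visible in the output; assuming an initial call with argument 0 (the starting value is not specified above, so this is only an illustration):

recursiveFunction(0);   /* Function 1 prints: 0 1 2 3 4 (each value is printed before recursing)   */
recursiveFunction(0);   /* Function 2 prints: 4 3 2 1 0 (printing happens only as the calls return) */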

9.9 Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed as a recurrence relation in Big O notation. The recurrence can then (usually) be simplified into a single Big O term.


9.9.1 Shortcut rule

Main article: Master theorem

If the time-complexity of the function is in the form

T(n) = a · T(n/b) + O(n^k)

then the Big O of the time-complexity is as follows:

• If a > b^k, then the time-complexity is O(n^(log_b a)).

• If a = b^k, then the time-complexity is O(n^k · log n).

• If a < b^k, then the time-complexity is O(n^k).

where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and n^k represents the work the function does independently of any recursion (e.g. partitioning, recombining) at each level of recursion.
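For example, the binary search above makes a = 1 recursive call on half the input (b = 2) with constant extra work (k = 0); since a = b^k, the rule gives O(n^0 · log n) = O(log n). A merge sort makes a = 2 recursive calls on halves (b = 2) with linear merging work (k = 1); again a = b^k, giving O(n · log n).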

9.10 See also

• Ackermann function

• Corecursion

• Functional programming

• Hierarchical and recursive queries in SQL

• Kleene–Rosser paradox

• McCarthy 91 function

• Memoization

• μ-recursive function

• Open recursion

• Primitive recursive function

• Recursion

• Sierpiński curve

• Takeuchi function

9.11 Notes and references

[1] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1: Recurrent Problems.

[2] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). p. 427.

[3] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 126.

[4] Felleisen, Matthias; Robert Bruce Findler; Matthew Flatt; Shriram Krishnamurthi (2001). How to Design Programs: An Introduction to Computing and Programming. Cambridge, MA: MIT Press. Part V: “Generative Recursion”.

[5] Felleisen, Matthias (2002). “Developing Interactive Web Programs”. In Jeuring, Johan. Advanced Functional Programming: 4th International School. Oxford, UK: Springer. p. 108.

[6] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1, Section 1.1: The Tower of Hanoi.


[7] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 427–430: The Tower of Hanoi.

[8] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 447–448: An Explicit Formula for the Tower of Hanoi Sequence.

[9] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 127.

[10] Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 79, ISBN 9781430232384.

[11] Drozdek, Adam (2012), Data Structures and Algorithms in C++ (4th ed.), Cengage Learning, p. 197, ISBN 9781285415017.

[12] Shivers, Olin. “The Anatomy of a Loop - A story of scope and control” (PDF). Georgia Institute of Technology. Retrieved 2012-09-03.

[13] Lambda the Ultimate. “The Anatomy of a Loop”. Lambda the Ultimate. Retrieved 2012-09-03.

[14] “27.1. sys — System-specific parameters and functions — Python v2.7.3 documentation”. Docs.python.org. Retrieved 2012-09-03.

9.12 Further reading

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

9.13 External links

• Harold Abelson and Gerald Sussman: “Structure and Interpretation of Computer Programs”

• Jonathan Bartlett: “Mastering Recursive Programming”

• David S. Touretzky: “Common Lisp: A Gentle Introduction to Symbolic Computation”

• Matthias Felleisen: “How To Design Programs: An Introduction to Computing and Programming”

• Owen L. Astrachan: “Big-Oh for Recursive Functions: Recurrence Relations”
