Towards the Grammatical Generation of Progressing Scenes

Martin Berglund

October 4, 2006
Master's Thesis in Computing Science, 20 credits
Supervisor at CS-UmU: Frank Drewes
Examiner: Per Lindström

Umeå University
Department of Computing Science
SE-901 87 Umeå
Sweden



Abstract

Over the years a lot of work has been done on the topic of grammatical picture generation. However, most often such work has disregarded the pragmatic problems of combining different grammatical devices to achieve more complex results and implementing them in an efficient manner. This master's thesis aims to establish notions and results that make it possible to consider and deal with such issues in a uniform way. This is achieved by introducing concepts of "operations" of two varieties, "progressing operations" to capture iterative refinement and "scene operations" to address the most immediate efficiency concerns. Once the theory surrounding these two operations has been developed, the theory for their combination is considered. Finally, a suggestion on how ordinary tree-based picture generators can be expressed within this framework is developed. The thesis is concluded with a discussion of the actual impact of the definitions made, including examples especially focused on how the concepts exposed can be exploited to increase the efficiency of implementations.


Acknowledgments

I would, first and foremost, like to gratefully acknowledge the enthusiastic support of my supervisor Frank Drewes. The original ideas behind this thesis were of course his. During the work on the thesis he offered invaluable ideas and guidance, while at the same time giving me room to explore my own ideas and make my own mistakes.

My friends at Umeå University also deserve a great deal of credit. The many sociable coffee breaks, gaming pauses and movie nights all delayed, but greatly improved, the thesis. In addition, many of them offered valuable suggestions for improvements.

Finally, I owe a great deal of gratitude to my family; they have provided tremendous emotional as well as financial support, both before and during the work on this thesis.



Contents

1 Introduction
  1.1 Outline
  1.2 Background – tree-based picture generation
    1.2.1 Grammars
    1.2.2 Algebras
  1.3 Background – other kinds of generators
  1.4 Goals for this thesis
    1.4.1 Combining devices
    1.4.2 Progressing refinement
    1.4.3 Efficiency of operating on partly finished pictures

2 Formalization
  2.1 Progressing Operations
    2.1.1 Definitions
  2.2 Scenes
    2.2.1 Definitions
    2.2.2 Derived properties
  2.3 Progressing Scene Operations
    2.3.1 Definitions
  2.4 Tree-based operations
    2.4.1 Common definitions for tree grammars
    2.4.2 Tree grammars defining progressing operations
    2.4.3 Tree grammars defining progressing scene operations

3 Examples
  3.1 A simple 2D picture scene domain
  3.2 Scene operation based on the turtle algebra
    3.2.1 Defining the operations
    3.2.2 Explaining and using the operations
  3.3 Directly mapping a tree picture generator into a PO
  3.4 A simple arity 0 turtle PSO
  3.5 Collage operations as scene operations
    3.5.1 Defining the operations
    3.5.2 Explaining and using the operations
  3.6 A PSO with arguments
  3.7 Adding a PSO with less grammatical underpinnings
    3.7.1 The Perlin noise generator P
    3.7.2 Constructing the PSO Prline
    3.7.3 Using Prline

4 Efficiency gains
  4.1 The source of efficiency
  4.2 An efficiency-focused evaluation strategy
    4.2.1 Background
    4.2.2 Definitions
    4.2.3 A simplified practical algorithm
    4.2.4 Examples

5 Formal language implications
  5.1 PO/CF introduction
  5.2 A quick introduction to Y(REGT)
  5.3 Y(REGT) is in PO/CF
  5.4 PO/CF is in Y(REGT)
  5.5 Formal language conclusions

6 Future work
  6.1 General depth
  6.2 Some shortcomings of the current definitions
    6.2.1 Restrictive scene domains
    6.2.2 Static and dynamic parts of scenes
    6.2.3 Inflexible tree PSO's
  6.3 Parallelized implementation
  6.4 Yield Progressing Scene Operations
  6.5 Formal languages aspect
  6.6 Structured picture scene domains
  6.7 The usefulness of grammatical devices on modern hardware

References

A Proofs

B Notation

List of numbered contents

List of definitions

1.3 Σ-algebra
2.2 Progressing operation
2.3 Language generated by a progressing operation
2.5 Compositions of PO's
2.6 Scene domain
2.7 Scene
2.8 Scene operation
2.9 Compositions of SO's
2.14 Progressing scene operation
2.15 Compositions of PSO's
2.17 Tree grammar
2.18 O-composing tree grammar
2.19 Useful SO/PO/PSO's
2.20 PO-composing tree grammars
2.21 Instantiations of a PO-composing tree grammar
2.22 The operation defined by a PO instantiation
2.23 Progression in a PO instantiation
2.24 Progressing operation generated by an instantiation
2.25 Tree PO from a grammar
2.26 PSO-composing tree grammar
2.27 Instantiations of a PSO-composing tree grammar
2.28 The operation defined by a PSO instantiation
2.29 Progression in a PSO tree instantiation
2.33 PSO generated by an instantiation
2.35 Tree PSO from a grammar
3.1 Relative inert PostScript strings
3.2 PostScript pictures
3.3 The PostScript picture domain PS
3.4 Turtle algebra scene operations
3.15 Collage scene operations
4.1 Complexity metric
4.3 The recurring function set R[n]A
4.4 Output projection functions πO, πF
4.5 Yield functions
4.7 Yield Scene Operation
4.8 ySO state notation
4.13 Yield Scene Operation (specialized)
5.1 Free tree progressing operation
5.2 PO/CF
5.3 YIELD mapping Y
6.3 The T-structured A scene domain
6.4 Distillation operations

List of examples

1.2 Grammar notation
1.4 Turtle algebra
1.6 Collage algebra
1.9 Perlin noise functions
1.10 Progressing approximations
1.12 A collage operation respecting finished parts
3.16 A simple PSO using collage operations
4.14 Collage operations are yield functions
4.15 A turtle concatenation yield function
6.2 Structured scene domain sketch

List of figures

1.1 Informal tree-based picture generator example
1.5 Simple turtle algebra example
1.7 A simple collage operation P
1.8 The picture generated by a binary tree of collage operation P
1.11 Two hypothetical unfinished pictures from the device in Figure 1.1
2.1 The conceptual view of a progressing operation
2.4 The semantics of the composition of progressing operations
3.5 The f turtle operation and notation introduction
3.6 The example "box" and "boxd" arity 0 scene operations
3.7 Examples of the turtle scene operation rotα
3.8 Examples of the turtle scene operation enc
3.9 Examples of the turtle scene operation hide
3.10 Examples of the turtle scene operation ◦
3.11 Further examples of turtle scene operations
3.12 An example progression of a tree-based PSO (defined in Section 3.4)
3.13 Another example derivation starting from Figure 3.12
3.14 The progression of Figure 3.13 continued
3.17 A few examples of collage scene operations and the notation used
3.18 An example T0L-based collage tree operation
3.19 Continued progression of the T0L collage tree operation of Figure 3.18
3.20 The axiom tree and picture of the example operation PSO(Pcomb)
3.21 One progression step done from Figure 3.20
3.22 A second progression step done to PSO(Pcomb) of Figure 3.21
3.23 More progression steps on PSO(Pcomb), starting from Figure 3.22
3.24 Finished Perlin noise line from Prline
3.25 An illustration of an intermediary step of Prline
3.26 An example progression of the Perlin noise PSO
3.28 The initial tree(s) and picture of PSO(P′comb)
3.29 A series of progression steps for PSO(P′comb)
3.30 The initial tree(s) and picture of PSO(P′′comb)
3.31 One possible result of PSO(P′′comb) progressing
3.32 Continued progression from Figure 3.31
6.1 How a transformation can force a picture to be dynamic

List of lemmas and theorems

2.10 SOA is closed under composition
2.11 SOA closed under copied, omitted and permuted arguments
2.12 SO static dynamic independence lemma
2.13 Scene operations as order-preserving maps
2.16 PSOA is closed under composition
2.30 optinst(v)-like composed SO
2.31 optinst(v) is an SO
2.34 PSO's generated by instantiations are correct PSO's
4.6 Yield functions are monotonic
4.9 Primitive function invariance
4.10 Primitive function equality
4.11 Equivalence of ySO's
4.12 Algorithm 4.7 defines scene operations
5.4 From Y(REGT) to PO/CF
5.5 From PO/CF to Y(REGT)
5.6 PO/CF = Y(REGT)


Chapter 1

    Introduction

    1.1 Outline

This thesis presents a framework for expressing systems of grammatical devices which cooperate to create pictures in an efficient manner. To explain some of the background, the following sections, Sections 1.2 and 1.3, provide a quick introduction to some types of picture generators that are of interest going forward. In Section 1.4 the informal view of the problems and ideas that are to be considered is presented.

The following chapter, Chapter 2, presents the formal definitions and lemmas that are the core of the thesis. Readers who are unfamiliar with the grammatical devices for picture generation are advised to read Chapter 2 in parallel with the example chapter, Chapter 3, or even to first look through the examples before starting to read Chapter 2. Of course, the examples are founded on the theory, but reading backwards, by considering the examples before the theoretical details that underlie them, is often a better way to fully understand the concepts.

    1.2 Background – tree-based picture generation

The background of this thesis is very much related to tree-based picture generation. A good view of the subject as a whole is available in the book "Grammatical Picture Generation" [Dre06], which contains everything needed as a background for this thesis (and much more). A tree-based picture generator has two parts: a device of some kind that generates trees over a signature (i.e., a set of ranked symbols) Σ, and an algebra over the same signature Σ (called a Σ-algebra when the signature needs to be made explicit) that maps these trees into some domain A of "pictures". See Figure 1.1 for a simple tree-based picture generator laid out in this way. The exact definition of a picture varies greatly with the type of algebra; one would expect it to always be something with a graphical interpretation, but in fact, even when dealing with graphics, the domain can often contain information that is not reflected in the final graphics (endpoints and bounding boxes, for example). Also, there is no special reason why the definitions and ideas of this thesis should not be used to operate on any other kind of domain. Audio, for example, would be straightforward to fit into the framework presented (in the form of additive synthesis in particular).

    This thesis is to a large extent independent of the domain of pictures. An important


Figure 1.1: A tree-based picture generator with a simple generator and algebra. In this case the generator is a regular tree grammar or some similar grammatical device, here with the rules

start → A,   A → a[AB],   A → B,   B → b[B],   B → c,

and a picture algebra that interprets a as stacking its first argument (turned upside down) on top of its second, b as drawing a box around its argument, and c as the constant picture ♣. The dashed boxes contain an example of a generated tree and the picture that results from applying the algebra to it.

motivation however is to allow efficient generation of scenes in a way that would enable modern 3D graphics hardware to render them efficiently in an incremental fashion. This perspective is pursued in a closely related thesis [Nil06].

The rest of this section is going to touch briefly upon the notions and terminology of grammatical picture generation.

    1.2.1 Grammars

The point of this thesis is in some sense to abstract away the grammatical underpinnings and allow arbitrary combinations of generating devices to be used. This makes a detailed exposure of grammatical notions unnecessary. It is instead assumed that the reader is reasonably familiar with the most common grammatical devices¹. We shall frequently consider regular tree grammars, on account of the fact that these are one of the most basic cases. ET0L and its subclasses are also going to be touched upon in parts. To make the notation clear, here is an example of a regular tree grammar as it would be presented here:

Example 1.2 (Grammar notation) The grammar in Figure 1.1, when considered as a regular tree grammar (which is consistent with the tree generated), would be denoted g = (N, Σ, →, t0), where the parts are defined as follows.

    – N = {start , A, B} is the set of nonterminals.

¹Appendix A of [Dre06] or [CDG+97] is highly recommended reading, and if available [Eng75]. In a pinch, however, any textbook on formal languages can be helpful, [HU79] for example.


– Σ = {a, b, c} is the terminal signature (an alphabet in which every symbol has a nonnegative rank indicating the number of subtrees it requires).

– The component → is a binary derivation relation on TΣ(N) (trees over the signature Σ with leaves in Σ(0) ∪ N; see Appendix B for more notation). For regular tree grammars it is defined by a finite set of rules S, each rule associating a single nonterminal with a tree from TΣ(N) (that is, S is a finite subset of N × TΣ(N)). The relation → is then defined as follows. For all s, t ∈ TΣ(N) it holds that s → t iff there exists some t0⟦x1⟧ ∈ TΣ∪N(X1) and (n, t′) ∈ S such that t0⟦n⟧ = s and t0⟦t′⟧ = t. For notational simplicity we skip S and directly write

→ = {(start, A), (A, a[AB]), (A, B), (B, b[B]), (B, c)},

that is, we do not make a notational distinction between the binary derivation relation and the finite set of rules that defines it.

– Finally, t0 ∈ TΣ(N) is the axiom; in this case, t0 = start (only implied in the figure).

The notational "trick" of denoting the binary derivation relation in the same way as the set of rules recurs in other types of grammars as well; for example, the set of tables of a T0L grammar (a Lindenmayer grammar with tables) is identified with the resulting derivation relation in a similar manner. See Appendix B for further notation.
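As a sketch of how such a derivation relation can be realized in code, the following Python fragment (illustrative only, not part of the thesis; trees are nested tuples headed by their symbol, and all names are chosen here) generates a tree of the grammar g from Example 1.2 by repeatedly rewriting the leftmost nonterminal leaf:

```python
import random

# Rules of the grammar g from Example 1.2, keyed by nonterminal.
# A tree is a nested tuple (symbol, child, ...); nonterminal leaves
# are the one-element tuples ("start",), ("A",), ("B",).
RULES = {
    "start": [("A",)],
    "A": [("a", ("A",), ("B",)), ("B",)],
    "B": [("b", ("B",)), ("c",)],
}

def derive(tree):
    """Replace the leftmost nonterminal leaf by a random right-hand side.
    Returns (new_tree, changed)."""
    sym, *children = tree
    if sym in RULES and not children:
        return random.choice(RULES[sym]), True
    new_children, done = [], False
    for child in children:
        if not done:
            child, done = derive(child)
        new_children.append(child)
    return (sym, *new_children), done

def generate(axiom=("start",)):
    """Derive from the axiom until no nonterminal remains."""
    tree, changed = axiom, True
    while changed:
        tree, changed = derive(tree)
    return tree
```

Since the grammar is regular, each step replaces a single nonterminal leaf by a right-hand side, and the loop ends once only terminal symbols remain (which happens with probability 1, as every nonterminal can derive a terminal tree).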

    1.2.2 Algebras

The definition of an algebra for a signature Σ is straightforward. Its first part is a domain A (the domain over which the operations of the algebra work). For example, in Figure 1.1 the domain A could be as follows:

    – The pictorial element ♣ is in A.

– All the elements of A (recursively) boxed are in A: for every t ∈ A, the picture of t enclosed in a box is in A.

– All the elements of A flipped upside down are in A: for every t ∈ A, the picture of t turned upside down is in A.

– For all pairs of elements t1, t2 ∈ A, the picture of t1 vertically stacked on top of t2 is in A.

The second part of an algebra is a set of functions over the domain. There is one function for each ranked symbol s:n ∈ Σ such that the arity of the function equals the rank n of the symbol. For an algebra A this function is then denoted sA : A^n → A. An algebra is sometimes called a Σ-algebra when it is useful to make the signature over which it operates explicit.

Notice that the domain cannot actually be generated completely by the functions in Figure 1.1. This is so since the only way to flip elements upside down in the algebra is when two elements are stacked (by the function associated with the symbol a); conversely, one cannot stack elements freely either, since the only function that stacks elements also turns the top one upside down.

Before defining some specific algebras which are useful in later parts, let us make precise the formal definition of the general concept of an "algebra". All the later algebras are simply specific instances or small subclasses of this.


Definition 1.3 (Σ-algebra) A Σ-algebra is a pair A = (A, (fA)f∈Σ) where A is the domain of the algebra and (fA)f∈Σ is a set of mappings fA : A^n → A for all f:n ∈ Σ (making it a constant element in A if n = 0). The mappings fA are also called the operations of the algebra.

Given a Σ-algebra A and a tree t ∈ TΣ, the application of the algebra to the tree, evaluating to an element of A, is denoted valA(t) and is defined recursively as follows:

valA(t) = fA(valA(t1), . . . , valA(tn)) if t = f[t1, . . . , tn].

That is, the function corresponding to each symbol in the tree is applied to the results of the recursive evaluation of its children. All that is needed to define an algebra for a signature is then to supply a function (over the domain of the algebra) corresponding to each symbol in the signature with arity matching the rank of the symbol.

Two algebras that are of immediate interest are presented in the following examples. The first, a turtle algebra, is almost completely specified: the signature is decided in advance and the functions which the symbols correspond to are chosen. There is only one thing that can vary, namely the rotation angle used.

Example 1.4 (Turtle algebra) A (2D) turtle algebra A = (A, (fA)f∈Σ) is given as follows.

– The signature Σ equals {f:0, enc:1, hide:1, +:1, ◦:2}.

– The domain consists of a 2D picture with only straight lines and a single point in 2D, known as the endpoint. More formally, A = L × R2 where L is the set of all possible finite line drawings in the plane. Consider a line to be represented by two pairs of Cartesian coordinates (x1, y1) and (x2, y2), with such a line being denoted line(x1, y1, x2, y2). Each l ∈ L is then an arbitrary finite set of such lines, more formally

L = P({line(x1, y1, x2, y2) | ((x1, y1), (x2, y2)) ∈ R2 × R2}).

Conceptually one can consider each such expression line(...) a line drawing command. The set L then contains all programs that draw a picture in the plane using only straight lines.

– The operations of the algebra, corresponding to the symbols, are given as follows.

• The constant fA is simply the line from (0, 0) to (1, 0) with the endpoint (1, 0), so

fA() = (line(0, 0, 1, 0), (1, 0)).

• The unary operation encA evaluates to the line drawing of the argument with the endpoint set to (0, 0). For all p ∈ A such that p = (l, (x, y)),

encA(p) = (l, (0, 0)).

• The operation hideA does the "opposite" of enc; it evaluates to the same endpoint as its argument, but sets the line drawing to the empty set. For all p ∈ A such that p = (l, (x, y)),

hideA(p) = (∅, (x, y)).


• The operation +A rotates its argument counterclockwise around the origin (0, 0) by some predetermined angle α. Let

∀(x, y ∈ R) : rotα(x, y) = (x cos(α) − y sin(α), x sin(α) + y cos(α)).

Then, for all p ∈ A such that p = (l, (x, y)), it holds that

+A(p) = ({line(rotα(x1, y1), rotα(x2, y2)) | line(x1, y1, x2, y2) ∈ l}, rotα(x, y)).

• Finally, ◦A has arity 2 and "concatenates" two pictures, using the endpoint of the first as the origin of the second (making the endpoint of the result the endpoint of the second picture after it has been translated by the endpoint of the first). More formally, let

∀(l ∈ L, x, y ∈ R) : ttrans(l, x, y) = {line(x1 + x, y1 + y, x2 + x, y2 + y) | line(x1, y1, x2, y2) ∈ l}.

Then, for all p1, p2 ∈ A such that p1 = (l1, (x1, y1)) and p2 = (l2, (x2, y2)), it holds that

◦A(p1, p2) = (l1 ∪ ttrans(l2, x1, y1), (x1 + x2, y1 + y2)).

Notice that α is a constant chosen on a per-algebra basis. The algebra is most often also defined with the additional symbol −, defined as the rotation by −α. This can, to simplify, be considered as a special case of allowing the definition of several rotation operations that rotate their arguments by different angles.

    Two examples of the turtle algebra applied to simple trees are shown in Figure 1.5.

Figure 1.5: Simple examples of applying the turtle algebra, both with α = 90°: (a) ◦[f, ◦[enc[f], +[f]]] after application of the turtle algebra; (b) ◦[f, +[◦[f, +[◦[hide[f], +[f]]]]]] after application of the turtle algebra.
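The five turtle operations translate almost directly into code. The following Python sketch (illustrative, not from the thesis) represents a picture as a pair of a set of lines and an endpoint, fixes α at 90° as in the figure, and renames + and ◦ to plus and concat since those symbols are not valid Python identifiers:

```python
import math

ALPHA = math.pi / 2  # the per-algebra rotation angle, here 90 degrees

def f():
    # Constant: the unit line from (0, 0) to (1, 0), endpoint (1, 0).
    return ({((0.0, 0.0), (1.0, 0.0))}, (1.0, 0.0))

def enc(p):
    lines, _ = p
    return (lines, (0.0, 0.0))  # keep the drawing, reset the endpoint

def hide(p):
    _, end = p
    return (set(), end)  # keep the endpoint, drop the drawing

def _rot(pt):
    x, y = pt
    c, s = math.cos(ALPHA), math.sin(ALPHA)
    return (x * c - y * s, x * s + y * c)

def plus(p):  # the operation written + in the algebra
    lines, end = p
    return ({(_rot(a), _rot(b)) for a, b in lines}, _rot(end))

def concat(p1, p2):  # the operation written o (composition circle)
    (l1, (x, y)), (l2, (x2, y2)) = p1, p2
    moved = {((a[0] + x, a[1] + y), (b[0] + x, b[1] + y)) for a, b in l2}
    return (l1 | moved, (x + x2, y + y2))
```

For instance, the tree ◦[f, ◦[enc[f], +[f]]] of Figure 1.5(a) corresponds to concat(f(), concat(enc(f()), plus(f()))), which produces three lines and, up to floating-point rounding, the endpoint (1, 1).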

The next example is the concept of collage algebras. This is not one algebra to the extent that turtle algebras are. Instead it is a large class of algebras which place no special restrictions on the signature used. However, all the functions need to be of a very special form.

Example 1.6 (Collage algebra) The common definition of a collage is a bit more general than is useful for examples and discussion here. According to this definition, a collage C is a finite set of d-dimensional parts. To make things as simple as possible for the following examples and discussion, we consider only 2D (3D is the more interesting case, but is not as easily illustrated, especially not on paper). In addition, the concept of parts is ignored in favor of assuming that collages are flattened into monolithic pictures by taking the union of all parts.


The 2D collage domain A is the set of all 2D pictures, i.e., subsets of R2. A 2D collage operation of arity k is a function f : A^k → A such that

∀(p1, . . . , pk ∈ A) : f(p1, . . . , pk) = α1(p1) ∪ . . . ∪ αk(pk) ∪ pf

where α1, . . . , αk are injective affine transformations (and α(p) denotes the picture p transformed by α) and pf ∈ A. That is, a collage operation takes a number of pictures as arguments, transforms them by some injective affine transformations², and then takes their union, adding some (possibly empty) constant picture itself.
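A collage operation of this shape is easy to sketch in code. In the following Python fragment pictures are finite sets of points (a simplification, since real collage parts are arbitrary subsets of R²), and the representation and all names are chosen here for illustration only:

```python
def affine(a, b, c, d, e, f):
    """The affine map (x, y) -> (a*x + b*y + e, c*x + d*y + f);
    it is injective iff the matrix [[a, b], [c, d]] has rank 2."""
    assert a * d - b * c != 0  # nonzero determinant => rank 2
    return lambda pt: (a * pt[0] + b * pt[1] + e, c * pt[0] + d * pt[1] + f)

def collage_op(transforms, constant):
    """Build f(p1, ..., pk) = a1(p1) | ... | ak(pk) | constant,
    where pictures are finite sets of points in the plane."""
    def op(*pictures):
        assert len(pictures) == len(transforms)
        result = set(constant)
        for alpha, pic in zip(transforms, pictures):
            result |= {alpha(pt) for pt in pic}
        return result
    return op

# Arity-2 example: shrink both arguments by half, shift the second
# to the right, and add a single constant point at the origin.
P = collage_op(
    [affine(0.5, 0, 0, 0.5, 0, 0), affine(0.5, 0, 0, 0.5, 0.5, 0)],
    {(0.0, 0.0)},
)
```

Choosing k transformations and one constant picture per symbol, as the text describes, is exactly one collage_op call per symbol of the signature.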

In summary, a 2D collage Σ-algebra A = (A, (fA)f∈Σ) is an algebra such that

– the domain A is the set of all 2D pictures, and

– for each f:k ∈ Σ the function fA : A^k → A is a 2D collage operation.

As can be readily seen, collage algebras allow quite a lot of freedom and flexibility. Let us consider a small example operation P of arity 2. In Figure 1.7 it is shown what this operation looks like when applied to two dashed squares (whose base equals the bottom edge of the constant black polygon). The black polygon is the constant picture

Figure 1.7: The simple collage operation P applied to two dashed boxes (with the side-length equal to the base of the black polygon added by P).

that P adds. The two arguments to P are transformed and placed at the top of that polygon, as shown with the dashed squares in Figure 1.7. As a second operation, take λ of arity 0, which returns the empty picture. P applied with two copies of λ as arguments is shown in sub-figure (a) of Figure 1.8. Then deeper and deeper binary trees are shown in sub-figures (b) through (d), finally culminating with the pleasing-looking binary tree of depth 11 in sub-figure (e) of Figure 1.8. When the angle at the top of the black polygon is 45°, the figure generated by this procedure is called a Pythagorean tree.

As the construction above suggests, a collage algebra of this kind can be constructed with any finite number of operations of arbitrary arity. There are no predefined operations (as in the turtle algebra). Instead, new collage operations can be defined to correspond to any arbitrary signature. To create a collage algebra for a signature Σ one just needs to choose k affine transformations and one constant picture for each symbol s:k ∈ Σ. These can then be directly made into a collage operation as shown above.

It is interesting to note that the contribution of every subtree to valA(t) is independent of every other under a collage algebra (provided that they are not subtrees of each other). That is, a change in a given subtree only affects the portion of the picture drawn by that very subtree. All the nodes on a path from the root to this subtree only construct an affine transformation (a single transformation, since the affine transformations are closed under composition, so the length of the path is irrelevant) applied to the

²Recall that the injective affine transformations of dimension d are exactly those that can be expressed as α(p) = Ap + b with a d × d matrix A and d × 1 vector b such that rank(A) = d.


Figure 1.8: Results of evaluating binary trees of the collage operation P applied to the empty picture λ (as leaves): (a) P0 = P(λ, λ) with a binary tree of depth 1, through (d) P3 = P(P2, P2) with a depth of 4, and (e) P10 = P(P9, P9) with the binary tree of depth 11.

    image generated. This means that collage algebras afford a high degree of independence when small changes to a tree are made. □

    1.3 Background – other kinds of generators

    While picture generators based on tree generators and algebras are a convenient base case, there is a great variety of interesting (types of) picture generators that do not belong to this category. This is in fact one of the central points to be discussed. One example of a very frequently used type is generators based on Perlin noise [Per85].

    Example 1.9 (Perlin noise functions) A k-dimensional Perlin noise function is a picture generating device which works by creating a k-dimensional unit hypercube of “naturally random” values and applying some (relatively simple) function to it. The question of course is exactly how this continuous unit hypercube of random values is created, specifically how the randomness is made to look “natural”. The short answer is that the random values are spatially smoothed; the long answer is as follows.

    The cube is constructed from a large number of uniformly distributed random numbers. They are used by arranging them into k-dimensional uniform grids, each known as an “octave”. If n octaves are used for the generator (n may be freely chosen; it may even be infinite) then the grids should have sizes 2^k, 4^k, . . . , (2^n)^k. Some spline interpolation is then applied to these grids to make them continuous. Finally they are all summed up with a weighting that makes the coarser grids have higher amplitude than the finer ones. That is, the cube function c : [0, 1]^k → R is constructed as

    c(x1, . . . , xk) = a(1)s(O1, x1, . . . , xk) + a(2)s(O2, x1, . . . , xk) + a(3)s(O4, x1, . . . , xk) + · · · + a(n)s(O2^n, x1, . . . , xk)


    where

    – for each m ∈ {1, 2, 4, . . . , 2^n}, Om is a matrix of size m^k consisting of uniform pseudo-random numbers in the range [0, 1),

    – s(Om, x1, . . . , xk) is some smooth interpolation approximating the value Om(m·x1, . . . , m·xk),

    – a(x) is a monotonically decreasing function, dictating the amplitude.

    The interpolation s is often some form of polynomial fitting, and a popular choice for the amplitude function a is a(x) = 1/x.

    The end result is that such a cube of random values gives a very natural impression of randomness. It is simply spatially smooth enough to appear as if it could, in the 2D/3D cases, be something like a cloud. This can then form the basis needed to easily construct pictures closely resembling many other physical phenomena.

    The final step is to apply some function to the noise to create the picture. Let P be the set of possible point values of the desired picture ([0, 1] for intensity pictures, for example). Then any function F : R → P can be used. The final Perlin noise generator f is then

    f(x1, . . . , xk) = F(c(x1, . . . , xk)).

    This type of picture generation is very popular in practice, since it is an easy way to make very convincing models of natural phenomena and materials. Classical examples of uses of Perlin noise are to model wood (F then contains a modulo component to create nice random rings in the wood) and marble (F using some threshold function), as well as phenomena like fire, mist, water, and clouds.

    Perlin noise is of interest since it is a type of generator that is used in practice but does not necessarily fit into the grammatical tree-based model. It is however related to the common grammatical devices in at least one way: it starts with a rough approximation of a picture and then iteratively refines it at (steeply) increasing cost. This can be seen by noticing that, assuming the function F is sufficiently smooth, the evaluation F(a(1)O1(x1, . . . , xk)) approximates F(a(1)O1(x1, . . . , xk) + a(2)O2(x1, . . . , xk)), which in turn approximates F(a(1)O1(x1, . . . , xk) + a(2)O2(x1, . . . , xk) + a(3)O4(x1, . . . , xk)), and so on (recall that the function a is monotonically decreasing). That is, one can start by evaluating the Perlin noise generator with only the coarsest octave (which has the highest amplitude) and then keep adding finer and lower-amplitude ones, making Perlin noise a good candidate for inclusion in the methods discussed later. □

    There is a great variety of other generators that are also of interest: iterated function systems, cellular automata, genetic algorithms, etc. Perlin noise functions were only given as an example because they are relatively simple and extremely popular.

    However, most common in practice are probably ad-hoc picture generators, generators that are directly programmed by hand without any structure or design considerations. This raises the question: what is needed to accommodate the pragmatic approach with some more structure? The ideas presented in this thesis attempt to address this problem to some extent. The aim is to establish a mechanism that allows arbitrary ad-hoc constructions to be used in conjunction with more structural approaches like tree-based grammatical devices.


    1.4 Goals for this thesis

    Let us now discuss the goals of this thesis in more detail. This section is divided into three subsections of similar structure. Each of them sketches a problem and a proposed way in which it can be attacked.

    1.4.1 Combining devices

    Problem

    When using grammatical devices to generate pictures in practice, the devices probably become quite complex. A real-world scene, or any coarse approximation thereof, requires a lot of different-looking parts and details. In the literature this is most often glossed over; only one distinct part at a time is considered and generated. A simple example, which is in itself just a part of most interesting scenes, is a tree. A tree has bark, leaves, and the stem and branches, all of which are very different in structure. This is typically solved by combining different methods in an ad-hoc fashion. If grammatical generation mechanisms are used, this negates some of the usefulness of having a grammatically generated scene.

    This prompts the need to allow the mixing of different types of devices in a well-defined way. On the most general level a picture generator P is a device that produces pictures in some domain A, so it is fully described by the (in all interesting cases infinite) set L(P) of all the pictures it can generate. From this point of view, two different generators, P1 and P2, can be combined in some way by simply picking two pictures e1 ∈ L(P1) and e2 ∈ L(P2) and combining them by some function f : A × A → A, giving f(e1, e2) as the combined result. This approach is too limited on the one hand and, on the other hand, not limited enough:

    1. Two pictures are just picked from the languages of the generators. There is no possibility to make them have an effect on each other, other than implementing it in f. In practical terms there is, for example, no good way to let one picture generator delegate the generation of some parts of the picture to another in a controlled way. The concept of parts can here possibly involve many complex ways of combining pictures. In particular, it is not necessarily limited to just geometrical transformations and unions.

    2. Implementing complex behaviors in the function f is a workable ad-hoc approach, but in the extreme case this would amount to implementing the whole combined picture generator in f directly, the pictures e1 and e2 only being trivial starting points of the computation.

    A look inside most picture generators reveals that they, in one way or the other, recursively or iteratively apply functions acting on the domain in question (the tree-based ones being one notable example). This suggests that such picture generators could be composed in a more powerful, but also more intrusive, way.

    As an example one could again consider the generator in Figure 1.1. Imagine that there is another picture generator which can generate beautiful boxes in a variety of ways. One might then want to replace the rounded boxes constructed by the function b in the algebra with boxes from that generator. Since there may be any number of such boxes to replace the original one with (and one may want a different nondeterministic derivation for each one), this is not a simple thing to do with an ad-hoc combination function.


    Informal solution sketch

    The solution for this is not very complex. Instead of having all grammatical devices generate a picture, the devices can generate an operation. That is, if the picture domain is A, then such a grammatical device could generate a function f : A^k → A. Such a function of arity k = 0 would then be a picture generating device as we already know them, while arity k > 0 would result in functions that take pictures as arguments and produce a new picture.

    These devices could then be applied to each other by composition until the composed device is of arity 0, creating an ordinary picture generator. One could even use a grammatical device of some kind to generate the composition tree, to allow maximal flexibility.

    As an example, modifying devices based on the tree generator and algebra approach, as shown in Figure 1.1, is not very complex. One simply introduces a new kind of rank-0 node which corresponds to an argument. The algebra can then replace these symbols with the actual arguments that the operation is applied to.
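    The idea of argument leaves can be sketched as follows, using strings as a stand-in picture domain. A tree with argument leaves x_0, . . . , x_{k−1} then denotes a function A^k → A, and such functions can be composed until arity 0. All names and the encoding are illustrative assumptions, not the thesis's formal construction.

    ```python
    def evaluate(tree, algebra, args):
        # tree = (symbol, [subtrees]); an argument leaf is ("x", i), a rank-0
        # node which the algebra replaces by the actual i-th argument
        sym, children = tree
        if sym == "x":
            return args[children]   # for argument leaves, children is the index
        return algebra[sym](*(evaluate(c, algebra, args) for c in children))

    def as_operation(tree, algebra):
        # a tree with argument leaves denotes a function on the domain
        return lambda *args: evaluate(tree, algebra, args)

    # Stand-in domain: strings. 'pair' combines two pictures, 'dot' is a constant.
    algebra = {"pair": lambda a, b: "(" + a + " " + b + ")",
               "dot": lambda: "."}

    t = ("pair", [("x", 0), ("pair", [("dot", []), ("x", 1)])])
    g = as_operation(t, algebra)             # g : A^2 -> A
    h = as_operation(("dot", []), algebra)   # h : A^0 -> A, i.e. a picture
    print(g(h(), h()))                       # -> (. (. .))
    ```

    Composing g with two copies of h yields an arity-0 device, i.e. an ordinary picture generator.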

    1.4.2 Progressing refinement

    Problem

    Most grammatical devices operate, conceptually, by refining some intermediary representation repeatedly until a “final” result has been obtained. This is in some cases even considered the main purpose of the device. Notably, Lindenmayer systems were originally proposed not as a grammatical device in which the elements of the language are considered in isolation, but rather as a way to consider the derivations as a simulation of the growth of plants [Lin68]. This perspective has been pursued actively since then, see for example [PLH88].

    This view is of general interest whenever a system which modifies itself over time is to be modeled. In addition to processes that change over time, like growing plants, one can also consider simpler processes like movement, where the components themselves do not change but are instead transformed in relation to each other. In either case it is obvious that the classical approach does not directly support this; the idea of a picture generator P whose semantics is completely described by the generated language L(P) (which is a set of pictures) leaves no means to express which pictures follow from which.

    With a solution to this problem one can even argue that more “ordinary” picture generators have a useful concept of progress, even though only the final result is meant to be considered. The following example sketches how the derivation steps used to grammatically generate a picture can often be used as approximations of the picture.

    Example 1.10 (Progressing approximations) In the language-theoretical setting one does not consider the information generated during the process of deriving a picture in the language. Instead, one just picks some e ∈ L(P), where P is a picture generator and L(P) is the language generated.

    In a practical implementation there are two problems that get in the way:

    – It may be prohibitively expensive to derive such an element of the language.

    – Even if it can be derived cheaply, the picture may be too complex to handle for the rest of the application (rendering it for interactive use, for example).


    The intuitive solution is quite simple, but turns out to be very closely tied to the problem of this section. The idea is that it is possible to deal with approximations of the picture, using approximations both while it is being computed and when other efficiency considerations require it to be simple.

    There is good news on this front: in a lot of picture generators the derivation is internally done in a way that starts with a rough picture and refines it. Consider for example the generator shown in Figure 1.1. In the figure, the tree generator has finished producing a terminal tree and the algebra has been applied to produce a final picture. Instead, imagine the tree generator outputting intermediary nonterminal trees while it is working towards the terminal one. This would not immediately allow the algebra to be applied, since it knows nothing about the nonterminals. Adding some interpretation for the nonterminals enables the derivation to be considered as a sequence of pictures, containing, for example, Figure 1.11. In this case the hypothetical extension of the algebra simply replaces the nonterminal symbols by a picture of the symbol string. It is

    (a) One intermediary tree and the picture it could generate with a modified algebra. (b) A derivation step made from (a) and the new intermediary picture.

    Figure 1.11: Two “intermediary” picture results for a hypothetical modification of the generator in Figure 1.1. Sub-figure (b) shows one derivation step made on the tree from sub-figure (a). The algebra of Figure 1.1 has been extended to show the nonterminals as pictures, namely the letters representing them.

    not far off to consider the picture in sub-figure (a) of Figure 1.11 as an approximation of the picture in sub-figure (b), and that one in turn as an approximation of the final picture in Figure 1.1.

    In reality, a lot of tree-based collage grammars (see Section 1.2) tend to add smaller and smaller pieces (in relation to the whole picture) as the derivation progresses. However, remember that tree-based generators are just one convenient (often too convenient) case. In Perlin noise functions (see Section 1.3) a similar approach can luckily be taken; as long as the function F is sufficiently smooth, the changes in the picture become less and less pronounced as higher-frequency octaves are added. This is so since the higher-frequency octaves have lower amplitude, by the monotonicity requirement on the amplitude function a. In such cases it should be possible to consider the partly completed intermediary pictures as approximations of what they are going to be like when finished.

    An additional important advantage of approximations of this kind is that it may be possible to predict how the derivations proceed when some facts about the internals of the generator are known. This allows the changing parts to be handled differently. For instance, it is of great interest to be able to estimate the difference between the correct picture (the element of the language the derivation arrives at) and the current


    intermediary picture. This allows the application not only to tailor the derivation steps taken to the situation in the application at large, but also, in some situations, to anticipate detail that has not yet been generated. That is, the picture generator could replace a not yet generated part of the picture by a similar template part in order to minimize the visual difference between the approximation and the correct final picture. For example, consider a tree-based collage picture generator that is used to generate a picture of a plant, deriving it from the root up through the branches and finally the leaves. In a derivation step the outer branches of the plant may not yet have been generated. Instead there is a nonterminal in the derivation tree. The picture generator may then interpret the nonterminal as a template picture of a branch (from a small library of pre-generated ones, perhaps), making the picture appear complete at a quick glance. When further derivation steps are taken this template can be replaced by the correct branch3. □

    In conclusion, there seems to be a lot of value in considering the actual derivations performed when dealing with grammatical devices. There is of course nothing that prevents such a device from operating within the confines of the usual “language” model, having the language contain sequences of pictures. On the other hand, grammatical devices are conceptually constructed, and typically practically implemented, by doing incremental derivations. It is tempting to define the devices in a way which allows this fact to be exploited.

    Informal solution sketch

    Combining this problem with the one from Section 1.4.1 above, the solution becomes something that is referred to as “progressing operations” in the following. That is, the generated objects are operations rather than pictures, as proposed in Section 1.4.1, but rather than being static, these operations “progress” as discussed in this section (and illustrated in Example 1.10).

    A picture generator that produces intermediary pictures (before getting to the final picture that is an element of the language, if such a final picture exists) generates a progressing operation of arity 0. The progressing operation then consists of the current picture generated, as well as possible progressions, i.e., a set of progressing operations with new pictures. Each of these new pictures corresponds to one single refinement step performed on the original picture by the underlying picture generator. For example, in the case of a tree-based collage grammar that has currently generated an intermediate tree t, the operation would consist of the current picture (obtained by applying the collage algebra to t) and the set of continuations given by all possible derivations t → t′ in the grammar. For a Perlin noise function the picture would again simply be the picture with some k octaves complete, whereas the continuations would (recursively) consist of all progressing operations obtained by adding one more octave.

    The case where the arity k of a progressing operation is greater than zero allows picture generators to work together. They can then all “progress” by creating a (usually more complex) operation that conceptually progresses when the parts it consists of progress. In this case what was called the current picture above becomes a function f : A^k → A, where A is the domain of pictures (notice that this is consistent with the case k = 0, where f simply becomes a constant picture). Likewise the continuations become a set of progressing operations of arity k.

    3To make such replacements seamless a variety of methods can be used. This is a well-explored field for 3D polyhedra, referred to as level-of-detail morphing/blending.


    This case is sufficiently different from the traditional approach to make it hard to construct an apt example this early. When just thinking about tree grammars, one can consider the discussion of Section 1.4.1 combined with outputting an intermediary result for each derivation step in the grammar. Alternatively, one can skip ahead to Section 3.6 and look at the sequences of pictures in that example.

    1.4.3 Efficiency of operating on partly finished pictures

    Problem

    The third and final problem is to extract efficiency from the predictability of changes. This only makes sense in the context of a solution to the problem stated in Section 1.4.2 above (which in turn makes use of the solution in Section 1.4.1). In many cases there is enough information about a picture generator to tell, for any intermediary picture, how much of the already constructed picture is going to remain unchanged in all possible continuations of the derivation. That is, a picture generator has, as described in the discussion in Section 1.4.2 above, constructed a partial picture, one that has not yet completed its refinement (or maybe never will). Let it then be such that it can in addition tell which parts of it are already “finished” and will not be modified further, and which ones are temporary (the nonterminal template branch that was mentioned in Section 1.4.2 above being a good example). This would be a very useful piece of extra information to have when considering possible efficiency improvements.

    It is often the case that some facts about the internals of the generator are known, making it possible to predict what can change and in what way it can change. In the case of tree-based collage grammars the nonterminals of the underlying tree dictate what part of the picture can change. Any fully terminal subtree will always keep producing the same result. Given appropriate knowledge, and appropriate types of operations in the algebra, this translates into knowing a part of the picture. In the case of a k-dimensional Perlin noise function one can also make a prediction: if n octaves have been added, and m > n octaves are to be included in total, the value for a point (x1, . . . , xk) before applying F is ccurrent = a(1)s(O1, x1, . . . , xk) + . . . + a(n)s(O2^n, x1, . . . , xk). The maximal value that can be added by the next m − n octaves is cmax = Σ_{i=n+1}^{m} a(i) (assuming that the interpolating polynomial does not have a maximal point greater than the highest-valued control point). This makes the possible range for the final point equal to [ccurrent, ccurrent + cmax). With this range known, the function F can be analyzed in advance to decide how the resulting picture can change when the number of octaves increases.
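    The predicted interval can be computed directly. The sketch below assumes the amplitude function a(i) = 1/i and octave values in [0, 1); these are example choices, not requirements of the prediction itself.

    ```python
    from fractions import Fraction

    def remaining_amplitude(n, m):
        # Upper bound c_max on what octaves n+1, ..., m can still add,
        # assuming a(i) = 1/i and each octave's smoothed value lies in [0, 1)
        return sum(Fraction(1, i) for i in range(n + 1, m + 1))

    def value_range(c_current, n, m):
        # the final value before applying F lies in [c_current, c_current + c_max)
        c_max = remaining_amplitude(n, m)
        return (c_current, c_current + c_max)

    # With 2 of 4 octaves done, octaves 3 and 4 can add at most 1/3 + 1/4 = 7/12.
    lo, hi = value_range(Fraction(3, 4), 2, 4)
    ```

    With exact rational bounds like these, F can be inspected in advance over the interval [lo, hi) to see how the picture value can still change.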

    As another example, consider the pictures in Figure 1.11 again. The bottom part with ♣ in two boxes is obviously not going to change shape, since its subtree is finished. If the stacking caused by a in the algebra of Figure 1.1 only moves the first argument upwards, the bottom part is not only fixed in shape, it cannot move around either. It is thus completely finished and can in some sense be “committed to paper”; there is simply nothing that can happen in the derivation that can change it. The two nonterminals in (a), on the other hand, are not finished yet. In (b) the top ♣ has been generated and won’t change shape in any way. It might however move, if the remaining nonterminal A generates something big which would push the top ♣ up. In this sense it is not yet finished, since it is unknown exactly where it is going to be drawn in the final picture.

    The problem of telling what is finished and what is not becomes more interesting when considering the combination of operations as discussed in Section 1.4.1. When more than one picture generator is involved the issue becomes considerably more complex. It is not enough that a picture generator can tell whether a piece it has generated is finished; it is equally important whether it uses an unfinished piece generated by another picture generator or not.

    The assumption made here is that if it is possible to tell which parts are done and which are not, there will be many opportunities to increase efficiency by caching partial results. If one picture generator mostly generates finished parts, then these parts may not have to be reevaluated in the context of the application, or of another picture generator using them, when another derivation step is made.

    Informal solution sketch

    The basic solution to this problem is to split the picture domain into two parts: one containing the finished parts of the picture, and one containing the unfinished ones that may still change.

    A progressing operation of arity 0, as discussed above in Section 1.4.2, would then simply generate an element of this new domain. When the operation progresses (a continuation is chosen) it generates a new picture. However, the operation should guarantee that the new picture has a finished part which contains at least everything that the finished part of the old picture contained (optionally adding something more). So the progression may never remove something that is in the finished part.

    As an example, in a tree grammar a terminal leaf is of course finished, whereas a nonterminal is unfinished. Any terminal node is finished if and only if all its children are finished. This means that only a fully terminal tree is finished, ignoring all the terminal parts if even a single nonterminal remains (since the root then has that nonterminal in some subtree by definition). However, when applying an algebra, one could find that terminal subtrees generate some parts which are finished independently of a nonterminal in some other part of the tree.
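    The rule that a node is finished if and only if it is terminal and all its children are finished can be sketched as a short recursion (the tree encoding is an illustrative assumption):

    ```python
    def finished(tree, nonterminals):
        # tree = (symbol, [subtrees]); a tree is finished iff its root symbol is
        # terminal and every one of its subtrees is finished
        sym, children = tree
        return sym not in nonterminals and all(
            finished(c, nonterminals) for c in children)

    t1 = ("a", [("c", []), ("b", [("c", [])])])    # fully terminal: finished
    t2 = ("a", [("A", []), ("b", [("c", [])])])    # nonterminal A: unfinished
    ```

    Note how a single nonterminal anywhere makes the whole tree (and every tree containing it) unfinished, exactly as described above.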

    In the case of a progressing operation with arity greater than 0 the solution is the same, but the details become more complex. If the arguments are constant, the progression case remains exactly the same; it just has to guarantee that things are only added to the finished half. The interesting thing is what happens when the arguments change. The operation has to guarantee that if nothing is removed from the finished part of its arguments, its own evaluation will in turn not remove finished parts either. That is, very informally, as long as the arguments keep what they promise (that finished things are never removed), the operation should also keep what it promises (that nothing disappears from the finished part of its result either).

    For example, a collage operation can be made into a very simple operation of this kind.

    Example 1.12 (A collage operation respecting finished parts) A collage operation of the form f(a, b) = α1(a) ∪ α2(b) ∪ s (see Example 1.6), where

    – the domain of the arguments a and b and of the constant s is split as described above, and

    – α1 and α2 are affine transformations,

    can be made into an operation as above that respects the finished parts and upholds the guarantees about its results in the following way.

    Let us denote the finished part of an element a by stat(a) and the unfinished part by dyn(a) (which stand for “static” and “dynamic”, respectively). Then the operation f′

  • 1.4. Goals for this thesis 15

    can be constructed from f by simply, for all a and b, letting f′(a, b) = e where

    stat(e) = α1(stat(a)) ∪ α2(stat(b)) ∪ stat(s),
    dyn(e) = α1(dyn(a)) ∪ α2(dyn(b)) ∪ dyn(s).

    That is, all that is needed is to keep the finished and unfinished parts separate in this case. As long as the arguments a and b maintain their finished parts correctly, so will f′. □
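    Example 1.12 can be sketched directly, representing a split picture as a pair (stat, dyn). For brevity, pictures are reduced here to sets of points and the affine transformations are toy functions; these simplifications are illustrative only.

    ```python
    def split_collage_op(alpha1, alpha2, s):
        # f'(a, b) where a = (stat_a, dyn_a), b = (stat_b, dyn_b), and the
        # constant s = (stat_s, dyn_s): the finished and unfinished halves
        # are transformed and united separately, never mixed
        def f_prime(a, b):
            stat = alpha1(a[0]) | alpha2(b[0]) | s[0]
            dyn = alpha1(a[1]) | alpha2(b[1]) | s[1]
            return (stat, dyn)
        return f_prime

    ident = lambda pic: set(pic)                        # toy affine maps
    shift = lambda pic: {(x + 1, y) for (x, y) in pic}
    s = ({(0, 0)}, {(9, 9)})                            # split constant picture

    f = split_collage_op(ident, shift, s)
    a = ({(1, 1)}, set())          # argument with only a finished part
    b = (set(), {(2, 2)})          # argument with only an unfinished part
    result = f(a, b)
    ```

    Since stat(e) depends only on the finished halves of a, b, and s, any argument that only grows its finished half can only grow the result's finished half, which is exactly the guarantee required.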

    In a more complex case, however, this procedure would be more involved. Most often the parts would interact in the operation. Furthermore, very quickly another important question surfaces: what does “adding parts” really mean? In Example 1.12 above addition is given by set union ∪, making the picture domain ordered by the subset relation. This is indeed a very common case which is easy to handle, but it is not sufficiently general for all situations. The formal definitions to be developed are therefore not going to dictate that the subset relation has to be used for ordering.


    Chapter 2

    Formalization

    2.1 Progressing Operations

    Here the formal definitions for progressing operations are presented. See Sections 1.4.1 and 1.4.2 for a discussion of the problem that prompts them, as well as a sketch of the solution.

    Examples of the concepts defined here are given in the next chapter, starting with the discussion in Section 3.3.

    2.1.1 Definitions

    Tackling the problems discussed in Sections 1.4.1 and 1.4.2 at the same time may seem like skipping ahead. However, in some sense the problem of combining devices, discussed in Section 1.4.1, has already been solved in that section: grammatical devices can simply generate functions operating on the domain instead of elements of the domain. The interest for that part lies in how to retrofit the ordinary grammatical devices to do so, but this is outside the scope of this chapter.

    A progressing operation has some arity n, operates on some domain A and, as one of its parts, contains a function F : A^n → A. The second part to take care of is the actual progression, solving the problem discussed in Section 1.4.2. When an operation progresses it can possibly do so in various ways (non-deterministically), just like most of the grammatical devices we wish to encapsulate can. Each such possible progression should be allowed to replace the function F by another function with the same arity and domain, and, importantly, define a new set of possible ways to progress further. There may of course be none, allowing the operation to “finish” without further possible progressions. This is easily formalized by letting the second part of a progressing operation be a possibly infinite set of progressing operations (with the same arity and domain). See Figure 2.1 for an illustration. A progressing operation is illustrated as a node in the tree, with some number of children, both zero children (making progression impossible) and infinitely many children being valid possibilities. An example of infinitely many children is the case of the Perlin noise generators discussed in Example 1.9, where the number of ways to add another octave is the infinity of real numbers. The tree may also be infinite in depth, having no leaf nodes. Classical Lindenmayer systems are a good example of this, where simulated plants may grow indefinitely.

    The formal definition that follows from this reads as follows.



    [Tree diagram: a root (F, Π) with children (F1, Π1), . . . , (Fn, Πn) ∈ Π, each child in turn having further children drawn from Π1, . . . , Πn, and so on.]

    Figure 2.1: The conceptual layout of a progressing operation. The progression property gives each progressing operation some number of child operations to which it can non-deterministically progress. It is also possible for each node to have an infinite (or even uncountable) number of children. There may be leaf nodes in this tree, but it is also possible that all paths are infinite.

    Definition 2.2 (Progressing operation) A progressing operation (PO) of arity n ∈ N over A is a pair P = (F, Π) where F is a function in A^n → A and Π is a set of progressing operations of arity n over A.

    The components F and Π of P can also be denoted by FP and ΠP, respectively. The set of all progressing operations over A of arity n is denoted by PO[n]_A. Both [n] and A may be omitted when irrelevant or arbitrary. □

    This definition is of course extremely general. This is however a good thing, since the whole purpose of this solution is to allow widely varying types of generating devices to be mixed.

    As was already mentioned in Section 1.4.2, the progressing operations bring with them a slightly different view of the language generated. On the one hand, one may in some cases still primarily be interested in the leaf operations where the progressions are “finished”. In other cases, we may be interested in all possible operations, finished or not, that can be constructed by progression. Closest to the purpose of the progressing operations as defined here, however, is the idea of the language being a sequence of operations, each one being a progression of the prior one (a path in the tree in Figure 2.1). Each of these is properly formalized in the following definition.

    Definition 2.3 (Language generated by a progressing operation) Let P ∈ PO. The progression-closure C(P) is given by

    C(P) = {P} ∪ ⋃_{P′ ∈ ΠP} C(P′).

    The different languages obtained from P are then defined as follows.

    – The final language (or just language) generated by P is denoted L(P) and defined as

    L(P) = {F | (F, ∅) ∈ C(P)}.

    – The intermediary language generated by P is denoted Li(P) and defined as

    Li(P) = {FP′ | P′ ∈ C(P)}.


    – The progression language generated by P is denoted L_p(P) and is a set of (possibly infinite) lists of functions, defined as

        L_p(P) = {F_{P_0} · F_{P_1} · F_{P_2} · … | P_0 = P, ∀(i ∈ ℕ): P_i ∈ Π_{P_{i−1}}}
               ∪ {F_{P_0} · … · F_{P_n} | P_0 = P, n ∈ ℕ, ∀(i ∈ {1, …, n}): P_i ∈ Π_{P_{i−1}}, Π_{P_n} = ∅},

    where · is a list concatenation operation (see the notation list in Appendix B). ◊
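For progression trees with finitely many nodes, C(P), L(P) and L_i(P) can be computed by a straightforward traversal. The sketch below represents a PO simply as a pair (F, children); all names are illustrative assumptions, and L_p(P) is omitted since its elements may be infinite lists:

```python
# Represent a PO as a pair (F, children); the tree here is finite, while
# Definition 2.3 also covers infinite progression trees.
double = (lambda x: 2 * x, ())          # finished operation
succ   = (lambda x: x + 1, ())          # finished operation
start  = (lambda x: x, (double, succ))  # may progress to either

def closure(P):
    """C(P) = {P} together with the closures of all progressions of P."""
    seen, stack = [], [P]
    while stack:
        Q = stack.pop()
        if Q not in seen:
            seen.append(Q)
            stack.extend(Q[1])
    return seen

def final_functions(P):
    """Function parts in L(P): operations whose progression set is empty."""
    return [F for (F, Pi) in closure(P) if not Pi]

def intermediary_functions(P):
    """Function parts in Li(P): every operation reachable by progression."""
    return [F for (F, Pi) in closure(P)]

print(sorted(f(10) for f in final_functions(start)))  # [11, 20]
```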

    There is one more concept that is needed to formalize systems that generate progressing operations, namely composition of operations. This is needed because the logical combination of the discussions of Sections 1.4.1 and 1.4.2 involves applying operations to each other. One may even consider the case where some kind of grammatical tree generator is used to generate a tree of compositions of operations. The first step in defining a composition for progressing operations is easy: for two operations P, P' ∈ PO_A^[1] the function part of the composition should of course simply be F_{P∘P'} = F_P ∘ F_{P'}. The possible progressions of the composed operation require a bit more thinking. It is reasonably clear that the composed operation should progress by letting its components progress, but in what way exactly? One could let the composed operation progress by having both P and P' simultaneously progress one step, but this would mean that all operations in a large system always have to progress together, which is a much too restrictive view for both modeling and efficiency reasons. Instead, let P ∘ P' progress by letting only one of P or P' progress. The structure of this progression is illustrated in Figure 2.4 (recall the notation used in Figure 2.1). Notice that a progression step in the composed operation implies that only one of its components has progressed. In addition, it would also be possible to allow both to progress. This would conceptually give more freedom, but it neither adds nor removes anything from L(P ∘ P') or L_i(P ∘ P'), and only adds some strings to L_p(P ∘ P') which “skip over” some elements of already existing strings. Since it is slightly more complex and of dubious utility this variant is not used.

    The formal definition is as follows.

    Definition 2.5 (Compositions of PO's) The single-argument composition of two progressing operations (as in Definition 2.2) P ∈ PO_A^[n] and Q ∈ PO_A^[m] at argument k (where 1 ≤ k ≤ n) is denoted P ∘_k Q and defined by

        P ∘_k Q = (F_P ∘_k F_Q, {P ∘_k R' | R' ∈ Π_Q} ∪ {R' ∘_k Q | R' ∈ Π_P}).

    For progressing operations P ∈ PO_A^[n] and Q_1 ∈ PO_A^[m_1], …, Q_n ∈ PO_A^[m_n] the composition P ∘ (Q_1, Q_2, …, Q_n) is defined as ((…(P ∘_n Q_n) ∘_{n−1} Q_{n−1}) ∘_{n−2} …) ∘_1 Q_1 (which gives it the arity ∑_{k=1}^n m_k). ◊
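For unary operations (n = m = k = 1) the composition of Definition 2.5 can be sketched directly: the function parts compose, and the progression set lets exactly one of the two components progress. The pair representation (F, children) and all names below are illustrative:

```python
# A sketch of Definition 2.5 for unary POs, represented as pairs
# (F, children). compose(P, Q) builds P composed with Q at argument 1.
def compose(P, Q):
    FP, PiP = P
    FQ, PiQ = Q
    return (lambda x: FP(FQ(x)),
            tuple(compose(P, R) for R in PiQ) +   # Q progresses, P stays
            tuple(compose(R, Q) for R in PiP))    # P progresses, Q stays

inc     = (lambda x: x + 1, ())
quarter = (lambda x: x // 4, ())
half    = (lambda x: x // 2, (quarter,))  # "half" may progress to "quarter"

C = compose(inc, half)
print(C[0](8))        # inc(half(8)) = 5
print(len(C[1]))      # exactly one progression: inc composed with quarter
print(C[1][0][0](8))  # inc(quarter(8)) = 3
```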

    2.2 Scenes

    Here the formal definitions for the scenes and scene operations are presented. See Section 1.4.3 for a discussion of the problem that prompts them and, in its second part, a very informal overview of what needs to be done. Note that progression is not touched upon here, as the combination of scenes and progression is dealt with in Section 2.3.

    Examples of the concepts defined are in the examples chapter. Sections 3.1 and 3.2 deal with scenes and scene operations, respectively. Later parts of the examples all use scene operations in some form, but also involve other concepts.


    [Diagram: the progression trees of (F, Π) and (F', Π'), composed with ∘_k into the tree of (F ∘_k F', Π'').]

    Figure 2.4: The semantics of composition of progressing operations, using the tree notation introduced in Figure 2.1. The function part of the composed operation is made by composing the functions of the arguments, while the progression set is recursively constructed by non-deterministically progressing one of the two arguments.

    2.2.1 Definitions

    The first consideration is to define the requirements on the domain for scenes and scene operations. The informal description in the second part of Section 1.4.3 uses the union, and then implicitly considers the domain ordered by set inclusion. In more general terms, however, what is needed are the following three components: some combination operation, an ordering of the elements, and an identity element for the combination operation. The combination operation is needed to make it possible to recombine a partitioned scene into an element of the domain which corresponds to the result of the operation. An ordering is needed to give “refining a picture” a strict interpretation. An identity element for the combination is needed to be able to have “empty” parts. What is needed is in fact exactly a join semi-lattice (see for example [DP02] for an introduction).

    Definition 2.6 (Scene domain) A scene domain is a tuple A = (A, ≤, ⊔, ⊥) where ⟨A, ≤⟩ forms a semi-lattice with join operator ⊔ and least element ⊥ (which must exist).

    A is often used itself in contexts where a set is required, in which case it implies a reference to its domain A. ◊
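One concrete scene domain, matching the informal description using union and set inclusion, takes finite sets of drawn objects ordered by inclusion, with union as join and the empty set as least element. A small sketch; the names join, leq and bottom are illustrative:

```python
# The scene domain of finite sets: A = finite sets of objects,
# ordered by inclusion, with union as the join and the empty set as bottom.
bottom = frozenset()

def join(a, b):
    return a | b   # the join operator (set union)

def leq(a, b):
    return a <= b  # the ordering (set inclusion)

a = frozenset({"circle"})
b = frozenset({"circle", "square"})
print(leq(a, b))              # True: a refines to b
print(join(a, bottom) == a)   # True: bottom is the identity for join
```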

    With the definition of the restrictions on the domain in place, the definition of scenes is straightforward. Each scene is simply a pair of two elements of the domain: the finished part and the unfinished remainder.


    Definition 2.7 (Scene) A scene over the scene domain A is a pair S = (s, d) ∈ A × A, where s is referred to as the static part and can be denoted by stat(S), while d is referred to as the dynamic part and may be denoted by dyn(S).

    The set A × A of all scenes over A is denoted by S_A, or simply S if A is irrelevant or understood. If the scene domain A has the join operation ⊔ the abbreviation ⊔̃ : S_A → A is often used, where

        ∀(S ∈ S_A): ⊔̃(S) = stat(S) ⊔ dyn(S). ◊

    Scene operations can now be defined. They are in fact just functions over scenes with some special requirements:

    – As long as nothing is removed from the static part of the arguments, nothing should be removed from the static part of the result.

    – The way each argument is distributed into a static and dynamic part only affects the static/dynamic distribution of the result; the join of the static and dynamic parts of the result is constant.

    These can directly be formalized as follows.

    Definition 2.8 (Scene operation) A scene operation (SO) over the domain A of arity n is a function F : S_A^n → S_A having the following properties:

    Property 2.8.1 (Static monotonicity) For all scenes S_1, S'_1, …, S_n, S'_n ∈ S_A, if stat(S_i) ≤ stat(S'_i) for all 1 ≤ i ≤ n, then

        stat(F(S_1, …, S_n)) ≤ stat(F(S'_1, …, S'_n)). □

    Property 2.8.2 (Static/dynamic distribution invariance) For all scenes S_1, S'_1, …, S_n, S'_n ∈ S_A,

        ⊔̃(S_1) = ⊔̃(S'_1) ∧ … ∧ ⊔̃(S_n) = ⊔̃(S'_n) ⟹ ⊔̃(F(S_1, …, S_n)) = ⊔̃(F(S'_1, …, S'_n)).

    (Recall the notational abbreviation ⊔̃(S) from Definition 2.7.) □

    The set of all scene operations over A is denoted by SO_A. The subset consisting only of the scene operations of arity n is denoted by SO_A^[n]. ◊
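A simple binary scene operation over the scene domain of sets (ordered by inclusion, with union as join) is an overlay that joins static parts with static parts and dynamic parts with dynamic parts. The sketch below also spot-checks Property 2.8.2 on one sample; overlay and tjoin are illustrative names, tjoin playing the role of the abbreviation from Definition 2.7:

```python
# Scenes over the set domain are pairs (static, dynamic) of frozensets.
def tjoin(S):
    """The join of a scene's static and dynamic parts."""
    s, d = S
    return s | d

def overlay(S1, S2):
    """A binary scene operation: overlay two scenes componentwise."""
    (s1, d1), (s2, d2) = S1, S2
    return (s1 | s2, d1 | d2)

# Property 2.8.2 on a sample: redistributing content between the static
# and dynamic parts of an argument leaves tjoin of the result unchanged.
S  = (frozenset({"a"}), frozenset({"b"}))
S_ = (frozenset({"a", "b"}), frozenset())   # same join, different split
T  = (frozenset({"c"}), frozenset())
print(tjoin(overlay(S, T)) == tjoin(overlay(S_, T)))  # True
```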

    Composing scene operations is straightforward: it is just ordinary function composition. We start with the single-argument composition, since it is going to be useful later.

    Definition 2.9 (Compositions of SO's) The single-argument composition of two scene operations F ∈ SO_A^[n] and G ∈ SO_A^[m] at argument k (where 1 ≤ k ≤ n) is denoted F ∘_k G. The composition is defined as

        (F ∘_k G)(s_1, …, s_{n+m−1}) = F(s_1, …, s_{k−1}, G(s_k, …, s_{k+m−1}), s_{k+m}, …, s_{n+m−1}).

    For scene operations F ∈ SO_A^[n] and G_1 ∈ SO_A^[m_1], …, G_n ∈ SO_A^[m_n] the complete composition F ∘ (G_1, G_2, …, G_n) is defined as ((…(F ∘_n G_n) ∘_{n−1} G_{n−1}) ∘_{n−2} …) ∘_1 G_1 (which gives it the arity ∑_{k=1}^n m_k). ◊
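The argument splicing in Definition 2.9 can be sketched generically: the inner operation G consumes m consecutive arguments starting at position k (1-based), and its result takes their place in the argument list of F. compose_at is an illustrative name:

```python
# A sketch of the single-argument composition (F o_k G) from Definition 2.9,
# for plain Python functions; F takes n arguments, G takes m, and the
# composition takes n + m - 1.
def compose_at(F, G, k, m):
    def H(*args):
        inner = G(*args[k - 1 : k - 1 + m])                  # G(s_k, ..., s_{k+m-1})
        return F(*args[: k - 1], inner, *args[k - 1 + m:])   # spliced into F
    return H

F = lambda x, y: ("F", x, y)   # arity 2
G = lambda x, y: ("G", x, y)   # arity 2
H = compose_at(F, G, 2, 2)     # arity 2 + 2 - 1 = 3
print(H(1, 2, 3))  # ('F', 1, ('G', 2, 3))
```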


    Here the question arises whether the composition of scene operations yields a scene operation. It is not very hard to see that composition preserves Properties 2.8.1 and 2.8.2, but to remove any doubt, here follow a lemma and its formal proof.

    Lemma 2.10 (SO_A is closed under composition) For all F ∈ SO_A^[n], G ∈ SO_A^[m], and k ∈ {1, …, n}, F ∘_k G is a scene operation, that is, (F ∘_k G) ∈ SO_A^[n+m−1]. ◊

    Proof To show that F ∘_k G is an SO it should be shown that it fulfills the static monotonicity property (Property 2.8.1) and the static/dynamic distribution invariance property (Property 2.8.2):

    – Static monotonicity: Let S_1, S'_1, …, S_{n+m−1}, S'_{n+m−1} ∈ S be such that stat(S_i) ≤ stat(S'_i) for all i ∈ {1, …, n+m−1}. If S = G(S_k, …, S_{k+m−1}) and S' = G(S'_k, …, S'_{k+m−1}), then static monotonicity (Property 2.8.1) yields stat(S) ≤ stat(S'). Therefore we get

        stat((F ∘_k G)(S_1, …, S_{n+m−1}))
          = stat(F(S_1, …, S_{k−1}, S, S_{k+m}, …, S_{n+m−1}))
          ≤ stat(F(S'_1, …, S'_{k−1}, S', S'_{k+m}, …, S'_{n+m−1}))
          = stat((F ∘_k G)(S'_1, …, S'_{n+m−1})),

    just as required.

    – Static/dynamic distribution invariance: Let S_1, S'_1, …, S_{n+m−1}, S'_{n+m−1} ∈ S be such that ⊔̃(S_i) = ⊔̃(S'_i) for all i ∈ {1, …, n+m−1} (recall the notational abbreviation ⊔̃ from Definition 2.7) and let S_G = G(S_k, …, S_{k+m−1}). Since G has the static/dynamic distribution invariance property, it follows that ⊔̃(S_G) = ⊔̃(G(S'_k, …, S'_{k+m−1})), so by the definition of composition and since F fulfills the distribution invariance property,

        ⊔̃((F ∘_k G)(S_1, …, S_{n+m−1}))
          = ⊔̃(F(S_1, …, S_{k−1}, S_G, S_{k+m}, …, S_{n+m−1}))
          = ⊔̃(F(S'_1, …, S'_{k−1}, G(S'_k, …, S'_{k+m−1}), S'_{k+m}, …, S'_{n+m−1}))
          = ⊔̃((F ∘_k G)(S'_1, …, S'_{n+m−1})),

    just as required.

    This concludes the proof. □

    2.2.2 Derived properties

    The following easy lemma states that the set of all scene operations is closed under copying, omission and permutation of arguments. This is going to be important for some details in later discussions.

    Lemma 2.11 (SO_A is closed under copied, omitted, and permuted arguments) Let F be a function on S_A of arity n, and let F' ∈ SO_A^[m] and k_1, …, k_m ∈ {1, …, n} be such that for all S_1, …, S_n ∈ S_A: F(S_1, …, S_n) = F'(S_{k_1}, …, S_{k_m}). Then F is a scene operation. ◊

    Proof This is easy to see. What needs to be shown is that F has Property 2.8.1 and Property 2.8.2:


    1. As for the static monotonicity property (Property 2.8.1), let S_1, S'_1, …, S_n, S'_n ∈ S_A be such that stat(S_i) ≤ stat(S'_i) for all i ∈ {1, …, n}. Then

        stat(F(S_1, …, S_n)) = stat(F'(S_{k_1}, …, S_{k_m}))     (by definition)
                             ≤ stat(F'(S'_{k_1}, …, S'_{k_m}))    (F' has Property 2.8.1)
                             = stat(F(S'_1, …, S'_n))             (by definition),

    so stat(F(S_1, …, S_n)) ≤ stat(F(S'_1, …, S'_n)).

    2. As for the static/dynamic distribution invariance property (Property 2.8.2), let S_1, S'_1, …, S_n, S'_n ∈ S_A be such that ⊔̃(S_i) = ⊔̃(S'_i) for all i ∈ {1, …, n}. Then

        ⊔̃(F(S_1, …, S_n)) = ⊔̃(F'(S_{k_1}, …, S_{k_m}))     (by definition)
                           = ⊔̃(F'(S'_{k_1}, …, S'_{k_m}))    (F' has Property 2.8.2)
                           = ⊔̃(F(S'_1, …, S'_n))             (by definition),

    so ⊔̃(F(S_1, …, S_n)) = ⊔̃(F(S'_1, …, S'_n)). □

    One fact that has not been explicitly stated in the definitions above, but which is on some level implicit in the surrounding text, is that changes in the dynamic part do not change the static part in any way. That is, no matter how much is added to or removed from the dynamic part of any of the arguments, the static part of the result remains the same (though it is of course normally going to change the dynamic part of the result). The following lemma and its proof show that this is indeed the case.

    Lemma 2.12 (SO static/dynamic independence lemma) For every scene operation F ∈ SO_A^[n] and all S_1, S'_1, …, S_n, S'_n ∈ S_A, if stat(S_i) = stat(S'_i) for all i ∈ {1, …, n} then it holds that stat(F(S_1, …, S_n)) = stat(F(S'_1, …, S'_n)). ◊

    Proof Take some F ∈ SO_A^[n] and some S_1, S'_1, …, S_n, S'_n ∈ S_A such that stat(S_i) = stat(S'_i) for all i ∈ {1, …, n}. Since stat(S_k) = stat(S'_k) implies stat(S_k) ≤ stat(S'_k), the static monotonicity property (Property 2.8.1) gives

        stat(F(S_1, …, S_n)) ≤ stat(F(S'_1, …, S'_n)).

    However, it is also true that stat(S_k) = stat(S'_k) implies stat(S'_k) ≤ stat(S_k), which by static monotonicity and the above yields

        stat(F(S_1, …, S_n)) ≤ stat(F(S'_1, …, S'_n)) ≤ stat(F(S_1, …, S_n)).

    This shows that stat(F(S_1, …, S_n)) = stat(F(S'_1, …, S'_n)), since ≤ is an order relation and thus antisymmetric. □

    Notice that, while this lemma states that the static part of the result depends solely on the static parts of the arguments, the converse does not hold. Changing the static part of the arguments may change the dynamic part of the result.
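Lemma 2.12 can be observed numerically on the scene domain of sets: replacing the dynamic part of an argument, while keeping its static part fixed, never changes the static part of the result. The overlay operation and all names below are illustrative:

```python
# Varying only the dynamic parts of the arguments of a scene operation
# leaves the static part of the result untouched (Lemma 2.12).
def overlay(S1, S2):
    (s1, d1), (s2, d2) = S1, S2
    return (s1 | s2, d1 | d2)

def stat(S):
    return S[0]

S1     = (frozenset({"a"}), frozenset({"x"}))
S1_alt = (frozenset({"a"}), frozenset({"y", "z"}))  # same static part
S2     = (frozenset({"b"}), frozenset())
print(stat(overlay(S1, S2)) == stat(overlay(S1_alt, S2)))  # True
```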

    The previous lemma shows that the static parts are independent from the dynamic parts, and Property 2.8.1 enforces an ordering constraint on the static part. From this it follows in a natural way that the scene operations have a very simple ordering interpretation when the dynamic parts are ignored. This is made precise in the following corollary.


    Corollary 2.13 (Scene operations as order-preserving maps) A scene operation F of arity n over the domain A = (A, ≤, ⊔, ⊥) (as in Definition 2.6) is, when ignoring the dynamic parts, an order-preserving mapping from the semi-lattice¹ ⟨A, ≤⟩^n to the semi-lattice ⟨A, ≤⟩.

    That is, there exists an order-preserving mapping ϕ : A^n → A from the semi-lattice ⟨A, ≤⟩^n to the semi-lattice ⟨A, ≤⟩ such that

        ∀(S_1, …, S_n ∈ S_A): stat(F(S_1, …, S_n)) = ϕ(stat(S_1), …, stat(S_n)). ◊

    Proof This ϕ can be constructed easily. For all d_1, …, d_n ∈ A let

        ∀(s_1, …, s_n ∈ A): ϕ(s_1, …, s_n) = stat(F((s_1, d_1), …, (s_n, d_n))).

    This defines ϕ uniquely, since ϕ is independent of the values of d_1, …, d_n ∈ A by Lemma 2.12 above.

    Now ϕ can be shown to be order-preserving by applying Property 2.8.1. For all s_1, s'_1, …, s_n, s'_n ∈ A such that s_i ≤ s'_i for all i ∈ {1, …, n},

        ϕ(s_1, …, s_n) = stat(F((s_1, d_1), …, (s_n, d_n)))       (construction of ϕ)
                       ≤ stat(F((s'_1, d'_1), …, (s'_n, d'_n)))   (≤ due to Property 2.8.1)
                       = ϕ(s'_1, …, s'_n)                          (construction of ϕ)

    for arbitrary d_1, d'_1, …, d_n, d'_n ∈ A. This concludes the proof. □

    2.3 Progressing Scene Operations

    In this section, the definitions that form the core concepts of this thesis are made. It merges the concepts of Section 2.1 and Section 2.2, yielding progressing operations over scenes. The combination is not completely trivial: as was already discussed in Section 1.4.3, the operations need to respect the scene operation properties when progressing.

    There are several examples of progressing scene operations in the examples chapter, starting with Section 3.3.

    2.3.1 Definitions

    The obvious idea for combining scene operations and progressing operations is to simply replace the function in a progressing operation with a scene operation. This is indeed half the puzzle. However, another property has to be added to make sure that the operation does not violate the scene operation properties when it progresses. We have to impose a requirement similar to the static monotonicity property (Property 2.8.1), modified to apply to progression instead of changing arguments.

    ¹Recall that the Cartesian product of a semi-lattice is also a semi-lattice. ⟨A, ≤⟩^n is defined as ⟨A^n, ⪯⟩ such that for all a, b ∈ A^n, where a = (a_1, …, a_n) and b = (b_1, …, b_n), it holds that a ⪯ b if and only if a_1 ≤ b_1, …, a_n ≤ b_n. See for example [DP02].

    Definition 2.14 (Progressing scene operation) A progressing scene operation (PSO) of arity n ∈ ℕ over A is a pair P = (F, Π) where

    – F is a scene operation of arity n (as in Definition 2.8),


    – Π is a set of progressing scene operations of arity n over A,

    such that the progression monotonicity property is fulfilled.

    Property 2.14.1 (Progression monotonicity)

        ∀(π ∈ Π): ∀(S_1, …, S_n ∈ S_A): stat(F(S_1, …, S_n)) ≤ stat(F_π(S_1, …, S_n)). □

    The components F and Π of a PSO P are often denoted by F_P and Π_P respectively. The set of all progressing scene operations over A of arity n is denoted by PSO_A^[n]. Both the arity n and the domain A may be omitted when irrelevant or arbitrary. ◊
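A finite instance of Definition 2.14 over the scene domain of sets can be sketched as follows: a unary operation that contributes a placeholder only dynamically, and that may progress to an operation which has committed that content to the static part. All names are illustrative:

```python
# A unary progressing scene operation, represented as a pair
# (scene operation, tuple of progressions) over the set domain.
def stat(S):
    return S[0]

def rough(S):
    """Initial operation: adds "box" only to the dynamic part."""
    s, d = S
    return (s, d | {"box"})

def final(S):
    """After progressing: "box" has become static."""
    s, d = S
    return (s | {"box"}, d)

P = (rough, ((final, ()),))  # rough may progress to the finished final

# Progression monotonicity (Property 2.14.1) on a sample scene: the static
# part of the result can only grow along a progression step.
S = (frozenset({"a"}), frozenset())
print(stat(rough(S)) <= stat(final(S)))  # True
```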

    Defining the composition of PSO's is not very hard; it is largely the same as for progressing operations (Definition 2.5). Instead of functions it now involves PSO's, and so indirectly invokes the definition of composition of scene operations (Definition 2.9).

    Definition 2.15 (Compositions of PSO's) The single-argument composition of two progressing scene operations (as in Definition 2.14) P ∈ PSO_A^[n] and Q ∈ PSO_A^[m] at argument k (where 1 ≤ k ≤ n) is denoted P ∘_k Q and defined by

        P ∘_k Q = (F_P ∘_k F_Q, {P ∘_k R' | R' ∈ Π_Q} ∪ {R' ∘_k Q | R' ∈ Π_P}),

    where the composition of the scene operations F_P and F_Q is as in Definition 2.9 (ordinary function composition).

    For progressing scene operations P ∈ PSO_A^[n] and Q_1 ∈ PSO_A^[m_1], …, Q_n ∈ PSO_A^[m_n] the composition P ∘ (Q_1, Q_2, …, Q_n) is defined as ((…(P ∘_n Q_n) ∘_{n−1} Q_{n−1}) ∘_{n−2} …) ∘_1 Q_1 (which gives it the arity ∑_{k=1}^n m_k). ◊

    The composition is of course intended to construct a PSO from two component PSO's. There are several properties to be fulfilled before something is known to be a proper PSO, so a lemma with a proof may be in order.

    Lemma 2.16 (PSO_A is closed under composition) For any P ∈ PSO_A^[n], Q ∈ PSO_A^[m] and k ∈ {1, …, n} we have that (P ∘_k Q) ∈ PSO_A^[n+m−1]. ◊

    Proof Given some P ∈ PSO_A^[n], Q ∈ PSO_A^[m] and k ∈ {1, …, n}, let R = P ∘_k Q be as in Definition 2.15. To prove that R ∈ PSO_A^[n+m−1], there are two things to show:

    – F_R ∈ SO_A^[n+m−1] (implying that it has Properties 2.8.1 and 2.8.2) and all elements of Π_R fulfill the progression monotonicity property (Property 2.14.1),

    – all π ∈ Π_R are progressing scene operations (π ∈ PSO_A^[n+m−1]), the recursive case.

    Showing that F_R ∈ SO_A^[n+m−1] is trivial, since F_R = F_P ∘_k F_Q by Definition 2.15 and both F_P ∈ SO_A^[n] and F_Q ∈ SO_A^[m] by Definition 2.14. The definition of scene operation compositions (Definition 2.9) together with Lemma 2.10 then gives F_R ∈ SO_A^[n+m−1]. Showing the progression monotonicity property is slightly more complex. For all π ∈ Π_R there are two cases (arising from the union in Definition 2.15):

    – π = P ∘_k Q' where Q' ∈ Π_Q. Let S_1, …, S_{n+m−1} ∈ S_A, S = F_Q(S_k, …, S_{k+m−1}) and S' = F_{Q'}(S_k, …, S_{k+m−1}). Since Q has the progression monotonicity property it is known that stat(S) ≤ stat(S').


    By the definition of composition,

        F_R(S_1, …, S_{n+m−1}) = (F_P ∘_k F_Q)(S_1, …, S_{n+m−1})
                               = F_P(S_1, …, S_{k−1}, S, S_{k+m}, …, S_{n+m−1}),    (2.1)

    and

        F_π(S_1, …, S_{n+m−1}) = (F_P ∘_k F_{Q'})(S_1, …, S_{n+m−1})
                               = F_P(S_1, …, S_{k−1}, S', S_{k+m}, …, S_{n+m−1}).   (2.2)

    Since F_P is a scene operation and therefore has the static monotonicity property (Property 2.8.1), combining equations (2.1) and (2.2) yields

        stat(F_R(S_1, …, S_{n+m−1})) = stat(F_P(S_1, …, S_{k−1}, S, S_{k+m}, …, S_{n+m−1}))
                                     ≤ stat(F_P(S_1, …, S_{k−1}, S', S_{k+m}, …, S_{n+m−1}))
                                     = stat(F_π(S_1, …, S_{n+m−1})).

    – π = P' ∘_k Q where P' ∈ Π_P. Let S_1, …, S_{n+m−1} ∈ S_A and S = F_Q(S_k, …, S_{k+m−1}). The definition of composition and progression monotonicity (for P, which is known to be a PSO) give

        stat(F_R(S_1, …, S_{n+m−1})) = stat((F_P ∘_k F_Q)(S_1, …, S_{n+m−1}))
                                     = stat(F_P(S_1, …, S_{k−1}, S, S_{k+m}, …, S_{n+m−1}))
                                     ≤ stat(F_{P'}(S_1, …, S_{k−1}, S, S_{k+m}, …, S_{n+m−1}))
                                     = stat((F_{P'} ∘_k F_Q)(S_1, …, S_{n+m−1}))
                                     = stat(F_π(S_1, …, S_{n+m−1})),

    that is, stat(F_R(S_1, …, S_{n+m−1})) ≤ stat(F_π(S_1, …, S_{n+m−1})).

    The only remaining part is to show that all π ∈ Π_R are also progressing scene operations. Each π ∈ Π_R is, by the definition of composition, of one of the following two forms:

    – π = P' ∘_k Q with P' ∈ Π_P. P' is then a progressing scene operation by the definition of PSO's.

    – π = P ∘_k Q' with Q' ∈ Π_Q. Q' is then a progressing scene operation by the definition of PSO's.

    Since all π ∈ Π_R are compositions of PSO's, the above proof applies recursively to them, making all π ∈ Π_R PSO's. □

    2.4 Tree-based operations

    This section will define progressing (scene) operations based on tree grammars and algebras. The class of allowable grammars is restricted to those that use the traditional kind of nonterminal symbol replacement. In addition, some symbols of rank 0 are added to represent the arguments to the operation.


    There are separate subsections for progressing operation tree grammars (Section 2.4.2) and progressing scene operation tree grammars (Section 2.4.3). The definitions for progressing operations generated by tree grammars will in fact merely be a simplification of the corresponding definitions for progressing scene grammars. It is therefore safe to skip the former and proceed directly to the progressing scene operations if that is the case of interest.

    There are examples of tree-based progressing (scene) operations in the examples chapter, from Section 3.3 onwards.

    2.4.1 Common definitions for tree grammars