FORMAL MODELS OF TIMED MUSICAL PROCESSES
GERARDO M. SARRIA M.
UNIVERSIDAD DEL VALLE
COLOMBIA
2008
FORMAL MODELS OF TIMED MUSICAL PROCESSES
GERARDO M. SARRIA M.
A Dissertation submitted to
the Faculty of Engineering of the
Universidad del Valle
in partial fulfillment of the requirements
for the degree of
Doctor of Engineering
Advisor: CAMILO RUEDA
Co-Advisor: JUAN FRANCISCO DÍAZ
UNIVERSIDAD DEL VALLE
COLOMBIA
2008
To my mother and my uncle Luis Alberto.
Thanks to them I was able to reach this high point of my career.
Acknowledgements
I have always taken life as it comes. One event leads to another, and another and
so on. In this chain of events, I am so grateful to have met Camilo Rueda, because
with him I found a real figure to follow, a person to admire, and a man with whom
I can talk, discuss and drink wine. He guided me to the road I always wanted to
walk: music and programming. It has been a privilege to work with him.
I am also indebted to all my teachers, in particular to Juan Francisco Díaz, Martha
Millán, Gabriel Tamura and Fernando Mejía. They have all my admiration.
Many thanks to Carlos Agon and Gérard Assayag. I won’t forget their hospitality
and help when I lived in France. They oriented me and showed me the way to this
dissertation.
I would also like to thank all the people at Ircam, especially Karim Haddad, Moreno
Andreatta, Jean Bresson, Benoît Meudic, Olivier Lartillot-Nakamura, Hugues Vinet,
François Déchelle, Patrice Tisserand, Charlotte Truchet, Bennett Smith, Mikhail
Malt, Sylvie Benoit, Didier Perini, Line Dao, Alexandra Magne and Bernard Stiegler
(some of them are no longer at Ircam but I remember them all).
Thanks to Universidad del Valle and Pontificia Universidad Javeriana, in particular
to the folks in the Avispa research group, Frank Valencia, Luis Omar Quesada, Salim
Perchy, Jorge Perez and my university colleagues.
Furthermore, I would like to thank my closest friends, José Diago, Javier Díaz,
Andrés Felipe Llanos, Carlos Fernández, Carlos Vacca, René Zúñiga (R.I.P.), Antal
Buss, Antonio Matta, Paula Marcela Gallego, Sandra Ximena Acosta, and Darío
Victoria.
I am also grateful to my brother, my father, my wife, my son Juan Martín and all
my family, especially my mom and my uncle Luis Alberto, who have supported me
from the beginning and pushed me toward a life of study and success. This
dissertation is dedicated to them.
Contents

List of Tables
List of Figures

1 Introduction
  1.1 Motivation
    1.1.1 The Problem
    1.1.2 The Solution
    1.1.3 This Dissertation
  1.2 Background
    1.2.1 Computer Music
    1.2.2 Process Calculi
    1.2.3 Operational Semantics
    1.2.4 Denotational Semantics
    1.2.5 Temporal Logics
    1.2.6 Semantic Models of Concurrency
  1.3 Organization

2 Formal Models
  2.1 λ-Calculus
    2.1.1 Semantics
    2.1.2 Applications
    2.1.3 Considerations
  2.2 Communicating Sequential Processes (CSP)
    2.2.1 Semantics
    2.2.2 Applications
    2.2.3 Considerations
  2.3 π-Calculus
    2.3.1 Semantics
  2.4 PiCO
    2.4.1 Semantics
    2.4.2 Applications
    2.4.3 Considerations
  2.5 Block-Diagram Algebra
    2.5.1 Semantics
    2.5.2 Applications
    2.5.3 Considerations
  2.6 CC Family
    2.6.1 Semantics
    2.6.2 Extensions
  2.7 The ntcc Calculus
    2.7.1 Operational Semantics
    2.7.2 Denotational Semantics
    2.7.3 Linear-Temporal Logic
    2.7.4 Applications
    2.7.5 Considerations
  2.8 Summary and Related Work

3 The rtcc Calculus
  3.1 Real-Time and Preemption
  3.2 Operational Semantics
    3.2.1 Constraint System
    3.2.2 Process Syntax
    3.2.3 Transition System
    3.2.4 Properties
    3.2.5 Observable Behaviour
  3.3 Denotational Semantics
    3.3.1 Denotations
    3.3.2 Information, Resources and Time
    3.3.3 Soundness and Completeness of the Denotations
  3.4 Real-Time Logic
    3.4.1 Transition System without Resources
    3.4.2 Logic Syntax & Semantics
    3.4.3 Proof System
  3.5 Encoding ntcc into rtcc
  3.6 Summary and Related Work

4 Applications
  4.1 Maquettes
  4.2 Interactive Scores
  4.3 Factor Oracle
  4.4 Summary and Related Work

5 Concluding Remarks and Future Work

Bibliography
List of Tables

1.1 Syntax of CCS
1.2 Transition Rules of CCS
2.1 Syntax of the λ&-Calculus
2.2 Reduction Rules of the λ&-Calculus
2.3 Syntax of CSP
2.4 Transition Rules of CSP
2.5 Syntax of the π-Calculus
2.6 Transition Rules of the π-Calculus
2.7 Syntax of PiCO
2.8 Transition Rules of PiCO
2.9 Syntax of the Block-Diagram Algebra
2.10 Graph and Algebraic Representation of Processes
2.11 Syntax of ccp
2.12 Transition Rules of ccp
2.13 Denotational Semantics for the Determinate ccp Language
2.14 Syntax of tcc
2.15 Transition Rules of tcc
2.16 Syntax of ntcc
2.17 Transition Rules of ntcc
2.18 Denotational Semantics of ntcc
2.19 Proof System of ntcc
3.1 Syntax of the rtcc Processes
3.2 Transition Rules of rtcc
3.3 Denotational Semantics of rtcc
3.4 Transition Rules of rtcc without Resources
3.5 Proof System of rtcc
List of Figures

1.1 A parallel switch
1.2 Configurations of the event structure for the parallel switch
2.1 Creation and Use of Triple Function in Elody
2.2 The Elody Conceptual Diagram
2.3 The TimeLine Editor of Elody
2.4 OM Patch to Create a Song of Salsa
2.5 OM Meta-object Classes Hierarchy
2.6 An Example of an OM Maquette
2.7 Example of a Patch in MAX
2.8 A Primitive in MAX
2.9 A Connection in MAX
2.10 The Cordial Editor
2.11 A Method Editor in Cordial
2.12 Example of a Noise Generator in Faust
2.13 A CCP Store Accessed by Four Agents
2.14 Reactive Systems
2.15 Hierarchy of the CC Family
2.16 Nzakara Formulae in Cannon
4.1 Maquette as a Static Score
4.2 Dialog Box for Metric Setting in the Maquettes
4.3 Constraints in Maquettes
4.4 Allen’s Relations
4.5 Factor Oracle automaton for w = abbbaa
Abstract
In recent decades several formal models have been proposed to formalize various
musical applications, to solve musical and improvisation problems, and to prove
properties in music. In this document we briefly describe some of those formal models
and their applications, and we offer some considerations about them. We focus on the
CCP calculi and especially on the ntcc calculus, a model of temporal concurrent
constraint programming with the capability of expressing asynchronous and
non-deterministic timed behaviour. This calculus has been used as a convenient
formalism for expressing temporal musical processes.
Additionally, in this thesis we propose a model of real-time concurrent constraint
programming, called rtcc, which adds to ntcc the means for specifying and modeling
real-time behaviour. This calculus is provided with a new construct for modeling
strong preemption and another one for defining delays within the same time unit.
Together with these new features we provide an operational semantics supporting
resources, limited time and true concurrency.
A denotational semantics based on Chu spaces is also provided. The lack of mono-
tonicity derived from the new constructs, together with the inclusion of resources and
time in the operational semantics, precludes defining the denotations of processes in
terms of quiescent points, as is usual in CCP calculi such as ntcc.
This dissertation also introduces a real-time logic for expressing temporal specifica-
tions of rtcc processes. An inference system is defined to prove that a process in
the calculus satisfies a formula in the logic.
Finally, we show the applicability of the rtcc calculus in music and multimedia
interaction. We formalize the notion of the OpenMusic Maquette as the main application.
Resumen
In recent decades many formal models have been proposed to formalize various
musical applications, to solve musical and improvisation problems, and to prove
properties in music. In this document we briefly describe some of these formalisms
and their applications, and we offer some remarks about them. We concentrate on
the CCP-based calculi and especially on the ntcc calculus, a model of temporal
concurrent constraint programming with the capability of expressing asynchronous
and non-deterministic timed behaviour. This calculus has been used as a convenient
formalism for expressing temporal musical processes.
Additionally, in this thesis we propose a model of real-time concurrent constraint
programming, called rtcc, which adds to ntcc the means for specifying and modeling
real-time behaviour. This calculus provides a new construct for modeling the ability
to stop one process and yield to another (strong preemption) and another construct
for defining delays (delta delays) within the same time unit. Together with these new
features we provide an operational semantics supporting resources, limited time and
true concurrency.
We also provide a denotational semantics based on Chu spaces. The lack of
monotonicity derived from the new constructs, together with the inclusion of resources
and time in the operational semantics, precludes defining the denotations of processes
in terms of quiescent points, as is usual in CCP-based calculi such as ntcc.
This dissertation also introduces a real-time logic for expressing temporal
specifications of rtcc processes. An inference system is defined to prove that a
process in the calculus satisfies a formula in the logic.
Finally, we show the applicability of the rtcc calculus in music and multimedia
interaction. We formalize the notion of the Maquette of the OpenMusic programming
language as the main application.
1 Introduction
“Research is to see what everybody else has seen, and think what nobody else has
thought.”
– Anonymous
This dissertation presents a study of calculi for timed musical processes, mathematical
models of computation for music. Our thesis is that, given a real-time process
calculus, we can model and express real-time musical processes together with their
properties and operations.
1.1 Motivation
In this section we present the motivation for undertaking this research and for
developing a new formal model.
1.1.1 The Problem
Musical composition, performance and improvisation are complex tasks. They demand
defining and controlling real-time concurrent activities. We can see musical
objects as structures with various dimensions. In a horizontal dimension, for
example, time becomes a tight notion in which musical objects such as notes or chords
are constrained. The position of each object defines the relative order of musical
events with respect to each other, forming, for instance, rhythmic patterns. In a
vertical dimension we can perceive the simultaneity of events, and also musical objects
such as voices running in parallel, building, for example, patterns of harmony.
We think that the complexity of musical processes poses a great challenge to any
computational formalism. The development of computational models and tools for
musical systems has increased in recent decades. For this reason, defining a simple
and expressive formal model, providing the techniques needed to reason about
musical properties, can be of great help in constructing meaningful musical processes,
the basis of higher-level musical applications.
1.1.2 The Solution
We regard these musical tasks as concurrent activities, suitable for modeling with
concurrency formalisms such as process calculi. These models provide mathematical
foundations for concurrent computation. They also provide a simple yet powerful
framework for expressing the coordination of events, the algebraic treatment of
processes, and operators to denote a system’s behaviour.
Although models for sequential processes such as the λ-calculus have been successful
in the study of some foundations of computer science, they seem to be inappropriate
for dealing with interaction of processes and concurrent systems. We can quote
Robin Milner from [Mil93]:
“In looking for basic notions for a model of concurrency it is therefore
probably wrong to extrapolate from λ-calculus, except to follow its example
in seeking something small and powerful. (Here is an analogy: Music is
an art form, but it would be wrong to look for an aesthetic theory to cover
all art forms by extrapolation from music theory.)”
We may then take advantage of the evolution of formal models for concurrency,
extending those general formalisms and making them suitable for a particular area
of knowledge.
1.1.3 This Dissertation
Vijay A. Saraswat in [Sar93] proposed concurrent constraint programming (ccp) as
a model for specifying concurrent systems in terms of constraints, and also the tcc
calculus [SJG94a], aimed at programming and modeling timed, reactive systems.
The tcc model extends ccp with the notion of time as a sequence of time units.
The temporal concurrent constraint programming (ntcc) model [PV01] is a for-
malism with the capability of expressing asynchronous and non-deterministic timed
behaviour. It extends the tcc calculus with the notions of asynchrony and non-
determinism. With this formalism, patterns of temporal behaviour such as “a musi-
cian must play a note within the next t time units” can be expressed. Therefore it is
a convenient model for expressing temporal music processes as was shown in [RV01].
However, it may not be adequate for complex musical improvisation problems, due
to the real-time requirements of these systems.
This dissertation studies, among all the formal models used in music, real-time
concurrent constraint programming as a model for real-time systems, by means of an
extension of ntcc. This extension is called the rtcc calculus. It extends ntcc with
the notions of strong preemption, delta delay, and bounded concurrency and execution,
and it provides a new approach to denotational semantics based on Chu spaces (see
section 1.2.6 for a brief description of Chu spaces).
The notions named above are essential for modeling real-time process behaviour.
We can quote Gérard Berry from [Ber93]:
“Process preemption deals with controlling the life and death of concurrent
processes. Well-defined preemption mechanisms are essential in control-
dominated reactive and real-time programming and accurate handling of
preemption requires a time-dependent model.”
and Mikael Buchholtz et al. from [BAL02]:
“To argue about correctness of a real-time system both the functional and
the timely behaviour of the system have to be taken into account.”
The novelty of the rtcc calculus is that it combines in one framework a process
calculus based on ntcc with ideas from synchronous concurrent programming languages
[BG92, HCRP91, GGBM91], with ideas from algebras with bounded concurrency
[Gru96, BGL97, BAL02], with a special view of processes based upon the model
of Chu spaces, and with a real-time logic. This combination in a single process
calculus thus provides a set of tools for modeling real-time musical processes, their
applications and their properties: the thesis of this dissertation.
Concretely, the important goals and features of this work can be summarized as
follows:
A survey of formal models for music. We review some of the formal models
that have been proposed for expressing a variety of musical applications from
a computation perspective.
Development of rtcc. We develop a new calculus, justified by the lack of a formalism
capable of expressing both reactive and real-time systems for music.
Applications. We illustrate the potential of the new calculus by formalizing a well-
known musical tool and a musical improvisation process.
The author believes that these results contribute to closing the gap between the
composition/performance/improvisation process and the foundations of formal models.
1.2 Background
In this section we give a brief description of those concepts relevant to the thesis
such as computer music, process calculi, semantics of programming languages and
models of concurrency.
1.2.1 Computer Music
The use of computing machinery in music has its first occurrence in Western music
in the 13th century, when a perforated card was introduced into a musical machine
to make it play automatically (see [Roa85] for the complete historical note). At the
end of the 19th century, as Peter Hanappe mentions in [Han99], Ada Lovelace realized
that the unfinished computing machine designed by Charles Babbage would be able
to manipulate symbols as well as numbers, and that it could therefore become a
composing machine for several disciplines, including music. Then, in the first decades
of the twentieth century, Joseph Schillinger [Sch41] predicted the use of computers in
musical composition. Later, in the 1950s, the well-known Lejaren Hiller, together with
Leonard Isaacson, generated the composition Illiac Suite for String Quartet, which
gave rise to computer music theories and to musical research and engineering on
automatic, or algorithmic, composition.
The software Musicomp (Music Simulator Interpreter for Compositional Procedures)
is perhaps the first software designed to assist composition [Ass98]. It was created
by Robert Baker around 1963 with the expertise of Hiller. After this, with
technological advances such as digital audio, personal computers, graphical interfaces,
standards like MIDI and, above all, programming languages, the computer music
paradigm has gradually been defined.
In recent years, research in computer music has centered its efforts on providing
higher-level expressions for music representation, synthesis and real-time control.
Along the way, languages and tools have been created. They provide many necessary
data abstraction and control-flow paradigms, such as convenient methods for handling
the flow of time, the structural organization of musical materials, music data
representation, hierarchy, machine improvisation and related style simulation, and
concurrency.
1.2.2 Process Calculi
Process calculi (also known as process algebras) are formalisms developed mostly
for modeling concurrent systems. They were created to build algebraic theories of
concurrency as a basis for structural methods of describing concurrent systems.
The pioneering calculi in this field are Milner’s CCS [Mil80], Bergstra’s ACP [BK84],
and Hoare’s CSP [Hoa85].
The definition of a process calculus includes a language with basic operators, each
one with a distinct and fundamental role. These operators include a null process
to denote inactivity, sequential composition to organize processes, parallel composition
for simultaneity and independence, communication for the interaction between
processes, summation for expressing alternate courses of computation, restriction
(or hiding, or locality) to delimit the interaction of processes, and recursion and/or
replication to allow finite descriptions of infinite behaviour.
The process language is often given in an inductive way. It is common to use Backus-
Naur Form (BNF) to describe the syntax of the process language. For instance, the
syntax of CCS is given by the grammar in table 1.1.
P, Q, ... ::= K              Constant Process
            | α.P            Action
            | ∑i∈I Pi        Summation
            | P | Q          Parallel Composition
            | P[f]           Renaming
            | P / L          Restriction

Table 1.1: Syntax of CCS
Here K ∈ K (the countably infinite set of process names), α is an action in the set Act
(α denotes an input, ᾱ the complementary output, and τ an unobservable internal
action), I is an index set, L is a set of labels, and f : Act → Act is a relabelling
function satisfying the following constraints: f(τ) = τ, and f maps the complement ᾱ
of each label a to the complement of f(a).
P, Q, ... are terms of the process language, also called processes, agents or expressions.
The constant process 0 stands for the empty sum of processes, i.e. 0 = ∑i∈∅ Pi, and
P1 + P2 stands for the sum of two processes, i.e. P1 + P2 = ∑i∈{1,2} Pi.
1.2.3 Operational Semantics
Operational semantics was introduced by Gordon Plotkin in [Plo81] to define
the states in which programs can be during execution. This semantics is so called
because it is dynamic: it views a system as a sequence of operations. Each
occurrence of an operation is called a transition. A transition system is a structure
⟨Γ, →⟩, where Γ is a set of configurations γ, and → ⊆ Γ × Γ is a transition relation.
The notation γ → γ′ denotes the transition from configuration γ to configuration γ′.
Transitions are often divided into internal and external ones, depending on the
system’s behaviour. Normally, external transitions are denoted by ⟹.
A labelled transition system is a structure ⟨Γ, A, →⟩, where Γ is a set of configurations,
A is a set of action labels and → ⊆ Γ × A × Γ is a transition relation. We write
γ −a→ γ′ for the transition where γ and γ′ are configurations and a is an action. This
action provides information about the behaviour of the transition (internal actions)
or about the interaction between the system and its environment (external actions).
The relations −a→ are defined to be the smallest relations obeying rules of the form:

    Conditions
    ──────────
    Conclusion
A rule states that whenever the given conditions have been obtained in the course
of some derivation, the specified conclusion may be taken for granted as well. For
instance, the operational semantics of CCS is introduced in table 1.2. A transition
P −α→ Q holds for CCS expressions P, Q iff it can be proven using these rules.
    P −α→ P′
   ──────────   (K ≝ P)
    K −α→ P′

   ────────────
    α.P −α→ P

    Pj −α→ P′j
   ──────────────────   (j ∈ I)
    ∑i∈I Pi −α→ P′j

    P −α→ P′
   ─────────────────────
    P | Q −α→ P′ | Q

    Q −α→ Q′
   ─────────────────────
    P | Q −α→ P | Q′

    P −a→ P′    Q −ā→ Q′
   ───────────────────────
    P | Q −τ→ P′ | Q′

    P −α→ P′
   ─────────────────────────
    P[f] −f(α)→ P′[f]

    P −α→ P′
   ─────────────────────   (α, ᾱ ∉ L)
    P / L −α→ P′ / L

Table 1.2: Transition Rules of CCS
The first rule says that if K is defined as the constant name of the process P, and this
process evolves to a process P′ by performing α, then we can say that K evolves to
P′ by performing α. The second rule is very intuitive: it states that an action process
α.P simply evolves to P by performing α. The rule for choice states that a summation
process non-deterministically chooses one process to evolve (among those which have
the capability to proceed) and preempts the execution of the others. The first and
second rules for parallel composition describe the concurrent performance of the
processes P and Q separately; the third rule describes communication between
two processes acting in parallel, whose resulting transition is unobservable (labelled τ).
Finally, the remaining rules say that renaming and restriction are preserved by
transitions.
The rules in a transition system define the valid transitions in the system modeled.
They also permit the study of relations between elements of the system such as
bisimulation.
Bisimulation is often essential to the semantics of languages; it can be defined in a
few words as a semantic equivalence of systems in which one system simulates the
other and vice versa. A term P simulates a term Q if, for every term P′ and action a
such that P −a→ P′, there is a term Q′ such that Q −a→ Q′ and P′ simulates Q′.
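To make this definition concrete, strong bisimilarity on a finite labelled transition system can be computed as a greatest fixed point by iterated refinement: start from the full relation on states and repeatedly discard pairs that fail the matching condition. The sketch below is ours (the function name `bisimilar` and the encoding of transitions as triples are hypothetical conventions):

```python
def bisimilar(states, transitions, p, q):
    """Naive greatest-fixed-point computation of strong bisimilarity.

    `transitions` is a set of triples (state, action, state).
    """
    def succ(s, a):
        return {t for (s2, a2, t) in transitions if s2 == s and a2 == a}

    actions = {a for (_, a, _) in transitions}
    rel = {(x, y) for x in states for y in states}  # start from the full relation
    changed = True
    while changed:
        changed = False
        for (x, y) in set(rel):
            ok = True
            for a in actions:
                # every a-move of x must be matched by a related a-move of y ...
                if not all(any((x2, y2) in rel for y2 in succ(y, a))
                           for x2 in succ(x, a)):
                    ok = False
                # ... and vice versa
                if not all(any((x2, y2) in rel for x2 in succ(x, a))
                           for y2 in succ(y, a)):
                    ok = False
            if not ok:
                rel.discard((x, y))
                changed = True
    return (p, q) in rel

# Classic example: a.(b.0 + c.0) versus a.b.0 + a.c.0.
# The two have the same traces but are NOT bisimilar.
states = {'P', 'P1', 'Q', 'Qb', 'Qc', 'stop'}
trans = {('P', 'a', 'P1'), ('P1', 'b', 'stop'), ('P1', 'c', 'stop'),
         ('Q', 'a', 'Qb'), ('Q', 'a', 'Qc'),
         ('Qb', 'b', 'stop'), ('Qc', 'c', 'stop')}
print(bisimilar(states, trans, 'P', 'Q'))  # False
```

The example shows why bisimulation is finer than trace equivalence: after the a-step, P can still choose between b and c, while Q has already committed to one of them.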
1.2.4 Denotational Semantics
This is probably the most influential and longest-established paradigm for the
semantics of computation. It was invented by Christopher Strachey and Dana Scott
[SS71]. Unlike an operational semantics, which emphasizes how a process is
evaluated, a denotational semantics focuses on what the meaning of a process is.
That is, this semantics is based on the static structure of the process rather than
on some sort of dynamically changing configuration.
A denotational semantics determines the meaning of a process in a compositional
way. This allows one to reason about denotations by breaking them down into their
simpler known parts. It also makes it much easier to prove properties of the
semantics, since structural induction can be used.
A process is now viewed through a mathematical function ⟦ ⟧ that associates its
syntax in the process algebra with an abstract object (its denotation, or meaning).
Each process denotes an abstract object under this function. CCS, for instance, was
equipped in [Win82] with a denotational semantics based on event structures (see
section 1.2.6 for a brief description of event structures). A simpler example is
the language of natural numbers. The syntax of this language is
Exp ∶∶= 0 ∣ succ Exp
In this language the value of any expression is an abstract representation of a natural
number (similar to the Church numerals); its meaning, however, is more complex,
because the value may depend on the state. The meaning of an expression is then a
function that, when applied to the current state, gives the value of the expression
relative to that particular state:

⟦0⟧ = 0
⟦succ Exp⟧ = ⟦Exp⟧ + 1
Note the difference between the symbol 0 (which is part of the language) and the
mathematical concept of zero, 0, chosen to denote the abstract object of the language.
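The two semantic equations above translate directly into a compositional interpreter. The following sketch is ours (the term representation and the name `meaning` are hypothetical); since this particular language is state-free, the state parameter can be omitted:

```python
# Syntax: Exp ::= 0 | succ Exp, represented as 'zero' or a nested tuple ('succ', e).
def meaning(exp):
    """Compositional meaning function [[ . ]] for the Exp language."""
    if exp == 'zero':
        return 0                    # [[0]] = 0
    tag, sub = exp
    assert tag == 'succ'
    return meaning(sub) + 1         # [[succ Exp]] = [[Exp]] + 1

three = ('succ', ('succ', ('succ', 'zero')))
print(meaning(three))  # 3
```

Compositionality is visible in the code: the meaning of `('succ', e)` is computed only from the meaning of the sub-term `e`, which is exactly what licenses proofs by structural induction.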
The function ⟦ ⟧ cannot be an arbitrary function; it must be a homomorphism between
the syntactic algebra and the semantic algebra. This makes it possible to connect the
properties of the denotational semantics with those of the operational semantics. To
validate this function, the soundness and completeness of the denotations must be
proven. Soundness of the denotations ensures that the denotations can accurately
capture all possible behaviours of the calculus (i.e. each transition in the operational
semantics can be matched by a corresponding denotation). Completeness of the
denotations is a much stronger property, which ensures that processes can only
perform valid execution steps; this means that each denotation corresponds to a valid
transition in the operational semantics.
If the properties of soundness and completeness hold, then the semantics is fully
abstract. Full abstraction is a desirable feature, since it states that two processes
have the same denotations precisely when they are observationally equivalent.
1.2.5 Temporal Logics
Amir Pnueli in [Pnu77] introduced temporal logics to computer science for specifying
and verifying correctness of computer programs and for reasoning about concurrent
programs.
Temporal logics extend propositional logic with the temporal operators ◻ (always),
◇ (eventually), ◯ (next) and U (until). Intuitively, ◻A, read "henceforth A" or
"always A", holds in all future moments, that is, in every state from now on; ◇A,
read "eventually A", holds at some moment in the future; ◯A, read "next A", holds
in the next state of the system; finally, A U B, read "A until B", holds from the
moment when A holds and keeps holding until the first occurrence of B.
The temporal operators allow the specification of a system in terms of logical formu-
las, including temporal constraints and events. They have been widely used for the
specification and verification of reactive and concurrent systems (in [Mar00] they are
well-studied in a musical environment).
In order to use a temporal logic as a tool for the specification of reactive systems
it is necessary to present the syntax and semantics of the logic. The syntax defines
the atomic symbols used to build formulas. A simple example of the syntax of a
linear-time temporal logic is:
A,B, . . . ∶∶= true ∣ false ∣ ¬A ∣ A ∨ B ∣ ◯A ∣ ◻A ∣ ◇A ∣ A U B

The semantics describes the desired behaviour or operation of the logic with respect
to a time structure. For example, following [MP92], a time structure could be a
sequence of states σ ∶ s0, s1, s2, . . ., where each state si is an interpretation of a finite
set of state variables, assigning to each variable a value over its domain. If this logic
is linear-time then each state defines a specific moment of time. The notation
⟨σ, j⟩ ⊧ A denotes that the sequence σ satisfies the formula A at time j. Thus, the
semantics is given by the models of the formulae. A sequence of states σ satisfies (or
is a model of) A, written σ ⊧ A, iff ⟨σ,0⟩ ⊧ A, where the satisfaction relation ⊧ is
inductively defined
as follows:
⟨σ, j⟩ ⊧ true
⟨σ, j⟩ ⊭ false
⟨σ, j⟩ ⊧ ¬A iff ⟨σ, j⟩ ⊭ A
⟨σ, j⟩ ⊧ A ∨ B iff ⟨σ, j⟩ ⊧ A or ⟨σ, j⟩ ⊧ B
⟨σ, j⟩ ⊧ ◯A iff ⟨σ, j + 1⟩ ⊧ A
⟨σ, j⟩ ⊧ ◻A iff ∀k ≥ j, ⟨σ, k⟩ ⊧ A
⟨σ, j⟩ ⊧ ◇A iff ∃k ≥ j s.t. ⟨σ, k⟩ ⊧ A
⟨σ, j⟩ ⊧ A U B iff ∃k ≥ j s.t. ⟨σ, k⟩ ⊧ B and ∀i, j ≤ i < k, ⟨σ, i⟩ ⊧ A

The following are some examples of formulas and their interpretations (recall that
the missing logic operators, ∧, Ô⇒ , and ⇐⇒ can be defined in terms of ∨ and ¬).
◻¬(A ∧ B). This formula states that A and B never hold simultaneously (i.e. there
is no state in which both hold).
◇◻¬A. Formula A only holds in finitely many states, that is, ¬A will hold perma-
nently after a finite transition period.
◻(A Ô⇒ ◻B). Whenever A holds, B must also hold in that state and in all fol-
lowing states.
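The satisfaction relation above can be sketched operationally. The program below is an illustrative Python approximation of ours: the logic in the text is defined over infinite state sequences, while this checker evaluates formulas over a finite prefix, and the tuple encoding of formulas is our own assumption:

```python
# Satisfaction relation of the linear-time temporal logic above, evaluated
# over a FINITE sequence of states (an approximation: the logic in the
# text is defined over infinite sequences). Formulas are nested tuples,
# e.g. ("box", ("not", "p")); atoms are state-variable names.

def holds(sigma, j, f):
    """(sigma, j) |= f over the finite state sequence sigma."""
    if f == "true":
        return True
    if f == "false":
        return False
    if isinstance(f, str):                      # atomic proposition
        return bool(sigma[j][f])
    op = f[0]
    if op == "not":
        return not holds(sigma, j, f[1])
    if op == "or":
        return holds(sigma, j, f[1]) or holds(sigma, j, f[2])
    if op == "next":                            # O A: next state
        return j + 1 < len(sigma) and holds(sigma, j + 1, f[1])
    if op == "box":                             # [] A: all k >= j
        return all(holds(sigma, k, f[1]) for k in range(j, len(sigma)))
    if op == "diamond":                         # <> A: some k >= j
        return any(holds(sigma, k, f[1]) for k in range(j, len(sigma)))
    if op == "until":                           # A U B
        return any(holds(sigma, k, f[2])
                   and all(holds(sigma, i, f[1]) for i in range(j, k))
                   for k in range(j, len(sigma)))
    raise ValueError(op)

sigma = [{"a": 1, "b": 0}, {"a": 1, "b": 0}, {"a": 0, "b": 1}]
print(holds(sigma, 0, ("until", "a", "b")))   # -> True
print(holds(sigma, 0, ("box", "a")))          # -> False
```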
Many of the temporal logics used in computer science (refer to [BMN00] for an
overview of them) can be classified basically into linear-time and branching-time,
interleaving and true concurrency, and discrete and continuous time. This disserta-
tion will deal with a linear temporal logic. For deeper theoretical aspects of temporal
logics the reader may see [Eme90], and [Hen91] for real-time systems.
1.2.6 Semantic Models Of Concurrency
A model of concurrency is a formal mathematical description used to specify the
semantics of computation and concurrency. It captures unambiguously the required
functionality of concurrent systems and makes it possible to verify the correctness
of the functional specification and properties of those systems.
Process calculi are examples of models of concurrency. In chapter 2 we will review
some process calculi. In this section, we will briefly describe two models of concur-
rency that will help us to define the denotational semantics of the calculus proposed
in this dissertation. Other models not included here are Petri Nets [Pet62], Higher
Dimensional Automata [Pra91], Partially Ordered Multisets (pomsets) [Pra86] and
Geometric Automata [Gun91].
Event Structures
An event structure is a model where the state of a system is given by the set of
events that have occurred. It was developed by Glynn Winskel et al. in [NPW79].
In this model, a process is represented by a set of event occurrences equipped with
two relations which describe how events are causally related and when the occurrence
of a certain event excludes others.
Formally, an event structure is defined as a tuple ⟨A,Con,⊢⟩, where A is a set of
events; Con, the consistency predicate, is a non-empty set of finite subsets of A
satisfying X ∈ Con ∧ Y ⊆ X ⇒ Y ∈ Con; and ⊢ is the enabling relation (a
subset of Con ×A), satisfying X ⊢ a ∧X ⊆ Y ⇒ Y ⊢ a.
A prime event structure ⟨A,≤,#⟩ consists of a set of events A, a partial order ≤⊆ A×A
and a conflict relation # ⊆ A ×A. If two events are in conflict, both of them cannot
happen in a single execution. For the event a to happen, all the events before it in the
partial order must have happened. Also, for all events a, b, c ∈ A, a#b∧ b ≤ c⇒ a#c.
A configuration of an event structure is a set of events which have occurred by some
stage in a process, that is, a conflict-free downward-closed set x ⊆ A: given a ∈ x, if
b ≤ a then b ∈ x (downward closure), and if a#b then b ∉ x (conflict-freeness).
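The configuration condition can be checked mechanically. The following Python sketch is ours; the tiny prime event structure used as an example (a ≤ b, a # c) is an illustrative assumption, not taken from the text:

```python
# A configuration of a prime event structure <A, <=, #> is a conflict-free,
# downward-closed set of events. The structure below (a <= b, a # c) is an
# illustrative example of ours.

def is_configuration(x, order, conflict):
    """order: pairs (a, b) meaning a <= b; conflict: pairs meaning a # b."""
    down = all(a in x for (a, b) in order if b in x)            # downward closure
    free = all(not (a in x and b in x) for (a, b) in conflict)  # conflict-freeness
    return down and free

order = {("a", "b")}          # a must happen before b
conflict = {("a", "c")}       # a and c cannot both happen

print(is_configuration({"a", "b"}, order, conflict))  # -> True
print(is_configuration({"b"}, order, conflict))       # -> False (b without a)
print(is_configuration({"a", "c"}, order, conflict))  # -> False (conflict)
```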
For example¹, given the electronic system with two switches (events 0 and 1) and a
bulb (event b) in figure 1.1, the configurations have the form of figure 1.2.

Figure 1.1: A parallel switch

Figure 1.2: Configurations of the event structure for the parallel switch
Closing either of the switches enables the event of the bulb lighting up.
¹Example taken from [Win86].
Chu Spaces
A Chu space [Gup94] is simply an ∣A∣ × ∣X∣ rectangular matrix over a given set K (the
alphabet). Formally, a Chu space can be viewed as a tuple C = ⟨A,X,R⟩ over a set
K, where A (the rows) is a set of events, X (the columns) is a set of states, and
R ∶ A ×X → K is a function constituting the matrix.
An event a ∈ A can be seen as a function a ∶ X → K, a state x ∈ X can also be
seen as a function x ∶ A → K, and it holds that ∀a ∈ A,x ∈ X . a(x) = x(a). When
K = {0,1}, a(x) = 1 means "event a has occurred in state x", and x(a) = 1 means
"state x has information about event a".
The following example (taken from [Pra99]) shows the complete Chu space of three
events.
Example 1.2.1. Taking K = {0,1}, the set A = {a, b, c} may be represented as the
Chu space:

      x0 x1 x2 x3 x4 x5 x6 x7
   a   0  1  0  1  0  1  0  1
   b   0  0  1  1  0  0  1  1
   c   0  0  0  0  1  1  1  1

In state x5, for instance, events a and c have occurred but event b has not yet oc-
curred.
The characteristic feature of a set is that its states permit each point (i.e. event)
independently to take on any value from K: all of the ∣K∣^∣A∣ (in this case 2³ = 8)
possible states are permitted. ∎
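The matrix of example 1.2.1 can be built programmatically; this Python sketch of ours enumerates the 2³ states and confirms the duality a(x) = x(a), which holds by construction in a matrix representation:

```python
# The Chu space of example 1.2.1: rows are the events {a, b, c}, columns are
# the 2^3 = 8 states. Ordering states as binary counting (with a as the most
# significant bit) matches the table: X[5] is the state x5 where a and c
# have occurred but b has not.
from itertools import product

A = ["a", "b", "c"]
# Every function A -> {0, 1} is a permitted state, since A is a plain set.
X = [dict(zip(A, bits)) for bits in product([0, 1], repeat=len(A))]

def event_as_function(a, x):   # a : X -> K, row a read at column x
    return x[a]

def state_as_function(x, a):   # x : A -> K, column x read at row a
    return x[a]

# The duality a(x) = x(a) holds trivially in this representation.
assert all(event_as_function(a, x) == state_as_function(x, a)
           for a in A for x in X)
print(len(X), X[5])  # 8 states; in x5, a and c occurred, b did not
```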
A Chu space is usually written as a pair (A,X), where X ⊆ K^A, with R being
implicit as the membership relation.
To consider the actions that a process executes, a labeled Chu space is defined as the
triple (A,X,λ), where λ ∶ A → Act is the labeling function (Act is a set of possible
actions).
Two Chu spaces are called equivalent when they differ only in the number of repe-
titions of rows and columns.
Chu spaces can be interpreted as computational processes [Pra95], taking A to
be a schedule of events distributed in time, and X to be an automaton of states
forming an information system. An event occurrence is then considered not as a
two-valued fact but as a three-valued affair: an event may be in one of three states
(not yet occurred, occurring now, and already occurred), that is, K = {0, ½, 1}.
Events in a process represented by a Chu space may be related to each other in the same
way as in prime event structures, that is, with relations ≤ and #. When an event in
a Chu space must occur before another event, for example a and b in the following
Chu space:

   a  0 1 1 1
   b  0 0 0 1

we write a ≤ b. On the other hand, when an event cannot occur concurrently with
another, as in the following process:

   a  0 1 0 0
   b  0 0 0 1

we write a#b. Now we will present some algebra of Chu spaces that will be used in
chapter 3 to describe the denotational semantics of the calculus proposed in this
document.
Process C₁ ∧ C₂ is defined as (A ∪ B, {z ∈ K^(A∪B) ∣ z↓A ∈ X ∧ z↓B ∈ Y}), for two Chu
spaces C₁ = (A,X) and C₂ = (B,Y) where X ⊆ K^A, Y ⊆ K^B and z↓A denotes the
restriction of z to A. Similarly, C₁ ∨ C₂ is defined as (A ∪ B, {z ∈ K^(A∪B) ∣ z↓A ∈
X ∨ z↓B ∈ Y}).
The sequence C₁ ; C₂, for two Chu spaces C₁ = (A,X) and C₂ = (B,Y), is defined as
C₁ ∧ C₂ ∧ ((B = 0) ∨ C̄₁). The notation B = 0 denotes the Chu space (B, {0^B})
having just the one all-zero state. In other words, if B = {b₁, . . . , bₙ}, then B = 0 is

   b₁  0
   ⋮   ⋮
   bₙ  0

Similarly, the notation B ≠ 0 denotes (B, X − {0^B}).
Additionally, C̄ denotes the Chu space in which all events of C have already happened;
in other words, the Chu space (A,X) with a single state x ∈ X such that for all a ∈ A,
x(a) = 1.
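The conjunction operation above can be sketched concretely. The following Python program is an illustration of ours, following the definition in the text; states are modeled as dicts from events to values, and the two example spaces are our own assumptions:

```python
# C1 /\ C2 of two Chu spaces (A, X) and (B, Y): events are A ∪ B, and a
# state is any assignment z over A ∪ B whose restriction to A is in X and
# whose restriction to B is in Y. Here K = {0, 1}.
from itertools import product

def restrict(z, events):
    """The restriction z|E of an assignment z to the events in E."""
    return {e: z[e] for e in events}

def conj(c1, c2, K=(0, 1)):
    (A, X), (B, Y) = c1, c2
    events = sorted(set(A) | set(B))
    all_states = [dict(zip(events, vals))
                  for vals in product(K, repeat=len(events))]
    Z = [z for z in all_states
         if restrict(z, A) in X and restrict(z, B) in Y]
    return (events, Z)

# C1: event a must occur before b; C2: a single unconstrained event c.
C1 = (["a", "b"], [{"a": 0, "b": 0}, {"a": 1, "b": 0}, {"a": 1, "b": 1}])
C2 = (["c"], [{"c": 0}, {"c": 1}])
events, Z = conj(C1, C2)
print(len(Z))  # -> 6: the 3 states of C1 combined freely with the 2 of C2
```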
1.3 Organization
A representative part of this dissertation was done jointly with Professor Camilo
Rueda, my advisor. In particular, the work in chapter 3 represents the development
of ideas from both of us. The rest of the document is organized as
follows:
In chapter 2 we will examine seven formal models used to formalize some musical
applications, to solve musical and improvisation problems and to prove properties
in music. We give some considerations for each formalism, remarking on strengths
(characteristics that are very important and musically significant) and weaknesses
(features crucial in musical environments which are not part of, or not native to,
those formalisms). At the end we show some related work.
In chapter 3 we introduce the rtcc calculus formally. We first explain some changes
made to the ntcc model in order to develop the extension. Then we define an oper-
ational semantics supporting resources, limited time and true concurrency. We show
some properties of the calculus. We also define the notion of observable behaviour,
especially the input-output behaviour, which will be used to demonstrate the rela-
tion between this semantics and the denotational semantics. Later, we introduce
a denotational semantics based on Chu spaces. We show that full-abstraction be-
tween the semantics holds for every process. After this, we describe a real-time logic
for a simplified transition system disregarding resources. Finally, we provide some
remarks about encoding ntcc into rtcc and discuss some related work.
The research on Chu spaces originated in Professor Camilo Rueda's study of
CCP and event structures. Some of the ideas on how to represent process constructs
on Chu spaces were borrowed from an unpublished paper where Prof. Rueda de-
scribes a denotational semantics of CCP using event structures. He also uses Chu
spaces as a representation of event structures and to prove some properties on
the true concurrency semantics of a stochastic, real-time CCP.
In chapter 4 we illustrate the applicability of the rtcc calculus by describing some ex-
amples of modeling systems using this formalism. We center our efforts on formalizing
the notion of the OpenMusic Maquette of section 2.1.2.
In chapter 5 we give some concluding remarks and propose future work.
2 Formal Models
“Talking about music is like dancing about architecture.”
– Laurie Anderson
In the last fifty years, the term formalization has been increasingly used in process
modeling. Formalization is a way of presenting scientific theories within the frame-
work of a formal system, and can be considered a deductive approach from a purely
combinatorial point of view. In other words, formalization may be seen as a
deductive step leading from a language to a theory.
Formalization in music consists in clarifying phenomena such as analysis and com-
position, representing musical processes in a formal language understandable by
computers [Mal98]. This means elaborating formal representations of musical
concepts that can assist in communicating them to machines.
There are many programming languages for music, musical applications and tools
which take advantage of mathematical principles, formal theories, and studies in
computer science. This chapter is not an attempt to reference every formal model
used in music, but only those which are related to this dissertation. Seven repre-
sentative formalisms (relevant to this thesis) that have been used directly
or indirectly to model musical properties will be briefly described, together with
various known musical applications of some of those formalisms.
2.1 λ-Calculus
The λ-calculus [Chu85] is a calculus for describing functions that compute values
from arguments. It was introduced by Alonzo Church and Stephen Cole Kleene as
part of the investigation of David Hilbert's Entscheidungsproblem¹.
We describe next the λ&-calculus [Cas98], a formal model for CLOS, the Common
Lisp Object System [Ste90], and an extension of the λ-calculus.
2.1.1 Semantics
The typed λ-calculus is a variant of the λ-calculus that makes explicit the intended
types of all expressions. The type of an expression is determined from the types of
its subexpressions. The λ&-calculus is an extension of the λ≤-calculus [CA96], which
is the simply-typed λ-calculus with a subtyping relation. The formal description of
λ& is in table 2.1 (including terms of the λ≤-calculus).
M ∶∶= x^V                          Variable
    ∣ λx^V.M                       Function
    ∣ M ⋅ M                        Function Application
    ∣ ⟨ℓ₁ = M, . . . , ℓₙ = M⟩     Records
    ∣ M.ℓ                          Value of a Field
    ∣ ε                            Empty Overloaded Function
    ∣ (M &^V M)                    Overloaded Function
    ∣ M M                          Overloaded Application
where V denotes a type.
Table 2.1: Syntax of the λ&-Calculus
Informally, the notation λx.P specifies the function whose value at an argument N
is computed by substituting N for x in P .
Two main characteristics distinguish this calculus from λ≤: overloading (a function,
called a generic function, may be formed by a set of ordinary functions, called methods,
which can respond differently to the same message) and late binding (an ordinary
function is bound to its meaning at compile time, while the meaning of a method can
be decided only at run-time, when the receiving object is known).
¹This problem asked for a general algorithm that, given a formal language and a mathematical statement in the language, would return True if the statement was true and False otherwise.
Therefore a generic (overloaded) function with n methods can be written as
(ε & M1 & M2 & . . . & Mn)
The type of a generic function (overloaded type) is the set of the types of its methods.
Thus, if the method Mi has the type Ui → Vi, then the generic function above has
the type
{U₁ → V₁, U₂ → V₂, . . . , Uₙ → Vₙ}

If an argument N of type Uj is passed to this function, the selected method is Mj,
and the result is the application of Mj to N :
(ε & M1 & M2 & . . . & Mn) N ⊳ Mj ⋅N
If the type U of N is not contained in the set {Uᵢ} of input types of the generic function,
the method Mⱼ is selected such that Uⱼ = min_{i=1...n}{Uᵢ ∣ U ≤ Uᵢ} (recall that ≤ is the
subtyping relation).
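This branch-selection rule can be sketched concretely. In the following Python illustration of ours, subtyping is modeled as Python subclassing and the classes A and B are toy assumptions, not part of the λ&-calculus itself:

```python
# Branch selection of a generic function: given the argument type U, pick
# the method whose input type is the minimum (under subtyping) of the set
# { Ui | U <= Ui }. Subtyping is modeled here as Python subclassing.

class A: pass
class B(A): pass          # B <= A

def select_method(methods, arg_type):
    """methods: list of (input_type, function); pick min{Ui | U <= Ui}."""
    candidates = [(u, f) for (u, f) in methods if issubclass(arg_type, u)]
    if not candidates:
        raise TypeError("no applicable method")
    # the most specific candidate type, i.e. the deepest in the hierarchy
    u_best, f_best = max(candidates, key=lambda uf: len(uf[0].__mro__))
    return f_best

generic = [(A, lambda x: "method for A"),
           (B, lambda x: "method for B")]

print(select_method(generic, B)(B()))  # -> method for B (late binding)
print(select_method(generic, A)(A()))  # -> method for A
```

Note that the well-formedness conditions below guarantee in the calculus that this minimum exists and is unique; the sketch simply assumes a linear hierarchy.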
In the λ&-calculus, however, not every set of method types can be considered an
overloaded type (for a generic function). A set of method types {Uᵢ → Vᵢ}_{i∈I} is an
overloaded type iff for all i, j ∈ I the following conditions are satisfied:

Uᵢ ≤ Uⱼ ⇒ Vᵢ ≤ Vⱼ

U is maximal in LB(Uᵢ,Uⱼ) ⇒ ∃!h ∈ I s.t. Uₕ = U
where LB(U,V ) denotes the set of common lower bounds of the types U and V .
These conditions ensure that during computation the type of a term may only de-
crease, and that the selected method exists, is correct and unique.
Since there is a subtyping relation on types, the rule for overloaded types states that

∀i ∈ I, ∃j ∈ J such that Sⱼ → Tⱼ ≤ Uᵢ → Vᵢ
──────────────────────────────────────────
   {Sⱼ → Tⱼ}_{j∈J} ≤ {Uᵢ → Vᵢ}_{i∈I}

That is, an overloaded type {Sⱼ → Tⱼ}_{j∈J} is smaller than or equal to another over-
loaded type {Uᵢ → Vᵢ}_{i∈I} if for every method in the latter, there is a method in the
former smaller than or equal to it.
(α)  λx^U.M = λy^U.(M[x ∶= y])    if y ∉ FV(M)
(η)  λx^U.Mx = M
(β)  (λx^U.M)N ⊳ M[x^U ∶= N]
(β&) if N ∶ U is closed and in normal form, and Uⱼ = min_{i=1...n}{Uᵢ ∣ U ≤ Uᵢ}, then

     (M₁ &^{{Uᵢ→Vᵢ}ᵢ₌₁...ₙ} M₂) N ⊳  M₁ N  for j < n
                                     M₂ N  for j = n
where FV (M) is the set of free variables of the term M .
Table 2.2: Reduction Rules of the λ&-Calculus
A reduction rule in the λ&-calculus is denoted M1 ⊳ M2. The rules are extended
from those of the λ≤-calculus with a β-reduction rule describing the application of a
generic function. They are shown in table 2.2.
It is possible to use this calculus to model object-oriented languages. Objects are
seen as records and classes are seen as record generators. Thus the inheritance mech-
anism is defined by subtyping and method selection. In fact, if a message of type
{Cᵢ → Tᵢ}_{i∈I} is sent to an object of type C, then the method with index
Cⱼ = min_{i∈I}{Cᵢ ∣ C ≤ Cᵢ} will be selected and called. If Cⱼ = C the object uses
the method defined in its own class; on the other hand, if Cⱼ > C the object uses the
method that its class has inherited.
2.1.2 Applications
Elody
Yann Orlarey and his team at Grame have studied the λ-calculus (and functional
programming) in order to show its capability of expressing musical functions and op-
erations. In [OFLB94] a music calculus was proposed by introducing the abstraction
and application concepts from λ-calculus to a descriptive language. The result is an
interesting approach of formalizing the composition activity, and it led to a visual
musical language made over the λ-calculus, called Elody [OFL97].
Elody is a visual environment written in Java where musicians may construct musical
objects and assemble them with others to create a composition. The main concept
in Elody is the visual constructor which is an interface to build new musical ob-
jects. The basic elements are notes and silences (musical expressions are built from
them). Visual constructors are represented by windows with boxes (arguments and
a result). Figure 2.1² shows an example of a program that creates (with the Lambda
constructor λ) and uses (with the Application constructor @) the Triple function
that repeats a sequence three times.
Figure 2.1: Creation and Use of Triple Function in Elody
Elody is a good effort at introducing programming principles to musicians and non-
programmers in general. Programs can be seen as the combination of abstractions
and applications (both λ-calculus concepts). Figure 2.2 shows the organization of
Elody's internal treatments. The process begins with the user constructing a mu-
sical object by means of an intentional expression; then this expression is translated
into another by calculating the λ-abstractions; later these abstractions are evaluated,
obtaining an extensional expression; finally, from this last expression the sound is
built as a sequence of MIDI events and the graphics as a set of graphic orders.
In addition, the language allows a simple model of description of interactions with an
input stream. Within a timeline editor it is possible to define real-time interactions
and to make some temporal transformations such as constructions, shifting, cutting
and compression/expansion. Figure 2.3 shows expressions to be evaluated within
²Figures of Elody have been taken from [OFL97] and the Elody documentation.
Figure 2.2: The Elody Conceptual Diagram
Figure 2.3: The TimeLine Editor of Elody
a timeline editor. The musical objects are placed in sequence on different tracks.
Tracks can contain either basic musical objects or functions applied on the track
just below. All musical objects and functions can be evaluated and/or played from
a given position in time.
OpenMusic
The λ&-calculus has been used in [AARD98, Ago98] to formalize OpenMusic (OM)
[ARL+99], a visual and object-oriented programming language based on CLOS.
OpenMusic is a general purpose application providing an environment that supports
musical composition by implementing a set of musical and computational objects
symbolized by icons that may be dragged and dropped all around. It was created
by Gerard Assayag and Carlos Agon at Ircam, and it is a successor of PatchWork
[LD89, Lau96].
Programs in OpenMusic (called patches) are graphical algorithms constituted by
boxes (icons that represent functions, classes, instances, subpatches or maquettes)
and connections between them. Each patch has Lisp code associated with it; that is, within
a patch there is a flowchart which graphically describes Lisp code accomplishing a
specific function. Figure 2.4 shows an example of a patch to create a salsa song,
developed by composer Mikhail Malt. This patch was conceived as follows: part A
is a set of four instances of the chord class which will provide the notes for the song;
in part B a single note is taken randomly from each instance of the chord class; in
part C the process in part B is repeated four times; then the sixteen resulting notes
are organized in a list in part D; the last process is repeated two times in part E;
finally the result from the last cycle (which is a list of lists) is flattened and given to
an instance of a chord sequence class.
OpenMusic takes advantage of the metaprogramming techniques of CLOS using the
metaobject protocol (MOP) [Pae93]. The MOP is divided in two parts: the static
part involves the possibility of subclassing metaclasses (such as standard-class, the
class whose instances are themselves classes), and the dynamic part, which allows
the definition of new methods for these subclasses. The OpenMusic root class is OMObject
(every object in OM is an OMObject). Under OMObject is OMBasicObject, class of
the OpenMusic metaobjects (definitions that are independent of their visualization)
such as OMPatch, OMClass and OMGenericFunction. Figure 2.5³ shows the hierarchy
of OM classes (CLOS classes are in bold framed rectangles). The left side of the tree
³Figure taken from [AARD98].
Figure 2.4: OM Patch to Create a Song of Salsa
defines the OM metaclasses whereas the right side implements the visual paradigm.
Figure 2.5: OM Meta-object Classes Hierarchy
Every OM metaobject can be visualized as an icon (simple frame) or as a container
(container frame). The OMFrame class is the graphic root.
Additionally, OpenMusic defines an original notion called Maquettes. They are Open-
Music entities for representing, in the same object, patches and scores [Ago04]. Inside
a maquette, musical structures can be organized in a time line together with temporal
relations, constraints and hierarchies. Figure 2.6 shows an example of a maquette.
There are four musical objects: the one above is a MIDI file and the other three are
patches with an instance of the chord sequence class as output. In this particu-
lar maquette, three objects are constrained to finish at the same exact moment. In
chapter 4 the maquette notion will be formalized.
Other Applications
There are many other applications based on the λ-calculus and its extensions. Some
of them are briefly described below.
Figure 2.6: An Example of an OM Maquette

Arctic [Dan84] is a high-level language developed by Roger Dannenberg. It synthe-
sized ideas from functional programming in order to specify real-time control systems
(real-time systems are modeled as black boxes with inputs and outputs). A program
in Arctic is a higher-order function from the set of inputs to the set of outputs. The
formal model on which Arctic is based lacks time, concurrency and synchronization.
This problem is solved in the language by borrowing constructs from other languages.
For example, time is represented as a collection of computer functions that execute
statements.
Another language from the same author is the Canon Score Language [Dan89].
It combines concepts from Arctic and MIDI. The idea is to take primitive operators,
to create scores from them and to make transformations of scores. Canon takes
advantage of its declarative style of programming and the ability to define abstract
behavior, in order to make compositions. As in Arctic, Canon provides ways of
specifying and manipulating time and synchronization.
Common Music [Tau90] is an object-oriented music composition environment. It
was created by Heinrich Taube for describing the sound and its higher-level structure.
The composition process is separated in three different levels: developing musical
ideas, translating the ideas to the real world, and understanding how to conceive
the ideas. Common Music provides collections which aim to translate the high-level
information introduced by the user into lower-level information understandable by
the synthesizer.
Roger Dannenberg et al. have worked on another language for sound synthesis and
music composition called Fugue [DFV91]. It extends the traditional approach to
sound synthesis with concepts from functional programming. In Fugue it is pos-
sible to design instruments by combining functions (similar to orchestra languages
of Music-N family [MMM+69]). These new instruments are used in expressions to
generate sounds, and the expressions are combined into complex ones to create a
whole composition. Fugue extended Canon to manipulate digital audio.
In [HMGW96] Paul Hudak et al. proposed Haskore, an algebraic formalism for music
description and composition in the programming language Haskell. It is a collection
of musical modules (data types) to express music. The score and its components are
defined separately from the performance (which is a temporally ordered sequence
of musical events): some Music data types (such as notes and their combination)
become the score, and various functions can be defined to interpret it in order to
produce a performance.
Finally, BOOMS [Bar96, BBE02] is a computer-music environment developed by Eli
Barzilay. It uses concrete abstractions (more precisely, value and structure abstrac-
tions) as a general tool for music programming. BOOMS supports a combination of
structural and non-structural editing of music pieces by means of its editor GCalc
(the Graphic Calculus based on structured colored cubes, introduced in [OFLB94]).
Expressions in BOOMS can be conceived as single directed acyclic graphs with leaves
that correspond to atomic data values, and internal nodes that correspond to con-
structors.
2.1.3 Considerations
The λ-calculus was created many years ago, so one of the main advantages
of this calculus is its maturity and robustness. It has countless applications
in hundreds of different areas of knowledge. However, it lacks native notions
such as time and constraints. Without these, it is difficult to express the different
musical temporal aspects, the calculations involved, the representation of objects, the
constraints in the different applications and partial information.
The applications that we just presented are based on the λ and λ& calculi. Therefore,
they don’t have a base supplying insights to define new concepts such as, for example,
primitive temporal entities, to express notions such as repetition and eventuality, or
to provide different models to organize the objects in time (OpenMusic and Elody,
for instance, provide some of these characteristics but as an implementation outside
the capabilities of the formal model). Furthermore, there is no formal notion of a
constraint system in the λ-calculus, which limits the possibilities of applying con-
straints, and prevents a formal statement of what exactly are valid manipulations of
temporal objects and what the visual representations of musically significant tempo-
ral constraints stand for (OpenMusic has been enhanced with a constraint
library called "Situation" [RB98], but this is not native to the formalism).
Additionally, notions such as interaction and concurrency, very typical in music, are
not a priority in the λ-calculus. This limits the possibility of expressing parallel com-
position of processes, synchronization of musical pieces, dynamics of musicians, etc.
Now, in order to explore the combination of functional and concurrent programming,
an extension of the λ-calculus with parallel composition, dynamic channel genera-
tion and input-output communication primitives, called λ∥-calculus, was proposed in
[ALT95] (this calculus was developed to approach the λ-calculus and the π-calculus).
2.2 Communicating Sequential Processes (CSP)
The algebra of Communicating Sequential Processes [Hoa78] is a model introduced
by C.A.R. Hoare for the formalization and mathematical treatment of concurrent
systems. It is supported by an elegant mathematical theory, a set of proof tools, and
an extensive literature.
2.2.1 Semantics
CSP allows the description of systems in terms of component processes that operate
independently and interact with each other through message-passing communication.
There are two classes of primitives: events and processes. Processes are independent
self-contained entities with particular interfaces through which they interact with
the environment. Events (or actions) are central elements of interaction and com-
munication between processes or between a process and the environment. The set of
events defined by a given process P is called the alphabet of the process, written αP.
In order to construct processes, the syntax in table 2.3 was defined (more operators
are defined in [Hoa85], but for simplicity they are not included here).
P,Q, . . . ∶∶= STOP
    ∣ SKIP
    ∣ a → P      Prefix
    ∣ P ∥ Q      Parallel
    ∣ P ∥∣ Q     Interleaving
    ∣ P ∣ Q      Deterministic Choice
    ∣ P ◻ Q      Nondeterministic External Choice
    ∣ P ⊓ Q      Nondeterministic Internal Choice
    ∣ P ∖ A      Hiding
    ∣ P ; Q      Sequential Composition
Table 2.3: Syntax of CSP
Prefixing is the main operator. The constructor a → P connects an event a to a
process P. This operator will wait indefinitely for a to occur in order to behave like
P. The classic example from Hoare is the vending machine which accepts a coin,
returns a choc, and then repeats:
VMS = (coin → (choc → VMS))

STOP is the process which never engages in any event of an alphabet. SKIP is
described as a process which does nothing but terminate successfully.
There are two operators for reasoning about systems of processes interacting with
each other while having independent control: the parallel operator P ∥ Q and the inter-
leaving form P ∥∣ Q. Interleaving solves the problem of two concurrent processes
sharing resources.
CSP supports three choice operators. For the nondeterministic ones, the external
choice enables the environment to select one of the actions by offering a particular
event. In the internal choice the process itself may behave like any of the actions
but there is no way to tell which in advance.
The special event ✓ is defined to denote a positive termination of a process. The
trace of a process is also defined, as a sequential record of the behaviour of the
process up to some moment in time. A trace is denoted as a list of events, separated
by commas and enclosed in angular brackets (e.g. for the vending machine, at the
moment it has completed two sales, the trace is ⟨coin, choc, coin, choc⟩). The only
possible trace of process SKIP is ⟨✓⟩.
Processes communicate with each other by sending events through communication
channels. The transmission of a value x through the channel c is denoted by c!x,
and the reception of a value y through the channel d is denoted by d?y.
The operational semantics is described in terms of labelled transitions of the form
P₁ −µ→ P₂, where P₁ and P₂ are both processes, and µ describes the action which
accompanies the transition. Table 2.4 captures the transitions.
The transitions can be understood as follows. Process SKIP can do nothing except
indicate that it has reached termination, with event ✓. It is always the case that a
prefix operator a → P may perform an a transition and subsequently behave as P.
The transitions of a parallel combination describe both the independent execution
of each of the components and the performance of a joint step. In the interleaving
process P₁ ∥∣ P₂, the components P₁ and P₂ execute completely independently of
each other, and do not interact on any events apart from termination (each event is
SKIP --✓--> STOP                    (a → P) --a--> P

P1 --τ--> P1′  implies  P1 ∥ P2 --τ--> P1′ ∥ P2  and  P2 ∥ P1 --τ--> P2 ∥ P1′

P1 --a--> P1′ and P2 --a--> P2′  imply  P1 ∥ P2 --a--> P1′ ∥ P2′

P1 --μ--> P1′  implies  P1 ∥∣ P2 --μ--> P1′ ∥∣ P2  and  P2 ∥∣ P1 --μ--> P2 ∥∣ P1′    [μ ≠ ✓]

P1 --✓--> P1′ and P2 --✓--> P2′  imply  P1 ∥∣ P2 --✓--> P1′ ∥∣ P2′

P1 --a--> P1′  implies  P1 ◻ P2 --a--> P1′  and  P2 ◻ P1 --a--> P1′

P1 --τ--> P1′  implies  P1 ◻ P2 --τ--> P1′ ◻ P2  and  P2 ◻ P1 --τ--> P2 ◻ P1′

P1 ⊓ P2 --τ--> P1                    P1 ⊓ P2 --τ--> P2

P --a--> P′  implies  P ∖ A --τ--> P′ ∖ A    [a ∈ A]

P --μ--> P′  implies  P ∖ A --μ--> P′ ∖ A    [μ ∉ A]

P1 --μ--> P1′  implies  P1 ; P2 --μ--> P1′ ; P2    [μ ≠ ✓]

P1 --✓--> P1′  implies  P1 ; P2 --τ--> P2

Table 2.4: Transition Rules of CSP
performed by exactly one process). The external choice P1 ◻ P2 is resolved by the
performance of the first event, in favour of the process that performs it. The internal
choice operator ⊓ chooses nondeterministically one of the processes involved. For the
hiding process P ∖ A all the events in A are removed from the interface of P (these
events become internal to P ). Finally, the sequential composition P1 ; P2 initially
executes as P1, and when P1 terminates it executes the component P2.
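These rules can be animated with a small interpreter. The following Python sketch is entirely our own encoding (the process constructors, the `tick` event name, and the restriction to prefix, external choice and sequencing are illustrative simplifications, not part of any CSP tool; internal τ moves are omitted), and replays one round of the vending machine:

```python
# Minimal sketch of CSP-style labelled transitions (illustrative only).

STOP = ("STOP",)
SKIP = ("SKIP",)
TICK = "tick"  # stands in for the termination event

def prefix(a, p):            # a -> P
    return ("prefix", a, p)

def extchoice(p, q):         # P [] Q
    return ("extchoice", p, q)

def seq(p, q):               # P ; Q
    return ("seq", p, q)

def transitions(p):
    """Return the (event, successor) pairs available to process p."""
    kind = p[0]
    if kind == "SKIP":                     # SKIP --tick--> STOP
        return [(TICK, STOP)]
    if kind == "STOP":                     # STOP has no transitions
        return []
    if kind == "prefix":                   # (a -> P) --a--> P
        return [(p[1], p[2])]
    if kind == "extchoice":                # first visible event resolves it
        return transitions(p[1]) + transitions(p[2])
    if kind == "seq":
        out = []
        for (e, succ) in transitions(p[1]):
            if e == TICK:                  # P1 terminates: P1;P2 --tau--> P2
                out.append(("tau", p[2]))
            else:
                out.append((e, seq(succ, p[2])))
        return out
    raise ValueError(kind)

# One round of the vending machine: coin -> choc -> SKIP
vm = prefix("coin", prefix("choc", SKIP))
trace = []
while transitions(vm):
    e, vm = transitions(vm)[0]
    trace.append(e)
print(trace)   # ['coin', 'choc', 'tick']
```

Running the loop records the trace ⟨coin, choc, ✓⟩, matching the prefix and SKIP rules above.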
2.2.2 Applications
MAX is a programming language developed by Miller Puckette and proposed in
[Puc88] originally as “the Patcher”. MAX can be seen as a graphical and musical
environment for developing real-time music applications by connecting boxes repre-
senting a particular treatment of sound.
The fundamental element in MAX is the patch: a set of objects (boxes) intercon-
nected by lines. These objects pass messages to each other and respond by taking
actions. They can also access the clock and the MIDI I/O in order to control a
musical process. Figure 2.7 shows an example of a MAX synthesizer that plays
tones and chords via some random generators and metronomes.
Figure 2.7: Example of a Patch in MAX
Several extensions and implementations of MAX have been developed. MSP [Dob97]
and jMax [DBdC+98] are probably the most widely used. MSP introduced signal
processing to MAX; jMax is the implementation of MAX whose graphical interface
was re-implemented in Java.
Since there were no documents explaining the theoretical foundations of MAX, this
programming language was formalized in [Sel04] using CSP. A patch is then defined
as a network of reactive objects (a reactive system): a patch receives some values
from an external device (software or hardware), treats these values, and then returns
other data, computed from the original values, to an external device (the same or
another one). This makes MAX an environment builder for musical reactive systems.
A patch is developed from primitive reactive objects and/or some other patches.
Primitive reactive objects (called simply primitives) are processes denoted textually
as [c, p1, p2], and graphically as in figure 2.8: a primitive of class c, with parameters
p1 and p2, three inputs and two outputs.
Figure 2.8: A Primitive in MAX
A primitive Ω is an element of M, the set of all reactive objects (M ⊆ P, where P is
the set of all processes). For each primitive Ω ∈ M, we have the following notations:

- S(Ω) is the symbol of its class,
- P (Ω) is the list of initialization parameters,
- I(Ω) is the list of inputs (for every input i, αi denotes its alphabet, that is,
  the set of values that the object can receive through that channel),
- O(Ω) is the list of outputs (for every output o, αo denotes its alphabet),
- αΩ is its alphabet (equal to ⋃i∈I(Ω) αi ∪ ⋃o∈O(Ω) αo).
The behaviour of a primitive starts when it receives values through its input channels;
it then sends values through its output channels and terminates. For example, the
primitive fork (which forces the order in which two objects receive a value emitted
by the output of a third object) can be modeled in CSP as follows (see [Sel04] for
the formal models of all the classic objects of MAX):
[fork] ∈ M        I([fork]) = (i)        O([fork]) = (o1, o2)

[fork] : i?x → o2!x → o1!x → SKIP

Figures 2.8 and 2.9 were taken from [Sel04].
Primitives are eventually connected to each other. Connections must go from out-
puts to inputs. An output o and an input i can be connected if αo ⊂ αi (if no
alphabet is given for a channel, then its alphabet is D). A connection is denoted
textually o → i (or Ω1.o → Ω2.i if o ∈ O(Ω1) and i ∈ I(Ω2)), and graphically as in
figure 2.9.
Figure 2.9: A Connection in MAX
For example, if we have these two primitives:
Ω1 ∶ action1 → o!x→ action2 → SKIP
Ω2 ∶ i?x → action3→ SKIP
and if o → i, then the execution of the patch associated to them has the following
trace:
⟨action1, o!x, i?x, action3, ✓Ω2, action2, ✓Ω1⟩

The temporal model of MAX is formalized as follows. MAX allows triggering events
in the future. This characteristic is modeled in CSP by defining time as a positive
discrete variable t whose value never decreases, and by defining a function scheduler
whose job is to set the transmission of a value through a channel at a given time
t′ (it is assumed that the operations themselves take no time). The function scheduler
is defined recursively thus:

SCHEDULER : startWait → stopWait → (t := t + δt) → (∀(o, v, t′) ∈ T ∣ t′ ≤ t • o!v ; T := T ∖ {(o, v, t′)}) → SCHEDULER

where T is the set of tuples ⟨o, v, t′⟩ (output channel, value, date) describing the
transmission of the value v through the output channel o some time in the future,
not before t′ but before t′ + δt.
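As a rough illustration of this scheduler recursion (the Python encoding, the channel names and the choice of values are entirely ours, not part of [Sel04]), each cycle advances logical time by δt and emits every scheduled transmission that has become due:

```python
# Illustrative sketch of the scheduler: logical time t advances by delta_t
# per cycle (the startWait..stopWait window), and every scheduled tuple
# (o, v, t') with t' <= t is emitted as o!v and removed from T.

def run_scheduler(T, delta_t, cycles, t=0):
    emitted = []
    for _ in range(cycles):            # one iteration = one scheduler cycle
        t += delta_t                   # t := t + delta_t
        due = sorted(x for x in T if x[2] <= t)
        for (o, v, t_prime) in due:    # emit o!v for every due tuple
            emitted.append((o, v))
            T.remove((o, v, t_prime))  # T := T \ {(o, v, t')}
    return emitted, T

# Three transmissions scheduled at dates 5, 12 and 25; delta_t = 10.
T = {("out1", 60, 5), ("out2", 64, 12), ("out1", 67, 25)}
emitted, pending = run_scheduler(T, delta_t=10, cycles=2)
print(emitted)   # [('out1', 60), ('out2', 64)]
print(pending)   # {('out1', 67, 25)}
```

Note how a tuple dated t′ = 5 is only emitted once t reaches 10: exactly the "not before t′ but before t′ + δt" guarantee stated above.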
2.2.3 Considerations
The advantage of using CSP in modeling applications in different areas of knowledge
is that it is quite simple conceptually, yet provides a good solution to many of
the common synchronization problems. However, as in the λ-calculus, time and
constraints are not native notions. One can model time (just like above) by taking
into account a discrete variable which never decrease its value and change its value
in an infinite recursion. Nevertheless, there is no control of the rate of change, this
means, time is logic (i.e. the period between events startWait and stopWait in
the scheduler represents the physical passage of time at some speed proportional to
the logical speed of the Patch). Then we must expect a correspondence between
this period of time and the duration of δt. Sadly, there is no guarantee of this
correspondence, hence the well-known characteristic of “real-time” of MAX cannot
be ensured in this formalism. We will see the same kind of model of time below
in the ntcc calculus: it is not possible to associate this logic time exactly to the
physical time, all depends on several other conditions.
On the other hand, the behaviour of a CSP process depends on its environment.
Therefore, it is difficult to assert global properties. There is thus no precise way of
specifying the properties of a system, something that is natural in a constraint-
oriented language.
2.3 π-Calculus
The π-calculus [MPW92] is a mathematical model of processes whose interconnec-
tions change as they interact. This means that the configurations of processes may
change with the concurrent computations. This calculus was originally developed by
Robin Milner, Joachim Parrow and David Walker as an extension of CCS.
This model replaces the notion of function (which describes the input-output relation
of sequential processes) formally defined in the λ-calculus with the notion of name.
A name is used by processes in two ways: as an identification distinguishing a
process from others, and as a communication channel (communication between two
processes is possible if they share the same channel).
2.3.1 Semantics
The π-calculus was conceived as a language for describing and reasoning about mobile
systems. The descriptions of mobile systems and their interactions are analyzed as
concurrent communication processes. Then, the primitive notions of the π-calculus
are the processes and the names. The syntax is defined in table 2.5.
π ::= āx         Output
    ∣ a(x)       Input
    ∣ τ          Silent

P, Q, . . . ::= 0    NIL
    ∣ π.P            Prefix
    ∣ P + Q          Summation
    ∣ P ∣ Q          Parallel
    ∣ (ν x) P        Restriction
    ∣ !P             Replication
Table 2.5: Syntax of the π-Calculus
As in CCS, 0 represents the NIL process, which can be seen as the empty sum. A
prefix denotes an atomic action preceding the next process. The two atomic actions
āx and a(x) denote the transmission of x over the channel a, and the reception of x
on the channel a, respectively. A summation represents the nondeterministic choice
between two processes; the choice does not depend on the process itself but on the
communication. The process P ∣ Q denotes the parallel composition of processes
P and Q. In addition to the individual actions of each process, both processes can
interact with each other. The process x(y).ȳz ∣ x̄w.0, for instance, denotes the
interaction of a process sending the datum w through the channel x with a process
reading this datum (calling it y) through the same channel. A name can be restricted
to the context of a single process with (ν x) P (the name x is local to P and it is
visible only to P ). The process !P represents the replication of P , that is, it produces
as many copies of P as needed.
The semantics of the calculus is established by means of a congruence relation and
transition rules. The congruence relation abstracts which processes are structurally
congruent although they are not syntactically identical. For two processes P and Q,
we say that they are structurally congruent, notation P ≡ Q, if we can transform one
into the other by using the following equations (in either direction) [Mil99]:
1. Change of bound names
2. Reordering of terms in a summation
3. P ∣ 0 ≡ P ,   P ∣ Q ≡ Q ∣ P ,   P ∣ (Q ∣ R) ≡ (P ∣ Q) ∣ R
4. (ν x)(P ∣ Q) ≡ P ∣ (ν x)Q if x ∉ fn(P ),   (ν x)0 ≡ 0,   (ν xy)P ≡ (ν yx)P
5. !P ≡ P ∣ !P
where fn(P ) is the set of free names of process P . The transition rules define how to
transform processes into other processes. They are specified in table 2.6. The rule
TAU performs an internal unobservable action. The rule REACT allows the reaction
of an action and its complement (by means of communication). The two rules PAR
and RES indicate that a reaction can occur inside a parallel composition or a
restriction. The final rule, STRUCT, allows structural congruence to be used in
transitions.
(TAU)     τ.P + M → P

(REACT)   (x(y).P + M) ∣ (x̄⟨z⟩.Q + N) → {z/y}P ∣ Q

(PAR)     P → P′  implies  P ∣ Q → P′ ∣ Q

(RES)     P → P′  implies  (ν x)P → (ν x)P′

(STRUCT)  Q → Q′ follows from P → P′, if P ≡ Q and P′ ≡ Q′
Table 2.6: Transition Rules of the π-Calculus
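The REACT rule can be animated with a toy one-step reducer. The term representation below is entirely our own (tuples for prefixes, a list for a parallel composition) and handles only input/output prefixes, with a deliberately naive substitution; it is a sketch, not a full π-calculus implementation:

```python
# Toy reducer for the REACT rule: (x(y).P) | (x<z>.Q) -> {z/y}P | Q.

def inp(chan, var, cont):   return ("in", chan, var, cont)
def out(chan, name, cont):  return ("out", chan, name, cont)
NIL = ("nil",)

def subst(p, var, name):
    """Capture-naive substitution p{name/var}, enough for this example."""
    if p == NIL:
        return p
    kind, chan, x, cont = p
    chan = name if chan == var else chan
    if kind == "in" and x == var:        # var is rebound: stop substituting
        return (kind, chan, x, cont)
    x = name if x == var else x
    return (kind, chan, x, subst(cont, var, name))

def react(procs):
    """Find one receiver/sender pair on the same channel and reduce once."""
    for i, p in enumerate(procs):
        for j, q in enumerate(procs):
            if i != j and p[0] == "in" and q[0] == "out" and p[1] == q[1]:
                rest = [r for k, r in enumerate(procs) if k not in (i, j)]
                return rest + [subst(p[3], p[2], q[2]), q[3]]
    return procs   # no reaction possible

# x(y).y<w>.0  |  x<a>.0   reduces to   a<w>.0 | 0
procs = [inp("x", "y", out("y", "w", NIL)), out("x", "a", NIL)]
print(react(procs))   # [('out', 'a', 'w', ('nil',)), ('nil',)]
```

The reduction shows mobility at work: the received name a becomes the channel of the continuation's output, exactly the "interconnections change as processes interact" idea.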
2.4 PiCO
The AVISPA research group5 in Colombia was founded with the objective of de-
veloping models for an integration of Object-Oriented and Concurrent Constraint
Programming into a visual language, so as to have a programming environment
sustained by a rich semantics that eases the task of developing computer music
applications. The group defined PiCO [ADQ+98, RAQ+01], a calculus integrating
objects and constraints.
2.4.1 Semantics
The π+-calculus [DRV99] extends the π-calculus with the notion of constraints. PiCO
adds to π+ the notion of objects and messages synchronized by constraints. Con-
straints and concurrent objects are primitive notions at the calculus level (the objects
are located in constraints and passing messages is defined by delegation).
PiCO is parameterized in a constraint system, which consists of a signature Σ (a
set of constants, functions and predicates) and a consistent theory ∆ over Σ (a set
of first-order sentences with at least one model), as in [Smo94]. Given a constraint
system, the symbols φ, ψ, . . . denote first-order formulae over the signature, called
constraints. We say that φ entails ψ in a theory ∆, notation φ ⊢∆ ψ, if φ ⇒ ψ is
true in all models of ∆.
The syntax of PiCO is defined in table 2.7. There are three basic processes: Ob-
jects, Messages, and Constraints. Other processes such as null, replication, parallel
composition and hiding play the same roles as their π-calculus counterparts.
A process (φsender, δforward) ⊳ M can be thought of as an object. It responds to a
message sent by other processes by replacing itself with another process available in
the collection of methods M . The constraint φsender represents the guard, that is, it
determines whether a process who sends a message can communicate with the object
who receives the message (if the location of the sender satisfies the receptor constraint
then the message is received). The object (sender ∈ {a, b}, forward ∈ {b, c}) ⊳ M ,
for instance, accepts messages from the process a ⊲ m[] then Q, since the constraint
a ∈ {a, b} is fulfilled. The constraint δforward represents the delegation condition; in
5http://avispa.puj.edu.co/
Normal processes:
N ::= O                          Inaction (null process)
    ∣ I ⊲ m then P               Message m sent by I
    ∣ (φsender, δforward) ⊳ M    Object with delegation condition δforward,
                                 guarded by constraint φsender

Constraint processes:
R ::= tell φ then P      tell process
    ∣ ask φ then P       ask process

Processes:
P , Q ::= local x in P   New variable x in P
    ∣ local a in P       New name a in P
    ∣ N                  Normal process
    ∣ P ∣ Q              Composition
    ∣ clone P            Replicated process
    ∣ R                  Constraint process

Object identifiers:
I, J , K ::= a, l        Names
    ∣ v                  Value
    ∣ x                  Variable

Collection of methods:
M ::= [l1 : (x1)P1 & . . . & lm : (xm)Pm]

Messages:
m ::= l : [I]

Table 2.7: Syntax of PiCO
other words, in case the message received cannot be read by the receptor (m ∉ M ),
it is re-sent to the location determined by the constraint δforward.

The process I ⊲ m then P can be thought of as a message from I to a target object
(P is the continuation of the message).

Finally, tell φ then P and ask φ then P can be thought of as constraint processes.
They communicate with a global store containing information.
The operational semantics for PiCO is defined much in the same way as is done for
the π+-calculus. The transition relations are not defined over processes directly but
over configurations of the form ⟨P ;S⟩, where P is a process and S is a store. The
transition ⟨P ;S⟩ → ⟨P ′;S′⟩ means that process P evolves to P ′, transforming the store
S to S′ in a single step. The transitions are summarized in table 2.8 (for further
details see [RAQ+01]).
(communication)
S ⊢ φ[I′/sender]    ∣K∣ = ∣x∣
⟨I′ ⊲ l:[K] then Q ∣ (φsender, δforward) ⊳ [l : (x)P & . . .]; S⟩ → ⟨Q ∣ P{K/x, I′/sender}; S⟩

(delegation)
S ⊢ φ[I′/sender]    S ⊔ δ[I′/forward] ⊢ ⊥    l ∉ Labels(M)
⟨I′ ⊲ l:[K] then Q ∣ (φsender, δforward) ⊳ M; S⟩ →
⟨local J in tell δ[J/forward] then (J ⊲ l:[K] then Q) ∣ (φsender, δforward) ⊳ M; S⟩

(tell)
⟨tell φ then P; S⟩ → ⟨P; S ∧ φ⟩

(ask)
S ⊢ φ  implies  ⟨ask φ then P; S⟩ → ⟨P; S⟩
S ⊢ ¬φ  implies  ⟨ask φ then P; S⟩ → ⟨O; S⟩

(composition)
⟨P; S⟩ → ⟨P′; S′⟩  implies  ⟨Q ∣ P; S⟩ → ⟨Q ∣ P′; S′⟩

(local variable)
x ∉ fv(S) and ⟨P; S ≫ x⟩ → ⟨P′; S′⟩  imply  ⟨local x in P; S⟩ → ⟨P′; S′⟩

(local name)
a ∉ fv(S) and ⟨P; S ≫ a⟩ → ⟨P′; S′⟩  imply  ⟨local a in P; S⟩ → ⟨P′; S′⟩

(congruence)
⟨P1; S1⟩ ≡P ⟨P1′; S1′⟩,  ⟨P2; S2⟩ ≡P ⟨P2′; S2′⟩ and ⟨P1; S1⟩ → ⟨P2; S2⟩  imply  ⟨P1′; S1′⟩ → ⟨P2′; S2′⟩

Table 2.8: Transition Rules of PiCO
The first rule (communication) describes the result of the interaction between the
sender process I′ ⊲ l:[K] then Q and the receptor process (φsender, δforward) ⊳ [l :
(x)P & . . .]. The store is used to decide whether a communication between both
processes can be established. If so, the method body P is activated (with the
substitutions applied) concurrently with Q, the continuation of the sender. The
second rule (delegation) formally describes how messages are delegated. Infinite del-
egations are avoided by the conditions posted by processes. The third and fourth
rules describe the interaction between a process and the store. The other rules are
similar to those of the π and π+ calculi.
2.4.2 Applications
This calculus is a foundation for developing a computational model suitable for con-
structing music composition tools. Cordial [QRT97] is a high level visual program-
ming language integrating object-oriented and constraint programming intended for
musical applications. Its semantics is based on PiCO.
Cordial is an iconic language in the spirit of OpenMusic. The basic elements of
a program are those of object-oriented programming, such as classes, objects and
methods. A program is a set of classes. The solution of a music composition problem
is given by programming in a visual, concurrent, object-oriented and constraint way.
For example, given the following problem6:
Two melodic voices, Voice1 and Voice2, will start at the same time and
will be generated until an external condition is met. Notes in the two
voices have three attributes: pitch, duration and dynamic. Their pitch
values should be in a set of allowed ambitus (i.e. a range) Amb, and their
durations must belong to a given set, say {4, 2, 1} (where 2 represents,
say, a quarter note). Notes must also satisfy the following four conditions:

1. If n1 and n2 are two consecutive notes in Voicei (i ∈ {1, 2}) hav-
ing pitches equal to x and y, respectively, then tell max(x, y) −
min(x, y) ∈ Melody, where Melody is a given set of integers.
2. Notes are divided into groups of duration equal to 4. A group could
be made to correspond to any meaningful rhythmic division, for
instance a beat of a measure. Each group contains notes of both
Voice1 and Voice2.
3. Notes starting a group are constrained differently. The first note in
each duration group has its dynamic equal to 70.
4. Let n1 and n2 be notes from the same group in Voice1 and Voice2,
respectively. If they sound at the same time and their durations are
both greater than 1 then the absolute difference between their pitch
values must be in a certain interval set, HARMONSET1. In any
other case, the absolute difference between their pitches must be in
HARMONSET2.
Figure 2.10 shows the Cordial editor with the musical classes (Melody, Group of
Voices, Voice and Note) to solve the problem. A class in Cordial has three main
sections: Attributes, Methods and Constraints. Melody is the main class; it has
an attribute Group which is a sequence of groups representing the two voices of
6Example taken from [RAQ+01]
the melody, three class methods, voice1, voice2 and stop, and two class constraints,
consecutive and end. Class Group models a time window of the two voices of the
class Melody. Class Voice represents the voices as a sequence of notes. Finally,
each note is represented by the class Note (with its attributes of pitch, duration
and dynamics).
The translation of the class Note in figure 2.10 into a PiCO process can be roughly
defined as follows:
clone note ⊳
  [new : (self) local cellp, celldu, celldy, vp, vdu, vdy in
    clone self ⊳
      [pitch : (x) tell x = cellp &
       duration : (x) tell x = celldu &
       dynamic : (x) tell x = celldy &
       contiguous : (aNote) local int1, int2 in
         self.pitch ⊲ sub[aNote.pitch, int1] ∣ int1 ⊲ abs(int2) ∣
         int2 ⊲ inrank[MELODY] &
       harmony : (note′, ot1, ot2) local hint, rt1, rt2 in
         rt1 ← 4 − ot1 − self.duration ∣
         rt2 ← 4 − ot2 − note′.duration ∣
         ask rt1.value ≤ note′.duration.value ∧
             self.duration.value > 1 ∧
             note′.duration.value > 1
         then
           hint ← abs(self.pitch − aNote.pitch) ∣
           hint ⊲ inrank[HARMONISET1] ∣
         ask ¬(rt1.value ≤ note′.duration.value ∧
               self.duration.value > 1 ∧
               note′.duration.value > 1)
         then
           hint ← abs(self.pitch − aNote.pitch) ∣
           hint ⊲ inrank[HARMONISET2] ∣
         ask . . . then . . . & . . .]
    cellmaker ⊲ [cellp, vp] ∣ . . . &
   super : (r) tell r = note]
Figure 2.10: The Cordial Editor
Methods are defined graphically using instances of classes, inputs and outputs, mes-
sages (methods), conditionals and comments. Figure 2.11 shows the method End of
the class Group. This method instantiates its string argument to "stop" to signal
the condition that the final notes of each voice in a group have the same duration.
Figure 2.11: A Method Editor in Cordial
2.4.3 Considerations
With PiCO, the Avispa research group managed to integrate concurrent objects and
constraints. This calculus allows the representation of complex, partially defined
objects, such as musical structures, in a compact way and also allows an easy way of
describing harmony relations. Nevertheless, since there is no explicit notion of time
in PiCO, some musical problems involving time and synchronization are difficult
to express. Moreover, since there is no formal logic associated with the calculus,
reasoning about musical processes behaviour is hard to accomplish.
Given the convenience of graphical representations such as Block-Diagrams (see sec-
tion 2.5), an extension of PiCO, called GraPiCO, was recently proposed in [Tav08]
as a visual representation of the calculus.
2.5 Block-Diagram Algebra
The Block-Diagram Algebra [OFL02] was proposed by Yann Orlarey et al. as an
algebraic approach to block diagram construction. It was designed as an alternative
to the classical graph approach inspired by dataflow models.
The idea of this algebra is to give an explicit formal semantics to dataflow-inspired
music languages by means of high-level construction operations that combine and
connect block diagrams, together with rules associated with each construction
operation.
2.5.1 Semantics
Block-diagrams are defined as a set of primitive blocks and a set of connections
between them. Let B be a set of primitive blocks and D be the set of block-diagrams
built on top of B. A block-diagram D ∈ D is defined recursively by the syntax given
in table 2.9.
The identity block (a simple connection wire) and the cut block (a connection ending)
are typically used to create complex routings. The sequential composition operator
is used to connect the outputs of D1 to the corresponding inputs of D2. The parallel
composition operator associates two block-diagrams one on top of the other, without
connections. The split composition operator is used to distribute the outputs of D1
to several inputs of D2. The merge composition operator does the inverse of the split
operator (to connect several outputs of D1 to the same inputs of D2). Finally, the
D ::= B ∈ B          primitive block
    ∣ _              identity
    ∣ !              cut
    ∣ (D1 ∶ D2)      sequential composition
    ∣ (D1 , D2)      parallel composition
    ∣ (D1 <∶ D2)     split composition
    ∣ (D1 ∶> D2)     merge composition
    ∣ (D1 ∼ D2)      recursive composition
Table 2.9: Syntax of the Block-Diagram Algebra
recursive composition operator is used to create cycles in the block diagram in order
to express recursive computations.
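The composition operators also impose constraints on the numbers of inputs and outputs being connected (for example, sequential composition requires the outputs of D1 to match the inputs of D2). The Python sketch below is our own illustration of that bookkeeping, following the arity rules as realized in Faust's implementation of the algebra; the block names are invented for the example:

```python
# Sketch of the arity bookkeeping behind three of the composition
# operators. A block diagram is modeled only by its (inputs, outputs) pair.

def block(ins, outs):            # a primitive with given in/out counts
    return (ins, outs)

def seq(d1, d2):                 # (D1 : D2): outputs of D1 feed inputs of D2
    assert d1[1] == d2[0], "sequential: outs(D1) must equal ins(D2)"
    return (d1[0], d2[1])

def par(d1, d2):                 # (D1 , D2): stacked, no connections
    return (d1[0] + d2[0], d1[1] + d2[1])

def split(d1, d2):               # (D1 <: D2): outs of D1 distributed to D2
    assert d2[0] % d1[1] == 0, "split: ins(D2) must be a multiple of outs(D1)"
    return (d1[0], d2[1])

plus = block(2, 1)               # e.g. an adder primitive
gain = block(1, 1)               # e.g. a multiplier by a constant

d = seq(par(gain, gain), plus)   # two gains in parallel feeding an adder
print(d)                         # (2, 1): two inputs, one output
```

A composition that violates these rules (e.g. seq(plus, plus)) is simply not a well-formed diagram, which is what makes the algebraic construction checkable.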
[Table 2.10 pairs each graph representation with its algebraic form: the cut
block !, the sequential composition B ∶ C, the parallel composition (B , C), the
split composition B <∶ C, the merge composition B ∶> C, and the recursive
composition B ∼ C.]

Table 2.10: Graph and Algebraic Representation of Processes
Table 2.10 shows the scheme in block diagram language and its associated textual
syntax7.
2.5.2 Applications
Faust [GO03] is a programming language designed for real-time sound processing
and synthesis. It combines two models of programming: functional programming
(from this approach comes the name Faust, Functional AUdio STream) and block-
diagrams composition. This language was developed by Grame to help programmers
and musicians to build audio stream processors.
Faust is, in fact, an implementation of the block-diagram algebra proposed in
[OFL02]; that is, it can be thought of as a structured block-diagram language with
a textual syntax. A Faust block diagram denotes a signal processor transforming
signals.
This language provides primitives very similar to C/C++ operators, such as arith-
metic, comparison and bitwise operators, constants, casting, tables and interface
elements. A noise generator, for instance, can easily be built from a random number
generator whose values are scaled down to between −1 and 1, as in the following
code and its corresponding block-diagram in figure 2.128:
random = ( *(1103515245) + 12345 ) ~ _ ;
noise = random *(1.0/2147483647.0);
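The random line is a linear congruential recurrence, x[n] = x[n−1] · 1103515245 + 12345 with integer wraparound, and the noise line rescales it to roughly [−1, 1]. The following Python sketch mimics it numerically; the 32-bit wraparound detail is our assumption about the integer arithmetic, and the function names are ours:

```python
# Numerical sketch of the Faust noise generator: a linear congruential
# recurrence with 32-bit wraparound, rescaled to roughly [-1, 1].

def to_int32(v):
    """Wrap an integer to signed 32-bit range, like C int arithmetic."""
    v &= 0xFFFFFFFF
    return v - 0x100000000 if v >= 0x80000000 else v

def noise(n, x=0):
    out = []
    for _ in range(n):
        x = to_int32(x * 1103515245 + 12345)   # random = +(12345) with feedback
        out.append(x * (1.0 / 2147483647.0))   # noise = random * (1.0/2147483647.0)
    return out

samples = noise(4)
print(samples)
```

The same two constants appear in the block diagram of figure 2.12: the multiplier and adder form the feedback loop, and the division performs the final scaling.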
2.5.3 Considerations
The Block-Diagram Algebra is an interesting approach to visual languages: it is
somewhat more intuitive, takes the user to a higher level of abstraction, and makes
the analysis of programs easier. The graph representation of a block-diagram, its
denotational semantics (which describes the meaning of a program by denoting what
is computed, i.e. the mathematical object), and its suitability for formal manipulations
(λ-calculus, partial evaluation, compilation, for instance) make this algebra a
powerful formalism for visual languages.
7 Figures modified from those in [OFL02].
8 Example taken from [OFL04].
[Figure: the random feedback loop built from *(1103515245) and +(12345),
scaled by 1.0/2147483647.0 (RAND_MAX) to produce noise]
Figure 2.12: Example of a Noise Generator in Faust
2.6 CC Family
Vijay A. Saraswat in [Sar93] has proposed concurrent constraint programming (ccp)
as a model for specifying concurrent systems in terms of constraints.
A constraint is a first-order formula representing partial information about the
shared variables of the system. Examples of constraints are:
x ≥ 42
x + y ≤ 30
The information about the shared variables resides in a store, which is, in fact, the
conjunction of all the constraints applied to the variables. This store can be accessed
by agents (processes that interact with the store) with two basic operations: ask and
tell. As opposed to the π-calculus, which defines a point-to-point communication,
ccp is a model of shared-memory communication. Intuitively, this implies that a
process in a communication broadcasts messages to every other agent in the system.
Figure 2.13 shows the classic example of four agents interacting with the store.
The agents on the left tell the store that the variable x will be instantiated between
the values 0 and 100. When the agent at the top right asks if x is equal to 100, the
answer will be no; but when the fourth agent asks whether x is between 30 and 50,
it will be blocked, because there is not enough information to answer that. The
agent will wait until some other agent tells the store something else about the
variable.
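This interaction can be sketched in a few lines of Python. The encoding is entirely ours (the store is represented as an interval of still-possible values for x, which suffices for interval constraints; class and method names are invented), and the blocked case is returned as a value rather than actually suspending the agent:

```python
# Sketch of the ask/tell interaction of figure 2.13. The store keeps the
# interval [lo, hi] of values x may still take; tell narrows it, and ask
# answers True, False, or 'blocked' when the store cannot yet decide.

class Store:
    def __init__(self):
        self.lo, self.hi = float("-inf"), float("inf")

    def tell(self, lo=None, hi=None):        # conjoin new information
        if lo is not None: self.lo = max(self.lo, lo)
        if hi is not None: self.hi = min(self.hi, hi)

    def ask(self, lo, hi):
        if lo <= self.lo and self.hi <= hi:  # every possible x lies in [lo, hi]
            return True
        if self.hi < lo or hi < self.lo:     # no possible x lies in [lo, hi]
            return False
        return "blocked"                     # not enough information yet

s = Store()
s.tell(lo=1)            # agent 1: x > 0   (encoded over the integers as x >= 1)
s.tell(hi=99)           # agent 2: x < 100 (encoded as x <= 99)
print(s.ask(100, 100))  # "x = 100?"      -> False
print(s.ask(31, 49))    # "30 < x < 50?"  -> 'blocked' (the agent would wait)
s.tell(lo=35, hi=45)    # some other agent adds information ...
print(s.ask(31, 49))    # ... and the blocked ask can now answer True
```

Note the broadcast character of the model: the third tell immediately unblocks the waiting ask, with no point-to-point channel between the two agents.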
[Figure: a store accessed by four agents; two agents telling x > 0 and x < 100,
one agent asking x = 100 ?, and one agent asking 30 < x < 50 ?]
Figure 2.13: A CCP Store Accessed by Four Agents
An agent or process then is an information processing device that interacts with its
environment via a shared constraint representing the known common information
[SRP91].
2.6.1 Semantics
Assuming a set of Names N and a set of constraints C, the set of ccp-terms, denoted
by P, is defined inductively by the syntax in table 2.11.
P , Q ::= c          Tell
    ∣ c → P          Ask
    ∣ ∃X.P           Restriction
    ∣ P ∧ Q          Parallel composition
    ∣ P ◻ Q          Summation
    ∣ p(X)           Procedure call
Table 2.11: Syntax of ccp
The Restriction, Parallel composition, Summation and Procedure call operators play
the same roles as their π-calculus counterparts. The Tell construct posts the
constraint c in the store. The Ask construct queries the store to know if the condition c
can be entailed; if so, the process P is activated, otherwise the agent is blocked until
its condition is satisfied by the store.
This calculus is parameterized in a constraint system [SRP91], which specifies what
kind of constraints the store handles. Formally, if ℘C is the set of finite subsets of
C, a constraint system is a structure ⟨C, ⊢⟩ where C is a non-empty, countable set of
(primitive) constraints, and ⊢ ⊆ ℘C × ℘C, called an entailment relation, satisfies the
following for any c, d, e ∈ ℘C:

- if c ⊆ d, then d ⊢ c;
- if d ⊢ e and e ⊢ c, then d ⊢ c.
A store of ⟨C, ⊢⟩ is a set of constraints d ⊆ C closed under entailment: for any c ∈ C
and finite d′ ⊆ d, if d′ ⊢ c then c ∈ d. Two stores are equivalent, notation d ⊢⊣ d′,
iff d ⊢ d′ and d′ ⊢ d. An empty store corresponds to true, and an inconsistent store
corresponds to false.
The classic example of a constraint system is the Herbrand Constraint System
[Sar93]. This constraint system is such that C is the set of the atomic propositions in
an ordinary first-order language L with equality, and ⊢ includes the usual entailment
relations from equality (e.g. if f(x1, . . . , xn) = f(y1, . . . , yn) then x1 = y1, . . . , xn = yn).
A cylindric constraint system is a structure ⟨C, ⊢, Var, {∃x ∣ x ∈ Var}⟩ where ⟨C, ⊢⟩
is a constraint system, Var is an infinite set of variables, and for each x, y ∈ Var and
c, d ∈ ℘C, ∃x : ℘C → ℘C is an operation satisfying:

- d ⊢ ∃x d
- if d ⊢ c then ∃x d ⊢ ∃x c
- ∃x(d ∪ ∃x c) ⊢⊣ ∃x d ∪ ∃x c
- ∃x ∃y d ⊢⊣ ∃y ∃x d
To express the operational behaviour of concurrent constraint processes, a
quaternary transition relation is defined as → ⊆ Env × (∣D∣0 × ∣D∣0) × P × P, where Env
is the set of all partial functions from procedure names to agents, ∣D∣0 denotes the
set of finite elements of the constraint system D (a cylindric constraint system is
assumed), and P is the set of ccp processes. The notation ρ ⊢ P --(c,d)--> Q means
that agent P with an input store c can, in one uninterruptible step, upgrade the
store to d, and subsequently behave like Q (when it is not relevant, the "ρ ⊢" can
be omitted). The operational semantics of the process language is given in table 2.12.
c --(d, d∪c)--> true    (c ≠ true)

c → P --(d, d)--> P    if d ⊢ c

P --(c,d)--> P′  implies  P ∧ Q --(c,d)--> P′ ∧ Q  and  Q ∧ P --(c,d)--> Q ∧ P′

P --(∃x c, d)--> Q  implies  ∃x P --(c, c ∪ ∃x d)--> ∃x(d, Q)

P --(d ∪ ∃x c, d′)--> Q  implies  ∃x(d, P) --(c, c ∪ ∃x d′)--> ∃x(d′, Q)

P ◻ Q --(d, d)--> P        P ◻ Q --(d, d)--> Q

ρ ⊢ p(X) --(d, d)--> ∃α(dαX , ρ(p))

Table 2.12: Transition Rules of ccp
These rules state the following: the process c adds the information in c to the shared
store d in a single step; the process c → P waits until the store contains at least as
much information as c, and then behaves as P ; the transition rules for P ∧ Q show
that in a parallel composition only one process may evolve in a single transition,
and moreover that P and Q never communicate synchronously, that is, all commu-
nication takes place asynchronously through information added by one agent to the
common store for the other agent to use; the rules for ∃x P state that all information
about x in c is hidden from ∃x P , and all information about x that P produces is
hidden from the environment (similarly for agents with an internal store); the pro-
cess P ◻ Q behaves as P or as Q (one of them is chosen nondeterministically);
finally, procedure calls p(X) are handled by looking up the procedures in the
environment ρ.
The semantics of this calculus is based on the observations one can make of a process.
These observations are the set of resting points (a resting point of a process is a con-
straint c such that, if the process is executed with c as input store, it will eventually
halt without producing any more information, i.e. the store remains unchanged).
From these, notable structural congruence relations can be established, such as (see
[SRP91] for further details):
1. P ∧ true ≡ P
2. P ∧ Q ≡ Q ∧ P
3. P ∧ (Q ∧ R) ≡ (P ∧ Q) ∧ R
4. P ◻ Q ≡ Q ◻ P
5. P ∧ ∃x Q ≡ ∃x(P ∧ Q) if x ∉ fv(P )
6. (a → b) ∧ (c → d) ≡ (a → b) if c ≥ a, b ≥ d
and the remarkable interleaving law:

(c → P ) ∧ (d → Q) ≡ (c → (P ∧ (d → Q))) ◻ (d → (Q ∧ (c → P )))

This law says that a parallel composition of agents can be reduced to a choice between
all possible interleavings of its basic actions.
On the other hand, in order to describe the denotational semantics of the calculus, the
process language must be split into the determinate language and the nondeterminate
language9. In the determinate language (the one without the summation construct), a
process is identified with a function f : ∣D∣0 → ∣D∣0 that maps each input c to false
if the process enters an infinite execution sequence when it receives c as input
store, and to a store d if the process ultimately quiesces having upgraded the store
to d. This function f is a closure operator in the following sense:

- extensive: the only way in which the process can affect the store is by adding
  more information to it,
  ∀c. c ≤ f(c)

- idempotent: if on a resting point the process outputs d, then if d is now given
  as input to the process, it cannot progress further (otherwise it was not a
  resting point),
  ∀c. f(f(c)) = f(c)

- monotone: if the information in the input is increased, the information in the
  output should not decrease,
  ∀c, d. c ≤ d ⟹ f(c) ≤ f(d)

9 It is necessary to make this distinction since computing the resting points of nondeterministic
processes is significantly more complex than for deterministic ones.

The denotational semantics for the determinate process language is in table 2.13.
Function A returns the output store of the resting point for each process.
A(c)e = {d ∈ ∣D∣₀ ∣ d ≥ c}
A(c → P)e = {d ∈ ∣D∣₀ ∣ d ≥ c ⟹ d ∈ A(P)e}
A(P ∧ Q)e = {d ∈ ∣D∣₀ ∣ d ∈ A(P)e ∧ d ∈ A(Q)e}
A(∃xP)e = {d ∈ ∣D∣₀ ∣ ∃c ∈ A(P)e. ∃x d = ∃x c}
A(p(X))e = ∃α(dαX ⊔ e(p))

where e maps procedure names to processes, providing an environment for
interpreting procedure calls, α is some variable in the underlying
constraint system, and c ⊔ f stands for {c ⊔ d ∣ d ∈ f}.
Table 2.13: Denotational Semantics for the Determinate ccp Language
Now we explain the denotations for the determinate language. The process c
augments its input store d with the finite constraint c; therefore, its fixed points are
those stores d containing at least as much information as c (d ≥ c). For an ask process
c → P, if the input store contains at least as much information as c, then its fixed
points are in the denotation of P. Since in a parallel composition P ∧ Q the process
quiesces exactly when both P and Q quiesce, its fixed points are in the intersection
of the fixed points of P with those of Q. For the process ∃xP the fixed points are
those stores d for which there exists a store c in the denotation of P that agrees with
d on all information not concerning x. Finally, the denotation of a procedure call is
given by the underlying environment e.
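The three closure-operator properties can be made concrete with a toy sketch (entirely our own construction, not taken from [SRP91]): stores are finite sets of primitive tokens ordered by inclusion, and a determinate process is a list of ask/tell rules run to a fixed point. The helper names (`quiesce`, `rules`) are ours.

```python
# Toy model (our own illustration): a store is a frozenset of tokens, the
# entailment order is set inclusion, and a determinate process is a list of
# ask/tell rules. quiesce(rules, c) computes f(c), the resting store.

def quiesce(rules, store):
    """Apply every (guard, body) rule until nothing changes (the fixed point)."""
    store = frozenset(store)
    changed = True
    while changed:
        changed = False
        for guard, body in rules:
            if guard <= store and not body <= store:
                store |= body
                changed = True
    return store

# The process (a -> b) /\ (b -> c), encoded as two rules.
rules = [(frozenset({"a"}), frozenset({"b"})),
         (frozenset({"b"}), frozenset({"c"}))]
f = lambda c: quiesce(rules, c)

for c in [frozenset(), frozenset({"a"}), frozenset({"b"})]:
    assert c <= f(c)            # extensive:  c <= f(c)
    assert f(f(c)) == f(c)      # idempotent: f(f(c)) = f(c)
assert f(frozenset()) <= f(frozenset({"a"}))  # monotone, on one sample pair
```

Under this encoding, f is exactly the function from ∣D∣₀ to ∣D∣₀ described above, mapping each input store to the store at the resting point.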
In the nondeterminate language, since there may be several possible results due to the
summation construct, the denotation of a process can no longer be a function from ∣D∣₀
to ∣D∣₀; nor can it be a relation over ∣D∣₀ × ∣D∣₀, since parallel composition would then
not be definable. Instead, there must be a record of the interaction that a process
engages in with the environment before its resting point. A trace operator is thus
defined, which intuitively is the parallel composition of a finite set of finite sequences
of asks and tells (the sequence a1!b1⋆…⋆an!bn is called a trace, representing the closure
operator a1 → (b1 ∧ (a2 → (b2 … (an → bn)…)))). Since the denotations for the
nondeterminate language are somewhat more complex, we refer the reader to [SRP91]
for their definitions and properties.
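Under the same toy encoding of stores as token sets (our own, hypothetical construction), a trace a1!b1⋆…⋆an!bn can be turned into its closure operator directly: the asks may fire only in sequence, and the operator blocks at the first ask the store does not entail, mirroring the nesting a1 → (b1 ∧ (a2 → …)).

```python
def trace_operator(trace):
    """Turn [(a1, b1), ..., (an, bn)] into the operator a1 -> (b1 /\ (a2 -> ...))."""
    def f(store):
        store = set(store)
        for ask, tell in trace:   # asks may only fire in order (the nesting)
            if ask <= store:
                store |= tell
            else:
                break             # blocked: the inner asks are unreachable
        return frozenset(store)
    return f

f = trace_operator([({"a1"}, {"b1"}), ({"a2"}, {"b2"})])
assert f({"a1"}) == frozenset({"a1", "b1"})                    # only the outer ask fires
assert f({"a1", "a2"}) == frozenset({"a1", "a2", "b1", "b2"})  # both asks fire
```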
2.6.2 Extensions
There are various extensions of the ccp model such as tcc, tccp, Default cc,
Atomic cc, lcc, Distributed cc, and Scc. In this section, we briefly explain
some of those extensions and focus on the tcc model, which is the branch relevant
to this dissertation.
The tcc branch
The tcc calculus [SJG94a] is aimed at programming and modeling timed, reactive
systems. This model extends ccp with delay and time-out operations; parallelism
in this calculus is treated by interleaving.
In reactive systems, time is conceptually divided into discrete intervals (or time
units). In a time interval, a process receives a stimulus from the environment, it
computes (reacts) and responds to the environment. Graphically, a reactive system
can be depicted as in figure 2.14 (for further studies on reactive systems see [HP85]).
Each process Pi receives a stimulus ii and responds with oi. The stimulus is normally
some piece of information used by the process to execute something. When the
process is ready (at its resting point) it responds to the environment with another
piece of information and evolves into another process to be executed in the next time
unit. The duration of each stimulus/response determines the time unit ti. In the case
of tcc the stimulus is a constraint representing the initial store, and the output is
Figure 2.14: Reactive Systems
another constraint representing the final store (with the additional information that
the process may add). The following sequence illustrates the formal notation of the
stimulus-response interactions between an environment that inputs c1, c2, . . . and a
process that outputs d1, d2, . . . on such inputs:
P1 (c1,d1)Ô⇒ P2 (c2,d2)Ô⇒ ⋯ Pi (ci,di)Ô⇒ Pi+1 (ci+1,di+1)Ô⇒ ⋯
As the reader may see, time is a set of points acting as markers distinguishing a time
interval from the next.
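This stimulus/response discipline can be sketched as a simple loop (entirely our own encoding): a process is a function from an input store to a pair of an output store and the process for the next time unit. `clock_process` is a hypothetical example process, not one from the calculus.

```python
def clock_process(n=1):
    """Hypothetical process: each unit it echoes the stimulus plus a tick marker."""
    def step(stimulus):
        response = set(stimulus) | {f"tick{n}"}    # compute the quiescent output
        return response, clock_process(n + 1)      # ...and the next-unit process
    return step

P, outputs = clock_process(), []
for stimulus in [{"c1"}, {"c2"}, {"c3"}]:
    response, P = P(stimulus)                      # one stimulus/response cycle
    outputs.append(sorted(response))
assert outputs == [["c1", "tick1"], ["c2", "tick2"], ["c3", "tick3"]]
```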
The syntax of this model is in table 2.14.
P,Q ::= c                  Tell
      ∣ now c then P       Timed Positive Ask
      ∣ now c else P       Timed Negative Ask
      ∣ next P             Unit Delay
      ∣ abort              Abort
      ∣ skip               Skip
      ∣ P ∥ Q              Parallel Composition
      ∣ ∃X P               Hiding
Table 2.14: Syntax of tcc
Tell, Timed Positive Ask, Parallel Composition and Hiding are constructs inherited
from ccp. They add constraints to the store, check information in the current
store, compose two processes concurrently in the same time interval, and hide vari-
ables from the environment. skip is the process that does nothing at all at any time.
abort stops all interaction with the environment. Unit Delay makes the current
process behave like the enclosed process in the next time unit. Timed Negative Ask
checks the absence of information.
The operational semantics is given in terms of configurations [SJG94b]. A configura-
tion is a multiset Γ of agents. There are two binary transition relations over
configurations, Ð→ and ⇝. Ð→ represents transitions within a time instant, while ⇝
represents a transition from one time instant to the next. The rules for transitions
are defined in table 2.15.
(Γ, skip) Ð→ Γ

(Γ, abort) Ð→ abort

(Γ, P ∥ Q) Ð→ (Γ, P, Q)

(Γ, ∃X P) Ð→ (Γ, P[Y/X])   (Y not free in Γ)

(Γ, now c then P) Ð→ (Γ, P)   if σ(Γ) ⊢ c

(Γ, now c else Q) Ð→ Γ   if σ(Γ) ⊢ c

If (∆, {now ci else Pi ∣ i < n}, {next Qj ∣ j < m}) ↛ then
(∆, {now ci else Pi ∣ i < n}, {next Qj ∣ j < m}) ⇝ ({Pi ∣ i < n}, {Qj ∣ j < m})

where σ(Γ) represents the store and ∆ ranges over multisets of agents that do not
contain Timed Negative Asks or Unit Delays.
Table 2.15: Transition Rules of tcc
Now we explain these rules. Process skip has no effect and process abort
annihilates the environment. Parallel composition is achieved with the multiset
union operator “,” inherited from TCCS [BB90]. For the hiding operator the bound
variable is substituted with a variable Y not occurring in Γ. In a timed positive ask
now c then P, if the store is strong enough to entail c, then P is activated. On the
other hand, for now c else Q, if the current store entails c, then Q can be eliminated.
Finally, if there are no more Ð→ transitions (every process has quiesced), computation
can progress to the next time unit; the active agents at the next time unit will
be the remaining negative-ask and unit-delay processes.
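As a toy illustration of these rules (our own drastic simplification, not the calculus itself), the following interpreter runs a tcc-like fragment within one time unit and then performs the timed transition: blocked positive asks are discarded, while next-bodies and unfired else-bodies become the agents of the next unit. The tuple encoding and all names are ours, and stores are sets of tokens rather than real constraints.

```python
def run_instant(agents):
    """Run agents to quiescence within one time unit; return (store, next agents)."""
    store, queue = set(), list(agents)
    progress = True
    while progress:
        progress, rest = False, []
        for a in queue:
            kind = a[0]
            if kind == "tell":                      # add a token to the store
                store.add(a[1])
                progress = True
            elif kind == "par":                     # split into two agents
                rest += [a[1], a[2]]
                progress = True
            elif kind == "then" and a[1] in store:  # now c then P: guard entailed
                rest.append(a[2])
                progress = True
            elif kind == "else" and a[1] in store:  # now c else P: guard entailed, drop
                progress = True
            else:                                   # blocked: keep for re-scanning
                rest.append(a)
        queue = rest
    # Timed transition: next-bodies and unfired else-bodies go to the next unit;
    # blocked positive asks are simply discarded.
    nxt = [a[1] for a in queue if a[0] == "next"]
    nxt += [a[2] for a in queue if a[0] == "else"]
    return store, nxt

agents = [("tell", "c"),
          ("then", "c", ("tell", "d")),
          ("else", "e", ("tell", "f")),
          ("next", ("tell", "g"))]
store1, agents2 = run_instant(agents)
store2, _ = run_instant(agents2)
assert store1 == {"c", "d"} and store2 == {"f", "g"}
```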
There are some extensions of tcc. We give a brief description of some of them.
Temporal concurrent constraint programming is a model for timed systems. This
model is a generalization of tcc called the ntcc calculus; it was proposed in [PV01] and
described in detail in [VNP02]. The ntcc calculus extends tcc with the notions of
asynchrony and non-determinism. This calculus will be detailed in the next section.
A stochastic extension of ntcc, called sntcc, was proposed in [OR05]. sntcc in-
troduced two additional operators to express stochastic behavior: ⋆ρP, an eventual-
stochastic construct, and ρP, the stochastic process. These new constructs allow
specifying actions according to a given probability ρ.
Recently, another extension of tcc, called utcc, was proposed in [OPV07]. This
calculus increases the expressiveness of tcc by allowing infinite behaviour and mobility,
introducing the notion of abstraction. One of the main purposes of this model is
to verify security protocols.
Other extensions
The tccp model [dBGM00] is a non-deterministic extension of ccp to formalize large
concurrent timed systems. In this model, concurrent processes are handled under the
assumption of infinitely many processors (a notion called maximal parallelism), and
each time unit is identified with the time needed for the constraint system to perform
tell operations and answer ask operations.
Default cc [SJG96] extends ccp by allowing non-monotonic processes and the ex-
pression of defaults, that is, negative information (i.e., when something cannot be
entailed from the store, some processes may be executed).
The main idea of Scc [BJGK96] is to change the classical asynchronous approach
of ccp. In this calculus the behaviour of the process tell(c) changes: a tell(c)
operation suspends if the constraint c is not entailed by the store, and it can be
resumed synchronously with a process ask(d) if the store in conjunction with c
entails d, in which case the store is updated with c.
An extension of Scc to deal with real-time (called TScc) was proposed in [BKGJ97].
Here, every primitive action is formed from a time action (which allows for progress
in time) and an instantaneous action (which allows for testing/updating the store).
Timed Default cc [SJG95] extends Default cc over discrete time. Since ccp has
no concept of timed execution, time is divided into discrete ticks (just like in tcc). To
this end, next A (A will be executed at the next tick) and always A (A will be
executed at every tick) were added to the model.
The Hybrid concurrent constraint programming model, Hybrid cc, is described in
[GJS98] as an extension of Default cc over continuous time. A single temporal con-
struct, called hence, is added to the operations of Default cc. hence A imposes
the constraint A at every time instant after the current one. Continuous constraint
systems augment constraint systems with the notion of a constraint holding continu-
ously over a period of time.
The semantics of Atomic cc [BHMR98] allows an atomic interpretation of the tell
operation instead of the usual eventual interpretation (in which a constraint is added
to the current store without any consistency check). This means that in Atomic cc
constraints are added to the store only if they are consistent with it.
Figure 2.15 shows the hierarchy of the cc family. There are more extensions such
as Located cc [Rom00], Probabilistic cc [GJP99], pcc [GJS97], lcc [FRS01],
and also the γ-calculus [Smo94] and the ρ-calculus [NM95]. We confine ourselves to
including just the calculi described above, together with the calculus proposed in
this dissertation.
2.7 The ntcc Calculus
The temporal concurrent constraint programming calculus, ntcc [Val02], is a model
for timed systems. It was created by Frank Valencia and Catuscia Palamidessi. The
ntcc calculus extends tcc with the notions of asynchrony and nondeterminism. The
notion of time is introduced as a sequence of time units, where each time unit is
identified with the time needed for a process to terminate a computation.
2.7.1 Operational Semantics
The ntcc processes are parameterized in a constraint system, which is a pair (Σ,∆)
where Σ is a signature (a set of constants, functions and predicates) and ∆ is a first-
order theory over Σ (a set of first-order sentences with at least one model).
Given a constraint system, the underlying language L of the constraint system is
[Figure: a tree rooted at ccp (1989). One branch leads to tcc (1994), itself extended
by ntcc (2001) (with sntcc (2005) and rtcc (2008)) and by utcc (2007); other branches
lead to Default cc (1994) with Timed Default cc (1995) and Hybrid cc (1998),
Scc (1996) with TScc (1997), Atomic cc (1998), and tccp (2000).]

Figure 2.15: Hierarchy of the CC Family
a tuple (Σ,V,S), where V is a set of variables and S is a set with the symbols
¬,∧,∨,⇒,∃,∀ and the predicates true and false. A constraint is a first-order
formula constructed in L.
The Processes P,Q, . . . ∈ Proc are built from constraints c ∈ C and variables x ∈ V in
the underlying constraint system by the syntax in table 2.16.
Processes in ntcc share a common store of partial information. The process tell(c)
adds the constraint c to the store, making c available to other processes in the current
time unit. The guarded choice ∑i∈I when ci do Pi, where I is a finite set of indexes,
represents a process that, in the current time unit, chooses non-deterministically a
process Pj (j ∈ I) whose guard cj is entailed by the store (the non-chosen alternatives
are precluded). Process P ∥ Q represents the parallel composition between P and
Q (in one time unit P and Q operate concurrently, communicating via the common
P,Q,… ::= tell(c)                 Tell
        ∣ ∑i∈I when ci do Pi      Nondeterminism
        ∣ P ∥ Q                   Parallel Composition
        ∣ (local x) P             Local Behavior
        ∣ next P                  Unit Delay
        ∣ unless c next P         Time-Out
        ∣ !P                      Infinite Behavior
        ∣ ⋆P                      Asynchrony
Table 2.16: Syntax of ntcc
store). Process (local x) P behaves like P but the information of the variable x is
local to P , i.e. P cannot see information about a global variable x and processes
which are not part of P cannot see the information produced by P about x. The
process next P executes the process P in the next time unit. The process unless c
next P represents the activation of P in the next time unit iff c cannot be entailed
by the store in the current time unit. The process !P executes P in all time units
from the current one (this is a delayed version of the replication operator from the
π-calculus). The process ⋆P represents an unbounded but finite delay of P, i.e. P
will eventually be executed.
The operational semantics is given by considering transitions between configu-
rations γ of the form ⟨P,d⟩, where P is an ntcc process and d is a store. Transitions
are divided into two kinds: the internal transition ⟨P, c⟩ → ⟨Q,d⟩ denotes the process P with
a store c reducing in one internal step to a process Q with store d; the observable
transition P (c,d)Ô⇒ Q denotes the process P on input c from the environment reducing
in one time unit to process Q and outputting d to the environment. The observ-
able transition is defined in terms of a terminating sequence of internal transitions
⟨P, c⟩ →∗ ⟨Q,d⟩ starting in P with store c and ending in Q with store d. As in tcc,
the store does not transfer automatically from one time unit to the next. Table
2.17 describes the transition rules (see [Val02] for further details).
Let us now explain these rules. A tell process adds information to the store and
terminates. The rule SUM says that a process Pj is non-deterministically chosen for
execution among all those whose guard cj can be entailed from the current store. If a
process P evolves to P′ then the same transition can occur if we execute P in parallel
with some process Q (parallel composition is commutative). The rule UNL states
TELL: ⟨tell(c), d⟩ → ⟨skip, d ∧ c⟩

SUM: ⟨∑i∈I when ci do Pi, d⟩ → ⟨Pj, d⟩   if d ⊧ cj, j ∈ I

PAR: if ⟨P, c⟩ → ⟨P′, d⟩ then ⟨P ∥ Q, c⟩ → ⟨P′ ∥ Q, d⟩

UNL: ⟨unless c next P, d⟩ → ⟨skip, d⟩   if d ⊧ c

LOC: if ⟨P, c ∧ ∃x d⟩ → ⟨P′, c′⟩ then ⟨(local x, c) P, d⟩ → ⟨(local x, c′) P′, d ∧ ∃x c′⟩

STAR: ⟨⋆P, d⟩ → ⟨nextⁿ P, d⟩   if n ≥ 0

REP: ⟨!P, d⟩ → ⟨P ∥ next !P, d⟩

STR: if γ1 → γ2 then γ′1 → γ′2, provided γ1 ≡ γ′1 and γ2 ≡ γ′2

OBS: if ⟨P, c⟩ →∗ ⟨Q, d⟩ ↛ then P (c,d)Ô⇒ R, where R ≡ F(Q) and

F(Q) = skip               if Q = ∑i∈I when ci do Qi
F(Q) = F(Q1) ∥ F(Q2)      if Q = Q1 ∥ Q2
F(Q) = (local x) F(R)     if Q = (local x, c) R
F(Q) = R                  if Q = next R or Q = unless c next R
that nothing is done in an unless process when c is entailed by the store. For a local
process, a “local” store c is created to hide x from the environment (all information
about x that P produces is hidden from the environment, and vice versa), but the
store at the end of the transition is augmented with the constraints of the local store
except those involving x. The rule STAR models the fact that process P will
be run at some undetermined point in the future. Replication !P is achieved by executing the
process P within the current time unit and sending the same process to the next
time unit. Finally, the rule STR states that structurally congruent configurations
have the same reductions.
The missing rule OBS is the one that goes from one time unit to the next.
When there are no more internal transitions (all processes have reached their resting points),
processes containing next constructs are scheduled for the next time unit (these
include those defined by unless processes whose guard cannot be entailed from the
current store). The other processes depend on the future function F(Q).

In ntcc what happens within a time unit cannot be directly observed. The pro-
cess observations (what an observer can see of a process behaviour) are
abstracted from internal transitions. If a process runs as follows:
P = P1 (c1,c′1)Ô⇒ P2 (c2,c′2)Ô⇒ P3 (c3,c′3)Ô⇒ …

this process can be represented as P (α,α′)Ô⇒ ω, where α = c1·c2·c3⋯ and α′ = c′1·c′2·c′3⋯
2.7.2 Denotational Semantics
The denotational semantics is given by considering the quiescent points of the pro-
cesses, that is, those sequences of constraints where the process can run without
adding any information whatsoever. The semantics is described by the denotations
in table 2.18.
For the tell process the sequences on which it can run without adding information
are those whose first element is stronger than c. Equation DSUM expresses that the
sequences on which ∑i∈I when ci do Pi can run without adding information are those
whose first element either entails one of the ci (and the sequence is also quiescent for
the corresponding Pi) or entails none of the ci. The equation for a parallel composition says that a sequence
is quiescent for P ∥ Q if it is quiescent for both P and Q. The semantic equation for the
local operator states that any new information that (local x)P adds to an input α
is not about x; moreover, if P can run on another input α′ without adding any
information and α′ has the same information as α except for that about x, then
(local x)P can run on α without adding any information. The process next P has
influence only on the suffix of the input sequence. The case
DTELL: ⟦tell(c)⟧ = {d.α ∣ d ⊧ c}

DSUM: ⟦∑i∈I when ci do Pi⟧ = ⋃i∈I {d.α ∣ d ⊧ ci and d.α ∈ ⟦Pi⟧} ∪ ⋂i∈I {d.α ∣ d ⊭ ci}

DPAR: ⟦P ∥ Q⟧ = ⟦P⟧ ∩ ⟦Q⟧

DLOC: ⟦(local x)P⟧ = {α ∣ there exists α′ ∈ ⟦P⟧ s.t. ∃x α′ = ∃x α}

DNEXT: ⟦next P⟧ = {d.α ∣ α ∈ ⟦P⟧}

DUNL: ⟦unless c next P⟧ = {d.α ∣ d ⊧ c} ∪ {d.α ∣ d ⊭ c and α ∈ ⟦P⟧}

DREP: ⟦!P⟧ = {α ∣ for all β, α′ s.t. α = β.α′, α′ ∈ ⟦P⟧}

DSTAR: ⟦⋆P⟧ = {β.α ∣ α ∈ ⟦P⟧}
Table 2.18: Denotational Semantics of ntcc
of the unless process is similar to that of next, except when the first element entails c. For
process !P a sequence is quiescent if every suffix of it is quiescent for P. Analogously,
a sequence is quiescent for ⋆P if there exists a suffix of it which is quiescent for P.
These denotations are characterized by the notion of the strongest postcondition of each
process (written sp(P)), which is the set of quiescent sequences of P under the
influence of arbitrary environments; more precisely, the set of all α for which there
exists an α′ such that P (α′,α)Ô⇒ ω.
2.7.3 Linear-Temporal Logic
A linear temporal logic for expressing properties of ntcc processes is given by the
following definition:
A ::= c ∣ A ⇒̇ A ∣ ¬̇A ∣ ∃̇x A ∣ ◯A ∣ ◻A ∣ ◇A

where c is a constraint; ⇒̇, ¬̇ and ∃̇x represent linear-temporal logic implication, nega-
tion and existential quantification; and ◯, ◻ and ◇ denote the temporal operators
next, always and eventually.
The standard interpretation structures of linear temporal logic are infinite sequences
of states [MP92]. In ntcc those states are represented by constraints. Let Cω be the
set of infinite sequences of constraints over the underlying set of constraints C. Given
α ∈ Cω, with α(i) the i-th element of α, we say that α is a model of (or satisfies) A,
written α ⊧ A, if ⟨α,1⟩ ⊧ A, where:
⟨α, i⟩ ⊧ c iff α(i) ⊧ c
⟨α, i⟩ ⊧ ¬̇A iff ⟨α, i⟩ ⊭ A
⟨α, i⟩ ⊧ A1 ⇒̇ A2 iff ⟨α, i⟩ ⊧ A1 implies ⟨α, i⟩ ⊧ A2
⟨α, i⟩ ⊧ ◯A iff ⟨α, i + 1⟩ ⊧ A
⟨α, i⟩ ⊧ ◻A iff ∀j ≥ i, ⟨α, j⟩ ⊧ A
⟨α, i⟩ ⊧ ◇A iff ∃j ≥ i s.t. ⟨α, j⟩ ⊧ A
⟨α, i⟩ ⊧ ∃̇x A iff there is an x-variant α′ of α s.t. ⟨α′, i⟩ ⊧ A

where α′ is an x-variant of α iff ∃x α = ∃x α′ [MP92].
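A small evaluator (our own finite approximation, not taken from [Val02] or [MP92]) makes the satisfaction relation concrete: constraints are tokens, α(i) ⊧ c is token membership, indices are 0-based, and the always/eventually operators quantify only over the given finite prefix instead of an infinite sequence.

```python
def holds(alpha, i, A):
    """Check formula A at position i of the finite sequence of stores alpha."""
    kind = A[0]
    if kind == "c":
        return A[1] in alpha[i]                  # alpha(i) |= c as membership
    if kind == "not":
        return not holds(alpha, i, A[1])
    if kind == "impl":
        return (not holds(alpha, i, A[1])) or holds(alpha, i, A[2])
    if kind == "next":                           # false past the finite prefix
        return i + 1 < len(alpha) and holds(alpha, i + 1, A[1])
    if kind == "always":
        return all(holds(alpha, j, A[1]) for j in range(i, len(alpha)))
    if kind == "eventually":
        return any(holds(alpha, j, A[1]) for j in range(i, len(alpha)))
    raise ValueError(f"unknown connective {kind}")

alpha = [{"a"}, {"a", "b"}, {"b"}]
assert holds(alpha, 0, ("always", ("c", "a"))) is False       # "a" absent at position 2
assert holds(alpha, 0, ("eventually", ("c", "b"))) is True
assert holds(alpha, 0, ("impl", ("c", "a"), ("next", ("c", "b")))) is True
```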
Given an ntcc process P and a temporal logic formula A, P satisfies A, written
P ⊧ A, iff sp(P) ⊆ ⟦A⟧. This can be established with the set of inference rules in table 2.19.
The first rule of the proof system states that every output of a process tell(c) on ar-
bitrary inputs satisfies the proposition c. For a summation process P =
∑i∈I when ci do Pi, if each process Pi satisfies a formula Ai, then P on arbitrary in-
puts satisfies either some of the guards ci together with their corresponding Ai, or
none of the guards. The rule LPAR says that if a process P satisfies a formula A
and a process Q satisfies a formula B (both on arbitrary inputs), then the parallel
composition of P and Q satisfies both formulae A and B. A local process
(local x) P satisfies a formula A with x hidden if P satisfies A. If a process
P will be executed in the next time unit, then it will satisfy a formula A in the next time
unit. Rule LUNL is similar to the previous one, except that the process can also satisfy the
proposition c (when the unless process is precluded from execution). Since !P
executes P in every time unit, the formula A is always satisfied by P. Since ⋆P
LTELL: tell(c) ⊢ c

LSUM: if ∀i ∈ I, Pi ⊢ Ai, then ∑i∈I when ci do Pi ⊢ ⋁̇i∈I (ci ∧̇ Ai) ∨̇ ⋀̇i∈I ¬̇ci

LPAR: if P ⊢ A and Q ⊢ B, then P ∥ Q ⊢ A ∧̇ B

LLOC: if P ⊢ A, then (local x) P ⊢ ∃̇x A

LNEXT: if P ⊢ A, then next P ⊢ ◯A

LUNL: if P ⊢ A, then unless c next P ⊢ c ∨̇ ◯A

LREP: if P ⊢ A, then !P ⊢ ◻A

LSTAR: if P ⊢ A, then ⋆P ⊢ ◇A

LCONS: if P ⊢ A and A ⇒̇ B, then P ⊢ B
Table 2.19: Proof System of ntcc
executes P in some time unit in the future, the formula A will be satisfied by P
in that time unit. Finally, rule LCONS says that if P satisfies a formula A, then it
also satisfies any weaker formula B.
2.7.4 Applications
Camilo Rueda and Frank Valencia have proposed ntcc as a model for expressing
temporal music processes and applications like rhythm patterns and controlled im-
provisation [RV01].
In [RV02] some musical properties were formally proved using the linear temporal
logic of ntcc. For the Nzakara musical problem, for instance, it was proved that
there is no solution. This problem is described in [Che95] as follows.

The Nzakara people of Central Africa have a harp with five cords.
With this instrument they play some musical formulae in ostinato (a motif
or phrase persistently repeated in the same musical voice) in order to
accompany the musician-poets. Among the Nzakara traditional formulae,
there exists a subset of formulae which have the remarkable property of being
canons. Figure 2.16¹⁰ shows the transcription of some of those formulae.
The five horizontal lines correspond to the five cords of the harp. Points
show which cords are pinched during the course of the formula. The
cords pinched in pairs form two melodic lines in superposition (one over
the three high cords and the other one over the three low cords). We can
see that, with some exceptions, the two melodic profiles are identical but
shifted in time. These are materialized on the transcription by zigzagging
blue and red lines (the exceptions to the rule of the canon are the points
over the lower cord which are external to the blue line). The problem
arises when we want to mathematically formalize the construction of
those canons, that is, to build a sequence which is a canon in a strict
sense (i.e. without any exception in the performance by one voice of the
melodic profile of the other).
The conclusion that this problem has no solution emerged after giving an
implementation in ntcc of a weaker version of the problem and verifying it using
the expressiveness of the logic.
ntcc was also proposed to model audio processing systems (closely related to, but with
a different approach from, Faust and its Block-Diagram Algebra). In [RV05] this
calculus was used to describe a framework for audio processing capable of modeling
higher-level musical structures and of building formal proofs of properties of a given
audio process. For example, the noise generator in section 2.5.2 can be defined in
ntcc as follows:
NOISE(v) def= tell(out = v × trig × 1/2147483647)
            ∥ next(NOISE(v × 1103515245 + 12345))
where trig is a signal controlled by the following process:
10Figure taken from [Che05].
Figure 2.16: Nzakara Formulae in Canon
TRIGGER(i) def= when pushb do next(tell trig = 1 ∥ TRIGGER(1))
             ∥ when pushs do next(tell trig = 0 ∥ TRIGGER(0))
             ∥ unless pushb ∨ pushs next(tell trig = i ∥ TRIGGER(i))
Process TRIGGER observes buttons b and s (which trigger the audio processing
and stop it, respectively) and changes the value of trig, but the environment should
guarantee that b and s are not pushed at the same time.
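The NOISE process above is a linear congruential generator. A direct sketch in Python (the multiplier 1103515245, increment 12345 and scaling constant 2147483647 come from the definition above; the 2³¹ wrap-around of the seed is our assumption, since the ntcc term leaves the word size implicit):

```python
def noise(v, trig, n):
    """Return n samples: out = v * trig / 2147483647, seed evolving as in NOISE."""
    out = []
    for _ in range(n):
        out.append(v * trig / 2147483647)      # tell(out = v * trig * 1/2147483647)
        v = (v * 1103515245 + 12345) % 2**31   # next NOISE(v*1103515245 + 12345), wrapped
    return out

assert noise(5, 0, 3) == [0.0, 0.0, 0.0]   # trig = 0 silences the output
```

When trig is held at 1 by the TRIGGER process, the samples fall in [0, 1]; centering them around 0 would be the obvious next step in a real implementation.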
On the other hand, musical scores involving static and interactive events that are
bound by some logical properties (such as Allen's relations [All83]), called interactive
scores [DCA04], were represented using ntcc in [AADCR06]. The construction and
performance of musical pieces composed of temporal structures was modeled in two
phases: the compositional process, using a constraint propagation scheme based on
the GECODE library, and the performance process, using an ntcc model of scores,
temporal objects, Allen's relations, events and the clock.
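Allen's relations [All83], mentioned above, classify how two time intervals can relate (before, meets, overlaps, during, equal, and so on, thirteen in total). A minimal classifier over intervals (start, end), covering only a few of the relations, could look like this (the function and its names are our own illustration, not part of the interactive-scores model):

```python
def allen(a, b):
    """Classify a few of Allen's thirteen relations between intervals (s, e)."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"
    if e1 == s2:
        return "meets"
    if s1 == s2 and e1 == e2:
        return "equal"
    if s1 > s2 and e1 < e2:
        return "during"
    if s1 < s2 < e1 < e2:
        return "overlaps"
    return "other"

assert allen((0, 2), (3, 5)) == "before"
assert allen((0, 3), (3, 5)) == "meets"
assert allen((1, 2), (0, 5)) == "during"
assert allen((0, 4), (2, 6)) == "overlaps"
```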
Finally, in [AD04] a model based on the Factor Oracle algorithm [ACR99] was pro-
posed for machine improvisation and related style simulation. Later, in [RAD06], this
model was extended using the ntcc calculus for use in learning, improvisation
and performance situations. For testing the model (and to open the possibility of
integrating it into OpenMusic), an ntcc interpreter in Lisp was implemented.
2.7.5 Considerations
The ntcc calculus has proved to be convenient for modeling music problems and
proving properties in a musical environment. Its well-defined semantics and logic
make it easy to express and prove temporal properties. Notwithstanding this, ntcc
may not be adequate for complex musical improvisation problems, due to the real-time
requirements of these systems.
2.8 Summary and Related Work
In this section we presented seven formalisms that have been used to model musical
applications. We have shown that although they seem to be good choices for their
applications, each has advantages and disadvantages.
There is a lot more musical software, and there are more music programming languages, that at
some level use the power of formal models (the reader may see [LA85] for
some of the first). Logics were used as models of music in [Gib76] and as a music
theory tool in Alan Marsden's MTT [Mar97]. A process algebra based on CCS,
called MWSCCS, was introduced in [Ros95] to design stochastic automata to model
complex stochastic musical systems.
Music theory has also used formal models. As is said in [Mar97]: “Good mu-
sic software depends on good music theory”. One of the first theoretical writings
outlining a mathematically rigorous music theory came with Joseph Schillinger in
[Sch41, Sch48]. Other examples include: geometrical spaces used to rep-
resent chords in [TCQ06], a development on fractal music in [Wri95],
hierarchical structures used to represent musical objects in [DC96], a de-
ductive object-oriented approach to formalize jazz piano knowledge proposed in
[Hir95], algebraic structures introduced in [Che89] approaching a formaliza-
tion of musical structures, a formal definition of sound proposed in [Kap99], and,
recently, a category-oriented framework presented in [MA07] for the description
of the relations between musical and mathematical activities.
3 The rtcc Calculus
“The difference between art and science is that science is what we understand well
enough to explain to a computer. Art is everything else.”
– Donald Knuth, “Discover”
In many musical systems, constraints, time and concurrency arise. As we have shown
in section 2.7.4, the ntcc calculus seems to be convenient for modeling them. However, time in
ntcc is logical, that is, each time unit is defined by the time taken by all processes
to make all their internal transitions until no further transition can be done. This
is not enough to satisfy quantitative temporal constraints, which is a requirement of
real-time systems (e.g. music improvisation).
In this chapter we propose a new calculus based on ntcc, called the rtcc calculus.
First, some changes made to ntcc to ensure real time are briefly detailed. Then,
an operational semantics supporting resources, limited time and true concurrency is
defined. Later, a denotational semantics based on Chu spaces is given. Finally, a
real-time logic for a simplified transition system disregarding resources is described.
3.1 Real-Time and Preemption
The ntcc calculus is a temporal concurrent constraint programming calculus, which
inherits properties from the tcc model, a formalism for reactive concurrent constraint
systems (the ntcc calculus was briefly detailed in section 2.7).
To model real time, we assume that each time unit is a clock cycle in which com-
putations (internal transitions) involving addition of information to the store (tell
operations) and querying the store (ask operations) take a particular amount of time
dependent on the constraint system. A discrete global clock is introduced, and it is
assumed that this clock is synchronized with physical time (i.e. two successive
time units in this calculus correspond exactly to two successive moments in physical time).
Now, most formal models of processes abstract away many properties of real sys-
tems such as duration of actions and number of processors [Gru96]. Others assume
maximal parallelism, that is, the assumption of having n processors to execute n
parallel processes (as in [dBGM00]). Nevertheless, for real-time systems the fact
that processes have to share one processor cannot be ignored, since it may influence
both the temporal and the functional behaviour of the system [BAL02]. Moreover,
as is said in [BGL97], the temporal behaviour of a real-time system depends not
only on delays due to process synchronization, but also on the availability of shared
resources.
In this sense, the transition system of ntcc is extended to describe sharing of re-
sources. We assume that the environment provides a number r of resources. Each
process P takes some of these, and when P finishes, it releases them.
On the other hand, an essential issue in reactive and real-time systems is process
preemption. In [Ber93], this concept is defined as the control mechanism consisting
in denying a process the right to work, either permanently (abortion) or temporarily
(suspension).
In music improvisation situations, for instance, there are cases in which the musi-
cian must skip some note or play something different to synchronize with the other
members of the band, or wait for a signal or for some time before continuing with his
part. These examples show temporal requirements and the need to satisfy
temporal constraints involving a set of processes.
There are two ways to preempt a process: by bounding its execution time or by means of a
signal.
We assume that the environment also provides the exact duration of the time unit.
That is, processes may not have all the time they need to run; instead, if they do
not reach their resting point within a particular time, some (or all) of the computations
not yet done will be discarded before the time unit is over. The duration is then
the available time that processes have to execute. We take this available time
to be a natural number; this allows us to think of time as a discrete sequence of minimal
units that we will call ticks.
ntcc provides a way of executing unit delays and weak time-outs with the constructs
next P and unless c next P. This is enough for non-real-time systems. However,
with these constructs alone a calculus can express neither strong time-outs
[Ber93] (“if an event A does not happen by time t, cause event B to happen at time
t”) nor real delays within the current time unit.
Process next P activates P in the next time unit. This construct thus delays a process
by an amount of time given by the environment (the duration of the time unit). This
means that there is no total control over the exact duration of the delay, which might
be longer than wanted. To eliminate this drawback, we add the construct:

delay P until δ

It delays the execution of process P for at least δ ticks. This process allows expressing
things like “this process must start 3 seconds after another starts”. This construct is
similar to the delay declarations introduced in logic languages in [Nai82],
and used in programming languages like Gödel [HL94].
The process unless c next P intuitively represents the activation of P in the next time
unit only if c cannot be inferred from the store in the current time unit. This process
waits during the current time unit for some information to be present in the store,
and activates P in the next time unit if it is not. This means that it is necessary
to wait until all processes end their computations in the current time unit to know
whether P will be executed. This could mean a long delay, causing a cascade of
events to happen at undesired times. Strong time-outs are absolutely necessary
in real-time settings, where it is not enough that a process eventually interrupts its execution:
it must do so as soon as the event is detected. Therefore, in order to guarantee
real-time preemption, we add another construct:
catch c in P finally Q
This new construct is similar to the "do A watching immediately c" of Esterel
[BG92]. With this process, the execution of P is interrupted at the exact instant in
which internal transitions (or an input from the environment) cause c to be inferred
from the store d (i.e. d ⊧ c). When this happens, Q is executed. In this sense, process
Q performs the default behaviour of process P (it can be viewed as the "last will"
of P before dying).
3.2 Operational Semantics
In this section we provide the operational semantics of the calculus. First, the basic
notion of constraint system is introduced. Then, we explain the syntax of processes.
Later, the rules for transitions are given. After that, we prove some properties of
the semantics. Finally, we give some considerations about the observable behaviour
of processes.
3.2.1 Constraint System
The rtcc processes are parameterized by a constraint system. This notion is now
defined following [Smo94].
Definition 3.2.1. (Constraint System) A Constraint System is a pair (Σ,∆)
where Σ is a signature (a set of constants, functions and predicates) and ∆ is a
first-order theory over Σ (a set of first-order sentences with at least one model).
Given a constraint system, the underlying language L of the constraint system is
a tuple (Σ,V,S), where V is a set of variables, and S is a set with the symbols
¬, ∧, ∨, ⇒, ∃, ∀ and the predicates true and false. A constraint is a first-order
formula constructed in L.
A constraint c entails a constraint d in ∆, notation c ⊧∆ d, iff c ⇒ d is true in all
models of ∆. The entailment relation is written ⊧ instead of ⊧∆ if ∆ can be inferred
from the context. It is required that ⊧ be decidable. If c ⊧ d and d ⊧ c then c ≈ d
(equivalence relation).
The following is an example of a widely used constraint system.
Definition 3.2.2. (Finite-Domain Constraint System) Let max > 0. A Finite-Domain
Constraint System FD[max] is such that Σ is given by the constant symbols
0, 1, 2, . . . , max − 1 and the relation symbols =, ≠, <, ≤, >, ≥, and ∆ is given by the
axioms of number theory.

The FD constraint system was proposed in [HSD95]. The intended meaning of
FD[max] is that variables range over the finite domain of values 0, . . . , max − 1.
Throughout this dissertation an FD constraint system D is assumed.
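As an illustration, a store over FD[max] can be approximated by per-variable domains; the following Python sketch (the names `Store`, `tell` and `entails` are ours, not part of the calculus) shows how posting information narrows a domain, and how entailment becomes domain inclusion:

```python
# A minimal sketch of an FD[max] store, assuming constraints are simple
# domain restrictions on single variables (an illustrative encoding only).

MAX = 10  # FD[max] with max = 10: values range over 0..9

class Store:
    def __init__(self):
        self.domains = {}

    def dom(self, x):
        # Each variable starts with the full domain 0..max-1.
        return self.domains.setdefault(x, set(range(MAX)))

    def tell(self, x, allowed):
        # Adding a constraint can only narrow a domain: the store is
        # only ever augmented, never weakened.
        self.dom(x)
        self.domains[x] = self.domains[x] & set(allowed)

    def entails(self, x, allowed):
        # d |= c iff every value still possible for x satisfies c.
        return self.dom(x) <= set(allowed)

s = Store()
s.tell("note", {v for v in range(MAX) if v >= 4})            # note >= 4
print(s.entails("note", {v for v in range(MAX) if v > 2}))   # True: note > 2 is entailed
print(s.entails("note", {7}))                                # False: note = 7 is not forced
```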
Notation 3.2.3. The set of elements of the constraint system is denoted by ∣D∣ and
∣D∣0 represents its set of finite elements. The set of constraints in the underlying
constraint system will be denoted by C.
3.2.2 Process Syntax
In this section we identify the basic processes of this model. All the ntcc constructs
remain. We add a new construct to express strong time-outs and to define default
behaviour (the original time-out construct remains as a weak time-out), and another
to specify delta delays.
Proc is defined as the set of all rtcc processes. Processes communicate with each
other by posting and reading partial information (constraints) about the variables
of the system they model. This partial information resides in a common store of
constraints. Henceforth, the conjunction of all posted constraints will simply be
called the store.
Definition 3.2.4. (Process Syntax) The Processes P,Q, . . . ∈ Proc are built from
constraints c ∈ C, variables x ∈ V in the underlying constraint system, and a variable
δ ∈ N, by the syntax in table 3.1.
Process tell(c) adds constraint c to the store within the current time unit.
The ask process when c do P is generalized with a non-deterministic choice of the
form ∑i∈I when ci do Pi (I is a finite set of indices). This process, in the current time
unit, must non-deterministically choose one of the Pj (j ∈ I) whose corresponding
guard constraint cj is entailed by the store, and execute it. The non-chosen processes
are precluded.
P, Q, . . . ::=  tell(c)                     Tell
             ∣  ∑i∈I when ci do Pi          Choice
             ∣  P ∥ Q                       Parallel Composition
             ∣  local x in P                Local Behaviour
             ∣  unless c next P             Weak Time-out
             ∣  catch c in P finally Q      Strong Time-out
             ∣  delay P until δ             Delta Delay
             ∣  next P                      Unit Delay
             ∣  !P                          Replication
             ∣  ⋆P                          Asynchrony
Table 3.1: Syntax of the rtcc Processes
Two processes P and Q acting concurrently are denoted by the process P ∥ Q. In
one time unit P and Q operate in parallel, communicating through the store by
telling and asking information. The “∥” operator is defined as left associative.
The process local x in P declares a variable x private to P (hidden to other pro-
cesses). This process behaves like P , except that all information about x produced
by P can only be seen by P and the information about x produced by other processes
is hidden to P .
The weak time-out process, unless c next P , represents the activation of P the
next time unit if c cannot be inferred from the store in the current time interval (i.e.
d ⊭ c). Otherwise, P will be discarded.
The strong time-out process, catch c in P finally Q, represents the interruption of
P in the current time interval when the store can entail c. Otherwise, the execution
of P continues. When process P is interrupted, process Q is executed. If P finishes,
Q is discarded.
The execution of a process P can be delayed in two ways: with delay P until δ the
process P is activated in the current time unit but at least δ ticks after the beginning
of the time unit, whilst with next P the process P will be activated in the next time
interval.
To define infinite behaviour, the operator “!” is used. The process !P represents
P ∥ next P ∥ next(next P ) ∥ . . ., (i.e. !P executes P in the current time unit and
it is replicated in the next time interval).
An arbitrary (but finite) delay is represented with the operator "⋆". The process
⋆P represents an unbounded but finite delay of P: P + next P + next (next P) + . . .
(i.e. it allows modelling asynchronous behaviour across the time intervals).
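To make the syntax concrete, the constructs of table 3.1 can be encoded as an abstract syntax tree. The sketch below (class and field names are our own; they are not part of rtcc) uses Python dataclasses:

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical AST mirroring table 3.1 (names are ours, for illustration).
@dataclass(frozen=True)
class Tell:   c: str                               # tell(c)
@dataclass(frozen=True)
class Choice: branches: Tuple = ()                 # sum of (guard, process) pairs
@dataclass(frozen=True)
class Par:    left: object = None; right: object = None   # P || Q
@dataclass(frozen=True)
class Local:  x: str = ""; body: object = None     # local x in P
@dataclass(frozen=True)
class Unless: c: str = ""; body: object = None     # weak time-out
@dataclass(frozen=True)
class Catch:  c: str = ""; body: object = None; final: object = None  # strong time-out
@dataclass(frozen=True)
class Delay:  body: object = None; delta: int = 0  # delay P until delta
@dataclass(frozen=True)
class Next:   body: object = None                  # next P
@dataclass(frozen=True)
class Bang:   body: object = None                  # !P (replication)
@dataclass(frozen=True)
class Star:   body: object = None                  # *P (asynchrony)

SKIP = Choice(())  # the empty summation behaves as skip

# Example 3.2.5 (the ritardando process) written in this representation:
rit = Par(
    Bang(Choice((("ritardando = true", Next(Tell("bpm = 60"))),))),
    Catch("ritardando = true", Bang(Tell("bpm = 150")), SKIP),
)
print(type(rit).__name__)  # Par
```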
Notations and Some Derived Processes
We describe now some notations that abbreviate the syntax of the calculus. We also
illustrate some derived constructs obtained from the processes above.
The guarded-choice summation process ∑i∈I when ci do Pi is actually an abbreviation of

when ci1 do Pi1 + . . . + when cin do Pin

where I = {i1, . . . , in}. The symbol "+" is used for binary summations (similar to
the choice operator of CCS). If there is no ambiguity, "when c do" can be omitted
when c = true, writing simply ∑i∈I Pi.

The process that does nothing is skip. This inactivity process is defined as the empty
summation ∑i∈∅ Pi, similar to process 0 of CCS and STOP of CSP. Furthermore,
terminated processes will always behave like skip.

We write ∏i∈I Pi, where I = {i1, . . . , in}, to denote the parallel composition of all the
Pi, that is, Pi1 ∥ . . . ∥ Pin.
We also write local x1 x2 . . . xn in P for

local x1 in (local x2 in (. . . (local xn in P) . . .))

When process Q is skip, the "finally Q" part of process catch c in P finally Q
can be omitted, that is, we can write catch c in P.

A nest of delta delay processes such as delay (delay P until δ1) until δ2 can be
abbreviated to delay P until δ1 + δ2.

Notation next^n P (where next is repeated n times) abbreviates the process
next (next (. . . (next P) . . .)).
Bounded replication and asynchrony can be specified using product and summation:
!I P and ⋆I P are defined as abbreviations for ∏i∈I next^i P and ∑i∈I next^i P,
respectively. For example, process ![m,n] P means that P is always active between
the next m and m + n time units.
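These abbreviations can be illustrated by expanding them mechanically; the sketch below uses plain strings as stand-ins for process terms (an encoding of ours, for presentation only):

```python
# Illustrative expansion of the derived operators, with plain strings
# standing in for process terms (an assumption for presentation only).

def nxt(n, p):
    """next^n P: wrap P in n unit delays."""
    for _ in range(n):
        p = f"next ({p})"
    return p

def bang(interval, p):
    """!I P  ==  the product over i in I of next^i P (parallel composition)."""
    return " || ".join(nxt(i, p) for i in interval)

def star(interval, p):
    """*I P  ==  the summation over i in I of next^i P (choice)."""
    return " + ".join(nxt(i, p) for i in interval)

print(nxt(2, "tell(c)"))       # next (next (tell(c)))
print(bang(range(3), "P"))     # P || next (P) || next (next (P))
print(star(range(1, 3), "P"))  # next (P) + next (next (P))
```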
Examples
Now we will show some examples illustrating the specification of temporal behaviour
in this calculus.
Example 3.2.5. Consider changing the pace of a song's natural timing, such as a
ritardando. This behaviour can be modeled as:

!(when ritardando = true do next tell(bpm = 60))
∥ catch ritardando = true in !(tell(bpm = 150))

Intuitively, this process states that the speed of a quarter note (or crotchet) will
be 150 (with !(tell(bpm = 150))) until a ritardando signal is given (the constraint
ritardando = true in the store). If the signal is given, the speed changes to 60. ∎
Example 3.2.6. Suppose a simple improvisation situation where there are two mu-
sicians M1 and M2. The first musician M1 plays a single random note from a list
Notes every 15 ticks. The second musician M2 must adorn it, that is, play a series
of chords depending on the note played by M1. Additionally, on some occasions M1
plays not only a single note but two in the same time unit (he plays one note and
5 ticks later plays another). In this case M2 must stop his performance and try to
adorn the second note (there may be cases in which this is not possible due to the
limited time). This behaviour can be modeled as follows:
First, we have to model M1:

M1 def= !(∑i∈Notes tell(note1 = i) ∥ ⋆ delay ∑i∈Notes tell(note2 = i) until 5)

Now, for the second musician, a process PassingTone that calculates the musical
adornment and performs it is assumed. We also assume a note 0 ∉ Notes. Thus M2
is modeled:
M2 def= ! when note1 ≠ 0 do
            (catch note2 ≠ 0 in PassingTone(note1) finally PassingTone(note2))
To model the whole system we simply launch the process M1 ∥M2. ∎
Example 3.2.7. Suppose another simple improvisation situation where two musi-
cians M1 and M2 start playing concurrently for a period of d1 and d2 time units
respectively. Another musician M3 must start playing after M1 has played for d1
time units and stop when musician M2 has played for d2 time units. This behaviour
can be modeled as follows:
First, we have to model a process that waits n time units:
WaitFor(i, n) def=   when n = 1 do tell(waiti = 0)
                   ∥ when n ≠ 1 do next WaitFor(i, n − 1)
This process can be used to model an agent which plays for a given number of time units:

PlayFor(i, d) def=   tell(starti = 1)
                   ∥ catch waiti = 0 in WaitFor(i, d) finally tell(stopi = 1)

And we need a process to suspend an agent until a musician has been playing for a
given number of time units:
DelayBy(i, d) def= when starti = 1 do
                     (  when d ≠ 0 do next DelayBy(i, d − 1)
                      ∥ when d = 0 do tell(delayi = 1))

Finally, we can model the system:
System(d1, d2) def=   PlayFor(1, d1) ∥ PlayFor(2, d2)
                    ∥ DelayBy(1, d1) ∥ DelayBy(2, d2)
                    ∥ when delay1 = 1 do catch delay2 = 1 in PlayFor(3, 0)
∎
3.2.3 Transition System
The operational semantics can be formally described by a transition system consisting
of the set of processes Proc, the set of configurations Γ, and the transition relations
→ and ⇒. A configuration γ is a tuple ⟨P, d, t⟩ where P is a process, d is a
constraint in C representing the store, and t is the amount of time left for the process
to execute.

Definition 3.2.8. (Transition System) The internal transition relations −r→, r ∈ Z+
(collectively written →), and the observable transition relation ⇒ are the least
relations satisfying the rules explained below (summarized in table 3.2).
The internal transition ⟨P, d, t⟩ −r→ ⟨P′, d′, t′⟩ means that in one internal step
using r resources, process P with store d and available time t reduces to process P′
with store d′ and leaves t′ time remaining. We assume that the use of a resource
is related to an access to the store, either by posting constraints or by querying for
an entailment; r = 0 denotes the consumption of no resource. We write ⟨P, d, t⟩ →
⟨P′, d′, t′⟩ (omitting the "r") when resources are not relevant.
The observable transition P =(ι,o)⇒ Q means that process P, given an input ι from
the environment, reduces to process Q and outputs o to the environment in one time
unit. Input ι is a tuple consisting of the initial store c, the number of resources r
available within the time unit, and the duration t of the time unit. Output o is
also a tuple, consisting of the resulting store d, the maximum number of resources r′
used by processes, and the time t′ spent by all processes to execute.

Intuitively, a time unit has a certain duration (this duration depends on the number
of ticks the time unit has). An observable transition is constructed from a sequence
of internal transitions. It is assumed that internal transitions cannot be directly
observed.
The internal transition rule for a tell operation is defined as:
t − ΦT(c, d) ≥ 0
⟨tell(c), d, t⟩ −1→ ⟨skip, d ∧ c, t − ΦT(c, d)⟩        (3.1)
A tell process adds a constraint to the current store and terminates, unless it does
not have enough time to execute (in which case it remains blocked). The time left to the
process after evolving is equal to the time available before the transition less the time
spent by the constraint system to add the constraint to the store. The time spent
by the constraint system is given by a function ΦT (definition 3.2.9). In addition, a
tell operation requires one resource.
Definition 3.2.9. (Time Spent Functions) Let ΦT and ΦA be functions from pairs of
constraints to positive naturals, ΦT, ΦA ∶ ∣D∣0 × ∣D∣0 → N ∖ {0}. They approximate
the time spent in adding a constraint to a store (ΦT) and in querying whether a store
entails a constraint (ΦA), respectively.
The rule for a choice is given by
t − ΦA(cj, d) ≥ 0    d ⊧ cj, j ∈ I
⟨∑i∈I when ci do Pi, d, t⟩ −1→ ⟨Pj, d, t − ΦA(cj, d)⟩        (3.2)
It chooses one of the processes whose corresponding guard is entailed by the store
and executes it, unless it does not have enough time to query the store, in which case
it remains blocked. Computation of the time left is as for the tell process. The store
is not modified by this operation. Its execution consumes one unit of resource.
Parallel composition is described with the following rules:

⟨P, d, t⟩ −sp→ ⟨P′, d′p, t′p⟩    ⟨Q, d, t⟩ −sq→ ⟨Q′, d′q, t′q⟩    sp + sq ≤ r
⟨P ∥ Q, d, t⟩ −sp+sq→ ⟨P′ ∥ Q′, d′p ∧ d′q, min(t′p, t′q)⟩        (3.3)

⟨P, d, t⟩ −sp→ ⟨P′, d′p, t′p⟩    sp ≤ r
⟨P ∥ Q, d, t⟩ −sp→ ⟨P′ ∥ Q, d′p, t′p⟩        (3.4)

⟨Q, d, t⟩ −sq→ ⟨Q′, d′q, t′q⟩    sq ≤ r
⟨P ∥ Q, d, t⟩ −sq→ ⟨P ∥ Q′, d′q, t′q⟩        (3.5)
Rule 3.3 of parallel composition says that processes P and Q execute concurrently
if the amount of resources needed by the two processes separately is less than or
equal to the number of resources available. The resulting store is the conjunction
of the output stores from the executions of both processes separately. This process
terminates iff both processes do; therefore, the time left is the minimum of the times
left by each process. Rules 3.4 and 3.5 state that, in a parallel process, only one of
the two processes may evolve, due to the number of resources available.
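The side conditions of rules 3.3-3.5 can be read as a small decision procedure on the resource bound r; a hedged sketch (the function name and the rule strings are ours, for illustration only):

```python
# A sketch of how rules 3.3-3.5 select a parallel step given the resource
# bound r, with sp and sq the costs of the two component steps.

def par_step(sp, sq, r):
    """Return which parallel rule(s) may fire for costs sp, sq and bound r."""
    applicable = []
    if sp + sq <= r:
        applicable.append("3.3: P and Q step together (cost sp+sq)")
    if sp <= r:
        applicable.append("3.4: only P steps (cost sp)")
    if sq <= r:
        applicable.append("3.5: only Q steps (cost sq)")
    return applicable

print(par_step(1, 1, 3))  # all three rules applicable
print(par_step(2, 2, 3))  # only the one-sided rules 3.4 and 3.5
print(par_step(4, 1, 3))  # only rule 3.5 (Q alone fits within the bound)
```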
To define the rule for locality, following [dBPP95], we extend the construct of local
behaviour to local x, c in P to represent the evolution of the process. Constraint c is
the local information (or store) produced during the evolution. Initially c is empty,
so we regard local x in P as local x, true in P. The rule is

⟨P, c ∧ ∃x d, t − ΦT(c, ∃x d)⟩ −s→ ⟨P′, c′, t′⟩
⟨local x, c in P, d, t⟩ −s→ ⟨local x, c′ in P′, d ∧ ∃x c′, t′⟩        (3.6)
This rule says that if P can evolve to P ′ with a store composed by c and information
of the “global” store d not involving x (variable x in d is hidden to P ), then the
local ... in P process reduces to a local ... in P ′ process where d is enlarged with
information about the resulting local store c′ without the information on x (x in c′ is
hidden to d and, therefore, to external processes). A process with locality consumes
as many resources as its agent P . Additionally, the time left after the transition
depends on the time spent by P .
Now we have the next rule for weak time-out
t − ΦA(c, d) ≥ 0    d ⊧ c
⟨unless c next P, d, t⟩ −1→ ⟨skip, d, t − ΦA(c, d)⟩        (3.7)
If c is entailed by the store, process P is terminated. Otherwise it will behave like
next P . This will be explained below with the rule for observations. The unless
process consumes one resource. The calculation of the time left is similar to the
ask process except that if the constraint system spends more time than the time
available, this process will behave like a next process.
For a strong time-out there are the rules
t − ΦA(c, d) ≥ 0    d ⊧ c
⟨catch c in P finally Q, d, t⟩ −1→ ⟨Q, d, t − ΦA(c, d)⟩        (3.8)

⟨P, d, t − ΦA(c, d)⟩ −s→ ⟨P′, d′, t′⟩    d ⊭ c
⟨catch c in P finally Q, d, t⟩ −s→ ⟨catch c in P′ finally Q, d′, t′⟩        (3.9)
In rules 3.8 and 3.9 a process P ends its execution (and another process Q starts) if
a constraint c is entailed by the store. Otherwise it evolves but asking for entailment
of constraint c persists. If the process reduces to Q it will consume just one resource
but if it evolves, it will consume as many resources as P needs to be executed. On
the other hand, if constraint c is entailed by the store, the time left will be calculated
in the same way as for the ask process; if the constraint cannot be entailed by the
store, the time available after the transition will be determined by the execution of
P plus the time asking the store.
Let T be the time given by the environment (the duration of the time unit). To
delay the execution of a process within a time unit we have the following two rules
δ > T − t    t > 0
⟨delay P until δ, d, t⟩ −0→ ⟨delay P until δ, d, t − 1⟩        (3.10)

δ ≤ T − t
⟨delay P until δ, d, t⟩ −0→ ⟨P, d, t⟩        (3.11)
The above two rules state that a process delay P until δ delays the execution of P
for at least δ ticks: once the delay is no greater than the elapsed time T − t (where T
is the duration of the time unit given by the environment), the process reduces to P,
i.e. P is activated. This process consumes no resource in any transition.
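Rules 3.10 and 3.11 can be simulated tick by tick; the following sketch (our own encoding, with T the duration of the time unit and t the remaining time counting down) returns the tick at which P is released:

```python
# A sketch of rules 3.10/3.11: stepping `delay P until delta` tick by tick
# within a time unit of duration T (the encoding below is illustrative).

def run_delay(delta, T):
    """Return the tick (elapsed time) at which P is released, or None."""
    t = T  # available time counts down from the duration T
    while t > 0:
        if delta <= T - t:          # rule 3.11: at least delta ticks elapsed
            return T - t            # P is activated now
        t -= 1                      # rule 3.10: burn one tick, no resources
    return None                     # the time unit ran out before delta ticks

print(run_delay(5, 15))   # 5: P starts after exactly 5 ticks
print(run_delay(0, 15))   # 0: no delay, P starts immediately
print(run_delay(20, 15))  # None: delta exceeds the whole time unit
```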
The rule for replication is
⟨!P, d, t⟩ −0→ ⟨P ∥ next !P, d, t⟩        (3.12)
The replication rule specifies that the process P will be executed in the current time
unit and then copy itself (process !P ) to the next time unit. It consumes no resource
and spends no time.
To define asynchrony we have
⟨⋆P, d, t⟩ −0→ ⟨next^m P, d, t⟩   if m ≥ 0        (3.13)
This rule says that a process P will be delayed for an unbounded but finite time,
that is, P will be executed at some time in the future (but not in the past).
The rule that allows the use of the structural congruence defined below in definition
3.2.12 is the following:

γ1 → γ2
γ′1 → γ′2        if γ1 ≡ γ′1 and γ2 ≡ γ′2        (3.14)
This rule states that structurally congruent configurations have the same transition.
Finally, the rule for observable transitions is

⟨P, c, t⟩ →∗S ⟨Q, d, t′⟩ ↛
P =(⟨c, r, t⟩, ⟨d, max(S), t − t′⟩)⇒ R        if R ≡ F(Q)        (3.15)

Process P evolves to R in one time unit if there is a sequence of internal transitions
starting in configuration ⟨P, c, t⟩ and ending in a configuration ⟨Q, d, t′⟩ from which
no further internal transition is possible. The sequence of internal transitions from
⟨P, c, t⟩ to ⟨Q, d, t′⟩ leaves a trace of the number of resources used in each transition;
this trace is grouped in the set S. Process R, called the "residual process", consists
of the processes to be executed in the next time unit. The latter are obtained from
Q by applying the function given in definition 3.2.10 to it.
Definition 3.2.10. (Future Function) Let F ∶ Proc → Proc be the function defined by

F(Q) =  R                              if Q = next R or Q = unless c next R
        F(Q1) ∥ F(Q2)                  if Q = Q1 ∥ Q2
        catch c in F(R) finally S      if Q = catch c in R finally S
        local x in F(R)                if Q = local x, c in R
        skip                           otherwise
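The future function is straightforward to implement over a term representation; a sketch with a tuple encoding of our own (the tags are illustrative, not part of the calculus):

```python
# A sketch of the future function F over a tuple-encoded process term.
# Encodings used here (ours): ("next", R), ("unless", c, R),
# ("par", Q1, Q2), ("catch", c, R, S), ("local", x, c, R), ("tell", c).

def future(q):
    tag = q[0]
    if tag in ("next", "unless"):        # next R  /  unless c next R
        return q[-1]                     # keep R for the next time unit
    if tag == "par":                     # Q1 || Q2
        return ("par", future(q[1]), future(q[2]))
    if tag == "catch":                   # catch c in R finally S
        c, r, s = q[1], q[2], q[3]
        return ("catch", c, future(r), s)
    if tag == "local":                   # local x, c in R: the local store
        return ("local", q[1], future(q[3]))  # c is dropped, as in F
    return ("skip",)                     # everything else dies with the unit

q = ("par", ("next", ("tell", "c")), ("tell", "d"))
print(future(q))  # ('par', ('tell', 'c'), ('skip',))
```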
The transition rules are summarized in table 3.2.
To simplify the transitions, a congruence relation ≡ is defined. Following [Sar93], we
introduce the standard notions of contexts and behavioural equivalence.

Informally, a context is a phrase (an expression) with a single hole, denoted by [⋅],
that can be plugged with processes. Formally, a process context C is defined by
the following syntax:

C ::= [⋅] ∣ when c do C + M ∣ C ∥ C ∣ local x in C ∣ unless c next C
    ∣ catch c in C finally C ∣ delay C until δ ∣ next C ∣ !C ∣ ⋆C

where M stands for summations.
Definition 3.2.11. (Process Equivalence) A behavioural equivalence is an equivalence
relation ≐ over processes such that, for any context C, P ≐ Q implies C[P] ≐ C[Q].
t − ΦT(c, d) ≥ 0
⟨tell(c), d, t⟩ −1→ ⟨skip, d ∧ c, t − ΦT(c, d)⟩

t − ΦA(cj, d) ≥ 0    d ⊧ cj, j ∈ I
⟨∑i∈I when ci do Pi, d, t⟩ −1→ ⟨Pj, d, t − ΦA(cj, d)⟩

⟨P, d, t⟩ −sp→ ⟨P′, d′p, t′p⟩    sp ≤ r
⟨P ∥ Q, d, t⟩ −sp→ ⟨P′ ∥ Q, d′p, t′p⟩

⟨Q, d, t⟩ −sq→ ⟨Q′, d′q, t′q⟩    sq ≤ r
⟨P ∥ Q, d, t⟩ −sq→ ⟨P ∥ Q′, d′q, t′q⟩

⟨P, d, t⟩ −sp→ ⟨P′, d′p, t′p⟩    ⟨Q, d, t⟩ −sq→ ⟨Q′, d′q, t′q⟩    sp + sq ≤ r
⟨P ∥ Q, d, t⟩ −sp+sq→ ⟨P′ ∥ Q′, d′p ∧ d′q, min(t′p, t′q)⟩

⟨P, c ∧ ∃x d, t − ΦT(c, ∃x d)⟩ −s→ ⟨P′, c′, t′⟩
⟨local x, c in P, d, t⟩ −s→ ⟨local x, c′ in P′, d ∧ ∃x c′, t′⟩

t − ΦA(c, d) ≥ 0    d ⊧ c
⟨unless c next P, d, t⟩ −1→ ⟨skip, d, t − ΦA(c, d)⟩

t − ΦA(c, d) ≥ 0    d ⊧ c
⟨catch c in P finally Q, d, t⟩ −1→ ⟨Q, d, t − ΦA(c, d)⟩

⟨P, d, t − ΦA(c, d)⟩ −s→ ⟨P′, d′, t′⟩    d ⊭ c
⟨catch c in P finally Q, d, t⟩ −s→ ⟨catch c in P′ finally Q, d′, t′⟩

δ > T − t    t > 0
⟨delay P until δ, d, t⟩ −0→ ⟨delay P until δ, d, t − 1⟩

δ ≤ T − t
⟨delay P until δ, d, t⟩ −0→ ⟨P, d, t⟩

⟨!P, d, t⟩ −0→ ⟨P ∥ next !P, d, t⟩        ⟨⋆P, d, t⟩ −0→ ⟨next^m P, d, t⟩ if m ≥ 0

γ1 → γ2
γ′1 → γ′2        if γ1 ≡ γ′1 and γ2 ≡ γ′2

⟨P, c, t⟩ →∗S ⟨Q, d, t′⟩ ↛
P =(⟨c, r, t⟩, ⟨d, max(S), t − t′⟩)⇒ R        if R ≡ F(Q)

Table 3.2: Transition Rules of rtcc
Definition 3.2.12. (Structural Congruence) Let ≡ be the smallest behavioural
equivalence relation over processes satisfying:

1. P ≡ Q if they only differ by a renaming of bound variables
2. P ∥ skip ≡ skip ∥ P ≡ P
3. P ∥ Q ≡ Q ∥ P
4. next skip ≡ skip
5. local x in skip ≡ skip, and local x y in P ≡ local y x in P
6. local x in next P ≡ next (local x in P)

We extend ≡ to configurations by defining ⟨P, c, t⟩ ≡ ⟨Q, c, t⟩ iff P ≡ Q.
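Axioms 2 and 4 above justify a simple bottom-up simplification of terms; a sketch over a tuple encoding of ours (not part of the calculus):

```python
# A sketch of simplifying process terms modulo the structural congruence
# (axioms 2 and 4: P || skip == P and next skip == skip), bottom-up.

SKIP = ("skip",)

def simplify(p):
    tag = p[0]
    if tag == "par":
        l, r = simplify(p[1]), simplify(p[2])
        if l == SKIP:
            return r
        if r == SKIP:
            return l
        return ("par", l, r)
    if tag == "next":
        body = simplify(p[1])
        return SKIP if body == SKIP else ("next", body)
    return p

term = ("par", ("next", SKIP), ("par", ("tell", "c"), SKIP))
print(simplify(term))  # ('tell', 'c')
```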
Example
To illustrate a sequence of transitions in rtcc, we consider the example 3.2.6. The
two musicians were modeled thus:
M1
def= ! ∑i∈Notes
tell(note1 = i) ∥ ⋆ delay ∑i∈Notes
tell(note2 = i) until 5
M2
def= ! when note1 ≠ 0 do
(catch note2 ≠ 0 in PassingTone(note1) finally PassingTone(note2))In order to describe the transitions we have to make some assumptions about the
system. Let us say that the set Notes is composed by C (MIDI=60) and E (MIDI= 64), that is, Notes =
60,64, the time needed by the constraint system to post a constraint is 2 ticks (no
matter the size of the store), and the same for querying the store, process PassingTone post the following four constraints: chord1 = 1, chord2 =
2, chord3 = 3, and chord4 = 4 given the input note1 (each posting every two
ticks) and post the constraint note3 = 3 given the input note2 (we assume
84
that with 4 chords it can perform a decent adornment given the first note but
given the short time to adorn the second note it will play the same note for
accompaniment), and it consumes only one resource finally, there are 3 resources available (enough for the whole system).
Now, to simplify notation, we define

P1 def= ∑i∈Notes tell(note1 = i)
P2 def= ∑i∈Notes tell(note2 = i)
P3 def= delay P2 until 5
P4 def= catch note2 ≠ 0 in PassingTone(note1) finally PassingTone(note2)
P5 def= when note1 ≠ 0 do P4
The initial configuration consists of processes M1 and M2 executing in parallel, an
empty store, and the duration of the time unit (15 ticks). The internal transitions
may then look like this for those time units where the first musician plays only one note:
⟨M1 ∥ M2, true, 15⟩
−0→ ⟨((P1 ∥ next !P1) ∥ next^m P3) ∥ (P5 ∥ next !P5), true, 15⟩
−1→ ⟨((tell(note1 = 64) ∥ next !P1) ∥ next^m P3) ∥ (P5 ∥ next !P5), true, 13⟩
−1→ ⟨((skip ∥ next !P1) ∥ next^m P3) ∥ (when note1 ≠ 0 do P4 ∥ next !P5), note1 = 64, 11⟩
−1→ ⟨(next !P1 ∥ next^m P3) ∥ (P4 ∥ next !P5), note1 = 64, 9⟩
−1→ ⟨(next !P1 ∥ next^m P3) ∥ (P4 ∥ next !P5), note1 = 64 ∧ chord1 = 1, 7⟩
−1→ ⟨(next !P1 ∥ next^m P3) ∥ (P4 ∥ next !P5), note1 = 64 ∧ chord1 = 1 ∧ chord2 = 2, 5⟩
−1→ ⟨(next !P1 ∥ next^m P3) ∥ (P4 ∥ next !P5), note1 = 64 ∧ chord1 = 1 ∧ chord2 = 2 ∧ chord3 = 3, 3⟩
−1→ ⟨(next !P1 ∥ next^m P3) ∥ next !P5, note1 = 64 ∧ chord1 = 1 ∧ chord2 = 2 ∧ chord3 = 3 ∧ chord4 = 4, 1⟩ ↛

For those time units where the first musician plays two notes, the internal transitions
may look like this:
⟨M1 ∥ M2, true, 15⟩
−0→ ⟨((P1 ∥ next !P1) ∥ P3) ∥ (P5 ∥ next !P5), true, 15⟩
−1→ ⟨((tell(note1 = 64) ∥ next !P1) ∥ delay P2 until 5) ∥ (P5 ∥ next !P5), true, 13⟩
−1→ ⟨((skip ∥ next !P1) ∥ delay P2 until 5) ∥ (when note1 ≠ 0 do P4 ∥ next !P5), note1 = 64, 11⟩
−1→ ⟨(next !P1 ∥ delay P2 until 5) ∥ (P4 ∥ next !P5), note1 = 64, 9⟩
−1→ ⟨(next !P1 ∥ tell(note2 = 60)) ∥ (P4 ∥ next !P5), note1 = 64 ∧ chord1 = 1, 7⟩
−1→ ⟨(next !P1 ∥ skip) ∥ (P4 ∥ next !P5), note1 = 64 ∧ chord1 = 1 ∧ chord2 = 2 ∧ note2 = 60, 5⟩
−1→ ⟨next !P1 ∥ (PassingTone(note2) ∥ next !P5), note1 = 64 ∧ chord1 = 1 ∧ chord2 = 2 ∧ note2 = 60, 3⟩
−1→ ⟨next !P1 ∥ next !P5, note1 = 64 ∧ chord1 = 1 ∧ chord2 = 2 ∧ note2 = 60 ∧ note3 = 60, 1⟩ ↛

The sequence of observable transitions may look like this:
M1 ∥ M2 =(ι1,o1)⇒ (!P1 ∥ next^2 P3) ∥ !P5
        =(ι2,o2)⇒ (!P1 ∥ next P3) ∥ !P5
        =(ι3,o3)⇒ (!P1 ∥ P3) ∥ !P5
        =(ι4,o4)⇒ . . .
where each input/output pair ⟨ιi, oi⟩ depends on the choices made; for example, if we
consider the same internal transitions as above, we can have

ι1 = ⟨true, 3, 15⟩    o1 = ⟨note1 = 64 ∧ chord1 = 1 ∧ chord2 = 2 ∧ chord3 = 3 ∧ chord4 = 4, 1, 14⟩
ι4 = ⟨true, 3, 15⟩    o4 = ⟨note1 = 64 ∧ chord1 = 1 ∧ chord2 = 2 ∧ note2 = 60 ∧ note3 = 60, 1, 14⟩
3.2.4 Properties
We discuss in this section some simple but fundamental properties of the transitions.
It is clear that with the introduction of the strong time-out construct, the delta
delay construct and the additional observables of the transition system not all ccp
properties hold. For example, the properties of monotonicity with respect to the
store (if a process P evolves to Q given a particular store d, then P also evolves to Q
given a stronger store e, e ⊧ d) and restartability, explained in [dBPP95], do not hold,
since for a given store a process may evolve, but if that particular store is augmented,
it is possible that the signal that stops the process (with the catch construct) is
now present, so the process evolves in a different way. Moreover, time becomes very
important because processes are limited by the available time. This available time
is reduced in every transition, so if we take the output of a process and give it
to the same process as input, that process might evolve in another way, obtaining
different results. This shows that the notion of quiescent point, usual in CCP calculi,
now involves time.
The following two properties state that a process can only post constraints in the
store or leave it unmodified, but cannot take constraints out of it; i.e. the store
can only be augmented, never reduced. Additionally, a process consumes some time to
evolve; that is, the time available at the beginning of a transition is always greater
than or equal to the time at the end (since processes ultimately perform ask and tell
operations, they reduce the available time via the functions ΦA and ΦT from definition
3.2.9). In other words, the available time always decreases along the transitions.
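These two claims can be checked mechanically on a recorded trace; in the sketch below (our own encoding, not part of the calculus) each store is abstracted as the set of constraints posted so far:

```python
# Sketch: checking extensiveness (cf. property 3.2.13) over a recorded
# trace of configurations, each store abstracted as a set of constraints.

def check_extensiveness(trace):
    """trace: list of (store, time) pairs along internal transitions."""
    for (d0, t0), (d1, t1) in zip(trace, trace[1:]):
        assert d1 >= d0, "store may only be augmented"   # d1 entails d0
        assert t0 >= t1 >= 0, "available time never grows"
    return True

# Part of the first trace of the example: stores grow, time shrinks.
trace = [
    (set(), 15),
    ({"note1=64"}, 11),
    ({"note1=64", "chord1=1"}, 7),
    ({"note1=64", "chord1=1", "chord2=2"}, 5),
]
print(check_extensiveness(trace))  # True
```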
Property 3.2.13. (Internal Extensiveness). If ⟨P, c, t⟩ → ⟨Q,d, t′⟩ then d ⊧ c
and t > t′ ≥ 0.
Proof. The proof proceeds by simple induction on the inference of ⟨P, c, t⟩ → ⟨Q,d, t′⟩.
The property above can be extended to the observable relation.
Property 3.2.14. (Observable Extensiveness). If P =(⟨c, r, t⟩, ⟨d, s, t′⟩)⇒ Q then d ⊧ c
and t > t′ ≥ 0.

Proof. By definition, if P =(⟨c, r, t⟩, ⟨d, s, t′⟩)⇒ Q, then there is a sequence

⟨P1, c1, t1⟩ → ⟨P2, c2, t2⟩ → . . . → ⟨Pn, cn, tn⟩ ↛

with P = P1, Q = F(Pn), c = c1, t = t1, d = cn and t′ = t − tn. Then, by property 3.2.13,
cn ⊧ . . . ⊧ c2 ⊧ c1 and t1 > . . . > tn ≥ 0. Hence d ⊧ c and t > t′ ≥ 0.
Time makes the behaviour of transitions different from that of ntcc. For example,
suppose that there are 5 ticks of available time and two processes executing in parallel,
P1 def= tell(x = 0) and P2 def= catch x = 0 in Q1 finally Q2. If the current store
is not strong enough to infer x = 0 and posting that constraint takes 6 ticks,
P1 cannot add it, so process Q1 will continue its execution; but if we augment the
amount of available time, the constraint will be added, Q1 will be stopped, and Q2
will probably be executed (if there is time left). Similar situations arise with other
constructs.
Note that resources were not considered in the above properties. This is explained
by the fact that processes can evolve with just a single resource; they only need
enough time.
Finally, since each time unit has a fixed time given by the environment, the number
of internal transitions is finite, i.e. there is always a final transition in a sequence.
This is important since it guarantees that there are no infinite computations in one
time unit.
Theorem 3.2.15. Every sequence of internal transitions is finite.
Proof. The proof follows directly from the fact that ∀c, d ∈ ∣D∣0, ΦT (c, d) > 0 and
ΦA(c, d) > 0, and from property 3.2.13.
3.2.5 Observable Behaviour
In [SRP91] a process is identified with the observations that can be made about it.
Informally, an observation represents a sequence of inputs and outputs. If α denotes
the infinite sequence of inputs ι, that is, α = ι0 ⋅ ι1 ⋅ ι2 . . ., and α′ denotes the infinite
sequence of outputs o, that is, α′ = o0 ⋅ o1 ⋅ o2 . . ., then P =(α,α′)⇒^ω is written for

P = P1 =(ι1,o1)⇒ P2 =(ι2,o2)⇒ P3 =(ι3,o3)⇒ . . .
At the time unit i, the environment provides the stimulus ιi and Pi produces oi as
a response. An observer can see that with an input α process P responds with α′;
then (α,α′) is regarded as a reactive observation of P . Notation α(i) denotes the
i-th element of α.
This observable behaviour of P is called the input-output behaviour of P, notation
io(P), and can be formally defined as

io(P) = { (α, α′) ∣ P =(α,α′)⇒^ω }

In what follows, let σ be the sequence of constraints in α: if α = ι0 ⋅ ι1 ⋅ ι2 . . . and each
ιi is ⟨ci, ti⟩, then σ = c0 ⋅ c1 ⋅ c2 . . .. Similarly, if α′ = o0 ⋅ o1 ⋅ o2 . . . and each oi is ⟨di, t′i⟩,
then σ′ = d0 ⋅ d1 ⋅ d2 . . .. Notation σ(i) denotes the i-th element of σ.
Notation 3.2.16. The set of infinite sequences of constraints in the underlying set
of constraints C will be denoted by Cω. The symbols σ,σ′, σ1, σ′1, . . . range over Cω.
The input-output behaviour of a process will be used in the next section to prove
that the denotation of a process matches the observations made of it.
3.3 Denotational Semantics
In this section a semantic definition of processes is given. First, we define the
denotations of processes; then we describe how to obtain the observations that can
be made from processes (store, time and resources); finally, the soundness and
completeness of the denotations are demonstrated.
3.3.1 Denotations
A denotation of a process is a labelled K-valued Chu space (with ∣K ∣ = 4) where the
events can be thought of as the action of adding information to the store. Labels
are functions λ ∶ A → Act, where each element of Act, the set of possible actions, is
composed of the actual information (e.g. a constraint) and a tag representing the kind
of action. A tag can be one of tell (denoting the action of posting information), ask
(querying for information), and time (denoting the time this event actually occurs).
Notation¹ λ(a)#1 denotes the information that event a provides, whilst λ(a)#2
denotes the kind of action that a performs. If there are no ambiguities, λ(a) stands
for λ(a)#1.

¹Notation borrowed from [SF04].
As defined in [Pra03] the elements of K are the possible values of an event in a given state, with 0, ½, 1, × corresponding to before, during, after and instead, respectively. In a given state an event has value 0 when it has not yet started, ½ when it is happening, 1 when it has finished, and × when it has been cancelled.
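To make the structure concrete, here is a minimal Python sketch of such a labelled 4-valued Chu space. The class name, the encoding of the four values of K (½ as a `Fraction`, cancellation as the string `"x"`) and the tuple representation of actions are all assumptions made for illustration, not the dissertation's implementation.

```python
from fractions import Fraction

# Hypothetical encoding of the four values of K (an assumption):
# 0 = before, 1/2 = during, 1 = after, "x" = instead (cancelled).
HALF = Fraction(1, 2)
BEFORE, DURING, AFTER, CANCELLED = 0, HALF, 1, "x"

class ChuSpace:
    """A labelled K-valued Chu space (A, X, lambda): a set of events A,
    a set of states X (each state assigns a value of K to every event),
    and a labelling mapping each event to an action (info, tag)."""
    def __init__(self, events, states, labels):
        self.events = list(events)
        self.states = [dict(s) for s in states]
        self.labels = dict(labels)

    def info(self, a):
        # lambda(a)#1: the information (constraint) the event provides
        return self.labels[a][0]

    def kind(self, a):
        # lambda(a)#2: the kind of action (tell, ask or time)
        return self.labels[a][1]

# The denotation of tell(c): a single event c traversing 0 -> 1/2 -> 1.
tell_c = ChuSpace(
    events=["c"],
    states=[{"c": BEFORE}, {"c": DURING}, {"c": AFTER}],
    labels={"c": ("c", "tell")},
)
assert tell_c.kind("c") == "tell" and len(tell_c.states) == 3
```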
To define the denotations, let P = (A,X,λ1) and Q = (B,Y,λ2) be labelled Chu
spaces over K, giving semantics to rtcc processes P and Q, respectively. We use
the semantic braces ⟦ ⟧ to denote a function that associates a process to a labelled
Chu space. The definition of this function is given with the semantic equations below
and summarized in table 3.3.
The denotation of tell(c) is given by
⟦tell(c)⟧ = c: 0 ½ 1        λ(c) ↦ ⟨c,tell⟩        (3.16)
The semantic definition of a tell operation is a Chu space with three states: one where the process has not yet added the information, one where it is adding the constraint, and one where it has already posted the constraint. In this sense the denotation of tell(c) represents, in the state x where x(c) = 1 (the third state of the Chu space), all the possible stores which contain at least as much information as c, given an arbitrary input from the environment.
The process skip has a simple denotation with a Chu space consisting of a single event T which posts the constraint true and a single state s where nothing happens, i.e. s(T) = 0. Then
⟦skip⟧ = T 0 λ(T )↦ ⟨true,tell⟩ (3.17)
The operation that asks for a constraint entailed by the store and executes a process
if it succeeds, when c do P , can be seen as a sequential process. Information c
should be added before P can execute. Thus, the denotation for ask operators is
defined by
⟦when c do P ⟧ = ⟦tell(c)⟧ ; P λ(c)↦ ⟨c,ask⟩ (3.18)
The non-deterministic choice (summation) of two processes has the following deno-
tation
⟦P +Q⟧ = A ∪B ∨ (P ≠ 0 ∧B −A) ∨ (Q ≠ 0 ∧A −B) (3.19)
The events of P and Q might not have started yet (A ∪ B), or P has started but the events in B that are not in A have not occurred (P ≠ 0 ∧ B − A), or similarly for Q.
Parallel composition P ∥ Q is the joint behaviour of P and Q. Then
⟦P ∥ Q⟧ = P ∧Q (3.20)
The semantics for a local behaviour is
⟦local x in P⟧ = (A, X, λk)  where  ∀a ∈ A. λk(a)#1 = ∃x.λ1(a)#1 ∧ λk(a)#2 = λ1(a)#2        (3.21)

For each event of process P, the variable x is existentially quantified in the constraint the event provides.
The semantics for a strong time-out is

⟦catch c in P finally Q⟧ = (⟦tell(c)⟧ ∧ (A ∨ ×A) ∧ Q) ∨ (⟦tell(c)⟧ = 0 ∧ P ∧ B),  λ(c) ↦ ⟨c,ask⟩        (3.22)

Here ×A denotes the process P = × (which is (A, ×A)), that is, the one in which all events of P are cancelled. This process has two possible behaviours: if the constraint c has already been posted, then either process P has not started or it has been cancelled (⟦tell(c)⟧ ∧ (A ∨ ×A)); or the constraint c has not been added to the store yet and the process P continues its execution (⟦tell(c)⟧ = 0 ∧ P).
The following are the denotations for the temporal constructs. For the denotation of the delta delay construct, we assume a process DELAY of the form delay: 0 ½ 1, for a unique event delay, distinct from all other events (λ(delay) = ⟨δ,time⟩). The Chu space DELAY defines a process representing the number of ticks of the delay. The denotation of a delta delay process is

⟦delay P until δ⟧ = DELAY ; P        (3.23)
Although the denotation of the delay process allows the process to be executed at any time, in the next section we will see that the states before the delay time has elapsed are precluded.
We also assume a process CLOCK defining a whole time unit. The action of CLOCK is ⟨T,time⟩, where T is the duration of the time unit. Process CLOCK
is always active. This means that once a time unit is over, the event of CLOCK changes its value to 1, and just before the next time unit starts, a fresh CLOCK event starts again at 0 (the sequence CLOCK ; CLOCK distinguishes the events, which perform the same action). Hence, all processes are executed in parallel with an event of CLOCK (the one for the current time unit).
The unit delay construct next P will execute the process P in the next time unit.
Hence, its denotation is the Chu space representing the sequence of processes CLOCK
and P . This gives the following semantic equation:
⟦next P ⟧ = CLOCK ; P (3.24)
The process unless c next P has the following denotation

⟦unless c next P⟧ = (⟦tell(c)⟧ = 0 ∧ CLOCK ; P) ∨ (⟦tell(c)⟧ ≠ 0),  λ(c) ↦ ⟨c,ask⟩        (3.25)
Either process P will be executed after process CLOCK (similar to the denotation
of next) but only if c is not added in the current time unit, or c is being or has been
added and P will not be executed.
The denotation for replication !P can be defined recursively as the denotation of P combined (via ⊔) with the process CLOCK, followed by the same denotation of !P.
⟦!P ⟧ = P ⊔CLOCK ; ⟦!P ⟧ (3.26)
where process P ⊔ CLOCK is defined as (CLOCK ∨ P) ∨ (CLOCK = ½ ∧ P) ∨ (CLOCK = 1 ∧ P ≠ ½), that is, those states where neither CLOCK nor P has yet started (CLOCK ∨ P), those where process CLOCK is currently executing (CLOCK = ½ ∧ P), and those where CLOCK has finished and P is not currently happening (CLOCK = 1 ∧ P ≠ ½). P ≠ ½ denotes the process that is not currently happening.
Solutions to recursive definitions of Chu spaces such as the one above have been
proven to exist. The reader may consult the proof in [Gup94].
Asynchrony is defined as the bounded delay of a process. If m is an arbitrary nonnegative integer, then the denotation of ⋆P is the denotation of nextm P:

⟦⋆P⟧ = ⟦nextm P⟧  where m ≥ 0        (3.27)
and

⟦nextm P⟧ = CLOCK ; CLOCK ; . . . ; CLOCK ; P    (with m copies of CLOCK)
The process will be executed after an arbitrary number of processes CLOCK are
executed. The denotations of rtcc processes are summarized in table 3.3.
⟦tell(c)⟧ = c: 0 ½ 1, λ(c) ↦ ⟨c,tell⟩
⟦when c do P⟧ = ⟦tell(c)⟧ ; P, λ(c) ↦ ⟨c,ask⟩
⟦P + Q⟧ = A ∪ B ∨ (P ≠ 0 ∧ B − A) ∨ (Q ≠ 0 ∧ A − B)
⟦P ∥ Q⟧ = P ∧ Q
⟦local x in P⟧ = (A,X,λk) where ∀a ∈ A. λk(a)#1 = ∃x.λ1(a)#1 ∧ λk(a)#2 = λ1(a)#2
⟦catch c in P finally Q⟧ = (⟦tell(c)⟧ ∧ (A ∨ ×A) ∧ Q) ∨ (⟦tell(c)⟧ = 0 ∧ P ∧ B), λ(c) ↦ ⟨c,ask⟩
⟦delay P until δ⟧ = DELAY ; P
⟦next P⟧ = CLOCK ; P
⟦unless c next P⟧ = (⟦tell(c)⟧ = 0 ∧ CLOCK ; P) ∨ (⟦tell(c)⟧ ≠ 0), λ(c) ↦ ⟨c,ask⟩
⟦!P⟧ = P ⊔ CLOCK ; ⟦!P⟧
⟦⋆P⟧ = ⟦nextm P⟧ where m ≥ 0
Table 3.3: Denotational Semantics of rtcc
Now we give some simple examples illustrating the denotations of processes.
Example 3.3.1. Let P def= tell(a1) and Q def= when a2 do tell(a3). Then ⟦P + Q⟧ is computed thus:

⟦P⟧ = ⟦tell(a1)⟧ = a1: 0 ½ 1        λ(a1) = ⟨a1,tell⟩

⟦Q⟧ = ⟦when a2 do tell(a3)⟧
    = ⟦tell(a2)⟧ ; ⟦tell(a3)⟧
    = (a2: 0 ½ 1) ; (a3: 0 ½ 1)
    = (a2: 0 ½ 1) ∧ (a3: 0 ½ 1) ∧ (a2: 1 ∨ a3: 0)

    = a2   0 0 0 ½ ½ ½ 1 1 1
      a3   0 ½ 1 0 ½ 1 0 ½ 1     ∧ (a2: 1 ∨ a3: 0)

    = a2   0 ½ 1 1 1
      a3   0 0 0 ½ 1
        λ(a2) = ⟨a2,ask⟩, λ(a3) = ⟨a3,tell⟩

⟦P + Q⟧ =
      a1   0            a1   ½ 1           a1   0 0 0 0
      a2   0      ∨     a2   0 0      ∨    a2   ½ 1 1 1
      a3   0            a3   0 0           a3   0 0 ½ 1

    = a1   0 ½ 1 0 0 0 0
      a2   0 0 0 ½ 1 1 1
      a3   0 0 0 0 0 ½ 1
        λ(a1) = ⟨a1,tell⟩, λ(a2) = ⟨a2,ask⟩, λ(a3) = ⟨a3,tell⟩

∎
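The five-state space obtained for ⟦Q⟧ in example 3.3.1 can be checked mechanically. The sketch below assumes the gating condition a2 = 1 ∨ a3 = 0 for the sequential operator ';' between two tell spaces, as used in the example; it is an illustration, not a general implementation of ';'.

```python
from fractions import Fraction
from itertools import product

HALF = Fraction(1, 2)          # stands in for the "during" value 1/2
VALUES = (0, HALF, 1)          # the three states of a single tell event

def seq_states():
    """States of [[tell(a2)]] ; [[tell(a3)]]: the product of the two
    three-state tell spaces, restricted by the gating condition that
    a3 may only leave 0 once a2 has reached 1 (a2 = 1 or a3 = 0)."""
    return [(a2, a3) for a2, a3 in product(VALUES, VALUES)
            if a2 == 1 or a3 == 0]

# Five states, as in the Chu space computed for [[Q]] in example 3.3.1.
assert seq_states() == [(0, 0), (HALF, 0), (1, 0), (1, HALF), (1, 1)]
```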
Example 3.3.2. Let P def= catch r in tell(b) and Q def= tell(b). Then ⟦P ∥ Q⟧ is computed thus:

⟦P⟧ = ⟦catch r in tell(b)⟧
    = (r: 1 ∧ (b1: 0 ∨ b1: ×)) ∨ (r: 0 ∧ b1: 0 ½ 1)

    = r    1 1          r    0 0 0
      b1   0 ×     ∨    b1   0 ½ 1

    = r    1 1 0 0 0
      b1   0 × 0 ½ 1
        λ(r) = ⟨r,ask⟩, λ(b1) = ⟨b,tell⟩

⟦Q⟧ = ⟦tell(b)⟧ = b2: 0 ½ 1        λ(b2) = ⟨b,tell⟩

⟦P ∥ Q⟧ = r    1 1 0 0 0
          b1   0 × 0 ½ 1     ∧  b2: 0 ½ 1

    = r    1 1 0 0 0 1 1 0 0 0 1 1 0 0 0
      b1   0 × 0 ½ 1 0 × 0 ½ 1 0 × 0 ½ 1
      b2   0 0 0 0 0 ½ ½ ½ ½ ½ 1 1 1 1 1
        λ(r) = ⟨r,ask⟩, λ(b1) = ⟨b,tell⟩, λ(b2) = ⟨b,tell⟩

Note that although the events by which the two processes post the constraint b are different (they occupy two different rows of the Chu space), they perform the same action. Therefore, the labelling function λ of the resulting Chu space returns the same information for both events, i.e. λ(b1) ↦ b and λ(b2) ↦ b. ∎
Example 3.3.3. Let Pdef= ![1,2] tell(a). Then ⟦P ⟧ is computed thus:
⟦P ⟧ = ⟦![1,2] tell(a)⟧= a1 0 1 ⊔ clock1 0 1 ; ⟦![1,1] tell(a)⟧= a1 0 0 1 0 1
clock1 0 1 1; ⟦![1,1] tell(a)⟧
=a1 0 0 1 0 1 1 1 1 1 1
a2 0 0 0 0 0 0 0 1 0 1
clock1 0 1 1 1 1 1 1 1
clock2 0 0 0 0 0 0 1 1
; ⟦next tell(a)⟧
=
a1 0 0 1 0 1 1 1 1 1 1 1 1 1 1 1
a2 0 0 0 0 0 0 0 1 0 1 1 1 1 1 1
a3 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1
clock1 0 1 1 1 1 1 1 1 1 1 1 1 1
clock2 0 0 0 0 0 0 1 1 1 1 1 1 1
clock3 0 0 0 0 0 0 0 0 0 0 0 1 1
λ(a1) = ⟨a,tell⟩, λ(a2) = ⟨a,tell⟩, λ(a3) = ⟨a,tell⟩, λ(clock1) = ⟨1,time⟩, λ(clock2) = ⟨1,time⟩, λ(clock3) = ⟨1,time⟩
Note that the time units in the Chu space are delimited by the events clocki: time unit 1 comprises the states where clock1 is in transition, time unit 2 those where clock2 is, and time unit 3 those where clock3 is.
∎
Example 3.3.4. This example shows that in this calculus the Interleaving Law
[SRP91, GJS96] does not hold. This law says that:
(when a do tell(b)) ∥ (when c do tell(d)) =
        when a do (tell(b) ∥ (when c do tell(d)))
      + when c do (tell(d) ∥ (when a do tell(b)))

The semantics of (when a do tell(b)) ∥ (when c do tell(d)) is obtained thus:
⟦when a do tell(b)⟧ = a 0 1 1 1
b 0 0 0 1
⟦when c do tell(d)⟧ = c 0 1 1 1
d 0 0 0 1
⟦(when a do tell(b)) ∥ (when c do tell(d))⟧ =a 0 1 1 1 0 1 1 1 0 0 0 1 1 1 1 1 1 1 1 1
b 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 1 1
c 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
d 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 1 1
On the other hand,
⟦tell(b) ∥when c do tell(d)⟧ = b 0 0 0 0 0 1 1 1 1 1
c 0 1 1 1 0 1 1 1 0 1 1 1
d 0 0 0 1 0 0 0 1 0 0 0 1
⟦tell(d) ∥when a do tell(b)⟧ = d 0 0 0 0 0 1 1 1 1 1
a 0 1 1 1 0 1 1 1 0 1 1 1
b 0 0 0 1 0 0 0 1 0 0 0 1
and
⟦when a do(tell(b) ∥ (when c do tell(d)))⟧ =a 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
b 0 0 0 0 0 0 0 1 1 1 1 1
c 0 0 0 1 1 1 0 1 1 1 0 1 1 1
d 0 0 0 0 0 1 0 0 0 1 0 0 0 1
⟦when c do(tell(d) ∥ (when a do tell(b)))⟧ =a 0 0 0 1 1 1 0 1 1 1 0 1 1 1
b 0 0 0 0 0 1 0 0 0 1 0 0 0 1
c 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
d 0 0 0 0 0 0 0 1 1 1 1 1
The choice between the above two processes is
a 0 1 1 1 0 1 1 1 0 0 0 1 1 1 1 1 1 1 1 1
b 0 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 1 1
c 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
d 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 1 1
Note that this last Chu space is missing one state that is present in ⟦(when a do tell(b)) ∥ (when c do tell(d))⟧: the state where events a and c are occurring simultaneously (written in boldface above). ∎
3.3.2 Information, Resources and Time
The observations that can be made from a process are the input from the environment
(i.e. the initial store, the available resources and the duration of the time unit) and
its output, that is, the constraints added by it (the final store), the maximum number
of resources used concurrently and the time spent by it. In this section we provide
the way of finding these elements, from the denotation of a process.
To model the input from the environment we use the following function.
Definition 3.3.5. (Denotational Inputs). Let Ins ∶ N → ∣D∣ × N × N be the function that, given a time unit index, returns a tuple consisting of a constraint representing the initial store, a number of resources and an amount of time for that time unit. We denote by Ins(i).j the j-th component (j ∈ {1,2,3}) of Ins(i).
To model the output of a process we need to compute its actions given some input. Since, ultimately, the action that the events of a Chu space denoting a process can perform is to add information to the store (i.e. posting constraints), the store can be recovered from a denotation at each state x by calculating the conjunction of all constraints λ(a)#1 for all events a ∉ {delay, clock} such that x(a) = 1. We identify the relevant events as Rev = {a ∣ x(a) = 1}. However, we must take care to distinguish tell and ask types of events. These are easily identified by means of the second component of the event labels. The set of ask events is then A = {a ∣ a ∈ Rev ∧ λ(a)#2 = ask}.

As we described in section 1.2.6, in a Chu space dependencies between events can be easily computed. Let Cons be the consistency relation (in event-structure terms) over Rev. Let us denote
ConsA = {(a, b) ∣ ((a, b) ∈ Cons ∧ a ∈ A) ∨ ((a, b) ∈ Cons ∧ ∃d ∈ A. (d, a) ∈ Cons)}

the consistency relation rooted in A. For example, for

P def= when a do tell(b) ∥ when d do tell(c)

we have ConsA = {(a, b), (a, d), (a, c), (d, c)}. With this we identify the events with ask dependencies, D = domain(ConsA) ∪ range(ConsA), and also the set of events on which a given event b depends, Dep(b) = {a ∣ (a, b) ∈ ConsA}.

From the point of view of constraints, dependencies represent implications. For example, Dep(c) = {a, d} represents the constraint a ∧ d ⇒ c. Let us now construct all such implications:

Imp = {⋀e∈Dep(d) λ(e) ⇒ λ(d) ∣ d ∈ D}

We can now find the information contained in a state.
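The construction of ConsA and Dep can be sketched directly from the definition. The concrete relation `cons` below is an assumption (the text only states the resulting ConsA for this process, not Cons itself); the two functions simply transcribe the set comprehensions above.

```python
def cons_rooted(cons, ask_events):
    """ConsA: the pairs of Cons whose first component is an ask event,
    or whose first component itself depends on some ask event."""
    return {(x, y) for (x, y) in cons
            if x in ask_events
            or any((d, x) in cons for d in ask_events)}

def dep(b, cons_a):
    """Dep(b): the set of events on which event b depends."""
    return {x for (x, y) in cons_a if y == b}

# Assumed consistency relation for
#   P = when a do tell(b) || when d do tell(c).
# The text gives only ConsA, so this Cons is a guess consistent with it.
cons = {("a", "b"), ("a", "d"), ("a", "c"), ("d", "c")}
ask = {"a", "d"}

cons_a = cons_rooted(cons, ask)
assert cons_a == {("a", "b"), ("a", "d"), ("a", "c"), ("d", "c")}
assert dep("c", cons_a) == {"a", "d"}   # encodes the implication a ∧ d ⇒ c
```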
Definition 3.3.6. (Entailment Closure Function). The information contained in a state is the entailment closure EC of all tell constraints and the ask dependencies:

EC(Rev) = Closure({λ(e) ∣ e ∈ E ∖ D} ∪ Imp)

The set EC is a somewhat stronger store than what P actually computes. The latter can be obtained thus:

Store = {λ(a) ∣ a ∈ Rev ∧ λ(a) ∈ EC(Rev)}
To calculate the resources used and the time spent by a process, we use the notions
of step, step graph and run within a Chu space (A,X) as defined in [Pra03].
Given a process (A,X), a step is a pair (x, y) of distinct elements of X such that every coordinate of (x, y) is one of (0, 0), (0, ½), (½, ½), (½, ×), (½, 1), or (1, 1). Thus, for example, (00, ½0), (00, ½½), (½½, 11) and (½0, 10) are all steps, but not (01, 0½), (0½, 00), or (00, 01).

For a given process P = (A,X), let G be the graph with vertex set X and edge set all steps (x, y) with x, y ∈ X. G is called the step graph of P. A run is a path in the step graph (i.e. a sequence of states).
The number of resources used in a state x is given by the number of events a such that x(a) = ½. Clearly, by the limits on resources, at each state the number of ½'s (events in transition) must be at most r, where r is the number of resources made available by the environment. The notion of a valid step is defined next.

Let r be the number of resources provided by the environment. A valid step is a step (x, y) where the number of ½'s in y is less than or equal to r. For example, if the environment provides only 2 resources, then the step (000, ½½½) is not valid.
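These step conditions are easy to check mechanically. A sketch follows, with states as dicts from events to values; encoding ½ as a `Fraction` and cancellation as the string `"x"` is an assumption for illustration.

```python
from fractions import Fraction

HALF = Fraction(1, 2)
# The allowed coordinate moves within a step: stay put, advance
# 0 -> 1/2 -> 1, or fall from 1/2 to the cancelled value "x".
ALLOWED = {(0, 0), (0, HALF), (HALF, HALF), (HALF, "x"), (HALF, 1), (1, 1)}

def is_step(x, y):
    """(x, y) is a step iff the states are distinct and every
    coordinate makes one of the allowed moves."""
    return x != y and all((x[a], y[a]) in ALLOWED for a in x)

def is_valid_step(x, y, r):
    """A valid step may leave at most r events in transition (value 1/2)
    in the target state, r being the resources the environment provides."""
    return is_step(x, y) and sum(1 for a in y if y[a] == HALF) <= r

s0 = {"a": 0, "b": 0, "c": 0}
s1 = {"a": HALF, "b": HALF, "c": HALF}
assert is_step(s0, s1)
assert not is_valid_step(s0, s1, 2)   # three 1/2's, but only 2 resources
assert not is_step({"a": 0, "b": 1}, {"a": 0, "b": HALF})  # 1 -> 1/2 not allowed
```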
Now, the time spent by a step is given by the distance between its states, as in [Pra02]. This distance yields a reasonable notion of running time via the functions ΦA and ΦT from definition 3.2.9; hence the notion of distance applies only to steps. Intuitively, it measures how far apart two states x, y of a Chu space (A,X) are. It is formally defined as follows:
Definition 3.3.7. (Distance). Let (A,X) be a process. The distance of a step (x, y), notation d(x, y), is

d(x, y) =
  0                                   if x = y ∨ ∀a.(x(a) ≠ ½ ∨ y(a) ≠ 1)
  max(0, λ(a)#1 − min(C))             if a = delay and x(a) = ½ ∧ y(a) = 1
  λ(a)#1                              if a = clock and x(a) = ½ ∧ y(a) = 1
  ΦT(⋀a∈A λ(a), s) + ΦA(⋀b∈A λ(b), s) if ∀a.(x(a) = ½ ∧ y(a) = 1 ∧ λ(a)#2 = tell) ∧ ∀b.(x(b) = ½ ∧ y(b) = 1 ∧ λ(b)#2 = ask)
  min(Dx,y)                           if Dx,y is the set of the distances of all possible runs from state x to state y
  min(Dy,x)                           if Dy,x is the set of the distances of all possible runs from state y to state x

where s is the current store, and C is the set of summations of distances of all possible sequences of valid steps going from the initial state to state x.
The distances satisfy the following four properties of metric spaces:

d(a, b) + d(b, c) ≥ d(a, c)
d(a, b) = d(b, a)
d(a, a) = 0
d(a, b) = 0 if a = b
It is natural that a run begins in a state x where for every event a (other than clock), x(a) = 0 (called the initial state), and it is ideal that it ends in a state y where for every event a, y(a) = 1 (called the final state). Nevertheless, in some cases there are events a, b that cannot happen at the same time. For example, in the process tell(a2) + tell(a3), either event a2 occurs or event a3 does, so the final state (11) is not reachable. Moreover, due to the time limit of processes, a process may not be able to finish all its computations; in that case it must stop in some state before the time limit is reached. Hence we call a state x a possible final state of process (A,X) when for all a ∈ A, x(a) is one of {0, ×, 1}.

Processes are executed with inputs provided by the environment at each time unit.
We are also interested in the outputs of processes at each time unit. Nevertheless, the denotation of a process models all time units, so we have to extract from the denotation those states corresponding to a given time unit i. We do this by dropping those states x where x(clocki) ≠ ½. For a process P, we denote the execution of P in the time unit i by ⟦P⟧i.

Definition 3.3.8. (Valid Run). For a given time unit i, let ⟨s, r, t⟩ = ⟨Ins(i).1, Ins(i).2, Ins(i).3⟩ be the input store, resources and duration of the i-th time unit. A valid run is a run in ⟦P⟧i (consisting only of valid steps) whose first state x is an initial state, whose last state y is a possible final state, and where d(x, y) ≤ t.

For example, consider the following Chu space denoting the process P + Q from example 3.3.1, where P def= tell(a1) and Q def= when a2 do tell(a3):

       s1  s2  s3  s4  s5  s6  s7
a1      0   ½   1   0   0   0   0
a2      0   0   0   ½   1   1   1
a3      0   0   0   0   0   ½   1

λ(a1) = ⟨a1,tell⟩, λ(a2) = ⟨a2,ask⟩, λ(a3) = ⟨a3,tell⟩
If for each event ai, Φ(λ(ai), d) = 3, and t = 5 (the duration of the time unit), then in this denotation [s1, s2, s3] and [s1, s4, s5] are valid runs (they denote the addition of constraints a1 and a2 to the store, respectively), but [s1, s4, s5, s6, s7] is not a valid run.
A process can have several valid runs. The denotation of process P + Q from example 3.3.1, for instance, has five valid runs:

R1 = [000]
R2 = [000, ½00, 100]
R3 = [000, 0½0, 010]
R4 = [000, 0½0, 010, 01½, 011]
R5 = [000, 0½0, 01½, 011]
Then we can define a partial order between valid runs.
Ri < Rj iff Ri is a prefix of Rj
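The prefix order is straightforward to state in code. A sketch (the tuple encoding of states and the symbol `h` standing for the value ½ are notational assumptions):

```python
def run_lt(ri, rj):
    """Ri < Rj iff Ri is a proper prefix of Rj (runs as lists of states)."""
    return len(ri) < len(rj) and rj[:len(ri)] == ri

# Valid runs of [[P + Q]] from example 3.3.1, with a state written
# as an (a1, a2, a3) tuple and "h" standing for the value 1/2.
h = "h"
R1 = [(0, 0, 0)]
R2 = [(0, 0, 0), (h, 0, 0), (1, 0, 0)]
R4 = [(0, 0, 0), (0, h, 0), (0, 1, 0), (0, 1, h), (0, 1, 1)]
R3 = R4[:3]

assert run_lt(R1, R2) and run_lt(R3, R4)
assert not run_lt(R2, R4)   # R2 is not a prefix of R4
```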
Now we summarize how the outputs of a process are obtained from its denotation. The following three functions compute the output observables in a single time unit. Given a Chu space C = (A,X,λ) denoting some process P, the observable output of P in a valid run R of C within a time unit can be recovered with the functions ds, dr, dt defined as follows:

1. ds(R) = EC of the last state of R
2. dr(R) = max{contx ∣ x ∈ R}, where contx is the number of events a ∉ {delay, clock} such that x(a) = ½
3. dt(R) = d(x, y), where x is the first state of R and y is the last state of R
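Of these three functions, dr is the easiest to express directly; a sketch follows (the event-name convention used to filter out delay and clock events is an assumption for illustration).

```python
HALF = "h"   # placeholder symbol for the in-transition value 1/2

def dr(run):
    """dr(R): the maximum number of events simultaneously in transition
    along the run, ignoring delay and clock events -- the resources the
    run uses concurrently."""
    def in_transition(state):
        return sum(1 for a, v in state.items()
                   if v == HALF and not a.startswith(("delay", "clock")))
    return max(in_transition(x) for x in run)

# Run R4 of example 3.3.1: at most one event is ever in transition.
R4 = [
    {"a1": 0, "a2": 0,    "a3": 0},
    {"a1": 0, "a2": HALF, "a3": 0},
    {"a1": 0, "a2": 1,    "a3": 0},
    {"a1": 0, "a2": 1,    "a3": HALF},
    {"a1": 0, "a2": 1,    "a3": 1},
]
assert dr(R4) == 1   # matches dr(R4) = 1 computed in the text
```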
For the valid run R4 from the example above, for instance, if Ins(i) = ⟨a2, 3, 100⟩ (where i is the current time unit), we have ds(R4) = a2 ∧ a3, dr(R4) = 1 and dt(R4) = ΦA(a2, a2) + ΦT(a3, a2).

The functions above will help us build a function which returns the outputs of a process given a valid run.
Definition 3.3.9. (Denotational Outputs). Let Ω be the set of valid runs. Let Outs ∶ Ω → ∣D∣ × N × N be the function which, given a valid run of a Chu space, returns a tuple consisting of a constraint representing a store, a number of resources and an amount of time (the outputs of a process).

Function Outs will be used to define the outputs of a process. Given a valid run R of a Chu space C = (A,X,λ) denoting some process P, we say that Outs(R) = ⟨c, r, t⟩ iff ds(R) = c, dr(R) = r and dt(R) = t. If RP is the set of valid runs of ⟦P⟧, all the outputs of P are Outs(RP).

Since we defined a partial order between valid runs, it can be extended to outputs:
Outs(Ri) < Outs(Rj) iff Ri < Rj
From the result of applying function Outs to RP we can build sequences of outputs using the underlying partial order. Given a set of outputs OP, a chain s = s1 s2 . . . sn of OP is a sequence of elements si ∈ OP such that si < si+1 for all 1 ≤ i < n.
The observables of a process can then be recovered as follows:

Definition 3.3.10. (Denotational Observables). Let RP,i be the set of valid runs of ⟦P⟧i. The observations of a process P are the set of chains of its inputs and outputs, that is,

Obs(P) = {(α,α′) ∣ α(i) = Ins(i) and α′(i) = Outs(RP,i)}
3.3.3 Soundness and Completeness of the Denotations
In this section we prove the soundness and completeness of the denotations. First, we prove that the denotation of a process approximates its input-output behaviour, in the sense that it provides a superset of it.

Theorem 3.3.11. (Soundness). For every process P, io(P) ⊆ Obs(P).

Proof. Recalling that io(P) = {(α,α′) ∣ P (α,α′)ÔÔ⇒ ω}, the proof proceeds by induction on the structure of P.
P = skip. This is immediate since the operational behaviour of this process is

skip (α,α′)ÔÔ⇒ ω

where for every input it outputs the same store, zero resources consumed and zero ticks of time spent. Semantic equation 3.17 shows that the denotation of this process has only one state x and only one event T with x(T) = 0, so if ∀i. Ins(i) = ⟨d, r, t⟩ and R is the only valid run in i, then Outs(R) = ⟨d, 0, 0⟩. Therefore, io(skip) ⊆ Obs(skip).
P = tell(c). Let α = ⟨d, r, t⟩ ⋅ α1 and α′ = ⟨d′, r′, t′⟩ ⋅ α′1. Then operationally the behaviour of process P will be

tell(c) (⟨d,r,t⟩, ⟨d′,r′,t′⟩)ÔÔ⇒ skip (α1,α′1)ÔÔ⇒ ω
From rule 3.1, if the available time t is greater than the time spent by the constraint system to post c in the given store d (i.e. t > ΦT(c, d)), then this process evolves to skip and the store is augmented to d ∧ c. Otherwise the process remains blocked. In either case the future of the process is skip. Then, from the operational semantics we can obtain two outputs: ⟨d ∧ c, 1, ΦT(c, d)⟩ in the first case, and ⟨d, 0, 0⟩ in the second.

On the other hand, from semantic equation 3.16, ⟦tell(c)⟧ = c: 0 ½ 1. Function Ins applied to the current time unit returns ⟨d, r, t⟩, and given the only two valid runs in this case, R1 = [0] and R2 = [0, ½, 1], the application of the functions ds, dr, dt gives

ds(R1) = d    ds(R2) = d ∧ c
dr(R1) = 0    dr(R2) = 1
dt(R1) = 0    dt(R2) = ΦT(c, d)

Hence, Outs(Ri) is one of ⟨d, 0, 0⟩ and ⟨d ∧ c, 1, ΦT(c, d)⟩. Therefore, we can conclude that io(tell(c)) ⊆ Obs(tell(c)).
P = ∑i∈I when ci do Pi. Let α = ⟨d, r, t⟩ ⋅ α1. There are three cases depending on
the current time and the current store:
1. there exists j ∈ I such that d ⊧ cj and t −ΦA(cj , d) < 0,
2. ∀i ∈ I. d ⊭ ci, or
3. there exists j ∈ I such that d ⊧ cj and t −ΦA(cj , d) ≥ 0.
For the first two cases, ⟨P, d, t⟩ ↛ and

∑i∈I when ci do Pi (⟨d,r,t⟩, ⟨d,0,0⟩)ÔÔ⇒ skip (α1,α′1)ÔÔ⇒ ω
with α′ = ⟨d,0,0⟩ ⋅ α′1. By equation 3.18, ⟦when c do P⟧ = ⟦tell(c)⟧ ; P. The first case is the one with A for P = (A,X), that is, a Chu space with a single state x where no event has started (∀a ∈ A. x(a) = 0, because there is no time to execute an event). Then, just as in the case of skip, the only valid run in the current time unit applied to function Outs returns ⟨d,0,0⟩.

The second case is the one with ⟦tell(c)⟧ = 0, that is, when the guard cannot be entailed by the store (neither a process nor the environment provides the constraint c). The result of function Outs is the same as in the first case.
In the third case, for some Q, ⟨P, d, t⟩ → ⟨Pj, d, t1⟩ →∗ ⟨Q, d′, t′⟩ ↛, and

∑i∈I when ci do Pi (⟨d,r,t⟩, ⟨d′,r′,t′⟩)ÔÔ⇒ F(Q) (α1,α′1)ÔÔ⇒ ω

Thus Pj (α,α′)ÔÔ⇒ ω, which is the case of ⟦tell(c)⟧ ≠ 0. By the inductive hypothesis, (α,α′) ⊆ Obs(Pj), and by equation 3.19 just one of the Pi can happen. Therefore, (α,α′) ⊆ Obs(∑i∈I when ci do Pi).
P = Q ∥ R. Let α = ι1 ⋅ ι2 ⋯ ιn ⋯, and α′ = o1 ⋅o2 ⋯ on ⋯. Depending on the resources
in the given input α the behaviour of process Q ∥ R is one of the next three
cases:
Q ∥ R = Q1 ∥ R (ι1,o11)ÔÔ⇒ Q2 ∥ R (ι2,o12)ÔÔ⇒ ⋯ Qn ∥ R (ιn,o1n)ÔÔ⇒ ⋯
Q ∥ R = Q ∥ R1 (ι1,o21)ÔÔ⇒ Q ∥ R2 (ι2,o22)ÔÔ⇒ ⋯ Q ∥ Rn (ιn,o2n)ÔÔ⇒ ⋯
Q ∥ R = Q1 ∥ R1 (ι1,o1)ÔÔ⇒ Q2 ∥ R2 (ι2,o2)ÔÔ⇒ ⋯ Qn ∥ Rn (ιn,on)ÔÔ⇒ ⋯
For the first two cases, if α′1 = o11 ⋅ o12 ⋯ and α′2 = o21 ⋅ o22 ⋯, then Q (α,α′1)ÔÔ⇒ ω and R (α,α′2)ÔÔ⇒ ω. By the inductive hypothesis, (α,α′1) ⊆ Obs(⟦Q⟧) and (α,α′2) ⊆ Obs(⟦R⟧). Since in this case oi = o1i ∧ o2i for all i, and by semantic equation 3.20 the denotation of a parallel process is the join of the processes involved, we can conclude that (α,α′) ⊆ Obs(Q ∥ R).

For the third case, depending on the given number of resources in α, both processes can execute concurrently. The output of the process will be the conjunction of both stores, the maximum number of resources used by both processes Q and R, and the time spent by both processes executing concurrently. By equation 3.20 and by definitions 3.3.8 and 3.3.9, there are some possible valid runs Rp in ⟦Q ∥ R⟧i which, applied to the functions ds, dr, dt, return the output oi in each time unit i. Therefore, by the inductive hypothesis, (α,α′) ⊆ Obs(Q ∥ R).
P = local x in Q. Let α = ι1 ⋅ ι2 ⋯ ιn ⋯, and α′ = o1 ⋅ o2 ⋯ on ⋯. For local x in Q
there must be a derivation of the form
local x in Q = local x in Q1 (ι1,o1)ÔÔ⇒ local x in Q2 (ι2,o2)ÔÔ⇒ . . . local x in Qn (ιn,on)ÔÔ⇒ . . .
If for i > 0, ιi = ⟨di, ri, ti⟩, from rule 3.6 and induction on the length of each observable transition, we derive

Q = Q1 (⟨∃x.d1,r1,t1⟩, o1)ÔÔ⇒ Q2 (⟨∃x.d2,r2,t2⟩, o2)ÔÔ⇒ ⋯ Qn (⟨∃x.dn,rn,tn⟩, on)ÔÔ⇒ ⋯

Now, if for i > 0, oi = ⟨d′i, r′i, t′i⟩, there must exist a ci such that d′i = (∃x.di) ∧ ci. By the inductive hypothesis (α,α′) ⊆ Obs(Q). Then by equation 3.21, if ⟦Q⟧ = (A,X,λ1) then ⟦local x in Q⟧ = (A,X,λ2) where ∀a ∈ A. λ2(a)#1 = ∃x.λ1(a)#1. Hence, we can conclude that (α,α′) ⊆ Obs(local x in Q).
P = delay Q until δ. Let α = ⟨d, r, t⟩ ⋅ α1 and α′ = ⟨d′, r′, t′⟩ ⋅ α′1. By rules 3.10 and 3.11, ⟨P, d, t⟩ → ⟨P, d, t − 1⟩ →∗ ⟨Q, d, t − δ⟩ →∗ ⟨R, d′, t′⟩ ↛. This is the case of DELAY ; Q. Then the behaviour of process P will be

delay Q until δ (⟨d,r,t⟩, ⟨d′,r′,t′⟩)ÔÔ⇒ S (α1,α′1)ÔÔ⇒ ω

where S ≡ F(R). Thus P (α,α′)ÔÔ⇒ ω. Then by the inductive hypothesis we can conclude that (α,α′) ⊆ Obs(delay Q until δ).
P = next Q. Let α = ⟨d, r, t⟩ ⋅ α1 and α′ = ⟨d′, r′, t′⟩ ⋅ α′1. Then the behaviour of process P will be

next Q (⟨d,r,t⟩, ⟨d′,r′,t′⟩)ÔÔ⇒ R (α1,α′1)ÔÔ⇒ ω

where R ≡ F(next Q) = Q, by rule 3.15 and definition 3.2.10. By the inductive hypothesis, (α,α′) ⊆ Obs(Q), and by equation 3.24 process P must wait until the event of process CLOCK corresponding to the current time unit has finished. Therefore (α,α′) ⊆ Obs(next Q).
P = unless c next Q. For this process there are two cases: either d ⊧ c or d ⊭ c. In the first case, by rule 3.7 this process reduces to skip, so the proof is trivial. In the second case, using semantic equation 3.25, the proof is similar to the case P = next Q.
P = catch c in Q finally R. Let α = ⟨d, r, t⟩ ⋅ α1 and α′ = ⟨d′, r′, t′⟩ ⋅ α′1. There are two cases: either d ⊧ c or d ⊭ c. For the first case, for some S1, ⟨P, d, t⟩ → ⟨R, d, t1⟩ →∗ ⟨S1, d′, t′⟩ ↛, and

catch c in Q finally R (⟨d,r,t⟩, ⟨d′,r′,t′⟩)ÔÔ⇒ F(S1) (α1,α′1)ÔÔ⇒ ω

Thus R (α,α′)ÔÔ⇒ ω. By the inductive hypothesis, (α,α′) ⊆ Obs(R); by equation 3.22 a process tell(c) must occur before R, or the environment must provide c, and process Q must have been cancelled or must not have started yet. Therefore, (α,α′) ⊆ Obs(catch c in Q finally R).

For the second case, for some S2,
⟨catch c in Q finally R, d, t⟩ → ⟨catch c in Q′ finally R, d1, t1⟩ →∗ ⟨catch c in S2 finally R, d′, t′⟩ ↛

and then

catch c in Q finally R (⟨d,r,t⟩, ⟨d′,r′,t′⟩)ÔÔ⇒ catch c in F(S2) finally R (α1,α′1)ÔÔ⇒ ω

Thus catch c in Q finally R (α,α′)ÔÔ⇒ ω. By the inductive hypothesis, (α,α′) ⊆ Obs(Q); by equation 3.22 no process tell(c) can have occurred before, so Q executes. Therefore, (α,α′) ⊆ Obs(catch c in Q finally R).
P = ! Q. Let α = ι1 ⋅ ι2 ⋅ ι3 ⋯ ιn ⋯, and α′ = o1 ⋅ o2 ⋅ o3 ⋯ on ⋯. Using rule 3.12 we can obtain the internal transition ⟨! Q, d, t⟩ →∗ ⟨Q′ ∥ next ! Q, d′, t′⟩ ↛, and then by definition 3.2.10, F(Q′ ∥ next ! Q) = F(Q′) ∥ ! Q. Hence, if Q = Q1,1 and we repeat the last reasoning and that of the parallel case, an external transition may be:

! Q1,1 (ι1,o1)ÔÔ⇒ Q1,2 ∥ ! Q1,1 (ι2,o2)ÔÔ⇒ Q1,3 ∥ Q2,2 ∥ ! Q1,1 (ι3,o3)ÔÔ⇒ Q1,4 ∥ Q2,3 ∥ Q3,2 ∥ ! Q1,1 ⋯ (ιn−1,on−1)ÔÔ⇒ Q1,n ∥ Q2,n−1 ∥ Q3,n−2 ∥ . . . ∥ Qn−1,2 ∥ ! Q1,1 (ιn,on)ÔÔ⇒ . . .
⋮
where each parallel composition can be derived as

Q1,1 (ι1,o1)ÔÔ⇒ Q1,2 (ι2,o2)ÔÔ⇒ Q1,3 (ι3,o3)ÔÔ⇒ . . . (ιn−1,on−1)ÔÔ⇒ Q1,n (ιn,on)ÔÔ⇒ . . .
Q1,1 (ι2,o2)ÔÔ⇒ Q2,2 (ι3,o3)ÔÔ⇒ Q2,3 (ι4,o4)ÔÔ⇒ . . . (ιn−1,on−1)ÔÔ⇒ Q2,n−1 (ιn,on)ÔÔ⇒ . . .
Q1,1 (ι3,o3)ÔÔ⇒ Q3,2 (ι4,o4)ÔÔ⇒ Q3,3 (ι5,o5)ÔÔ⇒ . . . (ιn−1,on−1)ÔÔ⇒ Q3,n−2 (ιn,on)ÔÔ⇒ . . .
⋮
Therefore, similarly to the case of parallel composition, by the inductive hypothesis and equation 3.26 we can conclude that (α,α′) ⊆ Obs(! Q).

P = ⋆ Q. Let α = ι1 ⋅ ι2 ⋅ ι3 ⋯ ιn ⋯, and α′ = o1 ⋅ o2 ⋅ o3 ⋯ on ⋯. By rule 3.13, ⟨⋆ Q, d, t⟩ → ⟨nextm Q, d, t⟩. Then we distinguish two cases:
m = 0. In this case we have ⟨⋆ Q, d, t⟩ → ⟨Q, d, t⟩ →∗ ⟨Q′, d′, t′⟩ ↛ and Q (α,α′)ÔÔ⇒ ω. By the inductive hypothesis we have (α,α′) ⊆ Obs(Q). Therefore (α,α′) ⊆ Obs(⋆ Q).
m ≥ 1. In this other case we have

⋆ Q (ι1,o1)ÔÔ⇒ nextm−1 Q (ι2,o2)ÔÔ⇒ . . . (ιm−1,om−1)ÔÔ⇒ next Q (ιm,om)ÔÔ⇒ Q

and

Q = Q1 (ιm+1,om+1)ÔÔ⇒ Q2 (ιm+2,om+2)ÔÔ⇒ . . .
Hence, using inductive hypothesis and equation 3.27 we can conclude
(α,α′) ⊆ Obs(⋆ Q).
We now show the completeness of the denotations (the converse of theorem 3.3.11).

Theorem 3.3.12. (Completeness). For every process P, Obs(P) ⊆ io(P).

Proof. The proof proceeds by structural induction on P. The cases for parallel composition, delta delay, unit delay, replication and asynchrony can be proved by reversing the proofs of the corresponding cases in theorem 3.3.11. We give the proofs of the remaining cases; they too are similar to their counterparts in theorem 3.3.11, except for some details.
P = tell(c). Let α ∈ Ins(1) and α′ ∈ Outs(RP). From semantic equation 3.16, ⟦tell(c)⟧ = c: 0 ½ 1. Then α = ⟨d, r, t⟩, and given the only two valid runs in this case, R1 = [0] and R2 = [0, ½, 1], the application of the functions ds, dr, dt gives

ds(R1) = d    ds(R2) = d ∧ c
dr(R1) = 0    dr(R2) = 1
dt(R1) = 0    dt(R2) = ΦT(c, d)

Hence, α′ is one of ⟨d, 0, 0⟩ and ⟨d ∧ c, 1, ΦT(c, d)⟩.

On the other hand, operationally the behaviour of process P will be

tell(c) (⟨d,r,t⟩, ⟨d′,r′,t′⟩)ÔÔ⇒ skip (α1,α′1)ÔÔ⇒ ω

Then we can obtain two outputs: ⟨d ∧ c, 1, ΦT(c, d)⟩ and ⟨d, 0, 0⟩. Therefore, we can conclude that Obs(tell(c)) ⊆ io(tell(c)).
P = ∑i∈I when ci do Pi. Let α ∈ Ins(1) and α′ ∈ Outs(RP). By semantic equation 3.18 we have that ⟦when c do Q⟧ = ⟦tell(c)⟧ ; Q, where λ(c) ↦ ⟨c,ask⟩, so the act of posting the constraint c, by a process or as an input α from the environment, must occur before traversing the Chu space denoting Q.

Then we can distinguish two cases:
1. ⟦tell(c)⟧ = 0. This is the case in which the guard does not hold, that is, the constraint c was not posted. Then Q will not be traversed. In the operational semantics this is the case in which the process remains blocked.

2. ⟦tell(c)⟧ ≠ 0. This is the case in which the constraint is being or has been posted, by a process or by the environment in α. So the process will be executed, which is the same case as rule 3.2:

⟨∑i∈I when ci do Pi, d, t⟩ 1Ð→ ⟨Pj, d, t − ∑i∈I ΦA(cj, d)⟩
Then, by the inductive hypothesis, if I is a singleton we can conclude that (α,α′) ⊆ io(∑i∈I when ci do Pi).

On the other hand, by equation 3.19, ⟦P + Q⟧ = A ∪ B ∨ (P ≠ 0 ∧ B − A) ∨ (Q ≠ 0 ∧ A − B). Recall that P + Q is compositional, that is, if P = P1 and Q = P2 + Q1, then P + Q = P1 + P2 + Q1, and if Q1 = P3 + Q2, then P + Q = P1 + P2 + P3 + Q2, and we can continue forming ∑i∈I Pi. Then we distinguish two cases: P ≠ 0 or Q ≠ 0. This means that only one of the Chu spaces will be activated, which is the same case as rule 3.2. Therefore, we can conclude that (α,α′) ⊆ io(∑i∈I when ci do Pi).
P = local x in Q. Let α ∈ Ins(1) and α′ ∈ Outs(RP ). By rule 3.21, if
⟦Q⟧ = (A,X,λ1), then ⟦local x in Q⟧ = (A,X,λ2) where ∀a ∈ A . λ2(a)#1 =
∃x.λ1(a)#1. That is, if Ins(1) = ⟨d, r, t⟩, then ds(RP ) = d ∧ ∃x.λ1(a)#1 for
all a ∈ A. Now, if for i > 0, ιi = ⟨di, ri, ti⟩, the observable transitions of
P are of the form
local x in Q = local x in Q1 ==(ι1, ⟨d1∧∃x.c1, r′1, t′1⟩)==> local x in Q2
==(ι2, ⟨d2∧∃x.c2, r′2, t′2⟩)==> . . . local x in Qn ==(ιn, ⟨dn∧∃x.cn, r′n, t′n⟩)==> . . .

where every ci is the constraint that Qi posts to the store. Then, if we quantify
ci (i.e. ∃x.ci) and join it with each input di we obtain the output store.
Therefore, by the inductive hypothesis we conclude that
(α,α′) ⊆ io(local x in Q).
P = unless c next Q. Let α ∈ Ins(1) and α′ ∈ Outs(RP ). By equation 3.25:
⟦unless c next Q⟧ = (⟦tell(c)⟧ = 0 ∧ CLOCK ; Q) ∨ (⟦tell(c)⟧ ≠ 0), where
λ(c) ↦ ⟨c,ask⟩. We distinguish here two cases:
1. ⟦tell(c)⟧ = 0. In the case in which there is no posting of the constraint
c by a process or the environment in α, we must wait until the
time unit is over (when the process CLOCK finishes) in order to be
able to traverse the Chu space denoting Q. This is the same case of
the future of an unless process (definition 3.2.10): if d ⊭ c (where d is
the current store) then F (unless c next Q) = Q, so we conclude that
(α,α′) ⊆ io(unless c next Q).
2. ⟦tell(c)⟧ ≠ 0. In this case, the constraint c is being or has been posted, so
nothing else happens. This is the same case of rule 3.7: if d ⊧ c (where d
is the current store) then ⟨unless c next P, d, t⟩ −1→ ⟨skip, d, t − ΦA(c, d)⟩,
so we conclude that (α,α′) ⊆ io(unless c next Q).

P = catch c in Q finally R. Let α ∈ Ins(1) and α′ ∈ Outs(RP ). By equation 3.22:
⟦catch c in Q finally R⟧ = (⟦tell(c)⟧ ∧ (A ∨ Ā) ∧ R) ∨ (⟦tell(c)⟧ = 0 ∧ Q),
where λ(c) ↦ ⟨c,ask⟩. Then we distinguish here two cases:
1. ⟦tell(c)⟧ ≠ 0. This is the case in which the constraint c has already been
posted by a process or the environment in α. Then the process R will be
executed. The process Q is then cancelled (Ā) or there is the possibility
that it has not been executed yet (A). This is the same case of rule
3.9: ⟨catch c in P finally Q, d, t⟩ −1→ ⟨Q, d, t − ΦA(c, d)⟩ if d ⊧ c, with
d being the current store. Therefore, by the inductive hypothesis (α,α′) ⊆
io(catch c in Q finally R).
2. ⟦tell(c)⟧ = 0. In this case the constraint c has not been posted yet, so the
process Q can continue its execution. This is the same case of rule 3.8:
⟨catch c in P finally Q, d, t⟩ −s→ ⟨catch c in P′ finally Q, d′, t′⟩ if d ⊭ c
(d is the current store). Hence, (α,α′) ⊆ io(catch c in Q finally R).
Corollary 3.3.13. (Full Abstraction). For two processes P and Q, Obs(P ) = Obs(Q) iff io(P ) = io(Q).
3.4 Real-Time Logic
A real-time temporal logic such as RTTL [OW87, Ost89] extends Pnueli's logic
with the proof rules needed for real-time properties. This logic is linear-time and
it incorporates timing requirements in a formula by an explicit clock variable. The
dynamic state variable T is bound to the time instant in which the temporal formula
is evaluated.
In this section we will introduce a real-time logic, based on RTTL, to express
temporal properties of the rtcc calculus. However, since RTTL lacks the notion of
resources, we will not consider this characteristic in the logic. We will first
define a transition system with maximal parallelism, that is, for
n processes there will be n resources available to execute them. Then we will give
the syntax and semantics of the logic. After that, we will provide a proof system to
relate the processes in the calculus and the propositions in the logic.
3.4.1 Transition System without Resources
In this new transition system configurations are the same as in section 3.2.3 but the
transition relations → and ⇒ are redefined as the least relations satisfying the rules
in table 3.4.
t − ΦT (c, d) ≥ 0
⟨tell(c), d, t⟩ → ⟨skip, d ∧ c, t − ΦT (c, d)⟩

t − ΦA(cj , d) ≥ 0    d ⊧ cj , j ∈ I
⟨∑i∈I when ci do Pi, d, t⟩ → ⟨Pj , d, t − ΦA(cj , d)⟩

⟨P, d, t⟩ → ⟨P′, d′p, t′p⟩    ⟨Q, d, t⟩ → ⟨Q′, d′q, t′q⟩
⟨P ∥ Q, d, t⟩ → ⟨P′ ∥ Q′, d′p ∧ d′q, min(t′p, t′q)⟩

⟨P, c ∧ ∃x d, t − ΦT (c, ∃x d)⟩ → ⟨P′, c′, t′⟩
⟨local x, c in P, d, t⟩ → ⟨local x, c′ in P′, d ∧ ∃x c′, t′⟩

t − ΦA(c, d) ≥ 0    d ⊧ c
⟨unless c next P, d, t⟩ → ⟨skip, d, t − ΦA(c, d)⟩

t − ΦA(c, d) ≥ 0    d ⊧ c
⟨catch c in P finally Q, d, t⟩ → ⟨Q, d, t − ΦA(c, d)⟩

⟨P, d, t − ΦA(c, d)⟩ → ⟨P′, d′, t′⟩    d ⊭ c
⟨catch c in P finally Q, d, t⟩ → ⟨catch c in P′ finally Q, d′, t′⟩

δ > T − t    t > 0
⟨delay P until δ, d, t⟩ → ⟨delay P until δ, d, t − 1⟩

δ ≤ T − t
⟨delay P until δ, d, t⟩ → ⟨P, d, t⟩

⟨!P, d, t⟩ → ⟨P ∥ next !P, d, t⟩        ⟨⋆P, d, t⟩ → ⟨next^m P, d, t⟩  if m ≥ 0

γ1 → γ2
γ′1 → γ′2    if γ1 ≡ γ′1 and γ2 ≡ γ′2

⟨P, c, t⟩ →∗ ⟨Q, d, t′⟩ ↛
P ==(⟨c,t⟩, ⟨d,t−t′⟩)==> R    if R ≡ F (Q)
Table 3.4: Transition Rules of rtcc without Resources
Most rules remain similar to those of the transition system with multiple resources.
The parallel composition is probably the construct with the most changes: since
there are n resources, any pair of processes can run concurrently without problems.
In this transition system the definitions and properties remain valid. It is correct
to state that all properties hold, since the characteristics of time are the same,
and if a property holds for an arbitrary number of resources then it must also hold
for a number n of resources.
3.4.2 Logic Syntax & Semantics
Let V be a set of time variables, and let Π be a set of timing constraints over the
variables in V ∪ T. The syntax of the logic is given by the following definition.
Definition 3.4.1. (Logic Syntax). The formulae A,B, . . . ∈ A are defined induc-
tively as follows:

A,B, . . . ∶= c ∣ π ∣ ¬A ∣ A ∧ A ∣ ∃x.A ∣ ◻A ∣ ◇A ∣ ◦A
Here c denotes a constraint in C which we shall refer to as an atomic proposition.
In addition π ∈ Π and x ∈ V . The symbols ¬, ∧ and ∃ represent linear-temporal-logic
negation, conjunction and existential quantification. The symbols ◻, ◇ and ◦
denote the temporal operators always, eventually and next.
To define the semantics of the logic, and following [AH92, MP92], first we define the
notions of x-variant, time intervals, and timed observations sequences.
Definition 3.4.2. (x-variant) Let σ = c0 ⋅c1 ⋅c2 . . ., and σ′ = d0 ⋅d1 ⋅d2 . . . be sequences
of constraints, and x ∈ V a variable. It is said that σ′ is an x-variant of σ if for each
j ≥ 0, ∃xdj = ∃xcj.
Intuitively, σ′ is x-variant of σ, if it differs from σ by at most the interpretation given
to x, that is, if σ and σ′ are the same except for the information about x.
Definition 3.4.3. (Time Interval) A time interval I is a convex subset of Z. It
has the form [a, b] where a ≤ b and a, b ∈ Z+. The left end-point of an interval I is
denoted by l(I) and the right end-point is denoted by r(I).

A time interval sequence I = I0 ⋅ I1 ⋅ I2 . . . is a finite or infinite sequence of time
intervals.
Definition 3.4.4. (Timed Observation Sequence) A timed observation sequence
ρ = ⟨σ, τ⟩ is a pair consisting of an infinite sequence σ of states, and a monotonic
function τ ∶ N → I that maps every ci ∈ σ to a time interval. A timed observation
sequence is defined as
⟨c1, τ1⟩→ ⟨c2, τ2⟩→ ⟨c3, τ3⟩→ . . .
The formulae of the logic are interpreted over timed observation sequences. This
provides a unique interpretation for the atomic propositions at every time unit (this
follows the approach in [AH90]). Informally, a formula ◦A holds at time unit i in a
timed observation sequence iff A holds at time unit i + 1 (a time unit i consists of a
time interval I). Formally, given a timed observation sequence ρ and a time unit
i ∈ Z+, the satisfaction relation ⟨ρ, i⟩ ⊧ A is defined as follows:
Definition 3.4.5. (Logic Semantics). The timed observation sequence ρ = ⟨σ, τ⟩
satisfies the formula A iff ⟨ρ,1⟩ ⊧ε A for every environment ε ∶ V → Z+, where the
satisfaction relation ⊧ is inductively defined as follows:

⟨ρ, i⟩ ⊧ε c        iff σ(i) ⊧ε c
⟨ρ, i⟩ ⊧ε π        iff ε[T = t] ⊧ε π where t ∈ τ(i)
⟨ρ, i⟩ ⊧ε ¬A       iff ⟨ρ, i⟩ ⊭ε A
⟨ρ, i⟩ ⊧ε A1 ∧ A2  iff ⟨ρ, i⟩ ⊧ε A1 and ⟨ρ, i⟩ ⊧ε A2
⟨ρ, i⟩ ⊧ε ◦A       iff ⟨ρ, i + 1⟩ ⊧ε A
⟨ρ, i⟩ ⊧ε ◻A       iff ∀j ≥ i, ⟨ρ, j⟩ ⊧ε A
⟨ρ, i⟩ ⊧ε ◇A       iff ∃j ≥ i s.t. ⟨ρ, j⟩ ⊧ε A
⟨ρ, i⟩ ⊧ε ∃x.A     iff ⟨ρ′, i⟩ ⊧ε A s.t. ρ′ = ⟨σ2, τ2⟩ and σ2 is an x-variant of σ
A timed observation sequence ρ is a model of the formula A iff ⟨ρ,1⟩ ⊧ε A for any
environment ε, that is, iff ⟨ρ,1⟩ ⊧ A. ⟦A⟧ denotes the collection of all models
of A, i.e. ⟦A⟧ = {ρ ∣ ρ ⊧ A}.
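The untimed core of these satisfaction clauses can be sketched as a recursive checker over finite traces. Everything in the sketch is illustrative: formulae are encoded as nested tuples, stores as sets of atomic propositions, and the timing clause for π and the environment ε are omitted.

```python
# Sketch of the satisfaction relation of definition 3.4.5, untimed and over
# finite traces.  A "store" is a set of atoms; sigma(i) |= c is approximated
# by set membership.  Formulae are tuples: ("atom", c), ("not", A),
# ("and", A, B), ("next", A), ("always", A), ("eventually", A).

def holds(formula, trace, i=0):
    op = formula[0]
    if op == "atom":                       # <rho,i> |= c  iff  sigma(i) |= c
        return formula[1] in trace[i]
    if op == "not":                        # <rho,i> |= ¬A iff <rho,i> |/= A
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "next":                       # A at i+1 (false past the end)
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "always":                     # box A: A at every j >= i
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "eventually":                 # diamond A: A at some j >= i
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op}")

trace = [{"c"}, {"c", "d"}, {"d"}]
print(holds(("eventually", ("atom", "d")), trace))   # True
print(holds(("always", ("atom", "c")), trace))       # False
```

Restricting to finite traces makes ◻ and ◇ bounded quantifications; the infinite-sequence semantics above is what the calculus actually uses.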
3.4.3 Proof System
In this section we give an inference system to reason about the correctness of pro-
cesses, i.e. when a rtcc process satisfies a given formal specification in the logic.
This inference system is given by the following rules.
A process P satisfies a temporal formula A given a timed observation sequence ρ
(a model of A), written P ⊢ A, if the assertion P ⊢ A is derivable in this
inference system.
The inference rule for tell(c) is given by
tell(c) ⊢ c    if ⟨ρ,1⟩ ⊧ T + ΦT (c, σ(1)) ≤ r(τ1)    (3.28)

This rule for the tell operation states that every computation of tell(c) satisfies
the atomic proposition c if, in the current time unit, the time spent posting the
proposition, ΦT (c, σ(1)), plus the current time T is less than or equal to the
maximum time r(τ1) of the time unit.
For the choice operator ∑i∈I when ci do Pi there is the rule
∀i ∈ I . Pi ⊢ Ai
∑i∈I when ci do Pi ⊢ ⋁i∈I (ci ∧ Ai) ∨ ⋀i∈I ¬ci    if ⟨ρ,1⟩ ⊧ T + ΦA(ci, σ(1)) ≤ r(τ1)    (3.29)
If each Pi in the process Q = ∑i∈I when ci do Pi satisfies a formula Ai, then Q must
satisfy: (a) some of the guards ci and their corresponding Ai, or (b) none of the
guards. Additionally, the following temporal formula must hold: the time needed
to ask for all propositions plus the current time must be less than or equal to the
maximum of the time unit.
The rule for parallel composition is given by
P ⊢ A    Q ⊢ B
P ∥ Q ⊢ A ∧ B    (3.30)
If processes P and Q satisfy A and B respectively, then P ∥ Q must satisfy A and
must satisfy B. Therefore it must satisfy A ∧B.
About local behaviour, the rule for this process is
P ⊢ A
local x in P ⊢ ∃xA    (3.31)
In the logic a variable is hidden by quantifying it existentially (∃xA). If the output of
local x in P is the output of P with x hidden, and if P satisfies A, then local x in P
should satisfy A with x hidden.
The rule for weak time-out unless c next P is similar to the rule for next, except
that P may not be executed; it depends on the constraint c. Thus this process
satisfies c or A. That is,
P ⊢ A
unless c next P ⊢ c ∨ A    if ⟨ρ,1⟩ ⊧ T + ΦA(c, σ(1)) ≤ r(τ1)    (3.32)
The inference rule for the strong time-out construct is defined as
P ⊢ A    Q ⊢ B
catch c in P finally Q ⊢ (¬c ∧ A) ∨ (c ∧ B)    if ⟨ρ,1⟩ ⊧ T + ΦA(c, σ(1)) ≤ r(τ1)    (3.33)

This rule states that if P satisfies A and Q satisfies B, then catch c in P finally Q
satisfies either A or B, depending on the presence of the guard c, provided that
the time needed to ask for the proposition plus the current time is less than or
equal to the maximum of the time unit.
The following is the rule for the delta delay construct:
P ⊢ A
delay P until δ ⊢ A    if ⟨ρ,1⟩ ⊧ T ≥ l(τ1) + δ    (3.34)

If a process P satisfies a formula A, then the process delay P until δ will satisfy A
from the moment at which at least δ units have elapsed in the current time unit.
Assuming that P satisfies A, the rule for the next operator says that if P is going to
be executed the next time unit, then it will satisfy A in the next time unit. Hence,
P ⊢ A
next P ⊢ ◦A    (3.35)
For the replication construct, the following rule says that if P is executed in each
time unit and it satisfies A, then the proposition ◻A will be satisfied by !P .
P ⊢ A
!P ⊢ ◻A    (3.36)
Finally, the rule for asynchrony says that if P is executed in some time unit, then it
will satisfy A sometime in the future. Therefore,
P ⊢ A
⋆P ⊢ ◇A    (3.37)
The inference system is summarized in table 3.5.
tell(c) ⊢ c    if ⟨ρ,1⟩ ⊧ T + ΦT (c, σ(1)) ≤ r(τ1)

∀i ∈ I . Pi ⊢ Ai
∑i∈I when ci do Pi ⊢ ⋁i∈I (ci ∧ Ai) ∨ ⋀i∈I ¬ci    if ⟨ρ,1⟩ ⊧ T + ΦA(ci, σ(1)) ≤ r(τ1)

P ⊢ A    Q ⊢ B
P ∥ Q ⊢ A ∧ B

P ⊢ A
local x in P ⊢ ∃xA

P ⊢ A
unless c next P ⊢ c ∨ A    if ⟨ρ,1⟩ ⊧ T + ΦA(c, σ(1)) ≤ r(τ1)

P ⊢ A    Q ⊢ B
catch c in P finally Q ⊢ (¬c ∧ A) ∨ (c ∧ B)    if ⟨ρ,1⟩ ⊧ T + ΦA(c, σ(1)) ≤ r(τ1)

P ⊢ A
delay P until δ ⊢ A    if ⟨ρ,1⟩ ⊧ T ≥ l(τ1) + δ

P ⊢ A
next P ⊢ ◦A

P ⊢ A
!P ⊢ ◻A

P ⊢ A
⋆P ⊢ ◇A
Table 3.5: Proof System of rtcc
3.5 Encoding ntcc into rtcc
The rtcc calculus is an extension of the ntcc calculus. In this sense all systems that
can be modeled in ntcc can also be modeled in rtcc (but not conversely). This
section is devoted to showing this.
In ntcc the notion of time is different from that of rtcc. Each time unit is identified
with the time needed for the processes to terminate their computations. In the rtcc
calculus we consider that each time unit is identified with the duration given by the
environment. Therefore, in ntcc every enabled process computes to its resting point.
In our calculus this is possible in two ways: (1) if we consider a duration of
each time unit large enough to have no worries about time, or (2) if the environment
knows exactly how much time each process takes to reach its resting point. The first
approach is weak and not rigorous because it is not possible to ensure a good choice
of duration. The second choice is more realistic since in definition 3.2.9 we declare
two functions to approximate the time spent by a process in two actions. Then if we
assume a prior knowledge of processes, we can model in rtcc the systems modeled
in ntcc.
On the other hand, the ntcc calculus lacks the notion of resource. This can be solved
by considering a single resource and relating it to the duration of each time unit.
About the processes themselves, if we do not consider the new constructs (strong
time-out and delta delay), most properties of ntcc hold. The reader may observe,
for instance, that the strong time-out construct can stop the execution of a process
at any time (within a time unit). That is, for a given store a process may evolve,
but if that particular store is augmented, it is possible that the signal stopping
the process is now present, so the process may evolve in a different way. This leads
one to think that if a catch-free process (i.e. a process without occurrences of the
strong time-out construct) possibly adds some constraints to the store during its
execution, then it can be executed in the resulting store obtaining the same result.
3.6 Summary and Related Work
In this chapter we described the rtcc calculus. This calculus makes it possible to
model real-time behaviour and specify strong preemption. rtcc is obtained from ntcc by adding
two new constructs and extending the transition system with support for resources,
limited time and true concurrency.
There exist several formalisms that have been extended to support real time. Those
described in this dissertation include ACP in [BB91], CCS in [Fid92] and CSP in
[Dav93]. The π-calculus has been extended with real time in two ways: with true con-
currency semantics in [Pri95] (stochastic π-calculus), and with interleaving semantics
in [LZ02] (the πRT -calculus). Additionally, in the CC family various extensions have
been proposed, for example TScc in [BKGJ97] and tccp in [dBGM01, dBGM04].
The finite-domain constraint system defined in section 3.2.1 can be used as a basis
for a musical constraint system. In [Rod06], MuZa, an extension of this constraint
system, was proposed. MuZa is a constraint system specialized in tonality and harmony,
implemented using the higher-order concurrent constraint language MoZArt [Smo95].
This kind of use of finite-domain constraint system in music was previously proposed
by Geraint A. Wiggins in [Wig98]. Other examples of music constraint systems in-
clude Carla [Cou90], PWConstraints [Lau93], Situation [RLBA98, RB98], CompOZe
[HLZ96], BackTalk [PR95], OMClouds [TAC01, Tru04], f2m [Wri95], Arno [And00],
and Strasheela [AAA05, And07].
A delay declaration similar to delay P until δ was first introduced in a ccp-like
language in [dBGMP97] (it was called δ-CCP). However the concept of delay in this
model is different from our approach. In the δ-CCP calculus, the delay mechanism is
simulated by modifying the ask construct: the agent ask(δ(x)) → A behaves like A
if the current store satisfies the property δ(x) (a user-defined predicate), otherwise
the agent suspends.
Strong preemption primitives of the form catch c in P finally Q are also called
watchdogs. Although in [dBGM01] it is argued that weak preemption is sufficient
in large timed systems (i.e. a delay between the detection of an event and the
consequent action is acceptable), strong preemption primitives were introduced in
another extension of ccp, called Default tcc, in [SJG95]. However, in order to
maintain consistency in this calculus, some assumptions about the future evolution
of the systems had to be considered. Clearly, this is not our case. Notwithstanding,
the lack of monotonicity derived from the construct catch c in P finally Q and the
inclusion of resources in the operational semantics preclude defining the denotations
of processes in terms of fixed points, as is usual in CCP calculi. Instead, the denotations
were built as Chu spaces.
The notion of resource and its use as a limit for processes have been previously
included in various formalisms. Damas P. Gruska in [Gru96, Gru97] presented an
extension of CCS called CCSLP, Calculus of Communicating Systems with Limited
Parallelism. Patrice Bremond-Gregoire in [BGBAL96, BGL97] proposed ACSR,
Algebra of Communicating Shared Resources. Mikael Buchholtz in [BAL02] presented
a process algebra for shared processors.
For a transition system disregarding resources a real-time logic was defined based on
RTTL. The presence of resources in the calculus makes it difficult to express
properties in the logic. The BI logic [PT06] and separation logic [O’H07] are the
resource-handling logics that the author knows of. However, in these logics the
notion of resource is rather different from ours: a resource is a non-reusable good.
4 Applications
“A person does not really understand something until after teaching it to a
computer”
– Donald Knuth
This chapter illustrates some examples of musical models built on the rtcc calculus.
4.1 Maquettes
In [Ago98], the notion of maquette was included in the visual programming language
OpenMusic (OM) (see section 2.1.2). Maquettes are spaces where musical objects,
such as MIDI and AIFF files, synthesis scripts, musical classes, and patches (basic
visual programming units) that may calculate any of the previous objects, can be
organized on a time line. As Carlos Agon mentions in [Ago04], they are OpenMusic
entities for representing, in the same object, patches and scores. A maquette is a
concept of a score where the static description of musical structures and the defini-
tions of dynamic computational processes seamlessly coexist. It can be viewed as
a score or as a set of interconnected processes and solves the problem of combining
the design of high level hierarchical musical structures, the arrangement of musical
material in time, and the specification of musical algorithms.
Properties of a maquette are, among others: graphical representation of musical
structures, the combination of calculations and representations in the same object,
causality and temporal relations between objects, temporal constraints, and hier-
archy in temporal classes. All of these are very useful in the domain of Computer
Assisted Composition.
Since a maquette is basically a static score (the model is divided into two phases:
evaluation and performance [BA06]; that is, once the maquette is evaluated and the
composer starts to play it, no changes can be made until it ends or the user stops it), it
can be viewed as a musical structure M with processes, hierarchies and constraints,
similar to that in [BDC01]:
M = ⟨TO,R⟩

where TO is a set of temporal objects and R is a set of temporal relations. A
temporal object is defined by
TOi = ⟨si, di, ai, pi⟩
where s is the starting time, d is the duration, a is the set of musical attributes and
p is the list of objects associated to TOi (recall that a temporal object can be a
patch, a maquette or some other musical object such as a MIDI or AIFF file). For
example the maquette in figure 4.1 will be defined as follows (example adapted from
[DCA04]):
M = ⟨{A,B,G}, {}⟩

where A = ⟨1, 4, aA, {}⟩, B = ⟨6, 13, aB, {C,D,E,F}⟩, C = ⟨1, 5, aC , {}⟩,
D = ⟨6, 13, aD, {}⟩, E = ⟨6, 4, aE , {}⟩, F = ⟨9, 4, aF , {}⟩, G = ⟨8, 9, aG, {}⟩

In this example there are three temporal objects in the maquette. However, the
object B is another maquette with another four temporal objects. The reader may
notice the hierarchies.
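The structure M = ⟨TO, R⟩ and TOi = ⟨si, di, ai, pi⟩ can be rendered as plain data; the following Python sketch (the class and field names are our own, not part of OpenMusic) shows the nesting of the example maquette:

```python
# Hypothetical rendering of M = <TO, R> and TO_i = <s_i, d_i, a_i, p_i>.
from dataclasses import dataclass, field

@dataclass
class TemporalObject:
    name: str
    start: int                                     # s_i
    duration: int                                  # d_i
    attributes: set = field(default_factory=set)   # a_i
    children: list = field(default_factory=list)   # p_i (nested objects)

# The maquette of figure 4.1 (values adapted from [DCA04]):
C = TemporalObject("C", 1, 5); D = TemporalObject("D", 6, 13)
E = TemporalObject("E", 6, 4); F = TemporalObject("F", 9, 4)
A = TemporalObject("A", 1, 4)
B = TemporalObject("B", 6, 13, children=[C, D, E, F])
G = TemporalObject("G", 8, 9)
M = ([A, B, G], [])            # <TO, R>, with no relations in this sketch

def depth(to):
    """Nesting depth, exposing the hierarchy of embedded maquettes."""
    return 1 + max((depth(c) for c in to.children), default=0)

print(max(depth(t) for t in M[0]))   # 2: B embeds another maquette
```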
Musical time in maquettes is a set of integers (i.e. a discrete line composed of
well-defined points), and the starting time and the duration are such numbers. The
time unit is chosen by the environment, which is, in this particular case, the user
by means of the metric dialog box in figure 4.2. The metric ruler divides the time
in periods depending on the metric supplied by the user (i.e. 4/4, 3/4, etc). When
Figure 4.1: Maquette as a Static Score
the metric ruler is visible, the objects in the Maquette will correct their positions to
the nearest place where their start-time is aligned to a division (or a subdivision, for
example, a 4/4 metric but with a maximal subdivision of 16, meaning a sixteenth
note).
The user organizes the temporal objects in the maquette taking into account their
initial positions. These positions by themselves only define the execution time of
the processes (the execution of a maquette is simply a traversal line which sweeps
the time-line playing the temporal boxes in the order defined by their positions).
However, the maquettes provide a way to constrain a temporal object. The markers
(little red flags over the time line) make it possible to identify the temporal
constraints posted on the objects and the relations between them. Figure 4.3, for
instance, shows a maquette with three markers. The first one constrains the
temporal objects A and B to end
Figure 4.2: Dialog Box for Metric Setting in the Maquettes
and to start at second 2, respectively. The two other markers constrain C to be
executed between the fifth and the ninth second. They also define the starting point
of D and the ending point of E.
The temporal relations R between processes within a maquette are specified by
Allen's relations [All83], shown in figure 4.4.
For example, the Allen relations for the maquette in figure 4.3 are:
A meets B
A before C
D starts C
E finishes C
Note that although the starting point of B is before the one of C in the maquette,
we cannot state the relation B before C because the evaluation of B could extend
its duration.
The model in rtcc of a maquette is represented by a process that launches in parallel
all its temporal objects and their temporal relations.
Maquette def= (∏i∈TO TOi,[ai,Pi]) ∥ ! tell(⋀R)
Figure 4.3: Constraints in Maquettes
Each temporal object is a process of the form:

TOi,[ai,Pi] def= ! tell(ai) ∥ ! when clock ≥ si do
                (tell(si = clock) ∥ catch clock ≥ si + di in Pi)

The set of musical attributes ai can be viewed as the local store of the temporal
object TOi, where the constraints attached to it are placed. Pi represents the process
attached to the temporal object. Notice that neither si nor di are parameters in the
process. Since the starting point and the duration of a temporal object are values
[Figure: graphical timelines on a time axis illustrating, for intervals X and Y, the
relations X before Y, X meets Y, X overlaps Y, X starts Y, X during Y, X contains Y,
X equals Y, X started by Y, X finished by Y, X finishes Y, X overlapped by Y,
X met by Y, and X after Y]

Figure 4.4: The Allen Relations
fixed in the evaluation phase they will be handled as constraints (attributes in ai).
The elements of R are temporal relations (i.e. Allen relations). They are expressed
as constraints:
Before(X,Y )def= sX + dX < sY
Meets(X,Y )def= sX + dX = sY
Overlaps(X,Y )def= (sX < sY ) ∧ (sX + dX < sY + dY ) ∧ (sX + dX > sY )
Starts(X,Y )def= (sX = sY ) ∧ (dX < dY )
During(X,Y )def= (sX > sY ) ∧ (dX < dY )
Equals(X,Y )def= (sX = sY ) ∧ (dX = dY )
Contains(X,Y )def= (sX < sY ) ∧ (dX > dY )
StartedBy(X,Y )def= (sX = sY ) ∧ (dX > dY )
FinishedBy(X,Y )def= (sX < sY ) ∧ (sX + dX = sY + dY )
Finishes(X,Y )def= (sX > sY ) ∧ (sX + dX = sY + dY )
OverlappedBy(X,Y )def= (sX > sY ) ∧ (sX + dX > sY + dY ) ∧ (sX < sY + dY )
MetBy(X,Y )def= sX = sY + dY
After(X,Y )def= sX > sY + dY
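As a quick sanity check, these constraint definitions can be transcribed as Python predicates over (start, duration) pairs. The numeric values below for the objects of figure 4.3 are illustrative guesses, chosen only so that the stated relations hold:

```python
# The relation constraints above as predicates over (s, d) pairs,
# a sketch for checking a maquette's relation set R.

def before(x, y):   return x[0] + x[1] < y[0]
def meets(x, y):    return x[0] + x[1] == y[0]
def starts(x, y):   return x[0] == y[0] and x[1] < y[1]
def finishes(x, y): return x[0] > y[0] and x[0] + x[1] == y[0] + y[1]
def equals(x, y):   return x[0] == y[0] and x[1] == y[1]
def overlaps(x, y):
    return x[0] < y[0] and x[0] + x[1] < y[0] + y[1] and x[0] + x[1] > y[0]

# Hypothetical (s, d) values: A ends exactly where B starts, D shares C's
# start, E shares C's end.
A, B, C, D, E = (0, 2), (2, 4), (5, 4), (5, 2), (7, 2)
print(meets(A, B), before(A, C), starts(D, C), finishes(E, C))
# True True True True
```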
In order to perform a maquette we define a process System that launches it and
starts a clock:

System def= Maquette ∥ CLOCK(0)

where CLOCK is a process that beats time:

CLOCK(i) def= tell(clock = i) ∥ next CLOCK(i + 1)
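The recursion CLOCK(i) = tell(clock = i) ∥ next CLOCK(i + 1) produces one tick per time unit; as a sketch, it is just an infinite counter (tell and next are collapsed into a yield here, so this is an illustration, not a faithful rtcc process):

```python
# CLOCK as an infinite stream of ticks, one per time unit.
from itertools import islice

def clock(i=0):
    while True:
        yield i          # tell(clock = i) in the current time unit
        i += 1           # next CLOCK(i + 1)

print(list(islice(clock(), 5)))   # [0, 1, 2, 3, 4]
```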
The maquette in figure 4.3 can be modeled as follows. Let

TO = {A,B,C,D,E}
R = {Meets(A,B), Before(A,C), Starts(D,C), Finishes(E,C)}

The system is

TOA,[sA=tA,PA] ∥ TOB,[sB=2 ∧ dB=tB ,PB] ∥ TOC,[sC=5 ∧ dC=5,PC] ∥ TOD,[sD=tD,PD] ∥ TOE,[dE=tE ,PE] ∥ ! tell(⋀R)
Then given the parameters of the environment (in the metric settings dialog box)
we may launch the system to evaluate the maquette.
4.2 Interactive Scores
We may think of a generalization of the maquette notion where temporal objects can
change while the score is being performed (those changes take place when an event
occurs). This kind of generalized maquette was proposed by Myriam Desainte-
Catherine and N. Brousse in [DCB03] as interactive scores.
Interactive scores can be defined as static scores augmented with the Allen relations
and interactive points (starting and ending dates). These interactive points are
activated dynamically during the performance of the interactive score. A musical
piece can then be interpreted in several ways by changing the interactive points (they
are triggered in real time).
The event is the main object of an interactive score. An event can be related to
several situations such as the detection of a signal, the triggering of a pedal, or the
changing of pace. With events, the temporal objects are linked to the occurrence
of events, in the sense that some temporal objects may wait for events before being
executed.
An interactive score can be modeled in rtcc the same way as a maquette:

IntScore def= (∏i∈TO TOi,[ai,Pi]) ∥ ! tell(⋀R)

However, the temporal object’s process for interactive scores changes (this model is
a modification of the one made for ntcc in [AADCR06]).

TOi,[ai,Pi] def= ! tell(ai) ∥
                ! unless clock + 1 < si next (tell(clock ≥ si) ∥ Pi) ∥
                ! when clock ≥ si do (catch clock ≥ si + di in Pi) ∥ next(Holdi)

Since it is possible that we do not know the starting date of a temporal object (it
may depend on an event), we guarantee that it starts by using the unless construct.
If the store cannot infer that the current time is less than the starting time of the
temporal object, we set the clock to be greater than that starting time, and therefore
the object will be executed. On the other hand, if the process is executed then the
starting time cannot change anymore. The process Holdi guarantees this by
posting the constraint that the starting time is the same in the next time unit:

Holdi def= when si = t do next tell(si = t)
Events can be modeled as processes performing actions if they are triggered:

Eventi,[ci,∆i] def= when eventi(on) do
                   (Statei,[ci,∆i] ∥
                    ! unless clock + 1 < si next (tell(clock ≥ si)) ∥
                    ! when clock ≥ si do next(Holdi))

The parameters ci and ∆i correspond to the constraints to be posted and the exact
moment when they must be posted, respectively. The constraint can be some starting
points, for instance. The parameter ∆i is useful in those systems where the events
are not always triggered at the beginning of the time unit, but maybe in-between.
Then we can define the process State as follows:

Statei,[ci,∆i] def= delay tell(ci) until ∆i
There are several ways to model the triggering of events. The most intuitive in rtcc
is a process which will be executed sometime in the future:

Triggeri def= ⋆[0,n] tell(eventi(on))

where n is the duration of the whole musical piece.
4.3 Factor Oracle
Interactive systems are systems whose components exchange elements. An
improvisation situation involving more than one musician, for instance, can be
seen as an interactive system where a musician performs some musical composition,
and the others learn his method and features, organize them in a model, and generate
and perform another composition consistent with that of the first musician.
A model using factor oracles for interactive systems such as the one above (machine
improvisation) was proposed in [AD04]. A factor oracle is a finite-state acyclic
automaton introduced in [ACR99] for string matching on fixed texts. If a word is
a sequence w = σ1σ2 . . . σn of letters (belonging to an alphabet Σ), then the factor
oracle is a data structure representing at least all the factors of w. A sequence p ∈ Σ∗
is a factor of w iff w can be written w = qpr, with q, r ∈ Σ∗.
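In string terms, "w can be written w = qpr" just says that p occurs as a contiguous substring of w, which is easy to check directly:

```python
# p is a factor of w iff w = q p r for some q, r in Sigma*,
# i.e. iff p is a contiguous substring of w.
def is_factor(p, w):
    return any(w[i:i + len(p)] == p for i in range(len(w) - len(p) + 1))

print(is_factor("bba", "abbbaa"), is_factor("aba", "abbbaa"))   # True False
```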
The factor oracle is built from a word w = σ1σ2 . . . σn by creating n + 1 nodes (one
for every σi with i ∈ [1..n] plus the initial state), and inserting an arrow labelled
with symbol σi linking node i − 1 with node i (these are called factor links).
Depending on the structure of the word, other arrows from a state i to a state j
(0 ≤ i < j ≤ n) are added to the automaton, constituting the remaining factors.
Finally, backwards links (called suffix links) are added to the automaton from a
state i to a state j (0 ≤ j < i ≤ n), holding the possibility of traversing all the
factors in a single path (suffix links connect repeated patterns of w). Figure 4.5
shows the factor oracle of the word abbbaa. The dot-lined arrows represent the
suffix links.
[Figure: seven states 0–6 with factor links a, b, b, b, a, a along the spine, extra
factor links labelled b and a, and suffix links drawn as dotted backward arrows]

Figure 4.5: Factor Oracle automaton for w = abbbaa.
In [ACR99] an algorithm for constructing the automaton on-line was also presented.
This makes it possible to build a factor oracle by reading the letters of a word one
by one from left to right.
With the on-line construction algorithm many musical applications can be modeled.
For example, in a concurrent learning/improvisation situation it is possible to
construct a machine improviser. We can think of this improviser as a system
consisting of three phases running concurrently: learning, improvisation and
performance. In the learning phase the on-line construction algorithm is used to
build the factor oracle during the performance of a musician. A word can denote
many musical structures; the simplest is, perhaps, a sequence of notes, one note per
beat. Then every note the musician plays is an input for the algorithm and,
ultimately, a new state in the oracle. In the improvisation phase, the system has to
choose what path should be traversed in the current automaton. The automaton
used must be stable, that is, all factor and suffix links up to the last state added
must be present. We assume that the choice is non-deterministic in order to make a
good improvisation.
Finally, in the performance phase the sequence of notes of the chosen path is played.
This model in rtcc is built with processes for each phase. It closely follows that
of [RAD06]. The learning phase is modeled with a process posting constraints which
define the automaton. Variable σi denotes the label of the factor link connecting a
state i − 1 to a new state i.
Process Learni adds the new state i and builds the factor and suffix links with this
new state.

Learni def= ! tell(δi−1,σi = i) ∥ BuildFOi(Si−1)

Variable δk,σi denotes the state reached by following a factor link σi from state k in
the automaton. Variable Si denotes the state reached by following a suffix link from
state i. The process BuildFOi adds all the factor links taking into account the state
i, and also adds the suffix links from it.
BuildFOi(k) def= when k ≥ 0 do
                  unless σi ∈ fk
                  next (! tell(σi ∈ fk) ∥ ! tell(δk,σi = i) ∥ BuildFOi(Sk))
              ∥ when k = −1 do ! tell(Si = 0)
              ∥ when k ≥ 0 ∧ σi ∈ fk do ! tell(Si = k)
Variable fk denotes the set of labels of all currently existing factor links going
from state k. Note that the automaton is built from the latest states back to the
first ones. This can be a problem: the system could be too slow, and the whole
factor oracle might not be built in time (recall that the duration of a time unit
is bounded, and in each time unit i the automaton is bigger than in time unit
i − 1, so the store is also bigger, which normally increases the time spent
posting constraints). We can avoid this by taking the current tick into account:
the process executes while there is time, but if time is up, then the construction
of the factor oracle must continue in the next time unit.
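The scheme just described — do as much of the suffix-chain traversal as the time unit allows and defer the rest — can be sketched in ordinary code. In the hypothetical Python sketch below, the deadline argument stands in for the timeup signal and the tick t of the rtcc definition that follows; all names are ours.

```python
import time

def build_links(delta, sfx, i, sigma, k, deadline):
    """Walk the suffix-link chain for the new state i, adding factor
    links labelled sigma, but give up when `deadline` passes.

    Returns None when the walk finished (and sfx[i] was set), or the
    state k from which the construction must resume in the next time
    unit. A sketch only: the deadline check plays the role of timeup."""
    while k > -1 and (k, sigma) not in delta:
        if time.monotonic() >= deadline:   # time is up: defer the rest
            return k                       # resume from state k later
        delta[(k, sigma)] = i              # add factor link k -> i
        k = sfx[k]
    sfx.append(0 if k == -1 else delta[(k, sigma)])
    return None
```

A caller would invoke build_links once per time unit with that unit's deadline, feeding the returned state back in until it gets None.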
BuildFOi(k) def=   when k ≥ 0 do
                     unless σi ∈ fk next (! tell(σi ∈ fk) ∥ ! tell(δk,σi = i) ∥
                       delay tell(timeup = 1) until t ∥
                       catch timeup = 1 in BuildFOi(Sk) finally next (BuildFOi(k)))
                 ∥ when k = −1 do ! tell(Si = 0)
                 ∥ when k ≥ 0 ∧ σi ∈ fk do ! tell(Si = k)
Variable timeup is the signal specifying that the time unit is nearly over; we
assume that t is the latest tick in the time unit at which the constraint
timeup = 1 can still be posted and queried. A musician is modeled as a process
playing some note p every time unit and giving the signal for building the
automaton at state j (the number of the current note).

Musicianj def= ∑p∈Σ (! tell(σj = p) ∥ tell(go = j) ∥ next(Musicianj+1))
The following improvisation process Impro defines the improvisation phase from
state k of the automaton by choosing non-deterministically whether to play a note
(denoted by outputting the symbol σk+1) or to follow the suffix link Sk and then
play a note (also chosen non-deterministically and denoted by a symbol σ ∈ fSk).
When Sk = −1 there is only one choice: to play the note symbolized by σk+1.
Impro(k) def=   when Sk = −1 do next (tell(out = σk+1) ∥ Impro(k + 1))
              ∥ ( when Sk ≥ 0 do next (tell(out = σk+1) ∥ Impro(k + 1))
                + when Sk ≥ 0 do next (∑σ∈Σ when σ ∈ fSk do (tell(out = σ) ∥ Impro(δSk,σ))) )
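The non-deterministic choice made by Impro can be simulated as a random walk over the oracle data structures. Below is a hypothetical Python sketch; the representation (word, delta, sfx) and the 0.5 jump probability are our assumptions, not part of the rtcc model.

```python
import random

def improvise(word, delta, sfx, steps, rng, k=0):
    """Random walk on a factor oracle, simulating process Impro.

    word[k] is the symbol labelling the factor link k -> k+1, delta maps
    (state, symbol) -> state, and sfx[k] is the suffix link of state k.
    At state k we either play the original continuation word[k] (advance
    to k + 1) or, when a suffix link exists, jump through it and follow
    one of the factor links leaving sfx[k]."""
    out = []
    last = len(word)                      # number of the last state
    for _ in range(steps):
        can_advance = k < last
        jump = sfx[k] >= 0 and (not can_advance or rng.random() < 0.5)
        if jump:
            s = sfx[k]                    # follow the suffix link S_k ...
            options = [(sym, j) for (st, sym), j in delta.items() if st == s]
            sym, k = rng.choice(options)  # ... then one factor link from it
        else:
            sym = word[k]                 # play sigma_{k+1} and advance
            k += 1
        out.append(sym)
    return out
```

Since every suffix link points to an earlier state, which always has at least one outgoing factor link, the walk can never get stuck.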
Since the learning and the improvisation processes run concurrently, we must
guarantee that the improvisation works on a completely built subgraph. For this
we define a process that synchronizes the two phases on Si.
Synci def=   when Si−1 ≥ −1 ∧ go ≥ i do (Learni ∥ next(Synci+1))
           ∥ unless Si−1 ≥ −1 ∧ go ≥ i next(Synci)
Finally, the whole system is modeled by launching all processes and initializing
the first state. The parameter n of the process System represents the number of
notes that must be played before starting the improvisation phase.

Systemn def= ! tell(S0 = −1) ∥ Musician1 ∥ Sync1 ∥ ! when go = n do Impro(n)
4.4 Summary and Related Work
In this chapter we illustrated the potential of the rtcc calculus with a set of
musical applications.
A study of how to integrate a more robust constraint system into the Maquettes
was made in [Val06]. The need for improving the constraint system was demonstrated
by a comparison with Boxes [Beu00], a software for musical composition which
allows composing a musical piece by posting temporal constraints between boxes
representing musical structures.
As the reader saw in section 2.7.4, previous models of the interactive scores and
the factor oracle were developed in ntcc. The models presented here are similar to
those (after all, rtcc is an extension of ntcc and retains all of its constructs),
except that the models in this section are more realistic in the sense that they
always consider the execution time of processes.
5 Concluding Remarks and Future Work
“Remember, Information is not knowledge; Knowledge is not wisdom; Wisdom is
not truth; Truth is not beauty; Beauty is not love; Love is not music;
Music is the best.”
– Frank Zappa
In this document we considered formal models for formalizing various musical
interaction scenarios, from applications to properties. There are three main
contributions in this thesis. First, we discussed the limitations of existing
formal models (calculi and algebras) for constructing musical models, particularly
when real-time situations are at stake. Secondly, in order to overcome these
limitations, we developed a new calculus called rtcc, based on ntcc. Finally, we
showed the applicability of rtcc to various musical models.
Recently, we have seen a significant increase in formal models, particularly
process calculi, proposed in many areas of knowledge. Music is no exception. With
formal theories, composers and musicologists are invited to use tools based on
rigorous principles, rules, equations, theorems, models, and languages, in order
to solve specific musical problems, to build musical theories, to synchronize
devices in music interaction settings, to build musical programming languages, to
prove musical properties, to construct complex musical material, and much more.
In this dissertation we have presented some important calculi and algebras used in
music, and illustrated various formalisms that have found use in practical musical
situations. Nevertheless, many of these formalisms were not originally intended
for music, so their applicability in many musical environments turns out to be
awkward. We have shown that some of the models described, despite their high level
of maturity, lack explicit notions crucial in music such as processes, time,
constraints, and concurrency. Temporal concurrent constraint calculi such as ntcc,
designed to model interactive systems, fit better in music applications where
processes interact in complex ways. Nevertheless, we showed their limitations
concerning the abstract notion of time they use.
We proposed rtcc, a calculus of the ccp family and an extension of ntcc, as a
formalism for modeling real-time behaviour and for specifying strong preemption
and default behaviour; it has a more realistic notion of time and is therefore
more suitable for modeling musical systems. rtcc is obtained from ntcc by adding
new constructs and extending the transition system with support for expressing
amounts of resources and time allowance. Although there have been other proposals
of real-time calculi in the ccp family, such as tccp and TScc, the approach of
rtcc, considering resources and a precise notion of time in a single framework,
is one step ahead. In computer science, talking about real time always involves
the running time of processes, which depends on the place where they are executed.
With this in mind, the development of a formal model to support real-time systems
such as those of musical improvisation led us to build a theory combining a notion
of time as a set of intervals (time units) and a set of points in those intervals
(ticks), with a notion of resources as a natural number bounding the concurrency
of processes.
The new construct catch c in P finally Q was born of the need to have a way to
stop a process, to guarantee real-time behaviour, and to express default
behaviour. The usefulness of this process can be seen in improvisation
situations. For example, we may want all musicians to play within the context of
a given harmonic pattern, but also allow any improviser to depart from it at some
point. When this happens, we would expect the other musicians to stop their
current interpretation P not abruptly but with a "gentle" bridge Q. The other new
construct, delay P until δ, arose from the catch process for two purposes:
(1) given the proposed transition system, where two processes can be executed at
the same time (true concurrency), if there were no way to delay the execution of
a process within a time unit, every process would be executed simultaneously
(assuming there are enough resources); (2) it makes the calculus homogeneous with
respect to the notion of time, that is, we can now delay a process for some given
ticks or for some given time units.
The proposed formalism includes the notion of a precise, given duration of each
time unit. This is very important in musical models because the actual time at
which some event occurs must usually be controlled with precision. Also important
is the number of available resources, since the temporal behaviour of a system can
change depending on it. The transition system was modified accordingly by
incorporating into the configurations a third component representing the time
available for processes to be executed. The transition relation was also extended
with an action representing the resources consumed in the transition. These new
features implied losing various properties that usually hold in ccp calculi,
which meant that the usual semantics of ccp, based on quiescent points, could not
be used. In ntcc three observable behaviours of processes are defined:
input-output, default output and strongest postcondition. The default output is
awkward in rtcc, since it interprets the execution of the process with an
irrelevant input. In rtcc we can observe three things of a process: the store,
the resources and the time. We may think of an irrelevant input of store as true,
and of an irrelevant input of resources as one (every process can be executed
with that single resource). However, it is difficult to conceive an irrelevant
input of time. On the other hand, the strongest postcondition is closely related
to the notion of quiescent input sequences, that is, those sequences on input of
which a process output is always the same. For rtcc we may arguably think of a
quiescent point of a store or of a resource, but not of time. Therefore, the
approach of quiescent points was not used for the denotational semantics.
Instead, the denotations were built as Chu spaces. This has proved convenient,
since the algebra associated with this model of concurrency contains the
necessary operations to manipulate processes, to prove properties, and to obtain
a correspondence with the operational semantics. In fact, we have shown that full
abstraction holds for this approach. Finally, the inclusion of resources and time
in the operational semantics led us to consider a subset of the transition system
without resources in order to define a real-time logic.
We plan to continue this research in the following directions:

Resources in the logic. As we saw in this document, resources are a crucial
notion in the rtcc calculus. In fact, resources have always been central in
concurrent programming; see, for instance, the work of E. W. Dijkstra and
C.A.R. Hoare on the problem of resource control, collected in [Han02]. We
consider it important to include such a notion in the logic. We have also seen
that the absence of resources in the logic affects its notion of time. Although
we defined the concepts of time interval and timed observation sequence, there is
no representation of an actual "passing" of time. Resources may solve this, since
they are the tool for deciding how processes are executed.

An interpreter. An implementation of a process simulator or interpreter for rtcc
could be a convenient way to better visualize the behaviour of musical systems
and to make it possible to listen to the audio results of the models in real
time. Since many musicians state that music is about listening, this could be a
way to show the power of formal models: we would be able to model the scenario in
the calculus, transcribe it into the interpreter, listen to the results and, most
importantly, prove properties of the system.

More complex musical systems. We are interested in using the calculus presented
here to model complex improvisation systems and other music systems. The author
believes that this formalism is capable of expressing a great variety of musical
models, including musical environments with consonance and dissonance, complex
machine learning/improvisation, and multimedia semantic interaction.
Bibliography
[AAA05] Torsten Anders, Christina Anagnostopoulou, and Michael Alcorn. Strasheela: Design and usage of a music composition environment based on the Oz programming model. In Peter Van Roy, editor, Multiparadigm Programming in Mozart/OZ: Second International Conference (MOZ 2004), Charleroi, Belgium, October 7-8, 2004, volume 3389 of Lecture Notes in Computer Science. Springer-Verlag, 2005.
[AADCR06] Antoine Allombert, Gerard Assayag, Myriam Desainte-Catherine, and Camilo Rueda. Concurrent constraint models for specifying interactive scores. In Third Sound and Music Computing Conference (SMC'06), May 2006.
[AARD98] Gerard Assayag, Carlos Augusto Agon, Camilo Rueda, and Olivier Delerue. Objects, time and constraints in OpenMusic. In ICMA, editor, International Computer Music Conference (ICMC'98), University of Michigan, Ann Arbor, USA, 1998.
[ACR99] Cyril Allauzen, Maxime Crochemore, and Mathieu Raffinot. Factor oracle: A new structure for pattern matching. In Conference on Current Trends in Theory and Practice of Informatics, pages 295–310, 1999.
[AD04] Gerard Assayag and Shlomo Dubnov. Using factor oracles for machine improvisation. In Soft Computing, volume 8, 2004.
[ADQ+98] Gloria Alvarez, Juan Francisco Diaz, Luis O. Quesada, Frank D. Valencia, Gerard Assayag, and Camilo Rueda. Pico: A calculus of concurrent constraint objects for musical applications. In European Congress on Artificial Intelligence (ECAI'98), Brighton, 1998.
[Ago98] Carlos Augusto Agon. OpenMusic: Un Langage Visuel pour la Composition Musicale Assistee par Ordinateur. PhD thesis, Paris VI, 1998.
[Ago04] Carlos Augusto Agon. Mixing visual programs and music notation. In Perspectives in Mathematical and Computational Music Theory. Electronic Publishing Osnabruck, 2004.
[AH90] Rajeev Alur and Thomas A. Henzinger. Real-time logics: Complexity and expressiveness. In Fifth Annual Symposium on Logic in Computer Science (LICS), pages 390–401. IEEE Computer Society Press, 1990.
[AH92] Rajeev Alur and Thomas A. Henzinger. Logics and models of real time: A survey. In Proceedings of the Real-Time: Theory in Practice, REX Workshop, pages 74–106, London, UK, 1992. Springer-Verlag.
[All83] James F. Allen. Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11):832–843, November 1983.
[ALT95] Roberto M. Amadio, Lone Leth, and Bent Thomsen. From a concurrent λ-calculus to the π-calculus. In 10th International Symposium on Fundamentals of Computation Theory (FCT'95), volume 965 of Lecture Notes in Computer Science, pages 106–115, London, UK, 1995. Springer-Verlag.
[And00] Torsten Anders. Arno: Constraints programming in common music. In ICMA, editor, International Computer Music Conference (ICMC'00), Berlin, Germany, 2000.
[And07] Torsten Anders. Composing Music by Composing Rules: Design and Usage of a Generic Music Constraint System. PhD thesis, Queen's University Belfast, 2007.
[ARL+99] Gerard Assayag, Camilo Rueda, Mickael Laurson, Carlos Augusto Agon, and Olivier Delerue. Computer-assisted composition at IRCAM: from PatchWork to OpenMusic. Computer Music Journal, 23(3):59–72, Fall 1999.
[Ass98] Gerard Assayag. Computer assisted composition today. In 1st Symposium on Music and Computers, Corfu, October 1998.
[BA06] Jean Bresson and Carlos Augusto Agon. Temporal control over sound synthesis processes. In Sound and Music Computing, Marseille, 2006.
[BAL02] Mikael Buchholtz, Jacob Andersen, and Hans Henrik Løvengreen. Towards a process algebra for shared processors. Electronic Notes in Theoretical Computer Science, 52(3), 2002.
[Bar96] Eli Barzilay. Booms object oriented music system. Master's thesis, Ben-Gurion University of the Negev, December 1996.
[BB90] Gerard Berry and Gerard Boudol. The chemical abstract machine. In 17th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'90), pages 81–94, San Francisco, California, United States, 1990. ACM.
[BB91] J.C.M. Baeten and Jan A. Bergstra. Real time process algebra. Formal Aspects of Computing, 3(2):142–188, 1991.
[BBE02] Mira Balaban, Eli Barzilay, and Michael Elhadad. Abstraction as a means for end-user computing in creative applications. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 32(6):640–653, November 2002.
[BDC01] Anthony Beurive and Myriam Desainte-Catherine. Representing musical hierarchies with constraints. Musical Constraints Workshop (CP'2001), December 2001.
[Ber93] Gerard Berry. Preemption in concurrent systems. In Proceedings of the 13th Conference on Foundations of Software Technology and Theoretical Computer Science, pages 72–93, London, UK, 1993. Springer-Verlag.
[Beu00] Anthony Beurive. Un logiciel de composition musicale combinant un modele spectral, des structures hierarchiques et des contraintes. In Journees d'Informatique Musicale (JIM'00), Bordeaux, May 2000.
[BG92] Gerard Berry and Georges Gonthier. The Esterel synchronous programming language: Design, semantics, implementation. Science of Computer Programming, 19(2):87–152, 1992.
[BGBAL96] Patrice Bremond-Gregoire, Hanene Ben-Abdallah, and Insup Lee. Ordering processes in a real-time process algebra. In 3rd International Workshop on Real-Time Systems (AMAST'96), Salt Lake City, Utah, USA, March 1996.
[BGL97] Patrice Bremond-Gregoire and Insup Lee. A process algebra of communicating shared resources with dense time and priorities. Theoretical Computer Science, 189(1–2):179–219, December 1997.
[BHMR98] Francisco Bueno, Manuel Hermenegildo, Ugo Montanari, and Francesca Rossi. Partial order and contextual net semantics for atomic and locally atomic cc programs. Science of Computer Programming, 30(1–2):51–82, January 1998.
[BJGK96] Lubos Brim, Jean-Marie Jacquet, David Gilbert, and Mojmır Kretinsky. A process algebra for synchronous concurrent constraint programming. In Michael Hanus, editor, Fifth International Conference on Algebraic and Logic Programming (ALP'96), Lecture Notes in Computer Science, pages 24–37, Aachen, Germany, June 1996. Springer-Verlag.
[BK84] Jan A. Bergstra and Jan Willem Klop. Process algebra for synchronous communication. Information and Control, 60(1–3):109–137, 1984.
[BKGJ97] Lubos Brim, Mojmır Kretinsky, David Gilbert, and Jean-Marie Jacquet. Temporal synchronous concurrent constraint programming. In 1st International Workshop on Constraint Programming for Time Critical Systems (COTIC'97), pages 35–50, Pisa, Italy, October 1997.
[BMN00] Pierfrancesco Bellini, Riccardo Mattolini, and Paolo Nesi. Temporal logics for real-time system specification. ACM Computing Surveys, 32(1):12–42, March 2000.
[CA96] Luca Cardelli and Martin Abadi. A Theory of Objects. Springer, 1996.
[Cas98] Giuseppe Castagna. Foundation of object-oriented programming. Tutorial Notes, Laboratoire d'Informatique de l'Ecole Normale Superieure, France, February 1998.
[Che89] Marc Chemillier. Structure et Methode Algebriques en Informatique Musicale. PhD thesis, Paris VII, 1989.
[Che95] Marc Chemillier. La musique de la harpe. In Eric De Dampierre, editor, Une Esthetique Perdue, pages 99–208. Presses de l'Ecole Normale Superieure, Paris, 1995.
[Che05] Marc Chemillier. Mathematiques de la musique en afrique centrale. Pour la Science, January 2005.
[Chu85] Alonzo Church. The Calculi of Lambda Conversion. Princeton University Press, 1985.
[Cou90] Francis-Xavier Courtot. A constraint based logic program for generating polyphonies. In ICMA, editor, International Computer Music Conference (ICMC'90), University of Glasgow, Scotland, 1990.
[Dan84] Roger B. Dannenberg. Arctic: A functional language for real-time control. In ACM Symposium on LISP and Functional Programming (LFP'84), pages 96–103, Austin, Texas, United States, 1984. ACM Press.
[Dan89] Roger B. Dannenberg. The Canon score language. Computer Music Journal, 13(1):47–56, Spring 1989.
[Dav93] Jim Davies. Specification and Proof in Real-Time CSP. PhD thesis, University of Oxford, 1993.
[DBdC+98] Francois Dechelle, Riccardo Borghesi, Maurizio de Cecco, Enzo Maggi, Butch Rovan, and Norbert Schnell. jMax: a new Java-based editing and control system for real-time musical applications. In ICMA, editor, International Computer Music Conference (ICMC'98), University of Michigan, Ann Arbor, USA, 1998.
[dBGM00] Frank S. de Boer, Maurizio Gabbrielli, and Maria Chiara Meo. A timed concurrent constraint language. Information and Computation, 161(1):45–83, 2000.
[dBGM01] Frank S. de Boer, Maurizio Gabbrielli, and Maria Chiara Meo. A temporal logic for reasoning about timed concurrent constraint programs. In TIME, pages 227–233, 2001.
[dBGM04] Frank S. de Boer, Maurizio Gabbrielli, and Maria Chiara Meo. Proving correctness of timed concurrent constraint programs. ACM Transactions on Computational Logic, 5(4):706–731, October 2004.
[dBGMP97] Frank S. de Boer, Maurizio Gabbrielli, Elena Marchiori, and Catuscia Palamidessi. Proving concurrent constraint programs correct. ACM Transactions on Programming Languages and Systems (TOPLAS), 19(5):685–725, September 1997.
[dBPP95] Frank S. de Boer, Alessandra Di Pierro, and Catuscia Palamidessi. Nondeterminism and infinite computations in constraint programming. In Selected Papers of the Workshop on Topology and Completion in Semantics, volume 151 of Theoretical Computer Science, pages 37–78, Chartres, France, 1995. Elsevier Science Publishers B. V.
[DC96] Myriam Desainte-Catherine. The hierarchical structure may improve the resolution of musical problems. In 3emes Journees d'Informatique Musicale (JIM'96), Ile de Tatihou, Basse Normandie, France, May 1996.
[DCA04] Myriam Desainte-Catherine and Antoine Allombert. Specification of temporal relations between interactive events. In Sound and Music Computing (SMC'04), Paris, France, October 2004. Ircam.
[DCB03] Myriam Desainte-Catherine and N. Brousse. Towards a specification of musical interactive pieces. In Colloquium on Musical Informatics (CIM'03), Firenze, Italy, May 2003.
[DFV91] Roger B. Dannenberg, Christopher Lee Fraley, and Peter Velikonja. Fugue: A functional language for sound synthesis. IEEE Computer, 24(7):36–42, July 1991.
[Dob97] Christopher Dobrian. MSP: The Documentation. Cycling '74 and IRCAM, 1997.
[DRV99] Juan Francisco Diaz, Camilo Rueda, and Frank Valencia. A calculus for concurrent processes with constraints. CLEI Electronic Journal, 1(2), May 1999.
[Eme90] E. Allen Emerson. Temporal and modal logic. Handbook of Theoretical Computer Science (volume B): Formal Models and Semantics, pages 995–1072, October 1990.
[Fid92] Colin J. Fidge. A constraint-oriented real-time process calculus. In M. Diaz and R. Groz, editors, IFIP TC6/WG6.1 Fifth International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols (FORTE'92), pages 355–370, Lannion, France, October 1992.
[FRS01] Francois Fages, Paul Ruet, and Sylvain Soliman. Linear concurrent constraint programming: Operational and phase semantics. Information and Computation, 165(1):14–41, February 2001.
[GGBM91] Paul Le Guernic, Thierry Gautier, Michel Le Borgne, and Claude Le Maire. Programming real-time applications with SIGNAL. In Another Look at Real Time Programming, Proceedings of the IEEE, volume 79, pages 1321–1336, September 1991.
[Gib76] Peter Gibbins. Logics as models of music. The British Journal of Aesthetics, 16(2):157–160, Spring 1976.
[GJP99] Vineet Gupta, Radha Jagadeesan, and Prakash Panangaden. Stochastic processes as concurrent constraint programs. In 26th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'99), pages 189–202, San Antonio, Texas, United States, 1999. ACM.
[GJS96] Vineet Gupta, Radha Jagadeesan, and Vijay A. Saraswat. Truly concurrent constraint programming. In 7th International Conference on Concurrency Theory (CONCUR'96), volume 1119 of Lecture Notes in Computer Science, pages 373–388, London, UK, 1996. Springer-Verlag.
[GJS97] Vineet Gupta, Radha Jagadeesan, and Vijay A. Saraswat. Probabilistic concurrent constraint programming. In 8th International Conference on Concurrency Theory (CONCUR'97), volume 1243 of Lecture Notes in Computer Science, pages 243–257, London, UK, 1997. Springer-Verlag.
[GJS98] Vineet Gupta, Radha Jagadeesan, and Vijay A. Saraswat. Computing with continuous change. Science of Computer Programming, 30(1–2):3–49, 1998.
[GO03] Etienne Gaudrain and Yann Orlarey. A Faust Tutorial. Grame, Centre National de Creation Musicale, September 2003.
[Gru96] Damas P. Gruska. Process algebra for limited parallelism. In Concurrency Specification and Programming (CS&P'96), pages 61–74, Humboldt University, Berlin, 1996.
[Gru97] Damas P. Gruska. Bounded concurrency. In 11th International Symposium on Fundamentals of Computation Theory (FCT'97), volume 1279 of Lecture Notes in Computer Science, pages 198–209, London, UK, 1997. Springer-Verlag.
[Gun91] Jeremy Gunawardena. Geometric logic, causality and event structures. In 2nd International Conference on Concurrency Theory (CONCUR'91), volume 527 of Lecture Notes in Computer Science, pages 266–280, London, UK, 1991. Springer-Verlag.
[Gup94] Vineet Gupta. Chu Spaces: A Model of Concurrency. PhD thesis, Stanford University, August 1994.
[Han99] Peter Hanappe. Design and Implementation of an Integrated Environment for Music Composition and Synthesis. PhD thesis, Paris VI, 1999.
[Han02] Per Brinch Hansen, editor. The Origin of Concurrent Programming: From Semaphores to Remote Procedure Calls. Springer, May 2002.
[HCRP91] Nicholas Halbwachs, Paul Caspi, Pascal Raymond, and Daniel Pilaud. The synchronous data-flow programming language LUSTRE. In Another Look at Real Time Programming, Proceedings of the IEEE, volume 79, pages 1305–1320, September 1991.
[Hen91] Thomas A. Henzinger. The Temporal Specification and Verification of Real-Time Systems. PhD thesis, Stanford University, August 1991.
[Hir95] Keiji Hirata. Towards formalizing jazz piano knowledge with a deductive object-oriented approach. In Artificial Intelligence and Music (IJCAI'95), pages 77–80, August 1995.
[HL94] Patricia Hill and John Lloyd. The Godel Programming Language. The MIT Press, April 1994.
[HLZ96] Martin Henz, Stefan Lauer, and Detlev Zimmermann. COMPOzE — intention-based music composition through constraint programming. In 8th IEEE International Conference on Tools with Artificial Intelligence, pages 118–121, Toulouse, France, November 1996. IEEE Computer Society Press.
[HMGW96] Paul Hudak, Tom Makucevich, Syam Gadde, and Bo Whong. Haskore music notation - an algebra of music. Journal of Functional Programming, 6(3):465–483, 1996.
[Hoa78] C. A. R. Hoare. Communicating sequential processes. Communications of the ACM, 21(8):666–677, August 1978.
[Hoa85] C. A. R. Hoare. Communicating Sequential Processes. Prentice-Hall International Series in Computer Science. Prentice Hall, April 1985.
[HP85] David Harel and Amir Pnueli. On the development of reactive systems. In Krzysztof R. Apt, editor, Logics and Models of Concurrent Systems, volume 13 of NATO ASI Series F: Computer and Systems Sciences, pages 477–498, New York, NY, USA, 1985. Springer-Verlag New York, Inc.
[HSD95] Pascal Van Hentenryck, Vijay Saraswat, and Yves Deville. Design, implementation, and evaluation of the constraint language cc(fd). In Andreas Podelski, editor, Constraint Programming: Basics and Trends, Chatillon Spring School, Chatillon-sur-Seine, France, May 16-20, 1994, Selected Papers, volume 910 of Lecture Notes in Computer Science, pages 293–316. Springer-Verlag, 1995.
[Kap99] Hans G. Kaper. Formalizing the concept of sound. In ICMA, editor, International Computer Music Conference (ICMC'99), Beijing, China, 1999.
[LA85] Gareth Loy and Curtis Abbott. Programming languages for computer music synthesis, performance, and composition. ACM Computing Surveys, 17(2):235–265, June 1985.
[Lau93] Mikael Laurson. PWConstraints. In G. Haus and I. Pighi, editors, X Colloquio di Informatica Musicale, pages 332–335. Associazione di Informatica Musicale Italiana, 1993.
[Lau96] Mikael Laurson. PatchWork: A Visual Programming Language and some Musical Applications. PhD thesis, Sibelius Academy, Helsinki, 1996.
[LD89] Mikael Laurson and Jacques Duthen. PatchWork: a graphic language in PreForm. In ICMA, editor, International Computer Music Conference (ICMC'89), pages 172–175, Ohio State University, Columbus, Ohio, USA, 1989.
[LZ02] Jeremy Y. Lee and John Zic. On modeling real-time mobile processes. In Twenty-fifth Australasian Conference on Computer Science (ACSC'02), volume 17 of Conferences in Research and Practice in Information Technology, pages 139–147, Melbourne, Victoria, Australia, 2002. Australian Computer Society, Inc.
[MA07] Guerino Mazzola and Moreno Andreatta. Diagrams, gestures and formulae in music. In Journal of Mathematics and Music, volume 1, pages 23–46, March 2007.
[Mal98] Mikhail Malt. Modeles mathematiques et composition assistee par ordinateur. PhD thesis, EHESS-IRCAM, 1998.
[Mar97] Alan Marsden. MTT - a music theory tool. In Journees d'Informatique Musicale (JIM'97), Bibliotheque de la Part-Dieu, Lyon, France, June 1997.
[Mar00] Alan Marsden. Representing Musical Time: A Temporal-Logic Approach. Swets & Zeitlinger, 2000.
[Mil80] Robin Milner. A Calculus of Communicating Systems. Lecture Notes in Computer Science. Springer-Verlag, 1980.
[Mil93] Robin Milner. The polyadic π-calculus: a tutorial. In F. L. Bauer, W. Brauer, and H. Schwichtenberg, editors, Logic and Algebra of Specification, pages 203–246. Springer-Verlag, 1993.
[Mil99] Robin Milner. Communicating and Mobile Systems: The π-calculus. Cambridge University Press, Cambridge, United Kingdom, 1999.
[MMM+69] Max Vernon Mathews, Joan E. Miller, F. Richard Moore, John R. Pierce, and J. C. Risset. The Technology of Computer Music. The MIT Press, June 1969.
[MP92] Zohar Manna and Amir Pnueli. The Temporal Logic of Reactive and Concurrent Systems. Springer-Verlag, New York, USA, 1992.
[MPW92] Robin Milner, Joachim Parrow, and David Walker. A calculus of mobile processes, parts I and II. Information and Computation, 100(1):1–40, September 1992.
[Nai82] Lee Naish. An introduction to MU-Prolog. Technical Report 82/2, The University of Melbourne, Melbourne, Australia, 1982.
[NM95] Joachim Niehren and Martin Muller. Constraints for free in concurrent computation. In 1995 Asian Computing Science Conference on Algorithms, Concurrency and Knowledge (ACSC'95), volume 1023 of Lecture Notes in Computer Science, pages 171–186, London, UK, 1995. Springer-Verlag.
[NPW79] Mogens Nielsen, Gordon D. Plotkin, and Glynn Winskel. Petri nets,event structures and domains. In International Sympoisum on Seman-tics of Concurrent Computation, volume 70 of Lecture Notes In Com-puter Science, pages 266–284, London, UK, 1979. Springer-Verlag.
[OFL97] Yann Orlarey, Dominique Fober, and Stephane Letz. L’environnementde composition musicale elody. In Journees d’Informatique Musicale(JIM’97), Bibliotheque de la Part-Dieu, Lyon - France, June 1997.
[OFL02] Yann Orlarey, Dominique Fober, and Stephane Letz. An algebra forblock diagram languages. In ICMA, editor, International ComputerMusic Conference (ICMC’02), pages 542–547, Gothenburg, Sweden,2002.
[OFL04] Yann Orlarey, Dominique Fober, and Stephane Letz. Syntactical andsemantical aspects of faust. Soft Computing - A Fusion of Foundations,Methodologies and Applications, 8(9):623–632, September 2004.
[OFLB94] Yann Orlarey, Dominique Fober, Stephane Letz, and Mark Bilton.Lambda calculus and music calculi. In ICMA, editor, InternationalComputer Music Conference (ICMC’94), pages 243–250, DIEM, Dan-ish Institute of Electroacoustic Music, Denmark, 1994.
[O’H07] Peter W. O’Hearn. Resources, concurrency, and local reasoning. The-oretical Computer Science, 375(1–3):271–307, May 2007.
[OPV07] Carlos Olarte, Catuscia Palamidessi, and Frank Valencia. Universaltimed concurrent constraint programming. In Logic Programming,volume 4670 of Lecture Notes in Computer Science, pages 464–465.Springer-Verlag, August 2007.
[OR05] Carlos Olarte and Camilo Rueda. A stochastic non-deterministic tem-poral concurrent constraint calculus. In SCCC ’05: Proceedings of theXXV International Conference on The Chilean Computer Science So-ciety, page 30, Washington, DC, USA, 2005. IEEE Computer Society.
[Ost89] Jonathan S. Ostroff. Temporal Logic for Real-Time Systems. John Wiley & Sons Inc, 1989.
[OW87] Jonathan S. Ostroff and W. Murray Wonham. Modeling and verifying real-time embedded computer systems. In 8th IEEE Real-Time Systems Symposium, pages 124–132, Los Alamitos, CA, USA, 1987. IEEE Computer Society Press.
[Pae93] Andreas Paepcke. Object-oriented Programming: The CLOS Perspective. The MIT Press, Cambridge, MA, USA, 1993.
[Pet62] Carl A. Petri. Fundamentals of a theory of asynchronous information flow. In Cicely M. Popplewell, editor, International Federation for Information Processing Congress (IFIP Congress 62), pages 386–390, Munich, Germany, August 27–September 1, 1962.
[Plo81] Gordon D. Plotkin. A structural approach to operational semantics. Technical Report DAIMI FN-19, Computer Science Department, University of Aarhus, Denmark, September 1981.
[Pnu77] Amir Pnueli. The temporal logic of programs. In 18th IEEE Symposium on the Foundations of Computer Science (FOCS’77), pages 46–57. IEEE Computer Society Press, 1977.
[PR95] François Pachet and Pierre Roy. Mixing constraints and objects: A case study in automatic harmonization. In I. Graham, B. Magnusson, and J.-M. Nerson, editors, TOOLS-Europe’95, pages 119–126, Versailles, France, 1995. Prentice-Hall, Hertfordshire, UK.
[Pra86] Vaughan R. Pratt. Modeling concurrency with partial orders. International Journal of Parallel Programming, 15(1):33–71, February 1986.
[Pra91] Vaughan R. Pratt. Modeling concurrency with geometry. In 8th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL ’91), pages 311–322, Orlando, Florida, United States, 1991.
[Pra95] Vaughan R. Pratt. Chu spaces and their interpretation as concurrent objects. In J. van Leeuwen, editor, Computer Science Today, volume 1000 of Lecture Notes in Computer Science, pages 392–405. Springer-Verlag, July 1995.
[Pra99] Vaughan R. Pratt. Chu spaces. Course Notes for the School on Category Theory and Applications. Coimbra, Portugal, July 1999.
[Pra02] Vaughan R. Pratt. Event-state duality: The enriched case. In 13th International Conference on Concurrency Theory (CONCUR’02), volume 2421 of Lecture Notes in Computer Science, pages 41–56, London, UK, 2002. Springer-Verlag.
[Pra03] Vaughan R. Pratt. Transition and cancellation in concurrency and branching time. Mathematical Structures in Computer Science, 13(4):485–529, 2003.
[Pri95] Corrado Priami. Stochastic π-calculus. The Computer Journal, 38(7):578–589, 1995.
[PT06] David Pym and Chris Tofts. A calculus and logic of resources and processes. Formal Aspects of Computing, 18(4):495–517, November 2006.
[Puc88] Miller Puckette. The patcher. In ICMA, editor, International Computer Music Conference (ICMC’88), GMIMIK, Cologne, Germany, 1988.
[PV01] Catuscia Palamidessi and Frank Valencia. A temporal concurrent constraint programming calculus. In Seventh International Conference on Principles and Practice of Constraint Programming, volume 2239 of Lecture Notes in Computer Science, pages 302–316, London, UK, December 2001. Springer-Verlag.
[QRT97] Luis Omar Quesada, Camilo Rueda, and Gabriel Tamura. Programación visual de Cordial. Technical Report 3, AVISPA Research Team, Universidad Javeriana and Universidad del Valle, October 1997.
[RAD06] Camilo Rueda, Gérard Assayag, and Shlomo Dubnov. A concurrent constraints factor oracle model for music improvisation. In XXXII Conferencia Latinoamericana de Informática (CLEI 2006), Santiago de Chile, 2006.
[RAQ+01] Camilo Rueda, Gloria Alvarez, Luis Omar Quesada, Gabriel Tamura, Frank Valencia, Juan Francisco Díaz, and Gérard Assayag. Integrating constraints and concurrent objects in musical applications: A calculus and its visual language. Constraints, 6(1):21–52, 2001. Kluwer Academic Publishers.
[RB98] Camilo Rueda and Antoine Bonnet. Situation: Un langage visuel basé sur les contraintes pour la composition musicale. In Marc Chemillier and François Pachet, editors, Recherches et Applications en Informatique Musicale. Hermès Science Publications, Paris, 1998.
[RLBA98] Camilo Rueda, Mikael Laurson, Georges Bloch, and Gérard Assayag. Integrating constraint programming in visual musical composition languages. In ECAI 98 Workshop on Constraints for Artistic Applications, Brighton, 1998.
[Roa85] Curtis Roads. Research in music and artificial intelligence. ACM Computing Surveys, 17(2):163–190, June 1985.
[Rod06] Jessica L. Rodriguez. Design and implementation of a musical harmony constraint system for Mozart (in Spanish). BSc Thesis, Engineering Degree in Computer Science, Universidad del Valle, Cali (Colombia), 2006.
[Rom00] Nicolas Romero. Located concurrent constraint programming. Technical report, LIFO, Université d’Orléans (France), 2000.
[Ros95] Brian J. Ross. A process algebra for stochastic music composition. In ICMA, editor, International Computer Music Conference (ICMC’95), The Banff Centre for the Arts, Alberta, Canada, 1995.
[RV01] Camilo Rueda and Frank Valencia. Formalizing timed musical processes with a temporal concurrent constraint programming calculus. In Musical Constraints Workshop (CP’2001), December 2001.
[RV02] Camilo Rueda and Frank Valencia. Proving musical properties using a temporal concurrent constraint calculus. In ICMA, editor, International Computer Music Conference (ICMC’02), Gothenburg, Sweden, 2002.
[RV05] Camilo Rueda and Frank Valencia. A temporal concurrent constraint calculus as an audio processing framework. In Second Sound and Music Computing Conference (SMC’05), 2005.
[Sar93] Vijay A. Saraswat. Concurrent Constraint Programming. ACM Doctoral Dissertation Award. The MIT Press, Cambridge, MA, USA, 1993.
[Sch41] Joseph Schillinger. The Schillinger System of Musical Composition. Carl Fischer, New York, 1941. Reprinted by Da Capo Press in 1978.
[Sch48] Joseph Schillinger. The Mathematical Basis of the Arts. The Philosophical Library, 1948. Reprinted by Da Capo Press in 1976.
[Sel04] Carl Seleborg. Interaction temps-réel/temps différé : élaboration d’un modèle formel de Max et implémentation d’une bibliothèque OSC pour OpenMusic. Master’s thesis, Université Aix-Marseille II, 2004.
[SF04] Olin Shivers and David Fisher. Multi-return function call. ACM SIGPLAN Notices, 39(9):79–89, September 2004.
[SJG94a] Vijay A. Saraswat, Radha Jagadeesan, and Vineet Gupta. Foundations of timed concurrent constraint programming. In Ninth Annual IEEE Symposium on Logic in Computer Science, pages 71–80, Paris, France, July 4–7, 1994. IEEE.
[SJG94b] Vijay A. Saraswat, Radha Jagadeesan, and Vineet Gupta. Programming in timed concurrent constraint languages. In B. Mayoh, E. Tyugu, and J. Penjaam, editors, Constraint Programming: Proceedings 1993 NATO ASI Parnu, Estonia, pages 361–410, Berlin/Heidelberg, 1994. Springer-Verlag.
[SJG95] Vijay A. Saraswat, Radha Jagadeesan, and Vineet Gupta. Default timed concurrent constraint programming. In POPL ’95: Proceedings of the 22nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 272–285, San Francisco, California, United States, 1995. ACM Press.
[SJG96] Vijay A. Saraswat, Radha Jagadeesan, and Vineet Gupta. Timed default concurrent constraint programming. Journal of Symbolic Computation, 22(5/6):475–520, 1996.
[Smo94] Gert Smolka. A foundation for higher-order concurrent constraint programming. In Jean-Pierre Jouannaud, editor, 1st International Conference on Constraints in Computational Logics, volume 845 of Lecture Notes in Computer Science, pages 50–72, München, Germany, September 7–9, 1994.
[Smo95] Gert Smolka. The Oz programming model. In Jan van Leeuwen, editor, Computer Science Today, volume 1000 of Lecture Notes in Computer Science, pages 324–343. Springer-Verlag, Berlin, 1995.
[SRP91] Vijay A. Saraswat, Martin Rinard, and Prakash Panangaden. Semantic foundations of concurrent constraint programming. In Conference Record of the Eighteenth Annual ACM Symposium on Principles of Programming Languages, pages 333–352, Orlando, Florida, 1991.
[SS71] Dana Scott and Christopher Strachey. Towards a mathematical semantics for computer languages. In Symposium on Computers and Automata, pages 19–46, Polytechnic Institute of Brooklyn, August 1971.
[Ste90] Guy L. Steele. Common Lisp: The Language, 2nd Edition. Digital Press, 1990.
[TAC01] Charlotte Truchet, Carlos Augusto Agon, and Philippe Codognet. A constraint programming system for music composition, preliminary results. In Seventh International Conference on Principles and Practice of Constraint Programming, Musical Constraints Workshop (CP’2001), Paphos, Cyprus, December 2001.
[Tau90] Heinrich Taube. Common Music: A music composition language in Common Lisp and CLOS. Computer Music Journal, 15(2), 1990.
[Tav08] Carlos Tavera. Design, Implementation and Correctness of GraPiCO: A Visual, Object-Oriented and Constraint Calculus Compiled to PiCO (in Spanish). PhD thesis, Universidad del Valle, Cali (Colombia), 2008.
[TCQ06] Dmitri Tymoczko, Clifton Callender, and Ian Quinn. The geometry of musical chords. Science, 313:72–74, 2006.
[Tru04] Charlotte Truchet. Contraintes, Recherche Locale et Composition Assistée par Ordinateur. PhD thesis, Université Paris VII, 2004.
[Val02] Frank Valencia. Temporal Concurrent Constraint Programming. PhD thesis, University of Aarhus, 2002.
[Val06] Bruno Valeze. Intégration d’un système de contraintes temporelles dans le logiciel de composition musicale OpenMusic. Master’s thesis, Université Bordeaux I - Scrime / Ircam, 2006.
[VNP02] Frank Valencia, Mogens Nielsen, and Catuscia Palamidessi. Temporal concurrent constraint programming: Denotation, logic and applications. Nordic Journal of Computing, 9(2):145–188, 2002.
[Wig98] Geraint A. Wiggins. The use of constraint systems for musical composition. In Workshop on Constraint Programming and Creativity (ECAI’98), 1998.
[Win82] Glynn Winskel. Event structures for CCS and related languages. In Mogens Nielsen and Erik Meineche Schmidt, editors, 9th International Colloquium on Automata, Languages and Programming (ICALP’82), volume 140 of Lecture Notes in Computer Science, pages 561–576, Aarhus, Denmark, July 12–16, 1982. Springer-Verlag.
[Win86] Glynn Winskel. Event structures: Lecture notes for the advanced course on Petri nets. Technical Report 95, University of Cambridge, United Kingdom, July 1986.
[Wri95] Peter Wright. Generating Fractal Music. PhD thesis, University of Western Australia, 1995.