
Formal Verification of Compiler Transformations for Speculative Real-Time Execution¹

Mohamed F. Younis*, Grace Tsai†, Thomas J. Marlowe‡ and Alexander D. Stoyen§

* AlliedSignal Inc., Advanced Systems Technology Group, Columbia, MD 21044, USA.
† Fairleigh Dickinson University, Department of Computer Science, Teaneck, NJ 07666, USA.
‡ Seton Hall University, Department of Mathematics and Computer Science, South Orange, NJ 07079, USA.
§ New Jersey Institute of Technology, Department of Computer and Information Science, Real-Time Computing Laboratory, Newark, NJ 07102, USA.

Abstract. There have been a number of successes in the past few years in the use of formal methods for verification of real-time systems, and also in source-to-source transformation of these systems for improved analysis, performance, and schedulability. What has been lacking are formal proofs that these transformations preserve, or establish, program properties.

We have previously developed a set of compiler transformation rules for safe and profitable speculative execution in real-time systems. In this paper, we present formal proofs that our transformations preserve both the semantic and the timeliness properties of programs. Our approach uses temporal logic, enhanced with a denotational-semantics-like representation of program stores. While the paper focuses on the speculative execution transformations, the approach is applicable to other real-time compiler-based transformations and code optimizations.

Keywords. Compiler Transformation, Formal Verification, Real-Time Systems, Speculative Execution, Compilers, Computer Control, Optimization, Distributed Control, Programming Languages, Temporal Logic.

¹ A short version of this paper was presented at the 20th IFIP/IFAC Workshop on Real-Time Programming (WRTP'95), Ft. Lauderdale, Florida, USA, November 1995. Corresponding author is Mohamed F. Younis. Tel. 01-410-964-4015. Fax. 01-410-992-5813. E-mail [email protected].


1 Introduction

There has been an increase, during the past few years, in the use of distributed computer systems in safety-critical real-time control applications, such as patient monitoring, avionics and air-traffic control. In these applications, it is essential not only to perform the correct control function but also to meet the timing constraints imposed by the application physics. For example, a flight control system must react fast enough to maintain the stability of the aircraft. Since errors in the development of such applications may be disastrous, validation and verification of the control system are mandated before deployment. In some applications, e.g. avionics, auditing and certification by a third party may be required before the control system can be used on the aircraft.

The use of embedded computer systems in real-time control applications allows decreasing the size of the control unit by implementing sophisticated control algorithms in software. However, the extensive use of software in these systems increases the risk of subtle programming errors that may affect the safety of the system. Therefore, pre-run-time guarantees are required to ensure proper execution behavior. Specifically, there are two issues to be analyzed before physically executing real-time control programs: first, the correctness of program semantics; second, timeliness, i.e., real-time programs must provide the intended service subject to a set of timing constraints.

Mathematical techniques have been used to prove the correctness of programs. First, a program is represented by a formal specification, for example, by modeling a program as a sequence of state transitions (updates), where a state describes the values of variables after a transition. Then the specification is verified using the underlying proof system. Through rigorous reasoning, mathematical techniques or formal methods have been shown to increase the dependability and reliability of the design and development of software (Tsai and McMillin 1991). In fact, many certification agencies, such as the Federal Aviation Administration, have endorsed such formal techniques as a fault-avoidance methodology that enhances the quality of the software. To apply formal techniques to real-time systems, it is necessary to show both semantic correctness and timeliness (Narayana and Aaby 1988; Ostroff 1989; Pnueli and Harel 1988).

On the other hand, compile-time analysis and transformations of real-time programs are very commonly used to predict the timing behavior of programs (Nirkhe and Pugh 1993; Park 1993; Stoyenko 1987), to enable efficient schedulability analysis (Stoyenko and Marlowe 1992; Stoyenko et al. 1993), to enhance schedulability (Gerber and Hong 1995; Gupta and Spezialetti 1995), to perform code optimization (Marlowe and Masticola 1992; Younis 1996), and to extract opportunities for speculative execution and parallelism (Younis et al. 1994; Younis 1996). Since these transformations affect the code and consequently the execution behavior of real-time programs, it is crucial to formally verify that the transformations preserve the program semantics and do not worsen the program timeliness. Although many real-time compiler transformations have been introduced in the literature, no work, to the best of our knowledge, has addressed the formal verification of the real-time properties of such transformations. In this paper, we present formal verification of compiler transformation rules for speculative execution of real-time programs. Although we focus on the transformation rules for speculative execution, the approach is applicable to other real-time compiler transformations as well.

Speculative execution is an optimistic execution of parts of the program based on assumptions on either program control flow or variable values. These assumptions will later be validated, and rollback may be needed if they turn out to be invalid. Although speculative execution can enhance the average execution time of a program, in real-time systems the worst-case execution path should not be extended. In (Younis et al. 1994; Younis 1996), we use compile-time analysis to detect both the safety and the profitability of speculative execution in real-time systems. We rely on intensive static timing analysis to investigate the effect of rollback on the worst-case execution time. We have developed compiler transformation rules to fork processes that execute parts of the code speculatively, either on a shadow replica or on the same processor interleaved with the current process. In this paper, we provide verification of those speculative execution compiler transformation rules.

Based on the discussion above, verification of real-time compiler transformations must address the required properties: preserving semantics and time-safety. We provide a semantic correctness proof and a timeliness proof. Again, the approach can be applied to other types of compiler transformations.

In the following section, we provide an overview of formal verification techniques for compiler transformations and show how we address real-time properties. In Section 3, we summarize the compiler transformation rules for speculative execution. We provide a semantic correctness proof in Section 4, followed by the timeliness proof in Section 5. We conclude, in Section 6, with a summary as well as our current and future work.

2 Formal Verification of Compiler Transformations

A global requirement for all compiler transformations is to preserve program semantics. This is a safety requirement. For real-time programs, safety has a more restrictive definition: code transformations also should not worsen the timing properties of the program; a program that meets all timing constraints should not be transformed into one that fails its deadline. Thus, we need to verify that speculative execution preserves both the program semantics and its timeliness.

A correctness proof of a non-real-time compiler transformation consists of three parts: first, a proof, via a technique such as abstract interpretation (Cousot and Cousot 1977), that the data flow equations correctly abstract the program semantics; second, a proof that the data flow computation terminates; third, a proof that the transformation preserves the semantics. For real-time systems, there are three corresponding proofs for timing: the correctness of individual timing rules, the correctness of timing summaries, and the preservation of desired timing properties by the transformation. In this paper, we assume that the first two proofs are given; i.e., given correct data flow and timing information, we show that the transformation preserves the required properties, program semantics and timeliness.

In reasoning about programs, there are two types of proof systems, exogenous and endogenous. The assertions of an exogenous logic (Hoare 1985; Owicki and Gries 1976), such as P{S}Q, contain program fragments (S) and assert properties of the fragments using a precondition P and a postcondition Q. With exogenous logics, there is one axiom for each programming language construct, which makes them suitable for statement-based transformations. On the other hand, an endogenous logic (Harter Jr. 1982; Owicki and Lamport 1980) does not consider intermediate states and thus is suitable for block-based transformations.
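As a small illustration (a textbook example, not one of the rules of this paper), an exogenous assertion annotates a single statement with a precondition and a postcondition:

    {x = 0}  x := x + 1  {x = 1}

The corresponding endogenous, temporal-logic style claim does not mention the statement's internal structure at all: whenever control is at the entry of the block with x = 0, control eventually reaches its exit with x = 1.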

Research in (Aiken et al. 1995; Backus et al. 1986) investigates preservation of the meaning of programs in the context of a functional language, but does not handle timeliness, the most important property of real-time systems. On the other hand, it has been known that methods developed for shared-memory multiprocessing apply equally well to distributed systems and real-time systems (with an additional variable, time) (Abadi and Lamport 1992), so the temporal logics of concurrent systems (Alur and Henzinger 1991; Bernstein and Harter 1981; Henzinger et al. 1990) can be applied to real-time programs. Since the transformations considered in this paper are block-based, the goal is to prove that if a property holds originally in a real-time program, it will continue to hold after applying the transformation rules. The focus is on verifying the semantic correctness and timeliness of a program. We do not reason step-by-step about the statements between P and Q; hence, an endogenous logic based on the temporal logic of (Harter Jr. 1982; Owicki and Lamport 1980) has been adopted.

3 Speculative Execution in Real-Time Systems

Recall that speculative execution occurs when we execute code without being certain as to whether or not the execution will be committed. In this section, we identify opportunities for speculative execution and illustrate our approach to analyzing the applicability and profitability of speculative execution. We begin with the problem model assumed throughout the paper.

Opportunities for speculative execution include conditional statements and while loops. The longest branch of a conditional can be executed while the condition is being evaluated on another processor. Similarly, we can execute the body of a while loop before concluding the evaluation of the loop condition. Rollback is necessary if the value of the condition does not match our assumption. In real-time systems, speculative execution should be performed carefully so as not to extend the worst-case time in case of rollback. In this section, we summarize rules for a number of compiler transformations for safe speculative execution in real-time systems. We start by providing the definitions and assumptions needed within our model, as well as the format of our rules.

3.1 The System Model

We assume a set of periodic top-level processes, each with a deadline, invoking methods of a set of objects

governing resources and data. The application runs on an arbitrary network of processors. Objects and

processes are assigned to processors at compile time.

Our analysis relies on an expressive real-time language with all kinds of timing constraints. The language does not allow any unpredictable constructs: there are no dynamic structures, all loops have an upper bound on the number of iterations, and there is no recursion. Conceptually, a program in this language may have resulted from source-to-source translation of a program with more general loops and with limited recursion (Chung 1994; Stoyenko et al. 1995). However, we assume that the language allows concurrency and interprocess synchronization. It is also assumed that the execution time of the program can be predicted. A machine-dependent timing analysis is to be performed to handle modern processor architectures with advanced features, such as cache memory and instruction pipelines. Moreover, there should be an upper bound on communication delays.
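As a hedged illustration of these restrictions (our own sketch, not code from the language definition; the names are hypothetical), a loop in such a language must carry a compile-time iteration bound so that its worst-case execution time can be derived statically:

    /* Illustrative only: the iteration count is bounded by a compile-time
       constant, so a timing analyzer can bound the loop's execution time.
       Unbounded while loops, recursion, and dynamic allocation are excluded
       by the language model assumed here. */
    #define MAX_SENSORS 8   /* hypothetical compile-time bound */

    double sum_readings(const double readings[MAX_SENSORS]) {
        double sum = 0.0;
        for (int i = 0; i < MAX_SENSORS; i++) {   /* bounded loop */
            sum += readings[i];
        }
        return sum;
    }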

We use an axiomatic specification approach that includes both preconditions and postconditions to denote the execution before and after applying a transformation for speculative execution. There are other approaches, for example (Venkatesh 1989; Whitfield and Soffa 1991), for specifying data dependence and control flow conditions. However, in their current form they are not suitable for real-time systems, since compiler transformations of real-time programs cannot ignore timing constraints and resource access. The rules are standard Hoare triples (Hoare 1985):

    (precondition, action, postcondition)

In each rule, we consider the code S in a procedure/method P. The set of preconditions identifies applicability, correctness, and profitability, and is decomposed into the following subsets:

• One invariant condition: except as provided below, certain types of blocking statements, for which linearizability is important or retraction is impossible (e.g., I/O, creation/destruction of resources, exceptions with persistent effects, possible errors), do not occur in S. We assume that resource dependences have been captured in Blocking or Ordered constraints on resource access (see below).

• Structural conditions: syntactic flow-graph restrictions on S.

• Dependence conditions: summarize the dependences between the code of S and other code segments in P.

• Blocking conditions: additional blocking or unblocking information, possibly guarded by their own preconditions.

• Timing rules: needed to determine the profitability of the transformation.

We use the following information in specifying conditions:

• The standard PDG decomposition of dependence into control dependence, true (flow) dependence, anti-dependence, output dependence, and input dependence, and, for dependences inside loops, into loop-dependent and loop-independent dependences.

• Vars(S) = the set of variables referenced in S.

• Mod(S) = the set of variables modified directly or indirectly in S.

• Pres(S) = the set of variables whose definitions must be preserved through S.

• Calls(S) = the set of method calls in S.

• Blocking(L) is true if L is blocking.

• Ordered(L) is true if the order of accesses to L is observable.

• For a method M, TCalls(M) = the set of methods/procedures transitively calling M.

In addition, we assume the following timing functions: first, given a set of variables Vars, we assume functions tc and tr giving the time to copy and to restore that set of variables; second, Time(S) returns an estimate of the worst-case execution time of S, which may be a code segment, a procedure, or a method. We also use tf and tj for fork time and join time, respectively (both include communication delays).
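To make these quantities concrete, the following sketch (our own illustrative data layout, not the authors' implementation; all names are hypothetical) collects the static estimates that the timing preconditions of the rules below consult, for a candidate conditional of the form s: if (C) then S2 else S3 used in Section 3.2:

    /* Illustrative only: hypothetical record of the static timing estimates
       attached to a candidate code segment before a rule is applied. */
    typedef struct {
        double time_C;     /* Time(C): worst-case time of the condition call C   */
        double time_S2;    /* Time(S2): worst-case time of the speculated branch */
        double time_S3;    /* Time(S3): worst-case time of the other branch      */
        double t_save;     /* tc(Mod(S2)): time to save (copy) the modified
                              variables; written ts in the rules below           */
        double t_restore;  /* tr(Mod(S2)): time to restore the saved variables   */
        double t_fork;     /* tf: fork overhead, including communication delay   */
        double t_join;     /* tj: join overhead, including communication delay   */
    } timing_info;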

After stating our model and assumptions, we discuss possible opportunities for speculatively executing

branches of conditionals and while loops in the following subsection.


3.2 Opportunities for Speculative Execution

In this section, we briefly describe some possible opportunities for speculative execution. Basically, we focus on opportunities based on the program control flow during the execution of conditionals and while loops. For further details and a discussion of other opportunities, the reader is referred to (Younis 1996).

Opportunities in Conditionals. Assume that we have a call at a branch point (for simplicity we assume exactly two branches) and the code is of the form s: if (C) then S2 else S3, where C is a call being executed on another processor. If we store (and possibly later retrieve) the current state at s, there will be a cost: a time delay of tc for the copy (store), and tr for the restore.

If the execution time of S2 (Time(S2)) dominates that of S3 (Time(S3)), and Time(S2) - Time(S3) > tr, and, further, some initial segment of S2 is not data dependent on an out parameter of C, then we can begin executing S2 speculatively, abandoning the computation and restoring the prior state only if the returned value indicates we should have chosen S3. This will almost invariably be the case in dealing with an if-then statement, since S3 is the empty statement. However, for the transformation to be useful, it requires that the evaluation of C (or some prior statements on which s is not data dependent) be time-consuming.
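As a purely hypothetical numeric instance of this condition (the figures are ours, chosen only for illustration): suppose Time(S2) = 40 ms, Time(S3) = 5 ms, and tr = 10 ms. Then Time(S2) - Time(S3) = 35 ms > tr = 10 ms, so speculating on S2 can pay off; and even when the guess is wrong, the penalty path Time(S3) + tr = 15 ms remains below Time(S2) = 40 ms, so the worst-case path is not lengthened.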

If one branch has little or no effect on state, so that restore is inexpensive, and that branch has some initial segment not data dependent on C, we can speculatively execute that branch. (If both branches have this property, we use the one with the longer execution time.) Furthermore, if there are some data dependences on a value modified in C, we can speculatively execute that branch and stop at the point where we use that value, provided that this is profitable.

Opportunities in While Loops. While a real-time language allows only constant-count loops with compile-time bounds (Stoyenko 1987), this has been extended to allow while loops with a compile-time-provable bound on iterations, or equivalently, constant-count loops with exits (Chung 1994; Stoyenko et al. 1995). For while loops, speculative execution can be invoked for some number of iterations, saving the state after each. In speculative execution, the next loop body may be evaluated in parallel with a call late in the previous body, where the loop condition depends on the return value (Katz 1990). For example, in the code block s: while (C) do S2, where C is a call being executed on another processor, execution of the loop body S2 (or part of it) can be started during the evaluation of the call C, undoing all updates to the variables if the condition evaluation results in termination.
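The shape of the transformed loop, written in the same C-like cobegin/coend pseudocode used in the example of Section 3.4 (this is our own sketch with hypothetical save/restore helpers; the precise rule is given in Fig. 2), is roughly:

    /* Original:            while ( C() ) { S2; }
       Transformed (sketch): each evaluation of C runs in parallel with a
       speculative execution of the body; if C turns out to be false, the
       body's updates are undone. The loop is assumed to run at least once. */
    cond = TRUE;
    while (cond) {
        cobegin
            cond = C();          /* call evaluated on another processor     */
        ||
            save(Mod_S2);        /* save the variables the body may modify  */
            S2;                  /* speculative execution of the body       */
        coend;
        if (!cond)
            restore(Mod_S2);     /* rollback: discard speculative updates   */
    }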

One particular subcase which proves interesting is the case in which iterations modify distinct locations, as, for example, in array-oriented programs. In this case, we can remember the original values, allow iteration to proceed, and restore precisely those values which have been written by speculatively-executed iterations which do not in fact occur.

RULE: SPECULATIVE IF

Preconditions:
  Structural:
    (1) S = (if (C) then S2 else S3) has a single exit.
    (2) C is a call executed on another processor.
  Dependence:
    (3) Vars(S2) ∩ Mod(C) = ∅.
        (Variables modified in C are not used in S2.)
  Blocking:
    (4) There are no blocking constructs in S2.
    (5) ∀M ∈ TCalls(Calls(S2)) ⟹ ¬Blocking(M).
    (6) ∀M ∈ TCalls(C) ∩ TCalls(Calls(S2)) ⟹ ¬Ordered(M).
        (Incorrectly executing any such statement has an invalid effect on the environment.)
  Timing:
    (7) ts(Mod(S2)) + tf + tj + tr(Mod(S2)) + Time(S3) < Time(C).
        (Useful work can be done; worst-case time does not increase.)
    (8) Time(S3) + tr(Mod(S2)) ≤ Time(S2).
        (Worst-case time does not increase.)

Actions:
  Execute C in parallel with the following: save(Mod(S2)); S2.
  Synchronize between exit(C) and exit(S2).
  Check xc, the return parameters of C:
    if this enables S2, do nothing;
    otherwise, execute restore(Mod(S2)); S3.
  In any case, continue executing from exit(S).

Postcondition:
  S has completed without missing its deadline.
  The state is as if execution had been sequential.

Comment: A symmetric rule exists for S3.

Fig. 1: Speculative execution for if clauses

Again, attention should be paid to the worst-case scenario. The cost of rollback must be estimated. The transformations may not be performed if they endanger satisfaction of the timing constraints imposed on the program. The next subsection presents the specification of compiler rules that ensure safety before applying the speculative execution transformations.

3.3 The Compiler Transformation Rules

In this section, we discuss two compiler transformation rules, shown in Figures 1 and 2, for speculative execution. In the Speculative-If rule (Fig. 1), we assume that the condition of the if-statement is a call that can be executed on another processor. We can select one of the branches to be speculatively executed while making the call C in the condition. The selected branch (S2, for example) should satisfy the following conditions:

1. The variables used in S2 are not modified as a side effect of the call C.

2. Neither S2 nor any function transitively called from S2 has a blocking construct.

3. There will not be a change in the order of any calls to critical sections if S2 runs while C is running.

RULE: SPECULATIVE WHILE

Preconditions:
  Structural:
    (1) S = (while (C) do S2) has a single exit.
    (2) C is a call executed on another processor.
    (3) The loop will be executed at least once.
  Dependence:
    (4) Vars(S2) ∩ Mod(C) = ∅.
        (Variables modified in C are not used in S2.)
  Blocking:
    (5) There are no blocking constructs in S2.
    (6) ∀M ∈ TCalls(Calls(S2)) ⟹ ¬Blocking(M).
    (7) ∀M ∈ TCalls(C) ∩ TCalls(Calls(S2)) ⟹ ¬Ordered(M).
  Timing:
    (8) tr(Mod(S2)) + 2ts(Mod(S2)) + 2tf + 2tj + Time(S2) < Time(C).
        (Useful work can be done; worst-case time does not increase.)
    (9) tr(Mod(S2)) ≤ Time(S2).
        (Given at least one iteration, worst-case time does not increase.)

Actions:
  Execute C in parallel with the following: save(Mod(S2)); S2.
  Synchronize between exit(C) and exit(S2).
  Check xc, the return parameters of C:
    if this enables S2, repeat;
    otherwise, execute restore(Mod(S2)); exit(S).

Postcondition:
  S has completed without missing its deadline.
  The state is as if execution had been sequential.

Fig. 2: Speculative execution for while clauses

To ensure the safety and profitability of the transformations, the timing conditions (7) and (8) in Fig. 1 must be satisfied before performing the transformations. Safety can be guaranteed if the worst-case execution path is not extended. When speculatively executing the longest branch of a conditional, the effect of rollback on the other branch should be examined. The rollback penalty should not extend the short branch beyond the worst-case execution time, as in (8). The transformation is profitable if the overhead (storage, forking and joining) is less than the execution time of C, as described in (7).
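As a hedged sketch (our own illustration, reusing the hypothetical timing_info record from Section 3.1, and not part of the authors' toolchain), conditions (7) and (8) of Fig. 1 amount to a simple compile-time check:

    #include <stdbool.h>

    /* Illustrative check of timing preconditions (7) and (8) of the
       speculative-if rule, over the hypothetical timing_info record. */
    bool speculative_if_timing_ok(const timing_info *t) {
        /* (7) profitability: save + fork + join + restore + short branch
           must still cost less than evaluating the condition call C. */
        bool profitable =
            t->t_save + t->t_fork + t->t_join + t->t_restore + t->time_S3
            < t->time_C;

        /* (8) safety: the rollback path (short branch plus restore) must
           not exceed the speculated branch, so the worst case is unchanged. */
        bool safe = t->time_S3 + t->t_restore <= t->time_S2;

        return profitable && safe;
    }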

The Speculative-While rule, in Fig. 2, follows in the same spirit. In this paper, we concentrate on the above two rules. The reader may refer to (Younis 1996) for a complete list of rules.

Before presenting formal proofs that these transformation rules preserve the semantics and timeliness of real-time programs, the next subsection provides an example to illustrate how to apply the speculative execution transformations to an aircraft navigation control application.


3.4 An Example for Code Transformations

To illustrate how to apply the speculative execution transformations to control applications, we consider an example of an aircraft navigation control system, similar to the one discussed in (Gerber and Hong 1995). The route of an aircraft is represented by a set of goal coordinates (stored in the GOAL array). This set of coordinates is assumed to be provided by another module and passed as parameters to the navigation control thread. The algorithm can be summarized in four steps. First, it samples the aircraft's current coordinates, direction (heading), roll, and ground speed². Second, it consults the GOAL array for the next coordinate to target. Then, the safety of the path to the target is checked using flight safety and hazard avoidance devices such as the weather radar and the traffic collision avoidance system (TCAS). If the path is safe, the relative attitude and the new direction angle are calculated. Finally, the throttle and flap are adjusted to move to the new coordinate. For simplicity, a 2-dimensional abstraction of the navigation control problem is considered.

The C-like pseudocode for this example is shown in Fig. 3. A transformed version of the code is depicted in Fig. 4. For clarity, the parts of the code which have not been affected by the transformations are omitted. The speculative-if rule of Fig. 1 can be applied to allow the concurrent execution of the evaluation of the safety of the navigation path and the then-clause, which calculates the new values for the actuators. If the navigation path is not safe, the old values of the throttle and flap are restored and the else-branch is executed. Assuming the then-clause is significantly longer than the else-clause, the program runs faster on average while the worst-case execution is not extended. Therefore the transformations enhance average-case performance while preserving the program's timeliness. For a comprehensive evaluation of the applicability of speculative execution, the reader is referred to (Younis 1996).

The next section starts the formal verification of the speculative execution transformation rules by showing that the transformed program is semantically equivalent to the original program.

4 Semantic Correctness Proof

In this section we show that the transformation rules preserve the semantics of a program. Our goal is to prove that applying a transformation rule leads to a semantically equivalent state, according to the definition of semantic equivalence below. Throughout this paper, we use S to denote a code segment and S' to denote the corresponding transformed code of S. The following notations and definitions are necessary for the proof.

² While other readings may be required, for simplicity only these are considered.


int checkSafety(double theta, double x, double y, double gx, double gy) {
    /* check safety for the move using TCAS,
       weather radar and other warning devices */
    if (safe)
        return (TRUE);
    else
        return (FALSE);
}

void action(GOAL goal, int& i) {
    double x;                  /* current x-coordinate */
    double y;                  /* current y-coordinate */
    double theta;              /* current theta */
    double roll;               /* current roll */
    double speed;              /* current speed */
    double gx, gy;             /* coordinates of the current goal */
    double rtheta, dtheta;
    double wflap, throttle;    /* actuator settings */

    /* beginning of the control loop */
    while (TRUE) {
        /* Get current sensor readings */
        read_sensor(&x, &y, &theta, &roll, &speed);

        /* Check if the current target is reached */
        gx = goal[i].x;
        gy = goal[i].y;
        goal[i].passed = check_target(x, y, gx, gy);

        /* if the target is passed, get a new goal */
        if (goal[i].passed) {
            i = i + 1;
            gx = goal[i].x;
            gy = goal[i].y;
        }

        /* if the goal can be reached safely, calculate new actuator settings */
        if (checkSafety(theta, x, y, gx, gy)) {
            /* compute the relative attitude using the current location and goal */
            rtheta = compRelAtt(theta, x, y, gx, gy);
            if (abs(rtheta) < EPS) {
                dtheta = 0;
            }
            else {
                if (speed < VHIGH) {
                    dtheta = rtheta;
                }
                else {
                    dtheta = safeDtheta(rtheta, roll);
                }
            }
            /* compute new flap and throttle settings */
            wflap = compFlapw(roll, speed, dtheta);
            throttle = compThrottle(roll, speed, dtheta);
            /* adjust actuators */
            send_output(throttle, wflap);
        }
        else { /* Goal cannot be reached safely */
            /* Warn the pilot of the situation */
        }
    } /* end of the control loop */
}

Fig. 3: An example of automated flight control


void SPEC_action(double theta, double x, double y, double gx, double gy,
                 double roll, double speed, double *wflap, double *throttle) {
    double rtheta, dtheta;

    /* compute the relative attitude using the current location and goal */
    rtheta = compRelAtt(theta, x, y, gx, gy);
    if (abs(rtheta) < EPS) {
        dtheta = 0;
    }
    else {
        if (speed < VHIGH) {
            dtheta = rtheta;
        }
        else {
            dtheta = safeDtheta(rtheta, roll);
        }
    }
    /* compute new settings for flap and throttle */
    *wflap = compFlapw(roll, speed, dtheta);
    *throttle = compThrottle(roll, speed, dtheta);
}

void action(GOAL goal, int& i) {
    ...
    /* beginning of the control loop */
    while (TRUE) {
        ...
        /* if the goal can be reached safely, calculate new actuator settings */
        cobegin
            safe = checkSafety(theta, x, y, gx, gy);
        ||
            /* save the modified variables */
            save(throttle, wflap);
            SPEC_action(theta, x, y, gx, gy, roll, speed, &wflap, &throttle);
        coend;
        if (safe) {
            /* adjust actuators */
            send_output(throttle, wflap);
        }
        else {
            /* restore the modified variables */
            restore(throttle, wflap);
            /* Warn the pilot of the situation and take appropriate action */
        }
    } /* end of the control loop */
}

Fig. 4: An example of applying speculative execution transformations


Notation 4.1 Let Σ denote the set of states of a program P. Let σ_S denote the state of a program after executing the code segment S, where σ_S ∈ Σ.

Notation 4.2 Let σ(x) be the value of variable x in the state σ.

Note that x may not be initialized or declared in σ. These two cases are handled separately in the semantic correctness proof.

Definition 4.1 Given σ_S and σ_S', we say that the states σ_S and σ_S' are equivalent, denoted σ_S = σ_S', if for every x ∈ σ_S and x ∈ σ_S', the value of x is the same in σ_S and σ_S'.

Notation 4.3 Let Π be the projection of states onto the set of variables live immediately after the execution of S (i.e., after(S)), and let Π' be the corresponding projection immediately after the execution of S' (i.e., after(S')).

There is a subtlety in the above notation, namely the use of projections onto the set of variables live on exit of the transformed code block. Many compiler transformations for optimization and parallelization, and some for speculative execution, either introduce new variables or eliminate unnecessary variables, which results in different values for variables where, in each case, those values are never used after exiting the transformed block. Clearly, such dead values should not matter in evaluating the correctness of the transformation. As far as correctness is concerned, if a variable exists before the transformation, its value should be the same after applying the transformation rules. Projections are used to show that, for the variables of interest, their values are not altered by the transformation. We actually use a weaker criterion, differing when the initial program does not terminate correctly.

Definition 4.2 Given σ_S and σ_S', we say that the semantics of a program is preserved after transformation if and only if the following two conditions hold:

1. if S converges and σ_S ∈ Σ, then σ_S' ∈ Σ and Π(σ_S) = Π'(σ_S');

2. if S converges but σ_S ∉ Σ, then S' also converges.
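In symbols (a restatement of Definition 4.2, not an additional requirement, writing S↓ as our shorthand for "S converges"):

    (S↓ ∧ σ_S ∈ Σ)  ⟹  σ_S' ∈ Σ  ∧  Π(σ_S) = Π'(σ_S')
    (S↓ ∧ σ_S ∉ Σ)  ⟹  S'↓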

The above definition of semantic equivalence is motivated by the requirement that a transformation should not worsen the quality of the results returned by a program. If the computation terminates normally and returns output (correct by assumption), the transformed program must return the same output. If, however, the program terminates in error, then we certainly should not object if the transformation eliminates the error and returns a correct result; arguably, if less clearly, we should not be particularly concerned once an error has occurred, as long as the program terminates. Finally, if a program does not terminate, any behavior in the transformed program is acceptable. Note that this criterion can be modified to prohibit catastrophic errors in the transformed programs, perhaps by adding an ordering on errors and requiring the error in the transformed program to be no more severe.

Semantic correctness is preserved if a transformed program preserves the contents of the program store during execution. So the program store corresponding to each program state is examined to show semantics preservation, as in denotational semantics (Aiken et al. 1995) or abstract interpretation (Cousot and Cousot 1977).

In the semantic proof, we assume that fork and join have no side effects on the program state, since they usually represent operating system overhead to manage the new process.

Lemma 4.1 The semantics of a program after applying the if-rule, Fig. 1, is preserved, assuming the computation converges.

Proof: Let the following denote, respectively, the state sequences before and after applying the transformation rule:

    ... --Sa--> σ_P --Sb--> σ_Q --Sc--> ...

    ... --Sa--> σ_P --S'b--> σ_Q' --Sc--> ...

where Sb denotes the statement if (C) then S2 else S3, and S'b = (xc = C || (Save; Fork; S2)); if (¬xc) then (Restore; S3) is the corresponding transformed version. We need to show, for an arbitrary variable x, that the value of x is the same in both σ_Q and σ_Q'; the states σ_Q and σ_Q' are then semantically equivalent.

The value of x may be modified in the transformed version as follows:

1. xc is used only locally in the evaluation of C, and it is not live on exit from Sb, so it is unconstrained.

2. If x is not a live variable when control reaches C, there are no constraints on σ_Q(x) or σ_Q'(x). Thus we assume henceforward that x is a live variable upon entering the execution of C.

3. x ∈ Pres(Sb) ⟹ x ∈ Pres(S'b)
   ⟹ σ_P(x) = σ_Q(x) = σ_Q'(x)
   If x is never modified in Sb then its value will be the same in σ_Q', since the operations incurred by the transformation, such as fork and save, do not modify the value. So σ_Q(x) = σ_Q'(x).

4. x ∈ Mod(S2) and x ∉ Mod(C) ∪ Mod(S3)
   C: ⟹ S2 is to be executed.
      ⟹ σ_Q'(x) = σ_{C || (Save;Fork;S2)}(x) = σ_{S2}(x)   (since x ∉ Mod(C))
      ⟹ σ_Q(x) = σ_{C;S2}(x) = σ_{S2}(x)
      ⟹ σ_Q'(x) = σ_Q(x)
   ¬C: ⟹ S3 is to be executed.
      ⟹ σ_Q'(x) = σ_{C || (Save;Fork;S2); Restore; S3}(x) = σ_{Restore;S3}(x)
      ⟹ σ_{Restore;S3}(x) = σ_{Restore}(x)   (since x ∉ Mod(S3))
      ⟹ σ_{Restore}(x) = σ_P(x)   (since x ∉ Mod(C))
      ⟹ σ_Q(x) = σ_{C;S3}(x) = σ_P(x)   (since x ∉ Mod(S3) and x ∉ Mod(C))
      ⟹ σ_Q'(x) = σ_Q(x)
   If the then-clause is to be executed, the effect of S2 on the value of x is propagated to σ_Q'. However, since x is not modified by C, restoring the value that x had in σ_P is sufficient to preserve the semantics when following the else branch.

5. x ∈ Mod(S3) and x ∉ Mod(S2) ∪ Mod(C)
   C: σ_Q'(x) = σ_Q(x)   (since x is not modified in S2).
   ¬C: ⟹ σ_Q'(x) = σ_{C || (Save;Fork;S2); Restore; S3}(x) = σ_{S3}(x)   (since x ∉ Mod(S2) ∪ Mod(C))
      ⟹ σ_Q(x) = σ_{C;S3}(x) = σ_{S3}(x)   (since x ∉ Mod(C))
      ⟹ σ_Q'(x) = σ_Q(x)
   Since x is not modified by either C or S2, the speculative execution of the then-clause has no effect on the value of x; therefore the semantics is preserved if the condition C is true. On the other hand, when the else-clause is executed, the value of x in σ_P is used, which leads to the same state as the serial order of execution.

6. x ∈ Mod(C) and x ∉ Mod(S2) ∪ Mod(S3)
   ⟹ σ_Q(x) = σ_{(C;S2) ∨ (C;S3)}(x) = σ_C(x)
   ⟹ σ_Q'(x) = σ_{(C || (Save;Fork;S2)) ∨ (C || (Save;Fork;S2); Restore; S3)}(x)
   Since the value of x is not modified in either branch, only the effect of C on x is propagated to σ_Q.
   C: ⟹ σ_Q'(x) = σ_{C || (Save;Fork;S2)}(x) = σ_C(x)
      ⟹ σ_Q'(x) = σ_Q(x)
   ¬C: ⟹ σ_Q'(x) = σ_{C || (Save;Fork;S2); Restore; S3}(x)
      ⟹ σ_Q'(x) = σ_{C;Restore;S3}(x) = σ_{C;S3}(x)   (since x ∉ Mod(S2))
      ⟹ σ_Q'(x) = σ_Q(x)
   In case of rollback, the value of x is not restored to σ_P(x). Since only the values of variables modified in S2 are restored, the modification of x in C is not overwritten by rolling back the execution.

7. x ∈ Mod(S2) ∩ Mod(S3) and x ∉ Mod(C)
   C: ⟹ σ_Q'(x) = σ_{C || (Save;Fork;S2)}(x) = σ_{S2}(x)
      There is no race condition due to the parallel execution of C and S2, since x is not modified in C.
      ⟹ σ_Q(x) = σ_{C;S2}(x) = σ_{S2}(x)
      ⟹ σ_Q'(x) = σ_Q(x)
   ¬C: ⟹ σ_Q'(x) = σ_{C || (Save;Fork;S2); Restore; S3}(x) = σ_{S2;Restore;S3}(x)   (since x ∉ Mod(C))
      ⟹ σ_Q'(x) = σ_{S2;Restore;S3}(x) = σ_{S3}(x)
      As x ∈ Mod(S2), the value of x is restored to its original value in σ_P before S3 executes.
      ⟹ σ_Q(x) = σ_{C;S3}(x) = σ_{S3}(x)   (since x ∉ Mod(C))
      ⟹ σ_Q'(x) = σ_Q(x)

8. x ∈ Mod(C) ∩ Mod(S3) and x ∉ Mod(S2)
   C: ⟹ σ_Q'(x) = σ_{C || (Save;Fork;S2)}(x) = σ_C(x)   (since x ∉ Mod(S2))
      ⟹ σ_Q(x) = σ_{C;S2}(x) = σ_C(x)
      ⟹ σ_Q'(x) = σ_Q(x)
   ¬C: ⟹ σ_Q'(x) = σ_{C || (Save;Fork;S2); Restore; S3}(x) = σ_{C;Restore;S3}(x)   (since x ∉ Mod(S2))
      As x ∉ Mod(S2), the value of x is not restored to its original value in σ_P, so the restore has no effect on x.
      ⟹ σ_Q'(x) = σ_{C;S3}(x)
      ⟹ σ_Q(x) = σ_{C;S3}(x)
      ⟹ σ_Q'(x) = σ_Q(x)

9. x ∈ Mod(C) ∩ Mod(S2) cannot occur, by construction.

10. x ∈ Mod(C) ∩ Mod(S2) ∩ Mod(S3) cannot occur, by construction.

Lemma 4.2 The semantics of a program after applying the speculative-while rule, Fig. 2, is preserved, assuming the computation converges.

Proof: For while loops, semantic safety follows if the states after each iteration (meaning, for the original code, after each execution of S2 and, for the transformed code, at each synchronization) are shown to be equivalent, and the numbers of committed executions of S2 are the same. It is sufficient to show (1) that a single execution of the loop transforms the value of a variable x identically if C holds, and (2) that, when C fails, x's value is likewise transformed identically.

Recall that we transform Sb = while (C) do S2 into S'b = while (xc = C || (Save; Fork; S2)); Restore. Assume that the state sequence of the original program is ... --Sa--> σ_P --Sb--> σ_Q --Sc--> ... and that, after applying the speculative-while rule of Fig. 2, the state sequence becomes ... --Sa--> σ_P --S'b--> σ_Q' --Sc--> .... We need to prove that σ_Q = σ_Q'.

Assume that the while loop is executed n times, and let Ii and Ei be the states before and after executing the i-th iteration, respectively. The original state sequence can be expanded further into

    ... --Sa--> σ_P,   σ_I1 --C;S2--> σ_E1,   σ_I2 --C;S2--> σ_E2,   ...,   σ_In --C;S2--> σ_En,   σ_I(n+1) --C--> σ_Q --Sc--> ...

where σ_P = σ_I1 and σ_I(i+1) = σ_Ei, i = 1, ..., n. The corresponding expanded state sequence after the transformation is

    ... --Sa--> σ_P,   σ_I'1 --C||(Save;Fork;S2)--> σ_E'1,   σ_I'2 --C||(Save;Fork;S2)--> σ_E'2,   ...,   σ_I'n --C||(Save;Fork;S2)--> σ_E'n,   σ_I'(n+1) --C||(Save;Fork;S2);Restore--> σ_Q' --Sc--> ...

where I'i and E'i are the states corresponding to Ii and Ei, respectively, in the transformed code, and σ_P = σ_I'1 and σ_I'(i+1) = σ_E'i, i = 1, ..., n.

As for the speculative-if rule, it is sufficient to show that the value of an arbitrary variable x is the same in both σ_Q and σ_Q'. Therefore, we need to consider all the possible cases of propagating the value of x to σ_Q':

1. Since xc is used only locally in the evaluation of C, it is neither live from iteration to iteration nor upon exit from Sb, and therefore it is unconstrained.

2. x ∈ Pres(Sb) ⟹ x ∈ Pres(S'b)
   ⟹ σ_P(x) = σ_A(x) = σ_A'(x) = σ_Q(x) = σ_Q'(x)
   If x is not modified in Sb, the value of x is the same at all program points A in the execution of Sb.

3. x ∈ Mod(S2) and x ∉ Mod(C)
   C: assume σ_Ii(x) = σ_I'i(x), i = 1, ..., n.
      ⟹ σ_E'i(x) = σ_{after(C||(Save;Fork;S2))}(x) = σ_{S2}(x) = σ_Ei(x)
      Since σ_I1(x) = σ_I'1(x) = σ_P(x),
      ⟹ σ_E'i(x) = σ_Ei(x) for all i = 1, ..., n   (by induction on i).
   ¬C: ⟹ σ_Q(x) = σ_C(x) = σ_I(n+1)(x) = σ_En(x)
      ⟹ σ_Q(x) = σ_En(x) = σ_E'n(x)   (just proved by induction)
      ⟹ σ_Q'(x) = σ_{after(C||(Save;Fork;S2);Restore)}(x) = σ_{after(Save;Fork;S2;Restore)}(x) = σ_I'(n+1)(x)
      ⟹ σ_Q'(x) = σ_I'(n+1)(x) = σ_E'n(x)
      ⟹ σ_Q'(x) = σ_Q(x)
   If x is not modified in C, the value of x after every committed iteration is the result of the operations in S2, exactly as in the original code. On exiting the loop, however, the value of x must be restored to its value after the last committed iteration.

4. x ∈ Mod(C) and x ∉ Mod(S2)
   C: assume σ_Ii(x) = σ_I'i(x), i = 1, ..., n.
      ⟹ σ_Ei(x) = σ_{C;S2}(x) = σ_C(x)
      ⟹ σ_E'i(x) = σ_{C||(Save;Fork;S2)}(x) = σ_C(x)
      ⟹ σ_E'i(x) = σ_Ei(x)
      Since σ_I1(x) = σ_I'1(x) = σ_P(x),
      ⟹ σ_E'i(x) = σ_Ei(x) for all i = 1, ..., n   (by induction on i).
   ¬C: ⟹ σ_Q'(x) = σ_{C||(Save;Fork;S2);Restore}(x) = σ_{C;Restore}(x)
      Given that we restore only Mod(S2),
      ⟹ σ_Q'(x) = σ_{C;Restore}(x) = σ_C(x)
      ⟹ σ_Q(x) = σ_C(x)
      Given that σ_I'(n+1)(x) = σ_E'n(x) = σ_En(x) = σ_I(n+1)(x)   (just proved by induction),
      ⟹ σ_Q'(x) = σ_Q(x)
   As the value of x is affected only by the execution of C, the value σ_E'i(x) after any iteration i is the same as in the original code.

5. x ∈ Mod(C) ∩ Mod(S2) cannot occur, by construction.

Theorem 4.1 Given the transformation rules of Fig. 1 and Fig. 2, the (weak) semantics of a program is preserved after applying the if-rule and the while-rule.

Proof: Let Σ denote the set of states of a program P. Let S denote an if or a while statement which satisfies the constraints of Fig. 1 or Fig. 2, and let S' be the corresponding transformed code of S. We need to show that the semantics of the program P is preserved after transformation regardless of the divergence or convergence of S. Let Π and Π' be the projections onto the live variables at exit(S) and exit(S'), respectively. Also, let σ_S denote the state after the execution of S. There are three cases to consider.

Case 1: S converges and σ_S ∈ Σ. From Lemma 4.1 and Lemma 4.2, S' converges, the state σ_S' ∈ Σ, and Π(σ_S) = Π'(σ_S').

Case 2: S converges and σ_S ∉ Σ, i.e., the execution of S leads to an erroneous state. Since the extra computation incurred by the transformation rule, such as fork and save, is finite, S' will terminate.

Case 3: S diverges. There is no guarantee on σ_S' after the execution of the transformed code S'.

From Definition 4.2, the weak semantics of a program is preserved after the transformation. □

5 Timeliness Proof

In this section, we address the timing properties of the transformations. In order to prove the safety of a transformation rule, we must verify that the worst-case execution time is not increased. In the following proofs, we use WCT, □ and ⇝ to denote the worst-case time, "always true", and "leads to", respectively; we write P ⇝[t] Q to mean that Q is reached from P within time t. In addition, the notation Path(P, T, Q) means that Q is reachable from P through T. As we did for the semantic correctness proof, we begin by showing that timeliness is preserved for the speculative-if rule.

Theorem 5.1 The speculative-if rule of Fig. 1 does not extend the worst-case execution time of the code after transformation.

Proof: Assume that the program meets its deadline and has the property P ⇝ Q for the code segment if (C) then S2 else S3, where P denotes at(if) and Q denotes after(if). The following hold, according to the constraints of Fig. 1:

• □P ⟹ Path(P, R, Q) ∨ Path(P, H, Q), using the proof system in (Harter Jr. 1982), where R and H represent the state formulas following the execution of the then and else branches, respectively.

• P ⇝[WCT] Q   (given that the program meets its deadline).

• R ⇝[Time(S2)] Q

• H ⇝[Time(S3)] Q

• P ⇝[Time(C)] R ∨ H

• Time(C) + Time(S2) ≤ WCT   (assuming that the then-clause is the longer branch).

• Time(C) + Time(S3) < WCT

After applying the speculative-if rule, the program structure is modified, and the path by which Q is reached from P is different. Assume that □P ⟹ Path(P, R', Q) ∨ Path(P, H', Q), where R' and H' represent the state formulas at the beginning of the then and else branches, respectively. With speculative execution, R' or H' is reachable from P after executing C in parallel with forking a new process that executes S2 after saving the original state:

⟹ P ⇝[Max((Time(S2) + ts + tj + tf), Time(C))] R' ∨ H'

Since R' is reached only after both C and S2 have completed, no further time is needed to reach Q from R'. However, rollback is needed before executing S3:

⟹ R' ⇝[0] Q
⟹ H' ⇝[Time(S3) + tr] Q

We use tconcurrent to denote the maximum time to reach R' or H'; that is,

    tconcurrent = Max((Time(S2) + ts + tj + tf), Time(C)).

Therefore, the time to reach Q from P depends on the path taken. Obviously the longest path from P to Q is through H':

⟹ P ⇝[tconcurrent + Time(S3) + tr] Q

Using the timing preconditions of the transformation rule of Fig. 1, we now prove that the worst-case execution time is not extended after transforming the code. First, if tconcurrent = Time(S2) + ts + tj + tf:

⟹ P ⇝[Time(S3) + tr + Time(S2) + ts + tj + tf] Q
⟹ P ⇝[Time(S2) + Time(C)] Q   (using the timing constraint (7) of Fig. 1)
⟹ P ⇝[WCT] Q

On the other hand, if tconcurrent = Time(C):

⟹ P ⇝[Time(C) + Time(S3) + tr] Q
⟹ P ⇝[Time(S2) + Time(C)] Q   (using the timing constraint (8) of Fig. 1)
⟹ P ⇝[WCT] Q
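To make the two cases of the proof tangible, here is a purely hypothetical numeric instance (the figures are ours, continuing the illustration of Section 3.2): Time(C) = 30 ms, Time(S2) = 40 ms, Time(S3) = 5 ms, tr = 10 ms, and ts = tf = tj = 2 ms. Conditions (7) and (8) of Fig. 1 hold (2 + 2 + 2 + 10 + 5 = 21 < 30, and 5 + 10 = 15 ≤ 40), and the original worst case through the then branch is Time(C) + Time(S2) = 70 ms. Then tconcurrent = Max(40 + 2 + 2 + 2, 30) = 46 ms, and the longest transformed path takes tconcurrent + Time(S3) + tr = 46 + 5 + 10 = 61 ms ≤ 70 ms, so the worst-case bound is respected.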

Having proven the timeliness of the transformed code after applying the speculative-if rule, we next show that the same property holds for the while-loop transformation.

Theorem 5.2 The speculative-while rule of Fig. 2 does not extend the worst-case execution time of the code after transformation.

Proof: As in the case of the if-statement, we assume that the original program meets its deadline and has the property P ⇝ Q for the code segment while (C) do S2. The following hold according to the speculative-while rule of Fig. 2:

• □P ⟹ Path(P, I1, E1, I2, E2, ..., In, En, I(n+1), Q), again using the proof system in (Harter Jr. 1982), where n is the number of iterations, and Ii and Ei are the states before and after executing the i-th iteration, respectively.

• P ⇝[WCT] Q   (given that the program meets its deadline).

• Ii ⟹ at(C), i = 1, ..., n   (every iteration starts by executing C).

• at(C) ⇝[Time(C)] after(C)

• after(C) ⟹ at(S2)   (S2 is executed serially following C).

• at(S2) ⇝[Time(S2)] after(S2)

• after(S2) ⟹ Ei, i = 1, ..., n

• I(n+1) ⇝[Time(C)] Q

• (n+1)·Time(C) + n·Time(S2) ≤ WCT

After performing the transformation, the state transitions occur at different times. We consider two scenarios separately: the first is the case in which the speculative execution of S2 is committed; the second deals with exiting the loop.

□Ii ⟹ (at(C) ∧ at(Save(S2))), i = 1, ..., n   (C is executed in parallel with S2).
⟹ at(C) ⇝[Time(C)] after(C)
⟹ at(Save(S2)) ⇝[ts + Time(S2) + tj + tf] after(S2)
□Ei ⟹ (after(C) ∧ after(S2))
⟹ Ii ⇝[Max(Time(C), (ts + Time(S2) + tj + tf))] Ei

Let tconcurrent = Max(Time(C), (ts + Time(S2) + tj + tf)).

⟹ Ii ⇝[tconcurrent] Ei

It is essential to prove that the execution time of each iteration, Time(C) + Time(S2), is not extended. If tconcurrent = Time(C):

⟹ Ii ⇝[Time(C)] Ei
⟹ Ii ⇝[Time(C) + Time(S2)] Ei

Similarly, if tconcurrent = ts + Time(S2) + tj + tf:

⟹ Ii ⇝[Time(S2) + ts + tj + tf] Ei

After dividing the timing constraint (8) of the speculative-while rule of Fig. 2 by 2, it is easy to show that ts + tj + tf < Time(C). Thus,

⟹ Ii ⇝[Time(C) + Time(S2)] Ei

When C is false, we need to undo all side effects due to the speculative execution of S2.

□I(n+1) ⟹ (at(C) ∧ at(Save(S2)))
⟹ at(C) ⇝[Time(C)] after(C)
⟹ (after(C) ⟹ at(Restore(S2)))
⟹ at(Save(S2)) ⇝[ts + Time(S2) + tj + tf] after(S2)

However, at(Restore(S2)) is reached only after both after(C) and after(S2) hold:

⟹ I(n+1) ⇝[tconcurrent] at(Restore(S2))
⟹ at(Restore(S2)) ⇝[tr] Q
⟹ I(n+1) ⇝[tconcurrent + tr] Q

Considering the whole execution of the while loop, we conclude that

    P ⇝[tr + (n+1)·tconcurrent] Q.

The next step is to compare the new worst-case execution time (due to the transformation) with the original one and to show that the new WCT is not greater than the original WCT. There are two cases. If tconcurrent = Time(C):

⟹ P ⇝[(n+1)·Time(C) + tr] Q
⟹ P ⇝[(n+1)·Time(C) + Time(S2)] Q   (using the timing constraint (9) of Fig. 2)
⟹ P ⇝[WCT] Q

Similarly, if tconcurrent = ts + Time(S2) + tj + tf:

⟹ P ⇝[tr + (n+1)(Time(S2) + ts + tj + tf)] Q
⟹ P ⇝[tr + Time(S2) + 2ts + 2tj + 2tf + Time(S2)] Q   (for n = 1; by precondition (3) the loop is executed at least once)
⟹ P ⇝[2·Time(C) + Time(S2)] Q   (using the timing constraint (8) of Fig. 2)
⟹ P ⇝[WCT] Q

In the case n > 1, more iterations are executed. Since we have proved that Ei is reachable from Ii in no more than the worst-case execution time of an iteration in the serial, non-transformed execution, adding more iterations will not extend the deadline. Thus

⟹ P ⇝[(n+1)·Time(C) + n·Time(S2)] Q   (for n ≥ 1)
⟹ P ⇝[WCT] Q   (for n ≥ 1)


6 Conclusion and Future Work

We have presented a formal proof of the correctness and safety of compiler transformation rules for speculative execution. We used temporal logic to verify that the rules preserve the semantics as well as the timeliness of a program. We extended the notion of a state in temporal logic to support reasoning about the contents of the program store. Although the main focus of this paper is the speculative execution transformations, the approach is applicable to other code optimization transformations as well.

The transformation rules for speculative execution have been implemented in a platform for complex real-time applications in the Real-Time Computing Laboratory at the New Jersey Institute of Technology (NJIT) and have been applied to actual real-time applications. The results of this validation show the applicability and usefulness of speculative execution for real-time systems (Younis 1996).

We plan to apply the formalism presented in this paper to verify other compiler optimization techniques. We have started to look at the harder problem of performing code improvement in distributed real-time systems. We would like to study the formal verification of the safety of multi-process compiler transformations in distributed environments.

Acknowledgements. This work was done under funding from the Office of Naval Research (grants N00014-92-J-1367 and N00014-93-1-1047) and the National Science Foundation (grant CCR-9402827) at the Real-Time Computing Lab at NJIT, where the first author was a doctoral student, the second and third authors are visiting faculty, and the fourth is a regular faculty member. The authors are indebted to the many constructive comments made by the anonymous referees as well as by the members of the Real-Time Computing Lab at NJIT.

References

Abadi, M., and Lamport, L. (1992). An Old-Fashioned Recipe for Real Time. Research Report 91, Digital Equipment Corporation, Systems Research Center.

Aiken, A., Williams, J.H., and Wimmers, E.L. (1995). Safe: A Semantic Technique for Transforming Programs in the Presence of Errors. ACM Transactions on Programming Languages and Systems, 17(1), 63-84.

Alur, R., and Henzinger, T. (1991). Logics and Models of Real Time: A Survey. Lecture Notes in Computer Science, 600, Real-Time: Theory in Practice, 74-106.

Backus, J., Williams, J.H., and Wimmers, E.L. (1986). The FL Language Manual. Technical Report RJ 5339 (54809), IBM Corp., Armonk, NY.

Bernstein, A., and Harter, P. (1981). Proving Real-Time Properties of Programs with Temporal Logic. Proceedings of the 8th Symposium on Operating Systems Principles, 1-11.

Chung, T.M. (1994). CHaRTS: Compiler for Hard Real-Time Systems. Ph.D. Thesis Proposal, Purdue University, IN, USA.

Cousot, P., and Cousot, R. (1977). Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints. Proceedings of the ACM SIGPLAN, January 1977, 238-252, ACM Press.

Gerber, R., and Hong, S. (1995). Compiling Real-Time Programs with Timing Constraint Refinement and Structural Code Motion. IEEE Transactions on Software Engineering, 21(5).

Gupta, R., and Spezialetti, M. (1995). Busy-Idle Profiles and Compact Task Graphs: Compile-Time Support for Interleaved and Overlapped Scheduling of Real-Time Tasks. Proceedings of the 15th IEEE Real-Time Systems Symposium, San Juan, Puerto Rico, December 1994, 89-97, IEEE Computer Society Press, Los Alamitos, CA, USA.

Harter Jr., P.K. (1982). On the Application of Temporal Logic to the Verification of Real-Time Programs. Ph.D. Thesis, Computer Science Department, University of New York at Stony Brook, NY, USA.

Henzinger, T.A., Manna, Z., and Pnueli, A. (1990). Temporal Proof Methodologies for Real-Time Systems. Proceedings of the 18th Annual ACM Symposium on Principles of Programming Languages, 353-366, ACM Press.

Hoare, C.A.R. (1985). Communicating Sequential Processes. Prentice Hall, NJ, USA.

Katz, M. (1990). Optimistic Concurrency with Rollback: An Alternative When Compile-Time Parallelization Fails. Proceedings of the Workshop on Parallelism in the Presence of Pointers and Dynamically-Allocated Objects, Supercomputing Research Center of the Institute for Defense Analyses, March 1990.

Marlowe, T., and Masticola, S. (1992). Safe Optimization for Hard Real-Time Programming. Proceedings of the Second International Conference on Systems Integration, Special Session on Real-Time Programming, June 1992, 438-446.

Narayana, K.T., and Aaby, A.A. (1988). Specification of Real-Time Systems in Real-Time Temporal Interval Logic. Proceedings of the Real-Time Systems Symposium, Huntsville, Alabama, December 1988, 86-95, IEEE Computer Society Press, Los Alamitos, CA, USA.

Nirkhe, V., and Pugh, W. (1993). A Partial Evaluator for the Maruti Hard Real-Time System. Real-Time Systems, 5(1), 13-30.

Ostroff, J. (1989). Verifying Finite State Real-Time Discrete Event Processes. Proceedings of the 9th International Conference on Distributed Computing Systems, 207-216.

Owicki, S., and Gries, D. (1976). An Axiomatic Proof Technique for Parallel Programs. Acta Informatica, 6, 319-340.

Owicki, S., and Lamport, L. (1980). Proving Liveness Properties of Concurrent Programs. Technical Report, Stanford University, CA, USA.

Park, C. (1993). Predicting Program Execution Times by Analyzing Static and Dynamic Program Paths. Journal of Real-Time Systems, 5(1), 31-62.

Pnueli, A., and Harel, E. (1988). Applications of Temporal Logic to the Specification of Real-Time Systems. Lecture Notes in Computer Science, 331, 84-98.

Stoyenko, A.D. (1987). A Real-Time Language with a Schedulability Analyzer. Ph.D. Thesis, Department of Computer Science, University of Toronto, Toronto, Canada.

Stoyenko, A.D., and Marlowe, T.J. (1992). Polynomial-Time Transformations and Schedulability Analysis of Parallel Real-Time Programs with Restricted Resource Contention. Journal of Real-Time Systems, 4(4), 307-329.

Stoyenko, A.D., Marlowe, T.J., Halang, W.A., and Younis, M.F. (1993). Enabling Efficient Schedulability Analysis through Conditional Linking and Program Transformations. Control Engineering Practice, 1(1), 85-105.

Stoyenko, A.D., Marlowe, T.J., and Younis, M.F. (1995). A Language for Complex Real-Time Systems. The Computer Journal, 38(4), 319-338.

Tsai, G., and McMillin, B. (1991). Formal Methods of Real-Time Systems. Technical Report CSC 91-17, Department of Computer Science, University of Missouri-Rolla, USA.

Venkatesh, G.A. (1989). A Framework for Construction and Evaluation of High-Level Specifications for Program Analysis Techniques. Proceedings of the ACM SIGPLAN '89 Conference on Programming Language Design and Implementation (PLDI '89), Portland, Oregon, June 1989, 1-12.

Whitfield, D., and Soffa, M.L. (1991). Automatic Generation of Global Optimizers. Proceedings of the ACM SIGPLAN '91 Conference on Programming Language Design and Implementation (PLDI '91), Toronto, Canada, June 1991, 120-129.

Younis, M.F., Marlowe, T.J., and Stoyenko, A.D. (1994). Compiler Transformations for Speculative Execution in a Real-Time System. Proceedings of the 15th Real-Time Systems Symposium, San Juan, Puerto Rico, December 1994, 109-117, IEEE Computer Society Press, Los Alamitos, CA, USA.

Younis, M.F. (1996). Safe Code Transformations for Speculative Execution in Real-Time Systems. Ph.D. Dissertation, Department of Computer and Information Science, New Jersey Institute of Technology, Newark, NJ, USA.