
Flang and its Implementation

Andrei Mantsivoda

Department of Mathematics, Irkutsk University Irkutsk 664003, Russia

e-mail: [email protected]

Abstract. In Flang, the functional and logic styles of programming are amalgamated. Flang also contains special tools for solving combinatorial problems of large complexity. In this paper we discuss the main results connected with the development of Flang and its implementations.

1 Introduction

Flang [6] is a functional-logic language containing special tools for solving combinatorial problems. This paper gives an overview of the results obtained during the development of Flang and the Flang system. We consider the following issues:

- brief description of Flang (section 2);
- Flang abstract machine (FAM) and its modifications (section 3);
- special memory management of domains and constraints in Flang based on enumeration of choice points (sections 3.2, 3.3, 3.4);
- compilation of Flang programs (section 4);
- refinements of the general strategy of computations and optimizations (section 5).

2 Flang

2.1 Functions and Logic

In this subsection we describe the functional-logic kernel of Flang. Flang is based on the idea of a non-deterministic function. Non-deterministic functions generalize 'usual' functions in the following way:

- evaluation of functions with unground arguments is permitted;
- the depth-first strategy of computation of functions is used: if the system cannot reduce a goal, it uses the backtracking procedure to look for alternative ways of execution.

This generalization of functions subsumes usual Prolog relations (they are represented by functions with the single value true). On the other hand, we can treat non-deterministic functions as usual functions and thus write purely functional programs. Let us consider some examples of definitions in Flang. We begin with purely functional definitions. The first of them is factorial:


0! ⇐ 1;
X! ⇐ X > 0, X * (X-1)!;

The functional symbol ! is a user-defined unary postfix operator. The next function is append:

append([], X) ⇐ X;
append([X | Y], Z) ⇐ [X | append(Y, Z)];

The naive reverse can be defined as follows:

reverse([]) ⇐ [];
reverse([X | Y]) ⇐ append(reverse(Y), [X]);

Flang also permits higher-order function definitions. For instance, a function app applies a function F to a list:

app( F, [] ) ⇐ [];
app( F, [X | Y] ) ⇐ [F : [X] | app( F, Y )];

The application of a function to a list of arguments is denoted by : . Now,

app( (!), [1, 2, 3, 4, 5] ) = [1, 2, 6, 24, 120].

Logic definitions:

parent(paul, john) ⇐ true;
parent(john, george) ⇐ true;

grandparent(X, Y) ⇐ parent(X, Z), parent(Z, Y);

This program is equivalent to the following program in Prolog:

parent(paul, john).
parent(john, george).
grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

The QuickSort program is an example of an integrated style definition:

partition(X, [], [], []) ⇐ true;
partition(X, [Y | Z], [Y | W1], W2) ⇐ X >= Y, partition(X, Z, W1, W2);
partition(X, [Y | Z], W1, [Y | W2]) ⇐ partition(X, Z, W1, W2);

qsort([], X) ⇐ X;
qsort([X | Y], Z) ⇐ partition(X, Y, W1, W2), qsort(W1, [X | qsort(W2, Z)]);


The relation partition is defined in the logic style. The definition of qsort is functional. But the right part of its second rule contains logic variables W1 and W2 which are absent in the head of the rule.

The next program is an example of a functional-logic definition:

ancient(X, Y) ⇐ parent(X, Y), [X, Y];
ancient(X, Y) ⇐ parent(X, Z), [X | ancient(Z, Y)];

The function ancient returns the list of relatives lying between the ancestor X and the offspring Y in the genealogical tree. While computing this function and searching through the genealogical tree, the Flang system can use backtracking - an action which is quite unusual in functional programming.

2.2 Constraints

In this subsection we briefly describe how constraint solving tools are incorporated into Flang. These tools are based on ideas from [4]. Our experience of solving real-life combinatorial problems shows that the standard strategy of computations based on the Prolog engine (extended by lookahead and forward-checking inference rules) must be refined further. Some of these refinements have been added to Flang.

Almost any serious combinatorial problem needs very special constraints, without which the problem cannot be solved. This means that it is impossible to provide the language with all necessary built-in constraints. On the other hand, the standard tools in Flang do not permit new constraints to be defined efficiently. Therefore, Flang contains special primitive functions which help the user to define the specialized and sophisticated constraints needed to solve problems. Two examples of these primitives are

exclude( List1, List2 ). List1 is a list of domains and List2 is a list of numbers. This function removes the elements of List2 from the domains of List1.

number_of_domains( List1, N ). List1 is a list of domains, N is a number. This function counts the number of domains containing N and belonging to List1.
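As a rough illustration (not Flang code and not the actual implementation), the following Python sketch models the described behaviour of these two primitives over a toy representation of domains as sets; the Domain class and every name in it are assumptions introduced only for this example.

# Toy model of the two constraint-defining primitives described above.
class Domain:
    def __init__(self, values):
        self.values = set(values)

def exclude(list1, list2):
    # Remove the numbers of list2 from every domain in list1.
    for d in list1:
        d.values -= set(list2)

def number_of_domains(list1, n):
    # Count the domains of list1 that still contain n.
    return sum(1 for d in list1 if n in d.values)

ds = [Domain([1, 2, 3]), Domain([2, 3, 4]), Domain([5, 6])]
exclude(ds, [2])
print(number_of_domains(ds, 3))   # prints 2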

The current version of Flang contains 17 primitives for defining constraints. We are now developing a special technique for compiling user-defined constraints.

Flang also provides tools for defining the conditions under which a delayed constraint should be awakened. Sometimes it is necessary to introduce constraints with special awakening conditions. The user, for example, can define a constraint f(D, X) which must be awakened if (i) the domain D is changed or (ii) the value of X is instantiated. Another example is the built-in constraint D1 > D2, which must be awakened if (i) the maximum value of D1 is changed or (ii) the minimum value of D2 is changed. Other examples are easy to find. The main problem in implementing these tools is to find a compromise between expressiveness and efficiency.
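A minimal sketch of one way such wake-up conditions could be modelled is given below; the event names, the DelayedConstraint class and the Solver interface are assumptions of this example, not Flang's actual machinery.

# Delayed constraints with user-selected wake-up conditions (sketch only).
class DelayedConstraint:
    def __init__(self, check, wake_on):
        self.check = check            # callable run when the constraint is awakened
        self.wake_on = set(wake_on)   # events that awaken it, e.g. {"max(D1) changed"}

class Solver:
    def __init__(self):
        self.delayed = []
        self.awakened = []            # plays the role of the stack of awakened constraints

    def delay(self, c):
        self.delayed.append(c)

    def notify(self, event):
        # Called whenever a domain or a variable changes.
        for c in list(self.delayed):
            if event in c.wake_on:
                self.delayed.remove(c)
                self.awakened.append(c)

    def run_awakened(self):
        # Awakened constraints are evaluated before anything else.
        while self.awakened:
            self.awakened.pop().check()

# The constraint D1 > D2 above would be delayed with
# wake_on = {"max(D1) changed", "min(D2) changed"}.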


We have also incorporated other (perhaps less significant) tools into Flang. For instance, we had to add special efficient tools that are suspicious from the theoretical and methodological point of view, but very important in practice. An example of such a tool is the function delete_ff_dstr. It does roughly the same work as delete_ff [4]. The arguments of delete_ff_dstr are lists of domains. The first argument is input and the second one is output:

Smallest_domain == delete_ff_dstr(List, List_without_smallest_dom)

Smallest_domain is the domain (an element of the list List) which has the least number of elements among the members of List. The result of removing the smallest domain from List is saved in List_without_smallest_dom. But in contrast to delete_ff, it does not copy the remainder of List into List_without_smallest_dom; it changes List itself, destructively removing the domain with the least number of elements, and saves the destroyed pointer on the trail. Functions of this kind are necessary for solving combinatorial problems on relatively weak computers.
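The destructive behaviour can be sketched as follows; here domains are modelled as plain Python sets and the trail as a list of undo records, which is an assumption of this example rather than the actual FCM representation.

# Destructive first-fail selection with trailing (illustrative sketch).
def delete_ff_dstr(domains, trail):
    # domains: a list of sets; the smallest one is removed in place.
    i = min(range(len(domains)), key=lambda k: len(domains[k]))
    smallest = domains[i]
    trail.append((domains, i, smallest))   # record the destroyed pointer on the trail
    del domains[i]                         # no copy: the input list itself is changed
    return smallest

def undo(trail):
    # On backtracking, re-insert the destructively removed domains.
    while trail:
        lst, i, dom = trail.pop()
        lst.insert(i, dom)

trail = []
ds = [{1, 2, 3}, {4}, {5, 6}]
print(delete_ff_dstr(ds, trail))   # {4}; ds is now [{1, 2, 3}, {5, 6}]
undo(trail)                        # ds is restored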

Flang also contains standard built-in predicates, for instance indomain, inherited from CHIP [4].

Refinements of the general strategy of computations for working with constraints (such as a special version of intelligent backtracking) are introduced below.

3 Flang Abstract Machine

The Flang Abstract Machine (FAM) is an extension to and a modification of the WAM [10]. It has been used as the basis of the Flang compiler. In this section, to describe the FAM we follow the terminology from [10].

3.1 Architecture of FAM

The main data areas of the FAM are

- Stack (Local Stack)
- Heap (Global Stack)
- Trail
- The area for Registers

A state of the FAM depends on the following registers:

P - program pointer
E - last environment
B - last choice point
A - top of stack
Tr - top of trail
H - top of heap
M - mode of unification (write/read)


S - structure pointer

R1, R2, ..., Rn - registers for passing parameters

The permanent variables will be denoted by Y1, ..., Ym. To simplify the instructions we also use a register BP (backtracking program pointer). It corresponds to nothing in real execution and can be characterized as a compile-time register. The activation record in the Stack of the FAM has the following form:

Continuation:           CP, CE
Backtrack state:        B, BP, TR, H
Number of arguments:    n
Arguments:              A1, A2, ..., An
Permanent variables:    Y1, Y2, ..., Ym

We did not intend to design an abstract machine which is completely independent of the architecture of a real computer. The problem is that the type of architecture affects not only the efficiency of the FAM instructions: different kinds of architecture can lead to different optimization principles. But there is an invariant part of the FAM which is the same for all kinds of computer architecture.

We have implemented a version of the FAM for the IBM PC. Since this computer has a very small number of hardware registers and these registers are specialized, the Flang compiler does not allocate the main FAM registers dynamically, but uses specific hardware registers for them. For instance, P is allocated in IP, E in BP, A in SP, H in BX. The other registers are allocated in main memory.

To improve the performance of the produced code in the case of arithmetic computations, the system uses one more register T (the temporary accumulator register). It is allocated in the hardware registers AX and CX, and the set of FAM instructions is extended with special instructions dealing with this register.


3.2 Flang Constraint Machine

To implement constraints, domains and a non-deterministic strategy of computations, we need to extend the FAM with special tools. We call this extension the Flang Constraint Machine (FCM). The new capability of the FCM is to enumerate choice points (CPs). Any CP that ever appears in the process of computation has a unique number. We say that a CP is alive if it is in the local stack of the FCM. A dead CP is one that has earlier been popped from the local stack by backtracking. A CP number can correspond to an alive or to a dead CP. There are two new data areas in the FCM:

- A table of choice points

- A stack of awakened constraints

The table of CPs records whether the CP with a given number is alive or dead. The stack of awakened constraints contains previously delayed constraints that are ready for evaluation. If this stack is not empty, the FCM calls constraints from it first.

The state of the FCM depends on its registers. Some of them are inherited from the WAM, and some are new.

The activation record in the local stack is extended in the FCM by a new field - the number of the previous CP (NCP). This field contains the number of the CP whose address is saved in the field B of the choice point. The number of the active CP is kept in a new register of the Flang machine, NC.

3.3 Finite Domains

A domain in FCM has the following structure:


DOMAIN
    basis
    length
    delayed_constraints  -->  Cnstr_1, ..., Cnstr_m
    cardinal
    last_CP_for_domain
    last_CP_for_constraints
    element[1..length]

This structure represents a segment domain initially defined as

{basis, ..., basis + length - 1}.

In this structure:

basis is the value of the first element in the domain;
length is the initial number of elements in the domain;
cardinal is the number of currently alive elements of the domain;
delayed_constraints is the pointer to the list of delayed constraints containing this domain;
last_CP_for_domain is the number of the last CP at which the domain was changed;
last_CP_for_constraints is the number of the last CP at which the list of delayed constraints was changed;
element[] is the array of elements.

The fields last_CP_for_domain and last_CP_for_constraints make it possible to save information about changes in the domain on the trail only once during the lifetime of each CP. The situation in which a saving must be made is recognized by comparing the number of the current CP with the number of the CP at which this domain was last changed. The field last_CP_for_constraints contains the number of the CP at which the list of delayed constraints was last changed. The field last_CP_for_domain is the number of the last CP at which elements were removed from the domain. This allows redundant savings on the trail to be avoided, since in many cases the system changes domains many times between two settings of CPs.

element[i] contains the number of the CP which was active when the ith element of the domain was removed. If element[i] is equal to 0 or to the number of a dead CP, then the ith element belongs to the domain. On the other hand, if element[i] is the number of an alive CP, then the ith element does not belong to the domain.
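The membership test this encoding gives can be sketched as follows; the Python representation (a Domain class with an element array and a boolean table of alive CPs) is an assumption used only to illustrate the rule just stated.

# Element membership based on choice point numbers (illustrative sketch).
class Domain:
    def __init__(self, basis, length):
        self.basis = basis
        self.length = length
        self.element = [0] * length      # 0 means "never removed"

def in_domain(dom, i, alive):
    # alive[cp] is True while the choice point with number cp is on the stack.
    cp = dom.element[i]
    return cp == 0 or not alive[cp]      # removed under a dead CP => still present

def remove(dom, i, current_cp):
    dom.element[i] = current_cp          # element i disappears while current_cp is alive

alive = [False] * 8
dom = Domain(basis=1, length=5)
alive[3] = True                          # CP number 3 is created
remove(dom, 0, 3)
print(in_domain(dom, 0, alive))          # False
alive[3] = False                         # backtracking only marks CP 3 as dead ...
print(in_domain(dom, 0, alive))          # True: the element reappears automatically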

Note that the system does not need to do anything to restore elements of domains during backtracking. It is enough to mark popped CPs as dead. Because of this and some other advantages, the FCM turned out to be very convenient for implementations on small and weak computers (such as the IBM PC AT). For example, it allows an explosion of the trail to be avoided. But this is not the only benefit of the approach: the performance of a system based on the FCM is very high. An IBM PC AT interpreter using the memory management described above is about 10 times faster than the well-known CHIP interpreter [4]. We hope that the main advantages of this approach will be brought out by the Flang compiler which is being developed now. The results of evaluating the method are demonstrated in the following sections of the paper.

3.4 Table of Choice Points

This table causes the main problems for the implementation. The table of CPs is represented in the Flang system by a bit vector whose length is equal to the maximum possible number of a choice point. This straightforward representation of the table is adequate for real-life computations, because the table of CPs can always be compressed when the CP numbers are exhausted: the system removes all dead choice points from the table and re-enumerates the alive ones. This can be done in one pass.

For instance, in the current version of the Flang interpreter the maximum number of CPs is 65535 (2^16 - 1), so the length of the bit vector is 8 KB. The interpreter takes, on average, about 5 minutes to exhaust all CP numbers. The compression of the table of choice points and the re-enumeration take, on average, 1 second. This means that the system spends only about 0.3% of the total computation time on compression.
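The one-pass compression can be sketched as follows; the bit vector is modelled as a Python list of booleans, and the explicit renumbering map returned here (to be applied to the CP numbers stored in the stack and in domains) is an assumption of the example.

# One-pass compression of the table of choice points (sketch).
def compress(alive):
    # alive: bit vector indexed by CP number; True = alive, False = dead.
    renumber = {}
    next_no = 1
    for old_no, is_alive in enumerate(alive):
        if is_alive:
            renumber[old_no] = next_no   # alive CPs keep their relative order,
            next_no += 1                 # so comparisons of CP numbers stay valid
    new_alive = [False] + [True] * (next_no - 1)
    return new_alive, renumber           # dead CP numbers simply disappear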

4 Compilation of 'pure' Flang

In this section we describe the main steps of the compilation of Flang programs (without constraints) [7]. The compiler performs the following steps:

- translation of a Flang source program into a standard form;
- global dataflow analysis;
- translation of the transformed Flang program into the intermediate code of the FAM;
- translation into the native code of a target computer.


The first step of the compilation process is transformation of a Flang source program into the standard form. We demonstrate this step, using the definition of the function factorial (see section 2).

The main hereditary defect of the FAM is that it cannot manipulate nested calls of functions. So, before translating into the FAM code, the compiler has to transform the source program to get rid of terms of the form f(... g(...) ...), where the call of the function g is an argument of the call of f. In the definition of factorial the compiler transforms the term X * (X-1)!. This procedure of removing nested calls is known as flattening [2].

To demonstrate the main idea of flattening we use the standard Prolog built-in relation is. Using it, the second rule of the definition is transformed into

X! ⇐ X > 0, V1 is X-1, V2 is V1!, V3 is X * V2, V3;

where V1, V2 and V3 are new variables. The result of the computation is saved in the variable V3. In the general case, given a term of the form f(... g(...) ...), the compiler transforms it into something like

V is g(...), f(... V ...)

where V is a new variable. Variables like V have special features which allow some important optimizations to be performed.
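A sketch of this flattening step is shown below; the term representation (nested tuples) and the fresh-variable naming scheme are assumptions of the example, not the compiler's internal form.

# Flattening nested calls f(... g(...) ...) into "V is g(...)" goals (sketch).
from itertools import count

fresh = count(1)

def flatten(term, out):
    # term is either a leaf (variable/constant) or a tuple (functor, [args]);
    # lifted goals of the form (V, "is", call) are appended to out.
    if not isinstance(term, tuple):
        return term
    functor, args = term
    flat_args = []
    for a in args:
        fa = flatten(a, out)
        if isinstance(fa, tuple):              # nested call: lift it out
            v = "V%d" % next(fresh)
            out.append((v, "is", fa))
            flat_args.append(v)
        else:
            flat_args.append(fa)
    return (functor, flat_args)

# Example: X * (X-1)!  becomes  V1 is X-1, V2 is V1!, then X * V2
goals = []
top = flatten(("*", ["X", ("!", [("-", ["X", 1])])]), goals)
print(goals)   # [('V1', 'is', ('-', ['X', 1])), ('V2', 'is', ('!', ['V1']))]
print(top)     # ('*', ['X', 'V2'])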

4.1 Global Dataflow Analysis

The global analysis of a Flang program includes the following steps:

- Analysis of arguments.

- Analysis of choice points in a program.

- Analysis and separation of functions and predicates.

- Choosing methods of returning values for 'output' arguments.

The first step (the analysis of arguments) gives information which is very important for optimizations. Our method of analysis [7] is based on abstract interpretation [5]. For all functions of a compiled Flang program, the analyzer computes the types of the arguments. Many different algorithms for type computation, with different levels of complexity, can be chosen (some of them are described in [8] and [9]). In the current version of the Flang compiler we use the following simple lattice of types:

                unknown
               /       \
       unground         free
              |          |
         ground          |
               \        /
                 empty

160

In this lattice:

- free - an argument is a free variable; it means that all occurrences of the argument are free.

- unground - the argument is a term containing free variables.
- ground - there are no occurrences of free variables in the argument.
- unknown - different occurrences of the argument have different types and at least one of them has the type free (otherwise, the type of the argument is unground).

In our compiler we use a fast but at the same time quite powerful algorithm for this part of the global analysis, which can be characterized as 'tracing free variables'. The information received from this step of the global analysis is very significant. For example, when all variables of a program are either free or ground, the program can be executed without the use of the Trail.
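One way to read the lattice operationally is as the join (least upper bound) used when merging the types of different occurrences of an argument; the ordering encoded below is reconstructed from the descriptions above, and the code is only an illustrative sketch.

# Join (least upper bound) in the type lattice sketched above.
def join(t1, t2):
    if t1 == t2:
        return t1
    if t1 == "empty":
        return t2
    if t2 == "empty":
        return t1
    if {t1, t2} == {"ground", "unground"}:
        return "unground"
    # any remaining mixture involves free together with a non-free type
    return "unknown"

print(join("ground", "unground"))   # unground
print(join("free", "ground"))       # unknown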

The next step is the analysis of choice points of a program. During this analysis, for every function of a program the Flang compiler determines whether the function is deterministic or not. The information that some variables of a program have the type ground or unground lets the system make a more refined analysis of choice points and reduce their number.

The system also analyzes whether the returned value of a function is used anywhere in the program or not. This analysis makes it possible not to lose time returning unnecessary values when a function plays the role of a predicate.

The last part of the global analysis chooses methods for returning values through 'output' arguments. In Flang there are two ways to return values. First, we can use free variables (as in Prolog). Second, functions themselves have their own values (as in functional programming). In both cases, the compiler applies one of two different methods of returning values during execution of a program: the return-by-value and the return-by-pointer techniques (the second is the standard Prolog method). The first technique is simpler and generates fewer references, but it makes the tail recursion optimization impossible. So, the following scheme is used in the compiler: it recognizes all rules where the tail recursion optimization can be used, and only for these rules is the return-by-pointer technique applied.

The most essential information received from the global analysis for each function f of arity n, whose definition consists of m rules, contains the following data:

< (t0, t1, t2, ..., tn), p, d, c, (tr1, d1, c1), ..., (trm, dm, cm) >,

where

ti (i = 0, ..., n) ∈ {ground, unground, free-by-value, free-by-pointer, unknown};

p ∈ {function, predicate};

d, d1, ..., dm ∈ {determ, non-determ};

c, c1, ..., cm ∈ {set-choice-point, no-choice-point};

tr1, ..., trm ∈ {tail-recursion-is-possible, tail-recursion-is-impossible}.
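For concreteness, such a record could be held in a structure like the following; the field names and the Python representation are purely illustrative assumptions, not the compiler's internal format.

# A possible in-memory form of the per-function analysis record (sketch).
from typing import NamedTuple, Tuple

class RuleInfo(NamedTuple):
    tail_recursion: str   # tail-recursion-is-possible | tail-recursion-is-impossible
    determinism: str      # determ | non-determ
    choice_point: str     # set-choice-point | no-choice-point

class FunctionInfo(NamedTuple):
    arg_types: Tuple[str, ...]     # t0 ... tn
    kind: str                      # function | predicate
    determinism: str               # d
    choice_point: str              # c
    rules: Tuple[RuleInfo, ...]    # (tr_i, d_i, c_i) for each of the m rules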

4.2 Compilation into FAM

The compilation into the FAM goes independently for each rule of a Flang program (with the use of the information received from the global analysis). To describe the compilation process we should introduce the notions of the left and the right parts of a rule. The left part of a rule is the head of the rule and all goals before the first user-defined goal. For instance:

X! ⇐ X > 0, V1 is X-1, V2 is V1!, V2 * X;

The module of the Flang compiler that translates a single rule performs the following steps in sequence:

- analysis of unification in the head of the rule;
- generation of an appropriate try-instruction for the rule;
- generation of get-instructions;
- generation of instructions for environment allocation;
- analysis of the operations and register manipulations that should be performed between the calls of goals in the right part of the rule, and generation of the corresponding instructions.

An important property of the compiler is its capability to avoid choice point creation in the code of a deterministic function. The Flang compiler minimizes the number of choice point creations and, when possible, uses only the necessary part of a choice point.

The careful work with choice points allows three different types of backtracking to be used:

- branching - no need to restore values and no choice point;
- near backtracking - the system restores the Heap and the Trail states but not the registers;
- far backtracking - the standard complete backtracking.


To conclude this section, several examples of FAM instructions are described. To explain the semantics of the instructions we use the terminology from [10]. In the following, any variable R has the form Var(t, v), where t ∈ {atom, int, str, ref}, t = Tag(R) and v = Value(R). Ref(Rn) denotes an object whose address is stored in Rn. Compile-time operations are enclosed in square brackets '[' and ']'. We hope that almost all the instructions below are self-evident. The group of instructions det... is used to compile deterministic definitions. The instruction get_atom An, Rn is used when the global analysis shows that the argument Rn has the type ground (otherwise, the instruction get_atom_first is applied). In get_structure, Rn should have the type ground too.

Deref(Rn) denotes the operation of dereferencing.

try_me_else C
    if( A > StackEnd ) goto Error;
    CE := E; E := A - env_offset;
    CE(E) := CE; BP(E) := C;
    TR(E) := TR; H(E) := H;
    [ BP := C; ]

retry_me_else C
    BP(E) := C;
    [ BP := C; ]

trust_me
    BP(E) := Fail;
    [ BP := Fail; ]

dettry_me_else C
    if( A > StackEnd ) goto Error;
    CE := E; E := A - env_offset;
    CE(E) := CE;
    [ BP := C; ]

detretry_me_else C
    [ BP := C; ]

dettrust_me
    [ BP := Fail; ]

return
    C := CP(E); A := E + env_offset;
    E := CE(E); goto C;

execute C
    A := E + env_offset; E := CE(E);
    goto C;

save N
    A := E - (const + var_size * N);

get_atom_first An, Rn
    if( Rn <> Var(atom, An) ) {
        if( Tag(Rn) <> ref ) goto BP;
        else {
            Ref(Rn) := Var( atom, An );
            Push( TRAIL, Ref(Rn) );
        }
    }

get_atom An, Rn
    if( Rn <> Var(atom, An) ) goto BP;

get_structure Sn, Rn
    if( Tag( Rn ) <> str  or  Ref(Rn) <> Var(str, Sn) ) goto BP;
    else S := Value( Rn ) + var_size;

picktvar Rn
    Rn := Ref( S ); S := S + var_size;

movereg Rn, Rm
    Rm := Rn;

movereg_deref Rn, Rm
    Rm := Deref( Rn );

bldtval Rn
    Ref( S ) := Rn; S := S + var_size;

put_structure Sn, Rn
    Push( HEAP, Sn ); Rn := Var( ref, H );
    S := H + var_size;


5 Intelligent Backtracking Strategy

Perhaps the main advantage of our approach is that the DOMAIN structure keeps not only the information on whether an element has been removed from the domain or not, but also extra information - the time at which it was removed. This data allows many important optimizations to be implemented. We briefly describe here a strong refinement of the general strategy of computations based on the enumeration of choice points. It was introduced in [1]. The main idea of this refinement is to backtrack not to the last choice point, but to the relevant choice point which is really able to improve the situation. The best way to use this optimization is to apply it to the execution of programs containing disjunctive constraints (such as time scheduling programs, etc.).

Scheduling problems can very often be formulated in terms of a partial order - some events to be scheduled depend on each other, and some do not. So, a solution of such a task can be represented as a lattice with special properties. On the other hand, Prolog enforces a linear order on alternatives (represented by choice points in the stack). This contradiction between the nature of the problem and the Prolog machine very often leads to superfluous computations. As an example, let us consider the following program

p ⇐ set_disj( D1 ), set_disj( D2 ), check( D1 );

set_disj( D ) ⇐ D >= 3;
set_disj( D ) ⇐ D <= 2;

check( D ) ⇐ D = 1;

In this program, D1 and D2 are domains, D1 = D2 = {1, 2, 3, 4, 5, 6}. Inequalities and equality are treated as partial lookahead constraints (see [4]).

Executing this program, the FCM calls the disjunctive constraint set_disj( D1 ), which reduces the domain D1 (D1 becomes equal to {3, 4, 5, 6}), and sets a choice point for the other alternative of set_disj (we denote this choice point CP1). Execution of set_disj( D2 ) reduces the domain D2 and sets a choice point CP2. Since the domain D1 has been reduced, execution of check( D1 ) fails and the system returns to the choice point CP2. It is easy to see that CP2 is not able to change the value of D1, so backtracking to CP2 is useless. Hence CP2 can be skipped and, without losing completeness, the system may return to CP1. In this program, though the choice points CP1 and CP2 are linearly ordered, the corresponding domains D1 and D2 do not depend on each other.

Intelligent backtracking can be regarded as a method of establishing a partial order on choice points. While backtracking, it recognizes independent choice points, like CP2 in the previous example, and skips them.


Let us introduce the general idea of the intelligent backtracking. Consider the execution of a constraint

c(D1, ..., Dn),

where D1, ..., Dn are domains. Suppose that the execution of this constraint fails. In the previous section it was mentioned that the Flang constraint machine enumerates choice points. The later a choice point is pushed onto the stack, the greater its number. For any domain Dk, k = 1, ..., n, we denote by CPk the number of the choice point at which the last change (reduction) of Dk occurred. By CPdelay we denote the number of the CP at which the constraint c was delayed. The maximum choice point number for this constraint is defined as follows:

CPmax = max(CP1, ..., CPn, CPdelay).

To successfully solve this constraint, the system needs to change at least one Dk. But it is easy to see that no choice point with a number greater than CPmax could change the Dk's (or remove c itself).

Suppose that, during the computation of some task, the system calls a predicate p with the definition

p(1) ⇐ F1;
p(2) ⇐ F2;
...
p(m) ⇐ Fm;

Suppose also that every alternative p(i) of the predicate p fails due to some constraint c(i)(D1, ..., Dn), which either occurs in the ith clause of this definition or was earlier delayed and is now awakened by the system. Let CP(i)max be the maximum choice point number for c(i)(D1, ..., Dn). Obviously, if the current call of p fails, it is useless to return to a choice point with a number greater than

max(CP(1)max, ..., CP(m)max),

since no choice point with a greater number can change the environment for p, and thus the next attempt to execute p would fail again.

Now we are ready to state the main rule for the intelligent backtracking:

If the execution of a subgoal p fails, the system returns not to the last choice point, but to the choice point with the number

CPp_max = max(CP(1)max, ..., CP(m)max).

We say that CPp_max is the maximum choice point number for the subgoal p. The intelligent backtracking strategy is a well-known idea in logic programming (see [3]), but an efficient implementation of this strategy is a very difficult problem. In the case of domains, the intelligent backtracking optimization permits an implementation, based on the FCM, that is relatively cheap in time and space [1]. On average, this optimization takes about 5% of the total computation time.
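The two maximum computations can be restated in code as the following sketch; the dictionary last_change, the parameter names and the function names are assumptions introduced only for this illustration.

# Selecting the target choice point for intelligent backtracking (sketch).
def cp_max_for_constraint(constraint_domains, delay_cp, last_change):
    # last_change[d]: number of the CP at which domain d was last reduced;
    # delay_cp: number of the CP at which the constraint was delayed.
    return max([last_change[d] for d in constraint_domains] + [delay_cp])

def cp_max_for_subgoal(per_alternative_cp_max):
    # per_alternative_cp_max: CP_max of the constraint that made each
    # alternative of the failed subgoal fail.
    return max(per_alternative_cp_max)

# On failure of a subgoal p the machine backtracks directly to the choice point
# with this number, skipping the younger, independent choice points in between.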


6 Conclusion and Future Work

The results discussed in this paper have been used to develop efficient implementations of Flang. We have developed a compiler for the functional-logic kernel of the language which generates extremely fast code. An interpreter of Flang with constraints has also been developed; it is based on the memory management described above. Our investigations are now moving in the following directions:

- Further refinements of the general strategy of computations. The behavior of large combinatorial problems is very sophisticated. The standard Prolog engine is too coarse and weak to be adequate for solving them. We need an engine which is more sensitive to the nature of the problems. Intelligent backtracking is just the first step in this direction.

- Compilation of constraint programs. We are now developing a compiler of Flang with constraints, while trying to preserve the high speed of programs in 'pure' Flang. We are also developing a special method of global analysis for constraint programs. This analysis will help to improve the performance of the Flang system.

7 Acknowledgments

I thank Manfred Meyer (DFKI, Kaiserslautern), who drew my attention to the world of constraint logic programming. I also thank the members of the research group developing the Flang system, especially I. Abdrakhimov, S. Petukhin and A. Weimann.

References

1. I. Abdrakhimov, A. Mantsivoda. Intelligent Backtracking in Flang. Technical Report, Irkutsk University, 1993.

2. H. Boley. A Relational/Functional Language and its Compilation into the WAM. SEKI Report SR-90-05, University of Kaiserslautern, 1990.

3. M. Bruynooghe. Intelligent Backtracking Revisited. Robinson Festschrift.
4. P. Van Hentenryck. Constraint Satisfaction in Logic Programming. The MIT Press, Cambridge, 1989.
5. G. Janssens, M. Bruynooghe. On Abstracting the Procedural Behavior of Logic Programs. Proc. 5th Russian Conf. on Logic Programming, Lecture Notes in Artificial Intelligence, Springer, pp. 240-257.

6. A. Mantsivoda. Flang: A Functional-Logic Language. Lecture Notes in Computer Science 567, Processing Declarative Knowledge (eds. H. Boley and M. M. Richter), Springer, 1991, pp. 257-270.

7. A. Mantsivoda, V. Petukhin. Compiling Flang. Lecture Notes in Computer Science 641, Compiler Construction (eds. U. Kastens, P. Pfahler), Springer, 1992, pp. 297-311.

8. P. L. Van Roy. Can Logic Programming Execute as Fast as Imperative Programming? Ph.D. Dissertation, University of California at Berkeley, November 1990.


9. A. Taylor. High Performance Prolog Implementation. Ph.D. Dissertation, Basser Department of Computer Science, University of Sydney, June 1991.

10. D. H. D. Warren. An Abstract Prolog Instruction Set. Technical Note 309, SRI International, Menlo Park, CA, October 1983.