Workflow Early Start Pattern and Future's Update Strategies in ProActive Environment
E. Zimeo, N. Ranaldo, G. Tretola
University of Sannio - Italy
Outline
• Introduction
• Workflow Early Start Pattern
• Future's Update Strategies
• Conclusions
Early Start Workflow Pattern

Background
• A Workflow Management System is able to execute distributed applications described as processes composed of a set of activities
• Activities are functionalities provided by participants distributed over the Internet
• The workflow engine is the component that coordinates the process execution
Introduction
Objective: to improve the Workflow Management System
• Improvement of performance
• Ease of modelling
Focus: distributed applications composed of resources handled as services or sub-processes, modelled at a coarse-grained level
Problem
In the majority of workflow languages, processes can be seen as the combination of:
• Serial activities (Sequence pattern)
• Parallel activities (And-split pattern)
• Sequences are the critical point for performance improvement
• Several research efforts aim at enhancing performance by improving sequential execution
• Anticipation of activities is a key issue in obtaining performance enhancement
• Anticipation means that an activity is enacted before its predefined enactment time
State of the Art
Two approaches:
• Improve process execution by changing it at design time
  - Coo-Flow allows for modelling task anticipation and intermediate-result propagation
• Improve process enactment by modifying the way processes are executed
  - Micro Workflow introduces future objects in workflow management
  - SWFL exploits multilevel parallelism in workflow enactment
Design Time Approach
• The former approach requires analysis of activities at a greater level of detail: fine-grained analysis
• If the activities cannot be considered atomic, their internal structure can be analysed to improve performance
• Intrinsic parallelism can be exploited
[Figure: sequence A → B, with B decomposed into B' and B'']
Activity B can be decomposed into two sub-activities:
• B' is the independent sub-activity
• B'' is the dependent sub-activity
Equivalent Flow
[Figure: equivalent flow — an And-Split lets A run in parallel with B', followed by B'']
• The process description may be modified
• Partial overlapping of the process shortens the total process time
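The benefit of the equivalent flow can be quantified with a simple makespan comparison. The durations below are hypothetical, chosen only to illustrate why overlapping A with the independent part B' shortens the total process time:

```java
// Illustrative makespan comparison (hypothetical durations):
// decomposing B into an independent part B' and a dependent part B''
// lets B' overlap A via an And-split.
public class OverlapMakespan {
    public static void main(String[] args) {
        double tA = 10, tBi = 6, tBd = 4;   // durations of A, B', B''

        // Plain sequence: A, then all of B.
        double sequence = tA + tBi + tBd;            // 10 + 6 + 4 = 20

        // Equivalent flow: A and B' run in parallel,
        // then B'' runs after both complete.
        double overlapped = Math.max(tA, tBi) + tBd; // max(10, 6) + 4 = 14

        System.out.println(sequence + " vs " + overlapped);
    }
}
```

The saving equals the part of B' that hides behind A, so the shorter the independent sub-activity relative to A, the closer the overlapped makespan gets to tA + tB''.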
Consequences
• Fine-grained analysis can be used to obtain anticipation at design time
• At least one point exists, in the depending activity, that signals the beginning of data dependencies from the preceding activity
• Problems:
  - Additional design effort
  - Finding the dependence point could be difficult or even impossible
  - The internal structure may not be accessible
  - Several dependence points could exist
  - The modified process could become more complex
Solution
• Our proposal is to use a run-time approach to relax the sequence constraints:
  - Partial concurrency can be obtained by overlapping the execution of activities at run-time
  - Many modelling situations can be seen as intermediate between serial and parallel
  - Use of data-flow synchronization during execution
Fine-grained concurrency at run-time
Resulting Execution
[Figure: execution timelines. Sequence: E, A, then B's independent and dependent data operations. Fine-grained concurrency: B's independent data operations overlap A, and a waiting state precedes the dependent data operations]
Sequence vs fine-grained concurrency
Modelling Technique
• To ease modelling, we have defined a new workflow description pattern: the Early Start Pattern
• Activities in the pattern can be executed by the engine with fine-grained concurrency
<xsd:element name="Transition">
  <xsd:complexType>
    …
    <xsd:attribute name="FlowType" type="xsd:string" use="optional"/>
    …
  </xsd:complexType>
</xsd:element>
Requirements
• Use of a system that can dynamically discover the dependence point:
  - Asynchronous invocation, returning a placeholder for the result not yet computed
  - The placeholder can be forwarded to subsequent activities as an actual parameter, satisfying the activation conditions and thus anticipating the activation
  - Activities that receive the placeholder and try to access the data must be stalled until the data is ready to be used
  - The placeholder must be updated as soon as possible for each activity that uses it
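ProActive realises these requirements transparently in the middleware; as a rough analogy only (not ProActive's actual API), the four requirements can be sketched with `java.util.concurrent.CompletableFuture`, where the future plays the role of the placeholder and `join()` plays the role of the waiting state:

```java
import java.util.concurrent.CompletableFuture;

public class EarlyStartSketch {
    // Requirement 1: asynchronous invocation — activity A returns
    // immediately; the CompletableFuture is the placeholder for the
    // not-yet-computed result.
    static CompletableFuture<String> elaborateString() {
        return CompletableFuture.supplyAsync(() -> {
            sleep(100); // simulate a long-running activity
            return "result of A";
        });
    }

    // Requirement 2: the placeholder is forwarded to the next activity
    // as an actual parameter, so B can be enacted before A completes.
    static String dependentActivity(CompletableFuture<String> placeholder) {
        String independent = "B's independent work"; // overlaps with A
        // Requirement 3: accessing the data stalls the thread until the
        // value is ready. Requirement 4: join() returns as soon as the
        // placeholder has been updated with the computed result.
        String value = placeholder.join();
        return independent + " + " + value;
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) {
        CompletableFuture<String> f = elaborateString(); // A enacted
        System.out.println(dependentActivity(f));        // B anticipated
    }
}
```

The difference from ProActive is that here the blocking point is explicit (`join()`), while ProActive's futures block transparently on first access to the value.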
ProActive
• To implement Early Start we used ProActive
• It satisfies the four requirements:
  - Invocation on an Active Object returns a symbolic placeholder: the Future Object
  - The Future Object can be forwarded
  - Threads trying to access a Future Object before it is updated are placed in a waiting state
  - The Future Object is updated with the computed result
Evaluation
• Worst case: 3.5%
• Best case: 43%
<WorkflowProcess Id="Example1">
  <ProcessHeader DurationUnit="S"/>
  <Activities>
    <Activity Id="A">…</Activity>
    <Activity Id="B">…</Activity>
  </Activities>
  <Transitions>
    <Transition Id="AB" From="A" To="B" FlowType="early"/>
  </Transitions>
</WorkflowProcess>
[Figure: deployment time [ms] (0–25000) for RMI, ProActive worst case and ProActive best case]
[Figure: process Start → A → B → End and its enactment timelines]
public void process() {
    String resultString;
    resultString = A.elaborateString();
    C.printString(resultString);
}
Evaluation (2)
[Figure: test workflow (codify, and-split, upperCase, reverse, print, and-join, fusion, print); execution time: RMI Services 20.856 vs ProActive Services 17.764]
Improvement: 15%
Considerations
• Fine-grained analysis at design time is more difficult and can even be impossible
• Fine-grained concurrency can be used to improve performance without increasing the modelling effort
• The Early Start pattern keeps the modelling simple and ensures automatic optimization at run-time
Implementation Issues
[Figure: ideal enacting — B's independent data operations fully overlap A — vs real enacting, where a waiting state precedes the dependent data operations]
Eager Forward Strategy
[Figure: the Engine invokes A, receiving a Future, and forwards it to B; once A computes the Value, the Value is propagated along the same forwarding path to every holder of the Future]
Current ProActive implementation
On-Demand Strategy
[Figure: a Central Memory holds the computed Value; the Engine, A and B hold Futures and obtain the Value from the Central Memory on demand]
Future's Update Strategies

Future Updating Techniques
• Forward vs Home (Who?)
  - Forward: updating is the responsibility of the object that forwards the future
  - Home: updating is the responsibility of the object that computes the value of the future
• Eager vs Lazy (When?)
  - Eager: all the futures are updated as soon as the value is computed
  - Lazy: the futures are updated only for the objects that require the value
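The two home-based policies can be sketched as follows. The `Home` and `FutureCopy` classes are hypothetical illustrations, not ProActive's real types: an eager home pushes the value to every registered future as soon as it is computed, while a lazy home is contacted only when an activity actually needs the value:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// The "home" is the object that computes the value of the future.
class Home<V> {
    private V value;                              // null until computed
    private final List<FutureCopy<V>> registered = new ArrayList<>();

    void register(FutureCopy<V> copy) { registered.add(copy); }

    // Eager-home: as soon as the value is computed,
    // push it to every registered future copy.
    void computeEager(V v) {
        value = v;
        for (FutureCopy<V> c : registered) c.update(v);
    }

    // Lazy-home: just store the value; copies pull it on demand.
    void computeLazy(V v) { value = v; }

    Optional<V> pull() { return Optional.ofNullable(value); }
}

// A future copy held by some activity the future was forwarded to.
class FutureCopy<V> {
    private final Home<V> home;
    private V cached;

    FutureCopy(Home<V> home) { this.home = home; home.register(this); }

    void update(V v) { cached = v; }   // called by an eager home

    // Called only when the activity really needs the value:
    // under the lazy policy this is the first contact with the home.
    V get() {
        if (cached == null) cached = home.pull().orElseThrow();
        return cached;
    }
}
```

The trade-off shown later in the experiments follows directly from this sketch: eager-home pays one update message per registered copy even if only one copy ever calls `get()`, while lazy-home pays a round-trip only for the copies that do.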
Eager Home-Based
[Figure: the home AO sends UPDATE messages carrying Value_r1 and Value_r2 to the AOs A, B and C, replacing their copies of Future_r1 and Future_r2]
The AO that computes the value is responsible for updating all the futures.
Further Consideration
[Figure: A invokes Future_r1 = b.create(size) on B; the Future Object is forwarded to C, D and E; updating sends Value_r1 to all of them]
• All the AOs that receive a Future Object ask for updating, but not all of them use the value
• Only E uses the value of the Future Object
• Is it worth updating all nodes?
Lazy Home-Based
[Figure: Future_r1 is forwarded to C, D and E, but only E sends a Request Updating message and receives Value_r1]
• Only E is updated
• Update only who needs the value
• Proposal: update only the nodes that use the value of the Future Object
Experimentation (1)
Testing application
[Figure: A invokes Future_r1 = b.create(size) on B; the Future Object is forwarded to C1, C2, C3 and C4, which call c1.use(r1), c2.use(r1), c3.use(r1) and c4.use(r1)]
We measured the time needed to update the value to the objects C1, C2, C3 and C4 in different cases.
Experimentation (2)
All nodes require updating
[Figure: update time in milliseconds vs number of nodes, with a 1 MB Future Object]
• Lazy-Home and Eager-Home perform the same
• Both of them are better than Eager-Forward
Experimentation (3)
Only 1 node needs the update
[Figure: update time in milliseconds vs number of nodes, with a 1 MB Future Object]
• Lazy-Home is better than Eager-Home by 29%
• Lazy-Home is better than Eager-Forward by 36%
• Eager-Home is better than Eager-Forward by 1.5%
Conclusions
• Workflow and Web Services composition:
  - Improving performance with run-time concurrency
  - Easy modelling with the Early Start Pattern
• ProActive middleware:
  - Eager-Home & Lazy-Home updating strategies
  - Experimentation and comparison of the different strategies
Future Works
• Workflow and Web Services composition:
  - Extension of asynchronous calls to Web Services, with a client-side invoker
  - Web service model extension to transfer ProActive features to Web Services
  - Introducing the possibility of "partial result return"
• ProActive middleware:
  - Eager-Home strategy with multicast
  - Lazy-Home with distributed garbage collection
Thank you for your attention