PhD Programme in Computer Science
2nd Cycle, New Series
Università di Salerno
Workflow Performance Evaluation
Rossella Aiello
March 2004
Chairman: Prof. Alfredo De Santis
Supervisor: Prof. Giancarlo Nota
Contents
Title Page i
Contents iii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Acknowledgements xiii
1 Introduction 1
1.1 The importance of measurement in workflow systems . . . . . . . . . . . . . 1
1.2 Contributions of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Outline of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Workflows and Measurement Systems 5
2.1 Business Process Reengineering and Improvement . . . . . . . . . . . . . . . 5
2.1.1 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Workflow Management Systems . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.1 The WfMC Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.2 Workflow Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.3 WfMC Audit Data Specification . . . . . . . . . . . . . . . . . . . . 15
2.3 Integrating BPR and Workflow Models . . . . . . . . . . . . . . . . . . . . . 18
2.4 Measurement Construct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5 Performance Measurement Systems . . . . . . . . . . . . . . . . . . . . . . . 22
2.5.1 Activity-Based Costing . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.2 Balanced Scorecard . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.3 Goal-Question-Metric (GQM) . . . . . . . . . . . . . . . . . . . . . . 25
2.5.4 Practical Software and System Measurement (PSM) . . . . . . . . . 26
2.5.5 Capability Maturity Model for Software . . . . . . . . . . . . . . . . 28
2.5.6 Process Performance Measurement System (PPMS) . . . . . . . . . . 29
2.6 Workflow Monitoring and Controlling . . . . . . . . . . . . . . . . . . . . . 30
2.6.1 Previous Work on Monitoring and Controlling . . . . . . . . . . . . 31
2.6.2 What quantities to measure . . . . . . . . . . . . . . . . . . . . . . . 32
2.6.3 When to measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.6.4 How to measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3 A Measurement Framework of Workflows 39
3.1 An example of business process . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.2 A Measurement Framework for Workflow Evaluation . . . . . . . . . . . 40
3.2.1 Basic Structures and Primitive Operators . . . . . . . . . . . . . . . 40
3.2.2 Instances and Events . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2.3 Measurement Operators . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.2.4 Primitive Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.3 Fundamental measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.3.1 Duration of Instances . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3.2 Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.3.3 Task Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.4 Derived measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4.1 Contributions to the execution of workflows . . . . . . . . . . . . . . 56
3.4.2 Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.4.3 Support to Proactive Systems . . . . . . . . . . . . . . . . . . . . . . 59
3.5 Advanced Measurement Techniques . . . . . . . . . . . . . . . . . . . . . . . 60
3.5.1 Windowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.5.2 Performance Evaluation Reports . . . . . . . . . . . . . . . . . . . . 62
3.5.3 Evaluating a Process Hierarchy . . . . . . . . . . . . . . . . . . . . . 63
3.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4 Performance Evaluation Monitors 69
4.1 Monitor Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.1.1 The specification of workflow entities . . . . . . . . . . . . . . . . . . 72
4.2 Software Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5 Evaluation of federated workflows 79
5.1 Virtual Enterprise and Federated Workflows . . . . . . . . . . . . . . . . . . 79
5.1.1 Multi-Agent Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.1.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.2 Software Architectures for the Evaluation of Workflows . . . . . . . . . . . 84
5.2.1 Centralized Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.2.2 Semi-distributed Model . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.2.3 Distributed Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.3 A multi-agent-based architecture for the monitoring of workflows . . . . . 88
5.4 The behaviour of a DMW . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.5 The architecture of the distributed monitoring system . . . . . . . . . . . 92
5.5.1 The Grasshopper Agent Platform . . . . . . . . . . . . . . . . . . . . 92
5.5.2 The architecture of the distributed system . . . . . . . . . . . . . . . 95
5.5.3 The prototype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
6 Case studies 103
6.1 Intecs S.p.A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.1.1 Organizational Standard Software Process (OSSP) . . . . . . . . . . 104
6.1.2 Case Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.1.3 Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.1.4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.2 University of Salerno . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
6.2.1 The IBM HolosofX Workbench tool . . . . . . . . . . . . . . . . . . 114
6.2.2 Process Description . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7 Other Approaches to Measurement 121
7.1 A comparison of current approaches to process measurement . . . . . . . . 121
7.2 Built-in WfMS facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7.2.1 Oracle Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
7.2.2 Filenet Panagon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
7.2.3 Ultimus Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
7.2.4 IBM WebSphere MQ Workflow . . . . . . . . . . . . . . . . . . . . . 124
7.2.5 Staffware Process Suite . . . . . . . . . . . . . . . . . . . . . . . . . 124
7.3 The Process Warehouse Approach . . . . . . . . . . . . . . . . . . . . . . . 125
7.3.1 Data Warehousing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7.3.2 Process Warehouse . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
7.4 Measurement Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7.5 Further research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
References 137
List of Figures
2.1 The WfMC Workflow Process Definition Metamodel . . . . . . . . . . . . . 11
2.2 Relationship between instances of processes, tasks and work items. The
different shape of the nodes indicates the node type. . . . . . . . . . . . . . 12
2.3 The major components of a Workflow Management System . . . . . . . . . 13
2.4 The WfMC Reference Model . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.5 Common Workflow Audit Data format . . . . . . . . . . . . . . . . . . . . . 17
2.6 McGregor’s extension to WfMC Reference Model . . . . . . . . . . . . . . . 18
2.7 A model for Continuous Process Improvement. . . . . . . . . . . . . . . . . 19
2.8 Measurement constructs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.9 The Balanced Scorecard: Strategic Perspectives . . . . . . . . . . . . . . . . 25
2.10 Some examples of GQM metrics. . . . . . . . . . . . . . . . . . . . . . . . . 27
2.11 The Capability Maturity Model. . . . . . . . . . . . . . . . . . . . . . . . . 29
2.12 The three dimensions of a business process. . . . . . . . . . . . . . . . . . . 32
2.13 Ex-post and real-time measurements. . . . . . . . . . . . . . . . . . . . . . . 34
2.14 Data sources for the measurement of workflows. . . . . . . . . . . . . . . . . 36
3.1 A workflow schema for the process of “loan request”. . . . . . . . . . . . . . 40
3.2 Relationship between the instance set and the event set. Each instance can
have many events related to it. . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3 A process definition containing a loop . . . . . . . . . . . . . . . . . . . . . 49
3.4 Some time measures for an instance . . . . . . . . . . . . . . . . . . . . . . 51
3.5 A simple pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.6 Changes of the queue length in time. . . . . . . . . . . . . . . . . . . . . . . 55
3.7 Intertask duration from T2 to T6. . . . . . . . . . . . . . . . . . . . . . . . . 56
3.8 Kinds of contribution of an actor: 1) on a single process; 2) on several
processes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.9 Routing percentages of a process . . . . . . . . . . . . . . . . . . . . . . . . 59
3.10 A windowing application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.11 Use of pipeline schema to build performance evaluation reports. . . . . . . . 64
3.12 Splitting a process instance into subprocess instances: a) multiple-level
hierarchy; b) two-level hierarchies. . . . . . . . . . . . . . . . . . . . . . . . 65
4.1 Continuous process measurement using a monitor. . . . . . . . . . . . . . . 70
4.2 An example of Directory Information Tree. . . . . . . . . . . . . . . . . . . 73
4.3 The software architecture of the proposed tool. . . . . . . . . . . . . . . . . 75
4.4 The hierarchy of monitors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.5 The internal architecture of a Monitor. . . . . . . . . . . . . . . . . . . . . . 77
5.1 The centralized monitors model. . . . . . . . . . . . . . . . . . . . . . . . . 85
5.2 The semi-distributed monitor model. . . . . . . . . . . . . . . . . . . . . . . 87
5.3 The distributed monitor model. . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.4 The interaction between MMAs . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.5 The Grasshopper Distributed Agent Environment. . . . . . . . . . . . . . . 94
5.6 Communication via Proxies . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.7 A high level overview of the distributed evaluation system. . . . . . . . . . . 96
5.8 The architecture of the Monitor Manager. . . . . . . . . . . . . . . . . . . . 97
5.9 The execution of a local measurement. . . . . . . . . . . . . . . . . . . . . . 98
5.10 The execution of an organizational-wide measurement. . . . . . . . . . . . . 98
5.11 The execution of a VE-wide measurement. . . . . . . . . . . . . . . . . . . 99
5.12 The monitor interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.13 The selection of the measurements. . . . . . . . . . . . . . . . . . . . . . . . 100
5.14 The creation of agents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.15 The filtering and computation agent moved to the client. . . . . . . . . . . . 101
5.16 The interface for the presentation of the results. . . . . . . . . . . . . . . 102
6.1 The high level of OSSP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.2 The subphases of the BID phase. . . . . . . . . . . . . . . . . . . . . . . . . 105
6.3 Some activities of a subphase. . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.4 The Personal Metrics file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6.5 The Project Metrics file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6.6 The decomposition of an activity on the basis of its owners. . . . . . . . . . 108
6.7 The ER Model of the OSSP. . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.8 The measurement framework data model. . . . . . . . . . . . . . . . . . . . 111
6.9 The measurement framework data model. . . . . . . . . . . . . . . . . . . . 112
6.10 The two wrapping classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6.11 The login page of the OSSP DBManager tool. . . . . . . . . . . . . . . . . . 113
6.12 The webpage for the insertion of a new process instance. . . . . . . . . . . . 113
6.13 The page to create a new subphase instance. . . . . . . . . . . . . . . . . 114
6.14 The different elements of an ADF diagram. . . . . . . . . . . . . . . . . . 115
6.15 The high level of the process. . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.16 The subprocess Start. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.17 The Analysis phase of the process. . . . . . . . . . . . . . . . . . . . . . . . 118
6.18 The Control phase of the process. . . . . . . . . . . . . . . . . . . . . . . . . 118
6.19 The Release phase of the process. . . . . . . . . . . . . . . . . . . . . . . . . 118
6.20 An example of process simulation. . . . . . . . . . . . . . . . . . . . . . . . 119
6.21 The workflow model in Titulus. . . . . . . . . . . . . . . . . . . . . . . . . . 120
7.1 The monitoring tool embedded in Oracle Workflow. . . . . . . . . . . . . . 123
7.2 The business dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7.3 Example of star schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.4 Example of snowflake schema . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.5 Some OLAP operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
7.6 A Process Data Warehouse Architecture. . . . . . . . . . . . . . . . . . . . . 130
7.7 The HP Workflow Data Warehouse schema . . . . . . . . . . . . . . . . . . 132
7.8 The CARNOT Process Warehouse Architecture. . . . . . . . . . . . . . . . 133
7.9 The WPQL top level. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
List of Tables
2.1 Vendor Conformance to the Interface 5 . . . . . . . . . . . . . . . . . . . . . 16
3.1 A statistical report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2 The hierarchical measurement framework . . . . . . . . . . . . . . . . . . . 67
4.1 Assignment of measures to monitor types. . . . . . . . . . . . . . . . . . . . 71
5.1 Combining the monitoring subsystem with the WfMS architecture . . . . . 85
5.2 The dependence of agent behaviour on location and interaction type. . . . 92
Acknowledgements
I would like to thank my advisor, Prof. Giancarlo Nota, who, during the last three years,
was always available to help me scientifically and to guide me in the right direction. To
him go my grateful thanks for his effective and efficient supervision. I gratefully acknowledge
his support and encouragement during the preparation of this thesis, as well as his helpful
comments during the review of my work.
I also thank Prof. Antonio Esposito, who acted as my second supervisor, for his friendly
interest in my research, and for his scientific support, feedback and encouragement during
the first two years of my PhD.
I would like to thank my colleagues Maria Pia, Saro and Antonio for their collaboration
and friendship.
Particular gratitude goes to Nicola Grieco for all the support and friendship he has shown
me over these years.
I would like to express my deep gratitude to my parents for always supporting me during
my studies and for their confidence in the success of my efforts.
Last but not least, I would like to thank my husband Antonio. Without his trusting,
patient, and loving support it would have been much harder to accomplish this work.
Chapter 1
Introduction
1.1 The importance of measurement in workflow systems
Virtual enterprises have become increasingly common as organizations seek new ways of
delivering value to their stakeholders and customers. They integrate both processes and
systems to support an end-to-end value chain.
In this scenario, company managers need pertinent, consistent and up-to-date information
to make their decisions. For this reason, many Information Technology (IT) systems have
been developed in recent years to improve the way information is gathered, managed and
distributed among the people working in companies. In particular, the IT system should:
• allow the decision maker to access relevant information wherever it is situated in
the organization;
• allow the decision maker to request and obtain information within and between the
various organisations;
• proactively identify and deliver timely, relevant information to business processes;
• inform the decision maker of changes made in the business process that conflict
with the current decision context.
In recent years, the need for precise knowledge of the phenomena that involve an
enterprise has become more and more important, especially when the analysis of business
processes is addressed. Indeed, many models have been proposed by authoritative
organizations to support continuous process improvement. These models make explicit
reference to the formal representation of processes as a necessary step towards the
introduction of process and product metrics [ISO00, PCCW04].
The concept of measurement plays a fundamental role in reaching a better and more
precise understanding of the phenomena to be observed. The ability to correlate process
information with other business data, such as cost, becomes extremely important. The user
should be able to monitor process bottlenecks, ROI and other business measurements in
order to answer, for example, the following two questions: (1) Is the current performance
of the business process better than it was yesterday? (2) To what degree are to-be values
fulfilled?
Answering these questions requires both a measurement system and analytical tools that
make it possible to manipulate and study the data collected from running processes as
well as historical data.
Performance Measurement Systems suffer from several shortcomings: performance
measurement is focused too strongly on financial performance indicators; business
processes are not measured systematically; the concept of leading indicators has not been
implemented; performance data becomes available with a considerable time lag; access to
performance data is complicated; and the performance measurement processes are poorly
defined.
The introduction of a workflow system in an organization yields measurable benefits, such
as the reduction of execution times or of employee costs, as well as qualitative benefits such
as increased process quality, due to the reduction of mistakes and a higher degree of
conformance to user expectations [CHRW99, MO99].
During execution, a WfMS records a series of data useful both for scheduling the next
task and for evaluating the performance of the process. These data give the process
supervisor the possibility to monitor and check the trend of past and current executions
of business processes.
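The kind of execution data mentioned above can be illustrated with a small, hypothetical sketch: event records carrying an instance identifier, an event type and a timestamp are enough to compute the duration of each process instance. The event format and names below are illustrative assumptions, not the actual audit schema of any WfMS.

```python
# Hypothetical sketch: computing per-instance durations from the kind of
# event data a WfMS records during execution. The record layout
# (instance id, event type, timestamp) is an assumption for illustration.
from datetime import datetime

events = [
    ("loan-01", "started",   datetime(2004, 3, 1, 9, 0)),
    ("loan-02", "started",   datetime(2004, 3, 1, 9, 30)),
    ("loan-01", "completed", datetime(2004, 3, 1, 11, 15)),
    ("loan-02", "completed", datetime(2004, 3, 1, 10, 0)),
]

def instance_durations(events):
    """Return {instance_id: duration} for the instances that completed."""
    started, durations = {}, {}
    for instance, kind, ts in events:
        if kind == "started":
            started[instance] = ts
        elif kind == "completed" and instance in started:
            durations[instance] = ts - started[instance]
    return durations

for instance, duration in instance_durations(events).items():
    print(instance, duration)
```

A process supervisor can aggregate such per-instance durations over past and current executions to observe the trend of a business process.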
1.2 Contributions of the thesis
In the last decade, attention to workflow systems has grown considerably, and a great
number of studies and applications have been produced. Attention has mainly been
devoted to the primary functionalities of a workflow system, such as modeling and
execution. As a result, many studies have addressed the different approaches to modeling,
such as the use of graphs, temporal aspects, dynamic model verification and workflow
mining, or the architectures that can be used to implement a workflow system efficiently.
The topic of workflow evaluation has received relatively little coverage in the literature,
so the thesis makes two main contributions:
1. to handle workflow performance evaluation in a systematic way, giving a full
overview of the state of the art on this subject;
2. to provide an original contribution, introducing a measurement framework and some
models for the evaluation of workflows that can be easily used in different
environments.
At present, most commercial WfMSs integrate monitoring and measurement tools for
analyzing process performance. These systems provide interesting capabilities for
process evaluation, but they have two important disadvantages:
• they are not easily extensible, because they provide a fixed measurement set. Adding
extensions to these tools involves hard-coding and tight integration with the
tool; for this reason, changes can be made only by the WfMS developers;
• the defined measurement set is not always suitable for the context in which a WfMS
operates, so particular analyses may not be feasible on the automated processes of
the organization.
The main contribution of the measurement framework is probably the unifying context
within which existing measures can be selected for, or adapted to, a particular application
domain. Since the framework is defined in an abstract setting, it has a good degree of
generality and independence from existing WfMSs. It is also an extensible model: the
hierarchical composition approach allows the addition of new measures, especially in the
third layer, where customizations and more sophisticated measures can be introduced.
A second result is the characterization of the measurement activities of workflow quantities
in terms of “what”, “when” and “how”. Such a characterization allows the study of
process measurement tools from several perspectives and leads to the fundamental ideas
of performance evaluation monitors and continuous process measurement.
Starting from the results concerning the measurement framework and the characterization
of measurement activities, the problem of workflow evaluation has been studied in the
context of Virtual Enterprises. A multi-agent model for the evaluation of a workflow
implemented by two or more federated WfMSs has been proposed, together with a prototype
that exploits the Grasshopper platform to manage the agents operating in the VE.
1.3 Outline of the thesis
The thesis is structured as follows. Chapter 2 introduces the application domain and the
problems of process measurement and of workflow monitoring and controlling. The
measurement framework is presented in chapter 3. Chapter 4 describes the characteristics
and the architecture of an evaluation tool based on the idea of a performance evaluation
monitor. The ideas presented in chapter 4 are generalized and applied in chapter 5; after a
discussion of three possible application environments, attention is focused on distributed
workflow evaluation. In the same chapter, a model for the distributed evaluation of
workflows operating within a Virtual Enterprise is introduced. The model is expressed in
terms of a multi-agent system and is implemented using the Grasshopper Platform.
In chapter 6 the measurement framework is validated in two different domains: a software
enterprise and a public agency. Finally, the last chapter concludes the thesis, reviewing
other approaches to workflow performance evaluation and pointing out future work.
Chapter 2
Workflows and Measurement Systems
Over the last 10 years several factors have accelerated the need to improve business
processes. The most obvious is technology. New technologies (like the Internet) together
with the opening of world markets are rapidly bringing new capabilities to businesses.
As a result, companies have sought out methods for faster business process improvement.
One approach for rapid change and dramatic improvement that has emerged is Business
Process Reengineering (BPR).
Workflow management systems represent the most important technology supporting BPR
activities. Thanks to the tight integration of BPR and workflows, Continuous Process
Improvement can be realized. Measurement is a very important activity, and many software
measurement systems have been developed.
In this chapter, the domain of BPR and workflows is discussed, highlighting the relation
existing between them; then, some of the main measurement systems are presented and,
finally, attention is focused on workflow monitoring and controlling.
2.1 Business Process Reengineering and Improvement
Globalization and rapid change in today's business environment are facts with which
every business organization has to cope in order to ensure its long-term survival.
Companies must continuously adapt to a changing business environment to improve their
operational efficiency. In particular, they tend to streamline their organizational
structures, building more efficient flat organizations, and to improve the efficiency of
their business processes through careful reengineering [Ham90].
A business process is a collection of interrelated work tasks, initiated in response to an
event, that achieves a specific result for the customers of the process [SM01]. The field
called Business Process Reengineering (BPR) has as its objective the optimization and
quality improvement of business processes.
The main goal of managers concerns business performance: they continually seek to
improve it to obtain the highest benefits in terms of time and cost. According to Hammer
[Ham90], we should use the power of information technology to radically redesign our
business processes. This approach places particular emphasis on radical re-engineering to
achieve dramatic improvements in the performance of business processes. Reengineering a
business process implies many changes in the organization; for this reason, it is a very
high-risk activity. Another approach often mentioned in the literature is called Business
Process Improvement [Dav93, Har91] and is sometimes preferred by Public Agencies. This
approach consists of a phased, sequential methodology for the implementation of process
change. It includes activities regarding process identification, change-driver analysis
and the definition of process strategies. The methodology is summarized in the following
phases.
1. Identify the processes to be redesigned: the highest priority in reengineering should
be given to the major processes.
2. Identify IT levers: Awareness of IT capabilities can and should influence process
design.
3. Develop the business vision: identify the business strategy and formulate process
performance objectives such as cost reduction, time reduction, quality improvement, etc.
4. Understand and measure the existing processes thus avoiding old mistakes and pro-
viding a baseline for future improvements.
5. Design and build a system to support the new process: The actual design should be
viewed as a prototype, with successive iterations. Successive iterations should lead
to a mature workflow system.
Davenport considers Information Technology (IT) one of the main enablers of process
innovation and improvement and identifies nine kinds of opportunities it offers:
• automational : elimination of human labor and achievement of more efficient struc-
turing of processes;
• informational : capturing of process information for purposes of analyzing and better
understanding;
• sequential : transformation of sequential processes to parallel in order to achieve
cycletime reductions;
• tracking : monitoring the status of executing processes and objects on which the
processes operate;
• analytical : analysis of information and decision making;
• geographical : allowing the organizations to effectively overcome geographical bound-
aries;
• integrative: improvement of process performance promoting the coordination be-
tween different tasks and processes also thanks to common database and information
exchange;
• intellectual : capturing and distribution of employee expertise;
• disintermediating : increasing efficiency by eliminating human intermediaries in rel-
atively structured tasks.
Workflow systems represent the most important process-enabling information technology.
Mature workflow products are able to support very complex business processes, both within
an enterprise and between enterprises. They provide good tools and applications for humans
to participate in the process, and interfaces to integrate applications and external systems
into the process [Mar02].
2.1.1 Simulation
A number of WfMS provide facilities for administration and process monitoring; limited
simulation capabilities are also offered. Sometimes, Business Process Modeling (BPM)
tools and WfMS are coupled in order to exploit the advanced analysis capabilities of BPM
tools, to evaluate several design alternatives and to achieve a deep knowledge about the
process before the workflow implementation. They provide an environment that simulates
the processing entities and resources involved in the workflow. Designers plug in a process
model and the modeling tool executes it on different scenarios while varying the rates
and distributions of simulation inputs, and programming various processing entities and
resource parameters (e.g., the response times of people or systems performing workflow
activities). At the end of the simulation BPM tools provide valuable information about
the process model. This includes statistics on resource utilization and queue load for
evaluating the number of work items that build up at run time. Additionally, they can
identify bottlenecks in the process definition and provide animations that show how work
moves through the model. Leymann and Roller in [LR00] suggest determining how a
process model handles the work load through two types of simulation:
• Analytical simulation - This kind of simulation is mainly performed on the graph-
based process specification. Process modelers use the probabilities associated with
the control connectors to derive the number of times each activity executes. Ana-
lytical simulation does not account for resource limitations, for example when other
processes compete for the same person. It yields the probability that the process
unfolds in a particular way, and lower-bound estimates for the completion times.
Analytical simulation has the advantage that it uses only basic statistical informa-
tion about the process resources. Additionally, it can be performed quickly with a
low amount of computational resources.
• Discrete event simulation - When the analytical simulation shows that the process-
ing entities and resources can handle the workload, discrete event simulation pro-
duces more details about the process model. The behavior of a workflow system
that implements the process is simulated. Daemons representing process actors and
resources generate events. Modelers specify different distribution patterns for the
daemons that create processes or perform the workflow activities. Unlike analyti-
cal simulation, discrete event simulation takes into account the dynamic aspects of
processes, like competing for resources. However, the additional information comes
at the expense of increased computational resources.
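As a rough sketch of the analytical simulation described above, the probabilities attached to control connectors can be propagated through an acyclic process graph to derive the expected number of times each activity executes. The process graph, activity names and probabilities below are hypothetical, and the code is only an illustration of the idea, not an implementation from any BPM tool.

```python
# Sketch of analytical simulation: given a process graph whose control
# connectors carry branching probabilities, derive the expected number of
# times each activity executes. Works on acyclic process definitions.

def expected_executions(edges, start):
    """edges: {(source, target): probability}; returns {activity: expected count}."""
    activities = {a for edge in edges for a in edge}
    incoming = {a: [] for a in activities}
    for (src, dst), p in edges.items():
        incoming[dst].append((src, p))

    expected = {start: 1.0}  # the start activity always executes once

    # Repeatedly resolve activities whose predecessors are all known;
    # this terminates because the process definition is assumed acyclic.
    while len(expected) < len(activities):
        for a in activities:
            if a not in expected and all(s in expected for s, _ in incoming[a]):
                expected[a] = sum(expected[s] * p for s, p in incoming[a])
    return expected

# A small branching process: A splits to B (70%) or C (30%); both rejoin in D.
edges = {("A", "B"): 0.7, ("A", "C"): 0.3, ("B", "D"): 1.0, ("C", "D"): 1.0}
print(expected_executions(edges, "A"))
```

Consistently with the lower-bound nature of analytical simulation, such a computation uses only the static process specification and branch probabilities; it ignores resource contention, which is exactly what discrete event simulation adds.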
BPM Workbench by IBM [IBM02] is a commercial tool suite containing a Business Modeler
that is a discrete event simulator. A more detailed description of this tool can be found
in chapter 6, where a case study concerning an administrative process of the University
of Salerno is presented.
Chapter 2. Workflows and Measurement Systems 9
2.2 Workflow Management Systems
Ten years ago, a team of engineers conceived the idea that computer software could be
used to automate paper-driven business processes. They called it “workflow software”.
According to Leymann and Roller [LR00], business processes may consist of parts that are
carried out by a computer and parts that are not supported by computers. The parts that
are run on a computer are called a workflow. Companies gain many benefits from an
automated workflow management system [Ple02]:
• work doesn’t get misplaced or stalled;
• the managers can focus on staff and business issues, such as individual performance
and optimal procedures, rather than on routine assignments;
• the procedures are formally documented and followed exactly; in this way the work
is executed as planned by management;
• parallel processing, where two or more tasks are performed concurrently, is far more
practical than in a traditional manual workflow.
Workflow management requires a process definition tool, a process execution engine, user
and application interfaces to access and action work requests, monitoring and management
tools, and reporting capabilities. Some workflow vendors also offer configurable adaptors
and integration tools to more easily extend the flexibility of workflow integration within
the business process.
It is possible to distinguish four categories of workflows on the basis of the repetition
factor and the business value, that is, the importance of a workflow to the company’s
business:
• Collaborative - They are built upon process thinking and are characterized by a
high business value but are executed only a few times; for example, the process of
creating the technical documentation for a software product. The process is generally
complex and is created specifically for the particular task.
• Ad hoc - They show a low business value and a low repetition rate. These are workflows
with no predefined structure: the next step in the process is determined by the
user involved, or the business process is constructed individually whenever a series
of actions needs to be performed. Ad hoc workflow tasks typically involve human
coordination, collaboration, or co-decision. Thus, the ordering and coordination of
tasks in an ad hoc workflow are not automated but are instead controlled by humans.
• Administrative - This kind of workflow shows a low business value but a high repeti-
tion factor. They typically represent administrative processes characterized by forms
and documents exchanged between the resources, such as an expense account or a
certificate requested by a citizen from the Local Municipality.
• Production - Production workflows have a high business value and a high repetition
factor. They implement the core business of a company, such as the loan process
of a bank; an efficient execution of these workflows provides a company with a
competitive advantage.
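The classification above is a lookup on the two dimensions, business value and repetition factor. A minimal sketch (the function name and the string labels are illustrative):

```python
def classify_workflow(business_value, repetition):
    """Map the two dimensions ('high' or 'low') to one of the four
    workflow categories described in the text."""
    categories = {
        ("high", "low"): "collaborative",
        ("low", "low"): "ad hoc",
        ("low", "high"): "administrative",
        ("high", "high"): "production",
    }
    return categories[(business_value, repetition)]
```

For instance, a bank's loan process (high value, high repetition) falls into the production category, while a citizen's certificate request (low value, high repetition) is administrative.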
2.2.1 The WfMC Model
The Workflow Management Coalition is a non-profit organization, founded in 1993 and
composed of workflow producers, users, analysts and research groups. Its goal is to
define common standards for the various workflow functions in order to achieve a high
level of interoperability between different workflow systems, or between workflows and
other IT applications.
Figure 2.1 represents the Workflow Process Definition Metamodel [WfM98b]. This meta-
model identifies the basic set of entities and attributes used in the exchange of process
definitions. A WfMS allows the definition, the computerized representation and execution
of business processes wherein each process can be seen as a network of tasks. The basic
characteristic of these systems is that several process instances of the same kind might
exist. In other words, from a single process definition we can generate different process
instances enacted according to such a definition. A workflow provides a computerized
representation of a process, composed of a set of activities that must be executed in a
controlled way in order to achieve a common goal. The workflow also defines the task
execution order, on the basis of business procedural rules, which must be verified for the
process enactment. Its primary characteristic is the automation of processes involving
combinations of human and machine-based activities, particularly those involving interac-
tion with IT applications and tools.
Figure 2.1: The WfMC Workflow Process Definition Metamodel
In the context of a workflow, an activity represents a unit of work as scheduled by the
workflow engine; an activity may be atomic, a sub-process or a loop; furthermore, it may
be assigned to a participant or it may invoke applications. In the following, we will often
use the term task instead of activity.
A task may also consist of one or more work items, that is, the representation of a unit
of work on which a single actor operates and performs a particular function. Work items
are assigned to a work list, usually related to a single actor or to a set of actors belonging
to the same role. An actor interacts with the worklist selecting, arbitrarily, the next work
item to handle. Figure 2.2 gives the representation of a possible snapshot of business
processes managed by a WfMS. Two processes, called P1 and P2 respectively,
have been defined. From them, three process instances have been generated, two of P1
and one of P2. The arrow from x to i expresses the fact that the task instance x is “part
of” the process instance i.
Figure 2.2: Relationship between instances of processes, tasks and work items. The
different shape of the nodes indicates the node type.
The arrows between nodes on the same level define a “successor of” relationship with
the meaning that, for example, the task instance s is started in
sequence just after the completion of the instance r in the context of j. Note that the task
instance x has two successors due to a parallel activation of the task instances y and z.
Complex business processes may also be modeled by decomposing the process itself into
several subprocesses.
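The relationships in figure 2.2 can be illustrated with a small data model. This is a sketch only; the class and attribute names are assumptions, not part of the WfMC metamodel.

```python
class ProcessInstance:
    """A process instance generated from a single process definition."""
    def __init__(self, definition_name):
        self.definition_name = definition_name
        self.tasks = []

    def new_task(self, name):
        t = TaskInstance(name, self)
        self.tasks.append(t)
        return t

class TaskInstance:
    def __init__(self, name, process_instance):
        self.name = name
        self.process_instance = process_instance  # the "part of" relationship
        self.successors = []                      # the "successor of" relationship

    def add_successor(self, task):
        # Two or more successors model a parallel activation, like y and z.
        self.successors.append(task)

# Recreating part of the snapshot: an instance of P1 in which the task
# instance x activates y and z in parallel.
i = ProcessInstance("P1")
x, y, z = (i.new_task(n) for n in "xyz")
x.add_successor(y)
x.add_successor(z)
```

Here x having two successors captures the parallel activation noted in the text, while each task instance keeps a reference to the process instance it is part of.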
A Workflow Management System (WfMS) is a system that defines, creates and man-
ages the execution of workflows through the use of software, running on one or more
workflow engines, which is able to interpret the process definition, interact with workflow
participants and, where required, invoke the use of IT tools and applications. The major
components of a Workflow Management System are shown in fig. 2.3.
The WfMC has also identified a reference model [Hol94, Law97] that describes charac-
teristics, terminology and components of a workflow system. Fig. 2.4 shows the Workflow
Reference Model. The management of the workflow is done by a Workflow Management
System (WfMS) which provides an integrated environment for the definition, the creation
and the execution of business processes within an organization [Hol94, Law97]. The work-
flow enactment subsystem provides workflow enactment services by one or more workflow
engines.
• Interface 1 [Hol94, WfM98b, WfM02] presents the metamodel reported above and a
Figure 2.3: The major components of a Workflow Management System
set of API calls for the interchange of workflow specification elements.
• Workflow client applications are workflow enabled applications which provide process
and activity control functions, as well as worklist management, and administration
functions. Interface 2 [Hol94, WfM98c] defines an API to support interaction with
the workflow client applications. The WfMC mentions different possible configura-
tions which essentially refer to the component which administers the worklist.
• Invoked applications execute workflow tasks. While initially an interface 3 was an-
nounced which would support interaction with invoked applications, it has subse-
quently been amalgamated into Interface 2.
• Other workflow enactment services are accessed by interface 4 [Hol94]. It defines
an API to support interoperability between workflow engines of different vendors
allowing the implementation of nested subprocesses across multiple workflow engines.
• System monitoring and administration tools interact with the workflow enactment
subsystem through Interface 5 [Hol94, WfM98a]. The interface defines an API for
the administration of system monitoring and audit functions for process and activity
instances, remote operations, as well as process definitions.
In order to evaluate the process, audit data represents the data source for the measurements.
Figure 2.4: The WfMC Reference Model
We will discuss WfMC Audit Data in more detail in sec. 2.2.3, while in chapter 3 we will
give another definition of audit data based on the key concept of events.
2.2.2 Workflow Modeling
Workflow Management Systems (WfMS) are often adopted as support technology for the
implementation of process re-engineering in BPR projects. In recent years, many
fundamental research contributions have been proposed. In [dBKV97], the TransCoop
transaction model is presented together with an environment for the specification of co-
operative systems. Transaction features of WfMS are also addressed in [Ley95].
Another important research area is that of the Workflow Process Definition Languages
(WPDL). Interface 1 of the WfMC includes a common meta-model for describing the
process definition and also an XML schema for the interchange of process definitions,
called XML Process Definition Language (XPDL) [WfM02]. The WfMC has recognised
the advantages of separating the design component from the run-time component and has
developed and published an XML Process Definition Language (XPDL). This interface
supports independence of the design and the import/export of the design across different
workflow products, or from specialist modelling tools. In this way, business users can
utilise specialist tools to model and report upon different simulation scenarios within a
process and then use the model to transfer process entities and attributes to a workflow
definition. A workflow process definition meta-data model has been established; it iden-
tifies commonly used entities within a process definition. A variety of attributes describe
the characteristics of this limited set of entities. Based on this model, vendor specific tools
can transfer models via a common exchange format. In particular, the operations to be
provided by a vendor are:
• Import a workflow definition from XPDL.
• Export a workflow definition from the vendor’s internal representation to XPDL.
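The import operation can be sketched by parsing a simplified XPDL-like fragment. The element names below follow the XPDL vocabulary, but this is a hedged illustration: namespaces, attributes and most of the schema are omitted, and the snippet itself is invented.

```python
import xml.etree.ElementTree as ET

XPDL_SNIPPET = """
<Package>
  <WorkflowProcess Id="P1" Name="ExpenseApproval">
    <Activities>
      <Activity Id="A1" Name="Submit"/>
      <Activity Id="A2" Name="Approve"/>
    </Activities>
    <Transitions>
      <Transition Id="T1" From="A1" To="A2"/>
    </Transitions>
  </WorkflowProcess>
</Package>
"""

def import_xpdl(xml_text):
    """Extract processes, their activities and transitions from the XML."""
    root = ET.fromstring(xml_text)
    processes = {}
    for proc in root.iter("WorkflowProcess"):
        activities = [a.get("Id") for a in proc.iter("Activity")]
        transitions = [(t.get("From"), t.get("To"))
                       for t in proc.iter("Transition")]
        processes[proc.get("Id")] = {"activities": activities,
                                     "transitions": transitions}
    return processes
```

A vendor's real importer would map these entities onto its internal representation; the point of the sketch is that a common exchange format makes the process definition tool-independent.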
Many other WPDLs have been introduced to face the problem of process representation
from several perspectives [CCPP95, GE96, Ley94, WWWD96]. The goal of the PIF
project [LGP98] is the design of an interchange format to help automatically exchange a
process description among a wide variety of process tools, such as process modelers, workflow
software, process simulation systems, and so on.
For this class of systems, the support of Business Process Modeling can be exploited,
before the implementation phase of a BPR project [BKN00], in order to enhance
understanding of the design and logic of the business process. Section 2.3 returns to this concept.
2.2.3 WfMC Audit Data Specification
Workflow analysis tools need information presented in a consistent format, representing
all the events that occurred within a given set of criteria: for example, how long did
process x take? Which activities have been performed within a given process instance?
Furthermore, to understand where the process really is, the audit information can
provide an indication of its true state.
The aim of WfMC Audit Data Specification is to define what information a workflow
engine must trace and record, on the basis of the events that occur during the workflow
execution; this information is called Common Workflow Audit Data (CWAD) [WfM98a].
The specification also defines a format for this information. During the initialization and
execution of a process instance, multiple events occur which are of interest to a business,
including WAPI events, internal Wf Engine operations and other system and application
functions.
Table 2.1: Vendor Conformance to the Interface 5
Vendor | Product/Version | Release Date | IF5 Supported? | Implemented
Action Technologies Inc. | ActionWorks Metro 5.0 | 27/4/98 | No |
BancTec/Plexus Software | FloWare V 5.0 | 9/98 | No |
Blockade Systems Corp. | ManageID 4.2 | 2/2003 | No |
BOC GmbH | ADONIS V2.0/V3.0 | 1998 | Yes | Trial Developments
Concentus | KI Shell 2.0/3.0 | 25/6/97 | No |
CSE Systems | Cibon | 31/3/99 | No |
FileNET | Visual WorkFlo 3.0 | 9/98 | No |
Hitachi Ltd | WorkCoordinator | 11/98 | No |
IBM | WebSphere Business Integration | 2003 | Yes | No
IDS Scheer GmbH | ARIS Toolset 3.2 | 1997 | Yes | No
Image Integration Systems | DocuSphere Workflow V5.0 | 2002 | No |
Integic | e.POWER WorkManager 6.3 | 2003 | No |
INSIEL S.p.A | Office241 OfficeFlow V2.6.9a | 6/98 | Yes | No
ivyTeam | ivyGrid Version 2.1 | 06/2002 | Yes | Planned
PROMATIS | INCOME Workflow V1.0 | 1/4/98 | Yes | Implemented
SAP | SAP Business Workflow | 2000 | No |
Staffware | Staffware Process Suite 1.5 | October 2002 | Yes | Trial Developments
TDI-Savvion | WebDeploy: WorkFlow 1.2 | 6/98 | No |
TIBCO | TIB/InConcert | Planned | Yes | Planned
Interface 5 defines which audit data are basic, that is, data (mandatory or optional) that
must be recorded for audit purposes, and which are discretionary or private; in the latter
case, vendors can decide whether or not to add this kind of data for purposes different from audit.
Table 2.1 reports the conformance of the most important workflow vendors to the Interface
5. As the table shows, only a few vendors support Interface 5 in their products and most
of them have not implemented it yet. Figure 2.5 shows a typical WfMC audit record.
Figure 2.5: Common Workflow Audit Data format
Each record is composed of three parts: a prefix, the specific information and a suffix.
The prefix contains information common to every audit entry and is defined by the
WfMC. The suffix part may include optional values required by the vendors for their own
purposes. Depending on the type of event that occurred in the system, a specific part,
containing a different set of attributes is recorded in the log entry. Figure 2.5 shows an
example of audit trail entry for the creation of a process.
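The three-part record structure can be sketched as a simple data type. The field names below are illustrative assumptions; the normative attribute list is given in the WfMC specification [WfM98a].

```python
from dataclasses import dataclass, field

@dataclass
class AuditRecord:
    # Prefix: information common to every audit entry, defined by the WfMC.
    process_definition_id: str
    process_instance_id: str
    timestamp: str
    event_type: str              # e.g. "ProcessCreated", "ActivityStarted"
    # Specific part: attributes that depend on the type of event.
    specific: dict = field(default_factory=dict)
    # Suffix: optional vendor-defined values for non-audit purposes.
    suffix: dict = field(default_factory=dict)

# A hypothetical entry for the creation of a process instance.
record = AuditRecord("P1", "P1-001", "2004-03-01T10:00:00",
                     "ProcessCreated", specific={"initiator": "clerk01"})
```

An analysis tool reading such records only needs to understand the prefix to correlate events; the specific part is interpreted per event type, and the suffix can be ignored.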
Interface 5 contains neither references regarding how data must be stored nor indications
on how to evaluate this information. In [McG02], McGregor discusses the opportunity
to capture not only statistical information on the process, but information utilised within
the process to provide more informed management reporting and performance monitor-
ing. The author proposes a method to capture workflow process definition and workflow
audit data in a Decision Support System (DSS) (fig. 2.6) to enable business performance
monitoring. Workflow management system (process definition and workflow audit data)
can be used to create a link between the balanced scorecard, workflow management and
decision support principles. In order to extend the reference model, two options are pro-
posed: either duplicating in the Interface 5 some particular data, currently stored only in
Interface 1, or using both interfaces (1 and 5) to supply information to the monitoring
system. The latter approach is preferred by the author.
Interface 5, called “Administration and Monitoring Tools”, is divided into two parts:
Administration Tools are still populated by Interface 5 while Monitoring Tools become a
Figure 2.6: McGregor’s extension to WfMC Reference Model
DSS. The audit trail supported by Interface 5 now populates the Data Warehouse (Data
Subsystem) within the DSS. On the other hand, Interface 1 should pass the information
about the Process Definition through to the DSS; in this way the DSS can generate the
Data, Model and User Interface structures for the Decision Support System. As the work-
flow engine(s) carry out the enactment of the business processes, the Data Warehouse
receives logs through Interface 5; afterwards, summary records are created via the User
Interface, which interacts with both the Model Subsystem and the Data Warehouse.
2.3 Integrating BPR and Workflow Models
In the previous sections the relation between BPR and workflows has been introduced.
The link between workflow and BPR is twofold [IBM02, Saw00]:
1. After having been re-engineered, a business process can be implemented, partially or
totally, through a WfMS;
2. It is possible to use statistical data, starting from the execution report of the work-
flow, in order to realize a performance analysis and to support possible re-engineering
activities on the implemented business process.
Standard process simulation tools are very useful to point out possible critical situa-
tions. Simulation is usually employed to conduct analysis on processes not yet modeled as
workflows, to formulate BPR hypotheses and to evaluate possible alternatives before the
implementation [BKN00]. A BPR team aims at replacing business processes within the
enterprise with new, better processes. The reengineering effort begins with a quick study
of the existing processes. Next the team proceeds to the redesign stage and produces
new process models. Once this stage is completed, they use process simulation to check
the models for errors and evaluate their performance. The evaluation indicates whether
the reengineering effort has reached its objective or requires further work. Through the
simulation, it is possible to obtain an assessment of the current process performance and
to formulate hypotheses about possible re-engineering alternatives. Finally, the adoption
of a WfMS to automate a business process gives the opportunity to collect real execution
data continuously from which exact information about the process performance can be
obtained. On the one hand, such data can be used for monitoring, work balancing and
decision support. On the other, execution data can feed simulation tools that exploit
mathematical models for the purpose of workflow optimizations and process re-definition
[Bae99, CHRW99, VLP95]. These needs require techniques suitable for the measurement
of several quantities related to entities represented within the WfMS.
Furthermore, the information maintained by a WfMS, especially that regarding work-
flow executions, allows the team involved in a BPR project to plan a Continuous Process
Improvement cycle based on updated real execution data and not only on assessments
based on historical data (cf. fig. 2.7).
Figure 2.7: A model for Continuous Process Improvement.
This kind of analysis is usually done retrieving information about execution data, known
as audit data, from log files that maintain all the
relevant events that happened during the enactment of workflows. In other cases, the require-
ment imposed on the analysis activity is that performance indicators must be obtained
while the workflow is in progress. In such circumstances, real-time analysis [MR00] would
give the opportunity to raise a feedback action when the considered process is not in con-
formity with the expected behavior.
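A real-time conformity check of the kind described above can be sketched as an indicator compared against a threshold derived from the expected behavior. This is an invented illustration; the function name, the tolerance and the feedback message are all assumptions.

```python
def check_conformity(elapsed_hours, expected_hours, tolerance=0.2):
    """Compare a task's elapsed time against its expected duration and
    raise a feedback signal when the tolerance (20% by default) is exceeded."""
    limit = expected_hours * (1.0 + tolerance)
    if elapsed_hours > limit:
        return "feedback: task late, escalate"
    return "in conformity"
```

Run while the workflow is in progress, such a check lets a monitoring component trigger a feedback action as soon as the process deviates from the expected behavior, instead of discovering the delay from historical audit data.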
In [MO99, EP01] the comprehensive treatment of time and time constraints is considered
crucial in the design and management of business processes. The authors propose time
modeling and management techniques that make it possible to handle time problems,
avoid violations of time constraints, and make decisions when significant or unexpected delays occur.
2.4 Measurement Construct
A measurement process is increasingly becoming a required component in the manage-
ment tool set. Despite increased commitment of resources to an organizational mea-
surement process, few organizations include a means for quantifying and improving the
effectiveness of their measurement process within the ongoing business activities.
The things that can actually be measured include specific attributes of software processes
and products, such as size, effort, and number of defects. The measurement construct
describes how the relevant software attributes are quantified and converted to indicators
that provide a basis for decision making [MCJ+01]. A single measurement construct may
involve three types, or levels, of measures: base measures, derived measures, and indica-
tors. In the list below the meaning of the terms shown in the fig. 2.8 is defined. These
terms will be used in the following chapter.
• Attribute - A measurable attribute is a distinguishable property or characteristic of
a software entity. Entities include processes, products, projects, and resources. An
entity may have many attributes, only some of which may be suitable to be measured.
A measurable attribute is distinguishable either quantitatively or qualitatively by
human or automated means.
• Base Measure - A base measure is a measure of a single attribute
defined by a specified measurement method. Executing the method produces a value
Figure 2.8: Measurement constructs
for the measure. A base measure is functionally independent of all other measures
and captures information about a single attribute.
• Derived Measure - A derived measure is a measure, or quantity, that is defined as
a function of two or more base and/or derived measures. A derived measure captures
information about more than one attribute. Simple transformations of base measures
do not add information, thus do not produce derived measures. Normalization of
data often involves converting base measures into derived measures that can be used
to compare different entities.
• Indicator - An indicator is a measure that provides an estimate or evaluation of
specified attributes derived from an analysis model with respect to defined informa-
tion needs. Indicators are the basis for measurement analysis and decision making.
To support this last activity we think that indicators are what should be presented
to measurement users.
• Measurement Method - This is a logical sequence of operations, described gener-
ically, used in quantifying an attribute with respect to a specified scale. The opera-
tions may involve activities such as counting occurrences or observing the passage of
time. There are two types of method: subjective, that is based on human judgement
and objective, based on numerical rules.
• Measurement Function - A measurement function is an algorithm or calculation
performed to combine two or more values of base and/or derived measures.
• Analysis Model - This is an algorithm or calculation involving two or more base
and/or derived measures with associated decision criteria. Analysis models produce
estimates or evaluations relevant to defined information needs.
• Decision Criteria - These are numerical thresholds, targets, and limits used to
determine the need for action or further investigation or to describe the level of
confidence in a given result. Decision criteria help to interpret the measurement
results. Decision criteria may be based on a conceptual understanding of expected
behavior or calculated from data.
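The chain from base measures to an indicator can be made concrete with a worked sketch. The defect-density example below is an assumption for illustration, not taken from [MCJ+01]; the threshold value is likewise invented.

```python
def derived_defect_density(defects, size_kloc):
    """Measurement function: combine two base measures (defect count and
    product size in KLOC) into a derived measure."""
    return defects / size_kloc

def indicator(density, threshold=5.0):
    """Analysis model with a decision criterion (a numerical threshold)
    that turns the derived measure into an indicator for decision making."""
    return "investigate" if density > threshold else "acceptable"
```

For instance, 12 defects found in a 4 KLOC product give a derived measure of 3.0 defects/KLOC, which the decision criterion classifies as acceptable; a density above the threshold would instead call for further investigation.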
Some of these concepts will be resumed in chapter 3 where we will present a measurement
framework based on similar measurement constructs.
2.5 Performance Measurement Systems
Many definitions of performance measurement exist; for example, in [NGP95] a Performance
Measurement System (PMS) is defined as: “...the set of metrics used to quantify both the
efficiency and effectiveness of actions”.
Another definition can be found in [Bit95] which states: “At the heart of the performance
management process (i.e. the process by which the company manages its performance),
there is an information system which enables the closed loop deployment and feedback
system. This information system is the performance measurement system which should
integrate all relevant information from relevant systems.”
Measurement effectiveness is quantified by examining the extent to which the measurement
process goals and objectives are met and the extent to which managers utilize measurement
information during decision-making.
In [IJ03] the main attributes of a successful measurement program are indicated.
• A well-established organizational commitment - Managers must adopt the measurement
program and carefully address how measurement results will be used.
• An organization that actually uses the results - A successful measurement program
evolves over time; all participants must be able to learn and implement it effectively.
A common mistake in measurement implementation is to try to do too much too
quickly. To avoid this, it is worthwhile to select a few measurements and build up
from there.
• A well planned measurement process - Before deciding what measurements will be
implemented, it is recommended to identify information needs, how data must be
collected and analyzed and, also, to define the criteria for evaluating the measure-
ments.
• Automatic measurement collection - Usually, the most successful measurements are those
that have been collected in an automatic way. Automation increases the data quality
because it reduces human intervention and, with it, the risk of inconsistencies.
• Define measures that are objective and unambiguous - Numeric measures (such as
counts or percentages) are preferred because they minimize ambiguity. Numbers
give an objective view of the process, whereas subjective data are often confusing
and conflicting.
• Improve the measurements continuously - Both the process and the measurements
are continuously evaluated. Measurements that are not being used can be dropped
or replaced.
Subsections 2.5.1 to 2.5.6 summarize the main objectives and features of six widely used
measurement systems.
2.5.1 Activity-Based Costing
Activity-Based Costing (ABC) [Def95] was developed in the mid 1980s within the frame-
work of Cost Management System-Programs by Computer Aided Manufacturing Interna-
tional, Inc. It was defined for two main reasons [BKN00]:
• Organizations were beginning to sell a large range of goods and services rather than
a few standard products. The cost model was more influenced by the cost of the
supply components, such as delay, insurance, online assistance, than the products
themselves.
• This supply diversification entailed new activities. Resource consumption of these
activities was no longer solely dependent on the volume of production. The cost of
certain support activities also had to be included, such as running an IT department.
The ABC approach is based on the observation that resources are consumed directly by
activities and not by products. Therefore, the causes of resource consumption must be
examined at the activity level. The ABC models identify the basic activities of an orga-
nization. Activities are analyzed by comparing created value and operating costs.
ABC measures process and activity performance, determines the cost of business process
outputs, and identifies opportunities to improve process efficiency and effectiveness. Qual-
itative evaluation and determination alone is totally inadequate as a single measure of
improvement. It is the integration of the quantity and quality that is the critical decision
support element of the total process. ABC is the mechanism to integrate these two views.
2.5.2 Balanced Scorecard
The Balanced Scorecard (BSC) was developed by Kaplan and Norton to describe an
organization’s overall performance using a number of financial and nonfinancial
indicators on a regular basis. The balanced scorecard is a management system (not only
a measurement system) that enables organizations to clarify their vision and strategy
and translate them into action. It provides feedback around both the internal business
processes and external outcomes in order to continuously improve strategic performance
and results.
Kaplan and Norton describe the innovation of the balanced scorecard as follows [KN96]:
“The balanced scorecard retains traditional financial measures. But financial measures tell
the story of past events, an adequate story for industrial age companies for which invest-
ments in long-term capabilities and customer relationships were not critical for success.
These financial measures are inadequate, however, for guiding and evaluating the journey
that information age companies must make to create future value through investment in
customers, suppliers, employees, processes, technology, and innovation.”
A framework with four perspectives has been suggested: the financial, the customer,
the internal business, and the learning and growth perspective (figure 2.9). According to
the originators, the application of this tool can be seen in three areas: for the purpose of
Figure 2.9: The Balanced Scorecard: Strategic Perspectives
strategic performance reporting; to link strategy with performance measures; to present
different perspectives. An important characteristic of BSC is that the tool is concentrated
upon corporations or organizational units such as strategic business units. It looks at busi-
ness processes only in as far as they are critical for achieving customer and shareholder
objectives.
2.5.3 Goal-Question-Metric (GQM)
The Goal-Question-Metric (GQM) [MB00] method is used to define measurement on the
software project, process, and product in such a way that
• Resulting metrics are tailored to the organization and its goal.
• Resulting measurement data play a constructive and instructive role in the organi-
zation.
• Metrics and their interpretation reflect the values and the viewpoints of the different
groups affected (e.g., developers, users, operators).
GQM defines a measurement model on three levels:
• Conceptual level (goal): A goal is defined for an object, for a variety of reasons, with
respect to various models of quality, from various points of view, and relative to a
particular environment. Objects of measurement are:
– Products: Artifacts, deliverables and documents that are produced during the
system life cycle; for example, specifications, designs, programs, test suites.
– Processes: Software related activities normally associated with time, such as
specifying, designing, testing, interviewing.
– Resources: Items used by processes in order to produce their outputs, such as
personnel, hardware, software, office space.
• Operational level (question): A set of questions is used to define models of the object
of study and then focuses on that object to characterize the assessment or achieve-
ment of a specific goal. Questions try to characterize the object of measurement
(product, process, resource) with respect to a selected quality issue and to deter-
mine its quality from the selected viewpoint.
• Quantitative level (metric): A set of data, based on the models, is associated with
every question in order to answer it in a measurable way. The data can be:
– Objective: If they depend only on the object that is being measured and not
on the viewpoint from which they are taken; e.g., number of versions of a
document, staff hours spent on a task, size of a program.
– Subjective: If they depend on both the object that is being measured and the
viewpoint from which they are taken; e.g., readability of a text, level of user
satisfaction.
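As an illustration only (the goal, questions and metrics below are invented, not those of fig. 2.10), the three GQM levels can be sketched as nested data:

```python
# Hypothetical GQM model: one goal, its questions, and the metrics
# that answer each question quantitatively.
gqm_model = {
    "goal": {                                  # conceptual level
        "object": "change request process",
        "purpose": "improve",
        "quality_focus": "timeliness",
        "viewpoint": "project manager",
    },
    "questions": [                             # operational level
        {
            "text": "What is the current change-request processing speed?",
            "metrics": ["average cycle time",  # quantitative level
                        "standard deviation of cycle time"],
        },
        {
            "text": "Is the performance of the process improving?",
            "metrics": ["current average cycle time / baseline average"],
        },
    ],
}

for q in gqm_model["questions"]:
    print(q["text"], "->", q["metrics"])
```

Each metric is attached to exactly one question, so interpreting a measurement always traces back to the goal and viewpoint it serves.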
Fig. 2.10 shows an example GQM model with some appropriate metrics [BCR94]. Although originally used to define and evaluate a particular project in a particular environment, GQM can also be used for the control and improvement of a single project within an organization running several projects.
2.5.4 Practical Software and System Measurement (PSM)
Practical Software and System Measurement (PSM) is a product of the United States
Department of Defense measurement initiative. Practical Software Measurement provides
experience-based guidance on how to define and implement a viable information-driven measurement process for a software-intensive project [MCJ+01].
PSM treats measurement as a flexible process, not a pre-defined list of graphs or reports.
Figure 2.10: Some examples of GQM metrics.
The process is adapted to address the specific software and system information needs,
objectives, and constraints unique to each program. The PSM measurement process is
defined by a set of nine best practices, called measurement principles. Moreover, it provides additional detailed how-to guidance, sample measures, lessons learned, case studies and implementation guidance. It has served as the base document for the new international standard on measurement, ISO/IEC 15939 Software Engineering - Software Measurement Process.
In [IJ03] this model is compared with the Rational Unified Process (RUP) for software development in order to discuss the differences of the measurement process in the two models. At the same time, a sample set of measures for each phase of RUP and for each category of PSM is discussed.
2.5.5 Capability Maturity Model for Software
The Capability Maturity Model for Software (SW-CMM) [PCCW04] was developed by the Software Engineering Institute (SEI) of Carnegie Mellon University in Pittsburgh. The underlying premise of SEI's maturity model is that the quality of software is largely determined by the quality of the software development process applied to build it. By means of a questionnaire, an organization can assess the quality (maturity level) of its software process. The five stages, defined by the SEI, are as follows:
1. Initial - the software process is characterized as ad hoc. Few processes are defined,
and success depends on individual effort.
2. Repeatable - learning from similar past projects increases the capability of forecasting both time and costs.
3. Defined - the software process for both management and engineering activities is
documented, standardized, and integrated into a standard software process for the
organization.
4. Managed - quantitative measurements of the processes speed up development because problems are detected very quickly. Corrective actions can be taken to ease the achievement of goals.
5. Optimizing - continuous process improvement is enabled by quantitative feedback
from the process and from piloting innovative ideas and technologies.
Because organizations that outsource software development projects are increasingly
looking for greater software development maturity from their vendors, the CMM offers
an independent basis for assessments. Consequently, suppliers that wish to deliver bet-
ter results to their customers are engaging in CMM-based software process improvement
efforts. As the SEI transitions from the CMM to the Capability Maturity Model Integration (CMMI), leading companies are already adopting the CMMI in order to implement a standardized, integrated approach toward software and systems engineering in their organizations.
Figure 2.11: The Capability Maturity Model.
We will return to the CMM in chapter 6, where we will describe a case study concerning a software process managed by Intec Spa.
2.5.6 Process Performance Measurements System (PPMS)
A PPMS can be seen as an information system which supports process actors and their colleagues in improving the competitiveness of business processes [KK99b, Kue00]. The system tries to observe five performance aspects: financial, employees, customer, societal and innovation. The approach consists of nine steps, through which process performance is visualized and continuously improved:
1. Process goals are established through a collaboration between the various process participants.
2. For each process goal, specific indicators are defined.
3. Goals and indicators are broadened to cover all five aspects, not only the financial one.
4. Indicators must be accepted by the managers.
5. For each indicator, one defines where the input data come from, how these data can be accessed and, furthermore, the target (to-be) values.
6. Technical feasibility and economic efficiency are judged.
7. The PPMS is implemented.
8. The PPMS is used to measure the current values of the given indicators, continuously or regularly, and the results are fed back to the process participants.
9. Business processes are improved and the indicators are modified continuously.
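Steps (2), (5) and (8) can be sketched as a small indicator record. This is a hypothetical illustration; field names, the lower-is-better target convention, and all values are invented:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A PPMS indicator: its aspect, data source and target value."""
    name: str
    aspect: str           # financial, employees, customer, societal, innovation
    data_source: str      # where the input data come from (step 5)
    target: float         # to-be value (step 5); here lower is assumed better
    current: float = 0.0  # as-is value, updated continuously (step 8)

    def on_target(self):
        # Assumes a lower-is-better indicator such as cycle time.
        return self.current <= self.target

cycle_time = Indicator("order cycle time", "customer",
                       "workflow audit trail", target=48.0)
cycle_time.current = 52.5        # latest measurement fed back (step 8)
print(cycle_time.on_target())    # False
```

Feeding fresh current values into such records and comparing them against targets is what step (8) amounts to operationally.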
Based on the authors' experiences with several enterprises, it seems very unlikely that a universal set of performance indicators can be applied successfully to all business processes. Indicators have to be fitted exactly to a given enterprise and its business processes; moreover, performance has to be measured at different levels, in particular at the business process level and not at the level of business functions [KMW01].
2.6 Workflow Monitoring and Controlling
The separation of process logic from the application components enables workflow to tap
into the process level and collect information about its execution. Workflow systems
provide this data both at run time, ensuring that processes execute according to their
definition, and after the process is complete.
The workflow system manages the runtime data corresponding to each running process. A
workflow monitor enables workflow users to examine this information at run time. What
workflow users can do with this information depends on the process, as well as on the
features provided by the workflow management system. The type of available actions can
range from displaying process information to the early identification of out-of-line situa-
tions.
Under exceptional circumstances, the workflow user needs to override the process defini-
tion and manually change the course of the process. For example, he may find out that
the workflow started to execute with some erroneous information. This feature enables
workflow systems to handle exceptions and unique situations. Early workflow systems did
not provide this functionality. They frustrated their users, who felt that the system was
merely enforcing rigid rules. Current workflow systems aim at providing various degrees
of flexibility. Consequently, applications that use workflow to implement processes allow
their users to manually change running processes by simply leveraging this feature of the
workflow management system. Workflow systems support automatic or semi-automatic
execution of process instances, coordination between process activities, and the commu-
nication between process actors.
Workflow-based controlling concentrates on business processes. In the context
of process performance measurement, different terms are used. Traditional control offers a
post-hoc view, whereas workflow-based monitoring has the character of real-time report-
ing. The merits of workflow-based monitoring lie in the fast reporting procedure as well as
in its focus on business processes. Its limitations, on the other hand, are that qualitative
performance data and performance data about activities carried out manually, can hardly
be taken into consideration [KMW01]. While process monitoring emphasizes the task of
gathering process-relevant data without intruding upon the process, process controlling is
mainly concerned with the evaluation and judgement of the data gathered. The Workflow Management Coalition applies the term as follows: "Workflow monitoring is the ability to track and report on workflow events during workflow execution". It may be used, for example, by process owners to monitor the performance of a process instance during its execution. As it is mainly technology-driven, the selection of process performance indicators is primarily influenced by the data which can be gathered as a by-product through
tors is primarily influenced by the data which can be gathered as a by-product through
the automated or semi-automated execution of activities by a workflow management sys-
tem. Some workflow systems use the logged information for recovery. Workflow designers
use it for process analysis, where history information forms the basis for improving the
process. Workflow controlling can put useful data (e.g. time-related data) at the disposal
of controllers. They can then assess these data or use them as input for other measure-
ment instruments, for instance activity-based costing. The merits of controlling lie in the
fast and accurate reporting procedure as well as in its focus on business processes. Its
limitations, on the other hand, are that qualitative performance data and performance
data about activities that are carried out manually, cannot be taken into consideration.
2.6.1 Previous Work on Monitoring and Controlling
The evaluation of automated business processes has received relatively little coverage in the literature [BKN00, KK99a]; however, some contributions have appeared in recent years.
A prototype for workflow monitoring and controlling that integrates data of the IBM
MQSeries is presented in [MR00]. The process analysis tool, called PISA, offers some
evaluation functions according to three possible analysis dimensions: process view, resource view and object view. Time management is an important aspect of workflows. Any deviation from the prescribed workflow behaviour implies missing a deadline, an increased execution cost, or even a dangerous or illegal situation. Time management in the lifecycle of workflow processes is addressed in [EP01, MO99]. Time information such as time points, durations and deadlines related to a workflow is considered in order to add new features to a WfMS.
Exploiting such information, pro-active time calculations, time monitoring, deadline checking and time-error handling become possible during workflow execution.
In [SW02] a methodology for the modelling and performance analysis of WfMS is proposed.
Figure 2.12: The three dimensions of a business process.
The approach is based on coloured Petri Nets. It integrates a process model with an organisation model. Furthermore, this methodology combines classical time-based performance measures with activity-based costing, which in turn supports cost-dependent analysis as well as traditional performance measures.
A goal-oriented measurement approach is also used in [Can00] to introduce an architectural framework for the evaluation of new software technologies before accepting an innovation and starting a technology transfer project. The framework has been used to evaluate the impact of a BPR intervention, the introduction of the Ultimus workflow suite, for the automation of some processes in a Ministerial Organization.
It is now convenient to characterize what quantities to consider, then when and how to measure. The benefit of this characterization derives from the fact that it can be used as a reference for the design of new measurement tools.
2.6.2 What quantities to measure
An analyst interested in the performance evaluation of business processes usually starts from some fundamental measures about observable phenomena (such as the duration of a given task or the workload of a certain employee); then, he/she will evaluate more elaborate indicators. When a business process is automated with the support of a WfMS, a representation of real-world entities (e.g. process instances) takes place within the WfMS. As a process instance progresses in its execution, all the relevant events about it are recorded. The opportunity that this class of systems offers is that, from basic performance data, information about real-world processes can be obtained; for example, from the events regarding the creation and completion of a task, a measurement system could automatically compute the task duration. The following are examples of the information an analyst can request from a WfMS through the brokering of a measurement tool:
• Execution duration of a single task or the whole process.
• Effort spent to get a job completed.
• Cost of resources related to the execution of a given process.
• Workload of resources (employees but also roles and Organizational Units).
• Indicators about the queues, such as length and average waiting time.
• Execution frequencies of a given path during the enactment of several instances of
a process.
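For example, the first of these quantities can be derived from start and completion events. A minimal sketch, assuming events follow a 4-tuple audit format of the kind discussed later in this chapter (all identifiers and times are invented):

```python
# Hypothetical audit events: (event_type, instance_id, time, new_state).
# Times are in seconds from the start of system operation.
events = [
    ("createdInstance",   "i7", 10, "notRunning"),
    ("startedInstance",   "i7", 12, "running"),
    ("completedInstance", "i7", 75, "completed"),
]

def duration(instance_id, events):
    """Execution duration: time between the start and completion events."""
    start = next(t for e, i, t, s in events
                 if i == instance_id and e == "startedInstance")
    end = next(t for e, i, t, s in events
               if i == instance_id and e == "completedInstance")
    return end - start

print(duration("i7", events))  # 63
```

The other items in the list (effort, cost, workload, queue indicators, path frequencies) are built the same way: by selecting the relevant events and aggregating over them.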
It is useful to recall here the differences between process measurement, process controlling and process monitoring. Process controlling and monitoring include process measurement activities but may be employed to get information about other aspects of workflows, e.g. showing an execution path or the actor who is working on a given task. The essential difference between process controlling and process monitoring is that the former is executed on completed workflow instances, while the latter deals with the analysis of running workflow instances [MR00].
The result of a process measurement can be a single value, the trend of a measure during a time interval or, in general, a set of values statistically arranged, as already discussed in section 2.4. In the following we will focus our attention on process measurement in the context of both process controlling and process monitoring.
2.6.3 When to measure
The performance of business processes must be regularly checked from several perspectives, especially when an organization aims at excellent results [ISO00, Fis02]. Therefore, planning a measurement activity must consider the following aspects:
1. Monitoring vs. controlling.
2. The time interval of reference.
3. The process measurement frequency.
The first point is strongly dependent on the process measurement goal. If the goal is to gain information that can support strategic decisions concerning the organizational arrangement and/or Continuous Process Improvement, then process controlling can be done. Historical execution data are collected and organized to form performance reports as a basis for statistical analysis. The determination of average process execution times and the work distribution among organizational units are examples of data on which this kind of evaluation is based. Process measurement in this context takes place in ex-post modality, that is, a measure is evaluated considering only events that happened in the past for completed process instances.
If we are interested in the conformance of executing process instances to their expected
behavior, then monitoring is usually adopted; meeting a deadline or verifying if a queue
is under a given threshold are examples of issues that can be typically addressed. In this
context, process measurement is performed according to a real time modality on workflow
instances in progress. On the basis of recent execution data, a feedback loop is eventually
activated, implying, for example, work item reassignments or new resources allocation.
Figure 2.13: Ex-post and real-time measurements.
The second point requires the capability to consider a time interval. This is a standard feature of several types of monitoring tools; nevertheless, it is useful to point out the kind of analysis that can be done on workflows when the possibility to open a time window on workflow instances is offered. As shown in fig. 2.13, an analyst who wants to evaluate a measure at time t can consider past as well as future events, depending on the considered process instances. The average duration computed on the instances I2 and I3 of the process P, for example, requires the retrieval of data related to events that happened in the past (ex-post modality).
A second possibility is to look at future events; the analyst could be interested in observing
the queue trend related to an employee during a week starting after the time instant t
(real-time modality).
A further possibility is given by the evaluation of a measure starting from a given point in the past and continuing the measurement activity in the future. The monitoring of the workload of a given organizational unit from last month to the next is an example of mixed modality (ex-post + real-time). We return to this in chapter 7.
The process measurement frequency is the last perspective we consider in order to answer the question of when to measure. First of all, we discriminate between process monitoring and process controlling. If we consider process controlling, then the measurement frequency is typically planned at regular time intervals (e.g., quarterly or annually). On the other hand, if the monitoring of process instances is our purpose, then the measurement frequency may depend on the measure. Let us consider the two queries below:
1. What is the duration of a completed activity A belonging to I2(Q)?
2. Is the current queue of a role assigned to Q under the given threshold?
Even if the instance I2 of Q is still in progress, a single evaluation of the first query is sufficient. The second query, instead, can be invoked many times during the life cycle of some instances of Q. Note that the reiterated submission of the second query may be costly (from the point of view of the time spent by the process analyst) without the automated support of a specialized tool.
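The second query lends itself to automated, repeated evaluation. A hypothetical sketch of such a periodic threshold check follows; queue lengths are simulated with canned values, whereas in practice they would be supplied by the WfMS:

```python
# Simulated queue lengths observed over four consecutive checks.
samples = iter([4, 7, 9, 6])

def queue_length(role):
    # In a real tool this would query the workflow engine for `role`.
    return next(samples)

def check_queue(role, threshold):
    """One evaluation of the monitoring query; returns (length, ok)."""
    n = queue_length(role)
    return n, n < threshold

alerts = []
for _ in range(4):  # in practice, driven by a timer during enactment
    n, ok = check_queue("loan officer", threshold=8)
    if not ok:
        alerts.append(n)

print(alerts)  # [9]
```

Automating the loop is precisely what frees the analyst from resubmitting the query by hand.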
2.6.4 How to measure
To achieve the purposes outlined in section 2.6.2, WfMSs usually store data about the organization structure, the definition of processes and their execution data.
The following list exemplifies the kind of events that a WfMS should record during process enactment, according to the WfMC Audit Data Specification:
• The creation and starting of process, task and work item instances.
• The state change for the instances.
• The selection of a work item assigned to an actor.
• The completion of an instance.
Starting from the audit data maintained by the WfMS, ex-post measurement tools can be easily implemented. In fact, a typical measurement tool derives performance data from the audit data without interfering with the workflow engine [MR00, AEN02].
From the point of view of input data to a measurement tool, fig. 2.14 shows the distinction between internal data sources, maintained by the WfMS, and external data sources, maintained by other information systems. The main data sources that we consider are the audit data created by WfMSs during the enactment of instances, but other data sources might be used as well.
Figure 2.14: Data sources for the measurement of workflows.
An alternative is the direct interaction with the workflow engine. To ensure process enactment, the workflow engine must handle computerized representations in terms of process, task and work item instances, together with all the relevant data necessary for the execution.
Access to these representations is necessary to:
1. Build interactive tools capable of real-time measurements.
2. Support the proactive behavior of the WfMS, especially with respect to time management and exceptions [EP01, CDGS01].
A measurement tool could also take advantage of direct access to the organization database and the process database. Consider, for example, the following query: “How many subprocess instances of name Pj have been completed by the Organizational Unit Oi?”. This kind of query could be handled by a measurement tool capable of retrieving the correspondences between Oi and the instances of Pj handled by Oi.
In order to enhance the business value of audit data, an advanced measurement tool might require interaction with data sources not maintained by the WfMS. A data warehouse is a good example of an organizational data source that can be used for measurement. In chapter 7, the Process Warehouse approach to workflow measurement will be dealt with. This approach uses a data warehouse as a middle layer between data sources and measurement tools; in this way, OLAP operators can be used to analyze data along different dimensions.
Chapter 3
A Measurement Framework of Workflows
In this chapter I will introduce a measurement framework in which some primitive measure operators, combined with some set operators, allow the definition of a hierarchy of measures for the evaluation of automated business processes. Attention is focused on concepts concerning time and work for processes, tasks and work items represented in a Workflow Management System (WfMS). The framework presented consists of three levels. Primitive operators, essentially measure and set operators, belong to the bottom level; from the composition of these operators we define a level of fundamental measures concerning the duration of instances, the amount of work, the length of queues and the topology of tasks. Finally, the third level contains many aggregate measures and performance indicators built from the fundamental measures and operators defined in the underlying levels.
3.1 An example of business process
To enhance the understanding of the measurement framework that will be presented in the next section, it is convenient to describe an example process of loan request that will be often recalled in the following. It starts with the stimulus provided by a client who requests a loan from a bank. The first task handles the reception of the request, fills in a form and passes control to a task that performs a preliminary evaluation; if the client does not have an account number, or the client is considered unreliable, the request is rejected immediately. When the request is admitted to the subsequent phases, a decision is taken on the basis of the requested amount; if the value is less than or equal to $1000 the request is approved, otherwise the request handling is more elaborate.
Figure 3.1: A workflow schema for the process of “loan request”.
The enlarged box in fig. 3.1 shows the content of the task “Analyze Request” in terms of its component work items. If necessary, a loop is performed to collect missing customer information.
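The routing decision just described can be sketched as follows. This is a hypothetical illustration only; in the real process the routing is performed by the workflow engine according to the schema, not by application code:

```python
def route_request(has_account, reliable, amount):
    """Route a loan request as described in the example process."""
    if not has_account or not reliable:
        return "rejected"            # preliminary evaluation fails
    if amount <= 1000:
        return "approved"            # small amounts approved directly
    return "elaborate handling"      # larger requests need more work

print(route_request(True, True, 800))    # approved
print(route_request(True, True, 5000))   # elaborate handling
print(route_request(False, True, 800))   # rejected
```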
3.2 A Measurement Framework for Workflows Evaluation
In this section, we introduce the basic data structures and the primitive operators of the
measurement framework [AEN02]. In order to discuss its essential characteristics, we will consider in the following a subset of the states and event types among those proposed by the WfMC.
3.2.1 Basic Structures and Primitive Operators
The fundamental concepts considered in order to build a hierarchy of measurements for WfMS-supported business processes are time and work. These concepts are naturally employed whenever performance analysis is concerned. Consider, for example, some typical questions addressed by an analyst during the performance evaluation of the process introduced in the previous section:
a) When has the instance identified by number 36 of the process “Loan request” started?
b1) What is the duration of the instance identified by number 48 for the task “Ana-
lyze Request”?
b2) What is the average duration of the task “Analyze Request”?
c) How many work items “Redemption Plan” has a given employee completed?
d) How many processes “Loan Request” have been approved/rejected?
e) What is the global throughput (number of handled requests) in the last month?
Points a), b1) and b2) involve the concept of time. In particular, the first query requires
the retrieval of a time instant while the second and the third require the measurement of
time intervals.
Query c) regards the concept of work. Here, the measurement of the amount of work can be carried out by simply counting the started and completed work items. In this example, the counting of work items is accomplished from the start of the system operation to the instant of time at which the query is formulated. However, we can qualify the query by specifying a time interval during which the counting operation must take place, as shown in e).
Having chosen the two fundamental dimensions for our measurement framework, we must decide the unit of measure for each dimension. We assume by convention seconds and work items as the units of measure for time and work respectively. There is nothing absolute about this choice. Indeed, an analyst could develop his considerations about performance evaluation at different levels of detail: minutes, hours, days and months for aspects concerning time quantities, or processes, tasks and work items for aspects related to the amount of work. Shifts from one level of detail to another should also be supported by the provision of ad hoc transformation functions.
Query b2) also poses the problem of the extensibility of the measurement framework. Given
fundamental measurements such as “length of a time interval” or “number of work items”
we can follow a compositional approach to extend the measurement framework. For exam-
ple, query b2) can be regarded as the evaluation of a function that calculates the average
of a set of values that, in turn, are evaluated by the application of a fundamental measure.
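For example, the average of query b2) is simply an aggregate operator applied to a set of values, each produced by a fundamental measure. The durations below are invented values in seconds:

```python
# Hypothetical durations of several instances of "Analyze Request",
# each obtained by a fundamental measure (length of a time interval).
durations = [120, 95, 150, 110]

def average(values):
    """Aggregate operator composed on top of fundamental measures."""
    return sum(values) / len(values)

print(average(durations))  # 118.75
```

Other aggregates (minimum, maximum, standard deviation) extend the framework the same way, without touching the fundamental measures themselves.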
To obtain precise answers to queries such as those above, we need to develop a measurement framework by means of which numbers can be assigned to the various entities represented within the WfMS. First we will introduce the fundamental data structures, then we will discuss the primitive operators on these structures and finally we will show how this model can be extended.
For the purpose of discussion, it is convenient to abstract process, task and work item instances into a single set of instances. Clearly, during the implementation phase, the set could be divided into components to improve query response time.
3.2.2 Instances and Events
The fundamental structures considered in the construction of the measurement framework
are two sets. The first set contains instances of processes, tasks and work items; the second
is composed of events that have some semantic correspondence with instances. Let I be
the set of instances. The generic element i of I takes the form of a 6-tuple:
i = (i type, i name, father, previous, actor name, current state)
where:
a) i type ∈ {work item, task, process};
b) i name is a symbolic name of the instance i;
c) father ∈ I represents the father of i, i.e., the instance from which i has been generated;
d) previous ∈ I is the task instance that enables, in sequence, the starting of the task
instance i;
e) actor name is the actor related to the instance when i type = work item;
f) current state, the current state of i, is a member of:
{notRunning, running, completed, suspended, aborted}
As an example, consider the instances id21, id25 and id28 below.
id21 = (task, “collect customer info”, id15, id19, null, completed).
id25 = (task, “analyze request”, id15, id21, null, running).
id28 = (work item, “financial data analysis”, id25, null, ”Brown”, completed).
The tuples id21 and id25 represent task instances created in the context of the process instance id15. The tuple id28 is a work item instance created in the context of the task instance id25. An example of an instance set can be obtained from the graphical representation of fig. 1 if we remove the roots and the arcs labeled “is-a”. The instance set of that example contains 3 instances of processes, 8 of tasks and 10 of work items.
An event set, denoted by E, is composed of events, that is, 4-tuples like the following:
e = (e type, instance, time, new state) where:
a) e type ∈ {createdInstance, startedInstance, completedInstance,
suspendedInstance, abortedInstance};
b) instance ∈ I is the instance to which the event is related;
c) time is an attribute that records the time at which the event occurs;
d) new state represents the state assumed by the related instance when the event hap-
pens.
Note that the time attribute allows us to define an order relationship on the set of
events: ei < ej if time(ei) < time(ej) ∀ei, ej ∈ E.
The events createdInstance and startedInstance set an instance's state to notRunning and running respectively. For the remaining events, their names indicate which state will be reached by an instance as soon as the event occurs. For example, the state completed is reached when the event completedInstance happens. A detailed state transition diagram can be found in [Hol94].
It is worthwhile to highlight that, in the case of a work item, the event createdInstance
implies the work item is assigned to a worklist, while the event startedInstance corre-
sponds to the selection of such a work item from the worklist.
An element of E is shown below where the event “launching of instance id3 at time 10” is
considered.
(startedInstance, id3, 10, running)
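Under the definitions above, the two structures can be sketched as plain records. This is a hypothetical Python representation; the field names follow the 6-tuple and 4-tuple definitions, with underscores standing in for the spaces:

```python
from collections import namedtuple

Instance = namedtuple("Instance",
    "i_type i_name father previous actor_name current_state")
Event = namedtuple("Event", "e_type instance time new_state")

# The example tuples id21 and id28 from the text:
id21 = Instance("task", "collect customer info", "id15", "id19",
                None, "completed")
id28 = Instance("work item", "financial data analysis", "id25", None,
                "Brown", "completed")

# The event "launching of instance id3 at time 10":
e1 = Event("startedInstance", "id3", 10, "running")
e2 = Event("completedInstance", "id3", 42, "completed")

# The order relation on events follows from the time attribute:
print(e1.time < e2.time)  # True
```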
It is useful to introduce two special events, e0 and ec, of type clock, that conform to the following pattern:
(clock, null, time, null)
e0 represents the start of the system operation (with time(e0) = 0) and the event ec represents the occurrence of the event “clock reading”; time(ec) will contain the value of the clock.
During its life cycle an instance is created, executed (possibly suspended and resumed) and finally completed. In other words, an instance can change state several times and, as shown in fig. 3.2, this means that to each instance there corresponds the set of events that had an impact on that instance; the attribute instance takes care of this correspondence. It is also useful to record in an event e the state that the corresponding instance enters as a consequence of e. This will be used in the definition of some measures that need past state information not stored by the instances.
Figure 3.2: Relationship between the instance set and the event set. Each instance can have many events related to it.
In the following, we will denote with:
Attr(I) = {i type, i name, father, previous, actor name, current state},
Attr(E) = {e type, instance, time, new state}
the set of names of attributes of instances and events respectively. The name of an
attribute will be also used as a function that, when applied to an object (for example
an event), extracts the corresponding value. Furthermore, map(f, {x1, x2, . . . , xn}) =
{f(x1), f(x2), . . . , f(xn)} will denote the usual operator for the application of a function
to a set of values. It is also convenient to denote with W ⊂ I the set of work items and
with X ⊂ S a subset of S where S can be E or I.
3.2.3 Measurement Operators
As noted in the previous section, the two fundamental quantities under consideration are
work and time. The primitive operators for measuring these quantities are fairly simple.
The first is computed as the cardinality of a set; the second one takes two events ei and
ej , and returns the length of the time interval between the respective occurrence times.
Let X be a subset of E or I. We have:
1. ♯(X) = |X|
2. ∆(ei, ej) = abs(time(ej) − time(ei))
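As an illustration (not part of the thesis), the two operators can be sketched in Python, modelling events as plain dictionaries with a "time" key:

```python
# Sketch of the two measurement operators; the dictionary modelling of
# events is an assumption for illustration only.

def card(xs):
    """The # operator: cardinality of a set of instances or events."""
    return len(xs)

def delta(ei, ej):
    """The delta operator: length of the time interval between two events."""
    return abs(ej["time"] - ei["time"])

# Two hypothetical events, occurring at times 10 and 25.
e1 = {"e_type": "startedInstance", "time": 10}
e2 = {"e_type": "completedInstance", "time": 25}
```

Note that delta(e1, e2) yields 15 regardless of the argument order, matching the abs in the definition.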
3.2.4 Primitive Operators
Even though the operators ∆ and ♯ are easy to apply, we can take advantage of their
simplicity only when the appropriate objects on which they must work have been selected
and possibly grouped. Consider again the query "how many work items have been completed
by a given employee?". The answer can be provided starting from the set of events. First,
we select the events with i type=work item and new state=completed for the employee;
then, we apply ♯ to the resulting set. What we need is a set of primitive operators that
allow us to supply ∆ and ♯ with the right arguments. Here, we discuss in detail five
primitive operators: three selectors, a partition operator and an operator on graphs. The
first two selectors, instance and event set, allow following the correspondence between
an instance and its events; the third selector is a filter. The definitions of filter, set
partition and path between two nodes of a graph are analogous to the usual mathematical
definitions. Nevertheless, as shown in the following, the way of combining them with other
operators in a hierarchy of measures is meaningful.
op. 1: instance(e) = i
op. 2: event set(i) = {ei1, ei2, . . . , ein | instance(eij) = i, with j = 1, . . . , n}.
The operator event set permits the selection of events in E , starting from an instance
i ∈ I. In particular, it selects the set of events corresponding to i, retaining the order
relation defined on E. For example, the application of first(event set(i)) returns the first
event related to i.
op. 3: filter
The operator filter is the standard operator for the choice of elements from a set,
according to a first order predicate:
filter(X, p) = X′ with X′ ⊆ X such that, ∀x ∈ X:
p(x) = true if x ∈ X′;
p(x) = false if x ∉ X′.
The application of filter shown in example 3.1 starts from the set I and returns all
the task instances of name "approval" that are part of the process "loan request".
example 3.1: filter(I, p);
p = i type(i) = task ∧
i name(i) = “approval” ∧
i name(father(i)) = “loan request”.
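A minimal Python sketch of example 3.1 follows; the records are hypothetical, instances are modelled as dictionaries and the father attribute as a nested record:

```python
# Hypothetical instance set; attribute names mirror Attr(I).
instances = [
    {"i_type": "task", "i_name": "approval",
     "father": {"i_name": "loan request"}},
    {"i_type": "task", "i_name": "approval",
     "father": {"i_name": "hiring"}},
    {"i_type": "work item", "i_name": "approval",
     "father": {"i_name": "loan request"}},
]

def filter_op(xs, p):
    """The filter operator: keep the elements of xs satisfying predicate p."""
    return [x for x in xs if p(x)]

# Example 3.1: task instances named "approval" within "loan request".
p = lambda i: (i["i_type"] == "task"
               and i["i_name"] == "approval"
               and i["father"]["i_name"] == "loan request")

selected = filter_op(instances, p)
```

Only the first record satisfies all three conjuncts of the predicate.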
Now, consider the problem of selecting all the starting events for which work item
instances called “evaluation” have been executed. It seems natural to start from the set
E and select the events that satisfy the predicate p as defined in example 3.2. However,
this problem could be also solved starting from the set I. Indeed, the inner filter applied in
example 3.3 takes as first argument the set I that is filtered according to the predicate p1.
Once the set of work items of name “evaluation” has been chosen, we map the function
event set to each instance of work items. Then, the resulting set of event sets is merged
enabling the application of the outer filter.
Looking at fig. 3.2 we can observe that the instance set I contains fewer elements than
the event set E . Therefore the second query is more efficient.
example 3.2: filter(E , p);
p = i type(instance(e)) = work item ∧
i name(instance(e)) = “evaluation” ∧
e type(e) = startedInstance.
example 3.3: filter(merge(map(event set, filter(I, p1))), p2);
p1 = i type(i) = work item ∧
i name(i) = “evaluation”;
p2 = e type(e) = startedInstance.
A further application of filter allows us to define the operator event; this operator, given
an instance i and a predicate p, returns the first event related to i that satisfies p. Although
event is defined by means of filter and event set, it can be considered as a primitive
operator.
example 3.4: event(i, p) = first(filter(event set(i), p)).
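Under the same dictionary-based modelling (hypothetical data), event set and the derived operator event of example 3.4 can be sketched as:

```python
# Hypothetical event set; each event carries the id of its instance.
events = [
    {"instance": "id3", "time": 3, "e_type": "createdInstance"},
    {"instance": "id3", "time": 10, "e_type": "startedInstance"},
    {"instance": "id7", "time": 4, "e_type": "createdInstance"},
]

def event_set(i):
    """All events of instance i, retaining the order defined on E."""
    return [e for e in events if e["instance"] == i]

def event(i, p):
    """Example 3.4: the first event of instance i satisfying predicate p."""
    return next(e for e in event_set(i) if p(e))
```

For instance, event("id3", lambda e: e["e_type"] == "startedInstance") returns the event occurred at time 10.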
op. 4: partition
Sometimes we want to group elements of an instance set or of an event set considering a
common property as a grouping criterion. It is appropriate, for our purposes, to express
such a property in terms of attributes of instances or events.
Let us consider a set of attributes A = {a1, a2, . . . , as} ⊆ Attr(X), where X can be the set
E or the set I. Then, partition(X,A) = {X1,X2, . . . ,Xn} is a partition of X, such that:
1. ∀i = 1, . . . , n: Xi ≠ ∅
2. X1 ∪ X2 ∪ . . . ∪ Xn = X
3. Xi ∩ Xj = ∅ ∀i, j ∈ [1, . . . , n], i ≠ j
4. ∀xih, xik ∈ Xi, h ≠ k: aj(xih) = aj(xik) ∀j ∈ [1, . . . , s]
where aj(x) is the value that the attribute aj assumes on x. In other words, given a set X
and a list of attributes, partition returns a partition of X in which each element in the
partition contains the elements of X that have the same value for the specified attributes.
The example below describes the output of an application of partition. This behaviour is
very useful for the composition of performance analysis reports.
example 3.5: Let X be a set of work items:
X = {(. . . , 12, . . . , “evaluation”, . . . , “Brown”)
(. . . , 25, . . . , “redemption plan ”, . . . , “White”)
(. . . , 32, . . . , “redemption plan ”, . . . , “White”)
(. . . , 56, . . . , “redemption plan ”, . . . , “Brown”)
(. . . , 63, . . . , “evaluation”, . . . , “Brown”)}.
The application of partition(X, {i name, actor name}) gives the result:
{ {(. . . , 12, . . . , “evaluation”, . . . , “Brown”) (. . . , 63, . . . , “evaluation”, . . . , “Brown”)}
{(. . . , 25, . . . , “redemption plan”, . . . , “White”) (. . . , 32, . . . , “redemption plan”, . . . , “White”)}
{(. . . , 56, . . . , “redemption plan”, . . . , “Brown”)} .
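The partition operator applied in example 3.5 can be sketched as a grouping by attribute tuple; the data and attribute names below are those of the example, modelled as dictionaries:

```python
from collections import defaultdict

# The work items of example 3.5, reduced to the relevant attributes.
work_items = [
    {"id": 12, "i_name": "evaluation", "actor_name": "Brown"},
    {"id": 25, "i_name": "redemption plan", "actor_name": "White"},
    {"id": 32, "i_name": "redemption plan", "actor_name": "White"},
    {"id": 56, "i_name": "redemption plan", "actor_name": "Brown"},
    {"id": 63, "i_name": "evaluation", "actor_name": "Brown"},
]

def partition(xs, attrs):
    """Group the elements of xs that agree on every attribute in attrs."""
    groups = defaultdict(list)
    for x in xs:
        groups[tuple(x[a] for a in attrs)].append(x)
    return list(groups.values())

blocks = partition(work_items, ["i_name", "actor_name"])
```

As in example 3.5, three blocks result: two "evaluation"/Brown items, two "redemption plan"/White items and one "redemption plan"/Brown item.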
op. 5: path
The last operator that we consider in this section takes two task instances, say is and ie,
and returns the sequence of task instances traversed from is to ie, if any.
path(is, ie) =
< i1, . . . , in > if i1 = is ∧ in = ie ∧
∀h ∈ {1, . . . , n − 1}: ih = previous(ih+1) ∧ father(ih) = father(ih+1);
null otherwise
In other words, a task instance ih is included in the sequence if there exists another task
instance ik such that ik = previous(ih) and both instances belong to the same process
instance. Note that this operator works correctly also when a path includes a loop. Con-
sider for example, the following fragment of process definition.
Figure 3.3: A process definition containing a loop
Assuming that the loop is repeated exactly once, a possible sequence crossed during the
process enactment could be:
< i1, i2, i3, i′2, i′3, i4 >
where i2 is an instance of the task named T2 and i′2 is another instance of the same task.
Therefore, since we know the value of ie, if we go through a loop before arriving at ie,
the crossed task instances will be included in a finite sequence.
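The path operator can be sketched by walking the previous links backwards from ie; the instance records below are hypothetical, each task instance storing its previous task and its father process:

```python
# Hypothetical task instances of one process instance "p".
tasks = {
    "i1": {"previous": None, "father": "p"},
    "i2": {"previous": "i1", "father": "p"},
    "i3": {"previous": "i2", "father": "p"},
}

def path(i_s, i_e):
    """Sequence of task instances from i_s to i_e, or None if no such path."""
    seq = [i_e]
    while seq[-1] != i_s:
        prev = tasks[seq[-1]]["previous"]
        if prev is None or tasks[prev]["father"] != tasks[seq[-1]]["father"]:
            return None            # i_s is not reachable backwards from i_e
        seq.append(prev)
    return list(reversed(seq))
```

Because every repetition of a task inside a loop produces a distinct instance, the backward walk always terminates.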
3.3 Fundamental measurements
The fundamental measurements discussed in this section derive from the composition
of the primitive operators introduced in the previous section. First, we discuss simple
applications of the operators ∆ and ♯, then we define measures concerning queues and
finally we provide a measure on the topology of tasks.
3.3.1 Duration of Instances
The application of ∆ to the duration of instances is immediate. The two measures below
concern the duration of an instance, which can be a work item, a task or a process. The first
can be calculated simply by retrieving the events createdInstance and completedInstance.
The second is employed to evaluate the duration of an instance still in execution. In this
case, the event of reference for the upper bound of the time interval is ec, i.e., the reading
of the clock.
fm. 1: instance duration(i) = ∆(event(i, e type(e) = createdInstance),
event(i, e type(e) = completedInstance)).
fm. 2: current duration(i) = ∆(event(i, e type(e) = createdInstance), ec).
The following measure provides an indication about the working duration, that is the
elapsed time between the starting and the completion of an instance. For example, if the
instance is a work item, it returns the time interval between the selection of a work item
(event startedInstance) assigned to an actor and its completion.
fm. 3: working duration(i) = ∆(event(i, e type(e) = startedInstance),
event(i, e type(e) = completedInstance)).
Fig. 3.4 shows the difference between instance duration and working duration for an
instance. The waiting time can be considered as the time interval during which an instance
remains in a queue waiting to be processed. The definition of waiting time is given in the
section that discusses the queues.
It is frequently necessary to compute the sum of measures of the same kind. The
function sigma implements the concept of "summation of measures", where the input
parameter measure gets as a value the measurement definition to apply to the elements of
a set X. We assume the availability of a function sum that, given a set of values, returns
the sum of all the members in the set:
fm. 4: sigma(measure,X) = sum(map(measure,X)).
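A sketch of sigma together with fm. 1; here each hypothetical instance carries the times of its createdInstance and completedInstance events directly:

```python
# Hypothetical completed instances with their event times.
done = [{"created": 0, "completed": 7},
        {"created": 2, "completed": 5}]

def instance_duration(i):
    """fm. 1: time between createdInstance and completedInstance."""
    return i["completed"] - i["created"]

def sigma(measure, xs):
    """Summation of a measure over the elements of a set."""
    return sum(measure(x) for x in xs)
```

sigma(instance_duration, done) sums the two durations 7 and 3, giving 10.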
Figure 3.4: Some time measures for an instance
fm. 5: work(I, p) = ♯(filter(I, p))
Such a measure is quite general and is computed by first applying a filter on the
set of instances and then evaluating the cardinality of the resulting set. The examples
below show its application. Example 4.1 evaluates the number of instances of the process
named "loan request" currently in the system; example 4.2 yields the number of running
task instances for the same process; and example 4.3 the number of work items of name
"evaluation" completed by Brown in the context of the task "analyze request". Finally,
the workload of the actor Brown is computed in example 4.4.
example 4.1: work(I, p)
p = i type(i) = process ∧
i name(i) = “loan request”.
example 4.2: work(I, p)
p = i type(i) = task ∧
i name(father(i)) = “loan request” ∧
current state(i) = running.
example 4.3: work(I, p)
p = i type(i) = work item ∧
i name(i) = “evaluation” ∧ i name(father(i)) = “analyze request” ∧
actor name(i) = “Brown” ∧ current state(i) = completed.
example 4.4: work(I, p)
p = i type(i) = work item ∧
actor name(i) = “Brown”.
The definition of work distribution given below allows us to evaluate the workload for
each actor involved in the progress of workflows. It is worthwhile to observe that the
contribution of partition is necessary. Indeed, we do not know in advance the names of
all the actors involved, and we could not obtain the same result with a filtering technique
alone.
fm. 6: work distribution(X) = map(♯, partition(X, {actor name})).
For example, by the application of work distribution to the set X shown in the
example 3.5 we get the result {3, 2}, that is, the workloads of Brown and White.
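Using the data of example 3.5, work distribution can be sketched as a partition by actor followed by the cardinality operator (dictionary modelling, as above):

```python
# The work items of example 3.5, reduced to the relevant attribute.
work_items = [
    {"id": 12, "actor_name": "Brown"},
    {"id": 25, "actor_name": "White"},
    {"id": 32, "actor_name": "White"},
    {"id": 56, "actor_name": "Brown"},
    {"id": 63, "actor_name": "Brown"},
]

def work_distribution(xs):
    """fm. 6: the cardinality of each block of the partition by actor_name."""
    groups = {}
    for x in xs:
        groups.setdefault(x["actor_name"], []).append(x)
    return {actor: len(block) for actor, block in groups.items()}
```

The result {'Brown': 3, 'White': 2} corresponds to the workloads {3, 2} mentioned above.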
3.3.2 Queues
The measurements considered with respect to queues normally regard instances of
work items. In a WfMS, queues are usually interpreted as work items waiting for service by a
given actor or a given role [Ora01]. We can extend this interpretation by defining measures
that are independent of the place or the entity that supplies the service. Note that the
sentence "work waiting to be executed" is wider than "work waiting to be executed by
Brown". Indeed, as shown below, we can qualify the entity that determines the queue in
several ways. In this section X is always a subset of I obtained by a filtering operation.
First, we define the waiting time of an instance in a queue, then the current queue in its
general and particular settings and, finally the queue at a generic time instant and its
trend with the passing of time.
fm. 7: waiting time(i) = ∆(event(i, e type(e) = createdInstance),
event(i, e type(e) = startedInstance)).
fm. 8: current queue(X) = work(X, p);
p = current state(i) = notRunning.
waiting time represents the waiting time of a given instance in a queue (see fig. 3.4).
It is defined as the time between the occurrence of the events createdInstance and
startedInstance. When the queue length is determined with respect to the current
instant of time, its calculation is very simple. As the definition of current queue states,
starting from the set of instances, it is sufficient to retrieve those that are in the state
notRunning. The application of current queue gives the number of all the instances
waiting service in the system.
The calculation of a queue can be specialized if we want to focus our attention only on
certain participants in the workflows. For example, we could wish to determine the queue
of work items awaiting attention from the actor Brown or the queue of work items assigned
in the context of the process "loan request". We can exploit a technique similar to
the streams or pipelines proposed in other fields [ASS85, SGG02] in order to select
the entity for which the queue must be evaluated. Figure 3.5 illustrates a simple pipeline;
the specification of the filtering predicate allows us to focus the attention on a given entity.
The two queries exemplified above can be formalized as:
Figure 3.5: A simple pipeline
example 4.5: current queue(filter(I, pactor));
pactor = i type(i) = work item ∧
actor name(i) = “Brown”.
example 4.6: current queue(filter(I, pprocess));
pprocess = i type(i) = work item ∧
i name(father(father(i))) = “loan request”.
We cannot apply the definition of current queue if the instant of time considered for the
calculation is a past instant of time. Indeed, since the definition of instance does not take
past state information into account, the length of a queue at a given time instant must be
defined in a different manner.
fm. 9: instant queue(X, t) = work(X, pnotRun) − work(X, pRunning);
pnotRun = time(event(i, new state(e) = notRunning)) ≤ t;
pRunning = time(event(i, new state(e) = running)) ≤ t.
To determine the length of a queue at a (past) time instant t, we must evaluate the number
of work items notRunning and running before t. In other words, considering the
instances of X created before t, we must choose only those that, at the instant t, are in the
state notRunning. Note that if we evaluate instant queue(X, t) for t = time(ec), we
obtain the same value as current queue(X).
A meaningful application of instant queue is the composition of a measure that evaluates
the changes to which a queue is subjected with the passing of time. To evaluate this trend,
we need to identify the instants of time in which there has been a change in the queue. In
particular, the creation of an instance in the system increments by 1 the size of a queue,
while the starting of an instance decrements it by the same value. Fig. 3.6
illustrates this point.
If we compute the length of a queue for each instant in which there has been a change,
we obtain a set of values that can be exploited to evaluate, for example, the max queue
length. Let T = {t1, t2, . . . , tn} be the set of time instants in which the queue has changed:
T = map(time, filter(merge(map(event set, X)), pchange));
pchange = new state(e) = notRunning ∨ new state(e) = running.
From T and the set of instances X we build a set of pairs as follows:
XT = {(X, ti) | ti ∈ T}.
Now, we can define:
Figure 3.6: Changes of the queue length in time.
fm. 10: queue trend(X) = map(instant queue,XT ).
The application of queue trend outputs a set of values denoting the length of the
queue at each time instant of T. If we want to calculate, for instance, the max (min,
avg) queue length, the usual functions for computing the max (min, avg) value over the set
of values returned by queue trend can be applied. As already mentioned in this section,
the definition of queue trend can be specialized if we want to restrict our attention to an
actor. In particular, we could be interested in the trend of a queue during a given period
of time. We will return to this problem in section 3.5, where windowing and pipeline
techniques will be used to formulate more elaborate measures.
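instant queue and queue trend can be sketched as follows; in this hypothetical modelling, each work item records the times at which it entered the states notRunning and running:

```python
# Hypothetical work items: "queued" is the time of entering notRunning,
# "started" the time of entering running.
items = [{"queued": 0, "started": 4},
         {"queued": 1, "started": 2},
         {"queued": 3, "started": 6}]

def instant_queue(xs, t):
    """fm. 9: queue length at time t (entered queue minus left queue)."""
    queued  = sum(1 for i in xs if i["queued"] <= t)
    started = sum(1 for i in xs if i["started"] <= t)
    return queued - started

# Instants at which the queue changed, then the trend (fm. 10).
change_times = sorted({v for i in items for v in (i["queued"], i["started"])})
trend = [instant_queue(items, t) for t in change_times]
```

Here trend is [1, 2, 1, 2, 1, 0]; its maximum, 2, is the max queue length over the observed period.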
3.3.3 Task Topology
Given a process and two task instances is and ie of the process such that is occurs before
ie, we are interested in evaluating the time elapsed from the execution of is to that of ie.
The figure below suggests that the definition of the intertask duration can be given as the
time interval between the creation of is and the completion of ie.
Let is and ie be two task instances such that father(is) = father(ie). We have:
Figure 3.7: Intertask duration from T2 to T6.
fm. 11: intertask duration(is, ie) =
0 if path(is, ie) = null;
∆(event(is, e type(e) = createdInstance),
event(ie, e type(e) = completedInstance)) otherwise
For example, if is is an instance of “receive loan request” and ie an instance of
“request approval” in the same process instance, we can evaluate the elapsed time
between the reception of the request and its approval.
3.4 Derived measurements
In this section we show how to obtain derived measurements exploiting the fundamental
ones. To illustrate the possibilities offered by the operators introduced so far, we here
discuss three derived measurements in detail: actor time contribution, routing and residual
duration. The first involves the notion of time, the second that of work. The third
measure provides an estimate of the time interval that remains to complete a process.
Other derived measures are representative of possible indicators in the context of the
proposed framework and are summarized in the appendix.
3.4.1 Contributions to the execution of workflows
An important part of the performance analysis regards the contribution that a component
of a process makes to the progress of the process itself. Consider, for example, an indicator
that takes into account how the duration of a task affects the total duration of a process.
An analyst could use this information to discover possible bottlenecks. From another
point of view, we could be interested in the contribution that resources, especially human
resources, make to the progress of one or more processes. In fig. 3.8 two different kinds
of contributions are shown. In the first, given a process P, the contribution of the generic
actor to P is considered; in the second, we are interested in the contribution of an actor
to the process Pi provided that he/she works on the processes P1, . . . , Pn. The evaluation
can be done from the point of view of time overhead or work overhead and is expressed
as a percentage.
Now, it is opportune to introduce two definitions that are essentially relative frequencies
and that capture the general notion of contribution.
Figure 3.8: Kinds of contribution of an actor: 1) on a single process; 2) on several
processes.
sigma and work can be exploited
to define the notions of time contribution and work contribution respectively. Care must
be taken in specifying the sets X1 and X2, in the first case, and the predicates p1 and p2,
in the second, since a proportion requires that the numerator be less than or equal to the
denominator [Kan95]. In the case of work contribution this means that the set filtered
for the numerator must be a subset of the one filtered for the denominator.
dm. 1: time contribution(measure,X1,X2) = sigma(measure,X1) / sigma(measure,X2) ∗ 100
dm. 2: work contribution(measure, p1, p2) = work(I, p1) / work(I, p2) ∗ 100
In order to facilitate the reading, we will not report, in the rest of the chapter, the
predicates charged with type checking on the arguments of the measures. We discuss here two
measures that evaluate how much an actor contributes to the execution of processes in
terms of devoted time.
case 1: given a process P , what is the time contribution of actor i on P?
Let tactor k be the working time spent by the generic actor k on P. In general, actor k
may be assigned more than one work item, even in the context of a single instance of P.
Then:
atc1 = tactor i(P) / ( Σ j=1..n tactor j(P) ) ∗ 100
= time contribution(working duration, filter(I, p1), filter(I, p2));
p1 = p2 ∧ actor name(i) = actor;
p2 = current state(i) = completed ∧ i name(father(father(i))) = p name.
case 2: given an actor, what is the contribution of actor on Pi if he/she works on
P1, . . . , Pn?
Let tprocess i be the time spent by actor on Pi. Then:
atc2 = tprocess i(actor) / ( Σ j=1..n tprocess j(actor) ) ∗ 100
= time contribution(working duration, filter(I, p1), filter(I, p2));
p1 = p2 ∧ i name(father(father(i))) = p name;
p2 = current state(i) = completed ∧ actor name(i) = actor.
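A sketch of time contribution for case 1, with hypothetical working durations: Brown's work items on P are measured against those of every actor on P:

```python
# Hypothetical completed work items of one process P, with working durations.
work_items = [{"actor": "Brown", "dur": 3},
              {"actor": "Brown", "dur": 4},
              {"actor": "White", "dur": 5}]

def sigma(measure, xs):
    return sum(measure(x) for x in xs)

def time_contribution(measure, x1, x2):
    """dm. 1: percentage ratio of two summations of the same measure."""
    return sigma(measure, x1) / sigma(measure, x2) * 100

dur = lambda w: w["dur"]
brown = [w for w in work_items if w["actor"] == "Brown"]
atc1 = time_contribution(dur, brown, work_items)   # 7/12 * 100
```

The numerator set is a subset of the denominator set, as the proportion requires.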
3.4.2 Routing
In the presence of decision points that entail several routes of the workflow, it is useful to
evaluate a measure of routing. For example, a typical question for the process “loan
request” is:
‘How many loan requests have been accepted and how many have been refused after the
completion of “analyze request”?’
Consider the process schema in fig. 3.9, composed of five tasks.
If 6 process instances are enacted, we could observe the following paths:
1. T1, T2, T3, T5
2. T1, T2, T3, T5
Figure 3.9: Routing percentages of a process
3. T1, T2, T4, T3, T5
4. T1, T2, T3, T5
5. T1, T2, T4, T5
6. T1, T2, T3, T5
Observing that the task T3 is executed 4 times immediately after T2 and that T2 is
launched in each process instance, the percentage of routing from T2 to T3 is 4/6 ∗ 100. In
general, we are interested in the number of times that process instances have scheduled a
task Tj immediately after the completion of Ti.
The definition of routing can be given in terms of work contribution. Indeed, the routing
from Ti to Tj can also be interpreted as the percentage contribution of Tj to discharging
the work passed through Ti.
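The routing percentage of the example above can be sketched directly on the six observed paths (a hypothetical representation of paths as task-name lists):

```python
# The six paths observed over the process instances.
paths = [
    ["T1", "T2", "T3", "T5"],
    ["T1", "T2", "T3", "T5"],
    ["T1", "T2", "T4", "T3", "T5"],
    ["T1", "T2", "T3", "T5"],
    ["T1", "T2", "T4", "T5"],
    ["T1", "T2", "T3", "T5"],
]

def routing(ti, tj, paths):
    """Percentage of instances scheduling tj immediately after ti."""
    follows = sum(1 for p in paths
                  for a, b in zip(p, p[1:]) if (a, b) == (ti, tj))
    executed = sum(1 for p in paths if ti in p)
    return follows / executed * 100
```

routing("T2", "T3", paths) gives 4/6 * 100, as computed in the text.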
3.4.3 Support to Proactive Systems
The capability of forecasting how much time or work is necessary to complete a process still
in execution is one of the characteristics that one could require of an advanced WfMS.
In [Hal97] a definition of workflow is proposed that emphasizes the role of the WfMS
as a proactive system. We claim that a WfMS should benefit from quantitative data about
the performance of processes and that the adoption of a proactive behavior rests on the
availability of such data. Suppose that one is interested in evaluating the time necessary
to complete a process. We can assess this quantity by means of residual duration.
Let P be a process and ip an instance in execution of P . We can assess the residual
duration of ip considering the difference between the average duration of already completed
instances of P and the current duration of ip. Remembering that sigma evaluates a sum of
measures of instances (filtered by means of P ) and that work counts the number of such
instances we have:
dm. 4: residual duration(ip) =
sigma(instance duration, filter(I, p)) / work(I, p) − current duration(ip);
p = i name(i) = i name(ip) ∧
current state(i) = completed.
Depending on the value returned by the application of residual duration, there are
three possible interpretations of the expected residual duration of P. When the value
is equal to 0, we have an indication that from now on delay will be accumulated; if the
value is less than 0, the process is late; otherwise, the residual duration represents an
assessment of the time needed to complete the process. The amount of work necessary
for the process completion can be forecast by means of the residual duration measure
defined in appendix A.
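dm. 4 can be sketched as below, with hypothetical data in which each already completed instance of P carries its total duration:

```python
# Durations of already completed instances of process P (hypothetical).
completed = [{"duration": 8}, {"duration": 12}]

def residual_duration(current, completed):
    """dm. 4: average duration of completed instances minus current duration."""
    avg = sum(i["duration"] for i in completed) / len(completed)
    return avg - current
```

With an average of 10, an instance running for 6 time units is expected to need 4 more; a negative value, as for an instance running for 12, signals that the process is late.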
3.5 Advanced Measurement Techniques
Among the several factors that may contribute to the success of a monitoring system
for the evaluation of business processes, we believe that the choice of a simple
set of measures is certainly appropriate. Simple concepts ease the dissemination of the
evaluation results, increase the degree of acceptance of the monitoring system by workflow
participants and encourage self-evaluation.
To facilitate the reading of performance data, we propose two techniques: windowing and
reporting. On the one hand, these techniques allow us either to select portions of data on a
temporal basis or to aggregate them in order to perform comparative analysis. On the
other hand, they must be used carefully, since measurement errors could be introduced.
In section 3.5.3 the problem of performance evaluation of complex processes is addressed.
These processes are usually defined in terms of multiple levels of subprocesses, and the
evaluation of quantities in this setting can be faced by means of a recursive technique.
3.5.1 Windowing
Windowing is a technique widely used in several fields. For example, the working-set
model adopted by some operating systems [SGG02] uses a sequence of page references
and considers a window of predefined length on the sequence to identify which pages have
to be inserted in the working set. In the same manner, we can define a window on the
temporal axis, in order to focus our attention on a given time interval.
The operator event in win checks whether an event falls within a window, while
instance in win verifies whether an instance is created and completed within the window.
op. 6: event in win(e, [ts, te]) = time(e) ≥ ts ∧ time(e) ≤ te;
op. 7: instance in win(i, [ts, te]) =
event in win(event(i, e type(e) = createdInstance), [ts, te]) ∧
event in win(event(i, e type(e) = completedInstance), [ts, te]).
The most obvious application of this technique is the definition of the well-known
measure of work called throughput. In the case of automated business processes, the
throughput evaluates the number of instances of the same type that have been created
and completed per unit time.
Note that throughput is polymorphic; in particular, the set X could be of type process, task
or work item.
dm. 8: throughput(X, [ts, te]) = work(X, p) / (te − ts);
p = instance in win(i, [ts, te]).
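A sketch of throughput with windowing; as before, instances are modelled as hypothetical records carrying the times of their createdInstance and completedInstance events:

```python
# Hypothetical instances of the same type.
instances = [{"created": 1, "completed": 4},
             {"created": 2, "completed": 9},
             {"created": 5, "completed": 7}]

def instance_in_win(i, ts, te):
    """op. 7: created and completed within the window [ts, te]."""
    return ts <= i["created"] and i["completed"] <= te

def throughput(xs, ts, te):
    """dm. 8: instances created and completed in the window, per unit time."""
    return sum(1 for i in xs if instance_in_win(i, ts, te)) / (te - ts)
```

throughput(instances, 0, 8) is 2/8: the instance completed at time 9 falls outside the window, illustrating the approximation that windowing introduces.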
Another application of windowing is the determination of the max queue length during
the interval [ts, te] for the work items assigned to the actor "White". Some measures
introduced so far must be considered as an approximation of the real value. For example,
the correct interpretation of residual duration is the assessment of how much time
remains to complete a work. The use of windows entails approximation in the measurement
activity. Indeed, the function instance in win considers neither the instances started before
ts and still active within the window, nor the instances started within the window
but not completed before te. In general, the wider the window, the more precise
the approximation. The analyst has the responsibility to decide, on the one hand, the
reference interval (for example, March instead of January) and, on the other hand, whether
the approximation is precise enough.
Figure 3.10: A windowing application
3.5.2 Performance Evaluation Reports
During the system operation, measurement activities can be repeated to collect statistical
data on the performance of business processes. Of course, the availability of powerful tools
for the aggregation and visualization of performance data may meaningfully enhance the
analysis capability. Data aggregation for the production of reports can be obtained by
combining the operators introduced before and employing a pipeline technique.
Consider again the process "loan request" to compose a report such as that shown in table
3.1. Three actors, Brown, White and Yellow, are the participants in the workflows and they
receive work item instances. In the report, the time spent on each work item is distributed
among the actors. For example, the row referring to the work item "evaluation" shows that
13 is the sum of its working durations, together with the contributions given by Brown and
Yellow.
Looking at the structure of the report above, we can observe that several sections
shown in different gray patterns compose it. The values appearing in each section can
be computed starting from the measures defined in the framework and exploiting the
windowing and pipeline techniques. For example, the elements in the column “Total per
work item” are obtained by the application of the following function:
example 6.1: map(sigma(working duration,X),
partition(W, {i name(father(i)), i name(i)})).
Table 3.1: A statistical report

Performance Evaluation Report from 1-01-2002 to 31-12-2002

task name         work item name            Brown   White   Yellow   total per work item
Analyze Request   Evaluation                  10      -       3              13
Analyze Request   Redemption Plan              5      8       3              16
Analyze Request   Financial Data Analysis      -      3       3               6
. . .             . . .                      . . .   . . .   . . .          . . .
Total per actor                               35     56      21             112
More precisely, the function works according to the steps:
1. a partition of W is obtained collecting the work items by task name and work item
name;
2. map selects each subset in the partition;
3. on each subset selected by map, sigma is applied to obtain the summation of working
durations.
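The three steps above can be sketched as follows (hypothetical work-item records, with durations taken from the first rows of the report):

```python
from collections import defaultdict

# Hypothetical work items behind the first rows of the report.
work_items = [
    {"task": "Analyze Request", "name": "Evaluation", "actor": "Brown", "dur": 10},
    {"task": "Analyze Request", "name": "Evaluation", "actor": "Yellow", "dur": 3},
    {"task": "Analyze Request", "name": "Redemption Plan", "actor": "White", "dur": 8},
    {"task": "Analyze Request", "name": "Redemption Plan", "actor": "Brown", "dur": 5},
    {"task": "Analyze Request", "name": "Redemption Plan", "actor": "Yellow", "dur": 3},
]

def totals_per_work_item(ws):
    """Partition by (task, work item name), then sum the working durations."""
    groups = defaultdict(list)
    for w in ws:                                  # step 1: partition
        groups[(w["task"], w["name"])].append(w)
    return {k: sum(w["dur"] for w in block)       # steps 2-3: map + sigma
            for k, block in groups.items()}
```

The totals 13 and 16 match the column "total per work item" of the report.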
The remaining sections of the report are computed analogously. In fig. 3.11, an example
of compositional schema summarizes how the various sections of the described report have
been built.
3.5.3 Evaluating a Process Hierarchy
Due to the complexity of a number of real processes, their definition is often structured
by nesting subprocesses on several levels. In this way, a process definition can be built using
a hierarchical decomposition method in order to control the complexity of the definition
phase. Another advantage of this approach is that reusable components can be defined
and used as parts of other processes. On the other hand, the organization of a process in
terms of several subprocesses poses to the analyst problems about the setup of the
measurement activities and the correct interpretation of the collected values. For example,
if we consider the average task duration, the first question to be answered is: should the
Figure 3.11: Use of pipeline schema to build performance evaluation reports.
measure be evaluated for each subprocess or simply for the main process? In the first case,
the computation should be done by grouping the instances of tasks within each subprocess
instance, then evaluating the duration of single task instances and finally computing the
average values per subprocess. In the second case, the computation requires the arrange-
ment of all task instances within a single collection, i.e., the task instances belonging to
the main process, before the computation of the average value.
Obviously, the two computation methods lead to numerically and semantically different
results. The model developed so far presupposes that a process be defined as a two-level
tree where the root is a process, the first level contains tasks, and work items are
arranged on the leaves. Now we are ready to extend the model in order to handle cases
where a process definition comprises several levels of subprocesses, as shown in fig. 3.12-a).
There are two possible ways to approach these cases; both of them are conservative in the
sense that the measurement framework can be applied in its entirety. The first considers
the “splitting” of a process instance into disjoint subtrees i1, i2, . . . , in where i1 is the root
of the main process and i2, . . . , in are the roots of its subprocesses. An example is shown in
fig. 3.12, where a multiple-level hierarchy is mapped into four trees of height 2. According
to the previous definition, the generic subtree i contains only the tasks (together with the
corresponding work items) generated directly by i.
Figure 3.12: Splitting a process instance into subprocess instances; a) Multiple levels
hierarchy; b) Two levels hierarchies.
The work is done by the following recursive function:
split(i) =
    {i}                                       if filter(I, p) = ∅
    union({i}, map(split, filter(I, p)))      otherwise
where p(j) ≡ type(j) = process ∧ father(j) = i.
We are now in the position to apply the measurement framework. For example, we
could evaluate the duration of each subprocess instance, the number of completed work
items in them, or the average values discussed above. On the other hand, if we are
interested in the average task duration with respect to the whole process, then we have
two possibilities.
Let m1, . . . , mn be the set of average task durations evaluated for the set of subprocesses.
We could assess the average value for the main process through the sample midrange
formula (m1 + mn)/2, where m1 and mn represent the first and the last value in the order
statistics of the considered random sample. The use of the midrange formula is just an
example of how the analyst could infer an assessment of a quantity related to a global
process, starting from local information obtained by the analysis of its subprocesses. In
some cases this method leads to precise results, as in the evaluation of minimum and
maximum values in a set of measures. In general, as the application of the simple midrange
formula shows, the measure must be interpreted as an approximation of the real value.
The second possibility arises from the transformation of the original process structure
into an equivalent structure obtained by flattening the hierarchy. This can be done by
replacing the value of the father attribute of each task with a new value that points to
the root of the main process.
The first approach is reasonable when an analyst is interested in obtaining precise
measurements of the subprocesses. The second is preferable when a measure refers to an
aspect of the entire process.
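The difference between the two approaches can be illustrated with a small sketch. The tree representation below (dictionaries with type, children and duration fields) and the function names are assumptions introduced only for illustration; split follows the recursive definition given above.

```python
# Hypothetical sketch: the two ways of computing the average task duration
# for a hierarchical process instance. Data layout and names are assumed.

def split(instance):
    """First approach: recursively split an instance into two-level subtrees."""
    result = [instance]
    for child in instance["children"]:
        if child["type"] == "process":
            result.extend(split(child))
    return result

def avg_task_duration(instance):
    """Average duration of the tasks generated directly by the instance."""
    tasks = [c for c in instance["children"] if c["type"] == "task"]
    return sum(t["duration"] for t in tasks) / len(tasks)

def flatten(instance):
    """Second approach: repoint every task to the root of the main process."""
    flat = {"type": "process", "children": []}
    for node in split(instance):
        flat["children"].extend(c for c in node["children"] if c["type"] == "task")
    return flat

# Example: main process i1 with one subprocess i2.
i1 = {"type": "process", "children": [
    {"type": "task", "duration": 4},
    {"type": "process", "children": [
        {"type": "task", "duration": 10},
        {"type": "task", "duration": 2},
    ]},
]}

per_subprocess = sorted(avg_task_duration(s) for s in split(i1))  # [4.0, 6.0]
midrange = (per_subprocess[0] + per_subprocess[-1]) / 2           # 5.0
global_avg = avg_task_duration(flatten(i1))                       # 16/3 ≈ 5.33
```

The per-subprocess averages give a midrange estimate of 5.0, while the flattened hierarchy yields 16/3 ≈ 5.33, confirming that the two methods lead to numerically different results.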
3.6 Summary
The structure of the framework is summarized in table 3.2.
               TIME                  WORK                  TASK TOPOLOGY         QUEUE
CONTRIBUTIONS  time contribution     work contribution     task contribution
               task contribution     task contribution
               actor contribution    actor contribution
AGGREGATE      {max, min, avg} of    {max, min, avg} of    {max, min, avg} of
MEASURES       task contribution     task contribution     intertask duration
INDICATORS     residual duration     residual work         routing               queue throughput

MEASURES       instance duration     work                  intertask duration    waiting time
               current duration      work distribution                           current queue
               working duration                                                  instant queue

MEASURES                             OPERATORS
♯ (instances counting)               path partition
∆ (time intervals)                   map event set
                                     filter event

Table 3.2: The hierarchical measurement framework
Chapter 4
Performance Evaluation Monitors
Real-time responses to measurement queries are often required. For this purpose,
specialized software agents, which we call performance evaluation monitors (monitors
for short), can be modeled to interact directly with the WfMS. We could define, for
example, a monitor that follows the workload of a given employee over the next week or
that triggers an alert if the queue related to a task exceeds a given threshold.
In a nutshell, recalling the three concepts of what, when and how presented in chapter 2,
we can say:
Performance Evaluation Monitor = what(process measurement) +
when(time window) +
how(real-time).
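As a sketch, this what/when/how decomposition could be captured by a simple specification record; the field names below are assumptions made for illustration, not part of the thesis prototype.

```python
# Hypothetical representation of a performance evaluation monitor
# specification; field names are assumed.
from dataclasses import dataclass

@dataclass
class MonitorSpec:
    measures: list            # what: the process measurements to compute
    time_window: tuple        # when: the interval [t, t']
    mode: str = "real-time"   # how: continuous, event-driven evaluation

# e.g. a monitor following the current queue of a task during [0, 100]
spec = MonitorSpec(measures=["current queue"], time_window=(0.0, 100.0))
```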
Fig. 4.1 introduces the monitor component. The scenario exemplified in the figure shows
an analyst performing process monitoring. A monitor is activated at time t0, specifying a
time interval [t, t′] and providing one or more measures to compute. The monitor is in
charge of retrieving events that have already happened and of opening a connection with
the workflow engine, asking it to send future events during the time interval [t0, t′]. The
monitor decides which of the events recorded in the audit data or sent by the workflow
engine must be taken into account to compute the new value of the measure(s) it
manages.
Since the monitor recomputes a measure as soon as an event that implies a change in the
measure value happens, we say that a monitor performs continuous process measurement.
In other words, the process measurement frequency is the highest possible.
From the implementation point of view, monitor-based tools are more difficult to realize
Figure 4.1: Continuous process measurement using a monitor.
than tools oriented to ex-post analysis, because complex interactions with the underlying
WfMS are required during system operation. Because of such interactions, monitor-based
tools should be designed in tight connection with the target WfMS.
4.1 Monitor Types
It is convenient to distinguish between several kinds of monitors. On the one hand, this is
a necessity because a measure defined for a monitor of type “process” could be meaningless
if related to a monitor of type “actor”. On the other hand, the division of monitors into
several kinds facilitates their use.
We consider four types of monitors, with the monitor type “WfParticipant” that can be
further specialized in the subtypes “actor”, “role” and “Organizational Unit”:
• Process.
• Task.
• Work Item.
• WfParticipant.
– Actor.
– Role.
– Organizational Unit.
A number of measurements are common to process, task and work item monitors,
while others are specific to the process monitor. Analogously, some measures are
assigned to the WfParticipant monitor, while others, like “number of handled subprocesses”
and “number of working actors”, are the responsibility of the Organizational Unit monitor only.
Table 4.1 provides examples of measures that can be considered as possible candidates for
monitor specifications. Some of them have been discussed in this section; the definition of
the remaining measures can be found in [AEN02].
Note that the measure current queue appears in all the monitor types shown below. This
is due to the fact that this measure is abstract enough to capture sentences like:
• “Task to be executed”.
• “Task to be executed by OUj for process instances named P”.
The first is more general than the second, which can be specialized by means of a logic
predicate.
Table 4.1: Assignment of measures to monitor types.

Process, Task and Work item | Process                                         | WfParticipant     | Organizational Unit
duration                    | residual duration                               | time contribution | number of handled subprocesses
waiting time                | residual work                                   | work contribution | number of working actors
working duration            | intertask duration                              | current queue     |
current duration            | routing                                         | queue trend       |
current queue               | work contribution                               |                   |
                            | number of instances created, started, completed |                   |
                            | task contribution                               |                   |
Many WfMSs allow recording data containing the cost assessment for the tasks that
compose a process. In this case, specialized monitors could be specified as software agents
able to provide support to the ABC methodology.
The average cost of a task and the total cost of a process instance are examples of cost
measures. Since the measures assigned to the monitor types introduced in table 4.1 concern
time information (duration, waiting time, intertask duration, etc.) or the amount of work
(routing, work contribution, number of subprocesses), we might ask whether cost analysis
requires a new monitor type. Indeed, in a process-oriented view, cost measurements are
just another kind of computation that can be carried out by monitors of type work item,
task and process, depending on the granularity level of our analysis. However, a cost
monitor could be introduced as well if the perspective of the workflow analysis is focused
on the cost dimension.
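As an illustration of the two cost measures just mentioned, the sketch below computes them from a list of cost records; the record layout and the function names are assumptions, not part of the thesis prototype.

```python
# Hypothetical cost records as a monitor might collect them from the
# audit data; the layout is assumed for illustration.
task_costs = [
    {"process_instance": "id3400", "task": "OpenFile", "cost": 30.0},
    {"process_instance": "id3400", "task": "OpenFile", "cost": 50.0},
    {"process_instance": "id3400", "task": "ManagerAnalysis", "cost": 20.0},
]

def average_task_cost(records, task):
    """Average cost of all instances of a given task."""
    costs = [r["cost"] for r in records if r["task"] == task]
    return sum(costs) / len(costs)

def total_process_cost(records, instance):
    """Total cost of a process instance, summed over its task instances."""
    return sum(r["cost"] for r in records if r["process_instance"] == instance)

average_task_cost(task_costs, "OpenFile")   # 40.0
total_process_cost(task_costs, "id3400")    # 100.0
```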
4.1.1 The specification of workflow entities
Process measurement requires the precise specification of a set of entities on which a
measure must be applied. A workflow entity can be specified using the following grammar:
< workflow entities >::=< workflow entity > (‘;’< workflow entity >)∗
< workflow entity >::= ‘{’ (< workflows descriptor > |
< participant descriptor >) [‘with’ < predicate >] ‘}’
The goal is to provide a simple way to qualify the set of instances input to the process
measurement; < predicate > is a first-order predicate that allows one to state precisely
which instances must be selected.
An instance of a work item acquires its complete identity within the context of a task
instance that, in turn, exists in the context of a process instance. Furthermore, since an
analyst could be interested in a single instance or in a set of instances, a formal way to
qualify unambiguously the object of the process measurement is necessary.
The monitors offer this feature through the concept of workflows descriptor as stated by
the following grammar:
< workflows descriptor >::=< process descriptor > [‘.’< task descriptor >]
[‘.’< workitem descriptor >]
< process descriptor >::=< qualifier >
< task descriptor >::=< qualifier >
< workitem descriptor >::=< qualifier >
< qualifier >::= (‘&’< name > | < name list > | < identifier > | < identifiers > |
< literal > |‘any’)
< identifiers >::= (‘[’(< identifier >)(‘,’< identifier >)+‘]’)
< name list >::= (‘[’< names > ‘]’)|(‘[’< literals > ‘]’)
< name >::= (< letter > |‘ ’|‘:’)(< namechar >)∗
< literal >::= (‘"’ [∼ ‘"’]∗ ‘"’) | (‘'’ [∼ ‘'’]∗ ‘'’)
< literals >::= (< literal > (‘,’< literal >)∗)
< names >::= (< name > (‘,’< name >)∗)
< identifier >::= ‘id’(< digit >)+
The production < participant descriptor > is borrowed from those presented in the
grammar used to qualify organizations in X.500 [WKH97].
Data are stored in a hierarchical way: for example, an organizational unit could be part
of another organizational unit, or a role may be the specialization of another role. The
hierarchical nature of the data has brought about the use of the concept of directory for
the storage and retrieval of data within an organization. In a directory, all the information
is stored as objects; for each object, an entry in the directory is provided. An object
can represent a human resource, a computer or an organizational unit. It is necessary
to assign a name to each object in order to record and query its information. Directory
names are hierarchical and compose a name tree (fig. 4.2) in which each name is composed
of several parts that reflect the hierarchy.
Figure 4.2: An example of Directory Information Tree.
< participant descriptor >::= (‘any’|[< actor descriptor >]| < ou descriptor >)
< ou descriptor >::=< ou DN > (‘,’< ou DN >)∗
< ou DN >::=< ou key > ‘=’< qualifier >
< ou key >::= (‘C’|‘O’|‘OU’)
< actor descriptor >::=< actor DN > (‘,’< actor DN >)∗
< actor DN >::=< actor key > ‘=’< qualifier >
< actor key >::= (‘Role’|‘Position’|‘Employee’|‘System’)
Below, we will show some examples of the possible uses of this grammar. The process GrantPermission
is the case study that will be introduced in the next section to show the features of active monitors.
1. GrantPermission with owner=‘‘White" - All the instances of the process called GrantPermission
owned by the employee whose name is White are selected.
2. GrantPermission.[OpenFile, ManagerAnalysis] - The instances of tasks OpenFile and
ManagerAnalysis, included in the process GrantPermission, are selected.
3. id3400.* - All the task instances included in the process instance with identifier id3400
are selected.
4. GrantPermission.PreliminaryExamination.* - This instruction selects all the work item
instances that have been created in the context of task instances named PreliminaryExamination
that, in turn, are executed in the process instances GrantPermission.
5. OU=GrantOffice, Employee=any - All the employees that belong to the organizational unit
“GrantOffice” are selected.
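To illustrate how such descriptors might be matched against instances, the sketch below handles a simplified subset of the grammar (a process name optionally followed by a task name, a bracketed name list, or *); the instance representation and the function name are assumptions, not the thesis implementation.

```python
# Hypothetical matcher for a simplified subset of the workflows
# descriptor grammar; instance layout and names are assumed.
import re

def matches(descriptor: str, instance: dict) -> bool:
    parts = descriptor.split(".")
    proc = parts[0]
    if proc != "any" and instance["process"] != proc:
        return False
    if len(parts) > 1:
        task_part = parts[1]
        # A bracketed part like "[OpenFile, ManagerAnalysis]" is a name list.
        m = re.fullmatch(r"\[(.*)\]", task_part)
        names = [n.strip() for n in m.group(1).split(",")] if m else [task_part]
        if "*" not in names and instance["task"] not in names:
            return False
    return True

inst = {"process": "GrantPermission", "task": "OpenFile"}
matches("GrantPermission.[OpenFile, ManagerAnalysis]", inst)   # True
matches("GrantPermission.PreliminaryExamination", inst)        # False
```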
4.2 Software Architecture
The introduction of several monitor types, together with a mechanism for the precise specification
of which set of instances must be selected for the measurement activity, regards the aspect of “what”
to measure. The focus on “when” and “how” allows us to characterize a tool based on the concept
of performance evaluation monitor.
An important feature of the tool proposed in this section is its capability to continuously evaluate
workflow quantities while the workflow is in progress. In other words, the result of the measurement
is a set of values continuously evaluated during a time interval [t, t′] as the workflow engine handles
events relevant to the measure (or set of measures) computed by the monitor(s).
In fig. 4.3, the software architecture of the proposed tool is shown. The analyst interacts with
the Monitor Handler (1) to create or stop a monitor (2). As soon as the Monitor Handler has
received the information about the monitor to activate, it updates the Monitor Table (3), which
contains all the data related to the active monitors; in particular, the monitor table maintains a
unique identifier for each monitor in the system. If the analyst is interested in the monitoring of a
Figure 4.3: The software architecture of the proposed tool.
measure starting from a temporal point in the past, the Monitor Handler activates the Past Event
Handler (4). This module recovers past events from the Audit data (5) and sends them to the
Event Handler (6). The Event Handler is a software component that receives all the events from the
WfMS (7) and/or from the Past Event Handler; it might also need information about execution
data (instances) (8) to select events. As soon as the Event Handler receives an event, it evaluates
it in order to decide whether it must be rejected (9) or queued (10) in the Queue of Events. The
evaluation is performed on the basis of information stored in the Monitor Table (11). The events
useful to at least one of the active monitors are collected in the Queue. The Dispatcher retrieves the
events from the Queue (12) and distributes them to the monitors (13). A monitor receives only
those events that are useful for the computation of its own measure(s). As soon as a monitor
receives an event, it recomputes the measure and shows the result to the analyst (14), who can
interact with a monitor to change, for example, the scale of the data or to stop the computation.
The fragment of code shown below points out a significant part of the logic implemented by the
Event Handler module: the filtering technique. As soon as the Event Handler receives an event e,
the event is filtered according to the specification reported below. The function filter is called for
each monitor M active in the measurement tool. If the time of the event under consideration is
inside the defined time interval and there exists a measure in M that uses e to recompute its value,
then we proceed to verify the last constraint. Actually, we take into account the event e only when
the related instance can be described in terms of the workflows descriptor given in input to M.
filter (e: Event, M: Monitor)
begin
    if (time(e) ∈ time window(M)) and
       (∃ a measure m ∈ M | m uses e) and
       (instance(e) satisfies workflows descriptor(M))
    then enqueue(e)
    else discard(e)
end
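A minimal runnable version of the same filtering logic might look as follows; the Event and Monitor representations, and the way a measure's use of an event and the workflows-descriptor check are encoded, are simplifying assumptions made for illustration.

```python
# Hypothetical sketch of the Event Handler's filtering logic; all
# names and representations are assumed.
from dataclasses import dataclass, field

@dataclass
class Event:
    time: float
    kind: str          # e.g. "task_completed"
    instance: str      # descriptor of the generating instance

@dataclass
class Monitor:
    time_window: tuple     # the interval [t, t']
    used_kinds: set        # event kinds used by the monitor's measures
    descriptor: str        # workflows descriptor given as input
    queue: list = field(default_factory=list)

def filter_event(e: Event, m: Monitor) -> bool:
    t0, t1 = m.time_window
    if (t0 <= e.time <= t1 and e.kind in m.used_kinds
            and e.instance.startswith(m.descriptor)):
        m.queue.append(e)     # enqueue(e)
        return True
    return False              # discard(e)

m = Monitor((0, 100), {"task_completed"}, "GrantPermission")
filter_event(Event(10, "task_completed", "GrantPermission.OpenFile"), m)  # True
filter_event(Event(10, "task_started", "GrantPermission.OpenFile"), m)    # False
```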
Figure 4.4: The hierarchy of monitors.
Fig. 4.4 shows the relation that exists between the monitors. At the top there is the abstract
class Monitor, which cannot be instantiated; it provides the necessary attributes and methods to
realize every type of monitor. The BaseMonitor class derives from the abstract class Monitor
and implements the basic measurements. From the class Monitor also derives the class called
WfParticipantMonitor, which implements a monitor for a workflow participant.
Starting from BaseMonitor, three other classes are defined that implement, respectively, a ProcessMonitor,
a TaskMonitor and a WorkItemMonitor.
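The hierarchy of fig. 4.4 can be sketched in a few lines; the recompute method and its return values are assumptions used only to make the classes concrete.

```python
# Hypothetical sketch of the monitor class hierarchy; method names
# are assumed.
from abc import ABC, abstractmethod

class Monitor(ABC):
    """Abstract base class: cannot be instantiated."""
    @abstractmethod
    def recompute(self, event): ...

class BaseMonitor(Monitor):
    def recompute(self, event):
        return f"basic measurement on {event}"

class WfParticipantMonitor(Monitor):
    def recompute(self, event):
        return f"participant measurement on {event}"

class ProcessMonitor(BaseMonitor): pass
class TaskMonitor(BaseMonitor): pass
class WorkItemMonitor(BaseMonitor): pass
```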
Fig. 4.5 clarifies how a monitor is internally composed. There is a queue that records all the events
that the dispatcher has sent but that have not yet been elaborated by the Monitor. Another
component, called Elaborator, gets the events from the queue, processes them and puts them in a
results queue.
Figure 4.5: The internal architecture of a Monitor.
When a Monitor has to compute derived or aggregate measurements, it decomposes the measurement
into basic measures and starts one or more subMonitors that are in charge of the sub-computations.
An object called AggregateElaborator gets the measures from the results queues of
each subMonitor, composes the result and enqueues it in the result queue of the Monitor.
An active object, the Visualizer, gets the results from the queue and displays them to the user. In
the subMonitors the Visualizer does not exist, because their results are not shown to the user but
are sent to the main Monitor.
In the next chapter, the concept of Performance Evaluation Monitor will be extended by
discussing three different environments: centralized, semi-distributed and distributed.
The system we have already discussed can be used in the centralized environment, in which the
analysis is conducted on a single computer, where the WfMS also runs.
The graphical interface of the Performance Evaluation Monitor is not shown here; it will be
resumed in subsection 5.5.3, where a prototype of a MAS system for the distributed evaluation of
workflows will also be discussed.
Chapter 5
Evaluation of federated workflows
Virtual Enterprises are comprised of several organizations connected and interacting by means of
a computer network. In this context, business organizations adopt new forms of interaction and
their processes are often supported by workflow technology. So, heterogeneous workflow management
systems have to cooperate in order to execute inter-organizational or inter-departmental
processes. For this reason, federated workflows represent a good means to communicate and
share information within a VE.
The aim of this chapter is to consider the problem of distributed monitoring of workflows defined
and enacted within a Virtual Enterprise. We will first introduce three possible models for the
continuous monitoring of workflows. Then, we will discuss in detail the distributed monitoring
problem. The underlying model is structured as a multiagent system where specialized agents
coordinate their work to obtain performance measures of processes enacted within the Virtual
Enterprise.
5.1 Virtual Enterprise and Federated Workflows
The global market raises new challenges in the way companies operate. As the market evolves,
different companies are tied together and the re-engineering of their processes becomes necessary.
Business Process Re-engineering (BPR) projects often exploit Information and Communication
Technologies as a lever to support new forms of interaction between enterprises that make the
progress of many activities possible even without the intervention of human beings.
The trend towards the change from the traditional way of managing administrative processes to
new ICT-supported process management can be observed also for many public agencies that have
planned BPR interventions to enhance the quality of service to citizens.
These new forms of interactions are well-summarized by the concept of Virtual Enterprise (VE)
[CMAGL99, CMA01] that emphasizes the role of cooperative work through computer networks.
A VE is a temporary aggregation of autonomous and possibly heterogeneous enterprises, meant
to provide the flexibility and adaptability to frequent changes that characterize the openness of
business scenarios [RDO01a]. VEs typically mean to combine the competences of several
autonomous and different enterprises in a new agile enterprise, addressing specific and original
industrial business targets. So, technologies for VE development have to satisfy strong requirements
resulting from the need to integrate and coordinate distributed activities, which should
communicate and cooperate despite their heterogeneity and the unpredictability of the environment.
As observed by [Ley94, RDO01a], workflow systems are the most important models currently used by
organizations to automate business processes while supporting their specification, execution and
monitoring. In practice, Workflow Management Systems (WfMS) use specialized software to
manage activity coordination and the communication of data. WfMS can cooperate in the
context of enterprise-wide workflows, even though the component WfMS use different workflow
specification languages, different formats, different enactment techniques and implementations. In
VEs, besides flexibility and adaptability, WfMSs also have to face the issue of distribution,
having to coordinate heterogeneous activities spread over the network.
The technology of WfMS is increasingly used in both business organizations [Vla97] and public
agencies [Koe97, Rui97]. In many organizations different WfMS have been installed, and even a
single enterprise can operate different WfMS in different departments or branches. Enterprises or
departments need to cooperate in the context of at least some business processes, and thus the var-
ious heterogeneous WfMS must be integrated into a federated inter-enterprise or enterprise-wide
WfMS.
In [OTS+99] the authors state that there are three classes of problems which impede the development
of cross-organizational WfMS: products and models differ from one business to another,
which makes it difficult for organizations to exchange information or integrate foreign workflows;
moreover, visibility problems may exist when business processes cross organizational
boundaries and different jurisdictions; finally, the redistribution of work is very difficult when
internal and external forces are concerned with the process.
A federated workflow consists of a set of individual, possibly heterogeneous yet interoperable WfMS
[GKT98]. WfMS federation might require the maintenance of autonomy among participating
WfMS. For instance, the execution of workflows can be required to remain under full control of the
component WfMS, and requestors might not have the possibility to monitor or even influence such
executions. Workflow interoperability has to enable execution of workflows in an environment of
heterogeneous WfMS in such a way that “global” workflows can trigger subworkflows anywhere in
the WfMS federation.
In many research works, multiagent systems (MAS) are used to implement WfMSs, particularly in a
distributed domain. A MAS seems to be the most promising candidate for the development of
open digital markets, business process management and virtual enterprise management [RDO01a].
In [CMA01] some characteristics that VE and MAS have in common are listed:
• A VE is composed of distributed, heterogeneous and autonomous components that can be
easily mapped into a MAS.
• Decision making involves autonomous entities that cooperate in order to reach a common
goal but are also competitors on other business goals.
• The execution and supervision of distributed business processes requires quick reactions from
enterprise members.
• A VE is a dynamic organization that might require reconfigurations, so a flexible modeling
paradigm is necessary.
• The federated MAS approach may provide a solution to handle the requirements of autonomy
and cooperative behaviour.
On the other hand, MAS lack robust development environments, security mechanisms,
standards and easy interfaces to legacy systems. In the following section, we will introduce some
concepts about agents together with some related work on WfMS and MAS.
5.1.1 Multi-Agent Systems
Even if there is no clear and unambiguous definition of agent, Wooldridge defines an agent as “a
computer system that is situated in some environment, and that is capable of autonomous action
in this environment in order to meet its design objectives” [Woo02]. The term agent is used to
denote a software system that enjoys the following properties [WJ95, Woo02]:
• Autonomy: agents are computational entities that are situated in some environment and to
some extent have control over their behaviour, so that they can act without the intervention
of humans and other systems.
• Social Ability: agents are capable of interacting with other agents via an agent communication
language in order to satisfy their design objectives.
• Pro-activeness: agents are able to exhibit goal-directed behaviour by taking the initiative,
not only acting in response to their environment.
• Reactivity: agents perceive the environment and respond in a timely fashion to changes
that occur in it. Each agent must be responsive to events that occur in its environment; it
must be able to react to new situations in time for the reaction to be of some utility.
Such agents may replace humans under certain circumstances, for example, when too many factors
influence a decision that has to be taken in split seconds but the decision itself requires a certain
degree of “intelligence”, or when the human person cannot be physically present. Agents are
intelligent in the sense that they operate flexibly and rationally in a variety of environmental
situations, given the information they have and their perceptual and effectual capabilities [LN04].
To be intelligent requires specialization; hence, an agent must have its own specialized competence
and several of them need to collaborate. As a consequence, agents are interacting. They may
be affected by other agents in pursuing their goals and executing their tasks, either indirectly by
observing one another through the environment or directly through a shared language.
Systems of interacting agents are referred to as multiagent systems (MAS). In [LN04] a MAS is
defined as “a system of autonomous, interacting, specialized agents that act flexibly depending on
the situation”. Agents in a multiagent system may have been designed and implemented by different
individuals, with different goals. Because agents are assumed to be acting autonomously, they must
be capable of dynamically coordinating their activities and cooperating with others. In distributed
problem solving, an agent is unable to accomplish its own tasks alone, or it can accomplish its
tasks better when working with others [Dur01].
A class of distributed problem solving strategies is task sharing: a problem is decomposed into
smaller sub-problems which are allocated to different agents. In cases where the agents are autonomous,
and can hence decline to carry out tasks, task allocation will involve agents reaching agreements
with others.
The main steps of task sharing are:
• Task decomposition: in this step, the set of tasks to be passed to others is generated. This
generally involves the decomposition of large tasks into subtasks that could be tackled by
different agents.
• Task allocation: subtasks are assigned to agents.
• Task accomplishment: each agent performs its subtasks; further decompositions
and sub-subtask assignments can be recursively executed.
• Result synthesis: as soon as an agent has accomplished its subtask, it passes the result to
the appropriate agent, which will perform the composition of all results into an overall solution.
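The four steps above can be sketched as a toy pipeline; the concurrency and negotiation aspects of real task sharing are omitted, and all names are assumptions made for illustration.

```python
# Toy sketch of the task-sharing steps: decomposition, allocation,
# accomplishment and result synthesis. All names are assumed.

def decompose(task):
    # Task decomposition: split a large task into subtasks.
    return [f"{task}-part{i}" for i in range(3)]

def allocate(subtasks, agents):
    # Task allocation: round-robin assignment of subtasks to agents.
    return {st: agents[i % len(agents)] for i, st in enumerate(subtasks)}

def accomplish(subtask, agent):
    # Task accomplishment: each agent performs its subtask.
    return f"{subtask} done by {agent}"

def synthesize(results):
    # Result synthesis: compose partial results into an overall solution.
    return "; ".join(results)

assignment = allocate(decompose("evaluate-workflow"), ["agent1", "agent2"])
solution = synthesize(accomplish(st, ag) for st, ag in assignment.items())
```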
An application of this strategy will be presented in section 5.3, where a model of a multiagent
system for the distributed evaluation of workflows will be introduced.
5.1.2 Related Work
There are a number of works in which a MAS is used to realize a WfMS. The ADEPT system
[JFJ+96, JFN98] is one of the first agent-based workflow management systems. It uses intelligent
agents that collaborate to manage a real business process. This system consists of multiple
software agents which negotiate concurrently with each other in order to reach agreement on how
resources are to be assigned to support a business process.
It was decided that the most natural way to view a business process is as a collection of autonomous,
problem-solving agents which interact when they have interdependencies. The choice of agents was
motivated by a number of observations: the domain involved a distribution of data, problem solving
and responsibilities; the integrity of the organization and its sub-parts needed to be maintained;
and there were sophisticated interactions, such as negotiation, coordination and information sharing.
Agent Enhanced Workflow (AEW) [JOSC98] investigated the integration of agent-based process
management with existing commercial workflow management systems. AEW, in contrast to agent-based
workflow, combines a layer of software agents with a commercial workflow system. The agent
layer is given responsibility for the provisioning phase of business process management, whilst the
workflow system handles process enactment. In the case of a failure, the agents renegotiate the
flow of work and redistribute the work accordingly.
The monitoring module is represented by an agency containing agents that resolve, visualize and
verify process models drawn from the process management agents. In [ASH+00] the design of a
mobile-agent-based infrastructure for monitoring and controlling activities in a Virtual Enterprise
(VE) is described. Mobile agents can migrate from one computer to another and execute their
code at different locations. The agents provide support for the dispatching, monitoring, tracking
and negotiation processes. TuCSoN is both a model and an infrastructure for the coordination of
Internet agents that has been fruitfully exploited to support the design and development of VE
WfMS [RDO01a, RDO01b].
A suitable general-purpose coordination infrastructure may well fit the needs of VE management in
a highly dynamic and unpredictable environment like the Internet, by providing engineers with the
abstractions and run-time support to address heterogeneity of different sorts, and to represent WF
rules as coordination laws. VE management and WFMS may be seen as coordination problems.
An infrastructure for WFMS should provide support for (i) communication among participants; (ii)
flexible and scalable workflow participation, allowing participants to adopt both traditional devices
(such as desktop PC) and non-traditional devices (such as thin clients, or PDA). Moreover, such an
infrastructure should support (iii) disconnected operation, allowing WF participants to be mobile,
location-independent, and non-frequently connected to the network; finally, it should enable (iv)
traceability of the business process state. In the case of inter-organisational workflow systems
for use in the VE context the infrastructure should be able to cope both with the openness and
distribution of the Web environment, and with heterogeneity at several levels.
In the TuCSoN model, multiple tuple centres are defined that abstract the role of the environment.
From the point of view of the technology, TuCSoN is implemented in Java to guarantee
portability among platforms. In this section some works relating MAS and WfMS have been
presented. Even if some of these systems contain a monitoring module, they are interested in
monitoring the enactment of workflows and do not address the full evaluation of business
processes. For this reason, our attention is devoted to the discussion of three possible models of
software architectures for the integration of the performance evaluation monitor subsystem into a
WfMS. In section 5.3 a multiagent-based model for the evaluation of workflows
in a distributed network is discussed in detail. Finally, the prototype of a multiagent system for
the evaluation of workflows closes the chapter.
5.2 Software Architectures for the Evaluation of Workflows
The aim of this section is to discuss some possible models of distributed workflow measurement
based on the idea of a monitor. The idea is implemented through a monitoring subsystem divided
into two main parts: the “Monitor module” and the “Monitor Manager module”.
During its life cycle, a monitor instance performs essentially three fundamental tasks: filtering,
measure evaluation and measure visualization. These tasks are implemented in the Monitor
module. In the filtering phase, the events related to the specified workflow entities are selected.
The computation phase receives the filtered events necessary for the measure evaluation and, finally,
the visualization phase concerns the set of techniques used to show the results to the users.
The clean distinction of these tasks brings several benefits. First of all, it becomes possible to
distribute the corresponding information processing over several computers. Second, the inde-
pendence of the measure evaluation task from the data visualization task allows the design of
presentation software in which several graphical perspectives or even new presentation patterns
can be proposed without changing the measure evaluation module. Finally, the Monitor module
is clearly divided into submodules, making the implementation easier.
The Monitor Manager plays the role of coordinator. It accepts service requests, in the form of
measurement queries, from the workflow participants and creates new monitor instances to provide
the requested service.
The software that supports the monitor model is just a subsystem that interacts with the WfMS
in order to get selected data necessary to compute measures.
To ensure the real-time behaviour of monitors, the monitoring subsystem must be designed to work
in tight connection with the workflow engine; therefore, its architecture is strongly affected by the
WfMS architecture.
Table 5.1 shows three different ways to integrate a WfMS and a monitoring subsystem; they depend
on the WfMS architecture and on the centralized/distributed usage of monitor instances.

WfMS architecture   Monitoring subsystem usage
Centralized         Centralized
Centralized         Distributed
Distributed         Distributed

Table 5.1: Combining the monitoring subsystem with the WfMS architecture

In the following paragraphs we will discuss the corresponding architectural models.
5.2.1 Centralized Model
The monitor model discussed in chapter 4 is centralized in the sense that both the workflow engine
and the whole monitoring subsystem reside on a single computer. Fig. 5.1 illustrates this model;
the rounded rectangles represent the modules of the monitoring subsystem.
During its life cycle a monitor instance can look at the past as well as at the future. It can retrieve
historical data from the workflow log and can ask the workflow engine to send the events necessary
for future evaluation of measures as they change during the flow of time.
Figure 5.1: The centralized monitors model.
Since, to (re)compute a measure, a monitor instance needs only a few events among those maintained
in the log files or managed by the workflow engine, the filtering module is a fundamental
component; the filtering algorithm can be found in [AEN02]. The three monitor modules are
implemented as communicating threads according to a pipeline scheme in a shared memory envi-
ronment: the filtered events are redirected towards the computation thread that (re)evaluates the
measure passing the result to the visualization thread. Each monitor runs independently from the
other monitors and can be specialized to observe a given phenomenon. The Monitor Manager has
two main functionalities:
1. it exposes a user interface that enables the creation of new instances of monitors;
2. it supervises the life cycle of monitor instances, coordinating their executions.
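The pipeline scheme of communicating threads described above can be sketched as follows. This is an illustrative sketch only: the class name, the string representation of events, and the counting "measure" are hypothetical simplifications, not taken from the actual implementation.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MonitorPipeline {
    static final String END = "<end>";

    // Runs the filtering and computation threads in a pipeline over a shared
    // queue; the "measure" here is simply the count of events matching the
    // observed entity (a placeholder for a real measure evaluation).
    public static double run(List<String> events, String entity) {
        BlockingQueue<String> filtered = new LinkedBlockingQueue<>();
        double[] result = new double[1];

        // Filtering thread: selects only the events related to the entity.
        Thread filter = new Thread(() -> {
            for (String e : events)
                if (e.startsWith(entity)) filtered.add(e);
            filtered.add(END); // signal end of the event stream
        });

        // Computation thread: (re)evaluates the measure on filtered events.
        Thread compute = new Thread(() -> {
            try {
                String e;
                while (!(e = filtered.take()).equals(END)) result[0]++;
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        });

        filter.start();
        compute.start();
        try {
            filter.join();
            compute.join();
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        // A visualization thread would render result[0]; omitted here.
        return result[0];
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("T1:start", "T2:start", "T1:end"), "T1"));
    }
}
```

The shared queue stands in for the shared-memory redirection of filtered events towards the computation thread.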
The centralized model has been implemented in Java. The application domain of the software tool
that implements the model is mainly that of small and medium enterprises managing perhaps a
dozen automated processes whose analysis can be done in a centralized environment. Another
possible application domain is that of a dominant enterprise that imposes the use of software for
the management of a supply chain on its suppliers.
5.2.2 Semi-distributed Model
A natural extension of the centralized model offers users the possibility to create monitor
instances from any computer over the network. Fig. 5.2 shows the underlying client-server model
in which the WfMS is still centralized but any user can invoke the execution of a new monitor
instance from his client application.
Note that the computation and presentation parts of a monitor instance now run on the client
computers. On the other hand, the filtering module runs on the same computer where the WfMS
resides. This is a reasonable choice because only a few events among those recorded in the log files
or managed by the WfMS will be redirected over the network towards the computation module for
the measure evaluation.
When a user invokes the execution of a new monitor instance, the request is intercepted by the
Monitor Manager (client-side), which updates its table to record the new entry. A service request
is then sent to the Monitor Manager (server-side), which listens for new requests incoming from
the network and creates a new filtering thread, assigning it the task of collecting past and future
events according to the observational time window. The thread also forwards the selected events
to the remote computation module using a message-passing mechanism.
This model has been implemented using Java Remote Method Invocation (RMI). In particular, we
have exploited the code mobility feature of RMI, implementing a monitor class that clients can
obtain from the server. Since the measurement framework we have taken as reference is extensible,
a new version of the measurement framework can be promptly put into operation simply by changing
the monitor class on the server. Another advantage deriving from RMI, and from the clear separation
of the presentation module from the others, is the possibility to adopt a new presentation modality
without changing other components. This model could be used in medium or large companies that
use a client-server architecture to model, execute and analyze their business processes.
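The client-server split described above can be outlined with a remote-interface sketch. The interface and class names are illustrative, not those of the actual prototype, and the RMI registry setup (export and lookup) is omitted: the server-side filtering is called here through a local reference to keep the sketch self-contained.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

public class SemiDistributedSketch {

    // Remote service exposed on the computer where the WfMS resides:
    // only the few relevant events cross the network.
    interface FilterService extends Remote {
        String[] filter(String entity) throws RemoteException;
    }

    // Server-side implementation; in a real deployment it would be exported
    // with UnicastRemoteObject and found through an RMI registry.
    static class FilterServiceImpl implements FilterService {
        private final String[] log;
        FilterServiceImpl(String[] log) { this.log = log; }
        public String[] filter(String entity) {
            return java.util.Arrays.stream(log)
                    .filter(e -> e.startsWith(entity))
                    .toArray(String[]::new);
        }
    }

    // Client-side computation module: evaluates the measure on the filtered
    // events received from the server (here, simply their count).
    static double computeMeasure(FilterService fs, String entity)
            throws RemoteException {
        return fs.filter(entity).length;
    }

    public static void main(String[] args) throws RemoteException {
        FilterService fs =
            new FilterServiceImpl(new String[]{"T1:start", "T2:start", "T1:end"});
        System.out.println("measure = " + computeMeasure(fs, "T1"));
    }
}
```

The design choice matches the text: filtering stays on the WfMS host, so only the (small) filtered result travels over the network to the client-side computation and presentation parts.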
Figure 5.2: The semi-distributed monitor model.
5.2.3 Distributed Model
For the purposes of this work, a VE comprises many organizations connected and interacting
through a computer network, where each organization manages its processes by means of a WfMS.
A global process involves actors and resources belonging to at least two organizations. Usually, a
subset of the processes managed within the VE are global processes (e.g., a B2B transaction in a
supply chain).
In this general setting neither the centralized model nor the semi-distributed model is suitable for
the performance evaluation of workflows where several workflow engines possibly interact with each
other to reach a common goal; therefore, the semi-distributed model must be further generalized to take
into account this wider scenario. Fig. 5.3 illustrates a possible architecture for distributed process
monitoring. Here, n enterprises are connected together to form a VE. Each enterprise manages its
processes with a WfMS and cooperates with other enterprises to enact global processes.
The proposed architecture extends the previous models [ANFD04]. In fact, the monitor instance
M3=(f3, c3, v3) active in O2 and M1=(f1, c1, v1) active in O1 are managed according to what we
have discussed for the centralized model and the semi-distributed model respectively. However,
when an evaluation query that involves at least two organizations is submitted, the scenario becomes
more complex. For example, a user in O2 could ask for the evaluation of a measure that requires the
collection of data from tasks T1 and T2 of a process enacted in O2, and from tasks T3 and
T4, which belong to external processes enacted in O3 and O1 respectively. We will formalize the
specification of a MAS capable of modeling the distributed evaluation of workflows in the following
section. Possible application domains of this model could be a network of public agencies (e.g. a
virtual municipality organization) that manage some of their processes in common or an alliance
of enterprises that come together to better face the challenges of the global market.
Figure 5.3: The distributed monitor model.
5.3 A multi agent-based architecture for the monitoring of
workflows
The distributed monitoring of workflows (DMW) can be modeled in the context of a multiagent
system (MAS) that performs local and global measurements by means of active monitors within the
VE. DMW is a particular case of Distributed Problem Solving [Dur01], in which many sources of
information exist within a VE and none of the WfMSs working in isolation has enough information
to compute global measures.
The workflow monitoring system in a distributed environment is still conceptually divided into a
Monitor Manager part and a Monitor part. However, several instances of the Monitor Manager
(named MMi) now exist, one for each monitoring tool installed in the VE. Both types of modules
will be regarded as agents, further divided into specialized agents, in order to distribute the tasks
to the appropriate locations over the network.
Before defining each type of agent in our model of distributed monitoring of workflows, it is useful
to introduce some preliminary definitions.
Let M = {m1, m2, ..., mk} be the set of measures that the monitoring subsystem can evaluate within
a VE.
A monitoring session is a 5-tuple: (id, user, c, mi, (fa, ca, va)) with the intended meaning that
user starts a monitoring session with unique identifier id on the computer c requiring a measure
mi; user creates a monitor instance (fa, ca, va) in terms of: fa (filtering agent), ca (computation
agent) and va (visualization agent). The user specifies the “what”, “when” and “how” parts
of a monitor using graphical interfaces like those shown in fig. 3.2, and the deployment of fa, ca
and va happens behind the scenes. We say that a monitoring session relies on a:
• local interaction when the three agents fa, ca and va run on the same computer c where the
WfMS runs;
• organization-wide interaction when ca and va run on c while fa runs on c′, where the WfMS
runs, and c ≠ c′;
• VE-wide interaction when m = m1 • m2 • ... • mt is obtained by the composition of t
measures, where at least two of them must be evaluated starting from data maintained by
different WfMSs.
The four phases that a measurement session goes through in order to carry out a VE-wide interaction
are shown below. Local and organization-wide interactions are simpler.
1. Task decomposition
a. the user creates a monitor instance to evaluate a measure m;
b. m is identified as local, semi-distributed or distributed measure;
c. decomposition of m in m1, m2, ..., mt.
2. Task distribution
a. distribution of tasks by starting other monitoring sessions to compute the local measurements
m1, m2, ..., mt.
3. Task accomplishment
a. filtering of events in the involved WfMS;
b. computations of m1, m2, ..., mt.
4. Results synthesis
a. collection of m1, m2, ..., mt;
b. computation of m;
c. visualization of m;
d. closure of the monitoring session.
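The four phases above can be sketched as follows. This is an illustrative sketch under simplifying assumptions: the class and method names are hypothetical, sub-measure "accomplishment" is stubbed out, and the composition operator • is modelled as a plain sum.

```java
import java.util.List;
import java.util.stream.Collectors;

public class VEWideSession {

    // Phase 1: task decomposition of a compound measure m = m1 \u2022 m2 \u2022 ... \u2022 mt.
    static List<String> decompose(String compound) {
        return List.of(compound.split("\u2022"));
    }

    // Phases 2-3: each sub-measure would be shipped to the WfMS that owns the
    // data and computed there; this stub just returns the name length.
    static double accomplish(String subMeasure) {
        return subMeasure.trim().length();
    }

    // Phase 4: results synthesis (here, composition is a simple sum).
    static double synthesize(List<Double> partials) {
        return partials.stream().mapToDouble(Double::doubleValue).sum();
    }

    public static double evaluate(String compound) {
        List<Double> partials = decompose(compound).stream()
                .map(VEWideSession::accomplish)
                .collect(Collectors.toList());
        return synthesize(partials);
    }

    public static void main(String[] args) {
        System.out.println(evaluate("m1\u2022m2\u2022m3"));
    }
}
```

In the real model, the closure of the monitoring session would follow the visualization of m; both are omitted from this sketch.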
Following the usual steps to design a MAS (Wooldridge 2002), we will define the agents and their
environment, percepts, states and behaviours. We will also describe how agents interact during a
monitoring session.
• The agents
Given an integer k less than or equal to the number of computers in the network, the agents
in our MAS can be represented with the following sets:
MMA = {MM1, MM2, ..., MMk} is the set of Monitor Manager agents.
The agent MMi performs three operations:
1. session management of the monitoring session, with classification of m into one of the
three categories, possibly performing the operations related to the task decomposition
phase;
2. cooperation with other MMj related to the task distribution phase;
3. coordination of tasks during the accomplishment and result synthesis phases.
MA = {M1, M2, ..., Mk} is the set of Monitor agents, where each Mi = (fai, cai, vai)
performs the task accomplishment and result synthesis phases. It can be decomposed into the
following sets:
a. FA = {fa1, fa2, ..., fak} is the set of agents performing filtering on WfMS data;
b. CA = {ca1, ca2, ..., cak} is the set of agents that compute measures;
c. VA = {va1, va2, ..., vak} is the set of agents that visualize results to the users.
• The environment
The environment where DMW acts is represented by the set
V E = {U, M, Mes, O, WfMS, C}
– U is the set of users inside V E,
– M is the set of measures that our Monitor Subsystem is able to perform inside V E,
– Mes is the set of messages that agents exchange inside V E,
– O = {O1, O2, ..., On} is the set of n organizations inside the Virtual Enterprise,
– WfMS = {WfMS1, WfMS2, ..., WfMSn} is the set of WfMSs supporting the organizations
inside V E,
– C is the set of computers c inside V E.
• Perceptions
In our DMW, the percepts are simply messages addressed to agents. For example, an MMi accepts
messages requesting monitoring sessions from both users and other Monitor Managers. Other
examples of percepts are:
• the t results that the computational agents receive for the evaluation of a compound measure;
• the request of collaboration issued by an MMi to other Monitor Managers during the task
allocation phase;
• the agreements sent to an MMi by other Monitor Managers.
During every exchange of information in the MAS environment, one agent changes the environment
by sending one or more messages, and one or more agents perceive these environment variations.
The dynamics of such variations will be discussed in the next section.
5.4 The behaviour of a DMW
Looking at fig. 5.3 we can observe that all types of agents live on every computer c of the network,
but each one performs only some of the available actions, depending on where it is located
(whether it resides on a client or on a server, and in which organization) and on the type of
interactions it has with other agents during the monitoring session.
For our purposes, this information represents the state of an agent. In any case, an agent is able
to obtain any other information it needs through direct percepts and chains of actions and/or
deductions (e.g. an agent deduces further information about the environment, such as the global
process, organizations and WfMSs involved, from the instance of the measurement session).
The location of an agent can be defined as:
location(agent) = (cl, Oj) if an agent lives in a client cl of Oj ,
location(agent) = (s, Oj) if an agent lives in a server s of Oj .
For example, if the agent is MMi and it lives in a server s of Oj then location(MMi) = (s, Oj).
We can see that the previously defined types of interaction complete the information about
locations. In fact, they indicate where the agents on the other side of a communication are located
during a session, enabling particular steps such as task distribution and the management of the
data flow between several monitor instances.
Table 5.2 summarizes the dependence of the agents' behaviour on location and interaction type.
In fig. 5.4 the high-level sequence diagram of the main agents of the DMW is shown using the UML
notation. The control of task decomposition and task distribution is delegated to MMi, which
exploits the contract-net negotiation protocol [Smi80] to announce the availability of computational
tasks to other Monitor Managers over the network. The flow of operations on data is delegated to
Mi, which can be seen as a data communication and elaboration specialist. This functional
separation of agents allows several levels of domain aggregation. In other words, the VE can
change dynamically, increasing the number of WfMSs in the network. The only new requirement
for the DMW will be the definition of new global measures in order to customize or extend the
measurement framework.

Interaction         MMi on (s, Oj)       MMi on (c, Oj)       Mi on (s, Oj)   Mi on (c, Oj)
local               Session Management                        Filtering
                    Cooperation                               Computation
                    Coordination                              Visualization
organization-wide   Coordination         Session Management   Filtering       Computation
                                         Cooperation                          Visualization
VE-wide             Session Management   Session Management   Filtering       Computation
                    Cooperation          Coordination         Computation     Visualization
                    Coordination

Table 5.2: The dependence of the agents' behaviour on location and interaction type.
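The contract-net negotiation [Smi80] used by MMi to allocate computational tasks can be illustrated with a minimal sketch: a manager announces a task, contractors bid or decline, and the task is awarded to the best bidder. The bid criterion (an estimated cost) and all names are hypothetical, not taken from the prototype.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class ContractNet {

    // A bid names the bidding contractor and its estimated cost for the task.
    record Bid(String contractor, double cost) {}

    interface Contractor {
        // A contractor may decline (empty result) or bid a cost for the task.
        Optional<Bid> bid(String task);
    }

    // The manager announces the task to all contractors and awards it to the
    // lowest-cost bidder, if any contractor bids at all.
    static Optional<Bid> award(String task, List<Contractor> contractors) {
        return contractors.stream()
                .map(c -> c.bid(task))
                .flatMap(Optional::stream)
                .min(Comparator.comparingDouble(Bid::cost));
    }

    public static void main(String[] args) {
        List<Contractor> monitorManagers = List.of(
                t -> Optional.of(new Bid("MM1", 5.0)),
                t -> Optional.empty(),                 // MM2 declines
                t -> Optional.of(new Bid("MM3", 2.0)));
        System.out.println(award("compute m2", monitorManagers));
    }
}
```

In the DMW, the awarded contractor would then start a local monitoring session for the assigned sub-measure (phase 2, task distribution).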
5.5 The architecture of the distributed monitoring system
In the following, a prototype of this monitoring system will be presented. For the implementation
of the MAS, the Grasshopper agent platform has been used; therefore, subsection 5.5.1 reports
some basic concepts of this platform.
5.5.1 The Grasshopper Agent Platform
Grasshopper is an agent development platform that enables the development and deployment of
distributed, agent-based applications written in the Java programming language [IKV01a,
IKV01c, IKV01b]. In more detail, Grasshopper allows one to:
• create autonomous agents that are able to migrate;
• transparently locate agents and send messages to them;
• manage and control agents during their execution by means of sophisticated tools.
Moreover, it provides a rich set of example agents that can be used as a starting point for the
exploitation of agent technology.
Figure 5.4: The interaction between MMAs
In Grasshopper two kinds of agents are available: mobile agents and stationary agents. A
mobile agent is able to change its location during its execution. It may start its execution at
location A, migrate to location B, and continue its execution at location B exactly at the point at
which it has been interrupted before the migration. In contrast to mobile agents, stationary agents
do not have the ability to migrate actively between different network locations. Instead, they are
associated with one specific location.
An agent can assume three different states: active, if the agent is performing its task; suspended,
when its execution is temporarily interrupted; and, finally, flushed, when the agent is no longer
active but can be activated again.
The structure of the Grasshopper Distributed Agent Environment (DAE) is composed of regions,
places, agencies and different types of agents as summarized in fig. 5.5. An agency is the actual
runtime environment for mobile and stationary agents. At least one agency must run on each host
that shall be able to support the execution of agents. A Grasshopper agency consists of two parts:
the core agency, which provides the minimum functionalities to support the execution of agents,
such as communication and management, and one or more places. A place represents a grouping
of functionalities that can extend the core functions.
Agencies, as well as their places, can be associated with a specific region; a registry records each
agent currently hosted by an agency associated with the region.
Figure 5.5: The Grasshopper Distributed Agent Environment.
If an agent moves to another location, the corresponding registry information is automatically
updated. A region may comprise all the agencies belonging to a specific company or organisation.
While agencies and their places are associated with a single region for their entire lifetime, mobile
agents are able to move between the agencies of different regions.
Grasshopper supports the following communication modes:
• Synchronous communication - when a client invokes a method on a server, the server executes
the called method and returns the result to the client which then continues its work. This
style is called synchronous because the client is blocked until the result of the method is sent
back.
• Asynchronous communication - the client does not have to wait for the server executing the
method; instead the client continues performing its own task. It can periodically ask the
server whether the method execution has been finished, wait for the result whenever it is
required, or subscribe to be notified when the result is available.
• Dynamic communication - The client is able to construct a message at runtime by specifying
the signature of the server method that shall be invoked. Dynamic messaging can be used
both synchronously and asynchronously.
• Multicast communication - Multicast communication enables clients to use parallelism when
interacting with server objects. By using multicast communication, a client is able to invoke
the same method on several servers in parallel.
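The asynchronous style described above can be illustrated in plain Java; the actual Grasshopper communication API is not reproduced here, and all names in the sketch are illustrative.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncCallSketch {

    // Stands in for a (possibly remote) server method that takes a while.
    static double slowMeasure(String query) {
        return query.length(); // placeholder computation
    }

    public static void main(String[] args) {
        // Asynchronous invocation: the client fires the call and is not blocked.
        CompletableFuture<Double> pending =
                CompletableFuture.supplyAsync(() -> slowMeasure("duration(T1)"));

        // ... the client continues performing its own task here ...

        // The result is awaited only when it is actually required.
        double result = pending.join();
        System.out.println("result = " + result);
    }
}
```

The future also supports the notification variant of the asynchronous mode (via a completion callback) instead of waiting or polling.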
Grasshopper agents can invoke methods of other agents in a location transparent way. Therefore,
agents need not know the actual location of their communication peer.
By means of its communication service, Grasshopper achieves an integration of both agent
migration combined with local interactions and remote interactions across the network. When using the
communication service, clients do not have direct references to the corresponding servers. Instead,
an intermediate entity is introduced, named proxy object or simply proxy (fig. 5.6). In order to
establish a communication connection with a server, a client creates a proxy that corresponds to
the desired server. This proxy in turn establishes the connection to the actual server.
Figure 5.6: Communication via Proxies
After this short presentation of the selected agent platform, the architecture of the designed
prototype will be discussed.
5.5.2 The architecture of the distributed system
A prototype of the distributed measurement system has been developed, exploiting the features of
the Grasshopper platform. For simplicity, in the prototype implementation we have assumed that
a single WfMS exists in each organization; however, as discussed for the distributed model, this
assumption is not a limitation of the model.
In the prototype, each organization is represented as a Grasshopper region, while each computer
where the monitoring tool is installed represents an agency. The monitoring tool is also installed
on the server where the WfMS runs, as shown in fig. 5.7. A monitoring tool interacts with a user
through a graphical interface, by which a measurement request can be sent; moreover, communication
with the Grasshopper platform makes it possible to manage the creation of the agents and their
displacement.
As described in the model presented in section 5.3, a monitoring tool consists of a Monitor
Manager and a set of Monitor agents. A graphical representation of the structure of the Monitor
Manager is shown in fig. 5.8. A Manager communicates with the user by means of a user interface.
Figure 5.7: A high level overview of the distributed evaluation system.
The Measure Analyzer performs two operations. The first concerns the identification of the
requested measurement type, which can be local, organization-wide or VE-wide. For a VE-wide
measurement, the sub-measures are defined and, through the Communication module, a
computation request is sent to the Monitor Managers in the other organizations. For measurements
that must be computed locally or that are organization-wide, a second operation is executed. The
Analyzer checks whether the measure is simple or aggregate, using a method similar to that of the
Performance Evaluation Monitor (section 4.2). If it is simple, only one Monitor is activated;
otherwise, it is decomposed into simple measures, each calculated by a dedicated Monitor.
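The classification step performed by the Measure Analyzer might be sketched as follows. The host/organization encoding (an `"Org:host"` string) and all names are hypothetical, introduced only to make the three measurement types concrete.

```java
import java.util.List;

public class MeasureAnalyzer {

    enum Kind { LOCAL, ORGANIZATION_WIDE, VE_WIDE }

    // requestHost: computer where the user asked for the measure;
    // wfmsHosts: computers of the WfMSs holding the data needed.
    // A measure is VE-wide if any data source belongs to another
    // organization, local if everything is on the requesting host,
    // organization-wide otherwise.
    static Kind classify(String requestHost, List<String> wfmsHosts) {
        boolean crossOrg = wfmsHosts.stream()
                .anyMatch(h -> !org(h).equals(org(requestHost)));
        if (crossOrg) return Kind.VE_WIDE;
        boolean sameHost = wfmsHosts.stream().allMatch(requestHost::equals);
        return sameHost ? Kind.LOCAL : Kind.ORGANIZATION_WIDE;
    }

    // Organization owning a host: here, the prefix before ':'.
    static String org(String host) { return host.split(":")[0]; }

    public static void main(String[] args) {
        System.out.println(classify("O1:server", List.of("O1:server")));
        System.out.println(classify("O1:client", List.of("O1:server")));
        System.out.println(classify("O1:client", List.of("O2:server")));
    }
}
```

The second operation (simple vs. aggregate decomposition) would then run on the LOCAL and ORGANIZATION_WIDE outcomes, as described in the text.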
On the basis of the decomposition made by the Analyzer, the Coordinator module interacts with
the Handler for the creation of agents; moreover, it must collect the sub-results in order to
accomplish the measurement. The Monitor Handler manages a table, called the Monitor Table,
where it records the references to the active monitors, while the references to the other Monitor
Managers are stored in the MonitorManager Table. It is important to note that a Monitor Manager
located on a client maintains only the reference to the Manager on the server, while a Manager
located on the server stores in this table both the references to every Manager on the clients in its
organization and those to the server Managers located in the other organizations composing the VE.
Three different scenarios of measurement computation are described here; to simplify the
presentation we have assumed that the measure to be computed is simple.
Figure 5.8: The architecture of the Monitor Manager.

A local measurement is computed when the request for a measure arrives on the server where the
WfMS runs (fig. 5.9). In this case, the Monitor Manager receives the request, analyzes it and
creates the Monitor Agent (1). In turn, the Monitor Agent creates the three subagents (2): the
Filter, Computation and Visualization Agents. By invoking the Proxy objects, the Monitor decides
whether the Filter and Computation agents must be moved (3)(4), while the Visualization Agent
is invoked asynchronously (5). Since the measure is local, no displacement of agents takes place,
so the Filter agent starts to select the events relevant for the computation. In the meantime, the
visualization agent dynamically invokes the computation agent (6) in order to obtain the measure
result. The Computation Agent polls the Filter Agent to get the data useful for the computation
(7)(8). Finally, the result is sent to the Visualization Agent, which presents it to the user (9)-(12).
The second possibility is that the measurement is organization-wide (fig. 5.10). Unlike the previous
example, the measure is requested on a client; it is therefore necessary to move the Filter agent
and, possibly, also the Computation agent to the server, in order to reduce the traffic on the
network: only the result goes back to the monitor, not all the events necessary for obtaining it.
In fig. 5.11, the widest scenario is presented. The request arrives from a client in an organization;
the measure m is decomposed into three measures, m1, m2 and m3, which must be computed in
three different organizations belonging to the VE. The computation of m1 is assigned to the Monitor
that receives the request for m; this operation thus becomes an organization-wide measurement. To
compute the other sub-measures, the Monitor Manager on the client sends a message to the one on
the server, which has to communicate the request to the other organizations. When all the results
are ready, they are sent back to the first Monitor, which collects them and proceeds with their
integration to obtain the result of m.
Figure 5.9: The execution of a local measurement.
Figure 5.10: The execution of an organizational-wide measurement.
5.5.3 The prototype
For the validation of the tool we have used a business process that handles special permissions
granted by the Municipality of Lecce. As the tool that manages the performance evaluation monitors
requires a strict interaction with the workflow engine, we decided to reproduce the case study in
the laboratory using a research WfMS developed at the University of Salerno.
Figure 5.11: The execution of a VE-wide measurement.
The interface designed for the Performance Evaluation Monitor has been reused in this new MAS.
Fig. 5.12 illustrates the specification phase of a monitor, which we have divided into three parts
to reflect the concepts of what, when and how.

Figure 5.12: The monitor interface.

The monitor regards all the work items sent to the actor De Rosa, as stated by the second column,
which describes the workflow descriptor according to the notation reported in section 4.1.1. Its
type is “work item” and the specified workflow descriptor regards all the work items created in
the process PassoCarrabile.
The button “Select Measures” allows the selection of a subset of the admissible measures for the
chosen monitor type, as shown in fig. 5.13. The example shows the assignment of two measures to
the new monitor, i.e., duration and current queue.
After the specification of the time interval taken as reference for the monitoring activity, we can
qualify the feedback capability of a monitor. A monitor feedback can be seen as a pair
(trigger, action), where the trigger is a logic predicate that must be specified in order to start the
corresponding action. The action is usually a notification to the monitor owner, sent when the
triggering condition,
evaluated starting from the current values of the considered measures, becomes true.

Figure 5.13: The selection of the measurements.

As soon as the monitor characteristics have been defined, the proper agents are created. In figure
5.14 the Grasshopper interface is shown, where the agents have just been created. The agents
reside on the computer named “Admin”. We have simulated an organization-wide measurement, in
which the WfMS is active on another computer called “Proliant800”. In the next figure, two agents
have been moved to this machine. When the computation of the measurement has been completed,
the visualization agent makes a call to the interface, which presents a graphical view of the
results, as can be seen in figure 5.16.
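The (trigger, action) feedback pair described above can be sketched as follows; the class and method names are illustrative, and the notification is reduced to a plain string.

```java
import java.util.function.DoublePredicate;

public class MonitorFeedback {

    final DoublePredicate trigger; // logic predicate over the measure value
    final String action;           // notification sent to the monitor owner

    MonitorFeedback(DoublePredicate trigger, String action) {
        this.trigger = trigger;
        this.action = action;
    }

    // Called each time the monitored measure is (re)evaluated; returns the
    // notification when the triggering condition becomes true, null otherwise.
    String onMeasure(double value) {
        return trigger.test(value) ? action : null;
    }

    public static void main(String[] args) {
        MonitorFeedback feedback =
            new MonitorFeedback(v -> v > 10, "current queue too long");
        System.out.println(feedback.onMeasure(5));  // trigger not satisfied
        System.out.println(feedback.onMeasure(12)); // notification fired
    }
}
```

In the prototype, the trigger would be evaluated against the current values of the measures selected for the monitor (e.g. duration or current queue).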
5.6 Conclusion
Several works on workflow systems and agents have been presented in the literature [JFN98,
JOSC98, RDO01a]. Most of these platforms provide useful instruments for monitoring workflow
executions. Our approach, instead, is oriented towards process performance measurement.
To approach this problem, the chapter discusses three models based on the idea of the performance
evaluation monitor. Each model is suitable for application domains with given characteristics and
allows us to point out a scenario for the usage of the corresponding software tools.

Figure 5.14: The creation of agents.

Figure 5.15: The filtering and computation agents moved to the client.

The distributed model proposes a viable architecture for the performance evaluation of workflows
defined and enacted within a VE. The continuous evaluation of workflows in a VE can be
characterized as a
distributed problem solving where specialized agents cooperate to compute workflow quantities
specified by means of monitors. The distributed monitoring system is based on a MAS architec-
ture because none of the WfMS working in isolation has enough information to compute global
measures. We believe that the proposed models could be adapted to other problems, such as
project management and risk management, with little effort. A prototype has been developed
102 Chapter 5. Evaluation of federated workflows
Figure 5.16: The interface of presentation of the results.
but there are some aspects to be considered in the future, concerning for example data security
and system performance: if many aggregate measurements are
requested, an uncontrolled number of agents could be created.
Chapter 6
Case studies
In this chapter, the validation of our measurement framework is presented. The main aspects
addressed during the validation phases are:
1) testing the soundness of our framework;
2) assessing the importance of a measurement framework both within a private organization
and within a Public Agency;
3) showing how workflow automation, together with the systematic use of a measurement framework,
provides information about possible process improvements;
4) evaluating the impact on the organization deriving from the introduction of a metrics set.
Two case studies are reported here. Section 6.1 describes an application of the framework at
Intecs S.p.A., while a case study based on an administrative process is discussed in section 6.2.
6.1 Intecs S.p.A.
Intecs S.p.A. is a software house providing leading-edge technological support to major European
organisations in the design and implementation of complex electronic systems. It operates at
the forefront of the software market, where innovation, complexity and quality are essential
in determining the company’s success. The company has branches located in Roma, Pisa, Piombino,
Bacoli (Naples) and Toulouse (France). Last year Intecs S.p.A. reached the third level of the
CMM quality standard, since its software process is well defined and documented as required
by the standard.
The collaboration between the University of Salerno and Intecs S.p.A. arose from a joint
research project called “Metrics for Quantitative Management” under the grant “Misura 3.17 POR
Regione Campania”. The primary goal of the project was the definition of metrics for the evalu-
ation of the software development process, in order to reach the fourth level of the CMM as soon as possible.
104 Chapter 6. Case studies
The effort in this project has been to adapt the measurement framework presented in chapter 3,
originally created for business process evaluation, to software process evaluation. Before
discussing the approach used in the project, it is useful to describe the main features of the
software process model (OSSP) used at Intecs S.p.A.
6.1.1 Organizational Standard Software Process (OSSP)
The ASSO organization is a single structure interested in the software process and its improvement,
composed of AMS-ReT&O, Intecs and TRS Neapolis. The software development process of the
ASSO organization is based on the Organisational Standard Software Process (OSSP).
Fig. 6.1 shows the high level of the OSSP1. The model is divided into three parts: the beginning, the
Figure 6.1: The high level of OSSP.
development and the conclusion of the project. Each part contains some phases. The beginning
part is divided into two phases: the BID and the Start of the project. The conclusion part also
contains two phases: the completion of the project and the maintenance. The development part
contains three phases, each of which represents a particular lifecycle of the process: object-oriented,
incremental and evolutionary.
Each ASSO project must select one of the three lifecycles included in the OSSP and, if necessary,
tailor the described process. This ad-hoc process is completely
described in the Software Development Plan (SDP). The SDP must first be defined during the “BID”
phase of the project, while some changes may be made in the “Start” phase. For each change
produced, the SDP must be updated and submitted for verification.
Each phase is divided into subphases (fig. 6.2) that, in turn, are decomposed into activities (fig. 6.3).
An activity may be simple or a support activity; in the latter case, it plays the role of a subprocess.
For each activity, a number of attributes are defined: a description, the documents to be produced,
the resources involved in the activity and the roles they play in it.
6.1.2 Case Description
As mentioned in the previous section, the project carried out with Intecs S.p.A. aimed
to provide the instruments of Quality Management needed to reach the “Managed” level of the CMM
1OSSP is reported with the authorization of Intecs S.p.A.
Figure 6.2: The subphases of the BID phase.
Figure 6.3: Some activities of a subphase.
[PCCW04].
The project focused on a particular Key Process Area of this level, namely Quantitative Process
Management. The purpose of this area is to control the process performance of the software project
quantitatively. Quantitative Process Management involves establishing goals for the performance of
the project’s defined software process, taking measurements of the process performance, analyzing
these measurements, and making adjustments to keep process performance within acceptable
limits.
The main goals of Quantitative Process Management are:
1. planning the quantitative process management activities;
2. controlling quantitatively the process performance of the project’s defined software process;
3. knowing, in quantitative terms, the process capability of the organization’s standard software process.
The software process capability of Level 4 organizations can be summarized as quantifiable
and predictable, because the process is measured and operates within quantitative limits. This
level of process capability allows an organization to predict trends in process and product quality
within the quantitative bounds of these limits. Because the process is both stable and measured,
when some exceptional circumstance occurs the “special cause” of the variation can be identified
and addressed. When the predefined limits are exceeded, actions are taken to understand and
correct the situation.
The current method used by Intecs S.p.A. to survey the software development process is based on
the collection of two kinds of data: size and effort. Size concerns the quantity of work executed
in a particular activity of a project, such as how many pages of documents, functions or routines
have been written; effort indicates the number of hours spent working on an activity of a project.
The data collection takes place by filling in a file called “Personal Metrics”, an Excel
file with a number of sheets, one for each resource involved in the project, up to a maximum
of nine resources. In every sheet, data related to the size and effort of the resource are recorded.
Moreover, another Excel file, “Project Metrics”, summarizes the data for each
month of the year. Fig. 6.4 shows the Personal Metrics file. The sections concerning size and
effort are divided into parts that split the data according to the quantity of new
software, reused software and changed software. The data contained in the Project Metrics file
are compared with the initial estimates inserted at the start of the process.
This methodology presents some disadvantages:
• the data collection does not allow recording the execution flow of the process activities or
the allocation of these activities to the resources belonging to the project team;
• most of the metrics computed in the Excel files are “product metrics” that evaluate the
quantity of work produced by the team and the time spent performing the corresponding activities;
these metrics do not allow monitoring the execution of the process or the workload of the
resources involved in it;
• providing estimates in the initial phase of a new project is often a very difficult
task, so an automated tool that provides measurements of past similar projects
can help managers make more accurate estimates.
The project with Intecs S.p.A. aims to apply the measurement framework to the OSSP process
and, at the same time, to realize a tool that computes both project and product metrics.
Unfortunately, the OSSP model and our measurement framework differ on some points:
Figure 6.4: The Personal Metrics file.
Figure 6.5: The Project Metrics file.
• It is impossible to define a priori the previous field for the subphases. However, for every
phase a Start-up subphase and a Closure subphase are available. So, even if an order among
the subphases does not exist, the previous field can always be set.
• Even if the framework uses three levels (processes, tasks and workitems) while the OSSP has
four (process, phase, subphase and activity), we can map a phase onto a subprocess,
and it is then possible to apply the split operator to compute the measurements.
Figure 6.6: The decomposition of an activity on the basis of its owners.
• An activity of the OSSP is mapped onto a workitem of the framework; however, an activity
is not atomic, and can be further decomposed on the basis of the resources assigned to it.
For example, in fig. 6.6, the activity called wi may be decomposed into three activities, one
for each employee that has worked on it. The effort spent on wi will be the sum of the
single efforts spent by the employees on wi:
Effort(wi) = Σ_{i ∈ Owner} effort_i
• In general, if not expressly defined, the events CreatedInstance and StartedInstance
coincide. This affects the computation of the queues.
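To make the per-owner decomposition concrete, the effort aggregation described above can be sketched as follows; this is a minimal illustration, and the class and attribute names (OwnerEffort, effort_hours) are hypothetical, not taken from the actual tool:

```python
# Minimal sketch of the per-owner effort decomposition of an activity wi.
# The names below are illustrative, not the project's actual API.
from dataclasses import dataclass

@dataclass
class OwnerEffort:
    name: str
    effort_hours: float  # hours spent by this owner on the activity

def activity_effort(owners: list[OwnerEffort]) -> float:
    """Effort(wi) = sum of the single efforts spent by each owner on wi."""
    return sum(o.effort_hours for o in owners)

# The activity wi decomposed into three per-employee workitems:
wi_owners = [OwnerEffort("Rossi", 12.0),
             OwnerEffort("Bianchi", 7.5),
             OwnerEffort("Verdi", 4.5)]
print(activity_effort(wi_owners))  # 24.0
```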
After the analysis of the OSSP and the decisions about the mapping strategy, the next step was to
design a database for the collection of data concerning the execution
of software process instances. In the next section, the data models concerning the OSSP and the
framework are described.
6.1.3 Specification
In order to measure a process instance of the OSSP, it is necessary to collect and organize all the data
concerning its execution. Therefore, a Web application that collects these data has been developed
for the project.
It is possible to distinguish:
• Process Definition, which models every process, phase, subphase and activity, together with
the role assigned to each activity.
• Process Execution, which stores all the project instances of the OSSP.
The data model defined during the project development is reported in fig. 6.7. The collected data have
been arranged in a way that facilitates the application of the measurement framework.
The tables Processo, Fase, Sottofase and Attivita represent the static entities of the OSSP model,
in which all the information presented in the model is recorded. These tables are connected by
relations that express the hierarchy existing among the entities (one phase contains one or
more subphases, one subphase contains one or more activities).
A particular “is-a” relation exists between the tables Attivita and Processo: it indicates that
a support activity is, in its turn, a process. A similar relation connects Fase and Processo: in
fact, there are particular phases (those concerning the different lifecycles applicable to the
process) that are, in their turn, modeled as processes.
The table Assigned to, which connects Ruolo to Attivita, represents the m:n relation between
activities and roles in the OSSP: one activity may be assigned to one or more roles, while a role
may be assigned to one or more activities.
There are four tables concerning the instances, one for each static entity, with one exception:
in the static model some roles can be assigned to an activity, while during the execution the roles are
replaced by an Owner who works on that activity. The concept of owner indicates that an
employee assumes a certain role for a particular project, so on different projects the same person
could assume different roles.
In order to apply the measurement framework to the OSSP data, two further fundamental
tables have been added to the model: Istanza and Evento. These tables record, respectively,
instances and events in a format consistent with the measurement framework. In Istanza
all the instances are inserted, providing information about the father or the previous instance, if
they exist, and the current state. Evento records the type of the occurred event, its timestamp, and the
new state assumed by the instance.
The model adopted for the measurement framework is shown in figs. 6.8 and 6.9. The generic
workflow instance is represented by the WfInstance class, which contains all the information concerning
an instance together with some of the basic measurements of the framework related to a workflow
instance. Moreover, an Owner object is associated with each workitem instance.
The generic workflow event is represented by a class named WfEvent that, besides storing all the
data regarding workflow events, maintains a reference to a WfInstance object. This reference is
necessary to link a workflow instance to all the events that regard it. Both WfInstance and WfEvent
are implementations of the interface named WfElement, which represents the generic workflow element.
Figure 6.7: The ER Model of the OSSP.
The instance and event sets are modeled, respectively, as two classes named WfInstanceSet and
WfEventSet. These classes contain many methods: some are useful to manage the class, while
others implement the operators and measurements defined in the framework.
WfInstanceSet and WfEventSet are subclasses of the WfElementSet class, which represents the
generic element set.
Moreover, a workflow system is represented in the model by a class named WfSystem, which holds a
reference to the two element sets, WfInstanceSet and WfEventSet.
At this point, two data models have been introduced: the OSSP data model and the measurement
framework data model. In order to unify these two models, it has been necessary to create two
wrapping classes. The class named Owner represents the role that executes an activity; it is
composed of the last name and first name of the user having the specific role, together with a
description of the work quantities (number of functions, number of routines, etc.) produced during
the execution of that activity. This information is useful to calculate product-oriented measures.
Figure 6.8: The measurement framework data model.
The class named OSSPDataManager supplies methods to create the instance set and the event set
starting from the tables of the OSSP database.
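The class model just described can be rendered as a compact sketch; this is an illustrative reconstruction under the structure given in the text, not the project’s actual source code, and the method bodies are reduced to the bare minimum:

```python
# Illustrative reconstruction of the measurement framework class model:
# WfElement is the generic element, implemented by WfInstance and WfEvent;
# WfInstanceSet/WfEventSet extend the generic WfElementSet; WfSystem holds both.
from dataclasses import dataclass, field

class WfElement:
    """Generic workflow element (interface implemented by instances and events)."""
    pass

@dataclass
class WfInstance(WfElement):
    instance_id: str
    state: str = "created"

@dataclass
class WfEvent(WfElement):
    event_type: str        # e.g. "CreatedInstance", "StartedInstance"
    timestamp: float
    instance: WfInstance   # reference linking the event to its instance

class WfElementSet:
    """Generic element set; subclasses would add the framework's operators."""
    def __init__(self):
        self.elements: list[WfElement] = []
    def add(self, e: WfElement) -> None:
        self.elements.append(e)

class WfInstanceSet(WfElementSet): pass
class WfEventSet(WfElementSet): pass

@dataclass
class WfSystem:
    """A workflow system referencing its instance set and event set."""
    instances: WfInstanceSet = field(default_factory=WfInstanceSet)
    events: WfEventSet = field(default_factory=WfEventSet)
```

An OSSPDataManager would then populate the two sets by iterating over the rows of the Istanza and Evento tables.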
6.1.4 Implementation
The first step in the implementation of a tool for data collection at Intecs S.p.A.
has been the creation of an Oracle database, called “DBMetriche”, for the OSSP model. A number
of scripts for uploading the data concerning the OSSP process have been created, together with
those related to the management of the database, such as deletion, updating, and so on.
The next step has been the design of a Web application for data insertion. Some snapshots of
the user interface of the tool are reported in figs. 6.11, 6.12 and 6.13. They respectively
show the login page, the page for the insertion of a new process instance and, finally, the page for
a new subphase instance.
The aim of the case study was to give an idea of the application of the framework in a different
Figure 6.9: The measurement framework data model.
Figure 6.10: The two wrapping classes.
environment. The project is currently in the data collection phase. The tool has been installed, for
validation, on the computers of a developer team that will use it to instantiate a new software
development process. Unfortunately, we are not currently able to obtain a sufficient number of
instances to provide results about the measurements, but this will be done in the next
Figure 6.11: The login page of the OSSP DBManager tool.
Figure 6.12: The webpage for the insertion of a new process instance.
two months.
6.2 University of Salerno
Another possible application domain of the measurement framework is the evaluation of workflow
processes in the context of public agencies. The second case study discussed here concerns
one of the administrative processes managed by the University of Salerno; in particular, the
Figure 6.13: The page to create a new subphase instance.
selected process is the Request of a Certification submitted by an employee.
For the discussion of the case study we refer to the model shown in fig. 2.7. First of all, the
process has been modelled using the IBM Holosofx Workbench tool. The main features of this tool
are summarized in section 6.2.1; the description of the modelled process follows in section 6.2.2,
where the process vision (also called To-Be) is discussed in some detail.
Starting from the analysis performed by means of the modeling tool, the process has been
implemented as a workflow in the application Titulus’97, which manages a system called “Protocollo
Informatico” operating at the University of Salerno.
6.2.1 The IBM HolosofX Workbench tool
IBM Holosofx Workbench is a sophisticated tool that allows realistic, visual modeling of the
way a process is handled under specified conditions [IBM02]. Using this tool it is possible to
capture all the possible alternative paths of a process and to generate, on demand, an explicit path for
evaluation and/or modification. IBM Holosofx Workbench combines the abilities to model and to
analyze a Process. The application has many project management features and performs
animated simulations. Finally, it facilitates integration with workflow engines if an
automated solution is appropriate. There are six main components of IBM Holosofx Workbench:
Process Modeling, Process Case Analysis, Weighted Average Analysis, Process Simulation, Report-
ing, and Workflow Integration. Additional tools, like a UML Modeler or an X-Form Modeler,
are included in the suite.
• Process Modeling - A model of the Process provides the data necessary for a detailed
analysis of the time and costs associated with the Process. Modeling makes it possible to view and
analyze all the variations of a Process, to analyze and simulate it, to generate analysis reports and
to interface portions of the Process with some WfMSs (such as FlowMark or Visual WorkFlo).
A Process can be modeled by one or more separate but interrelated diagrams called Activity
Decision Flow (ADF) Diagrams. This kind of diagram is used to model the main elements
of a Process visually and to show how these elements are interrelated. Fig. 6.14 shows all the
elements that can be inserted in an Activity Decision Flow Diagram. The connection
of these elements allows one to model a network of activities, decisions, and inputs/outputs.
Figure 6.14: The different elements of a ADF diagram.
• Process Case Analysis - In every execution of a business process there are variations in
which activities are performed, in who performs them, and in when they are performed. These
variations are caused by the conditions that exist when the Process is performed. In IBM
Holosofx Workbench, the business conditions are modeled by Decisions and Choices. Tracing
the path of the Process and stopping before a Decision is reached, there is only one Case, with
a 100% probability of occurrence. At the point of the Decision, the path of the Process branches
into as many paths as there are Choices, and the resulting Cases divide the probability of
occurrence: the 100% probability of the original Case is multiplied by the probability associated
with the Choice that leads to each Case. Each Case therefore has a probability of occurrence, which
determines how much impact the Case has on the overall Process. If Cases are not
properly weighted, they can dramatically alter measurements. IBM Holosofx Workbench
allows users to separate each Case, examine it, and generate a unique Activity Decision Flow
diagram when needed. Some advantages that can be obtained from the isolation and analysis of
a single Case are: process verification, and reviewing and examining the impact of relevant Cases
on various metrics such as time, cost, and resources, or on the overall process performance.
• Weighted Average Analysis - To calculate any metric of a Process accurately (e.g., total
cost), the metrics of each of its individual Cases are multiplied by the respective Case’s occur-
rence probability. The sum of the weighted Case metrics provides the overall metric for the
Process. IBM Holosofx Workbench generates 37 Weighted Average Analysis reports, which
are grouped into five categories: Times, Costs, Classifications, Indices, and General.
• Process Simulation - Process Simulation is another method used to analyze a Process.
While Weighted Average Analysis provides a static, long-term view of the Process, Process Simu-
lation captures the dynamics of a shorter-term view. During a Simulation, IBM Holosofx
Workbench dynamically generates a number of inputs. These inputs travel through one of
the possible paths of the Process. Throughout the Process Simulation, Resources are as-
signed to Tasks as needed. If inputs arrive at a Task and the required Resources are not
available, the inputs form queues. The detection of a large number of items in the queues
helps determine potential bottlenecks and their causes. IBM Holosofx Workbench is able
to animate events as they occur, to help visualize the analysis. One of the most im-
portant features of Simulation is the ability to perform what-if analysis on a Process Model
and discern which variation of the Process Model best suits one’s needs. A specific set of
Simulation parameters used for a Simulation is called a Scenario.
• Reporting - IBM Holosofx Workbench allows one to produce reports that summarize different
aspects of the Business Processes. There are many different kinds of reports, which can be
exported to Excel. The groups of reports concern Weighted Average, Simulation, Docu-
mentation, and Analysis.
• Workflow Integration - IBM Holosofx Workbench can be used as a front-end to workflow
engines. Integration with workflow engines is accomplished by selecting and filtering the
appropriate workflow components from a Process Model and translating them into a format
acceptable for use by other systems. Common methods used in this loosely integrated
architecture are SQL, WFPL, CDL, and FDL. Currently, IBM Holosofx Workbench offers
workflow integration in vendor-specific or industry-standard formats: FlowMark and
MQ Workflow by IBM, Visual WorkFlo by FileNet, Open Image by SNS, and the Workflow
Coalition Standard (WPQL).
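The Weighted Average Analysis described above reduces to a simple computation: each Case metric is weighted by the Case’s probability of occurrence and the weighted values are summed. A minimal sketch, with invented numbers:

```python
# Weighted Average Analysis: each Case metric is weighted by the Case's
# probability of occurrence; the sum gives the overall Process metric.

def weighted_process_metric(cases):
    """cases: list of (probability, metric_value) pairs; probabilities sum to 1."""
    return sum(p * m for p, m in cases)

# Hypothetical process with two Cases produced by a Decision with two Choices:
cases = [(0.7, 120.0),   # 70% of executions cost 120
         (0.3, 200.0)]   # 30% of executions cost 200
print(weighted_process_metric(cases))  # 144.0
```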
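The queue build-up that Process Simulation detects at a bottleneck can also be illustrated with a toy model: a task served by a single resource while inputs arrive faster than they can be processed. The rates below are invented, and the model is far simpler than the tool’s actual simulation engine:

```python
# Toy simulation: one Task with fixed service capacity per time step.
# The queue-length history exposes the bottleneck described above.

def simulate(arrivals_per_step, capacity_per_step, steps):
    queue, history = 0, []
    for _ in range(steps):
        queue += arrivals_per_step          # new inputs reach the Task
        served = min(queue, capacity_per_step)
        queue -= served                     # the Resource serves what it can
        history.append(queue)
    return history

# Two inputs arrive per step but only one can be served: the queue grows.
print(simulate(arrivals_per_step=2, capacity_per_step=1, steps=5))  # [1, 2, 3, 4, 5]
```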
6.2.2 Process Description
The process selected as a case study comes from the set of administrative processes that are
regularly executed at the University of Salerno. This particular process concerns the granting of
a Certification by the University to an employee who has requested it. The process diagrams are
shown in the following figures. The process starts with the request of a certification made by an
employee of the University to the qualified department (fig. 6.15); after the preliminary operations
on the request (Avvio), the analysis phase starts (Istruttoria). A control of the certificate follows
and, finally, the document is released to the employee.
Even if the process might appear simple, several tasks compose each subprocess. The schematiza-
tion of all the subprocesses is reported in figures 6.16, 6.17, 6.18 and 6.19.
Figure 6.15: The high level of the process.
Figure 6.16: The subprocess Start.
The analysis and the modeling of the process have been carried out by interviewing the main
roles involved in the process. This type of analysis aims to represent how the Process is currently
performed (the As-Is) and provides the baseline measurements for the goals set for improvement.
A new process design is then produced to meet the improvement goals and to be cost-effective (i.e., the
long-term savings are greater than the implementation costs).
After the Process policies have been evaluated and the technologies selected, the many techniques for
redesigning a Process can be considered. As the techniques are applied, the model representing
the To-Be version of the process will differ from the As-Is version of the model in name, duration,
Figure 6.17: The Analysis phase of the process.
Figure 6.18: The Control phase of the process.
Figure 6.19: The Release phase of the process.
and resource requirements.
In our case, after the As-Is analysis some bottlenecks of the process were quickly identified;
thus, some corrective actions on the process were proactively introduced during the modeling
phase. For example, for particular activities the Manager has granted his employees the autho-
rization to sign some documents. In this way, the process duration has been reduced by at least
two days.
In order to perform the As-Is and To-Be analyses, simulation represents a very useful instrument.
The process can be simulated in different ways, modifying the values of particular parameters, to
test whether one scenario is better than another. Fig. 6.20 gives a snapshot of the process simulation.
Figure 6.20: An example of process simulation.
The process has been implemented with a particular software application for document and
workflow management, Titulus’97, which will soon be used at the University of Salerno to
store and record documents in a Web environment. The workflow application is very simple and
offers basic functionalities; the workflow model is not graphical, but uses a tabular form in which
particular attributes indicate the preconditions and the postconditions of each task of the
process. The workflow model for the considered process is shown in fig. 6.21. The
implementation of the process is in progress and its execution will start as soon as the training on
the methodologies and on the Titulus’97 software ends.
Figure 6.21: The workflow model in Titulus.
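A tabular workflow model of this kind can be thought of as a table of tasks, each with a precondition and a postcondition attribute: a task may run when its precondition holds, and asserts its postcondition on completion. The sketch below loosely mirrors the certification process; the condition names are invented and the actual Titulus’97 table format is not reproduced:

```python
# Sketch of a tabular workflow model: each task fires when its precondition
# (a previously asserted fact) holds, and asserts its postcondition.
# Task names follow the certification process; conditions are hypothetical.

TASKS = [
    {"task": "Avvio",       "pre": "request_received",    "post": "request_registered"},
    {"task": "Istruttoria", "pre": "request_registered",  "post": "certificate_drafted"},
    {"task": "Controllo",   "pre": "certificate_drafted", "post": "certificate_checked"},
    {"task": "Rilascio",    "pre": "certificate_checked", "post": "certificate_released"},
]

def run(facts):
    """Repeatedly fire any task whose precondition holds; return execution order."""
    executed, progress = [], True
    while progress:
        progress = False
        for t in TASKS:
            if t["pre"] in facts and t["post"] not in facts:
                facts.add(t["post"])
                executed.append(t["task"])
                progress = True
    return executed

print(run({"request_received"}))  # ['Avvio', 'Istruttoria', 'Controllo', 'Rilascio']
```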
Chapter 7
Other Approaches to Measurement
In this chapter, a comparison among current approaches to process measurement is discussed.
The first approach, named measurement framework, has already been discussed in the previous
chapters. It might be seen as an application of general methods such as the “Key Performance In-
dicators” [BM97] or the “Goal Question Metric” [MB00]. These methods face the problem of
process performance evaluation by first identifying several areas on which to focus attention (e.g.
efficiency, cost, quality, etc.), and then defining a basic set of indicators for each area. Since this ap-
proach assumes the existence of a data source against which the indicators can be evaluated, it fits
well with the ex-post modality.
Another example of this approach is the tool based on a set of measures presented in [MR00], in
which three possible perspectives of workflow evaluation are considered: the process view, the resource
view and the object view. A measurement framework is usually extensible and can be easily adapted
to a particular application domain. However, the introduction of new measures requires the modi-
fication of the source code that implements the new definitions before they can be applied. The second
approach, the monitor-based one, has already been treated in chapters 4 and 5.
Here we discuss three further approaches: first the built-in facilities, then the process warehouse and,
finally, the language-based tools.
7.1 A comparison of current approaches to process measurement
We have identified different approaches to process measurement.
• Built-in facilities - The first approach is based on facilities currently embedded within com-
mercial WfMSs. Typically, an administration tool is able to trace the history of a process
instance during its enactment and to evaluate the execution time of tasks and processes as
well as the workload of resources. The capability to measure these basic quantities makes it possible to
122 Chapter 7. Other Approaches to Measurement
set time alerts with respect to due dates, giving the WfMS a certain degree of proactivity.
In summary, the key strengths of commercial tools for the performance evaluation of workflows
are their easy-to-use design and their industrial quality. On the downside, these tools are
built-in; therefore they are difficult to port and provide a reasonable but limited set
of measurement functions.
• Monitor-Based Tool - A monitor-based tool can be considered representative of a class of
tools capable of direct interaction with the WfMS in order to manage future events. A mon-
itor exhibits characteristics such as real-time responses to measurement queries, continuous
measurement and feedback capability, depending on the entity subject to the measurement
activity. Monitor-based tools should be designed taking into account the interaction with the
workflow engine, in order to ensure acceptable response times.
• Process Warehouse - The third approach considers the use of a data warehouse system to
organize the “process-oriented knowledge” of an enterprise. This approach is called “process
warehouse” in the literature [LSB01] and typically exploits the ex-post modality. Even if the
Process Warehouse approach is becoming popular for the performance analysis of workflows,
its design and implementation are very expensive and difficult.
The set of techniques known as Data Mining permits the extraction of hidden knowledge
from the data and improves the process evaluation. Data Mining is applied to different kinds
of data sources: databases, data warehouses or other information repositories, such
as workflow logs. In the scope of the performance analysis of workflows, these techniques might
be used both as a support in making strategic decisions about the process or the organization,
and for the definition of rules that control and prevent undesirable situations during
the enactment of the processes.
• Language-Based Tool - Language-based tools are also possible. A language is a truly
general application tool that enables the writing of queries against a WfMS in order
to compute measures about given workflow entities. The main benefit of such a tool is that
the expressive power of a programming language can be exploited to define and evaluate new
measures. The present version of WPQL works in the ex-post modality only, retrieving the data
necessary for the evaluation of measures from the workflow logs. The practical use of WPQL
in continuous modality requires its integration with the workflow engine, in order to ensure
the necessary real-time behavior.
7.2 Built-in WfMS facilities
The main features of the monitoring tools of different commercial WfMSs are summarized below. Similar
features can be found in other commercial products equipped with built-in basic measurement
functions.
7.2.1 Oracle Workflow
Once a workflow has been initiated, it may be necessary to check its status, to ensure that it is
progressing or to identify the activity currently being executed. Oracle Workflow provides
a graphical Java Workflow Monitor to view and administer the status of a specific instance of a
workflow process. Workflow Report supplies a set of reporting APIs to obtain all the
information about the instances currently in progress [Ora01]. Fig. 7.1 shows an example of process
monitoring in Oracle Workflow. The considered process is an administrative process concerning the
granting of a building permit.
Figure 7.1: The monitoring tool embedded in Oracle Workflow.
For the particular process instance shown, a path is highlighted together with the task “Preistruttoria” (“Preliminary Examinations”), for which further data are provided below the graphical representation. These data inform the analyst that, with regard to the task “Prepara Pratica”:
1. The role “TECNICO ISTRUTTORE” is in charge of performing the task.
2. The task state is “CLOSED”, i.e., it has been completed.
3. The task duration is 44 minutes (evaluated as the difference between the Begin Date and the End Date).
4. There was a delay during the progress of the task “Prepara Pratica”, since the End Date is later than the Due Date.
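As an illustration, the duration and delay checks listed above can be sketched in plain Python on a hypothetical audit-trail record; the field names are invented for the example and do not reproduce the actual Oracle Workflow schema.

```python
from datetime import datetime, timedelta

# Hypothetical audit-trail record for a task instance; the field
# names are illustrative, not the real Oracle Workflow attributes.
task = {
    "role": "TECNICO ISTRUTTORE",
    "state": "CLOSED",
    "begin_date": datetime(2003, 5, 12, 9, 16),
    "end_date": datetime(2003, 5, 12, 10, 0),
    "due_date": datetime(2003, 5, 12, 9, 45),
}

# Duration is evaluated as the difference between Begin Date and End Date.
duration = task["end_date"] - task["begin_date"]

# The task is late when the End Date is later than the Due Date.
delayed = task["end_date"] > task["due_date"]

print(duration)   # 0:44:00, i.e. 44 minutes
print(delayed)    # True
```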
7.2.2 Filenet Panagon
Panagon Visual WorkFlo allows to analyze and improve the performance of processes by the ex-
aminations of statistical data [Fil01]. The system offers three ways to examine statistical data:
124 Chapter 7. Other Approaches to Measurement
a graphical tool, Visual WorkFlo/Conductor, the vwtool utility that is a command-line driven
application and two API calls that show data relatively to Work Objects (instances) and queues.
7.2.3 Ultimus Workflow
Ultimus Workflow contains over 200 built-in features that allow users to create sophisticated workflow processes without coding. Some of these features concern monitoring and reporting. The monitoring tool allows users to graphically track the status of any workflow instance. Users can zoom in on the list of instances that are in progress or have been completed. When an instance is selected, a color-coded workflow map is shown, highlighting its current status.
Good reporting utilities are also included in the Ultimus Workflow suite [Ult01]. Graphical representations of the report data are based on variables such as task, actor, cost, and elapsed time.
Another feature is the Workload View, which allows the workflow administrator to determine the current and future workload of each client user. In this way, it is possible to reassign one or more tasks to other users, providing flexibility in handling exceptions. Finally, in order to evaluate and improve the effectiveness of processes, Web-based reports can be designed, generated and accessed over the Internet. Cost, time and status metrics can be viewed from a user, department, queue or process perspective.
7.2.4 IBM WebSphere MQ Workflow
WebSphere Business Integration Monitor [IBM03] supports performance-oriented decision making with two different views of the company: a workflow dashboard and a business dashboard. Business dashboard capabilities include cost analysis and trend analysis of archived data. Actual metrics and statistical information can be exported to further analyze and redesign business processes and deliver continuous improvement.
WebSphere Business Integration Monitor also offers an advanced alert system that can be customized to any manager's specific requirements. This makes it possible to take preemptive actions, such as workload balancing, process redesign or process suspension, and to correct operational workflow problems before issues become critical.
7.2.5 Staffware Process Suite
The Staffware Process Monitor (SPM), integrated in the Process Suite, provides a sophisticated tool to monitor the effectiveness and efficiency of entire business processes [Sta03]. The SPM provides a whole new level and depth of insight for business analysts and users by allowing them to intelligently and proactively manage their business processes. Using a graphical interface, the
SPM provides key performance metrics and detailed status reports on entire business processes (e.g., how long it took to process a claim).
Figure 7.2: The business dashboard
The SWIP (Staffware Work in Progress) reporting tool is complementary to SPM and provides a
real-time view of the workloads across user queues.
7.3 The Process Warehouse Approach
Before describing the process warehouse approach in workflow measurement, it is useful recall some
definitions about data warehouse.
7.3.1 Data Warehousing
Data warehousing is a collection of decision support technologies aimed at enabling the knowledge worker (executive, manager, analyst) to make better and faster decisions.
A Data Warehouse (DW) is a single, enterprise-wide collection of data. This collection should fulfil the following four preconditions [Inm92]:
• A DW is subject-oriented - It provides a simple and concise view around particular subject
issues by excluding data that are not useful in the decision support process.
• A DW is integrated - Constructed by integrating multiple, heterogeneous data sources and
applying data cleaning and data integration techniques.
• A DW is time variant - The time horizon of the DW is significantly longer than that of operational systems.
• A DW is non-volatile - It is a physically separate store of data transformed from the operational environment; updates of data do not occur in the DW environment.
The data warehouse supports On-Line Analytical Processing (OLAP), the functional and perfor-
mance requirements of which are quite different from those of the On-Line Transaction Processing
(OLTP) applications traditionally supported by the operational databases. OLTP applications
typically automate clerical data processing tasks (such as order entry and banking transactions)
that are structured and repetitive, and consist of short, atomic, isolated transactions. The trans-
actions require detailed, up-to-date data, and read or update a few (tens of) records accessed
typically on their primary keys. Data warehouses, in contrast, are targeted for decision support.
Historical, summarized and consolidated data is more important than detailed, individual records.
Since data warehouses contain consolidated data over potentially long periods of time, they tend to be orders of magnitude larger than operational databases; enterprise data warehouses are projected to be hundreds of gigabytes to terabytes in size. The workloads are query intensive with
mostly ad hoc, complex queries that can access millions of records and perform a lot of scans,
joins, and aggregates. Query throughput and response times are more important than transaction
throughput [CD97].
Building an enterprise DW is a long and complex process with a very high risk of failure. Another point of view [Kim02] considers a DW as a set of data marts, which are departmental subsets focused on particular subjects (such as a marketing data mart or an accounting data mart). A bottom-up method for building a DW suggests starting with data marts and integrating them subsequently to create the global enterprise DW [GR02].
Conceptual design
A popular conceptual model that influences the front-end tools, database design, and the query
engines for OLAP is the multidimensional view of data in the warehouse. In a multidimensional
data model, there is a set of numeric measures that are the objects of analysis. Examples of such
measures are sales, budget, revenue, inventory, ROI (return on investment). Each of the numeric
measures depends on a set of dimensions, which provide the context for the measure. For example,
the dimensions associated with a sale amount can be the city, product name, and the date when
the sale was made. The dimensions together are assumed to uniquely determine the measure. Each
dimension is described by a set of attributes that can be related via a hierarchy of relationships.
Most data warehouses use a star schema to represent the multidimensional data model. The name
star schema reflects the appearance of a database designed according to this approach. It consists
of one or more fact tables, around which a set of dimension tables cluster. Fig. 7.3 shows an
example of star schema. The database consists of a single fact table and a single table for each
dimension. Each tuple in the fact table consists of a foreign key to each of the dimensions that
provide its multidimensional coordinates, and stores the numeric measures for those coordinates.
Each dimension table consists of columns that correspond to attributes of the dimension.
Figure 7.3: Example of star schema
The main advantage of the star schema is the variety of queries that it can handle in an efficient way; moreover, this schema matches very well the way end users perceive and use the data, thus making it more intuitively understood [Dev00].
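The star schema described above can be sketched with Python's standard sqlite3 module: a single fact table holding foreign keys into the dimension tables plus the numeric measure. All table and column names, and the sample rows, are invented for the example.

```python
import sqlite3

# A minimal star schema: one fact table whose rows hold foreign keys
# to the dimension tables plus the numeric measure (the sale amount).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_city(city_id INTEGER PRIMARY KEY, city TEXT, state TEXT);
CREATE TABLE dim_product(product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales(
    city_id INTEGER REFERENCES dim_city,
    product_id INTEGER REFERENCES dim_product,
    sale_date TEXT,
    amount REAL            -- the numeric measure
);
""")
con.executemany("INSERT INTO dim_city VALUES (?,?,?)",
                [(1, "Salerno", "Campania"), (2, "Milano", "Lombardia")])
con.executemany("INSERT INTO dim_product VALUES (?,?,?)",
                [(1, "widget", "hardware"), (2, "manual", "books")])
con.executemany("INSERT INTO fact_sales VALUES (?,?,?,?)",
                [(1, 1, "2004-01-10", 100.0), (1, 2, "2004-01-11", 40.0),
                 (2, 1, "2004-01-12", 70.0)])

# A typical star-schema query: join the fact table with one dimension
# and aggregate the measure along it.
rows = con.execute("""
    SELECT c.city, SUM(f.amount)
    FROM fact_sales f JOIN dim_city c ON f.city_id = c.city_id
    GROUP BY c.city ORDER BY c.city
""").fetchall()
print(rows)   # [('Milano', 70.0), ('Salerno', 140.0)]
```

The join-then-aggregate pattern shown here is exactly the query shape that the star layout is designed to make efficient.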
Snowflake schemas provide a refinement of star schemas where the dimensional hierarchy is explicitly represented by normalizing the dimension tables, as shown in fig. 7.4. A fact constellation represents multiple fact tables which share dimension tables; it can be viewed as a collection of stars.
Figure 7.4: Example of snowflake schema
OLAP Operators
As mentioned in section 7.3, key characteristics of OLAP include:
• large data volumes, in potentially sparsely populated arrays;
• consolidation upward and drill-down along many dimensions;
• dynamic viewing and analysis of the data from a wide variety of perspectives and through
complex formulae.
The underlying rationale is that users often view and analyze data multidimensionally, using hierarchical segmentation along each dimension. Thus, a user may analyze sales along the time dimension (such as months within years), along the geographical dimension (cities within states) and along the marketing dimension (products within category). This approach can be conceptualized as a cube or, when there are more than three dimensions of analysis, a hypercube. Typical OLAP operations on the hypercube are:
• slice and dice - It corresponds to reducing the dimensionality of the data, i.e., taking a
projection of the data on a subset of dimensions for selected values of the other dimensions.
For example, we can slice and dice sales data for a specific product to create a table that
consists of the dimensions city and the day of sale.
• rollup - This operation increases the level of aggregation. Rollup corresponds to taking the current data object and doing a further group-by on one of the dimensions. Thus, it is possible to roll up the sales data, perhaps already aggregated on city, additionally by product. For example, given total sales by city, we can roll up to get sales by state.
• drill-down - The drill-down operation is the converse of rollup. It decreases the level of
aggregation or increases detail.
Figure 7.5: Some OLAP operators
• aggregation - A measure is aggregated over one or more dimensions. Some examples of queries are: find total sales; find total sales for each city; find the top five products ranked by total sales.
• pivoting - By pivoting, the cube is rotated and the cells are reorganized according to a new perspective, that is, focus is placed on a different combination of dimensions.
Other popular operators include ranking (sorting), selections and defining computed attributes.
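The slice, rollup and drill-down operators above can be mimicked in a few lines of Python on a toy cube held as a dictionary from dimension coordinates to the measure; the data and dimension names are invented for the example.

```python
from collections import defaultdict

# Toy "cube": (city, product, month) -> sales amount. Data are invented.
cells = {
    ("Salerno", "widget", "2004-01"): 100.0,
    ("Salerno", "widget", "2004-02"): 120.0,
    ("Salerno", "manual", "2004-01"): 40.0,
    ("Milano",  "widget", "2004-01"): 70.0,
}

def rollup(cells, keep):
    """Aggregate (group-by) over the dimension indices in `keep`,
    summing the measure over the dimensions that are dropped."""
    out = defaultdict(float)
    for coords, amount in cells.items():
        out[tuple(coords[i] for i in keep)] += amount
    return dict(out)

# slice: fix product = "widget", keeping the remaining coordinates
widget_slice = {c: v for c, v in cells.items() if c[1] == "widget"}

# rollup the slice to city level (drop product and month)
by_city = rollup(widget_slice, keep=(0,))
print(by_city)   # {('Salerno',): 220.0, ('Milano',): 70.0}

# drill-down is the converse: return to the finer (city, month) view
by_city_month = rollup(widget_slice, keep=(0, 2))
```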
7.3.2 Process Warehouse
The process warehouse approach finds its motivation in the lessons learned using the standard tools for process evaluation. The need to access historical data is one of the primary incentives for adopting the warehousing approach. Historical data also constitute a large and increasingly important component of enterprise asset data, where they provide the definitive record of the business. Experience has shown that audit data often tend to grow very quickly as the enactment of workflows takes place [Ley94]. The consequence is that audit data are available on line only for a few months, and that long-term measurements on large volumes of historical audit data require off-line access to these data. The separation of operational process support and analytical decision support is mostly a performance-improving architecture. Moreover, warehouses can typically hold data for longer periods of time and are thus better suited for trend analysis.
The fact that data are not synonymous with information implies the necessity of filtering the audit data in order to keep significant information and to retrieve it in an efficient manner. This, too, is a good motivation for the introduction of a data warehouse [GR02].
There are at least two other motivations, both aimed at enhancing the business value of audit data. The first is the opportunity to aggregate data coming from different sources; the second is the possibility of exploiting the warehouse data model to perform multidimensional analysis.
Larger corporations employ several workflow management systems for the execution of different business processes. The aim of a data warehouse system is to offer integrated and/or comparative analysis of the data produced by the different systems. It collects data from both workflow and external sources, and reorganizes them on the basis of a new data model, namely the multidimensional data model. This architectural schema offers more possibilities to analyze data from different perspectives. Fig. 7.6 shows the process warehouse approach. This figure is an extension of fig. 2.14, where a data warehouse system has been inserted between the data sources and the measurement tool.
The ETL (Extraction, Transformation and Loading) phase aims to collect data arriving from different data sources in order to record them in the data warehouse. It first requires a preliminary data cleaning, in order to select only the data useful for the purposes of performance evaluation, and to correct them. Furthermore, data arising from different data sources must be transformed into a common format in order to be loaded into the process data warehouse.
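The extract-clean-transform-load pipeline described above can be sketched as a chain of generator functions; the record layouts and event names are invented for the example and do not correspond to any particular WfMS.

```python
# A minimal ETL sketch: extract audit events from heterogeneous
# sources, keep only the useful ones, bring them into a common
# format, and load them into the warehouse.

def extract(sources):
    # pull raw audit records from every data source in turn
    for source in sources:
        yield from source

def clean(events):
    # keep only events useful for performance evaluation
    for e in events:
        if e.get("event") in {"started", "completed"} and e.get("task"):
            yield e

def transform(events):
    # bring records from different WfMSs into one common format
    for e in events:
        yield {"task": e["task"].strip().lower(),
               "event": e["event"],
               "timestamp": e["ts"]}

def load(events, warehouse):
    warehouse.extend(events)

# Invented audit logs from two hypothetical WfMSs
wfms_a = [{"task": "Prepara Pratica ", "event": "started", "ts": "2004-01-10T09:16"},
          {"task": "Prepara Pratica ", "event": "completed", "ts": "2004-01-10T10:00"}]
wfms_b = [{"task": None, "event": "heartbeat", "ts": "2004-01-10T09:30"}]

warehouse = []
load(transform(clean(extract([wfms_a, wfms_b]))), warehouse)
print(len(warehouse))   # 2: the heartbeat record was cleaned away
```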
Metadata are very important in this approach, because a data warehouse has to maintain all the information about the structure of the source databases, the warehouse schema, the physical data organization, queries and reports, and so on [CD97].
Figure 7.6: A Process Data Warehouse Architecture.
Finally, data in the warehouse must be loaded and updated. Loading is an operation carried out only at the creation time of the warehouse; it is a high-cost operation, because the whole data sources are copied and transferred into the warehouse. In [Dev00], Devlin describes two ways of keeping a DW up to date. The first method, known as refresh, is simply to write the DW once and then rewrite it completely at intervals. The second method assumes that the DW has been written once in its entirety; afterwards, only the changes in the source data are written to the DW. This approach is called update. A choice between refresh and update must be made on the basis of performance aspects and functionality, to determine which is the most appropriate in any particular instance. However, the use of update replication for the maintenance of a DW is very common. Besides the fact that update is capable of creating periodic data starting from transient operational data, the main benefit claimed on behalf of update mode is performance: it works only on changed records, the volume of which is considerably less than the volume of the full dataset.
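The two maintenance modes can be contrasted in a minimal sketch, with the warehouse and the operational source modeled as plain dictionaries (an illustrative simplification, not Devlin's architecture):

```python
# refresh: rewrite the warehouse copy completely from the source;
# update: apply only the changed/new records.

def refresh(source):
    # full rewrite of the warehouse from the source
    return dict(source)

def update(warehouse, changes):
    # apply only changed records; their volume is far below a full copy
    warehouse.update(changes)
    return warehouse

source = {"ord-1": 100, "ord-2": 250}
dw = refresh(source)                        # initial full load

source["ord-2"] = 300                       # operational data change
source["ord-3"] = 80
dw = update(dw, {"ord-2": 300, "ord-3": 80})

print(dw == source)   # True: same content, moved incrementally
```

The performance argument quoted above is visible here: the update call touched two records instead of copying the whole dataset again.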
OLAP tools can be used for analyzing the data. These OLAP tools offer much greater functionality and are more highly optimized than the typical monitoring interfaces of workflow systems.
Several works using the process warehouse approach have been proposed in the literature.
M. zur Muehlen [Mue01] discusses the design of data warehouse applications that take advantage of workflow technology as an information source. He considers the audit trail as the main system data source, providing a detailed discussion of the two DW dimensions process and resource; the addition of a third dimension concerning business objects is also discussed. A case study derived from an insurance scenario has been analyzed and implemented using a Java-based workflow management system, which has been evaluated using operational data from the business processes of the selected company. In [EOG02] data warehouse technology is used to exploit workflow logs for monitoring and process improvement. Unlike other projects [CDGS01, BCDS01], whose goal was to help detect critical process situations or quality degradations under different circumstances, this approach is general, since it is based on a general workflow meta-model and on explicitly collected queries which should be answered.
In [LSB01] the Process Data Warehouse is presented as a useful technique for extracting knowledge measurements from workflow logs. The authors explain the ability of a WfMS to measure knowledge and the role of the process warehouse in this context, and present an example from the insurance sector.
HP BPI Suite
The main goal of Business Process Intelligence (BPI) is to enable business and IT users to extract knowledge hidden in the WfMS logs and to be alerted of critical situations. HP Labs has developed a tool suite for BPI that supports organizations in analyzing, predicting, and preventing exceptions [CDGS01]. The tool suite can also dynamically predict the occurrence of exceptions at process instantiation time, and progressively refine the prediction as process execution proceeds. Exception prediction allows users and applications to take actions in order to prevent the occurrence of the exceptions.
Their approach is based on applying data mining and data warehousing techniques to process
execution logs. Workflow Management Systems record all important events that occur during
process executions, like the start and completion time of each task, the resource that executed it,
and any failure that occurred during activity or process execution. By cleaning and aggregating
workflow logs into a Workflow Data Warehouse (WDW) and by analyzing them with data mining
technologies, it is possible to extract knowledge about the circumstances in which an exception
occurred in the past, and use this information to explain the causes of its occurrence as well as to
predict future occurrences within running process instances. Another long term goal is the ability
of dynamically optimizing workflow definitions and executions.
The dimensional model, which also makes it possible to analyze specific process behaviours, is reported in fig. 7.7. A detailed description of this model can be found in [BCDS01].
Data mining techniques are applied mainly to exception analysis; in this case, however, the aim is not the effective and prompt handling of exceptions, but understanding why an exception occurs in order to predict and prevent further occurrences. The problem is treated as a classification problem that identifies the characteristics of “exceptional” process instances.
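The classification view can be illustrated by a deliberately simple sketch: label past instances as exceptional or not, then search for the single attribute value that best characterizes the exceptional ones (a one-rule decision stump). The HP suite applies real data mining to the warehouse; the features and data below are invented.

```python
# Invented past process instances with categorical features and a label.
instances = [
    {"resource": "clerk-1", "channel": "fax", "exceptional": True},
    {"resource": "clerk-2", "channel": "fax", "exceptional": True},
    {"resource": "clerk-1", "channel": "web", "exceptional": False},
    {"resource": "clerk-3", "channel": "web", "exceptional": False},
    {"resource": "clerk-2", "channel": "web", "exceptional": False},
]

def best_predictor(instances, features):
    """Return the (feature, value) pair whose presence best matches
    the 'exceptional' label, with its accuracy on the training data."""
    best, best_acc = None, 0.0
    for f in features:
        for v in {i[f] for i in instances}:
            hits = sum((i[f] == v) == i["exceptional"] for i in instances)
            acc = hits / len(instances)
            if acc > best_acc:
                best, best_acc = (f, v), acc
    return best, best_acc

rule, accuracy = best_predictor(instances, ["resource", "channel"])
print(rule, accuracy)   # ('channel', 'fax') 1.0
```

In this toy dataset every exceptional instance arrived by fax, so the stump "channel = fax" explains the past exceptions and could flag risky running instances.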
Figure 7.7: The HP Workflow Data Warehouse schema
Carnot
Carnot Process Warehouse ([CAR03]) is a commercial suite that uses a workflow log together with
business data to create a process warehouse. Starting from the collected data, a set of metrics is
computed and visualized in a graphical way.
The CARNOT process warehouse serves as the interface between the audit trail database and a process-oriented data warehouse infrastructure. Interfaces to external data warehouses, OLAP tools and visualization tools are specified to allow further extension of the CARNOT framework with industry-leading applications.
The components building the CARNOT process warehouse are: a process warehouse builder, a process warehouse database and a set of process warehouse metrics. The process warehouse builder extracts and transforms the available data in order to store them in the process warehouse database. The process warehouse metrics component analyses the collected information and visualizes it.
Figure 7.8: The CARNOT Process Warehouse Architecture.
Filenet P8
In FileNet P8 ([Sil03]) an OLAP approach is adopted. FileNet Process Analyzer takes a stream
of XML data from FileNet P8 event logs and stores it temporarily in a data warehouse. From
there, selected data passes quickly to a data mart, where it is stored in special schemas supporting
OLAP structures called data cubes. Data in the data mart is then distilled and moved to storage in
the data cubes themselves. Each cube holds only a specific class of business process information,
such as work-in-process information or workload information. Cubes can be historical or real-
time. In historical cubes, all possible query combinations are pre-calculated before transfer to the
cube, which allows users to slice the data from any perspective with good performance. However,
information in historical cubes tends to be updated relatively infrequently. Real-time cubes apply
transformations on the fly to data still stored in the data mart, so information is up-to-date but
query performance is somewhat reduced. Once the data are organized in OLAP cubes, whether historical or real-time, graphs and tables of process data can be created using any OLAP-aware reporting tool.
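The pre-calculation performed for historical cubes can be sketched by materializing the aggregates for every subset of dimensions (the full data cube), after which any group-by is answered by lookup instead of scanning rows. Dimension names and rows are invented for the example.

```python
from itertools import combinations
from collections import defaultdict

# Invented work-in-process records with two dimensions and a measure.
dims = ("queue", "user")
rows = [
    {"queue": "claims", "user": "ann", "items": 4},
    {"queue": "claims", "user": "bob", "items": 2},
    {"queue": "orders", "user": "ann", "items": 5},
]

# Pre-calculate the aggregate for every combination of dimensions,
# including the empty combination (the grand total).
cube = {}
for r in range(len(dims) + 1):
    for subset in combinations(dims, r):
        agg = defaultdict(int)
        for row in rows:
            agg[tuple(row[d] for d in subset)] += row["items"]
        cube[subset] = dict(agg)

# Any group-by is now a dictionary lookup:
print(cube[("queue",)])   # {('claims',): 6, ('orders',): 5}
print(cube[()])           # {(): 11}, the grand total
```

This is the trade-off the text describes: slicing from any perspective becomes cheap, at the cost of refreshing the pre-calculated cube relatively infrequently.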
7.4 Measurement Language
A completely different approach to process measurement is based on the use of a specialized language. Querying a WfMS with a specialized language to compute a particular measurement represents the most general approach to implementing a measurement tool.
Languages are a powerful mechanism for expressing tasks to be performed on a system. A language is a truly general application tool that enables the writing of queries against a WfMS in order to compute measures about given workflow entities.
The main benefit of such a tool is that the expressive power of a programming language can be exploited to define and evaluate new measurements. This is particularly useful when the application domain changes and the set of performance indicators must often be tailored to the specific application domain. This point of view represents a remarkable benefit with respect to commercial measurement capabilities. The measurement language approach is also an advance with respect to the proposals in the literature [MR00, AEN02], where the introduction of new measurements requires the modification of the source code that implements the new definitions before the measurement can be applied.
In [AEGN02] WPQL (Workflow Performance Query Language) was introduced as a tool for the performance evaluation of workflows. WPQL is a language able to define and apply new measurements, starting from primitive measurement functions and abstracting them into composite measures ready to be applied. The basic idea of the language can be summarized by the following schema:
1. Define a new measure.
2. Select the workflow entities to measure.
3. Apply the measure to the selected entities.
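The three-step schema can be mimicked in plain Python rather than in WPQL itself (whose actual syntax is not reproduced here): measures are ordinary functions, so a new one can be defined, entities selected, and the measure applied. All names and data are invented for the illustration.

```python
# 1. Define a new measure: the duration of a task instance, in msec.
def duration(entity):
    return entity["end_ms"] - entity["begin_ms"]

# 2. Select the workflow entities to measure (invented instances).
tasks = [
    {"name": "Prepara Pratica", "state": "CLOSED",
     "begin_ms": 0, "end_ms": 2_640_000},   # 44 minutes
    {"name": "Preistruttoria", "state": "OPEN",
     "begin_ms": 100, "end_ms": 200},
]
selected = [t for t in tasks if t["state"] == "CLOSED"]

# 3. Apply the measure to the selected entities.
results = {t["name"]: duration(t) for t in selected}
print(results)   # {'Prepara Pratica': 2640000}
```

Treating measures as first-class functions is what lets new measures be defined without touching the interpreter's source code, which is the advantage claimed for the language-based approach above.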
Figure 7.9 illustrates a working session with WPQL. The purpose of the analyst is the evaluation of the measure intertask duration, defined as the time interval between the launching of a start node i_s and the completion of an end node i_e, encountered along a path composed of a certain number of task instances.
The WPQL interpreter provides an initial environment where a given number of primitives and measures are already defined. This is the case of path, a primitive that evaluates to true if there is a sequence of task instances crossed from an initial node to a final node. The measures ♯ and ∆ (delta in WPQL) are also part of the initial environment. To define the new measure intertask duration, the analyst defines the predicates p1 and p2 used in the body of the measure. The application of intertask duration is then shown on two task instances belonging to a MissionRequest process.
Figure 7.9: The WPQL top level.
The output of the measure application is expressed in msec.
In the current version the WPQL interpreter interacts with two different sources of data maintained
by the WfMS:
• The organizational database that collects information about roles, organizational units and
authorities.
• WfMS execution data related to instances (processes, tasks and work items) enacted by the
WfMS Engine.
The measurement queries can be evaluated interactively or can be collected in a file and submitted
in a batch modality in order to obtain a performance evaluation report.
7.5 Further research
In this thesis we have discussed workflow performance evaluation, introducing a measurement framework and various prototypes that implement it in different environments.
Much work can be done to continue research on this topic. First of all, the prototype for the distributed multiagent system presented in chapter 5 must be completed and extended. Problems such as security and performance have not been considered yet.
Another enhancement could derive from the integration of WPQL into the monitors. The advantage of such an integration would be the capability to define new measures, typical of WPQL, combined with the real-time modality and the feedback characteristics of monitors.
There are very few works in the literature that use data mining techniques together with workflow systems, but research in this field is promising. The use of data mining as a support to process measurement is still an open problem. We believe that such techniques could be used to make the behavior of a WfMS proactive. A new research project on “Business Intelligence techniques for workflow performance evaluation” will start in the coming months.
References
[AEGN02] Andrea F. Abate, Antonio Esposito, Nicola Grieco, and Giancarlo Nota. Workflow
Performance Evaluation through WPQL. In SEKE, pages 489–495. ACM, July 2002.
[AEN02] Rossella Aiello, Antonio Esposito, and Giancarlo Nota. A Hierarchical Measure
Framework for the Evaluation of Automated Business Processes. International Jour-
nal of Software Engineering and Knowledge Engineering, 12(4):331–362, August
2002.
[ANFD04] Rossella Aiello, Giancarlo Nota, Giacomo Franco, and Maria Pia Di Gregorio.
Multiagent-based Evaluation of Workflows. In SOFSEM 2004: Theory and Practice
of Computer Science. 30th Conference on Current Trends in Theory and Practice
of Computer Science, Merin, Czech Republic. MatFyzPress (Charles University
Publishing House), January 2004.
[ASH+00] Ad Aerts, Nick Szirbik, Dieter Hammer, Jan Goossenaerts, and Hans Wortmann.
On the design of a mobile agent web for supporting virtual enterprises. In IEEE
9th International Workshops on Enabling Technologies: Infrastructure for Collabo-
rative Enterprises (WET ICE 2000), pages 236–243. IEEE CS, 14–16 March 2000.
Proceedings.
[ASS85] Harold Abelson, Gerald Jay Sussman, and Julie Sussman. Structure and Interpreta-
tion of Computer Programs. MIT Press, USA, 6 edition, 1985.
[Bae99] J. S. Bae. Integration of WorkFlow Management and Simulation. Computer and
Industrial Engineering, 37:203–206, 1999.
[BCDS01] Angela Bonifati, Fabio Casati, Umeshwar Dayal, and Ming-Chien Shan. Warehousing
Workflow Data: Challenges and Opportunities. In International Conference on Very
Large Data Bases, pages 649–652, December 2001.
[BCR94] V. R. Basili, G. Caldiera, and H. D. Rombach. The Goal Question Metric Approach.
In Encyclopedia of Software Engineering, pages 528–532. John Wiley & Sons, 1994.
[Bit95] U. S. Bititci. Modelling of Performance Measurement Systems in Manufacturing
Enterprises. International Journal of Production Economics, 42:135–147, 1995.
[BKN00] D. Bustard, P. Kawaleck, and M. Norris. System Modeling for Business Process
Improvement. Artech House Publishers, Norwood, May 2000.
[BM97] Giampio Bracchi and Gianmario Motta. Processi aziendali e Sistemi informativi.
Franco Angeli, 1997.
[Can00] Giovanni Cantone. Measure-driven Processes and Architecture for the Empirical
Evaluation of Software Technology. Journal of Software Maintenance: Research and
Practice, 12 issue 1:47–78, 2000.
[CAR03] CARNOT AG. CARNOT Process Warehouse - Controlling Guide.
http://global.carnot.ag/downloads/docs/pdf/carnot-pwh-guide.pdf, July 2003.
[CCPP95] Fabio Casati, Stefano Ceri, Barbara Pernici, and G. Pozzi. Conceptual Modeling of
Workflows. In M. P. Papazoglou, editor, Proceedings of the OOER’95, 14th Interna-
tional Object-Oriented and Entity-Relationship Modelling Conference, volume 1021
of Lecture Notes in Computer Science, pages 341–354. Springer-Verlag, December
1995.
[CD97] Surajit Chaudhuri and Umeshwar Dayal. An Overview of Data Warehousing and
OLAP Technology. SIGMOD Record, 26(1):65–74, 1997.
[CDGS01] Fabio Casati, Umesh Dayal, Daniela Grigori, and Ming-Chien Shan. Improving Busi-
ness Process Quality through Exception Understanding, Prediction, and Prevention.
In International Conference on Very Large Data Bases, pages 159–168. Morgan Kauf-
mann, September 2001.
[CHRW99] Andrzej Cichocki, Abdelsalam A. Helal, Marek Rusinkiewicz, and Darrell Woelk.
Workflow and Process Automation: Concepts and Technology. Kluwer Academic
Publishers, Boston, 2 edition, December 1999.
[CMA01] L. M. Camarinha-Matos and H. Afsarmanesh. Virtual Enterprise and Support In-
frastructures: Applying Multi-Agent System Approaches. In Multi-Agent Systems
and Application. 9th ECCAI Advanced Course, ACAI 2001 and Agent Link’s 3rd
European Agent Systems Summer School, EASSS 2001, 2001.
[CMAGL99] L. M. Camarinha-Matos, H. Afsarmanesh, C. Garita, and C. Lima. Towards an
Architecture for Virtual Enterprise. Journal of Intelligent Manufacturing, 9(2), April
1999.
[Dav93] Thomas H. Davenport. Process Innovation: Reengineering Work Through Informa-
tion Technology. Harvard Business School Press, 2 edition, 1993.
[dBKV97] Rolf A. de By, Wolfgang Klas, and Jari Veijalainen. Transaction Management Support
for Cooperative Applications. Kluwer Academic Publishers, Boston, December 1997.
[Def95] Department of Defense. ABC Guidebook - Guidebook for Using and Understanding
Activity-Based Costing. http://www.defenselink.mil/nii/bpr/bprcd/0201.htm,
1995.
[Dev00] Barry Devlin. Data Warehouse from Architecture to Implementation. Addison Wesley
Longman Inc., USA, 6 edition, 2000.
[Dur01] Edmund H. Durfee. Distributed Problem Solving and Planning. In Multi-Agent
Systems and Application. 9th ECCAI Advanced Course, ACAI 2001 and Agent Link’s
3rd European Agent Systems Summer School, EASSS 2001, 2001.
[EOG02] Johann Eder, Georg E. Olivotto, and Wolfgang Gruber. A Data Warehouse for Work-
flow Logs. In Engineering and Deployment of Cooperative Information Systems: First
International Conference, EDCIS 2002, volume 2480 of Lecture Notes in Computer
Science, pages 1–15. Springer-Verlag Heidelberg, 2002.
[EP01] Johann Eder and Euthimios Panagos. Managing Time in Workflow Systems, pages
109–131. J. Wiley & Sons, 2001.
[Fil01] FileNet. Panagon Visual WorkFlo - Application Developers Guide, 2001.
[Fis02] Layna Fischer, editor. The Workflow Handbook 2002. Future Strategies Inc., 2002.
[GE96] Herbert Groiss and Johann Eder. Bringing Workflow Systems to the Web.
http://www.ifi.uni-klu.ac.at/ISYS/JE/Projects/Workflow/pro, 1996.
[GKT98] Andreas Geppert, Markus Kradolfer, and Dimitrios Tombros. Federating heteroge-
neous workflow systems. Technical Report ifi-98.05, University of Zurich, 30, 1998.
[GR02] Matteo Golfarelli and Stefano Rizzi. Data Warehouse: Teoria e Pratica della proget-
tazione. Mc-Graw Hill, Milano, 2002.
[Hal97] Keith Hales. Workflow in Context, pages 27–32. J. Wiley & Sons, 1997.
[Ham90] J. M. Hammer. Re-engineering Work: Don’t Automate, Obliterate. Harvard Business
Review, 68(4):104–112, 1990.
[Har91] H. James Harrington. The Breakthrough Strategy for Total Quality, Productivity, and
Competitiveness. Mc-Graw Hill, New York, 1991.
[Hol94] David Hollingsworth. Workflow Management Coalition - The Workflow Reference
Model. http://www.wfmc.org/standards/docs/tc003v11.pdf, November 1994.
Document No. WfMC-TC-1003.
[IBM02] IBM HolosofX. Continuous Business Process Management
with HOLOSOFX BPM Suite and IBM MQSeries Workflow.
http://www.redbooks.ibm.com/redbooks/pdfs/sg246590.pdf, May 2002.
[IBM03] IBM. IBM WebSphere Business Integration Monitor, Version
4.2.4. ftp://ftp.software.ibm.com/software/integration/library/
specsheets/wbimonitor G325-2210-01.pdf, 2003.
[IJ03] Doug Ishigaki and Cheryl Jones. Practical Measurement in the Rational
Unified Process. http://www.therationaledge.com/content/jan 03/t looking
ForRUPMetrics di.jsp, January 2003.
[IKV01a] IKV++GmbH. Grasshopper Basics and Concepts.
http://www.grasshopper.de/download/doc/ BasicsAndConcepts2.2.pdf,
March 2001. Release 2.2.
[IKV01b] IKV++GmbH. Grasshopper Programmer’s Guide.
http://www.grasshopper.de/download/doc/pguide2.2.pdf, March 2001. Release
2.2.
[IKV01c] IKV++GmbH. Grasshopper User’s Guide. http://www.grasshopper.de/
download/doc/uguide2.2.pdf, March 2001. Release 2.2.
[Inm92] William H. Inmon. Building the Data Warehouse. John Wiley and Sons, 1992.
[ISO00] ISO. Quality Management Systems - Guidelines for Performance Improvements.
ISO 9004:2000, December 2000.
[JFJ+96] N. R. Jennings, P. Faratin, M. R. Johnson, T. J. Norman, P. O’Brien, and M. E.
Wiegand. Agent-Based Business Process Management. International Journal of
Cooperative Information Systems, 5(2 & 3):105–130, 1996.
[JFN98] N. R. Jennings, P. Faratin, and T. J. Norman. ADEPT: An Agent-Based Approach
to Business Process Management. ACM SIGMOD Record, 27(4):32–39, 1998.
[JOSC98] D. W. Judge, B. R. Odgers, J. W. Shepherdson, and Z. Cui. Agent Enhanced
Workflow. BT Technology Journal, 16(3):79–85, 1998.
[Kan95] Stephen H. Kan. Metrics and Models in Software Quality Engineering. Addison-
Wesley Pub Co, 1995.
[Kim02] Ralph Kimball. The Data Warehouse Toolkit: The Complete Guide to Dimensional
Modeling. John Wiley & Sons, USA, 2nd edition, 2002.
[KK99a] M. S. Krishnan and M. I. Kellner. Measuring Process Consistency: Implications for
Reducing Software Defects. IEEE Transactions on Software Engineering, 25(6):800–
815, 1999.
[KK99b] Peter Kueng and A. J. W. Krahn. Building a Process Performance Measurement
System: some early experiences. Journal of Scientific and Industrial Research,
58(3/4):149–159, March/April 1999.
[KMW01] Peter Kueng, Andreas Meier, and Thomas Wettstein. Performance Measurement
Systems must be engineered. Communications of the Association for Information
System, 7(3), July 2001.
[KN96] Robert S. Kaplan and David P. Norton. The Balanced Scorecard: Translating Strategy
into Action. Harvard Business School Press, USA, 1996.
[Koe97] Nicole Koerber. IT-Supported Process Management in a Public Agency, pages 121–
128. J. Wiley & Sons, 1997.
[Kue00] Peter Kueng. Process Performance Measurement System: a tool to support process-
based organizations. Total Quality Management, 11(1):67–86, January 2000.
[Law97] P. Lawrence, editor. WfMC Workflow Handbook. J. Wiley & Sons, 1997.
[Ley94] Frank Leymann. Managing business processes as an information resource. IBM
Systems Journal, 33(2):326–348, 1994.
[Ley95] Frank Leymann. Supporting business transactions via partial recovery in workflow
management systems. In The German Database Conference Datenbanksysteme für
Business, Technologie und Web, Informatik Aktuell, pages 51–70. Springer, 1995.
[LGP98] Jintae Lee, Michael Grunninger, and PIF Working Group. The PIF Process Inter-
change Format and Framework. The Knowledge Engineering Review, 13(1):91–120,
March 1998.
[LN04] Peter C. Lockemann and Jens Nimis. Flexibility through Multiagent Systems: Solu-
tion or Illusion? In SOFSEM 2004: Theory and Practice of Computer Science. 30th
Conference on Current Trends in Theory and Practice of Computer Science, Merin,
Czech Republic, Springer-Verlag, January 2004.
[LR00] Frank Leymann and Dieter Roller. Production Workflows - Concepts and Techniques.
Prentice Hall, 2000.
[LSB01] Beate List, Josef Schiefer, and Robert M. Bruckner. Measuring Knowledge with
Workflow Management Systems. In International Workshop on Theory and Appli-
cations of Knowledge Management in 12th International Workshop on Database and
Expert Systems Applications (DEXA 2001), pages 467–471, 2001.
[Mar02] Mike Marin. Business Process Management: From EAI and Workflow to BPM,
pages 133–145. Future Strategies Inc., Lighthouse Point, Florida, 2002.
[MB00] Manoel G. Mendonca and Victor R. Basili. Validation of an Approach for Improving
Existing Measurement Frameworks. IEEE Transactions on Software Engineering,
26(6):484–499, 2000.
[McG02] Carolyn McGregor. The Impact of Business Performance Monitoring on WfMC Stan-
dards, pages 51–64. Future Strategies Inc., Lighthouse Point, Florida, 2002.
[MCJ+01] John McGarry, David Card, Cheryl Jones, Beth Layman, Elisabeth Clark, Joseph
Dean, and Fred Hall. Practical Software Measurement: Objective Information for
Decision Makers. Addison-Wesley Pub Co, 2001.
[MO99] Olivera Marjanovic and Maria E. Orlowska. On Modeling and Verification of Tem-
poral Constraints in Production Workflows. Knowledge and Information Systems,
1(2):157–192, May 1999.
[MR00] Michael zur Muehlen and Michael Rosemann. Workflow-based Process Monitoring
and Controlling - Technical and Organizational Issues. In 33rd Hawaii International
Conference on System Sciences. IEEE, 2000.
[Mue01] Michael zur Muehlen. Process-driven Management Information Systems - Combin-
ing Data Warehouses and Workflow Technology. In ICECR-4, November 2001.
[NGP95] A. D. Neely, M. J. Gregory, and K. W. Platts. Performance Measurement System
Design. International Journal of Operations and Production Management, 15(4):80–
116, 1995.
[Ora01] Oracle. Oracle Workflow and Java Technical White Paper release 2.6.2.
http://otn.oracle.com/products/ias/workflow/release262/ wfjavawp.pdf,
November 2001.
[OTS+99] B. R. Odgers, S. G. Thompson, J. W. Shepherdson, Z. Cui, D. Judge, and P. D.
O’Brien. Technologies for Intelligent Workflows: Experiences and Lessons. In Agent-
Based Systems in the Business Context in AAAI 1999 Workshop, pages 63–67, July
1999.
[PCCW04] Mark C. Paulk, Bill Curtis, Mary Beth Chrissis, and Charles V.
Weber. Capability Maturity Model for Software - Version 1.1.
http://www.sei.cmu.edu/pub/documents/93.reports/pdf/tr24.93.pdf, 2004.
[Ple02] Charles Plesums. Introduction to Workflow, pages 19–38. Future Strategies Inc.,
Lighthouse Point, Florida, 2002.
[RDO01a] Alessandro Ricci, Enrico Denti, and Andrea Omicini. Agent coordination infrastruc-
tures for virtual enterprises and workflow management. In Matthias Klusch and
Franco Zambonelli, editors, Cooperative Information Agents V, 5th International
Workshop, CIA 2001, Modena, Italy, September 6-8, 2001, Proceedings, volume 2182
of Lecture Notes in Computer Science, pages 235–246. Springer, 2001.
[RDO01b] Alessandro Ricci, Enrico Denti, and Andrea Omicini. The TuCSoN coordination in-
frastructure for Virtual Enterprises. In IEEE 10th International Workshops on En-
abling Technologies: Infrastructure for Collaborative Enterprises (WET ICE 2001),
pages 348–353, 3rd International Workshop “Web-based Infrastructures and Coor-
dination Architectures for Collaborative Enterprises”, MIT, Cambridge, MA, USA,
20–22 June 2001. IEEE CS. Proceedings.
[Rui97] Dave Ruiz. A Workflow Recipe for Healthy Customer Service, pages 117–120. J.
Wiley & Sons, 1997.
[Saw00] O. A. El Sawy. Redesigning Enterprise Processes for E-Business. McGraw Hill, 2000.
[SGG02] Abraham Silberschatz, Peter Galvin, and Greg Gagne. Operating System Concepts.
J. Wiley & Sons, 6th edition, 2002.
[Sil03] Bruce Silver. FileNet P8: Event-Driven Business Process Management, June 2003.
[SM01] Alec Sharp and Patrick McDermott. Workflow Modeling. Artech House Inc., 2001.
[Smi80] Reid G. Smith. The Contract Net Protocol: High-Level Communication and Control
in a Distributed Problem Solver. IEEE Transactions on Computers, C-29(12):1104–
1113, December 1980.
[Sta03] Staffware. Staffware Process Suite. http://www.staffware.com/Downloads/
Main Downloads/ProcessSuiteBrochure.pdf, 2003.
[SW02] Khodakaram Salimifard and Mike Wright. Modelling and Performance Analysis of
Workflow Management Systems Using Timed Hierarchical Coloured Petri Nets. In In-
ternational Conference on Enterprise Information Systems, pages 843–846, 2002.
[Ult01] Ultimus. Ultimus version 5.0 - Product Guide.
http://www.quixa.com/library/docs/QSLIB-ULT-102.pdf, June 2001.
[Vla97] N. Vlachantonis. Workflow Applications within Business Organizations, pages 41–48.
J. Wiley & Sons, 1997.
[VLP95] J. Veijalainen, A. Lehtola, and O. Pihlajamaa. Research Issues in Workflow Sys-
tems. In 8th ERCIM Database Research Group Workshop on Database Issues and
Infrastructure in Cooperative Information Systems, pages 51–70. ERCIM, August
1995.
[WfM98a] WfMC. Audit Data Specification. http://www.wfmc.org/standards/docs/
TC-1015 v11 1998.pdf, September 1998. Document No. WFMC-TC-1015.
[WfM98b] WfMC. Interface 1: Process Definition Interchange Process
Model. http://www.wfmc.org/standards/docs/TC-1016-P v11
IF1 Process definition Interchange.pdf, August 1998. Document No.
WfMC TC-1016-P.
[WfM98c] WfMC. Workflow Standard - Interoperability Internet e-mail MIME Binding.
http://www.wfmc.org/standards/docs/if2v20.pdf, September 1998. Document
No. WFMC-TC-1018.
[WfM02] WfMC. Workflow Process Definition Interface - XML Process Definition Lan-
guage. http://www.wfmc.org/standards/docs/TC-1025 10 xpdl 102502.pdf,
October 2002. Document No. WFMC-TC-1025.
[WJ95] Michael Wooldridge and N. R. Jennings. Intelligent Agents: Theory and Practice.
The Knowledge Engineering Review, 10(2):115–152, 1995.
[WKH97] M. Wahl, S. Kille, and T. Howes. RFC 2253 - Lightweight Directory Ac-
cess Protocol (v3): UTF-8 String Representation of Distinguished Names.
http://www.faqs.org/rfcs/rfc2253.html, December 1997.
[Woo02] Michael Wooldridge. Multiagent Systems. John Wiley & Sons, March 2002.
[WWWD96] Dirk Wodtke, Jeanine Weissenfels, Gerhard Weikum, and Angelika Kotz Dittrich. The
Mentor Project: Steps Toward Enterprise-Wide Workflow Management. In ICDE,
pages 556–565, 1996.