TRANSCRIPT
January 2004 WOSP’2004 1
UML and SPE, Part II: The UML Performance Profile
Dr. Dorina Petriu
Carleton University, Department of Systems and Computer Engineering
Ottawa, Canada, K1S 5B6
http://www.sce.carleton.ca/faculty/petriu.html
January 2004 WOSP’2004 tutorial 2
Outline
- UML Profile for Schedulability, Performance and Time
  - General Resource Model
  - General Time Modeling
  - General Concurrency Model
  - Schedulability Profile
  - Performance Profile
- Applying the Performance Profile
  - Generation of performance models from UML specs
  - Validation
  - Directions for future work
January 2004 WOSP’2004 tutorial 3
UML Profile for Schedulability, Performance and Time
January 2004 WOSP’2004 tutorial 4
UML Profiles
- A package of related specializations of general UML concepts that capture domain-specific variations and usage patterns
- A domain-specific interpretation of UML
- Fully conformant with the UML standard:
  - does not extend the UML metamodel
  - uses only standard extension mechanisms: stereotypes, tagged values, constraints
  - additional semantic constraints cannot contradict the general UML semantics; a profile stays within the “semantic envelope” defined by the standard
(Figure: Profile X and Profile Y nested inside the Standard UML Semantics.)
January 2004 WOSP’2004 tutorial 5
Guiding Principles for the SPT Profile
- Adopted as an OMG standard, Version 1 (OMG document formal/03-09-01.pdf, http://www.omg.org/docs/formal/03-09-01.pdf)
- Ability to specify quantitative information directly in UML models: key to quantitative analysis and predictive modeling
- Flexibility:
  - users can model their real-time systems using modeling approaches and styles of their own choosing
  - open to existing and new analysis techniques
- Facilitate the use of analysis methods:
  - eliminate the need for a deep understanding of analysis methods
  - as much as possible, automate the generation of analysis models and the analysis process itself
- Use analysis results for:
  - predicting system characteristics (detect problems early)
  - analyzing an existing system (sizing, capacity planning)
January 2004 WOSP’2004 tutorial 6
UML Profile Mechanism
- The UML profile mechanism provides a way of specializing the concepts defined in the UML standard according to a specific domain model:
  - a stereotype can be viewed as a subclass of an existing UML concept
  - a tagged value associated with the stereotype can be viewed as an attribute
- The domain model often shows associations between domain concepts, but the UML extension mechanisms do not provide a convenient way of specifying new associations in the metamodel. The following general techniques are used:
  - some domain associations map directly to existing associations in the metamodel
  - some domain composition associations map to tags associated with the stereotype
  - in some cases, a domain association is represented by using the «taggedValue» relationship provided by the UML profile mechanisms
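The subclass/attribute reading of the profile mechanism can be sketched in code. This is only an analogy (the classes below are hypothetical, not part of any profile implementation): the stereotype behaves like a subclass of a UML metaclass, and its tagged values behave like attributes of that subclass.

```python
# Hypothetical sketch of the profile mechanism as a class hierarchy:
# a stereotype viewed as a subclass, tagged values viewed as attributes.

class Node:                       # stand-in for the UML metaclass Node
    def __init__(self, name):
        self.name = name

class PAhost(Node):               # the «PAhost» stereotype, viewed as a subclass
    def __init__(self, name, PAschdPolicy=None, PArate=None):
        super().__init__(name)
        # tagged values of the stereotype, viewed as attributes
        self.PAschdPolicy = PAschdPolicy
        self.PArate = PArate

# the annotated node MyProcessor {PAschdPolicy = 'FIFO'} from the next slide
cpu = PAhost("MyProcessor", PAschdPolicy="FIFO")
```

Every `PAhost` is still a `Node`, just as a stereotyped model element remains a valid instance of its base metaclass.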
January 2004 WOSP’2004 tutorial 7
Specializing UML: Stereotypes
- Possible to add semantics to any standard UML concept
- Must not violate standard UML semantics
Example: «PAhost» is a stereotype of the UML metaclass Node with added semantics: a processing resource with a scheduling policy, processing rate, context-switching time, utilization, throughput, etc. An annotated node looks like:
  «PAhost»
  MyProcessor {PAschdPolicy = ‘FIFO’}
where {PAschdPolicy = ‘FIFO’} is a tagged value associated with the «PAhost» stereotype.
January 2004 WOSP’2004 tutorial 8
General Process for Model Analysis
(Figure: the general analysis process spans the software domain and the schedulability/performance domain.)
January 2004 WOSP’2004 tutorial 9
UML SPT Profile Structure
(Package diagram, connected by «import» dependencies:)
- Infrastructure models — the General Resource Modeling framework: «profile» RTresourceModeling, imported by «profile» RTconcurrencyModeling and «profile» RTtimeModeling
- Analysis models, which import the infrastructure profiles: «profile» SAProfile (schedulability), «profile» PAprofile (performance) and «profile» RSAprofile
- «modelLibrary» RealTimeCORBAModel
January 2004 WOSP’2004 tutorial 10
SPT Profile: General Resource Model
January 2004 WOSP’2004 tutorial 11
Quality of Service Concepts
- Quality of Service (QoS): a specification (usually quantitative) of how well a particular service is (to be) performed
  - e.g. throughput, capacity, response time, jitter
- Resource: an element whose service capacity is limited, directly or indirectly, by the finite capacities of the underlying physical elements
- The specification of a resource can include:
  - offered QoS: the QoS that it provides to its clients
  - required QoS: the QoS it requires from other components to support its QoS obligations
- Key analysis question: is the required QoS satisfied by the offered QoS?
- Why it is difficult to answer: the QoS offered by a resource depends not only on the resource capacity itself, but also on the load (i.e., on the competition from other clients for the same resource).
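The load dependence is the crux. As a minimal illustration (not part of the profile, and using the single-server M/M/1 response-time formula as a stand-in for a real analysis), the response time a resource can offer degrades as its utilization rises, so the required-vs-offered check cannot be made from the raw capacity alone:

```python
# Illustrative sketch: offered QoS (here, mean response time) depends on load.

def offered_response_time(service_time, utilization):
    """Mean response time of a single queueing resource (M/M/1: S / (1 - U))."""
    assert 0 <= utilization < 1
    return service_time / (1.0 - utilization)

def qos_satisfied(required_resp_time, service_time, utilization):
    """Key analysis question: does the offered QoS meet the required QoS?"""
    return offered_response_time(service_time, utilization) <= required_resp_time

# A 20 ms service easily meets a 100 ms requirement at low load...
assert qos_satisfied(100.0, 20.0, 0.2)
# ...but the same resource misses it once competition pushes utilization to 90%.
assert not qos_satisfied(100.0, 20.0, 0.9)
```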
January 2004 WOSP’2004 tutorial 12
The General Resource Model
(Package diagram of the General Resource Model:)
- CoreResourceModel
- ResourceUsageModel, containing StaticUsageModel and DynamicUsageModel
- CausalityModel
- ResourceTypes
- ResourceManagement
- RealizationModel
January 2004 WOSP’2004 tutorial 13
Core Resource Model (Domain Model)
A parallel hierarchy of instances and descriptors.
(Class diagram: Descriptor is the type of Instance; Resource, a kind of Descriptor, is the type of ResourceInstance; ResourceService is the type of ResourceServiceInstance. A Resource has one or more offeredServices (ResourceServices), and a ResourceInstance offers the corresponding ResourceServiceInstances. Both carry an offeredQoS, expressed as QoSvalues typed by QoScharacteristics.)
January 2004 WOSP’2004 tutorial 14
Basic Resource Usage Model
(Class diagram: an AnalysisContext groups ResourceUsages, ResourceInstances and UsageDemands. A ResourceUsage, specialized into StaticUsage and DynamicUsage, relates usedResources (ResourceInstances) and usedServices (ResourceServiceInstances); a UsageDemand supplies the workload, and usage is tied to EventOccurrences from the CausalityModel.)
Resource usage describes how a set of clients uses a set of resources and their instances:
- static usage: who uses what
- dynamic usage: who uses what and how (order, timing, etc.)
January 2004 WOSP’2004 tutorial 15
Static Usage Model
Static usage: who uses what (order does not matter).
(Class diagram: a Client, an Instance from the CoreResourceModel, has usedResources (ResourceInstances) and a requiredQoS expressed as QoSvalues typed by QoScharacteristics; the ResourceInstances carry the offeredQoS.)
January 2004 WOSP’2004 tutorial 16
Basic Causality Loop
Used in modeling dynamic scenarios: it captures the essentials of the cause-effect chain in the behaviour of run-time instances.
(Class diagram: a StimulusGeneration causes a Stimulus; the Stimulus is received by a receiver Instance through a StimulusReception, whose effect is a Scenario; the Scenario runs as methodExecutions on executionHost Instances and may in turn contain StimulusGenerations that produce further Stimuli, closing the loop. StimulusGeneration and StimulusReception are EventOccurrences.)
January 2004 WOSP’2004 tutorial 17
Dynamic Usage Model
Dynamic usage: who uses what and how (order, timing, etc.).
(Class diagram: a DynamicUsage has usedResources (ResourceInstances) and usedServices (ResourceServiceInstances), with offeredQoS and requiredQoS given as QoSvalues. The behaviour is a Scenario from the CausalityModel, composed of ordered steps; each step is an ActionExecution linked to its predecessors and successors.)
ActionExecution is the only modification to the UML metamodel proposed in the profile.
January 2004 WOSP’2004 tutorial 18
Proposed Changes to the UML Metamodel
The changes are reflected in UML 1.5:
- addition of a new metamodel element called ActionExecution
- replacement of the association between Action and Stimulus by a semantically similar association between Stimulus and ActionExecution
- addition of two new associations, between Action and ActionExecution and between ActionExecution and Instance, respectively
(Metamodel fragment: a Stimulus has a sender Instance, a receiver Instance and argument Instances; its dispatchExecution is an ActionExecution, which executes an Action on a host Instance. All of these are ModelElements.)
January 2004 WOSP’2004 tutorial 19
Categories of Resources
(Class diagram: ResourceInstance from the CoreResourceModel is categorized along three dimensions: protectionKind (ProtectedResource, UnprotectedResource), activenessKind (ActiveResource, PassiveResource) and purposeKind (Processor, CommunicationResource, Device).)
Categorization of resources: a given resource instance may belong to more than one type.
January 2004 WOSP’2004 tutorial 20
Exclusive Use Resources and Actions
(Class diagram: a ProtectedResource offers an ExclusiveService (a ResourceServiceInstance), together with an AcquireService (with an isBlocking : Boolean attribute, governed by an AccessControlPolicy) and a ReleaseService; these services are invoked by ActionExecutions from the DynamicUsageModel.)
To use an exclusive service of a protected resource, the following actions must be executed before and after it:
- acquire service action (according to an access control policy)
- release service action
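The acquire/release bracket can be sketched in a few lines. The class below is hypothetical (the profile defines the concepts, not an API); it shows a non-blocking acquire against a capacity-limited protected resource, with the access-control policy reduced to a simple unit count:

```python
# Sketch of the acquire/release protocol for a protected resource
# (hypothetical class; a real access control policy would also decide
# queueing order among blocked clients).

class ProtectedResource:
    def __init__(self, capacity=1):
        self.capacity = capacity
        self.in_use = 0

    def acquire(self, is_blocking=True):
        # «GRMacquire»: grant access only while capacity units remain;
        # this sketch only models the non-blocking outcome (True/False),
        # a blocking acquire would wait instead of returning False.
        if self.in_use < self.capacity:
            self.in_use += 1
            return True
        return False

    def release(self):
        # «GRMrelease»: return the unit so another client can acquire it
        assert self.in_use > 0
        self.in_use -= 1

buf = ProtectedResource(capacity=1)
assert buf.acquire()                         # exclusive use granted
assert not buf.acquire(is_blocking=False)    # a second client is refused
buf.release()
assert buf.acquire()                         # available again after release
```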
January 2004 WOSP’2004 tutorial 21
Resource Management Model
(Class diagram: a ResourceBroker mediates client Instances’ access to ResourceInstances according to an AccessControlPolicy from the ResourceTypes package; a ResourceManager creates and tracks its managedResources according to a ResourceControlPolicy.)
Utility package for modeling resource management services:
- resource broker: administers the access control policy
- resource manager: creates and keeps track of resources according to a resource control policy
January 2004 WOSP’2004 tutorial 22
SPT Profile: General Time Modeling
January 2004 WOSP’2004 tutorial 23
General Time Model
- Concepts for modeling time and time values (package TimeModel)
- Concepts for modeling events in time and time-related stimuli (TimedEvents)
- Concepts for modeling timing mechanisms, e.g. clocks and timers (TimingMechanisms)
- Concepts for modeling timing services, such as those found in real-time operating systems (TimingServices)
January 2004 WOSP’2004 tutorial 24
Physical and Measured Time
(Class diagram: PhysicalTime is an ordered set of PhysicalInstants; a Duration has a start and an end PhysicalInstant. A Clock (from TimingMechanisms) produces TimeValues (kind: discrete or dense) as measurements of instants; a TimeInterval, with a start and an end TimeValue and a referenceClock, is the measurement of a Duration.)
- dense time: corresponds to the continuous model of physical time (represented by real numbers)
- discrete time: time broken into quanta (represented by integers)
January 2004 WOSP’2004 tutorial 25
Timing Mechanisms: Clock and Timer
(Class diagram: a TimingMechanism is a ResourceInstance with attributes stability, drift and skew; operations set(time : TimeValue), get() : TimeValue, reset(), start() and pause(); a resolution, a currentValue and a maximalValue, all TimeValues; and a referenceClock. A Clock additionally has an offset and an accuracy and generates ClockInterrupts; a Timer (isPeriodic : Boolean) has a TimeInterval as its duration and generates Timeouts. Timeout and ClockInterrupt are TimedEvents carrying timestamps and an origin.)
January 2004 WOSP’2004 tutorial 26
Timed Stimuli
(Class diagram: a TimedStimulus is a Stimulus (from the CausalityModel) carrying a TimeValue. Timeout and ClockInterrupt are modeled as StimulusGenerations whose effect is the stimulus they cause; these two associations are derived from the general association between EventOccurrence and Stimulus.)
In UML, events are assumed to occur instantaneously.
January 2004 WOSP’2004 tutorial 27
Timed Events and Timed Actions
(Class diagram: a TimedEvent is an EventOccurrence with one or more timestamps (TimeValues). A TimedAction is a Scenario with start and end TimeValues and a duration given as a TimeInterval; Delay is a special kind of timed action whose only purpose is to consume time.)
- Timed event: has one or more timestamps (according to different clocks)
- Timed action: an action that takes a certain time to complete (with known start time, end time and duration)
January 2004 WOSP’2004 tutorial 28
Notation: Timing Marks and Constraints
(Sequence diagram: a Caller sends a call message to an Operator, whose callHandler action block eventually produces an ack.)
- A timing mark identifies the time of an event occurrence:
  - on messages: sendTime(), receiveTime()
  - on action blocks: startTime(), endTime()
- Timing marks in the example: call.sendTime(), call.receiveTime(), callHandler.startTime(), callHandler.endTime(), ack.sendTime(), ack.receiveTime()
- A timing constraint relates timing marks, e.g.: {ack.receiveTime() - call.sendTime() < 10 sec}
January 2004 WOSP’2004 tutorial 29
Specifying Time Values
Time values can be represented by a special stereotype of Value («RTtimeValue») in different formats, e.g.:
- 12:04 (time of day)
- 5.3, ‘ms’ (time interval)
- 2000/10/27 (date)
- Wed (day of week)
- $param, ‘ms’ (parameterized value)
- ‘poisson’, 5.4, ‘sec’ (time value with a Poisson distribution)
- ‘histogram’, 0, 0.28, 1, 0.44, 2, 0.28, 3, ‘ms’ (a histogram: probability 0.28 on [0,1) ms, 0.44 on [1,2) ms, 0.28 on [2,3) ms)
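A tool consuming these annotations has to recognize which format a given string uses. The heuristic classifier below is a sketch of that step (the patterns and function name are illustrative; the profile only defines the textual formats, not a parsing API):

```python
import re

# Sketch: classify «RTtimeValue» strings by format (heuristic patterns).

def classify_time_value(s):
    if re.fullmatch(r"\d{1,2}:\d{2}", s):
        return "time of day"                     # e.g. 12:04
    if re.fullmatch(r"\d{4}/\d{1,2}/\d{1,2}", s):
        return "date"                            # e.g. 2000/10/27
    if s in ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"):
        return "day of week"
    if re.fullmatch(r"\$\w+, '\w+'", s):
        return "parameterized value"             # e.g. $param, 'ms'
    if s.startswith("'poisson'") or s.startswith("'histogram'"):
        return "distribution"
    if re.fullmatch(r"[\d.]+, '\w+'", s):
        return "time interval"                   # e.g. 5.3, 'ms'
    return "unknown"

assert classify_time_value("12:04") == "time of day"
assert classify_time_value("5.3, 'ms'") == "time interval"
assert classify_time_value("$param, 'ms'") == "parameterized value"
assert classify_time_value("'poisson', 5.4, 'sec'") == "distribution"
```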
January 2004 WOSP’2004 tutorial 30
Specifying Arrival Patterns
Method for specifying standard arrival pattern values:
- Bounded: ‘bounded’, <min-interval>, <max-interval>
- Bursty: ‘bursty’, <burst-interval>, <max.no.events>
- Irregular: ‘irregular’, <interarrival-time> [, <interarrival-time>]*
- Periodic: ‘periodic’, <period> [, <max-deviation>]
- Unbounded: ‘unbounded’, <probability-distribution>
Probability distributions supported: Bernoulli, Binomial, Exponential, Gamma, Geometric, Histogram, Normal, Poisson, Uniform.
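Two of these patterns can be made concrete by generating arrival instants from them. The functions below are illustrative (names and signatures are mine, not the profile's): one for ‘periodic’ with an optional max-deviation, one for ‘unbounded’ with Exponential interarrival times, i.e. a Poisson arrival process.

```python
import random

# Sketch: turning two standard arrival-pattern values into arrival instants.

def periodic_arrivals(period, n, max_deviation=0.0, rng=random):
    """'periodic', <period> [, <max-deviation>]: instants k*period, each
    jittered by up to +/- max_deviation."""
    return [k * period + rng.uniform(-max_deviation, max_deviation)
            for k in range(n)]

def unbounded_poisson_arrivals(mean_interarrival, n, rng=random):
    """'unbounded' with an Exponential interarrival distribution
    (a Poisson arrival process)."""
    t, arrivals = 0.0, []
    for _ in range(n):
        t += rng.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)
    return arrivals

ticks = periodic_arrivals(100.0, 4)   # a strict 100 ms period, no deviation
assert ticks == [0.0, 100.0, 200.0, 300.0]
```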
January 2004 WOSP’2004 tutorial 31
General Concurrency Model
A rich domain model of concurrently executing and communicating objects, used in applications, operating systems, etc.
January 2004 WOSP’2004 tutorial 32
Schedulability Profile
January 2004 WOSP’2004 tutorial 33
Background
- Schedulability analysis: applied to hard real-time systems to find a schedule that will allow the system to meet all its deadlines.
- Schedulable entities: granular parts of an application that contend for use of the execution resource that executes the schedule.
- Schedule: an assignment of all the schedulable entities in the system to the available processors, produced by the scheduler.
- Scheduler: implements scheduling algorithms and resource arbitration policies.
- Scheduling policy: a methodology used to establish the order of schedulable entity (e.g. process or thread) execution. Examples: rate monotonic, earliest deadline first, minimum laxity first, maximize accrued utility, minimum slack time.
- Scheduling mechanism: an implementation technique used by a scheduler to decide the order in which threads are chosen for execution. Examples: fixed-priority schedulers and earliest-deadline schedulers.
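For the rate-monotonic policy named above, the simplest schedulability check is the classic Liu & Layland utilization bound. The sketch below applies it to a hypothetical task set (the numbers are illustrative, not from the telemetry example that follows); it is one of the analysis methods the profile is meant to feed, not part of the profile itself.

```python
# Sketch: rate-monotonic schedulability test via the Liu & Layland
# utilization bound (a sufficient, not necessary, condition).

def rm_utilization_test(tasks):
    """tasks: list of (execution_time, period) pairs, same time unit.
    Returns (utilization, bound, passes)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)   # n(2^(1/n) - 1), ~0.693 as n grows
    return utilization, bound, utilization <= bound

# Hypothetical periodic tasks: (C, T) in ms
u, bound, ok = rm_utilization_test([(10, 100), (15, 60), (20, 200)])
assert ok    # U = 0.45, below the n=3 bound of about 0.78
```

Task sets that fail this test may still be schedulable; an exact response-time analysis would then be needed.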
January 2004 WOSP’2004 tutorial 34
Schedulability core domain model
January 2004 WOSP’2004 tutorial 35
Example: Telemetry System Specification
January 2004 WOSP’2004 tutorial 36
Real-time Situation Modeled as SD
January 2004 WOSP’2004 tutorial 37
Real-time Situation Modeled as CD
(Collaboration diagram stereotyped «SASituation». Objects: Sensors : SensorInterface; TelemetryGatherer : DataGatherer, TelemetryProcessor : DataProcessor and TelemetryDisplayer : DataDisplayer, all «SASchedulable»; Display : DisplayInterface; TGClock : Clock; and SensorData : RawDataStorage, an «SAResource» with {SACapacity=1, SAAccessControl=PriorityInheritance}.)
The clock fires three periodic «SATrigger»s, each with an «SAResponse» deadline; the SAWorstCase values, marked “Result” in the slide, are outputs of the schedulability analysis:
- A.1: gatherData(), with A.1.1: main() and A.1.1.1: writeStorage(); «SATrigger» {SASchedulable=$R1, RTat=(‘periodic’,100,‘ms’)}, «SAResponse» {SAAbsDeadline=(100,‘ms’)}; «SAAction» {SAPriority=2, SAWorstCase=(93,‘ms’), RTduration=(33.5,‘ms’)}, with a nested «SAAction» {RTstart=(16.5,‘ms’), RTend=(33.5,‘ms’)}
- B.1: filterData(), with B.1.1: main() and B.1.1.1: readStorage(); «SATrigger» {SASchedulable=$R2, RTat=(‘periodic’,60,‘ms’)}, «SAResponse» {SAAbsDeadline=(60,‘ms’)} and {SAPriority=1, SAWorstCase=(50.5,‘ms’), RTduration=(12.5,‘ms’)}, with a nested «SAAction» {RTstart=(3,‘ms’), RTend=(5,‘ms’)}
- C.1: displayData(), with C.1.1: main() and C.1.1.1: readStorage(); «SATrigger» {SASchedulable=$R3, RTat=(‘periodic’,200,‘ms’)}, «SAResponse» {SAAbsDeadline=(200,‘ms’)} and {SAPriority=3, SAWorstCase=(177,‘ms’), RTduration=(46.5,‘ms’)}, with a nested «SAAction» {RTstart=(10,‘ms’), RTend=(31.5,‘ms’)}
January 2004 WOSP’2004 tutorial 38
Performance Profile
January 2004 WOSP’2004 tutorial 39
Background
- Scenarios define execution paths whose end points are externally visible. QoS requirements can be placed on scenarios.
- Each scenario is executed by a workload:
  - open workload: requests arriving in some predetermined pattern
  - closed workload: a fixed number of active or potential users or jobs
- Scenario steps: the elements of a scenario, joined by predecessor-successor relationships which may include forks, joins and loops. A step may be an elementary operation or a whole sub-scenario.
- Resources are used by scenario steps. Quantitative resource demands for each step must be given by performance annotations.
- Important note: the main reason we build and solve performance models is to compute the additional delays due to the competition for resources among concurrent users!
- Performance results for a system include resource utilizations, waiting times, response times and throughputs.
- Performance analysis applies to real-time systems with stochastic characteristics and soft deadlines (using mean-value analysis methods).
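Two elementary operational laws make the open/closed distinction concrete; the numbers below are illustrative, not from any example system in this tutorial, and the function names are mine:

```python
# Sketch: basic operational laws behind open and closed workload analysis.

def open_workload_utilization(arrival_rate, service_demand):
    """Utilization law, U = X * D: for a stable open workload the
    throughput X equals the arrival rate."""
    return arrival_rate * service_demand

def closed_workload_response_time(population, throughput, think_time):
    """Interactive response time law for a closed workload: R = N / X - Z."""
    return population / throughput - think_time

# 30 requests/s, each demanding 20 ms of service -> 60% utilization
u = open_workload_utilization(30.0, 0.020)
assert abs(u - 0.6) < 1e-9

# 10 users, 4 completions/s, 2 s think time -> 0.5 s response time
assert closed_workload_response_time(10, 4.0, 2.0) == 0.5
```

These laws give averages only; the queueing delays caused by contention (the "additional delays" above) are what the LQN and QN models in the rest of the tutorial compute.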
January 2004 WOSP’2004 tutorial 40
Performance Domain Model
(Class diagram: a PerformanceContext groups Workloads, PScenarios and PResources. A Workload (responseTime, priority), specialized into ClosedWorkload (population, externalDelay) and OpenWorkload (occurrencePattern), drives a PScenario (hostExecutionDemand, responseTime). A PScenario has a root PStep; PSteps (probability, repetition, delay, operations, interval, executionTime) are connected by ordered predecessor/successor associations and may be refined into sub-scenarios. Steps use PResources (utilization, schedulingPolicy, throughput); a scenario «deploys» on a host PProcessingResource (processingRate, contextSwitchTime, priorityRange, isPreemptible), and PPassiveResource (waitingTime, responseTime, capacity, accessTime) covers the passive case.)
January 2004 WOSP’2004 tutorial 41
Specifying Performance Values
A complex structured string with the following format:
  <performance-value> ::= “(” <kind-of-value> “,” <modifier> “,” <time-value> “)”
where:
  <kind-of-value> ::= ‘req’ | ‘assm’ | ‘pred’ | ‘msr’
    (required, assumed, predicted, measured)
  <modifier> ::= ‘mean’ | ‘sigma’ | ‘kth-mom’, <Integer> | ‘max’ | ‘percentile’, <Real> | ‘dist’
  <time-value> is a time value described by the RTtimeValue type
A single characteristic may combine more than one performance value:
  <PA-characteristic> ::= <performance-value> [<performance-value>]*
Examples:
  {PAdemand = (‘pred’, ‘mean’, (20, ‘ms’))}
  {PArespTime = (‘req’, ‘mean’, (1, ‘sec’)) (‘pred’, ‘mean’, $RespT)}
In the second example, the first value is the requirement and the second the predicted analysis result.
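A tolerant parser for the <performance-value> format above can be sketched in a few lines (illustrative only; a real profile tool would implement the full grammar, including the structured <modifier> and <time-value> alternatives):

```python
import re

# Sketch: parse a <performance-value> string such as "('pred', 'mean', (20, 'ms'))".

KINDS = {"req", "assm", "pred", "msr"}   # required, assumed, predicted, measured

def parse_performance_value(s):
    """Split a performance value into kind, modifier and the raw time-value;
    the time-value part is left unparsed (it follows the RTtimeValue formats)."""
    m = re.fullmatch(r"\(\s*'(\w+)'\s*,\s*'([\w-]+)'\s*,\s*(.+)\)", s.strip())
    if not m or m.group(1) not in KINDS:
        raise ValueError("not a performance value: " + s)
    kind, modifier, time_value = m.groups()
    return {"kind": kind, "modifier": modifier, "time-value": time_value.strip()}

pv = parse_performance_value("('pred', 'mean', (20, 'ms'))")
assert pv["kind"] == "pred"
assert pv["time-value"] == "(20, 'ms')"
```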
January 2004 WOSP’2004 tutorial 42
Performance Stereotypes (1)

Stereotype       Applies to                               Tags                      Description
«PAclosedLoad»   Action, ActionExecution, Stimulus,       PArespTime [0..*]         A closed workload
                 Message, Method, ...                     PApriority [0..1]
                                                          PApopulation [0..1]
                                                          PAextDelay [0..1]
«PAcontext»      Collaboration,                           (none)                    A performance analysis
                 CollaborationInstanceSet,                                          context
                 ActivityGraph
«PAhost»         Classifier, Node, ClassifierRole,        PAutilization [0..*]      A processor
                 Instance, Partition                      PAschdPolicy [0..1]
                                                          PArate [0..1]
                                                          PActxtSwT [0..1]
                                                          PAprioRange [0..1]
                                                          PApreemptible [0..1]
                                                          PAthroughput [0..1]
«PAopenLoad»     Action, ActionExecution, Stimulus,       PArespTime [0..*]         An open workload
                 Message, Method, ...                     PApriority [0..1]
                                                          PAoccurrence [0..1]
January 2004 WOSP’2004 tutorial 43
Performance Stereotypes (2)

Stereotype       Applies to                               Tags                      Description
«PAresource»     Classifier, Node, ClassifierRole,        PAutilization [0..*]      A passive resource
                 Instance, Partition                      PAschdPolicy [0..1]
                                                          PAcapacity [0..1]
                                                          PAmaxTime [0..1]
                                                          PArespTime [0..1]
                                                          PAwaitTime [0..1]
                                                          PAthroughput [0..1]
«PAstep»         Message, ActionState, Stimulus,          PAdemand [0..1]           A step in a scenario
                 SubactivityState                         PArespTime [0..1]
                                                          PAprob [0..1]
                                                          PArep [0..1]
                                                          PAdelay [0..1]
                                                          PAextOp [0..1]
                                                          PAinterval [0..1]
January 2004 WOSP’2004 tutorial 44
Applying the Performance Profile
January 2004 WOSP’2004 tutorial 45
What is represented in the UML model?
A UML model should represent the following aspects in order to support early performance analysis:
- key use cases, described by representative scenarios
  - frequently executed scenarios with performance constraints
  - performance requirements for each scenario can be defined by the user
  - examples: end-to-end response time, throughput, jitter, etc.
- resources used by each scenario
  - reason: contention for resources has a strong performance impact
  - resources may be active or passive, physical or logical, hardware or software
  - examples: processor, disk, process, software server, lock, buffer
  - quantitative resource demands must be given for each scenario step: how much? how many times?
- workload intensity for each scenario (scenario set)
  - open workload: arrival rate of requests for the scenario
  - closed workload: number of simultaneous users executing the scenario
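The "how much? how many times?" questions reduce to a per-step demand and a repetition count, and the total host demand of a scenario is their product summed over the steps. A minimal sketch (step names and numbers are illustrative, loosely echoing the video scenario that follows):

```python
# Sketch: aggregate CPU demand of a scenario from per-step annotations.

steps = [
    # (step name, mean CPU demand in ms — "how much?", repetitions — "how many times?")
    ("getImage",   1.5, 1),
    ("passImage",  0.5, 1),
    ("storeImage", 2.0, 1),
]

def total_demand(steps, scenario_repetitions=1):
    """Total host demand = scenario repetitions * sum(step demand * step reps)."""
    return scenario_repetitions * sum(d * rep for _, d, rep in steps)

assert total_demand(steps) == 4.0                            # one image
assert total_demand(steps, scenario_repetitions=10) == 40.0  # e.g. 10 frames per cycle
```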
January 2004 WOSP’2004 tutorial 46
Building Security System: Selected Use Cases
(Use case diagram. Actors: User, Manager, Database, Video Camera. Use cases: Access control, which «includes» Log user entry/exit; Acquire/store video; Manage access rights.)
January 2004 WOSP’2004 tutorial 47
Deployment Diagram
(Deployment diagram with the following elements: processors «PAhost» ApplicCPU and «PAhost» DB_CPU, connected by a «PAresource» LAN; software components AccessControl (the Access Controller), VideoAcquisition (AquireProc and StoreProc), BufferManager and Database; and the passive resources «PAresource» SecurityCard Reader, «PAresource» DoorLock Actuator, «PAresource» Video Camera, «PAresource» Disk {PAcapacity=2} and «PAresource» Buffer {PAcapacity=$Nbuf}.)
January 2004 WOSP’2004 tutorial 48
Acquire/Store Video Scenario
(«PAcontext» sequence diagram. Participants: Video Controller, AcquireProc {PAcapacity=2}, BufferManager {PAcapacity=$Bufs}, StoreProc and Database, all «PAresource».)
- The Video Controller runs a «PAclosedLoad» with {PApopulation = 1, PAinterval = ((‘req’, ‘percentile’, 95, (1, ‘s’)), (‘pred’, ‘percentile’, 95, $Cycle))}; the 95th-percentile 1 s cycle is the requirement, $Cycle the predicted result.
- procOneImage(i) is repeated for each camera: «PAstep» {PArep = $N}; an additional per-cycle overhead step has {PAdemand = (‘asmd’, ‘mean’, (1.8, ‘ms’))}.
For each image (all demands ‘asmd’, ‘mean’):
- getBuffer(): «PAstep» {PAdemand = (‘asmd’, ‘mean’, (1.5, ‘ms’))}
- «GRMacquire» allocBuf(b): «PAstep» {PAdemand = (‘asmd’, ‘mean’, (0.5, ‘ms’))}
- getImage(i, b): «PAstep» {PAdemand = (‘asmd’, ‘mean’, ($P * 1.5, ‘ms’)), PAextOp = (network, $P)}
- passImage(i, b): «PAstep» {PAdemand = (‘asmd’, ‘mean’, (0.9, ‘ms’))}
- storeImage(i, b): «PAstep» {PAdemand = (‘asmd’, ‘mean’, (1.1, ‘ms’))}
- store(i, b): «PAstep» {PAdemand = (‘asmd’, ‘mean’, (2, ‘ms’))}
- writeImg(i, b): «PAstep» {PAdemand = (‘asmd’, ‘mean’, ($B * 0.9, ‘ms’)), PAextOp = (writeBlock, $B)}
- freeBuf(b): «PAstep» {PAdemand = (‘asmd’, ‘mean’, (0.2, ‘ms’))}
- «GRMrelease» releaseBuf(b): «PAstep» {PAdemand = (‘asmd’, ‘mean’, (0.5, ‘ms’))}
(In the slide, annotations marked “Result” are analysis outputs and those marked “Requirement” are inputs.)
January 2004 WOSP’2004 tutorial 49
Access Control Scenario
(«PAcontext» sequence diagram. Participants: User, plus the «PAresource» participants CardReader, DoorLock, Alarm, Access Controller, Database and Disk. Messages: readCard; admit(cardInfo); readRights(); [not_in_cache] readData(), with a Disk access «PAstep» {PAextOp = (read, 1)}; getRights(); checkRights(); [OK] openDoor(); [not OK] alarm(); [need to log?] logEvent(); writeRec(); writeEvent(); enterBuilding.)
Annotations:
- User arrivals are a «PAopenLoad» with {PAoccurrence = (‘poisson’, 30, ‘s’), PArespTime = ((‘req’, ‘percentile’, 95, (1, ‘s’)), (‘pred’, ‘percentile’, 95, $UserR))}; the 95th-percentile 1 s response time is the requirement (marked “Requirement”), $UserR the predicted result (marked “Result”).
- «PAstep» demands of the form {PAdemand = (‘asmd’, ‘mean’, (x, ‘ms’))}: 3 ms on admit and 1.8 ms on several steps, plus 1.5, 0.3 and 0.2 ms on the remaining steps; the cache-miss branch carries PAprob = 0.4, the logging branch PAprob = 0.2, and the alarm branch PAprob = 0.
- The door-opening step is a pure delay: «PAstep» {PAdelay = (‘asmd’, ‘mean’, (500, ‘ms’)), PAprob = 1}.
January 2004 WOSP’2004 tutorial 50
Acquire/Store Video Scenario: Top-level AD
(«PAcontext» activity diagram for the VideoController:)
- «PAclosedLoad» {PApopulation = 1, PAinterval = ((‘req’, ‘percentile’, 95, (1, ‘s’)), (‘pred’, ‘percentile’, 95, $Cycle))}
- procOneImage, repeated *[$N]: «PAstep» {PArep = $N}; a composite activity detailed on the next slide
- cycleOverhead: «PAstep» {PAdemand = (‘asmd’, ‘mean’, (1.8, ‘ms’))}
January 2004 WOSP’2004 tutorial 51
Composite Activity procOneImage
(Activity diagram with swimlanes AcquireProc, StoreProc, BufferManager and Database; wait_SP and wait_DB mark points where one process waits for another, and the image object flows between swimlanes.)
Steps and their «PAstep» annotations:
- getBuffer: {PAdemand = (‘asmd’, ‘mean’, (1.5, ‘ms’))}
- «GRMacquire» allocBuf: {PAdemand = (‘asmd’, ‘mean’, (0.5, ‘ms’))}
- getImage: {PAdemand = (‘asmd’, ‘mean’, ($P * 1.5, ‘ms’)), PAextOp = (network, $P)}
- passImage: {PAdemand = (‘asmd’, ‘mean’, (0.9, ‘ms’))}
- storeImage: {PAdemand = (‘asmd’, ‘mean’, (1.1, ‘ms’))}
- store: {PAdemand = (‘asmd’, ‘mean’, (2, ‘ms’))}
- writeImg: {PAdemand = (‘asmd’, ‘mean’, ($B * 0.9, ‘ms’)), PAextOp = (writeBlock, $B)}
- freeBuf: {PAdemand = (‘asmd’, ‘mean’, (0.2, ‘ms’))}
- «GRMrelease» releaseBuf: {PAdemand = (‘asmd’, ‘mean’, (0.5, ‘ms’))}
January 2004 WOSP’2004 tutorial 52
LQN Model
(Layered queueing network generated from the two scenarios. A reference task Users (rate = 2/min) on processor UserP drives the access-control side, and VideoController, with entry acquireLoop [1.8], drives the video cycle with $N repetitions. Tasks and entries, with [phase-1, phase-2] service demands: AcquireProc (2 threads): procOneImage [1.5, 0], getImage [12, 0], passImage [0.9, 0]; BufferManager: alloc [0.5, 0]; Buffer: bufEntry; BufMgr2: releaseBuf [0.5, 0]; StoreProc: storeImage [3.3, 0]; CardReader: readCard [1, 0]; AccessController: admit [3.9, 0.2], writeEvent [1.8, 0]; DataBase: writeImg [7.2, 0], readRights [1.8, 0], writeRec [3, 0]; Disk: readData [1.5, 0], writeBlock [1, 0]; Lock: lock [500, 0]; Alarm: alarm [0, 0]; Network: network [0, 1]. Request arcs carry mean visit counts such as ($N), ($P, 0), ($B, 0), (0.4, 0), (0, 0.2), (1, 0), (0, 1) and one forwarded call. Processors: ApplicCPU, DB CPU, DiskP, LockP, AlarmP, CardP, UserP, NetP and a Dummy processor.)
January 2004 WOSP’2004 tutorial 53
LQN Simulation Results

For 40 cameras:
Bufs  Cycle  UserR  UAcpu  Uacqp  Ubuf   Ustor  Sstor  PCam  PDoor
1     1.31   8.78   0.543  0.965  1.000  0.583  19.8   1.00  0
2     0.87   10.27  0.816  0.942  1.683  1.683  22.7   0.06  0
3     0.77   10.70  0.923  0.931  2.035  1.242  24.8   0     0

For 3 buffers, single ApplicCPU:
Cams  Cycle  UserR  UAcpu  Uacqp  Ubuf   Ustor  Sstor  PCam  PDoor
30    0.58   10.74  0.923  0.93   2.034  1.242  24.8   0     0
40    0.77   10.70  0.923  0.931  2.035  1.242  24.8   0     0
50    0.96   10.71  0.923  0.932  2.036  1.243  24.8   0.32  0
60    1.16   10.69  0.923  0.932  2.037  1.244  24.8   0.96  0

For 3 buffers, 2 ApplicCPUs:
Cams  Cycle  UserR  UAcpu  Uacqp  Ubuf   Ustor  Sstor  PCam  PDoor
40    0.67   8.23   1.057  0.994  2.086  1.314  22.2   0     0
50    0.84   8.15   1.056  0.994  2.088  1.317  22.2   0.03  0
60    1.01   8.20   1.057  0.995  2.087  1.314  22.1   0.52  0

Legend: Bufs = number of buffers; Cams = number of cameras; Cycle = mean camera scan cycle in sec; PCam = Prob(camera cycle > 1 sec); PDoor = Prob(door response > 1 sec)
January 2004 WOSP’2004 tutorial 54
Generation of performance models from UML specs
January 2004 WOSP’2004 tutorial 55
Direct UML to LQN Transformation: Our First Approach
- The LQN model structure (tasks, devices and their interconnections) is derived from:
  - the UML model of the high-level software architecture
  - the UML deployment diagram
- LQN detailed elements (entries, phases, activities and parameters) are derived from:
  - UML models of key scenarios with performance annotations
- Scenarios can be represented in UML by the software designers as:
  - sequence diagrams
  - activity diagrams
- Why we prefer activity diagrams in our transformation process:
  - So far, UML sequence diagrams lack convenient constructs for representing branch/merge, fork/join, iteration and decomposition.
  - Other authors use extended sequence diagrams to overcome these deficiencies; however, the UML metamodel and present UML tools do not cover such extensions. Hopefully UML 2.0 will!
  - If sequence diagrams are given, we transform them automatically into activity diagrams, then proceed with the performance model derivation.
January 2004 WOSP’2004 tutorial 56
Tool Interoperability
(Tool chain: a UML tool exports the UML model in XML format; the UML-to-LQN transformation produces a performance model; an LQN tool solves it and yields analysis results, which are imported back into the UML model.)
- Forward path (shown in black in the slide): implemented
- Backward path (in gray): not done yet
January 2004 WOSP’2004 tutorial 57
UML-to-LQN Transformation Algorithm
1. Generate the LQN model structure
   1.1 determine LQN software tasks from the high-level components
   1.2 determine LQN hardware devices from the deployment diagram
2. Generate LQN details (entries, phases, activities) from scenarios
   2.1 for each scenario, process the corresponding activity diagram(s)
       2.1.1 match the inter-component communication style from the pattern with the messages between components in the activity diagram
       2.1.2 parse the activity diagram:
             - detach the activity diagram into partition subgraphs
             - divide each partition further into nonterminal subgraphs
             - map subgraphs to LQN entries, phases and activities
             - create the respective LQN elements and compute their parameters
   2.2 merge the LQN submodels corresponding to different scenarios
3. Traverse the LQN graph and write out the textual model description.
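Step 3 can be sketched as a plain graph traversal over a minimal task/entry structure. The classes and the output syntax below are hypothetical, loosely modelled on an LQN textual description rather than reproducing the actual LQN file format:

```python
# Sketch of step 3: traverse an LQN-like graph and emit a textual description.

class Entry:
    def __init__(self, name, demand):
        self.name, self.demand = name, demand
        self.calls = []          # list of (target Entry, mean number of calls)

class Task:
    def __init__(self, name, processor, entries):
        self.name, self.processor, self.entries = name, processor, entries

def write_lqn(tasks):
    lines = []
    for t in tasks:                      # one section per task
        lines.append(f"task {t.name} on {t.processor}")
        for e in t.entries:              # entries with their service demands
            lines.append(f"  entry {e.name} demand={e.demand}")
            for target, visits in e.calls:   # request arcs with visit ratios
                lines.append(f"    call {target.name} visits={visits}")
    return "\n".join(lines)

serve = Entry("serve", 2.0)
request = Entry("request", 1.0)
request.calls.append((serve, 1.5))       # client calls the server 1.5 times on average
text = write_lqn([Task("Client", "ProcC", [request]),
                  Task("Server", "ProcS", [serve])])
assert "call serve visits=1.5" in text
```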
January 2004 WOSP’2004 tutorial 58
Generating the LQN Model Structure
a) High-level architecture: «process» Client (1..n), «process» Server and «process» Database, connected through ClientServer collaborations with CLIENT and SERVER roles.
b) Deployment diagram: Client1 ... ClientN on nodes ProcC1 ... ProcCN, reaching the Server on ProcS through «Modem» links and the «Internet»; the Server reaches the Database on ProcDB over a «LAN»; ProcDB has a «disk».
c) Generated LQN model structure: tasks Client, WebServer and Database, with devices ProcC, Modem, Internet, ProcS, LAN, ProcDB and Disk1.
- Software tasks are generated for high-level software components according to the architectural patterns used.
- Hardware tasks are generated for the devices in the deployment diagram.
January 2004 WOSP’2004 tutorial 59
Parsing Activity Diagrams
- For the ad-hoc Java transformation, a graph grammar was defined and a top-down parsing algorithm was developed.
- Input: a graph of meta-objects representing a UML model (particularly activity diagrams with swimlanes, and collaborations)
- Output: a parsing sub-tree for each swimlane (i.e., concurrent component) showing:
  - compliance with the architectural patterns used in the UML model
  - division of the activity diagram into sub-areas corresponding to LQN entries, phases and activities
  - identification of the basic structures in each sub-area (sequence, branch, loop, etc.)
- The parsing tree is used to generate the following LQN elements:
  - lower-level components of LQN tasks: entries, phases, activities
  - LQN request arcs from phases and activities to entries
  - LQN parameters: service times and visit ratios
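The first parsing move, splitting the activity graph by swimlane, can be sketched as follows. The function and data shapes are illustrative (not the actual tool's), and the node names are borrowed from the sample AD shown later (send, wait_b, b1, b2, swimlanes A and B):

```python
# Sketch: split an activity diagram into per-swimlane (partition) subgraphs.

def partition_subgraphs(edges, partition_of):
    """edges: (src, dst) activity pairs; partition_of: activity -> swimlane.
    Keeps only edges internal to a swimlane, one subgraph per concurrent
    component; cross-partition edges become inter-component messages and
    are handled separately when matching the communication-style patterns."""
    subgraphs = {}
    for src, dst in edges:
        p = partition_of[src]
        if partition_of[dst] == p:               # internal edge: stays in swimlane
            subgraphs.setdefault(p, []).append((src, dst))
    return subgraphs

edges = [("send", "wait_b"), ("b1", "b2"), ("send", "b1")]
part = {"send": "A", "wait_b": "A", "b1": "B", "b2": "B"}
subs = partition_subgraphs(edges, part)
assert subs == {"A": [("send", "wait_b")], "B": [("b1", "b2")]}
```

Each resulting subgraph is then divided further and mapped to LQN entries, phases and activities, as the algorithm above describes.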
January 2004 WOSP’2004 tutorial 60
Mapping AD to LQN Elements
(Two annotated activity diagrams and the LQN submodels derived from them:)
a) Client-server with blocking client: the Client does something, requests service, waits for the reply, then continues work; the Server serves the request and replies, then optionally completes the service. The client swimlane maps to entry e1; the server swimlane maps to entry e2, with the in-rendezvous work in phase 1 (e2, ph1) and the post-reply work in phase 2 (e2, ph2). The LQN submodel is a Client task with entry e1 calling a Server task with entry e2 [ph1, ph2].
b) Client-server with non-blocking client: the same exchange, but the client continues work while waiting, so the client swimlane is divided into activities (e1, a1), (e1, a2), (e1, a3) connected by a fork and a join (&). The LQN submodel is a Client task with activities a1 ... a4 calling the Server entry e2 [ph1, ph2].
January 2004 WOSP’2004 tutorial 61
Sample AD with Performance Annotations
(Activity diagram with three swimlanes A, B and C, driven by a «PAclosedLoad» {PApopulation = $N}. Swimlane A runs send, forks into wait_b and wait_c, and joins before fin; swimlane B runs receive and steps b1 ... b5 around an undef_b region; swimlane C runs steps c1 ... c3 around an undef_c region. The steps carry «PAstep» annotations of the form {PAdemand = (‘est’, ‘mean’, (x, ‘ms’))} with x in {1, 1.5, 1.7, 1.8, 2, 2.1, 2.5, 2.8, 3}.)
January 2004 WOSP’2004 tutorial 62
Generating LQN Detailed Elements From AD
[Figure: the sample AD divided into sub-areas labeled with their LQN elements: swimlane A contributes entry e1 (ph1), swimlane B contributes task B with entry e2 realized by activities a1-a5, and swimlane C contributes entry e3 (ph1, ph2). The generated LQN detail: e1{ 4, 0 }, e3{ 3.8, 2.8 }, activities a1{ 1.8 }, a2{ 4 }, a3{ 0 }, a4 [e2]{ 2.5 }, a5{ 1.5 }, connected by fork/join (&) precedence; the request arcs from A to B, e2 and to C carry visit ratios {1, 0}.]
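A minimal sketch of the parameter computation: a phase's service time is the sum of the demands of the AD steps mapped to that phase (phase 1 = work before the reply, phase 2 = after it). The demand values below come from the sample AD, but the step-to-phase assignment is assumed for illustration.

```python
# Sketch of computing an entry's phase service times by summing the
# PAdemand values of the steps assigned to each phase. The assignment
# {1: [1.7, 2.1], 2: [2.8]} is a hypothetical choice of steps that
# happens to reproduce the e3{ 3.8, 2.8 } figures above.

def phase_service_times(assignment):
    """assignment: dict phase -> list of step demands (ms).
    Rounding keeps the floating-point sums tidy."""
    return {ph: round(sum(demands), 3) for ph, demands in assignment.items()}

e3 = phase_service_times({1: [1.7, 2.1], 2: [2.8]})
print(e3)  # {1: 3.8, 2: 2.8}
```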
The PUMA project
PERFORMANCE FROM UNIFIED MODEL ANALYSIS
web page: www.sce.carleton.ca/rads/puma/
Goal: to develop a unified approach for building performance models from scenario-based software specifications
The proposed approach to support scenario-based performance engineering:
[Figure: tool chain. A Software Design Tool (UML, UCM, etc.) produces Annotated Design Specs, which are translated to the Core Scenario Model (XML-based); from the CSM a performance model is generated and solved by a Performance Tool (LQN, QN, etc.) together with Sensitivity and Optimization Tools; the Performance Model Results are used to diagnose performance problems, and the recommendations are placed back in the design context as Results and guidance.]
PUMA project highlights
Scenario input based on the UML Profile for Schedulability, Performance and Time
other forms of scenario definition, either based on UML or on other languages such as Use Case Maps
Core Scenario Model to integrate a wide variety of scenario specifications
in an XML-based language to be defined
close to the domain model of the UML Performance Profile
Performance modeling using existing tools
initially, Layered Queueing and Queueing Network model formats, for ensured scalability
simulation
Design evaluation by model experimentation, and feedback of results
as measures attached to the design
as design suggestions
identified hot spots
software resource analysis
Transformation Steps
[Figure: a UML Tool exports the UML model (in XML format); a transformation from UML to CSM produces the Core Scenario Model (CSM); a transformation from CSM to performance model produces the Performance Model; a Performance Model Solver computes the Analysis Results, which are returned as Results and recommendations.]
Two kinds of performance models generated so far:
1. LQN model
2. CSIM simulation model (C/C++ code)
Core Scenario Model
Requirements:
Simpler than the UML model
Must contain all the data for generating performance models
Reflects closely the elements of the Performance Profile
different kinds of resources
scenarios and scenario steps and their resource demands
workload
Challenges for the transformation from UML to CSM:
Extract only the UML model elements important for building the performance model
Process multiple UML diagrams and multiple scenarios
Recognize whether the UML model is incomplete/inconsistent from the performance annotation viewpoint
Generate a correct XML file conforming to the CSM Schema
Find a UML tool that properly exports a UML model in XML format according to the full XMI definition.
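Since the CSM schema was still to be defined, the element and attribute names below (CSM, Scenario, Step, demand) are invented stand-ins, not the real CSM vocabulary. The sketch only illustrates the kind of well-formedness and completeness check the UML-to-CSM transformation must perform before emitting an XML file.

```python
# Hypothetical CSM-like fragment and a consistency check: every
# scenario step should carry a resource demand, mirroring the
# "incomplete/inconsistent from the performance annotation viewpoint"
# challenge above.
import xml.etree.ElementTree as ET

csm_xml = """
<CSM>
  <Scenario name="RetrieveDocument">
    <Step name="readRequest" demand="1.5"/>
    <Step name="parseRequest" demand="0.35"/>
  </Scenario>
</CSM>
"""

def check_csm(text):
    """Parse the XML and report (step count, steps missing a demand)."""
    root = ET.fromstring(text)
    steps = root.findall(".//Step")
    missing = [s.get("name") for s in steps if s.get("demand") is None]
    return len(steps), missing

print(check_csm(csm_xml))  # (2, [])
```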
Generating Different Performance Models
Challenges for the transformation from CSM to different performance models:
Realizing the mapping between CSM and the chosen performance model
different degrees of abstraction between input and output
generate the performance model structure
compute the performance model parameters
identify cases in which the chosen performance model cannot express some features of the system under study
Automatic or semi-automatic transformation?
designer guidance may be necessary
Flexibility of the transformation process
inflexible: mapping hard-wired in the transformation code
flexible: the performance analyst or software developer may be allowed to set some transformation rules.
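The flexible alternative can be sketched as a transformation whose step-to-entry rule is replaceable rather than hard-wired. The names and the one-entry-per-step rule below are hypothetical, not the actual PUMA mapping.

```python
# Sketch of a rule-driven CSM-to-LQN mapping: the analyst may supply
# a different rule instead of the default. Steps are simplified to
# name/demand dicts; a real transformation works on the full CSM.

def default_rule(step):
    # Default: one LQN entry per step, demand becomes service time.
    return {"entry": step["name"], "service_time": step["demand"]}

def csm_to_lqn(steps, rule=default_rule):
    """Apply the (possibly user-supplied) rule to every scenario step."""
    return [rule(s) for s in steps]

steps = [{"name": "readRequest", "demand": 1.5},
         {"name": "getDocument", "demand": 0.7}]
print(csm_to_lqn(steps))
```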
Validation
Case Study: Document Exchange System
Steps for validating the methodology:
a) design the UML model
b) implement the system by using reusable frameworks (ACE and JAWS)
c) measure the system in a networked environment
d) annotate the UML model with measured resource demands
e) generate the LQN model automatically from the UML specification
f) solve the LQN model and compare its results with overall performance measurements obtained from the real system
g) use the LQN model to gain some insights into the DES performance.
[Figure: deployment diagram. A Client node is deployed (<<GRMdeploys>>) on the <<PAhost>> ClientCPU with the <<PAresource>> CDisk, connected over the <<PAresource>> LAN to a Server node deployed on the <<PAhost>> ServerCPU with the <<PAresource>> SDisk; the Server contains the RetrieveThread and SDiskProc.]
Activity Diagram for "Retrieve Document" Scenario
[Figure: activity diagram with swimlanes Client, RetrieveThread and SDiskProc, connected through wait_D and undef_D states. The Client requests a document and later receives it; the RetrieveThread accepts the request, reads it, parses it, updates the logfile, gets the document, sends it and recycles the thread; the SDiskProc writes to the logfile and reads from disk. The Client workload is <<PAclosedLoad>> {PApopulation = $Nusers}; the steps carry <<PAstep>> annotations such as {PAdemand=('msrd', 'mean', (220/$cpuS, 'ms'))}, {PAdemand=('msrd', 'mean', (1.30 + 130/$cpuS, 'ms'))}, {PAdemand=('asmd', 'mean', (0.5, 'ms')), PAextOp=('network', 1), PArespTime=(('req', 'mean', (1, 'sec')), ('pred', 'mean', $RespT))}, {PAdemand=('asmd', 'mean', (1.5, 'ms'))}, {PAdemand=('msrd', 'mean', (35/$cpuS, 'ms'))}, {PAdemand=('msrd', 'mean', (25/$cpuS, 'ms'))}, {PAdemand=('msrd', 'mean', ($cdS, 'ms')), PAextOp=('readDisk', $DocS)}, {PAdemand=('msrd', 'mean', (0.70, 'ms')), PAextOp=('readDisk', $RP)}, {PAdemand=('msrd', 'mean', (170/$cpuS, 'ms'))}, {PAdemand=('msrd', 'mean', ($scdC/$cpuS, 'ms')), PAextOp=('network', $DocS)} and {PAdemand=('msrd', 'mean', ($gcdC/$cpuS, 'ms'))}.]
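The $-variables in these annotations (e.g. $cpuS for the CPU speed, $Nusers for the population) are left free so that one annotated model covers many configurations, and are bound when the model is solved. A sketch of substituting bindings into a parameterized demand expression (the evaluator is illustrative, not the actual tool):

```python
# Evaluate a parameterized demand such as '220/$cpuS' by substituting
# bound values for the $-variables and computing the arithmetic.
# Variable names follow the annotations above; the evaluator itself
# is an invented illustration.
import re

def eval_demand(expr, bindings):
    """Evaluate e.g. '220/$cpuS' with bindings like {'cpuS': 100}."""
    def sub(m):
        return str(bindings[m.group(1)])
    concrete = re.sub(r"\$(\w+)", sub, expr)
    # The annotation expressions are simple arithmetic, so a
    # builtins-free eval is acceptable for this sketch.
    return eval(concrete, {"__builtins__": {}})

print(eval_demand("220/$cpuS", {"cpuS": 100}))  # 2.2 (ms)
```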
LQN Model for "Retrieve Document"
[Figure: LQN model containing the ClientT task (entry defaultentry) on the ClientP processor; the RequestTask (entry net1) and ReplyTask (entry net2) on the Ethernet, modeling the network; the Server task (entries eServer, dummy1, dummy2) and the RetrieveThread on the ServerP processor; and the SDisk task on the SDiskProc processor.]
Model sensitivity to I/O time for short messages
[Graph: sensitivity of the LQN model to I/O time for 5 KB message size, plotted against the number of clients (0 to 15); curves: measured, LQN (short I/O), LQN (long I/O).]
Sensitivity to network delays for short messages
[Graph: sensitivity of the LQN model to network delay for 5 KB message size, plotted against the number of clients (0 to 15); curves: measured, LQN (0.1214 ms), LQN (0.2248 ms).]
Sensitivity to I/O time for long messages
[Graph: sensitivity of the LQN model to I/O time for 50 KB message size, plotted against the number of clients (0 to 15); curves: measured, LQN (short I/O), LQN (long I/O).]
Sensitivity to network delays for long messages
[Graph: sensitivity of the LQN model to network delay for 50 KB message size, plotted against the number of clients (0 to 15); curves: measured, LQN (0.2296 ms), LQN (0.1726 ms).]
Future research directions
Refinement of the current Performance Profile
add more annotations to allow state machine-based performance analysis
harmonize the Performance Profile with the Schedulability Profile
Upgrade the SPT Profile for UML 2.0
Coordinate the SPT Profile with the emerging QoS Profile
a new OMG initiative to support modeling a wide range of QoS concepts: QoS characteristics, constraints, contracts, execution models, adaptation and monitoring, etc.
Use the Performance Profile in the context of MDA
Challenge: how to use UML models at different levels of abstraction (platform-independent and platform-dependent) to generate performance models, which are inherently instance-based and platform-dependent.