Composition with Guarantees for High-Integrity
Embedded Software Components Assembly
Project Partners: Aicas, Atego, Atos Origin, CNRI-ISTI, Enea, Ericsson, Fraunhofer, FZI, GMV Aerospace
& Defence, INRIA, Intecs, Italcertifer, Mälardalen University, Thales Alenia Space,
Thales Communications, The Open Group, University of Padova, Technical University of
Madrid
Every effort has been made to ensure that all statements and information contained herein are accurate, however the
Partners accept no liability for any error or omission in the same.
© 2012 Copyright in this document remains vested in the CHESS Project Partners.
Project Number 216682
D4.2.2 – Transformations and analysis support to predictability
Version 1.3 13 January 2012
Final
Public Distribution
UPD, UPM, FZI, INTECS, FhG, Aicas
D4.2.2 Transformations and analysis support to predictability
Page ii Version 1.3 13 January 2012
Confidentiality: Public Distribution
DOCUMENT CONTROL
Version Status Date
0.1 Table of contents according to comments from Padova meeting (8 March 2011) 10 June 2011
0.2 Contents of UPM in sections 2, 3 and Appendix A 28 August 2011
0.3 Contents of Fhg in section 2 10 September 2011
0.4 Contents of FZI and UPD for sections 2 and 3 19 September 2011
0.5 Contents of FZI for section 3 29 October 2011
0.6 Contents of Aicas and FZI 15 November 2011
0.7 Updates of Section 2 and contents for Section 3 for Fhg 17 November 2011
1.0 Initial Version 15 December 2011
1.1 First review of the document – Atos, Section 4, FZI, UPM, UPD 22 December 2011
1.2 Review of INRIA, Section 4 FhG 10 January 2012
1.3 Final version 13 January 2012
TABLE OF CONTENTS
Table of Contents ............................................................................................................................................................. iii
Table of Figures ............................................................................................................................................................... iv
Executive Summary .......................................................................................................................................................... v
1. Introduction .............................................................................................................................................................. 6
2. Design of Analysis Transformers ............................................................................................................................ 6
2.1 Scheduling Analysis ............................................................................ 6
   2.1.1 Transformation from CHESS-ML to MARTE-PSM ...................... 7
   2.1.2 Transformation from MARTE-PSM to MAST .............................. 9
2.2 Deployment Analysis ......................................................................... 15
   2.2.1 Initial Mapping Configuration ..................................................... 15
   2.2.2 Scheduling Configuration ............................................................ 17
   2.2.3 Bus Access Configuration ............................................................ 18
2.3 Simulation-Based Analysis ................................................................. 18
   2.3.1 Transformation from CHESS-ML to EAST-ADL2 ...................... 18
   2.3.2 General Structure of QVT rules ................................................... 18
   2.3.3 CHESSComp2EASTFunction Rule .............................................. 19
   2.3.4 CHESSDeploy2EASTDeploy ....................................................... 19
   2.3.5 CHESSTime2EASTDelay ............................................................ 20
   2.3.6 CHESSRT2EASTRepConst ......................................................... 20
2.4 Static Analysis on Java Code .............................................................. 21
   2.4.1 Integrating Java Bytecode Analysis into the CHESS Toolchain .... 21
   2.4.2 Adherence to RTSJ's Memory Management Discipline ................ 21
   2.4.3 Computation of Stack Sizes ......................................................... 24
3. Implementation of Transformers .......................................................................................................................... 26
3.1 Scheduling Analysis ........................................................................... 26
   3.1.1 Transformation from CHESS-ML to MARTE-PSM .................... 26
   3.1.2 Transformation from MARTE-PSM to MAST ............................. 31
3.2 Deployment Configuration Analysis ................................................... 34
   3.2.1 Initial Mapping Configuration ..................................................... 34
   3.2.2 Scheduling Configuration ............................................................ 36
   3.2.3 Bus Access Configuration ............................................................ 37
3.3 Simulation-Based Analysis ................................................................. 37
   3.3.1 Initiation of EAST-ADL2 Model Structure .................................. 37
   3.3.2 Transformation of the Systems SW-/HW-Architecture ................ 38
   3.3.3 Annotation and Transformation of Timing Constraints ................ 38
3.4 Static Analysis on Java Code .............................................................. 38
   3.4.1 VeriFlux’s Bytecode Analysis ...................................................... 38
   3.4.2 Adherence to RTSJ Memory Management Discipline .................. 39
   3.4.3 Computation of Stack Sizes ......................................................... 39
4. Application of Transformers ................................................................................................................................. 40
4.1 Rules for the application of transformers ............................................ 40
   4.1.1 General rules for the application of scheduling analysis in CHESS ML ........... 40
   4.1.2 General rules for the application of scheduling analysis in MARTE profile ..... 43
   4.1.3 General rules for the application of deployment analysis ............. 46
   4.1.4 Rules for the Application of the CHESS-ML to EAST-ADL2 Transformation ... 49
4.2 Guides for the interpretation of analysis results ................................... 50
   4.2.1 Interpretation of scheduling analysis results in CHESS ML .......... 50
   4.2.2 Interpretation of scheduling analysis results in MARTE Profile .... 50
   4.2.3 Interpretation of Deployment Analysis Results ........................... 51
   4.2.4 Interpretation of the simulation-based analysis results ................. 52
5. References ................................................................................................................................................................ 53
A.1 Scheduling Analysis .......................................................................... 54
   A.1.1 Transformation from MARTE-PSM to MAST ............................ 54
   A.1.2 Execution of MAST Results to MARTE Transformation ............. 57
A.2 Deployment Analysis ......................................................................... 58
   A.2.1 Execution of Deployment Analysis ............................................. 58
A.3 Simulation Based Analysis ................................................................. 59
   A.3.1 Execution of the Analysis ............................................................ 59
A.4 Static Analysis on Java Code .............................................................. 60
   A.4.1 Installing and Starting VeriFlux ................................................... 60
TABLE OF FIGURES
Figure 2-1: Transformation chain for schedulability analysis ..................... 7
Figure 2-2: QVT Modules in MARTE2MAST Transformations ................ 11
Figure 2-3: CHESS Real-Time Modelling Languages .............................. 11
Figure 2-4: SaAnalysisContext to MASTMODELType: the root of mappings ... 12
Figure 2-5: GaWorkloadEvent to RegularTransaction: identification of mast transactions ... 12
Figure 2-6: Mapping of GQAM events into MAST Events ....................... 13
Figure 2-7: Mapping rules for handling Event Handlers ........................... 14
Figure 2-8: MARTE Resource to MAST Resource mappings .................... 14
Figure 2-9: Functional Structure Transformation ...................................... 15
Figure 2-10: Functional Behaviour Transformation ................................... 16
Figure 2-11: Deployment View Transformation ........................................ 17
Figure 2-12: Scheduling Configuration Analysis Transformation .............. 18
Figure 2-13: Bus Access Configuration Analysis Transformation .............. 18
Figure 2-14: CHESSComp2EASTFunction Rule ....................................... 19
Figure 2-15: CHESSDeploy2EASTDeploy rule ......................................... 20
Figure 2-16: CHESSTIME2EASTDelay rule ............................................. 20
Figure 2-17: CHESSRT2EASTRepConst rule ........................................... 21
Figure 2-18: Java bytecode analysis in CHESS ......................................... 21
Figure 2-19: RTSJ scope stack (basic idea) ............................................... 23
Figure 2-20: Analysis result, IllegalAssignmentError ................................ 23
Figure 2-21: Analysis result, stack size computation ................................. 26
Listing 3-1: Registration of ecore model and UML profile ........................ 32
Listing 3-2: Reuse of ecore models for UML profiles in QVT ................... 33
Listing 3-3: Registration of MAST modelling languages ........................... 33
Figure 3-4: MARTE and Analysis Modelling Tools Assets ....................... 34
Figure 3-5: Top-Down Mapping Determination ........................................ 35
Figure 3-6: Bottom-Up Mapping Determination ....................................... 36
Figure 3-7: Scheduling Priorities Determination ....................................... 36
Figure 3-8: Determination of Bus-Access Configurations .......................... 37
Figure A-1: Configuration of analysis generator ....................................... 55
Figure A-2: Start Analysis Command ....................................................... 56
Figure A-3: Results of Analysis Command in Console .............................. 56
Figure A-4: Invoke Analysis .................................................................... 58
Figure A-5: A completed System Tab ...................................................... 63
Figure A-6: A completed Application Tab ................................................ 63
Figure A-7: A completed Analysis Tab for Scoped Memory Analysis ........ 64
Figure A-8: A completed Analysis Tab for Stack Size Analysis ................. 64
EXECUTIVE SUMMARY
This document reports on the work conducted in Task 4.3 in Year 2 of the project for
the full definition of the transformations from the CHESS modelling language to the
analysis languages.
This report introduces the transformation tool support for the generation of
time-predictable analysis models from the CHESS modelling language. It is the second
version of this deliverable; D4.2.1 was the initial version, which included general
descriptions and designs of the transformations, the specification of the analysis
languages, and their integration into the modelling framework. This second version
adds the detailed design, implementation and deployment of the transformations.
The transformation designs introduce general transformation rules that capture the
fundamental structure of the transformations and the overall organisation of modules
and libraries. There is not yet an established methodology for the development of QVT
(Query/View/Transformation) transformations, so the designs do not follow a single
common approach; most of them, however, are based on transformation rules at
different levels of abstraction.
The model-to-model transformations are implemented in QVT-o (QVT operational);
some combine QVT-r (QVT relational) with QVT-o. The analysis of code is
implemented in Java. The implementation sections document the main implementation
decisions for each transformation.
Deliverable D4.2.1 introduced the general integration of the WP3 and WP4 analysis
methods; this deliverable only covers the design and implementation of the
transformations developed in WP4.
Deliverable D2.2 described CHESS ML, including the analysis-related details that are
the key specifications handled in the source models of the transformations introduced
here. In short, this deliverable details the transformations, while D2.2 details the
source modelling languages.
1. INTRODUCTION
This section describes the model transformations supported in WP4 and their
integration with the source and target languages. This deliverable details the
transformations; deliverable D4.2.1 described the support for the analysis
languages and methods in the general modelling framework.
The analysis languages and methods (the target languages of the analysis) handled in
CHESS WP4 are:
- Scheduling analysis. The MAST analysis tool implements the analysis on top of its
own modelling language.
- Deployment analysis. SysXplorer and SystemC support this kind of analysis.
- Simulation-based analysis. This kind of analysis is partially supported by the
DynaSim simulation framework.
- Java code analysis. VeriFlux supports the static analysis of RTSJ Java code.
CHESS-ML and UML are the main languages for the representation of the source
models. Additional UML profiles handled are:
- MARTE profile. Deliverable D4.2.1 used MARTE Beta3 as the MARTE reference
(the basis of the MARTE 1.0 standard); the current implementations are based on
MARTE 1.1.
- TADL and EAST-ADL profiles. These profiles provide specific inputs for the
generation of simulation models.
2. DESIGN OF ANALYSIS TRANSFORMERS
This section introduces the designs of the tools that integrate the analysis languages
into the modelling framework. The integration tools are based on model-to-model
transformations for the analysis of models, complemented by a code analysis tool
(VeriFlux) that analyses the generated code, including the business code. The
model-to-model analysis support covers scheduling analysis, deployment analysis
and simulation-based analysis.
This section elaborates the solutions introduced in deliverable D4.2.1, which described
the general approach for integrating the analysis methods into the modelling
framework and identified the required transformations and their requirements. The
designs for all transformations identified in D4.2.1 are included here.
2.1 SCHEDULING ANALYSIS
Two kinds of transformations are supported for scheduling analysis: the CHESS-ML to
MARTE-PSM transformation and the MARTE to MAST analysis-language
transformation.
2.1.1 Transformation from CHESS-ML to MARTE-PSM
This section outlines the design rules that are used to transform a CHESS user model
(PIM) into a MARTE model used as the PSM of the CHESS process.
The context of use of this transformation is a seamless round-trip transformation chain
that, starting from the PIM-level user model, automatically generates a PSM
representation of the system, performs schedulability analysis on it, and finally
propagates the analysis results back onto the PSM and the PIM input model, as
attribute decorations of the relevant design entities.
Figure 2-1: Transformation chain for schedulability analysis
Figure 2-1 depicts the whole transformation chain.
A first transformation generates the PSM from the PIM user-model. Once the PSM is
created, another transformation (developed by the University of Cantabria and modified
in part by the University of Padova) generates the input format for the MAST toolset,
which is used to perform the schedulability analysis of the system. The results of the
analysis are propagated back to the PSM (more precisely, to its SAM subset) by the
same transformation. Finally, another transformation step propagates the analysis
results from the PSM back to the PIM-level user model, where the designer can read
them directly.
The scope of this section is to describe the transformation between the PIM and the
PSM level.
These rules generate a PSM that implements the concurrent semantics specified via
extra-functional attributes in the PIM. The generated PSM realizes the concurrent and
communication semantics of the Ravenscar Computational Model (RCM).
The transformation generates a dedicated thread executor for each deferred operation
(cyclic, sporadic and bursty operations). Communication is data-oriented and performed
through protected objects (shared resources with an access protocol).
In the following we enumerate the set of rules used by the transformation. Section
3.1.1 elaborates on their implementation, on the input entities in CHESS-ML/MARTE
and on the output entities in MARTE.
ID: “Rule 1”. Short name: “Cyclic operation mapping”
For each cyclic operation “CO”:
1) Create one task T, which executes CO periodically.
2) Link to T the execution of CO, including the transitive call chain derived from the
intra-component bindings of the component implementation, which proceeds across
component instances and terminates in each leaf of the call tree when the first deferred
operation is found (i.e. sporadic or bursty operation) or when no other operations are
called.
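Rule 1 can be sketched in code. The sketch below is purely illustrative: the names (`CyclicOperation`, `TaskExecutor`, `Rule1Mapper`) are hypothetical, and the real transformation emits MARTE stereotype applications in the PSM, not Java objects.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical PIM-side entity: a cyclic operation with its period.
class CyclicOperation {
    final String name;
    final long periodMs;
    CyclicOperation(String name, long periodMs) {
        this.name = name;
        this.periodMs = periodMs;
    }
}

// Hypothetical PSM-side entity: the dedicated periodic thread executor.
class TaskExecutor {
    final String taskName;
    final long periodMs;
    final List<String> callChain = new ArrayList<>();
    TaskExecutor(String taskName, long periodMs) {
        this.taskName = taskName;
        this.periodMs = periodMs;
    }
}

class Rule1Mapper {
    // Rule 1: create one task T per cyclic operation CO, and link to T the
    // execution of CO plus the transitive call chain. The chain is assumed
    // to be already resolved here; in the real rule it is derived from the
    // intra-component bindings and stops at the first deferred operation.
    static TaskExecutor map(CyclicOperation co, List<String> resolvedChain) {
        TaskExecutor t = new TaskExecutor("T_" + co.name, co.periodMs);
        t.callChain.add(co.name);
        t.callChain.addAll(resolvedChain);
        return t;
    }
}
```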
ID: “Rule 2”. Short name: “Sporadic operation mapping”
For each sporadic operation “SO”:
1) Create a single task T, with a sporadic activation pattern.
2) Create a single protected object (OBCS), equipped with a “Getter” and a “Put”
operation (to fetch and insert request descriptors in it).
3) The first operation executed by the sporadic task is the getter operation of the
protected object.
4) Attach to the sporadic task the execution of SO including the transitive call chain
derived from the intra-component bindings of the component implementation, which
proceeds across component instances and terminates in each leaf of the call tree when
the first deferred operation is found (i.e. sporadic or bursty operation) or no other
operations are called.
ID: “Rule 3”. Short name: “Bursty operation mapping”
For each bursty operation “BO”:
1) Create a single task T, with a bursty activation pattern.
2) Create a single protected object (OBCS), equipped with a “Getter” and a “Put”
operation (to fetch and insert request descriptors in it).
3) The first operation executed by the bursty task is the getter operation of the protected
object.
4) Attach to the bursty task the execution of BO including the transitive call chain
derived from the intra-component bindings of the component implementation, which
proceeds across component instances and terminates in each leaf of the call tree when
the first deferred operation is found (i.e. sporadic or bursty operation) or when no other
operations are called.
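Rules 2 and 3 both rely on a protected object (OBCS) exposing a "Put" and a "Getter" operation. The following Java monitor is only an illustration of that idea under stated assumptions: the generated PSM uses Ravenscar-compliant protected objects with ceiling priorities, not plain Java synchronization, and the class name `Obcs` is ours.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative OBCS: a monitor holding request descriptors of type R.
class Obcs<R> {
    private final Deque<R> requests = new ArrayDeque<>();

    // "Put": callers deposit a request descriptor under mutual exclusion.
    synchronized void put(R request) {
        requests.addLast(request);
        notifyAll(); // release the waiting sporadic/bursty task
    }

    // "Getter": the first operation executed by the sporadic/bursty task;
    // it suspends the task until a request descriptor is available.
    synchronized R get() {
        while (requests.isEmpty()) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while waiting", e);
            }
        }
        return requests.removeFirst();
    }
}
```

Requests are served in FIFO order here; the actual queuing policy of the generated OBCS is fixed by the Ravenscar Computational Model, not by this sketch.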
ID: “Rule 4”. Short name: “Protected operation mapping”
For each protected operation “PO”:
1) Create a protected object P, which exposes the protected operation “PO”.
2) Link to PO the transitive call chain derived from the intra-component bindings of the
component implementation, which proceeds across component instances until the first
deferred operation is found (i.e. a sporadic or bursty operation) or no other operations
are called.
ID: “Rule 5”. Short name: “Unprotected operation mapping”
For each unprotected operation “UO”:
1) Link to UO the transitive call chain derived from the intra-component bindings of the
component implementation, which proceeds across component instances until the first
deferred operation is found (i.e. a sporadic or bursty operation) or no other operations
are called.
ID: “Rule 6”. Short name “Merging of protected operations”
1) If multiple operations of the same PI are tagged as “protected operation”, create a
single protected object that exposes all of those operations.
ID: “Rule 7”. Short name “End of a transitive call chain to a deferred operation”
A transitive call chain that has a leaf on a sporadic or bursty operation shall include the
“Put” operation of the protected object defined for the sporadic or bursty task executing
that operation.
The analysis results that are reported from the PSM to the PIM are:
1. The worst-case response time for cyclic, sporadic and bursty operations.
2. The worst-case blocking time for cyclic, sporadic and bursty operations.
3. The utilization for processors and busses.
4. The ceiling priority for protected operations and for the protected request queue
of sporadic and bursty operations.
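For illustration only, the four result categories above could be collected in a structure like the following. This is a hypothetical container of our own; the actual mechanism decorates PIM attributes rather than building Java objects.

```java
// Illustrative record of the analysis results propagated from PSM to PIM.
class SchedulingResults {
    final double worstCaseResponseTimeMs; // per cyclic/sporadic/bursty operation
    final double worstCaseBlockingTimeMs; // per cyclic/sporadic/bursty operation
    final double utilization;             // per processor or bus, in [0, 1]
    final int ceilingPriority;            // for protected operations / request queues

    SchedulingResults(double wcrt, double wcbt, double util, int ceiling) {
        this.worstCaseResponseTimeMs = wcrt;
        this.worstCaseBlockingTimeMs = wcbt;
        this.utilization = util;
        this.ceilingPriority = ceiling;
    }

    // A minimal sanity check: utilization must stay within bounds.
    boolean utilizationFeasible() {
        return utilization >= 0.0 && utilization <= 1.0;
    }
}
```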
2.1.2 Transformation from MARTE-PSM to MAST
The MARTE to MAST transformation is a set of transformations that generate
scheduling analysis models from MARTE-SAM models. MARTE SAM is a profile for
the description of scheduling analysis models in UML. SAM is a specialization of
MARTE-GQAM, the general MARTE profile for the representation of analysis models
(two sub-profiles specialize this general profile). Both sub-profiles depend on GRM, a
general sub-profile for the description of resources. The input of the
transformations consists of UML models annotated with the MARTE sub-profiles
SAM, GQAM and GRM. This transformation produces three models:
- A MAST model for the representation of scheduling analysis models
- A neutral model representing a Java RTSJ abstract syntax tree with the same
behaviour as the scheduling analysis model
- A neutral model for the generation of Ada Ravenscar code with the same behaviour
as the scheduling analysis model
This document does not detail the Java and Ada code generation; it only describes the
generation of the scheduling analysis models.
The input of the second transformation is the result model of the MAST scheduling
analysis (together with the traceability model of the previous transformation); this
transformation propagates the analysis results back to the source model.
2.1.2.1 General Structure of Transformers and QVT Modules
Figure 2-2 shows the different QVT-o transformations, modules and model libraries that
implement both transformations. These modules are:
1. SAM2MAST_QVT_Tranformation: this is a QVT-o transformation that
integrates all kinds of transformations to analysis and code.
2. MAST_RES2SAM_QVT_Transformation: this is a QVT-o transformation that
updates the UML source model with scheduling analysis results.
3. MARTE2MAST_QVT_Library: this is a QVT-o mappings library that defines the
most important mappings from MARTE modelling elements to MAST
modelling elements.
4. MARTE2MAST_QVT_Blackbox_Library: this is a QVT-o module implemented as a
Java black-box. This library handles some basic structures used in MAST models.
5. UML_Kernel2JavaXMI_library: this is a QVT-o mapping library to support
transformations from UML to Java abstract syntax tree.
6. MARTE2RTSJ_mapping: this QVT-o mapping library extends the previous one
to integrate in Java the concepts defined in MARTE SAM, GQAM and GRM.
7. RTSJ_QVTO_library: this is a QVT-o mapping library for handling the RTSJ Java
library in QVT-o.
8. RTSJ_QVTOBlackBoxLibrary: this is a QVT-o module that is a Java black-box
library to support basic operations such as RTSJ model library load.
Figure 2-2: QVT Modules in MARTE2MAST Transformations
The two black-box Java modules (Figure 2-3) provide basic operations that QVT-o
cannot implement. For example, the MAST language relies on sequence structures that
cannot be handled in OCL. Other basic functions return the current date and time in
MAST format, and generate MAST identifiers from EMF URIs. The RTSJ black-box
handles dates in RTSJ format, loads the RTSJ Java model library, and returns the URIs
of modelling elements.
Figure 2-3: CHESS Real-Time Modelling Languages
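One of the black-box helpers mentioned above derives MAST identifiers from EMF URIs. The sketch below shows the idea only: the class and method names are hypothetical, not the actual MARTE2MAST_QVT_Blackbox_Library API, and the exact MAST lexical rules should be checked against the MAST documentation.

```java
// Illustrative helper: derive a MAST-compatible identifier from an EMF URI
// fragment. MAST names are assumed here to be plain identifiers, so every
// character outside letters/digits is replaced with an underscore.
class MastIdentifiers {
    static String fromUriFragment(String uriFragment) {
        StringBuilder sb = new StringBuilder();
        for (char c : uriFragment.toCharArray()) {
            sb.append(Character.isLetterOrDigit(c) ? c : '_');
        }
        String id = sb.toString();
        if (id.isEmpty()) {
            return "m_";
        }
        // Identifiers are assumed to have to start with a letter.
        return Character.isLetter(id.charAt(0)) ? id : "m_" + id;
    }
}
```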
2.1.2.2 Transformations Rules in SAM2MAST Transformation Module
The SAM2MAST QVT transformation includes QVT mapping rules for handling the
root elements of SAM/GQAM models. The root of a SAM model is a UML element
annotated with the SaAnalysisContext stereotype. The SAM2MAST transformation
uses this element for the generation of both analysis models and code: it is the source
for the generation of the MASTMODELType modelling element, which is the root of
MAST models. SaAnalysisContext references
two model elements that are the stereotype applications of GaResourcePlatform and
GaWorkloadBehavior. The latter element is the model’s entry point for the
specification of the behaviour that must be taken into account in the scheduling
analysis, and the former element is the entry point for the specification of the platform
that must be considered.
All these elements are handled in the three mapping rules represented in Figure
2-4, because there is no simple mapping from MARTE to MAST (MAST has no
equivalent of GaResourcePlatform and GaWorkloadBehavior).
Figure 2-4: SaAnalysisContext to MASTMODELType: the root of mappings
Two other important mappings, shown in Figure 2-5, handle GaWorkloadEvent
stereotype applications. This stereotype is the source of behaviours in the model.
The mapping rules mapWorkloadEventIntoMASTTransaction and mapWorkloadEvent
implement the general mappings, and they reuse the MARTE2MAST library (described
in Section 2.1.2.3) for the generation of specific behaviours.
Figure 2-5: GaWorkloadEvent to RegularTransaction: identification of MAST transactions
2.1.2.3 Transformation Rules in MARTE2MAST Library
This subsection introduces the most important QVT-o mapping rules that implement
this library.
The MARTE2MAST library includes general mappings from MARTE elements to MAST
elements. Figure 2-6 represents the four mappings that transform MARTE workload
patterns into MAST external events. They handle the four kinds of patterns that
MAST can represent: periodic, sporadic, burst and irregular.
Figure 2-6: Mapping of GQAM events into MAST Events
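The dispatch performed by these four mappings can be pictured with a plain-Java
stand-in. The enums and names below are illustrative assumptions, not the
generated QVT-o code or the exact MAST metamodel element names:

```java
// Illustrative stand-in for the four pattern mappings: each MARTE arrival
// pattern kind is dispatched to a corresponding MAST external-event kind.
// Enum names are invented for illustration only.
class EventPatternMapping {
    enum MartePattern { PERIODIC, SPORADIC, BURST, IRREGULAR }
    enum MastEvent { PERIODIC_EVENT, SPORADIC_EVENT, BURSTY_EVENT, IRREGULAR_EVENT }

    // One-to-one dispatch over the four pattern kinds the document lists.
    static MastEvent map(MartePattern p) {
        switch (p) {
            case PERIODIC: return MastEvent.PERIODIC_EVENT;
            case SPORADIC: return MastEvent.SPORADIC_EVENT;
            case BURST:    return MastEvent.BURSTY_EVENT;
            default:       return MastEvent.IRREGULAR_EVENT;
        }
    }
}
```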
In MARTE, the roots of the specifications of actions in workload events are
GaScenario stereotype applications (and specializations such as SaStep). MAST has
a different structure, based on a global event that references other kinds of
event handlers. The MARTE2MAST library handles all these combinations in the set
of QVT-o mapping rules shown in Figure 2-7. The root of the transformation for a
MARTE scenario is GlobalEventHandler. It reuses two other mappings:
SequenceEventHandler and SingleEventHandler (a single event handler or a sequence
of them). The MAST and MARTE languages can represent some other types of handlers
based on alternative and fork-join precedence, but these are not supported by the
implementation of the MAST analysis algorithms, so generating them would serve no
analysis purpose. Another mapping rule (IntoEventHandler) supports the
representation of GaScenario, GaStep and SaStep in terms of MAST EventHandlers.
Figure 2-7: Mapping rules for handling Event Handlers
The MARTE2MAST library includes two mapping rules for handling resources (Figure
2-8 shows these two rules), supported by some additional mapping rules. The first
mapping (mapSchedulaResourceIntoSchedullingServer) implements the mapping from
schedulable resources in MARTE into RegularSchedulingServer in MAST. The second
mapping rule maps shared resources into one of the shared-resource kinds handled
in MAST (priority inheritance, ceiling protocol and SRP).
Figure 2-8: MARTE Resource to MAST Resource mappings
2.1.2.4 Transformation Rules in MAST_RES2SAM Transformation Module
The MAST_RES2SAM transformation updates the following MARTE annotations in
the UML MARTE model:
SaAnalysisContext: the round-trip transformation includes results that identify
whether the context is schedulable.
SaExecHost: the stereotype application is updated with the utilization of the
resource and with the slack calculated for the resource.
SaStep: the SaStep applications are updated with results such as the
non-pre-emption blocking time for the root step, the number of suspensions and
the worst-case response time.
GaScenario: GaScenario applications that are the root of a workload event are
updated with the worst-case response time of the event.
Some additional results cannot be back-propagated to the source model. The MAST
analysis produces some output results that are not included in the MAST output
model (MAST writes these results to the standard output), and some results in the
result model cannot be represented with the MARTE profiles.
2.2 DEPLOYMENT ANALYSIS
To determine deployment parameters, a transformation extracts pieces of
information from the CHESS metamodel. The target analysis model contains the
information needed to trigger the three different analyses. It includes the
description of hardware components, graph representations of functional
components and deployment information. The analysis needs information from the
CHESS and MARTE profiles that are applied to a UML input model.
Three different analyses are executed at different development stages, starting
with the determination of initial mapping configurations, followed by the
determination of scheduling and bus-access configuration parameters.
The target of the transformation is an Ecore model that describes the information
needed for the analysis. It contains the hardware description, a functional
representation in the form of a graph model, and the deployment information.
2.2.1 Initial Mapping Configuration
The initial mapping configuration determination processes two main aspects of the
system: the functional description and the hardware-baseline description. The
structural functional description is modelled with UML models and MARTE and CHESS
stereotypes. The structural functionality modelling is aligned with the CHESS
methodology and mainly uses the CHESS stereotypes ComponentImplementation and
cHRtSpecification. The information from the model is transformed into an Ecore
model that the tools use to perform the analysis. Figure 2-9 illustrates the
transformation.
Figure 2-9: Functional Structure Transformation
In a bottom-up approach the behaviour of software components is described by
functional SystemC implementations. An attribute in the CHESS stereotype
ComponentImplementation describes the file path of the source code. It is important
that the interfaces in the code match the names of the interfaces described in CHESS.
Figure 2-10: Functional Behaviour Transformation
In a top-down approach the intra-component behaviour must be modelled with
activity diagrams. The control flow of the activity diagrams is annotated with
stereotypes that describe the hardware-independent computational effort of the
software components. In the bottom-up approach this computational effort is
extracted directly from the code.
The CHESS stereotype ControlFlow is attached to control-flow edges in the
activity diagrams. The stereotype attributes describe the order of outgoing edges
in case one node has more than one outgoing edge, which is the case for loops.
The lowest number indicates the highest priority, which is always the inner loop;
the highest number always points to the edge that leaves the loop. The
ControlFlow stereotype of a loop also describes the repetition count of that
loop. The stereotypes ComputeComplexity and OperationCount describe the
hardware-baseline computational effort of the software component. They specify
the number of arithmetic operations that are performed on a specific data type
for each control flow. The target of the transformation is a graph
representation of the functionality. The graph contains the functional
behaviour, as well as the computation complexity. This graph is the starting
point for determining communication dependency graphs (iPG – detailed
description in D4.1) for every allowed hardware/software component pair. These
iPGs are the basis for the mapping configuration determination. The
transformation is illustrated in Figure 2-10.
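The edge-ordering rule described above can be sketched as follows. This is a
hedged plain-Java illustration; the Edge class and the selection policy are
invented stand-ins, not the analysis tool's code:

```java
import java.util.*;

// Illustrative sketch of the ControlFlow ordering rule: outgoing edges carry
// an order number (lowest = inner loop = highest priority, highest = the edge
// leaving the loop), and loop-back edges carry a repetition count.
// All names are invented for illustration.
class ControlFlowOrder {
    static class Edge {
        final int order;          // lower order = higher priority (inner loop)
        int remainingRepetitions; // > 0 while the loop-back edge may still be taken
        Edge(int order, int repetitions) {
            this.order = order;
            this.remainingRepetitions = repetitions;
        }
    }

    // Choose the next edge to follow: the lowest-order edge that still has
    // repetitions left, falling back to the highest-order edge (the loop exit).
    static Edge next(List<Edge> outgoing) {
        Edge best = null, exit = null;
        for (Edge e : outgoing) {
            if (e.remainingRepetitions > 0 && (best == null || e.order < best.order)) best = e;
            if (exit == null || e.order > exit.order) exit = e;
        }
        if (best != null) { best.remainingRepetitions--; return best; }
        return exit;
    }
}
```

With a loop-back edge of order 1 and repetition count 2, the inner edge is taken
twice before the exit edge (the highest order number) is chosen.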
Figure 2-11: Deployment View Transformation
The deployment view includes several aspects of the system that are needed for the
analysis. The main aspect is the hardware-baseline enriched with stereotypes from
MARTE and CHESS. The stereotype CH_HwProcessor, an extension of the MARTE
HwProcessor stereotype, is applied to computational hardware components. The
extension includes multiple instances of the CHESS stereotype HWDataType which
also includes multiple instances of the CHESS stereotype DataTypeExecution. These
stereotypes are used to describe the computational capabilities of the hardware
components. The bus system is enriched with attributes from the MARTE stereotype
HWBus.
The MARTE stereotype Assign is used to assign software stereotypes to hardware data
types. Figure 2-11 illustrates the transformation from the deployment view.
2.2.2 Scheduling Configuration
The scheduling configuration analysis reuses the results of the mapping
determination analysis and additionally extracts information from the involved
computational hardware and software components, as depicted in Figure 2-12. The
transformations from the mapping analysis regarding hardware and software
components are reused. The resulting priorities are back-annotated to the
cHRtSpecification of the software components.
Figure 2-12: Scheduling Configuration Analysis Transformation
2.2.3 Bus Access Configuration
The FlexRay bus access configuration analysis does not need the computation
complexity information from the mapping analysis. The analysis reuses the results
from the mapping and scheduling configuration analyses. Additionally, a FIBEX
configuration file is needed that matches the hardware structure of the CHESS
model and already contains the topology and some standard configuration
parameters. The FIBEX file is linked in the analysis context, which is the input
for the transformation, as depicted in Figure 2-13. The analysis determines a
bus-system configuration for the static slot cycle, which is back-annotated in
the CHESS metamodel.
Figure 2-13: Bus Access Configuration Analysis Transformation
2.3 SIMULATION-BASED ANALYSIS
2.3.1 Transformation from CHESS-ML to EAST-ADL2
In order to simulate the system's timing behaviour, the CHESS to EAST-ADL2
transformation generates an EAST-ADL2 model that describes the system from an
automotive-specific point of view. EAST-ADL2 is a domain-specific architecture
description language that targets the automotive domain. It is used by the
simulation-based analysis approach in order to fulfil automotive-specific
requirements associated with the structures used in this domain. EAST-ADL2
defines the system on different layers of abstraction; the transformation
described in this section concentrates on the design-level description. The input
of the transformation is a UML model that has the CHESS and MARTE profiles
applied. The needed MARTE packages are HRM, GCM, GQAM and Alloc.
The transformation consists of two different tasks. The first is to transform the
structure of the system's architecture, including the information about the given
software allocation. The second is the transformation of the modelled timing
constraints.
The transformation is realized within one QVT-o module and one additional QVT-o
library. A back propagation from the analysis model to CHESS was not developed.
Instead, a graphical view was developed, in which the analysis results and timing traces
are graphically represented. Section 4.2.4 provides some further details of the graphical
view.
2.3.2 General Structure of QVT rules
The QVT-o module is structured into four different rules. Each rule handles the
transformation of one important aspect of the model. The rules are as follows:
1. CHESSComp2EASTFunction: transforms the software components from the
CHESS Component View into Design Function Types that are grouped in the
EAST-ADL2 Functional Design Architecture. In effect, this rule handles the
software structure of the system model.
2. CHESSDeploy2EASTDeploy: handles all information about the hardware
structure of the system. It transforms the hardware components modelled in
the CHESS Deployment View into Hardware Component Types inside the
EAST-ADL2 Hardware Design Architecture.
3. CHESSTime2EASTDelay: transforms the timing information needed by the
timing constraints related to the TADL Delay Constraint. The information is
provided by MARTE GaLatencyObs elements inside the CHESS model.
4. CHESSRT2EASTRepConst: uses the information given by the CHESS real-time
specification and creates appropriate repetition constraints, modelled via
TADL elements, in the EAST-ADL2 model.
2.3.3 CHESSComp2EASTFunction Rule
The central elements of the transformation are the CHESS ComponentTypes, which
are transformed into EAST-ADL2 Design Function Types. Each component is
instantiated in the model via ComponentImplementations; such an instantiation is
modelled using a Realization dependency in the CHESS model. The transformation
module creates, in the corresponding EAST-ADL2 Functional Design Architecture, a
Design Function Prototype that is typed with the previously transformed Design
Function Type.
Figure 2-14: CHESSComp2EASTFunction Rule
2.3.4 CHESSDeploy2EASTDeploy
The deployment transformation is necessary to transform the hardware structure of
the system. The input is a hardware system modelled in the CHESS model via
elements from the MARTE HRM package. The output is an EAST-ADL2 Hardware Design
Architecture that contains so-called Hardware Component Prototypes, typed with
Hardware Component Types transformed from the corresponding MARTE elements.
Furthermore, this transformation rule deals with the information about the
software-to-hardware allocation. These dependencies are modelled via MARTE
Allocations and are transformed into EAST-ADL2 Function Allocations.
Figure 2-15: CHESSDeploy2EASTDeploy rule
2.3.5 CHESSTime2EASTDelay
This transformation transforms the timing constraints modelled with CHESS into
the appropriate constraints in EAST-ADL2. All constraints have very similar
attributes but different semantics. The constraints handled by this
transformation are AgeTimingConstraint, ReactionConstraint,
InputSynchronizationConstraint and OutputSynchronizationConstraint. Every
constraint is assigned to several UML time observations that indicate the start
and end points of the analysed event chain. In the transformed EAST-ADL2 model
this is handled by placing an element with the stereotype “EventChain” that
delegates to certain stimulus and response events. Every event represents, and is
assigned to, one flow or client/server port.
Figure 2-16: CHESSTIME2EASTDelay rule
2.3.6 CHESSRT2EASTRepConst
The EAST-ADL2 repetition constraints describe the arrival pattern of certain
events. In the CHESS modelling language this is modelled via the
“CHRtSpecification” stereotype. The information provided by the stereotype is
transformed into either a sporadic event constraint or a periodic event
constraint, depending on the arrival pattern given in the RT specification's
occKind attribute.
Figure 2-17: CHESSRT2EASTRepConst rule
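The occKind-based dispatch can be pictured with a small stand-in. This is a
hypothetical sketch; the string-based pattern test and the constraint names are
assumptions for illustration, not the actual QVT-o rule:

```java
// Illustrative stand-in for the CHESSRT2EASTRepConst dispatch: the arrival
// pattern from the occKind attribute selects either a periodic or a sporadic
// repetition constraint. The textual pattern form and the returned constraint
// names are invented assumptions.
class RepetitionConstraintMapping {
    static String map(String occKindPattern) {
        // Assumed convention: a pattern string starting with "periodic"
        // yields a periodic event constraint; everything else is treated as
        // sporadic in this sketch.
        if (occKindPattern.startsWith("periodic")) return "PeriodicEventConstraint";
        return "SporadicEventConstraint";
    }
}
```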
2.4 STATIC ANALYSIS ON JAVA CODE
2.4.1 Integrating Java Bytecode Analysis into the CHESS Toolchain
Java bytecode is the instruction set for the Java Virtual Machine. This section describes
applications of Java bytecode analysis in the CHESS project. The analyzed code has
been generated in two steps from a CHESS model, as shown in Figure 2-18. In the first
step, UPM's UML2RTSJ tool, as developed in WP5, generates Java source code from
CHESS models. The generated Java source code makes use of the RTSJ library (as
implemented, for instance, by the JamaicaVM). In the second step, a Java compiler
(e.g., javac or jamaicac) translates Java source code to Java bytecode. The generated
bytecode is then analysed by the Java bytecode analyser VeriFlux to produce an
analysis result. The form of the analysis result depends on the particular analysis goal
and will be described below.
We highlight two applications of Java bytecode analysis:
Verifying adherence to the RTSJ’s memory management discipline
Computing upper bounds for Java stack sizes
Figure 2-18: Java bytecode analysis in CHESS
2.4.2 Adherence to RTSJ's Memory Management Discipline
In order to facilitate predictable memory management, the RTSJ offers APIs for region-
based memory management. These APIs enable the safe management of dynamic
memory without the use of an automatic garbage collector. In region-based memory
management, the execution time for deallocating dynamic memory is predictable:
memory deallocation occurs at well-specified points in time, and its execution time is
proportional to the size of the deallocated memory. In contrast, many automatic garbage
collectors might trigger deallocation at seemingly random points in time, and garbage
collection tasks might require scans of the entire heap. Unlike manual memory
management (as featured for instance in C or C++), region-based memory management
guarantees memory safety: there is no danger of accessing stale memory via dangling
pointers.
Region-based memory management requires adherence to a certain programming
model. This programming model is enforced by runtime checks. If the programming
model is violated, RTSJ programs will throw an IllegalAssignmentError or a
ScopeCycleError at runtime. It is desirable to verify, prior to running an RTSJ program,
that these errors will not be thrown. If a program is free of these runtime errors,
VeriFlux can prove this statically in many cases.
VeriFlux can thus be applied to:
1. verify that programs generated by UML2RTSJ do not throw
IllegalAssignmentErrors or ScopeCycleErrors
2. verify that library calls from UML2RTSJ-generated code do not throw
IllegalAssignmentErrors
The former point is interesting, because UML generally supports so-called opaque
behaviours. Unfortunately, opaque behaviours make it possible for erroneous code
to be inserted into models. Checking the generated bytecode ensures that opaque
behaviours do not violate the RTSJ programming model.
The latter point is interesting, because generated Java programs might make use of pre-
existing libraries. The availability of many powerful and widely used libraries,
including Java's standard libraries, is an especially attractive feature of the Java
programming language. The RTSJ facilitates the use of libraries that are unaware of the
RTSJ. However, some library methods might cause IllegalAssignmentErrors when
called from certain RTSJ contexts. In particular, library methods that assign freshly
allocated objects to global variables are likely to cause IllegalAssignmentErrors in
many RTSJ contexts. VeriFlux can be used to detect erroneous calls of library
methods, thereby ensuring that library methods are used in admissible contexts
only.
RTSJ memory management is based on a stack-like memory model. Temporary
dynamic memory is logically organized in a stack of disjoint scoped memory areas.
This is depicted in Figure 2-19. In that figure, the current scope stack consists
of the global heap memory at the bottom of the stack and two scoped memory areas,
ScopedA and ScopedB, on top. The programming model restricts assignments that
cross the boundaries of memory areas: assignments that create references from
higher (in the scope stack) memory areas to lower memory areas are allowed. Thus,
in Figure 2-19, an assignment of the form 'b.f = a', which creates a reference
from ScopedB to ScopedA, is allowed. Assignments that create references from
lower memory areas to higher memory areas are disallowed. Thus, in Figure 2-19,
an assignment of the form 'a.f = b', which creates a reference from ScopedA to
ScopedB, is disallowed and triggers an IllegalAssignmentError.
Figure 2-19: RTSJ scope stack (basic idea)
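The assignment rule just described, together with the scope-entry rule behind
ScopeCycleErrors, can be sketched in plain Java. This is a minimal illustration
of the rules under the stated scope-stack model, not the RTSJ or VeriFlux
implementation; all names are invented:

```java
import java.util.*;

// Minimal sketch of the RTSJ scope-stack rules described above. Each memory
// area has a position on the scope stack, with the heap at position 0 and
// higher positions for more deeply nested scoped areas.
class ScopeStackRules {
    // An assignment target.f = source is legal only if the source object
    // lives at the same position or lower (outer) on the scope stack, i.e.
    // only higher-to-lower references are allowed.
    static boolean assignmentAllowed(int targetAreaPos, int sourceAreaPos) {
        return sourceAreaPos <= targetAreaPos;
    }

    // Entering an area that is already on the current scope stack would
    // create a cycle: the condition behind an RTSJ ScopeCycleError.
    static boolean enterWouldCycle(Deque<String> scopeStack, String area) {
        return scopeStack.contains(area);
    }
}
```

With Heap at position 0, ScopedA at 1 and ScopedB at 2, the figure's 'b.f = a'
(ScopedB referencing ScopedA) is allowed, while 'a.f = b' is rejected.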
VeriFlux detects such IllegalAssignmentErrors statically. Figure 2-20 shows the result
of a VeriFlux analysis when applied to an RTSJ program. In this example, the program
attempts to assign an object that is part of a scoped memory area s2 to an array that is
part of scoped memory area s1, where s2 is above s1 in the current scope stack.
VeriFlux highlights source code lines that are known to be free of errors in green.
Source code lines that may contain errors are highlighted in yellow. When
hovering over the line that contains the error, VeriFlux pops up a message that
reports the kind of error.
Figure 2-20: Analysis result, IllegalAssignmentError
ScopeCycleErrors are thrown when a program tries to enter a scoped memory area that
is below the current scoped memory area on the scope stack, thus creating a cycle in the
scope stack and destroying the stack structure. VeriFlux can detect possible
ScopeCycleErrors statically.
Note that VeriFlux might yield so-called false positives. That is, VeriFlux might
warn of IllegalAssignmentErrors or ScopeCycleErrors even if no such errors
actually occur at runtime. The possibility of false positives is the reason why
VeriFlux highlights in yellow rather than in red: VeriFlux cannot assert that
runtime errors occur for certain. Instead, it can identify program locations
where runtime errors might possibly occur.
2.4.3 Computation of Stack Sizes
The memory needed by Java applications can be subdivided into three parts: the Java
stack, the native stack, and the heap. The RTSJ, in addition, supports immortal memory
and scoped memory. The sizes of these memory areas can be configured at start-up of
the Java Virtual Machine. VeriFlux can be used to predict upper bounds for Java stacks.
These bounds can then be used to configure Java stack sizes for the JamaicaVM.
The problematic issue in static stack size computation is recursive methods. While real-
time programs often avoid recursion, recursive methods do exist in Java's libraries. To
deal with these recursive methods, VeriFlux's stack size analysis relies on recursion
depth annotations. A recursion depth annotation consists of an expression that evaluates
to a natural number that is an upper bound on the number of nested recursive calls.
Syntactically, recursion depth annotations are provided as so-called 'measuredBy'
clauses from the Java Modelling Language (JML) [10]. VeriFlux uses these recursion
depth annotations as assumptions in order to predict stack sizes for recursive methods.
VeriFlux does not verify the correctness of these recursion depth annotations, which is
beyond the capabilities of the static analysis techniques that VeriFlux is based on.
If VeriFlux discovers recursive methods that do not carry a recursion depth annotation,
it uses a default recursion depth, which is a positive natural number or infinity. This
number can be configured in VeriFlux's GUI. In case the default recursion depth is
configured to be infinity, the stack size analysis will report an infinite stack size for all
threads that call recursive methods that do not carry a recursion-depth annotation.
We assume that programs that UML2RTSJ generates do not contain recursive methods.
This is an acceptable restriction for the sake of predictability. Consequently, it is not
necessary to express recursion depths inside models.
Recursion depth annotations inside code are needed, because Java libraries frequently
contain recursive methods. For generated code that uses such libraries, it is necessary
that recursive methods in libraries be annotated with the above-mentioned recursion
depth annotations.
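As a toy illustration of how a recursion depth bound makes stack use statically
bounded, consider the following sketch. This is not VeriFlux's algorithm; the
frame size and the comment-style annotation are invented for illustration:

```java
// Toy illustration only: a recursion-depth bound turns a recursive method's
// stack use into a finite estimate. This is NOT VeriFlux's actual analysis;
// the per-call frame size is an invented parameter.
class StackEstimate {
    // Worst-case stack use of a recursive method with a known per-call frame
    // size and an annotated upper bound on nested recursive calls.
    static long boundForRecursiveMethod(long frameSizeBytes, int maxRecursionDepth) {
        return frameSizeBytes * (long) maxRecursionDepth;
    }

    // A library-style recursive method; the comment below plays the role of a
    // JML-style recursion depth annotation ('measuredBy' clause).
    //@ measured_by n;  -- illustrative annotation, not checked by this sketch
    static int countDown(int n) {
        if (n <= 0) return 0;
        return 1 + countDown(n - 1);
    }
}
```

Under the annotation's assumption that n bounds the nesting depth, a 128-byte
frame and a depth bound of 10 would yield a 1280-byte contribution to the stack
bound.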
Figure 2-21 shows the result of a stack size analysis. The result is provided in
textual form. At the bottom of the console, the following two lines appear:

STACK USE: 203912 FOR THREAD: java/lang/Thread[Opaque]
STACK USE: 277996 FOR THREAD: java/lang/Thread[Opaque]

The first line means that there is a set of threads whose stack sizes are bounded
by 203912 bytes. The second line means that there is another (disjoint) set of
threads whose stack sizes are bounded by 277996 bytes. Note that each of these
two entries actually represents a set of threads: during the analysis, several
concrete thread objects get merged into the same abstract thread object. For this
reason, VeriFlux is in general not able to compute a stack bound for each thread
separately. VeriFlux can, however, compute a bound on the stack sizes of all
threads; in this example, that bound is 277996 bytes. This number can be used for
configuring Java stack sizes for deployment.

In Figure 2-21, the Errors/Warnings area shows that, for some library methods,
the default recursion depth is used. In this example, the default recursion depth
is set to 100 nested recursive calls, which is very pessimistic. This explains
why the computed Java stack size is so high. Replacing the default recursion
depth by 10 results in a maximum stack size of 72560 bytes instead of 277996.
Annotating the displayed library methods with realistic recursion depths on a
per-method basis can lead to better analysis results than relying on a
necessarily pessimistic default recursion depth.

Note that the upper bounds computed by VeriFlux are not tight, i.e., they are
higher than necessary. Note also that VeriFlux does not address
StackOverflowErrors due to overflows of native stacks. Native stacks are needed
to cope with methods that are compiled to native machine code (for optimization
purposes) and with native methods that are called through the Java Native
Interface (JNI) in order to access services provided by platform-specific native
libraries.

Correctness guarantee for memory management analysis: If VeriFlux does not warn
of ScopeCycleErrors or IllegalAssignmentErrors, then it is guaranteed that the
analyzed program does not throw IllegalAssignmentErrors or ScopeCycleErrors at
runtime.

Correctness guarantee for stack size analysis: Assume that all recursion depth
annotations for all recursive methods called from program P are correct.
Furthermore, assume that for all recursive methods that are called from P and
that do not carry a recursion depth annotation, the default recursion depth
provides an upper bound on the number of nested recursive calls. Finally, assume
that the Java stack size for P is configured to be at least as high as the
largest Java stack size that VeriFlux computes for P. Then P does not throw a
StackOverflowError due to a Java stack overflow.
Figure 2-21: Analysis result, stack size computation
3. IMPLEMENTATION OF TRANSFORMERS
Section 2 of this deliverable introduced the design of the transformations
identified in D4.2.1. This section describes implementation decisions for these
transformations, including decisions about the integration of UML profiles and
Ecore modelling languages into the transformations, and the detailed behaviours
of the transformation rules (Section 2 introduced the rules but did not provide
their behavioural models).
3.1 SCHEDULING ANALYSIS
Two kinds of transformations are supported for scheduling analysis: CHESS-ML to
MARTE-PSM transformations and MARTE to the MAST analysis language.
3.1.1 Transformation from CHESS-ML to MARTE-PSM
This transformation is implemented in QVT-o. Table 3.1.1 summarizes the
stereotypes, UML elements, and related attributes involved in the transformation.
The transformation relies primarily on the output of the “Build instances”
command of the CHESS editor, which generates the set of InstanceSpecifications
for (i) the logical system and (ii) the hardware system. The result of the
transformation is the PSM, whose additional entities are located in the
“RTAnalysisView” Package.
The transformation maps a set of PIM-level entities stereotyped with CHESS and
MARTE stereotypes to PSM-level entities stereotyped with MARTE stereotypes either
D4.2.2 Transformations and analysis support to predictability
13 January 2012 Version 1.3 Page 27
Confidentiality: Public Distribution
in a 1-to-1 or 1-to-many fashion, and organizes them inside Packages. Moreover,
some assumptions are currently made in order to overcome tool shortcomings.
The transformation uses the traceability features of QVT-o and the “resolve”
construct to retrieve PSM-level entities during the execution of the
transformation. The “resolve” construct is useful when a mapping must elaborate
on the results of a previous mapping. On the one hand, its adoption relieves the
programmer from creating temporary data structures that bind PIM to PSM
entities; on the other hand, it introduces potentially subtle precedence
constraints. For example, with reference to Table 3.1, mappings M6-M9 could be
executed in a single loop iterating over some Slots; however, M9 needs
information from M6-M8 and must therefore be placed in a second loop that
iterates over the same Slots.
Furthermore, the transformation depends on a QVT-o black-box library written in
Java. A collection of Java utilities is needed to overcome two problems: 1) the
MARTE library is not recognized by QVT-o but is visible from Java; therefore,
operations dealing with the metaclasses defined therein (e.g., to set the
“protectedKind” of <SaSharedResource>) need this external support; 2) the VSL
expressions need regular expressions to be parsed with ease, and QVT-o does not
provide such a facility.
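The second utility can be pictured as follows, in a hypothetical java.util.regex
sketch. The accepted VSL duration form "(value=<num>, unit=<unit>)" and all class
and method names are assumptions for illustration, not the actual black-box API:

```java
import java.util.regex.*;

// Hypothetical sketch of regular-expression support for VSL expressions.
// The accepted textual form and the names are invented assumptions; the real
// black-box library is not reproduced here.
class VslParsing {
    private static final Pattern DURATION =
        Pattern.compile("\\(\\s*value\\s*=\\s*([0-9.]+)\\s*,\\s*unit\\s*=\\s*(\\w+)\\s*\\)");

    // Returns the numeric value of a VSL duration, or NaN if it does not match.
    static double value(String vsl) {
        Matcher m = DURATION.matcher(vsl);
        return m.matches() ? Double.parseDouble(m.group(1)) : Double.NaN;
    }

    // Returns the unit of a VSL duration, or null if it does not match.
    static String unit(String vsl) {
        Matcher m = DURATION.matcher(vsl);
        return m.matches() ? m.group(2) : null;
    }
}
```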
It is important to note that the names of the entities of the PSM must be unique,
since the transformation by the University of Cantabria between the PSM and the
analysis input (and vice versa) relies on the entity names to bind the PSM to the
analysis output in order to perform the back propagation.
Before the execution of the mappings the four following Packages are created inside the
“RTAnalysisView” Package:
1. “Host” contains the Classes stereotyped with <SaExecHost>, <SaCommHost>
and <GaCommChannel>;
2. “Operation” contains plain or <SaSharedResource> Classes that embed the
<SaStep> Operations;
3. “Task” contains the <SchedulableResource> Classes;
4. “AnalysisContext” contains the one and only <SaAnalysisContext> Class.
The entry point of the transformation is the <CHGaResourcePlatform> Package which
is mapped to the <SaAnalysisContext> Class through the mapping
CHGaResourcePlatform2SaAnalysisContext (M1). The necessary and
sufficient condition for the transformation to start is that there is exactly one
<CHGaResourcePlatform> Package in the PIM. This mapping calls all the other
mappings.
The mapping CHHwComputingResource2SaExecHost (M2) maps each
<CH_HwComputingResource> InstanceSpecification owned by the
<CHGaResourcePlatform> Package to an <SaExecHost> Class.
Then all the bus instances are collected, i.e., the InstanceSpecifications owned by
the <CHGaResourcePlatform> Package that are stereotyped with <CH_HwBus>. For
each bus:
CHHwBus2SaCommHost (M3) maps it to a <SaCommHost> Class.
CHHwBus2OperationClass (M4) maps it to a Class containing two
<SaStep> Operations, “send” and “receive”. The “execTime” of those operations is
currently hardcoded in the transformation. This limitation will be removed in a
subsequent version of the transformation.
Each InstanceSpecification that represents a resource connected to this bus is
mapped to a <GaCommChannel> Class through
links2GaCommChannel (M5). The <GaCommChannel> “schedParams”
values are currently hardcoded.
The <CHRtPortSlot> Slots are extracted from the package containing the
InstanceSpecifications of the logical system. Each Slot represents a Port annotated with
the <CHRtSpecification> Comment. In order to avoid name clashes when generating
the Operation outop corresponding to the Operation inop referenced by the
<CHRtSpecification> “context”, and its related resources, the name of outop is the
concatenation of the “context” name and the type names of all its parameters (denoted in
the following table by the attribute “fullName”).
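The naming scheme just described can be sketched as follows (hypothetical helper, illustrative only):

```java
import java.util.List;

// Hypothetical sketch of the "fullName" construction used to avoid name
// clashes: the name of the generated operation outop is the concatenation
// of the "context" operation name and the type names of all its parameters.
class NameMangler {
    static String fullName(String contextName, List<String> parameterTypeNames) {
        StringBuilder sb = new StringBuilder(contextName);
        for (String typeName : parameterTypeNames) {
            sb.append(typeName);
        }
        return sb.toString();
    }
}
```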
For each Slot:
CHRtSlot2SchedulableResource (M6) maps it to a
<SchedulableResource> Class and sets the “host” value to the corresponding
<SaExecHost> Class. The latter is retrieved by looking at the <Assign>
Comment owned by the <GaResourcesPlatform> Component whose “from”
value is the InstanceSpecification owning this Slot.
For each corresponding <CHRtSpecification> Comment which is Unprotected
or Periodic, CHRtUnprotectedComment2SaStep (M7) maps it (and
more precisely inop) to a <SaStep> Operation. The latter is owned by a Class
that is the result of the mapping
ComponentImplementation2Class4SaStep applied to the
<ComponentInstance> owning inop. Therefore a single Class owns all the
operations of a <ComponentInstance>.
For each corresponding <CHRtSpecification> Comment that is Protected,
CHRtProtectedComment2SaStep (M7P) maps it to a <SaStep>
Operation. The latter is owned by the <SaSharedResource> created expressly for
this operation by the mapping
ProtectedOperation2SaSharedResource (M8). This setting
currently violates Rule 6 in Section 2.1.1.
For each corresponding <CHRtSpecification> Comment which is Sporadic,
CHRtSporadicComment2SaStep (M7S) maps it to a <SaStep>
Operation, owned by the Class that is the result of the mapping
ComponentImplementation2Class4SaStep applied to the
<ComponentInstance> owning inop. Therefore a single Class owns all the
operations of a <ComponentInstance>. Moreover, a <SaSharedResource> Class
is created expressly for this operation by the mapping
SporadicOperation2SaSharedResource. This mapping also creates
two <SaStep> Operations, “put” and “get”, owned by this Class, whose
“sharedRes” is set to the <SaSharedResource> itself and whose “execTime”
values are hardcoded.
If inop owns an Activity (there should be at most one), the mapping
Slot2EndtoEndWorkFlow (M9) is executed. M9 maps the Slot to a
<SaEndtoEndFlow> Activity owned by the <SaAnalysisContext> Class. The
Activity owns one <GaWorkloadEvent> InitialNode, one FinalNode, one
<GaLatencyObs> Constraint as precondition, and one or more <SaStep>
OpaqueActions, depending on the length of the transitive call chain defined in
Section 2.1.1 by Rules 1, 2, 4, and 5. In addition to these rules, a transitive call
chain begins with the bus “receive” operation and/or ends with a bus “send”
operation if the former is caused by, or causes, the execution of a remote
operation. For each <SaStep> OpaqueAction outopac, the corresponding
<SchedulableResource> (or <GaCommChannel> for bus operations) and
<SaStep> Operation are computed. The outopac attributes “concurRes” and
“subUsage” are set to these values, respectively.
Currently the rule for the Bursty <CHRtSpecification> Comment (Rule 3) is not
implemented.
The mapping rules are summarized below (<I> denotes the input element, <O> the output element).

M1
  Input:   <CHGaResourcePlatform> Package
  Output:  <SaAnalysisContext> Class
  Results: <O>.name = model.name+“_analysisContext”

M2
  Input:   <CH_HwComputingResource> InstanceSpecification
  Output:  <SaExecHost> Class
  Results: <O>.name = <I>.name
           <O>.schedPriRange = “[1..256]”
           <O>.speedFactor = <I>.speedFactor

M3
  Input:   <CH_HwBus> InstanceSpecification
  Output:  <SaCommHost> Class
  Results: <O>.speedFactor = <I>.speedFactor
           <O>.blockT = <I>.blockT
           <O>.packetT = <I>.packetT

M4
  Input:   <CH_HwBus> InstanceSpecification
  Output:  Class
  Results: <O>.name = <I>.name+“_operations”
           <SaStep> Operation.name = “send”
           <SaStep> Operation.name = “receive”
           <SaStep>.execTime = “(worst=2.5,value=2.5,best=2.5,unit=ms)”

M5
  Input:   <CH_HwComputingResource> InstanceSpecification linked to <CH_HwBus> InstanceSpecification
  Output:  <GaCommChannel> Class
  Results: <O>.name = <I>.name+bus.name+“_server”
           <O>.schedParams is hardcoded
           <O>.host = <SaCommHost> corresponding to the connecting bus

M6
  Input:   <CHRtPortSlot> Slot
  Output:  <SchedulableResource> Class
  Results: <O>.name = <I>.chrtSpecification.partWithPort.name+<I>.chrtSpecification.context.fullName
           <O>.isProtected = false
           <O>.schedParams = <I>.chrtSpecification.relativePriority

M7
  Input:   <CHRtSpecification> Comment
  Output:  <SaStep> Operation
  Results: <O>.name = <I>.context.fullName
           <O>.execTime = <I>.localWCET

M7P
  Input:   <CHRtSpecification> Comment
  Output:  <SaStep> Operation
  Results: <O>.name = <I>.context.fullName
           <O>.execTime = <I>.localWCET
           <O>.sharedRes = <SaSharedResource> generated by M8

M7S
  Input:   <CHRtSpecification> Comment
  Output:  <SaStep> Operation
  Results: <O>.name = <I>.context.fullName
           <O>.execTime = <I>.localWCET

M8
  Input:   <CHRtSpecification> Comment
  Output:  <SaSharedResource> Class
  Results: <O>.name = <I>.context.owner.name+<I>.context.fullName+“_State”
           <O>.ceiling = <I>.ceiling
           <O>.protectKind = “PriorityCeiling”

M9
  Input:   <CHRtPortSlot> Slot
  Output:  <SaEndtoEndFlow> Activity
  Results: <O>.name = <I>.owningInstance.name+<I>.chrtSpecification.context.fullName
           <GaLatencyObs>.latency = <I>.chrtSpecification.rlDl
           <GaWorkloadEvent>.pattern = <I>.chrtSpecification.occKind
           <SaStep> OpaqueAction.subUsage = corresponding <SaStep> Operation
           <SaStep> OpaqueAction.concurRes = corresponding <SchedulableResource> Class

Table 3.1: Summary of the mapping rules implementation
The PIM => PSM => MAST transformation chain then proceeds with the creation of
the input for the analysis tool.
The implementation of this transformation consists of a modified version of a plugin
developed by the University of Cantabria.
The original transformation takes as input a UML model with MARTE stereotypes and
converts it into the textual input for the MAST analysis tool. This step is performed with
an M2T transformation implemented mainly in Acceleo, with Java used for the parsing
of VSL expressions. The transformation also calls the MAST analysis tool GUI.
The output of the analysis is an XML file whose results are propagated back into the
UML model. This step is performed with an ad-hoc Java transformation. The bindings
between the model and the XML file are calculated on the fly by generating the name of a
model entity from the name of an entity in the XML file. This results in a rather complex
implementation.
The modifications made to the plugin by the University of Padova are as follows:
the Acceleo transformation has been adapted to accept a CHESS model, which
is more complex than the original input;
the Java back-propagation transformation has been adapted to modify only a part
of the CHESS model – the SAM subset of the PSM – instead of the entire
model;
the execution of the MAST analysis tool GUI has been made optional and
replaced with the silent execution of the command-line version, so as to make
the whole transformation chain seamless.
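A silent command-line invocation of the analysis tool can be sketched with ProcessBuilder as below; the executable name and argument order are illustrative assumptions, not the actual plugin code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of assembling and running a silent command-line
// invocation of an analysis tool; names and argument layout are assumptions.
class MastRunner {
    static List<String> buildCommand(String tool, String inputFile, String resultFile) {
        List<String> cmd = new ArrayList<>();
        cmd.add(tool);          // e.g. the analysis executable
        cmd.add(inputFile);     // textual model produced by the M2T step
        cmd.add(resultFile);    // XML results file to be back-propagated
        return cmd;
    }

    static Process run(List<String> cmd) throws java.io.IOException {
        // Merging stderr into stdout keeps the invocation silent for the user.
        return new ProcessBuilder(cmd).redirectErrorStream(true).start();
    }
}
```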
After the back propagation of the analysis results into the PSM, another back-propagation
step reports the results back to the PIM. Although in theory QVT-o provides a
mechanism to read a trace file through the “resolve” construct, in practice it is
not able to read an already created trace file, and thus it is not possible to implement a
transformation that propagates information between models using a trace file for
reference. Therefore, the PSM=>PIM back propagation has been implemented with an
ad-hoc Java transformation, which is very simple since, thanks to the trace file created
by the PIM-to-PSM transformation, it does not need to infer the bindings between the
PSM and the PIM. The PIM stereotyped entities that are currently
modified by the back propagation are:
1. <CH_HwBus> InstanceSpecification: “utilization” attribute;
2. <CH_HwComputingResource> InstanceSpecification: “utilization” attribute;
3. <CHRtSpecification> Comment: “blockT” and “respT” attributes;
4. The “ceiling” attribute of the <CHRtSpecification> Comment is currently not
reported. Although MAST correctly calculates the ceiling priorities of protected
resources, it does not report their values in the analysis output. We plan to add a
localized modification to MAST to overcome this issue and be able to report
the calculated ceiling priorities at PIM level.
The PSM entities generated by the transformation and used for the analysis are left in
the <<RtAnalysisView>> package of the model after the transformation. The user can
then inspect the generated PSM in read-only mode.
A “Purge user model” command available in the CHESS menu can be used to remove
the PSM entities and all the analysis results present in the user model.
Some of the generated entities are used exclusively for schedulability analysis (e.g.
<<SaAnalysisContext>>, <<SaEndToEndFlow>>): hence they belong to the SAM
subset of the PSM. Others can also be used for code generation (e.g.
<<SaSharedResource>>, <<SchedulableResource>>): hence they pertain generically
to the PSM dimension.
The next version of this transformation will investigate the generation of a PSM
aligned as much as possible with that of the VERDE project.
3.1.2 Transformation from MARTE-PSM to MAST
The implementation and deployment of the MARTE-PSM to MAST transformer is based
on three basic modelling tool components:
MARTE1x1: this modelling tool component supports the following MDA
artefacts:
1. All MARTE 1.1 subprofiles, supported both in UML 2.2 and in ecore.
2. The MARTE modelling library.
3. An auxiliary editor that supports the editing of stereotype applications
involving UML data types and other tagged-value types that are
not well supported in most modelling tools. MARTE uses these data
types extensively in its profiles.
MAST_RMA: this modelling tool component supports two domain-specific
modelling languages (DSLs) and some additional artefacts for the execution of
the MAST analysis tools:
1. MAST modelling language. This is the language, specified in ecore, that
supports the MAST models.
2. MAST results language. This is the language that the MAST analysis
tools use for the representation of results.
3. MAST artefacts. These artefacts include the programs that support the
analysis, and the transformer from the XML format to the MAST internal
language.
MARTE_SA_Gen: this modelling tool component includes the QVT-o transformers
introduced in Section 2.1.2. MARTE1x1 and the UML2 project support the input
modelling languages of the MARTE to MAST transformation, while MAST_RMA
supports its output language and the input modelling language for the round-trip
transformation that back-propagates the results into the source model.
3.1.2.1 Modelling Languages Definitions
This subsection introduces the most important implementation decisions in the
integration of MARTE profiles, ecore modelling languages and QVT transformations.
MARTE1x1 includes the ecore model attached to each MARTE subprofile (this is the
approach used in most modelling tools). The MARTE1x1 plugins register both the ecore
models that support the MARTE subprofiles and the UML profiles. For each subprofile,
two registry extensions are defined, as shown in Listing 3-1. In these registrations a URI
is used to reference the MARTE profiles (in this example the symbolic name is
http://MARTE.MARTE_Foundations/schemas/Alloc/1, and the registration defines
the reference to the physical location). These URIs are not defined in the MARTE
standard, but we follow the same approach as some other UML profile standards. This
approach makes QVT and other transformers independent of the MARTE installation:
QVT transformations are reusable in any MARTE implementation that conforms to the
MARTE standard and follows the same approach to identify the MOF implementations
of the UML profiles.
<extension
      point="org.eclipse.uml2.uml.dynamic_package">
   <profile
      uri="http://MARTE.MARTE_Foundations/schemas/Alloc/1"
      location="platform:/plugin/MARTE1x1/model/MARTE.profile.uml#_ar8OsAPMEdyuUt-4qHuVvQ"/>
</extension>
<extension
      point="org.eclipse.emf.ecore.dynamic_package">
   <resource
      uri="http://MARTE.MARTE_Foundations/schemas/Alloc/1"
      location="platform:/plugin/MARTE1x1/model/MARTE.profile.uml#_mX_70X1CEeCFT8J9kHwmxg"/>
</extension>
Listing 3-1: Registration of ecore model and UML profile
Each modelling tool has specific UML profile registrations to take into account. The
registration of ecore languages makes it possible to reuse the ecore profiles in QVT and
MOF2Text.
Examples of QVT model-type declarations that reuse the MARTE profiles are shown in
Listing 3-2. A QVT module uses MARTE SAM, GQAM and GRM modelling elements as any other
modelling element, and it does not need the very tedious UML2-specific operations.
This approach makes QVT modules independent of the MARTE installation.
modeltype MARTE_SAM uses
SAM("http://MARTE.MARTE_AnalysisModel/schemas/SAM/1");
modeltype MARTE_GQAM uses
GQAM("http://MARTE.MARTE_AnalysisModel/schemas/GQAM/1");
modeltype MARTE_GRM uses
GRM("http://MARTE.MARTE_Foundations/schemas/GRM/1");
Listing 3-2: Reuse of ecore models for UML profiles in QVT
MAST_RMA defines two ecore languages that are reused by the QVT transformers.
Their registrations are included in Listing 3-3, and these registrations include the URIs
to be used in QVT and MOF2Text transformers.
<extension point="org.eclipse.emf.ecore.generated_package">
   <package
      uri="http://mast.unican.es/xmlmast/mast_mdl"
      class="mast_mdl.Mast_mdlPackage"/>
</extension>
<extension point="org.eclipse.emf.ecore.generated_package">
   <package
      uri="http://mast.unican.es/xmlmast/result"
      class="mast_res.Mast_resPackage"/>
</extension>
Listing 3-3: Registration of MAST modelling languages
The implementation and installation of the MARTE to MAST transformations comprises
15 Eclipse plugins.
3.1.2.2 Deployment of MARTE and MAST Modelling Tools Assets
Figure 3-4 includes the diagram used for the delivery and deployment of the modelling
assets that support MARTE 1.1, MAST, the MARTE to MAST generators, modelling
support for RTSJ and Ada Ravenscar, and the code generators. This diagram represents
five modelling assets that support five types of MDA artefacts: the MARTE 1.1 profile
(MARTE1x1), MAST tool support in Eclipse (MAST_RMA), RTSJ in UML (UML2RTSJ),
Ada Ravenscar in UML (UML2AdaRavenscar) and the set of transformers
(MARTE_SA_Gen).
Each asset includes different kinds of MDA artefacts such as UML profiles, UML
model libraries, QVT-o transformers, MOF2Text generators, EMF metamodels, and
Java-based editors and wizards. Each asset defines its dependencies
(MARTE_SA_Gen depends on all the other assets) and its dependencies on standard MDA
artefacts such as the UML 2.2 metamodel and some basic Eclipse plugins.
The assets define the external commands and services they require from, or provide to,
other assets. These services are implemented in behavioural languages such as
QVT-o, MOF2Text, Java and OCL.
Finally, the assets include documentation artefacts in wiki form: wiki tutorials
embedded in, and delivered with, the assets.
Figure 3-4: MARTE and Analysis Modelling Tools Assets
3.2 DEPLOYMENT CONFIGURATION ANALYSIS
The implementation of the deployment configuration analysis is based on extracting the
analysis input from a CHESS model and back-annotating the results to the model.
The only exceptions are the bottom-up approaches, where part of the analysis input is
extracted from source code when performing the analysis for mapping configurations,
and the bus access configuration analysis, where the input is enriched with
information from an external FIBEX configuration file, which is also the target for the
results.
3.2.1 Initial Mapping Configuration
The analysis extracts its inputs from a model conforming to an ecore metamodel that is
also the target of the analysis results. The top-down approach extracts the analysis
input from the CHESS model; therefore, QVT-o transformations are implemented whose target is the
analysis input model. The results are back-annotated with a QVT-o transformation from
the analysis model to the CHESS model. Figure 3-5 illustrates this approach.
Figure 3-5: Top-Down Mapping Determination
The bottom-up approach extracts behavioural and computational complexity directly
from the SystemC source code, which is referenced by the software
components in the CHESS model. An abstract syntax tree (AST) representation with
SystemC structural information is extracted from the source code using the Eclipse
CDT framework with QVT-r transformations; this abstraction process was
implemented in the ITEA2 VERDE project. The AST is then instrumented to capture
execution counts of basic blocks, transformed back into source code, and executed.
The AST with the SystemC constructs is used as input for a
transformation that extracts the control flow into the internal process graph (iPG)
described in deliverable D4.1. Additionally, for each basic block the computational
complexity is extracted and added, together with the execution count, to the edges of the
extended iPG. This iPG is hardware-independent and represents the functional behaviour
for the analysis. This is depicted in Figure 3-6.
Figure 3-6: Bottom-Up Mapping Determination
The analysis generates a hardware-specific iPG for each computational hardware
component. These iPGs feed the integer linear programming (ILP) formulation to
determine the mapping configuration. The mapping configuration is then back-
annotated to the CHESS model using MARTE Assign stereotypes.
3.2.2 Scheduling Configuration
Figure 3-7 depicts the design-flow implementation of the scheduling configuration
analysis. All information is extracted from the CHESS model and transformed into the
analysis input with QVT-o transformations. In addition to the data used by the top-down
mapping approach, timing and mapping information is extracted from the source model.
The generated fixed-priority scheduling configurations are back-annotated to the CHESS
model using CHRtSpecification stereotypes.
Figure 3-7: Scheduling Priorities Determination
3.2.3 Bus Access Configuration
Figure 3-8 illustrates the bus access configuration analysis. The analysis reuses the
results of the previous analyses and generates bus scheduling parameters. In addition to
the information extracted from the CHESS model, a FIBEX configuration file is
needed. FIBEX is an automotive standard for describing the topology and configuration of
automotive field-bus systems such as FlexRay, MOST, CAN and LIN. The analysis
determines the bus access configuration for FlexRay systems. Information about the
topology and rudimentary configurations is extracted from the FIBEX file. The results
are then back-annotated to the FIBEX file, not to the CHESS model.
Figure 3-8: Determination of Bus-Access Configurations
3.3 SIMULATION-BASED ANALYSIS
In order to analyse the given timing constraints via the simulation-based analysis, a
simulation of the system has to be generated. This process starts with a first
transformation, written in QVT-o, from the CHESS model into a corresponding
EAST-ADL2 model.
3.3.1 Initiation of EAST-ADL2 Model Structure
In the first step of the transformation, the output model is created with an initial EAST-
ADL2 structure. For this purpose a helper function named setEASTStructure
creates a group of UML classes at the top level of the model, which apply
certain stereotypes of the EAST-ADL2 profile representing the hierarchical structure
of this automotive-specific profile:
system model – the formal domain root of the EAST-ADL2 structure
analysis level – representation of the analysis level
design level – representation of the design level
implementation level – representation of the implementation level
Functional Analysis Architecture – represents the system at analysis level
Functional Design Architecture – represents the software architecture at design
level
Hardware Design Architecture – represents the hardware architecture at design
level
FDALib – package that contains the implementations of the software
components
HDALib – package that contains the implementations of the hardware
components
Even though only the design level is used in the transformed model, the whole structure
is created in order to conform to the EAST-ADL2 profile.
3.3.2 Transformation of the System's SW/HW Architecture
For the transformation of the architecture, the QVT-o transformation utilizes two helper
functions called createFDALib and createHDALib, which transform the software
and hardware components from the CHESS model into EAST-ADL2 Design Function
Types and Hardware Component Types. These types are stored in separate packages
called FDALib and HDALib. The Design Function Types are based on software
components from the component view of the CHESS model, while the hardware
components are transformed from information given by the deployment view.
The information about the interconnection of the software components is transformed
into the Functional Design Architecture. At this point, instances of the Design Function
Types, so-called Design Function Prototypes, are created, which are typed by the
transformed software components.
The deployment is represented in the Hardware Design Architecture. This contains
properties stereotyped as hardware component prototypes, which represent
instances of the previously described hardware component types. The Hardware Design
Architecture thus serves as a description of the deployment.
3.3.3 Annotation and Transformation of Timing Constraints
Timing constraints modelled in the CHESS-ML are direct counterparts of the
TADL timing constraints used in EAST-ADL2. Therefore the mapping of these
elements is done in a straightforward manner. The stimulus and response of a
timing constraint are modelled in the CHESS-ML via UML TimeObservation instances.
The mapping to EAST-ADL2 is done by a helper function called
createEventTiming, which creates an event chain that handles the events
corresponding to the stimulus and the response annotated at ports in the CHESS
model.
3.4 STATIC ANALYSIS ON JAVA CODE
3.4.1 VeriFlux’s Bytecode Analysis
The heart of VeriFlux is a so-called points-to analysis, which statically computes the set
of values that program variables may possibly refer to at runtime. Simultaneously with the
points-to analysis, VeriFlux computes an invocation graph, taking advantage of the
points-to information in order to deal quite accurately with dynamic method dispatch.
After points-to sets and invocation graph have been computed, VeriFlux uses this
information to detect various possible run-time errors, including attempts to dereference
null-pointers, attempts of out-of-bounds array accesses, attempts of failing class casts,
possible deadlocks, and possible data races.
VeriFlux is designed to be sound, that is, it is designed to not miss any errors.
Technically, this is achieved by ensuring that the computed points-to set for each
program variable over-approximates the actual points-to sets at runtime. The statically
computed points-to set always contains at least all values that the variable points to at
runtime, but it may contain additional spurious values. For decidability reasons, spurious
values are unavoidable, resulting in VeriFlux sometimes issuing spurious warnings (so-
called false positives). A common goal of all sound static analysis tools is to keep the
number of false positives low (precision) while keeping the analysis sound and
efficient enough to be usable. VeriFlux strikes a good balance between precision and
efficiency by employing a form of context-sensitivity known as object-sensitivity. This
form of context-sensitivity is tailored to object-oriented programs and
distinguishes calling contexts based on the receiver parameter (this in Java).
3.4.2 Adherence to RTSJ Memory Management Discipline
To be able to verify the absence of IllegalAssignmentErrors and ScopeCycleErrors, the
analysis keeps track of context information for each method invocation. This context
information represents the current allocation context, which is identified by a memory
area instance. On a call to enter a memory area, the context is set to that memory area
for the invocation of the run method that executes in this area.
3.4.2.1 Absence of IllegalAssignmentErrors
The value that represents an allocated object includes the memory area context of the
invocation that allocates the object. This information can be used to check if an
IllegalAssignmentError might occur during runtime. The checks are carried out at those
store instructions, where the assigned value has a reference type. If the assigned
reference might be allocated in a memory area that is not equal to the target of the
assignment or to a parent of the target, then a possible IllegalAssignmentError will be
reported by the analysis.
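The check described above can be sketched as follows; the representation of memory areas as strings and the helper names are illustrative assumptions, not the actual VeriFlux implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the assignment check: an assignment is flagged
// unless the value's allocation area equals the target's area or is one
// of its (transitive) parents.
class AssignmentChecker {
    // Maps each memory area to its parent area (unregistered = root).
    private final Map<String, String> parent = new HashMap<>();

    void setParent(String area, String parentArea) {
        parent.put(area, parentArea);
    }

    private boolean isSameOrAncestor(String candidate, String area) {
        for (String a = area; a != null; a = parent.get(a)) {
            if (a.equals(candidate)) {
                return true;
            }
        }
        return false;
    }

    /** Returns true if a possible IllegalAssignmentError must be reported. */
    boolean mustWarn(String targetArea, String valueArea) {
        return !isSameOrAncestor(valueArea, targetArea);
    }
}
```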
3.4.2.2 Absence of ScopeCycleErrors
Verification of the absence of scope cycles is performed by recording an ordering
relation between scoped memory areas. This relation is updated whenever a scoped
memory area is entered in a context that uses another scoped memory area as a
surrounding allocation context. When a new entry is added to the ordering relation, it is
checked that this new entry respects the so-called single parent rule defined by the
RTSJ. If this is not the case, a possible ScopeCycleError is reported.
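A minimal sketch of such single-parent-rule bookkeeping (hypothetical and simplified with respect to the actual VeriFlux implementation) is:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the single-parent-rule check: when a scoped
// memory area is entered from a surrounding scoped allocation context,
// the pair is recorded; an area observed with two different parents
// violates the rule, indicating a possible ScopeCycleError.
class ScopeOrderChecker {
    private final Map<String, String> parentOf = new HashMap<>();

    /** Returns true if entering 'scope' from 'surrounding' is a possible violation. */
    boolean enter(String scope, String surrounding) {
        String known = parentOf.putIfAbsent(scope, surrounding);
        return known != null && !known.equals(surrounding);
    }
}
```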
3.4.3 Computation of Stack Sizes
As mentioned above, VeriFlux computes an invocation graph. Nodes in this graph
represent method invocations. Upper bounds on stack uses are computed by three depth-
first traversals of this invocation graph.
Recursive method calls correspond to cycles in the invocation graph. In order to
eliminate cycles, we first compute the strongly connected components (SCCs) of the
invocation graph. This computation requires two depth-first graph traversals, using a
textbook algorithm. Each SCC with at least one arc is then replaced by a single
node annotated with the sum of the sizes of all stack frames that correspond to
nodes (i.e., method invocations) in that SCC, multiplied by the maximal recursion depth
over all nodes in that SCC. The recursion depths are
obtained by evaluating the recursion-depth annotations of the invoked methods, or by using
the default recursion depth for methods that do not carry such an annotation.
All nodes that are not in an SCC with at least one arc are simply annotated with the
size of the stack frame of the corresponding method invocation.
After merging the SCCs, we are left with a directed acyclic graph (DAG) in which each
node is annotated with a positive integer; call this annotation the stack frame
size of the node. We want to compute a stack size for each node. To this end, we simply
add the stack frame size of the node to the maximum of the (recursively
computed) stack sizes of its successor nodes. This can be achieved, for all nodes, in a
single depth-first traversal of the DAG.
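The DAG step just described can be sketched with a memoized depth-first traversal (hypothetical code, not the actual VeriFlux implementation):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the final stack-size computation: after SCCs
// have been collapsed into single DAG nodes, the stack size of a node is
// its own stack frame size plus the maximum stack size of its successors.
class StackSizer {
    private final Map<String, Integer> frameSize;
    private final Map<String, List<String>> successors;
    private final Map<String, Integer> memo = new HashMap<>();

    StackSizer(Map<String, Integer> frameSize, Map<String, List<String>> successors) {
        this.frameSize = frameSize;
        this.successors = successors;
    }

    // Memoized depth-first traversal of the DAG.
    int stackSize(String node) {
        Integer cached = memo.get(node);
        if (cached != null) {
            return cached;
        }
        int maxSucc = 0;
        for (String s : successors.getOrDefault(node, List.of())) {
            maxSucc = Math.max(maxSucc, stackSize(s));
        }
        int size = frameSize.get(node) + maxSucc;
        memo.put(node, size);
        return size;
    }
}
```

For a graph main -> {f, g}, f -> g with frame sizes 10, 20 and 30, the bound for main would be 10 + max(20 + 30, 30) = 60.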
4. APPLICATION OF TRANSFORMERS
This section presents some general considerations on the application of the
transformations: general rules that models must fulfil for the consistent application of
the transformations, and some guidelines on where analysis results are placed in the models.
4.1 RULES FOR THE APPLICATION OF TRANSFORMERS
Source models must satisfy some constraints for the transformations to be applicable,
and the transformations assume a certain general structure in the source models. This
section introduces some general rules for the correct application of the transformations.
4.1.1 General rules for the application of scheduling analysis in CHESS ML
This section enumerates the requirements on the user model for schedulability analysis
to be enabled; it also provides guidance for the interpretation of the analysis results.
The schedulability analysis transformation performs the following chain of actions:
1. Verification of the user model (PIM);
2. Creation of a working copy of the user model in a working directory called
“schedulability_analysis”;
3. Transformation of the working copy to a platform-specific model (PSM) that
conforms to the computational model assumed by schedulability analysis, while
it also preserves the semantic meaning of the attribute settings made by the user;
this model – once confirmed by analysis – will be used as the basis for extra-
functional code generation;
4. Transformation of the PSM into a form suited for input to the third-party
schedulability analysis tool in use in CHESS;
5. Execution of the analysis and back propagation of its results to the PSM;
6. Further back propagation of the analysis results to the working copy of the PIM;
7. Replacement of the original user PIM with the working copy as modified by
back propagation.
4.1.1.1 Transformation requirements
In order for the transformation to take place, conformance verification of the user PIM is
performed. If verification fails, the user is prompted with a dialog box reporting
the errors detected in the PIM which prevent the transformation.
To avoid this problem, the user should mind the requirements reported in the
following.
Hardware System
There should be one and only one <CHGaResourcePlatform> Package in the
PIM. The Package is automatically created by the “Build Instance” command
executed over the Component that specifies the hardware platform in the
Deployment View; this Component must be stereotyped as <CHGaResourcePlatform>;
There should be at least one <Assign> Comment. For every <Assign>, the from
and to attributes should contain InstanceSpecification elements built by the
“Build Instance” command for the software and hardware system, respectively;
There should be exactly one <Assign> that references a given
InstanceSpecification in Assign.from;
HwBus.packetT should not be null; if unset, the default value
“(worst=1.0,unit=ms)” is used;
HwBus.speedFactor should not be null; if unset, the default value “(value=1.0)”
is used;
HwBus.blockT should not be null; if unset, the default value
“(worst=0.0,unit=ms)” is used.
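These defaults can be applied mechanically before the transformation runs. A sketch in Python, assuming a plain dictionary stands in for the HwBus stereotype application; this is a hypothetical helper, not part of the CHESS editor:

```python
# Default values applied when HwBus attributes are left null, as listed above.
HWBUS_DEFAULTS = {
    "packetT": "(worst=1.0,unit=ms)",
    "speedFactor": "(value=1.0)",
    "blockT": "(worst=0.0,unit=ms)",
}

def apply_hwbus_defaults(hwbus):
    """Return a copy of the HwBus attributes with missing values defaulted."""
    filled = dict(hwbus)
    for attr, default in HWBUS_DEFAULTS.items():
        if filled.get(attr) is None:
            filled[attr] = default
    return filled
```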
Intra-component bindings
All the PI operations should have an Activity specifying the intra-component
bindings, except those which do not depend on any RI;
These ICB Activities should not be set as the classifierBehavior attribute of the
component implementation owning the concerned operation; they should be
just in ownedBehavior;
In all the CallOperation nodes of the ICB Activities:
o The operation attribute should refer to an operation of an Interface;
o The port attribute should refer to the RI port.
Operations marked as cyclic cannot be called through a component instance
binding;
At least one operation of a RI on a component instance should be called in an
intra-component binding diagram of the component instance implementation.
Software System
All RI of component instances should be fulfilled;
The binding between a RI and a PI holds if and only if their respective interfaces
are compatible;
All operations in all PI of every component instance should be annotated with
<CHRtSpecification> Comments;
In every <CHRtSpecification> the attributes should be set as follows:
o partWithPort and context must always be valued with the property
owning the annotated port and the concerned operation, respectively;
o localWCET should always have a value compatible with the following
string pattern: “(unit = ms, value = N)” where N is a natural number.
The local WCET is the cost relative to the sole execution of the
concerned operation, excluding the execution cost of its required
interface.
For a cyclic operation the <CHRtSpecification> attributes should be:
o occKind with a string pattern compatible with
“periodic(period=(value=R,unit=ms))” where R is a non-negative real
number;
o rlDl with a string pattern compatible with “(value=R,unit=ms)” where R
is a non-negative real number; rlDl represents the relative deadline of
the relevant operation;
o protection equals “sequential”.
o relativePriority equals N where N is a non-negative natural number.
For a sporadic operation:
o occKind with a string pattern compatible with
“sporadic(minInterarrival=(value=R,unit=ms))” where R is a non-
negative real number;
o rlDl with a string pattern compatible with “(value=R,unit=ms)” where R
is a non-negative real number;
o protection equals “guarded”;
o relativePriority equals “N” where N is a non-negative natural number.
For a protected operation:
o protection equals “guarded”;
o ceiling equals “(value=N,source=req)” where N is a non-negative natural
number.
For an unprotected operation:
o protection equals “concurrent”.
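The string patterns required above are regular enough to be checked with regular expressions. A hedged sketch of such a validator; the patterns paraphrase the formats quoted in this section, and a real PIM verifier may be stricter about whitespace:

```python
import re

# Hypothetical validator, not part of the CHESS tooling.
PATTERNS = {
    "localWCET": r"\(unit\s*=\s*ms,\s*value\s*=\s*\d+\)",
    "periodic": r"periodic\(period=\(value=\d+(\.\d+)?,unit=ms\)\)",
    "sporadic": r"sporadic\(minInterarrival=\(value=\d+(\.\d+)?,unit=ms\)\)",
    "rlDl": r"\(value=\d+(\.\d+)?,unit=ms\)",
}

def matches(kind, text):
    """Return True if `text` fully matches the required pattern `kind`."""
    return re.fullmatch(PATTERNS[kind], text) is not None
```

A value such as “(value=-1,unit=ms)” is rejected because the patterns only admit non-negative numbers, mirroring the constraints above.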
4.1.2 General rules for the application of scheduling analysis in MARTE profile
4.1.2.1 General structure of analysis models
MARTE does not provide constraints to guarantee the consistency of analysis models.
GQAM and its subprofiles (SAM and PAM) are practically independent models, with
modelling elements attached to UML elements as stereotype applications. The problem
with this approach is that a stereotype application will not be taken into account in the
analysis if it is not integrated in the GQAM analysis model. To solve this problem we
must take care of the general analysis model structure:
<<SaAnalysisContext>>.workload: this must include the set of
<<GaWorkloadBehavior>> applications to take into account in the analysis.
<<SaAnalysisContext>>.platform: this must include the set of
<<GaResourcesPlatform>> applications to take into account in the analysis.
<<GaWorkloadBehavior>>.behavior: this must include the set of
<<GaScenario>> (or its specializations GaStep and SaStep) that must be taken
into account in the analysis.
<<GaWorkloadBehavior>>.demand: this includes the set of external workload
events to take into account in the analysis.
<<GaResourcesPlatform>>.resources: this includes the set of resource
applications (<<SaExecHost>>, <<SaSharedResource>> and
<<SchedulableResource>> and its specializations).
There are multiple references from behaviours to resources (e.g.
<<GaStep>>.concurRes), and from demands (workload events) to behaviours (e.g.
<<GaWorkloadEvent>>.effect).
The constraints that handle the general structure of analysis models are:
Analysis resources: all referenced resources must be included in
<<GaResourcesPlatform>>.resources.
Analysis behaviours: all referenced <<GaScenario>> (and specializations) must be
included in <<GaWorkloadBehavior>>.behavior.
Workload events effect: a workload event must have exactly one associated
behaviour effect.
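The three containment constraints above can be checked mechanically. A sketch in Python over a dict-based stand-in for the analysis context; the field names are illustrative, not MARTE property names:

```python
def check_analysis_structure(ctx):
    """Check the three structural constraints listed above.

    `ctx` holds 'platform_resources', 'behaviors', 'referenced_resources',
    'referenced_scenarios', and 'events' (event name -> list of effects).
    """
    problems = []
    missing_res = set(ctx["referenced_resources"]) - set(ctx["platform_resources"])
    if missing_res:
        problems.append(
            f"resources not in GaResourcesPlatform.resources: {sorted(missing_res)}")
    missing_beh = set(ctx["referenced_scenarios"]) - set(ctx["behaviors"])
    if missing_beh:
        problems.append(
            f"scenarios not in GaWorkloadBehavior.behavior: {sorted(missing_beh)}")
    for event, effects in ctx["events"].items():
        if len(effects) != 1:
            problems.append(
                f"workload event {event} has {len(effects)} effects (expected 1)")
    return problems
```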
4.1.2.2 MARTE Redundancies
MARTE is redundant in several values. The redundancy should be reduced as much as
possible to avoid inconsistencies and unexpected results. The redundancies that
affect scheduling analysis are:
<<SchedulableResource>> and <<GaWorkloadEvent>> are redundant because both
elements include parameters for the description of release events.
<<SchedulableResource>> includes release parameters from the resource reservation
and scheduling server viewpoint, while <<GaWorkloadEvent>> specifies release
events from the analysis workload viewpoint. But often they characterize the same
release parameter. For example, a periodic external event must be represented with a
<<GaWorkloadEvent>> and specified with a periodic arrival pattern. This periodic
event can be handled in a <<SchedulableResource>>, and the root step for the
<<GaWorkloadEvent>> should be handled by that <<SchedulableResource>>.
Consistency of <<GaWorkloadEvent>> and <<SchedulableResource>>: the arrival
pattern of a workload event handled in a step supported by a schedulable resource
must be consistent in pattern and temporal parameters. <<GaWorkloadEvent>> defines
the values handled in the analysis, but <<SchedulableResource>> is the main source
for code generation.
<<GaWorkloadEvent>> behaviour specification: the behaviour of a workload event is
characterized in the effect field. But there are alternative interpretations depending on
the values of properties in <<GaScenario>> and its specializations <<GaStep>> and
<<SaStep>>. The priority of interpretation is as follows:
1. If the <<GaStep>> has an associated outputRel step, the behaviour is a
sequence of steps and the next step is the one referenced by outputRel.
Next steps can include the specification of multiple steps. The steps property
must be empty.
2. If the <<GaStep>> includes a set of references in the steps property, outputRel
must be null. The step is replaced by the sequence of steps.
3. A step can specify the schedulable resource (concurRes) that executes it.
When the step is not included in a steps property and concurRes is null, the
schedulable resource that executes the step is the same as for the previous
step in the sequence. The root step of a workload event must specify
concurRes, or it can be specified in the first step included in the steps
specification.
4. When a step included in steps does not specify concurRes, the schedulable
resource that executes it is by default the concurRes specified by the parent
step; when this is null (and it is null for all its parents), the schedulable
resource is the concurRes of the previous step in the sequence.
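Rules 3 and 4 amount to a resolution order: a step's own concurRes, then its nearest parent's, then the effective resource of the previous step in the sequence. A sketch in Python over a dict-based step model (hypothetical representation, not the MARTE metamodel):

```python
def ancestor_concur_res(step):
    """Nearest explicitly set concurRes on the step or its parents, else None."""
    while step is not None:
        if step.get("concurRes") is not None:
            return step["concurRes"]
        step = step.get("parent")
    return None

def resolve_concur_res(sequence):
    """Assign an effective schedulable resource to each step in a sequence,
    following the default rules above: own value, then parents, then the
    previous step in the sequence."""
    resolved = []
    previous = None
    for step in sequence:
        res = ancestor_concur_res(step) or previous
        if res is None:
            raise ValueError("root step must specify concurRes")
        resolved.append(res)
        previous = res
    return resolved
```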
Only sequences of steps are handled: the behaviour that represents the effect of a
workload event must be represented as a sequence of steps. MAST and MARTE
support parallel and alternative precedence (among others), but these are not
supported in the analysis. Because of that, the transformers only handle precedence
sequences.
Execution time of a step with sub-steps: a step that includes sub-steps in its steps
property should not specify an execution time (the sub-steps specify their own
execution times). The same restriction applies to a <<GaScenario>> that has an
associated root step.
NFP_Duration is used for the specification of many time values (e.g. periods,
execution times, minInterarrival). NFP_Duration has the associated properties value,
worst and best.
NFP_Duration properties: for time values that represent worst-case values (e.g. worst-
case execution time, worst-case response time, blocking times), worst is the property
handled and value is ignored. For time values that represent a single value (e.g.
period, initialBudget, deadline), value is the property handled and worst is ignored.
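The worst-vs-value convention above is easy to get wrong when reading models programmatically. A small helper sketch (Python, dict-based NFP_Duration, hypothetical):

```python
def effective_duration(nfp, kind):
    """Pick the NFP_Duration property that the analysis actually reads.

    kind='worst-case' (WCET, response time, blocking): read 'worst', ignore 'value'.
    kind='single' (period, initialBudget, deadline): read 'value', ignore 'worst'.
    """
    if kind == "worst-case":
        return nfp.get("worst")
    if kind == "single":
        return nfp.get("value")
    raise ValueError(f"unknown kind: {kind}")
```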
The MARTE profile frequently supports multiple values for temporal attributes such as
arrival patterns and execution times, and for some other attributes such as the
schedulable resource that executes a single step.
Multiplicity of timing values: by default the multiplicity of timing properties is a single
value.
Multiple execTime and usageResource: <<ResourceUsage>> and its specializations
(e.g. GaScenario, GaStep, SaStep) can have multiple associated used resources
(property usageResource). For each resource there must be an execution time
(execTime); both attributes are ordered.
4.1.2.3 Time Units
MARTE time units include any time unit (e.g. second, millisecond) and in particular
the tick unit. Most time values in MARTE have an associated unit.
MAST does not handle time units; it assumes that all time values are expressed in the
same unit. Because of this we must convert MARTE units into a single time unit.
Tick and other time units cannot be combined: A model cannot combine tick with other
time units.
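The unit unification and the tick restriction can be sketched together. This is an illustrative Python helper under the stated assumption that tick has no fixed physical duration; it is not the CHESS converter:

```python
# Conversion factors to seconds for common MARTE time units.
TO_SECONDS = {"s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}

def unify(values, target_unit):
    """Convert (value, unit) pairs to a single target unit.

    'tick' has no fixed physical duration, so a model that uses it must
    use it exclusively (and the target unit must also be 'tick').
    """
    units = {unit for _, unit in values}
    if "tick" in units or target_unit == "tick":
        if units <= {"tick"} and target_unit == "tick":
            return [value for value, _ in values]
        raise ValueError("tick cannot be combined with other time units")
    factor = TO_SECONDS[target_unit]
    return [value * TO_SECONDS[unit] / factor for value, unit in values]
```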
4.1.2.4 Resources Rules
When an executing resource has no associated scheduler, a default scheduler will be
created in the analysis. Schedulable resources must have an associated scheduler and
an associated executing resource.
<<SchedulableResource>> must specify an executing host or a scheduler: a schedulable
resource must specify the host or the dependentScheduler (the dependentScheduler
specifies the host).
A <<SchedulableResource>> must provide a single scheduling parameter: the
schedulable resource application must include one and only one scheduling parameter,
and, for the analysis, the scheduling parameter must specify at least the priority or the
deadline. Code generation requires the remaining parameters.
Shared Resource with Ceiling Protocol must specify ceiling: <<SaSharedResource>>
with protection kind PriorityCeiling must include ceiling value.
Shared Resource Protocols Handled: the protocols handled are PriorityCeiling,
PriorityInheritance and StackBased.
Multiplicity of ISRprioRange and schedPriRange attributes in <<SaExecHost>> is 2:
the multiplicity of ISRprioRange and schedPriRange attributes must be 0 or 2.
Policies handled are EDF and Priority: Scheduling policies handled in executable
resources are EarliestDeadlineFirst and FixedPriority.
4.1.3 General rules for the application of deployment analysis
The deployment configuration analysis performs the following actions:
1. Top Down: Mapping analysis + schedulability configuration + bus access
configuration
a. Checking if needed information is provided
b. Creation of system model as input for the analysis
c. Execution of analysis
d. Back-propagation of results to working copy
2. Bottom-Up Mapping Analysis
a. Checking if needed information is provided
b. Creation of system model as input for the analysis
c. Creation of instrumented source code
d. Manual user steps (see below)
e. Back-propagation of execution results to system model
f. Execution of analysis
g. Back-propagation of results to working copy
Manual user steps (step 2.d):
Write test bench
Compile and execute code
Start second phase of analysis
4.1.3.1 Transformation Requirements
This section describes the requirements for the transformations that perform the
analyses. Some requirements apply only to certain analyses. The following acronyms
identify the analyses:
(MapTD) Top-Down Mapping Analysis
(MapBU) Bottom-Up Mapping Analysis
(Sched) Scheduling configuration
(Bus) Bus access configuration
Software System
All RI of component instances should be fulfilled;
The binding between a RI and a PI holds if and only if their respective interfaces
are compatible;
All operations in all PI of every component instance should be annotated with
<CHRtSpecification> Comments;
(MapBU) Link component implementations in SystemC in
<ComponentImplementation>.sourceCodeLocation
(Sched) (Bus) In every <CHRtSpecification> the following attributes must be
set:
o partWithPort and context must always be valued with the property
owning the annotated port and the concerned operation, respectively;
o localWCET should always have a value compatible with the following
string pattern: “(unit = ms, value = N)” where N is a natural number.
The local WCET is the cost relative to the sole execution of the
concerned operation, excluding the execution cost of its required
interface.
For a cyclic operation the <CHRtSpecification> attributes should be:
o occKind with a string pattern compatible with
“periodic(period=(value=R,unit=ms))” where R is a non-negative real
number;
o rlDl with a string pattern compatible with “(value=R,unit=ms)” where R
is a non-negative real number; rlDl represents the relative deadline of
the relevant operation;
o (Bus) relativePriority equals N where N is a non-negative natural
number.
For a sporadic operation:
o occKind with a string pattern compatible with
“sporadic(minInterarrival=(value=R,unit=ms))” where R is a non-
negative real number;
o rlDl with a string pattern compatible with “(value=R,unit=ms)” where R
is a non-negative real number;
o (Bus) relativePriority equals “N” where N is a non-negative natural
number.
Intra-component bindings
All the PI operations should have an Activity specifying the intra-component
bindings, except those which do not depend on any RI;
These ICB Activities should not be set as the classifierBehavior attribute of the
component implementation owning the concerned operation; they should be
just in ownedBehavior;
(MapTD) Describe computational effort for components.
o Use activity diagrams
o Apply <cHControlFlow> to the control flow edges of an activity
diagram.
Define:
Repetitions: the number of executions of this edge when the interface is
called once
Order: if one node has more than one outgoing edge (loops), the order
describes the execution order (e.g. 0 for the loop backward
edge, 1 for the outgoing edge of the loop)
compComplex: adds the computational complexity to the edge.
Define
<ComputeComplexity>
<OperationCount>
<*SW_DataType>
Hardware System
Define computational components and stereotype them with
<CH_HWProcessor>
o (MapTD) Define characteristics
dataType
<HWDataType> and <DataTypeExecution> as nested
classifiers of the processor component. Every
<HWDataType> needs at least one <DataTypeExecution>
architecture
frequency
nbPipelines
speedfactor
Define communication components and stereotype them with <CH_HWBus>
o Define attribute bandwidth
To link the components of interest to the analysis context, the user needs to
execute the build instance and link the <CHGaResourcePlatform> platforms of
interest to the analysis context attribute platform.
(MapTD) Map <*SWDataType> to <HWDatatype> using the
<DataTypeAssign> stereotype
(Sched) There should be at least one <Assign> Comment. For every <Assign>,
the from and to attributes should contain InstanceSpecification elements built
by the “Build Instance” command for the software and hardware system,
respectively;
Analysis Context
Define the Analysis Context for each analysis
Specify the hardware and software components of interest for the analysis within
the context
(Bus) Link FIBEX file in <BusConfigurationAnalysis>.inputFibex
4.1.4 Rules for the Application of the CHESS-ML to EAST-ADL2 Transformation
In order to apply the transformation of the CHESS-ML model into a conforming
EAST-ADL2 model, the given input model has to satisfy certain structural
requirements. Otherwise, problems in the analysis process or in the transformation
could occur that may lead to incorrect analysis results. The following therefore
describes how the model has to be designed.
Component View:
Every Software Component has to apply the UML-stereotype
“ComponentType” provided by the CHESS Component Model Package.
Every Software Instance has to be typed by a corresponding Component
Implementation that applies the UML-stereotype “ComponentImplementation”
provided by the CHESS Component Model Package
Every Component Implementation needs a realization dependency to a
corresponding Software Component
If a constraint is modelled onto a port, this is done via linked comments that
apply either the stereotype “CHRtSpecification” in case of repetition constraints
or else the stereotypes “CHInputSynchronisationConstraint” and
“CHOutputSynchronisationConstraint”
Deployment View:
Every Bus has to apply the UML-stereotype “CH_HwBus” provided by the
CHESS Hardware Baseline Package
Every Node has to apply the UML-stereotype “CH_HwProcessor” provided by
the CHESS Hardware Baseline Package
For each software component instance, a computation node has to be assigned.
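The Deployment View rules above can be checked by a small stereotype audit. A sketch in Python over a dict-based element list (hypothetical representation of the UML model):

```python
# Required stereotype per element kind, per the Deployment View rules above.
REQUIRED_STEREOTYPES = {
    "Bus": "CH_HwBus",
    "Node": "CH_HwProcessor",
}

def check_deployment_view(elements):
    """Report buses/nodes missing their required stereotype and software
    instances without an assigned computation node."""
    problems = []
    for el in elements:
        required = REQUIRED_STEREOTYPES.get(el["kind"])
        if required and required not in el.get("stereotypes", []):
            problems.append(f"{el['name']}: missing stereotype {required}")
        if el["kind"] == "SwInstance" and not el.get("assignedNode"):
            problems.append(f"{el['name']}: no computation node assigned")
    return problems
```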
4.2 GUIDES FOR THE INTERPRETATION OF ANALYSIS RESULTS
This section introduces some general approaches to the interpretation of analysis
results: the results provided and the modelling elements that hold their values. The
section does not centre on the analysis tools themselves; the precise interpretation of
results requires knowledge of those tools.
4.2.1 Interpretation of scheduling analysis results in CHESS ML
The analysis results that are back-propagated to the user PIM from PSM are as follows:
The worst-case response time for cyclic and sporadic operations. The
corresponding attribute is specified by CHRtSpecification.respT
The worst-case blocking time for cyclic and sporadic operations. The
corresponding attribute is specified by CHRtSpecification.blockT
The utilization for processors and busses. The corresponding attribute is
specified by CH_HwBus.utilization (or that of
CH_HwProcessor/CH_HwComputingResource)
The ceiling priority for protected operations is currently not reported; however,
it is specified by CHRtSpecification.ceiling
4.2.2 Interpretation of scheduling analysis results in MARTE Profile
The execution of the MAST analysis for MARTE models provides an analysis results
model. This model includes modelling elements for the representation of:
System slack: the percentage of system slack.
Scenario results (in MAST terminology, transaction results): the results for
each MARTE GaScenario include:
o Step results (in MAST terminology, operation results): the results
for each MARTE step are:
Worst-case global response time for the scenario. The global time
represents the global response time for the workload event
associated to the GaScenario.
Best-case global response time for the scenario.
Jitter associated to the workload event.
o Number of suspensions that can occur during the execution of the
scenario.
o Worst blocking times for the scenario.
Host results (in MAST terminology, processing resource results): the
results for each processing resource are:
o Utilization: percentage of utilization of the resource.
o Slack: percentage of slack of the resource.
Shared resource results: for each shared resource the results are:
o Ceiling.
The MARTE profile includes several properties for the representation of scheduling
analysis results, but there are no properties for the representation of all MAST
results. The MAST properties handled in the MARTE profile are:
<<SaExecHost>>.utilization: the percentage of utilization of the resource.
<<SaExecHost>>.slack: the slack of the resource.
<<SaStep>>.nonpreemptionBlocking: this represents the blocking time for the
step.
<<SaStep>>.numberSelfSuspensions: this represents the number of suspensions
for the step.
<<SaStep>>.respT: this represents the worst case response time for the step.
<<GaScenario>>.respT: this represents the global worst-case response time for
the scenario that handles the workload event.
<<SaSharedResource>>.ceiling: this is the ceiling for the shared resource.
<<SaAnalysisContext>>.isSched: this boolean value holds the global result
of the analysis (it is true when the analysed system is schedulable).
4.2.3 Interpretation of Deployment Analysis Results
Mapping Configuration
Assign statements are back-propagated to the model and linked in
MappingConfigurationAnalysis.resultingMapping
Scheduling Configuration
Priorities are back-annotated to the <CHRtSpecification>.relativePriority
attribute
Bus access Configuration
The resulting FIBEX file is linked in <BusConfigurationAnalysis>
.resultingBusConfig
Communication latencies are back-annotated in <BusConfigurationAnalysis>
.resultingCommLatencies
4.2.4 Interpretation of the simulation-based analysis results
For the interpretation of the simulation-based analysis, the results are visualized using
a self-developed eclipse plug-in. This visualisation connects the analysis of the
modelled constraints with the traces collected from the simulation. The following
describes the interpretation of the visualised analysis results.
Figure 4-1: Screenshot of the visualisation
After the analysis of the simulation traces is done, the user can open the visualisation
by right-clicking the resulting “.dynasim” file and selecting the option “visualisation”
in the context menu.
The visualisation consists of two views, which are integrated into the eclipse
environment. The first view, called “Component View”, presents all components
and connectors of the model. Here the user selects the elements to analyse.
If one or more elements are selected, the user can open the visualisation of the
analysis by pressing the “visualize” button.
This opens the second view, called “Constraint View”. Here the visualisation plug-in
presents every constraint that is connected to the selected model elements, every
event related to the constraint, and a graphical representation that shows whether
the constraint was violated.
Appendix A: Reference Manual for Transformation Application
A.1. SCHEDULING ANALYSIS
A.1.1 Transformation from MARTE-PSM to MAST
A.1.1.1 Execution of MARTE to MAST Transformations
For the execution of the MARTE to MAST generation we must configure what kind of
generation we are going to do. The configuration is invoked at the menu entry Run-
>MARTE_SA_Gen Configuration. We must provide two parameters:
1. Unification of time units: MARTE includes a time unit in the specification of any
duration and time value. MAST assumes that all durations and times are represented
in the same unit, which can be any time unit, or ticks. This configuration parameter
defines the common unit into which MARTE values are converted. The tick unit
cannot be converted to a time unit (some additional parameters would be needed
for that configuration); because of that, the tick unit cannot be combined with other
units.
2. The second parameter defines the type of code to generate together with the
analysis: RTSJ generates Java code with the same behaviour as the analysed model;
AdaRavenscar generates Ada code and the analysis model; NonCode generates only
the MAST model.
After configuration we can invoke the MAST model generation: Run-
>MARTE_SA_Gen Commands->Generate Ada/RTSJ and MAST (see Figure A-1).
We must select a UML modelling element annotated with the <<SaAnalysisContext>>
stereotype to enable this command. The invocation of this command takes some time
to execute, because QVT-o has to compile the QVT-o libraries and load many ecore
metamodels and profiles. After several seconds a wizard allows configuring the
<<SaAnalysisContext>> that will be the root of the generation; the default is the
selected element and, in general, we do not need to reconfigure this parameter. The
generators then produce the following models:
SAM2MAST_ravenscar_RTSJ_mast_model.mast_mdl: This is the MAST model
that will be analysed in the next subsection.
SAM2MAST_ravenscar_RTSJ_neutral_ada.uml: This is an intermediate UML
model based on UML 2.2 L0, which will be the input for the MOF2Text code
generators for Ada.
SAM2MAST_ravenscar_RTSJ_neutral_java.javaxmi: This is an Abstract Syntax
Tree of the Java program that represents the generated Java code. This language
is based on the JDT (Java Development Tools) Java model, the same model that
the Eclipse Modisco project uses (the Java model in JDT is the set of classes that
model the objects associated with creating, editing, and building a Java
program). This model represents the Java language abstract syntax and some
additional elements such as compilation units and Java projects. We have
modified the Modisco ecore model because the Modisco Java model is designed
for reverse engineering and some additional elements are needed for forward
engineering; we base our generator on model libraries (for the representation
of RTSJ elements that cannot be modified during code generation).
SAM2MAST_ravenscar_RTSJ_trace.qvtotrace: This is the QVT-o trace model
that defines the relations between source and target modelling elements.
Figure A-1: Configuration of analysis generator
A.1.1.2 Execution of MAST Analysis
The MAST_RMA asset has two associated commands:
Run->MAST_RMA Commands->Start MAST Analysis: this command invokes
the MAST analysis for a MAST eclipse model; to execute it, the model element
MASTMODELType must be selected (see Figure A-2). This is the root element
of MAST models in eclipse.
Run->MAST_RMA Configuration: we use this menu command to configure the
analysis tool to be used in the MAST analysis. MAST includes different analysis
tools for the scheduling analysis (see
http://mast.unican.es/mast_analysis_techniques.pdf). The application of the
different tools imposes different restrictions on source models (see
http://mast.unican.es/mast_restrictions.pdf). This menu configures the analysis
tool; the default tool is holistic.
Start MAST Analysis is a command available at Run->MAST_RMA Commands->Start
MAST Analysis. This command starts the execution of the MAST analysis programs
embedded in the MAST_RMA asset.
Figure A-2: Start Analysis Command
Start MAST Analysis is active when a MASTModelType modelling element in a MAST
model is selected; it will use this model as the input model for MAST.
Start MAST Analysis opens a File Wizard for the selection of an output file in which to
save the analysis results. This file should have the extension .mast_res and will contain
the result of the analysis. We can open these files with the mast_res model editor
included in the MAST_RMA asset (see Figure A-3).
Figure A-3: Results of Analysis Command in Console
The execution of Start MAST Analysis prints some results (the standard output of the
MAST analysis tools) in the eclipse Console, together with some additional
information about the analysis. When the scheduling analysis is not possible, the
command does not generate the output file.
A.1.2 Execution of MAST Results to MARTE Transformation
This is a transformation with two input models and one input-output model:
1. Input model SAM2MAST_ravenscar_RTSJ_mast_model.mast_res: in section 0, when
MAST analysis is executed without problems, the MAST results model is generated.
2. Input model SAM2MAST_ravenscar_RTSJ_trace.qvtotrace: in section 0 the
generation of MAST model generated the QVT-o trace model that defines the
relations between source elements (UML+SAM MARTE model) and target
elements (MAST model). This round-trip transformation reuses these relations to
identify the source elements related with results.
3. Input-Output model UML+SAM MARTE source model: in section 0 we used a
UML+SAM MARTE to generate the MAST model. This must be the model that we
will use to update result.
To execute this round-trip transformation we must load all these models in the same
editing resource set: we open the UML model with a UML2 editor and load the other
two models (QVT-o trace and MAST results) into the resource set. We must then
select the <<SaAnalysisContext>> source of the analysis and invoke the command
Run->MARTE_SA_Gen Commands->Update MARTE::SAM with Results. This
command requires three input parameters:
1. The first parameter is the <<SaAnalysisContext>> modelling element; it is pre-
configured with the selected element.
2. For the second parameter we must select the element mast_res.impl.
REALTIMESITUATIONTypeImpl, which is the root of the analysis results in the
MAST results model (this element is in the
SAM2MAST_ravenscar_RTSJ_mast_model.mast_res model that we loaded in the
resource set).
3. For the third parameter we must select the element org.eclipse.m2m.internal.qvt.oml.
trace.impl.TraceImpl, which is included in the SAM2MAST_ravenscar_RTSJ_trace
.qvtotrace model.
One problem is not yet well solved in this round-trip transformation. MARTE
includes a property, respT, in GaScenario, GaStep and SaStep. This property represents
the response time of the step, or the response time of the workload event when the step
is the behaviour of that workload event.
From the analysis and MAST point of view we handle two different values:
1. The deadline for the workload event or for the step. This is an input value for the
analysis: a constraint that the behaviour must satisfy. Typically this is the
end of the period, but it may be earlier, or the minimum inter-arrival time
for sporadic events.
2. The worst-case response time for the step or workload event. This is a result value.
The analysis computes the worst-case response time taking into account the
worst-case blocking, worst-case pre-emption and worst-case execution times. This
time should satisfy the respT defined as the input value. However, there is no
specific property in which to store this result, so we write it to the respT property
whenever the source model has no respT value (the default should be the end of the
period). The problem is that if we do not remove this value from the source model,
it will be used as an input value and handled as a deadline, which can cause
problems.
This problem could be solved by modifying MARTE to represent the two
different values with two different properties: deadline (an input value) and worst-case
response time (a result value).
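The relation between the two values can be illustrated with the classic fixed-priority response-time recurrence. The sketch below is illustrative only: it ignores blocking, jitter and offsets (which MAST does take into account) and is not the MAST implementation; task parameters are invented examples.

```java
/**
 * Illustrative sketch of fixed-priority response-time analysis.
 * The WCRT is the fixed point of R = C_i + sum_{j<i} ceil(R/T_j) * C_j,
 * and schedulability means WCRT <= deadline (the input respT).
 */
public class ResponseTimeSketch {

    /**
     * wcet[] and period[] are ordered by decreasing priority.
     * Returns the worst-case response time of task i, or -1 if the
     * recurrence exceeds the deadline (the task is unschedulable).
     */
    public static double responseTime(double[] wcet, double[] period,
                                      int i, double deadline) {
        double r = wcet[i];
        while (true) {
            double interference = 0;
            for (int j = 0; j < i; j++) {                 // higher-priority tasks
                interference += Math.ceil(r / period[j]) * wcet[j];
            }
            double next = wcet[i] + interference;
            if (next > deadline) return -1;               // deadline miss
            if (next == r) return r;                      // fixed point: WCRT
            r = next;
        }
    }

    public static void main(String[] args) {
        double[] c = {1, 2, 3};    // example WCETs
        double[] t = {4, 6, 12};   // example periods, deadline = period
        for (int i = 0; i < c.length; i++) {
            // prints WCRT = 1.0, 3.0 and 10.0 for the three tasks
            System.out.println("task " + i + " WCRT = "
                               + responseTime(c, t, i, t[i]));
        }
    }
}
```

The sketch makes the conflict concrete: the deadline is the bound passed in, while the WCRT is the computed fixed point, and overwriting the former with the latter changes the input of any subsequent run.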
A.2 DEPLOYMENT ANALYSIS
A.2.1 Execution of Deployment Analysis
For the execution of the deployment analysis the CHESS menu provides four
menu items, depicted in Figure A-4.
Figure A-4: Invoke Analysis
There is one menu item for each of the three types of analysis (mapping,
scheduling, and bus access). Each transformation generates the input model for the
corresponding analysis. The fourth menu item triggers the import of the bottom-up
mapping analysis. Because the source-code profiling approach requires some manual
work from the designer, an additional transformation is needed.
If the mapping determination analysis is triggered with the bottom-up flag set to true,
the source code linked to the software components is abstracted to an abstract syntax
tree representation, instrumented, and transformed back to source code. The user then
has to write a testbench, compile the code and execute it. The profiling results are
written to a file, which must be specified during the import step.
A.3 SIMULATION BASED ANALYSIS
A.3.1 Execution of the Analysis
The analysis is executed via the context menu. To show the analysis
menu, the user has to right-click on a model file. The context menu will show a sub-item
called “DynaSim”.
This is the plug-in that performs the simulation-based analysis. Inside this sub-menu, the
user can find the entry “Run Analysis”, which leads to the selection of the given
timing constraints. The sub-menu will show all timing constraints that are applicable to
the chosen model. It is also possible to select multiple models; in that case, the
DynaSim sub-menu will show all constraints that are applicable to every selected
model.
A.3.2 Execution of Preferred Analyses
If it is necessary to execute multiple analyses for one or more models, this is done
via the menu item “Run Preferred Analyses” in the DynaSim context menu. After
selection, a dialog opens that displays all analyses applicable to the
selected models.
The user can then select the preferred set of analyses and run them by clicking the
OK button.
A.3.3 Extension of the Analysis
For adding custom analyses, the Eclipse plug-in concept has the advantage that they
can be implemented via a dedicated extension point. The analysis plug-in acts as a
framework and offers a convenient way to add arbitrary analyses in Eclipse that
are based on the gathered simulation traces. The framework handles initialization,
execution, user interaction and exception handling.
To create a new analysis, the developer has to create a new plug-in project in Eclipse
and add the analysis framework to the plug-in dependencies. It is then possible to add
the extension point of the framework and right-click on it in the manifest description. A
sub-menu opens that provides the option to add a new analysis.
Once the analysis is created, the developer has to define a Java class that contains the
code to execute the analysis. It is also possible to define a custom name and a specific
file extension to which the analysis applies.
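Such an analysis class might look like the sketch below. The interface name, its methods and the trace format are purely hypothetical placeholders: the real contract is defined by the DynaSim framework's extension point, which should be consulted in the plug-in's manifest.

```java
import java.util.List;

/**
 * HYPOTHETICAL interface standing in for the DynaSim extension-point
 * contract; the actual interface name and signatures are defined by the
 * framework, not by this document.
 */
interface ITraceAnalysis {
    String getName();                        // name shown in the analysis menu
    String analyze(List<String> traceLines); // evaluate gathered trace lines
}

/** Example analysis: count deadline misses in a (hypothetical) text trace. */
class DeadlineMissCounter implements ITraceAnalysis {
    public String getName() {
        return "Deadline-miss counter";
    }

    public String analyze(List<String> traceLines) {
        long misses = traceLines.stream()
                                .filter(l -> l.contains("DEADLINE_MISS"))
                                .count();
        return misses + " deadline miss(es) found";
    }
}

public class CustomAnalysisSketch {
    public static void main(String[] args) {
        ITraceAnalysis analysis = new DeadlineMissCounter();
        System.out.println(analysis.analyze(
            List.of("t=1 START", "t=5 DEADLINE_MISS", "t=9 END")));
    }
}
```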
A.4 STATIC ANALYSIS ON JAVA CODE
A.4.1 Installing and Starting VeriFlux
VeriFlux is distributed as a zip archive; unpack it. The archive contains a file
VeriFlux.jar. A user manual is included in the doc directory.
VeriFlux also requires the rt.jar included in the Jamaica distribution. This rt.jar
contains Java’s standard classes, the RTSJ classes and Jamaica-specific Java classes.
The rt.jar is needed because VeriFlux performs a whole-program analysis that analyzes
all library methods called from the analyzed program.
VeriFlux can be run with any Java VM. We recommend using the default Java VM that
is installed on your development machine, as invoked with the ‘java’ command.
Starting VeriFlux from the Command Line
Typing 'java -jar VeriFlux.jar' on the command line will start the VeriFlux GUI.
Starting VeriFlux from Eclipse
For the CHESS project, we recommend starting VeriFlux from Eclipse. To this end, you
need to install the Remote Systems Explorer plugin:
Help > Install New Software
Select the Indigo Update Site: http://download.eclipse.org/releases/indigo/
Under ‘General Purpose Tools’ select 'Remote System Explorer End-User
Runtime' and install it.
The Remote Systems Explorer allows you to start a shell from within Eclipse. To this
end, do the following:
Open the 'Remote Systems' view.
In that view, right-click 'Local Shells' and select 'Launch Shell'.
You can now enter shell commands into the Command box at the bottom.
Start the VeriFlux GUI by entering 'java -jar VeriFlux.jar' into the command
box.
Configuring the Analysis
Next, VeriFlux needs to be configured for the analysis.
In the 'System' tab of the VeriFlux GUI, enter the following:
o 'System Class Path': Enter here the path to the rt.jar of your Jamaica
distribution. This path has the form
<Path-to-Jamaica>/target/<specific-target>/lib/rt.jar.
o You can leave the other fields of the 'System' tab empty (and ignore
warning messages that you may receive later on). Figure A-5 shows a
completed System tab.
In the 'Application' tab of the VeriFlux GUI, enter the following:
o 'Application Class Path': Enter the path to your application classes. In an
Eclipse Java project, this has the form <Path-to-Project>/bin by default.
o 'Application Source Path': Enter here the path to your application
sources. In an Eclipse Java project, this has the form
<Path-to-Project>/src by default.
o 'Analysis Temporary Directory': Enter a directory where VeriFlux will
store text files with the analysis results. VeriFlux produces two result
files per analysis run, with the suffixes .dfa_results and
.dfa_results.summary. You can, for instance, simply use your Eclipse
project’s root directory as the 'Analysis Temporary Directory'. VeriFlux
also reports its analysis results in the GUI, but the text files provide more
detail and preserve the results.
o 'Entry Point': Enter here the entry point for the analysis. You need to
either enter the fully qualified name of your program’s main method, or
the fully qualified name of your program’s main class. In the former
case, VeriFlux uses the main method as the entry point for the analysis.
In the latter case, VeriFlux uses all public methods of the main class as
entry points for the analysis.
o You can leave all other fields of the 'Application' tab empty. Figure A-6
shows a completed 'Application' tab.
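To illustrate the two forms of entry point, consider the minimal program below. The class and method names are placeholders invented for this example, not part of any CHESS case study.

```java
/**
 * Illustrative application for the 'Entry Point' field.
 * - Entering the fully qualified name of main (e.g. 'Periodic.main',
 *   with a package prefix if any) makes VeriFlux analyze only the code
 *   reachable from main().
 * - Entering just the class name (e.g. 'Periodic') makes VeriFlux use
 *   all public methods of the class (here: main and tick) as entry points.
 */
public class Periodic {

    public static void main(String[] args) {
        System.out.println(tick());
    }

    public static String tick() {
        return "tick";
    }
}
```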
In the 'Analysis' tab of the VeriFlux GUI, you need to set the parameters for the
analysis. The various options are explained in detail in the VeriFlux manual.
You can also view short explanations of the options by hovering over the
respective option. Figure A-7 and Figure A-8 show the recommended analysis
configurations for IllegalAssignmentErrors and stack sizes, respectively.
Running the Analysis
In order to run the analysis, push the 'Analyze' button at the bottom. VeriFlux performs
a computationally expensive analysis that may run for a while. If you click on the
Console tab, you can see messages that report on the progress.
Interpreting the Analysis Results
After the analysis has completed, the results of an RTSJ memory management
analysis (absence of IllegalAssignmentErrors and ScopeCycleErrors) can be
viewed in the Result tab. See Figure 2-20 in Section 2.4.2 for an example.
The results of a stack size analysis can be viewed in the Console tab. See Figure 2-21
in Section 2.4.3 for an example.
See the VeriFlux user manual, included in the VeriFlux distribution, for further
details on how to interpret the analysis results.
Figure A-5: A completed System Tab
Figure A-6: A completed Application Tab
Figure A-7: A completed Analysis Tab for Scoped Memory Analysis
Figure A-8: A completed Analysis Tab for Stack Size Analysis