
Managing Non-functional Uncertainty via Model-Driven Adaptivity

Carlo Ghezzi∗, Leandro Sales Pinto∗, Paola Spoletini†, and Giordano Tamburrelli∗
∗ DeepSE, Politecnico di Milano, Italy. {ghezzi|pinto|tamburrelli}@elet.polimi.it

† Università dell'Insubria, Italy. [email protected]

Abstract—Modern software systems are often characterized by uncertainty and changes in the environment in which they are embedded. Hence, they must be designed as adaptive systems. We propose a framework that supports adaptation to non-functional manifestations of uncertainty. Our framework allows engineers to derive, from an initial model of the system, a finite state automaton augmented with probabilities. The system is then executed by an interpreter that navigates the automaton and invokes the component implementations associated with the states it traverses. The interpreter adapts the execution by choosing among alternative possible paths of the automaton in order to maximize the system's ability to meet its non-functional requirements. To demonstrate the adaptation capabilities of the proposed approach, we implemented an adaptive application inspired by an existing worldwide distributed mobile application and we discuss several adaptation scenarios.

I. INTRODUCTION

Modern software systems are characterized by increased complexity, for example, in terms of size as well as geographical distribution. Moreover, engineers increasingly design systems by relying on components operated by third-party organizations. All these factors introduce many sources of uncertainty in designing software that should guarantee certain quality requirements. For example, a component operated by a third-party organization has an uncertain execution time and failure rate. This means that, at design time, engineers can make certain assumptions, but these may be invalidated at run time. Indeed, uncertainty may subvert design-time assumptions, jeopardizing the system's ability to meet its requirements. The execution time of a remote component may increase unexpectedly, affecting the response time of the overall system, or, similarly, a component failure may affect the system's reliability. Both cases may lead to potential violations of non-functional requirements that cannot be tolerated.

To cope with uncertainty, software can be designed as an adaptive system [4]. In practice, engineers typically design systems by explicitly programming alternative behaviors and by heavily using exception handling techniques to adapt the execution to detected changes in the environment that reify the sources of uncertainty. This is quite hard per se and cannot be done by inexperienced developers. In addition, using this approach, such alternative behaviors cannot be kept separated from each other and from the exception handling code. As a result, the architecture and the code are hard to understand and maintain.

To overcome this limitation, we present ADAM:1 a model-driven framework conceived to support the development and execution of software that tolerates manifestations of uncertainty by self-adapting to changes in the environment, trying to do its best to satisfy certain non-functional requirements. In particular, we support adaptation aimed at mitigating the non-functional uncertainty concerning (1) the response time and (2) the faulty behavior of components integrated in a composite application. The proposed solution models the system as a workflow of abstract functionalities. For each of them, one or more implementations are provided.

The approach then derives a finite state automaton augmented with probabilities, in which each state of the automaton represents an implementation of an abstract functionality of the system, while paths represent all the possible execution flows of the system. The system execution is performed by an ad-hoc interpreter that navigates the automaton state by state and invokes the implementations associated with the states it traverses. The interpreter is responsible for driving and adapting the execution by choosing among alternative paths of the automaton in order to maximize the system's ability to meet its non-functional requirements.

With this paper we contribute to research in self-adaptivesystems in two distinct ways:

1) We lay the foundations of a model-driven methodology that supports the design of systems in terms of abstract functionalities and delegates the execution to an optimizing interpreter, which self-adapts to changes by selecting the implementation variants to bind to abstract functionalities in order to satisfy non-functional requirements;

2) We present a novel technique to support non-functional adaptation that exploits Probability Theory and Probabilistic Model Checking [1]. The proposed technique computes and assigns a probability to each possible execution flow of the system, indicating its likelihood to meet the desired non-functional requirements. In our approach the interpreter relies on such values to drive the execution.

The remainder of the paper is organized as follows. Section II introduces a reference example we use throughout the paper. Section III describes the ADAM approach in its essential steps, presenting it also applied to the example,

1 Adaptive Model-driven execution


highlighting several relevant scenarios. Section IV discusses its advantages w.r.t. the state of the art. Section V evaluates the performance of ADAM in several scenarios of growing complexity. Finally, Section VI discusses related work, while Section VII draws some concluding remarks.

II. THE SHOPREVIEW APPLICATION

ShopReview (hereafter referred to as SR) has been selected as the adaptive application used throughout the paper to explain our approach. SR is a mobile application for smartphones inspired by ShopSavvy,2 an existing worldwide distributed application, which allows users to share data concerning a commercial product or query for data shared by others. Users may use SR to publish the price of a product they have found in a certain shop and, in response, the application provides the users with more convenient prices offered by: (i) nearby places (i.e., LocalSearch) or (ii) on-line stores (i.e., WebSearch). The unique mapping between the price signaled by the user and the product is obtained by exploiting the product's barcode. The application starts by asking the user to scan a barcode with the smartphone camera and to type the price of the product whose barcode has been scanned. In response to these inputs the application: (1) recognizes the barcode number in the scanned image, (2) looks up online the product associated with the barcode number, (3) retrieves the user location, (4) performs the WebSearch as well as the LocalSearch, (5) displays the obtained results, and (6) allows the user to publish the price of the product to be used by searches issued by other users.

Furthermore, we decided to implement the code for recognizing the barcode from a picture acquired through the camera as an ad-hoc developed component that encapsulates an image recognition algorithm. Since such a component executes correctly only on devices with an autofocus camera and does not work properly on other devices, this choice alone would limit the usability of our application. For this reason, SR, as the original ShopSavvy app, is designed to also provide an alternative recognition algorithm. Indeed, it detects whether the camera on the current device has autofocus and, if not, it invokes an external service to process the acquired image with an ad-hoc blurry decoder algorithm. In both cases, if the barcode cannot be recognized (e.g., the image is blurred), the application asks the user to take the picture again.

In addition, let us assume that SR must satisfy the non-functional requirements listed in Table I. R1, for example, is a performance requirement typically imposed by several marketplaces of mobile applications.3 Notice that, in our application, several features contribute, at large, to usability (R2). For example, to retrieve the user location, the GPS is preferable over the NPS4 because of its increased precision. Similarly, presenting a sorted list (by price and distance) of the results of WebSearch and LocalSearch, respectively, also increases the app's usability.

2 http://shopsavvy.mobi/
3 The Windows Phone Marketplace requires that applications unresponsive for more than three seconds must display a visual progress or busy indicator: msdn.microsoft.com/en-us/library/hh184840(v=vs.92)
4 Network Positioning System.

TABLE I
SHOPREVIEW NON-FUNCTIONAL REQUIREMENTS.

     Description                                          Metric                    Class
R1   After an input the application shall respond         Response Time (RT)        Threshold Based
     in at most 3s.
R2   Maximize application usability                       Usability (U)             Max
R3   Minimize battery consumption                         Energy Consumption (E)    Min

ADAM supports two classes of non-functional requirements: (1) Threshold-Based and (2) Max/Min. The former comprises requirements in the form m ≤ th or m ≥ th, where m is a non-functional metric (e.g., response time, energy, etc.) and th is a threshold (see R1). The latter requires minimizing or maximizing a non-functional metric (see R2 and R3).

SR is subject to several sources of uncertainty and, as a consequence, run-time conditions may jeopardize the application's ability to meet its non-functional requirements. For example, engineers may design SR to be compliant with R1 by estimating or experimentally measuring the response time of the component implementing the WebSearch functionality, which invokes a remote back-end. However, such response time may increase unexpectedly during operation because of network latency or other external factors, causing the system to violate R1. Such unexpected behavior, if left unmanaged, may cause an unsatisfactory user experience or, in the worst case, the rejection of the application from the marketplace. Thus, even with such a simple example, uncertainty management is fundamental for a successful system design. It is important to notice that the concepts illustrated throughout this paper apply seamlessly to larger and more complex systems, where the benefits of our approach are even more relevant.

III. THE ADAM APPROACH

ADAM is a model-driven framework conceived to support the development and run-time operation of self-adaptive systems, which can react to (1) unexpectedly higher response time or (2) unexpected faults of the parts they rely upon.

Let us start by giving an overview of the approach, as illustrated in Figure 1. First, developers provide a model of the system in terms of abstract functionalities organized in a workflow. In particular, we support systems modeled by Activity Diagrams [8]. For each abstract functionality, developers also provide one or more target implementations, annotated with a description of their non-functional behaviors. Annotations may concern, for example, the expected execution time, cost, or the impact on energy consumption or on the application's usability. The supplied target implementations display different non-functional qualities. The goal is to be able to develop dynamic compositions of target implementations that best match the overall application's requirements. We assume target implementations to be stateless components.

Given these inputs and a set of non-functional requirements the system has to meet, the approach relies on two distinct tools: (1) the Generator and (2) the Interpreter. The Generator analyzes the Activity Diagram and the corresponding annotated target implementations and generates a finite state


[Figure 1 gives an overview of the ADAM approach: the developer designs UML Activity Diagrams and annotates the corresponding implementations; the Generator uses both (together with PRISM) to generate the Embedded Model; the Interpreter uses the Embedded Model to execute the application and invokes the implementations.]
Fig. 1. The ADAM Approach.

automaton, called the Embedded Model (EM). In the EM, each state represents an implementation of an abstract functionality of the system, while paths represent all the possible execution flows. The Interpreter is, instead, in charge of executing the system by navigating the automaton state by state and by invoking the chosen target implementations associated with the states it traverses. In particular, it is responsible for driving and adapting the execution by choosing among alternative paths of the automaton in order to maximize the system's ability to meet its non-functional requirements. By this we mean that the Interpreter first measures the effects of non-functional uncertainty (e.g., the response time of invoked functionalities) and consequently chooses the most convenient path in the EM to maximize the likelihood of meeting all the system's requirements. This way, if the Interpreter detects that the current execution is slower w.r.t. a certain performance requirement, it may autonomously decide to drive the execution by choosing a specific (fast) path in the EM that guarantees compliance with the performance requirement. The approach comprises the following steps: (1) Modeling, (2) Transformation, (3) Model Manipulation, and (4) Execution. Hereafter, we describe each of them in detail. For each step, we also illustrate it by referring to the SR example.

A. Modeling

As previously introduced, the system is initially conceived in terms of abstract functionalities and modeled by one or more UML Activity Diagrams, which organize them in workflows. For each abstract functionality, engineers also provide one or more corresponding alternative target implementations. The design methodology to derive the set of target concrete functionalities for each abstract one, given the overall requirements and an uncertainty mitigation policy, is out of the scope of the present paper. We observe, however, that designing systems in terms of alternative implementations corresponds to an approach already used for complex software systems, even if informally. For example, in mobile applications, the user location is typically obtained by relying on two alternatives: (1) the GPS sensor or (2) the NPS. Clearly, every abstract functionality needs at least one corresponding implementation. In addition, while modeling, engineers are allowed to annotate a subset of the abstract functionalities as Optional. Usually, optional functionalities are not essential for the correctness of

[Figure 2 shows the ShopReview UML Activity Diagram: 1: Input Price; 2: Photo Acquisition; a decision node guarded by [hasAutoFocus] (probabilities 0.6/0.4) leading to 3: Local Recognition or 4: Remote Recognition; a decision node guarded by [recognized] (probabilities 0.7/0.3); 5: Product Lookup; 6: Location, with an NPS alternative composed of 12: NPS Location and 13: ShowMap; 7: Local Search; 8: Web Search; 9: Secondary Web Search (<<Optional>>); 10: Result Ordering (<<Optional>>); 11: Publish Price.]
Fig. 2. ShopReview UML Activity Diagram.

the final result, but may, however, affect usability. If necessary, they are sacrificed to accomplish more important goals. As illustrated in the example, each non-functional requirement predicates over a certain non-functional metric. As a consequence, each implementation is annotated with the impact it has w.r.t. these metrics. For example, an implementation of an abstract functionality with an expected response time of 2 seconds is annotated with responseTime=2s. Concrete implementations that require user interaction cannot be annotated with an impact on response time, since it depends on the user's think time. They are therefore annotated with @UI, whose meaning will become clear later on. Notice that the annotation process occurs for each requirement metric on all the implementations. Finally, ADAM requires engineers to annotate each branch of the decision nodes in the UML Activity Diagram with the expected probability that an execution of the system may take that branch. When not specified, branches are considered to have the same probability.

Modeling the SR Application. The modeling step applied to the SR example may produce the Activity Diagram illustrated in Figure 2. For each abstract functionality, one or more concrete implementations are provided. For instance, concerning ProductLookup, which translates a barcode into a product name, SR relies on a remote service (e.g., searchupc.com) as one of the possible implementations. Alternatively, the application may ask the user to directly provide the product's name. As for searching the Web for more convenient prices, SR relies on a primary remote service (e.g., shopzilla.com) and on complementary services, represented by the abstract functionalities WebSearch and SecondaryWebSearch, respectively. Note that SecondaryWebSearch is annotated as an Optional functionality to represent the fact that it may be omitted at run time, if necessary. Similarly, the ResultOrdering functionality, which sorts the results of WebSearch and LocalSearch by price and distance, respectively, has been annotated as optional.

Concrete implementations are provided by Java methods using the ad-hoc annotation @Implementation to refer to the abstract functionality they implement. Moreover, the annotation @Impact is used to specify the impact the implementation


@Implementation(name = "ProductLookup")
@Impact(metrics = {"responseTime", "energy", "usability"}, values = {1, 2, 1})
public String automaticProductLookup(String barcode) {
    // invoke http://searchupc.com/
}

@Implementation(name = "ProductLookup")
@Impact(metrics = {"energy", "usability"}, values = {1, 0})
@UI
public String manualProductLookup(String barcode) {
    // ask the user to insert the product name
}

Listing 1. Implementations for the ProductLookup functionality.

has w.r.t. the requirement metrics. Listing 1 illustrates two methods implementing the ProductLookup functionality, as described earlier. Note that impacts may be objective values, as for the response time, or subjective measures, as for usability or energy consumption. For example, we annotate the expected energy consumption of the automaticProductLookup implementation with 2 to indicate that it requires more energy (i.e., battery usage) w.r.t. the second alternative, which is annotated with an impact equal to 1. Indeed, the first alternative includes the invocation of a remote service, which implies a higher energy consumption. Alternatively, the implementations may be annotated with the real energy consumption they have. However, such values are difficult to measure and are also device dependent. By relying on the proposed qualitative annotations, we do not need to know the real power consumption of the two alternative implementations, but only their relative difference.

Furthermore, since these impacts are estimates and/or experimental results, they may also be automatically updated and refined at run time, as discussed, for example, in our previous work [7], [9], [10]. Similarly, concerning usability, we annotate the two implementations with different impacts to indicate that the automatic alternative is preferable over the alternative that asks the user for an explicit input.
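For concreteness, the annotation types used in Listing 1 could be declared as ordinary Java annotations along the lines sketched below. The paper does not show the actual ADAM declarations, so the retention policy and element names here are our assumptions; only the roles of @Implementation, @Impact, and @UI are taken from the text.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Binds a concrete method to the abstract functionality it implements.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Implementation {
    String name();
}

// Declares the expected impact of an implementation on each requirement metric.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Impact {
    String[] metrics();
    double[] values();
}

// Marks implementations whose duration depends on user interaction
// (and therefore carry no response-time impact).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface UI {
}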

Activities in the diagram may also map to sub-diagrams, as for Location, which maps to: (1) a concrete implementation concerning the GPS alternative and (2) another activity diagram, as reported (in a dashed box) in Figure 2. According to the second option, the user's location is obtained by first invoking the NPS and then showing a map (i.e., ShowMap) centered in the obtained location for a manual refinement. This way we allow for a hierarchical composition of diagrams.

The last step in the modeling phase concerns the probability annotations of decision nodes. The decision nodes guarded by the conditions hasAutoFocus and recognized have been annotated with the probabilities reported in Figure 2. They express, respectively, the fact that 40% of mobile devices do not have an autofocus camera and that, on average, 70% of barcodes contained in acquired pictures are correctly recognized.

B. Transformation

Activity Diagrams are translated into a formal representation, the Embedded Model (EM), which is a Markov Decision Process (MDP) [18]. MDPs are finite state machines augmented with probabilities and non-deterministic transitions. The formal concepts concerning MDPs needed to understand the rest of the paper are summarized in the Appendix.

The translation of Activity Diagrams into an MDP is performed by the Generator tool. It first translates each abstract functionality into a simple MDP with an initial state, a final state, and as many intermediate states as the number of implementations associated with the abstract functionality. In addition, it adds a non-deterministic transition from the initial state to each intermediate state, which are, in turn, connected to the final state. The resulting MDPs are then merged into a single MDP, which represents the translation of the overall Activity Diagram. Moreover, annotations attached to implementations are also propagated into the EM by annotating each state in the model with the same impact numbers as its corresponding implementation. Notice that connections which participate in decision nodes (i.e., connections labeled with probabilities) are translated in the MDP as probabilistic transitions labeled with the same probabilities as the originating connections.

Transforming the SR Activity Diagram. As we said above, we first translate each abstract functionality into a simple MDP with an initial state, a final state, and as many intermediate states as the number of implementations associated with the abstract functionality. For example, the ProductLookup functionality, which has two corresponding implementations, results in the MDP in Figure 3(a).

A functionality annotated as optional also requires the introduction of a direct transition from the initial state to the final one. For example, if we have two alternative implementations of the SecondaryWebSearch, which differ in the remote service they invoke, we obtain the MDP in Figure 3(b).

The resulting MDPs are then composed together as follows. Two MDPs corresponding respectively to two subsequent nodes in the Activity Diagram are composed by merging the final state of the first one with the initial state of the second one. By merging such states we generate a symbolic state. This process is exemplified in Figure 3(c), where symbolic states are highlighted in grey and labeled with a letter. Decision nodes are translated similarly: the MDP corresponding to a functionality that precedes a decision node has its final state merged with the initial state of all the MDPs corresponding to the functionalities subsequent to the decision node. The merging process generates a symbolic state as aforementioned.
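The Generator's construction is only described informally above; the following minimal sketch, under our own naming assumptions (it is not the actual ADAM code), shows how one abstract functionality could be turned into an MDP fragment and how two fragments could be composed sequentially by merging final and initial states into a symbolic state.

import java.util.ArrayList;
import java.util.List;

class EmBuilder {

    static class State {
        final String label;
        final List<State> choices = new ArrayList<>(); // non-deterministic outgoing transitions
        State(String label) { this.label = label; }
    }

    static class Fragment {
        State initial;
        State last; // final state of the fragment
    }

    // One abstract functionality: an initial state, one intermediate state per
    // implementation, and a final state; optional functionalities also get a
    // direct skip transition from the initial state to the final one.
    static Fragment translate(String functionality, List<String> implementations, boolean optional) {
        Fragment f = new Fragment();
        f.initial = new State(functionality + ":init");
        f.last = new State(functionality + ":final");
        for (String impl : implementations) {
            State s = new State(impl); // would carry the @Impact annotations as rewards
            f.initial.choices.add(s);
            s.choices.add(f.last);
        }
        if (optional) {
            f.initial.choices.add(f.last);
        }
        return f;
    }

    // Sequential composition: the final state of the first fragment is merged with
    // the initial state of the second one, yielding a symbolic state.
    static Fragment compose(Fragment first, Fragment second) {
        first.last.choices.addAll(second.initial.choices);
        Fragment merged = new Fragment();
        merged.initial = first.initial;
        merged.last = second.last;
        return merged;
    }
}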

Concerning nodes that also map to other diagrams, as for the Location node, we first translate the node (as explained so far) considering only its implementations (e.g., the GPS alternative). Subsequently, we add, as an alternative path from the initial to the final state, the MDP obtained by translating the other diagrams (e.g., the NPS sub-diagram).

The translation process applied to the Activity Diagram of the SR application results in the EM reported in Figure 3(d). For instance, the functionality Location is represented in the MDP by state 6a (i.e., the GPS alternative) and by states 12−f−13 (i.e., the translation of the NPS nested diagram).

At this stage we propagate the annotations attached to the implementations into the EM. In Figure 3(d) we indicate them with RT, E, and U for response time, energy consumption, and usability, respectively. For example, state 5b, which


[Figure 3 illustrates the translation process. (a) Alternative Implementations: the ProductLookup activity becomes an MDP with intermediate states 5a and 5b. (b) Optional Functionality: the optional SecondaryWebSearch activity becomes an MDP with intermediate states 9a and 9b plus a direct initial-to-final transition. (c) Composition of MDPs: WebSearch and SecondaryWebSearch are composed through the symbolic state i. (d) ShopReview Embedded Model: the complete EM, whose states carry RT, E, and U annotations (e.g., state 5b with RT=0.5s, E=2, U=1) and whose probabilistic transitions carry the probabilities 0.6/0.4 and 0.7/0.3 of the decision nodes.]
Fig. 3. Translation Process.

corresponds to the automaticProductLookup implementation (see Listing 1), is annotated with its impact in terms of response time (i.e., 0.5s), energy consumption (i.e., 2), and usability (i.e., 1). Since symbolic states are artificially generated by the translation process, they are annotated with neutral values: RT = 0, E = 0, U = 0. Notice that, by construction, the obtained EM represents all the possible execution flows of the system in terms of target implementations. Indeed, starting from its initial state, the MDP has multiple alternative paths towards the final state. The translation process performed by the Generator hides the complexity of MDPs from developers. A formal description of the automatic translation algorithm is not given here for space reasons. It is based on the automatic translation of an annotated Activity Diagram into a Markov process that was presented in our previous work (i.e., [13]).

C. Model Manipulation

The annotations attached to the states of the EM represent the impact of the corresponding implementation on quality metrics. Formally, this information corresponds to rewards in the MDP formalism (see the Appendix). It can be used to compute the minimum and maximum cumulative rewards (indicated as minR(s) and maxR(s)) from each state s to the final state in the model and for each quality metric. The computation of such cumulative rewards may be arbitrarily complex because of three characteristics of the model: (1) loops, (2) probabilities attached to transitions, and (3) a large number of alternative paths. We rely on a probabilistic model checker, such as PRISM [14], to compute them. Given these premises, we manipulate the model by replacing the impact numbers attached to each state s with an interval ⟨minR(s), maxR(s)⟩ for every requirement metric of the system. It is important to notice that such intervals represent forecasts of the impacts necessary to complete the execution (i.e., reach the final state)

starting from a specific state s of the model. At execution time, such values are used by the Interpreter to select the most appropriate path towards the final state, as illustrated in Section III-D. Figure 4 illustrates the cumulative rewards obtained by exploiting PRISM for some states of the EM. Notice that, when cumulative rewards are computed for response time, all the states characterized by user interaction (i.e., whose corresponding implementations are annotated with @UI) are considered as final states of the EM, together with the original final states. Indeed, the requirements concerning response time (e.g., R1) predicate over the portions of the system in which the computation occurs autonomously, i.e., without user input.
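ADAM obtains ⟨minR(s), maxR(s)⟩ by querying PRISM. As an illustrative substitute only, the sketch below computes the same intervals for one metric directly in Java for a loop-free EM; the State interface, the convention of adding the current state's own reward, and the method names are assumptions of this sketch, and models with loops would require the iterative scheme summarized in the Appendix.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

class CumulativeRewards {

    interface State {
        double reward(String metric);        // impact annotation of the state
        boolean isFinal();
        List<State> nonDeterministic();      // empty if the state is probabilistic
        Map<State, Double> probabilistic();  // empty if the state is non-deterministic
    }

    private final Map<State, double[]> cache = new HashMap<>();

    // Returns {minR(s), maxR(s)} for the given metric, assuming an acyclic model.
    double[] bounds(State s, String metric) {
        double[] cached = cache.get(s);
        if (cached != null) return cached;
        double[] result;
        if (s.isFinal()) {
            result = new double[] { s.reward(metric), s.reward(metric) };
        } else if (!s.nonDeterministic().isEmpty()) {
            // non-deterministic transitions: take min and max over the alternatives
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (State next : s.nonDeterministic()) {
                double[] b = bounds(next, metric);
                min = Math.min(min, b[0]);
                max = Math.max(max, b[1]);
            }
            result = new double[] { s.reward(metric) + min, s.reward(metric) + max };
        } else {
            // probabilistic transitions: weighted sum over the successors
            double min = 0, max = 0;
            for (Map.Entry<State, Double> e : s.probabilistic().entrySet()) {
                double[] b = bounds(e.getKey(), metric);
                min += e.getValue() * b[0];
                max += e.getValue() * b[1];
            }
            result = new double[] { s.reward(metric) + min, s.reward(metric) + max };
        }
        cache.put(s, result);
        return result;
    }
}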

[Figure 4 shows a fragment of the manipulated EM around symbolic state i and its successors: state 9a is annotated with RT = ⟨0.5; 1.1⟩, E = ⟨4; 5⟩, U = ⟨2; 3⟩; state 9b with RT = ⟨0.6; 1.2⟩, E = ⟨4; 5⟩, U = ⟨2; 3⟩; state j with RT = ⟨0; 0.6⟩, E = ⟨2; 3⟩, U = ⟨1; 2⟩.]
Fig. 4. Model Execution Example.

Manipulating the SR Model. At this stage, we manipulate each state s of the model in Figure 3(d) by replacing impact numbers with intervals in the form ⟨minR(s), maxR(s)⟩, for every requirement metric, obtained by running the probabilistic model checker, as explained above. For example, let us focus on state 6a and usability. In this case the model checker yields the interval ⟨4; 6⟩. These values indicate that an execution reaching state 6a will have an additional usability impact in the interval ⟨4; 6⟩ to reach the final state. Similarly, for the response time and the energy consumption we obtain ⟨2.9; 4.1⟩ and ⟨8; 11⟩, respectively.


D. Execution

At run time, given the annotated EM, the Interpreter is in charge of executing the application by navigating through the state space. It invokes the corresponding implementation and performs two additional tasks. First, it keeps track of the cumulated impact for all the quality metrics. For example, assuming that the Interpreter invoked two implementations that executed in 1s and 0.5s, the aggregated response time is 1.5s. The run-time data structure that contains all these aggregated values is called the Execution Context. Second, it selects one of the alternative paths in the model. The choice among alternative paths occurs if the current state has many outgoing non-deterministic transitions, i.e., the next functionality to be executed has many corresponding implementations and/or is optional.5 The choice is performed by the Interpreter as described next.
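A minimal sketch of the Execution Context bookkeeping described in the previous paragraph is given next; the class shape and method names are assumptions of this sketch, and only its role as an accumulator of cumulated impacts (e.g., the cumulated response time cRT) comes from the paper.

import java.util.HashMap;
import java.util.Map;

class ExecutionContext {
    private final Map<String, Double> cumulated = new HashMap<>();

    // Record the measured impact of the implementation that has just been executed.
    void add(String metric, double value) {
        cumulated.merge(metric, value, Double::sum);
    }

    // E.g., the cumulated response time cRT used when computing probabilities of success.
    double get(String metric) {
        return cumulated.getOrDefault(metric, 0.0);
    }
}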

Given a system with a set of non-functional requirements R = {r1, ..., rn} and an execution that reached a state sc characterized by a set T = {t1, ..., tk} of non-deterministic outgoing transitions, the Interpreter computes a Probability of Success for each alternative transition ti. Such probability Pti represents the likelihood that the system may complete the execution meeting all the requirements by taking transition ti.

Let si be the destination state of each outgoing transition ti and let us focus initially on a threshold-based requirement Req that predicates on response time, i.e., RT ≤ th. We know from the interval associated with each state si that an execution through that state has an expected response time included in the interval ⟨ai, bi⟩, as computed in the model manipulation step of the approach. In addition, we know from the Execution Context that, to reach state sc, the execution already cumulated a response time cRT (i.e., the execution spent cRT seconds to reach state sc). Given these premises, we model the response time associated with the execution from state si to the final state as a random variable xsi uniformly distributed over the interval ⟨ai; bi⟩ (i.e., xsi ∼ U(ai, bi)). In this setting, the ability of the system to meet Req boils down to choosing a transition towards a state si such that the response time from that state to the final one is less than the requirement threshold decreased by the time already consumed to reach the current state: xsi ≤ th − cRT. In particular, the probability that this may occur (i.e., P(xsi ≤ th − cRT)) corresponds to the area of the Uniform Distribution comprised between ai and th − cRT:

$$P_{t_i,RT} = P(x_{s_i} \le th - c_{RT}) = \frac{(th - c_{RT}) - a_i}{b_i - a_i}$$

Such a value corresponds to the probability of success for transition ti concerning only the response time. In particular, three cases may occur, yielding different probabilities of success, as illustrated in Figure 5. Intuitively, if the threshold decreased by cRT is greater than the interval that contains the expected values of the response time, the probability of success is one. Conversely, if

5 States with multiple outgoing transitions labeled with probabilities do not represent a decision point among alternative paths, since they correspond to decision nodes in the Activity Diagram and the next state in the execution is selected by evaluating run-time conditions (e.g., hasAutoFocus for SR).

[Figure 5 depicts the three cases for a uniform distribution over ⟨a, b⟩: P(RT ≤ th) = 1 when th − cRT lies beyond b, 0 < P(RT ≤ th) < 1 when th − cRT falls within ⟨a, b⟩, and P(RT ≤ th) = 0 when th − cRT lies below a.]
Fig. 5. Probability of Success.

the threshold decreased by cRT is smaller than the interval, the probability is zero. In all the other cases the probability is in [0, 1], according to the relative position of the interval and the quantity th − cRT. Notice that requirements in the form RT ≥ th are processed similarly, by inverting the area considered in computing the probability of success. We repeat this procedure for all requirements in R. In particular, if we have n requirements and k possible outgoing transitions, the Interpreter builds an n × k matrix in which each element (i, j) represents the probability that choosing transition ti would yield an execution compliant with requirement rj (i.e., the probability of success Pti,rj). The Interpreter is parametrized with a vector w = w1, ..., wn of weights. Such a vector is used to aggregate the probabilities of success by computing a weighted average of the values in each column of the matrix. The result is a single value for each outgoing transition, which represents the probability of satisfying all the requirements according to the weights:

$$P_{t_i} = \sum_{1 \le j \le n} w_j \times P_{t_i,r_j}$$

The Interpreter proceeds with the execution by choosing the transition ti with the highest Pti. Notice that weights are used by the designer to prioritize requirements.

Executing the SR Application. Let us consider requirement R1 and let us assume that the Interpreter reached state i (see Figure 4). In this scenario, it has to choose among three outgoing transitions (i.e., towards states 9a, 9b, and j). The probability of success of the transition towards state 9a corresponds to the probability that the execution terminates within 3s (R1). We know from the interval associated with state 9a that the execution through that state will terminate with a response time in the interval ⟨0.5s; 1.1s⟩. At this stage, we model the response time of the execution through state 9a as a random variable x9a uniformly distributed over the interval (i.e., x9a ∼ U(0.5, 1.1)). Furthermore, let us assume that the Execution Context contains a cumulated value for the response time equal to cRT = 1.95s. The Interpreter computes the probability that the response time will be lower than the requirement threshold decreased by the value in the Execution Context (i.e., 3s − 1.95s = 1.05s) as the area of the Uniform Distribution in the interval ⟨0.5s; 1.05s⟩:

$$P(x_{9a} \le 3s) = \frac{1.05 - 0.5}{1.1 - 0.5} = 0.92$$

Similarly, for the outgoing transition to state 9b we have P(x9b ≤ 3s) = 0.75. Finally, for the outgoing transition to state j we have P(xj ≤ 3s) = 1 (i.e., the third case of Figure 5).

38

Let us now consider requirement R2, which is a maximization requirement and thus needs a slightly different process. In this case the probability of success concerning usability associated with the transition towards state 9a (i.e., P9a,U) corresponds to the probability that the usability value obtained by choosing it is at least as large as the usability value obtained with the transitions to state 9b and to state j. We know that by choosing the transition to state 9a usability would be in the interval ⟨2; 3⟩, while by choosing the transitions to states 9b and j we obtain a usability value in the intervals ⟨2; 3⟩ and ⟨1; 2⟩, respectively. By modeling the usability of the execution through state 9a as a random variable y9a uniformly distributed over the discrete interval (i.e., y9a ∼ U(2, 3)) and the usability of the execution through states 9b and j with random variables y9b, yj uniformly distributed over their intervals (i.e., y9b ∼ U(2, 3) and yj ∼ U(1, 2)), we obtain that the probability of success associated with the transition towards state 9a (w.r.t. R2) is:

$$P_{9a,U} = P(y_{9a} \ge y_{9b} \wedge y_{9a} \ge y_j) = \sum_{(i,k,l) \in A} P(y_{9a} = i \wedge y_{9b} = k \wedge y_j = l)$$

where $A = \{(i,k,l) \mid i \in \langle 2;3\rangle, k \in \langle 2;3\rangle, l \in \langle 1;2\rangle, i \ge k, i \ge l\}$. Considering that y9a, y9b, and yj are independent variables, we obtain P9a,U = 0.75. Similarly, we obtain P9b,U = 0.75 and Pj,U = 0.13. Finally, for requirement R3 (i.e., a minimization requirement) we can apply a similar approach, with obvious changes, obtaining P9a,E = 0, P9b,E = 0, and Pj,E = 1.
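The discrete computation above can be reproduced by enumerating the joint outcomes of the (assumed independent) discrete uniform variables. The sketch below is ours, not part of ADAM; it returns the probability that the candidate transition's value is at least as large as every alternative's, matching the values P9a,U = 0.75 and Pj,U = 0.13 reported above.

class MaxRequirementProbability {

    // intervals[0] is the candidate's <min, max>; the others are the alternatives'.
    static double probabilityCandidateIsMax(int[][] intervals) {
        return recurse(intervals, 0, new int[intervals.length]);
    }

    private static double recurse(int[][] intervals, int idx, int[] chosen) {
        if (idx == intervals.length) {
            for (int i = 1; i < chosen.length; i++) {
                if (chosen[0] < chosen[i]) return 0.0; // candidate beaten by an alternative
            }
            return 1.0;
        }
        int lo = intervals[idx][0], hi = intervals[idx][1];
        double p = 0.0;
        for (int v = lo; v <= hi; v++) {
            chosen[idx] = v;
            // each value of a discrete uniform variable has probability 1 / (hi - lo + 1)
            p += recurse(intervals, idx + 1, chosen) / (hi - lo + 1);
        }
        return p;
    }

    public static void main(String[] args) {
        // Usability intervals from the example: 9a = <2,3>, 9b = <2,3>, j = <1,2>.
        double p9aU = probabilityCandidateIsMax(new int[][] {{2, 3}, {2, 3}, {1, 2}}); // 0.75
        double pjU  = probabilityCandidateIsMax(new int[][] {{1, 2}, {2, 3}, {2, 3}}); // ~0.13
        System.out.printf("P_9a,U = %.2f, P_j,U = %.2f%n", p9aU, pjU);
    }
}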

Let us configure the Interpreter with the weight vector w = {0.4, 0.4, 0.2}, which corresponds to requirements R1, R2, and R3, respectively. This choice prioritizes R1 and R2 over R3. Given these weights, the probabilities of success are: P9a = 0.67, P9b = 0.6, and Pj = 0.65. In this setting, the Interpreter proceeds with the execution towards state 9a.
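The threshold-based probabilities and their weighted aggregation can be reproduced with the short sketch below; the method names are ours, and the example in main simply re-derives P9a,R1 ≈ 0.92 and P9a ≈ 0.67 from the numbers used in this section.

class SuccessProbability {

    // P(x <= th - cRT) with x ~ U(a, b), covering the three cases of Figure 5.
    static double thresholdBased(double a, double b, double th, double cRT) {
        double budget = th - cRT;
        if (budget >= b) return 1.0;
        if (budget <= a) return 0.0;
        return (budget - a) / (b - a);
    }

    // Weighted aggregation over all requirements: P_ti = sum_j w_j * P_{ti,rj}.
    static double aggregate(double[] weights, double[] probabilities) {
        double p = 0.0;
        for (int j = 0; j < weights.length; j++) {
            p += weights[j] * probabilities[j];
        }
        return p;
    }

    public static void main(String[] args) {
        double p9aR1 = thresholdBased(0.5, 1.1, 3.0, 1.95); // ~0.92
        double p9a = aggregate(new double[] {0.4, 0.4, 0.2},
                               new double[] {p9aR1, 0.75, 0.0}); // ~0.67
        System.out.printf("P_9a,R1 = %.2f, P_9a = %.2f%n", p9aR1, p9a);
    }
}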

Notice that the approach, as described so far, uses only uniform distributions. However, the concepts illustrated in the paper apply seamlessly to other distributions, which may be adopted in specific scenarios if needed. In addition, it is important to notice that the proposed approach applies identically to continuous (e.g., R1) as well as discrete non-functional requirements (e.g., R2 and R3). The only difference is the mathematical computation of the probabilities.

IV. ADVANTAGES OF THE ADAM APPROACH

The inability to manage at run time the consequences of design-time uncertainty leads to systems that may violate their requirements. A non-adaptive implementation of SR, when the execution reaches a certain state, say i, would proceed by selecting the same transition, say the transition towards state 9a, irrespective of the time spent by the computation to reach state i. The adaptive execution supported by ADAM would, instead, select the transition towards state 9a in a scenario where state i is reached in 1.95s (i.e., cRT = 1.95s), and select the transition towards j in the case in which state i is reached

in 2s (i.e., cRT = 2s). In the latter case, with cRT = 2s, the probability of success associated with the outgoing transitions from state i changes. In particular, by repeating the calculations illustrated in the previous section, we obtain the following values: P9a = 0.63, P9b = 0.57, and Pj = 0.65. As a consequence, the Interpreter proceeds with the execution by selecting the transition towards j (i.e., the one with the highest probability).

This choice is reasonable since the new context (i.e., 2s) indicates a slower execution. In this situation the probability of success of the outgoing transitions towards 9a and 9b decreases, making the option that skips the SecondaryWebSearch functionality (i.e., the transition towards j) preferable. Indeed, skipping this functionality speeds up such slower executions, minimizing the risk of violating R1. Conversely, in faster executions, transition j has a lower probability of success because of its smaller contribution to usability w.r.t. 9a and 9b. The same adaptation mechanism applies to all of the EM non-deterministic choices; for example, the choice between the GPS and the NPS, which require different execution times and offer different degrees of usability and energy consumption.

A traditional strategy to manage uncertainty consists of designing systems by explicitly programming alternative behaviors and by heavily using exception handling techniques to adapt the execution to the manifestations of uncertainty. Let us consider the case of the alternative implementations of the Location functionality. A traditional adaptive implementation would require engineers to write ad-hoc adaptation logic (e.g., cascaded if-elses to choose between the two implementations) and exception handling constructs aimed at managing the potential failures of both alternatives; a sketch of such code is given below. Such logic not only results in convoluted solutions, but is also intertwined with the application logic, yielding code that is hard to read and maintain. Conversely, ADAM applications do not require such adaptation logic (see Listing 1) and delegate the adaptation management to the framework tools. As a consequence of this simplification, the implementations, in the form of distinct annotated methods, result in code that is easy to read, write, maintain, and evolve. Furthermore, ADAM also increases reusability, since the same functionality implementations can be reused across different applications.
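As announced above, the following sketch illustrates the kind of hand-written adaptation logic that ADAM removes from the application code; the names (LocationProvider, gpsLocate, npsLocate) are purely illustrative and do not refer to any real API.

class TraditionalLocation {
    // Cascaded conditionals plus exception handling, intertwined with the application logic.
    static Location locate(LocationProvider provider, boolean gpsAvailable) {
        if (gpsAvailable) {
            try {
                return provider.gpsLocate();   // preferred, more precise
            } catch (LocationException e) {
                // GPS failed unexpectedly: fall back to the network-based alternative
                return provider.npsLocate();
            }
        } else {
            return provider.npsLocate();
        }
    }

    interface LocationProvider {
        Location gpsLocate() throws LocationException;
        Location npsLocate();
    }

    static class Location { double lat, lon; }
    static class LocationException extends Exception { }
}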

ADAM also offers another source of adaptation against uncertainty. In Section I we stated that ADAM not only manages non-functional uncertainty in terms of higher response times, but also handles unexpected faults. Indeed, the system execution managed by the Interpreter introduces a useful self-healing feature in ADAM applications. As soon as the Interpreter catches a run-time exception while invoking a functionality implementation, it redirects the execution, if possible, towards another EM path that allows the system to complete successfully. For example, if, while invoking the functionality associated with state 9a, the Interpreter catches an exception indicating that the underlying Web service is unavailable, it may backtrack the execution to the previous state (i.e., i) and redirect the execution through an alternative path (e.g., 9b). Even if this is a simple self-healing mechanism, it is useful to maximize the system reliability in many scenarios. For


example, in the mobile domain, the GPS may fail unexpectedly and the NPS may be invoked as an alternative, transparently.
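A minimal sketch of this self-healing step is given below. The paper does not show the Interpreter's internals, so the types and method names are assumptions; the sketch only illustrates the backtrack-and-retry behavior just described.

import java.util.List;

class SelfHealingStep {

    interface EmState {
        Object invokeImplementation(Object input) throws Exception; // runs the bound method
    }

    // Try the alternatives of the current symbolic state in order of decreasing
    // probability of success, moving to the next one when an invocation fails.
    static Object executeStep(List<EmState> rankedAlternatives, Object input) throws Exception {
        Exception lastFailure = null;
        for (EmState candidate : rankedAlternatives) {
            try {
                return candidate.invokeImplementation(input);
            } catch (Exception e) {
                lastFailure = e; // e.g., remote Web service unavailable: backtrack and redirect
            }
        }
        throw lastFailure != null ? lastFailure : new IllegalStateException("no alternative available");
    }
}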

Finally, ADAM also achieves a clear separation among the different aspects of the application: from the more abstract ones, captured by Activity Diagrams, to those closer to the technical domain, captured by the implementations. By relying on such a sharp separation of concerns, developers may first model the features they want to introduce in the system, ignoring how they will be implemented later on. Let us consider again the Location functionality of the SR application. In the inception phase, developers only focus on the fact that the system needs such a feature and connect it to the other features by relying on the Activity Diagram. Later on, they can implement a first prototype that leverages the NPS and the manual input of the user, possibly realizing that this solution needs to be improved in terms of usability. The application may then gradually evolve by adding other implementations for this feature (e.g., the GPS). This process, in which the system design is decoupled from the implementation, as enforced by the proposed approach, is a widely recognized best practice in Software Engineering.

It is important to notice that the advantages provided by the ADAM approach are obtained transparently w.r.t. engineers, who only have to produce one or more Activity Diagrams and their corresponding implementations. Even the complexity concerning the EM and MDPs is managed behind the scenes by the ADAM tools.

V. VALIDATING THE APPROACH

ADAM is implemented as a publicly available open-source tool.6 Although our approach is general and applies with limited technological modifications to other languages, we focused on the Java language for our prototype. In this section, we discuss the validation of the ADAM approach, focusing on its run-time overhead and scalability. The validation has been carried out by performing a large simulation campaign. Hereafter, we report the most significant results and we refer the reader to our prototype implementation for the replicability of the presented data. Since ADAM requires additional run-time computation, we start by discussing the overhead imposed to navigate and execute the model. This includes the computation of the probabilities of success for each alternative. We developed an application that automatically generates different models, and we used the ADAM prototype to execute them. To test its performance under different situations, we varied the number of abstract functionalities that are part of the model and the number of alternative implementations bound to them. Finally, we also varied the number of non-functional requirements present in the model to be satisfied during execution. Our evaluation was carried out on the following hardware setting: i5-540M processor, 4GB of RAM, Ubuntu Linux 11.10, and Oracle Java Runtime Environment version 1.6.0_26. Moreover, each of the experiments was repeated at least 30 times. The first experiment investigates the overhead imposed by ADAM to execute a simple model

6 http://code.google.com/p/adam-java/

where each abstract functionality is associated with a single implementation. To calculate the overhead, we compared an ADAM execution with an equivalent (hard-coded) sequence of method calls to the same implementations.7 We varied the number of abstract functionalities used (and, consequently, the number of method calls) from 10 to 1000. The measured results are reported in Figure 6(a). These results show that the way ADAM models are navigated and executed introduces a negligible overhead, around 2.35%. For instance, for a large model with 500 abstract components, the execution takes on average 25.65s, while the Java execution with the same number of method calls requires 25.06s. To further investigate the overhead imposed by ADAM we extended the previous experiment as follows. Let us consider a base scenario with the following parameters: 500 abstract functionalities, each with 3 alternative implementations, annotated with different impacts; 6 non-functional requirements, 3 of which are threshold based and the remaining 3 are min/max requirements.

Figure 6(b) shows how the size of the model affects the overall execution time. In this experiment, we consider the base scenario, increasing the size of the model from 10 to 1000. Note that these results differ from Figure 6(a) because they also include the time to compute and choose the best alternative. In general, the experiments show an overhead of approximately 4-5%, which we still claim to be reasonable. For instance, for a large model of 500 abstract functionalities, the execution time increases from 25.06s to 26.15s.

Figure 6(c) shows the time required to execute an ADAM model in our base scenario with an increasing number of alternatives bound to each abstract functionality, from 1 to 5. Note that we plotted with a horizontal black line the average time for the Java execution. Even in the worst scenario, in which we have 5 alternatives for each abstract functionality, the overhead compared to the Java execution reaches an acceptable value of 9%. Consider that this testbed included a very large number of alternatives, 2500 in total, which is far from what we expect in a realistic application. If we consider that we typically have two alternatives, the average execution takes 25.83s instead of 25.06s.

Finally, Figure 6(d) shows, instead, how the execution time is affected, in our base scenario, by an increase in the number of requirements, which we varied from 2 to 10. Note that the ADAM Interpreter scales very smoothly, allowing applications to add multiple requirements without decreasing the ADAM performance. In general, from the above assessment we may conclude that the ADAM approach is feasible and its use introduces an acceptable (often negligible) overhead in the execution time of the overall application, especially when compared to the advantages discussed in Section IV.

VI. RELATED WORK

Many existing works address the problem of uncertainty management and investigate adaptive software systems. The requirements engineering community has been particularly

7 All the methods have a default implementation which sleeps for 50ms.


[Figure 6 reports the measured overhead: (a) Execution Overhead, (b) Execution Overhead: Increasing Model Size, (c) Execution Overhead: Increasing Number of Alternatives, (d) Execution Overhead: Increasing Number of Requirements.]
Fig. 6. ADAM Overhead.

active w.r.t. this research challenge [20]. For example, RELAX [24], a requirements language for adaptive systems, explicitly addresses uncertainty by enabling engineers to capture uncertainty in the requirements definition. Differently from our approach, RELAX achieves adaptation by relaxing non-critical requirements instead of relying on alternative or optional functionalities as in the ADAM approach. Souza et al. [21], instead, enable adaptation by conceiving systems in a control loop in which a set of system parameters (i.e., variability points) is tuned at run time to satisfy the system requirements. Finally, Wang et al. [23] describe a framework that exploits software variability and goal models to allow self-repair in cases of failure. From a model-driven perspective, we can mention [17] and [16]. The former focuses on functional adaptations and investigates the adoption of aspect-oriented programming to weave new system configurations together with run-time models used to validate them. The latter also exploits aspect-oriented programming, but focuses on non-functional properties of systems. It describes a middleware for run-time adaptation that chooses between alternative application variants. In addition, concerning the concept of Embedded Model, we can mention the work by Balz et al. [2], [3], which describes an approach to embed full model semantics into source code. In this case, the ultimate goal of the authors is to support the synchronization between the model and the implementation. This approach may be considered as an alternative solution to implement the ADAM concepts. From an architectural viewpoint, we may mention the foundational work on a three-layer architecture for software adaptation, described in [15], [22], which focuses mainly on functional adaptation. Cheng et al. [5] investigate instead architecture-based adaptation through resource prediction, which is similar to the non-functional forecasts obtained via probabilistic model checking in ADAM, even if at the architecture level. In addition, concerning monitoring, it is important to mention the work by Ehlers et al. [6], who propose an approach to localize performance anomalies and enable adaptations through a rule-based expert system, even if they focus only on response time. In addition, Fleurey et al. [11] define the impact of features (i.e., functionalities), as well as high-level adaptation rules to choose among them according to the context and achieve an adaptive behavior. In the same way, Floch et al. [12] adopt utility functions to determine which component implementation should be selected according to the context. The work by Ramirez et al. [19] represents, instead, an approach that may be used to complement the ADAM solution. Indeed, it allows the automatic discovery of combinations of environmental conditions that produce requirement violations and latent behaviors in an adaptive system. Such an approach may be used in ADAM to support engineers in designing the alternative implementations needed to face the discovered environmental conditions. All the mentioned approaches differ from ADAM in many aspects. Some of them, for example, do not consider non-functional aspects. Others, even if they focus on non-functional adaptation, do not offer the same degree of flexibility provided by ADAM, which, for example, not only optimizes the non-functional behavior of the system, but also maximizes its reliability, as described in Section IV.

VII. CONCLUSIONS AND FUTURE WORK

In this paper we presented ADAM, a novel approach that supports the effective development and operation of adaptive systems. To demonstrate its advantages, we used the proposed approach to implement a realistic mobile application and we assessed the overhead introduced by the approach and its scalability by performing a simulation campaign. In addition, to


encourage the adoption of the proposed approach and to allow the replication of the experiments, the ADAM implementation has been released as an open-source tool. Finally, ADAM currently supports parallelism only in the implementation methods and not at the Activity Diagram level. We plan to overcome this limitation in our future work.

ACKNOWLEDGMENTS

This research has been funded by the EU, Programme IDEAS-ERC, Project 227977-SMScom and FP7-PEOPLE-2011-IEF, Project 302648-RunMore.

REFERENCES

[1] C. Baier and J.-P. Katoen. Principles of Model Checking. The MIT Press, 2008.
[2] M. Balz and M. Goedicke. Embedding process models in object-oriented program code. In Proceedings of the 1st Workshop on Behaviour Modelling in Model-Driven Architecture, page 7. ACM, 2009.
[3] M. Balz, M. Striewe, and M. Goedicke. Embedding behavioral models into object-oriented source code. Software Engineering, 2009.
[4] B. Cheng, R. de Lemos, H. Giese, P. Inverardi, J. Magee, J. Andersson, B. Becker, N. Bencomo, Y. Brun, B. Cukic, et al. Software engineering for self-adaptive systems: A research roadmap. Software Engineering for Self-Adaptive Systems, pages 1–26, 2009.
[5] S. Cheng, V. Poladian, D. Garlan, and B. Schmerl. Improving architecture-based self-adaptation through resource prediction. Software Engineering for Self-Adaptive Systems, pages 71–88, 2009.
[6] J. Ehlers, A. van Hoorn, J. Waller, and W. Hasselbring. Self-adaptive software system monitoring for performance anomaly localization. In ICAC. ACM, 2011.
[7] I. Epifani, C. Ghezzi, R. Mirandola, and G. Tamburrelli. Model evolution by run-time parameter adaptation. In ICSE. IEEE, 2009.
[8] H. Eriksson and M. Penker. Business modeling with UML. Wiley, 2000.
[9] A. Filieri, C. Ghezzi, and G. Tamburrelli. Run-time efficient probabilistic model checking. In ICSE 2011. ACM.
[10] A. Filieri, C. Ghezzi, and G. Tamburrelli. A formal approach to adaptive software: continuous assurance of non-functional requirements. Formal Aspects of Computing, pages 1–24, 2012.
[11] F. Fleurey and A. Solberg. A domain specific modeling language supporting specification, simulation and execution of dynamic adaptive systems. Model Driven Engineering Languages and Systems, 2009.
[12] J. Floch, S. Hallsteinsen, E. Stav, F. Eliassen, K. Lund, and E. Gjorven. Using architecture models for runtime adaptability. Software, IEEE, 23(2):62–70, 2006.
[13] S. Gallotti, C. Ghezzi, R. Mirandola, and G. Tamburrelli. Quality prediction of service compositions through probabilistic model checking. Quality of Software Architectures. Models and Architectures, 2008.
[14] A. Hinton, M. Kwiatkowska, G. Norman, and D. Parker. PRISM: A tool for automatic verification of probabilistic systems. TACAS, 3920, 2006.
[15] J. Kramer and J. Magee. Self-managed systems: an architectural challenge. In FOSE 2007.
[16] S. Lundesgaard, A. Solberg, J. Oldevik, R. France, J. Aagedal, and F. Eliassen. Construction and execution of adaptable applications using an aspect-oriented and model driven approach. In Distributed Applications and Interoperable Systems, pages 76–89. Springer, 2007.
[17] B. Morin, O. Barais, G. Nain, and J. Jezequel. Taming dynamically adaptive systems using models and aspects. In ICSE, 2009.
[18] M. Puterman. Markov decision processes. Handbooks in Operations Research and Management Science, 2:331–434, 1990.
[19] A. Ramirez, A. Jensen, B. Cheng, and D. Knoester. Automatically exploring how uncertainty impacts behavior of dynamically adaptive systems. In Automated Software Engineering (ASE), 2011 26th IEEE/ACM International Conference on, pages 568–571. IEEE, 2011.
[20] P. Sawyer, N. Bencomo, J. Whittle, E. Letier, and A. Finkelstein. Requirements-aware systems: A research agenda for RE for self-adaptive systems. In RE 2010, pages 95–103. IEEE, 2010.
[21] V. Silva Souza, A. Lapouchnian, and J. Mylopoulos. System identification for adaptive software systems: a requirements engineering perspective. Conceptual Modeling – ER 2011, pages 346–361, 2011.
[22] D. Sykes, W. Heaven, J. Magee, and J. Kramer. From goals to components: a combined approach to self-management. In SEAMS 2008.
[23] Y. Wang and J. Mylopoulos. Self-repair through reconfiguration: A requirements engineering approach. In Proceedings of the 2009 IEEE/ACM International Conference on Automated Software Engineering, pages 257–268. IEEE Computer Society, 2009.
[24] J. Whittle, P. Sawyer, N. Bencomo, B. Cheng, and J. Bruel. RELAX: a language to address uncertainty in self-adaptive systems requirement. RE, 2010.

APPENDIX

In this section we provide a brief introduction to the mathematical concepts used throughout the paper: Markov Decision Processes (MDPs) and rewards. MDPs are non-deterministic Kripke structures with probabilistic transitions among states. Formally, an MDP over a set of atomic propositions AP is a tuple ⟨S, s0, F, Steps⟩, where:

• S is a finite set of states, s0 ∈ S is the initial state, and F ⊆ S is the set of final states;
• Steps is the transition function, such that for each s ∈ S, either Steps(s) = Dist(S) or Steps(s) ∈ 2^S, where Dist(S) = {(s1, p1), ..., (sk, pk)}, with $\sum_{j=1}^{k} p_j = 1$, is a discrete probability distribution over S.

Notice that if, for a final state sF ∈ F, Steps(sF) = {(sF, 1)}, the self-edge is often not represented graphically. MDPs may be augmented with rewards, through which one can quantify a benefit (or loss) due to the residence in a specific state. A reward ρ : S → R≥0 is a non-negative value assigned to a state. Rewards can represent information such as average execution time, power consumption, or usability. In MDPs, each state s can also be annotated with the rewards cumulated from s to a final state sF. Given a state s, there may be many paths that connect it to one of the final states sF. Each of these paths cumulates as reward the sum of the rewards of the states in the path. If s is such that Steps(s) = Dist(S), the overall cumulated reward for s is given by the weighted sum w.r.t. Dist(s) of the path cumulated rewards for the paths starting from s. However, this quantity becomes an interval because of the non-deterministic transitions. Indeed, each of the non-deterministic choices has an associated cumulative reward. To represent all these alternatives, each state s is annotated with the interval ⟨minR(s), maxR(s)⟩ in which all the non-deterministic cumulative rewards are comprised. These values are computed as follows. If s is a final state, minR(s) = ρ(s); instead, if s ∈ S − F, when Steps(s) = {(s1, p1), ..., (sk, pk)}, i.e., the transitions outgoing from s form a probability distribution,

$$minR(s) = \sum_{j=1}^{k} p_j \times minR(s_j),$$

and, when Steps(s) = {s1, ..., sm}, i.e., the transitions outgoing from s represent a non-deterministic choice,

$$minR(s) = \min_{i \in \{1,...,m\}} minR(s_i).$$

If one of the paths on which the cumulative reward is computed contains loops, the above equations recall themselves recursively. Iterative numerical methods are used to solve these equations. The termination condition of such methods is obtained by checking when the maximum difference between the solutions of successive iterations drops below a fixed threshold. The upper bound of the interval, maxR(s), is computed analogously. A comprehensive, in-depth treatment of MDPs can be found in [1].
