
Page 1: Object-Oriented Validation and Verification - Starting at the Beginning!

4004 L3 - © D. Deugo, 2003 - 2008

Page 2: About Scope

• The scope of a test is the collection of software components to be verified:
– system: the complete integrated application, treated as a black box
– integration: the interactions of the components of a system or subsystem
– unit: in OO, testing an instance of a class

• Different testing strategies (or guidelines, a.k.a. heuristics) apply to different scopes…
– ** we will first learn how to create test models and then how to derive tests! **

• Beyond the notions of tests and test cases, Binder introduces the idea of a test point:
– A test point is a specific value for test case input and state variables.
– Some well-known heuristics for test point selection include equivalence classes, boundary value analysis and special values testing. They are examples of a general strategy called partition testing: can we define sets of equivalent test points?
– Later, we will need to consider how applicable this strategy is to each of the different scopes of testing we consider. A minimal sketch of partition-based test point selection follows below.
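To make test point selection concrete, here is a minimal sketch in Java (the withdrawal rule, its 20-500 range and all class names are hypothetical, not from the slides). Boundary value analysis places test points on and around each boundary; each equivalence class is then represented by a single interior point:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical system under test: accepts amounts 20..500 in multiples of 20.
    class WithdrawalValidator {
        boolean isValid(int amount) {
            return amount >= 20 && amount <= 500 && amount % 20 == 0;
        }
    }

    class WithdrawalTestPoints {
        private final WithdrawalValidator v = new WithdrawalValidator();

        @Test
        void boundaryTestPoints() {
            assertFalse(v.isValid(0));    // below the lower boundary
            assertTrue(v.isValid(20));    // on the lower boundary
            assertTrue(v.isValid(500));   // on the upper boundary
            assertFalse(v.isValid(520));  // above the upper boundary
        }

        @Test
        void equivalenceClassRepresentatives() {
            assertTrue(v.isValid(100));   // one point stands in for the whole valid class
            assertFalse(v.isValid(130));  // one point stands in for the "not a multiple of 20" class
        }
    }

If the partition really does contain equivalent test points, any interior representative should behave like 100 and 130 do here; the boundaries are where domain faults tend to hide.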

Page 3: Where to Start?

• Testing (implicitly of code) is inherently bottom-up: units (methods and classes), then clusters and subsystems, and finally systems.

• But contrary to Binder, we are not limiting ourselves to the testing of code; we first want to consider the creation and V&V of test-ready models, that is, models from which tests can be generated.

• We will therefore start with the modeling process put in place in the previous course (see next slide):
– from a problem description
– to requirements (FRs and NFRs), assumptions, use cases and use case diagrams
– to use case maps
– to message sequence charts and class diagrams
– to finite state machines and/or code

• This process is NOT formal:
– e.g., how to write use cases, how to go from use case maps to MSCs, and how to build hierarchical statecharts are all subjects of disagreement between different researchers!

Page 4: A Scenario Driven Modeling Approach

[Figure: a scenario-driven modeling flow - from the Problem Description, Reqs and Use Cases 1-3 are derived; UCMs capture the inter-scenario relationships; individual MSCs refine each scenario; FSMs and/or code come last]

Page 5: Where to Start? (Take 2)

• OO V&V requires a traceability scheme:
– This scheme will be incrementally examined as we scrutinize the supplied case study.
– At the beginning of that document, we find a problem description (section 1) from which FRs, NFRs and Assumptions are obtained (section 2). Everything else is traced back to these artifacts.
– This first transformation appears to be very ad hoc… And yet our validation activities will depend on what we find at the origin of traceability links!

• Eric Yu proposes the Goal-oriented Requirement Language (GRL) for capturing and reasoning about functional and mostly non-functional requirements (also called quality attributes):
– distinguish satisfiability (strict criteria) from satisficability (acceptable limit)

• GRL + UCM = URN (User Requirements Notation), submitted to the ITU
– check out the GRL tutorial at www.cs.toronto.edu/km/GRL:
» it mostly emphasizes soft goals…
» there is also a tool and a methodology

Page 6: About Requirements Traceability (1)

• For ITU-SG10, the definition of requirement traceability relations is important for many reasons:

To evaluate requirements coverage. An important question that a developer must be able to answer is: Are all requirements addressed in the current version of the system? In order to answer this question, one must be able to determine precisely the set of requirements that are referenced in the different system models.

To evaluate the impact of requirements modifications. Another important question that a developer must be able to answer is: What are the model elements that are related to a specific requirement? This question must often be answered when modifications are made to requirements. The existence of traceability relationships makes it possible to evaluate the impact of modifications on the different models, and to change the affected models in a consistent manner.

To allow requirements testing. In a scenario-driven (or use case-driven) process, requirements are associated with specific scenarios. Therefore, in order to test that the current implementation of a system is correct with respect to a specific requirement, one must first determine the set of scenarios that are related to the requirement.
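These first three reasons all reduce to queries over a requirement-to-model-element mapping. As a minimal sketch (the class names and the structure are assumptions for illustration, not the case study's scheme), such a traceability store might look like:

    import java.util.*;

    // Hypothetical traceability store: requirement id -> model elements (e.g., scenarios).
    class TraceabilityMatrix {
        private final Map<String, Set<String>> links = new HashMap<>();

        void trace(String requirement, String modelElement) {
            links.computeIfAbsent(requirement, r -> new HashSet<>()).add(modelElement);
        }

        // Requirements coverage: which requirements are referenced by no model element?
        Set<String> uncovered(Set<String> allRequirements) {
            Set<String> result = new HashSet<>(allRequirements);
            result.removeAll(links.keySet());
            return result;
        }

        // Impact analysis / test selection: which model elements relate to a requirement?
        Set<String> elementsFor(String requirement) {
            return links.getOrDefault(requirement, Set.of());
        }
    }

Here uncovered() answers "are all requirements addressed?", while elementsFor() supports both impact evaluation and the selection of scenarios to test against a specific requirement.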

Page 7: About Requirements Traceability (2)

To allow the identification of conflicting requirements. The causes of system errors are various. One important cause of errors is conflicting requirements. This type of error is often difficult to prevent and is only discovered late in the development process. For this reason, when an error is found in the system, it is important to be able to trace it back to the different models, and ultimately to requirements, and see where the error has been introduced. If the error comes from conflicting requirements, then these requirements can be precisely identified.

To reduce maintenance efforts. If one can determine precisely the set of model elements that can be impacted by the modification of a specific requirement (or model element), then the cost of modification is significantly reduced.

To preserve the rationale for design decisions. Knowing the original reasons for design decisions helps maintainers and enhancers to evaluate whether implementation should be changed in the light of new circumstances. Such re-engineering of implementations may be essential to keeping a product vital and competitive in the marketplace.

Page 8: Use Cases Revisited

(Binder section 8.3 and chapter 14)

As few use-cases as possible describing as many scenarios as possible.

Page 9: System Scope

• A fundamental truth about software testing: individual verification of components cannot guarantee a correctly functioning system.
– We need to test the system against the requirements
– Binder suggests 3 patterns, discussed in this section

• Complete, consistent and testable requirements are necessary to develop an effective test suite.
– Each element of our Requirement Abstraction Layer must be a test model!

• UML's use cases are typically assumed to capture the requirements, when in fact they each capture a set of scenarios associated with some requirements…

• Use cases are in NL and thus not test-ready.

• Semantics: tables 8.1 and 8.2 (p.280):
– <<uses>> and <<extends>> are transitive: Binder suggests at least checking every fully expanded extension combination…
» this is a form of path testing… keep that in mind...

Page 10: Extended Use Cases

• Jacobson suggests some tests can be derived from use cases:
– basic paths, alternative paths, associated reqs

• Several questions remain unanswered:
– how to choose test cases?
– in what order shall I apply my tests?
– when am I done?

• Binder suggests “extended UCs”, for which we need to determine:
– the frequency of each use case
– sequential dependencies (if not all relationships) between UCs
– operational variables

• Operational variables are inputs, outputs, and environment conditions that:
– lead to « significantly different » paths of a use case
» this idea will carry through to UCMs
– abstract the state of the system under test
– result in « significantly different » system responses
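A minimal sketch of what "extended" adds to a plain use case (all field and type names here are hypothetical, invented to mirror the three bullets above):

    import java.util.List;

    // Hypothetical representation of Binder's extended use case: a plain UC
    // plus frequency, sequential dependencies and operational variables.
    record OperationalVariable(String name, List<String> values) {}

    record ExtendedUseCase(
            String name,
            double relativeFrequency,               // reused by Pattern 3: Allocate Tests by Profile
            List<String> mustFollow,                // sequential dependencies between UCs
            List<OperationalVariable> variables) {} // drive the "significantly different" paths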

Page 11: Operational Variables

• Consider a typical Use Case Diagram for an ATM system:
– Figure 8.4 p.279
– Table 14.1 p.722 considers the different resulting paths of a same use case in terms of input and output combinations
– This viewpoint can be re-expressed in terms of operational variables for each use case: table 14.2 p.726
» we need 4 variables to capture all combinations
» we will discuss combinational models in detail later, but we must understand NOW that the variants do not overlap!
• We have partitioned the input and output space successfully!
– Finally, we can minimally ensure every variant is made true at least once, and false at least once.
» A true test case is a set of values that satisfies all conditions in a variant
» A false test case has at least one condition false
» see table 14.3 p.727
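A minimal sketch of the true/false rule for one variant (the "dispense cash" variant and its three conditions are invented for illustration, not Binder's table 14.2):

    import java.util.function.Predicate;

    // Hypothetical operational variables for one ATM withdrawal variant.
    record WithdrawalInput(boolean cardValid, boolean pinOk, boolean fundsSufficient) {}

    class VariantExample {
        // A variant is a conjunction of conditions over the operational variables.
        static final Predicate<WithdrawalInput> DISPENSE_CASH =
                in -> in.cardValid() && in.pinOk() && in.fundsSufficient();

        public static void main(String[] args) {
            // True test case: a set of values satisfying ALL conditions of the variant.
            System.out.println(DISPENSE_CASH.test(new WithdrawalInput(true, true, true)));   // true
            // False test case: at least one condition is false.
            System.out.println(DISPENSE_CASH.test(new WithdrawalInput(true, true, false)));  // false
        }
    }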

Page 12: Binder's Format for Testing Patterns

• We don’t yet know where Binder’s patterns fit in our design process. We will get to that later. For now, we consider his pattern format and study individual patterns.

• The proposed format is:
– Name: suggests a general approach

– Intent: kind of test suite produced by this pattern

– Context: When does this pattern apply?

– Fault Model: What kinds of faults are to be detected?

– Strategy: How is the test suite designed and coded?

– Oracle: How can we derive expected results?

– Automation: How much is possible?

– Entry and Exit Criteria: pre- and post-conditions to use

– Consequences: Advantages and disadvantages

Page 13: Pattern 1: Extended UC Test

• Intent: Build a system-level test suite by modeling essential capabilities as extended use-cases

• Context: Applies if most, if not all, essential requirements of the SUT can be expressed as extended Use Cases

• Strategy: A UC specifies a family of responses to be produced for specific combinations of external input and system state. This pattern represents these relationships as a decision table.
– More on how to use decision tables later.
– We need a complete inventory of operational variables and of their constraints (see recipe pp.724-728)
– We should use a traceability matrix like the one in Figure 14.1 p.729
– An extended UC model can be used to catch faults similar to the ones expected of other models based on combinational logic (see next slide). A minimal decision-table sketch follows below.
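As an illustration of the decision-table representation (the rows and responses are invented for an ATM-flavoured example, not taken from Binder's tables):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical decision table: each row maps a combination of
    // operational-variable values to the expected system response.
    class ExtendedUcDecisionTable {
        record Row(boolean cardValid, boolean pinOk, boolean fundsSufficient) {}

        private final Map<Row, String> rows = new LinkedHashMap<>();

        ExtendedUcDecisionTable() {
            rows.put(new Row(true,  true,  true),  "dispense cash");
            rows.put(new Row(true,  true,  false), "reject: insufficient funds");
            rows.put(new Row(true,  false, true),  "reject: bad PIN");
            rows.put(new Row(false, false, false), "eject card");
        }

        // Looking up a combination missing from the table exposes a gap in the
        // inventory of operational variables and their constraints.
        String expectedResponse(Row r) {
            return rows.getOrDefault(r, "unspecified: the table is incomplete!");
        }
    }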

Page 14: Pattern 1: Expected Faults

• From combinational logic:
– domain faults: usually on the boundary of conditions
» go back to the triangle example… expired date… etc.
– logic faults: the logic of the specification is incorrectly coded
» ands, ors and nots got mixed up somewhere... (e.g., cash status)
– incorrect handling of don't cares
» output needs to stay the same across different values for DCs
– incorrect or missing dependency on pre-conditions
» a UC behaves correctly despite a violated pre-condition...

• From system scope:
– undesirable feature interactions (or is it scenario interactions?)
» e.g., the ATM shuts down while a user is doing a transaction!
– incorrect output (e.g., wrong balance)
– abnormal termination (e.g., the ATM eats your card…)
– omissions and surprises
» e.g., the ATM does not get validated, all your accounts are zeroed…
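The don't-care rule can be checked mechanically: hold the conditions that matter fixed, sweep the don't-care variable, and assert that the response never changes. A minimal sketch (the responder and its variables are hypothetical):

    // Hypothetical check for "incorrect handling of don't cares":
    // wantsReceipt should not affect whether cash is dispensed.
    class DontCareCheck {
        static String respond(boolean fundsSufficient, boolean wantsReceipt) {
            return fundsSufficient ? "dispense cash" : "reject";
        }

        public static void main(String[] args) {
            String withReceipt = respond(true, true);
            String withoutReceipt = respond(true, false);
            if (!withReceipt.equals(withoutReceipt)) {
                throw new AssertionError("a don't-care variable changed the output!");
            }
            System.out.println("output stable across don't-care values: " + withReceipt);
        }
    }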

Page 15: Pattern 1: Other Fields

• Oracle: expected results by human intuition

• Automation: the appendix in Binder's book presents tools (rather distantly) relevant to use case testing

• Entry criteria:
– extended UCs must have been validated (via traceability? GRL?)
– no execution of test cases at system level before the system's components have been tested (i.e., bottom-up test execution… next slide)

• Exit criteria (as a % of completeness of requirements coverage):
– XUVC = (# of implemented UCs / # of required UCs) × (# of variants tested / total # of variants) × 100
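For example (with invented numbers): if 8 of 10 required UCs are implemented and 40 of 50 variants have been tested, XUVC = (8/10) × (40/50) × 100 = 64%.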

• Consequences:
– no one agrees on the level of abstraction of a UC!
– leaves out performance, fault tolerance, etc.
– an extended UC reduces to a decision table

Page 16: Pattern 2: Covered in CRUD

• Intent: verify that all basic operations are exercised for each class in the system under test…

• Strategy:
– build a use case/class coverage matrix such as the one in Figure 14.2 p.733
– C: create; R: read; U: update; D: delete

• Exit criterion:
– all basic operations of each class have been exercised at least once…
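A minimal sketch of such a matrix and its exit-criterion check (the bookkeeping structure is an assumption for illustration, not Figure 14.2 itself):

    import java.util.*;

    // Hypothetical use case / class CRUD coverage matrix.
    class CrudMatrix {
        enum Op { C, R, U, D }

        // class name -> (operation -> use cases that exercise it)
        private final Map<String, Map<Op, Set<String>>> matrix = new HashMap<>();

        void cover(String useCase, String clazz, Op op) {
            matrix.computeIfAbsent(clazz, c -> new EnumMap<>(Op.class))
                  .computeIfAbsent(op, o -> new HashSet<>())
                  .add(useCase);
        }

        // Exit criterion: every class has C, R, U and D exercised at least once.
        boolean exitCriterionMet(Set<String> allClasses) {
            return allClasses.stream().allMatch(c ->
                    matrix.getOrDefault(c, Map.of()).keySet()
                          .containsAll(EnumSet.allOf(Op.class)));
        }
    }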

Page 17: Pattern 3: Allocate Tests by Profile

• Intent: Allocate the overall testing budget to each use case in proportion to its relative frequency.

• Context: any time you use Pattern 1, especially in the presence of a combinatorial explosion of possible paths.

• Strategy: you must somehow (!) obtain an operational profile from the potential users. Then you merely sort.
– Table 14.5 p.738 for the ATM is representative!
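A minimal sketch of the allocation step itself (the profile and budget numbers are invented, not Binder's table 14.5):

    import java.util.Map;

    // Hypothetical budget allocation: tests per UC in proportion to relative frequency.
    class ProfileAllocation {
        public static void main(String[] args) {
            Map<String, Double> profile = Map.of(
                    "Withdraw cash", 0.60,
                    "Check balance", 0.30,
                    "Transfer funds", 0.10);
            int budget = 200;  // total number of system test cases we can afford

            profile.entrySet().stream()
                   .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                   .forEach(e -> System.out.printf("%s: %d tests%n",
                           e.getKey(), Math.round(e.getValue() * budget)));
        }
    }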

Page 18: Implementation Specific System Tests

• Several issues are typically downplayed, if not ignored, by use cases:
– Configuration (wrt versions of s/w and h/w)
– Compatibility
– Setup/shutdown
– Performance (see next slide)
– Etc.

• For Human-Computer Interaction:
– Usability (McGrenere: « bloated UIs »), security, documentation, operator procedure testing, etc.

• Beyond system testing:
– Alpha and beta testing (by independent volunteers), acceptance testing (by real customers), compliance testing (wrt standards and regulations)

Page 19: About Performance

• We need quantitative formulations of performance requirements:
– Throughput: the number of tasks completed per unit of time
– Response time: we need average and worst-case figures
– Utilization: how busy the system is

• We need a worst-case analysis

• Other issues:
– Performance modeling initially requires lots of magic numbers
– Load testing considers how the system responds to increasing input events
– Concurrency testing: load testing with concurrent events
– Stress testing: the rate of inputs exceeds design limits
– Recovery testing: testing recovery from a failure mode

• For real-time systems we must distinguish 3 types of events:
– Repeating: must be accepted within a certain interval
– Intermittent critical: aperiodic input with a response within a fixed interval of time
– Repeating critical: a combination of the 2 previous ones
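A minimal sketch relating the three quantities (all numbers and names are invented; this is just the standard throughput/utilization arithmetic, not material from the slides):

    // Hypothetical performance measurement over an observation window.
    class PerfMetrics {
        public static void main(String[] args) {
            double windowSeconds = 60.0;     // observation window
            int tasksCompleted = 1200;       // tasks finished within the window
            double busySeconds = 45.0;       // time the system spent serving tasks
            double avgResponseMs = 180.0;    // observed average response time
            double worstResponseMs = 310.0;  // observed worst-case response time

            double throughput = tasksCompleted / windowSeconds;  // tasks per unit of time
            double utilization = busySeconds / windowSeconds;    // how busy the system is

            System.out.printf("throughput=%.1f/s utilization=%.0f%% avg=%.1fms worst=%.1fms%n",
                    throughput, utilization * 100, avgResponseMs, worstResponseMs);
        }
    }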