Software Engineering Testing


Page 1: Software Engineering Testing

Based on a presentation by Mira Balaban, Department of Computer Science, Ben-Gurion University
Ian Sommerville: http://www.comp.lancs.ac.uk/computing/resources/IanS/SE7/Presentations/index.html
Dan Berry: http://se.uwaterloo.ca/~dberry/COURSES/software.engr/lectures.pdf/
Glenford J. Myers: The Art of Software Testing, Second Edition

Software Engineering: Testing

Software Engineering 2011, Department of Computer Science, Ben-Gurion University

Page 2: Software Engineering Testing

Software Engineering, 2011 Software Testing 2

Software Verification and Validation

Verification and validation is intended to show that a system conforms to its specification and meets the requirements of the system customer. It involves checking and review processes and system testing.

System testing involves executing the system with test cases that are derived from the specification of the real data to be processed by the system.

Page 3: Software Engineering Testing

Verification vs. Validation

Verification: "Are we building the product right?" Tests against a simulation. The software should conform to its specification.

Validation: "Are we building the right product?" Tests against a real-world situation. The software should do what the user really requires.

Page 4: Software Engineering Testing

Verification & Validation Goals

Establish confidence that the software is fit for purpose. Two principal objectives:
- the discovery of defects in a system;
- the assessment of whether or not the system is useful and usable in an operational situation.

This does not mean:
- completely free of defects;
- correct.

Rather, it must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed.

Page 5: Software Engineering Testing

V & V Confidence

Depends on the system's purpose, user expectations and marketing environment.

- Software function: the level of confidence depends on how critical the software is to an organization.
- User expectations: users may have low expectations of certain kinds of software.
- Marketing environment: getting a product to market early may be more important than finding defects in the program.

Page 6: Software Engineering Testing

Static and Dynamic Verification

Static verification: concerned with analysis of the static system representation to discover problems.
- Software inspections, which may be supplemented by tool-based document and code analysis
- Static analysis
- Formal verification

Dynamic V & V: concerned with exercising and observing product behaviour, using software testing. The system is executed with test data and its operational behaviour is observed.

Page 7: Software Engineering Testing

Static Verification: Code Inspections

A formalized approach to document reviews, intended explicitly for defect detection (not correction). Defects may be logical errors, anomalies in the code that might indicate an erroneous condition (e.g. an uninitialized variable), or non-compliance with standards.

Inspections can check conformance with a specification, but not conformance with the customer's real requirements. Inspections cannot check non-functional characteristics such as performance, usability, etc.

Usually done by a group of programmers. The comments are addressed to the program, not the programmer!

Page 8: Software Engineering Testing

Static Verification: Code Inspections

An error checklist for inspections:
- The checklist is largely language independent.
- Checklists sometimes (unfortunately) concentrate more on issues of style than on errors (for example, "Are comments accurate and meaningful?" and "Are if-else, code blocks, and do-while groups aligned?"), and the error checks are too nebulous to be useful (such as "Does the code meet the design requirements?").

Beneficial side effects:
- The programmer usually receives feedback concerning programming style, choice of algorithms, and programming techniques.
- The other participants gain in a similar way by being exposed to another programmer's errors and programming style.

Page 9: Software Engineering Testing

Static Verification: Code Inspections

Page 10: Software Engineering Testing

Static Verification: Code Inspections - Example

Data flow errors: Will the program, module, or subroutine eventually terminate? For example, what happens if NOTFOUND never becomes false?

Comparison errors: Does the way in which the compiler evaluates Boolean expressions affect the program? The statement below may be acceptable for compilers that end the test as soon as one side of an and is false, but may cause a division-by-zero error with other compilers:

    if ((x != 0) && ((y / x) > z))
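Java's && operator is guaranteed to short-circuit, so a guard of the form x != 0 && (y / x) > z is always safe there: the division is never evaluated when x is zero. A small illustrative sketch (the method name is hypothetical, not from the slides):

```java
// Demonstrates why the guarded comparison is safe under
// short-circuit evaluation: when x == 0, the left operand of &&
// is false and the division on the right is never executed.
public class ShortCircuitDemo {
    static boolean safeCompare(int x, int y, int z) {
        return x != 0 && (y / x) > z;   // right side skipped when x == 0
    }

    public static void main(String[] args) {
        System.out.println(safeCompare(0, 10, 5));  // false, no ArithmeticException
        System.out.println(safeCompare(2, 10, 3));  // 10/2 = 5 > 3, so true
    }
}
```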

Page 11: Software Engineering Testing

Static Verification: Automated Static Analysis

Static analyzers are software tools for source text processing. A static analyzer:
- is a collection of algorithms and techniques used to analyze source code in order to automatically find bugs;
- parses the program text, tries to discover potentially erroneous conditions, and brings these to the attention of the V & V team.

Very effective as an aid to inspections; a supplement to, but not a replacement for, inspections.

Page 12: Software Engineering Testing

Examples of Static Analyzer Output

Examples from PVS-Studio's static code analyzer for C/C++ (http://www.viva64.com/en/a/0069/#ID0EY6BI):

V523: The 'then' statement is equivalent to the 'else' statement:

    if (Condition) {
        Block Statement A
    } else {
        Block Statement A
    }

V501: There are identical sub-expressions 'p_transactionId' to the left and to the right of the '!=' operator:

    p_transactionId != p_transactionId

V532: Consider inspecting the statement of '*pointer++' pattern. Probably meant: '(*pointer)++'. (mpeg2_dec, umc_mpeg2_dec.cpp, line 59):

    static IppTableInitAlloc(Ipp32s *tbl, ...)
    {
        ...
        for (i = 0; i < num_tbl; i++) {
            *tbl++;
        }
    }

Page 13: Software Engineering Testing

Static Verification: Formal Verification Methods

Formal methods can be used when a mathematical specification of the system is available. They form the ultimate static verification technique and involve detailed mathematical analysis of the specification. One may develop formal arguments that a program conforms to its mathematical specification.
- Model checking
- Predicate abstraction
- Termination analysis

[Figure: a model (the system design) and a formal specification are fed into a model checking tool, whose output states whether the model satisfies the specification.]

Page 14: Software Engineering Testing

Static Verification: Arguments For/Against Formal Methods

For:
- Producing a mathematical specification requires a detailed analysis of the requirements, and this is likely to uncover errors.
- Can detect implementation errors before testing, when the program is analyzed alongside the specification.

Against:
- Require specialized notations that cannot be understood by domain experts.
- Very expensive to develop a specification, and even more expensive to show that a program meets that specification.
- It may be possible to reach the same level of confidence in a program more cheaply using other V & V techniques.

Page 15: Software Engineering Testing

Dynamic V & V: Software Testing

According to Boehm (1975), programmers in large software projects typically spend their time as follows:
- 45-50% program checkout
- 33% program design
- 20% coding

Page 16: Software Engineering Testing

Software Testing

Yet, in spite of this checkout expense, delivered "verified" and "validated" code is still unreliable. We have attempted to make the design and coding processes more systematic.

What is software testing?
- The only validation technique for non-functional requirements, as the software has to be executed to see how it behaves.
- Should be used in conjunction with static verification to provide full V & V coverage.

Page 17: Software Engineering Testing

Two Types of Testing

Defect testing: tests designed to discover system defects. A successful defect test is one which reveals the presence of defects in a system.

Validation testing: intended to show that the software meets its requirements. A successful test is one that shows that a requirement has been properly implemented.

Page 18: Software Engineering Testing

Psychology of Testing (1)

A program is its programmer's baby! Trying to find errors in one's own program is like trying to find defects in one's own baby. It is best to have someone other than the programmer do the testing. The tester must be a highly skilled, experienced professional. It helps if he or she possesses a diabolical mind.

Page 19: Software Engineering Testing

Psychology of Testing (2)

Testing achievements depend a lot on the goals. Myers (1979) says:
- If your goal is to show the absence of errors, you will not discover many.
- If you are trying to show the program correct, your subconscious will manufacture safe test cases.
- If your goal is to show the presence of errors, you will discover a large percentage of them.

"Testing is the process of executing a program with the intention of finding errors." (G. Myers)

Page 20: Software Engineering Testing

Testing: When? What? How?

- When? Testing along the software development process.
- What? What to test?
- How? How to conduct the test? A discussion of integration testing.

Page 21: Software Engineering Testing

Testing Along Software Development (When?)

Page 22: Software Engineering Testing

The Testing Process

Page 23: Software Engineering Testing

Black-box and White-box Testing: Black-box Testing

Also named: closed-box, data-driven, input/output-driven, behavioral testing.
- Provide both valid and invalid inputs; the output is matched with the expected result (from the specifications).
- The tester has no knowledge of the internal structure of the program.
- It is impossible to find all errors using this approach.
- Test all possible types of inputs (including invalid ones like characters, floats, negative integers, etc.).

Page 24: Software Engineering Testing

Black-box and White-box Testing: White-box Testing

Also named: open-box, clear-box, glass-box, logic-driven.
- Examine the internal structure of the program.
- Test all possible paths of control flow.

Capture:
- errors of omission - neglected specification;
- errors of commission - behavior not defined by the specification.

Page 25: Software Engineering Testing

Testing Stages

- Unit testing: individual components are tested.
- Module testing: related collections of dependent components are tested.
- Sub-system testing: modules are integrated into sub-systems and tested. The focus here should be on interface testing.
- System testing: testing of the system as a whole; testing of emergent properties.
- Acceptance testing: testing with customer data to check that it is acceptable.

[Figure: the testing stages - component testing, integration testing, system testing, user testing.]

Page 26: Software Engineering Testing

Testing Stages

Component testing (unit and module):
- Testing of individual program components in isolation;
- Usually the responsibility of the component developer (except sometimes for critical systems);
- Tests are derived from the developer's experience;
- Based on program structure: structural tests, white-box testing.

Page 27: Software Engineering Testing

Testing Stages (cont.)

Integration testing: testing of groups of components integrated to create a system or sub-system. The test team has access to the system source code. The system is tested as components are integrated.

System testing (release testing): tests are based on a system specification. The test team tests the complete system to be delivered.

Acceptance testing: tests involving the customer, based on customer requirements. Usually black-box testing.

Page 28: Software Engineering Testing

Component Testing (1)

Component or unit testing is the process of testing individual components in isolation. It is a defect testing process. Components may be:
- individual functions or methods within an object;
- object classes with several attributes and methods;
- composite components with defined interfaces used to access their functionality.

Page 29: Software Engineering Testing

Component Testing (2)

Ideal object-class testing: complete test coverage of a class involves
- testing all operations associated with an object;
- setting and interrogating all object attributes;
- exercising the object in all possible states.

Inheritance makes it more difficult to design object-class tests, as the information to be tested is not localized.
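The class-testing checklist above can be sketched with a small hypothetical Account class (not from the slides): each operation is invoked, each attribute is interrogated, and each state (initial, funded, frozen) is exercised.

```java
// Sketch of ideal object-class testing: every operation of a
// hypothetical Account class is exercised in every reachable state.
public class AccountTest {
    static class Account {
        private int balance = 0;
        private boolean frozen = false;
        void deposit(int amount)  { if (!frozen && amount > 0) balance += amount; }
        void withdraw(int amount) { if (!frozen && amount > 0 && amount <= balance) balance -= amount; }
        void freeze()             { frozen = true; }
        int getBalance()          { return balance; }
    }

    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError(what);
    }

    public static void main(String[] args) {
        Account a = new Account();
        check(a.getBalance() == 0, "initial state");       // interrogate initial attribute
        a.deposit(100);                                    // state: funded
        check(a.getBalance() == 100, "deposit");
        a.withdraw(150);                                   // invalid operation: exceeds balance
        check(a.getBalance() == 100, "overdraw rejected");
        a.freeze();                                        // state: frozen
        a.deposit(50);                                     // operations refused when frozen
        check(a.getBalance() == 100, "frozen account unchanged");
        System.out.println("all account-state tests passed");
    }
}
```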

Page 30: Software Engineering Testing

Integration Testing

Involves building a system from its components and testing it for problems that arise from component interactions.
- Top-down integration: develop the skeleton of the system and populate it with components.
- Bottom-up integration: integrate infrastructure components, then add functional components.

To simplify error localisation, systems should be incrementally integrated.

Page 31: Software Engineering Testing

System/Release Testing (1)

Testing a release of a system that will be distributed to customers. The primary goal is to increase the supplier's confidence that the system meets its requirements. Release testing is usually black-box or functional testing:
- based on the system specification only;
- testers do not have knowledge of the system implementation.

Part of release testing may involve testing the emergent properties of a system, such as performance and reliability.

Page 32: Software Engineering Testing

System Testing (1)

Usability testing:
- Has each user interface been tailored to the intelligence, educational background, and environmental pressures of the end user?
- Are the outputs of the program meaningful, non-abusive, and devoid of computer gibberish?

Security testing: attempting to devise test cases that subvert the program's security checks. For example, try to get around an operating system's memory-protection mechanism, or to subvert a database management system's data-security mechanisms.

Volume testing: heavy volumes of data. If a program is supposed to handle files spanning multiple volumes, enough data is created to cause the program to switch from one volume to another.

Page 33: Software Engineering Testing

System Testing (2)

Stress testing:
- The tester attempts to stress or load an aspect of the system to the point of failure.
- The tester identifies peak load conditions at which the program will fail to handle required processing loads within required time spans.
- Stress testing has a different meaning in each industry where it is used.
- Stressing the system often causes defects to come to light.
- Stressing the system tests failure behavior: systems should not fail catastrophically. This is particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.

Page 34: Software Engineering Testing

System Testing (3)

Performance testing: determine if the system meets the stated performance criteria - response times and throughput rates under certain workload and configuration conditions. E.g., "A login request shall be responded to in 1 second or less under a typical daily load of 1000 requests per minute."

Recovery testing: programs such as operating systems and database management systems have recovery objectives that state how the system is to recover from programming errors, hardware failures, and data errors.
- Programming errors can be purposely injected into a system to determine whether it can recover from them.
- Hardware failures such as memory parity errors or I/O device errors can be simulated.
- Data errors such as noise on a communications line or an invalid pointer in a database can be created purposely or simulated to analyze the system's reaction.

Page 35: Software Engineering Testing

Test Case Design (What?)

Involves designing the test cases (inputs and outputs) used to test the system. The goal of test case design is to create a set of tests that are effective in validation and defect testing. Design approaches:
- Requirements-based testing (black-box): mainly in release testing;
- Partition testing, based on the structure of the input or output domain: in component and release testing;
- Structural testing (based on code, white-box): in component testing.

Page 36: Software Engineering Testing

Finding What to Test (1)

Only exhaustive testing can show that a program is free from defects, but exhaustive testing is impossible - it leads to combinatorial explosion. We therefore need to group cases into equivalence classes: two test cases are in the same equivalence class if they find the same errors. Try to find a set of test cases with exactly one element from each equivalence class.

Page 37: Software Engineering Testing

Finding What to Test (2)

- Each new test case added to the suite should expose at least one (and as many as possible) previously undetectable errors.
- Each tested thing must appear in at least one, and preferably at most one, test case.
- A test case may, and should, test more than one thing.
- The hope is to get enough test cases to find all errors - high test coverage.
- A certain amount of skill and deviousness is needed to generate test cases.

Page 38: Software Engineering Testing

Finding What to Test (3)

Content of a test case:
- describe what is tested;
- give input data;
- give expected output data;
- further constraints, if any.

Page 39: Software Engineering Testing

Requirements-Based Testing

A general principle of requirements engineering is that requirements should be testable. Requirements-based testing is a validation testing technique where you consider each requirement and derive a set of tests for that requirement.

Use cases can be a basis for deriving the tests for a system. They help identify operations between an actor and the system, or internal system operations. For each operation there is a contract that specifies the input of the operation and the preconditions and postconditions for the operation. The contracts specify test cases.

Page 40: Software Engineering Testing

Partition-Based Testing (1)

Input data and output results often fall into different classes where all members of a class are related. Each of these classes is an equivalence partition, or domain, where the program behaves in an equivalent way for each class member. Test cases should be chosen from each partition.

Thoroughly test all input options described in the specifications, in all combinations, including that of the same input as standard input and as file input, if that is an option.
- Test boundaries of input.
- Test boundaries of output.
- Test each invalid input.
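As a sketch of partition and boundary selection, consider a hypothetical grading function (not from the slides): scores below 0 or above 100 are invalid, 0-59 fail, 60-100 pass. One test is taken from each equivalence partition, plus a test at every partition boundary.

```java
// Equivalence partitions and boundary values for a hypothetical
// grading function: invalid (<0), fail (0-59), pass (60-100), invalid (>100).
public class PartitionDemo {
    static String grade(int score) {
        if (score < 0 || score > 100) return "invalid";
        return score >= 60 ? "pass" : "fail";
    }

    static void check(String actual, String expected) {
        if (!actual.equals(expected))
            throw new AssertionError(actual + " != " + expected);
    }

    public static void main(String[] args) {
        // one representative value per partition
        check(grade(-5), "invalid");
        check(grade(30), "fail");
        check(grade(75), "pass");
        check(grade(200), "invalid");
        // boundary values of each partition
        check(grade(-1), "invalid");
        check(grade(0), "fail");
        check(grade(59), "fail");
        check(grade(60), "pass");
        check(grade(100), "pass");
        check(grade(101), "invalid");
        System.out.println("all partition tests passed");
    }
}
```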

Page 41: Software Engineering Testing

Partition-Based Testing (3)

Page 42: Software Engineering Testing

Partition-Based Testing (4)

Example: testing guidelines for sequences:
- Test software with sequences which have only a single value.
- Use sequences of different sizes in different tests.
- Derive tests so that the first, middle and last elements of the sequence are accessed.
- Test with sequences of zero length.
- Test with sequences of maximal length or nesting, if one exists.
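The sequence guidelines above can be sketched against a hypothetical max() routine (the routine and its tests are illustrative, not from the slides):

```java
import java.util.OptionalInt;
import java.util.stream.IntStream;

// Applies the sequence-testing guidelines to a max() routine:
// single-element input, the maximum at first/middle/last position,
// and the zero-length sequence.
public class SequenceDemo {
    static OptionalInt max(int[] xs) {
        return IntStream.of(xs).max();  // empty for a zero-length sequence
    }

    public static void main(String[] args) {
        if (max(new int[]{7}).getAsInt() != 7) throw new AssertionError();        // single value
        if (max(new int[]{9, 2, 5}).getAsInt() != 9) throw new AssertionError();  // max is first element
        if (max(new int[]{2, 9, 5}).getAsInt() != 9) throw new AssertionError();  // max is middle element
        if (max(new int[]{2, 5, 9}).getAsInt() != 9) throw new AssertionError();  // max is last element
        if (max(new int[]{}).isPresent()) throw new AssertionError();             // zero-length sequence
        System.out.println("all sequence tests passed");
    }
}
```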

Page 43: Software Engineering Testing

Structural Testing - Code-Based (1)

Sometimes called white-box testing. Used mainly in component testing, to distinguish it from functional (requirements-based, "black-box") testing, which tests product functionality against its specification. Test cases are derived according to program structure: knowledge of the program is used to identify additional test cases. The objective is to exercise all program statements (not all path combinations).

Page 44: Software Engineering Testing

Structural Testing - Code-Based (2)

Test each conditional branch in each direction. Test each path at least once, if reasonable. For loops, test repetition of:
- 0 times;
- 1 time;
- a representative number of times;
- the maximum number of times (if one exists).

For loops - termination verification: try to find invariants that guarantee termination, e.g., show for some loop variable that its value decreases with every round and that the value domain for this variable is well founded (no infinite descending chains): x' = x - i, where x' is the value in the next loop round, i > 0, and x ranges over the natural numbers.
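The termination argument can be made concrete with a small hypothetical loop (not from the slides) whose variable strictly decreases by a positive step over the naturals, so the loop must terminate:

```java
// Illustrates the termination invariant x' = x - i with i > 0:
// x strictly decreases each round and the loop guard bounds it
// below by 0, so the loop cannot run forever.
public class TerminationDemo {
    static int countRounds(int x, int i) {
        if (i <= 0) throw new IllegalArgumentException("step must be positive");
        int rounds = 0;
        while (x > 0) {   // variant: x decreases by i every round
            x -= i;
            rounds++;
        }
        return rounds;
    }

    public static void main(String[] args) {
        System.out.println(countRounds(10, 3)); // 10 -> 7 -> 4 -> 1 -> -2: 4 rounds
    }
}
```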

Page 45: Software Engineering Testing

Structural Testing - Code-Based (3)

Test sensitivities to particular values, e.g.:
- division by 0;
- overflow;
- underflow;
- subscript boundaries;
- null pointers.

When you have a large range, use boundary, typical, and sensitive values.

Page 46: Software Engineering Testing

Structural Testing - Code-Based (4)

Thoroughly test all input options described in the specifications, in all combinations, including that of the same input as standard input and as file input, if that is an option. Test all command-line options in all combinations. In the above, "in all combinations" must be taken with a grain of salt, as it could lead to combinatorial explosion.

Page 47: Software Engineering Testing

Structural Testing - Code-Based (5)

If an analysis of the code shows that some options are truly independent, then the number of combinations can be reduced. The risk: a mistake could be made in determining independence - the bug could be the non-independence! Solution: verify the independence assumption with some test cases, and then assume it.

Page 48: Software Engineering Testing

Error Testing

- Test all specified error conditions.
- Test all algorithm-implied errors.
- Try to crash the program.

Page 49: Software Engineering Testing

Regression Testing

When you are doing an enhancement, you should run all old tests - even ones that are invalid for their original purpose; just provide new expected results. An expensive process; it requires automation.

Page 50: Software Engineering Testing

Testing Guidelines (Whittaker 02)

Testing guidelines are hints for the testing team to help them choose tests that will reveal defects in the system:
- Choose inputs that force the system to generate all error messages;
- Design inputs that cause buffers to overflow;
- Repeat the same input or input series several times;
- Force invalid outputs to be generated;
- Force computation results to be too large or too small.

Page 51: Software Engineering Testing

Testing Policies - for Choosing Tests

Testing policies may define the approach to be used in selecting system tests:
- All functions accessed through menus should be tested;
- Combinations of functions accessed through the same menu should be tested;
- Where user input is required, all functions must be tested with correct and incorrect input.

Page 52: Software Engineering Testing

Test Automation (How?)

Testing is an expensive, difficult activity, so testing workbenches provide a range of tools to reduce the time required and the total testing costs. Systems such as JUnit support the automatic execution of tests:
- comparing output with expected output;
- reporting when there is no match;
- combining tests into test suites.

There are tools for generating test cases from code (each branch in each direction, each path once), and tools for running a program against test cases. Most testing workbenches are open systems, because testing needs are organisation-specific. They are sometimes difficult to integrate with closed design and analysis workbenches.
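To make the automatic-execution behavior concrete (compare output with expected output, report mismatches, combine tests into suites), here is a hand-rolled miniature runner; all names (MiniRunner, Case, runSuite) are illustrative, and none of this is JUnit's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// A miniature test runner: each Case pairs a test body with its
// expected output; runSuite executes every case and reports mismatches.
public class MiniRunner {
    static class Case {
        final String name;
        final Supplier<Object> body;
        final Object expected;
        Case(String name, Supplier<Object> body, Object expected) {
            this.name = name; this.body = body; this.expected = expected;
        }
    }

    // Runs every case in the suite; returns one report line per mismatch.
    static List<String> runSuite(List<Case> suite) {
        List<String> failures = new ArrayList<>();
        for (Case c : suite) {
            Object actual = c.body.get();
            if (!actual.equals(c.expected))
                failures.add(c.name + ": expected " + c.expected + " but got " + actual);
        }
        return failures;
    }

    public static void main(String[] args) {
        List<Case> suite = List.of(
            new Case("addition", () -> 2 + 2, 4),
            new Case("concatenation", () -> "a" + "b", "ab"));
        List<String> failures = runSuite(suite);
        System.out.println(failures.isEmpty() ? "all tests passed" : failures.toString());
    }
}
```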

Page 53: Software Engineering Testing

Integration Testing (1)

Integration testing is testing how a system consisting of more than one module works together, i.e., testing the interfaces. Assume a system structure having a root component (with no client components), inner components, and leaf components (with no dependencies on other components).

Two major kinds of integration testing:
- big-bang;
- incremental, with several orders possible: top-down, bottom-up, critical piece first, or first integrating functional subsystems and then integrating the subsystems in separate phases using any of the basic strategies.

Page 54: Software Engineering Testing

Integration Testing (2)

For a given module m, a driver for m is a module that repeatedly calls m with test data, and maybe even checks results for each test case.

A stub replacing m is a skeletal module with an interface identical to that of m, and a body that either does nothing or prints out the name of m, maybe with parameter values. A stub may return pre-cooked results for pre-cooked inputs for planned test cases, maybe even testing that the actual input is the same as planned.

Page 55: Software Engineering Testing

Driver and Stub

A test driver is a routine that calls a particular component and passes test cases to it (it should also report the results of the test cases). A test stub is a special-purpose program used to answer the calling sequence and pass back output data that lets the testing process continue.

Module:

    public class RandInt {
        ...
        public int generateRandInt() {
            return (int)(Math.random() * 100 + 1);
        }
        ...
    }

Driver:

    public class RandIntTest {
        public static void main(String[] args) {
            RandInt myRand = new RandInt();
            System.out.println("My first rand int is: " + myRand.generateRandInt());
        }
    }

Stub:

    public class RandInt {
        ...
        public int generateRandInt() {
            return 1;
        }
        ...
    }

Page 56: Software Engineering Testing

Integration Testing (3): Big-Bang Testing

Each module is tested in isolation under a driver and with a stub for each module it invokes. And then one day... cross your fingers... do a big-bang test of the whole bloody system!

Page 57: Software Engineering Testing

Integration Testing (4): Big-Bang Testing

Advantage:
- Every module is thoroughly tested in isolation.

Disadvantages:
- A driver and stubs must be written for each module (except no driver for the root and no stubs for leaves).
- No error isolation: if an error occurs, in which module or interface is it?
- Interface testing must wait until all modules are programmed: critical interface and major design problems are found very late into the project, after all code has been committed!

Page 58: Software Engineering Testing

Integration Testing (5): Incremental Integration

There are several specific orders of adding modules one by one to an ever-growing system until the whole system is obtained. A module is tested as it is added, in the context of whatever of its callers and callees are already in the system, with drivers and stubs for the callers and callees, respectively, that are not yet in the system.

Page 59: Software Engineering Testing

Integration Testing (6) - Incremental

Orders: bottom-up, top-down, sandwich, modified sandwich.

General advantages:
- Error isolation: assuming that all previously added modules were thoroughly tested, any errors that crop up should be in the module being added or in its interface.
- Avoids making both a driver and a stub for each module: if there are no cyclic dependencies, then at most one is needed for each module.

General disadvantage:
- Some modules (which ones depends on the order of adding modules) are not tested thoroughly in isolation, because there is not a driver and a stub for every module.

Page 60: Software Engineering Testing

Bottom-Up Integration

[Figures: a sample 12-module program, and an intermediate state in the bottom-up test.]

Page 61: Software Engineering Testing

Integration Testing (7) - Incremental: Bottom-Up

Test leaf modules in isolation with drivers. Test any non-leaf module m in the context of all of its callees (previously tested!), with a driver for m.

Advantages:
- Thorough testing of each leaf module.
- Abstract types tend to be tested early.

Disadvantages:
- Must make drivers for every module except the root.
- Major design flaws and serious interface problems involving high modules are not caught until late; this could cause a major rewrite of everything that has been tested before.

Page 62: Software Engineering Testing

Top-Down Integration

[Figures: a sample 12-module program, the second step in the top-down test, and an intermediate state in the top-down test.]

Page 63: Software Engineering Testing

Integration Testing (8) - Incremental: Top-Down

Only the root is tested in isolation, with stubs for its callees. Test any non-root module m by having it replace its stub in its callers, with stubs for its own callees.

Advantages:
- Can test during top-down programming.
- Major flaws and major interface problems are found early.
- No drivers needed, as modules are tested in the context of their actual callers.

Disadvantages:
- Stubs can be difficult to write for planned test cases.
- Lower modules are not tested thoroughly; they are tested only in the ways used by upper modules, not as fully as possible against their specs.

Page 64: Software Engineering Testing

Integration Testing (9) - Incremental: Sandwich

Root and leaf modules are tested in isolation, as in the top-down and bottom-up methods. Work from the root to the middle top-down, and from the leaves to the middle bottom-up.

Advantages:
- Root and leaf modules are tested thoroughly.
- Major design flaws and major interface problems are found early.
- Most abstract data types are tested early.
- No stubs and no drivers are needed for middle modules.

Disadvantages:
- Stubs are needed for upper modules and drivers for lower modules.
- Upper-middle modules may not be thoroughly tested.

Page 65: Software Engineering Testing

Integration Testing (10) - Incremental: Modified Sandwich

Do sandwich integration, but also test upper-middle modules in isolation, using drivers and actual callees.

Advantages:
- Those of sandwich, plus removal of its second disadvantage.

Disadvantages:
- The first disadvantage of sandwich, plus extra drivers to write.

Page 66: Software Engineering Testing

Decay of Corrected Programs

Myers (79) observes that as the number of detected errors increases in a piece of software, so does the probability of the existence of more, undetected errors. Thus testing and debugging are no substitute for programming it right in the first place.

Belady and Lehman (76, 85) have shown that, assuming a non-zero probability that an attempted correction introduces new bugs, any system which is continually fixed eventually decays beyond usability.