software testing part

Software Testing Techniques
Preeti Mishra, Course Instructor

Upload: preeti-mishra

Post on 18-Aug-2015

Category: Engineering

TRANSCRIPT

Page 1: Software testing part

Software Testing Techniques

Preeti Mishra
Course Instructor

Page 2: Software testing part

Observations about Testing

• “Testing is the process of executing a program with the intention of finding errors.” – Myers

• “Testing can show the presence of bugs but never their absence.” - Dijkstra

Page 3: Software testing part

Good Testing Practices

• A good test case is one that has a high probability of detecting an undiscovered defect, not one that shows that the program works correctly

• It is impossible to test your own program
• A necessary part of every test case is a description of the expected result

Page 4: Software testing part

Software testing axioms

1. It is impossible to test a program completely.
2. Software testing is a risk-based exercise.
3. Testing cannot show the absence of bugs.
4. The more bugs you find, the more bugs there are.
5. Not all bugs found will be fixed.
6. It is difficult to say when a bug is indeed a bug.
7. Specifications are never final.
8. Software testers are not the most popular members of a project.
9. Software testing is a disciplined and technical profession.

Page 5: Software testing part

YAHOO!

Page 6: Software testing part

Characteristics of Testable Software

• Operable – The better it works (i.e., better quality), the easier it is to test
• Observable – Incorrect output is easily identified; internal errors are automatically detected
• Controllable – The states and variables of the software can be controlled directly by the tester
• Decomposable – The software is built from independent modules that can be tested independently

Page 7: Software testing part

Characteristics of Testable Software (continued)

• Simple – The program should exhibit functional, structural, and code simplicity
• Stable – Changes to the software during testing are infrequent and do not invalidate existing tests
• Understandable – The architectural design is well understood; documentation is available and organized

Page 8: Software testing part

Test Characteristics

• A good test has a high probability of finding an error – the tester must understand the software and how it might fail
• A good test is not redundant – testing time is limited; one test should not serve the same purpose as another test
• A good test should be “best of breed” – tests that have the highest likelihood of uncovering a whole class of errors should be used
• A good test should be neither too simple nor too complex – each test should be executed separately; combining a series of tests could cause side effects and mask certain errors

Page 9: Software testing part

Criteria for Completion of Testing

• When are we done testing? (Are we there yet?)
• How to answer this question is still a research question
  1. One view: testing is never done… the burden simply shifts from the developer to the customer
  2. Or: testing is done when you run out of time or money
  3. Or use a statistical model:
     – Assume that errors decay logarithmically with testing time
     – Measure the number of errors in a unit period
     – Fit these measurements to a logarithmic curve
     – Can then say: “with our experimentally valid statistical model we have done sufficient testing to say that with 95% confidence the probability of 1000 CPU hours of failure-free operation is at least 0.995”
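The curve-fitting step of this statistical model can be sketched in a few lines. The defect counts below are made up for illustration (they are not from the slides); the fit is an ordinary least-squares fit of cumulative defects to y = a + b·ln(t):

```python
import math

# Hypothetical cumulative defect counts per unit test period (illustrative only)
periods = [1, 2, 3, 4, 5, 6]
cumulative_defects = [30, 48, 59, 67, 72, 76]

# Ordinary least-squares fit of y = a + b*ln(t)
xs = [math.log(t) for t in periods]
mean_x = sum(xs) / len(xs)
mean_y = sum(cumulative_defects) / len(cumulative_defects)
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, cumulative_defects)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

def predicted(t):
    """Predicted cumulative defects after t test periods."""
    return a + b * math.log(t)
```

If newly observed defect counts keep landing on the flattening part of the fitted curve, the model supports a stopping argument of the kind quoted above.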

Page 10: Software testing part

White-box Testing

Page 11: Software testing part

White-box Testing

• Uses the control structure part of component-level design to derive the test cases
• These test cases:
  – Guarantee that all independent paths within a module have been exercised at least once
  – Exercise all logical decisions on their true and false sides
  – Execute all loops at their boundaries and within their operational bounds
  – Exercise internal data structures to ensure their validity

“Bugs lurk in corners and congregate at boundaries”
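A small concrete sketch, using a hypothetical `clamp_sum` function: the four test cases below exercise each decision on both its true and false sides, and run the loop at its boundaries (zero, one, and many iterations):

```python
def clamp_sum(values, limit):
    """Sum the positive values, clamping the result at limit (hypothetical example)."""
    total = 0
    for v in values:          # loop: exercise with 0, 1, and many elements
        if v > 0:             # decision: exercise both true and false sides
            total += v
    return min(total, limit)  # clamping decision: exercise below and above the limit

# White-box test cases
assert clamp_sum([], 10) == 0        # loop body never entered
assert clamp_sum([5], 10) == 5       # single iteration, positive branch taken
assert clamp_sum([5, -3], 10) == 5   # negative value: false side of the decision
assert clamp_sum([6, 7], 10) == 10   # sum exceeds limit: clamp branch taken
```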

Page 12: Software testing part

Black-box Testing

Page 13: Software testing part

Black-box Testing

• Complements white-box testing by uncovering different classes of errors
• Focuses on the functional requirements and the information domain of the software
• Used during the later stages of testing, after white-box testing has been performed
• The tester identifies a set of input conditions that will fully exercise all functional requirements for a program
• The test cases satisfy the following:
  – Reduce, by a count greater than one, the number of additional test cases that must be designed to achieve reasonable testing
  – Tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific task at hand

Page 14: Software testing part

Black-box Testing Categories

• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database access
• Behavior or performance errors
• Initialization and termination errors

Page 15: Software testing part

Questions Answered by Black-box Testing

• How is functional validity tested?
• How are system behavior and performance tested?
• What classes of input will make good test cases?
• Is the system particularly sensitive to certain input values?
• How are the boundary values of a data class isolated?
• What data rates and data volume can the system tolerate?
• What effect will specific combinations of data have on system operation?

Page 16: Software testing part

Traditional Testing v/s Object-Oriented Testing

Page 17: Software testing part

Levels of Testing

• Irrespective of the software development paradigm used, testing is performed at four important levels

Page 18: Software testing part

Unit Testing

• A unit is the smallest testable part of an application.
• In conventional/procedural programming, a unit may be an individual program, module, function, or procedure.
• In object-oriented programming, the smallest unit is an encapsulated class, which may be a base/super class, an abstract class, or a derived/child class.

Page 19: Software testing part

Integration Testing

• After the units are individually tested successfully, the next level of testing, i.e. integration testing, proceeds.
• Integration testing in conventional software development:
  – progressively integrates the tested units, either incrementally or non-incrementally, to check whether the software units work properly in an integrated mode.
• In object-oriented software development:
  – integration testing verifies the interaction among classes (interclass).
  – The relationships among classes are basic characteristics of an object-oriented system and define the nature of the interaction among classes and objects at runtime.

Page 20: Software testing part

System Testing (Validation Testing)

• Validation testing follows integration testing. It focuses on user-visible actions and demonstrates conformity with requirements.
• Black-box testing is the testing type used for validation purposes. Since black-box testing is used for validating the functionality of the software, it is also called functional testing.
• Equivalence partitioning and boundary value analysis are the two broad categories of black-box testing.
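As a small illustration of both categories, consider a hypothetical `exam_grade` function with a pass mark of 50: equivalence partitioning gives four classes (invalid-low, fail, pass, invalid-high), and boundary value analysis places one test at each edge of each class:

```python
def exam_grade(score):
    """Map an exam score in 0..100 to pass/fail (hypothetical example, pass mark 50)."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Boundary values: one test at each edge of each equivalence class
assert exam_grade(0) == "fail"     # lower boundary of the valid range
assert exam_grade(49) == "fail"    # just below the pass boundary
assert exam_grade(50) == "pass"    # the pass boundary itself
assert exam_grade(100) == "pass"   # upper boundary of the valid range
for invalid in (-1, 101):          # edges of the invalid partitions
    try:
        exam_grade(invalid)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

One representative per partition plus its boundaries is usually enough; additional values inside the same partition would be redundant tests.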

Page 21: Software testing part

System Testing (Validation Testing)

• There is no distinction between conventional and object-oriented software with respect to validation testing.
• System testing verifies that all elements (hardware, people, databases) are integrated properly, so as to ensure that the overall product meets its requirements and achieves the expected performance.
• System testing also deals with non-functional requirements of the software, through recovery testing, security testing, stress testing and performance testing.

Page 22: Software testing part
Page 23: Software testing part

Object-Oriented Testing

Page 24: Software testing part

Object-Oriented Testing

• When should testing begin?
• Analysis and Design:
  – Testing begins by evaluating the OOA and OOD models
  – How do we test OOA models (requirements and use cases)?
  – How do we test OOD models (class and sequence diagrams)?
  – Structured walk-throughs, prototypes
  – Formal reviews of correctness, completeness and consistency
• Programming:
  – How does OO make testing different from procedural programming?
  – The concept of a ‘unit’ broadens due to class encapsulation
  – Integration focuses on classes and their execution across a use-case scenario or a thread
  – Validation may still use conventional black-box methods

Page 25: Software testing part

Testing OO Code

(Diagram: class tests → integration tests → validation tests → system tests)

Page 26: Software testing part

Class (Unit) Testing

• The smallest testable unit is the encapsulated class
• Test each operation as part of a class hierarchy, because the class hierarchy defines its context of use
• Approach:
  – Test each method (and constructor) within a class
  – Test the state behavior (attributes) of the class between methods
• Class testing focuses on each method, then on designing sequences of methods to exercise the states of a class
• But white-box testing can still be applied
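A minimal sketch of this approach, using a hypothetical `Counter` class: each method is tested individually, then a short method sequence exercises the class's state transitions:

```python
class Counter:
    """Hypothetical class under test: per-method behavior plus state between calls."""
    def __init__(self, start=0):
        self.value = start
    def increment(self):
        self.value += 1
    def reset(self):
        self.value = 0

# Test each method (and the constructor)
c = Counter()
assert c.value == 0       # constructor establishes the initial state
c.increment()
assert c.value == 1       # method behavior in isolation

# Test state behavior between methods: a sequence of operations
c.increment()
c.reset()
assert c.value == 0       # state after the sequence increment -> reset
```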

Page 27: Software testing part

Class Testing Process

(Diagram: the software engineer designs test cases for the class to be tested, runs them, and feeds the results back into new test cases, in a loop. Annotations: “How to test?”, “Why a loop?”)

Page 28: Software testing part

Class Test Case Design

1. Identify each test case uniquely, and associate the test case explicitly with the class and/or method to be tested
2. State the purpose of the test
3. Each test case should contain:
   a. A list of messages and operations that will be exercised as a consequence of the test
   b. A list of exceptions that may occur as the object is tested
   c. A list of external conditions for setup (i.e., changes in the environment external to the software that must exist in order to properly conduct the test)
   d. Supplementary information that will aid in understanding or implementing the test

Automated unit testing tools facilitate these requirements

Page 29: Software testing part

Challenges of Class Testing

• Encapsulation: it is difficult to obtain a snapshot of a class without building extra methods that display the class’s state
• Inheritance and polymorphism: each new context of use (subclass) requires re-testing, because a method may be implemented differently (polymorphism). Other unaltered methods within the subclass may use the redefined method and need to be tested
• White-box tests: basis path, condition, data flow and loop tests can all apply to individual methods, but don’t test interactions between methods

Page 30: Software testing part

Random Class Testing

1. Identify methods applicable to a class
2. Define constraints on their use – e.g. the class must always be initialized first
3. Identify a minimum test sequence – an operation sequence that defines the minimum life history of the class
4. Generate a variety of random (but valid) test sequences – this exercises more complex class instance life histories

• Example:
  1. An account class in a banking application has open, setup, deposit, withdraw, balance, summarize and close methods
  2. The account must be opened first and closed on completion
  3. Minimum sequence: open – setup – deposit – withdraw – close
  4. Template: open – setup – deposit – *[deposit | withdraw | balance | summarize] – withdraw – close. Generate random test sequences using this template
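The banking example can be sketched directly. The `Account` class below is a stand-in with assumed (trivial) behavior; `random_test_sequence` generates valid life histories following the slide's template, and any violated constraint (e.g. operating on a closed account) fails an assertion:

```python
import random

class Account:
    """Minimal stand-in for the slide's banking Account class (behavior assumed)."""
    def __init__(self):
        self.is_open = False
        self.funds = 0
    def open(self):
        self.is_open = True
    def setup(self):
        assert self.is_open
    def deposit(self):
        assert self.is_open
        self.funds += 10
    def withdraw(self):
        assert self.is_open
        self.funds -= 5
    def balance(self):
        assert self.is_open
        return self.funds
    def summarize(self):
        assert self.is_open
    def close(self):
        self.is_open = False

def random_test_sequence(length=5):
    """One random but valid life history, following the template:
    open - setup - deposit - *[deposit|withdraw|balance|summarize] - withdraw - close."""
    middle = [random.choice(["deposit", "withdraw", "balance", "summarize"])
              for _ in range(length)]
    return ["open", "setup", "deposit"] + middle + ["withdraw", "close"]

def run_sequence(seq):
    """Drive one Account through a sequence; a violated constraint raises AssertionError."""
    account = Account()
    for op in seq:
        getattr(account, op)()
    return account

# Exercise a variety of random class-instance life histories
for _ in range(100):
    run_sequence(random_test_sequence())
```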

Page 31: Software testing part

Integration Testing

• OO does not have a hierarchical control structure, so conventional top-down and bottom-up integration tests have little meaning
• Integration applies three different incremental strategies:
  – Thread-based testing: integrates classes required to respond to one input or event
  – Use-based testing: integrates classes required by one use case
  – Cluster testing: integrates classes required to demonstrate one collaboration
• What integration testing strategies will you use?

Page 32: Software testing part

Thread based v/s Use based

• Thread-based testing integrates the set of classes required to respond to one input or event for the system. Each thread is integrated and tested individually. Regression testing is applied to ensure that no side effects occur.
• Use-based testing begins the construction of the system by testing those classes (called independent classes) that use very few server classes.
• After the independent classes are tested, the dependent classes that use the independent classes are tested. This sequence of testing layers of dependent classes continues until the entire system is constructed.

Page 33: Software testing part

System/ Validation Testing

• Are we building the right product?
• Validation succeeds when the software functions in a manner that can be reasonably expected by the customer.
• Focus on user-visible actions and user-recognizable outputs
• Details of class connections disappear at this level
• Apply:
  – Use-case scenarios from the software requirements spec
  – Black-box testing to create a deficiency list
  – Acceptance tests through alpha (at the developer’s site) and beta (at the customer’s site) testing with actual customers
• How will you validate your term product?

Page 34: Software testing part

Object-oriented software testing problems

• Integration testing may add a large cost (time, resources) to the software development process.
• Polymorphism – an attribute may have more than one set of values, and an operation may be implemented by more than one method.
• Inheritance – an object may have more than one superclass.
• Encapsulation – information hiding.

Page 35: Software testing part

Performance Testing

Page 36: Software testing part

Introduction

• What is Performance Testing?
• Performance testing is a non-functional testing technique performed to determine system parameters in terms of responsiveness and stability under various workloads. Performance testing measures quality attributes of the system such as scalability, reliability and resource usage.
• Attributes of Performance Testing:
  – Speed
  – Scalability
  – Stability
  – Reliability

Page 37: Software testing part

Performance Testing Techniques:

– Load testing – the simplest form of performance testing, conducted to understand the behaviour of the system under a specific load. Load testing measures important business-critical transactions, and the load on the database, application server, etc. is also monitored.
– Stress testing – performed to find the upper limit of the system’s capacity, and to determine how the system performs if the current load goes well above the expected maximum.
– Soak testing – also known as endurance testing, performed to determine the system parameters under a continuous expected load. During soak tests, parameters such as memory utilization are monitored to detect memory leaks or other performance issues. The main aim is to discover the system’s performance under sustained use.
– Spike testing – performed by suddenly increasing the number of users by a very large amount and measuring the performance of the system. The main aim is to determine whether the system will be able to sustain the workload.
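A minimal load-versus-spike sketch, with a hypothetical `handle_request` standing in for the operation under test: the load test steps through controlled, increasing loads, while the spike test jumps straight to a much larger one:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Stand-in for the operation under test (hypothetical)."""
    time.sleep(0.001)          # simulate a small amount of work
    return len(payload)

def measure(load):
    """Apply `load` concurrent requests; return elapsed seconds and the results."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=load) as pool:
        results = list(pool.map(handle_request, ["x"] * load))
    return time.perf_counter() - start, results

# Load test: controlled environment, moving from low loads to high
for load in (1, 10, 50):
    elapsed, results = measure(load)
    assert all(r == 1 for r in results)   # responses stay correct under load

# Spike test: a sudden jump to a much larger number of users
elapsed, results = measure(200)
assert len(results) == 200                # the system sustained the spike
```

Real tools (load generators, profilers) add ramp-up schedules and resource monitoring on top of this basic measure-under-load loop.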

Page 38: Software testing part

Stress Testing

Page 39: Software testing part

Basics

• Stress testing is a software testing activity that determines the robustness of software by testing beyond the limits of normal operation.
• Stress testing is particularly important for "mission critical" software, but is used for all types of software.

Page 40: Software testing part

Introduction

• It is a type of non-functional testing.
• It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
• It is a form of software testing used to determine the stability of a given system.
• It puts greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances.
• The goal of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).

Page 41: Software testing part

Load test v/s Stress Test

• Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose of this process is to make sure that the system fails and recovers gracefully—a quality known as recoverability.

• Load testing implies a controlled environment moving from low loads to high.

• Stress testing focuses on more random events, chaos and unpredictability.

Page 42: Software testing part

Reason behind Stress Testing

• The software being tested is "mission critical", that is, failure of the software (such as a crash) would have disastrous consequences.

• The amount of time and resources dedicated to testing is usually not sufficient, with traditional testing methods, to test all of the situations in which the software will be used when it is released.

Page 43: Software testing part

Reason behind Stress Testing

• Even with sufficient time and resources for writing tests, it may not be possible to determine beforehand all of the different ways in which the software will be used. This is particularly true for operating systems and middleware, which will eventually be used by software that doesn't even exist at the time of the testing.

• Customers may use the software on computers that have significantly fewer computational resources (such as memory or disk space) than the computers used for testing.

Page 44: Software testing part

Reason behind Stress Testing

• Concurrency is particularly difficult to test with traditional testing methods. Stress testing may be necessary to find race conditions and deadlocks.

• Software such as web servers that will be accessible over the Internet may be subject to denial of service attacks.

• Under normal conditions, certain types of bugs, such as memory leaks, can be difficult to detect over the short periods of time in which testing is performed. However, these bugs can still be potentially serious. In a sense, stress testing for a relatively short period of time can be seen as simulating normal operation for a longer period of time.
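The concurrency point can be made concrete. In the sketch below (assumed names, Python threads), `UnsafeCounter` has a race in its read-modify-write; a short stress run with many threads is the kind of test that can expose such lost updates, while the lock-protected version always reaches the expected total:

```python
import threading

class UnsafeCounter:
    """Deliberately unsynchronized: the read-modify-write below can lose updates."""
    def __init__(self):
        self.value = 0
    def increment(self):
        current = self.value      # read...
        self.value = current + 1  # ...write: another thread may interleave here

class SafeCounter(UnsafeCounter):
    """Same interface, but the critical section is serialized with a lock."""
    def __init__(self):
        super().__init__()
        self._lock = threading.Lock()
    def increment(self):
        with self._lock:
            super().increment()

def stress(counter, threads=8, per_thread=10_000):
    """Hammer the counter from many threads and return the final value."""
    def worker():
        for _ in range(per_thread):
            counter.increment()
    ts = [threading.Thread(target=worker) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter.value

# The lock-protected counter always reaches the expected total; the unsafe
# one may fall short, but typically only under this kind of concurrent stress.
assert stress(SafeCounter()) == 8 * 10_000
```

A functional test that calls `increment` once from one thread would never reveal the defect; only sustained concurrent load makes the interleaving likely.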

Page 45: Software testing part

Scalability Testing

Page 46: Software testing part

Introduction

• What is Scalability Testing?
  – Scalability testing is a performance testing practice that investigates a system’s ability to grow by increasing the workload per user, the number of concurrent users, or the size of a database.
• Scalability Testing Attributes:
  – Response time
  – Throughput
  – Hits per second, requests per second, transactions per second
  – Performance measurement with number of users
  – Performance measurement under huge load
  – CPU usage, memory usage while testing is in progress
  – Network usage – data sent and received
  – Web server – requests and responses per second