Post on 19-Dec-2015
Chapter 9
Testing the System, part 2
Testing
- Unit testing: white (glass) box; code walkthroughs and inspections
- Integration testing: bottom-up, top-down, sandwich, big bang
- Function testing
- Performance testing
- Acceptance testing
- Installation testing
One-minute quiz
What is the difference between verification and validation?
One-minute quiz
What is meant by regression testing?
Is regression testing used for verification or validation?
Function Testing
- Test cases derived from the requirements specification document
- Black-box testing
- Independent testers
- Test both valid and invalid input; the success of the test is determined by the produced output
- Equivalence partitioning
- Boundary values
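Equivalence partitioning and boundary values can be sketched with a hypothetical validator (the `valid_age` function and its 0–120 range are assumptions for illustration, not from the slides):

```python
# Hypothetical function under test: the assumed requirement is that
# an age field must be an integer from 0 to 120 inclusive.
def valid_age(age):
    return isinstance(age, int) and 0 <= age <= 120

# Equivalence partitioning: one representative test case per partition.
assert valid_age(35)        # valid partition: 0..120
assert not valid_age(-5)    # invalid partition: below the range
assert not valid_age(200)   # invalid partition: above the range
assert not valid_age("30")  # invalid partition: wrong type

# Boundary values: test at and just beyond each boundary.
assert valid_age(0) and valid_age(120)
assert not valid_age(-1) and not valid_age(121)
```

The partitions let a few cases stand in for the whole input space; the boundary cases target where off-by-one faults cluster.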
Performance Testing
- Stress tests
- Volume tests
- Configuration tests
- Compatibility tests
- Regression tests
- Security tests
- Timing tests
- Environmental tests
- Quality tests
- Recovery tests
- Maintenance tests
- Documentation tests
- Human factors (usability) tests
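One category above, timing tests, can be sketched as a crude harness (the function under test, the budget, and the run count are assumptions for illustration):

```python
import time

def timing_test(func, budget_seconds, runs=100):
    """Crude timing test: fail if the average call time exceeds the budget."""
    start = time.perf_counter()
    for _ in range(runs):
        func()
    average = (time.perf_counter() - start) / runs
    return average <= budget_seconds

# Example: the operation under test must average under 10 ms per call.
ok = timing_test(lambda: sum(range(10_000)), budget_seconds=0.010)
```

Real performance tests would also control the environment and measure percentiles, but the pass/fail shape against a stated budget is the same.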
Reliability, Availability, and Maintainability
- Software reliability: the probability that the system operates without failure under given conditions for a given time interval
- Software availability: the probability that the system is operating successfully according to specification at a given point in time
- Software maintainability: the probability that, for a given condition of use, a maintenance activity can be carried out within a stated time interval, using stated procedures and resources
Measuring Reliability, Availability, and Maintainability
- Mean time to failure (MTTF)
- Mean time to repair (MTTR)
- Mean time between failures (MTBF): MTBF = MTTF + MTTR
- Reliability: R = MTTF / (1 + MTTF)
- Availability: A = MTBF / (1 + MTBF)
- Maintainability: M = 1 / (1 + MTTR)
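The three formulas above can be computed directly; a minimal sketch (the example values of MTTF and MTTR are assumptions):

```python
def ram_metrics(mttf, mttr):
    """Compute reliability, availability, and maintainability from
    MTTF and MTTR (both in the same time unit, e.g. hours)."""
    mtbf = mttf + mttr                 # MTBF = MTTF + MTTR
    reliability = mttf / (1 + mttf)    # R = MTTF / (1 + MTTF)
    availability = mtbf / (1 + mtbf)   # A = MTBF / (1 + MTBF)
    maintainability = 1 / (1 + mttr)   # M = 1 / (1 + MTTR)
    return reliability, availability, maintainability

# Example: MTTF = 1000 hours, MTTR = 2 hours.
r, a, m = ram_metrics(1000, 2)
```

Note how a long MTTF pushes R and A toward 1, while a short MTTR pushes M toward 1.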
When to Stop Testing
Fault seeding: adding known faults to the code to estimate the number of remaining faults.
Suppose 50 faults have been seeded in the code. Regression testing identifies 60 faults, 40 of which are seeded faults. What is the estimate of the total number of undiscovered real faults remaining?
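The standard fault-seeding estimate assumes real faults are detected in the same proportion as seeded ones. Applied to the example above (60 found, 40 seeded, so 20 real faults detected):

```python
def estimate_remaining_faults(seeded_total, seeded_found, real_found):
    """Fault-seeding estimate: assume real faults are found in the same
    proportion as seeded faults, i.e.
        real_total / real_found == seeded_total / seeded_found
    Returns the estimated number of undiscovered real faults."""
    real_total = real_found * seeded_total / seeded_found
    return real_total - real_found

# Slide example: 50 seeded, 60 faults found overall, 40 of them seeded,
# so 20 of the detected faults are real.
remaining = estimate_remaining_faults(seeded_total=50, seeded_found=40, real_found=20)
# remaining == 5.0
```

Since 40/50 = 80% of the seeded faults were found, the estimate is that 20 is 80% of the real faults: 25 real faults in total, of which 5 remain undiscovered.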
Acceptance Tests
- Enable the customers and users to determine whether the built system meets their needs and expectations
- Written, conducted, and evaluated by the customers
Types of Acceptance Tests
- Pilot test: install on an experimental basis
- Alpha test: in-house test
- Beta test: customer pilot
- Parallel testing: the new system operates in parallel with the old system
Test Documentation
- Test plan: describes the system and the plan for exercising all functions and characteristics
- Test specification and evaluation: details each test and defines criteria for evaluating each feature
- Test description: test data and procedures for each test
- Test report: results of each test
Test Plan
- Used to organize testing activities
- Guides the scheduling of the programming
- Explains the nature and extent of each test
- Documents test input, specific test procedures, and expected outcomes
Defect Tracking Form
Quality Assurance
- Quality control: testing the quality of the program
- Quality assurance: building quality into the program; a management-level, proactive process; checklists
Testing Safety-Critical Systems
- Recognize that testing cannot remove all faults or risks
- Assume that every mistake users can make will be made
- Do not assume that low-probability, high-impact events will not happen
- Emphasize requirements definition, testing, code and specification reviews, and configuration control
- Cleanroom testing
Different Levels of Failure Severity
- Catastrophic: causes death or system loss
- Critical: causes severe injury or major system damage
- Marginal: causes minor injury or minor system damage
- Minor: causes no injury or system damage