TRANSCRIPT
Testing
• Testing is a critical process within any software development and maintenance life cycle model.
• Testing is becoming increasingly important to the software development industry because of its implications on the quality of the produced software and the reputation of the producing company.
• Testing activities include testing specifications, designs and implementations.
• Testing implementations is often referred to as software testing, or simply testing.
Testing
• In the past, testing was performed in ad hoc and informal manners.
• Testers were not properly trained and were unmotivated.
• Testing was often downplayed, and few resources were allocated to it.
• Now: the quality of the produced software is greatly affected by the quality of the testing performed.
• The reputation and marketability of the software are affected by the type and number of errors discovered after release.
– Errors should be resolved and fixed before the software is released to the market.
– Revealing all failures, and hence removing their resulting errors or faults, is a very optimistic idea.
Test coverage
• Only exhaustive testing can guarantee full coverage.
• Exhaustive testing is impractical, since it may take years to complete even for small programs.
• Consider testing software in which a first-name text field of at most 20 characters is filled by a user. Exhaustive testing requires that we test the software for all possible correct and incorrect field input values.
– Assuming 80 possible correct and incorrect characters, the number of possible combinations for a valid or invalid first name would exceed 80^20. Using a computer that takes, say, 50 nanoseconds to process one combination, we would need about 10^23 years to complete the testing of all possible combinations. It is indeed impractical!
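The estimate can be checked with a short back-of-the-envelope computation (assuming exactly 20-character inputs, 80 possible characters, and 50 ns per combination):

```python
import math

# Back-of-the-envelope check of the exhaustive-testing estimate.
combinations = 80 ** 20                   # all length-20 input strings
seconds = combinations * 50e-9            # 50 nanoseconds per combination
years = seconds / (365.25 * 24 * 3600)    # convert seconds to years

print(f"combinations ~ 10^{int(math.log10(combinations))}")
print(f"years ~ 10^{int(math.log10(years))}")
```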
The reality
• In the absence of exhaustive testing, there are no formal guarantees that software is error-free.
• In fact, it may happen in practice that even when all errors have been detected during testing, they may not all be fixed due to the lack of time.
• Unfixed errors may be fixed in future releases of the software.
• The difficulty in software testing is the inability to quantify the errors existing in software, and hence predict how many errors are left undiscovered.
Test coverage experiment
• Random testing experiment: errors of different levels of complexity were inserted into the software under test.
• All errors except one were discovered, progressively.
• There was no indication of how much more testing was needed.
• The number of errors found is an asymptotic, nonlinear function of testing time.
Ethics for software organizations
• It is the ethical responsibility of the software development company to ensure that
– adequate and appropriate resources are allocated to the testing activities,
– employees are properly trained to perform their testing tasks,
– state-of-the-art testing tools are available for use by trained employees, and
– an environment emphasizing the importance of software quality and dependability is promoted internally.
Ethics for testers
• It is the ethical responsibility of the software tester to ensure that
– they remain up to date in their testing knowledge,
– they remain on the lookout for better tools and techniques, and
– they put their best efforts and mental capabilities into discovering as many errors as possible.
• After all, human well-being and physical safety may be at stake.
Dynamic and static testing
• Dynamic testing
– Test the software by executing it and applying test inputs.
– Observe the output and make a judgment to reach a verdict.
• Static testing
– Check the code without executing it.
– Automatic static testing: the code is given as input to a static analysis tool.
– Non-automatic static testing: reviews, walkthroughs, inspections.
Test case
• A test plan includes a test suite.
• A test suite is a set of organized test cases.
– Organized around use cases, for example.
• A template describes each test case:
– id, name, reference, priority, preconditions, preamble, input, expected output, observed output, verdict and remarks.
• Test cases are obtained using known testing techniques and strategies.
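A minimal sketch of the template above as a record type (field names follow the slide; the example values are invented, and verdict stays empty until execution):

```python
from dataclasses import dataclass
from typing import Any, Optional

# Hypothetical test-case record following the template on the slide.
@dataclass
class TestCase:
    id: str
    name: str
    reference: str          # requirement or use case being covered
    priority: int
    preconditions: str
    preamble: str           # steps to bring the system to the test state
    input: Any
    expected_output: Any
    observed_output: Optional[Any] = None   # filled at execution time
    verdict: Optional[str] = None           # "pass"/"fail", filled at execution
    remarks: str = ""

tc = TestCase("TC-01", "digit count of zero", "REQ-7", 1,
              "none", "none", 0, 1)
print(tc.verdict)  # None until the test case is executed
```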
Test case design vs execution
• A test case is developed first: its verdict and remarks fields are empty.
• A test case is executed when the actual testing is performed: the verdict is then obtained.
• Typically, test cases are developed (designed) before any code is even written.
• Acceptance test cases in the acceptance test plan (ATP) can be developed right after the requirements specification phase.
• Unit and integration test plans can be developed after the design is completed and prior to coding.
White-box, black-box and grey-box testing techniques
• White-box: module or unit testing
• Black-box: module testing and system testing
• Grey-box: combines both, using partial knowledge of the internal structure
Black-box unit/module testing techniques
• Boundary value analysis
• Equivalence class testing
• Decision table / decision tree based testing
• State-based testing
• Robustness testing
Boundary value analysis
• BVA: test inputs are obtained from the boundaries of the input domain.
– Serious errors hide at the boundaries.
• For example, simple boundary values for an input of integer data type would include -INT_MAX, -INT_MAX + 1, -1, 0, +1, INT_MAX - 1 and INT_MAX.
• For complex and composite data types, boundary values become more difficult to generate and may require more ingenuity to obtain.
• Moreover, if the interface specification includes more than one input, test cases covering the feasible combinations of boundary values of each input must be considered.
BVA Example
• Suppose you are given an executable code including a function called digitCount whose functional interface specification states that digitCount takes an integer n as an input parameter, and returns an integer representing the number of digits in n.
• Test cases whose test inputs are selected based on boundary value analysis are partially shown in Table 8.3.
• Assume that for a 32-bit integer, INT_MAX is +2147483647.
Testing function digitCount based on boundary value analysis
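Since Table 8.3 is not reproduced here, a minimal sketch of such boundary-value test inputs follows, with a hypothetical reference implementation of digitCount standing in for the executable under test (expected outputs come from the specification, not the code):

```python
# Hypothetical stand-in for the executable digitCount under test.
def digitCount(n: int) -> int:
    return len(str(abs(n)))   # number of decimal digits of n

INT_MAX = 2_147_483_647       # 32-bit int, as assumed on the slide

# Boundary-value test inputs mapped to their expected outputs.
bva_cases = {
    -INT_MAX: 10, -INT_MAX + 1: 10,
    -1: 1, 0: 1, 1: 1,
    INT_MAX - 1: 10, INT_MAX: 10,
}
for n, expected in bva_cases.items():
    assert digitCount(n) == expected, f"digitCount({n}) != {expected}"
print("all boundary-value cases pass")
```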
Example 8.2 – Testing search function
Equivalence class testing
• Partition the input domain into disjoint subsets, or partitions.
• A partition includes related values.
• Consider one test input from each partition.
• An equivalence relation holds among the elements of a partition, e.g.:
– ‘having the same number of digits’
– ‘all positive integers’
– ‘all even numbers’
– ‘all prime numbers’
– ‘all powers of 2’
– …
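A minimal sketch, using ‘having the same number of digits’ as the equivalence relation over digitCount inputs, with one representative chosen per partition (the representatives are invented for illustration):

```python
# One representative per partition of positive 32-bit integers under the
# 'same number of digits' equivalence relation.
representatives = [5, 42, 123, 9999, 2_000_000_000]   # 1-, 2-, 3-, 4-, 10-digit

def digitCount(n: int) -> int:   # hypothetical function under test
    return len(str(abs(n)))

for n in representatives:
    print(n, "->", digitCount(n))
```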
Decision table based testing
• Decision table: each column, denoting a rule, becomes the basis of a test case.
– Conditions produce inputs.
– Actions produce expected outputs.
– In Table 4.1, the input variables are age, history and speed, and the output variables are penalty and points.
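A minimal sketch of deriving test cases from such a table, using hypothetical rules over the slide's variables (age, history, speed → penalty, points); the concrete rule values below are invented for illustration:

```python
# Each column (rule) of a decision table becomes one test case:
# conditions -> test inputs, actions -> expected outputs.
rules = [
    # (age_over_25, clean_history, speed_over_limit) -> (penalty, points)
    {"conditions": (True,  True,  False), "actions": ("none",    0)},
    {"conditions": (True,  False, True),  "actions": ("fine",    2)},
    {"conditions": (False, False, True),  "actions": ("suspend", 4)},
]

test_cases = [
    {"input": r["conditions"], "expected_output": r["actions"]}
    for r in rules
]
for tc in test_cases:
    print(tc)
```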
Decision tree-based testing
• Each distinct path from the root node to a leaf node is the basis of a test case
• If along the path, a variable input value does not exist, the test input for that variable can be assigned any valid value.
• For example, in fig 4.23, for the test case corresponding to the leftmost path, the speed value can be any illegal speed
State-based testing
• An implemented finite state machine conforms to its specification if the observed output sequence matches the output expected according to the specification.
• A test sequence must cover all state transitions.
• Used to do black-box testing of systems:
– GUIs, protocols, web page connectivity, …
• Transition tour: an input sequence covering all transitions and returning to the initial state.
– Can only uncover output errors.
State-based testing
• We also need to cover state errors: reaching bad states.
• Use a distinguishing sequence (DS) to detect whether the correct state has been reached.
• The DS produces a unique output at every state: it is a state identification sequence.
– When the DS is applied at the different states, it produces different output sequences, hence recognizing the state at which it was applied.
– For a transition from s1 to s2 with i1/o1, assuming we know we are at s1: we apply i1 and must observe o1; then, to make sure we reached s2, we apply the DS, and if we observe the output expected for s2, we can confirm that we have reached s2.
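A minimal sketch of checking one transition with a distinguishing sequence, over a hypothetical two-state Mealy machine (states, inputs and outputs are invented for illustration):

```python
# Hypothetical two-state Mealy machine: fsm[state][input] = (output, next_state)
fsm = {
    "s1": {"a": ("0", "s2"), "b": ("1", "s1")},
    "s2": {"a": ("1", "s1"), "b": ("1", "s2")},
}

def run(state, inputs):
    """Apply an input sequence; return (output sequence, final state)."""
    out = []
    for i in inputs:
        o, state = fsm[state][i]
        out.append(o)
    return "".join(out), state

# DS = "a": it outputs "0" at s1 but "1" at s2, so it distinguishes the states.
DS = "a"
ds_expected = {s: run(s, DS)[0] for s in fsm}   # expected DS output per state

# Check the transition s1 --a/0--> s2: apply "a" at s1, expect output "0",
# then apply the DS and compare with the output expected at s2.
out, state = run("s1", "a")
assert out == "0"                      # transition output is correct
ds_out, _ = run(state, DS)
assert ds_out == ds_expected["s2"]     # we really reached s2
print("transition s1 -a/0-> s2 verified")
```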
Black box-based robustness testing
• Test a system using one, or preferably more, black-box testing methods.
• Test for incorrect and illegal inputs as well.
• In equivalence class testing, select one representative bad input from every partition of the bad input values.
• For FSM-based systems, apply unexpected inputs at each state; the system should survive.
• For boundary value testing, test with illegal values beyond the boundaries; the system should survive.
White-box testing techniques
• Given the source code of the module to test:
• Control flow based testing techniques
• Data flow based testing techniques
• Path testing
Control flow based testing
• Given the source code of a module, transform it into a control flow graph.
• The graph consists of nodes and edges: decision nodes, joining nodes.
Example: digitCount
int digitCount(int n)
{
    int count = 0;
    while (n != 0)
    {
        n = n / 10;
        count++;
    }
    return count;
}
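A minimal sketch of the corresponding control flow graph as an adjacency list (the node labels are invented names for the statements of digitCount):

```python
# Control flow graph of digitCount as an adjacency list.
# Nodes (invented labels): "entry", "while" (decision node),
# "body" (n = n / 10; count++), "return".
cfg = {
    "entry":  ["while"],
    "while":  ["body", "return"],   # decision node: two outgoing edges
    "body":   ["while"],            # loop back, joining at "while"
    "return": [],
}

# Branch coverage requires exercising both outgoing edges of "while":
# n == 0 goes straight to "return"; n != 0 enters "body".
decision_nodes = [n for n, succ in cfg.items() if len(succ) > 1]
print(decision_nodes)
```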
Problem
• When we have a multiple-condition decision as a single node, fewer test cases are needed, but…
• A typo in one condition may go undetected, because of short-circuit evaluation.
• Solution: break the decision into several nodes, one per condition.
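A minimal sketch of the problem, with a hypothetical compound condition whose second operand contains a typo that short-circuiting masks:

```python
def intended(x, y):
    return x > 0 or y > 0

def faulty(x, y):
    return x > 0 or y > 100   # typo in the second condition

# Two test cases cover both outcomes of the decision taken as one node:
tests = [((1, 50), True), ((-1, -5), False)]
for (x, y), expected in tests:
    assert intended(x, y) == expected
    assert faulty(x, y) == expected
    # The typo slips through: (1, 50) short-circuits on x > 0, so y is
    # never examined, and (-1, -5) makes both versions False.

# Splitting the decision into one node per condition forces a case such
# as (-1, 50), which reveals the fault:
assert intended(-1, 50) != faulty(-1, 50)
print("fault revealed only after splitting the decision")
```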
Data flow testing
• We need to test for problems with the data variables and the way they are used.
• Produce a data flow graph by annotating the control flow graph.
• We can restrict attention to critical variables only.
• Actions on variables: definition (def), destroying or dereferencing (kill), use in a predicate (p-use), use in a computation (c-use), declaration (dcl).
Example 8.11
• Examine what is happening to variable x.
• Check whether there is any anomalous pattern of use.
• For example, two contiguous defs of x, or a kill of x followed by a p-use or c-use of x.
[Figure: control flow graph annotated with the actions on x along its paths: dcl, def, p-use, c-use, kill]
Example 8.12
• Examining 3 variables.
• Consider the path for each variable separately.
• Is there any pattern of anomalous use?
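The anomaly check described in these examples can be sketched as a scan over one variable's action sequence along a path (action names as on the data flow slide; the example sequences are invented):

```python
# Scan a variable's per-path action sequence for anomalous use patterns.
# Actions: dcl, def, kill, p-use, c-use.
ANOMALOUS_PAIRS = {
    ("def", "def"),     # two contiguous defs: the first value is never used
    ("kill", "p-use"),  # use after kill
    ("kill", "c-use"),
    ("dcl", "p-use"),   # use before any def (declared only)
    ("dcl", "c-use"),
}

def anomalies(actions):
    """Return the anomalous adjacent action pairs for one variable."""
    return [pair for pair in zip(actions, actions[1:]) if pair in ANOMALOUS_PAIRS]

# Invented example paths for a variable x:
print(anomalies(["dcl", "def", "c-use", "def", "def"]))    # [('def', 'def')]
print(anomalies(["dcl", "def", "p-use", "kill", "c-use"])) # [('kill', 'c-use')]
print(anomalies(["dcl", "def", "c-use", "kill"]))          # []
```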
Path expression
• Represents an execution sequence in a generic way.
• Can represent the complete program.
• Uses the operators:
– + for choice
– . for concatenation
– {}* for zero or more repetitions
– {}+ for one or more repetitions
– () for grouping
Path expression
• Useful for studying properties of test sequences.
• Annotate the transitions with a desirable property, then analyze the resulting expression.
• For example, assign a probability to each transition: this allows you to evaluate the probability of each path and helps you decide to test the most probable paths first.
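A minimal sketch of the probability annotation, over a hypothetical path expression a.(b + c).d in which each alternative carries an invented probability:

```python
# Hypothetical transition probabilities for the path expression a.(b + c).d,
# where alternative b is taken 70% of the time and c 30%.
p = {"a": 1.0, "b": 0.7, "c": 0.3, "d": 1.0}

def path_probability(path):
    """Probability of a concrete path = product of its transition probabilities."""
    prob = 1.0
    for t in path:
        prob *= p[t]
    return prob

paths = ["abd", "acd"]
ranked = sorted(paths, key=path_probability, reverse=True)
for path in ranked:
    print(path, path_probability(path))
# The highest-ranked path is the strongest candidate to test first.
```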
Integration testing
• Performed after testing all modules individually using both white-box and black-box testing techniques.
• Top-down integration
• Bottom-up integration
• Sandwich / hybrid integration
Top down integration
• Start with the top-level modules and integrate left to right.
• Good for testing GUI-heavy modules.
• Confidence increases when the GUI can be seen early on.
• Less time is spent testing the low-level utility modules, which are highly reusable.
• Variation: integrate the I/O modules first, to get real inputs and outputs as early as possible.
• Needs stubs: fake modules that stand in for the called modules.
Bottom up integration
• Start from the low-level modules.
• Good testing of low-level utility modules.
• The GUI is seen late in the process.
• Requires driver modules to call the integrated modules.
Sandwich integration testing
• Two teams: one starting bottom-up and the other starting top-down.
• They meet in the middle.
• Good testing of both high-level GUI modules and low-level utility modules.
• Requires coordination and good management to ensure that the middle-level modules also get a fair share of the testing time.
Testing NFRs (black-box)
• Load / stress testing
• Performance testing
• Usability testing
• Interoperability testing
• Security testing
• Robustness / fault-tolerance testing
• Installation / deployment testing
More test selection strategies
• Random testing
• Probabilistic testing
• Mutation testing
Testing issues
• Test architectures
• Test correctness
• Test description languages
• Regression testing
• Software testability
Software quality assurance
• Quality of the product and the processes by which the product is delivered
• SQA with respect to testing:
– Training of testers
– Documentation of test procedures
– Documentation and analysis of test results
– Review of test plans
– Proper management of testing activities
– Adherence to testing standards
– Development and updating of inspection checklists
– …
Responsibilities of the SQA group
• Audit and review of all product deliverables
• Execution of test plans according to the SQA plan, and reporting of software problems
• Dealing with new or modified standards and practices while the project is in progress
• Ensuring the proper handling of media and code, and their libraries
• Ensuring quality control over supplied, subcontracted and outsourced products
• Collection of process- and product-quality-related metrics
• Execution of the SQA plan, updating its schedule, and ensuring its progress
• Assessing and auditing the above main SQA responsibilities