ISTQB Certified Tester
TRANSCRIPT
ISTQB Certified Tester, Foundation Level
Basics of Software Testing
Basics of Software Testing
ISTQB Certified Tester, Foundation Level
2. Introduction
Introduction
In this chapter we will introduce
the concept of certified tester
Training (modules, curriculum, examination, and certification)
Institutions and international connections
• Examination/certification bodies
• Training and accreditation providers
• ISTQB and national boards
The materials for this course
Introduction – Certified Tester
Background
Software has become ubiquitous
Problems in software boil down to quality (or the lack of it)
Testing is the way to prove it works as it should
Almost anything can be tested
• Products
• Prototypes
• Documentation
Introduction – Certified Tester
Training need
Testers need to know• The test object
• Their trade: process, techniques, tools
Managers need to know that testers:• Are good at what they do
• Know what to do
Certification provides reasonable level of confidence
Introduction – Certified Tester
Expected abilities
Analyze information and evaluate business risks to design and plan inspections and tests
Know the limitations:
• Framework, technology, process, people, risks, etc.
• And define test targets, techniques, tasks, and schedules within them.
Manage the testing process: • Execute tests, file and track faults, measure results, and report
Introduction – Certified Tester
Expected abilities
Plan, conduct, and follow up on reviews and inspections
• Of documents and code
Identify and use appropriate tools
Select test cases depending on
• Probability of finding faults, associated risks, and required coverage.
Introduction – Training
Curriculum
Training is a way to achieve adequate skills and knowledge
Two levels• Foundation (this course)
• Advanced (three courses – Test Manager, Functional Tester and Technical Tester)
Standardized training program for education of testers
International recognition
Introduction – Training
Syllabus, Examination and Certification
Main chapters in Syllabus prepared by ISTQB• Fundamentals of testing
• Testing throughout the software lifecycle
• Static techniques
• Test design techniques
• Test management
• Tool support for testing
Courses delivered by accredited training providers
Examination delivered by certification body – iSQI, GASQ
Introduction – Organizations
ISTQB
International Software Testing Qualifications Board
www.istqb.org
An international organization of software testers
Mission = organize and support the ISTQB Certified Tester qualification scheme for testers.
Provides the core syllabi and sets guidelines for accreditation and examination.
Introduction – Organizations
ISTQB (continued)
Members are national and supra-national testing qualification boards
Bulgaria participates in the SEETB – South East European Testing Board
[Diagram: the ISTQB and its work groups (syllabi, exam questions) link to the national boards, e.g. the German Testing Board, the American Testing Board, and other testing boards.]
Introduction – Organizations
GASQ
Global Association for Software Quality
www.gasq.org
Provides services related to software quality – from personnel certification to international standardization.
• Delivers exams
• Accredits trainers and courses
Mission = advance the field by coordinating efforts aimed at new, higher industry standards
Introduction – Organizations
Other
ISEB = Information Systems Examinations Board of the British Computer Society (BCS) www.bcs.org/BCS/Products/Qualifications/ISEB/
iSQI = International Software Quality Institute The German training and certification body www.isqi.org
IEEE = Institute of Electrical and Electronics Engineers An American organization with 40% international members. www.ieee.org
Introduction – Organizations
For this course
The Handouts
• Course Schedule
• Slides
• Student Textbook
• Evaluation Sheet
Additional• Glossary of terms used in Software Testing download v.1.2 from:
www.istqb.org/download.php
• British Testing Standard – BS7925-2 download the draft from: http://www.testingstandards.co.uk/
Basics of Software Testing
ISTQB Certified Tester, Foundation Level
3. Fundamentals of testing
3.1 Why is testing necessary?
Why is testing necessary
In this session we will
Understand what testing is and why it is necessary
Look at the cause and cost of errors
Talk about quality measurements
Mention standards and models for improvement
Why is testing necessary
Why is testing becoming more important?
The Y2K problem
EMU (Economic and Monetary Union)
e-Commerce
Increased user base
Increased complexity
Speed to Market
Why is testing necessary
How Errors Occur
No one is perfect! We all make mistakes or omissions
The more pressure we are under the more likely we are to make mistakes
In IT development we have time and budgetary deadlines to meet
Poor Training
Why is testing necessary
How Errors Occur
Poor Communication
Requirements not clearly defined
Requirements change & are not properly documented
Data specifications not complete
ASSUMPTIONS!
Why is testing necessary
The Cost Of Failures
A single failure may incur little cost – or millions
Report layouts may be wrong – little true cost
Or a significant failure may cost millions…Ariane V, Venus Probe, Mars Explorer and Polar Lander
Why is testing necessary
The Cost Of Failures
In extreme cases a software or systems failure may cost LIVES
Usually safety critical systems are tested exhaustively Aeroplane, railway, nuclear power systems, etc
Unfortunately there are exceptions to this London Ambulance Service
Why is testing necessary
Example for huge errors
A small town in Illinois received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to faults in new software that had been purchased by the local power company to deal with Y2K software issues.
Software faults caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.
Software faults in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a ‘…funny feeling in my gut’, decided the apparent missile attack was a false alarm. The filtering software code was rewritten.
Why is testing necessary
The Cost Of Failures
The cost of failures increases roughly tenfold with each successive stage of the system development process that passes before they are detected
To correct a problem at the requirements stage may cost $1
To correct the same problem post-implementation may cost thousands of dollars
The model of cost escalation
The cost of correcting a defect during progressively later phases in the SDLC rises alarmingly (tenfold for each phase).
UR = User Requirements FS = Functional Specification TS = Technical Specification UAT = User Acceptance Testing
[Chart: cost of fixing a defect against the phase in which it is found (UR, FS, TS, Code, Unit, System, UAT), rising steeply in the later phases.]
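The tenfold escalation rule described above can be sketched in code. The phase names follow the chart's key, and the $1 base cost is the slide's own example figure, not real project data:

```python
# Illustrative sketch of the tenfold cost-escalation rule: the later a
# defect is found, the more it costs to fix. Figures are example values.
PHASES = ["UR", "FS", "TS", "Code", "Unit", "System", "UAT"]

def cost_to_fix(phase: str, base_cost: float = 1.0, factor: float = 10.0) -> float:
    """Cost of fixing a defect found in `phase`, rising tenfold per phase."""
    return base_cost * factor ** PHASES.index(phase)

for phase in PHASES:
    print(f"{phase:>6}: ${cost_to_fix(phase):,.0f}")
```

Under this model a defect costing $1 to fix at User Requirements costs $1,000,000 if it survives to User Acceptance Testing.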
Why is testing necessary
Why test?
Identifies faults
Reduces live defects
Improves the quality of the users application
Increases reliability
To help ensure live failures don’t impact costs & profitability
To help ensure requirements are satisfied
To ensure legal requirements are met
To help maintain the organization’s reputation
Provides a measure of quality
Why is testing necessary
How much testing is enough?
How do we know when to stop?
If we stop testing too soon we risk errors in the live system
If we carry on testing we can delay the product launch• This may lose revenue
• And damage the corporate image
Why is testing necessary
Exhaustive Testing
Find all faults by testing everything?
To test everything is rarely possible
The time required makes it impractical
The resource commitment required makes it impractical
Why is testing necessary
If we can’t test everything, what can we do?
Managing and reducing Risk
Carry out a Risk Analysis of the application
Prioritise tests to focus on the main areas of risk
Apportion time relative to the degree of risk involved
Understand the risk to the business of the software not functioning correctly
Why is testing necessary
Risk Reduction Profile:
The risk can be defined as a combination of two factors: the likelihood that a problem will occur, and the impact that the problem will have
The decision as to which areas to test is normally based upon the risk involved. The higher the risk to the business the greater priority the testing of a given area will be.
[Chart: risk declines over time as testing progresses, across business, functional and non-functional attributes.]
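The likelihood-times-impact definition above can be sketched as a small prioritisation routine. The application areas and the 1–5 scores here are invented for illustration:

```python
# Risk-based test prioritisation sketch: risk = likelihood x impact,
# and the highest-risk areas are tested first. Scores are invented.
def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

areas = {
    "payment processing": (4, 5),   # (likelihood, impact), each on a 1-5 scale
    "report layout":      (3, 1),
    "user login":         (2, 5),
}

# Order areas from highest to lowest risk.
by_priority = sorted(areas, key=lambda a: risk_score(*areas[a]), reverse=True)
print(by_priority)
```

Here payment processing (risk 20) is tested before user login (10), and the low-impact report layout (3) comes last.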
Why is testing necessary
Should you stop testing when the product goes live?
Continuing testing beyond the implementation date should be considered
Better that you find the errors than the users
Why is testing necessary
Testing & Quality
The purpose of testing is to find defects. The benefits:
Reduce the number of defects in the released product
Improve the quality of the software
Increase reliability – fewer faults means more reliable software
Provide a measure of the quality of the software
To demonstrate the level of quality for the SUT, testing must be carried out throughout the SDLC.
Quality starts at the beginning of the project.
Why is testing necessary
Quality Measurements – Quality can be measured by testing for:
Correctness
Reliability
Usability
Maintainability
Reusability
Testability
Legal requirements
Contractual requirements
Regulatory requirements
Why is testing necessary
ISO 9000
Defines the overall quality management system, with document control, training, best practices
• ISO 9000 compliance is not a guarantee of quality; it merely provides for reproducibility
ISO 9126
Defines the quality characteristics of software at the lifecycle stages. Four parts:
Quality Model – criteria for good software: functionality, reliability, usability, efficiency, maintainability and portability
External Use – external metrics (the behavior of the software)
Internal Use – internal metrics (the software itself)
Quality in Use metrics – the effects of using the software in context
Why is testing necessary
Process Improvement Models
ISO/IEC 15504 Software Process Improvement and Capability dEtermination (SPICE)
Software Engineering Institute (SEI) Capability Maturity Model (CMM)
SEI Capability Maturity Model Integration (CMMI)
Illinois Institute of Technology (IIT) Testing Maturity Model (TMM)
IQUIP Informatica B.V. Test Process Improvement (TPI)
Why is testing necessary
Process Improvements Models (continued)
Improvement models have two objectives in common:
Assessment of an existing process
Improvement of an existing process
Some models – SPICE – assess the “Capabilities” of a process, and identify areas for improvement
Others – CMM – assess the “Maturity” of a process, give it a “Level”, and define what to achieve to progress to a next level
Why is testing necessary
Summary
The purpose of testing is to find faults
Faults can be fixed, making better software
Better software is more reliable, less prone to failures
Establish the relationship between the software and its specification through testing
Testing enables us to measure the quality of the software Thus understand & manage the risk to the business
Several models for process measurement and improvement
Basics of Software Testing
ISTQB Certified Tester, Foundation Level
3.2. What is testing?
What is testing
In this session we will
Define some of the words used in software testing
All the terms in this course are derived from the
Glossary of terms used in Software Testing
version 1.2 (dated June 4th, 2006)
produced by the ISTQB ‘Glossary Working Party’
Terms - What is testing
Some Definitions of Testing
“…the process of executing a program in order to certify its Quality”
“… the process of executing a program with the intent of finding errors”
“…the process of exercising software to detect errors & to verify that it satisfies specified functional & non-functional requirements”
Terms - What is testing
The ISTQB Definition
The process consisting of
All life cycle activities, both static and dynamic,
Concerned with planning, preparation and evaluation of software products and related work products
To determine that they satisfy specified requirements, To demonstrate that they are fit for purpose and To detect defects.
Terms - What is testing
ERROR“A human action that produces an incorrect result”
Terms - What is testing
DEFECT = FAULT = BUG = PROBLEM
“A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data
definition. A defect, if encountered during execution, may cause a failure of the component or system.”
Also called: ‘error condition’
Terms - What is testing
FAILURE
Actual deviation of the component or system from its expected delivery, service or result.
Also called ‘external fault’
Terms - What is testing
ANOMALY
Any condition that deviates from expectation based on requirements specifications, design documents, user
documents, standards, etc. or from someone’s perception or experience.
Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.
Terms - What is testing
DEFICIENCY
An absence of some necessary quality or element or the lack of fulfillment of a justified expectation.
Deficiencies include missing data, incomplete data, or incomplete reports
*Note: The term is not in the ISTQB glossary
Terms - What is testing
DEFECT MASKING
An occurrence in which one defect prevents the detection of another.
Also called ‘fault masking’
Terms - What is testing
TEST
A set of one or more test cases
TEST CASE
A set of input values, execution preconditions, expected results and execution postconditions, developed for a
particular objective or test condition, such as to exercise a particular program path or to verify compliance with a
specific requirement.
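The definition above maps naturally onto a small data structure. A minimal sketch, where the field names and the sample withdrawal scenario are invented for illustration:

```python
# A test case per the ISTQB definition: input values, execution
# preconditions, expected results and execution postconditions,
# developed for a particular objective. All sample values are invented.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    objective: str
    preconditions: list[str]
    inputs: dict[str, object]
    expected_result: object
    postconditions: list[str] = field(default_factory=list)

tc = TestCase(
    objective="Verify withdrawal is rejected when it exceeds the balance",
    preconditions=["account exists", "balance is 100.00"],
    inputs={"action": "withdraw", "amount": 150.00},
    expected_result="rejected: insufficient funds",
    postconditions=["balance is unchanged at 100.00"],
)
print(tc.objective)
```

Recording the expected result alongside the inputs is what later lets a test be judged pass or fail rather than "looks right".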
Terms - What is testing
QUALITY
“the totality of the characteristics of an entity that bear on its ability to satisfy stated or implied needs”
According to DIN-ISO 9126, software quality includes: functionality, reliability, usability, efficiency, maintainability and portability.
Terms - What is testing
RELIABILITY
The ability of the software product to perform its required functions under stated conditions for a specified period of time, or
for a specified number of operations.
Terms - What is testing
USABILITY
The capability of the software to be understood, learned, used and attractive to the user when used under specified
conditions.
Terms - What is testing
EFFICIENCY
The capability of the software product to provide appropriate performance, relative to the amount of resources used under
stated conditions.
Terms - What is testing
MAINTAINABILITY
The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make
future maintenance easier, or adapted to a changed environment.
Terms - What is testing
PORTABILITY
The ease with which the software product can be transferred from one hardware or
software environment to another.
Terms - What is testing
Does testing improve quality?
Testing does not build Quality into the software
Testing is a means of determining the Quality of the Software under Test
Terms - What is testing
QUALITY ASSURANCE
“All those planned actions used to fulfill the requirements for quality”
QUALITY CONTROL
“Overseeing the software development process to ensure that QA procedures and standards are
being followed”
Basics of Software Testing
ISTQB Certified Tester, Foundation Level
3.3. General testing principles
General testing principles
In this session we will
Look at the general testing principles
Look at the level of testing maturity
General testing principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing
Principle 1 – Testing shows presence of defects
Principle 2 – Exhaustive testing is impossible
Principle 3 – Early testing
Principle 4 – Defect clustering
Principle 5 – Pesticide paradox
Principle 6 – Testing is context dependent
Principle 7 – Absence-of-errors fallacy
Testing Maturity
A different view
Dr. Boris Beizer describes five levels of testing maturity:
Level 0 - “There is no difference between testing and debugging. Other than in support of debugging, testing has no purpose. ”
Level 1 - “The purpose of testing is to show that software works.”
Level 2 - “The purpose of testing is to show that software doesn’t work.”
Level 3 - ”The purpose of testing is not to prove anything, but to reduce the perceived risk of not working to an acceptable value.”
Level 4 - ”Testing is not an act. It is a mental discipline that results in low-risk software without much testing effort.”
Basics of Software Testing
ISTQB Certified Tester, Foundation Level
3.4. The Fundamental Test Process
The Fundamental Test Process
In this session we will
Look at the steps involved in The Fundamental Test Process
Look in detail at each step
The Fundamental Test Process
What is the objective of a test?
A “successful” test is one that does detect a fault
The Fundamental Test Process
[Diagram: the fundamental test process – PLAN, SPECIFY, EXECUTE, RECORD, CHECK, COMPLETE – supported by test management, test asset management and test environment management.]
The Fundamental Test Process
There are five defined steps in the Test Process
Test planning and control
Test analysis and design
Test implementation and execution
Evaluating exit criteria and reporting
Test closure activities
The Fundamental Test Process
Test planning and control
The Test Plan describes how the Test Strategy is implemented
A project plan for testing
Defines what is to be tested, how it is to be tested, what is needed for testing etc.
Defines the different types of testing – functional, non-functional, automation tools used, etc.
The Fundamental Test Process
Test planning and control
The most critical stage of the process
Effort spent now will be rewarded later
The foundation on which testing is built
The Fundamental Test Process
Test planning and control
The major tasks concerning test control are:
Measuring and analyzing results
Monitoring and documenting progress, test coverage and exit criteria
Assigning extra resources
Re-allocating resources
Adjusting the test schedule
Arranging for extra test environments
Refining the completion criteria
The Fundamental Test Process
Test analysis and design
Three-step process
Preparation & analysis
Building test cases
Defining expected results
The Fundamental Test Process
Test analysis and design
Test preparation
Analyze the Application
Identify test conditions
Identify test cases
Document thoroughly
Cross-referencing
The Fundamental Test Process
Test analysis and design
Build Test Cases
Test cases comprise
Standing data
Transaction data
Actions
Expected results
The Fundamental Test Process
Test analysis and design
Identify Expected Results
The outcome of each action
The state of the application during & after
The state of the data during & after
The Fundamental Test Process
Test analysis and design
Cross-Reference & Classify test cases
Enables maintainability of Test Assets
Allows testing to be performed in a focussed manner directed at specific areas
The Fundamental Test Process
Test implementation and execution
Test execution checklist
Test execution schedule/log
Identify which tests are to be run
Test environment primed & ready
Resources ready, willing & able
Back-up & recovery procedures in place
Batch runs planned & scheduled
Then we are ready to run the tests
The Fundamental Test Process
Execution
If planning and preparation are sufficiently detailed, this is the easy part of testing
The test is run to verify the application under test
The test itself either passes or fails
The Fundamental Test Process
Recording
The test log should record
Software and test versions
Specifications used as test base
Testing timings
Test results
Actual results Expected results
Defect details (if applicable)
[Sample test script layout: a named script with columns Seq No., Detailed Instruction, Test Criteria, Expected Results, Pass/Fail, plus Tested by and Date fields.]
The Fundamental Test Process
Evaluating exit criteria & reporting
Have we fulfilled the Test Exit Criteria?
Used to determine when to stop this phase of testing
Key functionality tested
Test coverage
Budget used?
Defect detection rate
Performance satisfactory
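The checklist above can be expressed as a simple predicate. The criteria names and thresholds here are invented for illustration; real exit criteria come from the test plan:

```python
# Sketch of evaluating test exit criteria as a checklist. The 90%
# coverage threshold and the zero-open-critical-defects rule are
# example values, not ISTQB-mandated figures.
def exit_criteria_met(coverage: float, open_critical_defects: int,
                      key_functions_tested: bool) -> bool:
    """True when this phase of testing may stop."""
    return (coverage >= 0.90
            and open_critical_defects == 0
            and key_functions_tested)

print(exit_criteria_met(0.95, 0, True))   # criteria satisfied
print(exit_criteria_met(0.95, 2, True))   # critical defects still open
```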
The Fundamental Test Process
Test closure activities
When testing is over the testing project can be closed.
Documents should be updated and put under version control.
The exact way depends on the specifics of your organization
The Fundamental Test Process
Summary
There are five steps in The Fundamental Test Process
Test planning and control
Test analysis and design
Test implementation and execution
Evaluating exit criteria and reporting
Test closure activities
The Fundamental Test Process – Expected Outcome and Test Oracle
In this subchapter we will
Understand the need to define Expected Outcome
Understand where expected results can be found
Understand what a Test Oracle is
The Fundamental Test Process – Expected Outcome and Test Oracle
Expected Results = expected outcomes
Identify required behavior
If the expected outcome of a test is not defined the actual output may be misinterpreted
The Fundamental Test Process – Expected Outcome and Test Oracle
Running a test with only a general concept of the outcome is fatal
It is vital the expected results are defined with the tests, before they are used
You cannot decide whether a test has passed just because it looks right
The Fundamental Test Process – Expected Outcome and Test Oracle
Test Oracle
A source to determine expected result
In broader terms, a principle or mechanism to recognize a problem
Our oracle can be an existing system, a document, a person… but not the code itself
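A minimal sketch of using an existing system as an oracle, as described above. Both functions and the 20% tax rule are invented for the example:

```python
# Oracle sketch: a trusted existing system supplies expected results
# for the system under test. Both implementations are invented.
def legacy_tax(amount: float) -> float:
    """The oracle: the trusted existing system."""
    return round(amount * 0.20, 2)

def new_tax(amount: float) -> float:
    """The system under test (a reimplementation being verified)."""
    return round(amount * 0.20, 2)

# Compare the system under test against the oracle over a set of inputs.
for amount in [0.0, 9.99, 100.0, 12345.67]:
    assert new_tax(amount) == legacy_tax(amount), f"mismatch at {amount}"
print("all oracle checks passed")
```

This is also why oracles constrain automation: the comparison loop is only as trustworthy as the oracle it compares against.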
The Fundamental Test Process – Expected Outcome and Test Oracle
Oracle Heuristics or Possible Oracles
History: Present behavior is consistent with past behavior
Our Image: behavior is consistent with an image the organization wants to project
Comparable Products: behavior is consistent with that of similar functions in comparable products
Claims: behaves as claimed or advertised (as seen on TV)
The Fundamental Test Process – Expected Outcome and Test Oracle
Oracle Heuristics or Possible Oracles (continued)
Specifications or Regulations: behaves as documented
User Expectations: behaves the way we think users want
The Product (within): comparable to functions patterns within the product
Purpose: behavior is consistent with apparent purpose
The Fundamental Test Process – Expected Outcome and Test Oracle
Oracles in use
Simplification of Risk
Do not think ‘pass – fail’
Think instead ‘problem – no problem’
Oracles and Automation
Our ability to automate testing is fundamentally constrained by our ability to create and use oracles
Possible Pitfalls
False alarms
Missed bugs
The Fundamental Test Process – Expected Outcome and Test Oracle
Summary
Without defining expected results do you know if a test has passed or failed?
Expected results must be based on oracles.
The Fundamental Test Process – Prioritizing the Tests
In this subchapter we will
Understand why we need to prioritize tests
Understand how we decide the priority of individual tests
The Fundamental Test Process – Prioritizing the Tests
It is not possible to test everything
Therefore errors will get through to the live system
We must do the best testing possible in the time available
This means we must prioritize and focus testing on the priorities
The Fundamental Test Process – Prioritizing the Tests
Aspects to Consider
Severity
Probability
Visibility
Priority of requirements
Customer / User Requirements
Frequency of change
Vulnerability to error
Technical criticality
Complexity
Time & Resources
The Fundamental Test Process – Prioritizing the Tests
Business Criticality
What elements of the application are essential to the success of the organization?
The Fundamental Test Process – Prioritizing the Tests
Customer factors
How visible would a failure be?
What does the customer want?
The Fundamental Test Process – Prioritizing the Tests
Technical Factors
How complex is it?
How likely is an error?
How often does this change?
The Fundamental Test Process – Prioritizing the Tests
Summary
There will never be enough time to complete all tests
Therefore the tests covering the areas deemed most important (to the business, highest risk, etc.) must be run first
Basics of Software Testing
ISTQB Certified Tester, Foundation Level
3.5. The Psychology of Testing
The Psychology of Testing
In this session we will
Understand what qualities make good testers
Look at a tester's relationship with developers
Look at a tester's relationship with management
Understand the issues with testing independence
The Psychology of Testing
What makes a Tester?
Testing is primarily to find faults
Can be regarded as “destructive”
Development is “constructive”
Testing asks questions of the software
Testers need to ask questions
A tester needs many qualities…
The Psychology of Testing
What makes a Tester?
Intellectual qualities
Can absorb incomplete facts
Can work with incomplete facts
Can learn quickly on many levels
Good verbal communication
Good written communication
Ability to prioritize
Self-organization
The Psychology of Testing
What makes a Tester?
Knowledge
How projects work
How computer systems and business needs interact
IT – technology
IT – commercial aspects
Testing techniques
Testing best practice
To be able to think inside and outside of a system specification
The Psychology of Testing
What makes a Tester?
More skills to acquire
Ability to find faults – planning, preparation & execution
Ability to understand systems
Ability to read specifications
Ability to extract testable functionality
Ability to work efficiently
Ability to focus on essentials
The Psychology of Testing
Reporting Faults
Faults need to be reported to
Developers, to enable them to fix them
Management, so they can track progress
Communication to both groups is vital
The Psychology of Testing
Communication with Developers
A good relationship is vital
Developers need to keep testers up to date with changes to the application
Testers need to inform developers of defects to allow fixes to be applied
The Psychology of Testing
Communication with Management
Managers need progress reports
The best way is through metrics
Number of tests planned & prepared
Number of tests executed to date
Number of defects raised & fixed
How long planning, preparation and execution stages take
The Psychology of Testing
Testing independence
It is important that testing is separate from development
The developer is likely to confirm adherence not deviation
The developer will make assumptions – the same when testing as developing
The Psychology of Testing
Levels of Independence
Low – Developers write their own tests
Medium – tests are written by another developer
High – Tests written by an independent body
Tests written by another section
Tests written by another organization
Utopia – Tests generated automatically
The Psychology of Testing
Summary
Testers require a particular set of skills
The desire to break things
The desire to explore and experiment
Communication
Questioning
Testing requires a different mentality from development
“destroying” things rather than creating them
Testing should be separate from development
References:
References
Black, 2001, Kaner, 2002
Beizer, 1990, Black, 2001, Myers, 1979
Beizer, 1990, Hetzel, 1998
Craig, 2002
Basics of Software Testing
ISTQB Certified Tester, Foundation Level
4. Testing throughout the Software Life Cycle
4.1. Software development models
Software development models
In this session we will
Look at the most commonly accepted models for software development:
The V-Model
Iterative Development Model
V, V&T (Validation, Verification and Testing)
Software development models
There are many approaches for software development
The most commonly used model is: The V – Model
The main activities in software testing are: V, V&T (Validation, Verification and Testing)
There are various other models, but they are not part of this course:
Sequential model (Waterfall)
Iterative Development Model
Software development models
Validation
“Confirmation by examination and through provision of objective evidence that the requirements for a specific
intended use or application have been fulfilled.”
Software development models
Verification
“Confirmation by examination and through the provision of objective evidence that specified requirements have been
fulfilled”
Software development models
Testing
“The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that
they are fit for purpose and to detect defects.”
Software development models
V-model
The most commonly used model in software development
It represents the Software Development Life Cycle
Shows the various stages in development and testing
Shows the relationships between the various stages
Software development models
[Diagram: a simple V-model. Development descends from high level to low level over time (Requirements, design, code); testing (T) ascends through the corresponding levels (Unit test, System test, UAT).]
Software development models
[Diagram: a typical development life-cycle V-model. Specification stages (Business Requirements, Functional Specification, Technical Specification, Design Specification, Code) pair with the test levels (User Acceptance Testing, System Testing, Integration Testing in the large, Integration Testing in the small, Component Testing).]
Software development models
V-model – where do we start testing?
[Diagram: the simple V-model again (Requirements, Design, Code; Unit test, System Test, UAT), posing the question of where in the life cycle testing should begin.]
Software development models
V-Model – Start Testing Early
[Diagram: the V-model annotated to show early testing. Each specification stage (Reqs, Spec, Code) is reviewed (R) and its tests planned (P) before the corresponding tests (T) – Unit Test, System Test, UAT – are executed. Legend: R – Review, P – Plan, T – Test.]
Software development models
Iterative Development Models
Breaking down the project into a series of small projects
Each subproject may follow a mini V-model
Related to RAD and agile development
Benefits:
Early analysis of risks
Revisiting plans
Quality increments (when needed)
High motivation
A potential pitfall is that the project duration seems to increase. This is more of a psychological problem, though.
Software development models
Summary
The V-Model is the most common model for software development
There are other models used for testing, but these are less widely used
Including V, V&T and the Iterative Development Model
Other models have also been created
Basics of Software Testing
ISTQB Certified Tester, Foundation Level
4.2. Test Levels
4.2.1. Component testing
Test Levels – Component Testing
In this subchapter we will
Understand what Component Testing is
Look at the Component Testing Process
Look at the myriad types of Component Testing
Test Levels – Component Testing
What is component testing?
“the testing of individual software components”
What is a component?
“a minimal software item that can be tested in isolation.”
Component testing is also known as:
Unit testing
Module testing
Program testing
Test Levels – Component Testing
Component testing is the lowest form of testing
At the bottom of the V-model
Each component is tested in isolation
Prior to being integrated into the system
Involves testing the code itself
Tests the code in the greatest detail
Usually done by the component's developer
Test Levels – Component Testing
Component Test Process
Component Test Planning
Component Test Specification
Component Test Execution
Component Test Recording
Checking for Component Test Completion
Test Levels – Component Testing
Testing techniques
Equivalence Partitioning
Boundary Value Analysis
State Transition Testing
Cause-Effect Graphing
Syntax Testing
Statement Testing
Branch/Decision Testing
Data Flow Testing
Branch Condition Testing
Branch Condition Combination Testing
Modified Condition/Decision Testing
LCSAJ Testing
Random Testing
Other Testing Techniques
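To make two of these concrete (the valid range 1 to 100 is a hypothetical requirement, not from the syllabus): equivalence partitioning divides the input into classes expected to behave alike, and boundary value analysis then tests at the edges of those classes, where off-by-one defects cluster.

```python
# Hypothetical requirement: a quantity field accepts integers 1 to 100.
def is_valid_quantity(n):
    return 1 <= n <= 100

# Equivalence partitioning: one representative per class is enough
# to cover the class (below range, in range, above range).
partitions = {-5: False, 50: True, 500: False}

# Boundary value analysis: values at and just beside each edge.
boundaries = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in {**partitions, **boundaries}.items():
    assert is_valid_quantity(value) == expected, value
print("all partition and boundary cases pass")
```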
Test Levels – Component Testing
What about “Playtime” ?
Often a short unscripted session with a component can be of benefit
More common at later stages (once formal test techniques have been completed)
Test Levels – Component Testing
Not done too well?
Some basics would quickly improve it:
Checklists
Standards
Procedures
Code & version control
Sign-off (check from an independent actor, for example a Technical Lead)
Test Levels – Component Testing
Summary
Component testing is intended to test a software component before it is integrated into the system
There are lots of different approaches
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
4.2. Test Levels
4.2.2. Integration Testing
Test Levels – Integration Testing in the Small
In this subchapter we will
Understand what “Integration in the Small” is
Look at how & why we do “Integration Testing in the Small”
Test Levels – Integration Testing in the Small
Integration testing
“Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.”
Integration testing in the small = component integration testing
“testing performed to expose defects in the interfaces and interaction between integrated components”
Test Levels – Integration Testing in the Small
A sample system (diagram: components connected to databases and external systems)
Test Levels – Integration Testing in the Small
All components of a computer application work on only one thing
DATA
Data is passed from one component to another
Data is passed from one system to another
Software components will be required to add, amend, and delete information, validate it, and report on it
Test Levels – Integration Testing in the Small
Integration testing
Testing should start with the smallest, definable component
Building tested components up to the next level of definition
Eventually the complete business process is tested ‘End to End’
Test Levels – Integration Testing in the Small
Testing Components
Each component needs to be tested individually to ensure it processes the data as required
This is Component Testing
Test Levels – Integration Testing in the Small
Once components have been tested we can link them together to see if they can work together
This is Integration Testing in the Small:
Linking components together to ensure that they communicate correctly
Gradually linking more and more components together
Proving that the data is passed between components as required
Steadily increasing the number of components to create & test sub-systems and then the complete system
Test Levels – Integration Testing in the Small
Planning testing
We need to determine when and at what levels Integration testing will be performed
Identify the boundaries
A similar process will need to be followed for all elements
Once these elements have been tested individually, they are further integrated to support the business process
Test Levels – Integration Testing in the Small
Stubs and drivers
Used to emulate the “missing bits”:
External systems
Sub-systems
Missing components
May need to be written specifically for the SUT
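Both ideas in miniature (the quote component and the exchange-rate service are hypothetical): a stub stands in for a missing dependency below the component under test, while a driver stands in for its missing caller above.

```python
# Component under test: formats a quote, but depends on an
# exchange-rate service that is not available yet.
def build_quote(amount, get_rate):
    return "Total: {:.2f} EUR".format(amount * get_rate("EUR"))

# Stub: emulates the missing service with a canned, predictable answer.
def rate_stub(currency):
    return 0.5

# Driver: calls the component the way its (missing) caller would.
def driver():
    return build_quote(200, rate_stub)

print(driver())  # prints "Total: 100.00 EUR"
```

Because the stub’s answer is fixed, the component’s behaviour can be checked deterministically before the real service exists.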
Test Levels – Integration Testing in the Small
Approaches to integration testing
Incremental: Top-down, Bottom-up, Functional
Non-incremental: “Big bang”
Test Levels – Integration Testing in the Small
Summary
Integration testing is needed to test how components interact with each other and how the system under test works with other systems
Integration testing in the small is concerned with the interaction between the various components of a system
Test Levels – Integration Testing in the Large
In this subchapter we will
Understand what Integration Testing is
Understand what Integration testing in the Large is
Look at how & why we test system integration
Test Levels – Integration Testing in the Large
Integration testing
“Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.”
Integration testing in the large = system integration testing
Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet)
Why do we need to do integration testing in the large?
Just because the SUT works in the test lab doesn’t mean it will work in the real world
Test Levels – Integration Testing in the Large
All components of a computer system work on only one thing
DATA
Data is manipulated within the SUT
It may then be passed to another system
It may be stored in a database
This is done to satisfy a business process requirement
Test Levels – Integration Testing in the Large
A typical system
(Diagram: the SUT running in several environments, exchanging data with external applications and databases via a data model)
Test Levels – Integration Testing in the Large
Integration Testing on a Large Scale
Test the way the SUT interfaces with external systems
The SUT and the external systems may run on the same and/or different platforms (Windows, Linux, macOS, …)
They may be located in separate locations and use communication protocols to pass data back & forth
Test Levels – Integration Testing in the Large
Additional challenges posed by large scale
Multiple platforms
Communications between platforms
Management of the environments
Coordination of changes
Different technical skills required
Test Levels – Integration Testing in the Large
Typical approach
Test the system
In a test environment
Use stubs & drivers where external systems aren’t available
Test the system “in situ”
In a replica of the production / live environment
Use stubs & drivers where external systems aren’t available
Test the system and its interfaces with other systems
In a replica of the production / live environment
Access test versions of the external systems
Test Levels – Integration Testing in the Large
Fundamentally we are still looking at
DATA
& how that data is obtained, processed, manipulated, stored and made available or transported for use elsewhere within or outside the organization
Test Levels – Integration Testing in the Large
Planning
We need to determine when and at what levels Integration testing will be performed
Identify the boundaries
A similar process will need to be followed for all elements
Once these elements have been tested individually, they are further integrated to support the business process
Test Levels – Integration Testing in the Large
Approaches to integration testing
Incremental: Top-down, Bottom-up, Functional
Non-incremental: “Big bang”
Test Levels – Integration Testing in the Large
Summary
Integration testing is “testing performed to expose defects in the interfaces and in the interaction between integrated components”
Large-scale integration testing is needed to test how the system under test works with other systems
Just because a system works in a test environment doesn’t mean it will work in a live environment
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
4.2. Test Levels
4.2.3. System Testing
Test Levels – System Testing
What is System Testing?
“The process of testing an integrated system to verify that it meets specified requirements.”
Testing of the complete system
May be the last step in integration testing in the small
May be the first time that enough of the system has been put together to make a working system
Ideally done by an independent test team
Two types – functional and non-functional
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
4.2. Test Levels
4.2.3.1. Functional System Testing
Test Levels – Functional System Testing
In this subchapter we will
Understand what functional system testing is
Understand the benefits of functional system testing
Test Levels – Functional System Testing
A Functional Requirement is
“A requirement that specifies a function that a system or component must perform”
Functional System testing is geared to checking the function of the system against specification
May be requirements based or business process based
Test Levels – Functional System Testing
Testing Based on Requirements
Requirements specification used to derive test cases
System is tested to ensure the requirements are met
Test Levels – Functional System Testing
Testing carried out against Business processes
Based on expected use of the system
Builds use cases – test cases that reflect actual or expected use of the system
Test Levels – Functional System Testing
Security Testing
“testing to determine the security of the software product”
How easy is it for an unauthorized user to access the system?
How easy is it for an unauthorized person to access the data?
Test Levels – Functional System Testing
Summary
Functional system testing allows us to test the system against the specifications, user requirements and business processes
Two approaches:
From the functional requirements, and
From a business process
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
4.2. Test Levels
4.2.3.2. Non-Functional System Testing
Test Levels – Non-Functional System Testing
In this subchapter we will
Understand what Non-Functional System testing is
Understand the need for Non-Functional System Testing
Look at the various types of Non-Functional System Testing
Test Levels – Non-Functional System Testing
Non-Functional System Testing is defined as:
“testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.”
Users, generally, focus on the Functionality of the system
Test Levels – Non-Functional System Testing
Functional testing may show that the system performs as per the requirements
Will the system work if now deployed?
There are other factors beyond functionality that are critical to the successful use of a system
Test Levels – Non-Functional System Testing
Usability testing
“testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.”
Employ people to use the application as a user and study how they use it – often first-time (“virgin”) users are brought in to test the system
A beta test may be a cheap way of doing usability testing
There is no simple way to examine HOW people use the product
Testing for ease of learning is not the same as testing for ease of use
Test Levels – Non-Functional System Testing
Storage Testing = resource utilization testing
“The process of testing to determine the resource-utilization of a software product.”
Study how memory and storage are used by the product
Predict when extra capacity may be needed
Test Levels – Non-Functional System Testing
Installability Testing
“The process of testing the installability of a software product”
Does the installation work?
Does installation affect other software?
Is it easy to carry out?
Does it uninstall correctly?
Test Levels – Non-Functional System Testing
Documentation testing
“Testing the quality of the documentation, e.g. user guide or installation guide.”
Is the documentation complete?
Is the documentation accurate?
Is the documentation available?
Test Levels – Non-Functional System Testing
Recovery Testing = Recoverability testing
“The process of testing to determine the recoverability of a software product.”
Should include recovery from system back-ups
Related to reliability
Test Levels – Non-Functional System Testing
Load testing
“Testing geared to assessing the application’s ability to deal with the expected throughput of data & users”
Load test
A test type concerned with measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
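A toy sketch of the idea: step up the number of parallel users and watch throughput. The `transaction` function is a stand-in; a real load test would drive the system under test (typically over its network interface) with a dedicated tool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Stand-in for one user transaction against the system under test.
    time.sleep(0.01)
    return True

def run_load(parallel_users, transactions_per_user=5):
    """Run transactions with the given number of parallel users."""
    total = parallel_users * transactions_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=parallel_users) as pool:
        results = list(pool.map(lambda _: transaction(), range(total)))
    elapsed = time.perf_counter() - start
    return sum(results), elapsed

# Increase the load step by step to see what can be handled.
for users in (1, 5, 10):
    completed, elapsed = run_load(users)
    print(f"{users:>2} parallel users: {completed} transactions in {elapsed:.2f}s")
```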
Test Levels – Non-Functional System Testing
Stress Testing
“testing conducted to evaluate a system or component at or beyond the limits of its specified requirements”
Each major component in an application or system is tested to its limits
Guides support personnel to prepare for situations when problems are likely to occur
Also, Mean Time To Failure (MTTF) is calculated
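As a worked sketch with made-up numbers, MTTF is just observed operating time divided by the number of failures:

```python
# Hypothetical observations from a stress-test run.
operating_hours = 500
failures = 4

mttf = operating_hours / failures  # mean time to failure
print(f"MTTF = {mttf:.1f} hours")  # prints "MTTF = 125.0 hours"
```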
Test Levels – Non-Functional System Testing
Performance Testing
“The process of testing to determine the performance of a software product”
Related to Efficiency testing
Test Levels – Non-Functional System Testing
Volume Testing
“Testing where the system is subjected to large volumes of data”
Related to Resource-utilization testing
Test Levels – Non-Functional System Testing
Summary
Just because a system’s functions have been tested doesn’t mean that testing is complete
There are a range of non-functional tests that need to be performed upon a system
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
4.2. Test Levels
4.2.4. Acceptance Testing
Test Levels – Acceptance Testing
In this subchapter we will
Understand what acceptance testing is
Why you would want to do it
How you would plan it and prepare for it
What you need to actually do it
Understand the different types of acceptance testing
Test Levels – Acceptance Testing
What is Acceptance Testing?
“Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.”
Test Levels – Acceptance Testing
What is User Acceptance Testing (UAT)?
Exactly what it says it is
Users of end product conducting the tests
The set-up will represent a working environment
Covers all areas of a project, not just the system
A.K.A. Business Acceptance Testing or Business Process Testing
Test Levels – Acceptance Testing
A typical development life-cycle V-model (diagram, plotting level against time): the specification stages (Business Requirements, Functional Specification, Technical Specification, Design Specification, Code) are matched to the test levels (Component Testing, Integration Testing (small), System Testing, Integration Testing (large), User Acceptance Testing).
Test Levels – Acceptance Testing
Planning UAT
Why plan?
Things to consider
Preparing Tests
Manual
Automated
Test Levels – Acceptance Testing
Why Plan?
If you don’t, then how do you know you have achieved what you set out to do?
Avoids repetition
Test according to code releases
Makes efficient and effective use of time and resources
Test Levels – Acceptance Testing
Things to consider
Timescales
Resources
Availability
Test Levels – Acceptance Testing
Preparing your tests
Take a logical approach
Identify the business processes
Build into everyday business scenarios
Test Levels – Acceptance Testing
Data Requirements
Copied Environments
Created Environments
Test Levels – Acceptance Testing
Running the Tests
Order of tests
Confidence Checks
Automated and Manual test runs
Test Levels – Acceptance Testing
Contract Acceptance
A demonstration of the acceptance criteria
Acceptance criteria will have been defined in the contract
Before the software is accepted, it is necessary to show that it matches its specification as defined in the contract
Test Levels – Acceptance Testing
Alpha Testing
Letting your customers do your testing
Requires a stable version of the software
People who represent your target market use the product in the same way(s) as if they had bought the finished version
Alpha testing is conducted in-house (at the developer’s site)
Test Levels – Acceptance Testing
Beta Testing
As Alpha testing but users perform tests at their site
Test Levels – Acceptance Testing
Summary
Before software is released it should be subjected to Acceptance Testing
User representation in testing is VITAL
If the product does not pass UAT then a decision about implementation needs to be made
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
4.3. Test Types – the targets of testing
Test Types – retesting & regression testing
In this subchapter we will
Understand how fault fixing and re-testing are key to the overall testing process
Look at how test repeatability helps
Understand what regression testing is and where it fits in
Understand how to select test cases for regression testing
Test Types – retesting & regression testing
Testing is directed at finding faults
These faults will need to be corrected and a new version of the software released
The tests will need to be run again to prove that the fault is fixed
This is known as Re-Testing
Test Types – retesting & regression testing
The need for re-testing needs to be planned & designed for
Schedules need to allow for re-testing
Tests need to be easily re-run
Test data needs to be re-usable
The environment needs to be easily restored
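One common way to meet the last three points is a fixture that rebuilds a known-good environment before every run, so the same test gives the same verdict after every fix. A sketch with an in-memory database (the schema and the `add_user` function are hypothetical):

```python
import sqlite3
import unittest

# Hypothetical function whose defect fix we want to re-test.
def add_user(conn, name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

class ReRunnableTest(unittest.TestCase):
    def setUp(self):
        # Restore the environment from scratch before each run:
        # fresh database, known schema, no leftover data.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (name TEXT)")

    def tearDown(self):
        self.conn.close()

    def test_user_is_added(self):
        add_user(self.conn, "alice")
        count = self.conn.execute(
            "SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 1)
```

Because `setUp` rebuilds everything, nothing left over from a previous run can distort the re-test.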
Test Types – retesting & regression testing
Regression Testing
“Re-testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered as a result of the changes made. It is performed when the software or its environment is changed.”
Test Types – retesting & regression testing
(Diagram: two versions of a system of interlinked programs with data input, quote generation, load, printing and report functions; fixing a bug in one program can break another. “How can you fix this bug without breaking that?”)
Test Types – retesting & regression testing
Tests will also need to be re-run when checking software upgrades
Regression tests should be run whenever there is a change to the software or the environment
Regression tests are executed to prove aspects of a system have not changed
Regression Testing is a vital testing technique
Test Types – retesting & regression testing
Selecting cases for regression
Tests that cover safety or business critical functions
Tests for areas that change regularly
Tests of functions that have a high level of faults
Test Types – retesting & regression testing
Regression testing is the ideal foundation for Automation
Selecting the right test cases is vital, and requires a degree of knowledge of the application and its expected evolution
Test Types – retesting & regression testing
Summary
Once a fault has been fixed the software MUST be re-tested
New faults may have been introduced by the fix
Existing faults may have been uncovered by the fix
Tests need to be written to enable their re-use
Re-testing is the re-running of a failed test once a fix has been implemented, to check that the fix has worked
Regression testing is the running of a wider test suite to check for unexpected errors in unchanged code
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
4.4. Maintenance testing
Maintenance Testing
In this session we will
Look at the challenges that testers face when an application changes post implementation
See how to ensure that maintenance applied to the system does not cause failures
Maintenance Testing
What is Maintenance Testing?
“testing the changes to an operational system or the impact of a changed environment to an operational system.”
Testing of changes to existing, established systems
Where maintenance has been performed on the (old) code
Checking that the fixes have been made and that the system has not regressed
Maintenance Testing
The Challenge
Old code
No documentation
Out of date documentation (worse)
How much testing to do, and when to stop testing
Mitigation of risk vs. lost revenue
Maintenance Testing
Often an application will need maintenance
In doing this, minor changes will be required
These need to be tested quickly & effectively to allow service to be restored
These systems may have been in place and working for years
They are likely to be vital to the running of the business
They may be in use 24/7
Maintenance Testing
Main causes for maintenance testing
Migration of the program products
Transition of the program products
Changes in the production environment
Retirement of the software or system
Maintenance Testing
You need to be able to test the changes quickly and effectively
As well as being able to prove that the change introduced does not impact the other functions
A Regression test is in order
Maintenance Testing
Other Risks
Reactive, high risk when changing
Impact analysis is vital but is difficult
Stopping testing too soon – errors may be missed
Stopping testing too late – missed business through delayed implementation
Maintenance Testing
Summary
All established systems need maintenance from time to time
The changed code / function will need to be tested
A regression test will need to be executed to ensure that the change(s) have been made correctly and that they have not affected some other part of the system
Impact analysis is key, but difficult
Testing throughout the Software Life Cycle
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
5. Static techniques
Static techniques
Over the next three sessions we will
Discover the various approaches to Static Techniques (Static Testing)
Discover the benefits of Static Testing
Discover the issues with Static Testing
Static techniques
What is Static Testing?
“Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis”
How is this done?
By reviewing system deliverables
Static techniques
Session Topics
1. Reviews and the Test Process
2. Types of Review
3. Static Analysis
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
5.1. Review and the Test Process
Reviews and the Test Process
Session Objectives
To identify what and when to review
To cover the Costs & Benefits of Reviews
Reviews and the Test Process
Why review?
To identify errors as soon as possible in the development lifecycle
Reviews offer the chance to find errors in the system specifications
This should lead to:
Development productivity improvements
Reduced development time-scales
Lifetime cost reductions
Reduced failure levels
Reviews and the Test Process
When to review?
As soon as an object is ready, before it is used as a product or the basis for the next step in development
Reviews and the Test Process
What to review?
Anything and everything can be reviewed
Requirements, System and Program Specifications should be reviewed prior to publication
System Design deliverables should be reviewed both in terms of functionality & technical robustness
Code can be reviewed
Reviews and the Test Process
What should be reviewed?
Program specifications should be reviewed before construction
Code should be reviewed before execution
Test Plans should be reviewed before creating tests
Test Results should be reviewed before implementation
Reviews and the Test Process
Reviews can be hazardous
If misused they can lead to friction
The errors & omissions found should be regarded as a good thing
The author(s) should not take errors & omissions personally
It is a positive step to identify defects and issues before the next stage
Reviews and the Test Process
Building a Quality culture
In order for them to work, Reviews must be regarded as a positive step and must be properly organized
This is often a part of the company’s Quality culture:
Define the review panel
Define the roles of its members
Define the review procedures
“Sell” the quality culture to staff
Company culture is enhanced to include Quality Control
Reviews and the Test Process
There are other dangers in a regime of regular reviews
Lack of Preparation
“familiarity breeds contempt”
No follow up to ensure correction has been made
The wrong people doing them
Used as witch-hunts when things are going wrong
Reviews and the Test Process
Systems development is very complex
This means that it is difficult to build a system with all elements completely & accurately specified
By reviewing a deliverable, its deficiencies can be identified and remedied earlier – saving time and money
Reviews and the Test Process
What else can a tester review?
All development and design documentation
Test plans
Test cases
Test results
Causes of defects
Test metrics
Development metrics
Operational defects
Defects
We can measure how well we do
Reviews and the Test Process
The costs of reviews
On-going reviews cost approximately 15% of the development budget
This includes activities such as the review itself, metric analysis & process improvement
Reviews are highly cost effective
Finding and fixing faults in reviews is significantly cheaper than finding and fixing faults in later stages of testing
Reviews and the Test Process
The benefits of reviews
Reviews save time and money
Development productivity improvements
People take greater care
They have more pride in doing good work
They have a better understanding of what they are required to deliver
Remove defects / errors earlier
Reduced development costs
Reduced fault levels
Reduced lifetime cost
Reviews and the Test Process
Summary
Reviews enable us to ensure that the system’s specification is correct and relates to the users’ requirements
Anything generated by a project can be reviewed
In order for them to be effective, reviews must be well managed
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
5.2. Review in general
Review in general
In this session we will
Look at the types of review & the activities performed
Look at goals & deliverables
Look at roles & responsibilities
Examine the suitability of each type of review
Discover review pitfalls
Review in general – Types of Review
Types of review:
Author’s review
Informal Review
Walkthroughs
Presentations
Technical Review
Inspections
Review instruments:
Prototypes
Checklists
Review in general – Types of Review
Author’s review
Before issuing a document, the author should always review his / her work
Conformance to company standards
Fitness for purpose
Presentation
Spelling
Review in general – Types of Review
Informal Review
It is very helpful to ask a colleague to read through the deliverable to
Identify jargon to eliminate or translate
Make sure it all makes sense
Rationalize the sequence
Identify where the author’s familiarity with the subject assumes the reader knows more than he or she actually does
This is the least effective form of review as there is no recognized way of measuring it
Review in general – Types of Review
Walkthrough
Usually led by the author, although the Project Manager or Team Leader may conduct them
For some technical deliverables or complex processes it is very useful for the author to walk through the document
Reviewers are asked to comment and query the concepts and processes that are to be delivered
Review in general – Types of Review
Presentations
Used to explain to users your understanding of their Requirements
To explain how you will test those requirements i.e. your approach
To gain agreement for the next step
Review in general – Types of Review
Technical (Peer) Review
Your Peers can also perform more formal technical reviews in larger groups
Often used by workgroups as a self-checking mechanism
Its format is defined, agreed and practised with documented processes and outputs
Requires technical expertise
Review in general – Types of Review
Inspections
Focus on reviewing code or documents
Formal process based on rules and checklists
Chaired by a Moderator
Every attendee has a defined role
Includes metrics and entry & exit criteria
Training is required
Reviewers must be prepared prior to attendance
Inspections last 2 hours max
Review in general – Types of Review
Fagan Inspections
Are very effective – to the point where they can replace unit testing
Defects are reported – no debate
Contentious issues and queries are resolved outside the inspections and the result reported back to the Moderator
Named after Michael Fagan, founder of the method – www.mfagan.com.
Review in general – Types of Review
Prototypes (review instrument)
Allows the non-technician to see what is going to be built
Enables the business to refine their business flows or adjust sequences on input screens
Is used to demonstrate and display features of the [proposed] system
GUI screens, Database designs, Network architecture etc
Review in general – Types of Review
Checklists (review instrument)
Extremely simple and useful method of ensuring that all elements of a deliverable have been considered
Tends to ensure conformity of content, sequence and style
Can be devised for almost any activity
Review in general
Fundamental procedures and phases of a review
Planning
Organizational preparation
Individual preparation
Review meeting
Follow-up
Review in general
Goals
Validation and verification against specifications and standards
Achieve consensus before moving on to next stage
Review in general
Activities
Reviews need to be planned
They may require overview meetings
Preparation
The review meeting
Follow-up
Review in general
For each type of review (especially the formal ones) roles and responsibilities must be clearly defined
Training may be required to understand and perform the respective roles
Roles & responsibilities:
Manager
Moderator
Author
Reviewer(s)
Secretary
Review in general
Manager – decides on the execution of a review. Also responsible for promoting and encouraging the review process
Moderator – a neutral person. Ensures that the review stays focused on the goals. This is usually a specially trained person
Author – writer or person with chief responsibility for the document(s). Authors sometimes take the review personally and see it as personal criticism
Reviewer(s) – technical experts who take part in the meeting. Must be open to new ideas. Has to be aware that the author might take any criticism personally
Secretary – documents the deviations, problems, points and suggestions. Should take as little part in the discussion as possible in order to stay focused
Review in general
Deliverables
Any project deliverable can (and should) be reviewed
Reviewing these items can lead to:
Product changes
Source document changes
Improvements (to both reviews and development)
Review in general
Success factors for reviews include:
Each review has a clear predefined objective
The right people for the review objectives are involved
Defects found are welcomed, and expressed objectively
People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author)
Review techniques are applied that are suitable to the type and level of software work products and reviewers
Review in general
Success factors for reviews include (cont.):
Checklists or roles are used if appropriate to increase effectiveness of defect identification
Training is given in review techniques, especially the more formal techniques, such as inspection
Management supports a good review process (e.g. by incorporating adequate time for review activities in project schedules)
There is an emphasis on learning and process improvement
Review in general
Pitfalls:
Lack of training
And therefore of understanding of roles, responsibilities and the purpose of the review
Lack of documentation
Lack of management support
Review in general
Review Management
Formal reviews require proper management
Allow people time to prepare
Issue guidance notes (if not in Standards)
Give plenty of notice
Practice Presentations & Walkthroughs
Sound out “contentious issues”
ALWAYS document outcomes & actions
Review in general
Comparison of the review types (what is reviewed; approach; why / when; by whom; to whom):

Walkthrough: Req.s & Designs; Informal; Before detail; Author(s); IT & Business
Informal Review: Everything; Informal; Always*; Peer; Author
Technical Review: Everything; Formal; Before detail; Group(s); IT & Business
Inspection: Specs & Code; Formal; Before sign off; Moderator; Reviewers

Always*: can be done at all points when a document is released
Review in general
Summary
There are various types of reviews
Companies must decide which one(s) are best for them
In order to gain maximum benefit from them they must be organized and implemented
Basics of Software Testing, ISTQB Certified Tester, Foundation Level
5.3. Static Analysis by tools
Static Analysis by tools
In this session we will:
Understand what Static Analysis is
Look at some of the elements of Static Analysis
Compilers
Static Analysis tools
Data flow analysis
Control flow analysis
Complexity analysis
Static Analysis by tools
What is Static Analysis?
“Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.”
Essence of Static Analysis:
Examine the logic and syntax of the code
Attempting to identify errors in the code
Provides metrics on flow through the SUT and its complexity
A form of automation – usually done with tools
Static Analysis by tools
Benefits of static analysis:
Early detection of defects prior to test execution
Early warning about suspicious aspects of the code or design, by the calculation of metrics, such as a high complexity measure
Identification of defects not easily found by dynamic testing
Detecting dependencies and inconsistencies in software models, such as links
Improved maintainability of code and design
Prevention of defects, if lessons are learned in development
Static Analysis by tools
What do we hope to find?
Errors in the code
Unreachable code
Undeclared variables
Parameter type mismatches
Uncalled functions and procedures
Possible array boundary violations
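As a concrete (and deliberately flawed) sketch, the following Python fragment runs without error yet contains exactly the kind of defects listed above; the function name `classify` and the comments are ours, not from any specific analysis tool:

```python
# A toy fragment containing defects a static analyzer should flag,
# even though the code executes without error.
def classify(n):
    unused = 42                 # assigned but never read -> data-flow anomaly
    if n >= 0:
        return "non-negative"
    else:
        return "negative"
    return "unreachable"        # follows both returns -> unreachable code

print(classify(5))
```

Dynamic testing would never reveal the unreachable statement, because no input can reach it; only inspection of the code structure finds it.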
Static Analysis by tools
What else will we get?
Information on the code
Its complexity
The flow of data through it
The control flow through it
Metrics and measurements
Static Analysis by tools
Compilers
Convert program language code into Machine Code for faster execution
Will perform some basic Static Analysis
Usually produces a source code list
Often produces a memory map
Often produces a cross-reference of labels and variables
Will do a syntax check on the code prior to compilation
Static Analysis by tools
Static Analysis Tools
A Static Analyzer is used to parse the Source Code & identify errors, such as
Missing branches and links
Non-returnable “performs”
Variables not initialized
Standards not met
Unreachable code
Will also find any errors found by a Compiler
Static Analysis by tools
Data Flow Analysis
Technique to follow the path that various specific items of data take through the code of a program
Looking for possible anomalies in the way the code handles the data items
Variables used without first being created
Variables being “used” after they have been “killed”
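A minimal sketch of such a check for a flat sequence of Python assignments, using the standard `ast` module (real data flow analysis also follows branches, loops and variable kills; the function name is invented for illustration):

```python
import ast

def use_before_assign(source):
    """Report names read before any earlier statement assigns them,
    scanning a flat statement sequence in order."""
    assigned, anomalies = set(), []
    for stmt in ast.parse(source).body:
        # Names read (Load) and written (Store) within this statement
        loads = [n.id for n in ast.walk(stmt)
                 if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)]
        stores = [n.id for n in ast.walk(stmt)
                  if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)]
        for name in loads:
            if name not in assigned and name not in anomalies:
                anomalies.append(name)
        assigned.update(stores)
    return anomalies

print(use_before_assign("total = price * qty\nprice = 10\nqty = 2"))
# flags 'price' and 'qty': used before being created
```

Note that the program is never executed; the anomaly is found purely by following the definition and use of each data item through the source.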
Static Analysis by tools
Control Flow Analysis
Examines the flow of control through the code
Can identify
Infinite loops
Unreachable code
Cyclomatic complexity and other metrics
Static Analysis by tools
Control Flow Graph
[Control flow graph figure: nodes B1–B9 connected by control flow edges]
Static Analysis by tools
Metrics
Static Analyzers generate metrics based on the code
Cyclomatic Complexity
Identifies the number of decision points in a control flow
Can be calculated by counting the number of decision points in a control flow graph and adding 1
Other complexity metrics
Number of lines of code
Number of nested statements
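A rough sketch of the “decision points + 1” calculation over Python source, using the standard `ast` module; counting branching nodes directly is a simplification of the full graph-based definition:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Decision points + 1; counts branching constructs and
    boolean short-circuit operators as decisions."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):       # and / or chains
            decisions += len(node.values) - 1
    return decisions + 1

code = """
if age > 0:
    if age == 21:
        print("21st")
"""
print(cyclomatic_complexity(code))  # 2 decision points + 1 = 3
```

Straight-line code with no decisions scores 1, the minimum; each extra branch adds one independent path that tests ought to cover.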
Static Analysis by tools
Summary
Static Analysis is the inspection of program code and logic for errors
The code is analyzed without being executed by the tool
Errors are highlighted and metrics are produced
Static Analysis by tools
References:
van Veenendaal, 2004
IEEE 1028
Gilb, 1993
6. Test Design Techniques
Test Design Techniques
In this chapter we will
Understand the differences between Black & White Box testing and where they feature in a testing lifecycle
Understand how a systematic approach provides confidence
Understand how tools can be used to improve and increase productivity
6.1. Identifying test conditions and designing test cases
Test Design Techniques
The following steps are required for the successful process of identifying test conditions and designing test cases:
Designing tests by identifying test conditions
Specifying test cases
Specifying test procedures
Test Design Techniques
During test case design, the project documentation is analyzed in order to determine what to test, i.e. to identify the test conditions
During test specification the test cases are developed and described in details with the relevant test data by using test design techniques
Expected results should be produced as part of the test case specification and should include outputs, changes to data, states, etc.
6.2. Categories of test design techniques
Black & White Box Testing
Black Box testing
“Testing, either functional or non-functional, without reference to the internal structure of the component or system.”
White Box testing
“Testing based on an analysis of the internal structure of the component or system.”
Black Box Testing
Concentrates on the Business Function
Can be used throughout testing cycle
Dominates the later stages of testing
Although it is relevant throughout the development life cycle
Little / No knowledge of the underlying code needed
Black Box Testing
We have all done it!
What do you do when changing a light bulb?
Switch light / lamp off
Remove old bulb
Insert new bulb
Switch light / lamp on
White Box Testing
Testing at a detailed level
Normally used after an initial set of tests has been derived using black box test techniques
Appropriate for component testing but becomes less useful as testing moves towards system and acceptance testing
Not aimed at testing the functionality of a component or the interaction of a series of components
Needs to be planned
White Box Testing
Testing at a detailed level – cont.
A.K.A. “Glass Box testing” or “Structural testing”
Focuses on Lines of Code
Looks at specific conditions
Looks at the mechanics of the Application
Useful in the early stages of testing
White Box Testing
Requires a detailed knowledge of the code
Business Function / Process not a prime consideration
Aim to achieve code coverage, not just functional coverage
Plays a lesser role later in the testing cycle
White Box Testing
Code Coverage objective
To devise tests that exercise each significant line of program code at least once
This does not mean every combination of paths and data – which is normally unattainable
Black & White Box Testing
Techniques and measurements
Systematic techniques exist for black and white box testing
Give us a systematic approach to testing
Test cases are developed in a logical manner
Enable us to have a sub-set of tests that have a high probability of finding faults
By using defined techniques we should be able to produce corresponding measurements
Means we can see how testing is progressing
Black & White Box Testing
Software Testing Tools
Are useful for both Black & White Box Testing
Can increase productivity and quality of tests
Particularly useful for white box testing
Black & White Box Testing
Summary
Black Box testing focuses on functionality
White Box testing focuses on code
A systematic approach is needed for both
Tests need to be planned, prepared, executed and verified
Expected results need to be defined and understood
Tools can help increase productivity and quality
6.3. Specification-based or Black Box Techniques
Black Box Techniques
In this session we will
Understand what black box testing is
Look at some of the different types of black box testing
Black Box Techniques
Black box testing is
“Testing, either functional or non-functional, without reference to the internal structure of the component or system.”
Black Box Techniques
Why do we need black box techniques?
Exhaustive testing is not possible
Due to the constraints of time, money and resources
Therefore we must create a sub-set of tests
These must be achievable, but should not reduce coverage
We should also focus on areas of likely risk
Those places where mistakes may occur
Black Box Techniques
Each black box test technique has
A method i.e. how to do it
A test case design approach
How to create test cases using the approach
A measurement technique
Except Random & Syntax
See BS7925-2 for detailed information
Black Box Techniques
BS7925-2 lists all the black box test techniques
Equivalence Partitioning
Boundary Value Analysis
State Transition
Cause & Effect Graphing
Syntax Testing
Random Testing
Black Box Techniques
Equivalence Partitioning
Uses a model of the component to partition Input & Output values into sets
Such that each value within a set can be reasonably expected to be treated in the same manner
Therefore only one example of that set needs to be input to or resultant from the component
Black Box Techniques
Equivalence Partitioning – Test Case Design
Inputs to the component
Partitions exercised: Valid partitions Invalid partitions
Expected results
Black Box Techniques
Example
Enter car details
Engine size 800 – 5500cc
Year of Manufacture
Make
Identify
Valid and invalid partitions
Test cases including expected results
[Form mock-up: “Enter car details” dialog with fields Engine Size, Year of manufacture, Make, and an OK button]
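A sketch of equivalence partitioning for the engine-size input; the validator function is hypothetical, standing in for the system under test. One representative value per partition is enough:

```python
def engine_size_valid(cc: int) -> bool:
    """Hypothetical validator for the 800-5500cc engine-size field."""
    return 800 <= cc <= 5500

# One test case per partition:
assert engine_size_valid(3000)       # valid partition: 800-5500
assert not engine_size_valid(500)    # invalid partition: below 800
assert not engine_size_valid(6000)   # invalid partition: above 5500
print("one representative per partition is enough")
```

Any other value from the same partition (say 2000 instead of 3000) can reasonably be expected to be treated the same way, so testing it adds no new information.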
Black Box Techniques
Boundary Value Analysis
Uses a model of the component to identify the values close to and on partition boundaries
For input and output , valid & invalid data
Chosen specifically to exercise the boundaries of the area under test
Most useful when combined with Equivalence Partitioning
Black Box Techniques
Boundary Value Analysis – Test Case Design
Inputs to the component
Boundaries to be exercised:
Just below
On
Just above
Expected results
Black Box Techniques
Example
Enter car details Engine size 800 – 5500cc Year of Manufacture Make
Identify
Boundaries
Test cases
Test case category
Expected results
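For the engine-size field of the example, boundary value analysis selects the values just below, on, and just above each partition boundary. A sketch (the validator function is hypothetical):

```python
def engine_size_valid(cc: int) -> bool:
    """Hypothetical validator for the 800-5500cc engine-size field."""
    return 800 <= cc <= 5500

# Boundary values and their expected outcomes:
boundary_cases = {799: False, 800: True, 801: True,
                  5499: True, 5500: True, 5501: False}
for cc, expected in boundary_cases.items():
    assert engine_size_valid(cc) == expected
print("all boundary cases behave as expected")
```

A common defect is writing `<` where `<=` was intended; exactly these on-the-boundary values expose that mistake, while mid-partition values from equivalence partitioning would miss it.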
Black Box Techniques
What are the valid and invalid values for Equivalence Partitioning and Boundary Value Analysis?
Q1 To be eligible for a mortgage you must be between the ages of 18 and 64 (inclusive). The age input field will only accept two digits and will not accept minus figures (“-”)
Q2 An input field on a mortgage calculator requires a value between 15,000 and 2,000,000. The field only allows numerical values to be entered and has a maximum length of 9 digits.
Q3 The term of a mortgage can be between 5 and 30 years.
Q4 The font formatting box in a word processing package allows the user to select the size of the font – ranging from 6 point to 72 point (in 0.5 steps).
Black Box Techniques
What are the valid and invalid values for Equivalence Partitioning and Boundary Value Analysis?
Q5 A screen for entering mortgage applications requires information on both people's wages and will generate the maximum amount available for the mortgage (based on 3¼ times the larger wage, 1¼ times the smaller wage). If the mortgage is less than £250,000 then the interest rate is 4.5%; if the amount is £250,000 to £1,000,000 then the interest rate is 4%.
Q6 Personal loans of between £1,000 and £25,000. For loans between £1,000 and £10,000 there is an interest rate of 8.5%; loans between £10,001 and £25,000 have an interest rate of 8%.
Q7 A grading system takes students' marks (coursework 0 – 75 and exam 0 – 25) and generates a grade based on those marks (0 – 40 Fail, 41 – 60 C, 61 – 80 B and 81 – 100 A).
Black Box Techniques
Decision Table Testing
Uses a model of the relationship between causes and effects for the component
Each cause or effect is expressed as a Boolean condition
Represented as a Boolean graph, from which a decision table is produced
Good for:
Capturing system requirements that contain logical conditions
Documenting internal system design
Recording complex business rules that a system is to implement
Serving as a guide to creating test cases that otherwise might not be exercised
Black Box Techniques
Example
A check debit function has inputs:
Debit amount
Account type (P = postal, C = counter)
Current balance
And outputs are:
New balance
Action code
D&L = process debit and send out letter
D = process debit only
S&L = suspend account and send letter
L = send out letter only
Black Box Techniques
Example – cont
The conditions are:
C1 New balance in credit
C2 New balance overdraft, but within authorized limit
C3 Account is postal
The actions are:
A1 Process debit
A2 Suspend account
A3 Send out letter
[Boolean graph figure linking conditions C1–C3 to actions A1–A3]
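The conditions and actions above can be folded into code. The mapping of condition combinations to action codes below is our inference from the listed codes (D, D&L, S&L, L), not stated verbatim on the slide, so treat it as one plausible reading of the decision table:

```python
def check_debit(in_credit: bool, within_limit: bool, postal: bool) -> str:
    """Return the action code for one column of the decision table.
    Assumed rules: in-credit debits are simply processed; authorized
    overdrafts are processed (with a letter for postal accounts);
    unauthorized overdrafts suspend postal accounts or trigger a letter."""
    if in_credit:                        # C1
        return "D"                       # A1 only
    if within_limit:                     # C2
        return "D&L" if postal else "D"  # A1, plus A3 for postal (C3)
    return "S&L" if postal else "L"      # over limit: A2+A3 or A3

print(check_debit(in_credit=False, within_limit=True, postal=True))  # D&L
```

Each column of the decision table becomes one test case: fix the three Boolean conditions, then check the returned action code against the table.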
Black Box Techniques
State transition
Uses analysis of the specification of the component to model its behavior by state transitions
Identifies
The various states that the software may be in
The transitions between these states
The events that cause the transitions
Actions resulting from the transitions
Black Box Techniques
State Transition – Test Case Design
Identify possible transitions
Identify initial state and the data / events that will trigger the transition
Identify the expected results
Black Box Techniques
Example
Watch example – states and transitions:
Displaying Time (S1) – Change mode (CM) → Displaying Date (S2), display date (D)
Displaying Date (S2) – Change mode (CM) → Displaying Time (S1), display time (T)
Displaying Time (S1) – Reset (R) → Changing Time (S3), alter time (AT)
Changing Time (S3) – Time set (TS) → Displaying Time (S1), display time (T)
Displaying Date (S2) – Reset (R) → Changing Date (S4), alter date (AD)
Changing Date (S4) – Date set (DS) → Displaying Date (S2), display date (D)
Test Case 1 2 3 4 5 6
Start State S1 S1 S3 S2 S2 S4
Input CM R TS CM R DS
Expected Output D AT T T AD D
Finish State S2 S3 S1 S1 S4 S2
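The state diagram and test case table above can be encoded as a transition table; a sketch in Python:

```python
# (state, event) -> (next state, output), from the watch example above
TRANSITIONS = {
    ("S1", "CM"): ("S2", "D"),   # Displaying Time -> Displaying Date
    ("S2", "CM"): ("S1", "T"),   # Displaying Date -> Displaying Time
    ("S1", "R"):  ("S3", "AT"),  # Reset: start changing the time
    ("S3", "TS"): ("S1", "T"),   # Time set: back to displaying time
    ("S2", "R"):  ("S4", "AD"),  # Reset: start changing the date
    ("S4", "DS"): ("S2", "D"),   # Date set: back to displaying date
}

def step(state: str, event: str):
    """Apply one event; raises KeyError for an undefined transition."""
    return TRANSITIONS[(state, event)]

# Test case 2 from the table: start S1, input R -> output AT, finish S3
assert step("S1", "R") == ("S3", "AT")
```

Each row of the test case table exercises one valid transition; events with no entry in the table (e.g. TS while in S1) are candidates for negative tests of invalid transitions.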
Black Box Techniques
Other techniques
Decision Table Testing (Cause & Effect Tabling)
Analysis of the specification to produce a model of the relationship between causes and their effects
Cases expressed as Boolean conditions in the form of a decision table
Use Case Testing
The tests are designed based on business activities as opposed to project requirements
Syntax Testing
Uses the syntax of the components’ inputs as the basis for the design of the test cases
Random Testing
Means of testing functionality with a randomly selected set of values
Black Box Techniques
Summary
Black box testing concentrates on testing the features of the system
Techniques enable us to maximize testing
Create an achievable set of tests that offer maximum coverage
Ensure possible areas of risk are tested
Black box testing is relevant throughout the testing process
6.4. Structured-based or White Box Techniques
White Box Techniques
In this session we will
Understand what White Box testing is
Look at some of the different types of White Box testing
White Box Techniques
White Box testing
“Testing based on an analysis of the internal structure of the component or system ”
Also known as Glass Box testing, Clear Box testing
White Box Techniques
Why do we need White Box techniques?
Provide formal structure to testing code
Enable us to measure how much of a component has been tested
Example: a component of fewer than 100 lines of code, containing a loop that may iterate up to 20 times, can have 100,000,000,000,000 possible paths
At 1,000 tests per second it would still take 3,170 years to test all paths
White Box Techniques
To plan and design effective test cases requires knowledge of the
Programming language used
Databases used
Operating system(s) used
And ideally knowledge of the code itself
White Box Techniques
BS7925-2 lists all the white box test techniques
Statement Testing
Branch / Decision Testing
Branch Condition Testing
Branch Condition Combination Testing
Modified Condition Decision Testing
Linear Code Sequence & Jump
Data Flow Testing
White Box Techniques
Statement Testing
“A white box test design technique in which test cases are designed to execute statements.”
Test cases are designed and run with the intention of executing every statement in a component
White Box Techniques
Statement Testing example
a;
if (b) c;
d;
Any test case with b TRUE will achieve full statement coverage
NOTE: Full statement coverage can be achieved without exercising with b FALSE
White Box Techniques
Branch / Decision testing
A technique used to execute all branches the code may take, based on the decisions made
Test Cases designed to ensure all branches & decision points are covered
White Box Techniques
Branch / Decision testing example
a;
if (b)
c;
d;
100% statement coverage requires 1 test case (b = true)
100% branch / decision coverage requires 2 test cases
(b = true & b = false)
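The statement vs branch distinction can be made concrete by instrumenting the slide's fragment; the names `component` and `executed` are ours:

```python
# The fragment a; if (b) c; d; instrumented to record executed statements.
executed = set()

def component(b: bool) -> None:
    executed.add("a")       # a;
    if b:
        executed.add("c")   # c;
    executed.add("d")       # d;

component(True)
print(executed == {"a", "c", "d"})  # one test gives 100% statement coverage

# Branch / decision coverage additionally needs the false outcome:
component(False)  # second test case exercises the b == False path
```

With only `b = True`, every statement has run, yet the empty false branch was never taken; that is exactly why 100% statement coverage does not imply 100% branch coverage.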
White Box Techniques
Exercises Q1
How many tests are required to achieve:
100% statement coverage
100% branch coverage

enter user ID
IF user ID is valid THEN
   display “enter password”
   IF password is valid THEN
      display account screen
   ELSE
      display “wrong password”
   END IF
ELSE
   display “wrong ID”
END IF
display time & date
White Box Techniques
Exercises Q2
How many tests are required to achieve:
100% statement coverage
100% branch coverage

READ AGE
IF AGE > 0 THEN
   IF AGE = 21 THEN
      PRINT “21st”
   ENDIF
ENDIF
PRINT AGE
White Box Techniques
Exercises Q3
How many tests are required to achieve:
100% statement coverage
100% branch coverage

READ AGE
READ BIRTHDAY
IF AGE > 0 THEN
   IF BIRTHDAY = 0 THEN
      PRINT “No values”
   ELSE
      PRINT BIRTHDAY
      IF AGE > 21 THEN
         PRINT AGE
      ENDIF
   ENDIF
ENDIF
READ BIRTHMONTH
White Box Techniques
Exercises Q4
How many tests are required to achieve:
100% statement coverage
100% branch coverage

READ HUSBANDAGE
READ WIFEAGE
IF HUSBANDAGE > 65
   PRINT “Husband not retired”
ENDIF
IF WIFEAGE > 65
   PRINT “Wife retired”
ELSE
   PRINT “Wife not retired”
ENDIF
White Box Techniques
Exercises Q5 and Q6
How many tests are required to achieve:
100% statement coverage
100% branch coverage

Exercise Q5
READ HUSBANDAGE
READ WIFEAGE
IF HUSBANDAGE > 65
   PRINT “Husband retired”
ENDIF
IF WIFEAGE > 65
   PRINT “Wife retired”
ENDIF

Exercise Q6
READ HUSBANDAGE
IF HUSBANDAGE < 65
   PRINT “Below 65”
ENDIF
IF HUSBANDAGE > 64
   PRINT “Above 64”
ENDIF
White Box Techniques
Other techniques
Branch Condition testing, Branch Condition Combination testing & Modified Condition Decision testing
Condition testing based upon an analysis of the conditional control flow within the component
LCSAJ
Linear Code Sequence And Jump
Data flow testing
Tests the flow of data through a component
White Box Techniques
Summary
White box testing can be done immediately after code is written
Doesn’t need the complete system
Does need knowledge of the code
A combination of techniques is required for a successful test
Don’t rely on just one technique
6.5. Experience-based Techniques
6.5.1. Error Guessing
Error Guessing
In this session we will
Understand how you can use experience to ‘predict’ where errors may occur
Understand how this can benefit testing
Error Guessing
What is Error Guessing?
“A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.”
Error Guessing
Used to guess where errors may be lurking
Based on information about how the system has been put together and previous experience of testing it
Use to complement more systematic techniques, not instead of them
Not ad-hoc testing, but testing that targets certain parts of the application
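As a toy illustration, an error-guessing pass might throw historically troublesome values at a numeric input; the `parse_quantity` function and the chosen inputs are invented for this example, not from any slide:

```python
def parse_quantity(text: str) -> int:
    """Hypothetical input handler: accepts a non-negative integer."""
    value = int(text)            # raises ValueError on empty/garbage input
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

# Values experience suggests are often mishandled:
for raw in ["0", "-1", "", " 7 ", "007", str(2**31)]:
    try:
        print(repr(raw), "->", parse_quantity(raw))
    except ValueError as exc:
        print(repr(raw), "-> rejected:", exc)
```

The list itself is the technique: zero, negatives, empty input, surrounding whitespace, leading zeros and very large numbers are all guesses about where past systems have gone wrong, run alongside (not instead of) the systematic test cases.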
Error Guessing
Based on the user's or tester's experience of the commercial aspects of the system
Table based calculations such as calculating benefits payments
Where the user is allowed a high degree of flexibility in GUI navigation using multiple windows
Error Guessing
Experience of the developers or the development cycle
Knowledge of an individual's “style”
Knowledge of the development lifecycle and especially the change management process
Error Guessing
Experience of the operating system used
It is known that Windows NT is better at storage management than Windows 95
Error Guessing
Should still be planned
Is part of the test process
Should be used as a supplement to systematic techniques
Ad-hoc testing can be encouraged if
The tester maintains a log of those tests performed & is able to reasonably recollect what actions were performed
If you have the time to run an extra (ad-hoc) test that passes, there is little loss
If it finds a defect then that’s a bonus
6.5.2. Exploratory Testing
Exploratory Testing
Can be a powerful approach to testing
Sometimes is much more productive than testing using scripts
Involves tacit constraints concerning which parts of the product are to be tested, or what strategies are to be used
Still, these are not test scripts
Every test is influenced by the result of the previous one
6.6. Choosing Test Techniques
Choosing Test Techniques
The choice of a test technique may be made based on:
Regulatory standards
Experience and / or
Customer / contractual requirements
Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels
Choosing Test Techniques
Selection factors
The following selection factors are important:
Risk level and risk type (thoroughness)
Test objective
Documentation available
Knowledge of the test engineers
Time and budget
Supporting tools – for design / regression testing
Choosing Test Techniques
Selection factors
Product life cycle status
Previous experiences
Measurements on test effectiveness
International standards – many times they are industry specific
Customer / contractual requirements
Auditability / traceability
A lot of information and skills are necessary!
Choosing Test Techniques
Summary
You can never do enough testing!
The more tests that you can run, the more errors you may find
Error guessing can help ensure that areas where defects are likely to occur are fully tested
Uses experience and gut feeling to supplement systematic testing techniques
Exploratory testing is most useful where there are few or inadequate specifications, or severe time pressure
Choosing Test Techniques
References:
Craig, 2002, Hetzel, 1998, IEEE 829
Copeland, 2004, Myers, 1979
Kaner, 2002
Beizer, 1990
7. Test Management
7.1. Test Organization
Test Organization
In this session we will
Gain an appreciation of the many different testing structures in place
Understand that it is likely that every organization will have a unique testing set-up
Look at some of the more common, general set-ups and appreciate the impact they may have on the success of the testing carried out
Test Organization
Testing may be the responsibility of the individual developers
Developers are only responsible for testing their own work
Goes against the ideal that testing ought to be independent
Tends to confirm the system works rather than detect errors
Test Organization
Testing may be the responsibility of the developer team
A “buddy system”
Each developer is responsible for testing his / her colleagues' work
Test Organization
Each team has a single tester
Is not involved in the actual development, solely concentrates on testing
Able to work with the team throughout the development process
May be too close to the team to be objective
Test Organization
Dedicated test teams may be in place
These do no development whatsoever
Take a more objective view of the system under test
Test Organization
Internal Test Consultants
Provide testing advice to project teams
Involvement throughout the lifecycle
Test Organization
Testing outsourced to specialist agencies
Guarantees independence
Empathy with the business of the organization?
Test Organization - Main Tasks
Multi-disciplined teams needed to cover
Production of testing strategies
Creation of test plans
Production of testing scripts
Execution of testing scripts
Test asset management
Reviewing & reporting
Quality assurance
Logging results
Executing necessary retests
Automation expertise
User interface expertise
Project management
Technical support
Database admin, environment admin
Test Organization
Testing teams will typically include:
Test analysts to prepare, execute and analyze tests
Test consultants to prepare strategies and plans
Test automation experts
Load & performance experts
Database administrator or designer
Test managers / team leaders
Test environment managers
And others…
Test Organization – Detailed Tasks
The various tasks and activities that testers might perform include:
Review and contribute to test plans
Analyze, review and assess user requirements, specifications and models for testability
Create test specifications
Set up the test environment (often coordinating with system administration and network management)
Prepare and acquire test data
Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results
Use test administration or management tools and test monitoring tools as required
Automate tests (may be supported by a developer or a test automation expert)
Measure performance of components and systems (if applicable)
Review tests developed by others
Test Organization – Personnel Qualification
Besides technical skills and knowledge, testers should:
Be socially competent and good team players
Have political and diplomatic skills
Be assertive, confident, exact, creative and analytical
Be willing and ready to query in depth
Quickly familiarize themselves with complex implementations and applications
Test Organization – Tasks of the Test Manager
Test manager = engine and mind of the testing effort
Estimate the time and cost of testing
Determine the needed personnel skills
Plan for and acquire the necessary resources.
Negotiate and set test completion criteria.
Define which test cases are to be carried out, by which tester, in which sequence and at which point in time
Test Organization – Tasks of the Test Manager
Test manager (continued)
Adapt the test plan to the results and progress of the testing
Introduce suitable compensatory measures
Report test results and progress / receive reports
Decide when the tests can be completed based on test completion criteria
Introduce and use metrics
Test Organization
Summary
Testing structure will vary between organizations
Testing teams require a range of skills
7.2. Test Planning and Estimation
Test Planning and Estimation
In this session we will
Understand how we estimate how long we need for testing
Understand how we monitor the progress of testing once it has started
Understand what steps we take to ensure that the effort progresses as smoothly as possible
7.2.1. Test Planning
Test Planning and Estimation
In this session we will
Look at how a test plan is put together
Understand how it should be used and maintained
Understand why they are so important to a testing project
Test Planning and Estimation
What is a test plan?
“A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and test measurement techniques to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.”
A project plan for testing
Covering all aspects of testing
A “living” document that should change as testing progresses
Test Planning and Estimation
Before you plan
Accord with the QA Plan
A test strategy in place
Put down the test concept
Test Planning and Estimation
Test plan identifier
Introduction
Test items
Features to be tested
Features not to be tested
Approach
Item pass / fail criteria
Suspension criteria & resumption criteria
Test deliverables
Testing tasks
Environment
Responsibilities
Staffing and training needs
Schedules
Risks and contingencies
Approvals
Based upon IEEE 829-1998 standard for software test documentation
Test Planning and Estimation
Test Plan Identifier
A unique identifier for the test plan
Test Planning and Estimation
Introduction
Overview of the plan
A summary of the requirements
Discuss what needs to be achieved
Detail why testing is needed
Reference to other documents
Quality assurance and configuration management
Test Planning and Estimation
Test Items
The software items to be tested
Their version numbers / identifiers
How they will be handed over to testing
References to relevant documents
Test Planning and Estimation
Features to Be Tested
List all features of the SUT that will be tested under this plan
Test Planning and Estimation
Features Not to Be Tested
List all features of the SUT that will not be tested under this plan
Between the test items, features to be tested and features not to be tested, we have the scope of the project
Test Planning and Estimation
Approach
Describes the approach to testing the SUT
This should be high level, but sufficient to estimate the time and resources required
What this approach will achieve
Specify major activities
Testing techniques
Testing tools / aids
Constraints to testing
Support required – environment & staffing
Test Planning and Estimation
Item Pass / Fail Criteria
How to judge whether a test item has passed
Expected vs. actual results
Certain % of tests pass
Number of faults remaining (known and estimated)
Should be defined for each test item
Test Planning and Estimation
Suspension criteria & resumption requirements
Reasons that would cause testing to be suspended
Steps necessary to resume testing
Test Planning and Estimation
Test deliverables
Everything that goes to make up the tests
All documentation – specification, test plans, procedures, reports
Data
Testing tools – Test management tools, automation tools, Excel, Word etc
Test systems, Manual and automated test cases
Test Planning and Estimation
Testing Tasks
Preparation to perform testing
Test case identification
Test case design
Test data storage
Baseline application
Special skills needed
Spreadsheet skills, test analysis, automation etc
Intertask dependencies
Test Planning and Estimation
Environment
Requirements for test environment
Hardware & software
PCs, servers, routers etc
SUT, interfaced applications, databases
Configuration
May be operating systems or middleware to test against
Facilities
Office space, desks, internet access
Test Planning and Estimation
Responsibilities
Who is responsible?
For which activities
For which deliverables
For the environment
Test Planning and Estimation
Staffing and Training Needs
Staff required
Test managers, team leaders, testers, test analysts
Skill levels required
Automation skills, spreadsheet skills
Training requirements
Tool-specific training, refresher courses
Test Planning and Estimation
Schedule
Timescales, dates and milestones
Resources required to meet milestones
Availability of software and environment
Deliverables
Test Planning and Estimation
Risks and Contingencies
What might go wrong?
Actions for minimizing impact on testing should things go wrong
Test Planning and Estimation
Approvals
Who has approved the test plan
Names and dates of approval
Why is it so important
Evidence that the document has been viewed
Shows that the approach has been agreed and has the backing of those who matter
You have commitment, now make them stick to it!
Test Planning and Estimation
Exit Criteria
Should be used to determine whether the software is ready to be released
There will never be enough time, money or resources to test everything, so testers must focus on business-critical functions
Some examples of test completion criteria
Has key functionality been tested?
Has test coverage been achieved?
Has the budget been used?
What is the defect detection rate?
Is performance acceptable?
Are any defects outstanding?
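A toy evaluation of such exit criteria as code; the specific thresholds are assumptions for illustration, not ISTQB-mandated values:

```python
# Sketch: combine a few exit criteria into a single release decision.
def ready_to_release(passed: int, failed: int, coverage: float,
                     open_critical_defects: int) -> bool:
    pass_rate = passed / (passed + failed)
    return (pass_rate >= 0.95                 # certain % of tests pass
            and coverage >= 0.80              # required coverage achieved
            and open_critical_defects == 0)   # no critical defects outstanding

print(ready_to_release(passed=190, failed=10, coverage=0.85,
                       open_critical_defects=0))  # 0.95 pass rate -> True
```

The value of writing the criteria down this explicitly is that the release decision becomes a measurement, agreed in the test plan, rather than a judgment made under deadline pressure.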
Test Planning and Estimation
Approaches differ based on when the intensive test design work begins
Preventative – tests are designed as early as possible
Reactive – test design comes after the software or system has been produced
Test Planning and Estimation
There are some specific approaches
Can be combined or used separately
Model-based
Methodical
Process
Dynamic and heuristic
Consultative
Regression-averse
7.2.4. Test Estimation
Test Planning and Estimation
Test estimation
The same as estimating for any project
We need to identify the number of tasks (tests) to be performed, the length of each test, the skills and resources required and the various dependencies
Testing has a high degree of dependency
You cannot test something until it has been delivered
Faults found need to be fixed and re-tested
The environment must be available whenever a test is to be run
Test Planning and Estimation
Factors to Consider
Risk of Failure
Complexity of Code / Solution
Criticality
Coverage
Stability Of SUT
Test Planning and Estimation
Testing is a tool to aid Risk Management
Consider the cost to the business of failure of a feature
Test the most important features as soon as possible
Be prepared to repeat these tests
Test Planning and Estimation
Consider the complexity of the supporting code
Estimate & plan tests for these ASAP
These are likely to be re-run many times, allow for this in plans
Test Planning and Estimation
Data is critical to testing
You can in certain situations load the data for testing & test in one pass
These tests will be run first but only once… unless…
The testing environment may need to be re-set before a test can be re-run
Test Planning and Estimation
Use previous experience or best estimates to predict the reliability of components
Plan to run tests for the unreliable elements sooner rather than later
Test Planning and Estimation
Allow time for re-testing
Time must be included for identifying, investigating & logging faults
Faults that have been “fixed” must be re-tested
Whenever a change is made to the SUT or the environment a regression test should be run
How do we build these factors into the estimation?
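One simple answer is to treat each factor as a multiplier on a base estimate. The sketch below is an assumption-laden illustration, not a syllabus formula: the base hours and the specific scaling are invented for the example, and low SUT stability increases effort because unstable systems force more re-runs.

```python
# Illustrative sketch: folding the factors above (risk, complexity,
# criticality, SUT stability) into a single effort figure.
# Base hours and weightings are assumptions for the example.

def estimate_effort(base_hours, risk, complexity, criticality, stability):
    """Scale a base effort estimate by factor scores (1.0 = neutral).

    Stability below 1.0 inflates effort, since unstable systems
    mean more fault investigation and re-testing.
    """
    return base_hours * risk * complexity * criticality * (2 - stability)

# 100 base hours, elevated risk and complexity, stable SUT
hours = estimate_effort(100, risk=1.2, complexity=1.5,
                        criticality=1.0, stability=1.0)
```

In practice the multipliers would be calibrated from previous projects, which is why the slides stress using past experience.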
Test Planning and Estimation
Summary
Test plans are project plans for testing
They identify why testing is needed, what will be tested, the scope of this phase of testing, what deliverables testing will provide and what is required to enable testing to succeed
They are “living” documents that must evolve as the project progresses
Basics of Software TestingISTQB Certified Tester, Foundation Level
7.3. Test Progress Monitoring and Control
Test Progress Monitoring and Control
Test preparation
Estimate number of tests needed
Estimate time to prepare
Refine these figures
Track percentage of tests prepared
[Chart: percentage of tests prepared and implemented over time, tracked against "fit for purpose" and "test criteria passes"]
Test Progress Monitoring and Control
Why monitor
The purpose of test monitoring is to give feedback and visibility about test activities
Information may be collected manually or automatically
Test Execution
Assess the coverage achieved to date by testing
Coverage – amount of tests completed, amount of code exercised by tests, amount of features tested so far
Estimate time required to run the test suite
Test Progress Monitoring and Control
Whilst testing, you can monitor
The number of tests run
The number of tests passed & failed
The number of defects raised
These can be categorized by Severity, Priority & Probability
The number of re-tests – that pass & that fail
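The figures above fall out of a simple test log. A minimal sketch, with field names assumed for the example:

```python
# Illustrative sketch: computing the monitoring figures above from a
# list of test outcomes. Field names are assumed for the example.

def progress_summary(results):
    """Summarize run/pass/fail counts and the pass rate."""
    run = len(results)
    passed = sum(1 for r in results if r == "pass")
    failed = run - passed
    pass_rate = passed / run if run else 0.0
    return {"run": run, "passed": passed, "failed": failed,
            "pass_rate": pass_rate}

summary = progress_summary(["pass", "pass", "fail", "pass"])
```

The same structure extends naturally to re-test counts and defect severities by tagging each entry with more fields.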
Test Progress Monitoring and Control
The status of the project should be regularly reported
Any deviations from the schedule raised ASAP
Any critical faults found should be raised immediately
Test Progress Monitoring and Control
Describes any guiding or corrective actions taken as a result of information and metrics gathered and reported
Controlling measures
Assign extra resource
Re-allocate resource
Adjust the test schedule
Arrange for extra test environments
Refine the completion criteria
Test Progress Monitoring and Control
Test Reporting – concerned with summarizing information about the testing
What happened during a period of testing
Analyzed information and metrics supporting future actions
According to the ‘Standard for Software Test Documentation’ (IEEE 829), reporting has a defined structure
Test Progress Monitoring and Control
Test reporting – a test summary report shall have the following structure:
Test summary report identifier
Summary
Variances
Comprehensive assessment
Summary of results
Evaluation
Summary of activities
Approvals
Test Progress Monitoring and Control
Summary
Multiple factors must be considered when estimating the length of time we need to perform testing
Once testing has started it is necessary to monitor the situation as it progresses
Careful control must be kept to ensure project success
Basics of Software TestingISTQB Certified Tester, Foundation Level
7.4. Configuration Management
Configuration Management
In this session we will
Gain an understanding of Configuration Management
Look at the potential impact of poor Configuration Management on testing
And understand how introducing Configuration Management helps address those problems
Configuration Management
What is Configuration Management
Generally, ensuring that no-one is able to change a part of the system without proper procedures being in place
Configuration Management
The IEEE definition of Configuration Management
A discipline applying technical and administrative direction and surveillance to:
Identify and document the functional and physical characteristics of a configuration item,
Control changes to those characteristics,
Record and report change processing and implementation status, and
Verify compliance with specified requirements.
Configuration Management
The BS 6488 Code of Practice for Configuration Management
Configuration Management identifies in detail
The total configuration (i.e. hardware, firmware, software, services and supplies) current at any time in the life cycle of each system to which it is applied,
Together with any changes or enhancements that are proposed or are in course of being implemented
It provides traceability of changes through the lifecycle of each system and across associated systems or groups of systems.
It therefore permits the retrospective reconstruction of a system whenever necessary.
Configuration Management
Closely linked to Version Control
Version Control looks at each component
Holds the latest version of each component
What versions of components work with others in a configuration
Configuration Management
Without it, testing can be seriously impeded
Developers may not be able to match source to object code
Simultaneous changes may be made to the same source
Unable to identify source code changes made in a particular version of the software
Configuration Management
Critical to successful testing
It associates programs with specifications, with test plans, with test data
When a component is successfully tested test results can be included
Configuration Management
It enables you to understand what versions of components work with each other
It allows you to understand the relationship between test cases, specifications, and components
Configuration Management
If any configuration component is out of step – Results are unpredictable
This applies throughout the project lifecycle and beyond
Configuration Management
Everything intended to last must be subject to Configuration Management
Anything that isn’t, shouldn’t exist as part of the product
Configuration Management
Other Configuration Management terms
Configuration identification
Selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation
That is, all Configuration Items and their versions are known
Configuration Management
Other Configuration Management terms
Configuration control
Evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.
That is, Configuration Items are kept in a library and records are maintained on how Configuration Items change over time
Configuration Management
Other configuration management terms
Status accounting
Recording and reporting of information needed to manage a configuration effectively. This information includes:
A listing of the approved configuration identification
The status of proposed changes to the configuration, and
The implementation status of the approved changes.
That is, all actions on Configuration Items are recorded and reported on
Configuration Management
Other Configuration Management terms
Configuration auditing
The function to check on the contents of libraries of configuration items for standards compliance
Configuration Management
Summary
Configuration management enables us to store all information on a system, provides traceability and enables reconstruction
Configuration Management is a necessary part of any system development
All assets must be known and controlled
Basics of Software TestingISTQB Certified Tester, Foundation Level
7.5. Risk and Testing
Risk and Testing
What is a risk:
Can be defined as the chance of an event, hazard, threat or a potential problem
The harm resulting from that event determines the level of the risk
When analyzing, managing and mitigating risks, the test manager follows well established project management principles
Risk and Testing
Project Risks
Relate to the project’s capability to deliver its objectives:
Supplier issues: failure of the third party, contractual issues
Organizational factors:
Skill and staff shortages
Personnel and training issues
Political issues
Technical issues:
Problems in defining the right requirements
The extent that requirements can be met given existing constraints
The quality of the design, code and tests
Risk and Testing
Product Risks and risk management activities
Potential failure areas in the software or system
They are a risk to the quality of the product
Risk management activities provide a disciplined approach to:
Assess what can go wrong (risks)
Determine what risks are important to deal with
Implement actions to deal with those risks
Identify new risks
Risk and Testing
Risk analysis – used to maximize the effectiveness of the overall testing process
A risk factor should be allocated to each function in order to differentiate between
Critical functions
Required functions
Risk and Testing
Critical functions – must be fully tested and available as soon as the changes go live. The cost to the business is high if these functions are unavailable for any reason
Required functions – not absolutely critical to the business. Usually possible to find adequate methods to ‘work around’ these problems using other mechanisms
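Allocating a risk factor to each function, as described above, can be as simple as multiplying likelihood by impact and testing in descending order. The functions and scores below are invented examples:

```python
# Illustrative sketch: ordering functions for test by a simple risk
# factor (likelihood x impact). Names and scores are assumed examples.

functions = [
    {"name": "process payment", "likelihood": 3, "impact": 5},  # critical
    {"name": "export report",   "likelihood": 2, "impact": 2},  # required
    {"name": "user login",      "likelihood": 4, "impact": 5},  # critical
]

for f in functions:
    f["risk"] = f["likelihood"] * f["impact"]

# Test the highest-risk (critical) functions first
test_order = sorted(functions, key=lambda f: f["risk"], reverse=True)
```

The ordering puts the critical functions first, matching the advice to test the most important features as soon as possible.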
Basics of Software TestingISTQB Certified Tester, Foundation Level
7.6. Incident Management
Incident Management
In this session we shall
Understand what an “incident” is
Understand the impact they have on the testing process
Understand how they are logged, and tracked
Understand why & how they should be analyzed to help ensure they don’t happen again
Incident Management
An ‘incident’ is when testing cannot progress as planned
“any significant, unplanned event that occurs during testing that requires subsequent investigation and / or correction”
They may be raised against documentation, the System Under Test, the test environment or the tests themselves
Incident Management
The incident needs to be viewed in terms of its impact on testing
As testing progresses through its lifecycle priorities can change
The progress of the testing activity can be monitored by analyzing incidents
Incident Management
Top Priority must be given to any issues that prevent testing from progressing
At the second level we need to assign those that can be worked around
In the lower categories we log what are minor issues & questions
Incident Management
As project deadlines get nearer what is considered high priority may well change
The emphasis will move towards the product and what will compromise its success
Incident Management
When logging incidents testers should include
Test ID, System ID, Testers ID
Expected and Actual results
Environment in use
Severity & priority of the incident
Incident Management
Service levels should be set up for each category of incident
Progress monitored accordingly
Involve Development, Testers and Users
Issue regular status reports
Incident Management
STATUS should also be tracked
This ensures that we are able to swiftly establish the situation with an incident
Incident Management
A typical status flow is
When first raised, an incident is OPEN
It then passes to development to fix – WITH DEVELOPMENT
Development may question the incident – PENDING
Or fix the problem and return it for – RETEST
Should the retest prove successful the defect is CLOSED
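The status flow above is a small state machine. A sketch of how an incident tracker might enforce it; the dictionary encoding is an assumption, and the failed-retest and resolved-question transitions are inferred from the flow:

```python
# Illustrative sketch: the incident status flow as a state machine.
# Status names follow the slide; the encoding is an assumption.

TRANSITIONS = {
    "OPEN": ["WITH DEVELOPMENT"],
    "WITH DEVELOPMENT": ["PENDING", "RETEST"],
    "PENDING": ["WITH DEVELOPMENT"],           # question resolved
    "RETEST": ["CLOSED", "WITH DEVELOPMENT"],  # retest passes or fails
    "CLOSED": [],
}

def advance(current, new):
    """Allow only the transitions defined above."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"cannot move incident from {current} to {new}")
    return new

status = "OPEN"
status = advance(status, "WITH DEVELOPMENT")
status = advance(status, "RETEST")
status = advance(status, "CLOSED")
```

Rejecting illegal moves (e.g. OPEN straight to CLOSED) is what makes the status trustworthy when establishing the situation with an incident.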
Incident Management
Analyzing incidents
Incidents may be analyzed to help monitor the test process
And to aid in improvement of the test process
Incident Management
Summary
An incident is “any significant, unplanned event that occurs during testing that requires subsequent investigation and / or correction”
Incidents should be recorded and tracked
Analysis of incidents will enable us to see where problems arose and to aid in test process improvement
Basics of Software Testing ISTQB Certified Tester, Foundation Level
Cost and Economic Aspects
Cost and Economic Aspects
In this subchapter we will
Look at the cost of testing (or the cost of not testing)
Appreciate the value of good, timely testing
Cost and Economic Aspects
The cost of defects that have remained undetected
Direct expenses for the client
Loss of data, interruption of transactions, wrong orders or equipment damage lead to direct expenses
Indirect costs due to turnover losses
As employees or customers are dissatisfied with a product they will leave
Further cost is added for correction and retesting
Cost and Economic Aspects
The cost of testing depends on:
Maturity of the development and test processes,
Testability of the software,
Personnel qualification,
Quality targets,
Test strategy applied.
Cost and Economic Aspects
Testing – cheaper than defects
Look for optimal balance between test cost, resources, and expected failure costs
In most cases the costs of testing should stay below the costs incurred if faults or deficiencies remain in the end product.
Cost and Economic Aspects
The earlier a fault is found, the cheaper it is to remedy
Errors in the requirements can lead to major re-engineering of the entire system
Many errors can be found using reviewsReviews of documentation and / or code
Allows testing to find more ‘substantial’ faults
Cost and Economic Aspects
The average cost of fixing a fault increases tenfold with every step of the development process
In code reviews you will find and fix errors in an average of 1 – 2 minutes
In initial testing, fault fix times will average between 10 and 20 minutes
In integration testing each fault can cost an hour or more
In system test each fault can cost 10 to 40 or more engineer hours
Watts Humphrey
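The tenfold rule quoted above is easy to work through. Starting from an assumed 2-minute fix in code review and multiplying by ten at each later stage:

```python
# Illustrative sketch: the tenfold cost escalation described above,
# starting from an assumed 2-minute fix found in code review.

review_fix_minutes = 2
stages = ["code review", "initial testing",
          "integration testing", "system testing"]

# cost multiplies by roughly 10 at each later stage
costs = {stage: review_fix_minutes * 10 ** i
         for i, stage in enumerate(stages)}
# code review: 2 min, initial testing: 20 min,
# integration testing: 200 min, system testing: 2000 min (~33 hours)
```

The 2000-minute figure at system test (about 33 engineer hours) lands squarely in the 10 to 40 hour range quoted in the slide, which is the point of the rule: catch faults as early as possible.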
Cost and Economic Aspects
Inspection and testing should begin as early as possible and be used in every phase of the project
Early test design can prevent fault multiplication
Analysis of specification during test preparation often brings errors in the specification to light
If errors in documentation are not found then the system may be developed incorrectly
Cost and Economic Aspects
The cost of not Testing
What is the cost of testing?
Which is cheaper?Testing and finding faults before releaseNot testing and finding faults once the system is live
Companies rarely have figures for either
Cost and Economic Aspects
Summary
The purpose of testing is to find faults
The earlier a fault is found, the cheaper (and easier) it is to fix
Starting testing early will help find faults quicker
Basics of Software TestingISTQB Certified Tester, Foundation Level
Standards for Testing
Standards for Testing
In this subchapter we will
Understand the various standards, norms, guidelines, and practices that may influence testing
Standards for Testing
A variety of sources for norms, standards, and guidelines for testing
Company standards, policies, and practices
Quality Assurance standards
Industry- or branch-specific standards
Software testing standards
Standards for Testing
Company standards, policies, and best practices
Developed in-house or
Borrowed from industry or branch
Written or verbal
Standards for Testing
Quality Assurance standards
Only specify that testing should be done
E.g. ISO 9000-3:1991
Quality management and quality assurance standards – Part 3: Guidelines for the application of ISO 9001 to the development, supply and maintenance of software
Standards for Testing
Industry- or branch-specific standard
Specify what level of testing to perform
E.g. railway, medical, insurance
ASTM F153-95 Standard Test Method for Determining the Yield of Wide Inked Computer Ribbons
Standards for Testing
Testing standards
Specify how to perform testing
E.g. BS7925-1 Software testing vocabulary
BS7925-2 Software component testing
IEEE 829 Standard for Test Documentation
Standards for Testing
Summary
There are a range of standards that can influence testing
These come from different sources and affect different areas of testing
Various industries have legal requirements that will influence testing
Sign of development process maturity
Can bring legal or commercial benefits
Standards for Testing
References
Black, 2001; Craig, 2002
Hetzel, 1998
IEEE 829; Kaner, 2002
Basics of Software TestingISTQB Certified Tester, Foundation Level
8. Tool support for testing8.1. Testing Tool Classification
Testing Tool Classification
In this session we will
Understand that help is at hand in the form of CAST Tools
CAST – Computer Aided Software Testing
Look at the range of tools available in the market today
Identify the areas in which tools can best help the testing process
Testing Tool Classification
Techniques as Test Tools
Products (CAST Tools)
To automate the most repetitious and time consuming tasks
Tools are able to do some things that people cannot easily do
Testing Tool Classification
Tool support for management of testing and tests:
Test management tools
Requirements testing tools
Requirements management tools
Requirements testing tools
Problem reporting and tracking
Incident management tools
Configuration management tools
Testing Tool Classification
Tool support for static testing:
Review process support tools
May store information about review processes
Provide aid for online reviews
Static Analysis Tools – purposes
The enforcement of coding standards
The analysis of structures and dependencies
Aiding in understanding the code
Modelling tools
Testing Tool Classification
Tool support for test specification:
Test design tools (definition)
A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. requirements management tool, or from specified test conditions held in the tool itself.
Test data preparation tools (definition)
A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.
Testing Tool Classification
Tool support for test execution and logging:
Test execution tools
Simulators
Harnesses
Test comparators/robots
Security tools
Capture and replay tools = record and playback tools
Web testing tools
Coverage Measurement Tools
Testing Tool Classification
Tool support for performance and monitoring:
Dynamic analysis tools
Performance/ load/ stress testing tools
Monitoring tools
Testing Tool Classification
Tool support for specific application areas
Tools classified above can be specialized for use in a particular type of application
Commercial tool suites may target specific application areas
Testing Tool Classification
Summary
CAST tools can help automate areas of the testing process
There is a wide range of functions that can be automated
Basics of Software TestingISTQB Certified Tester, Foundation Level
8.2. Effective Use of Tools: Potential benefits and risks
Effective Use of Tools
In this session we will
Examine the processes involved in identifying where tools can help the testing process
Learn how to approach the selection and implementation of those tools
Effective Use of Tools
Potential benefits and risks of tool support
(main tasks and activities)
Before buying a tool
Examine the test process
Examine the testing environment (benefits and risks)
Effective Use of Tools
Special considerations for some kinds of tools
Test execution tools
Replay designed to implement tests that are stored electronically
Often requires significant effort in order to achieve significant benefits
Performance testing tools
Need someone with expertise in performance testing to help design the tests and interpret the results
Effective Use of Tools
Special considerations for some kinds of tools
Static analysis tools
These tools, when applied to source code can enforce coding standards
Test management tools
Need to interface with other tools or spreadsheets
The reports need to be designed and monitored so that they provide benefit
Effective Use of Tools
As indicated tools can help many aspects of testing
How do you select where you want to start introducing CAST support?
The selection and implementation of a tool is a major task and should be regarded as a project in its own right
Effective Use of Tools
Three aspects
Economics
Selection
Introduction
Effective Use of Tools
If testing is currently badly organized tools may not be the immediate answer
Automating chaos simply speeds the delivery of chaos
The benefits of tools usually depend on a systematic and disciplined test process being in place
Address the test processes and introduce disciplines first
Effective Use of Tools
Examine the test process in detail
Identify the weaknesses, and the cause of those weaknesses
Find tools that fit in with your processes and address the weak areas
This may be more important than selecting the tool with the most number of features
Prioritize the areas of greatest importance
Effective Use of Tools
Examine the Test Environment(s)
One of the most crucial areas for tool implementation
Shared environments will not accept automation unless the SUT allows the data to be entered fresh every time
Automation may also fill an environment up and make it run out of space as it allows a large amount of data to be entered in a short space of time
Test facilities may be (will be!!!) less than Production facilities and use of the tools with the SUT can cause systems to run out of memory
Effective Use of Tools
A 6-step process
Requirements specification of the use of the tool
Market analysis
Demonstrations with manufacturers
Evaluation of a smaller selection of tools
Review
Final tool selection
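The evaluation step of the process above is often done as a weighted score across a shortlist. A sketch with invented criteria, weights, and tool names; process fit is weighted highest, reflecting the earlier point that fitting your processes matters more than feature count:

```python
# Illustrative sketch: scoring a tool shortlist with weighted criteria.
# Criteria, weights, and tool names are assumed examples.

weights = {"fits_process": 0.5, "cost": 0.2, "features": 0.3}

candidates = {
    "Tool A": {"fits_process": 9, "cost": 4, "features": 8},
    "Tool B": {"fits_process": 5, "cost": 9, "features": 9},
}

def score(ratings):
    """Weighted sum of the ratings (each criterion scored 0-10)."""
    return sum(weights[c] * ratings[c] for c in weights)

best = max(candidates, key=lambda name: score(candidates[name]))
```

Here the cheaper, more feature-rich Tool B still loses to Tool A, because Tool A fits the existing test process better.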
Effective Use of Tools
Market Analysis and Funding
Know the tools out there
Direct costs – the price of the tool
Indirect costs – man hours, training
It is necessary to budget for both the tool and the resources required to implement
Effective Use of Tools
Demonstrations Evaluations and Review
Your choice
Evaluation
Demos
Market
Effective Use of Tools
Pilot project
Review of the experiences from the pilot project
Process adaptations (if needed)
Tool configuration
User training
On-the-job coaching.
Effective Use of Tools
Plan the roll-out of the tool
Gain commitment from the tools users
Ensure new projects are aware of the cost of introducing a tool
Effective Use of Tools
Summary
Tools can improve the organization and efficiency of a test process
Before tools can be implemented a sound testing process must be in place
Tool selection and implementation must be carefully considered and managed
Effective Use of Tools
References
Buwalda, 2001; Fewster, 1999