Software Testing II
TRANSCRIPT
8/8/2019 Software Testing II
Software Testing
Software testing can also be stated as the
process of validating and verifying that a
software program/application/product:
Meets the business and technical
requirements that guided its design and
development; Works as expected; and
Can be implemented with the same
characteristics.
CONCEPT
A primary purpose of testing is to detect software
failures so that defects may be discovered and
corrected. Testing cannot establish that a product functions
properly under all conditions but can only establish
that it does not function properly under specific
conditions.
Software testing includes examination of code as well as execution of that code in various environments and conditions.
Test techniques include the process of executing a program or application with the intent of finding software bugs.
SOFTWARE BUGS
A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways.
Most bugs arise from mistakes and errors made by people in either
1) A program's source code, or
2) A program's design, and
3) A few are caused by compilers producing incorrect code.
Logic bugs
Infinite loops and infinite recursion
Off-by-one error: counting one too many or one too few when looping
Syntax bugs
Use of the wrong operator, such as performing assignment instead of an equality test. In simple cases this is often warned about by the compiler; in many languages it is deliberately guarded against by language syntax.
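The off-by-one error above can be sketched in Python (the function and data names here are illustrative, not from the slides). Note that Python also illustrates the "guarded against by language syntax" point: writing `if x = 1:` instead of `if x == 1:` is rejected outright as a syntax error.

```python
def sum_first_n_buggy(values, n):
    # Off-by-one bug: range(n + 1) iterates one time too many,
    # silently including one element past the intended window.
    total = 0
    for i in range(n + 1):
        total += values[i]
    return total

def sum_first_n(values, n):
    # Correct version: range(n) iterates exactly n times.
    total = 0
    for i in range(n):
        total += values[i]
    return total

# sum_first_n([10, 20, 30, 40], 2) is 30 as intended;
# sum_first_n_buggy([10, 20, 30, 40], 2) is 60 because of the extra element.
```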
Resource bugs
Null pointer dereference
Using an uninitialized variable
Using an otherwise valid instruction on the wrong data type (see packed decimal/binary-coded decimal)
Access violations
Resource leaks, where a finite system resource such as memory or file handles is exhausted by repeated allocation without release
Buffer overflow, in which a program tries to store data past the end of allocated storage. This may or may not lead to an access violation or storage violation. These bugs can form a security vulnerability.
Excessive recursion which, though logically valid, causes stack overflow
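A resource leak of the kind listed above can be sketched in Python with file handles (the functions and file names are illustrative):

```python
def count_lines_leaky(paths):
    # Resource-leak bug: each handle from open() is never closed, so a long
    # run over many paths can exhaust the process's file-descriptor limit.
    total = 0
    for path in paths:
        f = open(path)
        total += len(f.readlines())
    return total

def count_lines(paths):
    # Fixed version: the context manager guarantees each handle is released,
    # even if an exception is raised mid-read.
    total = 0
    for path in paths:
        with open(path) as f:
            total += len(f.readlines())
    return total
```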
Multi-threading programming bugs
Deadlock
Race condition
Concurrency errors in critical sections, mutual exclusion and other features of concurrent processing. Time-of-check-to-time-of-use (TOCTOU) is a form of unprotected critical section.
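A race condition and its mutual-exclusion fix can be sketched in Python's standard threading module (the `Counter` class is an illustrative example, not from the slides):

```python
import threading

class Counter:
    """Shared counter; incrementing is a read-modify-write critical section."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment_unsafe(self):
        # Race condition: two threads can both read the same old value,
        # and one of the two increments is then lost.
        self.value += 1

    def increment(self):
        # Mutual exclusion: the lock makes the read-modify-write atomic.
        with self._lock:
            self.value += 1

def run(counter, method_name, iterations=100_000, threads=2):
    workers = [threading.Thread(
                   target=lambda: [getattr(counter, method_name)()
                                   for _ in range(iterations)])
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value

# With the lock held, the final value is always threads * iterations.
```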
Teamworking bugs
Unpropagated updates; e.g. programmer changes "myAdd" but forgets to change "mySubtract", which uses the same algorithm. These errors are mitigated by the Don't Repeat Yourself philosophy.
Comments out of date or incorrect: many programmers assume the comments accurately describe the code.
Differences between documentation and the actual product.
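The "myAdd"/"mySubtract" hazard above can be sketched in Python, together with the Don't Repeat Yourself refactor that mitigates it (the element-wise list operations are an assumed, illustrative algorithm):

```python
import operator

# Hazard: myAdd and mySubtract each repeat the same element-wise loop, so a
# fix applied to one can be forgotten in the other (an unpropagated update).
def myAdd(xs, ys):
    return [a + b for a, b in zip(xs, ys)]

def mySubtract(xs, ys):
    return [a - b for a, b in zip(xs, ys)]

# DRY refactor: the shared traversal lives in exactly one place.
def combine(xs, ys, op):
    return [op(a, b) for a, b in zip(xs, ys)]

def myAddDry(xs, ys):
    return combine(xs, ys, operator.add)

def mySubtractDry(xs, ys):
    return combine(xs, ys, operator.sub)
```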
Software verification and validation
Verification: Have we built the software right? (i.e., does it match the specification?)
Validation: Have we built the right software? (i.e., is this what the customer wants?)
Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
Functional vs non-functional testing
Functional testing refers to tests that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work".
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or security. Non-functional testing tends to answer such questions as "how many people can log in at once".
Testing methods
Software testing methods are traditionally
divided into white- and black-box testing.
These two approaches are used to describe
the point of view that a test engineer takes
when designing test cases.
White Box Testing
White-box testing (clear box testing, glass box testing, transparent box testing, or structural testing) is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality (black-box testing).
An internal perspective of the system, as well as programming skills, are required and used to design test cases.
Black box testing
Black box testing treats the software as a "black box", without any knowledge of internal implementation. Black box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, traceability matrix, exploratory testing and specification-based testing.
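Two of the black-box methods listed, equivalence partitioning and boundary value analysis, can be sketched in Python against a hypothetical function under test (the `grade` function and its 0-100/pass-at-50 rule are assumptions for illustration only):

```python
def grade(score):
    # Hypothetical unit under test: scores 0-100, pass mark at 50.
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Equivalence partitions: invalid (<0), fail (0-49), pass (50-100), invalid (>100).
# Boundary value analysis picks one test input at each partition edge.
boundary_cases = [
    (-1, ValueError),   # just below the valid range
    (0, "fail"),        # lowest valid score
    (49, "fail"),       # just below the pass mark
    (50, "pass"),       # the pass mark itself
    (100, "pass"),      # highest valid score
    (101, ValueError),  # just above the valid range
]
```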
Types of Testing
Functional Testing
System testing
System testing tests a completely integrated system to verify that it meets its requirements. System testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing, and also the software system itself integrated with any applicable hardware system(s).
The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware.
Unit testing
Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected.
Unit testing alone cannot verify the functionality of a piece of software, but rather is used to assure that the building blocks the software uses work independently of each other.
Unit testing is also called component testing.
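A minimal unit test in the white-box style described above can be sketched with Python's standard unittest framework (the `word_count` function is a hypothetical unit under test, not from the slides):

```python
import unittest

def word_count(text):
    # Hypothetical unit under test: count whitespace-separated words.
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Each test checks one building block in isolation, as a developer
    # would while working on the code.
    def test_sentence(self):
        self.assertEqual(word_count("unit tests exercise one function"), 5)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)
```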
Regression testing
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. It can range from complete, for changes added late in the release or deemed to be risky, to very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.
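Checking whether a previously fixed fault has re-emerged can be sketched as a pinned test case in Python (the `parse_price` function and its past defect are hypothetical, for illustration only):

```python
def parse_price(text):
    # Hypothetical function with a fixed fault: an early version crashed on
    # leading whitespace and a currency symbol; strip both before parsing.
    return float(text.strip().lstrip("$"))

def test_regression_leading_whitespace():
    # Regression test: this exact input once triggered the defect; re-running
    # it on every change verifies that the old bug has not come back.
    assert parse_price("  $3.50") == 3.50
```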
Acceptance testing
Acceptance testing can mean one of two things:
A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.
Beta testing
Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.
Performance testing is executed to determine how fast a system or sub-system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing.
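A very small-scale sketch of measuring performance under increasing load, using Python's standard timeit module (the `build_index` operation and the workload sizes are illustrative assumptions):

```python
import timeit

def build_index(records):
    # Hypothetical sub-system under measurement: index records by id.
    return {record["id"]: record for record in records}

def measure(size, repeats=5):
    # Time the operation under a specific load (number of records).
    records = [{"id": i, "payload": "x" * 16} for i in range(size)]
    return timeit.timeit(lambda: build_index(records), number=repeats)

# Sweeping the workload size shows how run time scales with load:
for size in (1_000, 10_000):
    print(f"{size:>6} records: {measure(size):.4f}s for 5 runs")
```

Real performance and load tests run against the deployed system with dedicated tooling, but the shape is the same: fix a workload, measure, then grow the workload and compare.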
Usability testing is needed to check if the user interface is easy to use and understand.
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.
Destructive testing attempts to cause the software or a sub-system to fail, in order to test its robustness.
Testing cycle
[Diagram: the testing cycle as a loop of stages]
Requirements analysis → Test planning → Test development → Test execution → Test reporting → Test result analysis → Defect retesting → Regression testing → Test closure
Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.
Test planning: Test strategy, test plan creation. Since many activities will be carried out during testing, a plan is needed.
Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
Test execution: Testers execute the software based on the plans and test documents then report any errors found to the development team.
Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
Test result analysis: Or defect analysis, is done by the development team usually along with the client, in order to decide what defects should be treated, fixed, rejected (i.e. found software working properly) or deferred to be dealt with later.
Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team; this is also called resolution testing.
Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything, and that the software product as a whole is still working correctly.
Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.
Testing tools
Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:
Program monitors, permitting full or partial monitoring of program code, including:
Instruction set simulator, permitting complete instruction-level monitoring and trace facilities
Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
Code coverage reports
Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
Automated functional GUI testing tools are used to repeat system-level tests through the GUI
Benchmarks, allowing run-time performance comparisons to be made
Performance analysis (or profiling) tools that can help to highlight hot spots and resource usage
Some of these features may be incorporated into an Integrated Development Environment (IDE).
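One of the tool categories above, performance analysis (profiling), can be sketched with Python's standard cProfile and pstats modules; the `hot_spot`/`workload` functions are an illustrative example of the kind of code a profiler highlights:

```python
import cProfile
import io
import pstats

def hot_spot(n):
    # Deliberately busy loop so the profiler has something to report.
    return sum(i * i for i in range(n))

def workload():
    return [hot_spot(10_000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Print the five most expensive entries; hot_spot dominates the report,
# which is exactly the "highlight hot spots" use case described above.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```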
Software Quality
Definitions
Software quality is something that every developer/manufacturer has to be concerned with at every stage of the development process.
Viewing quality as an attribute of an item, quality refers to measurable characteristics that we can compare to known standards.
Quality is the totality of features and
characteristics of a product or service which
bear on its ability to satisfy a given need
(British Standards Institution).
Quality software is software that does what it
is supposed to do.
User satisfaction = compliant product + good quality + delivery within budget and schedule
Quality Assurance
Quality assurance consists of the auditing and
reporting functions of management.
Quality assurance's goal is to provide
management with the data necessary to be
informed about product quality, thereby
gaining insight and confidence that product
quality is meeting its goals.
Cost of Quality
The cost of quality includes all costs incurred
in the pursuit of quality or in performing
quality-related activities. Cost of quality
studies are conducted to provide a baseline
for the current cost of quality, identify
opportunities for reducing the cost of quality,
and provide a normalized basis of comparison.
Quality costs may be divided into costs associated with prevention, appraisal, and failure.
Prevention costs include:
quality planning
formal technical reviews
test equipment
training
Appraisal costs include activities to gain insight into product condition the first time through each process. Examples of appraisal costs include:
in-process and inter-process inspection
equipment calibration and maintenance
testing
Failure costs are those that would disappear if no defects appeared before shipping a product to customers.
Failure costs may be subdivided into internal failure costs and external failure costs.
Internal failure costs are incurred when we detect a defect in our product prior to shipment. Internal failure costs include:
rework
repair
failure mode analysis
External failure costs are associated with defects found after the product has been shipped to the customer. Examples of external failure costs are:
complaint resolution
product return and replacement
help line support
warranty work
Software quality assurance (SQA)
Software Quality Assurance (SQA) is a formal
process for evaluating and documenting the
quality of the work products produced during
each stage of the software development
lifecycle.
SQA's main objective is to ensure the production of high-quality work products according to stated requirements and established standards.
Software quality assurance comprises a variety of tasks associated with two different constituencies, i.e. the software engineers who do the technical work and the SQA group that has responsibility for quality assurance planning, oversight, record keeping, analysis and reporting.
Software engineers address quality by applying solid technical methods and measures, conducting reviews and performing well-planned software testing.
Activities performed as part of SQA
Preparing an SQA plan for the project
--developed during project planning
--Reviewed by all interested stakeholders
Plan identifies:
o Evaluations to be performed
o Audits and reviews to be performed
o Standards that are applicable to the project
o Procedures for error reporting and tracking
o Documents to be produced by the SQA group
o Amount of feedback to be provided to the software project team
Participating in the development of the project's software process description
Reviewing software engineering activities to verify compliance with the process
Auditing designated software work products to verify compliance with those defined as part of the software process
--verifies that corrections have been made and periodically reports results to the project manager
Ensuring that deviations in work and work products are handled according to a pre-specified procedure
SOFTWARE REVIEWS
Software reviews are a "filter" for the software
engineering process.
Reviews are applied at various points during
software development and serve to uncover
errors and defects that can then be removed.
The Review Meeting
Every review meeting should abide by the following constraints:
Between three and five people (typically) should be involved in the review.
Advance preparation should occur but should require no more than two hours of work for each person.
The duration of the review meeting should be less than two hours.
Review Reporting and Record Keeping
During the FTR (formal technical review), a reviewer (the recorder) actively records all issues that have been raised. These are summarized at the end of the review meeting and a review issues list is produced.
Once a formal technical review is completed, a review summary report is produced. A review summary report answers three questions:
1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?