ASAP D.D. (P) Ltd: Manual Testing Material

Uploaded by omkar-jagadish on 05-Apr-2018


  • 8/2/2019 ASAP D.D. (P) Ltd_Manual Testing Material

    1/59

    Manual Testing

    ASAP D.D. (P) Ltd Page 1

Foundation Level Manual Testing Concepts

Version 2012

    ASAP D.D. (P) Ltd


LICENSING AND COPYRIGHT INFORMATION

Manual Testing User Guide

Published by

ASAP D.D. (P) Ltd
Flat No: 302, 4th Floor, Siripuram Towers,
Visakhapatnam 530003, India

    Phone: 0891-2795845

E-mail: hr@asapdd.com
Web: www.asapdd.com.in

Copyright 2010 ASAP D.D. (P) Ltd. All rights reserved.

This guide is confidential and proprietary to ASAP D.D. (P) Ltd. No part of this document may be photocopied, reproduced, stored in a retrieval system, or transmitted, in any form or by any means, whether electronic, mechanical, or otherwise, without the prior written permission of ASAP D.D. (P) Ltd.

The information provided in this document is intended as a guide only. Information in this document is subject to change without notice. ASAP D.D. (P) Ltd reserves the right to change or improve its products and to make changes in the content without obligation to notify any person or organization of such changes or improvements.

Document Title: Manual Testing Guide

If you have any comments or suggestions regarding this document, please send them via e-mail to [email protected].


Table of Contents

Acknowledgement
Purpose of this Document
Level of Detail
1. Fundamentals of Testing
   1.1. What is Testing?
   1.2. Why is Testing Necessary?
        1.2.1. Software Systems Context
        1.2.2. Causes of Software Defects
        1.2.3. Role of Testing in Software Development, Maintenance and Operations
        1.2.4. Testing and Quality
        1.2.5. How Much Testing is Enough?
   1.3. Seven Testing Principles
   1.4. Fundamental Test Process
        1.4.1. Test Planning and Control
        1.4.2. Test Analysis and Design
        1.4.3. Test Implementation and Execution
        1.4.4. Evaluating Exit Criteria and Reporting
        1.4.5. Test Closure Activities
   1.5. Psychology of Testing
2. Testing Throughout the Software Life Cycle
   2.1. Software Development Models
        2.1.1. Sequential (Waterfall and V Models)
        2.1.2. Iterative Model
        2.1.3. Evolutionary (Prototyping, Concurrent Dev, Spiral Model)
        2.1.4. Agile Model
        2.1.5. Testing Within a Life Cycle Model
   2.2. Test Levels
        2.2.1. Component Testing
        2.2.2. Integration Testing
        2.2.3. System Testing
        2.2.4. Acceptance Testing
   2.3. Test Types
        2.3.1. Testing of Function (Functional Testing)
        2.3.2. Testing of Non-functional Software Characteristics (Non-functional Testing)
        2.3.3. Testing of Software Structure/Architecture (Structural Testing)
        2.3.4. Testing Related to Changes: Re-testing and Regression Testing
   2.4. Maintenance Testing
3. Static Techniques
   3.1. Static Techniques and Test Process
   3.2. Review Process
        3.2.1. Activities of a Formal Review
        3.2.2. Roles and Responsibilities
        3.2.3. Types of Reviews
        3.2.4. Success Factors for Reviews
   3.3. Static Analysis by Tools
4. Test Design Techniques
   4.1. The Test Development Process
   4.2. Categories of Test Design Techniques
   4.3. Specification-based or Black-box Techniques
        4.3.1. Equivalence Partitioning
        4.3.2. Boundary Value Analysis
        4.3.3. Decision Table Testing
        4.3.4. State Transition Testing
        4.3.5. Use Case Testing
   4.4. Structure-based or White-box Techniques
        4.4.1. Statement Testing and Coverage
        4.4.2. Decision Testing and Coverage
        4.4.3. Other Structure-based Techniques
   4.5. Experience-based Techniques
   4.6. Choosing Test Techniques
5. Test Management
   5.1. Test Organization
        5.1.1. Test Organization and Independence
        5.1.2. Tasks of the Test Leader and Testers
   5.2. Test Planning and Estimation
        5.2.1. Test Planning
        5.2.2. Test Planning Activities
        5.2.3. Entry Criteria
        5.2.4. Exit Criteria
        5.2.5. Test Estimation
        5.2.6. Test Strategy, Test Approach
   5.3. Test Progress Monitoring and Control
        5.3.1. Test Progress Monitoring
        5.3.2. Test Reporting
        5.3.3. Test Control
   5.4. Configuration Management
   5.5. Risk and Testing
        5.5.1. Project Risks
        5.5.2. Product Risks
   5.6. Incident Management
6. Tool Support for Testing
   6.1. Types of Test Tools
        6.1.1. Understanding the Meaning and Purpose of Tool Support for Testing
        6.1.2. Test Tool Classification
        6.1.3. Tool Support for Management of Testing and Tests
        6.1.4. Tool Support for Static Testing
        6.1.5. Tool Support for Test Specification
        6.1.6. Tool Support for Test Execution and Logging
        6.1.7. Tool Support for Performance and Monitoring
        6.1.8. Tool Support for Specific Testing Needs
   6.2. Effective Use of Tools: Potential Benefits and Risks
        6.2.1. Potential Benefits and Risks of Tool Support for Testing (for all tools)
        6.2.2. Special Considerations for Some Types of Tools
   6.3. Introducing a Tool into an Organization


    Acknowledgment

Many thanks to the contributors to this document: the Test Manager, the Process Manager, and other staff.

Portions of this manual were adapted from the following sources: the ISTQB 2010 Foundation Level Manual and the CSTE Foundation Level Manual.


    Purpose of this Document

The Foundation Level qualification is aimed at anyone involved in software testing. This includes people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. This Foundation Level qualification is also appropriate for anyone who wants a basic understanding of software testing, such as project managers, quality managers, software development managers, business analysts, IT directors and management consultants.

    Level of Detail

The level of detail in this syllabus allows internationally consistent teaching and examination. In order to achieve this goal, the syllabus consists of:

- General instructional objectives describing the intention of the Foundation Level
- A list of information to teach, including a description, and references to additional sources if required
- Learning objectives for each knowledge area, describing the cognitive learning outcome and mindset to be achieved
- A list of terms that students must be able to recall and understand
- A description of the key concepts to teach, including sources such as accepted literature or standards

The syllabus content is not a description of the entire knowledge area of software testing; it reflects the level of detail to be covered in Foundation Level training courses.


    1. Fundamentals of Testing

    1.1 What is Testing?

A common perception of testing is that it only consists of running tests, i.e. executing the software. This is part of testing, but not all of the testing activities.

Test activities exist before and after test execution. These activities include planning and control, choosing test conditions, designing and executing test cases, checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or completing closure activities after a test phase has been completed. Testing also includes reviewing documents (including source code) and conducting static analysis.

Testing software may be defined as operating the software under controlled conditions in order to:

1. verify that it behaves as specified,
2. detect errors, and
3. validate that what has been specified is what the user actually wanted.

Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings. You CAN learn to do verification, with little or no outside help.

Validation ensures that functionality, as defined in requirements, is the intended behaviour of the product; validation typically involves actual testing and takes place after verifications are completed.

Error Detection: Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't, or things don't happen when they should.
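The three-part definition above can be illustrated with a minimal sketch in Python. The `divide` function and its checks are invented for illustration and are not part of this manual:

```python
# Hypothetical function under test, for illustration only.
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero is not allowed")
    return a / b

# 1. Verify it behaves as specified: 10 / 2 should be 5.
assert divide(10, 2) == 5

# 2. Detect errors: intentionally make things go wrong and check
#    that the failure is handled as specified.
try:
    divide(1, 0)
    raised = False
except ValueError:
    raised = True
assert raised, "expected a ValueError for division by zero"
```

Validation, the third part, cannot be shown in code alone: it requires checking the specification itself against what the user actually wanted.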

Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information that can be used to improve both the system being tested and the development and testing processes. Testing can have the following objectives:

- Finding defects
- Gaining confidence about the level of quality
- Providing information for decision-making
- Preventing defects

The thought process and activities involved in designing tests early in the life cycle can help to prevent defects from being introduced into code, e.g. verifying the test basis (such as requirements) via test design.


Different viewpoints in testing take different objectives into account:

- In development testing, the main objective may be to cause as many failures as possible, so that defects in the software are identified and can be fixed
- In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements
- In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders about the risk of releasing the system at a given time
- Maintenance testing often includes testing that no new defects have been introduced during development of the changes
- In operational testing, the main objective may be to assess system characteristics such as reliability or availability

    1.2 Why is Testing Necessary?

1.2.1 Software Systems Context

Software systems are an integral part of life, from business applications (e.g. banking) to consumer products (e.g. cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.

    1.2.2 Causes of Software Defects

A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn't), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.

Defects occur because human beings are fallible and because there is time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions.

Failures can be caused by environmental conditions as well. For example, radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions.

1.2.3 Role of Testing in Software Development, Maintenance and Operations

Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use.

Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.


    1.2.4 Testing and Quality

With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability).

Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.

Testing should be integrated as one of the quality assurance activities (i.e. alongside development standards, training and defect analysis).

1.2.5 How Much Testing is Enough?

    Deciding how much testing is enough should take account of the level of risk, including technical, safety,and business risks, and project constraints such as time and budget.

Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.

    1.3 Seven Testing Principles

Principle 1: Testing shows presence of defects

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2: Exhaustive testing is impossible

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
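A quick back-of-the-envelope calculation shows why exhaustive testing is impossible in practice. The field counts and execution rate below are assumptions chosen purely for illustration:

```python
# Even a modest input space makes exhaustive testing infeasible.
# Assume a form with 5 independent fields, each accepting 100
# distinct values (illustrative numbers, not from this manual).
fields = 5
values_per_field = 100

total_combinations = values_per_field ** fields
print(total_combinations)  # 10000000000 (10 billion test cases)

# At an assumed rate of 1,000 test executions per second, running
# them all would still take over 115 days.
days = total_combinations / 1000 / 86400
print(round(days, 1))  # 115.7
```

This is why risk analysis and prioritization, rather than brute force, must guide which combinations are actually tested.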

Principle 3: Early testing

To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4: Defect clustering

Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.


Principle 5: Pesticide paradox

If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.

Principle 6: Testing is context dependent

    Testing is done differently in different contexts. For example, safety-critical software is tested differentlyfrom an e-commerce site.

Principle 7: Absence-of-errors fallacy

Finding and fixing defects does not help if the system built is unusable and does not fulfil the users' needs and expectations.

1.4 Fundamental Test Process

The most visible part of testing is test execution. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating results.

The fundamental test process consists of the following main activities:

- Planning and control
- Analysis and design
- Implementation and execution
- Evaluating exit criteria and reporting
- Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

    1.4.1 Test Planning and Control

Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, the testing activities should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.


    1.4.2 Test Analysis and Design

Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.

The test analysis and design activity has the following major tasks:

- Reviewing the test basis (such as requirements, software integrity level (risk level), risk analysis reports, architecture, design, interface specifications)
- Evaluating testability of the test basis and test objects
- Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure of the software
- Designing and prioritizing high level test cases
- Identifying necessary test data to support the test conditions and test cases
- Designing the test environment set-up and identifying any required infrastructure and tools
- Creating bi-directional traceability between test basis and test cases
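Bi-directional traceability between the test basis and test cases, the last task above, can be sketched as a pair of mappings. The requirement and test-case IDs below are hypothetical, invented only for illustration:

```python
# Forward traceability: from each requirement to the test cases
# that cover it (IDs are hypothetical examples).
requirement_to_tests = {
    "REQ-01": ["TC-001", "TC-002"],
    "REQ-02": ["TC-003"],
}

# Derive the reverse mapping so traceability works in both
# directions: from a failing test case back to the requirement
# it covers.
test_to_requirements = {}
for req, tests in requirement_to_tests.items():
    for tc in tests:
        test_to_requirements.setdefault(tc, []).append(req)

print(test_to_requirements["TC-003"])  # ['REQ-02']
```

With both directions available, a changed requirement points to the tests that must be revised, and a failed test points back to the requirement at risk.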

    1.4.3 Test Implementation and Execution

Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.

Test implementation and execution has the following major tasks:

- Finalizing, implementing and prioritizing test cases (including the identification of test data)
- Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
- Creating test suites from the test procedures for efficient test execution
- Verifying that the test environment has been set up correctly
- Verifying and updating bi-directional traceability between the test basis and test cases
- Executing test procedures either manually or by using test execution tools, according to the planned sequence
- Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
- Comparing actual results with expected results
- Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
- Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test, and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
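Several of these tasks, executing test cases, comparing actual results with expected results, and reporting discrepancies as incidents, can be sketched in a few lines. The `add` function and the test cases below are hypothetical examples, not from this manual:

```python
# Hypothetical function under test.
def add(a, b):
    return a + b

# Test cases pairing inputs with expected results (illustrative IDs).
test_cases = [
    {"id": "TC-001", "inputs": (2, 3), "expected": 5},
    {"id": "TC-002", "inputs": (-1, 1), "expected": 0},
]

# Execute each case, compare actual with expected, and record any
# discrepancy as an incident for later analysis.
incidents = []
for case in test_cases:
    actual = add(*case["inputs"])
    if actual != case["expected"]:
        incidents.append(
            {"test": case["id"],
             "expected": case["expected"],
             "actual": actual}
        )

print(f"{len(test_cases) - len(incidents)} passed, {len(incidents)} incidents")
```

In a real project the incident records would also capture the software version, environment and test procedure, so the cause of each discrepancy can be established.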


    1.4.4 Evaluating Exit Criteria and Reporting

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level.

Evaluating exit criteria has the following major tasks:

- Checking test logs against the exit criteria specified in test planning
- Assessing if more tests are needed or if the exit criteria specified should be changed
- Writing a test summary report for stakeholders
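Checking test logs against exit criteria can be as simple as evaluating a few measures against thresholds agreed during test planning. The 95% pass-rate threshold and the counts below are assumptions for illustration only:

```python
# Illustrative figures from a test log; the exit criteria assumed
# here are a 95% pass rate and zero open critical defects.
executed, passed = 200, 192
open_critical_defects = 0

pass_rate = passed / executed
exit_criteria_met = pass_rate >= 0.95 and open_critical_defects == 0
print(f"pass rate {pass_rate:.1%}, exit criteria met: {exit_criteria_met}")
```

If the criteria are not met, the outcome feeds back into planning: either more tests are run, or the criteria themselves are reviewed and formally changed.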

    1.4.5 Test Closure Activities

Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.

Test closure activities include the following major tasks:

- Checking which planned deliverables have been delivered
- Closing incident reports or raising change records for any that remain open
- Documenting the acceptance of the system
- Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
- Handing over the testware to the maintenance organization
- Analyzing lessons learned to determine changes needed for future releases and projects
- Using the information gathered to improve test maturity

1.5 Psychology of Testing

The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.

A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined, as shown here from low to high:

- Tests designed by the person(s) who wrote the software under test (low level of independence)
- Tests designed by another person(s) (e.g., from the development team)
- Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
- Tests designed by a person(s) from a different organization or company (i.e. outsourcing or certification by an external body)

People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.

Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.

If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews as well as in testing.

The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later, and reduce risks.

Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. However, there are several ways to improve communication and relationships between testers and others:

- Start with collaboration rather than battles; remind everyone of the common goal of better quality systems
- Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it; for example, write objective and factual incident reports and review findings
- Try to understand how the other person feels and why they react as they do
- Confirm that the other person has understood what you have said and vice versa


    2. Testing Throughout the Software Life Cycle

    2.1 Software Development Models

Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.

A software development model is a framework that is used to structure, plan and control the process of developing an information system. The widely used models are:

- Sequential (Waterfall and V Model)
- Iterative
- Evolutionary (Prototyping, Concurrent Dev, Spiral)
- Agile

2.1.1 Sequential Model

A. Waterfall Model

Each phase must be completed in its entirety before the next phase can begin. At the end of each phase, a review takes place to determine whether the project is on the right path and whether to continue or discard the project.

[Figure: Waterfall model phases, flowing from Requirements to Design, Development, Testing, Implementation, and Maintenance]


Advantages:

- Most common
- Simple to understand and use
- Easy to manage due to rigidity; each phase has specific deliverables and a review
- Phases are processed and completed individually
- Works well for smaller projects, or where requirements are well understood

Disadvantages:

- Adjusting scope during the life cycle can end a project
- No working software is produced until late in the life cycle
- High amounts of risk and uncertainty
- Poor model for complex or object-oriented projects
- Poor model for long or ongoing projects, or where requirements are likely to change

B. V Model

- V-Model evolved from the Waterfall Model
- It is also called the Verification and Validation Model
- Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape
- Testing is emphasized in this model more than in the waterfall model
- It is a structured approach to testing
- Brings high quality into the development of products

2.1.2 Iterative Model

Advantages:

- The key to successful use of an iterative software development life cycle is rigorous validation of requirements, and verification (including testing) of each version of the software against those requirements within each cycle of the model
- Highly accurate and specific

Disadvantages:

- Simple early designs may be inadequate
- When using the iterative model, people working on the project can get stuck in a loop (always finding problems, then having to go back, design a fix, and implement it)

2.1.3 Evolutionary Model

- Allows the software to evolve as needs grow, become better understood, or become defined
- Each delivery becomes more complex, with the addition of new features/functions
- The goal of evolutionary models is extensibility
- Some evolutionary models are:
  o Prototyping Model
  o Concurrent Development Model
  o Spiral Model

A. Prototyping Model

It is recommendable for projects when:

- There is a short amount of time for the product
- Revisions are needed after release
- Requirements are fuzzy
- The developer is unsure of:
  o The efficiency of an algorithm
  o The adaptability of the OS
  o Whether the user interface is well defined

Advantages:

- Delivers a working system early and cheaply
- Avoids building systems to bad requirements
- Fits top-down implementation and testing strategies

C. Spiral Model

Framework activities:

- Customer Communication: tasks required to establish effective communication
- Planning: tasks required to define resources, timelines and other project-related information
- Risk Analysis: tasks required to assess the technical and management risks
- Engineering: tasks required to build one or more representations of the application
- Construction & Release: tasks required to construct, test and support (e.g. documentation and training)
- Customer Evaluation: tasks required to obtain periodic customer feedback

Advantages:

- Emphasizes risk, which is often ignored
- Emphasizes risk reduction techniques
- Provides checkpoints for project cancellation
- Constant customer involvement and validation

Disadvantages:

- Full analysis requires training, skill, and considerable expense, so it may only be appropriate for large projects run by large companies


    2.1.4 Agile Model

The Agile development method is characterized as being able to adapt quickly to changing realities. It incorporates planning, requirements analysis, design, coding, testing, and documenting tasks to release mini-increments of new functionality.

Promises:

- Allows for adaptive planning
- Project risk is minimized by developing software in short iterations, where each iteration is a small project on its own
- Allows for just-in-time requirements and the ability to adapt to constantly changing requirements
- Less time is wasted on written documentation; the emphasis is on real-time communication, preferably face-to-face, over written documents
- Progress is measured by producing crude and executable systems presented to stakeholders and continually improving them
- There is continuous client communication; the project is very close to the user and relies heavily on client interaction to build the best system that meets the user's needs
- Deliverables are short-win, business-focused releases, released typically every couple of weeks or months until the entire project is completed


Realities:

- Can result in "cowboy coding"; that is, the absence of a defined development method, where team members have the freedom to do whatever they feel is right
- There is often insufficient structure and documentation to maintain and enhance the application on a going-forward basis
- Only works well with senior-level, highly experienced developers
- Often incorporates insufficient software architecture and design for complex applications
- Business partners are often nervous and confused about what they will receive as a final package; they resist iterative requirements work and are not prepared for iterative testing
- Output may be surprising; due to the incremental nature of adaptive planning, the overall result may differ substantially from the original intent. This may be a better result, but perceived as a chaotic process
- Requires a lot of cultural change for an organization to adopt (if this doesn't already exist)
- Many tools for development (e.g. project management, requirements management, development tools) were based on the Waterfall approach and require extra work to be effective in an Agile methodology

An Agile methodology is best suited for development projects that are evolving and continuously facing changing conditions.

2.1.5 Testing within a Life Cycle Model

In any life cycle model, there are several characteristics of good testing:
- For every development activity there is a corresponding testing activity
- Each test level has test objectives specific to that level
- The analysis and design of tests for a given test level should begin during the corresponding development activity
- Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle

Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g., integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).

2.2 Test Levels

The system is tested in steps, in line with the planned build and release strategies, from individual units of code through integrated subsystems to the deployed releases and to the final system. Testing proceeds through various physical levels of the application development lifecycle. Each completed level represents a milestone on the project plan, and each stage represents a known level of physical integration and quality. These stages of integration are known as test levels:


- Component Testing
- Integration Testing
- System Testing
- Acceptance Testing

    2.2.1 Component Testing

Test basis:
- Component requirements
- Detailed design
- Code

Typical test objects:
- Components
- Programs
- Data conversion / migration programs

Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc., that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system.

Component testing may include testing of functionality and specific non-functional characteristics, such as resource behavior (e.g., searching for memory leaks) or robustness testing, as well as structural testing (e.g., decision coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool. In practice, component testing usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally managing these defects.

One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests, correcting any issues and iterating until they pass.
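The test-first cycle can be sketched with Python's built-in unittest framework (the `add` function and its tests are invented for this illustration):

```python
import unittest

# Step 1 (test first): write the component tests before the code exists;
# they fail until add() below is implemented.
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_handles_negatives(self):
        self.assertEqual(add(-1, 1), 0)

# Step 2: write just enough code to make the tests pass.
def add(a, b):
    return a + b

# Step 3: execute the component tests; fix and iterate until they pass.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a real test-driven workflow, step 1 is run first (and fails), then steps 2 and 3 repeat for each small piece of new behavior.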

    2.2.2 Integration Testing

Test basis:
- Software and system design
- Architecture
- Workflows
- Use cases


Typical test objects:
- Sub-systems
- Database implementation
- Infrastructure
- Interfaces
- System configuration
- Configuration data

Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system and hardware, and interfaces between systems.

There may be more than one level of integration testing, and it may be carried out on test objects of varying size, as follows:

1. Component integration testing tests the interactions between software components and is done after component testing
2. System integration testing tests the interactions between different systems or between hardware and software and may be done after system testing. In this case, the developing organization may control only one side of the interface. This might be considered a risk. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.

The greater the scope of integration, the more difficult it becomes to isolate failures to a specific component or system, which may lead to increased risk and additional time for troubleshooting.

At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B, they are interested in testing the communication between the modules, not the functionality of the individual modules, as that was done during component testing. Both functional and structural approaches may be used.
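This idea can be sketched in Python: the integration test below checks only the hand-off between two components (the component names and price-formatting scenario are invented for the example), not their internal logic.

```python
# Component B: formats an amount for display (already component-tested).
def format_price(amount_cents):
    return f"${amount_cents / 100:.2f}"

# Component A: builds an order summary, delegating formatting to B.
# The formatter is injected so the integration point is explicit.
def order_summary(item, amount_cents, formatter=format_price):
    return f"{item}: {formatter(amount_cents)}"

# Integration test: verify A passes the right value to B and uses B's result.
def test_summary_integrates_formatter():
    received = []
    def spy_formatter(cents):          # records the hand-off
        received.append(cents)
        return format_price(cents)
    assert order_summary("book", 1250, spy_formatter) == "book: $12.50"
    assert received == [1250]          # A forwarded the amount unchanged

test_summary_integrates_formatter()
```

Note that the test makes no assertion about how either function computes its result; it only verifies the communication between them.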

Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for most efficient testing.

    2.2.3 System Testing

Test basis:
- System and software requirement specification
- Use cases
- Functional specification
- Risk analysis reports


Typical test objects:
- System, user and operation manuals
- System configuration

System testing is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.

In system testing, the test environment should correspond to the final target or production environment as much as possible, in order to minimize the risk of environment-specific failures not being found in testing.

System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high-level text descriptions or models of system behavior, interactions with the operating system, and system resources.

System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation.
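To illustrate the decision-table idea (the discount rule below is invented for the example), each row of the table captures one combination of conditions and its expected outcome, and becomes one test case:

```python
# Hypothetical business rule: a discount applies only to members
# whose order total is at least $100.
def discount_applies(is_member, order_total):
    return is_member and order_total >= 100

# Decision table: (is_member, order_total, expected outcome).
decision_table = [
    (True,  150, True),   # member, large order  -> discount
    (True,   50, False),  # member, small order  -> no discount
    (False, 150, False),  # non-member, large    -> no discount
    (False,  50, False),  # non-member, small    -> no discount
]

for is_member, total, expected in decision_table:
    assert discount_applies(is_member, total) == expected
```

Writing the table first makes it easy to see that every combination of the business-rule conditions has been covered.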

An independent test team often carries out system testing.

    2.2.4 Acceptance Testing

Test basis:
- User requirements
- System requirements
- Use cases
- Business processes
- Risk analysis reports

Typical test objects:
- Business processes on fully integrated system
- Operational and maintenance processes
- User procedures
- Forms
- Reports

Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.


The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system's readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.

Acceptance testing may occur at various times in the life cycle, for example:
- A COTS software product may be acceptance tested when it is installed or integrated
- Acceptance testing of the usability of a component may be done during component testing
- Acceptance testing of a new functional enhancement may come before system testing

    Typical forms of acceptance testing include the following:

User acceptance testing: Typically verifies the fitness for use of the system by business users.

Operational (acceptance) testing

The acceptance of the system by the system administrators, including:
- Testing of backup/restore
- Disaster recovery
- User management
- Maintenance tasks
- Data load and migration tasks
- Periodic checks of security vulnerabilities

    Contract and regulation acceptance testing

Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Regulation acceptance testing is performed against any regulations that must be adhered to, such as government, legal or safety regulations.

    Alpha and beta (or field) testing

Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization's site, but not by the developing team. Beta testing, or field-testing, is performed by customers or potential customers at their own locations.

Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer's site.


2.3 Test Types

A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.

A test type is focused on a particular test objective, which could be any of the following:
- A function to be performed by the software
- A non-functional quality characteristic, such as reliability or usability
- The structure or architecture of the software or system
- Change related, i.e. confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)

    2.3.1 Testing of Function (Functional Testing)

The functions that a system, subsystem or component is to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are what the system does.

Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g., tests for components may be based on a component specification).

Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system. Functional testing considers the external behavior of the software (black-box testing).

A type of functional testing, security testing, investigates the functions (e.g., a firewall) relating to the detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.

2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing)

Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of how the system works.

Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing. These tests can be referenced to a quality model such as the one defined in "Software Engineering - Software Product Quality". Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.
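For instance, a response-time characteristic can be measured and checked against a quantified target directly (the operation being timed and the 200 ms threshold below are arbitrary examples, not standard values):

```python
import time

# Stand-in for a real operation whose response time we want to measure.
def handle_request():
    return sum(range(10_000))

# Measure the average response time over several runs and compare it
# against a quantified, pre-agreed performance requirement.
RUNS = 100
THRESHOLD_SECONDS = 0.2  # example requirement: average under 200 ms

start = time.perf_counter()
for _ in range(RUNS):
    handle_request()
average = (time.perf_counter() - start) / RUNS

print(f"average response time: {average * 1000:.3f} ms")
assert average < THRESHOLD_SECONDS, "performance requirement not met"
```

The key point is that the pass/fail criterion is a number on a scale, not a functional correctness check.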


    2.3.3 Testing of Software Structure/Architecture (Structural Testing)

Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.

Coverage is the extent to which a structure has been exercised by a test suite, expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed, to increase coverage.

At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy.

Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g., to business models or menu structures).
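As a small illustration of decision coverage (the function and its tests are invented for the example), the single decision below has two outcomes, so two tests are needed to reach 100%:

```python
# Function under test: one decision (the if), two possible outcomes.
def classify(temperature):
    if temperature >= 30:      # decision point
        return "hot"
    return "mild"

# A test exercising only the True outcome gives 1 of 2 outcomes:
# 50% decision coverage.
assert classify(35) == "hot"

# Adding a test for the False outcome brings decision coverage to 100%.
assert classify(20) == "mild"

# Coverage as a percentage: outcomes exercised / total decision outcomes.
outcomes_exercised, total_outcomes = 2, 2
print(f"decision coverage: {outcomes_exercised / total_outcomes:.0%}")
```

In practice a coverage tool computes these percentages automatically across all statements or decisions in the code under test.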

    2.3.4 Testing Related to Changes: Re-testing and Regression Testing

After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation testing (re-testing). Debugging (defect fixing) is a development activity, not a testing activity.

Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.

    Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.

Regression testing may be performed at all test levels, and includes functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.

2.4 Maintenance Testing

Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.

Modifications include planned enhancement changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned


upgrade of Commercial Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.

Maintenance testing for migration (e.g., from one platform to another) should include operational tests of the new environment as well as of the changed software. Migration testing (conversion testing) is also needed when data from another application will be migrated into the system being maintained.

Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.

In addition to testing what has been changed, maintenance testing includes extensive regression testing of parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types. Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.

    Maintenance testing can be difficult if specifications are out of date or missing, or testers with domainknowledge are not available.


    3. Static Techniques

    3.1 Static Techniques and the Test Process

Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation, without the execution of the code.

    Reviews are a way of testing software work products (including code) and can be performed well beforedynamic test execution. Defects detected during reviews early in the life cycle (e.g., defects found inrequirements) are often much cheaper to remove than those detected by running tests on the executingcode.

A review could be done entirely as a manual activity, but there is also tool support. The main manual activity is to examine a work product and make comments about it. Any software work product can be reviewed, including requirements specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides or web pages.

Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.

Reviews, static analysis and dynamic testing have the same objective: identifying defects. They are complementary; the different techniques can find different types of defects effectively and efficiently.

Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.

Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.

3.2 Review Process

The different types of reviews vary from informal, characterized by no written instructions for reviewers, to systematic, characterized by including team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements, or the need for an audit trail.

The way a review is carried out depends on the agreed objectives of the review (e.g., find defects, gain understanding, educate testers and new team members, or discussion and decision by consensus).


    3.2.1 Activities of a Formal Review

1. Planning:
   - Defining the review criteria
   - Selecting the personnel
   - Allocating roles
   - Defining the entry and exit criteria for more formal review types (e.g., inspections)
   - Selecting which parts of documents to review
   - Checking entry criteria (for more formal review types)
2. Kick-off:
   - Distributing documents
   - Explaining the objectives, process and documents to the participants
3. Individual preparation:
   - Preparing for the review meeting by reviewing the document(s)
   - Noting potential defects, questions and comments
4. Examination/evaluation/recording of results (review meeting):
   - Discussing or logging, with documented results or minutes (for more formal review types)
   - Noting defects, making recommendations regarding handling the defects, making decisions about the defects
   - Examining/evaluating and recording during any physical meetings, or tracking any group electronic communications
5. Rework:
   - Fixing defects found (typically done by the author)
   - Recording updated status of defects (in formal reviews)
6. Follow-up:
   - Checking that defects have been addressed
   - Gathering metrics
   - Checking on exit criteria (for more formal review types)

    3.2.2 Roles and Responsibilities

A typical formal review will include the roles below:
- Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met
- Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and following up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests
- Author: the writer or person with chief responsibility for the document(s) to be reviewed
- Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g., defects) in


  the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings

- Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting

Looking at software products or related work products from different perspectives, and using checklists, can make reviews more effective and efficient. For example, a checklist based on various perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems, may help to uncover previously undetected issues.

    3.2.3 Types of Reviews

A single software product or related work product may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:

Informal Review:
- No formal process
- May take the form of pair programming or a technical lead reviewing designs and code
- Results may be documented
- Varies in usefulness depending on the reviewers
- Main purpose: inexpensive way to get some benefit

Walkthrough:
- Meeting led by the author
- May take the form of scenarios, dry runs, peer group participation
- Open-ended sessions
  - Optional pre-meeting preparation of reviewers
  - Optional preparation of a review report including a list of findings
- Optional scribe (who is not the author)
- May vary in practice from quite informal to very formal
- Main purposes: learning, gaining understanding, finding defects

Technical Review:
- Documented, defined defect-detection process that includes peers and technical experts, with optional management participation
- May be performed as a peer review without management participation
- Ideally led by a trained moderator (not the author)
- Pre-meeting preparation by reviewers
- Optional use of checklists


- Preparation of a review report which includes the list of findings, the verdict on whether the software product meets its requirements and, where appropriate, recommendations related to findings
- May vary in practice from quite informal to very formal
- Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications, plans, regulations, and standards

Inspection:
- Led by a trained moderator (not the author)
- Usually conducted as a peer examination
- Defined roles
- Includes metrics gathering
- Formal process based on rules and checklists
- Specified entry and exit criteria for acceptance of the software product
- Pre-meeting preparation
- Inspection report including list of findings
- Formal follow-up process
- Optional process improvement components
- Optional reader
- Main purpose: finding defects

Walkthroughs, technical reviews and inspections can be performed within a peer group, i.e. colleagues at the same organizational level. This type of review is called a peer review.

    3.2.4 Success Factors for Reviews

Success factors for reviews include:
- Each review has clear predefined objectives
- The right people for the review objectives are involved
- Testers are valued reviewers who contribute to the review and also learn about the product, which enables them to prepare tests earlier
- Defects found are welcomed and expressed objectively
- People issues and psychological aspects are dealt with (e.g., making it a positive experience for the author)
- The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants
- Review techniques are applied that are suitable to achieve the objectives and appropriate to the type and level of software work products and reviewers
- Checklists or roles are used if appropriate to increase effectiveness of defect identification
- Training is given in review techniques, especially the more formal techniques such as inspection
- Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)
- There is an emphasis on learning and process improvement


3.3 Static Analysis by Tools

The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in dynamic testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.

The value of static analysis is:
- Early detection of defects prior to test execution
- Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure
- Identification of defects not easily found by dynamic testing
- Detecting dependencies and inconsistencies in software models such as links
- Improved maintainability of code and design
- Prevention of defects, if lessons are learned in development

Typical defects discovered by static analysis tools include:
- Referencing a variable with an undefined value
- Inconsistent interfaces between modules and components
- Variables that are not used or are improperly declared
- Unreachable (dead) code
- Missing and erroneous logic (potentially infinite loops)
- Overly complicated constructs
- Programming standards violations
- Security vulnerabilities
- Syntax violations of code and software models
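An illustrative snippet (invented for this guide) containing two of these defect patterns; the code runs and produces correct results, yet a static analysis tool or linter would still flag the issues:

```python
def total_price(prices):
    unused_rate = 0.07          # flagged: variable assigned but never used
    total = 0
    for p in prices:
        total = total + p
    return total
    print("done")               # flagged: unreachable (dead) code after return

# Dynamic testing alone would not reveal either issue, because the
# function behaves correctly when executed.
assert total_price([10, 20, 30]) == 60
```

This is the sense in which static analysis finds defects rather than failures: the problems exist in the code regardless of whether any test ever fails.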

Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing, or when checking in code to configuration management tools, and by designers during software modelling. Static analysis tools may produce a large number of warning messages, which need to be well managed to allow the most effective use of the tool.

    Compilers may offer some support for static analysis, including the calculation of metrics.


    4. Test Design Techniques

4.1 The Test Development Process

The test development process described in this section can be done in different ways, from very informal with little or no documentation, to very formal (as described below). The level of formality depends on the context of the testing, including the maturity of testing and development processes, time constraints, safety or regulatory requirements, and the people involved.

During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e. to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases (e.g., a function, transaction, quality characteristic or structural element).

Establishing traceability from test conditions back to the specifications and requirements enables both effective impact analysis when requirements change, and determining requirements coverage for a set of tests. During test analysis the detailed test approach is implemented to select the test design techniques to use based on, among other considerations, the identified risks.

During test design the test cases and test data are created and specified. A test case consists of a set of input values, execution preconditions, expected results and execution postconditions, developed to cover certain test objective(s) or test condition(s). The Standard for Software Test Documentation (IEEE Std 829-1998) describes the content of test design specifications (containing test conditions) and test case specifications.

Expected results should be produced as part of the specification of a test case and include outputs, changes to data and states, and any other consequences of the test. If expected results have not been defined, then a plausible, but erroneous, result may be interpreted as the correct one. Expected results should ideally be defined prior to test execution.
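A test case with these parts made explicit might be recorded as structured data (the field names and the withdrawal scenario below are illustrative, not a prescribed format):

```python
# A test case with inputs, preconditions, expected results and
# postconditions all stated up front, before execution.
test_case = {
    "id": "TC-001",
    "objective": "Withdrawal reduces the account balance",
    "preconditions": {"balance": 100},
    "inputs": {"withdraw": 40},
    "expected_result": {"balance": 60},
    "postconditions": "account remains open",
}

def execute(case):
    # Set up the precondition, perform the action under test,
    # then compare the actual outcome to the predefined expected result.
    balance = case["preconditions"]["balance"]
    balance -= case["inputs"]["withdraw"]
    actual = {"balance": balance}
    return actual == case["expected_result"]

assert execute(test_case)
```

Because the expected result is written down before the test runs, a plausible but wrong outcome (say, a balance of 40) would be caught rather than accepted.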

During test implementation the test cases are developed, implemented, prioritized and organized in the test procedure specification. The test procedure specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).

The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.


4.2 Categories of Test Design Techniques

The purpose of a test design technique is to identify test conditions, test cases, and test data.

It is a classic distinction to denote test techniques as black-box or white-box. Black-box test design techniques (also called specification-based techniques) are a way to derive and select test conditions, test cases, or test data based on an analysis of the test basis documentation. This includes both functional and non-functional testing. Black-box testing, by definition, does not use any information regarding the internal structure of the component or system to be tested. White-box test design techniques (also called structural or structure-based techniques) are based on an analysis of the structure of the component or system. Black-box and white-box testing may also draw upon the experience of developers, testers and users to determine what should be tested.

Some techniques fall clearly into a single category; others have elements of more than one category.

This syllabus refers to specification-based test design techniques as black-box techniques and structure-based test design techniques as white-box techniques. In addition, experience-based test design techniques are covered.

Common characteristics of specification-based test design techniques include:

- Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components
- Test cases can be derived systematically from these models

Common characteristics of structure-based test design techniques include:

- Information about how the software is constructed is used to derive the test cases (e.g., code and detailed design information)
- The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage

Common characteristics of experience-based test design techniques include:

- The knowledge and experience of people are used to derive the test cases
- The knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment is one source of information
- Knowledge about likely defects and their distribution is another source of information


4.3 Specification-based or Black-box Techniques

    4.3.1 Equivalence Partitioning

In this method the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases, still covering maximum requirements. In short, it is the process of taking all possible test cases and placing them into classes. One test value is picked from each class while testing.

Equivalence partitioning can be used to achieve input and output coverage goals. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
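As a minimal sketch of the technique (the `validate_age` function and the 18-60 range are invented for illustration, not taken from this material), an age field with one valid and two invalid classes could be covered with one representative value per class:

```python
# Hypothetical example: an age field accepts 18-60; anything below or
# above is invalid. That gives three equivalence classes.
def validate_age(age):
    """Return True when age falls in the valid partition 18-60."""
    return 18 <= age <= 60

# One representative test value is picked from each class.
partitions = {
    "invalid_low":  10,   # stands in for every value < 18
    "valid":        35,   # stands in for every value in 18..60
    "invalid_high": 70,   # stands in for every value > 60
}

results = {name: validate_age(value) for name, value in partitions.items()}
print(results)  # {'invalid_low': False, 'valid': True, 'invalid_high': False}
```

Three test cases here cover the same classes that exhaustive testing of every age would, which is the point of the technique.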

4.3.2 Boundary Value Analysis

It is widely recognized that input values at the extreme ends of the input domain cause more errors in a system; more application errors occur at the boundaries of the input domain. The boundary value analysis technique is used to identify errors at these boundaries rather than only those in the centre of the input domain.

Boundary value analysis extends equivalence partitioning: test cases are selected at the edges of the equivalence classes. Boundary value analysis is often considered part of stress and negative testing.

Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect-finding capability is high. Detailed specifications are helpful in determining the interesting boundaries.
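The boundary values for an integer range can be derived mechanically. This sketch (the helper name and the 18-60 range are assumptions for illustration) uses the common two-value scheme of each boundary plus its nearest out-of-range neighbour:

```python
def boundary_values(low, high):
    """Derive boundary-value test inputs for an integer range [low, high]:
    each boundary itself plus the value just outside it."""
    return [low - 1, low, high, high + 1]

# For an age field accepting 18-60, the interesting test inputs are:
print(boundary_values(18, 60))  # [17, 18, 60, 61]
```

Testing 17, 18, 60 and 61 probes both sides of each edge, where off-by-one defects in comparison operators typically hide.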

4.3.3 Decision Table Testing

The decision table is one of the most commonly used testing techniques. Decision tables are used to record complex business rules or scenarios and break them down into a simpler tabular format called a decision table.

Consider a real-world example:

A particular website has 3 different levels of user access:

- Users who are not registered on the site
- Users who are registered
- Premium users or paid members who have access to all actions on the site
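The three access levels above can be sketched as a decision table in code. The condition names and action strings are invented for illustration; the point is that each rule is one combination of conditions mapped to one action:

```python
# Hypothetical decision table for the website access levels described above.
# Conditions: is_registered, is_premium. Action: the content served.
DECISION_TABLE = {
    # (is_registered, is_premium): action
    (False, False): "public content only",
    (True,  False): "member content",
    (True,  True):  "all content",
}

def access_level(is_registered, is_premium):
    # (False, True) is an infeasible combination (a premium user must be
    # registered), so the table deliberately has no rule for it.
    return DECISION_TABLE[(is_registered, is_premium)]

print(access_level(True, False))  # member content
```

Each row of the table becomes one test case, and enumerating the keys makes it easy to check that every feasible combination of conditions has been considered.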


Advantages:

- A decision table provides a framework for a complete and accurate statement of processing or decision logic
- Helps to identify the test scenarios faster because of the tabular representation
- Easy to understand
- Easy to maintain and update the decision table if there is a change in requirements
- It is possible to check that all test combinations have been considered

The strength of decision table testing is that it creates combinations of conditions that otherwise might not have been exercised during testing. It may be applied to all situations where the action of the software depends on several logical decisions.

    4.3.4 State Transition Testing

State transition testing is used for systems where some aspect of the software can be described as a 'finite state machine'. This means that the system can be in a finite number of different states, and the transitions from one state to another are determined by the rules of the 'machine'.

This is the model on which the system and the tests are based. Any system where you get a different output for the same input, depending on what has happened before, is a finite state system.

State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modelling a business object having specific states or testing screen-dialogue flows (e.g., for Internet applications or business scenarios).

For example, suppose you want to withdraw Rs. 500 from a bank ATM; you may be given cash. After some time you again try to withdraw Rs. 500, but you may be refused the money (because your account balance is insufficient). This refusal is because your bank account state has changed from having sufficient funds to cover the withdrawal to having insufficient funds. The transaction that caused your account to change its state was probably the earlier withdrawal.

A state transition model has four basic parts:

- The states that the software may occupy (funded/insufficient funds)
- The transitions from one state to another (not all transitions are allowed)
- The events that cause a transition (like withdrawing money)
- The actions that result from a transition (an error message or being given your cash)

In any given state, one event can cause only one action, but the same event from a different state may cause a different action and a different end state.
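The ATM example above can be sketched as a tiny state machine (the `Account` class is a simplification invented here, not the syllabus's model): the same withdraw event produces a different action and end state depending on whether the account is in the funded or insufficient-funds state.

```python
# Minimal sketch of the ATM withdrawal example as a finite state machine.
# States: funded / insufficient funds (relative to the amount requested).
# Event: withdraw(amount). Action: dispense cash or refuse.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount <= self.balance:   # current state: funded for this amount
            self.balance -= amount   # transition may leave it insufficient
            return "cash dispensed"
        return "withdrawal refused"  # current state: insufficient funds

acct = Account(balance=700)
print(acct.withdraw(500))  # cash dispensed (balance drops to 200)
print(acct.withdraw(500))  # withdrawal refused (200 cannot cover 500)
```

A state-transition test suite would exercise both outcomes of the withdraw event, plus the transition sequence that moves the account from one state to the other.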


Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.


    4.3.5 Use Case Testing

    Use Case:

A use case is a description of a system's behavior as it responds to a request that originates from outside of that system (the user). In other words, a use case describes who can do what with the system in question.

Use cases describe the system from the user's point of view. Use case testing is a technique that helps us identify test cases that exercise the whole system on a transaction-by-transaction basis from start to finish.

Because use cases are defined in terms of the end user and not the system, a use case describes what the user does and what the user sees rather than what inputs the software system expects and what the system outputs.

Use cases use the business language rather than technical terms. Use cases are very useful for designing acceptance tests with customer/user participation.
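A use-case-driven test exercises the whole transaction, main success scenario and extensions alike. This sketch is hypothetical (the `withdraw_cash` function and the scenario labels are invented to mirror the earlier ATM example), but it shows the shape of such a test:

```python
# Hypothetical "withdraw cash" use case exercised end to end:
# the main success scenario plus one extension (insufficient funds).
def withdraw_cash(balance, amount):
    """The system behaviour the use case describes, in miniature."""
    if amount > balance:
        return balance, "show 'insufficient funds' message"  # extension
    return balance - amount, "dispense cash"                 # main scenario

# Main success scenario: the user withdraws 500 from an account holding 700.
balance, outcome = withdraw_cash(700, 500)
assert (balance, outcome) == (200, "dispense cash")

# Extension: the user asks for more than the balance.
balance, outcome = withdraw_cash(200, 500)
assert (balance, outcome) == (200, "show 'insufficient funds' message")
print("use case scenarios passed")
```

Note that the scenarios are phrased in the user's terms (withdraw, dispense, message shown), not in terms of internal inputs and outputs, which is what makes them natural acceptance tests.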

    4.4 Structure-based or White-box Techniques

Structure-based or white-box testing is based on an identified structure of the software or the system, as seen in the following examples:

- Component level: the structure of a software component, i.e. statements, decisions, branches or even distinct paths
- Integration level: the structure may be a call tree (a diagram in which modules call other modules)
- System level: the structure may be a menu structure, business process or web page structure

4.4.1 Statement Testing and Coverage

Statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. It does not ensure coverage of all functionality.
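The metric itself is a simple ratio. As a toy illustration (the function name and the counts are invented, and real tools such as coverage instrumenters compute the counts automatically):

```python
def statement_coverage(exercised, total):
    """Statement coverage = exercised executable statements / total,
    expressed as a percentage."""
    return 100.0 * exercised / total

# A component with 10 executable statements, of which a suite runs 8:
print(statement_coverage(8, 10))  # 80.0
```

Even 100% on this metric only says every statement ran at least once; it says nothing about whether every decision outcome or every requirement was exercised.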


4.4.2 Decision Testing and Coverage

Example 2:

Three test conditions are required to cover all statements and all decision outcomes.

Decision coverage is stronger than statement coverage. 100% decision coverage for a component is achieved by exercising all decision outcomes in the component. 100% decision coverage guarantees 100% statement coverage, but not vice versa.
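The "but not vice versa" point can be seen in a few lines. In this invented example (the `apply_discount` function is illustrative only), a single test input executes every statement yet covers only one of the two outcomes of the decision:

```python
# One test input can execute every statement of this function yet cover
# only one outcome of its single decision.
def apply_discount(price, is_member):
    if is_member:           # the decision: has True and False outcomes
        price = price - 10  # the only statement inside the branch
    return price

# Test 1 (is_member=True) runs all statements: 100% statement coverage,
# but only the True outcome, so just 50% decision coverage.
print(apply_discount(100, True))   # 90
# Adding Test 2 (is_member=False) exercises the False outcome too,
# bringing decision coverage to 100%.
print(apply_discount(100, False))  # 100
```

This is why decision coverage subsumes statement coverage: covering both outcomes necessarily executes the statements on each branch, while the reverse does not hold.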

4.4.3 Other Structure-based Techniques

There are stronger levels of structural coverage beyond decision coverage, for example, condition coverage and multiple condition coverage.

The concept of coverage can also be applied at other test levels. For example, at the integration level the percentage of modules, components or classes that have been exercised by a test case suite could be expressed as module, component or class coverage.

    Tool support is useful for the structural testing of code.

4.5 Experience-based Techniques

Experience-based testing is where tests are derived from the tester's skill and intuition and their experience with similar applications and technologies. When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches. However, this technique may yield widely varying degrees of effectiveness, depending on the tester's experience.

A commonly used experience-based technique is error guessing. Generally, testers anticipate defects based on experience. A structured approach to the error guessing technique is to enumerate a list of possible defects and to design tests that attack these defects. This systematic approach is called fault attack. These defect and failure lists can be built based on experience, available defect and failure data, and common knowledge about why software fails.
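A fault attack can be as simple as a table pairing each anticipated defect with an input designed to provoke it. This sketch is hypothetical throughout: the defect list, the payloads, and the `normalise_name` function under attack are all invented for illustration:

```python
# Sketch of a structured fault attack: enumerate likely defects,
# then pair each with an input crafted to trigger it.
LIKELY_DEFECTS = {
    "empty input":     "",
    "whitespace only": "   ",
    "very long input": "x" * 10_000,
    "embedded quote":  "O'Brien",
    "leading zeros":   "007",
}

def normalise_name(raw):
    """Hypothetical function under attack: trims a name, rejects blanks."""
    cleaned = raw.strip()
    return cleaned if cleaned else None

for defect, payload in LIKELY_DEFECTS.items():
    print(defect, "->", repr(normalise_name(payload)))
```

Each row of the defect list becomes one attack test, and the list itself grows as defect data from past projects accumulates.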

Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes. It is an approach that is most useful where there are few or inadequate specifications and severe time pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the test process, to help ensure that the most serious defects are found.


4.6 Choosing Test Techniques

The choice of which test techniques to use depends on a number of factors:

- the type of system
- regulatory standards
- customer or contractual requirements
- level of risk
- type of risk
- test objective
- documentation available
- knowledge of the testers
- time and budget
- development life cycle
- use case models
- previous experience with types of defects found

Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels.

When creating test cases, testers generally use a combination of test techniques, including process, rule and data-driven techniques, to ensure adequate coverage of the object under test.


5. Test Management

    5.1 Test Organization

5.1.1 Test Organization and Independence

The effectiveness of finding defects by testing and reviews can be improved by using independent testers. Options for independence include the following:

- No independent testers; developers test their own code
- Independent testers within the development teams
- Independent test team or group within the organization, reporting to project management or executive management
- Independent testers from the business organization or user community
- Independent test specialists for specific test types, such as usability testers, security testers or certification testers (who certify a software product against standards and regulations)
- Independent testers outsourced or external to the organization

For large, complex or safety-critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.

The benefits of independence include:

- Independent testers see other and different defects, and are unbiased
- An independent tester can verify assumptions people made during specification and implementation of the system

Drawbacks include:

- Isolation from the development team (if treated as totally independent)
- Developers may lose a sense of responsibility for quality
- Independent testers may be seen as a bottleneck or blamed for delays in release

Testing tasks may be done by people in a specific testing role, or by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure or IT operations.

    5.1.2 Tasks of the Test Leader and Testers

In this syllabus two test positions are covered: test leader and tester. The activities and tasks performed by people in these two roles depend on the project and product context, the people in the roles, and the organization.


Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically, testers at the component and integration level would be developers, testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators.

5.2 Test Planning and Estimation

5.2.1 Test Planning

Test planning is a continuous activity and is performed in all life cycle processes and activities. Planning is influenced by:

- Test policy of the organization
- Scope of testing
- Objectives
- Risks
- Constraints
- Criticality
- Testability
- Availability of resources

    5.2.2 Test Planning Activities

Test planning activities for an entire system or part of a system may include:

- Determining the scope and risks and identifying the objectives of testing
- Defining the overall approach of testing, including the definition of the test levels and entry and exit criteria
- Integrating and coordinating the testing activities into the software life cycle activities (acquisition, supply, development, operation and maintenance)
- Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated
- Scheduling test analysis and design activities
- Scheduling test implementation, execution and evaluation
- Assigning resources for the different activities defined
- Defining the amount, level of detail, structure and templates for the test documentation
- Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues
- Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution

    5.2.3 Entry Criteria

Entry criteria define when to start testing, such as at the beginning of a test level or when a set of tests is ready for execution. Typically, entry criteria may cover the following:

- Test environment availability and readiness
- Test tool readiness in the test environment


- the technology
- the nature of the system
- test objectives
- regulations

Typical approaches include:

- Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk
- Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles)
- Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality characteristic-based
- Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies
- Dynamic and heuristic approaches, such as exploratory testing, where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks
- Consultative approaches, such as those in which test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team
- Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites

5.3 Test Progress Monitoring and Control

5.3.1 Test Progress Monitoring

The purpose of test monitoring is to provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget. Common test metrics include:

- Percentage of work done in test case preparation (or percentage of planned test cases prepared)
- Percentage of work done in test environment preparation
- Test case execution (e.g., number of test cases run/not run, and test cases passed/failed)
- Defect information (e.g., defect density, defects found and fixed, failure rate, and re-test results)
- Test coverage of requirements, risks or code
- Subjective confidence of testers in the product
- Dates of test milestones
- Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test
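Several of the metrics above are simple ratios over raw counts the team already tracks. This sketch is illustrative only; the function name, the chosen metrics, and the sample counts are all assumptions, not a prescribed reporting format:

```python
# Sketch: computing a few common progress metrics from raw counts.
def progress_metrics(planned, prepared, run, passed, defects, kloc):
    """Return preparation/execution/pass percentages and defect density."""
    return {
        "preparation %": round(100 * prepared / planned, 1),
        "execution %":   round(100 * run / planned, 1),
        "pass rate %":   round(100 * passed / run, 1) if run else 0.0,
        "defect density (per KLOC)": round(defects / kloc, 2),
    }

print(progress_metrics(planned=200, prepared=180, run=150,
                       passed=135, defects=48, kloc=12.0))
```

Collected over time, such figures feed both the schedule/budget assessment described above and the exit-criteria measurement discussed under test reporting.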

    5.3.2 Test Reporting

Test reporting is concerned with summarizing information about the testing endeavor, including:

- What happened during a period of testing, such as dates when exit criteria were met


- Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software

Metrics should be collected during and at the end of a test level in order to assess:

- The adequacy of the test objectives for that test level
- The adequacy of the test approaches taken
- The effectiveness of the testing with respect to the objectives

    5.3.3 Test Control

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.

Examples of test control actions include:

- Making decisions based on information from test monitoring
- Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
- Changing the test schedule due to availability or unavailability of a test environment
- Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build

5.4 Configuration Management

The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.

For testing, configuration management may involve ensuring the following:

- All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process
- All identified documents and software items are referenced unambiguously in test documentation

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness(es).

During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.


5.5 Risk and Testing

Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk is determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).

    5.5.1 Project Risks

Project risks are the risks that surround the project's capability to deliver its objectives, such as:

- Organizational factors:
  - Skill, training and staff shortages
  - Personnel issues
  - Political issues, such as:
    - Problems with testers communicating their needs and test results
    - Failure by the team to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
  - Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)
- Technical issues:
  - Problems in defining the right requirements
  - The extent to which requirements cannot be met given existing constraints
  - Test environment not ready on time
  - Late data conversion, migration planning and development, and testing of data conversion/migration tools
  - Low quality of the design, code, configuration data, test data and tests
- Supplier issues:
  - Failure of a third party
  - Contractual issues

    5.5.2 Product Risks

Potential failure areas (adverse future events or hazards