Testing Tools


Page 1: Testing tools

Software : Software is a set of programs that take inputs and provide outputs. There are two types:
1) Software Application : software developed for a specific customer's requirements.
2) Software Product : software developed based on overall market requirements. Interested customers purchase licenses of the software product.

Software Bidding : A proposal to develop new software is called software bidding. In software application development the proposal comes from a specific customer; in product development the organization raises its own proposal.

Kick-off Meeting : A CEO-level person conducts a meeting with high-level management and selects a Project Manager to handle the new software development process.

PIN (Project Initiation Note) Document : The selected Project Manager (PM) prepares this document to estimate the required people, technologies, time and resources, and submits the report to the CEO. The CEO conducts a review and gives the green signal to the Project Manager.

SDLC (Software Development Life Cycle) : (Waterfall Model)

Requirements Gathering ↓

Analysis & Planning ↓

Designing ↓

Code ↓

Testing ↓

Release & Maintenance

Page 2: Testing tools

In the above SDLC process only a single stage of testing is available, and that testing is conducted by the developers themselves. For these reasons, organizations are concentrating on multiple stages of testing and separate testing teams to achieve quality.

Software Quality :
→ Meets customer requirements (functionality)
→ Meets customer expectations (usability, performance)
→ Cost to purchase the license
→ Time to release the software

Software Quality Assurance (SQA) : Monitoring and measuring the strength of the development process is called Software Quality Assurance (Verification).

Software Quality Control (SQC) : Validating the product with respect to customer requirements is called Software Quality Control (Validation / Testing).

"V" Model : 'V' stands for Verification & Validation. This model defines the development process together with the testing stages; it is an extension of the SDLC model.

Verification (left arm of the V)        Validation (right arm of the V)
Requirements Gathering & Review    ↔    User Acceptance Testing
Analysis & Planning with Review    ↔    System Testing
High Level Design & Review         ↔    Integration Testing (Programs Testing)
Low Level Design & Review          ↔    Unit Testing (Program Testing)
                    Coding (at the base of the V)

Page 3: Testing tools

In the above 'V' Model, reviews are called verification methods and testing levels are called validations. In small and medium scale organizations, management maintains a separate testing team for system testing only, to decrease project cost, because system testing is the bottleneck stage in the software development process.

I) Reviews in Analysis : In general, the software development process starts with requirements gathering: from the specific customer in application development, and from model customers in product development. After gathering requirements, the responsible Business Analyst prepares the BRS (Business Requirements Specification) document, also known as the User Requirements Specification or Customer Requirements Specification. The Business Analyst then sits with the Project Manager to develop the SRS and the Project Plan. The Software Requirements Specification consists of the functional requirements to be developed and the system requirements to be used.

After completing the BRS & SRS preparation, the corresponding Business Analyst conducts a review to estimate the completeness and correctness of the documents:
→ Are they correct requirements?
→ Are they complete requirements?
→ Are they achievable requirements?
→ Are they reasonable (with respect to time) requirements?
→ Are they testable requirements?
(Go to the next stage in the V Model.)

Example :
BRS (What?) : Addition
SRS (How?) :
Functional Requirement : 2 inputs, 1 output, '+' operation
System Requirement : 'C' language

Page 4: Testing tools

II) Reviews in Design : After successful analysis and review, the design category people prepare the HLD and LLDs (High Level Design & Low Level Designs). The High Level Design specifies the overall architecture of the software; it is also known as the System Design or Architectural Design.

Every functionality or module's internal structure is specified by a Low Level Design document. These are also known as Structural Designs or Component Designs.

The HLD is a system-level design and an LLD is a component- or module-level design, so one software design consists of one HLD and multiple LLDs. The corresponding designers conduct a review on these documents for completeness and correctness:
→ Are they understandable designs?
→ Are they correct designs?
→ Are they complete designs?
→ Are they followable designs?
(Go to the next stage in the V Model.)

Example (LLD, LOGIN flow) : the user enters User ID & Password → validated against the Database → Valid : Next Window; Invalid : Re-Login.

Example (HLD, module tree) : Root : LOGIN; Leaves : Mailing, Chatting, LOGOUT.

Page 5: Testing tools

III) Unit Testing : After successful designs and reviews, the corresponding programmers start coding to construct the software physically. In this stage the programmers write programs and test each program using White Box / Glass Box / Open Box testing techniques.

(A) Basic Paths Coverage : Programmers use this technique to estimate the execution of a program. The programmer executes the program more than once to cover all areas of the program in execution.

(B) Control Structure Coverage : After successful basic path coverage, the corresponding programmer concentrates on the correctness of the program's execution in terms of inputs, process and outputs.

(C) Program Technique Coverage : After successful basic paths and control structure coverage, the corresponding programmer measures the execution speed of the program. If that speed is not acceptable, the programmer changes the program's structure without disturbing its functionality. In this coverage, programmers use third-party software such as monitors and profilers to calculate the execution speed. Note : monitors are used in VB.NET; profilers are used in Java.

White-box coverage applied to each program:
→ Basic Paths Coverage → Control Structure Coverage → Program Technique Coverage → Mutation Coverage
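A minimal sketch of basic paths coverage (the max program and its inputs are illustrative assumptions, not from this material): the program is executed more than once so that every path through its decision point is covered. Run with "java -ea BasicPathDemo" so the assertions are checked.

    public class BasicPathDemo {
        // Program under test: returns the larger of two inputs.
        static int max(int a, int b) {
            if (a > b) {        // one decision point creates two basis paths
                return a;       // path 1: a > b
            }
            return b;           // path 2: a <= b
        }

        public static void main(String[] args) {
            // Execute the program more than once so every path is covered.
            assert max(5, 3) == 5 : "path 1 (a > b) failed";   // covers path 1
            assert max(2, 9) == 9 : "path 2 (a <= b) failed";  // covers path 2
            System.out.println("Both basis paths executed and passed.");
        }
    }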

Page 6: Testing tools

(D) Mutation Coverage : Mutation means a change in a program. Programmers perform small changes in a program to estimate the completeness and correctness of that program's testing: if the existing tests still pass after the change, the testing was incomplete.
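A minimal sketch of the idea, assuming a trivial addition program and test (none of these names come from the document): the same tests are re-run against a mutated copy, and a surviving mutant signals incomplete testing.

    import java.util.function.IntBinaryOperator;

    public class MutationDemo {
        static final IntBinaryOperator ORIGINAL = (a, b) -> a + b;
        static final IntBinaryOperator MUTANT   = (a, b) -> a - b; // '+' changed to '-'

        // The existing unit test, parameterized by the program version.
        static boolean testsPass(IntBinaryOperator add) {
            return add.applyAsInt(2, 3) == 5 && add.applyAsInt(0, 7) == 7;
        }

        public static void main(String[] args) {
            System.out.println("original passes: " + testsPass(ORIGINAL)); // true
            // If the mutant also passed, the test suite would be incomplete;
            // here it fails, so the mutation is caught and testing is adequate.
            System.out.println("mutant caught:   " + !testsPass(MUTANT));
        }
    }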

Basic Paths Coverage, Control Structure Coverage and Program Technique Coverage are applied to a program to test it; Mutation Coverage is applied to the program's testing to estimate the completeness and correctness of that testing. (Go to the next stage in the V Model.)

IV) Integration Testing : After the dependent programs are developed and unit tested, the programmers interconnect them to form a complete system / software. This testing is also known as Interface Testing. There are four approaches to integrating programs and testing them.

A) Top-Down Approach : In this approach the programmers interconnect the main program and some of the sub-programs. In place of the remaining sub-programs, they use temporary programs called "Stubs".

Mutation flow : Test → Passed → perform a change (mutation) → repeat the test → still Passed = incomplete testing; Failed = complete testing.

Top-Down example : Main is connected to Sub1, with a Stub (under construction) in place of Sub2.

Page 7: Testing tools

B) Bottom-Up Approach : In this approach the programmers interconnect sub-programs without starting from the main program. In place of the missing main (calling) program they use a temporary program called a "Driver".

C) Hybrid Approach : It is a combined approach of the Top-Down and Bottom-Up approaches. It is also known as the Sandwich approach.

D) System Approach : Integrating the programs only after 100% of the coding is complete is called the System approach or Big Bang approach.

Bottom-Up / Hybrid examples : temporary Driver programs (under construction) stand in for the missing Main or calling program while Sub1, Sub2 and Sub3 are interconnected and tested.
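A minimal sketch of a stub and a driver (the sub-program names and logic are illustrative assumptions, not from the document): top-down integration replaces an unfinished sub-program with a stub, while bottom-up integration uses a driver to call a finished sub-program before the real Main exists.

    public class IntegrationDemo {
        // Real, finished sub-program.
        static int sub1(int x) { return x * 2; }

        // Stub: temporary stand-in for Sub2, which is still under construction.
        static int sub2Stub(int x) { return 0; /* dummy value, real logic pending */ }

        // Main program integrated top-down: calls one real sub and one stub.
        static int mainProgram(int x) { return sub1(x) + sub2Stub(x); }

        public static void main(String[] args) {
            // Driver: temporary caller used bottom-up to test sub1 before Main exists.
            System.out.println("driver check of sub1: " + (sub1(3) == 6 ? "pass" : "fail"));
            // Top-down check of Main with the stub in place of Sub2.
            System.out.println("mainProgram(3) with stub = " + mainProgram(3));
        }
    }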

Page 8: Testing tools

V) System Testing : After successful integration testing, the development team releases a software build to the separate testing team in the organization. System testing is classified into three sub-stages.

1. Usability Testing 2. Functional Testing 3. Non-Functional Testing

1. Usability Testing : In general, test execution starts with usability testing. During this test the testing team concentrates on the "user friendliness" of the software build. There are two sub-levels in usability testing.

a) User Interface Testing : → Ease of Use (Understandable Screens) → Look & Feel (Attractive Screens) → Speed in Interface (short Navigations in Screens)

b) Manuals Support Testing :

In this test the testing team verifies the Help documentation of the software.

Case Study : Receive the S/w build from developers after integration testing → User Interface Testing → Functional Testing → Non-Functional Testing → Manuals Support Testing. (User Interface Testing and Manuals Support Testing together form Usability Testing.)

Page 9: Testing tools

2. Functional Testing : It is a mandatory testing level in system testing. During this test the testing team concentrates on the correctness of the customer requirements in the software build. This testing is classified into the sub-tests below.

a) Control Flow Testing : the changes in the properties of objects in the application / software build with respect to mouse and keyboard operations.

b) Error Handling Testing : the prevention of wrong operations with meaningful messages.

c) Input Domain Coverage : whether the software build takes valid types and sizes of inputs or not.

d) Manipulations Coverage : whether the software build provides the customer-expected output or not.

e) Database Testing : the impact of front-end screen operations on the back-end database content.

f) Sanitation Testing : finding extra functionality with respect to the customer requirements.

Case Study :

A software build consists of Screens (front end) and a Database (back end). Control Flow, Error Handling, Input Domain, Manipulations and Sanitation testing apply to the front end; Database Testing applies to the back end. Together these make up Functional / Black Box Testing.

Page 10: Testing tools

3. Non-Functional Testing : It is an optional level in system testing, expensive and complex to conduct. During this test the testing team concentrates on the extra characteristics of the software.

a) Reliability Testing : also known as Recovery Testing. The testing team validates whether the software build changes from an abnormal state back to a normal state or not.

b) Compatibility Testing : also known as Portability Testing. The testing team checks whether the software build runs on the customer-expected platforms or not. Platform means operating system, browser, compilers and other system software.

c) Configuration Testing : also known as Hardware Compatibility Testing. The testing team checks whether the software build supports hardware devices of different technologies or not. Ex : different technology printers, networks, etc.

d) Inter System Testing : also known as End-to-End Testing or Interoperability Testing. The testing team checks whether the software build co-exists with other software applications to share common resources or not.

Case Study :

Compatibility Testing S/w Build → Operating System

Configuration Testing S/w Build → H/w Device Ex : Printers

Inter System Testing S/w Build → Other S/w Build

Page 11: Testing tools

e) Data Volume Testing : During this test the testing team inserts model data into the application build to estimate the peak data limit. Estimating this data limit is called Data Volume Testing. Ex : MS Access technology software manages a 2 GB database, SQL Server manages a 6-7 GB database, and Oracle technology manages a 10-12 GB database as maximums.

f) Installation Testing :
→ Setup program execution to start the installation
→ Easy interface during the installation
→ Occupied disk space after the installation

g) Load Testing : Load means the number of concurrent users using the software build at a time. During this test the testing team executes the build under the customer-expected configuration and the customer-expected load to estimate the speed of processing (performance), as in the sketch below.
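A minimal sketch of the load testing idea (the operation, sleep time and user count are illustrative assumptions, not from the document): N concurrent "users" run the same operation while the total processing time is measured.

    import java.util.concurrent.*;

    public class LoadTestDemo {
        // Stand-in for the operation under load.
        static void businessOperation() throws InterruptedException {
            Thread.sleep(50); // simulated processing time
        }

        public static void main(String[] args) throws Exception {
            int concurrentUsers = 20; // assumed customer-expected load
            ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
            long start = System.nanoTime();
            for (int i = 0; i < concurrentUsers; i++) {
                pool.submit(() -> {
                    try { businessOperation(); } catch (InterruptedException ignored) {}
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
            System.out.printf("served %d users in %d ms%n",
                    concurrentUsers, (System.nanoTime() - start) / 1_000_000);
        }
    }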

h) Stress Testing : Executing the software build under the customer-expected configuration and more than the customer-expected load, to estimate the peak load limit, is called Stress Testing.

i) Endurance Testing : Executing the software build under the customer-expected configuration and the customer-expected load, to estimate continuity in processing, is called Endurance Testing.

j) Security Testing :

Installation : the S/w build plus supported software are installed on a customer-expected configuration system (customer-expected size of RAM, HDD, processor, OS, etc.).

Load : Client 1, Client 2, … Client N connect concurrently to a Server running the S/w build process.

Page 12: Testing tools

It is also known as penetration testing. During this test the testing team concentrates on three factors.

Authorization : the S/w build allows valid users and prevents invalid users. Ex : login with password, PIN, digital signatures, fingerprints, eye retina, scratch cards, etc.

Access Control : the permissions of valid users to access functionality in the build. Ex : Admin vs. User.

Encryption / Decryption : the code conversion between the client and server processes.

k) Localization and Internationalization Testing : This testing is applicable to multilingual software, which accepts characters from multiple user languages. Ex : English, Spanish, French, etc. In localization testing the test engineer provides multiple languages' characters as inputs to the S/w build. In internationalization testing the test engineer provides common-language (English) characters as input, and third-party tools transform the common-language characters into other languages' characters. Note : Java Unicode is a better technology for developing multilingual software.

l) Parallel Testing : also known as Competitive / Comparative Testing. During this test the testing team compares the S/w build with an old version of the same software, or with similar products in the market, to estimate competitiveness.

VI) User Acceptance Testing :

Client ↔ Server : the request is encrypted at the client, travels as cipher text, and is decrypted at the server; the response is encrypted at the server, travels as cipher text, and is decrypted at the client.
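A minimal sketch of that encrypt/decrypt round trip (the message and key handling are illustrative assumptions, not from the document), using the JDK's built-in AES support:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class CipherDemo {
        public static void main(String[] args) throws Exception {
            SecretKey key = KeyGenerator.getInstance("AES").generateKey(); // shared key

            // Client side: encrypt the request into cipher text.
            Cipher enc = Cipher.getInstance("AES");
            enc.init(Cipher.ENCRYPT_MODE, key);
            byte[] cipherText = enc.doFinal("transfer 500".getBytes("UTF-8"));

            // Server side: decrypt the cipher text back into the request.
            Cipher dec = Cipher.getInstance("AES");
            dec.init(Cipher.DECRYPT_MODE, key);
            System.out.println(new String(dec.doFinal(cipherText), "UTF-8"));
        }
    }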

Page 13: Testing tools

After successful system testing, the Project Manager concentrates on UAT to collect feedback from real customers or model customers. There are two ways of doing user acceptance testing.

α Alpha Testing : for software applications; conducted by real customers with the involvement of developers and testers; at the development site.

β Beta Testing : for software products; conducted by model customers; at the model customer's site.

VII) Release Testing : After UAT and the resulting modifications are complete, the Project Manager forms a Release Team (On-Site Team) to release the application to the real customer, or to release the product to license-purchasing customers. This release team consists of a few programmers, a few testers and a few hardware engineers, with a team lead. The team observes the factors below at the customer site.

1) Complete Installation 2) Overall Functionality 3) Input devices handling (Key Board, Mouse….etc.,) 4) Output devices handling (Monitor, Printer….etc.,) 5) Secondary storage devices handling (Floppy, Pen Drive…etc.,) 6) O/s error handling 7) Co-existence with other S/w in customer site.

Checking the above factors at the customer site is also known as Port Testing / Deployment Testing. After a successful release, the release team conducts training sessions for the customer-site people and then returns to the organization.

Page 14: Testing tools

VIII) Maintenance : While using the software, the customer-site people send Software Change Requests (SCRs) to the organization. These requests are received by a special team called the Change Control Board (CCB), which consists of a few programmers, a few testers and a few hardware engineers, along with the Project Manager.

Case Study :-

Testing Stage | Deliverable to be Tested | Responsibility | Testing Techniques
Reviews in Analysis | BRS & SRS | BA | Walkthrough, Inspections & Peer Reviews
Reviews in Design | HLD & LLDs | Designers | Walkthrough, Inspections & Peer Reviews
Unit Testing | Programs | Programmers | White Box Testing Techniques
Integration Testing | Interfaces between programs | Programmers | Top-Down, Bottom-Up, Hybrid, System
System Testing | S/w Build | Test Engineers / Quality Control Engineers | Usability, Functional / Black Box, Non-Functional Testing
User Acceptance Testing | S/w Build | Real Customers / Model Customers | α-Testing, β-Testing
Release Testing | S/w Build | Release Team | S/w release factors (7 factors in VII)
Maintenance Level Testing | S/w Changes | CCB | Regression Testing

S/w Change Request handling (conducted by the CCB) :
Enhancement : Impact Analysis → Perform S/w Changes → Test S/w Changes.
Missed Defects : Impact Analysis → Perform S/w Changes → Test S/w Changes → Improve the testing process and people capability.

Page 15: Testing tools

Walkthrough : studying a document to estimate its completeness and correctness.
Inspection : searching for an issue in a document is called inspection.
Peer Review : comparing the document with other, similar documents.

Challenges in Software Testing : In general, every testing team plans formal testing. Due to certain challenges, testing teams sometimes conduct ad-hoc (informal) testing instead. There are five styles of ad-hoc testing.

a) Monkey / Chimpanzee Testing : due to lack of time, the testing team tests only the main activities of the software.

b) Buddy Testing : due to lack of time, project management combines one programmer and one tester as a "buddy" team; these teams conduct development and testing in parallel.

c) Exploratory Testing : also known as Artistic Testing. Due to lack of documentation, test engineers depend on past experience, discussions with others, video conferences with customer-site people, Internet browsing and surfing similar software to understand the customer requirements.

d) Pair Testing : due to lack of knowledge, senior test engineers are grouped with junior test engineers to share their knowledge.

e) Bebugging : to estimate the efforts of test engineers, the development people add defects to the code. This informal way is called Bebugging or Defect Feeding / Seeding.

Page 16: Testing tools

System Testing Process

Test Initiation

Test Planning

Test Design

Test Execution

Test Closure

Test Reporting

Development Vs System Testing

Development flow : S/w Bidding → Kick-off Meeting → PIN Document → Requirements Gathering (BRS) → Analysis & Planning (SRS & Project Plan) → S/w Design & Review (HLD, LLDs) → Coding → Unit Testing (White Box techniques) → Integration → Integration Testing → Initial Build → System Test Execution → System Test Closure → User Acceptance Test → Release & Maintenance.

In parallel, the testing team performs System Test Initiation → System Test Planning → Test Design, so that system test execution (with test reporting) can start as soon as the initial build arrives.

Page 17: Testing tools

I) System Test Initiation : In general, the system testing process starts with system test initiation by the Project Manager or Test Manager. They develop the Test Strategy (Test Methodology) document, which defines the reasonable tests to be applied in the current project.

Components in Test Strategy : The Test Strategy document consists of the components below, which define the test approach to be followed by the team in the current project.

1. Scope & Objective : the purpose of testing in the current project.
2. Business Issues : the budget allocation for testing in the current project.
3. Roles & Responsibilities : the names of the jobs in the testing team and the responsibility of each job in the current project.
4. Communication & Status Reporting : the required negotiations between the various jobs in the testing team.

Ex : of 100% project cost, roughly 64% goes to development & maintenance and 36% to system testing.

Test Initiation : input SRS → output Test Strategy, prepared by the Project Manager / Test Manager.

Page 18: Testing tools

5. Test Responsibility Matrix (TRM) : the list of reasonable tests to be applied in the current project. Ex. (see the table below)

6. Test Automation & Testing Tools : the purpose of automation testing in the current project and the testing tools available in the organization.
7. Defect Reporting & Tracking : the required negotiation between the testing team and the development team to report and solve defects.
8. Change & Configuration Management : the maintenance of testing deliverables for future reference.
9. Risks & Assumptions : the expected list of risks and the solutions to overcome them.
10. Testing Measurements & Metrics : the list of measurements and metrics used to estimate test status.
11. Training Plan : the number of training sessions required for the testing team to understand the customer requirements.

Testing Topic | Yes/No | Comment
UI Testing | Yes | -
Manual Testing | Yes | -
Functional Testing | Yes | -
Load Testing | No | Lack of resources
Stress Testing | No | Lack of resources
Endurance Testing | No | Lack of resources
Compatibility Testing | Yes | -
Inter System Testing | No | No need with respect to requirements
…etc. | …etc. | …etc.

Page 19: Testing tools

II) Test Planning : After the Test Strategy document is prepared, the Test Lead category people concentrate on preparing the test plan documents.

Testing Team Formation : In general, test planning starts with testing team formation. In this stage the Test Lead depends on the factors below.
→ Project size (number of function points)
→ Number of testers available on the bench
→ Test duration with respect to the project plan
→ Available test environment resources (ex. testing tools)

Case Study :

Type of Project | Developers : Testers
ERP, Client/Server, Website | 3 : 1
System S/w Application | 1 : 1
Machine Critical | 1 : 7

Identify Risks : After testing team formation, the Test Lead concentrates on team-level risk analysis. Ex :
Risk 1 : Lack of time
Risk 2 : Lack of resources
Risk 3 : Lack of documentation
Risk 4 : Delays in delivery
Risk 5 : Lack of development process seriousness
Risk 6 : Lack of communication

Inputs (SRS, HLD & LLDs, Project Plan, Test Strategy) → Testing Team Formation → Identify Risks → Prepare Detailed Test Plans → Review Plans → output : Test Plans.

Page 20: Testing tools

Prepare Detailed Test Plans : After testing team formation and risk analysis, the Test Lead concentrates on test plan document preparation in the IEEE 829 format (IEEE : Institute of Electrical and Electronics Engineers). Format :

1. Test Plan ID : Unique number or name for future reference about project.

2. Introduction : About Project

3. Test Items : The names of Modules or Functionalities in Project

4. Features to be Tested : The names of functionalities to be tested.

5. Features not to be Tested : the names of already-tested modules, if available.

6. Test Approach : The List of selected tests by P.M.

7. Test Environment : the hardware and software required for testing.

8. Entry Criteria : Test Cases Designed, Test Environment Established, S/w Build received from Developers.

9. Suspension Criteria : → Test environment abandoned

→ Show stopper in the build (build not working)

→ Too many pending defects

10. Exit Criteria : → All modules in build covered

→ Test duration exceeded

→ All major defects solved

11. Test Deliverables : The list of testing documents to be prepared by test engineers in testing. (Test Scenarios, Test Cases, Automation Programs, Test Log, Defects reports and weekend reports)

12. Staff and Training Needs : The names of selected test engineers & required training sessions to understand customer requirements.

13. Responsibilities : work allocation to the selected test engineers. (All responsible tests on specified modules, or specified testing on all modules.)

14. Schedule : The dates & times to conduct testing

15. Risks & Assumptions : The previously analyzed risks and solutions to over come.

16. Approvals : The signature of Test Lead & Project Manager.

(Test Items → what to test; Test Approach → how to test; Responsibilities → whom to test; Schedule → when to test.)

Page 21: Testing tools

Review Test Plan : After the test plan document is prepared, the Test Lead conducts a review meeting to estimate the completeness and correctness of the planned document.
→ Requirements / modules / features / functionalities coverage
→ Testing topics coverage
→ Risk-oriented coverage

Note : After test planning and before starting test design, the Business Analyst and Test Lead conduct training sessions for the selected test engineers on the customer requirements in the project. Some organizations invite outside domain / subject experts for these training sessions.

III) Test Design : After the required training sessions on the customer requirements, the corresponding test engineers concentrate on test design to prepare test scenarios and test cases. A test scenario specifies "what" to test; a test case specifies "how" to test, including a detailed procedure. In other words, test cases are derived from test scenarios. There are four methods in test design.

1. Functional Specification Based Test Case Design
2. Use Cases Based Test Case Design
3. User Interface Based Test Case Design
4. Functional & System Specification Based Test Case Design

1. Functional Specification Based Test Case Design : Test engineers use this method to prepare test scenarios and cases for functional testing, depending on the functional specifications in the SRS.

Flow : BRS → SRS → HLD → LLDs → S/w Build. From the functional specifications in the SRS, test design derives Test Scenarios → Test Cases, which feed system test execution (mainly functional testing, alongside usability and non-functional testing).

Page 22: Testing tools

Approach :
Step 1 : Collect the functional specifications for the responsible areas.
Step 2 : Take one specification and read it to gather the entry point, required inputs, normal flow, expected outputs, alternative flows, exit point and exception rules.
Step 3 : Prepare test scenarios depending on the gathered information.
Step 4 : Review those test scenarios and implement them as test cases.
Step 5 : Go to Step 2 until all responsible functional specifications are studied.

Functional Specification 1 : A login process allows User ID & Password for authorized users. The User ID object takes alphanumerics in lower case, 4 to 16 characters long. The Password object takes alphabets in lower case, 4 to 8 characters long. Prepare the test scenarios.

Test Scenario 1 : Verify the User ID object
Boundary Value Analysis (BVA) (Size) :
Min = 4 char → Pass; Min-1 = 3 char → Fail; Min+1 = 5 char → Pass
Max = 16 char → Pass; Max-1 = 15 char → Pass; Max+1 = 17 char → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : a-z, 0-9 | Invalid : A-Z, special characters, blank field

Test Scenario 2 : Verify the Password object
Boundary Value Analysis (BVA) (Size) :
Min = 4 char → Pass; Min-1 = 3 char → Fail; Min+1 = 5 char → Pass
Max = 8 char → Pass; Max-1 = 7 char → Pass; Max+1 = 9 char → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : a-z | Invalid : A-Z, 0-9, special characters, blank field
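A minimal sketch of Functional Specification 1 as executable checks (the validator and sample inputs are assumptions for illustration, not from the document): BVA on size and ECP on type for the User ID object.

    public class UserIdValidation {
        // Validator under test: the rule assumed from the specification
        // (lower-case alphanumerics, 4 to 16 characters).
        static boolean isValidUserId(String id) {
            return id != null && id.matches("[a-z0-9]{4,16}");
        }

        public static void main(String[] args) {
            // Boundary Value Analysis on size.
            System.out.println(isValidUserId("abcd"));              // min = 4  -> true
            System.out.println(isValidUserId("abc"));               // min-1    -> false
            System.out.println(isValidUserId("a234567890123456"));  // max = 16 -> true
            System.out.println(isValidUserId("a2345678901234567")); // max+1    -> false

            // Equivalence Class Partitioning on type.
            System.out.println(isValidUserId("user01"));  // valid class: a-z, 0-9
            System.out.println(isValidUserId("USER01"));  // invalid class: upper case
            System.out.println(isValidUserId(""));        // invalid class: blank field
        }
    }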

Page 23: Testing tools

Test Scenario 3 : Verify the login operation
Decision Table :
User ID | Password | Expected O/p
Valid value | Valid value | Next window
Valid value | Invalid | Error message
Invalid | Valid | Error message
Valid | Blank field | Error message
Blank | Valid | Error message

Note : Exhaustive testing is not possible; for this reason the testing team conducts optimal testing using black box techniques such as BVA, ECP, decision tables, regular expressions, etc.

Functional Specification 2 : In an insurance application, users apply for different types of insurance policies. If a user selects Type-A insurance, the system asks for the user's age. The age value should be greater than 16 years and less than 80 years. Prepare the test scenarios.

Test Scenario 1 : Verify Type-A selection
Test Scenario 2 : Verify focus moves to Age when Type-A insurance is selected
Test Scenario 3 : Verify the Age value
Boundary Value Analysis (BVA) (Range) :
Min = 17 → Pass; Min-1 = 16 → Fail; Min+1 = 18 → Pass
Max = 79 → Pass; Max-1 = 78 → Pass; Max+1 = 80 → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : 0-9 | Invalid : a-z, A-Z, special characters, blank field

Functional Specification 3 : In a shopping application, users place purchase orders for different types of items. The purchase order allows the user to select an Item No. and to enter a Qty. up to 10. The purchase order returns the Total Amount along with one item's price. Prepare the test scenarios.
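The login decision table lends itself to a data-driven check. Below is a minimal sketch (the login function and sample values are assumptions for illustration, not from the document) that walks the table rows and compares expected with actual output.

    public class LoginDecisionTable {
        static String login(String userId, String password) {
            boolean uOk = userId.matches("[a-z0-9]{4,16}");
            boolean pOk = password.matches("[a-z]{4,8}");
            return (uOk && pOk) ? "Next Window" : "Error Message";
        }

        public static void main(String[] args) {
            // Each row: user ID, password, expected output (from the decision table).
            String[][] rows = {
                {"user01", "pass", "Next Window"},    // valid / valid
                {"user01", "PASS", "Error Message"},  // valid / invalid
                {"US",     "pass", "Error Message"},  // invalid / valid
                {"user01", "",     "Error Message"},  // valid / blank
                {"",       "pass", "Error Message"},  // blank / valid
            };
            for (String[] r : rows) {
                String actual = login(r[0], r[1]);
                System.out.println(r[2].equals(actual) ? "Pass" : "Fail: got " + actual);
            }
        }
    }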

Page 24: Testing tools

Test Scenario 1 : Verify Item No. selection
Test Scenario 2 : Verify the Qty. value
Boundary Value Analysis (BVA) (Range) :
Min = 1 → Pass; Min-1 = 0 → Fail; Min+1 = 2 → Pass
Max = 10 → Pass; Max-1 = 9 → Pass; Max+1 = 11 → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : 0-9 | Invalid : a-z, A-Z, special characters, blank field
Test Scenario 3 : Verify Total Amount = Qty. × item price

Functional Specification 4 : A door opens when a person comes in front of it, and the door closes when that person goes inside. Prepare the test scenarios.

Test Scenario 1 : Verify door open
Person | Door | Criteria
Present | Opened | Pass
Present | Closed | Fail
Absent | Opened | Fail
Absent | Closed | Pass

Test Scenario 2 : Verify door close
Person | Door | Criteria
Inside | Closed | Pass
Inside | Opened | Fail

Test Scenario 3 : Verify the door's operation when a person is standing in the middle of the doorway.

Functional Specification 5 : In an e-banking application, customers connect to the bank server through a login process. The login requires the customer to fill the fields below.
Password : 6-digit number
Prefix : 3-digit number that does not start with 0 or 1
Suffix : 6-digit alphanumeric

Page 25: Testing tools

Area Code : 3-digit number, but optional
Command : Cheque Deposit, Money Transfer, Mini Statement and Bills Paid. Prepare the test scenarios.

Test Scenario 1 : Verify the Password value
Boundary Value Analysis (BVA) (Size) : Min = Max = 6 digits → Pass; 5 digits → Fail; 7 digits → Fail
Equivalence Class Partition (ECP) (Type) : Valid : 0-9 | Invalid : a-z, A-Z, special characters, blank field

Test Scenario 2 : Verify the Prefix
BVA (Size) : Min = Max = 3 digits → Pass; 2 digits → Fail; 4 digits → Fail
ECP (Type) : Valid : [2-9][0-9][0-9] | Invalid : a-z, A-Z, special characters, blank field

Test Scenario 3 : Verify the Suffix
BVA (Size) : Min = Max = 6 digits → Pass; 5 digits → Fail; 7 digits → Fail
ECP (Type) : Valid : 0-9, a-z, A-Z | Invalid : special characters, blank field

Test Scenario 4 : Verify the Area Code
BVA (Size) : Min = Max = 3 digits → Pass; 2 digits → Fail; 4 digits → Fail
ECP (Type) : Valid : 0-9, blank field | Invalid : a-z, A-Z, special characters
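The ECP classes above map naturally onto regular expressions. A minimal sketch (the class and sample values are assumptions for illustration, not from the document); note that the optional Area Code treats a blank field as valid:

    import java.util.regex.Pattern;

    public class BankFieldPatterns {
        static final Pattern PASSWORD = Pattern.compile("[0-9]{6}");
        static final Pattern PREFIX   = Pattern.compile("[2-9][0-9]{2}"); // no leading 0/1
        static final Pattern SUFFIX   = Pattern.compile("[a-zA-Z0-9]{6}");
        static final Pattern AREA     = Pattern.compile("([0-9]{3})?");   // optional field

        public static void main(String[] args) {
            System.out.println(PASSWORD.matcher("123456").matches()); // true
            System.out.println(PREFIX.matcher("199").matches());      // false: starts with 1
            System.out.println(SUFFIX.matcher("ab12cd").matches());   // true
            System.out.println(AREA.matcher("").matches());           // true: blank allowed
        }
    }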

Page 26: Testing tools

Test Scenario 5 : Verify command selection (Cheque Deposit, Money Transfer, Mini Statement and Bills Paid)
Test Scenario 6 : Verify the login operation to connect to the bank server
Remaining Fields | Area Code | Expected O/p
All valid | Valid | Next window
All valid | Blank field | Next window
All valid | Invalid | Error message
Any one invalid | Valid / Blank | Error message
Any one blank field | Valid / Blank | Error message

Functional Specification 6 : In a library management system, readers apply for an Identity No. To get this number, the reader fills the fields below.
Reader Name : alphabets in lower case with Init Cap, as a single word
House Name : alphabets in lower case, as a single word
PIN Code : related to the India Postal Department
City Name : alphabets in upper case, as a single word
Phone No. : related to India subscribers, and optional
Prepare the test scenarios.

Test Scenario 1 : Verify the Reader Name
Boundary Value Analysis (BVA) (Size) :
Min = 1 char → Pass; Min-1 = 0 char → Fail; Min+1 = 2 char → Pass
Max = 256 char → Pass; Max-1 = 255 char → Pass; Max+1 = 257 char → Fail
(In any front-end development tool, the default maximum is 256 characters.)
Equivalence Class Partition (ECP) (Type) : Valid : [A-Z][a-z]* | Invalid : 0-9, special characters, blank field

Test Scenario 2 : Verify the House Name
BVA (Size) : Min = 1 char → Pass; Min-1 = 0 char → Fail; Min+1 = 2 char → Pass; Max = 256 char → Pass; Max-1 = 255 char → Pass; Max+1 = 257 char → Fail

Page 27: Testing tools

ECP (Type) : Valid : [a-z]* | Invalid : A-Z, 0-9, special characters, blank field

Test Scenario 3 : Verify the PIN Code
BVA (Size) : Min = Max = 6 digits → Pass; 5 digits → Fail; 7 digits → Fail
ECP (Type) : Valid : [1-9][0-9][0-9][0-9][0-9][0-9] | Invalid : a-z, A-Z, special characters, blank field

Test Scenario 4 : Verify the City Name
BVA (Size) : Min = 1 char → Pass; Min-1 = 0 char → Fail; Min+1 = 2 char → Pass; Max = 256 char → Pass; Max-1 = 255 char → Pass; Max+1 = 257 char → Fail
ECP (Type) : Valid : [A-Z]* | Invalid : a-z, 0-9, special characters, blank field

Test Scenario 5 : Verify the Phone Number
BVA (Size) : Min = 10 digits → Pass; Min-1 = 9 digits → Fail; Min+1 = 11 digits → Pass; Max = 12 digits → Pass; Max+1 = 13 digits → Fail
ECP (Type) : Valid : 0-9, blank field | Invalid : A-Z, a-z, special characters

Page 28: Testing tools

Test Scenario 6 : Verify reader registration
Decision Table :
Remaining Fields | Telephone Number | Expected O/p
All valid | Valid | Identity No.
All valid | Blank field | Identity No.
All valid | Invalid | Error msg.
Any one invalid | Valid / Blank | Error msg.
Any one blank field | Valid / Blank | Error msg.

Functional Specification 7 : A computer shutdown operation.
Test Scenario 1 : Verify shutdown option selection using Shut Down
Test Scenario 2 : Verify shutdown option selection using Alt+F4
Test Scenario 3 : Verify shutdown option selection using Ctrl+Alt+Del
Test Scenario 4 : Verify shutdown operation success
Test Scenario 5 : Verify shutdown operation using the Run command
Test Scenario 6 : Verify shutdown operation while a process is running
Test Scenario 7 : Verify shutdown operation using the power-off button

Functional Specification 8 : Money withdrawal from an ATM, with all rules and regulations.
Test Scenario 1 : Verify card insertion
Test Scenario 2 : Verify card insertion at a wrong angle
Test Scenario 3 : Verify cancel after card insertion
Test Scenario 4 : Verify language selection
Test Scenario 5 : Verify cancel after language selection
Test Scenario 6 : Verify PIN entry
Test Scenario 7 : Verify operation with a wrong PIN
Test Scenario 8 : Verify operation when a wrong PIN is entered 3 times consecutively

Page 29: Testing tools

Test Scenario 9 : Verify cancel after entering the PIN
Test Scenario 10 : Verify account type selection
Test Scenario 11 : Verify operation when a wrong account type is selected with respect to the inserted card
Test Scenario 12 : Verify cancel after account type selection
Test Scenario 13 : Verify withdrawal option selection
Test Scenario 14 : Verify cancel after selecting withdrawal
Test Scenario 15 : Verify amount entry
Test Scenario 16 : Verify operation with a wrong denomination in the amount
Test Scenario 17 : Verify withdrawal operation success (correct amount, right receipt, able to take the card back)
Test Scenario 18 : Verify withdrawal operation with more than the available balance
Test Scenario 19 : Verify withdrawal operation with more than the day limit
Test Scenario 20 : Verify withdrawal operation with a network problem
Test Scenario 21 : Verify withdrawal with a shortage of money in the ATM
Test Scenario 22 : Verify withdrawal operation with the number of transactions per day exceeded
Test Scenario 23 : Verify withdrawal operation with another bank's card
Test Scenario 24 : Verify withdrawal operation with a stolen card

Page 30: Testing tools

2. Use Cases Based Test Case Design : This is an alternative to functional specification based test case design. In this method the test engineers depend on use cases, instead of functional specifications, to prepare test scenarios and test cases.

The Business Analyst and Test Lead category people develop the use cases from the corresponding functional specifications in the SRS. Every use case is an implemented form of a functional specification. Use Case Format :

1. Use Case ID : Unique number or name for future reference

2. Use Case Description : a summary of the corresponding functionality

3. Required Inputs : The required Inputs for corresponding Functionality

4. Precondition : The necessary Condition to follow before operating corresponding functionality

5. Events List :

Events / Tasks | Expected O/p or Outcome

(A Step by Step procedure with expected outputs)

6. Activity Flow Diagram : a pictorial / diagrammatic representation of the corresponding functionality

7. Post Condition : Necessary tasks to do after corresponding functionality

Flow : BRS → SRS → HLD → LLDs → Coding (UT & IT) → S/w Build. The BA + Test Lead derive Use Cases from the functional specifications in the SRS; from the use cases, test engineers derive Test Scenarios → Test Cases for system test execution.

Page 31: Testing tools

8. Alternative events list : Alternative procedures to do this functionality if available

9. Proto Type : A screen shot related to corresponding functionality.

10. Related use cases : The names of other Use Cases relation to corresponding functionality

Approach :

Step1 : Collect use cases of responsible areas

Step2 : Take one use case and study

Step3 : Identify Entry Point, Required I/p, Normal Flow, Expected O/p, Exit Point, Alternative Flows and Exceptions rules.

Step4 : Prepare Test Scenarios depending on above Identified Information.

Step5 : Review that scenario and implement them as Test Cases

Step6 : Go to Step2 until all responsible Use Cases Study

Use Case 1 :

1. Use Case ID : UC_Login

2. Use Case Description : Login operation for authorization

3. Required Inputs : User ID : alphabets in lower case, 4-16 characters long. Password : alphanumerics in lower case, 4-8 characters long.

4. Precondition : New User Registration to get valid User ID & Password

5. Events List :

Events / Tasks | Expected O/p or Outcome
Enter the User ID and Password values, then click the OK button | Next window for a valid user; "invalid data" error msg. for an invalid user

Page 32: Testing tools

6. Activity Flow Diagram :

7. Post Condition : Log out operation is mandatory after successful Login

8. Alternative events list : None

9. Proto Type :

10. Related use cases : UC_New User, UC_Logout

LOGIN flow : the user enters User ID & Password → checked against the Database → Valid : Next Window; Invalid : Error Msg. and Re-Login.

Page 33: Testing tools

Test Scenario 1 : Check the User ID
Boundary Value Analysis (BVA) (Size) :
Min = 4 char → Pass; Min-1 = 3 char → Fail; Min+1 = 5 char → Pass
Max = 16 char → Pass; Max-1 = 15 char → Pass; Max+1 = 17 char → Fail
Equivalence Class Partition (ECP) (Type) : Valid : a-z | Invalid : A-Z, 0-9, special characters, blank field

Test Scenario 2 : Check the Password
BVA (Size) : Min = 4 char → Pass; Min-1 = 3 char → Fail; Min+1 = 5 char → Pass; Max = 8 char → Pass; Max-1 = 7 char → Pass; Max+1 = 9 char → Fail
ECP (Type) : Valid : a-z, 0-9 | Invalid : A-Z, special characters, blank field

Test Scenario 3 : Check the OK button click
User ID | Password | Expected Output
Valid | Valid | Next window
Valid | Invalid | Invalid data error msg.
Invalid | Valid | Invalid data error msg.
Value | Blank field | Invalid data error msg.
Blank | Value | Invalid data error msg.

Test Scenario 4 : Check the Cancel button
Event | Expected Output
Click Cancel after opening login | Login window closed
Click Cancel after entering the User ID | Login window closed
Click Cancel after entering the Password | Login window closed

Test Scenario 5 : Check the minimize icon
Test Scenario 6 : Check the maximize icon
Test Scenario 7 : Check the close icon

Page 34: Testing tools

Use Case 2 :

1. Use Case ID : UC_Book_Issue

2. Use Case Description : Issue a Book for Valid User

3. Required Inputs : User ID in the format mm_yy_xxxx (xxxx : 4-digit serial); Book ID in the format BOOK_xxxx

4. Precondition : New User Registration to get valid User ID

5. Events List :

Events / Tasks | Expected O/p or Outcome
Enter the User ID and click the "Go" button | Focus moves to Book ID for a valid user; "Invalid User" error msg. for an invalid user
Enter the Book ID and click the "Go" button | "Book issued" message for an available book; "unavailable book" message for an unavailable Book ID

6. Activity Flow Diagram :

7. Post Condition : receive the issued book from the computer operator

8. Alternative events list : None

BOOK ISSUE flow : the user enters the User ID → checked against the Database → invalid user : Re-Login; valid : the user enters the Book ID → checked against the Database → unavailable book : Re-Login; valid / available : "Book Issued".

Page 35: Testing tools

9. Proto Type :

10. Related use cases : UC_New User, UC_Book Feeding

Test Scenario 1 : Verify the User ID
Boundary Value Analysis (BVA) (Size) : Min = Max = 10-position value → Pass; 9-position value → Fail; 11-position value → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : [0][1-9][_][0-9][0-9][_][0-9][0-9][0-9][0-9] or [1][0-2][_][0-9][0-9][_][0-9][0-9][0-9][0-9]
Invalid : a-z, A-Z, 0-9 outside the format, special characters except _, blank field

Test Scenario 2 : Verify the "Go" button click
User ID | Expected O/p after clicking "Go"
Valid value | Focus moves to Book ID
Invalid value | "Invalid User" error message
Blank field | "Invalid User" error message

Test Scenario 3 : Verify the Book ID
BVA (Size) : Min = Max = 8-position value → Pass; 7-position value → Fail; 9-position value → Fail
ECP (Type) :
Valid : [B][O][O][K][_][0-9][0-9][0-9][0-9]
Invalid : a-z, A-Z except B, O, K, special characters except _, blank field

Prototype : a "Book Issue" window with a User ID field and a Book ID field, each with a "Go" button, plus minimize / maximize / close (- □ X) controls.

Page 36: Testing tools

Test Scenario 4 : Verify the "Go" click
Book ID | Expected O/p after clicking "Go"
Valid Book ID | "Book issued" msg.
Invalid Book ID | "Unavailable book" message
Blank field | "Unavailable book" message

Test Scenario 5 : Verify the minimize icon

Test Scenario 6 : Verify the maximize icon

Test Scenario 7 : Verify the close icon

3. User Interface Based Test Design : The functional specification based and use case based test designs are used to prepare test scenarios and cases for functional testing. User interface based test design is used by test engineers to prepare test scenarios and cases for usability testing.

In this method the test engineers depend on the user interface requirements in the SRS. In general, test engineers write common test scenarios for usability testing, applicable to any type of application.
Test Scenario 1 : Verify the spelling in every screen
Test Scenario 2 : Verify the meaning of error messages
Test Scenario 3 : Verify the Init Cap of labels in every screen

Flow : BRS → SRS → HLD → LLDs → Coding (UT & IT) → S/w Build. From the UI requirements in the SRS, test engineers derive Test Scenarios → Test Cases for system test execution.

Page 37: Testing tools

Test Scenario 4 : Verify color uniqueness throughout the screens
Test Scenario 5 : Verify font / style uniqueness throughout the screens
Test Scenario 6 : Verify size uniqueness throughout the screens
Test Scenario 7 : Verify the alignment of objects in every screen
Test Scenario 8 : Verify line-spacing uniqueness throughout the screens
Test Scenario 9 : Verify the tool tips of icons in every screen
Test Scenario 10 : Verify the default object in every screen
Test Scenario 11 : Verify uniform background colors of objects in every screen
Test Scenario 12 : Verify scroll bars when the screen size is greater than the desktop
Test Scenario 13 : Verify keyboard access to every object in every screen
Test Scenario 14 : Verify abbreviations & shortcuts in the screens
Test Scenario 15 : Verify the positions of multiple-data objects in every screen. Ex : list box, menu, table, etc.
Test Scenario 16 : Verify help messages (manuals support testing)
Test Scenario 17 : Verify functionally grouped objects in every screen
Test Scenario 18 : Verify the borders of functionally grouped objects in every screen
Test Scenario 19 : Verify the labels of objects with respect to functionality
Test Scenario 20 : Verify window labels with respect to functionality

4. Functional and System Specification Based Test Design : After selecting test scenarios for functional and usability testing, the test engineers concentrate on test scenario selection for non-functional testing, depending on the functional and system specifications in the SRS. Functional specifications describe the required functionalities of the software; system specifications describe the required environment to be used.

Page 38: Testing tools

Example Test Scenarios for Compatibility Testing :
Test Scenario 1 : Verify login on Win NT with the customer-expected configuration
Test Scenario 2 : Verify login on Win 2000 with the customer-expected configuration
Test Scenario 3 : Verify login on Win Vista with the customer-expected configuration
And more…

Example Test Scenarios for Performance Testing :
Test Scenario 1 : Verify login under the customer-expected load and configuration
Test Scenario 2 : Verify login under more than the customer-expected configuration
And more…

Example Test Scenarios for Installation Testing :
Test Scenario 1 : Verify the setup program to start the installation
Test Scenario 2 : Verify interface easiness during the installation
Test Scenario 3 : Verify the occupied disk space after the installation
And more…

Test Case Format : After selecting test scenarios for the responsible areas in terms of functional, usability and non-functional testing, the test engineers implement them as test cases, using the IEEE 829 (Institute of Electrical and Electronics Engineers) test case format.

1. Test Case ID : Unique Number / Name for Future Reference

2. Test Case Name : The Corresponding Test Scenario

3. Feature to be Tested : the name of the corresponding module or functionality

Flow : BRS → SRS (functional specifications + system specifications) → HLD & LLDs → Coding (UT & IT) → S/w Build. From both kinds of specifications, test engineers derive Test Scenarios → Test Cases for system test execution.

Page 39: Testing tools

4. Test Suite ID : The Unique number or name of a Test Batch. This case is a member in that Batch

5. Priority : The importance of this Test Case (P0 priority for Functional Test Cases, P1 Priority for Non-Functional Test Cases and P2 Priority for Usability Test Cases.)

6. Test Environment : The required Hardware and Software to execute this test.

7. Test Effort : person-hours (ex. 20 minutes is an average test execution time)

8. Test Duration : the date and time to execute this test.

9. Test Setup : The necessary tasks to do before start this test execution.

10. Test Procedure / Data Matrix :

Test Procedure columns : Step No. | Action / Task / Event | Required I/p | Expected O/p | Actual O/p | Result | Defect ID | Comments

Data Matrix columns : I/p Object | ECP (Type) : Valid, Invalid | BVA (Range / Size) : Min, Max

11. Test Case Pass / Fail Criteria : The Final result of this Test Case after execution

Note 1 : In general, test engineers are not interested in filling every field of the test case format, due to lack of time and the similarity of field values across test cases.

Note 2 : Test engineers use the test procedure for operational test cases and the data matrix for input-object test cases.

Functional Specification : In a banking application, valid employees create fixed deposits with depositor-provided information. In this fixed deposit operation, the employees fill the fields below.

Depositor Name : Alphabets in Lower Case with Int.Cap, allows multiple words in name

Amount : 1500 to 1,00,000

(The data matrix is used in both test design and test execution.)

Page 40: Testing tools

Time : Up to 12 months

Interest : Numeric with one decimal

If the time>10months, then the Interest>10% from Bank Rules

Prepare Test Scenarios and Test Cases :

Test Scenario 1 : Verify Depositor Name

Test Scenario 2 : Verify Amount

Test Scenario 3 : Verify Time

Test Scenario 4 : Verify Interest

Test Scenario 5 : Verify Fixed Deposit Operation

Test Scenario 6 : Verify Fixed Deposit Operation with Bank Rule

Test Case Documents :

Test Case 1 :- 1. Test Case ID : TC_FD_Ravi_24th May_1

2. Test Case Name : Verify Depositor Name

3. Test Suit ID : TS_FD

4. Priority : P0

5. Test Setup : Depositor Name is taking inputs

6. Data Matrix :

I/p Object | ECP Valid | ECP Invalid | BVA (Size) Min | BVA (Size) Max
Depositor Name | ([A-Z][a-z]*)* | 0-9, special characters, blank field | 1 char | 256 char

Test Case 2 :- 1. Test Case ID : TC_FD_Ravi_24th May_2

2. Test Case Name : Verify Amount

3. Test Suit ID : TS_FD

4. Priority : P0

5. Test Setup : the Amount object is taking inputs

Page 41: Testing tools

6. Data Matrix :

I/p Object | ECP Valid | ECP Invalid | BVA (Range) Min | BVA (Range) Max
Amount | 0-9 | a-z, A-Z, special characters, blank field | 1500 | 100000

Test Case 3 :- 1. Test Case ID : TC_FD_Ravi_24th May_3

2. Test Case Name : Verify Time

3. Test Suit ID : TS_FD

4. Priority : P0

5. Test Setup : Time Object is taking inputs

6. Data Matrix :

I/p Object | ECP Valid | ECP Invalid | BVA (Range) Min | BVA (Range) Max
Time | 0-9 | a-z, A-Z, special characters, blank field | 1 month | 12 months

Test Case 4 :- 1. Test Case ID : TC_FD_Ravi_24th May_4

2. Test Case Name : Verify Interest

3. Test Suit ID : TS_FD

4. Priority : P0

5. Test Setup : Interest Object is taking inputs

6. Data Matrix :

I/p Object | ECP Valid | ECP Invalid | BVA (Range) Min | BVA (Range) Max
Interest | 0-9 with one decimal | a-z, A-Z, special characters, blank field | 0.1 | 100

Page 42: Testing tools

Test Case 5 :- 1. Test Case ID : TC_FD_Ravi_24th May_5

2. Test Case Name : Verify Fixed Deposit Operation

3. Test Suit ID : TS_FD

4. Priority : P0

5. Test Setup : Valid Values are available in hand

6. Test Procedure :

Step No. | Action | Required I/p | Expected O/p
1 | Connect to the bank server | Valid Emp ID | Menu appears
2 | Select the "FD" option | None | Fixed deposit form opened
3 | Fill the fields and click OK | All valid | Acknowledgement
  |  | Any one invalid | Error msg.
  |  | Any one blank field | Error msg.

Test Case 6 :- 1. Test Case ID : TC_FD_Ravi_24th May_6

2. Test Case Name : Verify Fixed Deposit Operation with Bank Rule

3. Test Suit ID : TS_FD

4. Priority : P0

5. Test Setup : Valid Values are available in hand

6. Test Procedure :

Step No. | Action | Required I/p | Expected O/p
1 | Connect to the bank server | Valid Emp ID | Menu appears
2 | Select the "FD" option | None | Fixed deposit form opened
3 | Fill the fields and click OK | Valid Name, Amount, Time > 10 with Interest > 10 | Acknowledgement
  |  | Valid Name, Amount, Time > 10 with Interest <= 10 | Error msg.

Page 43: Testing tools

As in the above example, the test engineers implement test scenarios as test cases. Every test case is a combination of the corresponding test scenario and the details required to apply that test on the software build.

Test Case Selection Review : After the test scenarios and cases are written, the Test Lead and test engineers conduct a review meeting to estimate the completeness and correctness of those documents. In this review the testing team depends on the coverages below.

□ Requirements Oriented Coverage (Modules)

□ Testing Topic Oriented Coverage (UT,FT,NFT)

IV. Test Execution : After test design and review, the testing team concentrates on the issues below.

□ Formal meeting with developers

□ Test Environment Establishment

□ Levels of Test Execution

□ Formal Meeting : In general, the test execution process starts with a formal meeting between testing team and development team representatives. In this meeting the representatives concentrate on build version control and defect tracking.

Under the build version control concept, the development team modifies the software build's code to resolve defects and releases the modified build with a unique version number. This version numbering system lets test engineers distinguish the old build from the modified build. For version controlling, developers also use version control tools (ex. VSS (Visual SourceSafe)).

To report mismatches to the development team, the test engineers first report each mismatch to the Defect Tracking Team (DTT).

Test Lead + Project Manager + Project Lead + Business Analyst → DTT

Page 44: Testing tools

□ Test Environment Establishment : After the formal meeting, the testing team concentrates on establishing the test environment with all the required hardware and software.

FTP : File Transfer Protocol (single location)

TCP/IP : Transmission Control Protocol / Internet Protocol (different locations)

□ Levels of Test Execution:-

Diagram : a central server holds the configuration repository, accessed by the development environment, the test environment and project management over FTP / TCP/IP.

Levels : Level-0 (Sanity) → Level-1 (Comprehensive) → Level-2 (Regression) → Level-3 (Final Regression). Development fixes defects in between : Initial Build → Stable Build → Defect Report → Modified Build.

Page 45: Testing tools

Case Study :-

Initial Build ↓

Sanity Testing (Level-0) ↓

Stable Build ↓

Comprehensive (Level-1) ↓

Defect Detection ↓

Modified Build ↓

Regression Test (Level-2) ↓

Defect Closing ↓

Master Build ↓

Final Regression (Level-3) ↓

Golden Build (Able to Release)

□ Levels of Test Execution Vs Test Cases :
Level-0 → some P0 (functional) test cases
Level-1 → all P0, P1 & P2 test cases
Level-2 → selected P0, P1 & P2 test cases, with respect to the modifications
Level-3 → selected P0, P1 & P2 test cases, with respect to defect density

□ Level-0 Sanity Testing : After downloading the initial build from the configuration repository on the server, the testing team concentrates on Level-0 sanity testing to estimate the testability of the software. Testability means understandable, operable, observable, controllable, consistent, simple, maintainable and automatable.

If the initial build is not stable, the testing team sends the build back to the developers. If the build is stable, the test engineers concentrate on Level-1 test execution to detect defects. Level-0 testing is also known as Sanity Testing / Smoke Testing / Testability Testing / Tester Acceptance Testing / Build Verification Testing / Octangle Testing.

Page 46: Testing tools

□ Level-1 Comprehensive / Real Testing : In Level-1 test execution, the test engineers execute all test cases as batches. Every test batch consists of a set of dependent test cases, where the end state of one test is the base state of the next. Test batches are also known as test suites, test sets, test builds or test chains.

The test engineers continue test execution batch by batch, and case by case within every batch. If a test case step's expected value is not equal to the actual value, the test engineer concentrates on defect reporting; if possible, they also continue test execution.

During Level-1 test execution, the test engineers prepare a test log document to record the test results.

Test Log Document Format :-

Test Case ID | Result (Pass / Fail) | Defect ID | Executed By | Executed On | Comments

There are three types of test results:

→ Passed : all expected values are equal to the actual values
→ Failed : at least one expected value is not equal to the actual value
→ Blocked : test execution is postponed due to an incorrect parent functionality
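For example, one filled-in row of the test log might read (the IDs and names here are hypothetical):

TC_Login_04 | Fail | D_112 | Ravi | 12-Jan-2005 | OK button stayed disabled for a valid user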

[Flow of Level-1 execution: Receive Stable Build from Developers → Make Test Cases as Batches → Select a Batch → Select a Test Case → Take a Step in the Case → if Step Expected = Actual, go to the Next Case and then the Next Batch; if not, go to Defect Reporting and continue on the build where possible.]


V. Defect Reporting & Tracking :- During Level-1 test execution, some test case expected values turn out to be not equal to the actual values. These mismatches are called Defects / Issues / Bugs / Flaws.

Defect Report :-

1. Defect ID : unique number or name
2. Description : summary of the mismatch between the tester's expected value and the build's actual value
3. Build Version ID : version number of the build in which the Test Engineers detected the defect
4. Feature : name of the module or functionality in which the defect was detected
5. Test Case ID : ID of the failed test case whose execution detected the defect
6. Reproducible : Yes → the defect appears every time in test execution; No → the defect appears rarely in test execution
7. If Yes, attach the procedure
8. If No, attach the procedure and screenshots
9. Severity : the seriousness of the defect in terms of functionality
   High / Critical :- not able to continue testing without resolving it
   Medium / Major :- able to continue testing, but compulsory / mandatory to resolve
   Low / Minor :- able to continue; may or may not be resolved
10. Priority : the importance of solving the defect in terms of customer interest (High / Medium / Low)
11. Detected By : the name of the Test Engineer
12. Detected On : the date of detection and submission
13. Status : New (reported for the first time) / Re-Open (re-reported)
14. Assigned To : report to the tracking team
15. Suggested Fix : suggestion to solve that defect (optional)
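A classic illustration of the difference (a hypothetical defect): a spelling mistake in the customer's company name on the home page blocks no functionality (Low Severity), yet the customer wants it fixed first (High Priority); conversely, a crash in a rarely used admin report can be High Severity but Low Priority.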


Defect Reporting Process :

→ The Test Engineer reports the defect to the DTT with status "New".
→ The DTT analyzes that defect. If it is not accepted, its status is changed to "Rejected".
→ If it is accepted, the DTT categorizes the defect and changes its status to "Open":
   • Data related defect → assigned to the Testing Team
   • Procedure related defect → assigned to the Testing Team
   • H/w or infrastructure defect → assigned to the H/w Team
   • Code related defect → assigned to the Development Team

Case Study :-

→ Code related defect : Test Engineer → (report) → Defect Tracking Team → (defect assigned) → Project Lead + Programmers
→ Test case procedure & test data related defect : Test Engineer → (report) → Defect Tracking Team → (defect assigned) → BA + TL + TE
→ H/w or environment related defect : Test Engineer → (report) → Defect Tracking Team → (defect assigned) → H/w or Infrastructure Team


New : reported for the first time
Assigned : accepted by the DTT
Rejected : not accepted by the DTT
Deferred : accepted, but not taken up for solving now due to low severity and low priority
Open : the responsible team is ready to resolve the defect
Fixed : the responsible team has resolved the defect
Re-Open : the defect was not correctly solved, so it is re-reported
Closed : the defect is correctly solved and confirmed through Regression Testing

Test Data Related Defect Fixing : If our reported defect is accepted by the Defect Tracking Team (DTT) and classified as a test data related mismatch, the responsible testing team concentrates on Correct Data Collection (CDC), with the help of the BA and TL, so that there is no conceptual gap. The Test Engineers then re-execute the previously failed test on the same build with the correct test data. This test repetition is called Retesting or Confirmation Testing.

Defect Life Cycle or Bug Life Cycle :

New ↓
Assigned ↓
Open ↓
Fixed ↓
Closed

(From New, a defect can also move to Rejected or Deferred; if a Fixed defect fails confirmation testing, it is Re-Opened and returns to Open.)

[Flow: Build → Test Case Failed → Defect Reporting → Data Related Defect → Collect Correct Data → Repeat the Test Case with the correct data on the same Build → Retesting / Confirmation Testing.]


Test Script or Procedure Related Defect Fixing : If our reported defect is accepted by the DTT as a test procedure related defect, the responsible testing team prepares the correct procedure for that test case with the help of the TL and BA.

Infrastructure Related Defect Fixing : If our reported defect is accepted by the DTT as an environment, infrastructure or hardware related defect, the responsible hardware team re-establishes the correct test environment.

[Flow: Build → Test Case Failed → Report to DTT → Environment Related Defect → H/w Team re-establishes the Test Environment → Repeat the Test Case in the modified environment → Retesting / Confirmation Testing.]

[Flow: Build → Test Case Failed → Report to DTT → Procedure Related Defect → Test Engineers prepare the correct test procedure → Repeat the Test Case with the correct procedure → Retesting / Confirmation Testing.]


Code Related Defect Fixing :- If our reported defect is accepted as a code related defect, the responsible programmers / developers perform changes in the build coding to resolve that defect.

After receiving the modified build from the Development Team, the Testing Team concentrates on Retesting & Regression Testing.

As the flow below shows, the test engineer re-executes the previously failed test on the modified build to confirm the defect fixing; this is called Retesting or Confirmation Testing.

[Flow: Build → Test Case Failed → Report Defect to DTT → Code Related Defect → Programmers:
→ PL updates the status of the defect to "Open"
→ Impact analysis by programmers; selected coding areas reviewed by PL
→ If changes are required in documents: changes by the concerned person (BA / Designers), then review of the document changes by BA / Designers & Project Lead
→ Changes in coding by programmers
→ Unit test & make the modified build
→ PL changes the defect status to "Fixed"
→ Release the Modified Build with a unique version number and a release note
On the modified build, the previously failed tests are repeated (Retesting) and the related previously passed tests are repeated (Regression Testing).]


To identify side effects of the defect-fixing modifications in the modified build, the test engineers re-execute previously passed related tests on that modified build; this is called Regression Testing.

Level-2 Regression Testing :

Case 1 :- If the severity of the defect fixed by the development team is High, the Test Engineers repeat All P0, All P1 and Carefully Selected P2 test cases on the modified build w.r.t. the modifications specified in the release note.
Case 2 :- If the severity of the fixed defect is Medium, the Test Engineers repeat All P0, Carefully Selected P1 and Some P2 test cases on the modified build w.r.t. the modifications specified in the release note.
Case 3 :- If the severity of the fixed defect is Low, the Test Engineers repeat Some P0, Some P1 and Some P2 test cases on the modified build w.r.t. the modifications specified in the release note.
Case 4 :- If the development team releases a modified build w.r.t. changes in customer requirements, the Test Engineers re-execute All P0, All P1 and Carefully Selected P2 test cases on the modified build w.r.t. those changes. In this case the Test Engineers also change the test scenarios and test cases w.r.t. the changes in customer requirements.

[Flow: Take the Modified Build and Release Note → identify the severity of the fixed defect in that modified build → High: All P0, All P1, Carefully Selected P2 cases; Medium: All P0, Carefully Selected P1 and Some P2 test cases; Low: Some P0, Some P1, Some P2 test cases → run them on the modified build to detect side effects with respect to the modifications specified in the release note.]

VI. Test Closure :-


After completion of all reasonable tests and the closing of detected defects, the test lead conducts a review meeting to stop testing. In this review the TL analyzes the below factors with the involvement of the Test Engineers.
1. Coverage Analysis :-
→ Requirements oriented coverage (module-wise)
→ Testing topic related coverage (Usability, Functional, Non-Functional)
2. Defect Density Calculation : Ex :

Module / Requirement | Defect %
A | 20%
B | 20%
C | 40% (needs Regression Testing)
D | 20%
Total | 100%

Here module C contributed the highest share of the detected defects, so it is selected for final regression.
3. Analysis of Deferred Defects : whether the deferred defects can really be postponed or not.
Level-3 Final Regression Testing : After completion of a successful test closure review, the Testing Team concentrates on Level-3 or Final Regression Testing.

[Flow: Identify the high defect density module → Effort Estimation (person / hour) → Plan Regression → Regression Testing → Golden Build, with defect reporting if required.]

VII. User Acceptance Testing (UAT) :



After completion of Final Regression Testing, Project Management concentrates on User Acceptance Testing to collect feedback from real customers / model customers. There are two ways of conducting User Acceptance Testing: Alpha Testing and Beta Testing.
VIII. Sign Off : After completion of successful User Acceptance Testing and the resulting modifications, the Test Lead prepares the Final Test Summary Report and relieves the corresponding Test Engineers from the project. The Final Test Summary Report is a combination of the below documents:
→ Test Strategy / Methodology
→ Test Plan(s)
→ Test Scenarios
→ Test Cases
→ Test Logs
→ Defect Reports
→ Requirements Traceability Matrix

Requirement ID | Test Case ID | Result (Pass / Fail) | Defect ID | Status (Closed / Deferred) | Comments
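For example, one row of the matrix might read (the IDs here are hypothetical):

RQ_07 | TC_Login_04 | Fail | D_112 | Closed | fixed and confirmed in the next build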

The matrix is a mapping between requirements and defects via test cases. Case Study (5 months of a testing process) :-

Deliverable | Responsibility | Duration
Test Strategy | PM / TM | 4-5 days
Test Planning | Test Lead | 4-5 days
Requirements Training to Test Engineers | BA + Domain / Subject Experts | 5-10 days
Test Scenarios & Review | Test Engineer | 5-10 days
Test Cases Implementation & Review | Test Engineer | 10-15 days
Build + Level-0 (Sanity Testing) | Test Engineer | 2-3 days
** Test Automation | Test Engineer | 10-15 days
Level-1 and Level-2 Test Execution | Test Engineer | 30-40 days


Defect Reporting | Test Engineer | Ongoing (same day)
Status Reporting | Test Lead | Twice weekly
Test Closure & Level-3 | Test Lead & Test Engineer | 5-10 days
User Acceptance Testing | Real / Model Customers, in front of Developers and Testers | 3-5 days
Sign Off | Test Lead | 1-2 days

From the W-Model below, testing tools are available for functional testing, some of the non-functional testing, endurance testing and data volume testing. The remaining non-functional tests and usability testing are conducted manually by the test engineers.

W-Model

[Diagram: on the development side, Req. Analysis → S/w Design → Coding + Unit Testing → Integration Testing → Build. On the testing side, the build undergoes System Testing, both manually and with test automation: Usability Testing (no tools in the market), Functional Testing (Win Runner / QTP / Robot / Silk) and Non-Functional Testing (Load Runner & JMeter).]

Note : Test Automation is Optional


Win Runner 8.0 :

→ Developed by Mercury Interactive and taken over by Hewlett-Packard (HP)
→ Functional testing tool
→ This version was released in January 2005
→ Supports software built in VB, .NET, Java, Power Builder, HTML, Delphi, VC++, D2K and Siebel technology for functional testing
→ To support SAP, PeopleSoft, XML, Multimedia and Oracle Applications ("ERPs") in addition to the above technologies, test teams use Quick Test Professional (QTP)
→ Win Runner runs on Windows only; X-Runner is used for Unix / Linux

Win Runner Test Process :

Receive Stable Build From Developers after Sanity Testing ↓

Identify Functional Test Cases (Priority P0) to Automate (English + Manual) ↓

Create Automation Programs (TSL) for that Functional Test Cases ↓

Runs Programs on S/w Build to detect defects ↓

Test Reporting if required

With the above approach, the Test Engineers convert manual functional test cases into Test Script Language (TSL) programs. TSL is a "C"-like language.

Add-in Manager : This window lists all Win Runner supported technologies with respect to the license. Test Engineers select the current project's technology in that list.

Welcome Screen : After a successful Win Runner launch, the Welcome Screen appears on the desktop. The screen consists of 3 options:

→ Create a New Test → Open an Existing Test → A Quick Preview of Win Runner

Page 58: Testing tools

Win Runner Icons : Start Recording, Run From Top, Run From Arrow, Stop Recording, Pause (Stop Run)

Win Runner Test Automation Frame Works : Win Runner 8.0 allows you to convert manual functional test cases into Test Script Language (TSL) programs in 4 ways:

→ Record and Playback Frame Work → Data Driven Frame Work → Keyword Driven Frame Work → Hybrid Frame Work

I. Record & Playback Frame Work : In this frame work the Test Engineers convert manual test cases into automation programs with a two-step procedure.

A. Recording Operations B. Inserting Check Points

A. Recording Operations :- During test automation program creation, the Test Engineers record the software build operations. There are two recording modes: Context Sensitive Mode and Analog Mode. In Context Sensitive Mode, the tool records mouse and keyboard operations with respect to the objects and windows in the build. To select this mode, the Test Engineers use the options below: click the "Start Recording" icon once, or Test Menu → Record → Context Sensitive option. To record mouse pointer movements with respect to desktop co-ordinates, Test Engineers use Analog Mode in Win Runner. To select this mode we can use the options below.



Click the "Start Recording" icon twice, or Test Menu → Record → Analog.
Ex :- digital signatures, graph drawing and image movements.
"F2" is the shortcut key to change from one mode to the other.
Note :- In Analog Mode, Win Runner records mouse pointer movements with respect to desktop co-ordinates. For this reason, the Test Engineers must not change the corresponding window position or the monitor resolution afterwards.

B. Inserting Check Points : After recording build operations, the Test Engineers insert check points with respect to their expectations. Every check point compares the Test Engineer's expected value with the build's actual value. There are four check points in Win Runner:

→ GUI (Graphical User Interface) Check Point
→ Bitmap Check Point
→ Database Check Point
→ Text Check Point

GUI (Graphical User Interface) Check Point :

To verify properties of objects, we can use this check point. It consists of 3 sub-options:

i. For Single Property
ii. For Object / Window
iii. For Multiple Objects

i. For Single Property :- To verify one property of one object we can use this option.
Ex.-1 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Open an order in the Flight Reservation window | Valid Order No. | "Delete Order" button enabled


Build :- Flight Reservation Window
Automation Program :-
set_window ("Flight Reservation", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Delete Order", "enabled", 1);

Ex.-2 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Open an order in the Flight Reservation window | Valid Order No. | "Insert Order" button disabled

Build :- Flight Reservation Window
Automation Program :-
set_window ("Flight Reservation", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Insert Order", "enabled", 0);

Note :- TSL is a case sensitive language and it takes the # symbol for comments.
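For instance, a minimal sketch of TSL syntax (the values here are hypothetical):

# TSL comments start with the # symbol
count = 3;                           # identifiers are case sensitive : count and Count differ
if (count > 1)
    printf ("Tickets = %d", count);  # "C" style control flow and printf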


Ex.-3 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Open an existing order in the Flight Reservation window | Valid Order No. | "Update Order" button disabled

Build :- Flight Reservation Window
Automation Program :-
set_window ("Flight Reservation", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Update Order", "enabled", 0);

Note :- There is no need to "Stop Recording" before "Inserting a Check Point".

Case Study :-
1) Manual : Click a button
   TSL : button_press ("Button Name");
2) Manual : Select a menu option
   TSL : menu_select_item ("Menu Name;Option Name");
3) Manual : Fill a text box
   TSL : edit_set ("Edit Box Name", "Given Text");
4) Manual : Select a radio button
   TSL : button_set ("Radio Button Name", ON);
5) Manual : Select / clear a check box
   TSL : button_set ("Check Box Name", ON/OFF);


6) Manual : Select an item in a list box
   TSL : list_select_item ("List Box Name", "Selected Item");
7) Manual : Fill a password box
   TSL : password_edit_set ("Password Object", "encrypted value");
8) Manual : Activate a window
   TSL : win_activate ("Window Name");
9) Manual : Auto-focus to a window through an object operation
   TSL : set_window ("Window Name", time);

Ex.-4 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Enter User ID and Password | Valid User ID & Password | "OK" button enabled

Build :- Login window (User ID, Password, OK)

Automation Program :-
set_window ("Login", time);
edit_set ("User ID", "Valid Value");
password_edit_set ("Password", "Encrypted Value");
button_check_info ("OK", "enabled", 1);


Ex.-5 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Focus to the Student window | None | "Submit" button disabled
2 | Select Roll No. | None | "Submit" button disabled
3 | Enter Student Name | Valid Name | "Submit" button enabled

Build :- Student window (Roll No. list, Name, Submit)

Automation Program :-
win_activate ("Student");
button_check_info ("Submit", "enabled", 0);
list_select_item ("Roll No.", "Selected Item");
button_check_info ("Submit", "enabled", 0);
edit_set ("Name", "Valid Value");
button_check_info ("Submit", "enabled", 1);

Ex.-6 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Focus to the Employee window | None | "Accept" button disabled
2 | Select Emp. No. | None | "Accept" button disabled
3 | Enter Employee Name | Valid Name | "Accept" button disabled
4 | Select Male / Female | None | "Accept" button enabled


Build :- Employee window (Emp. No. list, Name, ○ Male ○ Female, Accept)

Automation Program :-
win_activate ("Employee");
button_check_info ("Accept", "enabled", 0);
list_select_item ("Emp. No.", "Selected Item");
button_check_info ("Accept", "enabled", 0);
edit_set ("Name", "Valid Value");
button_check_info ("Accept", "enabled", 0);
button_set ("Button Name (Male/Female)", ON);
button_check_info ("Accept", "enabled", 1);

Ex.-7 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Focus to the Flight Reservation window | None | "Update Order" button disabled
2 | Open an existing order | Valid Order No. | "Update Order" button disabled
3 | Perform a change in that order | Valid Change | "Update Order" button enabled

Build :- Flight Reservation
Automation Program :-
win_activate ("Flight Reservation");
button_check_info ("Update Order", "enabled", 0);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", 1);
button_press ("OK");


button_check_info ("Update Order", "enabled", 0);
edit_set ("Name", "Ravi Kiran");
button_check_info ("Update Order", "enabled", 1);

Ex.-8 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Focus to the Flight Reservation window | None | Date of Flight object focused
2 | Open an existing order | Valid Order No. | Date of Flight object focused

Build :- Flight Reservation
Automation Program :-
win_activate ("Flight Reservation");
obj_check_info ("Date of Flight Object", "focused", 1);
set_window ("Flight Reservation", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", 1);
button_press ("OK");
obj_check_info ("Date of Flight Object", "focused", 1);

ii. For Object / Window :- To verify more than one property of one object we can use this option.
Ex.-1 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Focus to Flight Reservation and open an existing order | Valid Order No. | Tickets object value is numeric and the value is between 1-10

Build :- Flight Reservation
Automation Program :-
win_activate ("Flight Reservation");
set_window ("Flight Reservation", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", 1);
button_press ("OK");
set_window ("Flight Reservation", 1);
obj_check_gui ("Tickets:", "list1.ckl", "gui1", 1);
# list1.ckl holds the selected properties (Range and Regular Expression)
# gui1 holds the expected values (1-10 and [0-9]*)


iii. For Multiple Objects :- We can use this option to verify more than one property of more than one object.
Ex.-1 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Focus to the Flight Reservation window | None | Insert Order, Delete Order and Update Order buttons are disabled
2 | Open an existing order | Valid Order No. | Insert Order and Update Order buttons are disabled; Delete Order button is enabled
3 | Perform a change in that order | Valid Change | Insert Order is disabled; Update Order and Delete Order are enabled

Build :- Flight Reservation
Automation Program :-
win_activate ("Flight Reservation");
win_check_gui ("Flight Reservation", "list.ckl", "gui1", 1);   # Check Point
set_window ("Flight Reservation", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", 1);
button_press ("OK");
win_check_gui ("Flight Reservation", "list.ckl", "gui2", 1);   # Check Point
set_window ("Flight Reservation", 1);
edit_set ("Tickets:", "3");
win_check_gui ("Flight Reservation", "list.ckl", "gui3", 1);   # Check Point

Note : To save test creation and execution time, the Test Engineers insert the "For Multiple Objects" check point. The "For Multiple Objects" option is applicable to multiple objects in the same window.

Case Study-1 :
obj_check_info ( )  → for a single property
obj_check_gui ( )   → for an object / window
win_check_gui ( )   → for multiple objects


Case Study-2 :

Object Type | Testable Properties
Push Button | Enabled, Focused
Radio Button | Enabled, Status (ON / OFF)
Check Box | Enabled, Status (ON / OFF)
List / Combo Box | Enabled, Value, Count
Menu | Enabled, Count
Text Box / Edit Box | Enabled, Value, Focused, Range, Regular Expression, Date Format, Time Format, …
Table Grid | Columns Count, Rows Count, Cell Content

Case Study-3 :

[Diagram: on the development side, BRS → SRS (Functional Specifications) → HLD & LLDs → Coding (UT & IT) → Build. On the testing side, Use Cases → Test Scenarios → Test Cases → Automation Programs; test scenarios and test cases drive Manual Testing, and automation programs drive Automation Testing.]

Bitmap Check Point : We can use this check point to compare images. It supports static images only. To compare dynamic images such as movies, the Test Engineers use manual testing (or) the QTP tool.
Ex.-1 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Focus to the old version of the Flight Reservation build and select the About option in the Help menu | None | The old version logo is equal to the new version logo of the Flight Reservation build



Build :- Flight Reservation
Automation Program :-
win_activate ("Flight Reservation");
set_window ("Flight Reservation", 1);
menu_select_item ("Help;About...");
set_window ("About Flight Reservation System", 1);
obj_check_bitmap ("Button", "Img1", 1);   # Check Point

Ex.-2 : Test Procedure :-

Step No. | Action | Required I/p | Expected O/p
1 | Focus to the Flight Reservation window and select the Graphs option in the Analysis menu | None | A graph opens for the existing data
2 | Open an existing order and perform a change in the no. of tickets | Valid Change | The existing graph changes with respect to the change in the no. of tickets

Tickets Build :- Flight Reservation Automation Program :- win_active (“Flight Reservation”); set_window (“Flight Reservation”,1); menu_select_item (“Analysis ; Graphs…”); set_window (“Graphs”,1); obj_check_bitmap (“Gs_Drawing”, “Img1”,1, 158,26,178,154) # Screen area Check Point win_close (“Graph”); set_window (“Flight Reservation”,1); menu_select_item (“File; Open Order…”); set_window (“Open Order”,1); button_set (“Order No”,ON); edit_set (“Edit”,1); button_press (“OK”); set_window (“Flight Reservation”,1); edit_set (“Tickets:”, “3”); # Changes in Tickets button_press (“Update Order”); Order”); Note : The Win Runner Bitmap check point is comparing Complete Images or Part of Images.


For object / window : obj_check_bitmap ("Image Name", "Image File", time);
For screen area : obj_check_bitmap ("Image Name", "Image File", time, x, y, width, height);

Text Check Point : To verify manipulations (or) calculations in our application build, we can use this check point. It is a combination of two concepts: the Get Text option and the If condition. The Get Text option consists of 2 sub-options: 1. From Object / Window 2. From Screen Area.

1. From Object / Window : To capture an object's value we can use this option.
Navigation :- Insert Menu → Get Text → select the required object
Syntax :- obj_get_text ("Object Name", variable);

2. From Screen Area : To capture a selected value from a screen, we can use this option.
Navigation :- Insert Menu → Get Text → From Screen Area → select the required value region in that screen → right-click to release the selection.
Syntax :- obj_get_text ("Screen Name", variable, x1, y1, x2, y2);

If Condition :- TSL is a "C"-like language. It allows you to write control statements with "C" syntax:
if (condition)
{
    ----
}
else
{
    ----
}


Ex-1 :- Manual Expected :- Output = Input * 100
Build :- Sample window (Input, Output)
Automation Program :-
set_window ("Sample", 1);
obj_get_text ("Input", x);
obj_get_text ("Output", y);
if (y == x*100)
    printf ("Test is Pass");
else
    printf ("Test is Fail");

Ex-2 :- Manual Expected :- Total = Tickets * Price in an opened order
Build :- Flight Reservation
Automation Program :-
set_window ("Flight Reservation", 1);
menu_select_item ("File;Open Order...");
button_set ("Order No.", ON);
edit_set ("Edit", 1);
button_press ("OK");
obj_get_text ("Total", tot);
obj_get_text ("Price", p);
obj_get_text ("Tickets", t);
p = substr (p, 2, length(p)-1);
tot = substr (tot, 2, length(tot)-1);
if (tot == p*t)
    printf ("Test is Pass");
else
    printf ("Test is Fail");


Ex-3 :- Manual Expected :- Total = File1 size + File2 size
Build :- Audit window (File1 … KB, File2 … KB, Total … KB)
Automation Program :-
set_window ("Audit", 1);
obj_get_text ("File1", x);
obj_get_text ("File2", y);
obj_get_text ("Total", z);
x = substr (x, 1, length(x)-2);
y = substr (y, 1, length(y)-2);
z = substr (z, 1, length(z)-2);
if (z == x+y)
    printf ("Test is Pass");
else
    printf ("Test is Fail");

Ex-4 :- Manual Expected :- Total = Price * Quantity
Build :- Shopping window (Quantity, Price Rs: … /-, Total Rs: … /-)


Automation Program :-
set_window ("Shopping", 1);
obj_get_text ("Quantity", Q);
obj_get_text ("Price", P);
obj_get_text ("Total", T);
P = substr (P, 4, length(P)-5);
T = substr (T, 4, length(T)-5);
if (T == P * Q)
    printf ("Test is Pass");
else
    printf ("Test is Fail");

tl_step ( ) :- "tl" stands for Test Log (test result). We can use this statement to prepare our own Pass / Fail result.
Syntax :- tl_step ("Step Name", 0/1, "Message");
'0' for Pass; anything other than '0' is Fail.

Note :- substr ( ) : we can use this function to get a required value from a given string.
Syntax :- substr ("String Value" / variable, starting position, length);
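For instance, a minimal sketch with a hypothetical price string:

p = "$120";
p = substr (p, 2, length(p)-1);   # strips the "$", leaving "120"
if (p == 120)
    tl_step ("S1", 0, "Price captured correctly");
else
    tl_step ("S1", 1, "Price capture failed");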

Data Base Check Point : In software functional testing, the test engineers concentrate on back-end coverage. In this coverage, the test engineers estimate the correctness of front-end screen operations on back-end table content in terms of Data Validation and Data Integrity.

[Diagram: an Employee entry screen (Emp No. : 101, Name : Abc, Dep. No. : 10, OK) drives new rows into the Emp table (Emp. No. 101, Name Abc, Dept 10); the Dept table (Dept. No. 10, Name Sales, Strength 20 → 21) is the provider whose existing data changes. The insertion side illustrates Data Validation; the change side illustrates Data Integrity.]


Driven :- data stored in the same system. Provider :- data stored in another system. From the above example, the correctness of inserting new data is called "Data Validation", and the correctness of changes to existing data is called "Data Integrity". To automate this data base testing, test engineers use the "Data Base Check Point". It consists of 3 sub-points: A. Default Check, B. Custom Check, C. Runtime Record Check.

A. Default Check :- To conduct data base testing depending on the content of the data base, we can use this option. Ex :-

Create the DB check point (the current content of the DB is saved as the expected result) → perform the front-end operation → run the DB check point (the current content of the DB is taken as the actual result).

Navigation : Open Win Runner → Insert Menu → Data Base Check Point → Default Check → specify Connect to Data Base using ODBC (or) Data Junction (ODBC for a local data base, Data Junction for a remote data base) → click Next → click Create to select the connectivity provided by the developers → write the select statement → click Finish → open our application build in the front end → perform an operation manually → run the data base check point → analyze the results manually.

Note :- For the above navigation, the Test Engineers gather some information from the developers: the name of the connectivity between the application build's front end and back end, the names of the tables including their columns, and the mapping between front-end screens and back-end tables. This information is also known as the "Data Base Design Document".
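As a minimal sketch of the resulting script (the window, field, checklist and expected-values file names below are assumptions; Win Runner generates the checklist and expected-values files when the check point is created):

# perform a front end operation that inserts a record …
set_window ("Employee", 1);
edit_set ("Emp No.", "101");
button_press ("OK");
# … then run the data base check point created through the wizard
db_check ("list1.cdl", "dbvf1");   # current DB content compared with the saved expected content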



B. Custom Check :- To conduct data base testing depending on the rows count, columns count and content, we can use this option. In general the test engineers use the Default Check option: it shows content only by default, and the content of the data base is already measurable in terms of rows count and columns count, so the Test Engineers use Default Check instead of Custom Check.
Syntax :- db_check ("checklist.cdl", "expected values file");
In the above syntax, the checklist file specifies content as the property in Default Check, and rows count, columns count and content as the properties in Custom Check. The expected values file specifies the current content of the data base with respect to the select statement.

C. Runtime Record Check : We can use this option to estimate the correctness of the mapping between back-end table columns and front-end report objects.

Navigation :- Insert Menu → Data Base Check Point → Runtime Record Check → click Next → click Create to select the connectivity provided by the developers → write select statements with the doubtful columns → select the doubtful objects for those columns → click Next → select the "one or more matching records" option → click Finish.

Ex-1 :
Objects | DB Table Columns
Order No. | orders.order_number
Name | orders.customer_name
(Pass : the mapping is correct)

[Diagram: users operate front-end screens (forms and reports) over a data base; Default / Custom Check verifies the data base content behind the forms, while Runtime Record Check verifies that front-end report objects match the back-end table columns.]


Ex-2 :
Objects | DB Table Columns
Tickets | orders.order_number
Name | orders.customer_name
(Fail : the Tickets object is mapped to the wrong column)

Syntax :- db_record_check ("checklist.cvr", DVR_ONE_OR_MORE_MATCH, variable);
In the above syntax, the checklist file specifies the expected mapping between back-end table columns and front-end report objects. The indicator specifies that the check point may match more than one record. The variable returns the number of records matched.

Case Study :-

Check Point | TSL Statement
For single property (GUI check point) | obj_check_info ("object name", "property", expected value);
For object / window (GUI) | obj_check_gui ("object name", "checklist.ckl", "expected values file", time);
For multiple objects (GUI) | win_check_gui ("window name", "checklist.ckl", "expected values file", time);
For object / window (bitmap check point) | obj_check_bitmap ("image object", "image file", time);
For screen area (bitmap) | obj_check_bitmap ("image object", "image file", time, x, y, width, height);
From object / window (get text check point) | obj_get_text ("object name", variable);
From screen area (get text) | obj_get_text ("object area name", variable, x1, y1, x2, y2);
Default check (data base check point) | db_check ("checklist.cdl", "expected data base content");
Custom check (data base) | db_check ("checklist.cdl", "expected data base content");   (the checklist specifies rows count, columns count and content)
Runtime record check (data base) | db_record_check ("checklist.cvr", DVR_ONE_OR_MORE_MATCH, variable);


II. Data Driven Automation Frame Work :- This is an advanced automation frame work in the Win Runner testing tool. In this frame work the test engineers execute an automation program with multiple test data. There are 4 ways of data driven testing, depending on where the test data comes from: the key board, a flat file, front-end objects, or an excel sheet.

A. Test Data From Key Board :- To read values from the key board, the test engineers use the below TSL statement:
variable = create_input_dialog ("Message");

Ex-1 : Manual Expected :- Delete Order button enabled after opening an existing order.
Build : Flight Reservation
Test Data : 5 valid order numbers
Automation Program :
for (i=1; i<=5; i++)
{
    x = create_input_dialog ("Enter Order Number");
    set_window ("Flight Reservation", 1);
    menu_select_item ("File;Open Order...");
    set_window ("Open Order", 1);
    button_set ("Order No.", ON);
    edit_set ("Edit", x);
    button_press ("OK");
    set_window ("Flight Reservation", 1);
    button_check_info ("Delete Order", "enabled", 1);
}
# Replacing the sample input in an automation program with multiple inputs during execution is called Parameterization.

Build / Application Under Test

(AUT)

Key Board Flat File Front End Objects Excel Sheet

Automation Program in TSL

Test

Data


Ex-2 : Manual Expected :- Tickets object value is numeric in an opened order.
Build : Flight Reservation
Test Data : 5 valid order numbers
Automation Program :
for (i=1; i<=5; i++)
{
    x = create_input_dialog ("Enter Order Number");
    set_window ("Flight Reservation", 1);
    menu_select_item ("File;Open Order...");
    set_window ("Open Order", 1);
    button_set ("Order No.", ON);
    edit_set ("Edit", x);
    button_press ("OK");
    set_window ("Flight Reservation", 1);
    obj_check_gui ("Tickets:", "list1.ckl", "gui1", 2);
}

Ex-3 : Manual Expected :- Total = number of Tickets * Price in an opened order
Build : Flight Reservation
Test Data : 5 valid order numbers
Automation Program :
for (i=1; i<=5; i++)
{
    x = create_input_dialog ("Enter Order Number");
    set_window ("Flight Reservation", 1);
    menu_select_item ("File;Open Order...");
    set_window ("Open Order", 1);
    button_set ("Order No.", ON);
    edit_set ("Edit", x);
    button_press ("OK");
    set_window ("Flight Reservation", 1);
    obj_get_text ("Tickets:", t);
    obj_get_text ("Price:", p);
    obj_get_text ("Total:", tot);
    p = substr (p, 2, length(p)-1);
    tot = substr (tot, 2, length(tot)-1);
    if (tot == p*t)
        tl_step ("T1", 0, "Test Pass");
    else
        tl_step ("T1", 1, "Test Fail");
}


Ex-4 : Manual Expected :- Result = Input1 * Input2
Build :- Multiply window (Input1, Input2, Result, OK)
Test Data :- 10 pairs of valid inputs
Automation Program :
for (i=1; i<=10; i++)
{
    x = create_input_dialog ("Enter Input1");
    y = create_input_dialog ("Enter Input2");
    set_window ("Multiply", 1);
    edit_set ("Input1", x);
    edit_set ("Input2", y);
    button_press ("OK");
    obj_get_text ("Result:", r);
    if (r == x*y)
        tl_step ("T1", 0, "Test Pass");
    else
        tl_step ("T1", 1, "Test Fail");
}

B. Test Data From Flat File :- In this approach the test engineers maintain the test data in a flat file (.txt). With this approach Win Runner does not need the interaction of the test engineers while running the test.

[Diagram: the TSL automation program reads test data from a .txt flat file and drives the Build / Application Under Test (AUT).]


To use file content as test data, the test engineers use the below TSL statements:
file_open ("path of file", FO_MODE_READ);
file_getline ("path of file", variable);
file_close ("path of file");

Ex-1 : Manual Expected :- Delete Order button enabled after opening an order
Build :- Flight Reservation
Test Data : C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt
Automation Program :
f = "C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt";
file_open (f, FO_MODE_READ);
while (file_getline(f, x) != E_FILE_EOF)
{
    set_window ("Flight Reservation", 1);
    menu_select_item ("File;Open Order...");
    set_window ("Open Order", 1);
    button_set ("Order No.", ON);
    edit_set ("Edit", x);   # Parameterization
    button_press ("OK");
    set_window ("Flight Reservation", 1);
    button_check_info ("Delete Order", "enabled", 1);
}
file_close (f);

Silent Mode :- In silent mode, Win Runner continues test execution even when a check point fails. The test engineers use this option to continue test execution without interaction.
Navigation :- Tools Menu → General Options → Run Tab → select the "Run in Batch Mode" check box → click OK.
Note :- In silent mode Win Runner does not execute the create_input_dialog ( ) statement.


Ex-3 :- Manual Expected : Total = Price * Quantity
Build : Shopping window (Item No., Quantity, Price $…, Total $…, OK)
Test Data : C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt
Ravi.txt contains statements like:
Ramu Purchased 101 item as 10 pieces
Bhasha Purchased 102 item as 27 pieces
… etc.
Automation Program :
f = "C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt";
file_open (f, FO_MODE_READ);
while (file_getline(f, x) != E_FILE_EOF)
{
    split (x, y, " ");
    set_window ("Shopping", 1);
    edit_set ("Item No", y[3]);    # Parameterization
    edit_set ("Quantity", y[6]);   # Parameterization
    button_press ("OK");
    obj_get_text ("Price", p);
    obj_get_text ("Total", t);
    p = substr (p, 2, length(p)-1);
    t = substr (t, 2, length(t)-1);
    if (t == p*y[6])
        tl_step ("C1", 0, "Calculation is Pass");
    else
        tl_step ("C1", 1, "Calculation is Fail");
}
file_close (f);


C. Test Data From Front End Objects :- Sometimes the test engineers re-execute their automation program depending on data objects with multiple values in the build, such as menus, list boxes, tables, ActiveX controls and data windows.

[Diagram: the automation program takes its test data directly from an object in the Build / Application Under Test (AUT).]

Ex-1 : Manual Expected : The selected city name in "Fly From" does not appear in "Fly To"
Build : Journey window (Fly From list, Fly To list)
Test Data : all existing city names in "Fly From"
Automation Program :
set_window ("Journey", 1);
list_get_info ("Fly From", "count", n);
for (i=0; i<n; i++)
{
    list_get_item ("Fly From", i, x);
    list_select_item ("Fly From", x);
    if (list_select_item ("Fly To", x) != E_OK)
        tl_step ("J1", 0, "Item doesn't appear");
    else
        tl_step ("J1", 1, "Item appears");
}


Ex-2 : Manual Expected : Total = Price * Quantity in every row of the bill
Build : "Bill" is the table in the Shopping window:

Sl. No. | Quantity | Price | Total
1 | x | $ xxxx | $ xxxx
2 | x | $ xxxx | $ xxxx
3 | x | $ xxxx | $ xxxx
…etc.

Test Data :- all existing rows in the Bill table
Automation Program :
set_window ("Shopping", 1);
tbl_get_rows_count ("Bill", n);
for (i=1; i<=n; i++)
{
    tbl_get_cell_data ("Bill", "#"&i, "#1", q);
    tbl_get_cell_data ("Bill", "#"&i, "#2", p);
    tbl_get_cell_data ("Bill", "#"&i, "#3", t);
    p = substr (p, 2, length(p)-1);
    t = substr (t, 2, length(t)-1);
    if (t == p*q)
        tl_step ("S1", 0, "Test Pass");
    else
        tl_step ("S1", 1, "Test Fail");
}

PRACTICE : Total = Internal Marks + External Marks for every student; a solution is sketched below.
Build : Marks is a window:

Roll No. | Name | Internals | Externals | Total
101 | xxxxx | xxxxx | xxxxx | xxxxx
102 | xxxxx | xxxxx | xxxxx | xxxxx
…etc.
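A minimal sketch of the practice solution (the window, table name and column positions here are assumptions):

set_window ("Marks", 1);
tbl_get_rows_count ("Marks Table", n);
for (i=1; i<=n; i++)
{
    tbl_get_cell_data ("Marks Table", "#"&i, "#3", internals);
    tbl_get_cell_data ("Marks Table", "#"&i, "#4", externals);
    tbl_get_cell_data ("Marks Table", "#"&i, "#5", total);
    if (total == internals + externals)
        tl_step ("M1", 0, "Row " & i & " total is correct");
    else
        tl_step ("M1", 1, "Row " & i & " total is wrong");
}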



D. Test Data From an Excel Sheet :- Sometimes the test engineers re-execute automation programs depending on multiple inputs in an excel sheet, instead of the key board, flat files or front-end objects. In this method the test engineers fill the excel sheet either by importing data from the build's data base or by manual entry.

To create an excel sheet oriented data driven test, the test engineers follow the navigation below.
Navigation :- Open Win Runner & the build → create an automation program for sample inputs → Table Menu → Data Driven Wizard → click Next → specify the path of the excel sheet → specify a variable name to store that excel sheet path → select "Import data from data base" → click Next → specify Connect to DB using ODBC / Data Junction → select the "Specify SQL statement" option → click Next → click Create to select the DB connectivity provided by the developers → write the select statement to import data from the connected DB → click Next → replace the sample input with the imported excel sheet column name in the automation program → say Yes/No to show the data table (excel sheet) → click Finish → put the build in its base state and click Run → analyze the results after execution.
Note : By default Win Runner provides a default excel sheet for every test, which can be used instead of our own excel sheet.

[Diagram: the TSL automation program reads test data from an .xls sheet; the sheet is filled either (I) by manual entry through the front end or (II) by importing data from the build's data base.]


Ex-1 : Manual Expected : Delete Order button enabled after opening an existing order.
Build : Flight Reservation
Test Data : default.xls (import data from the DB)
Automation Program :
table = "default.xls";
rc = ddt_open (table, DDT_MODE_READWRITE);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot Open Table");
ddt_update_from_db (table, "msqr1.sql", count);
ddt_save (table);
ddt_get_row_count (table, n);
for (i=1; i<=n; i++)
{
    ddt_set_row (table, i);
    set_window ("Flight Reservation", 1);
    menu_select_item ("File;Open Order...");
    set_window ("Open Order", 1);
    button_set ("Order No.", ON);
    edit_set ("Edit", ddt_val (table, "order_number"));   # Parameterization
    button_press ("OK");
    set_window ("Flight Reservation", 1);
    button_check_info ("Delete Order", "enabled", 1);
}
ddt_close (table);

Case Study-1 :
ddt_open ( ) :- we can use this function to open an excel sheet in the specified mode.
Syntax : ddt_open ("path of excel sheet", DDT_MODE_READ / DDT_MODE_READWRITE);
ddt_update_from_db ( ) :- we can use this function to update the excel sheet with respect to changes in the data base.
Syntax : ddt_update_from_db ("path of excel sheet", "select statement query file", variable);
ddt_save ( ) :- we can use this function to save excel sheet modifications.
Syntax : ddt_save ("path of excel sheet");


ddt_get_row_count ( ) :- we can use this function to find the number of rows in an excel sheet.
Syntax : ddt_get_row_count ("path of excel sheet", variable);
ddt_set_row ( ) :- we can use this function to point to a specific row in an excel sheet.
Syntax : ddt_set_row ("path of excel sheet", row number);
ddt_val ( ) :- we can use this function to capture a specified column's value.
Syntax : ddt_val ("path of excel sheet", column name);
ddt_close ( ) :- to close an opened excel sheet, we can use this function.
Syntax : ddt_close ("path of excel sheet");

Case Study-2 :

DDT Approach | TSL Statements | Silent Mode | Test Engineer Interaction (During Run Time)
Test Data from Key Board | create_input_dialog ( ); | Off | Mandatory
Test Data from Flat File | file_open ( ); file_getline ( ); file_close ( ); | On / Off | Optional
Test Data from Front End Objects | list_get_item ( ); list_get_info ( ); tbl_get_rows_count ( ); tbl_get_cell_data ( ); | On / Off | Optional
Test Data from Excel Sheet | ddt_open ( ); ddt_save ( ); ddt_set_row ( ); ddt_update_from_db ( ); ddt_get_row_count ( ); ddt_val ( ); ddt_set_val ( ); ddt_close ( ); | On / Off | Optional