Testing concept definition

April 28, 2011

Uploaded by vivek-v on 13-May-2015


Page 1: Testing concept definition

April 28, 2011

CONTENTS:

MANUAL AND AUTOMATION TESTING

TEST BED AND TEST DATA

POSITIVE AND NEGATIVE TEST CASES

DEFECT SEVERITY AND PRIORITY

RUN PLAN

SECURITY AND RECOVERY TESTING

COMPATIBILITY AND USABILITY TESTING

SMOKE AND SANITY TESTING

VOLUME AND STRESS TESTING

ENTRY AND EXIT CRITERIA

SUSPENSION AND RESUMPTION CRITERIA

Page 2: Testing concept definition

Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user and use most or all features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.

Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.[1] Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.
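
The comparison of actual outcomes to predicted outcomes described above is the core of any automated check. As a minimal sketch (the `add` function and the test cases here are invented for illustration, not taken from the text), a formalized automated test run might look like:

```python
# Minimal sketch of test automation: execute the unit under test,
# then compare the actual outcome to the predicted outcome.
# `add` is a hypothetical function used only for illustration.

def add(a, b):
    return a + b

def run_test(func, args, expected):
    """Execute one test case and record whether it passed."""
    actual = func(*args)
    return {"args": args, "expected": expected, "actual": actual,
            "passed": actual == expected}

# A small, formalized set of test cases: inputs plus predicted outcomes.
cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
results = [run_test(add, args, expected) for args, expected in cases]
print(all(r["passed"] for r in results))  # True when every case passes
```

A real framework (such as pytest or JUnit) adds discovery and reporting on top of this same compare-and-record loop.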

Test bed and test data

A test bed is an execution environment configured for software testing. It consists of specific hardware, network topology, operating system, configuration of the product under test, system software, and other applications. The test plan for a project should be developed from the test beds to be used.

A testbed (also commonly spelled test bed in research publications) is a platform for experimentation on large development projects. Testbeds allow rigorous, transparent, and replicable testing of scientific theories, computational tools, and new technologies.

The term is used across many disciplines to describe a development environment that is shielded from the hazards of testing in a live or production environment. It is a method of testing a particular module (function, class, or library) in isolation. A testbed may be implemented similarly to a sandbox, but not necessarily for security purposes. A testbed is used as a proof of concept, or when a new module is tested apart from the program or system it will later be added to: a skeleton framework is implemented around the module so that the module behaves as if it were already part of the larger program.

A typical testbed could include software, hardware, and networking components. In software development, the specified hardware and software environment can be set up as a testbed for the application under test. In this context, a testbed is also known as the test environment.

Testbeds are also pages on the Internet where the public is given the opportunity to test CSS or HTML they have created and want to preview the results.

Page 3: Testing concept definition

Test Data are data which have been specifically identified for use in tests, typically of a computer program. Some data may be used in a confirmatory way, typically to verify that a given set of input to a given function produces some expected result. Other data may be used in order to challenge the ability of the program to respond to unusual, extreme, exceptional, or unexpected input. Test data may be produced in a focused or systematic way (as is typically the case in domain testing), or by using other, less-focused approaches (as is typically the case in high-volume randomized automated tests). Test data may be produced by the tester, or by a program or function that aids the tester. Test data may be recorded for re-use, or used once and then forgotten.

Test data is data that is run through a computer program to test the software. Test data can be used to test compliance with effective controls in the software.
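
The two styles of test data described above can be sketched briefly. In this hedged example, `normalize_age` is a hypothetical function invented for illustration; the confirmatory data verifies known input/output pairs, while the challenging data probes extreme and randomized inputs:

```python
import random

# Hypothetical unit under test, used only to illustrate test-data styles.
def normalize_age(value):
    """Clamp an age into the range 0..150; reject non-integers."""
    if not isinstance(value, int):
        raise TypeError("age must be an integer")
    return max(0, min(150, value))

# Confirmatory test data: valid inputs paired with predicted results.
confirmatory = [(25, 25), (0, 0), (150, 150)]
for given, expected in confirmatory:
    assert normalize_age(given) == expected

# Challenging test data: extreme values plus high-volume randomized inputs.
random.seed(1)
challenging = [-1, 10**9] + [random.randint(-10**6, 10**6) for _ in range(100)]
assert all(0 <= normalize_age(v) <= 150 for v in challenging)
print("all test data passed")
```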

Positive and negative test cases

Positive test case: a test to check whether the system does what it is supposed to do, i.e. checking with valid values only.

Negative test case: a test to check whether the system does what it is not supposed to do, i.e. checking with invalid values.

Positive test cases are designed to check that we get the desired result with a valid set of inputs (for example, a user should be able to log in to the system with a valid username and password).

Negative test cases are designed to check that the system generates the correct error or warning messages with an invalid set of inputs (for example, if the user enters a wrong username or password, the user should not be logged in and an error message should be shown).

In short, a positive test case expects the correct result for valid input, while a negative test case expects the system to reject invalid input or report an error.

Page 4: Testing concept definition

A positive test case is written to confirm that the application accepts correct values; a negative test case is used to find where the application really fails.

For example, suppose a Name field has the following specification:
1. only alphabets
2. only up to 4 characters
3. no null value

Positive test cases:
1. entering only alphabets
2. entering up to 4 characters

Negative test cases:
1. entering numerics or special characters
2. leaving the field null
3. entering more than 4 characters
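
The Name field specification above can be expressed as a small validator and exercised with exactly those positive and negative cases. This is a sketch; the function name and sample values are invented for illustration:

```python
import re

# Sketch of the Name field specification above:
# 1. only alphabets  2. at most 4 characters  3. no null (empty) value
def is_valid_name(name):
    return bool(name) and len(name) <= 4 and bool(re.fullmatch(r"[A-Za-z]+", name))

# Positive test cases: valid inputs must be accepted.
assert is_valid_name("Anu")        # only alphabets
assert is_valid_name("Ravi")       # exactly 4 characters

# Negative test cases: invalid inputs must be rejected.
assert not is_valid_name("R2d2")   # numerics/special characters
assert not is_valid_name("")       # null value
assert not is_valid_name("Vivek")  # more than 4 characters
print("all name-field cases passed")
```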

What’s the difference between priority and severity?

Answer: “Priority” is associated with scheduling, and “severity” is associated with standards. “Priority” means something is afforded or deserves prior attention; a precedence established by order of importance (or urgency). “Severity” is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness, e.g. a severe code of behavior. The words priority and severity come up in bug tracking. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its “severity”, reproduce it, and fix it. The fixes are based on project “priorities” and the “severity” of bugs. The “severity” of a problem is defined in accordance with the customer’s risk assessment and recorded in their selected tracking tool. Buggy software can “severely” affect schedules, which, in turn, can lead to a reassessment and renegotiation of “priorities”.

Page 5: Testing concept definition

Defect priority and severity levels

Defects are given a priority and severity level. Such classification is absolutely needed as the development team cannot resolve all defects simultaneously. The test team needs to indicate how soon they want to get the defect fixed, and how big the impact on the functionality of the application under test is. Let's have a look at the classification levels:

Defect priority

High: Fix the defect immediately. A core functionality fails or test execution is completely blocked.

Medium: Fix the defect soon. An important functionality fails, but we don't need to test it right away and we have a workaround.

Low: Don't fix this defect before the high and medium defects are fixed, but don't forget about it.

Defect priority indicates the impact on the test team or test planning. If the defect blocks or greatly slows down test execution, you might want to select the highest grade for the defect priority.

Defect severity

Critical: A core functionality returns completely invalid results or doesn't work at all.

Important: This defect has impact on basic functionality.

Useful: There is impact on the business, but only in very few cases.

Nice to have: The impact on the business is minor. Any user interface defect that does not complicate the functionality often gets this severity grade.

Defect severity indicates the impact on the client's business. If important functionality is blocked or works incorrectly, the test engineer usually selects the highest defect severity.

The priority and severity qualifiers above can differ between companies or projects, but their basic meaning remains the same. Assigning a defect priority and severity is always subjective, as the test engineer measures the impact from their own point of view. Nevertheless, they should always decide with care, as the defect resolution time depends on it.
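
The two independent gradings above can be modeled directly. In this sketch (the defect summaries are invented for illustration), a backlog is ordered by priority first, since priority drives scheduling, with severity as the tie-breaker:

```python
from dataclasses import dataclass

# The priority and severity grades described above, ranked (0 = most urgent).
PRIORITY = {"High": 0, "Medium": 1, "Low": 2}
SEVERITY = {"Critical": 0, "Important": 1, "Useful": 2, "Nice to have": 3}

@dataclass
class Defect:
    summary: str
    priority: str   # impact on the test team / test planning
    severity: str   # impact on the client's business

defects = [
    Defect("Typo on help page", "Low", "Nice to have"),
    Defect("Login fails for all users", "High", "Critical"),
    Defect("Report totals off in rare case", "Medium", "Useful"),
]

# Fix order: priority first (scheduling), severity as tie-breaker (standards).
queue = sorted(defects, key=lambda d: (PRIORITY[d.priority], SEVERITY[d.severity]))
print([d.summary for d in queue][0])  # the blocking login defect comes first
```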

Run plan generation

Run plan generation converts the scenarios into a run plan indicating the logical and physical dates on which the various set-up steps and transactions are to be put through, along with the associated batch processes to be executed and the expected results to be verified. It takes into consideration the following inputs:

Calendar definition,

Batch Processing rules,

Date feeds from Support Components (Loan, Deposit and Interest Schedules)
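
Mapping logical dates onto physical calendar dates is the heart of run plan generation. The sketch below is illustrative only: the calendar rule (skip weekends) and the scenario steps are assumptions standing in for a real calendar definition and batch processing rules:

```python
from datetime import date, timedelta

# Illustrative calendar rule: a physical date must fall on a business day.
def next_business_day(d):
    while d.weekday() >= 5:  # Saturday = 5, Sunday = 6
        d += timedelta(days=1)
    return d

def build_run_plan(start, steps):
    """steps: list of (logical_day, action) -> list of (logical, physical, action)."""
    plan = []
    for logical_day, action in steps:
        physical = next_business_day(start + timedelta(days=logical_day))
        plan.append((logical_day, physical, action))
    return plan

# Hypothetical scenario: set-up, transactions, then the batch run to verify.
steps = [(0, "set up accounts"), (1, "post transactions"), (2, "run batch and verify")]
plan = build_run_plan(date(2011, 4, 28), steps)  # 28 Apr 2011 is a Thursday
for logical, physical, action in plan:
    print(logical, physical.isoformat(), action)
```

Note how logical day 2 (Saturday, 30 Apr) is pushed to Monday, 2 May by the calendar rule.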

Page 6: Testing concept definition

Security testing is a process to determine that an information system protects data and maintains functionality as intended.

The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, availability, authorization, and non-repudiation. Security testing as a term has a number of different meanings and can be carried out in a number of different ways. As such, a security taxonomy helps us understand these different approaches and meanings by providing a base level to work from.

Recovery testing is the activity of testing how well an application is able to recover from crashes, hardware failures and other similar problems.

Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed. Recovery testing should not be confused with reliability testing, which tries to discover the specific point at which failure occurs. Recovery testing basically checks how quickly and how well the application can recover from any type of crash, hardware failure, and so on. The type or extent of recovery is specified in the requirement specifications.

Compatibility testing, part of software non-functional testing, is testing conducted on the application to evaluate its compatibility with the computing environment. The computing environment may include elements such as the hardware platform, operating system, browsers, and other system software.

Usability testing is a technique used to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.[1] This is in contrast with usability inspection methods where experts use different methods to evaluate a user interface without involving users.

Smoke testing refers to the first test made after assembly or repairs to a system, to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that "the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright," the system is ready for more stressful testing.

A sanity test or sanity check is a basic test to quickly evaluate whether a claim or the result of a calculation can possibly be true. The point of a sanity test is to rule out certain classes of obviously false results, not to catch every possible error. In arithmetic, for example, when multiplying by 9, using the divisibility rule for 9 to verify that the sum of digits of the result is divisible by 9 is a sanity test - it will not catch every multiplication error, however it's a quick and simple method to discover many possible errors.
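
The divisibility-by-9 sanity check described above is easy to demonstrate, including its deliberate blind spot (it rules out many wrong products, but not all):

```python
# Sanity check for multiplication by 9: the digit sum of a correct
# product must itself be divisible by 9.
def passes_nines_sanity_check(product):
    return sum(int(ch) for ch in str(abs(product))) % 9 == 0

assert passes_nines_sanity_check(9 * 1234)          # a correct product passes
assert passes_nines_sanity_check(9 * 987654321)
assert not passes_nines_sanity_check(9 * 1234 + 1)  # an off-by-one slip is caught
# A sanity check is not exhaustive: an error of exactly 9 slips through.
assert passes_nines_sanity_check(9 * 1234 + 9)
print("sanity check behaves as described")
```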

Page 7: Testing concept definition

Volume Testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application's performance on it. Another example could be when there is a requirement for your application to interact with an interface file (could be any file such as .dat, .xml); this interaction could be reading and/or writing on to/from the file. You will create a sample file of the size you want and then test the application's functionality with that file in order to test the performance.
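
The interface-file example above can be sketched as follows. The routine under test, `count_lines`, is hypothetical; the point is only the shape of a volume test: create a sample file of the chosen size, then verify the application's behavior against it:

```python
import os
import tempfile

# Hypothetical unit under test: a file-processing routine.
def count_lines(path):
    with open(path, "rb") as f:
        return sum(1 for _ in f)

def make_volume_file(num_lines, line="x" * 79 + "\n"):
    """Create a sample file of the requested volume; return its path."""
    fd, path = tempfile.mkstemp(suffix=".dat")
    with os.fdopen(fd, "w") as f:
        for _ in range(num_lines):
            f.write(line)
    return path

path = make_volume_file(10_000)          # roughly 800 KB of data
try:
    assert count_lines(path) == 10_000   # functionality still correct at volume
    print("volume test passed:", os.path.getsize(path), "bytes")
finally:
    os.remove(path)                      # clean up the sample file
```

In a real volume test the file size would be driven by the stated requirement, and timing or resource measurements would be captured alongside the functional check.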

Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries, such as fatigue testing for materials.
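
Testing beyond normal capacity to a breaking point can be sketched as a ramp-up loop. Here `fixed_queue` is an invented stand-in for a system with a hard capacity limit; the idea is only to raise the load until failure is observed:

```python
# Hypothetical system with a hard capacity limit of 100 for illustration.
def fixed_queue(load):
    if load > 100:
        raise OverflowError("queue capacity exceeded")
    return load

def find_breaking_point(system, start=10, step=10, limit=1000):
    """Raise the load step by step until the system fails; return that load."""
    load = start
    while load <= limit:
        try:
            system(load)
        except Exception:
            return load          # first load level at which the system breaks
        load += step
    return None                  # no failure observed within the limit

print(find_breaking_point(fixed_queue))  # 110: the first step past capacity
```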

Defining entry and exit criteria

You can use the Entry Criteria section of the test plan to define the prerequisite items that must be achieved before testing can begin. You can use the Exit Criteria section of the test plan to define the conditions that must be met before testing can be concluded. In both cases, you add your criteria by adding rows to a table.

To define entry and exit criteria:

1. From an open test plan, click Entry Criteria or Exit Criteria.

2. Click the Add Row icon.

3. Type the criteria in the Entry or Exit Criteria Description field, for example, "No Level 1 defects".

4. Type the status in the Current Value field, for example, "Three Level 1 defects still outstanding".

5. Select the criteria state from the Status list, for example, Successful.

6. Optionally, type a comment in the Comment field.

7. Click Save to save your edits to the test plan.

Page 8: Testing concept definition

What is entry criteria in software testing?

Entry criteria specify when to start testing. Some typical entry criteria are:

- Coding is complete and unit tested

- Test Cases are written, peer reviewed and signed off

What is exit criteria in software testing?

Exit criteria ensure that testing of the application is complete and that it is ready for release:

1. All the planned requirements must be met.

2. All the high-priority bugs should be closed.

3. All the test cases should be executed.

4. The scheduled deadline has been reached.

5. The test manager must sign off the release.

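
Exit criteria like those listed above are essentially a checklist evaluated before sign-off. In this sketch, the project snapshot (bugs, test cases, sign-off flag) is invented for illustration:

```python
# Sketch: evaluating exit criteria before concluding testing.
def exit_criteria_met(bugs, test_cases, signed_off):
    checks = {
        "no open high-priority bugs":
            all(b["status"] == "closed" for b in bugs if b["priority"] == "high"),
        "all test cases executed":
            all(tc["executed"] for tc in test_cases),
        "test manager signed off": signed_off,
    }
    return all(checks.values()), checks

# Hypothetical project snapshot.
bugs = [{"priority": "high", "status": "closed"},
        {"priority": "low", "status": "open"}]   # an open low-priority bug is OK
test_cases = [{"executed": True}, {"executed": True}]
ready, checks = exit_criteria_met(bugs, test_cases, signed_off=True)
print(ready)  # True: high-priority bugs closed, all cases run, sign-off given
```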

Page 9: Testing concept definition

Entry and Exit criteria in testing

Entry criteria are the items that must be in place before testing of a system begins, such as:

SRS (Software Requirements Specification)

FRS (Functional Requirements Specification)

Use cases

Test cases

Test plan

Exit criteria ensure that testing is completed and the application is ready for release, with deliverables like:

Test summary report

Metrics

Defect analysis report

Page 10: Testing concept definition

Suspension criteria & resumption requirements

Suspension criteria specify the criteria used to suspend all or a portion of the testing activities, while resumption criteria specify when testing can resume after it has been suspended. Typical suspension criteria include:

Unavailability of external dependent systems during execution.

A defect is introduced that does not allow any further testing.

A critical-path deadline is missed, so that the client will not accept delivery even if all testing is completed.

A specific holiday shuts down both development and testing.


PREPARED BY

Vivek.V

Hand Phone:+91 9566582681.

Email ids: [email protected] ,

[email protected]