
Best Practices for Writing and Organizing QA Tests

Best Practices Guidelines

Using Test Plans, Test Suites and Test Cases


Revision History

Date | Version | Description | Author
July 14, 2014 | 1.0 | Initial Creation | Sarah Goldberg
March 14, 2016 | 2.0 | Revised/Updated | Sarah Goldberg

Contents

Agile Testing
What is a Test Plan?
    How to Write a Test Plan
What is a Test Suite?
    Creating Test Suites and Adding Test Cases
What Is A Test Case?
    Designing Test Cases
    Best Practices for Test Case Writing
    Testing Levels
    Tips for Writing Effective Test Cases
        Test Case Naming Conventions
        Description
        Assumptions and Preconditions
        Input Test Data
        Cover all Verification Points in Test Design Steps
        Attach the Relevant Artifacts
        Expected Result
        Divide Special Functional Test Cases into Sets
        Legible & Easily Understandable by Others
        Review
        Reusable
        Maintenance & Updates
        Post Conditions
        Test Case Area Classification
    Types of Tests That Can Be Used To Build Test Cases
        Development Testing
        Installation Testing
        Smoke and Sanity Testing
        Functional Testing
        Non-functional Testing
        Unit Testing
        Integration Testing
        Compatibility Testing
        Graphical User Interface Testing
        Database Testing
        Security Testing
        User Acceptance Testing
        Application Programming Interface Testing
        Performance Testing
        Destructive Testing
        Regression Testing
        Internationalization and Localization


Agile Testing is a software testing practice that follows the principles of agile software development. Agile testing involves all members of a cross-functional agile team, with special expertise contributed by testers, to ensure delivering the business value desired by the customer at frequent intervals, working at a sustainable pace.

Agile development recognizes that testing is not a separate phase, but an integral part of software development, along with coding. Agile teams use a "whole-team" approach to "bake quality in" to the software product. Testers on agile teams lend their expertise in eliciting examples of desired behavior from customers and collaborating with the development team to turn those into executable specifications that guide coding. Testing and coding are done incrementally and iteratively, building up each feature until it provides enough value to release to production.

The role of a tester in an agile project requires a wider variety of skills:

Domain knowledge about the system under test
The ability to understand the technology being used
A level of technical competency to interact effectively with the development team

Since working increments of the software are released often in agile software development, there is a need to test often. This is done by using automated acceptance testing to minimize the amount of manual labor. Doing only manual testing in agile development would likely result in either buggy software or slipped schedules, because it would most often not be possible to test the whole software manually before every release.
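
As a minimal illustration, an automated acceptance test is just test code that drives the application the way a user story describes. The sketch below uses Python's pytest conventions; the ShoppingSession class and its methods are hypothetical stand-ins for a real application driver (a UI or API client), not part of any actual product.

    # Hypothetical acceptance test: a logged-in user can add a product to the cart.
    class ShoppingSession:
        def __init__(self):
            self.logged_in = False
            self.cart = []

        def log_in(self, email, password):
            # Stand-in for real authentication against the application.
            self.logged_in = bool(email and password)
            return self.logged_in

        def add_to_cart(self, product):
            if not self.logged_in:
                raise PermissionError("must log in before adding products")
            self.cart.append(product)

    def test_logged_in_user_can_add_product_to_cart():
        session = ShoppingSession()
        assert session.log_in("user@example.com", "secret")
        session.add_to_cart("gift certificate")
        assert "gift certificate" in session.cart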

What is a Test Plan?

A Test Plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. The plan typically contains a detailed understanding of the eventual workflow. A test plan is usually prepared by, or with significant input from, the Testers. More simply, it is a way of keeping test cases organized.

Depending on the product and the responsibility to which the test plan applies, a test plan may include a strategy for one or more of the following:

Design Verification or Compliance Test - to be performed during the development or approval stages of the product, typically on a small sample of units.

Manufacturing or Production Test - to be performed during preparation or assembly of the product in an ongoing manner for purposes of performance verification and quality control.

Acceptance or Commissioning Test - to be performed at the time of delivery or installation of the product.


Service and Repair Test - to be performed as required over the service life of the product.

Regression Test - to be performed on an existing operational product, to verify that existing functionality didn't get broken when other aspects of the environment are changed (e.g., upgrading the platform on which an existing application runs).

A complex system may have a high level test plan to address the overall requirements and supporting test plans to address the design details of subsystems and components.

In Microsoft Test Manager, a test plan defines what to test, and stores the results of your tests. All your tests are planned and performed in the context of a test plan. A team project typically has multiple test plans. You create a separate test plan for each sprint, milestone, feature, or other iteration. If your project is developing several functional areas concurrently, you would also have separate test plans for each area.

How to Write a Test Plan

Test plans outline the process of testing the functionality of software. A test plan details each step taken to achieve a certain result and states the objective of each action. The plan also highlights the projected resources, risks, and personnel involved in the test. You should use a test plan if you are seeking to eliminate bugs and other errors in your software before it becomes available to customers. Follow the steps below to create a test plan.

1. Write an introduction. An introduction includes a general description and schedule of a test, as well as any related documents.

A document description provides an overall mission statement, covering the methods that will be used in the testing process and the projected results. Related documents include any peripheral material that is relevant to the current project, such as lists of specifications. A schedule details the increments of time in which each phase of the test will be completed.

2. Write a section on required resources. This section describes all of the resources needed to complete the testing, including hardware, software, testing tools, and staff.

When accounting for your staff, make sure to detail the responsibilities required of each member and the training needed to execute those responsibilities.

3. Write a section on what you are going to test. List what new aspects you will be testing and what old aspects you will be re-testing.

4. Write a section on what you will not be testing. List any features that will not be tested during the current project.

5. Write a list of documents that will be produced during testing.


6. Write a section on risks and dependencies. Detail all the factors that your project depends on and the risks involved in each step.

7. Write a section on the results of your project. Outline all the goals that you hope to achieve during the testing process. Detail the parameters for which success and failure can be measured.

What is a Test Suite?

Test cases can be organized into a hierarchy of Test Suites inside test plans. Test suites can identify gaps in a testing effort where the successful completion of one test case must occur before beginning the next test case. For instance, you cannot add new products to a shopping cart before you successfully log in to the application. A test case can be shared between test plans, so that you can repeat the same test in successive sprints.

Creating Test Suites and Adding Test Cases

There are different types of test suites which can be used in test plans. You can add a test case to more than one suite or test plan, or none. Deleting a suite does not delete its test cases.

Microsoft Test Manager allows you to create three types of suites:

Static Test Suites are like folders. A static test suite can contain both test cases and other suites. The root suite of the test plan is a static suite.

Requirements-based Test Suites are derived from Product Backlog Items, User Stories, or other requirements. The suite contains all the test cases that are linked to its requirement. This type helps you track how well each requirement has been tested. Whenever you add or remove a test case from a requirements-based suite, the link between the requirement and the test case is created or destroyed.

Query-based Test Suites show the results of a query that you define. For example, you could select all the test cases that have Priority = 1. You edit it to select the test case work items that you want. You can edit it again later. The query runs automatically every time you open or run the suite. You should not change the first two clauses in the work item query. They ensure that the work items are test cases in your project.

When you create a new test plan, you might want to copy some of the suites from an earlier test plan. The Copy button does not create new test cases. Instead, the copied test suites refer to the same test cases.

Delete a suite only if it has not been used. Otherwise, set its state to Completed. When you delete a test suite, any nested test suites are deleted, but the test cases it refers to are unchanged.


What Is A Test Case?

A Test Case is simply a list of actions which need to be executed to verify a particular functionality or feature of your application under test.

Less simply, a test case is a set of conditions or variables and inputs that are developed for a particular goal or objective to be achieved on a certain application to judge its capabilities or features.

It might take more than one test case to determine the true functionality of the application being tested. Every requirement or objective to be achieved needs at least one test case. Some software development methodologies, like the Rational Unified Process (RUP), recommend creating at least two test cases for each requirement or objective: one for testing from a positive perspective and the other from a negative perspective.
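
As a sketch of that recommendation, the pair of tests below exercises a single hypothetical requirement ("the system accepts well-formed email addresses") from both perspectives; validate_email is an illustrative function, not a real API.

    import re

    def validate_email(address):
        # Hypothetical function under test: accepts simple name@domain.tld forms.
        return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

    def test_validate_email_positive():
        # Positive perspective: a well-formed address is accepted.
        assert validate_email("diner@restaurant.com")

    def test_validate_email_negative():
        # Negative perspective: a malformed address is rejected.
        assert not validate_email("diner@@restaurant")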

Designing Test Cases

Test cases should be designed and written by someone who understands the function or technology being tested. A test case should include the following information:

Purpose of the test
Software and hardware requirements (if any)
Specific setup or configuration requirements
Description of how to perform the test(s)
Expected results or success criteria for the test

Designing test cases can be time-consuming in a testing schedule, but the effort is worth it. Designing proper test cases is vital for software testing plans, as many bugs, ambiguities, inconsistencies and slip-ups can be caught in time. It also saves time otherwise spent on continuous debugging and re-testing.

One of the most challenging aspects of software testing is designing good test cases. Since test cases lay a foundation for effective test management, and further for sustenance engineering, they should be treated as a product in themselves, and test professionals should take pride in the quality of the test cases because they are their creation.

Best Practices for Test Case Writing

Writing effective test cases is a skill that is achieved through experience and in-depth study of the application on which the test cases are being written. The basic objective of writing test cases is to validate the testing coverage of the application. The test cases should be written in enough detail that they could be given to a new team member, who would be able to quickly start carrying out the tests and finding defects.

The most extensive effort in preparing to test an application is writing test cases. Documenting the test cases prior to test execution ensures that the tester does the 'homework' and is prepared for the 'attack' on the application under test.


Testing Levels

Each test case falls into one of several levels, which helps avoid duplicated effort.

Level 1: In this level you write the basic test cases from the available specification and user documentation. It is a good idea to cover all the positive scenarios first and then think about all possibilities of negative scenarios; these will effectively find most of the bugs.

Level 2: This is the practical stage, in which writing test cases depends on the actual functional and system flow of the application.

Level 3: This is the stage in which you group some test cases and write a test procedure. A test procedure is simply a group of small test cases, ten at most.

Level 4: Automation of the project. This minimizes human interaction with the system, so QA can focus on testing currently updated functionality rather than remaining busy with regression testing.

Tips for Writing Effective Test Cases

Test Case Naming Conventions. It's always good practice to name your test cases in a way that makes sense to anyone who will refer to the test in the future. Name the test case to represent the module name or functional area you are going to verify in that test case. For example, suppose you are working on a project “Mainsite” which has a functional area called “Login”, and you want to write a test that verifies whether the user is able to log in to the website using an email and password. Instead of naming the test TC_01, you could use the naming convention below, so that anyone can make out what the test is for just by looking at its name.

Mainsite_Login_ValidCredentials
Mainsite_Login_InvalidCredentials
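
Carried into automated tests, the same convention might look like the skeletons below (a sketch only; the bodies are placeholders rather than a real implementation):

    # Hypothetical test skeletons following the <Project>_<Area>_<Condition> idea.
    def test_mainsite_login_valid_credentials():
        """Mainsite_Login_ValidCredentials: user lands on the account page."""
        ...  # drive the login form with a valid email and password

    def test_mainsite_login_invalid_credentials():
        """Mainsite_Login_InvalidCredentials: an error message is displayed."""
        ...  # drive the login form with an invalid email or password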

Description is where you mention all the details about what you are going to test, and the particular behavior being verified by the test. Try to make the description of the test process and the possible results as comprehensive and detailed as possible. Try to think of any assumptions you may be making about the process and instead write them out explicitly. Examples include specifying whether a test should start with a clean installation of the software to be tested (or the distribution as a whole), or whether to create some initial configuration before running a test. Try to place yourself in the shoes of someone who is entirely unfamiliar with the function to be tested, and make sure they can successfully complete the test simply by following the procedure. As much as possible, without compromising comprehensiveness, try to write the test case such that it will not go 'stale'. Try to ensure your test case will be usable for a long time without regular editing.

The following information would be found in a well-written test case description:

Test to be carried out / behavior being verified
Preconditions and Assumptions (any dependencies)
Test Data to be used
Test Environment Details (if applicable)
Any Testing tools to be used for the test

Assumptions and Preconditions. While writing test cases, you should communicate all assumptions that apply to a test, along with any preconditions that must be met before the test can be executed. Below are the kinds of details that should be covered in this section:

Any user data dependency (e.g. whether the user should be logged in, on which page the user should start the journey, etc.)
Dependencies on the test environment
Any special setup to be done before test execution
Dependencies on any other test cases – does the test case need to be run before/after some other test case?

Again, the more detailed the information provided, the more accurate the test cases will be.

Input Test Data. Identifying test data can be a time-consuming activity – often, test data preparation takes most of the time in a testing cycle.

To make life easier as a tester, record (wherever applicable) the test data to be used within the test case description or alongside the specific test case steps. This saves a lot of time in the long run, as you won't have to hunt for new test data every time you need to execute the test.

A few pointers:

In many cases where you know the test data can be re-used over time, you can mention the exact Test Data to be used for the test.

If the test only involves some values to be verified, you can opt to specify the value range or describe what values are to be tested for which field. You can do so for negative scenarios as well.

Testing with every value is impractical, so you can choose a few values from each equivalence class, which should provide good coverage for your test (see the sketch after this list).

You could also decide to mention the type of data which is required to run the test rather than the real test data value. This applies to projects where the test data keeps changing because multiple teams use it, and it may not be in the same state the next time it is needed. So, mentioning the type/state of user test data to be used helps a great deal for anyone who runs the test next!
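
The equivalence-class idea can be captured directly in a parametrized test. A minimal sketch, assuming a hypothetical validator that accepts discount percentages from 0 to 100:

    import pytest

    def is_valid_discount(percent):
        # Hypothetical validator under test: discounts from 0 to 100 are valid.
        return 0 <= percent <= 100

    # A few representatives per equivalence class instead of every value:
    # below range, lower boundary, nominal, upper boundary, above range.
    @pytest.mark.parametrize("percent,expected", [
        (-1, False),   # below the valid range
        (0, True),     # lower boundary
        (50, True),    # nominal value
        (100, True),   # upper boundary
        (101, False),  # above the valid range
    ])
    def test_discount_equivalence_classes(percent, expected):
        assert is_valid_discount(percent) == expected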

Cover all Verification Points in Test Design Steps. Another important part of a well-written test case is the properly defined Test Case Steps. The steps should not only cover the functional flow but also each verification point which should be tested.

Restaurant.com Best Practices for Writing and Organizing QA Tests 9 | P a g e

Page 10: Best Practices for Writing and Organizing QA Tests

By comparing your Test Case steps with the Requirement documents, Use Cases, User Stories or Process Maps given for your project, you can make sure that the Test Case optimally covers all the verification points.

Attach the Relevant Artifacts. Wherever possible you should attach the relevant artifacts to your test case. If the change you are going to test is very minor, you could consider mentioning it in the test step itself.

Expected Result. A well-written Test Case clearly mentions the expected result of the application/system under test. Each test design step should clearly state what you expect as the outcome of that verification step. So, while writing test cases, mention in detail what page/screen you expect to appear after the test, and any changes you expect to be made to any backend legacy systems or database.

You can also attach screenshots or specification documents to the relevant step and mention that the system should work as outlined in the given document to make things simpler.

Divide Special Functional Test Cases into Sets

For effective test case writing, consider breaking down the Test Cases into sets and sub-sets to test special scenarios like browser-specific behaviors, cookie verification, usability testing, Web Service testing, checking error conditions, etc.

If you strive to write effective test cases, you should write these special functional test cases in isolation. For instance, Test Cases that check error conditions should be written separately from the functional test cases and should have steps to verify the error messages.

For example, if you need to verify how the login feature for any application works with invalid input, you can break this negative testing for login functionality into sub tests for different values like:

Verify with invalid email-id
Verify with invalid password
Verify with blank email-id field and so on…
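
Those sub-tests fit naturally into one parametrized negative test. A sketch, assuming a hypothetical attempt_login call that returns an error message for invalid input:

    import pytest

    def attempt_login(email, password):
        # Hypothetical stand-in for the real login call; it returns an error
        # string so each negative sub-test can assert on the exact message.
        if not email:
            return "Email is required"
        if "@" not in email:
            return "Invalid email address"
        if password != "correct-password":
            return "Invalid password"
        return "OK"

    @pytest.mark.parametrize("email,password,expected_error", [
        ("not-an-email", "secret", "Invalid email address"),  # invalid email-id
        ("user@example.com", "wrong", "Invalid password"),    # invalid password
        ("", "secret", "Email is required"),                  # blank email-id
    ])
    def test_login_rejects_invalid_input(email, password, expected_error):
        assert attempt_login(email, password) == expected_error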

Legible & Easily Understandable by Others. When designing Test Cases, consider that they will not always be executed by the ones who design them. So, the tests should be easily understandable, legible and to-the-point.

Test Case suites that are only clear to the ones who designed them are ubiquitous.

Write Test Cases that:

Are simple and easily understandable by everyone
Are to-the-point; if a Test Case goes beyond a certain number of steps, break it into a new Test Case
Still have enough coverage


Review. With Test Cases playing an important role in the Software Testing Life-cycle, making sure they are correct and conform to standards becomes even more necessary. Test case reviews can be done by peer testers, BAs, developers, product owners or any relevant stakeholders.

Reusable. Keep in mind, when writing test cases, that they could be re-used in the future by other projects/teams. On that note, before writing a new Test Case for your project/module, always try to find out whether Test Cases have already been written for the same test.

If you spend a bit of time with other teams finding out about existing test cases, you won't risk repeating any test cases, and so avoid redundancies in Test Management Tools. Also, if existing test cases were written earlier around the same module, update those instead of writing new TCs. And if you need a particular test case in order to execute some other test case, you can simply call the existing test case in the preconditions or at the specific test design step.

Maintenance & Updates. It is vitally important to make sure that Test Cases are always updated to reflect newly introduced changes in the application they apply to. Always consider updating the existing Test Cases (if any) before you start writing new Test Cases.

Reiterating reusability: in case of any changes to an existing journey or functionality, you should consider updating the existing Test Cases instead of writing new ones, thereby avoiding redundancy in the existing set.

This also makes sure you always have updated Test Cases for any journey in your application.

Post Conditions specify the various things that need to be verified after the test has been carried out. In addition, post-conditions are used to give guiding instructions for restoring the system to its original state, so that it does not interfere with subsequent testing.

For example, this is quite useful if you mention the changes to be made to the test data so that it can be used by a later Test Case for the same functionality.
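
In automated tests, preconditions and post-conditions map naturally onto setup and teardown. A minimal pytest sketch, where the account and promo code are hypothetical test data:

    import pytest

    @pytest.fixture
    def promo_account():
        # Precondition: the test needs an account holding one unused promo code.
        account = {"user": "qa-test-01", "promo_codes": ["SAVE10"]}
        yield account
        # Post-condition: restore the original state so later tests that reuse
        # this account are not affected by the consumed promo code.
        account["promo_codes"] = ["SAVE10"]

    def test_promo_code_is_consumed_on_redeem(promo_account):
        promo_account["promo_codes"].remove("SAVE10")  # stand-in for redeeming
        assert "SAVE10" not in promo_account["promo_codes"]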

Test Case Area Classification. Use the Test Case Area Classification to ‘link’ the cases together and ensure they can easily be found both manually and automatically. You should always use the appropriate area whenever creating a test case in order to help people locate the test case in the future.

Types of Tests That Can Be Used To Build Test Cases

Development Testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Development testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.

Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices.

Installation Testing assures that the system is installed correctly and works on the actual customer's hardware.

Smoke and Sanity Testing can be used as build verification tests. Sanity testing determines whether it is reasonable to proceed with further testing. Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all.

Functional Testing is designed to measure the quality of the functional components of the system. Test Cases verify that the system behaves correctly from the user/business perspective and functions according to the requirements, models, storyboards, or any other design paradigm used to specify the application. The functional test cases must determine whether each component or business event: performs in accordance with the specifications, responds correctly to all conditions that may be presented by incoming events/data, moves data correctly from one business event to the next (including data stores), and initiates business events in the order required to meet the business objectives of the system.

Non-functional Testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance characteristics, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.

Unit Testing verifies individual units of source code: sets of one or more program modules, together with associated control data, usage procedures, and operating procedures, are tested to determine whether they are fit for use. A unit can be viewed as the smallest testable part of an application. In procedural programming, a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming, a unit is often an entire interface, such as a class, but could be an individual method. Unit tests are short code fragments created by programmers, or occasionally by white box testers, during the development process.

Unit testing provides a sort of living documentation of the system. Developers looking to learn what functionality is provided by a unit, and how to use it, can look at the unit tests to gain a basic understanding of the unit's interface (API). Unit test cases embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate or inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A unit test case, in and of itself, documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development.
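
As a sketch of tests serving as living documentation, the class below documents the interface of a single hypothetical function, including the error behavior the unit is expected to trap:

    import unittest

    def split_bill(total_cents, diners):
        # Hypothetical unit under test: divide a bill evenly, giving any
        # remainder to the first diner so the shares always sum to the total.
        if diners < 1:
            raise ValueError("diners must be >= 1")
        share, remainder = divmod(total_cents, diners)
        return [share + remainder] + [share] * (diners - 1)

    class TestSplitBill(unittest.TestCase):
        def test_even_split(self):
            self.assertEqual(split_bill(900, 3), [300, 300, 300])

        def test_remainder_goes_to_first_diner(self):
            self.assertEqual(split_bill(1000, 3), [334, 333, 333])

        def test_rejects_zero_diners(self):
            with self.assertRaises(ValueError):
                split_bill(900, 0)

    if __name__ == "__main__":
        unittest.main()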


Integration Testing is used to identify problems that occur when units are combined. In its simplest form, two units (that have already been tested) are combined into a component and the interface between them is tested. This type of testing is typically done after unit testing and before system testing. Some different types of integration testing are big bang, top-down, bottom-up, and sandwich.

In Big Bang Testing, all or most of the developed modules are coupled together to form a complete software system, or a major part of the system, which is then used for integration testing. The Big Bang method is very effective for saving time in the integration testing process. However, if the test cases and their results are not recorded properly, the entire integration process will be more complicated and may prevent the testing team from achieving the goal of integration testing.

Bottom-Up Testing is an approach to integrated testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested. All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of lower level integrated modules, the next level of modules will be formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.

Top-Down Testing is an approach to integrated testing where the top integrated modules are tested and the branch of the module is tested step by step until the end of the related module.

Sandwich Testing is an approach to combine top down testing with bottom up testing.

By using a test plan that requires you to test each unit and ensure the viability of each before combining units, you know that any errors discovered when combining units are likely related to the interface between units. The overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages. This method reduces the number of possibilities to a far simpler level of analysis.
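
A tiny illustration of the building-block idea (both units are hypothetical and assumed to have passed their own unit tests; the integration test exercises only the interface between them):

    def apply_discount(total_cents, percent):
        # Unit A (already unit-tested): apply a percentage discount.
        return total_cents - (total_cents * percent) // 100

    def format_receipt(total_cents):
        # Unit B (already unit-tested): render a cent total as a dollar string.
        return "${:,.2f}".format(total_cents / 100)

    def test_discounted_total_flows_into_receipt():
        # Integration test: any failure here points at the interface between
        # the two units, since each unit passed its own tests in isolation.
        assert format_receipt(apply_discount(2000, 25)) == "$15.00"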

Compatibility Testing

A common cause of software failure (real or perceived) is a lack of compatibility with other application software, operating systems, or target environments that differ from the original. For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.


Graphical User Interface Testing is used to test an application's graphical user interface and to detect whether the application is functionally correct and meets its written specifications. GUI testing involves carrying out a set of tasks, comparing the results with the expected output, and repeating the same set of tasks multiple times with different data input and the same level of accuracy. GUI testing covers how the application handles keyboard and mouse events, how different GUI components like menu bars, toolbars, dialogs, buttons, edit fields, list controls, and images react to user input, and whether they perform in the desired manner. Implementing GUI testing early in the software development cycle speeds up development, improves quality, and reduces risks towards the end of the cycle. GUI testing can be performed manually by a human tester or automatically with the use of a software program.
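
A minimal GUI test sketch using Selenium WebDriver, a widely used browser-automation library; the URL and the element IDs ("email", "password", "login-button", "welcome-banner") are hypothetical and would need to match the real page:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        # Simulate the keyboard and mouse events a user would produce.
        driver.find_element(By.ID, "email").send_keys("user@example.com")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # Compare the actual result with the expected output.
        assert driver.find_element(By.ID, "welcome-banner").is_displayed()
    finally:
        driver.quit()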

Database Testing involves testing strategies such as quality control and quality assurance of the product databases. Black-box testing and White-box testing are different database testing methods.

Black Box Testing involves testing interfaces and the integration of the database, which includes:

1. Mapping of data (including metadata)
2. Verifying incoming data
3. Verifying outgoing data from query functions
4. Various techniques such as cause-effect graphing, equivalence partitioning and boundary-value analysis

With the help of these techniques, the functionality of the database can be tested thoroughly.

White box Testing mainly deals with the internal structure of the database. The specification details are hidden from the user.

1. It involves the testing of database triggers and logical views which are going to support database refactoring.
2. It performs module testing of database functions, triggers, views, SQL queries, etc.
3. It validates database tables, data models, database schema, etc.
4. It checks rules of referential integrity.
5. It selects default table values to check on database consistency.
6. The techniques used in white box testing are condition coverage, decision coverage, statement coverage, and cyclomatic complexity.

The main advantage of white box testing in database testing is that coding errors are detected, so internal bugs in the database can be eliminated. The limitation of white box testing is that SQL statements are not covered.
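
As a self-contained sketch of a database test, the example below uses Python's built-in sqlite3 module and a hypothetical two-table schema to check one referential-integrity rule (item 4 in the white box list above):

    import sqlite3

    def test_order_requires_existing_customer():
        # Hypothetical schema; an in-memory SQLite database keeps the sketch
        # self-contained. The referential-integrity rule is what is under test.
        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")
        conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
        conn.execute(
            "CREATE TABLE orders (id INTEGER PRIMARY KEY, "
            "customer_id INTEGER REFERENCES customers(id))"
        )
        conn.execute("INSERT INTO customers (id) VALUES (1)")

        # Valid incoming data: the order references an existing customer.
        conn.execute("INSERT INTO orders (customer_id) VALUES (1)")

        # Invalid incoming data: must be rejected with an integrity error.
        try:
            conn.execute("INSERT INTO orders (customer_id) VALUES (999)")
            assert False, "expected a foreign key violation"
        except sqlite3.IntegrityError:
            pass
        conn.close()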

Security Testing is used to determine that an information system protects data and maintains functionality as intended. Security testing is performed to check whether there is any information leakage, for example by encrypting the application or using a wide range of software, hardware, firewalls, etc. Commonly used security testing techniques are: network scanning, vulnerability scanning, password cracking, log review, integrity checkers, virus detection, war dialing, war driving (wireless LAN testing), and penetration testing.

Security testing is carried out when important information and assets managed by the software application are of significant importance to the organization. Failures in the software security system can be serious, especially when not detected, resulting in a loss or compromise of information without the knowledge of that loss.

Security testing should be performed both before the system goes into operation and after the system is put into operation.

Security testing as a term has a number of different meanings and can be completed in a number of different ways. The six basic security testing concepts are:

Authentication - Testing the authentication schema means understanding how the authentication process works and using that information to circumvent the authentication mechanism. Basically, it allows a receiver to have confidence that information it receives originated from a specific known source.

Authorization - Determining that a requester is allowed to receive a service or perform an operation.

Availability - Assuring information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.

Confidentiality - A security measure which protects against the disclosure of data or information to parties other than the intended ones.

Integrity - Ensuring that the information or data the receiver receives has not been altered in transmission.

Non-repudiation - Interchange of authentication information with some form of provable time stamp, e.g. a session id.

User Acceptance Tests (in agile software development) are usually created by business customers and expressed in a business domain language. User Acceptance Testing is also known as beta testing, application testing or end user testing. These are high-level tests to verify the completeness of a user story or stories 'played' during any sprint/iteration. These tests are ideally created through collaboration between business customers, business analysts, testers, and developers. It is essential that these tests include both business logic tests and UI validation elements (if need be). The business customers (product owners) are the primary project stakeholders for these tests. The work associated with UAT begins after requirements are written and continues through the final stage of testing before the client/user accepts the new system.


Agile User Acceptance Testing begins when user stories are defined. A user story should include both the story and acceptance test cases (also known as acceptance criteria). One technique for creating and grooming user stories gathers representatives from the business, professional testing, and development, so that all major constituencies are represented. As a story is defined, so are the acceptance criteria. Adding the focus on business acceptance criteria during the definition of user stories begins the UAT process early, rather than waiting until later in the project. Laying out the acceptance criteria when you begin work on a story helps the team stay focused on what is actually needed and reduces the potential for rework and for adding extra features.

Application Programming Interface Testing is like testing software at the user-interface level, only instead of testing by means of standard user inputs and outputs, you use software to send calls to the API, get output, and log the system’s response. Depending on the testing environment, you may use a suite of prepared test applications, but very often, you will wind up writing code specifically to test the API. Regardless of the actual testing conditions, it is important to know what API test code should do.

An API is a set of procedures, functions, and other points of access which an application, an operating system, a library etc., makes available to programmers in order to allow it to interact with other software. An API is a bit like a user interface, only instead of a user-friendly collection of windows, dialog boxes, buttons, and menus, the API consists of a set of direct software links, or calls, to lower-level functions and operations. Thorough application programming interface testing should cover at least the following:

Discovery Testing - Testers should manually exercise the set of calls documented in the API.

Usability Testing - Usability testing evaluates whether the API is functional and friendly from the perspective of a customer (typically a software developer) who will be using it to build something. Does the API integrate well with the platforms it's intended for? Is it consistent, and at a reasonable level of abstraction? Basically, does it make sense?

Security Testing - Are security requirements defined for the API? What type of authentication, if any, is required, and what permissions structures may apply? Is sensitive data always encrypted, sent over HTTPS, or both? Are there ways people can get access to things they shouldn't?

Automated Testing - In general, API testing should culminate in the creation of a set of scripts, or a tool, that can be used to regularly exercise the API and report errors with minimal human interaction.
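
A sketch of what such an automated check might look like in Python using the requests library; the base URL, endpoint, and response fields are hypothetical:

    import requests

    BASE_URL = "https://api.example.com"

    def test_get_restaurant_returns_expected_fields():
        # Send a call to the API, get output, and check the system's response.
        response = requests.get(f"{BASE_URL}/restaurants/42", timeout=10)
        assert response.status_code == 200
        body = response.json()
        assert body["id"] == 42
        assert "name" in body

    def test_unknown_restaurant_returns_404():
        response = requests.get(f"{BASE_URL}/restaurants/0", timeout=10)
        assert response.status_code == 404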

Performance Testing determines how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance test cases describe the types of users, and the number of users of each type, that will be simulated in a performance test. The test designer should go through the non-functional requirements document before authoring performance test cases. There are a few test types that can be used to test the performance of a system:


Load Testing - The simplest form of performance testing. A load test is usually conducted to understand the behavior of the system under a specific expected load. This load can be the expected concurrent number of users on the application performing a specific number of transactions within the set duration. This test will give the response times of all the important business-critical transactions. If the database, application server, etc. are also monitored, then this simple test can itself point towards bottlenecks in the application software.
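
In miniature, a load test boils down to simulating a number of concurrent users and recording response times. The sketch below is illustrative only; real load tests would use a dedicated tool such as JMeter or Locust, and the URL is hypothetical:

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    def timed_request(url):
        start = time.perf_counter()
        requests.get(url, timeout=30)
        return time.perf_counter() - start

    def run_load_test(url, concurrent_users=25, requests_per_user=4):
        # Simulate the expected concurrent users, each issuing several requests.
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            timings = sorted(pool.map(
                timed_request, [url] * (concurrent_users * requests_per_user)))
        print(f"median: {timings[len(timings) // 2]:.3f}s  "
              f"worst: {timings[-1]:.3f}s")

    if __name__ == "__main__":
        run_load_test("https://example.com/")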

Stress Testing - Normally used to understand the upper limits of capacity within the system. This kind of test is done to determine the system's robustness in terms of extreme load and helps application administrators to determine if the system will perform sufficiently if the current load goes well above the expected maximum.

Soak Testing - Also known as endurance testing, this is usually done to determine whether the system can sustain the continuous expected load. During soak tests, memory utilization is monitored to detect potential leaks. Also important, but often overlooked, is performance degradation: ensuring that the throughput and/or response times after some long period of sustained activity are as good as or better than at the beginning of the test. Soak testing essentially involves applying a significant load to a system for an extended, significant period of time. The goal is to discover how the system behaves under sustained use.

Spike testing - Done by suddenly increasing the number of or load generated by users - by a very large amount - and observing the behavior of the system. The goal is to determine whether performance will suffer, the system will fail, or it will be able to handle dramatic changes in load.

Configuration testing - Rather than testing for performance from the perspective of load, tests are created to determine the effects of configuration changes to the system's components on the system's performance and behavior. A common example would be experimenting with different methods of load-balancing.

Isolation testing - It’s not unique to performance testing but involves repeating a test execution that resulted in a system problem. Often used to isolate and confirm the fault domain.

Destructive Testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines.
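
A sketch of the destructive mindset: feed deliberately invalid and unexpected inputs to a hypothetical input-validation routine and require that it fails only in the controlled, documented way:

    import pytest

    def parse_quantity(raw):
        # Hypothetical input-validation routine under test: it should reject
        # bad input with a clean ValueError rather than crash or accept garbage.
        qty = int(raw)  # raises ValueError for non-numeric input
        if not 1 <= qty <= 99:
            raise ValueError("quantity out of range")
        return qty

    @pytest.mark.parametrize("bad_input", [
        "", "abc", "1e9", "-5", "0", "100", "12.5", "\x00", "9" * 1000,
    ])
    def test_parse_quantity_survives_hostile_input(bad_input):
        # None of these inputs may break the routine in an uncontrolled way;
        # a ValueError is the only acceptable failure mode.
        with pytest.raises(ValueError):
            parse_quantity(bad_input)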

Regression Testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code.


Common methods of regression testing include re-running previous sets of test-cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.

Regression testing is typically the largest test effort in commercial software development, due to checking numerous details in prior software features, and even new software can be developed while using some old test-cases to test parts of the new design to ensure prior functionality is still supported.

Internationalization and Localization. The general ability of software to be internationalized and localized can be tested automatically, without actual translation, by using pseudo-localization. Pseudo-localization verifies that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).
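
A pseudo-localization transform can be written in a few lines. The sketch below accents letters and pads strings so that hard-coded (untranslated) text and truncation problems stand out in the UI without any real translation:

    # Accent the vowels and pad the string so untranslated text and
    # length-sensitive layouts are easy to spot during testing.
    ACCENTED = str.maketrans("aeiouAEIOU", "áéíóúÁÉÍÓÚ")

    def pseudo_localize(message):
        expanded = message.translate(ACCENTED)
        # Pad roughly 30% to mimic longer translations (e.g. English to German).
        padding = "~" * max(1, len(message) * 3 // 10)
        return f"[{expanded}{padding}]"

    print(pseudo_localize("Add to cart"))  # prints [Ádd tó cárt~~~]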

Actual translation to human languages must be tested, too. Possible localization failures include:

Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.

Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.

Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.

Untranslated messages in the original language may be left hard coded in the source code.

Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.

Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.

Software may lack support for the character encoding of the target language.

Fonts and font sizes which are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.

A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.

Software may lack proper support for reading or writing bi-directional text.

Software may display images with text that was not localized.

Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.
