Performance Testing Using VS 2010 - Part 1
DESCRIPTION
This presentation covers the basics of performance testing as a theory and how to apply the concepts using a Visual Studio 2010 test project.
TRANSCRIPT
Performance Test Using Visual Studio 2010
Presented By: Mohamed Tarek
Agenda – Day 1
Why Functional Testing is not enough?
Why Performance Test?
Difference between Performance, Load and Stress Testing
Performance testing process
Different Methods for Applying Load
How to Measure the System Performance?
Points to be considered before & during testing
Risks Addressed Through Performance Testing
Why VSTS (From a Testing Perspective)?
Types of Tests
Web test
Load test
Ordered test
Agenda – Day 1
Web Testing
How it works?
Creating a web test
Recording the web test
Editing the web test
Configuring the web test
Running the web test
Why Functional Testing is not enough
Why Performance Test (1)
Why Performance Test (2)
Difference between Performance, Load and Stress Testing
Performance testing is about response times, durations, and similar timing measures.
Load testing is about behavior under normal and peak workload conditions. Load testing is more about characterizing or simulating your actual workload.
Stress testing is about surfacing issues under extreme conditions and resource failures.
Testing objectives (Performance, Load, Stress)
Performance, load, and stress testing share a common action, applying load and measuring performance, but with different objectives:
Performance testing: measuring the system performance under constant load. Performance metrics: Speed, Accuracy, Stability.
Load testing: measuring the system performance under incremental load to study the relation between the applied load and the system performance.
Stress testing: applying incremental load on the system until it reaches a break point at which the system crashes, in order to avoid reaching this point in the future or to enhance the system performance.
Performance testing process
Identify Test Environment
Identify Performance Acceptance Criteria
Plan and Design Tests
Configure Test Environment
Implement Test Design (Configuration & Test Data)
Execute Tests
Analyze, Report and Retest
Different Methods for Applying Load
Constant number of users accessing the system concurrently
Variable number of users accessing the system concurrently
Constant size of data uploaded to the system by some users
(Ex: Different types of files)
Variable size of data uploaded to the system by some users
Constant size of data downloaded from the system by some users
Variable size of data downloaded from the system by some users
Combination of all of the above
Required Settings for the Applied Load
If the load is virtual users (VUs):
Number of users
Start time
End time
Ramp-up period (time taken to generate all VUs)
If the load is data file:
File name
File path
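As an illustration, the virtual-user settings listed above can be turned into a schedule of per-user start times spread across the ramp-up period. This is a hypothetical sketch in Python, not VSTS code; the class and field names are assumptions made here for clarity.

```python
from dataclasses import dataclass

@dataclass
class VirtualUserLoad:
    # Hypothetical container for the VU load settings listed above.
    num_users: int
    start_time: float   # seconds since test start
    end_time: float
    ramp_up: float      # time taken for generating all VUs

    def user_start_times(self):
        """Spread VU start times evenly across the ramp-up period."""
        step = self.ramp_up / self.num_users
        return [self.start_time + i * step for i in range(self.num_users)]

load = VirtualUserLoad(num_users=4, start_time=0.0, end_time=600.0, ramp_up=60.0)
print(load.user_start_times())  # [0.0, 15.0, 30.0, 45.0] - one new VU every 15 s
```

Ramping up gradually, rather than starting all VUs at once, avoids an artificial connection spike that would not occur in real usage.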
How to Measure the System Performance
Metrics used for measuring the system performance: Speed, Accuracy and Stability.
System Speed: always measured by one of the following: Avg. Page Load time, Avg. Transaction time or Avg. Response time.
System Accuracy: can be measured using the Avg. % of Error for all requests. %Error < 2%: High Accuracy; 2% < %Error < 4%: Acceptable; %Error > 4%: Poor.
System Stability: can be measured using the Standard Deviation and the Coefficient of Variation. CV = SD / Avg. Response time * 100%. CV < 5%: High Stability; 5% < CV < 10%: Acceptable; CV > 10%: Poor.
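The stability and accuracy formulas above are straightforward to compute. A small Python sketch, using the thresholds stated in the slides (the function names are my own, not part of any tool):

```python
import statistics

def stability(response_times):
    """Coefficient of variation: CV = SD / Avg. response time * 100%."""
    cv = statistics.stdev(response_times) / statistics.mean(response_times) * 100
    if cv < 5:
        rating = "High Stability"
    elif cv <= 10:
        rating = "Acceptable"
    else:
        rating = "Poor"
    return cv, rating

def accuracy(error_percent):
    """Classify the average % of error for all requests."""
    if error_percent < 2:
        return "High Accuracy"
    elif error_percent <= 4:
        return "Acceptable"
    return "Poor"

times = [1.00, 1.02, 0.98, 1.01, 0.99]  # response times in seconds
cv, rating = stability(times)
print(f"CV = {cv:.2f}% -> {rating}")    # CV = 1.58% -> High Stability
print(accuracy(1.5))                    # High Accuracy
```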
Example for Performance Testing (1)
Test objective:
We need to know the system performance during the rush hour.
Test inputs and prerequisites:
1) Duration of the rush hour
2) Number of users accessing the system concurrently at the rush hour
3) Ramp-up period
4) The commonly used scenarios
5) The user distribution among the scenarios (i.e. % of users executing each scenario)
6) Any input data like user credentials, serials, etc.
Example for Performance Testing (2)
Test results and reports:
1) Indication about the system speed (Avg. Page time, Avg. Response time or Avg. Transaction time)
2) Indication about the system accuracy (Avg. % of Error)
3) The errors that occurred and the URLs of the failed requests
4) Indication about the system stability (Avg. Standard Deviation and Coefficient of Variation)
5) Tabular report of all the results logged during the test run (optional)
6) Graphical report of system speed, accuracy and stability vs. time (optional)
Example for Load Testing (1)
Test objective:
We need to study the effect of increasing load on the system performance.
Test inputs and prerequisites:
1) Initial load to be applied on the system
2) Maximum load to be applied on the system (optional)
3) Type of the load to be applied
4) Step of the incremental load
5) The commonly used scenarios
6) The user distribution among the scenarios (i.e. % of users executing each scenario)
7) Any input data like user credentials, serials, etc.
Example for Load Testing (2)
Test results and reports:
1) Tabular report indicating the effect of increasing load on the system speed
2) Graphical chart indicating the effect of increasing load on the system speed
3) Tabular report indicating the effect of increasing load on the system accuracy
4) Graphical chart indicating the effect of increasing load on the system accuracy
5) Tabular report indicating the effect of increasing load on the system stability
6) Graphical chart indicating the effect of increasing load on the system stability
7) List of the switching points at which the system performance was greatly impacted
Example for Stress Testing
Test objective:
We need to know the maximum load that we could apply to the system.
Test inputs and prerequisites:
1) Initial load to be applied on the system
2) Definition of the test exit criteria (system crash point)
3) Type of the load to be applied
4) Step of the incremental load
5) The commonly used scenarios
6) The user distribution among the scenarios (i.e. % of users executing each scenario)
7) Any input data like user credentials, serials, etc.
Test results: the load at which the system crashes.
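The stress-test loop implied by these inputs (initial load, increment step, exit criterion) can be sketched as follows. The system under test is stubbed with a toy error model, purely an assumption for illustration; a real run would drive actual load against the SUT.

```python
def run_step(load):
    """Stubbed SUT: pretend the error rate grows linearly with load.
    This model is an assumption for illustration only."""
    return 0.5 * load  # % of failed requests at this load level

def stress_test(initial_load, step, crash_error_percent):
    """Apply incremental load until the exit criterion (crash point) is met.
    Returns the load at which the system 'crashed'."""
    load = initial_load
    while True:
        error = run_step(load)
        if error >= crash_error_percent:
            return load
        load += step

# Start at 10 users, add 10 per step, stop when errors reach 50%.
print(stress_test(initial_load=10, step=10, crash_error_percent=50))  # 100
```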
Points to be considered before & during testing
The testing environment should simulate the production environment.
In case the production environment cannot be simulated, the test results should be scaled accordingly.
The network factor should be isolated, usually by preparing a closed test lab for the testing machine and the SUT.
The testing tool should be compatible with the SUT.
The firewall and antivirus should be disabled on both the testing machine and the SUT during recording and execution.
Pop-ups and warnings should be disabled during recording and execution.
While recording, suitable think time should be considered between the different actions in the scenario.
Risks Addressed Through Performance Testing - Speed
Speed-related risks are not confined to end-user satisfaction, although that is what most people think of first. Speed is also a factor in certain business- and data-related risks. Some of the most common speed-related risks that performance testing can address include:
Is the application fast enough to satisfy end users?
Is the business able to process and utilize data collected by the application before that data becomes outdated? (For example, end-of-month reports are due within 24 hours of the close of business on the last day of the month, but it takes the application 48 hours to process the data.)
Is the application capable of presenting the most current information (e.g., stock quotes) to its users?
Is a Web Service responding within the maximum expected response time before an error is thrown?
Risks Addressed Through Performance Testing - Scalability
Scalability risks concern not only the number of users an application can support, but also the volume of data the application can contain and process, as well as the ability to identify when an application is approaching capacity. Common scalability risks that can be addressed via performance testing include:
Can the application provide consistent and acceptable response times for the entire user base?
Can the application store all of the data that will be collected over the life of the application?
Are there warning signs to indicate that the application is approaching peak capacity?
Will the application still be secure under heavy usage?
Will functionality be compromised under heavy usage?
Can the application withstand unanticipated peak loads?
Risks Addressed Through Performance Testing – Stability (1)
Stability is a blanket term that encompasses such areas as reliability, uptime, and recoverability. Although stability risks are commonly addressed with high-load, endurance, and stress tests, stability issues are sometimes detected during the most basic performance tests. Some common stability risks addressed by means of performance testing include:
Can the application run for long periods of time without data corruption, slowdown, or servers needing to be rebooted?
If the application does go down unexpectedly, what happens to partially completed transactions?
When the application comes back online after scheduled or unscheduled downtime, will users still be able to see/do everything they expect?
Risks Addressed Through Performance Testing – Stability (2)
When the application comes back online after unscheduled downtime, does it resume at the correct point? In particular, does it not attempt to resume cancelled transactions?
Can combinations of errors or repeated functional errors cause a system crash?
Are there any transactions that cause system-wide side effects?
Can one leg of the load-balanced environment be taken down and still provide uninterrupted service to users?
Can the system be patched or updated without taking it down?
Why VSTS (From a Testing Perspective)?
VSTS provides testers with many useful features, such as the following:
Provides many types of tests to satisfy testers' needs
Simple GUI for recording and configuring tests
Powerful analysis and reporting of test results
High flexibility in test management and control
Test scripting is allowed for advanced editing and control
Types of Tests in VSTS
VSTS provides many types of tests, such as: Web test, Load test, Ordered test, Manual test, Unit test and Generic test.
Web test: used for functional testing of web applications.
Load test: used for performance, load and stress testing of web applications.
Ordered test: used for managing and controlling the order of test execution.
The scope of this training focuses only on the most important of these types.
Web Testing - How it Works?
Web tests are used for testing the functionality of web applications, web sites, web services or a combination of all of these.
Web tests can be created by recording the interactions performed in the browser, which are normally a series of HTTP requests (GET/POST).
These requests can be played back to test the web application by setting validation points on the responses to validate them against the expected results.
Step 1: Record the user interactions [test scenario] in the web browser.
Step 2: Play back the recorded scenario after setting validation points on the responses.
Step 3: Get the test result [either success or failure] after validating against the expected results.
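The record, play back, and validate cycle can be sketched in miniature. This is not the VSTS engine: the transport is stubbed with canned responses (an assumption so no real server is needed), which keeps the flow of requests and validation points visible.

```python
# A recorded scenario is just an ordered list of HTTP requests (Step 1).
recorded = [
    ("GET", "/login"),
    ("POST", "/login"),
    ("GET", "/home"),
]

# Stub transport with canned (status, body) responses - an assumption,
# standing in for the real server during this illustration.
canned = {
    ("GET", "/login"):  (200, "<form>Login</form>"),
    ("POST", "/login"): (302, ""),
    ("GET", "/home"):   (200, "Welcome back"),
}

# Validation points: expected status plus text that must appear (Step 2).
validations = {
    ("GET", "/home"): (200, "Welcome"),
}

def play_back(requests):
    """Replay each request and check it against its validation point (Step 3)."""
    for method, url in requests:
        status, body = canned[(method, url)]
        expected = validations.get((method, url))
        if expected and (status != expected[0] or expected[1] not in body):
            return "Failed"
    return "Passed"

print(play_back(recorded))  # Passed
```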
Web Testing - Creating a Web test
1) From the FILE menu, select New -> Project -> Test Project, then determine the project name and path.
2) In the Solution Explorer panel, select the test project, right-click, and choose Add New Test from the context menu, which opens the Add New Test window containing the different test type templates.
3) Select the Web Test template from the list of test types and determine the test name.
4) Once you select the Web Test and click OK, the test is added to the selected test project and a new instance of a web browser opens.
Web Testing - Recording the Web test
When the browser opens to record the user interactions, it will contain the Web Test Recorder in the left pane, which is used for recording the test scenarios. The recorder has five options:
Record: to start recording the web page requests.
Pause: used to pause the recording, as in some cases we may need to skip recording some actions that are not included in the test scenario, then continue the recording normally.
Stop: to stop the recording session. Once we click the Stop button, the browser closes and the session stops.
Add a Comment: used for adding any comments to the current request in the recording.
Clear all requests: used in case we need to clear out all the recorded requests.
Web Testing - Editing the Web test
After completing the recording of all requests, you can see the Web Test editor window open with the recording details.
The HTTP requests sent to the server usually contain parameters. There are two types of these parameters:
Form POST parameters: sent along with the request if the request method is POST. All field entries made by the user in the web page are sent to the server as Form POST parameters.
Query string parameters: sent along with the request if the request method is GET. The web page is retrieved from the server depending on these parameters (e.g. user name and password).
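The two parameter types can be illustrated with the Python standard library: query string parameters travel in the URL of a GET request, while form POST parameters use the same encoding but travel in the request body. The URL and parameter names here are made-up examples.

```python
from urllib.parse import urlencode

params = {"user": "mtarek", "page": "2"}  # hypothetical field entries

# Query string parameters: appended to the URL of a GET request.
get_url = "http://example.com/search?" + urlencode(params)

# Form POST parameters: the same encoding, but sent in the request body.
post_body = urlencode(params).encode("ascii")

print(get_url)    # http://example.com/search?user=mtarek&page=2
print(post_body)  # b'user=mtarek&page=2'
```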
Web Testing - Editing the Web test (Web test properties)
There are some properties of the Web test that we can set, such as user credentials or a description for the test. There are also some properties that can be set for each request individually, such as the timeout or think time. The Web test properties are as follows:
Description: specifies the description of the current test.
Name: the name of the current Web test.
User Name: specifies the username of the test user, if we are using any user credentials for this test.
Password: the password for the username specified in the User Name field.
Proxy: sets the proxy server name to be used by the test.
Stop On Error: informs the application whether to stop the test or continue in case of any errors.
Web Testing - Editing the Web test (Request properties)
Request properties are as follows:
-------------------------------------------------
Cache Control: simulates the caching behavior of the web pages. The value can be true or false.
Expected Response URL: set to the response URL that we expect after sending the current request. This is validated against the actual response URL.
Method: sets the request method used for the current request. It can be either GET or POST.
Think Time (seconds): the think time taken by the user between actions (requests). This is not the exact time that the user spends thinking but just an estimation. This property is not very useful for a normal single web test, but is very useful in a load test, as it affects the system performance.
Web Testing - Editing the Web test (Request properties)
Timeout (seconds): the expiry time for the request; the maximum time for the request to respond. If it doesn't respond within this time limit, the page times out with an error. The default is 300.
Response Time Goal: determines the desired response time for this request so you can check whether the actual response time meets the desired one. The default value is 0, which means the property is not set.
URL: the URL address of the request.
Parse Dependent Requests: can be set to True or False to parse the dependent requests within the requested page. For example, we may not be interested in collecting the details of the images loaded in the web page; we can turn off the requests for loading the images by setting this to False, so only the main request details are collected.
Record Results: used if we need to collect the performance data for the HTTP request.
Web Testing - Editing the Web test (Extraction rules)
Extraction rules are useful for extracting data or information from the HTTP response.
Normally in web applications, many web forms depend on other web forms; a request is based on the data collected from the previous request's response.
The data from the response has to be extracted and then passed to the next request, for example as values for query string parameters.
VSTS provides several built-in types of extraction rules. These help us extract values based on HTML tags or the different types of fields available in the web form.
Web Testing - Editing the Web test (Extraction rules)
The different types of extraction rules:
------------------------------------------------------
Extract Attribute Value: extracts an attribute value from the response page based on the tag and the attribute name. The extracted value is stored in a context parameter. The attribute could belong to an image, a link, etc.
Extract Form Field: extracts the value of any of the form fields in the response. The field name is identified here.
Extract Text: extracts text from the response page. The text is identified based on its starting and ending values, with text casing as an option.
We can add as many rules as we want, but we should make sure that the context parameter names are unique across the test.
Web Testing - Editing the Web test (Extraction rules)
How to add an extraction rule to a Web test:
------------------------------------------------------
1) Open a Web test.
2) Select the request to which you want to add the extraction rule.
3) Right-click the request and select Add Extraction Rule. The Add Extraction Rule dialog box is displayed.
4) In the Add Extraction Rule dialog box, select Extract Attribute Value.
5) In the Properties, set the Context Parameter Name property to a descriptive name such as First Link.
Web Testing - Editing the Web test (Extraction rules)
Example: consider that the HTML we are trying to extract from is <a href="http://www.contoso.com">, where a is referred to as the tag and href is the attribute of interest.
6) Set the Attribute Name property to href and the Tag Name property to a.
7) After running the test, the extracted value http://www.contoso.com will be stored in the context parameter First Link.
8) You can use this extracted value in further sections of the test.
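The Extract Attribute Value rule in the example amounts to: find a tag, read one attribute, and store it under a context parameter name. A minimal stand-in using Python's html.parser (the tag, attribute, and parameter names mirror the example above; the function itself is an illustration, not the VSTS implementation):

```python
from html.parser import HTMLParser

class AttributeExtractor(HTMLParser):
    """Capture the first value of a given attribute on a given tag."""
    def __init__(self, tag_name, attribute_name):
        super().__init__()
        self.tag_name, self.attribute_name = tag_name, attribute_name
        self.value = None

    def handle_starttag(self, tag, attrs):
        if tag == self.tag_name and self.value is None:
            self.value = dict(attrs).get(self.attribute_name)

def extract_attribute_value(response_html, tag_name, attribute_name):
    parser = AttributeExtractor(tag_name, attribute_name)
    parser.feed(response_html)
    return parser.value

context = {}  # stand-in for the web test's context parameters
response = '<html><a href="http://www.contoso.com">Contoso</a></html>'
context["First Link"] = extract_attribute_value(response, "a", "href")
print(context["First Link"])  # http://www.contoso.com
```

Later requests can then read context["First Link"] instead of a hard-coded URL, which is exactly what makes recorded tests replayable against dynamic pages.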
Web Testing - Editing the Web test (Validation rules)
Validation rules are mainly used to validate certain data or text in the response against our expectations. VSTS provides a set of predefined validation rules.
How to add a validation rule to the Web test:
---------------------------------------------------------------
1) Right-click the request.
2) Select the Add Validation Rule option, which opens the validation rule dialog.
3) Select the type of validation rule required and fill in the parameters required for the rule.
Web Testing - Editing the Web test (Validation rules)
The different types of validation rules:
------------------------------------------------------
Form Field: used to verify the existence of a form field with a certain value. The parameters are Form Field Name and Expected Value.
Find Text: verifies the existence of a specified text in the response. The parameters used for this are:
Find Text: the text to search for.
Ignore Case: determines whether the search is case sensitive [True: ignore case / False: don't ignore case].
Pass If Text Found: determines the acceptance criterion [True: the test passes if the text is found / False: the test passes if the text is not found].
Use Regular Expression: used if you need to search for a regular expression (i.e. a special sequence of characters).
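The way the Find Text parameters combine can be shown with a small stand-in function. This sketches the rule's semantics only; it is not the VSTS code, and the response body is a made-up example.

```python
import re

def find_text_rule(response_body, find_text, ignore_case=True,
                   pass_if_text_found=True, use_regular_expression=False):
    """Mimic the semantics of the Find Text validation rule.
    Returns True when the rule passes."""
    flags = re.IGNORECASE if ignore_case else 0
    # Escape the text unless the caller explicitly asked for a regex.
    pattern = find_text if use_regular_expression else re.escape(find_text)
    found = re.search(pattern, response_body, flags) is not None
    return found == pass_if_text_found

body = "<h1>Welcome, Mohamed!</h1>"
print(find_text_rule(body, "welcome", ignore_case=True))                   # True
print(find_text_rule(body, "Error", pass_if_text_found=False))             # True: text absent
print(find_text_rule(body, r"Welcome, \w+", use_regular_expression=True))  # True
```

Note that Pass If Text Found = False inverts the check, which is how you assert that an error message does NOT appear in the response.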
Web Testing - Editing the Web test (Validation rules)
Maximum Request Time: verifies that the request finishes within the specified maximum request time. The parameter is Max Request Time (milliseconds).
Required Attribute Value: similar to the attribute rule used in the extraction rules. The parameters used here are the same as those used in the extraction rules, with two additional fields:
Expected Value: the expected value of the attribute.
Index: indicates which occurrence of the string to validate. If this index is set to -1, the rule checks any occurrence in the response.
Required Tag: verifies that the specified tag exists in the response. The parameters are: Required Tag Name: the name of the tag to be validated; Minimum Occurrences: the minimum number of occurrences, if needed.
Response URL: verifies that the response URL is the same as the recorded response URL.
The Level option that appears above the parameters of all validation rules is used in load testing to determine which levels of rules should be validated.
Web Testing - Configuring the Web test
How to set the properties of the Web test:
-----------------------------------------------------------
1) Open the Test menu from the menu bar.
2) Select Edit Test Run Configurations.
3) Select Local Test Run from the sub-menu.
4) The test configuration window will open.
5) Select Web Test from the left panel.
6) Set the settings of the Web test.
These settings apply only to web testing, but some of the properties will be overridden in the case of load testing.
Web Testing - Configuring the Web test
The different properties of the Web test:
---------------------------------------------------------
Number of run iterations: sets the number of times the test has to run. This property does not apply to load tests, as the number of iterations is determined in the load test properties, which override this property. You can also set the Web test to run once for each row in the available data source.
Browser type: sets the type of browser to use for the Web test.
Network type: simulates the network type on which you want to run the Web test.
Think time: simulates the think time between the requests that will be sent in the Web test.
Web Testing - Running the Web test
Once we've made all the required settings and finished recording all the required requests, we can start running the test.
Use the Run Test option in the Web test editor toolbar to start running the test.
You can watch the test execution progress in the Web test window.
After the execution completes, the result window displays success or failure marked against each request.
If any one of the requests in the test fails, the test result window will show the final result of the test as "Failed".
Web Testing - Running the Web test
If there are multiple requests in the test, we can view the detailed results of each request:
Web Browser: this tab displays the same web page used by the request.
Request: this tab contains all the information about the request, like headers, cookies, query string parameters and form POST parameters.
Response: this tab shows the response for the requested web page, represented as HTML.
Context: all the run-time details assigned to the test can be viewed here.
Details: shows the status of the validation rules that were executed during the test.
Contact Us
Website: http://www.geekit.me
E-Mail: [email protected]