19‐Oct‐11
CPUT - Combinatorial-Based Prioritization for User-Session-Based Testing of Web Applications
Sreedevi Sampath, University of Maryland, Baltimore County
Renee Bryce, Utah State University
Sachin Jain, University of Maryland, Baltimore County
Schuyler Manchester, Utah State University
Richard Kuhn and Raghu Kacker, NIST
Overview of CPUT
Automated and general framework:
◦ Creates and manages regression test suites
◦ Designed for web applications
◦ Logs usage data and converts it to test cases
◦ Prioritizes test cases and creates test orders
Test cases can then be replayed and used to test web systems
Verify/ATI 2011, Sreedevi Sampath, UMBC
Regression Testing
V1 → V2: implement changes (add/delete functionality), remove bugs
Test the new code: regression testing
1. Rerun existing tests from V1 to ensure changes did not break functionality
2. Write new tests as necessary to test new functionality
Rerun all existing tests? Rerun a subset of existing tests? Rerun tests in a specific order? → Test Prioritization
Need for Reliable Web Applications
Increasing shift of applications to the web (e.g., Google Docs)
Huge losses on web site failure: to the tune of millions of dollars per hour [1]
Large number of failures during maintenance [2]
1. Web Application Development: Bridging the Gap between QA and Development, Michal Blumenstyk
2. Causes of Failures in Web Applications, Soila Pertet and Priya Narasimhan, December 2005
Traditional Software Testing Process
[Figure: Application Specification → Test Case Generator → Test Cases → Replay Tool → Application Implementation → Actual Results; the Oracle compares Actual Results against Expected Results (derived from the Application Representation) → Pass/Fail]
Traditional Software Testing Process
[Same figure, annotated: the Application Specification is hard to obtain when testing web applications]
Traditional Software Testing Process
[Same figure, annotated: user-session-based testing replaces the Application Specification as the source of test cases]
User-Session-Based Testing
Example logged requests for User 1:
  login.jsp?user="ss"&pass="yy"
  shop.jsp?book="java"&id=1
  search.jsp?book="unix"&id=2
Each request consists of a base URL and parameter-values.
[Figure: Users → Beta Web Application (v0.9) Deployment → Log User Requests → User Sessions → Create test cases → Test Cases → Replay Tool → Web Application Implementation (v1.0) → Actual Results; the Oracle compares Actual and Expected Results → Pass/Fail]
Test Prioritization
Order existing tests based on some criterion to achieve a performance goal
◦ Examples of traditional prioritization criteria: total statement coverage, total method coverage
◦ Performance goal: find faults quickly in the test execution cycle
User-Session-Based Test Case Prioritization
[Figure: Users → Beta Web Application (v0.9) Deployment → Log User Requests → User Sessions → Create test cases → Test Cases → Prioritize test cases → Prioritized Test Cases → Replay Tool → Web Application Implementation (v1.0) → Actual Results; the Oracle compares Actual and Expected Results → Pass/Fail]
Our Test Prioritization Criteria
Combinatorial-based
◦ 2-way
Count-based
◦ Number of requests
◦ Number of parameter-values
Renee Bryce, Sreedevi Sampath, Atif Memon, Developing a Single Model and Test Prioritization Strategies for Event-Driven Software, IEEE Transactions on Software Engineering, Vol. 37, Issue 1, pp 48-64, 2011
Combinatorial-based: 2-way
Test Case 1:
  Catalog.java, item_name="shirt" (1), item_weight="2" (2)
  View_Cart.java, ship_type="air" (3), zip="21250" (4)
2-way interactions: (1,3) (1,4) (2,3) (2,4)
Intuition: interactions of parameters set to values on different windows expose faults
Criterion: give higher priority to tests with a larger number of 2-way interactions
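The 2-way criterion above can be sketched in a few lines of Python. This is a minimal illustration, not CPUT's implementation: it assumes a test case is represented as a list of requests, where each request is a list of (parameter, value) pairs, and it counts only interactions between parameter-values on different requests, as in the slide's example.

```python
from itertools import combinations

def two_way_interactions(test_case):
    """Count 2-way interactions between parameter-values that
    appear in *different* requests of a test case.
    test_case: list of requests; each request is a list of
    (parameter, value) pairs."""
    count = 0
    for req_a, req_b in combinations(test_case, 2):
        # every cross-request pair of parameter-values is one interaction
        count += len(req_a) * len(req_b)
    return count

def prioritize(tests):
    """Order tests so those covering more 2-way interactions run first."""
    return sorted(tests, key=two_way_interactions, reverse=True)

# The slide's example: four parameter-values on two pages yield
# the four interactions (1,3) (1,4) (2,3) (2,4).
test1 = [
    [("item_name", "shirt"), ("item_weight", "2")],  # Catalog.java
    [("ship_type", "air"), ("zip", "21250")],        # View_Cart.java
]
print(two_way_interactions(test1))  # 4
```

A real tool would also deduplicate interactions already covered by earlier tests; this sketch scores each test in isolation.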
Count-based: Request Length
Test Case 1:
  Catalog.java, item_name="shirt", item_weight="2"
  View_Cart.java, ship_type="air", zip="21250"
Number of requests in test: 2
Intuition: tests that contain more requests are more likely to reveal faults because they cover a larger part of the underlying code
Criterion: give higher priority to tests with a larger number of requests
Count-based: Parameter-Value Length
Test Case 1:
  Catalog.java, item_name="shirt" (1), item_weight="2" (2)
  View_Cart.java, ship_type="air" (3), zip="21250" (4)
Number of parameter-values in test: 4
Intuition: tests that set more parameters to values are more likely to reveal faults
Criterion: give higher priority to tests with a larger number of parameter-values
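Both count-based criteria reduce to simple counting over the same test-case representation used above (a list of requests, each a list of (parameter, value) pairs — an assumed representation, not CPUT's internal one):

```python
def request_length(test_case):
    """Count-based criterion 1: number of requests in the test."""
    return len(test_case)

def pv_length(test_case):
    """Count-based criterion 2: total number of parameter-values
    set across all requests in the test."""
    return sum(len(request) for request in test_case)

def prioritize(tests, criterion):
    """Order tests so those with the largest criterion value run first."""
    return sorted(tests, key=criterion, reverse=True)

test1 = [
    [("item_name", "shirt"), ("item_weight", "2")],  # Catalog.java
    [("ship_type", "air"), ("zip", "21250")],        # View_Cart.java
]
print(request_length(test1))  # 2
print(pv_length(test1))       # 4
```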
Empirical Study: Results
Measure the rate of fault detection
[Table: prioritization criteria that were the best in all the subject applications (3 web and 3 GUI applications); underlining indicates a criterion that is always among the best]
CPUT Components: 3 Engines
[Figure: the prioritization workflow above, with CPUT's three engines overlaid: the Logger logs user requests, the Test case creator creates test cases, and the Prioritizer prioritizes test cases]
CPUT Components
Three main engines:
◦ Logger
◦ Test case creation
◦ Prioritizer
Logger
◦ Enables capture of HTTP GET and POST requests and associated data
◦ Developed as a module for the Apache web server
◦ Works on both Linux and Windows platforms
◦ Minimal performance overhead
◦ Easy integration with the rest of the web server
Deploying the Logger
◦ Server administrator must place the module with the other Apache modules
◦ Enable the module in Apache's configuration file (path and filename can be specified); the log is written to a text file
Format of an entry in the log file:
[Fri Jan 08 13:33:01 2010] ]# 127.0.0.1 ]# POST ]# /schoolMateApplication/index.php ]# PHPSESSID=a9099cd16db2e4134200cd69f6dc87cf ]# http://localhost:8000/schoolMateApplication/index.php ]# PostData:page2=5&logout=1&page=1
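Judging from the sample entry, fields are separated by a "]#" marker, with the bracketed timestamp first. A minimal parser sketch for that layout follows; the field names (client IP, method, session ID, and so on) are my reading of the sample line, not documented CPUT terminology:

```python
def parse_log_entry(line):
    """Split one logger entry of the form shown on the slide into
    named fields. Assumes the ']#' separator and field order seen
    in the sample entry."""
    fields = [f.strip() for f in line.split("]#")]
    return {
        "timestamp": fields[0].strip("[] "),
        "client_ip": fields[1],
        "method": fields[2],
        "resource": fields[3],
        "session_id": fields[4],
        "url": fields[5],
        "post_data": fields[6].removeprefix("PostData:"),
    }

line = ('[Fri Jan 08 13:33:01 2010] ]# 127.0.0.1 ]# POST '
        ']# /schoolMateApplication/index.php '
        ']# PHPSESSID=a9099cd16db2e4134200cd69f6dc87cf '
        ']# http://localhost:8000/schoolMateApplication/index.php '
        ']# PostData:page2=5&logout=1&page=1')
entry = parse_log_entry(line)
print(entry["method"], entry["post_data"])  # POST page2=5&logout=1&page=1
```

Grouping entries by session ID would then reconstruct the user sessions that become test cases.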
CPUT Components
Three main engines:
◦ Logger
◦ Test case creation
◦ Prioritizer
Test Case Creation (Parser)
◦ Efficient storage and retrieval of web usage logs: stores the usage log and test cases in a PostgreSQL database
◦ Applies previously proposed heuristics to create test cases [Sampath TSE07, Sprenkle ASE05]
◦ Extensible and generic test case format to enable replay: an XML format designed for tests
Test Case Creation (Parser)
Storing the usage log in the database:
◦ Create a new table
◦ Append to an existing table
◦ Overwrite an existing table
Test Case Creation: XML Test Case Format
Example test case:
<testSuite>
  <session id="1.XML">
    <url>
      <request_type>POST</request_type>
      <baseurl>/SchoolMate/index.php</baseurl>
      <param>
        <name>book_name</name>
        <value>java</value>
      </param>
      <param>
        <name>book_author</name>
        <value>savitch</value>
      </param>
    </url>
  </session>
</testSuite>
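A replay tool would load this XML into an in-memory structure before issuing requests. The sketch below uses Python's standard ElementTree; the tag name request_type is reconstructed from the slide (which renders it with a broken space), and the output tuple shape is my own choice, not CPUT's API:

```python
import xml.etree.ElementTree as ET

XML = """<testSuite>
  <session id="1.XML">
    <url>
      <request_type>POST</request_type>
      <baseurl>/SchoolMate/index.php</baseurl>
      <param><name>book_name</name><value>java</value></param>
      <param><name>book_author</name><value>savitch</value></param>
    </url>
  </session>
</testSuite>"""

def load_tests(xml_text):
    """Return a list of sessions; each session is a list of
    (method, base_url, {param: value}) request tuples."""
    root = ET.fromstring(xml_text)
    sessions = []
    for session in root.findall("session"):
        requests = []
        for url in session.findall("url"):
            params = {p.findtext("name"): p.findtext("value")
                      for p in url.findall("param")}
            requests.append((url.findtext("request_type"),
                             url.findtext("baseurl"), params))
        sessions.append(requests)
    return sessions

print(load_tests(XML))
# [[('POST', '/SchoolMate/index.php',
#    {'book_name': 'java', 'book_author': 'savitch'})]]
```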
CPUT Components
Three main engines:
◦ Logger
◦ Test case creation
◦ Prioritizer
Prioritizer
Three prioritization criteria:
◦ 2-way, request length, parameter-value length
◦ Random prioritization
Accommodates two types of web systems:
◦ Unique URL: each page is identified by a unique base URL
◦ Non-unique URL: the application has the same base URL for all its pages, and the value of one or more parameters determines which page to load next
Unique vs. Non-unique URL
Unique URL: each page in the web application has a unique base URL.
Example test case:
  index.php
  registration.php?name=tom
  adduser.php?name=henry&pass=joy
Non-unique URL: same base URL for all the web pages; the value of parameter-values determines the unique page.
Example test case:
  index.php
  index.php?name=tom&page1=0&page2=1
  index.php?name=henry&pass=joy&page1=1&page2=1
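The distinction matters because the prioritizer must know what counts as "a page" when scoring a test. One way to sketch this, using only the standard library: for unique-URL applications the base URL alone identifies the page, while for non-unique-URL applications selected page-determining parameters are appended. The parameter names page1/page2 here are taken from the slide's example and are otherwise hypothetical; real applications would need their own list:

```python
from urllib.parse import urlsplit, parse_qs

def page_id(request_url, unique_url=True, page_params=("page1", "page2")):
    """Derive a page identifier from a logged request URL.
    unique_url=True: the base URL identifies the page.
    unique_url=False: append the page-determining parameter-values
    (assumed here to be page1/page2) to distinguish pages that
    share one base URL."""
    parts = urlsplit(request_url)
    if unique_url:
        return parts.path
    qs = parse_qs(parts.query)
    suffix = "&".join(f"{p}={qs[p][0]}" for p in page_params if p in qs)
    return f"{parts.path}?{suffix}" if suffix else parts.path

print(page_id("registration.php?name=tom"))                  # registration.php
print(page_id("index.php?name=tom&page1=0&page2=1", False))  # index.php?page1=0&page2=1
```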
Tool Demonstration
CPUT main screen