SVCC 2011 - 0-60: QA Automation @ Box
Box Biz
• Cloud Content Management and Collaboration
• 7M users
• 100K businesses
• Adoption in 77% of the Fortune 500
• 150M file accesses / month
• 270 employees, up from 125 at previous year-end
• 15-20 main-branch commits per day, and steadily growing
Box Tech
• LAMP Stack Web App
• Native and HTML5 (m.box.net) Mobile Apps
• Sync for Mac and Windows
• MS Office & Outlook Plugins on Windows
• Open Platform - REST, SOAP, and XML-RPC APIs
• Front-end Presentation
• Back-end Services
Agenda
• Overview – Business Decisions
– Technical Decisions
• Demos
• Lots of Code
• Lessons Learned
• Immediate and Future Plans…
• Q&A
Not necessarily all in that order of precedence…
Disclaimer
We’ll be focusing primarily on browser-based functional acceptance testing for this presentation.
The test automation landscape is rapidly changing.
Please keep this in mind when you decide to implement a test automation strategy, to make sure that you make the best decision for your given situation.
Because of this guy…
Disclaimer: “The Most Interesting Man in the World” DOES NOT work at Box, although I’ve been known to occasionally drink and code – that’s (partially) what tests are for!
Before Automated Testing
• Black box and exploratory manual testing
– Separate QA staff
– Off-shore commercial testing partner
• Insufficient coverage
• Slow turn-around / delayed feedback
• Little engineering involvement
• Developer code reviews were a huge but imperfect safety net
With Automated Testing
• Developers “own” the tests
– Tests are authored, maintained, and executed on demand on the developer’s workstation and/or VM
• QA “owns” the test suites
– Approves automated test cases
– Determines which tests belong in which suites
– Executes tests against multiple target environments and verifies results
• QA no longer spends time manually testing functional areas with approved test coverage
• Coverage is better and steadily improving
• Still doing code reviews, but now we have a safety net
Our Initial Goals
• Prevent crazy late-night deployments by completely automating our “acceptance suite”
• Ease of test authoring, easily maintainable tests
• Fast, scalable platform – provide feedback ASAP
(Some think) Automation is Easy…
Selenium IDE screencast (available after 10/28) demonstrates using Selenium IDE to record a test case that creates a folder and navigates to it through the Box web UI.
Yeah, right…
Selenium IDE screencast (available after 10/28) demonstrates executing the previously recorded test – fail!
And for the non-believers…
Selenium IDE screencast (available after 10/28) demonstrates executing the previously recorded test. This time we execute the test at the slowest possible speed, since it failed due to timing issues the first time, but it still fails for other reasons – record-and-playback tools tend to struggle with modern Web 2.0+ websites.
Automation is Hard…
• All marketers are liars – things rarely work 100% as advertised…
• Easily maintainable, high-value automation requires software development skills and a fair amount of effort.
But worth it!
[Chart: QA Engineer minutes to perform the Smoke and Sanity test suites, manually (QA Engineers) vs. with the TAF (test automation framework); scale 0-600 minutes.]
We also replaced 1 year of API testing with 2 weeks of development effort.
Defining Your High-Level Requirements (Build vs. Buy)
• Time – Build: “I love building tools” / Buy: “I need it yesterday”
• Money – Build: “You can use whatever you want as long as it’s free” / Buy: “I’m going to lose my budget if I don’t spend it!”
• Resources – Build: “We have skilled developers who are interested in and capable of building, maintaining, and supporting test automation” / Buy: “I want to automate as much of my QA tasks as possible but I don’t have any development support”
Framework Platform
• Execution Environment – Build: “I want to develop, execute, and monitor all of my tests on my local machine” / Buy: “I want to parallelize my tests as much as possible and don’t want to tie up my machine while executing them”
Commercial vs. Open Source
Commercial
• Pros: Vendor support; Documentation/training materials; Quick win; Trained candidate/employee pool
• Cons: High cost; Long-term rigidity; Dependent on vendor for fixes
You may also want to consider what kind of 3rd-party support exists in terms of books, blogs, tools, forums, etc…
Open Source
• Pros: Low initial cost; “Use the Source”
• Cons: Lack of vendor support/docs
• Community support can cut both ways (+/-)
High-Level Weekly Release Process
[Diagram: a weekly “Full Push” batches commits C1 … C20 into a single test cycle, while incremental pushes (aka “Continuous Delivery”) run a test cycle (TC) against each small group of commits.]
Key Design Decisions
• JVM-compatible language
• Optimize for throughput
– Massive parallelism (all tests are idempotent)
• Ease of test development
• Flexibility, Adaptability, Maintainability, Scalability
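The throughput decision above can be sketched in a few lines. This is an illustrative Python sketch (Box's actual framework is in Scala), and the names (`run_test`, `TESTS`, `run_suite`) are made up for the example: because every test is idempotent and shares no state, a runner can fan tests out across a worker pool in any order.

```python
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    # A real runner would drive a browser session here; we just report a
    # pass so the fan-out is visible.
    return (name, "PASS")

TESTS = ["test_login", "test_upload", "test_share", "test_preview"]

def run_suite(tests, workers=4):
    # Idempotent, independent tests can run in any order and at any
    # degree of parallelism without affecting each other's results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_test, tests))

results = run_suite(TESTS)
```

The design choice is that parallelism is bought entirely by the idempotence constraint on tests, not by any cleverness in the runner.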
Choosing a Framework and Platform
Platforms
• Selenium Grid – Pro: It’s your grid / Con: It’s your grid…
• Sauce Labs – Pro: Grid is set up and maintained for you / Con: Ability to customize the solution may be limited or non-existent
[Garbled table: language support for candidate frameworks – Selenium 1, Selenium 2 (WebDriver), Watir, Windmill, and Sahi – across languages including Java, C#, Python, Ruby, PHP, and Perl; the individual Yes/No cells are not recoverable.]
The path we chose @ Box
• Functional Acceptance Testing
– Scala
– Started with Selenium 1, recently migrated to Selenium 2*
– ScalaTest/TestNG
• API Testing
– Jump-started full API coverage with a home-grown Ruby framework
• Unit testing (adopted MUCH later…)
– PHPUnit
– specs2
– PyUnit
• Static analysis tools (JSLint)
High-‐level Architecture
BaseTestCase
BoxTestCase
MyTest
Driver
SeleniumDriver
WebDriverDriver
Shared API
Component API
Element Locators
PageObject
BoxPageObject
MyPageObject
Actions
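The layering above can be sketched as code. This is a hedged illustration, not Box's actual classes: the `Driver` here is a stub standing in for the Selenium/WebDriver layer, and `FilesPage` is a hypothetical concrete page object playing the role of "MyPageObject".

```python
class Driver:
    """Stub standing in for the Selenium/WebDriver abstraction layer."""
    def __init__(self):
        self.log = []
    def click(self, locator):
        # A real driver would dispatch this to the browser; we record it.
        self.log.append(("click", locator))

class PageObject:
    """Generic base: every page object holds a driver."""
    def __init__(self, driver):
        self.driver = driver

class BoxPageObject(PageObject):
    """App-wide helpers (e.g. shared header/nav) live at this layer."""
    def open_header_menu(self):
        self.driver.click("id=header-menu")

class FilesPage(BoxPageObject):
    """Hypothetical concrete page ('MyPageObject' in the diagram)."""
    def click_new_folder(self):
        self.driver.click("id=new-folder")

driver = Driver()
FilesPage(driver).click_new_folder()
```

The point of the hierarchy is that tests talk to page objects, page objects talk to a driver interface, and the concrete Selenium 1 / WebDriver implementations can be swapped underneath without touching tests.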
High-‐level ExecuAon Flow
Execute Test or Suite
Initialize Test
Execute Test Body
Handle Failures/Cleanup
Two demos
• HelloName test – Illustrates the typical workflow – Simple and artificial
• Sample Box.net test – Includes more advanced functionality – Tests the real site
HelloName test – Basic structure
HelloSuite • Lists and describes tests
TestGreeting • High-level test definition
WelcomePageObject • Reusable page-specific functions
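A minimal Python sketch of the three pieces named above (the real demo is written in Scala): the page object hides page-specific detail, the test stays high-level, and the suite just enumerates tests. The greeting round trip is simulated rather than driven through a browser.

```python
class WelcomePageObject:
    """Reusable page-specific functions (stubbed: no real browser here)."""
    def submit_name(self, name):
        # A real page object would type into a field, click submit, and
        # read the greeting back from the page; we simulate that.
        return f"Hello, {name}!"

def test_greeting():
    # High-level test definition: no page detail leaks into the test.
    page = WelcomePageObject()
    assert page.submit_name("SVCC") == "Hello, SVCC!"

HELLO_SUITE = [test_greeting]  # the suite lists (and describes) tests

for test in HELLO_SUITE:
    test()
```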
Compilation and Execution
• Compiles Scala code • Copies resources sbt build
• Scans for test annotations • Produces TestNG epr/xml files sbt build-suites
• Runs TestNG on a suite • Invokes @BeforeSuite • Records results
runsuite / runtest / runfailed
Compilation and Execution
• Instantiate testcase • Instantiate driver • Invoke executeAndCapture() • Set up execution trace • Invoke testcase’s execute()
Initialize test
• Take screenshot or save HTML on failure (if configured to do so)
• Close browser on failure (if configured to do so)
Cleanup
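The initialize/execute/cleanup flow above can be sketched as a wrapper in the spirit of executeAndCapture(). The `save_screenshot` and `close_browser` hooks are illustrative stand-ins; in the real framework they act on a live browser session.

```python
captured = []

def save_screenshot():
    captured.append("screenshot")

def close_browser():
    captured.append("browser closed")

def execute_and_capture(test_body, screenshot_on_failure=True,
                        close_on_failure=True):
    # Run the test body; on failure, optionally capture diagnostics and
    # close the browser (both behaviors are configurable).
    try:
        test_body()
        return "PASS"
    except AssertionError:
        if screenshot_on_failure:
            save_screenshot()  # or save the page HTML, if so configured
        if close_on_failure:
            close_browser()
        return "FAIL"

def failing_test():
    assert False, "simulated failure"

result = execute_and_capture(failing_test)
```

Centralizing failure handling in one wrapper means every test gets screenshots and cleanup for free, without repeating try/except boilerplate.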
Two demos
• HelloName test – Illustrates the typical workflow – Simple and artificial
• Sample Box.net test – Includes more advanced functionality – Tests the real site
Framework – Tests and Supporting Classes
BaseTestCase
BoxTestCase
MyTest
PageObject
BoxPageObject
MyPageObject
Actions
Selenium 2: A Variety of Modes
Compatibility Mode
• Supports Selenium 1 features
• Does not provide advanced WebDriver functionality
WebDriver – Native Events
• Simulates a mouse cursor moving and clicking
• Windows only; CPU intensive, slow, unreliable
WebDriver – Default Mode
• Not available for IE
• Produces the most reliable results
Cross-browser and cross-platform testing
• How do we get the same tests to work on different browsers/platforms when mixing Selenium modes?
• How do we provide a consistent API for test writers while allowing for flexibility in how the Selenium calls are made? – Major changes to Selenium are not unheard of
Driver Structure w/App-‐Specific Support
Driver
SeleniumDriver
WebDriverDriver
BoxWebDriverDriver
BoxSeleniumDriver
Modifications and Extensions
• Sizzle (jQuery) selectors – Efficient and powerful – We recommend test-‐writers find elements by ID or through a Sizzle selector
• Flex extensions – Interact with flash/flex elements in the page – Necessary for testing Box’s file preview functionality
Good practices
• Perform setup actions through your app’s API – Bypasses the web UI – Improves test speed – Reduces spurious failures
• Parameterize tests – Many tests will share identical or similar steps – Reuse code by parameterizing tests – Add descriptions to each test call to differentiate them
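The parameterization advice can be sketched like this (Python, illustrative names throughout; `check_folder_name_accepted` is a made-up stand-in for "create a folder through the UI and verify the outcome"): one test body runs many described cases.

```python
def check_folder_name_accepted(name):
    # Stand-in for a real UI verification; here we just model a plausible
    # validation rule being exercised by the test body.
    return 0 < len(name) <= 255 and "/" not in name

CASES = [
    ("simple name",       "Reports",       True),
    ("unicode name",      "Présentations", True),
    ("slash is rejected", "a/b",           False),
    ("empty is rejected", "",              False),
]

# Each case carries a description so failures are easy to tell apart
# in reports, per the slide's advice.
results = {desc: check_folder_name_accepted(name) == expected
           for desc, name, expected in CASES}
```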
Bad Practices
• Avoid using XPath – Incredibly slow in IE versions < 9 – Extremely brittle
• Avoid Thread.sleep() or equivalent – poll instead – Timing issues are the main cause of spurious test failures – Pausing test execution is rarely the correct solution – Try waiting for a specific element to be visible
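A sketch of the polling alternative to Thread.sleep(): wait for a condition (e.g. element visibility) up to a timeout, re-checking at a short interval. The `visible` flag below stands in for a real browser query, and the `driver.is_element_visible` call mentioned in the comment is a hypothetical API.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.1):
    # Poll the condition until it holds or the timeout expires; this
    # replaces blind fixed-length pauses.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Stubbed usage: in a real test the condition would query the browser,
# e.g. lambda: driver.is_element_visible("id=upload-done") (hypothetical).
state = {"visible": False}
state["visible"] = True  # pretend the page finished rendering
ok = wait_until(lambda: state["visible"], timeout=1.0)
```

A poll returns as soon as the condition holds, so it is both faster than a worst-case sleep and more robust than a best-case one.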
Adoption
• Know your target end-user – Framework design – Diagnostic/Analysis tools – Docs, Training, Mentoring, Support
• Eliminate FUD – Automation is NOT going to replace your QA organization. Instead, it’s going to transform their work into something more rewarding and highly leveraged.
– Expect a learning curve and a bit of a rough start – Progressive adoption policies
• Provide continuous feedback… – Metrics – Apply social pressure if/when necessary
Lessons Learned
• Automation is Rewarding!
• But sometimes frustrating…
• The cultural challenges are much greater than the technical challenges – we’re having some difficulties with establishing a “stop the line” mentality
• Broken tests, or tests that don’t run, don’t count – we’ve recently begun to file “Blocker” bugs for all broken tests
• Functional testing can be flaky, especially when external dependencies are involved
• Few companies do this well (because it’s hard work, real engineering, and often underestimated)
Our Current Goals
• Continually and rapidly expand the coverage of our full-regression suite – Identify gaps in test coverage
– Make test authoring easier and less intrusive
– Make test execution faster
– Expand browser support
• Provide better feedback mechanisms – Notification mechanisms and dashboards
• Continuous Delivery – Improve test throughput by scaling the test environment
Outstanding Questions
• Granularity of timeout values
• When will “Native Mode” work acceptably with IE?
• Ajax timing issues for extremely dynamic content
If we had to start all over…
• Unit testing before functional acceptance
• Watir vs. Selenium if you only care about IE and don’t need grid*
• Iron out the kinks before rolling it out – we got buried in training/support and weren’t able to resolve issues quickly enough as a result
• Dedicated hardware for the complete AUT and platform (RCs being an acceptable exception)
• Actionable test failure notifications – don’t spam everyone
Box is Hiring!
• Roughly 25 open engineering & operations positions
• Quality Engineering Positions – Software Engineer – Tools and Frameworks
– QA Engineer
• Visit www.box.net/jobs/ for more info
• Send resumes to referrals@box.net and mention you saw us at Code Camp!
Box SVCC 2011 Presentations
• 0-60: QA Automation at Box (Sat 11:15 AM – Room 5501) – Speakers: Peter White, Dave Duke
• Where is my data? Consistency, availability, security of cloud file storage at Box (Sat 1:45 PM – Room 1401) – Speaker: Antoine Boulanger
• Achieving Cloud-Scale Test Automation at Box (Sat 3:30 PM – Room 5501) – Speakers: Randall Schulz, Jordan Sterling, David Wake
• DRY CSS & Images (Sat 5:00 PM – Room 1500) – Speaker: Kimber Lockhart
• HTML5 Uploading and Beyond (Sun 10:45 AM – Room 8403) – Speaker: Ben Trombley
Thank you!
Resources
• https://www.box.net/developers
• http://seleniumhq.org
• http://watir.com
• http://sahi.co.in
• http://www.getwindmill.com
• http://testng.org
• http://code.google.com/p/flash-selenium/
Contact Info
• Peter White (peter@box.net), Director of Quality Engineering
• Dave Duke (dduke@box.net), Software Engineer
Questions?