Software Testing 2012 - A Year in Review

Uploaded by johan-hoberg, 06-May-2015

Page 1: Software testing 2012 - A Year in Review

Software Testing 2012 – A Year in Review

Page 2

Introduction

• What are the latest trends within software testing?

• What has happened during 2012 that is worth highlighting?

• Very subjective, but here is my view

Page 3

Software Testing Trends Overview

• Google Testing 2.0

• Good Test Automation

• Test Roles

• Context-driven Testing

Page 4

Google Testing 2.0 (Google)

• “This brings us to the current chapter in test which I call Testing 1.5.  This chapter is being written by computer scientists, applied scientists, engineers, developers, statisticians, and many other disciplines.  These people come together in the Software Engineer in Test (SET) and Test Engineer (TE) roles at Google. SET/TEs focus on; developing software faster, building it better the first time, testing it in depth, releasing it quicker, and making sure it works in all environments.  We often put deep test focus on Security, Reliability and Performance.  I sometimes think of the SET/TE’s as risk assessors whose role is to figure out the probability of finding a bug, and then working to reduce that probability. Super interesting computer science problems where we take a solid engineering approach, rather than a process oriented / manual / people intensive based approach.  We always look to scale with machines wherever possible.” [1]

• A lot of “art” has turned into “science”
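The “risk assessor” framing in the quote above can be made concrete with a toy sketch: rank modules by a crude proxy for the probability of finding a bug, and test the riskiest first. The module names, metrics and weights below are invented for illustration, not Google’s actual model.

```python
# Toy risk-based test prioritization: rank modules by a crude
# "probability of finding a bug" proxy, then test the riskiest first.
# All weights and module data are hypothetical.

def risk_score(churn, complexity, recent_failures):
    """Crude proxy: more churn, more complexity and more past
    failures suggest a higher chance of finding a bug."""
    return 0.5 * churn + 0.3 * complexity + 0.2 * recent_failures

modules = {
    "payment": (0.9, 0.8, 0.7),   # high churn, complex, flaky history
    "logging": (0.1, 0.2, 0.0),   # stable utility code
    "search":  (0.6, 0.9, 0.3),
}

# Sort modules so the riskiest comes first.
ranked = sorted(modules, key=lambda m: risk_score(*modules[m]), reverse=True)
print(ranked)
```

The point of the sketch is the engineering approach: once risk is expressed as data, prioritization scales with machines instead of depending on a person’s intuition alone.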

Page 5

Good Test Automation (Microsoft)

• Designing good tests is one of the hardest tasks in software development. [2]

• Relying on test automation for any part of your testing is pointless if you don’t care about the results and look at failed tests every time they fail. [3]

• I’m all for saving time and money, but I have concerns with an automation approach based entirely (or largely) on automating a bunch of manual tests. Good test design considers manual and computer assisted testing as two different attributes – not sequential tasks. That concept is such an ingrained approach to me (and the testers I get to work with), that the idea of a write tests->run tests manually->automate those tests seems fundamentally broken. [4]

• Key benefits of API testing include [7]:
– Reduced testing costs
– Improved productivity
– Higher functional (business logic) quality
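As a minimal illustration of what testing business logic at the API level (rather than through a GUI) looks like, consider this sketch; the `apply_discount` function and its rule are hypothetical:

```python
# Testing business logic directly at the API/function level:
# fast, stable, and independent of any GUI layer.

def apply_discount(price, percent):
    """Hypothetical business rule: discount must stay within 0-100%."""
    if not 0 <= percent <= 100:
        raise ValueError("discount out of range")
    return round(price * (1 - percent / 100), 2)

# API-level checks exercise the rule and its edge cases directly.
assert apply_discount(200.0, 25) == 150.0
assert apply_discount(99.99, 0) == 99.99
try:
    apply_discount(10.0, 150)
except ValueError:
    pass  # invalid input correctly rejected
```

Such checks run in milliseconds and do not break when a button moves, which is one way to read the cost and quality benefits listed above.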

• First off, let me state my main points for disliking GUI automation [8]:
– It’s (typically) fragile – tests tend to break / stop working / work unsuccessfully often
– It rarely lasts through multiple versions of a project (another aspect of fragility)
– It’s freakin’ hard to automate UI (and keep track of state, verify, etc.)
– Available tools are weak to moderate (this is arguable, depending on what you want to do with the tools).

• Only those problems … which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing. [9]

Page 6

Test Roles (Google + Microsoft)

• Roles that testers play on teams vary. They vary a lot. You can’t compare them. That’s ok, and (IMO) part of the growth of the role. [5]

• Which leads me to two test roles that, while they definitely exist, I could say they’re dead to me. [5]
– The first is the test-automation-only role. I think the role of taking manual test scripts written by one person and then automating those steps is a bad practice.
– I’ll call the final role “waterfall-tester” – even though I know this role exists at some (fr)agile shops as well. This is the when-I’m-done-writing-it-you-can-test-it role.

• Google Tester Roles [6]
– The SWE or Software Engineer is the traditional developer role.
– The SET or Software Engineer in Test is also a developer role, except their focus is on testability. They review designs and look closely at code quality and risk. They refactor code to make it more testable. SETs write unit testing frameworks and automation. They are a partner in the SWE code base but are more concerned with increasing quality and test coverage than with adding new features or increasing performance.
– The TE or Test Engineer is the exact reverse of the SET. It is a role that puts testing first and development second. Many Google TEs spend a good deal of their time writing code in the form of automation scripts and code that drives usage scenarios and even mimics a user. They also organize the testing work of SWEs and SETs, interpret test results and drive test execution, particularly in the late stages of a project as the push toward release intensifies. TEs are product experts, quality advisers and analyzers of risk.
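The TE-style code that “drives usage scenarios and even mimics a user” might look like this toy sketch; the `Session` class and its methods are invented for illustration:

```python
# Toy usage-scenario driver of the kind a TE might script:
# step through a realistic user flow end to end.
# The Session class is hypothetical.

class Session:
    def __init__(self):
        self.cart = []
        self.logged_in = False

    def login(self, user):
        self.logged_in = True
        self.user = user

    def add_to_cart(self, item):
        assert self.logged_in, "must log in first"
        self.cart.append(item)

    def checkout(self):
        assert self.cart, "cart is empty"
        return {"user": self.user, "items": list(self.cart)}

# Mimic a user: log in, shop, check out.
s = Session()
s.login("alice")
s.add_to_cart("book")
order = s.checkout()
print(order["items"])
```

The value of such drivers is that they encode whole usage scenarios, so a break anywhere in the flow surfaces as a failed run rather than a missed manual step.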

Page 7

Context-driven Testing

• Cem Kaner: “Rather than calling it a “school”, I prefer to refer to a context-driven approach.“ [10]

• “One of the striking themes in what I saw was a mistrust of test automation. Hey, I agree that regression test automation is a poor basis for an effective comprehensive testing strategy, but the mistrust went beyond that. Manual (session-based, of course) exploratory testing had become a Best Practice.” [11]

• “I think we need to look more sympathetically at more contexts and more solutions. To ask more about what is right with alternative ideas and what we can learn from them. And to develop batteries of skills to work with them. For that, I think we need to get past the politics of The One School of context-driven testing. “[11]

• Many great ideas from the context-driven world:
– 37 Sources for Test Ideas [12]
– The Scripted and Exploratory Testing Continuum [13]
– Where does all the time go? [14]
– A Testing Landscape [15]
– 8 Layer Model for Exploratory Testing [16]
– Silent Evidence in Testing [17]
– Etc.

• Mostly improvements, not innovations?

Page 8

Summary

Is there any consensus in the testing world?

• Testing requires qualified professionals that take ownership
– Hiring many unqualified testers will only result in higher costs and waste

• Reduce the time spent on useless artefacts, either by tool support, or by removing them completely

• Always look for a way to support your testing activities with tools – not only test execution

• Everything is risk-based! The more data you have (qualitative or quantitative) the better!

• Mindless automation is not valuable

• Manual scripted regression testing is not an effective way to find bugs
– There should always be a good reason to run scripted manual tests, like specific legal or customer requirements

• Customer involvement is important – both as input and as live user testers

• Focus on testing in agile projects

Page 9

References

[1] Testing 2.0 – http://googletesting.blogspot.se/#!/2012/08/testing-20.html
[2] Orchestrating Test Automation – http://angryweasel.com/blog/?p=496
[3] Oops I Did It Again – http://angryweasel.com/blog/?p=432
[4] Exploring Test Automation – http://angryweasel.com/blog/?p=412
[5] Exploring Test Roles – http://angryweasel.com/blog/?p=444
[6] How Google Tests Software – Part 2 – http://googletesting.blogspot.se/search?q=test+roles#!/2011/02/how-google-tests-software-part-two.html
[7] API Testing – How It Can Help – http://www.testingmentor.com/imtesty/2011/12/01/api-testinghow-can-it-help/
[8] Design for *GUI* Automation – http://angryweasel.com/blog/?p=332
[9] How Google Tests Software – Part 5 – http://googletesting.blogspot.se/search?q=exploratory+testing#!/2011/03/how-google-tests-software-part-five.html
[10] Context-driven Testing – http://context-driven-testing.com/?page_id=9
[11] Context-driven Testing Is Not a Religion – http://context-driven-testing.com/?p=23
[12] 37 Sources for Test Ideas – http://thetesteye.com/blog/2012/02/announcing-37-sources-for-test-ideas/
[13] The Scripted and Exploratory Testing Continuum – http://thetesteye.com/blog/2012/01/the-scripted-and-exploratory-testing-continuum/
[14] Where Does All the Time Go? – http://www.developsense.com/blog/2012/10/where-does-all-that-time-go/
[15] A Testing Landscape – http://www.shino.de/2012/11/07/a-testing-landscape/
[16] 8 Layer Model for Exploratory Testing – http://www.shino.de/2012/03/23/testbash-an-8-layer-model-for-exploratory-testing/
[17] Silent Evidence in Testing – http://testers-headache.blogspot.se/2012/03/silent-evidence-in-testing.html