Unlocking Software Testing Circa 2016
A History and Practical Guide to Agile Quality, Mobile Automation & Risk-Based Testing Strategies

mentormate.com | 3036 Hennepin Avenue, Minneapolis, MN 55408 | 855-473-1556


Page 2: Unlocking Software Testing Circa 2016

Table of Contents

Software testing through the ages 1
Waterfall development: Document then develop 2
“Don’t go chasing Waterfall” 3
Made in the USA — the industrial roots of software development 4
History lesson 5
Agile methodology in software testing 6
Don’t throw code over the wall. Take down the wall. 7
The case of exploding defects 8
6 reasons to involve software testing at the beginning of the project 9
Software testing methodologies deconstructed 10
Types of testing 11
Story-based testing 12
Sprint testing 13
Feature/Code freeze and regression testing 14
Performance testing 15
Performance test steps 16
What’s lurking in the shadows of mobile testing? 17
Show me the money: The cost of quality software 18
Historically testing mobile hasn’t been just different. It’s been harder. 19
Challenges managing costs in mobile software testing 20
Buried in the economics of native mobile testing 21
Scale up or stretch the timeline? 22
Managing the cost of quality with automation 23
Mobile testing in 2016 25
Automate to great 26
The benefits of automation 27
Even more benefits 28
The tension between design and mobile automation testing 29
Automation QAs, the enemy of Agile? 30
The role of user personas in mobile automation testing 31
So why isn’t EVERYONE using mobile test automation? 32
Manual regression — gone wrong 33

Page 3: Unlocking Software Testing Circa 2016

An alternative to automation: Risk-based testing 34
Why risk-based testing? 35
Intuition is a tester’s best friend 36
Enter the cloud 37
Communication is king 38
Transforming our expectations of quality 39
When should you take your testing external? 40
Software testing tool kit 41
Test strategy document 42
Interactive tools 43
Talk with an expert 44

Page 4: Unlocking Software Testing Circa 2016

Software testing through the ages

1

Page 5: Unlocking Software Testing Circa 2016

Waterfall development: Document then develop

Back in the early days of software development, Waterfall methodology was the preferred approach. Requirements were captured all at once, then designed and built. In a process as new as building software, enterprise stakeholders wanted every detail documented and configured before construction began.

Code was delivered to software testers at the end of the process. Their role? To validate that the features functioned as outlined in the original requirements document. Anything different was by definition a defect. Even if the project requirements changed throughout the course of development, testers were kept in the dark. They were still comparing against the detailed requirements list. Any nuances or adjustments decided by the team during development but not documented lay outside the testers’ purview.

[Diagram: the Waterfall sequence: Gather requirements → Design → Implement → Verify → Maintain]

2

Page 6: Unlocking Software Testing Circa 2016

“Don’t go chasing Waterfall”

For a while, the Waterfall software development methodology was as catchy as TLC’s 1994 hit song. In this era before Agile, project owners and product teams operated under the assumption that software testing was only necessary after development was completed.

Software is a rapidly evolving marketplace of ideas. The problem with testing in Waterfall development? It didn’t leave room to adapt.

Ideas be nimble, ideas be quick. Throughout the development lifecycle, product owners don’t hold fast to the first documented conception of their product. Needs change. And so do expectations. Software testers brought in at the tail end of a project were kept in the dark, left to compare the feature list planned for development with the features represented in the finished app or platform. They were left asking, “Is this a defect or an intentional change?” Chasing down the answers they needed ate up time and budget.

Required reading: "Rapid Development: Taming Wild Software Schedules" — an oldie but a goodie, published in 1996. Steve McConnell’s seminal work on iterative development documented what many practitioners knew but hadn’t written down.

3

Page 7: Unlocking Software Testing Circa 2016

Made in the USA — the industrial roots of software development

The decision to involve testers at the end of the development lifecycle evolved from the process used to manufacture physical goods.

In manufacturing, electrical or mechanical engineers would design the process to build a widget — take a smoke detector, for example.

Once the smoke detector was built, it was checked for quality. Testers weren’t involved along the way. There was no need. Their only purpose was validating the functionality of the device — not informing the build.

There’s a huge difference between building a smoke detector and building an app. Software isn’t built from plastic and metal. It’s structured with ideas. As fast as an opinion changes, so too can the output of software development. Along the way there are opportunities for logical oversights and a need for validation throughout the process. Software can also be released far more frequently than a smoke detector production line can be retooled.

Applying the same constraints used in the physical world no longer made sense. Removing the inefficiencies in software testing would take more than another industrial revolution; the process itself had to be rethought.

4

Page 8: Unlocking Software Testing Circa 2016

History lesson

In the ’90s, the sequence of quality assessment changed. Statistical Process Control (SPC) came into vogue. Following SPC meant verifying that the process was in control and capable at all times, i.e. measuring during the process rather than at the end. This mentality, essentially the same as the new approach to Agile software testing, preceded it by nearly a decade.

5

Page 9: Unlocking Software Testing Circa 2016

Agile methodology in software testing

6

Page 10: Unlocking Software Testing Circa 2016

Don’t throw code over the wall. Take down the wall.

Rather than continuing to develop, pass code over the metaphorical “wall” to testers and wait for a defect report to be sent back as if by carrier pigeon, Agile methodology helped reimagine this antiquated approach.

Agile allocates testers at the beginning of the project as members of the core project team. That way, changing ideas can be tracked and features tested as they roll off the “assembly line.” In Agile, testing occurs during each two-week sprint rather than after the fact, once development is completed.

Accepting Agile
The Agile process acknowledges that the needs of the enterprise are constantly shifting during the ideation and build process. Pushing the pause button on business is a happy, albeit unrealistic, hope during software development. The reality is: Business development and learnings don’t stop when software development starts. Minds change. Requirements are adapted. Product teams and testers needed a new way to serve the needs of businesses.

Agile values shipping code over the extensive documentation that defined the old process. By documenting less, developers can do more.

7

Page 11: Unlocking Software Testing Circa 2016

The case of exploding defects

Imagine this scenario. It’s two weeks until the scheduled release of your solution. For the first time, software testers are allowed to examine the code — putting a magnifying glass to three months of hard work by your development team. They find 10 major bugs threatening the viability of the release. Then they begin testing the interactions of those defective features. The issues multiply by five. As new code is released to fix the bugs, problems with interactions continue until eventually you have 150 bugs driving the go/no-go decision one week from launch. Every day the business asks you whether the release is threatened.

As illustrated here, the ROI of testing throughout the process changes dramatically in Agile development. New features are implemented in every sprint. A very important part of ensuring software quality throughout the process is re-testing existing functional and nonfunctional areas of a system after new implementations, enhancements, patches or configuration changes. The purpose? To validate that these changes have not introduced new defects. Those familiar with software development know this as regression testing.

As if you needed any other reasons to move testing forward in the Agile lifecycle, here are six.

8

Page 12: Unlocking Software Testing Circa 2016

6 reasons to involve software testing at the beginning of the project

1. Moving software testing forward in the process gives quality testers a “red rope to pull” in challenging the development team should they identify logical oversights occurring during the build.

2. Passive validation on the back-end of the project doesn’t leverage the vast experience and intuition of testers.

3. The difficulty of validating software increases exponentially if testers aren’t able to track and understand the changes made to the initial requirements along the way.

4. Your testers are capable of more than just rote checks. They are the first users of your solution. Incorporating this experiential feedback at the beginning of the lifecycle allows functionality to be adjusted along the way (if it lands in scope).

5. Two words: Accumulated bugs. Therein lies an increased risk of defects if testers aren’t working in concert with developers and checking interactions with existing functionality as new features are developed.

6. If software testers are involved throughout the development cycle, test cases can be documented at a high level. The alternative? Developers documenting in detail the needs for QAs operating without relevant context.

9

Page 13: Unlocking Software Testing Circa 2016

Software testing methodologies deconstructed

Unlike the historical development methodology, Agile accommodates rapid directional pivots and saves testers the time spent deciphering requirement changes made as the needs of the target market shift.

In Agile development...

• Code is delivered for testing throughout the development lifecycle.
• Software testers are active advocates for development logic and share feedback as insights are gathered.
• Projects are tracked by testers as development progresses. Defects are assessed against the latest project requirements.
• Testing efforts closely integrate with development in each sprint.

In historical development...

• Code was delivered at the end of the project.
• Software testers were passive participants; feedback was ignored if it compromised the release schedule.
• Anything different from the written requirements was considered a defect.
• Independent testing validated that development met the written specifications.

10

Page 14: Unlocking Software Testing Circa 2016

Types of testing

11

Page 15: Unlocking Software Testing Circa 2016

Story-based testing

Depending on the project, a variety of different tests may be used, starting with the story-based test. These test cases are based on positive and negative user story scenarios. Each includes: the steps and actions, the expected result, the type of execution (manual/automated), the test importance (high, medium, low), the sprint in which the test case was added and the browsers or devices it runs against (IE/Firefox/Chrome/tablet).

Tests are then categorized by their relative level of importance.

High
This functionality is critical for the software. If it doesn’t work, the user will not be able to operate the system. Some examples include: logging into the system, recovering a forgotten password and adding new enrollees.

Medium
Tests classified “medium” validate functionality that is important to users but doesn’t prevent them from operating the system if it fails. Examples: filtering, negative scenarios and edge cases.

Low
Test cases in this category are assigned low priority. The functionality being tested rarely changes or is static. For example: the contact information page.
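A minimal sketch of how such a test case record might be captured if a team tracks cases in code rather than a spreadsheet; the class and field names below are illustrative, not a prescribed format:

    // Illustrative structure for a story-based test case record,
    // mirroring the fields listed above.
    public class StoryTestCase {
        enum Importance { HIGH, MEDIUM, LOW }
        enum Execution { MANUAL, AUTOMATED }

        String userStory;      // e.g. "As a user, I can log into the system"
        String[] steps;        // ordered steps and actions
        String expectedResult; // what the tester should observe
        Execution execution;   // manual or automated
        Importance importance; // drives how often the case is re-run
        int sprintAdded;       // sprint in which the case entered the suite
        String[] targets;      // e.g. {"IE", "Firefox", "Chrome", "Tablet"}
    }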

12

Page 16: Unlocking Software Testing Circa 2016

Sprint testing

This method analyzes the risks to code based on bug fixes and user stories implemented during each sprint.

The approach:
• Determine code changes for new features and bug fixes
• Determine changes in the user interface
• Determine how the change could impact the surrounding functions
• Follow up with the development team to discuss any potential problem areas after the change

Action: Select test cases based on the above determinations and execute them during the sprint, before code freeze. The goal? To discover any potential issues during sprint testing, then prioritize and address them before code freeze.

13

Page 17: Unlocking Software Testing Circa 2016

Feature/Code freeze and regression testing

During this phase, no further modifications are made related to new feature implementations. Features are typically frozen one sprint before deployment to production, though the length of the freeze period depends on project complexity and size. Adding new features right up until the release represents a huge risk to quality. Similarly, implementing new user stories during this time increases the risk of introducing regression defects your testers may miss in the haste of the release.

During the feature freeze phase, defects are retested and regression testing begins. A good strategy is to choose, from all test cases:

• Frequently used functionality
• Functionality that has shown many bugs in the past
• Complex functions
• Functionality that has changed several times during development

Action: Select test cases based on the above criteria and execute them during code freeze.
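One way to make that selection repeatable is to tag automated checks by suite so the regression subset can be run on demand during the freeze. A hedged sketch using JUnit 4 categories (assuming a JUnit 4 project; the category name and test are illustrative):

    import org.junit.Test;
    import org.junit.experimental.categories.Category;
    import static org.junit.Assert.assertTrue;

    // Marker interface used as a category label (name is illustrative).
    interface RegressionTests {}

    public class LoginTests {

        @Test
        @Category(RegressionTests.class)
        public void userCanLogInWithValidCredentials() {
            // Frequently used, historically buggy functionality belongs in the
            // regression category so it runs during every code freeze.
            assertTrue(login("demo-user", "demo-pass"));
        }

        // Placeholder for the real UI or API call under test.
        private boolean login(String user, String password) {
            return user != null && password != null;
        }
    }

A build runner that understands JUnit categories (Maven Surefire, for example) can then execute only the regression group before each release.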

14

Page 18: Unlocking Software Testing Circa 2016

Performance testing

Performance testing determines the speed or effectiveness of the software. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions.

Load testing
Putting demand on a system or device and measuring its response. Load testing is performed to determine a system’s behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation.

Stress testing
Normally used to understand the upper limits of capacity within the system. This kind of test is done to determine the system’s robustness under extreme load. It helps application administrators determine whether the system will still perform sufficiently if the current load goes well above the expected maximum.

Pro tip
Run the performance test as early as possible before the production release.

15

Page 19: Unlocking Software Testing Circa 2016

Performance test steps

• Define performance requirements: Establish the expected number of users, or the number the system targets to support.
• Set up the environment: The tests should be executed in an exact copy of the production environment.
• Define load scenarios: Define the scenario under test and how many users will be engaging in the interaction.
• Record load scenarios: We recommend JMeter.
• Execute scenarios and review performance reports: Analyze to determine where (if at all) the system is failing or needs optimization.
• Fix/optimize for performance: Make adjustments to the server or code as needed.
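JMeter is the recommended tool for recording and driving load scenarios. Purely to illustrate what a load scenario measures (not as a substitute for JMeter), here is a minimal hand-rolled sketch; the target URL and user count are assumptions:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Minimal load-scenario sketch: N concurrent "users" hit one endpoint
    // and the average response time is reported.
    public class TinyLoadTest {
        public static void main(String[] args) throws Exception {
            final String target = "https://example.com/health"; // assumed endpoint
            final int users = 25;                               // simulated concurrent users
            ExecutorService pool = Executors.newFixedThreadPool(users);
            List<Future<Long>> results = new ArrayList<>();

            for (int i = 0; i < users; i++) {
                results.add(pool.submit(() -> {
                    long start = System.nanoTime();
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL(target).openConnection();
                    conn.getResponseCode();                         // complete the request
                    return (System.nanoTime() - start) / 1_000_000; // elapsed ms
                }));
            }

            long totalMs = 0;
            for (Future<Long> response : results) {
                totalMs += response.get();
            }
            System.out.println("Average response time: " + (totalMs / users) + " ms");
            pool.shutdown();
        }
    }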

16

Page 20: Unlocking Software Testing Circa 2016

What’s lurking in the shadows of mobile testing?

The intricacies of mobile testing continue to test the strength of software teams — from inconsistencies across Android devices to technical issues tied to specific hardware and OS versions. The best software teams use grey box testing.

Did you catch a whiff of that bug? Sniffing in software dev.

Many of the most burdensome defects are hidden beneath the surface in the backend database. SQL queries can help crack the code by analyzing the database to learn what’s inside and explore how the data is being stored.

Grey box testing allows you to:
• Uncover defects
• Save on testing time
• Make developers’ lives easier

We recommend the Charles and Fiddler web debugging proxy applications to inspect traffic between a machine and the web and to examine HTTP caching, compression and security for potential issues.
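As a sketch of the database side of grey box testing (the connection details, table and column names below are assumptions, not from this guide): after driving an action through the UI or API, a tester can query the backend directly to confirm how the data was actually stored.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Grey box check: after creating an enrollee through the app,
    // verify the row really landed in the backend database.
    public class EnrolleePersistenceCheck {
        public static void main(String[] args) throws Exception {
            String jdbcUrl = "jdbc:mysql://localhost:3306/appdb"; // assumed connection
            try (Connection conn = DriverManager.getConnection(jdbcUrl, "qa", "secret");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT status FROM enrollees WHERE email = ?")) {
                stmt.setString(1, "new.user@example.com");        // created via the UI earlier
                try (ResultSet rs = stmt.executeQuery()) {
                    if (rs.next() && "ACTIVE".equals(rs.getString("status"))) {
                        System.out.println("Enrollee stored as expected.");
                    } else {
                        System.out.println("Defect: enrollee missing or in an unexpected state.");
                    }
                }
            }
        }
    }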

17

Page 21: Unlocking Software Testing Circa 2016

Show me the money: The cost of quality software

18

Page 22: Unlocking Software Testing Circa 2016

Historically testing mobile hasn’t been just different. It’s been harder.

Why?

Carriers. Back in the early days of mobile, the same phone could be sold to different carriers. Then each added a different firmware version, causing the app to behave differently depending on the network.

Unpredictable GPS behavior. Testers drove in and out of coverage testing apps — racking up miles and frustration.

A few other reasons testing mobile offers more challenges...

1. Traditionally a more manual process
2. Limited tools to do automation for mobile
3. Dozens of operating systems and languages to test in (BREW, J2ME, BlackBerry, Pocket PC, Palm OS, Symbian, Windows Phone, iOS, Android, C++, Java, Objective C, C#/.NET, Swift)

19

Page 23: Unlocking Software Testing Circa 2016

Challenges managing costs in mobile software testing

In the early days of software development, think just 15 years ago, ratios on project teams looked something like ten developers to every tester. Why? The platforms were simpler. There was less device-specific variety. Fast forward to today. That ratio looks more like two developers to every tester. Big difference. With that shift comes increased cost. How are teams managing it?

Some might say, “Hire better developers, and cut out extensive testing altogether.”

Good luck. Independent minds are needed in software development to approach the product with an eye on “breaking it.” Without that, the value of the product decreases. The tester/developer relationship is much like that of the editor/writer. Writers shouldn’t proof their own work. They WILL miss things. They’re too close. The same can be said of developers. The accountability between testers and developers is a peer-level relationship. Both minds are needed to deliver success.

Additional challenges

Growth in:
• Devices / form factors
• OS vendors and versions
• App versions
• Server versions

20

Page 24: Unlocking Software Testing Circa 2016

Buried in the economics of native mobile testing

When weighing the often unintended impact of cost in mobile development and testing, compare the complexity of these two scenarios:

Mobile architecture with a cloud backend
All phones are talking to the cloud. When that backend is updated, the update process on the phones is instantaneous. It’s a much simpler, less expensive process to manage what’s deployed in the cloud.

Verdict: Testing is manageable. Additional measures to mitigate the time/effort testers spend aren’t needed.

Native mobile
In this case, backward and forward compatibility become much more important. There might be two versions of the server and four versions of the app in use. Before a release, you need to guarantee that every combination works. Regression tests must be repeated eight times over, and again for every subsequent release.

Regression drivers: New OS ships, new server ships, new app ships

Verdict: You’re buried in the economics of testing. Suddenly, the focus isn’t on the software — it’s on hiring more bodies to keep up with it, effectively doubling or tripling your project costs to keep release times short.

21

Page 25: Unlocking Software Testing Circa 2016

Scale up or stretch the timeline?

Teams using a manual testing strategy have two options when the project scope grows:

• Keep high regression coverage and increase the time for testing
• Spend the same amount of time on testing and reduce the regression coverage

Economics justify an investment in automated testing. With automated regression testing there is no need to reduce testing coverage to expedite a release timeline. Automated tests are fast and can be run frequently — cost-effective for software products with a long maintenance life. New test cases can be added to the existing automation as the product grows, and automation allows developers and software testers to work in parallel: as developers are building the solution, software testers are building the automation to test it.

Why pile people on the problem? Just fix it.

Manually repeating these tests is costly and time consuming. Once created, automated tests can be run over and over again at little additional cost. Beyond that, they are much faster and more accurate than manual tests when configured properly.

22

Page 26: Unlocking Software Testing Circa 2016

Managing the cost of quality with automation

23

Page 27: Unlocking Software Testing Circa 2016

Mobile testing in 2016

Mobile testing has come a long way. There are fewer operating systems now. The systems are predictable — though more so with iOS. The Android system has retained the “wild west” feel of those first few years in mobile.

Because carrier-related variability decreased, developers and testers began to spend more time assessing the success and usability of the UI.

Now it’s time for the next philosophical leap — decreasing our reliance on testing with an actual phone.

24

Page 28: Unlocking Software Testing Circa 2016

Automate to great

Automation is one way teams can efficiently perform quality assurance while keeping project costs low. Unlike mere mortals, automation can quickly tell you, via regression test scripts, whether new features had unintended consequences on other aspects of your code.

Automating regression tests eliminates the need for testers to manually comb through and interact with the application to verify changes didn’t create unintended consequences elsewhere.

Why tie up a highly qualified software tester in rote tasks? Instead, let a computer do it. Free testers to think bigger: planning the automation and setting up the environment where it can be executed.

Why automation didn’t make sense in the old world...
Software testing happened once at the end of development. There was no reason to spend the time and effort setting up automation scripts for a one-and-done process.

Why Agile software projects need automation...
Spending the time on automation when running Agile software projects makes a ton of sense. The ROI changes dramatically if regression testing is required by your client every night, or on a regular basis. Ideally, highly trafficked paths through the solution (or “happy paths”) should be tested each time new features are added to check for unexpected reactions or breakage.

25

Page 29: Unlocking Software Testing Circa 2016

The benefits of automation

Increased test coverage, limited time spent
Often it takes herculean effort to sufficiently cover software projects with tests. Read: frequent repetition of the same or similar test cases performed manually. Hello monotony, goodbye efficiency. Some examples include:

• Regression testing after bug fixes or further development

• Testing of software on different platforms or with different configurations

• Data-driven testing (running the same test cases using many different inputs), as in the sketch below
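A minimal data-driven sketch using JUnit 4’s Parameterized runner; the validation rule and inputs are illustrative:

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    import static org.junit.Assert.assertEquals;

    // Data-driven testing: the same check runs against many inputs.
    @RunWith(Parameterized.class)
    public class ZipCodeValidationTest {

        @Parameters
        public static Collection<Object[]> inputs() {
            return Arrays.asList(new Object[][] {
                { "55408", true  },  // valid
                { "5540",  false },  // too short
                { "ABCDE", false },  // not numeric
            });
        }

        private final String zip;
        private final boolean expected;

        public ZipCodeValidationTest(String zip, boolean expected) {
            this.zip = zip;
            this.expected = expected;
        }

        @Test
        public void validatesZipCode() {
            assertEquals(expected, zip.matches("\\d{5}"));
        }
    }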

Better quality software
Automating your regression tests is one of the biggest wins for ensuring continuous system stability and functionality while changes to the software are made. Automated tests perform the same steps each time they are executed, and they never forget to record detailed results. What does this mean for your team? Shorter development cycles coupled with better software quality.

Optimize speed and efficiency while decreasing costs
Automation allows teams to keep project costs low and test coverage high by reducing the number of people needed to test. Instead of rote tasks, your test team is focused on high-value strategy and building the automation scripts.

26

Page 30: Unlocking Software Testing Circa 2016

Even more benefits

Catch bugs earlier in the process
The only thing worse than missing your release date is finding a huge landmine in your software a few days from launch. The best way to avoid this series of unfortunate events? Perform regression testing as frequently as possible, so there are no unpleasant surprises. Automated tests give teams the ability to run more regression tests and catch bugs earlier in the process.

Improve team focus
Automated tests give teams the opportunity to focus on new implementations rather than executing repeatable actions. Testers become less concerned with whether they receive and validate thousands of sent emails. Instead, they can devote their mental power to understanding users and improving the overall experience.

27

Page 31: Unlocking Software Testing Circa 2016

The tension between design and mobile automation testing

The original automation tests were based on recording actions and playing them back. They might simulate the effect of clicking the mouse at a certain pixel location. If the button moved (one of many UI changes that result from the highly iterative and exploratory practice of Agile methodology), the automated test would no longer click in the correct location.

In this way it was possible to create highly fragile test code that was costly to recreate when seemingly “minor” UI changes were made. It just wasn’t feasible to re-record the tests. The cost to update the tests began to exceed the cost of the change.

That moment when… “the tail wags the dog”

Pretty soon UI decisions were being made to avoid breaking the automation tests. Big. Problem. Validation tests can’t be allowed to dictate the experiential success of the solution.

28

Page 32: Unlocking Software Testing Circa 2016

Automation QAs, the enemy of Agile?

Was it true? Had testers become the enemy of momentum and exploration in software development?

Every change was impactful. None were trivial. There had to be a better way. Testers needed a way to develop tests the way the product itself was being developed. They needed to abstract references to components in the design and on the screen. Remember that button called out in the automation test using a pixel location? Calling it by name would allow the tests to keep working regardless of where it moved in the design.

This shift happened much sooner in the web world, driven by HTML standards. Everything on the screen had a name. It was a more difficult changeover with native apps. There weren’t necessarily standard, logical names for each component.

Digging deeper

Web and mobile automation use locators. The best practice is to use an element’s ‘id’ to find items on the screen.
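A hedged sketch of what that looks like with the Selenium WebDriver Java API (Appium exposes comparable locator strategies for native apps); the URL and element id are assumptions:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // Locate elements by a stable id instead of a pixel position,
    // so the test survives layout changes made during Agile iterations.
    public class LoginButtonCheck {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/login");          // assumed URL
                driver.findElement(By.id("loginButton")).click(); // named locator, not coordinates
            } finally {
                driver.quit();
            }
        }
    }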

29

Page 33: Unlocking Software Testing Circa 2016

The role of user personas in mobile automation testing

In setting up mobile automation, user personas are critically important. After identifying the core groups of users, the features and interactions in their happy paths through the solution should be automated. That way, your team can spend its mental energy on the edge cases — often 10x the number of happy paths.

Happy path
The paths and interactions through your app most trafficked by core users.

Edge case
Less common scenarios that affect comparatively fewer users but can still present quality challenges.

30

Page 34: Unlocking Software Testing Circa 2016

So why isn’t EVERYONE using mobile test automation?

Mobile test automation represents a shift in expertise many teams just aren’t ready for. Writing the automation scripts involves a new skillset.

Mobile automation also represents a philosophical shift for teams. It must be implemented at the very beginning of a project. The initial costs to build the automation may seem difficult to justify for teams just beginning to use it. Here’s how it pays off.

Exploring the risk and return of mobile automation

The risk (investment)
• Increased cost to build and perform the first test
• Increased cost to maintain working tests

The return
• Cost to repeat existing tests on old app versions against old server versions drops dramatically
• Cost to repeat tests throughout the development cycle for new features plummets
• Predictability of release cycles improves
• Continuous integration is now possible

31

Page 35: Unlocking Software Testing Circa 2016

Manual regression — gone wrong

Assume each manual regression test takes 10 minutes to perform, at a cost of $11 per run. Costs scale linearly as more test runs are needed.

20 features * 3 devices * 2 OS * 1 server * 1 app = 120 runs = $1,320
25 features * 3 devices * 3 OS * 2 servers * 2 apps = 900 runs = $9,900
30 features * 3 devices * 4 OS * 3 servers * 3 apps = 3,240 runs = $35,640
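The arithmetic behind those figures, as a quick sketch (the $11-per-run rate comes from the assumption above):

    // Combinations of features, devices, OS, server and app versions,
    // multiplied by the cost of one 10-minute manual run.
    public class RegressionCost {
        static int cost(int features, int devices, int osVersions, int servers, int apps) {
            int runs = features * devices * osVersions * servers * apps;
            return runs * 11; // $11 per manual run
        }

        public static void main(String[] args) {
            System.out.println(cost(20, 3, 2, 1, 1)); // 1320
            System.out.println(cost(25, 3, 3, 2, 2)); // 9900
            System.out.println(cost(30, 3, 4, 3, 3)); // 35640
        }
    }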

One word: Ouch
Especially on projects where teams must test and account for a high degree of variability in devices, operating systems, servers and app versions, mobile automation is a very efficient remedy.

One-time investment: Nice
Let’s say developing automated test cases for the code base after refactoring has been completed will take 156 hours. A sample breakdown of the investment might look like the allocations on the next page. Note, though, that the total hours depend on the size and complexity of the solution you are testing.

32

Page 36: Unlocking Software Testing Circa 2016

Breaking down an investment in automation (156 hours)

• 13% Framework: Establish the automation framework that will be the foundation of the tests
• 77% Regression test cases: The most critical test cases that must be performed for each build delivered to QA
• 5% Test cases in continuous integration: Add automation test cases into continuous integration
• 5% Execution reports: Create the execution reports for your test cases

33

Page 37: Unlocking Software Testing Circa 2016

An alternative to automation: Risk-based testing

34

Page 38: Unlocking Software Testing Circa 2016

Why risk-based testing?

If your client or project team can’t afford to invest in automation testing, but can afford to run Agile and involve your quality team at the beginning of the project, risk-based testing offers another respected option.

Consider this scenario:
• You currently have 100 features and 1,000 possible regression test cases.
• Next release, you add 10 more features. This adds another 100 regression test cases to the pool.

An Agile risk-based testing strategy doesn’t treat all the regression test cases as equal. Instead, your strategy would treat the new features and their associated interactions (with another, say, 20-30 features) with much higher priority.
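One common way to operationalize that prioritization, sketched under the usual risk-based convention of scoring each case as likelihood times impact (the test names, scores and cutoff below are illustrative, not from this guide):

    import java.util.Arrays;
    import java.util.List;

    // Risk-based selection sketch: score each regression case by
    // likelihood-of-failure * impact, then run the riskiest cases first.
    public class RiskBasedSelection {

        static class TestCase {
            final String name;
            final int likelihood; // 1-5: how likely this area is to break this release
            final int impact;     // 1-5: how badly users are hurt if it breaks

            TestCase(String name, int likelihood, int impact) {
                this.name = name;
                this.likelihood = likelihood;
                this.impact = impact;
            }

            int risk() { return likelihood * impact; }
        }

        public static void main(String[] args) {
            List<TestCase> suite = Arrays.asList(
                new TestCase("Login (touched by a new feature)", 4, 5),
                new TestCase("New enrollment flow",              5, 5),
                new TestCase("Contact information page",         1, 1)
            );

            suite.stream()
                 .sorted((a, b) -> Integer.compare(b.risk(), a.risk())) // highest risk first
                 .limit(2) // run only what the sprint budget allows
                 .forEach(t -> System.out.println(t.name + " (risk " + t.risk() + ")"));
        }
    }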

35

Page 39: Unlocking Software Testing Circa 2016

Intuition is a tester’s best friend

This strategy leans hard on your team’s intuition about where problems have lurked throughout development and how they might impact the new features. It also relies on their ability to synthesize the change analysis from the source code.

Other ways to save time and win when implementing a risk-based testing strategy:
• Prune low-risk combinations from the regression plan
• Prune low-risk features from the regression plan

36

Page 40: Unlocking Software Testing Circa 2016

Enter the cloud

“Say goodbye/good riddance to the testers’ device cabinet.” Here are a few of the cloud solutions available to test software solutions:

• Xamarin Test Cloud
• AWS Device Farm
• Sauce Labs
• Testdroid
• SOASTA

The benefits
• No growing device lab or shipping hardware between locations

• Equally accessible to all team members in a distributed team

37

Page 41: Unlocking Software Testing Circa 2016

Communication is king

Regardless of the testing strategy your team settles on, implement the following for smooth sailing from requirements gathering to delivery:

• Establish clear channels of communication, roles and responsibilities and determine which teams are testing in each environment

• When working with a partner team, share test plans so everyone is working from the same base knowledge

• Avoid the chaos by defining a process straight away

38

Page 42: Unlocking Software Testing Circa 2016

Transforming our expectations of quality

You deserve to know.
Whether you’re working with an external partner or an internal department, the success of your project and your ability to refine processes that aren’t working depend on a keen understanding of overall project quality.

The best software testing teams do this: deliver a defect trends report weekly.

It should call out found and closed issues, helping your project team visualize and gauge the overall stability of development.

39

Page 43: Unlocking Software Testing Circa 2016

When should you take your testing external?

You need a partner who philosophically embraces the capabilities of the new world of testing.
Software testing in 2016 has evolved far beyond repetitive keystrokes and long hours checking one stream of outcomes against another. The ability to automate large chunks of the testing frees your team to focus on the human elements of your solution. That’s the level of quality your users can feel.

You don’t have a quality function of your own.
Without testers dedicated to validating the logic and experience of the solution, the role will fall to your PMs and BAs, who already have full plates of their own. Fitting testing into an already full role isn’t doing their sanity or the quality of your project any favors.

40

Page 44: Unlocking Software Testing Circa 2016

Software testing tool kit

41

Page 45: Unlocking Software Testing Circa 2016

Test strategy document

Recommended test scope by risk level and available time:

High risk changes
• Sprint regression test (2-3 days): Execute all high importance test cases
• Release regression test (1-2 weeks): Execute all high, medium and low importance test cases
• Release validation checklist (4 hours): Execute a specific list of high importance test cases
• Hot fix (2-4 hours): Execute part of the high and medium importance test cases

Medium risk changes
• Sprint regression test (2-3 days): Execute part of the high importance test cases
• Release regression test (1-2 weeks): Execute all high and medium importance test cases
• Release validation checklist (4 hours): Execute a specific list of high importance test cases
• Hot fix (2-4 hours): Execute part of the high and medium importance test cases

Low risk changes
• Sprint regression test (2-3 days): Execute part of the high importance test cases
• Release regression test (1-2 weeks): Execute all high importance test cases
• Release validation checklist (4 hours): Execute a specific list of high importance test cases
• Hot fix (2-4 hours): Execute part of the high importance test cases

42

Page 46: Unlocking Software Testing Circa 2016

Interactive tools

Download our interactive template to begin tracking defects, and our interactive device matrix.


Other noteworthy defect trends to track include:

Bug convergence
When the weekly number of “Closed” issues exceeds the weekly number of “New” issues. This is an early-stage indication of stability before the production release.

Zero bug bounce (ZBB)
The first time the number of critical bugs reaches zero. This is a late-stage indication of stability before the production release.
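A toy sketch of spotting bug convergence from weekly counts (the numbers are purely illustrative):

    // Flag the week when "Closed" issues first exceed "New" issues
    // (bug convergence), an early sign of stability before release.
    public class DefectTrends {
        public static void main(String[] args) {
            int[] newIssues    = { 40, 35, 30, 18, 12 }; // weekly "New" counts
            int[] closedIssues = { 10, 20, 28, 25, 20 }; // weekly "Closed" counts

            for (int week = 0; week < newIssues.length; week++) {
                if (closedIssues[week] > newIssues[week]) {
                    System.out.println("Bug convergence reached in week " + (week + 1));
                    return;
                }
            }
            System.out.println("Bug convergence not yet reached.");
        }
    }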

43

Page 47: Unlocking Software Testing Circa 2016

mentormate.com | 3036 Hennepin Avenue, Minneapolis, MN 55408 | 855-473-1556

Talk with an expert
Ready to innovate? Contact us to learn more about our software testing process and mobile test automation.

Contact us at (855) 403-5514 or [email protected]

MobCon developed by MentorMate MentorMate has designed, delivered and staffed digital experiences since 2001. Along the way we’ve learned a lot. Now it’s time to share. That’s why we founded MobCon in 2012 and MobCon Digital Health in 2015. Each year we host conferences for the top minds in mobile and digital strategy to do just that. Be part of what’s next and dive deep into the trends and technologies revolutionizing engagement in today’s business landscape. Register for our next event at mobcon.com.