Mountains to Molehills: A Story of QA

By Dave Haeffner


DESCRIPTION

Dave Haeffner had just changed jobs within his company, completely shifting his career from Systems Administration to Quality Assurance. If only he had known the challenges that faced him in the year ahead, he might have gotten excited, or maybe he would have thought twice. He knew very little about his new role, but that didn’t worry him; it was the mountainous challenges facing QA that kept him up at night. Join him as he retraces his steps, sharing lessons learned, how he helped change QA from the bottom up, and how he turned those mountains into molehills.

TRANSCRIPT

Mountains to Molehills: A Story of QA

By Dave Haeffner

Hello, my name is Dave Haeffner and I work at The Motley Fool, an online financial investment community (Fool.com).

I used to work in IT Operations. That’s all I knew. My undergrad was a Bachelor of Technology, and I’ve held roughly every job in that field. But a year and a half ago I transitioned into the role of Quality Assurance Analyst, Tester, Quanalyst… let’s just call it “QA”.

The change was interesting. In Operations I had a reactive perspective: I would see, or be notified of, things when they were broken and have to fix them.

But when going into QA I thought, “Oh, terrific! I get to find things that are broken and not have to fix them!” I soon came to realize that QA was less about finding things that are broken and more about helping to build things that aren’t broken in the first place.

Everything I need to know I learned in Kindergarten… and at Agile 2009.

I learned a lot at Agile 2009. A lot of best practices and good ideas.

But this talk focuses on the top 3 things I learned from Agile 2009 and how they have guided me and my work over the last 12 months.

Chris McMahon gave a talk titled “History of a Large Test Automation Project Using Selenium” in which he discussed how Socialtext approached testing.

He had 4 killer take-away points that helped paint a picture of what it takes to have a large automated web testing suite that works well.

1. Create/maintain fixtures (DSL) – see the sketch after this list

2. Feature coverage, like a web

3. Fast/routine reporting of failures

4. Quick response/analysis of failures

Chris McMahon, “History of a Large Test Automation Project Using Selenium”
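To make point 1 concrete, here is a minimal, hypothetical sketch of a DSL-style fixture in Ruby (the language this story eventually lands on), using the selenium-client gem cited at the end of this deck. Tests call domain-language methods instead of repeating raw locators, so when a page changes, only the fixture has to change. The locators and page flow below are invented for illustration.

    # Hypothetical DSL fixture: tests call log_in_as / logged_in?
    # instead of scattering raw Selenium calls and locators.
    module LoginFixture
      def log_in_as(username, password)
        @browser.open "/login"                            # go to the login page
        @browser.type "username-field", username          # invented locators
        @browser.type "password-field", password
        @browser.click "login-button", :wait_for => :page # wait for page load
      end

      def logged_in?
        @browser.text? "My Account"                       # crude success signal
      end
    end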

I also had the chance to participate in a lightning talk with some of the major minds in the Agile community, titled “Slow and Brittle: Replacing End-to-End Testing”.

During this talk an idea started to take root: Why is testing so custom and seemingly hard? Why doesn’t some kind of universal web testing harness exist? Why can’t testing be as easy as drinking a cup of coffee?

An idea can be a dangerous thing.

Arlo Belshee, James Shore, “Slow and Brittle: Replacing End-to-End Testing”

On the last day of the conference I had the opportunity to attend an open jam put on by Adam Goucher (with special guest Jason Huggins).

Jason Huggins, creator of Selenium, co-founder of Sauce Labs

Adam Goucher, contributor to the Selenium open-source community (aka maintainer of Selenium IDE), testing evangelist

We chatted about the intended use of Selenium IDE and the power of exporting to a programming language.

– Selenium IDE: likened to a flight simulator
– Selenium RC: how you actually fly the plane

Jason Huggins, Adam Goucher, “Selenium Open Space”
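As a rough sketch of the difference: the simulator records clicks, while “flying the plane” means driving a real browser session from code. Here is a minimal example using the Ruby selenium-client gem cited at the end of this deck, assuming a Selenium RC server running locally; the site URL is just a placeholder.

    require "rubygems"
    require "selenium/client"

    # Connect to a Selenium RC server and drive a real browser session.
    browser = Selenium::Client::Driver.new(
      :host    => "localhost",           # assumes RC is running locally
      :port    => 4444,
      :browser => "*firefox",
      :url     => "http://www.fool.com", # placeholder base URL
      :timeout_in_second => 60
    )

    browser.start_new_browser_session
    browser.open "/"                     # load the home page
    puts browser.title                   # we are now flying the plane
    browser.close_current_browser_session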

I left the conference with a new perspective.

I felt like this QA stuff was starting to make sense and that I would be able to really make a difference when I got back to work.

When I got back? I saw nothing but mountains.

20/20

It turns out that we were flying the plane with a flight simulator. All automated tests were built using Selenium IDE and grew into a massive set of Smoke and Regression suites that were brittle, slow to run, and provided very poor feedback.

20/20: Our Smoke suite took 20 minutes to run and roughly 20 minutes to interpret. Once the errors were understood, this information would be placed into an e-mail and shipped off to the appropriate Development team. Unfortunately, that information was viewed as a distraction by the Developers, since it was out of band with their workflow.

Much like the man with two brains, QA has 2 minds: technical and analytical. Unfortunately, the majority of the QAs are more analytical than technical.

Limited resources, both funding and human.

There was roughly 1 QA for every 6 Developers, and the training budget was fairly lean (especially given the economy).

What story would be complete without spaghetti code? 17 years of a growing code base can have that effect.

And as a result, there are often discrepancies between our production and pre-Live environments.

This presented some interesting challenges when finding issues on the live website that somehow breezed right by our Testing environments.

There was a bit of aversion to change within Tech and the QA Department. Because when you mess with someone’s spaghetti, it can get messy.

The QA Department was viewed as Outsiders and as a result there was a significant communication gap between Developers and QA.

There was also a bit of a throw-it-over-the-wall mentality. When an issue was found, Devs would often say “works on my machine”.

I started to question my transition from IT Operations and felt like I was going through the stages of grief. But after much soul-searching, I had a thought…

What would Chuck Norris do (if he were in QA)?

He would subdue The Motley Fool’s use of Selenium IDE with a round-house kick to the face and build something in its place.

Perhaps something Ruby-flavored that leveraged open-source libraries and could be used by everyone: Business, Development, QA.

Thus bridging the gap between what QA is perceived to test and what it actually tests. And he would call it “Testerfield”!


The solution we built was an assembly of innovation: cobbling together a bunch of different tools and concepts into the combination we needed and wanted.

And we gave it a name in an attempt to shake the “Selenium” nomenclature, since that had become a buzzword with some negative connotations.

That and when you name something, you give it an identity. You make it your own. You care for it.

A fun side effect of this tool was that we started to gain support/respect from the Devs, since with it we were able to write some tests that saved them time.

Too bad our first go-round with Testerfield was a failure… Perhaps Chuck Norris’s business acumen needs some work. This is what you get when you try to solve a problem from a technologist’s perspective rather than a business perspective.

There was a bit of a signal-to-noise problem with Testerfield:

Signal: We could write stable and robust tests.

Noise: This was a slightly different approach to testing than QA was used to. The thought of code seemed scary to most of QA, and the resulting output was difficult to interpret.

Failure posed a significant problem for this movement. Testerfield was developed through unofficial channels (aka me and an intern).

There was no management mandate for this, no sound from on high, not even an angry mob or surly swear words at the computer screen from QAs when writing tests, just a vision in my head of how things could be.

This meant no management decision would be made until a better solution was presented that captured value.

Testerfield AND the old Selenium IDE suites needed to be maintained simultaneously (read: double work).

Pushback started to appear (aka I was encouraged to write new tests ONLY in Selenium IDE)… but this was before they saw Testerfield 2.0 :-)

I reveled in failure, listened for feedback, adapted the solution, and looked to re-tool my pitch.

E = Q * A

I thought that if you can’t measure it, you can’t improve it.

Failure led me to learn about the effectiveness formula (E=Q*A) which offered a good framework on how to view the task at hand.

Effectiveness = Quality * Acceptance. A low-quality idea that is widely accepted is no better than a high-quality idea that is poorly adopted; for example, Q=3 with A=9 and Q=9 with A=3 both yield an effectiveness of 27.

Here’s a good write-up on it http://www.prescientdigital.com/articles/best-practices/change-management-strategies-to-support-intranet-adoption/

I felt that our use of Selenium IDE was a low quality solution that was poorly accepted and that Testerfield was a high quality idea. It just needed to be well received.

So I started to wonder about what drove me to action, which led me to the Human Action Model (re: Ludwig von Mises – Human Action: A Treatise on Economics).

It basically states that someone suffers from discomfort, has a vision for a better world, and takes action.

I had noticed discomfort, had a vision, and taken action. So how could I get others to follow? How could I get through to people and have them see the value in this intrinsically?

Enter ‘The Golden Circle’. It is a simple but powerful model for inspirational leadership that all starts with the question “Why?”.

Simon Sinek – author of ‘Start With Why’. He offers some historic examples, such as Apple, Martin Luther King, and the Wright brothers, as well as a counterexample: TiVo. There’s a good TED Talk about this: http://www.ted.com/talks/simon_sinek_how_great_leaders_inspire_action.html

So I thought about the messaging that I wanted for Testerfield, and here’s what I came up with:

We believe that in order to create the World’s Greatest Online Financial Investment Community we need to craft quality software. We plan to do this by providing quick, reliable, and robust feedback to Developers and the Business. We just happen to have a new tool that provides this. Want to take a look?

After re-tooling I was ready. Oh, and… propaganda helps.

20 to 20

Testerfield 2.0 (now with propaganda!)

The new error reporting could be read by anyone. It was now possible for people to understand what the test suite did (plain English), whether it passed, whether it failed, and why. The feedback loop (after information receipt) was cut from 15-20 minutes down to 15-20 seconds. The communication gap between Dev and QA was beginning to narrow. Respect thermometer: rising.
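The deck shows real report examples at the very end; as an illustration of the idea, descriptive test names in an RSpec-style spec make the runner’s output read as plain English on its own. The feature, check, and wording below are hypothetical, and the should-style assertion is RSpec-2-era syntax.

    require "rspec"

    # With a documentation-style formatter, the output reads:
    #   Newsletter signup
    #     lets a visitor subscribe with a valid e-mail address
    describe "Newsletter signup" do
      it "lets a visitor subscribe with a valid e-mail address" do
        subscribed = true      # stand-in for a real browser-driven check
        subscribed.should be_true
      end
    end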

We started to gain some real traction and picked up a couple of champions along the way. A movement started to take hold.

As a result, an opportunity to present to the entire Tech Department appeared, and things started to change (read: funding started to appear).

• QA received in-house Ruby training through a company called Jumpstart Lab.

• As a result, the more analytical QAs are now writing new tests and converting old ones in Testerfield.

But what about the speed of the tests? The Smoke suite takes 20 minutes to run! And just because you have good reporting doesn’t mean that it’s not a distraction to the Devs who receive it. And it may be old news by the time they get around to reading it. The process is still out of cycle with the development workflow.

What about parallelization?

Parallelization was something we thought would be challenging but already solved (for our type of setup). We were wrong; it was going to be much harder. The options we found either didn’t work as we wanted, worked exactly as we wanted but were no longer maintained and outdated (they wouldn’t work with our version of Ruby), or looked promising but required a rework of our platform architecture and test design. That is, of course, until we found the answer.

20 to 2

Enter Sauce Labs, a provider of cloud testing goodness.

I like to call this performance gain “20 to 2” (actually it’s more like 3-1/2 minutes, but 20 to 2 is catchier). We were able to keep our existing reporting, offload the heavy lifting to their grid, and get the added benefit of video capture, all with minimal changes on our end (some minor additions/changes to our code base, standing up a single Linux box at our data center to fork and send the test processes, and configuring a secure tunnel).
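The deck doesn’t spell those changes out, but here is a sketch of their shape. The credentials, hostnames, site URL, and spec-file layout are all placeholders, and the browser-string fields follow Sauce’s RC-era format as I recall it; treat all of them as assumptions.

    require "rubygems"
    require "json"
    require "selenium/client"

    # Each spec builds its driver against Sauce's grid instead of a
    # local RC server. Credentials and browser details are placeholders.
    def sauce_driver
      Selenium::Client::Driver.new(
        :host    => "ondemand.saucelabs.com",
        :port    => 4444,
        :browser => {
          "username"        => "YOUR-SAUCE-USERNAME",
          "access-key"      => "YOUR-SAUCE-ACCESS-KEY",
          "os"              => "Windows 2003",
          "browser"         => "firefox",
          "browser-version" => "3.6"
        }.to_json,
        :url     => "http://www.fool.com",
        :timeout_in_second => 90
      )
    end

    # The "single Linux box" part: fork one process per spec file,
    # then wait for all of them and summarize.
    spec_files = Dir.glob("spec/**/*_spec.rb")
    pids       = spec_files.map { |file| fork { exec("rspec", file) } }
    statuses   = pids.map { |pid| Process.wait2(pid).last }
    summary    = statuses.all? { |s| s.success? } ? "ALL GREEN" : "FAILURES DETECTED"
    puts summary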

If you’re not using them, you should.

What’s the score?

Old and busted → New hotness

• 40-minute plane crash with out-of-band communication → 4-minute, robust reporting readable by all, soon to be in-band

• QA with 2 brains → Ruby training; Analysts writing tests in the new tool

• Limited resources/funding → Still short on QAs, but money for tools and training is prevalent

• Aversion to change → Change occurring

• Developer cold shoulder → Developer street cred

I would like to claim this molehill in the name of QA.

Check out Testerfield.com

Dave Haeffner

• Twitter: @TourDeDave

• E-mail: [email protected]

Here is an example output from one of our Selenium IDE tests

Title name (can you easily tell what this test does?)

The whole test looks fairly gnarly; it’s got teeth

What can you deduce from the error? Not much, right?

Testerfield error output (leveraging the Ruby selenium-client gem http://github.com/ph7/selenium-client)

Useful category heading and test name – you can tell what the test is and what it does

Error – same as previous test, BUT

Shows you the pertinent parts of the test - the step that failed AND the steps before and after

And more importantly, the screenshot
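Not shown in this write-up, but for a sense of how that step context and screenshot can be captured: wrap the step, report it on failure, and grab a screenshot before re-raising. A rough sketch only; the locator is invented, and the screenshot method name follows the Selenium RC command names as I recall them in the gem.

    begin
      browser.click "subscribe-button", :wait_for => :page  # hypothetical step
    rescue StandardError => e
      # Real code would rescue the gem's specific command error.
      puts "FAILED STEP: click 'subscribe-button' -- #{e.message}"
      browser.capture_entire_page_screenshot("/tmp/failure.png", "")
      raise
    end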