The Road to Hell and Back · 2013-11-04
TRANSCRIPT
The Road to Hell and Back: Patterns in Test Automation Project Failure – and Recovery
Alon Linetzki & Michael Stahl
You may use some of the material from this presentation under the following conditions:
Copyright remains with the authors named below, and you are granted permission to use the material only if,
when using the material or any part of it, you reference the authors in this way:
Source: “The Road to Hell and Back: Patterns in Test Automation Project Failure – and Recovery”, by Alon Linetzki, Best-Testing & Michael Stahl, Intel
Using the material inside this presentation
Feb 2012
Quality, testing and product-assurance coach and trainer for the last 28 years
Vast experience in software-only and embedded systems
Trained more than 1,000 professionals during my career, and coached over 20 companies
Keen on people – how we think, how well we work, how we can do better…
Established the Israeli branch of ISTQB® – ITCB (2004); serve as VP and marketing director; lead the ISTQB® Partner Program worldwide
Established SIGiST Israel – the Israeli Testing Forum (2000)
Training and coaching internationally since 1995
Alon Linetzki
A common test automation project story
Alert Signals
Counter Measures
Summary
On the Menu…
Once upon a time…
Common test automation story
Mr. Otto Mate Cool!
Can you make it…
Otto’s Team
Show to friends…
I can automate my work!...
Common test automation story
Mr. Otto Mate
Perfect!
Can you add…
Otto’s Team
Show to friends…
Piece of cake!
Mr. Mann a. Ger
What a wonderful world!...
Common test automation story
Mr. Otto Mate
Otto! The last release fails!...
Otto’s Team
Arggghhh!
Fix Fix Fix
Mr. Mann a. Ger
Makes sense… Sir!
I need time! I need help!
… and mates
Common test automation story
Mr. Otto Mate … and mates
… and mates
… and mates
Common test automation story
Mr. Otto Mate … and mates
Let’s REDESIGN!!!
#$&@***!!!
Mr. Mann a. Ger
So many times…
We let a small automation project mushroom into a huge framework…
We allowed a test automation adventure to continue uncontrolled, as a means of retaining a valued tester…
We were sucked into cycles of automation framework re-writes…
We spent years patching a tool that was never architected properly…
We realized – too late – that we never understood the overall cost of developing an in-house tool…
We got stuck with a half-functional self-developed tool…
We found out we can’t back out of the situation without a huge investment…
… and then the automation guy got bored and left us with spaghetti code.
Motivation
Small, localized, grass-root automation initiatives are welcome…
… as long as they stay small and localized!
We need
Alert Signals to recognize the symptoms
Counter Measures to mitigate the impact
… of a runaway automation initiative
What is needed…
Stage 1: Small, local, feature-centered
Stage 2: Generalization
Stage 3: Institutionalization and staffing
Stage 4: Change of focus: Technology Management
Stage 5: Maintenance overload; Re-design
Alert Signals/indicators identified
Alert Signals
[Diagram: the five stages of a runaway automation initiative, each with an exit point labeled “Failing the project”]
On the next slides, you will find for each exit point:
Questions to ask – to identify where you are
Situation at this stage
Suggested Counter Measures
Counter Measures
Questions:
Is the tool testing a single feature?
Does it have immediate, clear ROI for the task it automates?
Is one person only involved in development AND use of the tool?
Suggested mitigation actions:
If YES to the above - ensure the following… and relax:
Code control
User manual (can be as short as a "usage" line)
Documentation (what the tool does; high level design)
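At this scale, the documentation can genuinely be a single usage line plus a short docstring. A minimal sketch of what that looks like – the file name, the login feature, and the function are hypothetical examples, not from the talk:

```python
"""check_login.py – smoke-test a single (hypothetical) login feature.

Usage: check_login.py <host> <user> <password>
Prints PASS if the login check succeeds, FAIL otherwise.
"""

def check_login(host, user, password):
    # Placeholder for the real check; a real tool would drive the product here.
    return bool(host and user and password)

# Demo run with made-up arguments:
print("PASS" if check_login("test-server", "otto", "secret") else "FAIL")
```

The module docstring doubles as both the “usage” line and the what-the-tool-does documentation the slide asks for; at stage 1, anything beyond this is over-documentation.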
1. A Tool is built locally on a small scale
Questions:
Are features being added to allow automation of tests other than those of the owner?
Does tool-related work fill more than 25% of the tester’s time?
Does the tester start missing testing tasks?
Is the decision to let this person continue heavily biased by an HR problem?
2. Requests are placed to enhance the tool and make it more generic
Situation at this stage
The tool is still on small scale:
No generic test-case management capabilities
Implements mostly core-business or core-technology
Can’t be bought off the shelf – we want this tool!
But we don’t want it to expand and become too generic…
2. Requests are placed to enhance the tool and make it more generic
Suggested mitigation actions [1/2]:
Implement a proper code-development methodology:
Requirements, design, code review, configuration management
Tests; bug reporting
Start controlling the scope:
Single feature
No test-management features
2. Requests are placed to enhance the tool and make it more generic
Suggested mitigation actions [2/2]:
Standardize tool development:
Common user interface, APIs, programming language
Automation strategy
Library-creation guidelines and owners
No in-house GUI automation (buy it instead)
“I can do this in 3 weeks – let me try” – it’s a trap!
Automation as a solution for an HR problem? As long as the above can be done properly… why not?
2. Requests are placed to enhance the tool and make it more generic
Questions:
Did you add the title “tool owner” or “test automation engineer” to the job description of the involved tester? [Yes…]
Did you assign the tool owner to work MAINLY on the test-automation framework? [Yes…]
Does the owner have development experience, and does he use proper development methodologies? [Some… Some/No…]
3. Technical owner (de-facto) spends most of his time on automation with the tool
Probable situation at this stage:
The tool supports a number of features
Some libraries exist…
If you are only catching on now:
No documentation
No proper development processes
3. Technical owner (de-facto) spends most of his time on automation with the tool
Suggested mitigation actions [1/2]:
Hold everything!!!
Complete the step-2 mitigation list
Cancel any progress towards building a test-management tool
Initiate a tool-evaluation project:
The requirements part is difficult; put a senior person on it; give it priority and the needed time (at least 2-3 months!)
Discuss and define an “automation strategy”
The primary topic is “buy vs. build”, plus “core technology vs. generic test-management needs”
3. Technical owner (de-facto) spends most of his time on automation with the tool
Suggested mitigation actions [2/2]:
Send your automation person to relevant computer-science and software-development training
You now need a developer – not a tester!
If you need more people for the project – hire programmers, not testers!
You may – for a short term – let them be testers, to get into the mindset…
3. Technical owner (de-facto) spends most of his time on automation with the tool
Questions:
How many of the features you developed in-house also exist in commercial or open-source tools? [medium to high…]
What percentage of the features is not related directly to the core functionality of the product? [a lot…]
How much of your automators’ time is spent on maintenance? [>25%…]
Did you do (or discuss) a “robustness enhancement” release? [Yes…]
4. Tool focus degrades – less core functionality is implemented
The situation at this stage:
Lots of code was written, strongly influencing a “buy vs. build” decision
The tool’s robustness is low. New releases are painful!
People start to rely on the tool for their day-to-day testing…
4. Tool focus degrades – less core functionality is implemented
Suggested mitigation actions:
Hold everything!!!
Complete the stage-2 and stage-3 mitigations
Re-architect the tool: separate core technology from non-core technology
Build a clear test plan/road map for the automation tool, and a release plan:
Dedicate people or time to tool testing
Get the core side of the tool to a stable state
Allow development of core tech/business capabilities to continue, with first priority for bug fixes
Make a “buy vs. build” decision – calculating the investment and ROI correctly
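The last point can be made concrete with a back-of-the-envelope total-cost-of-ownership comparison. A sketch only – all figures below are hypothetical placeholders, not numbers from the talk:

```python
# Rough "buy vs. build" comparison over a planning horizon (all numbers hypothetical).

def total_cost(upfront, yearly_maintenance, years):
    """Total cost of ownership: initial investment plus recurring cost."""
    return upfront + yearly_maintenance * years

YEARS = 3
build = total_cost(upfront=120_000, yearly_maintenance=40_000, years=YEARS)  # in-house dev effort
buy = total_cost(upfront=30_000, yearly_maintenance=15_000, years=YEARS)     # licenses + support

print(f"build: {build}, buy: {buy}")
print("cheaper over", YEARS, "years:", "buy" if buy < build else "build")
```

The slide’s warning applies here: code already written biases this comparison (sunk cost), so the calculation should count only future costs on each side.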
4. Tool focus degrades – less core functionality is implemented
Questions [1/2]:
What % of development time is dedicated to maintenance and bug fixes? [A lot…]
Are the following terms repeated when discussing the tool?
Redesign…
Refactoring…
“Original architecture is not planned for this…”
“Need to develop an abstraction layer…”
“Need to invest in interfacing to other systems…”
5. Tool re-design is unavoidable…
Questions [2/2]:
Does the tool suffer from performance problems?
Do you experience unplanned delays in test-cycles due to test-automation tool failures?
Is the number of late nights and weekends spent at work by your automation person trending up?
Do the tool’s bug counts trend up?
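Whether the bug counts “trend up” need not be a gut call: a least-squares slope over per-release counts answers it. A minimal pure-Python sketch (the sample counts are invented for illustration):

```python
# Detect an upward trend in per-release bug counts via a least-squares slope.

def slope(values):
    """Least-squares slope of values against their index 0..n-1."""
    n = len(values)
    mean_x = (n - 1) / 2          # mean of indices 0..n-1
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

open_bugs_per_release = [4, 6, 5, 9, 11, 14]  # hypothetical counts
print("bug counts trending up:", slope(open_bugs_per_release) > 0)
```

A positive slope over the last several releases is the quantitative version of this alert signal; the same check applies to the maintenance-time percentages asked about earlier.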
5. Tool re-design is unavoidable…
Your test automation project is in a critical state!!!
Suggested mitigation actions:
Provide minimal support to keep the tool running (“alive”)
We can’t hold up the test cycles…
Apply the stage 2, 3, 4 mitigations!
5. Tool re-design is unavoidable…
Be alert for Alerts
Identify your stage
Analyze your situation
Implement Counter Measures
Summary
The Road to Hell and Back: Patterns in Test Automation Project Failure – and Recovery