TRANSCRIPT
Quality contamination in Agile Scrum Teams and the Remedies
www.srijan.net | [email protected]
The QA Strategy
1
Learning Outcome
2
• How to perform QA on Agile Scrum projects
• Defining efficient QA strategies for Agile Projects
• Agile/QA best practices
● No QA strategy/Plan, checklist and milestones defined for Project
● No Gates/Checkpoints defined
● No Metrics defined, measured and tracked
● No guidelines for Defect severity/priority clearly defined
● No Defect Root Cause Analysis (RCA) and Quality Improvement Plan (QIP) prepared
● Not able to fit QA in the same sprint as Development
● Don’t think before Test: no upfront test planning or defining/understanding of test cases/scenarios
● QA not part of the Development Team: QA efforts not estimated; QA resources/testers considered a separate hanging entity in the scrum team, responsible only for QA
● No Technical Debt assessment and Reduction Plan
Quality contaminations in Scrum Teams
3
Plan doesn’t mean documentation
● Do Occurrence/Impact analysis for the respective projects using the matrix, and plan to address the issues/challenges progressively, in order. Don’t try to fix everything together.
● Keep continuously refining and improving.
● Inspect and adapt the alternatives/solutions proposed in the coming slides for each of the quality contaminations/challenges, based on project-specific applicability and suitability.
The Remedies and recommendations
4
[Matrix: Occurrence (low → high) on one axis, Impact (low → high) on the other]
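The Occurrence/Impact ordering above can be sketched in code. A minimal sketch, assuming a simple 1-3 scale per axis; the issue names come from the slides, but the scores are illustrative placeholders, not figures from the deck:

```python
# Sketch: prioritize quality contaminations by Occurrence x Impact.
# Scores (1 = low, 3 = high) are illustrative, not from the deck.
issues = [
    ("No QA strategy/plan", 3, 3),
    ("No gates/checkpoints", 2, 3),
    ("No metrics tracked", 3, 2),
    ("No defect guidelines", 2, 2),
    ("No technical debt plan", 1, 3),
]

def prioritize(issues):
    """Return issues sorted by occurrence * impact, highest score first."""
    return sorted(issues, key=lambda i: i[1] * i[2], reverse=True)

for name, occ, imp in prioritize(issues):
    print(f"{occ * imp:2d}  {name}")
```

Addressing the list from the top down, rather than everything at once, matches the slide's "don't try to fix everything together" advice.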
● Define a QA strategy/plan and checklist to address the categories below, as per the deployment strategy and project constraints.
● Define milestones (like Feature Complete, Code Freeze, UAT …) and add placeholder user stories into the backlog/sprints if it is not possible to meet DONE for every user story/sprint (the ideal case with CI/CD).
● Review the checklist against status (Meeting, On-track, Off-track, Not started, Exception) at a regular cadence (ideally every sprint), highlight any Risk to Ship to the stakeholders, and take corresponding resolutions based on risk assessment.
No QA strategy/Plan, checklist and milestones defined for Project
5
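The checklist review above can be sketched as a small status tracker. A minimal sketch, assuming the statuses named on the slide and a simple rule (hypothetical, not from the deck) that "Off-track" or "Exception" items constitute a Risk to Ship:

```python
# Sketch: track QA checklist items against the review statuses from the slide
# and surface a "Risk to Ship" list. The risk rule is an assumption.
MILESTONE_STATUSES = {"Meeting", "On-track", "Off-track", "Not started", "Exception"}

def risk_to_ship(checklist):
    """checklist: dict of item -> status. Return items that put the release at risk."""
    for status in checklist.values():
        if status not in MILESTONE_STATUSES:
            raise ValueError(f"Unknown status: {status}")
    return [item for item, s in checklist.items() if s in {"Off-track", "Exception"}]

sprint_review = {
    "Code Review": "Meeting",
    "Regression testing": "Off-track",
    "Load Testing": "Not started",
    "UAT": "Exception",
}
print(risk_to_ship(sprint_review))  # -> ['Regression testing', 'UAT']
```

Running this at every sprint review gives the "raise the flag to stakeholders" step a concrete, repeatable form.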
Code Quality: Code Review; Unit/Component testing; Static and complexity Analysis
Functional Testing: Feature testing; Regression testing; Gorilla Testing
System and Solution Testing: Load Testing; Longevity Testing; Solution/Integration Testing; Performance Testing; Security
Release Requirement: Documentation; Automation; EFT/Alpha/Beta; Training
● Define a strict Definition of Done (DoD) checklist or workflows for User Story/Sprint/Release and review it before acceptance (sample Sprint DoD below)
● Define 360° quality review feedback forums and review projects at a regular cadence
● Raise a flag based on the project QA checklist defined in the previous slide
No Gates/Checkpoints defined
6
● User story is complete
● Automated unit tests, code coverage & code review complete
● Product Owner demo accepted
● User story has no open defects
● Static analysis is at 100% pass rate on code developed in the sprint
● External documentation complete, technically reviewed, tested, and checked into a source code control system
● Project Architect reviews design per feature
● UX Designer reviews implementation per UX design mockups
● TOI development complete per feature/epic story as applicable
● Test cases documented and test case results tracked
● Load, performance and scalability testing completed
● Feature test 100% automated & 100% pass rate (positive and negative)
● Feature running on acceptance test bed (deployment model)
● Test case(s) reviewed by scrum team and approved by QA
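A DoD like the one above only works as a gate if it is checked mechanically. A minimal sketch, with a shortened item list and helper names that are hypothetical, not from the deck:

```python
# Sketch: a Sprint Definition of Done gate. A story is DONE only if every
# checklist item passes; otherwise report what is still missing.
# Item keys below are an abbreviated, illustrative subset of the sample DoD.
SPRINT_DOD = [
    "story_complete",
    "unit_tests_and_review_complete",
    "po_demo_accepted",
    "no_open_defects",
    "static_analysis_pass",
    "docs_reviewed_and_checked_in",
]

def dod_gate(results):
    """results: dict of item -> bool. Return (done, missing_items)."""
    missing = [item for item in SPRINT_DOD if not results.get(item, False)]
    return (len(missing) == 0, missing)

done, missing = dod_gate({item: True for item in SPRINT_DOD})
print(done)     # True
done, missing = dod_gate({"story_complete": True})
print(missing)  # every unmet item, e.g. 'po_demo_accepted'
```

Treating unmentioned items as not done (the `results.get(item, False)` default) keeps the gate strict: nothing passes by omission.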
● Mix in Release Done along the way so you don't build up a huge debt to complete in the final Release Done
Can't Fit it all in Story or Sprint Done?
7
[Timeline: Sprints 1-3 Story Done; Sprint 4 Release Done; Sprints 5-7 Story Done; Sprint 8 Release Done; Sprints 9-10 Story Done; Sprint 11 Release Done with Release to Market. For example: Performance & Solution testing might go in the Release Done sprints.]
● Measure and track various QA metrics (a few key ones listed below)
● Review the metrics in QA review forums; raise a flag if a metric goes beyond project-defined thresholds or degrades, and take corrective actions based on RCA. For example, an increase in Defect Density per module/function point might require module refactoring.
No Metrics defined, measured and tracked
8
Metrics
Defect count grouped by severity/priority
Defect Incoming vs Outgoing trend
Defect Resolution Time
Defect Density
Code complexity
Static Analysis violations
Code coverage
Measure Metrics that Actually Work
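The defect-density example on this slide can be sketched directly. A minimal sketch, assuming density is measured per KLOC and a project-defined threshold; the module names, counts and threshold are illustrative:

```python
# Sketch: compute defect density per module and flag modules over a
# project-defined threshold as refactoring candidates (per the slide's
# example). All data and the threshold are illustrative.
def defect_density(defects, kloc):
    """Defects per thousand lines of code."""
    return defects / kloc

def flag_modules(modules, threshold=5.0):
    """modules: dict of name -> (defect_count, kloc). Return names over threshold."""
    return [name for name, (d, k) in modules.items()
            if defect_density(d, k) > threshold]

modules = {"auth": (12, 1.5), "billing": (3, 4.0), "search": (9, 2.0)}
print(flag_modules(modules))  # -> ['auth']
```

The same pattern extends to the other metrics listed above: each one needs a definition, a threshold, and an action tied to crossing it.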
● Define clear defect guidelines for severity and priority
● Ensure the team raises bugs/defects following the guidelines, by timely auditing of projects or bug scrubbing at regular intervals
● Do regular bug scrubbing in the respective projects based on severity/priority and the product roadmap, and track defect metrics. For example: linking duplicate defects, or de-prioritizing or closing defects related to a feature that was de-prioritized or moved out of the backlog scope.
No guidelines for Defect severity/priority clearly defined
9
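Severity/priority guidelines are easiest to follow when they are written down as a lookup. A minimal sketch; the S1-S4/P1-P4 convention and the customer-impact bump rule are a common illustrative scheme, not a standard from the deck:

```python
# Sketch: encode defect severity/priority guidelines so triage is consistent.
# The S/P scales and the bump rule below are illustrative conventions.
SEVERITY_GUIDE = {
    "S1": "Crash/data loss, no workaround",
    "S2": "Major function broken, workaround exists",
    "S3": "Minor function issue",
    "S4": "Cosmetic/documentation",
}

def default_priority(severity, customer_facing):
    """Suggest a priority from severity and customer impact (team may override)."""
    base = {"S1": "P1", "S2": "P2", "S3": "P3", "S4": "P4"}[severity]
    if customer_facing and base in ("P2", "P3"):
        return "P" + str(int(base[1]) - 1)  # bump one level for customer impact
    return base

print(default_priority("S3", customer_facing=True))  # -> 'P2'
```

A shared default like this makes audits and bug scrubbing faster, because deviations from the guideline stand out.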
● Identify and update the component or Epic while raising a defect
● Do defect RCA at a regular cadence (maybe for high-defect-density components) to identify focus areas or collaterals
● Based on the RCA, define an action plan and Quality Improvement Plan, like some refactoring or re-designing etc., and add the respective stories into the backlog. Based on the Return on Investment (ROI), they can be prioritized accordingly.
No Defect RCA and Quality Improvement Plan (QIP) prepared
10
• Five whys of root cause analysis
• Prioritize bugs over stories
• Log bugs found by testers
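Picking RCA focus areas from tagged defects can be sketched as a Pareto-style count per component. A minimal sketch, assuming each defect carries the component tag the slide asks for; the data is illustrative:

```python
# Sketch: choose RCA focus areas by defect count per component, as the slide
# suggests for high-defect-density components. Data is illustrative.
from collections import Counter

def rca_focus(defect_components, top_n=2):
    """defect_components: one component name per defect.
    Return the top_n components by defect count."""
    return [name for name, _ in Counter(defect_components).most_common(top_n)]

defects = ["checkout", "checkout", "search", "checkout", "profile", "search"]
print(rca_focus(defects))  # -> ['checkout', 'search']
```

This only works if components/Epics are filled in consistently when defects are raised, which is why that is the first bullet above.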
● Don’t follow out-of-cycle testing or a mini waterfall within the sprint, with QA-related work accumulating towards the end of the sprint
● When estimates are not done or testing is out-of-cycle, there is always a backlog of QA work coming from the previous sprint, or DONE is not met every sprint, which initiates a chain reaction of spills every sprint and low team velocity
● During the initial phase of the sprint the QA resource is busy clearing the backlog of the previous sprint, and towards the end new items have again accumulated from the current sprint, leaving no time for defining acceptance tests or test planning: an inefficient, sub-optimal utilization of QA resources
● Meet DONE every sprint by completing user stories end-to-end, instead of doing multiple half-done issues
● Test automation should be the focus for in-cycle QA
Not able to fit QA in same sprint as Development
11
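In-cycle automation means each story ships with an executable acceptance test in the same sprint, so QA does not pile up at the sprint boundary. A minimal sketch; `apply_discount` is a hypothetical story under test, and the checks are the kind of thing CI would run on every commit:

```python
# Sketch: a story plus its automated acceptance tests, delivered together
# in the same sprint. The story (apply_discount) is hypothetical.
def apply_discount(price, percent):
    """Story: apply a percentage discount; reject out-of-range percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be 0-100")
    return round(price * (1 - percent / 100), 2)

def acceptance_tests():
    assert apply_discount(100.0, 10) == 90.0   # positive case
    assert apply_discount(100.0, 0) == 100.0   # boundary case
    try:
        apply_discount(100.0, 150)             # negative case
        assert False, "expected ValueError"
    except ValueError:
        pass
    return "PASS"

print(acceptance_tests())  # -> PASS
```

Because the tests are code, "DONE" for the story can include "acceptance tests green", keeping QA inside the sprint rather than trailing it.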
● Estimate and groom stories efficiently so that acceptance tests and test planning can be done upfront, or techniques like ATDD/TDD can be applied
● Define and identify the various use cases/scenarios upfront, which can also help in clarifying the scope of a user story or defining clearer acceptance criteria
● Target maximum test coverage by defining both positive and negative use cases/test cases and tracking the coverage metrics
Don’t think before Test: no upfront test planning or defining/understanding of test cases/scenarios
12
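Defining cases upfront can be as simple as a table of inputs and expected results written during grooming, before the implementation exists. A minimal sketch; `validate_username` and its acceptance criteria are hypothetical:

```python
# Sketch: positive and negative cases defined upfront (during grooming) as a
# table, then run against the implementation with a scenario-coverage report.
# The story (validate_username) and its criteria are hypothetical.
def validate_username(name):
    """Story: alphanumeric usernames, 3-12 characters."""
    return name.isalnum() and 3 <= len(name) <= 12

# Written from the acceptance criteria, before the code above:
CASES = [
    ("alice", True),     # positive: simple valid name
    ("bob_99", False),   # negative: underscore not allowed
    ("ab", False),       # negative: too short
    ("a" * 13, False),   # negative: too long
]

def run_cases(cases):
    passed = sum(1 for inp, expected in cases if validate_username(inp) == expected)
    return f"{passed}/{len(cases)} scenarios pass"

print(run_cases(CASES))  # -> 4/4 scenarios pass
```

Writing the table first often exposes vague acceptance criteria (what about length 3 exactly? unicode letters?), which is the scope-clarifying effect the slide describes.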
● Estimate QA effort while estimating user stories and tasks
● Involve QA team members during user story grooming and estimation, for better understanding of the user stories and refining of acceptance criteria
● QA is not a designated person’s responsibility; it is the collective team’s responsibility (necessary for self-organizing)
● The team should review the test cases/use cases defined by the QA resource, for better understanding of requirements, identifying gaps, refining the scope of user stories, or adding additional tests
QA not part of Development Team- QA efforts not estimated, QA resource considered separate hanging entity in scrum team only responsible for QA
13
● Assess the debt and develop a plan to reduce it over weeks, months, releases, etc. based on Return on Investment (ROI), and get stories added to the backlog
● Build a practice that manages and reviews all debt and gets it to zero over “n” weeks, months or releases
● Funding technical debt reduction:
   - Use interns’ or new joiners’ bandwidth
   - Reserve capacity each sprint or release (maybe 10-20%) in agreement with the Product Owner
   - Show the ROI to the Product Owner, like the ability to deliver new features faster (refactoring example), or present it as an opportunity asking for additional capacity
● Review the technical debt plan regularly (maybe at the quality review)
No Technical Debt assessment and Reduction Plan
14
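The ROI-driven debt plan above can be sketched as a greedy fit of the highest-ROI stories into the reserved sprint capacity. A minimal sketch; story names, point costs and benefit scores are illustrative, and the greedy strategy is one simple option, not the deck's prescription:

```python
# Sketch: order technical-debt stories by ROI (benefit / cost) and fit them
# into the capacity reserved per sprint (the slide's 10-20%). Figures are
# illustrative; greedy highest-ROI-first is one simple planning strategy.
def plan_debt_reduction(stories, capacity_points):
    """stories: list of (name, cost_points, benefit). Fit highest ROI first."""
    ranked = sorted(stories, key=lambda s: s[2] / s[1], reverse=True)
    plan, used = [], 0
    for name, cost, benefit in ranked:
        if used + cost <= capacity_points:
            plan.append(name)
            used += cost
    return plan

debt = [("refactor billing", 5, 8.0), ("speed up build", 3, 9.0),
        ("raise unit coverage", 8, 6.0)]
print(plan_debt_reduction(debt, capacity_points=8))
# -> ['speed up build', 'refactor billing']
```

Making the ROI numbers explicit is also what the slide recommends for the Product Owner conversation: the plan shows exactly what the reserved capacity buys.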
Debt Examples:
● Defect backlog
● Automation: manual tests
● Code complexity management
● Static analysis violations
● Refactoring not part of practice
● Limited/inaccurate test use cases
● Low unit coverage with high complexity & defect density
● Lack of or out-of-date documentation
● Poor architecture, usability, reliability
● Build issues: slow or breaks
No Technical Debt assessment and Reduction Plan (contd.)
15
Reducing build time. Increasing number of builds per day. Added continuous integration builds
[Chart: Number of Builds/Day and # of CI Builds/Day (0-12 builds, left axis) with Build Duration in hours (0.00-9.00, right axis), across releases 7.5, 8.0, 8.5, 9.0, 10.0 and 10.5]
“Testing shows the presence, not the absence of bugs.” – Edsger W. Dijkstra
“Quality is not an act, it is a habit” – Aristotle
16
THANK YOU
Sumeet Gupta
Agile Coach | CSP, CSM | SAFe Agilist | Certified Disciplined Agilist - Yellow Belt
17
@sumeetgupta1982 sumgupta