Continuous Performance Testing: The New Standard
DESCRIPTION
In the past several years the software development lifecycle has changed significantly with high-speed software releases, shared application services, and platform virtualization. The traditional performance assurance approach of pre-release testing does not address these innovations. To maintain confidence in acceptable performance in production, pre-release testing must be augmented with in-production performance monitoring. Obbie Pet describes three types of monitors—performance, resource, and VM platform—and three critical metrics fundamental to isolating performance problems—response time, transaction rate, and error rate. Obbie reviews techniques to acquire and interpret these metrics, and describes how to develop a continuous performance monitoring process. In conjunction with pre-release testing, this monitoring can be woven into a single integrated process, offering a best bet in assuring performance in today’s development world. Take away this integrated process for consideration in your own shop.

TRANSCRIPT
Session W10 Concurrent
4/9/2014 2:00 PM
“Continuous Performance Testing: The New Standard”
Presented by:
Obbie Pet
Ticketmaster
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888‐268‐8770 ∙ 904‐278‐0524 ∙ [email protected] ∙ www.sqe.com
Obbie Pet Ticketmaster
Obbie Pet has twenty years of experience in QA as a tester, test lead, and QA manager. For the past thirteen years, Obbie has focused on performance testing in the area of Internet ticketing (Tickets.com, LiveNation, Ticketmaster) and in the insurance field (Wellpoint). He is certified on the LoadRunner testing tool by both Mercury and HP. Five years ago, Obbie realized that achieving performance assurance in production required expanding beyond just testing to include performance monitoring. His focus now is sharing these ideas and implementing monitoring solutions in support of performance assurance. Read more about Obbie and his thoughts on performance assurance at QAStrategy.com.
Obbie Pet, Sr. Performance Engineer
Ticketmaster.com
Internet ticketing space 10+ years
Ticketmaster Blog: http://tech.ticketmaster.com/
Contact: [email protected] [email protected]
Ticketmaster
• 450M tickets processed (annual)
• 180K events (annual)
• 12K clients
• 19 countries
• >8M mobile tickets in 2012
Agenda
The problem
– Old paradigm SDLC
– New paradigm SDLC
– Old paradigm’s test tool isn’t cutting it.
The solution
– Need a second tool: monitoring
– Monitoring use case
– Monitoring dashboard examples
Takeaways
– A Performance Assurance process for your shop
Old SDLC Paradigm
• Quarterly releases, low-velocity change
• Monolithic websites and applications
• Dedicated hardware
Old SDLC performance assurance
• Pre-Release testing
• Production candidate staged
• High demand is simulated
• Problems are detected and fixed
• Executed before every release
• Works great for the old SDLC
• Newton’s 3rd law
New SDLC Paradigm
Characteristics impacting performance
• High-velocity software release
• Shared services
• Cloud infrastructure (variable platform capacity)
Contrast old vs new SDLC paradigm
Application characteristic                     Old Paradigm                  New Paradigm
Software release / component change frequency  Every quarter                 Continuous
Shared services                                Minimal, predictable demand   Extensive, unpredictable demand
Platform capacity                              Fixed CPU, dedicated h/w      Variable CPU, cloud implementation
Shared services – explained
• Imagine 5, 10, or 50 applications concurrently running.
• It becomes hard to predict the demand on shared services.
Platform capacity contrast
Application characteristic                     Old Paradigm                  New Paradigm
Software release / component change frequency  Every quarter                 Continuous
Shared services                                Minimal, predictable demand   Extensive, unpredictable demand
Platform capacity                              Fixed CPU, dedicated h/w      Variable CPU, cloud implementation
Does pre-release testing mitigate new paradigm risk?
Application characteristic                     New Paradigm                      Addressed by Pre-Release testing?
Software release / component change frequency  Continuous                        No. Code you’re dependent upon is changing between your releases.
Shared services                                Extensive, unpredictable demand   No. Doesn’t cover the impact of other apps on shared resources.
Platform capacity                              Variable, cloud implementation    No. Doesn’t expose under-allocation of production CPU.
How do we mitigate the new risk?
AUGMENT PRE-RELEASE TESTING WITH MONITORING.
• Monitoring, most importantly performance monitoring, can mitigate performance risk associated with the new paradigm.
How does monitoring mitigate these new risks?
Application characteristic                     New Paradigm                      Risk mitigation via monitoring
Software release / component change frequency  Continuous                        Provide immediate feedback when a s/w change has impacted a component.
Shared services                                Extensive, unpredictable demand   Demand patterns on all services are observed and can be managed.
Platform capacity                              Variable, cloud implementation    Metrics used to prevent application starvation of CPU/memory/disk/network.
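The first mitigation above — immediate feedback when a software change impacts a component — boils down to comparing post-release response times against a pre-release baseline. A minimal sketch of that idea; the `regression_alert` helper and the 25% tolerance are assumptions for illustration, not the speaker's actual tooling:

```python
from statistics import mean

def regression_alert(baseline_ms, current_ms, tolerance=1.25):
    """Flag a component whose mean response time has degraded past
    `tolerance` times its pre-release baseline (1.25 = 25% slower)."""
    return mean(current_ms) > mean(baseline_ms) * tolerance

# Response-time samples (ms) before and after a hypothetical release:
before = [110, 120, 115, 105]
after = [160, 170, 150, 165]
alert = regression_alert(before, after)  # True here: ~43% degradation
```

A real monitor would evaluate this continuously over a sliding window rather than on two fixed samples.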
What are the limitations of monitoring?
• Monitoring doesn’t remove performance risks the way Pre-Release testing does.
• It is reactive rather than proactive.
• It’s the best approach I can think of.
How does Production monitoring reduce risk?
• Prevent performance issues from entering production (when monitoring tools are used with Pre-Release testing)
• Early detection/remediation of performance issues, before the customer notices
• Fast resolution of performance issues reported by customers
• Visibility for Operations
Big picture review
Production monitoring use case
• Customers are complaining the web site is too slow.
• Where is the problem? How does IT respond?
Assemble a team of experts from each tier (bad) VS
Monitored metrics immediately identify the broken tier (good).
Production monitoring use case
• Figuring out where the problem is located is 80% of the effort.
• Performance monitoring does this work for us. That’s why it’s valuable!
Production monitoring use case
• Diagnostic strategy:
• First isolate the problem tier: left to right ↔
• Then isolate the problem layer within the tier: up and down ↕
What metrics do we measure?
Performance (business) metrics for tier isolation (most important, hardest to capture)
• Transaction response times
• Transaction rates
• Error rates
Resource metrics for layer isolation
• CPU
• Memory
• etc.
VM metrics
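As a rough sketch of how the three key metrics support tier isolation, they can all be derived from per-request records. The record layout and the `tier_metrics` helper below are assumptions for illustration, not the speaker's implementation:

```python
from statistics import mean

def tier_metrics(requests, window_secs):
    """Compute the three key metrics per tier.
    Each request record is (tier, duration_ms, is_error)."""
    by_tier = {}
    for tier, dur, err in requests:
        by_tier.setdefault(tier, []).append((dur, err))
    out = {}
    for tier, rows in by_tier.items():
        durs = [d for d, _ in rows]
        errs = [e for _, e in rows]
        out[tier] = {
            "response_ms": mean(durs),                    # transaction response time
            "rate_tps": len(rows) / window_secs,          # transaction rate
            "error_pct": 100.0 * sum(errs) / len(errs),   # error rate
        }
    return out

# Hypothetical one-minute sample across two tiers:
sample = [("web", 120, False), ("web", 140, False),
          ("db", 900, True), ("db", 880, False)]
metrics = tier_metrics(sample, window_secs=60)
# The tier with the outlying response time ("db" here) is the first suspect.
```

In practice these numbers feed the dashboards shown in the following slides; the point is that tier isolation falls out of comparing the same three metrics side by side.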
Production monitoring use case
What does monitoring look like?
● Target of test – an email notification system, multiple services orchestrated into a product.
● Performance dashboard examples from a test run.
● Performance dashboard examples from production.
SUT - an email notification system
Dashboard examples from a test run
• Healthy service
• Unhealthy service
• Unhealthy service tier isolation
Composite Transaction B
Healthy dashboard example from a test
Unhealthy dashboard example from a test
Tier isolation of the performance problem
Composite Transaction B
Dashboard examples from production
• Dashboard 1: How is the PDF rendering service performing?
• Dashboard 2: Is Production email being delivered within SLA’s?
How is the PDF rendering service performing?
Is Production email being delivered within SLA’s?
What does monitoring look like?
• We looked at performance dashboards for
• a healthy test
• an unhealthy test
• unhealthy tier isolation
• production
Performance metrics are hard to come by
The presentation so far assumes you’ve got them.
If you aren’t collecting them already… it’s a big task to get them.
• Buy vs build
Takeaways
► Best practice for making this real in your shop
Performance MONITORING in your company
• Stick – require dashboards for production release
• Carrot – provide an easily adapted dashboard template
• All dashboards based on the same three key metrics
• Build enterprise infrastructure to support templates
• Project owners can use them as-is or customize
• Little burden means minimal resistance
A generic performance dashboard.
How do you implement Performance ASSURANCE
• Prior to production release, the following performance requirements would need to be satisfied.
• Pre-release Testing
• Peak period scenario
• Ramp up to first bottleneck test
• Performance Monitoring dashboards
• Let’s talk about Pre-release testing, the other arrow in our quiver.
Requirements - Pre-Release Testing
• Peak period scenario
• Stakeholder provides the SLA for peak production demand
• Performance test simulates peak production demand while human users exercise the system to confirm acceptable user experience
• PASS is given when the SLA is met OR stakeholders accept the results.
• Ramp up to first bottleneck test
• Load is ramped up until the first significant bottleneck is found.
• Useful for Ops to anticipate performance issues in production.
• Provides information for the business to create formal SLAs.
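The ramp-up idea can be sketched as a loop that raises concurrency until the SLA first breaks. This is a hypothetical illustration, not a LoadRunner script: it assumes a `fire_transaction(users)` callable that runs one transaction under the current load level and returns its duration in milliseconds, and the step sizes and SLA are made-up values:

```python
from concurrent.futures import ThreadPoolExecutor

def ramp_to_bottleneck(fire_transaction, start_users=5, step=5,
                       max_users=200, sla_ms=500):
    """Raise concurrent virtual users step by step; return the first
    load level whose mean response time breaches the SLA."""
    users = start_users
    while users <= max_users:
        with ThreadPoolExecutor(max_workers=users) as pool:
            durations = list(pool.map(lambda _: fire_transaction(users),
                                      range(users)))
        if sum(durations) / len(durations) > sla_ms:
            return users          # first significant bottleneck
        users += step
    return None                   # no bottleneck within the tested range
```

The returned load level is exactly the number Ops wants: how much headroom exists before production demand hits the first bottleneck.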
Requirements - Pre-Release Testing
• Performance Monitoring dashboards
• Performance monitoring is operational
• Resource monitoring is operational
• VM monitoring is operational
Requirements - Performance Monitoring Dashboards
• Dashboards available in both Pre-PROD and PROD environments
• Pre-PROD coverage provides feedback in Pre-Release testing.
• Pre-PROD coverage provides a testing ground for prod coverage.
• Getting the right coverage will take practice.
Requirements - Performance Monitoring Dashboards
• Prior to production release, the following performance requirements would need to be satisfied.
• Pre-release Testing
• Peak period scenario
• Ramp up to first bottleneck test
• Performance Monitoring dashboards
How do you implement Performance ASSURANCE
• We talked about two techniques of mitigating risk:
• Pre-Release testing
• Monitoring
• In practice, how does this theory map to performance problems?
• My experience says →
Mitigating performance risk
Questions
• Whew!! A lot of information
Obbie Pet, Sr. Performance Engineer
Ticketmaster.com
Internet ticketing space 10+ years
Ticketmaster Blog: http://tech.ticketmaster.com/
Contact: [email protected] [email protected]
APPENDIX
Example of a Resource monitor (Open TSDB)
Performance is important
● Latency matters. Amazon found every 100 ms of latency cost them 1% in sales. Google found an extra 0.5 seconds in search page generation time dropped traffic by 20%.
--http://blog.gigaspaces.com/amazon-found-every-100ms-of-latency-cost-them-1-in-sales/
Platform capacity – explained
The VM/Cloud problem.
Example of a performance log file used by Kibana to generate performance metrics:
datetime="2013-12-12 11:59:59,538" severity="INFO " host="app6.template.jetcap1.coresys.tmcs" service_version="" client_host="10.72.4.75" client_version="" Correlation-ID="ab72d037-6362-11e3-80a4-f5667a7a5c6b" rid="" sid="" thread="Camel (337-camel-88) thread #42 - ServiceResolver" category="com.ticketmaster.platform.bam.strategies.PerformanceBAMStrategy" datetime="2013-12-12T11:59:59.538-08:00" bam="perf" dur="724" activity="template-call" camelhttppath="/template-notify-composite/rest/template-notify"
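Because the log encodes everything as key="value" pairs, pulling out the fields a performance dashboard needs (`dur`, `activity`) is a small parsing job. A sketch, with the log line abbreviated to a few of its fields and the `parse_perf_fields` helper name invented for illustration:

```python
import re

# Abbreviated version of the performance log line shown above:
LOG_LINE = ('datetime="2013-12-12 11:59:59,538" severity="INFO " '
            'bam="perf" dur="724" activity="template-call"')

def parse_perf_fields(line):
    """Split a key="value" log line into a dict and keep the two
    fields the dashboards chart: activity name and duration (ms)."""
    fields = dict(re.findall(r'(\S+?)="(.*?)"', line))
    return {"activity": fields.get("activity"),
            "dur_ms": int(fields["dur"])}

parsed = parse_perf_fields(LOG_LINE)
```

A log analyzer such as Kibana does this extraction at scale; the point is that the raw material for response-time dashboards is already sitting in the application logs.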
Performance monitoring - Technologies
• Selecting an appropriate monitoring technology is highly dependent on your specific environment. Below I share the classes of monitoring technologies to consider for your solution.
BUILD IT: custom code needed to collect metrics; open source leveraged for metric storage, analysis, and reporting.
SysLog harvesting: custom code is used to push performance data to SysLogs, which are then digested by log analyzers (Kibana, Splunk).
Tcollector agents: performance information is pushed to a time series database (OpenTSDB).
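Tcollector collectors are ordinary scripts that print data points to stdout in the `metric timestamp value tag=value` line format, which the tcollector daemon forwards to OpenTSDB. A minimal sketch (the metric and tag names are made up for the example):

```python
import sys
import time

def emit(metric, value, **tags):
    """Print one data point in the tcollector/OpenTSDB line format:
    <metric> <unix-timestamp> <value> <tag=value ...>"""
    tag_str = " ".join(f"{k}={v}" for k, v in sorted(tags.items()))
    print(f"{metric} {int(time.time())} {value} {tag_str}")
    sys.stdout.flush()

# A real collector would loop, sampling whatever resource it watches:
emit("notify.email.dur_ms", 724, host="app6", activity="template-call")
```

Keeping the collector this simple is the appeal of the build-it route: any script that can measure something can feed the time series database.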
BUY IT: End-2-End vendor monitoring solutions
• Network sniffers: network monitors or sniffers (OpNet)
• Stitching: agent deployment required; pieces together transaction parts from header info
• Transaction marking: agent deployment required; inserts and then tracks headers
• JVM monitors: agent deployment usually required (Dynatrace, AppDynamics)
E2E Monitoring tool vendors:
• BlueStripe http://bluestripe.com/
• OpTier http://www.optier.com/
• AppDynamics http://www.appdynamics.com/
• Dynatrace http://www.compuware.com/application-performance-management/dynatrace-enterprise.html
• The Gartner Group does a nice evaluation of this tool space.
E2E Transaction monitor example: