5 key metrics to release better software faster


Upload: dynatrace

Post on 08-Aug-2015


TRANSCRIPT

1 COMPANY CONFIDENTIAL – DO NOT DISTRIBUTE #APMLive

5 Key Metrics to Release Better Software Faster

Senior Solutions Architect

Brett Hofer

@brett_solarch

Performance Advocate

Andi Grabner

@grabnerandi


• Velocity insights worth sharing today

• Why do we care about metrics anyway?

• 3 interesting examples for learning

• Real life – let’s walk through a metric together

• Summary

• Q&A

Welcome! Today’s agenda

Waterfall → Agile: from 3 years per release to 1 deployment per month

“EVERYONE can do Continuous Delivery”

“EVERY MANUAL TESTER DOES AUTOMATION!”

So, why did they do it?

Uber: 1 million rides per day. Didi: 5 million shared taxis per day.

Sources: KPCB, Uber. Date: May 2014 (Didi), Feb 2015 (Uber)

Rapidly changing markets, requirements & user expectations

Goal

Utmost goal: minimize cycle time, the feature cycle time from you to your users (the customer / market). This is where you create value!

ROI?

High performers are more agile: 30x more frequent deployments and 8,000x faster lead times than their peers.

Source: Puppet Labs 2013 State Of DevOps: http://puppetlabs.com/2013-state-of-devops-infographic

High performers are more reliable: 2x the change success rate and 12x faster mean time to recover (MTTR).

Source: Puppet Labs 2013 State Of DevOps: http://puppetlabs.com/2013-state-of-devops-infographic

Challenges

Deploy Faster!!

Fail Faster!?

Right Focus?!

Unless you work for Google or Microsoft, learn from others.

3 use cases: WHY did it happen? HOW to avoid it! METRICS to guide you. TIPS along the way.

Tip 1: don't push without a plan

Mobile landing page of Super Bowl ad

434 resources in total on that page: 230 JPEGs, 75 PNGs, 50 GIFs, …

Total size of ~ 20MB

Fifa.com during the World Cup

Source: http://apmblog.compuware.com/2014/05/21/is-the-fifa-world-cup-website-ready-for-the-tournament/

Leverage Key Metrics:
1. # Resources
2. Size of Resources
3. Page Size
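These three metrics can be checked automatically in a build. A minimal sketch (not from the deck, and in Python rather than the deck's Java toolchain): parse the page, count its images, scripts and stylesheets, and sum their sizes. Here the sizes come from a hand-written mapping; in a real pipeline they would come from a HAR file or a crawler.

```python
from html.parser import HTMLParser

class ResourceCollector(HTMLParser):
    """Collects the URLs of images, scripts and stylesheets on a page."""
    def __init__(self):
        super().__init__()
        self.resources = []
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.resources.append(attrs["href"])

def page_metrics(html, sizes_bytes):
    """Return (# resources, total page size in bytes) for a page."""
    parser = ResourceCollector()
    parser.feed(html)
    total = sum(sizes_bytes.get(r, 0) for r in parser.resources)
    return len(parser.resources), total

html = '<img src="a.jpg"><img src="b.png"><script src="app.js"></script>'
count, total = page_metrics(html, {"a.jpg": 150_000, "b.png": 80_000, "app.js": 40_000})
assert count == 3 and total == 270_000  # fail the build past a budget
```

A gate like "fail if count > 100 or total > 2 MB" would have flagged the 434-resource, ~20 MB landing page long before the Super Bowl ad aired.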

Tip 2: don't ASSUME you know the environment

Distance calculation issues

480 km of biking in 1 hour!

Solution: a unit test in the live app reports geo calc problems.

Finding: it only happens on certain Android versions.
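The kind of in-app sanity check described above can be sketched as follows (Python stand-in for the Android app's code; the function names and the 60 km/h speed cap are illustrative): a haversine distance calculation plus a plausibility test that rejects a "480 km of biking in one hour" reading before it is reported.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def plausible_bike_ride(distance_km, duration_h, max_speed_kmh=60):
    """Reject obviously broken distance readings before reporting them."""
    return duration_h > 0 and distance_km / duration_h <= max_speed_kmh

# Vienna -> Linz is roughly 155 km as the crow flies.
d = haversine_km(48.2082, 16.3738, 48.3069, 14.2858)
assert 150 < d < 165
assert not plausible_bike_ride(480, 1)  # the bug from the slide
```

Running such a check in the live app (rather than only on the build machine) is what surfaced the Android-version dependency.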

3rd party issues

Impact of bad 3rd party calls

Leverage Key Metrics:
4. # of functional errors
5. 3rd party calls

Tip 3: don't "blindly" (re)use existing components

Requirement: we need a report

Using Hibernate results in 4k+ SQL Statements to display 3 items!

Hibernate Executes 4k+ Statements

Individual executions are VERY FAST, but the total sum takes 6s.

2 Additional Metrics:
6. # SQL executions
7. # of SAME SQLs
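Metric 7 is the more telling one: a flood of identical statements is the classic Hibernate N+1 pattern from the slide. A minimal sketch (Python, not from the deck; the hand-written statement log stands in for what a tracing tool would record):

```python
from collections import Counter

def sql_metrics(statements):
    """Return (total executions, hottest statement, its repeat count)."""
    counts = Counter(statements)
    stmt, repeats = counts.most_common(1)[0]
    return len(statements), stmt, repeats

# 4k+ near-identical lookups to render 3 items: the smell from the slide.
log = ["SELECT * FROM report WHERE id = ?"] * 4000 + ["SELECT count(*) FROM items"]
total, hottest, repeats = sql_metrics(log)
assert total == 4001
assert repeats == 4000  # one statement dominates: an N+1 smell
```

Each statement is fast on its own, which is why only the counts (not individual response times) expose the problem.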

What have we learned today?

1. # Resources

2. Size of Resources

3. Page Size

4. # Functional Errors

5. 3rd Party calls

6. # SQL Executions

7. # of SAME SQLs

Metric-Based Decisions Are Cool

2 Bonus Metrics for Today

8. Time Spent in API

9. # Calls into API

10. # of Domains

11. Total Size

12. # Items per Page

13. # AJAX per Page

14. etc.

And there are other great metrics, too!

We want to get from here …

To here!

How does this stuff actually work?

Looking closely, what does it actually look like in action?

Life of a metric in a real-world continuous delivery example: # of SQL executions, tracked from Dev through Stage to Ops.

Example Technologies: Continuous Delivery

IDE: IntelliJ IDEA

Build Automation: Ant

Testing: Silk Performer, Selenium

Build Server / CI, Config Mgmt, Source Control

- Evaluate local processes
- Code linking
- Architectural evaluations
- Metric evaluation prior to check-ins

- Agent injections to monitor and record tests

- Monitor for metric degradation

- Link builds to test run monitoring

- Report build health based on metric evaluation

In this example the Eclipse IDE is used.

A Maven project structure has been chosen.

A Java class with a few methods.

A Java method begins using some JDBC calls.

Dynatrace will see all of the JDBC executions made in the JVM.

Developer authors the code

In this example the JUnit test case methods are built.

The JUnit testing method now makes whichever calls the test case should cover. In this case it's a 1:1 call to our sample DB call.

Developer authors unit test
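The deck's unit test is JUnit over JDBC, with the Dynatrace agent doing the statement counting. As a rough stand-in, the same idea can be sketched without an agent (Python with sqlite3; the wrapper class is illustrative): wrap the cursor, count every execute, and assert on the count inside the test.

```python
import sqlite3

class CountingCursor:
    """Thin wrapper that counts every execute(), like an agent would."""
    def __init__(self, cursor):
        self._cursor = cursor
        self.executions = 0
    def execute(self, sql, params=()):
        self.executions += 1
        return self._cursor.execute(sql, params)

conn = sqlite3.connect(":memory:")
cur = CountingCursor(conn.cursor())
cur.execute("CREATE TABLE items (id INTEGER, name TEXT)")
cur.execute("INSERT INTO items VALUES (?, ?)", (1, "report"))
cur.execute("SELECT name FROM items WHERE id = ?", (1,))
assert cur.executions == 3  # the metric a build gate would assert on
```

The point is that the test pins down the expected SQL count for this code path, so any later jump (say, from a reused Hibernate component) fails loudly.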

The Maven project files are key to structuring the build and executions throughout the delivery chain.

Ultimately this file is checked in and then picked up by the build server, linking metric results to each build.

Developer integrates Maven & Dynatrace

The Maven build/test is executed on the developer's workstation and local project, injecting the Dynatrace agent and supplying metadata for tracking.

The Maven POM file has the Dynatrace plug-in integrated.

Developer runs a Maven test

Developers might evaluate their own runs.

Developer uses Dynatrace to analyze a local build:

• Evaluate the actual PurePaths of the test to visualize the SQL executions per test method.

• Evaluate all of the DB statistics around the test or the entire local execution.

• Architecturally validate the structure of the calls that have the suspected metric prior to check-in.

The CI build server looks for updates, pulls them down, and executes the Maven goals or any other necessary steps.

Developer finalizes the analysis and checks in the change

• The CI Build Server is configured to poll the SCM at a specific interval.

• Maven or any integration testing steps with Dynatrace integrations are executed. Dynatrace auto alerts may be fired.

• Dynatrace reporting and results are configured into Jenkins.

• Results are displayed and deploy decisions can be made automatically or manually.

The CI build server is configured to poll for changes

Test & monitoring framework results and architectural data:

Build #   Test Case            Status   # SQL   # Excep   CPU
12        testDBFunctions      OK       12      0         120ms
12        testWebServiceCalls  OK       3       1         68ms
13        testDBFunctions      FAILED   12      5         60ms
13        testWebServiceCalls  OK       3       1         68ms
14        testDBFunctions      OK       75      0         230ms
14        testWebServiceCalls  OK       3       1         68ms
15        testDBFunctions      OK       12      0         120ms
15        testWebServiceCalls  OK       3       1         68ms

We identified a regression: the exceptions were probably the reason for the failed tests. The problem was fixed, but then we had an architectural regression. Once that was solved, we had both functional and architectural confidence.

Let’s look behind the scenes

Deployment decisions are made: "go" or "no go".
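The "go" / "no go" decision can itself be automated. A minimal sketch (Python, not from the deck; the gate function, its 1.5x growth threshold, and the build data shapes are illustrative): compare each test's metrics against the previous build and block the deploy on functional errors or an architectural regression such as a jump in SQL count.

```python
def gate(previous, current, max_sql_growth=1.5):
    """Return (ok, reasons); reasons explain any 'no go' decision."""
    reasons = []
    for test, cur in current.items():
        prev = previous.get(test, cur)
        if cur["exceptions"] > 0:
            reasons.append(f"{test}: {cur['exceptions']} exceptions")
        if cur["sql"] > prev["sql"] * max_sql_growth:
            reasons.append(f"{test}: SQL count {prev['sql']} -> {cur['sql']}")
    return not reasons, reasons

# Builds 13 -> 14 from the table: exceptions fixed, but SQL jumped 12 -> 75.
build_13 = {"testDBFunctions": {"sql": 12, "exceptions": 5}}
build_14 = {"testDBFunctions": {"sql": 75, "exceptions": 0}}
ok, why = gate(build_13, build_14)
assert not ok and "testDBFunctions" in why[0]  # architectural regression: no go
```

Build 15 in the table, back at 12 statements and zero exceptions, would pass such a gate, which matches the "now we have confidence" conclusion above.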

Better Software,

Faster!!

Thank you!

Time for Q & A

Andi Grabner

@grabnerandi

http://blog.dynatrace.com

Brett Hofer

@brett_solarch

http://blog.dynatrace.com

Participate in our Forum ::

community.dynatrace.com

Like us on Facebook ::

facebook.com/dynatrace

Follow us on LinkedIn ::

linkedin.com/company/dynatrace

Connect with us!

Follow us on Twitter ::

twitter.com/dynatrace

Watch our Videos & Demos ::

youtube.com/dynatrace

Read our Blog ::

application-performance-blog.com