
Performance Testing and OBIEE


By Quontra Solutions

Slide 1: Introduction
Oracle BI specialist at Morrisons plc

Big IT development programme at its early stages implementing OBIEE, OBIA, ORDM, all on Oracle 11g & HP-UX

At Morrisons we're on the latest (current) version of OBIEE 10g, and use Oracle 11g on HP-UX.

We're using OBIA and ORDM, and we've built our own ODS using Oracle Data Integrator.

The performance work that I've done so far has been with OBIA, but what I'll be talking about today should be applicable to any OBIEE installation.

I'm interested to hear other people's experience with OBIA and performance. Come and speak to me afterwards!

We did performance work for an existing problem with OBIA, and to establish a methodology for future projects.

Slide 2: The aim of this presentation
A Performance Tuning Methodology

OBIEE techie stuff

Learn from my mistakes! Three things to take away.

Questions: quick ones as we go, Q&A discussion at the end.

NB: testing, not tuning.

Slide 3: What is performance testing all about?
Response times: report, ETL batch, OLTP transaction

System impact, resource usage, scalability. Quantified & empirical.

So what is performance testing?

I want to go through this pretty briefly, as it's a huge area of theory that I can't do proper justice to.

I'll try and cover off what I see as some of the basics.

It's important to really understand this when you're doing it, otherwise you're not going to get valid test results and you'll probably end up wasting a lot of time.

For a proper understanding of it I'd really recommend reading papers written by Cary Millsap.

Performance testing is a term used to cover quite a few different things.

The way I define it (and this may or may not be industry standard, so apologies) is this:

Performance testing itself is this [click]: Does it go fast enough? Does my report run in a time that meets user specification, or expectation? (This is within the context of OBIEE; for ETL we'd be talking about job runtimes and batch windows.)

The next stage after performance testing is generally load testing. [click] This is to answer the question: will it still go fast enough, once everyone's using it?

The other big consideration with load testing is: will it break my server? Or to be a bit more precise, what kind of impact is a new system going to have on an existing one? When you put your new reporting system live, what will happen to the existing one that's been running happily for a year?

A logical extension of load testing is stress testing: how far can it scale? This applies to both the reporting system you're putting in, and the servers themselves (which therefore feeds into capacity planning).

I've heard these terms used interchangeably, and there's quite a difference between them in my mind.

Why does it matter? Well, it dictates how you design and execute your testing. When you move from performance testing to load testing you generally take a step back in terms of level of detail.

My definition: load testing is not performance testing. It may be an area within it, but it is most definitely not one and the same thing. This presentation is not about performance TUNING. Of course, testing feeds into tuning which in turn feeds into testing, so the two are inextricably linked. But to try and keep things focussed, I will try to avoid discussing tuning specifics, otherwise we'll be here all day.

Performance testing is the repeatable running of a request to obtain metrics (primarily response time), to determine whether its performance is acceptable or not. Acceptable is a subjective term, but would typically be driven by user requirements. In the context of OBIEE we're basically talking about: does an Answers report or dashboard run fast enough to keep the users happy?

Load testing is about quantifying the effect an application is going to have on a system: whether the application is going to perform acceptably on the system when it's under load. One report run on its own might be fine, but what happens on a Monday morning when a thousand users all log on at the same time and all run the report?

Slide 4: Why performance test? (Isn't testing just for wimps?)
Check that your system performs: are the users going to be happy?
Baseline: how fast is fast? How slow is slow?
Validate system design: do it right, first time.

Capacity planning

This may be stating the obvious, but there are some pretty hefty reasons why you should incorporate multiple iterations of performance testing in any new development.

- To test that the new system performs well. Is it going to return the data to the users in the time that they're expecting? If you don't test for this reason alone then you're either brave, or foolhardy! What are BI systems about, if not providing the best experience to the end user?

- To provide baselines. How do you know if a system is performing worse if you don't know how it performed before? How often have you had the problem reported to you: "my report's running slow"? Well, what's slow? 9 minutes? If it took 10 seconds to run when you put the system live, then you've got a problem. If it took 9 minutes to run when you put the system live, then you've possibly just got an impatient user with unrealistic expectations. When you do get a performance problem, assuming you built your performance test packs beforehand, you'll be all set to diagnose where the problem lies. By their definition, performance tests generate a lot of lovely metrics. Once you think you've fixed the performance problem, you need to validate two things: have you fixed it? Have you broken anything else? Your performance test packs will give you the basis on which to prove it.

- To validate the way you are building the system, for example partitioning or indexing methods. How often have you heard "It Depends" from a DBA? The optimal parallelism setting or partitioning strategy this time round may be different from what was optimal on the last project you did. An index may help one report, but how do you know it doesn't hinder another? Unless you have a pack of repeatable tests with timings, you can't quickly tell what the impact is. In effect, you do your performance tuning up-front, as part of the build, rather than with a thousand angry users furious that their reports have stopped working.

Slide 5: Why performance test? It's never too late
You'll never catch all your problems in pre-production testing. That's why you need a reliable and efficient method for solving the problems that leak through your pre-production testing processes.

It's not too late to put in place a solid performance test methodology around an existing system.

If you set up your performance tests now, you will have a set of baselines and a full picture of how your system behaves normally.

Then when it breaks, or someone complains, you're already set to deal with it.

If you don't, then when you do have problems you have to start from scratch, finding your way through the process.

Which is better: at your leisure, or with the proverbial gun of unhappy users held to your head?

Performance testing isn't optional. It's mandatory. It's just up to you when you do it.

Even if you're running Exadata or similar and are so confident that you don't have performance problems and never will, how are you going to capacity plan? Do you know how your system currently behaves in terms of CPU, IO, etc.? How many more users can you run? Which is easier: simulate and calculate up front, or wait until it starts creaking?

Slide 6: Why performance test? Because it makes you better at your job

At the very least, your performance test plan will make you a more competent diagnostician (and clearer thinker) when it comes time to fix the performance problems that will inevitably occur during production operation.

Performance Testing requires an extremely thorough understanding of the system.

If you do it properly, there is no doubt you will come out of it better equipped to support and develop the system further.

Slide 7: Performance Testing - What & Why
Quantifying response times, system impact

User expectations, problem diagnosis, design validation. Any questions so far?

Slide 8: Performance Testing - How?
Timebox! Evaluate design / config options. Do it right. Don't fudge it. Do more testing. Iterative approach. Be methodical. Redefine test. Do more testing.
This is the methodology we've developed.

It's a high-level view; subsequent slides will give the detail.

Slide 9: Define & build your test
Define: what are you going to test? The aim of the test, scope, assumptions, specifics (data, environment, etc.)

Build: how are you going to test it? (OBIEE specific)

E.g.: check that the system performs, baseline performance, prove system capacity, validate system design. Make sure you have a specific aim.

It's easier to have two clear tests than to try to cover everything in one.

Loose analogy: 10,000 ft view vs low-level flyby.

Don't forget predicates: one report may have many filters and behave a lot differently depending on how they're set.

WRITE IT ALL DOWN

Think of breadth of test (# of reports) vs depth (# of metrics)

If your test aim doesn't define clearly enough how you're going to run it, consider these two options:

Option 1: Top down / start big. See what breaks, then follow standard troubleshooting to try to isolate the problem. Shortest time to get initial results, but difficult to isolate any problems. Better for firefighting an existing problem where time is limited or there are obvious quick wins; imprecise. For example: look in Usage Tracking for all reports that run for more than 5 minutes, and test only those reports.

Option 2: Bottom up / start small. Define each test component, record behaviour for each test run, combine test components into bigger test runs, and scale up to load testing. Ultimately more precise: more metrics = more precision = quicker to identify issues and resolutions. But it's boring! Where's my gigs of throughput bragging rights, or smoking server groaning under the load? What's the point of running ten thousand users through your system just to prove it goes bang (or doesn't)? It's a longer process and needs more accuracy. For example: report response time requirements from users, representative workloads.

Your testing may give rise to some tuning, so your tests must be isolatable and repeatable; otherwise how do you validate that what you've changed has fixed the problem and not made it worse?

Repeatable: what's the complete set of things you'd need to run the same test somewhere else? E.g. schema definition, DB config parameters, OBIEE NQSConfig, OBIEE RPD, OBIEE reports (plus Presentation Services config & web catalog).

Isolatable: do you have a dedicated performance environment? What else is running at the same time?

Slide 10: Consider your test scope

More components = more complex = more variables = larger margin of error. Fewer components = easier to manage = more precise = more efficient. Reduce complexity intelligently.

Don't cut corners, but distil the problem to its essence.

Pick the closest point upstream from the bottleneck.

More components = greater margin of error!

So you've seen the different places in which you can test OBIEE. But how do you choose the one most applicable to the testing you're doing? Should you just run everything against the database directly?

Basically, it's about keeping it simple, and avoiding unnecessary complexity.

Say someone gives you a Ducati engine to fix. [click] You've already identified the area of the problem. Would you still spend all your time looking at the whole engine, or isolate where you know the problem lies and work on it from there? [click]

Now clearly this isn't an absolute, because there could be more than one problem with the engine, and so on, but the principle is sound. Reduce the complexity, intelligently.

The same principle applies to testing OBIEE.

Depending on the kind of reports, your RPD, and your data model and database, your performance bottlenecks will be somewhere across the whole stack.

You shouldn't be looking to cut corners, but look at it as distilling down what you're testing to its essence and nothing more.

Hopefully this is pretty obvious, as it's going to be a more efficient use of your time, but here are the two reasons why:

1) Your tests show a slow response time and you need to track down and diagnose this. Would you rather be considering three elements or three hundred?

When I was working on some performance testing, it was clear that very little time was spent after the BI Server passes data back up to the Presentation Services.

So what I did was cut out Presentation Services entirely, because the aim of my testing was to resolve reports that were taking 5 or 10 minutes to run, and those 5-10 minutes were always downstream of Presentation Services.

Bear in mind where your dependencies lie, and where the stack is coupled. The time itself was always in the database, but you can't just edit the SQL, because that comes from the BI Server. So you intelligently pick the closest point upstream from the bottleneck.

2) The other reason why you should reduce complexity is the impact that a change will have on your test plans. If you decide you want to implement a change, maybe based on the results of your testing, how many stages in the test would you like to have to go and change and re-configure your monitoring for: two, or twenty?

Slide 11: OBIEE stack (diagram: Report / Dashboard -> Presentation Services -> Logical SQL -> BI Server -> physical SQL statement(s) -> Database -> data set(s) -> BI Server -> data set -> Presentation Services -> rendered report; excludes app/web server & Presentation Services plug-in)

Hopefully everyone's familiar with this picture. Here's a quick refresher:

A rather grumpy-looking user runs a request in Answers.

PS sends LSQL to BI Server

BI server sends SQL to DB server

DB server returns results to BI Server

BI Server processes data (aggregates, stitches, etc)

BI Server returns data to PS server

The PS server renders the report and returns it through the web/app server to the user, who's hopefully now happy.

Slide 12: OBIEE testing options (diagram: the same stack, annotated with the points at which you can drive a test - a user & stopwatch or a load testing tool (e.g. LoadRunner, OATS) at the report/dashboard level, nqcmd issuing Logical SQL at the BI Server, and a SQL client issuing physical SQL directly against the database)

So, how can we do our testing? I'm going to cover all the options first, and then discuss why you'd choose one rather than another.

The whole point of testing is that it is repeatable, which will normally mean automated.

From the top down, here are the ways of doing it. [click] To simulate the complete end-to-end system, you've two ways: a user and a stopwatch :-), or a web-capable testing tool such as LoadRunner or Oracle Application Testing Suite, which simulates a user interacting with OBIEE dashboards or Answers. Maybe something clever with web services too?

[click] - If you want to test from the BI Server onwards only, then you have these options:

A utility that comes with OBIEE is called nqcmd. It interfaces with the BI Server using ODBC. I'm going to spend a lot of this presentation talking about it, and will come back to it shortly. You could also use another ODBC-capable tool to generate the workload.

[click] Finally, you could run the SQL on the database only. This isn't as simple as it sounds, because remember that the BI Server generates the SQL.

How often have you seen the SQL being run on the database and winced, or had a DBA shout at you for it?

The BI Server is a black box when it comes to generating the SQL; all you can do is encourage it through good data modelling, both in the database schema and the RPD.

However, if the focus of your performance testing (remember I talked about defining why you're doing it) is more to the load testing side of things, and you are happy that there are no big wins to be had from the BI Server but rather from tuning the database itself, then you could consider running the SQL directly against it.

If all that will change is the execution plan then you can save yourself a lot of time by effectively cutting out OBIEE entirely and treating it purely as a database tuning exercise.

Candidates for this approach would be if you were doing things like evaluating new indexes, or parallelism or compression settings.

If you're just interested in the database then it's database vendor specific. For Oracle I'd consider:
- a SQL file run through SQL*Plus from the command line (lends itself to scripting)
- SQL Tuning Sets, which you can feed into SQL Performance Analyzer to run on another database
- Oracle RAT, Real Application Testing (which is made up of Database Replay and SQL Performance Analyzer)

Slide 13: nqcmd
nqcmd is a command line client which can issue SQL statements against either the Oracle BI Server or a variety of ODBC-compliant backend databases.
SYNOPSIS: nqcmd [OPTION]...
DESCRIPTION:
 -d (data source name)
 -u (user name)
 -p (password)
 -s (SQL input file)
 -o (output file)
 -D -C -R
 -a (a flag to enable async processing)
 -f (a flag to flush the output file for each write)
 -H (a flag to open/close a request handle for each query)
 -z (a flag to enable UTF8 instead of ACP)
 -utf16 (a flag to enable UTF16 instead of ACP)
 -q (a flag to turn off row output)
 -NoFetch (a flag to disable data fetch with query execution)
 -NotForwardCursor (a flag to disable forward-only cursor)
 -SessionVar (name=value)
nqcmd is part of the OBIEE installation on both Unix and Windows.

You can use nqcmd interactively, or from a script.

Slide 14: nqcmd
[oracle@RNMVM01 setup]$ . ./sa-init.sh

[oracle@RNMVM01 setup]$ nqcmd

-------------------------------------------------------------------------------
Oracle BI Server
Copyright (c) 1997-2009 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------

Give data source name: RNMVM01
Give user name: Administrator
Give password: Administrator

[T]able info
[C]olumn info
[D]ata type info
[F]oreign keys info
[P]rimary key info
[K]ey statistics info
[S]pecial columns info
[Q]uery statement
Select Option: C
Give catalog pattern:
Give user pattern:
Give table pattern: Time
Give column type pattern:
TABLE_QUALIFIER      TABLE_NAME  COLUMN_NAME  A_TYPE  TYPE_NAME  PRECISION  LENGTH  SCALE  RADIX  NULLABLE
Sample Sales Reduced Time        Day Date     9       DATE       0          0       0      10     0
Sample Sales Reduced Time        Week         12      VARCHAR    12         12      0      10     0
Sample Sales Reduced Time        Month        12      VARCHAR    9          9       0      10     0
Sample Sales Reduced Time        Quarter      12      VARCHAR    7          7       0      10     0
Sample Sales Reduced Time        Year         12      VARCHAR    4          4       0      10     0
Row count: 5

Interactively, you can use it to query the logical data model that the RPD exposes.

NB: when you use nqcmd on Unix you need to make sure you've set the environment variables for OBIEE first, by dot-sourcing sa-init.sh.

Slide 15: nqcmd
[oracle@RNMVM01 perftest]$ cat /data/perftest/lsql/test01.lsql
SELECT "D0 Time"."T01 Per Name Week" saw_0 FROM "Sample Sales" WHERE ("D01 More Time Objects"."T31 Cal Week" BETWEEN 40 AND 53) AND ("D01 More Time Objects"."T35 Cal Year" = 2007) ORDER BY saw_0

[oracle@RNMVM01 perftest]$ . /app/oracle/product/obiee/setup/sa-init.sh
[oracle@RNMVM01 perftest]$ nqcmd -d AnalyticsWeb -u Administrator -p Administrator -s /data/perftest/lsql/test01.lsql

-------------------------------------------------------------------------------
Oracle BI Server
Copyright (c) 1997-2009 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------

Connection open with info:
[0][State: 01000] [DataDirect][ODBC lib] Application's WCHAR type must be UTF16, because odbc driver's unicode type is UTF16

SELECT "D0 Time"."T01 Per Name Week" saw_0 FROM "Sample Sales" WHERE ("D01 More Time Objects"."T31 Cal Week" BETWEEN 40 AND 53) AND ("D01 More Time Objects"."T35 Cal Year" = 2007) ORDER BY saw_0

SELECT "D0 Time"."T01 Per Name Week" saw_0 FROM "Sample Sales" WHERE ("D01 More Time Objects"."T31 Cal Week" BETWEEN 40 AND 53) AND ("D01 More Time Objects"."T35 Cal Year" = 2007) ORDER BY saw_0

-------------
saw_0
-------------
2007 Week 40
2007 Week 41
2007 Week 42
2007 Week 43
2007 Week 44
2007 Week 45
2007 Week 46
2007 Week 47
2007 Week 48
2007 Week 49
2007 Week 50
2007 Week 51
2007 Week 52
2007 Week 53
-------------
Row count: 14
-------------

Processed: 1 queries

Using nqcmd to execute a given logical SQL script is where the real power lies.

Here we take a simple logical SQL statement in the file test01.lsql.

It's run as an input parameter to nqcmd using the -s flag.

Slide 16: nqcmd (diagram: a test script feeds Logical SQL files to nqcmd, which runs them against the BI Server and returns data; timings come from Usage Tracking or NQQuery.log)

Very versatile; here's how you can use it:

Unix shell scripting, or powershell on Windows

Generate a set of Logical SQL files

Run them sequentially (twice), e.g. with a wrapper script like the sketch below.
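A minimal sketch of such a runner, assuming the sa-init.sh environment script and the /data/perftest/lsql directory shown earlier; the CSV log format and the results location are illustrative only, not part of the product:

#!/bin/bash
# Sketch: run every Logical SQL file through nqcmd twice and record
# wall-clock response times to a CSV for later analysis.
# Paths, DSN and credentials are examples only - adjust to your installation.
. /app/oracle/product/obiee/setup/sa-init.sh      # set the OBIEE environment first

LSQL_DIR=/data/perftest/lsql
RESULTS=/data/perftest/results/run_$(date +%Y%m%d_%H%M%S).csv
echo "test_file,iteration,start_ts,elapsed_secs" > "$RESULTS"

for lsql in "$LSQL_DIR"/*.lsql; do
  for i in 1 2; do                                # run each statement twice
    start_ts=$(date +%Y-%m-%dT%H:%M:%S)
    t0=$(date +%s)
    nqcmd -d AnalyticsWeb -u Administrator -p Administrator \
          -s "$lsql" -q > /dev/null 2>&1          # -q suppresses row output
    t1=$(date +%s)
    echo "$(basename "$lsql"),$i,$start_ts,$((t1 - t0))" >> "$RESULTS"
  done
done
echo "Results written to $RESULTS"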

Slide 17: Master test script (diagram: one master script invokes several test scripts in parallel, each driving nqcmd with Logical SQL against the BI Server)
The aim is a script that encompasses the whole test: press a button, and it does it all.

Less interaction = less effort, less error.

The script simulates a user: sleep, randomness, automation of metric collection.

Run nqcmd in parallel: it's just a script, so invoke it twice, three times, four times.

Write user scripts that include random report choice and sleeping.

Scripts include: random report choice, sleeping, logging to file, SQL interaction (e.g. triggering SQL Tuning Set collection). A sketch of such a user script follows.
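As an illustration only, a simulated-user script along these lines might look like the following; the report list, sleep range, run count and log location are made up for the example, and the SQL Tuning Set trigger is omitted:

#!/bin/bash
# Sketch of a single simulated user: pick a random report, run it via nqcmd,
# log the response time, think for a random interval, repeat.
# Invoke several copies in parallel from a master script to simulate load, e.g.:
#   for u in 1 2 3 4; do ./user.sh $u & done; wait
. /app/oracle/product/obiee/setup/sa-init.sh

USER_ID=${1:-1}
LSQL_DIR=/data/perftest/lsql
LOG=/data/perftest/results/user_${USER_ID}.log
RUNS=20                                        # number of reports this "user" runs

REPORTS=( "$LSQL_DIR"/*.lsql )

for ((n=1; n<=RUNS; n++)); do
  report=${REPORTS[RANDOM % ${#REPORTS[@]}]}   # random report choice
  t0=$(date +%s)
  nqcmd -d AnalyticsWeb -u Administrator -p Administrator -s "$report" -q \
        > /dev/null 2>&1
  t1=$(date +%s)
  echo "$(date +%FT%T) user=$USER_ID report=$(basename "$report") secs=$((t1 - t0))" >> "$LOG"
  sleep $((RANDOM % 30 + 5))                   # user think time: 5-34 seconds
done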

The target is: press a button to run the script, and get response time numbers out.

Slide 18: LoadRunner
a.k.a. HP Performance Center. Simulates user interaction (HTTP traffic).

Powerful, but can be difficult to set up; Ajax complicates things.

Do you really need to use it?

Tools: Fiddler2, Firebug

Reference: My Oracle Support Doc ID 496417.1; http://rnm1978.wordpress.com/category/loadrunner

Slide 19: Defining your test - summary
Be very clear what the aim of your test is. You probably need to define multiple tests. There are different points on the OBIEE stack to interface with: pick the most appropriate one. Write everything down!

Any questions?

Slide 20: Measure
Once you've defined your test you need to execute it and measure the results.

Here are the ways you should consider measuring the different parts of the stack.

Slide 21: OBIEE measuring & monitoring (diagram of the stack and where to measure each layer)
- Web server / app server: Apache log, OAS log
- Presentation Services plug-in: Analytics logs
- Presentation Services: sawserver.log
- BI Server: NQServer.log, NQQuery.log, systems management (Enterprise Manager / BI Management Pack), Usage Tracking, PerfMon (Windows only), jConsole etc.
- Database: Enterprise Manager, ASH, AWR, SQL Monitor
- Server metrics (e.g. IO, CPU, memory): PerfMon (Windows), Oracle OS Watcher (Unix), Enterprise Manager (Oracle)

Once you've decided how much of the stack you're going to test, you need to set about designing the test and how you're going to capture your metrics.

Performance tests are all about collecting metrics that allow you to make statistically valid and quantifiable conclusions about your system.

The primary metric of interest is time. What's the end-to-end response time, from request to answer, and where is the time in between spent? If a user complains that a report takes five minutes to run, but the DBA says they don't see the query hit the database for the first two, and then it executes in 30 seconds, what's happened to the other two and a half minutes?

Other metrics of interest are the environmental statistics like CPU, memory, and IO, and diagnostic statistics such as the execution plan on the database and lower-level information like buffer gets etc.

So from the top down: [click] the web server, e.g. the Apache log, is the first log of the user request coming in; then the app server, e.g. OAS.

The Presentation Services plug-in, Analytics (this is where you see the error logs when you get a 500 Internal Server Error from analytics).

[click] sawserver.log: by default this doesn't record that much, but by changing the logconfig.xml file you can enable extremely detailed logging. This is useful for diagnosing lots of problems, but also if you're looking to do an accurate profile of where the time in an Answers request is spent. You can see when it receives the user request, when it sends on the logical SQL to the BI Server, and when it receives the data back. See http://rnm1978.wordpress.com/category/log/ for details.

[click] BI Server: spoilt for choice here. For a production environment I strongly recommend enabling Usage Tracking. For performance work you should also be using NQQuery.log, where the variable levels of logging show you logical and physical SQL, BI Server execution plans, response times for each database query run, etc.

[click] As well as these two features there is the systems management functionality, which exposes some very detailed counters through Windows PerfMon or the BI Management Pack for OEM. You can also use the JMX protocol to access the data through clients like JConsole or jManage.

[click] For the database, all the standard monitoring practices apply, depending on what your database is. For Oracle you should be using OEM, ASH, SQL Monitor, etc.

[click] And finally, for getting a complete picture of the stack's performance: speak to your users! Maybe not as empirically valid as the other components, but just as important.

Slide 22: NQQuery.log
Query Status: Successful Completion

Rows 1, bytes 96 retrieved from database query id:

Physical query response time 1 (seconds), id

Rows 621, bytes 9246 retrieved from database query id:

Physical query response time 10 (seconds), id

Physical Query Summary Stats: Number of physical queries 2, Cumulative time 11, DB-connect time 0 (seconds)

Rows returned to Client 50

Logical Query Summary Stats: Elapsed time 14, Response time 12, Compilation time 2 (seconds)

NQQuery.log is useful when a problem is suspected on the database; it's the only place that individual physical SQL query response times are kept.

Database query times & row counts. A quick way to pull these lines out of the log is sketched below.
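For example, a grep along these lines extracts the timing lines shown above from NQQuery.log (a sketch only: the log path is a typical location under a 10g install, and the exact message text can vary between OBIEE versions):

# Pull per-query physical response times and the summary stats lines
# out of NQQuery.log (path is an example - adjust to your installation).
LOG=/app/oracle/product/obiee/server/Log/NQQuery.log

grep -E "Physical query response time|Physical Query Summary Stats|Logical Query Summary Stats|Rows returned to Client" "$LOG"

# Just the physical response times in seconds, one per line:
grep -oE "Physical query response time [0-9]+" "$LOG" | awk '{print $5}'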

Slide 23: Database metrics
Oracle: Enterprise Manager's Performance functionality is fantastic. For pure testing metrics capture you need to go to the tables: V$SQL_MONITOR, etc.

EM is good, but for pure testing you need to capture the data. SQL Tuning Sets are good for capturing the behaviour of a set of SQL over time (longest running, most IO, etc.), but less good for focussing on individual queries because the stats are aggregated. There is also the SQL Monitor export from EM (next slide).

SQL Server: DMVs, SQL Profiler, PerfMon counters, etc. Other RDBMS: pass.

Slide 24: Oracle SQL Monitor

Got to mention this: from EM you can export a standalone HTML file that renders like this. Brilliant.

Slide 25: Measure - summary
Lots of different ways to measure.

Build measurement into your test plan. Automate where possible: easier, less error.

Decide what metrics are relevant to your testing: for load testing, system metrics are very important; for performance testing an individual report, maybe just the response time.

Plan your measurements as part of the test: trigger collection scripts automagically, and include manual collection steps in the test instructions.

Slide 26: Analyse
The analysis step is:

Collate the data.

Store it in a sensible way (the raw data).

Label your tests (better to use a non-meaningful label).

Analyse it: visualisation; the analysis will depend on the aim of the test, e.g. for load testing, identify the bottlenecks.

Slide 27: Analysing the data

[click] raw data,

[click] compared to a previous baseline, to illustrate variance

[click] host metrics - IO graph

[click] response time, over time

Slides 28-31: Analysing the data (example charts)

Illustrating the data: colours!

[click] Use Excel; conditional formatting is great.

Slide 32: Analysing the data
Response times (as recorded): 1, 1, 9, 3, 2, 10, 2, 1, 2, 3
Response times (sorted): 1, 1, 1, 2, 2, 2, 3, 3, 9, 10
Average (mean): 3.4; 50th percentile (median): 2; 90th percentile: 9.1
Think about the data statistically: what do the numbers represent?

Average / mean often used, but ignores variance

Percentile more representative

Standard deviation: an indication of the variance.

Sample quantity: make sure it's statistically valid. A quick way to calculate these figures from a file of response times is sketched below.
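As a sketch, the mean, median and 90th percentile of a file of response times (one value per line, such as the elapsed-seconds column logged by the runner script earlier) can be computed with sort and awk. This uses simple nearest-rank percentiles, so for small samples the result can differ slightly from the interpolated figures on the slide:

# Compute mean, median and 90th percentile (nearest-rank) from a file of
# response times, one number per line.
sort -n response_times.txt | awk '
  { v[NR] = $1; sum += $1 }
  END {
    printf "mean   %.1f\n", sum / NR
    printf "median %s\n",   v[int((NR + 1) / 2)]
    printf "p90    %s\n",   v[int(0.9 * NR + 0.999999)]    # nearest-rank ceiling
  }'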

Good site: http://www.robertniles.com/stats/

Slide 33: Recording data about the test
As well as metrics, record data about your tests.

What are you recording response time against: the report? The physical SQL? etc.

[click] Ways to capture them.

For each test execution, aim to record how each level relates to the next, e.g. logical SQL -> SQL IDs, SQL IDs -> execution plan ID.

These might seem constant, but could change between tests: changing the RPD could change the logical SQL, and therefore the physical SQL, SQL ID and execution plan; a new index wouldn't change the physical SQL or SQL ID, but the execution plan might.

Helps with retrospective analysis

Be aware how each element links to the next

Useful when analysing test results, to be able to identify a SQL ID in Oracle AWR etc

Slide 34: Extending Usage Tracking
S_NQ_ACCT: START_TS, ROW_COUNT, TOTAL_TIME_SEC, NUM_DB_QUERY, QUERY_TEXT, QUERY_SRC_CD, SAW_SRC_PATH, SAW_DASHBOARD
OBIEE_REPLAY_STATEMENTS: qt_ora_hash, query_text, saw_path, dashboard
OBIEE_REPLAY_STATS: testid, testenv, qt_ora_hash, start_ts, response_time, row_count, db_query_cnt

This follows on from how to capture data.

It's something I put together to help with analysing statement performance across systems.

ORA_HASH(query_text) is the common link.

Database tables are the obvious place to store raw performance test data.

Keeping track of logical SQL statements, often 2-3k in size, is difficult.

I used ORA_HASH to encode them.

I built a new lookup table and a new fact table; a sketch of how they can be queried follows.
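Purely as an illustration of the idea, a comparison query against these custom tables might look like this; the table and column names are taken from the slide, while the schema/connection and test IDs are made up, and the actual DDL isn't shown in the deck:

# Compare response times for the same logical statement across two test runs,
# linked by ORA_HASH(query_text) as described above. Illustrative only.
sqlplus -s perftest/perftest@ORCL <<'SQL'
SELECT s.saw_path,
       base.response_time AS baseline_secs,
       cand.response_time AS candidate_secs
FROM   obiee_replay_statements s
JOIN   obiee_replay_stats base
       ON base.qt_ora_hash = s.qt_ora_hash AND base.testid = 'BASELINE'
JOIN   obiee_replay_stats cand
       ON cand.qt_ora_hash = s.qt_ora_hash AND cand.testid = 'CANDIDATE'
ORDER  BY cand.response_time - base.response_time DESC;
SQL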

Running queries on different systems, you can compare equal statements this way.

Slide 35: Analyse
Evaluate design / config options. Do it right. Don't fudge it. Iterative approach. Timebox!
At the end of the analyse stage there are options:

Most likely change something and re-measure (partitioning, system config, etc)

Choose what to do

More tests? Indexes, config settings, etc.

Is the test wrong? Redefine it.

Completed all the tests, or reached the end of the timebox? Review.

Slide 36: Review
Iterative approach: redefine the test, continue testing, or implement. Summarise the analysis.

Compare to the test aim

Implementation options: effort vs benefit.

Branch: both implement and continue testing.

What did the tests show? Summarise.

Have you proved anything? Have you disproved anything?

Do you need to test some more?

Have you got time to test more?

Do you need to define a new set of tests?

What's practical to implement? Return vs effort.

Slide 37: Review

Example from one of the review stages

Evaluate IO profile from four different iterations

Default parallelism: bottleneck at 800MB/s.

Output: implement a reduced DOP; branch: look at Auto DOP in 11gR2.

Slide 38: Implement
Iterative approach. Don't forget to validate your implementation.

Use your perf test scripts

Implement the chosen option

BASELINE & test it using your perf tests

Branch the code line and do more testing if you want; for us that was parallelism and compression.

When you hit perf problems in Prod, you can use your pre-defined perf tests to assess the scope and nature of the problem

Slide 39: Lessons Learnt
You won't get your testing right first time; there's no shame in that. Don't cook the books: it's better to redefine your test than invalidate its results.

Stick to the methodology. Don't move the goalposts. It's very tempting to pick off the low-hanging fruit; if you do, make sure you don't get indigestion.

Timebox

Test your implementation!

Testing: you understand more about the system as you go, and you'll probably want to redefine the test; that's part of the process! Performance testing is an iterative process. I can't stress this enough.

You will not get it right the first time you do it

Whatever you do, you'll probably miss something or invalidate your tests.

Remember that an iterative approach is entirely valid; don't feel you got it wrong and have to fudge the results to cover your mistake. It's better to abandon a test and learn from the mistake than produce a perfect test that's complete rubbish.

Stick to the method. A benefit is that it also enforces justification for changes and avoids "we've always done it that way". Don't move the goalposts. You might find some horrible queries, and as you dig into them you notice some obvious quick wins. If you rush the fix in without completing your first round of testing, you risk invalidating it. BE METHODICAL!!!!

Timebox the execute/measure/analyse iterations; don't get lost in diminishing returns. It's a good idea to timebox your work, and have regular review points.

Test your implementation! For us, a parallel config change was not tested properly after being implemented in the test environment, and it nearly got to prod without us realising.

Don't get so bogged down in the detail that you miss the wood for the trees. You can end up focussing on perfecting one element of the system at the expense of all the others.

Slide 40: How to approach performance testing
Think clearly. This presentation has shown you how to run big workloads against your OBIEE system.

But resist the temptation to dash off and see what happens when you run a thousand users against your system at once.

It'll be fun, but ultimately a waste of time.

You have to define what you're going to do.

You need to define what the ultimate aim is.

Are you proving a system performs to specific user requirements? In which case your test definition is almost written for you; you just have to fill in the gaps.

If you're building a performance test for best practice and all the good reasons I spoke about before, then you need to think carefully about what you'll test.

What's a representative sample of the system's workload? For example: analyse existing usage and pick the most frequently run reports. SPEAK TO YOUR USERS! Which reports do they care about? Be wary of only analysing the reports that users complain about, though: you want to be collecting lots and lots of good metrics. What happens when you fix the slow reports? The old fast reports will now appear slow in comparison, so you want to have some baselines for them too. I can't stress this strongly enough.

Cary Millsap writes excellently on the whole subject of performance. I can't recommend highly enough his paper "Thinking Clearly About Performance", as well as many of the articles on his blog: http://carymillsap.blogspot.com/2010/02/thinking-clearly-about-performance.html

There are books and books written on how you should approach performance testing and tuning; people like Mr Millsap have built their whole careers around it. It's way outside the scope of this, but I believe it's essential to understand the approach to follow, otherwise all your testing can be in vain.

It's not the same as dashing off an OBIEE report that you can bin and recreate next week. Imagine designing your DW schema without good modelling knowledge, or think of ones that you've worked with where the person who created it didn't understand what they were doing. The wasted time and misleading results can be potentially disastrous if you don't get it right up front.

Take my word for it: time invested up front reading and understanding will repay itself ten-fold.

Preaching over.

Slide 41: Performance Testing OBIEE
Evaluate design / config options. Do it right. Don't fudge it. Do more testing. Iterative approach. Be methodical. Redefine test. Do more testing.
Questions & Discussion

Slide 42: Thank You!