Possible Causes of Poor SQL Performance


In this Document
  Purpose
  Last Review Date
  Instructions for the Reader
  Troubleshooting Details
    Diagnostics/Remedies
      1. Poorly tuned SQL
      2. Poor disk performance/disk contention
      3. Unnecessary sorting
      4. Late row elimination
      5. Over parsing
      6. Missing indexes/use of 'wrong' indexes
      7. Wrong plan or join order selected
      8. Import estimating statistics on tables
      9. Insufficiently high sample rate for CBO
      10. Skewed data
      11. New features forcing use of CBO
      12. ITL contention
  References

Applies to:
Oracle Server - Enterprise Edition - Version: 9.0.1.0 and later [Release: 9.0.1 and later]
Information in this document applies to any platform.
***Checked for relevance on 14-SEP-2010***

Purpose
This document contains a number of potentially useful pointers for use when attempting to tune an individual SQL statement. This is a vast topic and this is just a drop in the ocean.

Last Review Date
March 4, 2008

Instructions for the Reader
A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting.

Troubleshooting Details
Diagnostics/Remedies

1. Poorly tuned SQL
Often, part of the problem is finding the SQL causing the performance degradation. If you are seeing problems on a system, it is usually a good idea to start by eliminating database setup issues using Statspack (8i, 9i, 10g) or AWR (recommended for 10g and higher). (For versions earlier than 8i use the UTLBSTAT & UTLESTAT reports.) See:

Note 94224.1 FAQ- STATSPACK COMPLETE REFERENCE
Note 276103.1 PERFORMANCE TUNING USING 10g ADVISORS AND MANAGEABILITY
Note 62161.1 Tuning using BSTAT/ESTAT

for much more on this.
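As a rough sketch of the Statspack workflow (assuming Statspack is already installed under the PERFSTAT schema; the exact report script location can vary by release):

-- Connected as PERFSTAT in SQL*Plus, take a snapshot either side of the workload of interest
EXEC statspack.snap;
-- ... run the workload being investigated ...
EXEC statspack.snap;

-- Then produce a report between the two snapshot IDs when prompted
@?/rdbms/admin/spreport.sql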

Once the database has been tuned to a reasonable level, the most resource-hungry SQL statements can be determined from Statspack and AWR reports, focusing on the following sections:

- SQL ordered by Buffer Gets
- SQL ordered by Physical Reads
- SQL ordered by Executions
- SQL ordered by Parse Calls (not in 8i)
- SQL ordered by CPU (AWR only)

See: Note 228913.1 Systemwide Tuning using STATSPACK Reports for more details.

It is also possible to find resource-hungry SQL interactively as follows (a very similar report can be found in the Enterprise Manager Tuning Pack):

SELECT address, SUBSTR(sql_text,1,20) Text, buffer_gets, executions,
       buffer_gets/executions AVG
FROM   v$sqlarea
WHERE  executions > 0
AND    buffer_gets > 100000
ORDER BY 5;

Remember that the 'buffer_gets' value of > 100000 needs to be varied for the individual system being tuned. On some systems no queries will read more than 100000 buffers, while on others most of them will. This value allows you to control how many rows you see returned from the select.

The ADDRESS value retrieved above can then be used to look up the whole statement in the v$sqltext view:

SELECT sql_text
FROM   v$sqltext
WHERE  address = '...'
ORDER BY piece;

Once the whole statement has been identified it can be tuned to reduce resource usage.

If the problem relates to CPU-bound applications, then CPU information for each session can be examined to determine the culprits. The v$sesstat view can be queried to find the sessions with high CPU usage, and then their SQL can be listed as before. Steps:

1. Verify the reference number for the 'CPU used by this session' statistic:

SELECT name, statistic#
FROM   v$statname
WHERE  name LIKE '%CPU%session';

NAME                           STATISTIC#
------------------------------ ----------
CPU used by this session               12

2. Then determine which session is using most of the CPU:

SELECT *
FROM   v$sesstat
WHERE  statistic# = 12;

       SID STATISTIC#      VALUE
---------- ---------- ----------
         1         12          0
         2         12          0
         3         12          0
         4         12          0
         5         12          0
         6         12          0
         7         12          0
         8         12          0
         9         12          0
        10         12          0
        11         12          0
        12         12          0
        16         12       1930

3. Look up details for this session:

SELECT address, SUBSTR(sql_text,1,20) Text, buffer_gets, executions,
       buffer_gets/executions AVG
FROM   v$sqlarea a, v$session s
WHERE  sid = 16
AND    s.sql_address = a.address
AND    executions > 0
ORDER BY 5;

4. Use v$sqltext to extract the whole SQL text.

5. Explain the queries and examine their access paths. Autotrace is a useful tool for examining access paths. See Note 43214.1 AUTOTRACE Option in sqlplus
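For example, the access path of a candidate statement can be checked from SQL*Plus. A minimal sketch, assuming a hypothetical table BIG_TABLE (DBMS_XPLAN.DISPLAY is available from 9.2 onwards; on earlier releases the utlxpls.sql script serves the same purpose):

-- Show the execution plan without fetching the query results
SET AUTOTRACE TRACEONLY EXPLAIN
SELECT owner, COUNT(*)
FROM   big_table
WHERE  owner = 'SCOTT'
GROUP BY owner;
SET AUTOTRACE OFF

-- Alternatively, explain the statement and display the stored plan
EXPLAIN PLAN FOR
SELECT owner, COUNT(*)
FROM   big_table
WHERE  owner = 'SCOTT'
GROUP BY owner;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);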

2. Poor disk performance/disk contention
Use of Statspack or AWR, focusing on the "Tablespace IO Statistics" section, and/or operating system I/O reports can help in this area. Remember that you may be able to capture the activity of a single statement by running the report around the run of your statement with no other activity.

Another good way of monitoring I/O is to run a 10046 level 8 trace to capture all the waits for a particular session. The 10046 event can be turned on at the session level using:

alter session set events '10046 trace name context forever, level 8';
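A fuller sketch of a tracing session (the tracefile_identifier setting, available from 9i onwards, simply makes the resulting trace file in user_dump_dest easier to find; the identifier value here is arbitrary):

alter session set tracefile_identifier = 'MY_TUNING_TEST';
alter session set events '10046 trace name context forever, level 8';
-- ... run the statement being investigated ...
alter session set events '10046 trace name context off';

The raw trace file can then be read directly or summarised with the tkprof utility (see Note 39817.1 below).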

Excessive I/O can be found by examining the resultant trace file and looking for I/O-related waits such as:

- 'db file sequential read' (single-block I/O - index, rollback segment or sort)
- 'db file scattered read' (multi-block I/O - full table scan)

Remember to set TIMED_STATISTICS = TRUE to capture timing information, otherwise comparisons will be meaningless. See:

Note 21154.1 10046 event
Note 39817.1 SQL_TRACE interpretation
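Timed statistics can be switched on just for the session being traced, for example:

alter session set timed_statistics = true;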

If you are also interested in viewing bind variable values then a level 12 trace may be used with event 10046.

3. Unnecessary sorting
The first question to ask is 'Does the data REALLY need to be sorted?' If sorting does need to be done then try to allocate enough memory to prevent the sorts from spilling to disk and causing I/O problems.
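How that memory is allocated depends on the release and configuration. A minimal sketch, assuming a dedicated server session using manual work area management (the 10 MB figure is purely illustrative; with automatic PGA management in 9i and later, PGA_AGGREGATE_TARGET normally governs sort memory and the session must be switched to manual management for SORT_AREA_SIZE to take effect, while on 8i only the second statement applies):

alter session set workarea_size_policy = manual;  -- 9i and later only
alter session set sort_area_size = 10485760;      -- 10 MB per sort run for this session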

Sorting is a very expensive operation:

- High CPU usage
- Potentially large disk usage

Try to make the query sort the data as late in the access path as possible. The idea behind this is to make sure that the smallest number of rows possible are sorted.

Remember that:

- Indexes may be used to provide presorted data.

- Sort merge joins inherently need to do a sort.

- Some sort operations do not actually need a sort to be performed, typically because the rows are already retrieved in the required order. In this case the explain plan should show NOSORT for the operation.

In summary:

- Increase the sort area size to promote in-memory sorts.

- Modify the query to process fewer rows -> less to sort.

- Use an index to retrieve the rows in order and avoid the sort.

- Use SORT_DIRECT_WRITES to avoid flooding the buffer cache with sort blocks.

- If using Pro*C, use release_cursor=yes as this will free up any temporary segments held open.

4. Late row elimination
Queries are more likely to be performant if the bulk of the rows can be eliminated early in the plan. If this does not happen, then unnecessary comparisons may be made on rows that are simply eliminated later. This tends to increase CPU usage with no performance benefit.

If these rows can be eliminated early in the access path using a selective predicate then this may significantly enhance the query performance.

5. Over parsing
Over parsing implies that cursors are not being shared.

If statements are referenced multiple times then it makes sense to share them rather than fill up the shared pool with multiple copies of essentially the same statement. See:

Note 62143.1 Main issues affecting the Shared Pool on Oracle 7 and 8
Note 70075.1 Use of bind variables with CBO
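As an illustration of cursor sharing, a minimal sketch run from SQL*Plus against a hypothetical EMP table: each distinct literal value is parsed as a brand new statement, whereas a bind variable lets every execution reuse the same cursor.

-- Literals: two separate statements end up in the shared pool
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- Bind variable: one shared cursor, re-executed with different values
VARIABLE v_empno NUMBER
EXEC :v_empno := 7369
SELECT ename FROM emp WHERE empno = :v_empno;
EXEC :v_empno := 7499
SELECT ename FROM emp WHERE empno = :v_empno;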

6. Missing indexes/use of 'wrong' indexes
If indexes are missing on key columns then queries will have to use full table scans to retrieve data. Usually, indexes added for performance should support the selective predicates included in queries.

If an unselective index is chosen in preference to a selective one then potential solutions are:

RBO
- Indexes have an equal ranking, so row cache order is used. See Note 73167.1 Handling of equally ranked (RBO) or costed (CBO) indexes.

CBO
- Reanalyze with a higher sample size (see the sketch below).
- Add histograms if column data has an uneven distribution of values.
- Add hints to force use of the index you require.

Remember that index usage on a join can be affected by the join type and join order chosen. For more information on the use of indexes see Note 67522.1 Master Note: Diagnosing Why a Query is Not Using an Index.
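A sketch of such a reanalysis with DBMS_STATS, assuming a hypothetical SALES table owned by SCOTT whose STATUS column has a skewed distribution (owner, table, column and sizes are illustrative only):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'SALES',
    estimate_percent => 25,                             -- higher sample size than before
    method_opt       => 'FOR COLUMNS status SIZE 254',  -- histogram on the skewed column
    cascade          => TRUE);                          -- gather index statistics as well
END;
/

Where the optimizer still picks the wrong index after reanalysis, a hint of the form /*+ INDEX(t index_name) */ in the query (with your own table alias and index name) can force the one required.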

7. Wrong plan or join order selected
If the wrong plan has been selected then you may want to force the correct one.
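One way of doing this is with optimizer hints. A minimal sketch, assuming hypothetical tables A, B and C where the desired join order is A, then B, then C:

SELECT /*+ ORDERED USE_NL(b) INDEX(c c_pk) */
       a.col1, b.col2, c.col3
FROM   a, b, c
WHERE  a.id = b.a_id
AND    b.id = c.b_id;

The ORDERED hint joins the tables in the order they appear in the FROM clause, USE_NL requests a nested loops join to B, and INDEX forces the (hypothetical) C_PK index on C. Where the statement text cannot be changed at all, see the note listed in the References section.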

If the problem relates to an incorrect join order, then it often helps to draw out the tables, linking them together to show how they join, e.g.:

A-B-C-D-G-H
|
E-F

This can help with visualisation of the join order and identification of missing joins. When tuning a plan, try different join orders, examining the number of rows returned to get an idea of how good they may be.

8. Import estimating statistics on tables
Pre-8i, import performs an ANALYZE ESTIMATE STATISTICS on all tables that were analyzed when the tables were exported. This can result in different performance after an export/import.

From 8i onwards, more sampling functionality has been introduced, including the facility to extract statistics on export.

9. Insufficiently high sample rate for CBO
If the CBO does not have the correct statistical information then it cannot be expected to produce accurate results. Usually a sample size of 5% will be sufficient; however, in some cases it may be necessary for the optimizer to have more accurate statistics at its disposal. Please see:

Note 44961.1 Statistics Gathering: Frequency and Strategy Guidelines

for analysis recommendations.

10. Skewed data
If the column data distribution is non-uniform, then the use of column statistics in the form of histograms should be considered. Histogram statistics do not help with uniformly distributed data, or where no information about the column predicate is available, such as with bind variables.

11. New features forcing use of CBO
A number of new features are not implemented in the RBO and their presence in queries will force the use of the CBO. These include:

- Degree of parallelism set on any table in the query

- Index-only tables

- Partitioned tables

- Materialised views

See: Note 66484.1 Which Optimizer is Being Used?

for a more extensive list.

12. ITL contention
ITL contention can occur when there are not enough Interested Transaction List (ITL) entries in each block to support the update volume required. This can often occur after an export and import, especially when no update space has been left in the blocks and the number of ITL slots (INITRANS) has not been increased.
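A sketch of one possible remedy, assuming a hypothetical ORDERS table owned by SCOTT that is suffering ITL waits (the values are illustrative; a higher INITRANS only affects blocks formatted after the change, so existing blocks only pick it up if the table is rebuilt):

-- From 9.2 onwards, ITL waits per segment can be checked first
SELECT object_name, value
FROM   v$segment_statistics
WHERE  statistic_name = 'ITL waits'
AND    owner = 'SCOTT';

-- Allow more concurrent transactions per block (new blocks only)
ALTER TABLE scott.orders INITRANS 10;

-- Rebuild so that existing blocks are reformatted with the new value
-- (indexes on the table become unusable and must be rebuilt afterwards)
ALTER TABLE scott.orders MOVE INITRANS 10;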

See: Note 151473.1 INITRANS relationship with DB_BLOCK_SIZE.

References
NOTE:122812.1 - TROUBLESHOOTING: Tuning Queries That Cannot be Modified