Performance Implications With Complete Table/Mview Refresh

Upload: saeed-meethal

Post on 27-Nov-2015


TRANSCRIPT

Page 1: Performance Implications With Complete Refresh

Performance Implications With Complete Table/Mview Refresh

Page 2: Performance Implications With Complete Refresh

The issue

• Some databases have medium-sized dimension (lookup) tables/mviews that are regularly updated. Some of these updates amount to a complete or near-complete table/mview refresh.

• Some queries need to access those tables in full (via a full table scan, for example).

• When a complete refresh and such queries overlap in time, the queries can run far longer than usual with much higher IO usage.

• Here a medium-sized table typically has more than 1M records and hundreds of thousands of blocks.

Page 3: Performance Implications With Complete Refresh

Table/Mview Complete Refresh

• There are several scenarios for a table/mview complete refresh:

– MERGE, where almost all existing rows are updated and some new rows are added (TAOEXT).

– TRUNCATE then INSERT, so the content is entirely new (UAD EDWARD2 for POW tables).

– MVIEW complete refresh: DELETE then INSERT (GENTCL).
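The three refresh patterns above can be sketched as follows. This is only an illustration: the table names (dim_lookup, stage_lookup) and columns are hypothetical, and the mview name is taken from the GENTCL example later in this deck.

```sql
-- 1) MERGE: almost all existing rows updated, some new rows added (TAOEXT style)
MERGE INTO dim_lookup t
USING stage_lookup s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.name = s.name, t.updated = SYSDATE
WHEN NOT MATCHED THEN
  INSERT (id, name, updated) VALUES (s.id, s.name, SYSDATE);

-- 2) TRUNCATE then INSERT: content is entirely new; TRUNCATE generates no UNDO
TRUNCATE TABLE dim_lookup;
INSERT INTO dim_lookup SELECT * FROM stage_lookup;

-- 3) MVIEW complete refresh: by default an atomic DELETE then INSERT
BEGIN
  DBMS_MVIEW.REFRESH('JOE.LINE_ALL', method => 'C');
END;
/
```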

Page 4: Performance Implications With Complete Refresh

Timing

• The query starts after the complete refresh starts, but before the complete refresh commits.

• The complete refresh starts after the query starts, but some changes to the concerned table are made before the query has retrieved all the required data blocks.

• In both cases, Oracle consistent read requires the query to reconstruct the table data as it was before the changes.

Page 5: Performance Implications With Complete Refresh

Where to Read The Data?

• If the refresh generates UNDO during MERGE or DELETE/INSERT, Oracle has to read UNDO blocks to reconstruct the original data.

• When Oracle reads a block from disk or the buffer cache, it may find that one or more rows have been changed. In some cases a row may have been changed multiple times (for example, when a query lasts several hours and the table is refreshed hourly). Oracle has to look up UNDO blocks to restore the original row values.

• In the worst case, each changed row points to a different UNDO block.

• Reading UNDO is slow:

– It uses single-block reads, while a FTS on the target table can use multi-block reads such as direct path reads or db file scattered reads.

– Many more blocks must be read: possibly one block per row, or even more because of the UNDO chains from multiple changes. Without UNDO involved, one block read could fetch tens or even hundreds of rows.

Page 6: Performance Implications With Complete Refresh

How about TRUNCATE Then INSERT?

• TRUNCATE generates no UNDO for the truncated data.

• The data has to be reconstructed from the REDO logs.

• Reconstructing data from REDO is very expensive.

Page 7: Performance Implications With Complete Refresh

The Symptom

• How can we know that a slow query response time is caused by a table complete refresh?

– v$session or ASH shows significant waits on “db file sequential read” with row_wait_obj# or current_obj# as 0, and P1 (file number) pointing to the UNDO tablespace.

– v$session or ASH shows significant waits on “buffer busy waits” or “read by other session” with the block class (P3) being UNDO segment, when multiple queries run at the same time.

– v$session or ASH shows significant waits on “log file sequential read”. Note that “log file sequential read” is rarely seen in a regular user session, so if you see this event in a query session, Oracle has to reconstruct the data from REDO because it is not available from either the buffer cache or the data files.
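A minimal ASH check along these lines might look like the sketch below. The views and columns (v$active_session_history, dba_data_files) are standard, but the 10-minute window and the bind placeholders are arbitrary choices for illustration.

```sql
-- Which of the telltale wait events dominate recent samples, and on which file?
SELECT event, current_obj#, p1 AS file#, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 10/1440          -- last 10 minutes
AND    event IN ('db file sequential read',
                 'buffer busy waits',
                 'read by other session',
                 'log file sequential read')
GROUP  BY event, current_obj#, p1
ORDER  BY samples DESC;

-- Map a suspicious P1 (file number) to its tablespace to confirm it is UNDO
SELECT file_id, tablespace_name
FROM   dba_data_files
WHERE  file_id = :file#;
```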

Page 8: Performance Implications With Complete Refresh

Using v$sql_plan_monitor

• Check the columns output_rows, physical_read_requests and physical_read_bytes for the plan line that reads the concerned table.

• output_rows grows slowly, while physical_read_requests and physical_read_bytes grow far beyond the size of the table itself. Normally we should not expect physical_read_requests to exceed the total number of blocks in the concerned table. (One exception is chained or migrated rows in a table with very long columns and rows, such as PRDYCRM S_ORG_EXT.)
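A sketch of such a check is below. The sql_id substitution variable and the object name filter are placeholders; the v$sql_plan_monitor columns are as named in the slide.

```sql
-- Watch the plan line that scans the concerned table for a running statement
SELECT plan_line_id, plan_operation, plan_object_name,
       output_rows, physical_read_requests, physical_read_bytes
FROM   v$sql_plan_monitor
WHERE  sql_id = '&sql_id'
AND    plan_object_name = 'ORDER_LINE'   -- the concerned table (placeholder)
ORDER  BY plan_line_id;
```

If physical_read_requests on that line already exceeds the table's total block count while output_rows is still small, the extra reads are likely UNDO (or REDO) reconstruction rather than table data.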

Page 9: Performance Implications With Complete Refresh

Other Indicators

• Session stats (v$sesstat):

– data blocks consistent reads - undo records applied

– redo k-bytes read total
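These two statistics can be pulled for a suspect session as sketched below. The SID is a placeholder, and the exact statistic names can vary slightly between Oracle versions; the names used here are the ones quoted in this deck.

```sql
-- Per-session counters: UNDO records applied for CR, and REDO read back
SELECT sn.name, st.value
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
WHERE  st.sid = :sid
AND    sn.name IN ('data blocks consistent reads - undo records applied',
                   'redo k-bytes read total');
```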

Page 10: Performance Implications With Complete Refresh

Example

• UAD job 3120, table POW.ORDER_LINE

– POW.ORDER_LINE is refreshed by TRUNCATE then INSERT on an hourly basis. The table size is around 1.2M records and 34K blocks.

– Occasionally job 3120 runs for hours waiting on “log file sequential read”. In one check with v$sql_plan_monitor, Oracle used 2.8 hours and more than 9M read requests to read fewer than 700K rows (10,101 sec, 9,365,676 IO requests, 6,652,598MB of physical reads, for 687,352 rows).

– A session stats check at the same time showed: redo k-bytes read total = 6,772,589,456

Page 11: Performance Implications With Complete Refresh

Usually “log file sequential read” is not seen in user sessions.

Page 12: Performance Implications With Complete Refresh

No rows retrieved yet.

The IO read request count is 8,455,341, and the read size is 5,950,125MB.

Page 13: Performance Implications With Complete Refresh

Example

• GENTCL, MVIEW JOE.LINE_ALL complete refresh.

• MVIEW size: around 12M rows, 400K blocks. It also has 11 indexes.

• One SQL uses JOE.LINE_ALL three times, one of those with a FTS. The same query is executed from 7 colos, with each colo sending two queries at the same time.

• The average execution time is usually 6 minutes. During the mview complete refresh window, the average execution time is 102 minutes.

• Top waits: read by other session, db file sequential read and gc buffer busy acquire. We expected “db file scattered read” or “direct path read”.

• An interesting point here: because there were 14 concurrent executions and the data came mainly from UNDO, the total IO was much smaller than in the usual case where all executions use direct path reads.

• Session stats: the majority of LIO is from UNDO.

– consistent gets: 11,032,957

– data blocks consistent reads - undo records applied: 9,768,377

Page 14: Performance Implications With Complete Refresh

Note the object id is 0.

Page 15: Performance Implications With Complete Refresh

More On MVIEW Refresh With Indexes Present

• Lesson learned from GENTCL mview JOE.ALL_LINE: 12M rows, 400K blocks, 11 indexes, refreshed over a dblink (ruby).

• The complete refresh took more than 15 hours and failed (timed out at the remote DB with no session activity).

• A complete mview refresh is DELETE then INSERT.

• With 11 indexes enabled, both the DELETE and the INSERT are very IO-costly and slow.

• While the DELETE of table data can be done in parallel, the index deletion is done by the query coordinator (QC). With 11 indexes on, the majority of the IO activity is “db file sequential read”. For example, after nearly 3 hours (10,721 sec), with 11,914,475 rows read, the IO consumed by the table itself was 373,982 requests with 5,969MB, while the deletion plan step, with the indexes on, used 1,948,652 requests and 15,223MB.

• The INSERT step is even worse. After more than 3 hours (12,353 seconds), it had handled 3,809,374 rows, with 5,737,116 read requests and 44,821MB of data read.

• Recreating the MVIEW and adding the indexes later should be much faster.
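Two possible mitigations are sketched below. Both are assumptions about what would help here, not steps taken in the case above: (a) a non-atomic complete refresh replaces DELETE+INSERT with TRUNCATE+direct-path INSERT, at the cost of readers briefly seeing an empty mview; (b) marking the indexes unusable before the refresh and rebuilding them afterwards avoids row-by-row maintenance of 11 indexes (index names here are illustrative).

```sql
-- a) Non-atomic complete refresh: TRUNCATE + INSERT APPEND instead of DELETE + INSERT
BEGIN
  DBMS_MVIEW.REFRESH('JOE.LINE_ALL', method => 'C', atomic_refresh => FALSE);
END;
/

-- b) Disable index maintenance during the refresh, then rebuild
ALTER INDEX joe.line_all_ix1 UNUSABLE;   -- repeat for each of the 11 indexes
-- ... run the complete refresh ...
ALTER INDEX joe.line_all_ix1 REBUILD;    -- optionally PARALLEL, then NOPARALLEL
```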