
Oracle Business Intelligence Applications Version 7.9.6.x Performance Recommendations

An Oracle Technical Note, 8th Edition

May 2012

Primary Author: Pavel Buynitsky

Contributors: Eugene Perkov, Amar Batham, Nitin Aggarwal, Oksana Stepaneeva, Wasimraja Abdulmajeeth, Kirill Denisenko, Andrei Dzianisau, Aliaksander Kokhno, Andrei Hes, Scott Lowe, Siarhei Kulikouski, Valery Enyukov

Copyright © 2012, Oracle. All rights reserved.

Contents

Introduction
Hardware recommendations for implementing Oracle BI Applications
    Storage Considerations for Oracle Business Analytics Warehouse
        Introduction
        Shared Storage Impact Benchmarks
        Conclusion
    Source Tier
    Oracle BI Enterprise Edition (OBIEE) / ETL Tier
        Review of OBIEE/ETL Tier components
        Deployment considerations for the ETL components
    Target Tier
Source Environments Recommendations for Better Performance
    Change Data Capture Considerations for Source Databases
        Introduction
        Oracle Golden Gate
        Materialized View Logs
        Database Triggers on Source Tables
    Extract Workload Impact on Data Sources
        Allocate Sufficient TEMP Space in OLTP Data Sources
        Replicate Source Tables to Persistent Staging Layer on Target
        Utilize Target Resources to Speed up Extracts from Target Persistence Layer
    Custom Indexes in Oracle EBS for Incremental Loads Performance
        Introduction
        Custom OBIEE indexes in EBS 11i and R12 systems
        Custom EBS indexes in EBS 11i source systems
        Oracle EBS tables with high transactional load
        Custom EBS indexes on CREATION_DATE in EBS 11i source systems
Oracle Warehouse Recommendations for Better Performance
    Database configuration parameters
    Oracle RDBMS 64-bit Recommendation
    ETL impact on amount of generated REDO Logs
    Oracle RDBMS System Statistics
    Parallel Query configuration
    Oracle Business Analytics Warehouse Tablespaces
    Bitmap Indexes usage for better queries performance
        Introduction
        DAC properties for handling bitmap indexes during ETL
        Bitmap Indexes handling strategies
        Monitoring and Disabling Unused Indexes
        Handling Query Indexes during Initial ETL
    Partitioning guidelines for Large Fact tables
        Introduction
        Range and Composite Range-Range Partitioning
        Composite Range-Range Partitioning Using Virtual Columns
        Interval Partitioning
    Partitioning Pruning in Star Queries
        Partitioning Pruning and Star Transformation Scenarios
        Conclusion
    Table Compression implementation guidelines
        Table Compression Recommendations
        Row Chaining in Compressed Tables after DML Updates and Deletes
    ETL Aggregation using Materialized Views
        Introduction
        Implement DAC Action Framework Support for MVs
    Updates Optimization using DBMS_PARALLEL_EXECUTE (11gR2)
    Wide tables with over 255 columns performance
        Introduction
        Wide tables structure optimization
    Guidelines for Oracle optimizer hints usage in ETL mappings
        Hash Joins versus Nested Loops in Oracle RDBMS
        Oracle Database Hints Use in Oracle Business Intelligence Applications 7.9.6 Mappings
        Oracle Database Hints Use in Oracle Business Intelligence Applications 7.9.6.3 Mappings
        Using Oracle Optimizer Dynamic Sampling for big staging tables
Oracle BI Applications Best Practices for Oracle Exadata
    Handling BI Applications Indexes in Exadata Warehouse Environment
    Gather Table Statistics for BI Applications Tables
    Oracle Business Analytics Warehouse Storage Settings in Exadata
    Parallel Query Use in BI Applications on Exadata
    Compression Implementation for Oracle Business Analytics Warehouse in Exadata
    OBIEE Queries Performance Considerations on Exadata
    Exadata Smart Flash Cache
    Database Parameter File Template for Analytics Warehouse on Exadata
DB2 Warehouse Recommendations for Better Performance
    DB2 Warehouse Configuration
        Database Manager Level
        Database Level
        Database Registry
        Buffer Pools
        Table Spaces
    DB2 Recommendations and Best Practices
        Disabling Bulk Mode
        Avoiding 'Unsorted input found' Warning
        SIEBTRUN and SIEBSTAT Errors
        'The transaction log for the database is full' Error
    DB2 Index Usage Monitoring
        Introduction
        Implement Index Usage Monitoring
SQL Server Warehouse Recommendations for Better Performance
    SQL Server Index Monitoring using DMV
Informatica Configuration for Better Performance
    Informatica PowerCenter 32-bit vs. 64-bit
    Informatica Session Logs
    Informatica Lookups
    Disabling Lookup Cache for very large Lookups
    Joining Staging Tables to Lookup Tables in Informatica Lookups
    Informatica Custom Relational Connections for long running mappings
        Define Custom Relational Connections in DAC
        Define Custom Relational Connections in Informatica
    Informatica Session Parameters
        Commit Interval
        DTM Buffer Size
        Additional Concurrent Pipelines for Lookup Cache Creation
        Default Buffer Block Size
    Informatica Load: Bulk vs. Normal
    Informatica Bulk Load: Table Fragmentation
    Use of NULL Ports in Informatica Mappings
    Informatica Parallel Sessions Load on ETL tier
    Informatica Workflow Partitioning
        Workflow Session Partitioning for Writer Updates
        Requirements for Implementing Concurrent Updates
        Implement Staging Table HASH Partitioning
        Create Parallel Sessions in Workflow Manager
    Informatica Pipeline Partitioning
    Suspend and Resume Informatica Mappings (Oracle RDBMS)
    Oracle MERGE in Informatica to Improve Updates Performance
        MERGE SQL in Informatica Update Override
        MERGE in Post SQL in Update Override
        MERGE in Informatica SQL Transformation
    Informatica Load Balancing Implementation
OBIEE Queries Performance Recommendations
    Introduction
    OBIEE Configuration, Diagnostics and Performance Analysis
        OBIEE Logging Using LOGLEVEL=7
        OBIEE Init Blocks Overhead
        OBIEE Cache Optimization
        OBIEE Database Features
        OBIEE NQQuery.log Statistics
        Inadequate Filtering in OBIEE Reports
    OBIEE Queries Optimization Using Materialized Views
        Introduction
        Database Configuration Requirements for using MVs
        Custom Materialized View Guidelines
        Integrate MV Refresh in DAC Execution Plan
    OBIEE Queries Optimization Using Database Views
    OBIEE Reports with SYSDATE
        AVG with SYSDATE in OBIEE Reports
        AVG CASE with SYSDATE in OBIEE Reports
    OBIEE Reports With 'SELECT CASE COUNT DISTINCT'
Oracle BI Applications High Availability
    Introduction
    High Availability with Oracle Data Guard and Physical Standby Database
Conclusion


Introduction

Oracle Business Intelligence (BI) Applications Version 7.9.6 delivers a number of adapters to various business applications on Oracle database. The 7.9.6 versions are also certified with other major data warehousing platforms. Each Oracle BI Applications implementation requires very careful planning to ensure the best performance during ETL, end-user queries and dashboard executions.

This article discusses performance topics for Oracle BI Applications 7.9.6 and higher, using the Informatica PowerCenter 8.6.x and 9.x ETL platforms and Oracle Business Intelligence Enterprise Edition (OBIEE) 10.1.3.4.x and 11.1.1.x. Most of the recommendations are generic for BI Applications 7.9.6.x content and techstack. Release-specific topics refer to exact version numbers.

Note: This document is intended for experienced Oracle BI Administrators, DBAs and Applications implementers. It covers advanced performance tuning techniques in Informatica and Oracle RDBMS, so all recommendations must be carefully verified in a test environment before being applied to a production instance. Customers are encouraged to engage Oracle Expert Services to review their configurations prior to implementing the recommendations in their BI Applications environments.

Hardware recommendations for implementing Oracle BI Applications

Depending on the volume of source data, Oracle BI Applications Version 7.9.6 implementations can be categorized as small, medium and large. This chapter covers hardware recommendations primarily for ensuring ETL performance. Refer to the Oracle BI Applications documentation for minimum hardware requirements, and to the Oracle Business Intelligence Enterprise Edition (OBIEE) documentation for OBIEE hardware deployment and scalability topics.

Oracle Exadata (V2) has delivered the best performance for BI Applications ETL and OBIEE queries. Oracle BI Applications on Exadata showed the best ETL runtime and throughput. This document covers BI Applications / Exadata specific topics in a separate chapter. Refer to Oracle Exadata documents for the hardware configuration and specifications that will work best for your BI Applications implementation.

The Oracle Exalytics platform can effectively scale up OBIEE end-user query performance. Exalytics topics and best practices are covered in a separate document.

The table below summarizes hardware recommendations for the Oracle BI Applications tiers by volume range.

Configuration      SMALL                            MEDIUM                              LARGE

Target Tier
Target Volume      Up to 200 Gb                     200 Gb to 1 Tb                      1 Tb and higher
# CPU cores        16                               32                                  64*
Physical RAM       32-64 Gb                         64-128 Gb                           256+ Gb*
Storage Space      Up to 400 Gb                     400 Gb - 2 Tb                       2 Tb and higher
Storage System     Local (PATA, SATA, iSCSI) or     High performance SCSI or SAN        High performance SCSI or SAN
                   NAS, preferred RAID              with 16 Gbps HBA or higher,         with 24 Gbps HBA or higher,
                   configuration                    connected over fiber channel /      connected over fiber channel /
                                                    2xGb Ethernet NIC                   2xGb Ethernet NIC

Oracle BI Enterprise Edition / ETL Tier
# CPU cores        8                                16                                  32
Physical RAM       8 Gb                             16 Gb                               32 Gb
Storage Space      100 Gb local                     200 Gb local                        400 Gb local

* Consider implementing Oracle RAC with multiple nodes to accommodate large numbers of concurrent users accessing web reports and dashboards.

Important!

- Depending on the number of planned concurrent users running OBIEE reports, you may have to plan for more memory on the target tier to accommodate the query workload.
- To ensure query scalability on the OBIEE tier, consider implementing an OBIEE Cluster or Oracle Exalytics. Refer to OBIEE and Exalytics documentation for more details.
- It is recommended to set up all Oracle BI Applications tiers in the same local area network. Installing any of these three tiers over a Wide Area Network (WAN) may add delays to ETL Extract mapping execution and extend the overall ETL window.

Storage Considerations for Oracle Business Analytics Warehouse

Introduction

Oracle BI Applications ETL execution plans are optimized to maximize hardware utilization on the ETL and target tiers and reduce ETL runtime. A well-optimized infrastructure usually consumes more CPU and memory on the ETL tier and generates a rather heavy storage I/O load on the target tier during an ETL execution. The storage can easily become a major bottleneck as a result of actions such as:

- Setting excessive parallel query processes (refer to the 'Parallel Query Configuration' section for more details)
- Running multiple I/O-intensive applications, such as databases, on shared storage
- Choosing sub-optimal storage for running BI Applications tiers

Shared Storage Impact Benchmarks

Sharing storage among heavy I/O processes can easily degrade ETL performance and result in extended ETL runtime. The following benchmarks helped to measure the impact of sharing the same NetApp filer storage between two target databases concurrently loading data in two parallel ETL executions.

Configuration description:

- Linux servers #1 and #2 have the following configurations:
  o 2 quad-core 1.8 GHz Intel Xeon CPUs
  o 32 GB RAM
- Shared NetApp filer volumes, volume1 and volume2, are mounted as EXT3 file systems:
  o Server #1 uses volume1
  o Server #2 uses volume2

Execution test description:

- Set the record block size for I/O operations to 32k, the recommended db block size in a target database.
- Execute a parallel load using eight child processes to imitate the average workload during an ETL run.
- Run the following test scenarios:
  o Test #1: execute the parallel load above on NFS volume1 using Linux server #1; keep Linux server #2 idle.
  o Test #2: execute the parallel load above on both NFS volume1 and volume2 using Linux servers #1 and #2.

The following benchmarks describe performance measurements in KB/sec:

- Initial Write: write a new file.
- Rewrite: re-write an existing file.
- Read: read an existing file.
- Re-Read: re-read an existing file.
- Random Read: read a file with accesses made to random locations in the file.
- Random Write: write a file with accesses made to random locations in the file.
- Mixed Workload: read and write a file with accesses made to random locations in the file.
- Reverse Read: read a file backwards.
- Record Rewrite: write and re-write the same record in a file.
- Strided Read: read a file with a strided access pattern, for example: read at offset zero for a length of 4 KB, seek 200 KB, read for a length of 4 KB, seek 200 KB, and so on.

The test summary:

Test Type         Test #1               Test #2
Initial Write     46087.10 KB/sec       30039.90 KB/sec
Rewrite           70104.05 KB/sec       30106.25 KB/sec
Read              3134220.53 KB/sec     2078320.83 KB/sec
Re-Read           3223637.78 KB/sec     3038416.45 KB/sec
Reverse Read      1754192.17 KB/sec     1765427.92 KB/sec
Strided Read      1783300.46 KB/sec     1795288.49 KB/sec
Random Read       1724525.63 KB/sec     1755344.27 KB/sec
Mixed Workload    2704878.70 KB/sec     2456869.82 KB/sec
Random Write      68053.60 KB/sec       25367.06 KB/sec
Pwrite            45778.21 KB/sec       23794.34 KB/sec
Pread             2837808.30 KB/sec     2578445.19 KB/sec
Total Time        110 min               216 min

Page 9: Oracle Business Intelligence Applications Version 7.9.6.x ...docshare01.docshare.tips/files/23346/233460888.pdf · Oracle Business Intelligence Applications Version 7.9.6.x Performance

9

Initial Write, Rewrite, Read, Random Write, and Pwrite (buffered write operation) were impacted the most, while Reverse Read, Strided Read, Random Read, Mixed Workload, and Pread (buffered read operation) were impacted the least by the concurrent load.

Read operations do not require specific RAID sync-up operations, so read requests are less dependent on the number of concurrent threads.

Conclusion

You should carefully plan storage deployment, configuration and usage for the Oracle BI Applications environment. Avoid sharing the same RAID controller(s) across multiple databases. Set up periodic monitoring of your I/O system during both ETL and end-user query loads to catch potential bottlenecks.
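
As one option for such monitoring on an Oracle target, datafile read/write statistics can be sampled periodically from the standard dynamic performance views. The query below is an illustrative sketch, not part of the original recommendations:

-- Sample per-datafile I/O counts and times (times in centiseconds)
SELECT d.name     AS datafile,
       f.phyrds   AS physical_reads,
       f.phywrts  AS physical_writes,
       f.readtim  AS read_time_cs,
       f.writetim AS write_time_cs
  FROM v$filestat f, v$datafile d
 WHERE f.file# = d.file#
 ORDER BY f.phyrds + f.phywrts DESC;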

Source Tier

Oracle BI Applications data loads may cause additional CPU and memory overhead on a source tier. There may be a larger impact on the I/O subsystem, especially during full ETL loads. Using several I/O controllers, or a hardware RAID controller with multiple I/O channels, on the source side helps to minimize the impact on Business Applications during ETL runs and speeds up data extraction into a target data warehouse. Refer to the "Source Environments Recommendations for Better Performance" chapter for additional recommendations for OLTP data sources.

Oracle BI Enterprise Edition (OBIEE) / ETL Tier

Review of OBIEE/ETL Tier components

The Oracle BIEE/ETL tier is composed of the following parts:

- Oracle Business Intelligence Server 10.1.3.4.x or 11g
- Informatica PowerCenter 8.6.x or 9.x Client
- Informatica PowerCenter 8.6.x or 9.x Server
- Data Warehouse Administration Console (DAC) client 10.1.3.4.1
- Data Warehouse Administration Console server 10.1.3.4.1
- Informatica BI Applications Repository (usually stored in a target database)
- DAC BI Applications Repository (usually stored in a target database)

Deployment considerations for the ETL components

- The Informatica server and DAC server should be installed on a dedicated machine for best performance.
- The Informatica server and DAC server cannot be installed separately on different servers.
- The Informatica client and DAC client can be located on an ETL Administration client machine, or on a Windows server running the Informatica and DAC servers.
- Informatica and DAC repositories can be deployed as separate schemas in the same database as the Oracle Business Analytics Warehouse, if the target database platform is Oracle, IBM DB2 or Microsoft SQL Server.
- The Informatica server and DAC server host machine should be physically located near the source data machine to improve network performance.
- You can consider deploying the Informatica Load Balancing option if you observe bottlenecks in processing Informatica mappings on the ETL tier.


Target Tier

Refer to separate chapters for Oracle, IBM DB2 and Microsoft SQL Server Data Warehouse tier recommendations below.

Source Environments Recommendations for Better Performance

Change Data Capture Considerations for Source Databases

Introduction

Oracle BI Analytic Applications can use different techniques for capturing changed data in the source databases, minimizing the impact of ETL extracts on OLTP and improving incremental ETL performance. It effectively uses indexes on LAST_UPDATE_DATE columns in Oracle EBS and Image tables in Siebel CRM. However, some source databases may not have the required logic for capturing changed rows. As a result, incremental mappings would scan large tables, causing unnecessary workload on the source databases and extending incremental ETL runtime.

This chapter discusses the following custom Change Data Capture (CDC) options:

- Golden Gate
- Materialized View Logs (Oracle RDBMS)
- Database Triggers

Note: you have to update both the DAC and Informatica repositories to use the replicated persistent staging tables or materialized views instead of the original source tables in Informatica workflows and DAC execution plans.

Oracle Golden Gate

Introduction

Oracle Golden Gate (GG) provides the best flexibility and performance for CDC, and the least impact on source databases. It parses each captured record and marks it as an insert, update or delete. Golden Gate can be configured to capture changes for a small set of source tables, used as ETL source containers. Refer to the Golden Gate / OLTP source documentation for more details on integrating and configuring Golden Gate for your source database.

Initial ETL and Golden Gate sync-up

Initial ETL does not need to rely on Golden Gate, since it usually processes significant, if not all, source data volumes. To ensure a smooth switchover from initial ETL, using the source database, to incremental ETL, using Golden Gate (GG), you can:

1. Run the GG EXTRACT process on the source to capture changed data on the source database.
2. Run your initial ETL against the original data source. Note the completion time of the ETL before running any GG replication.
3. Run the GG REPLICAT process with the parameter HANDLECOLLISIONS on the target database to resolve data synchronization issues. For example:

START REPLICAT ora_rep

Sample ora_rep configuration:

REPLICAT ora_rep
USERID gg_replicat@ora, PASSWORD gg_replicat
HANDLECOLLISIONS
NOCOMPRESSUPDATES
ASSUMETARGETDEFS
INSERTALLRECORDS
MAP user_ext.employees, TARGET user_rep.employees, COLMAP (USEDEFAULTS);

4. Check the replication status on the target database. Repeat the INFO command until the Log Read Checkpoint time passes the initial ETL completion timestamp. For example:

INFO REPLICAT ora_rep
-----------------------------------------------------------------------------------
REPLICAT HR_R Last Started 2010-29-03 15:24 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:00 ago)
Log Read Checkpoint File D:\GG\dirdat\or000001
2010-29-03 15:26:35.114956 RBA 1536

5. Inform the REPLICAT process about data synchronization completion:

SEND REPLICAT ora_rep, NOHANDLECOLLISIONS USER_REP.employees

6. Remove the HANDLECOLLISIONS parameter from the REPLICAT process configuration file.

Golden Gate and Incremental ETL

Golden Gate can be used to replicate and maintain the specific source tables on a target, and to supply the auxiliary CDC information on DML type and timestamps. All the joins can then be done with the use of additional indexes, partitioning, parallelism and other techniques to achieve the best extraction performance without any impact on the original data source. A typical incremental ETL with GG involves the following steps:

1. The GG EXTRACT process tracks changed rows for the identified source tables from the source database redo log files.
2. The EXTRACT process sends the changed rows to the trail file.
3. The REPLICAT process applies the changed rows from the trail file and adds the CDC metadata (insert, update, delete).
4. When an incremental ETL starts, you can stop REPLICAT and restart it after the ETL completes.
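
For reference, a minimal EXTRACT parameter file mirroring the REPLICAT sample above could look as follows. The process name, trail path and schema names are illustrative assumptions, not values from this note:

EXTRACT ora_ext
USERID gg_extract@ora, PASSWORD gg_extract
-- Write captured changes to the local trail consumed by REPLICAT
EXTTRAIL ./dirdat/or
-- Capture changes only for the tables used by the extract mappings
TABLE user_ext.employees;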

Materialized View Logs

Introduction

Oracle Materialized View (MV) Logs capture the changed data in base source tables and supply the critical CDC volumes to the extract mappings.

Important! MV Logs present additional challenges when used in OLTP environments. You should carefully test MV Log based CDC before implementing it in your production environment.

Review the following constraints for using MV Logs:

1. MV Logs can add overhead to business transaction performance if created on heavy-volume transactional tables in busy OLTP sources.
2. Ensure regular MV refreshes to purge MV Logs. Otherwise they will grow in size and generate even more overhead for OLTP applications.
3. Avoid sharing an MV Log between two or more fast refreshable MVs. The MV Log will not be purged until all depending MVs are refreshed.
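
To verify that MV Logs are being purged, you can check the registered logs and count the pending change rows in the underlying MLOG$_ table. Below is a sketch for the PS_PROJ_RESOURCE example discussed in the next sections (MLOG$_ is Oracle's standard MV Log table prefix):

-- List MV Logs registered in the current schema
SELECT master, log_table FROM user_mview_logs;

-- Count changes accumulated since the last refresh
SELECT COUNT(*) FROM MLOG$_PS_PROJ_RESOURCE;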

Refer to Oracle documentation for more details on MV and MV Logs implementation.

The next sections use an example of an MV Log on PS_PROJ_RESOURCE in PeopleSoft to speed up the incremental extract for the SDE_PSFT_ProjectBudgetFact mapping.


MV Log CDC Implementation

The PeopleSoft ESA Application does not maintain a DTTM_STAMP column in PS_PROJ_RESOURCE, which is used in the SDE_PSFT_ProjectBudgetFact extract logic. As a result, the optimizer uses an expensive full table scan during an incremental extract SQL execution.

1. Create An MV log on PS_PROJ_RESOURCE source table:

CREATE MATERIALIZED VIEW LOG ON PS_PROJ_RESOURCE NOCACHE LOGGING NOPARALLEL WITH SEQUENCE;

2. Create a primary key (PK) constraint, based on PS_PROJ_RESOURCE’s unique index

ALTER TABLE PS_PROJ_RESOURCE ADD CONSTRAINT PS_PROJ_RESOURCE_PK PRIMARY KEY

(BUSINESS_UNIT,PROJECT_ID,ACTIVITY_ID,RESOURCE_ID) USING INDEX PS_PROJ_RESOURCE;

3. Create a Materialized View using PS_PROJ_RESOURCE definition and an additional LAST_UPDATE_DT column. The latter will be populated using SYSDATE values:

CREATE TABLE OBIEE_PS_PROJ_RESOURCE_MV AS SELECT * FROM PS_PROJ_RESOURCE WHERE 1=2;

ALTER TABLE OBIEE_PS_PROJ_RESOURCE_MV ADD (LAST_UPDATE_DT DATE DEFAULT SYSDATE);

CREATE MATERIALIZED VIEW OBIEE_PS_PROJ_RESOURCE_MV

ON PREBUILT TABLE

REFRESH FAST ON DEMAND

AS SELECT * FROM PS_PROJ_RESOURCE;

4. Create an index on the MV LAST_UPDATE_DT:

CREATE INDEX OBIEE_PS_PROJ_RESOURCE_I1 ON OBIEE_PS_PROJ_RESOURCE_MV(LAST_UPDATE_DT);

5. Create a database view on the MV, which will be used in the SDE Fact Source Qualifier query:

CREATE VIEW OBIEE_PS_PROJ_RESOURCE_VW AS SELECT * FROM OBIEE_PS_PROJ_RESOURCE_MV;

6. Run the complete refresh for the MV. The subsequent daily ETLs will perform fast refresh using the MV Log.

exec dbms_mview.refresh(‘OBIEE_PS_PROJ_RESOURCE_MV’,’C’);

7. Update the SDE fact extract logic and replace the original table with the MV, and add an additional filter:

LAST_UPDATE_DT > to_date('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')
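
After step 7, the incremental Source Qualifier query takes roughly the following shape (a sketch; the actual select list comes from the existing SDE_PSFT_ProjectBudgetFact mapping):

SELECT PR.*
  FROM OBIEE_PS_PROJ_RESOURCE_VW PR
 WHERE PR.LAST_UPDATE_DT > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')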

DAC Changes to Support MV Refresh in an Execution Plan

Create Materialized View Refresh Task Action

1. Open the DAC Client and navigate to Tools -> Seed Data -> Actions -> Task Actions.
2. Click the 'New' button to create a new Task Action "Fast Refresh Materialized View".
3. Click the 'Check Box' icon in the Value field.
4. Click the Add button and enter the following values in the right upper pane:
   - Name: OBIEE PS Materialized View Creation
   - Type: SQL
   - Database Connection: Target
   - Table Type: All Target
   - Valid Database Platforms: Oracle
5. Enter the following text in the 'SQL Statement' tab in the right lower pane:

BEGIN
  DBMS_MVIEW.REFRESH('getTableName()', 'F');
END;

6. Click OK to save the changes.

Register Materialized Views

1. Open your PeopleSoft container and click the Design button -> Table tab in the right pane.
2. Click 'New' and add OBIEE_PS_PROJ_RESOURCE_MV.
3. Choose the table type Source.
4. Add a LAST_UPDATE_DT (DATE datatype) field to the table definition.
5. Save the changes.

Define Task

1. Open your PeopleSoft container, click on the Design -> Tasks tab -> 'New' button.
2. Create a new task 'Refresh_OBIEE_PS_PROJ_RESOURCE_MV' and fill in the values per the screenshot below.
3. Click on the Sources tab and add the 'PROJ_RESOURCE' source.
4. Click on the Targets tab and add the 'OBIEE_PS_PROJ_RESOURCE_MV' target.
5. Check the 'Analyze' checkbox.
6. Save the changes.


Modify Tasks

1. Open your PeopleSoft container, click on the Design -> Tasks tab and query each of the depending tasks:
   a. SDE_PSFT_ProjectBudgetFact
   b. SDE_PSFT_ProjectCostLineFact
   c. SDE_PSFT_ProjectRevenueLineFact
2. For each of the tasks above, click the Sources tab, remove PROJ_RESOURCE and add OBIEE_PS_PROJ_RESOURCE_MV.
3. Save the changes.
4. Open your PeopleSoft container and ensure that all the affected tasks are Active. If not, mark them as Active.

Create Task Group

1. Open your PeopleSoft container and click on the Design -> Task Groups tab.

2. Click 'New', create a new task group 'TASK_GROUP_OBIEE_Load_ProjectFacts', and add all the tasks in the correct order, as shown in the screenshot below.

Modify Subject Area

1. Open your PeopleSoft container, click on the Design -> Subject Areas tab and query your Projects subject area.
2. Click on the 'Configuration Tags' tab, remove the three tags for PersistedStage and add three tags for NonPersistedStage. Refer to the screenshot below.
3. Save the changes.


Rebuild Execution Plan

Reassemble your Subject Areas and rebuild your Execution plan with the new dependencies. Validate the correct order of the tasks in the Execution plan.

Refer to BI Analytic Applications Administration Guide, chapter "Customizing DAC Objects and Designing Subject Areas" for more details.

Database Triggers on Source Tables

You can consider database triggers to capture new and updated records and populate auxiliary tables in a source database. This option requires careful implementation to minimize the overhead on OLTP environments, especially for high-volume transaction tables.

Here is an example of such a trigger for an Oracle database:

CREATE OR REPLACE TRIGGER CDC_Trigger
AFTER UPDATE OR INSERT ON Base_Table
FOR EACH ROW
BEGIN
  IF INSERTING THEN
    -- Register the new row and its capture timestamp in the auxiliary CDC table
    INSERT INTO AUX_TABLE VALUES (:new.TEST_ID, SYSTIMESTAMP);
  END IF;
  IF UPDATING THEN
    -- Refresh the capture timestamp for an already registered row
    UPDATE AUX_TABLE SET LAST_UPDATE_DATE = SYSTIMESTAMP WHERE TEST_ID = :new.TEST_ID;
  END IF;
END;
/

Review the additional considerations below:

- Ensure data integrity between the primary source and auxiliary CDC tables in your design.
- Consider adding a unique index on the auxiliary CDC table's primary key column to speed up updates (see the sketch below).
- Carefully measure the impact on your source OLTP workload before you choose the trigger CDC approach, as it can easily generate significant overhead and impact transactional business users.
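
To make the trigger example self-contained, the auxiliary table and the unique index mentioned above could be defined as follows. This is a sketch: the column names simply mirror the trigger, not a prescribed layout:

-- Auxiliary CDC table populated by CDC_Trigger
CREATE TABLE AUX_TABLE (
  TEST_ID          NUMBER NOT NULL,               -- key of the captured base table row
  LAST_UPDATE_DATE TIMESTAMP DEFAULT SYSTIMESTAMP -- capture timestamp
);

-- Unique index on the key column to speed up the trigger's updates
CREATE UNIQUE INDEX AUX_TABLE_U1 ON AUX_TABLE (TEST_ID);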

Extract Workload Impact on Data Sources

ETL workload impact on OLTP data sources is one of the critical factors in ETL optimization and performance. ETL Administrators may face constraints on creating additional custom indexes on source tables, or on employing database parallel processing to speed up their incremental ETLs. On the other hand, the target hardware, sized to handle a much larger workload from end-user queries, can utilize more resources to offload the data source and deliver critical improvements within incremental ETL windows.

If you identify critical extract mappings and cannot use more OLTP data source resources, consider replicating the source data segments and any additional source objects to the target tier.

This section summarizes the high-level steps without providing step-by-step examples, since most steps are already covered in other chapters of this document.

Allocate Sufficient TEMP Space in OLTP Data Sources

Oracle BI Applications extract mappings may operate on large data volumes, compared to the small changes from OLTP transactional activities. As a result, OLTP data sources could run out of TEMP space during heavy-volume initial extracts. The required source TEMP space varies by OLTP size and processed volumes; the recommended TEMP space for BI Applications ETL ranges from 100 Gb to 1 Tb. You should allocate sufficient storage for additional TEMP space in an OLTP environment. It is more practical to reclaim unused TEMP space after large-volume ETL extracts complete than to restart long-running mappings from the very beginning because of a TEMP space shortage during the ETL.
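
TEMP consumption on an Oracle source can be checked during large extracts with a query against the standard dynamic views, for example (shown for illustration only):

-- Used vs. free space per temporary tablespace, in megabytes
SELECT tablespace_name,
       ROUND(SUM(bytes_used) / 1024 / 1024) AS used_mb,
       ROUND(SUM(bytes_free) / 1024 / 1024) AS free_mb
  FROM v$temp_space_header
 GROUP BY tablespace_name;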

Replicate Source Tables to Persistent Staging Layer on Target

If you observe significant load on the OLTP source environment from some extract mappings, and you face constraints on implementing a change data capture mechanism, consider replicating the participating source objects to the target warehouse. You can create a persistent staging table (_PS) for each source table replica in a separate database schema on the data warehouse tier. This document already covered Golden Gate as an option for change data capture and source table replication.

You can also use Informatica to put together simple mappings, which replicate the source table attributes from the SELECT and WHERE clauses to a smaller table on the target tier:

- Create a separate Informatica mapping for each source table replica on the target tier. It will capture incremental changes as part of the source table extraction logic.
- Implement the logic to cover inserts and updates. You can use the Informatica Update Strategy transformation to perform its default insert and update DMLs.
- If there are dependencies on other objects such as views, packages, etc., recreate them in your target persistence layer as well.
- Seed the runtime dependencies in DAC to execute the source table replication mappings concurrently.
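
The insert/update logic such a replication mapping implements is equivalent to the following MERGE statement. This is a SQL sketch with hypothetical schema, table and column names; the approach above performs the same DMLs through an Informatica Update Strategy transformation rather than database SQL:

MERGE INTO dwh_ps.source_table_ps t
USING (SELECT pk_col, attr1, last_update_date
         FROM oltp.source_table
        WHERE last_update_date > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')) s
   ON (t.pk_col = s.pk_col)
 WHEN MATCHED THEN UPDATE
      SET t.attr1 = s.attr1, t.last_update_date = s.last_update_date
 WHEN NOT MATCHED THEN INSERT (pk_col, attr1, last_update_date)
      VALUES (s.pk_col, s.attr1, s.last_update_date);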

Utilize Target Resources to Speed up Extracts from Target Persistence Layer

The replicated persistent staging tables (_PS) will be smaller than their original parent source tables, since the _PS objects most probably have fewer columns. Additionally, you can add the desired indexes to improve extract performance.

Implementing partitioning for _PS tables can help to parallelize the extract logic. You can further multiplex the logic by running the extracts in multiple sessions in a single workflow.

The following example shows the high-level steps to improve extract performance for SDE_ORA_BomItemFact by moving the extract logic to the target and multiplexing the extracts using Informatica.

SDE_ORA_BomItemFact uses the following EBS source tables:


BOM_COMPONENTS_B

BOM_STRUCTURES_B

BOM_PARAMETERS

FND_LOOKUP_VALUES

MTL_SYSTEM_ITEMS_B

Additionally, it uses a custom CONNECT BY PL/SQL API to explode BOM items for each BOM header. The original source BOM Explosion API may cause more workload on the OLTP source, hence an incremental ETL will use the custom CONNECT BY API to handle larger volumes for BOM item explosion.

The proposed changes are:

1. Create the identified source dependencies, including the custom CONNECT BY API, on the target tier.
2. Create an Informatica replication workflow for each of the tables above using LAST_UPDATE_DATE CDC logic.
3. Implement partitioning for W_BOM_HEADER_DS using the ORG_ID and BOM_ITEM keys. You should analyze the data distribution for these two key value combinations. For example:
   - Org_id = 100 & bom_item = 4
   - Org_id = 100 & bom_item != 4
   - Org_id = 200 & bom_item = 4
   - Org_id = 200 & bom_item != 4
   - Org_id = 300 & bom_item = 4 (*)
   - Org_id = 300 & bom_item != 4
   - Org_id = the rest & bom_item = 4
   - Org_id = the rest & bom_item != 4
   (*) split this combination into two, as it takes the longest time to complete
4. Multiplex the Informatica sessions to invoke the CONNECT BY API for each ORG_ID and BOM_ITEM combination.
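
A composite LIST-LIST layout matching the combinations above could look like the sketch below, which assumes only the two driving columns; the remaining columns follow the standard W_BOM_HEADER_DS definition:

CREATE TABLE W_BOM_HEADER_DS (
  ORG_ID   NUMBER,
  BOM_ITEM NUMBER
  -- remaining W_BOM_HEADER_DS columns per the standard definition
)
PARTITION BY LIST (ORG_ID)
SUBPARTITION BY LIST (BOM_ITEM)
SUBPARTITION TEMPLATE (
  SUBPARTITION bom_item_4     VALUES (4),
  SUBPARTITION bom_item_other VALUES (DEFAULT))
( PARTITION org_100  VALUES (100),
  PARTITION org_200  VALUES (200),
  PARTITION org_300  VALUES (300),
  PARTITION org_rest VALUES (DEFAULT));

Each Informatica session can then target one (ORG_ID, BOM_ITEM) combination, so the sessions read disjoint partitions and can run concurrently.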

Custom Indexes in Oracle EBS for Incremental Loads Performance

Introduction

Oracle EBS source database tables contain mandatory LAST_UPDATE_DATE columns, which are used by Oracle BI Applications for capturing incremental data changes. Some source tables used by Oracle BI Applications do not have an index on the LAST_UPDATE_DATE column, which hampers the performance of incremental loads. There are three categories of such source EBS tables:

- Tables that do not have indexes on LAST_UPDATE_DATE in the latest EBS releases, and for which no performance implications have been reported with indexes on the LAST_UPDATE_DATE column.
- Tables that have indexes on LAST_UPDATE_DATE columns, introduced in Oracle EBS Release 12.
- Tables that cannot have indexes on LAST_UPDATE_DATE because of serious performance degradations in the source EBS environments.

Custom OBIEE indexes in EBS 11i and R12 systems

The first category covers tables which do not have indexes on LAST_UPDATE_DATE in any EBS release. The creation of custom indexes on LAST_UPDATE_DATE columns for tables in this category has been reviewed and approved by Oracle's EBS Performance Group. All Oracle EBS 11i and R12 customers should create the custom indexes using the DDL script provided below.

If your source system is one of the following:

- EBS R12
- EBS 11i release 11.5.10
- EBS 11i release 11.5.9 or lower and it has been migrated to OATM*

then replace <IDX_TABLESPACE> with APPS_TS_TX_IDX prior to running the DDL.

If your source system is EBS 11i release 11.5.9 or lower and it has not been migrated to OATM*, replace <IDX_TABLESPACE> with <PROD>X, where <PROD> is the owner of the table which will be indexed on the LAST_UPDATE_DATE column.

DDL script for custom index creation:

CREATE INDEX AP.OBIEE_AP_EXP_REP_HEADERS_ALL ON AP.AP_EXPENSE_REPORT_HEADERS_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AP.OBIEE_AP_INVOICE_PAYMENTS_ALL ON AP.AP_INVOICE_PAYMENTS_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AP.OBIEE_AP_PAYMENT_SCHEDULES_ALL ON AP.AP_PAYMENT_SCHEDULES_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AP.OBIEE_AP_INVOICES_ALL ON AP.AP_INVOICES_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AP.OBIEE_AP_HOLDS_ALL ON AP.AP_HOLDS_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AP.OBIEE_AP_AE_HEADERS_ALL ON AP.AP_AE_HEADERS_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX CST.OBIEE_CST_COST_TYPES ON CST.CST_COST_TYPES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX GL.OBIEE_GL_JE_HEADERS ON GL.GL_JE_HEADERS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AR.OBIEE_HZ_ORGANIZATION_PROFILES ON AR.HZ_ORGANIZATION_PROFILES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AR.OBIEE_HZ_CONTACT_POINTS ON AR.HZ_CONTACT_POINTS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AR.OBIEE_HZ_CUST_SITE_USES_ALL ON AR.HZ_CUST_SITE_USES_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AR.OBIEE_HZ_LOCATIONS ON AR.HZ_LOCATIONS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AR.OBIEE_HZ_RELATIONSHIPS ON AR.HZ_RELATIONSHIPS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AR.OBIEE_HZ_CUST_ACCT_SITES_ALL ON AR.HZ_CUST_ACCT_SITES_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AR.OBIEE_HZ_CUST_ACCOUNT_ROLES ON AR.HZ_CUST_ACCOUNT_ROLES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AR.OBIEE_HZ_PARTY_SITES ON AR.HZ_PARTY_SITES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AR.OBIEE_HZ_PERSON_PROFILES ON AR.HZ_PERSON_PROFILES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX ONT.OBIEE_OE_ORDER_HEADERS_ALL ON ONT.OE_ORDER_HEADERS_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX ONT.OBIEE_OE_ORDER_HOLDS_ALL ON ONT.OE_ORDER_HOLDS_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX PER.OBIEE_PAY_INPUT_VALUES_F ON PER.PAY_INPUT_VALUES_F (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX PER.OBIEE_PAY_ELEMENT_TYPES_F ON PER.PAY_ELEMENT_TYPES_F (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX PO.OBIEE_RCV_SHIPMENT_LINES ON PO.RCV_SHIPMENT_LINES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX PO.OBIEE_RCV_SHIPMENT_HEADERS ON PO.RCV_SHIPMENT_HEADERS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX AR.OBIEE_AR_CASH_RECEIPTS_ALL ON AR.AR_CASH_RECEIPTS_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX WSH.OBIEE_WSH_DELIVERY_DETAILS ON WSH.WSH_DELIVERY_DETAILS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE INDEX WSH.OBIEE_WSH_NEW_DELIVERIES ON WSH.WSH_NEW_DELIVERIES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

There is one more custom index, recommended for Supply Chain Analytics, on the AP_NOTES.SOURCE_OBJECT_ID column:

CREATE INDEX AP.OBIEE_AP_NOTES ON AP.AP_NOTES (SOURCE_OBJECT_ID) tablespace <IDX_TABLESPACE>;

Important! You must use FND_STATS to compute statistics on the newly created indexes and to update statistics on the newly indexed table columns in the EBS database.
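
For example, statistics for one of the new indexes and its table could be gathered as follows (a sketch using the standard FND_STATS API; adjust owner and object names to your environment):

BEGIN
  -- Refresh statistics on the newly indexed table columns
  FND_STATS.GATHER_TABLE_STATS(ownname => 'AP', tabname => 'AP_INVOICES_ALL');
  -- Compute statistics on the new custom index
  FND_STATS.GATHER_INDEX_STATS(ownname => 'AP', indname => 'OBIEE_AP_INVOICES_ALL');
END;
/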

Important! All indexes introduced in this section have the prefix "OBIEE_" and do not follow the standard Oracle EBS index naming conventions. If a future Oracle EBS patch creates an index on the LAST_UPDATE_DATE column of any table listed above, Oracle EBS's Autopatch may fail. In such cases the conflicting OBIEE_ indexes must be dropped, and Autopatch can be restarted.

Custom EBS indexes in EBS 11i source systems

The second category covers tables which have indexes on LAST_UPDATE_DATE officially introduced in Oracle EBS Release 12. All Oracle EBS 11i and R12 customers should create the custom indexes using the DDL script provided below. Do not change the index names, to avoid any future patch or upgrade failures on the source EBS side.

If your source system is one of the following:

- EBS R12

- EBS 11i release 11.5.10

- EBS 11i release 11.5.9 or lower and it has been migrated to OATM*

then replace <IDX_TABLESPACE> with APPS_TS_TX_IDX prior to running the DDL.


If your source system is EBS 11i release 11.5.9 or lower and it has not been migrated to OATM*, replace <IDX_TABLESPACE> with <PROD>X, where <PROD> is the owner of the table to be indexed on the LAST_UPDATE_DATE column.

DDL script for custom index creation:

CREATE index PO.RCV_TRANSACTIONS_N23 ON PO.RCV_TRANSACTIONS (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE> ;

CREATE index PO.PO_DISTRIBUTIONS_N13 ON PO.PO_DISTRIBUTIONS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE> ;

CREATE index PO.PO_LINE_LOCATIONS_N11 ON PO.PO_LINE_LOCATIONS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE> ;

CREATE index PO.PO_LINES_N10 ON PO.PO_LINES_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 4K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE> ;

CREATE index PO.PO_REQ_DISTRIBUTIONS_N6 ON PO.PO_REQ_DISTRIBUTIONS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE> ;

CREATE index PO.PO_REQUISITION_LINES_N17 ON PO.PO_REQUISITION_LINES_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE> ;

CREATE index PO.PO_HEADERS_N9 ON PO.PO_HEADERS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 1M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE> ;

CREATE index PO.PO_REQUISITION_HEADERS_N6 ON PO.PO_REQUISITION_HEADERS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE> ;

CREATE index AR.RA_CUSTOMER_TRX_N14 ON AR.RA_CUSTOMER_TRX_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 4M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE> ;

Important! You should use FND_STATS to compute statistics on the newly created indexes and update statistics on

newly indexed table columns in the EBS database.

Since all custom indexes above follow the standard Oracle EBS index naming conventions, future upgrades will not be affected.

*) Oracle Applications Tablespace Model (OATM):

Oracle EBS release 11.5.9 and lower uses two tablespaces for each Oracle Applications product, one for the tables and

one for the indexes. The old tablespace model standard naming convention for tablespaces is a product's Oracle

schema name with the suffixes D for Data tablespaces and X for Index tablespaces. For example, the default

tablespaces for Oracle Payables tables and indexes are APD and APX, respectively.

Oracle EBS 11.5.10 and R12 use the new Oracle Applications Tablespace Model. OATM uses 12 locally managed

tablespaces across all products. Indexes on transaction tables are held in a separate tablespace APPS_TS_TX_IDX,

designated for transaction table indexes.

Customers running pre-11.5.10 releases can migrate to OATM using the OATM Migration utility. Refer to Oracle Support Note 248857.1 for more details.


Oracle EBS tables with high transactional load

The following Oracle EBS tables are used for high volume transactional data processing, so introducing indexes on LAST_UPDATE_DATE may cause additional overhead for some OLTP operations. For the majority of customer implementations the changes will not have any significant impact on OLTP application performance. Oracle BI Applications customers may consider creating custom indexes on LAST_UPDATE_DATE for these tables only after benchmarking incremental ETL performance and analyzing the impact on OLTP applications.

To analyze the impact on the EBS source database, you can generate an Automatic Workload Repository (AWR) report during the execution of OLTP batch programs that produce heavy inserts / updates into the tables below, and review the Segment Statistics section for resource contention caused by the custom LAST_UPDATE_DATE indexes. Refer to Oracle RDBMS documentation for more details on AWR usage.
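As a minimal sketch, assuming AWR is licensed and enabled, you can bracket the OLTP batch window with manual snapshots and then generate the report:

SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- run the OLTP batch programs producing heavy inserts / updates
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
SQL> @?/rdbms/admin/awrrpt.sql
-- choose the two snapshot ids above and review the Segment Statistics section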

Make sure you use the following pattern for creating custom indexes on the tables listed below (a worked example follows the table):

CREATE index <Prod>.OBIEE_<Table_Name> ON <Prod>.<Table_Name> (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE> ;

Prod Table Name

AP AP_EXPENSE_REPORT_LINES_ALL

AP AP_INVOICE_DISTRIBUTIONS_ALL

AP AP_AE_LINES_ALL

AP AP_PAYMENT_HIST_DISTS

AR AR_PAYMENT_SCHEDULES_ALL

AR AR_RECEIVABLE_APPLICATIONS_ALL

AR RA_CUST_TRX_LINE_GL_DIST_ALL

AR RA_CUSTOMER_TRX_LINES_ALL

BOM BOM_COMPONENTS_B

BOM BOM_STRUCTURES_B

CST CST_ITEM_COSTS

GL GL_BALANCES

GL GL_DAILY_RATES

GL GL_JE_LINES

INV MTL_MATERIAL_TRANSACTIONS

INV MTL_SYSTEM_ITEMS_B

ONT OE_ORDER_LINES_ALL

PER PAY_PAYROLL_ACTIONS

PO RCV_SHIPMENT_LINES

WSH WSH_DELIVERY_ASSIGNMENTS

WSH WSH_DELIVERY_DETAILS
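
For example, applying this pattern to ONT.OE_ORDER_LINES_ALL from the list above yields:

CREATE index ONT.OBIEE_OE_ORDER_LINES_ALL ON ONT.OE_ORDER_LINES_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE> ;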

Custom EBS indexes on CREATION_DATE in EBS 11i source systems

Oracle EBS source database tables contain another mandatory column CREATION_DATE, which can be used by Oracle BI

Applications for capturing initial data subsets. You may consider creating custom indexes on CREATION_DATE if your initial

ETL extracts a subset of historic data. You can follow the same guidelines for creating custom indexes on CREATION_DATE columns to improve initial ETL performance, after careful benchmarking of the EBS source environment.
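For example, a sketch of such an index for AR.RA_CUSTOMER_TRX_ALL, following the same OBIEE_ naming pattern (the index name and the choice of table are illustrative; benchmark before creating it):

CREATE index AR.OBIEE_RA_CUSTOMER_TRX_CD ON AR.RA_CUSTOMER_TRX_ALL (CREATION_DATE) tablespace <IDX_TABLESPACE> ;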


Oracle Warehouse Recommendations for Better Performance

Database configuration parameters

Oracle Business Intelligence Applications version 7.9.6 is certified with Oracle RDBMS 10g and 11g. Since Oracle BI Applications

extensively use bitmap indexes, partitioned tables, and other database features in both ETL and front-end query logic, it is

important that Oracle BI Applications customers install the latest database releases for their Data Warehouse tiers:

- Oracle 10g customers should use Oracle 10.2.0.5 or higher.

- Oracle 11g customers should use Oracle 11.1.0.7 or higher.

Important! Oracle 10.2.0.1 customers must upgrade their Oracle Business Analytics Warehouses to the latest Patchset.

Oracle BI Applications include template init.ora files with recommended and required parameters, located in the

<ORACLEBI_HOME>\dwrep\Documentation\ directory:

- init10gR2.ora - init.ora template for Oracle RDBMS 10g

- init11g.ora – init.ora template for Oracle RDBMS 11g

- init11gR2.ora – init.ora template for Oracle RDBMS 11gR2

Review an appropriate init.ora template file and follow its guidelines to configure target database parameters specific to your

data warehouse tier hardware.

Note: init.ora template for Exadata / 11gR2 is provided in Exadata section of this document.

Oracle RDBMS 64-bit Recommendation

Oracle strongly recommends deploying Oracle Business Analytics Warehouse on Oracle RDBMS 64-bit, running under 64-bit

Operating System (OS). If 64-bit OS is not available, then consider implementing Very Large Memory (VLM) on Unix / Linux and

Address Windowing Extensions (AWE) for Windows 32 bit Platforms. VLM/AWE implementations would increase database

address space to allow for more database buffers or a larger indirect data buffer window. Refer to Oracle Metalink for VLM /

AWE implementation for your platform.

Note: You cannot use sga_target or db_cache_size parameters if you enable VLM / AWE by setting

'use_indirect_data_buffers = true'. You would have to manually resize all SGA memory components and use

db_block_buffers instead of db_cache_size to specify your data cache.
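A minimal init.ora sketch for such a 32-bit VLM configuration (all sizes are placeholders, not recommendations):

use_indirect_data_buffers = true
db_block_buffers = 262144   # data cache sized in blocks, replacing db_cache_size
shared_pool_size = 400M     # size each SGA component manually; do not set sga_target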

ETL impact on amount of generated REDO Logs

Initial ETL may cause higher than usual generation of REDO logs when loading large data volumes into a data warehouse database. If your target database is configured to run in ARCHIVELOG mode, you can consider two options:

1. Switch the database to NOARCHIVELOG mode, execute Initial ETL, take a cold backup and switch the database back to

ARCHIVELOG mode.

2. Allocate up to 10-15% of additional space to accommodate archived REDO logs during Initial ETL.
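
For option 1, a sketch of the NOARCHIVELOG switch (it requires an instance restart):

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> ALTER DATABASE OPEN;
-- execute Initial ETL and take a cold backup, then repeat the steps with ALTER DATABASE ARCHIVELOG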

Below is a calculation of the amount of REDO generated during an internal initial ETL run:

redo log file sequence:

start : 641 (11 Jan 21:10)

end : 1624 (12 Jan 10:03)

total # of redo logs : 983

log file size : 52428800

redo generated: 983*52428800 = 51537510400 (48 GB)

Data loaded in the warehouse:

SQL> select sum(bytes)/1024/1024/1024 Gb from dba_segments
     where owner = 'DWH' and segment_type = 'TABLE';

        Gb
----------
    280.49
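To measure the archived redo generated during your own ETL window, a query along these lines can be used while the database runs in ARCHIVELOG mode (the bind variables stand for your ETL start and end timestamps):

SQL> select sum(blocks * block_size)/1024/1024/1024 redo_gb
     from v$archived_log
     where first_time between :etl_start and :etl_end;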

Oracle RDBMS System Statistics

Oracle introduced workload system statistics in Oracle 9i to gather important information about the system, such as single and multiple block read times, CPU speed, and various system throughputs. The optimizer takes system statistics into account when it computes the cost of query execution plans. Failure to gather workload statistics may result in sub-optimal execution plans for queries, excessive temporary space consumption, and ultimately degraded BI Applications performance.

Oracle BI Applications customers are required to gather workload statistics on both source and target Oracle databases prior

to running initial ETL.

Oracle recommends two options to gather system statistics:

- Run the dbms_stats.gather_system_stats('start') procedure at the beginning of the workload window, then the

dbms_stats.gather_system_stats('stop') procedure at the end of the workload window.

- Run dbms_stats.gather_system_stats('interval', interval=>N), where N is the number of minutes after which statistics gathering stops automatically.

Important! Execute dbms_stats.gather_system_stats when the database is not idle. Oracle computes the desired system statistics when the database is under significant workload. Usually half an hour is sufficient to generate valid statistics values.
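For example (the 30-minute interval is illustrative):

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval => 30);

or, bracketing the workload window manually:

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('start');
-- run a representative ETL / query workload
SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('stop');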

Parallel Query configuration

The Data Warehouse Administration Console (DAC) leverages the Oracle Parallel Query option for computing statistics and

building indexes on target tables. By default DAC creates indexes with the 'PARALLEL' clause and computes statistics with pre-

calculated degree of parallelism. Refer to the init.ora template files, located in <ORACLEBI_HOME>\dwrep\Documentation for

details on setting the following parameters:

parallel_max_servers

parallel_min_servers

parallel_threads_per_cpu

Important! You should carefully monitor your environment's workload before changing any parallel query parameters. Careless changes can easily lead to increased resource contention, I/O bottlenecks, and longer response times when resources are shared by many concurrent transactions.

Since DAC creates indexes and computes statistics on target tables in parallel on a single table and across multiple tables, the

parallel execution may cause performance problems if the values parallel_max_servers and parallel_threads_per_cpu are too

high. The system load from parallel operations can be observed by executing the following query:

SQL> select name, value from v$sysstat where name like 'Parallel%';

Reduce the "parallel_threads_per_cpu" and "parallel_max_servers" value if the system is overloaded.

Oracle Business Analytics Warehouse Tablespaces

By default, DAC deploys all data warehouse entities into two tablespaces: all tables into a DATA tablespace, and all indexes

into an INDEX tablespace. Depending on your hardware configuration on the target tier you can improve its performance by

rearranging your data warehouse tablespaces.

The following table summarizes space allocation estimates in a data warehouse by its data volume range:

Target Data Volume       SMALL           MEDIUM           LARGE
Data Warehouse Size      Up to 200 Gb    200 Gb to 1 Tb   1 Tb and higher
Temporary Tablespace     50 – 75 Gb      75 – 300 Gb      150 – 250 Gb
DATA Tablespace          150 Gb          150 – 800 Gb     > 800 Gb
INDEX Tablespace         50 Gb           50 – 200 Gb      > 200 Gb

Important! You should use Locally Managed tablespaces with the AUTOALLOCATE clause. DO NOT use UNIFORM extent sizes, as this may cause excessive space consumption and result in slower query performance.

Use the standard (primary) block size for your warehouse tablespaces. DO NOT build your warehouse on non-standard block tablespaces.

Note that the INDEX Tablespace may increase if you enable more query indexes in your data warehouse.

During incremental loads, by default DAC drops and rebuilds indexes, so you should separate all indexes in a dedicated

tablespace and, if you have multiple RAID / IO Controllers, move the INDEX tablespace to a separate controller.

You may also consider isolating staging tables (_FS) and target fact tables (_F) on different controllers. Such configuration

would help to speed up Target Load (SIL) mappings for fact tables by balancing I/O load on multiple RAID controllers.

Bitmap Indexes usage for better queries performance

Introduction

Oracle Business Intelligence Applications Version 7.9.0 introduced the use of the Bitmap Index feature of the Oracle RDBMS.

In comparison with B-Tree indexes, Bitmap indexes provide significant performance improvements on data warehouse star

queries. The internal benchmarks showed performance gains when B-Tree indexes on the foreign keys and attributes were

replaced with bitmap indexes.

Although bitmap indexes improve star queries response time, their use may cause ETL performance degradations both in

Oracle 10g and 11g. Dropping all bitmap indexes on a large table prior to an ETL run, and then recreating them after the ETL

completion may be quite expensive and time consuming. This is especially the case when there are a large number of such

indexes, or when there is little change expected in the number of records updated or inserted into a table during each ETL

run. Conversely, the quality of the existing bitmap indexes may degrade as more updates, deletes, and inserts are performed

with indexes in place, making such indexes less effective unless they are rebuilt.

This section reviews the index processing behavior of the DAC and provides the recommendations for bitmap indexes handling

during ETL runs.

DAC properties for handling bitmap indexes during ETL

DAC handles the same indexes differently for initial and incremental ETL runs. Prior to an initial load in a data warehouse,

there are no indexes created on the tables except for the unique B-Tree indexes to preserve data integrity. During the initial

ETL run, DAC will create ETL indexes on a loaded table, which will be required for faster execution of subsequent mappings.

For an incremental ETL run, DAC’s index handling will vary based on the combination of several DAC properties and

individual index usage settings.

The following table summarizes the parameters available in DAC 10.1.3.4.1 to handle indexes during ETL runs:

Name: Drop/Create Indices
Type: Execution Plan
Values: Y | N
Default: Y
Effect: DAC will drop all indexes on a target table truncated before a load, and then re-create them after loading the table. It is used mostly in small execution plans.
Initial ETL:
- Y – all indexes, irrespective of any other settings, will be dropped and created
- N – no indexes will be dropped during an initial ETL
Incremental ETL:
- Y – indexes with Always Drop & Create (Bitmap) will be dropped during an incremental ETL
- N – no indexes will be dropped during an incremental ETL
DB2/390 customers may want to set it to N. The recommended default value for other platforms is Y, unless you are executing a micro ETL, in which case it would be too expensive to drop and create all indexes, so the value should be changed to N.
Important! When set to N, this parameter overrides all other index level properties.

Name: Always Drop & Create Bitmap
Type: Index
Values: Y | N
Default: N/A
Effect: An index-specific property, applicable to bitmap indexes only.
- Y – a bitmap index will be dropped prior to an ETL run.
- N – a bitmap index will not be dropped in an incremental ETL run only.
The index property Always Drop & Create Bitmap does not override the Drop/Create Indices execution plan property if the latter is set to N. If an index is inactivated in DAC, the index will not be dropped and recreated during subsequent ETL runs.
This property applies to the Oracle data warehouse platform only.

Name: Always Drop & Create
Type: Index
Values: Y | N
Default: N/A
Effect: An index-specific property, applicable to all indexes.
- Y – an index will be dropped prior to an ETL run.
- N – an index will not be dropped in an incremental ETL run only.
The index property Always Drop & Create does not override the Drop/Create Indices execution plan property if the latter is set to N. If an index is inactivated in DAC, the index will not be dropped and recreated during subsequent ETL runs.

Name: Index Usage
Type: Index
Values: ETL | QUERY
Default: N/A
Effect:
- ETL – an index is required to improve the performance of subsequent ETL mappings. DAC drops ETL indexes on a table if it truncates the table before the load, or if you set Drop/Create Indices, Always Drop & Create Bitmap or Always Drop & Create to true. DAC will re-create the dropped ETL indexes after loading the table, since the indexes will be used to speed up subsequent mappings.
- QUERY – an index is required to improve web query performance.

Name: Verify And Create Non-Existing Indices
Type: System
Values: True | False
Default: False
Effect:
- True – the DAC server will verify that all indexes defined in the DAC repository are created in the target database.
- False – DAC will not run any reconciliation checks between its repository and the target database.
This parameter is useful when the current execution plan has Drop/Create Indices set to True, and new indexes have been created in the DAC repository since the last ETL run.

Name: Num Parallel Indexes per Table
Type: Physical Data Source
Values: Number
Default: 1
Effect: Specifies the maximum number of indexes that the DAC server will create in parallel for a single table.

Bitmap Indexes handling strategies

Review the following recommendations for effective bitmap indexes management in your environment.

Disable redundant bitmap indexes in DAC

Pre-packaged Oracle BI Applications releases include bitmap indexes that are enabled in the DAC metadata repository, and therefore created and maintained as part of ETL runs, even though the indexed columns might not be used in filtering conditions in the Oracle BI Server repository.

Reducing the number of redundant bitmap indexes is an essential step for improving initial and incremental loads, especially

for dimension and lookup tables. To identify all enabled BITMAP indexes on a table in DAC metadata repository:

- Log in to your repository through the DAC user interface, click on the Design button under the top menu, select your custom container in the pull-down menu and select the Indices tab in the right pane.

- Click the Query sub-tab.

- Enter the table name, check the ‘Is Bitmap’ box in the query row and click Go.

To identify the list of exposed columns included in filtering conditions in the RPD repository, connect to the BI Server Administration Tool and generate the list of dependencies for each column using the Query Repository and Related To features.

To disable the identified redundant indexes in DAC and drop them in Data Warehouse:

- Check the Inactive checkbox against the indexes, which should be permanently dropped in the target schema.

- Rebuild the DAC execution plan.

- Connect to your target database schema and drop the disabled indexes.

Decide whether to drop or keep bitmap indexes during incremental loads

Analyze the total time spent building indexes and computing statistics during an incremental run. You can connect to your DAC repository and execute the following queries:

SQL> alter session set nls_date_format='DD-MON-YYYY:HH24:MI:SS';

-- Identify your ETL Run and put its format into the subsequent queries:

select ROW_WID, NAME ETL_RUN

, EXTRACT(DAY FROM (END_TS - START_TS) DAY TO SECOND ) || ' days '

|| EXTRACT(HOUR FROM (END_TS - START_TS) DAY TO SECOND ) || ' hrs '

|| EXTRACT(MINUTE FROM (END_TS - START_TS) DAY TO SECOND ) || ' min '

|| EXTRACT(SECOND FROM (END_TS - START_TS) DAY TO SECOND ) || ' sec ' PLAN_RUN_TIME

from W_ETL_DEFN_RUN

order by START_TS DESC;


-- Identify your custom Execution Plan Name:

SELECT DISTINCT app.row_wid

FROM w_etl_defn_run run

, w_etl_app app

, w_etl_defn_prm prm

WHERE prm.etl_defn_wid = run.etl_defn_wid

AND prm.app_wid = app.row_wid

AND run.row_wid = '<Unique ETL ID from the first query>';

-- Indexes build time:

SELECT ref_idx.tbl_name table_name

, ref_idx.idx_name

, sdtl.start_ts start_time

, sdtl.end_ts end_time

, EXTRACT(DAY FROM(sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' days '

|| EXTRACT(HOUR FROM(sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' hrs '

|| EXTRACT(MINUTE FROM(sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' min '

|| EXTRACT(SECOND FROM(sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' sec' idx_bld_time

FROM w_etl_defn_run def

, w_etl_run_step stp

, w_etl_run_sdtl sdtl

, (SELECT ind_ref.obj_wid

, ind.name idx_name

, tbl.name tbl_name

FROM w_etl_index ind

, w_etl_obj_ref ind_ref

, w_etl_obj_ref tbl_ref

, w_etl_table tbl

, w_etl_app app

WHERE ind_ref.obj_type = 'W_ETL_INDEX' AND ind_ref.soft_del_flg = 'N' AND ind_ref.app_wid = '<Your custom Execution Plan Name from the second query>'

AND ind_ref.obj_wid = ind.row_wid

AND tbl_ref.obj_type = 'W_ETL_TABLE' AND tbl_ref.soft_del_flg = 'N' AND tbl_ref.app_wid = '<Your custom Execution Plan Name from the second query>'

AND tbl_ref.obj_wid = tbl.row_wid

AND tbl_ref.obj_ref_wid = ind.table_wid

AND ind.app_wid = app.row_wid

AND ind.inactive_flg = 'N'

) ref_idx

WHERE def.row_wid = stp.run_wid

AND def.row_wid = '<Unique ETL ID from the first query>'

AND sdtl.run_step_wid = stp.row_wid

AND sdtl.type_cd = 'Create Index'

AND sdtl.index_wid = ref_idx.obj_wid

-- AND ref_idx.tbl_name = 'W_OPTY_D'

ORDER BY sdtl.end_ts - sdtl.start_ts DESC;

-- Table Stats computing time:

select TBL.NAME TABLE_NAME

, STP.STEP_NAME

, EXTRACT(DAY FROM (SDTL.END_TS - SDTL.START_TS) DAY TO SECOND ) ||' days '

|| EXTRACT(HOUR FROM (SDTL.END_TS - SDTL.START_TS) DAY TO SECOND ) ||' hrs '

|| EXTRACT(MINUTE FROM (SDTL.END_TS - SDTL.START_TS) DAY TO SECOND ) ||' min '

|| EXTRACT(SECOND FROM (SDTL.END_TS - SDTL.START_TS) DAY TO SECOND ) ||' sec' TBL_STATS_TIME

from W_ETL_DEFN_RUN DEF

, W_ETL_RUN_STEP STP

, W_ETL_RUN_SDTL SDTL

, W_ETL_TABLE TBL


where DEF.ROW_WID=STP.RUN_WID

and DEF.ROW_WID = '<Unique ETL ID from the first query>'

and SDTL.RUN_STEP_WID = STP.ROW_WID

and SDTL.TYPE_CD = 'Analyze Table'

and SDTL.TABLE_WID = TBL.ROW_WID

order by SDTL.END_TS - SDTL.START_TS desc;

-- Informatica jobs for the selected ETL run:

select

SDTL.NAME SESSION_NAME

, SDTL.SUCESS_ROWS

, STP.FAILED_ROWS

, SDTL.READ_THRUPUT

, SDTL.WRITE_THRUPUT

, EXTRACT(DAY FROM (SDTL.END_TS - SDTL.START_TS) DAY TO SECOND ) ||' days '

|| EXTRACT(HOUR FROM (SDTL.END_TS - SDTL.START_TS) DAY TO SECOND ) ||' hrs '

|| EXTRACT(MINUTE FROM (SDTL.END_TS - SDTL.START_TS) DAY TO SECOND ) ||' min '

|| EXTRACT(SECOND FROM (SDTL.END_TS - SDTL.START_TS) DAY TO SECOND ) ||' sec' INFA_RUN_TIME

from W_ETL_DEFN_RUN DEF

, W_ETL_RUN_STEP STP

, W_ETL_RUN_SDTL SDTL

where DEF.ROW_WID=STP.RUN_WID

and DEF.ROW_WID = '<Unique ETL ID from the first query>'

and SDTL.RUN_STEP_WID = STP.ROW_WID

and SDTL.TYPE_CD = 'Informatica'

order by SDTL.END_TS - SDTL.START_TS desc;

If the report shows significant amounts of time to rebuild indexes and compute statistics, and the cumulative incremental load

time does not fit into your load window, you can consider two options:

Option 1: range partition large fact tables if they show up in the report. Refer to the partitioning sections for more details.

Option 2: If the incremental volumes are low, leave bitmap indexes on the reported tables for the next incremental run and

then compare the load times. Refer to the next chapter for the implementation.

Option 2 is not recommended for fact tables (%_F). It may be used for large dimension tables, which cannot be partitioned

effectively by range.

Important! Bitmap indexes present on target tables during inserts, updates or deletes can significantly increase the SQL DML execution time. The same SQL would complete much faster if the indexes were dropped prior to the query execution. On the other hand, it then takes more time to rebuild the dropped bitmap indexes and compute the required statistics. You should measure the cumulative time to run a specific task plus the time to rebuild indexes and compute required database statistics before deciding whether to drop or keep bitmap indexes in place during incremental loads.

Configure DAC not to drop selected bitmap indexes during incremental loads

If your benchmarks show that it is less time consuming to leave bitmap indexes in place on large dimension tables during

incremental loads and the incremental volumes are relatively small, then you can consider keeping the selected indexes in

place during incremental loads.

Since the DAC system property Drop and Create Bitmap Indexes Always overrides the index property Always Drop & Create, the system property defines how DAC will handle all bitmap indexes for all containers in the data warehouse schema. To work around this limitation:

Log in to your repository through the DAC user interface, click on the Design button under the top menu, and select the Indices tab in the right pane.


Click on the Query sub-tab and get the list of all indexes defined on the target table.

Check both the check boxes Always Drop & Create and Inactive against the indexes that should not be dropped during incremental runs.

Important: You must uncheck the Inactive checkbox for these indexes before the next initial load; otherwise DAC will not create them after the initial load completes. Since the Inactive property is used both for truly inactive indexes and for "hidden from incremental load" indexes, the property Always Drop & Create can be used for convenience to distinguish between the two categories.

If you choose to keep some bitmap indexes in place during incremental runs, consider creating those indexes with the storage parameter PCTFREE set to 50 or higher. Oracle RDBMS packs bitmap indexes in a data block much more tightly compared to B*Tree indexes. When an update, insert, or delete occurs on table columns with enabled indexes, the quality of the bitmap indexes will degrade; a higher PCTFREE value mitigates the impact to some degree.
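For example, a sketch of recreating such an index with a higher PCTFREE (replace the placeholders with your index, table, column and tablespace names; PCTFREE 60 is an illustrative value):

CREATE BITMAP INDEX <INDEX_NAME> ON <TABLE_NAME> (<COLUMN_NAME>) PCTFREE 60 NOLOGGING tablespace <IDX_TABLESPACE> ;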

Additional considerations for handling bitmap indexes during incremental loads:

- All bitmap indexes should be dropped for transaction fact tables with over 20 million records that usually have a large volume of data updates and inserts, such as over 0.5 – 1 percent of total records during an incremental run.

- For the large tables with a small number of bitmap indexes, consider dropping and recreating the bitmap indexes since the

time to rebuild would be short.

- For the large tables with few data updates, the indexes can be enabled during incremental runs without significant

performance degradations.

Disabling Indexes with DISTINCT_KEYS = 0 or 1

Oracle BI Applications delivers a number of indexes to optimize both ETL and end user query performance. Depending on end user data and its distribution, there may be some indexes on columns with just one distinct value. Such indexes will not be used in any queries, so they can be safely dropped in your Data Warehouse schema and disabled in the DAC repository.

The following script helps to identify all such indexes, disable them in the DAC repository and drop them in the database. You have to either connect as a DBA user or implement additional grants, since the script requires access to two database schemas:

ACCEPT DAC_OWNER PROMPT 'Enter DAC Repository schema name: '

ACCEPT DWH_OWNER PROMPT 'Enter Data Warehouse schema name: '

SELECT row_wid FROM "&&DAC_OWNER".w_etl_app;

ACCEPT APP_ID PROMPT 'Enter your DAC container from the list above: '

UPDATE "&&DAC_OWNER".w_etl_index SET inactive_flg = 'Y' WHERE row_wid IN (

SELECT ind_ref.obj_wid

FROM "&&DAC_OWNER".w_etl_index ind,

"&&DAC_OWNER".w_etl_obj_ref ind_ref,

"&&DAC_OWNER".w_etl_obj_ref tbl_ref,

"&&DAC_OWNER".w_etl_table tbl,

"&&DAC_OWNER".w_etl_app app,

all_indexes all_ind

WHERE ind_ref.obj_type = 'W_ETL_INDEX'

AND ind_ref.soft_del_flg = 'N'

AND ind_ref.app_wid = '&&APP_ID'

AND ind_ref.obj_wid = ind.row_wid

AND tbl_ref.obj_type = 'W_ETL_TABLE'

AND tbl_ref.soft_del_flg = 'N'

AND tbl_ref.app_wid = '&&APP_ID'

AND tbl_ref.obj_wid = tbl.row_wid

AND tbl_ref.obj_ref_wid = ind.table_wid

AND ind.app_wid = app.row_wid

AND ind.inactive_flg = 'N'

AND all_ind.index_name = ind.name

AND all_ind.table_name = tbl.name

AND all_ind.distinct_keys <= 1


AND all_ind.uniqueness = 'NONUNIQUE'

AND all_ind.num_rows >= 1

-- AND ind.type_cd = 'Query'

AND all_ind.owner = '&&DWH_OWNER');

COMMIT;

-- Drop the indexes in the schema:

spool drop_dist_indexes.sql

SELECT 'DROP INDEX ' || owner|| '.' || index_name || ' ;'

FROM all_indexes

WHERE distinct_keys <= 1 AND uniqueness = 'NONUNIQUE' AND owner = '&&DWH_OWNER';

spool off;

-- Execute the spooled SQL file to drop the identified indexes:

-- @drop_dist_indexes.sql

Monitoring and Disabling Unused Indexes

In addition to indexes with distinct_keys <= 1, there can be more redundant query indexes in your data warehouse that are not used by any end user queries. These indexes can impact incremental ETL runtime and Informatica mapping performance. You can identify such indexes by implementing index usage monitoring in your warehouse and running it over an extended period of time (usually 3-4 months).

To implement index usage monitoring:

1. Create a table in your data warehouse schema to load data from v$object_usage view:

CREATE TABLE myobj_usage AS SELECT * FROM v$object_usage;

2. Create the following scripts on DAC tier in the directory <dac_home>/bifoundation/dac/scripts

pre_sql.sql

INSERT INTO myobj_usage SELECT * FROM v$object_usage;

COMMIT;

EXIT;

pre_etl.bat

<ORACLE_HOME>/bin/sqlplus <dwh_user>/<dwh_pwd>@<dwh_db>

@<dac_home>/bifoundation/dac/scripts/pre_sql.sql

3. Set "Script before every ETL" System parameter in DAC to pre_etl.bat.

4. Create a backup copy of <dac_home>/bifoundation/dac/CustomSQLs/CustomSQL.xml

5. Open CustomSQL.xml and replace <SqlQuery name = "ORACLE_CREATE_INDEX">, <SqlQuery name =

"ETL_ORACLE_CREATE_INDEX"> and <SqlQuery name = "QUERY_ORACLE_CREATE_INDEX"> sections with:

<SqlQuery name = "ORACLE_CREATE_INDEX">

BEGIN

execute immediate 'CREATE %1 INDEX

%2

ON

%3

(

%4

)

NOLOGGING';

execute immediate 'ALTER INDEX %2 MONITORING USAGE';

END;


</SqlQuery>

<SqlQuery name = "ETL_ORACLE_CREATE_INDEX">

BEGIN

execute immediate 'CREATE %1 INDEX

%2

ON

%3

(

%4

)

NOLOGGING';

execute immediate 'ALTER INDEX %2 MONITORING USAGE';

END;

</SqlQuery>

<SqlQuery name = "QUERY_ORACLE_CREATE_INDEX">

BEGIN

execute immediate 'CREATE %1 INDEX

%2

ON

%3

(

%4

)

NOLOGGING PARALLEL';

execute immediate 'ALTER INDEX %2 MONITORING USAGE';

END;

</SqlQuery>

6. If you implement index monitoring for the first time after completing ETLs, execute the following PL/SQL block to

enable monitoring for all indexes:

DECLARE

CURSOR c1 IS

SELECT index_name

FROM user_indexes

WHERE index_name NOT IN (SELECT index_name

FROM v$object_usage

WHERE MONITORING = 'YES');

BEGIN

FOR rec IN c1 LOOP

EXECUTE IMMEDIATE 'alter index '||rec.index_name||' monitoring usage';

END LOOP;

END;

/

To query the unused indexes in your data warehouse execute the following SQL:

SELECT DISTINCT index_name FROM myobj_usage WHERE used = 'NO';

Important! There are two known cases when the optimizer uses indexes but DOES NOT mark them as used with Index Usage Monitoring turned on:

- DML operations against a Parent table (such as DELETE or UPDATE), associated with a Child table via the child table Foreign Key (FK) and the FK Normal index on the Child table, do use the Child table FK index, but Oracle does not report it as used in v$object_usage. Note that BITMAP indexes are correctly flagged as used in the same scenario and reported in v$object_usage.

- The optimizer may use extended statistics for computing correct table selectivity with composite indexes, and yet not report them in v$object_usage. This case may not be critical for the BI Analytics warehouse, since it does not use composite BITMAP indexes, while composite NORMAL indexes are used on surrogate keys (unique indexes) and critical columns used in ETL or OBIEE queries.


You should carefully review the reported ‘unused’ indexes prior to dropping them in the database and disabling them in the DAC repository.

After identifying redundant indexes, disabling them in DAC and dropping in your data warehouse, follow the steps below to

turn off index monitoring:

1. Restore <dac_home>/bifoundation/dac/CustomSQLs/CustomSQL.xml from its backup copy.

2. Reset "Script before every ETL" System parameter in DAC

3. Execute the following PL/SQL block to disable index monitoring:

DECLARE

CURSOR c1 IS

SELECT index_name

FROM user_indexes

WHERE index_name IN (SELECT index_name FROM v$object_usage WHERE MONITORING = 'YES');

BEGIN

FOR rec IN c1 LOOP

EXECUTE IMMEDIATE 'alter index '||rec.index_name||' nomonitoring usage';

END LOOP;

END;
/

Important! You should monitor the index usage for an extended period, such as one to two months, before deciding which

additional indexes can be disabled in DAC and dropped in your target schema.

Handling Query Indexes during Initial ETL

Oracle BI Applications delivers a number of query indexes, which are not used during ETL but are required for better OBIEE query performance. Most of the query indexes are created as BITMAP indexes in the Oracle database. Creation of such a large number of query indexes can extend both initial and incremental ETL windows. This section discusses several options for reducing index maintenance, such as disabling unused query indexes, or partitioning large fact tables and maintaining local query indexes on the latest range partitions.

You can consider disabling ALL query indexes and reduce your ETL runtime in the following scenarios:

1. Disable query indexes -> run an initial ETL -> enable query indexes -> run an incremental ETL -> run OBIEE reports

2. Disable query indexes -> run an incremental ETL -> enable query indexes -> run another incremental ETL -> run OBIEE

reports

To summarize, you can disable query indexes only for the following pattern: 1st ETL –> 2nd ETL –> OBIEE. You cannot use

this option for 1st ETL –> OBIEE –> 2nd ETL sequence.

Important! If you plan to implement partitioning for your warehouse tables and you want to take advantage of the conversion scripts in the next section, then you need to have query indexes created on the target tables prior to implementing partitioning.

Identify and preserve all activated query indexes PRIOR to executing the first ETL run:

CREATE TABLE psr_initial_query_idx AS

SELECT ind_ref.obj_wid,

ind.NAME idx_name,

tbl.NAME tbl_name

FROM w_etl_index ind,

w_etl_obj_ref ind_ref,

w_etl_obj_ref tbl_ref,


w_etl_table tbl,

w_etl_app app

WHERE ind_ref.obj_type = 'W_ETL_INDEX'

AND ind_ref.soft_del_flg = 'N'

AND ind_ref.app_wid = :APP_ID

AND ind_ref.obj_wid = ind.row_wid

AND tbl_ref.obj_type = 'W_ETL_TABLE'

AND tbl_ref.soft_del_flg = 'N'

AND tbl_ref.app_wid = :APP_ID

AND tbl_ref.obj_wid = tbl.row_wid

AND tbl_ref.obj_ref_wid = ind.table_wid

AND ind.app_wid = app.row_wid

AND ind.inactive_flg = 'N'

AND ind.isunique = 'N'

AND ind.type_cd = 'Query'

AND (ind.DRP_CRT_ALWAYS_FLG = 'Y' OR ind.DRP_CRT_BITMAP_FLG = 'Y')

where APP_ID can be identified from:

SELECT row_wid FROM w_etl_app;

Disable the identified query indexes PRIOR to starting the first ETL run:

SQL> UPDATE w_etl_index SET inactive_flg = 'Y' WHERE row_wid IN (SELECT obj_wid FROM

psr_initial_query_idx);

SQL> commit;

Execute your first ETL run.

Enable all preserved indexes PRIOR to starting the second ETL run:

SQL> UPDATE w_etl_index SET inactive_flg = 'N' WHERE row_wid IN (SELECT obj_wid FROM

psr_initial_query_idx);

SQL> commit;

Execute your second ETL run. DAC will recreate all disabled query indexes.

Partitioning guidelines for Large Fact tables

Introduction

Taking advantage of range, composite range-range, composite range-range using virtual columns and interval partitioning for

fact tables will not only reduce index and statistics maintenance time during ETL, but also improve web queries performance.

Since the majority of inserts and updates impact the last partition(s), you will only need to disable local indexes on a few

impacted partitions, and then rebuild disabled indexes after the load and compute statistics on updated partitions only. Online

reports and dashboards should also render results faster, since the optimizer would build more efficient execution plans using

partitions elimination logic.

Large fact tables, with more than 20 million rows, are good candidates for partitioning. To build an optimal partitioned table

with reasonable data distribution, you can consider partitioning by month, quarter, year, etc. You can either identify and

partition target fact tables before the initial run, or convert the populated tables into partitioned objects after the full load.

To implement the support for partitioned tables in Oracle Business Analytics Data Warehouse, you will need to update DAC

metadata and manually convert the candidates into partitioned tables in the target database.


Follow the steps below to implement fact table partitioning in your data warehouse schema and DAC repository. Please note that some steps apply only to composite range-range partitioning.

Range and Composite Range-Range Partitioning

Perform the following steps to convert a regular table into a range partitioned table.

Identify a partitioning key and decide on a partitioning interval

Choosing the correct partitioning key is the most important factor for effective partitioning, since it defines how many

partitions will be involved in web queries or ETL updates. Review the following guidelines for selecting a column for a

partitioning key:

- Identify eligible columns of type DATE for implementing range partitioning.

- Connect to the Oracle BI Server repository and check the usage or dependencies on each column in the logical and presentation layers.

- Analyze the summarized data distribution in the target table by each potential partitioning key candidate, and the data volumes per time range: month, quarter or year (see the sample query after this list).

- Based on the compiled data, decide on the appropriate partitioning key and partitioning range for your future partitioned table.

- The recommended partitioning range for most implementations is a month, though you can consider a quarter or a year for your partitioning ranges.

- Some partitioning keys may consist of concatenated attributes, which makes it hard to ensure proper range partitioning. Consider using the virtual columns feature in the Oracle database and use a virtual column as a partitioning or sub-partitioning key.
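A sketch of such a distribution check for a WID-based candidate key (W_AP_XACT_F and POSTED_ON_DT_WID are used purely for illustration):

SQL> select substr(to_char(POSTED_ON_DT_WID), 1, 6) month_wid, count(*) row_cnt
     from W_AP_XACT_F
     group by substr(to_char(POSTED_ON_DT_WID), 1, 6)
     order by 1;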

The proposed partitioning guidelines assume that the majority of incremental ETL volume data (~90%) are new records, which

end up in the one or two latest partitions. Depending on the chosen range granularity, you may consider rebuilding local

indexes for the most impacted latest partitions:

- Monthly range: you are advised to maintain two latest partitions, i.e. define index and table actions for PREVIOUS and

CURRENT partitions

- Quarterly range: you may consider maintaining just one, CURRENT partition.

- Yearly range: you are recommended to maintain only one, CURRENT partition.

The following table summarizes the recommended partitioning keys for some large Oracle BI Applications Fact tables:

Area Table Name Partitioning Key

Financials W_AP_XACT_F POSTED_ON_DT_WID

Financials W_AR_XACT_F POSTED_ON_DT_WID

Financials W_GL_REVN_F POSTED_ON_DT_WID

Financials W_GL_COGS_F POSTED_ON_DT_WID

Financials W_TAX_XACT_F POSTED_ON_DT_WID

Financials W_GL_OTHER_F ACCT_PERIOD_END_DT_WID

Sales W_SALES_ORDER_LINE_F ORDERED_ON_DT_WID

Sales W_SALES_PICK_LINE_F PICKED_ON_DT_WID


Sales W_SALES_INVOICE_LINE_F INVOICED_ON_DT_WID

Sales W_SALES_SCHEDULE_LINE_F ORDERED_ON_DT_WID

Procurement W_PURCH_SCHEDULE_LINE_F ORDERED_ON_DT_WID

Procurement W_PURCH_RQSTN_LINE_F APPROVED_ON_DT_WID

Procurement W_RQSTN_LINE_COST_F APPROVED_ON_DT_WID

Siebel Sales W_REVN_F CLOSE_DT_WID

HR W_WRKFC_EVT_MONTH_F EVENT_MONTH_WID

HR W_ABSENCE_EVENT_F ABSENCE_MONTH_WID

HR W_WRKFC_EVT_POW_F EVENT_YEAR

HR W_PAYROLL_F PAY_PERIOD_END_DT_WID

HR W_LM_ENROLLMENT_EVENT_F STATUS_DT

Consider implementing composite range-range partitioning for large Financials and Projects fact tables using the following partitioning and sub-partitioning keys:

Area Table Name Partitioning Key Sub-partitioning Key

Financials W_GL_LINKAGE_INFORMATION_G DISTRIBUTION_SOURCE POSTED_ON_DT_WID (*)

Projects W_PROJ_EXP_LINE_F CHANGED_ON_DT EXPENDITURE_DT_WID

(*) Implementing sub-partitioning for W_GL_LINKAGE_INFORMATION_G is recommended only if end users compress

inactive sub-partitions with historic data to reclaim space. There are no queries which would benefit from partitioning on

POSTED_ON_DT_WID column.

Refer to Composite Range-Range Partitioning using Virtual Columns section for more fact tables and their partitioning keys.

Create a partitioned table in Data Warehouse

You can pre-create a partitioned table prior to the initial load, or load data into the regular table and then create its

partitioned copy and migrate the summarized data. If you have already completed the initial load into a regular table and then

decided to partition it, you DO NOT need to re-run the initial load. You can consider two options to convert a table into a

partitioned one: (a) create table as select, or (b) create table exchange partition syntax and then split partitions. The internal

tests show that the first option to create table as select is simpler and faster. The second option is preferred in high availability

data warehouses when you have to carry out partitioning with end users accessing the data.

The example below uses the following tables for converting into partitioned objects:

W_WRKFC_EVT_MONTH_F - range partitioning

W_PROJ_EXP_LINE_F - composite range-range partitioning

1. Rename the original table

SQL> rename W_WRKFC_EVT_MONTH_F to W_WRKFC_EVT_MONTH_F_ORIG;


2. Create the partitioned table, using range partitioning by year:

SQL> create table W_WRKFC_EVT_MONTH_F partition by range (EVENT_YEAR)(

partition PART_MIN values less than (2006),

partition PART_2006 values less than (2007),

partition PART_2007 values less than (2008),

partition PART_2008 values less than (2009),

partition PART_2009 values less than (2010),

partition PART_2010 values less than (2011),

partition PART_MAX values less than (maxvalue)

)

tablespace BIAPPS_DATA

nologging parallel enable row movement

as select * from W_WRKFC_EVT_MONTH_F_ORIG;

EVENT_YEAR column in the example above uses number(4) precision, so the table partition values are defined using

format YYYY. If you choose WID column for a partitioning key, then you have to define your partition ranges using format

YYYYMMDD.

If you implement composite range-range partitioning, use the following sample syntax:

SQL> create table W_PROJ_EXP_LINE_F

partition by range (CHANGED_ON_DT)

subpartition by range (EXPENDITURE_DT_WID)

(partition PART_MIN values less than (TO_DATE('01-JAN-2008','DD-MON-YYYY'))

( subpartition PART_MIN_MIN values less than (19980000)

, subpartition PART_MIN_1998 values less than (19990000)

, subpartition PART_MIN_1999 values less than (20010000)

, subpartition PART_MIN_2001 values less than (20020000)

, subpartition PART_MIN_2002 values less than (20030000)

, subpartition PART_MIN_2003 values less than (20040000)

, subpartition PART_MIN_2004 values less than (20050000)

, subpartition PART_MIN_2005 values less than (20060000)

, subpartition PART_MIN_2006 values less than (20070000)

, subpartition PART_MIN_2007 values less than (20080000)

, subpartition PART_MIN_2008 values less than (20090000)

, subpartition PART_MIN_2009 values less than (20100000)

, subpartition PART_MIN_MAX values less than (maxvalue)

)

, partition PART_200801 values less than (TO_DATE('01-APR-2008','DD-MON-YYYY'))

( subpartition PART_200801_MIN values less than (19980000)

, subpartition PART_200801_1998 values less than (19990000)

, subpartition PART_200801_1999 values less than (20010000)

, subpartition PART_200801_2001 values less than (20020000)

, subpartition PART_200801_2002 values less than (20030000)

, subpartition PART_200801_2003 values less than (20040000)

, subpartition PART_200801_2004 values less than (20050000)

, subpartition PART_200801_2005 values less than (20060000)

, subpartition PART_200801_2006 values less than (20070000)

, subpartition PART_200801_2007 values less than (20080000)

, subpartition PART_200801_2008 values less than (20090000)

, subpartition PART_200801_2009 values less than (20100000)

, subpartition PART_200801_MAX values less than (MAXVALUE)

)

...

...

, partition PART_MAX values less than (maxvalue)

( subpartition PART_MAX_MIN values less than (19980000)


, subpartition PART_MAX_1998 values less than (19990000)

, subpartition PART_MAX_1999 values less than (20010000)

, subpartition PART_MAX_2001 values less than (20020000)

, subpartition PART_MAX_2002 values less than (20030000)

, subpartition PART_MAX_2003 values less than (20040000)

, subpartition PART_MAX_2004 values less than (20050000)

, subpartition PART_MAX_2005 values less than (20060000)

, subpartition PART_MAX_2006 values less than (20070000)

, subpartition PART_MAX_2007 values less than (20080000)

, subpartition PART_MAX_2008 values less than (20090000)

, subpartition PART_MAX_2009 values less than (20100000)

, subpartition PART_MAX_MAX values less than (maxvalue)

)

) nologging parallel

enable row movement

as (select * from W_PROJ_EXP_LINE_F_ORIG);

The composite range-range example uses Quarter for partitioning and Year for sub-partitioning ranges.

EXPENDITURE_DT_WID column has number(8) precision, so the table partition values are defined using format

YYYYMMDD.

Important! You must use the exact format YYYY, YYYYQQ or YYYYMMDD for partitioning by Year, Quarter or Month, respectively. You should verify the partitioning column data type prior to partitioning a table.

3. Drop / Rename indexes on renamed table

To drop indexes on the renamed table:

SQL> spool drop_ind.sql

SQL> SELECT 'DROP INDEX '|| INDEX_NAME||';'

FROM USER_INDEXES

WHERE TABLE_NAME = 'W_WRKFC_EVT_MONTH_F_ORIG';

SQL> spool off

SQL> @drop_ind.sql

If you want to keep indexes on the original renamed table until successful partitioning conversion completion, then use

the following commands:

SQL> spool rename_ind.sql

SQL> SELECT 'ALTER INDEX '|| INDEX_NAME ||' rename to '|| INDEX_NAME ||'_ORIG;'
FROM USER_INDEXES
WHERE TABLE_NAME = 'W_WRKFC_EVT_MONTH_F_ORIG';

SQL> spool off

SQL> @rename_ind.sql

4. Create Global and Local indexes.

Execute the following queries as DAC Repository owner:

SQL> spool indexes.sql

SQL> SELECT 'CREATE '

||DECODE(ISUNIQUE,'Y','UNIQUE ')

||DECODE(ISBITMAP,'Y','BITMAP ')

||'INDEX '

||I.NAME ||CHR(10)

||' ON '

||T.NAME

||' ('

||MAX(DECODE(POSTN,1,C.NAME||' ASC')) ||CHR(10)

||MAX(DECODE(POSTN,2,' ,'||C.NAME||' ASC'))


||MAX(DECODE(POSTN,3,' ,'||C.NAME||' ASC'))

||MAX(DECODE(POSTN,4,' ,'||C.NAME||' ASC'))

||MAX(DECODE(POSTN,5,' ,'||C.NAME||' ASC'))

||MAX(DECODE(POSTN,6,' ,'||C.NAME||' ASC'))

||MAX(DECODE(POSTN,7,' ,'||C.NAME||' ASC'))

||') tablespace USERS_IDX ' ||CHR(10)

||DECODE(ISUNIQUE,'Y','GLOBAL','LOCAL')

||' NOLOGGING;'

FROM W_ETL_TABLE T, W_ETL_INDEX I, W_ETL_INDEX_COL C

WHERE T.ROW_WID = I.TABLE_WID

AND T.NAME = 'W_WRKFC_EVT_MONTH_F'

AND I.ROW_WID = C.INDEX_WID

AND I.INACTIVE_FLG = 'N'

GROUP BY T.NAME,I.NAME,ISBITMAP,ISUNIQUE;

SQL> spool off;

The script creates indexes with a maximum of seven column positions. If you have indexes with more than seven column positions, then extend the "MAX(DECODE(POSTN, ...))" expressions accordingly.

Run the spooled file indexes.sql in warehouse schema.

SQL> @indexes.sql

Compute statistics on the partitioned table:

SQL> BEGIN

dbms_stats.Gather_table_stats(

NULL,

tabname => 'W_WRKFC_EVT_MONTH_F',

CASCADE => true,

estimate_percent => dbms_stats.auto_sample_size,

method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');

END;

Configure Informatica to support partitioned tables

1. Enable Row Movement

2. Set skip_unusable_indexes = TRUE in DataWarehouse Relational Connection in Informatica Workflow Manager. Open

Workflow Manager -> Connections -> Relational -> edit DataWarehouse -> Update Connection Environment SQL:

ALTER SESSION SET SKIP_UNUSABLE_INDEXES=TRUE;

Configure DAC to support partitioned tables

Create new source system parameters

Important! The example below shows how to set up rebuilding indexes and maintaining statistics for the last two partitions, PREVIOUS and CURRENT, for range partitioning by year. You should consider implementing PREVIOUS and CURRENT partitions only for monthly or more granular ranges. If you choose a quarterly or yearly range, then you can maintain the CURRENT partition only. Maintaining the PREVIOUS partition for partitioning by a quarter or a year may introduce unnecessary overhead and extend your incremental ETL execution time.

Define the following source system parameters:

Select Design Menu

Click on Source System Parameters tab in the right pane


Click New Button and define two new parameters with the following attributes:

Name: $$CURRENT_YEAR_WID

Data Type: SQL

Value (click on checkbox icon to define the following parameters):

Logical Data Source: DBConnection_OLAP

Enter the following SQL:

SELECT TO_CHAR(ROW_WID) FROM W_YEAR_D WHERE W_CURRENT_CAL_YEAR_CODE = 'Current'

Name: $$PREVIOUS_YEAR_WID

Data Type: SQL

Value (click on checkbox icon to define the following parameters):

Logical Data Source: DBConnection_OLAP

Enter the following SQL:

SELECT TO_CHAR(ROW_WID) FROM W_YEAR_D WHERE W_CURRENT_CAL_YEAR_CODE = 'Previous'

Important! Verify the correct Logical Data Source, DBConnection_OLAP, which points to your target data

warehouse, when you define these new system parameters.

If you choose monthly partitions, then use the following names and values:

Name: $$PREVIOUS_MONTH_WID

Value: SELECT TO_CHAR(ROW_WID) FROM W_MONTH_D WHERE W_CURRENT_CAL_MONTH_CODE ='Previous'

Name: $$CURRENT_MONTH_WID

Value: SELECT TO_CHAR(ROW_WID) FROM W_MONTH_D WHERE W_CURRENT_CAL_MONTH_CODE = 'Current'

If you choose Quarterly partitions, then use the following names / values:

Name: $$PREVIOUS_QTR_WID

Value: SELECT TO_CHAR(ROW_WID) FROM W_QTR_D WHERE W_CURRENT_CAL_QTR_CODE = 'Previous'

Name: $$CURRENT_QTR_WID

Value: SELECT TO_CHAR(ROW_WID) FROM W_QTR_D WHERE W_CURRENT_CAL_QTR_CODE = 'Current'

Note: If you need to maintain more than two partitions during the incremental ETLs, then you can create more

variables and repeat the steps for them below. For example:

Name: $$THIRD_MONTH_WID

Value: SELECT to_char(add_months(TO_DATE(ROW_WID,'YYYYMMDD'), -2),'YYYYMM') FROM W_DAY_D WHERE

W_CURRENT_CAL_DAY_CODE = 'Current'

Name: $$FOURTH_MONTH_WID

Value: SELECT to_char(add_months(TO_DATE(ROW_WID,'YYYYMMDD'), -3),'YYYYMM') FROM w_DAY_D WHERE

W_CURRENT_CAL_DAY_CODE = 'Current'

Update Index Action Framework

Create the following Index Actions in DAC Action Framework:

1. Year Partitioning: Disable Local Index Parameter

Navigate to Tools -> Seed Data -> Actions -> Index Actions -> New

Enter Name: Year Partitioning: Disable Local Index

Click on the ‘Check’ icon in the Value field

Click on the Add button in the newly opened window


Define ‘PREVIOUS_YEAR_WID Local Index’ SQL:

Name: Disable PREVIOUS_YEAR_WID Local Indexes

Type: SQL

Database Connection: target

Valid Database Platform: ORACLE

Enter the following command in the lower right Text Area:

alter index getIndexName() modify partition PART_@DAC_$$PREVIOUS_YEAR_WID unusable

Important! Do not use semicolon (;) at the end of SQLs in Text Area.

Click ‘Add’ button to define the second SQL command.

Define ‘CURRENT_YEAR_WID Local Index’ SQL:

Name: Disable CURRENT_YEAR_WID Local Index

Type: SQL

Database Connection: target

Valid Database Platform: ORACLE

Enter the following command in the lower right Text Area:

alter index getIndexName() modify partition PART_@DAC_$$CURRENT_YEAR_WID unusable

Save the changes.

Note: If you use a Monthly or Quarterly partition range, then use PREVIOUS_MONTH_WID / CURRENT_MONTH_WID or PREVIOUS_QTR_WID / CURRENT_QTR_WID, respectively, in the Action names and SQLs.

Important! If you implement partitioning by Year, Quarter, and Month, then you need to define separate actions for each range.

2. Year Partitioning: Enable Local Index Parameter

Click ‘New’ in Index Actions window to create a new parameter

Enter Name: Year Partitioning: Enable Local Index

Click on ‘Check’ Icon in Value field

Click on the Add button in the newly opened window

Define the following two values:

Name: Enable PREVIOUS_YEAR_WID Local Index

Type: SQL

Database Connection: target

Valid Database Platform: ORACLE

Enter the following command in the lower right Text Area:

alter index getIndexName() rebuild partition PART_@DAC_$$PREVIOUS_YEAR_WID nologging

Name: Enable CURRENT_YEAR_WID Local Index

Type: SQL

Database Connection: target

Valid Database Platform: ORACLE


Enter the following command in the lower right Text Area:

alter index getIndexName() rebuild partition PART_@DAC_$$CURRENT_YEAR_WID nologging

Save the changes.

Note: If you choose Quarterly or Monthly partition range, then use PREVIOUS_MONTH_WID /

CURRENT_MONTH_WID or PREVIOUS_QTR_WID / CURRENT_QTR_WID in Action names and SQLs.

3. Year Partitioning: Enable Local Sub-Partitioned Index Parameter (for composite partitioning only)

Click ‘New’ in Index Actions window to create a new parameter

Enter Name: Year Partitioning: Enable Local Sub-Partitioned Index

Click on ‘Check’ Icon in Value field

Click on the Add button in the newly opened window

Define the following value:

Name: Enable Local Sub-partitioned Index

Type: Stored Procedure

Database Connection: target

Valid Database Platform: ORACLE

Enter the following command in the lower right Text Area:

DECLARE
  -- Collect all unusable subpartitions of the index being processed by DAC.
  CURSOR C1 IS
    SELECT DISTINCT SUBPARTITION_NAME
      FROM USER_IND_SUBPARTITIONS
     WHERE INDEX_NAME = 'getIndexName()'
       AND STATUS = 'UNUSABLE';
BEGIN
  FOR REC IN C1 LOOP
    EXECUTE IMMEDIATE 'alter index getIndexName() rebuild subpartition ' || REC.SUBPARTITION_NAME;
  END LOOP;
END;

Save the changes.

4. Year Partitioning: Create Local Bitmap Index Parameter

Click ‘New’ in Index Actions window to create a new parameter

Enter Name: Year Partitioning: Create Local Bitmap Index

Click on ‘Check’ Icon in Value field

Click on the Add button in the newly opened window

Define the following value:

Name: Create Local Bitmap Indexes

Type: SQL

Database Connection: target

Valid Database Platform: ORACLE

Enter the following command in the lower right Text Area:


Create bitmap index getIndexName() on getTableName()(getUniqueColumns()) tablespace

getTableSpace() local parallel nologging

Save the changes.

5. Year Partitioning: Create Local B-Tree Index Parameter

Click ‘New’ in Index Actions window to create a new parameter

Enter Name: Year Partitioning: Create Local B-Tree Index

Click on ‘Check’ Icon in Value field

Click on the Add button in the newly opened window

Define the following value:

Name: Create Local B-Tree Index

Type: SQL

Database Connection: target

Valid Database Platform: ORACLE

Enter the following command in the lower right Text Area:

Create index getIndexName() on getTableName()(getUniqueColumns()) tablespace getTableSpace()

local parallel nologging

Save the changes.

6. Year Partitioning: Create Global Unique Index Parameter

Click ‘New’ in Index Actions window to create a new parameter

Enter Name: Year Partitioning: Create Global Unique Index

Click on ‘Check’ Icon in Value field

Click on the Add button in the newly opened window

Define the following value:

Name: Create Global Unique Index

Type: SQL

Database Connection: target

Valid Database Platform: ORACLE

Enter the following command in the lower right Text Area:

Create unique index getIndexName() on getTableName()(getUniqueColumns()) tablespace

getTableSpace() global parallel nologging

Save the changes.

Update Table Action Framework

Create the following Table Action in DAC Action Framework:

1. Year Partitioning: Gather Partition Stats Parameter

Navigate to Tools -> Seed Data -> Actions -> Table Actions -> New

Enter Name: Year Partitioning: Gather Partition Stats


Click on ‘Check’ Icon in Value field

Click on the Add button in the newly opened window

Define the following value:

Name: Gather Partition Stats

Type: Stored Procedure

Database Connection: target

Valid Database Platform: ORACLE

Enter the following command in the lower right Text Area:

DECLARE

CURSOR C1 IS

SELECT DISTINCT UTP.PARTITION_NAME

FROM USER_IND_PARTITIONS UIP,

USER_PART_INDEXES UPI,

USER_TAB_PARTITIONS UTP

WHERE UIP.INDEX_NAME=UPI.INDEX_NAME

AND UIP.STATUS = 'USABLE'

AND UTP.TABLE_NAME=UPI.TABLE_NAME

AND UTP.PARTITION_POSITION=UIP.PARTITION_POSITION

AND UPI.TABLE_NAME = 'getTableName()'

AND UTP.PARTITION_NAME IN

('PART_@DAC_$$CURRENT_YEAR_WID','PART_@DAC_$$PREVIOUS_YEAR_WID');

BEGIN

FOR REC IN C1 LOOP

DBMS_STATS.GATHER_TABLE_STATS(

NULL,

TABNAME => 'getTableName()',

CASCADE => FALSE,

PARTNAME => REC.PARTITION_NAME,

ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE,

GRANULARITY => 'PARTITION',

METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE AUTO',

DEGREE => DBMS_STATS.DEFAULT_DEGREE);

END LOOP;

END;

Save the changes.

Note: If you use a Monthly or Quarterly partition range, then use PREVIOUS_MONTH_WID / CURRENT_MONTH_WID or PREVIOUS_QTR_WID / CURRENT_QTR_WID, respectively, in the Action names and SQLs.

2. Quarter Composite Partitioning: Gather Partition Stats Parameter (for composite partitioning only)

Navigate to Tools -> Seed Data -> Actions -> Table Actions -> New

Enter Name: Quarter Composite Partitioning: Gather Partition Stats

Click on ‘Check’ Icon in Value field

Click on the Add button in the newly opened window

Define the following value:

Name: Gather Partition Stats

Type: Stored Procedure

Database Connection: target

Valid Database Platform: ORACLE


Enter the following command in the lower right Text Area:

DECLARE

CURSOR C1 IS

SELECT DISTINCT UTP.PARTITION_NAME

FROM USER_IND_PARTITIONS UIP,

USER_PART_INDEXES UPI,

USER_TAB_PARTITIONS UTP

WHERE UIP.INDEX_NAME=UPI.INDEX_NAME

AND UIP.STATUS = 'USABLE'

AND UTP.TABLE_NAME=UPI.TABLE_NAME

AND UTP.PARTITION_POSITION=UIP.PARTITION_POSITION

AND UPI.TABLE_NAME = 'getTableName()'

AND UTP.PARTITION_NAME IN

('PART_@DAC_$$CURRENT_QTR_WID','PART_@DAC_$$PREVIOUS_QTR_WID');

BEGIN

FOR REC IN C1 LOOP

DBMS_STATS.GATHER_TABLE_STATS(

NULL,

TABNAME => 'getTableName()',

CASCADE => FALSE,

PARTNAME => REC.PARTITION_NAME,

ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE,

GRANULARITY => 'PARTITION',

METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE AUTO',

DEGREE => DBMS_STATS.DEFAULT_DEGREE);

END LOOP;

END;

Important! DO NOT change the ‘Drop / Create Always’ or ‘Drop / Create Always Bitmap’ properties for the modified indexes. Unchecking these properties signals DAC to skip any actions defined in the Index Action Framework.

Attach Index Action to the desired indexes

Retrieve all local indexes on partitioned tables. Navigate to Design -> Indices -> Query -> Table Name 'W_WRKFC_EVT_MONTH_F', check the ‘Is Bitmap’ checkbox -> Go.

Important! You must exclude the selected global index from the index query result set. The global index must

NOT have any assigned index action tasks.

Right click your mouse on the generated list (Upper right pane) and select ‘Add Actions’

Select ‘Drop Index’ from Action Type field

Select ‘Incremental’ from Load Type field

Click on Checkbox icon in Action field

Select ‘Year Partitioning: Disable Local Index’ Action Name

Click OK in Choose Action window

Click OK in Add Actions window.

Right click your mouse on the generated list (Upper right pane) and select ‘Add Actions’ one more time

Select ‘Create Index’ from Action Type field

Select ‘Incremental’ from Load Type field

Click on Checkbox icon in Action field

Select ‘Year Partitioning: Enable Local Index’ Action Name

Click OK in Choose Action window


Click OK in Add Actions window.

The steps above apply to all indexes retrieved by your query. If you want to attach the defined Index Actions to an individual index, select the desired index in the upper right pane and click the ‘Actions’ sub-tab in the lower pane. Then click the ‘New’ button in the lower pane and fill in the appropriate values in the new line.

Repeat the same steps to attach ‘Year Partitioning: Create Local Bitmap Index’, ‘Year Partitioning: Create Local B-Tree Index’ and ‘Year Partitioning: Create Global Unique Index’ to the appropriate indexes used in an initial ETL run.

Important! You must choose ‘Initial’ from Load Type field, when attaching ‘Year Partitioning: Create Local Bitmap

Index’, ‘Year Partitioning: Create Local B-Tree Index’ and ‘Year Partitioning: Create Global Unique Index’ Index Action

Tasks.

Even though you select the Drop / Create Index Action Type, DAC will override these actions with the steps defined in the Index Action Framework. Every time DAC encounters a ‘Drop Index’ step for an updated index, it will mark the index unusable for the last two partitions; for a ‘Create Index’ step, it will rebuild the index for the last two partitions.

Attach Table Action to the converted partitioned table

Retrieve the partitioned tables. Navigate to Design -> Tables -> Query -> Name 'W_WRKFC_EVT_MONTH_F' -> Go.

Right click your mouse on the generated list (Upper right pane) and select ‘Add Actions’

Select ‘Analyze Table’ from Action Type field

Select ‘Incremental’ from Load Type field

Click on Checkbox icon in Action field

Select ‘Year Partitioning: Gather Partition Stats’ Action Name

Click OK in Choose Action window

Click OK in Add Actions window.

Important! You must use the ‘Quarter Composite Partitioning: Gather Partition Stats’ parameter for composite range-range tables.

If you want to attach the defined Table Action to an individual table, select the desired table in the upper right pane and click the ‘Actions’ sub-tab in the lower pane. Then click the ‘New’ button in the lower pane and fill in the appropriate values in the new line.

Whenever DAC encounters an ‘Analyze Table’ step for an updated table, it overrides the default action with the set of steps from the Table Action Framework.

Unit test the changes for converted partitioned tables in DAC

You can generate the list of actions for a single task that populates a partitioned table, to validate the correct sequence of steps without executing them.

Follow the steps below to unit test the sequence of steps for a partitioned table:

Select ‘Execute’ button from your top sub-menu

Select your execution plan in the upper right pane

Click ‘Ordered tasks’ sub-tab in the lower right pane

Retrieve the task which populates your partitioned table

Click ‘Unit test’ button in the lower right pane menu.


Click ‘Yes’ to proceed with unit testing.

Validate the generated sequence of steps in the new output window.

Important! DO NOT execute them in your data warehouse.

Exit unit testing window.

Composite Range-range Partitioning Using Virtual Columns

A few more fact tables can be partitioned using composite range-range partitioning on virtual columns, another useful Oracle Database feature. Oracle does not store virtual columns; it derives their values on demand by computing the defined functions or expressions. This feature is very handy when query filters use a substring of a physical column. For example, the WID column value ‘201020110120000’ contains the date value ‘20110120’ in YYYYMMDD format.

Implement such partitioning using virtual columns following the W_GL_BALANCE_F table example below.

Identify the table’s partitioning key, and define a virtual column that will be used as its sub-partitioning key.

Add the virtual column to the fact table:

ALTER TABLE W_GL_BALANCE_F add BALANCE_DT_V AS (TO_NUMBER(SUBSTR(BALANCE_DT_WID, 5, 8)));
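Once the table is populated, a quick sanity check confirms the derived values; a minimal sketch, assuming the column definition above:

-- BALANCE_DT_V is computed on the fly from BALANCE_DT_WID; it consumes no storage.
SELECT BALANCE_DT_WID, BALANCE_DT_V
  FROM W_GL_BALANCE_F
 WHERE ROWNUM <= 5;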

The table below consolidates all such facts and their recommended keys and virtual column values for BI Analytic Applications:

Area        Table Name              Partitioning Key  Sub-partitioning Key   Sub-partitioning V-Column Value
Financials  W_GL_BALANCE_F          CHANGED_ON_DT     BALANCE_DT_V           (TO_NUMBER(SUBSTR(BALANCE_DT_WID, 5, 8)))
Projects    W_PROJ_COST_LINE_F      CHANGED_ON_DT     PROJ_ACCOUNTING_DT_V   (TO_NUMBER(SUBSTR(PROJ_ACCOUNTING_DT_WID, 5, 8)))
Projects    W_PROJ_REVENUE_LINE_F   CHANGED_ON_DT     GL_ACCOUNTING_DT_V     (TO_NUMBER(SUBSTR(GL_ACCOUNTING_DT_WID, 5, 8)))

Rename the original table:

RENAME W_GL_BALANCE_F TO W_GL_BALANCE_F_REF;

Create a new partitioned table:

CREATE TABLE W_GL_BALANCE_F

PARTITION BY RANGE (CHANGED_ON_DT)

SUBPARTITION BY RANGE (BALANCE_DT_V)

(PARTITION PART_MIN VALUES LESS THAN (TO_DATE('01-JAN-2008', 'DD-MON-YYYY'))

(SUBPARTITION PART_MIN_MIN VALUES LESS THAN (20080101),

SUBPARTITION PART_MIN_2008 VALUES LESS THAN (20090101),

SUBPARTITION PART_MIN_2009 VALUES LESS THAN (20100101),

SUBPARTITION PART_MIN_2010 VALUES LESS THAN (20110101),

SUBPARTITION PART_MIN_2011 VALUES LESS THAN (20120101),

SUBPARTITION PART_MIN_MAX VALUES LESS THAN (MAXVALUE)

),

PARTITION PART_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009', 'DD-MON-YYYY'))

(SUBPARTITION PART_2008_MIN VALUES LESS THAN (20080101),

SUBPARTITION PART_2008_2008 VALUES LESS THAN (20090101),

SUBPARTITION PART_2008_2009 VALUES LESS THAN (20100101),


SUBPARTITION PART_2008_2010 VALUES LESS THAN (20110101),

SUBPARTITION PART_2008_2011 VALUES LESS THAN (20120101),

SUBPARTITION PART_2008_MAX VALUES LESS THAN (MAXVALUE)

),

PARTITION PART_2009 VALUES LESS THAN (TO_DATE('01-JAN-2010', 'DD-MON-YYYY'))

(SUBPARTITION PART_2009_MIN VALUES LESS THAN (20080101),

SUBPARTITION PART_2009_2008 VALUES LESS THAN (20090101),

SUBPARTITION PART_2009_2009 VALUES LESS THAN (20100101),

SUBPARTITION PART_2009_2010 VALUES LESS THAN (20110101),

SUBPARTITION PART_2009_2011 VALUES LESS THAN (20120101),

SUBPARTITION PART_2009_MAX VALUES LESS THAN (MAXVALUE)

),

PARTITION PART_2010 VALUES LESS THAN (TO_DATE('01-JAN-2011', 'DD-MON-YYYY'))

(SUBPARTITION PART_2010_MIN VALUES LESS THAN (20080101),

SUBPARTITION PART_2010_2008 VALUES LESS THAN (20090101),

SUBPARTITION PART_2010_2009 VALUES LESS THAN (20100101),

SUBPARTITION PART_2010_2010 VALUES LESS THAN (20110101),

SUBPARTITION PART_2010_2011 VALUES LESS THAN (20120101),

SUBPARTITION PART_2010_MAX VALUES LESS THAN (MAXVALUE)

),

PARTITION PART_2011 VALUES LESS THAN (TO_DATE('01-JAN-2012', 'DD-MON-YYYY'))

(SUBPARTITION PART_2011_MIN VALUES LESS THAN (20080101),

SUBPARTITION PART_2011_2008 VALUES LESS THAN (20090101),

SUBPARTITION PART_2011_2009 VALUES LESS THAN (20100101),

SUBPARTITION PART_2011_2010 VALUES LESS THAN (20110101),

SUBPARTITION PART_2011_2011 VALUES LESS THAN (20120101),

SUBPARTITION PART_2011_MAX VALUES LESS THAN (MAXVALUE)

),

PARTITION PART_MAX VALUES LESS THAN (MAXVALUE)

(SUBPARTITION PART_MAX_MIN VALUES LESS THAN (20080101),

SUBPARTITION PART_MAX_2008 VALUES LESS THAN (20090101),

SUBPARTITION PART_MAX_2009 VALUES LESS THAN (20100101),

SUBPARTITION PART_MAX_2010 VALUES LESS THAN (20110101),

SUBPARTITION PART_MAX_2011 VALUES LESS THAN (20120101),

SUBPARTITION PART_MAX_MAX VALUES LESS THAN (MAXVALUE)

)

) NOLOGGING PARALLEL ENABLE ROW MOVEMENT

AS (SELECT * FROM W_GL_BALANCE_F_REF);
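To confirm that the new layout actually prunes, inspect a plan for a filter on the virtual column; a minimal sketch, assuming the table above is populated (DBMS_XPLAN is the standard plan display package):

EXPLAIN PLAN FOR
SELECT SUM(BALANCE_ACCT_AMT)
  FROM W_GL_BALANCE_F
 WHERE BALANCE_DT_V >= 20110101
   AND BALANCE_DT_V <  20120101;

-- The plan should show pruning at the subpartition level (PARTITION RANGE
-- ITERATOR on the subpartition step) instead of scanning all subpartitions.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);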

Follow the steps in the Range and Composite Range-Range Partitioning section to complete the remaining configuration tasks in Informatica, DAC and the database.

Interval Partitioning

Oracle 11g introduced a new partitioning type, Interval Partitioning: Oracle automatically creates new partitions at a pre-defined range interval, so there is no need to pre-create partitions for future data.

The majority of recommended partitioning keys in Oracle BI Applications use the DATE format YYYYMMDD. For example, the POSTED_ON_WID column is based on monthly range partitions with values less than 20041101, 20041201, 20050101, 20050201, and so on. You can specify INTERVAL(100) for such a range format. Oracle skips creating partitions for ranges that contain no data; in the POSTED_ON_WID example there is a large numeric gap between the ranges 20041201 and 20050101 (20041301, 20041401, ...), so Oracle will not create any partitions in that gap.

For example, here is the syntax for creating an interval-partitioned table:

SQL> create table W_WRKFC_EVT_MONTH_F partition by range (EVENT_YEAR)

interval(100) (

partition PART_MIN values less than (19900101))


tablespace BIAPPS_DATA

nologging parallel enable row movement

as select * from W_WRKFC_EVT_MONTH_F_ORIG;

You also need to use the following SQLs to assign values to the DAC variables:

Name: $$PREVIOUS_MONTH_WID Value: SELECT partition_name FROM user_tab_partitions

WHERE table_name = 'W_WRKFC_EVT_MONTH_F'

AND partition_position = (SELECT MAX(partition_position)-1

FROM user_tab_partitions

WHERE table_name = 'W_WRKFC_EVT_MONTH_F');

Name: $$CURRENT_MONTH_WID Value: SELECT partition_name FROM user_tab_partitions

WHERE table_name = 'W_WRKFC_EVT_MONTH_F'

AND partition_position = (SELECT MAX(partition_position)

FROM user_tab_partitions

WHERE table_name = 'W_WRKFC_EVT_MONTH_F');

Important! You must remove the PART_ prefix from the partition names in the DAC Action Framework scripts above. For example, use @DAC_$$PREVIOUS_MONTH_WID instead of PART_@DAC_$$PREVIOUS_MONTH_WID.

Important! Oracle creates a new interval partition, along with its partitioned local indexes, as soon as the first record exceeds the last partition range value. So during an ETL run in which Oracle creates a new interval partition, you may see slower mapping performance, because all local indexes on the new partition remain enabled during the run. The impact may not be significant, since the DML operations with local indexes in place apply only to a single day of incremental data. DAC will turn off local indexes on the newly created partition during the next incremental ETL.
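You can watch this behavior in the dictionary; a minimal sketch, assuming the interval-partitioned table above (system-generated interval partitions receive SYS_P% names):

-- List the partitions created so far; the INTERVAL column distinguishes
-- automatically created interval partitions from the pre-created range ones.
SELECT partition_name, high_value, interval
  FROM user_tab_partitions
 WHERE table_name = 'W_WRKFC_EVT_MONTH_F'
 ORDER BY partition_position;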

Partitioning Pruning in Star Queries

An effective partitioning implementation not only reduces index and statistics maintenance during incremental ETLs, but also helps to improve end user query performance. There are, however, several factors that could affect Oracle Optimizer plans and result in less efficient executions:

a) By original design, BI Analytic Applications do not expose fact attributes in the OBIEE logical model (in the RPD); they are resolved through foreign keys via joins to dimensional attributes. For example, if you partition your fact table by month on the CHANGED_ON_DT column and then run an OBIEE query that filters records for the last month, OBIEE will not use the provided filter value directly in the fact predicate. Instead, it resolves CHANGED_ON_DT through a foreign key to the corresponding Time Dimension table and applies the filter to a Time Dimension attribute.

b) The BI Analytic Applications database design uses a star schema and relies on bitmap indexes for effective star transformation. When the Optimizer chooses star transformation, it may exclude partitioning pruning from its execution plan and use bitmap indexes instead.

Partitioning Pruning and Star Transformation Scenarios

The following example walks through various scenarios and shows the Optimizer’s behavior for different configurations.

The sample OBIEE-generated physical query below uses the partitioned fact table W_SALES_ORDER_LINE_F (T90499) with the partitioning key ORDERED_ON_DT_WID. There are also four indexes:

W_SLS_ORD_LN_F_T_F100 is a local bitmap index on PROFIT_CENTER_WID

W_SLS_ORD_LN_F_T_F200 is a local bitmap index on ORDERED_ON_DT_WID, the table’s partitioning key

W_SLS_ORD_LN_F_T_F300 is a local bitmap index on CHNL_TYPE_WID

W_SLS_ORD_LN_F_T_F500 is a local b-tree index on (ORDERED_ON_DT_WID, CHNL_TYPE_WID)

WITH

SAWITH0 AS (select T156337.MCAL_DAY_DT as c1,

case when T96128.W_XACT_TYPE_CODE <> 'PAYMENT' then T90499.SALES_ORDER_NUM/*SALES_INVOICE_NUM*/ else NULL end as c2,

T96094.W_STATUS_CODE as c3,

T96094.STATUS_CODE as c4,

T96128.W_XACT_TYPE_CODE as c5,

T96128.W_XACT_SUBTYPE_CODE as c6,

T157680.NAME as c7,

T95085.PAYMENT_TERM_CODE as c8,

T174959.MCAL_DAY_DT as c9,

T175106.MCAL_DAY_DT as c10,

T175253.MCAL_DAY_DT as c11,

sum(T90499.NET_AMT /*.AR_DOC_AMT*/ * T90499.GLOBAL1_EXCHANGE_RATE) as c12,

T156337.ROW_WID as c13

from

W_MCAL_DAY_D T175106 /* Dim_W_MCAL_DAY_D_Invoice_Cleared_Date_Fiscal_Calendar */ ,

W_MCAL_DAY_D T174959 /* Dim_W_MCAL_DAY_D_Invoiced_Date_Fiscal_Calendar */ ,

W_MCAL_DAY_D T175253 /* Dim_W_MCAL_DAY_D_Payment_Due_Date_Fiscal_Calendar */ ,

W_MCAL_DAY_D T156337 /* Dim_W_MCAL_DAY_D_Fiscal_Day */ ,

W_PROFIT_CENTER_D T92473 /* Dim_W_PROFIT_CENTER_D */ ,

DWH_7962.W_PAYMENT_TERMS_D T95085 /* Dim_W_PAYMENT_TERMS_D */ ,

W_SALES_ORDER_LINE_F_TEST T90499--W_AR_XACT_F T90499 /* Fact_W_AR_XACT_F */

left outer join W_DAY_D T124588 /* Dim_W_DAY_D_ARSales Invoice Cleared Date */

On T90499.CANCELLED_ON_DT_WID/*CLEARED_ON_DT_WID*/ = T124588.ROW_WID,

--W_GL_ACCOUNT_D T91397 /* Dim_W_GL_ACCOUNT_D */ ,

DWH_7962.W_STATUS_D T96094 /* Dim_W_STATUS_D_Generic */ ,

DWH_7962.W_XACT_TYPE_D T96128 /* Dim_W_XACT_TYPE_D_Financials */ ,

DWH_7962.W_PARTY_D T157680

where ( T90499.ENTERED_ON_DT_WID /*CLEARED_ON_DT_WID */= T175106.MCAL_DAY_DT_WID

and T90499.CHNL_TYPE_WID/*MCAL_CAL_WID*/ = T175106.MCAL_CAL_WID

and T174959.ADJUSTMENT_PERIOD_FLG = 'N'

and T90499.BOOKED_ON_DT_WID /*INVOICED_ON_DT_WID*/ = T174959.MCAL_DAY_DT_WID

and T90499.CHNL_TYPE_WID/*MCAL_CAL_WID*/ = T174959.MCAL_CAL_WID

and T156337.ADJUSTMENT_PERIOD_FLG = 'N'

and T90499.ORDERED_ON_DT_WID/*POSTED_ON_DT_WID*/ = T156337.MCAL_DAY_DT_WID

and T90499.CHNL_TYPE_WID/*MCAL_CAL_WID*/ = T156337.MCAL_CAL_WID

and T90499.PROFIT_CENTER_WID = T92473.ROW_WID

and T90499.PAYMENT_TERMS_WID/*PAY_TERMS_WID*/ = T95085.ROW_WID

--and T90499.GL_ACCOUNT_WID = T91397.ROW_WID

and T90499.ORDER_STATUS_WID/*DOC_STATUS_WID*/ = T96094.ROW_WID

and T90499.XACT_TYPE_WID/*DOC_TYPE_WID*/ = T96128.ROW_WID

and T90499.CUSTOMER_WID = T157680.ROW_WID

and T175106.ADJUSTMENT_PERIOD_FLG = 'N'

and T90499.PROMISED_ON_DT_WID/*PAYMENT_DUE_DT_WID*/ = T175253.MCAL_DAY_DT_WID

and T90499.CHNL_TYPE_WID/*MCAL_CAL_WID*/ = T175253.MCAL_CAL_WID

and T90499.DELETE_FLG = 'N'

and T92473.PROFIT_CENTER_NAME = 'Amazon.com, Inc.'

and T156337.MCAL_PERIOD_NAME = 'JAN-05'

and T175253.ADJUSTMENT_PERIOD_FLG = 'N'

--and case when 0 > 0 then T91397.ACCOUNT_SEG5_CODE else 'All' end = 'All'

--and case when 0 > 0 then T91397.ACCOUNT_SEG1_CODE else 'All' end = 'All'

and TO_DATE('2011-03-10 00:00:00' , 'YYYY-MM-DD HH24:MI:SS') is not null )

group by T95085.PAYMENT_TERM_CODE,

T96094.STATUS_CODE,T96094.W_STATUS_CODE,

T96128.W_XACT_SUBTYPE_CODE,T96128.W_XACT_TYPE_CODE,

T156337.ROW_WID, T156337.MCAL_DAY_DT, T157680.NAME,

T174959.MCAL_DAY_DT, T175106.MCAL_DAY_DT,

T175253.MCAL_DAY_DT,

case when T96128.W_XACT_TYPE_CODE <> 'PAYMENT' then T90499.SALES_ORDER_NUM/*SALES_INVOICE_NUM */else NULL end ,

T92473.PROFIT_CENTER_NAME )

select SAWITH0.c1 as c1,

SAWITH0.c2 as c2,

SAWITH0.c3 as c3,

SAWITH0.c4 as c4,

SAWITH0.c5 as c5,

SAWITH0.c6 as c6,

SAWITH0.c7 as c7,

SAWITH0.c8 as c8,

SAWITH0.c9 as c9,

SAWITH0.c10 as c10,

SAWITH0.c11 as c11,

SAWITH0.c12 as c12


from

SAWITH0

order by c1 desc

The Oracle Optimizer chooses star transformation when at least three dimension tables join to the fact table and the fact columns used in the joins have bitmap indexes (and, of course, there are no conflicting hints).
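One more precondition worth noting: star transformation must also be enabled at the instance or session level, since the parameter typically defaults to FALSE. A session-level sketch:

-- Without this setting the Optimizer will not consider star transformation.
ALTER SESSION SET star_transformation_enabled = 'TRUE';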

The following combination does not produce Star Transformation:

ALTER INDEX W_SLS_ORD_LN_F_T_F100 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F200 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F300 INVISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F500 INVISIBLE;

Execution Plan

-------------------------------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |

-------------------------------------------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 415 | 1342 (1)| 00:01:16 | | |

| 1 | SORT GROUP BY | | 1 | 415 | 1342 (1)| 00:01:16 | | |

| 2 | NESTED LOOPS | | | | | | | |

| 3 | NESTED LOOPS | | 1 | 415 | 1341 (1)| 00:01:16 | | |

| 4 | NESTED LOOPS | | 1 | 392 | 1340 (1)| 00:01:16 | | |

| 5 | NESTED LOOPS | | 1 | 376 | 1339 (1)| 00:01:16 | | |

| 6 | NESTED LOOPS | | 1 | 344 | 1338 (1)| 00:01:16 | | |

|* 7 | HASH JOIN | | 1 | 315 | 1337 (1)| 00:01:16 | | |

|* 8 | HASH JOIN | | 1 | 291 | 912 (1)| 00:00:52 | | |

| 9 | NESTED LOOPS | | | | | | | |

| 10 | NESTED LOOPS | | 1 | 267 | 486 (1)| 00:00:28 | | |

|* 11 | HASH JOIN | | 30 | 7290 | 142 (1)| 00:00:09 | | |

| 12 | NESTED LOOPS | | | | | | | |

| 13 | NESTED LOOPS | | 40 | 8840 | 140 (0)| 00:00:08 | | |

|* 14 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 41 | 12 (0)| 00:00:01 | | |

| 15 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 16 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_M4 | | | | | | |

| 17 | PARTITION RANGE ITERATOR | | | | | | KEY | KEY |

| 18 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 19 | BITMAP INDEX SINGLE VALUE | W_SLS_ORD_LN_F_T_F200 | | | | | KEY | KEY |

|* 20 | TABLE ACCESS BY LOCAL INDEX ROWID| W_SALES_ORDER_LINE_F_TEST | 76 | 13680 | 140 (0)| 00:00:08 | 1 | 1 |

| 21 | TABLE ACCESS BY INDEX ROWID | W_PROFIT_CENTER_D | 2 | 44 | 1 (0)| 00:00:01 | | |

| 22 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 23 | BITMAP INDEX SINGLE VALUE | W_PROFT_CNTR_D_M11 | | | | | | |

| 24 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

| 25 | BITMAP AND | | | | | | | |

|* 26 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F2 | | | | | | |

|* 27 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_T_F1 | | | | | | |

|* 28 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 24 | 486 (1)| 00:00:28 | | |

|* 29 | TABLE ACCESS FULL | W_MCAL_DAY_D | 2150 | 51600 | 425 (1)| 00:00:25 | | |

|* 30 | TABLE ACCESS FULL | W_MCAL_DAY_D | 2150 | 51600 | 425 (1)| 00:00:25 | | |

| 31 | TABLE ACCESS BY INDEX ROWID | W_PARTY_D | 1 | 29 | 1 (0)| 00:00:01 | | |

|* 32 | INDEX UNIQUE SCAN | W_PARTY_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 33 | TABLE ACCESS BY INDEX ROWID | W_STATUS_D | 1 | 32 | 1 (0)| 00:00:01 | | |

|* 34 | INDEX UNIQUE SCAN | W_STATUS_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 35 | TABLE ACCESS BY INDEX ROWID | W_PAYMENT_TERMS_D | 1 | 16 | 1 (0)| 00:00:01 | | |

|* 36 | INDEX UNIQUE SCAN | W_PAYMNT_TRM_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

|* 37 | INDEX UNIQUE SCAN | W_XACT_TYPE_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 38 | TABLE ACCESS BY INDEX ROWID | W_XACT_TYPE_D | 1 | 23 | 1 (0)| 00:00:01 | | |

------------------------------------------------------------------------------------------------------------------------------------------

The next combination of indexes causes Optimizer to opt to Star Transformation:

ALTER INDEX W_SLS_ORD_LN_F_T_F100 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F200 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F300 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F500 INVISIBLE;

Execution Plan

-----------------------------------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |

-----------------------------------------------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 403 | 4077 (1)| 00:03:51 | | |

| 1 | TEMP TABLE TRANSFORMATION | | | | | | | |

| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D67D7_67849FF2 | | | | | | |

|* 3 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 41 | 12 (0)| 00:00:01 | | |


| 4 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 5 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_M4 | | | | | | |

| 6 | SORT GROUP BY | | 1 | 403 | 4065 (1)| 00:03:51 | | |

|* 7 | HASH JOIN | | 1 | 403 | 4064 (1)| 00:03:51 | | |

|* 8 | HASH JOIN | | 1 | 380 | 2492 (1)| 00:02:22 | | |

|* 9 | HASH JOIN | | 1 | 364 | 2485 (1)| 00:02:21 | | |

|* 10 | HASH JOIN | | 1 | 340 | 2060 (1)| 00:01:57 | | |

| 11 | NESTED LOOPS | | | | | | | |

| 12 | NESTED LOOPS | | 13 | 4108 | 1635 (1)| 00:01:33 | | |

|* 13 | HASH JOIN | | 13 | 3731 | 1622 (1)| 00:01:32 | | |

|* 14 | TABLE ACCESS FULL | W_MCAL_DAY_D | 2150 | 51600 | 425 (1)| 00:00:25 | | |

|* 15 | HASH JOIN | | 4281 | 1099K| 1197 (1)| 00:01:08 | | |

| 16 | TABLE ACCESS FULL | W_STATUS_D | 173 | 5536 | 2 (0)| 00:00:01 | | |

|* 17 | HASH JOIN | | 4281 | 965K| 1194 (1)| 00:01:08 | | |

| 18 | TABLE ACCESS BY INDEX ROWID | W_PROFIT_CENTER_D | 2 | 44 | 1 (0)| 00:00:01 | | |

| 19 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 20 | BITMAP INDEX SINGLE VALUE | W_PROFT_CNTR_D_M11 | | | | | | |

|* 21 | HASH JOIN | | 5704 | 1164K| 1192 (1)| 00:01:08 | | |

| 22 | TABLE ACCESS FULL | SYS_TEMP_0FD9D67D7_67849FF2 | 1 | 29 | 2 (0)| 00:00:01 | | |

| 23 | PARTITION RANGE SUBQUERY | | 5704 | 1002K| 1190 (1)| 00:01:08 |KEY(SQ)|KEY(SQ)|

|* 24 | TABLE ACCESS BY LOCAL INDEX ROWID | W_SALES_ORDER_LINE_F_TEST | 5704 | 1002K| 1190 (1)| 00:01:08 |KEY(SQ)|KEY(SQ)|

| 25 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

| 26 | BITMAP AND | | | | | | | |

| 27 | BITMAP MERGE | | | | | | | |

| 28 | BITMAP KEY ITERATION | | | | | | | |

| 29 | BUFFER SORT | | | | | | | |

| 30 | TABLE ACCESS FULL | SYS_TEMP_0FD9D67D7_67849FF2 | 1 | 13 | 2 (0)| 00:00:01 | | |

|* 31 | BITMAP INDEX RANGE SCAN | W_SLS_ORD_LN_F_T_F200 | | | | |KEY(SQ)|KEY(SQ)|

| 32 | BITMAP MERGE | | | | | | | |

| 33 | BITMAP KEY ITERATION | | | | | | | |

| 34 | BUFFER SORT | | | | | | | |

| 35 | TABLE ACCESS BY INDEX ROWID | W_PROFIT_CENTER_D | 2 | 44 | 1 (0)| 00:00:01 | | |

| 36 | BITMAP CONVERSION TO ROWIDS| | | | | | | |

|* 37 | BITMAP INDEX SINGLE VALUE | W_PROFT_CNTR_D_M11 | | | | | | |

|* 38 | BITMAP INDEX RANGE SCAN | W_SLS_ORD_LN_F_T_F100 | | | | |KEY(SQ)|KEY(SQ)|

|* 39 | INDEX UNIQUE SCAN | W_PARTY_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 40 | TABLE ACCESS BY INDEX ROWID | W_PARTY_D | 1 | 29 | 1 (0)| 00:00:01 | | |

|* 41 | TABLE ACCESS FULL | W_MCAL_DAY_D | 2150 | 51600 | 425 (1)| 00:00:25 | | |

|* 42 | TABLE ACCESS FULL | W_MCAL_DAY_D | 2150 | 51600 | 425 (1)| 00:00:25 | | |

| 43 | TABLE ACCESS FULL | W_PAYMENT_TERMS_D | 6227 | 99632 | 6 (0)| 00:00:01 | | |

| 44 | TABLE ACCESS FULL | W_XACT_TYPE_D | 1885K| 41M| 1569 (1)| 00:01:29 | | |

-----------------------------------------------------------------------------------------------------------------------------------------------

With a local b-tree index on the partitioning key column, the Optimizer abandons the star query and switches to a Partition Range Iterator scan. If the b-tree index is created as global, then the Optimizer uses Nested Loops + Index Range Scan instead.

ALTER INDEX W_SLS_ORD_LN_F_T_F100 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F200 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F300 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F500 VISIBLE; -- local index on the partitioning key column

Execution Plan

------------------------------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |

------------------------------------------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 415 | 475 (2)| 00:00:27 | | |

| 1 | SORT GROUP BY | | 1 | 415 | 475 (2)| 00:00:27 | | |

| 2 | NESTED LOOPS | | | | | | | |

| 3 | NESTED LOOPS | | 1 | 415 | 474 (2)| 00:00:27 | | |

| 4 | NESTED LOOPS | | 1 | 383 | 473 (2)| 00:00:27 | | |

| 5 | NESTED LOOPS | | 1 | 367 | 472 (2)| 00:00:27 | | |

| 6 | NESTED LOOPS | | 1 | 344 | 471 (2)| 00:00:27 | | |

| 7 | NESTED LOOPS | | 1 | 315 | 470 (2)| 00:00:27 | | |

| 8 | NESTED LOOPS | | 1 | 291 | 231 (2)| 00:00:14 | | |

| 9 | NESTED LOOPS | | 1 | 267 | 112 (1)| 00:00:07 | | |

|* 10 | HASH JOIN | | 30 | 7290 | 17 (6)| 00:00:01 | | |

| 11 | NESTED LOOPS | | | | | | | |

| 12 | NESTED LOOPS | | 40 | 8840 | 15 (0)| 00:00:01 | | |

|* 13 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 41 | 12 (0)| 00:00:01 | | |

| 14 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 15 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_M4 | | | | | | |

| 16 | PARTITION RANGE ITERATOR | | 1 | | 2 (0)| 00:00:01 | KEY | KEY |

|* 17 | INDEX RANGE SCAN | W_SLS_ORD_LN_F_T_F500 | 1 | | 2 (0)| 00:00:01 | KEY | KEY |

|* 18 | TABLE ACCESS BY LOCAL INDEX ROWID| W_SALES_ORDER_LINE_F_TEST | 76 | 15352 | 3 (0)| 00:00:01 | 1 | 1 |

| 19 | TABLE ACCESS BY INDEX ROWID | W_PROFIT_CENTER_D | 2 | 44 | 1 (0)| 00:00:01 | | |

| 20 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 21 | BITMAP INDEX SINGLE VALUE | W_PROFT_CNTR_D_M11 | | | | | | |


|* 22 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 24 | 112 (1)| 00:00:07 | | |

| 23 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

| 24 | BITMAP AND | | | | | | | |

|* 25 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F2 | | | | | | |

|* 26 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_T_F1 | | | | | | |

|* 27 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 24 | 231 (2)| 00:00:14 | | |

| 28 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 29 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F2 | | | | | | |

|* 30 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 24 | 470 (2)| 00:00:27 | | |

| 31 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 32 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F2 | | | | | | |

| 33 | TABLE ACCESS BY INDEX ROWID | W_PARTY_D | 1 | 29 | 1 (0)| 00:00:01 | | |

|* 34 | INDEX UNIQUE SCAN | W_PARTY_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 35 | TABLE ACCESS BY INDEX ROWID | W_XACT_TYPE_D | 1 | 23 | 1 (0)| 00:00:01 | | |

|* 36 | INDEX UNIQUE SCAN | W_XACT_TYPE_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 37 | TABLE ACCESS BY INDEX ROWID | W_PAYMENT_TERMS_D | 1 | 16 | 1 (0)| 00:00:01 | | |

|* 38 | INDEX UNIQUE SCAN | W_PAYMNT_TRM_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

|* 39 | INDEX UNIQUE SCAN | W_STATUS_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 40 | TABLE ACCESS BY INDEX ROWID | W_STATUS_D | 1 | 32 | 1 (0)| 00:00:01 | | |

------------------------------------------------------------------------------------------------------------------------------------------

And the plan with the global b-tree index W_SLS_ORD_LN_F_T_F500 on the partitioning key column:

Execution Plan

-------------------------------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |

-------------------------------------------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 415 | 475 (2)| 00:00:27 | | |

| 1 | SORT GROUP BY | | 1 | 415 | 475 (2)| 00:00:27 | | |

| 2 | NESTED LOOPS | | | | | | | |

| 3 | NESTED LOOPS | | 1 | 415 | 474 (2)| 00:00:27 | | |

| 4 | NESTED LOOPS | | 1 | 383 | 473 (2)| 00:00:27 | | |

| 5 | NESTED LOOPS | | 1 | 367 | 472 (2)| 00:00:27 | | |

| 6 | NESTED LOOPS | | 1 | 344 | 471 (2)| 00:00:27 | | |

| 7 | NESTED LOOPS | | 1 | 315 | 470 (2)| 00:00:27 | | |

| 8 | NESTED LOOPS | | 1 | 291 | 231 (2)| 00:00:14 | | |

| 9 | NESTED LOOPS | | 1 | 267 | 112 (1)| 00:00:07 | | |

|* 10 | HASH JOIN | | 30 | 7290 | 17 (6)| 00:00:01 | | |

| 11 | NESTED LOOPS | | | | | | | |

| 12 | NESTED LOOPS | | 40 | 8840 | 15 (0)| 00:00:01 | | |

|* 13 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 41 | 12 (0)| 00:00:01 | | |

| 14 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 15 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_M4 | | | | | | |

|* 16 | INDEX RANGE SCAN | W_SLS_ORD_LN_F_T_F500 | 1 | | 2 (0)| 00:00:01 | | |

|* 17 | TABLE ACCESS BY GLOBAL INDEX ROWID| W_SALES_ORDER_LINE_F_TEST | 76 | 15352 | 3 (0)| 00:00:01 | ROWID | ROWID |

| 18 | TABLE ACCESS BY INDEX ROWID | W_PROFIT_CENTER_D | 2 | 44 | 1 (0)| 00:00:01 | | |

| 19 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 20 | BITMAP INDEX SINGLE VALUE | W_PROFT_CNTR_D_M11 | | | | | | |

|* 21 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 24 | 112 (1)| 00:00:07 | | |

| 22 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

| 23 | BITMAP AND | | | | | | | |

|* 24 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F2 | | | | | | |

|* 25 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_T_F1 | | | | | | |

|* 26 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 24 | 231 (2)| 00:00:14 | | |

| 27 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 28 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F2 | | | | | | |

|* 29 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 24 | 470 (2)| 00:00:27 | | |

| 30 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 31 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F2 | | | | | | |

| 32 | TABLE ACCESS BY INDEX ROWID | W_PARTY_D | 1 | 29 | 1 (0)| 00:00:01 | | |

|* 33 | INDEX UNIQUE SCAN | W_PARTY_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 34 | TABLE ACCESS BY INDEX ROWID | W_XACT_TYPE_D | 1 | 23 | 1 (0)| 00:00:01 | | |

|* 35 | INDEX UNIQUE SCAN | W_XACT_TYPE_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 36 | TABLE ACCESS BY INDEX ROWID | W_PAYMENT_TERMS_D | 1 | 16 | 1 (0)| 00:00:01 | | |

|* 37 | INDEX UNIQUE SCAN | W_PAYMNT_TRM_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

|* 38 | INDEX UNIQUE SCAN | W_STATUS_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 39 | TABLE ACCESS BY INDEX ROWID | W_STATUS_D | 1 | 32 | 1 (0)| 00:00:01 | | |

-------------------------------------------------------------------------------------------------------------------------------------------

Leaving only two bitmap indexes on the columns used in the join conditions between the fact and dimension tables improves the query plan but eliminates star transformation:

T90499.ORDERED_ON_DT_WID = T156337.MCAL_DAY_DT_WID

and T90499.CHNL_TYPE_WID = T156337.MCAL_CAL_WID

...

and T90499.PROFIT_CENTER_WID = T92473.ROW_WID

...

and T156337.MCAL_PERIOD_NAME = 'JAN-05'


The following combination of enabled indexes does not produce the star query plan:

ALTER INDEX W_SLS_ORD_LN_F_T_F100 INVISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F200 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F300 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F500 INVISIBLE;

Execution Plan

------------------------------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |

------------------------------------------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 415 | 770 (1)| 00:00:44 | | |

| 1 | SORT GROUP BY | | 1 | 415 | 770 (1)| 00:00:44 | | |

| 2 | NESTED LOOPS | | | | | | | |

| 3 | NESTED LOOPS | | 1 | 415 | 769 (1)| 00:00:44 | | |

| 4 | NESTED LOOPS | | 1 | 392 | 768 (1)| 00:00:44 | | |

| 5 | NESTED LOOPS | | 1 | 376 | 767 (1)| 00:00:44 | | |

| 6 | NESTED LOOPS | | 1 | 344 | 766 (1)| 00:00:44 | | |

| 7 | NESTED LOOPS | | 1 | 315 | 765 (1)| 00:00:44 | | |

| 8 | NESTED LOOPS | | 1 | 291 | 379 (1)| 00:00:22 | | |

| 9 | NESTED LOOPS | | 1 | 267 | 186 (1)| 00:00:11 | | |

|* 10 | HASH JOIN | | 30 | 7290 | 41 (0)| 00:00:03 | | |

| 11 | NESTED LOOPS | | | | | | | |

| 12 | NESTED LOOPS | | 40 | 8840 | 40 (0)| 00:00:03 | | |

|* 13 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 41 | 12 (0)| 00:00:01 | | |

| 14 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 15 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_M4 | | | | | | |

| 16 | PARTITION RANGE ITERATOR | | | | | | KEY | KEY |

| 17 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

| 18 | BITMAP AND | | | | | | | |

|* 19 | BITMAP INDEX SINGLE VALUE | W_SLS_ORD_LN_F_T_F200 | | | | | KEY | KEY |

|* 20 | BITMAP INDEX SINGLE VALUE | W_SLS_ORD_LN_F_T_F300 | | | | | KEY | KEY |

|* 21 | TABLE ACCESS BY LOCAL INDEX ROWID| W_SALES_ORDER_LINE_F_TEST | 76 | 13680 | 40 (0)| 00:00:03 | 1 | 1 |

| 22 | TABLE ACCESS BY INDEX ROWID | W_PROFIT_CENTER_D | 2 | 44 | 1 (0)| 00:00:01 | | |

| 23 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 24 | BITMAP INDEX SINGLE VALUE | W_PROFT_CNTR_D_M11 | | | | | | |

|* 25 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 24 | 186 (1)| 00:00:11 | | |

| 26 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

| 27 | BITMAP AND | | | | | | | |

|* 28 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F2 | | | | | | |

|* 29 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_T_F1 | | | | | | |

|* 30 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 24 | 379 (1)| 00:00:22 | | |

| 31 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 32 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F2 | | | | | | |

|* 33 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 24 | 765 (1)| 00:00:44 | | |

| 34 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 35 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F2 | | | | | | |

| 36 | TABLE ACCESS BY INDEX ROWID | W_PARTY_D | 1 | 29 | 1 (0)| 00:00:01 | | |

|* 37 | INDEX UNIQUE SCAN | W_PARTY_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 38 | TABLE ACCESS BY INDEX ROWID | W_STATUS_D | 1 | 32 | 1 (0)| 00:00:01 | | |

|* 39 | INDEX UNIQUE SCAN | W_STATUS_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 40 | TABLE ACCESS BY INDEX ROWID | W_PAYMENT_TERMS_D | 1 | 16 | 1 (0)| 00:00:01 | | |

|* 41 | INDEX UNIQUE SCAN | W_PAYMNT_TRM_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

|* 42 | INDEX UNIQUE SCAN | W_XACT_TYPE_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 43 | TABLE ACCESS BY INDEX ROWID | W_XACT_TYPE_D | 1 | 23 | 1 (0)| 00:00:01 | | |

------------------------------------------------------------------------------------------------------------------------------------------

When a global index on the partitioning key column is re-created using explicit partitioning syntax, the Optimizer chooses partition pruning for the execution plan.

DROP INDEX W_SLS_ORD_LN_F_T_F200;

DROP INDEX W_SLS_ORD_LN_F_T_F400;

DROP INDEX W_SLS_ORD_LN_F_T_F500;

CREATE INDEX W_SLS_ORD_LN_F_T_F600 ON W_SALES_ORDER_LINE_F_TEST (ORDERED_ON_DT_WID)

GLOBAL PARTITION BY RANGE(ORDERED_ON_DT_WID) (

PARTITION p1 VALUES LESS THAN(20050100) TABLESPACE dwh,

PARTITION p2 VALUES LESS THAN(20050400) TABLESPACE dwh,

PARTITION p3 VALUES LESS THAN(MAXVALUE) TABLESPACE dwh);

ALTER INDEX W_SLS_ORD_LN_F_T_F100 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F300 VISIBLE;

ALTER INDEX W_SLS_ORD_LN_F_T_F600 VISIBLE;

Execution Plan


-------------------------------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |

-------------------------------------------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 415 | 2017 (1)| 00:01:55 | | |

| 1 | SORT GROUP BY | | 1 | 415 | 2017 (1)| 00:01:55 | | |

| 2 | NESTED LOOPS | | | | | | | |

| 3 | NESTED LOOPS | | 1 | 415 | 2016 (1)| 00:01:55 | | |

| 4 | NESTED LOOPS | | 1 | 383 | 2015 (1)| 00:01:55 | | |

| 5 | NESTED LOOPS | | 1 | 367 | 2014 (1)| 00:01:54 | | |

| 6 | NESTED LOOPS | | 1 | 344 | 2013 (1)| 00:01:54 | | |

| 7 | NESTED LOOPS | | 1 | 315 | 2012 (1)| 00:01:54 | | |

|* 8 | HASH JOIN | | 1 | 293 | 2011 (1)| 00:01:54 | | |

|* 9 | HASH JOIN | | 1 | 269 | 1586 (1)| 00:01:30 | | |

|* 10 | HASH JOIN | | 1 | 245 | 1161 (1)| 00:01:06 | | |

| 11 | NESTED LOOPS | | | | | | | |

| 12 | NESTED LOOPS | | 40 | 8840 | 735 (0)| 00:00:42 | | |

|* 13 | TABLE ACCESS BY INDEX ROWID | W_MCAL_DAY_D | 1 | 41 | 12 (0)| 00:00:01 | | |

| 14 | BITMAP CONVERSION TO ROWIDS | | | | | | | |

|* 15 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_M4 | | | | | | |

| 16 | PARTITION RANGE ITERATOR | | 5704 | | 4 (0)| 00:00:01 | KEY | KEY |

|* 17 | INDEX RANGE SCAN | W_SLS_ORD_LN_F_T_F600 | 5704 | | 4 (0)| 00:00:01 | KEY | KEY |

|* 18 | TABLE ACCESS BY GLOBAL INDEX ROWID| W_SALES_ORDER_LINE_F_TEST | 76 | 16112 | 723 (0)| 00:00:41 | ROWID | ROWID |

|* 19 | TABLE ACCESS FULL | W_MCAL_DAY_D | 2150 | 51600 | 425 (1)| 00:00:25 | | |

|* 20 | TABLE ACCESS FULL | W_MCAL_DAY_D | 2150 | 51600 | 425 (1)| 00:00:25 | | |

|* 21 | TABLE ACCESS FULL | W_MCAL_DAY_D | 2150 | 51600 | 425 (1)| 00:00:25 | | |

|* 22 | TABLE ACCESS BY INDEX ROWID | W_PROFIT_CENTER_D | 1 | 22 | 1 (0)| 00:00:01 | | |

|* 23 | INDEX UNIQUE SCAN | W_PROFT_CNTR_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 24 | TABLE ACCESS BY INDEX ROWID | W_PARTY_D | 1 | 29 | 1 (0)| 00:00:01 | | |

|* 25 | INDEX UNIQUE SCAN | W_PARTY_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 26 | TABLE ACCESS BY INDEX ROWID | W_XACT_TYPE_D | 1 | 23 | 1 (0)| 00:00:01 | | |

|* 27 | INDEX UNIQUE SCAN | W_XACT_TYPE_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 28 | TABLE ACCESS BY INDEX ROWID | W_PAYMENT_TERMS_D | 1 | 16 | 1 (0)| 00:00:01 | | |

|* 29 | INDEX UNIQUE SCAN | W_PAYMNT_TRM_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

|* 30 | INDEX UNIQUE SCAN | W_STATUS_D_P1 | 1 | | 0 (0)| 00:00:01 | | |

| 31 | TABLE ACCESS BY INDEX ROWID | W_STATUS_D | 1 | 32 | 1 (0)| 00:00:01 | | |

-------------------------------------------------------------------------------------------------------------------------------------------

Conclusion

All reviewed scenarios show that the Oracle Optimizer switches between star transformation and partitioning pruning in its execution plans depending on the cost effectiveness of various index combinations. One combination of b-tree and bitmap indexes, local or global, could work well for one query and degrade other SQLs running against the same fact table. You should carefully review index usage and measure the impact before deciding which indexes to keep or drop in your warehouse schema.
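One low-overhead way to gather usage evidence before dropping an index is Oracle's built-in index usage monitoring; a minimal sketch, using one of the indexes from the example above:

-- Flag the index for usage monitoring, run a representative report workload,
-- then check whether the Optimizer ever touched it.
ALTER INDEX W_SLS_ORD_LN_F_T_F500 MONITORING USAGE;

-- ... run representative OBIEE queries ...

SELECT index_name, used, start_monitoring
  FROM v$object_usage
 WHERE index_name = 'W_SLS_ORD_LN_F_T_F500';

ALTER INDEX W_SLS_ORD_LN_F_T_F500 NOMONITORING USAGE;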

Note: Consider using partitioning pruning if your Oracle warehouse is running on the Exadata platform. The combination of compressed partitioned tables, hash joins, and no bitmap indexes can be very effective for a larger set of OBIEE queries. Refer to the Exadata section for more details.

Table Compression implementation guidelines

Table Compression Recommendations

Oracle Database table compression can be applied effectively to optimize space consumption and reduce buffer cache memory use in the Oracle Business Analytics Data Warehouse. Compressed tables require significantly less disk storage and deliver improved query performance due to reduced I/O and buffer cache requirements. It is a valuable feature, especially in a warehouse environment, where data is loaded once and read many times by end user queries.

Table compression requires careful analysis and planning to gain efficient space consumption and faster end user query performance while keeping incremental ETL within an acceptable execution time frame.

Review the following recommendations and guidelines before implementing table compression:

1. The recommended Oracle Database version is 11.1.0.7 or higher. You must apply database patches 8834636 and 8930565, and check with Oracle Support for any additional database patches.

2. Table compression should be implemented for target tables after careful analysis of DML operation types, data

volumes and ETL performance benchmarks.


3. The majority of initial Informatica mappings use Bulk Load, so their target tables can be compressed and still deliver comparable or better ETL performance. There is a smaller set of initial mappings that use the Normal Load type in Informatica; if you cannot change their Load type to Bulk, leave their corresponding target tables uncompressed.

4. Oracle Business Intelligence Applications delivers several Informatica mappings that perform mass updates during the initial ETL. The target tables for such mappings should NOT be compressed. W_POSITION_DH, updated by SIL_PositionDimensionHierarchy_AsIsUpdate_Full, is an example of such a compression exception.

5. Incremental Informatica mappings always use Normal Load mode, so table compression may cause performance overhead, especially for very large incremental volumes. Carefully benchmark the mappings against compressed tables before implementing compression in your production data warehouse.

6. Consider implementing table compression for partitioned fact tables at the partition level (see the sketch after this list):

a. Active partitions, loaded during incremental ETLs, should be uncompressed

b. Older, relatively static partitions can be good compression candidates

7. After compressing a table, you need to rebuild all its indexes (ALTER INDEX ... REBUILD syntax).
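A minimal sketch of the partition-level approach from items 6 and 7, assuming a yearly range-partitioned fact table such as the earlier W_WRKFC_EVT_MONTH_F example (the partition name is illustrative):

-- Compress an older, static partition; MOVE rewrites its blocks in compressed form.
ALTER TABLE W_WRKFC_EVT_MONTH_F MOVE PARTITION PART_2008 COMPRESS NOLOGGING;

-- Moving a partition marks its local index partitions unusable; rebuild them in place.
ALTER TABLE W_WRKFC_EVT_MONTH_F MODIFY PARTITION PART_2008 REBUILD UNUSABLE LOCAL INDEXES;

Active partitions loaded by incremental ETL stay uncompressed, per item 6a.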

Row Chaining in Compressed Tables after DML Updates and Deletes

DML operations such as updates and deletes may result in row chaining in compressed tables and cause regressions in query performance. Such row chaining is not flagged by the DBMS_STATS API, so you will not find any row chaining statistics in the database dictionary views (USER_TABLES, etc.).

To diagnose and troubleshoot this issue:

1. Check the NUM_ROWS, BLOCKS and AVG_ROW_LEN statistics for your table in USER_TABLES. You can estimate the expected number of blocks as NUM_ROWS * AVG_ROW_LEN * 1.16 (~16% block overhead) / block size; a query form of this estimate is sketched after this list.

For example, the W_GL_LINKAGE_INFORMATION_G table stats in a 32K db_block_size environment are:

Table Name                  Num Rows  Blocks  Avg Row Len  Chain Cnt  Compression
W_GL_LINKAGE_INFORMATION_G  26699924  679674  125          0          ENABLED

The estimated number of blocks is:

26699924 num_rows * 125 avg_row_len * 1.16 / 32768 =~ 118148 blocks

Compared to the 679674 blocks reported in USER_TABLES, the table occupies nearly six times the estimated number of blocks, which means it was most probably fragmented. Yet DBMS_STATS did not capture any row chaining, since CHAIN_CNT = 0.

2. Connect to your warehouse schema and run the script $ORACLE_HOME/rdbms/admin/utlchain.sql or manually create

the following table:

create table CHAINED_ROWS (

owner_name varchar2(30),

table_name varchar2(30),

cluster_name varchar2(30),

partition_name varchar2(30),

subpartition_name varchar2(30),

head_rowid rowid,

analyze_timestamp date);

3. Run the ‘analyze table’ command to capture the chained row count into the CHAINED_ROWS table:


You can capture similar information for partitioned tables by partition or sub-partition name:

4. Query the CHAINED_ROWS table to find out whether you have any chained rows:

select table_name, partition_name, count(1) from CHAINED_ROWS group by table_name, partition_name;

5. If you find chained rows in your compressed table, then you have to rebuild and re-compress the table (see the sketch below), and possibly reconsider your compression approach for the identified data segment(s).
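Two hedged sketches for the steps above. First, the block estimate from step 1 expressed as a dictionary query (a 32K db_block_size is assumed; adjust the divisor for your environment):

-- Compare actual blocks against the NUM_ROWS * AVG_ROW_LEN * 1.16 / block_size
-- estimate; a high ratio suggests fragmentation or chaining in compressed tables.
SELECT table_name,
       blocks,
       ROUND(num_rows * avg_row_len * 1.16 / 32768)               AS est_blocks,
       ROUND(blocks / (num_rows * avg_row_len * 1.16 / 32768), 1) AS ratio
  FROM user_tables
 WHERE compression = 'ENABLED'
   AND num_rows > 0
 ORDER BY ratio DESC;

Second, one way to rebuild and re-compress an affected table per step 5 (the index name is illustrative; rebuild every index listed in USER_INDEXES for the table):

-- MOVE rewrites all table blocks and re-applies compression, removing chained rows.
ALTER TABLE W_GL_LINKAGE_INFORMATION_G MOVE COMPRESS NOLOGGING;

-- MOVE leaves the table's indexes unusable; rebuild each of them.
ALTER INDEX W_GL_LINKAGE_INFO_G_U1 REBUILD NOLOGGING;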

ETL Aggregation using Materialized Views

Introduction

You may find some of the PLP mappings acting as bottlenecks in your incremental ETL runs. If the logic permits, you can consider substituting them with fast-refreshable Materialized Views (MVs). The DAC Action Framework provides the flexibility to define the steps needed to handle MVs and MV Logs during ETL executions.

Important! Plan the addition of MV Logs on target tables very carefully, and avoid sharing an MV Log between two or more MVs. An MV Log is not purged until all dependent MVs have been refreshed; unpurged MV Logs can grow in size and degrade DML performance on the base tables.
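A minimal monitoring sketch, assuming Oracle's standard dictionary views and default MLOG$_ log table naming:

-- Find the MV log created on a base table ...
SELECT master, log_table FROM user_mview_logs WHERE master = 'W_GL_BALANCE_F';

-- ... and check how many unpurged change records it currently holds.
SELECT COUNT(*) FROM MLOG$_W_GL_BALANCE_F;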

Follow the steps in the PLP_GLBalanceAggrByAcctSegCodes_Load example below to implement the MV aggregation logic.

Implement DAC Action Framework Support for MVs

PLP_GLBalanceAggrByAcctSegCodes_Load populates W_GL_BALANCE_A, so you need to create two actions in the DAC Action Framework:

An action to create W_GL_BALANCE_A as a Materialized View and build the Materialized View Logs that make the MV fast-refreshable.

An action to perform a complete refresh of the W_GL_BALANCE_A MV for initial ETLs and a fast refresh for incremental ETLs. Together they replace the logic of the original PLP_GLBalanceAggrByAcctSegCodes_Load_Full and PLP_GLBalanceAggrByAcctSegCodes_Load Informatica workflows.

1. Open DAC UI and select Tools > Seed Data > Actions > Task Action to create a new Task Action.

2. Create separate steps in the Value dialog for each of the SQLs below to drop and recreate the MV Logs, as well as to drop and create the MV:

drop materialized view log on W_GL_ACCOUNT_D;

drop materialized view log on W_GLACCT_GRPACCT_TMP;

drop materialized view log on W_GL_BALANCE_F;

create materialized view log on W_GL_ACCOUNT_D

with sequence, rowid (

ROW_WID

,account_seg1_code

,account_seg1_attrib

,account_seg2_code

,account_seg2_attrib

,account_seg3_code

,account_seg3_attrib

,account_seg4_code

,account_seg4_attrib

,account_seg5_code

,account_seg5_attrib


,account_seg6_code

,account_seg6_attrib

) including new values;

create materialized view log on W_GLACCT_GRPACCT_TMP

with sequence, rowid (GROUP_ACCT_WID, gl_account_wid) including new values;

create materialized view log on W_GL_BALANCE_F

with sequence, rowid (

ledger_wid

,profit_center_wid

,company_org_wid

,BUSN_AREA_ORG_WID

,balance_dt_wid

,balance_tm_wid

,treasury_symbol_wid

,db_cr_ind

,acct_curr_code

,loc_curr_code

,datasource_num_id

,tenant_id

--,x_custom

,translated_flag

,balance_acct_amt

,balance_loc_amt

,balance_global1_amt

,balance_global2_amt

,balance_global3_amt

,activity_acct_amt

,activity_loc_amt

,activity_global1_amt

,activity_global2_amt

,activity_global3_amt

,GL_ACCOUNT_WID

,x_begin_balance_amt_beq

,x_activity_amt_beq

) including new values;

drop materialized view MV_GL_BALANCE_A;

create materialized view MV_GL_BALANCE_A

build immediate refresh fast as

SELECT w_gl_balance_f.ledger_wid

,w_gl_balance_f.profit_center_wid

,w_gl_balance_f.company_org_wid

,w_gl_balance_f.busn_area_org_wid

,w_gl_account_d.group_acct_wid

,w_gl_balance_f.balance_dt_wid

,w_gl_balance_f.balance_tm_wid

,w_gl_balance_f.treasury_symbol_wid

,w_gl_balance_f.db_cr_ind

,w_gl_balance_f.acct_curr_code

,w_gl_balance_f.loc_curr_code

,w_gl_balance_f.datasource_num_id

,w_gl_balance_f.tenant_id

,w_gl_balance_f.translated_flag

,w_gl_account_d.account_seg1_code

,w_gl_account_d.account_seg1_attrib

,w_gl_account_d.account_seg2_code

,w_gl_account_d.account_seg2_attrib

,w_gl_account_d.account_seg3_code

,w_gl_account_d.account_seg3_attrib

,w_gl_account_d.account_seg4_code

,w_gl_account_d.account_seg4_attrib

,w_gl_account_d.account_seg5_code

,w_gl_account_d.account_seg5_attrib

,W_GL_ACCOUNT_D.ACCOUNT_SEG6_CODE


,W_GL_ACCOUNT_D.ACCOUNT_SEG6_ATTRIB

,SUM (w_gl_balance_f.balance_acct_amt) balance_acct_amt

,SUM (W_GL_BALANCE_F.BALANCE_LOC_AMT) BALANCE_LOC_AMT

,SUM (w_gl_balance_f.x_begin_balance_amt_beq) x_begin_balance_amt_beq

,SUM (w_gl_balance_f.balance_global1_amt) balance_global1_amt

,SUM (w_gl_balance_f.balance_global2_amt) balance_global2_amt

,SUM (w_gl_balance_f.balance_global3_amt) balance_global3_amt

,SUM (w_gl_balance_f.activity_acct_amt) activity_acct_amt

,SUM (W_GL_BALANCE_F.ACTIVITY_LOC_AMT) ACTIVITY_LOC_AMT

,SUM (w_gl_balance_f.x_activity_amt_beq) x_activity_amt_beq

,SUM (w_gl_balance_f.activity_global1_amt) activity_global1_amt

,SUM (w_gl_balance_f.activity_global2_amt) activity_global2_amt

,SUM (W_GL_BALANCE_F.ACTIVITY_GLOBAL3_AMT) ACTIVITY_GLOBAL3_AMT

,COUNT (W_GL_BALANCE_F.BALANCE_ACCT_AMT) cBALANCE_ACCT_AMT

,COUNT (W_GL_BALANCE_F.BALANCE_LOC_AMT) cBALANCE_LOC_AMT

,COUNT (w_gl_balance_f.x_begin_balance_amt_beq) cx_begin_balance_amt_beq

,COUNT (W_GL_BALANCE_F.BALANCE_GLOBAL1_AMT) CBALANCE_GLOBAL1_AMT

,COUNT (w_gl_balance_f.balance_global2_amt) cbalance_global2_amt

,COUNT (W_GL_BALANCE_F.BALANCE_GLOBAL3_AMT) cBALANCE_GLOBAL3_AMT

,COUNT (W_GL_BALANCE_F.ACTIVITY_ACCT_AMT) cACTIVITY_ACCT_AMT

,COUNT (W_GL_BALANCE_F.ACTIVITY_LOC_AMT) cACTIVITY_LOC_AMT

,COUNT (w_gl_balance_f.x_activity_amt_beq) cx_activity_amt_beq

,COUNT (W_GL_BALANCE_F.ACTIVITY_GLOBAL1_AMT) cACTIVITY_GLOBAL1_AMT

,COUNT (w_gl_balance_f.activity_global2_amt) cactivity_global2_amt

,COUNT (W_GL_BALANCE_F.ACTIVITY_GLOBAL3_AMT) cACTIVITY_GLOBAL3_AMT

,COUNT(*) CNT

,cast (null as date) W_INSERT_DT

,cast (null as date) W_UPDATE_DT

from W_GL_BALANCE_F

, (select /*+ USE_HASH(W_GLACCT_GRPACCT_TMP, W_GL_ACCOUNT_D)*/

W_GLACCT_GRPACCT_TMP.GROUP_ACCT_WID

,w_gl_account_d.*

FROM w_gl_account_d w_gl_account_d

,W_GLACCT_GRPACCT_TMP W_GLACCT_GRPACCT_TMP

WHERE w_gl_account_d.row_wid = w_glacct_grpacct_tmp.gl_account_wid) w_gl_account_d

WHERE 1 = 1

AND w_gl_balance_f.gl_account_wid = w_gl_account_d.row_wid

GROUP BY w_gl_balance_f.ledger_wid

,w_gl_balance_f.profit_center_wid

,w_gl_balance_f.company_org_wid

,w_gl_balance_f.busn_area_org_wid

,w_gl_account_d.group_acct_wid

,w_gl_balance_f.balance_dt_wid

,w_gl_balance_f.balance_tm_wid

,w_gl_balance_f.treasury_symbol_wid

,w_gl_balance_f.db_cr_ind

,w_gl_balance_f.acct_curr_code

,w_gl_balance_f.loc_curr_code

,w_gl_balance_f.datasource_num_id

,w_gl_balance_f.tenant_id

,w_gl_balance_f.translated_flag

,w_gl_account_d.account_seg1_code

,w_gl_account_d.account_seg1_attrib

,w_gl_account_d.account_seg2_code

,w_gl_account_d.account_seg2_attrib

,w_gl_account_d.account_seg3_code

,w_gl_account_d.account_seg3_attrib

,w_gl_account_d.account_seg4_code

,w_gl_account_d.account_seg4_attrib

,w_gl_account_d.account_seg5_code

,w_gl_account_d.account_seg5_attrib

,W_GL_ACCOUNT_D.ACCOUNT_SEG6_CODE

,W_GL_ACCOUNT_D.ACCOUNT_SEG6_ATTRIB

,cast (null as date);


Important! Make sure that you mark all “drop” steps as Continue on Fail, so that the whole ETL process does not halt because of

non-existent objects.

3. Create a Task Action for the MV Fast refresh. Select Tools > Seed Data > Actions > Task Action and create a new Action.

Enter the following anonymous PLSQL block into the SQL Statement text box:

begin

dbms_mview.refresh(list=>'MV_GL_BALANCE_A',method=>'F');

end;
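
For reference, the corresponding complete refresh uses method 'C'. This is a sketch you could use if you ever need to rebuild the MV data without recreating the MV; the approach documented here relies instead on recreating the MV with BUILD IMMEDIATE plus a 'Dummy Refresh' action for full loads:

begin
dbms_mview.refresh(list=>'MV_GL_BALANCE_A',method=>'C');
end;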


4. Create another Task Action ‘Dummy Refresh’ for the Materialized View without entering any SQL commands.

5. Locate PLP_GLBalanceAggrByAcctSegCodes_Load task in the DAC Design view and change the Execution Type to SQL File.

6. Replace Command for Incremental Load with a call to the Fast Refresh Task Action, and Command for Full Load

with a call to the ‘Dummy Refresh’ Task Action:


7. Navigate to the Target Tables tab and un-check the Truncate for Full Load checkbox; otherwise DAC would automatically

truncate the materialized view and cause ORA-32320 during its fast refresh.


8. Add a new Preceding Action for the task to run Create Materialized View Task Action.

9. Drop the original W_GL_BALANCE_A aggregate table using SQL*Plus before running an execution plan.

10. Regenerate the execution plan in DAC to pick up all the changes.

Updates Optimization using DBMS_PARALLEL_EXECUTE (11gR2)

Some Oracle BI Analytic Applications ETL mappings may incur additional overhead from processing heavy-volume updates. The

impact can be even more severe for flattened hierarchies, such as Position Dimension Hierarchy.

Oracle introduced a new PLSQL API, DBMS_PARALLEL_EXECUTE, in Oracle 11gR2, which can help speed up such heavy

updates. The package lets you update a table in parallel by grouping sets of rows into smaller chunks.


Important! The user must have CREATE JOB system privilege to execute the updates in parallel using the API.
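
For example, a DBA can grant the privilege as follows (DWH_USER is a hypothetical warehouse schema owner):

GRANT CREATE JOB TO DWH_USER;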

The following example shows how the API improved SIL_PositionDimensionHierarchy_AsIsUpdate_Full performance

more than 2 times, using a Degree of Parallelism (DOP) of 10 with the updated rows chunk size set to 50 rows.

The original UPDATE SQL:

UPDATE w_position_dh dh1

SET ( dh1.current_base_postn, dh1.current_base_postn_id,

dh1.current_base_divn,

dh1.current_base_login,

dh1.current_base_emp_full_name, dh1.current_base_emp_id,

dh1.current_lvl1anc_postn, dh1.current_lvl1anc_postn_id,

dh1.current_lvl1anc_divn, dh1.current_lvl1anc_login,

dh1.current_lvl1_emp_full_name, dh1.current_lvl1anc_emp_id,

dh1.current_lvl2anc_postn, dh1.current_lvl2anc_postn_id,

dh1.current_lvl2anc_divn, dh1.current_lvl2anc_login,

dh1.current_top_lvl_divn, dh1.current_top_lvl_login,

dh1.current_top_emp_full_name, dh1.current_top_lvl_emp_id ) = (

SELECT

dh2.base_postn base_postn,

dh2.base_postn_id base_postn_id,

dh2.base_divn base_divn,

dh2.base_login base_login,

dh2.base_emp_full_name,

dh2.base_emp_id base_emp_id,

dh2.lvl1anc_postn,

dh2.lvl1anc_postn_id,

dh2.lvl1anc_divn,

dh2.lvl1anc_login,

dh2.lvl1_emp_full_name,

dh2.lvl1anc_emp_id,

dh2.lvl2anc_postn,

dh2.lvl2anc_postn_id,

dh2.lvl2anc_divn,

dh2.lvl2anc_login,

dh2.top_lvl_divn,

dh2.top_lvl_login,

dh2.top_emp_full_name,

dh2.top_lvl_emp_id

FROM

w_position_dh dh2

WHERE

dh2.current_flg = 'Y'

AND dh1.base_postn_id = dh2.base_postn_id

AND dh1.datasource_num_id = dh2.datasource_num_id)

The PLSQL block below uses DBMS_PARALLEL_EXECUTE API to update W_POSITION_DH in parallel with DOP 10 by dividing the update rows into 50 row chunks.

DECLARE

l_sql_stmt clob;

l_try NUMBER;

l_status NUMBER;

BEGIN

-- Create the TASK

DBMS_PARALLEL_EXECUTE.CREATE_TASK ('mytask');

-- Chunk the table by ROWID

DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID('mytask', USER, 'W_POSITION_DH', true, 50);

-- Execute the DML in parallel

l_sql_stmt := 'update /*+ ROWID (dh1) */ w_position_dh dh1

SET ( dh1.current_base_postn, dh1.current_base_postn_id,


dh1.current_base_divn,

dh1.current_base_login,

dh1.current_base_emp_full_name, dh1.current_base_emp_id,

dh1.current_lvl1anc_postn, dh1.current_lvl1anc_postn_id,

dh1.current_lvl1anc_divn, dh1.current_lvl1anc_login,

dh1.current_lvl1_emp_full_name, dh1.current_lvl1anc_emp_id,

dh1.current_lvl2anc_postn, dh1.current_lvl2anc_postn_id,

dh1.current_lvl2anc_divn, dh1.current_lvl2anc_login,

dh1.current_lvl2_emp_full_name, dh1.current_lvl2anc_emp_id,

dh1.current_lvl3anc_postn, dh1.current_lvl3anc_postn_id,

dh1.current_lvl3anc_divn, dh1.current_lvl3anc_login,

dh1.current_lvl3_emp_full_name, dh1.current_lvl3anc_emp_id,

dh1.current_lvl4anc_postn, dh1.current_lvl4anc_postn_id,

dh1.current_top_lvl_divn, dh1.current_top_lvl_login,

dh1.current_top_emp_full_name, dh1.current_top_lvl_emp_id ) = (

SELECT

dh2.base_postn base_postn,

dh2.base_postn_id base_postn_id,

dh2.base_divn base_divn,

dh2.base_login base_login,

dh2.base_emp_full_name,

dh2.base_emp_id base_emp_id,

dh2.lvl1anc_postn,

dh2.lvl1anc_postn_id,

dh2.lvl1anc_divn,

dh2.lvl1anc_login,

dh2.lvl1_emp_full_name,

dh2.lvl1anc_emp_id,

dh2.lvl2anc_postn,

dh2.lvl2anc_postn_id,

dh2.lvl2anc_divn,

dh2.lvl2anc_login,

dh2.lvl2_emp_full_name,

dh2.lvl2anc_emp_id,

dh2.lvl3anc_postn,

dh2.lvl3anc_postn_id,

dh2.lvl3anc_divn,

dh2.lvl3anc_login,

dh2.lvl3_emp_full_name,

dh2.lvl3anc_emp_id,

dh2.lvl4anc_postn,

dh2.lvl4anc_postn_id,

dh2.top_lvl_divn,

dh2.top_lvl_login,

dh2.top_emp_full_name,

dh2.top_lvl_emp_id

FROM

w_position_dh dh2

WHERE

dh2.current_flg = ''Y''

AND dh1.base_postn_id = dh2.base_postn_id

AND dh1.datasource_num_id = dh2.datasource_num_id)

WHERE rowid BETWEEN :start_id AND :end_id';

DBMS_PARALLEL_EXECUTE.RUN_TASK('mytask', l_sql_stmt, DBMS_SQL.NATIVE,

parallel_level => 10);

-- If there is an error, RESUME it for at most 2 times.

L_try := 0;

L_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS('mytask');

WHILE(l_try < 2 and L_status != DBMS_PARALLEL_EXECUTE.FINISHED)

LOOP

L_try := l_try + 1;

DBMS_PARALLEL_EXECUTE.RESUME_TASK('mytask');

L_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS('mytask');

END LOOP;

-- Done with processing; drop the task

DBMS_PARALLEL_EXECUTE.DROP_TASK('mytask');

END;

/

You can check the status of each chunk by running the following SQL:

SQL> SELECT chunk_id, status, start_rowid, end_rowid

2 FROM user_parallel_execute_chunks

3 WHERE task_name = 'mytask'


4* and status='ASSIGNED';

CHUNK_ID STATUS START_ROWID END_ROWID

---------- -------------------- ------------------ ------------------

635969 ASSIGNED AABKDGAAHAAB2egAAA AABKDGAAHAAB2e/CcP

635970 ASSIGNED AABKDGAAHAAB2fAAAA AABKDGAAHAAB2ffCcP

635971 ASSIGNED AABKDGAAHAAB2iAAAA AABKDGAAHAAB2ifCcP

635973 ASSIGNED AABKDGAAHAAB2mgAAA AABKDGAAHAAB2m/CcP

635975 ASSIGNED AABKDGAAHAAB2sgAAA AABKDGAAHAAB2s/CcP

635977 ASSIGNED AABKDGAAHAAB2wAAAA AABKDGAAHAAB2wfCcP

635966 ASSIGNED AABKDGAAHAAB2WAAAA AABKDGAAHAAB2WfCcP

635968 ASSIGNED AABKDGAAHAAB2agAAA AABKDGAAHAAB2a/CcP

635974 ASSIGNED AABKDGAAHAAB2pgAAA AABKDGAAHAAB2p/CcP

635976 ASSIGNED AABKDGAAHAAB2tAAAA AABKDGAAHAAB2tfCcP

10 rows selected.
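
You can also check the overall task state through the standard dictionary view (a quick sketch):

SQL> SELECT task_name, chunk_type, status
  2  FROM user_parallel_execute_tasks;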

Performance of wide tables with over 255 columns

Introduction

Oracle Database supports relational tables with up to 1000 columns. Though there is no difference in the logical structure of a

wide table, Oracle splits the rows of tables exceeding the 255-column limit into multiple row-pieces of up to 255 columns each.

Even if there is enough free space in a single block, Oracle allocates another block for the next row-piece. As a result, Oracle

has to generate recursive calls to dynamically allocate space for the chained rows at read/write time.

Oracle BI Applications physical data model contains several wide dimension tables, such as W_ORG_D, W_SOURCE_D,

W_PERSON_D, which could have over 255 columns after end user customizations. The table below shows the comparison

statistics for a sample W_ORG_D with 254 and 300 columns:

W_ORG_D with 300 columns:

Time: 186 sec
Statistics
----------------------------------------------------------
657 recursive calls
0 db block gets
134975 consistent gets
134867 physical reads
0 redo size
382 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
1 rows processed

W_ORG_D with 254 columns:

Time: 54 sec
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
134888 consistent gets
134864 physical reads
0 redo size
382 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Depending on query complexity, the number of physical reads can also be much higher for wide tables with more than

255 columns. This limitation can have a critical impact on Oracle BI Applications dashboard performance.

Wide table structure optimization

Since the wide dimension tables were designed to consolidate attributes from multiple source databases, very few customer

implementations use all the pre-defined attributes. Since the unused columns store only NULLs, consider rebuilding wide

tables with over 255 columns, moving the columns containing only NULLs to the end. Oracle does not allocate space

to NULL columns at the end of a row, so they do not create chained row-pieces.
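
A minimal sketch for identifying candidate columns that contain only NULLs, based on optimizer statistics (W_ORG_D is used as an example; the result is only as accurate as the most recent statistics gathering):

select c.column_name
from user_tab_col_statistics c, user_tables t
where t.table_name = 'W_ORG_D'
and c.table_name = t.table_name
and t.num_rows > 0
and c.num_nulls = t.num_rows;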


Important! Optimized wide tables must be created from scratch, since the existing tables already contain chained

rows; an ‘ALTER TABLE’ command would not resolve the chaining problem.

After rebuilding a wide table, verify that all ETL and Query indexes get created as well.

Guidelines for Oracle optimizer hints usage in ETL mappings

Hash Joins versus Nested Loops in Oracle RDBMS

Though the Oracle optimizer chooses the plan with the least estimated cost for a query, database hints can sometimes

improve efficiency and increase overall ETL performance in very large volume Oracle Business Analytics Data Warehouses,

in spite of a higher estimated query cost.

If the tables used in a query have indexes defined on the joining columns in the WHERE clause, the optimizer might choose a Nested

Loops join over a Hash join, accessing a table through an index defined on a join column. Although this approach may start

returning results sooner, the overall time to fetch all the records can be considerably longer.

Specifying the USE_HASH hint changes the execution plan to use full table scans (in some cases the optimizer might still

use indexes, such as an index fast full scan) for the tables involved in the query. The initial record fetch may take more time as hash

tables are built in memory, but the overall time to fetch all the records is reduced quite dramatically.

Important! Oracle might take up to 8-10 hours just to build hash tables in memory for very large tables (over 100 million

records), so it is important not to kill the query.

ETL is a batch process measured by the overall time to load all the records, so you should avoid nested loops by

incorporating the USE_HASH hint for tables with volumes over ten million records.

The real-life example below compares NESTED LOOPS and HASH JOIN execution. The numbers apply to the

specific test case configuration and will vary depending on hardware specifications and database

settings.

Table No of Rows

--------------------- ----------

PAY_RUN_RESULT_VALUES 900 Million

PAY_RUN_RESULTS 14 Million

PAY_ASSIGNMENT_ACTIONS 50 Million

PAY_INPUT_VALUES_F 10000

PAY_ELEMENT_TYPES_F 10000

PAY_PAYROLL_ACTIONS 1445896

PAY_ELEMENT_CLASSIFICATIONS 1897

PER_TIME_PERIODS 52728

SELECT PAY_ASSIGNMENT_ACTIONS.ASSIGNMENT_ACTION_ID,

PAY_ASSIGNMENT_ACTIONS.ASSIGNMENT_ID,

PAY_ELEMENT_TYPES_F.INPUT_CURRENCY_CODE,

PAY_ELEMENT_TYPES_F.OUTPUT_CURRENCY_CODE,

PER_TIME_PERIODS.END_DATE,

PER_TIME_PERIODS.START_DATE,

PAY_PAYROLL_ACTIONS.PAY_ADVICE_DATE,

PAY_PAYROLL_ACTIONS.LAST_UPDATE_DATE,

PAY_PAYROLL_ACTIONS.LAST_UPDATED_BY,

PAY_PAYROLL_ACTIONS.CREATED_BY,

PAY_PAYROLL_ACTIONS.CREATION_DATE,

PAY_RUN_RESULT_VALUES.INPUT_VALUE_ID,

PAY_RUN_RESULT_VALUES.RUN_RESULT_ID,

PAY_RUN_RESULT_VALUES.RESULT_VALUE,

PAY_RUN_RESULTS.ELEMENT_TYPE_ID,

PAY_INPUT_VALUES_F.LAST_UPDATE_DATE LAST_UPDATE_DATE1,

PAY_ELEMENT_TYPES_F.LAST_UPDATE_DATE LAST_UPDATE_DATE2

FROM PAY_RUN_RESULT_VALUES,

PAY_RUN_RESULTS,

Page 67: Oracle Business Intelligence Applications Version 7.9.6.x ...docshare01.docshare.tips/files/23346/233460888.pdf · Oracle Business Intelligence Applications Version 7.9.6.x Performance

67

PAY_INPUT_VALUES_F,

PAY_ASSIGNMENT_ACTIONS,

PAY_ELEMENT_TYPES_F,

PAY_PAYROLL_ACTIONS,

PAY_ELEMENT_CLASSIFICATIONS,

PER_TIME_PERIODS

WHERE (PAY_PAYROLL_ACTIONS.LAST_UPDATE_DATE >= TO_DATE('01/01/2007 00:00:00','MM/DD/YYYY HH24:MI:SS')

OR PAY_INPUT_VALUES_F.LAST_UPDATE_DATE >= TO_DATE('01/01/2007 00:00:00','MM/DD/YYYY HH24:MI:SS')

OR PAY_ELEMENT_TYPES_F.LAST_UPDATE_DATE >= TO_DATE('01/01/2007 00:00:00','MM/DD/YYYY HH24:MI:SS'))

AND PAY_PAYROLL_ACTIONS.ACTION_STATUS = 'C'

AND PAY_PAYROLL_ACTIONS.ACTION_POPULATION_STATUS = 'C'

AND PAY_ASSIGNMENT_ACTIONS.ACTION_STATUS = 'C'

AND PAY_INPUT_VALUES_F.UOM = 'M'

AND PAY_RUN_RESULT_VALUES.RUN_RESULT_ID = PAY_RUN_RESULTS.RUN_RESULT_ID

AND PAY_RUN_RESULT_VALUES.INPUT_VALUE_ID = PAY_INPUT_VALUES_F.INPUT_VALUE_ID

AND PAY_RUN_RESULTS.ASSIGNMENT_ACTION_ID = PAY_ASSIGNMENT_ACTIONS.ASSIGNMENT_ACTION_ID

AND PAY_RUN_RESULTS.ELEMENT_TYPE_ID = PAY_ELEMENT_TYPES_F.ELEMENT_TYPE_ID

AND PAY_ASSIGNMENT_ACTIONS.PAYROLL_ACTION_ID = PAY_PAYROLL_ACTIONS.PAYROLL_ACTION_ID

AND PAY_PAYROLL_ACTIONS.EFFECTIVE_DATE BETWEEN PAY_INPUT_VALUES_F.EFFECTIVE_START_DATE

AND PAY_INPUT_VALUES_F.EFFECTIVE_END_DATE

AND PAY_PAYROLL_ACTIONS.EFFECTIVE_DATE BETWEEN PAY_ELEMENT_TYPES_F.EFFECTIVE_START_DATE

AND PAY_ELEMENT_TYPES_F.EFFECTIVE_END_DATE

AND PAY_ELEMENT_CLASSIFICATIONS.CLASSIFICATION_ID = PAY_ELEMENT_TYPES_F.CLASSIFICATION_ID

AND PER_TIME_PERIODS.TIME_PERIOD_ID = PAY_PAYROLL_ACTIONS.TIME_PERIOD_ID

AND PAY_INPUT_VALUES_F.NAME = 'Pay Value'

AND CLASSIFICATION_NAME NOT LIKE '%Information%'

AND CLASSIFICATION_NAME NOT LIKE '%Employer%'

AND CLASSIFICATION_NAME NOT LIKE '%Balance%'

AND PAY_RUN_RESULTS.SOURCE_TYPE IN ('I', 'E')

The Explain Plan for the query is below:

Plan hash value: 1498624813

Id Operation Name Rows Bytes TempSpc Cost (%CPU) Time

0 SELECT STATEMENT 60 14040 111K (2) 00:22:23

1 CONCATENATION

2 NESTED LOOPS 55 12870 83423 (2) 00:16:42

3 HASH JOIN 59 12980 8064K 83304 (2) 00:16:40

4 TABLE ACCESS BY INDEX ROWID PAY_ASSIGNMENT_ACTIONS 7 147 7 (0) 00:00:01

5 NESTED LOOPS 38937 7604K 47490 (1) 00:09:30

6 HASH JOIN 5503 961K 9053 (4) 00:01:49

7 TABLE ACCESS FULL PAY_ELEMENT_CLASSIFICATIONS 1626 47154 13 (0) 00:00:01

8 MERGE JOIN 5505 806K 9039 (4) 00:01:49

9 SORT JOIN 3369 355K 8931 (4) 00:01:48

10 HASH JOIN 3369 355K 8930 (4) 00:01:48

11 MERGE JOIN 3527 292K 8579 (4) 00:01:43

12 SORT JOIN 15 675 156 (3) 00:00:02

13 TABLE ACCESS FULL PAY_INPUT_VALUES_F 15 675 155 (2) 00:00:02

14 FILTER

15 SORT JOIN 96393 3765K 11M 8424 (4) 00:01:42

16 TABLE ACCESS FULL PAY_PAYROLL_ACTIONS 96393 3765K 7424 (5) 00:01:30

17 TABLE ACCESS FULL PER_TIME_PERIODS 52728 1184K 349 (1) 00:00:05

18 FILTER

19 SORT JOIN 654 27468 106 (3) 00:00:02

20 TABLE ACCESS FULL PAY_ELEMENT_TYPES_F 654 27468 105 (2) 00:00:02

21 INDEX RANGE SCAN PAY_ASSIGNMENT_ACTIONS_N50 35 3 (0) 00:00:01

22 TABLE ACCESS FULL PAY_RUN_RESULTS 9986K 190M 20007 (4) 00:04:01

23 TABLE ACCESS BY INDEX ROWID PAY_RUN_RESULT_VALUES 1 14 3 (0) 00:00:01

24 INDEX UNIQUE SCAN PAY_RUN_RESULT_VALUES_PK 1 2 (0) 00:00:01

25 NESTED LOOPS 4 936 19634 (1) 00:03:56

26 HASH JOIN 4 820 19630 (1) 00:03:56

27 NESTED LOOPS 1460 232K 19524 (1) 00:03:55


28 NESTED LOOPS 1552 225K 14863 (1) 00:02:59

29 NESTED LOOPS 1579 198K 8538 (1) 00:01:43

30 NESTED LOOPS 223 24084 6974 (1) 00:01:24

31 NESTED LOOPS 234 19890 6740 (1) 00:01:21

32 TABLE ACCESS FULL PAY_INPUT_VALUES_F 1 45 155 (2) 00:00:02

33 TABLE ACCESS BY INDEX ROWID PAY_PAYROLL_ACTIONS 241 9640 6585 (1) 00:01:20

34 INDEX RANGE SCAN PAY_PAYROLL_ACTIONS_N5 72295 341 (1) 00:00:05

35 TABLE ACCESS BY INDEX ROWID PER_TIME_PERIODS 1 23 1 (0) 00:00:01

36 INDEX UNIQUE SCAN PER_TIME_PERIODS_PK 1 0 (0) 00:00:01

37 TABLE ACCESS BY INDEX ROWID PAY_ASSIGNMENT_ACTIONS 7 147 7 (0) 00:00:01

38 INDEX RANGE SCAN PAY_ASSIGNMENT_ACTIONS_N50 35 3 (0) 00:00:01

39 TABLE ACCESS BY INDEX ROWID PAY_RUN_RESULTS 1 20 4 (0) 00:00:01

40 INDEX RANGE SCAN PAY_RUN_RESULTS_N50 20 2 (0) 00:00:01

41 TABLE ACCESS BY INDEX ROWID PAY_RUN_RESULT_VALUES 1 14 3 (0) 00:00:01

42 INDEX UNIQUE SCAN PAY_RUN_RESULT_VALUES_PK 1 2 (0) 00:00:01

43 TABLE ACCESS FULL PAY_ELEMENT_TYPES_F 9873 404K 105 (2) 00:00:02

44 TABLE ACCESS BY INDEX ROWID PAY_ELEMENT_CLASSIFICATIONS 1 29 1 (0) 00:00:01

45 INDEX UNIQUE SCAN PAY_ELEMENT_CLASSIFICATION_PK 1 0 (0) 00:00:01

46 NESTED LOOPS 1 234 8809 (4) 00:01:46

47 NESTED LOOPS 1 205 8808 (4) 00:01:46

48 HASH JOIN 1 191 8805 (4) 00:01:46

49 TABLE ACCESS BY INDEX ROWID PAY_RUN_RESULTS 1 20 4 (0) 00:00:01

50 NESTED LOOPS 213 31737 8699 (4) 00:01:45

51 NESTED LOOPS 217 27993 7829 (4) 00:01:34

52 NESTED LOOPS 31 3348 7612 (5) 00:01:32

53 MERGE JOIN 32 2720 7580 (5) 00:01:31

54 SORT JOIN 14 630 156 (3) 00:00:02

55 TABLE ACCESS FULL PAY_INPUT_VALUES_F 14 630 155 (2) 00:00:02

56 FILTER

57 SORT JOIN 939 37560 7424 (5) 00:01:30

58 TABLE ACCESS FULL PAY_PAYROLL_ACTIONS 939 37560 7423 (5) 00:01:30

59 TABLE ACCESS BY INDEX ROWID PER_TIME_PERIODS 1 23 1 (0) 00:00:01

60 INDEX UNIQUE SCAN PER_TIME_PERIODS_PK 1 0 (0) 00:00:01

61 TABLE ACCESS BY INDEX ROWID PAY_ASSIGNMENT_ACTIONS 7 147 7 (0) 00:00:01

62 INDEX RANGE SCAN PAY_ASSIGNMENT_ACTIONS_N50 35 3 (0) 00:00:01

63 INDEX RANGE SCAN PAY_RUN_RESULTS_N50 20 2 (0) 00:00:01

64 TABLE ACCESS FULL PAY_ELEMENT_TYPES_F 9873 404K 105 (2) 00:00:02

65 TABLE ACCESS BY INDEX ROWID PAY_RUN_RESULT_VALUES 1 14 3 (0) 00:00:01

66 INDEX UNIQUE SCAN PAY_RUN_RESULT_VALUES_PK 1 2 (0) 00:00:01

67 TABLE ACCESS BY INDEX ROWID PAY_ELEMENT_CLASSIFICATIONS 1 29 1 (0) 00:00:01

68 INDEX UNIQUE SCAN PAY_ELEMENT_CLASSIFICATION_PK 1 0 (0) 00:00:01

The query took more than 48 hours to execute and produced 128 million records, even though the first record was fetched

within 1.5 hours of execution. The reported throughput was 700 rows per second.

Note: The optimizer chose to access the tables through index paths, and then joined the result sets using Nested

Loops.

After adding the hint USE_HASH(PAY_RUN_RESULT_VALUES PAY_RUN_RESULTS PAY_INPUT_VALUES_F

PAY_ASSIGNMENT_ACTIONS PAY_ELEMENT_TYPES_F PAY_PAYROLL_ACTIONS PAY_ELEMENT_CLASSIFICATIONS

PER_TIME_PERIODS) to the preceding query, the optimizer produced the following execution plan:

Plan hash value: 3421230164

Id Operation Name Rows Bytes TempSpc Cost (%CPU) Time

0 SELECT STATEMENT 10 2340 932K (5) 03:06:29

1 HASH JOIN 10 2340 932K (5) 03:06:29

2 HASH JOIN 10 2050 932K (5) 03:06:28

3 TABLE ACCESS FULL PAY_INPUT_VALUES_F 15 675 155 (2) 00:00:02

4 HASH JOIN 103K 15M 932K (5) 03:06:27

5 HASH JOIN 1624 231K 167K (3) 00:33:28

6 HASH JOIN 1700 204K 166K (3) 00:33:23


7 TABLE ACCESS FULL PAY_ELEMENT_TYPES_F 10527 431K 105 (2) 00:00:02

8 HASH JOIN 670K 51M 47M 166K (3) 00:33:22

9 HASH JOIN 682K 39M 4896K 128K (3) 00:25:48

10 TABLE ACCESS FULL PAY_PAYROLL_ACTIONS 96393 3765K 7424 (5) 00:01:30

11 TABLE ACCESS FULL PAY_ASSIGNMENT_ACTIONS 10M 203M 105K (3) 00:21:03

12 TABLE ACCESS FULL PAY_RUN_RESULTS 9986K 190M 20007 (4) 00:04:01

13 TABLE ACCESS FULL PER_TIME_PERIODS 52728 1184K 349 (1) 00:00:05

14 TABLE ACCESS FULL PAY_RUN_RESULT_VALUES 912M 11G 751K (4) 02:30:21

15 TABLE ACCESS FULL PAY_ELEMENT_CLASSIFICATIONS 1626 47154 13 (0) 00:00:01

Even though the estimated cost went up, the query completed much faster. Below is the summary of the two executions:

Query                      CPU Cost    First Records Fetch Start Time    Reported Informatica Throughput    Mapping Execution Time

No Hints (nested loops)    111K        After 1 hour 30 min               700 rows / sec                     48 hours

Hash Join hint             923K        After 5 hours                     3000 rows / sec                    10 hours

Oracle Database Hints Use in Oracle Business Intelligence Applications 7.9.6 Mappings

The following table summarizes the database hints that helped improve Oracle Business Intelligence Applications 7.9.6

mapping performance in internal performance tests.

Area Mapping ETL Hints

Common Dimensions

Siebel, OM, SCA

SIL_PartyDimension_Person Initial /*+ USE_HASH(PTY PER DS) NO_INDEX(PTY) */

OM SIL_PartyDimension_Organization Initial /*+ NO_INDEX(ORG) */

Siebel SDE_PartyPersonDimension Initial / Incr.

set DTM Buffer Size to 32000000 set Default Buffer Block Size to 128000

PRJ, SCA, HCM, OM

SDE_ORA_PartyPersonDimension_Customer Initial $$HINT1: /*+ USE_HASH(PER PTY CNP SUP)*/ $$HINT2: /*+ USE_HASH(PP) */

SCA, SCM, OM, PRJ

SDE_ORA_PartyOrganizationDimension_Customer_Full

Initial $$HINT1: /*+ USE_HASH(HZ_ORGANIZATION_PROFILES HZ_PARTIES ) */ $$HINT2: /*+USE_HASH(DOM_ULT_DUNS, DOM_REL) */

SCA, SCM, OM, PRJ

SDE_ORA_PartyOrganizationDimension_Customer

Incr. /*+ USE_HASH(HZ_ORGANIZATION_PROFILES HZ_PARTIES) */

PRJ SIL_GLAccountDimension_SCDUpdate Incr. /*+ INDEX (TARGET_TABLE W_GL_ACCOUNT_D_U1) INDEX (SCD_OUTER W_GL_ACCOUNT_D_U1)*/

FIN, PRJ, SCA, OM

SDE_ORA_PartyContactStaging Incr. $$HINT1:/*+ INDEX(PP HZ_PARTIES_U1)*/ $$HINT2: /*+ USE_HL(OC TMP1)*/

FIN, OM, PRJ, SCA

SDE_ORA_INVENTORYPRODUCTDIMENSION_FULL

Initial /*+ FULL(PER_ALL_PEOPLE_F) */

Siebel CRM

SIL_ResponseFact_Full Initial /*+ NO_INDEX(w_regn_d) NO_INDEX(w_segment_d) NO_INDEX(offer) NO_INDEX(terr) NO_INDEX(W_WAVE_D) NO_INDEX(w_ld_wave_d) NO_INDEX(w_source_d) */

SIL_PartyDimension_Person Initial $$HINT1: /*+ USE_HASH(PTY PER DS) FULL(PTY) */

SIL_Agg_OverlappingCampaign_Accounts Incr. $$HINT1 /*+ NO_INDEX(SRC) NO_INDEX(OSRC) */ $$HINT2 /*+ FULL(W_CAMP_HIST_F) */ $$HINT3 /*+ FULL(W_CAMP_HIST_F) */

SIL_Agg_OverlappingCampaign_Contacts Incr. $$HINT1 /*+ NO_INDEX(SRC) NO_INDEX(OSRC) */ $$HINT2 /*+ FULL(W_CAMP_HIST_F) */ $$HINT3 /*+ FULL(W_CAMP_HIST_F) */

SIL_Agg_ResponseCampaignOffer and SIL_Agg_ResponseCampaign

Incr. /*+ FULL(W_PARTY_PER_D) */

SIL_Agg_ProductLineRevn_CloseDate SIL_Agg_ProductLineRevn_OpenDate SIL_Agg_SalesPipelineRevn_CloseDate SIL_Agg_SalesPipelineRevn_OpenDate

Incr. /*+ FULL(W_REVN_F) */

EBS Supply Chain

11.5.10 SDE_ORA_PurchaseReceiptFact Initial / Incr.

/*+ FULL(RCV_TRANSACTIONS) */

SDE_ORA_StandardCostGeneral_Full Initial / Incr.

/*+ USE_HASH(MTL_SYSTEM_ITEMS_B) */

SIL_ExpenseFact_FULL Initial /*+ USE_HASH(W_PROJECT_D) */

SIL_APInvoiceDistributionFact_Full Initial

/*+ USE_HASH(W_AP_INV_DIST_FS PO_PLANT_LOC PO_RCPT_LOC OPERATING_UNIT_ORG PAYABLES_ORG PURCHASE_ORG W_LEDGER_D INV_TYPE DIST_TYPE SPEND_TYPE APPROVAL_STATUS PAYMENT_STATUS W_AP_TERMS_D W_PROJECT_D EXPENDITURE_ORG CREATED_BY CHANGED_BY W_XACT_SOURCE_D W_Financial_Resource_D W_GL_ACCOUNT_D W_PARTY_D W_SUPPLIER_ACCOUNT_D) */

SDE_ORA_BOMHeaderDimension_Full Initial /*+ FULL(M) */

SDE_ORA_PROJECT_HIERARCHYDIMENSION_STAGE1

Initial / Incr.

/*+ USE_HASH(PPV1 PPV2 POR1 POR2 PPEVS1 PPE1 PPE2 PPA2) */

SDE_ORA_TASKS Initial / Incr.

$$HINT1 /*+ USE_HASH(PA_TASKS PA_TASK_TYPES PA_PROJ_ELEMENT_VERSIONS PA_PROJ_ELEMENTS PA_PROJECT_STATUSES PA_PROJ_ELEM_VER_STRUCTURE PA_PROJECTS_ALL PA_PROJECT_TYPES_ALL PA_PROJ_ELEM_VER_SCHEDULE) */ $$HINT2 /*+ USE_HASH(PA_PROJECTS_ALL PA_PROJECT_TYPES_ALL PA_TASKS) */ $$HINT3 /*+ USE_HASH(PE PEV PPS) */

SIL_PRODUCTTRANSACTIONFACT Initial /*+ USE_HASH(SRC_PRO_D TO_PRO_D) */

SIL_PURCHASECOSTFACT Initial /*+ USE_HASH(W_PROJECT_D) */

SIL_APINVOICEDISTRIBUTIONFACT Incr. Apply hint to Lkp_W_AP_INV_DIST_F query: /*+ INDEX(TARGET_TABLE) */

EBS Human Resources

R12 SDE_ORA_PayrollFact_Full Initial

$$HINT1: /*+ USE_HASH( PAY_RUN_RESULT_VALUES PAY_RUN_RESULTS PAY_INPUT_VALUES_F PAY_ASSIGNMENT_ACTIONS PAY_ELEMENT_TYPES_F PAY_PAYROLL_ACTIONS PAY_ELEMENT_CLASSIFICATIONS PER_TIME_PERIODS ) */ $$HINT2: /*+ ORDERED USE_HASH( PAY_RUN_RESULT_VALUES PAY_RUN_RESULTS PAY_INPUT_VALUES_F PAY_ASSIGNMENT_ACTIONS PAY_ELEMENT_TYPES_F PAY_PAYROLL_ACTIONS PAY_ELEMENT_CLASSIFICATIONS PER_TIME_PERIODS ) */ $$HINT3: /*+ FULL(PER_ALL_ASSIGNMENTS_F) FULL(PER_ALL_PEOPLE_F) */

SDE_ORA_PayrollFact_Agg_Items_Derive_Full

Initial /*+ parallel(W_PAYROLL_FS,4)*/

PLP_RECRUITMENTHIREAGGREGATE_LOAD Incr. $$HINT1: /*+ USE_HASH (FACT MONTH PERF LOC SOURCE AGE EMP)*/

PLP_WorkforceEventFact_Month Initial /*+ FULL(suph) */

EBS Financials

R12 SDE_ORA_APTransactionFact_LiabilityDistribution

Incr. /*+ parallel(AP_INVOICE_DISTRIBUTIONS_ALL,4) use_hash(AP_INVOICES_ALL AP_INVOICE_DISTRIBUTIONS_ALL PO_HEADERS_ALL PO_DISTRIBUTIONS_ALL PO_LINES_ALL)*/

SDE_ORA_Stage_GLJournals_Derive Incr. /*+ PARALLEL (W_ORA_GL_JOURNALS_F_TMP, 4) */


SDE_ORA_CustomerFinancialProfileDimension

Initial / Incr.

/*+ USE_HASH (HZ_PARTIES)*/

SDE_ORA_ARTransactionFact_CreditMemoApplication

Incr. /*+ USE_HASH(AR_PAYMENT_SCHEDULES_ALL RA_CUSTOMER_TRX_ALL RA_CUSTOMER_TRX_ALL1 AR_PAYMENT_SCHEDULES_ALL1 AR_DISTRIBUTIONS_ALL) */

PLP_APIncrActivityLoad Incr. /*+ index(W_AP_XACT_F, W_AP_XACT_F_M1) */

PLP_APXactsGroupAccount_A_Stage_Full Initial /*+ full(W_GL_ACCOUNT_D) full(W_STATUS_D) full(W_AP_XACT_F) full(W_XACT_TYPE_D) full(D1) full(D2) full( D3)*/

EBS Projects

R12

SDE_ORA_ProjectFundingHeader

Initial / Incr.

/*+ USE_HASH(PA_PROJECTS_ALL PA_TASKS PA_AGREEMENTS_ALL PA_SUMMARY_PROJECT_FUNDINGS) */

SDE_ORA_ProjectInvoiceLine_Fact

Initial

/*+ USE_HASH(pa_draft_invoice_items pa_tasks pa_draft_invoices_all pa_projects_all pa_agreements_all pa_lookups) */

SDE_ORA_ProjectCostLine_Fact

Initial

/*+ USE_HASH(pa_cost_distribution_lines_all pa_expenditure_items_all pa_expenditures_all pa_implementations_all pa_implementations_all_1 gl_sets_of_books pa_project_assignments pa_resource_list_members pa_lookups pa_projects_all pa_project_types_all pa_expenditure_types) */

SIL_ProjectFundingHeader_Fact Incr. /*+ INDEX(LOOKUP_TABLE W_PARTY_D_M3) */

EBS Enterprise Sales

11.5.10 SIL_SalesPickLinesFact_Full Initial /*+ FULL(A18) FULL(A19) FULL(A20) FULL(A21) FULL(A22) */

SIL_SalesOrderLinesFact_Full Initial /*+ FULL(A18) FULL(A19) FULL(A20) FULL(A21) FULL(A22) */

SIL_SalesInvoiceLinesFact_Full Initial /*+ FULL(A18) FULL(A19) FULL(A20) FULL(A21) FULL(A22) */

SIL_SalesScheduleLinesFact_Full Initial /*+ FULL(A18) FULL(A19) FULL(A20) FULL(A21) FULL(A22) */

EBS Service

11.5.10 SDE_ORA_EntitlementDimension Initial /*+ parallel(OKC_K_LINES_TL,4) parallel (OKC_K_LINES_B,4) */

SDE_ORA_AgreeDimension Initial /*+ NO_MERGE(fndv) */

SDE_ORA_AbsenceEvent

Initial

/*+ use_hash(per_absence_attendances per_all_assignments_f per_absence_attendance_types per_abs_attendance_reasons ) */

SIL_ActivityFact_Full Initial /*+ use_hash(W_ACTIVITY_FS W_FS_ACT_CST_FS W_SOURCE_D W_ENTLMNT_D W_AGREE_D W_REGION_D W_SRVREQ_D W_ASSET_D) */

PeopleSoft HCM

8.9, 9.x SDE_PSFT_UserDimension_PersonalInformation

Initial /*+ use_hash(login person address names perdata bus_email alt_email bus_phones cell_phones fax_phones pgr_phones) */

SDE_PSFT_SupplierAccountDimension Initial /*+ use_hash(v vaddr vcont vphn) */

Oracle Database Hints Use in Oracle Business Intelligence Applications 7.9.6.3 Mappings

Review the additional hints recommended for BI Analytic Applications 7.9.6.3 in the table below.

Area Mapping ETL Hints

Common Dimensions

All SDE_ORA_PartyContactStaging_Full Initial

All SDE_ORA_PartyOrganizationDimension_Customer_Full

Initial SQ: $$Hint1 = /*+ USE_HASH(HZ_ORGANIZATION_PROFILES HZ_PARTIES ) */ $$Hint2 = /*+USE_HASH(DOM_ULT_DUNS, DOM_REL) */

All SDE_ORA_PartyPersonDimension_Customer_Full. SDE_ORA_PartyPersonDimension_Customer_Temporary_Full

Initial SQ: $$Hint1 = /*+ USE_HASH(PER PTY CNP SUP)*/

SRV, OM SIL_EmployeeDimension_SCDUpdate_Full Initial SQ: /*+ NO_MERGE (SCD_HISTORY)*/


SRV, OM SIL_GLAccountDimension_SCDUpdate_Full Initial SQ: /*+ NO_MERGE (SCD_HISTORY)*/

OM SIL_GLAccountDimension_SCDUpdate Incr. /*+ FULL(RA_CUSTOMER_TRX_LINES_ALL1) FULL(RA_CUSTOMER_TRX_LINES_ALL) FULL(RA_CUSTOMER_TRX_ALL) FULL(RA_CUST_TRX_TYPES_ALL) FULL(OE_ORDER_HEADERS_ALL) FULL(OE_ORDER_LINES_ALL) */

SRV, OM SIL_PartyPersonDimension_SCDUpdate_Full Initial /*+ NO_MERGE (SCD_HISTORY)*/

SRV SDE_ORA_InternalOrganizationDimensionHierarchy_Flatten

Incr. / Initial

SQ: $$Hint1 = /*+ USE_NL(w_int_org_dh_tmp w_int_org_dh_tmp1 w_int_org_dh_tmp2 w_int_org_dh_tmp3 w_int_org_dh_tmp4 w_int_org_dh_tmp5 w_int_org_dh_tmp6 w_int_org_dh_tmp7 w_int_org_dh_tmp8 w_int_org_dh_tmp9 w_int_org_dh_tmp10 w_int_org_dh_tmp11 w_int_org_dh_tmp12) */

SRV SDE_ORA_PositionDimension, SDE_ORA_PositionDimension_Full

Incr. / Initial

SQ: $$Hint1 = /*+ USE_HASH(PER PT ASGN ASGNT JOB ORG) */

SRV, OM SIL_PositionDimensionHierarchy Incr. /*+ USE_NL(TMP, B, L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12, L13, L14, L15, L16, T )*/

OM SIL_EmployeeDimension_SCDUpdate Incr. /*+ NO_MERGE (SCD_HISTORY)*/

OM SIL_PositionDimension_SCDUpdate Incr. /*+ NO_MERGE (SCD_HISTORY)*/

OM SIL_InventoryProductDimension_SCDUpdate_Full

Initial /*+ NO_MERGE (SCD_HISTORY)*/

OM SIL_InventoryProductDimension_SCDUpdate Incr. /*+ NO_MERGE (SCD_HISTORY)*/

OM SIL_PartyOrganizationDimension_SCDUpdate_Full

Initial /*+ NO_MERGE (SCD_HISTORY)*/

OM SDE_ORA_InternalOrganizationDimension_Full

Incr. /*+USE_HASH(HR_ALL_ORGANIZATION_UNITS, HR_ALL_ORGANIZATION_UNITS,HR_LOCATIONS_ALL)*/

PeopleSoft FSCM

PSFT_90 SDE_PSFT_GLRevenueFact_ARItems_Full Initial /*+ USE_NL(PS_ITEM_ACTIVITY, PS_ITEM)*/ When count(1) of PS_ITEM_DST << count(1) of PS_ITEM_ACTIVITY and count(1) of PS_ITEM_DST << count(1) of PS_ITEM

PSFT_90 SDE_PSFT_ARTransactionFact_ItemDistribution_Full

Initial /*+ USE_NL(PS_ITEM_ACTIVITY, PS_ITEM)*/ When count(1) of PS_ITEM_DST << count(1) of PS_ITEM_ACTIVITY and count(1) of PS_ITEM_DST << count(1) of PS_ITEM

Oracle EBS / Enterprise Sales

SIL SIL_SalesProductDimension_SCDUpdate Incr. /*+ NO_MERGE (SCD_HISTORY)*/

SIL SIL_SalesBookingLinesFact_Load_OrderLine_Credit

Incr. /*+ INDEX(W_SALES_ORDER_LINE_F W_SLS_ORD_LN_F_U1)*/

SIL SIL_SalesBookingLinesFact_Load_OrderLine_Credit.LKP_W_SALES_BOOKING_LINE_F_Credit

Incr. /*+ INDEX(W_SALES_BOOKING_LINE_F W_SLS_BKG_LN_F_M4)*/

SIL SIL_SalesBookingLinesFact_Load_OrderLine_Debt.LKP_W_SALES_BOOKING_LINE_F

Incr. /*+ INDEX(W_SALES_BOOKING_LINE_F W_SLS_BKG_LN_F_M4)*/

SIL SIL_SalesOrderLinesAggregate_Derive_PreLoadImage

Incr. /*+ INDEX(W_SALES_ORDER_LINE_F W_SLS_ORD_LN_F_U1) */

SIL SIL_SalesInvoiceLinesAggregate_Derive_PreLoadImage

Incr. /*+ INDEX(W_SALES_INVOICE_LINE_F W_SLS_INV_LN_F_U1)*/

PLP PLP_SalesCycleLinesFact_Load Incr. $$Hint= /*+ INDEX(X W_SLS_ORD_LN_F_U1)*/ $$Hint2=/*+ INDEX(P W_SLS_PCK_LN_F_M2)*/ $$Hint3=/*+ INDEX(S W_SLS_SCH_LN_F_M4)*/ $$Hint4=/*+ INDEX(I W_SLS_INV_LN_F_M3) USE_NL(I TXN_TYPE) */

PLP PLP_SalesCycleLinesFact_Load.LKP_W_SALES_CYCLE_LINE_F

Incr. /*+ INDEX(Y W_SLS_CC_LN_F_U1)*/

Oracle EBS 11i10 Service

SDE_ORA_AgreeDimension_Full

Initial 1. SQ: $$Hint1 = /*+ NO_MERGE(fndv) USE_HASH(CPLB, CPLT, FNDV) */ $$Hint2 = /*+ USE_HASH(C1, SR, B, RE) */ 2.Source Connection: alter session set workarea_size_policy=manual; alter session set sort_area_size=1000000000; alter session set hash_area_size=2000000000;


SDE_ORA_AgreeItemFact_Full Initial 1. SQ: $$Hint2= /*+ NO_MERGE(fndv) INDEX_FFS (CPLT OKC_K_PARTY_ROLES_TL_U1) USE_HASH(CPLB, CPLT, FNDV)*/ 2. Source Connection: alter session set workarea_size_policy=manual; alter session set sort_area_size=1000000000; alter session set hash_area_size=2000000000;

SDE_ORA_AssetDimension_Full Initial 1. SQ: $$Hint1 = /*+USE_HASH(CSI_ITEM_INSTANCES CSI_INSTANCE_STATUSES MTL_SYSTEM_ITEMS_TL OE_ORDER_HEADERS_ALL ORG_ORGANIZATION_DEFINITIONS) FULL(MTL_SYSTEM_ITEMS_B) */ $$Hint3 = /*+USE_HASH(COVERAGE_HEADER COVERAGE_ITEM)*/ 2. Source Connection: alter session set workarea_size_policy=manual; alter session set sort_area_size=1000000000; alter session set hash_area_size=2000000000;

SDE_ORA_AssetFact_Full Initial 1. SQ: $$Hint1=/*+ USE_HASH(CSI_ITEM_INSTANCES, OE_ORDER_HEADERS_ALL, OE_ORDER_LINES_ALL, ORG_ORGANIZATION_DEFINITIONS, JTF_RS_RESOURCE_EXTNS, MTL_SYSTEM_ITEMS_B, HZ_CUST_ACCOUNTS, HR_ORGANIZATION_INFORMATION) */ 2. Source Connection: alter session set workarea_size_policy=manual; alter session set sort_area_size=1000000000; alter session set hash_area_size=2000000000;

SDE_ORA_EntitlementDimension_Full Initial SQ: $$Hint1=/*+ parallel(OKC_K_LINES_TL,4) parallel (OKC_K_LINES_B,4) */

SIL_ActivityFact_Full Initial 1. SQ: $$Hint1=/*+ USE_HASH(W_ACTIVITY_FS W_FS_ACT_CST_FS W_SOURCE_D W_ENTLMNT_D W_AGREE_D W_REGION_D W_SRVREQ_D W_ASSET_D) */ LKP_W_POSITION_DH_WID: /*+ INDEX(W_POSITION_DH, W_POSITION_DH_M39) USE_HASH(W_POSITION_DH W_ACTIVITY_FS)*/ 2. Source/Target Connection: alter session set workarea_size_policy=manual; alter session set sort_area_size=1000000000; alter session set hash_area_size=2000000000; for: SQ_JOINER_ActivityFact mplt_SIL_ActivityFact.LKP_W_PARTY_PER_D_Geo_WID mplt_SIL_ActivityFact.LKP_W_POSITION_DH_WID mplt_SIL_ActivityFact.LKP_W_PARTY_D_Geo_WID

SIL_AgreeFact_Full Initial 1. SQ: $$Hint1 = /*+ USE_HASH(W_AGREE_DS, W_AGREEITEM_F, W_AGREE_D) FULL(W_AGREE_D)*/ 2. Source Connection: alter session set workarea_size_policy=manual; alter session set sort_area_size=1000000000; alter session set hash_area_size=2000000000;

SIL_AgreeItemFact_Full Initial 1. SQ: $$Hint1=/*+ USE_HASH(W_AGREEITEM_FS, W_QUOTE_D, W_AGREE_D, LKP_W_ASSET_D_CVRD_ASSET, LKP_W_ASSET_D)*/ 2. Source Connection: alter session set workarea_size_policy=manual; alter session set sort_area_size=1000000000; alter session set hash_area_size=2000000000;

SIL_AssetFact_Full Initial Source Connection: alter session set workarea_size_policy=manual; alter session set sort_area_size=1000000000; alter session set hash_area_size=2000000000;

SDE_ORA_AgreeDimension Incr. SQ: $$Hint0 = /*+USE_NL(OKC_K_LINES_B QP_LIST_HEADERS_TL OKC_K_HEADERS_TL OKC_K_HEADERS_B OKS_K_HEADERS_B tmp)*/ $$Hint3 = /*+ USE_NL(OKC_K_HEADERS_B, OKS_K_HEADERS_B)*/

SDE_ORA_AgreeItemFact Incr. SQ: $$Hint1= /*+ leading (tmp) USE_NL(OKC_K_ITEMS, OKC_K_HEADERS_B, OKC_K_LINES_B, PARENT, DISCOUNT_PERAMT, tmp) */

SDE_ORA_AssetDimension Incr. SQ: /*+ MATERIALIZE*/ $$Hint1=/*+ USE_NL(tmp, CSI_ITEM_INSTANCES, MTL_SYSTEM_ITEMS_B, HZ_CUST_ACCOUNTS, OE_ORDER_LINES_ALL, OE_ORDER_HEADERS_ALL, MTL_SYSTEM_ITEMS_TL) */ $$Hint2=/*+ INDEX (CSI_ITEM_INSTANCES OBIEE_CSI_ITEM_INSTANCES) */

SDE_ORA_AssetFact Incr. SQ: /*+ MATERIALIZE*/ $$Hint1=/*+ USE_NL(tmp, CSI_ITEM_INSTANCES, MTL_SYSTEM_ITEMS_B, HZ_CUST_ACCOUNTS, OE_ORDER_LINES_ALL, OE_ORDER_HEADERS_ALL) */ $$Hint2=/*+ INDEX(CSI_ITEM_INSTANCES OBIEE_CSI_ITEM_INSTANCES) */

SIL_AgreeDimension Incr. LKP_Get_Target_ETL_METADATA: /*+ INDEX (TARGET W_AGREE_D_U1)*/

SIL_AgreeFact Incr. SQ: $Hint1 = /*+ INDEX(W_AGREE_D W_AGREE_D_U1)*/ LKP_Get_Target_ETL_METADATA: /*+ INDEX(TARGET W_AGREE_F_U1)*/ mplt_SIL_AgreeFact.LKP_W_PARTY_D_PARTY_SVC_PROVIDER: /*+ INDEX(LOOKUP_TABLE W_PARTY_D_U1)*/ mplt_SIL_AgreeFact.LKP_W_PARTY_D_Party_WID: /*+ INDEX(LOOKUP_TABLE W_PARTY_D_U1)*/

SIL_AgreeItemFact Incr. SQ: $$Hint1 = /*+ INDEX (LKP_W_ASSET_D_CVRD_ASSET W_ASSET_D_U1) INDEX(LKP_W_ASSET_D W_ASSET_D_U1) INDEX (W_AGREE_D W_AGREE_D_U1)*/ LKP_Get_Target_ETL_METADATA: /*+ INDEX(TARGET W_AGREEITEM_F_U1)*/

SIL_AssetDimension Incr. Lkp_W_ASSET_D: /*+ INDEX( TARGET_TABLE W_ASSET_D_U1)*/

SIL_AssetFact Incr. SQ: Hint1 = /*+ index(w_asset_d w_asset_d_u1) index(w_party_per_d w_party_per_d_u1) */

Oracle EBS HCM

SDE_ORA_PayrollFact_FULL Initial $$HINT1: /*+ ORDERED USE_HASH( PAY_RUN_RESULT_VALUES PAY_RUN_RESULTS PAY_INPUT_VALUES_F PAY_ASSIGNMENT_ACTIONS PAY_ELEMENT_TYPES_F PAY_PAYROLL_ACTIONS PAY_ELEMENT_CLASSIFICATIONS PER_TIME_PERIODS ) */ $$HINT2: /*+ FULL(PER_ALL_ASSIGNMENTS_F) FULL(PER_ALL_PEOPLE_F) */

SDE_ORA_AbsenceEvent_FULL Initial Remove hint: /*+ use_hash(per_absence_attendances per_all_assignments_f per_absence_attendance_types per_abs_attendance_reasons )*/ Add hint: /*+ use_hash(paa pat par asg )*/

PLP_WorkforceEventFact_Month Initial $$HINT1: /*+ FULL(suph) */

PLP_AbsenceEventFact_Full Initial $$HINT1: /*+ USE_HASH(WEVT EVT CAL TAB1)*/ $$HINT2: /*+ USE_HASH( EVT CAL WEVT) FULL(WEVT) */

PLP_RecruitmentRequisitionAggregate_Load_Derive_Ful

Initial $$HINT1: /*+ FULL(F) */ Apply this hint to main query and subquery as well

SDE_ORA_InternalOrganizationDimensionHierarchy_Flatten

Incr. /*+ USE_NL( w_int_org_dh_tmp w_int_org_dh_tmp1 w_int_org_dh_tmp2 w_int_org_dh_tmp3 w_int_org_dh_tmp4 w_int_org_dh_tmp5 w_int_org_dh_tmp6 w_int_org_dh_tmp7 w_int_org_dh_tmp8 w_int_org_dh_tmp9 w_int_org_dh_tmp10 w_int_org_dh_tmp11 w_int_org_dh_tmp12) */

PLP_RecruitmentApplicantAggregate_Load Incr. Update the workflow session sql with below hint /*+ USE_HASH(W_MONTH_D, W_RCRTMNT_EVENT_F, W_EMPLOYEE_D, W_BUSN_LOCATION_D, W_AGE_BAND_D, W_RCRTMNT_SOURCE_D, W_RCRTMNT_APPL_A_TMP, W_DAY_D) */

PLP_RecruitmentHireAggregate_Load Incr. $$HINT1: /*+ USE_HASH( FACT TEMP MONTH1 EMP SOURCE AGE PERF LOC W_DAY_D) */

PLP_RecruitmentRequisitionAggregate_Load_Derive

Incr. /*+ FULL(F) */ /*+ USE_HASH(F MON TMP1) */

PLP_AbsenceEventFact Incr. $$HINT1: /*+ ORDERED USE_HASH(EQ1 EVT WEVT CAL) */

SDE_ORA_PayrollFact Incr. /*+ PARALLEL(PAY_PAYROLL_ACTIONS) */

Database Hints Implementation in DAC

You may consider using the recommended hints above for other versions only after careful testing and

benchmarking of ETL runtime and performance.

These hints are not included in the packaged mappings. Each mapping may have $$HINT placeholders defined in DAC. You

can consider applying them to your environments after verifying mapping execution with the hints in your test environment.

You can manually define $$HINT variables in your DAC and Informatica repositories by following the steps below:

- Connect to Informatica PowerCenter 8.6 Designer

- Check out the selected mapping and drag it to Mapplet Designer palette


- Navigate to Mapplets menu -> Parameters and Variables

- Click on the Add New Variable icon

- Fill in the following fields in a new line:

o Name: $$HINT1, 2, etc.

o Type: Parameter

o Datatype: String

o Precision: make sure you specify sufficient precision to cover your hint value

o Click OK

o Save the changes and check in the mapping into the Informatica repository

- Connect to Informatica PowerCenter 8.6 Workflow Manager

- Check out the corresponding session and drag it to Task Developer palette

- Right click on the Task in the Task Developer palette and choose Edit in the popup menu

- Click on Mapping tab and select SQ (SQL Qualifier) under Sources folder in the left pane

- Click on SQL Query attribute value

- Insert the defined $$HINT<n> variable

- Save the changes

- Connect to DAC client

- Select the custom container

- Click on Design button and select Tasks menu in the right pane

- Retrieve the task which corresponds to the selected Informatica mapping

- Click on Parameters menu in the lower pane

- Fill in the fields in a new line:

o Name: use the exact variable name defined in Informatica above

o Data Type: Text

o Load Type: select the load type from the list of values

o Value: enter the hint value here

o Save the changes

- Verify the changes by inspecting the session log for the selected mapping during the next ETL.

Important! DAC 10.1.3.4.1 invokes the Informatica PowerCenter 8.6 command line API with the <-lpf> option. Some of the

recommended hints can be very long and may not fit on a single line. As a result, Informatica may not pick up the valid

parameter values. If your DAC and Informatica servers share the same machine, you can resolve the issue by

implementing the following steps:

- Connect to your machine, running DAC and Informatica servers

- Open <DAC_HOME>\Conf\infa_command.xml

- Replace each occurrence of <-lpf> with <-paramfile> in the configuration file


- Save the changes

- Restart DAC server and test the solution.

Using Oracle Optimizer Dynamic Sampling for big staging tables

A typical Source Dependent Extract (SDE) task contains the following steps:

- Truncate staging table

- Extract data from one or more OLTP tables into staging table

- Analyze staging table

The last step computes statistics on the table to make sure that the Oracle Cost Based Optimizer (CBO) builds the best execution

plan for the next task. However, during initial loads that process very large data volumes, the staging table may become so

large (hundreds of millions of rows) that the Analyze Table job takes many hours to complete. Oracle RDBMS offers a

faster, yet accurate enough, alternative: dynamic sampling instead of gathering table statistics. The purpose of dynamic

sampling is to improve server performance by determining more accurate estimates for predicate selectivity and statistics for

tables and indexes. The CBO determines whether a query would benefit from dynamic sampling at query compile time.

The optimizer issues a recursive SQL statement to scan a small random sample of the table's blocks and applies the

relevant single-table predicates to estimate selectivity for each predicate. In some cases sample cardinality can also be used to

estimate table cardinality.

Internal tests performed on large staging tables show that the optimizer can produce efficient execution plans using

dynamic sampling in much less time than gathering table statistics through conventional methods.

Below are the details of one of the internal benchmark tests:

- Hardware configuration: 8 CPU cores x 16Gb RAM x 2Tb NAS server with Linux 64bit OS

- Target Database: Oracle 10.2.0.3 64bit

- Test configuration: query involves a large staging table with over 100 Million rows, joined with two smaller dimension

tables

Test Scenario                                                Statistics / Sampling Execution Time    Query Execution Time

No statistics collected on the staging table                 Dynamic sampling: 10.6 sec              2 hours 27 min 45 sec

Statistics computed on the staging table using DBMS_STATS    Statistics computing: 53 min 26 sec     2 hours 20 min 43 sec

The overall run time for the second case was approximately 45 minutes longer than the dynamic sampling scenario.

The optimizer estimated identical run times for both execution plans.

Enabling Dynamic Sampling at the system level may cause additional performance overhead, so it should be applied selectively,

only to the mappings that run queries against large staging tables, by inserting Dynamic Sampling hints into the

appropriate mapping SQLs. Refer to Oracle Database Performance Tuning Guide (10g Release 2) for more

details.
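
For illustration, a hedged sketch of a mapping SQL with such a hint (the table and column names and the sampling level below are illustrative only):

SELECT /*+ DYNAMIC_SAMPLING(stg 4) */ stg.integration_id, d.row_wid
FROM W_GL_OTHER_FS stg, W_GL_ACCOUNT_D d
WHERE stg.gl_account_id = d.integration_id;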

Note that the DAC version released with Oracle Business Intelligence Applications Version 7.9.6 does not disable computing

statistics at a table level. To work around this limitation, you can abort the execution plan in DAC, mark the Analyze Table

task for your staging table as Completed, and restart the execution plan.


Oracle BI Applications Best Practices for Oracle Exadata

Handling BI Applications Indexes in Exadata Warehouse Environment

Oracle Business Analytic Applications Suite uses two types of indexes:

ETL indexes for optimizing ETL performance and ensuring data integrity

Query indexes, mostly bitmaps, for end user star queries

Exadata Storage Indexes cannot be considered an unconditional replacement for BI Apps indexes. You should employ

storage indexes only in cases where BI Applications query indexes deliver inferior performance, and only after running

comprehensive tests to ensure no regressions for all other queries without the query indexes.

Do not drop any ETL indexes, as you may not only impact your ETL performance but also compromise data integrity in your

warehouse.

The best practices for handling BI Applications indexes in Exadata Warehouse:

Turn on index usage monitoring to identify any unused indexes, and drop / disable them in your environment (see the

example after this list). Refer to the corresponding section in this document for more details.

Consider pinning the critical target tables in smart flash cache

Consider building custom aggregates to pre-aggregate more data and simplify queries.

Drop selected query indexes and disable them in DAC to use Exadata Storage Indexes / Full Table Scans only after

running comprehensive benchmarks and ensuring no impact on any other queries performance.
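
A minimal sketch of index usage monitoring (the index name is illustrative; V$OBJECT_USAGE reports indexes of the current schema):

ALTER INDEX W_GL_BALANCE_F_F1 MONITORING USAGE;
-- run a representative query workload, then:
SELECT index_name, used FROM v$object_usage WHERE index_name = 'W_GL_BALANCE_F_F1';
ALTER INDEX W_GL_BALANCE_F_F1 NOMONITORING USAGE;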

Gather Table Statistics for BI Applications Tables

Out of the box, Data Warehouse Admin Console (DAC) uses the ‘FOR INDEXED COLUMNS’ syntax for computing BI Applications

table statistics. It does not cover statistics for non-indexed columns participating in end user query joins. If you choose to drop

some indexes in an Exadata environment, there will be more critical columns with no statistics. As a result, the optimizer

may choose a sub-optimal execution plan, resulting in slower performance.

You should consider switching to ‘FOR ALL COLUMNS SIZE AUTO’ syntax in DBMS_STATS.GATHER_TABLE_STATS call in DAC:

1. Navigate to your <DAC_HOME>/CustomSQLs and open customsql.xml file for editing.

2. Replace ‘FOR INDEXED COLUMNS’ with ‘FOR ALL COLUMNS SIZE AUTO’ in DBMS_STATS.GATHER_TABLE_STATS call in

<SqlQuery name = "ORACLE_ANALYZE_TABLE" STORED_PROCEDURE = "TRUE"> section.

3. Save the changes.

Next time you run an ETL, DAC will compute the statistics for BI Applications tables for all columns.
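
For reference, a sketch of the resulting DBMS_STATS call with the updated method_opt value (the schema and table names are placeholders):

begin
dbms_stats.gather_table_stats(
ownname => 'DWH_USER',
tabname => 'W_GL_BALANCE_F',
method_opt => 'FOR ALL COLUMNS SIZE AUTO',
cascade => TRUE);
end;
/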

Oracle Business Analytics Warehouse Storage Settings in Exadata

The recommended database block size (db_block_size parameter) is 8K. You may consider using a 16K block size as well,

primarily to achieve a better compression rate, as Oracle applies compression at the block level. Refer to the init.ora

template in the section below.

Make sure you use locally managed tablespaces with AUTOALLOCATE option. DO NOT use UNIFORM extent size for

your warehouse tablespaces.

Use your primary database block size of 8K (or 16K) for your warehouse tablespaces. It is NOT recommended to use non-

standard block size tablespaces for deploying a production warehouse.

Use an 8Mb large extent size for partitioned fact tables and non-partitioned large segments, such as dimensions,

hierarchies, etc. You will have to manually specify INITIAL and NEXT extent sizes of 8Mb for non-partitioned segments.


Set deferred_segment_creation = TRUE to defer segment creation until the first record is inserted. Refer to the init.ora

section below.

Parallel Query Use in BI Applications on Exadata

All BI Applications tables are created without any degree of parallelism in the BI Applications schema. Since DAC manages parallel

jobs during an ETL, such as Informatica mappings or index creation, the use of Parallel Query in ETL mappings could

generate more I/O overhead and cause performance regressions for ETL jobs.

Exadata hardware provides much better scalability for I/O resources, so you can consider turning on Parallel Query for slow

queries by setting PARALLEL attribute for large tables participating in the queries. For example:

SQL> ALTER TABLE W_GL_BALANCE_F PARALLEL;

You should benchmark the query performance prior to implementing the changes in your Production environment.
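
You can also set an explicit degree of parallelism, or revert to serial execution if benchmarks show regressions (the DOP value of 8 is illustrative):

SQL> ALTER TABLE W_GL_BALANCE_F PARALLEL 8;
SQL> ALTER TABLE W_GL_BALANCE_F NOPARALLEL;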

Compression Implementation in Oracle Business Analytics Warehouse on Exadata

Table compression can significantly reduce a segment size and improve query performance in an Exadata environment.

However, depending on the nature of DML operations in ETL mappings, it may result in slower mapping performance and

larger consumed space. The following guidelines will help ensure a successful compression implementation in your Exadata

environment:

Consider implementing compression after running an Initial ETL. The initial ETL plan contains several mappings with heavy

updates, which could impact your ETL performance.

Implement large fact table partitioning and compress inactive historic partitions only (see the example after this list).

Make sure that the active ones remain uncompressed.

Choose either Basic or Advanced compression types for your compression candidates.

Periodically review the allocated space for a compressed segment, and check stats such as num_rows, blocks and

avg_row_len in the user_tables view. For example, the following compressed segment needs to be re-compressed, as it

consumes too many blocks:

Num_rows Avg_row_len Blocks Compression

541823382 181 13837818 ENABLED

The simple calculation (num_rows * avg_row_len / 8K block size) + 16% (block overhead) gives ~13.8M blocks for an

uncompressed segment, so compression is providing no benefit here. This segment should be re-compressed to reduce its footprint and improve query performance.
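
For example, an inactive historical partition can be compressed in place with a sketch like the following (the partition name follows the earlier chained-rows example; the local index name is hypothetical, and ALTER TABLE ... MOVE marks the dependent index partitions UNUSABLE, so rebuild them afterwards):

ALTER TABLE W_AR_XACT_F MOVE PARTITION PART_201104 COMPRESS;
ALTER INDEX W_AR_XACT_F_U1 REBUILD PARTITION PART_201104;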

Refer to Table Compression Implementation Guidelines section in this document for additional information on compression

for BI Applications Warehouse.

OBIEE Queries Performance Considerations on Exadata

As mentioned before, BI Analytic Applications use query indexes, mostly BITMAPs, to ensure effective star transformation.

As an alternative to star queries, you can consider taking advantage of Exadata and Oracle database features to deliver better

performance for your end user reports.

1. Implement large fact table partitioning. It is critical to ensure effective partition pruning; the Oracle Optimizer applies partition pruning based on the filtering conditions and partitioning keys.

2. Consider compressing your historic partitions with hybrid columnar compression or advanced compression. Be careful

with applying compression to the latest partitions, as they could explode in size after heavy updates.

3. Enable a degree of parallelism (DOP) for your fact tables. Do not set extremely high values for DOP; usually 8 – 16 is more than enough for speeding up parallel processing on Exadata without impacting the overall system I/O (see the example after this list).


4. Verify that your generated explain (and execution) plans use hash join operators rather than nested loops.
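For example, a minimal sketch of enabling a fixed DOP on a fact table, per item 3 above (the DOP value of 16 is illustrative):

SQL> ALTER TABLE W_GL_BALANCE_F PARALLEL 16;
SQL> SELECT degree FROM user_tables WHERE table_name = 'W_GL_BALANCE_F';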

Important! You should conduct comprehensive testing with all recommended techniques in place before dropping your query

indexes.

Exadata Smart Flash Cache

The use of Smart Flash Cache in Oracle Business Analytics Warehouse can significantly improve end user query performance. You can consider pinning the most frequently used dimensions that impact your query performance. To manually pin a table in

Exadata Smart Flash Cache, use the following syntax:

ALTER TABLE W_PARTY_D STORAGE (CELL_FLASH_CACHE KEEP);

The Exadata Storage Server will cache data for W_PARTY_D table more aggressively and will try to keep the data from this

table longer than cached data from other tables.

Important! Use manual Flash Cache pinning only for the most common critical tables.

Database Parameter File Template for Analytics Warehouse on Exadata

Use the template file below for your init.ora parameter file for Business Analytics Warehouse on Oracle Exadata.

###########################################################################
# Oracle BI Applications - init.ora template
# This file contains a listing of init.ora parameters for 11.2 / Exadata
###########################################################################
db_name = <database name>
control_files = /<dbf file loc>/ctrl01.dbf, /<dbf file loc>/ctrl02.dbf
db_block_size = 8192 # or 16384 (for better compression)
db_block_checking = FALSE
db_block_checksum = TYPICAL
deferred_segment_creation = TRUE
user_dump_dest = /<DUMP_HOME>/admin/<dbname>/udump
background_dump_dest = /<DUMP_HOME>/admin/<dbname>/bdump
core_dump_dest = /<DUMP_HOME>/admin/<dbname>/cdump
max_dump_file_size = 20480
processes = 1000
sessions = 2000
db_files = 1024
session_max_open_files = 100
dml_locks = 1000
cursor_sharing = EXACT
cursor_space_for_time = FALSE
session_cached_cursors = 500
open_cursors = 1000
db_writer_processes = 2
aq_tm_processes = 1
job_queue_processes = 2
timed_statistics = true
statistics_level = typical
sga_max_size = 45G
sga_target = 40G
shared_pool_size = 2G
shared_pool_reserved_size = 100M
workarea_size_policy = AUTO
pre_page_sga = FALSE
pga_aggregate_target = 16G
log_checkpoint_timeout = 3600
log_checkpoints_to_alert = TRUE
log_buffer = 10485760
undo_management = AUTO
undo_tablespace = UNDOTS1
undo_retention = 90000
parallel_adaptive_multi_user = FALSE
parallel_max_servers = 128
parallel_min_servers = 32
# ------------------- MANDATORY OPTIMIZER PARAMETERS ----------------------
star_transformation_enabled = TRUE
query_rewrite_enabled = TRUE
query_rewrite_integrity = TRUSTED
b_tree_bitmap_plans = FALSE
optimizer_autostats_job = FALSE

DB2 Warehouse Recommendations for Better Performance

DB2 Warehouse Configuration

Review the recommended settings for the BI Analytic Applications Warehouse configuration on the IBM DB2 platform.

Database Manager Level

1. Keep all database monitor (snapshot) switches as default except for DFT_MON_TIMESTAMP switch, set to OFF.

2. INSTANCE_MEMORY: use its default value AUTOMATIC

3. SHEAPTHRES: 256000 (1000MB). You may set it to a larger value if your server has enough physical memory. Refer to IBM documentation for details on how to calculate its value for your database.

4. RQRIOBLK: 65535 (64KB).

5. ASLHEAPSZ: 256 (1MB).

6. NUMDB: set it to the actual number of local active databases, or the maximum number of different database aliases,

cataloged on a DB2 Connect server.

Database Level

1. SELF_TUNING_MEM: ON (default value)

2. DATABASE_MEMORY: AUTOMATIC (default value)

3. DB_MEM_THRESH: 10 (10%)

4. LOCKLIST: 25600 (100 MB)

5. MAXLOCKS: 10 (10%)

6. SHEAPTHRES_SHR: 512000 (2000 MB)

7. SORTHEAP: 51200 (200 MB)

8. DBHEAP: AUTOMATIC

9. LOGBUFSZ: 25600 (100MB)

10. UTIL_HEAP_SZ: 25600 (100MB)

11. STMTHEAP: 25600 (100MB)

12. APPLHEAPSZ: 51200 (200MB)

13. APPL_MEMORY: AUTOMATIC

14. STAT_HEAP_SZ: 25600 (100MB)

15. NUM_IOSERVERS: set it equal to the number of CPUs in the system

16. NUM_IOCLEANERS: set it equal to NUM_IOSERVERS

17. MAXFILOP: 5000

18. LOGFILSIZ: 512000 (2000MB)

19. LOGPRIMARY: 10

20. LOGSECOND: 10
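As a sketch, these database-level settings can be applied with db2 update db cfg, for example:

db2 update db cfg for $DWH_NAME using LOCKLIST 25600 MAXLOCKS 10
db2 update db cfg for $DWH_NAME using LOGFILSIZ 512000 LOGPRIMARY 10 LOGSECOND 10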

Database Registry

DB2_FMP_COMM_HEAPSZ: increase the fenced routines pool size if there are problems with stored procedure invocation (SIEBSTAT, SIEBTRUN, etc.). You can set it to 128000 (500MB). Monitor db2diag.log to verify that the value is set correctly. Check for warnings such as: "Insufficient memory available for IPC communication with the db2fmp process".
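For example, a minimal sketch (registry variables take effect after an instance restart):

db2set DB2_FMP_COMM_HEAPSZ=128000
db2stop
db2start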


Buffer Pools

Consider creating a separate buffer pool for each table space in the database. The most important settings for buffer pools

are:

1. SIZE: the default value is too small. Set it to a reasonably large value for all buffer pools (from 512MB to 2+ GB if the system is not memory constrained).

2. NUMBLOCKPAGES = SIZE * 0.05, i.e. 5% of the total buffer pool SIZE.

3. Increase IBMDEFAULTBP buffer pool size.
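For example, a hedged sketch of creating a dedicated 1GB buffer pool with block-based pages (the buffer pool name is illustrative; SIZE is in pages, so 65536 16K pages make 1GB, and NUMBLOCKPAGES is about 5% of SIZE):

db2 "CREATE BUFFERPOOL BP_W_FACT IMMEDIATE SIZE 65536 PAGESIZE 16K"
db2 "ALTER BUFFERPOOL BP_W_FACT NUMBLOCKPAGES 3264 BLOCKSIZE 32"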

Table Spaces

PREFETCHSIZE: 32 or 64.

BUFFERPOOL: assign a separate buffer pool to each table space.
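For example (the table space and buffer pool names are illustrative; the assigned buffer pool's page size must match the table space page size):

db2 "ALTER TABLESPACE TS_W_FACT PREFETCHSIZE 32 BUFFERPOOL BP_W_FACT"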

DB2 Recommendations and Best Practices

Disabling Bulk Mode

BI Analytic Applications 7.9.6.x does not support DB2 Bulk Mode. There can be corner cases in ETL executions when two or more mappings try to load data into the same table; such operations are not supported in DB2. Set the Informatica DisableDB2BulkMode parameter to 'Yes' to disable bulk mode on the target database.

If you start an ETL with the DAC/INFA tiers not configured correctly (i.e. with bulk mode still enabled) and get failures caused by bulk mode, the DB2 bulk load utility used by Informatica can place table locks that prevent any further operations. For example, a truncate table statement fails:

Error while executing : TRUNCATE TABLE:$TABLE_NAME

SQL0668N Operation not allowed for reason code “3” on table “$TABLE_NAME”

To resolve this issue execute the following commands:

db2 connect to $DB_NAME

db2 load from /dev/null of del terminate into $TABLE_NAME

Avoiding ‘Unsorted input found’ Warning

INFA tracks row order when building its lookup cache. Sometimes it may issue the following warning in a session log:

LKPDP_60:TRANSF_1_1> TT_11195 Warning: Unsorted input found when building the cache for the Lookup

transformation [$LOOKUP_NAME]. The current number of entries in the index cache is 3. For optimal

performance, use sorted input.

This issue arises when the rows are not sorted the way INFA expects. It can often be resolved simply by validating and updating the ORDER BY clause of the lookup SQL override statement. However, there is one exception for DB2 that needs a more complex update. If the original ORDER BY clause of the lookup SQL is correct and you still get the warning, then:

1. Ensure your Integration Service is run in UNICODE mode.

2. Change the sort order for the session with the problematic lookup from BINARY to any other supported sort order.

3. Update the lookup SQL's ORDER BY clause using the DB2-specific COLLATION_KEY_BIT function, setting the collation according to the session sort order selected in the previous step (see the sketch after this list).
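A minimal sketch, assuming the lookup SQL is ordered by INTEGRATION_ID (the collation name must be chosen to match the session sort order and is only an assumption here):

SELECT INTEGRATION_ID, ROW_WID
FROM W_PARTY_D
ORDER BY COLLATION_KEY_BIT(INTEGRATION_ID, 'UCA400R1')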


Eliminating this warning improves lookup cache creation performance.

SIEBTRUN and SIEBSTAT Errors

When SIEBTRUN/SIEBSTAT fails during the ETL with the error:

ANOMALY INFO::: Error while executing : ANALYZE TABLE:$TABLE_NAME

MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DataWarehouse:SIEBSTAT ('$DWH_USER',

'$TABLE_NAME', 'SQL_STATS_ALL')

[IBM][CLI Driver][DB2/AIX64] SQL1042C An unexpected system error occurred. SQLSTATE=58004

Follow the steps below:

Ensure system time and date are set correctly

Make sure your system has enough memory and swapping/paging space

Run db2iupdt utility

Stop and start DB manager

Validate your DB2 installation

Check if DB2_FMP_COMM_HEAPSZ registry variable is set to the recommended value

Each action on its own can resolve the issue, so check whether the problem persists after completing each one.

‘The transaction log for the database is full’ Error

Set the log file size much bigger than its default value. In general, 2 GB should satisfy most requirements. To check the current setting:

db2 get db cfg for $DWH_NAME | grep LOGFILSIZ

To update (2 GB is 512000 of 4 KB blocks):

db2 update db cfg for $DWH_NAME using LOGFILSIZ 512000

DB2 Index Usage Monitoring

Introduction

DB2 db2pd Utility

The db2pd utility provides statistics about DB system memory sets; its -tcbstats option retrieves information about tables and indexes. The 'TCB Index Stats: Scans' column shows the total number of scans for each index in the database. Generally, it allows detecting indexes that are not used, with one exception: all usage data is reset after a system restart, so the results need to be preserved before each system reboot.

SYSCAT.INDEXES System Catalog View

Index usage information is stored in the LASTUSED column of the SYSCAT.INDEXES catalog view. This column is available from DB2 v9.7 onwards; it does not exist in earlier DB2 versions. The column is updated each time an index is used by a DML statement or when DB2 enforces referential integrity constraints. The index usage data is not recycled after a system restart.

Check the DB2_SYSTEM_MONITOR_SETTINGS registry variable (LAST_USE_INTERVAL:0), which controls the LASTUSED daemon (db2lused EDU – Engine Dispatchable Unit) in your environment, and update it accordingly if needed.
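For example, a hypothetical sketch of enabling the daemon with a non-zero interval (the interval value and its unit are assumptions; refer to IBM documentation for your DB2 version):

db2set DB2_SYSTEM_MONITOR_SETTINGS=LAST_USE_INTERVAL:15
db2stop
db2start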


Index Usage Monitoring Limitations

Each time an index is dropped or created, both the SYSCAT.INDEXES and db2pd stats are lost. Each time the DB is restarted, only the db2pd stats are lost.

Implement Index Usage Monitoring

DB2 V9.5

You can use db2pd utility to identify redundant indexes in your system. The proposed syntax:

db2pd -db sample -tcbstats all -file db2pd_tab_all.txt

TCB Index Stats:
Address            TableName IID EmpPgDel RootSplits BndrySplts PseuEmptPg Scans KeyUpdates InclUpdats NonBndSpts PgAllocs Merges PseuDels DelClean IntNodSpl
0x070000016ADCED40 W_PARTY_D 7   0        0          0          0          5     0          0          0          52521    0      0        0        0
0x070000016ADCED40 W_PARTY_D 6   0        0          0          0          0     0          0          0          4404     0      0        0        0
0x070000016ADCED40 W_PARTY_D 5   0        0          0          0          0     0          0          0          4402     0      0        0        0
0x070000016ADCED40 W_PARTY_D 4   0        0          0          0          0     0          0          0          11135    0      0        0        0
0x070000016ADCED40 W_PARTY_D 3   0        0          0          0          3     0          0          0          4861     0      0        0        0
0x070000016ADCED40 W_PARTY_D 2   0        0          0          0          7     0          0          0          37591    0      0        0        0

In the TCB Index Stats output, a Scans value of zero identifies indexes that have never been scanned since the last database startup. In this example, the indexes with IIDs 6, 5 and 4 show zero scans; you can use the following query to map them to the unused indexes on the W_PARTY_D table:

db2 "SELECT INDSCHEMA, INDNAME

FROM SYSCAT.INDEXES

WHERE TABNAME = 'W_PARTY_D' AND IID in (6,5,4)"

INDSCHEMA INDNAME

----------------------- ---------------

CRMDWH W_PARTY_D_F2

CRMDWH W_PARTY_D_F3

CRMDWH W_PARTY_D_F4

DB2 V9.7

In addition to the db2pd-based approach, you can also use the SYSCAT.INDEXES system catalog view to implement Index Usage Monitoring in V9.7.

1. You should enable the LASTUSED daemon (db2lused EDU – Engine Dispatchable Unit). It is controlled by the DB2_SYSTEM_MONITOR_SETTINGS registry variable (LAST_USE_INTERVAL:0), described above.

2. Create a new table to track index usage:

create table psr_index_usage as (select INDSCHEMA, INDNAME, OWNER, TABSCHEMA, TABNAME, LASTUSED from

syscat.indexes) with no data

3. Add an index on the table:

create unique index psr_index_usage_pk on psr_index_usage(INDSCHEMA, INDNAME)

4. Populate the table with data using the following SQL:


insert into psr_index_usage (INDSCHEMA, INDNAME, OWNER, TABSCHEMA, TABNAME, LASTUSED) select INDSCHEMA,

INDNAME, OWNER, TABSCHEMA, TABNAME, LASTUSED from syscat.indexes where TABSCHEMA = '$TABSCHEMA'

Note: replace $TABSCHEMA with the schema name you want to track index usage for. If you have multiple schemas, run the SQL for each schema where you would like to enable usage monitoring.

5. Execute the following SQL before each ETL process (you can use it as pre-SQL in DAC):

merge into psr_index_usage piu using

(select INDSCHEMA, INDNAME, OWNER,TABSCHEMA, TABNAME, LASTUSED

from syscat.indexes where TABSCHEMA = '$TABSCHEMA') rs

on (piu.INDSCHEMA = rs.INDSCHEMA and piu.INDNAME = rs.INDNAME)

when matched then

update set LASTUSED = (case when piu.LASTUSED > rs.LASTUSED then piu.LASTUSED else rs.LASTUSED end)

when not matched then

insert(INDSCHEMA, INDNAME, OWNER,

TABSCHEMA, TABNAME, LASTUSED)

values(rs.INDSCHEMA, rs.INDNAME, rs.OWNER, rs.TABSCHEMA, rs.TABNAME, rs.LASTUSED)

6. To review the captured index usage statistics run the SQL below:

select * from psr_index_usage piu where piu.LASTUSED < current_date - $DAYS days

You should pick a reasonably large value for $DAYS to capture the most complete statistics about index usage across all end user queries and dashboards. If the LASTUSED column contains the value '0001-01-01', the index has never been used and can be dropped.

Important! You should exclude ETL-type indexes from the monitoring. These indexes are critical for ensuring ETL mapping performance in your environment.

SQL Server Warehouse Recommendations for Better Performance

SQL Server Index Monitoring using DMV

You can use DMVs to track index usage information for BI Analytics Applications in a Microsoft SQL Server Warehouse. The technique extracts the usage details from the SQL Server cache. Note that this cache does not persist across a SQL Server instance restart: all the usage information is emptied during the server instance shutdown.

Follow the steps below to implement SQL Server Index Monitoring.

1. Grant VIEW SERVER STATE permission:

USE master;

GRANT VIEW SERVER STATE TO YOUR_DWH_DB;

GO

2. Set YOUR_DWH_DB to your Data Warehouse DB name and run the scripts in the correct order.

3. Create a new table to store snapshots of the DMV output.

USE YOUR_DWH_DB

GO

IF OBJECTPROPERTY(object_id(N'dbo.PSRIndexUsageStats'), 'IsUserTable') = 1

DROP TABLE dbo.PSRIndexUsageStats;

GO

SELECT GETDATE () AS ExecutionTime

, sis.database_id


, sis.object_id

, sis.index_id

, sis.user_seeks

, sis.user_scans

, sis.user_lookups

, sis.user_updates

, sis.last_user_seek

, sis.last_user_scan

, sis.last_user_lookup

, sis.last_user_update

, sis.system_scans

, sis.system_lookups

, sis.system_updates

, sis.last_system_seek

, sis.last_system_scan

, sis.last_system_lookup

, sis.last_system_update

, OBJECT_NAME(sis.object_id) AS TableName

, si.name AS IndexName

INTO dbo.PSRIndexUsageStats

FROM sys.dm_db_index_usage_stats sis

INNER JOIN sys.indexes si ON sis.OBJECT_ID = si.OBJECT_ID AND sis.Index_ID = si.Index_ID

WHERE sis.database_id=0
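-- Note: database_id = 0 matches no rows, so this SELECT ... INTO creates an empty table with the DMV structure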

GO

4. Take a baseline snapshot of the DMV output.

USE YOUR_DWH_DB

GO

INSERT dbo.PSRIndexUsageStats

SELECT GETDATE () AS ExecutionTime

, sis.database_id

, sis.object_id

, sis.index_id

, sis.user_seeks

, sis.user_scans

, sis.user_lookups

, sis.user_updates

, sis.last_user_seek

, sis.last_user_scan

, sis.last_user_lookup

, sis.last_user_update

, sis.system_scans

, sis.system_lookups

, sis.system_updates

, sis.last_system_seek

, sis.last_system_scan

, sis.last_system_lookup

, sis.last_system_update

, OBJECT_NAME(sis.object_id) AS TableName

, si.name AS IndexName

FROM sys.dm_db_index_usage_stats sis

INNER JOIN sys.indexes si ON sis.OBJECT_ID = si.OBJECT_ID AND sis.Index_ID = si.Index_ID

WHERE sis.Database_ID = DB_ID('YOUR_DWH_DB')

-- Optionally for requested table(s)

-- AND sis.object_id IN (OBJECT_ID('YourTableName'))

GO

5. Repeat step #4 at regular intervals (weekly/monthly).

6. Use the script below to report the captured index usage statistics:

USE YOUR_DWH_DB

GO

SELECT * FROM dbo.PSRIndexUsageStats

WHERE database_id = DB_ID('YOUR_DWH_DB')


-- Optionally for requested table(s)

--AND object_id IN (OBJECT_ID('YourTableName'), . . . )

ORDER BY IndexName, ExecutionTime;

GO

Informatica Configuration for Better Performance

Informatica PowerCenter 32-bit vs. 64-bit

A 32-bit OS can address only 2^32 bytes, or four gigabytes of RAM, and allows a maximum of two gigabytes for any single application. Oracle BI Applications ETL mappings use complex Informatica transformations, such as lookups cached in memory, and their performance is heavily impacted by data volumes from incremental extracts and high watermark warehousing volumes. Additionally, BI Applications ETL execution plans employ parallel mapping execution. So a 32-bit ETL tier can quickly exhaust the available memory and end up with very expensive I/O paging and swapping operations, causing rather dramatic regression in ETL performance.

In contrast, 64-bit Informatica takes advantage of more physical RAM to perform complex transformations in memory, eliminating costly disk I/O operations. Informatica PowerCenter 8.6 provides true 64-bit performance and the ability to scale, because no intermediate staging or hashing files on disk are required for processing.

The internal BI Applications ETL benchmarks for Informatica 8.6 32-bit vs. 64-bit showed at least two times better throughput for the 64-bit configuration. So, Oracle Business Intelligence Applications customers are strongly encouraged to use the Informatica 8.6 64-bit version for Medium and Large environments.

Informatica Session Logs

Oracle BI Applications 7.9.6 uses Informatica PowerCenter 8.x and 9.x, which has improved log reports. Each session log

provides the detailed information about transformations as well as summary of a mapping execution, including the detailed

percentage run time, idle time, etc.

Below is an example of the execution summary from an Informatica session log:

***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****

Thread [READER_1_1_1] created for [the read stage] of partition point [Sq_W_CUSTOMER_LOC_USE_DS] has completed.

Total Run Time = [559.812502] secs

Total Idle Time = [348.453112] secs

Busy Percentage = [37.755389]

Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [Sq_W_CUSTOMER_LOC_USE_DS] has

completed.

Total Run Time = [559.843748] secs

Total Idle Time = [322.109055] secs

Busy Percentage = [42.464472]

Thread work time breakdown:

Fil_W_CUSTOMER_LOC_USE_D: 2.105263 percent

Exp_W_CUSTOMER_LOC_USE_D_Update_Flg: 10.526316 percent

Lkp_W_CUSTOMER_LOC_USE_D: 13.684211 percent

mplt_Get_Etl_Proc_Wid.EXP_Constant_for_Lookup: 1.052632 percent

mplt_Get_Etl_Proc_Wid.Exp_Get_Integration_Id: 2.105263 percent

mplt_Get_Etl_Proc_Wid.Exp_Decide_Etl_Proc_Wid: 3.157895 percent

mplt_Get_Etl_Proc_Wid.LKP_ETL_PROC_WID: 20.000000 percent

mplt_SIL_CustomerLocationUseDimension.Exp_Scd2_Dates: 44.210526 percent

mplt_SIL_CustomerLocationUseDimension.Exp_W_CUSTOMER_LOC_USE_D_Transform: 3.157895 percent

Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_CUSTOMER_LOC_USE_D] has completed.

Total Run Time = [561.171875] secs

Total Idle Time = [0.000000] secs

Busy Percentage = [100.000000]

Busy Percentage for a single thread cannot be considered an absolute measure of performance for a whole mapping; all threads' statistics must be reviewed together. Informatica computes it for a single thread in a mapping as follows:


Busy Percentage = (Total Run Time – Total Idle Time) / Total Run Time
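For example, for the READER thread in the log above: (559.81 - 348.45) / 559.81 = 0.378, i.e. the reported Busy Percentage of about 37.8%.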

If the report log shows high Busy Percentage (> 70 - 80%) for the READER Thread, then you may need to review the mapping’s

Reader Source Qualifier Query for any performance bottlenecks.

If the report shows high Busy Percentage (> 60 - 70%) for the TRANSF Thread, then you need to review the detailed

transformations execution summary and identify the most expensive transformation. In the example above the

transformation “mplt_SIL_CustomerLocationUseDimension.Exp_Scd2_Dates” consumes 44.2% of all TRANSF runtime, so it may

be considered a candidate for investigation.

If the report shows high Busy Percentage for the WRITER Thread, it may not necessarily be a performance bottleneck.

Depending on the processed data volumes, you may want to turn off Bulk Mode. Refer to the section “Informatica Load: Bulk

vs. Normal” for more details.

The log above shows that most probably the mapping is well balanced between Reader and Transformation threads and it

keeps Writer busy with inserts.

Informatica Lookups

Too many Informatica Lookups in an Informatica mapping may cause significant performance slowdown. Review the

guidelines below for handling Informatica Lookups in Oracle Business Intelligence Applications mappings:

Inspect Informatica session logs for the number of lookups, including each lookup’s percentage runtime.

Check “Lookup table row count” and “Lookup cache row count” numbers for each Lookup Transformation. If Lookup

table row count is too high, Informatica will cache a smaller subset in its Lookup Cache. Such lookup could cause

significant performance overhead on ETL tier.

If functional logic permits, consider reducing a large lookup row count by adding more constraining predicates to the

lookup query WHERE clause.

If a Reader Source Qualifier query is not a bottleneck in a slow mapping, and the mapping is overloaded with lookups,

consider pushing lookups with row counts less than two million into the Reader SQL as OUTER JOINS.

Important! Some lookups can be reusable within a mapping or across multiple mappings, so they cannot be

constrained or pushed down into Reader queries. Consult Oracle Development prior to re-writing Oracle Business

Intelligence Applications mappings.

If you identify a very large lookup with a row count of more than 15-20 million, consider pushing it down as an OUTER JOIN into the mapping's Reader Query. Such an update would slow down the Reader SQL execution, but it might improve the mapping's overall performance.

You should test the changes to avoid functional regressions before implementing optimizations in your production

environment.

Disabling Lookup Cache for very large Lookups

Informatica uses Lookup cache to store the lookup data on the ETL tier in flat files (dat and idx). The Integration Service builds

cache in memory when it processes the first row of data in the cached Lookup Transformation. If Lookup data is small, the

lookup data can be stored in memory and transformation processes the rows very fast. But, if Lookup data is very large

(typically over 20M), the lookup cannot fit into the allocated memory and the data has to be paged in and out many times

during a single session. As a result, such lookup transformations adversely affect the overall mapping performance.

Additionally Informatica takes more time to build such large lookups.

If constraining a large lookup is not possible, then consider disabling the lookup cache. Connect to Informatica Workflow

Manager, open the session properties, and find the desired transformation in the Transformations folder on the Mapping tab.

Then uncheck Lookup Cache Enabled property and save the session.


Disabling the lookup cache for heavy lookups will help to avoid excessive paging on the ETL tier. When the lookup cache is

disabled, the Integration Service issues a select statement against the lookup source database to retrieve lookup values for

each row from the Reader Thread. It would not store any data in its flat files on ETL tier. The issued lookup query uses bind

variables, so it is parsed only once in the lookup source database.

Disabling lookup cache may work faster for very large lookups under following conditions:

Lookup query must use index access path, otherwise data retrieval would be very expensive on the source lookup

database tier. Remember that Informatica would fire the lookup query for every record from its Reader thread.

Consider creating an index for all columns, which are used in the lookup query. Then Oracle Optimizer would choose

INDEX FAST FULL SCAN to retrieve the lookup values from index blocks rather than scanning the whole table.

Check the explain plan for the lookup query to ensure index access path.

You should test the modified mapping with the selected disabled lookups in a test environment and benchmark its

performance prior to implementing the change in the production system.

Joining Staging Tables to Lookup Tables in Informatica Lookups

If you identify bottlenecks with lookups having very large rowcounts, you can consider constraining them by updating the

Lookup queries and joining to a staging table used in the mapping. As a result, Informatica will execute the lookup query and

cache much fewer rows, and speed up the rows processing on its Transformation thread.

For example, the original query for Lkp_W_PARTY_D_With_Geo_Wid

SELECT DISTINCT W_PARTY_D.ROW_WID as ROW_WID,

W_PARTY_D.GEO_WID as GEO_WID,

W_PARTY_D.INTEGRATION_ID as INTEGRATION_ID,

W_PARTY_D.DATASOURCE_NUM_ID as DATASOURCE_NUM_ID,

W_PARTY_D.EFFECTIVE_FROM_DT as EFFECTIVE_FROM_DT,

W_PARTY_D.EFFECTIVE_TO_DT as EFFECTIVE_TO_DT

FROM

W_PARTY_D

Can be modified to:

SELECT DISTINCT W_PARTY_D.ROW_WID as ROW_WID,

W_PARTY_D.GEO_WID as GEO_WID,

W_PARTY_D.INTEGRATION_ID as INTEGRATION_ID,

W_PARTY_D.DATASOURCE_NUM_ID as DATASOURCE_NUM_ID,

W_PARTY_D.EFFECTIVE_FROM_DT as EFFECTIVE_FROM_DT,

W_PARTY_D.EFFECTIVE_TO_DT as EFFECTIVE_TO_DT

FROM

W_PARTY_D,

W_RESPONSE_FS

WHERE W_PARTY_D.INTEGRATION_ID=W_RESPONSE_FS.PARTY_ID AND

W_PARTY_D.DATASOURCE_NUM_ID=W_RESPONSE_FS.DATASOURCE_NUM_ID

This change reduced the lookup row count from over 22M to 180K and helped to improve the mapping's performance.

You can apply this approach selectively to both initial and incremental mappings after collecting thorough benchmarks.

Informatica Custom Relational Connections for long running mappings

If you plan to summarize very large volumes of data (usually over 100 million records), you can speed up the large data ETL mappings by turning off automated PGA structures allocation and setting sort_area_size and hash_area_size to higher values. If you have limited system memory, you can increase only sort_area_size, as sorting operations for aggregate mappings are more memory intensive. Hash joins involving bigger tables can still perform better with a smaller hash_area_size.


You will need to create and use custom relational connections in DAC and Informatica.

Define Custom Relational Connections in DAC

Follow the steps below to define custom connection settings for selected tasks in DAC:

1. Connect to DAC as Administrator

2. Open Tools -> Seed Data -> Logical Data Sources menu, click New to create an entry DBConnection_OLAPM.

3. Assign the newly created Data Source for a selected task at Task level and also in Source Tables tab.

4. Navigate to Setup -> Physical Data Sources and create a physical source DataWarehouse_Manual.

5. Open Parameters tab at Execution Plan level and click Generate again. You should see the new entry

DBConnection_OLAPM.

6. Map DBConnection_OLAPM to DataWarehouse_Manual.

7. Rebuild the Execution Plan.

8. Right click on the Execution Plan and choose Add Refresh Dates. DAC will associate the relevant tables from

DataWarehouse to DataWarehouse_Manual.

Define Custom Relational Connections in Informatica

Follow the steps below to create a new Relational Connection with custom session parameters in Informatica:

1. Open Informatica Workflow Manager and navigate to Connections -> Relational -> New

2. Define a new Target connection 'DataWarehouse_Manual'. Make sure it exactly matches the name of the Physical Data Source defined above.

3. Use the same values as in the 'DataWarehouse' connection.

4. Click on 'Connection Environment SQL' and insert the following commands:

alter session set workarea_size_policy = manual;

alter session set sort_area_size = 1000000000;

alter session set hash_area_size = 2000000000;

Save the changes.

Repeat the same steps to define another custom Relational connection to your Oracle Source database.

Informatica Session Parameters

There are three major properties, defined in Informatica Workflow Manager for each session, which impact Informatica

mappings performance.

Commit Interval

The target-based commit interval determines the commit points at which the Integration Service commits data writes in the target database. The larger the commit interval, the better the overall mapping performance. However, too large a commit interval may cause database logs to fill and result in session failure.

Oracle BI Applications Informatica mappings have the default setting 10,000. The recommended range for commit intervals is

from 10,000 up to 200,000.

DTM Buffer Size

The DTM Buffer Size specifies the amount of memory the Integration Service uses for DTM buffer memory. Informatica uses

DTM buffer memory to create the internal data structures and buffer blocks used to bring data into and out of the Integration

Service.


Occasionally, you may see the "WRT_8165 - TIMEOUT BASED COMMIT POINT" error message in Informatica session logs. According to Informatica, the writer thread may accidentally cause a DTM deadlock. If that happens, the Writer thread waits for a minute and then issues an emergency timeout-based commit. To resolve this issue, increase your DTM Buffer Size.

Additional Concurrent Pipelines for Lookup Cache Creation

The Additional Concurrent Pipelines for Lookup Cache Creation parameter defines the concurrency for lookup cache creation. Oracle BI Applications Informatica mappings have the default setting 0. You can reduce lookup cache build time by enabling parallel lookup cache creation, i.e. setting the value to larger than one.

Important! You should carefully analyze long running mapping bottlenecks before turning on lookup cache build concurrency

in your production environment. Oracle BI Applications execution plans take advantage of parallel workflows execution.

Enabling concurrent lookup cache creation may result in additional overhead on a target database and longer execution time.

You can consider turning on lookup cache creation concurrency when you have one or two long running mappings, which are

overloaded with lookups.

Default Buffer Block Size

The buffer block size specifies the amount of buffer memory used to move a block of data from the source to the target.

Oracle BI Applications Informatica mappings have the default setting 128,000. Avoid using ‘Auto’ value for Default Buffer Block

Size, as it may cause performance regressions for your sessions.

The internal tests showed better performance for both Initial and Incremental ETL with Default Buffer Block Size set to

512,000 (512K). You can run the following SQL to update the Buffer Block Size to 512K for all mappings in your Informatica

repository:

SQL> update opb_cfg_attr set attr_value='512000' where attr_value='128000' and attr_id = 5;

SQL> commit;

Important! You should test the changes in your development repository and benchmark ETL performance before making

changes to your production environment.

Informatica Load: Bulk vs. Normal

The Informatica writer thread may become a bottleneck in some mappings that use bulk mode to load very large volumes

(>200M) into a data warehouse.

The analysis of a trace file from a Writer database session shows that Informatica uses direct path insert to load data in Bulk mode. The database session performs two direct path writes to insert each new portion of data. Each time, Oracle scans for 12 contiguous blocks in the target table to perform a new write transaction. As the table grows larger, it takes longer and longer to scan the segment for chunks of 12 contiguous blocks. So, even though it bypasses the database block cache, the Informatica Writer thread may slow down the mapping's overall performance.

To determine whether a mapping that loads very large volumes in bulk mode slows down because of its writer thread, open its Informatica session log and compute the time to write the same set of blocks (usually 10,000 rows) at the beginning and at the end of the log. If you observe a significant increase in the writer execution time toward the end of the log, then consider either increasing the commit size for the mapping or changing the session load mode from Bulk to Normal in Informatica Workflow Manager, and test the mapping with the updated setting.

Informatica Bulk Load: Table Fragmentation

Informatica Bulk Load for very large volumes may not only slow down the mapping performance but also cause significant

table fragmentation.


The internal tests showed that the commit size for Normal load did not affect the number of allocated extents for one million rows in the W_RESPONSE_F fact used in the internal benchmarks. However, for Bulk load the number of extents increased rather significantly as the commit size went down. The commit size also affected the mapping performance for both Normal and Bulk loads; the drop in throughput was more significant for the latter scenario.

The table below shows the number of extents (ext) and throughput (rps) for each tested scenario.

Informatica Load type   1M commit            100K commit           10K commit          1K commit          10 rows commit
Normal mode             80 ext / 34K rps     80 ext / 33K rps      80 ext / 30K rps    80 ext / 27K rps   80 ext / 14K rps
Bulk mode               80 ext / 55.5K rps   190 ext / 55.5K rps   200 ext / 37K rps   960 ext / 8K rps   >5K ext (out of space) / 600 rps

Important! To ensure bulk load performance and avoid or minimize target table fragmentation, use a larger commit size in Informatica mappings.

Use of NULL Ports in Informatica Mappings

The use of connected or disconnected ports with hard-coded NULL values in Informatica mappings can be yet another reason for slower ETL mapping performance. An internal study showed that, depending on the number of NULL ports, such mapping performance can drop by a factor of two or more. The performance gap becomes larger as more ports are used in a mapping. The session CPU time grows nearly proportionally to the number of connected ports, and so does the row width processed by Informatica. As soon as a certain threshold of ports is reached, the internal Informatica session processing for wide mappings becomes even more complex, and its execution runtime slows down dramatically. The internal tests demonstrated that Informatica treats NULL and non-NULL values equally and allocates critical resources for processing NULL ports. It also includes NULL values in the INSERT statements executed by the WRITER thread on the data warehouse tier.

To ensure effective performance of Informatica mappings:

- Avoid using NULL ports in Informatica transformations.

- Try to keep the total number of ports no greater than 50 per mapping.

- Review slow mappings for NULL ports or any other potentially redundant ports, which could be eliminated.

Informatica Parallel Sessions Load on ETL tier

Informatica mappings with complex transformations and heavy lookups typically consume larger amounts of memory during ETL execution. While processing large data volumes and executing in parallel, such mappings can easily overload the ETL server and cause very heavy memory swapping and paging. As a result, the overall ETL execution takes much longer to complete. To avoid such potential bottlenecks:

Consider implementing Informatica 64-bit version on your ETL tier.

Ensure you have enough physical memory on your ETL tier server. Refer to Hardware Recommendations section for more

details.

Keep in mind that too many Informatica sessions running in parallel may overload either the source or the target database.

Set a smaller number of connections to the Informatica Integration Service in DAC. Navigate to DAC's Setup screen -> Informatica Servers tab -> Maximum Sessions in the lower pane for both Informatica and Repository connections. The recommended range is from 5 to 10 sessions.

Benchmark your ETL performance in your test environment prior to implementing the change in the production system.

Informatica Workflow Partitioning

This section covers techniques and recommendations for mapping partitioning to speed up workflow execution for large volume mappings or slow ETL jobs.


Workflow Session Partitioning for Writer Updates

Row-by-row updates can significantly slow down mapping performance, making the Informatica Writer Thread the primary bottleneck during ETL. You can quickly find such cases by analyzing Thread Busy % and the volume of updates in Informatica session logs. For example:

LOAD SUMMARY

============

WRT_8036 Target: W_CAMP_HIST_F (Instance Name: [W_CAMP_HIST_F])

WRT_8041 Updated rows - Requested: 3753687 Applied: 3753687 Rejected: 0 Affected:

3753687

WRITER_1_*_1> WRT_8043 *****END LOAD SESSION*****

WRITER_1_*_1> WRT_8006 Writer run completed.

MANAGER> PETL_24031

***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****

Thread [READER_1_1_1] created for [the read stage] of partition point [SQ_JOINER] has completed.

Total Run Time = [10753.562755] secs

Total Idle Time = [5467.169323] secs

Busy Percentage = [49.159460]

Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [SQ_JOINER] has

completed.

Total Run Time = [5758.883913] secs

Total Idle Time = [4606.931512] secs

Busy Percentage = [20.003050]

Thread work time breakdown:

Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_CAMP_HIST_F] has completed.

Total Run Time = [10696.082997] secs

Total Idle Time = [5244.599181] secs

Busy Percentage = [50.967105]

The load summary above shows that all processed rows were updates, and Informatica reported a Writer thread Busy Percentage of 50%.

Small-volume updates can be sped up by ensuring indexes on the columns in the WHERE clause of the UPDATE statement. In our example, the following UPDATE statement in the WRITER should have an index on the ROW_WID column:

WRITER WRITER_1_*_1> WRT_8124 Target Table W_CAMP_HIST_F :SQL UPDATE statement:

UPDATE W_CAMP_HIST_F SET PARTY_WID = ?, … X_LAST_UPD_WID = ? WHERE ROW_WID = ?

Otherwise, every single row update would perform a Full Table Scan and result in a very low throughput of a few rows per second.

Important! You should have the required indexes in place so that your Update transformations use Index (if possible, Unique) Scans rather than expensive Full Table Scans for each update record.

Additional improvements for long running update mappings can be achieved by parallelizing concurrent updates to the same target table.

Requirements for Implementing Concurrent Updates

1. Create an index on the target table columns used in the WHERE clause of your UPDATE statement, so that each UPDATE DML uses an index access path.

2. Ensure there are no BITMAP indexes on the target table during the concurrent UPDATE executions; otherwise you may end up with deadlocks during your ETL.

Implement Staging Table HASH Partitioning

Oracle Table Partitioning provides an option to implement hash partitions, which ensure even data distribution across all table partitions. Every time you execute an incremental ETL, DAC truncates the staging tables (_DS, _FS, etc.), and then Informatica SDE mappings populate them with incremental changes extracted from the source environments. Hash partitioning the identified staging table (W_CAMP_HIST_FS in our example) will ensure even data distribution across all partitions for each incremental ETL run:

SQL> RENAME w_camp_hist_fs TO w_camp_hist_fs_bak;

SQL> CREATE TABLE w_camp_hist_fs PARTITION BY HASH(integration_id) PARTITIONS 4 AS SELECT * FROM

w_camp_hist_fs_bak;

SQL> SELECT partition_name FROM user_tab_partitions WHERE table_name='W_CAMP_HIST_FS';

PARTITION_NAME

------------------------------

SYS_P41

SYS_P42

SYS_P43

SYS_P44

Note: No changes need to be made to the table definition in DAC.

General recommendations for hash partitioning implementation:

1. When picking the partitioning key, consider using the unique keys, such as ROW_WID, INTEGRATION_ID, etc. If there are no unique keys, choose the column with the largest number of distinct values.

2. Important! If there are any indexes on the original staging table, you must create them on the hash partitioned table as well. You do not need to create them as global or local; otherwise you will have to use the Action Framework for them.

3. Create 4-6 hash partitions at most. Building more hash partitions and corresponding parallel sessions would not make the mapping run faster. A larger number of parallel sessions would increase the load on the Informatica tier when building CDC Lookups for each hash partition, as well as on the target database tier performing more concurrent updates.

Create Parallel Sessions in Workflow Manager

Create the same number of sessions as hash partitions, each session running against a dedicated partition, and configure the workflow to run the sessions in parallel:

1. Open the desired workflow in Informatica Workflow Manager.

2. Create four copies of the original Session in the opened Workflow.

3. Override each session's SQL override, hard-coding a unique partition name instead of the staging table name, i.e.

FROM W_CAMP_HIST_FS partition(SYS_P41) W_CAMP_HIST_FS

4. If there is a CDC Lookup, which joins a staging and a target table, then make sure you update it to point to a dedicated partition of the staging table for each of the four sessions.

5. Configure your workflow to run the four new sessions in parallel and remove the original session.

6. Save the changes and test the updated mapping.

You may consider applying the same approach to such mappings as SIL_PositionDimensionHierarchy_AsIsUpdate, which reads and updates the W_POSITION_DH table. In this case, apply hash partitioning to W_POSITION_DH using ROW_WID as the partitioning key.

Informatica Pipeline Partitioning

Informatica Pipeline Partitioning can help to speed up Informatica workflow performance by implementing pipelines for its

session(s). Each session can have one or more pipelines. A pipeline consists of a source qualifier and all the transformations

and targets that receive data from the source qualifier. When the Integration Service runs a session, it partitions its pipeline(s)

and performs extract, transformation, and load for each partition in parallel.


Important! The Pipeline Partitioning Option requires an additional license from Informatica. Please consult Informatica Power

Center Workflow Administration Guide and Power Center Performance Tuning Guide for detailed instructions on how to

enable and configure pipeline partitioning.

Oracle BI Applications SIL mappings with a large fraction of row updates can benefit the most from using Informatica Pipeline Partitioning. Such mappings run default UPDATE actions for each input row, and produce throughput ranging from 400 to 800 rows per second depending on the hardware specifications.

You can mitigate the slower performance by running multiple update threads in parallel, assuming both the Informatica and target database boxes have sufficient resources. In general, a single update response time is determined by the average time it takes to locate and read the block. The internal tests for target table updates using a unique index on the table PK show that it takes three logical IOs and one physical IO per update operation. Assuming a logical IO takes 0.1 milliseconds and a physical IO 1 millisecond on average hardware, you get 3 * 0.1 + 1 = 1.3 milliseconds per update, or approximately 770 rows per second.

Typically the SIL source qualifier can operate with much higher throughput. It reads data blocks straight from the staging table, and normally no more than 10% of the reads are physical; the rest are served from the DB buffer cache. A single read delivers more than one row into the client's buffer, typically 5 to 10 rows per read. A conservative estimate of 0.4 milliseconds per fetch of five rows gives a throughput of 12,500 rows per second. Even though the READER thread can operate much faster, the mapping's performance is determined by its slowest thread, the WRITER, operating at a maximum of 800 rows per second. So the READER remains idle for 84% of its time, while the WRITER works at 100% of its capacity. At the same time, the WRITER thread consumes a tiny fraction of the target hardware resources performing row-by-row updates.

You can add an additional WRITER partition to the pipeline so that both writers work concurrently. Then the READER's busy percentage will go up from 16% to 32%, and the session's overall throughput will double. As you proceed to add more WRITER partitions, they will start competing for both Informatica and target resources, so the number of pipeline partitions should not exceed four. With that, you can expect 2.5-3 times better throughput for your mapping. You should monitor the mapping threads' workload and idle percentage and try to achieve a balance among all three threads: READER, TRANSF and WRITER. It is important to configure the pipeline partitioning within the overall ETL execution plan context.

Suspend and Resume Informatica Mappings (Oracle RDBMS)

After a long running Reader query session in a source database, ETL Administrators may encounter poor Informatica Writer performance in a target database. A typical case may involve redundant indexes on the target table, left in the database by oversight. Rather than terminating the session and re-running the expensive Reader SQL, consider suspending the Informatica session, dropping the index, and resuming the session.

Identify the Informatica session's process ID in the Oracle database, and then use the oradebug suspend/resume commands:

SQL> SELECT p.spid, s.process FROM v$process p, v$session s WHERE p.addr = s.paddr AND s.module LIKE
'%pmcmd%';

SQL> oradebug setorapid 172

SQL> oradebug suspend

SQL> DROP INDEX W_PAYROLL_FS_U1;

SQL> oradebug resume


Oracle MERGE in Informatica to Improve Updates Performance

You may find several bottleneck mappings performing heavy row-by-row updates as well as inserts in your ETL runs. Such mappings use default INSERT and UPDATE DMLs in the Informatica Writer thread. As an alternative, you can use the Oracle RDBMS MERGE DML to achieve better performance. This chapter covers three ways to use Oracle MERGE:

1. MERGE SQL in Informatica Update Override

2. MERGE in Post SQL in Update Override

3. MERGE in Informatica SQL Transformation

Each option may cover different implementation scenarios and ETL logic requirements.

MERGE SQL in Informatica Update Override

Some BI Analytics Applications Informatica mappings use the Update Override option in the Target Definition. If you find a MERGE SQL to be a better performing option, then simply use the MERGE SQL instead of UPDATE in the Update Override.

The following example shows the use of Oracle MERGE in the PLP_PayrollFact_PositionHierarchy_Update mapping. The replacement MERGE SQL eliminates the costly full table scan of the large W_PAYROLL_F table, produces a much more effective execution plan, and delivers a significant runtime improvement.

The original explain plan:

-----------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |

-----------------------------------------------------------------------------------------------------------------

| 0 | UPDATE STATEMENT | | 383K| 31M| 1149K (34)| 18:04:32 |

| 1 | UPDATE | W_PAYROLL_F | | | | |

|* 2 | HASH JOIN RIGHT SEMI | | 383K| 31M| 199 (3)| 00:00:12 |

| 3 | VIEW | VW_SQ_1 | 1 | 39 | 2 (0)| 00:00:01 |

| 4 | NESTED LOOPS | | 1 | 85 | 2 (0)| 00:00:01 |

| 5 | NESTED LOOPS | | 1 | 70 | 0 (0)| 00:00:01 |

| 6 | INDEX FULL SCAN | W_POSITION_DH_PRE_CHG_TMP_M1 | 1 | 26 | 0 (0)| 00:00:01 |

|* 7 | INDEX RANGE SCAN | W_POSITION_DH_POST_CHG_TMP_M1 | 1 | 44 | 0 (0)| 00:00:01 |

| 8 | TABLE ACCESS BY INDEX ROWID| W_DAY_D | 47 | 705 | 2 (0)| 00:00:01 |

|* 9 | INDEX RANGE SCAN | W_DAY_D_M39 | 1 | | 1 (0)| 00:00:01 |

| 10 | TABLE ACCESS FULL | W_PAYROLL_F | 383K| 17M| 196 (3)| 00:00:12 |

| 11 | NESTED LOOPS | | 1 | 85 | 2 (0)| 00:00:01 |

| 12 | NESTED LOOPS | | 1 | 41 | 2 (0)| 00:00:01 |

| 13 | TABLE ACCESS BY INDEX ROWID | W_DAY_D | 1 | 15 | 2 (0)| 00:00:01 |

|* 14 | INDEX UNIQUE SCAN | W_DAY_D_P1 | 1 | | 1 (0)| 00:00:01 |

|* 15 | INDEX RANGE SCAN | W_POSITION_DH_PRE_CHG_TMP_M1 | 1 | 26 | 0 (0)| 00:00:01 |

|* 16 | INDEX RANGE SCAN | W_POSITION_DH_POST_CHG_TMP_M1 | 1 | 44 | 0 (0)| 00:00:01 |

-----------------------------------------------------------------------------------------------------------------

The MERGE SQL Override:

MERGE INTO W_PAYROLL_F

USING (SELECT DISTINCT TMP_NEW.ROW_WID TMP_NEW_ROW_WID,

W_PAYROLL_F.rowid rw

FROM W_DAY_D,

W_POSITION_DH_PRE_CHG_TMP TMP_OLD,

W_POSITION_DH_POST_CHG_TMP TMP_NEW,

W_PAYROLL_F

WHERE W_PAYROLL_F.PAY_PERIOD_END_DT_WID = W_DAY_D.ROW_WID

AND W_PAYROLL_F.EMP_POSTN_DH_WID = TMP_OLD.ROW_WID

AND TMP_OLD.SCD1_WID = TMP_NEW.SCD1_WID

AND TMP_NEW.EFFECTIVE_FROM_DT <= W_DAY_D.DAY_DT

AND TMP_NEW.EFFECTIVE_TO_DT > W_DAY_D.DAY_DT

AND W_PAYROLL_F.EMP_POSTN_DH_WID <> TMP_NEW.ROW_WID

) TMP

on (W_PAYROLL_F.rowid = TMP.rw)

WHEN MATCHED THEN

UPDATE

SET W_UPDATE_DT = :TU.W_UPDATE_DT,

ETL_PROC_WID = :TU.ETL_PROC_WID,


EMP_POSTN_DH_WID = TMP_NEW_ROW_WID

The MERGE SQL Explain Plan:

-------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |

-------------------------------------------------------------------------------------------------------------------

| 0 | MERGE STATEMENT | | 1 | 48 | 11 (0)| 00:00:01 |

| 1 | MERGE | W_PAYROLL_F | | | | |

| 2 | VIEW | | | | | |

| 3 | NESTED LOOPS | | 1 | 1027 | 11 (0)| 00:00:01 |

| 4 | NESTED LOOPS | | 1 | 123 | 10 (0)| 00:00:01 |

| 5 | NESTED LOOPS | | 1 | 97 | 10 (0)| 00:00:01 |

| 6 | NESTED LOOPS | | 1 | 59 | 2 (0)| 00:00:01 |

| 7 | INDEX FULL SCAN | W_POSITION_DH_POST_CHG_TMP_M1 | 1 | 44 | 0 (0)| 00:00:01 |

| 8 | TABLE ACCESS BY INDEX ROWID| W_DAY_D | 47 | 705 | 2 (0)| 00:00:01 |

|* 9 | INDEX RANGE SCAN | W_DAY_D_M39 | 1 | | 1 (0)| 00:00:01 |

|* 10 | TABLE ACCESS BY INDEX ROWID | W_PAYROLL_F | 20 | 760 | 10 (0)| 00:00:01 |

| 11 | BITMAP CONVERSION TO ROWIDS| | | | | |

|* 12 | BITMAP INDEX SINGLE VALUE | W_PAYROLL_F_F12 | | | | |

|* 13 | INDEX RANGE SCAN | W_POSITION_DH_PRE_CHG_TMP_M1 | 1 | 26 | 0 (0)| 00:00:01 |

| 14 | TABLE ACCESS BY USER ROWID | W_PAYROLL_F | 1 | 904 | 1 (0)| 00:00:01 |

-------------------------------------------------------------------------------------------------------------------

To implement Update Override, check out the mapping in Informatica Designer, double-click the Target Definition, click the Properties tab, and paste the updated MERGE SQL into the Update Override field. Save the changes and check in the mapping.

MERGE in Post SQL in Update Override

If an Informatica mapping uses Update Override, another way to implement MERGE is to use Post SQL in the Informatica Target Definition:

1. Double-click Target Definition for the chosen Informatica mapping in Designer.

2. Click Properties tab

3. Delete Update Override

4. Put a MERGE syntax in Post SQL field

5. Save the changes

This approach can be used not only for substituting the default UPDATE logic, but also for more effective DELETEs. You can suppress the source qualifier query by adding a '1=2' predicate to its WHERE clause, and then use a manual DELETE in the Target Definition Post SQL, as sketched below.
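For illustration, a hypothetical Post SQL DELETE (the join to the auxiliary table below is an assumption, not shipped logic):

DELETE FROM W_AP_XACT_F
WHERE EXISTS
  (SELECT 1 FROM W_AP_XACT_F_TMP TMP
   WHERE TMP.INTEGRATION_ID = W_AP_XACT_F.INTEGRATION_ID
     AND TMP.DATASOURCE_NUM_ID = W_AP_XACT_F.DATASOURCE_NUM_ID)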

MERGE in Informatica SQL Transformation

The next case covers a more complex scenario, where a BI Analytics Applications mapping, typically loading into a fact table, uses an Update Strategy Transformation to perform inserts and updates depending on specific flag (port) values. With a small volume of updates it is often enough to ensure the presence of unique indexes on the columns used in the UPDATE DML WHERE clause. It is a bigger challenge to achieve good performance for such mappings with very high update volumes. The following example uses the SIL_APTransactionFact mapping to show an Oracle MERGE implementation using Informatica SQL Transformation.

Workflow Logic for AP Transaction Fact Load with MERGE

1. Truncate an auxiliary table W_AP_XACT_F_TMP.

2. Load table W_AP_XACT_F_TMP using the modified SIL_APTransactionFact session with INSERTs only, using Bulk mode.

3. MERGE table W_AP_XACT_F_TMP with W_AP_XACT_F using Oracle MERGE statement.

Implement SIL_APTransactionFact Mapping and Session Changes

The steps below will modify SIL_APTransactionFact mapping and session to load data into an auxiliary empty table W_AP_XACT_F_TMP, instead of the original target Fact table W_AP_XACT_F.


1. Create a new table W_AP_XACT_F_TMP using the following statement:

SQL> CREATE TABLE W_AP_XACT_F_TMP AS SELECT * FROM W_AP_XACT_F WHERE 1 = 0;

2. Create a new Target in a scratch folder (WORK) by importing the table definition from the database.

3. Export the original SIL_APTransactionFact mapping into an XML file. Make sure you keep a backup of the original version.

4. Edit the XML file and replace all strings "W_AP_XACT_F" (double quotes included) with "W_AP_XACT_F_TMP".

5. Import the edited XML file into the working folder WORK and save it in your Informatica repository.

6. Open the imported mapping in Informatica Designer.

7. Double-click the expression Exp_W_AP_XACT_F_Update_Flg, click the Ports tab and locate the Update_Flg port.

8. Double-click the Expression field to open the Expression Editor and replace the IIF expression with the constant 'I', as illustrated below.
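For example, if the original expression resembles the following (illustrative only; the exact IIF condition varies by mapping version), replacing it with the constant 'I' routes every row through the insert path:

-- Original (illustrative): IIF(ISNULL(LKP_ROW_WID), 'I', 'U')
-- Replacement: 'I'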

Create MERGE_APTransactionFact Mapping

Follow the steps below to create a new mapping MERGE_APTransactionFact, which will merge the auxiliary table W_AP_XACT_F_TMP into the original target Fact table W_AP_XACT_F:

1. Create flat file based source qualifier:

a. Create and save a text file containing MERGE statement. Refer to the MERGE SQL section below.

b. Open the working folder WORK and create a new Source from the text file. Make sure you specify the full path to the text file.

c. Choose Delimited with the semicolon symbol as the field separator.

d. Create a column ‘SQL_query’ of type string

2. Create a new Target for storing SQL error messages

a. Create a new Target, using Flat File as a database type, in WORK Folder.

b. Add a column ‘SQL_Errors’ of type string


3. Create a new mapping MERGE_APTransactionFact

a. Create a new mapping MERGE_APTransactionFact

b. Add a new transformation of ‘SQL Transformation’ type

c. Open the new SQL Transformation Properties window and navigate to SQL Ports tab

d. Add a new SQL Port 'Query_Port':

i. Enter the size of the query file from step 1 in the Precision field.

ii. Enter ~Query_Port~ in the SQL Query field.

4. Add flat file source to the mapping

5. Add flat file target to the mapping

6. Connect Source Qualifier Port SQL_Query to SQL Transformation Port Query_Port

7. Connect SQL Transformation Port SQL_Error to Target Port SQL_Errors.

8. Save the newly created mapping in the repository.

Create SIL_APTransactionFact and MERGE_APTransactionFact Sessions

Open Informatica Workflow Manager and create two new sessions for SIL_APTransactionFact and MERGE_APTransactionFact mappings.

1. Create a new session SIL_APTransactionFact using WORK.SIL_APTransactionFact mapping in WORK Folder.

2. Open Session Properties Editor and set Commit Interval to 100,000.

3. Make sure both $Source and $Target are set to $DBConnection_OLAP in the Connections properties.

4. Select Source Qualifier and make sure it’s set to $DBConnection_OLAP.

5. Select Target and ensure it’s set to $DBConnection_OLAP.

6. Make sure Target load type is set to Bulk.


7. Create the second session MERGE_APTransactionFact using the WORK.MERGE_APTransactionFact mapping.

8. Open Connections properties editor and enter $DBConnection_OLAP for SQL Transformation.

9. Click on SQ_import Source and check Source file directory and Source file name properties.


10. Copy the text file with the SQL MERGE statement to the location defined by the Source file directory property on the Informatica server (default: $PMSourceFileDir\).

Create SIL_APTransactionFact Workflow

Create a new workflow SIL_APTransactionFact in the WORK folder and include both the modified SIL_APTransactionFact and MERGE_APTransactionFact sessions sequentially.

ORACLE MERGE SQL for SIL_APTransactionFact

Refer to the MERGE SQL statement for MERGE_APTransactionFact Source Qualifier below.

MERGE INTO W_AP_XACT_F T USING (SELECT GL_ACCOUNT_WID, BUDGET_ORG_WID, CUSTOMER_WID,

CUSTOMER_FIN_PROFL_WID, SUPPLIER_WID, SPLR_ACCT_WID, SALES_REP_WID, SERVICE_REP_WID, ACCT_REP_WID,

PURCH_REP_WID, PRODUCT_WID, SALES_PROD_WID, INVENTORY_PROD_WID, SUPPLIER_PROD_WID, COMPANY_LOC_WID,

PLANT_LOC_WID, OPERATING_UNIT_ORG_WID, PAYABLES_ORG_WID, LEDGER_WID, COMPANY_ORG_WID, BUSN_AREA_ORG_WID,

CTRL_AREA_ORG_WID, FIN_AREA_ORG_WID, SALES_ORG_WID, PURCHASE_ORG_WID, ISSUE_ORG_WID, DOC_TYPE_WID,

CLRNG_DOC_TYPE_WID, REF_DOC_TYPE_WID, POSTING_TYPE_WID, CLRNG_POST_TYPE_WID, COST_CENTER_WID,

PROFIT_CENTER_WID, DOC_STATUS_WID, BANK_WID, TAX_TYPE_WID, PAY_TERMS_WID, PAY_METHOD_WID, PROJECT_WID,

TASK_WID, FINANCIAL_RESOURCE_WID, EXPENDITURE_ORG_WID, SOURCE_WID, TRANSACTION_DT_WID,

TRANSACTION_TM_WID, POSTED_ON_DT_WID, POSTED_ON_TM_WID, CONVERSION_DT_WID, ORDERED_ON_DT_WID,

INVOICED_ON_DT_WID, PURCH_ORDER_DT_WID, SPLR_ORDER_DT_WID, INVOICE_RECEIPT_DT_WID, CLEARED_ON_DT_WID,

CLEARING_DOC_DT_WID, BASELINE_DT_WID, PLANNING_DT_WID, PAYMENT_DUE_DT_WID, MCAL_CAL_WID, AP_DOC_AMT,

AP_LOC_AMT, AP_REMAINING_DOC_AMT, AP_REMAINING_LOC_AMT, XACT_QTY, UOM_CODE, DB_CR_IND, ACCT_DOC_ID,

ACCT_DOC_NUM, ACCT_DOC_ITEM, ACCT_DOC_SUB_ITEM, CLEARING_DOC_NUM, CLEARING_DOC_ITEM, SALES_ORDER_NUM,

SALES_ORDER_ITEM, SALES_SCH_LINE, SALES_INVOICE_NUM, SALES_INVOICE_ITEM, PURCH_ORDER_NUM,

PURCH_ORDER_ITEM, PURCH_INVOICE_NUM, PURCH_INVOICE_ITEM, CUST_PUR_ORD_NUM, CUST_PUR_ORD_ITEM,

SPLR_ORDER_NUM, SPLR_ORDER_ITEM, REF_DOC_NUM, REF_DOC_ITEM, DOC_HEADER_TEXT, LINE_ITEM_TEXT,

ALLOCATION_NUM, GL_BALANCE_ID, BALANCE_ID, FED_BALANCE_ID, GL_RECONCILED_ON_DT, DOC_CURR_CODE,


LOC_CURR_CODE, LOC_EXCHANGE_RATE, GLOBAL1_EXCHANGE_RATE, GLOBAL2_EXCHANGE_RATE, GLOBAL3_EXCHANGE_RATE,

CREATED_BY_WID, CHANGED_BY_WID, CREATED_ON_DT, CHANGED_ON_DT, AUX1_CHANGED_ON_DT, AUX2_CHANGED_ON_DT,

AUX3_CHANGED_ON_DT, AUX4_CHANGED_ON_DT, DELETE_FLG, W_INSERT_DT, W_UPDATE_DT, TENANT_ID, INTEGRATION_ID,

DATASOURCE_NUM_ID, X_CUSTOM FROM W_AP_XACT_F_TMP) S ON (T.INTEGRATION_ID = S.INTEGRATION_ID AND

T.DATASOURCE_NUM_ID = S.DATASOURCE_NUM_ID) WHEN MATCHED THEN UPDATE SET T.GL_ACCOUNT_WID =

S.GL_ACCOUNT_WID, T.BUDGET_ORG_WID = S.BUDGET_ORG_WID, T.CUSTOMER_WID = S.CUSTOMER_WID,

T.CUSTOMER_FIN_PROFL_WID = S.CUSTOMER_FIN_PROFL_WID, T.SUPPLIER_WID = S.SUPPLIER_WID, T.SPLR_ACCT_WID =

S.SPLR_ACCT_WID, T.SALES_REP_WID = S.SALES_REP_WID, T.SERVICE_REP_WID = S.SERVICE_REP_WID,

T.ACCT_REP_WID = S.ACCT_REP_WID, T.PURCH_REP_WID = S.PURCH_REP_WID, T.PRODUCT_WID = S.PRODUCT_WID,

T.SALES_PROD_WID = S.SALES_PROD_WID, T.INVENTORY_PROD_WID = S.INVENTORY_PROD_WID, T.SUPPLIER_PROD_WID =

S.SUPPLIER_PROD_WID, T.COMPANY_LOC_WID = S.COMPANY_LOC_WID, T.PLANT_LOC_WID = S.PLANT_LOC_WID,

T.OPERATING_UNIT_ORG_WID = S.OPERATING_UNIT_ORG_WID, T.PAYABLES_ORG_WID = S.PAYABLES_ORG_WID,

T.LEDGER_WID = S.LEDGER_WID, T.COMPANY_ORG_WID = S.COMPANY_ORG_WID, T.BUSN_AREA_ORG_WID =

S.BUSN_AREA_ORG_WID, T.CTRL_AREA_ORG_WID = S.CTRL_AREA_ORG_WID, T.FIN_AREA_ORG_WID = S.FIN_AREA_ORG_WID,

T.SALES_ORG_WID = S.SALES_ORG_WID, T.PURCHASE_ORG_WID = S.PURCHASE_ORG_WID, T.ISSUE_ORG_WID =

S.ISSUE_ORG_WID, T.DOC_TYPE_WID = S.DOC_TYPE_WID, T.CLRNG_DOC_TYPE_WID = S.CLRNG_DOC_TYPE_WID,

T.REF_DOC_TYPE_WID = S.REF_DOC_TYPE_WID, T.POSTING_TYPE_WID = S.POSTING_TYPE_WID, T.CLRNG_POST_TYPE_WID

= S.CLRNG_POST_TYPE_WID, T.COST_CENTER_WID = S.COST_CENTER_WID, T.PROFIT_CENTER_WID =

S.PROFIT_CENTER_WID, T.DOC_STATUS_WID = S.DOC_STATUS_WID, T.BANK_WID = S.BANK_WID, T.TAX_TYPE_WID =

S.TAX_TYPE_WID, T.PAY_TERMS_WID = S.PAY_TERMS_WID, T.PAY_METHOD_WID = S.PAY_METHOD_WID, T.PROJECT_WID =

S.PROJECT_WID, T.TASK_WID = S.TASK_WID, T.FINANCIAL_RESOURCE_WID = S.FINANCIAL_RESOURCE_WID,

T.EXPENDITURE_ORG_WID = S.EXPENDITURE_ORG_WID, T.SOURCE_WID = S.SOURCE_WID, T.TRANSACTION_DT_WID =

S.TRANSACTION_DT_WID, T.TRANSACTION_TM_WID = S.TRANSACTION_TM_WID, T.POSTED_ON_DT_WID =

S.POSTED_ON_DT_WID, T.POSTED_ON_TM_WID = S.POSTED_ON_TM_WID, T.CONVERSION_DT_WID = S.CONVERSION_DT_WID,

T.ORDERED_ON_DT_WID = S.ORDERED_ON_DT_WID, T.INVOICED_ON_DT_WID = S.INVOICED_ON_DT_WID,

T.PURCH_ORDER_DT_WID = S.PURCH_ORDER_DT_WID, T.SPLR_ORDER_DT_WID = S.SPLR_ORDER_DT_WID,

T.INVOICE_RECEIPT_DT_WID = S.INVOICE_RECEIPT_DT_WID, T.CLEARED_ON_DT_WID = S.CLEARED_ON_DT_WID,

T.CLEARING_DOC_DT_WID = S.CLEARING_DOC_DT_WID, T.BASELINE_DT_WID = S.BASELINE_DT_WID, T.PLANNING_DT_WID

= S.PLANNING_DT_WID, T.PAYMENT_DUE_DT_WID = S.PAYMENT_DUE_DT_WID, T.MCAL_CAL_WID = S.MCAL_CAL_WID,

T.AP_DOC_AMT = S.AP_DOC_AMT, T.AP_LOC_AMT = S.AP_LOC_AMT, T.AP_REMAINING_DOC_AMT =

S.AP_REMAINING_DOC_AMT, T.AP_REMAINING_LOC_AMT = S.AP_REMAINING_LOC_AMT, T.XACT_QTY = S.XACT_QTY,

T.UOM_CODE = S.UOM_CODE, T.DB_CR_IND = S.DB_CR_IND, T.ACCT_DOC_ID = S.ACCT_DOC_ID, T.ACCT_DOC_NUM =

S.ACCT_DOC_NUM, T.ACCT_DOC_ITEM = S.ACCT_DOC_ITEM, T.ACCT_DOC_SUB_ITEM = S.ACCT_DOC_SUB_ITEM,

T.CLEARING_DOC_NUM = S.CLEARING_DOC_NUM, T.CLEARING_DOC_ITEM = S.CLEARING_DOC_ITEM, T.SALES_ORDER_NUM =

S.SALES_ORDER_NUM, T.SALES_ORDER_ITEM = S.SALES_ORDER_ITEM, T.SALES_SCH_LINE = S.SALES_SCH_LINE,

T.SALES_INVOICE_NUM = S.SALES_INVOICE_NUM, T.SALES_INVOICE_ITEM = S.SALES_INVOICE_ITEM,

T.PURCH_ORDER_NUM = S.PURCH_ORDER_NUM, T.PURCH_ORDER_ITEM = S.PURCH_ORDER_ITEM, T.PURCH_INVOICE_NUM =

S.PURCH_INVOICE_NUM, T.PURCH_INVOICE_ITEM = S.PURCH_INVOICE_ITEM, T.CUST_PUR_ORD_NUM =

S.CUST_PUR_ORD_NUM, T.CUST_PUR_ORD_ITEM = S.CUST_PUR_ORD_ITEM, T.SPLR_ORDER_NUM = S.SPLR_ORDER_NUM,

T.SPLR_ORDER_ITEM = S.SPLR_ORDER_ITEM, T.REF_DOC_NUM = S.REF_DOC_NUM, T.REF_DOC_ITEM = S.REF_DOC_ITEM,

T.DOC_HEADER_TEXT = S.DOC_HEADER_TEXT, T.LINE_ITEM_TEXT = S.LINE_ITEM_TEXT, T.ALLOCATION_NUM =

S.ALLOCATION_NUM, T.GL_BALANCE_ID = S.GL_BALANCE_ID, T.BALANCE_ID = S.BALANCE_ID, T.FED_BALANCE_ID =

S.FED_BALANCE_ID, T.GL_RECONCILED_ON_DT = S.GL_RECONCILED_ON_DT, T.DOC_CURR_CODE = S.DOC_CURR_CODE,

T.LOC_CURR_CODE = S.LOC_CURR_CODE, T.LOC_EXCHANGE_RATE = S.LOC_EXCHANGE_RATE, T.GLOBAL1_EXCHANGE_RATE =

S.GLOBAL1_EXCHANGE_RATE, T.GLOBAL2_EXCHANGE_RATE = S.GLOBAL2_EXCHANGE_RATE, T.GLOBAL3_EXCHANGE_RATE =

S.GLOBAL3_EXCHANGE_RATE, T.CREATED_BY_WID = S.CREATED_BY_WID, T.CHANGED_BY_WID = S.CHANGED_BY_WID,

T.CREATED_ON_DT = S.CREATED_ON_DT, T.CHANGED_ON_DT = S.CHANGED_ON_DT, T.AUX1_CHANGED_ON_DT =

S.AUX1_CHANGED_ON_DT, T.AUX2_CHANGED_ON_DT = S.AUX2_CHANGED_ON_DT, T.AUX3_CHANGED_ON_DT =

S.AUX3_CHANGED_ON_DT, T.AUX4_CHANGED_ON_DT = S.AUX4_CHANGED_ON_DT, T.DELETE_FLG = S.DELETE_FLG,

T.W_INSERT_DT = S.W_INSERT_DT, T.W_UPDATE_DT = S.W_UPDATE_DT, T.TENANT_ID = S.TENANT_ID, T.X_CUSTOM =

S.X_CUSTOM WHEN NOT MATCHED THEN INSERT (T.GL_ACCOUNT_WID, T.BUDGET_ORG_WID, T.CUSTOMER_WID,

T.CUSTOMER_FIN_PROFL_WID, T.SUPPLIER_WID, T.SPLR_ACCT_WID, T.SALES_REP_WID, T.SERVICE_REP_WID,

T.ACCT_REP_WID, T.PURCH_REP_WID, T.PRODUCT_WID, T.SALES_PROD_WID, T.INVENTORY_PROD_WID,

T.SUPPLIER_PROD_WID, T.COMPANY_LOC_WID, T.PLANT_LOC_WID, T.OPERATING_UNIT_ORG_WID, T.PAYABLES_ORG_WID,

T.LEDGER_WID, T.COMPANY_ORG_WID, T.BUSN_AREA_ORG_WID, T.CTRL_AREA_ORG_WID, T.FIN_AREA_ORG_WID,

T.SALES_ORG_WID, T.PURCHASE_ORG_WID, T.ISSUE_ORG_WID, T.DOC_TYPE_WID, T.CLRNG_DOC_TYPE_WID,

T.REF_DOC_TYPE_WID, T.POSTING_TYPE_WID, T.CLRNG_POST_TYPE_WID, T.COST_CENTER_WID, T.PROFIT_CENTER_WID,

T.DOC_STATUS_WID, T.BANK_WID, T.TAX_TYPE_WID, T.PAY_TERMS_WID, T.PAY_METHOD_WID, T.PROJECT_WID,


T.TASK_WID, T.FINANCIAL_RESOURCE_WID, T.EXPENDITURE_ORG_WID, T.SOURCE_WID, T.TRANSACTION_DT_WID,

T.TRANSACTION_TM_WID, T.POSTED_ON_DT_WID, T.POSTED_ON_TM_WID, T.CONVERSION_DT_WID, T.ORDERED_ON_DT_WID,

T.INVOICED_ON_DT_WID, T.PURCH_ORDER_DT_WID, T.SPLR_ORDER_DT_WID, T.INVOICE_RECEIPT_DT_WID,

T.CLEARED_ON_DT_WID, T.CLEARING_DOC_DT_WID, T.BASELINE_DT_WID, T.PLANNING_DT_WID, T.PAYMENT_DUE_DT_WID,

T.MCAL_CAL_WID, T.AP_DOC_AMT, T.AP_LOC_AMT, T.AP_REMAINING_DOC_AMT, T.AP_REMAINING_LOC_AMT, T.XACT_QTY,

T.UOM_CODE, T.DB_CR_IND, T.ACCT_DOC_ID, T.ACCT_DOC_NUM, T.ACCT_DOC_ITEM, T.ACCT_DOC_SUB_ITEM,

T.CLEARING_DOC_NUM, T.CLEARING_DOC_ITEM, T.SALES_ORDER_NUM, T.SALES_ORDER_ITEM, T.SALES_SCH_LINE,

T.SALES_INVOICE_NUM, T.SALES_INVOICE_ITEM, T.PURCH_ORDER_NUM, T.PURCH_ORDER_ITEM, T.PURCH_INVOICE_NUM,

T.PURCH_INVOICE_ITEM, T.CUST_PUR_ORD_NUM, T.CUST_PUR_ORD_ITEM, T.SPLR_ORDER_NUM, T.SPLR_ORDER_ITEM,

T.REF_DOC_NUM, T.REF_DOC_ITEM, T.DOC_HEADER_TEXT, T.LINE_ITEM_TEXT, T.ALLOCATION_NUM, T.GL_BALANCE_ID,

T.BALANCE_ID, T.FED_BALANCE_ID, T.GL_RECONCILED_ON_DT, T.DOC_CURR_CODE, T.LOC_CURR_CODE,

T.LOC_EXCHANGE_RATE, T.GLOBAL1_EXCHANGE_RATE, T.GLOBAL2_EXCHANGE_RATE, T.GLOBAL3_EXCHANGE_RATE,

T.CREATED_BY_WID, T.CHANGED_BY_WID, T.CREATED_ON_DT, T.CHANGED_ON_DT, T.AUX1_CHANGED_ON_DT,

T.AUX2_CHANGED_ON_DT, T.AUX3_CHANGED_ON_DT, T.AUX4_CHANGED_ON_DT, T.DELETE_FLG, T.W_INSERT_DT,

T.W_UPDATE_DT, T.TENANT_ID, T.INTEGRATION_ID, T.DATASOURCE_NUM_ID, T.X_CUSTOM) VALUES (S.GL_ACCOUNT_WID,

S.BUDGET_ORG_WID, S.CUSTOMER_WID, S.CUSTOMER_FIN_PROFL_WID, S.SUPPLIER_WID, S.SPLR_ACCT_WID,

S.SALES_REP_WID, S.SERVICE_REP_WID, S.ACCT_REP_WID, S.PURCH_REP_WID, S.PRODUCT_WID, S.SALES_PROD_WID,

S.INVENTORY_PROD_WID, S.SUPPLIER_PROD_WID, S.COMPANY_LOC_WID, S.PLANT_LOC_WID, S.OPERATING_UNIT_ORG_WID,

S.PAYABLES_ORG_WID, S.LEDGER_WID, S.COMPANY_ORG_WID, S.BUSN_AREA_ORG_WID, S.CTRL_AREA_ORG_WID,

S.FIN_AREA_ORG_WID, S.SALES_ORG_WID, S.PURCHASE_ORG_WID, S.ISSUE_ORG_WID, S.DOC_TYPE_WID,

S.CLRNG_DOC_TYPE_WID, S.REF_DOC_TYPE_WID, S.POSTING_TYPE_WID, S.CLRNG_POST_TYPE_WID, S.COST_CENTER_WID,

S.PROFIT_CENTER_WID, S.DOC_STATUS_WID, S.BANK_WID, S.TAX_TYPE_WID, S.PAY_TERMS_WID, S.PAY_METHOD_WID,

S.PROJECT_WID, S.TASK_WID, S.FINANCIAL_RESOURCE_WID, S.EXPENDITURE_ORG_WID, S.SOURCE_WID,

S.TRANSACTION_DT_WID, S.TRANSACTION_TM_WID, S.POSTED_ON_DT_WID, S.POSTED_ON_TM_WID, S.CONVERSION_DT_WID,

S.ORDERED_ON_DT_WID, S.INVOICED_ON_DT_WID, S.PURCH_ORDER_DT_WID, S.SPLR_ORDER_DT_WID,

S.INVOICE_RECEIPT_DT_WID, S.CLEARED_ON_DT_WID, S.CLEARING_DOC_DT_WID, S.BASELINE_DT_WID,

S.PLANNING_DT_WID, S.PAYMENT_DUE_DT_WID, S.MCAL_CAL_WID, S.AP_DOC_AMT, S.AP_LOC_AMT,

S.AP_REMAINING_DOC_AMT, S.AP_REMAINING_LOC_AMT, S.XACT_QTY, S.UOM_CODE, S.DB_CR_IND, S.ACCT_DOC_ID,

S.ACCT_DOC_NUM, S.ACCT_DOC_ITEM, S.ACCT_DOC_SUB_ITEM, S.CLEARING_DOC_NUM, S.CLEARING_DOC_ITEM,

S.SALES_ORDER_NUM, S.SALES_ORDER_ITEM, S.SALES_SCH_LINE, S.SALES_INVOICE_NUM, S.SALES_INVOICE_ITEM,

S.PURCH_ORDER_NUM, S.PURCH_ORDER_ITEM, S.PURCH_INVOICE_NUM, S.PURCH_INVOICE_ITEM, S.CUST_PUR_ORD_NUM,

S.CUST_PUR_ORD_ITEM, S.SPLR_ORDER_NUM, S.SPLR_ORDER_ITEM, S.REF_DOC_NUM, S.REF_DOC_ITEM,

S.DOC_HEADER_TEXT, S.LINE_ITEM_TEXT, S.ALLOCATION_NUM, S.GL_BALANCE_ID, S.BALANCE_ID, S.FED_BALANCE_ID,

S.GL_RECONCILED_ON_DT, S.DOC_CURR_CODE, S.LOC_CURR_CODE, S.LOC_EXCHANGE_RATE, S.GLOBAL1_EXCHANGE_RATE,

S.GLOBAL2_EXCHANGE_RATE, S.GLOBAL3_EXCHANGE_RATE, S.CREATED_BY_WID, S.CHANGED_BY_WID, S.CREATED_ON_DT,

S.CHANGED_ON_DT, S.AUX1_CHANGED_ON_DT, S.AUX2_CHANGED_ON_DT, S.AUX3_CHANGED_ON_DT, S.AUX4_CHANGED_ON_DT,

S.DELETE_FLG, S.W_INSERT_DT, S.W_UPDATE_DT, S.TENANT_ID, S.INTEGRATION_ID, S.DATASOURCE_NUM_ID,

S.X_CUSTOM);

SIL_APTransactionFact MERGE Version: Test Results on Exadata V2.2 ¼ Rack

The internal benchmarks for SIL_APTransactionFact with MERGE SQL showed significant improvement on large update volumes. The target table W_AP_XACT_F had 533,805,574 rows, and the update volume in the test scenarios was 1,206,431 rows. The original version completed in 45 minutes with an average Writer throughput of 550 rows per second. The MERGE version completed in 17 to 25 minutes end to end; Oracle spawned 50 parallel processes for the MERGE SQL itself, which completed within 7-10 minutes.

The results from the traced MERGE session are below:

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 51 0.06 1.62 0 908 1 0

Execute 51 3624.75 14570.56 1841896 2060130 1510235 1206431

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 102 3624.82 14572.19 1841896 2061038 1510236 1206431

Rows (1st) Rows (avg) Rows (max) Row Source Operation

---------- ---------- ---------- ---------------------------------------------------


0 0 0 MERGE W_AP_XACT_F (cr=1306 pr=0 pw=0 time=5756109 us)
1206431 23656 1206431 PX COORDINATOR (cr=1290 pr=0 pw=0 time=4982213 us)
0 0 0 PX SEND QC (RANDOM) :TQ10002 (cr=0 pr=0 pw=0 time=0 us cost=308063 size=13245817572 card=1226124)
0 20639 22170 VIEW (cr=0 pr=1742 pw=1742 time=49057851 us)
0 20639 22170 HASH JOIN OUTER BUFFERED (cr=0 pr=1742 pw=1742 time=49039766 us cost=308063 size=13245817572 card=1226124)
0 20639 22170 BUFFER SORT (cr=0 pr=0 pw=0 time=3333175 us)
0 20639 22170 PX RECEIVE (cr=0 pr=0 pw=0 time=3293974 us cost=17352 size=11659590591 card=1132109)
0 0 0 PX SEND HASH :TQ10000 (cr=0 pr=0 pw=0 time=0 us cost=17352 size=11659590591 card=1132109)
1206431 23656 1206431 TABLE ACCESS STORAGE FULL W_AP_XACT_F_TMP (cr=1250 pr=0 pw=0 time=27361 us cost=17352 size=11659590591 card=1132109)
0 9893402 10518024 PX RECEIVE (cr=0 pr=0 pw=0 time=37570509 us cost=138045 size=291380050080 card=578135020)
0 0 0 PX SEND HASH :TQ10001 (cr=0 pr=0 pw=0 time=0 us cost=138045 size=291380050080 card=578135020)
0 454908 11869354 PX BLOCK ITERATOR (cr=39089 pr=34374 pw=0 time=970735 us cost=138045 size=291380050080 card=578135020)
0 454908 11869354 TABLE ACCESS STORAGE FULL W_AP_XACT_F (cr=39089 pr=34374 pw=0 time=894154 us cost=138045 size=291380050080 card=578135020)

The stats show an accumulated elapsed query time across the 50 parallel workers equal to 14,571 seconds, i.e. roughly 291 seconds of wall-clock time for the MERGE query.

Informatica Load Balancing Implementation

To improve performance on the ETL tier, consider implementing Informatica Load Balancing to distribute the Informatica load across multiple ETL tiers and speed up mapping execution. You can register one or more Informatica servers and the Informatica Repository Server in DAC, and you can specify the number of workflows that can be executed in parallel. The DAC server automatically load balances across the servers and does not run more sessions than the value specified for each of them.

To implement Informatica Load Balancing in DAC, perform the following steps.

1. Register the additional Informatica server(s) in DAC. Refer to the section "Registering Informatica Servers in the DAC Client" in the publication Oracle Business Intelligence Applications Installation Guide for Informatica PowerCenter Users, Version 7.9.6.

2. Configure the database connection information in Informatica Workflow Manager. Refer to the section "Process of Configuring the Informatica Repository in Workflow Manager" in the same publication.

Important! Deploying multiple Informatica domains and repository services on different server nodes causes additional maintenance overhead: any repository updates or configuration changes performed on one node must be replicated across all participating nodes in the multiple-domain configuration.

To minimize the overhead of Informatica repository maintenance, consider the load balancing implementation below:

Configure a single Informatica domain and deploy a single PowerCenter Repository service in it.

Create Informatica services on each Informatica node and subscribe them to the single domain.

OBIEE Queries Performance Recommendations

Introduction

Oracle BI Applications uses Oracle BI Enterprise Edition (OBIEE) for building reports and dashboards as well as for running ad-hoc queries in OBIEE Answers. End users can run stored reports from the OBIEE Catalog, or put together custom queries using the Presentation Layer components in Oracle BI Presentation Server. Each report or query corresponds to a single Logical SQL (LSQL), and each LSQL can spawn one or more Physical SQLs (PSQL) running in the target database (warehouse); a dashboard is simply a collection of such reports or LSQLs. If you set parallelism too high in the database, you could end up with spikes in the database workload, especially during peak hours with more business users running their reports. So careful sizing, configuration and monitoring of all BI Apps hardware tiers is essential to ensure the desired scalability for OBIEE reports.

OBIEE can be deployed on a single node, or in a cluster configuration across multiple nodes for better scalability and load balancing. Oracle has also released Exalytics to address high-volume and scalability requirements. Refer to the documentation on OBIEE cluster configuration and Exalytics for more details.

This chapter covers general techniques and examples for addressing end-user query performance. Refer to the OBIEE System Administrator Guide, "Managing Performance Tuning and Query Caching", for additional information.

OBIEE Configuration, Diagnostics and Performance Analysis

OBIEE Logging Using LOGLEVEL=7

OBIEE provides comprehensive logging of all user activities by writing detailed diagnostic information into its NQQuery.log. It records the most comprehensive diagnostics when the logging level is set to 7 in the OBIEE Repository (RPD). Refer to the OBIEE documentation on how to set LOGLEVEL in the RPD.

Most Oracle BI Apps implementations can operate with a permanent LOGLEVEL=7 without noticeable impact on OBIEE performance. Level 7 does introduce additional overhead, so for high-workload implementations you may consider setting it to 7 only for a specific period of time to capture the desired statistics, and then switching it back to a lower value. You need to change LOGLEVEL on each node in an OBIEE Cluster configuration.

OBIEE records detailed logging events in chronological order in its NQQuery.log file. The Oracle BI Applications Performance team developed an NQQuery.log parser, delivered in patch 11847038, and an Oracle Database schema with an APEX GUI, delivered in patch 13581927, for parsing and assembling all OBIEE LSQL transactions and for uploading and analyzing the parsed data. Both patches are refreshed monthly on the Oracle Support website. Refer to the patch readmes for prerequisites and implementation details.

You can install the log parser, periodically load your captured and parsed data into a local database, and then analyze performance bottlenecks and monitor trends, concurrency, and other useful information using the APEX GUI.

Note: The NQQuery.log default size is 10 MB. You can increase it up to 100 MB to capture more details. Consider merging the _old and .log files in the correct chronological order and then running the parser on the merged file. Refer to the patch readme on merging and processing OBIEE Cluster log files.

Important! OBIEE may write some transaction (LSQL) events into multiple NQQuery.log files (during a file switch), so you may get incomplete or orphan transactions in the APEX GUI. They are reported as a separate category in the UI.

OBIEE Init Blocks Overhead

Each OBIEE session executes all initialization blocks defined in the RPD before it starts running any reports. Init blocks can be one of the primary sources of significant overhead in OBIEE, so make sure you monitor their use, disable or delete any outdated blocks, and resolve failing ones. The NQQuery.log parser and APEX GUI do capture and publish init block statistics; however, the parser may take longer to digest them.

Some init blocks can even spawn logical and physical SQLs, which may result in longer execution times. So make sure you use the flexibility provided in OBIEE session initialization carefully.

OBIEE Cache Optimization

The OBIEE Cache can help improve report performance and offload physical data sources (the data warehouse). Refer to the OBIEE configuration parameters in NQSConfig.ini or Enterprise Manager to set their values. You may consider raising the default values of MAX_ROWS_PER_CACHE_ENTRY, MAX_CACHE_ENTRY_SIZE and MAX_CACHE_ENTRIES to improve cache utilization, as sketched below.
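For reference, a hedged sketch of the corresponding NQSConfig.ini cache section (the values shown are illustrative starting points, not recommendations; in OBIEE 11g these settings are normally managed through Enterprise Manager):

[ CACHE ]
ENABLE = YES;
MAX_ROWS_PER_CACHE_ENTRY = 100000;  # illustrative: result sets above this row count are not cached
MAX_CACHE_ENTRY_SIZE = 20 MB;       # illustrative: maximum size of a single cache entry
MAX_CACHE_ENTRIES = 5000;           # illustrative: maximum number of cache entries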

OBIEE automatically purges its cache depending on the defined limits. You can review the cache statistics in the APEX GUI after

you parse NQQuery.log.


OBIEE Database Features

OBIEE supports critical database features that can make query execution considerably more efficient. Make sure you read carefully about any features that are enabled by default before turning them off. Some of them, such as DERIVED_TABLES_SUPPORTED, can have a significant impact on report performance: turning it off results in dramatic overhead on the database, since OBIEE fires more physical SQLs per report, generates redundancy in the database, and fetches larger row counts.

OBIEE NQQuery.log Statistics

OBIEE records quite a few statistics for every logical SQL transaction, and it is important to use the correct parameters for measuring query performance. The table below shows two reports from a sample log file.

Hashid: c062a5c4c3204862fae85336732ad21c8c0be021
Lpath: /shared/Supply Chain and Order Management/Sales Revenue/Customer Report/Recent Customer Invoices
#Ran: 1   Max Lresp: 0:04:04 (244 s)   Max #PSQL: 1   Max Lrows: 65050   Max Sum Prows: 324406

Hashid: bc2858b1067803c17a91ca64dde3c57fc76ff858
Lpath: /shared/Supply Chain and Order Management/Sales Revenue/Customer Report/Customer Scorecard
#Ran: 1   Max Lresp: 0:00:16 (16 s)   Max #PSQL: 2   Max Lrows: 10   Max Sum Prows: 10

Logical Response (LResp, s) is the most appropriate measure of report (LSQL) runtime.

NqDur(s) / Dur(s) define the lifetime of open cursors for the corresponding OBIEE sessions. They should NOT be used for measuring report runtime.

LRows shows the number of records fetched, or prepared by the BI Server to fetch, onto a dashboard. Some reports may produce many more records, while end users scroll through a smaller set.

PRows is the sum of physical rows returned by all physical SQLs spawned by the LSQL (report). The BI Server may join PSQL result sets and produce a smaller volume, so logical rows may not match physical rows.

The example above shows stats for two sample reports. The first report has a potential issue with a lack of good filters: it has only one physical SQL, yet returns 324,406 rows. In contrast, the second report doesn't show any red flags and completes in 16 seconds.

Inadequate Filtering in OBIEE Reports

Some reports may have filter values that are too generic, or lack good filters altogether, and so generate very high row counts. PRows and LRows can be used to flag potential performance issues in such queries. End users may not be aware of the final row counts and keep scrolling for more data or, worse, fetch all records at once. After you identify such reports:

1. Request the report owners to further constrain the row counts by adding more efficient filters.

2. Remove such reports from default dashboards.

3. Define them as links on dashboards.

OBIEE Queries Optimization Using Materialized Views

Introduction

The Oracle BI Enterprise Edition (OBIEE) logical model for Oracle Business Intelligence Applications allows for building logical business queries that may result in rather complex physical SQLs (sometimes multiple physical SQLs per logical query). Pre-aggregation, using Oracle Materialized Views (MV) to build complex views and pre-compute summaries, in conjunction with Query Rewrite, can significantly improve end-user query performance.


Query Rewrite is critical for BI Analytics Warehouse logical queries handled by OBIEE. The database optimizer transparently rewrites a physical SQL generated by OBIEE to use a custom MV. You do not need to expose the MV in the RPD physical or logical layers, or make any changes to your logical SQL. Since query rewrite is transparent, MVs can be added or dropped in the physical warehouse schema without invalidating the original logical mappings in OBIEE.

Important! Depending on the set of logical SQLs that run against your target fact tables, and on the aggregation scenarios, you may not always be able to implement the aggregation logic in a single MV, and you may end up creating more and more MVs. You should carefully benchmark the overhead of refreshing all MVs. If the overhead becomes too expensive, consider building an aggregate table and updating your RPD logical model instead of using MVs.

Database Configuration Requirements for using MVs

1. You should set the following parameters in your Target Warehouse init.ora:

query_rewrite_enabled = true

query_rewrite_integrity = trusted

star_transformation_enabled = true

2. Issue the following database grants to your warehouse schema:

GRANT query rewrite TO <dwh_user>;

GRANT create materialized view TO <dwh_user>;
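To verify that the settings took effect in the target database, a quick check from SQL*Plus (standard commands; the values reported should match the init.ora entries above):

SQL> SHOW PARAMETER query_rewrite
SQL> SHOW PARAMETER star_transformation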

Custom Materialized View Guidelines

The following example provides step-by-step instructions on how to build an MV and ensure query rewrite.

1. Identify a slow physical SQL generated by OBIEE, and review the SQL logic:

SELECT SUM(CASE

WHEN T263758.W_STATUS_CODE = 'APPROVED' THEN

(T631953.LINE_AMT - T631953.CANCELLED_AMT) *

T631953.GLOBAL1_EXCHANGE_RATE

ELSE

0

END) AS c1,

T31328.PER_NAME_YEAR AS c2,

T31328.CAL_MONTH AS c3,

SUBSTR(T31328.MONTH_NAME, 1, 3) AS c5,

NVL(T257401.XV_LOB, 'Unknown') AS c6

FROM W_INVENTORY_PRODUCT_D T257401 /* Dim_W_INVENTORY_PRODUCT_D */,

W_DAY_D T31328 /* Dim_W_DAY_D_Common */,

W_STATUS_D T263758 /* Dim_W_STATUS_D_Purchase_Order_Status */,

W_STATUS_D T278452 /* Dim_W_STATUS_D_Purchase_Order_Cycle_Status */,

W_XACT_TYPE_D T473562 /* Dim_W_XACT_TYPE_D_Purchase_Order_Shipment_Type */,

W_XACT_TYPE_D T476739 /* Dim_W_XACT_TYPE_D_Purchase_Order_Consigned_Type */,

W_PURCH_SCHEDULE_LINE_F T631953 /* Fact_W_PURCH_SCHEDULE_LINE_F_POApproval_Date */

WHERE (T31328.ROW_WID = T631953.ORDERED_ON_DT_WID AND

T257401.ROW_WID = T631953.INVENTORY_PROD_WID AND

T263758.ROW_WID = T631953.APPROVAL_STATUS_WID AND

T278452.ROW_WID = T631953.CYCLE_STATUS_WID AND

T473562.ROW_WID = T631953.SHIPMENT_TYPE_WID AND

T31328.PER_NAME_YEAR = '2010' AND

T476739.ROW_WID = T631953.CONSIGNED_TYPE_WID AND

T631953.DELETE_FLG = 'N' AND

T278452.W_SUBSTATUS_CODE <> 'CANCELLED' AND

T473562.W_XACT_TYPE_CODE <> 'PREPAYMENT' AND

(T278452.ROW_WID IN (0) OR

T278452.W_STATUS_CLASS IN ('PURCH_CYCLE')) AND


T476739.W_XACT_TYPE_CODE <> 'CONSIGNED-CONSUMED')

GROUP BY T31328.CAL_MONTH,

T31328.PER_NAME_YEAR,

SUBSTR(T31328.MONTH_NAME, 1, 3),

NVL(T257401.XV_LOB, 'Unknown');

Elapsed: 00:02:06.26

The query execution plan is below:

Plan hash value: 909913791

---------------------------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |

---------------------------------------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 51 | 8670 | | 250K (4)| 01:15:01 | | |

| 1 | HASH GROUP BY | | 51 | 8670 | | 250K (4)| 01:15:01 | | |

|* 2 | HASH JOIN | | 670K| 108M| | 249K (4)| 01:15:00 | | |

| 3 | INDEX FULL SCAN | W_STATUS_D_U2 | 68 | 816 | | 1 (0)| 00:00:01 | | |

|* 4 | HASH JOIN | | 670K| 100M| | 249K (4)| 01:14:59 | | |

| 5 | VIEW | index$_join$_006 | 108 | 1080 | | 3 (34)| 00:00:01 | | |

|* 6 | HASH JOIN | | | | | | | | |

| 7 | BITMAP CONVERSION TO ROWIDS | | 108 | 1080 | | 1 (0)| 00:00:01 | | |

|* 8 | BITMAP INDEX FULL SCAN | IDX_XACT_TYPE_D | | | | | | | |

| 9 | INDEX FAST FULL SCAN | W_XACT_TYPE_D_P1 | 108 | 1080 | | 1 (0)| 00:00:01 | | |

|* 10 | HASH JOIN | | 673K| 95M| | 249K (4)| 01:14:59 | | |

| 11 | VIEW | index$_join$_005 | 108 | 1080 | | 3 (34)| 00:00:01 | | |

|* 12 | HASH JOIN | | | | | | | | |

| 13 | BITMAP CONVERSION TO ROWIDS | | 108 | 1080 | | 1 (0)| 00:00:01 | | |

|* 14 | BITMAP INDEX FULL SCAN | IDX_XACT_TYPE_D | | | | | | | |

| 15 | INDEX FAST FULL SCAN | W_XACT_TYPE_D_P1 | 108 | 1080 | | 1 (0)| 00:00:01 | | |

|* 16 | HASH JOIN | | 676K| 89M| 65M| 249K (4)| 01:14:59 | | |

|* 17 | HASH JOIN | | 676K| 58M| | 54434 (4)| 00:16:20 | | |

|* 18 | TABLE ACCESS FULL | W_STATUS_D | 5 | 115 | | 2 (0)| 00:00:01 | | |

|* 19 | HASH JOIN | | 1554K| 99M| | 54417 (4)| 00:16:20 | | |

| 20 | PART JOIN FILTER CREATE | :BF0000 | 372 | 6696 | | 8 (0)| 00:00:01 | | |

| 21 | TABLE ACCESS BY INDEX ROWID| W_DAY_D | 372 | 6696 | | 8 (0)| 00:00:01 | | |

|* 22 | INDEX RANGE SCAN | X_PER_NAME_YEAR | 372 | | | 1 (0)| 00:00:01 | | |

| 23 | PARTITION RANGE JOIN-FILTER | | 8811K| 411M| | 54328 (4)| 00:16:18 |:BF0000|:BF0000|

|* 24 | TABLE ACCESS FULL | W_PURCH_SCHEDULE_LINE_F | 8811K| 411M| | 54328 (4)| 00:16:18 |:BF0000|:BF0000|

| 25 | TABLE ACCESS FULL | W_INVENTORY_PRODUCT_D | 23M| 1064M| | 148K (5)| 00:44:28 | | |

--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

2 - access("T263758"."ROW_WID"="T631953"."APPROVAL_STATUS_WID")

4 - access("T476739"."ROW_WID"="T631953"."CONSIGNED_TYPE_WID")

6 - access(ROWID=ROWID)

8 - filter("T476739"."W_XACT_TYPE_CODE"<>'CONSIGNED-CONSUMED')

10 - access("T473562"."ROW_WID"="T631953"."SHIPMENT_TYPE_WID")

12 - access(ROWID=ROWID)

14 - filter("T473562"."W_XACT_TYPE_CODE"<>'PREPAYMENT')

16 - access("T257401"."ROW_WID"="T631953"."INVENTORY_PROD_WID")

17 - access("T278452"."ROW_WID"="T631953"."CYCLE_STATUS_WID")

18 - filter(("T278452"."W_STATUS_CLASS"='PURCH_CYCLE' OR "T278452"."ROW_WID"=0) AND

"T278452"."W_SUBSTATUS_CODE"<>'CANCELLED')

19 - access("T31328"."ROW_WID"="T631953"."ORDERED_ON_DT_WID")

22 - access("T31328"."PER_NAME_YEAR"='2010')

2. Create a Materialized View.

This query can be rewritten to move the aggregation logic into a Materialized View:

Note: Consider using the same aliases for physical tables in your MV as in the original physical SQL.

CREATE MATERIALIZED VIEW CUST_W_PURCH_SCHED_LINE_F_MV1

BUILD IMMEDIATE

REFRESH COMPLETE

ENABLE QUERY REWRITE

AS SELECT t31328.per_name_year,

t31328.CAL_MONTH,

t31328.MONTH_NAME,

t631953.inventory_prod_wid,

t631953.approval_status_wid,

t631953.cycle_status_wid,

t631953.shipment_type_wid,

t631953.consigned_type_wid,

t631953.delete_flg,

sum(t631953.line_amt) line_amt,


sum(t631953.cancelled_amt) cancelled_amt,

sum((t631953.line_amt - t631953.cancelled_amt )*

t631953.global1_exchange_rate) amt,

SUM ( CASE

WHEN t263758.w_status_code = 'APPROVED'

THEN

(t631953.line_amt - t631953.cancelled_amt)

* t631953.global1_exchange_rate

ELSE 0

END

) AS amt0

FROM w_purch_schedule_line_f t631953, w_day_d t31328, w_status_d t263758

WHERE t631953.ordered_on_dt_wid = t31328.row_wid

AND t263758.row_wid = t631953.approval_status_wid

GROUP BY t31328.per_name_year,

t31328.CAL_MONTH,

t31328.MONTH_NAME,

t631953.inventory_prod_wid,

t631953.approval_status_wid,

t631953.cycle_status_wid,

t631953.shipment_type_wid,

t631953.consigned_type_wid,

t631953.delete_flg;

/

Elapsed: 00:01:17.08

The MV is populated as soon as you execute the CREATE MATERIALIZED VIEW DDL. Subsequent refreshes are handled via DBMS_MVIEW.REFRESH, as shown below.
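For example (method 'C' requests a complete refresh, matching the REFRESH COMPLETE clause above; 'F' would request a fast refresh and requires materialized view logs):

BEGIN
  DBMS_MVIEW.REFRESH(list => 'CUST_W_PURCH_SCHED_LINE_F_MV1', method => 'C');
END;
/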

Note: Starting with Oracle 10g, query rewrite is possible even when your SELECT statements contain analytic functions, full outer joins, and set operations such as UNION, MINUS or INTERSECT.

Important! Depending on the logic complexity and the data volumes collected in an MV, you can consider adding indexes on MV columns to improve MV query performance as well (see the sketch below).
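For example, a minimal sketch (the index name is hypothetical; choose columns that match the filters of the rewritten queries, such as PER_NAME_YEAR in the plan shown in step 4 below):

CREATE BITMAP INDEX CUST_W_PURCH_SCHED_MV1_B1
  ON CUST_W_PURCH_SCHED_LINE_F_MV1 (PER_NAME_YEAR);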

3. Compute statistics on each created MV:

BEGIN

DBMS_STATS.GATHER_TABLE_STATS(USER,

'CUST_W_PURCH_SCHED_LINE_F_MV1',

method_opt => 'FOR ALL COLUMNS');

END;

/

4. Verify the use of MV and query rewrite in the original physical SQL by re-running the query and checking its plan:

--------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |

--------------------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 51 | 7089 | 97151 (1)| 00:29:09 |

| 1 | HASH GROUP BY | | 51 | 7089 | 97151 (1)| 00:29:09 |

| 2 | NESTED LOOPS | | | | | |

| 3 | NESTED LOOPS | | 48291 | 6555K| 97147 (1)| 00:29:09 |

|* 4 | HASH JOIN | | 48291 | 4291K| 412 (6)| 00:00:08 |

| 5 | VIEW | index$_join$_006 | 108 | 1080 | 3 (34)| 00:00:01 |

|* 6 | HASH JOIN | | | | | |

| 7 | BITMAP CONVERSION TO ROWIDS | | 108 | 1080 | 1 (0)| 00:00:01 |

|* 8 | BITMAP INDEX FULL SCAN | IDX_XACT_TYPE_D | | | | |

| 9 | INDEX FAST FULL SCAN | W_XACT_TYPE_D_P1 | 108 | 1080 | 1 (0)| 00:00:01 |

|* 10 | HASH JOIN | | 48522 | 3838K| 408 (5)| 00:00:08 |

| 11 | VIEW | index$_join$_005 | 108 | 1080 | 3 (34)| 00:00:01 |

|* 12 | HASH JOIN | | | | | |

| 13 | BITMAP CONVERSION TO ROWIDS| | 108 | 1080 | 1 (0)| 00:00:01 |


|* 14 | BITMAP INDEX FULL SCAN | IDX_XACT_TYPE_D | | | | |

| 15 | INDEX FAST FULL SCAN | W_XACT_TYPE_D_P1 | 108 | 1080 | 1 (0)| 00:00:01 |

|* 16 | HASH JOIN | | 48755 | 3380K| 405 (5)| 00:00:08 |

|* 17 | TABLE ACCESS FULL | W_STATUS_D | 5 | 115 | 2 (0)| 00:00:01 |

|* 18 | MAT_VIEW REWRITE ACCESS FULL| CUST_W_PURCH_SCHED_LINE_F_MV1 | 112K| 5250K| 401 (5)| 00:00:08 |

|* 19 | INDEX UNIQUE SCAN | W_INV_PROD_D_P1 | 1 | | 1 (0)| 00:00:01 |

| 20 | TABLE ACCESS BY INDEX ROWID | W_INVENTORY_PRODUCT_D | 1 | 48 | 2 (0)| 00:00:01 |

--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

--------------------------------------------------

4 - access("T476739"."ROW_WID"="CUST_W_PURCH_SCHED_LINE_F_MV1"."CONSIGNED_TYPE_WID")

6 - access(ROWID=ROWID)

8 - filter("T476739"."W_XACT_TYPE_CODE"<>'CONSIGNED-CONSUMED')

10 - access("T473562"."ROW_WID"="CUST_W_PURCH_SCHED_LINE_F_MV1"."SHIPMENT_TYPE_WID")

12 - access(ROWID=ROWID)

14 - filter("T473562"."W_XACT_TYPE_CODE"<>'PREPAYMENT')

16 - access("T278452"."ROW_WID"="CUST_W_PURCH_SCHED_LINE_F_MV1"."CYCLE_STATUS_WID")

17 - filter(("T278452"."W_STATUS_CLASS"='PURCH_CYCLE' OR "T278452"."ROW_WID"=0) AND

"T278452"."W_SUBSTATUS_CODE"<>'CANCELLED')

18 - filter("CUST_W_PURCH_SCHED_LINE_F_MV1"."PER_NAME_YEAR"='2010' AND

"CUST_W_PURCH_SCHED_LINE_F_MV1"."DELETE_FLG"='N')

19 - access("T257401"."ROW_WID"="CUST_W_PURCH_SCHED_LINE_F_MV1"."INVENTORY_PROD_WID")

Line 18 of the plan (MAT_VIEW REWRITE ACCESS FULL) confirms that the optimizer chose the newly created MV in the latest execution plan for the original SQL.

5. Troubleshoot Query Rewrite

You can use the DBMS_MVIEW.EXPLAIN_REWRITE procedure to find out why your query failed to rewrite.

1. Create the REWRITE_TABLE table by running the following SQL:

SQL> @<ORACLE_HOME>\rdbms\admin\utlxrw.sql

REWRITE_TABLE columns for your reference:

STATEMENT_ID    ID for the query
MV_OWNER        MV's schema
MV_NAME         Name of the MV
SEQUENCE        Seq # of the error message
QUERY           User query
QUERY_BLOCK_NO  Block # of the current subquery
REWRITTEN_TXT   Rewritten query message
MESSAGE         EXPLAIN_REWRITE error message
PASS            Query Rewrite pass #
MV_IN_MSG       MV in current message
MEASURE_IN_MSG  Measure in current message
JOIN_BACK_TBL   Join back table in current message
JOIN_BACK_COL   Join back column in current message
ORIGINAL_COST   Cost of the original query
REWRITTEN_COST  Cost of the rewritten query; zero if the query was not rewritten or a different materialized view was used
FLAGS           Associated flags

2. Execute DBMS_MVIEW.EXPLAIN_REWRITE. The EXPLAIN_REWRITE procedure provides the details of a query rewrite failure or, if the query does rewrite, which materialized view(s) will be used:

BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(QUERY => 'Your query statement',
                             MV => 'Your MV name',
                             STATEMENT_ID => 'Your statement label');
END;
/

You can use the following query to show the EXPLAIN_REWRITE log:

SELECT sequence, message, original_cost, rewritten_cost
  FROM REWRITE_TABLE
 WHERE mv_name = 'Your MV name'
   AND statement_id = 'Your statement label';

In our example, you can run the following SQL to check whether the optimizer picks the CUST_W_PURCH_SCHED_LINE_F_MV1 Materialized View:

SQL> DECLARE

2 QUERY VARCHAR2(4000);

3 MV_NAME VARCHAR2(30) := 'CUST_W_PURCH_SCHED_LINE_F_MV1';

4 STATEMENT_ID VARCHAR2(30) := 'Test#1 '||User;

5 BEGIN

6 QUERY := 'SELECT SUM(CASE

7 WHEN T263758.W_STATUS_CODE = ''APPROVED'' THEN

8 (T631953.LINE_AMT - T631953.CANCELLED_AMT) *

9 T631953.GLOBAL1_EXCHANGE_RATE

10 ELSE

11 0

12 END) AS c1,

13 T31328.PER_NAME_YEAR AS c2,

14 T31328.CAL_MONTH AS c3,

15 SUBSTR(T31328.MONTH_NAME, 1, 3) AS c5,

16 NVL(T257401.XV_LOB, ''Unknown'') AS c6

17 FROM W_INVENTORY_PRODUCT_D T257401,

18 W_DAY_D T31328,

19 W_STATUS_D T263758,

20 W_STATUS_D T278452,

21 W_XACT_TYPE_D T473562,

22 W_XACT_TYPE_D T476739,

23 W_PURCH_SCHEDULE_LINE_F T631953

24 WHERE (T31328.ROW_WID = T631953.ORDERED_ON_DT_WID AND

25 T257401.ROW_WID = T631953.INVENTORY_PROD_WID AND

26 T263758.ROW_WID = T631953.APPROVAL_STATUS_WID AND

27 T278452.ROW_WID = T631953.CYCLE_STATUS_WID AND

28 T473562.ROW_WID = T631953.SHIPMENT_TYPE_WID AND

29 T31328.PER_NAME_YEAR = ''2010'' AND

30 T476739.ROW_WID = T631953.CONSIGNED_TYPE_WID AND

31 T631953.DELETE_FLG = ''N'' AND

32 T278452.W_SUBSTATUS_CODE <> ''CANCELLED'' AND

33 T473562.W_XACT_TYPE_CODE <> ''PREPAYMENT'' AND

34 (T278452.ROW_WID IN (0) OR

35 T278452.W_STATUS_CLASS IN (''PURCH_CYCLE'')) AND

36 T476739.W_XACT_TYPE_CODE <> ''CONSIGNED-CONSUMED'')

37 GROUP BY T31328.CAL_MONTH,

38 T31328.PER_NAME_YEAR,

39 SUBSTR(T31328.MONTH_NAME, 1, 3),

40 NVL(T257401.XV_LOB, ''Unknown'')';


41

42 DBMS_MVIEW.EXPLAIN_REWRITE(QUERY => QUERY, MV => MV_NAME, STATEMENT_ID => STATEMENT_ID);

43 END;

44 /

PL/SQL procedure successfully completed

SQL>

SQL> SELECT sequence, message, original_cost, rewritten_cost

2 FROM REWRITE_TABLE

3 WHERE mv_name = 'CUST_W_PURCH_SCHED_LINE_F_MV1'

4 AND statement_id = 'Test#1 ' || User

5 /

SEQUENCE MESSAGE ORIGINAL_COST REWRITTEN_COST

--------- -------------------------------------------------------------------------------- ------------- --------------

1 QSM-01151: query was rewritten 13 9

2 QSM-01033: query rewritten with materialized view, CUST_W_PURCH_SCHED_LINE_F_MV1 13 9

The log states that the query was successfully rewritten with Materialized View CUST_W_PURCH_SCHED_LINE_F_MV1.

Starting with Oracle 10g, you can use the /*+ REWRITE_OR_ERROR */ hint, which stops the execution of a SQL statement if query rewrite cannot be done:

SQL> select /*+ REWRITE_OR_ERROR */ * from dual;

select /*+ REWRITE_OR_ERROR */ * from dual

ORA-30393: a query block in the statement did not rewrite

The most common cause of unsuccessful query rewrite is a mismatch of the columns and/or aggregate functions used in the MVs. Refer to the Oracle Database documentation for additional Query Rewrite restrictions.

Integrate MV Refresh in DAC Execution Plan

The best option for keeping custom MVs up to date is to merge their refresh into your DAC ETL Execution Plan. Ensure proper dependencies in your execution plan when you add your custom MV refresh task. Careful analysis of the execution sequence will help you identify the best place in the execution tree to run your custom MV refresh calls in parallel with other tasks without extending the total plan runtime.

The following PL/SQL call performs a COMPLETE refresh of the MV CUST_W_PURCH_SCHED_LINE_F_MV1:

BEGIN

DBMS_MVIEW.REFRESH('CUST_W_PURCH_SCHED_LINE_F_MV1', 'C');

END;

Important! You should add a call to DBMS_STATS to compute statistics FOR ALL COLUMNS SIZE AUTO on each MV as part of the DAC Execution Plan customization. If you created any indexes on an MV, they are not dropped and re-created during the MV refresh, so you need to use CASCADE => TRUE to update the index statistics as well; a combined sketch follows.
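A sketch of the combined refresh-plus-statistics call (it mirrors the Task Action SQL shown in the next section):

BEGIN
  DBMS_MVIEW.REFRESH('CUST_W_PURCH_SCHED_LINE_F_MV1', 'C');
  DBMS_STATS.GATHER_TABLE_STATS(ownname          => USER,
                                tabname          => 'CUST_W_PURCH_SCHED_LINE_F_MV1',
                                cascade          => TRUE,
                                estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                                method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
END;
/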

The following sections describe step-by-step instructions for integrating MV refresh into DAC Execution Plan.

Create Materialized View Refresh Task Action

Open DAC Client and navigate to Tools -> Seed Data -> Actions -> Task Actions.

Click the 'New' button to create a new task action "Refresh Materialized View" and click 'Save' to save the record.

Click the 'Check Box' icon in the Value field to open the Value screen.

Click the Add button and enter the following values in the upper right pane:


o Name: Refresh MV

o Type: SQL

o Database Connection: target

o Table Type: All Target

o Valid Database Platforms: Oracle

Enter the following text in the 'SQL Statement' tab in the lower right pane:

BEGIN

DBMS_MVIEW.REFRESH('getTableName()', 'C');

DBMS_STATS.GATHER_TABLE_STATS(ownname => 'getTableOwner()', tabname=> 'getTableName()',

cascade => TRUE, estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL

COLUMNS SIZE AUTO', degree => DBMS_STATS.DEFAULT_DEGREE);

END;

Note: If no indexes are defined on an MV, then you do not need the DBMS_STATS call in the SQL statement; DAC will compute the MV's statistics automatically, but with CASCADE => FALSE.

Click OK to save the changes.

Register Materialized Views

Click the Design button -> Tables tab in the right pane.

Click New and define your custom MV as a table in DAC.

Save the changes.

Define Related Tables

Search for the Fact or Aggregate table you used in your MV query definition (W_PURCH_SCHEDULE_LINE_F in our example) in the Tables view.

Click the Related Tables tab in the lower right pane and add your MV as a related table to the original Fact.

Rebuild Execution Plan

Reassemble your Subject Areas and rebuild your Execution Plan to pick up the new dependencies. Refer to the BI Apps Administration Guide, chapter "Customizing DAC Objects and Designing Subject Areas", for more details.

OBIEE Queries Optimization Using Database Views

Materialized Views and Query Rewrite may not always work for all logical reports. A simple change in report logic, such as adding another filtering condition, could prevent Oracle from doing query rewrite and force aggregation against the much larger fact tables. You could create more MVs to cover all aggregation permutations; however, the more MVs you implement in BI Applications, the longer it takes to refresh them as part of incremental ETLs. Besides, OBIEE may generate aggregation logic so complex that it would be impossible to implement in Oracle Materialized Views.

Consider implementing custom database views with more sophisticated logic and exposing them in the RPD for querying directly through reports and dashboards.

The example below shows the use of MODEL SQL syntax in a database view, which implements logic equivalent to the YEAR AGO (YAGO) OBIEE function originally used in a logical report, aggregating the W_AP_XACT_F fact over prior / current periods:

CREATE OR REPLACE FORCE VIEW V_MODEL_SQL_CUSTOM AS

WITH

sawith0 AS

(SELECT

/*+ CACHE(W_MCAL_DAY_D) CACHE(W_MCAL_DAY_D) CACHE(W_MCAL_DAY_D) CACHE(W_MCAL_DAY_D) */

COUNT(DISTINCT


CASE

WHEN T38596.W_XACT_SUBTYPE_CODE IN ('DR MEMO', 'INVOICE')

THEN concat(concat(T33861.PURCH_INVOICE_NUM, '~'), CAST(T33861.PURCH_INVOICE_ITEM AS CHARACTER ( 30 ) ))

END ) AS v_no

, SUM(

CASE

WHEN T38596.W_XACT_SUBTYPE_CODE IN ('DR MEMO', 'INVOICE')

THEN T33861.AP_DOC_AMT * T33861.GLOBAL1_EXCHANGE_RATE * -1

ELSE 0

END ) AS v_amt

, T79787.SUB_DEPARTMENT

, T79787.DEPARTMENT

, sawith3.MCAL_PERIOD_NAME

, sawith3.MCAL_YEAR

, sawith3.mcal_period

FROM

W_AP_EMPLOYEE_D T79787 /* Dim_W_AP_EMPLOYEE_D */

, W_AP_XACT_F T33861 /* Fact_W_AP_XACT_F */

, W_XACT_TYPE_D T38596 /* Dim_W_XACT_TYPE_D_Financials */

, W_MCAL_DAY_D T68387 /* Dim_W_MCAL_DAY_D_Fiscal_Day */

, W_MCAL_DAY_D T77729 /* Dim_W_MCAL_DAY_D_Invoice_Cleared_Date_Fiscal_Calendar */

, W_MCAL_DAY_D T79826 /* Dim_W_MCAL_DAY_D_Invoice_Receipt_Date_Fiscal_Calendar */

, W_MCAL_DAY_D T81915 /* Dim_W_MCAL_DAY_D_Supplier_Payment_Due_Date */

, W_MCAL_DAY_D SAWITH3

WHERE ( T33861.X_LOL_EMPLOYEE_WID = T79787.ROW_WID

AND T33861.DELETE_FLG = 'N'

AND T68387.ADJUSTMENT_PERIOD_FLG = 'N'

AND T33861.POSTED_ON_DT_WID = T68387.MCAL_DAY_DT_WID

AND T33861.MCAL_CAL_WID = T68387.MCAL_CAL_WID

AND T77729.ADJUSTMENT_PERIOD_FLG = 'N'

AND T33861.CLEARED_ON_DT_WID = T77729.MCAL_DAY_DT_WID

AND T33861.MCAL_CAL_WID = T77729.MCAL_CAL_WID

AND T79826.ADJUSTMENT_PERIOD_FLG = 'N'

AND T33861.INVOICE_RECEIPT_DT_WID = T79826.MCAL_DAY_DT_WID

AND T33861.MCAL_CAL_WID = T79826.MCAL_CAL_WID

AND T33861.DOC_TYPE_WID = T38596.ROW_WID

AND T81915.ADJUSTMENT_PERIOD_FLG = 'N'

AND T33861.MCAL_CAL_WID = T81915.MCAL_CAL_WID

AND T33861.X_CALC_DUE_DT_WID = T81915.MCAL_DAY_DT_WID

AND SAWITH3.ADJUSTMENT_PERIOD_FLG = 'N'

AND T33861.POSTED_ON_DT_WID = SAWITH3.MCAL_DAY_DT_WID

AND T33861.MCAL_CAL_WID = SAWITH3.MCAL_CAL_WID

AND sawith3.MCAL_CAL_ID = 'R'

)

GROUP BY

T79787.DEPARTMENT

, T79787.SUB_DEPARTMENT

, sawith3.MCAL_PERIOD_NAME

, sawith3.MCAL_YEAR

, sawith3.mcal_period

)

, sawith_model AS

(SELECT

department

, SUB_DEPARTMENT

, mcal_period_name

, MCAL_YEAR

, mcal_period

, purch_no

, purch_no_prior

, purch_amt

, purch_amt_prior

FROM

sawith0 model RETURN updated rows

partition BY (department, SUB_DEPARTMENT)

dimension BY (MCAL_YEAR, mcal_period)

measures (v_no purch_no, v_amt purch_amt,0 purch_no_prior, 0 purch_amt_prior, mcal_period_name

mcal_period_name)

ignore nav

rules upsert ALL (

purch_no[ANY, 1] = purch_no[CV(MCAL_YEAR), 1] ,

purch_no[ANY, 2] = purch_no[CV(MCAL_YEAR), 2] ,

purch_no[ANY, 3] = purch_no[CV(MCAL_YEAR), 3] ,

purch_no[ANY, 4] = purch_no[CV(MCAL_YEAR), 4] ,

purch_no[ANY, 5] = purch_no[CV(MCAL_YEAR), 5] ,


purch_no[ANY, 6] = purch_no[CV(MCAL_YEAR), 6] ,

purch_no[ANY, 7] = purch_no[CV(MCAL_YEAR), 7] ,

purch_no[ANY, 8] = purch_no[CV(MCAL_YEAR), 8] ,

purch_no[ANY, 9] = purch_no[CV(MCAL_YEAR), 9] ,

purch_no[ANY, 10] = purch_no[CV(MCAL_YEAR), 10] ,

purch_no[ANY, 11] = purch_no[CV(MCAL_YEAR), 11] ,

purch_no[ANY, 12] = purch_no[CV(MCAL_YEAR), 12] ,

purch_amt[ANY, 1] = purch_amt[CV(MCAL_YEAR), 1] ,

purch_amt[ANY, 2] = purch_amt[CV(MCAL_YEAR), 2] ,

purch_amt[ANY, 3] = purch_amt[CV(MCAL_YEAR), 3] ,

purch_amt[ANY, 4] = purch_amt[CV(MCAL_YEAR), 4] ,

purch_amt[ANY, 5] = purch_amt[CV(MCAL_YEAR), 5] ,

purch_amt[ANY, 6] = purch_amt[CV(MCAL_YEAR), 6] ,

purch_amt[ANY, 7] = purch_amt[CV(MCAL_YEAR), 7] ,

purch_amt[ANY, 8] = purch_amt[CV(MCAL_YEAR), 8] ,

purch_amt[ANY, 9] = purch_amt[CV(MCAL_YEAR), 9] ,

purch_amt[ANY, 10] = purch_amt[CV(MCAL_YEAR), 10] ,

purch_amt[ANY, 11] = purch_amt[CV(MCAL_YEAR), 11] ,

purch_amt[ANY, 12] = purch_amt[CV(MCAL_YEAR), 12] ,

purch_no_prior[ANY,ANY] = purch_no[CV(MCAL_YEAR)-1, cv(mcal_period)] ,

purch_amt_prior[ANY,ANY] = purch_amt[CV(MCAL_YEAR)-1, cv(mcal_period)] ,

mcal_period_name[ANY,1] = cv(MCAL_YEAR) ||'R '||'1' ,

mcal_period_name[ANY,2] = cv(MCAL_YEAR) ||'R '||'2' ,

mcal_period_name[ANY,3] = cv(MCAL_YEAR) ||'R '||'3' ,

mcal_period_name[ANY,4] = cv(MCAL_YEAR) ||'R '||'4' ,

mcal_period_name[ANY,5] = cv(MCAL_YEAR) ||'R '||'5' ,

mcal_period_name[ANY,6] = cv(MCAL_YEAR) ||'R '||'6' ,

mcal_period_name[ANY,7] = cv(MCAL_YEAR) ||'R '||'7' ,

mcal_period_name[ANY,8] = cv(MCAL_YEAR) ||'R '||'8' ,

mcal_period_name[ANY,9] = cv(MCAL_YEAR) ||'R '||'9' ,

mcal_period_name[ANY,10] = cv(MCAL_YEAR)||'R '||'10' ,

mcal_period_name[ANY,11] = cv(MCAL_YEAR)||'R '||'11' ,

mcal_period_name[ANY,12] = cv(MCAL_YEAR)||'R '||'12'

)

)

SELECT

"DEPARTMENT"

,"SUB_DEPARTMENT"

,"MCAL_PERIOD_NAME"

,"MCAL_YEAR"

,"MCAL_PERIOD"

,"PURCH_NO"

,"PURCH_NO_PRIOR"

,"PURCH_AMT"

,"PURCH_AMT_PRIOR"

FROM

sawith_model

WHERE

1=1

;

The view has been exposed in the OBIEE RPD business model, so it can be queried directly in OBIEE reports; a direct query sketch follows. Refer to Oracle documentation for MODEL SQL syntax and examples.
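Once exposed, the view can also be sanity-checked directly in SQL*Plus; a minimal sketch (the filter value is illustrative, and MCAL_YEAR may be character-typed in some warehouses):

SELECT department, sub_department, mcal_period_name, purch_amt, purch_amt_prior
  FROM V_MODEL_SQL_CUSTOM
 WHERE mcal_year = 2012;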

OBIEE Reports with SYSDATE

Some reports rely on SYSDATE, pulled into the physical SQL by OBIEE, for computing various to-date aggregations. For example, the following logical SQL:

SELECT Time."Enterprise Period Number" saw_0,

Time."Enterprise Period" saw_1,

Time."Enterprise Year" saw_2,

"Fact - Sales Cycle Lines"."Order To Ship Days Lag" saw_3,

"Fact - Sales Cycle Lines"."Ship to Invoice Days Lag" saw_4

FROM "Sales - Order Process"

ORDER BY saw_2, saw_0

results in the physical SQL:

SELECT T66755.ENT_PERIOD AS c1,

T66755.PER_NAME_ENT_PERIOD AS c2,

T66755.PER_NAME_ENT_YEAR AS c3,

AVG(

CASE


WHEN T93511.SHIPPABLE_FLG = 'Y'

AND T96574.W_XACT_TYPE_CODE = 'Regular'

THEN

CASE

WHEN T93511.SHIPPING_INTERFACED_FLG = 'Y'

AND T104798.W_STATUS_CODE = 'ORDER FULLY SHIPPED'

THEN ( TRUNC( T93511.ACT_LAST_SHIPPED_ON_DT ) - TRUNC( T93511.ORDERED_ON_DT ) )

ELSE ( TRUNC( TO_DATE('2011-12-19' , 'YYYY-MM-DD') ) - TRUNC( T93511.ORDERED_ON_DT ) )

END

ELSE NULL

END ) AS c4,

AVG(

CASE

WHEN T93511.BILLING_FLG = 'Y'

AND T93511.SHIPPABLE_FLG = 'Y'

AND T96574.W_XACT_TYPE_CODE = 'Regular'

THEN

CASE

WHEN T93511.INVOICE_INTERFACED_FLG = 'Y'

AND T104764.W_STATUS_CODE = 'ORDER FULLY INVOICED'

THEN ( TRUNC( T93511.LAST_INVOICE_ON_DT ) - TRUNC( T93511.ACT_FIRST_SHIPPED_ON_DT ) )

ELSE ( TRUNC( TO_DATE('2011-12-19' , 'YYYY-MM-DD') ) - TRUNC( T93511.ACT_FIRST_SHIPPED_ON_DT ) )

END

ELSE NULL

END ) AS c5

FROM W_DAY_D T66755 /* Dim_W_DAY_D_Common */ ,

W_SALES_CYCLE_LINE_F T93511 /* Fact_W_SALES_CYCLE_LINE_F */ ,

W_XACT_TYPE_D T96574 /* Dim_W_XACT_TYPE_D_Sales_Ordlns */ ,

W_STATUS_D T104764 /* Dim_W_STATUS_D_SalesCycle_Invoice */ ,

W_STATUS_D T104798 /* Dim_W_STATUS_D_SalesCycle_Fulfill */

WHERE ( T66755.ROW_WID = T93511.ORDERED_ON_DT_WID

AND T93511.DELETE_FLG = 'N'

AND T93511.XACT_TYPE_WID = T96574.ROW_WID

AND T93511.FULFILL_STATUS_WID = T104798.ROW_WID

AND T93511.INVOICE_STATUS_WID = T104764.ROW_WID )

GROUP BY T66755.ENT_PERIOD,

T66755.PER_NAME_ENT_PERIOD,

T66755.PER_NAME_ENT_YEAR

ORDER BY c3, c1

where TO_DATE('2011-12-19', 'YYYY-MM-DD') is the value obtained from SYSDATE, i.e. the report execution date. Custom aggregation in MVs or regular tables cannot include SYSDATE-dependent columns, since the results would vary depending on the report execution date.

The following sections cover the known cases of SYSDATE-dependent aggregation and workarounds using aggregate tables.

AVG with SYSDATE in OBIEE Reports

The example below uses SYSDATE when computing an average value with the Oracle AVG function:

SELECT T66755.ENT_PERIOD AS c1,

T66755.PER_NAME_QTR AS c2,

T66755.PER_NAME_YEAR AS c3,

AVG(TRUNC( T93511.ACT_LAST_SHIPPED_ON_DT ) - TRUNC( T93511.ORDERED_ON_DT ) ) AS c4,

AVG(TRUNC( TO_DATE('2012-03-12' , 'YYYY-MM-DD')) - TRUNC( T93511.ACT_FIRST_SHIPPED_ON_DT ) ) AS c5

FROM W_DAY_D T66755 /* Dim_W_DAY_D_Common */ ,

W_SALES_CYCLE_LINE_F T93511 /* Fact_W_SALES_CYCLE_LINE_F */ ,

W_XACT_TYPE_D T96574 /* Dim_W_XACT_TYPE_D_Sales_Ordlns */ ,

W_STATUS_D T104764 /* Dim_W_STATUS_D_SalesCycle_Invoice */ ,

W_STATUS_D T104798 /* Dim_W_STATUS_D_SalesCycle_Fulfill */

WHERE ( T66755.ROW_WID = T93511.ORDERED_ON_DT_WID

AND T93511.DELETE_FLG = 'N'

AND T66755.PER_NAME_QTR = '2011 Q 1'

AND T66755.PER_NAME_YEAR = '2011'

AND T93511.XACT_TYPE_WID = T96574.ROW_WID


AND T93511.FULFILL_STATUS_WID = T104798.ROW_WID

AND T93511.INVOICE_STATUS_WID = T104764.ROW_WID)

GROUP BY T66755.ENT_PERIOD, T66755.PER_NAME_QTR, T66755.PER_NAME_YEAR;

To work around this case, use a custom aggregate table to improve performance:

Create a custom aggregate table:

CREATE TABLE W_SALES_CYCLE_LINE_F_CUST_A

AS

SELECT

SUM(TRUNC( T93511.ACT_LAST_SHIPPED_ON_DT ) - TRUNC( T93511.ORDERED_ON_DT ) ) AS c1,

COUNT(TRUNC( T93511.ACT_LAST_SHIPPED_ON_DT ) - TRUNC( T93511.ORDERED_ON_DT ) ) AS c2,

SUM(TO_DATE('1970-01-01' , 'YYYY-MM-DD') - TRUNC( T93511.ACT_FIRST_SHIPPED_ON_DT ) ) AS c3,

COUNT(TO_DATE('1970-01-01' , 'YYYY-MM-DD') - TRUNC( T93511.ACT_FIRST_SHIPPED_ON_DT ) ) AS c4

,T93511.ORDERED_ON_DT_WID

,T93511.SHIPPABLE_FLG

,T93511.SHIPPING_INTERFACED_FLG

,T93511.XACT_TYPE_WID

,T93511.FULFILL_STATUS_WID

,T93511.INVOICE_STATUS_WID

,T93511.BILLING_FLG

,T93511.INVOICE_INTERFACED_FLG

FROM W_SALES_CYCLE_LINE_F T93511

WHERE T93511.DELETE_FLG = 'N'

GROUP BY T93511.ORDERED_ON_DT_WID

,T93511.SHIPPABLE_FLG

,T93511.SHIPPING_INTERFACED_FLG

,T93511.XACT_TYPE_WID

,T93511.FULFILL_STATUS_WID

,T93511.INVOICE_STATUS_WID

,T93511.BILLING_FLG

,T93511.INVOICE_INTERFACED_FLG;

Then rewrite the SQL as follows:

SELECT T66755.ENT_PERIOD AS c1,

T66755.PER_NAME_QTR AS c2,

T66755.PER_NAME_YEAR AS c3,

--AVG(TRUNC( T93511.ACT_LAST_SHIPPED_ON_DT ) - TRUNC( T93511.ORDERED_ON_DT ) ) AS c4

SUM(c1)/SUM(c2) as c4,

--AVG(TRUNC( TO_DATE('2012-03-12' , 'YYYY-MM-DD')) - TRUNC( T93511.ACT_FIRST_SHIPPED_ON_DT ) ) as c5

(TO_DATE('2012-03-12' , 'YYYY-MM-DD')-TO_DATE('1970-01-01' , 'YYYY-MM-DD'))+ SUM(c3)/SUM(c4) as c5

FROM W_DAY_D T66755 /* Dim_W_DAY_D_Common */ ,

W_SALES_CYCLE_LINE_F_CUST_A T93511 /* Fact_W_SALES_CYCLE_LINE_F */ ,

W_XACT_TYPE_D T96574 /* Dim_W_XACT_TYPE_D_Sales_Ordlns */ ,

W_STATUS_D T104764 /* Dim_W_STATUS_D_SalesCycle_Invoice */ ,

W_STATUS_D T104798 /* Dim_W_STATUS_D_SalesCycle_Fulfill */

WHERE ( T66755.ROW_WID = T93511.ORDERED_ON_DT_WID

AND T66755.PER_NAME_QTR = '2011 Q 1'

AND T66755.PER_NAME_YEAR = '2011'

AND T93511.XACT_TYPE_WID = T96574.ROW_WID

AND T93511.FULFILL_STATUS_WID = T104798.ROW_WID

AND T93511.INVOICE_STATUS_WID = T104764.ROW_WID)

GROUP BY T66755.ENT_PERIOD,

T66755.PER_NAME_QTR,

T66755.PER_NAME_YEAR;

where TO_DATE('2012-03-12' , 'YYYY-MM-DD') is SYSDATE, and TO_DATE('1970-01-01' , 'YYYY-MM-DD') is the static date stored in the aggregate table W_SALES_CYCLE_LINE_F_CUST_A.
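The rewrite works because SYSDATE - d = (SYSDATE - epoch) + (epoch - d), so the average decomposes into a constant offset plus a SUM/COUNT pair that can be pre-aggregated. The self-contained check below (sample dates built from dual for illustration) shows both forms returning the same value:

WITH t AS (
  SELECT DATE '2012-03-01' AS d FROM dual
  UNION ALL
  SELECT DATE '2012-03-05' AS d FROM dual
)
SELECT
  -- the original SYSDATE-style expression, with SYSDATE pinned to 2012-03-12
  AVG(TO_DATE('2012-03-12', 'YYYY-MM-DD') - d) AS direct_avg,
  -- the aggregate-table rewrite: constant offset plus SUM/COUNT over a static epoch
  (TO_DATE('2012-03-12', 'YYYY-MM-DD') - DATE '1970-01-01')
    + SUM(DATE '1970-01-01' - d) / COUNT(d) AS rewritten_avg
FROM t;

Both columns return 9, confirming the decomposition.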

AVG CASE with SYSDATE in OBIEE Reports

The more complex AVG…CASE with SYSDATE in a physical SQL can be worked around using the logic in the following example. The original physical SQL uses the SYSDATE-derived value TO_DATE('2012-03-12' , 'YYYY-MM-DD'):


SELECT T66755.ENT_PERIOD AS c1,

T66755.PER_NAME_QTR AS c2,

T66755.PER_NAME_YEAR AS c3,

AVG(

CASE

WHEN T93511.SHIPPABLE_FLG = 'Y'

AND T96574.W_XACT_TYPE_CODE = 'Regular'

THEN

CASE

WHEN T93511.SHIPPING_INTERFACED_FLG = 'Y'

AND T104798.W_STATUS_CODE = 'ORDER FULLY SHIPPED'

THEN ( TRUNC( T93511.ACT_LAST_SHIPPED_ON_DT ) - TRUNC( T93511.ORDERED_ON_DT ) )

ELSE ( TRUNC( TO_DATE('2012-03-12' , 'YYYY-MM-DD')) - TRUNC( T93511.ACT_FIRST_SHIPPED_ON_DT ))

END

ELSE NULL

END ) AS c4

FROM W_DAY_D T66755 /* Dim_W_DAY_D_Common */ ,

W_SALES_CYCLE_LINE_F T93511 /* Fact_W_SALES_CYCLE_LINE_F */ ,

W_XACT_TYPE_D T96574 /* Dim_W_XACT_TYPE_D_Sales_Ordlns */ ,

W_STATUS_D T104764 /* Dim_W_STATUS_D_SalesCycle_Invoice */ ,

W_STATUS_D T104798 /* Dim_W_STATUS_D_SalesCycle_Fulfill */

WHERE ( T66755.ROW_WID = T93511.ORDERED_ON_DT_WID

AND T66755.PER_NAME_QTR = '2011 Q 1'

AND T66755.PER_NAME_YEAR = '2011'

AND T93511.XACT_TYPE_WID = T96574.ROW_WID

AND T93511.FULFILL_STATUS_WID = T104798.ROW_WID

AND T93511.INVOICE_STATUS_WID = T104764.ROW_WID)

GROUP BY T66755.ENT_PERIOD,

T66755.PER_NAME_QTR,

T66755.PER_NAME_YEAR;

The proposed modified SQL below uses the same aggregate table from the first example:

SELECT T66755.ENT_PERIOD AS c1,

T66755.PER_NAME_QTR AS c2,

T66755.PER_NAME_YEAR AS c3,

SUM(CASE

WHEN T93511.SHIPPABLE_FLG = 'Y'

AND T96574.W_XACT_TYPE_CODE = 'Regular'

THEN

CASE

WHEN T93511.SHIPPING_INTERFACED_FLG = 'Y'

AND T104798.W_STATUS_CODE = 'ORDER FULLY SHIPPED'

THEN c1

ELSE (TO_DATE('2012-03-12' , 'YYYY-MM-DD')-TO_DATE('1970-01-01' , 'YYYY-MM-DD')) + c3

END

ELSE NULL

END)/

SUM(CASE

WHEN T93511.SHIPPABLE_FLG = 'Y'

AND T96574.W_XACT_TYPE_CODE = 'Regular'

THEN

CASE

WHEN T93511.SHIPPING_INTERFACED_FLG = 'Y'

AND T104798.W_STATUS_CODE = 'ORDER FULLY SHIPPED'

THEN c2

ELSE c4

END

ELSE NULL

END) as c4

FROM W_DAY_D T66755 /* Dim_W_DAY_D_Common */ ,

W_SALES_CYCLE_LINE_F_CUST_A T93511 /* Fact_W_SALES_CYCLE_LINE_F */ ,

W_XACT_TYPE_D T96574 /* Dim_W_XACT_TYPE_D_Sales_Ordlns */ ,

W_STATUS_D T104764 /* Dim_W_STATUS_D_SalesCycle_Invoice */ ,

W_STATUS_D T104798 /* Dim_W_STATUS_D_SalesCycle_Fulfill */

WHERE ( T66755.ROW_WID = T93511.ORDERED_ON_DT_WID


AND T66755.PER_NAME_QTR = '2011 Q 1'

AND T66755.PER_NAME_YEAR = '2011'

AND T93511.XACT_TYPE_WID = T96574.ROW_WID

AND T93511.FULFILL_STATUS_WID = T104798.ROW_WID

AND T93511.INVOICE_STATUS_WID = T104764.ROW_WID)

GROUP BY T66755.ENT_PERIOD,

T66755.PER_NAME_QTR,

T66755.PER_NAME_YEAR;

Careful analysis of the logical SQL scenarios and the aggregation gaps in the physical SQL can help you build custom aggregates, modify the logical design, and deliver better query performance.

OBIEE Reports With ‘SELECT CASE COUNT DISTINCT’

Materialized Views and Query Rewrite can be used effectively to pre-aggregate data and speed up end user queries. There are some cases, however, where MVs cannot be used to build aggregates. If you try to code ‘SELECT … COUNT DISTINCT’ into a fast refresh MV, Oracle raises the following error:

ORA-12015: cannot create a fast refresh materialized view from a complex query

You can work around ORA-12015 by creating one MView on top of another MView, and then refreshing the MViews in the right sequence to keep their contents up to date.
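A minimal sketch of that nested MView pattern is shown below. The MView names are hypothetical, and a fast refresh would additionally require materialized view logs on the base table and on the inner MView (omitted here):

-- Inner MView: collapse the fact table to one row per distinct key
CREATE MATERIALIZED VIEW MV_ORDER_KEYS
AS
SELECT ORDER_STATUS_WID,
       SALES_ORDER_NUM,
       COUNT(*) AS ROW_CNT
FROM W_SALES_ORDER_LINE_F
GROUP BY ORDER_STATUS_WID, SALES_ORDER_NUM;

-- Outer MView: COUNT(*) over the inner MView equals COUNT(DISTINCT SALES_ORDER_NUM)
CREATE MATERIALIZED VIEW MV_DISTINCT_ORDERS
AS
SELECT ORDER_STATUS_WID,
       COUNT(*) AS DISTINCT_ORDER_CNT
FROM MV_ORDER_KEYS
GROUP BY ORDER_STATUS_WID;

-- Refresh in dependency order: the inner MView first, then the outer one
EXEC DBMS_MVIEW.REFRESH('MV_ORDER_KEYS');
EXEC DBMS_MVIEW.REFRESH('MV_DISTINCT_ORDERS');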

The “SELECT … CASE … COUNT DISTINCT” case cannot be resolved by means of Materialized Views. You can tackle such a complex SQL pattern by using an aggregate table and modifying the logical model in the RPD to query the new table instead. Refer to the following working scenario as an example of how to work around such SQL:

The original physical SQL:

SELECT SUM(

CASE

WHEN T96574.W_XACT_TYPE_CODE = 'Regular'

AND T94920.W_STATUS_CODE <> 'Cancelled'

THEN T93768.NET_AMT * T93768.GLOBAL1_EXCHANGE_RATE

ELSE 0

END ) AS c1,

COUNT(DISTINCT

CASE

WHEN T94920.W_STATUS_CODE <> 'Cancelled'

AND T93768.BOOKING_FLG = 'Y'

AND T96574.W_XACT_TYPE_CODE = 'Regular'

THEN concat(concat(concat(T93768.SALES_ORDER_NUM, CAST(T93768.XACT_TYPE_WID AS CHARACTER ( 30 )

)), CAST(T93768.SALES_ORG_WID AS CHARACTER ( 30 ) )), CAST(T93768.DATASOURCE_NUM_ID AS CHARACTER ( 30 )

))

END ) AS c3,

COUNT(DISTINCT

CASE

WHEN T96574.W_XACT_TYPE_CODE = 'Regular'

AND T94920.W_STATUS_CODE <> 'Cancelled'

THEN concat(concat(concat(T93768.SALES_ORDER_NUM, CAST(T93768.XACT_TYPE_WID AS CHARACTER ( 30 )

)), CAST(T93768.SALES_ORG_WID AS CHARACTER ( 30 ) )), CAST(T93768.DATASOURCE_NUM_ID AS CHARACTER ( 30 )

))

END ) AS c4,

SUM(

CASE

WHEN T96574.W_XACT_TYPE_CODE = 'Returns'

THEN T93768.NET_AMT * T93768.GLOBAL1_EXCHANGE_RATE

ELSE 0

END ) AS c5,

COUNT(DISTINCT

CASE

WHEN T96574.W_XACT_TYPE_CODE = 'Regular'

AND T94920.W_STATUS_CODE <> 'Cancelled'


THEN concat(concat(T67704.INTEGRATION_ID, T93768.SALES_ORDER_HD_ID), CAST(T93768.DATASOURCE_NUM_ID

AS CHARACTER ( 30 ) ))

END ) AS c6,

SUM(

CASE

WHEN T96574.W_XACT_TYPE_CODE = 'Regular'

AND T94920.W_STATUS_CODE <> 'Cancelled'

THEN 1

ELSE NULL

END ) AS c7

FROM W_DAY_D T66755 /* Dim_W_DAY_D_Common */ ,

W_PRODUCT_D T67704 /* Dim_W_PRODUCT_D */ ,

W_SALES_ORDER_LINE_F T93768 /* Fact_W_SALES_ORDER_LINE_F */ ,

W_STATUS_D T94920 /* Dim_W_STATUS_D_Order_Status */ ,

W_XACT_TYPE_D T96574 /* Dim_W_XACT_TYPE_D_Sales_Ordlns */

WHERE ( T66755.ROW_WID = T93768.ORDERED_ON_DT_WID

AND T66755.PER_NAME_YEAR = '2011'

AND T67704.ROW_WID = T93768.PRODUCT_WID

AND T93768.DELETE_FLG = 'N'

AND T93768.XACT_TYPE_WID = T96574.ROW_WID

AND T93768.ORDER_STATUS_WID = T94920.ROW_WID )

Create an aggregate table for COUNT DISTINCT. Include in the GROUP BY all columns referenced in the COUNT DISTINCT and CASE expressions, as well as in the WHERE clause:

CREATE TABLE W_SALES_ORDER_LINE_A_CUST

AS

SELECT W_SALES_ORDER_LINE_F.SALES_ORDER_NUM, -- from COUNT

W_SALES_ORDER_LINE_F.XACT_TYPE_WID, -- from COUNT

W_SALES_ORDER_LINE_F.SALES_ORG_WID, -- from COUNT

W_SALES_ORDER_LINE_F.DATASOURCE_NUM_ID, -- from COUNT

W_SALES_ORDER_LINE_F.SALES_ORDER_HD_ID, -- from COUNT

W_SALES_ORDER_LINE_F.PRODUCT_WID, -- from WHERE

W_SALES_ORDER_LINE_F.ORDER_STATUS_WID, -- from WHERE

W_SALES_ORDER_LINE_F.BOOKING_FLG, -- from CASE

(

CASE

WHEN 'MONTH' = 'DAY'

THEN W_DAY_D.ROW_WID

WHEN 'MONTH' = 'WEEK'

THEN W_DAY_D.CAL_WEEK_START_DT_WID

WHEN 'MONTH' = 'MONTH'

THEN W_DAY_D.M_STRT_CAL_DT_WID

WHEN 'MONTH' = 'QUARTER'

THEN W_DAY_D.CAL_QTR_START_DT_WID

WHEN 'MONTH' = 'YEAR'

THEN W_DAY_D.CAL_YEAR_START_DT_WID

END ) AS PERIOD_START_DT_WID ,

(

CASE

WHEN 'MONTH' = 'DAY'

THEN W_DAY_D.ROW_WID

WHEN 'MONTH' = 'WEEK'

THEN W_DAY_D.CAL_WEEK_END_DT_WID

WHEN 'MONTH' = 'MONTH'

THEN W_DAY_D.M_END_CAL_DT_WID

WHEN 'MONTH' = 'QUARTER'

THEN W_DAY_D.CAL_QTR_END_DT_WID

WHEN 'MONTH' = 'YEAR'

THEN W_DAY_D.CAL_YEAR_END_DT_WID

END ) AS PERIOD_END_DT_WID ,

COUNT( 1 ) as SALES_ORDER_LINE_CNT

FROM W_SALES_ORDER_LINE_F,

W_DAY_D

WHERE W_SALES_ORDER_LINE_F.ORDERED_ON_DT_WID = W_DAY_D.ROW_WID(+)

AND W_SALES_ORDER_LINE_F.DELETE_FLG = 'N'


GROUP BY W_SALES_ORDER_LINE_F.SALES_ORDER_NUM,

W_SALES_ORDER_LINE_F.XACT_TYPE_WID,

W_SALES_ORDER_LINE_F.SALES_ORG_WID,

W_SALES_ORDER_LINE_F.DATASOURCE_NUM_ID,

W_SALES_ORDER_LINE_F.SALES_ORDER_HD_ID,

W_SALES_ORDER_LINE_F.PRODUCT_WID,

W_SALES_ORDER_LINE_F.ORDER_STATUS_WID,

W_SALES_ORDER_LINE_F.BOOKING_FLG,

(

CASE

WHEN 'MONTH' = 'DAY'

THEN W_DAY_D.ROW_WID

WHEN 'MONTH' = 'WEEK'

THEN W_DAY_D.CAL_WEEK_START_DT_WID

WHEN 'MONTH' = 'MONTH'

THEN W_DAY_D.M_STRT_CAL_DT_WID

WHEN 'MONTH' = 'QUARTER'

THEN W_DAY_D.CAL_QTR_START_DT_WID

WHEN 'MONTH' = 'YEAR'

THEN W_DAY_D.CAL_YEAR_START_DT_WID

END ) ,

(

CASE

WHEN 'MONTH' = 'DAY'

THEN W_DAY_D.ROW_WID

WHEN 'MONTH' = 'WEEK'

THEN W_DAY_D.CAL_WEEK_END_DT_WID

WHEN 'MONTH' = 'MONTH'

THEN W_DAY_D.M_END_CAL_DT_WID

WHEN 'MONTH' = 'QUARTER'

THEN W_DAY_D.CAL_QTR_END_DT_WID

WHEN 'MONTH' = 'YEAR'

THEN W_DAY_D.CAL_YEAR_END_DT_WID

END );

CREATE INDEX "DWH_7963"."W_SLS_ORD_LN_A_I1" ON "DWH_7963"."W_SALES_ORDER_LINE_A_CUST"("XACT_TYPE_WID")

TABLESPACE "DWIDX_TS" ;

CREATE INDEX "DWH_7963"."W_SLS_ORD_LN_A_I2" ON "DWH_7963"."W_SALES_ORDER_LINE_A_CUST"("SALES_ORG_WID")

TABLESPACE "DWIDX_TS" ;

CREATE INDEX "DWH_7963"."W_SLS_ORD_LN_A_I3" ON

"DWH_7963"."W_SALES_ORDER_LINE_A_CUST"("SALES_ORDER_HD_ID") TABLESPACE "DWIDX_TS" ;

CREATE INDEX "DWH_7963"."W_SLS_ORD_LN_A_I4" ON "DWH_7963"."W_SALES_ORDER_LINE_A_CUST"("PRODUCT_WID")

TABLESPACE "DWIDX_TS" ;

CREATE INDEX "DWH_7963"."W_SLS_ORD_LN_A_I5" ON

"DWH_7963"."W_SALES_ORDER_LINE_A_CUST"("ORDER_STATUS_WID") TABLESPACE "DWIDX_TS" ;

CREATE INDEX "DWH_7963"."W_SLS_ORD_LN_A_I6" ON

"DWH_7963"."W_SALES_ORDER_LINE_A_CUST"("PERIOD_START_DT_WID") TABLESPACE "DWIDX_TS" ;

CREATE INDEX "DWH_7963"."W_SLS_ORD_LN_A_I7" ON

"DWH_7963"."W_SALES_ORDER_LINE_A_CUST"("PERIOD_END_DT_WID") TABLESPACE "DWIDX_TS" ;

CREATE INDEX "DWH_7963"."W_SLS_ORD_LN_A_I8" ON "DWH_7963"."W_SALES_ORDER_LINE_A_CUST"("BOOKING_FLG")

TABLESPACE "DWIDX_TS" ;

Gather optimizer statistics on the new aggregate table:

BEGIN
  dbms_stats.gather_table_stats(ownname => 'DWH_7963',
                                tabname => 'W_SALES_ORDER_LINE_A_CUST',
                                estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                                method_opt => 'FOR ALL COLUMNS SIZE AUTO',
                                cascade => TRUE);
END;
/

Rewrite the original SQL as follows:

WITH c1 as(

SELECT COUNT (DISTINCT

CASE

WHEN T94920.W_STATUS_CODE <> 'Cancelled'

AND T93768.BOOKING_FLG = 'Y'

AND T96574.W_XACT_TYPE_CODE = 'Regular'


THEN concat(concat(concat(T93768.SALES_ORDER_NUM, CAST(T93768.XACT_TYPE_WID AS CHARACTER ( 30 ) )),

CAST(T93768.SALES_ORG_WID AS CHARACTER ( 30 ) )), CAST(T93768.DATASOURCE_NUM_ID AS CHARACTER ( 30 ) ))

END ) AS c3,

COUNT(DISTINCT

CASE

WHEN T96574.W_XACT_TYPE_CODE = 'Regular'

AND T94920.W_STATUS_CODE <> 'Cancelled'

THEN concat(concat(concat(T93768.SALES_ORDER_NUM, CAST(T93768.XACT_TYPE_WID AS CHARACTER ( 30 ) )),

CAST(T93768.SALES_ORG_WID AS CHARACTER ( 30 ) )), CAST(T93768.DATASOURCE_NUM_ID AS CHARACTER ( 30 ) ))

END ) AS c4,

COUNT(DISTINCT

CASE

WHEN T96574.W_XACT_TYPE_CODE = 'Regular'

AND T94920.W_STATUS_CODE <> 'Cancelled'

THEN concat(concat(T67704.INTEGRATION_ID, T93768.SALES_ORDER_HD_ID), CAST(T93768.DATASOURCE_NUM_ID

AS CHARACTER ( 30 ) ))

END ) AS c6

FROM W_MONTH_D T66755 /* Dim_W_DAY_D_Common */ ,

W_PRODUCT_D T67704 /* Dim_W_PRODUCT_D */ ,

W_SALES_ORDER_LINE_A_CUST T93768 /* Fact_W_SALES_ORDER_LINE_F */ ,

W_STATUS_D T94920 /* Dim_W_STATUS_D_Order_Status */ ,

W_XACT_TYPE_D T96574 /* Dim_W_XACT_TYPE_D_Sales_Ordlns */

WHERE ( T66755.M_END_CAL_DT_WID = T93768.PERIOD_END_DT_WID

AND T66755.M_STRT_CAL_DT_WID = T93768.PERIOD_START_DT_WID

AND T66755.PER_NAME_MONTH BETWEEN '2011 / 01' AND '2011 / 12'

AND T67704.ROW_WID = T93768.PRODUCT_WID

AND T93768.XACT_TYPE_WID = T96574.ROW_WID

AND T93768.ORDER_STATUS_WID = T94920.ROW_WID))

, c2 as (

SELECT SUM(

CASE

WHEN T96574.W_XACT_TYPE_CODE = 'Regular'

AND T94920.W_STATUS_CODE <> 'Cancelled'

THEN T104714.GLOBAL1_NET_AMT

ELSE 0

END ) AS c1,

SUM(

CASE

WHEN T96574.W_XACT_TYPE_CODE = 'Returns'

THEN T104714.GLOBAL1_NET_AMT

ELSE 0

END ) AS c5,

SUM(

CASE

WHEN T96574.W_XACT_TYPE_CODE = 'Regular'

AND T94920.W_STATUS_CODE <> 'Cancelled'

THEN T104714.SALES_ORDER_LINE_CNT

ELSE NULL

END ) AS c7

FROM W_MONTH_D T100027 /* Dim_W_MONTH_D */ ,

W_STATUS_D T94920 /* Dim_W_STATUS_D_Order_Status */ ,

W_XACT_TYPE_D T96574 /* Dim_W_XACT_TYPE_D_Sales_Ordlns */ ,

W_SALES_ORDER_LINE_A T104714 /* Fact_Agg_W_SALES_ORDER_LINE_A */

WHERE ( T94920.ROW_WID = T104714.ORDER_STATUS_WID

AND T96574.ROW_WID = T104714.XACT_TYPE_WID

AND T94920.DELETE_FLG = 'N'

AND T100027.M_END_CAL_DT_WID = T104714.PERIOD_END_DT_WID

AND T100027.M_STRT_CAL_DT_WID = T104714.PERIOD_START_DT_WID

AND T100027.PER_NAME_MONTH BETWEEN '2011 / 01' AND '2011 / 12') )

SELECT *

FROM c1, c2;

Since OBIEE controls the physical SQL generation, you need to update your logical model, expose the custom aggregate table in the RPD, and validate the performance and the final results.


Oracle BI Applications High Availability

Introduction

Both initial and incremental data loads into the Oracle BI Applications Data Warehouse must be executed during scheduled maintenance or blackout windows for the following reasons:

- End user data could be inconsistent during ETL runs, causing invalid or incomplete results on dashboards

- ETL runs may consume significant hardware resources, slowing down end user queries

The time to execute periodic incremental loads depends on a number of factors, such as the number of source databases, each source database's incremental volume, hardware specifications, environment configuration, etc. As a result, incremental loads may not always complete within a predefined blackout window, causing extended downtime.

Global businesses, operating around the clock, cannot always afford a few hours of downtime. Such customers can consider implementing a high availability solution using Oracle Data Guard with a physical standby database.

High Availability with Oracle Data Guard and Physical Standby Database

An Oracle Data Guard configuration contains a primary database and supports up to nine standby databases. A standby database is a copy of a production database, created from its backup. There are two types of standby databases: physical and logical.

A physical standby database must be physically identical to its primary database on a block-for-block basis. Data Guard synchronizes a physical standby database with its primary one by applying the primary database redo logs. The standby database must be kept in recovery mode for Redo Apply, but it can be opened in read-only mode in between redo synchronizations.

The advantage of a physical standby database is that Data Guard applies the changes very quickly, using low-level mechanisms and bypassing the SQL layer.

A logical standby database is created as a copy of a primary database, but it can later be altered to a different structure. Data Guard synchronizes a logical standby database by transforming the data from the primary database redo logs into SQL statements and executing them in the standby database. A logical standby database has to be open at all times to allow Data Guard to perform the SQL updates.

Important! A primary database must run in ARCHIVELOG mode at all times.
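For reference, a sketch of the standard sequence for enabling ARCHIVELOG mode and FORCE LOGGING on the primary database (the database must be cleanly mounted for the ARCHIVELOG switch):

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE FORCE LOGGING;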

Data Guard with the Physical Standby Database option provides efficient and comprehensive disaster recovery as well as a reliable high availability solution for Oracle BI Applications customers. Redo Apply for a physical standby synchronizes the standby database much faster than SQL Apply for a logical standby. OBIEE does not require write access to the BI Applications Data Warehouse either for executing end user logical SQL queries or for developing additional content in the RPD or Web Catalog.
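You can confirm that the standby database is open read-only, and therefore usable by OBIEE, with a simple query:

SQL> SELECT DATABASE_ROLE, OPEN_MODE FROM V$DATABASE;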

Internal benchmarks on low-range, outdated hardware showed Redo Apply on a physical standby database running four times faster than the corresponding ETL execution on the primary database:

Step Name                                Row Count   Redo Size   Primary DB Run Time   Redo Apply Time
SDE_ORA_SalesProductDimension_Full         2621803      621 Mb              01:59:31          00:10:20
SDE_ORA_CustomerLocationDimension_Full     4221350      911 Mb              04:11:07          00:16:35
SDE_ORA_SalesOrderLinesFact_Full          22611530    12791 Mb              09:17:19          03:16:04
Create Index W_SALES_ORDER_LINE_F_U1           n/a      610 Mb              00:24:31          00:08:23
Total                                     29454683    14933 Mb              15:52:28          03:51:22

The target hardware was intentionally configured on a low-range Sun server, with both the Primary and Standby databases deployed on the same server, to imitate a heavy incremental load. Modern production systems, with the primary and standby databases deployed on separate servers, are expected to deliver 8-10 times better Redo Apply time on a physical standby database, compared to the ETL execution time on the primary database.

The following steps describe the Data Guard configuration with a Physical Standby database:

- The primary instance runs in “FORCE LOGGING” mode and serves as the target database for routine incremental ETL and any maintenance activities such as patching or upgrades.

- The Physical Standby instance runs in read-only mode during ETL execution on the Primary database.

- When the incremental ETL load into the Primary database is over, the DBA schedules the downtime or blackout window on the Standby database for applying redo logs.

- The DBA shuts down the OBIEE tier and switches the Physical Standby database into ‘RECOVERY’ mode.

- The DBA starts Redo Apply in Data Guard to apply the generated redo logs to the Physical Standby Database.

- The DBA opens the Physical Standby Database in read-only mode and starts the OBIEE tier:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

SQL> ALTER DATABASE OPEN;
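The Redo Apply step itself is started with the standard command, shown here for completeness; the DISCONNECT FROM SESSION option runs the apply in the background:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;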


Easy-to-manage switchover and failover capabilities in Oracle Data Guard allow quick role reversals between primary and standby, so customers can consider switching OBIEE from the Standby to the Primary database, and then applying redo logs to the Standby instance. In such a configuration the downtime can be minimized to two short switchovers:

- Switch OBIEE from Standby to Primary after the ETL completes on the Primary database, and before starting Redo Apply on the Standby database.

- Switch OBIEE from Primary to Standby before starting another ETL.
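For reference, a sketch of the standard switchover commands; the full procedure, including verifying SWITCHOVER_STATUS in V$DATABASE, is covered in the Data Guard documentation:

On the primary database:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;

On the standby database:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;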

Additional considerations for deploying Oracle Data Guard with Physical Standby for Oracle BI Applications:

1. ‘FORCE LOGGING’ mode increases the incremental load time into the Primary database, since Oracle logs the index rebuild DDL operations.

2. The Primary database has to run in ARCHIVELOG mode to capture all REDO changes.

3. Such a deployment results in a more complex configuration; it also requires additional hardware to keep two large volume databases and store daily archived logs.

However, it offers these benefits:

1. High Availability Solution to Oracle BI Applications Data Warehouse

2. Disaster recovery and complete data protection

3. Reliable backup solution

Conclusion

This document consolidates the best practices and recommendations for improving performance for Oracle Business Intelligence Applications Version 7.9.6.x. This list of areas for performance improvement is not complete. If you observe any performance issues with your Oracle BI Applications implementation, you should trace the various components, and carefully benchmark any recommendations or solutions discussed in this article or other sources, before implementing the changes in the production environment.


Oracle Business Intelligence Applications Version 7.9.6.x Performance Recommendations

May 2012

Primary Author: Pavel Buynitsky

Contributors: Eugene Perkov, Amar Batham, Nitin Aggarwal, Oksana Stepaneeva,

Wasimraja Abdulmajeeth, Kirill Denisenko, Andrei Dzianisau, Aliaksander Kokhno,

Scott Lowe, Siarhei Kulikouski, Valery Enyukov

Oracle Corporation

World Headquarters

500 Oracle Parkway

Redwood Shores, CA 94065

U.S.A.

Worldwide Inquiries:

Phone: +1.650.506.7000

Fax: +1.650.506.7200

oracle.com

Copyright © 2011, Oracle. All rights reserved.

This document is provided for information purposes only and the

contents hereof are subject to change without notice.

This document is not warranted to be error-free, nor subject to any

other warranties or conditions, whether expressed orally or implied

in law, including implied warranties and conditions of merchantability

or fitness for a particular purpose. We specifically disclaim any

liability with respect to this document and no contractual obligations

are formed either directly or indirectly by this document. This document

may not be reproduced or transmitted in any form or by any means,

electronic or mechanical, for any purpose, without our prior written permission.

Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of Oracle

Corporation and/or its affiliates. Other names may be trademarks

of their respective owners.