Oracle Database 11g: New Features Overview eStudy

Student Guide

D52362GC10

Edition 1.0

October 2007

PRODUCTION

Copyright © 2007, Oracle. All rights reserved.

This documentation contains proprietary information of Oracle Corporation. It is provided under a license agreement containing restrictions on use and disclosure and is also protected by copyright law. Reverse engineering of the software is prohibited. If this documentation is delivered to a U.S. Government Agency of the Department of Defense, then it is delivered with Restricted Rights and the following legend is applicable:

Restricted Rights Legend

Use, duplication or disclosure by the Government is subject to restrictions for commercial computer software and shall be deemed to be Restricted Rights software under Federal law, as set forth in subparagraph (c)(1)(ii) of DFARS 252.227-7013, Rights in Technical Data and Computer Software (October 1988).

This material or any portion of it may not be copied in any form or by any means without the express prior written permission of the Education Products group of Oracle Corporation. Any other copying is a violation of copyright law and may result in civil and/or criminal penalties.

If this documentation is delivered to a U.S. Government Agency not within the Department of Defense, then it is delivered with “Restricted Rights,” as defined in FAR 52.227-14, Rights in Data-General, including Alternate III (June 1987).

The information in this document is subject to change without notice. If you find any problems in the documentation, please report them in writing to Worldwide Education Services, Oracle Corporation, 500 Oracle Parkway, Box SB-6, Redwood Shores, CA 94065. Oracle Corporation does not warrant that this document is error-free.

Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Authors

Jean-Francois Verrier, Christine Jeal, Jim Spiller, Maria Billings, Priya Vennapusa, Jim Womack

Technical Contributors and Reviewers

This book was published using: Oracle Tutor

Table of Contents

Chapter 1: Introduction
   Overview
   Oracle Database Innovation
   Customer Testimonials
   Enterprise Grid Computing
   Oracle Database 11g: Focus Areas
   Management Automation
   Oracle Database 11g: New Features Overview eStudy
   Oracle Database 11g: Change Management Overview eStudy
   Further Information

Chapter 2: Managing Storage
   Objectives
   Automatic Storage Management (ASM) Enhancements
   ASM Fast Mirror Resync: Overview
   Setting Up ASM Fast Mirror Resync
   ASM Preferred Mirror Read: Overview
   ASM Preferred Mirror Read: Setup
   ASM Preferred Mirror Read: Best Practice
   ASM Scalability and Performance Enhancements
   SYSASM Role
   ASM Disk Group Compatibility
   ASM Disk Group Attributes
   Simplified Diskgroup Commands
   ASMCMD Extensions
   ASMCMD Extension: Examples
   SecureFiles: Overview
   Enabling SecureFiles Storage
   Creating SecureFiles
   Altering SecureFiles
   Accessing SecureFiles
   Migrating to SecureFiles
   Temporary Tablespace Shrink
   Tablespace Option for Creating Temporary Tables
   Demonstrations
   Summary

Chapter 3: High Availability: Using the Data Recovery Advisor and Flashback
   Objectives
   Repairing Data Failures
   Data Recovery Advisor
   Listing Data Failures
   Advising on Repair
   Setting Corruption-Detection Parameters
   Flashback Data Archive: Overview
   Flashback Data Archive Comparison
   Creating a Flashback Data Archive: Example
   Configuring a Default Flashback Data Archive: Example
   Using Flashback Data Archive: Examples
   Flashing Back a Transaction
   Flashback Transaction Wizard: Sample
   Validating Dependencies
   Dependency Report
   Demonstrations
   Summary

Chapter 4: High Availability: RMAN and Data Guard Enhancements
   Objectives
   RMAN Enhancements in Oracle Database 11g
   Duplicating a Database
   Active Database Duplication: Selecting the Source
   RMAN DUPLICATE Command
   Creating a Standby Database with the DUPLICATE Command
   Parallel Backup and Restore for Very Large Files
   Using RMAN Multisection Backups
   Creating Archival Backups
   Archival Database Backup
   IMPORT CATALOG RMAN Command
   RMAN Data Recovery Commands
   RMAN Security Enhancements
   Improved Integration of RMAN and Data Guard
   Real-Time Query and Physical Standby Databases
   Compressing Redo Data
   Dynamically Setting SQL Apply Parameters
   New Columns in DBA_LOGSTDBY_PARAMETERS
   Recording SQL Apply Event Information
   Logical Standby Database Flash Recovery Area
   Initiating Fast-Start Failover from an Application
   Setting Up a Test Environment by Using Snapshot Standby Databases
   Summary

Chapter 5: Security: New Features
   Objectives
   Security Enhancements
   Secure Default Configuration
   Enabling the Built-in Password Complexity Checker
   Managing Default Audits
   Privileges Audited By Default
   Adjusting Security Settings
   Setting Security Parameters
   Setting Database Administrator Authentication
   Setting Up Directory Authentication for Administrative Users
   Transparent Data Encryption Support
   TDE and Logical Standby
   TDE and Streams
   Using Tablespace Encryption
   Hardware Security Module
   TDE and Kerberos Enhancements
   Encryption for LOB Columns
   Enterprise Manager Security Management
   Demonstration
   Summary

Chapter 6: Intelligent Infrastructure
   Objectives
   Automatic SQL Tuning in Oracle Database 11g
   Automatic SQL Tuning: Fine-Tune
   Automatic SQL Tuning: Dictionary Views
   Automatic SQL Tuning Considerations
   Automatic Workload Repository Baselines
   Moving Window Baseline
   Baseline Templates
   Generating Baseline for a Single Time Period
   Using EM to Quickly Configure Adaptive Thresholds
   Changes to Procedures and Views
   Automated Maintenance Tasks
   Default Maintenance Resource Manager Plan
   Automated Maintenance Task Priorities
   Automatic Memory Management: Overview
   Oracle Database 11g Memory-Sizing Parameters
   ADDM Enhancements in Oracle Database 11g
   Automatic Database Diagnostic Monitor (ADDM) in Oracle Database 10g
   Automatic Database Diagnostic Monitor for Oracle RAC
   ADDM for Oracle RAC
   EM Support for ADDM for Oracle RAC
   DBMS_ADDM Package
   Advisor-Named Findings and Directives
   Using the DBMS_ADDM Package
   New ADDM Views
   Resource Manager: New EM Interface
   Easier Recovery from Loss of SPFILE
   Summary

Chapter 7: Data Warehousing Enhancements
   Objectives
   SQL Access Advisor in Oracle Database 11g
   Oracle Partitioning Across Database Releases
   11g Partitioning Enhancements
   Interval Partitioning
   Interval Partitioning: Example
   Moving the Transition Point
   System Partitioning
   System Partitioning: Guidelines
   System Partitioning: Example
   Composite Partitioning Enhancements
   Composite Range-Range Partitioning: Example
   Virtual Column-Based Partitioning
   Virtual Column-Based Partitioning: Example
   Reference Partitioning
   Reference Partitioning: Example
   Bitmap Join Index for IOT
   Table Compression
   Demonstrations
   Summary

Chapter 8: Additional Performance Enhancements
   Objectives
   Statistic Preferences: Overview
   Partitioned Tables and Incremental Statistics: Overview
   Partitioned Tables and Incremental Statistics in Oracle Database 11g
   Hash-Based Sampling for Column Statistics
   Multicolumn Statistics: Overview
   Expression Statistics: Overview
   Deferred Statistics Publishing: Overview
   Deferred Statistics Publishing: Example
   Query Result Cache
   Setting Up the Query Result Cache
   Using the RESULT_CACHE Hint
   Managing the Query Result Cache
   Using the DBMS_RESULT_CACHE Package
   Viewing Information About the Query Result Cache
   Oracle Call Interface Client Query Cache
   Setting the OCI Client Query Cache
   PL/SQL Function Cache
   PL/SQL Function Cache: Example
   Automatic "Native" Compilation
   Adaptive Cursor Sharing: Overview
   Adaptive Cursor Sharing: Example
   Adaptive Cursor Sharing Views
   Demonstrations
   Summary

Introduction

Oracle Database 11g

Overview

• This eStudy introduces the new features of Oracle Database 11g.
• Previous experience with Oracle databases is required for a full understanding of many new features, particularly Oracle Database 10g, Releases 1 and 2.

This eStudy introduces you to the new features of Oracle Database 11g that are applicable to the work usually performed by database administrators and related personnel. It does not attempt to provide every detail about a feature or cover aspects of a feature that were available in previous releases. You gain an appreciation for the new features of Oracle Database 11g from an administrative perspective through a series of lectures on the following focus areas: server manageability, availability, and performance. The eStudy will be most useful if you have already administered Oracle databases, specifically Oracle Database 10g.

Oracle Database Innovation

• Audit Vault, Database Vault
• Grid Computing, Self-Managing Database
• XML Database, Oracle Data Guard
• Real Application Clusters, Flashback Query
• Virtual Private Database, built-in Java VM
• Partitioning support, built-in messaging
• Object-relational support, multimedia support
• Data warehousing optimizations, parallel operations
• Distributed SQL and transaction support, cluster and MPP support
• Multiversion read consistency, client/server support
• Platform portability, commercial SQL implementation

30 years of sustained innovation… continuing with Oracle Database 11g

As a result of its early focus on innovation, Oracle has maintained the lead in the industry with a huge number of trendsetting products. Continued focus on Oracle’s key development areas has led to a number of industry benchmarks:

• First commercial relational database

• First portable tool set and UNIX-based client/server applications

• First multimedia database architecture

Customer Testimonials

“Oracle customers are highly satisfied with its Real Application Clusters and Automatic Storage Management when pursuing scale-out strategies.”

Mark Beyer, Gartner, December 2006

“By consolidating with Oracle Grid Computing on Intel/Linux, we are witnessing about a 50% reduction in costs with increased performance.”

Tim Getsay, Assistant Vice-Chancellor, Management Information Systems, Vanderbilt University

Managing service-level objectives is an ongoing challenge. Users expect fast, secure, 24 × 7 access to business applications, and information technology managers must deliver this access without increasing costs and resources. The manageability features in Oracle Database 11g are designed to help organizations easily manage infrastructure grids and meet users’ service-level expectations. Oracle Database 11g introduces more self-management, automation, and advisors that help reduce management costs while increasing the performance, scalability, and security of business applications around the clock.

Enterprise Grid Computing

SMP dominance → RAC clusters for availability → Grids of low-cost hardware and storage → Managing change across the enterprise

Oracle Database 10g was the first database designed for grid computing. Oracle Database 11g consolidates and extends Oracle’s unique ability to deliver the benefits of grid computing. Oracle infrastructure grids fundamentally changed the way data centers look and operate, transforming data centers from silos of isolated system resources to shared pools of servers and storage. Oracle’s unique grid architecture enables all types of applications to scale-out server and storage capacity on demand. By clustering low-cost commodity server and storage modules on Infrastructure grids, Oracle Database 11g enables customers to improve user service levels, reduce down time, and make more efficient use of IT resources while still increasing the performance, scalability, and security of their business applications. Oracle Database 11g enhances the adoption of grid computing by offering:

• Unique scale-out technology with a single database image

• Lower server and storage costs

• Increased availability and scalability

Oracle Database 11g: Focus Areas

• Manageability
• Availability
• Performance
• Business intelligence and data warehousing
• Security

The Oracle infrastructure grid enables information technology systems to be built from pools of low-cost servers and storage that deliver the highest quality of service in terms of manageability, high availability, and performance. Oracle’s existing grid capabilities are extended in the areas listed in the slide to make your databases more manageable.

Manageability: New manageability features and enhancements increase database administrator (DBA) productivity, reduce costs, minimize errors, and maximize quality of service through change management, additional management automation, and fault diagnosis.

Availability: New high availability features further reduce the risk of downtime and data loss, including further disaster recovery offerings, important high availability enhancements to Automatic Storage Management (ASM), support for online database patching, improved online operations, and more.

Performance: Many innovative new performance capabilities are offered, including SecureFiles, compression for online transaction processing (OLTP), Real Application Clusters (RAC) optimizations, query result caches, TimesTen enhancements, and more.

Oracle Database 11g: Focus Areas

• Information management
  – Content management
  – XML
  – Oracle Text
  – Spatial
  – Multimedia and medical imaging
• Application development
  – PL/SQL
  – .NET
  – PHP
  – SQL Developer

The Oracle infrastructure grid provides the additional functionality needed to manage all information in the enterprise with robust security, information life cycle management, and integrated business intelligence analytics to support fast and accurate business decisions at the lowest cost.

Management Automation

Autotuning, advisory, and instrumentation layers spanning storage, backup, memory, apps/SQL, schema, RAC, recovery, and replication

Oracle Database 11g continues the efforts begun in Oracle9i to dramatically simplify and ultimately fully automate the tasks that DBAs need to perform. New in Oracle Database 11g is Automatic SQL Tuning with self-learning capabilities. Other new capabilities include automatic, unified tuning of both System Global Area (SGA) and Program Global Area (PGA) memory buffers and new advisors for partitioning, database repair, streams performance, and space management. Enhancements to the Automatic Database Diagnostic Monitor (ADDM) give it a better global view of performance in Oracle Real Application Clusters environments and improved comparative performance analysis capabilities.

Oracle Database 11g: New Features Overview eStudy

Lesson 1: Managing Storage
Lesson 2: Using the Data Recovery Advisor and Flashback
Lesson 3: RMAN and Data Guard Enhancements
Lesson 4: Security: New Features
Lesson 5: Intelligent Infrastructure
Lesson 6: Data Warehousing Enhancements
Lesson 7: Additional Performance Enhancements

Lesson 1 covers storage management in Oracle Database 11g. The main discussion points are the enhancements to Automatic Storage Management, the new ASMCMD command extensions, and the reengineered large object SecureFiles.

Lesson 2 covers the high availability features. The main components are the Data Recovery Advisor and the Flashback Data Archive.

Lesson 3 continues the high availability discussion with Recovery Manager (RMAN) command enhancements and standby database improvements.

Lesson 4 describes the security features in Oracle Database 11g. Discussion points are password complexity enforcement and transparent data encryption extensions.

Lesson 5 explains Automatic SQL Tuning in Oracle Database 11g and automated maintenance tasks.

Lesson 6 covers data warehousing enhancements, including new partitioning options.

Lesson 7 covers the performance features in Oracle Database 11g.

Oracle Database 11g: Change Management Overview eStudy

Lesson 1: Setting Up the Test Environment
Lesson 2: Using Database Replay
Lesson 3: Using SQL Performance Analyzer
Lesson 4: Performing Online Changes
Lesson 5: Using SQL Plan Management
Lesson 6: Diagnosing Problems
Lesson 7: Installing Patches

This eStudy complements the Oracle Database 11g: New Features Overview eStudy and introduces you to the change management features of Oracle Database 11g.

Lesson 1 introduces you to the concept of the life cycle of change management, with a focus on performing realistic testing and establishing a simple test environment using snapshot standby.

Lesson 2 adds to the realistic-testing discussion by introducing Database Replay, a feature that enables you to capture production workloads and replay them in a test environment.

Lesson 3 continues with the SQL Performance Analyzer feature, which allows you to predict the impact of system changes on SQL workload response time.

Lesson 4 furthers the automation discussion with a collection of features that improve database maintenance activities on applications while they are in use.

Lesson 5 introduces SQL Plan Management, which allows the system to automatically control SQL plan evolution so that plan changes do not cause performance regressions.

Lesson 6 introduces the diagnostic features that assist DBAs in detecting problems proactively.

Lesson 7 discusses online patching, which provides the ability to install, enable, and disable a bug fix or diagnostic patch on a live, running Oracle instance.

Further Information

For more information about topics that are not covered in this course, please click the links in the notes section below.
• Oracle Database 11g: New Features eStudies
  – A comprehensive series of self-paced online courses covering all new features in great detail
• Oracle By Example series: Oracle Database 11g
• Oracle OpenWorld events

• Oracle Database 11g: New Features eStudies [http://www.oracle.com/education/library]

• Oracle By Example series: Oracle Database 11g [http://otn.oracle.com/obe/obe11gdb/index.html]

• Oracle OpenWorld events [http://www.oracle.com/openworld/index.html]

Managing Storage

Oracle Database 11g

Objectives

After completing this lesson, you should be able to:
• Use ASM Fast Mirror Resync to improve disk failure recovery times
• Set up ASM Fast Mirror Resync
• Configure ASM preferred mirror failure groups
• Use the SYSASM role to manage ASM disks
• Use the compatibility modes for disk groups
• Use ASMCMD command extensions to back up and restore disk groups
• Discuss LOB improvements using SecureFiles
• Use SQL and PL/SQL APIs to access SecureFiles
• Use temporary tablespace enhancements

Automatic Storage Management (ASM) Enhancements

• Availability:
  – ASM Fast Mirror Resync
  – ASM preferred mirror failure groups
• Scalability:
  – Increased limits
• Security:
  – New SYSASM privilege
• Manageability:
  – Automatic extent size adjustments
  – ASM disk group attributes
  – New manageability options
• Additional ASMCMD extensions

Oracle Database 11g extends the functionality of Automatic Storage Management (ASM) in the areas listed in the slide.

ASM Fast Mirror Resync: Overview

1. ASM redundancy used (primary and secondary extent copies)
2. Disk access failure
3. Failure time < DISK_REPAIR_TIME
4. Disk again accessible: only the modified ASM data extents need to be resynchronized

In Oracle Database 10g, Automatic Storage Management (ASM) took a disk offline whenever it was unable to complete a write to that disk and, as a result, could no longer read from it. Once disks were offline, ASM dropped them from the disk group. This was a relatively costly operation that could take hours to complete, even if the disk failure was only transient.

Oracle Database 11g introduces the ASM Fast Mirror Resync feature to significantly reduce the time required to resynchronize a disk after a transient failure. When a disk goes offline following a transient failure, ASM tracks the ASM data extents that are modified during the outage. After the failure is repaired, ASM quickly resynchronizes only those ASM data extents that were affected during the outage. This feature assumes that the content of the affected ASM disks has not been damaged or modified.

When an ASM disk path fails, the ASM disk is taken offline but not dropped if you have the DISK_REPAIR_TIME attribute set for the corresponding disk group. The setting for this attribute determines the duration of a disk outage that ASM tolerates while still being able to resynchronize after you complete the repair. The DISK_REPAIR_TIME timer elapses only when the disk group is mounted, and changing the value does not affect disks that are already offline. The default setting (3.6 hours) should be adequate for most environments.

Note: The tracking mechanism uses one bit for each modified ASM data extent, which keeps it highly efficient.

Setting Up ASM Fast Mirror Resync

• V$ASM_ATTRIBUTE: Views current resync attributes
• V$ASM_DISK, V$ASM_DISK_IOSTAT: Shows repair time left
• V$ASM_OPERATION: Shows disk resync operation

ALTER DISKGROUP dgroupA SET ATTRIBUTE 'DISK_REPAIR_TIME'='3H';

ALTER DISKGROUP dgroupA
  OFFLINE DISKS IN FAILGROUP controller2 DROP AFTER 5H;

ALTER DISKGROUP dgroupA
  ONLINE DISKS IN FAILGROUP controller2 POWER 2 WAIT;

ALTER DISKGROUP dgroupA DROP DISKS IN FAILGROUP controller2 FORCE;

You enable ASM Fast Mirror Resync on a per-disk-group basis, after the disk group has been created, by setting DISK_REPAIR_TIME with the ALTER DISKGROUP command (as shown in the slide). You can use the ALTER DISKGROUP ... OFFLINE DISKS SQL statement to manually take ASM disks offline for preventive maintenance, and you can specify a timer that overrides the one defined at the disk group level. After you repair the disk, issue the ALTER DISKGROUP <dgname> ONLINE statement, which brings the repaired disks back online and re-enables writes; this starts a procedure that copies all the ASM data extents marked as stale from their redundant copies. You cannot apply the ONLINE statement to disks that have already been dropped.

If you cannot repair a failure group that is in the offline state, you can use the ALTER DISKGROUP DROP DISKS IN FAILGROUP command with the FORCE option. This ensures that data originally stored on these disks is reconstructed from redundant copies and stored on other disks in the same disk group.

You can view the current attribute values by querying the V$ASM_ATTRIBUTE view. The REPAIR_TIMER column of either V$ASM_DISK or V$ASM_DISK_IOSTAT shows the time left before ASM drops an offlined disk. A row corresponding to a disk resync operation appears in V$ASM_OPERATION with the OPERATION column set to SYNC.
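As a quick illustration of the views listed above, here is a minimal sketch of monitoring an offlined disk and a running resync from the ASM instance; the column lists are limited to the columns discussed here.

-- Time remaining (REPAIR_TIMER) before an offlined disk is dropped
SELECT name, mode_status, repair_timer
FROM   v$asm_disk
WHERE  mode_status = 'OFFLINE';

-- A running resync appears with OPERATION = 'SYNC'
SELECT operation, state, power
FROM   v$asm_operation;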

ASM Preferred Mirror Read: Overview

Site A and Site B each hold primary (P) and secondary (S) copies of mirrored ASM data extents; each instance can read the mirror copy at its own site.

When you configure ASM failure groups in Oracle Database 10g, ASM always reads the primary copy of a mirrored ASM data extent. It may be more efficient for a node to read from the ASM data extent that is closest to it, even if that is a secondary copy. This is especially true in extended RAC cluster configurations, where reading from a local copy of an ASM data extent provides improved performance.

With Oracle Database 11g, you can do this by configuring preferred mirror read: the new ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter specifies a list of preferred read failure group names. The disks in those failure groups become the preferred read disks, so every node can read from its local disks. This results in higher efficiency and performance as well as reduced network traffic. The setting for this parameter is instance specific.

ASM Preferred Mirror Read: Setup

Setup:
On first instance:  ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEA
On second instance: ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEB

Monitor:
SELECT preferred_read FROM v$asm_disk;
SELECT * FROM v$asm_disk_iostat;

You configure this feature by setting the ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter. This is a multivalued parameter that should contain a comma-delimited string of failure group names, each prefixed with its disk group name and a “.” character. The parameter is dynamic and can be modified with the ALTER SYSTEM command at any time, and it is valid only for ASM instances. When nodes are spread across several sites (a stretch cluster), the failure groups specified in this parameter should contain only the disks that are local to the corresponding instance.

A new column, PREFERRED_READ, has been added to the V$ASM_DISK view; its value is Y if the disk belongs to a preferred read failure group. You can use the V$ASM_DISK_IOSTAT view to identify any performance issues with ASM preferred read failure groups. This view displays disk input/output (I/O) statistics for each ASM client; if it is queried from a database instance, only the rows for that instance are shown.

You can also specify a set of disks as preferred disks for each ASM instance by using Enterprise Manager (EM); the preferred read attributes are instance specific. In Oracle Database 11g, the Preferred Read Failure Groups field is added to the EM Configuration page. This setting takes effect only before the disk group is mounted or when the disk group is created, and applies only to newly opened files or a newly loaded ASM data extent map for a file.
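A minimal sketch of the setup described above, assuming a disk group named DATA with failure groups SITEA and SITEB as in the slide; it is run on the ASM instance local to site A (the site-B instance would name DATA.SITEB instead).

-- The parameter is dynamic and instance specific
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA';

-- Verify which disks the local instance now prefers to read from
SELECT name, failgroup, preferred_read
FROM   v$asm_disk;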

ASM Preferred Mirror Read: Best Practice

• Two sites, normal redundancy: only two failure groups, one for each instance
• Two sites, high redundancy: a maximum of four failure groups, two for each instance
• Three sites, high redundancy: only three failure groups, one for each instance

In practice, there are only a limited number of good disk group configurations in a stretch cluster. A good configuration takes into account both the performance and the availability of the disk group. Here are some possible examples; a SQL sketch for the two-site, normal-redundancy case follows this list:

• For a two-site stretch cluster, a normal redundancy disk group should have only two failure groups, and all disks local to one site should belong to the same failure group. Also, one failure group (at the most) should be specified as a preferred read failure group by each instance. If there are more than two failure groups, ASM may not mirror a virtual ASM data extent across both sites. Furthermore, if the site with more than two failure groups were to go down, it would take the disk group down as well. If the disk group to be created is a high redundancy disk group, two failure groups (at the most) should be created on each site with its local disks, having both local failure groups specified as preferred read failure groups for the local instance.

• For a three-site stretch cluster, a high redundancy disk group with three failure groups should be used. This is for ASM to guarantee that each virtual ASM data extent has a mirror copy local to each site and that the disk group is protected against a catastrophic disaster on any of the three sites.
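For the first (two-site, normal redundancy) case, a hedged sketch of what such a disk group might look like; the disk group name DATA and failure group names SITEA and SITEB follow the earlier setup slide, and the disk paths are placeholders rather than names from the course.

-- One failure group per site; ASM then mirrors every virtual extent across the two sites
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP sitea DISK '/dev/sitea_disk1', '/dev/sitea_disk2'
  FAILGROUP siteb DISK '/dev/siteb_disk1', '/dev/siteb_disk2';

Each ASM instance would then name its local failure group in ASM_PREFERRED_READ_FAILURE_GROUPS, as shown in the setup example.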

ASM Scalability and Performance Enhancements

• The ASM data extent size grows automatically according to the file size.
• ASM supports variable sizes to:
  – Raise the maximum possible file size
  – Reduce memory utilization in the shared pool
• No administration is needed apart from manual rebalance (in case of fragmentation).

In Oracle Database 11g, ASM supports variable sizes for ASM data extents of 1, 8, and 64 allocation units (AU). This is an automated feature that enables ASM to support larger sized ASM data extents while improving memory usage efficiency. ASM uses a predetermined number of ASM data extents of each size. As soon as a file crosses a certain threshold, the next ASM data extent size is used. An ASM file can begin with 1 AU; as the file’s size increases, the ASM data extent size also increases to 8 or 64 AUs based on predefined file size thresholds. As fewer ASM data extent pointers are needed to describe the files, less memory is required to manage the ASM data extent maps in the shared pool, which would have been prohibitive in large file configurations. The ASM data extent size can vary both across files and within files. Using variable-size ASM data extents enables you to deploy Oracle databases using ASM that are several hundred terabytes (TB), even several petabytes (PB) in size. The management of variable-size ASM data extents is completely automated and does not require manual administration. As ASM data extent sizes grow and smaller noncontiguous ASM data extents are freed, any rebalance operation also performs a defragmentation operation. ASM also automatically defragments during allocation if the desired size is unavailable, thereby potentially affecting allocation times but offering much faster file opens, given the reduction in the memory required to store file ASM data extents.
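Where fragmentation does build up, the manual rebalance mentioned above also defragments the disk group. A minimal sketch, assuming a disk group named DATA; the POWER value is only an illustration.

-- Rebalance (and thereby defragment) the disk group; higher POWER finishes sooner but uses more I/O
ALTER DISKGROUP data REBALANCE POWER 4;

-- Track progress from the ASM instance
SELECT operation, state, power, sofar, est_work, est_minutes
FROM   v$asm_operation;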

SYSASM Role

• Use the SYSASM role to manage ASM instances and avoid overlap between DBAs and storage administrators.
• SYSDBA to be deprecated:
  – Oracle Database 11g, Release 1 behaves as in 10g.
  – In future releases, SYSDBA is restricted in ASM instances.

SQL> CONNECT / AS SYSASM

SQL> CREATE USER ossysasmusername IDENTIFIED BY passwd;

SQL> GRANT SYSASM TO ossysasmusername;

SQL> DROP USER ossysasmusername;

SQL> CONNECT ossysasmusername/passwd AS SYSASM;

This feature introduces a new SYSASM role that is specifically intended for performing ASM administration tasks. Using the SYSASM role instead of the SYSDBA role improves security by separating ASM administration from database administration.

As of Oracle Database 11g, Release 1, the operating system (OS) group for SYSASM and SYSDBA is the same, and the default installation group for SYSASM is dba. In a future release, separate groups will have to be created, and SYSDBA users will be restricted in ASM instances. Currently, as a member of the dba group you can connect to an ASM instance using the first statement given in the slide.

You can use the combination of CREATE USER and GRANT SYSASM SQL statements from an ASM instance to create a new SYSASM user. This can be useful for remote ASM administration. These commands update the password file of each ASM instance, and do not need the instance to be up and running. Similarly, you can revoke the SYSASM role from a user by using the REVOKE command, and you can drop a user from the password file using the DROP USER command.

Oracle Database 11g adds the SYSASM role to the ASM instance login page, and a new column called SYSASM to the V$PWFILE_USERS view.

Note: In Oracle Database 11g, if you log in to an ASM instance as SYSDBA, warnings are written in the corresponding alert.log file.
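A minimal sketch of checking the result of the GRANT shown in the slide through the new V$PWFILE_USERS column, run while connected to the ASM instance; ossysasmusername is the placeholder user name from the slide.

-- The password file now records the SYSASM privilege for the new user
SELECT username, sysdba, sysoper, sysasm
FROM   v$pwfile_users;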

ASM Disk Group Compatibility

• Compatibility of each disk group is separately controllable:
  – RDBMS compatibility controls the minimum client level.
  – ASM compatibility controls the ASM metadata on-disk structure.
• Useful with heterogeneous environments.
• Setting disk group compatibility is irreversible.

A disk group's COMPATIBLE.RDBMS setting must be less than or equal to both its COMPATIBLE.ASM setting and the COMPATIBLE parameter of each database instance that mounts it; COMPATIBLE.ASM must be less than or equal to the COMPATIBLE parameter of the ASM instance.

To support heterogeneous environments with disk groups from both Oracle Database 10g and Oracle Database 11g, there are two kinds of compatibility applicable to ASM disks, with each disk group’s compatibility independently controllable:

• RDBMS compatibility is the minimum compatible version of the RDBMS instance that would allow the instance to mount the disk group. This compatibility dictates the format of messages that are exchanged between the ASM and database (RDBMS) instances. An ASM instance can support different RDBMS clients running at different compatibility settings. The database compatible version setting of each instance must be greater than or equal to the RDBMS compatibility of all disk groups used by that database. Database instances are typically run from a different Oracle home than the ASM instance. This implies that the database instance may be running a different software version than the ASM instance. When a database instance first connects to an ASM instance, it negotiates the highest version that they both can support. The compatibility parameter setting of the database, the software version of the database, and the RDBMS compatibility setting of a disk group determine whether a database instance can mount a given disk group.

• ASM compatibility is the persistent compatibility setting controlling the format of data structures of ASM metadata on disk. The ASM compatibility level of a disk group must always be greater than or equal to the RDBMS compatibility level of the same disk group. ASM compatibility is concerned only with the format of the ASM metadata; the format of the file contents is up to the database instance. For example, the ASM compatibility of a disk group can be set to 11.0 while its RDBMS compatibility is 10.1. This implies that the disk group can be managed only by ASM software whose version is 11.0 or later, whereas any database client whose software version is 10.1 or later can use that disk group.

The compatibility of a disk group needs to be advanced only when there is a change to either persistent disk structures or protocol messaging. However, advancing disk group compatibility is an irreversible operation. You can set the disk group compatibility by using either the CREATE DISKGROUP or the ALTER DISKGROUP command.

Note: In addition to the disk group compatibilities, the COMPATIBLE parameter (the database compatible version) determines the features that are enabled; it applies to the database or ASM instance dependent on the INSTANCE_TYPE parameter. For example, setting it to 10.1 would preclude use of any new features that are introduced in Oracle Database 11g (disk online/offline and variable ASM data extents, for example).
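As a sketch of the syntax referenced above (disk group name and version values are illustrative), compatibility can be set at creation time and advanced later:

CREATE DISKGROUP data DISK '/dev/raw/raw1'
  ATTRIBUTE 'compatible.asm' = '11.1', 'compatible.rdbms' = '10.1';

ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';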


ASM Disk Group Attributes

ASM Disk Group Attributes

Name                       Property  Values                      Description
-------------------------  --------  --------------------------  --------------------------------------------------
au_size                    C         1|2|4|8|16|32|64 MB         Size of allocation units in the disk group
compatible.rdbms           A, C      Valid database version      Format of messages exchanged between DB and ASM
compatible.asm             A, C      Valid ASM instance version  Format of ASM metadata structures on disk
disk_repair_time           A, C      0 M to 2^32 D               Length of time before removing a disk once offline
template.tname.redundancy  A         UNPROTECT|MIRROR|HIGH       Redundancy of specified template
template.tname.stripe      A         COARSE|FINE                 Striping attribute of specified template

A: ALTER command   C: CREATE command

CREATE DISKGROUP DATA NORMAL REDUNDANCY
  DISK '/dev/raw/raw1','/dev/raw/raw2'
  ATTRIBUTE 'compatible.asm'='11.1';

You can change an ASM disk group’s attributes using the new ATTRIBUTE clause of the CREATE DISKGROUP and ALTER DISKGROUP commands.

• ASM enables the use of different AU sizes that you specify when you create a disk group. The AU size can be 1 MB, 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, or 64 MB.

• RDBMS compatibility: See the slide titled “ASM Disk Group Compatibility” for more information.

• ASM compatibility: See the slide titled “ASM Disk Group Compatibility” for more information.

• You can specify the DISK_REPAIR_TIME in units of minutes (M), hours (H), or days (D). If you omit the unit, the default is H. If you omit this attribute, then the default is 3.6H. You can override this attribute with an ALTER DISKGROUP statement.

• You can specify the redundancy attribute of the specified template.

• You can specify the striping attribute of the specified template.

For each defined disk group, you can look at all the defined attributes by using the V$ASM_ATTRIBUTE fixed view. Note: For 11g ASM instances, the default ASM and Database compatibility values are 10.1.
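For example, you might change the disk repair time and then inspect all attributes through V$ASM_ATTRIBUTE; the disk group name and value below are illustrative:

ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '4.5h';

SELECT g.name AS diskgroup, a.name, a.value
FROM   v$asm_diskgroup g, v$asm_attribute a
WHERE  g.group_number = a.group_number;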


Simplified Diskgroup Commands

Simplified Diskgroup Commands

COMMAND: ALTER DISKGROUP data MOUNT FORCE;
Effect:  Mounts the disk group even if some disks belonging to the disk group are not accessible.

COMMAND: DROP DISKGROUP data FORCE INCLUDING CONTENTS;
Effect:  Enables you to drop a disk group that cannot be mounted; fails if the disk group is mounted anywhere.

COMMAND: ALTER DISKGROUP data MOUNT RESTRICT;
Effect:  When a disk group is mounted in RESTRICTED mode, clients cannot access the files in the disk group.

COMMAND: ALTER DISKGROUP DATA CHECK;
Effect:  Checks all the metadata directories by default.

The disk group CHECK command is simplified to check all the metadata directories by default, and it performs additional consistency checks. The DISK, DISKS IN FAILGROUP, and FILE clauses have been deprecated. The MOUNT command is extended to allow ASM to mount the disk group in restricted mode. In this mode, clients cannot access the files in the disk group, so you can perform all maintenance tasks on a disk group in the ASM instance without any external interaction. MOUNT FORCE succeeds if ASM finds enough disks to form a quorum of the failure group. This is useful when you know that some of the disk group's disks are unavailable and you need to mount the disk group to correct configuration errors; you must take corrective action before DISK_REPAIR_TIME expires to restore those devices. By default, MOUNT is NOFORCE, which requires all disks to be available. The DROP DISKGROUP ... FORCE command first checks whether the disk group is mounted elsewhere; if the disk group is in use, the statement fails. Otherwise, it marks the headers of the disks belonging to the disk group (which cannot be mounted by the ASM instance) as FORMER. When executing the DROP DISKGROUP command with the FORCE option, you must also specify the INCLUDING CONTENTS clause. Note: For more detailed syntax, refer to the Oracle Database 11g documentation set.


ASMCMD Extensions

ASMCMD Extensions

$ asmcmd help

[Diagram: new and extended commands include md_backup, md_restore, cp, and lsdsk; md_restore supports the full, nodg, and newdg modes. The metadata backup captures the disk names and failure groups, the disk group name, the disk group compatibility settings, the templates, and any user-created directories.]

ASMCMD is extended to include ASM metadata backup and restore functionality, which lets you re-create a disk group with the same templates and alias directory structure as a preexisting disk group. In previous releases, you had to re-create the ASM disk group (and any required user directories or templates) manually. ASM metadata backup and restore (AMBR) works in two modes:

• In backup mode, AMBR parses ASM fixed tables and views to gather information about existing disks and failure group configurations, templates, and alias directory structures. It then dumps this metadata information to a text file.

• In restore mode, AMBR reads the previously generated file to reconstruct the disk group and its metadata. You can control AMBR behavior in restore mode to do a full, nodg, or newdg restore. The three submodes differ in whether the disk group itself is created and whether its characteristics are changed.

The lsdsk command lists ASM disk information. This command can run in two modes:

• Connected mode: ASMCMD uses the V$ and GV$ views to retrieve disk information.

• Nonconnected mode: ASMCMD scans disk headers to retrieve disk information, using an ASM disk string to restrict the discovery set. The connected mode is always attempted first.
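A minimal sketch of running lsdsk against a single disk group (the disk group name is illustrative; confirm the option spellings with asmcmd help lsdsk on your release):

$ asmcmd
ASMCMD> lsdsk -d DATA
ASMCMD> lsdsk -k -d DATA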

• The cp command enables you to copy files between ASM disk groups on local instances and remote instances. Here is a possible usage example:


cp +DATA/ORCL/DATAFILE/TBSJFV.256.629730771 +DATA/ORCL/tbsjfv.bak

• In the preceding example, you copy an existing file locally. However you could specify a connect string to copy the file to a remote ASM disk group. The format of copied files is portable between little endian and big endian systems. You can also use the cp command to copy an ASM file to your operating system:

cp +DATA/ORCL/DATAFILE/TBSJFV.256.629730771 /home/oracle/tbsjfv.dbf

• Similarly, you can use the cp command to copy a file from your operating system to an ASM directory, as in this example:

cp /home/oracle/tbsjfv.dbf +data/jfv

• If you want to copy an ASM file from your local ASM instance to a remote ASM instance, you could use the following syntax:

cp +DATA/orcl/datafile/tbsjfv.256.629989893 sys@edcdr12p1.+ASM2:+D2/jfv/tbsjfv.dbf

Note: For more information about the syntax for each of these commands, refer to the Oracle Database Storage Administrator’s Guide 11g Release 1 (11.1).


ASMCMD Extension: Examples

ASMCMDExtension: Examples

1) Back up the metadata of the DATA disk group:

ASMCMD> md_backup -b jfv_backup_file -g data
Disk group to be backed up: DATA#
Current alias directory path: jfv
ASMCMD>

2) The disk group is unintentionally dropped.

3) Re-create the disk group and restore its metadata:

ASMCMD> md_restore -b jfv_backup_file -t full -g data
Disk group to be restored: DATA#
ASMCMDAMBR-09358, Option -t newdg specified without any override options.
Current Diskgroup being restored: DATA
Diskgroup DATA created!
User Alias directory +DATA/jfv created!
ASMCMD>

4) Restore the disk group's database files using RMAN.

You see how to back up ASM metadata using the md_backup command, and how to restore them using the md_restore command. The first statement specifies the –b option and the –g option of the command. This is to define the name of the generated file containing the backup information as well as the disk group that needs to be backed up: jfv_backup_file and data, respectively, in the example above. At step 2, it is assumed that there is a problem on the DATA disk group, and as a result it gets dropped. Before you can restore the database files it contained, you have to restore the disk group itself. At step 3, you initiate the disk group re-creation as well as restore its metadata using the md_restore command. Here, you specify the name of the backup file generated at step 1, as well as the name of the disk group you want to restore, and also the type of restore you want to do. Here, a full restore of the disk group is done because it no longer exists. After the disk group is re-created, you can restore its database files using RMAN, for example.
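Step 4 is an ordinary RMAN restore. A minimal sketch, assuming the lost disk group held the data files of the TBSJFV tablespace and usable backups exist:

RMAN> RESTORE TABLESPACE tbsjfv;
RMAN> RECOVER TABLESPACE tbsjfv;
RMAN> SQL 'ALTER TABLESPACE tbsjfv ONLINE';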


SecureFiles: Overview

[Diagram: SecureFiles components: Compression, Deduplication, Encryption, Data path optimizations, and PL/SQL APIs.]

Oracle Database 11g introduces a completely reengineered large object (LOB) data type to dramatically improve performance, manageability, and ease of application development. The new implementation also offers advanced, next-generation functionality such as intelligent compression and transparent encryption. SecureFiles offers the following components:

Compression: Enables you to explicitly compress SecureFiles to gain disk, I/O, and redo logging savings

Data Path Optimizations: Supports performance optimizations for SecureFiles, including:

• Dynamic use of CACHE and NOCACHE, and avoids polluting the buffer cache for large cache SecureFiles

• SYNC and ASYNC to take advantage of the COMMIT NOWAIT BATCH semantics of transaction durability

• Write Gather Caching, similar to dirty write caches of file servers. It spreads the cost of space allocation, inode updates, and redo logging, and enables large I/Os to disk.

• Distributed lock manager-locking semantics for SecureFiles blocks. This uses a single distributed lock manager lock to cover all SecureFiles LOB blocks, thereby making LOB performance close to that of other file systems.

Deduplication: Automatically detects duplicate SecureFiles LOB data and conserves space by storing only one copy, providing disk storage, I/O, and redo logging savings. Deduplication can be specified at the table level or partition level and does not span partitioned LOBs.
Encryption: Encrypted LOB data is now stored in place and is available for random reads and writes, offering enhanced data security.
Inodes: New storage structures for SecureFiles are designed and implemented to support high-performance (low-latency, high-throughput, concurrent, space-optimized) transactional access to large object data. In addition to improving basic data access, the new storage structures also support rich functionality, all with minimal performance cost, such as:

• Implicit compression and encryption

• Data sharing

• User-controlled versioning

Note: The COMPATIBLE initialization parameter must be set to 11.1 or later to use SecureFiles. The BasicFile (previous LOB) format is still supported under 11.1 compatibility. There is no downgrade capability after 11.1 is set.


Enabling SecureFiles Storage

Enabling SecureFiles Storage

SecureFiles storage can be enabled by using:
• The DB_SECUREFILE initialization parameter with the following valid values:
  – ALWAYS | PERMITTED | NEVER | IGNORE
• The ALTER SESSION | SYSTEM command:

SQL> ALTER SYSTEM SET db_securefile = 'ALWAYS';

The DB_SECUREFILE initialization parameter allows DBAs to determine the usage of SecureFiles. Valid values are:

• PERMITTED: Allows SecureFiles to be created (default)

• NEVER: Disallows SecureFiles from being created going forward

• ALWAYS: Forces all LOBs created going forward to be SecureFiles

• IGNORE: Disallows SecureFiles and ignores any errors caused by forcing BasicFiles with the SecureFiles option.

If NEVER is specified, any LOBs that are specified as SecureFiles are created as BasicFiles. All SecureFiles-specific storage options and features (compression, encryption, and deduplication) cause an exception; the BasicFile defaults are used for any storage options not specified. If ALWAYS is specified, all LOBs created in the system are created as SecureFiles. The LOB must be created in an Automatic Segment Space Management (ASSM) tablespace; otherwise, an error occurs. Any BasicFile storage options specified are ignored. The SecureFiles defaults for all storage can be changed by using the ALTER SYSTEM command as shown in the slide.
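The parameter can also be changed dynamically at the session level; a small sketch using the values documented above:

SQL> ALTER SESSION SET db_securefile = 'NEVER';
SQL> SHOW PARAMETER db_securefile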


Creating SecureFiles

Creating SecureFiles

CREATE TABLE func_spec(id number, doc CLOB ENCRYPT USING 'AES128' ) LOB(doc) STORE AS SECUREFILE (DEDUPLICATE LOB CACHE NOLOGGING);

CREATE TABLE test_spec (id number, doc CLOB) LOB(doc) STORE AS SECUREFILE (COMPRESS HIGH KEEP_DUPLICATES CACHE NOLOGGING);

CREATE TABLE design_spec (id number, doc CLOB) LOB(doc) STORE AS SECUREFILE (ENCRYPT);

CREATE TABLE design_spec (id number, doc CLOB ENCRYPT)

LOB(doc) STORE AS SECUREFILE;

You create SecureFiles with the storage keyword SECUREFILE in the CREATE TABLE statement with a LOB column. The LOB implementation that was available in prior database versions is now referred to as BasicFiles. When you add a LOB column to a table, you can specify whether it should be created as SecureFiles or BasicFiles. If you do not specify the storage type, the LOB is created as BasicFiles to ensure backward compatibility. In the first example in the slide, you create a table called FUNC_SPEC to store documents as SecureFiles. Here you specify that you do not want duplicates stored for the LOB, that the LOB should be cached when read, and that redo should not be generated when updates are performed to the LOB. In addition, you specify that the documents stored in the doc column should be encrypted using the AES128 encryption algorithm. KEEP_DUPLICATES is the opposite of DEDUPLICATE and can be used in an ALTER statement. In the second example, you create a table called TEST_SPEC that stores documents as SecureFiles. For this table, you specify that duplicates may be stored and that the LOBs should be stored in compressed format and cached but not logged. The HIGH compression setting incurs more work but offers better data compression. The default compression is MEDIUM. The compression algorithm is implemented on the server side, which allows for random reads and writes to LOB data (which can be changed via ALTER statements). The third and fourth examples produce the same result: creating a table with a SecureFiles LOB column using the default AES192 encryption.
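After creating the tables, you can verify which LOB columns ended up as SecureFiles. A minimal check against the data dictionary (table names are the ones created in the slide):

SELECT table_name, column_name, securefile
FROM   user_lobs
WHERE  table_name IN ('FUNC_SPEC', 'TEST_SPEC', 'DESIGN_SPEC');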


Altering SecureFiles

Altering SecureFiles

Disable deduplication:
  ALTER TABLE t1 MODIFY LOB(a) (KEEP_DUPLICATES);

Enable deduplication:
  ALTER TABLE t1 MODIFY LOB(a) (DEDUPLICATE LOB VALIDATE);

Enable deduplication on a single partition:
  ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) (DEDUPLICATE LOB);

Disable compression:
  ALTER TABLE t1 MODIFY LOB(a) (NOCOMPRESS);

Enable compression:
  ALTER TABLE t1 MODIFY LOB(a) (COMPRESS HIGH);

Enable compression on SecureFiles within a single partition:
  ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) (COMPRESS HIGH);

Enable encryption using 3DES168:
  ALTER TABLE t1 MODIFY (a CLOB ENCRYPT USING '3DES168');

Enable encryption on a partition:
  ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) (ENCRYPT);

Enable encryption and build the encryption key using a password:
  ALTER TABLE t1 MODIFY (a CLOB ENCRYPT IDENTIFIED BY ghYtp);

DEDUPLICATE/KEEP_DUPLICATES: The DEDUPLICATE option allows you to specify that LOB data that is identical in two or more rows of a LOB column should share the same data blocks. The opposite of this is KEEP_DUPLICATES. Oracle uses a secure hash index to detect duplication and combines LOBs with identical content into a single copy, reducing storage and simplifying storage management.

VALIDATE: Performs a byte-by-byte comparison of the SecureFiles LOB with the SecureFiles LOB that has the same secure hash value, to verify that the LOBs really match before finalizing deduplication. The LOB keyword is optional and is for syntactic clarity only.

COMPRESS/NOCOMPRESS: Enables or disables LOB compression. All LOBs in the LOB segment are altered with the new setting.

ENCRYPT/DECRYPT: Turns LOB encryption on or off. All LOBs in the LOB segment are altered with the new setting. A LOB segment can be altered only to enable or disable LOB encryption; ALTER cannot be used to update the encryption algorithm or the encryption key. The encryption algorithm or encryption key can be updated by using the ALTER TABLE REKEY syntax.

RETENTION: Altering RETENTION affects only the space created after the ALTER TABLE statement is executed.
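Because ALTER ... ENCRYPT cannot change the key or algorithm in place, the change is made with a table-level rekey. A sketch, assuming Transparent Data Encryption is already configured with an open wallet and that AES256 is the desired algorithm:

ALTER TABLE t1 REKEY USING 'AES256';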


Accessing SecureFiles

Accessing SecureFiles

• DBMS_LOB
  – GETOPTIONS
  – SETOPTIONS
• DBMS_SPACE.SPACE_USAGE

DBMS_LOB package: LOBs inherit the LOB column settings for deduplication, encryption, and compression, which can also be configured on a per-LOB level using the LOB locator API. However, the LONG API cannot be used to configure these LOB settings. You must use the following DBMS_LOB package additions for these features:

• DBMS_LOB.GETOPTIONS: Settings can be obtained using this function. An integer corresponding to a predefined constant based on the option type is returned.

• DBMS_LOB.SETOPTIONS: This procedure sets features and allows the features to be set on a per-LOB basis, overriding the default LOB settings. It incurs a round trip to the server to make the changes persistent.

DBMS_SPACE.SPACE_USAGE: The existing SPACE_USAGE procedure is overloaded to return information about LOB space usage. It returns the amount of disk space in blocks used by all the LOBs in the LOB segment. This procedure can be used only on tablespaces that are created with ASSM and does not treat LOB chunks belonging to BasicFiles as used space.

Note: For further details, see Oracle Database PL/SQL Packages and Types Reference.
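A minimal PL/SQL sketch reading a per-LOB setting with DBMS_LOB.GETOPTIONS; the FUNC_SPEC table comes from the earlier example, the id value is hypothetical, and the DBMS_LOB.OPT_* constant name follows the 11.1 PL/SQL Packages and Types Reference, so verify it on your release:

SET SERVEROUTPUT ON
DECLARE
  l_doc  CLOB;
  l_opts PLS_INTEGER;
BEGIN
  SELECT doc INTO l_doc FROM func_spec WHERE id = 1;
  -- Check whether this particular LOB is stored compressed
  l_opts := DBMS_LOB.GETOPTIONS(l_doc, DBMS_LOB.OPT_COMPRESS);
  DBMS_OUTPUT.PUT_LINE('Compression option value: ' || l_opts);
END;
/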


Migrating to SecureFiles

Migrating to SecureFiles

There are two recommended methods for migration of BasicFiles to SecureFiles: partition exchange and online redefinition.

Partition Exchange

• Needs additional space equal to the largest of the partitions in the table

• Can maintain indexes during the exchange

• Can spread the workload out over several smaller maintenance windows

• Requires that the table or partition needs to be offline to perform the exchange

Online Redefinition
• Does not require that the table or partition be offline

• Can be done in parallel

• Requires additional storage equal to the entire table and all LOB segments to be available

• Requires that any global indexes must be rebuilt

If you want to upgrade your BasicFiles to SecureFiles, you need to upgrade by the normal methods typically used to upgrade data (for example, CTAS/ITAS, online redefinition, export/import, column-to-column copy, or using a view and a new column). Most of these solutions mean using two times the disk space used by the data in the input LOB column. However, doing partitioning and taking these actions on a partition-by-partition basis may help lower the disk space required.
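A sketch of the online redefinition approach, assuming a hypothetical table basic_docs(id NUMBER PRIMARY KEY, doc CLOB) whose BasicFile LOB is to become a SecureFiles LOB; the procedure and parameter names follow the DBMS_REDEFINITION documentation and should be checked against your release:

-- Interim table with the target SecureFiles storage
CREATE TABLE basic_docs_int (id NUMBER PRIMARY KEY, doc CLOB)
  LOB(doc) STORE AS SECUREFILE (COMPRESS MEDIUM CACHE);

DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname       => USER,
    orig_table  => 'BASIC_DOCS',
    int_table   => 'BASIC_DOCS_INT',
    col_mapping => 'id id, doc doc');
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname      => USER,
    orig_table => 'BASIC_DOCS',
    int_table  => 'BASIC_DOCS_INT',
    num_errors => l_errors);
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(USER, 'BASIC_DOCS', 'BASIC_DOCS_INT');
END;
/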


Temporary Tablespace Shrink

Temporary Tablespace Shrink

• Sort segment extents are managed in memory when physically allocated.
  – This can be an issue after big sorts are done.
• To release physical space from your disks, shrink temporary tablespaces:
  – Locally managed temporary tablespaces
  – Online operation

CREATE TEMPORARY TABLESPACE temp
  TEMPFILE 'tbs_temp.dbf' SIZE 600m REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m;

ALTER TABLESPACE temp SHRINK SPACE [KEEP 200m];

ALTER TABLESPACE temp SHRINK TEMPFILE 'tbs_temp.dbf';

Huge sorting operations often cause temporary tablespaces to grow significantly often resulting in a large temporary file on disk. In Oracle Database 11g, you can use the ALTER TABLESPACE SHRINK SPACE command to shrink a temporary tablespace, or you can use the ALTER TABLESPACE SHRINK TEMPFILE command to shrink the temporary file. Both commands allow you to specify the optional KEEP clause that defines the lowest size to which the tablespace or temporary file can be shrunk. If you omit the KEEP clause, the database then attempts to shrink the tablespace or temporary file as much as possible (shrinking the used extent space) as long as other storage attributes are satisfied. This operation is performed online. However, if some currently used extents are allocated above the shrink estimation, the system waits until these are released to finish the shrink operation. Note: The ALTER DATABASE TEMPFILE RESIZE command generally fails with an ORA-3297 error because the temporary file contains used data beyond the requested RESIZE value. Compared to the ALTER TABLESPACE SHRINK command, the ALTER DATABASE command does not deallocate sort extents after they are allocated.


Tablespace Option for Creating Temporary Tables

Tablespace Option for Creating Temporary Tables

• Specify the temporary tablespace to use for your global temporary tables.
• Decide the proper temporary extent size.

CREATE TEMPORARY TABLESPACE temp
  TEMPFILE 'tbs_temp.dbf' SIZE 600m REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m;

CREATE GLOBAL TEMPORARY TABLE temp_table (c varchar2(10))
  ON COMMIT DELETE ROWS TABLESPACE temp;

In Oracle Database 11g, you can now specify a TABLESPACE clause when you create a global temporary table. If no tablespace is specified, the global temporary table is created in your default temporary tablespace. In addition, indexes created on the temporary table are also created in the same temporary tablespace as the temporary table. This allows you to decide the proper extent size that reflects your sort-specific usage, especially when you have several types of temporary space usage. You can use the DBA_TEMP_FREE_SPACE view to report the temporary space usage information at the tablespace level. The information is derived from various existing views.
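For example, the tablespace-level usage can be checked as follows (column names per the 11.1 reference):

SELECT tablespace_name, tablespace_size, allocated_space, free_space
FROM   dba_temp_free_space;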


Demonstrations

Demonstrations

For further understanding, you can click the link below for a demonstration on:
• Using SecureFiles to Improve Performance, Maximize Storage, and Enhance Security

Click the following link to further understand:

• Using SecureFiles to Improve Performance, Maximize Storage, and Enhance Security[http://www.oracle.com/technology/obe/11gr1_db/datamgmt/securefile/securefile.htm]


Summary

Summary

In this lesson, you should have learned how to:
• Use ASM Fast Mirror Resync to improve disk failure recovery times
• Set up ASM Fast Mirror Resync using SQL
• Configure preferred mirror groups using the ASM_PREFERRED_READ_FAILURE_GROUPS parameter
• Use the SYSASM privilege to manage ASM disks
• Use the compatibility modes for disk groups
• Use ASMCMD command extensions to back up and restore disk groups
• Discuss LOB improvements using SecureFiles
• Use temporary tablespace enhancements



High Availability: Using the Data Recovery Advisor and Flashback


Chapter 3
High Availability: Using the Data Recovery Advisor and Flashback

High Availability: Using the Data Recovery Advisor and Flashback


Objectives

Objectives

After completing this lesson, you should be able to:
• Perform proactive failure checks
• Query the Data Recovery Advisor views
• Enable tracking of table data by using Flashback Data Archive
• Back out data changes by using Flashback Transaction


Repairing Data Failures

Repairing Data Failures

Oracle Database 11g offers the following advancements in the repair of data failures:
• Data Recovery Advisor analyzes failures based on symptoms and determines repair strategies.
• Data Guard provides failover to a standby database, so that your operations are not affected by down time.
• Flashback technology protects the life cycle of a row and assists in repairing logical problems.

When your database has a problem, analyzing the underlying cause and choosing the correct solution is often the biggest component of database down time. Oracle Database 11g offers several new and enhanced tools for analyzing and repairing database problems.

• Data Recovery Advisor: A built-in tool that automatically diagnoses data failures and reports the appropriate repair option. The Data Recovery Advisor assists you to perform the correct repair for a failure. You can choose to repair manually or request the Data Recovery Advisor to execute the repair for you.

• Data Guard: By allowing failover to a standby database (that has its own copy of the data), Data Guard allows you to continue operation if the primary database gets a data failure. After it has failed over to the standby, you can repair the failed database (old primary) without worrying about the impact on your applications. Further enhancements to Data Guard are addressed in the lesson titled “RMAN and Data Guard Enhancements.”

You can also use the Oracle Database 11g Flashback technology to repair logical problems:

• Flashback Data Archive: Maintains persistent changes of table data for a specified period of time, allowing you to access the archived data

• Flashback Transaction: Allows you to back out of a transaction and all conflicting transactions with a single click


Data Recovery Advisor

Data Recovery Advisor

• Offers fast detection, analysis, and repair of failures
• Minimizes down time and run-time failures
• Alleviates disruptions for users
• Can be implemented using:
  – EM GUI
  – RMAN command line

Workflow:
1. Assess data failures.
2. List failures by severity.
3. Advise on repair.
4. Choose and execute repair.
5. Perform proactive checks.

The Data Recovery Advisor automatically gathers data failure information when an error is encountered. In addition, it proactively checks for failures. In this mode, it can potentially detect and analyze data failures before a database background process discovers the corruption and signals an error. However, database repairs are always under the control of the database administrator (DBA). The Data Recovery Advisor handles both catastrophic (inability to start up your database) down time and run-time errors (block corruptions in data files). You can use the Data Recovery Advisor from Enterprise Manager (EM) Database Control and Grid Control. You can also use it from the RMAN command line.

The automatic diagnostic workflow in Oracle Database 11g performs the workflow steps for you. With the Data Recovery Advisor, you need to only initiate an advise and a repair.

1. The Health Monitor automatically executes checks and logs failures and their symptoms as “findings” into the Automatic Diagnostic Repository (ADR). For more details about the Health Monitor, see the Oracle Database 11g: Change Management Overview eStudy or the Oracle Database 11g documentation.

2. The Data Recovery Advisor consolidates findings into failures. It lists the results of previously executed assessments with failure severity. Failures are listed in decreasing priority order, with the same priority failures listed in increasing time-stamp order.


3. When you ask for repair advice on a failure, the Data Recovery Advisor maps failures to automatic and manual repair options, checks basic feasibility, and presents you with the repair advice.

4. You can choose to manually execute a repair or request the Data Recovery Advisor to do it for you.

5. In addition to the automatic, primarily reactive checks of the Health Monitor and Data Recovery Advisor, Oracle recommends that you additionally use the RMAN VALIDATE DATABASE command as a proactive check.

Note: In the current release, Data Recovery Advisor supports single-instance databases. Oracle Real Application Clusters (RAC) databases are not supported.
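From the RMAN command line, the same workflow maps to a small set of commands; a minimal sketch:

RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;
RMAN> VALIDATE DATABASE;   # proactive check (step 5)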


Listing Data Failures

Listing Data Failures

On Enterprise Manager’s Perform Recovery page, click the “Advise and Recover” button to access the “View and Manage Failures” page, which is the home page of the Data Recovery Advisor. The example in the slide shows how the Data Recovery Advisor lists data failures and details. Activities that you can initiate include advising, setting priorities, and closing failures. The underlying RMAN LIST FAILURE command can also display data failures and details. Failure assessments are not initiated here; they were previously executed and stored in the ADR.

Failures are listed in decreasing priority order: CRITICAL , HIGH, and LOW. Failures with the same priority are listed in increasing time-stamp order.


Advising on Repair

Advising on Repair

[Screenshot callouts: 1, 2a, 2b; legend: (1) after manual repair, (2) automatic repair]

After you click the Advise button on the “View and Manage Failures” page, the Data Recovery Advisor generates a manual checklist. Two types of failures can appear:

• Failures that require human intervention. An example is a connectivity failure when a disk cable is not plugged in.

• Failures that are repaired faster if you can undo a previous erroneous action. For example, if you renamed a data file by error, it is faster to rename it back rather than initiate RMAN restoration from backup.

You can initiate the following actions:

• Click “Re-assess Failures” after you perform a manual repair. Failures that are resolved are implicitly closed; any remaining failures are displayed on the “View and Manage Failures” page.

• Click “Continue with Advise” to initiate an automated repair. When the Data Recovery Advisor generates an automated repair option, it generates a script that shows you how RMAN plans to repair the failure. Click Continue if you want to execute the automated repair. If you do not want the Data Recovery Advisor to automatically repair the failure, you can use this script as a starting point for your manual repair. The operating system location of the script is printed at the end of the command output. You can examine this script, customize it if necessary, and execute it manually.


Setting Corruption-Detection Parameters

Setting Corruption-Detection Parameters

DB_ULTRA_SAFE          OFF           DATA_ONLY  DATA_AND_INDEX
DB_BLOCK_CHECKING      OFF or FALSE  MEDIUM     FULL or TRUE
DB_BLOCK_CHECKSUM      TYPICAL       FULL       FULL
DB_LOST_WRITE_PROTECT  TYPICAL       TYPICAL    TYPICAL

You can use the DB_ULTRA_SAFE parameter for easy manageability of the following initialization parameters: DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT. If you set any of these parameters explicitly, then your values remain in effect. The DB_ULTRA_SAFE parameter changes only the default values for these parameters. You can intensify the checking for block corruption by enabling the DB_ULTRA_SAFE parameter (default: OFF). Setting the DB_ULTRA_SAFE parameter results in increased system overhead because of the more intensive checks. The amount of overhead is related to the number of blocks changed per second; so it cannot be easily quantified. For a “high-update” application, you can expect a significant increase in CPU, likely in the 10% to 20% range, but possibly higher. This overhead can be alleviated by allocating additional CPUs. In summary, the initialization parameters have the following effects:

• DB_BLOCK_CHECKING: Prevents memory and data corruption

• DB_BLOCK_CHECKSUM: Detects I/O storage and disk corruption

• DB_LOST_WRITE_PROTECT: Detects nonpersistent writes on physical standby

• DB_ULTRA_SAFE: Specifies defaults for corruption detection

For further details, see the Oracle Database Reference.
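Because DB_ULTRA_SAFE only changes the defaults of the other parameters, it is set in the server parameter file and takes effect at the next restart; a minimal sketch:

ALTER SYSTEM SET db_ultra_safe = DATA_ONLY SCOPE=SPFILE;
-- After restarting the instance, verify the derived defaults:
SHOW PARAMETER db_block_checking
SHOW PARAMETER db_block_checksum
SHOW PARAMETER db_lost_write_protect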


Flashback Data Archive: Overview

Flashback Data Archive: Overview

Transparently tracks historical changes to all Oracle data in a highly secure and efficient manner
• Secure
  – No possibility of modifying historical data
  – Retained according to your specifications
  – Automatically purged based on your retention policy
• Efficient
  – Special kernel optimizations to minimize the performance overhead of capturing historical data
  – Stored in compressed form in tablespaces to minimize storage requirements
  – Completely transparent to applications
  – Easy to set up

A flashback data archive is a logical container for storing historical information. It is stored in one or more tablespaces and tracks the history for one or more tables. You specify a retention duration for each flashback data archive. You can group the historical table data by your retention requirements in a flashback data archive. Multiple tables can share the same retention and purge policies. You can use flashback data archives to automatically track and archive the data in tables enabled for Flashback Data Archive. This ensures that flashback queries obtain SQL-level access to the versions of database objects without getting a “snapshot too old” error. Enabling tracking on a table also enables tracking of large object (LOB) columns and the metadata (dictionary) information about the tracked table. Oracle Database 11g has been specifically enhanced to track history with minimal performance impact and to store historical data in compressed form. This efficiency cannot be duplicated by your own triggers, which also cost time and effort to set up and maintain. Operations that invalidate history or prevent historical capture are not allowed (for example, dropping or truncating a table). Flashback data archives are useful for compliance, audit reports, data analysis, and decision support systems.


Flashback Data Archive: Overview

Flashback Data Archive: Overview

[Diagram: DML operations change data in the buffer cache and generate undo data; the FBDA background process writes the original data to flashback data archives stored in tablespaces, for long-retention requirements that exceed undo. Example: three flashback data archives with retention of 1 year, 2 years, and 5 years.]

A flashback data archive is a historical data store. Oracle Database 11g automatically tracks and archives the data in tables enabled for Flashback Data Archive with a new Flashback Data Archive background process, FBDA. You use this feature to satisfy long-retention requirements that exceed the undo retention. Flashback data archives ensure that flashback queries obtain SQL-level access to the versions of database objects without getting a “snapshot too old” error. A flashback data archive consists of one or more tablespaces (or parts thereof). You can have multiple flashback data archives. Each is configured with a specific retention duration. Based on your retention duration requirements, you should create different flashback data archives—for example, one for all records that must be kept for one year, another for all records that must be kept for two years, and so on. FBDA asynchronously collects and writes original data to a flashback data archive. It does not include the original indexes because your retrieval pattern of historical information might be quite different from your retrieval pattern of current information.

Note: You might want to create appropriate indexes just for the duration of historical queries.


Flashback Data Archive Comparison

Flashback Data Archive Comparison

Flashback Data Archive
  – Main benefit: Access to data at any point in time without changing the current data
  – Granularity: Table
  – Access point-in-time: Any number per table
  – Operation: Online operation, tracking enabled, minimal resource usage

Flashback Database
  – Main benefit: Physically moves the entire database back in time
  – Granularity: Database
  – Access point-in-time: One per database
  – Operation: Offline operation, requires preconfiguration and resources

How the Flashback Data Archive technology compares with Flashback Database:

• Flashback Data Archive offers the ability to access the data as of any point in time without actually changing the current data. This is in contrast with Flashback Database, which takes the database physically back in time.

• Tracking has to be enabled for historical access, whereas Flashback Database requires preconfiguration. Flashback Database is an offline operation that requires resources. Flashback Data Archive is an online operation (historical access seamlessly coexists with current access). Because a new background process is used, it has almost no effect on the existing processes.

• Flashback Data Archive is enabled at the granularity of a table, whereas Flashback Database works only at the database level.

• With Flashback Data Archive, you can go back to different points in time for different rows of a table or for different tables. With Flashback Database, you can go back to only one point in time for a particular invocation.


Creating a Flashback Data Archive: Example

Creating a Flashback Data Archive: Example

1) Creating a flashback data archive:

CREATE FLASHBACK ARCHIVE fla1 TABLESPACE tbs1 QUOTA 10G RETENTION 5 YEAR;

2) Enabling history tracking for a specific table:

ALTER TABLE hr.employees FLASHBACK ARCHIVE fla1;

3) Viewing the historical data:

SELECT product_number, product_name, count FROM inventory AS OF TIMESTAMP
  TO_TIMESTAMP('2007-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS');

You must grant the FLASHBACK ARCHIVE ADMINISTER system privilege to your archive administrator to allow the execution of the following statements: • CREATE FLASHBACK ARCHIVE • ALTER FLASHBACK ARCHIVE • DROP FLASHBACK ARCHIVE

By default, FLASHBACK ARCHIVE is off for all tables. You use the ALTER TABLE … FLASHBACK ARCHIVE statement to enable or disable Flashback Data Archive for a table. The basic workflow to create and use a flashback data archive has three steps (as shown above). To perform example 2, you must have the FLASHBACK ARCHIVE object privilege on the FLA1 flashback data archive and ownership privileges on the HR.EMPLOYEES table. A table can be tracked in only one flashback data archive. You can view the historical data with an AS OF query. Note: Only limited DDL commands are supported for the Flashback Data Archive mechanism. For example, you may use the ADD COLUMN clause but not the DROP COLUMN clause. If you attempt to execute a nonsupported command, you receive an error.
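The privilege grants described above look like this; the grantee names are illustrative:

GRANT FLASHBACK ARCHIVE ADMINISTER TO fda_admin;   -- archive administrator
GRANT FLASHBACK ARCHIVE ON fla1 TO hr;             -- lets HR enable tracking in FLA1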


Configuring a Default Flashback Data Archive: Example

Configuring a Default Flashback Data Archive: Example

Using a default flashback data archive:

1) Create a default flashback data archive:

CREATE FLASHBACK ARCHIVE DEFAULT fla2 TABLESPACE tbs1 QUOTA 10G RETENTION 2 YEAR;

2) Enable history tracking for a table. The name of the flashback data archive is not needed because the default is used:

ALTER TABLE stock_data FLASHBACK ARCHIVE;

Disable history tracking:

ALTER TABLE stock_data NO FLASHBACK ARCHIVE;

In the FLASHBACK ARCHIVE clause, you can specify the flashback data archive where the historical data for the table will be stored. By default, the system has no flashback data archive. You can create a default flashback data archive in one of two ways:

• Specify the name of an existing flashback data archive in the SET DEFAULT clause of the ALTER FLASHBACK ARCHIVE statement.

• Use the DEFAULT keyword in the CREATE FLASHBACK ARCHIVE statement when you create a flashback data archive.

You enable and disable flashback archiving for a table with the ALTER TABLE command. You can assign the internal archive table to a specific flashback data archive by specifying the flashback data archive name. If the name is omitted, the default flashback data archive is used. You specify the NO FLASHBACK ARCHIVE option to disable archiving of a table.


Using Flashback Data Archive: Examples

Using Flashback Data Archive: Examples

Optionally, adding space:
  ALTER FLASHBACK ARCHIVE fla1 ADD TABLESPACE tbs3 QUOTA 10G;

Optionally, changing retention time:
  ALTER FLASHBACK ARCHIVE fla1 MODIFY RETENTION 2 YEAR;

Optionally, purging data:
  ALTER FLASHBACK ARCHIVE fla1 PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' day);

Optionally, dropping a flashback data archive:
  DROP FLASHBACK ARCHIVE fla1;

In the third example above, all historical data older than one day is purged from the FLA1 flashback data archive. Normally, purging is done automatically, on the day after your retention time expires. You can also override this for ad hoc clean up. The fourth example drops the FLA1 flashback data archive, which deletes its historical data but does not drop its tablespaces. The ALTER FLASHBACK ARCHIVE command enables you to:

• Change the retention time of a flashback data archive

• Purge some or all of its data • Add, modify, and remove tablespaces

You can use the following dynamic data dictionary view to track flashback data archive tables:

• USER_FLASHBACK_ARCHIVED_TABLES: Contains information about a user’s tables enabled for Flashback Archive. You see only those entries for which you have both alter privilege (or own) on the table and Flashback Archive Object privilege on the flashback archive in which the table has been archived.

Note: Removing all tablespaces of a flashback data archive causes an error.


Flashing Back a Transaction

Flashing Back a Transaction

Oracle Database 11g allows you to flash back a transaction using Enterprise Manager (EM) or the command line.
• EM calls the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE option.
• Supplemental logging must be enabled.
• You must have the SELECT, FLASHBACK, and data manipulation language (DML) privileges on all affected tables.

You must have SELECT, FLASHBACK, and DML privileges on all affected tables to flash back or back out a transaction. If the DBMS_FLASHBACK.TRANSACTION_BACKOUT PL/SQL call finishes successfully, the transaction has no dependencies and a single transaction is backed out successfully. The following conditions of use apply:

• Transaction backout is not supported across conflicting DDL.

• Transaction backout inherits data type support from LogMiner. See the Oracle Database 11g documentation for supported data types.

After you discover the need for transaction backout, you should start the backout operation as soon as possible for better performance because large redo logs and high transaction rates result in slower transaction backout operations.

You can optionally provide a transaction name for the backout operation that facilitates later auditing. A system-generated name is generated for you by default.

The DBA performs the following setup steps in SQL*Plus:

alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;
grant execute on dbms_flashback to hr;
grant select any transaction to hr;


Flashback Transaction Wizard: Sample

Flashback Transaction Wizard: Sample

You select the required table you want to flash back from the Schema > Tables region of EM, and then select Flashback Transaction in the Actions drop-down list. The Flashback Transaction Wizard is invoked for your selected table. The Flashback Transaction: Perform Query page is then displayed. You need to specify the appropriate time range and add any query parameters. Flashback Transaction and LogMiner are seamlessly integrated in EM. You can also use the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure, which is described in the PL/SQL Packages and Types Reference. Essentially, you take an array of transaction IDs as starting points for your dependency search. For example:

CREATE TYPE XID_ARRAY AS VARRAY(100) OF RAW(8);

CREATE OR REPLACE PROCEDURE TRANSACTION_BACKOUT(
  numberOfXIDs NUMBER,                     -- number of transactions passed as input
  xids         XID_ARRAY,                  -- the list of transaction IDs
  options      NUMBER default NOCASCADE,   -- back out dependent transactions
  timeHint     TIMESTAMP default MINTIME   -- hint on the transaction start time
);
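A minimal invocation sketch of the documented procedure; the transaction ID is a placeholder, and the SYS.XID_ARRAY type and DBMS_FLASHBACK.NOCASCADE constant should be verified against the 11.1 PL/SQL Packages and Types Reference:

DECLARE
  l_xids sys.xid_array := sys.xid_array(HEXTORAW('05001E0063050000'));  -- placeholder XID
BEGIN
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(
    1,                                       -- number of transactions
    l_xids,                                  -- list of transaction IDs
    DBMS_FLASHBACK.NOCASCADE,                -- backout option
    SYSTIMESTAMP - INTERVAL '1' HOUR);       -- time hint on transaction start
  -- Review DBA_FLASHBACK_TXN_REPORT, then COMMIT or ROLL BACK explicitly.
END;
/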


Flashback Transaction Wizard: Sample

Flashback Transaction Wizard: Sample

The Flashback Transaction: Select Transaction page displays the transactions according to your previously entered specifications. You need to display the transaction details first to confirm that you are flashing back the correct transaction. Then select the offending transaction and continue with the wizard. The Flashback Transaction Wizard generates the Undo script and flashes back the transaction, giving you control to COMMIT this flashback operation. Before you commit the transaction, you can use the Execute SQL area at the bottom of the Flashback Transaction: Result page to view what the result of your COMMIT will be.


Validating Dependencies

Validating Dependencies

The TRANSACTION_BACKOUT procedure checks dependencies such as:
• Write-after-write (WAW)
• Primary and unique constraints

[Diagram: transactions TX1 and TX2 touching rows R1 through R5; row R1 is modified by both transactions.]

A transaction can have a WAW dependency, which means that a transaction updates or deletes a row that has been inserted or updated by a dependent transaction. This can occur, for example, in a master/detail relationship of primary (or unique) and mandatory foreign key constraints. In the scenario in the slide, both transactions TX1 and TX2 update R1 row, so it is a conflicting row. The T2 transaction has a WAW dependency on the T1 transaction. With the NONCONFLICT_ONLY option, R2 and R3 are backed out because there is no conflict. With the NOCASCADE_FORCE option, all three rows (R1, R2, and R3) are backed out. The Flashback Transaction Wizard works as follows:

• If the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE option fails due to dependent transactions, then you can change the recovery options.

• With the NONCONFLICT_ONLY option, nonconflicting rows within a transaction are backed out, maintaining database consistency (although the transaction atomicity is broken for the sake of data repair).

• You can use the NOCASCADE_FORCE option to forcibly back out given transactions, ignoring any dependent transactions. Any compensating DML commands for the given transactions are executed in reverse order of their commit times. If no constraints break, you can proceed to commit the changes, or else roll back.

• To initiate the complete removal of the given transactions and all their dependents in a post order fashion, use the CASCADE option.


Dependency Report

Dependency Report

• The dependency report is generated in the following views: – DBA_FLASHBACK_TXN_STATE

– DBA_FLASHBACK_TXN_REPORT

• Review the dependency report, which shows all transactions backed out.

• You then need to explicitly commit or roll back to make the changes permanent.

SQL> SELECT * FROM DBA_FLASHBACK_TXN_STATE;

COMPENSATING_XID XID              BACKOUT_MODE DEPENDENT_XID        USER#
---------------- ---------------- ------------ ---------------- ---------
0500150069050000 03000000A9050000 4                                     0
0500150069050000 05001E0063050000 4            03000000A9050000         0

If you are not using EM, you need to validate the dependency report and take appropriate action. The DBA_FLASHBACK_TXN_STATE view contains the current state of a transaction (if it is alive in the system or effectively backed out). This table is atomically maintained with the compensating transaction. For each compensating transaction, there could be multiple rows, where each row provides the dependency relation between the transactions that have been compensated by the compensating transaction. The DBA_FLASHBACK_TXN_REPORT view provides detailed information about all compensating transactions that have been committed in the database. Each row in this view is associated with one compensating transaction. For a detailed description of these tables, see the Oracle Database Reference.


Demonstrations

Demonstrations

For further understanding, you can click the links below for demonstrations on:
• Using the Data Recovery Advisor
• Backing out Transactions with Flashback

Click the following links to further understand:

• Using the Data Recovery Advisor[http://www.oracle.com/technology/obe/11gr1_db/ha/dra/dra.htm]

• Backing out Transactions with Flashback[http://www.oracle.com/technology/obe/11gr1_db/ha/flatxn/flatxn.htm]


Summary

Summary

In this lesson, you should have learned how to:
• Perform proactive failure checks by using the Data Recovery Advisor
• Enable tracking of table data by using Flashback Data Archive
• Back out data changes by using Flashback Transaction


High Availability: RMAN and Data Guard Enhancements


Chapter 4
High Availability: RMAN and Data Guard Enhancements

High Availability: RMAN and Data Guard Enhancements


Objectives

Objectives

After completing this lesson, you should be able to:
• Configure archive log deletion policies
• Duplicate active databases using the Oracle network (without backups)
• Back up large files in multiple sections
• Create archival backups for long-term storage
• Query a physical standby database while redo is applied
• Control the location of SQL Apply event information
• Set the retention target for remote archived log files
• Use the logical standby database flash recovery area
• Create a snapshot standby database

RMAN Enhancements in Oracle Database 11g

• Enhanced archive log deletion policies
• Database duplication made “network aware”
• Intrafile parallel backup and restore for very large files
• Archival backups for long-term storage
• Merging catalogs for enhanced recovery
• RMAN data recovery commands
• RMAN security enhancements

Enhanced Configuration of Deletion Policies
Archived redo logs are eligible for deletion only when not needed by required consumers such as Data Guard, Streams, and Flashback Database. When you CONFIGURE an archived log deletion policy, the configuration applies to all archiving destinations, including the flash recovery area. Both BACKUP ... DELETE INPUT and DELETE ... ARCHIVELOG use this configuration, as does the flash recovery area.

Active Database Duplication
You can use the “network-aware” DUPLICATE command to create a duplicate or standby database over the network without a need for preexisting database backups.

Intrafile Parallel Backup and Restore for Very Large Files
Backups of large data files now use multiple parallel server processes and “channels” to efficiently distribute the workload. The use of multiple sections improves the performance of backups.

Archival Backups for Long-Term Storage
Long-term backups, created with the KEEP option, no longer require all archived logs to be retained when the backup is online. Instead, only the archived logs needed to recover the specified data files to a consistent point in time are backed up (along with the specified data files and a control file). This functionality reduces the archive log backup storage needed for online, long-term KEEP backups, and simplifies the command by using a single format string for all the files needed to restore and recover the backup.

Merging Catalogs
The new IMPORT CATALOG command allows one catalog schema to be merged into another, either the whole schema or just the metadata for specific databases in the catalog. This simplifies catalog management by allowing separate catalog schemas, created in different versions, to be merged into a single catalog schema.

RMAN Data Recovery Commands
Use the LIST | CHANGE | ADVISE | REPAIR FAILURE RMAN commands to implement data recovery.

RMAN Security Enhancements
RMAN has been extended to support backup shredding, which allows the database administrator (DBA) to delete the encryption key of transparent encrypted backups without any physical access to the backup media. The RMAN catalog has also been enhanced to create virtual private RMAN catalogs for groups of databases and users.

Duplicating a Database

• Use with the network (no backups required)
  – Includes a customized SPFILE

• Use Enterprise Manager or the RMAN command line.

[Figure: Active database duplication over TCP/IP from the active source (TARGET) database to the destination (AUXILIARY) database]

Oracle Database 11g greatly simplifies the process of duplicating your database for testing purposes or to act as a standby. You can use the “network-aware” DUPLICATE command to create a duplicate or standby database over the network without a need for pre-existing database backups. You simply instruct the source database to make online image copies and archived log copies directly to an auxiliary instance by using Enterprise Manager or the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command. The database files come from a TARGET or source database. They are copied over an interinstance network connection to a destination or AUXILIARY instance. RMAN then uses a memory script (one that is contained only in memory) to complete recovery and open the database.

Active Database Duplication: Selecting the Source

In Enterprise Manager, you select Data Movement > Clone Database to duplicate your database.

• Oracle Net must be aware of the source and destination databases. The FROM ACTIVE DATABASE clause implies network action.

• If the source database is open, it must have archive logging enabled.

• If the source database is in mounted state (and not a standby), the source database must have been shut down cleanly.

• Availability of the source database is not affected by active database duplication. But the source database instance provides CPU cycles and network bandwidth.

Password files are copied to the destination. The destination must have the same SYS user password as the source. Therefore, at the beginning of the active database duplication process, both databases (source and destination) must have password files. When you duplicate a standby database, the password file from the primary database overwrites the current (temporary) password file on the standby database. When you use the command line and do not duplicate a standby database, you need to use the PASSWORD FILE clause (with the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command) when you want to copy the password file.

RMAN DUPLICATE Command

DUPLICATE TARGET DATABASE
  TO aux
  FROM ACTIVE DATABASE
  SPFILE
    PARAMETER_VALUE_CONVERT '/u01', '/u31'
    SET SGA_MAX_SIZE = 200M
    SET SGA_TARGET = 125M
    SET LOG_FILE_NAME_CONVERT = '/u01','/u31'
  DB_FILE_NAME_CONVERT = '/u01','/u31';

The example assumes that you have previously connected to both the source or target and the destination or AUXILIARY instance, which have a common directory structure but different top-level disks. The destination instance uses automatically configured channels.

• This RMAN DUPLICATE command duplicates an open database.

• The FROM ACTIVE DATABASE clause indicates that you are not using backups (it implies network action) and that the target database is either open or mounted.

• The SPFILE clause indicates that the SPFILE will be restored and modified before opening the database.

• The repeating SET clause essentially issues an ALTER SYSTEM SET param = value SCOPE=SPFILE command. You can provide as many of these as necessary.

In Oracle Database 11g, the SPFILE is copied in the duplication process. You merely provide your list of parameters and desired values, and the system sets them accordingly. Note: The case must match for PARAMETER_VALUE_CONVERT. With the FILE_NAME_CONVERT parameters, pattern matching is operating-system specific.
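For context, a typical way to establish those connections before running DUPLICATE is sketched below; the net service names and passwords are placeholders, not part of the course example.

$ rman
RMAN> CONNECT TARGET sys/oracle@prod
RMAN> CONNECT AUXILIARY sys/oracle@aux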

Creating a Standby Database with the DUPLICATE Command

DUPLICATE TARGET DATABASE FOR STANDBY
  FROM ACTIVE DATABASE
  SPFILE
    PARAMETER_VALUE_CONVERT '/u01', '/u31'
    SET "DB_UNIQUE_NAME"="FOO"
    SET SGA_MAX_SIZE = "200M"
    SET SGA_TARGET = "125M"
    SET LOG_FILE_NAME_CONVERT = '/u01','/u31'
  DB_FILE_NAME_CONVERT = '/u01','/u31';

The example assumes that you are connected to the target and auxiliary instances and that the two environments have the same disk and directory structure. The FOR STANDBY FROM ACTIVE DATABASE clause initiates the creation of a standby database without using backups. The example uses 'u01' as the disk of the source and 'u31' as the top-level destination directory. All parameter values that match your choice (with the exception of the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters) are replaced in the SPFILE .

Parallel Backup and Restore for Very Large Files

• Are created by RMAN with the SECTION SIZE value
• Are processed independently (serially or in parallel)
• Produce multipiece backup sets
• Improve performance of the backup

Oracle data files can now be up to 128 TB in size. Previously, the smallest unit of RMAN backup was an entire file. In Oracle Database 11g, RMAN can break up large files into sections, and back up and restore these sections independently when you specify the SECTION SIZE option. This offers the advantage of being able to parallelize the backup and restore functionalities. Each file section is a contiguous range of blocks in a file. Each file section can be processed independently, either serially or in parallel. Backing up a file in separate sections improves both the performance and restarting of large file backups. A multisection backup job produces a multipiece backup set. Each piece contains one section of the file. All sections of a multisection backup, except perhaps for the last section, are of the same size. There are a maximum of 256 sections per file. You should not apply large values of parallelism to back up a large file that resides on a small number of disks.
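A minimal sketch of a multisection backup follows; the data file number, section size, and number of channels are illustrative only.

RMAN> RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
  BACKUP SECTION SIZE 500M DATAFILE 5;
}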

Using RMAN Multisection Backups

New option for the BACKUP and VALIDATE DATAFILE commands:

SECTION SIZE <integer> [M | K | G]

[Figure: One large data file divided into Sections 1 through 4, each backed up by its own channel (Channels 1 through 4)]

The BACKUP and VALIDATE DATAFILE RMAN commands accept a new SECTION SIZE option to support multisection backups. Specify your planned size for each backup section. The option is both a backup-command and backup-spec level option so that you can apply different section sizes to different files in the same backup job. Viewing metadata about your multisection backup:

• The V$BACKUP_SET and RC_BACKUP_SET views have a MULTI_SECTION column, which indicates whether this is a multisection backup or not.

• The V$BACKUP_DATAFILE and RC_BACKUP_DATAFILE views have a SECTION_SIZE column, which specifies the number of blocks in each section of a multisection backup. Zero means a whole-file backup.

Note: You must set the COMPATIBLE parameter to at least 11.0 to use this functionality because earlier releases cannot restore multisection backups.

Creating Archival Backups

KEEP {FOREVER | UNTIL TIME [=] 'date_string'}
NOKEEP
[RESTORE POINT rsname]

Oracle Database 11g extends the RMAN KEEP command to create archival backups of your database (as well as of tablespaces) that satisfy business or legal requirements. RMAN does not apply the regular retention policies to such a backup. You should place your archival backup in a different long-term storage area, rather than in the flash recovery area. You can use the Schedule Customized Backup Wizard in Enterprise Manager to create archival backups. Alternatively, you can use the KEEP option of RMAN commands. The KEEP option is an attribute of the backup set (not of the individual backup piece) or copy, and it overrides any configured retention policy for the backup. You can retain archival backups so that they are considered obsolete after a specified time (KEEP UNTIL) or are never considered obsolete (KEEP FOREVER). The KEEP FOREVER clause requires the use of a recovery catalog. The RESTORE POINT clause creates a “consistency” point in the control file. It assigns a name to a specific SCN; the SCN is captured just after the data-file backup completes. The archival backup can be restored and recovered to this point in time, enabling the database to be opened. In contrast, the UNTIL TIME clause specifies the date until which the backup must be kept. RMAN includes the data files, the archived log files (only those needed to recover an online backup), the relevant autobackup files, the control file, and the SPFILE. All these files must go to the same media family (or group of tapes) and have the same KEEP attributes.
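For example, a long-term backup kept for one year might be created as sketched below; the tag, restore point name, and retention period are illustrative, and in practice a FORMAT or channel setting would direct the pieces outside the flash recovery area.

RMAN> BACKUP DATABASE
        TAG 'quarterly_archival'
        KEEP UNTIL TIME 'SYSDATE + 365'
        RESTORE POINT end_of_q4;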

Archival Database Backup

1) Archiving a database backup:

CONNECT TARGET /
CONNECT CATALOG rman/rman@catdb
CHANGE BACKUP TAG 'consistent_db_bkup'
  KEEP FOREVER RESTORE POINT END_OF_2006;

2) Changing the status of a database copy:

CHANGE COPY OF DATABASE CONTROLFILE NOKEEP;

The CHANGE command changes the exemption status of a backup or copy in relation to the configured retention policy. For example, you can specify CHANGE ... NOKEEP to make a backup that is currently exempt from the retention policy eligible for the OBSOLETE status.

The first example changes a consistent backup into an archival backup, which you plan to store offsite. Because the database is consistent and, therefore, requires no recovery, you do not need to save archived redo logs with the backup. The second example specifies that any long-term image copies of data files and control files should lose their exempt status and so become eligible to be obsolete according to the existing retention policy:

• Deprecated clauses: KEEP [LOGS | NOLOGS]

• Preferred syntax: KEEP RESTORE POINT <rsname>

Note: The RESTORE POINT option is not valid with CHANGE. You cannot use CHANGE ... UNAVAILABLE or KEEP for files stored in the flash recovery area.

IMPORT CATALOG RMAN Command

Oracle Database 11g extends the recovery catalog functionality by allowing the merge of recovery catalogs.

• Importing metadata for all registered databases:

  IMPORT CATALOG cat102/oracle@srcdb;

• Importing metadata for two registered databases:

  IMPORT CATALOG cat92/oracle@catdb DBID=1423241, 1423242;

• Importing metadata from multiple catalogs:

  IMPORT CATALOG cat102/rman@srcdb;
  IMPORT CATALOG cat101/rman@srcdb;
  IMPORT CATALOG cat92/rman@srcdb NO UNREGISTER;

With the IMPORT CATALOG RMAN command, you import the metadata from one recovery catalog schema into a different catalog schema. If you created catalog schemas of different versions to store metadata for multiple target databases, then this command enables you to maintain a single catalog schema for all databases. RMAN must be connected to the destination recovery catalog, which is the catalog into which you want to import your catalog data. The <connectStringSpec> is the source recovery catalog connect string. The version of the source recovery catalog schema must be equal to the current version of the RMAN executable. If needed, upgrade the source catalog to the current RMAN version.

• DBID: You can specify a list of database IDs whose metadata should be imported from the source catalog schema. By default, RMAN merges metadata for all database IDs from the source catalog schema into the destination catalog schema.

• DB_NAME: You can specify the list of database names whose metadata should be imported. If the database name is ambiguous, RMAN issues an error.

• NO UNREGISTER: By default, the imported database IDs are unregistered from the source recovery catalog schema after a successful import. By using the NO UNREGISTER option, you can force RMAN to keep the imported database IDs in the source catalog schema.

For the full RMAN command syntax, see the Oracle Database 11g Backup and Recovery documentation.

RMAN Data Recovery Commands

The Data Recovery Advisor command extensions for RMAN are listed in the following table:

RMAN command      Action
LIST FAILURE      Lists previously executed failure assessments
CHANGE FAILURE    Changes or closes one or more failures
ADVISE FAILURE    Displays the recommended repair option
REPAIR FAILURE    Repairs the failure and closes it

The LIST FAILURE command displays data failures and details. If the target instance uses a recovery catalog, it can be in STARTED mode; otherwise, it must be in MOUNTED mode. The CHANGE FAILURE command changes the failure priority or closes one or more failures. You can change a failure priority only to HIGH or LOW. Open failures are closed implicitly when a failure is repaired; however, you can also explicitly close a failure. The ADVISE FAILURE command displays a recommended repair option for the specified failures. If this command is executed from within EM, then Data Guard is presented as a repair option. This command prints a summary of the input failures and implicitly closes all open failures that are already fixed. The default behavior when no option is used is to advise on all the CRITICAL and HIGH priority failures that are recorded in the Automatic Diagnostic Repository (ADR). If a new failure has been recorded in the ADR since the last LIST FAILURE command, then ADVISE FAILURE includes a WARNING before advising on all CRITICAL and HIGH failures. The REPAIR FAILURE command is used after an ADVISE FAILURE command within the same RMAN session. It uses the single, recommended repair option of the last ADVISE FAILURE execution in the current session, or initiates an implicit ADVISE FAILURE command if none exists. After completing the repair, the command closes the failure. For details about ADR, see the Oracle University course titled Oracle Database 11g: Change Management Overview eStudy.
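A typical interactive sequence, sketched here only for illustration, might look like this:

RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;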

RMAN Security Enhancements

• Configure transparent encrypted backups with:

  RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;

  or with:

  RMAN> SET ENCRYPTION ON;

• Create and use the virtual private catalog:
  – For groups of databases and users
  – To consolidate repositories and maintain separate responsibilities

Backup shredding allows the DBA to delete the encryption key of transparent encrypted backups, without any physical access to the backup media. The encrypted backups become inaccessible if the encryption key is destroyed. This does not apply to password-protected backups. The default setting for the ENCRYPTION option is OFF, and backup shredding is not enabled. To shred a backup, the DELETE FORCE RMAN command has been extended to perform the shredding action.

The RMAN catalog has been enhanced to create virtual private RMAN catalogs for groups of databases and users. The RMAN catalog owner creates the base catalog and grants RECOVERY_CATALOG_OWNER to the user who will be the virtual catalog owner. The RMAN catalog owner either grants access to the registered databases to the virtual catalog owner, or grants REGISTER to the virtual catalog owner. The virtual catalog owner can then connect to the catalog for a particular target or register a target database. After the virtual private catalog is configured, the virtual private catalog owner uses it just like a standard base catalog. This feature allows a consolidation of RMAN repositories and maintains a separation of responsibilities.

Note: For further information about transparent data encryption, see the lesson titled “Security: New Features.” For further information about RMAN and the virtual private catalog, see the Oracle Database Backup and Recovery Advanced User’s Guide 11g Release 1 (11.1).
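The following sketch shows one possible virtual private catalog setup; the user names, passwords, and database name are placeholders, not part of the course example.

-- As a DBA in the catalog database, create the future virtual catalog owner:
SQL> CREATE USER vpc_owner IDENTIFIED BY vpc_pwd;
SQL> GRANT RECOVERY_CATALOG_OWNER TO vpc_owner;

-- As the base catalog owner, grant access to one registered database:
RMAN> CONNECT CATALOG rman/rman@catdb
RMAN> GRANT CATALOG FOR DATABASE prod TO vpc_owner;

-- As the virtual catalog owner, create and start using the virtual catalog:
RMAN> CONNECT CATALOG vpc_owner/vpc_pwd@catdb
RMAN> CREATE VIRTUAL CATALOG;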

Improved Integration of RMAN and Data Guard

• Set RMAN-persistent configurations for each database in the Data Guard configuration without connecting to the specific database as TARGET.
• Restore a backup control file to a standby control file, and vice versa.
  – No need to back up a standby control file on the primary database to create a new standby database
  – Existing control file backup used to RESTORE AS STANDBY automatically
• BACKUP, RESTORE, and RECOVER work transparently with any database in the configuration.
• Create server parameter file (SPFILE) backups for each database in the configuration.

Oracle Database 11g further enhances the integration of RMAN and Oracle Data Guard. You can now set RMAN-persistent configuration settings without connecting to the database in the Data Guard configuration as TARGET. You can connect to any database in the Data Guard configuration and specify persistent configuration settings for any database in the configuration. A new standby database can be created without creating a control file for the standby database on the primary database. You no longer need to create a backup control file on each standby site. RMAN is able to register physical standby databases, resynchronize file names, and update the RMAN-persistent configuration in the control file by performing a reverse resynchronization of configurations automatically to all physical standby databases. RESTORE and RECOVER commands can determine the correct file names for each standby database site automatically. This is beneficial for databases using Automatic Storage Management (ASM) and Oracle Managed Files (OMF), and databases with a large number of data files. During a RESTORE operation, the usage of the server parameter file (SPFILE ) is transparent. You no longer need to track which SPFILE backup belongs to which database in the configuration.
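As a sketch (the DB_UNIQUE_NAME and catalog credentials are placeholders), you could set a persistent configuration for a standby site while connected elsewhere in the configuration:

RMAN> CONNECT TARGET /
RMAN> CONNECT CATALOG rman/rman@catdb
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY
        FOR DB_UNIQUE_NAME boston;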

Real-Time Query and Physical Standby Databases

[Figure: The redo stream is transported from the primary database to the physical standby database, where Redo Apply runs while queries are serviced]

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  2  USING CURRENT LOGFILE DISCONNECT FROM SESSION;

In previous database releases, when you opened the physical standby database for read only, redo application stopped. Oracle Database 11g enables you to use a physical standby database for queries while redo is applied to it. This enables you to use a physical standby database for disaster recovery and to offload work from the primary database during normal operation. The physical standby database can be opened in read-only mode only if all the files have been recovered up to the same system change number (SCN); otherwise, the open fails. You must temporarily stop Redo Apply to open the physical standby database in read-only mode. After you open the database, restart Redo Apply so that the physical standby database stays current with the primary database while users perform queries against the data. The steps are as follows:

1. SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE

2 CANCEL;

2. Open the database for read-only access.

3. SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE

2 USING CURRENT LOGFILE DISCONNECT FROM SESSION;
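In step 2, the database is opened with a standard read-only OPEN statement; a minimal sketch:

SQL> ALTER DATABASE OPEN READ ONLY;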

Compressing Redo Data

Enable compression of archived redo log files during transmission to the standby database:

• Set the COMPRESSION attribute on the LOG_ARCHIVE_DEST_n initialization parameter:

  SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='SERVICE=dest1
    2  COMPRESSION=ENABLE';

• Use the Oracle Data Guard Broker’s RedoCompression property:

  DGMGRL> EDIT DATABASE <db_unique_name>
          SET PROPERTY RedoCompression = {'ENABLE' | 'DISABLE'}

When the communication network to remote databases uses a high-latency, low-bandwidth wide area network (WAN) link, and the redo data that is transferred to standby databases is substantial, making the most effective use of the network bandwidth is desirable. Redo transport compression (for gap resolution) can be enabled on any remote destination to transfer gap redo data more efficiently and reduce network bandwidth utilization. Redo compression can be enabled or disabled by setting the COMPRESSION attribute on the LOG_ARCHIVE_DEST_n initialization parameter or by using the Oracle Data Guard Broker’s RedoCompression property. Redo compression is disabled by default. All databases in a Data Guard configuration must be using Oracle Database 11g to enable redo transport compression. When you add a database to the Data Guard configuration, the Data Guard Broker automatically detects whether network compression is enabled or disabled for the standby database being added and the property is set accordingly. You can query the COMPRESSION column of the V$ARCHIVE_DEST view to determine whether redo compression is enabled:

SQL> SELECT dest_name, compression FROM v$archive_dest;

DEST_NAME COMPRES

--------------------- -------

LOG_ARCHIVE_DEST_1 ENABLE

LOG_ARCHIVE_DEST_2 DISABLE

LOG_ARCHIVE_DEST_3 DISABLE

Dynamically Setting SQL Apply Parameters

• APPLY_SERVERS

• EVENT_LOG_DEST

• LOG_AUTO_DEL_RETENTION_TARGET

• LOG_AUTO_DELETE

• MAX_EVENTS_RECORDED

• MAX_SERVERS

• MAX_SGA

• PREPARE_SERVERS

• RECORD_APPLIED_DDL

• RECORD_SKIP_DDL

• RECORD_SKIP_ERRORS

• RECORD_UNSUPPORTED_OPERATIONS


In Oracle Database 11g, you can set the parameters in the slide by using the APPLY_SET procedure without stopping SQL Apply. You must, however, still stop SQL Apply to change the PRESERVE_COMMIT_ORDER parameter.
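For example, the number of apply servers can be changed on the fly with DBMS_LOGSTDBY.APPLY_SET; the value shown is illustrative.

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 20);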

New Columns in DBA_LOGSTDBY_PARAMETERS

The new columns are all of type VARCHAR2(64):

• UNIT: Unit of the parameter value (if any)
• SETTING: SYSTEM indicates that the parameter value is not explicitly set by the user (the user can change it with the appropriate call to APPLY_SET); USER indicates that the parameter value has been explicitly set by the user.
• DYNAMIC: YES indicates that the parameter can be set dynamically without having to stop SQL Apply; NO indicates that setting the parameter requires that SQL Apply be stopped.

You can use the new columns in DBA_LOGSTDBY_PARAMETERS to:

• View all parameters active for the SQL Apply session

• Distinguish between parameters set internally and those set explicitly by the DBA

• Determine whether you must stop SQL Apply to modify a specific parameter
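For example, a simple query such as the following (assuming the preexisting NAME and VALUE columns alongside the new ones) shows which parameters can be changed without stopping SQL Apply:

SQL> SELECT name, value, unit, setting, dynamic
  2  FROM dba_logstdby_parameters
  3  ORDER BY name;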

Recording SQL Apply Event Information

• EVENT_LOG_DEST: Determines where SQL Apply records the occurrence of an “interesting event”
• Values:
  – DEST_ALL: All events are recorded in SYSTEM.LOGSTDBY$EVENTS and in the alert log.
  – DEST_EVENTS_TABLE: Default. All events that contain information about user data are recorded only in the SYSTEM.LOGSTDBY$EVENTS table.

In previous releases, SQL Apply events were recorded in the alert log and in the SYSTEM.LOGSTDBY$EVENTS table. The contents of SYSTEM.LOGSTDBY$EVENTS is visible through the DBA_LOGSTDBY_EVENTS view. In Oracle Database 11g, use the EVENT_LOG_DEST parameter to control whether error and informational messages generated by SQL Apply are written to the alert log file. If your preference is that only system-specific events are written to the alert log, set EVENT_LOG_DEST to DEST_EVENTS_TABLE. EVENT_LOG_DEST can be modified by the DBMS_LOGSTDBY.APPLY_SET and DBMS_LOGSTDBY.APPLY_UNSET procedures. The EVENT_LOG_DEST parameter influences the behavior of RECORD_SKIP_ERRORS, RECORD_SKIP_DDL, RECORD_APPLIED_DDL, and RECORD_UNSUPPORTED_OPERATIONS parameters. As an example, if RECORD_APPLIED_DDL is set to TRUE and EVENT_LOG_DEST is set to DEST_EVENTS_TABLE, then the DDL string applied will be recorded only in SYSTEM.LOGSTDBY$EVENTS.
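A sketch of switching the destination and then reviewing recorded events follows; it assumes you are connected to a logical standby database with SQL Apply configured.

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('EVENT_LOG_DEST', 'DEST_ALL');
SQL> SELECT event_time, status FROM dba_logstdby_events ORDER BY event_time;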

Logical Standby Database Flash Recovery Area

Specify the flash recovery area by using the LOG_ARCHIVE_DEST_n parameter:

LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'

In Oracle Database 11g, you can use the flash recovery area of a logical standby database as a log destination.

Initiating Fast-Start Failover from an Application

[Figure: An application calls DBMS_DG.INITIATE_FS_FAILOVER on the primary database; the observer then initiates fast-start failover to the fast-start failover standby database]

Fast-start failover enables Data Guard to rapidly and automatically fail over to a previously chosen standby database without requiring manual intervention. This feature increases the availability of your database in the event of a disaster by reducing the need for you to perform a failover operation manually. In Oracle Database 11g, an application can initiate a fast-start failover by invoking the DBMS_DG.INITIATE_FS_FAILOVER function. The function is used to alert the primary database server that the application wants a fast-start failover to occur immediately. The primary database server notifies the observer of this request and the observer immediately initiates a fast-start failover. The standby database must be in a valid fast-start failover state to accept a failover: observed and synchronized or within the lag limit of the primary database. The application-initiated failover is an invocation of the FAILOVER command and requires SYSDBA privilege. The DBMS_DG package is defined as an invoker’s rights package to address privilege concerns. The DBMS_DG.INITIATE_FS_FAILOVER function calls DBMS_DRS.INITIATE_FS_FAILOVER . If the configuration is not in a valid fast-start failover state, the INITIATE_FS_FAILOVER function returns an ORA error, informing you that a fast-start failover operation cannot be performed. You use the DGMGRL SHOW FAST_START FAILOVER command to display all information related to fast-start failover.
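A minimal PL/SQL sketch of such a call follows; the condition string is arbitrary application text, and error handling is omitted.

DECLARE
  retcode BINARY_INTEGER;
BEGIN
  -- Ask the primary (and, through it, the observer) for an immediate fast-start failover
  retcode := DBMS_DG.INITIATE_FS_FAILOVER('Application detected fatal condition');
  DBMS_OUTPUT.PUT_LINE('DBMS_DG.INITIATE_FS_FAILOVER returned: ' || retcode);
END;
/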

Setting Up a Test Environment by Using Snapshot Standby Databases

[Figure: A physical standby database receiving the redo stream is converted to a snapshot standby database, which is opened for testing; after testing, the changes are backed out and the database is converted back]

SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

In Oracle Database 11g, a physical standby database can be opened temporarily (that is, activated) for read or write activities such as reporting and testing. A physical standby database in the snapshot standby state still receives redo data from the primary database, thereby providing data protection for the primary database while still in the reporting or testing database role. You convert a physical standby database to a snapshot standby database and open the snapshot standby database for writes by applications for testing. After you have completed testing, you discard the testing writes and catch up to the primary database by applying the redo logs. A snapshot standby database provides the combined benefit of disaster recovery, reporting, and testing using a physical standby database. Although similar to storage snapshots, snapshot standby databases provide a single copy of storage while maintaining disaster recovery. You cannot use snapshot standby databases for real-time query or fast-start failover. Note: For further discussion on using snapshot standby for testing, see the Oracle University course titled Oracle Database 11g: Change Management Overview eStudy.
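When testing is complete, converting back can be sketched as follows; the database must be mounted for the conversion, and Redo Apply then catches the standby up with the primary.

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;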

Summary

In this lesson, you should have learned how to:
• Configure archive log deletion policies
• Duplicate active databases using the Oracle network (without backups)
• Back up large files in multiple sections
• Create archival backups for long-term storage
• Query a physical standby database while redo is applied
• Control the location of SQL Apply event information
• Set the retention target for remote archived log files
• Use the logical standby database flash recovery area
• Create a snapshot standby database

Chapter 5
Security: New Features

Objectives

After completing this lesson, you should be able to:
• Configure the password file to use case-sensitive passwords
• Use Transparent Data Encryption support on a logical standby database
• Use Transparent Data Encryption support for Streams
• Create a tablespace with encryption for added security
• Use a Hardware Security Module (HSM) for storing external encrypted data
• Use large object (LOB) encryption for SecureFile LOBs
• Use Enterprise Manager to manage your database security options

Security Enhancements

• Extended password support:
  – Is case sensitive, supports multibyte characters, and is hashed for comparison purposes
• Automatic secure configuration from database installation:
  – Default password profile
  – Default auditing
  – Built-in password complexity checking
• Default audit options to cover important security privileges

Oracle Database 11g uses more secure passwords to meet the demands of compliance to various security and privacy regulations. Passwords:

• Are now case sensitive. Uppercase and lowercase characters are now different characters when used in a password.

• May contain special characters and multibyte characters

• Are always passed through a hash algorithm, then stored as a user credential, which is then used for comparison on login

• Always use salt. A hash function always produces the same output, given the same input. Salt is a unique (random) value that is added to the input to ensure that the output credential is unique.

From installation, Oracle Database 11g creates the database with certain security features recommended by the Center for Internet Security (CIS) benchmark. The configuration recommended by the CIS is more secure than the Oracle Database 10g Release 2 default installation, yet allows the majority of applications to run successfully. Note: Some recommendations of the CIS benchmark may be incompatible with certain applications.

Secure Default Configuration

• By default:
  – The default password profile is enabled.
  – The account is locked after 10 failed login attempts.
• On upgrade:
  – Passwords are not case sensitive until changed.
  – Passwords become case sensitive by use of the ALTER USER command.
• On creation:
  – Passwords are case sensitive.

When creating a custom database using the Database Configuration Assistant (DBCA), you can specify the Oracle Database 11g default security configuration:

The default password profile is enabled with the following settings:

PASSWORD_LIFE_TIME 180

PASSWORD_GRACE_TIME 7

PASSWORD_REUSE_TIME UNLIMITED

PASSWORD_REUSE_MAX UNLIMITED

FAILED_LOGIN_ATTEMPTS 10

PASSWORD_LOCK_TIME 1

PASSWORD_VERIFY_FUNCTION NULL

When you upgrade an Oracle Database 10g, passwords are not case sensitive until you issue the ALTER USER… command to change the password. When the database is created, the passwords are case sensitive by default.
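For example, after an upgrade, a user's password becomes case sensitive as soon as it is reset; the user name and password below are illustrative only.

SQL> ALTER USER scott IDENTIFIED BY "Tiger#2007";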

Enabling the Built-in Password Complexity Checker

Executing the utlpwdmg.sql script creates the password verification function:

SQL> CONNECT / AS SYSDBA
SQL> @?/rdbms/admin/utlpwdmg.sql

This alters the default profile:

ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION verify_function_11G;

You can easily modify the verify_function_11G sample PL/SQL function to enforce the password complexity policies at your site. No special characters are required to be embedded in the password. The verify_function_11G function and the previous verify_function function are included in the utlpwdmg.sql file. You enable password complexity checking by creating a verification function owned by SYS; you can use these supplied functions or modify them to meet your needs. The example in the slide shows the utlpwdmg.sql script, which creates the verify_function_11G function. The function checks that the password contains at least eight characters, contains at least one number and one alphabetic character, and differs from the previous password by at least three characters. It also checks that the password is not a username or a username appended with a number from 1 to 100, a username reversed, a server name or a server name appended with 1 to 100, or one of a set of well-known and common passwords such as 'welcome1', 'database1', 'oracle123', or oracle (appended with 1 to 100). The following column has been added to the DBA_USERS view:

• PASSWORD_VERSIONS: The database version in which the password was created or changed
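You can check it with a query such as the following (the user name is illustrative):

SQL> SELECT username, password_versions FROM dba_users WHERE username = 'SCOTT';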

Managing Default Audits

In Oracle Database 11g, default audit options cover important extended security privileges.

Best practices:
• Retain audit records by using:
  – The Data Pump export command
  – SELECT into another table
• Remove audit records after review and archival.

By default, auditing is enabled in Oracle Database 11g for certain privileges that are very important to security. The audit trail is recorded in the database AUD$ table by default; the AUDIT_TRAIL parameter is set to DB. These audits should not have a large impact on database performance for most sites. You can retain audit records by using Data Pump export, or use the SELECT statement to capture a set of audit records into a separate table. You should remove audit records from the SYS.AUD$ table after review and archival because they take up space in the SYSTEM tablespace. If the SYSTEM tablespace cannot grow, and there is no more room for audit records, errors are generated for each audited statement that fails. Because the CREATE SESSION privilege is one of the audited privileges, no new sessions can be created after the SYSTEM tablespace is full. Note: The SYSTEM tablespace is created with the AUTOEXTEND ON option. Therefore, the SYSTEM tablespace grows as needed until there is no more space available on the disk.
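A minimal sketch of the retain-then-purge approach described above follows; the archive table name is illustrative, and in practice you would restrict the rows by date and export them with Data Pump before deleting.

SQL> CREATE TABLE aud_archive AS SELECT * FROM dba_audit_trail;
SQL> DELETE FROM sys.aud$;
SQL> COMMIT;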

Privileges Audited By Default

• CREATE EXTERNAL JOB

• CREATE ANY JOB

• GRANT ANY OBJECT PRIVILEGE

• EXEMPT ACCESS POLICY

• CREATE ANY LIBRARY

• GRANT ANY PRIVILEGE

• DROP PROFILE

• ALTER PROFILE

• DROP ANY PROCEDURE

• ALTER ANY PROCEDURE

• CREATE ANY PROCEDURE

• ALTER DATABASE

• GRANT ANY ROLE

• CREATE PUBLIC DATABASE LINK

• DROP ANY TABLE

• ALTER ANY TABLE

• CREATE ANY TABLE

• DROP USER

• ALTER USER

• CREATE USER

• CREATE SESSION

• AUDIT SYSTEM

• ALTER SYSTEM

• SYSTEM AUDIT

• ROLE

The privileges mentioned above are audited for all users on success and failure, and by access.

Adjusting Security Settings

When you create a database using the DBCA tool, you are offered a choice of security settings:

• Keep the enhanced Oracle Database 11g default security settings (recommended). These settings include enabling auditing and the new default password profile.

• Revert to pre-Oracle Database 11g default security settings. To disable a particular category of enhanced settings for compatibility purposes, select from the following:

- Revert audit settings to pre-Oracle Database 11g defaults.

- Revert password profile settings to pre-Oracle Database 11g defaults.

These settings can also be changed after the database is created using DBCA.

Note: Secure permissions on software are always set regardless of your choice for the security settings option.

Setting Security Parameters

Parameter                            Purpose
SEC_CASE_SENSITIVE_LOGON             Controls whether passwords are case sensitive
SEC_MAX_FAILED_LOGIN_ATTEMPTS        Protects against brute force attacks
SEC_PROTOCOL_ERROR_FURTHER_ACTION    Protects against denial-of-service attacks
SEC_PROTOCOL_ERROR_TRACE_ACTION      Protects against denial-of-service attacks

Oracle Database 11g enhances the default security of the database with these systemwide and static parameters:

SEC_CASE_SENSITIVE_LOGON: Controls whether passwords are case sensitive. Values: TRUE or FALSE.

• You can specify non-case-sensitive passwords by setting an initialization parameter: SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE;

• However, disabling case sensitivity increases vulnerability to brute force attacks.

SEC_PROTOCOL_ERROR_FURTHER_ACTION: Specifies the action to be taken when bad packets from a malicious client are received

SEC_PROTOCOL_ERROR_TRACE_ACTION: Specifies the monitoring action to be taken when bad packets from a malicious client are received

SEC_MAX_FAILED_LOGIN_ATTEMPTS: Causes a connection to be automatically dropped after the specified number (default: 10) of attempts. This parameter is enforced even when the password profile is not enabled. This helps prevent automated password crackers from making a connection and attempting hundreds or thousands of passwords.
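Because these are static parameters, changes are recorded in the SPFILE and take effect at the next restart; for example (the limit shown is illustrative):

SQL> ALTER SYSTEM SET SEC_MAX_FAILED_LOGIN_ATTEMPTS = 5 SCOPE=SPFILE;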

Note: For further information, see Oracle Database Reference 11g Release 1 (11.1).

Setting Database Administrator Authentication

[Figure: A remote SYSDBA connection (for example, sys/Ora$#CLe AS SYSDBA) is authenticated against the password file]

As shown previously, you can establish case-sensitive passwords to make user accounts more difficult to penetrate with brute force attacks that try all possible passwords. Case-sensitive passwords have also been extended to remote connections for privileged users. You can override this default behavior with the following command:

orapwd file=orapworcl entries=5 ignorecase=Y

If you consider that the password file is vulnerable or that the maintenance of many password files is a burden, then strong authentication can be implemented using the following:

• Grant OSDBA or OSOPER roles in the Oracle Internet Directory.

• Use Kerberos tickets.

• Use certificates over secure sockets layer (SSL).

You must set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES to use any of these strong authentication methods. Set this parameter to NO to disable the use of strong authentication methods. Authentication through Oracle Internet Directory or through Kerberos can also provide centralized administration or single sign-on. If the password file is configured, it is the first to be checked. You may also be authenticated by the local OS by being a member of the OSDBA or OSOPER groups. Note: For further information , please see the Oracle Database Advanced Security Administrator’s Guide 11g Release 1.

Setting Up Directory Authentication for Administrative Users

1. Create the user in the directory.
2. Grant the SYSDBA or SYSOPER role to the user.
3. Set the LDAP_DIRECTORY_SYSAUTH parameter in the database.
4. Check whether the LDAP_DIRECTORY_ACCESS parameter is set to PASSWORD or SSL.
5. Test the connection:

$ sqlplus fred/t%3eEGQ@orcl AS SYSDBA

You enable the Oracle Internet Directory (OID) server to authorize SYSDBA and SYSOPER connections with the following steps:

1. Configure the administrative user with the same procedures you would use to configure a typical user.

2. In OID, grant SYSDBA or SYSOPER to the user for the database the user will administer.

3. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to authenticate to the database by a strong authentication method.

4. Ensure that the LDAP_DIRECTORY_ACCESS initialization parameter is set to PASSWORD or SSL.

5. Afterwards, the administrative user can log in by including the net service name in the CONNECT statement as shown in the slide.

Note: If the database is configured to use a password file for remote authentication, the password file is checked first.

For further information about configuring Kerberos or SSL authentication, see the Oracle Database Advanced Security Administrator’s Guide 11g.

Transparent Data Encryption Support

Several new features enhance the capabilities of Transparent Data Encryption (TDE) and build on the same infrastructure:
• Support for LogMiner
  – Support for logical standby
• Support for Streams
• Support for Asynchronous Change Data Capture
• Tablespace encryption
• Hardware-based master key protection
• Encryption for LOB columns
• Encryption and compression for Data Pump data

With TDE, the encrypted column data is encrypted in the data files, the undo segments, and the redo logs. Oracle Logical Standby depends on the LogMiner functionality to transform redo logs into SQL statements for SQL Apply. LogMiner has been enhanced to support TDE, thereby providing the ability to support TDE on a logical standby database.

Encrypted columns are handled the same way in both Streams and the Streams-based Change Data Capture. The redo records are mined at the source, where the wallet exists. The data is transmitted unencrypted to the target and encrypted using the wallet at the target. The data can be encrypted in transit by using the Advanced Security Option to provide network encryption. In Oracle Database 11g Release 1, LogMiner does not support TDE with a hardware security module (HSM) for key storage. User-held keys for TDE are public key infrastructure keys (public and private) supplied by the user for TDE master keys. User-held keys are not supported by LogMiner.

Oracle Database 11g introduces a completely reengineered large object (LOB) data type called SecureFiles, offering compression and transparent encryption.

The ability to compress the metadata associated with a Data Pump job is provided in Oracle Database 10g Release 2. In Oracle Database 11g, this compression capability is extended so that you can now compress table data on export. Data Pump compression is an inline operation, so the reduced dump file size means a significant savings in disk space. Unlike operating system or file system compression utilities, Data Pump compression is fully inline on the import side as well, so there is no need to uncompress a dump file before importing it. You get full Data Pump functionality using a compressed file; any command that you would use on a regular file also works on a compressed file. In Oracle Database 11g, Data Pump supplies more encryption options for more flexible and robust security. The most important new encryption feature for Data Pump is the ability to encrypt dump file sets. You can select encryption for the data, the metadata, or the entire dump file as your needs require. Refer to the Oracle Database Utilities 11g Release 1 (11.1) guide for more information on Data Pump.

TDE and Logical Standby

Logical standby database with TDE:
• A wallet on the standby is a copy of the wallet on the primary.
• The master key may be changed only on the primary.
• Wallet open and close commands are not replicated.
• The table key may be changed on the standby.
• The table encryption algorithm may be changed on the standby.

The same wallet is required for both databases and must be copied from the primary database to the standby database every time the master key is changed using alter system set encryption key identified by <wallet_password> . An error is raised if the DBA attempts to change the master key on the standby database. If an autologin wallet is not used, the wallet must be opened on the standby. Wallet open and close commands are not replicated on the standby. A different password can be used to open the wallet on the standby. The wallet owner can change the password to be used for the copy of the wallet on the standby. The DBA has the ability to change the encryption key or the encryption algorithm of a replicated table at logical standby. This does not require a change to the master key or wallet. This operation is performed with the following command:

ALTER TABLE table_name REKEY USING '3DES168';

There can be only one algorithm per table. Changing the algorithm at the table changes the algorithm for all the columns. A column on the standby can have a different algorithm than the primary or no encryption. To change the table key, the guard setting must be lowered to NONE. If encrypted columns are not replicated to the standby, you can use TDE on local tables in the logical standby database independently of the primary.

TDE and Streams

Oracle Streams now provides the ability to transparently:
• Decrypt values protected by TDE for filtering and processing
• Re-encrypt values so that they are never in clear text while on disk

[Figure: Streams flow from Capture to Staging to Apply]

In Oracle Database 11g, Oracle Streams supports TDE. Oracle Streams now provides the ability to transparently:

• Decrypt values protected by TDE for filtering, processing and so on.

• Re-encrypt values so that they are never in clear text while on disk (as opposed to memory).

If the corresponding column in the apply database has TDE support, the applied data is transparently re-encrypted using the local database’s keys. If the column value was encrypted at the source, and the corresponding column in the apply database is not encrypted, the apply process raises an error unless the apply parameter ENFORCE_ENCRYPTION is set to FALSE. Whenever logical change records (LCRs) are stored on disk, such as due to queue or apply spilling and apply error creation, the data is encrypted if the local database supports TDE. This is performed transparently, without any user intervention. LCR message tracing does not display clear text of encrypted column values.

Using Tablespace Encryption

To create an encrypted tablespace:

1. Create or open the encryption wallet:

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "welcome1";

2. Create a tablespace with the encryption keywords:

SQL> CREATE TABLESPACE encrypt_ts
  2  DATAFILE '$ORACLE_HOME/dbs/encrypt.dat' SIZE 100M
  3  ENCRYPTION USING '3DES168'
  4  DEFAULT STORAGE (ENCRYPT);

This supports the 3DES168, AES128, AES192, and AES256 algorithms.

An enhancement to the Oracle Advanced Security TDE solution is tablespace encryption. You can now encrypt all data in an entire tablespace. The encryption encrypts on write and decrypts on read. The TDE column encryption solution is maintained for smaller data sets. The only encryption penalty is associated with input/output (I/O). All data types are supported and the SQL access paths are unchanged. The encryption wallet must be open to use tablespace encryption. The ENCRYPTION USING clause sets the encryption algorithm to be used and an ENCRYPT storage parameter causes the encryption to be used for the tablespace. Valid algorithms are 3DES168, AES128, AES192, and AES256. The default is AES128. The DBA_TABLESPACES ENCRYPTED column shows whether encryption is on or off (YES | NO ) and encryption properties are in the V$ENCRYPTED_TABLESPACES view. Encrypted data is protected during operations such as JOIN and SORT, meaning that the data is safe when it is moved to temporary tablespaces. Data in undo and redo logs is also protected. The following restrictions apply:

• Temporary and undo tablespaces cannot be encrypted (Selected blocks are encrypted).

• Bfiles and external tables are not encrypted.

• Transporting tablespaces across platforms with different endianness is not supported.

• The key for an encrypted tablespace cannot be changed. However, you can create a tablespace with the desired encryption properties and move all objects to the new tablespace.
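You can confirm which tablespaces are encrypted with a query such as:

SQL> SELECT tablespace_name, encrypted FROM dba_tablespaces;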

Hardware Security Module

[Figure: A client connects to the database server, which stores encrypted data; encrypt and decrypt operations are performed on the hardware security module]

A hardware security module (HSM) is a physical device that provides secure storage for encryption keys. It also provides secure computational space (memory) to perform encryption and decryption operations. HSM provides even stronger security for the TDE master key for customers who are concerned about storing the master key on the operating system. Transparent data encryption can use HSM to provide enhanced security for sensitive data in Oracle Database 11g. An HSM is used to store the master encryption key used for transparent data encryption. The key is secure from unauthorized access attempts as the HSM is a physical device and not an operating system file. All encryption and decryption operations that use the master encryption key are performed inside the HSM. This means that the master encryption key is never exposed in the insecure memory. There are several vendors that provide hardware security modules. The vendor must supply the appropriate libraries.

Page 115: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Security: New Features Chapter 5 - Page 19

TDE and Kerberos Enhancements

TDE and Kerberos Enhancements

• Uses stronger encryption algorithms (no action required)

• Provides interoperability between MS KDC and MIT KDC (no action required)

• Allows longer principal name:

CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS '[email protected]';

The Oracle client Kerberos implementation now makes use of secure encryption algorithms such as 3DES and AES in place of the data encryption standard (DES). The Kerberos authentication mechanism in Oracle Database 11g now supports the following encryption types:

• DES3-CBC-SHA (DES3 algorithm in CBC mode with HMAC-SHA1 as checksum)

• RC4-HMAC (RC4 algorithm with HMAC-MD5 as checksum)

• AES128-CTS (AES algorithm with 128-bit key in Ciphertext Stealing (CTS) mode with HMAC-SHA1 as checksum)

• AES256-CTS (AES algorithm with 256-bit key in CTS mode with HMAC-SHA1 as checksum)

The Kerberos implementation is enhanced to interoperate smoothly with Microsoft and MIT Key Distribution Centers.

The Kerberos principal name can now contain more than 30 characters and is no longer restricted by the number of characters allowed in a database username. If the principal name is longer than 30 characters, then use:

CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS '[email protected]';

This functionality simplifies the conversion of an existing user to a Kerberos user. In releases prior to Oracle Database 11g, you had to create a new user to give that user Kerberos authentication.

Page 116: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Security: New Features Chapter 5 - Page 20

Encryption for LOB Columns

Encryption for LOB Columns

• LOB encryption is allowed only for SECUREFILE LOBs.

• All LOBs in the LOB column are encrypted.
• LOBs can be encrypted on a per-column or per-partition basis:
  – Allows for the coexistence of SECUREFILE and BASICFILE LOBs

CREATE TABLE test1 (doc CLOB ENCRYPT USING 'AES128')
  LOB(doc) STORE AS SECUREFILE (CACHE NOLOGGING);

Oracle Database 11g introduces a completely reengineered large object (LOB) data type that dramatically improves performance, manageability, and ease of application development. This SecureFiles implementation (of LOBs) offers advanced, next-generation functionality such as intelligent compression and transparent encryption. The encrypted data in SecureFiles is stored in place and is available for random reads and writes. The encryption wallet must be created and open to use SecureFiles. You must create the LOB with the SECUREFILE parameter with encryption enabled (ENCRYPT) or disabled (DECRYPT, which is the default) in the LOB column. The current TDE syntax is used for extending encryption to LOB data types. LOB implementation from previous versions is still supported for backward compatibility and is now referred to as BasicFiles. If you add a LOB column to a table, you can specify whether it should be created as SecureFiles or BasicFiles. The default LOB type is BasicFiles to ensure backward compatibility. Valid algorithms are 3DES168, AES128, AES192, and AES256. The default is AES192. Note: For further discussion on SecureFiles, see the lesson titled “Managing Storage.”

Page 117: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Security: New Features Chapter 5 - Page 21

Enterprise Manager Security Management

Enterprise Manager Security Management

Manage security through EM.
• Policy Manager replaced for:
  – Virtual Private Database
  – Application Context
  – Oracle Label Security
• Added:
  – Enterprise User Security pages
  – TDE pages

Security management has been integrated into Enterprise Manager. Oracle Label Security, Application Contexts, and Virtual Private Database, previously administered through the Oracle Policy Manager tool, are now managed through Enterprise Manager. Enterprise User Security is also now managed through Enterprise Manager instead of a separate tool. A graphical interface for managing Transparent Data Encryption has been added.
Managing TDE with Enterprise Manager
The administrator can open and close the wallet, move the location of the wallet, and generate a new master key using Enterprise Manager. TDE changed the management pages in Export and Import Data. If TDE is configured, the wallet is open, and the table to be exported has encrypted columns, the export wizard offers data encryption. The same arbitrary key (password) that was used on export must be provided on import to import any encrypted columns. A partial import that does not include tables that contain encrypted columns does not require the password.

Page 118: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Security: New Features Chapter 5 - Page 22

Demonstration

Demonstration

For further understanding, you can click the link below for a demonstration on:
• Using Transparent Data Encryption

Click the following link to further understand:

• Using Transparent Data Encryption in Oracle Database 11g [http://www.oracle.com/technology/obe/11gr1_db/security/tde/tde.htm]

Page 119: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Security: New Features Chapter 5 - Page 23

Summary

Summary

In this lesson, you should have learned how to:
• Configure the password file to use case-sensitive passwords
• Use TDE support on a logical standby database
• Use TDE support for Streams
• Create a tablespace with encryption for added security
• Store external encrypted data by using the Hardware Security Module
• Use LOB encryption for SecureFile LOBs on a per-column or per-partition basis
• Use EM to manage your database security options

Page 120: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Security: New Features Chapter 5 - Page 24

Page 121: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 0 - Page 1

Intelligent Infrastructure

Page 122: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 2

Chapter 6Intelligent Infrastructure

Intelligent Infrastructure

Oracle Database 11g

Page 123: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 3

Objectives

Objectives

After completing this lesson, you should be able to:
• Set up and modify Automatic SQL Tuning
• Create Automatic Workload Repository (AWR) baselines for future time periods
• Use additional supplied maintenance windows for specific maintenance tasks
• Simplify memory configuration by setting upper limits to memory use
• Use SPFILE enhancements to improve file accessibility
• Perform clusterwide analysis of performance
• Utilize Enterprise Manager interface for Resource Manager

Page 124: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 4

Automatic SQL Tuning in Oracle Database 11g

Automatic SQL Tuning in Oracle Database 11g

Slide diagram: It’s automatic! The workload captured in AWR is used to choose candidate SQL over one week; for these SQL-tuning candidates, the process generates recommendations, tests SQL profiles, and implements SQL profiles, while the DBA views reports and controls the process.

Oracle Database 10g introduced the SQL Tuning Advisor to help DBAs and application developers improve the performance of SQL statements. Although Automatic Database Diagnostic Monitor (ADDM) identified the SQL that should be tuned, users had to manually look at ADDM reports and run SQL Tuning Advisor on the SQL for tuning. Oracle Database 11g further automates the SQL tuning process by identifying problematic SQL statements, running SQL Tuning Advisor on them, and implementing the resulting SQL Profile recommendations to tune the statement without requiring any user intervention. Automatic SQL Tuning uses the AUTOTASK framework through a new task called Automatic SQL Tuning that runs every night by default when the maintenance window starts. Here is a description of the Automatic SQL Tuning process:

• Based on AWR Top SQL identification (SQL statements that were top in four different time periods: the past week, any day in the past week, any snapshot (the default is an hour) in the past week, or by single response time), Automatic SQL Tuning targets these statements for automatic tuning.

• During the Automatic SQL Tuning task execution within the maintenance window, any previously identified SQL statements are automatically tuned using the SQL Tuning Advisor, and as a result, SQL Profiles are created for them if needed. Before making any decision, the new profile is carefully tested.

• At any time you can request a report about these automatic tuning activities, checking the tuned SQL statements to validate or remove the automatic SQL Profiles, which were generated.

Page 125: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 5

Automatic SQL Tuning: Fine-Tune

Automatic SQL Tuning: Fine-Tune

• Use DBMS_SQLTUNE:– SET_TUNING_TASK_PARAMETER

– EXECUTE_TUNING_TASK

– REPORT_AUTO_TUNING_TASK

• Use DBMS_AUTO_TASK_ADMIN:– ENABLE

– DISABLE

You can use the DBMS_SQLTUNE PL/SQL package to control various aspects of SYS_AUTO_SQL_TUNING_TASK:

• SET_TUNING_TASK_PARAMETER: The following parameters are supported for the automatic tuning task only:

- ACCEPT_SQL_PROFILES: TRUE|FALSE whether the system should accept SQL Profiles automatically

- REPLACE_USER_SQL_PROFILES: TRUE|FALSE whether the task should replace SQL Profiles created by the user.

- MAX_SQL_PROFILES_PER_EXEC: Maximum number of SQL Profiles to create per run

- MAX_AUTO_SQL_PROFILES: Maximum number of automatic SQL Profiles allowed on the system in total

• EXECUTE_TUNING_TASK function: Used to manually run a new execution of the task in the foreground (behaves like it would when it runs in the background)

• REPORT_AUTO_TUNING_TASK: Used to get a text report covering a range of task executions

You can enable and disable SYS_AUTO_SQL_TUNING_TASK using the DBMS_AUTO_TASK_ADMIN PL/SQL package.
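A minimal sketch combining both packages; SYS_AUTO_SQL_TUNING_TASK is the task name used throughout this lesson, and the client name 'sql tuning advisor' is an assumption about the AUTOTASK client registered for this task:

BEGIN
  -- Let the automatic task accept SQL Profiles on its own
  DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES',
    value     => 'TRUE');

  -- Disable the automatic task in all maintenance windows
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/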

Page 126: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 6

Automatic SQL Tuning: Dictionary Views

Automatic SQL Tuning: Dictionary Views

DBA_ADVISOR_EXECUTIONS: Gets data about each execution of the task

DBA_ADVISOR_SQLPLANS: Shows the plans encountered during test-execute

DBA_ADVISOR_SQLSTATS: Shows the test-execute statistics generated from the testing of the SQL Profiles

You can view Automatic SQL Tuning information through the dictionary views mentioned in the slide. Note: Automatic SQL Tuning is stopped under the following conditions:

• When the STATISTICS_LEVEL initialization parameter is set to BASIC.

• When AWR snapshots are turned off by the DBMS_WORKLOAD_REPOSITORY package.

• When AWR retention is less than seven days.
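Returning to the views listed in the slide, a hypothetical query to review recent executions of the automatic tuning task:

SQL> SELECT execution_name, status, execution_start, execution_end
  2  FROM   dba_advisor_executions
  3  WHERE  task_name = 'SYS_AUTO_SQL_TUNING_TASK';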

Page 127: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 7

Automatic SQL Tuning Considerations

Automatic SQL Tuning Considerations

• SQL not considered for Automatic SQL Tuning:– Ad hoc or rarely repeated SQL– Parallel queries– Still long-running queries after profiling– Recursive SQL statements– DMLs and DDLs

• These categories can still be manually tuned using SQL Advisor.

Automatic SQL Tuning does not seek to solve every SQL performance issue occurring on a system. It does not aim to tune the following types of SQL:

• Ad hoc or rarely repeated SQL: If a SQL is not executed multiple times in the same form, the advisor ignores it. SQL that does not repeat within a week are not considered as well.

• Parallel queries

• Long-running queries (postprofile): If a query takes too long to run after being SQL profiled, it is not practical to test-execute and, therefore, is ignored by the advisor. Note that this does not mean that the advisor ignores all long-running queries. If the advisor can find a SQL profile that causes a query that once took hours to now run in minutes, it could still be accepted because test-execution is still possible. The advisor executes the old plan only long enough to determine whether it is worse than the new one, and then terminates test-execution without waiting for the old plan to finish, thus switching the order of their execution.

• Recursive SQL statements

• Data manipulation language (DML) such as INSERT…SELECT statements, or data definition language (DDL) such as CREATE TABLE AS SELECT

With the exception of truly ad hoc SQL, these limitations apply to Automatic SQL Tuning only. Such statements can still be tuned by manually running the SQL Tuning Advisor.

Page 128: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 8

Automatic Workload Repository Baselines

Automatic Workload Repository Baselines

Oracle Database 11g further enhances the AWR baselines by:
• Offering an out-of-the-box moving window baseline from which you can specify adaptive thresholds
• Scheduling the creation of a baseline using baseline templates
• Renaming baselines
• Setting expiration dates for baselines

Automatic Workload Repository (AWR) baselines contain a set of snapshots for an interesting or reference period of time. AWR baselines are the key for performance tuning to:

• Guide the setting of alert thresholds

• Monitor performance

• Compare advisor reports

Oracle Database 11g consolidates the various concepts of baselines in Oracle, specifically Enterprise Manager and RDBMS, into the single concept of the Automatic Workload Repository (AWR) baseline. Oracle Database 11g AWR baselines provide powerful capabilities for defining dynamic and future baselines and considerably simplify the process of creating and managing performance data for comparison purposes. Oracle Database 11g introduces the concept of a moving window baseline. By default, a system-defined moving window baseline is created that corresponds to all the AWR data within the AWR retention period. In Oracle Database 11g, you can collect two kinds of baselines: moving window and static baselines. Static baselines can be either single or repeating. A single AWR baseline is collected over a single time period. A repeating baseline is collected over a repeating time period (for example, every Monday in June). Oracle Database 11g AWR baselines are enabled by default as long as STATISTICS_LEVEL is set to TYPICAL or ALL.

Page 129: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 9

Moving Window Baseline

Moving Window Baseline

There is one moving window baseline:
• SYSTEM_MOVING_WINDOW is a moving window baseline that corresponds to the last eight days of AWR data.
• It is created out-of-the-box in Oracle Database 11g.
• By default, adaptive thresholds functionality computes statistics on this baseline.

There is a system-defined moving window baseline created by default that corresponds to the complete set of snapshot data in the AWR retention period. This system-defined baseline provides a default out-of-the-box baseline for EM performance screens to compare the performance with the current database performance.

Note: The default retention period for snapshot data has been changed from seven days to eight days in Oracle Database 11g to ensure the capture of an entire week of performance data.

Page 130: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 10

Baseline Templates

Baseline Templates

• Enable you to schedule the creation of baselines for future time periods of interest:
  – Single time period in the future
  – Repeating schedule
• For example:
  – A known holiday weekend
  – Every Monday morning from 10 a.m. to 2 p.m.
• When the end time for a baseline template changes from future to past, MMON detects the change and creates the baseline.

Creating baselines for future time periods enables you to mark time periods that you know will be interesting. For example, you may want the system to automatically generate a baseline for every Monday morning for the whole year, or you can ask the system to generate a baseline for an upcoming holiday weekend if you suspect that it is a high-volume weekend. Previously, you could create baselines only on snapshots that already existed. A nightly MMON task goes through all the templates for baseline generation and checks to see whether any time ranges have changed from the future to the past within the last day. For the relevant time periods, the MMON task then creates a baseline for the time period. You can create two types of AWR baselines: single and repeating baselines. EM offers full support for AWR baselines from the Database Instance page > Server tab > AWR Baselines link. New and modified views have been added to support baselines and baseline templates:

• New: DBA_HIST_BASELINE_DETAILS, DBA_HIST_BASELINE_TEMPLATE

• Modified: DBA_HIST_BASELINE

Page 131: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 11

Generating Baseline for a Single Time Period

Generating Baseline for a Single Time Period

Interesting time period

T4 T5 T6 ….. Tx Ty Tz…..

BEGIN

DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE(

start_time => to_date('21-JUN-2008','DD-MON-YYYY'),

end_time => to_date('21-SEP-2008','DD-MON-YYYY'),

baseline_name => 'FALL08',

template_name => 'FALL08',

expiration => NULL ) ;

END;

You can now create a template for how baselines are to be created for different time periods in the future, for predictable schedules. If any part of the period is in the future, use the CREATE_BASELINE_TEMPLATE procedure. The manageability infrastructure generates a task using these inputs and automatically creates a baseline for the specified time period, without requiring you to identify the start- and end-snapshot identifiers. For the CREATE_BASELINE and CREATE_BASELINE_TEMPLATE procedures, you can now specify an expiration duration. The expiration duration, specified in days, represents the number of days you want the baselines to be maintained. A value of NULL means that the baselines never expire. The example in the slide illustrates a template creation for a single time period. The additional values of the CREATE_BASELINE_TEMPLATE procedure that create a repeating time schedule are:

• day_of_week: Day of the week that the baseline should repeat on. Specify one of the following values: 'SUNDAY', 'MONDAY', 'TUESDAY', 'WEDNESDAY', 'THURSDAY', 'FRIDAY', 'SATURDAY'.
• hour_in_day: A value of 0–23 that specifies the hour in the day the baseline should start.
• duration: The duration (in hours) after hour_in_day that the baseline should last.

Note: For a complete description of available procedures, see the Oracle Database 11g PL/SQL References and Types documentation.
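A sketch of a repeating template using the day_of_week, hour_in_day, and duration values described above; the template name, baseline name prefix, dates, and the baseline_name_prefix parameter name itself are assumptions about the repeating form of the procedure:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE(
    day_of_week          => 'MONDAY',
    hour_in_day          => 10,
    duration             => 4,
    start_time           => TO_DATE('01-01-2008', 'DD-MM-YYYY'),
    end_time             => TO_DATE('31-12-2008', 'DD-MM-YYYY'),
    baseline_name_prefix => 'MONDAY_MORNING_',
    template_name        => 'MONDAY_MORNING',
    expiration           => 30);
END;
/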

Page 132: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 12

Using EM to Quickly Configure Adaptive Thresholds

Using EM to Quickly Configure Adaptive Thresholds

Oracle Database 11g Enterprise Manager provides significant usability improvements in the selection of adaptive thresholds for database performance metrics, with full integration with AWR baselines as the source for the metric statistics. EM offers a quick configuration option in a one-click starter set of thresholds based on OLTP or Data Warehouse workload profiles. You make the selection of the appropriate workload profiles from the subsequent pop-up window. By making this selection, the system automatically configures and evolves adaptive thresholds based on the SYSTEM_MOVING_WINDOW baseline for the group of metrics that best correspond to the chosen workload.

Page 133: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 13

Changes to Procedures and Views

Changes to Procedures and Views

DBMS_WORKLOAD_REPOSITORY package:
• New procedures: CREATE_BASELINE_TEMPLATE, MODIFY_BASELINE_WINDOW_SIZE, RENAME_BASELINE
• New function: SELECT_BASELINE_METRIC

Views:
• New: DBA_HIST_BASELINE_DETAILS, DBA_HIST_BASELINE_TEMPLATE
• Modified: DBA_HIST_BASELINE

The first table in the slide shows the set of PL/SQL interfaces offered by Oracle Database 11g in the DBMS_WORKLOAD_REPOSITORY package for administration and filtering. MODIFY_BASELINE_WINDOW_SIZE enables you to modify the size of the SYSTEM_MOVING_WINDOW. The data dictionary views shown in the second table support AWR baselines.

• DBA_HIST_BASELINE: Has been modified to support the SYSTEM_MOVING_WINDOW baseline and the baselines generated from templates. Additional information includes the date created, time of last statistics calculation, and type of baseline.

• DBA_HIST_BASELINE_DETAILS: Displays information that enables you to determine the validity of a given baseline, such as whether there was a shutdown during the baseline period and the percentage of the baseline period that is covered by the snapshot data.

• DBA_HIST_BASELINE_TEMPLATE: Holds the baseline templates. This view provides the information needed by MMON to determine when a baseline will be created from a template and when the baseline should be removed.

For additional details, see the Oracle Database Reference.
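A minimal sketch of two of these procedures; the new baseline name is an assumption, and the 30-day window size (which cannot exceed the AWR retention setting) is also an assumption. FALL08 is the baseline created by the earlier template example:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.RENAME_BASELINE(
    old_baseline_name => 'FALL08',
    new_baseline_name => 'FALL2008');
  DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE(window_size => 30);
END;
/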

Page 134: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 14

Automated Maintenance Tasks

Automated Maintenance Tasks

10:00 PM–2:00 AM, Monday to Friday
6:00 AM–2:00 AM, Saturday and Sunday

Oracle Database 10g introduced the execution of some automated maintenance tasks during the maintenance window. Specifically, the automated tasks were statistics collection, segment advisor, and Automatic SQL Tuning. With Oracle Database 11g, the Automated Maintenance Tasks feature relies on the Resource Manager being enabled during the maintenance windows. Each maintenance window is associated with a resource plan that specifies how the resources will be allocated during the window duration. This prevents any maintenance work from consuming excessive amounts of system resources. To facilitate mapping of automatic tasks to specific windows, the maintenance windows (as shown in the slide) are created in place of the existing WEEKNIGHT_WINDOW and WEEKEND_WINDOW windows in the MAINTENANCE_WINDOW_GROUP window group. You are still completely free to define other maintenance windows, as well as change start times and durations for the windows listed in the slide. Likewise, any maintenance windows that are deemed unnecessary can be disabled or removed. The operations can be done using EM or Scheduler interfaces.

Page 135: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 15

Default Maintenance Resource Manager Plan

Default Maintenance Resource Manager Plan

SQL> SELECT name FROM V$RSRC_PLAN
  2  WHERE is_top_plan = 'TRUE';

NAME
--------------------------------
DEFAULT_MAINTENANCE_PLAN

When a maintenance window opens, the DEFAULT_MAINTENANCE_PLAN resource manager plan is automatically set to control the amount of CPU used by the automated maintenance tasks. To be able to give different priorities to each possible task during a maintenance window, various consumer groups are assigned to DEFAULT_MAINTENANCE_PLAN. The hierarchy between groups and plans is shown in the slide. For high-priority tasks:

• The Optimizer Statistics Gathering automated task is assigned to the ORA$AUTOTASK_STATS_GROUP consumer group

• The Segment Advisor automated task is assigned to the ORA$AUTOTASK_SPACE_GROUP consumer group

• The Automatic SQL Tuning automated task is assigned to the ORA$AUTOTASK_SQL_GROUP consumer group

Note: If necessary, you can manually change the percentage of CPU resources allocated to the various automated maintenance task consumer groups in ORA$AUTOTASK_HIGH_SUB_PLAN.
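A hypothetical query to see how CPU is shared between the consumer groups and subplans under the maintenance plan:

SQL> SELECT group_or_subplan, mgmt_p1
  2  FROM   dba_rsrc_plan_directives
  3  WHERE  plan = 'DEFAULT_MAINTENANCE_PLAN';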

Page 136: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 16

Automated Maintenance Task Priorities

Automated Maintenance Task Priorities

Slide diagram: During the maintenance window, MMON spawns ABP, which reads the Stats, Space, and SQL tasks recorded in DBA_AUTOTASK_TASK, assigns them urgent, high, or medium priority, and runs the corresponding jobs (Job 1 … Job n) in priority order.

The Automated Maintenance Tasks feature is implemented by a background process, the Autotask Background Process (ABP). ABP’s main purpose is to translate tasks into jobs for execution by the Scheduler. ABP also maintains the history of execution of all tasks, storing its private repository in the SYSAUX tablespace. You can view this repository through DBA_AUTOTASK_TASK. ABP is spawned by MMON, typically at the start of a maintenance window. There is only one ABP required for all instances. Every 10 minutes, MMON checks to see whether ABP has crashed, in which case MMON restarts it. ABP determines the list of jobs that need to be created for each maintenance task, assigning a priority of urgent, high, or medium and arranging the jobs in the preferred order of execution. ABP creates the jobs so that all urgent-priority jobs are created first, then all high-priority jobs, and finally all medium-priority jobs. Depending on a task’s priority attribute (urgent, high, or medium), various Scheduler job classes are created to map each task’s priority consumer group to a corresponding job class. Enterprise Manager is the preferred way to control Automated Maintenance Tasks. However, you can also use the DBMS_AUTO_TASK_ADMIN package. Note: With Oracle Database 11g, there is no job that is permanently associated with a specific automated task. Therefore, it is not possible to use the DBMS_SCHEDULER API to control the behavior of automatic tasks.
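A hypothetical query to check the status of the automated maintenance clients registered with the AUTOTASK framework:

SQL> SELECT client_name, status FROM dba_autotask_client;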

Page 137: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 17

Automatic Memory Management: Overview

Automatic Memory Management: Overview

Slide diagram: In Oracle Database 10g and 11g, the SGA target (buffer cache, large pool, shared pool, Java pool, Streams pool, other SGA) and the PGA target (SQL areas, untunable PGA, free) are sized separately; in 11g, a single memory target spans both SGA memory and PGA memory, so memory can shift between them as the workload changes (for example, from OLTP to BATCH).

With Automatic Memory Management, the system causes an indirect transfer of memory from the SGA to the PGA, and vice versa. This automates the sizing of the Program Global Area (PGA) and System Global Area (SGA) according to your workload. This indirect memory transfer relies on the operating system (OS) mechanism of freeing shared memory. After memory is released to the OS, the other components can allocate memory by requesting memory from the OS. Currently, this is implemented on Linux, Solaris, HPUX, AIX, and Windows. You set your memory target for the database instance and the system then tunes to the target memory size, redistributing memory as needed between the SGA and aggregate PGA. The illustration above shows you the differences between the Oracle Database 10g mechanism and the new Automatic Memory Management with Oracle Database 11g.

Page 138: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 18

Automatic Memory Management: Overview

Automatic Memory Management: Overview

Slide diagram: With the memory max target at 350M, the memory target is raised from 250M to 300M by issuing:

ALTER SYSTEM SET MEMORY_TARGET=300M;

The simplest way to manage memory is to allow the database to automatically manage and tune it for you. To do so (on most platforms), you set only a target memory size initialization parameter (MEMORY_TARGET) and a maximum memory size initialization parameter (MEMORY_MAX_TARGET). Because the target memory initialization parameter is dynamic, you can change the target memory size at any time without restarting the database. The maximum memory size serves as an upper limit so that you cannot accidentally set the target memory size to be too high. Because certain SGA components either cannot easily shrink or must remain at a minimum size, the database also prevents you from setting the target memory size too low.
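A minimal sketch of setting both parameters; the sizes are assumptions, and MEMORY_MAX_TARGET is static, so it takes effect only after the instance is restarted:

SQL> ALTER SYSTEM SET MEMORY_MAX_TARGET = 400M SCOPE=SPFILE;
SQL> ALTER SYSTEM SET MEMORY_TARGET = 300M SCOPE=BOTH;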

Page 139: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 19

Oracle Database 11g Memory-Sizing Parameters

Oracle Database 11g Memory-Sizing Parameters

Parameter hierarchy shown in the slide (top to bottom):
• MEMORY_TARGET / MEMORY_MAX_TARGET
• SGA_TARGET / SGA_MAX_SIZE and PGA_AGGREGATE_TARGET
• Under SGA_TARGET: SHARED_POOL_SIZE, DB_CACHE_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, STREAMS_POOL_SIZE
• Others: DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, DB_nK_CACHE_SIZE, LOG_BUFFER, RESULT_CACHE_SIZE

The graphic in the slide shows you the hierarchy of memory initialization parameters. Although you can set MEMORY_TARGET parameter to trigger the Automatic Memory Management, you can still set lower-bound values for the various caches. You can enable Automatic Memory Management using Enterprise Manager and then view a graphical representation of your memory size history components. You can also look at the memory target advisor using the V$MEMORY_TARGET_ADVICE view. You can monitor the decisions made by Automatic Memory Management by using the following views:

• V$MEMORY_DYNAMIC_COMPONENTS: Shows current status of all memory components

• V$MEMORY_RESIZE_OPS: Maintains a circular history buffer of the last 800 SGA resize requests
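A hypothetical look at the memory target advisor:

SQL> SELECT memory_size, memory_size_factor, estd_db_time
  2  FROM   v$memory_target_advice
  3  ORDER BY memory_size_factor;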

Page 140: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 20

ADDM Enhancements in Oracle Database 11g

ADDM Enhancements in Oracle Database 11g

• ADDM for Oracle Real Application Clusters (RAC)
• Directives (finding suppression)
• DBMS_ADDM package

Page 141: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 21

Automatic Database Diagnostic Monitor (ADDM) in Oracle Database 10g

Automatic Database Diagnostic Monitor (ADDM) in Oracle Database 10g

Slide diagram: The intelligent infrastructure underpins space management, backup and recovery management, storage management, database management, application and SQL management, and system resource management.

Oracle Database 10g introduced the Automatic Database Diagnostic Monitor (ADDM), a self-diagnostic engine built directly into the Oracle database. The ADDM is automatically invoked by the Oracle database and performs analysis to determine the major issues on the system on a proactive basis. In many cases, the ADDM recommends solutions and quantifies expected benefits.

Page 142: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 22

Automatic Database Diagnostic Monitor for Oracle RAC

Automatic Database Diagnostic Monitor for Oracle RAC

Slide diagram: The self-diagnostic engine reads AWR data from all instances (Inst1 … Inst n); the database ADDM analyzes the entire cluster, while an instance ADDM analyzes a single instance.

Oracle Database 11g further extends database-management functionality by offering clusterwide analysis of performance. A special mode of the ADDM analyzes an Oracle Real Application Clusters (RAC) database cluster and reports on issues that are affecting the entire cluster as well as those that are affecting individual instances. When the advisor runs in this mode, it is called the database ADDM. You can run the advisor for a single instance, which is equivalent to the Oracle Database 10g ADDM and is now called the instance ADDM. The Database ADDM for Oracle RAC is not just a report of reports. It has independent analysis appropriate for Oracle RAC.

Page 143: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 23

ADDM for Oracle RAC

• Identifies the most critical performance problems for the entire Oracle RAC cluster database
• Runs automatically when taking AWR snapshots (the default)
• Performs databasewide analysis of:
  – Global resources (for example, I/O and global locks)
  – High-load SQL and hot blocks
  – Global cache interconnect traffic
  – Network latency issues
  – Skew in instance response times
• Is used by DBAs to analyze cluster performance

ADDM for Oracle RAC

The database ADDM has access to AWR data generated by all instances, making the analysis of global resources more accurate. Both the database ADDM and instance ADDM run on continuous time periods that can contain instance startup and shutdown. In the case of the database ADDM, there may be several instances that are shut down or started during the analysis period. You must, however, maintain the same database version throughout the entire time period. The database ADDM runs automatically after each snapshot is taken. The automatic instance ADDM runs are the same as in Oracle Database 10g. You can also perform analysis on a subset of instances in the cluster. This is called partial analysis ADDM. An I/O capacity finding (“The I/O system is overused”) is a global finding because it concerns a global resource affecting multiple instances. A local finding concerns a local resource or issue that affects a single instance. For example, a CPU-bound instance results in a local finding about CPU. Although the ADDM can be used during application development to test changes to the application, the database system, or the hosting machines, the database ADDM is targeted at DBAs.

Page 144: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 24

ADDM for Oracle RAC

ADDM for Oracle RAC

Specified in the DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER procedure:

• Database ADDM (all instances): INSTANCE = “0” or “UNUSED” (default); INSTANCES = “UNUSED” (default)
• Partial analysis ADDM (only the instances specified in the INSTANCES parameter are analyzed): INSTANCE = “0” or “UNUSED” (default); INSTANCES = comma-separated list of instance numbers (1,2,5, ...)
• Instance ADDM (the instance specified in the INSTANCE parameter is analyzed): INSTANCE = a positive integer (for example, “1”); INSTANCES = any value

The distinction between the database ADDM and instance ADDM is based on the value of the advisor INSTANCE parameter. When the value is 0 or UNUSED, the task is a database ADDM. When the value is numeric, it is the instance ID for an instance ADDM task. The results of an ADDM analysis are stored in the advisor framework and accessed like any ADDM task in Oracle Database 10g. You select to run the database ADDM, instance ADDM, or partial analysis by setting the INSTANCE and INSTANCES parameters in the DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER procedure. Note: Partial ADDM is not currently exposed through EM, but command-line PL/SQL APIs exist to perform partial analysis. It is recommended that you use the DBMS_ADDM package instead.

Page 145: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 25

EM Support for ADDM for Oracle RAC

EM Support for ADDM for Oracle RAC

Finding History page:

Oracle Database 11g EM displays the ADDM analysis on the Cluster Database Home Page. The Findings Table is displayed in the Performance Analysis section. For each finding, the Affected Instances column displays the number (n of n) of instances affected. The display also indicates the percentage impact for each instance. Further drilldown on the findings takes you to the ADDM Finding Details page. The ADDM Finding Details Page enables you to see Finding History. When you click this button, you see a page with a chart on the top plotting the impact in active sessions for the finding over time. The default display period is 24 hours. The drop-down list also supports viewing for seven days. At the bottom of the display, a table similar to the results section is shown, displaying all findings for this named finding. From this page, you can set filters on the findings results. Different types of findings (for example, CPU, Logins, and SQL) have different kinds of criteria for filtering. Note: Only automatic runs of ADDM are considered for Finding History. These results reflect the unfiltered results only.

Page 146: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 26

DBMS_ADDM Package

DBMS_ADDM Package

The DBMS_ADDM package eases ADDM management. It consists of the following procedures and functions:

• ANALYZE_DB: Create an ADDM task for analyzing the database globally.
• ANALYZE_INST: Create an ADDM task for analyzing a local instance.
• ANALYZE_PARTIAL: Create an ADDM task for analyzing a subset of instances.
• DELETE: Delete a created ADDM task (of any kind).
• GET_REPORT: Get the default text report of an executed ADDM task.

The following example illustrates the creation and execution of a database ADDM task:

SQL> var tname varchar2(60);

SQL> BEGIN

SQL> :tname := 'my database ADDM task';

SQL> dbms_addm.analyze_db(:tname, 1, 2);

SQL> END;

Here, parameters 1 and 2 are the start and end snapshots, respectively. You can then use the GET_REPORT procedure to view the result:

SQL> SELECT dbms_addm.get_report(:tname) FROM DUAL;

Page 147: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 27

Advisor-Named Findings and Directives

Advisor-Named Findings and Directives

SQL> SELECT finding_name FROM dba_advisor_finding_names;

FINDING_NAME
----------------------------------------
Top Segments by I/O
Top SQL by "Cluster" Wait
. . .
Undersized Redo Log Buffer
Undersized SGA
Undersized Shared Pool
Undersized Streams Pool

• The DBA_ADVISOR_FINDING_NAMES view lists all possible findings.

• Advisor results are now classified and named, and exist in the DBA{USER}_ADVISOR_FINDINGS view.

Oracle Database 10g introduced the advisor framework and various advisors to help DBAs manage databases efficiently. These advisors provide feedback in the form of findings. Oracle Database 11g now classifies these findings, so that you can query the Advisor views to understand how often a given type of finding recurs in the database. A FINDING_NAME column has been added to the following Advisor views:

• DBA_ADVISOR_FINDINGS

• USER_ADVISOR_FINDINGS
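A hypothetical query that counts how often each named finding has recurred in the repository:

SQL> SELECT finding_name, COUNT(*)
  2  FROM   dba_advisor_findings
  3  GROUP BY finding_name
  4  ORDER BY COUNT(*) DESC;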

Page 148: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 28

Using the DBMS_ADDM Package

Using the DBMS_ADDMPackage

• Create an ADDM directive, which filters “Undersized SGA” findings:

• Possible findings found in DBA_ADVISOR_FINDING_NAMES

SQL> var tname varchar2(60);
SQL> BEGIN
  2    dbms_addm.insert_finding_directive(NULL,
  3      'My undersized SGA directive',
  4      'Undersized SGA',
  5      2,
  6      10);
  7    :tname := 'my instance ADDM task';
  8    dbms_addm.analyze_inst(:tname, 1, 2);
  9  END;
 10  /
SQL> SELECT dbms_addm.get_report(:tname) FROM dual;

You can use possible finding names to query the findings repository to get all occurrences of that specific finding. In the code shown above, you see the creation of an instance ADDM task with a finding directive called My undersized SGA directive. When the task name is NULL, it applies to all subsequent ADDM tasks. The finding name (“Undersized SGA”) must exist in the DBA_ADVISOR_FINDING_NAMES view (which lists all the findings) and is case sensitive. The result of DBMS_ADDM.GET_REPORT shows an “Undersized SGA” finding only if the finding is responsible for at least 2 (min_active_sessions) average active sessions during the analysis period, and this constitutes at least 10% (min_perc_impact) of the total database time during that period. The following are some of the procedures to use directives:

• INSERT_FINDING_DIRECTIVE

• INSERT_SQL_DIRECTIVE

• INSERT_PARAMETER_DIRECTIVE

• DELETE_FINDING_DIRECTIVE

• DELETE_SQL_DIRECTIVE

• DELETE_SEGMENT_DIRECTIVE

Note: For a complete description of available procedures, see the Oracle Database 11g PL/SQL References and Types documentation.
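A minimal sketch of removing the directive created above, using the same NULL task name so that the change applies to all subsequent ADDM tasks:

SQL> BEGIN
  2    dbms_addm.delete_finding_directive(NULL, 'My undersized SGA directive');
  3  END;
  4  /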

Page 149: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 29

New ADDM Views

New ADDM Views

• DBA{USER}_ADDM_TASKS: Displays every executed ADDM task. Is an extension of the corresponding Advisor views.
• DBA{USER}_ADDM_INSTANCES: Displays instance-level information for ADDM tasks that are completed.
• DBA{USER}_ADDM_FINDINGS: Displays extensions of the corresponding Advisor views.
• DBA{USER}_ADDM_FDG_BREAKDOWN: Displays the contribution for each finding from the different instances for database and partial ADDM.
• DBA_ADDM_SYSTEM_DIRECTIVES: Displays the directives in the system that affect all tasks.
• DBA_ADDM_TASK_DIRECTIVES: Displays the directives in the system affecting a specific task. Use TASK_ID or TASK_NAME to limit to a specified task.

Note: For a complete description of these views, see the Oracle Database Reference.

Page 150: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 30

Resource Manager: New EM Interface

Resource Manager: New EM Interface

Using Enterprise Manager, you can access the Resource Manager section from the Server page. The Resource Manager section is organized in the same way as the Database Resource Manager. Clicking the Getting Started link takes you to the “Getting Started with Database Resource Manager” page, which provides a brief description of each step as well as the links to the corresponding pages. Note: The DBMS_RESOURCE_MANAGER PL/SQL interface has some deprecated and new parameters. For example, you should no longer use the CPU_Pn parameters of the CREATE_PLAN_DIRECTIVE procedure, but replace them with the MGMT_Pn parameters. Another example is the new SWITCH_FOR_CALL parameter, which replaces the SWITCH_TIME_IN_CALL parameter of the CREATE_PLAN_DIRECTIVE procedure. For more information, see the Oracle Database Administrator’s Guide.

Page 151: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 31

Resource Manager: New EM Interface

Resource Manager: New EM Interface

When you create an Oracle Database 11g database, the resource plans (shown above) are created by default. However, with the exception of the DEFAULT_PLAN resource plan, none are active by default. Note that the DEFAULT_PLAN has no limits set for its thresholds. Oracle Database 11g introduces the following two new I/O limits that you can define as thresholds in a resource plan:

• I/O Limit (MB)

• I/O Request Limit (Requests)

These I/O limits are created, using either EM or PL/SQL, when you create a resource plan directive with the following arguments:

• switch_io_megabytes: Specifies the amount of I/O (in MB) that a session can issue before an action is taken. Default is NULL, which means unlimited.

• switch_io_reqs: Specifies the number of I/O requests that a session can issue before an action is taken. Default is NULL, which means unlimited.

The EM Resource Manager Statistics page displays statistics for only the current active plan.
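A sketch of a plan directive that uses the new I/O limit arguments; the plan and consumer group names are hypothetical and are assumed to exist already:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                => 'DAYTIME_PLAN',      -- hypothetical plan
    group_or_subplan    => 'REPORTING_GROUP',   -- hypothetical consumer group
    mgmt_p1             => 30,
    switch_group        => 'CANCEL_SQL',
    switch_io_megabytes => 1024,
    switch_io_reqs      => 100000);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/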

Page 152: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 32

Easier Recovery from Loss of SPFILE

Easier Recovery from Loss of SPFILE

The FROM MEMORY clause allows the creation of a parameter file from the current systemwide parameter settings.

CREATE PFILE [= 'pfile_name' ] FROM { { SPFILE [= 'spfile_name' ] } | MEMORY };

CREATE SPFILE [= 'spfile_name' ] FROM { { PFILE [= 'pfile_name' ] } | MEMORY };

Oracle Database 11g offers simplified parameter management through easier recovery from the loss of the SPFILE. In Oracle Database 11g, you can use the FROM MEMORY clause to create a pfile or spfile using the current systemwide parameter settings. In an Oracle RAC environment, the created file contains the parameter settings from each instance. During instance startup, all parameter settings are logged to the alert.log file. However, as of Oracle Database 11g, the alert.log parameter dump text is now written in valid parameter syntax. This facilitates cutting and pasting of parameters into a separate file, and then using as a pfile for a subsequent instance. The name of the pfile or spfile is written to the alert.log at instance startup time. In cases when an unknown client-side pfile is used, the alert log indicates this as well. To support this additional functionality, the COMPATIBLE initialization parameter must be set to 11.0.0.0 or higher.
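A minimal usage sketch; both file paths are hypothetical, and COMPATIBLE must be 11.0.0.0 or higher as noted above:

SQL> CREATE PFILE  = '/tmp/init_from_memory.ora'   FROM MEMORY;
SQL> CREATE SPFILE = '/tmp/spfile_from_memory.ora' FROM MEMORY;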

Page 153: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 33

Summary

Summary

In this lesson, you should have learned how to:
• Set up and modify Automatic SQL Tuning
• Create AWR baselines for future time periods
• Use additional supplied maintenance windows for specific maintenance tasks
• Simplify memory configuration by setting the MEMORY_TARGET initialization parameter
• Improve file accessibility of the SPFILE file
• Perform clusterwide analysis of performance using RAC-aware ADDM
• Utilize Enterprise Manager interface for Resource Manager

Page 154: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Intelligent Infrastructure Chapter 6 - Page 34

Page 155: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Datawarehousing Enhancements Chapter 0 - Page 1

Datawarehousing Enhancements

Page 156: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Datawarehousing Enhancements Chapter 7 - Page 2

Chapter 7Datawarehousing Enhancements

Datawarehousing Enhancements

Oracle Database 11g

Page 157: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Datawarehousing Enhancements Chapter 7 - Page 3

Objectives

Objectives

After completing this lesson, you should be able to:
• Use SQL Access Advisor’s recommendations for partitioning options
• Utilize partitioning enhancements to gain significantly faster data access:
  – Interval partitioning
  – System partitioning
  – Composite partitioning enhancements
  – Virtual column-based partitioning
  – Reference partitioning

Page 158: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Datawarehousing Enhancements Chapter 7 - Page 4

SQL Access Advisor in Oracle Database 11g

SQL Access Advisor in Oracle Database 11g

The DBMS_ADVISOR.QUICK_TUNE procedure is a shortcut for:
• CREATE_TASK

• UPDATE_TASK_ATTRIBUTES

• DELETE_TASK

Slide diagram: The SQL Access Advisor takes a workload as input and recommends indexes, materialized views, materialized view logs, and partitioned objects.

SQL Access Advisor identifies and helps resolve performance problems relating to the execution of SQL statements by recommending which indexes, materialized views, materialized view logs, or partitions to create, drop, or retain. Oracle Database 11g extends its recommendations to include partitioning options for tables, indexes, and materialized views. Partition recommendations work on tables that have at least 10,000 rows, and workloads that have some predicates and joins on columns of the NUMBER or DATE type. In addition, partitioning advice can be generated only for single-column INTERVAL and HASH partitioning. INTERVAL partitioning recommendations can be output as the RANGE syntax but INTERVAL is the default. HASH partitioning is done only to leverage partition-wise joins. The DBMS_ADVISOR.QUICK_TUNE procedure is introduced to perform all the necessary operations that analyze a single SQL statement. The operation creates a default task, confining the workload to the specified statement only. Finally, the task is executed and the results are saved in the repository. You can also instruct the procedure to implement the final recommendations, resulting in true automatic tuning. The QUICK_TUNE procedure is a shortcut for the CREATE_TASK, UPDATE_TASK_ATTRIBUTES, and DELETE_TASK procedures. You can find a complete description of each of these procedures in the Oracle Database PL/SQL Packages and Types Reference.
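A minimal sketch of QUICK_TUNE analyzing a single statement; the task name and the SQL text are hypothetical:

BEGIN
  DBMS_ADVISOR.QUICK_TUNE(
    DBMS_ADVISOR.SQLACCESS_ADVISOR,
    'MY_QUICK_TUNE_TASK',
    'SELECT COUNT(*) FROM sh.sales WHERE time_id = TO_DATE(''21-06-2008'', ''DD-MM-YYYY'')');
END;
/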

Page 159: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Datawarehousing Enhancements Chapter 7 - Page 5

Oracle Partitioning Across Database Releases

• Oracle8: Range partitioning; global range indexes; static partition pruning; basic maintenance operations (add, drop, exchange)
• Oracle8i: Hash and composite range-hash partitioning; partition-wise joins; dynamic pruning; merge operation
• Oracle9i: List partitioning; global index maintenance
• Oracle9i R2: Composite range-list partitioning; fast partition split
• Oracle10g: Global hash indexes; local index maintenance
• Oracle10g R2: 1M partitions per table; multidimensional pruning; fast drop table
• Oracle Database 11g: More composite choices; REF partitioning; virtual column partitioning; interval partitioning; Partition Advisor

(The slide groups these features under core functionality, performance, and manageability.)

Oracle Partitioning Across Database Releases

The slide summarizes the ten years of partitioning development at Oracle. Note: REF partitioning enables pruning and partition-wise joins against child tables. Although performance seems to be the most visible improvement, do not forget about the rest. Partitioning must address all business-relevant areas of performance, manageability, and availability.

Page 160: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Datawarehousing Enhancements Chapter 7 - Page 6

11g Partitioning Enhancements

11g Partitioning Enhancements

• Interval partitioning
• System partitioning
• Composite partitioning enhancements
• Virtual column-based partitioning
• Reference partitioning
• Data Pump enhancement:
  – Single-partition transportable tablespace

Partitioning allows the DBA to employ a “divide and conquer” methodology for managing database tables, especially as those tables grow. Partitioned tables allow a database to scale for very large datasets while maintaining consistent performance, without unduly impacting administrative or hardware resources. Partitioning enables faster data access within an Oracle database. Whether a database has 10 GB or 10 TB of data, partitioning can speed up data access by orders of magnitude. With the introduction of Oracle Database 11g, the DBA will find a useful assortment of partitioning enhancements. These enhancements include:

• Addition of interval partitioning

• Addition of system partitioning

• Composite partitioning enhancements

• Addition of virtual column-based partitioning

• Addition of reference partitioning

Data Pump Enhancement: You can now export one or more partitions of a table without having to move the entire table. On import, you can choose to load partitions as is, merge them into a single table, or promote each into a separate table.

Page 161: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Datawarehousing Enhancements Chapter 7 - Page 7

Interval Partitioning

Interval Partitioning

• Interval partitioning is an extension of range partitioning.

• Partitions of a specified interval are created when inserted data exceeds all of the range partitions.

• At least one range partition must be created.
• Interval partitioning automates the creation of range partitions.

Before the introduction of interval partitioning, the DBA was required to explicitly define the range of values for each partition. Explicitly defining the bounds for each partition does not account for data growth, so this approach does not scale as the number of partitions grows. Interval partitioning is an extension of range partitioning, which instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the range partitions. You must specify at least one range partition. The range partitioning key value determines the high value of the range partitions, which is called the transition point, and the database creates interval partitions for data beyond that transition point. Interval partitioning fully automates the creation of range partitions. Managing the creation of new partitions can be a cumbersome and highly repetitive task. This is especially true for predictable additions of partitions covering small ranges, such as adding new daily partitions. Before implementing interval partitioning, you should carefully consider the following restrictions:

• You can specify only one partitioning key column and it must be of the NUMBER or DATE type.

• Interval partitioning is not supported for index-organized tables.

• You cannot create a domain index on an interval-partitioned table.

Page 162: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Datawarehousing Enhancements Chapter 7 - Page 8

Interval Partitioning: Example

Interval Partitioning: Example

CREATE TABLE sh.sales_interval
  PARTITION BY RANGE (time_id)
  INTERVAL (NUMTOYMINTERVAL(1, 'month'))
  (PARTITION P0 VALUES LESS THAN (TO_DATE('1-1-2002', 'dd-mm-yyyy')),
   PARTITION P1 VALUES LESS THAN (TO_DATE('1-1-2003', 'dd-mm-yyyy')),
   PARTITION P2 VALUES LESS THAN (TO_DATE('1-7-2003', 'dd-mm-yyyy')),
   PARTITION P3 VALUES LESS THAN (TO_DATE('1-1-2004', 'dd-mm-yyyy')))
  AS SELECT * FROM sh.sales WHERE time_id < TO_DATE('1-1-2004', 'dd-mm-yyyy');

Slide diagram: Partitions P0, P1, P2, and P3 form the range components; SYS_P0 … SYS_Pn form the interval components; the high bound of P3 is the transition point.

Consider the example in the slide, which illustrates the creation of an interval-partitioned table. The original CREATE TABLE statement specifies four (P0 to P3) partitions of varying widths. This portion of the table is range partitioned. It also specifies that beyond the transition point of 1-JAN-2004, partitions are created with a width of one month. These partitions are interval partitioned. The SYS_P0 partition is automatically created using this information when a row with a value of a date corresponding to January 2004 is inserted into the table. The high bound of partition P3 represents a transition point. P3 and all partitions below it (P0, P1, and P2 in this example) are in the range section, whereas all partitions beyond it fall into the interval section. The only argument to the INTERVAL clause is a constant of the INTERVAL type if the partitioning column is of the DATE type, and a constant of the NUMBER type if the partitioning column is of the NUMBER type. Currently, only partitioned tables in which the partitioning column is of the DATE or NUMBER type are supported. Interval partitions use a system-generated name of the SYS_Pn format. Interval partitioning has the following implications:

• You cannot specify MAXVALUE (an infinite upper bound); doing so would defeat the purpose of the automatic addition of partitions as needed.

• An interval-partitioned table does not allow NULL values for the partitioning key column.

Page 163: Oracle 11g Database New_featuresD52362

Copyright © 2007, Oracle. All rights reserved.

Datawarehousing Enhancements Chapter 7 - Page 9

Moving the Transition Point

Moving the Transition Point

• Creating an interval-partitioned table:

• Using the MERGEclause to move a transition point:

CREATE TABLE sales_interval
  PARTITION BY RANGE (time_id)
  INTERVAL (NUMTOYMINTERVAL(1, 'month'))
  (PARTITION P0 VALUES LESS THAN (TO_DATE('1-1-2004', 'dd-mm-yyyy')))
  AS SELECT * FROM sh.sales WHERE 1 = 0;

ALTER TABLE sh.sales_interval MERGE PARTITIONS P3, P4 INTO PARTITION P4;

Partitioned table maintenance operations can move a partition from the interval section to the range section, shifting the transition point upwards. Say you merge two partitions in the interval section together; the width of the resulting partition is then no longer the same as the interval, so you must move the resulting partition to the range section. If this is the first partition in the interval section, the semantics are straightforward. But consider the following example:

CREATE TABLE SALES_INTERVAL
  PARTITION BY RANGE (time_id)
  INTERVAL (NUMTOYMINTERVAL(1, 'month'))
  (PARTITION P0 VALUES LESS THAN (TO_DATE('1-1-2004', 'dd-mm-yyyy')))
  AS SELECT * FROM SH.SALES WHERE 1 = 0;

Rows come in for January 2004, March 2004, and April 2004 and create three corresponding partitions called, for illustrative purposes, P1, P3, and P4, respectively. The following statement is then executed:

ALTER TABLE SALES_INTERVAL MERGE PARTITIONS P3, P4 INTO PARTITION P4;

Therefore, after the merge, the table now has three partitions: P0 corresponding to values less than 1-JAN-2004, P1 corresponding to rows for January 2004 and P4 for rows in February, March, and April 2004.
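As an informal follow-up that is not part of the original example, you could confirm the remaining partitions and their bounds from the data dictionary:

SELECT partition_name, high_value
FROM   user_tab_partitions
WHERE  table_name = 'SALES_INTERVAL'
ORDER  BY partition_position;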

System Partitioning

System partitioning:
• Enables application-controlled partitioning for selected tables
• Provides the benefits of partitioning, but the partitioning and data placement are controlled by the application
• Does not employ partitioning keys (as used in other partitioning methods)
• Does not support partition pruning in the traditional sense

System partitioning enables application-controlled partitioning for arbitrary tables. The database simply provides the ability to break down a table into meaningless partitions. All other aspects of partitioning are controlled by the application. System partitioning provides the well-known benefits of partitioning (scalability, availability, and manageability), but the partitioning and actual data placement are controlled by the application. The most fundamental difference between system partitioning and other methods is that system partitioning does not have partitioning keys. So the mapping of the rows to a particular partition is not implicit. Instead, you specify the partition to which a row maps by using partition-extended syntax when inserting a row. Without a partitioning key, the usual performance benefits of partitioned tables are not available for system-partitioned tables. There is no support for traditional partition pruning or partition-wise joins. Partition pruning is achieved only by accessing the same partitions in the system-partitioned tables as those that were accessed in the base table. System-partitioned tables do provide the manageability advantages of equipartitioning. For example, a nested table can be created as a system-partitioned table that has the same number of partitions as the base table. A domain index can be backed up by a system-partitioned table that has the same number of partitions as the base table.

System Partitioning: Guidelines

The following operations are supported for system-partitioned tables:
• Partition maintenance operations and other data definition language (DDL) operations
• Creation of local indexes
• Creation of local bitmapped indexes
• Creation of global indexes
• All data manipulation language (DML) operations
• INSERT AS SELECT with partition-extended syntax:

INSERT INTO <table_name> PARTITION (<partition name | number | bind variable>) AS <subquery>

Because of the specific requirements of system partitioning, the following operations are not supported for system partitioning:

• Unique local indexes, because they require a partitioning key
• CREATE TABLE AS SELECT: Because there is no partitioning method, it is not possible to distribute rows to partitions. Instead, the user should first create the table and then insert rows into each partition.
• INSERT INTO <tabname> AS <subquery>
• SPLIT PARTITION operations

System Partitioning: Example

CREATE TABLE systab (c1 integer, c2 integer)
PARTITION BY SYSTEM
( PARTITION p1 TABLESPACE tbs_1,
  PARTITION p2 TABLESPACE tbs_2,
  PARTITION p3 TABLESPACE tbs_3,
  PARTITION p4 TABLESPACE tbs_4
);

Using the system-partitioned table:

INSERT INTO systab PARTITION (p1) VALUES (4,5);

INSERT INTO systab PARTITION (1) VALUES (4,5);

ALTER TABLE systab MERGE PARTITIONS p1, p2 INTO PARTITION p1;

The first example in the slide creates a table with four partitions, each with different physical attributes. Subsequent INSERT and MERGE statements must use the partition-extended syntax to identify the specific partition a row should go into; otherwise, the commands fail. For example, the values (4,5) can be inserted into any one of the four partitions using the syntax shown in the slide. Alternatively, you can use the following syntax:

INSERT INTO systab PARTITION (p2) VALUES (4,5);    /* partition p2 */
INSERT INTO systab PARTITION (2) VALUES (4,5);     /* second partition */
INSERT INTO systab PARTITION (:pno) VALUES (4,5);  /* pno bound to 1 or p1 */

The partition-extended syntax supports both numbers and bind variables. The use of bind variables is important because it allows cursor sharing of INSERT statements. DELETE and UPDATE statements do not require the partition-extended syntax. However, because there is no partition pruning, if the partition-extended syntax is omitted, the entire table is scanned to execute the operation. Again, this example highlights the fact that there is no implicit mapping from an ordered set of values (in this case, (4,5)) to any partition.
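Although the slide shows only INSERT, the same partition-extended syntax can be used with DELETE and UPDATE when you know the target partition; the following sketch assumes the systab table above:

-- Scans only partition p1 instead of the whole table
DELETE FROM systab PARTITION (p1) WHERE c1 = 4;

-- Without the partition clause the statement still works, but every partition is scanned
UPDATE systab SET c2 = c2 + 1 WHERE c1 = 4;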

Composite Partitioning Enhancements

• Range top level:
  – Range-Range
• List top level:
  – List-List
  – List-Hash
  – List-Range
• Interval top level:
  – Interval-Range
  – Interval-List
  – Interval-Hash

(Slide diagram: a top-level partitioning dimension of Range, List, or Interval, each of whose partitions is divided into subpartitions SP1 through SP4 by List, Range, or Hash.)

Previously, the only composite partitioning methods supported were Range-List and Range-Hash. With Oracle Database 11g, list partitioning can be a top-level partitioning method for composite partitioned tables, providing the List-List, List-Hash, and List-Range composite methods. Range partitioning now adds support for the Range-Range composite method. Interval partitioning supports the following composite partitioning methods: Interval-Range, Interval-List, and Interval-Hash.

Range-Range partitioning: Composite Range-Range partitioning enables logical range partitioning along two dimensions (for example, partition by order_date and range subpartition by shipping_date).

List-Range partitioning: Composite List-Range partitioning enables logical range subpartitioning within a given list-partitioning strategy (for example, list partition by country_id and range subpartition by order_date).

List-Hash partitioning: Composite List-Hash partitioning enables hash subpartitioning of a list-partitioned object (for example, to enable partition-wise joins).

List-List partitioning: Composite List-List partitioning enables logical list partitioning along two dimensions (for example, list partition by country_id and list subpartition by sales_channel).
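As a minimal sketch (the table, columns, and values below are illustrative and not part of the course schema), a List-Range composite table could be declared as follows:

CREATE TABLE orders_by_country
( order_id   NUMBER,
  country_id VARCHAR2(2),
  order_date DATE )
PARTITION BY LIST (country_id)
SUBPARTITION BY RANGE (order_date)
SUBPARTITION TEMPLATE
( SUBPARTITION sp_2006 VALUES LESS THAN (TO_DATE('01-JAN-2007','DD-MON-YYYY')),
  SUBPARTITION sp_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY')),
  SUBPARTITION sp_max  VALUES LESS THAN (MAXVALUE) )
( PARTITION p_emea VALUES ('DE','FR','GB'),
  PARTITION p_amer VALUES ('US','CA') );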

Composite Range-Range Partitioning: Example

CREATE TABLE sales
( prod_id       NUMBER(6) NOT NULL,
  cust_id       NUMBER NOT NULL,
  time_id       DATE NOT NULL,
  channel_id    CHAR(1) NOT NULL,
  promo_id      NUMBER(6) NOT NULL,
  quantity_sold NUMBER(3) NOT NULL,
  amount_sold   NUMBER(10,2) NOT NULL )
PARTITION BY RANGE (time_id)
SUBPARTITION BY RANGE (cust_id)
SUBPARTITION TEMPLATE
( SUBPARTITION sp1 VALUES LESS THAN (50000),
  SUBPARTITION sp2 VALUES LESS THAN (100000),
  SUBPARTITION sp3 VALUES LESS THAN (150000),
  SUBPARTITION sp4 VALUES LESS THAN (MAXVALUE) )
( PARTITION VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY')),
  PARTITION VALUES LESS THAN (TO_DATE('1-JUL-1999','DD-MON-YYYY')),
  PARTITION VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY')),
  PARTITION VALUES LESS THAN (TO_DATE('1-JAN-2000','DD-MON-YYYY')) );

Composite Range-Range partitioning enables logical range partitioning along two dimensions. In the example in the slide, the SALES table is created and range partitioned on time_id. Using a subpartition template, the SALES table is subpartitioned by range using cust_id as the subpartition key. Because of the template, all partitions have the same number of subpartitions with the same bounds as defined by the template. If no template is specified, a single default subpartition bound by MAXVALUE (Range) or the DEFAULT value (List) is created. Although the example in the slide illustrates the Range-Range methodology, the other new composite partitioning methods use a similar syntax and statement structure. All of the composite partitioning methods fully support the existing partition pruning methods for queries involving predicates on the subpartitioning key.

Virtual Column-Based Partitioning

• Virtual column values are derived by the evaluation of a function or expression.

• Virtual columns can be defined within a CREATE or ALTER TABLE operation.

• Virtual column values are not physically stored in the table but are evaluated on demand.

• Virtual columns can be indexed and used in queries and DML and DDL statements like other column types.

• Tables and indexes can be partitioned on a virtual column; even statistics can be gathered on them.

CREATE TABLE employees
( employee_id number(6) not null,
  …
  total_compensation AS (salary * (1 + commission_pct)) )

Columns of a table whose values are derived by computation of a function or an expression are known as virtual columns. These columns can be specified during a CREATE or ALTER table operation. Virtual columns share the same SQL namespace as other real table columns and conform to the data type of the underlying expression that describes it. These columns can be used in queries like any other table columns, providing a simple, elegant, and consistent mechanism of accessing expressions in a SQL statement. The values for virtual columns are not physically stored in the table, rather they are evaluated on demand. The functions or expressions describing the virtual columns should be deterministic and pure, meaning the same set of input values should return the same output values. Virtual columns can be used like any other table columns. They can be indexed and used in queries, DML statements, and DDL statements. Tables and indexes can be partitioned on a virtual column, and even statistics can be gathered on them. You can use virtual column partitioning to partition key columns defined on virtual columns of a table. Frequently, business requirements to logically partition objects do not match existing columns in a one-to-one manner. With the introduction of Oracle Database 11g, partitioning has been enhanced to allow a partitioning strategy defined on virtual columns, thus enabling a more comprehensive match of the business requirements.
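For instance, a minimal sketch (the table and column names are hypothetical) of adding a virtual column to an existing table and then indexing and querying it like any other column:

-- Assumes a table with amount_sold and quantity_sold columns
ALTER TABLE sales_facts ADD (unit_price AS (amount_sold / quantity_sold));

CREATE INDEX sales_facts_price_ix ON sales_facts (unit_price);

SELECT COUNT(*) FROM sales_facts WHERE unit_price > 100;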

Virtual Column-Based Partitioning: Example

CREATE TABLE employees
( employee_id        number(6) not null,
  first_name         varchar2(30),
  last_name          varchar2(40) not null,
  email              varchar2(25),
  phone_number       varchar2(20),
  hire_date          date not null,
  job_id             varchar2(10) not null,
  salary             number(8,2),
  commission_pct     number(2,2),
  manager_id         number(6),
  department_id      number(4),
  total_compensation AS (salary * (1 + commission_pct)) )
PARTITION BY RANGE (total_compensation)
( PARTITION p1 VALUES LESS THAN (50000),
  PARTITION p2 VALUES LESS THAN (100000),
  PARTITION p3 VALUES LESS THAN (150000),
  PARTITION p4 VALUES LESS THAN (MAXVALUE)
);

Consider the example in the slide. The EMPLOYEES table is created using the standard CREATE TABLE syntax. The total_compensation column is a virtual column calculated by multiplying salary by one plus commission_pct. The PARTITION BY RANGE clause then declares total_compensation (a virtual column) to be the partitioning key of the EMPLOYEES table. Partition pruning takes place for virtual column partition keys when the predicates on the partitioning key are of the following types:

• Equality or Like

• List

• Range

• Partition-extended names

Given a join operation between two tables, the optimizer recognizes when partition-wise join (full or partial) is applicable, decides whether to use it or not, and annotates the join properly when it decides to use it. This applies to both serial and parallel cases. To recognize full partition-wise join, the optimizer relies on the definition of equipartitioning of two objects. This definition includes the equivalence of the virtual expression on which the tables were partitioned.

Reference Partitioning

A table can now be partitioned based on the partitioning method of a table referenced in its referential constraint.
• The partitioning key is resolved through an existing parent-child relationship.
• The partitioning key is enforced by active primary key and foreign key constraints.
• Tables with a parent-child relationship can be equipartitioned by inheriting the partitioning key from the parent table without duplicating the key columns.

Reference partitioning allows the partitioning of a table based on the partitioning scheme of the table referenced in its referential constraint. The partitioning key is resolved through an existing parent-child relationship, enforced by active primary key and foreign key constraints. This means that tables with a parent-child relationship can be logically equipartitioned by inheriting the partitioning key from the parent table without duplicating the key columns. The logical dependency also automatically cascades partition maintenance operations, making application development easier and less error prone. To create a reference-partitioned table, you specify a PARTITION BY REFERENCE clause in the CREATE TABLE statement. This clause specifies the name of a referential constraint and this constraint becomes the partitioning referential constraint that is used as the basis for reference partitioning in the table. As with other partitioned tables, you can specify object-level default attributes, and can optionally specify partition descriptors that override the object-level defaults on a per-partition basis. Note: This partitioning method can be useful for nested table partitioning

Reference Partitioning: Example

CREATE TABLE orders
( order_id     NUMBER(12),
  order_date   TIMESTAMP,
  order_mode   VARCHAR2(8),
  customer_id  NUMBER(6),
  order_status NUMBER(2),
  order_total  NUMBER(8,2),
  sales_rep_id NUMBER(6),
  promotion_id NUMBER(6),
  CONSTRAINT orders_pk PRIMARY KEY (order_id) )
PARTITION BY RANGE (order_date)
( PARTITION Q1_2005 VALUES LESS THAN (TO_DATE('01-APR-2005','DD-MON-YYYY')),
  PARTITION Q2_2005 VALUES LESS THAN (TO_DATE('01-JUL-2005','DD-MON-YYYY')),
  PARTITION Q3_2005 VALUES LESS THAN (TO_DATE('01-OCT-2005','DD-MON-YYYY')),
  PARTITION Q4_2005 VALUES LESS THAN (TO_DATE('01-JAN-2006','DD-MON-YYYY'))
);

The example in the slide creates a table called ORDERS, which is range partitioned on order_date. It is created with four partitions: Q1_2005, Q2_2005, Q3_2005, and Q4_2005. This table is referenced in the creation of a reference-partitioned table in the next slide.

Reference Partitioning: Example

CREATE TABLE order_items
( order_id     NUMBER(12) NOT NULL,
  line_item_id NUMBER(3) NOT NULL,
  product_id   NUMBER(6) NOT NULL,
  unit_price   NUMBER(8,2),
  quantity     NUMBER(8),
  CONSTRAINT order_items_fk
    FOREIGN KEY (order_id) REFERENCES orders(order_id) )
PARTITION BY REFERENCE (order_items_fk);

The reference-partitioned child table ORDER_ITEMS in the slide is created with four partitions: Q1_2005, Q2_2005, Q3_2005, and Q4_2005, where each partition contains the order_items rows corresponding to orders in the respective parent partition.

If partition descriptors are provided, then the number of partitions described must be exactly equal to the number of partitions or subpartitions in the referenced table. If the parent table is a composite partitioned table, then the table has one partition for each subpartition of its parent; otherwise, the table has one partition for each partition of its parent. Partition bounds cannot be specified for the partitions of a reference-partitioned table.

The partitions of a reference-partitioned table can be named. If a partition is not explicitly named, then it inherits its name from the corresponding partition in the parent table, unless this inherited name conflicts with one of the explicit names given. In this case, the partition has a system-generated name. Partitions of a reference-partitioned table colocate with the corresponding partition of the parent table, if no explicit tablespace is specified for the reference-partitioned table’s partition.
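To see the inherited partitioning (an informal check, not part of the course example), you could compare the partitions of the parent and child tables:

SELECT table_name, partition_name
FROM   user_tab_partitions
WHERE  table_name IN ('ORDERS', 'ORDER_ITEMS')
ORDER  BY table_name, partition_position;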

Bitmap Join Index for IOT

(Slide diagram: a bitmap join index relating two index-organized tables, IOT A and IOT B.)

Oracle Database 11g extends bitmap join index support to Index Organized Tables (IOTs). A join index is an index on table T1 built for a column of a different table T2 using a join. Therefore, the index provides access to rows of T1 based on columns of T2. Join indexes can be used to avoid actual joins of tables or can reduce the volume of data to be joined by performing restrictions in advance. Bitmap join indexes are space efficient and can speed up queries using bit-wise operations. As in the case of bitmap indexes, these IOTs have an associated mapping table. Because IOT rows may change their position due to DML or index reorganization operations, the bitmap join index cannot rely on the physical row identifiers of the IOT rows. Instead, the row identifier of the mapping table associated with the IOT is used.
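A minimal sketch of the idea, using hypothetical tables; the MAPPING TABLE clause creates the mapping table whose row identifiers the bitmap join index relies on:

CREATE TABLE sales_iot
( sale_id NUMBER PRIMARY KEY,
  cust_id NUMBER,
  amount  NUMBER )
ORGANIZATION INDEX
MAPPING TABLE;

CREATE TABLE customers_dim
( cust_id   NUMBER PRIMARY KEY,
  cust_city VARCHAR2(30) );

-- Bitmap join index on the IOT: rows of sales_iot become accessible by customer city
CREATE BITMAP INDEX sales_iot_city_bjix
ON sales_iot (customers_dim.cust_city)
FROM sales_iot, customers_dim
WHERE sales_iot.cust_id = customers_dim.cust_id;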

Table Compression

Table compression offers a positive performance impact on queries accessing large amounts of data:
• The data is compressed by eliminating duplicate values in a database block.
• All database features and functions that work on regular blocks also work on compressed blocks.

ALTER TABLE <table_name> COMPRESS | NOCOMPRESS [FOR {ALL | DIRECT_LOAD} OPERATIONS]

The Oracle Database 11g compression feature allows relational tables to be stored in a compressed format, resulting in significant savings in disk storage, I/O, and redo logs. The table compression technique used is most advantageous for large data warehouses, with a positive impact on queries accessing large amounts of data as well as on data management operations such as backup and recovery. You need to retrieve less data from disk to satisfy a query or perform a backup, which simply reduces the amount of work that needs to be performed. Data is compressed by eliminating duplicate values in a database block; these values are stored at the beginning of the block in a symbol table. Therefore, all the information needed to re-create the uncompressed data in a block is available within that block. You can switch a table between compressed and uncompressed states using the ALTER TABLE command, but only blocks written after the change use the new setting. You can determine whether table compression is in use by querying the DBA_TABLES or USER_TABLES views. Note: The overhead associated with the initial compression may be an increase in CPU resources of up to 50%. This is the primary trade-off that needs to be taken into account when considering compression.
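For example, a short sketch (the table name is hypothetical, and the ALTER TABLE affects only blocks written afterward):

ALTER TABLE sales_history COMPRESS FOR ALL OPERATIONS;

SELECT table_name, compression, compress_for
FROM   user_tables
WHERE  table_name = 'SALES_HISTORY';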

Demonstrations

For further understanding, you can click the following links for demonstrations on:

• Manipulating Partitions in Oracle Database 11g
  [http://www.oracle.com/technology/obe/11gr1_db/bidw/partition/partition.htm]
• Using High-Speed Data Loading and Rolling Window Operations with Partitioning
  [http://www.oracle.com/technology/obe/11gr1_db/bidw/etl/etl.htm]
• Using Table Compression to Save Storage Costs
  [http://www.oracle.com/technology/obe/11gr1_db/perform/compress/compress.htm]

Summary

In this lesson, you should have learned how to:
• Implement partitioning on tables, indexes, and materialized views from SQL Access Advisor’s recommendations
• Use partitioning enhancements to gain significantly faster data access:
  – Interval partitioning
  – System partitioning
  – Composite partitioning enhancements
  – Virtual column-based partitioning
  – Reference partitioning


Chapter 8
Additional Performance Enhancements

Oracle Database 11g

Objectives

After completing this lesson, you should be able to:
• Gain flexibility in automatic statistics generation at the object level:
  – Set up statistics preferences.
  – Set up incremental, multicolumn, and expression statistics.
  – Defer statistics publishing.
• Use memory efficiently with Query Result Cache support
• Discuss the increased cursor shareability in Oracle Database 11g

Statistic Preferences: Overview

(Slide diagram: preference scopes nest from the statement level (parameters passed to gather_*_stats by the DBA or by the automated optimizer statistics gathering task), to the table level (set_table_prefs), schema level (set_schema_prefs), database level (set_database_prefs), and global level (set_global_prefs). Preferences are managed through DBMS_STATS set | get | delete | export | import calls; the settings shown are CASCADE, DEGREE, ESTIMATE_PERCENT, METHOD_OPT, NO_INVALIDATE, GRANULARITY, PUBLISH, INCREMENTAL, and STALE_PERCENT, and the effective values appear in DBA_TAB_STAT_PREFS.)

exec dbms_stats.set_table_prefs('SH','SALES','STALE_PERCENT','13');

In Oracle Database 11g, you can associate statistics gathering options that override the default behavior of the GATHER_*_STATS procedures and the automated optimizer statistics gathering task at the object or schema level. You use the DBMS_STATS package to manage these statistics gathering preferences. You can set, get, delete, export, and import those preferences at the table, schema, database, and global level. Global preferences are used for tables that do not have any preferences, whereas database preferences are used to set preferences on all tables. The preference values specified in various ways take precedence from the outer circles to the inner ones, as shown in the slide. The following options are new in Oracle Database 11g:

• PUBLISH: Publishes the statistics to the dictionary or stores them in a pending area.

• STALE_PERCENT: Determines the threshold level at which an object is considered as having stale statistics. The value is a percentage of the rows modified since the last statistics gathering. The example in the slide changes from the 10 percent default to 13 percent for SH.SALES only.

• INCREMENTAL: Gathers global statistics on partitioned tables using an incremental methodology

All the effective statistic preference settings are described in the DBA_TAB_STAT_PREFS view. Note: You control the global preference settings from Enterprise Manager Database Home > Server tab > Manage Optimizer Statistics link.
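A brief sketch of reading and clearing a preference with the same package (the calls below are standard DBMS_STATS procedures; the values are only illustrative):

-- What is the effective STALE_PERCENT for SH.SALES?
SELECT dbms_stats.get_prefs('STALE_PERCENT','SH','SALES') FROM dual;

-- Remove the table-level override and fall back to the global setting
exec dbms_stats.delete_table_prefs('SH','SALES','STALE_PERCENT');

-- Set a global preference used by tables without their own setting
exec dbms_stats.set_global_prefs('PUBLISH','FALSE');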

Partitioned Tables and Incremental Statistics: Overview

(Slide diagram: global statistics for a partitioned table are derived from the statistics of its quarterly partitions, Q1 1970 through Q1 2007.)

For a partitioned table, Oracle maintains statistics on each partition and the overall statistics for the table. Generally, if the table is range partitioned, very few partitions go through data modifications (DML). For example, consider a table that stores the sales transactions. The table is partitioned on sales date with each partition containing transactions for a yearly quarter. Most of the DML activity happens on the partition that stores transactions of the current quarter. The data in other partitions usually remains unchanged. In Oracle Database 10g, the system keeps track of DML monitoring information at the table and (sub)partition level. Statistics are gathered only for those partitions (in the example in the slide, the partition for the current quarter) that are significantly changed (threshold is 10%) since the last statistics gathering. However, global statistics are gathered by scanning the entire table, which makes global statistics very expensive on partitioned tables especially when some partitions are stored on slow devices and not often modified.

Partitioned Tables and Incremental Statistics in Oracle Database 11g

(Slide diagram: with a granularity that includes GLOBAL and INCREMENTAL=TRUE, per-partition synopses stored in the SYSAUX tablespace are combined into the global statistics; only changed partitions, such as the current quarter, need to be rescanned.)

Oracle Database 11g expedites the gathering of certain global statistics such as the number of distinct values (NDV). In contrast to the traditional way of scanning the entire table, there is a new mechanism to maintain certain global statistics by scanning only those partitions that have been changed and still make use of the statistics gathered before for those partitions that are unchanged. In short, these global statistics can be maintained incrementally using extra data structures called synopses. When used, synopses are maintained for all columns and for all partitions. This new mechanism has a larger space requirement than before, which is why synopses are stored in SYSAUX instead of SYSTEM. The new mechanism trades space for accuracy and speed of statistics maintenance. The DBMS_STATS package allows you to specify the granularity on a partitioned table. For example, you can specify AUTO, GLOBAL, GLOBAL and PARTITION, ALL, PARTITION, and SUBPARTITION. If the granularity specified includes GLOBAL and the table is marked as INCREMENTAL=TRUE for its gathering options, the global statistics are gathered using the synopsis mechanism. Moreover, statistics for changed partitions are gathered as well, no matter whether you specified PARTITION in the granularity or not. When used, synopses are automatically maintained by the system. Note: To maintain the pre-Oracle Database 11g functionality, you can specify INCREMENTAL=FALSE for the gathering options.
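A minimal sketch of enabling the incremental mechanism for one table (SH.SALES follows the earlier slide examples; the granularity value is one of the documented options):

exec dbms_stats.set_table_prefs('SH','SALES','INCREMENTAL','TRUE');

-- Only changed partitions are rescanned; global statistics come from the synopses
exec dbms_stats.gather_table_stats('SH','SALES', granularity => 'GLOBAL AND PARTITION');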

Hash-Based Sampling for Column Statistics

• Computing statistics for columns is the most expensive step in statistics gathering.

• The row sampling technique gives inaccurate results with skewed data distribution.

• A new approximate counting technique based on hash sampling is used when ESTIMATE_PERCENT is set to AUTO_SAMPLE_SIZE.
  – The old row sampling technique is used otherwise.

For query optimization, it is essential to have a good estimate of the number of distinct values. By default and without histograms, the optimizer uses the number of distinct values to evaluate the selectivity of a predicate on a column. Oracle Database 10g computes the number of distinct values by counting the distinct values found in a sample of the underlying table. This approach can present issues if columns have many nulls or a very skewed distribution, possibly leading to an underestimation of the number of distinct values. Oracle Database 11g uses a new approximate NDV method that provides an efficient and accurate way of gathering NDVs and other column statistics, using a hash-based algorithm. When you invoke a procedure from DBMS_STATS with the ESTIMATE_PERCENT gathering option set to AUTO_SAMPLE_SIZE (the default value), this new approximate NDV technique is used. Any other value for ESTIMATE_PERCENT preserves the old behavior of sampling the specified percentage of rows. You are encouraged to use AUTO_SAMPLE_SIZE for improved accuracy of the NDV calculation.
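For example (SH.CUSTOMERS is used here only as an illustration):

-- Default: approximate NDV computation based on hash sampling
exec dbms_stats.gather_table_stats('SH','CUSTOMERS', estimate_percent => dbms_stats.auto_sample_size);

-- Explicit percentage: falls back to the old row sampling technique
exec dbms_stats.gather_table_stats('SH','CUSTOMERS', estimate_percent => 10);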

Multicolumn Statistics: Overview

(Slide diagram: for the CARS table, without a column group the optimizer computes S(MAKE AND MODEL) = S(MAKE) x S(MODEL); after the steps below, it uses the group statistics, S(MAKE AND MODEL) = S(MAKE, MODEL), and the extension is listed in DBA_STAT_EXTENSIONS.)

select dbms_stats.create_extended_stats('jfv','cars','(make,model)') from dual;

exec dbms_stats.gather_table_stats('jfv','cars',-
  method_opt=>'for all columns size 1 for columns (make,model) size 3');

In Oracle Database 10g, in most cases, the query optimizer assumes that the values of columns used in a complex predicate are independent of each other. The optimizer therefore estimates the selectivity of a complex predicate by multiplying the selectivities of the individual predicates. When the columns are correlated, this approach underestimates the selectivity of the combined predicate. To circumvent this, Oracle Database 11g allows you to collect, store, and use the following statistics to capture functional dependency between two or more columns (also called groups of columns): number of distinct values, number of nulls, frequency histograms, and density. For example, consider a table CARS where you store information about cars. The MAKE and MODEL columns are highly correlated in that MODEL determines MAKE. This is a strong dependency, and both columns should be considered by the optimizer as highly correlated. You can specify that correlation to the optimizer using the CREATE_EXTENDED_STATS function shown in the example in the slide, and then compute the statistics for all columns, including the ones for the correlated groups you created. After creation, you can retrieve statistic extensions using the ALL|DBA|USER_STAT_EXTENSIONS views.

Expression Statistics: Overview

(Slide diagram: for a predicate on upper(MODEL) in the CARS table, the optimizer assumes a selectivity of 0.01. Creating a function-based index, CREATE INDEX upperidx ON cars(upper(model)), is still possible, but creating extended statistics on the expression, as shown below, is the recommended approach; the extension is listed in DBA_STAT_EXTENSIONS.)

select dbms_stats.create_extended_stats('jfv','cars','(upper(model))') from dual;

exec dbms_stats.gather_table_stats('jfv','cars',-
  method_opt=>'for all columns size 1 for columns (upper(model)) size 3');

Predicates involving expressions on columns are a big issue for the query optimizer. When computing selectivity on predicates of the form “function(Column) = constant,” the optimizer assumes a static selectivity value of one percent. This approach is inadequate and causes the optimizer to produce suboptimal plans. The query optimizer has been extended to better handle such predicates in limited cases, where functions preserve the data distribution characteristics of the column and thus allow the optimizer to use the columns statistics. An example of such a function is TO_NUMBER. Further enhancements were made to evaluate built-in functions during query optimization to derive better selectivity using dynamic sampling. Lastly, the optimizer collects statistics on virtual columns created to support function-based indexes. However, these solutions are either limited to a certain class of functions or work only for expressions used to create function-based indexes. When you use expression statistics in Oracle Database 11g, you use a more general solution that includes arbitrary user-defined functions and does not depend on the presence of function-based indexes.

Deferred Statistics Publishing: Overview

(Slide diagram: on the production system, GATHER_*_STATS with PUBLISH=FALSE writes pending statistics, visible in DBA_TAB_PENDING_STATS, instead of dictionary statistics. The pending statistics can be copied with EXPORT_PENDING_STATS, transported to a test system with expdp/impdp, and loaded with IMPORT_TABLE_STATS. Sessions test them by setting OPTIMIZER_USE_PENDING_STATISTICS=TRUE, while OPTIMIZER_USE_PENDING_STATISTICS=FALSE keeps using the dictionary statistics; PUBLISH_PENDING_STATS finally makes them current.)

The statistics-gathering operation automatically stores any new statistics in the data dictionary each time it completes the iteration for one object (table, partition, subpartition, or index). The optimizer uses these current statistics as soon as they are written to the data dictionary. This can impact the DBA, who may not be able to predict the effect of the new statistics until days or even weeks later. The statistics could also be inconsistent if table statistics are published before the statistics of its indexes, partitions, or subpartitions are available. In Oracle Database 11g, you can separate the gathering step from the publication step of optimizer statistics. There are two benefits from separating the two steps:

• Support the statistics-gathering operation as an atomic transaction: The statistics of all tables and its dependent objects (indexes, partitions, and subpartitions) in a schema are published at the same time. Therefore, the optimizer always has a consistent view of the statistics and if for some reason the gathering step fails, it is able to resume from where it left off when it is restarted using the DBMS_STATS.RESUME_GATHER_STATS procedure.

• Allow the DBA to validate the new statistics by running all or part of the workload using the newly gathered statistics on a test system. Then, when you are satisfied with the test results, proceed to the publishing step to make them current in the production environment.

When you set the PUBLISH gathering option to FALSE, the gathered statistics are stored in the pending statistics tables instead of being made current. These pending statistics are accessible from a number of views: {ALL|DBA|USER}_{TAB|COL|IND|TAB_HISTGRM}_PENDING_STATS.


To test the pending statistics, you have two options:

• Transfer the pending statistics to your own statistics table using the new DBMS_STATS.EXPORT_PENDING_STATS procedure and then export your statistics table to a test system where you can import it back, and render the pending statistics current using the DBMS_STATS.IMPORT_TABLE_STATS procedure.

• Enable session-private use of the pending statistics by setting the initialization parameter OPTIMIZER_USE_PENDING_STATISTICS to TRUE in your session. By default, this new initialization parameter is set to FALSE, which means that in your session, SQL statements are parsed using the current optimizer statistics. By setting it to TRUE in your session, you switch to the pending statistics instead.

After you have tested and are satisfied with the pending statistics, you can publish them as current in your production environment using the new DBMS_STATS.PUBLISH_PENDING_STATS procedure. Note: For more information about the DBMS_STATS package, refer to the PL/SQL Packages and Types Reference Guide.

Deferred Statistics Publishing: Example

1. exec dbms_stats.set_table_prefs('SH','CUSTOMERS','PUBLISH','false');

2. exec dbms_stats.gather_table_stats('SH','CUSTOMERS');

3. alter session set optimizer_use_pending_statistics = true;

4. Execute your workload from the same session.

5. exec dbms_stats.publish_pending_stats('SH','CUSTOMERS');

1. You use the SET_TABLE_PREFS procedure to set the PUBLISH option to FALSE. This prevents the next statistics-gathering operation from automatically publishing statistics as current. According to the first statement, this applies only to the SH.CUSTOMERS table.

2. You then gather statistics on the SH.CUSTOMERS table; they are stored in the pending area of the dictionary.

3. You can now test the new set of pending statistics from your session by setting OPTIMIZER_USE_PENDING_STATISTICS to TRUE.

4. You test the pending statistics by issuing queries against SH.CUSTOMERS from the same session.

5. If you are satisfied with the test results, you can use the PUBLISH_PENDING_STATS procedure to render the pending statistics for SH.CUSTOMERS current.

Note: To analyze the differences between the pending statistics and the current ones, you could export the pending statistics to your own statistics table, and then use the new DBMS_STATS.DIFF_TABLE_STATS function.

Query Result Cache

• You cache the result of a query or query block for future reuse.

• Cache is used across statements and sessions unless it is stale.

• Benefits:
  – Scalability
  – Reduction of memory usage

(Slide diagram: session 1 and session 2 issue the same SELECT; the result is stored in and then served from the shared Query Result Cache.)

The Query Result Cache enables explicit caching of query result sets in database memory. Applications see improved performance for queries which have a cache hit and avoid round trips to the server for sending the query and fetching the results. A separate shared memory pool is now used for storing and retrieving the cached results. Query retrieval from the Query Result Cache is faster than rerunning the query. Frequently executed queries see significant performance improvements when using the Query Result Cache. The query results stored in the cache become invalid when data in the database objects being accessed by the query is modified. In the graphic shown above, if the first session executes a query, it retrieves the data from the database and then caches the result in the SQL query result cache. If a second session executes the exact same query, it retrieves the result directly from the cache instead of using the disks. Note:

• Each node in a RAC configuration has a private result cache. Results cached on one instance cannot be used by another instance. However, invalidations work across instances. A special RCBG process is used on each instance to handle all synchronization operations between RAC instances related to the SQL query result cache.

• With parallel query, the entire result can be cached (in RAC, it is cached on the query coordinator instance), but individual parallel query processes cannot use the cache.

• For more information about using result caches in a RAC configuration, see the Oracle Database 11g Real Application Clusters documentation.

Setting Up the Query Result Cache

Set at the database level using the RESULT_CACHE_MODE initialization parameter. Values are as follows:
• MANUAL: Use the RESULT_CACHE hint to specify results to be stored in the cache.
• FORCE: All results are stored in the cache.

The query optimizer manages the result cache mechanism depending on the settings of the RESULT_CACHE_MODE parameter in the initialization parameter file. You can use this parameter to determine whether or not the optimizer automatically sends the results of queries to the result cache. You can set the RESULT_CACHE_MODE parameter at the system or session level with the possible values MANUAL or FORCE. When set to MANUAL (the default), you must specify, by using the RESULT_CACHE hint, that a particular result is to be stored in the cache. When set to FORCE, all results are stored in the cache. Note: For the FORCE setting, if the statement contains a [NO_]RESULT_CACHE hint, the hint takes precedence over the parameter setting.
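For example (illustrative only; the parameter can also be set system-wide in the parameter file):

SHOW PARAMETER result_cache_mode

ALTER SESSION SET result_cache_mode = FORCE;
ALTER SESSION SET result_cache_mode = MANUAL;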

Using the RESULT_CACHE Hint

SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM   employees
GROUP BY department_id;

SELECT /*+ NO_RESULT_CACHE */ department_id, AVG(salary)
FROM   employees
GROUP BY department_id;

--------------------------------------------------------------------------
| Id | Operation            | Name                       | Rows |
--------------------------------------------------------------------------
|  0 | SELECT STATEMENT     |                            |   11 |
|  1 |  RESULT CACHE        | 8fpza04gtwsfr6n595au15yj4y |      |
|  2 |   HASH GROUP BY      |                            |   11 |
|  3 |    TABLE ACCESS FULL | EMPLOYEES                  |  107 |
--------------------------------------------------------------------------

If you want to use the Query Result Cache, and the RESULT_CACHE_MODE initialization parameter is set to MANUAL, you must explicitly specify the RESULT_CACHE hint in your query. This introduces the ResultCache operator into the execution plan for the query. When you execute the query, the ResultCache operator looks up the result cache memory to check whether the result for the query already exists in the cache. If it exists, the result is retrieved directly out of the cache. If it does not yet exist in the cache, the query is executed and the result is returned as output and also stored in the result cache memory. If the RESULT_CACHE_MODE initialization parameter is set to FORCE, and you do not want to store the result of a query in the result cache, you must then use the NO_RESULT_CACHE hint in your query. For example, when the RESULT_CACHE_MODE value equals FORCE in the initialization parameter file, and you do not want to use the result cache for the EMPLOYEES table, you should use the NO_RESULT_CACHE hint.

Managing the Query Result Cache

The following initialization parameters can be used to manage the Query Result Cache:
• RESULT_CACHE_MAX_SIZE:
  – It sets the memory allocated to the result cache.
  – The result cache is disabled if you set the value to 0.
• RESULT_CACHE_MAX_RESULT:
  – It sets the maximum cache memory for a single result.
  – It defaults to 5%.
• RESULT_CACHE_REMOTE_EXPIRATION:
  – It sets the expiry time for cached results that reference remote objects.
  – It defaults to 0.

You can alter various parameter settings in the initialization parameter file to manage the Query Result Cache of your database. By default, the database allocates memory for the result cache in the System Global Area (SGA). The memory size allocated to the result cache depends on the memory size of the SGA as well as the memory management system.

• You can change the memory allocated to the result cache by setting the RESULT_CACHE_MAX_SIZE parameter. The result cache is disabled if you set the value to 0.

• Use the RESULT_CACHE_MAX_RESULT parameter to specify the maximum amount of cache memory that can be used by any single result. The default value is 5%, but you can specify any percent value between 1 and 100. This parameter can be implemented at the system and session level.

• Use the RESULT_CACHE_REMOTE_EXPIRATION parameter to specify the time (in number of minutes) for which a result that accesses remote database objects remains valid. The default value is 0.
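For example, these parameters might be set as follows (the values are purely illustrative):

ALTER SYSTEM SET result_cache_max_size = 2M;           -- total memory for the result cache
ALTER SYSTEM SET result_cache_max_result = 10;         -- a single result may use at most 10% of the cache
ALTER SYSTEM SET result_cache_remote_expiration = 60;  -- minutes of validity for results on remote objects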

Using the DBMS_RESULT_CACHE Package

Use the DBMS_RESULT_CACHE package to:
• Manage memory allocation for the query result cache
• View the status of the cache:
      SELECT DBMS_RESULT_CACHE.STATUS FROM DUAL;
• Retrieve statistics on the cache memory usage:
      EXECUTE DBMS_RESULT_CACHE.MEMORY_REPORT;
• Remove all existing results and clear cache memory:
      EXECUTE DBMS_RESULT_CACHE.FLUSH;
• Invalidate cached results depending on specified object:
      EXEC DBMS_RESULT_CACHE.INVALIDATE('JFV','MYTAB');

The DBMS_RESULT_CACHE package provides statistics, information, and operators that enable you to manage memory allocation for the Query Result Cache. You can use the DBMS_RESULT_CACHE package to perform various operations such as viewing the status of the cache, retrieving statistics on the cache memory usage, and flushing the cache. For example, to view the memory allocation statistics, use the following SQL procedure:

SQL> set serveroutput on

SQL> execute dbms_result_cache.memory_report

The output of this command is similar to the following (report is truncated):

R e s u l t   C a c h e   M e m o r y   R e p o r t
[Parameters]
Block Size          = 1024 bytes
Maximum Cache Size  = 950272 bytes (928 blocks)
Maximum Result Size = 47104 bytes (46 blocks)
... State Object Pool = 2852 bytes [0.003% of the Shared Pool]
... Cache Memory = 32792 bytes (32 blocks) [0.034% of the Shared Pool]
....... Unused Memory = 30 blocks

Note: For more information, refer to the PL/SQL Packages and Types Reference Guide.

Viewing Information About the Query Result Cache

The following views provide information about the Query Result Cache:

(G)V$RESULT_CACHE_STATISTICS   Lists the various cache settings and memory usage statistics
(G)V$RESULT_CACHE_MEMORY       Lists all the memory blocks and the corresponding statistics
(G)V$RESULT_CACHE_OBJECTS      Lists all the objects (cached results and dependencies) along with their attributes
(G)V$RESULT_CACHE_DEPENDENCY   Lists the dependency details between the cached results and dependencies

Note: For further information, see the Oracle Database Reference 11g Release 1 (11.1).
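For instance, a quick look at the current cache statistics and a few cached objects (illustrative queries only):

SELECT name, value
FROM   v$result_cache_statistics;

SELECT type, status, name
FROM   v$result_cache_objects
WHERE  ROWNUM <= 10;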

Oracle Call Interface Client Query Cache

• Extends server-side query caching to client-side memory
• Ensures better performance by eliminating round trips to the server
• Leverages client-side memory
• Improves server scalability by saving server CPU resources
• Automatically refreshes the result cache if the result set is changed on the server
• Is particularly good for lookup tables

You can enable caching of query result sets in client memory with the Oracle Call Interface (OCI) Client Query Cache in Oracle Database 11g. The cached result set data is transparently kept consistent with any changes done on the server side. Applications leveraging this feature see improved performance for queries that have a cache hit. Additionally, a query serviced by the cache avoids round trips to the server for sending the query and fetching the results. Server CPU that would have been consumed for processing the query is reduced, thereby improving server scalability. Before using the client-side query cache, you need to determine whether your application will benefit from this feature. Client-side caching is useful when you have applications that produce repeatable result sets, small result sets, static result sets, or frequently executed queries on database objects that do not change often. Client and server result caches are autonomous; each can be enabled or disabled independently. Note: You can monitor the client query cache using the client_result_cache_stats$ view.

Setting the OCI Client Query Cache

You can use client-side query caching by:
• Setting initialization parameters:
  – CLIENT_RESULT_CACHE_SIZE
  – CLIENT_RESULT_CACHE_LAG
• Using the client configuration file:
  – OCI_RESULT_CACHE_MAX_SIZE
  – OCI_RESULT_CACHE_MAX_RSET_SIZE
  – OCI_RESULT_CACHE_MAX_RSET_ROWS

The client result cache is then used depending on RESULT_CACHE hints in your SQL statements.

The following two parameters can be set in the initialization parameter file:

• CLIENT_RESULT_CACHE_SIZE: A nonzero value enables the OCI Client Query Cache. This is the maximum size, in bytes, of the per-process result set cache on the client. All OCI client processes get this maximum size; it can be overridden by the OCI_RESULT_CACHE_MAX_SIZE parameter.

• CLIENT_RESULT_CACHE_LAG: This indicates the maximum time (in milliseconds) since the last round trip to the server, before which the OCI Client Query Cache makes a round trip to get any database changes related to the queries cached on the client.

A client configuration file is optional and overrides the cache parameters set in the server initialization parameter file. Parameter values can be part of a sqlnet.ora file. When parameter values shown above are specified, OCI client caching is enabled for OCI client processes using the configuration file. OCI_RESULT_CACHE_MAX_RSET_SIZE/ROWS denotes the maximum size of any result set in bytes/rows in the per-process query cache. OCI applications can use application hints to force result cache storage. The application hints can be SQL hints:

• /*+ result_cache */

• /*+ no_result_cache */

Note: Your applications must be relinked with Release 11.1 or higher client libraries and be connected to a Release 11.1 or higher server to use this feature.

PL/SQL Function Cache

(Slide diagram: the first call to HR.Calculate_Comp executes the function and caches its result; subsequent calls with the same parameter values are served from the cached results.)

You can also enable result caching for a function with the RESULT_CACHE clause in your PL/SQL function. You can optionally use the RELIES_ON clause specifying any database objects that the function depends on, so that if any of them are updated, the cached result becomes invalid and must be recomputed. The benefits are significant savings in space and response time. The cache is instance-wide, so that all distinct sessions invoking the function benefit. The best candidates for result caching are functions that are called frequently but depend on information that changes infrequently or never.

PL/SQL Function Cache: Example

• Include the RESULT_CACHE option in the function declaration section of a package or function definition.
• Optionally include the RELIES_ON clause to specify any tables or views on which the function results depend.

CREATE OR REPLACE FUNCTION productName (prod_id NUMBER, lang_id VARCHAR2)
  RETURN NVARCHAR2
  RESULT_CACHE RELIES_ON (product_descriptions)
IS
  result VARCHAR2(50);
BEGIN
  SELECT translated_name INTO result
  FROM   product_descriptions
  WHERE  product_id = prod_id AND language_id = lang_id;
  RETURN result;
END;

In the example shown above, the productName function has result caching enabled through the RESULT_CACHE option in the function declaration. In this example, the RELIES_ON clause is used to identify the PRODUCT_DESCRIPTIONS table on which the function results depend.

Automatic “Native” Compilation

• More than 100% faster for pure PL/SQL or Java code
• 10% to 30% faster for typical transactions with SQL
  – PL/SQL parameter: plsql_code_type
    — Just one value: NATIVE or INTERPRETED
    — No need for a C compiler
    — No file system DLLs
  – Java parameter: java_jit_enabled
    — Just one value: TRUE or FALSE
    — JIT “on-the-fly” compilation
    — Transparent to the user (asynchronous; in background)
    — Code stored to avoid recompilations

PL/SQL native compilation: The Oracle executable generates native dynamic-link libraries (DLL) directly from the PL/SQL source code without needing to use a third-party C compiler. In Oracle Database 10g, the DLL is stored canonically in the database catalog. In Oracle Database 11g, when it is needed, the Oracle executable loads it directly from the catalog without needing to stage it first on the file system. The execution speed of natively compiled PL/SQL programs is never slower in Oracle Database 11g than in Oracle Database 10g; speed may in fact be improved in some cases by as much as an order of magnitude. The PL/SQL native compilation is automatically available with Oracle Database 11g. No third-party software (C compiler or DLL loader) is needed. Java native compilation: Enabled by default and similar to the Java Development Kit just-in-time (JDK JIT), this feature compiles Java in the database natively and transparently without the need of a C compiler. The JIT runs as an independent session in a dedicated Oracle server process. There is at most one compiler session per database instance; it is Oracle RAC aware and amortized over all Java sessions. As this feature removes the need for a C compiler, there are cost and license savings. There are also two major benefits:

• Increased performance of pure Java execution in the database

• Ease of use because it is activated transparently (without the need for an explicit command when Java is executed in the database)
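A minimal sketch of switching a PL/SQL unit to native compilation (my_proc is a hypothetical procedure):

ALTER SESSION SET plsql_code_type = 'NATIVE';   -- new compilations in this session are native

ALTER PROCEDURE my_proc COMPILE PLSQL_CODE_TYPE = NATIVE;

SELECT name, type, plsql_code_type
FROM   user_plsql_object_settings
WHERE  name = 'MY_PROC';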

Adaptive Cursor Sharing: Overview

• Adaptive Cursor Sharing allows for intelligent cursor sharing only for statements that use bind variables.
• Adaptive Cursor Sharing is used to compromise between cursor sharing and optimization.
• Adaptive Cursor Sharing benefits:
  – Automatically detects when different executions would benefit from different execution plans
  – Limits the number of generated child cursors to a minimum
  – Automated mechanism that cannot be turned off

One plan is not always appropriate for all bind values.

Bind variables allow the Oracle database to share a single cursor for multiple SQL statements, thereby reducing the amount of shared memory used to parse SQL statements. However, cursor sharing and SQL optimization are often two conflicting goals. Writing a SQL statement with literals provides more information for the optimizer and naturally leads to better execution plans, while increasing memory and CPU overhead caused by excessive hard parses. For statements using bind variables, Oracle9i Database introduced the concept of bind peeking, in which the optimizer looks at the bind values the first time the statement is executed. It then uses these values to determine an execution plan to be shared by all other executions of that statement. To benefit from bind peeking, it is assumed that cursor sharing is intended and that different invocations of the statement are supposed to use the same execution plan. If different invocations of the statement would significantly benefit from different execution plans, bind peeking is of no use in generating good execution plans. To address this issue, Oracle Database 11g introduces Adaptive Cursor Sharing. This offers a more sophisticated strategy designed not to share the cursor blindly, but rather to generate multiple plans per SQL statement with bind variables if the benefit of using multiple execution plans outweighs the parse time and memory usage overhead. However, because the purpose of using bind variables is to share cursors in memory, a compromise must be found regarding the number of child cursors that need to be generated.

Adaptive Cursor Sharing: Example

SELECT ... FROM ...
WHERE Job = :B1

Ename   Empno   Job
------  ------  -----
SMITH   6973    CLERK
ALLEN   7499    CLERK
WARD    7521    CLERK
CLARK   7782    CLERK
SCOTT   7788    CLERK
KING    8739    VP

Case 1: the bind value 'CLERK' matches five of the six rows.
Case 2: the bind value 'VP' matches one row (KING).

In the example above, assume that a query retrieves information from the EMPLOYEES table based on a bind variable. In case 1, if the bind variable value at hard parse is "CLERK," five out of six records are selected, so the execution plan is a full table scan. In case 2, if "VP" is the bind variable value at hard parse, one out of the six records is selected and the execution plan may be an index lookup. Instead of reusing one execution plan for every value of the bind variable, the optimizer therefore looks at the selectivity of the data and determines a different execution plan to retrieve the data. The following are the benefits of Adaptive Cursor Sharing (a SQL*Plus sketch of this scenario follows the list):

• The optimizer shares the plan when bind variable values are “equivalent.”

• Plans are marked with a selectivity range. If current bind values fall within the range, they use the same plan.

• The optimizer creates a new plan if bind variable values are not equivalent.

• The optimizer generates a new plan for each selectivity range.

• The optimizer avoids expensive table scans and index searches based on selectivity criteria, thereby speeding up data retrieval.
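Continuing the bind-variable form shown earlier, the following SQL*Plus sketch (again with illustrative object names) shows how the two bind values from the example could be executed and their plans inspected. Note that the optimizer typically marks a cursor bind-aware only after observing several executions, so the second plan may not appear on the very first run with the new value.

  VARIABLE job VARCHAR2(10)

  -- Case 1: a non-selective value; a full table scan is the likely plan
  EXEC :job := 'CLERK'
  SELECT ename, empno FROM employees WHERE job = :job;
  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);

  -- Case 2: a highly selective value; an index lookup is the likely plan
  EXEC :job := 'VP'
  SELECT ename, empno FROM employees WHERE job = :job;
  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);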

Adaptive Cursor Sharing Views

The following views provide information about Adaptive Cursor Sharing usage:

V$SQL: Two new columns show whether a cursor is bind-sensitive or bind-aware.

V$SQL_CS_SELECTIVITY: Shows the selectivity cubes stored for every predicate that contains a bind variable and whose selectivity is used in the cursor sharing checks.

V$SQL_CS_STATISTICS: Shows execution statistics of a cursor using different bind sets.

V$SQL_CS_HISTOGRAM: Shows the distribution of the execution count across the execution history histogram.

These views expose what is happening with Adaptive Cursor Sharing so that the DBA can diagnose any problems. Two new columns have been added to V$SQL:

• IS_BIND_SENSITIVE: Indicates if a cursor is bind-sensitive; value YES | NO. A query for which the optimizer peeked at bind variable values when computing predicate selectivities and where a change in a bind variable value may lead to a different plan is called bind-sensitive.

• IS_BIND_AWARE: Indicates if a cursor is bind-aware; value YES | NO. A cursor in the cursor cache that has been marked to use bind-aware cursor sharing is called bind-aware.

V$SQL_CS_HISTOGRAM: Shows the distribution of the execution count across a three-bucket execution history histogram.

V$SQL_CS_SELECTIVITY: Shows the selectivity cubes or ranges stored in a cursor for every predicate containing a bind variable and whose selectivity is used in the cursor sharing checks. It contains the text of the predicates and the selectivity range low and high values.

V$SQL_CS_STATISTICS: Adaptive Cursor Sharing monitors execution of a query and collects information about it for a while, and uses this information to decide whether to switch to using bind-aware cursor sharing for the query. This view summarizes the information that it collects to make this decision: for a sample of executions, it keeps track of rows processed, buffer gets, and CPU time. The PEEKED column has the value YES if the bind set was used to build the cursor, and NO otherwise.
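As an illustrative sketch only (the SQL text filter and the &sql_id substitution value are placeholders for the statement actually under investigation), a DBA might query these views as follows:

  -- Which child cursors exist, and are they bind-sensitive or bind-aware?
  SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
  FROM   v$sql
  WHERE  sql_text LIKE 'SELECT ename, empno FROM employees%';

  -- Selectivity ranges recorded for each predicate of one statement
  SELECT child_number, predicate, low, high
  FROM   v$sql_cs_selectivity
  WHERE  sql_id = '&sql_id';

  -- Execution-count distribution used to decide on bind awareness
  SELECT *
  FROM   v$sql_cs_histogram
  WHERE  sql_id = '&sql_id';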

Demonstrations

For further understanding, you can click the links below for demonstrations on:
• Using Extended Statistics to Optimize Multi-Column Relationships and Function Based Statistics
• Gathering and Publishing Statistics Independently
• Improving Application Performance Using Result Cache

Click the following links to further your understanding:

• Using Extended Statistics to Optimize Multi-Column Relationships and Function Based Statistics [http://www.oracle.com/technology/obe/11gr1_db/perform/multistats/multicolstats.htm]

• Gathering and Publishing Statistics Independently [http://www.oracle.com/technology/obe/11gr1_db/perform/gathstats/gathstats.htm]

• Improving Application Performance Using Result Cache [http://www.oracle.com/technology/obe/11gr1_db/perform/rescache/res_cache.htm]

Summary

In this lesson, you should have learned how to:
• Gain flexibility in automatic statistics generation at the object level:
– Set up statistics preferences.
– Set up incremental, multicolumn, and expression statistics.
– Defer statistics publishing.
• Use memory efficiently with Query Result Cache support
• Discuss the benefits of increased cursor shareability using Adaptive Cursor Sharing