Q. What is RMAN and how do you configure it?
A. RMAN is an Oracle Database client that performs backup and recovery tasks on your databases and automates administration of your backup strategies. It greatly simplifies the DBA's job by managing the backup, restore, and recovery of the production database's files. The tool integrates with sessions running on an Oracle database to perform a range of backup and recovery activities, including maintaining an RMAN repository of historical data about backups. No additional installation is required; RMAN is installed by default with the Oracle Database software. The RMAN environment consists of the utilities and databases that play a role in backing up your data. You can access RMAN through the command line or through Oracle Enterprise Manager.

RMAN Incremental Backups

RMAN incremental backups back up only those datafile blocks that have changed since a specified previous backup. We can make incremental backups of databases, individual tablespaces, or datafiles.

The primary reasons for making incremental backups part of your strategy are:

For use in a strategy based on incrementally updated backups, where these incremental backups are used to periodically roll forward an image copy of the database

To reduce the amount of time needed for daily backups

To save network bandwidth when backing up over a network

To be able to recover changes to objects created with the NOLOGGING option. For example, direct load inserts do not create redo log entries, so their changes cannot be reproduced with media recovery. They do, however, change data blocks and so are captured by incremental backups.

To reduce backup sizes for NOARCHIVELOG databases. Instead of making a whole database backup every time, you can make incremental backups. As with full backups, if you are in ARCHIVELOG mode, you can make incremental backups while the database is open; if the database is in NOARCHIVELOG mode, then you can only make incremental backups after a consistent shutdown.
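The incrementally updated backup strategy mentioned in the first bullet can be sketched as a daily two-command script (the tag name incr_update is an arbitrary example, not a required value; on the first run the BACKUP command creates the base level 0 image copy, and on later runs RECOVER COPY rolls the copy forward with the previous day's incremental):

RMAN> RECOVER COPY OF DATABASE WITH TAG 'incr_update';
RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_update' DATABASE;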

One effective strategy is to make incremental backups to disk, and then back up the resulting backup sets to a media manager with BACKUP AS BACKUPSET. The incremental backups are generally smaller than full backups, which limits the space required to store them until they are moved to tape. Then, when the incremental backups on disk are backed up to tape, it is more likely that tape streaming can be sustained, because all blocks of the incremental backup are copied to tape. There is no possibility of delay due to the time required for RMAN to locate changed blocks in the datafiles.

RMAN backups can be classified in these ways:

Full or incremental

Open or closed

Consistent or inconsistent

Note that these backup classifications apply only to datafile backups. Backups of other files, such as archivelogs and control files, always include the complete file and are never inconsistent.

Backup Type: Definition

Full: A backup of a datafile that includes every allocated block in the file being backed up. A full backup of a datafile can be an image copy, in which case every data block is backed up. It can also be stored in a backup set, in which case datafile blocks not in use may be skipped.

A full backup cannot be part of an incremental backup strategy; that is, it cannot be the parent for a subsequent incremental backup.

Incremental: An incremental backup is either a level 0 backup, which includes every block in the file except blocks compressed out because they have never been used, or a level 1 backup, which includes only those blocks that have been changed since the parent backup was taken.

A level 0 incremental backup is physically identical to a full backup. The only difference is that the level 0 backup is recorded as an incremental backup in the RMAN repository, so it can be used as the parent for a level 1 backup.

Open: A backup of online, read/write datafiles while the database is open.

Closed: A backup of any part of the target database when it is mounted but not open. Closed backups can be consistent or inconsistent.

Consistent: A backup taken when the database is mounted (but not open) after a normal shutdown. The checkpoint SCNs in the datafile headers match the header information in the control file. None of the datafiles has changes beyond its checkpoint. Consistent backups can be restored without recovery.

Note: If you restore a consistent backup and open the database in read/write mode without recovery, transactions after the backup are lost. You still need to perform an OPEN RESETLOGS.

Inconsistent: A backup of any part of the target database when it is open, or when a crash occurred or SHUTDOWN ABORT was run prior to mounting. An inconsistent backup requires recovery to become consistent.

The goal of an incremental backup is to back up only those data blocks that have changed since a previous backup. You can use RMAN to create incremental backups of datafiles, tablespaces, or the whole database.

During media recovery, RMAN examines the restored files to determine whether it can recover them with an incremental backup. If it has a choice, then RMAN always chooses incremental backups over archived logs, as applying changes at a block level is faster than reapplying individual changes.

RMAN does not need to restore a base incremental backup of a datafile in order to apply incremental backups to the datafile during recovery. For example, you can restore non-incremental image copies of the datafiles in the database, and RMAN can recover them with incremental backups.

Incremental backups allow faster daily backups, use less network bandwidth when backing up over a network, and provide better performance when tape I/O bandwidth limits backup performance. They also allow recovery of database changes not reflected in the redo logs, such as direct load inserts. Finally, incremental backups can be used to back up NOARCHIVELOG databases, and are smaller than complete copies of the database (though they still require a clean database shutdown).

One effective strategy is to make incremental backups to disk (as image copies), and then back up these image copies to a media manager with BACKUP AS BACKUPSET. Then, you do not have the problem of keeping the tape streaming that sometimes occurs when making incremental backups directly to tape. Because incremental backups are not as big as full backups, you can create them on disk more easily.
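The disk-to-tape step described above can be performed with a command like the following (this sketch assumes a media manager has been installed and an sbt channel configured for it):

RMAN> BACKUP DEVICE TYPE sbt BACKUPSET ALL;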

Incremental Backup Algorithm

Each data block in a datafile contains a system change number (SCN), which is the SCN at which the most recent change was made to the block. During an incremental backup, RMAN reads the SCN of each data block in the input file and compares it to the checkpoint SCN of the parent incremental backup. If the SCN in the input data block is greater than or equal to the checkpoint SCN of the parent, then RMAN copies the block.

Note that if you enable the block change tracking feature, RMAN can refer to the change tracking file to identify changed blocks in datafiles without scanning the full contents of the datafile. Once enabled, block change tracking does not alter how you take or use incremental backups, other than offering increased performance.
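Block change tracking can be enabled and verified with commands like the following (the file path here is a hypothetical example; V$BLOCK_CHANGE_TRACKING shows the current status):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct.chg';
SQL> SELECT status, filename FROM v$block_change_tracking;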

Level 0 and Level 1 Incremental Backups

Incremental backups can be either level 0 or level 1. A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, backing up the datafile into a backup set just as a full backup would. The only difference between a level 0 incremental backup and a full backup is that a full backup is never included in an incremental strategy.
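A level 0 incremental backup of the whole database is taken with:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;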

A level 1 incremental backup can be either of the following types:

A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0

A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0

Incremental backups are differential by default.

Note: Cumulative backups are preferable to differential backups when recovery time is more important than disk space, because during recovery each differential backup must be applied in succession. Use cumulative incremental backups instead of differential ones if enough disk space is available to store them.
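A cumulative level 1 backup of the whole database is taken by adding the CUMULATIVE keyword:

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;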

The size of the backup file depends solely upon the number of blocks modified and the incremental backup level.

Differential Incremental Backups

In a differential level 1 backup, RMAN backs up all blocks that have changed since the most recent cumulative or differential incremental backup, whether at level 1 or level 0. RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 backup is available, RMAN copies all blocks changed since the level 0 backup.

The following command performs a level 1 differential incremental backup of the database:

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If compatibility is >= 10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup. In other words, the SCN at the time the incremental backup is taken is the file creation SCN. If compatibility is < 10.0.0, RMAN generates a level 0 backup of the file contents instead.

What's New in Oracle 11g

The following new features were introduced with Oracle 11g:

Initialization parameters can be reset to their default value: SQL> alter system [SID=instance-name] reset parameter-name;

New DIAGNOSTIC_DEST parameter as a replacement for BACKGROUND_DUMP_DEST, CORE_DUMP_DEST and USER_DUMP_DEST. It defaults to $ORACLE_BASE/diag/.

From 11g, we have two alert log files. One is the traditional alert_SID.log (in DIAGNOSTIC_DEST/trace) and the other is a log.xml file (in DIAGNOSTIC_DEST/alert). The XML file gives a lot more information than the traditional alert log file. If log.xml reaches 10MB in size, it will be renamed and a new alert log file will be created. log.xml can be accessed from the ADR command line: ADRCI> show alert

Logging information for DDL operations can be written into the alert log files; this is not enabled by default and we must change the new parameter to TRUE: SQL> ALTER SYSTEM SET enable_ddl_logging=TRUE SCOPE=BOTH;

Parameter (p) file and server parameter (sp) file can be created from memory: SQL> create pfile[=location] from memory; SQL> create spfile[=location] from memory;

From 11g, the server parameter file (spfile) is in a new format that is compliant with Oracle Hardware Assisted Resilient Data (HARD).

DDL wait option - Oracle will automatically wait for the specified time period during DDL operations and will try to run the DDL again: SQL> ALTER SYSTEM/SESSION SET DDL_LOCK_TIMEOUT = n;

We can define statistics to be pending, which means newly gathered statistics will not be published or used by the optimizer, giving us an opportunity to test the new statistics before we publish them.
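The pending statistics workflow can be sketched with the dbms_stats procedures below (EMP is a hypothetical table name used only for illustration):

SQL> exec dbms_stats.set_table_prefs(user, 'EMP', 'PUBLISH', 'FALSE');
SQL> exec dbms_stats.gather_table_stats(user, 'EMP');
SQL> alter session set optimizer_use_pending_statistics = true;  -- test queries against the pending stats
SQL> exec dbms_stats.publish_pending_stats(user, 'EMP');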

From Oracle Database 11g, we can create extended statistics (i) on expressions of values, not only on columns, and (ii) on multiple columns (column groups), not only on single columns.
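For example, a column group and an expression extension can be created with dbms_stats.create_extended_stats (EMP and its columns are hypothetical examples):

SQL> select dbms_stats.create_extended_stats(user, 'EMP', '(FIRST_NAME, LAST_NAME)') from dual;
SQL> select dbms_stats.create_extended_stats(user, 'EMP', '(LOWER(LAST_NAME))') from dual;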

Table level control of CBO statistics refresh threshold: SQL> exec dbms_stats.set_table_prefs('HR', 'EMP', 'STALE_PERCENT', '20');

Flashback Data Archive - flashback will make use of flashback archive logs, explicitly created for that table and stored in a designated tablespace; it does not use undo. Flashback data archives can be defined on any table/tablespace. Flashback data archives are written by a dedicated background process called FBDA, so there is less impact on performance. They can be purged at regular intervals automatically.

Analytic Workspace Manager(AWM) - a tool to manage OLAP objects in the database.

Users with default passwords can be found in DBA_USERS_WITH_DEFPWD.

Hash value of the passwords in DBA_USERS (in ALL_USERS and USER_USERS) will be blank. If you want to see the value, query USER$.

Default value for audit_trail is DB, not NULL. By default some system privileges will be audited.

LogMiner can be accessed from Oracle Enterprise Manager.

Data Guard improvements

Oracle Active Data Guard - Standby databases can now simultaneously be in read and recovery mode - so use it for running reports 24x7.

Online upgrades: Test on standby and roll to primary.

Snapshot standby database - a physical standby database can be temporarily converted into an updateable one, called a snapshot standby database.

Creation of a physical standby has become easier.

From Oracle 11g, we can control archive log deletion by setting the log_auto_delete initialization parameter to TRUE. The log_auto_delete parameter must be coupled with the log_auto_del_retention_target parameter to specify the number of minutes an archivelog is maintained until it is purged. Default is 24 hours (1440 minutes).

Incremental backup on a readable physical standby.

Offload: Complete database and fast incremental backups.

Logical standby databases now support XML and CLOB datatypes as well as transparent data encryption.

We can compress the redo data that goes to the standby server, by setting compression=enable.

From Oracle 11g, logical standby provides support for DBMS_SCHEDULER.

When transferring redo data to standby, if the standby does not respond in time, the log transferring service will wait for the specified timeout value (set by net_timeout=n) and then give up.

New package and procedure, DBMS_DG.INITIATE_FS_FAILOVER, introduced to programmatically initiate a failover.

SecureFiles

SecureFiles provide faster access to unstructured data than normal file systems, combining the benefits of LOBs and external files. For example, write access to SecureFiles is faster than a standard Linux file system, while read access is about the same. SecureFiles can be encrypted for security, de-duplicated and compressed for more efficient storage, cached (or not) for faster access (or to save buffer cache space), and logged at several levels to reduce the mean time to recover (MTTR) after a crash.

create table table-name ( ... lob-column lob-type ... ) lob (lob-column) store as securefile ([deduplicate] [compress high/low] [encrypt using 'encryption-algorithm'] [cache/nocache] [logging/nologging]);

To create SecureFiles: (i) the initialization parameter db_securefile should be set to PERMITTED (the default value); (ii) the tablespace where we are creating the securefile should be Automatic Segment Space Management (ASSM) enabled (the default mode in Oracle Database 11g).
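A concrete instance of the syntax above might look like this (the table name docs and its columns are hypothetical examples):

SQL> create table docs (id number, body clob)
       lob (body) store as securefile (compress high deduplicate cache);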

Real Application Testing (RAT)

Real Application Testing (RAT) makes decision making easier for migrations, upgrades, patching, initialization parameter changes, object changes, hardware replacements, operating system changes, and moving to a RAC environment. RAT consists of two components:

Database Replay - captures production workload and replays it on a different (standby/test/development) environment. Capture the activities from the source database in the form of capture files in a capture directory. Transfer these files to the target box. Replay the process on the target database.

SQL Performance Analyzer (SPA) - identifies SQL execution plan changes and performance regressions. SPA allows us to get results of some specific SQL or an entire SQL workload against various types of changes, such as initialization parameter changes, optimizer statistics refresh, and database upgrades, and then produces a comparison report to help us assess their impact. Accessible through Oracle Enterprise Manager or the dbms_sqlpa package.

Other features

A temporary tablespace or its tempfile can be shrunk, down to a specified size: SQL> alter tablespace temp-tbs shrink space; SQL> alter tablespace temp-tbs shrink space keep n{K|M|G|T|P|E}; SQL> alter tablespace temp-tbs shrink tempfile '.../temp03.dbf' keep n{K|M|G|T|P|E}; We can check free temp space in the new view DBA_TEMP_FREE_SPACE.

From 11g, while creating global temporary tables, we can specify TEMPORARY tablespaces.

Online application upgrades and hot patching. Feature-based patching is also available.

Real-time SQL Monitoring, allows us to see the different metrics of the SQL being executed in real time. The stats are exposed through V$SQL_MONITOR, which is refreshed every second.

"duality" between SQL and XML - users can embed XML within PL/SQL and vice versa.

New binary XML datatype, a new XML index & better XQuery support.

Query rewriting will occur more frequently and for remote tables also.

Automatic Diagnostic Repository (ADR) - automated capture of fault diagnostics for faster fault resolution. The location of the files depends on the DIAGNOSTIC_DEST parameter. This can be managed from Database Control or the command line. For the command line, execute: $ ./adrci

Repair advisors to guide DBAs through the fault diagnosis and resolution process.

SQL Developer is installed with the database server software (all editions). The Windows SQL*Plus GUI is deprecated.

APEX(Oracle Application Express), formerly known as HTML DB, shipped with the DB.

Checkers - DB Structure Integrity Checker, Data Block Integrity Checker, Redo Integrity Checker, Undo Segment Integrity Checker, Transaction Integrity Checker, Dictionary Integrity Checker.

The 11g SQL Access Advisor provides recommendations with respect to the entire workload, including considering the cost of creating and maintaining access structures.

hangman utility - the hangman (Hang Manager) utility detects database bottlenecks.

Health Monitor (HM) utility - an automation of the dbms_repair corruption detection utility.

The dbms_stats package has several new procedures to aid in supplementing histogram data, and the state of these extended histograms can be seen in the user_tab_col_statistics view: dbms_stats.create_extended_stats, dbms_stats.show_extended_stats_name, dbms_stats.drop_extended_stats.

New package DBMS_ADDM introduced in 11g.

Oracle 11g introduced server side connection pool called Database Resident Connection Pool (DRCP).

Desupported features

The following features are desupported/deprecated in Oracle Database 11g Release 1 (11.1.0):

Oracle export utility (exp). Import (imp) is still supported for backwards compatibility.

Windows SQL*Plus GUI & iSQLPlus will not be shipped anymore. Use SQL Developer instead.

Oracle Enterprise Manager Java console.

copy command is deprecated.

What's New in Oracle 10g

The following new features were introduced with Oracle 10g:

Oracle 10g Release 1 (10.1.0) - January 2004

Grid computing - an extension of the clustering feature (Real Application Clusters).

The SYSAUX tablespace has been introduced as an auxiliary to SYSTEM, as a locally managed tablespace.

Data Pump - faster data movement with expdp and impdp, the successor to the normal exp/imp.

The NID utility has been introduced to change the database name and ID.

Oracle Enterprise Manager (OEM) became browser based. Through any browser we can access data of a database in Oracle Enterprise Manager Database Control. Grid Control is used for accessing/managing multiple instances.

Automatic Storage Management (ASM). ASMB, RBAL, ARBx are the new background processes related to ASM.

Manageability improvements (self-tuning features).

Performance and scalability improvements.

Automatic Workload Repository (AWR).

Automatic Database Diagnostic Monitor (ADDM).

Active Session History (ASH).

Flashback operations available at row, transaction, table or database level.

Ability to UNDROP (Flashback Drop) a table using a recycle bin.

Ability to rename tablespaces (except SYSTEM and SYSAUX), whether permanent or temporary, using the following command: SQL> ALTER TABLESPACE oldname RENAME TO newname;
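Flashback Drop can be sketched as follows (emp2 is a hypothetical table; the RECYCLEBIN view shows droppped objects that can still be restored):

SQL> drop table emp2;
SQL> select object_name, original_name from recyclebin;
SQL> flashback table emp2 to before drop;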

Ability to transport tablespaces across platforms (e.g. Windows to Linux, Solaris to HP-UX) that have the same endian format. If the endian formats are different, we have to use RMAN.

In Oracle 10g, the undo tablespace can guarantee the retention of unexpired undo extents: SQL> CREATE UNDO TABLESPACE ... RETENTION GUARANTEE; SQL> ALTER TABLESPACE UNDO_TS RETENTION GUARANTEE;

New 'drop database' statement will delete the datafiles and redolog files mentioned in the control file, and will delete the SP file also: SQL> STARTUP RESTRICT MOUNT EXCLUSIVE; SQL> DROP DATABASE;

New memory structure in the SGA, the Streams pool (streams_pool_size parameter), useful for Data Pump activities and Streams replication.

Introduced new init parameter, sga_target, to change the value of the SGA dynamically. This is called Automatic Shared Memory Management (ASMM). It includes the buffer cache, shared pool, Java pool and large pool. It does not include the log buffer, the Streams pool, the buffer pools for nonstandard block sizes, or the non-default KEEP and RECYCLE pools. SGA_TARGET = DB_CACHE_SIZE + SHARED_POOL_SIZE + JAVA_POOL_SIZE + LARGE_POOL_SIZE

New background processes in Oracle 10g

Memory Manager (maximum 1) MMAN - MMAN dynamically adjusts the sizes of SGA components like the DB buffer cache, large pool, shared pool and Java pool. It is a new process added to Oracle 10g as part of automatic shared memory management.

Memory Monitor (maximum 1) MMON - MMON monitors SGA and performs various manageability related background tasks.

Memory Monitor Light (maximum 1) MMNL - a new background process in Oracle 10g that assists MMON with lighter-weight manageability tasks, such as capturing Active Session History data.

Change Tracking Writer (maximum 1) CTWR - CTWR records changed blocks in the block change tracking file, which RMAN uses for fast incremental backups.

ASMB - This process provides information to and from the cluster synchronization services used by ASM to manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.

Re-Balance RBAL - RBAL is the ASM related process that performs rebalancing of disk resources controlled by ASM.

Actual Rebalance ARBx - the ARBx processes perform the actual rebalance extent movements; their number is controlled by ASM_POWER_LIMIT.

DBA can specify a default tablespace for the database.

Temporary tablespace groups to group multiple temporary tablespaces into a single group.

From Oracle Database 10g, the ability to prepare the primary database and logical standby for a switchover, thus reducing the time to complete the switchover. On primary: ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY;

On logical standby,

ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY;

New packages

DBMS_SCHEDULER, which can call OS utilities and programs, not just PL/SQL program units like DBMS_JOB package. By using this package we can create jobs, programs, schedules and job classes.

DBMS_FILE_TRANSFER package to transfer files.

DBMS_MONITOR, to enable end-to-end tracing (tracing is not done only by session, but by client identifier).

DBMS_ADVISOR, will help in working with several advisors.

DBMS_WORKLOAD_REPOSITORY, to aid AWR, ADDM, ASH.

Support for bigfile tablespaces, up to 8EB (exabytes) in size.

Rules-Based Optimizer (RBO) is desupported (not deprecated).

Auditing: FGA (Fine-grained auditing) now supports DML statements in addition to selects.

New features in RMAN

Managing recovery-related files with the flash/fast recovery area.

Optimized incremental backups using block change tracking (faster incremental backups) using a file (named the block change tracking file). CTWR (Change Tracking Writer) is the background process responsible for tracking the blocks.

Reducing the time and overhead of full backups with incrementally updated backups.

Comprehensive backup job tracking and administration with Enterprise Manager.

Backup set binary compression.

New compression algorithm BZIP2 brought in.

Automated Tablespace Point-in-Time Recovery.

Automatic channel failover on backup & restore.

Cross-platform tablespace conversion.

Ability to preview the backups required to perform a restore operation: RMAN> restore database preview [summary]; RMAN> restore tablespace tbs1 preview;

SQL*Plus enhancements

The default SQL> prompt can be changed by setting the below parameters in $ORACLE_HOME/sqlplus/admin/glogin.sql

_connect_identifier (will prompt DBNAME>)

_date (will prompt DATE>)

_editor

_o_version

_o_release

_privilege (will prompt AS SYSDBA> or AS SYSOPER> or AS SYSASM>)

_sqlplus_release

_user (will prompt USERNAME>)

From 10g, the login.sql file is not only executed at SQL*Plus startup time, but at connect time as well, so the SQL prompt will be changed after a connect command.

Now we can log in as SYSDBA without the quotation marks: sqlplus / as sysdba (as well as the old sqlplus "/ as sysdba" or sqlplus '/ as sysdba'). This enhancement not only means we have two fewer characters to type, but provides some additional benefits, such as not requiring escape characters in operating systems such as Unix.

From Oracle 10g, the spool command can append to an existing file: SQL> spool result.log append

10g allows us to save statements as appended to files: SQL> Query1 .... SQL> save myscripts SQL> Query2 .... SQL> save myscripts append

The describe command can give the description of rules and rule sets.

Virtual Private Database (VPD) has grown into a very powerful feature with the ability to support a variety of requirements, such as masking columns selectively based on the policy and applying the policy only when certain columns are accessed. The performance of the policy can also be increased through multiple types of policy by exploiting the nature of the application, making the feature applicable to multiple situations.

We can now shrink segments, tables and indexes to reclaim free blocks, provided that Automatic Segment Space Management (ASSM) is enabled in the tablespace: SQL> alter table table-name shrink space;

From 10g, statistics are collected automatically if STATISTICS_LEVEL is set to TYPICAL or ALL. There is no need for the ALTER TABLE ... MONITORING command.

Statistics can be collected for SYS schema, data dictionary objects and fixed objects (x$ tables).

Complete refresh of materialized views will do a delete, instead of a truncate, when ATOMIC_REFRESH is set to TRUE.

Introduced Advisors

SQL Access Advisor

SQL Tune Advisor

Memory Advisor

Undo Advisor

Segment Advisor

MTTR (Mean Time To Recover) Advisor

Oracle 10g Release 2 (10.2.0) - September 2005

New asmcmd utility for managing ASM storage.

Async COMMITs.

Passwords for DB Links are encrypted.

Transparent Data Encryption.

Fast Start Failover for Data Guard was introduced in Oracle 10g R2.

The CONNECT role can now only connect (CREATE privileges are removed). Before 10g:

SQL> select PRIVILEGE from role_sys_privs where ROLE='CONNECT';

PRIVILEGE

----------------------------------------

CREATE VIEW

CREATE TABLE

ALTER SESSION

CREATE CLUSTER

CREATE SESSION

CREATE SYNONYM

CREATE SEQUENCE

CREATE DATABASE LINK

From 10g,

SYS> select PRIVILEGE from role_sys_privs where ROLE='CONNECT';

PRIVILEGE

----------------------------------------

CREATE SESSION

Undo Tablespace/Undo Management in Oracle

Oracle9i introduced automatic undo management.

What Is Undo and Why?

Oracle Database has a method of maintaining information that is used to roll back or undo changes to the database. Oracle Database keeps records of the actions of transactions before they are committed, and Oracle needs this information to roll back or undo the changes. These records are called rollback or undo records.

These records are used to:

Rollback transactions - when a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction.

Recover the database - during database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the data files.

Provide read consistency - undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.

Analyze data as of an earlier point in time by using Flashback Query.

Recover from logical corruptions using Flashback features.

Until Oracle 8i, Oracle used rollback segments to manage the undo data. Oracle9i introduced automatic undo management, which allows the DBA to exert more control over how long undo information is retained, simplifies undo space management, and eliminates the complexity of managing rollback segments. Oracle strongly recommends that you use an undo tablespace to manage undo rather than rollback segments.

Space for undo segments is dynamically allocated, consumed, freed, and reused, all under the control of Oracle Database rather than the DBA.

From Oracle 9i, the rollback segments method is referred to as "Manual Undo Management Mode" and the new undo tablespaces method as "Automatic Undo Management Mode".

Notes: Although both rollback segments and undo tablespaces are supported, both modes cannot be used in the same database instance. For migration purposes it is possible, for example, to create undo tablespaces in a database that is using rollback segments, or to drop rollback segments in a database that is using undo tablespaces. However, you must bounce the database in order to effect the switch to the other method of managing undo.

System rollback segment exists in both the modes.

When operating in automatic undo management mode, any manual undo management SQL statements and initialization parameters are ignored and no error message will be issued e.g. ALTER ROLLBACK SEGMENT statements will be ignored.

Automatic Undo Management

UNDO_MANAGEMENT

The following initialization parameter setting causes the STARTUP command to start an instance in automatic undo management mode:

UNDO_MANAGEMENT = AUTO

The default value for this parameter is MANUAL, i.e. manual undo management mode.

UNDO_TABLESPACE

UNDO_TABLESPACE is an optional dynamic parameter (it can be changed online) specifying the name of the undo tablespace to use. An undo tablespace must be available, into which the database will store undo records. The default undo tablespace is created at database creation, or an undo tablespace can be created explicitly.

When the instance starts up, the database automatically selects the first available undo tablespace. If no undo tablespace is available, the instance starts but uses the SYSTEM rollback segment for undo. This is not recommended, and an alert message is written to the alert log file to warn that the system is running without an undo tablespace. An ORA-01552 error is issued for any attempt to write non-SYSTEM undo to the SYSTEM rollback segment.

If the database contains multiple undo tablespaces, you can optionally specify at startup that you want an Oracle Database instance to use a specific undo tablespace. This is done by setting the UNDO_TABLESPACE initialization parameter.

UNDO_TABLESPACE = undotbs

In this case, if you have not already created the undo tablespace, the STARTUP command fails. The UNDO_TABLESPACE parameter can also be used to assign a specific undo tablespace to an instance in an Oracle Real Application Clusters (RAC) environment.
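In a RAC environment, each instance can be directed to its own undo tablespace by qualifying the setting with the SID clause; a minimal sketch, in which the instance names orcl1/orcl2 and tablespace names are assumptions:

```sql
-- Assign a dedicated undo tablespace to each RAC instance
-- (instance and tablespace names are illustrative)
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs1 SID = 'orcl1';
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs2 SID = 'orcl2';
```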

To find out the undo tablespaces in the database:

SQL> select tablespace_name, contents from dba_tablespaces where contents = 'UNDO';

To find out the current undo tablespace:

SQL> show parameter undo_tablespace

(or)

SQL> select value from v$parameter where name = 'undo_tablespace';

UNDO_RETENTION

Committed undo information is normally lost when its undo space is overwritten by a newer transaction. However, for consistent-read purposes, long-running queries sometimes require old undo information to undo changes and produce older images of data blocks. The success of several Flashback features can also depend on older undo information.

The default value for the UNDO_RETENTION parameter is 900. Retention is specified in seconds. This value specifies the amount of time undo is kept in the tablespace; the system retains undo for at least the time specified in this parameter.

You can set the UNDO_RETENTION in the parameter file:

UNDO_RETENTION = 1800

You can change the UNDO_RETENTION value at any time using:

SQL> ALTER SYSTEM SET UNDO_RETENTION = 2400;

The effect of the UNDO_RETENTION parameter is immediate, but it can only be honored if the current undo tablespace has enough space. If an active transaction requires undo space and the undo tablespace does not have available space, then the system starts reusing unexpired undo space (if retention is not guaranteed). This action can potentially cause some queries to fail with the ORA-01555 "snapshot too old" error message.

UNDO_RETENTION applies to both committed and uncommitted transactions, since the flashback query feature introduced in Oracle needs this information to create a read-consistent copy of the data in the past.

Oracle Database 10g automatically tunes undo retention by collecting database use statistics and estimating undo capacity needs for the successful completion of the queries. You can set a low threshold value for the UNDO_RETENTION parameter so that the system retains the undo for at least the time specified in the parameter, provided that the current undo tablespace has enough space. Under space constraints, the system may retain undo for a shorter duration than the low threshold value in order to allow DML operations to succeed.

The amount of time for which undo is retained for Oracle Database for the current undo tablespace can be obtained by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT dynamic performance view.

SQL> select tuned_undoretention from v$undostat;

Automatic tuning of undo retention is not supported for LOBs. The RETENTION value for LOB columns is set to the value of the UNDO_RETENTION parameter.

UNDO_SUPPRESS_ERRORS

If your code has ALTER TRANSACTION commands that perform manual undo management operations, set this parameter to TRUE to suppress the errors generated when manual undo management SQL operations are issued in automatic undo management mode.

UNDO_SUPPRESS_ERRORS = false

Retention Guarantee

Oracle Database 10g lets you guarantee undo retention. When you enable this option, the database never overwrites unexpired undo data, i.e. undo data whose age is less than the undo retention period. This option is disabled by default, which means the database can overwrite unexpired undo data to avoid failure of DML operations if there is not enough free space left in the undo tablespace.

You enable the guarantee option by specifying the RETENTION GUARANTEE clause for the undo tablespace when it is created by either the CREATE DATABASE or CREATE UNDO TABLESPACE statement or you can later specify this clause in an ALTER TABLESPACE statement. You do not guarantee that unexpired undo is preserved if you specify the RETENTION NOGUARANTEE clause.

In order to guarantee the success of queries even at the price of compromising the success of DML operations, you can enable retention guarantee. This option must be used with caution, because it can cause DML operations to fail if the undo tablespace is not big enough. However, with proper settings, long-running queries can complete without risk of receiving the ORA-01555 "snapshot too old" error message, and you can guarantee a time window in which the execution of Flashback features will succeed.

From 10g, you can use the DBA_TABLESPACES view to determine the RETENTION setting for the undo tablespace. A column named RETENTION contains a value of GUARANTEE, NOGUARANTEE, or NOT APPLY (used for tablespaces other than the undo tablespace).
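For example, the retention setting can be checked with a query along these lines:

```sql
-- Show the RETENTION setting of every tablespace;
-- only the undo tablespace shows GUARANTEE or NOGUARANTEE
SELECT tablespace_name, contents, retention
FROM   dba_tablespaces;
```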

A typical use of the guarantee option is when you want to ensure deterministic and predictable behavior of Flashback Query by guaranteeing the availability of the required undo data.

Size of Undo TablespaceYou can size the undo tablespace appropriately either by using automatic extension of the undo tablespace or by manually estimating the space.

Oracle Database supports automatic extension of the undo tablespace to facilitate capacity planning of the undo tablespace in the production environment. When the system is first running in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension for datafiles of the undo tablespace so that they automatically increase in size when more space is needed. By combining automatic extension of the undo tablespace with automatically tuned undo retention, you can ensure that long-running queries will succeed by guaranteeing the undo required for such queries.
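A minimal sketch of enabling automatic extension on an existing undo datafile, assuming the datafile path shown; capping growth with MAXSIZE keeps a runaway workload from filling the disk:

```sql
-- Enable autoextend on an undo datafile (the path is an assumption)
ALTER DATABASE DATAFILE '/path/undo01.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 10G;
```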

After the system has stabilized and you are more familiar with undo space requirements, Oracle recommends that you set the maximum size of the tablespace to be slightly (10%) more than the current size of the undo tablespace.

If you have decided on a fixed-size undo tablespace, the Undo Advisor can help you estimate the needed capacity, and you can then calculate the amount of retention your system will need. You can access the Undo Advisor through Enterprise Manager or through the DBMS_ADVISOR package.

The Undo Advisor relies for its analysis on data collected in the Automatic Workload Repository (AWR). An adjustment to the collection interval and retention period for AWR statistics can affect the precision and the type of recommendations the advisor produces.

Undo Advisor

Oracle Database provides an Undo Advisor that provides advice on and helps automate the establishment of your undo environment. You activate the Undo Advisor by creating an undo advisor task through the advisor framework. The following example creates an undo advisor task to evaluate the undo tablespace. The name of the advisor is 'Undo Advisor'. The analysis is based on AWR snapshots, which you must specify by setting the parameters START_SNAPSHOT and END_SNAPSHOT.

In the following example, the START_SNAPSHOT is "1" and END_SNAPSHOT is "2".

DECLARE
  tid   NUMBER;
  tname VARCHAR2(30);
  oid   NUMBER;
BEGIN
  DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
  DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
  DBMS_ADVISOR.EXECUTE_TASK(tname);
END;
/

Once you have created the advisor task, you can view the output and recommendations in the Automatic Database Diagnostic Monitor (ADDM) in Enterprise Manager. This information is also available in the DBA_ADVISOR_* data dictionary views.
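For instance, the findings and recommendations of such a task could be retrieved with queries of this shape (the task name 'Undo Advisor Task' is taken from the example; the column lists are kept minimal):

```sql
-- Findings produced by the undo advisor task
SELECT type, message
FROM   dba_advisor_findings
WHERE  task_name = 'Undo Advisor Task';

-- Recommendations, if any
SELECT rank, benefit
FROM   dba_advisor_recommendations
WHERE  task_name = 'Undo Advisor Task';
```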

Calculating space requirements for the undo tablespace

You can calculate space requirements manually using the following formula:

Undo Space = UNDO_RETENTION (in seconds) * undo blocks per second + overhead

where:

* Undo Space is the number of undo blocks

* overhead is the small overhead for metadata, based on extent and file size (DB_BLOCK_SIZE)

As an example, if UNDO_RETENTION is set to 2 hours and the transaction rate (UPS) is 200 undo blocks per second, with a 4K block size, the required undo space (ignoring overhead) is computed as follows:

(2 * 3600) * 200 * 4K = 5,898,240,000 bytes, i.e. roughly 5.5 GB

Such computation can be performed by using information in the V$UNDOSTAT view. In the steady state, you can query the view to obtain the transaction rate. The overhead figure can also be obtained from the view.
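Putting the formula and V$UNDOSTAT together, the required size can be estimated with a query in the following spirit; it is a sketch that uses the peak undo-block rate observed so far, and the undo tablespace name 'UNDOTBS1' is an assumption:

```sql
-- Estimated undo bytes = UNDO_RETENTION * peak undo blocks/sec * block size
SELECT (ur.value * peak.ups * bs.block_size) AS undo_bytes
FROM  (SELECT value FROM v$parameter WHERE name = 'undo_retention') ur,
      (SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) AS ups
       FROM v$undostat) peak,
      (SELECT block_size FROM dba_tablespaces
       WHERE tablespace_name = 'UNDOTBS1') bs;  -- tablespace name assumed
```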

Managing Undo Tablespaces

Creating Undo Tablespace

There are two methods of creating an undo tablespace. The first creates the undo tablespace when the CREATE DATABASE statement is issued. This occurs when you are creating a new database and the instance is started in automatic undo management mode (UNDO_MANAGEMENT = AUTO). The second method is used with an existing database: it uses the CREATE UNDO TABLESPACE statement.

You cannot create database objects in an undo tablespace. It is reserved for system-managed undo data.

Oracle Database enables you to create a single-file undo tablespace.

The following statement illustrates using the UNDO TABLESPACE clause in a CREATE DATABASE statement. The undo tablespace is named undotbs_01 and one datafile is allocated for it.

SQL> CREATE DATABASE ... UNDO TABLESPACE undotbs_01 DATAFILE '/path/undo01.dbf' RETENTION GUARANTEE;

If the undo tablespace cannot be created successfully during CREATE DATABASE, the entire operation fails.

The CREATE UNDO TABLESPACE statement is the same as the CREATE TABLESPACE statement, but the UNDO keyword is specified. The database determines most of the attributes of the undo tablespace, but you can specify the DATAFILE clause.

This example creates the undotbs_02 undo tablespace:

SQL> CREATE UNDO TABLESPACE undotbs_02 DATAFILE '/path/undo02.dbf' SIZE 2M REUSE AUTOEXTEND ON RETENTION NOGUARANTEE;

You can create more than one undo tablespace, but only one of them can be active at any one time.

Altering Undo Tablespace

Undo tablespaces are altered using the ALTER TABLESPACE statement. However, since most aspects of undo tablespaces are system managed, you need only be concerned with the following actions:

Adding a datafile

SQL> ALTER TABLESPACE undotbs ADD DATAFILE '/path/undo0102.dbf' AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;

Renaming a datafile

SQL> ALTER DATABASE RENAME FILE 'old_full_path' TO 'new_full_path';

Resizing a datafile

SQL>ALTER DATABASE DATAFILE 'data_file_name|data_file_number' RESIZE nK|M|G|T|P|E;

When resizing the undo tablespace you may encounter the ORA-03297 error: "file contains used data beyond requested RESIZE value". This means that some undo information is still stored beyond the size you want to set. You can find the highest used block, and therefore the minimum size to which a particular datafile can be resized, by querying the DBA_FREE_SPACE view. Another way to shrink the undo tablespace is to create another undo tablespace, make it the default one, take the old one offline, and then drop the big old tablespace.

Making a datafile online or offline

SQL> ALTER TABLESPACE undotbs offline;

SQL> ALTER TABLESPACE undotbs online;

Beginning or ending an open backup on a datafile

Enabling and disabling undo retention guarantee

SQL> ALTER TABLESPACE undotbs RETENTION GUARANTEE;

SQL> ALTER TABLESPACE undotbs RETENTION NOGUARANTEE;

These are also the only attributes you are permitted to alter.

If an undo tablespace runs out of space, or you want to prevent it from doing so, you can add more files to it or resize existing datafiles.

Dropping Undo Tablespace

Use the DROP TABLESPACE statement to drop an undo tablespace.

SQL> DROP TABLESPACE undotbs_01;

An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (e.g. a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails. However, since DROP TABLESPACE drops an undo tablespace even if it contains unexpired undo information (within the retention period), you must be careful not to drop an undo tablespace if its undo information is needed by existing queries.

DROP TABLESPACE for undo tablespaces behaves like DROP TABLESPACE ... INCLUDING CONTENTS. All contents of the undo tablespace are removed.

Switching Undo Tablespaces

You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace.

SQL> ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;

Assuming undotbs_01 is the current undo tablespace, after this command successfully executes, the instance uses undotbs_02 in place of undotbs_01 as its undo tablespace.

If any of the following conditions exist for the tablespace being switched to, an error is reported and no switching occurs:

The tablespace does not exist

The tablespace is not an undo tablespace

The tablespace is already being used by another instance (in RAC environment)

The database is online while the switch operation is performed, and user transactions can be executed while this command is being executed. When the switch operation completes successfully, all transactions started after the switch operation began are assigned to transaction tables in the new undo tablespace.

The switch operation does not wait for transactions in the old undo tablespace to commit. If there are any pending transactions in the old undo tablespace, the old undo tablespace enters into a PENDING OFFLINE mode. In this mode, existing transactions can continue to execute, but undo records for new user transactions cannot be stored in this undo tablespace.

An undo tablespace can exist in this PENDING OFFLINE mode, even after the switch operation completes successfully. A PENDING OFFLINE undo tablespace cannot be used by another instance, nor can it be dropped. Eventually, after all active transactions have committed, the undo tablespace automatically goes from the PENDING OFFLINE mode to the OFFLINE mode. From then on, the undo tablespace is available for other instances (in an RAC environment).
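The state of the undo segments, including any left in PENDING OFFLINE mode, can be observed with a query such as the following sketch; V$ROLLSTAT reports the per-segment status for the instance:

```sql
-- Undo segment names and their current status (e.g. ONLINE, PENDING OFFLINE)
SELECT n.name, s.status
FROM   v$rollname n, v$rollstat s
WHERE  n.usn = s.usn;
```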

If the parameter value for UNDO_TABLESPACE is set to '' (two single quotes), then the current undo tablespace is switched out and the next available undo tablespace is switched in. Use this statement with care, because if no undo tablespace is available, the SYSTEM rollback segment is used, and an ORA-01552 error is issued for any attempt to write non-SYSTEM undo to the SYSTEM rollback segment.

The following example unassigns the current undo tablespace:

SQL> ALTER SYSTEM SET UNDO_TABLESPACE = '';

Establishing User Quotas for Undo Space

The Oracle Database Resource Manager can be used to establish user quotas for undo space. The Database Resource Manager directive UNDO_POOL allows DBAs to limit the amount of undo space consumed by a group of users (resource consumer group).

You can specify an undo pool for each consumer group. An undo pool controls the total amount of undo that can be generated by a consumer group. When the total undo generated by a consumer group exceeds its undo limit, the current UPDATE transaction generating the undo is terminated. No other members of the consumer group can perform further updates until undo space is freed from the pool.

When no UNDO_POOL directive is explicitly defined, users are allowed unlimited undo space.
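A hedged sketch of creating such a directive with DBMS_RESOURCE_MANAGER follows; the plan and consumer group names are hypothetical, and UNDO_POOL is specified in kilobytes:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- plan and group names below are assumptions for illustration
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(plan => 'day_plan', comment => 'daytime plan');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(consumer_group => 'batch_grp',
                                              comment => 'batch users');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'day_plan',
    group_or_subplan => 'batch_grp',
    comment          => 'cap undo for batch users',
    undo_pool        => 10240);   -- 10 MB of undo, expressed in KB
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```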

Monitoring Undo Tablespaces

Oracle Database also provides proactive help in managing tablespace disk space use by alerting you when tablespaces run low on available space.

In addition to the proactive undo space alerts, Oracle Database also provides alerts if your system has long-running queries that cause SNAPSHOT TOO OLD errors. To prevent excessive alerts, the long query alert is issued at most once every 24 hours. When the alert is generated, you can check the Undo Advisor Page of Enterprise Manager to get more information about the undo tablespace.

The following dynamic performance views are useful for obtaining space information about the undo tablespace:

V$UNDOSTAT: Contains statistics for monitoring and tuning undo space. Use this view to help estimate the amount of undo space required for the current workload. Oracle uses this view's information to tune undo usage in the system.

V$ROLLSTAT: For automatic undo management mode, information reflects the behavior of the undo segments in the undo tablespace.

V$TRANSACTION: Contains undo segment information.

DBA_UNDO_EXTENTS: Shows the status and size of each extent in the undo tablespace.

WRH$_UNDOSTAT: Contains statistical snapshots of V$UNDOSTAT information.

WRH$_ROLLSTAT: Contains statistical snapshots of V$ROLLSTAT information.

To find out the undo segments in the database:

SQL> select segment_name, tablespace_name from dba_rollback_segs;

The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on undo space in the current instance. Statistics are available for undo space consumption, transaction concurrency, the tuning of undo retention, and the length and SQL ID of long-running queries in the instance. Each row of this view contains data for a 10-minute interval specified by BEGIN_TIME and END_TIME.

Each row in the view contains statistics collected in the instance for a 10-minute interval. The rows are in descending order by the BEGIN_TIME column value. Each row belongs to the time interval marked by (BEGIN_TIME, END_TIME), and each column represents the data collected for the particular statistic in that time interval. The first row of the view contains statistics for the (partial) current time period. The view contains a total of 1008 rows, spanning a 7-day cycle.
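For example, a quick health check over the most recent intervals might look like this sketch (SSOLDERRCNT counts ORA-01555 occurrences and NOSPACEERRCNT counts out-of-space errors):

```sql
-- Recent undo activity: blocks consumed, longest query, and error counts
SELECT begin_time, end_time, undoblks, txncount,
       maxquerylen, ssolderrcnt, nospaceerrcnt
FROM   v$undostat
ORDER  BY begin_time DESC;
```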

Flashback Features

Oracle Database includes several features that are based upon undo information and that allow administrators and users to access database information from a previous point in time. These features are part of the overall flashback strategy incorporated into the database and include:

Flashback Query

Flashback Versions Query

Flashback Transaction Query

Flashback Table

Flashback Database

The retention period for undo information is an important factor for the successful execution of flashback features. It determines how far back in time a database version can be established.

We must choose an undo retention interval that is long enough to enable users to construct a snapshot of the database for the oldest version of the database that they are interested in. For example, if an application requires that a version of the database be available reflecting its content 12 hours previously, then UNDO_RETENTION must be set to 43200 (seconds).

You might also want to guarantee that unexpired undo is not overwritten by specifying the RETENTION GUARANTEE clause for the undo tablespace.

Migration to Automatic Undo Management

If you are still using rollback segments to manage undo space, Oracle strongly recommends that you migrate your database to automatic undo management. From 10g, Oracle Database provides a function that provides information on how to size your new undo tablespace based on the configuration and usage of the rollback segments in your system. DBA privileges are required to execute this function:

set serveroutput on

DECLARE
  utbsize_in_MB NUMBER;
BEGIN
  utbsize_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
  dbms_output.put_line(utbsize_in_MB || 'MB');
END;
/

The function returns the required undo tablespace size in megabytes.

Best Practices for Undo Tablespace/Undo Management in Oracle

The following list of recommendations will help you manage your undo space to best advantage.

You need not set a value for the UNDO_RETENTION parameter unless your system has flashback or LOB retention requirements.

Allow 10 to 20% extra space in your undo tablespace to provide for some fluctuation in your workload.

Set the warning and critical alert thresholds for the undo tablespace alert properly.

To tune SQL queries or to check on runaway queries, use the value of the SQLID column provided in the long-query alert, or in the V$UNDOSTAT or WRH$_UNDOSTAT views, to retrieve the SQL text and other details about the SQL from the V$SQL view.

Transportable Tablespaces (TTS) in Oracle

We can use the transportable tablespaces feature to copy or move a subset of data (a set of user tablespaces) from one Oracle database and plug it in to another Oracle database. The tablespaces being transported can be either dictionary managed or locally managed.

With Oracle 8i, Oracle introduced transportable tablespace (TTS) technology, which moves tablespaces between databases. Oracle 8i supports tablespace transportation between databases that run on the same OS platform and use the same database block size.

With Oracle 9i, TTS (Transportable Tablespaces) technology was enhanced to support tablespace transportation between databases on platforms of the same type, but using different block sizes.

With Oracle 10g, TTS (Transportable Tablespaces) technology was further enhanced to support transportation of tablespaces between databases running on different OS platforms (e.g. Windows to Linux, Solaris to HP-UX) that have the same endian format. If the endian formats are different, you have to use RMAN to convert the datafiles (e.g. Windows to Solaris, Tru64 to AIX). From this version we can also transport a whole database; this is called Transportable Database.

From Oracle 11g, we can transport a single partition of a tablespace between databases.

You can also query the V$TRANSPORTABLE_PLATFORM view to see all the platforms that are supported, and to determine their platform names and IDs and their endian format.

SQL> select * from v$transportable_platform order by platform_id;

PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT
----------- ---------------------------------------- --------------
          1 Solaris[tm] OE (32-bit)                  Big
          2 Solaris[tm] OE (64-bit)                  Big
          3 HP-UX (64-bit)                           Big
          4 HP-UX IA (64-bit)                        Big
          5 HP Tru64 UNIX                            Little
          6 AIX-Based Systems (64-bit)               Big
          7 Microsoft Windows IA (32-bit)            Little
          8 Microsoft Windows IA (64-bit)            Little
          9 IBM zSeries Based Linux                  Big
         10 Linux IA (32-bit)                        Little
         11 Linux IA (64-bit)                        Little
         12 Microsoft Windows x86 64-bit             Little
         13 Linux x86 64-bit                         Little
         15 HP Open VMS                              Little
         16 Apple Mac OS                             Big
         17 Solaris Operating System (x86)           Little
         18 IBM Power Based Linux                    Big
         19 HP IA Open VMS                           Little
         20 Solaris Operating System (x86-64)        Little
         21 Apple Mac OS (x86-64)                    Little   (from Oracle 11g R2)

To find out your platform name and its endian format:

SQL> select tp.platform_name, tp.endian_format from v$database d, v$transportable_platform tp where d.platform_name = tp.platform_name;

PLATFORM_NAME                            ENDIAN_FORMAT
---------------------------------------- --------------
Solaris[tm] OE (64-bit)                  Big

Transporting tablespaces is particularly useful for:

(i) Updating data from production to development and test instances.

(ii) Updating data from OLTP systems to data warehouse systems.

(iii) Taking pieces of data out of the database for various reasons (archiving, moving to other databases, etc.).

(iv) Performing tablespace point-in-time recovery (TSPITR).

Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data, because transporting a tablespace only requires copying of datafiles and integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.

In Oracle 8i, there were three restrictions with TTS. First, both databases must have the same block size. Second, both platforms must run the same OS. Third, you cannot rename the tablespace. Oracle 9i removes the first restriction. Oracle 10g removes the second restriction, and also makes available a command to rename tablespaces.

Limitations/Restrictions

The following are limitations/restrictions of transportable tablespaces:

The SYSTEM, undo, SYSAUX and temporary tablespaces cannot be transported.

Prior to Oracle 10g, the source and target database had to be on the same hardware platform: for example, we can transport tablespaces between Sun Solaris databases, or between Windows NT databases, but not from a Sun Solaris database to a Windows NT database. From 10g, cross-platform transport is possible between platforms of the same endian format (with RMAN conversion when the endian formats differ).

The source and target database must use the same character set and national character set.

If Automatic Storage Management (ASM) is used with either the source or destination database, you must use RMAN to transport/convert the tablespace.

You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.

Transportable tablespaces do not support:

Materialized views/replication

Function-based indexes

Binary_Float and Binary_Double datatypes (new in Oracle 10g) are not supported.

At Source:

Validating the Self-Contained Property

TTS requires that all the tablespaces we are moving be self-contained. This means that the segments within the migration tablespace set cannot have dependencies on segments in tablespaces outside the transportable tablespace set. This can be checked using the DBMS_TTS.TRANSPORT_SET_CHECK procedure.

SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs', TRUE);

SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs1, tbs2, tbs3', FALSE, TRUE);

SQL> SELECT * FROM transport_set_violations;

No rows should be displayed

If the set is not self-contained, you should either remove the dependencies by dropping or moving the offending segments, or include in the TTS set the tablespaces of the segments on which the migration set depends.
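For example, a common violation is an index in the set whose base table lies outside it; one way to resolve it is to rebuild the index into, or move the table into, a tablespace of the set. The object and tablespace names below are hypothetical:

```sql
-- Move the offending segments into the transport set (names are assumptions)
ALTER INDEX scott.emp_idx REBUILD TABLESPACE tbs1;
ALTER TABLE scott.emp MOVE TABLESPACE tbs1;
```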

Put Tablespaces in READ ONLY Mode

We will perform a physical file-system copy of the tablespace datafiles. For those datafiles not to require recovery, they need to be consistent during the copy. So put all the tablespaces in READ ONLY mode.

SQL> alter tablespace tbs-name read only;

Export the Metadata

Export the metadata of the tablespace set.

exp FILE=/path/dump-file.dmp LOG=/path/tts_exp.log TABLESPACES=tbs-names TRANSPORT_TABLESPACE=y STATISTICS=none

(or)

expdp DUMPFILE=tts.dmp LOGFILE=tts_exp.log DIRECTORY=exp_dir TRANSPORT_TABLESPACES=tts TRANSPORT_FULL_CHECK=y

For transporting a partition:

expdp DUMPFILE=tts_partition.dmp LOGFILE=tts_partition_exp.log DIRECTORY=exp_dir TRANSPORTABLE=always TABLES=trans_table:partition_Q1

If the tablespace set being transported is not self-contained, then the export will fail.

You can drop the tablespaces at the source, if you don't want them.

SQL> drop tablespace tbs-name including contents;

Otherwise, make all the tablespaces READ WRITE again:

SQL> alter tablespace tbs-name read write;

Copying Datafiles and Export File

Copy the datafiles and the export file to the target server.

At Target:

Import the export file.

imp FILE=/path/dump-file.dmp LOG=/path/tts_imp.log TTS_OWNERS=user-name FROMUSER=user-name TOUSER=user-name TABLESPACES=tbs-name TRANSPORT_TABLESPACE=y DATAFILES=/path/tbs-name.dbf

(or)

impdp DUMPFILE=tts.dmp LOGFILE=tts_imp.log DIRECTORY=exp_dir REMAP_SCHEMA=master:scott TRANSPORT_DATAFILES='/path/tts.dbf'

For transporting a partition:

impdp DUMPFILE=tts_partition.dmp LOGFILE=tts_partition_imp.log DIRECTORY=exp_dir TRANSPORT_DATAFILES='/path/tts_part.dbf' PARTITION_OPTIONS=departition

Finally, we have to switch the new tablespaces into read write mode:

SQL> alter tablespace tbs-name read write;
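After the import, the plugged-in tablespaces can be verified with a query along these lines:

```sql
-- PLUGGED_IN = 'YES' marks transported tablespaces; STATUS shows READ ONLY/ONLINE
SELECT tablespace_name, status, plugged_in
FROM   dba_tablespaces;
```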

TRANSPORT TABLESPACE Using RMAN

Create transportable tablespace sets from backup for one or more tablespaces:

RMAN> TRANSPORT TABLESPACE example, tools TABLESPACE DESTINATION '/disk1/trans' AUXILIARY DESTINATION '/disk1/aux' UNTIL TIME 'SYSDATE-15/1440';

RMAN> TRANSPORT TABLESPACE exam TABLESPACE DESTINATION '/disk1/trans' AUXILIARY DESTINATION '/disk1/aux' DATAPUMP DIRECTORY dpdir DUMP FILE 'dmpfile.dmp' IMPORT SCRIPT 'impscript.sql' EXPORT LOG 'explog.log';

Using Transportable Tablespaces with a Physical Standby Database

We can use the Oracle transportable tablespaces feature to move a subset of an Oracle database and plug it in to another Oracle database, essentially moving tablespaces between the databases.

To move or copy a set of tablespaces into a primary database when a physical standby is being used, perform the following steps:

1. Generate a transportable tablespace set that consists of datafiles for the set of tablespaces being transported and an export file containing structural information for the set of tablespaces.

2. Transport the tablespace set:

a. Copy the datafiles and the export file to the primary database.

b. Copy the datafiles to the standby database. The datafiles must be copied to a directory defined by the DB_FILE_NAME_CONVERT initialization parameter. If DB_FILE_NAME_CONVERT is not defined, then issue the ALTER DATABASE RENAME FILE statement to modify the standby control file after the redo data containing the transportable tablespace has been applied and has failed. The STANDBY_FILE_MANAGEMENT initialization parameter must be set to AUTO.

3. Plug in the tablespace. Invoke the Data Pump utility to plug the set of tablespaces into the primary database. Redo data will be generated and applied at the standby site to plug the tablespace into the standby database.

Related Packages

DBMS_TTS

DBMS_EXTENDED_TTS_CHECKS


Temporary Tablespace Groups in Oracle

Oracle 10g introduced the concept of temporary tablespace groups. This allows grouping multiple temporary tablespaces into a single group and assigning a user this group of tablespaces instead of a single temporary tablespace.

A tablespace group lets you assign multiple temporary tablespaces to a single user and increases the addressability of temporary tablespaces.

A temporary tablespace group has the following properties:

It contains one or more temporary tablespaces (there is no upper limit).

It contains only temporary tablespaces.

It is not explicitly created. It is created implicitly when the first temporary tablespace is assigned to it, and is deleted when the last temporary tablespace is removed from the group.

There is no CREATE TABLESPACE GROUP statement; a group is created implicitly during the creation of a temporary tablespace when the CREATE TEMPORARY TABLESPACE command specifies the TABLESPACE GROUP clause.

Temporary Tablespace Group Benefits

It allows multiple default temporary tablespaces to be specified at the database level.

It allows the user to use multiple temporary tablespaces in different sessions at the same time.

Reduced contention when multiple temporary tablespaces are defined.

It allows a single SQL operation to use multiple temporary tablespaces for sorting.

Finer granularity so you can distribute operations across temporary tablespaces.

The following statement creates temporary tablespace temp as a member of the temp_grp tablespace group. If the tablespace group does not already exist, then Oracle Database creates it during execution of this statement.

SQL> CREATE TEMPORARY TABLESPACE temp
       TEMPFILE 'temp01.dbf' SIZE 5M AUTOEXTEND ON
       TABLESPACE GROUP temp_grp;

Adding a temporary tablespace to a temporary tablespace group:

SQL> ALTER TEMPORARY TABLESPACE temp TABLESPACE GROUP temp_grp;

Removing a temporary tablespace from a temporary tablespace group:

SQL> ALTER TEMPORARY TABLESPACE temp TABLESPACE GROUP '';

Assigning a temporary tablespace group to a user (same as assigning a temporary tablespace to a user):

SQL> ALTER USER scott TEMPORARY TABLESPACE temp_grp;

Assigning temporary tablespace group as default temporary tablespace:

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;

To see the tablespaces in the temporary tablespace group:

SQL> select * from DBA_TABLESPACE_GROUPS;

Related Views: DBA_TABLESPACE_GROUPS, DBA_TEMP_FILES, V$TEMPFILE, V$TEMPSTAT, V$TEMP_SPACE_HEADER, V$TEMPSEG_USAGE

Temporary Tablespace in Oracle

Oracle introduced temporary tablespaces in Oracle 7.3.

Temporary tablespaces are used to manage space for database sort and join operations and for storing global temporary tables. When joining two large tables or sorting a bigger result set, Oracle cannot always complete the operation in memory within SORT_AREA_SIZE in the PGA (Program Global Area), so space is allocated in a temporary tablespace for these operations. Other SQL operations that might require disk sorting are: CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, sort-merge joins, etc.
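The disk-sort mechanism described above is essentially an external merge sort: when the input exceeds the memory budget, sorted runs are spilled to temporary storage and merged afterwards. A minimal Python sketch of the general technique (the memory budget and file handling here are illustrative, not Oracle's actual sort internals):

```python
import heapq
import tempfile

def external_sort(values, memory_budget=4):
    """Sort an iterable while holding at most `memory_budget` items in
    memory, spilling sorted runs to temporary files (the role a temp
    tablespace plays for Oracle disk sorts)."""
    run_files = []
    buffer = []

    def spill():
        # Write the sorted in-memory run to a temp file, one value per line.
        f = tempfile.TemporaryFile(mode="w+")
        for v in sorted(buffer):
            f.write(f"{v}\n")
        f.seek(0)
        run_files.append(f)
        buffer.clear()

    for v in values:
        buffer.append(v)
        if len(buffer) >= memory_budget:
            spill()
    if buffer:
        spill()

    # heapq.merge streams the sorted runs without loading them all back.
    runs = [(int(line) for line in f) for f in run_files]
    return list(heapq.merge(*runs))

print(external_sort([5, 3, 9, 1, 7, 2, 8, 4, 6], memory_budget=3))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The temp files stand in for the sort segment: each one holds a run the session could not keep in memory, and they are discarded once the merge completes.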

Note that a temporary tablespace cannot contain permanent objects and therefore doesn't need to be backed up. A temporary tablespace contains schema objects only for the duration of a session.

Creating Temporary Tablespace

From Oracle 9i, we can specify a default temporary tablespace when creating a database, using the DEFAULT TEMPORARY TABLESPACE extension to the CREATE DATABASE statement, e.g.

SQL> CREATE DATABASE oracular
     .....
     DEFAULT TEMPORARY TABLESPACE temp_ts
     .....;

Oracle provides various ways of creating TEMPORARY tablespaces.

Prior to Oracle 7.3 - CREATE TABLESPACE temp DATAFILE ...;

Example:
SQL> CREATE TABLESPACE TEMPTBS DATAFILE '/path/temp.dbf' SIZE 2048M
     AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED
     LOGGING DEFAULT NOCOMPRESS ONLINE
     EXTENT MANAGEMENT DICTIONARY;

Oracle 7.3 & 8.0 - CREATE TABLESPACE temp DATAFILE ... TEMPORARY;

Example:
SQL> CREATE TABLESPACE TEMPTBS DATAFILE '/path/temp.dbf' SIZE 2048M
     AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED
     LOGGING DEFAULT NOCOMPRESS ONLINE TEMPORARY
     EXTENT MANAGEMENT DICTIONARY;

Oracle 8i and above - CREATE TEMPORARY TABLESPACE temp TEMPFILE ...;

Examples:
SQL> CREATE TEMPORARY TABLESPACE TEMPTBS
     TEMPFILE '/path/temp.dbf' SIZE 1000M
     AUTOEXTEND ON NEXT 8K MAXSIZE 1500M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
     BLOCKSIZE 8K;

SQL> CREATE TEMPORARY TABLESPACE TEMPTBS2
     TEMPFILE '/path/temp2.dbf' SIZE 1000M
     AUTOEXTEND OFF
     EXTENT MANAGEMENT LOCAL
     BLOCKSIZE 2K;

The MAXSIZE clause defaults to UNLIMITED if no value is specified. All extents of temporary tablespaces are the same size, so the UNIFORM keyword is optional; if a UNIFORM size is not specified, it defaults to 1 MB.

Example using OMF (Oracle Managed Files):
SQL> CREATE TEMPORARY TABLESPACE temp;

Restrictions:
(1) We cannot specify a nonstandard block size for a temporary tablespace, or for a tablespace we intend to assign as the temporary tablespace for any user.
(2) We cannot specify FORCE LOGGING for an undo or temporary tablespace.
(3) We cannot specify AUTOALLOCATE for a temporary tablespace.

Tempfiles (Temporary Datafiles)

Unlike normal datafiles, tempfiles are not fully allocated. When you create a tempfile, Oracle only writes to the header and last block of the file. This is why it is much quicker to create a tempfile than to create a normal datafile.
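This lazy allocation can be demonstrated outside Oracle with a sparse file: writing only the last byte of a file gives it a large apparent size while consuming almost no disk blocks. This is a sketch of the general OS behavior, not Oracle's tempfile code, and whether the hole stays unallocated depends on the file system:

```python
import os
import tempfile

def make_sparse(path, size):
    """Create a file with the given apparent size by writing only its
    final byte, analogous to a tempfile's header/last-block write."""
    with open(path, "wb") as f:
        f.seek(size - 1)   # jump past the unallocated "hole"
        f.write(b"\0")     # touch only the last block

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "temp01.dbf")
    make_sparse(path, 10 * 1024 * 1024)      # 10 MB apparent size
    st = os.stat(path)
    print("apparent size:", st.st_size)       # 10485760
    print("allocated:", st.st_blocks * 512)   # far smaller on sparse-capable file systems
```

The gap between apparent size and allocated blocks is also why the note below warns that the disk can fill up later, when the unwritten blocks are finally accessed.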

Tempfiles are not recorded in the database's control file. This means you can simply recreate them whenever you restore the database, or after deleting them by accident. You can have different tempfile configurations between primary and standby databases in a Data Guard environment, or configure tempfiles to be local instead of shared in a RAC environment.

One cannot remove datafiles from a tablespace without dropping the entire tablespace. However, one can remove a tempfile from a database. Look at this example:

SQL> alter database tempfile 'tempfile_name' drop including datafiles;  -- if the file was created as a tempfile

SQL> alter database datafile 'datafile_name' offline drop;  -- if the file was created as a datafile

Dropping temp tablespace:
SQL> drop tablespace temp_tbs;
SQL> drop tablespace temp_tbs including contents and datafiles;

If you remove all tempfiles from a temporary tablespace, you may encounter the error: ORA-25153: Temporary Tablespace is Empty.

Use the following statement to add a tempfile to a temporary tablespace:
SQL> ALTER TABLESPACE temp ADD TEMPFILE '/path/temp01.dbf' SIZE 512M
     AUTOEXTEND ON NEXT 250M MAXSIZE UNLIMITED;

Except for adding a tempfile, you cannot use the ALTER TABLESPACE statement for a locally managed temporary tablespace (operations like rename, set to read only, recover, etc. will fail).

Locally managed temporary tablespaces have temporary datafiles (tempfiles), which are similar to ordinary datafiles except:

You cannot create a tempfile with the ALTER DATABASE statement.

You cannot rename a tempfile or set it to read-only.

Tempfiles are always set to NOLOGGING mode.

When you create or resize tempfiles, they are not always guaranteed allocation of disk space for the file size specified. On certain file systems (for example on UNIX), disk blocks are allocated not at file creation or resizing, but only when the blocks are first accessed.

Note: This arrangement enables fast tempfile creation and resizing; however, the disk could run out of space later when the tempfiles are accessed.

Tempfile information is shown in the dictionary view DBA_TEMP_FILES and the dynamic performance view V$TEMPFILE.

Default Temporary Tablespaces

From Oracle 9i, we can define a default temporary tablespace at database creation time, or by issuing an ALTER DATABASE statement:

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

By default, the default temporary tablespace is SYSTEM. Each database can be assigned one and only one default temporary tablespace. Using this feature, a temporary tablespace is automatically assigned to users.

The following restrictions apply to default temporary tablespaces:
- DEFAULT TEMPORARY TABLESPACE must be of type TEMPORARY.
- DEFAULT TEMPORARY TABLESPACE cannot be taken offline.
- DEFAULT TEMPORARY TABLESPACE cannot be dropped until you create another one.

To see the default temporary tablespace for a database, execute the following query:
SQL> select PROPERTY_NAME, PROPERTY_VALUE from database_properties where property_name like '%TEMP%';

The DBA should assign a temporary tablespace to each user in the database to prevent them from allocating sort space in the SYSTEM tablespace. This can be done with one of the following commands:

SQL> CREATE USER scott IDENTIFIED BY tiger TEMPORARY TABLESPACE temp;
SQL> ALTER USER scott TEMPORARY TABLESPACE temp;

To change a user account to use a non-default temp tablespace:
SQL> ALTER USER user1 TEMPORARY TABLESPACE temp_tbs;

Assigning a temporary tablespace group as the default temporary tablespace:
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;

Assigning a temporary tablespace group to a user (same as assigning a temporary tablespace to a user):

SQL> ALTER USER scott TEMPORARY TABLESPACE temp_grp;

All new users that are not explicitly assigned a TEMPORARY TABLESPACE get the default temporary tablespace as their TEMPORARY TABLESPACE. Also, once you assign a TEMPORARY tablespace to a user, Oracle will not change it the next time you change the default temporary tablespace for the database.

Performance Considerations

Some performance considerations for temporary tablespaces:

Always use temporary tablespaces instead of permanent tablespaces for sorting and joining (no logging, and one large sort segment is used to reduce recursive SQL and ST space management enqueue contention).

Ensure that you create your temporary tablespaces as locally managed instead of dictionary managed (i.e. use sort space bitmap instead of sys.fet$ and sys.uet$ for allocating space).

Always use TEMPFILE instead of DATAFILE (reduce backup and recovery time).

Stripe your temporary tablespaces over multiple disks to alleviate possible disk contention and to speed-up operations (user processes can read/write to it directly).

The UNIFORM SIZE should be a multiple of the SORT_AREA_SIZE parameter.

Monitoring Temporary Tablespaces

Unlike datafiles, tempfiles are not listed in V$DATAFILE and DBA_DATA_FILES. Use V$TEMPFILE and DBA_TEMP_FILES instead.

SQL> SELECT tablespace_name, file_name, bytes FROM dba_temp_files WHERE tablespace_name = 'TEMP';

TABLESPACE_NAME   FILE_NAME        BYTES
----------------- ---------------- --------------
TEMP              /../temp01.dbf   11,175,650,000

SQL> select file#, name, round(bytes/(1024*1024),2) "SIZE IN MB's" from v$tempfile;

One can monitor temporary segments from V$SORT_SEGMENT and V$SORT_USAGE.

DBA_FREE_SPACE does not record free space for temporary tablespaces. Use DBA_TEMP_FREE_SPACE or V$TEMP_SPACE_HEADER instead.

SQL> select TABLESPACE_NAME, BYTES_USED, BYTES_FREE from V$TEMP_SPACE_HEADER;

TABLESPACE_NAME                BYTES_USED BYTES_FREE
------------------------------ ---------- ----------
TEMPTBS                        4214226944   80740352

From 11g, we can check free temp space in the new view DBA_TEMP_FREE_SPACE.
SQL> select * from DBA_TEMP_FREE_SPACE;

Resizing tempfile

SQL> alter database tempfile 'tempfile-name' resize integer K|M|G|T|P|E;
SQL> alter database tempfile '/path/temp01.dbf' resize 1000M;

Resizing temporary tablespace

SQL> alter tablespace temptbs resize 1000M;

Renaming a (temporary) tablespace, available from Oracle 10g:

SQL> alter tablespace temp rename to temp2;

In Oracle 11g, a temporary tablespace or its tempfiles can be shrunk, down to a specified size.

Shrinking frees as much space as possible while maintaining the other attributes of the tablespace or tempfiles. The optional KEEP clause defines a minimum size for the tablespace or tempfile.

SQL> alter tablespace temp-tbs shrink space;
SQL> alter tablespace temp-tbs shrink space keep n{K|M|G|T|P|E};
SQL> alter tablespace temp-tbs shrink tempfile 'tempfile-name';
SQL> alter tablespace temp-tbs shrink tempfile 'tempfile-name' keep n{K|M|G|T|P|E};

The below script reports temporary tablespace usage (the script was created for Oracle9i Database). With this script we can monitor the actual space used in a temporary tablespace and see the HWM (High Water Mark) of the temporary tablespace. The script is designed to run when there is only one temporary tablespace in the database.

SQL> select sum(u.blocks * blk.block_size)/1024/1024 "MB. in sort segments",
            (hwm.max * blk.block_size)/1024/1024 "MB. High Water Mark"
       from v$sort_usage u,
            (select block_size from dba_tablespaces where contents = 'TEMPORARY') blk,
            (select segblk# + blocks max from v$sort_usage
              where segblk# = (select max(segblk#) from v$sort_usage)) hwm
      group by hwm.max * blk.block_size/1024/1024;

How to reclaim used space

Several methods exist to reclaim the space used by a larger-than-normal temporary tablespace:
(1) Restart the database, if possible.
(2) The method that works for all releases of Oracle: simply drop and recreate the temporary tablespace back to its original (or another reasonable) size.
(3) If you are using Oracle9i or higher, drop the large tempfile (which drops the tempfile from the data dictionary and the OS file system).

From 11g, while creating global temporary tables, we can specify a TEMPORARY tablespace.

Related Views:

DBA_TEMP_FILES, DBA_DATA_FILES, DBA_TABLESPACES, DBA_TEMP_FREE_SPACE (Oracle 11g)

V$TEMPFILE, V$TEMP_SPACE_HEADER, V$TEMPORARY_LOBS, V$TEMPSTAT, V$TEMPSEG_USAGE

Statspack in Oracle

Statspack was introduced in Oracle 8i.

Statspack is a set of performance monitoring, diagnosis, and reporting utilities provided by Oracle. As their successor, Statspack provides improved UTLBSTAT/UTLESTAT functionality, though the old BSTAT/ESTAT scripts are still available.

The Statspack package is a set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection, automation, storage, and viewing of performance data. Statspack stores the performance statistics permanently in Oracle tables, which can later be used for reporting and analysis. The data collected can be analyzed using Statspack reports, which include an instance health and load summary page, high-resource SQL statements, the traditional wait events, and initialization parameters.

Statspack is a diagnosis tool for instance-wide performance problems; it also supports application tuning activities by providing data which identifies high-load SQL statements. Statspack can be used both proactively to monitor the changing load on a system, and also reactively to investigate a performance problem.

Although AWR and ADDM (introduced in Oracle 10g) provide better statistics than STATSPACK, users who are not licensed to use the Enterprise Manager Diagnostic Pack should continue to use Statspack.

Statspack versus UTLBSTAT/UTLESTAT

The BSTAT/ESTAT utilities capture information directly from Oracle's in-memory structures and then compare the information from two snapshots in order to produce an elapsed-time report showing the activity of the database. If we look inside utlbstat.sql and utlestat.sql, we see the SQL that samples directly from the v$ views, e.g. V$SYSSTAT:

SQL> insert into stats$begin_stats select * from v$sysstat;
SQL> insert into stats$end_stats select * from v$sysstat;

Statspack improves on the existing UTLBSTAT/UTLESTAT performance scripts in the following ways:

Statspack collects more data, including high resource SQL (and the optimizer execution plans for those statements).

Statspack pre-calculates many ratios useful for performance tuning, such as cache hit ratios and per-transaction and per-second statistics (many of these ratios must be calculated manually when using BSTAT/ESTAT).

Permanent tables owned by PERFSTAT store performance statistics; instead of creating/dropping tables each time, data is inserted into the pre-existing tables. This makes historical data comparisons easier.

Statspack separates the data collection from the report generation. Data is collected when a 'snapshot' is taken; viewing the data collected is in the hands of the performance engineer when they run the performance report.

Data collection is easy to automate using either dbms_job or an OS utility.

NOTE: If you choose to run BSTAT/ESTAT in conjunction with statspack, do not run both as the same user. There is a name conflict with the STATS$WAITSTAT table.

Installing and Configuring Statspack

$ cd $ORACLE_HOME/rdbms/admin

$ sqlplus "/as sysdba" @spcreate.sql

You will be prompted for the PERFSTAT user's password, default tablespace, and temporary tablespace.

This will create the PERFSTAT user, the statspack objects in its schema, and the STATSPACK package.

NOTE: The default tablespace and temporary tablespace for the PERFSTAT user must not be SYSTEM.

The SPCREATE.SQL script runs the following scripts:

SPCUSR.SQL: Creates PERFSTAT user and grants privileges

SPCTAB.SQL: Creates STATSPACK tables

SPCPKG.SQL: Creates STATSPACK package

Check the log files created in the present directory (spcusr.lis, spctab.lis, and spcpkg.lis) and ensure that no errors were encountered during the installation.

To install statspack in batch mode, you must assign values to the SQL*Plus variables that specify the default and temporary tablespaces before running SPCREATE.SQL.

DEFAULT_TABLESPACE: For the default tablespace

TEMPORARY_TABLESPACE: For the temporary tablespace

PERFSTAT_PASSWORD: For the PERFSTAT user password

$ sqlplus "/as sysdba"

SQL> define default_tablespace='STATS'

SQL> define temporary_tablespace='TEMP_TBS'

SQL> define perfstat_password='perfstat'

SQL> @?/rdbms/admin/spcreate

When SPCREATE.SQL is run, it does not prompt for the information provided by the variables.

Taking snapshots of the database

Each snapshot taken is identified by a snapshot ID, which is a unique number generated at the time the snapshot is taken. Each time a new collection is taken, a new SNAP_ID is generated. The SNAP_ID, along with the database identifier (DBID) and instance number (INSTANCE_NUMBER), comprise the unique key for a snapshot. Use of this unique combination allows storage of multiple instances of an Oracle Real Application Clusters (RAC) database in the same tables.
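The composite key described above can be pictured as a plain mapping: the same SNAP_ID can coexist for different databases or RAC instances because the key also includes DBID and INSTANCE_NUMBER. An illustrative Python sketch (not statspack's actual storage; the DBID value and statistic names are made up):

```python
# Snapshots are identified by (dbid, instance_number, snap_id),
# so two RAC instances can store the same snap_id without collision.
snapshots = {}

def store_snapshot(dbid, instance_number, snap_id, stats):
    snapshots[(dbid, instance_number, snap_id)] = stats

store_snapshot(1484232571, 1, 10, {"logical reads": 52000})
store_snapshot(1484232571, 2, 10, {"logical reads": 48000})  # same snap_id, other instance

print(len(snapshots))  # 2 distinct snapshots
```

Had the key been SNAP_ID alone, the second instance's snapshot would have overwritten the first; the triple keeps them distinct within the same tables.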

When a snapshot is executed, the STATSPACK software will sample from the RAM in-memory structures inside the SGA and transfer the values into the corresponding STATSPACK tables. Taking such a snapshot stores the current values for the performance statistics in the statspack tables. This snapshot can be used as a baseline for comparison with another snapshot taken at a later time.

$ sqlplus perfstat/perfstat

SQL> exec statspack.snap;

or

SQL> exec statspack.snap(i_snap_level=>10);

The i_snap_level parameter instructs statspack to gather more detail in the snapshot.

SQL> select name,snap_id,to_char(snap_time,'DD.MM.YYYY:HH24:MI:SS') "Date/Time" from stats$snapshot,v$database;

Note that in most cases, there is a direct correspondence between the v$view in the SGA and the corresponding STATSPACK table.

e.g. the stats$sysstat table is similar to the v$sysstat view.

Remember to set timed_statistics to true for the instance. Statspack will then include important timing information in the data it collects.

Note: In RAC environment, you must connect to the instance for which you want to collect data.

Scheduling Snapshot Gathering

There are three methods to automate/schedule the gathering of statspack snapshots/statistics.

SPAUTO.SQL - this script can be customized and executed to schedule a dbms_job that automates the collection of statspack snapshots.

Use the DBMS_JOB procedure to schedule snapshots (you must set the initialization parameter JOB_QUEUE_PROCESSES to a value greater than 0).

DECLARE
  l_job BINARY_INTEGER;
BEGIN
  SYS.DBMS_JOB.SUBMIT (
    job       => l_job,  -- the job number is an OUT parameter of SUBMIT
    what      => 'statspack.snap;',
    next_date => to_date('17/08/2009 18:00:00', 'dd/mm/yyyy hh24:mi:ss'),
    interval  => 'trunc(SYSDATE+1/24,''HH'')',
    no_parse  => FALSE);
  COMMIT;
END;
/

Use an OS utility, such as cron.
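For example, a crontab entry that takes a snapshot at the top of every hour might look like the following (the script path and the perfstat credentials are placeholders to adapt to your environment):

```shell
# hypothetical crontab entry: hourly statspack snapshot
# snap.sql would contain: exec statspack.snap; followed by exit
0 * * * * sqlplus -s perfstat/perfstat @/home/oracle/scripts/snap.sql
```

Unlike dbms_job, a cron-driven snapshot keeps working even if the job queue processes are disabled, but it requires the database to be open when it fires.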

Statspack reporting

The information captured by a STATSPACK snapshot is made up of accumulated values. The v$ views begin collecting database information at startup time and continue to add to the values until the instance is shut down. In order to get a meaningful elapsed-time report, you must run a STATSPACK report that compares two snapshots.

After snapshots have been taken, you can generate performance reports.

SQL> connect perfstat/perfstat

SQL> @?/rdbms/admin/spreport.sql

When the report is run, you are prompted for the following:

The beginning snapshot ID

The ending snapshot ID

The name of the report text file to be created

It is not valid to specify begin and end snapshots that were taken from different instance startups. In other words, the instance must not have been shut down between the times the begin and end snapshots were taken.

This is necessary because the database's dynamic performance tables, which statspack queries to gather the data, reside in memory; shutting down the Oracle database resets the values in the performance tables to 0. Because statspack subtracts the begin-snapshot statistics from the end-snapshot statistics, the end snapshot would then have smaller values than the begin snapshot, the resulting output would be invalid, and the report shows an appropriate error to indicate this.
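Since the v$ statistics are cumulative within one instance lifetime, a statspack report is essentially a subtraction of two snapshots, and a restart in between makes the deltas negative and hence meaningless. A rough sketch of that logic (the statistic names and error handling here are illustrative, not the spreport.sql code):

```python
def report_deltas(begin_snap, end_snap):
    """Subtract begin-snapshot statistics from end-snapshot statistics.
    Cumulative counters only grow within one instance lifetime, so a
    negative delta means the instance was restarted between snapshots."""
    deltas = {}
    for name, end_value in end_snap.items():
        delta = end_value - begin_snap.get(name, 0)
        if delta < 0:
            raise ValueError(
                "end snapshot is smaller than begin snapshot: "
                "instance was restarted between snapshots")
        deltas[name] = delta
    return deltas

begin = {"physical reads": 10_000, "user commits": 500}
end   = {"physical reads": 14_250, "user commits": 730}
print(report_deltas(begin, end))  # {'physical reads': 4250, 'user commits': 230}
```

A restart would reset the counters to zero, so an end value below the begin value is the tell-tale sign that the pair of snapshots spans a shutdown.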

To get the list of snapshots:

SQL> select SNAP_ID, SNAP_TIME from STATS$SNAPSHOT;

To run the report without being prompted, assign values to the SQL*Plus variables that specify the begin snap ID, the end snap ID, and the report name before running SPREPORT. The variables are:

BEGIN_SNAP: Specifies the begin snapshot ID

END_SNAP: Specifies the end snapshot ID

REPORT_NAME: Specifies the report output name

SQL> connect perfstat

SQL> define begin_snap=1

SQL> define end_snap=2

SQL> define report_name=batch_run

SQL> @?/rdbms/admin/spreport

When SPREPORT.SQL is run, it does not prompt for the information provided by the variables.

The statspack package includes two reports.

Run statspack report, SPREPORT.SQL, which is general instance health report that covers all aspects of instance performance. This report calculates and prints ratios and differences for all statistics between the two snapshots, similar to the BSTAT/ESTAT report.

After examining the instance report, run SQL report, SPREPSQL.SQL, on a single SQL statement (identified by its hash value). The SQL report only reports on data relating to the single SQL statement.

Adjusting Statspack collection level & threshold

Statspack has two types of collection options: level and threshold. The level parameter controls the type of data collected from Oracle, while the threshold parameter acts as a filter for the collection of SQL statements into the stats$sql_summary table. The threshold parameters are used when collecting data on SQL statements; data is captured on any SQL statement that breaches the specified thresholds.

SQL> SELECT * FROM stats$level_description ORDER BY snap_level;

Level 0 - Captures general statistics, including rollback segment, row cache, buffer pool statistics, SGA, system events, background events, session events, system statistics, wait statistics, lock statistics, and latch information.

Level 5 (default) - Includes capturing high-resource-usage SQL statements, along with all data captured by lower levels.

Level 6 - Includes capturing SQL plan and SQL plan usage information for high-resource-usage SQL statements, along with all data captured by lower levels.

Level 7 - Captures segment-level statistics, including logical and physical reads, row lock, ITL and buffer busy waits, along with all data captured by lower levels.

Level 10 - Includes capturing parent & child latch statistics, along with all data captured by lower levels.

You can change the default parameters used for taking snapshots so that they are tailored to the instance's workload.

To temporarily use a snapshot level or threshold that is different from the instance's default snapshot values, you specify the required threshold or snapshot level when taking the snapshot. This value is used only for the immediate snapshot taken; the new value is not saved as the default.

For example, to take a single level 6 snapshot:

SQL> EXECUTE STATSPACK.SNAP(i_snap_level=>6);

You can save the new value as the instance's default in either of two ways, using the appropriate parameter and the new value with the statspack SNAP or MODIFY_STATSPACK_PARAMETER procedure.

1) You can change the default level of a snapshot with the STATSPACK.SNAP procedure. Passing i_modify_parameter=>'true' makes the new level permanent for all future snapshots.

SQL> EXEC STATSPACK.SNAP(i_snap_level=>8, i_modify_parameter=>'true');

Setting the I_MODIFY_PARAMETER value to TRUE saves the new thresholds in the STATS$STATSPACK_PARAMETER table. These thresholds are used for all subsequent snapshots.

If the I_MODIFY_PARAMETER was set to FALSE or omitted, then the new parameter values are not saved. Only the snapshot taken at that point uses the specified values. Any subsequent snapshots use the preexisting values in the STATS$STATSPACK_PARAMETER table.

2) Change the defaults immediately without taking a snapshot, using the STATSPACK.MODIFY_STATSPACK_PARAMETER procedure. For example, the following statement changes the snapshot level to 10 and modifies the SQL thresholds for BUFFER_GETS and DISK_READS:

SQL> EXECUTE STATSPACK.MODIFY_STATSPACK_PARAMETER -
       (i_snap_level=>10, i_buffer_gets_th=>10000, i_disk_reads_th=>1000);

This procedure changes the values permanently, but does not take a snapshot.

Snapshot level and threshold information used by the package is stored in the STATS$STATSPACK_PARAMETER table.

Creating the Execution Plan of an SQL Statement

When you examine the instance report, if you find high-load SQL statements that you want to examine more closely, or if you have identified one or more problematic SQL statements, you may want to check the execution plan. The SQL statement to be reported on is identified by a hash value, which is a numerical representation of the statement's SQL text. The hash value is displayed for each statement in the SQL sections of the instance report. The SQL report, SPREPSQL.SQL, displays statistics, the complete SQL text, and (if the snapshot level was 6 or higher) information on any SQL plan(s) associated with that statement.

$ sqlplus perfstat/perfstat

SQL> @?/rdbms/admin/sprepsql.sql

The SPREPSQL.SQL report prompts you for the following:

Beginning snapshot ID

Ending snapshot ID

Hash value for the SQL statement

Name of the report text file