Oracle 11g Data Guard: Building a Physical Standby Database
By Jim Czuprynski

Synopsis. Oracle Data Guard is a crucial part of the insurance policy that guarantees against unrecoverable disasters. Each new release of Oracle has augmented these disaster recovery features, and Oracle Database 11g expands them dramatically to include the capability to keep a standby database open for read-only queries while still accepting change vectors from the primary database. This article – the first in an ongoing series – explains how to set up a standby database environment using Oracle 11g’s new Recovery Manager features.

I’ve been using Oracle’s Data Guard features since before the technology was officially known as Data Guard. I helped pioneer the use of a standby database as a potential reporting platform in early Oracle Database 8i, with limited success. When Oracle 9i Release 2 rolled out, I also experimented with switching back and forth between primary and standby databases - again with limited success, mainly because I’d decided not to implement the Data Guard Broker instrumentation. So when Oracle 10g rolled out, I was encouraged by the many new manageability features that it provided and how well it integrated with Real Application Clusters (RAC) databases as part of Oracle’s maximum availability architecture (MAA).

When I attended Oracle OpenWorld in 2008, however, Oracle Database 11g’s myriad new Data Guard capabilities opened my eyes to a whole new world of using the Data Guard architecture beyond disaster recovery. I’ve summarized many of these features in prior article series, but I’m going to dive into the deep end of the Data Guard pool during these next articles. Here’s a quick summary of the areas I’ll be exploring:

Real-Time Query. In Oracle Database 8i it was possible to bring a standby database into READ ONLY mode so that it could be used for reporting purposes, but it was necessary to switch it back to standby mode for reapplication of pending change vectors from the archived redo logs transported from the primary database. Oracle Database 11g now lets me run queries in real time against any physical standby database without any disturbance to receipt and application of redo.
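As a quick illustration of what Real-Time Query looks like in practice, here is a minimal sketch of the commands involved, assuming a physical standby that is already mounted and applying redo (note that opening a standby read-only while redo apply continues is licensed separately as the Active Data Guard option):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Once redo apply is restarted, the standby stays open for queries while changes from the primary continue to arrive and be applied.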

Snapshot Standby Databases. Oracle Database 11g offers another intriguing prospect: the ability to open a physical standby database for testing or QA purposes while simultaneously collecting production changes for immediate reapplication in case disaster recovery is required. This snapshot standby database still accepts redo information from its primary, but unlike physical and logical standby databases, it does not apply the redo immediately; instead, the redo is only applied when the snapshot standby database is converted back into a physical standby. This offers significant leverage because, in theory, a separate QA environment on dedicated, identical hardware is no longer required.
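To make this concrete, here is a rough sketch of the conversion commands (this assumes a flash recovery area is configured on the standby, since the conversion relies on a guaranteed restore point behind the scenes):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
SQL> ALTER DATABASE OPEN;
-- ... run tests against the read-write snapshot standby ...
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

When the database is converted back to a physical standby, the test changes are discarded and the redo accumulated from the primary is applied once redo apply is restarted.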

Improved Handling of Role Transitions. The addition of standby snapshot databases brings the total of different Data Guard standby database types to three (physical, logical, and snapshot), so Oracle Database 11g also makes it much easier to transition between these different roles via either Data Guard Broker (DGB) command line execution or Enterprise Manager Grid Control. As I’ll demonstrate in later articles, role transitions are simpler to execute and complete more quickly than in earlier releases.

Improvements to Rolling Database Upgrades. Oracle Database 11g allows rolling database upgrades to be performed against a physical standby database by first transforming it into a logical standby database with a few simple commands before the upgrade begins. Once the upgrade is done, the logical standby database is reverted to its original physical standby state. Oracle 11g leverages this capability as well as the improved speed and simplicity of role transitions to perform system and database patching in a fraction of the time it would’ve taken in earlier releases, and it’s especially powerful in a Real Application Clusters (RAC) database environment, as I’ll demonstrate in a future article.

SQL Apply Enhancements. Logical standby databases are obviously central to these new role transition features, but they use SQL Apply technology to apply change vectors to data. It therefore makes sense that Oracle Database 11g provides significant improvements to this crucial part of Data Guard architecture. SQL Apply now supports parallel DDL execution, Fine-Grained Auditing (FGA), Virtual Private Database (VPD), and Transparent Data Encryption (TDE), as well as simpler real-time SQL Apply reconfiguration and tuning.

Enhanced Redo Logs Transport. Physical standby databases have always used archived redo logs for application of change vectors to data. Oracle Database 11g augments redo transport with some long-overdue features, including compression and SSL authentication of redo logs while they’re being transmitted between the primary and standby sites.


Heterogeneous Data Guard. Oracle Database 11g allows the primary and standby databases to use different operating systems (for example, Windows Server 2003 and Oracle Enterprise Linux) as long as both operating systems support the same endianness.

Fast Start Failover Improvements. Oracle introduced this feature set in Release 10gR2, but it’s been enhanced significantly in Oracle 11g to permit much finer-grained control over the conditions under which a fast-start failover would be initiated. I’ll demonstrate how an Oracle DBA can set up, control, and even force a fast-start failover to occur in a later article in this series.

“Live Cloning” of Standby Databases. Finally, Oracle 11g has made it extremely simple to set up a standby database environment because Recovery Manager (RMAN) now supports the ability to clone the existing primary database directly to the intended standby database site over the network via the DUPLICATE DATABASE command set while the target database is active. This means it’s no longer necessary to first generate, then transmit, and finally restore and recover RMAN backups of the primary database on the standby site via tedious (and possibly error-prone!) manual methods; instead, RMAN automatically generates a conversion script in memory on the primary site and uses that script to manage the cloning operation on the standby site with virtually no DBA intervention required.

Standby Database “Live Cloning”: A Demonstration

Since I’ll need an Oracle 11g Data Guard environment to demonstrate the features I’ve described above, I’m going to focus on the new “live cloning” feature for the remainder of this article. My hardware is a dual-core AMD Athlon 64-bit CPU (Winchester 4200) with 4GB of memory using Windows XP as my host server to run VMWare Server 1.0.8 to access a virtualized database server environment. Each virtual machine uses one virtual CPU and 1200MB of memory, and for this iteration, I’ve chosen Oracle Enterprise Linux (OEL) 4.5.1 (Linux kernel version 2.6.9-55.0.0.0.2.ELsmp) for my operating system on both guest virtual machines.

Once each VMWare virtual machine was configured, I established network connectivity between my primary site (training) and the standby site (11gStdby) via appropriate entries in /etc/hosts on each VM. I then installed the database software for Oracle Database 11g Release 1 (11.1.0.6) on both nodes. Finally, I constructed the standard 11gR1 “seed” database, including the standard sample schemas, on the primary node. This database’s ORACLE_SID is orcl. I’m now ready to perform the live cloning operation.

Preparing to Clone: Adjusting the Primary Database

Before I can clone my primary database to its corresponding standby environment, I’ll need to make some adjustments to the primary database itself. I’ve described the steps below in no particular order; as long as they’re all completed before I issue the DUPLICATE DATABASE statement, I should have no surprises during the cloning operation.

Force Logging of All Transactions. A major reason that most organizations implement a Data Guard configuration is to ensure that not a single transaction is lost. By default, however, an Oracle database is in NOFORCE LOGGING mode, which implies that it’s possible to lose changes to objects whose changes aren’t being logged because their storage attribute is set to NOLOGGING. To ensure that all changes are logged, I’ll execute the ALTER DATABASE FORCE LOGGING; command just before I bring the database into ARCHIVELOG mode via the ALTER DATABASE ARCHIVELOG; command. These commands are shown in Listing 1.1.
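Listing 1.1 itself isn’t reproduced here, but a minimal sketch of the sequence would look something like this (ALTER DATABASE ARCHIVELOG must be issued while the database is mounted but not open):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;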

Set Up Standby Redo Log Groups. Oracle has recommended the configuration of standby redo log (SRL) groups since they became available in Oracle 9i Release 2. SRLs are required for the Real Time Apply feature or if the DBA wants to implement the ability to cascade redo log destinations; otherwise, they are still optional for configuration of a standby database. Another advantage of Oracle 11g is that if SRLs have already been configured on the primary database, then the DUPLICATE DATABASE command will automatically create them on the standby database during execution of its memory script. Listing 1.2 shows the commands I issued to create SRLs on the primary site; notice that I also multiplexed the SRL files to protect against the loss of a complete SRL group, just as is recommended for online redo log groups.
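For reference, commands along the lines of Listing 1.2 might look like the following sketch; the file names and sizes here are hypothetical, and the usual guideline is to create one more SRL group than there are online redo log groups, sized to match them:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/app/oracle/oradata/orcl/srl01a.log', '/u02/app/oracle/oradata/orcl/srl01b.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u01/app/oracle/oradata/orcl/srl02a.log', '/u02/app/oracle/oradata/orcl/srl02b.log') SIZE 50M;
-- ...repeat for the remaining standby redo log groups...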

File Name Conversions. Usually a standby database is created on a host other than that of the primary database; otherwise, in a disaster, both standby and primary databases would be compromised (if not destroyed!). A recommended best practice is to name the directories and file names of the corresponding standby database identically. In cases when directory names might need to change because of different mount points, however, it’s necessary to map out the scheme for this conversion with the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT initialization parameters.
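As a hypothetical illustration (the paths are not taken from the article), the primary could be told how to translate standby file names back to its own layout like this; both parameters are static, so they only take effect after a restart:

SQL> ALTER SYSTEM SET DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/stdby','/u01/app/oracle/oradata/orcl' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/stdby','/u01/app/oracle/oradata/orcl' SCOPE=SPFILE;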

Modify Primary Site Initialization Parameters. Setting the following initialization parameters for the primary database instance ensures that the DUPLICATE DATABASE command configures the standby database instance as well. I’ve shown the final settings for these initialization parameters in Listing 1.3:

DB_UNIQUE_NAME. I’ll set this parameter to define a unique name for the primary database instance. This assignment makes it much simpler to identify the “original” primary and standby instances regardless of role exchange. Since this is a static parameter, I set it with SCOPE=SPFILE in Listing 1.1 so that it’ll take effect during the bounce of the primary database instance.

LOG_ARCHIVE_CONFIG. This parameter controls whether a primary or standby database should accept and/or send archived redo logs that have been transmitted from a remote source. It encompasses all of the Data Guard primary and standby databases to be managed because it lists the DB_UNIQUE_NAME values for all databases within the configuration. I’ve set it up to reflect my current Data Guard databases, orcl and stdby.

STANDBY_FILE_MANAGEMENT. I’ve set this parameter to a value of AUTO so that Oracle will automatically manage the creation or deletion of corresponding database files on the standby database whenever a new file is created or an existing file is deleted on the primary database – for example, when a new datafile is added to a tablespace, or a tablespace is dropped.

LOG_ARCHIVE_DEST_n. This parameter is crucial to exchanging archived redo logs from the primary database to its counterpart physical standby database. I’ll set up two archiving destinations:

Destination LOG_ARCHIVE_DEST_1 designates the physical location for the primary database’s archived redo logs. Note that I’m using the Flash Recovery Area for the database as a target.

Destination LOG_ARCHIVE_DEST_2 designates the network service address that corresponds to the standby database instance (STDBY), and this ensures that archived redo logs are transmitted automatically to the standby site for eventual application against the standby database.

I’ll also use two other directives for these two archived redo log transmission parameters:

Directive VALID_FOR dramatically simplifies what types of redo log transmission are acceptable when the database is acting in a specific role. This is especially critical to the proper handling of archived redo logs when the primary and standby databases have exchanged roles. Table 1.1 lists the permitted values for this directive and what they control:

Table 1-1. VALID_FOR Directive Values

Setting                   Meaning
ALL_LOGFILES (Default)    Destination is valid for either online or standby redo log files
ONLINE_LOGFILE            Destination is valid for archiving only online redo log files
STANDBY_LOGFILE           Destination is valid for archiving only standby redo log files
ALL_ROLES (Default)       Destination is valid when database is operating in either primary or standby role
PRIMARY_ROLE              Destination is valid when database is operating only in primary role
STANDBY_ROLE              Destination is valid when database is operating only in standby role

I’ll also identify how archived redo logs are to be transmitted from the primary to the standby database by specifying an appropriate redo transport mode. Table 1.2 lists the permitted values for this directive:

Table 1-2. Redo Transport Modes

Setting            Meaning
ASYNC (Default)    The redo for a transaction may not have been received by all enabled destination(s) before the transaction is allowed to COMMIT
SYNC               The redo for a transaction must have been received by all enabled destination(s) before the transaction is allowed to COMMIT
AFFIRM             The destination for redo transport will acknowledge the receipt of redo data only after it’s been written to the standby redo log; implied by SYNC setting
NOAFFIRM           The destination for redo transport will acknowledge the receipt of redo data before it’s been written to the standby redo log; implied by ASYNC setting
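Pulling the pieces above together, the primary-side settings in Listing 1.3 would look roughly like the sketch below; the exact strings are illustrative rather than a copy of the listing:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,stdby)';
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=stdby';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;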


Network Configuration Changes. Finally, I need to ensure that the primary and standby databases can communicate over the network. The only required change to the primary database server’s network configuration is the addition of the standby database’s instance to the local naming configuration file (TNSNAMES.ORA). The standby database server’s LISTENER.ORA configuration file also requires a listener with a static listening endpoint for the standby database’s instance. These changes are shown in Listing 1.4.
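For orientation, the entries in Listing 1.4 would look broadly like the following; the host names, port, and ORACLE_HOME path here are assumptions, not copied from the listing:

# TNSNAMES.ORA on the primary server (training)
STDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 11gStdby)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = stdby))
  )

# LISTENER.ORA on the standby server (11gStdby) -- static registration of the stdby instance
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = stdby)
      (SID_NAME = stdby)
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
    )
  )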

Preparing to Clone: Preparing the Standby Site

Now that the primary site is ready for cloning, I need to make some additional adjustments to its corresponding standby site:

Create Required Directories. I’ll need to create the appropriate destination directories for the database’s control files, data files, online redo logs, and standby redo logs. I’ll also create an appropriate directory for the database’s audit trails.
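On the standby host that might amount to something like this; the directory layout is a guess based on the paths that appear later in the article:

$> mkdir -p /u01/app/oracle/oradata/stdby
$> mkdir -p /u01/app/oracle/flash_recovery_area/stdby
$> mkdir -p /u01/app/oracle/admin/stdby/adump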

Set Up Password File. Since the primary database will need to communicate with the standby database using remote authentication, I’ll create a new password file via the orapwd utility, making sure that the password for SYS matches that of the primary database. (Note that I could also have simply copied the password file from the primary site to the standby site.)
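A typical orapwd invocation on the standby host would resemble the following; substitute the primary’s actual SYS password:

$> orapwd file=$ORACLE_HOME/dbs/orapwstdby password=<primary_SYS_password> entries=5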

Create Standby Initialization Parameter File. Finally, I’ll need to create an initialization parameter file (PFILE) just to allow me to start the standby database instance, and it only requires one parameter: DB_NAME. When the DUPLICATE DATABASE command script completes, it will have created a new server parameter file (SPFILE) containing only the appropriate initialization parameter settings.
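Per the text above, the temporary PFILE needs only DB_NAME, so /home/oracle/init_stdby.ora can be as small as the sketch below (DB_NAME must match the primary’s database name, not its DB_UNIQUE_NAME):

# /home/oracle/init_stdby.ora
db_name='orcl'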

I’ve illustrated these commands and the contents of the “temporary” standby database initialization parameter file in Listing 1.5. To give DUPLICATE DATABASE a target for the cloning operation, I’ll start the standby site’s listener, and then I’ll start the standby database instance in NOMOUNT mode using the PFILE I created above:

$> export ORACLE_SID=stdby

$> sqlplus / as sysdba

SQL> startup nomount pfile='/home/oracle/init_stdby.ora';

Cloning the Standby Database Via DUPLICATE DATABASE

It’s finally time to issue the DUPLICATE DATABASE command from within an RMAN session on the primary database server. As I mentioned earlier, the best part of using DUPLICATE DATABASE in Oracle 11g is that I can clone the primary database to the standby site directly across the network. As part of the setup of the standby database, I can also specify values for all required initialization parameters, and DUPLICATE DATABASE will create a new SPFILE on the standby server that captures those values.

Listing 1.6 shows the DUPLICATE DATABASE statement I’ll use to clone my primary site’s database to the standby site. Note that I’ve added a few additional parameters that aren’t exact counterparts of the primary database and “tweaked” a few others appropriately:

DB_UNIQUE_NAME. I’ve set this value to stdby for the standby database.

CONTROL_FILES. I’ve specified just one control file for the standby database; I will multiplex it after cloning is completed.

FAL_CLIENT and FAL_SERVER. These parameters establish which database services will act as the fetch archive log (FAL) client and server, respectively. For example, whenever a network outage occurs between the primary and standby servers, or if the standby database has been shut down for a significant length of time, it’s possible that one or more archived redo logs haven’t been transmitted to the standby server. This situation is called an archive log gap, and these two FAL service names establish which server maintains the master list of all archived redo logs (FAL_SERVER) and which server(s) requests resolution of the potential archive log gap (FAL_CLIENT). In our Data Guard setup, I’ve configured the standby server to be the FAL client, and the primary server to be the FAL server.

LOG_FILE_NAME_CONVERT. I’ve translated the primary database’s destinations for its archive redo logs and standby redo logs with this parameter to ensure that RMAN will automatically create appropriate counterparts on the standby database during the cloning operation.

LOG_ARCHIVE_DEST_n. Just as with the primary database, I’ve set up two archive logging destinations: a primary (LOG_ARCHIVE_DEST_1) for archive redo logging, and a secondary (LOG_ARCHIVE_DEST_2) that will handle the transmission of the standby site’s archived redo logs back to the original primary database when these two databases exchange roles in the future. (I’ll be demonstrating this in later articles.)
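Since Listing 1.6 itself isn’t reproduced here, the following sketch shows the general shape such a statement takes in Oracle 11g; the SET values and conversion strings are illustrative assumptions, not the article’s actual listing:

RMAN> RUN {
  ALLOCATE CHANNEL prmy1 TYPE DISK;
  ALLOCATE CHANNEL prmy2 TYPE DISK;
  ALLOCATE AUXILIARY CHANNEL stby1 TYPE DISK;
  ALLOCATE AUXILIARY CHANNEL stby2 TYPE DISK;
  DUPLICATE TARGET DATABASE
    FOR STANDBY
    FROM ACTIVE DATABASE
    DORECOVER
    SPFILE
      SET DB_UNIQUE_NAME='stdby'
      SET CONTROL_FILES='/u01/app/oracle/oradata/stdby/control01.ctl'
      SET FAL_SERVER='orcl'
      SET FAL_CLIENT='stdby'
      SET LOG_FILE_NAME_CONVERT='/orcl/','/stdby/'
      SET LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stdby'
      SET LOG_ARCHIVE_DEST_2='SERVICE=orcl ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl'
    NOFILENAMECHECK;
}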

At last ... let the cloning commence! First, I’ll initiate an RMAN session on the primary database server, connecting to the primary database as the target and the standby database instance as the auxiliary:

oracle@training> rman target / auxiliary sys/oracle@stdby

 

Recovery Manager: Release 11.1.0.6.0 - Production on Tue Apr 14 19:29:25 2009

 

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

 

connected to target database: ORCL (DBID=1210321736)

connected to auxiliary database: STDBY (not mounted)

For faster processing, I’ll establish two auxiliary channels and two normal channels via the ALLOCATE CHANNEL command and initiate the cloning with DUPLICATE DATABASE in the same RUN block. Here’s what this RMAN command block does:

It creates a new SPFILE for the standby database using the current primary database’s server parameter file as a template, but makes the appropriate changes as specified in the SET commands of the DUPLICATE DATABASE run block.

It then shuts down the standby database and opens it in NOMOUNT mode with the new SPFILE.

Next, it creates a copy of the primary database’s control file, modifying it so that all file names match those of the standby database, copies the new control file to the standby database, and MOUNTs the standby database using the new control file.

It then creates image copy backups of each of the primary database’s datafiles directly on the standby database.

Finally, it uses the current archived redo log on the primary database to perform any necessary recovery on the standby database, and brings the standby database into managed recovery mode.

I’ve posted the results of the cloning operation in Listing 1.7, which shows the output from the RMAN command, and in Listing 1.8, which lists the standby database’s alert log entries generated during the cloning operation.

Post-Cloning: Cleanup and Verification

Now that the cloning is completed, I’ll need to ensure that the standby database is actually ready to receive archived redo logs from the primary database. To verify that the primary and standby databases are indeed communicating, I’ll perform a redo log switch on the primary database:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

And here’s the resulting proof from the standby database’s alert log that the online redo log was successfully transmitted to and applied at the standby database:

Completed: alter database clear logfile group 6

RFS connections are allowed

Sat Apr 18 06:29:58 2009

Redo Shipping Client Connected as PUBLIC

-- Connected User is Valid

RFS[1]: Assigned to RFS process 8492

RFS[1]: Identified database type as 'physical standby'

RFS LogMiner: Client disabled from further notification

Sat Apr 18 06:35:39 2009

Redo Shipping Client Connected as PUBLIC

-- Connected User is Valid

RFS[2]: Assigned to RFS process 8506


RFS[2]: Identified database type as 'physical standby'

Primary database is in MAXIMUM PERFORMANCE mode

Primary database is in MAXIMUM PERFORMANCE mode

RFS[2]: Successfully opened standby log 4: '/u01/app/oracle/oradata/stdby/srl01.log'

Sat Apr 18 06:36:28 2009

Redo Shipping Client Connected as PUBLIC

-- Connected User is Valid

RFS[3]: Assigned to RFS process 8512

RFS[3]: Identified database type as 'physical standby'

kcrrvslf: active RFS archival for log 4 thread 1 sequence 111

RFS[3]: Successfully opened standby log 5: '/u01/app/oracle/oradata/stdby/srl02.log'

Sat Apr 18 06:42:53 2009
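Beyond reading the alert log, a couple of quick queries on the standby instance offer a sanity check that the managed recovery process (MRP) is running and that logs are being applied; these are standard Data Guard views rather than anything specific to this configuration:

SQL> SELECT PROCESS, STATUS, SEQUENCE# FROM V$MANAGED_STANDBY;
SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;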

Next Steps

In the next article in this series, I’ll explore how to use the Data Guard Broker (DGB) command set to control both the primary and standby database, as well as demonstrate how to perform a simple role transition – a switchover – between the primary and standby databases.

References and Additional Reading

While I’m hopeful that I’ve given you a thorough grounding in the technical aspects of the features I’ve discussed in this article, I’m also sure that better documentation may have become available since it was published. I therefore strongly suggest that you take a close look at the corresponding Oracle documentation on these features to obtain a crystal-clear understanding before attempting to implement them in a production environment. Please note that I’ve drawn upon the corresponding Oracle Database 11g documentation for the deeper technical details of this article.

Resolve Physical Standby gaps 

Scenario #1 - Version 10g – ASM is used as storage and primary’s backups not available on standby

Assumptions:

1. The primary’s backup location cannot be mounted onto the standby server, i.e., the NFS mount where the primary’s backups are taken is not available to the standby server

2. ASM is the storage on primary and standby

To resolve the gap, we need to transfer the missing archive log(s) from the primary to the standby.

There are 2 cases:


1. The missing log is available in the primary database and has not been deleted

2. The missing log is not available in the primary database and is available in the archive log backup of primary.

First, check whether the log is still available by running the following query on the primary database:

select status, deleted from v$archived_log where sequence# = <sequence#>;

In both cases, the steps are almost identical. Wherever a step is exclusive to one case, a note is provided to say so.

Step#1 – Restore the missing log from the primary’s archived log backup.

This step is only required when the log has been deleted from the primary database and is available only in the archived log backups.

RMAN> connect catalog username/password@catalog

RMAN> connect target /

RMAN> restore archivelog sequence 157682;

==> If there is a gap of more than one log, use a statement like:

RMAN> restore archivelog from sequence <start_sequence#> until sequence <end_sequence#>;

Note that this will restore the archive logs to the default archival destination of the database specified by the parameter log_archive_dest_1.

==> You can use the following command to identify the backup piece that contains the backup of the missing log:

RMAN> list backup of archivelog sequence 157682;


Step#2 – Copy the archive log from ASM diskgroup to the normal OCFS file system

If the missing archive log is available in the primary database, start from this step.

Use the following RMAN command to achieve this:

RMAN> copy archivelog '+DATA/IPWP_RWC1/ARCHIVELOG/2009_11_13/thread_1_seq_157682.1341.702785107' to '/tmp/thread_1_seq_157682.1341.702785107';

Step#3 – “scp” the log from primary server to standby server’s file system

scp /tmp/thread_1_seq_157682.1341.702785107 oracle@ipw-db-sac1:/tmp/thread_1_seq_157682.1341.702785107

Step#4 – Manually recover the standby database using the just-shipped archived log

To do this, first cancel the managed recovery:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Database altered.

Now we need to perform the manual recovery to apply the archive log which is missing and has been transferred from the primary:

SQL> recover automatic standby database;  <== This is for manual recovery
ORA-00279: change 34121233951 generated at 11/12/2009 07:54:10 needed for thread 1
ORA-00289: suggestion : +DATA
ORA-00280: change 34121233951 for thread 1 is in sequence #157682
ORA-00278: log file '+DATA' no longer needed for this recovery
ORA-00308: cannot open archived log '+DATA'
ORA-17503: ksfdopn:2 Failed to open file +DATA
ORA-15045: ASM file name '+DATA' is not in reference form

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/tmp/thread_1_seq_157682.1341.702785107  <== At this prompt, provide the name of the log which has been copied over from the primary

ORA-00279: change 34121306612 generated at 11/12/2009 07:58:19 needed for thread 1
ORA-00289: suggestion : +DATA
ORA-00280: change 34121306612 for thread 1 is in sequence #157683
ORA-00278: log file '/tmp/thread_1_seq_157682.1341.702785107' no longer needed for this recovery

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL  <== At this prompt, enter CANCEL to tell Oracle to stop recovery because we have applied the one missing log

Media recovery cancelled.

Now, restart managed recovery

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

Now, by querying the view v$archived_log, we can see that the missing log has been bypassed and the MRP process has proceeded with applying the subsequent logs.

Scenario # 2 - Version 10g – Primary’s backups are available on the standby

Assumptions:

1. The primary’s backup location can be mounted onto the standby server, i.e., the NFS mount where the primary’s backups are taken is available to the standby server

2. OCFS is being used as storage


If the archive logs have been deleted from the database, we need to restore them from the archive log backups as demonstrated. This scenario is in fact independent of ASM or OCFS because the backup pieces are available on the standby.

STEP#1: First, identify the primary’s backup piece(s) that contain the missing archive log(s), for example:

RMAN> list backup of archivelog from sequence 154472 until sequence 154474;

This will give the name of all backup pieces which contain the required logs.

STEP#2: Since the backup pieces are available on the standby server, catalog all the backup pieces on the standby. For example:

RMAN > catalog backuppiece '/location/piece_name';

You do not need to connect to a recovery catalog, but connect to the standby as the target. Note that the “catalog” command is only available in version 10g and later.

STEP#3: Once the pieces are cataloged, stay connected to the standby database as target and restore the archive logs as follows:

RMAN> restore archivelog from sequence 154472 until sequence 154474;

RMAN will restore all the archive logs to the correct diskgroup (archival location) on the standby. Since we have now restored the missing log(s) onto the standby, there is no need for manual recovery. The restored logs will now be available on the standby and should be applied automatically by MRP, which has so far been waiting for the gap to be filled.

Scenario # 3 - Version 9i – Primary’s backups are not available on standby server

Assumptions:

1. The archive logs have been deleted from the primary database

2. Primary’s backups are not available on standby server.


In this scenario, the steps are essentially the same as in the first scenario. Note that if ASM is not the storage, then we just need to restore the archive logs and move them to the standby server, which means step#2 of the first scenario can be skipped.

Important notes on resolving physical standby gaps

==> If the archive log has not been deleted from the database and ASM is not the storage, then simply copy the archive log from the primary to the standby’s archival location. MRP will automatically pick up the archive logs and start applying them. This is true for 10g as well as 9i.

==> In all the scenarios, there are two fundamental ways to fill the gap on the standby: either restore the missing logs at the standby’s archival location, where MRP can pick them up automatically,

OR

if the missing log(s) cannot be restored at the default location, perform an incomplete recovery of the standby using the archived log that has been restored at the non-default location. This is basically applicable in cases where ASM is being used.


Thursday, October 16, 2008

Physical Standby out of sync - Missing Datafiles Scenario

Environment:

1. Primary has 200 datafiles and standby has only 166 datafiles

2. Primary is a 3 node cluster and Standby is a 2 node cluster

3. The DB name is mydb

Problem and Symptoms:


1. When I tried to start the MRP on the standby, it reported the following error in the alert log:

**************************************************************

Errors in file /u01/app/oracle/admin/mydb/bdump/mydb1_mrp0_21189.trc:
ORA-01111: name for data file 167 is unknown - rename to correct file
ORA-01110: data file 167: '/u01/app/oracle/product/9.2.0/dbs/UNNAMED00167'
ORA-01157: cannot identify/lock data file 167 - see DBWR trace file
ORA-01111: name for data file 167 is unknown - rename to correct file
ORA-01110: data file 167: '/u01/app/oracle/product/9.2.0/dbs/UNNAMED00167'

*************************************************************

2. On further investigation, the standby’s alert log also contained the following errors:

************************************************************************

Tue Sep 9 04:05:03 2008
Media Recovery Log /u03/oradata/mydb/arc_backup/mydb_2_2173.arc
Media Recovery Log /u03/oradata/mydb/arc_backup/mydb_1_1896.arc
WARNING: File being created with same name as in Primary
Existing file may be overwritten
File #167 added to control file as 'UNNAMED00167'. Originally created as:
'/u07/oradata/mydb/myfile_1.dbf'
Recovery was unable to create the file as:
'/u07/oradata/mydb/myfile_1.dbf'
MRP0: Background Media Recovery terminated with error 1274
Tue Sep 9 04:05:06 2008
Errors in file /u01/app/oracle/admin/mydb/bdump/mydb1_mrp0_7175.trc:
ORA-01274: cannot add datafile '/u07/oradata/mydb/myfile_1.dbf' - file could not be created
ORA-01119: error in creating database file '/u07/oradata/mydb/myfile_1.dbf'
ORA-27054: Message 27054 not found; product=RDBMS; facility=ORA
Linux-x86_64 Error: 13: Permission denied

**************************************************************************

3. On checking the view v$archived_log, there were a lot of log sequence#s with APPLIED=NO

4. There was no gap in the sequence#

Reason:

Parameter db_file_name_convert was not set at the standby database. So as long as the files were created on /u02 and /u03 on the primary, there was no problem on the standby, because the standby had /u02 and /u03. But when file#167 was added at /u07 on the primary (on Sep 9 04:05:03 2008), it could not be mapped to a /u07 mount point on the standby because /u07 does not exist on the standby and db_file_name_convert was also not set. As indicated by the alert log, file#167 was registered in the standby’s control file as “UNNAMED00167” at the default location of $ORACLE_HOME/dbs, but the file was not created physically on the standby database.

Action Plan:

1. At the standby: Set the db_file_name_convert parameter at the standby to map the /u07 folder at the primary to the corresponding folder at the standby.

Since this parameter is a static parameter, you need to bounce the standby DB.
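As a hedged example (assuming /u02 is the standby mount point that should stand in for the primary’s /u07), the parameter could be set like this before the bounce:

SQL> alter system set db_file_name_convert='/u07/oradata/mydb','/u02/oradata/mydb' scope=spfile;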

******************************************************************************

As an alternative to the above step#1, you can do the following instead:

At the standby:

Create a /u07 soft link pointing to /u02, to eliminate the bounce of the standby DB due to the addition of the db_file_name_convert init.ora parameter.

*********************************************************************************************

2. At the standby:

SQL> alter system set standby_file_management=manual;

3. At the primary, for datafile 167:

SQL> alter tablespace <tablespace_name> begin backup;

Copy the datafile from the primary to the correct location on the standby.

SQL> alter tablespace <tablespace_name> end backup;

4. At the Standby:

SQL> alter database rename file '.......UNNAMED00167' to '<correct_file_name_on_standby>';

******************************************************************************

You can skip steps #3 and #4 and instead do the following step after #2:

At the standby:

SQL> ALTER DATABASE CREATE DATAFILE '<....UNNAMED00167>' as '<correct_file_name_on_standby>';


******************************************************************************

5. To create the remaining datafiles at the Standby automatically:

SQL> alter system set standby_file_management=auto;

6. Start the MRP at the Standby

SQL> alter database recover managed standby database;

At the standby database, ensure the MRP is running as expected:

SQL>select process, status , sequence# from v$managed_standby;

When Primary and Standby are RAC databases:

1. On Standby: You can see multiple copies of some or all logs transported and applied on the standby when you check the view v$archived_log.

2. On Standby: All sequence#s should have APPLIED=YES in v$archived_log for all threads. This ensures that all logs from all threads were transported and applied on the standby, and hence keeps the standby in sync with the primary.

3. On Standby: In the view v$archived_log you may not see the same number of copies of all logs. For example, if the primary is a 3-node cluster, you may or may not have 3 copies of each log, i.e. you may not have the same sequence# on the standby for all 3 threads. Of course, the reason is that the number of logs generated on the 3 nodes of the primary will differ. The current sequence# transported from a node of the primary RAC database can be seen by querying v$archived_log on the standby:

SQL> select max(sequence#) from v$archived_log where thread#=1;

As explained above, the output will differ for all 3 threads.


Wednesday, August 6, 2008

Tips for Oracle DBAs


I have a blog where I post some useful Oracle tips:

http://rohitguptaoracletips.blogspot.com/

These are just a few tips from some of the challenging tasks I have been working on, and I have a lot more to add.

-Rohit


So your standby is out of sync?

When using Data Guard, there are various scenarios where a physical standby goes out of sync with the primary. Refer to http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#CIHIAADC for scenarios where the standby needs to be rolled forward and the steps to follow to bring it in sync with the primary.

Before doing anything, we need to verify why the standby is not in sync with the primary. In this particular note, I am covering the scenario where a log is missing from the standby, and the associated problems. Verify from v$archived_log that there is a gap in sequence number. All the logs up to that gap should have APPLIED=YES.

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

There are basically a couple of steps to be performed when the standby is not in sync with the primary and is lagging in terms of redo logs. These are:

1. Take an incremental backup of the primary from the SCN where the standby is lagging behind and apply it on the standby server

2. Then re-create the controlfile of the standby database from the primary

******************************************************************************

STEP#1

1. On the STANDBY database, query the V$DATABASE view and record the current SCN of the standby database:

SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN

-----------

1.3945E+10

SQL> SELECT to_char(CURRENT_SCN) FROM V$DATABASE;

TO_CHAR(CURRENT_SCN)

----------------------------------------

13945141914


2. Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
*
ERROR at line 1:
ORA-16136: Managed Standby Recovery not active

If you see the above error, it means managed recovery is already off.

You can also check the view v$managed_standby to see whether the MRP is running or not:

SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;

3. Connect to the primary database as the RMAN target and create an incremental backup from the current SCN of the standby database that was recorded in step 1:

BACKUP INCREMENTAL FROM SCN 13945141914 DATABASE FORMAT '/tmp/ForStandby_%U' tag 'FOR STANDBY'

4. Do a recovery of the STANDBY database using the incremental backup of the primary taken above

--> On the standby server, without connecting to a recovery catalog, catalog the backup piece of the incremental backup

$ rman nocatalog target /

RMAN> CATALOG BACKUPPIECE '/dump/ipwp/inc_bkup/ForStandby_1qjm8jn2_1_1';

--> Now in the same session, start the recovery

RMAN> RECOVER DATABASE NOREDO;

you should see something like the following at the end:

channel ORA_DISK_1: reading from backup piece /dump/ipwp/inc_bkup/ForStandby_1qjm8jn2_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/dump/ipwp/inc_bkup/ForStandby_1qjm8jn2_1_1 tag=FOR STANDBY
channel ORA_DISK_1: restore complete, elapsed time: 01:53:08
Finished recover at 2008-07-25 05:20:3

--> Delete the backup set from standby

RMAN> DELETE BACKUP TAG 'FOR STANDBY';

using channel ORA_DISK_1

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
17713   17713   1   1   AVAILABLE   DISK        /dump/ipwp/inc_bkup/ForStandby_1qjm8jn2_1_1

Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/dump/ipwp/inc_bkup/ForStandby_1qjm8jn2_1_1 recid=17713 stamp=660972421
Deleted 1 objects


5. Try to start the managed recovery.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

--> If you get an error here, you need to go to STEP#2 to bring the standby in sync

--> If there is no error, then using the view v$managed_standby, verify that the MRP process has started.

6. After this, check whether the logs are being applied on the standby or not.

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

*******************************************************************************************

STEP #2: Since managed recovery failed after applying the incremental backup, we need to re-create the controlfile of the standby. The reason for re-creating the controlfile is that the state of the database appeared unchanged because the database SCN was not updated in the control file after applying the incremental backup, while the SCNs for the datafiles were updated. Due to this, the standby database was still looking for the old file to apply.

To recreate the standby controlfile:

--> Take the backup of controlfile from primary

rman target sys/oracle@boston catalog rman/cat@emrep
backup current controlfile for standby;

--> Copy the controlfile backup to the standby system (or if it is on the common NFS mount, no need to transfer or copy)

--> Shutdown all instances (If standby is RAC) of the standby.

sqlplus / as sysdba
shutdown immediate
exit

--> Startup nomount, one instance.

sqlplus / as sysdba
startup nomount
exit

--> Restore the standby control file.

rman nocatalog target /
restore standby controlfile from '/tmp/o1_mf_TAG20070220T151030_.bkp';
exit

--> Startup the standby with the new control file.

sqlplus / as sysdba
shutdown immediate
startup mount
exit

--> Restart managed recovery in one instance (if standby is RAC) of the standby database:

sqlplus / as sysdba

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT

The above statement will succeed without errors, but the MRP process is still not started. The reason is that since the controlfile has been restored from the primary, it is looking for the datafiles at the same locations as on the primary instead of the standby. For example, if the primary datafiles are located at '+DATA/prod_db/DATAFILE' and the standby datafiles are at '+DATA/standby_db/DATAFILE', the new controlfile has the datafile locations as '+DATA/prod_db/DATAFILE'. This can be verified with the query "select name from v$datafile" on the standby instance. We need to rename all the datafiles to reflect the correct location.

To rename the datafiles, there are 2 ways:

1. Without RMAN

--> Change the parameter standby_file_management=manual
--> ALTER DATABASE RENAME FILE '+DATA/prod_db/datafile/users.310.620229743' TO '+DATA/standby_db/datafile/USERS.1216.648429765';

2. Using RMAN

--> rman nocatalog target /

--> Catalog the files, the string specified should refer to the diskgroup/filesystem destination of the standby data files.

RMAN> catalog start with '+<diskgroup>/<standby_db_unique_name>/datafile/';

e.g.:

RMAN> catalog start with '+DATA/ipwp_sac1/datafiles/';

This will give the user a list of files and ask if they should all be cataloged. The user should review the list and answer YES if all the datafiles are properly listed.

--> Once that is done, then commit the changes to the controlfile

RMAN> switch database to copy;

--> Now if you start the managed recovery

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT

and check for processes in the view V$MANAGED_STANDBY, the MRP process should be there. It will also start applying all the archived logs that were missing since the last applied log (this might take hours).

*******************************************************************************************

After re-creating the controlfile and renaming the datafiles (before starting the managed recovery), we observed (and it is possible) that there is a datafile in production which is not present on the standby. This was verified by checking the names of the datafiles in the v$datafile view. It showed that there is one datafile whose location is still that of production, and the renaming effort also failed because the datafile is not present at all on the standby. So in such a case we need to back up that single datafile from production and restore it on the standby.

--> On Production:

RMAN> run {
  allocate channel c1 type disk;
  backup datafile '+DATA/ipwp_rwc1/datafile/ipw_invli_is.1643.660041401' format '/dump/ipwp/rman_backup/ipw_invli_is';
}

--> On standby:

RMAN> catalog backuppiece '/dump/ipwp/rman_backup/ipw_invli_is';

cataloged backup piece
backup piece handle=/dump/ipwp/rman_backup/ipw_invli_is recid=26806 stamp=661232892

RMAN> RESTORE DATAFILE '+DATA/prod_db/datafile/ipw_invli_is.1643.660041401';


Starting restore at 2008-07-28 03:58:18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=321 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00045 to +DATA/ipwp_rwc1/datafile/ipw_invli_is.1643.660041401
channel ORA_DISK_1: reading from backup piece /dump/ipwp/rman_backup/ipw_invli_is
channel ORA_DISK_1: restored backup piece 1
piece handle=/dump/ipwp/rman_backup/ipw_invli_is tag=TAG20080728T010613
channel ORA_DISK_1: restore complete, elapsed time: 00:01:26
Finished restore at 2008-07-28 03:59:46

RMAN> delete backuppiece '/dump/ipwp/rman_backup/ipw_invli_is';

allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=310 devtype=DISK

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
26806   26805   1   1   AVAILABLE   DISK        /dump/ipwp/rman_backup/ipw_invli_is

Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/dump/ipwp/rman_backup/ipw_invli_is recid=26806 stamp=661232892
Deleted 1 objects

--> After this, re-confirm the location of all datafiles and then start the managed recovery.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

*******************************************************************************

Hope the note helps. This has been my real-life experience and hence I found it worth sharing. Any thoughts and suggestions are always welcome!

So your standby is out of sync?When using dataguard, there are various scenarios where physical standby goes out of sync with primary.

Referhttp://download.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#CIHIAADC for scenarios where

standby needs to be rolled forward and the steps to follow to bring it in sync with primary.

Before doing anything we need to verify that why standby is not in sync with primary. In this paticular note, i am covering

the scenrio where a log is missing from the standby and the associated problems. Verify from v$archived_log that there is a

gap in Sequence number. All the logs upto that gap should have APPLIED=YES.

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

There are basically couple of steps to be performed when the standby is not in sync with Primary and is lagging in terms of

redo logs. These are:

1. Take a incremental backup of primary from the SCN where standby is lagging behind and apply on the standby server

Page 20: Oracle 11g Data Guard

2. Then re-create the controlfile of standby database from the primary

******************************************************************************

STEP#1

1. on STANDBY database query the V$DATABASE view and record the current SCN of the standby database:

SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN

-----------

1.3945E+10

SQL> SELECT to_char(CURRENT_SCN) FROM V$DATABASE;

TO_CHAR(CURRENT_SCN)

----------------------------------------

13945141914

2. Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;ALTER DATABASE RECOVER MANAGED

STANDBY DATABASE CANCEL*ERROR at line 1:ORA-16136: Managed Standby Recovery not active

If you see this above error, it means Managed Recovery is already off

You can also confirm from the view v$managed_standby to see if the MRP is running or not

SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;

3. Connect to the primary database as the RMAN target and create an incremental backup from the current SCN of the

standby database that was recorded in step 1:

BACKUP INCREMENTAL FROM SCN 13945141914 DATABASE FORMAT '/tmp/ForStandby_%U' tag 'FOR STANDBY'

4. Do a recovery of STANDBY database using the incremental backup of primary taken above

--> On the Standby server, Without connecting to recovery catalog, catalog the backupset of the incermental backup

$ rman nocatalog target /

RMAN> CATALOG BACKUPPIECE '/dump/ipwp/inc_bkup/ForStandby_1qjm8jn2_1_1';

--> Now in the same session, start the recovery

RMAN> RECOVER DATABASE NOREDO;

you should see something like follwing at the end:

channel ORA_DISK_1: reading from backup piece /dump/ipwp/inc_bkup/ForStandby_1qjm8jn2_1_1channel ORA_DISK_1:

Page 21: Oracle 11g Data Guard

restored backup piece 1piece handle=/dump/ipwp/inc_bkup/ForStandby_1qjm8jn2_1_1 tag=FOR STANDBYchannel

ORA_DISK_1: restore complete, elapsed time: 01:53:08

Finished recover at 2008-07-25 05:20:3

--> Delete the backup set from standby

RMAN> DELETE BACKUP TAG 'FOR STANDBY';

using channel ORA_DISK_1

List of Backup PiecesBP Key BS Key Pc# Cp# Status Device Type Piece Name

------- ------- --- --- ----------- ----------- ----------

17713 17713 1 1 AVAILABLE DISK /dump/ipwp/inc_bkup/ForStandby_1qjm8jn2_1_1

Do you really want to delete the above objects (enter YES or NO)? YES

deleted backup piecebackup piece handle=/dump/ipwp/inc_bkup/ForStandby_1qjm8jn2_1_1 recid=17713

stamp=660972421Deleted 1 objects

5. Try to start the managed recovery.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

--> If you get an error here, you need to goto STEP#2 for bringing standby in sync

--> If no error, then using the view v$managed_standby, verify that MRP process is started.

6. After this check whether the logs are being applied on the standby or not.

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

*******************************************************************************************

STEP #2: Since Managed recovery failed after applying the incremental backup, we need to re-create the controlfile of

standby. The reason for re-creating the controlfile is that State of the database was same because the database_scn was

not updated in the control file after applying the incremental backup while the scn for datafiles were updated. Due to this

standby database was still looking for the old file to apply.

To recreate the standby controlfile:

--> Take the backup of controlfile from primary

rman target sys/oracle@boston catalog rman/cat@emrepbackup current controlfile for standby;

--> Copy the controlfile backup to the standby system (or if it is on the common NFS mount, no need to transfer or copy)

--> Shutdown all instances (If standby is RAC) of the standby.

sqlplus / as sysdbashutdown immediateexit

--> Startup nomount, one instance.

sqlplus / as sysdbastartup nomountexit

Page 22: Oracle 11g Data Guard

--> Restore the standby control file.

rman nocatalog target /restore standby controlfile from '/tmp/o1_mf_TAG20070220T151030_.bkp';exit

--> Startup the standby with the new control file.

sqlplus / as sysdbashutdown immediatestartup mountexit

--> Restart managed recovery in one instance (if standby is RAC) of the standby database:

sqlplus / as sysdba

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT

The abobe statement will succeed without errors but still MRP process is not started. The reason is that since the controlfile

has been restored from the primary, it is looking for datafiles at the same location as are in primary instead of standby. For

example, if the priamry datafiles are located at '+DATA/prod_db/DATAFILE' and standby datafiles are at

'+DATA/standby_db/DATAFILE', the new controlfile has the datafiles location as '+DATA/prod_db/DATAFILE'. This can be

verified from the query "select name from v$datafile" on the standby instance. We need to rename all the datafiles to

reflect the correct location.

To rename the datafiles, there are 2 ways:

1. Without RMAN

--> Change the parameter: standby_file_management=manual

--> ALTER DATABASE RENAME FILE '+DATA/prod_db/datafile/users.310.620229743' TO '+DATA/standby_db/datafile/USERS.1216.648429765';

2. Using RMAN

--> rman nocatalog target /

--> Catalog the files; the string specified should refer to the diskgroup/filesystem destination of the standby datafiles.

RMAN> catalog start with '+<diskgroup>/<db_unique_name>/datafile/';

e.g.:

RMAN> catalog start with '+DATA/ipwp_sac1/datafiles/';

This will give the user a list of files and ask if they should all be cataloged. The user should review the list and answer YES if all the datafiles are properly listed.

--> Once that is done, then commit the changes to the controlfile

RMAN> switch database to copy;

--> Now if you start the managed recovery

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT

and check the processes in the view V$MANAGED_STANDBY, the MRP process should be there. It will also start applying all the archived logs that have been missing since the last applied log (this might take hours).

*******************************************************************************************


After re-creating the controlfile and renaming the datafiles (before starting the managed recovery), we observed (and it is possible) that there was a datafile in production that was not present on the standby. This was verified by checking the names of the datafiles in the v$datafile view. It showed one datafile whose location was still that of production, and the renaming effort also failed because the datafile was not present on the standby at all. So in such a case we need to back up that single datafile from production and restore it on the standby.

--> On Production:

RMAN> run {
        allocate channel c1 type disk;
        backup datafile '+DATA/ipwp_rwc1/datafile/ipw_invli_is.1643.660041401'
        format '/dump/ipwp/rman_backup/ipw_invli_is';
      }

--> On standby:

RMAN> catalog backuppiece '/dump/ipwp/rman_backup/ipw_invli_is';

cataloged backuppiece
backup piece handle=/dump/ipwp/rman_backup/ipw_invli_is recid=26806 stamp=661232892

RMAN> RESTORE DATAFILE '+DATA/prod_db/datafile/ipw_invli_is.1643.660041401';

Starting restore at 2008-07-28 03:58:18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=321 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00045 to +DATA/ipwp_rwc1/datafile/ipw_invli_is.1643.660041401
channel ORA_DISK_1: reading from backup piece /dump/ipwp/rman_backup/ipw_invli_is
channel ORA_DISK_1: restored backup piece 1
piece handle=/dump/ipwp/rman_backup/ipw_invli_is tag=TAG20080728T010613
channel ORA_DISK_1: restore complete, elapsed time: 00:01:26
Finished restore at 2008-07-28 03:59:46

RMAN> delete backuppiece '/dump/ipwp/rman_backup/ipw_invli_is';

allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=310 devtype=DISK

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
26806   26805   1   1   AVAILABLE   DISK        /dump/ipwp/rman_backup/ipw_invli_is

Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/dump/ipwp/rman_backup/ipw_invli_is recid=26806 stamp=661232892
Deleted 1 objects

--> After this, re-confirm the location of all the datafiles and then start the managed recovery.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

*******************************************************************************

Physical Standby out of sync - Missing Datafiles Scenario

Environment:


1. Primary has 200 datafiles and standby has only 166 datafiles

2. Primary is a 3 node cluster and Standby is a 2 node cluster

3. The DB name is mydb

Problem and Symptoms:

1. When I tried to start the MRP on the standby, it reported the following error in the alert log:

**************************************************************

Errors in file /u01/app/oracle/admin/mydb/bdump/mydb1_mrp0_21189.trc:
ORA-01111: name for data file 167 is unknown - rename to correct file
ORA-01110: data file 167: '/u01/app/oracle/product/9.2.0/dbs/UNNAMED00167'
ORA-01157: cannot identify/lock data file 167 - see DBWR trace file
ORA-01111: name for data file 167 is unknown - rename to correct file
ORA-01110: data file 167: '/u01/app/oracle/product/9.2.0/dbs/UNNAMED00167'

*************************************************************

2. On further investigation, the standby’s alert log also contained the following errors:

************************************************************************

Tue Sep 9 04:05:03 2008
Media Recovery Log /u03/oradata/mydb/arc_backup/mydb_2_2173.arc
Media Recovery Log /u03/oradata/mydb/arc_backup/mydb_1_1896.arc
WARNING: File being created with same name as in Primary
Existing file may be overwritten
File #167 added to control file as 'UNNAMED00167'. Originally created as: '/u07/oradata/mydb/myfile_1.dbf'
Recovery was unable to create the file as: '/u07/oradata/mydb/myfile_1.dbf'
MRP0: Background Media Recovery terminated with error 1274
Tue Sep 9 04:05:06 2008
Errors in file /u01/app/oracle/admin/mydb/bdump/mydb1_mrp0_7175.trc:
ORA-01274: cannot add datafile '/u07/oradata/mydb/myfile_1.dbf' - file could not be created
ORA-01119: error in creating database file '/u07/oradata/mydb/myfile_1.dbf'
ORA-27054: Message 27054 not found; product=RDBMS; facility=ORA
Linux-x86_64 Error: 13: Permission denied

**************************************************************************

3. On checking the view v$archived_log, there were a lot of log sequence#s with APPLIED=NO.

4. There was no gap in the sequence#s.


Reason:

The parameter db_file_name_convert was not set on the standby database. As long as files were created on /u02 and /u03 on the primary, there was no problem on the standby, because the standby also had /u02 and /u03. But when file #167 was added at /u07 on the primary (on Sep 9 04:05:03 2008), it could not be mapped to a /u07 mount point on the standby, because /u07 does not exist on the standby and db_file_name_convert was not set. As indicated by the alert log, file #167 was registered in the standby’s control file as “UNNAMED00167” at the default location of $ORACLE_HOME/dbs, but the file was never physically created on the standby database.

Action Plan:

1. At the standby: set the db_file_name_convert parameter so that the /u07 folder on the primary maps to the corresponding folder on the standby.

Since this is a static parameter, you need to bounce the standby DB. An example follows below.
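For example (a hedged sketch, assuming /u07 on the primary corresponds to /u02 on the standby, as in the soft-link alternative below; adjust the paths to your environment):

SQL> alter system set db_file_name_convert='/u07/oradata/mydb/','/u02/oradata/mydb/' scope=spfile sid='*';
-- then bounce the standby (all instances, if RAC) so the static parameter takes effect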

******************************************************************************

As step #1, you can do the following instead of the above step:

At the standby:

Create a /u07 soft link pointing to /u02, to eliminate the bounce of the standby DB caused by adding the db_file_name_convert init.ora parameter.

*********************************************************************************************

2. At the standby:

SQL> alter system set standby_file_management=manual;

3. At the primary, for datafile 167:

SQL> alter tablespace <> begin backup;

Copy the datafile from the primary to the correct location on the standby.

SQL> alter tablespace <> end backup;

4. At the Standby:


SQL> alter database rename file '.......UNNAMED00167' to '<>';
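For illustration only (the UNNAMED file name comes from the alert log above; the target path is a hypothetical standby location, assuming /u07 on the primary maps to /u02 on the standby):

SQL> alter database rename file
     '/u01/app/oracle/product/9.2.0/dbs/UNNAMED00167'
     to '/u02/oradata/mydb/myfile_1.dbf';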

******************************************************************************

You can skip steps #3 and #4 and instead do the following step after #2:

At the Standby:

SQL> ALTER DATABASE CREATE DATAFILE '< ....UNNAMED00167>' as '<>';

******************************************************************************

5. To create the remaining datafiles at the Standby automatically:

SQL> alter system set standby_file_management=auto;

6. Start the MRP at the Standby

SQL> alter database recover managed standby database;

At the standby database, ensure the MRP is running as expected:

SQL> select process, status, sequence# from v$managed_standby;

When Primary and Standby are RAC databases:

1. On Standby: you may see multiple copies of some or all logs transported and applied on the standby when you check the view v$archived_log.

2. On Standby: all sequence#s should have APPLIED=YES in v$archived_log for all threads. This ensures that all logs from all threads were transported and applied on the standby, and hence keeps the standby in sync with the primary.

3. On Standby: in the view v$archived_log you may not see the same number of copies of every log. For example, if the primary is a 3-node cluster, you may or may not have 3 copies of each log, i.e. you may not have the same sequence# on the standby for all 3 threads. The reason, of course, is that the number of logs generated on the 3 primary nodes will differ. The current sequence# transported from a node of the primary RAC database can be seen by querying v$archived_log on the standby:


SQL> select max(sequence#) from v$archived_log where thread#=1;

As explained above, the output will differ for all 3 threads.
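To compare all threads at once, a hedged variant of the same check could be:

SQL> select thread#,
            max(sequence#) as last_received,
            max(case when applied = 'YES' then sequence# end) as last_applied
     from v$archived_log
     group by thread#;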

Basics of ASM (Automatic Storage Management) - I

Here are some basics about using ASM storage. Installation and configuration are not in the scope of this blog :-) The tips listed in this blog are helpful when you are working in an environment where ASM is already in use and you are a complete novice with ASM.

Using ASM

1. To use ASM storage you need to install an ASM instance.

2. ASM storage can be viewed and accessed using the command line utility called ASMCMD. To use ASMCMD, you need to first source the environment for ASM instance.

Following is a small demo of using ASMCMD:

oracle@ipw-dev-db:~> . oraenv
ORACLE_SID = [mydb] ? +ASM

oracle@ipw-dev-db:~> asmcmd

ASMCMD [+] > ls -l
State    Type    Rebal  Unbal  Name
MOUNTED  EXTERN  N      N      DATA/

ASMCMD [+] > cd +DATA

ASMCMD [+DATA] > ls -l
Type  Redund  Striped  Time  Sys  Name
                             Y    MYDB/
                             Y    MYDB_1/

ASMCMD [+DATA] >

3. Commands that can be used in ASMCMD include: mkdir, ls, rm, cd

4. ASM assigns its own names to the datafiles

5. ASM storage is entirely different from a normal filesystem storage.

Viewing ASM information

ASM information can be viewed by using dynamic performance views like v$asm_diskgroup. You need to log in to the ASM instance to view this. Using ASMCMD or normal OS commands (like df -h), you cannot directly see vital information such as the size of an ASM diskgroup, its free space, etc. You have to query the relevant views to get this information. For example:

oracle@ipw-dev-db:~> sqlplus '/as sysdba'

SQL*Plus: Release 10.2.0.3.0 - Production on Thu Jul 17 01:27:08 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning and Data Mining options

SQL> select free_mb, total_mb from v$asm_diskgroup;

   FREE_MB   TOTAL_MB
---------- ----------
    181331     664697


SQL>
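If there are several diskgroups, a slightly wider query against the same view (a sketch, nothing environment-specific assumed) shows each one by name:

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;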

Moving datafiles in ASM storage

1. Shut down and start up the database in MOUNT state. Datafile movement should be done only in the MOUNT state.

2. Connect to RMAN and target database without any recovery catalog

RMAN> Connect target /

3. Assuming that I am moving a datafile of database MYDB from ‘+DATA/mydb_1/datafile’ to ‘+DATA/mydb/datafile’

RMAN> copy datafile '+DATA/MYDB_1/DATAFILE/users.488.658972917' to '+DATA';
RMAN> switch datafile '+DATA/MYDB_1/DATAFILE/users.488.658972917' to copy;

4. You can also move the SYSTEM and SYSAUX datafiles in the same way, because the database is mounted and not open.

5. The SWITCH command is responsible for updating the controlfile with the new location.
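As a hedged follow-up (not spelled out in the original steps; the path assumes the MYDB example above, and ASM name case can vary), you can confirm the new location and then open the database:

SQL> select name from v$datafile where upper(name) like '+DATA/MYDB/DATAFILE/%';
SQL> alter database open;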

Happy learning !!!! More to come (of course, as soon as I learn something worth sharing :-))

- Rohit

MONDAY, JUNE 2, 2008

Common Oracle installation errors/issues/problems on Linux - 1

Well!!!! This is not something I have been fortunate enough to find in one single place in any book or the umpteen tutorials on the internet. There are very good installation documents, like http://www.puschitz.com/InstallingOracle9i.shtml and Metalink note 184821.1, to name a few, which provide excellent step-by-step installation procedures, but they don't really address the common installation problems consolidated in one single location. Of course, one reason is that some problems are not generic and are faced during one particular installation only. The following is an attempt towards the same (with the highest regard for all available notes and tutorials).

Generic installation problems on Linux:

1. Installation can fail during linking phase with errors like "errors in invoking target Install_isqlplus of makefile /u01/app/oracle/product/9.2.0.7/sqlplus/lib/ins_sqlplus.mk"

Reason/Resolution: Linking problems are usually associated with an incorrect gcc package version for your OS. For a 9i installation, the gcc version should be 3.2.3, and for a 10g installation it should be 3.4.6. You can check the version with the command "gcc -v". Usually, 3.4.6 is the default.

For activating correct gcc version for 9i installation on 32bit OS (i386):

$ mv /usr/bin/gcc /usr/bin/gcc.orig
$ mv /usr/bin/g++ /usr/bin/g++.orig
$ ln -s /usr/bin/i386-redhat-linux-gcc32 /usr/bin/gcc
$ ln -s /usr/bin/i386-redhat-linux-g++32 /usr/bin/g++

For activating correct gcc version for 9i installation on 64bit OS (x86_64):

$ mv /usr/bin/gcc /usr/bin/gcc.orig
$ mv /usr/bin/g++ /usr/bin/g++.orig
$ ln -s /usr/bin/x86_64-redhat-linux-gcc32 /usr/bin/gcc
$ ln -s /usr/bin/x86_64-redhat-linux-g++32 /usr/bin/g++

Refer to Metalink Notes 353529.1 and 169706.1 for installation prerequisites.

2. "There is no non-empty value for variable s_jservPort under section Ports in file /u01/app/oracle/product/9.2.0/Apache/ports.ini"

Reason/Resolution: This problem is usually encountered when you are making a second attempt at installing the software after a previous failed installation. This is an ignorable error. If you open the file /u01/app/oracle/product/9.2.0/Apache/ports.ini, you will see that "s_jservPort" might be defined above the "Ports" section. We just need to place this variable under the "Ports" section. If you are not using IAS or Grid Control, you can safely ignore this error or make the change manually as mentioned above. In any case there should be no operational impact on the database.


3. Errors in writing a few files, like "error in writing to file /u01/app/oracle/product/9.2.0/Apache/Apache/conf/ssl.key/server.key"

Reason/Resolution: Again, this problem is usually encountered when you are making a second attempt at installing the software after a previous failed installation. The files mentioned in these errors were actually created during the previous attempt and cannot be overwritten because they were created read-only during installation. So to proceed with the installation, you need to change the permissions on these files (using chmod) to make them writable. An even better solution is, before starting the installation again, to completely remove the Oracle_Home that was created and populated during the previous installation attempt, and create a fresh, empty directory for Oracle_Home.

4. "Error occurred during initialization of VMUnable to load native library: /tmp/OraInstall2003-10-25_03-14-57PM/jre/lib/i386/libjava.so: symbol __libc_wait, version GLIBC_2.0 not defined in file libc.so.6 with link time reference"

Reason/Resolution: To resolve the __libc_wait symbol issue, download the patch p3006854_9204_LINUX.zip from http://metalink.oracle.com/. See bug 3006854 for more information. To apply the patch, run:

su - root
# unzip p3006854_9204_LINUX.zip
Archive:  p3006854_9204_LINUX.zip
   creating: 3006854/
  inflating: 3006854/rhel3_pre_install.sh
  inflating: 3006854/README.txt
# cd 3006854
# sh rhel3_pre_install.sh
Applying patch...
Patch successfully applied
#

5. OUI Hangs at 18% - "Copying naeet.o"

Reason/Resolution: The reason is that the environment variable LD_ASSUME_KERNEL has not been set. Check the Metalink notes:

Note 360142.1: When Running OUI, OUI Hangs at 18% Copying naeet.o
Note 377217.1: What should the value of LD_ASSUME_KERNEL be set to for Linux?

Problems specific to 9i RAC installation:

1. On starting the ORACM service, you can get the error:

ocmstart.sh: Error: Restart is too frequent
ocmstart.sh: Info: Check the system configuration and fix the problem.
ocmstart.sh: Info: After you fixed the problem, remove the timestamp file
ocmstart.sh: Info: "/u01/app/oracle/product/9.2.0.7/oracm/log/ocmstart.ts"

Reason/Resolution: To resolve this, remove the file $ORACLE_HOME/oracm/log/ocmstart.ts and then you should be able to start the service.

2. During installation of a CM patch set (like the 9207 or 9208 patchset), the following error occurs: "error in writing to file '/u01/app/oracle/product/9.2.0.7/oracm/bin/oracm' (text file busy)"

Reason/Resolution: This error occurs if you are trying to install the CM patch set without stopping the ORACM service. ORACM services on both nodes should be stopped before installing the CM patch set.

3. After installing Cluster Manager, the ORACM service should be started on all nodes to proceed with the RDBMS installation. I have personally faced the situation where the service does not start on both nodes. For example, in a 2-node RAC, the service could be started on one node only; starting the service on one node kills the service on the other node.

Reason/Resolution: The service has to be started as root and requires LD_ASSUME_KERNEL to be set correctly. I had set LD_ASSUME_KERNEL properly as the "oracle" user, but when switching to the "root" user to start the service, I was doing "su -" instead of "su". "su -" does not carry over the environment settings, and hence the value of LD_ASSUME_KERNEL was not carried over to the "root" user.

4. When trying to apply the 9208 CM patch set, all nodes were not considered by the installation. Following error was found in the installation logs:


"Cluster nodes cannot be retrieved from the vendor clusterware (/tmp/OraInstall2008-03-20_12-12-02AM/oui/bin/lsnodes.bin: error while loading shared libraries: libcmdll.so: cannot open shared object file: No such file or directory). This system will not be considered as a vendor clusterware"

Also, "lsnodes" command, which can be used to verify all the nodes in the RAC was failing.

Reason/Resolution: The correct order to follow when installing 9208 RAC is:

--> 9204 CM
--> 9204 RDBMS
--> 9208 CM patchset
--> 9208 RDBMS patchset

The reason for the above error and for "lsnodes" failing is that the $ORACLE_HOME/lib32 directory does not exist. The file libcmdll.so mentioned in the error is located inside the lib32 directory, and lib32 is created only by the installation of the RDBMS software, not the CM. So if you don't follow the correct order and try to install the 9208 CM patchset right after 9204 CM, you'll get this error and "lsnodes" will also not work to show all the nodes in the cluster (which you would assume should be there after installing 9204 CM successfully). Instead, after 9204 CM you should install the 9204 RDBMS software; then, when you apply the 9208 patchset, this error won't be seen and "lsnodes" will work.

5. Always check the inventory on all nodes to verify that the correct versions of the CM and RDBMS patchsets have been applied on all nodes. The inventory can be verified by launching the OUI. Version mismatches are more prevalent with CM patchsets: if you apply the 9208 CM patchset on one node, the CM version on the other node may still be 9204. However, this can be true for RDBMS patchsets as well. In such cases, you need to apply the patchset on the other node separately (ideally all installation in RAC happens from a single node and the other nodes are updated automatically) to get the correct version. You can check the version of CM on each node with the following command after starting the ORACM service:

$ head -1 $ORACLE_HOME/oracm/log/cm.log

This was the first set of real-life installation problems I have faced. I have more to come soon, so stay tuned and keep checking this space for more updates. "Suggestions are more than welcome" :-)

Cheers,
- Rohit


TUESDAY, MAY 6, 2008

How to clone a database using a cold backup

There is very good documentation available on cloning at http://www.samoratech.com/TopicOfInterest/swCloneDB.htm. With due respect to the above documentation, I am trying to be more descriptive in approach. These steps are purely based on my experience.

1. Decide carefully about the location and server for the new database's datafiles, redo files and control files. Make sure you create all the directories where you want to keep the datafiles, redo files and control files (like /u02/oradata/$ORACLE_SID), the alert log (/u01/app/oracle/admin/$ORACLE_SID/bdump), trace files (/u01/app/oracle/admin/$ORACLE_SID/udump) and the parameter file (/u01/app/oracle/admin/$ORACLE_SID/pfile).

2. Generate the script for creating the control file for the destination database. To do this, while logged in as SYS to the source database, run ALTER DATABASE BACKUP CONTROLFILE TO TRACE. This generates a trace file at /u01/app/oracle/admin/source_SID/udump containing the CREATE CONTROLFILE script. Edit the trace file to keep only the relevant script for creating the new control file and save it with a suitable name. The control file script should contain RESETLOGS because we are going to create a brand new cloned database. For example, the first two lines of the control file script should look like:

STARTUP NOMOUNT pfile=/u01/app/oracle/admin/$ORACLE_SID/pfile/$ORACLE_SID.ora
CREATE CONTROLFILE set DATABASE "DD707" resetlogs noarchivelog
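For illustration, an edited script might look roughly like the sketch below; the MAX settings, redo and datafile names, sizes and character set are hypothetical placeholders, so use the values listed in your own trace file:

STARTUP NOMOUNT pfile=/u01/app/oracle/admin/$ORACLE_SID/pfile/$ORACLE_SID.ora
CREATE CONTROLFILE SET DATABASE "DD707" RESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 200
LOGFILE
    GROUP 1 '/u02/oradata/DD707/redo01.log' SIZE 50M,
    GROUP 2 '/u02/oradata/DD707/redo02.log' SIZE 50M
DATAFILE
    '/u02/oradata/DD707/system01.dbf',
    '/u02/oradata/DD707/undotbs01.dbf',
    '/u02/oradata/DD707/users01.dbf'
CHARACTER SET WE8ISO8859P1;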

3. To find the names and locations of all datafiles to be copied from source to destination:
SQL> select file_name from dba_data_files;

4. To find the names and locations of all redo log files to be copied from source to destination:
SQL> select member from v$logfile;

5. Shut down the source database:
SQL> SHUTDOWN IMMEDIATE


6. Copy all the datafiles and redo log files of the source database to the location holding the destination database's datafiles and redo log files. If it is on the same server, use "cp". If the destination is on a different server, use "scp". For example, change your directory to the location of the source database's datafiles ($ cd /u02/oradata/$ORACLE_SID) and then do "$ scp -p xyz.dbf oracle@<destination_server>:/u01/app/oracle/product/10.2.0/dbs/xyz.dbf"

7. Copy the CREATE CONTROLFILE script generated in Step#2 to a secure location (like /u01/app/oracle/admin/dest_SID/udump). 

8. Copy the source database's init$ORACLE_SID.ora file as destination database's init$ORACLE_SID.ora at /u01/app/oracle/admin/$ORACLE_SID/pfile. 

9. In the copied init.ora, make sure you edit all relevant parameters which are specific to the source database (like udump, bdump, DB_NAME, INSTANCE_NAME etc). These should be specific to destination database.

10. Create a soft link at $ORACLE_HOME/dbs for the parameter file. For example, change the directory to $ORACLE_HOME/dbs $cd $ORACLE_HOME/dbs and then do "$ ln -s /u01/app/oracle/admin/$ORACLE_SID/pfile/init$ORACLE_SID.ora init$ORACLE_SID.ora" where $ORACLE_SID is the destination SID.

Please note that if in Step#8, you copy the pfile directly to $ORACLE_HOME/dbs on the destination, you can skip the Step#10.

11. Now edit the tnsnames, listener and oratab files on the destination server to have entries for the new destination database. All three are usually kept at /var/opt/oracle.

12. Now set your environment for the new SID. $ . oraenv 

13. Log in to the idle instance (for the destination database) as SYS:
SQL> sqlplus '/as sysdba'
You should see the message that you are connected to an idle instance.

14. Now run the command for creating the control file. Assuming that you copied the script in Step#7 to /u01/app/oracle/admin/$ORACLE_SID/udump and the name of the script is ctrlnew.sql, do the following:
SQL> @/u01/app/oracle/admin/$ORACLE_SID/udump/ctrlnew.sql
Remember that the control file can be created in the NOMOUNT state only, so the first command in your script should be STARTUP NOMOUNT pfile=<pfile location>. If successful, you will get the message "Control file created".

15. Now open the database with RESETLOGS:
SQL> ALTER DATABASE OPEN RESETLOGS
If this runs successfully and says "Database altered", your cloned destination database is ready to be used.

- Rohit


THURSDAY, MAY 1, 2008

How to change SPFILE parameters for a RAC database

Well, this is as simple a task as it sounds. But believe me, when I was supposed to do this for the first time in my career, I had a hard time searching for the exact solution. Here I am just sharing my experience in doing this task. There are 2 known methods available to perform it.

METHOD 1:

Simply issue the following SQL statement from any of the nodes:

ALTER SYSTEM SET <parameter>=<value> SCOPE=SPFILE;

There are 3 possible values for the 'scope' clause in this statement:
1. MEMORY: the change is immediate but will not survive the next startup or reboot of the instance.
2. SPFILE: the change is made in the SPFILE only and will take effect after the next startup or reboot.
3. BOTH: the change is effective in both MEMORY and the SPFILE, and will still be in effect after the next startup. This is the default.

You can also specify another clause called 'sid' at the end of the above ALTER statement, which is specifically meant for a RAC database. It specifies the instance where you want to make the change. For example, ALTER SYSTEM SET <parameter>=<value> SCOPE=SPFILE SID='*'; means that this particular change will apply to all instances after they are rebooted (the default is '*').
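For example (a hypothetical change just to illustrate the syntax; processes is only a sample parameter, and the values and the instance name RAC1 are arbitrary):

SQL> ALTER SYSTEM SET processes=300 SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET processes=500 SCOPE=SPFILE SID='RAC1';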

METHOD 2:

Another method to change a parameter in the SPFILE is to export it to a PFILE, change it, and then create a new SPFILE. The steps involved are:

1. On one instance, create a PFILE from the existing SPFILE: SQL> CREATE PFILE FROM SPFILE; This creates a pfile called initSID.ora at $ORACLE_HOME/dbs.
2. Edit the resulting PFILE initSID.ora in a vi editor (add/alter the required parameter). You should use the *.<parameter> notation so that the parameter value is applied to all instances.
3. Now shut down all the instances.
4. Start up the instance (and hence the database) where you created and altered the PFILE, using this PFILE only: STARTUP PFILE=$ORACLE_HOME/dbs/initSID.ora. Do not start the other instances yet.


5. Now, through this instance only, create a new SPFILE (which can be at a common location accessed by all instances): CREATE SPFILE='common_location/spfile.ora' FROM PFILE='$ORACLE_HOME/dbs/initSID.ora'; This overwrites the existing SPFILE with one that has the new/altered parameter.
6. Now shut down this instance again.
7. Now start up all the 3 instances normally, without the PFILE or SPFILE option: STARTUP. By default, startup will now use the new SPFILE.
8. To confirm that the new parameters have been set/removed, issue the following SQL statement on all 3 instances: SHOW PARAMETER <parameter>

The method would not be much different if this were a single-instance database instead of a RAC database (in Step#7 you would be starting only that single instance).

http://www.oracle.com/technetwork/database/features/availability/maa-tech-wp-sundbm-backup-11202-183503.pdf