ORACLE DBA BASICS

Configuring Kernel Parameters For Oracle Installation

This section documents the checks and modifications to the Linux kernel that should be made by the DBA to support Oracle Database 10g. Before detailing these individual kernel parameters, it is important to fully understand the key kernel components that are used to support the Oracle Database environment.

The kernel parameters and shell limits presented in this section are recommended values only as documented by Oracle. For production database systems, Oracle recommends that we tune these values to optimize the performance of the system.

Verify that the kernel parameters shown in this section are set to values greater than or equal to the recommended values.

Shared Memory: Shared memory allows processes to access common structures and data by placing them in a shared memory segment. This is the fastest form of Inter-Process Communication (IPC) available, mainly because no kernel involvement occurs when data is being passed between the processes. Data does not need to be copied between processes.

Oracle makes use of shared memory for its System Global Area (SGA), which is an area of memory that is shared by all Oracle background and foreground processes. Adequate sizing of the SGA is critical to Oracle performance since it is responsible for holding the database buffer cache, shared SQL, access paths, and much more.

To determine all current shared memory limits, use the following:

# ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 4194303
max total shared memory (kbytes) = 1073741824
min seg size (bytes) = 1

The following list describes the kernel parameters that can be used to change the shared memory configuration for the server:

1.) shmmax - Defines the maximum size (in bytes) for a shared memory segment. The Oracle SGA is comprised of shared memory and it is possible that incorrectly setting shmmax could limit the size of the SGA. When setting shmmax, keep in mind that the size of the SGA should fit within one shared memory segment. An inadequate shmmax setting could result in the following error:

ORA-27123: unable to attach to shared memory segment

We can determine the value of shmmax by performing the following:

# cat /proc/sys/kernel/shmmax
4294967295

For most Linux systems, the default value for shmmax is 32MB. This size is often too small to configure the Oracle SGA. The default value for shmmax in CentOS 5 is 4GB, which is more than enough for the Oracle configuration. Note that this value of 4GB is not the "normal" default value for shmmax in a Linux environment; CentOS 5 inserts the following entry in the file /etc/sysctl.conf:

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295
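If shmmax needs to be raised on a running server, it can also be changed without a reboot; the value below is illustrative and should be sized to your planned SGA:

# /sbin/sysctl -w kernel.shmmax=4294967295

or, equivalently:

# echo 4294967295 > /proc/sys/kernel/shmmax

Either form takes effect immediately; the /etc/sysctl.conf entry shown above is still needed so the value survives a reboot.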

2.) shmmni : This kernel parameter is used to set the maximum number of shared memory segments system wide. The default value for this parameter is 4096. This value is sufficient and typically does not need to be changed. We can determine the value of shmmni by performing the following:

# cat /proc/sys/kernel/shmmni
4096

3.) shmall : This parameter controls the total amount of shared memory (in pages) that can be used at one time on the system. The value of this parameter should always be at least shmmax divided by the system page size. We can determine the value of shmall by performing the following:

# cat /proc/sys/kernel/shmall
268435456

For most Linux systems, the default value for shmall is 2097152 and is adequate for most configurations. The default value for shmall in CentOS 5 is 268435456 (see above), which is more than enough for the Oracle configuration described in this article. Note that this value of 268435456 is not the "normal" default value for shmall in a Linux environment; CentOS 5 inserts the following entry in the file /etc/sysctl.conf:

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 268435456
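As a quick sanity check, the minimum shmall can be derived from shmmax and the system page size (4096 bytes is typical on x86 Linux; the figures below are illustrative):

# getconf PAGE_SIZE
4096
# expr 4294967295 / 4096
1048575

Rounding up, shmall should be at least 1048576 pages for a 4GB shmmax on this system.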

4.) shmmin : This parameter controls the minimum size (in bytes) for a shared memory segment. The default value for shmmin is 1 and is adequate for the Oracle configuration described in this article. We can determine the value of shmmin by performing the following:

# ipcs -lm | grep "min seg size"
min seg size (bytes) = 1

Semaphores: After the DBA has configured the shared memory settings, it is time to take care of configuring the semaphores. The best way to describe a semaphore is as a counter that is used to provide synchronization between processes (or threads within a process) for shared resources like shared memory. Semaphore sets are supported in System V where each one is a counting semaphore. When an application requests semaphores, it does so using "sets". To determine all current semaphore limits, use the following:

# ipcs -ls

------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

We can also use the following command:

# cat /proc/sys/kernel/sem
250 32000 32 128

The following list describes the kernel parameters that can be used to change the semaphore configuration for the server:

i.) semmsl - This kernel parameter is used to control the maximum number of semaphores per semaphore set. Oracle recommends setting semmsl to the largest PROCESSES instance parameter setting in the init.ora file for all databases on the Linux system, plus 10. Also, Oracle recommends setting semmsl to a value of no less than 100.

ii.) semmni - This kernel parameter is used to control the maximum number of semaphore sets in the entire Linux system. Oracle recommends setting semmni to a value of no less than 100.

iii.) semmns - This kernel parameter is used to control the maximum number of semaphores (not semaphore sets) in the entire Linux system. Oracle recommends setting semmns to the sum of the PROCESSES instance parameter setting for each database on the system, adding the largest PROCESSES twice, and then finally adding 10 for each Oracle database on the system. Use the following calculation to determine the maximum number of semaphores that can be allocated on a Linux system. It will be the lesser of:

SEMMNS -or- (SEMMSL * SEMMNI)

iv.) semopm - This kernel parameter is used to control the number of semaphore operations that can be performed per semop system call. The semop system call (function) provides the ability to do operations for multiple semaphores with one semop system call. A semaphore set can have a maximum of semmsl semaphores per semaphore set, so it is recommended to set semopm equal to semmsl in some situations. Oracle recommends setting semopm to a value of no less than 100.
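All four semaphore values live in the single kernel.sem setting and can be changed together in one command; a sketch using the values recommended later in this article:

# /sbin/sysctl -w kernel.sem="250 32000 100 128"

The four values are, in order: semmsl, semmns, semopm, and semmni.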

File Handles : When configuring the Linux server, it is critical to ensure that the maximum number of file handles is large enough. The setting for file handles denotes the number of open files that you can have on the Linux system. Use the following command to determine the maximum number of file handles for the entire system:

# cat /proc/sys/fs/file-max
102312

Oracle recommends that the file handles for the entire system be set to at least 65536. We can query the current usage of file handles by using the following:

# cat /proc/sys/fs/file-nr
3072 0 102312

The file-nr file displays three parameters: total allocated file handles, currently used file handles, and the maximum file handles that can be allocated. If we need to increase the value in /proc/sys/fs/file-max, then make sure that the ulimit is set properly. Usually for Linux 2.4 and 2.6 it is set to unlimited. Verify the ulimit setting by issuing the ulimit command:

# ulimit
unlimited
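To raise the system-wide limit without rebooting, the value can be set directly; the figure below simply matches the recommendation above:

# /sbin/sysctl -w fs.file-max=65536

The per-process limit can be checked with ulimit -n and, if needed, raised for the oracle user in /etc/security/limits.conf.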

IP Local Port Range : Oracle strongly recommends setting the local port range ip_local_port_range for outgoing messages to "1024 65000", which is needed for high-usage systems. This kernel parameter defines the local port range for TCP and UDP traffic to choose from. The default value for ip_local_port_range is ports 32768 through 61000, which is inadequate for a successful Oracle configuration. Use the following command to determine the value of ip_local_port_range:

# cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
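The range can be changed at runtime as shown below, and persisted via the /etc/sysctl.conf entry given later in this section:

# /sbin/sysctl -w net.ipv4.ip_local_port_range="1024 65000"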

Networking Settings : With Oracle 9.2.0.1 and later, Oracle makes use of UDP as the default protocol on Linux for inter-process communication (IPC), such as Cache Fusion and Cluster Manager buffer transfers between instances within the RAC cluster.

Oracle strongly suggests adjusting the default and maximum receive buffer size (SO_RCVBUF socket option) to 1MB and the default and maximum send buffer size (SO_SNDBUF socket option) to 256KB. The receive buffers are used by TCP and UDP to hold received data until it is read by the application. With TCP, the receive buffer cannot overflow because the peer is not allowed to send data beyond the buffer size window. With UDP, however, datagrams are discarded if they do not fit in the socket receive buffer, so a fast sender can overwhelm a slow receiver. Use the following commands to determine the current buffer size (in bytes) of each of the IPC networking parameters:

# cat /proc/sys/net/core/rmem_default
109568
# cat /proc/sys/net/core/rmem_max
131071
# cat /proc/sys/net/core/wmem_default
109568
# cat /proc/sys/net/core/wmem_max
131071

Setting Kernel Parameters for Oracle

If the value of any kernel parameter is different to the recommended value, it will need to be modified. For this article, I identified and provide the following values that will need to be added to the /etc/sysctl.conf file, which is used during the boot process:

kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144

After adding the above lines to the /etc/sysctl.conf file, they persist each time the system reboots. If we would like to apply these kernel parameter changes to the current system without having to first reboot, enter the following command:

# /sbin/sysctl -p

HOW ORACLE WORKS?

An instance is currently running on the computer that is executing Oracle, called the database server.

A computer running an application (the local machine) runs the application in a user process.

The client application attempts to establish a connection to the server using the proper Net8 driver.

When the Oracle server detects the connection request from the client, it checks the client's authentication. If authentication passes, the Oracle server creates a (dedicated) server process on behalf of the user process. The user then executes a SQL statement and commits the transaction. For example, the user changes a name in a row of a table. The server process receives the statement and checks the shared pool for any shared SQL area that contains an identical SQL statement. If a shared SQL area is found, the server process checks the user's access privileges to the requested data and the previously existing shared SQL area is used to process the statement; if not, a new shared SQL area is allocated for the statement so that it can be parsed and processed. The server process retrieves any necessary data values from the actual datafile or those stored in the system global area. The server process modifies data blocks in the system global area. The DBWn process writes modified blocks permanently to disk when doing so is efficient. Because the transaction committed, the LGWR process immediately records the transaction in the online redo log file. If the transaction is successful, the server process sends a message across the network to the application. If it is not successful, an appropriate error message is transmitted. Throughout this entire procedure, the other background processes run, watching for conditions that require intervention.
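A minimal example of the kind of statement being traced through the steps above, assuming a hypothetical emp table:

SQL> UPDATE emp SET ename = 'SMITH' WHERE empno = 7369;
1 row updated.

SQL> COMMIT;
Commit complete.

The UPDATE is parsed in the shared pool, the affected block is modified in the buffer cache, and the COMMIT causes LGWR to write the transaction's redo to the online redo log.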

Basics of Oracle Architecture

What is An Oracle Database?

Basically, there are two main components of Oracle: the database instance and the database itself. An instance consists of some memory structures (SGA) and the background processes.

Instance

An instance consists of the memory structures and background processes. The memory structure itself consists of the System Global Area (SGA) and the Program Global Area (PGA). The mandatory background processes are Database Writer (DBWn), Log Writer (LGWR), Checkpoint (CKPT), System Monitor (SMON), and Process Monitor (PMON). Optional background processes include Archiver (ARCn), Recoverer (RECO), etc.

System Global Area

The SGA is the primary memory structure. This area is broken into several parts: the Buffer Cache, Shared Pool, Redo Log Buffer, Large Pool, and Java Pool.

Buffer Cache

The buffer cache is used to store copies of data blocks retrieved from datafiles. That is, when a user retrieves data from the database, the data is stored in the buffer cache. Its size can be manipulated via the DB_CACHE_SIZE parameter in the init.ora initialization parameter file.

Shared Pool

The shared pool is broken into two smaller memory areas: the Library Cache and the Dictionary Cache. The library cache is used to store information about the commonly used SQL and PL/SQL statements and is managed by a Least Recently Used (LRU) algorithm. It also enables the sharing of those statements among users. The dictionary cache, on the other hand, is used to store information about object definitions in the database, such as columns, tables, indexes, users, privileges, etc.

The shared pool size can be set via the SHARED_POOL_SIZE parameter in the init.ora initialization parameter file.

Redo Log Buffer

Each DML statement (insert, update, and delete) executed by users generates a redo entry. What is a redo entry? It is information about all data changes made by users. The redo entry is stored in the redo log buffer before it is written into the redo log files. To manipulate the size of the redo log buffer, you can use the LOG_BUFFER parameter in the init.ora initialization parameter file.

Large Pool

The large pool is an optional area of memory in the SGA. It is used to relieve the burden placed on the shared pool. It is also used for I/O processes. The large pool size can be set by the LARGE_POOL_SIZE parameter in the init.ora initialization parameter file.

Java Pool

As its name suggests, the Java pool is used to service parsing of Java commands. Its size can be set by the JAVA_POOL_SIZE parameter in the init.ora initialization parameter file.
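To see how these SGA components are currently sized in a running instance, a query along these lines can be used (a sketch; the sizes reported will vary by system):

SQL> SELECT component, current_size FROM v$sga_dynamic_components;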

Oracle Background Processes

Oracle background processes are the processes behind the scenes that work together with the memory structures.

DBWn

The database writer (DBWn) process is used to write data from the buffer cache into the datafiles. Historically, the database writer was named DBWR. But since some Oracle versions allow us to have more than one database writer, the name was changed to DBWn, where n is a number 0 to 9.

LGWR

The log writer (LGWR) process is similar to DBWn. It writes the redo entries from the redo log buffer into the redo log files.

CKPT

Checkpoint (CKPT) is a process that signals DBWn to write the data in the buffer cache into the datafiles. It also updates the datafile and control file headers when a log file switch occurs.

SMON

The System Monitor (SMON) process is used to recover from a system crash or instance failure by applying the entries in the redo log files to the datafiles.

PMON

The Process Monitor (PMON) process is used to clean up after failed processes by rolling back their transactions and releasing other resources.
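The background processes actually running in an instance can be listed with a query such as the following (a sketch; the paddr filter excludes processes that are defined but not started):

SQL> SELECT name, description FROM v$bgprocess WHERE paddr <> '00';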

Database

We can break the database up into two main structures: logical structures and physical structures.

Logical Structures

The logical units are tablespace, segment, extent, and data block.

Tablespace

A tablespace is a logical grouping of database objects. A database must have one or more tablespaces. For example, a database might contain three tablespaces: the SYSTEM tablespace, Tablespace 1, and Tablespace 2. A tablespace is composed of one or more datafiles.

Segment

A tablespace is further broken into segments. A segment is used to store objects of the same type. That is, every table in the database is stored in its own segment (named a Data Segment) and every index in the database is also stored in its own segment (named an Index Segment). The other segment types are the Temporary Segment and the Rollback Segment.

Extent

A segment is further broken into extents. An extent consists of one or more data blocks. When a database object grows, an extent is allocated. Unlike a tablespace or a segment, an extent cannot be named.

Data Block

A data block is the smallest unit of storage in the Oracle database. The data block size is a specific number of bytes, and every block within a tablespace has the same size.
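To see this hierarchy for a real object, a query like the one below can be used (a sketch, assuming a table named EMP owned by SCOTT):

SQL> SELECT segment_name, extent_id, blocks, bytes FROM dba_extents WHERE owner = 'SCOTT' AND segment_name = 'EMP';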

Physical Structures

The physical structures are structures of an Oracle database (in this case the disk files) that are not directly manipulated by users. The physical structure consists of datafiles, redo log files, and control files.

Datafiles

A datafile is a file that corresponds to a tablespace. One datafile can be used by only one tablespace, but one tablespace can have more than one datafile.

Redo Log Files

Redo log files are the files that store the redo entries generated by DML statements. They can be used for recovery processes.

Control Files

Control files are used to store information about the physical structure of the database, such as datafile sizes and locations, redo log file locations, etc.

Starting up a database

First Stage: The Oracle engine starts an Oracle instance

When Oracle starts an instance, it reads the initialization parameter file to determine the values of initialization parameters. Then, it allocates an SGA, which is a shared area of memory used for database information, and creates background processes. At this point, no database is associated with these memory structures and processes.

Second Stage: Mount the Database

To mount the database, the instance finds the database control files and opens them. Control files are specified in the CONTROL_FILES initialization parameter in the parameter file used to start the instance. Oracle then reads the control files to get the names of the database's datafiles and redo log files.

At this point, the database is still closed and is accessible only to the database administrator. The database administrator can keep the database closed while completing specific maintenance operations. However, the database is not yet available for normal operations.

Final Stage: Database open for normal operation

Opening a mounted database makes it available for normal database operations. Usually, a database administrator opens the database to make it available for general use.

When you open the database, Oracle opens the online datafiles and online redo log files. If a tablespace was offline when the database was previously shut down, the tablespace and its corresponding datafiles will still be offline when you reopen the database.

If any of the datafiles or redo log files are not present when you attempt to open the database, then Oracle returns an error. You must perform recovery on a backup of any damaged or missing files before you can open the database.
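These three stages map directly onto SQL*Plus commands; a minimal sketch of stepping through them one at a time as SYSDBA:

SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;

A plain STARTUP performs all three stages in a single step.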

Open a Database in Read-Only Mode

You can open any database in read-only mode to prevent its data from being modified by user transactions. Read-only mode restricts database access to read-only transactions, which cannot write to the datafiles or to the redo log files.

Disk writes to other files, such as control files, operating system audit trails, trace files, and alert files, can continue in read-only mode. Temporary tablespaces for sort operations are not affected by the database being open in read-only mode. However, you cannot take permanent tablespaces offline while a database is open in read-only mode. Also, job queues are not available in read-only mode.

Read-only mode does not restrict database recovery or operations that change the database's state without generating redo data. For example, in read-only mode:

* Datafiles can be taken offline and online
* Offline datafiles and tablespaces can be recovered
* The control file remains available for updates about the state of the database
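To open a database read-only as described above, it is first mounted and then opened with the READ ONLY clause; for example:

SQL> STARTUP MOUNT
SQL> ALTER DATABASE OPEN READ ONLY;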

Shutdown the database

The three steps to shutting down a database and its associated instance are:

* Close the database.
* Unmount the database.
* Shut down the instance.
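All three steps are normally carried out by a single SQL*Plus command issued as SYSDBA; for example:

SQL> SHUTDOWN IMMEDIATE

The NORMAL, TRANSACTIONAL, and ABORT options differ only in how existing sessions and in-flight transactions are handled.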

Close a Database

When you close a database, Oracle writes all database data and recovery data in the SGA to the datafiles and redo log files, respectively. Next, Oracle closes all online datafiles and online redo log files. At this point, the database is closed and inaccessible for normal operations. The control files remain open after a database is closed but still mounted.

Close the Database by Terminating the Instance

In rare emergency situations, you can terminate the instance of an open database to close and completely shut down the database instantaneously. This process is fast, because the operation of writing all data in the buffers of the SGA to the datafiles and redo log files is skipped. The subsequent reopening of the database requires recovery, which Oracle performs automatically.

Unmount a Database

After the database is closed, Oracle unmounts the database to disassociate it from the instance. At this point, the instance remains in the memory of your computer.

After a database is unmounted, Oracle closes the control files of the database.

Shut Down an Instance

The final step in database shutdown is shutting down the instance. When you shut down an instance, the SGA is removed from memory and the background processes are terminated.

Abnormal Instance Shutdown

In unusual circumstances, shutdown of an instance might not occur cleanly; all memory structures might not be removed from memory or one of the background processes might not be terminated. When remnants of a previous instance exist, a subsequent instance startup most likely will fail. In such situations, the database administrator can force the new instance to start up by first removing the remnants of the previous instance and then starting a new instance, or by issuing a SHUTDOWN ABORT statement in Enterprise Manager.

Managing an Oracle Instance

When Oracle engine starts an instance, it reads the initialization parameter file to determine the values of initialization parameters. Then, it allocates an SGA and creates background processes. At this point, no database is associated with these memory structures and processes.

Type of initialization file:

Static (PFILE)                   Persistent (SPFILE)
--------------                   -------------------
Text file                        Binary file
Modified with an OS editor       Cannot be modified directly
Modifications made manually      Maintained by the server

Initialization parameter file content:

* Instance parameters
* Name of the database
* Memory structure of the SGA
* Name and location of control files
* Information about undo segments
* Location of udump, bdump and cdump files

Creating an SPFILE:

CREATE SPFILE = '..ORA'
FROM PFILE = '..ORA';

Note:

* Requires the SYSDBA privilege.
* Can be executed before or after instance startup.

Oracle Background Processes

An Oracle instance runs two types of processes

Server Process

Background Process

Before doing any work, a user must connect to an instance. When a user logs on to the Oracle server, the Oracle engine creates a process called a server process. The server process communicates with the Oracle instance on behalf of the user process.

Each background process is useful for a specific purpose and its role is well defined.

Background processes are invoked automatically when the instance is started.

Database Writer (DBWn)

Process Name: DBW0 through DBW9 and DBWa through DBWj

Max Processes: 20

This process writes the dirty buffers for the database buffer cache to data files. One database writer process is sufficient for most systems; more can be configured if essential. The initialisation parameter, DB_WRITER_PROCESSES, specifies the number of database writer processes to start.

The DBWn process writes dirty buffer to disk under the following conditions:

When a checkpoint is issued.

When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers.

Every 3 seconds.

When we place a normal or temporary tablespace offline or in read-only mode.

When we drop or truncate a table.

When we put a tablespace in backup mode.

Log Writer (LGWR)

Process Name: LGWR

Max Processes: 1

The log writer process writes data from the redo log buffers to the redo log files on disk.

The writer is activated under the following conditions:

*When a transaction is committed, a System Change Number (SCN) is generated and tagged to it. Log writer puts a commit record in the redo log buffer and writes it to disk immediately along with the transaction's redo entries.

*Every 3 seconds.

*When the redo log buffer is 1/3 full.

*When DBWn signals the writing of redo records to disk. All redo records associated with changes in the block buffers must be written to disk first (The write-ahead protocol). While writing dirty buffers, if the DBWn process finds that some redo information has not been written, it signals the LGWR to write the information and waits until the control is returned.

*Log writer will write synchronously to the redo log groups in a circular fashion. If any damage is identified with a redo log file, the log writer will log an error in the LGWR trace file and the system Alert Log. Sometimes, when additional redo log buffer space is required, the LGWR will even write uncommitted redo log entries to release the held buffers. LGWR can also use group commits (multiple committed transaction's redo entries taken together) to write to redo logs when a database is undergoing heavy write operations.
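The current redo log groups and their states can be inspected with a query such as this (a sketch; output depends on your log configuration):

SQL> SELECT group#, sequence#, status FROM v$log;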

The log writer must always be running for an instance.

System Monitor

Process Name: SMON

Max Processes: 1

This process is responsible for instance recovery, if necessary, at instance startup. SMON also cleans up temporary segments that are no longer in use. SMON wakes up about every 5 minutes to perform housekeeping activities. SMON must always be running for an instance.

Process Monitor

Process Name: PMON

Max Processes: 1

This process is responsible for performing recovery if a user process fails. It will rollback uncommitted transactions. PMON is also responsible for cleaning up the database buffer cache and freeing resources that were allocated to a process. PMON also registers information about the instance and dispatcher processes with network listener.

PMON wakes up every 3 seconds to perform housekeeping activities. PMON must always be running for an instance.

Checkpoint Process

Process Name: CKPT

Max processes: 1

Checkpoint process signals the synchronization of all database files with the checkpoint information. It ensures data consistency and faster database recovery in case of a crash.

CKPT ensures that all database changes present in the buffer cache at that point are written to the data files; the actual writing is done by the Database Writer process. The datafile headers and the control files are updated with the latest SCN (when the checkpoint occurred); this update is performed by the CKPT process itself.

The CKPT process is invoked under the following conditions:

When a log switch is done.

When the time specified by the initialization parameter LOG_CHECKPOINT_TIMEOUT exists between the incremental checkpoint and the tail of the log; this is in seconds.

When the number of blocks specified by the initialization parameter LOG_CHECKPOINT_INTERVAL exists between the incremental checkpoint and the tail of the log; these are OS blocks.

The number of buffers specified by the initialization parameter FAST_START_IO_TARGET required to perform roll-forward is reached.

Oracle 9i onwards, when the time specified by the initialization parameter FAST_START_MTTR_TARGET is reached; this is in seconds and specifies the time required for a crash recovery. The parameter FAST_START_MTTR_TARGET replaces LOG_CHECKPOINT_INTERVAL and FAST_START_IO_TARGET, but these parameters can still be used.

When the ALTER SYSTEM SWITCH LOGFILE command is issued.

When the ALTER SYSTEM CHECKPOINT command is issued.

Incremental checkpoints initiate the writing of recovery information to datafile headers and controlfiles. Database writer is not signaled to perform buffer cache flushing activity here.

Archiver

Process Name: ARC0 through ARC9

Max Processes: 10

The ARCn process is responsible for writing the online redo log files to the mentioned archive log destination after a log switch has occurred. ARCn is present only if the database is running in archivelog mode and automatic archiving is enabled. The log writer process is responsible for starting multiple ARCn processes when the workload increases. Unless ARCn completes the copying of a redo log file, it is not released to log writer for overwriting.
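Whether the database is running in archivelog mode can be confirmed from SQL*Plus, for example:

SQL> ARCHIVE LOG LIST

This reports the current log mode, whether automatic archival is enabled, and the archive destination.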

The number of Archiver processes that can be invoked initially is specified by the initialization parameter LOG_ARCHIVE_MAX_PROCESSES. The actual number of Archiver processes in use may vary based on the workload.

Lock Monitor

Process Name: LMON

Max Processes: 1

Meant for Parallel Server setups, Lock Monitor manages global locks and resources. It handles the redistribution of instance locks whenever instances are started or shut down. Lock Monitor also recovers instance lock information prior to the instance recovery process. Lock Monitor co-ordinates with the Process Monitor to recover dead processes that hold instance locks.

Lock Processes

Process Name: LCK0 through LCK9

Max Processes: 10

Meant for Parallel Server setups, the instance locks that are used to share resources between instances are held by the lock processes.

Block Server Process

Process Name: BSP0 through BSP9

Max processes: 10

Meant for Parallel Server setups, Block Server Processes have to do with providing a consistent read image of a buffer that is requested by a process of another instance, in certain circumstances.

Queue Monitor

Process Name: QMN0 through QMN9

Max Processes: 10

This is the Advanced Queuing Time manager process. QMNn monitors the message queues. Failure of a QMNn process will not cause the instance to fail.

Event Monitor

Process Name: EMN0/EMON

Max Processes: 1

This process is also related to Advanced Queuing, and is meant for allowing a publish/subscribe style of messaging between applications.

Recoverer

Process Name: RECO

Max processes: 1

Intended for distributed recovery. All in-doubt transactions are recovered by this process in the distributed database setup. RECO will connect to the remote database to resolve pending transactions.

Job Queue Processes

Process Name: J000 through J999 (Originally called SNPn processes)

Max Processes: 1000

Job queue processes carry out batch processing. All scheduled jobs are executed by these processes. The initialization parameter JOB_QUEUE_PROCESSES specifies the maximum job processes that can be run concurrently. If a job fails with some Oracle error, it is recorded in the alert file and a process trace file is generated. Failure of a Job queue process will not cause the instance to fail.

Dispatcher

Process Name: Dnnn

Max Processes: -

Intended for Shared Server setups (MTS). Dispatcher processes listen to and receive requests from connected sessions and place them in the request queue for further processing. Dispatcher processes also pick up outgoing responses from the result queue and transmit them back to the clients. Dnnn are mediators between the client processes and the shared server processes. The maximum number of Dispatcher processes can be specified using the initialization parameter MAX_DISPATCHERS.

Shared Server Processes

Process Name: Snnn

Max Processes: -

Intended for Shared Server setups (MTS). These processes pick up requests from the call request queue, process them, and then return the results to a result queue. The number of shared server processes to be created at instance startup can be specified using the initialization parameter SHARED_SERVERS.

Parallel Execution Slaves

Process Name: Pnnn

Max Processes: -

These processes are used for parallel processing. They can be used for parallel execution of SQL statements or recovery. The maximum number of parallel processes that can be invoked is specified by the initialization parameter PARALLEL_MAX_SERVERS.

Trace Writer

Process Name: TRWR

Max Processes: 1

Trace writer writes trace files from an Oracle internal tracing facility.

Input/Output Slaves

Process Name: Innn

Max Processes: -

These processes are used to simulate asynchronous I/O on platforms that do not support it. The initialization parameter DBWR_IO_SLAVES is set for this purpose.

Wakeup Monitor Process

Process Name: WMON

Max Processes: -

This process was available in older versions of Oracle to alarm other processes that are suspended while waiting for an event to occur. This process is obsolete and has been removed.

Conclusion

With every release of Oracle, new background processes have been added and some existing ones modified. These processes are the key to the proper working of the database. Any issues related to background processes should be monitored and analyzed from the trace files generated and the alert log.

Create Stand-alone 10g Database Manually

Step 1
Create an initSID.ora (Example: initTEST.ora) file in the $ORACLE_HOME/dbs/ directory.
Example: $ORACLE_HOME/dbs/initTEST.ora
Put the following entries in the initTEST.ora file:

##############################################################
background_dump_dest=
core_dump_dest=
user_dump_dest=
control_files = (//control1.ctl,/ /control2.ctl,/ /control3.ctl)
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
####################################################

Step 2
Create a password file:

$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd.ora password= entries=5

Step 3
Set your ORACLE_SID:

$ export ORACLE_SID=test
$ export ORACLE_HOME=/

Step 4
Run the following sqlplus command to connect to the database and start up the instance:

$ sqlplus '/ as sysdba'
SQL> startup nomount

Step 5
Create the database using the following script:

create database test
logfile group 1 ('/redo1.log') size 100M,
        group 2 ('/redo2.log') size 100M,
        group 3 ('/redo3.log') size 100M
character set WE8ISO8859P1
national character set utf8
datafile '/system.dbf' size 500M autoextend on next 10M maxsize unlimited extent management local
sysaux datafile '/sysaux.dbf' size 100M autoextend on next 10M maxsize unlimited
undo tablespace undotbs1 datafile '/undotbs1.dbf' size 100M
default temporary tablespace temp tempfile '/temp01.dbf' size 100M;

Step 6
Run the scripts necessary to build views, synonyms, etc.:
CATALOG.SQL -- creates the views of data dictionary tables and the dynamic performance views.
CATPROC.SQL -- establishes the usage of PL/SQL functionality and creates many of the PL/SQL Oracle supplied packages.

Create 10g OMF Database Manually

Step 1
Create an initSID.ora (Example: initTEST.ora) file in the $ORACLE_HOME/dbs/ directory.
Example: $ORACLE_HOME/dbs/initTEST.ora
Put the following entries in the initTEST.ora file:

##############################################################
background_dump_dest=
core_dump_dest=
user_dump_dest=
control_files = (//control1.ctl,/ /control2.ctl,/ /control3.ctl)
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
db_create_file_dest = / #OMF
db_create_online_log_dest_1 = / #OMF
db_create_online_log_dest_2 = / #OMF
db_recovery_file_dest = / #OMF
################################################################

Step 2
Create a password file:

$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd.ora password= entries=5

Step 3
Set your ORACLE_SID:

export ORACLE_SID=test
export ORACLE_HOME=/

Step 4
Run the following sqlplus command to connect to the database and start up the instance:

sqlplus '/ as sysdba'
SQL> startup nomount

Step 5
Create the database:

create database test
character set WE8ISO8859P1
national character set utf8
undo tablespace undotbs1
default temporary tablespace temp;

Step 6
Run catalog and catproc:

@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql

Managing Data Files

What is a data file?
Data files are physical files of the OS that store the data of all logical structures in the database. At least one data file must be created for each tablespace.

How to determine the number of datafiles?
At least one datafile is required for the SYSTEM tablespace. We can create separate datafiles for other tablespaces. When we create a database, MAXDATAFILES may or may not be specified in the CREATE DATABASE statement clause. Oracle assigns db_files a default value of 200. We can also specify the number of datafiles in the init file. When we start the Oracle instance, the DB_FILES initialization parameter reserves space in the SGA for datafile information and sets the maximum number of datafiles. We can change the value of DB_FILES (by changing the initialization parameter setting), but the new value does not take effect until you shut down and restart the instance.

Important: If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit. For example, if the init parameter db_files is set to 2, then you cannot add more than 2 datafiles to your database. If the value of DB_FILES is too high, memory is unnecessarily consumed. When you issue CREATE DATABASE or CREATE CONTROLFILE statements, the MAXDATAFILES parameter specifies an initial size. However, if you attempt to add a new file whose number is greater than MAXDATAFILES, but less than or equal to DB_FILES, the control file will expand automatically so that the datafiles section can accommodate more files.

Note: If you add new datafiles to a tablespace and do not fully specify the filenames, the database creates the datafiles in the default database directory. Oracle recommends you always specify a fully qualified name for a datafile. Unless you want to reuse existing files, make sure the new filenames do not conflict with other files. Old files that have been previously dropped will be overwritten.

How to add a datafile to an existing tablespace?

alter tablespace add datafile '/............../......./file01.dbf' size 10m autoextend on;

How to resize a datafile?

alter database datafile '/............../......./file01.dbf' resize 100M;

How to bring a datafile online and offline?

alter database datafile '/............../......./file01.dbf' online;
alter database datafile '/............../......./file01.dbf' offline;

How to rename a datafile in a single tablespace?

Step 1: Take the tablespace that contains the datafiles offline. The database must be open.

alter tablespace offline normal;

Step 2: Rename the datafiles using the operating system.

Step 3: Use the ALTER TABLESPACE statement with the RENAME DATAFILE clause to change the filenames within the database.

alter tablespace rename datafile '/...../..../..../user.dbf' to '/..../..../.../users1.dbf';

Step 4: Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

How to relocate a datafile in a single tablespace?

Step 1: Use the following query to find the specific file name or size.

select file_name, bytes from dba_data_files where tablespace_name = '';

Step 2: Take the tablespace containing the datafiles offline:

alter tablespace offline normal;

Step 3: Copy the datafiles to their new locations and rename them using the operating system.

Step 4: Rename the datafiles within the database.

ALTER TABLESPACE RENAME DATAFILE
'/u02/oracle/rbdb1/users01.dbf', '/u02/oracle/rbdb1/users02.dbf'
TO '/u03/oracle/rbdb1/users01.dbf', '/u04/oracle/rbdb1/users02.dbf';

Step 5: Back up the database.
After making any structural changes to a database, always perform an immediate and complete backup.

How to rename and relocate datafiles in multiple tablespaces?

Step 1: Ensure that the database is mounted but closed.

Step 2: Copy the datafiles to be renamed to their new locations and new names, using the operating system.

Step 3: Use ALTER DATABASE to rename the file pointers in the database control file.

ALTER DATABASE RENAME FILE
'/u02/oracle/rbdb1/sort01.dbf', '/u02/oracle/rbdb1/user3.dbf'
TO '/u02/oracle/rbdb1/temp01.dbf', '/u02/oracle/rbdb1/users03.dbf';

Step 4: Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

How to drop a datafile from a tablespace

Important: Oracle does not provide an interface for dropping datafiles in the same way you would drop a schema object such as a table or a user.

Reasons why you might want to remove a datafile from a tablespace:
* You may have mistakenly added a file to a tablespace.
* You may have made the file much larger than intended and now want to remove it.
* You may be involved in a recovery scenario and the database won't start because a datafile is missing.

Important: Once the DBA creates a datafile for a tablespace, the datafile cannot be removed. If you want to do any critical operation like dropping datafiles, ensure you have a full backup of the database.

Step 1: Determine how many datafiles make up a tablespace. To determine how many and which datafiles make up a tablespace, you can use the following query:

SELECT file_name, tablespace_name FROM dba_data_files WHERE tablespace_name = '';

Case 1: If you have only one datafile in the tablespace and you want to remove it, you can simply drop the entire tablespace using the following:

DROP TABLESPACE INCLUDING CONTENTS;

The above command will remove the tablespace, the datafile, and the tablespace's contents from the data dictionary.

Important: Oracle will not drop the physical datafile after the DROP TABLESPACE command. This action needs to be performed at the operating system level.

Case 2: If you have more than one datafile in the tablespace, and you want to remove all datafiles and do not need the information contained in that tablespace, then use the same command as above:

DROP TABLESPACE INCLUDING CONTENTS;

Case 3: If you have more than one datafile in the tablespace and you want to remove only one or two (not all) datafiles in the tablespace, or you want to keep the objects that reside in the other datafile(s) which are part of this tablespace, then you must export all the objects inside the tablespace.

Step 1: Gather information on the current datafiles within the tablespace by running the following query in SQL*Plus:

SELECT file_name, tablespace_name FROM dba_data_files WHERE tablespace_name = '';

Step 2: You now need to identify which objects are inside the tablespace for the purpose of running an export. To do this, run the following query:

SELECT owner, segment_name, segment_type FROM dba_segments WHERE tablespace_name = '';

Step 3: Now, export all the objects that you wish to keep.

Step 4: Once the export is done, issue the DROP TABLESPACE INCLUDING CONTENTS command.

Step 5: Delete the datafiles belonging to this tablespace using the operating system.

Step 6: Recreate the tablespace with the datafile(s) desired, then import the objects into that tablespace.

Case 4: If you do not want to follow any of these procedures, there are other things that can be done besides dropping the tablespace.
If the reason you wanted to drop the file is because you mistakenly created the file of the wrong size, then consider using the RESIZE command. If you really added the datafile by mistake, and Oracle has not yet allocated any space within this datafile, then you can use the ALTER DATABASE DATAFILE RESIZE; command to make the file smaller than 5 Oracle blocks. If the datafile is resized to smaller than 5 Oracle blocks, then it will never be considered for extent allocation. At some later date, the tablespace can be rebuilt to exclude the incorrect datafile.

Important: The ALTER DATABASE DATAFILE OFFLINE DROP command is not meant to allow you to remove a datafile. What the command really means is that you are offlining the datafile with the intention of dropping the tablespace.

Important: If you are running in archivelog mode, you can also use ALTER DATABASE DATAFILE OFFLINE; instead of OFFLINE DROP. Once the datafile is offline, Oracle no longer attempts to access it, but it is still considered part of that tablespace. This datafile is marked only as offline in the controlfile and there is no SCN comparison done between the controlfile and the datafile during startup (this also allows you to start up a database with a non-critical datafile missing). The entry for that datafile is not deleted from the controlfile, to give us the opportunity to recover that datafile.

Managing Control Files

A control file is a small binary file that records the physical structure of the database: the database name, the names and locations of associated datafiles and online redo log files, the timestamp of the database creation, the current log sequence number, and checkpoint information.

Note: Without the control file, the database cannot be mounted. You should create two or more copies of the control file during database creation.

Role of the Control File: When the database instance mounts, Oracle recognizes all files listed in the control file and opens them. Oracle writes to and maintains all listed control files during database operation.

Important: If you do not specify files for CONTROL_FILES before database creation, and you are not using the Oracle Managed Files feature, Oracle creates a control file in the :\ORACLE_HOME\DATABASE\ location and uses a default filename. The default name is operating system specific. Every Oracle database should have at least two control files, each stored on a different disk. If a control file is damaged due to a disk failure, the associated instance must be shut down. Oracle writes to all filenames listed for the initialization parameter CONTROL_FILES in the database's initialization parameter file. The first file listed in the CONTROL_FILES parameter is the only file read by the Oracle database server during database operation. If any of the control files become unavailable during database operation, the instance becomes inoperable and should be aborted.

How to create a control file at the time of database creation: The initial control files of an Oracle database are created when you issue the CREATE DATABASE statement. The names of the control files are specified by the CONTROL_FILES parameter in the initialization parameter file used during database creation.

How to Create Additional Copies, Rename, and Relocate Control Files

Step 1: Shut down the database.
Step 2: Copy an existing control file to a different location, using operating system commands.
Step 3: Edit the CONTROL_FILES parameter in the database's initialization parameter file to add the new control file's name, or to change the existing control filename.
Step 4: Restart the database.

When do you create new control files?
* All control files for the database have been permanently damaged and you do not have a control file backup.
* You want to change one of the permanent database parameter settings originally specified in the CREATE DATABASE statement. These settings include the database's name and the following parameters: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and MAXINSTANCES.

Steps for Creating New Control Files

Step 1: Make a list of all datafiles and online redo log files of the database.

SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'CONTROL_FILES';

Step 2: Shut down the database.
Step 3: Back up all datafiles and online redo log files of the database.
Step 4: Start up a new instance, but do not mount or open the database:

STARTUP NOMOUNT

Step 5: Create a new control file for the database using the CREATE CONTROLFILE statement.

Example: CREATE CONTROLFILE REUSE DATABASE "

Automatic Diagnostic Repository (ADR)

SQL> show parameter diag

NAME             TYPE    VALUE
---------------  ------- -----------------
diagnostic_dest  string  D:\ORACLE

The table below shows the new ADR locations of the diagnostic trace files:

Data                      Old location          ADR location
------------------------  --------------------  ---------------------
Core dump                 CORE_DUMP_DEST        $ADR_HOME/cdump
Alert log data            BACKGROUND_DUMP_DEST  $ADR_HOME/trace
Background process trace  BACKGROUND_DUMP_DEST  $ADR_HOME/trace
User process trace        USER_DUMP_DEST        $ADR_HOME/trace

We can use the V$DIAG_INFO view to list some important ADR locations such as ADR Base, ADR Home, Diagnostic Trace, Diagnostic Alert, Default Trace file, etc.

SQL> select * from v$diag_info;

INST_ID NAME                  VALUE
------- --------------------- ---------------------------
1       Diag Enabled          TRUE
1       ADR Base              d:\oracle
1       ADR Home              d:\oracle\diag\rdbms\noida\noida
1       Diag Trace            d:\oracle\diag\rdbms\noida\noida\trace
1       Diag Alert            d:\oracle\diag\rdbms\noida\noida\alert
1       Diag Incident         d:\oracle\diag\rdbms\noida\noida\incident
1       Diag Cdump            d:\oracle\diag\rdbms\noida\noida\cdump
1       Health Monitor        d:\oracle\diag\rdbms\noida\noida\hm
1       Active Problem Count  0
1       Active Incident Count 0

10 rows selected.

ADRCI (Automatic Diagnostic Repository Command Interpreter) : The ADR Command Interpreter (ADRCI) is a command-line tool that we use to manage Oracle Database diagnostic data. It is part of the fault diagnosability infrastructure introduced in Oracle Database Release 11g. ADRCI enables:
* Viewing diagnostic data within the Automatic Diagnostic Repository (ADR).
* Viewing Health Monitor reports.
* Packaging of incident and problem information into a zip file for transmission to Oracle Support.

Diagnostic data includes incident and problem descriptions, trace files, dumps, health monitor reports, alert log entries, and more.

ADRCI has a rich command set, and can be used in interactive mode or within scripts. In addition, ADRCI can execute scripts of ADRCI commands in the same way that SQL*Plus executes scripts of SQL and PL/SQL commands. To use ADRCI in interactive mode, enter the following command at the operating system command prompt:

C:\>adrci

ADRCI: Release 11.1.0.6.0 - Beta on Wed May 18 12:31:40 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ADR base = "d:\oracle"

To get the list of adrci commands, type the help command as below:

adrci> help
HELP [topic]
Available Topics:
CREATE REPORT
ECHO
EXIT
HELP
HOST
IPS
PURGE
RUN
SET BASE
SET BROWSER
SET CONTROL
SET ECHO
SET EDITOR
SET HOMES | HOME | HOMEPATH
SET TERMOUT
SHOW ALERT
SHOW BASE
SHOW CONTROL
SHOW HM_RUN
SHOW HOMES | HOME | HOMEPATH
SHOW INCDIR
SHOW INCIDENT
SHOW PROBLEM
SHOW REPORT
SHOW TRACEFILE
SPOOL

There are other commands intended to be used directly by Oracle; type "HELP EXTENDED" to see the list.

Viewing the Alert Log : The alert log is written as both an XML-formatted file and as a text file. We can view either format of the file with any text editor, or we can run an ADRCI command to view the XML-formatted alert log with the XML tags stripped. By default, ADRCI displays the alert log in your default editor. The following are variations on the SHOW ALERT command:

adrci> SHOW ALERT -TAIL

This displays the last portion of the alert log (the last 10 entries) in your terminal session.

adrci> SHOW ALERT -TAIL 50

This displays the last 50 entries in the alert log in your terminal session.

adrci> SHOW ALERT -TAIL -F

This displays the last 10 entries in the alert log, and then waits for more messages to arrive in the alert log. As each message arrives, it is appended to the display. This command enables you to perform "live monitoring" of the alert log. Press CTRL-C to stop waiting and return to the ADRCI prompt.

Here are a few examples:

adrci> show alert

Choose the alert log from the following homes to view:
1: diag\clients\user_neerajs\host_444208803_11
2: diag\clients\user_system\host_444208803_11
3: diag\clients\user_unknown\host_411310321_11
4: diag\rdbms\delhi\delhi
5: diag\rdbms\noida\noida
6: diag\tnslsnr\ramtech-199\listener
Q: to quit

Please select option: 4
Output the results to file: c:\docume~1\neeraj~1.ram\locals~1\temp\alert_932_4048_delhi_1.ado
'vi' is not recognized as an internal or external command, operable program or batch file.
Please select option: q

Since we are on a Windows platform, we don't have the vi editor, so we set the editor for Windows to, say, notepad:

adrci> set editor notepad
adrci> SHOW ALERT

Choose the alert log from the following homes to view:
1: diag\clients\user_neerajs\host_444208803_11
2: diag\clients\user_system\host_444208803_11
3: diag\clients\user_unknown\host_411310321_11
4: diag\rdbms\delhi\delhi
5: diag\rdbms\noida\noida
6: diag\tnslsnr\ramtech-199\listener
Q: to quit

Please select option: 4
Output the results to file: c:\docume~1\neeraj~1.ram\locals~1\temp\alert_916_956_noida_7.ado

Here it will open the alert log file; check the file as per our need. If we want to filter the alert log file, we can filter as below:

adrci> show alert -P "message_text LIKE '%ORA-600%'"

This displays only alert log messages that contain the string 'ORA-600'.

Choose the alert log from the following homes to view:
1: diag\clients\user_neerajs\host_444208803_11
2: diag\clients\user_system\host_444208803_11
3: diag\clients\user_unknown\host_411310321_11
4: diag\rdbms\delhi\delhi
5: diag\rdbms\noida\noida
6: diag\tnslsnr\ramtech-199\listener
Q: to quit

Please select option: 5

Here, there is no ORA-600 error in the alert log file, so the output is blank.

Finding Trace Files : ADRCI enables us to view the names of trace files that are currently in the automatic diagnostic repository (ADR). We can view the names of all trace files in the ADR, or we can apply filters to view a subset of names. For example, ADRCI has commands that enable us to:
* Obtain a list of trace files whose file name matches a search string.
* Obtain a list of trace files in a particular directory.
* Obtain a list of trace files that pertain to a particular incident.

The following statement lists the name of every trace file that has the string 'mmon' in its file name. The percent sign (%) is used as a wildcard character, and the search string is case sensitive.

adrci> SHOW TRACEFILE %mmon%

This statement lists trace file names ordered by timestamp, most recent first:

adrci> SHOW TRACEFILE -RT

This statement lists the names of all trace files related to incident number 1681:

adrci> SHOW TRACEFILE -I 1681

Viewing Incidents : The ADRCI SHOW INCIDENT command displays information about open incidents. For each incident, the incident ID, problem key, and incident creation time are shown. If the ADRCI homepath is set so that there are multiple current ADR homes, the report includes incidents from all of them.

adrci> SHOW INCIDENT

ADR Home = d:\oracle\diag\rdbms\noida\noida:
*******************************************************************
0 rows fetched

Purging Alert Log Content : The adrci command purge can be used to purge entries from the alert log. Note that this purge will only apply to the XML-based alert log and not the text file based alert log, which still has to be maintained using OS commands. The purge command takes its input in minutes and specifies the number of minutes for which records should be retained. So to purge all alert log entries older than 7 days, the following command will be used:

adrci> purge -age 10080 -type ALERT

ADR retention can be controlled with ADRCI. There is a retention policy for the ADR that allows specifying how long to keep the data. ADR incidents are controlled by two different policies:
* The incident metadata retention policy (default is 1 year).
* The incident files and dumps retention policy (default is one month).

We can change the retention policy using adrci; MMON purges expired ADR data automatically.

adrci> show control

The above command shows the shortp_policy and longp_policy, and these policies can be changed as below:

adrci> set control (SHORTP_POLICY = 360)
adrci> set control (LONGP_POLICY = 4380)

What Is an Spfile?

There are hundreds of instance parameters that define the way an instance operates. As an administrator you have to set each of these parameters correctly. All these parameters are stored in a file called the parameter file. These parameter files are also called initialization files, as they are needed for an instance to start up.

There are two kinds of parameter files: the parameter file (pfile) and the server parameter file (spfile).

Differences between an spfile and pfile

1. Spfiles are binary files, while pfiles are plain text files.

2. If you are using an spfile, instance parameters can be changed permanently using SQL*Plus commands (see the sketch after this list). If you are using a pfile, you have to edit the pfile with an editor to change values permanently.

3. Spfile names must be either spfileSID.ora or spfile.ora. Pfile names must be initSID.ora. (More information on spfile naming)
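A quick illustration of item 2: with an spfile in use, a parameter can be changed permanently from SQL*Plus in a single command (the parameter and value below are only examples):

SQL> alter system set open_cursors=400 scope=spfile;

System altered.

With a pfile, the same change would require opening the file in an editor and changing the open_cursors line by hand.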

How to find out if you are using pfile or spfile

If you are using an spfile, the "spfile" parameter will show the path of the spfile. Otherwise, the value of the "spfile" parameter will be null.

SQL> show parameter spfile ;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string

In the example above, the value is null, which means I am using a pfile.
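The same check can also be written as a query; here is a small sketch (DECODE returns 'pfile' when the value is NULL):

sql> SELECT DECODE(value, NULL, 'pfile', 'spfile') AS parameter_file_used
FROM v$parameter
WHERE name = 'spfile' ;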

Contents Of Parameter Files

Parameter files contain name-value pairs for instance parameters, one parameter=value entry per line.

$ cd $ORACLE_HOME/dbs/
$ less inittestdb.ora
testdb.__db_cache_size=1795162112
testdb.__java_pool_size=16777216
testdb.__large_pool_size=16777216
testdb.__oracle_base='/u01'#ORACLE_BASE set from environment
testdb.__pga_aggregate_target=1677721600
testdb.__sga_target=2516582400
testdb.__shared_io_pool_size=0
testdb.__shared_pool_size=637534208
testdb.__streams_pool_size=16777216
*.audit_file_dest='/u01/admin/testdb/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='+ORADATA/testdb/controlfile/current.475.758824101','+FRA/testdb/controlfile/current.257.758824101'
*.db_block_size=8192
*.db_create_file_dest='+ORADATA'
*.db_name='testdb'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=4227858432
**************** Output Truncated ***********************

Pfiles and spfiles have the same content, except that one is plain text while the other is binary.

How To Relocate an Spfile

Oracle will always search for parameter files under the "$ORACLE_HOME/dbs" folder. You also cannot change the name of a parameter file; the naming format is mandatory. (More information on parameter file naming format)

We usually don't want to change the path of the parameter file under normal circumstances, but if you are using RAC, you will have to place your spfile on shared storage, outside the usual "$ORACLE_HOME/dbs" folder.

1. Create a pfile.

$ vi initMYDB.ora

2. Enter the line below into the pfile. By writing this line, you are setting the "spfile" parameter to the new location of your spfile.

SPFILE='/new_location/spfileMYDB.ora'

3. Shut down your database.

sql> shutdown immediate ;

4. Copy the current spfile to the new location.

$ cp spfileMYDB.ora /new_location/spfileMYDB.ora

5. Start up your instance.

sql> startup ;

6. Verify that your database is using the spfile at the new location.

SQL> show parameter spfile ;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /new_location/spfileMYDB.ora

Transforming Pfile Into Spfile or Vice Versa

There are SQL commands to transform a pfile into an spfile and vice versa.

SQL> create spfile='/home/oracle/myspfile.ora' from pfile;

File created.

The command above will transform your current pfile into an spfile and store it at the location you specify (/home/oracle/myspfile.ora).

If you don't provide a location for the spfile, the server parameter file will be created at its default location ($ORACLE_HOME/dbs/spfileSID.ora).

SQL> create spfile from pfile ;

File created.

$ file $ORACLE_HOME/dbs/spfiletestdb.ora
/u01/app/oracle/11.2.0.2/dbs/spfiletestdb.ora: data

Now let's assume that you are using an spfile and create a pfile from it.

SQL> create pfile='/home/oracle/mypfile.ora' from spfile;

File created.

$ file /home/oracle/mypfile.ora
/home/oracle/mypfile.ora: ASCII text

The command above will transform your current spfile into a pfile and store it at the location you specify (/home/oracle/mypfile.ora).

One thing to mention: in fact, there does not even have to be a database or an instance to perform the transformation. SQL*Plus is the only required tool.
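As an illustration, both file names can even be given explicitly, so neither file has to be at its default location (the paths below are hypothetical):

SQL> create spfile='/tmp/spfiletest.ora' from pfile='/tmp/inittest.ora';

File created.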

Loss of an Spfile

If you lose your spfile (for example, you accidentally deleted it) while your instance is up, you can recreate the spfile. The parameter values were read by the instance at startup.

So, your instance knows all the values and can create a new spfile from them.

SQL> create spfile='/home/oracle/myspfile.ora' from memory ;

File created.

$ file /home/oracle/myspfile.ora
/home/oracle/myspfile.ora: data

If you lose your spfile and your database is down, you'll have to restore the spfile from a backup. That subject is related to RMAN (Recovery Manager) and is not covered in this article.

Instance Parameters

There are hundreds of instance parameters that determine the way an instance operates. As an administrator, you have to set each of these parameters correctly.

All these parameters are stored in a file called a parameter file. (More information on spfile)

How To View Parameters Of An Instance

You can query the V$SYSTEM_PARAMETER view to see the parameters of an instance.

sql> select name,value,description from V$SYSTEM_PARAMETER ;

NAME              VALUE  DESCRIPTION
lock_name_space          lock name space used for generating lock names for standby/clone database
processes         550    user processes
sessions          848    user and system sessions
timed_statistics  TRUE   maintain internal timing statistics

name => Name of the parameter.
value => Value of the parameter.
description => Describes what the parameter is about.

Modifying Parameters

1. Session-wide Parameters

You can change the value of a parameter session-wide using the "ALTER SESSION" command. The scope is limited to the session, not the instance; the change is valid only for the current session. At next login, you'll see that the parameter has been reset. The V$PARAMETER view shows the session-wide parameters.

sql> SELECT name, VALUE, isses_modifiable
FROM V$PARAMETER
WHERE NAME = 'nls_language' ;

NAME          VALUE    ISSES_MODIFIABLE
nls_language  TURKISH  TRUE

The value of "nls_language" parameter is "TURKISH". "ISSES_MODIFIABLE" column shows whether this parameter can be changed using "ALTER SESSION" command. In this case this is "TRUE" which means that I can change it.12345sql> alter session set nls_language='ENGLISH' ;sql> SELECT name, VALUE, isses_modifiableFROM V$PARAMETERWHERE NAME = 'nls_language' ;

NAME          VALUE    ISSES_MODIFIABLE
nls_language  ENGLISH  TRUE

2. Instance-wide Parameters

You can change the value of a parameter instance-wide using the "ALTER SYSTEM" command. The scope is instance-wide; every session is affected. When a user logs in, a session is created, and that session inherits its parameter values from the instance-wide parameter values. The V$SYSTEM_PARAMETER view shows the instance-wide parameters.

sql> SELECT name, VALUE, issys_modifiable
FROM V$SYSTEM_PARAMETER
WHERE NAME = 'db_recovery_file_dest_size' ;

NAME                        VALUE       ISSYS_MODIFIABLE
db_recovery_file_dest_size  4227858432  IMMEDIATE

ISSYS_MODIFIABLE => this column shows how a parameter change will affect the instance. If it is "IMMEDIATE", the change takes effect immediately; such parameters are called dynamic parameters. If the column is "FALSE", you will have to restart your instance for the change to take effect; such parameters are called static parameters.

SQL> alter system set db_recovery_file_dest_size=4000000000 scope=both;

SQL> select name,value,issys_modifiable from v$system_parameter where name='db_recovery_file_dest_size' ;

NAME                        VALUE       ISSYS_MODIFIABLE
db_recovery_file_dest_size  4000000000  IMMEDIATE
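Building on the ISSYS_MODIFIABLE column, here is a simple sketch that lists every static parameter of the instance (those that require a restart):

sql> SELECT name
FROM v$system_parameter
WHERE issys_modifiable = 'FALSE' ;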

Scope Option

While changing an instance-wide parameter using the "ALTER SYSTEM" command, you can also set the scope option to determine the scope of the change. The scope option can take the value "MEMORY", "SPFILE" or "BOTH".

MEMORY => If you are modifying a dynamic parameter, the change takes effect immediately for the current instance, but after a restart of the instance the change reverts. If you use "MEMORY", the change is temporary.

You cannot modify a static parameter with "MEMORY".

SQL> alter system set processes=300 scope=MEMORY;
alter system set processes=300 scope=MEMORY
*
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified

SPFILE => If the instance was started using an spfile (more information on how a database starts) and you set "SPFILE" for the scope option, then the change will be recorded in the spfile. However, the current instance will keep operating with the old value; the change will take effect only after a restart.

You cannot use "SPFILE" if the instance was started using a pfile.123456SQL> alter system set processes=300 scope=spfile;alter system set processes=300 scope=spfile*ERROR at line 1:ORA-32001: write to SPFILE requested but no SPFILE is in use

BOTH => If you use "BOTH" for the scope option with a dynamic parameter, the change takes effect immediately and is permanent.

Instance Parameters In RAC

In a RAC, every instance can have its own parameter values, but there can only be one shared spfile. The spfile entries have instance_name.parameter=value format. For example, in a 2-node RAC (MYDB1 and MYDB2) an spfile may contain entries such as:

MYDB1.processes=150
MYDB2.processes=230

If the value for a parameter is identical in all nodes, then instead of instance names, a star (*) can be used as the prefix.

*.processes=150

"SID" Option When Changing Parameters Using "ALTER SYSTEM" Command

In a RAC configuration you can change parameters with the "ALTER SYSTEM" command, but you have to specify which instances will be affected by the change.

You can specify which instances the change will apply to by setting the SID option. You can connect to any instance and change a parameter of any other instance from there. For example:

sql> alter system set processes=230 scope=spfile SID='MYDB2';

In the example above, I've set the "processes" parameter to 230 for instance MYDB2. To change a parameter for instance MYDB2, I don't even have to connect to MYDB2; I can do it from any node. But be careful: Oracle does not verify that an instance named "MYDB2" exists, so be sure you type the instance name correctly.

sql> alter system set processes=200 scope=spfile SID='*';

Here, all the instances in the RAC will have "200" for the parameter "processes".

Default Parameter Values

You don't have to set every parameter explicitly in the spfile. Those parameters which you haven't set will get their default values, which are determined by Oracle.

sql> select name,value,isdefault from v$system_parameter ;

NAME                       VALUE  ISDEFAULT
log_archive_max_processes  4      TRUE
open_cursors               300    FALSE

If a parameter is not defined in the spfile, its "ISDEFAULT" column will be "TRUE"; otherwise it will be "FALSE". I haven't set the "log_archive_max_processes" parameter, so it has its default value of 4. However, I have set the "open_cursors" parameter in the spfile, so its value is not a default.

$ strings spfileMYDB.ora | grep open_cursors
open_cursors=300

As seen above, the "open_cursors" parameter is set in spfile.

Resetting Default Values

You can always reset a parameter that you've set, using the "ALTER SYSTEM RESET" command. This command removes the entry from the spfile.

SQL> alter system reset open_cursors;

After restarting the instance, the "open_cursors" parameter is reverted to its default value.

sql> SELECT name, VALUE, isdefault
FROM v$system_parameter
WHERE name = 'open_cursors' ;

NAME          VALUE  ISDEFAULT
open_cursors  50     TRUE

$ strings spfileMYDB.ora | grep open_cursors

The "open_cursors" parameter is no longer in spfile.

Finding Modified Parameters

You can find out which parameters have been modified with the "ALTER SYSTEM" command since the instance started. There is a column named "ISMODIFIED" in the v$system_parameter view. If the parameter has been modified, it shows a value other than "FALSE"; otherwise it shows "FALSE".
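A sketch of such a query; any ISMODIFIED value other than "FALSE" indicates a modification:

sql> select name, value, ismodified
from v$system_parameter
where ismodified <> 'FALSE' ;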

Deprecated Parameters

As new database versions are developed, some parameters used in earlier versions may become deprecated. There is a column named "ISDEPRECATED" in the v$system_parameter view. This column shows whether the parameter is deprecated in the current version.

sql> SELECT name, VALUE, isdeprecated
FROM v$system_parameter
WHERE name = 'background_dump_dest' ;

NAME                  VALUE                                ISDEPRECATED
background_dump_dest  /u01/diag/rdbms/testdb/testdb/trace  TRUE

As of 11g, the "background_dump_dest" parameter is no longer being used. It used to show the path of background dump files. We now have "diagnostic_dest" parameterinstead in version 11g. As seen in the query, the "ISDEPRECATED" column shows "TRUE". How Is Database Opened ?

An instance can be "started" or "shutdown". A database can be "mounted","opened", "closed" and "dismounted". However instance and database is tightly attached so you may also use "starting a database" or "shutting down a database".

Your database will be opened automatically after a server reboot if you've configured "Oracle Restart" or if your database is a RAC.

You may also manually shut your database down and then start it up anytime you want. Let's take a database which is down, start it up using SQL*Plus and see how Oracle opens a database.

1. As the oracle user, set the ORACLE_HOME environment variable to the home directory of your Oracle installation, and set the ORACLE_SID environment variable to anything you want. Your instance name will be the value you set as $ORACLE_SID.

$ export ORACLE_HOME=/u01/app/oracle/11.2.0.2/
$ export ORACLE_SID=myins

2. On the server, start SQL*Plus as the oracle user.

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Fri Oct 21 16:05:16 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to an idle instance.

At this point you could execute the "startup" command, and the database would complete all the stages and open. However, you may also step through each stage manually.

3. NOMOUNT State

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 4175568896 bytes
Fixed Size                  2233088 bytes
Variable Size            3137342720 bytes
Database Buffers         1023410176 bytes
Redo Buffers               12582912 bytes

At this stage the instance is started, but there is no database yet. To start an instance, Oracle needs a parameter file. It will search for a parameter file under the "$ORACLE_HOME/dbs/" directory in the order below and will use the first file it finds.

- spfileSID.ora (spfile)
- spfile.ora (spfile)
- initSID.ora (pfile)

You will also find a parameter file named "init.ora" under this directory. This is a template parameter file that comes with the installation. You may use this template as a starting point for building your own parameter file. However, Oracle will ignore it until you rename it to initSID.ora.

Although a typical parameter file will contain many parameters, "db_name" is the only parameter required to start an instance; see the minimal example below. Your database name will be the value you set for this parameter.
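As a minimal illustration, a pfile containing nothing but the single line below would be enough to start an instance in nomount state (the database name is just an example):

db_name=myins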

When an instance starts, SGA (System Global Area) is created in RAM and background processes are started.

At nomount stage, as your instance is started, you should be able to see the background processes associated with the instance.

$ ps -ef | grep myins
oracle 20227     1  0 18:56 ?     00:00:00 ora_pmon_myins
oracle 20229     1  0 18:56 ?     00:00:00 ora_psp0_myins
oracle 20232     1  0 18:56 ?     00:00:00 ora_vktm_myins
oracle 20236     1  0 18:56 ?     00:00:00 ora_gen0_myins
oracle 20238     1  0 18:56 ?     00:00:00 ora_diag_myins
oracle 20240     1  0 18:56 ?     00:00:00 ora_dbrm_myins
oracle 20242     1  0 18:56 ?     00:00:00 ora_dia0_myins
oracle 20244     1 27 18:56 ?     00:00:02 ora_mman_myins
oracle 20246     1  0 18:56 ?     00:00:00 ora_dbw0_myins
oracle 20248     1  0 18:56 ?     00:00:00 ora_lgwr_myins
oracle 20250     1  0 18:56 ?     00:00:00 ora_ckpt_myins
oracle 20252     1  0 18:56 ?     00:00:00 ora_smon_myins
oracle 20254     1  0 18:56 ?     00:00:00 ora_reco_myins
oracle 20256     1  0 18:56 ?     00:00:00 ora_rbal_myins
oracle 20258     1  0 18:56 ?     00:00:00 ora_asmb_myins
oracle 20260     1  0 18:56 ?     00:00:00 ora_mmon_myins
oracle 20262     1  0 18:56 ?     00:00:00 oracle+ASM_asmb_myins (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 20264     1  0 18:56 ?     00:00:00 ora_mmnl_myins
oracle 20266     1  0 18:56 ?     00:00:00 ora_d000_myins
oracle 20268     1  0 18:56 ?     00:00:00 ora_s000_myins
oracle 20270     1  0 18:56 ?     00:00:00 ora_mark_myins
oracle 20277     1  0 18:56 ?     00:00:00 ora_ocf0_myins
oracle 20282     1  0 18:56 ?     00:00:00 oracle+ASM_ocf0_myins (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 20397     1  0 18:57 ?     00:00:00 oraclemyins (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 20399 16888  0 18:57 pts/1 00:00:00 grep myins

You can also query the spfile, see parameter values or change them at nomount stage.

sql> show parameter

NAME                        TYPE        VALUE
--------------------------- ----------- -----------------
O7_DICTIONARY_ACCESSIBILITY boolean     FALSE
active_instance_count       integer
aq_tm_processes             integer     0
archive_lag_target          integer     0
asm_diskgroups              string
asm_diskstring              string
asm_power_limit             integer     1
audit_file_dest             string      /u01/admin/testdb/adump
audit_sys_operations        boolean     FALSE
audit_syslog_level          string
********** Output Truncated ****************

SQL> show parameter instance_name;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      myins

"instance_name" parameter shows the name of the instance. However, the value of this parameter is not stored in parameter file. It is populated from the value of ORACLE_SID environment variable at the time the startup command is executed.

4. MOUNT State

SQL> alter database mount;

Database altered.

To proceed to mount state, Oracle finds the control file and verifies its syntax. However, the information found in the control file is not validated. For example, the locations of data files and redo log files are stored in the control file, but Oracle does not check whether these files actually exist at mount stage. Still, there must be valid records showing the locations of those files in the control file; otherwise you cannot proceed to mount state.

The paths of the control files are stored in the parameter file.

SQL> show parameter control_files;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +ORADATA/testdb/controlfile/current.475.758824101, +FRA/testdb/controlfile/current.257.758824101

Here I've got two control files (the paths are separated by a comma) that reside in ASM.

You can find out which stage you are at by querying the v$database view. This view is not available at nomount stage, because at nomount stage there is no database. In mount state, however, the database is associated with the instance.

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
TESTDB    MOUNTED

Here, the name of the database is "TESTDB". This is the value of the "db_name" parameter you've set in the parameter file. And the database is in mount state.

SQL> show parameter db_name;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      testdb

As seen above, the name of the database is determined by db_name parameter.

The v$log, v$logfile and v$datafile views are also available at mount stage to show you the paths of online redolog files and data files.

At mount stage you can:

- rename data files (see the sketch after this list),
- enable/disable archive mode of the database (more information on archive mode),
- perform media recovery.
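For instance, renaming a data file at mount stage looks like this (both paths here are hypothetical; the file must already have been copied or moved at the OS level, since this command only updates the control file):

SQL> alter database rename file '/u01/oradata/users01.dbf' to '/u02/oradata/users01.dbf';

Database altered.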

5. OPEN Mode

SQL> alter database open;

Database altered.

While opening the database, Oracle will check the availability of data files and online redo log files. If a data file or online redolog group is missing, the database won't open. This stage is where the presence of these files is checked.

If the database was shut down in a consistent way (more information on shutdown types), it will be opened immediately. If it was an inconsistent shutdown, the SMON process will perform an instance recovery and open the database after that.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

Here my database is open in read-write mode (default). Users can connect to it.

6. Shutting down Database

While opening the database, Oracle has passed through each stage in order:

- start the instance (nomount state),
- mount the database (mount state),
- open the database.

While shutting down Oracle will pass through each stage in reverse order:

- close the database (mount state),
- dismount the database (nomount state),
- shut down the instance.

The only way to shut down a database using SQL*Plus is to enter the "shutdown" command with the shutdown option appropriate for your needs. (More information on shutdown types)

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Most of the time your database will be up (open database) or completely down (no database or instance). You will be at the nomount and mount stages during maintenance and installation, or when there is a problem with your database.

Checkpoint

What is checkpoint?

Checkpoint is an internal mechanism of Oracle. When a checkpoint occurs, the latest SCN (system change number) is written to the control file and to all datafile headers. This operation is performed by the checkpoint process; on Linux the process is named ora_ckpt_ followed by the instance name. Also, during a checkpoint, the ckpt process triggers the database writer process (DBWn) to write dirty blocks to disk.
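Since ckpt is an ordinary background process, you can spot it with ps, just like the processes in the startup example earlier (the instance name myins is taken from that example):

$ ps -ef | grep ora_ckpt
oracle 20250     1  0 18:56 ?     00:00:00 ora_ckpt_myins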

How can I view when the last checkpoint happened?

SQL> select checkpoint_change#,current_scn from v$database ;

CHECKPOINT_CHANGE# CURRENT_SCN
------------------ -----------
2008597            2023173

checkpoint_change# is the SCN written to the control file during the last checkpoint. current_scn is the SCN of the database at this moment. As there is always something changing in a database, the current SCN will always be incrementing and will always be ahead of checkpoint_change#.

SQL> select checkpoint_change#,current_scn from v$database ;

CHECKPOINT_CHANGE# CURRENT_SCN
------------------ -----------
2008597            2023600

I've re-executed the command. The checkpoint_change# remains the same because no checkpoint occurred between the two executions. However, the current SCN increased because something changed in the database during that period. There is a function called "scn_to_timestamp" which tells the time at which an SCN was current. The function is not 100% accurate; there can be a gap of up to about 3 seconds from the actual time.

sql> SELECT checkpoint_change# checkpoint_scn,
scn_to_timestamp (checkpoint_change#) checkpoint_time,
current_scn,
scn_to_timestamp (current_scn) current_time
FROM v$database ;

CHECKPOINT_SCN  CHECKPOINT_TIME      CURRENT_SCN  CURRENT_TIME
2008597         18.08.2011 15:00:57  2025607      18.08.2011 18:12:24

The query above contains timestamps and is therefore more readable than the bare change numbers. Notice that 3 hours and 12 minutes have passed since the checkpoint, and there were 17010 (2025607 - 2008597) changes during that period.

What triggers a checkpoint operation?

a) If an active redolog is about to be overwritten, a checkpoint occurs implicitly to write the dirty blocks associated with the redo records in the active redolog. (More information on redolog states)

SQL> SELECT checkpoint_change# checkpoint_scn,
scn_to_timestamp (checkpoint_change#) checkpoint_time
FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- ----------------------------------------
2026455        08.18.2011 18:25:36

The last checkpoint was at 18:25:36.

SQL> select group#,status from v$log ;

GROUP# STATUS
1      INACTIVE
2      ACTIVE
3      CURRENT

At the moment, group 2 is active and group 3 is the current redolog group.

SQL> alter system switch logfile;
SQL> select group#,status from v$log ;

GROUP# STATUS
1      CURRENT
2      ACTIVE
3      ACTIVE

Group 1 is now the current group. At the next log switch, group 2 should become current, but it is still active, so an implicit checkpoint will occur to make it available for overwriting.

SQL> alter system switch logfile ;

SQL> SELECT checkpoint_change# checkpoint_scn,
scn_to_timestamp (checkpoint_change#) checkpoint_time
FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- --------------------------------------
2027297        08.18.2011 18:38:48

As seen here, the log switch operation caused a checkpoint.

b) You can manually trigger a checkpoint.

SQL> SELECT checkpoint_change# checkpoint_scn,
scn_to_timestamp (checkpoint_change#) checkpoint_time
FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- ----------------------------------
2027550        08.18.2011 18:43:49

SQL> alter system checkpoint;

The command above explicitly caused a checkpoint.

SQL> SELECT checkpoint_change# checkpoint_scn,
scn_to_timestamp (checkpoint_change#) checkpoint_time
FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- --------------------------------
2027918        08.18.2011 18:49:25

c) Consistent shutdown operations cause a checkpoint. (More information on shutdown types)

SQL> SELECT checkpoint_change# checkpoint_scn,
scn_to_timestamp (checkpoint_change#) checkpoint_time
FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- ---------------------------
2027918        08.18.2011 18:49:25

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 4175568896 bytes
Fixed Size                  2233088 bytes
Variable Size            2365590784 bytes
Database Buffers         1795162112 bytes
Redo Buffers               12582912 bytes
Database mounted.
Database opened.

SQL> SELECT checkpoint_change# checkpoint_scn,
scn_to_timestamp (checkpoint_change#) checkpoint_time
FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- ------------------------------------
2028465        08.18.2011 18:55:12

d) If you've enabled MTTR (Mean Time To Recover) optimization by setting the FAST_START_MTTR_TARGET parameter, Oracle will automatically perform regular checkpoints to keep the MTTR at the level you defined. A checkpoint decreases MTTR because dirty blocks are written to disk, leaving fewer blocks to recover. Oracle adjusts the checkpoint frequency according to the FAST_START_MTTR_TARGET value you've set (see the sketch at the end of this section).

e) If you take a datafile offline or make it read only, the dirty blocks belonging to that datafile are written to disk. This is a partial checkpoint; not all dirty blocks in the database are written.

SQL> SELECT checkpoint_change# checkpoint_scn,
scn_to_timestamp (checkpoint_change#) checkpoint_time
FROM v$database;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- ----------------------
2118315        08.19.2011 10:07:37

The database-wide checkpoint happened at 10:07:37, and the SCN at that time (2118315) was written to the control file and to all datafile headers. v$database shows the SCN recorded in the control file.

sql> SELECT name, checkpoint_time, checkpoint_change#
FROM v$datafile_header
WHERE tablespace_name='USERS' ;

NAME                                          CHECKPOINT_TIME      CHECKPOINT_CHANGE#
+ORADATA/testdb/datafile/users.435.758824021  19.08.2011 10:07:37  2118315

v$datafile_header shows the SCN recorded in the datafile header. It is the same as the SCN recorded in the control file, which is what we expect.

sql> alter tablespace users read only ;

sql> SELECT name, checkpoint_time, checkpoint_change#
FROM v$datafile_header
WHERE tablespace_name='USERS' ;

NAME                                          CHECKPOINT_TIME      CHECKPOINT_CHANGE#
+ORADATA/testdb/datafile/users.435.758824021  19.08.2011 10:49:21  2120846

I made the USERS tablespace read only. A partial checkpoint then occurred at 10:49:21, and the SCN recorded in the datafile header changed.

SQL> SELECT checkpoint_change# checkpoint_scn,
scn_to_timestamp (checkpoint_change#) checkpoint_time
FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- ------------------------
2118315        08.19.2011 10:07:37

Since no database-wide checkpoint occurred, the SCN recorded in the control file is still 2118315. Notice that the SCN recorded in the datafile's header is now ahead of the one in the control file.
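Returning to trigger (d) above, here is a sketch of enabling MTTR-based checkpointing; the 300-second target is only an example, and v$instance_recovery shows how the instance is tracking the target:

SQL> alter system set fast_start_mttr_target=300 scope=both;

System altered.

SQL> select target_mttr, estimated_mttr from v$instance_recovery ;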

Redolog

What is Redolog?

Literally, re-do means to do it again.

There is always something changing (update, delete, insert) in your database.

Those changes are recorded by your database.

Each record regarding a change in your database is called a redo record or redolog.

These redologs are stored in files called redolog files.

This is an internal mechanism of Oracle. You cannot disable it. Every database must have redolog files.

Why does Oracle need redologs?

What if your database crashes and you lose data?

With the help of redolog files, Oracle knows what happened in the past and can re-apply the changes recorded in the redo records.

You can think of redologs as the history of the database.

Every change that happened in the database is recorded there.

If one day you lose data, you can examine your redo records and recover your data.

What is redolog group?

A database cannot run with just a single redolog file.

You should have at least three redolog files in your database (Oracle's hard minimum is two).

Oracle uses only one redolog file at a time.

When this file is full, Oracle proceeds to the next file. When all files are full, it returns to the first file.

The files are used in a circular fashion, and older files are overwritten.

In this architecture each file is called a redolog group.

What is redolog member?

In each group there has to be at least one file.

Oracle recommends that each group contains at least two files.

Each file in a group is called a member.

Every member in a group is identical. They are copies. They contain the same redo records.

Members are used for fault tolerance. If any member is lost or damaged Oracle can continue to function using other members.

What should a typical configuration be?

In a typical configuration there has to be at least 3 redolog groups.

Each redolog group should have at least 2 members (see the sketch after this list).

Redologs should be on your fastest disks as there will always be I/O on the files.

Redologs should not be on RAID-5 disks; RAID-5 has poor write performance.

If possible, place members on different storage controllers or on completely different storage units. This provides high availability.
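For example, a new group with two members on separate disks can be added like this (the group number, size and paths are illustrative):

SQL> alter database add logfile group 4
('/u01/redo/redo04a.log', '/u02/redo/redo04b.log') size 100m;

Database altered.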

At least one member in each group must be available for Oracle to function.

Redolog States

CURRENT:

If a redolog group is in "current" state it means that, redo records arecurrently beingwritten to that group. That redolog group will keep being "current" until a log switch occurs. The act of moving from one redolog group to another is called a log switch. There can only be one current redolog group at a time. Example:1sql> select group#,status from v$log ;

GROUP# STATUS
1      CURRENT
2      INACTIVE
3      INACTIVE

Here I've got 3 redolog groups. Group 1 is the current redolog group. The other two are inactive.

ACTIVE:

Redolog files keep changes made to data blocks. Data blocks are modified in a special area of the SGA (system global area) called the buffer cache. When the dirty blocks belonging to a group's redo records have not yet been written to disk, that group is still needed for instance recovery and stays in the "active" state.