
Page 1: User Guide RAC

Oracle Database 10g: Real Application Clusters

Volume 1 • Student Guide

D17276GC10

Edition 1.0

December 2004

D40139


Copyright © 2004, Oracle. All rights reserved.

This documentation contains proprietary information of Oracle Corporation. It is provided under a license agreement containing restrictions on use and disclosure and is also protected by copyright law. Reverse engineering of the software is prohibited. If this documentation is delivered to a U.S. Government Agency of the Department of Defense, then it is delivered with Restricted Rights and the following legend is applicable:

Restricted Rights Legend

Use, duplication or disclosure by the Government is subject to restrictions for commercial computer software and shall be deemed to be Restricted Rights software under Federal law, as set forth in subparagraph (c)(1)(ii) of DFARS 252.227-7013, Rights in Technical Data and Computer Software (October 1988).

This material or any portion of it may not be copied in any form or by any means without the express prior written permission of Oracle Corporation. Any other copying is a violation of copyright law and may result in civil and/or criminal penalties.

If this documentation is delivered to a U.S. Government Agency not within the Department of Defense, then it is delivered with “Restricted Rights,” as defined in FAR 52.227-14, Rights in Data-General, including Alternate III (June 1987).

The information in this document is subject to change without notice. If you find any problems in the documentation, please report them in writing to Education Products, Oracle Corporation, 500 Oracle Parkway, Box SB-6, Redwood Shores, CA 94065. Oracle Corporation does not warrant that this document is error-free.

All references to Oracle and Oracle products are trademarks or registered trademarks of Oracle Corporation.

All other products or company names are used for identification purposes only, and may be trademarks of their respective owners.

Authors
James Womack
Jean-Francois Verrier

Technical Contributors and Reviewers
Troy Anthony, Harald van Breederode, Bill Bridge, Michael Cebulla, Carol Colrain, Jonathan Creighton, Joel Goodman, Yunrui Li, Vijay Lunawat, Paul Manning, John McHugh, Erik Peterson, Javier Seen, Nitin Vengurlekar

Publisher
Joseph Fernandez

Contents

I  Introduction
  Overview  I-2
  What Is a Cluster?  I-3
  What Is Oracle Real Application Clusters?  I-4
  Why Use RAC?  I-5
  Clusters and Scalability  I-6
  Levels of Scalability  I-7
  Scaleup and Speedup  I-8
  Speedup/Scaleup and Workloads  I-9
  A History of Innovation  I-10
  Course Objectives  I-11
  Typical Schedule  I-12

1  Architecture and Concepts
  Objectives  1-2
  Complete Integrated Cluster Ware  1-3
  RAC Software Principles  1-4
  RAC Software Storage Principles  1-5
  OCR Architecture  1-6
  RAC Database Storage Principles  1-7
  RAC and Shared Storage Technologies  1-8
  Oracle Cluster File System  1-10
  Automatic Storage Management  1-11
  Raw or CFS?  1-12
  Typical Cluster Stack with RAC  1-13
  RAC Certification Matrix  1-14
  The Necessity of Global Resources  1-15
  Global Resources Coordination  1-16
  Global Cache Coordination: Example  1-17
  Write to Disk Coordination: Example  1-18
  RAC and Instance/Crash Recovery  1-19
  Instance Recovery and Database Availability  1-21
  Efficient Inter-Node Row-Level Locking  1-22
  Additional Memory Requirement for RAC  1-23
  Parallel Execution with RAC  1-24
  Global Dynamic Performance Views  1-25
  RAC and Services  1-26
  Virtual IP Addresses and RAC  1-27
  Database Control and RAC  1-28
  Summary  1-29


2  RAC Installation and Configuration (Part I)
  Objectives  2-2
  Oracle Database 10g RAC Installation: New Features  2-3
  Oracle Database 10g RAC Installation: Outline  2-5
  Preinstallation Tasks  2-6
  Hardware Requirements  2-7
  Network Requirements  2-8
  RAC Network Software Requirements  2-9
  Package Requirements  2-10
  hangcheck-timer Module Configuration  2-11
  Required UNIX Groups and Users  2-12
  The oracle User Environment  2-13
  User Shell Limits  2-14
  Configuring User Equivalency  2-15
  Required Directories for the Oracle Database Software  2-17
  Linux Kernel Parameters  2-19
  Cluster Setup Tasks  2-21
  Obtaining OCFS  2-22
  Installing the OCFS RPM Packages  2-23
  Starting ocfstool  2-24
  Generating the ocfs.conf File  2-25
  Preparing the Disks  2-26
  Loading OCFS at Startup  2-27
  Mounting OCFS on Startup  2-28
  Using Raw Partitions  2-30
  Binding the Partitions  2-31
  Raw Device Mapping File  2-33
  Installing Cluster Ready Services  2-35
  Specifying the Inventory Directory  2-36
  File Locations and Language Selection  2-37
  Cluster Configuration  2-38
  Private Interconnect Enforcement  2-39
  Oracle Cluster Registry File  2-40
  Voting Disk File  2-41
  Summary and Install  2-42
  Running the root.sh Script on All Nodes  2-43
  Verifying the CRS Installation  2-44
  Summary  2-46
  Practice 2: Overview  2-47


3  RAC Installation and Configuration (Part II)
  Objectives  3-2
  OUI Database Configuration Options  3-3
  Install the Database Software  3-4
  Specify File Locations  3-5
  Specify Cluster Installation  3-6
  Select Installation Type  3-7
  Products Prerequisite Check  3-8
  Select Database Configuration  3-9
  Check Summary  3-10
  The root.sh Script  3-11
  Launching the VIPCA with root.sh  3-12
  VIPCA Network Interface Discovery  3-13
  VIP Configuration Data and Summary  3-14
  Installation Progress  3-15
  End of Installation  3-16
  Database Preinstallation Tasks  3-17
  Creating the Cluster Database  3-19
  Node Selection  3-20
  Select Database Type  3-21
  Database Identification  3-22
  Cluster Database Management Method  3-23
  Passwords for Database Schema Owners  3-24
  Storage Options for Database Files  3-25
  Database File Locations  3-27
  Flash Recovery Area  3-28
  Database Components  3-29
  Database Services  3-30
  Initialization Parameters  3-31
  Database Storage Options  3-32
  Create the Database  3-33
  Monitor Progress  3-34
  Manage Default Accounts  3-35
  Postinstallation Tasks  3-36
  Patches and the RAC Environment  3-37
  Inventory List Locks  3-38
  Summary  3-39
  Practice 3: Overview  3-40


4  RAC Database Instances Administration
  Objectives  4-2
  The EM Cluster Database Home Page  4-3
  Cluster Database Instance Home Page  4-5
  Cluster Home Page  4-6
  The Configuration Section  4-7
  Operating System Details Page  4-8
  Performance and Targets Pages  4-9
  Starting and Stopping RAC Instances  4-10
  Starting and Stopping RAC Instances with EM  4-11
  Starting and Stopping RAC Instances with SQL*Plus  4-12
  Starting and Stopping RAC Instances with SRVCTL  4-13
  RAC Initialization Parameter Files  4-14
  SPFILE Parameter Values and RAC  4-15
  EM and SPFILE Parameter Values  4-16
  RAC Initialization Parameters  4-18
  Parameters Requiring Identical Settings  4-20
  Parameters Requiring Unique Settings  4-21
  Adding a Node to a Cluster  4-22
  Adding a Node to an Existing Cluster  4-23
  Adding the RAC Software to the New Node  4-25
  Reconfigure the Listeners  4-27
  Add an Instance by Using DBCA  4-28
  Deleting Instances from a RAC Database  4-29
  Node Addition and Deletion and the SYSAUX Tablespace  4-31
  Quiescing RAC Databases  4-32
  How SQL*Plus Commands Affect Instances  4-33
  Administering Alerts with Enterprise Manager  4-34
  Viewing Alerts  4-35
  Blackouts and Scheduled Maintenance  4-37
  Summary  4-39

5  Administering Storage in RAC Part I
  Objectives  5-2
  What Is Automatic Storage Management?  5-3
  ASM: Key Features and Benefits  5-4
  ASM: New Concepts  5-5
  ASM: General Architecture  5-6
  ASM Instance and Crash Recovery in RAC  5-8
  ASMLibs  5-9
  Oracle Linux ASMLib Installation: Overview  5-10


  Oracle Linux ASMLib Installation  5-11
  ASM Library Disk Creation  5-13
  ASM Administration  5-15
  ASM Instance Functionalities  5-16
  ASM Instance Creation  5-17
  ASM Instance Initialization Parameters  5-18
  RAC and ASM Instances Creation  5-19
  ASM Instance Initialization Parameters and RAC  5-20
  Discovering New ASM Instances with EM  5-21
  Accessing an ASM Instance  5-22
  Dynamic Performance View Additions  5-23
  ASM Home Page  5-24
  ASM Performance Page  5-25
  ASM Configuration Page  5-26
  Starting Up an ASM Instance  5-27
  Shutting Down an ASM Instance  5-28
  ASM Administration  5-29
  ASM Disk Group  5-30
  Failure Group  5-31
  Disk Group Mirroring  5-32
  Disk Group Dynamic Rebalancing  5-33
  ASM Administration Page  5-34
  Create Disk Group Page  5-35
  ASM Disk Groups with EM in RAC  5-36
  Disk Group Performance Page and RAC  5-37
  Create or Delete Disk Groups  5-38
  Adding Disks to Disk Groups  5-39
  Miscellaneous Alter Commands  5-40
  Monitoring Long-Running Operations Using V$ASM_OPERATION  5-42
  ASM Administration  5-43
  ASM Files  5-44
  ASM File Names  5-45
  ASM File Name Syntax  5-46
  ASM File Name Mapping  5-48
  ASM File Templates  5-49
  Template and Alias: Examples  5-50
  Retrieving Aliases  5-51
  SQL Commands and File Naming  5-52
  DBCA and Storage Options  5-53
  Database Instance Parameter Changes  5-54


  Summary  5-56
  Practice 5 Overview  5-57

6  Administering Storage in RAC Part II
  Objectives  6-2
  ASM and SRVCTL with RAC  6-3
  ASM and SRVCTL with RAC: Examples  6-4
  Migrating to ASM: Overview  6-5
  Migration with Extra Space: Overview  6-6
  Migration with Extra Space: Example  6-7
  Tablespace Migration: Example  6-11
  Migrate an SPFILE to ASM  6-12
  ASM Disk Metadata Requirements  6-13
  ASM and Transportable Tablespaces  6-14
  ASM and Storage Arrays  6-15
  ASM Scalability  6-16
  Redo Log Files and RAC  6-17
  Automatic Undo Management and RAC  6-18
  Summary  6-19
  Practice 6 Overview  6-20

7  Services
  Objectives  7-2
  Traditional Workload Dispatching  7-3
  Grid Workload Dispatching  7-4
  What Is a Service?  7-5
  High Availability of Services in RAC  7-6
  Possible Service Configuration with RAC  7-7
  Service Attributes  7-8
  Service Types  7-9
  Creating Services  7-10
  Creating Services with DBCA  7-11
  Creating Services with SRVCTL  7-13
  Preferred and Available Instances  7-14
  Everything Switches to Services  7-15
  Using Services with Client Applications  7-16
  Using Services with Resource Manager  7-17
  Services and Resource Manager with EM  7-18
  Services and Resource Manager: Example  7-19
  Using Services with Scheduler  7-20


  Services and Scheduler with EM  7-21
  Services and Scheduler: Example  7-23
  Using Services with Parallel Operations  7-24
  Using Services with Metric Thresholds  7-25
  Changing Service Thresholds Using EM  7-26
  Services and Metric Thresholds: Example  7-27
  Service Aggregation and Tracing  7-28
  Cluster Database: Top Services  7-29
  Service Aggregation Configuration  7-30
  Service Aggregation: Example  7-31
  The trcsess Utility  7-32
  Service Performance Views  7-33
  Managing Services  7-34
  Managing Services with EM  7-36
  Managing Services: Example  7-38
  Summary  7-39
  Practice 7 Overview  7-40

8  High Availability of Connections
  Objectives  8-2
  Types of Workload Distribution  8-3
  Client Side Connect-Time Load Balancing  8-4
  Client Side Connect-Time Failover  8-5
  Server Side Connect-Time Load Balancing  8-6
  Fast Application Notification: Overview  8-7
  Fast Application Notification Benefits  8-9
  FAN-Supported Event Types  8-10
  FAN Event Status  8-11
  FAN Event Reasons  8-12
  FAN Event Format  8-13
  Server-Side Callouts Implementation  8-14
  Server-Side Callout Parse: Example  8-15
  Server-Side Callout Filter: Example  8-16
  Configuring the Server-Side ONS  8-17
  Configuring the Client-Side ONS  8-18
  JDBC Fast Connection Failover: Overview  8-19
  JDBC Fast Connection Failover Benefits  8-20
  Transparent Application Failover: Overview  8-21
  TAF Basic Configuration: Example  8-22
  TAF Preconnect Configuration: Example  8-23


  TAF Verification  8-24
  FAN Connection Pools and TAF Considerations  8-25
  Restricted Session and Services  8-26
  Summary  8-27
  Practice 8 Overview  8-28

9  Managing Backup and Recovery in RAC
  Objectives  9-2
  Protecting Against Media Failure  9-3
  Configure RAC Recovery Settings with EM  9-4
  Configure RAC Backup Settings with EM  9-5
  Log History Records  9-6
  Initiate Archiving  9-7
  Archived Log File Configurations  9-8
  RAC and the Flash Recovery Area  9-9
  Oracle Recovery Manager  9-10
  Configuring RMAN  9-11
  RMAN Default Autolocation  9-12
  User-Managed Backup Methods  9-13
  Offline User-Managed Backup  9-14
  Online User-Managed Backup  9-15
  Channel Connections to Cluster Instances  9-16
  Distribution of Backups  9-17
  One Local Drive CFS Backup Scheme  9-18
  Multiple Drives CFS Backup Scheme  9-19
  Non-CFS Backup Scheme  9-20
  RAC Backup and Recovery using EM  9-21
  Restoring and Recovering Redo Log Files  9-22
  Parallel Recovery in Real Application Clusters  9-23
  Fast-Start Parallel Rollback in Real Application Clusters  9-25
  Managing OCR: Overview  9-26
  Recovering the OCR  9-27
  Recovering the Voting Disk  9-28
  Summary  9-29
  Practice 9: Overview  9-30

10  RAC Performance Tuning
  Objectives  10-2
  CPU and Wait Time Tuning Dimensions  10-3
  RAC-Specific Tuning  10-4
  Analyzing Cache Fusion Impact in RAC  10-5


  Typical Latencies for RAC Operations  10-6
  Wait Events for RAC  10-7
  Wait Event Views  10-8
  Global Cache Wait Events: Overview  10-9
  2-way Block Request: Example  10-11
  3-way Block Request: Example  10-12
  2-way Grant: Example  10-13
  Considered “Lost” Blocks: Example  10-14
  Global Enqueue Waits: Overview  10-15
  Session and System Statistics  10-16
  Most Common RAC Tuning Tips  10-17
  Index Block Contention Considerations  10-19
  Oracle Sequences and Index Contention  10-20
  Undo Block Considerations  10-21
  High-Water Mark Considerations  10-22
  Cluster Database Performance Page  10-23
  Cluster Cache Coherency Page  10-25
  Database Locks Page  10-26
  Automatic Workload Repository: Overview  10-27
  AWR Tables  10-28
  AWR Snapshots in RAC  10-29
  Generating and Viewing AWR Reports  10-30
  AWR Reports and RAC: Overview  10-31
  Statspack and AWR  10-33
  Automatic Database Diagnostic Monitor  10-34
  ADDM Problem Classification  10-35
  RAC-Specific ADDM Findings  10-36
  ADDM Analysis: Results  10-37
  ADDM Recommendations  10-38
  Summary  10-39
  Practice 10: Overview  10-40

11  Design for High Availability
  Objectives  11-2
  Causes of Unplanned Down Time  11-3
  Causes of Planned Down Time  11-4
  Oracle’s Solution to Down Time  11-5
  RAC and Data Guard Complementarity  11-6
  Maximum Availability Architecture  11-7
  RAC and Data Guard Topologies  11-8
  RAC and Data Guard Architecture  11-9


  Data Guard Broker (DGB) and CRS Integration  11-11
  Data Guard Broker Configuration Files  11-12
  Hardware Assisted Resilient Data  11-13
  Rolling Patch Upgrade Using RAC  11-14
  Rolling Release Upgrade Using SQL Apply  11-15
  Database High Availability Best Practices  11-16
  Extended RAC: Overview  11-17
  Extended RAC Connectivity  11-18
  Extended RAC Disk Mirroring  11-19
  Additional Data Guard Benefits  11-20
  Using Distributed Transactions with RAC  11-21
  Using a Test Environment  11-22
  Summary  11-23

Appendix A: Practices

Appendix B: Solutions

Appendix C: RAC on Windows Installation

Appendix D: Add Node

Appendix E: Remove Node


Introduction


Overview

• This course is designed for anyone interested in implementing a Real Application Clusters (RAC) database.

• The coverage is general and contains platform-specific information only when it is necessary to explain a concept using an example.

• Knowledge of and experience with Oracle Database 10g architecture are assumed.

• Lecture material is supplemented with hands-on practices.

Overview

The material in this course is designed to provide basic information that is needed to plan or manage Oracle Database 10g for Real Application Clusters.

The lessons and practices are designed to build on your knowledge of Oracle used in a nonclustered environment. The material does not cover basic architecture and database management; these topics are addressed by the Oracle Database 10g administration courses offered by Oracle University. If your background does not include working with a current release of the Oracle database, then you should consider taking such training before attempting this course.

The practices provide an opportunity for you to work with the features of the database that are unique to Real Application Clusters.


What Is a Cluster?

• Interconnected nodes act as a single server.

• Cluster software hides the structure.

• Disks are available for read and write by all nodes.

[Diagram: nodes connected by an interconnect, with clusterware on each node and disks readable and writable by all nodes]

What Is a Cluster?

A cluster consists of two or more independent, but interconnected, servers. Several hardware vendors have provided cluster capability over the years to meet a variety of needs. Some clusters were only intended to provide high availability by allowing work to be transferred to a secondary node if the active node fails. Others were designed to provide scalability by allowing user connections or work to be distributed across the nodes.

Another common feature of a cluster is that it should appear to an application as if it were a single server. Similarly, management of several servers should be as similar to the management of a single server as possible. The cluster management software provides this transparency.

For the nodes to act as if they were a single server, files must be stored in such a way that they can be found by the specific node that needs them. There are several different cluster topologies that address the data access issue, each dependent on the primary goals of the cluster designer.

The interconnect is a physical network used as a means of communication between each node of the cluster. In short, a cluster is a group of independent servers that cooperate as a single system.


What Is Oracle Real Application Clusters?

• Multiple instances accessing the same database

• Instances spread on each node

• Physical or logical access to each database file

• Software-controlled data access

[Diagram: instances running on each node, accessing shared database files over an interconnect]

What Is Oracle Real Application Clusters?

Real Application Clusters is software that enables you to use clustered hardware by running multiple instances against the same database. The database files are stored on disks that are either physically or logically connected to each node, so that every active instance can read from or write to them.

The Real Application Clusters software manages data access, so that changes are coordinated between the instances and each instance sees a consistent image of the database. The cluster interconnect enables instances to pass coordination information and data images between each other.

This architecture enables users and applications to benefit from the processing power of multiple machines. RAC architecture also achieves redundancy in the case of, for example, a system crashing or becoming unavailable; the application can still access the database on any surviving instances.
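The multiple-instances-one-database arrangement can be observed directly on a running cluster. As a sketch (assuming an open RAC database and a connection with sufficient privileges), the GV$ global views aggregate the corresponding V$ view across all open instances, so the following query lists each instance and the node it runs on:

```sql
-- List every active instance of the cluster database and its host node.
SELECT inst_id, instance_name, host_name, status
FROM   gv$instance
ORDER  BY inst_id;
```

On a two-node cluster, this would return one row per instance, each with a different host name but belonging to the same database.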


Why Use RAC?

• High availability: Survive node and instance failures

• No scalability limits: Add more nodes as you need them tomorrow

• Pay as you grow: Pay for just what you need today

• Key grid computing feature:
  – Grow and shrink on demand
  – Single-button addition and removal of servers
  – Automatic workload management for services

Why Use RAC?

Oracle Real Application Clusters (RAC) enables high utilization of a cluster of standard, low-cost modular servers such as blades.

RAC offers automatic workload management for services. Services are groups or classifications of applications that comprise business components corresponding to application workloads. Services in RAC enable continuous, uninterrupted database operations and provide support for multiple services on multiple instances. You assign services to run on one or more instances, and alternate instances can serve as backup instances. If a primary instance fails, Oracle moves the services from the failed instance to a surviving alternate instance. Oracle also automatically load-balances connections across instances hosting a service.

RAC harnesses the power of multiple low-cost computers to serve as a single large computer for database processing, and provides the only viable alternative to large-scale SMP for all types of applications.

RAC, which is based on a shared-disk architecture, can grow and shrink on demand without the need to artificially partition data among the servers of your cluster. RAC also offers single-button addition and removal of servers in a cluster. Thus, you can easily add a server to, or remove a server from, the database.


Clusters and Scalability

[Diagram: the SMP model (CPUs with caches sharing one memory, kept consistent by cache coherency) compared with the RAC model (per-node SGAs and CPUs over shared storage, kept consistent by Cache Fusion)]

Clusters and Scalability

If your application scales transparently on symmetric multiprocessing (SMP) machines, then it is realistic to expect it to scale well on RAC, without having to make any changes to the application code.

RAC eliminates the database instance, and the node itself, as a single point of failure, and ensures database integrity in the case of such failures.

Following are some scalability examples:

• Allow more simultaneous batch processes.
• Allow larger degrees of parallelism and more parallel executions to occur.
• Allow large increases in the number of connected users in online transaction processing (OLTP) systems.


Levels of Scalability

• Hardware: Disk input/output (I/O)
• Inter-node communication: High bandwidth and low latency
• Operating system: Number of CPUs
• Database management system: Synchronization
• Application: Design

Levels of Scalability

Successful implementation of cluster databases requires optimal scalability on four levels:

• Hardware scalability: Interconnectivity is the key to hardware scalability, which greatly depends on high bandwidth and low latency.

• Operating system scalability: Methods of synchronization in the operating system can determine the scalability of the system. In some cases, potential scalability of the hardware is lost because of the operating system’s inability to handle multiple resource requests simultaneously.

• Database management system scalability: A key factor in parallel architectures is whether the parallelism is affected internally or by external processes. The answer to this question affects the synchronization mechanism.

• Application scalability: Applications must be specifically designed to be scalable. A bottleneck occurs in systems in which every session is updating the same data most of the time. Note that this is not RAC specific and is true on single-instance systems, too.

It is important to remember that if any of the above areas are not scalable (no matter how scalable the other areas are), then parallel cluster processing may not be successful. A typical cause for the lack of scalability is one common shared resource that must be accessed often. This causes the otherwise parallel operations to serialize on this bottleneck. A high latency in the synchronization increases the cost of synchronization, thereby counteracting the benefits of parallelization. This is a general limitation and not a RAC-specific limitation.


Scaleup and Speedup

[Diagram: scaleup, where added hardware handles up to 200% or 300% of the original task in the same time, contrasted with speedup, where added hardware completes 100% of the task in half the time]

Scaleup and Speedup

Scaleup is the ability to sustain the same performance levels (response time) when both workload and resources increase proportionally:

Scaleup = (volume parallel) / (volume original) - time for ipc

For example, if 30 users consume close to 100 percent of the CPU during normal processing, then adding more users would cause the system to slow down due to contention for limited CPU cycles. However, by adding CPUs, you can support extra users without degrading performance.

Speedup is the effect of applying an increasing number of resources to a fixed amount of work to achieve a proportional reduction in execution times:

Speedup = (time original) / (time parallel) - time for ipc

Speedup results in resource availability for other tasks. For example, if queries usually take ten minutes to process and running in parallel reduces the time to five minutes, then additional queries can run without introducing the contention that might occur were they to run concurrently.

Note: ipc is the abbreviation for interprocess communication.
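The speedup formula can be checked against the worked example above. The following query is a sketch of the arithmetic only (it ignores the interprocess-communication term, as the example does): a ten-minute query that finishes in five minutes when parallelized gives a speedup of 2.

```sql
-- Speedup = (time original) / (time parallel), ignoring ipc time:
-- a 10-minute serial query that completes in 5 minutes in parallel.
SELECT 10 / 5 AS speedup FROM dual;   -- returns 2
```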


Speedup/Scaleup and Workloads

Workload                    Speedup    Scaleup
OLTP and Internet           No         Yes
DSS with parallel query     Yes        Yes
Batch (mixed)               Possible   Yes

Speedup/Scaleup and Workloads

The type of workload determines whether scaleup or speedup capabilities can be achieved using parallel processing.

Online transaction processing (OLTP) and Internet application environments are characterized by short transactions that cannot be further broken down, and therefore, no speedup can be achieved. However, by deploying greater amounts of resources, a larger volume of transactions can be supported without compromising the response.

Decision support systems (DSS) and parallel query options can attain speedup, as well as scaleup, because they essentially support large tasks without conflicting demands on resources. The parallel query capability within Oracle can also be leveraged to decrease overall processing time of long-running queries and to increase the number of such queries that can be run concurrently.

In an environment with a mixed workload of DSS, OLTP, and reporting applications, scaleup can be achieved by running different programs on different hardware. Speedup is possible in a batch environment, but may involve rewriting programs to use the parallel processing capabilities.


A History of Innovation

[Timeline: nonblocking queries, OPS (Oracle Parallel Server), Resource Manager, Data Guard, RAC, low-cost commodity clusters, Automatic Storage Management, Automatic Workload Management, Enterprise Grids]

A History of Innovation

Oracle Database 10g and the specific new manageability enhancements provided by Oracle RAC 10g enable RAC for everyone: all types of applications and enterprise grids, the basis for fourth-generation computing. Enterprise grids are built from large configurations of standardized, commodity-priced components: processors, network, and storage. With Oracle RAC’s cache fusion technology, the Oracle database adds to this the highest levels of availability and scalability.

Also, with Oracle RAC 10g, it becomes possible to perform dynamic provisioning of nodes, storage, CPUs, and memory to maintain service levels more easily and efficiently.

Enterprise grids are the data centers of the future and enable business to be adaptive, proactive, and agile for the fourth generation. The next major transition in computing infrastructure is going from the era of big SMPs to the era of grids.


Course Objectives

In this course, you:
• Learn the principal concepts of RAC
• Install the RAC components
• Administer database instances in a RAC environment
• Migrate a RAC database to ASM
• Manage services
• Back up and recover RAC databases
• Monitor and tune performance of a RAC database

Course Objectives

This course is designed to give you the necessary information to successfully administer Real Application Clusters.

You install Oracle Database 10g with Oracle Universal Installer (OUI) and create your database with the Database Configuration Assistant (DBCA). This ensures that your RAC environment has the optimal network configuration, database structure, and parameter settings for the environment that you selected. As a DBA, after installation your tasks are to administer your RAC environment at three levels:

• Instance administration
• Database administration
• Cluster administration

Throughout this course you use various tools to administer each level of RAC:

• Oracle Enterprise Manager 10g Database Control to perform administrative tasks whenever feasible
• Task-specific GUIs such as the Database Configuration Assistant (DBCA) and the Virtual Internet Protocol Configuration Assistant (VIPCA)
• Command-line tools such as SQL*Plus, Recovery Manager, and Server Control (SRVCTL)


Typical Schedule

Day    Lessons    Topics
1      1-2        Concepts and installation
2      3-4
3      5-6        Storage and services
4      7-8-9
5      10-11      Tuning and design

Typical Schedule

The lessons in this guide are arranged in the order that you will probably study them in class, and are grouped into the topic areas that are shown in the slide. The individual lessons are ordered so that they lead from more familiar to less familiar areas. The related practices are designed to let you explore increasingly powerful features of a Real Application Clusters database.

In some cases, the goals for the lessons and goals for the practices are not completely compatible. Your instructor may, therefore, choose to teach some material in a different order than found in this guide. However, if your instructor teaches the class in the order in which the lessons are printed in this guide, then the class should run approximately as shown in this schedule.


Architecture and Concepts


Objectives

After completing this lesson, you should be able to do the following:
• List the various components of Cluster Ready Services (CRS) and Real Application Clusters (RAC)

• Describe the various types of files used by a RAC database

• Describe the various techniques used to share database files across a cluster

• Describe the purpose of using services with RAC


Complete Integrated Cluster Ware

[Diagram: the Oracle9i RAC stack (applications and RAC running over vendor clusterware that supplies cluster control, event services, membership, connectivity, messaging and locking, and a volume manager/file system on the hardware/OS kernel) compared with the 10g RAC stack, in which Oracle provides the complete integrated clusterware: services framework, cluster control/recovery APIs, event services, Automatic Storage Management, messaging and locking, membership, connectivity, and system management APIs]

Complete Integrated Cluster Ware

With Oracle9i, Oracle introduced Real Application Clusters. For the first time, you were able to run online transaction processing (OLTP) and decision support system (DSS) applications against a database cluster without having to make expensive code changes or spend large amounts of valuable administrator time partitioning and repartitioning the database to achieve good performance.

Although Oracle9i Real Application Clusters did much to ease the task of allowing applications to work in clusters, there are still support challenges and limitations. Among these cluster challenges are complex software environments, support, inconsistent features across platforms, and awkward management interaction across the software stack. Most clustering solutions today were designed with failover in mind. Failover clustering has additional systems standing by in case of a failure. During normal operations, these failover resources may sit idle.

With the release of Oracle Database 10g, Oracle provides you with an integrated software solution that addresses cluster management, event management, application management, connection management, storage management, load balancing, and availability. These capabilities are addressed while hiding the complexity through simple-to-use management tools and automation. Real Application Clusters 10g provides an integrated cluster ware layer that delivers a complete environment for applications.


RAC Software Principles

[Slide graphic: a cluster of nodes, Node1 through Noden, each running an instance. Each instance contains a cache and the background processes LMON, LMD0, LMSx, LCK0, and DIAG, which manage the global resources. Each node also runs the Cluster Ready Services processes (CRSD and RACGIMON, OCSSD and OPROCD, EVMD) and applications such as the VIP, ONS, EMD, the listener, ASM, the database, services, and the OCR. SRVCTL, DBCA, and EM provide global management through the cluster interface.]

You may see a few more background processes associated with a RAC instance than with a single-instance database. These processes are primarily used to maintain database coherency among the instances. They manage what are called the global resources:

• LMON: Global Enqueue Service Monitor
• LMD0: Global Enqueue Service Daemon
• LMSx: Global Cache Service Processes, where x can range from 0 to j
• LCK0: Lock process
• DIAG: Diagnosability process

At the cluster level, you find the main processes of the Cluster Ready Services software. They provide a standard cluster interface on all platforms and perform high-availability operations. You find these processes on each node of the cluster:

• CRSD and RACGIMON: Engines for high-availability operations
• OCSSD: Provides access to node membership and group services
• EVMD: Scans the callout directory and invokes callouts in reaction to detected events
• OPROCD: A process monitor for the cluster

There are also several tools that are used to manage the various resources available on the cluster at a global level. These resources are the Automatic Storage Management (ASM) instances, the RAC databases, the services, and CRS node applications. Some of the tools that you will use throughout this course are Server Control (SRVCTL), DBCA, and Enterprise Manager.
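For example, the global management tools operate on cluster resources from any node. The sketch below assumes a hypothetical cluster database named RACDB with an instance RACDB1 on a node named node1; adapt the names to your environment:

```shell
# Status of the cluster database and all of its instances
srvctl status database -d RACDB

# Status of the node applications (VIP, ONS, EMD, listener) on node1
srvctl status nodeapps -n node1

# Stop a single instance without affecting the rest of the cluster
srvctl stop instance -d RACDB -i RACDB1
```

Because SRVCTL updates the OCR as it acts on resources, the same commands work identically regardless of which node you run them from.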


RAC Software Storage Principles

[Slide graphic: each node, Node1 through Noden, keeps a CRS home and an Oracle home on its local storage, which permits online patch upgrades and keeps the software from being a single point of failure. The shared storage holds the voting file and the OCR file.]

The Oracle Database 10g Real Application Clusters installation is a two-phase installation. In the first phase, you install CRS. In the second phase, you install the Oracle database software with RAC components and create a cluster database. The Oracle home that you use for the CRS software must be different from the one that is used for the RAC software. Although it is possible to install the CRS and RAC software on your cluster shared storage when using certain cluster file systems, the software is usually installed on a regular file system that is local to each node. This permits online patch upgrades and eliminates the software as a single point of failure. In addition, two files must be stored on your shared storage:

• The voting file is essentially used by the Cluster Synchronization Services daemon for node monitoring information across the cluster. Its size is set to around 20 MB.

• The Oracle Cluster Registry (OCR) file is also a key component of CRS. It maintains information about the high-availability components in your cluster, such as the cluster node list, the cluster database instance-to-node mapping, and the CRS application resource profiles (such as services, virtual IP addresses, and so on). This file is maintained automatically by administrative tools such as SRVCTL. Its size is around 100 MB.

The voting and OCR files cannot be stored in ASM because they must be accessible before starting any Oracle instance. OCR and voting files must be on redundant, reliable storage such as RAID. The recommended best practice location for those files is raw devices.
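Once CRS is installed, you can confirm the OCR's integrity and location with the ocrcheck utility from the CRS home. This is a sketch of typical usage; the exact output varies by platform and release:

```shell
# Reports the OCR version, total and used space, and the configured
# device or file name of the repository
ocrcheck
```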


OCR Architecture

[Slide graphic: three nodes, each with an OCR cache and an OCR process; client processes run on Node1 and Node3. Only one of the OCR processes reads from and writes to the OCR file on the shared storage.]

Cluster configuration information is maintained in the Oracle Cluster Registry. OCR relies on a distributed shared-cache architecture for optimizing queries against the cluster repository. Each node in the cluster maintains an in-memory copy of OCR, along with an OCR process that accesses its OCR cache. Only one of the OCR processes actually reads from and writes to the OCR file on shared storage. This process is responsible for refreshing its own local cache, as well as the OCR caches on the other nodes in the cluster. For queries against the cluster repository, the OCR clients communicate directly with the local OCR process on the node from which they originate. When clients need to update the OCR, they communicate through their local OCR process to the OCR process that is performing input/output (I/O) for writing to the repository on disk.

The OCR client applications are Oracle Universal Installer (OUI), SRVCTL, Enterprise Manager (EM), Database Configuration Assistant (DBCA), Database Upgrade Assistant (DBUA), NetCA, and Virtual Internet Protocol Configuration Assistant (VIPCA). Furthermore, OCR maintains dependency and status information for application resources defined within CRS, specifically databases, instances, services, and node applications.

The name of the configuration file is ocr.loc, and the configuration file variable is ocrconfig_loc. The location of the cluster repository is not restricted to raw devices. You can put OCR on shared storage that is managed by a cluster file system.

Note: OCR also serves as a configuration file in a single-instance environment with ASM, where there is one OCR per node.
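As a concrete illustration, on Linux the pointer file usually resides under /etc/oracle (the directory is platform-dependent), and its ocrconfig_loc variable names the repository location. The raw device shown here is purely illustrative:

```shell
# Where do OCR clients find the cluster repository?
cat /etc/oracle/ocr.loc
# A typical entry looks like: ocrconfig_loc=/dev/raw/raw1
```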


RAC Database Storage Principles

[Slide graphic: Node1 through Noden, each running an instance, with archived log files on each node's local storage. The shared storage holds the data files, control files, SPFILE, temp files, the change tracking file, the flash recovery area files, the online redo log files for each instance, and the undo tablespace files for each instance.]

The primary difference between RAC storage and storage for single-instance Oracle databases is that all data files in RAC must reside on shared devices (either raw devices or cluster file systems) in order to be shared by all the instances that access the same database. You must also create at least two redo log groups for each instance, and all the redo log groups must also be stored on shared devices for instance or crash recovery purposes. Each instance's online redo log groups are called that instance's thread of online redo.

In addition, when using the recommended automatic undo management feature, you must create one undo tablespace for each instance. Each undo tablespace must be stored on shared devices so that all instances can access it for recovery purposes.

Archive logs cannot be placed on raw devices because their names are automatically generated and different for each one. That is why they must be stored on a file system. If you are using a cluster file system (CFS), it enables you to access these archive files from any node at any time. If you are not using a CFS, you are always forced to make the archives available to the other cluster members at the time of recovery, for example, by using a network file system (NFS) across nodes. If you are using the recommended flash recovery area feature, then it must be stored in a shared directory so that all instances can access it.

Note: A shared directory can be an ASM disk group or a cluster file system.
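As a sketch of these storage requirements, the statements below add a thread of online redo and an undo tablespace for a second instance. The group numbers, sizes, the instance name RACDB2, and the '+DATA' ASM disk group are all illustrative:

```sql
-- Each instance needs at least two redo log groups in its own thread,
-- stored on shared storage
ALTER DATABASE ADD LOGFILE THREAD 2
  GROUP 3 ('+DATA') SIZE 50M,
  GROUP 4 ('+DATA') SIZE 50M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;

-- Each instance needs its own undo tablespace, also on shared storage
CREATE UNDO TABLESPACE undotbs2 DATAFILE '+DATA' SIZE 200M;

-- Associate the undo tablespace with the second instance in the SPFILE
ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SID='RACDB2' SCOPE=SPFILE;
```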


RAC and Shared Storage Technologies

• Storage is a critical component of grids:
– Sharing storage is fundamental
– New technology trends

• Supported shared storage for Oracle grids:
– Network Attached Storage
– Storage Area Network

• Supported file systems for Oracle grids:
– Raw volumes
– Cluster file system
– ASM

Storage is a critical component of any grid solution. Traditionally, storage has been directly attached to each individual server (DAS). Over the past few years, more flexible storage, accessible over storage area networks or regular Ethernet networks, has become popular. These new storage options enable multiple servers to access the same set of disks, simplifying provisioning of storage in any distributed environment.

A Storage Area Network (SAN) represents the evolution of data storage technology to this point. Traditionally, on client/server systems, data was stored on devices either inside or directly attached to the server. Next in the evolutionary scale came Network Attached Storage (NAS), which took the storage devices away from the server and connected them directly to the network. SANs take the principle a step further by allowing storage devices to exist on their own separate networks and communicate directly with each other over very fast media. Users can gain access to these storage devices through server systems that are connected to both the local area network (LAN) and the SAN.

As you have already seen, the choice of file system is critical for RAC deployment. Traditional file systems do not support simultaneous mounting by more than one system. Therefore, you must store files either in raw volumes without any file system, or on a file system that supports concurrent access by multiple systems.


RAC and Shared Storage Technologies (continued)

Thus, three major approaches exist for providing the shared storage needed by RAC:

• Raw volumes: These are directly attached raw devices that require storage that operates in block mode, such as Fibre Channel or iSCSI.

• Cluster file system: One or more cluster file systems can be used to hold all RAC files. Cluster file systems also require block mode storage, such as Fibre Channel or iSCSI.

• Automatic Storage Management (ASM): A portable, dedicated, and optimized cluster file system for Oracle database files.

Note: iSCSI is important to SAN technology because it enables a SAN to be deployed in a local area network (LAN), wide area network (WAN), or metropolitan area network (MAN).


Oracle Cluster File System

• Is a shared disk cluster file system for Linux and Windows

• Improves management of data for RAC by eliminating the need to manage raw devices

• Provides an open solution on the operating system side: on Linux it is free and open source

• Can be downloaded from OTN: http://oss.oracle.com/software

Oracle Cluster File System (OCFS) is a shared file system designed specifically for Oracle Real Application Clusters. OCFS eliminates the requirement that Oracle database files be linked to logical drives, and it enables all nodes to share a single Oracle home (on Windows 2000 only) instead of requiring each node to have its own local copy. OCFS volumes can span one shared disk or multiple shared disks for redundancy and performance enhancements.

The following files can be placed on an Oracle Cluster File System:

• Oracle software installation: Currently, this configuration is supported only on Windows 2000. The next major version will provide support for Oracle home on Linux as well.
• Oracle files (control files, data files, redo logs, bfiles, and so on)
• Shared configuration files (SPFILE)
• Files created by Oracle during run time
• Voting and OCR files

Oracle Cluster File System is free for developers and customers. On Linux, the source code is provided under the General Public License (GPL). It can be downloaded from the Oracle Technology Network Web site.

Note: See the release notes for platform-specific limitations of OCFS.


Automatic Storage Management

• Portable and high-performance cluster file system
• Manages Oracle database files
• Data spread across disks to balance load
• Integrated mirroring across disks
• Solves many storage management challenges

[Slide graphic: ASM sits between the database and the operating system, replacing the separate file system and volume manager layers used by a conventional application stack.]

Automatic Storage Management (ASM) is a new feature in Oracle Database 10g. It provides a vertical integration of the file system and the volume manager that is specifically built for Oracle database files. ASM can provide management for single SMP machines, or across multiple nodes of a cluster for Oracle Real Application Clusters support.

ASM distributes I/O load across all available resources to optimize performance while removing the need for manual I/O tuning. It helps DBAs manage a dynamic database environment by allowing them to increase the database size without having to shut down the database to adjust the storage allocation.

ASM can maintain redundant copies of data to provide fault tolerance, or it can be built on top of vendor-supplied, reliable storage mechanisms. Data management is done by selecting the desired reliability and performance characteristics for classes of data, rather than with human interaction on a per-file basis.

ASM capabilities save DBAs time by automating manual storage tasks, thereby increasing their ability to manage larger databases (and more of them) with increased efficiency.

Note: ASM is the strategic and stated direction for where Oracle database files should be stored. However, OCFS will continue to be developed and supported for those who are using it.
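As an illustration of ASM's integrated mirroring, the statement below creates a normal-redundancy disk group from two failure groups on different controllers; the disk group name and device paths are hypothetical:

```sql
CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/raw/raw3'
  FAILGROUP controller2 DISK '/dev/raw/raw4';

-- Database files can then be placed on the disk group, for example:
-- CREATE TABLESPACE users2 DATAFILE '+dgroup1' SIZE 100M;
```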


Raw or CFS?

• Using CFS:
– Simpler management
– Use of OMF with RAC
– Single Oracle software installation
– Autoextend

• Using raw:
– Performance
– Use when CFS not available
– Cannot be used for archived log files (on UNIX)

As already explained, you can either use a cluster file system or place files on raw devices.

Cluster file systems provide the following advantages:

• Greatly simplified installation and administration of RAC
• Use of Oracle Managed Files with RAC
• Single Oracle software installation
• Autoextend enabled on Oracle data files
• Uniform accessibility to archive logs in case of physical node failure

Raw devices have the following implications:

• Raw devices are always used when a CFS is not available or not supported by Oracle.
• Raw devices offer the best performance, without any intermediate layer between Oracle and the disk.
• Autoextend fails on raw devices if the space is exhausted.
• ASM, logical storage managers, or logical volume managers can ease the work with raw devices. They can enable you to add space to a raw device online, or to create raw device names that make the usage of the device clear to the system administrators.


Typical Cluster Stack with RAC

[Slide graphic: typical cluster stacks. Servers are connected by a high-speed interconnect (Gigabit Ethernet with UDP, or a proprietary interconnect with a proprietary protocol) and to the database shared storage. Typical combinations: RAC with Oracle CRS on Linux, UNIX, or Windows over ASM; RAC on Linux or Windows over raw devices or OCFS; and RAC on AIX, HP-UX, or Solaris with OS clusterware over ASM, over raw devices with an OS cluster volume manager, or over a CFS.]

Each node in a cluster requires a supported interconnect software protocol to support inter-instance communication, and Transmission Control Protocol/Internet Protocol (TCP/IP) to support CRS polling. All UNIX platforms use User Datagram Protocol (UDP) on Gigabit Ethernet as one of the primary protocols and interconnects for RAC inter-instance IPC communication. Other supported vendor-specific interconnect protocols include Remote Shared Memory for SCI and SunFire Link interconnects, and Hyper Messaging Protocol for Hyperfabric interconnects. In any case, your interconnect must be certified by Oracle for your platform.

Using Oracle clusterware, you can reduce installation and support complications. However, vendor clusterware may be needed if customers use a non-Ethernet interconnect, or if you have deployed clusterware-dependent applications on the same cluster where you deploy RAC.

Like the interconnect, the shared storage solution you choose must be certified by Oracle for your platform. If a cluster file system (CFS) is available on the target platform, then both the database area and the flash recovery area can be created on either CFS or ASM. If a CFS is unavailable on the target platform, then the database area can be created either on ASM or on raw devices (with the required volume manager), and the flash recovery area must be created on ASM.


RAC Certification Matrix

1. Connect and log in to http://metalink.oracle.com.
2. Click the Certify and Availability button on the menu frame.
3. Click the View Certifications by Product link.
4. Select Real Application Clusters.
5. Select the correct platform.

The Real Application Clusters Certification Matrix is designed to address any certification inquiries. Use this matrix to answer any certification questions that are related to RAC. To navigate to the Real Application Clusters Certification Matrix, perform the steps shown in the slide above.


The Necessity of Global Resources

[Slide graphic: four panels showing two SGAs without cache coordination. Both SGA1 and SGA2 cache block version 1008; each instance then modifies its own copy to 1009 and writes it independently, so one of the changes is overwritten: lost updates!]

In single-instance environments, locking coordinates access to a common resource, such as a row in a table. Locking prevents two processes from changing the same resource (or row) at the same time.

In RAC environments, internode synchronization is critical because it maintains proper coordination between processes on different nodes, preventing them from changing the same resource at the same time. Internode synchronization also guarantees that each instance sees the most recent version of a block in its buffer cache.

Note: The slide shows what can happen in the absence of cache coordination.


Global Resources Coordination

[Slide graphic: Node1 through Noden, each instance containing its cache, the background processes LMON, LMD0, LMSx, LCK0, and DIAG, and its part of the GRD with resource masters. The Global Enqueue Service (GES) and Global Cache Service (GCS) coordinate the global resources across the cluster interconnect through the Global Resource Directory (GRD).]

Cluster operations require synchronization among all instances to control shared access to resources. RAC uses the Global Resource Directory (GRD) to record information about how resources are used within a cluster database. The Global Cache Service (GCS) and Global Enqueue Service (GES) manage the information in the GRD. Each instance maintains a part of the GRD in its System Global Area (SGA). The GCS and GES nominate one instance to manage all information about a particular resource. This instance is called the resource master. Also, each instance knows which instance masters which resource.

Maintaining cache coherency is an important part of RAC activity. Cache coherency is the technique of keeping multiple copies of a block consistent between different Oracle instances. The GCS implements cache coherency by using what is called the Cache Fusion algorithm.

The GES manages all non-Cache Fusion inter-instance resource operations and tracks the status of all Oracle enqueuing mechanisms. The primary resources that the GES controls are dictionary cache locks and library cache locks. The GES also performs deadlock detection on all deadlock-sensitive enqueues and resources.


Global Cache Coordination: Example

[Slide graphic: two nodes whose caches hold versions 1008 and 1009 of a block that is mastered by instance one. (1) Instance two asks the GCS which instance masters the block. (2) The GCS forwards the request to instance one, the holder. (3) Instance one ships the current version of the block to instance two. (4) Instance two informs the GCS that it now holds the block. No disk I/O is performed.]

The scenario described in the slide assumes that the data block has been changed, or dirtied, by the first instance. Furthermore, only one copy of the block exists clusterwide, and the content of the block is represented by its SCN.

1. The second instance, attempting to modify the block, submits a request to the GCS.
2. The GCS transmits the request to the holder. In this case, the first instance is the holder.
3. The first instance receives the message and sends the block to the second instance. The first instance retains the dirty buffer for recovery purposes. This dirty image of the block is also called a past image of the block. A past image block cannot be modified further.
4. On receipt of the block, the second instance informs the GCS that it holds the block.

Note: The data block is not written to disk before the resource is granted to the second instance.


Write to Disk Coordination: Example

[Slide graphic: instance one holds a past image (1009) of a block and instance two holds the current version (1010). (1) Instance one, needing to make room in its cache, asks the GCS who has the current version. (2) The GCS tells instance two, the owner, to flush the block to disk. (3) Instance two writes the block. (4) Instance two records the completion with the GCS. (5) The GCS orders the past image holders to discard their images. Only one disk I/O is performed.]

The scenario described in the slide illustrates how an instance can perform a checkpoint at any time or replace buffers in the cache in response to free buffer requests. Because multiple versions of the same data block, with different changes, can exist in the caches of instances in the cluster, a write protocol managed by the GCS ensures that only the most current version of the data is written to disk. It must also ensure that all previous versions are purged from the other caches. A write request for a data block can originate in any instance that has the current or a past image of the block. In this scenario, assume that the first instance, holding a past image buffer, requests that Oracle write the buffer to disk:

1. The first instance sends a write request to the GCS.
2. The GCS forwards the request to the second instance, the holder of the current version of the block.
3. The second instance receives the write request and writes the block to disk.
4. The second instance records the completion of the write operation with the GCS.
5. After receipt of the notification, the GCS orders all past image holders to discard their past images. These past images are no longer needed for recovery.

Note: In this case, only one I/O is performed to write the most current version of the block to disk.


RAC and Instance/Crash Recovery

[Slide graphic: instance recovery timeline. (1) The GES remasters enqueue resources while LMON recovers the GRD. (2) The GCS remasters cache resources. (3) SMON builds the recovery set by merging the failed redo threads, using information from the other caches. (4) Recovery resources are claimed. (5) SMON rolls the recovery set forward and recovers the database.]

When an instance fails and the failure is detected by another instance, the second instance performs the following recovery steps:

1. During the first phase of recovery, the GES remasters the enqueues.
2. Then the GCS remasters its resources. The GCS processes remaster only those resources that lose their masters. During this time, all GCS resource requests and write requests are temporarily suspended. However, transactions can continue to modify data blocks as long as these transactions have already acquired the necessary resources.
3. After enqueues are reconfigured, one of the surviving instances can grab the instance recovery enqueue. Therefore, at the same time as GCS resources are remastered, SMON determines the set of blocks that need recovery. This set is called the recovery set. Because, with Cache Fusion, an instance ships the contents of its blocks to the requesting instance without writing the blocks to disk, the on-disk version of the blocks may not contain the changes that were made by either instance. This implies that SMON needs to merge the contents of all the online redo logs of each failed instance to determine the recovery set. One failed thread might contain a hole in the redo that needs to be applied to a particular block, so the redo threads of failed instances cannot be applied serially. Redo threads of surviving instances are not needed for recovery because SMON can use the past or current images in their corresponding buffer caches.


RAC and Instance/Crash Recovery (continued)

4. Buffer space for recovery is allocated, and the resources that were identified in the previous reading of the redo logs are claimed as recovery resources. This is done to prevent other instances from accessing those resources.
5. All resources required for subsequent processing have now been acquired, and the GRD is unfrozen. Any data blocks that are not in recovery can now be accessed. Note that the system is already partially available.

Then, assuming that there are past images or current images of blocks to be recovered in other caches in the cluster database, the most recent image is the starting point of recovery for these particular blocks. If neither the past image buffer nor the current buffer for a data block is in any of the surviving instances' caches, then SMON performs a log merge of the failed instances. SMON recovers and writes each block identified in step 3, releasing the recovery resources immediately after each block's recovery so that more blocks become available as recovery proceeds.

After all blocks have been recovered and the recovery resources have been released, the system is again fully available. In summary, the recovered database, or the recovered portions of the database, becomes available earlier, before the completion of the entire recovery sequence. This makes the system available sooner and makes recovery more scalable.

Note: The performance overhead of a log merge is proportional to the number of failed instances and to the size of the redo logs of each instance.


Instance Recovery and Database Availability

[Slide graphic: database availability (none, partial, full) plotted against elapsed time through points A to H, with markers for recovery steps 1 to 5.]

The graphic illustrates the degree of database availability during each step of Oracle instance recovery:

A. Real Application Clusters is running on multiple nodes.
B. Node failure is detected.
C. The enqueue part of the GRD is reconfigured; resource management is redistributed to the surviving nodes. This operation occurs relatively quickly.
D. The cache part of the GRD is reconfigured, and SMON reads the redo log of the failed instance to identify the database blocks that it needs to recover.
E. SMON issues the GRD requests to obtain all the database blocks it needs for recovery. After the requests are complete, all other blocks are accessible.
F. Oracle performs roll forward recovery. Redo logs of the failed threads are applied to the database, and blocks become available right after their recovery is completed.
G. Oracle performs rollback recovery. Undo blocks are applied to the database for all uncommitted transactions.
H. Instance recovery is complete, and all data is accessible.

Note: The dashed line represents the blocks identified in step 2 on the previous slide. Also, the dotted steps represent the ones identified on the previous slide.


Efficient Inter-Node Row-Level Locking

[Slide graphic: four panels showing UPDATE statements running on instance one and instance two against the same block, followed by a COMMIT. The block moves between the instances without any block-level lock.]

Oracle supports efficient row-level locks. These row-level locks are created when data manipulation language (DML) operations, such as UPDATE, are executed by an application. The locks are held until the application commits or rolls back the transaction. Any other application process is blocked if it requests a lock on the same row.

Cache Fusion block transfers operate independently of these user-visible row-level locks. The transfer of data blocks by the GCS is a low-level process that can occur without waiting for row-level locks to be released. Blocks may be transferred from one instance to another while row-level locks are held. The GCS provides access to data blocks, allowing multiple transactions to proceed in parallel.


Additional Memory Requirement for RAC

• Heuristics for scalability cases:
– 15% more shared pool
– 10% more buffer cache

• Smaller buffer cache per instance in the case of a single-instance workload distributed across multiple instances

• Current values:

SELECT resource_name, current_utilization, max_utilization
FROM v$resource_limit
WHERE resource_name LIKE 'g%s_%';

RAC-specific memory is mostly allocated in the shared pool at SGA creation time. Because blocks may be cached across instances, you must also account for bigger buffer caches.

Therefore, when migrating your Oracle database from a single instance to RAC, keeping the workload requirements per instance the same as in the single-instance case, about 10% more buffer cache and 15% more shared pool are needed to run on RAC. These values are heuristics based on RAC sizing experience, and they are mostly upper bounds. If you are using the recommended automatic memory management feature, then you can reflect these values in your SGA_TARGET initialization parameter as a starting point. However, consider that memory requirements per instance are reduced when the same user population is distributed over multiple nodes.

Actual resource usage can be monitored by querying the CURRENT_UTILIZATION and MAX_UTILIZATION columns for the GCS and GES entries in the V$RESOURCE_LIMIT view of each instance.


Parallel Execution with RAC

Execution slaves have node affinity with the execution coordinator, but will expand if needed.

[Slide graphic: four nodes over shared disks; the execution coordinator runs on one node, and parallel execution servers run across the nodes of the cluster.]

Oracle's cost-based optimizer incorporates parallel execution considerations as a fundamental component in arriving at optimal execution plans.

In a RAC environment, intelligent decisions are made with regard to intranode and internode parallelism. For example, if a particular query requires six query processes to complete the work and six parallel execution slaves are idle on the local node (the node that the user connected to), then the query is processed by using only local resources. This demonstrates efficient intranode parallelism and eliminates the query coordination overhead across multiple nodes. However, if there are only two parallel execution servers available on the local node, then those two and four from another node are used to process the query. In this manner, both internode and intranode parallelism are used to speed up query operations.

In real-world decision support applications, queries are not perfectly partitioned across the various query servers. Therefore, some parallel execution servers complete their processing and become idle sooner than others. The Oracle parallel execution technology dynamically detects idle processes and assigns work to them from the queue tables of the overloaded processes. In this way, Oracle efficiently redistributes the query workload across all processes. Real Application Clusters further extends these efficiencies to clusters by enabling the redistribution of work across all the parallel execution slaves of a cluster.
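As a sketch, the hint below requests a degree of parallelism of six for a full scan of a hypothetical SALES table; depending on how many slaves are idle, RAC may satisfy the request with local slaves only, or borrow slaves from other instances:

```sql
SELECT /*+ PARALLEL(s, 6) */ COUNT(*)
FROM sales s;
```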

Global Dynamic Performance Views

• Store information about all started instances
• One global view for each local view
• Use one parallel slave on each instance
• Make sure that PARALLEL_MAX_SERVERS is big enough

[Slide diagram: a cluster of Node1/Instance1 through Noden/Instancen; GV$INSTANCE aggregates the V$INSTANCE view of every started instance]

Global Dynamic Performance Views

Global dynamic performance views store information about all started instances accessing one RAC database. In contrast, standard dynamic performance views store information about the local instance only.

With a few exceptions, each V$ view has a corresponding GV$ view. In addition to the V$ information, each GV$ view possesses an additional column named INST_ID, which displays the number of the instance from which the associated V$ view information is obtained. You can query GV$ views from any started instance.

In order to query the GV$ views, the value of the PARALLEL_MAX_SERVERS initialization parameter must be set to at least 1 on each instance. This is because GV$ views use a special form of parallel execution. The parallel execution coordinator runs on the instance that the client connects to, and one slave is allocated on each instance to query the underlying V$ view for that instance. If PARALLEL_MAX_SERVERS is set to 0 on a particular node, then you do not get a result from that node. Also, if all the parallel servers on a particular node are busy, you do not get a result from that node. In neither case do you get a warning or an error message.
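
For example, the difference is visible with a pair of queries (a sketch; run from any started instance):

```sql
-- Local view: one row, for the instance you are connected to.
SELECT instance_name, host_name FROM v$instance;

-- Global view: one row per started instance, tagged with INST_ID.
SELECT inst_id, instance_name, host_name
FROM   gv$instance
ORDER  BY inst_id;
```

If the second query returns fewer rows than there are started instances, check PARALLEL_MAX_SERVERS and parallel server availability on the missing nodes.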

RAC and Services

[Slide diagram: ERP and CRM services spanning the RAC instances. An up-and-down event notification engine and CRS stop/start service connections, modify the service-to-instance mapping, and restart failed components. Listeners provide connection load balancing and are service-availability aware. The application server tier gets run-time load balancing and service location transparency, with backup, priority, alerts, and tuning defined per service.]

RAC and Services

Services are a logical abstraction for managing workloads. Services divide the universe of work executing in the Oracle database into mutually disjoint classes. Each service represents a workload with common attributes, service-level thresholds, and priorities. Services are built into the Oracle database, providing a single system image for workloads, prioritization for workloads, performance measures for real transactions, and alerts and actions when performance goals are violated. These attributes are handled by each instance in the cluster by using metrics, alerts, scheduler job classes, and the resource manager.

With RAC, services facilitate load balancing, allow for end-to-end lights-out recovery, and provide full location transparency. A service can span one or more instances of an Oracle database in a cluster, and a single instance can support multiple services. The number of instances offering the service is transparent to the application.

Services enable the automatic recovery of work. Following outages, the service is recovered quickly and automatically at the surviving instances. When instances are later repaired, services that are not running are restored quickly and automatically by CRS. As soon as the service changes state, up or down, a notification is available for applications using the service to trigger immediate recovery and load-balancing actions. Listeners are also aware of service availability and are responsible for distributing the workload over the surviving instances when new connections are made. This architecture forms an end-to-end continuous service for applications.

Virtual IP Addresses and RAC

[Slide diagram: clients connecting to a two-node cluster (clnode-1, clnode-2), with numbered steps comparing connect-time behavior without and with VIPs.

Without VIP, the client uses
ERP=(DESCRIPTION=((HOST=clusnode-1))((HOST=clusnode-2))(SERVICE_NAME=ERP))
and waits on a TCP timeout when a node is down.

With VIP, the client uses
ERP=(DESCRIPTION=((HOST=clusnode-1vip))((HOST=clusnode-2vip))(SERVICE_NAME=ERP))
and fails over to the next address immediately.]

Virtual IP Addresses and RAC

Virtual IP addresses (VIPs) are all about availability of applications when an entire node fails. When a node fails, the VIP associated with it automatically fails over to another node in the cluster. When this occurs:

• The new node indicates to the world the new MAC address for the VIP. For directly connected clients, this usually causes them to see errors on their connections to the old address.

• Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients. This results in the clients getting errors immediately.

This means that when the client issues SQL to the node that is now down (3), or traverses the address list while connecting (1), rather than waiting on a very long TCP/IP timeout (5), which could be as long as ten minutes, the client receives a TCP reset. In the case of SQL, this results in an ORA-3113 error. In the case of connect, the next address in tnsnames is used (6). The slide shows you the connect case with and without VIP. Without VIPs, clients connected to a node that died often wait for a TCP timeout period of up to ten minutes before getting an error. As a result, you do not really have a good high-availability solution without using VIPs.

Note: After you are in the SQL stack and blocked on read/write requests, you need to use Fast Application Notification (FAN) to receive an interrupt. FAN is discussed in more detail in the “High Availability of Connections” lesson.
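
For reference, a fuller form of the VIP-based connect descriptor from the slide might look like this in tnsnames.ora. This is a sketch: the PROTOCOL and PORT values are typical defaults, not taken from the slide:

```
ERP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = clusnode-1vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = clusnode-2vip)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = ERP)))
```

If clusnode-1vip is unreachable, the client moves on to clusnode-2vip rather than waiting on a TCP timeout.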

Database Control and RAC

[Slide diagram: Enterprise Manager page hierarchy and architecture. The Cluster Database target provides Home, Performance, Administration, and Maintenance pages; the Cluster target provides Home, Performance, and Targets pages. Node1 runs Instance1 with DB Control (agent, OC4J, EM application, and the database and repository); Node2 runs Instance2 with an agent, OC4J, and the EM application.]

Database Control and RAC

With Real Application Clusters 10g, Enterprise Manager (EM) is the recommended management tool for the cluster as well as the database. EM delivers a single-system image of RAC databases, providing consolidated screens for managing and monitoring individual cluster components. The integration with the cluster allows EM to report status and events, offer suggestions, and show configuration information for the storage and the operating system. This information is available from the Cluster page in summary form. The flexibility of EM allows you to drill down easily on any events or information that you want to explore. For example, you can use EM to administer your entire processing environment, not just the RAC database. EM enables you to manage a RAC database with its instance targets, listener targets, host targets, and a cluster target, as well as the ASM targets if you are using ASM storage for your database.

EM has two different management frameworks: Grid Control and Database Control. RAC is supported in both modes. Database Control is configured within the same ORACLE_HOME as your database target and can be used to manage only one database at a time. Alternatively, Grid Control can be used to manage multiple databases, iAS, and other target types in your enterprise across different ORACLE_HOME directories.

The diagram shows you the main divisions that can be seen from the various EM pages.

Summary

In this lesson, you should have learned how to:
• Recognize the various components of CRS and RAC
• Use the various types of files in a RAC database
• Share database files across a cluster
• Use services with RAC


RAC Installation and Configuration (Part I)

Objectives

After completing this lesson, you should be able to do the following:
• Describe the installation of Oracle Database 10g Real Application Clusters (RAC)
• Perform RAC preinstallation tasks
• Perform cluster setup tasks
• Install Oracle Cluster File System (OCFS)
• Install Oracle Cluster Ready Services

Oracle Database 10g RAC Installation: New Features

• Oracle Database 10g RAC incorporates a two-phase installation process:
  – Phase one installs Cluster Ready Services (CRS).
  – Phase two installs the Oracle Database 10g software with RAC.
• New pages and dialogs for the Oracle Universal Installer are introduced.
• The Virtual Internet Protocol Configuration Assistant (VIPCA) tool is used to configure virtual IPs.

Oracle Database 10g RAC Installation: New Features

The installation of Oracle Database 10g requires a two-phase process in which you run the Oracle Universal Installer (OUI) twice. The first phase installs Oracle Cluster Ready Services Release 1 (10.1.0.2). Cluster Ready Services (CRS) provides high-availability components, and it can also interact with the vendor clusterware, if present, to coordinate cluster membership information.

The second phase installs the Oracle Database 10g software with RAC. The installation also enables you to configure services for your RAC environment. If you have a previous Oracle cluster database version, the OUI activates the Database Upgrade Assistant (DBUA) to automatically upgrade your pre-Oracle 10g cluster database. The Oracle Database 10g installation process provides single system image, ease of use, and accuracy for RAC installations and patches. There are new and changed pages and dialogs for the OUI, the Database Configuration Assistant (DBCA), and DBUA. The VIPCA is a new tool for this release. The enhancements include the following:

• The OUI Cluster Installation Mode page enables you to select whether to perform a cluster Oracle Database 10g installation or a single-instance Oracle Database 10g installation.

Oracle Database 10g RAC Installation: New Features (continued)

• The DBCA Services page enables you to configure services for your RAC environment.
• The VIPCA pages enable you to configure virtual Internet protocol addresses for your RAC database.
• The gsdctl command is obsolete. The CRS installation stops any global services daemon (GSD) processes.
• The cluster manager on all platforms in Oracle Database 10g is known as Cluster Synchronization Services (CSS). The Oracle Cluster Synchronization Service Daemon (OCSSD) performs this function.
• The Oracle Database 10g version of the srvConfig.loc file is the ocr.loc file. The Oracle9i version of srvConfig.loc still exists for backward compatibility.

Oracle Database 10g RAC Installation: Outline

1. Complete preinstallation tasks:
   – Hardware requirements
   – Software requirements
   – Environment configuration, kernel parameters, and so on
2. Perform CRS installation.
3. Perform Oracle Database 10g software installation.
4. Perform cluster database creation.
5. Complete postinstallation tasks.

Oracle Database 10g RAC Installation: Outline

To successfully install Oracle Database 10g RAC, it is important that you understand the tasks that must be completed and the order in which they must occur. Before the installation can begin in earnest, each node that is going to be part of your RAC installation must meet the hardware and software requirements that are covered in this lesson. You must perform step-by-step tasks for hardware and software verification, as well as the platform-specific preinstallation procedures. You must install the operating system patches (Red Hat Package Managers [RPMs]) required by the cluster database, and you must verify that the kernel parameters are correct for your needs.

CRS must be installed by using the OUI. Make sure that your cluster hardware is functioning normally before you begin this step. Failure to do so results in an aborted or nonoperative installation.

After CRS has been successfully installed and tested, again use the OUI to install the Oracle Database 10g software, including the software options required for a RAC configuration. Although it is possible to create the database by using the OUI, using the DBCA to create it after the software is installed gives you some extra configuration flexibility.

After the database has been created, a few postinstallation tasks must be completed before your RAC database is fully functional. The remainder of this lesson provides you with the necessary knowledge to complete these tasks successfully.

Preinstallation Tasks

Check system requirements
Check software requirements
Create groups and users
Configure kernel parameters
Perform cluster setup

Preinstallation Tasks

Several tasks must be completed before CRS and the Oracle Database 10g software can be installed. Some of these tasks are common to all Oracle database installations and should be familiar to you. Others are specific to Oracle Database 10g RAC.

Attention to detail here simplifies the rest of the installation process. Failure to complete these tasks can certainly affect your installation and possibly force you to restart the process from the beginning.

Hardware Requirements

• At least 512 MB of physical memory is needed.

• A minimum of 1 GB of swap space is required.

• The /tmp directory should be at least 400 MB.

• The Oracle Database 10g software requires up to 4 GB of disk space.

# grep MemTotal /proc/meminfo
MemTotal:   763976 kB

# grep SwapTotal /proc/meminfo
SwapTotal:  1566328 kB

# df -k /tmp
Filesystem  1K-blocks  Used     Available  Use%
/dev/hdb1   3020140    2432180  434544     85%

Hardware Requirements

The system must meet the following minimum hardware requirements:

• At least 512 megabytes of physical memory is needed. To determine the amount of physical memory, enter the following command: grep MemTotal /proc/meminfo

• A minimum of one gigabyte of swap space or twice the amount of physical memory is needed. On systems with two gigabytes or more of memory, the swap space can be between one and two times the amount of physical memory. To determine the size of the configured swap space, enter the following command: grep SwapTotal /proc/meminfo

• At least 400 megabytes of disk space must be available in the /tmp directory. To determine the amount of disk space available in the /tmp directory, enter the following command: df -k /tmp

• Up to four gigabytes of disk space is required for the Oracle Database 10g software, depending on the installation type. The df command can be used to check for the availability of the required disk space.
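
The three checks above can be combined into one script. This is a sketch: check_min is a hypothetical helper, and the 512 MB, 1 GB, and 400 MB thresholds are the minimums quoted in this lesson:

```shell
#!/bin/sh
# check_min ACTUAL_KB REQUIRED_MB: prints OK when the value meets the minimum.
check_min() {
    if [ "$1" -ge $(( $2 * 1024 )) ]; then echo OK; else echo FAIL; fi
}

# Gather the values the lesson tells you to inspect.
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
swap_kb=$(grep SwapTotal /proc/meminfo | awk '{print $2}')
tmp_kb=$(df -k /tmp | awk 'NR==2 {print $4}')

echo "Memory (>= 512 MB): $(check_min "$mem_kb" 512)"
echo "Swap   (>= 1 GB):   $(check_min "$swap_kb" 1024)"
echo "/tmp   (>= 400 MB): $(check_min "$tmp_kb" 400)"
```

Run the script on every node; any FAIL line marks a node that does not meet the stated minimums.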

Network Requirements

• Each node must have at least two network adapters.

• Each public network adapter must support TCP/IP.
• The interconnect adapter must support User Datagram Protocol (UDP).
• The host name and IP address associated with the public interface must be registered in the domain name service (DNS) or the /etc/hosts file.

Network Requirements

Each node must have at least two network adapters: one for the public network interface and the other for the private network interface, or interconnect. In addition, the interface names associated with the network adapters for each network must be the same on all nodes.

For the public network, each network adapter must support TCP/IP. For the private network, the interconnect must support UDP, using high-speed network adapters and switches that support TCP/IP. Gigabit Ethernet or an equivalent is recommended.

Before starting the installation, each node requires an IP address and an associated host name registered in the DNS or the /etc/hosts file for each public network interface. One unused virtual IP address, and an associated virtual host name registered in the DNS or the /etc/hosts file and configured for the primary public network interface, is needed for each node. The virtual IP address must be in the same subnet as the associated public interface. After installation, you can configure clients to use the virtual host name or IP address. If a node fails, its virtual IP address fails over to another node.

For the private IP address and optional host name for each private interface, Oracle recommends that you use private network IP addresses, for example, 10.*.*.* or 192.168.*.*. You can use the /etc/hosts file on each node to associate private host names with private IP addresses.
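
For illustration, a two-node cluster's /etc/hosts might contain entries like the following. The addresses and the -priv host names are assumptions chosen for this sketch, not values from the course:

```
# Public interfaces
192.168.1.101   clusnode-1
192.168.1.102   clusnode-2
# Virtual IPs (same subnet as the public interfaces)
192.168.1.201   clusnode-1vip
192.168.1.202   clusnode-2vip
# Private interconnect
10.0.0.1        clusnode-1priv
10.0.0.2        clusnode-2priv
```

The same entries must resolve identically on every node, whether through DNS or a copy of this file.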

RAC Network Software Requirements

• Supported interconnect software protocols are required:
  – TCP/IP
  – UDP
  – Remote Shared Memory
  – Hyper Messaging protocol
  – Reliable Data Gram

• Token Ring is not supported on AIX platforms.

RAC Network Software Requirements

Each node in a cluster requires a supported interconnect software protocol to support Cache Fusion, and TCP/IP to support CRS polling. In addition to UDP, other supported vendor-specific interconnect protocols include Remote Shared Memory, Hyper Messaging protocol, and Reliable Data Gram. Note that Token Ring is not supported for cluster interconnects on AIX. Your interconnect must be certified by Oracle for your platform. You should also have a Web browser to view online documentation.

Oracle's clusterware provides functionality equivalent to that of the vendor clusterware, and using Oracle clusterware reduces installation and support complications. However, vendor clusterware may be needed if you use a non-Ethernet interconnect, or if you have deployed clusterware-dependent applications on the same cluster where you deploy RAC.

Package Requirements

Required packages and versions for Red Hat 3.0:
• gcc-3.2.3-2
• compat-db-4.0.14.5
• compat-gcc-7.3-2.96.122
• compat-gcc-c++-7.3-2.96.122
• compat-libstdc++-7.3-2.96.122
• compat-libstdc++-devel-7.3-2.96.122
• openmotif21-2.1.30-8
• setarch-1.3-1

Package Requirements

Depending on the products that you intend to install, verify that the packages listed in the slide above are installed on the system. The OUI performs checks on your system to verify that it meets the Linux package requirements of the cluster database and related services. To ensure that these checks succeed, verify the requirements before you start the OUI. To determine whether the required packages are installed, enter a command similar to the following:

# rpm -q package_name
# rpm -qa | grep package_name_segment

For example, to check the gcc compatibility packages, run the following command:

# rpm -qa | grep compat
compat-db-4.0.14.5
compat-gcc-7.3-2.96.122
compat-gcc-c++-7.3-2.96.122
compat-libstdc++-7.3-2.96.122
compat-libstdc++-devel-7.3-2.96.122

If a package is not installed, install it from your Linux distribution media as the root user by using the rpm -i command. For example, to install the compat-db package, use the following command:

# rpm -i compat-db-4.0.14.5.i386.rpm
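
The check can be automated with a small helper. This is a sketch: check_missing is a hypothetical function, not an Oracle tool; it compares an installed-package listing, such as the output of rpm -qa, against the required list:

```shell
#!/bin/sh
# Packages required for Red Hat 3.0, from the slide above.
REQUIRED="gcc-3.2.3-2
compat-db-4.0.14.5
compat-gcc-7.3-2.96.122
compat-gcc-c++-7.3-2.96.122
compat-libstdc++-7.3-2.96.122
compat-libstdc++-devel-7.3-2.96.122
openmotif21-2.1.30-8
setarch-1.3-1"

# check_missing: reads an installed-package list on stdin and prints
# each required package that does not appear in it.
check_missing() {
    installed=$(cat)
    for pkg in $REQUIRED; do
        echo "$installed" | grep -q "^$pkg" || echo "missing: $pkg"
    done
}

# On a real system: rpm -qa | check_missing
```

An empty output means every required package was found in the listing.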

hangcheck-timer Module Configuration

• The hangcheck-timer module monitors the Linux kernel for hangs.

• Make sure that the hangcheck-timer module is running on all nodes:

• Add entry to start the hangcheck-timer module on all nodes, if necessary:

# /sbin/lsmod | grep -i hang
Module           Size  Used by    Not tainted
hangcheck-timer  2648  0 (unused)

# vi /etc/rc.local

/sbin/insmod hangcheck-timer hangcheck_tick=30 \ hangcheck_margin=180

hangcheck-timer Module Configuration

Another component of the required system software for Linux platforms is the hangcheck-timer kernel module. With the introduction of Red Hat 3.0, this module is part of the operating system distribution. The hangcheck-timer module monitors the Linux kernel for extended operating system hangs that can affect the reliability of a RAC node and cause database corruption. If a hang occurs, the module reboots the node. Verify that the hangcheck-timer module is loaded by running the lsmod command as the root user:

/sbin/lsmod | grep -i hang

If the module is not running, you can load it manually by using the insmod command:

/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

The hangcheck_tick parameter defines how often, in seconds, the hangcheck-timer module checks the node for hangs. The default value is 60 seconds. The hangcheck_margin parameter defines how long, in seconds, the timer waits for a response from the kernel. The default value is 180 seconds. If the kernel fails to respond within the sum of the hangcheck_tick and hangcheck_margin parameter values, then the hangcheck-timer module reboots the system. Using the default values, the node is rebooted if the kernel fails to respond within 240 seconds.

This module must be loaded on each node of your cluster. To ensure that the module is loaded every time the system reboots, verify that the local system startup file contains the command shown in the example, or add the command to the /etc/rc.d/rc.local file.
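
The reboot-threshold arithmetic can be shown directly (the values are the module defaults quoted above):

```shell
#!/bin/sh
# With the defaults, the node reboots if the kernel is unresponsive for
# hangcheck_tick + hangcheck_margin seconds.
hangcheck_tick=60
hangcheck_margin=180
echo $(( hangcheck_tick + hangcheck_margin ))   # 240 seconds
```

With the insmod values shown in the slide (tick=30, margin=180), the threshold is 210 seconds instead.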

Required UNIX Groups and Users

• Create an oracle user, a dba group, and an oinstall group on each node:

• Verify the existence of the nobody nonprivileged user.

# groupadd -g 500 oinstall

# groupadd -g 501 dba

# useradd -u 500 -d /home/oracle -g "oinstall" \
  -G "dba" -m -s /bin/bash oracle

# grep nobody /etc/passwd

nobody:x:99:99:Nobody:/:/sbin/nologin

Required UNIX Groups and Users

You must create the oinstall group the first time you install the Oracle database software on the system. This group owns the Oracle inventory, which is a catalog of all the Oracle database software installed on the system.

You must create the dba group the first time you install the Oracle database software on the system. It identifies the UNIX users that have database administrative privileges. If you want to specify a group name other than the default dba group, you must choose the custom installation type to install the software, or start the OUI as a user that is not a member of this group. In this case, the OUI prompts you to specify the name of this group. It is recommended that the root user be a member of the dba group for CRS considerations.

You must create the oracle user the first time you install the Oracle database software on the system. This user owns all the software installed during the installation. The usual name chosen for this user is oracle. This user must have the Oracle Inventory group as its primary group and the OSDBA (dba) group as a secondary group.

You must verify that the unprivileged user named nobody exists on the system. The nobody user must own the external jobs (extjob) executable after the installation.

The oracle User Environment

• Set umask to 022.
• Set the DISPLAY environment variable.
• Set the ORACLE_BASE environment variable.
• Set the TMP and TMPDIR variables, if needed.

$ cd
$ vi .bash_profile
umask 022
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
TMP=/u01/mytmp; export TMP
TMPDIR=$TMP; export TMPDIR

The oracle User Environment

You must run the OUI as the oracle user. However, before you start the OUI, you must configure the environment of the oracle user. To configure the environment, you must:
• Set the default file mode creation mask (umask) to 022 in the shell startup file
• Set the DISPLAY and ORACLE_BASE environment variables
• Secure enough temporary disk space for the OUI

If the /tmp directory has less than 400 megabytes of free disk space, identify a file system that is large enough and set the TMP and TMPDIR environment variables to specify a temporary directory on this file system. Use the df -k command to identify a suitable file system with sufficient free space. Make sure that the oracle user and the oinstall group can write to the directory.

# df -k
Filesystem  1K-blocks  Used     Available  Use%  Mounted on
/dev/hdb1   3020140    2471980  394744     87%   /
/dev/hdb2   3826584    33020    3599180    1%    /home
/dev/dha1   386008     200000   186008     0%    /dev/shm
/dev/hdb5   11472060   2999244  7890060    28%   /u01
/dev/sda1   8030560    1389664  6640896    18%   /ocfs
# mkdir /u01/mytmp
# chmod 777 /u01/mytmp

User Shell Limits

• Add the following lines to the /etc/security/limits.conf file:

• Add the following line to the /etc/pam.d/login file:

* soft nproc 2047* hard nproc 16384* soft nofile 1024* hard nofile 65536

session required /lib/security/pam_limits.so

User Shell Limits

To improve the performance of the software, you must increase the following shell limits for the oracle user:
• nofile: The maximum number of open file descriptors should be 65536.
• nproc: The maximum number of processes available to a single user must not be less than 16384.

The hard values, or upper limits, for these parameters can be set in the /etc/security/limits.conf file as shown in the slide above. The /etc/pam.d/login entry configures Pluggable Authentication Modules (PAM) to control session security. PAM is a system of libraries that handles the authentication tasks of applications (services) on the system. The principal feature of the PAM approach is that the nature of the authentication is dynamically configurable.

Configuring User Equivalency

1. Edit the /etc/hosts.equiv file.
2. Insert both private and public node names for each node in your cluster.
3. Test the configuration by using rsh:

# vi /etc/hosts.equiv
stc-raclin01
stc-raclin02

# rsh stc-raclin01 uname -r
# rsh stc-raclin02 uname -r

Configuring User Equivalency

The OUI detects whether the machine on which you are running the OUI is part of the cluster. If it is, you are prompted to select the nodes from the cluster on which you would like the patch set to be installed. For this to work properly, user equivalence must be in effect for the oracle user on each node of the cluster. To enable user equivalence, make sure that the /etc/hosts.equiv file exists on each node with an entry for each trusted host. For example, if the cluster has two nodes, stc-raclin01 and stc-raclin02, the hosts.equiv files should look like this:

[root@stc-raclin01]# cat /etc/hosts.equiv
stc-raclin01
stc-raclin02
[root@stc-raclin02]# cat /etc/hosts.equiv
stc-raclin01
stc-raclin02

Using ssh

The Oracle 10g Universal Installer also supports ssh and scp (OpenSSH) for remote installs. The ssh command is a secure replacement for the rlogin, rsh, and telnet commands. To connect to an OpenSSH server from a client machine, you must have the openssh packages installed on the client machine.

Configuring User Equivalency (continued)

$ rpm -qa | grep -i openssh
openssh-clients-3.6.1p2-18
openssh-3.6.1p2-18
openssh-askpass-3.6.1p2-18
openssh-server-3.6.1p2-18

Required Directories for the Oracle Database Software

You must identify or create four directories for the Oracle database software:
• Oracle base directory
• Oracle inventory directory
• CRS home directory
• Oracle home directory

Required Directories for the Oracle Database Software

The Oracle base (ORACLE_BASE) directory acts as a top-level directory for the Oracle database software installations. On UNIX systems, the Optimal Flexible Architecture (OFA) guidelines recommend that you use a path similar to the following for the Oracle base directory:

/mount_point/app/oracle_sw_owner

where mount_point is the mount-point directory for the file system that contains the Oracle database software, and oracle_sw_owner is the UNIX username of the Oracle database software owner, which is usually oracle. You must create the ORACLE_BASE directory before starting the installation. A minimum of four gigabytes of disk space is needed. When installing on Linux, do not create the Oracle base directory on an OCFS file system.

The Oracle inventory directory (oraInventory) stores the inventory of all software installed on the system. It is required by, and shared by, all the Oracle database software installations on a single system. The first time you install the Oracle database software on a system, the OUI prompts you to specify the path to this directory. If you are installing the software on a local file system, it is recommended that you choose the following path:

ORACLE_BASE/oraInventory

The OUI creates the directory that you specify and sets the correct owner, group, and permissions on it.

Required Directories for the Oracle Database Software (continued)

The CRS home directory is the directory where you choose to install the software for Oracle CRS. You must install CRS in a separate home directory. When you run the OUI, it prompts you to specify the path to this directory, as well as a name that identifies it. The directory that you specify must be a subdirectory of the Oracle base directory. It is recommended that you specify a path similar to the following for the CRS home directory:

ORACLE_BASE/product/10.1.0/crs_1

The Oracle home directory is the directory where you choose to install the software for a particular Oracle product. You must install different Oracle products, or different releases of the same Oracle product, in separate Oracle home directories. When you run the OUI, it prompts you to specify the path to this directory, as well as a name that identifies it. The directory that you specify must be a subdirectory of the Oracle base directory. It is recommended that you specify a path similar to the following for the Oracle home directory:

ORACLE_BASE/product/10.1.0/db_1
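
The layout described above can be created with a few commands. This is a sketch only: a relative base path is used so the example runs without root privileges, and the chown step in the comment is what a real root-run installation would add:

```shell
#!/bin/sh
# Sketch: a real installation uses ORACLE_BASE=/u01/app/oracle, runs as
# root, and finishes with: chown -R oracle:oinstall "$ORACLE_BASE"
ORACLE_BASE=./u01/app/oracle

mkdir -p "$ORACLE_BASE/oraInventory" \
         "$ORACLE_BASE/product/10.1.0/crs_1" \
         "$ORACLE_BASE/product/10.1.0/db_1"

ls "$ORACLE_BASE/product/10.1.0"
```

Keeping the CRS home and the database home as separate subdirectories of ORACLE_BASE matches the OFA recommendations above.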

Linux Kernel Parameters

Parameter            Value                              File
semmsl               250                                /proc/sys/kernel/sem
semmns               32000                              /proc/sys/kernel/sem
semopm               100                                /proc/sys/kernel/sem
semmni               128                                /proc/sys/kernel/sem
shmall               2097152                            /proc/sys/kernel/shmall
shmmax               Half the size of physical memory   /proc/sys/kernel/shmmax
shmmni               4096                               /proc/sys/kernel/shmmni
file-max             65536                              /proc/sys/fs/file-max
ip_local_port_range  1024 65000                         /proc/sys/net/ipv4/ip_local_port_range

Linux Kernel Parameters
Verify that the kernel parameters shown in the table above are set to values greater than or equal to the recommended values shown. Use the sysctl command to view the current values of the various parameters. For example, to view the semaphore parameters, run the following command:

# sysctl -a | grep sem
kernel.sem = 250 32000 32 128

The values shown represent semmsl, semmns, semopm, and semmni, in that order. Kernel parameters that can be manually set include:
• SEMMNS: The number of semaphores in the system
• SEMMNI: The number of semaphore set identifiers, which controls the number of semaphore sets that can be created at any one time
• SEMMSL: Semaphores are grouped into semaphore sets, and SEMMSL controls the array size, or the number of semaphores that are contained per semaphore set. It should be about ten more than the maximum number of Oracle processes.
• SEMOPM: The maximum number of operations per semaphore operation call
• SHMMAX: The maximum size of a single shared-memory segment. This should be slightly larger than the largest anticipated size of the System Global Area (SGA), if possible.
• SHMMNI: The number of shared memory identifiers
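Because the four semaphore values travel together in the single kernel.sem setting, scripts typically split them positionally. A minimal sketch, with the sample value hardcoded for illustration (on a live system you would capture it with sysctl -n kernel.sem):

```shell
# Split a kernel.sem value string into its four component parameters.
# The order is fixed: semmsl semmns semopm semmni.
sem_value="250 32000 100 128"   # illustrative value, not read from /proc
read -r semmsl semmns semopm semmni <<EOF
$sem_value
EOF
echo "SEMMSL=$semmsl SEMMNS=$semmns SEMOPM=$semopm SEMMNI=$semmni"
```

The same positional split applies to the raw /proc/sys/kernel/sem contents.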


Linux Kernel Parameters (continued)
You can adjust these semaphore parameters manually by writing the contents of the /proc/sys/kernel/sem file:

# echo SEMMSL_value SEMMNS_value SEMOPM_value \
  SEMMNI_value > /proc/sys/kernel/sem

To change these parameter values and make them persistent, edit the /etc/sysctl.conf file as follows:

# vi /etc/sysctl.conf
...
kernel.sem = 250 32000 100 128
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Note: The kernel parameters shown above are recommended values only. For production database systems, it is recommended that you tune these values to optimize the performance of the system.
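A preinstallation check usually compares each current value against the recommended minimum. A hedged sketch — the "current" numbers passed in below are illustrative, not read from /proc:

```shell
# Compare a current kernel parameter value against a recommended minimum
# and report whether it satisfies the requirement.
check_min() {
  name=$1; current=$2; minimum=$3
  if [ "$current" -ge "$minimum" ]; then
    echo "$name OK ($current >= $minimum)"
  else
    echo "$name LOW ($current < $minimum)"
  fi
}
# Illustrative inputs; on a live system: check_min fs.file-max "$(sysctl -n fs.file-max)" 65536
check_min fs.file-max 65536 65536
check_min kernel.shmmni 2048 4096
```

Anything reported LOW should be raised in /etc/sysctl.conf as shown above.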


Cluster Setup Tasks

1. View the Certifications by Product section at http://metalink.oracle.com/.

2. Verify your high-speed interconnects.
3. Determine the shared storage (disk) option for your system:
   – OCFS or other shared file system solution
   – Raw devices
   – ASM

4. Install the necessary operating system patches.

Cluster Setup Tasks
Ensure that you have a certified combination of the operating system and the Oracle database software version by referring to the certification information on Oracle MetaLink in the Availability & Certification section. See the Certifications by Product section at http://metalink.oracle.com.

Verify that your cluster interconnects are functioning properly. If you are using vendor-specific clusterware, follow the vendor's instructions to ensure that it is functioning properly.

Determine the storage option for your system, and configure the shared disk. Oracle recommends that you use Automatic Storage Management (ASM) and Oracle Managed Files (OMF), or a cluster file system such as OCFS. If you use ASM or a cluster file system, you can also utilize OMF and other Oracle Database 10g storage features.

If the operating system requires specific patches or RPMs in support of your cluster software, apply them before installing any Oracle database software.

Note: For more information about ASM, refer to the lessons titled "ASM" and "Administering Storage in RAC" in this course.


Obtaining OCFS

• To get OCFS for Linux, visit the Web site at http://oss.oracle.com/projects/ocfs/files.

• Download the following Red Hat Package Manager (RPM) packages:
– ocfs-support-1.0-n.i686.rpm

– ocfs-tools-1.0-n.i686.rpm

• Download the following RPM kernel module: ocfs-2.4.21-EL-typeversion.rpm, where typeversion is the Linux version.

Obtaining OCFS
Download OCFS for Linux in a compiled form from the following Web site: http://oss.oracle.com/projects/ocfs/

In addition, you must download the following RPM packages:
• ocfs-support-1.0-n.i686.rpm
• ocfs-tools-1.0-n.i686.rpm

Also, download the RPM kernel module ocfs-2.4.21-EL-typeversion.rpm, where the variable typeversion stands for the type and version of the kernel that is used. Use the following command to find out which Red Hat kernel version is installed on your system:

uname -a

The alphanumeric identifier at the end of the kernel name indicates the kernel version that you are running. Download the kernel module that matches your kernel version. For example, if the kernel name that is returned by the uname command ends with -21.EL, download the ocfs-2.4.21-EL-1.0.11-1.i686.rpm kernel module.

Note: Ensure that you use the SMP or enterprise kernel that is shipped with Red Hat Advanced Server 3.0, without any non-Red Hat patches or customization. If you modify the kernel, Oracle Corporation cannot support it.
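Matching the module to the kernel amounts to extracting the suffix of the kernel release string. A small sketch, using an illustrative release string rather than a live uname -r capture:

```shell
# Derive the version suffix of a kernel release string, which selects the
# matching OCFS kernel-module RPM. The release string below is a sample;
# on a live system you would capture it with: uname -r
kernel_release="2.4.21-4.EL"
typeversion=${kernel_release#*-}   # strip everything up to the first "-"
echo "kernel suffix: $typeversion"
```

The echoed suffix is what you match against the available OCFS kernel-module RPM names.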


Installing the OCFS RPM Packages

1. Install the support RPM file: ocfs-support-1.0-n.i686.rpm

2. Install the correct kernel module RPM file: ocfs-2.4.21-EL-typeversion.rpm

3. Install the tools RPM file: ocfs-tools-1.0-n.i686.rpm

# rpm -i ocfs-2.4.21-EL-1.0.13-1.i686.rpm

# rpm -i ocfs-support-1.0.10-1.i686.rpm

# rpm -i ocfs-tools-1.0-10.i686.rpm

Installing the OCFS RPM Packages
Use the following procedure to prepare the environment to run OCFS. Note that you must perform all the steps as the root user and that each step must be performed on all the nodes of the cluster.

Install the support RPM file, ocfs-support-1.0-n.i686.rpm, and then the correct kernel module RPM file for your system. Next, install the tools RPM file, ocfs-tools-1.0-n.i686.rpm. Note that n represents the most current release of the support and tools RPMs (for example, ocfs-tools-1.0.10-1.i686.rpm).

To install the files, enter the following command:

# rpm -i ocfs_rpm_package

where the variable ocfs_rpm_package is the name of the RPM package that you are installing. For example, to install the kernel module RPM file for the 21.EL enterprise kernel, you must enter the following command:

# rpm -i ocfs-2.4.21-EL-1.0.13-1.i686.rpm

Make sure that all OCFS RPMs are installed by running an rpm query:

# rpm -qa | grep -i ocfs
ocfs-2.4.21-EL-1.0.13-1
ocfs-support-1.0.10-1
ocfs-tools-1.0.10-1


Starting ocfstool

# /usr/bin/ocfstool&

Starting ocfstool
Use the ocfstool utility to generate the /etc/ocfs.conf file. The ocfstool utility is a graphical application, so you must make sure that your DISPLAY variable is properly set. Start ocfstool as shown in the following example:

# DISPLAY=:0.0
# export DISPLAY
# /usr/bin/ocfstool&

The OCFS Tool window opens. Click in the window to make it active, and select the Generate Config option from the Tasks menu. The OCFS Generate Config window is displayed.


Generating the ocfs.conf File

• Confirm that the values are correct.

• View the /etc/ocfs.conf file.

$ cat /etc/ocfs.conf
# Ensure this file exists in /etc
#

node_name = stc-raclin01
node_number = 1
ip_address = 148.2.65.11
ip_port = 7000
guid = 98C704EBD14F6EBC68660060976E5460

Generating the ocfs.conf File
When the OCFS Generate Config window appears, check the values that are displayed in the window to confirm that they are correct, and then click the OK button. Based on the information that is gathered from your installation, the ocfstool utility generates the necessary /etc/ocfs.conf file. After the generation is completed, open the /etc/ocfs.conf file in a text editor and verify that the information is correct before continuing.

The guid value is generated from the Ethernet adapter hardware address and must not be edited manually. If the adapter is switched or replaced, remove the ocfs.conf file and regenerate it, or run the ocfs_uid_gen utility that is located in /sbin.
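Verification of the generated file can be scripted by reading individual keys out of the key = value format. A minimal sketch, with the config body embedded for illustration (on a real node you would read /etc/ocfs.conf):

```shell
# Pull node_name and ip_address out of ocfs.conf-style "key = value" text.
# The sample body below mirrors the example file shown above.
conf="node_name = stc-raclin01
node_number = 1
ip_address = 148.2.65.11
ip_port = 7000"
node_name=$(printf '%s\n' "$conf" | sed -n 's/^node_name = //p')
ip_address=$(printf '%s\n' "$conf" | sed -n 's/^ip_address = //p')
echo "$node_name listens on $ip_address"
```

The extracted values can then be compared against the host name and interconnect address you expect for the node.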


Preparing the Disks

1. Partition the disk for the OCFS file system.
2. Create the necessary mount points.
3. Load the ocfs module and start ocfstool.
4. Format and mount the partitions.

Preparing the Disks
By using the fdisk utility, partition the disk to allocate space for the OCFS (or ASM) file systems according to your storage needs. You should partition your system in accordance with Oracle Optimal Flexible Architecture (OFA) standards. In Linux, SCSI disk devices are named by using the following convention:
• sd: SCSI disk
• a–z: Disks 1 through 26
• 1–4: Partitions one through four

After the partitions are created, use the mkdir command to create the mount points for the OCFS file systems:

# mkdir /ocfs1 /ocfs2 /ocfs3    (more as needed)
# chown oracle:dba /ocfs1 ...

As the root user, load the OCFS module and start the ocfstool utility:

# load_ocfs
/sbin/insmod ocfs node_name=stc-raclin01 ip_address=192.168.1.11
cs=1807 guid=191C46E04CE4C1130B840050BFABD260 comm_voting=1 ip_po0
Using /lib/modules/2.4.21-EL-ABI/ocfs/ocfs.o
Module ocfs loaded
# /sbin/ocfstool&

Go to Tasks on the menu bar, and click Format. After supplying the needed information, click OK in the OCFS Format window. When finished, click the Mount button.
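The sd-letter-number naming convention above can be decomposed mechanically, which is handy when scripting over many partitions. A small sketch using shell parameter expansion:

```shell
# Split a SCSI block device name into its disk letter and partition
# number, following the sd<letter><partition> convention described above.
dev=/dev/sda1
base=${dev#/dev/sd}        # -> "a1"
disk=${base%%[0-9]*}       # -> "a"
part=${base#"$disk"}       # -> "1"
echo "device $dev: disk $disk, partition $part"
```

The same expansions work for any /dev/sd device name with a trailing partition number.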


Loading OCFS at Startup

The /etc/rc5.d/S24ocfs file loads OCFS at startup.

# more S24ocfs
...
case "`basename $0`" in
  *ocfs)
    MODNAME=ocfs
    FSNAME=OCFS
    LOAD_OCFS=/sbin/load_ocfs
    ;;
  *ocfs2)
    MODNAME=ocfs2
    FSNAME=OCFS2
    LOAD_OCFS=/sbin/load_ocfs2
    ;;
...
esac
...

Loading OCFS at Startup
To start OCFS, the ocfs.o module must be loaded at system startup, before CRS is started. The startup script /etc/rc5.d/S24ocfs is provided to do this. This script is designed to run before /etc/rc5.d/S25netfs, which is responsible for mounting network file systems such as NFS, Samba, and OCFS. The CRS startup script, /etc/rc5.d/S96init.crs, runs after these scripts because it needs a mounted OCFS volume. The exception is when the voting disk and the CRS repository (OCR) are placed on raw devices or ASM volumes. The startup script is capable of handling OCFS versions 1 and 2. It is not advisable to modify this startup script directly.


Mounting OCFS on Startup

• Edit /etc/fstab, and add lines similar to these:

• The _netdev option prevents mount attempts before the S24ocfs script runs.

• uid is the user ID of the oracle user as defined in /etc/passwd.

• gid is the group ID of the dba group as defined in /etc/group.

/dev/sda1 /ocfs1 ocfs _netdev uid=500,gid=502/dev/sda2 /ocfs2 ocfs _netdev uid=500,gid=502

oracle:x:500:501::/home/oracle/:/bin/bash

dba:x:502:

Mounting OCFS on Startup
To mount the file systems automatically on startup, add lines similar to the following to the /etc/fstab file for each OCFS file system:

/dev/sda1 /ocfs1 ocfs _netdev uid=500,gid=502

Ensure that the OCFS file systems are mounted in sequence, one node after another, waiting for each mount to complete before starting the mount on the next node. The OCFS file systems must be mounted after the standard file systems, as shown below:

# cat /etc/fstab

LABEL=/     /      ext3  defaults  1 1
...
LABEL=/tmp  /tmp   ext3  defaults  1 2
LABEL=/usr  /usr   ext3  defaults  1 2
LABEL=/var  /var   ext3  defaults  1 2
/dev/sdb2   swap   swap  defaults  0 0
...
/dev/sda1   /ocfs1 ocfs  _netdev uid=500,gid=502
/dev/sda2   /ocfs2 ocfs  _netdev uid=500,gid=502

Note: The OCFS file systems must be mounted after the OCFS module is loaded.
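With several OCFS mount points, the fstab entries follow a fixed pattern and can be generated rather than typed. A hedged sketch — the device numbering and the uid/gid values (500/502, matching the example above) are illustrative:

```shell
# Print /etc/fstab entries for a list of OCFS mount points, assigning
# consecutive /dev/sda partitions. Values mirror the example entries
# above; adjust devices, uid, and gid to your environment.
emit_fstab() {
  i=1
  for mp in "$@"; do
    printf '/dev/sda%d %s ocfs _netdev uid=500,gid=502\n' "$i" "$mp"
    i=$((i + 1))
  done
}
emit_fstab /ocfs1 /ocfs2
```

Redirect the output with >> /etc/fstab only after reviewing it.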


Using Raw Partitions

1. Install shared disks.
2. Identify the shared disks to use.
3. Partition the devices.

# fdisk –l

Disk /dev/sda: 9173 MB, 9173114880 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb: 9173 MB, 9173114880 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

...
# fdisk /dev/sda
...

Using Raw Partitions
Although Red Hat Enterprise Linux 3.0 and SLES 8 provide a Logical Volume Manager (LVM), this LVM is not cluster aware. For this reason, Oracle does not support the use of logical volumes with RAC for either CRS or database files on Linux. The use of logical volumes for raw devices is supported only for single-instance databases; they are not supported for RAC databases.

To create the required raw partitions, perform the following steps:

1. If necessary, install the shared disks that you intend to use, and reboot the system.
2. To identify the device name for the disks that you want to use for the database, enter the following command:

# /sbin/fdisk -l
Disk /dev/sda: 9173 MB, 9173114880 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb: 9173 MB, 9173114880 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
...


Using Raw Partitions

Purpose                            Partition Size (MB)      Number of Partitions
SYSTEM tablespace                  500                      1
SYSAUX tablespace                  300 + 250 per instance   1
UNDOTBSn tablespace                500                      1 per instance
EXAMPLE tablespace                 160                      1
USERS tablespace                   120                      1
Online redo logs (2 per instance)  120                      2 per instance
First and second control files     110                      2
TEMP tablespace                    250                      1
Server parameter file (SPFILE)     5                        1
Password file                      5                        1
Oracle CRS voting disk             20                       1
Volume for OCR                     100                      1

Using Raw Partitions (continued)
3. Partition the devices. You can create the required raw partitions either on new devices that you added or on previously partitioned devices that have unpartitioned free space. To identify devices that have unpartitioned free space, examine the start and end cylinder numbers of the existing partitions and determine whether the device contains unused cylinders. Identify the number and size of the raw files that you need for your installation. Use the chart above as a starting point in determining your storage needs. Use the following guidelines when creating partitions:
- Use the p command to list the partition table of the device.
- Use the n command to create a new partition.
- After you have created all the required partitions on this device, use the w command to write the modified partition table to the device.

# fdisk /dev/sda
Command (m for help): n
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1020, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1020): +500M    (SYSTEM tablespace)
Command (m for help): w
The partition table has been altered!
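The table above also lets you estimate total raw storage up front. A rough sketch for a two-instance database, multiplying the per-instance rows by the instance count (sizes taken from the table; treat the result as a planning figure only):

```shell
# Rough total raw storage (MB) implied by the partition table above for a
# RAC database with the given number of instances.
instances=2
total=$(( 500                    \
        + 300 + 250 * instances  \
        + 500 * instances        \
        + 160 + 120              \
        + 120 * 2 * instances    \
        + 110 * 2                \
        + 250 + 5 + 5 + 20 + 100 ))
echo "approximate total: ${total} MB"
```

For two instances this works out to roughly 3.7 GB of raw partitions before any user data.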


Binding the Partitions

1. Identify the devices that are already bound.

2. Edit the /etc/sysconfig/rawdevices file.

3. Adjust the ownership and permissions of the OCR file to root:dba and 640, respectively.

4. Adjust the ownership and permissions of all other raw files to oracle:dba and 660, respectively.

5. Execute the rawdevices command.

# /usr/bin/raw -qa

# /etc/sysconfig/rawdevices file
# raw device bindings

Binding the Partitions
1. After you have created the required partitions, you must bind the partitions to raw devices. However, you must first determine which raw devices are already bound to other devices. To do so, enter the following command:

# /usr/bin/raw -qa

Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device.

2. Open the /etc/sysconfig/rawdevices file in any text editor, and add a line similar to the following for each partition that you created:

/dev/raw/raw1 /dev/sda1

Specify an unused raw device for each partition.
3. For the raw device that you created for the Oracle Cluster Registry (OCR), enter commands similar to the following to set the owner, group, and permissions on the device file:

# chown root:dba /dev/raw/rawn
# chmod 640 /dev/raw/rawn


Binding the Partitions (continued)
4. For each additional raw device that you specified in the rawdevices file, enter commands similar to the following to set the owner, group, and permissions on the device file:

# chown oracle:oinstall /dev/raw/rawn
# chmod 660 /dev/raw/rawn

5. To bind the partitions to the raw devices, enter the following command:

# /sbin/service rawdevices restart

Because you edited the rawdevices file, the system also binds the partitions to the raw devices when it reboots.
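The rawdevices entries themselves follow a simple numbered pattern and can be generated for review before editing the file. A sketch with illustrative partition names:

```shell
# Generate /etc/sysconfig/rawdevices binding lines, one raw device per
# partition, numbering the raw devices consecutively. Partition names
# are illustrative.
bind_lines() {
  n=1
  for part in "$@"; do
    echo "/dev/raw/raw$n $part"
    n=$((n + 1))
  done
}
bind_lines /dev/sda1 /dev/sda2
```

Check the proposed lines against the output of raw -qa before appending them, so that you do not reuse a raw device that is already bound.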


Raw Device Mapping File

1. Create a database directory, and set proper permissions.

2. Edit the $ORACLE_BASE/oradata/dbname/dbname_raw.conf file.

3. Set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.

# mkdir -p $ORACLE_BASE/oradata/dbname# chown oracle:oinstall $ORACLE_BASE/oradata# chmod 775 $ORACLE_BASE/oradata

# cd $ORACLE_BASE/oradata/dbname/# vi dbname_raw.conf

Raw Device Mapping File
To enable the DBCA to identify the appropriate raw partition for each database file, you must create a raw device mapping file, as follows:

1. Create a database file subdirectory under the Oracle base directory, and set the appropriate owner, group, and permissions on it:

# mkdir -p $ORACLE_BASE/oradata/dbname# chown -R oracle:oinstall $ORACLE_BASE/oradata# chmod -R 775 $ORACLE_BASE/oradata

2. Change directory to the $ORACLE_BASE/oradata/dbname directory, and edit the dbname_raw.conf file in any text editor to create a file similar to the following:

system=/dev/raw/raw1
sysaux=/dev/raw/raw2
example=/dev/raw/raw3
users=/dev/raw/raw4
temp=/dev/raw/raw5
undotbs1=/dev/raw/raw6
undotbs2=/dev/raw/raw7
...


Raw Device Mapping File (continued)
Use the following guidelines when creating or editing this file:
• Each line in the file must have the following format:
  database_object_identifier=raw_device_path
• For a RAC database, the file must specify one automatic undo tablespace data file (undotbsn) and two redo log files (redon_1, redon_2) for each instance.
• Specify at least two control files (control1, control2).
• To use manual instead of automatic undo management, specify a single RBS tablespace data file (rbs) instead of the automatic undo management tablespaces.
3. Save the file, and note the file name that you specified. When you configure the oracle user's environment later in this lesson, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
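A quick format check on the mapping file can catch a malformed line before the DBCA does. A minimal sketch; the pattern only verifies the identifier=raw-device shape described above, not that the devices exist:

```shell
# Check a raw device mapping line against the
# database_object_identifier=raw_device_path format.
check_line() {
  case $1 in
    *=/dev/raw/raw*) echo "ok: $1" ;;
    *)               echo "bad: $1" ;;
  esac
}
check_line "system=/dev/raw/raw1"
check_line "control1 /dev/raw/raw8"   # missing "=", flagged as bad
```

Run it over each line of dbname_raw.conf (for example, with a while read loop) before pointing DBCA_RAW_CONFIG at the file.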


Installing Cluster Ready Services

$ /cdrom/crs/runInstaller

Installing Cluster Ready Services
Run the OUI by executing the runInstaller command from the /crs subdirectory on the Oracle Cluster Ready Services Release 1 (10.1.0.2) CD-ROM. This is a separate CD that contains the CRS software. When the OUI displays the Welcome page, click Next.

If you are performing this installation in an environment in which you have never installed the Oracle database software (that is, the environment does not have an OUI inventory), the OUI displays the Specify Inventory directory and credentials page. If the OUI inventory is already set up, the OUI displays the Specify File Locations page instead.


Specifying the Inventory Directory

# cd /u01/app/oracle/oraInventory# ./orainstRoot.sh

Specifying the Inventory Directory
On the Specify Inventory directory and credentials page, enter the inventory location. If ORACLE_BASE has been properly set, the OUI suggests the proper directory for the inventory location as per OFA guidelines. If ORACLE_BASE has not been set, enter the proper inventory location according to your requirements. Enter the UNIX group name oinstall in the Specify Operating System group name field, and then click Next. The OUI displays a dialog box requesting that you run the orainstRoot.sh script from the oraInventory directory. Open a terminal window on the host where the OUI is running, change directory to the oraInventory directory, and execute the script as the root user as follows:

# cd /u01/app/oracle/oraInventory
# ./orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing groupname of /u01/app/oracle/oraInventory to oinstall.

After the script is run, click the Continue button to close the dialog box, and then click the Next button to continue.


File Locations and Language Selection

File Locations and Language Selection
Next, the OUI displays the Specify File Locations page, which contains predetermined information for the source of the installation files and the target destination. The OUI provides a CRS Home name in the Name field in the Destination section of the page. You may accept this name or enter a new one. If ORACLE_BASE has been set, an OFA-compliant directory path appears in the Path field below the Destination section. If not, enter the target destination, and click Next to continue.

The OUI then displays the Language Selection page. Select the language that you want to use for your installation in the Available Languages list on the left of the page. Click the right arrow (>>) to move your selection to the Selected Languages list, and then click the Next button to continue.


Cluster Configuration

Cluster Configuration
The Cluster Configuration page displays predefined node information if the OUI detects that your system has vendor clusterware. Otherwise, the OUI displays the Cluster Configuration page without the predefined node information.

Node Names
• Vendor: Use vendor node names.
• Oracle: Use host names as returned by /bin/hostname:

# hostname
raclin01

• Private interconnect

Names or IP Addresses
• Names must be resolvable by every node through DNS or /etc/hosts.
• Names must exist and be on the same subnet.


Private Interconnect Enforcement

Private Interconnect Enforcement
The Private Interconnect Enforcement page enables you to select the network interfaces on your cluster nodes to use for internode communication. Ensure that the network interfaces that you choose for the interconnect have enough bandwidth to support the cluster- and RAC-related network traffic. A gigabit Ethernet interface is highly recommended for the private interconnect. To configure the interface for private use, click in the Interface Type field for the interface that you have chosen (eth1 in the example in the slide), and select Private from the drop-down list. When you have finished, click the Next button to continue.


Oracle Cluster Registry File

Oracle Cluster Registry File
When you click the Next button on the Private Interconnect Enforcement page, the OUI looks for the /etc/oracle/ocr.loc file (on Linux systems). If your environment is HP-UX or Solaris, the OUI looks in the /var/opt/oracle directory. On other UNIX systems, the OUI looks for the ocr.loc file in the /etc directory. If the ocr.loc file exists and has a valid entry for the OCR location, the Voting Disk Location page appears. Click the Next button to continue.

# cat /etc/oracle/*.loc
ocrconfig_loc=/ocfs/OCR/ocr.dbf
local_only=FALSE

Otherwise, the Oracle Cluster Registry page appears. Enter a fully qualified file name for the raw device or shared file system file for the OCR. Click Next. The Voting Disk page appears.


Voting Disk File

Voting Disk File
On the Voting Disk page, enter a complete path and file name for the file in which you want to store the voting disk. This must be a shared raw device or a shared file system file located on a cluster file system, such as OCFS, or on a network file system (NFS) mount, such as a NetApp filer volume. If you are using raw devices, remember that the storage size for the OCR should be at least 100 megabytes. In addition, it is recommended that you use a redundant array of independent disks (RAID) for storing the OCR and the voting disk to ensure continuous availability of the partitions. When you are ready to continue, click the Next button.

If the Oracle inventories (oraInventory) on the remote nodes are not set up, the OUI displays a dialog box prompting you to run the orainstRoot.sh script on all the nodes:

[raclin01] # /u01/app/oracle/oraInventory/orainstRoot.sh

[raclin02] # /u01/app/oracle/oraInventory/orainstRoot.sh

When you have run the oraInventory script on both nodes, click the Continue button to close the dialog box.


Summary and Install

Summary and Install
The OUI displays the Summary page, which lists the components that the OUI must install. Click the Install button. The Install page is then displayed, informing you of the progress of the installation.

During the installation, the OUI first copies the software to the local node and then copies it to the remote nodes.


Running the root.sh Script on All Nodes

Running the root.sh Script on All Nodes
Next, the OUI displays a dialog box indicating that you must run the root.sh script on all the nodes that are part of this installation. When you complete the final execution of root.sh, the script runs the following assistants without your intervention:
• Oracle Cluster Registry Configuration Tool (ocrconfig)
• Cluster Configuration Tool (clscfg)

When the root.sh script has been run on all nodes, click the OK button to close the dialog box. Run the olsnodes command from the ORA_CRS_HOME/bin directory to make sure that the software is installed properly. The olsnodes command syntax is:

olsnodes [-n] [-l] [-v] [-g]

where:
-n displays the member number with the member name
-l displays the local node name
-v activates verbose mode
-g activates logging

The output from this command should be a listing of the nodes on which CRS is installed:

$ /u01/app/oracle/crs_1/bin/olsnodes -n
raclin01 1
raclin02 2
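The two-column output of olsnodes -n (node name, member number) is easy to consume in a script. A small sketch, with the output text embedded for illustration rather than captured from a live cluster:

```shell
# Turn sample "olsnodes -n" output into a readable per-node summary.
olsnodes_output="raclin01 1
raclin02 2"
printf '%s\n' "$olsnodes_output" | while read -r node num; do
  echo "member $num is $node"
done
```

On a live cluster you would pipe directly: $ORA_CRS_HOME/bin/olsnodes -n | while read -r node num; do ...; done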


Verifying the CRS Installation

• Check for CRS processes with the ps command.
• Check the CRS startup entries in the /etc/inittab file.

# cat /etc/inittab
# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null

Verifying the CRS Installation
Before continuing with the installation of the Oracle database software, you must verify your CRS installation and startup mechanism. In Oracle9i RAC environments, cluster management was provided by multiple oracm processes. With the introduction of Oracle Database 10g RAC, cluster management is controlled by the evmd, ocssd, and crsd processes. Use the ps command to make sure that the processes are running. Run the following command on both nodes:

$ ps -ef | grep d.bin
oracle 1797 1523 0 Jun02 ? 00:00:00 .../evmd.bin
oracle 1809 1808 0 Jun02 ? 00:00:00 .../ocssd.bin
root   1823 1805 0 Jun02 ? 00:00:00 .../crsd.bin
...

Check the startup mechanism for CRS. In Oracle Database 10g RAC, the CRS processes are started by entries in the /etc/inittab file, which is processed whenever the run level changes (as it does during system startup and shutdown):

h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null

Note: The processes are started at run levels 3 and 5 and are started with the respawn flag.
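The ps check above can be scripted to confirm that all three daemons are present. A hedged sketch; the ps listing is embedded sample text with hypothetical paths, whereas on a live node you would pipe from ps -ef:

```shell
# Confirm that the three CRS daemons appear in a ps listing.
# The listing below is illustrative sample text (paths are hypothetical).
ps_output="oracle 1797 1523 0 Jun02 ? 00:00:00 /u01/crs/bin/evmd.bin
oracle 1809 1808 0 Jun02 ? 00:00:00 /u01/crs/bin/ocssd.bin
root 1823 1805 0 Jun02 ? 00:00:00 /u01/crs/bin/crsd.bin"
for d in evmd ocssd crsd; do
  if printf '%s\n' "$ps_output" | grep -q "$d\.bin"; then
    echo "$d is running"
  else
    echo "$d is MISSING"
  fi
done
```

Run the equivalent check (ps -ef | grep d.bin) on every node before starting the database software installation.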


Verifying the CRS Installation (continued)
This means that if the processes terminate abnormally, they are automatically restarted. If you kill the CRS processes, they automatically restart or, worse, cause the node to reboot. For this reason, stopping CRS by killing the processes is not recommended. If you want to stop CRS without shutting down the node, update the inittab entries on all nodes, comment out the CRS-specific entries, and then run the init.crs stop command:

# /etc/init.d/init.crs stop

The init.crs stop command stops the CRS daemons in the following order: crsd, cssd, and evmd.

If you encounter difficulty with your CRS installation, it is recommended that you check the associated log files in the directories under the CRS Home:

$ORA_CRS_HOME/crs/log: This directory includes traces for CRS resources that are joining, leaving, restarting, and relocating as identified by CRS.

$ORA_CRS_HOME/crs/init: Any core dumps for the crsd.bin daemon are written here.

$ORA_CRS_HOME/css/log: The css logs indicate all actions, such as reconfigurations, missed checkins, connects, and disconnects, from the client CSS listener. In some cases, the logger logs messages with the category of auth.crit for the reboots performed by CRS. This can be used for checking the exact time when the reboot occurred.

$ORA_CRS_HOME/css/init: Core dumps, primarily from ocssd, and the PID file for the cssd daemon, whose death is treated as fatal, are located here. If there are abnormal restarts of cssd, the core files have the format core.<pid>.

$ORA_CRS_HOME/evm/log: Log files for the evmd and evmlogger daemons. These are not used as often for debugging as the CRS and CSS directories.

$ORA_CRS_HOME/evm/init: PID and lock files for evmd are found here. Core files for evmd should also be written here.

$ORA_CRS_HOME/srvm/log: Log files for the OCR are written here.

When you have determined that your CRS installation is successful and fully functional, you can start the Oracle Database 10g software installation.

If you must remove a failed CRS installation, refer to MetaLink Note 239998.1.


Summary

In this lesson, you should have learned how to:
• Describe the installation of Oracle Database 10g RAC
• Perform RAC preinstallation tasks
• Perform cluster setup tasks
• Install OCFS
• Install Oracle Cluster Ready Services


Practice 2: Overview

This practice covers the following topics:
• Configuring the operating system to support a cluster database installation
• Installing and configuring OCFS
• Creating OCFS volumes
• Installing Oracle Cluster Ready Services


RAC Installation and Configuration (Part II)


Objectives

After completing this lesson, you should be able to do the following:
• Install the Oracle database software
• Configure virtual IPs with the Virtual Internet Protocol Configuration Assistant (VIPCA)
• Perform preinstallation database tasks
• Create a cluster database
• Perform postinstallation database tasks
• Identify best configuration practices for RAC


OUI Database Configuration Options

Configuration Type: General purpose, OLTP, DW
Description: Installs a starter database, Oracle options, networking services, and utilities. At the end of the installation, the DBCA creates your RAC database.
Advantages: Minimal input required. You can create your database more quickly.

Configuration Type: Advanced
Description: Enables you to customize your database options and storage components.
Advantages: Create tablespaces and data files. Customize your database.

Configuration Type: Do not create a starter database
Description: Installs only the software. Does not configure the listeners or network infrastructure and does not create a database.
Advantages: Best configuration flexibility.

OUI Database Configuration Options
When you run the Oracle Universal Installer (OUI) and choose to create the database, you can select the General Purpose, Transaction Processing, Data Warehouse, or Advanced database configuration types. If you select the Advanced configuration, then you can use the Database Configuration Assistant (DBCA) to create the database. It is recommended that you use the DBCA to create your database. You can also select the Advanced configuration, select a preconfigured template, customize the template, and use the DBCA to create a database by using the template. These templates correspond to the General Purpose, Transaction Processing, and Data Warehouse configuration types. You can also use the DBCA with the Advanced template to create a database. It is recommended that you use one of the preconfigured database options or use the Advanced option and the DBCA to create your database. However, if you want to configure your environment and create your database manually, select the Do not create a starter database configuration option.


Install the Database Software

$ id oracle
$ /cdrom/dbs/runInstaller

Install the Database Software
The OUI is used to install the Oracle Database 10g software. The OUI must be run as the oracle user. Start the OUI by executing the runInstaller command from the root directory of the Oracle Database 10g Release 1 (10.1.0.2) CD-ROM or the software staging location. When the OUI displays the Welcome page, click the Next button. The Specify File Locations page is displayed.


Specify File Locations

Specify File Locations
The Source field on the Specify File Locations page is prepopulated with the path to the Oracle Database 10g products.xml file. You need not change this location under normal circumstances. In the Destination section of the page, there are fields for the installation name or Oracle Home name and the path for the installed products. Note that the database software cannot share the same location (Oracle Home) as the previously installed Cluster Ready Services (CRS) software. The Name field is populated with a default or suggested installation name. Accept the suggested name or enter your own Oracle Home name. Next, in the Path field, enter the fully qualified path name for the installation, /u01/app/oracle/product/10.1.0/db_1 in the example in the slide. After entering the information, review it for accuracy, and click the Next button to continue.


Specify Cluster Installation

Specify Cluster Installation
The Specify Hardware Cluster Installation Mode page is displayed next. Because the OUI is node dependent, you must indicate whether you want the installation to be copied to the recognized and selected nodes in your cluster, or whether you want a single, noncluster installation to take place. Most installation scenarios require the Cluster Installation option. To do this, click the Cluster Installation option button and make sure that all nodes have been selected in the Node Name list. Note that the local node is always selected for the installation. Additional nodes that are to be part of this installation must be chosen by selecting their check boxes. If you do not see all your nodes listed here, exit the OUI and make sure that CRS is running on all your nodes. Restart the OUI. Click the Next button when you are ready to proceed with the installation. If the OUI does not display the nodes properly, perform clusterware diagnostics by executing the olsnodes -v command from the ORA_CRS_HOME/bin directory, and analyze its output. Refer to your clusterware documentation if the detailed output indicates that your clusterware is not running.


Select Installation Type

Select Installation Type
The Select Installation Type page is displayed next. Your installation options include:

• Enterprise Edition
• Standard Edition
• Custom

For most installations, the Enterprise Edition installation is the correct choice (but Standard Edition is also supported). Selecting the Custom installation type option enables you to install only those Oracle product components that you deem necessary. For this, you must have a good knowledge of the installable Oracle components and of any dependencies or interactions that may exist between them. For this reason, it is recommended that you select the Enterprise Edition installation because it installs all components that comprise the Oracle Database 10g (10.1.0) distribution.


Products Prerequisite Check

Products Prerequisite Check
The Product-specific Prerequisite Checks page verifies the operating system requirements that must be met for the installation to be successful. These requirements include:

• Certified operating system check
• Kernel parameters as required by the database software
• Required operating system packages and correct revisions
• Required glibc and glibc-compat (compatibility) package versions

In addition, the OUI checks whether the ORACLE_BASE user environment variable has been set and, if so, whether the value is acceptable. After each successful check, the Succeeded check box is selected for that test. The test suite results are displayed at the bottom of the page. Any tests that fail are also reported here. The example in the slide shows the results of a completely successful test suite. If you encounter any failures, open another terminal window and correct the deficiency. For example, if your glibc version is too low, acquire the correct version of the glibc Red Hat Package Manager (RPM) package, install it from another terminal window, return to the OUI, and click the Retry button to rerun the tests. When all tests have succeeded, click the Next button to continue.


Select Database Configuration

Select Database Configuration
The Select Database Configuration page appears. On this page, you can choose to create a database as part of the database software installation or defer creation until later. If you choose to install a database, you must select one of the preconfigured starter database types:

• General Purpose
• Transaction Processing
• Data Warehouse
• Advanced (user customizable)

If you choose one of these options, you are queried about the specifics of your database (cluster database name, shared storage options, and so on). After the OUI stops, the DBCA is launched to install your database with the information that you provided. You may also choose to defer the database creation by clicking the Do not create a starter database option button. This option enables you to create the database by manually invoking the DBCA at some point after the OUI finishes installing the database software. This choice provides you with more options than the standard preconfigured database models. In the slide above, the default option is Create a starter database (General Purpose). Instead, select the Do not create a starter database option. Click the Next button to continue.


Check Summary

Check Summary
The Summary page is displayed next. Review the information on this page. Node information and space requirements can be viewed here, as well as selected software components. If you are satisfied with the summary, click the Install button to proceed. If you are not, click the Back button to go back and make the appropriate changes. On the Install page, you can monitor the progress of the installation. During the installation, the OUI copies the software first to the local node and then to the remote nodes.


The root.sh Script

# cd /u01/app/oracle/product/10.1.0/db_1
# ./root.sh

The root.sh Script
At the end of the installation, the OUI displays a dialog box indicating that you must run the root.sh script as the root user on all the nodes where the software is being installed. Execute the root.sh script on one node at a time, and then click the OK button in the dialog box to continue.
Note: The root.sh script launches the Virtual Internet Protocol Configuration Assistant (VIPCA) before exiting. Because the VIPCA is a graphical application, make sure that the root.sh script is run from a graphical terminal session, such as X Windows or VNC, and that the DISPLAY environment variable is properly set.


Launching the VIPCA with root.sh

Launching the VIPCA with root.sh
The VIPCA is called from the root.sh script, and it configures the virtual IP addresses for each node. In addition, the VIPCA also configures nodeapps, consisting of the Global Services Daemon (GSD), the Enterprise Manager agent, and Oracle Notification Service (ONS) for the cluster. Before running the VIPCA, you must make sure that you have unused public IP addresses available for each node and that they are configured in the /etc/hosts file or resolvable through DNS. The VIPCA Welcome page appears first. Click the Next button to continue.
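For a two-node cluster, the name resolution requirement described above might look like the following /etc/hosts fragment. All host names and addresses here are hypothetical examples (the node names follow the raclin01/raclin02 convention used later in this lesson); the -vip addresses must be unused before the VIPCA runs:

```
# Public, private, and (as yet unused) virtual addresses for each node
139.185.35.113   raclin01          # public interface
139.185.35.114   raclin02
10.0.0.1         raclin01-priv     # private interconnect
10.0.0.2         raclin02-priv
139.185.35.213   raclin01-vip      # virtual IP configured by the VIPCA
139.185.35.214   raclin02-vip
```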


VIPCA Network Interface Discovery

VIPCA Network Interface Discovery
The Network Interfaces page appears next. All working network adapters should appear in the discovery window. You can choose individual interfaces to be configured by the VIPCA. However, do not select the interface that is acting as your private interconnect. If you have missing interfaces, check whether they are recognized by the operating system by running the ifconfig command:

# ifconfig -a
eth0  Link encap:Ethernet  HWaddr 00:06:5B:A6:3C:70
      inet addr:139.185.35.113  Bcast:139.185.35.255  Mask:255.255.255.0
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:66641537 errors:0 dropped:0 overruns:1 frame:0
      TX packets:538116 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:3471151110 (3310.3 Mb)  TX bytes:86213945 (82.2 Mb)
      Interrupt:11 Base address:0xdc80

eth1  Link encap:Ethernet  HWaddr 00:06:1B:A4:2B:60
      inet addr:139.185.35.114  Bcast:139.185.35.255  Mask:255.255.255.0
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
...

If any of your adapters do not appear in the listing, then you have to troubleshoot your hardware.


VIP Configuration Data and Summary

VIP Configuration Data and Summary
The Virtual IPs for cluster nodes page is displayed next. Enter an unused or unassigned public virtual IP address for each node displayed on this page, and click Next. The Summary page is displayed. Review the information on this page, and click the Finish button.


Installation Progress

Installation Progress
When you click the Finish button, a progress dialog box appears while the VIPCA configures the virtual IP addresses with the network interfaces that you selected. The VIPCA then creates and starts the VIP, GSD, and ONS node applications. When the configuration completes, click OK to view the VIPCA configuration results. Review the information on the Configuration Results page, taking care to review any errors that may have been reported. After reviewing the configuration results, click the Exit button to exit the VIPCA.


End of Installation

End of Installation
After running root.sh on all the nodes as described in the previous slides, click the OK button in the OUI dialog box to continue the installation. This enables the remaining Oracle configuration assistants to run, so that they can perform postinstallation processing. The Oracle Net Configuration Assistant (NETCA) runs next to configure listeners for each node in the cluster. If you have chosen to create the database as described earlier in the Select Database Configuration slide, the DBCA is automatically launched to perform database creation. When the configuration assistants stop running, the End of Installation page appears. You can now click the Exit button to exit the OUI and start the DBCA to create your database.


Database Preinstallation Tasks

• Make sure that CRS processes are running.

• Ensure that the GSD node application is running.
• Set the Oracle database–related environment variables:
  – ORACLE_BASE
  – ORACLE_HOME
  – ORACLE_SID
  – PATH

$ ps -ef | grep d.bin
oracle 1797 1523 0 Jun02 ? 00:00:00 .../evmd.bin
oracle 1809 1808 0 Jun02 ? 00:00:00 .../ocssd.bin
root   1823 1805 0 Jun02 ? 00:00:00 .../crsd.bin
...

Database Preinstallation Tasks
Before starting the DBCA to install the database, you must ensure that CRS processes and Group Services are functional. If any of these processes are not running, the database creation fails. Check whether the CRS background processes are running (crsd.bin, ocssd.bin, and evmd.bin) by using the ps command:

oracle 1804 1797 0 Jun02 ? 00:00:00 /u01/.../crs_1/bin/evmd.bin
...
oracle 1808 1800 0 Jun02 ? 00:00:00 /u01/.../crs_1/bin/ocssd.bin
oracle 1809 1808 0 Jun02 ? 00:00:00 /u01/.../crs_1/bin/ocssd.bin
...
root   1823 1805 0 Jun02 ? 00:00:00 /u01/.../crs_1/bin/crsd.bin
root   1827 1805 0 Jun02 ? 00:00:00 /u01/.../crs_1/bin/crsd.bin
...
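The same check can be scripted. A minimal sketch using the daemon names above; note that this only inspects the process list, it does not query CRS itself:

```shell
# Sketch only: report which of the three CRS daemons are visible in the
# process list before starting the DBCA.
missing=0
for daemon in crsd.bin ocssd.bin evmd.bin; do
  if ps -e -o comm= | grep -q "^${daemon}\$"; then
    echo "$daemon is running"
  else
    echo "$daemon is NOT running"
    missing=$((missing + 1))
  fi
done
echo "daemons not found: $missing"
```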

To check whether Group Services is running, use the crs_stat command from the /u01/app/oracle/product/10.1.0/crs_1/bin directory as follows:

$ cd /u01/app/oracle/product/10.1.0/crs_1/bin
$ ./crs_stat
...


Database Preinstallation Tasks (continued)
NAME=ora.raclin01.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on raclin01
...
NAME=ora.raclin02.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on raclin02
...

You can now set the Oracle database–related environment variables for the oracle user, so that they are recognized by the DBCA during database creation:

$ cd
$ vi .bash_profile
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORA_CRS_HOME=/u01/app/oracle/product/10.1.0/crs_1; export ORA_CRS_HOME
ORACLE_SID=RACDB1; export ORACLE_SID
ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1; export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin; export PATH

Create a directory called /ocfs/oradata where the cluster database data files reside. Make sure that the owner is oracle and the group is dba.

$ mkdir /ocfs/oradata
$ chown oracle:dba /ocfs/oradata
$ chmod 775 /ocfs/oradata


Creating the Cluster Database

$ cd /u01/app/oracle/product/10.1.0/db_1/bin
$ ./dbca -datafileDestination /ocfs/oradata

Creating the Cluster Database
This database creation assumes that an Oracle Cluster File System (OCFS) volume is mounted under /ocfs. From a graphical display, start the DBCA with the -datafileDestination flag. This flag lets the DBCA know that the shared disk volume is actually a cluster file system and not a raw device. This prevents the DBCA from looking for a data file–to–raw device map file. The following example shows the usage of the flag:

$ cd /u01/app/oracle/product/10.1.0/db_1/bin
$ ./dbca -datafileDestination /ocfs/oradata

Note the mixed-case letters in the flag. This is not an error. You must enter it exactly as shown in the example. Later during the installation, when the DBCA prompts you to confirm the data file location, the directory passed as an argument in the dbca command (/ocfs/oradata) is displayed as the default value, simplifying the creation process. The Welcome page appears first. You must select the type of database that you want to install. Click the Oracle Real Application Clusters database option button, and then click Next. The Operations page appears. For a first-time installation, you have only two choices. The first option enables you to create a database, and the other option enables you to manage database creation templates. Click the Create a database option button, and then click Next to continue.


Node Selection

Node Selection
The Node Selection page is displayed next. Because you are creating a cluster database, choose all the nodes. Click the Select All button to choose all the nodes of the cluster. Each node must be highlighted before continuing. If all the nodes do not appear, you must stop the installation and troubleshoot your environment. The most common reason for encountering an error here is related to problems with the GSD node application. If no problems are encountered, click the Next button to proceed.


Select Database Type

Select Database Type
The Database Templates page appears next. If you want the DBCA to create your database now that the Oracle database software is installed, you must choose a template to use for the creation of the database. The templates include:

• Custom Database
• Data Warehouse
• General Purpose
• Transaction Processing

Click the Custom Database option button. This option is chosen because it allows the most flexibility in configuration options. This is also the slowest of the four options because it is the only choice that does not include data files or options specially configured for a particular type of application. All data files that you include in the configuration are created during the database creation process. Click the Next button to continue.


Database Identification

Database Identification
On the Database Identification page, you must enter the database name in the Global Database Name field. A global database name includes the database name and database domain, such as racdb.oracle.com. The name that you enter on this page must be unique among all the global database names used in your environment. The global database name can be up to 30 characters in length and must begin with an alphabetical character. A system identifier (SID) prefix is required, and the DBCA suggests a name based on your global database name. This prefix is used to generate unique SID names for the instances that make up the cluster database. For example, if your prefix is RACDB, the DBCA creates two instances on node 1 and node 2, called RACDB1 and RACDB2, respectively. This example assumes that you have a two-node cluster. If you do not want to use the system-supplied prefix, enter a prefix of your choice. The SID prefix must begin with an alphabetical character and contain no more than 5 characters on UNIX-based systems or 61 characters on Windows-based systems. Click the Next button to continue.
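The UNIX naming rule above can be checked mechanically before you type the prefix into the DBCA. A hedged sketch; the regular expression encodes only the two rules stated here (leading letter, at most 5 characters) plus the assumption that the remaining characters are alphanumeric:

```shell
# Sketch only: validate a candidate SID prefix against the UNIX rules
# stated in this lesson (starts with a letter, 5 characters maximum).
prefix=RACDB   # example value from this lesson
if echo "$prefix" | grep -Eq '^[A-Za-z][A-Za-z0-9]{0,4}$'; then
  echo "$prefix: acceptable SID prefix"
else
  echo "$prefix: rejected"
fi
```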


Cluster Database Management Method

Cluster Database Management Method
The Management Options page is displayed. For small cluster environments, you may choose to manage your cluster with Enterprise Manager Database Control. To do this, select the Configure the Database with Enterprise Manager check box. If you have Grid Control installed somewhere on your network, you can click the Use Grid Control for Management option button. If you select Enterprise Manager with the Grid Control option and the DBCA discovers agents running on the local node, you can select the preferred agent from a list. Grid Control can simplify database management in large, enterprise deployments. You can also configure Database Control to send e-mail notifications when alerts occur. If you want to configure this, you must supply a Simple Mail Transfer Protocol (SMTP) or outgoing mail server and an e-mail address. You can also enable daily backups here. You must supply a backup start time as well as operating system user credentials for this option. Click the Next button to continue.


Passwords for Database Schema Owners

Passwords for Database Schema Owners
The Database Credentials page appears next. You must supply passwords for the user accounts created by the DBCA when configuring your database. You can use the same password for all of these privileged accounts by clicking the Use the Same Password for All Accounts option button. Enter your password in the Password field, and then enter it again in the Confirm Password field. Alternatively, you may choose to set different passwords for the privileged users. To do this, click the Use Different Passwords option button, enter your password in the Password field, and then enter it again in the Confirm Password field. Repeat this for each user listed in the User Name column. Click the Next button to continue.


Storage Options for Database Files

Storage Options for Database Files
On the Storage Options page, you must select the storage medium where your shared database files are stored. Your three choices are:

• Cluster File System
• Automatic Storage Management (ASM)
• Raw Devices

If you click the Cluster File System option button, you can click the Next button to continue.

If you click the Automatic Storage Management (ASM) option button, you can either use an existing ASM disk group or specify a new disk group to use. If there is no ASM instance on any of the cluster nodes, the DBCA displays the Create ASM Instance page for you. If an ASM instance exists on the local node, the DBCA displays a dialog box prompting you to enter the password for the SYS user for ASM. To initiate the creation of the required ASM instance, enter the password for the SYS user of the ASM instance. After you enter the required information, click Next to create the ASM instance. After the instance is created, the DBCA proceeds to the ASM Disk Groups page. If you have just created a new ASM instance, there is no disk group from which to select, so you must create a new one by clicking Create New to open the Create Disk Group page. After you are satisfied with the ASM disk groups available to you, select the one that you want to use for your database files, and click Next to proceed to the Database File Locations page.


Storage Options for Database Files (continued)

If you have configured raw devices, click the corresponding button. You must provide a fully qualified mapping file name if you did not previously set the DBCA_RAW_CONFIG environment variable to point to it. You can enter your response or click the Browse button to locate it. The file should follow the format of the example below:

system=/dev/vg_name/rdbname_system_raw_500m
sysaux=/dev/vg_name/rdbname_sysaux_raw_800m
...
redo2_2=/dev/vg_name/rdbname_redo2_2_raw_120m
control1=/dev/vg_name/rdbname_control1_raw_110m
control2=/dev/vg_name/rdbname_control2_raw_110m
spfile=/dev/vg_name/rdbname_spfile_raw_5m
pwdfile=/dev/vg_name/rdbname_pwdfile_raw_5m

where vg_name is the volume group (if configured) and rdbname is the database name. Because this example uses OCFS, click the Cluster File System button, and then Next to continue.
Note: For more information about ASM, refer to the lessons titled “ASM” and “Administering Storage in RAC” in this course.
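If you prefer the environment-variable route mentioned above, a minimal sketch (the mapping file path is a hypothetical example; set the variable in the session from which you launch the DBCA):

```shell
# Sketch only: point the DBCA at an existing raw-device mapping file so
# that it does not prompt you for one. The path shown is an example.
DBCA_RAW_CONFIG=/u01/app/oracle/admin/racdb/raw_config.map
export DBCA_RAW_CONFIG
echo "DBCA_RAW_CONFIG=$DBCA_RAW_CONFIG"
```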


Database File Locations

Database File Locations
On the Database File Locations page, you must indicate where the database files are created. You can choose to use a standard template for file locations, one common location, or Oracle Managed Files (OMF). This cluster database uses a common location. Therefore, select the Use Common Location for All Database Files option button, and enter the directory in the Database Files Location field. Alternatively, you can use the Browse button to locate the directory where the database files are created. When you have made your choices, click the Next button to continue.


Flash Recovery Area

Flash Recovery Area
On the Recovery Configuration page, you can enable redo log archiving by selecting Enable Archiving. If you are using ASM or cluster file system storage, you can also set the Flash Recovery Area size on the Recovery Configuration page. The size of the area defaults to 2048 megabytes, but you can change this figure if it is not suitable for your requirements. If you are using ASM, the flash recovery area defaults to the ASM disk group. If you are using a cluster file system, the flash recovery area defaults to $ORACLE_BASE/flash_recovery_area. You may also define your own variables for the file locations if you plan to use the Database Storage page to define individual file locations. When you have completed your entries, click Next, and the Database Content page is displayed.


Database Components

Database Components
The Database Content page has two tabs. On the Database Components tab, you can select the components to configure for use in your database. If you chose the Custom Database option, you can select or deselect the database components and their corresponding assigned tablespaces. Select the check box next to each component that you want to install, and select a tablespace from the drop-down list for the product if you want to install it somewhere other than the default tablespace that is shown. For a seed database, you can select whether to include the sample schemas in your database. The Custom Scripts tab enables you to browse and choose scripts to be executed after your database has been created. After selecting components, click the Next button to continue.


Database Services

Database Services
On the Database Services page, you can add database services to be configured during database creation. To add a service, click the Add button at the bottom of the Database Services section. Enter a service name in the Add a Service dialog box, and then click OK to add the service and return to the Database Services page. The new service name appears under the global database name. Select the service name. The DBCA displays the service preferences for the service on the right of the DBCA Database Services page. Change the instance preference (Not Used, Preferred, or Available) as needed. Go to the Transparent Application Failover (TAF) policy row at the bottom of the page. Make a selection in this row for your failover and reconnection policy preference as described in the following list:

• None: Do not use TAF.
• Basic: Establish connections at failover time.
• Pre-connect: Establish one connection to a preferred instance and another connection to a backup instance that you have selected to be available.

In the example in the slide, the Pre-connect policy has been chosen. When you have finished adding and configuring services, click the Next button to continue.
Note: For more information about services and TAF, refer to the lessons titled “Services” and “High Availability of Connections” in this course.
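On the client side, the chosen TAF policy ultimately surfaces as a FAILOVER_MODE clause in the connect descriptor. The following tnsnames.ora fragment is a hypothetical illustration of a Basic policy; the net service name, virtual host names, and retry values are examples, not output of this installation:

```
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclin01-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclin02-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = racdb.oracle.com)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5)
      )
    )
  )
```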


Initialization Parameters

Initialization Parameters
On the Initialization Parameters page, you can set important database parameters. The parameters are grouped under four tabs:

• Memory
• Sizing
• Character Sets
• Connection Mode

On the Memory tab, you can set parameters that deal with memory allocation, including shared pool, buffer cache, Java pool, large pool, and PGA size. On the Sizing tab, you can adjust the database block size. Note that the default is eight kilobytes. In addition, you can set the number of processes that can connect simultaneously to the database. By clicking the Character Sets tab, you can change the database character set. You can also select the default language and the date format. On the Connection Mode tab, you can choose the connection type that clients use to connect to the database. The default type is Dedicated Server Mode. If you want to use Oracle Shared Server, click the Shared Server Mode button. If you want to review the parameters that are not found in the four tabs, click the All Initialization Parameters button. After setting the parameters, click the Next button to continue.
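The choices made on these tabs end up as initialization parameters in the server parameter file. A hypothetical excerpt for the two-instance RACDB database used in this lesson; the values shown are illustrative defaults, not the DBCA's actual output:

```
*.cluster_database=true
*.cluster_database_instances=2
*.db_block_size=8192          # the 8 KB default from the Sizing tab
*.processes=150
RACDB1.instance_number=1
RACDB2.instance_number=2
RACDB1.thread=1
RACDB2.thread=2
```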


Database Storage Options

Adjust SYSTEM tablespace parameters

Database Storage Options
The Database Storage page provides full control over all aspects of database storage, including tablespaces, data files, and log members. Size, location, and all aspects of extent management are under your control here.


Create the Database

Create the Database
The Creation Options page appears next. You can choose to create the database, save your responses as a database template, or save your DBCA session as a database creation script by clicking the corresponding button. Select the Create Database check box, and then click the Finish button. The DBCA displays the Summary page, giving you a last chance to review all the options, parameters, and so on that have been chosen for your database creation. Reviewing the summary data helps ensure that the actual creation is trouble free. When you are ready to proceed, close the Summary page by clicking the OK button.


Monitor Progress

Monitor Progress
The Progress Monitor page appears next. In addition to showing how quickly the database creation is proceeding, it informs you, in real time, about the specific tasks being performed by the DBCA. These tasks include:

• Creating the RAC data dictionary views
• Configuring the network for the cluster database
• Starting the listeners and database instances and the high-availability services

When the database creation progress reaches 100 percent, the DBCA displays a dialog box announcing the completion of the creation process. It also directs you to the installation log file location, parameter file location, and the Enterprise Manager URL. By clicking the Password Management button, you can manage the database accounts created by the DBCA.


Manage Default Accounts

Manage Default Accounts
On the Password Management page, you can manage all accounts created during the database creation process. By default, all database accounts except SYSTEM, SYS, DBSNMP, and SYSMAN are locked. You can unlock these additional accounts if you want, or leave them as they are. If you unlock any of these accounts, you must set passwords for them, which can be done on the same page. When you have completed database account management, click the OK button to return to the DBCA.
The End of Installation page appears next, informing you about the URLs for Ultra Search and iSQL*Plus. When you have finished reviewing this information, click the Exit button to exit the DBCA.
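The same unlock-and-set-password operation can also be performed later from SQL*Plus. The sketch below uses a hypothetical account name and password purely for illustration:

```sql
SQL> -- Unlock a sample account and give it a password (names are illustrative)
SQL> ALTER USER scott ACCOUNT UNLOCK;
SQL> ALTER USER scott IDENTIFIED BY new_password;
```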


Postinstallation Tasks

• Verify the Enterprise Manager configuration.

• Back up the root.sh script.

• Set up additional user accounts.

$ srvctl config database -d racdb

raclin01 racdb1 /u01/app/.../db_1
raclin02 racdb2 /u01/app/.../db_1

$ cd $ORACLE_HOME
$ cp root.sh root.sh.bak

Postinstallation Tasks
After the cluster database has been successfully created, you must run the following command to verify the Enterprise Manager/Oracle Cluster Registry configuration in your newly installed RAC environment:

$ srvctl config database -d db_name

Server Control (SRVCTL) displays the name of the node and the instance for the node. The following example shows a node named raclin01 running an instance named racdb1. Execute the following command:

$ srvctl config database -d racdb

raclin01 racdb1 /u01/app/.../db_1
raclin02 racdb2 /u01/app/.../db_1

It is also recommended that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle Home directory, the OUI updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, you can recover it from the copy. You must also add and set up any additionally required user accounts. For information about setting up additional optional user accounts, refer to the Administrator’s Guide for UNIX Systems.
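The backup step itself is plain shell. The sketch below rehearses it in a scratch directory (the root.sh contents are a stand-in, not the real Oracle script) so that the copy can be verified before you rely on it:

```shell
# Rehearse the root.sh backup in a scratch directory.
# The root.sh contents here are a stand-in, not the real Oracle script.
workdir="$(mktemp -d)"
cd "$workdir"
printf '#!/bin/sh\n# original root.sh (stand-in)\n' > root.sh

cp root.sh root.sh.bak          # keep a copy before the OUI overwrites root.sh
cmp -s root.sh root.sh.bak && echo "backup verified"
```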


Patches and the RAC Environment

[Slide diagram: nodes stc-raclin01, stc-raclin02, and stc-raclin03, each with Oracle Home /u01/app/oracle/product/db_1. Caption: Apply a patch set to /u01/app/oracle/product/db_1 on all nodes.]

Patches and the RAC Environment
Applying patches to your RAC installation is a simple process with the OUI. The OUI can keep track of multiple ORACLE_HOME deployments, as well as the participating nodes. This intelligence prevents potentially destructive or conflicting patch sets from being applied. In the example in the slide, a patch set is applied to the /u01/app/oracle/product/db_1 Oracle Home on all three nodes of your cluster database. Although you execute the installation on stc-raclin01, you can choose any of the nodes to perform this task. The steps that you must perform to add a patch set through the OUI are essentially the same as those to install a new release. You must change directory to $ORACLE_HOME/bin. After starting the OUI, you must perform the following steps:

1. Select Installation from a stage location, and enter the appropriate patch set source on the Welcome page.

2. On the Node Selection page, select the nodes where you need to add the patch, and ensure that they are all available. In this example, this should be all three nodes because /u01/app/oracle/product/db_1 is installed on all of them.

3. Check the Summary page to confirm that space requirements are met for each node.
4. Continue with the installation and monitor the progress as usual.

The OUI automatically manages the installation progress, including the copying of files to remote nodes, just as it does with the CRS and database binary installations.


"Unable to acquire a writer lock on nodes stc-raclin02. Restart the install after verifying that there is no OUI session on any of the selected nodes."

Inventory List Locks

• The OUI employs a timed lock on the inventory list stored on a node.

• The lock prevents an installation from changing a list being used concurrently by another installation.

• If a conflict is detected, the second installation is suspended and the following message appears:

Inventory List Locks
One of the improvements in the OUI is that it prevents potentially destructive concurrent installations. The mechanism involves a timed lock on the inventory list stored on a node. When you start multiple concurrent installations, the OUI displays an error message similar to the one shown in the slide. You must cancel the installation and wait until the conflicting installation completes before retrying it.
Although this mechanism works with all types of installations, consider how it functions if you attempt concurrent patch set installations in the sample cluster. Use the same configuration as in the previous scenario for your starting point.
Assume that you start a patch set installation on stc-raclin01 to update ORACLE_HOME2 on nodes stc-raclin01 and stc-raclin03. While this is still running, you start another patch set installation on stc-raclin02 to update ORACLE_HOME3 on that node. Will these installations succeed? As long as there are no other problems, such as a down node or interconnect, these processes have no conflicts with each other and should succeed.
However, what if you start your patch set installation on stc-raclin02 to update ORACLE_HOME3 and then start a concurrent patch set installation for ORACLE_HOME2 (using either stc-raclin01 or stc-raclin03) on all nodes where this Oracle Home is installed? In this case, the second installation should fail with the error shown because the inventory on stc-raclin02 is already locked by the patch set installation for ORACLE_HOME3.
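The OUI's inventory lock is proprietary, but its effect resembles an exclusive advisory file lock. The shell sketch below is a conceptual analogy only (using the Linux flock utility, not Oracle's actual mechanism): the first "installation" holds an exclusive lock, so a second, concurrent one is refused:

```shell
# Conceptual analogy: an exclusive advisory lock on an inventory file.
lockfile="$(mktemp)"

exec 9>"$lockfile"                 # first installation opens the inventory
flock -n 9 && echo "first installation: writer lock acquired"

# A second, concurrent installation tries the same lock and is refused.
( exec 8>"$lockfile"
  flock -n 8 || echo "second installation: unable to acquire a writer lock" )

flock -u 9                         # first installation completes; lock released
```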


Summary

In this lesson, you should have learned how to:
• Install the Oracle database software
• Configure virtual IPs with the VIPCA
• Perform preinstallation database tasks
• Create a cluster database
• Perform postinstallation database tasks
• Identify best configuration practices for RAC


Practice 3: Overview

This practice covers the following topics:
• Installing the Oracle database software by using the OUI
• Confirming that the services needed by the database creation process are running
• Using the DBCA to create a cluster database


RAC Database Instances Administration


Objectives

After completing this lesson, you should be able to do the following:
• Use the EM Cluster Database home page
• Start and stop RAC databases and instances
• Add a node to a cluster
• Delete instances from a RAC database
• Quiesce RAC databases
• Administer alerts with Enterprise Manager


The EM Cluster Database Home Page

The EM Cluster Database Home Page
The Cluster Database home page serves as a crossroads for managing and monitoring all aspects of your RAC database. From this page, you can also access the three other main cluster database tabs: Performance, Administration, and Maintenance. On this page, you also find General, High Availability, Space Usage, and Diagnostic Summary sections for information that pertains to your cluster database as a whole. The number of instances is displayed for the RAC database, in addition to the status. A RAC database is considered to be up if at least one instance has the database open. The Cluster home page is accessible by clicking the Cluster link in the General section of the page.
Other items of interest include the date of the last RMAN backup, archiving information, space utilization within tablespaces and segments, and an alert summary. By clicking the link next to the Archiving label, you can view and set archive log–related parameters, and adjust the value of the FAST_START_MTTR_TARGET initialization parameter.
The Alerts section displays a list of recent cluster database and cluster database instance–related events for RAC, with links to alert details.


The EM Cluster Database Home Page

The EM Cluster Database Home Page (continued)
At the top of the page, you can see a list of alerts in the Alerts section mentioned on the previous slide. The Related Alerts section lists pertinent cluster database events, such as host and listener alerts. Between the Alerts and Related Alerts sections, you can get an overview of all alerts for your cluster database.
The Job Activity section lists job-related specifics for the cluster database.
You can use Enterprise Manager to manage Oracle critical patch advisories. When configured, critical advisory information can be accessed under the Critical Patch Advisories section. To promote critical patch application, Enterprise Manager assesses vulnerabilities by examining your enterprise configuration to determine which Oracle homes have not applied one or more of these critical patches. Enterprise Manager provides a list of critical patch advisories and the Oracle homes to which the critical patches should be applied.
The Related Links area provides links to other areas for managing your RAC database. For example, the Jobs link opens the Job Activity page, where you can configure jobs for high availability.
Finally, the Instances section lists every instance configured in the Oracle Cluster Registry (OCR) to be able to open the database. Status, alert, and performance-related information is summarized for each instance. When you click an instance name, the corresponding Instance home page is displayed.


Cluster Database Instance Home Page

Cluster Database Instance Home Page
The Cluster Database Instance home page can be reached by clicking one of the instance names in the Instances section of the Cluster Database home page. This page has the same four subpages as the Cluster Database home page: Home, Performance, Administration, and Maintenance.
The difference is that tasks and monitored activities from these pages apply primarily to a specific instance. For example, clicking the Shutdown button on this page shuts down only this one instance, whereas clicking the Shutdown button on the Cluster Database home page gives you the option of shutting down all or specific instances.
By scrolling down on this page, you see the Alerts, Related Alerts, Jobs, and Related Links sections. These provide information similar to the corresponding sections on the Cluster Database home page.


Cluster Home Page

Cluster Home Page
The slide shows the Cluster home page, which is accessible by clicking the Cluster link located in the General section of the Cluster Database home page. The cluster is represented as a composite target composed of nodes and cluster databases. An overall summary of the cluster is provided here. The current status and cluster availability over the past 24 hours are shown. A cluster is deemed to be up if at least one cluster node is up. The cluster is down if all nodes are down. In the Configuration section, you can see the clusterware version, along with the hardware and operating system of the cluster. A list of cluster databases is presented with their status and the number of warning and critical alerts. You can click the value of either the warning or the critical alerts to see a detailed list of the cluster database–related alerts and when they occurred. There is also a separate alerts section for the entire cluster, which centralizes alert reporting on hosts across all the nodes in the cluster.
The Related Links section is at the bottom of the page. This section enables you to view alert history, blackout information, and deployments. When a blackout is applied to a cluster, all targets in the cluster are blacked out. The Deployments link takes you to the Deployments page, which shows detailed information about hosts and their operating systems, as well as software installations and patches. At the bottom of the page, the Hosts section lists each node in the cluster with its status, warning and critical alerts, policy violations, and performance-related information. Links for each node take you to more detailed pages for a particular node.


The Configuration Section

The Configuration Section
The Cluster home page is invaluable for locating configuration-specific data. Locate the Configuration section on the Cluster home page. The View drop-down list allows you to inspect hardware and operating system overview information.
Click the Hosts link, and then click the Hardware Details link of the host that you want. On the Hardware Details page, you find detailed information regarding your CPU, disk controllers, network adapters, and so on. This information can be very useful when determining the Linux patches for your platform.
When you click the Operating System link, the Operating System Details page is displayed. On this page, you can view the current kernel parameter values for each node in your cluster. Shared memory, semaphore, IP, and other kernel parameter classes (whose values affect how Oracle software performs) can be viewed on this page. Although both operating system and kernel parameter values may be viewed by using a terminal window, the Enterprise Manager interface enables you to view this information much more easily.


Operating System Details Page

Operating System Details Page
On the Operating System Details page, click the File Systems tab to display the file system information. All mounted file systems are displayed here. These include standard or local file systems, swap, cluster file systems, and NFS/NetApp file systems.
You can browse all installed packages by clicking the Packages tab. The information displayed in these tabs may be viewed at the operating system level through a telnet session; the Cluster home page simplifies the retrieval of this data.


Performance and Targets Pages

Performance and Targets Pages
From the Cluster home page, performance and target information can be viewed. By clicking the Performance tab, you can view CPU utilization, disk I/O activity, and memory utilization, all in real time. The performance data is displayed graphically, with information for each node displayed in the same graph.
Click the Targets tab to list all Oracle targets in the cluster. The host, ORACLE_HOME, status, and target type are listed for each target displayed on the Targets page.
Note: The refresh rate may be adjusted by using the View Data drop-down list. You can refresh data manually, or automatically every 15 seconds.


Starting and Stopping RAC Instances

• Multiple instances can open the same database simultaneously.

• Shutting down one instance does not interfere with other running instances.

• SHUTDOWN TRANSACTIONAL LOCAL does not wait for other instances’ transactions to finish.

• RAC instances can be started and stopped using:
– Enterprise Manager
– Server Control (SRVCTL) utility
– SQL*Plus

• Shutting down a RAC database means shutting down all instances accessing the database.

Starting and Stopping RAC Instances
In a RAC environment, multiple instances can have the same RAC database open at the same time. Also, shutting down one instance does not interfere with the operation of other running instances.
The procedures for starting up and shutting down RAC instances are identical to the procedures used in single-instance Oracle, with the following exception:
The SHUTDOWN TRANSACTIONAL command with the LOCAL option is useful to shut down an instance after all active transactions on the instance have either committed or rolled back. Transactions on other instances do not block this operation. If you omit the LOCAL option, then the operation waits until all transactions on all other instances that started before the shutdown was issued have either committed or rolled back.
You can start up and shut down instances by using Enterprise Manager, SQL*Plus, or Server Control (SRVCTL). Both Enterprise Manager and SRVCTL provide options to start up and shut down all the instances of a RAC database with a single step.
Shutting down a RAC database mounted or opened by multiple instances means that you need to shut down every instance accessing that RAC database. However, having only one instance opening the RAC database is enough to declare the RAC database open.


Starting and Stopping RAC Instances with EM

Starting and Stopping RAC Instances with EM
On the Cluster Database home page, the cluster database instances are displayed at the bottom of the page. Click an instance name to access the corresponding Cluster Database Instance home page. On this page, you can start or stop the cluster database instance, as well as see an overview of the cluster database instance activity, such as CPU and space usage, active sessions, and so on. To start a cluster database instance, click Startup; to stop it, click Shutdown. To start or shut down a cluster database (that is, all the instances known to Enterprise Manager), select the database and click Startup or Shutdown on the Cluster Database home page.


Starting and Stopping RAC Instances with SQL*Plus

[stc-raclin01] $ echo $ORACLE_SID
RACDB1
$ sqlplus / as sysdba
SQL> startup
SQL> shutdown

[stc-raclin02] $ echo $ORACLE_SID
RACDB2
$ sqlplus / as sysdba
SQL> startup
SQL> shutdown

[stc-raclin01] $ sqlplus / as sysdba
SQL> startup
SQL> shutdown
SQL> connect sys/oracle@RACDB2 as sysdba
SQL> startup
SQL> shutdown

OR

Starting and Stopping RAC Instances with SQL*Plus
If you want to start or stop just one instance, and you are connected to your local node, then you must first ensure that your current environment includes the SID for the local instance.
To start or shut down your local instance, initiate a SQL*Plus session connected as SYSDBA or SYSOPER, and then issue the required command (for example, STARTUP).
You can start multiple instances from a single SQL*Plus session on one node by way of Oracle Net Services. To achieve this, you must connect to each instance by using a Net Services connection string, typically an instance-specific alias from your tnsnames.ora file. For example, you can use a SQL*Plus session on a local node to shut down two instances on remote nodes by connecting to each using the instance’s individual alias name.
The above example assumes that the alias name for the second instance is RACDB2. There is no need to connect to the first instance by using its connect descriptor, because the command is issued from the first node with the correct ORACLE_SID.
Note: It is not possible to start up or shut down more than one instance at a time in SQL*Plus, so you cannot start or stop all the instances for a cluster database with a single SQL*Plus command.


Starting and Stopping RAC Instances with SRVCTL

• start/stop syntax:

• Examples:

srvctl start|stop instance -d <db_name> -i <inst_name_list>
  [-o open|mount|nomount|normal|transactional|immediate|abort]
  [-c <connect_str> | -q]

$ srvctl start instance -d RACDB -i RACDB1,RACDB2

$ srvctl stop instance -d RACDB -i RACDB1,RACDB2

srvctl start|stop database -d <db_name>
  [-o open|mount|nomount|normal|transactional|immediate|abort]
  [-c <connect_str> | -q]

$ srvctl start database -d RACDB -o open

Starting and Stopping RAC Instances with SRVCTL
The srvctl start database command starts a cluster database and its enabled instances. The srvctl stop database command stops a database, its instances, and its services.
The srvctl start instance command starts instances of a cluster database. This command also starts all enabled and nonrunning services that have the listed instances either as preferred or available instances.
The srvctl stop instance command stops instances, and all enabled and nonrunning services that have these instances as either preferred or available instances.
You must disable an object that you intend to remain stopped after you issue a srvctl stop command; otherwise, CRS can restart it as a result of another planned operation. For the commands that use a connect string, if you do not provide one, then SRVCTL uses / as sysdba to perform the operation. The -q option prompts for a connect string from standard input. SRVCTL does not support concurrent executions of commands on the same object. Therefore, run only one SRVCTL command at a time for each database, service, or other object. In order to use the START or STOP options of the SRVCTL command, your service must be a CRS-enabled, nonrunning service. That is why it is recommended to use the Database Configuration Assistant (DBCA), because it configures both the CRS resources and the Net Service entries for each RAC database.
Note: For more information, refer to the Real Application Clusters Administrator’s Guide.
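For example, to keep one instance down across subsequent planned operations, you can stop it and then disable it. The transcript below is a sketch reusing the RACDB/RACDB2 names from this lesson; verify the exact option behavior against the Real Application Clusters Administrator’s Guide for your release:

```
$ srvctl stop instance -d RACDB -i RACDB2 -o immediate
$ srvctl disable instance -d RACDB -i RACDB2

-- later, to allow CRS to manage and start the instance again:
$ srvctl enable instance -d RACDB -i RACDB2
$ srvctl start instance -d RACDB -i RACDB2
```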


RAC Initialization Parameter Files

• An SPFILE is created if you use DBCA.
• The SPFILE must be created on a shared volume or shared raw device.
• All instances use the same SPFILE.
• If the database was created manually, then create an SPFILE from a PFILE.

[Slide diagram: Node1 runs instance RAC01 and Node2 runs instance RAC02; each node has its own PFILE (initRAC01.ora and initRAC02.ora) containing only a SPFILE=… entry that points to the shared SPFILE.]

Initialization Parameter Files
When you create the database, the DBCA creates an SPFILE in the file location that you specify. This location can be an Automatic Storage Management (ASM) disk group, a cluster file system file, or a shared raw device. If you manually create your database, then it is recommended that you create an SPFILE from a PFILE.
All instances in the cluster database use the same SPFILE at startup. Because the SPFILE is a binary file, do not edit it. Instead, change the SPFILE parameter settings by using Enterprise Manager or ALTER SYSTEM SQL statements.
RAC uses a traditional PFILE only if an SPFILE does not exist or if you specify PFILE in your STARTUP command. Using an SPFILE simplifies administration, keeps parameter settings consistent, and guarantees that parameter settings persist across database shutdown and startup. In addition, you can configure RMAN to back up your SPFILE.
In order for each instance to use the same SPFILE at startup, each instance uses its own PFILE that contains only one parameter called SPFILE. The SPFILE parameter points to the shared SPFILE on your shared storage. This is illustrated in the above graphic. By calling each PFILE init<SID>.ora, and by putting them in the $ORACLE_HOME/dbs directory of each node, a STARTUP command uses the shared SPFILE.
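As a concrete sketch, the per-node pointer PFILE can contain a single SPFILE entry. The file names follow the init<SID>.ora convention described above; the storage path is an illustrative assumption, not a required location:

```
# $ORACLE_HOME/dbs/initRAC01.ora on Node1
# (initRAC02.ora on Node2 contains the same single line)
# The path is an example only; use your actual shared storage location.
SPFILE='+DATA/RACDB/spfileRACDB.ora'
```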


SPFILE Parameter Values and RAC

• You can change parameter settings using the ALTER SYSTEM SET command from any instance:
ALTER SYSTEM SET <dpname> SCOPE=MEMORY sid='<sid|*>';
• SPFILE entries such as:
– *.<pname> apply to all instances
– <sid>.<pname> apply only to <sid>
– <sid>.<pname> takes precedence over *.<pname>
• Use current or future *.<dpname> settings for <sid>:
ALTER SYSTEM RESET <dpname> SCOPE=MEMORY sid='<sid>';
• Remove an entry from your SPFILE:
ALTER SYSTEM RESET <dpname> SCOPE=SPFILE sid='<sid|*>';

SPFILE Parameter Values and RAC
You can modify the value of your initialization parameters by using the ALTER SYSTEM SET command. This is the same as with a single-instance database, except that you have the possibility to specify the SID clause in addition to the SCOPE clause.
By using the SID clause, you can specify the SID of the instance where the value takes effect. Specify SID='*' if you want to change the value of the parameter for all instances. Specify SID='sid' if you want to change the value of the parameter only for the instance sid. This setting takes precedence over previous and subsequent ALTER SYSTEM SET statements that specify SID='*'. If the instances are started up with an SPFILE, then SID='*' is the default if you do not specify the SID clause.
If you specify an instance other than the current instance, then a message is sent to that instance to change the parameter value in its memory if you are not using the SPFILE scope.
The combination of SCOPE=MEMORY and SID='sid' of the ALTER SYSTEM RESET command allows you to override the precedence of a currently used <sid>.<dparam> entry. This allows the current *.<dparam> entry to be used, or the next created *.<dparam> entry to be taken into account, on that particular sid.
Using the last example, you can remove a line from your SPFILE.
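A worked sketch of this precedence, using open_cursors purely as an example parameter and the RACDB1 instance name from this lesson:

```sql
SQL> -- Set the parameter for all instances, in memory and in the SPFILE:
SQL> ALTER SYSTEM SET open_cursors=300 SCOPE=BOTH SID='*';
SQL> -- Override it for one instance only (this entry takes precedence):
SQL> ALTER SYSTEM SET open_cursors=500 SCOPE=BOTH SID='RACDB1';
SQL> -- Later, remove the instance-specific entry from the SPFILE so that
SQL> -- RACDB1 falls back to the *.open_cursors setting:
SQL> ALTER SYSTEM RESET open_cursors SCOPE=SPFILE SID='RACDB1';
```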


EM and SPFILE Parameter Values

SCOPE=MEMORY

EM and SPFILE Parameter Values
You can access the Initialization Parameters page from the Cluster Database Administration page by clicking the Initialization Parameters link.
The Current tab shows you the values currently used by the initialization parameters of all the instances accessing the RAC database. You can filter the Initialization Parameters page to show only those parameters that meet the criteria of the filter that you entered in the Filter field. Optionally, you can choose Show All to display on one page all the parameters that are currently used by the running instances.
The Instance column shows the instances for which the parameter has the value listed in the table. An asterisk (*) indicates that the parameter has the same value for all remaining instances of the cluster database.
Choose a parameter from the Select column and perform one of the following steps:

• Click Add to add the selected parameter to a different instance. Enter a new instance name and value in the newly created row in the table.

• Click Reset to reset the value of the selected parameter. Note that you may reset only parameters that do not have an asterisk in the Instance column. The value of the selected parameter is reset to the value of the remaining instances.

Note: For both Add and Reset buttons, the ALTER SYSTEM command uses SCOPE=MEMORY.


EM and SPFILE Parameter Values

SCOPE=SPFILE

SCOPE=BOTH

EM and SPFILE Parameter Values (continued)
The SPFile tab displays the current values stored in your SPFILE.
As in the Current tab, you can add or reset parameters. However, if you select the Apply changes in SPFile mode check box, then the ALTER SYSTEM command uses SCOPE=BOTH. If this check box is not selected, SCOPE=SPFILE is used. Click Apply to accept and generate your changes.


RAC Initialization Parameters

RAC Initialization Parameters
CLUSTER_DATABASE: Enables a database to be started in cluster mode. Set this to TRUE.
CLUSTER_DATABASE_INSTANCES: Sets the number of instances in your RAC environment. A proper setting for this parameter can improve memory use.
CLUSTER_INTERCONNECTS: Specifies the cluster interconnect when there is more than one interconnect. Refer to your Oracle platform-specific documentation for the use of this parameter, its syntax, and its behavior. You typically do not need to set the CLUSTER_INTERCONNECTS parameter. For example, do not set this parameter for the following common configurations:

• If you have only one cluster interconnect
• If the default cluster interconnect meets the bandwidth requirements of your RAC database, which is typically the case
• If NIC bonding is being used for the interconnect

DB_NAME: If you set a value for DB_NAME in instance-specific parameter files, then the setting must be identical for all instances.
DISPATCHERS: Set the DISPATCHERS parameter to enable a shared-server configuration, that is, a server that is configured to allow many user processes to share very few server processes. With shared-server configurations, many user processes connect to a dispatcher. The DISPATCHERS parameter may contain many attributes. Oracle recommends that you configure at least the PROTOCOL and LISTENER attributes.


RAC Initialization Parameters (continued)
PROTOCOL specifies the network protocol for which the dispatcher process generates a listening end point. LISTENER specifies an alias name for the Oracle Net Services listeners. Set the alias to a name that is resolved through a naming method such as a tnsnames.ora file.
MAX_COMMIT_PROPAGATION_DELAY: This is a RAC-specific parameter. Do not alter the default setting for this parameter except under a limited set of circumstances. This parameter specifies the maximum amount of time allowed before the system change number (SCN) held in the SGA of an instance is refreshed by the log writer process (LGWR). It determines whether the local SCN should be refreshed from the SGA when getting the snapshot SCN for a query.
SPFILE: When you use an SPFILE, all RAC database instances must use the SPFILE, and the file must be on shared storage.
THREAD: If specified, this parameter must have unique values on all instances. The THREAD parameter specifies the number of the redo thread to be used by an instance. You can specify any available redo thread number as long as that thread number is enabled and is not in use.
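As a sketch, a DISPATCHERS setting with the recommended PROTOCOL and LISTENER attributes might look as follows; the alias listeners_racdb is an illustrative name that would have to resolve through a naming method such as a tnsnames.ora file:

SQL> ALTER SYSTEM SET dispatchers='(PROTOCOL=TCP)(LISTENER=listeners_racdb)' SCOPE=BOTH;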


Parameters Requiring Identical Settings

• ACTIVE_INSTANCE_COUNT
• ARCHIVE_LAG_TARGET
• CLUSTER_DATABASE
• CONTROL_FILES
• DB_BLOCK_SIZE
• DB_DOMAIN
• DB_FILES
• DB_NAME
• DB_RECOVERY_FILE_DEST
• DB_RECOVERY_FILE_DEST_SIZE
• DB_UNIQUE_NAME
• MAX_COMMIT_PROPAGATION_DELAY
• TRACE_ENABLED
• UNDO_MANAGEMENT

Parameters Requiring Identical Settings
Certain initialization parameters that are critical at database creation or that affect certain database operations must have the same value for every instance in RAC. Specify these parameter values in the SPFILE, or within each init_dbname.ora file on each instance. The following list contains the parameters that must be identical on every instance:
• ACTIVE_INSTANCE_COUNT
• ARCHIVE_LAG_TARGET
• CLUSTER_DATABASE
• CONTROL_FILES
• DB_BLOCK_SIZE
• DB_DOMAIN
• DB_FILES
• DB_NAME
• DB_RECOVERY_FILE_DEST
• DB_RECOVERY_FILE_DEST_SIZE
• DB_UNIQUE_NAME
• MAX_COMMIT_PROPAGATION_DELAY
• TRACE_ENABLED
• UNDO_MANAGEMENT

Note: The setting for DML_LOCKS must be identical on every instance only if set to zero.
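In an SPFILE, parameters that must be identical across instances are conventionally set once with the asterisk (all-instances) qualifier rather than once per SID. The values below are illustrative only:

*.cluster_database=TRUE
*.db_name='RACDB'
*.db_block_size=8192
*.undo_management='AUTO'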


Parameters Requiring Unique Settings

• Instance Settings
– THREAD
– ROLLBACK_SEGMENTS
– INSTANCE_NUMBER
– UNDO_TABLESPACE (when using automatic undo management)
• Environment Variables
– ORACLE_SID

Parameters Requiring Unique Settings
If you use the THREAD or ROLLBACK_SEGMENTS parameters, then it is recommended that you set unique values for them by using the SID identifier in the SPFILE. However, you must set a unique value for INSTANCE_NUMBER for each instance, and you cannot use a default value. Oracle uses the INSTANCE_NUMBER parameter to distinguish among instances at startup. Oracle uses the THREAD number to assign redo log groups to specific instances. To simplify administration, use the same number for both the THREAD and INSTANCE_NUMBER parameters.
If you specify UNDO_TABLESPACE with automatic undo management enabled, then set this parameter to a unique undo tablespace name for each instance.
Specify the ORACLE_SID environment variable, which comprises the database name and the number of the THREAD assigned to the instance.
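In an SPFILE, the SID qualifier prefixes these instance-specific parameters. A sketch using the RACDB1 and RACDB2 instance names that appear in this lesson (the undo tablespace names are illustrative):

RACDB1.instance_number=1
RACDB1.thread=1
RACDB1.undo_tablespace='UNDOTBS1'
RACDB2.instance_number=2
RACDB2.thread=2
RACDB2.undo_tablespace='UNDOTBS2'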


Adding a Node to a Cluster

1. Configure the OS and hardware for the new node.
2. Add the node to the cluster.
3. Add the RAC software to the new node.
4. Reconfigure listeners for the new node.
5. Add instances via DBCA.

RACDB1

RACDB2

RACDB3

Adding a Node to a Cluster
The next several slides explain how to add nodes to clusters. You do this by setting up the new nodes to be part of your cluster at the network level. Then extend the Cluster Ready Services (CRS) home from an existing CRS home to the new nodes, and then extend the Oracle database software with RAC components to the new nodes. Finally, make the new nodes members of the existing cluster database.
If the nodes that you are adding to your cluster do not have clusterware or Oracle software, then you must complete the five steps listed in the slide above. The procedures in these steps assume that you already have an operative UNIX-based or Windows-based RAC environment.


Adding a Node to an Existing Cluster

$ cd $ORA_CRS_HOME/oui/bin
$ ./addNode.sh

Adding a Node to an Existing Cluster
Run the addNode.sh script from $ORA_CRS_HOME/oui/bin on one of the existing nodes as the oracle user:
$ cd $ORA_CRS_HOME/oui/bin
$ ./addNode.sh

When the Oracle Universal Installer (OUI) Welcome page appears, click Next. On the Specify Cluster Nodes to Add to Installation page, add the public and private node names, and then click Next. When the Cluster Node Addition Summary page appears, click Next. The Cluster Node Addition Progress page appears. You are then prompted to run rootaddnode.sh as the root user. Verify that the CLSCFG information in the rootaddnode.sh script is correct. It must contain the new public and private node names and node numbers. For example:
$ clscfg -add -nn node2,2 -pn node2-private,2 -hn <node2>,2

Run the rootaddnode.sh script on the existing node from where you ran the addNode.sh script:
su root
cd $ORA_CRS_HOME
sh -x rootaddnode.sh

After this is completed, click OK to continue.


Adding a Node to an Existing Cluster (continued)
At this point, another dialog box appears, which prompts you to run $ORA_CRS_HOME/root.sh on the new cluster node:

su root
cd $ORA_CRS_HOME
sh -x root.sh

After this is completed, click OK in the dialog box to continue. The End of Installation page is displayed. Exit the installer.


Adding the RAC Software to the New Node

$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh

Add the RAC software to the new node

Adding the RAC Software to the New Node
Run the addNode.sh script from $ORACLE_HOME/oui/bin on one of the existing nodes as the oracle user:

$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh

When the OUI Welcome page appears, click Next. On the Specify Cluster Nodes to Add to Installation page, specify the node that you want to add. Click Next. When the Cluster Node Addition Summary page appears, click Next. The Cluster Node Addition Progress page appears. You are then prompted to run root.sh as the root user on the new node:

$ su - root
$ cd $ORACLE_HOME
$ ./root.sh

After this is completed, click OK to continue. The End of Installation page is displayed. Exit the installer. Change directory to the $ORACLE_HOME/bin directory and run the vipca tool with the new node list:

$ su root
$ DISPLAY=ipaddress:0.0; export DISPLAY
$ cd $ORACLE_HOME/bin
$ ./vipca -nodelist <node1>,<node2>


Adding the RAC Software to the New Node (continued)
The Virtual Internet Protocol Configuration Assistant (VIPCA) Welcome page appears. Click Next. Add the new node’s virtual IP information, and click Next. The Summary page is displayed. Click Finish. A progress bar appears while the new CRS resources are created and started. After this is completed, click OK, view the configuration results, and click the Exit button.
Verify that the interconnect information is correct with the oifcfg command:

$ oifcfg getif

If it is not correct, change it by using oifcfg:
$ oifcfg setif <interface-name>/<subnet>:<cluster_interconnect|public>


Reconfigure the Listeners

Reconfigure the Listeners
Run netca on the new node to verify that the listener is configured on the new node:

$ DISPLAY=ipaddress:0.0; export DISPLAY
$ netca

Select Cluster Configuration, and then click Next. After selecting all nodes, click Next. Select Listener configuration, and then click Next. Click Reconfigure, and then click Next. Choose the listener that you want to reconfigure, and then click Next. Choose the correct protocol, and then click Next. Choose the correct port, and then click Next. Choose whether or not to configure another listener. Click Next.
You may get an error message saying, “The information provided for this listener is currently in use by another listener….” Ignore this message and click Yes to continue. When the Listener Configuration Complete page appears, click Next to continue. Click Finish to exit the Network Configuration Assistant (NETCA). Run the crs_stat command to verify that the listener CRS resource was created. For example:

cd $ORA_CRS_HOME/bin
./crs_stat

The new listener will be offline at this point. Start it by starting the nodeapps on the new node:
$ srvctl start nodeapps -n <newnode>

Use crs_stat to confirm that all VIPs, GSDs, ONSs, and listeners are online.


Add an Instance by Using DBCA

Add an Instance by Using DBCA
To add new instances, open DBCA from an existing node:

$ DISPLAY=ipaddress:0.0; export DISPLAY
$ dbca

On the Welcome page, select Oracle Real Application Clusters database, and then click Next. Select Instance Management, and click Next. Select Add an Instance, and click Next. Choose the database that you want to add an instance to, and specify a user with SYSDBA privileges. Click Next. Choose the correct instance name and node, and then click Next. Review the Storage page, and click Next. Review the Summary page, click OK, and wait for the progress bar to start. Allow the progress bar to finish. When asked whether you want to perform another operation, click No to exit the DBCA. To verify success, log in to one of the instances and query GV$INSTANCE. You should now be able to see all instances:
SQL> SELECT instance_number inst_no, instance_name inst_name, parallel, status, database_status db_status, active_state state, host_name host FROM gv$instance;

INST_NO INST_NAME    PAR STATUS DB_STATUS STATE  HOST
------- ------------ --- ------ --------- ------ --------------
      1 RACDB1       YES OPEN   ACTIVE    NORMAL stc-raclin01
      2 RACDB2       YES OPEN   ACTIVE    NORMAL stc-raclin02
      3 RACDB3       YES OPEN   ACTIVE    NORMAL stc-raclin03


Deleting Instances from a RAC Database

Deleting Instances from a RAC Database
The procedures outlined here explain how to use the DBCA to delete an instance from a RAC database. To delete an instance, start the DBCA on a node other than the node that hosts the instance that you want to delete. On the DBCA Welcome page, select Oracle Real Application Clusters Database. Click Next. The Operations page is displayed. Select Instance Management, and then click Next. The Instance Management page appears.
On the Instance Management page, select Delete Instance, and then click Next. On the page that displays the list of cluster databases, select a RAC database from which to delete an instance. If your user ID is not operating-system authenticated, then DBCA also prompts you for a user ID and password for a database user that has SYSDBA privileges. Click Next, and DBCA displays the List of Cluster Database Instances page, which shows the instances associated with the RAC database that you selected and the status of each instance. Select a remote instance to delete, and then click Finish. If you have services assigned to this instance, then the DBCA Services Management page appears. Use this feature to reassign services from this instance to other instances in the cluster database. Review the information about the instance deletion operation on the Summary page, and then click OK. Click OK in the Confirmation dialog box to proceed with the instance deletion operation. The DBCA displays a progress dialog box showing that the DBCA is performing the instance deletion operation. During this operation, the DBCA removes the instance and the instance’s Oracle Net configuration.


Deleting Instances From a RAC Database (continued)When the DBCA completes this operation, it displays a dialog box asking whether you want to perform another operation. Click No and exit the DBCA, or click Yes to perform another operation. If you click Yes, the Operations page is displayed.


Node Addition and Deletion and the SYSAUX Tablespace

• The SYSAUX tablespace combines the storage needs for the following tablespaces:
– DRSYS

– CWMLITE

– XDB

– ODM

– TOOLS

– INDEX

– EXAMPLE

– OEM-REPO

• Use this formula to size the SYSAUX tablespace:
300M + (250M * number_of_nodes)

The SYSAUX Tablespace
A new auxiliary, system-managed tablespace called SYSAUX contains performance data and combines content that was stored in different tablespaces (some of which are no longer required) in earlier releases of the Oracle database. This is a required tablespace for which you must plan disk space. The SYSAUX system tablespace now contains the DRSYS (contains data for Oracle Text), CWMLITE (contains the OLAP schemas), XDB (for XML features), ODM (for Oracle Data Mining), TOOLS (contains Enterprise Manager tables), INDEX, EXAMPLE, and OEM-REPO tablespaces.
If you add nodes to your RAC database environment, then you may need to increase the size of the SYSAUX tablespace. Conversely, if you remove nodes from your cluster database, then you may be able to reduce the size of your SYSAUX tablespace and thus save valuable disk space. The following is a formula that you can use to properly size the SYSAUX tablespace:

300 megabytes + (250 megabytes * number_of_nodes)

If you apply this formula to a four-node cluster, then you will find that the SYSAUX tablespace should be sized at around 1,300 megabytes (300 + (250 * 4) = 1300).
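To compare the formula against the actual allocation, you can query the data dictionary from any instance; this is a sketch using standard dictionary views, with no RAC-specific syntax required:

SQL> SELECT SUM(bytes)/1024/1024 AS sysaux_mb FROM dba_data_files WHERE tablespace_name = 'SYSAUX';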


Quiescing RAC Databases

• Use the ALTER SYSTEM QUIESCE RESTRICTED statement from a single instance.

• The database cannot be opened until the ALTER SYSTEM QUIESCE… statement finishes execution.

• The ALTER SYSTEM QUIESCE RESTRICTED and ALTER SYSTEM UNQUIESCE statements affect all instances in a RAC environment.

• Cold backups cannot be taken while the database is in a quiesced state.

SQL> ALTER SYSTEM QUIESCE RESTRICTED;

Quiescing RAC Databases
To quiesce a RAC database, use the ALTER SYSTEM QUIESCE RESTRICTED statement from one instance. It is not possible to open the database from any instance while the database is in the process of being quiesced from another instance. After all non-DBA sessions become inactive, the ALTER SYSTEM QUIESCE RESTRICTED statement finishes executing, and the database is considered to be quiesced. In a RAC environment, this statement affects all instances.
To issue the ALTER SYSTEM QUIESCE RESTRICTED statement in a RAC environment, you must have the Database Resource Manager feature activated, and it must have been activated since instance startup for all instances in the cluster database. The following conditions apply to RAC:

• If you have issued the ALTER SYSTEM QUIESCE RESTRICTED statement but Oracle has not finished processing it, then you cannot open the database.

• You cannot open the database if it is already in a quiesced state.
• The ALTER SYSTEM QUIESCE RESTRICTED and ALTER SYSTEM UNQUIESCE statements affect all instances in a RAC environment, not just the instance that issues the command.

Cold backups cannot be taken while the database is in a quiesced state because the Oracle background processes may still perform updates for internal purposes even while the database is quiesced. Also, the file headers of online data files continue to appear as if they are being accessed, so they do not look the same as they would after a clean shutdown.
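A minimal quiesce cycle, issued from any single instance, looks like this; the comment stands for whatever administrative action motivated the quiesce:

SQL> ALTER SYSTEM QUIESCE RESTRICTED;
SQL> -- perform the administrative actions that required a quiesced database
SQL> ALTER SYSTEM UNQUIESCE;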


How SQL*Plus Commands Affect Instances

SQL*Plus Command              Associated Instance
SHOW INSTANCE                 Displays information about the current instance
STARTUP and SHUTDOWN          Affect the current instance
SHOW PARAMETER and SHOW SGA   Show the current instance parameter and SGA information
RECOVER                       Does not affect any particular instance, but rather the database
HOST                          Affects the node running the SQL*Plus session
CONNECT                       Affects the default instance if no instance is specified in the CONNECT command
ARCHIVE LOG                   Always affects the current instance

How SQL*Plus Commands Affect Instances
Most SQL statements affect the current instance. You can use SQL*Plus to start and stop instances in the RAC database. You do not need to run SQL*Plus commands as root on UNIX-based systems or as Administrator on Windows-based systems. You need only a database account with the privileges that you normally use for single-instance Oracle database administration. Some examples of how SQL*Plus commands affect instances are:

• The statement ALTER SYSTEM CHECKPOINT LOCAL affects only the instance to which you are currently connected, rather than the default instance or all instances.

• ALTER SYSTEM CHECKPOINT LOCAL affects the current instance.
• ALTER SYSTEM CHECKPOINT or ALTER SYSTEM CHECKPOINT GLOBAL affects all instances in the cluster database.
• ALTER SYSTEM SWITCH LOGFILE affects only the current instance.
• To force a global log switch, use the ALTER SYSTEM ARCHIVE LOG CURRENT statement.
• The INSTANCE option of ALTER SYSTEM ARCHIVE LOG enables you to archive each online redo log file for a specific instance.
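For example, issued from instance RACDB1, the following statements have the scopes described above (the instance name is taken from the examples in this lesson):

SQL> ALTER SYSTEM CHECKPOINT LOCAL;    -- checkpoints RACDB1 only
SQL> ALTER SYSTEM CHECKPOINT GLOBAL;   -- checkpoints every instance
SQL> ALTER SYSTEM SWITCH LOGFILE;      -- switches the redo log of RACDB1 only
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT; -- forces a log switch on all instances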


Administering Alerts with Enterprise Manager

View alerts for all instances.

Administering Alerts with Enterprise Manager
You can use Enterprise Manager to administer alerts for RAC environments. You can also configure specialized tests for RAC databases, such as global cache converts, consistent read requests, and so on. Enterprise Manager distinguishes between database-level and instance-level alerts in RAC environments. Alert thresholds for instance-level alerts, such as archive log alerts, can be set at the instance target level. This enables you to receive alerts for the specific instance if performance exceeds your threshold. You can also configure alerts at the database level, such as setting alerts for tablespaces. This enables you to avoid receiving duplicate alerts at each instance.
Enterprise Manager also responds to metrics from across the entire RAC database and publishes alerts when thresholds are exceeded. Enterprise Manager interprets both predefined and customized metrics. You can also copy customized metrics from one cluster database instance to another, or from one RAC database to another. A recent alert summary can be found on the Database Control home page. Notice that alerts are sorted by relative time and target name.


Viewing Alerts

Choose an alert and drill down.

Viewing Alerts
When an alert that requires a closer look is raised, you can click it for more information. A statistics summary for the metric is displayed on the left of the window. Here you can find information such as the high, low, and average values of the metric over the duration of the polling period, how many times the metric exceeded the warning and critical thresholds, and the current values of those thresholds. If you want to adjust the warning or critical threshold values, click the Manage Metrics link at the bottom of the page. In addition to adjusting the threshold values, you can also define a response action that will be used when a threshold is exceeded.


Viewing Alerts

Viewing Alerts (continued)
It is also possible to view the metric across the cluster in a comparative or overlay fashion. To view this information, click the Compare Targets link at the bottom of the page. When the Compare Targets page appears, choose the instance targets that you want to compare by selecting them and then clicking the Move button. If you want to compare the metric data from all targets, then click the Move All button. After making your selections, click the OK button to continue. The Metric summary page appears next. Depending on your needs, you can accept the default timeline of 24 hours or select a more suitable value from the drop-down list. If you want to add a comment regarding the event for future reference, then enter a comment in the Comment for Most Recent Alert field, and then click the Add Comment button.


Blackouts and Scheduled Maintenance

Blackouts and Scheduled Maintenance
You can use Enterprise Manager Database Control to define blackouts for all managed targets of your RAC database to prevent alerts from being recorded. Blackouts are useful when performing scheduled or unscheduled maintenance or other tasks that might trigger extraneous or unwanted events. You can define blackouts for an entire cluster database or for specific cluster database instances.
To create a blackout event, click the Maintenance tab located on the Database Control home page. Click the Blackouts link in the Enterprise Manager Administration section. The Setup Blackouts page appears next. Click the Create button located on the right of the window. The Create Blackout: Properties page appears next. You must enter a name or tag in the Name field. If you want, you can also type a descriptive comment in the Comments field; this is optional. Enter a reason for the blackout in the Enter a Reason field.
In the Targets area of the Properties page, you must choose a target type from the drop-down list. In the example above, the entire cluster database is chosen. Click the cluster database in the Available Targets list, and then click the Move button to move your choice to the Selected Targets list. Click the Next button to continue.


Blackouts and Scheduled Maintenance

Blackouts and Scheduled Maintenance (continued)
The Member Targets page appears next. Expand the Selected Composite Targets tree and ensure that all targets that must be included appear in the list. Click the Next button to continue. The Create Blackout: Schedule page appears next. You must supply the start time and duration, and indicate whether the blackout is to be recurring and, if so, the frequency of intervals in days.
Finally, you must indicate whether the blackout will continue indefinitely or will end at some point in time. If the blackout must stop in the future, enter the time and date when the blackout event will end. After supplying the needed information on this page, click the Next button to proceed. The Review page contains a summary of the blackout information that you previously entered. Review the information for accuracy and correct any errors that you may find. Click the Finish button when you are satisfied with the blackout parameters.
If you navigate to the Blackouts page, you can see the blackout that you just submitted. You can click the View button to see the properties of your blackout, or you can click the Edit button to make changes at any point in the life of the new blackout event.


Summary

In this lesson, you should have learned how to:
• Use the EM Cluster Database Home page
• Start and stop RAC databases and instances
• Add a node to a cluster
• Delete instances from a RAC database
• Quiesce RAC databases
• Administer alerts with Enterprise Manager


Practice 4: Overview

This practice covers the following topics:
• Using the srvctl utility to control your cluster database
• Starting and stopping the cluster database using EM Dbconsole


Administering Storage in RAC
Part I


Objectives

After completing this lesson, you should be able to do the following:
• Describe automatic storage management (ASM)
• Install the ASM software
• Set up initialization parameter files for ASM and database instances
• Start up and shut down ASM instances
• Add ASM instances to the target list of Database Control
• Use Database Control to administer ASM in a RAC environment


What Is Automatic Storage Management?

• Is a purpose-built cluster file system and volume manager

• Manages Oracle database files
• Spreads data across disks to balance load
• Provides integrated mirroring across disks
• Solves many storage management challenges

[Graphic: ASM shown replacing the file system and logical volume manager layers between the database/application and the operating system]

What Is Automatic Storage Management?
Automatic storage management (ASM) is a new feature in Oracle Database 10g. It provides a vertical integration of the file system and the Logical Volume Manager (LVM) that is specifically built for Oracle database files. The ASM can provide management for single SMP machines or across multiple nodes of a cluster for Oracle Real Application Clusters support.
The ASM distributes input/output (I/O) load across all available resources to optimize performance while removing the need for manual I/O tuning. The ASM helps DBAs manage a dynamic database environment by allowing them to grow the database size without having to shut down the database to adjust the storage allocation.
The ASM can maintain redundant copies of data to provide fault tolerance, or it can be built on top of vendor-supplied reliable storage mechanisms. Data management is done by selecting the desired reliability and performance characteristics for classes of data rather than with human interaction on a per-file basis.
The capabilities of ASM save DBAs time by automating manual storage tasks and thereby increasing their ability to manage larger databases (and more of them) with increased efficiency.


ASM: Key Features and Benefits

• Stripes files rather than logical volumes
• Enables online disk reconfiguration and dynamic rebalancing
• Provides adjustable rebalancing speed
• Provides redundancy on a file basis
• Supports only Oracle files
• Is cluster aware
• Is automatically installed as part of the base code set

ASM: Key Features and Benefits
The ASM divides a file into pieces and spreads them evenly across all the disks. The ASM uses an index technique to track the placement of each piece. Traditional striping techniques use mathematical functions to stripe complete logical volumes.
When your storage capacity changes, ASM does not restripe all the data, but moves an amount of data proportional to the amount of storage added or removed to evenly redistribute the files and maintain a balanced I/O load across the disks. This is done while the database is active. You can adjust the speed of a rebalance operation to increase its speed or to lower the impact on the I/O subsystem.
The ASM includes mirroring protection without the need to purchase a third-party Logical Volume Manager. One unique advantage of ASM is that the mirroring is applied on a file basis, rather than on a volume basis. Therefore, the same disk group can contain a combination of files protected by mirroring, or not protected at all.
The ASM supports data files, log files, control files, archive logs, Recovery Manager (RMAN) backup sets, and other Oracle database file types. The ASM supports Real Application Clusters (RAC) and eliminates the need for a cluster Logical Volume Manager or a cluster file system.
ASM is shipped with the database and does not show up as a separate option in the custom tree installation. It is available in both the Enterprise Edition and Standard Edition installations.


ASM: New Concepts

[Graphic: storage hierarchy — a database contains tablespaces; a tablespace contains segments, which contain extents made up of Oracle blocks; data files contain physical blocks and reside on file system files or raw devices; with ASM, files become ASM files stored in ASM disk groups, which are made up of ASM disks partitioned into allocation units]

ASM: New Concepts
The ASM does not eliminate any existing database functionality. Existing databases are able to operate as they always have. New files may be created as ASM files, while existing ones are administered in the old way or can be migrated to ASM.
The diagram depicts the relationships that exist between the various storage components inside an Oracle database. On the left and center parts of the diagram, you can find the relationships that exist in previous releases. The right part of the diagram shows you the new concepts introduced by ASM in Oracle Database 10g. However, these new concepts are only used to describe file storage, and do not replace any existing concepts such as segments and tablespaces.
With ASM, database files can now be stored as ASM files. At the top of the new hierarchy, you can find what are called ASM disk groups. Any single ASM file is contained in only one disk group. However, a disk group may contain files belonging to several databases, and a single database may use storage from multiple disk groups. As you can see, one disk group is made up of ASM disks, and each ASM disk belongs to only one disk group. Also, ASM files are always spread across all ASM disks in the disk group. The ASM disks are partitioned into allocation units (AU) of one megabyte each. An AU is the smallest contiguous disk space that ASM allocates. The ASM does not allow physical blocks to be split across AUs.
Note: The graphic deals with only one type of ASM file, the data file. However, ASM can be used to store other database file types.


ASM: General Architecture

[Graphic: two cluster nodes, each running one ASM instance (SID=ant on Node1, SID=bee on Node2) and two database instances (SID=sales and SID=test); three ASM disk groups (Tom, Dick, and Harry) built on ASM disks; the ASM instances run RBAL and ARB0…ARBA processes, the database instances run ASMB, RBAL, and DBW0, and Group Services on each node registers each disk group with its ASM instance]

ASM: General Architecture
To use ASM, you must start a special instance called an ASM instance before you start your database instance. ASM instances do not mount databases, but instead manage the metadata needed to make ASM files available to ordinary database instances. Both ASM instances and database instances have access to a common set of disks called disk groups. Database instances access the contents of ASM files directly, communicating with an ASM instance only to get information about the layout of these files.
An ASM instance contains two new background processes. One coordinates rebalance activity for disk groups; it is called RBAL. The other performs the actual rebalance activity for AU movements. There can be many of these at a time, and they are called ARB0, ARB1, and so on. An ASM instance also has most of the same background processes as a database instance (SMON, PMON, LGWR, and so on).
Each database instance using ASM has two new background processes called ASMB and RBAL. RBAL performs global opens of the disks in the disk groups. At database instance startup, ASMB connects as a foreground process into the ASM instance. All communication between the database and ASM instances is performed via this bridge. This includes physical file changes such as data file creation and deletion. Over this connection, periodic messages are exchanged to update statistics and to verify that both instances are healthy.


ASM: General Architecture (continued)

Group Services is used to register the connection information needed by the database instances to find ASM instances. When an ASM instance mounts a disk group, it registers the disk group and connect string with Group Services. The database instance knows the name of the disk group, and can therefore use it to look up connect information for the correct ASM instance.

Like RAC, ASM instances themselves may be clustered, using the existing Global Cache Services (GCS) infrastructure. There is one ASM instance per node on a cluster. As with existing RAC configurations, ASM requires that the operating system make the disks globally visible to all ASM instances, irrespective of the node. If there are several database instances for different databases on the same node, they share the same single ASM instance on that node.

If the ASM instance on one node fails, all the database instances connected to it also fail. As with RAC, ASM and database instances on other nodes recover the dead instances and continue operations.

Note: A disk group can contain files for many different Oracle databases. Thus, multiple database instances serving different databases can access the same disk group even on a single system without RAC.


ASM Instance and Crash Recovery in RAC

[Slide diagram: two failure scenarios for disk group A on Node1 (+ASM1) and Node2 (+ASM2). ASM instance recovery: both instances mount the disk group; when one ASM instance fails, the disk group is repaired by the surviving instance. ASM crash recovery: only one instance mounts the disk group; after the ASM instance failure, the disk group is repaired when it is next mounted.]

ASM Instance and Crash Recovery in RAC

Each disk group is self-describing, containing its own file directory, disk directory, and other data such as metadata logging information. ASM automatically protects its metadata by using mirroring techniques, even with external redundancy disk groups.

With multiple ASM instances mounting the same disk groups, if one ASM instance fails, another ASM instance automatically recovers transient ASM metadata changes caused by the failed instance. This situation is called ASM instance recovery, and it is automatically and immediately detected by the Global Cache Services.

With multiple ASM instances mounting different disk groups, or in the case of a single ASM instance configuration, if an ASM instance fails while ASM metadata is open for update, the disk groups that are not currently mounted by any other ASM instance are not recovered until they are mounted again. When an ASM instance mounts a failed disk group, it reads the disk group log and recovers all transient changes. This situation is called ASM crash recovery.

Therefore, when using ASM clustered instances, it is recommended that all ASM instances always mount the same set of disk groups. However, it is possible to have a disk group on locally attached disks that are visible to only one node in a cluster, and to have that disk group mounted only on the node where the disks are attached.

Note: The failure of an Oracle database instance is not significant here because only ASM instances update ASM metadata.


ASMLibs

• An ASMLib is a storage-management interface between the Oracle kernel and disk storage.
• You can load multiple ASMLibs.
• Purpose-built drivers can provide:
– Device discovery
– More efficient I/O interface
– Increased performance and reliability
• Oracle freely delivers an ASMLib on Linux.
• Several participating storage vendors, such as EMC and HP, are joining this initiative.

ASMLibs

ASMLib is a support library for the ASM feature. The objective of ASMLib is to provide a more streamlined and efficient mechanism for identifying and accessing block devices used by ASM disk groups. This API serves as an alternative to the standard operating system interface.

The ASMLib kernel driver is released under the GNU General Public License (GPL), and Oracle Corporation freely delivers an ASMLib for Linux platforms. This library is provided to enable ASM I/O to Linux disks without the limitations of the standard UNIX I/O API.

The main ASMLib functions are grouped into three collections of functions:

• Device discovery functions must be implemented in any ASMLib. Discover strings usually contain a prefix identifying which ASMLib this discover string is intended for. For the Linux ASMLib provided by Oracle, the prefix is ORCL:.

• I/O processing functions extend the operating system interface and provide an optimized asynchronous interface for scheduling I/O operations and managing I/O operation completion events. These functions are implemented as a device driver within the operating system kernel.

• The performance and reliability functions use the I/O processing control structures for passing metadata between the Oracle database and the back-end storage devices. They enable additional intelligence on the part of back-end storage.

Note: The database can load multiple ASMLibs, each handling different disks.


Oracle Linux ASMLib Installation: Overview

1. Install the ASMLib packages on each node:
– http://otn.oracle.com/tech/linux/asmlib
– Install oracleasm-support, oracleasmlib, and kernel-related packages
2. Configure ASMLib on each node:
– Load the ASM driver and mount the ASM driver file system
– Use the oracleasm script with the configure option
3. Make disks available to ASMLib by marking disks using oracleasm createdisk on one node.
4. Make sure that disks are visible on other nodes using oracleasm scandisks.
5. Use appropriate discovery strings for this ASMLib.

Oracle Linux ASMLib Installation: Overview

You can download the Oracle ASMLib software from the Oracle Technology Network Web site. There are three packages for each Linux platform. The two essential packages are the oracleasmlib package, which provides the actual ASM library, and the oracleasm-support package, which provides the utilities to configure and enable the ASM driver. The remaining package provides the kernel driver for the ASMLib.

After the ASMLib software is installed, you need to make the ASM driver available by executing the /etc/init.d/oracleasm configure command. This operation creates the /dev/oracleasm mount point used by the ASMLib to communicate with the ASM driver. When using RAC, installation and configuration must be completed on all nodes of the cluster.

To place a disk under ASM management, it must first be marked to prevent inadvertent use of incorrect disks by ASM. This is accomplished by using the /etc/init.d/oracleasm createdisk command. With RAC, this operation needs to be performed on only one node because it is a shared-disk architecture. However, the other nodes in the cluster need to ensure that the disk is seen and valid. Therefore, the other nodes in the cluster need to execute the /etc/init.d/oracleasm scandisks command.

After the disks are marked, the ASM_DISKSTRING initialization parameter can be set to an appropriate value.


Oracle Linux ASMLib Installation

• Install the packages as the root user:

# rpm -i oracleasm-support-version.arch.rpm \
oracleasm-kernel-version.arch.rpm \
oracleasmlib-version.arch.rpm

• Run oracleasm with the configure option:

# /etc/init.d/oracleasm configure

– Provide the oracle UID as the driver owner.
– Provide the dba GID as the group of the driver.
– Load the driver at system startup.

Oracle Linux ASMLib Installation

If you do not use the ASM library driver, you must bind each disk device that you want to use to a raw device. To install and configure the ASM library driver and utilities, perform the following steps:

1. Enter the following command to determine the kernel version and architecture of the system:

# uname -rm

2. If necessary, download the required ASM library driver packages from the OTN Web site. You must download the following three packages, where version is the version of the ASM library driver, arch is the system architecture, and kernel is the kernel version you are using:

oracleasm-support-version.arch.rpm
oracleasm-kernel-version.arch.rpm
oracleasmlib-version.arch.rpm

3. Install the proper packages for your platform. For example, if you are using the Red Hat Enterprise Linux AS 3.0 enterprise kernel, enter a command similar to the following:

# rpm -i oracleasm-support-1.0.0-1.i386.rpm \
oracleasm-2.4.9-e-enterprise-1.0.0-1.i686.rpm \
oracleasmlib-1.0.0-1.i386.rpm


Oracle Linux ASMLib Installation (continued)

4. Enter the following command to run the oracleasm initialization script with the configure option:
# /etc/init.d/oracleasm configure
You will be prompted for the following:
- The UID of the driver owner. This will be the UID for the oracle user.
- The GID of the driver group. This will be the GID for the dba group.
- Whether the ASMLib driver should be loaded at startup. The correct answer is YES.
The script then completes the following tasks:
- Creates the /etc/sysconfig/oracleasm configuration file
- Creates the /dev/oracleasm mount point
- Loads the oracleasm kernel module
- Mounts the ASM library driver file system

5. Repeat this procedure on all cluster nodes where you want to install RAC.


ASM Library Disk Creation

• Identify the device name for the disks that you want to use with the fdisk -l command.
• Create a single whole-disk partition on the disk device with fdisk.
• Enter a command similar to the following to mark the shared disk as an ASM disk:

# /etc/init.d/oracleasm createdisk disk1 /dev/sdb1

• To make the disk available on the other nodes, enter the following as root on each node:

# /etc/init.d/oracleasm scandisks

• Set the ASM_DISKSTRING parameter.

ASM Library Disk Creation

To configure the disk devices that you want to use in an ASM disk group, complete the following steps:

1. If necessary, install the shared disks that you intend to use for the disk group and restart the system.

2. To identify the device name for the disks that you want to use, enter the following command:

# /sbin/fdisk -l

3. Using fdisk, create a single whole-disk partition on the device that you want to use.
4. Enter a command similar to the following to mark a disk as an ASM disk:

# /etc/init.d/oracleasm createdisk disk1 /dev/sdb1

In this example, disk1 is the tag or name that you want to assign to the disk.

5. To make the disk available on other cluster nodes, enter the following command as root on each node:

# /etc/init.d/oracleasm scandisks

This command identifies all the shared disks attached to the node that are marked as ASM disks.


ASM Library Disk Configuration (continued)

Important oracleasm options:
• configure: Use this option to reconfigure the ASM library driver, if necessary.
• enable/disable: Use the disable and enable options to change the behavior of the ASM library driver when the system starts. The enable option causes the ASM library driver to load when the system starts.
• start/stop/restart: Use the start, stop, and restart options to load or unload the ASM library driver without restarting the system.
• createdisk: Use this option to mark a disk for use with the ASM library and name it.
• deletedisk: Use the deletedisk option to unmark a named disk device. Do not use this command to unmark disks that are being used by an ASM disk group. You must drop the disk from the ASM disk group before you unmark it.
• querydisk: Use this option to determine whether a disk device or disk name is being used by the ASM library driver.
• listdisks: Use this option to list the disk names of marked ASM library driver disks.
• scandisks: Use the scandisks option to enable cluster nodes to identify which shared disks have been marked as ASM library driver disks on another node.

After you have prepared your disks, set the ASM_DISKSTRING initialization parameter to an appropriate value. The oracleasm script marks disks with an ASM header label. You can set the ASM_DISKSTRING parameter to the value ORCL:DISK*. This setting enables ASM to scan and qualify all disks with that header label.
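Setting the discovery string and checking the result can be sketched in SQL*Plus against the ASM instance. This is a minimal sketch, assuming ASMLib-labeled disks named DISK1, DISK2, and so on; the label pattern is from the text above, and the verification query simply lists what discovery finds:

```sql
-- Restrict ASM disk discovery to ASMLib-labeled disks. The ORCL: prefix
-- routes discovery through the Linux ASMLib driver, as described above.
ALTER SYSTEM SET ASM_DISKSTRING = 'ORCL:DISK*';

-- Verify which disks the discovery string now finds (run in the ASM
-- instance); HEADER_STATUS shows CANDIDATE, MEMBER, and so on.
SELECT path, header_status
FROM   v$asm_disk;
```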


ASM Administration

• ASM instance

• Disk groups and disks

• Files


ASM Instance Functionalities

[Slide diagram: a database instance connected to an ASM instance, where the administrative commands CREATE DISKGROUP, ALTER DISKGROUP, DROP DISKGROUP, and ALTER SYSTEM {ENABLE | DISABLE} RESTRICTED SESSION are issued.]

ASM Instance Functionalities

The main goal of an ASM instance is to manage disk groups and protect their data. ASM instances also communicate file layout to database instances. In this way, database instances can directly access files stored in disk groups.

There are several new disk group administrative commands. They all require the SYSDBA privilege and must be issued from an ASM instance.

You can add new disk groups. You can also modify existing disk groups to add new disks, remove existing ones, and perform many other operations. You can remove existing disk groups.

Finally, you can prevent database instances from connecting to an ASM instance. When the ALTER SYSTEM ENABLE RESTRICTED SESSION command is issued to an ASM instance, database instances cannot connect to that ASM instance. Conversely, ALTER SYSTEM DISABLE RESTRICTED SESSION enables connections from database instances. This command enables an ASM instance to start up and mount disk groups for the purpose of maintenance without allowing database instances to access the disk groups.
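The commands above can be sketched as a short SQL*Plus session in the ASM instance. The disk group name dgroupA and the device paths are illustrative, not from the course:

```sql
-- Create a disk group (each disk falls into its own failure group here).
CREATE DISKGROUP dgroupA NORMAL REDUNDANCY
  DISK '/dev/rdsk/c1t1d0s2', '/dev/rdsk/c1t2d0s2';

-- Modify an existing disk group by adding a disk.
ALTER DISKGROUP dgroupA ADD DISK '/dev/rdsk/c1t3d0s2';

-- Block, then re-enable, database instance connections for maintenance.
ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER SYSTEM DISABLE RESTRICTED SESSION;

-- Remove the disk group when it is no longer needed.
DROP DISKGROUP dgroupA;
```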


ASM Instance Creation

ASM Instance Creation

While creating an ASM-enabled database, the DBCA determines whether an ASM instance already exists on your host. If there is one, it presents the list of disk groups that the instance manages. You can then select which of these disk groups are used for ASM-enabled database storage. When the ASM instance discovery returns an empty list, the DBCA creates a new ASM instance.

As part of the ASM instance creation process, the DBCA automatically creates an entry in the oratab file on supported platforms. This entry is used for discovery purposes. On Windows platforms, where a services mechanism is used, the DBCA automatically creates an Oracle Service and the appropriate registry entry to facilitate the discovery of ASM instances.

The following configuration files are also automatically created by the DBCA at the time of ASM instance creation: the ASM instance parameter file and the ASM instance password file.

Before creating the ASM instance, you can specify some initialization parameters for the ASM instance. After the ASM instance is created, the DBCA allows you to create new disk groups that you can use to store your database.

Note: ASM instances are smaller than database instances. A 64 MB SGA should be sufficient for all but the largest ASM installations.


ASM Instance Initialization Parameters

INSTANCE_TYPE = ASM

DB_UNIQUE_NAME = +ASM

ASM_POWER_LIMIT = 1

ASM_DISKSTRING = '/dev/rdsk/*s2', '/dev/rdsk/c1*'

ASM_DISKGROUPS = dgroupA, dgroupB

LARGE_POOL_SIZE = 8MB

PROCESSES = 25 + 15*<#DB inst using ASM for their storage>

ASM Instance Initialization Parameters

• INSTANCE_TYPE should be set to ASM for ASM instances.
• DB_UNIQUE_NAME specifies the service provider name for which this ASM instance manages disk groups. The default value of +ASM should be valid for you.
• ASM_POWER_LIMIT controls the speed for a rebalance operation. Possible values range from 1 through 11, with 11 being the fastest. If omitted, this value defaults to 1. The number of slaves for a rebalance operation is derived from the parallelization level specified in a manual rebalance command (POWER), or by the ASM_POWER_LIMIT parameter.
• ASM_DISKSTRING is an operating system–dependent value used by ASM to limit the set of disks considered for discovery. When a new disk is added to a disk group, each ASM instance that has the disk group mounted must be able to discover the new disk by using its ASM_DISKSTRING. If not specified, it is assumed to be NULL and ASM disk discovery finds all disks to which the ASM instance has read and write access.
• ASM_DISKGROUPS is the list of names of disk groups to be mounted by an ASM instance at startup, or when the ALTER DISKGROUP ALL MOUNT command is used. ASM automatically adds a disk group to this parameter when the disk group is successfully mounted, and automatically removes it when it is dismounted, except for dismounts at instance shutdown.

Note: The internal packages used by ASM instances are executed from the large pool; therefore, you must set the value of the initialization parameter LARGE_POOL_SIZE to at least 8 MB. For other buffer parameters, you can use their default values.
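The rebalance and mount behaviors these parameters control can be exercised directly. This is a hedged sketch run in the ASM instance; the disk group name dgroupA is illustrative:

```sql
-- Override ASM_POWER_LIMIT for a single rebalance operation.
ALTER DISKGROUP dgroupA REBALANCE POWER 5;

-- Mount every disk group listed in ASM_DISKGROUPS.
ALTER DISKGROUP ALL MOUNT;

-- Watch the rebalance progress (long-running operations only appear
-- in the ASM instance).
SELECT operation, state, power, est_minutes
FROM   v$asm_operation;
```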


RAC and ASM Instances Creation

RAC and ASM Instances Creation

When using the Database Configuration Assistant (DBCA) to create ASM instances on your cluster, you follow exactly the same steps as for a single-instance environment. The only exceptions are the first and third steps: you must select the Oracle Real Application Clusters database option in the first step, and then select all nodes of your cluster. The DBCA automatically creates one ASM instance on each selected node. The first instance is called +ASM1, the second +ASM2, and so on.


ASM Instance Initialization Parameters and RAC

• CLUSTER_DATABASE: This parameter must be set to TRUE.
• ASM_DISKGROUPS:
– Multiple instances can have different values.
– Shared disk groups must be mounted by each ASM instance.
• ASM_DISKSTRING:
– Multiple instances can have different values.
– With shared disk groups, every instance should be able to see the common pool of physical disks.
• ASM_POWER_LIMIT: Multiple instances can have different values.

ASM Instance Initialization Parameters and RAC

To enable ASM instances to be clustered together in a RAC environment, each ASM instance parameter file must set the CLUSTER_DATABASE parameter to TRUE. This enables the Global Cache Services to be started on each ASM instance.

Although it is possible for multiple ASM instances to have different values for their ASM_DISKGROUPS parameter, it is recommended that each ASM instance mount the same set of disk groups. This enables disk groups to be shared amongst ASM instances for recovery purposes. In addition, all disk groups used to store one RAC database must be shared by all ASM instances in the cluster.

Consequently, if you are sharing disk groups amongst ASM instances, their ASM_DISKSTRING initialization parameters must point to the same set of physical media. However, this parameter does not need to have the same setting on each node. For example, assume that the physical disks of a disk group are mapped by the OS on node A as /dev/rdsk/c1t1d0s2, and on node B as /dev/rdsk/c2t1d0s2. Although both nodes have different disk string settings, they locate the same devices via the OS mappings. This situation can occur when the hardware configurations of node A and node B are different, for example, when the nodes use different controllers as in the above example. ASM handles this situation because it inspects the contents of the disk header block to determine the disk group to which a disk belongs, rather than attempting to maintain a fixed list of path names.
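One way to check that every clustered ASM instance has actually mounted the same shared disk groups is to query the global (GV$) form of the disk group view. A minimal sketch, runnable from any ASM instance in the cluster:

```sql
-- Each disk group should show one MOUNTED row per ASM instance
-- (INST_ID) if the recommendation above is being followed.
SELECT inst_id, name, state
FROM   gv$asm_diskgroup
ORDER  BY name, inst_id;
```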


Discovering New ASM Instances with EM

<Target TYPE="osm_instance" NAME="+ASMn" DISPLAY_NAME="+ASMn">
  <Property NAME="SID" VALUE="+ASMn"/>
  <Property NAME="MachineName" VALUE="clusnode1_vip"/>
  <Property NAME="OracleHome" VALUE="/u01/app/oracle/..."/>
  <Property NAME="UserName" VALUE="sys"/>
  <Property NAME="password" VALUE="manager" ENCRYPTED="FALSE"/>
  <Property NAME="Role" VALUE="sysdba"/>
  <Property NAME="Port" VALUE="1521"/>
</Target>

If new ASM targets are not discovered:

$ emctl config agent addtarget <filename>

$ emctl stop agent

$ emctl start agent

Discovering New ASM Instances with EM

If an ASM instance is added in an existing RAC environment, it is not discovered automatically by Database Control. You must perform the following steps to discover the ASM target:

1. Create an XML file in the format shown in the slide above. You must use the correct values for the following parameters:
- NAME: The ASM target name. Usually hostname_ASMSid.
- DISPLAY_NAME: The ASM target display name. Usually +ASMn.
- "SID": The ASM SID. Usually +ASMn.
- "MachineName": The host name. In a RAC environment, use the corresponding VIP.
- "OracleHome": The ASM Oracle Home
- "UserName": The user name (default is SYS) for the ASM instance.
- "password": The ASM user's password
- "Role": The ASM user's role (default is SYSDBA)
- "Port": The ASM port. By default, it is 1521.
2. Run emctl config agent addtarget <filename>. This command appends the above target to the list of targets in the targets.xml configuration file of Database Control.
3. Restart the agent.

Note: Repeat these steps on each node by using the corresponding ASM instance's name.


Accessing an ASM Instance

[Slide diagram: all operations are performed by connecting to the ASM instance AS SYSDBA; the ASM instance manages the disk groups in the storage system.]

Accessing an ASM Instance

ASM instances do not have a data dictionary, so the only way to connect to one is by using OS authentication, that is, as SYSDBA. To connect remotely, a password file must be used.

Normally, the SYSDBA privilege is granted through the use of an operating system group. On UNIX, this is typically the dba group. By default, members of the dba group have SYSDBA privileges on all instances on the node, including the ASM instance. Users who connect to the ASM instance with the SYSDBA privilege have complete administrative access to all disk groups in the system.


Dynamic Performance View Additions

[Slide diagram: disk groups A and B in the storage system, exposed through the new dynamic performance views V$ASM_DISKGROUP, V$ASM_CLIENT, V$ASM_DISK, V$ASM_FILE, V$ASM_TEMPLATE, V$ASM_ALIAS, and V$ASM_OPERATION.]

Dynamic Performance View Additions

In an ASM instance, V$ASM_CLIENT contains one row for every database instance using a disk group managed by the ASM instance. In a database instance, it has one row for each disk group, with the database name and the ASM instance name.

In an ASM instance, V$ASM_DISKGROUP contains one row for every disk group discovered by the ASM instance. In a database instance, V$ASM_DISKGROUP has a row for every disk group, mounted or dismounted.

In an ASM instance, V$ASM_TEMPLATE contains one row for every template present in every disk group mounted by the ASM instance. In a database instance, it has rows for all templates in mounted disk groups.

In an ASM instance, V$ASM_DISK contains one row for every disk discovered by the ASM instance, including disks that are not part of any disk group. In a database instance, it has rows for the disks in the disk groups in use by the database instance.

In an ASM instance, V$ASM_OPERATION contains one row for every active ASM long-running operation executing in the ASM instance. In a database instance, it contains no rows.

In an ASM instance, V$ASM_FILE contains one row for every ASM file in every disk group mounted by the ASM instance. In a database instance, it contains no rows.

In an ASM instance, V$ASM_ALIAS contains one row for every alias present in every disk group mounted by the ASM instance. In a database instance, it contains no rows.
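Two typical queries against these views, run in the ASM instance, can be sketched as follows; the column lists shown are a subset of what the views expose:

```sql
-- Capacity and redundancy type for each discovered disk group.
SELECT group_number, name, type, state, total_mb, free_mb
FROM   v$asm_diskgroup;

-- Which database instances are currently using which disk groups.
SELECT group_number, instance_name, db_name, status
FROM   v$asm_client;
```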


ASM Home Page

ASM Home Page

Enterprise Manager provides a user-friendly graphical interface to Oracle database management, administration, and monitoring tasks. Oracle Database 10g extends the existing functionality to transparently support the management, administration, and monitoring of Oracle databases using ASM storage. It also adds support for the new management tasks required for administration of the ASM instance and ASM disk groups.

This home page shows the status of the ASM instance along with the metrics and alerts generated by the collection mechanisms. This page also provides the startup and shutdown functionality. When you click the Alerts link, a page providing alert details appears. The Disk Group Usage chart shows the space used by each client database along with the free space.

Note: You can reach the ASM home page from the Database home page. In the General section of the Database home page, click the +ASM link.


ASM Performance Page

ASM Performance Page

The Performance tab of the ASM home page shows the I/O response time and throughput for each disk group. You can further drill down to view disk-level performance metrics.


ASM Configuration Page

ASM Configuration Page

The Configuration tab of the ASM home page enables you to view or modify the initialization parameters of the ASM instance.


Starting Up an ASM Instance

$ sqlplus /nolog

SQL> CONNECT / AS sysdba

Connected to an idle instance.

SQL> STARTUP;

ASM instance started

Total System Global Area 147936196 bytes

Fixed Size 324548 bytes

Variable Size 96468992 bytes

Database Buffers 50331648 bytes

Redo Buffers 811008 bytes

ASM diskgroups mounted

Starting Up an ASM Instance

ASM instances are started similarly to database instances, except that the initialization parameter file contains the entry INSTANCE_TYPE=ASM. Setting this parameter to ASM signals the Oracle executable that an ASM instance is starting, and not a database instance.

Furthermore, for ASM instances, the MOUNT option during startup tries to mount the disk groups specified by the ASM_DISKGROUPS initialization parameter. No database is mounted in this case.

Other STARTUP clauses for ASM instances are similar to those for database instances. For example, RESTRICT prevents database instances from connecting to this ASM instance. OPEN is invalid for an ASM instance. NOMOUNT starts up an ASM instance without mounting any disk group.
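The startup variants described above can be sketched as alternative SQL*Plus commands; the disk group name dgroupA is illustrative:

```sql
-- Alternative 1: mount disk groups but keep database instances out.
STARTUP RESTRICT

-- Alternative 2: start the ASM instance without mounting any disk
-- group, then mount one by name afterward.
STARTUP NOMOUNT
ALTER DISKGROUP dgroupA MOUNT;
```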


Shutting Down an ASM Instance

[Slide diagram: SHUTDOWN NORMAL on the ASM instance waits for the dependent database instance to shut down first, whereas SHUTDOWN IMMEDIATE on the ASM instance causes the dependent database instance to be immediately aborted.]

Shutting Down an ASM Instance

Because ASM manages the disk groups that hold the database files and their metadata, a shutdown of the ASM instance cannot proceed until all of its client database instances have been stopped as well. With ASM SHUTDOWN NORMAL, the ASM instance begins shutdown and waits for all sessions to disconnect, just as typical database instances do. In addition, because ASM has a persistent database instance connection, the database instances must be shut down first for ASM to complete its shutdown.

With ASM SHUTDOWN IMMEDIATE, TRANSACTIONAL, or ABORT, ASM immediately terminates its database instance connections, and as a result, all dependent databases immediately abort. However, for the IMMEDIATE and TRANSACTIONAL cases, ASM waits for any in-progress ASM SQL to complete before shutting down the ASM instance.

In a single ASM instance configuration, if the ASM instance fails while disk groups are open for update, then after the ASM instance reinitializes, it reads the disk group's log and recovers all transient changes. With multiple ASM instances sharing disk groups, if one ASM instance fails, another ASM instance automatically recovers transient ASM metadata changes caused by the failed instance.

The failure of a database instance does not affect ASM instances. An ASM instance is expected to be always functional on the host and must be brought up automatically whenever the host is restarted, using the auto-startup mechanism supported by the underlying operating system. For example, it should run as a Service under Windows.

Note: File system failure usually crashes a node.
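The orderly shutdown sequence described above can be sketched as follows:

```sql
-- First, from each client database instance:
SHUTDOWN IMMEDIATE

-- Then, from the ASM instance, which can now dismount its disk groups
-- and stop cleanly:
SHUTDOWN NORMAL
```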


ASM Administration

• ASM instance

• Disk groups and disks

• Files


ASM Disk Group

• Is a pool of disks managed as a logical unit
• Partitions total disk space into uniform-sized units
• Spreads each file evenly across all disks
• Provides coarse- or fine-grain striping based on file type
• Administers disk groups, not files

[Slide diagram: an ASM instance managing a disk group.]

ASM Disk Group

A disk group is a collection of disks managed as a logical unit. Storage is added and removed from disk groups in units of ASM disks. Every ASM disk has an ASM disk name, which is a name common to all nodes in a cluster. The ASM disk name abstraction is required because different hosts can use different operating system names to refer to the same disk.

ASM always evenly spreads files in 1 MB allocation-unit chunks across all the disks in a disk group. This is called COARSE striping. In this way, ASM eliminates the need for manual disk tuning. However, disks in a disk group should have similar size and performance characteristics to obtain optimal I/O tuning.

For most installations, there is only a small number of disk groups, for example, one disk group for a work area and another for a recovery area.

For files (such as log files) that require low latency, ASM provides fine-grained (128 KB) striping. FINE striping stripes each allocation unit, breaking up medium-sized I/O operations into multiple, smaller I/O operations that execute in parallel.

As the number of files and disks increases, you have to manage only a constant number of disk groups. From a database perspective, disk groups can be specified as the default location for files created in the database.

Note: Each disk group is self-describing, containing its own file directory, disk directory, and other directories.
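Which file types default to COARSE or FINE striping can be seen in the template view. A minimal sketch, run in the ASM instance against a mounted disk group (group number 1 here is illustrative):

```sql
-- System templates map file types (DATAFILE, ONLINELOG, and so on)
-- to a striping granularity and a redundancy level.
SELECT name, stripe, redundancy
FROM   v$asm_template
WHERE  group_number = 1;
```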


Failure Group

Diagram: Disk group A consists of three failure groups, each attached to its own controller (Failure group 1 on Controller 1, Failure group 2 on Controller 2, and Failure group 3 on Controller 3); file extents are spread across the disks of all three groups.

Failure Group

A failure group is a set of disks, inside one particular disk group, sharing a common resource whose failure needs to be tolerated. An example of a failure group is a string of SCSI disks connected to a common SCSI controller. A failure of the controller makes all of the disks on its SCSI bus unavailable, although each individual disk is still functional.

What constitutes a failure group is site specific. It is largely based on the failure modes that a site is willing to tolerate. By default, ASM assigns each disk to its own failure group. When creating a disk group or adding a disk to a disk group, administrators can specify their own grouping of disks into failure groups. After failure groups are identified, ASM can optimize file layout to reduce the unavailability of data due to the failure of a shared resource.
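To see how disks have been grouped, you can query the V$ASM_DISK fixed view, which exposes each disk's failure group assignment. This is a sketch; the group-number filter is illustrative.

```sql
-- Each row shows the ASM disk name, the failure group it belongs to,
-- and the operating system path of the disk.
SELECT name, failgroup, path
FROM   v$asm_disk
WHERE  group_number = 1
ORDER  BY failgroup, name;
```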


Disk Group Mirroring

• Mirror at AU level

• Mix primary and mirror AUs on each disk

• External redundancy: defers to hardware mirroring

• Normal redundancy:
– Two-way mirroring
– At least two failure groups

• High redundancy:
– Three-way mirroring
– At least three failure groups

Disk Group Mirroring

ASM has three disk group types that support different types of mirroring: external redundancy, normal redundancy, and high redundancy. External-redundancy disk groups do not provide mirroring. Use an external-redundancy disk group if you use hardware mirroring or if you can tolerate data loss as the result of a disk failure. Normal-redundancy disk groups support two-way mirroring. High-redundancy disk groups provide triple mirroring.

ASM uses a unique mirroring algorithm. ASM does not mirror disks; rather, it mirrors AUs. As a result, you need only spare capacity in your disk group, not a dedicated spare disk. When a disk fails, ASM automatically reconstructs the contents of the failed disk on the surviving disks in the disk group by reading the mirrored contents from the surviving disks. In this way, the I/O hit from a disk failure is spread across several disks rather than concentrated on a single disk that mirrors the failed drive.

When ASM allocates a primary AU of a file to one disk in a disk group, it allocates a mirror copy of that AU to another disk in the disk group. Primary AUs on a given disk can have their respective mirror AUs on one of several partner disks in the disk group. Each disk in a disk group has the same ratio of primary and mirror AUs. ASM ensures that a primary AU and its mirror copy never reside in the same failure group. If you define failure groups for your disk group, ASM can tolerate the simultaneous failure of multiple disks in a single failure group.

Note: For disk groups with external redundancy, failure groups are not used because disks in an external-redundancy disk group are presumed to be highly available.


Disk Group Dynamic Rebalancing

• Automatic online rebalancing whenever the storage configuration changes

• Moves only data proportional to the storage added

• No need for manual I/O tuning

• Online migration to new storage

Disk Group Dynamic Rebalancing

• With ASM, the rebalance process is very easy and happens without any intervention from the DBA or system administrator. ASM automatically rebalances a disk group whenever disks are added or dropped.

• By using index techniques to spread AUs on the available disks, ASM does not need to restripe all of the data. Instead, it moves only an amount of data proportional to the amount of storage added or removed, in order to evenly redistribute the files and maintain a balanced I/O load across the disks in a disk group.

• With I/O balanced whenever files are allocated and whenever the storage configuration changes, the DBA never needs to search for hot spots in a disk group and manually move data to restore a balanced I/O load.

• It is more efficient to add or drop multiple disks at the same time, so that they are rebalanced as a single operation. This avoids unnecessary movement of data. With this technique, it is easy to achieve online migration of your data. All you need to do is add the new disks in one operation and drop the old ones in one operation.
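The online-migration technique described above can be sketched as a single statement that adds the new disks and drops the old ones, so that only one rebalance runs. The device path and disk names here are illustrative, not from the course.

```sql
-- Add the new storage and drop the old disks in one operation;
-- ASM then performs a single rebalance instead of two.
ALTER DISKGROUP dgroupA
  ADD  DISK '/devices/C*'
  DROP DISK A1, A2, A3
  REBALANCE POWER 8;
```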


ASM Administration Page

ASM Administration Page

The Administration tab of the ASM home page lists the disk groups enumerated from V$ASM_DISKGROUP.

On this page, you can create, edit, or drop a disk group. You can also perform disk group operations such as mount, dismount, and rebalance on a selected disk group.

By clicking a particular disk group, you can view all of the disks belonging to that disk group, and you can add or delete disks as well as check or resize them.

From the Disk Group page, you can also access the Performance, Templates, and Files pages, where you can define your templates and aliases.


Create Disk Group Page

Create Disk Group Page

Click the Create button on the Administration page to open this page. Here you enter the disk group name, the redundancy mechanism, and the list of disks that you would like to include in the new disk group.

The list of disks is obtained from the V$ASM_DISK fixed view. By default, only disks with a header status of CANDIDATE are shown in the list.
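Outside Database Control, the same candidate list can be obtained directly from the V$ASM_DISK fixed view. A minimal sketch:

```sql
-- Disks that have been discovered but are not yet part of any
-- disk group report a header status of CANDIDATE.
SELECT path, header_status
FROM   v$asm_disk
WHERE  header_status = 'CANDIDATE';
```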


ASM Disk Groups with EM in RAC

ASM Disk Groups with EM in RAC

When you add a new disk group from one ASM instance, the disk group is not automatically mounted by the other ASM instances. If you want to mount the newly added disk group on all ASM instances by using SQL*Plus, for example, you must manually mount the disk group on each ASM instance.

However, if you use Database Control to add a disk group, the disk group definition includes a check box to indicate whether the disk group is automatically mounted on all the clustered ASM instances. The same applies when you mount and dismount ASM disk groups by using Database Control: a check box lets you indicate which instances mount or dismount the disk group.
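When working from SQL*Plus instead of Database Control, the manual step looks like the following (the disk group name is illustrative). The statement must be issued on every ASM instance that should see the disk group.

```sql
-- Repeat on each ASM instance in the cluster that should
-- have access to the disk group.
ALTER DISKGROUP dgroupA MOUNT;
```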


Disk Group Performance Page and RAC

Disk Group Performance Page and RAC

On the default Disk Group Performance page, you can see instance-level performance details by clicking a performance characteristic such as Write Response Time or I/O Response Time.

You can access the Disk Group Performance page from one of the Automatic Storage Management home pages by clicking the Administration tab. On the Administration Disk Groups page, click the appropriate disk group link in the Name column. When the corresponding Disk Group page is displayed, click the Performance tab.


Create or Delete Disk Groups

CREATE DISKGROUP dgroupA NORMAL REDUNDANCY

FAILGROUP controller1 DISK

'/devices/A1' NAME diskA1 SIZE 120G FORCE,

'/devices/A2',

'/devices/A3'

FAILGROUP controller2 DISK

'/devices/B1',

'/devices/B2',

'/devices/B3';

DROP DISKGROUP dgroupA INCLUDING CONTENTS;

Create or Delete Disk Groups

Assume that ASM disk discovery identified the following disks in the directory /devices: A1, A2, A3, A4, B1, B2, B3, and B4. Suppose that disks A1, A2, A3, and A4 are on a separate SCSI controller from disks B1, B2, B3, and B4. The first example illustrates how to set up a disk group called DGROUPA with two failure groups: CONTROLLER1 and CONTROLLER2.

The example also uses NORMAL REDUNDANCY, the default redundancy characteristic, for the disk group. As shown in the example, you can provide an optional disk name. If you do not, ASM creates a default name of the form <group>_n, where <group> is the disk group name and n is the disk number. Optionally, you can also provide a size for the disk. If you do not, ASM attempts to determine the size of the disk; if the size cannot be determined, an error is returned. Over-specifying the capacity also returns an error, and under-specifying it limits what ASM uses. FORCE indicates that a specified disk should be added to the disk group even though the disk is already formatted as a member of an ASM disk group. Using the FORCE option for a disk that is not formatted as a member of an ASM disk group returns an error.

As shown by the second statement, you can delete a disk group along with all its files. To avoid accidental deletions, the INCLUDING CONTENTS option must be specified if the disk group still contains any files besides internal ASM metadata. The disk group must be mounted. After ensuring that none of the disk group files are open, the disk group and all its disks are removed, and the header of each disk is overwritten to eliminate the ASM formatting information.


Adding Disks to Disk Groups

ALTER DISKGROUP dgroupA ADD DISK

'/dev/rdsk/c0t4d0s2' NAME A5,

'/dev/rdsk/c0t5d0s2' NAME A6,

'/dev/rdsk/c0t6d0s2' NAME A7,

'/dev/rdsk/c0t7d0s2' NAME A8;

ALTER DISKGROUP dgroupA ADD DISK '/devices/A*';

Disk formatting

Disk group rebalancing

Adding Disks to Disk Groups

The example in the slide shows how to add disks to a disk group. You execute an ALTER DISKGROUP ADD DISK command to add the disks. The first statement adds four new disks to the DGROUPA disk group.

The second statement demonstrates the interactions of discovery strings. Consider the following configuration:

/devices/A1 is a member of disk group DGROUPA.
/devices/A2 is a member of disk group DGROUPA.
/devices/A3 is a member of disk group DGROUPA.
/devices/A4 is a candidate disk.

The second command adds A4 to the DGROUPA disk group. It ignores the other disks, even though they match the discovery string, because they are already part of the DGROUPA disk group. As shown by the diagram, when you add a disk to a disk group, the ASM instance ensures that the disk is addressable and usable. The disk is then formatted and the disk group is rebalanced. The rebalancing process is time-consuming because it moves AUs from every file onto the new disk.

Note: Rebalancing does not block any database operations. Its main impact is the additional I/O load on the system: the higher the power of the rebalance, the more I/O load it puts on the system, and thus the less I/O bandwidth is available for database I/Os.
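The default rebalance speed mentioned in the note is controlled by the ASM_POWER_LIMIT initialization parameter, which can be changed dynamically on the ASM instance. A sketch, with the value 5 chosen arbitrarily:

```sql
-- Raise the default rebalance power for subsequent operations.
-- Higher values rebalance faster at the cost of I/O bandwidth
-- available to database I/Os.
ALTER SYSTEM SET asm_power_limit = 5;
```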


Miscellaneous Alter Commands

ALTER DISKGROUP dgroupA DROP DISK A5;

ALTER DISKGROUP dgroupA DROP DISK A6

ADD FAILGROUP fred DISK '/dev/rdsk/c0t8d0s2' NAME A9;

ALTER DISKGROUP dgroupB REBALANCE POWER 5;

ALTER DISKGROUP dgroupA UNDROP DISKS;

ALTER DISKGROUP dgroupA DISMOUNT;

ALTER DISKGROUP dgroupA CHECK ALL;

Miscellaneous Alter Commands

The first statement shows how to remove one of the disks from disk group DGROUPA. The second statement shows how you can add and drop a disk in a single command. The big advantage in this case is that rebalancing is not started until the command completes.

The third statement shows how to cancel the dropping of a disk from a previous example. The UNDROP command operates only on pending drops; it has no effect after a drop completes.

The fourth statement rebalances the DGROUPB disk group if necessary. This command is generally not needed because rebalancing happens automatically as disks are added, dropped, or resized. However, it is useful if you want to use the POWER clause to override the default and maximum speed defined by the ASM_POWER_LIMIT initialization parameter. You can change the power level of an ongoing rebalance operation by reentering the command with a new level. A power level of zero causes rebalancing to halt until the command is either implicitly or explicitly reinvoked.

The fifth statement dismounts DGROUPA. The MOUNT and DISMOUNT options allow you to make one or more disk groups available or unavailable to the database instances.


Miscellaneous Alter Commands (continued)

The sixth statement shows how to verify the internal consistency of disk group metadata and repair any error found. It is also possible to use the NOREPAIR clause if you want only to be alerted about errors. Although the example requests a check across all disks in the disk group, checking can also be specified for a file or an individual disk. This command requires that the disk group be mounted. If any error is found, a summary error message is displayed and the details of the detected error are reported in the alert log.

Note: Except for the last two statements, these examples trigger a disk group rebalance.


Monitoring Long-Running Operations Using V$ASM_OPERATION

Column        Description
------------  -----------------------------------------------------
GROUP_NUMBER  Disk group
OPERATION     Type of operation: REBAL
STATE         State of operation: QUEUED or RUNNING
POWER         Power requested for this operation
ACTUAL        Power allocated to this operation
SOFAR         Number of allocation units moved so far
EST_WORK      Estimated number of remaining allocation units
EST_RATE      Estimated number of allocation units moved per minute
EST_MINUTES   Estimated amount of time (in minutes) for operation termination

Monitoring Long-Running Operations Using V$ASM_OPERATION

The ALTER DISKGROUP DROP, RESIZE, and REBALANCE commands return before the operation is completed. To monitor the progress of these long-running operations, you can query the V$ASM_OPERATION fixed view, whose columns are described in the table above.

Note: A power limit can be set to zero, but the halted operation then does not show up in V$ASM_OPERATION as an outstanding operation.
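A sketch of the monitoring query itself, selecting the columns described in the table:

```sql
-- One row per long-running ASM operation; run on the ASM instance.
SELECT group_number, operation, state,
       power, actual, sofar, est_work, est_rate, est_minutes
FROM   v$asm_operation;
```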


ASM Administration

• ASM instance

• Disk groups and disks

• Files


ASM Files

Database file

ASM file automatically spread inside dgroupA

CREATE TABLESPACE sample DATAFILE '+dgroupA';

Automatic ASM file management

RMAN: mandatory for backups

ASM Files

ASM files are Oracle database files stored in ASM disk groups. When a file is created, certain file attributes are permanently set, among them its protection policy and its striping policy.

ASM files are Oracle-managed files: any file created by ASM is automatically deleted when it is no longer needed. However, ASM files that are created by specifying a user alias are not considered Oracle-managed files and are not automatically deleted.

When ASM creates a data file for a permanent tablespace (or a temp file for a temporary tablespace), the data file is set to auto-extensible with an unlimited maximum size and a 100 MB default size. An AUTOEXTEND clause may override this default.

All circumstances in which a database must create a new file allow the specification of a disk group, with a unique file name generated automatically. With ASM, file operations are specified in terms of database objects. Administering a database never requires knowing the name of a file, although file names are exposed through some data dictionary views and through the ALTER DATABASE BACKUP CONTROLFILE TO TRACE command.

Because each file in a disk group is physically spread across all disks in the disk group, a backup of a single disk is not useful. Database backups of ASM files must be made with RMAN.

Note: ASM does not manage binaries, alert logs, trace files, password files, or Cluster Ready Services files.


ASM File Names

Diagram: An ASM file name takes one of six forms (fully qualified, numeric, alias, alias with template, incomplete, incomplete with template), used in three contexts (reference, single-file creation, multiple-file creation).

ASM File Names

ASM file names can take several forms:

• Fully qualified
• Numeric
• Alias
• Alias with template
• Incomplete
• Incomplete with template

The correct form to use for a particular situation depends on the context in which the file name is used. There are three such contexts:

• When an existing file is being referenced
• When a single file is about to be created
• When multiple files are about to be created

As shown in the graphic, each context allows certain file name forms.

Note: ASM files that are created by specifying a user alias are not considered Oracle-managed files. Such files are not automatically deleted.


ASM File Name Syntax

1. +<group>/<dbname>/<file_type>/<tag>.<file#>.<incarnation#>

2. +<group>.<file#>.<incarnation#>

3. +<group>/<directory1>/…/<directoryn>/<file_name>

4. +<group>/<directory1>/…/<directoryn>/<file_name>(<temp>)

5. +<group>

6. +<group>(<temp>)

ASM File Name Syntax

The examples in the slide give you the syntax that you can use to refer to ASM files:

1. Fully qualified ASM file names are used for referencing existing ASM files. They specify a disk group name, a database name, a file type, a type-specific tag, a file number, and an incarnation number. The fully qualified name is automatically generated for every ASM file when it is created; even if a file is created through an alias, a fully qualified name is also created. Because ASM assigns the name as part of file creation, fully qualified names cannot be used for file creation. The names can be found in the same hierarchical directory structure as alias names. All the information in the name is automatically derived by ASM. Fully qualified ASM file names are also called system aliases, implying that these aliases are created and maintained by ASM; end users cannot modify them. A fully qualified name has the following form:

+<group>/<dbname>/<file type>/<tag>.<file>.<incarnation>

where:
- <group> is the disk group name.
- <dbname> is the database name to which the file belongs.
- <file type> is the Oracle file type (CONTROLFILE, DATAFILE, and so on).
- <tag> is type-specific information about the file (such as the tablespace name for a data file).
- <file>.<incarnation> is the file/incarnation number pair used for uniqueness.


ASM File Name Syntax (continued)

An example of a fully qualified ASM file name is the following:

+dgroupA/db1/controlfile/CF.257.8675309

2. Numeric ASM file names are used for referencing existing ASM files. They specify a disk group name, a file number, and an incarnation number. Because ASM assigns the file and incarnation numbers as part of creation, numeric ASM file names cannot be used for file creation. These names do not appear in the ASM directory hierarchy; they are derived from the fully qualified name. They are never reported to you by ASM, but they can be used in any interface that needs the name of an existing file. The following is an example of a numeric ASM file name:

+dgroupA.257.8675309

3. Alias ASM file names are used both for referencing existing ASM files and for creating new ASM files. Alias names specify a disk group name, but instead of a file and incarnation number, they include a user-friendly name string. Alias ASM file names are distinguished from fully qualified or numeric names because they do not end in a dotted pair of numbers; it is an error to attempt to create an alias that ends with a dotted pair of numbers. Alias file names allow administrators to reference ASM files with human-understandable names. They are implemented using a hierarchical directory structure, with the slash (/) separating name components. Name components are in UTF-8 format and may be up to 48 bytes in length, but must not contain a slash. This implies a 48-character limit in a single-byte language, and a lower limit in a multibyte language depending on how many multibyte characters are present in the string. The total length of an alias file name, including all components and separators, is limited to 256 bytes. Components of alias file names can contain spaces, but a space must not be the first or last character of a component. Alias ASM file names are case-insensitive. Here are possible examples of ASM alias file names:

+dgroupA/myfiles/control_file1
+dgroupA/A rather LoNg and WeiRd name/for a file

Every ASM file is given a fully qualified name during file creation based on its attributes. An administrator may create an additional alias for each file during file creation, or an alias can be created for an existing file by using the ALTER DISKGROUP ADD ALIAS command. An alias ASM file name is typically used in the CONTROL_FILES initialization parameter. An administrator may create directory structures as needed to support whatever naming convention is desired, subject to the 256-byte limit.

4. Alias ASM file names with templates are used only for ASM file creation operations. They specify a disk group name, an alias name, and a file creation template name. (See the following slide in this lesson.) If an alias ASM file name with template is specified and the alias portion refers to an existing file, the template specification is ignored. An example of an alias ASM file name with template is the following:

+dgroupA/config1(spfile)

5. Incomplete ASM file names are used only for file creation operations. They consist of a disk group name only. ASM uses a default template for incomplete ASM file names, as determined by the file type. An example of an incomplete ASM file name is the following:

+dgroupA

6. Incomplete ASM file names with templates are used only for file creation operations. They consist of a disk group name followed by a template name. The template name determines the file creation attributes applied to the file. An example of an incomplete ASM file name with template is the following:

+dgroupA(datafile)


ASM File Name Mapping

Oracle File Type                      <File Type>   <Tag>              Default Template
Control files                         controlfile   CF/BCF             CONTROLFILE
Data files                            datafile      <ts_name>_<file#>  DATAFILE
Online logs                           online_log    log_<thread#>      ONLINELOG
Archive logs                          archive_log   parameter          ARCHIVELOG
Temp files                            temp          <ts_name>_<file#>  TEMPFILE
Data file backup pieces               backupset     Client specified   BACKUPSET
Data file incremental backup pieces   backupset     Client specified   BACKUPSET
Archive log backup pieces             backupset     Client specified   BACKUPSET
Data file copies                      datafile      <ts_name>_<file#>  DATAFILE
Initialization parameters             init          spfile             PARAMETERFILE
Broker configurations                 drc           drc                DATAGUARDCONFIG
Flashback logs                        rlog          <thread#>_<log#>   FLASHBACK
Change tracking bitmaps               CTB           bitmap             CHANGETRACKING
Auto backup                           AutoBackup    Client specified   AUTOBACKUP
Data Pump dump sets                   Dumpset       dump               DUMPSET
Cross-platform converted data files                                    XTRANSPORT

ASM File Name Mapping

ASM supports most file types required by the database. However, certain classes of files, such as operating system executables, are not supported by ASM. Each file type is associated with a default template name.

The table specifies the ASM-supported file types with their corresponding naming conventions. ASM applies attributes to the files that it creates as specified by the corresponding system default template.


ASM File Templates

System Template   Striped   External Redundancy   Normal Redundancy   High Redundancy
CONTROLFILE       fine      unprotected           2-way mirror        3-way mirror
DATAFILE          coarse    unprotected           2-way mirror        3-way mirror
ONLINELOG         fine      unprotected           2-way mirror        3-way mirror
ARCHIVELOG        coarse    unprotected           2-way mirror        3-way mirror
TEMPFILE          coarse    unprotected           2-way mirror        3-way mirror
BACKUPSET         coarse    unprotected           2-way mirror        3-way mirror
XTRANSPORT        coarse    unprotected           2-way mirror        3-way mirror
PARAMETERFILE     coarse    unprotected           2-way mirror        3-way mirror
DATAGUARDCONFIG   coarse    unprotected           2-way mirror        3-way mirror
FLASHBACK         fine      unprotected           2-way mirror        3-way mirror
CHANGETRACKING    coarse    unprotected           2-way mirror        3-way mirror
AUTOBACKUP        coarse    unprotected           2-way mirror        3-way mirror
DUMPSET           coarse    unprotected           2-way mirror        3-way mirror

ASM File Templates

ASM file templates are named collections of attributes applied to files during file creation. Templates simplify file creation by mapping a complex set of file attributes onto a single name. Templates apply to files but are associated with a disk group.

When a disk group is created, ASM establishes a set of initial system default templates for that disk group. These templates contain the default attributes for the various Oracle database file types. The attributes of the default templates can be changed by the administrator, and administrators may add their own templates as required. This enables you to encode the appropriate file creation attributes in a template for less sophisticated administrators to use. System default templates cannot be deleted.

If you need to change an ASM file attribute after the file has been created, the file must be copied with RMAN into a new file that has the new attributes. This is the only method of changing file attributes.

Depending on the disk group redundancy characteristics, the system templates are created with the attributes shown in the table.

When defining or altering a template, you can specify whether files must be mirrored, and whether files created under that template are COARSE or FINE striped.

Note: The redundancy and striping attributes used for ASM metadata files are predetermined by ASM and cannot be changed through the template mechanism.


ALTER DISKGROUP dgroupA ADD TEMPLATE reliable ATTRIBUTES (MIRROR);

ALTER DISKGROUP dgroupA DROP TEMPLATE reliable;

ALTER DISKGROUP dgroupA DROP FILE '+dgroupA.268.8675309';

ALTER DISKGROUP dgroupA ADD DIRECTORY '+dgroupA/mydir';

ALTER DISKGROUP dgroupA ADD ALIAS '+dgroupA/mydir/datafile.dbf'
FOR '+dgroupA.274.38745';

ALTER DISKGROUP dgroupA
DROP ALIAS '+dgroupA/mydir/datafile.dbf';

Template and Alias: Examples

Template and Alias: Examples

The first statement shows how to add a new template to a disk group. In this example, the RELIABLE template, which specifies two-way mirroring, is created in disk group DGROUPA. The second statement shows how you can remove the previously defined template.

The third statement shows how a file can be removed from a disk group.

The fourth statement creates a user directory called MYDIR. The parent directory must exist before you attempt to create a subdirectory or alias in it. The next example creates an alias for the +dgroupA.274.38745 file, and the last statement shows how to delete that alias. You can also drop a directory by using the ALTER DISKGROUP DROP DIRECTORY command, and rename an alias or a directory by using the ALTER DISKGROUP RENAME command.

Note: Files can be dropped only if they are not in use.


Retrieving Aliases

SELECT reference_index INTO :alias_id
FROM V$ASM_ALIAS WHERE name = '+dgroupA';

SELECT reference_index INTO :alias_id FROM V$ASM_ALIAS
WHERE parent_index = :alias_id AND name = 'mydir';

SELECT name FROM V$ASM_ALIAS WHERE parent_index = :alias_id;

Retrieving Aliases

Assume that you want to retrieve all aliases defined inside the previously created directory +dgroupA/mydir. You can traverse the directory tree as shown in the example.

The REFERENCE_INDEX number is meaningful only for entries that are directories in the alias directory; for nondirectory entries, it is set to zero. The example retrieves the REFERENCE_INDEX of each subdirectory in turn and uses the last REFERENCE_INDEX as the PARENT_INDEX of the aliases being sought.
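Instead of walking the tree level by level, the same traversal could be expressed as a hierarchical query. This is only a sketch, assuming that CONNECT BY works against V$ASM_ALIAS as it does against ordinary tables; the directory name 'mydir' is from the earlier example.

```sql
-- List every alias underneath the mydir directory by following
-- parent/reference index pairs down the alias directory tree.
SELECT a.name
FROM   v$asm_alias a
START  WITH a.name = 'mydir'
CONNECT BY PRIOR a.reference_index = a.parent_index;
```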


SQL Commands and File Naming

CREATE CONTROLFILE DATABASE sample

RESETLOGS ARCHIVELOG

MAXLOGFILES 5 MAXLOGHISTORY 100

MAXDATAFILES 10 MAXINSTANCES 2

LOGFILE GROUP 1 ('+dgroupA','+dgroupB') SIZE 100M,

GROUP 2 ('+dgroupA','+dgroupB') SIZE 100M

DATAFILE '+dgroupA.261.12345678' SIZE 100M,

'+dgroupA.262.12345678' SIZE 100M;

SQL Commands and File Naming

ASM file names are accepted in SQL commands wherever file names are legal. For most commands, there is an alternative method of identifying the file (for example, a file number) so that the name need not be entered. Because one of the principal design objectives of ASM is to eliminate the need for specifying file names, you are discouraged from using ASM file names wherever possible. However, certain commands must take file names as parameters. For example, data files and log files stored in an ASM disk group should be given to the CREATE CONTROLFILE command by using the file-reference context form. However, the use of the RESETLOGS option requires the file-creation context form for the specification of the log files.


DBCA and Storage Options

DBCA and Storage Options

To support ASM as a storage option, a new page has been added to the DBCA. It allows you to choose among the storage options: file system, ASM, or raw devices.


Database Instance Parameter Changes

INSTANCE_TYPE = RDBMS

LOG_ARCHIVE_FORMAT

DB_BLOCK_SIZE

DB_CREATE_ONLINE_LOG_DEST_n

DB_CREATE_FILE_DEST

DB_RECOVERY_FILE_DEST

CONTROL_FILES

LOG_ARCHIVE_DEST_n

LOG_ARCHIVE_DEST

STANDBY_ARCHIVE_DEST

Database Instance Parameter Changes

INSTANCE_TYPE defaults to RDBMS and specifies that this instance is an RDBMS instance.

LOG_ARCHIVE_FORMAT is ignored if LOG_ARCHIVE_DEST is set to an incomplete ASM file name, such as +dGroupA. If LOG_ARCHIVE_DEST is set to an ASM directory (for example, +dGroupA/myarchlogdir/), then LOG_ARCHIVE_FORMAT is used and the files are non-OMF. Unique file names for archived logs are automatically created by the Oracle database.

DB_BLOCK_SIZE must be set to one of the standard block sizes (2 KB, 4 KB, 8 KB, 16 KB, or 32 KB). Databases using nonstandard block sizes, such as 6 KB, are not supported.

The following parameters accept the multifile creation context form of ASM file names as a destination:

• DB_CREATE_ONLINE_LOG_DEST_n
• DB_CREATE_FILE_DEST
• DB_RECOVERY_FILE_DEST
• CONTROL_FILES
• LOG_ARCHIVE_DEST_n
• LOG_ARCHIVE_DEST
• STANDBY_ARCHIVE_DEST
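As a hedged sketch of how a database instance might be pointed at ASM disk groups through these parameters, the disk group names below are illustrative:

```sql
-- New data files go to dgroupA; recovery files go to dgroupB.
ALTER SYSTEM SET db_create_file_dest   = '+dgroupA' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '+dgroupB' SCOPE=BOTH;
-- DB_RECOVERY_FILE_DEST also requires a size quota to be set.
ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;
```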


Database Instance Parameter Changes

• Add at least 600 KB to LARGE_POOL_SIZE.
• Add the following to SHARED_POOL_SIZE:

SELECT d+l+t DB_SPACE
FROM (SELECT SUM(bytes)/(1024*1024*1024) d FROM v$datafile),
     (SELECT SUM(bytes)/(1024*1024*1024) l
      FROM v$logfile a, v$log b WHERE a.group#=b.group#),
     (SELECT SUM(bytes)/(1024*1024*1024) t
      FROM v$tempfile WHERE status='ONLINE');

• Add at least 16 to PROCESSES

(DB_SPACE/100+2) * #_External_Red  OR
(DB_SPACE/50+4)  * #_Normal_Red    OR
(DB_SPACE/33+6)  * #_High_Red

The SGA parameters for a database instance need slight modification to support ASM AU maps and other ASM information. The following are guidelines for SGA sizing on the database instance:

• Add at least 600 KB to your large pool, and make sure that its size is at least 8 MB.
• Additional memory is required to store AU maps in the shared pool. Use the result of the above query to obtain the current database storage size (DB_SPACE) that is either already on ASM or will be stored in ASM. Then determine the redundancy type that is used (or will be used), and add one of the following values to the shared pool size:
  - For disk groups using external redundancy: Every 100 GB of space needs 1 MB of extra shared pool plus a fixed amount of 2 MB of shared pool.
  - For disk groups using normal redundancy: Every 50 GB of space needs 1 MB of extra shared pool plus a fixed amount of 4 MB of shared pool.
  - For disk groups using high redundancy: Every 33 GB of space needs 1 MB of extra shared pool plus a fixed amount of 6 MB of shared pool.
• Add at least 16 to the value of the PROCESSES initialization parameter.

Note: If the Automatic Memory Management (AMM) feature is being used, then this sizing data can be treated as informational only, or as supplemental data in gauging the best values for the SGA. Oracle Corporation highly recommends using the AMM feature.
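The shared pool arithmetic above can be expressed as a small function. This is an illustrative sketch, not an official sizing tool; db_space_gb is assumed to be the result of the query on the slide, in GB.

```python
def extra_shared_pool_mb(db_space_gb: float, redundancy: str) -> float:
    """Extra shared pool (MB) for ASM AU maps, per the guideline above:
    external: 1 MB per 100 GB + 2 MB fixed
    normal:   1 MB per 50 GB  + 4 MB fixed
    high:     1 MB per 33 GB  + 6 MB fixed
    """
    per_gb, fixed = {"external": (100, 2),
                     "normal": (50, 4),
                     "high": (33, 6)}[redundancy]
    return db_space_gb / per_gb + fixed

# A 1000 GB database on normal-redundancy disk groups needs about
# 1000/50 + 4 = 24 MB of additional shared pool.
print(extra_shared_pool_mb(1000, "normal"))  # 24.0
```

Treat the result as a minimum increment on top of your current SHARED_POOL_SIZE, not as a total.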

Summary

In this lesson, you should have learned how to:
• Use the DBCA to create an ASM instance
• Start up and shut down ASM instances
• Create and maintain ASM disk groups
• Create database files using ASM

Practice 5 Overview

This practice covers the following topics:
• Installing ASMLib
• Using the DBCA to create ASM instances
• Discovering ASM instances in Database Control
• Creating new ASM disk groups using Database Control
• Generating automatic disk group rebalancing operations

Administering Storage in RAC: Part II

Objectives

After completing this lesson, you should be able to do the following:
• Manage redo log groups in a RAC environment
• Manage undo tablespaces in a RAC environment
• Use SRVCTL to manage ASM instances
• Migrate database files to ASM

ASM and SRVCTL with RAC

• SRVCTL allows you to manage ASM from a CRS perspective:
  – Add an ASM instance to CRS.
  – Enable an ASM instance for CRS automatic restart.
  – Start up an ASM instance.
  – Shut down an ASM instance.
  – Disable an ASM instance for CRS automatic restart.
  – Remove an ASM instance and its OCR entries.
  – Get some status information.

• DBCA allows you to create ASM instances, and also helps you add and enable them with CRS.

You can use SRVCTL to perform the following ASM administration tasks:

• The ADD option adds Oracle Cluster Registry (OCR) information about an ASM instance to run under CRS. This option also enables the resource.

• The ENABLE option enables an ASM instance to run under CRS for automatic startup or restart.

• The DISABLE option disables an ASM instance to prevent inappropriate automatic restarts by CRS.

• The START option starts a CRS-enabled ASM instance. SRVCTL uses the SYSDBA connection to perform the operation.

• The STOP option stops an ASM instance by using the shutdown normal, transactional, immediate, or abort option.

• The CONFIG option displays the configuration information stored in the OCR for a particular ASM instance.

• The STATUS option obtains the current status of an ASM instance.
• The REMOVE option removes the configuration of an ASM instance, as well as its corresponding instance, from a node. Before you can remove an ASM instance, you must first stop and disable it.

Note: Adding and enabling an ASM instance is automatically performed by the DBCA when creating the ASM instance.

ASM and SRVCTL with RAC: Examples

$ srvctl start asm -n clusnode1 -i +ASM1 -o open

$ srvctl stop asm -n clusnode1 -i +ASM1 -o immediate

$ srvctl add asm -n clusnode1 -i +ASM1 -o /ora/ora10

$ srvctl disable asm -n clusnode1 -i +ASM1

• Start an ASM instance on the specified node.

• Stop an ASM instance on the specified node.

• Add OCR data about an existing ASM instance.

• Disable CRS management of an ASM instance.

The first example starts up the +ASM1 ASM instance on the CLUSNODE1 node.
The second example performs a shutdown immediate of +ASM1 on CLUSNODE1.
The third example adds the CRS information for +ASM1 on CLUSNODE1 to the OCR. You need to specify the ORACLE_HOME of the instance.
The fourth example prevents CRS from automatically restarting +ASM1.
Note: For more information, refer to the Oracle Real Application Clusters Administrator’s Guide.

Migrating to ASM: Overview

• You must use RMAN.
• The following two types of migration paths are possible:
  – Cold migration
  – Hot migration
• Migration can be performed for the entire database, or just pieces of it.
• The general goal is to have two disk groups:
  – One for the data area
  – One for the recovery area
• The migration path depends on whether you have extra space or not.
• For more information, see the OTN Web site.

Whenever you want to migrate your database to ASM, the only possibility is to use Recovery Manager (RMAN). This is because each file stored in a disk group is physically spread across all disks in the disk group, and RMAN commands enable non-ASM files to be relocated to an ASM disk group.

Although it is not a requirement, the goal of a migration to ASM is usually to distribute your database across two disk groups: one that contains all the database files, and another that contains the flash recovery area files.

The possible migration paths are organized around the concepts of hot and cold migration. With a cold migration path, you can afford to shut down your database for a long period of time, whereas with a hot migration path, you want to minimize the down time. However, it is not possible to do a completely online migration to ASM.

Also, the procedures you need to follow depend on your disk space capacity. The easiest paths are available when you have enough disk space to store the database both on the file system and in ASM. Nevertheless, it is possible to migrate your database to ASM even when you cannot add disks to your system.

In this lesson, you explore one possible migration path. For more information about other alternatives, refer to the OTN Web site at otn.oracle.com, and the Backup and Recovery Advanced User’s Guide.

Migration with Extra Space: Overview

1. Create the ASM instances.
2. Create the data and recovery area disk groups.
3. Set database file and backup OMF parameters.
4. Create a database copy to ASM, and switch control files and data files to ASM.
5. Re-create flashback database logs, temp files, and the change tracking file in ASM.
6. Optionally, migrate your backups to ASM.
7. Drop and re-create online redo log groups in ASM.

The migration path that you are going to study in this case produces minimum down time if your database is relatively small. As indicated in the slide, the down time occurs between steps four and six. However, only the flashback log reconstitution in step five needs to be done offline.

In this example, it is assumed that your database is currently using a flash recovery area, and that you have enough disk space capacity to simultaneously store your database both on the file system and in ASM.

Note: If your database is too big to afford a down time corresponding to the whole database backup, then you can create the database image copies in ASM while the database is online. Then, you can use an incrementally updated backup strategy to reduce the recovery time to its minimum before initiating the switch.

Migration with Extra Space: Example

3. Using SQL*Plus to set OMF parameters:

4. Using RMAN to migrate control files and data files:

ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER SYSTEM SET db_create_file_dest='+DATA' SCOPE=SPFILE;
ALTER SYSTEM SET db_recovery_file_dest='+RECOV' SCOPE=SPFILE;
ALTER SYSTEM SET control_files='' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;

CONNECT TARGET
STARTUP NOMOUNT;
RESTORE CONTROLFILE FROM 'filename_of_old_control_file';
ALTER DATABASE MOUNT;
BACKUP AS COPY DATABASE FORMAT '+DATA';
SWITCH DATABASE TO COPY;
RECOVER DATABASE;

It is assumed that you have already created the ASM instances, as well as the DATA disk group for the database area and the RECOV disk group for the recovery area. It is also assumed that you are using an SPFILE.

You can execute the first script using SQL*Plus from any instance that has the database open. However, you then need to shut down any remaining instances immediately.

The goal of this script is to modify the OMF parameters of each instance to point to the new disk groups. By also resetting the value of CONTROL_FILES, you make sure that the OMF control files will be created in ASM. The script also turns off the block change tracking mechanism.

After the database is shut down, you can run the second script by using RMAN. This script creates two multiplexed OMF control files stored in DATA and RECOV. Before executing this script, you need to specify the full name of one of the control files that has been used so far. The script then mounts the database by using the newly created control files, and backs up the existing file system database to DATA using image copies. After the backup completes, the control file pointers are switched to the ASM database image copies.

Then, if needed, the database is recovered. This might not be needed if the database was shut down cleanly by the previous script.

Note: In a RAC/SPFILE environment, SID='*' is assumed for the ALTER SYSTEM statements.

Migration with Extra Space: Example

5. Using SQL*Plus to migrate flashback logs, change tracking file and temp files:

6. Using RMAN to migrate existing backups:

ALTER DATABASE FLASHBACK OFF;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;
ALTER TABLESPACE temp ADD TEMPFILE;
ALTER DATABASE TEMPFILE 'filename_of_old_tempfile' DROP;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

CONNECT TARGET
BACKUP AS COPY ARCHIVELOG ALL DELETE INPUT;
BACKUP DEVICE TYPE DISK BACKUPSET ALL DELETE INPUT;
BACKUP AS COPY DATAFILECOPY ALL DELETE INPUT;

The third script disables and then re-enables flashback logging. By doing this, you re-create the flashback logs in RECOV. At that point, you can open the database again.

Because RMAN does not take tempfiles into account, you need to re-create them for each existing TEMPORARY tablespace. To do this, add at least one new tempfile, and drop the ones that are still on the file system. You need to do this for each TEMPORARY tablespace. Therefore, before executing the third script, make sure that you specify the right tempfile names.

The third script also re-creates the change tracking file directly inside DATA.

The fourth script is optional but is recommended if you want to get rid of the old flash recovery area files. The goal of this script is to transfer the existing backups to RECOV.

The first BACKUP command moves all the current archived log files that have not yet been backed up. The second BACKUP command moves all the current backup sets. The last BACKUP command moves all the current data file copies, including the ones corresponding to the old file system database.

Migration with Extra Space: Example

7. Using SQL*Plus to migrate online redo log files:

DECLARE
  cursor logfile_cur is
    select l.thread#, l.group#, l.bytes from v$log l;
  type numTab_t is table of number index by binary_integer;
  grouplist  numTab_t;
  threadlist numTab_t;
  byteslist  numTab_t;
BEGIN
  open logfile_cur;
  fetch logfile_cur bulk collect
    into threadlist, grouplist, byteslist;
  close logfile_cur;
  for i in 1 .. threadlist.count loop
    migrateorl(threadlist(i), grouplist(i), byteslist(i));
  end loop;
END;

The last step of this procedure is to create new online redo log files in ASM, and to drop the existing ones from the file system.

The above script can be used to automate this process. It is assumed that you have already created the MIGRATEORL procedure discussed later in this lesson.

The basic idea of the script is to add a new redo log group inside ASM for each existing redo log group on the file system, and then drop the corresponding existing group from the file system. Therefore, the script retrieves all the groups of each thread and, for each of them, invokes the MIGRATEORL procedure.

Migration with Extra Space: Example

CREATE PROCEDURE migrateorl(thread# number, group# number,
                            bytes number) is
  stmt  varchar2(1024) := 'alter database add logfile thread '
                          ||thread#||' size '||bytes;
  asalc varchar2(1024) := 'alter system archive log current';
BEGIN
  execute immediate stmt;
  stmt := 'alter database drop logfile group '||group#;
  for i in 1 .. 5 loop
    begin
      execute immediate stmt;
      exit;
    exception
      when others then execute immediate asalc;
    end;
  end loop;
END;

The goal of the MIGRATEORL procedure is to drop a particular online redo log group from one thread, and to create a new one inside ASM for the same thread.

The only issue is with the CURRENT log of each thread. Because it is not possible to drop a CURRENT group, you need to generate an artificial log switch before you can drop it. In a RAC environment, when the database is open, the global switch is achieved by using the ALTER SYSTEM ARCHIVE LOG CURRENT command. This command archives all redo log file groups from all enabled threads, which forces a switch to occur for each enabled thread.

Therefore, at the beginning, the procedure adds a new group for the specified thread. Then it tries to drop the given group. If it succeeds, the procedure has successfully migrated one group. If it fails to drop the group, this is because the group is a CURRENT group. At that point, the procedure forces a switch and then tries to drop the group again. The procedure retries this five times before it stops, which bounds the number of attempts if the drop repeatedly fails.

Note: This procedure is not part of the standard set of procedures. You need to manually create it in your database.
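The switch-and-retry loop inside MIGRATEORL can be modeled outside the database as a small sketch. This is a toy illustration only: the drop and switch callables stand in for the EXECUTE IMMEDIATE statements, and the simulated state is hypothetical.

```python
def drop_group_with_retries(drop, switch, max_tries=5):
    """Mimic MIGRATEORL: try to drop a log group; if it is CURRENT the
    drop raises, so force a log switch and retry, up to max_tries."""
    for _ in range(max_tries):
        try:
            drop()
            return True        # group dropped successfully
        except RuntimeError:
            switch()           # group was CURRENT: archive log current
    return False               # gave up after max_tries attempts

# Simulation: the group stays CURRENT until two log switches have occurred.
state = {"switches_needed": 2}

def drop():
    if state["switches_needed"] > 0:
        raise RuntimeError("cannot drop CURRENT log group")

def switch():
    state["switches_needed"] -= 1

print(drop_group_with_retries(drop, switch))  # True (succeeds on try 3)
```

The bounded loop is the design point: a switch should free the group quickly, so five attempts is ample, and the cap prevents an endless loop if something else blocks the drop.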

Tablespace Migration: Example

1. Make the desired tablespace OFFLINE.
2. Create a backup copy of the tablespace to ASM.
3. Switch the tablespace to ASM.
4. Make the tablespace ONLINE.

CONNECT TARGET
SQL "ALTER TABLESPACE tbsname OFFLINE";
BACKUP AS COPY TABLESPACE tbsname FORMAT '+DGROUP1';
SWITCH TABLESPACE tbsname TO COPY;
SQL "ALTER TABLESPACE tbsname ONLINE";

This procedure describes one possible way of migrating an individual tablespace to ASM while the database is online.
It is assumed that the ASM instances are already created, and that the DGROUP1 disk group is currently mounted by all ASM instances.

1. You need to use RMAN to connect to the target database.
2. Make the target tablespace OFFLINE or READ ONLY.
3. Copy the tablespace to the ASM disk group.
4. Switch the control file pointer to the ASM copy.
5. Make the target tablespace ONLINE or READ WRITE again.

Note: Before you execute the above RMAN script, replace the occurrences of tbsname with the name of the tablespace that you want to migrate.

Migrate an SPFILE to ASM

1. Create a PFILE from the existing SPFILE:

CREATE PFILE='initORCL.ora' FROM SPFILE;

2. Optionally, add a meaningful directory:

ALTER DISKGROUP dgroup1
  ADD DIRECTORY '+DGROUP1/ORCL/SPFILE';

3. Create a new SPFILE in your new directory:

CREATE SPFILE='+DGROUP1/ORCL/SPFILE/spfileORCL.ora'
  FROM PFILE='initORCL.ora';

4. Create a new single-line PFILE used for STARTUP:

spfile=+DGROUP1/ORCL/SPFILE/spfileORCL.ora

It is possible to store SPFILEs inside an ASM disk group. From a database instance, you can use the CREATE SPFILE statement to do this. The slide shows the procedure to follow to migrate an existing file system SPFILE to an ASM disk group:

1. First, you need to create a PFILE from the existing SPFILE, because the CREATE SPFILE command takes a PFILE as a parameter. You create the PFILE in your file system.

2. Although you can create an SPFILE by just specifying a disk group name, it might be important to create a meaningful directory alias in your disk group to precisely locate your SPFILE. This is especially relevant if you want to create backups, or have multiple databases stored in the same disk group. In the example, it is assumed that you have already created a database file for the ORCL database inside the DGROUP1 ASM disk group. By default, ASM adds a directory corresponding to the name of your database. The example just adds the SPFILE directory under the existing ORCL directory.

3. You can then create the new SPFILE directly in the new directory with a specified alias. The alias is automatically created by ASM.

4. The last step is to create a new PFILE containing only the SPFILE parameter pointing to the new SPFILE. This PFILE should then be used to start up your instances.

ASM Disk Metadata Requirements

• For empty disk groups:
  – For normal and high redundancy:
    15 + (2 * #_disks) + (126 * #_ASM_insts)
  – For external redundancy:
    5 + (2 * #_disks) + (42 * #_ASM_insts)
• For each file:
  – High redundancy: add 3 MB if the file size is greater than 20 MB, plus 3 MB for every additional 42 GB
  – Normal redundancy: add 3 MB if the file size is greater than 30 MB, plus 3 MB for every additional 64 GB
  – External redundancy: add 1 MB if the file size is greater than 60 MB, plus 1 MB for every additional 128 GB

You must also add disk space for the ASM metadata. You can use the above formulas to calculate the additional disk space requirements (in MB) for an empty disk group.

For example, for a four-node RAC installation using three disks in a high-redundancy disk group, you require an additional 525 MB of disk space: 15 + (2 * 3) + (126 * 4) = 525.

As files are created, there is additional metadata overhead:

• With high redundancy, every file greater than 20 MB adds 3 MB of metadata, and another 3 MB for every additional 42 GB in that file.

• With normal redundancy, every file greater than 30 MB adds 3 MB of metadata, and another 3 MB for every additional 64 GB in that file.

• With external redundancy, every file greater than 60 MB adds 1 MB of metadata, and another 1 MB for every additional 128 GB in that file.

Note: Compared to the space used for storing user data, this overhead is negligible.
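The empty-disk-group formulas above can be checked with a short sketch (illustrative only; sizes are in MB, matching the formulas on the slide):

```python
def empty_group_metadata_mb(num_disks, num_asm_instances, redundancy):
    """Metadata space (MB) for an empty disk group, per the formulas above."""
    if redundancy in ("normal", "high"):
        return 15 + 2 * num_disks + 126 * num_asm_instances
    return 5 + 2 * num_disks + 42 * num_asm_instances  # external redundancy

# Worked example from the text: four ASM instances, three disks,
# high redundancy: 15 + (2 * 3) + (126 * 4) = 525 MB.
print(empty_group_metadata_mb(3, 4, "high"))  # 525
```

The instance count dominates the formula, so adding RAC nodes costs more metadata than adding disks.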

ASM and Transportable Tablespaces

[Figure: Three migration paths between storage types:
  File system to ASM — migrate with RMAN
  ASM to file system — migrate with RMAN
  ASM to ASM — copy with DBMS_FILE_TRANSFER]

You can copy a data file stored inside an ASM disk group on one machine to an ASM disk group on another machine by using the DBMS_FILE_TRANSFER package running in one of the database instances. This operation can be performed directly, without having to convert the data file.

However, if you want to transport a data file stored in a traditional file system to an ASM disk group on another database, then you need to plug the data file into the target database by using the classic transportable tablespace procedure, and then use RMAN to migrate the plugged tablespace to ASM.

Note: For more information about the DBMS_FILE_TRANSFER package, refer to the PL/SQL Packages and Types Reference guide.

ASM and Storage Arrays

• ASM works well with storage arrays:
  – External redundancy: Mirroring/RAID protections
  – Double striping
  – Dynamic multipathing/channel failover
• ASM is the best file system for Oracle database files.

Using ASM does not imply that you have to discard your existing storage and replace it with something new.

Although ASM offers mirroring functionality, it is considered best practice to offload these tasks to an external storage array that supports mirroring or RAID 5 technology, if available. ASM can be configured to use these external redundancy mechanisms to protect data.

Server-level striping can be used to complement the performance benefit of storage-level striping. This technique is known as double striping. Because server-level striping provides an even distribution of I/O across the back end of the storage layer, and dynamic multipathing provides a dynamic distribution across the available channels, using ASM to evenly distribute the database I/O across striped metavolumes provides an efficient double striping strategy.

Using ASM for the recovery area provides the purpose-built file system of choice for Oracle Database 10g environments. It provides automation of file naming best practices, placement, and automatic data file expansion. Combined with the recovery area, there is no better purpose-built clustered file system for Oracle database files than ASM.

ASM Scalability

ASM imposes the following limits:
• 63 disk groups
• 10,000 ASM disks
• 4 petabytes per ASM disk
• 40 exabytes of storage
• 1 million files per disk group
• 2.4 terabytes per file

ASM imposes the following limits:

• 63 disk groups in a storage system
• 10,000 ASM disks in a storage system
• 4 petabytes maximum storage for each ASM disk
• 40 exabytes maximum storage for each storage system
• 1 million files for each disk group
• 2.4 terabytes maximum storage for each file
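For a quick sanity check against these limits, the countable ones can be collected into constants. This is purely an illustrative sketch; the helper function and its parameter names are hypothetical and not part of any Oracle tool.

```python
# Countable ASM limits from the list above
ASM_LIMITS = {
    "disk_groups": 63,                # per storage system
    "asm_disks": 10_000,              # per storage system
    "files_per_disk_group": 1_000_000,
}

def layout_fits(disk_groups, asm_disks, files_per_group):
    """Return True if a proposed layout stays within the documented limits."""
    return (disk_groups <= ASM_LIMITS["disk_groups"]
            and asm_disks <= ASM_LIMITS["asm_disks"]
            and files_per_group <= ASM_LIMITS["files_per_disk_group"])

print(layout_fits(2, 40, 5_000))   # True: a typical two-group layout fits
print(layout_fits(64, 40, 5_000))  # False: one disk group over the limit
```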

Redo Log Files and RAC

[Figure: A two-node cluster on shared storage. The SPFILE contains RAC01.THREAD=1 and RAC02.THREAD=2. Instance RAC01 on Node1 owns thread 1 (groups 1, 2, and 3); instance RAC02 on Node2 owns thread 2 (groups 4 and 5).]

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 4;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5;

ALTER DATABASE ENABLE THREAD 2;

With Real Application Clusters (RAC), each instance writes to its own set of online redo log files, and the redo written by an instance is called a thread of redo, or thread. Thus, each redo log file group used by an instance is associated with the same thread number, determined by the value of the THREAD initialization parameter. If you set the THREAD parameter to a nonzero value for a particular instance, the next time the instance is started, it tries to use that thread. Because an instance can use a thread as long as that thread is enabled and not in use by another instance, it is recommended that you set the THREAD parameter to a nonzero value, with each instance having a different value.

You associate a thread number with a redo log file group by using the ALTER DATABASE ADD LOGFILE THREAD statement. You enable a thread number by using the ALTER DATABASE ENABLE THREAD statement. Before you can enable a thread, it must have at least two redo log file groups.

By default, a database is created with one enabled public thread. An enabled public thread is a thread that has been enabled by using the ALTER DATABASE ENABLE PUBLIC THREAD statement. Such a thread can be acquired by an instance whose THREAD parameter is set to zero. Therefore, you need to create and enable additional threads when you add instances to your database.

Note: The maximum possible value for the THREAD parameter is the value assigned to the MAXINSTANCES parameter specified in the CREATE DATABASE statement.

Automatic Undo Management and RAC

ALTER SYSTEM SET UNDO_TABLESPACE=undotbs3 SID='RAC01';

[Figure: A two-node cluster on shared storage. After the switch, the SPFILE contains RAC01.UNDO_TABLESPACE=undotbs3 and RAC02.UNDO_TABLESPACE=undotbs2. Instance RAC01 on Node1 now uses undotbs3, instance RAC02 on Node2 uses undotbs2, and undotbs1 is pending offline: other instances can still read it for consistent reads and update it for transaction recovery.]

The Oracle database automatically manages undo segments within a specific undo tablespace that is assigned to an instance. Under normal circumstances, only the instance assigned to the undo tablespace can modify its contents. However, all instances can always read all undo blocks for consistent-read purposes. Also, any instance can update any undo tablespace during transaction recovery, as long as that undo tablespace is not currently being used by another instance for undo generation or transaction recovery.

You assign undo tablespaces in your RAC database by specifying a different value for the UNDO_TABLESPACE parameter for each instance in your SPFILE or individual PFILEs. If you do not set the UNDO_TABLESPACE parameter, then each instance uses the first available undo tablespace. If no undo tablespaces are available, the SYSTEM rollback segment is used.

You can dynamically switch undo tablespace assignments by executing the ALTER SYSTEM SET UNDO_TABLESPACE statement with the SID clause. You can run this command from any instance. In this example, the undo tablespace previously assigned to instance RAC01 remains assigned to it until the RAC01 instance’s last active transaction commits. The pending-offline tablespace may be unavailable to other instances until all transactions against it are committed.

Note: You cannot simultaneously use automatic undo management (AUM) and manual undo management in a RAC database. It is highly recommended that you use the AUM mode.
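Following the description above, per-instance undo assignments in a parameter file can be sketched as below; the instance and tablespace names mirror the figure and are examples, not a prescription.

```
# Per-instance undo tablespace assignments (SPFILE/PFILE entries)
RAC01.undo_tablespace='undotbs1'
RAC02.undo_tablespace='undotbs2'

# Switch RAC01 to a different undo tablespace at run time, from any instance:
#   ALTER SYSTEM SET UNDO_TABLESPACE=undotbs3 SID='RAC01';
```

Each instance thus generates undo into its own tablespace, while all instances retain read access for consistent reads.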

Summary

In this lesson, you should have learned how to:
• Manage redo log groups in a RAC environment
• Manage undo tablespaces in a RAC environment
• Use SRVCTL to manage ASM instances
• Migrate database files to ASM

Practice 6 Overview

This practice covers the following topics:
• Reconfiguring your redo threads
• Migrating tablespaces to ASM

Services

Objectives

After completing this lesson, you should be able to do the following:
• Configure and manage services in a RAC environment
• Use services with client applications
• Use services with the Database Resource Manager
• Use services with the Scheduler
• Set performance-metric thresholds on services
• Configure services aggregation and tracing

Traditional Workload Dispatching

[Figure: Three snapshots — day time, holiday season, and payday — each showing fixed, separately sized computing units permanently dedicated to HR, DW, CRM, and Batch.]

In a standard environment, isolated computing units of different sizes are permanently dedicated to specific applications, such as Human Resources, Data Warehouses, Customer Relationship Management, and Retail Batches.

These computing units need to be sized for their peak workload. Because the peak workload occurs for only some hours, a considerable amount of resources is idle for a long time.

Grid Workload Dispatching

[Figure: The same three periods — day time, holiday season, and payday — with a shared pool of computing units dynamically reassigned among HR, DW, CRM, and Batch, leaving only a few units idle in each period.]

With Grid Computing, a global pool of computing units can be provided, and the computing units can be temporarily assigned to specific applications. Computing units can then be dynamically exchanged between applications. During business hours, more units can be used for CRM applications, and after business hours, some of them can be transferred to Retail Batches.

Grid Computing minimizes unused resources. This means that, overall, a grid-enabled environment needs less computing power than an environment that is not grid enabled.

In the example, 25 percent of the computing resource units are idle. This unused extra capacity is there so that service levels can still be met when there are component failures, such as nodes or instances, and also to deal with unexpected workloads. This is much better than the industry average of 70 to 90 percent idle rates when each machine is sized for its individual maximum.

What Is a Service?

• Is a means of grouping sessions that are doing the same kind of work
• Provides a single-system image instead of a multiple-instances image
• Is a part of the regular administration tasks that provide dynamic service-to-instance allocation
• Is the base for high availability of connections
• Provides a new performance-tuning dimension

The concept of a service was first introduced in Oracle8i as a means for the listener to perform connection load balancing between nodes and instances of a cluster. However, the concept, definition, and implementation of services have been dramatically expanded. Services are a workload management feature that organizes the universe of work execution within the database to make that work more manageable, measurable, tunable, and recoverable. A service is a grouping of related tasks within the database with common functionality, quality expectations, and priority relative to other services. The notion of service provides a single-system image for managing competing applications running within a single instance and across multiple instances and databases.

Using standard interfaces, such as DBCA, Enterprise Manager, and SRVCTL, services can be configured, administered, enabled, disabled, and measured as a single entity.

Services provide availability: following outages, a service is recovered quickly and automatically at surviving instances.

Services provide a new dimension to performance tuning. With services, workloads are visible and measurable. Tuning by “service and SQL” replaces tuning by “session and SQL” in the majority of systems, where sessions are anonymous and shared.

Services are dynamic in that the number of instances a service runs on can be augmented when load increases, and reduced when load declines. This dynamic resource allocation enables a cost-effective solution for meeting demands as they occur.


High Availability of Services in RAC

• Services are available continuously with load shared across one or more instances.

• Additional instances are made available in response to failures.

• Preferred instances:
  – Set the initial cardinality for the service
  – Are the first to start the service

• Available instances are used in response to preferred instance failures.

High Availability of Services in RAC
With RAC, the focus of high availability (HA) is on protecting the logically defined application services. This focus is more flexible than focusing on the high availability of instances.

Services must be location independent, and the RAC HA framework is used to implement this. Services are made available continuously, with load shared across one or more instances in the cluster. Any instance can offer services in response to run-time demands, failures, and planned maintenance. Services are always available somewhere in the cluster.

To implement the workload balancing and continuous availability features of services, CRS stores the HA configuration for each service in the Oracle Cluster Registry (OCR). The HA configuration defines a set of preferred and available instances that support the service.

A preferred instance set defines the number of instances (the cardinality) that support the corresponding service. It also identifies every instance in the cluster that the service runs on when the system first starts up.

An available instance does not initially support a service. However, it begins accepting connections for the service when a preferred instance cannot support the service. If a preferred instance fails, then the service is transparently restored to an available instance defined for the service.

Note: An available instance can become a preferred instance, and vice versa.
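The preferred/available split is declared when the service is registered with CRS. A minimal sketch, assuming a cluster database named PROD with instances RAC01 and RAC02 and a service named OE (all names hypothetical):

```shell
# RAC01 is preferred (sets cardinality 1); RAC02 is available for failover
$ srvctl add service -d PROD -s OE -r RAC01 -a RAC02

# Start the service on its preferred instance, then verify where it runs
$ srvctl start service -d PROD -s OE
$ srvctl status service -d PROD -s OE
```

If RAC01 fails, CRS restores OE on RAC02; the status command would then report OE running on RAC02.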


Possible Service Configuration with RAC

[Slide diagram: three three-node clusters (instances RAC01, RAC02, and RAC03) offering the AP and GL services, illustrating the Active/Spare, Active/Symmetric, and Active/Asymmetric configurations.]

Possible Service Configuration with RAC
• Active/Spare: With this service configuration, the simplest redundancy, known as primary/secondary or 1+1 redundancy, is extended to the general case of N+M redundancy, where N is the number of primary RAC instances providing the service, and M is the number of spare RAC instances available to provide the service. An example of this solution is a three-node configuration in which one instance provides the AP service, the second instance provides the GL service, and the third instance provides service failover capability for both services. The spare node can still be available for other applications during normal operation.

• Active/Symmetric: With this service configuration, the same set of services is active on every instance. An example of this is illustrated in the slide, with both the AP and GL services being offered on all three instances. Each instance provides service load-sharing and service failover capabilities for the others.

• Active/Asymmetric: With this service configuration, services with lower capacity needs can be defined with single cardinality and configured so that all other instances are capable of providing the service in the event of failure. The slide shows the AP service running on only one instance, and the GL service running on two instances. The first instance supports the AP service and offers failover for the GL service. Likewise, the second and third instances support the GL service and offer failover for AP. If either the first or the third instance dies, then GL and AP are still offered through the second instance.
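The three placements above can be sketched as SRVCTL registrations. This assumes a three-instance cluster database named PROD with instances RAC01, RAC02, and RAC03 (all names hypothetical):

```shell
# Active/Spare: RAC03 is the spare for both services
$ srvctl add service -d PROD -s AP -r RAC01 -a RAC03
$ srvctl add service -d PROD -s GL -r RAC02 -a RAC03

# Active/Symmetric: every instance is preferred for every service
$ srvctl add service -d PROD -s AP -r RAC01,RAC02,RAC03
$ srvctl add service -d PROD -s GL -r RAC01,RAC02,RAC03

# Active/Asymmetric: AP has single cardinality; GL runs on two instances
$ srvctl add service -d PROD -s AP -r RAC01 -a RAC02,RAC03
$ srvctl add service -d PROD -s GL -r RAC02,RAC03 -a RAC01
```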


Service Attributes

• Single instance:
  – Globally unique name
  – Threshold
  – Priority

• RAC:
  – Globally unique name
  – Threshold
  – Priority
  – High-availability configuration
  – Preconnection

Service Attributes
Each service has the following attributes:

• Globally unique name that identifies the service in the local cluster and globally for Data Guard

• Quality-of-service thresholds for response time and CPU consumption

• Priority relative to other services, defined in terms of either a ratio of resource consumption or a priority

In a RAC environment, services have two additional attributes:

• High-availability configuration: A description of how to distribute the service across instances when the system first starts.

• Preconnection: Definition of a corresponding preconnected service, also called a shadow service. The preconnect service spans the set of instances that are available to support a service in the event of a failure. When a service is added by using the Database Configuration Assistant (DBCA) or Server Control (SRVCTL), the preconnect service is created automatically and is then managed by CRS. This service name, <SERVICE>_PRECONNECT, is used in the backup clause of transparent application failover (TAF) connect descriptors for directly connected applications that use the preconnect TAF feature. Preconnect services do not overlap with their matching active services. This eliminates sessions looping back to the same instance as the original, and enables load balancing for both active and preconnected sessions with TAF.
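In a tnsnames.ora entry, the shadow service typically appears in the TAF BACKUP clause. A sketch, assuming a service named ERP reachable through the hypothetical virtual hosts node-1vip and node-2vip:

```sql
ERP=(DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCP)(HOST=node-1vip)(PORT=1521))
  (ADDRESS=(PROTOCOL=TCP)(HOST=node-2vip)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=ERP)
    (FAILOVER_MODE=(BACKUP=ERP_PRECONNECT)
      (TYPE=SELECT)(METHOD=PRECONNECT))))

ERP_PRECONNECT=(DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCP)(HOST=node-2vip)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=ERP_PRECONNECT)))
```

With METHOD=PRECONNECT, the client opens the backup connection to the shadow service at connect time, so failover does not wait for a new connection to be established.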


Service Types

• Application services

• Internal services:
  – SYS$BACKGROUND
  – SYS$USERS

• Limit of 64 services per database:
  – 62 application services
  – 2 internal services

Service Types
Oracle Database 10g supports two broad types of services: application services and internal services. Application services are mainly functional maps to workloads: sessions doing work for a common business function are grouped together. For Oracle Applications, AP, AR, GL, MFG, WIP, BOM, and so on create a functional division of work within the database and can thus be categorized as services.

In addition to application services, the RDBMS also supports two internal services. SYS$BACKGROUND is used by the background processes only. SYS$USERS is the default service for user sessions that are not associated with any application service. Both internal services support all the workload management features, and neither one can be stopped or disabled.

There is a limit of 64 services per database: 62 application services and 2 internal services. Also, a service name is restricted to 64 characters.

Note: Shadow services are also included in the application service category. In addition, a service is also created for each Advanced Queue that is created.
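You can list the services known to the database, including the two internal ones, from the data dictionary. A quick check (the output depends on your configuration):

```sql
SELECT name FROM dba_services ORDER BY name;
```

SYS$BACKGROUND and SYS$USERS appear alongside any application and shadow services that have been created. V$ACTIVE_SERVICES shows only the services currently running on the instance.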


Creating Services

• Services are maintained in the data dictionary.

• Use DBMS_SERVICE.CREATE_SERVICE to create a service for single-instance Oracle.

• Services are created automatically based on the SERVICE_NAMES initialization parameter.

• Create a service in RAC with the following:
  – DBCA
  – SRVCTL

• High-availability business rules are maintained in the OCR and managed by CRS.

Creating Services
Like other database objects, services are maintained and tracked through the data dictionary and dynamic performance views. Each service has a unique name that identifies it locally in the cluster and globally for Data Guard.

For single-instance Oracle, services can be created with the DBMS_SERVICE package. Services are also created implicitly at instance startup according to the values set for the SERVICE_NAMES initialization parameter.

For high-availability features in RAC environments, services should be defined either with DBCA or with the command-line tool SRVCTL. This definition process implicitly creates high-availability business rules that are managed automatically by CRS to keep the services available. The high-availability business rules are kept in the OCR.
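For a single-instance database, a minimal sketch of creating and starting a service with the DBMS_SERVICE package (the REPORTS service and network names are hypothetical):

```sql
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(
    service_name => 'REPORTS',
    network_name => 'REPORTS');  -- name clients use as SERVICE_NAME
  DBMS_SERVICE.START_SERVICE(
    service_name => 'REPORTS');  -- registers the service with the listener
END;
/
```

In RAC, use DBCA or SRVCTL instead so that the CRS high-availability rules are created as well.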


Creating Services with DBCA

DBCA configures both the CRS resources and the Net Service entries for each service.

Creating Services with DBCA
The Database Configuration Assistant (DBCA) enables you to perform simple management operations on database services. When creating a RAC database by using DBCA, you can add and remove services, establish a preferred configuration, and set up a transparent application failover (TAF) policy for your services.

The DBCA lists the available instances for your RAC database. By clicking the appropriate option button, you can configure an instance as being preferred or available for a service. If you want to prevent a service from running on a specific instance, then click the option button in the Not Used column for the prohibited instance.

The entries that you make in the Add a Service dialog box are appended to the SERVICE_NAMES parameter entry, which has a 4 KB limit. Therefore, the total length of the names of all services assigned to an instance cannot exceed 4 KB.

When you click Finish, the DBCA configures the CRS resources for the services that you added, modified, or removed. The DBCA also configures the net service entries for these services and starts them. When you use the DBCA to remove services, the DBCA stops the service, removes the CRS resource for the service, and removes the net service entries.

Note: You can also set up a service for transparent application failover by using the TAF Policy section, as shown in the slide.


Creating Services with DBCA

Creating Services with DBCA (continued)
The Database Configuration Assistant (DBCA) also enables you to perform simple management operations on database services after the database has been created.

You can do so by selecting the Services Management option in the first step. Then select the corresponding database in step two. In step three, you can add and remove services, establish a preferred configuration, and set up a transparent application failover (TAF) policy for your services.

Note: This page is identical to the Database Services page that you see when creating a database.


Creating Services with SRVCTL

$ srvctl add service -d PROD -s GL -r RAC02 -a RAC01
$ srvctl add service -d PROD -s AP -r RAC01 -a RAC02

[Slide diagram: a two-node cluster in which RAC01 is preferred for AP and available for GL, and RAC02 is preferred for GL and available for AP.]

Creating Services with SRVCTL
The example in the slide shows a two-node cluster with an instance named RAC01 on one node and an instance named RAC02 on the other. The cluster database name is PROD. Two services, AP and GL, are created and stored in the cluster repository to be managed by CRS. The AP service is defined with a preferred instance of RAC01 and an available instance of RAC02. If RAC01 dies, the AP service member on RAC01 is restored automatically on RAC02. The same scenario holds true for the GL service.

Note that it is possible to assign more than one instance with both the -r and -a options. However, -r is mandatory, and -a is optional.

Services enable you to move beyond the simple two-node primary/secondary configuration of RAC Guard in Oracle9i. With Oracle Database 10g, multiple primary nodes can support a service with RAC. Possible configurations for service placement are active/spare, active/symmetric, and active/asymmetric.

Note: You can also set up a service for transparent application failover by using the -P option of SRVCTL. Possible values are NONE, BASIC, and PRECONNECT.


Preferred and Available Instances

$ srvctl add service -d PROD -s ERP \
    -r RAC01,RAC02 -a RAC03,RAC04

[Slide diagram: four numbered snapshots (1 through 4) of a four-node cluster (RAC01 through RAC04) showing where the ERP service runs at each step of the failover sequence.]

Preferred and Available Instances
In this example, it is assumed that you have a four-node cluster. You define a service called ERP. The preferred instances for ERP are RAC01 and RAC02. The available instances for ERP are RAC03 and RAC04.

1. Initially, ERP connections are directed only to RAC01 and RAC02.
2. RAC02 fails and goes down.
3. CRS detects the failure of RAC02, and because the cardinality of ERP is 2, CRS restores the service on one of the available instances, in this case RAC03.
4. ERP connection requests are now directed to RAC01 and RAC03, which are the instances that currently offer the service. Although CRS is able to restart RAC02, the ERP service does not fall back to RAC02. RAC02 and RAC04 are now the instances that are available if subsequent failures occur.

Note: If you want to fall back to RAC02, you can use SRVCTL to relocate the service. This operation can be done manually by the DBA, or it can be automated by invoking the SRVCTL relocation command from a callback mechanism. However, relocating a service is a disruptive operation.
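Continuing the scenario, a sketch of manually moving the ERP service from RAC03 back to the restarted RAC02 (instance and database names as in the example above):

```shell
# Move the ERP service member from RAC03 to RAC02, then verify
$ srvctl relocate service -d PROD -s ERP -i RAC03 -t RAC02
$ srvctl status service -d PROD -s ERP
```

Because relocation is disruptive, sessions connected to the service on RAC03 are affected; schedule such moves accordingly.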


Everything Switches to Services

• Data dictionary maintains services.
• AWR measures the performance of services.
• Database Resource Manager uses services in place of users for priorities.
• Job scheduler, PQ, and Streams queues run under services.
• RAC keeps services available within a site.
• Data Guard Broker with RAC keeps primary services available across sites.

Everything Switches to Services
Several database features support services. Sessions are tracked by the service with which they connect. In addition, performance-related statistics and wait events are also tracked by service.

The Automatic Workload Repository (AWR) manages the performance of services. The AWR records the service performance, including SQL execution times, wait classes, and resources consumed, by service. The AWR raises alerts when service response time thresholds are exceeded. Specific dynamic performance views report current service status with one hour of history.

The Database Resource Manager is now capable of managing services to prioritize application workloads within an instance. In addition, jobs can now run under a service, as opposed to a specific instance. Parallel slave processes inherit the service of their coordinator.

The RAC high-availability framework keeps services available within a site. Data Guard Broker, in conjunction with RAC, migrates the primary service across Data Guard sites for disaster tolerance.

Note: For more information about RAC and Data Guard Broker integration, refer to the lesson titled "Design for High Availability" in this course.
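Because statistics are aggregated by service, workloads can be compared directly from the dynamic performance views. A sketch against V$SERVICE_STATS (the ERP service name is hypothetical; available statistic names vary by release):

```sql
SELECT stat_name, value
FROM   v$service_stats
WHERE  service_name = 'ERP'
AND    stat_name IN ('DB time', 'DB CPU', 'user calls');
```

This is the per-service analogue of the session-level statistics you would previously have had to aggregate yourself.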


Using Services with Client Applications

ERP=(DESCRIPTION=(LOAD_BALANCE=on)
  (ADDRESS=(PROTOCOL=TCP)(HOST=node-1vip)(PORT=1521))
  (ADDRESS=(PROTOCOL=TCP)(HOST=node-2vip)(PORT=1521))
  (ADDRESS=(PROTOCOL=TCP)(HOST=node-3vip)(PORT=1521))
  (ADDRESS=(PROTOCOL=TCP)(HOST=node-4vip)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=ERP)))

url="jdbc:oracle:oci:@ERP"

url="jdbc:oracle:thin:@ERP"

Using Services with Client Applications
Applications and mid-tier connection pools select a service by using the TNS connect descriptor. The service must match a service that has been created using SRVCTL or DBCA.

The address lists in each example use virtual IP addresses; they must not use host names. Using the virtual addresses for client communication ensures that connections and SQL statements issued against a node that is down do not result in a TCP/IP timeout.

The first example in the slide shows the TNS connect descriptor that could be used to access the ERP service. The second example shows the thick (OCI) JDBC connection description using the previously defined TNS connect descriptor. The third example shows the thin JDBC connection description using the previously defined TNS connect descriptor.

Note: The LOAD_BALANCE=ON clause is used by Oracle Net to randomize its progress through the protocol addresses of the connect descriptor. This feature is called client load balancing.


Using Services with Resource Manager

• Consumer groups are automatically assigned to sessions based on session services.

• Work is prioritized by service inside one instance.

[Slide diagram: AP and BATCH connections arriving at one instance, where Resource Manager allocates 75% of the instance resources to the AP consumer group and 25% to the BATCH consumer group.]

Using Services with Resource Manager
The Database Resource Manager enables you to identify work by using services. It manages the relative priority of services within an instance by binding services directly to consumer groups. When a client connects by using a service, the consumer group is assigned transparently at connect time. This enables Resource Manager to manage the work requests by service in the order of their importance.

For example, you define the AP and BATCH services to run on the same instance, and assign AP to a high-priority consumer group and BATCH to a low-priority consumer group. Sessions that connect to the database with the AP service specified in their TNS connect descriptor get priority over those that connect with the BATCH service.

This offers benefits in managing workloads because priority is given to business functions rather than to the sessions that support those business functions.


Services and Resource Manager with EM

Services and Resource Manager with EM
Enterprise Manager (EM) provides a graphical interface, the Resource Consumer Group Mapping page, for automatically mapping sessions to consumer groups. This page can be accessed by clicking the Resource Consumer Group Mappings link on the Cluster Database Administration page.

Using the General tab of this page, you can set up a mapping of sessions connecting with a service name to consumer groups. At the bottom of the page (not visible in this screenshot), there is an option for a module name and action mapping.

With the ability to map sessions to consumer groups by service, module, and action, you have greater flexibility when it comes to managing the performance of different application workloads.

Note: The Priorities tab of the Resource Consumer Group Mapping page allows you to set priorities for the mappings that you set up on the General tab. The mapping options correspond to columns in V$SESSION. When multiple mapping columns have values, the priorities you set determine the precedence for assigning sessions to consumer groups.


Services and Resource Manager: Example

exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;

exec DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP( -
  CONSUMER_GROUP => 'HIGH_PRIORITY', -
  COMMENT => 'High priority consumer group');

exec DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING( -
  ATTRIBUTE => DBMS_RESOURCE_MANAGER.SERVICE_NAME, -
  VALUE => 'AP', -
  CONSUMER_GROUP => 'HIGH_PRIORITY');

exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;

exec DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP( -
  GRANTEE_NAME => 'PUBLIC', -
  CONSUMER_GROUP => 'HIGH_PRIORITY', -
  GRANT_OPTION => FALSE);

Services and Resource Manager: Example
Assume that your site has two consumer groups called HIGH_PRIORITY and LOW_PRIORITY. These consumer groups map to a resource plan for the database that reflects either the intended ratios or the intended resource consumption.

Before mapping services to consumer groups, you must first create the consumer groups and the resource plan for these consumer groups. The resource plan can be priority based or ratio based. The PL/SQL calls in the slide create the HIGH_PRIORITY consumer group and map the AP service to it. You can use similar calls to create the LOW_PRIORITY consumer group and map the BATCH service to the LOW_PRIORITY consumer group.

The last PL/SQL call in the example is executed because sessions are automatically assigned only to consumer groups for which they have been granted switch privileges. A similar call should be executed for the LOW_PRIORITY consumer group.

Note: For more information about Database Resource Manager, refer to the Oracle Database Administrator's Guide and PL/SQL Packages and Types Reference.
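The "similar calls" for the low-priority workload could look like the following sketch, mirroring the slide's example for the hypothetical BATCH service:

```sql
exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;

exec DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP( -
  CONSUMER_GROUP => 'LOW_PRIORITY', -
  COMMENT => 'Low priority consumer group');

exec DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING( -
  ATTRIBUTE => DBMS_RESOURCE_MANAGER.SERVICE_NAME, -
  VALUE => 'BATCH', -
  CONSUMER_GROUP => 'LOW_PRIORITY');

exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;

exec DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP( -
  GRANTEE_NAME => 'PUBLIC', -
  CONSUMER_GROUP => 'LOW_PRIORITY', -
  GRANT_OPTION => FALSE);
```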


Using Services with Scheduler

• Services are associated with Scheduler classes.

• Scheduler jobs have service affinity:
  – High availability
  – Load balancing

[Slide diagram: one database job table in which Job1 and Job2 belong to HOT_BATCH_CLASS (service HOT_BATCH_SERV) and Job3 belongs to LOW_BATCH_CLASS (service LOW_BATCH_SERV); each instance offering a service runs its own job coordinator and job slaves.]

Using Services with Scheduler
Just as in other environments, the Scheduler in a RAC environment uses one job table for each database and one job coordinator for each instance. The job coordinators communicate with each other to keep information current.

The Scheduler can use services and the benefits they offer in a RAC environment. The service that a specific job class uses is defined when the job class is created. During execution, jobs are assigned to job classes, and job classes run within services. Using services with job classes ensures that the work of the Scheduler is identified for workload management and performance tuning. For example, jobs inherit the server-generated alerts and performance thresholds for the service they run under.

For high availability, the Scheduler offers service affinity instead of instance affinity. Jobs are not scheduled to run on any specific instance; they are scheduled to run under a service. So, if an instance dies, the job can still run on any other instance in the cluster that offers the service.

Note: By specifying the service where you want the jobs to run, the job coordinators balance the load on your system for better performance.


Services and Scheduler with EM

Services and Scheduler with EM
To configure a job to run under a specific service, click the Job Classes link in the Scheduler section of the Cluster Database Administration page. This takes you to the Scheduler Job Classes page, where you can see the services assigned to job classes.

When you click the Create button on the Scheduler Job Classes page, you access the Create Job Class page. On this page, you can enter the details of a new job class, including the service it must run under.


Services and Scheduler with EM

Services and Scheduler with EM (continued)
After your job class is set up with the service that you want it to run under, you can create the job. To create the job, click the Jobs link just above the Job Classes link on the Cluster Database Administration page. The Scheduler Jobs page appears, where you can click the Create button to create a new job. When you click the Create button, the Create Job page is displayed. This page has several tabs: General, Schedule, and Options. Use the General tab to assign your job to a job class.

Use the Options tab displayed in the slide to set the Instance Stickiness attribute for your job. This attribute causes the job to be load balanced across the instances on which the service of the job is running; the job itself runs on only one instance. If the Instance Stickiness value is set to TRUE, which is the default, the Scheduler runs the job on the instance where the service is offered with the lightest load. If Instance Stickiness is set to FALSE, then the job is run on the first available instance where the service is offered.

Note: It is possible to set job attributes, such as INSTANCE_STICKINESS, by using the SET_ATTRIBUTE procedure of the DBMS_SCHEDULER PL/SQL package.
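For example, a sketch of turning off stickiness for a job named my_report_job (the job name matches the later example and is assumed to exist already):

```sql
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'my_report_job',
    attribute => 'INSTANCE_STICKINESS',
    value     => FALSE);  -- run on the first available instance offering the service
END;
/
```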


Services and Scheduler: Example

DBMS_SCHEDULER.CREATE_JOB_CLASS(
  JOB_CLASS_NAME          => 'HOT_BATCH_CLASS',
  RESOURCE_CONSUMER_GROUP => NULL,
  SERVICE                 => 'HOT_BATCH_SERV',
  LOGGING_LEVEL           => DBMS_SCHEDULER.LOGGING_RUNS,
  LOG_HISTORY             => 30,
  COMMENTS                => 'P1 batch');

DBMS_SCHEDULER.CREATE_JOB(
  JOB_NAME            => 'my_report_job',
  JOB_TYPE            => 'stored_procedure',
  JOB_ACTION          => 'my_name.my_proc();',
  NUMBER_OF_ARGUMENTS => 4,
  START_DATE          => SYSDATE+1,
  REPEAT_INTERVAL     => 5,
  END_DATE            => SYSDATE+30,
  JOB_CLASS           => 'HOT_BATCH_CLASS',
  ENABLED             => TRUE,
  AUTO_DROP           => FALSE,
  COMMENTS            => 'daily status');

Services and Scheduler: Example
In this PL/SQL example, you define a batch queue managed by the Scheduler called HOT_BATCH_CLASS, and you associate the HOT_BATCH_SERV service with the HOT_BATCH_CLASS class. It is assumed that you have already defined the HOT_BATCH_SERV service.

After the class is defined, you can define your job. In this example, the MY_REPORT_JOB job executes in the HOT_BATCH_CLASS job class at instances offering the HOT_BATCH_SERV service.

In this example, you do not assign a resource consumer group to the HOT_BATCH_CLASS job class. However, it is possible to assign a consumer group to a class. Regarding services, this allows you to combine Scheduler jobs and service prioritization using Database Resource Manager.

Note: For more information about the Scheduler, refer to the Oracle Database Administrator's Guide and PL/SQL Packages and Types Reference.


Using Services with Parallel Operations

• Slaves inherit the service from the coordinator.
• Slaves can execute on every instance.

[Slide diagram: an ERP execution coordinator on one node, with parallel execution servers spread across Node 1 through Node 4, all accessing the shared disks under the ERP service.]

Using Services with Parallel Operations
For parallel query and parallel DML operations, the parallel query slaves inherit the service from the query coordinator for the duration of the operation. ERP is the name of the service used in the example shown in the slide.

However, services currently do not restrict the set of instances that are used by a parallel query. Connecting via a service and then issuing a parallel query may use instances that are not part of the service that was specified during the connection. A slave appears to belong to the service, even on an instance that does not support the service, if that slave is being used by a query coordinator that was started on an instance that does support the service.

Note: At the end of the execution, the slaves revert to the default database service.


Using Services with Metric Thresholds

• Possibility to define service-level thresholds:
  – ELAPSED_TIME_PER_CALL
  – CPU_TIME_PER_CALL

• Server-generated alerts are triggered on threshold violations.

• You can react to generated alerts:
  – Change priority
  – Relocate services
  – Add instances for services

SELECT service_name, elapsedpercall, cpupercall
FROM   V$SERVICEMETRIC;

Using Services with Metric Thresholds
Service-level thresholds permit the comparison of achieved service levels against the accepted minimum required level. This provides accountability with respect to the delivery, or failure to deliver, an agreed service level.

You can explicitly specify two metric thresholds for each service on a particular instance:

• The response time for calls: ELAPSED_TIME_PER_CALL. The response time goal indicates a desire for the elapsed time to be, at most, a certain value. The response time represents the wall clock time. It is a fundamental measure that reflects all delays and faults blocking the call from running on behalf of the user.

• CPU time for calls: CPU_TIME_PER_CALL

The AWR monitors the service time and publishes AWR alerts when the performance exceeds the thresholds. You can then respond to these alerts by changing the priority of a job, stopping overloaded processes, or relocating, expanding, shrinking, starting, or stopping a service. Using automated tasks, you can automate the reaction. This allows you to maintain service quality despite changes in demand.

Note: The SELECT statement shown in the slide gives you the accumulated instance statistics for the elapsed time and CPU used metrics for each service over the most recent 60-second interval. For the last hour of history, look at V$SERVICEMETRIC_HISTORY.
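A sketch of pulling the last hour of per-service response-time history from that view (the ERP service name is hypothetical; exact column availability should be checked against your release):

```sql
SELECT service_name, begin_time, end_time,
       elapsedpercall, cpupercall
FROM   v$servicemetric_history
WHERE  service_name = 'ERP'
ORDER  BY begin_time;
```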


Changing Service Thresholds Using EM

Changing Service Thresholds Using EM
The Edit Thresholds page is displayed in the slide. The screenshot shows a portion of the page where you can see the Service CPU Time (per user call) and Service Response Time (per user call) metrics.

To access the Edit Thresholds page, click the Manage Metrics link on the All Metrics page. After the Manage Metrics page is displayed, click the Edit Thresholds button.

Using the Edit Thresholds page, you can change the critical and warning values for the service metrics. If you modify the critical and warning values on this page, the thresholds apply to all services of the instance.

If you want different thresholds for different services, click the Specify Multiple Thresholds button at the top of the page. Another page appears where you can set critical and warning thresholds for individual services.


Services and Metric Thresholds: Example

exec DBMS_SERVER_ALERT.SET_THRESHOLD( -
  METRICS_ID => DBMS_SERVER_ALERT.ELAPSED_TIME_PER_CALL, -
  WARNING_OPERATOR => DBMS_SERVER_ALERT.OPERATOR_GE, -
  WARNING_VALUE => '500000', -
  CRITICAL_OPERATOR => DBMS_SERVER_ALERT.OPERATOR_GE, -
  CRITICAL_VALUE => '750000', -
  OBSERVATION_PERIOD => 15, -
  CONSECUTIVE_OCCURRENCES => 3, -
  INSTANCE_NAME => 'I0n', -
  OBJECT_TYPE => DBMS_SERVER_ALERT.OBJECT_TYPE_SERVICE, -
  OBJECT_NAME => 'ERP');

Must be set on each instance supporting the service

Services and Metric Thresholds: Example
In this example, thresholds are added for the ERP service for the ELAPSED_TIME_PER_CALL metric. This metric measures the elapsed time for each user call for the corresponding service. The time must be expressed in microseconds.

A warning alert is raised by the server whenever the average elapsed time per call for the ERP service over a 15-minute period exceeds 0.5 seconds three consecutive times. A critical alert is raised by the server whenever the average elapsed time per call for the ERP service over a 15-minute period exceeds 0.75 seconds three consecutive times.

Note: The thresholds must be created for each RAC instance that potentially supports the service.


Service Aggregation and Tracing

• Statistics are always aggregated by service to measure workloads for performance tuning.

• Statistics can be aggregated at finer levels:
– MODULE

– ACTION

– Combination of SERVICE_NAME, MODULE, ACTION
• Tracing can be done at various levels:

– SERVICE_NAME

– MODULE

– ACTION

– Combination of SERVICE_NAME, MODULE, ACTION
• Useful for tuning systems using shared sessions

Service Aggregation and Tracing

By default, important statistics and wait events are collected for the work attributed to every service. An application can further qualify a service by MODULE and ACTION names to identify the important transactions within the service. This enables you to locate exactly which transactions in a categorized workload are performing poorly. This is especially important when monitoring performance in systems that use connection pools or transaction processing monitors, because sessions are shared in these systems, which makes accountability difficult.

SERVICE_NAME, MODULE, and ACTION are actual columns in V$SESSION. SERVICE_NAME is set automatically at login time for the user. MODULE and ACTION names are set by the application by using the DBMS_APPLICATION_INFO PL/SQL package or special OCI calls. MODULE should be set to a user-recognizable name for the program that is currently executing. Likewise, ACTION should be set to the specific action or task that a user is performing within a module (for example, entering a new customer).

Another aspect of this workload aggregation is tracing by service. The traditional method of tracing each session produces trace files with SQL commands that can span workloads. This results in a hit-or-miss approach to diagnosing problematic SQL. With the criteria that you provide (SERVICE_NAME, MODULE, or ACTION), specific trace information is captured in a set of trace files and combined into a single output trace file. This enables you to produce trace files that contain only the SQL relevant to a specific workload.
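As a minimal sketch of how an application tags its session with MODULE and ACTION names (the PAYMENTS and QUERY_DELINQUENT names reuse those from the later example in this lesson and are illustrative only):

```sql
-- Tag the session before running the payments task, so statistics
-- and traces can be aggregated by service/module/action.
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'PAYMENTS',
    action_name => 'QUERY_DELINQUENT');
END;
/

-- Clear the module and action when the task completes.
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => NULL,
                                   action_name => NULL);
END;
/
```

The values set here appear in the MODULE and ACTION columns of V$SESSION for the duration of the task.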


Cluster Database: Top Services

Cluster Database: Top Services

From the Cluster Database Performance page, you can access the Top Consumers page by clicking the Top Consumers link.

The Top Consumers page has several tabs for displaying your RAC database as a single-system image. The Overview tab contains four pie charts: Top Clients, Top Services, Top Modules, and Top Actions. Each chart provides a different perspective regarding the top resource consumers across all instances of a particular RAC database.

The Top Services tab displays performance-related information for the services that are defined in your cluster. Performance data is broken down by each instance that the service has run on since startup and is also summarized across all instances.

Through this page, you can enable or disable tracing at the service level, as well as view the resulting SQL trace file.


Service Aggregation Configuration

• Statistics are automatically aggregated at the service level
• DBMS_MONITOR is used for finer granularity of service aggregations:
– SERV_MOD_ACT_STAT_ENABLE

– SERV_MOD_ACT_STAT_DISABLE

• Possible additional aggregation levels:
– SERVICE_NAME/MODULE
– SERVICE_NAME/MODULE/ACTION

• Tracing services, modules, and actions:
– SERV_MOD_ACT_TRACE_ENABLE

– SERV_MOD_ACT_TRACE_DISABLE

• Database settings persist across instance restarts

Service Aggregation Configuration

On each instance, important statistics and wait events are automatically aggregated and collected by service. You do not have to do anything to set this up, except connect with connect strings that use the services you want. However, to achieve a finer level of granularity of statistics collection for services, you must use the SERV_MOD_ACT_STAT_ENABLE procedure in the DBMS_MONITOR package. This procedure enables statistics gathering for additional hierarchical combinations of SERVICE_NAME/MODULE and SERVICE_NAME/MODULE/ACTION. The SERV_MOD_ACT_STAT_DISABLE procedure stops the statistics gathering that was turned on.

The enabling and disabling of statistics aggregation within the service applies to every instance accessing the database. Furthermore, these settings are persistent across instance restarts.

The SERV_MOD_ACT_TRACE_ENABLE procedure enables tracing for services with three hierarchical possibilities: SERVICE_NAME, SERVICE_NAME/MODULE, and SERVICE_NAME/MODULE/ACTION. The default is to trace for all instances that access the database. A parameter is provided that restricts tracing to specified instances where poor performance is known to exist. This procedure also gives you the option of capturing relevant waits and bind variable values in the generated trace files. SERV_MOD_ACT_TRACE_DISABLE disables the trace at all enabled instances for a given combination of service, module, and action. Like the statistics gathering mentioned previously, service tracing persists across instance restarts.


Service Aggregation: Example

• Collect statistics on service and module.

• Collect statistics on service, module, and action.

• Trace all sessions of an entire service.

• Trace on service, module, and action.

exec DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE( -
  'AP', 'PAYMENTS');

exec DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE( -
  'AP', 'PAYMENTS', 'QUERY_DELINQUENT');

exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('AP');

exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE( -
  'AP', 'PAYMENTS', 'QUERY_DELINQUENT');

Service Aggregation: Example

The first piece of sample code begins collecting statistics for the PAYMENTS module within the AP service. The second example collects statistics only for the QUERY_DELINQUENT program that runs in the PAYMENTS module under the AP service. This enables statistics collection on specific tasks that run in the database.

In the third code box, all sessions that log in under the AP service are traced. A trace file is created for each session that uses the service, regardless of the module and action. You can also trace only specific tasks within a service. This is illustrated in the last example, where all sessions of the AP service that execute the QUERY_DELINQUENT action within the PAYMENTS module are traced.

Tracing by service, module, and action enables you to focus your tuning efforts on specific SQL, rather than sifting through trace files with SQL from different programs. Only the SQL statements that define this one task are recorded in the trace file. This complements collecting statistics by service, module, and action because relevant wait events for an action can be identified.

Note: For more information about the DBMS_MONITOR package, refer to PL/SQL Packages and Types Reference.


The trcsess Utility

[Diagram: Several clients are served by dedicated server processes and by shared server processes belonging to the CRM and ERP services, each process writing its own trace file. TRCSESS consolidates these trace files into a single trace file for one client, or into a single trace file for the CRM service; the consolidated file can then be processed by TKPROF to produce a report file.]

The trcsess Utility

The trcsess utility consolidates trace output from selected trace files based on several criteria: session ID, client ID, service name, action name, and module name. After trcsess merges the trace information into a single output file, the output file can be processed by tkprof.

When using the DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE procedure, tracing information is present in multiple trace files, and you must use the trcsess tool to collect it into a single file.

The trcsess utility is useful for consolidating the tracing of a particular session or service for performance or debugging purposes.

Tracing a specific session is usually not a problem in the dedicated server model because a single dedicated process serves a session during its lifetime. All the trace information for the session can be seen in the trace file belonging to the dedicated server serving it. However, tracing a service might become a complex task even in the dedicated server model. Moreover, in a shared server configuration, a user session is serviced by different processes from time to time. The trace pertaining to the user session is scattered across different trace files belonging to different processes. This makes it difficult to get a complete picture of the life cycle of a session.
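As a sketch of the consolidation step described above, the following commands merge the per-process trace files for the CRM service into one file and then format it with tkprof. The file names are illustrative, and the commands are assumed to run in the directory containing the trace files (the user dump destination):

```shell
$ trcsess output=crm_svc.trc service=CRM *.trc
$ tkprof crm_svc.trc crm_svc.prf
```

The same trcsess invocation accepts session=, clientid=, module=, and action= criteria in place of service=.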


Service Performance Views

• Service, module, and action information in:
– V$SESSION

– V$ACTIVE_SESSION_HISTORY

• Service performance in:
– V$SERVICE_STATS

– V$SERVICE_EVENT

– V$SERVICE_WAIT_CLASS

– V$SERVICEMETRIC

– V$SERVICEMETRIC_HISTORY

– V$SERV_MOD_ACT_STATS

– DBA_ENABLED_AGGREGATIONS

• 28 statistics for services

Service Performance Views

The service, module, and action information is visible in V$SESSION and V$ACTIVE_SESSION_HISTORY.

The call times and performance statistics are visible in V$SERVICE_STATS, V$SERVICE_EVENT, V$SERVICE_WAIT_CLASS, V$SERVICEMETRIC, and V$SERVICEMETRIC_HISTORY.

When statistics collection for specific modules and actions is enabled, performance measures are visible at each instance in V$SERV_MOD_ACT_STATS.

There are over 300 performance-related statistics that are tracked and visible in V$SYSSTAT. Of these, 28 statistics are tracked for services. To see the statistics measured for services, run the following query:

SELECT DISTINCT stat_name FROM v$service_stats;

Of the 28 statistics, DB time and DB CPU are worth mentioning. DB time is a statistic that measures the average response time per call. It represents the actual wall clock time for a call to complete. DB CPU is an average of the actual CPU time spent per call. The difference between response time and CPU time is the wait time for the service. After the wait time is known, and if it consumes a large percentage of the response time, then you can trace at the action level to identify the waits.

Note: DBA_ENABLED_AGGREGATIONS displays information about enabled on-demand statistic aggregation.
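To illustrate the response-time breakdown just described, a query along the following lines compares DB time and DB CPU for a service, the difference being its wait time. This is a sketch; the ERP service name is reused from earlier examples:

```sql
-- Compare elapsed time and CPU time for the ERP service.
-- VALUE is reported in microseconds in V$SERVICE_STATS.
SELECT stat_name, value
FROM   v$service_stats
WHERE  service_name = 'ERP'
AND    stat_name IN ('DB time', 'DB CPU');
```

If DB time greatly exceeds DB CPU, the service is spending most of its response time waiting, and action-level tracing can identify the waits.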


Managing Services

• Use EM or SRVCTL to manage services:
– Start: Allow connections
– Stop: Prevent connections
– Enable: Allow automatic restart and redistribution
– Disable: Prevent starting and automatic restart
– Relocate: Temporarily change instances on which services run
– Modify: Modify preferred and available instances
– Get status information
• Use the DBCA or SRVCTL to:
– Add or remove services
– Modify services

Managing Services

Depending on the type of management tasks that you want to perform, you can use Enterprise Manager, DBCA, or SRVCTL. The following is a description of the management tasks related to services in a RAC environment:

• Disabling a service is used to disable a specified service on all or specified instances. The disable state is used when a service is down for maintenance to prevent inappropriate automatic CRS restarts. Disabling an entire service affects all the instances by disabling each one.

• Enabling a service is used to enable a service to run under CRS for automatic restart and redistribution. You can enable a service even if that service is stopped. Enable is the default when a service is created. If the service is already enabled, then the command is ignored. Enabled services can be started; disabled services cannot. Enabling an entire service enables the service on each of the instances.

• Starting a service is used to start a service or multiple services on the specified instance. Only enabled services can be started. The command fails if you attempt to start a service on an instance when the number of instances currently running the service has already reached its cardinality.


Managing Services (continued)

• Stopping a service is used to stop one or more services globally across the cluster database, or on a specified instance. Only CRS services that are starting or started are stopped. If you intend a service to remain stopped, you should also disable it, because a stopped service that is not disabled can be restarted automatically as a result of another planned operation. This operation can force sessions to be disconnected transactionally.

• Removing a service is used to remove its configuration from the cluster database on all or specified instances. You must first stop the corresponding service before you can remove it. You can remove a service from specific instances only.

• Relocating a service is used to relocate a service from a source instance to a target instance. The target instance must be on the preferred or available list for the service. This operation can force sessions to be disconnected transactionally. A relocation is temporary until you permanently modify the service configuration.

• Modifying a service configuration permanently changes the service configuration. The change takes effect the next time the service is restarted. This allows you to move a service from one instance to another, and to change which instances are the preferred and available instances for a service.

• Displaying the current state of a named service.

You can administer a service with Enterprise Manager only after creating the service with DBCA or SRVCTL. When using the DBCA to add services, the DBCA also configures the net service entries for these services and starts them. When you use DBCA to remove services, DBCA stops the service, removes the CRS resource for the service, and removes the net service entries. When you create a service with SRVCTL, you must start it with a separate SRVCTL command.

Note: When you create a service, it is automatically enabled.
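As a sketch of creating and then starting a service with SRVCTL (the PROD database, AP service, and RAC instance names reuse those from the example on the next slide and are illustrative):

```shell
$ srvctl add service -d PROD -s AP -r RAC01,RAC02,RAC03,RAC04 -a RAC05
$ srvctl start service -d PROD -s AP
```

The -r option lists the preferred instances, and -a lists the available instances that can take over the service.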


Managing Services with EM

Managing Services with EM

EM provides you with some ability to manage services within a GUI framework. The screenshot shown in the slide is the main page for administering services within RAC. It shows you basic status information about the defined services.

To access this page, click the Cluster Managed Database Services link on the Cluster Database Administration page. With the initial release of EM, you can perform simple service management such as enabling, disabling, starting, stopping, and relocating services.

If you choose to start a service on the Cluster Managed Database Services page, then EM attempts to start the service on every preferred instance. Stopping the service stops it on all instances on which it is currently running.

To start or stop a service on individual instances, or to relocate a service, choose the service that you want to administer and then click the Manage Service button.


Managing Services with EM

Managing Services with EM (continued)

To access this page, you must choose a service from the Cluster Managed Database Services page and then click the Manage Service button.

This is the Cluster Managed Database Service page for an individual service. It offers you the same functionality as the previous page, except that actions performed here apply to specific instances of a service.

This page also offers you the added functionality of relocating a service to an available instance. Relocating a service from one instance to another stops the service on the first instance and then starts it on the second.

Note: This page also shows you the TAF policy set for this particular service.


Managing Services: Example

• Start a named service on all preferred instances.

• Stop a service on selected instances.

• Disable a service at a named instance.

• Set an available instance as a preferred instance.

$ srvctl start service -d PROD -s AP

$ srvctl stop service -d PROD -s AP -i RAC03,RAC04

$ srvctl disable service -d PROD -s AP -i RAC04

$ srvctl modify service -d PROD -s AP -i RAC05 -r

Managing Services: Example

The slide demonstrates some management tasks with services by using SRVCTL. Assume that an AP service has been created with four preferred instances: RAC01, RAC02, RAC03, and RAC04. An available instance, RAC05, has also been defined for AP.

In the first example, the AP service is started on all preferred instances. If any of the preferred or available instances that support AP are not running but are enabled, then they are started.

The stop command stops the AP service on instances RAC03 and RAC04. The instances themselves are not shut down, but remain running, possibly supporting other services. The AP service continues to run on RAC01 and RAC02. The intention might have been to do maintenance on RAC04, and so the AP service was disabled on that instance to prevent automatic restart of the service. The OCR records the fact that AP is disabled for RAC04; thus, CRS will not run AP on RAC04 until the service is enabled later.

The last command in the slide changes RAC05 from an available instance to a preferred one. This is beneficial if the intent is to always have four instances run the service, because RAC04 was previously disabled.

Note: For more information, refer to the Oracle Real Application Clusters Administrator's Guide.


Summary

In this lesson, you should have learned how to:
• Configure and manage services in a RAC environment
• Use services with client applications
• Use services with the Database Resource Manager
• Use services with the Scheduler
• Set performance metric thresholds on services
• Configure services aggregation and tracing


Practice 7 Overview

This practice covers the following topics:
• Defining services using DBCA
• Managing services using Database Control
• Using server-generated alerts in combination with services


High Availability of Connections


Objectives

After completing this lesson, you should be able to do the following:
• Configure client-side connect-time load balancing
• Configure client-side connect-time failover
• Configure server-side connect-time load balancing
• Describe the benefits of Fast Application Notification (FAN)
• Configure server-side callouts
• Configure the server-side and client-side ONS
• Configure Transparent Application Failover (TAF)


Types of Workload Distribution

• Connection balancing is made possible by configuring multiple listeners on multiple nodes:
– Client-side connect-time load balancing
– Client-side connect-time failover
– Server-side connect-time load balancing
• Run-time balancing is made possible by using connection pools:
– Work requests are automatically balanced across the pool of connections
– Native feature of the JDBC implicit connection cache

Types of Workload Distribution

With RAC, multiple listeners on multiple nodes can be configured to handle client connection requests for the same database service. A multiple-listener configuration enables you to leverage the following failover and load-balancing features:
• Client-side connect-time load balancing
• Client-side connect-time failover
• Server-side connect-time load balancing

These features can be implemented one by one, or in combination with each other. Moreover, if you are using connection pools, you can benefit from run-time balancing to distribute client work requests across the pool of connections established by the middle tier. This capability is offered by the Oracle JDBC implicit connection cache feature.


Client Side Connect-Time Load Balancing

ERP =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2vip)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = ERP)))

[Diagram: Client connection requests are randomly distributed between the listeners on node1 and node2.]

Client-Side Connect-Time Load Balancing

The client-side connect-time load balancing feature enables clients to randomize connection requests among a list of available listeners. Oracle Net progresses through the list of protocol addresses in a random sequence, balancing the load across the various listeners. Without this feature, Oracle Net always takes the first protocol address when attempting a connection.

You enable this feature by setting the LOAD_BALANCE=ON clause in the corresponding client-side TNS entry.

Note: For a small number of connections, the random sequence does not always produce an even distribution.


Client Side Connect-Time Failover

ERP =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2vip)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = ERP)))

[Diagram: (1) The node behind node1vip fails; (2) its virtual IP address is brought online on the surviving node; (3) the client's connection attempt to node1vip is immediately acknowledged and refused; (4) the client immediately tries node2vip.]

Client-Side Connect-Time Failover

This feature enables clients to connect to another listener if the initial connection to the first listener fails. The number of listener protocol addresses in the connect descriptor determines how many listeners are tried. Without client-side connect-time failover, Oracle Net attempts a connection with only one listener. As shown in the example, client-side connect-time failover is enabled by setting the FAILOVER=ON clause in the corresponding client-side TNS entry.

In the example, you expect the client to randomly attempt connections to either NODE1VIP or NODE2VIP, because LOAD_BALANCE is set to ON. If one of the nodes is down, the client cannot know this. If a connection attempt is made to a down node, the client must wait for the notification that the node is not accessible before an alternate address in the ADDRESS_LIST is tried.

It is therefore highly recommended to use virtual host names in the ADDRESS_LIST of your connect descriptors. Should a node fail (1), the virtual IP address assigned to that node is failed over and brought online on another node in the cluster (2). Thus, all client connection attempts still get a response from the IP address, without waiting for the operating system TCP/IP timeout (3). Clients receive an immediate acknowledgement from the IP address and are notified that the service on that node is not available. The next address in the ADDRESS_LIST can then be tried immediately with no delay (4).

Note: If using connect-time failover, do not set GLOBAL_DBNAME in your listener.ora file.


Server Side Connect-Time Load Balancing

[Diagram: (1) PMON on each instance registers workload information with the listeners on Node1 and Node2; (2) a client connection request for the ERP service, started on both instances, arrives at the listener on Node2; (3) that listener identifies the least-loaded instance; (4) it redirects the request to the listener on Node1; (5) that listener starts a dedicated server process; (6) the connection is made to that process.]

RACDB_LISTENERS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = node1vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = tcp)(HOST = node2vip)(PORT = 1521)))

*.REMOTE_LISTENER = RACDB_LISTENERS

ERP =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2vip)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = ERP)))

Server-Side Connect-Time Load Balancing

The slide shows how listeners evenly distribute service connection requests across a RAC cluster. Here, the client application connects to the ERP service. On the server side, the database uses the dynamic service registration feature. This allows the PMON process of each instance in the cluster to register workload information with each listener in the cluster (1). Each listener is then aware of which instances have a particular service started, as well as the absolute session count per instance and the run-queue length of each node.

You configure this feature by setting the REMOTE_LISTENER initialization parameter of each instance to a TNS name that describes the list of all available listeners. The slide shows the shared entry in the SPFILE as well as its corresponding server-side TNS entry.

Depending on the load information sent by each PMON process, a listener, by default, redirects the incoming connection request (2) to the listener of the least-loaded instance on the least-loaded node (3). If you want to guarantee that the listener redirects the connection request to the least-loaded instance, then set PREFER_LEAST_LOADED_NODE_[listener_name] to OFF in the listener.ora files. This is better for applications that use connection pools.

In the example, the listener on NODE2 is tried first. Based on workload information dynamically updated by PMON processes, the listener determines that the best instance is the one residing on NODE1. The listener redirects the connection request to the listener on NODE1 (4). That listener then starts a dedicated server process (5), and the connection is made to that process (6).

Note: For more information, refer to the Net Services Administrator's Guide.
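A minimal listener.ora fragment for the setting mentioned above might look as follows. The listener name LISTENER_NODE1 is hypothetical; substitute your own listener name in the parameter:

```
# listener.ora on node1 (listener name LISTENER_NODE1 is illustrative)
# Redirect connections to the least-loaded instance rather than the
# least-loaded node -- recommended for connection-pool applications.
PREFER_LEAST_LOADED_NODE_LISTENER_NODE1 = OFF
```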


Fast Application Notification: Overview

[Diagram: On each cluster node, CRS and EMD publish HA events through the ONS daemon. Events reach server-side callout scripts and executables; proxy applications that forward to system logs (syslog), e-mail (SMTP), or an SNMP console; C applications using the ONC OCI API with a local ONS; Java applications using the ONC Java API with a local ONS; Java applications using JDBC; DB Control; a remote trouble-ticket repository; and a remote connection manager.]

Fast Application Notification: Overview

A typical RAC installation may generate a myriad of cluster events, which are used by the internal components of a cluster to manage its high availability. Fast Application Notification (FAN) is a feature that filters and publishes only those high-availability events that are considered meaningful to specific targets. There are currently three main methods to integrate FAN with your applications:

• Server-side callouts that are using wrapper scripts or executables installed in a particular directory on the database server. Callouts should be coded to affect local components. However, callouts can also affect remote components if proxy applications are deployed. Proxy applications may include existing server components that expose command-line interfaces, such as SMTP mail servers and SNMP agents, or custom programs that connect exclusively to a specific set of remote applications.

• The Oracle Notification Client (ONC) API, which provides a distributed, publish-and-subscribe protocol over TCP/IP. This API is available for C and Java applications. With this method, RAC is the default event publisher, and any custom C or Java application reachable from the database server, simply links the ONC API library and subscribes to FAN events. Upon reception, the application can execute event-handling actions. An Oracle Notification Services (ONS) listening daemon needs to be started on each host where one or more ONC subscribers are present. This includes each node of your RAC environment.


Fast Application Notification: Overview (continued)

• Applications can also enable the Oracle JDBC implicit connection cache and let FAN events be handled by the Oracle JDBC libraries directly. Using this method, you no longer need to write any custom Java or C code to handle FAN events, nor invoke the ONC API directly. Think of this as an out-of-the-box integration with the ONS.

Note: Enterprise Manager is tightly integrated with FAN, and no configuration is required.


Fast Application Notification Benefits

• No need for connections to rely on connection timeouts

• Designed for enterprise application and management console integration

• Reliable distributed system that:
– Detects high-availability events in a timely manner
– Pushes notifications directly to your applications

• Tightly integrated with:
– Oracle JDBC applications using connection pools
– Enterprise Manager
– Data Guard Broker

Fast Application Notification Benefits

Traditionally, client or mid-tier applications connected to the database have relied on connection timeouts, out-of-band polling mechanisms, or other custom solutions to realize that a system component has failed. This approach has huge implications for application availability, because down times are extended and more noticeable.

With FAN, important high-availability events are pushed as soon as they are detected, which results in a more efficient use of existing computing resources and a better integration with your enterprise applications, including mid-tier connection managers and IT management consoles such as trouble-ticket loggers and e-mail/paging servers.

FAN is in fact a distributed system that is enabled on each participating node. This makes it very reliable and fault-tolerant, because the failure of one component is detected by another. Therefore, event notification can be detected and pushed by any of the participating nodes.

FAN events are tightly integrated with Oracle Data Guard Broker, the Oracle JDBC implicit connection cache, and Enterprise Manager. Oracle Database 10g JDBC applications managing connection pools do not need custom code development; they are automatically integrated with the ONS if implicit connection cache and fast connection failover are enabled.

Note: For more information about FAN and Data Guard integration, refer to the lesson titled Design for High Availability in this course.


FAN-Supported Event Types

Event type       Description
NODE             Oracle cluster node
ASM              Oracle ASM instance
INSTANCE         Oracle instance
DATABASE         Oracle database
SERVICEMEMBER    Application service on a specific instance
SRV_PRECONNECT   Shadow application service event (mid-tiers and TAF using primary and secondary instances)
SERVICE          Primary application service

FAN-Supported Event Types

FAN delivers events pertaining to the above list of managed cluster resources. The table describes each of the resources.

Note: SRV_PRECONNECT is discussed later in this lesson.


FAN Event Status

Event status      Description
up                Managed resource comes up.
down              Managed resource goes down.
preconn_up        Shadow application service comes up.
preconn_down      Shadow application service goes down.
nodedown          Managed node goes down.
not_restarting    Managed resource cannot fail over to a remote node.
restart_failed    Managed resource fails to start locally after a discrete number of retries.
unknown           Unrecognized status

FAN Event Status
This table describes the event status for each of the managed cluster resources seen previously.


FAN Event Reasons

Event reason   Description
user           User-initiated commands, such as srvctl and sqlplus
failure        Managed resource polling checks detecting a failure
dependency     Dependency of another managed resource that triggered a failure condition
unknown        Unknown or internal application state when event is triggered
autostart      Initial cluster boot: Managed resource has profile attribute AUTO_START=1, and was offline before the last CRS shutdown.
boot           Initial cluster boot: Managed resource was running before the last CRS shutdown.

FAN Event Reasons
The event status for each managed resource is associated with an event reason. The reason further describes what triggered the event. The table above lists the possible reasons with a corresponding description.


FAN Event Format

<Event_Type> VERSION=<n.n>
[service=<serviceName.dbDomainName>] [database=<dbName>]
[instance=<sid>] [host=<hostname>]
status=<Event_Status> reason=<Event_Reason> [card=<n>]
timestamp=<eventDate> <eventTime>

SERVICE VERSION=1.0 service=ERP.oracle.com database=RACDB status=up reason=user card=4 timestamp=16-Mar-2004 19:08:15

NODE VERSION=1.0 host=strac-1 status=nodedown timestamp=16-Mar-2004 17:35:53

FAN Event Format
In addition to its type, status, and reason, a FAN event has other payload fields to further describe the unique cluster resource whose status is being monitored and published:
• The event payload version, which is currently 1.0
• The name of the primary or shadow application service; this field is excluded from NODE events
• The name of the RAC database, which is also excluded from NODE events
• The name of the RAC instance, which is excluded from SERVICE, DATABASE, and NODE events
• The name of the cluster host machine, which is excluded from SERVICE and DATABASE events
• The service cardinality, which is excluded from all events except for SERVICE status=up events
• The server-side date and time when the event is detected
The general FAN event format is described in the slide along with possible FAN event examples. Note the differences in event payload for each FAN event type.


Server-Side Callouts Implementation

• The callout directory:
  – <CRS Home>/racg/usrco
  – Can store more than one callout
  – Grant execution on the callout directory and callouts only to the CRS user
• Callout execution order is nondeterministic.
• Writing a callout involves:
  – Parsing callout arguments: the event payload
  – Filtering incoming FAN events
  – Executing event-handling programs

Server-Side Callouts Implementation
Each database event detected by the RAC High Availability (HA) framework results in the execution of each callout deployed in the standard CRS callout directory. On UNIX, it is $ORACLE_BASE/product/10.1.0/crs/racg/usrco. Unless your CRS home directory is shared across the network, you must deploy each new callout on each RAC node.
The order in which these callouts are executed is nondeterministic. However, RAC guarantees that all callouts are invoked once for each recognized event, in an asynchronous fashion. Thus, it is recommended to merge callouts whose executions need to occur in a particular order.
You can install as many callout scripts or programs as your business requires, provided that each callout does not incur expensive operations that delay the propagation of HA events. If many callouts are going to be written to perform different operations based on the event received, it might be more efficient to write a single callout program that merges each single callout.
Writing server-side callouts involves the steps shown above. In order for your callout to identify an event, it must parse the event payload sent by the RAC HA framework. After the event is identified, your callout can filter it to avoid execution on every event notification. Then, your callout needs to implement a corresponding event handler that depends on the event itself and the recovery process required by your business.
Note: As a security measure, make sure that the callout directory and its contained callouts have write permissions only for the system user who installed CRS.

Page 315: User Guide RAC

Oracle Database 10g: Real Application Clusters 8-15

Copyright © 2004, Oracle. All rights reserved.

Server-Side Callout Parse: Example

#!/bin/sh
NOTIFY_EVENTTYPE=$1
for ARGS in $*; do
  PROPERTY=`echo $ARGS | $AWK -F"=" '{print $1}'`
  VALUE=`echo $ARGS | $AWK -F"=" '{print $2}'`
  case $PROPERTY in
    VERSION|version)     NOTIFY_VERSION=$VALUE ;;
    SERVICE|service)     NOTIFY_SERVICE=$VALUE ;;
    DATABASE|database)   NOTIFY_DATABASE=$VALUE ;;
    INSTANCE|instance)   NOTIFY_INSTANCE=$VALUE ;;
    HOST|host)           NOTIFY_HOST=$VALUE ;;
    STATUS|status)       NOTIFY_STATUS=$VALUE ;;
    REASON|reason)       NOTIFY_REASON=$VALUE ;;
    CARD|card)           NOTIFY_CARDINALITY=$VALUE ;;
    TIMESTAMP|timestamp) NOTIFY_LOGDATE=$VALUE ;;
    ??:??:??)            NOTIFY_LOGTIME=$PROPERTY ;;
  esac
done

Server-Side Callout Parse: Example
Unless you want your callouts to be executed on each event notification, you must first identify the event parameters that are passed automatically to your callout during its execution. The example in the slide above shows you how to parse these arguments by using a sample Bourne shell script.
The first argument that is passed to your callout is the type of event that is detected. Then, depending on the event type, a set of PROPERTY=VALUE strings is passed to identify the event itself.
The script identifies the event type and each PROPERTY=VALUE pair. The data is then dispatched into a set of variables that can be used later in the callout for filtering purposes.
As mentioned in the previous slide, it might be better to have a single callout that parses the event payload and then executes a function or another program based on the information in the event, as opposed to filtering information in each callout. This becomes necessary only if many callouts are required.
Note: In the above example, #!/bin/sh must be the very first line of the script for it to be correctly interpreted by the shell. The script also assumes that the AWK variable points to the awk executable.
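Purely as an illustration (not part of the original course material), the parsing loop above can be exercised by hand by invoking it with the sample SERVICE payload from the event-format slide. The helper name parse_event and the trimmed set of case branches are hypothetical; a real callout receives its arguments directly from CRS:

```shell
#!/bin/sh
# Sketch: drive the slide's parsing logic with a sample event payload.
# AWK is assumed to point at the awk executable, as in the slide.
AWK=awk

parse_event() {
  NOTIFY_EVENTTYPE=$1
  shift
  for ARGS in "$@"; do
    PROPERTY=`echo $ARGS | $AWK -F"=" '{print $1}'`
    VALUE=`echo $ARGS | $AWK -F"=" '{print $2}'`
    case $PROPERTY in
      SERVICE|service)     NOTIFY_SERVICE=$VALUE ;;
      DATABASE|database)   NOTIFY_DATABASE=$VALUE ;;
      STATUS|status)       NOTIFY_STATUS=$VALUE ;;
      REASON|reason)       NOTIFY_REASON=$VALUE ;;
      TIMESTAMP|timestamp) NOTIFY_LOGDATE=$VALUE ;;
      ??:??:??)            NOTIFY_LOGTIME=$PROPERTY ;;
    esac
  done
}

# Same payload as the SERVICE example on the FAN event format slide
parse_event SERVICE VERSION=1.0 service=ERP.oracle.com database=RACDB \
            status=up reason=user card=4 timestamp=16-Mar-2004 19:08:15

echo "service=$NOTIFY_SERVICE status=$NOTIFY_STATUS time=$NOTIFY_LOGTIME"
# prints: service=ERP.oracle.com status=up time=19:08:15
```

In a real callout, the same NOTIFY_* variables would then feed the filtering logic.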


Server-Side Callout Filter: Example

if ( ( ( [ $NOTIFY_EVENTTYPE = "SERVICE" ]  || \
         [ $NOTIFY_EVENTTYPE = "DATABASE" ] || \
         [ $NOTIFY_EVENTTYPE = "NODE" ] \
       ) && \
       ( [ $NOTIFY_STATUS = "not_restarting" ] || \
         [ $NOTIFY_STATUS = "restart_failed" ] \
       ) \
     ) && \
     ( [ $NOTIFY_DATABASE = "HQPROD" ] || \
       [ $NOTIFY_SERVICE = "ERP" ] \
     ) \
   )
then
  /usr/local/bin/logTicket.exe $NOTIFY_LOGDATE \
                               $NOTIFY_LOGTIME \
                               $NOTIFY_SERVICE \
                               $NOTIFY_DATABASE \
                               $NOTIFY_HOST
fi

Server-Side Callout Filter: Example
The example in the slide above shows you a way to filter FAN events from a callout script. This example is based on the example in the previous slide.
Now that the event characteristics are identified, this script triggers the execution of the trouble-logging program /usr/local/bin/logTicket.exe only when the RAC HA framework posts a SERVICE, DATABASE, or NODE event type, with a status set to either not_restarting or restart_failed, and only for the production HQPROD RAC database or the ERP service.
It is assumed that the logTicket.exe program is already created and that it takes the arguments shown above.
It is also assumed that a ticket is logged only for not_restarting or restart_failed events, because these are the ones that exceeded internally monitored timeouts and seriously need human intervention for full resolution.
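As a sketch (not from the course material), the filter above can be dry-run with a hypothetical not_restarting event on the ERP service; logTicket.exe is replaced here by an echo stub so that no real trouble-ticket program is needed:

```shell
#!/bin/sh
# Simulated event variables, as the parse script would have set them
# (values are hypothetical).
NOTIFY_EVENTTYPE=SERVICE
NOTIFY_STATUS=not_restarting
NOTIFY_DATABASE=RACDB
NOTIFY_SERVICE=ERP
NOTIFY_LOGDATE=16-Mar-2004
NOTIFY_LOGTIME=19:08:15
NOTIFY_HOST=strac-1

log_ticket() {    # stand-in for /usr/local/bin/logTicket.exe
  echo "TICKET: $*"
}

RESULT=""
if ( ( ( [ "$NOTIFY_EVENTTYPE" = "SERVICE" ]  || \
         [ "$NOTIFY_EVENTTYPE" = "DATABASE" ] || \
         [ "$NOTIFY_EVENTTYPE" = "NODE" ] ) && \
       ( [ "$NOTIFY_STATUS" = "not_restarting" ] || \
         [ "$NOTIFY_STATUS" = "restart_failed" ] ) ) && \
     ( [ "$NOTIFY_DATABASE" = "HQPROD" ] || \
       [ "$NOTIFY_SERVICE" = "ERP" ] ) )
then
  RESULT=`log_ticket $NOTIFY_LOGDATE $NOTIFY_LOGTIME \
                     $NOTIFY_SERVICE $NOTIFY_DATABASE $NOTIFY_HOST`
fi
echo "$RESULT"
# prints: TICKET: 16-Mar-2004 19:08:15 ERP RACDB strac-1
```

The database check fails (RACDB is not HQPROD), but the service check matches ERP, so the ticket is still logged, which traces exactly the OR branch described above.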


Configuring the Server-Side ONS

[Slide figure: Node1 and Node2 each run an ONS daemon whose ons.config contains localport=6100, remoteport=6200, and useocr=on; the nodes list is stored in the OCR.]

$ racgons add_config node1:6200 node2:6200
$ racgons add_config midtier1:6200

$ onsctl reconfig    (run on each cluster node)

Configuring the Server-Side ONS
The ONS configuration is controlled by the $ORACLE_HOME/opmn/conf/ons.config configuration file. This file is automatically created during installation. There are three important parameters that should always be configured for each ONS:
• localport, the port that the ONS uses to talk to local clients
• remoteport, the port that the ONS uses to talk to other ONS daemons
• nodes, the list of other ONS daemons to talk to. This list should include all RAC ONS daemons and all mid-tier ONS daemons. Node values are given as either host names or IP addresses followed by the remoteport. Instead of listing them here, you can store this data in the Oracle Cluster Registry (OCR) by using the racgons add_config command and setting the useocr parameter to on in the ons.config file. By storing node information in the OCR, you do not need to edit a file on every node to change the configuration; you only need to run a single command on one of the cluster nodes.
In the slide above, it is assumed that ONS daemons are already started on each cluster node. This should be the default situation after a correct RAC installation. However, if you want to use the OCR, you should edit the ons.config file on each node, and then add the configuration to the OCR before reloading it on each cluster node. This is illustrated in the slide above.
Note: You should run racgons whenever you add or remove a node that runs an ONS daemon. To remove a node from the OCR, use the racgons remove_config command.
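To make the server-side setup concrete, the following sketch (the scratch file location is illustrative; a real deployment writes $ORACLE_HOME/opmn/conf/ons.config on every node) generates the ons.config shown in the slide and recalls the OCR commands as comments:

```shell
#!/bin/sh
# Sketch: write a scratch copy of the server-side ons.config.
CONF=${TMPDIR:-/tmp}/ons.config.$$
cat > "$CONF" <<EOF
localport=6100
remoteport=6200
useocr=on
EOF

# With useocr=on, the nodes list lives in the OCR instead of the file:
#   racgons add_config node1:6200 node2:6200
#   racgons add_config midtier1:6200
#   onsctl reconfig      # reload the configuration on each node

PARAMS=`grep -c '=' "$CONF"`    # three parameters written
rm -f "$CONF"
echo "ons.config parameters: $PARAMS"
```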


Configuring the Client-Side ONS

[Slide figure: Midtier1 runs an ONS daemon alongside those on Node1 and Node2; its ons.config lists both cluster nodes.]

localport=6100
remoteport=6200
nodes=node1:6200,node2:6200

$ onsctl start

Configuring the Client-Side ONS
You must install the ONS on each host where you have client applications that need to be integrated with FAN. Most of the time, these hosts play the role of a mid-tier application server. The ONS is shipped as part of Oracle Database 10g Release 1 (10.1). On the client side, you must configure all the RAC nodes in the ONS configuration file. A sample configuration file might look like the one shown in the slide.
After configuring the ONS, you start the ONS daemon with the onsctl start command. It is your responsibility to make sure that an ONS daemon is running at all times. You can check that the ONS daemon is active by executing the onsctl ping command.
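A small sketch (host names and ports are illustrative) showing how the client-side nodes parameter can be assembled from the list of RAC hosts, producing the sample file in the slide:

```shell
#!/bin/sh
# Build the nodes= entry for a mid-tier ons.config from a host list.
REMOTEPORT=6200
NODES=""
for H in node1 node2; do
  if [ -z "$NODES" ]; then
    NODES="$H:$REMOTEPORT"
  else
    NODES="$NODES,$H:$REMOTEPORT"
  fi
done

printf 'localport=6100\nremoteport=%s\nnodes=%s\n' "$REMOTEPORT" "$NODES"
# After writing this to $ORACLE_HOME/opmn/conf/ons.config on the
# mid-tier host, start the daemon with: onsctl start
```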


JDBC Fast Connection Failover: Overview

[Slide figure: the mid-tier JDBC Implicit Connection Cache subscribes, through its local ONS daemon and an event-handler thread, to FAN events from the cluster ONS daemons. On a service or node DOWN event, the affected connections in the cache are marked down; on a service UP event, connections are reconnected through the listeners, which load-balance them across instances.]

JDBC Fast Connection Failover: Overview
Oracle Application Server 10g integrates the JDBC Implicit Connection Cache (ICC) with the ONS by having application developers enable Fast Connection Failover (FCF). FCF works in conjunction with the JDBC ICC to quickly and automatically recover lost or damaged connections. This automatic connection management results from FAN events received by the local ONS daemon and handled by a special event-handler thread. Both JDBC thin and JDBC OCI drivers are supported.
Therefore, if the JDBC ICC and FCF are enabled, your Java program automatically becomes an ONS subscriber without having to manage FAN events directly.
Whenever a service or node down event is received by the mid-tier ONS, the event handler automatically marks the corresponding connections as down. This prevents applications that request connections from the cache from receiving invalid or bad connections.
Whenever a service up event is received by the mid-tier ONS, the event handler recycles some unused connections and reconnects them using the event service name. The number of recycled connections is automatically determined by the connection cache. Because the listeners perform connect-time load balancing, this automatically rebalances the connections across the preferred instances of the service without waiting for application connection requests or retries.
Note: For more information, refer to the Oracle Database JDBC Developer's Guide and Reference.


JDBC Fast Connection Failover Benefits

• Database connections are balanced across all preferred RAC instances.
• Database connections are anticipated.
• Database connection failures are immediately detected and stopped.
• No need to add extra coding to your existing Java applications except to set the setFastConnectionFailoverEnabled flag.

JDBC Fast Connection Failover Benefits
By enabling the JDBC implicit connection cache and fast connection failover features, your existing Java applications connecting through Oracle JDBC and application services benefit from the following:
• All database connections are balanced across all RAC instances that support the new service name, instead of having the first batch of sessions routed to the first RAC instance. Connection pools are rebalanced upon service, instance, or node up events.
• The connection cache immediately starts placing connections to a particular RAC instance when a new service is started on that instance.
• The connection cache immediately shuts down stale connections to RAC instances where the service is stopped, or whose node goes down.
• Your Java program automatically becomes an ONS subscriber without having to manage FAN events directly. The data source needs to have the setFastConnectionFailoverEnabled attribute set to true to enable fast connection failover in the implicit connection cache.
Note: An exception is immediately thrown as soon as the service status becomes not_restarting, which avoids wasteful service connection retries.


Transparent Application Failover: Overview

[Slide figures, with numbered steps 1-8 referenced in the notes below: TAF Basic — an application connects through Net Services and the OCI library to the AP service and, after the connection fails, the OCI driver reconnects to the same service. TAF Preconnect — the application connects to the ERP service and simultaneously preconnects a shadow connection to ERP_PRECONNECT on an available instance.]

Transparent Application Failover (TAF): Overview
TAF is a run-time feature of the OCI driver. It enables your application to automatically reconnect to the service if the initial connection fails. During the reconnection, although your active transactions are rolled back, TAF can optionally resume the execution of a SELECT statement that was in progress. TAF supports two failover methods:
• With the BASIC method, the reconnection is established at failover time. The initial connection (2) is made after the corresponding service has been started on the nodes (1). The listener establishes the connection (3), and your application accesses the database (4) until the connection fails (5). Your application then receives an error the next time it tries to access the database (6). Then, the OCI driver reconnects to the same service (7), and the next time your application tries to access the database, it transparently uses the newly created connection (8).
• The PRECONNECT method is similar to the BASIC method, except that during the initial connection a shadow connection is also created to anticipate the failover. TAF guarantees that the shadow connection is always created on the available instances of your service. This is achieved by using an automatically created and maintained shadow service on the available instances only.
Note: Optionally, you can register TAF callbacks with the OCI layer. These callback functions are automatically invoked at failover detection and allow you to have some control over the failover process. For more information, refer to the Oracle Call Interface Programmer's Guide.


TAF Basic Configuration: Example

AP =
  (DESCRIPTION =
    (FAILOVER=ON)(LOAD_BALANCE=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=N1VIP)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=N2VIP)(PORT=1521))
    (CONNECT_DATA =
      (SERVICE_NAME = AP)
      (FAILOVER_MODE =
        (TYPE=SESSION)(METHOD=BASIC)
        (RETRIES=180)(DELAY=5))))

$ srvctl add service -d RACDB -s AP -r I1,I2 \
> -P BASIC
$ srvctl start service -d RACDB -s AP

TAF Basic Configuration: Example
Before using TAF, it is recommended that you create and start a service that is used during connections. By doing so, you benefit from the TAF and services integration. When you want to use BASIC TAF with a service, you should use the -P BASIC option when creating the service. After the service is created, you simply start it on your database.
Then, your application needs to connect to the service by using a connection descriptor similar to the one shown in the slide above. The FAILOVER_MODE parameter must be included in the CONNECT_DATA section of your connection descriptor:
• TYPE specifies the type of failover. The SESSION value means that only the user session is reauthenticated on the server side, whereas open cursors in the OCI application need to be reexecuted. The SELECT value means that not only is the user session reauthenticated on the server side, but the open cursors in the OCI can continue fetching. This implies that the client-side logic maintains the fetch state of each open cursor. A SELECT statement is reexecuted by using the same snapshot, discarding those rows already fetched, and retrieving those rows that were not fetched initially. TAF verifies that the discarded rows are those that were returned initially, or it returns an error message.
• METHOD=BASIC is used to reconnect at failover time.
• RETRIES specifies the number of times to attempt to connect after a failover.
• DELAY specifies the amount of time in seconds to wait between connect attempts.
Note: If you are using TAF, do not set the GLOBAL_DBNAME parameter in your listener.ora file.


TAF Preconnect Configuration: Example

ERP =
  (DESCRIPTION =
    (FAILOVER=ON)(LOAD_BALANCE=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=N1VIP)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=N2VIP)(PORT=1521))
    (CONNECT_DATA =
      (SERVICE_NAME = ERP)
      (FAILOVER_MODE =
        (BACKUP=ERP_PRECONNECT)
        (TYPE=SESSION)(METHOD=PRECONNECT))))

ERP_PRECONNECT =
  (DESCRIPTION =
    (FAILOVER=ON)(LOAD_BALANCE=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=N1VIP)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=N2VIP)(PORT=1521))
    (CONNECT_DATA =
      (SERVICE_NAME = ERP_PRECONNECT)))

$ srvctl add service -d RACDB -s ERP -r I1 -a I2 \
> -P PRECONNECT
$ srvctl start service -d RACDB -s ERP

TAF Preconnect Configuration: Example
In order to use PRECONNECT TAF, it is recommended that you create a service with preferred and available instances. Also, in order for the shadow service to be created and managed automatically by the CRS, you must define the service with the -P PRECONNECT option. The shadow service is always named <service_name>_PRECONNECT.
As with the BASIC method, you need to use a special connection descriptor to use the PRECONNECT method while connecting to the service. One such connection descriptor is shown in the slide above.
The main differences from the previous example are that METHOD is set to PRECONNECT and that an additional parameter, BACKUP, is added. It must be set to another entry in your tnsnames.ora file that points to the shadow service.
Note: In all cases where TAF cannot use the PRECONNECT method, it falls back to the BASIC method automatically.


TAF Verification

SELECT machine, failover_method, failover_type,
       failed_over, service_name, COUNT(*)
FROM   v$session
GROUP BY machine, failover_method, failover_type,
       failed_over, service_name;

On the first node:
MACHINE FAILOVER_M FAILOVER_T FAI SERVICE_N COUNT(*)
------- ---------- ---------- --- --------- --------
node1   BASIC      SESSION    NO  AP               1
node1   PRECONNECT SESSION    NO  ERP              1

On the second node, before failure:
MACHINE FAILOVER_M FAILOVER_T FAI SERVICE_N COUNT(*)
------- ---------- ---------- --- --------- --------
node2   PRECONNECT SESSION    NO  ERP_PRECO        1

On the second node, after failover:
MACHINE FAILOVER_M FAILOVER_T FAI SERVICE_N COUNT(*)
------- ---------- ---------- --- --------- --------
node2   BASIC      SESSION    YES AP               1
node2   PRECONNECT SESSION    YES ERP_PRECO        1

TAF Verification
To determine whether TAF is correctly configured and that connections are associated with a failover option, you can examine the V$SESSION view. To obtain information about the connected clients and their TAF status, examine the FAILOVER_TYPE, FAILOVER_METHOD, FAILED_OVER, and SERVICE_NAME columns. The example includes one query that you could execute to verify that you have correctly configured TAF.
The above example is based on the previously configured AP and ERP services and their corresponding connection descriptors.
The first output in the slide is the result of executing the query on the first node after two SQL*Plus sessions from the first node have connected to the AP and ERP services, respectively. The output shows that the AP connection ended up on the first instance. Because of the load-balancing algorithm, it could have ended up on the second instance instead. The ERP connection, however, must end up on the first instance because it is the only preferred one.
The second output is the result of executing the query on the second node before any connection failure. Note that there is currently one unused connection established under the ERP_PRECONNECT service, which is automatically started on the ERP available instance.
The third output corresponds to executing the query on the second node after the first instance fails. A second connection has been created automatically for the AP service connection, and the original ERP connection is now using the preconnected connection.


FAN Connection Pools and TAF Considerations

• Both techniques are integrated with services and provide service connection load balancing.
• Do not mix FAN and TAF.
• Connection pools using FAN are always preconnected.
• TAF may rely on OS timeouts to detect failures.
• FAN never relies on OS timeouts to detect failures.

FAN Connection Pools and TAF Considerations
Because connection load balancing is a listener functionality, both FAN connection pools and TAF automatically benefit from connection load balancing for services.
When you use FAN connection pools, there is no need to use TAF. Moreover, FAN connection pools and TAF cannot work together. For example, you do not need to preconnect if you use FAN connection pools: the connection pool is always preconnected.
With both techniques, you automatically benefit from VIPs at connection time. This means that your application does not rely on lengthy operating system connection timeouts at connect time or when issuing a SQL statement. However, when the application is in the SQL stack and blocked on a read/write call, it needs to be integrated with FAN in order to receive an interrupt if a node goes down. In such a case, TAF relies on OS timeouts to detect the failure, which takes much more time to fail over the connection than when using FAN.


Restricted Session and Services

ALTER SYSTEM ENABLE RESTRICTED SESSION;

[Slide figure: when the instance on RAC01 supporting the ERP service is put in restricted mode, CRS relocates the ERP service to RAC02, and the listeners stop routing ERP connection requests to RAC01.]

Restricted Session and Services
Whenever you put one instance of the cluster in restricted mode, the corresponding CRS stops the services running on the restricted instance and starts them on available instances, if they exist. That way, the listeners are dynamically informed of the changes, and they no longer attempt to route requests to the restricted instance, regardless of its current load. In effect, the listeners exempt the restricted instance from their connection load-balancing algorithm.
This feature comes with two important considerations:
• First, even users with the RESTRICTED SESSION privilege are not able to connect remotely through the listeners to an instance that is in restricted mode. They need to connect locally to the node supporting the instance and use the bequeath protocol.
• Second, this feature works only when the restricted instance dynamically registers with the listeners. In other words, if you configure the listener.ora file with SID_LIST entries and do not use dynamic registration, the listener cannot block connection attempts to a restricted instance. In this case, and because the unrestricted instances of the cluster are still accessible, the restricted instance will eventually become the least loaded, and the listener will start routing connection requests to it. Unable to accept the connection request because of its restricted status, the instance will deny the connection and return an error. This situation has the potential for blocking access to an entire service.
Note: The listener uses dynamic service registration information before static configurations.


Summary

In this lesson, you should have learned how to:
• Configure client-side connect-time load balancing
• Configure client-side connect-time failover
• Configure server-side connect-time load balancing
• Describe the benefits of Fast Application Notification
• Configure server-side callouts
• Configure the server-side and client-side ONS
• Configure Transparent Application Failover


Practice 8 Overview

This practice covers the following topics:
• Monitoring high availability of connections
• Creating and using callout scripts
• Using the transparent application failover feature


Managing Backup and Recovery in RAC


Objectives

After completing this lesson, you should be able to do the following:
• Configure the RAC database to use ARCHIVELOG mode and Flash Recovery
• Configure RMAN for the RAC environment
• Back up and recover the Oracle Cluster Registry


Protecting Against Media Failure

[Slide figure: media-failure protection combines database backups, mirrored disks, and archived log files.]

Protecting Against Media Failure
Although RAC provides you with methods to avoid or to reduce down time due to a failure of one or more (but not all) of your instances, you must still protect the database itself, which is shared by all the instances. This means that you need to consider disk backup and recovery strategies for your cluster database just as you would for a nonclustered database.
To minimize the potential loss of data due to disk failures, you may want to use disk mirroring technology (available from your server or disk vendor). As in nonclustered databases, you can have more than one mirror if your vendor allows it, to help reduce the potential for data loss and to provide you with alternate backup strategies. For example, with your database in ARCHIVELOG mode and with three copies of your disks, you can remove one mirror copy and perform your backup from it while the two remaining mirror copies continue to protect ongoing disk activity. To do this correctly, you must first put the tablespaces into backup mode and then, if required by your cluster or disk vendor, temporarily halt disk operations by issuing the ALTER SYSTEM SUSPEND command. After the statement completes, you can break the mirror and then resume normal operations by executing the ALTER SYSTEM RESUME command and taking the tablespaces out of backup mode.


Configure RAC Recovery Settings with EM

Configure Recovery Settings with EM
Enterprise Manager can be used to configure important recovery settings for your cluster database. From the Database Control home page, click the Maintenance folder tab, and then click the Configure Recovery Settings link. From here you can ensure that your database is in ARCHIVELOG mode and configure flash recovery settings.


Configure RAC Backup Settings with EM

Configure Backup Settings with EM
Persistent backup settings can be configured by using Enterprise Manager. From the Database Control home page, click the Maintenance folder tab, and then click the Configure Backup Settings link. You can configure disk settings, such as the directory location of your disk backups and the level of parallelism. You can also choose the default backup type:
• Backup set
• Compressed backup set
• Image copy
You can also specify important tape-related settings, such as the number of available tape drives and vendor-specific media management parameters.


Log History Records

SELECT records_total, records_used
FROM   v$controlfile_record_section
WHERE  type = 'LOG HISTORY';

RECORDS_TOTAL RECORDS_USED
------------- ------------
          500          500

CREATE DATABASE . . . MAXLOGHISTORY 500 . . .

CREATE CONTROLFILE . . . MAXLOGHISTORY 1000 . . .

Log History Records
Your RAC control file maintains a record of the archived log files generated for each thread of redo. This enables an instance that is performing media recovery to identify the archived log files that it needs, regardless of which instance generated them. The number of entries maintained in the control file is determined by the value assigned with the MAXLOGHISTORY clause in your CREATE DATABASE command.
As a best practice, MAXLOGHISTORY should be no less than the total number of archived log files that are generated across all your instances between each complete database backup. This enables a recovering instance to identify the archived log files that it requires after you restore the most recent backup.
If there are insufficient archived log entries in your control file, you are prompted for the required file names when you initiate the recovery process. This can be hard to do and is almost always time consuming. To increase the MAXLOGHISTORY setting, you must re-create your control file.


Initiate Archiving

1. Shut down all instances.
2. Start up an exclusive instance with the LOG_ARCHIVE_* parameters set appropriately.
3. Enter the command: ALTER DATABASE ARCHIVELOG
4. Modify the LOG_ARCHIVE_* parameters for each of the other instances.
5. Shut down the exclusive instance.
6. Restart your instances using the modified parameters.

Initiate Archiving
To enable archive logging, your database must be mounted, but not open, by an exclusive instance. If you are using an SPFILE, you must first create SID-specific entries for this instance; otherwise, build a special-purpose text parameter file. The parameters that you must set for this exclusive instance include the following:
• CLUSTER_DATABASE: Set to FALSE
• LOG_ARCHIVE_DEST_n: Up to 10 values, depending on your archive strategy
• LOG_ARCHIVE_FORMAT: Include the %t or %T parameter for thread identification
• LOG_ARCHIVE_START: Set to TRUE

You can now enable archiving for your database by following these steps:
1. Shut down all instances.
2. Start up your exclusive instance.
3. Enter the following command: ALTER DATABASE ARCHIVELOG
4. Modify the LOG_ARCHIVE_* parameters for each of the other instances.
5. Shut down the exclusive instance and reset its CLUSTER_DATABASE parameter.
6. Restart all of your instances using the modified parameters.
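For illustration only, the sequence might look like the following from SQL*Plus; the instance name RACDB1, the archive destination, and the format string are assumptions, not values from this course:

SQL> ALTER SYSTEM SET cluster_database=FALSE SCOPE=SPFILE SID='RACDB1';
SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=/arc_dest_1'
  2  SCOPE=SPFILE SID='RACDB1';
SQL> ALTER SYSTEM SET log_archive_format='arch_%t_%s_%r.arc'
  2  SCOPE=SPFILE SID='RACDB1';
-- Shut down all instances, then from the exclusive instance:
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='RACDB1';
SQL> SHUTDOWN IMMEDIATE
-- Restart all instances normally.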


Archived Log File Configurations

Cluster file system scheme: archive logs from each instance are written to the same file location

Local archive with NFS scheme: each instance can read the mounted archive destinations of all instances

Archived Log File Configurations
During backup and recovery operations involving archived log files, the Oracle server determines the file destinations and names from the control file. The archived log file path names can also be stored in the optional recovery catalog if you are using RMAN. The archived log file path names do not include the node name, however, so RMAN expects to find the files it needs on the nodes where the channels are allocated.
If you are using a cluster file system, your instances can all write to the same archive log destination. This is known as the cluster file system scheme. Backup and recovery of the archive logs are easy because all logs are located in the same directory.
If a cluster file system is not available, then Oracle recommends that local archive log destinations be created for each instance with NFS-read mount points to all other instances. This is known as the local archive with NFS scheme. During backup, you can either back up the archive logs from each host or select one host to perform the backup for all archive logs. During recovery, one instance may access the logs from any host without having to first copy them to the local destination.
Using either scheme, you may wish to provide a second archive destination to avoid single points of failure.


RAC and the Flash Recovery Area

[Diagram: flash recovery area placed on ASM, a cluster file system, or a shared NFS directory]

RAC and the Flash Recovery Area
To use a flash recovery area in RAC, you must place it on an ASM disk, on a cluster file system, or on a shared directory that is configured through NFS for each RAC instance. In other words, the flash recovery area must be shared among all the instances of a RAC database.
Note: Set the initialization parameters DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE to the same values on all instances.
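For example, the two parameters might be set once for all instances as follows; the size and the ASM disk group name +FRA are assumptions for illustration:

SQL> ALTER SYSTEM SET db_recovery_file_dest_size=10G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET db_recovery_file_dest='+FRA' SCOPE=BOTH SID='*';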


Oracle Recovery Manager

RMAN provides the following benefits for Real Application Clusters:
• Can read cluster files or raw partitions with no configuration changes
• Can access multiple archive log destinations

[Diagram: Recovery Manager with its recovery catalog and stored scripts, an Oracle server process, the Oracle database, the snapshot control file, archived log files, and backup storage]

Oracle Recovery Manager
Oracle Recovery Manager (RMAN) can use stored scripts, interactive scripts, or an interactive GUI front end. When using RMAN with your RAC database, use stored scripts to initiate the backup and recovery processes from the most appropriate node.
If you use different Oracle Home locations for your RAC instances on each of your nodes, create a snapshot control file in a location that can be accessed by all your instances, either a cluster file system or a shared raw device. Here is an example:

RMAN> CONFIGURE SNAPSHOT CONTROLFILE TO '/oracle/db_files/snaps/snap_prod1.cf';

For recovery, you must ensure that each recovery node can access the archive log files from all instances using one of the archive schemes discussed earlier, or make the archived logs available to the recovering instance by copying them from another location.


Configuring RMAN

• Configure the snapshot control file location in RMAN

• Configure the control file automatic backup feature

Snapshot Control File Management
RMAN creates snapshot copies of your control file as part of its backup operations. In a RAC database, the snapshot control file is created on the node that is making the backup. You need to configure a default path and file name for these snapshot control files that is valid on every node from which you might initiate an RMAN backup. You can use a shared file system location, including a raw device, if you want.

Automatic Control File Backups
RMAN optionally creates control file backups automatically after BACKUP and COPY commands. These backups can be used if the recovery catalog and current control file are lost. RMAN performs the control file autobackup on the first allocated channel. When you allocate multiple channels with different parameters (especially if you allocate a channel with the CONNECT command), you must determine which channel will perform the automatic backup. Always allocate the channel for the connected node first.
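A sketch of enabling the feature at the RMAN prompt; the disk format string is an assumption (an autobackup format must contain the %F substitution variable):

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT
      FOR DEVICE TYPE DISK TO '/oracle/cf_backups/%F';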


RMAN Default Autolocation

Recovery Manager autolocates the following files:
• Backup pieces
• Archived redo logs during backup
• Data file or control file copies

RMAN Default Autolocation
Recovery Manager automatically discovers which nodes of a RAC configuration can access the files that you want to back up or restore. Recovery Manager autolocates the following files:
• Backup pieces during backup or restore
• Archived redo logs during backup
• Data file or control file copies during backup or restore
In previous releases, you had to manually enable this option with SET AUTOLOCATE, and the option applied to backup pieces only.


User-Managed Backup Methods

• For offline backup, shut down all instances
• Can use multiple nodes for parallel backup
• Can convert tablespace between backup modes from any instance
• Provide access to all threads of archived redo

Without archiving:
• Full offline backups only
• Recovery to point of last backup

With archiving:
• Full or partial, offline or online backups
• Recovery to point of failure

User-Managed Backup Methods
The user-managed backup and recovery options and methods for RAC databases are similar to the procedures that are used in a noncluster database. These include full offline backups and, if you are running in ARCHIVELOG mode, online backups.
There are some additional issues that you need to consider when backing up your RAC database, among which are the following:
• You must shut down every instance for an offline database backup.
• You can employ more than one node to back up different data files in parallel.
• You need to issue the ALTER TABLESPACE commands (BEGIN BACKUP and END BACKUP) on only one instance. It does not have to be on the same node where you perform the disk backup operations.
You must provide for the backup and recovery operations to access the archive log files that are independently written by each instance.


Offline User-Managed Backup

• Query the following views to find files that need to be backed up:
  – V$DATAFILE or DBA_DATA_FILES
  – V$LOGFILE
  – V$CONTROLFILE
  – V$PARAMETER
• Shut down all instances
• Copy the identified files to a backup destination
• Partial offline backups are performed in exactly the same way whether the database is clustered or single-instance

Offline User-Managed Backup
The full offline user-managed backup procedure for a RAC database is almost identical to the procedure for a single-instance configuration. The only difference is that you must shut down all of the instances, not just one, before you begin the actual backup. The main steps in the backup procedure include:
1. Query the V$DATAFILE view to obtain the names and locations of the data files.
2. Query the V$LOGFILE view to obtain the names and locations of the online redo log files.
3. Query the V$CONTROLFILE view to obtain the names and locations of the control files.
4. Query the V$PARAMETER view to obtain the name of the SPFILE (if one is used).
5. Shut down all instances that are currently accessing the database.
6. After shutting down all instances, use an operating system utility to save all the data files, online redo log files, at least one copy of the control file, and the SPFILE.
7. Restart the instances.
The easiest way to perform this type of backup is to use a script. However, the script must confirm that all instances are shut down before starting to copy any of the files.
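A minimal sketch of the queries such a script might run to build its file list before the copy phase; the backup directory is hypothetical:

SQL> SELECT name FROM v$datafile;
SQL> SELECT member FROM v$logfile;
SQL> SELECT name FROM v$controlfile;
SQL> SELECT value FROM v$parameter WHERE name = 'spfile';
-- After confirming that every instance is shut down, copy each
-- listed file with an operating system utility, for example:
$ cp <file> /backup/offline/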


Online User-Managed Backup

• ARCHIVELOG mode is required
• Perform these steps for each tablespace to be backed up:
  – Execute ALTER TABLESPACE…BEGIN BACKUP
  – Invoke the operating system utility for file backup
  – Execute ALTER TABLESPACE…END BACKUP
• Execute ALTER DATABASE BACKUP CONTROLFILE TO filename or ALTER DATABASE BACKUP CONTROLFILE TO TRACE
• Execute ALTER SYSTEM ARCHIVE LOG CURRENT and back up archived log files from all threads

Online User-Managed Backup
Online backups enable you to back up all or part of the database while it is running. The procedure is essentially the same as for performing an online backup of a nonclustered database. Online backups can be performed only when the database is running in ARCHIVELOG mode. The main steps in the backup procedure include:
1. Execute the ALTER TABLESPACE … BEGIN BACKUP command. This should be done for each tablespace before you back up any of its data files. You can set all tablespaces in backup mode simultaneously, or you can set them individually if you back up one at a time.
2. Issue the operating system commands to back up the data files for the tablespace.
3. Execute the ALTER TABLESPACE … END BACKUP command.
4. Back up the control files with the ALTER DATABASE BACKUP CONTROLFILE TO file_name command. For additional safety, back up the control file to a trace file with the ALTER DATABASE BACKUP CONTROLFILE TO TRACE command.
5. Execute an ALTER SYSTEM ARCHIVE LOG CURRENT command to archive all redo threads, including any unarchived logs from closed threads, and back up these along with any other archive log files that have not been previously backed up.
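A hedged sketch of these steps for a single tablespace; the tablespace name, file paths, and backup directory are assumptions:

SQL> ALTER TABLESPACE users BEGIN BACKUP;
-- Copy the tablespace's data files with an operating system utility:
$ cp /oracle/db_files/users01.dbf /backup/online/
SQL> ALTER TABLESPACE users END BACKUP;
SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/backup/online/control.bkp';
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;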


Channel Connections to Cluster Instances

• When backing up, each allocated channel can connect to a different instance in the cluster.

• Each channel connection must resolve to only one instance.

• Instances to which the channels connect must be either all mounted or all open.

CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
CONFIGURE CHANNEL 1 DEVICE TYPE sbt CONNECT='sys/rac@RACDB1';
CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT='sys/rac@RACDB2';
CONFIGURE CHANNEL 3 DEVICE TYPE sbt CONNECT='sys/rac@RACDB3';

Channel Connections to Cluster Instances
When making backups using channels connected to different instances, each allocated channel can connect to a different instance in the cluster, and each channel connection must resolve to only one instance. The above example shows a possible automatic channel configuration.
During a backup, the instances to which the channels connect must be either all mounted or all open. For example, if the RACDB1 instance has the database mounted while the RACDB2 and RACDB3 instances have the database open, then the backup fails.


Distribution of Backups

Three possible backup configurations for RAC:
• A dedicated backup server performs and manages backups for the cluster and the cluster database.
• One node has access to a local backup appliance and performs and manages backups for the cluster database.
• Each node has access to a local backup appliance and can write to its own local backup media.

Distribution of Backups
When configuring the backup options for RAC, you have three possible configurations:
• Network backup server: A dedicated backup server performs and manages backups for the cluster and the cluster database. None of the nodes have local backup appliances.
• One local drive: One node has access to a local backup appliance and performs and manages backups for the cluster database. All nodes of the cluster should be on a cluster file system to be able to read all datafiles, archived redo logs, and SPFILEs. It is recommended that you do not use the non-cluster file system archiving scheme if you have backup media on only one local drive.
• Multiple drives: Each node has access to a local backup appliance and can write to its own local backup media.
In the cluster file system scheme, any node can access all the datafiles, archived redo logs, and SPFILEs. In the non-cluster file system scheme, you must write the backup script so that the backup is distributed to the correct drive and path for each node. For example, node 1 can back up the archived redo logs whose path names begin with /arc_dest_1, node 2 can back up the archived redo logs whose path names begin with /arc_dest_2, and node 3 can back up the archived redo logs whose path names begin with /arc_dest_3.


One Local Drive CFS Backup Scheme

RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 1;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO sbt;

RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;

One Local Drive CFS Backup Scheme
In a cluster file system backup scheme, each node in the cluster has read access to all the datafiles, archived redo logs, and SPFILEs. This includes Automatic Storage Management (ASM), cluster file systems, and Network Attached Storage (NAS).
When backing up to only one local drive in the cluster file system backup scheme, it is assumed that only one node in the cluster has a local backup appliance such as a tape drive. In this case, run the following one-time configuration commands:
CONFIGURE DEVICE TYPE sbt PARALLELISM 1;
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
Because any node performing the backup has read/write access to the archived redo logs written by the other nodes, the backup script for any node is simple:
BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
In this case, the tape drive receives all datafiles, archived redo logs, and SPFILEs.


Multiple Drives CFS Backup Scheme

CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE CHANNEL 1 DEVICE TYPE sbt CONNECT 'usr1/pwd1@n1';
CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT 'usr2/pwd2@n2';
CONFIGURE CHANNEL 3 DEVICE TYPE sbt CONNECT 'usr3/pwd3@n3';

BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;

Multiple Drives CFS Backup Scheme
When backing up to multiple drives in the cluster file system backup scheme, it is assumed that each node in the cluster has its own local tape drive. Configure one channel for each node in the cluster. This is a one-time configuration step. For example, enter the following at the RMAN prompt:
CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE CHANNEL 1 DEVICE TYPE sbt CONNECT 'user1/passwd1@node1';
CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT 'user2/passwd2@node2';
CONFIGURE CHANNEL 3 DEVICE TYPE sbt CONNECT 'user3/passwd3@node3';
Similarly, you can perform this configuration for a device type of DISK. The following backup script, which you can run from any node in the cluster, distributes the datafile, archived redo log, and SPFILE backups among the backup drives:
BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
For example, if the database contains 10 datafiles and 100 archived redo logs are on disk, then the node 1 backup drive can back up datafiles 1, 3, and 7 and logs 1-33. Node 2 can back up datafiles 2, 5, and 10 and logs 34-66. The node 3 backup drive can back up datafiles 4, 6, 8, and 9 as well as archived redo logs 67-100.


Non-CFS Backup Scheme

CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE CHANNEL 1 DEVICE TYPE sbt CONNECT 'usr1/pwd1@n1';
CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT 'usr2/pwd2@n2';
CONFIGURE CHANNEL 3 DEVICE TYPE sbt CONNECT 'usr3/pwd3@n3';

BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;

Non-Cluster File System Backup Scheme
In a non-cluster file system environment, each node can back up only its own local archived redo logs. For example, node 1 cannot access the archived redo logs on node 2 or node 3 unless you configure the network file system for remote access. If you do not configure NFS, you must distribute the backup to multiple drives. However, if you configure NFS for backups, then you can back up to only one drive.
When backing up to multiple drives in a non-cluster file system backup scheme, it is assumed that each node in the cluster has its own local tape drive. You can perform a one-time configuration similar to the one shown above to configure one channel for each node in the cluster. Similarly, you can perform this configuration for a device type of DISK.
Develop a production backup script for whole database backups that you can run from any node. With the BACKUP example, the data file backups, archived redo logs, and SPFILE backups are distributed among the different tape drives. However, channel 1 can only read the logs archived locally on /arc_dest_1. This is because the autolocation feature restricts channel 1 to back up only the archived redo logs in the /arc_dest_1 directory. Because node 2 can only read files in the /arc_dest_2 directory, channel 2 can only back up the archived redo logs in the /arc_dest_2 directory, and so on. The important point is that all logs are backed up, but they are distributed among the different drives.


RAC Backup and Recovery Using EM

RAC Backup and Recovery Using EM
The Cluster Database Maintenance page can be reached from the Cluster Database Home page by clicking the Maintenance tab link. From this page, you can perform a range of backup and recovery operations using RMAN, such as scheduling backups, performing recovery when necessary, and configuring backup and recovery settings. You also have links for executing different utilities to import or export data, load data, gather statistics on objects, reorganize objects, and convert a tablespace to be locally managed. As with other pages, the Related Links and Instances sections are available for you to manage other aspects of your cluster database.


Restoring and Recovering Redo Log Files

• Media recovery may require one or more archived log files from each thread.

• The RMAN RECOVER command automatically restores and applies the required archived logs.

• Archive logs may be restored to any node performing the restore and recover operation.

• Logs must be readable from the node performing the restore and recovery activity.

• Recovery processes request additional threads enabled during the recovery period.

• Recovery processes notify you of threads no longer needed because they were disabled.

Restoring and Recovering Redo Log Files
Media recovery of a database that is accessed by RAC may require at least one archived log file for each thread. However, if a thread's online redo log contains enough recovery information, restoring archived log files for that thread is unnecessary.
If you use RMAN for media recovery and you share archive log directories, you can change the destination of the automatic restoration of archive logs with the SET clause to restore the files to a local directory of the node where you begin recovery. If you backed up the archive logs from each node without using a central media management system, you must first restore all the log files from the remote nodes and move them to the host from which you will start recovery with RMAN. However, if you backed up each node's log files using a central media management system, you can use RMAN's AUTOLOCATE feature. This enables you to recover a database using the local tape drive on the remote node.
If recovery reaches a time when an additional thread was enabled, the recovery process requests the archived log file for that thread. If you are using a backup control file, when all archive log files are exhausted, you may need to redirect the recovery process to the online redo log files to complete recovery. If recovery reaches a time when a thread was disabled, the process informs you that the log file for that thread is no longer needed.


Parallel Recovery in Real Application Clusters

• The RECOVERY_PARALLELISM initialization parameter:
  – Specifies the number of redo application server processes participating in instance or media recovery
  – Cannot exceed the value of the PARALLEL_MAX_SERVERS parameter
  – Causes serial recovery if the value is zero or one
• Parallel recovery can also be initiated with the PARALLEL clause of the RECOVER command.

Parallel Recovery in RAC
Parallel recovery uses multiple CPUs and I/O parallelism to reduce the time that is required to perform thread or media recovery. Parallel recovery is most effective at reducing recovery time when concurrently recovering several data files on several disks. You can use parallel instance and crash recovery as well as parallel media recovery in RAC. You can use RMAN or other Oracle tools, such as SQL*Plus, to perform parallel recovery in RAC.
With RMAN's RESTORE and RECOVER statements, the Oracle server automatically parallelizes recovery:
• Restoring data files: The number of allocated channels determines the maximum degree of parallelism.
• Applying incremental backups: The maximum degree of parallelism is determined by the number of allocated channels.
To perform parallel recovery with other tools, either set the RECOVERY_PARALLELISM initialization parameter or include the PARALLEL clause in your RECOVER command.
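For illustration, the two approaches might be sketched as follows; the degree of 4 is an arbitrary example, not a recommendation:

SQL> ALTER SYSTEM SET recovery_parallelism=4 SCOPE=SPFILE;
-- or, on a single recovery command:
SQL> RECOVER DATABASE PARALLEL 4;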


Parallel Recovery in Real Application Clusters

• Parallel recovery uses one process to read the log files and multiple processes to apply redo to the data files.

• Oracle automatically starts the recovery processes—you do not need to use more than one session.

• The processes may be on one instance or spread across all instances.


Fast-Start Parallel Rollback in Real Application Clusters

SQL> DESCRIBE gv$fast_start_transactions
Name                       Null?    Type
-------------------------- -------- ---------------
INST_ID                             NUMBER
USN                                 NUMBER
SLT                                 NUMBER
SEQ                                 NUMBER
STATE                               VARCHAR2(16)
UNDOBLOCKSDONE                      NUMBER
UNDOBLOCKSTOTAL                     NUMBER
PID                                 NUMBER
CPUTIME                             NUMBER
PARENTUSN                           NUMBER
PARENTSLT                           NUMBER
PARENTSEQ                           NUMBER

Fast-Start Parallel Rollback in RAC
If you have long-running transactions and your nodes have multiple CPUs, you should consider setting the initialization parameter FAST_START_PARALLEL_ROLLBACK to a nondefault value. This enables the rollback step of the recovery process to be performed in parallel. Because each instance is responsible for its own recovery processes, you need to monitor and tune the parallel rollback using the contents of V$FAST_START_SERVERS and V$FAST_START_TRANSACTIONS on each instance, or the global views, GV$FAST_START_SERVERS and GV$FAST_START_TRANSACTIONS, from one of the active instances. Although fast-start parallel rollback does not perform rollback activity across instances, it can improve the processing of rollback segments for a RAC database.
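A sketch of setting the parameter for all instances and then monitoring rollback progress from the global view; HIGH is one of the documented values (FALSE, LOW, HIGH):

SQL> ALTER SYSTEM SET fast_start_parallel_rollback=HIGH SID='*';
SQL> SELECT inst_id, state, undoblocksdone, undoblockstotal
  2  FROM gv$fast_start_transactions;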


Managing OCR: Overview

• The OCR content is critical to CRS.
• The OCR is automatically backed up physically:
  – Every four hours: CRS keeps the last three
  – At the end of every day: CRS keeps the last two
  – At the end of every week: CRS keeps the last two
• You can also:
  – Logically back up the OCR: ocrconfig -export
  – Copy physical OCR backups: ocrconfig -showbackup, ocrconfig -backuploc
• Backups can be used for recovery purposes.

Managing the OCR: Overview
The OCR contains important cluster and database configuration information for RAC and CRS. One CRS instance in the cluster automatically creates OCR backups every four hours, and CRS retains the last three copies. The CRS instance also creates an OCR backup at the beginning of each day and of each week, and retains the last two copies of each. Although you cannot customize the backup frequencies or the number of retained copies, you can identify the name and location of the automatically retained copies by using the ocrconfig -showbackup command.
The default target location of each automatically generated OCR backup file is the <CRS Home>/cdata/<cluster name> directory. It is recommended to change this location to one that is shared by all nodes in the cluster by using the ocrconfig -backuploc command. This command takes one argument that is the full path directory name of the new location.
Because of the importance of OCR information, it is also recommended to manually create copies of the automatically generated physical backups, and to use the ocrconfig -export command to generate OCR logical backups. You need to specify a file name as the argument of the command, and it generates a binary file that you should not try to edit.
Note: If you try to export the OCR while an OCR client is running, then you get an error.
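For example, the backup-related commands described above might be combined as follows; the shared directory and export file names are hypothetical:

# ocrconfig -showbackup
# ocrconfig -backuploc /shared/ocr_backups
# ocrconfig -export /shared/ocr_backups/ocr_logical.dmp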


Recovering the OCR

1. Locate either a physical or a logical backup.
2. Restart all the nodes in single-user mode.
3. Restore the physical OCR backup or import the logical OCR backup:
   $ ocrconfig -restore \
   > /app/oracle/product/10.1.0/crs_1/cdata/jfv_clus/day.ocr
   $ ocrconfig -import /u01/logical_ocr/yesterday
4. Restart all the nodes in multiuser mode.

Recovering the OCR
If you need to recover the OCR, you can follow this procedure:
1. Find an OCR backup from a time before the inconsistency occurred.
2. If you have a backup of the OCR from before the time of the inconsistency, restart all the nodes in single-user mode or runlevel 1.
3. After all nodes are restarted in single-user mode, restore or import the OCR with the ocrconfig utility as shown in the slide.
4. Restart all the nodes in multiuser mode.

Note: It is highly recommended to store the OCR on RAID disk arrays.


Recovering the Voting Disk

• Back up the voting disk using the dd command:
  – After CRS installation
  – After node addition or deletion
• Recover the voting disk by restoring it using the dd command.
• If there is no voting disk backup, reinstall CRS.

Recovering the Voting Disk
Basically, the voting disk should be stored on redundant devices to avoid its loss. However, if you lose your voting disk, it is possible to restore it using the dd command if you have a backup.
It is good practice to back up your voting disk right after CRS installation, and whenever you add or remove nodes in your cluster.
If you lose your voting disk and you do not have any backup, then you must reinstall CRS.
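A sketch of such a backup and restore; the raw device path and backup file name are assumptions for illustration:

$ dd if=/dev/raw/raw2 of=/backup/votedisk.bak    # back up the voting disk
$ dd if=/backup/votedisk.bak of=/dev/raw/raw2    # restore it from the backup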


Summary

In this lesson, you should have learned how to do the following:
• Configure RAC recovery settings with EM
• Configure RAC backup settings with EM
• Initiate archiving
• Configure RMAN
• Perform RAC backup and recovery using EM


Practice 9: Overview

This practice covers the following topics:
• Configure the RAC database to use ARCHIVELOG mode and Flash Recovery
• Configure RMAN for the RAC environment
• Back up and recover the Oracle Cluster Registry


RAC Performance Tuning


Objectives

After completing this lesson, you should be able to do the following:
• Determine RAC-specific tuning components
• Tune instance recovery in RAC
• Determine RAC-specific wait events, global enqueues, and system statistics
• Implement the most common RAC tuning tips
• Use the Cluster Database Performance pages
• Use the Automatic Workload Repository in RAC
• Use the Automatic Database Diagnostic Monitor in RAC


CPU and Wait Time Tuning Dimensions

[Diagram: CPU time versus wait time. A scalable application gains nothing by adding CPUs or nodes; a dominant CPU-time proportion possibly needs SQL tuning; a dominant wait-time proportion needs instance/RAC tuning.]

CPU and Wait Time Tuning Dimensions
When tuning your system, it is important that you compare the CPU time with the wait time of your system. Comparing CPU time with wait time helps to determine how much of the response time is spent on useful work and how much on waiting for resources potentially held by other processes.
As a general rule, the systems where CPU time is dominant usually need less tuning than the ones where wait time is dominant. On the other hand, heavy CPU usage can be caused by badly written SQL statements.
Although the proportion of CPU time to wait time always tends to decrease as load on the system increases, steep increases in wait time are a sign of contention and must be addressed for good scalability.
Adding more CPUs to a node, or nodes to a cluster, would provide very limited benefit under contention. Conversely, a system where the proportion of CPU time does not decrease significantly as load increases can scale better, and would most likely benefit from adding CPUs or Real Application Clusters (RAC) instances if needed.
Note: Automatic Workload Repository (AWR) reports display CPU time together with wait time in the Top 5 Event section, if the CPU time portion is among the top five events.


RAC-Specific Tuning

• Tune for single instance first.
• Tune for RAC:
  – Interconnect traffic
  – Point of serialization can be exacerbated
• RAC-reactive tuning tools:
  – Specific wait events
  – System and enqueue statistics
  – Database Control performance pages
  – Statspack and AWR reports
• RAC-proactive tuning tools:
  – AWR snapshots
  – ADDM reports

Certain combinations are characteristic of well-known tuning cases.

RAC-Specific Tuning

Although there are specific tuning areas for RAC, such as interconnect traffic, you get most benefits by tuning your system like a single-instance system. At least, this must be your starting point. Obviously, if you have serialization issues in a single-instance environment, these may be exacerbated with RAC.

As shown in the slide, you have basically the same tuning tools with RAC as with a single-instance system. However, certain combinations of specific wait events and statistics are well-known RAC tuning cases.

In the remainder of this lesson, you see some of those specific combinations, as well as the RAC-specific information that you can get from the Database Control performance pages, and Statspack and AWR reports. Finally, you see the RAC-specific information that you can get from the Automatic Database Diagnostic Monitor (ADDM).


Analyzing Cache Fusion Impact in RAC

• The cost of block access and cache coherency is represented by:
  – Global Cache Service statistics
  – Global Cache Service wait events
• The response time for cache fusion transfers is determined by:
  – Overhead of the physical interconnect components
  – IPC protocol
  – GCS protocol
• The response time is not generally affected by disk I/O factors.

Analyzing Cache Fusion Impact in RAC

The effect of accessing blocks in the global cache and maintaining cache coherency is represented by:

• The Global Cache Service statistics for current and cr blocks; for example, gc current blocks received, gc cr blocks received, and so on
• The Global Cache Service wait events, such as gc current block 3-way, gc cr grant 2-way, and so on

The response time for cache fusion transfers is determined by the messaging time and processing time imposed by the physical interconnect components, the IPC protocol, and the GCS protocol. It is not affected by disk input/output (I/O) factors other than occasional log writes. The cache fusion protocol does not require I/O to data files in order to guarantee cache coherency, and RAC inherently does not cause any more I/O to disk than a nonclustered instance.
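The average transfer latencies implied by these statistics can be computed from V$SYSSTAT. This is a sketch: the statistic names are the 10g names, and the factor of 10 assumes the time statistics are recorded in centiseconds, as they are in this release.

```sql
-- Average global cache block receive times, in milliseconds (sketch)
SELECT 10 * cr_time.value  / NULLIF(cr_blocks.value, 0)   AS avg_cr_receive_ms,
       10 * cur_time.value / NULLIF(cur_blocks.value, 0)  AS avg_cur_receive_ms
FROM v$sysstat cr_time, v$sysstat cr_blocks,
     v$sysstat cur_time, v$sysstat cur_blocks
WHERE cr_time.name    = 'gc cr block receive time'
  AND cr_blocks.name  = 'gc cr blocks received'
  AND cur_time.name   = 'gc current block receive time'
  AND cur_blocks.name = 'gc current blocks received';
```

Because these counters are cumulative since instance startup, deltas between two samples (as AWR computes them) are more meaningful than the raw averages.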


Typical Latencies for RAC Operations

AWR Report Latency Name                          Lower Bound   Typical   Upper Bound
Average time to process cr block request        0.1           1         10
Avg global cache cr block receive time (ms)     0.3           4         12
Average time to process current block request   0.1           3         23
Avg global cache current block receive time (ms) 0.3          8         30

Typical Latencies for RAC Operations

In a RAC AWR report, there is a table in the RAC Statistics section containing average times (latencies) for some Global Cache Services and Global Enqueue Services operations. This table is shown in the slide and is called Global Cache and Enqueue Services: Workload Characteristics. Those latencies should be monitored over time, and significant increases in their values should be investigated. The table presents some typical values, based on empirical observations. Factors that may cause variations to those latencies include:

• Utilization of the IPC protocol; user-mode IPC protocols are faster
• Scheduling delays, when the system is under high CPU utilization
• Log flushes for current blocks served

Other RAC latencies in AWR reports are mostly derived from V$GES_STATISTICS and may be useful for debugging purposes, but do not require frequent monitoring.

Note: The time to process a consistent read (CR) block request in the cache corresponds to (build time + flush time + send time), and the time to process a current block request in the cache corresponds to (pin time + flush time + send time).


Wait Events for RAC

• Wait events help to analyze what sessions are waiting for.
• Wait times are attributed to events that reflect the outcome of a request:
  – Placeholders while waiting
  – Precise events when waited
• Global cache waits are summarized in a broader category called Cluster Wait Class.
• These wait events are used in the ADDM to enable cache fusion diagnostics.

Wait Events for RAC

Analyzing what sessions are waiting for is an important method to determine where time is spent. In RAC, the wait time is attributed to an event that reflects the exact outcome of a request. For example, when a session on an instance is looking for a block in the global cache, it does not know whether it will receive the data cached by another instance or whether it will receive a message to read from disk. The wait events for the global cache convey precise information about whether the session waited for blocks or for messages. They are mainly categorized as follows:

• Summarized in a broader category called the Cluster Wait Class
• Temporarily represented by a placeholder event that is active while waiting for a block
• Attributed to precise events when the outcome of the request is known

The wait events for RAC convey information valuable for performance analysis. They are used in the ADDM to enable precise diagnostics of the impact of cache fusion.


Wait Event Views

V$SYSTEM_EVENT: Total waits for an event
V$SESSION_EVENT: Waits for an event by a session
V$SESSION_WAIT_CLASS: Waits for a wait event class by a session
V$SESSION_WAIT: Events for which active sessions are waiting
V$SESSION_WAIT_HISTORY: Last 10 wait events for each active session
V$ACTIVE_SESSION_HISTORY: Activity of recent active sessions
V$SQLAREA: Identify SQL statements impacted by interconnect latencies

Wait Event Views

When it takes some time to acquire resources because of the total path length and latency for requests, processes sleep to avoid spinning for indeterminate periods of time. When the process decides to wait, it wakes up either after a specified timer value expires (timeout) or when the event it is waiting for occurs and the process is posted. The wait events are recorded and aggregated in the views shown in the slide. The first three are aggregations of wait times, timeouts, and the number of times waited for a particular event, whereas the rest enable the monitoring of waiting sessions in real time, including a history of recent events waited for.

The individual events distinguish themselves by their names and the parameters that they assume. For most of the global cache wait events, the parameters include file number, block number, the block class, and access mode dispositions, such as mode held and requested. The wait times for events presented and aggregated in these views are very useful when debugging response time performance issues. Note that the time waited is cumulative, and that the event with the highest score is not necessarily a problem. However, if the available CPU power cannot be maxed out, or response times for an application are too high, the top wait events provide valuable performance diagnostics.

Note: Use the CLUSTER_WAIT_TIME column in V$SQLAREA to identify SQL statements impacted by interconnect latencies, or run an ADDM report on the corresponding AWR snapshot.
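The note on CLUSTER_WAIT_TIME can be turned into a query directly. This is a sketch assuming a DBA-privileged session on Oracle Database 10g; column names are from V$SQLAREA.

```sql
-- Top 10 statements by time spent in cluster waits (sketch)
SELECT *
FROM (SELECT sql_id, cluster_wait_time, elapsed_time, executions
        FROM v$sqlarea
       ORDER BY cluster_wait_time DESC)
WHERE ROWNUM <= 10;
```

Comparing CLUSTER_WAIT_TIME with ELAPSED_TIME shows what fraction of a statement's response time is attributable to the interconnect.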


Global Cache Wait Events: Overview

gc [current/cr] [multiblock] request: Just requested (placeholder)
gc [current/cr] [2/3]-way: Received after two or three network hops, immediately after request
gc [current/cr] block busy: Received but not sent immediately
gc [current/cr] grant 2-way: Not received and not mastered locally; grant received immediately
gc current grant busy: Not received and not mastered locally; grant received with delay
gc [current/cr] [block/grant] congested: Block or grant received with delay because of CPU or memory lack
gc [current/cr] [failure/retry]: Not received because of failure
gc buffer busy: Block arrival time less than buffer pin time

Global Cache Wait Events: Overview

The main Global Cache wait events for Oracle Database 10g are described briefly in the slide:

• gc current/cr request: These wait events are relevant only while a gc request for a cr or current buffer is in progress. They act as placeholders until the request completes.
• gc [current/cr] [2/3]-way: A current or cr block was requested and received after two or three network hops. The request was processed immediately: it was not busy or congested.
• gc [current/cr] block busy: A current or cr block was requested and received, but was not sent immediately by LMS because some special condition that delayed the sending was found.
• gc [current/cr] grant 2-way: A current or cr block was requested and a grant message received. The grant was given without any significant delays. If the block is not in its local cache, a current or cr grant is followed by a disk read on the requesting instance.
• gc current grant busy: A current block was requested and a grant message received. The busy hint implies that the request was blocked because others were ahead of it or it could not be handled immediately.


Global Cache Wait Events: Overview (continued)

• gc [current/cr] [block/grant] congested: A current or cr block was requested and a block or grant message received. The congested hint implies that the request spent more than 1 ms in internal queues.
• gc [current/cr] [failure/retry]: A block was requested and a failure status was received or some other exceptional event has occurred.
• gc buffer busy: If the time between buffer accesses becomes less than the time the buffer is pinned in memory, the buffer containing a block is said to become busy, and as a result interested users may have to wait for it to be unpinned.

Note: For more information, refer to the Oracle Database Reference guide.
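To see which sessions are currently in one of these waits, V$SESSION_WAIT can be filtered on the gc event prefix. This is a sketch; for most gc block waits, P1 and P2 carry the file and block numbers mentioned earlier.

```sql
-- Sessions currently waiting on global cache events (sketch)
SELECT sid, event, p1 AS file#, p2 AS block#, seconds_in_wait
FROM v$session_wait
WHERE event LIKE 'gc %'
ORDER BY seconds_in_wait DESC;
```

Joining the file and block numbers to DBA_EXTENTS identifies the segment involved, which is often the fastest route from a wait event to a hot object.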


2-way Block Request: Example

(Figure: two-way block request. 1. SGA1 waits on gc current block request and sends a direct request to the LMS process of SGA2. 2. LGWR on SGA2 performs a log sync if needed. 3. LMS transfers the block to SGA1, and the wait completes as gc current block 2-way.)

2-way Block Request: Example

This slide shows you what typically happens when the master instance requests a block that is not cached locally. Here it is supposed that the master instance is called SGA1, and SGA2 contains the requested block. The scenario is as follows:

1. SGA1 sends a direct request to SGA2, so SGA1 waits on the gc current block request event.
2. When SGA2 receives the request, its local LGWR process may need to flush some recovery information to its local redo log files. For example, if the cached block is frequently changed, and the changes have not been logged yet, LMS would have to ask LGWR to flush the log before it can ship the block. This may add a delay to the serving of the block and may show up in the requesting node as a busy wait.
3. Then, SGA2 sends the requested block to SGA1. When the block arrives in SGA1, the wait event is complete, and is reflected as gc current block 2-way.

Note: Using the notation R = time at requestor, W = wire time and transfer delay, and S = time at server, the total time for a round-trip would be:
R(send) + W(small msg) + S(process msg, process block, send) + W(block) + R(receive block)


3-way Block Request: Example

(Figure: three-way block request. 1. SGA1 waits on gc current block request and sends a direct message to the LMS of the resource master. 2. The master forwards the request to the LMS of SGA2, where the block is cached. 3. LGWR on SGA2 performs a log sync if needed. 4. LMS transfers the block to SGA1, and the wait completes as gc current block 3-way.)

3-way Block Request: Example

This is a modified scenario for a cluster with more than two nodes. It is very similar to the previous one. However, the master for this block is on a node that is different from that of the requestor, and where the block is cached. Thus, the request must be forwarded. There is an additional delay for one message and the processing at the master node:
R(send) + W(small msg) + S(process msg, send) + W(small msg) + S(process msg, process block, send) + W(block) + R(receive block)

While a remote read is pending, any process on the requesting instance that is trying to write or read the data cached in the buffer has to wait for a gc buffer busy event. The buffer remains globally busy until the block arrives.


2-way Grant: Example

(Figure: two-way grant. 1. SGA1 waits on gc current block request and sends a direct message to the LMS of the resource master. 2. Because no instance has the block cached, the master returns a grant message. 3. The wait completes as gc current grant 2-way.)

2-way Grant: Example

In this scenario, a grant message is sent by the master because the requested block is not cached in any instance. If the local instance is the resource master, the grant happens immediately. If not, the grant is always 2-way, regardless of the number of instances in the cluster.

The grant messages are small. For every block read from the disk, a grant has to be received before the I/O is initiated, which adds the latency of the grant round-trip to the disk latency:
R(send) + W(small msg) + S(process msg, send) + W(small msg) + R(receive block)

The round-trip looks similar to a 2-way block round-trip, with the difference that the wire time is determined by a small message, and the processing does not involve the buffer cache.


Considered “Lost” Blocks: Example

(Figure: considered “lost” blocks. 1. SGA1 waits on gc current block request and sends a block request to the LMS of SGA2. 2. LGWR on SGA2 performs a log sync. 3. LMS sends the block, but the side channel message arrives at SGA1 before the block itself. 4. The wait completes as gc current failure.)

Considered “Lost” Blocks: Example

In this scenario, the side channel message arrives before the block itself. Under normal circumstances, this should never occur. Most of the time, this is an indicator of switch problems or the absence of a private interconnect. This is often related to operating system (OS) or network configuration issues.

Note: You can try to avoid these occurrences by reducing the value of the DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter to less than 16.
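Whether blocks are being lost in transit can be checked in V$SYSSTAT; the parameter change below simply illustrates the note's advice, and the value 8 is an assumption, not a universal recommendation.

```sql
-- How many blocks were considered lost or corrupt in transit? (sketch)
SELECT name, value
FROM v$sysstat
WHERE name IN ('gc blocks lost', 'gc blocks corrupt');

-- If losses correlate with large multiblock reads (requires an spfile):
ALTER SYSTEM SET db_file_multiblock_read_count = 8 SCOPE = BOTH;
```

A steadily increasing gc blocks lost count should first prompt a check of the interconnect and OS network configuration, as the notes suggest, before any parameter change.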


Global Enqueue Waits: Overview

• Enqueues are synchronous.
• Enqueues are global resources in RAC.
• The most frequent waits are for: TX, TM, HW, SQ, US, TA.
• The waits may constitute serious serialization points.

Global Enqueue Waits: Overview

An enqueue wait is not RAC specific, but involves a global lock operation when RAC is enabled. Most of the global requests for enqueues are synchronous, and foreground processes wait for them. Therefore, contention on enqueues in RAC is more visible than in single-instance environments. Most waits for enqueues occur for enqueues of the following types:

• TX: Transaction enqueue; used for transaction demarcation and tracking
• TM: Table or partition enqueue; used to protect table definitions during DML operations
• HW: High-water mark enqueue; acquired to synchronize a new block operation
• SQ: Sequence enqueue; used to serialize incrementing of an Oracle sequence number
• US: Undo segment enqueue; mainly used by the Automatic Undo Management (AUM) feature
• TA: Enqueue used mainly for transaction recovery as part of instance recovery

In all of the cases above, the waits are synchronous and may constitute serious serialization points that can be exacerbated in a RAC environment.

Note: In Oracle Database 10g, the enqueue wait events specify the resource name and a reason for the wait, for example: TX Enqueue index block split. This makes diagnostics of enqueue waits easier.


Session and System Statistics

• Use V$SYSSTAT to characterize the workload.
• Use V$SESSTAT to monitor important sessions.
• V$SEGMENT_STATISTICS includes RAC statistics.
• RAC-relevant statistic groups are:
  – Global Cache Service statistics
  – Global Enqueue Service statistics
  – Statistics for messages sent
• V$ENQUEUE_STATISTICS determines the enqueue with the highest impact.
• V$INSTANCE_CACHE_TRANSFER breaks down GCS statistics into block classes.

Session and System Statistics

Using system statistics based on V$SYSSTAT enables characterization of the database activity based on averages. It is the basis for many metrics and ratios used in various tools and methods, such as AWR, Statspack, and Database Control.

In order to drill down to individual sessions or groups of sessions, V$SESSTAT is useful when the important session identifiers to monitor are known. Its usefulness is enhanced if an application fills in the MODULE and ACTION columns in V$SESSION.

V$SEGMENT_STATISTICS is useful for RAC because it also tracks the number of CR and current blocks received by the object.

The RAC-relevant statistics can be grouped into:
• Global Cache Service statistics: gc cr blocks received, gc cr block receive time, and so on
• Global Enqueue Service statistics: global enqueue gets, and so on
• Statistics for messages sent: gcs messages sent and ges messages sent

V$ENQUEUE_STATISTICS can be queried to determine which enqueue has the highest impact on database service times and eventually response times. V$INSTANCE_CACHE_TRANSFER indicates how many current and CR blocks per block class are received from each instance, including how many transfers incurred a delay.

Note: For more information about statistics, refer to the Oracle Database Reference guide.
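The V$ENQUEUE_STATISTICS drill-down described above can be sketched as follows; the column names are from the 10g view, and the top-5 cutoff is arbitrary.

```sql
-- Top 5 enqueues by cumulative wait time (sketch)
SELECT *
FROM (SELECT eq_type, req_reason, total_req#, total_wait#, cum_wait_time
        FROM v$enqueue_statistics
       ORDER BY cum_wait_time DESC)
WHERE ROWNUM <= 5;
```

The REQ_REASON column carries the reason text mentioned in the previous slide's note (for example, index block split), which makes it easier to map a hot enqueue back to a cause.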


Most Common RAC Tuning Tips

• Application tuning is the most beneficial
• Resizing and tuning the buffer cache
• Reducing long full-table scans in OLTP systems
• Using automatic segment space management
• Increasing sequence caches
• Using partitioning to reduce interinstance traffic
• Avoiding unnecessary parsing
• Minimizing locking usage
• Removing unselective indexes

Also applicable to single-instance tuning

Most Common RAC Tuning Tips

In any database system, RAC or single instance, the most significant performance gains are usually obtained from traditional application tuning techniques. The benefits of those techniques are even more remarkable in a RAC database. In addition to traditional application tuning, some of the techniques that are particularly important for RAC include the following:

• Try to avoid long full-table scans to minimize GCS requests. The overhead caused by global CR requests in this scenario arises because, when queries result in local cache misses, an attempt is first made to find the data in another cache, based on the assumption that the chance that another instance has cached the block is high.
• Automatic segment space management can provide instance affinity to table blocks.
• Increasing sequence caches improves instance affinity to index keys deriving their values from sequences. That technique may result in significant performance gains for multi-instance, insert-intensive applications.
• Range or list partitioning may be very effective in conjunction with data-dependent routing, if the workload can be directed to modify a particular range of values from a particular instance.
• Hash partitioning may help to reduce buffer busy contention by making buffer access distribution patterns sparser, enabling more buffers to be available for concurrent access.


Most Common RAC Tuning Tips (continued)

• In RAC, library cache and row cache operations are globally coordinated, so excessive parsing means additional interconnect traffic. Library cache locks are heavily used, in particular by applications using PL/SQL or Advanced Queuing. Library cache locks are acquired in exclusive mode whenever a package or procedure has to be recompiled.
• Because transaction locks are globally coordinated, they also deserve special attention in RAC. For example, using tables instead of Oracle sequences to generate unique numbers is not recommended because it may cause severe contention even for a single-instance system.
• Indexes that are not selective do not improve query performance, but can degrade DML performance. In RAC, unselective index blocks may be subject to interinstance contention, increasing the frequency of cache transfers for indexes belonging to INSERT-intensive tables.


Index Block Contention Considerations

(Figure: an index block split in progress on RAC01 while RAC02 concurrently accesses the same index block.)

Wait events: gc current split; gc current block busy; gc buffer busy; enq: TX - index contention

System statistics: gcs refuse xid; gcs ast xid; Service ITL waits; Exchange deadlocks; Branch node splits; Leaf node splits

Index Block Contention Considerations

In application systems where the loading or batch processing of data is a dominant business function, there may be performance issues affecting response times because of the high volume of data inserted into indexes. Depending on the access frequency and the number of processes concurrently inserting data, indexes can become hot spots, and contention can be exacerbated by:

• Ordered, monotonically increasing key values in the index (right-growing trees)
• Frequent leaf block splits
• Low tree depth: all leaf block accesses go through the root block

A leaf or branch block split can become an important serialization point if the particular leaf block or branch of the tree is concurrently accessed. The tables in the slide sum up the most common symptoms associated with the splitting of index blocks, listing wait events and statistics that are commonly elevated when index block splits are prevalent.

As a general recommendation, to alleviate the performance impact of globally hot index blocks and leaf block splits, a more uniform, less skewed distribution of the concurrency in the index tree should be the primary objective. This can be achieved by:

• Global index hash partitioning
• Increasing the sequence cache, if the key value is derived from a sequence
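The first remedy can be sketched as follows. Global hash-partitioned indexes are available starting with Oracle Database 10g; the table, column, and index names here are hypothetical, and the partition count would be sized to the workload.

```sql
-- Hypothetical right-growing key index, hash-partitioned 8 ways so that
-- concurrent inserts land in different index partitions (and leaf blocks)
CREATE INDEX orders_pk_ix ON orders (order_id)
  GLOBAL PARTITION BY HASH (order_id) PARTITIONS 8;
```

The trade-off is that range scans on the key must now probe every partition, so this technique fits keys that are accessed by equality, such as surrogate primary keys.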


Oracle Sequences and Index Contention

(Figure: with a CACHE 50000 NOORDER sequence, RAC01 inserts key values 1…50000 while RAC02 inserts 50001…100000; each index leaf block can contain 500 rows.)

Oracle Sequences and Index Contention

Indexes with key values generated by sequences tend to be subject to leaf block contention when the insert rate is high. That is because the index leaf block holding the highest key value is changed for every row inserted, as the values are monotonically ascending. In RAC, this may lead to a high rate of current and CR blocks transferred between nodes.

One of the simplest techniques that can be used to limit this overhead is to increase the sequence cache, if you are using Oracle sequences. As the difference between sequence values generated by different instances increases, successive index block splits tend to create instance affinity to index leaf blocks. For example, suppose that an index key value is generated by a CACHE NOORDER sequence and each index leaf block can hold 500 rows. If the sequence cache is set to 50000, while instance 1 inserts values 1, 2, 3, and so on, instance 2 concurrently inserts 50001, 50002, and so on. After some block splits, each instance writes to a different part of the index tree.

So, what is the ideal value for a sequence cache to avoid interinstance leaf index block contention, while minimizing possible gaps? One of the main variables to consider is the insert rate: the higher it is, the higher the sequence cache must be. However, creating a simulation to evaluate the gains for a specific configuration is recommended.

Note: By default, the cache value is 20. Typically, 20 is too small for the example above.
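The slide's configuration can be expressed as DDL; the sequence name is illustrative, and the cache size of 50000 is the example value from the scenario above, not a general recommendation.

```sql
-- Sketch of the CACHE 50000 NOORDER configuration from the example
CREATE SEQUENCE order_seq
  START WITH 1
  CACHE 50000
  NOORDER;

-- For an existing sequence, the cache can be raised in place:
ALTER SEQUENCE order_seq CACHE 50000;
```

NOORDER is what allows each instance to work from its own cached range; requesting ORDER would reintroduce cross-instance coordination on every NEXTVAL.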


Undo Block Considerations

(Figure: changes on SGA1 and reads on SGA2 against the same index; building consistent-read versions requires undo from both instances, causing additional interconnect traffic.)

Undo Block Considerations

Excessive undo block shipment and contention for undo buffers usually happens when index blocks containing active transactions from multiple instances are read frequently. When a SELECT statement needs to read a block with active transactions, it has to undo the changes to create a CR version. If the active transactions in the block belong to more than one instance, there is a need to combine local and remote undo information for the consistent read. Depending on the amount of index blocks changed by multiple instances and the duration of the transactions, undo block shipment may become a bottleneck.

Usually this happens in applications that read recently inserted data very frequently, but commit infrequently. Techniques that can be used to reduce such situations include the following:

• Shorter transactions reduce the likelihood that any given index block in the cache contains uncommitted data, thereby reducing the need to access undo information for consistent read.
• As explained earlier, increasing sequence cache sizes can reduce interinstance concurrent access to index leaf blocks. CR versions of index blocks modified by only one instance can be fabricated without the need of remote undo information.

Note: In RAC, the problem is exacerbated by the fact that a subset of the undo information has to be obtained from remote instances.


High-Water Mark Considerations

(Figure: heavy inserts from RAC01 and RAC02 push the segment high-water mark [HWM] and force the allocation of a new extent. Wait events: gc current grant; enq: HW - contention.)

High-Water Mark Considerations

A certain combination of wait events and statistics presents itself in applications where the insertion of data is a dominant business function and new blocks have to be allocated frequently to a segment. If data is inserted at a high rate, new blocks may have to be made available after unfruitful searches for free space. This has to happen while holding the high-water mark (HWM) enqueue.

Therefore, the most common symptoms for this scenario include:

• A high percentage of wait time for enq: HW - contention
• A high percentage of wait time for gc current grant events

The former is a consequence of the serialization on the HWM enqueue, and the latter arises because current access to the new data blocks is required for the new block operation. In a RAC environment, the length of this space management operation is proportional to the time it takes to acquire the HW enqueue and the time it takes to acquire global locks for all the new blocks. This time is small under normal circumstances because there is never any access conflict for the new blocks. Therefore, this scenario may be observed in applications with business functions requiring a lot of data loading, and the main recommendation to alleviate the symptoms is to define uniform and large extent sizes for the locally managed and automatic space managed segments that are subject to high-volume inserts.
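The recommendation above can be sketched as tablespace DDL; the tablespace name, datafile path, sizes, and the 100 MB uniform extent size are all illustrative and would be chosen for the actual load volume.

```sql
-- Sketch: locally managed tablespace with uniform, large extents and
-- automatic segment space management for high-volume insert segments
CREATE TABLESPACE load_data
  DATAFILE '/u01/oradata/rac/load_data01.dbf' SIZE 10G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M
  SEGMENT SPACE MANAGEMENT AUTO;
```

With large uniform extents, the HWM has to be pushed far less often, so the HW enqueue and the associated gc current grant waits are acquired less frequently.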


Cluster Database Performance Page

Cluster Performance Page

To access the Cluster Performance page, click the Performance tab on the Cluster Home page. This page displays an overview of CPU, memory, and disk I/O utilization for each node in the cluster. You can choose an automatic refresh interval of 15 seconds or a manual refresh interval. At the bottom of the page, each node name is presented as a link, which you can click to access more detailed performance information about that node.

On this page, you can see performance charts for the entire cluster database. The charts displayed on this page are Run Queue Length, Paging Rate, Sessions, and Database Throughput. The Run Queue Length and Paging Rate graphics are host-related charts, and they break down the data by each node in the cluster. The Sessions and Database Throughput charts consolidate data across all instances that are currently open.

The slide illustrates a possible drill down from the Sessions graphic. If the cluster impact is high, you can click the Cluster wait class link to drill down to the Active Sessions by Instance graphic, where you can see the session count for each node over a period of time.

Note: The Cluster Performance page also contains a link to the Cluster Cache Coherency page.


Cluster Database Performance Page

Cluster Performance Page (continued)

On the Active Sessions by Instance graphic page, you can click one of the node links to drill down to the Active Sessions Waiting graphic presented in the slide. The Active Sessions Waiting: Cluster page shows you the top SQL statements and the top sessions consuming significant resources from the Cluster wait class. You can further drill down to a specific SQL statement or session that has the highest impact on that particular instance.


Cluster Cache Coherency Page

Cluster Cache Coherency

Cluster cache coherency metrics help you to identify processing trends and optimize performance for your RAC environment. The Cluster Cache Coherency page enables you to view cache coherency metrics for the entire cluster database, grouped into the following categories:

• Block Access Statistics
• Global Cache Current Block Request
• Global Cache CR Block Request
• Top 5 Library Cache Lock
• Top 5 Row Cache Lock

For each of these categories, click By Instance to view a table of the metrics in that group for each instance of the cluster database. This page also displays the Cache Coherency vs. Session Logical Reads chart, which graphs the percentage of Global Cache CR Blocks Received and Current Blocks Received against logical reads for the session.

If your request latency value is too high, the DB_FILE_MULTIBLOCK_READ_COUNT value may be too high. This is because a requesting process can issue more than one request for a block depending on the setting of this parameter, causing the requesting process to wait longer. High request latency may also be caused by a large number of incoming requests to the LMS process for Global Cache Services.


Database Locks Page

Database Locks Page

Locks are mechanisms that prevent destructive interaction between transactions accessing the same resource, either user objects such as tables and rows, or system objects not visible to users, such as shared data structures in memory and data dictionary rows. Locks are also used to control concurrent access to data. Locks guarantee data integrity while enabling maximum concurrent access to data by unlimited users.

Use the Database Locks page to view a table showing User Locks, Blocking Locks, or the complete list of all database locks. You can access the Database Locks page from the Monitoring section of the Database Performance page by clicking Database Locks.

The Database Locks page displays the table of locks listed by User Locks, Blocking Locks, or All Database Locks. Click the expand or collapse icon beside the lock type to view the type of locks that you want to display, or use the Expand All or Collapse All links to expand or collapse the entire list of locks. You can filter the type of locks that appear on the page by choosing the lock type from the drop-down list at the top of the Database Locks page. To view details about a specific session, click the Select field for that row, and click Session Details. To terminate a session, click the Select field for that session, and then click Kill Session. To view objects related to a session, click View Objects. For cluster databases, the Database Locks table displays the Instance Name column and shows databasewide locks.


Automatic Workload Repository: Overview

[Slide graphic: efficient in-memory statistics collection in the SGA (exposed through V$ views) is written by the MMON process to AWR snapshots (exposed through DBA_* views), serving internal clients such as ADDM and other self-tuning components, and external clients such as EM and SQL*Plus.]

Automatic Workload Repository: Overview
AWR is the infrastructure that provides services to Oracle Database 10g components to collect, maintain, and utilize statistics for problem detection and self-tuning purposes. The AWR infrastructure consists of two major parts:

• An in-memory statistics collection facility that is used by Oracle Database 10g components to collect statistics. These statistics are stored in memory for performance reasons. Statistics stored in memory are accessible through dynamic performance (V$) views.

• AWR snapshots represent the persistent portion of the facility. They are accessible through data dictionary views and Database Control.

Statistics are stored in persistent storage for several reasons:
• The statistics need to survive instance crashes.
• Some analyses need historical data for baseline comparisons.
• Memory overflow: When old statistics are replaced by new ones because of memory shortage, the replaced data can be stored for later use.

The memory version of the statistics is transferred to disk on a regular basis by a new background process called Manageability Monitor (MMON). With AWR, the Oracle database provides a way to capture historical statistics data automatically, without the intervention of DBAs.
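This automatic capture can also be driven manually through the DBMS_WORKLOAD_REPOSITORY package; the following sketch (the interval and retention values are illustrative, expressed in minutes) takes an immediate snapshot and adjusts the capture settings:

```sql
-- Take an immediate AWR snapshot in addition to the scheduled ones.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
END;
/

-- Capture every 30 minutes and keep 7 days (10080 minutes) of history.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,
    retention => 10080);
END;
/
```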


AWR Tables

[Slide graphic: the DBA_HIST_* snapshot tables grouped into categories — Files, General System, Concurrency, Tuning, SQL, Segments, Undo, Time, Recovery, and RAC statistics.]

AWR Tables
AWR contains two types of tables:

• Metadata tables: Are used to control, process, and describe AWR tables. For example, the Oracle database uses metadata tables to determine when to perform snapshots, and what to capture to the disk. Also, metadata tables contain the mapping between the snapshot_id and the corresponding wall clock time.

• Historical statistic tables: Store historical statistical information about the Oracle database. Each snapshot is a capture of the in-memory database statistics data at a certain point in time.

All names of AWR tables are prefixed with WRx$, where x specifies the kind of table:
• WRM$ tables store metadata information for AWR.
• WRH$ tables store historical data or snapshots.

You can use dictionary views to query AWR data. Any view related to historical information in AWR is prefixed with DBA_HIST_. AWR uses partitioning for efficient querying and purging of data. The snapshot tables are organized into the following categories: File Statistics, General System Statistics, Concurrency Statistics, Instance Tuning Statistics, SQL Statistics, Segment Statistics, Undo Statistics, Time-Model Statistics, Recovery Statistics, and RAC Statistics.
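To see which of these views exist on a given release, you can simply ask the data dictionary; this is a generic sketch rather than anything specific to this course:

```sql
-- List all historical AWR views together with their descriptions.
SELECT table_name, comments
FROM   dictionary
WHERE  table_name LIKE 'DBA_HIST%'
ORDER  BY table_name;
```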


AWR Snapshots in RAC

[Slide graphic: each instance's SGA (Inst1 through Instn) holds in-memory statistics; an MMON process on each instance, with one acting as the MMON coordinator, writes hourly snapshots (6:00 a.m., 7:00 a.m., 8:00 a.m., 9:00 a.m.) to the shared AWR tables in the SYSAUX tablespace.]

AWR Snapshots in RAC
In RAC environments, each AWR snapshot captures data from all active instances within the cluster. The data for each snapshot set that is captured for all active instances is from roughly the same point in time. In addition, the data for each instance is stored separately and is identified with an instance identifier. For example, the buffer_busy_waits statistic shows the number of buffer waits on each instance. AWR does not store data that is aggregated from across the entire cluster. In other words, the data is stored for each individual instance.

The statistics snapshots generated by AWR can be evaluated by producing reports displaying summary data, such as load and cluster profiles, based on regular statistics and wait events gathered on each instance.

AWR functions in a similar way to Statspack. The difference is that AWR automatically collects and maintains performance statistics for problem detection and self-tuning purposes. Unlike in Statspack, in AWR there is only one snapshot_id per snapshot across instances.
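You can confirm this one-snapshot-ID-per-capture behavior with a quick query against DBA_HIST_SNAPSHOT — a hedged sketch, not part of the course material:

```sql
-- In RAC, each SNAP_ID appears once per active instance,
-- with begin/end interval times that are roughly aligned.
SELECT snap_id, instance_number,
       begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id, instance_number;
```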


Generating and Viewing AWR Reports

Generating and Viewing AWR Reports
Although an AWR snapshot contains data for all instances of your RAC environment, an AWR report is relevant to only one particular instance. So, to generate an AWR report when using Database Control, you must go to the Administration tab of the corresponding instance's home page. From there, click the Automatic Workload Repository link in the Workload section. On the Automatic Workload Repository page, click the link corresponding to the Snapshots field. This takes you to the Snapshots page shown in the slide.

Select a beginning snapshot identifier by using the Select column of the Select Beginning Snapshot table, and select View Report in the Actions list. Then, click the Go button next to the Actions field. On the View Report page, select the ending snapshot identifier by using the Select column of the Select Ending Snapshot table. When done, click OK. This takes you to the Snapshot Details page, which contains the corresponding AWR report.

Note: The report generation interface is also provided in the form of an awrrpt.sql SQL*Plus script. The script generates either an HTML report or a text file. The script should be run by a user with the SELECT ANY DICTIONARY privilege. This script can be found in the corresponding $ORACLE_HOME/rdbms/admin directory.
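Invoking the script from SQL*Plus might look like the following sketch (the ? shorthand expands to $ORACLE_HOME; the prompt details assume the 10g version of the script):

```sql
-- Connect as a user with the SELECT ANY DICTIONARY privilege,
-- then run the report script; it prompts for the report type
-- (html or text) and the begin and end snapshot identifiers.
@?/rdbms/admin/awrrpt.sql
```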


AWR Reports and RAC: Overview

AWR Reports and RAC
The RAC-related statistics in an AWR report are organized in different sections. A RAC statistics section appears after the Top 5 Timed Events. This section contains:

• The number of instances open at the time of the begin snapshot and the end snapshot to indicate whether instances joined or left between the two snapshots

• The Global Cache Load Profile, which essentially lists the number of blocks and messages that are sent and received, as well as the number of fusion writes

• The Global Cache Efficiency Percentages, which indicate the percentage of buffer gets broken up into buffers received from the disk, local cache, and remote caches. Ideally, the percentage of disk buffer access should be close to zero.

• GCS and GES Workload Characteristics, which gives you an overview of the more important numbers first. Because the global enqueue convert statistics have been consolidated with the global enqueue get statistics, the report only prints the average global enqueue get time. The round-trip times for CR and current block transfers follow, as well as the individual sender-side statistics for CR and current blocks. The average log flush times are computed by dividing the total log flush time by the number of actual log flushes. Also, the report prints the percentage of blocks served that actually incurred a log flush.


AWR Reports and RAC (continued)
• GCS and GES Messaging Statistics: The most important statistic here is the average message sent queue time on ksxp, which is a good indicator of how well the IPC works. Average numbers should be less than 1 ms.

Additional RAC statistics are then organized in the following sections:
• The Global Enqueue Statistics section contains data extracted from V$GES_STATISTICS.
• The Global CR Served Stats section contains data from V$CR_BLOCK_SERVER.
• The Global CURRENT Served Stats section contains data from V$CURRENT_BLOCK_SERVER.
• The Global Cache Transfer Stats section contains data from V$INSTANCE_CACHE_TRANSFER.

The Segment Statistics section also includes the GC Buffer Busy Waits, CR Blocks Received, and CUR Blocks Received information for relevant segments.
Note: For more information about wait events and statistics, refer to the Oracle Database Reference guide.
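The same transfer data can be queried interactively. The following is a hedged sketch (column names assumed from the 10g V$INSTANCE_CACHE_TRANSFER view) showing which instances ship CR and current blocks to the local instance, by block class:

```sql
-- Blocks received by this instance from each other instance,
-- broken down by block class (data block, undo header, and so on).
SELECT instance, class, cr_block, current_block
FROM   v$instance_cache_transfer
WHERE  cr_block > 0 OR current_block > 0
ORDER  BY instance, class;
```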


Statspack and AWR

[Slide graphic: old application code written against the Statspack schema continues to use Statspack; there is no migration path from the Statspack schema to the AWR schema.]

Statspack and AWR
In the past, historical data could be obtained manually by using Statspack. You can continue to use Statspack in Oracle Database 10g, but if you want to use the workload repository instead, you have to change your application code. Statspack users should switch to the workload repository in Oracle Database 10g. There is no supported path to migrate Statspack data into the workload repository. Also, there is no view created on top of the workload repository to simulate the Statspack schema.


Automatic Database Diagnostic Monitor

[Slide graphic: every 60 minutes, MMON captures in-memory statistics from the SGA into AWR snapshots; each capture triggers an ADDM run, and the ADDM results are stored in AWR and viewable through EM.]

Automatic Database Diagnostic Monitor
By default, the database automatically captures statistical information every 60 minutes from the SGA and stores it inside AWR in the form of snapshots. These snapshots are stored on disk and are similar to Statspack snapshots. However, they contain more precise information than Statspack snapshots.

Additionally, ADDM is scheduled to run automatically by the new MMON process on every database instance to detect problems proactively. Each time a snapshot is taken, ADDM is triggered to perform an analysis of the period corresponding to the last two snapshots. This capability proactively monitors the instance and detects bottlenecks before they become a significant problem.

The results of each ADDM analysis are stored inside AWR (WRI$ tables) and are also accessible through the Enterprise Manager console.
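Outside Enterprise Manager, the stored analysis results can also be reached from SQL*Plus; the sketch below assumes the 10g advisor dictionary views and report script:

```sql
-- ADDM findings, with their estimated impact on database time.
SELECT task_id, type, impact, message
FROM   dba_advisor_findings
ORDER  BY task_id;

-- Alternatively, generate a full text report for a snapshot range.
@?/rdbms/admin/addmrpt.sql
```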



ADDM Problem Classification

[Slide graphic: a classification tree with symptoms at the root (system wait) branching into RAC waits, I/O waits, and concurrency, down to root causes such as buffer busy waits, parse latches, and buffer cache latches; pruned branches are nonproblem areas.]

ADDM Problem Classification
Internally, ADDM uses a tree structure to represent all possible tuning issues. The tree is based on the new wait and time statistics model used by the Oracle database. The root of this tree represents the symptoms. Going down to the leaves, ADDM identifies root causes. ADDM walks down the tree by using time-based thresholds for each node. If the time-based threshold is not exceeded for a particular node, ADDM prunes the corresponding subtree. This enables ADDM to identify nonproblem areas. This tree structure enables ADDM to efficiently prune the search space and quickly identify the problems.


RAC-Specific ADDM Findings

[Slide graphic: ADDM at the center, surrounded by its RAC-specific finding types — hot blocks, hot objects, interconnect latencies, LMS congestions, top SQL, and instance contentions.]

RAC-Specific ADDM Findings
RAC-specific ADDM findings include:
• Hot block (with block details) with high read/write contention within an instance and across the cluster
• Hot object with high read/write contention within an instance and across the cluster
• Cluster interconnect latency issues in a RAC environment
• LMS congestion issues: LMS processes are not able to keep up with lock requests.
• Top SQL that encounters interinstance messaging
• Contention on other instances: Basically, multiple instances are updating the same set of blocks concurrently.


ADDM Analysis: Results

[Slide graphic: the ADDM page in Database Control, with three numbered callouts referenced in the text below.]

ADDM Analysis: Results
On the Automatic Database Diagnostic Monitor (ADDM) page, you can see the detailed findings for the latest ADDM run. The Database Time represents the sum of the nonidle time spent by sessions in the database for the analysis period. A specific Impact percentage is given for each finding. The impact represents the time consumed by the corresponding issue as compared with the database time for the analysis period. Here is a corresponding description of the numbers shown in the slide:

1. The graphic shows that the number of average active users increased dramatically at this point. Also, the major problem is a wait problem.

2. The icon shows that the ADDM output displayed at the bottom of the page corresponds to this point in time. You can go into the past (to view previous analysis) by clicking the other icons.

3. The findings give you a short summary of what the ADDM found as performance areas in the instance that could be tuned. By clicking a particular issue, you are directed to the Performance Finding Details page.

You can click the View Report button to get details about the performance analysis in the form of a text report.


ADDM Recommendations

ADDM Recommendations
On the Performance Finding Details page, you are given recommendations to solve the corresponding issue. Recommendations are divided into categories such as Schema, SQL Tuning, DB Configuration, and so on. The Benefit (%) column gives you the maximum reduction in database elapsed time if the recommendation is implemented. ADDM considers a variety of changes to a system, and its recommendations can include:
• Hardware changes: Adding CPUs or changing the I/O subsystem configuration
• Database configuration: Changing initialization parameter settings
• Schema changes: Hash-partitioning a table or index, or using automatic segment space management
• Application changes: Using the cache option for sequences or using bind variables
• Using other advisors: Running the SQL Tuning Advisor on high-load SQL or running the Segment Advisor on hot objects


Summary

In this lesson, you should have learned how to:
• Determine RAC-specific tuning components
• Tune instance recovery in RAC
• Determine RAC-specific wait events, global enqueues, and system statistics
• Implement the most common RAC tuning tips
• Use the Cluster Database Performance pages
• Use the Automatic Workload Repository in RAC
• Use the Automatic Database Diagnostic Monitor in RAC


Practice 10: Overview

This practice covers studying a scalability case by using the ADDM.


Design for High Availability


Objectives

After completing this lesson, you should be able to do the following:
• Design a Maximum Availability Architecture in your environment
• Determine the best RAC and Data Guard topologies for your environment
• Configure the Data Guard Broker configuration files in a RAC environment
• Patch your RAC system in a rolling fashion

Objectives
The goal of this lesson is to provide an overview of the various high-availability architectures that you can implement with RAC. It is out of the scope of this lesson to give you detailed information about how to set up these architectures. For more information, refer to the corresponding documentation or courses.


Causes of Unplanned Down Time

[Slide graphic: taxonomy of unplanned down time]
• Software failures: operating system, database, middleware, application, network
• Hardware failures: CPU, memory, power supply, bus, disk, tape, controllers, network, power
• Human errors: operator error, user error, DBA, system administrator, sabotage
• Disasters: fire, flood, earthquake, power failure, bombing

Causes of Unplanned Down Time
One of the true challenges in designing a highly available solution is examining and addressing all the possible causes of down time. It is important to consider causes of both unplanned and planned down time. The above schema, which is a taxonomy of unplanned failures, classifies failures as software failures, hardware failures, human errors, and disasters. Under each category heading is a list of possible causes of failures related to that category.

Software failures include operating system, database, middleware, application, and network failures. A failure of any one of these components can cause a system fault. Hardware failures include system, peripheral, network, and power failures.

Human error, which is a leading cause of failures, includes errors by an operator, user, database administrator, or system administrator. Another type of human error that can cause unplanned down time is sabotage.

The final category is disasters. Although infrequent, these can have extreme impacts on enterprises because of their prolonged effect on operations. Possible causes of disasters include fires, floods, earthquakes, power failures, and bombings. A well-designed high-availability solution accounts for all these factors in preventing unplanned down time.


Causes of Planned Down Time

[Slide graphic: taxonomy of planned down time]
• Routine operations: backups, performance management, security management, batches
• Periodic maintenance: storage maintenance, initialization parameters, software patches, schema management, operating system, middleware, network
• New deployments: hardware upgrades, operating system upgrades, database upgrades, middleware upgrades, application upgrades, network upgrades

Causes of Planned Down Time
Planned down time can be just as disruptive to operations, especially in global enterprises that support users in multiple time zones, up to 24 hours per day. In these cases, it is important to design a system to minimize planned interruptions. As shown by the schema in the slide above, causes of planned down time include routine operations, periodic maintenance, and new deployments.

Routine operations are frequent maintenance tasks that include backups, performance management, user and security management, and batch operations. Periodic maintenance, such as installing a patch or reconfiguring the system, is occasionally necessary to update the database, application, operating system, middleware, or network. New deployments describe major upgrades to the hardware, operating system, database, application, middleware, or network. It is important to consider not only the time to perform the upgrade, but also the effect the changes may have on the overall application.


Oracle’s Solution to Down Time

[Slide graphic: mapping Oracle features to causes of down time — unplanned down time from system failures (RAC, Fast-Start Fault Recovery) and data failures (Flashback, ASM, HARD, Data Guard, flash backup/recovery); planned down time from system changes (dynamic provisioning, rolling upgrades) and data changes (online redefinition).]

Oracle's Solution to Down Time
Unplanned down time is primarily the result of computer failures or data failures. Planned down time is primarily due to data changes or system changes:
• RAC provides optimal performance, scalability, and availability gains.
• Fast-Start Fault Recovery enables you to bound the database crash recovery time. The database self-tunes checkpoint processing to safeguard the desired recovery time objective.
• ASM provides a higher level of availability using online provisioning of database storage.
• Flashback provides a family of human error correction technologies.
• Oracle Hardware Assisted Resilient Data (HARD) is a comprehensive program designed to prevent data corruptions before they happen.
• Recovery Manager (RMAN) automates database backup and recovery by using the flash recovery area.
• Data Guard must be the foundation of any Oracle database disaster-recovery plan.
• With online redefinition, the Oracle database supports many maintenance operations without disrupting database operations, or users updating or accessing data.
• The Oracle database continues to broaden support for dynamic reconfiguration, enabling it to adapt to changes in demand and hardware with no disruption of service.
• The Oracle database supports the application of patches to the nodes of a RAC system, as well as database software upgrades, in a rolling fashion.
Note: The above list is not complete.


RAC and Data Guard Complementarity

[Slide graphic: protection matrix — RAC protects the nodes and instances resources against component and software failures; Data Guard protects the data and site resources against human errors and environmental failures.]

RAC and Data Guard Complementarity
RAC and Data Guard together provide the benefits of system-level, site-level, and data-level protection, resulting in high levels of availability and disaster recovery without loss of data:
• RAC addresses system failures by providing rapid and automatic recovery from failures, such as node failures and instance crashes.
• Data Guard addresses site failures and data protection through transactionally consistent primary and standby databases that do not share disks, enabling recovery from site disasters and data corruption.


Maximum Availability Architecture

[Slide graphic: clients connect through a WAN traffic manager to either of two identical sites; the primary site runs Oracle Application Server in front of a RAC database, and the secondary site runs Oracle Application Server in front of RAC physical and logical standby databases kept synchronized by Data Guard.]

Maximum Availability Architecture (MAA)
RAC and Data Guard provide the basis of the database MAA solution. MAA provides the most comprehensive architecture for reducing down time for scheduled outages and for preventing, detecting, and recovering from unscheduled outages. The recommended MAA has two identical sites. The primary site contains the RAC database, and the secondary site contains both a physical standby database and a logical standby database on RAC. An identical site configuration is recommended to ensure that performance is not sacrificed after a failover or switchover. Symmetric sites also enable processes and procedures to be kept the same between sites, making operational tasks easier to maintain and execute.

The graphic illustrates identically configured sites. Each site consists of redundant components and redundant routing mechanisms, so that requests are always serviceable even in the event of a failure. Most outages are resolved locally. Client requests are always routed to the site playing the production role. After a failover or switchover operation occurs due to a serious outage, client requests are routed to another site that assumes the production role.

Each site contains a set of application servers or mid-tier servers. The site playing the production role contains a production database using RAC to protect from host and instance failures. The site playing the standby role contains one physical standby database and one logical standby database managed by Data Guard. Data Guard switchover and failover functions allow the roles to be traded between sites.

Note: For more information, see the following Web site: http://otn.oracle.com/deploy/availability/htdocs/maa.htm


RAC and Data Guard Topologies

• Symmetric configuration with RAC at all sites:
  – Same number of instances
  – Same service preferences
• Asymmetric configuration with RAC at all sites:
  – Different number of instances
  – Different service preferences
• Asymmetric configuration with a mixture of RAC and single instance:
  – All sites running under CRS
  – Some single-instance sites not running under CRS

RAC and Data Guard Topologies
You can configure a standby database to protect a primary database in a RAC environment. Basically, all kinds of combinations are supported. For example, it is possible to have your primary database running under RAC, and your standby database running as a single-instance database. It is also possible to have both the primary and standby databases running under RAC. The slide above explains the distinction between symmetric environments and asymmetric ones.

If you want to create a symmetric environment running RAC, all databases need to have the same number of instances and the same service preferences. As the DBA, you need to make sure that this is the case by manually configuring them in a symmetric way. However, if you want to benefit from the tight integration of CRS and Data Guard Broker, make sure that both the primary site and the secondary site are running under CRS, and that both sites have the same services defined.


RAC and Data Guard Architecture

[Slide graphic: a two-instance primary RAC database (primary instances A and B) whose LGWR processes ship redo from the online redo log files to RFS processes on the standby side; standby receiving instance C writes to standby redo log files while standby apply instance D applies the redo, with ARCn processes archiving to a flash recovery area on each database.]

RAC and Data Guard Architecture
Although it is perfectly possible to use a RAC to single-instance Data Guard (DG) configuration, you also have the possibility to use a RAC-to-RAC DG configuration. In this mode, although multiple standby instances can receive redo from the primary database, only one standby instance can apply the redo stream generated by the primary instances.

A RAC-to-RAC DG configuration can be set up in different ways, and the slide shows you one possibility: a symmetric configuration where each primary instance sends its redo stream to a corresponding standby instance using standby redo log files. It is also possible for each primary instance to send its redo stream to only one standby instance that also applies this stream to the standby database. However, you can get performance benefits by using the configuration shown in the slide above. For example, assume that the redo generation rate on the primary is too great for a single receiving instance on the standby side to handle. Suppose further that the primary database is using the SYNC redo transport mode. If a single receiving instance on the standby cannot keep up with the primary, then the primary's progress is going to be throttled by the standby. If the load is spread across multiple receiving instances on the standby, then this is less likely to occur.

If the standby can keep up with the primary, another approach is to use only one standby instance to receive and apply the complete redo stream. For example, you can set up the primary instances to remotely archive to the same Oracle Net service name.


RAC and Data Guard Architecture (continued)
Then, you can configure one of the standby nodes to handle that service. This instance then both receives and applies redo from the primary. If you need to do maintenance on that node, you can stop the service on that node and start it on another node. This approach allows the primary instances to be more independent of the standby configuration because they are not configured to send redo to a particular instance.
Note: For more information, refer to the Oracle Data Guard Concepts and Administration guide.


Data Guard Broker (DGB) and CRS Integration

• CRS manages intra-site lights-out HA operations.
• CRS manages intra-site planned HA operations.
• CRS notifies when manual intervention is required.
• DBA receives notification.
• DBA decides to switch over or fail over using DGB.
• DGB manages inter-site planned HA operations.
• DGB takes over from CRS for inter-site failover, switchover, and protection mode changes:
  – DMON notifies CRS to stop and disable the site, leaving all or one instance.
  – DMON notifies CRS to enable and start the site according to the DG site role.

Data Guard Broker (DGB) and CRS Integration
DGB is tightly integrated with CRS. CRS manages individual instances to provide unattended high availability of a given clustered database. DGB manages individual databases (clustered or otherwise) in a Data Guard configuration to provide disaster recovery in the event that CRS is unable to maintain availability of the primary database.

For example, CRS posts NOT_RESTARTING events for the database group and service groups that cannot be recovered. These events are available through Enterprise Manager, ONS, and server-side callouts. As a DBA, when you receive those events, you might decide to repair and restart the primary site, or to invoke DGB to fail over.

DGB and CRS work together to temporarily suspend service availability on the primary database, accomplish the actual role change for both databases (during which CRS works with DGB to properly restart the instances as necessary), and then resume service availability on the new primary database. The broker manages the underlying Data Guard configuration and its database roles, while CRS manages the service availability that depends on those roles. Applications that rely on CRS for managing service availability see only a temporary suspension of service as the role change occurs within the Data Guard configuration.


Data Guard Broker Configuration Files

*.DG_BROKER_CONFIG_FILE1=+DG1/RACDB/dr1config.dat
*.DG_BROKER_CONFIG_FILE2=+DG1/RACDB/dr2config.dat

[Slide graphic: instances RAC01 and RAC02 both reference the broker configuration files on shared storage.]

Data Guard Broker Configuration Files
Two copies of the Data Guard Broker (DGB) configuration file are maintained for each database so as to always have a record of the last known valid state of the configuration. When the broker is started for the first time, the configuration files are automatically created and named using a default path name and file name that is operating system specific. When using a RAC environment, the DGB configuration files must be shared by all instances of the same database. You can override the default path name and file name by setting the following initialization parameters for that database: DG_BROKER_CONFIG_FILE1 and DG_BROKER_CONFIG_FILE2.

You have three possible options to share those files:
• Cluster file system
• Raw devices
• ASM

The above example illustrates a case where those files are stored in an ASM disk group called DG1. It is assumed that you have already created a directory called RACDB in DG1.
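When the database uses an SPFILE, settings like those in the example could be applied across all instances with ALTER SYSTEM; this sketch reuses the DG1 disk group and RACDB directory names assumed above:

```sql
-- Point both broker configuration files at shared ASM storage
-- for every instance (SID='*'); takes effect at the next restart.
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1 =
  '+DG1/RACDB/dr1config.dat' SCOPE=SPFILE SID='*';
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2 =
  '+DG1/RACDB/dr2config.dat' SCOPE=SPFILE SID='*';
```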


Hardware Assisted Resilient Data

• Prevents corruption introduced in the I/O path
• Is supported by major storage vendors: EMC, Fujitsu, Hitachi, HP, NEC, Network Appliance, Sun Microsystems
• All file types and block sizes are checked

[Slide graphic: the I/O path from the Oracle database through the volume manager/ASM, operating system device driver, host bus adapter, and SAN interface with virtualization down to the storage device. The database validates blocks and adds protection information to them (DB_BLOCK_CHECKSUM=TRUE); once enabled, the storage device validates the protection information, for example with: symchksum -type Oracle enable]

Hardware Assisted Resilient Data
One problem that can cause lengthy outages is data corruption. Today, the primary means for detecting corruptions caused by hardware or software outside of Oracle, such as an I/O subsystem, is the Oracle checksum. However, after a block is passed to the operating system, through the volume manager, and out to disk, Oracle itself can no longer verify that the block being written is still correct.
As disk technologies expand in complexity, and configurations such as Storage Area Networks (SANs) become more popular, the number of layers between the host processor and the physical spindle continues to increase. With more layers, the chance of a problem increases. With the HARD initiative, the storage device itself can verify database block checksum information. Verifying that the block is the same at the end of the write as it was at the beginning gives you an additional level of security.
By default, the Oracle database automatically adds checksum information to its blocks. These checksums can be verified by the storage device if you enable this capability. If the storage device finds a corrupt block, it either logs an I/O corruption, or cancels the I/O and reports the error back to the instance.
Note: The way you enable checksum validation on the storage device side is vendor specific. The above example was used with EMC Symmetrix storage.
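On the database side, a minimal sketch of confirming and enabling block checksums looks like the following; the storage-side command remains vendor specific, as noted above:

```sql
-- Verify that block checksums are enabled (TRUE is the default)
SHOW PARAMETER db_block_checksum

-- Enable them explicitly across all RAC instances if needed
ALTER SYSTEM SET db_block_checksum = TRUE SCOPE=BOTH SID='*';
```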

Rolling Patch Upgrade Using RAC

• Operating system upgrades
• Oracle patch upgrades
• Hardware upgrades

[Slide graphic, four steps: (1) initial RAC configuration with clients connected to nodes A and B; (2) clients moved to node B while node A is patched; (3) clients moved to node A while node B is patched; (4) upgrade complete, clients again on both nodes]

Rolling Patch Upgrade Using RAC
Rolling patching is supported, but only for single patches that are marked as rolling upgrade compatible. Rolling RAC patching allows a patched node and an unpatched node to interoperate simultaneously, which means only one node is out of commission while it is patched.
When you use the opatch tool to apply a rolling RAC patch, you are prompted to stop the instances on the node to be patched. First, the local node is patched; then you are asked for the next node to patch from a list. As each node is patched, you are prompted when it is safe to restart the patched node. The cycle of prompting for a node, stopping the instances on that node, patching it, and restarting the instances continues until you stop the cycle or until all nodes are patched. After you download the patch to your node, you need to unzip it before you can apply it. You can determine whether the patch is flagged as rolling upgradable by checking the Patch_number/etc/config/inventory file. Near the end of the file, you should see the following marker: <online_rac_installable>true</online_rac_installable>

Although a rolling patch upgrade allows you to test the patch before propagating it to the other nodes, it is preferable to test patches in a test environment rather than directly on your production system.
Note: Some components cannot be changed one node at a time. The classic example is the data dictionary: because there is only one data dictionary, all instances need to be shut down.
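The inventory check described above can be scripted. The following is a minimal sketch: the patch number 12345 and the file it creates are illustrative stand-ins, not a real Oracle patch.

```shell
# Stand-in for an unzipped patch directory (hypothetical patch number)
mkdir -p 12345/etc/config
echo '<online_rac_installable>true</online_rac_installable>' > 12345/etc/config/inventory

# A patch is rolling upgradable if this flag near the end of the file is true
if grep -q '<online_rac_installable>true</online_rac_installable>' 12345/etc/config/inventory
then
    echo "Patch 12345 can be applied in a rolling fashion"
fi
```
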

Rolling Release Upgrade Using SQL Apply

• Major release upgrades
• Patch set upgrades
• Cluster software and hardware upgrades

[Slide graphic, four steps: (1) initial SQL Apply setup — primary and logical standby both at version n, logs shipping; (2) upgrade the standby site to version n+1 while logs queue on the primary; (3) run the mixed configuration to test, logs shipping again; (4) switch over and upgrade the new standby, both sites at version n+1]

Rolling Release Upgrade Using SQL Apply
It is possible to do a rolling upgrade using logical standby databases. For example, using SQL Apply and logical standby databases, you can upgrade the Oracle database software from patch set release 10.1.0.n to the next patch set release, 10.1.0.(n+1).
The first step in the slide shows the Data Guard configuration before the upgrade begins, with the primary and logical standby databases both running the same Oracle software version.
In step two, you stop SQL Apply and upgrade the Oracle database software on the logical standby database to version n+1. During the upgrade, redo data accumulates on the primary system.
In step three, you restart SQL Apply, and the redo data that accumulated on the primary system is automatically transmitted and applied on the newly upgraded logical standby database. The Data Guard configuration can run with mixed versions for an arbitrary period.
In the last step, you perform a switchover and then activate the user applications and services on the new primary database. Before you can re-enable SQL Apply, you need to upgrade the new standby site, because the new standby site cannot apply redo generated by the newer release. Finally, raise the compatibility level on each database.
Note: SQL Apply does not support all data types, which may prevent you from using this method.
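The standby-side portion of steps two and three can be sketched with the following statements; treat this as an outline of the SQL Apply commands only, not the full upgrade procedure:

```sql
-- Step 2: on the logical standby, stop SQL Apply before upgrading the software
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- (Upgrade the Oracle software on the standby to version n+1 here;
--  redo accumulates on the primary in the meantime.)

-- Step 3: restart SQL Apply; the queued redo from the primary is now applied
ALTER DATABASE START LOGICAL STANDBY APPLY;
```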

Database High Availability Best Practices

• Use temporary tablespaces
• Register all instances with remote listeners
• Use Database Resource Manager
• Use resumable space allocation
• Use automatic segment space management
• Use locally managed tablespaces
• Use automatic undo management
• Enable block checksums
• Enable Flashback Database
• Enable ARCHIVELOG mode and use a flash recovery area
• Use auto-tune checkpointing
• Log checkpoints to the alert log
• Multiplex production and standby redo logs
• Set CONTROL_FILE_RECORD_KEEP_TIME long enough
• Create two control files
• Use SPFILE

Database High Availability Best Practices
The above table gives you a short summary of the recommended practices that apply to single-instance databases, RAC databases, and Data Guard standby databases.
These practices affect the performance, availability, and MTTR of your system. Some of them may reduce performance, but they are necessary to reduce or avoid outages. The minimal performance impact is outweighed by the reduced risk of corruption and the performance improvement for recovery.
Note: For more information on how to set up the above features, refer to the following documents:

• Administrator's Guide
• Data Guard Concepts and Administration
• Net Services Administrator's Guide
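As an illustration, a few of the practices in the table map directly to statements such as the following; this is a sketch, and the values shown are placeholders to adapt to your environment:

```sql
-- Enable ARCHIVELOG mode (issued with the database mounted, not open)
ALTER DATABASE ARCHIVELOG;

-- Enable Flashback Database (requires a flash recovery area; mounted state in 10g)
ALTER DATABASE FLASHBACK ON;

-- Keep enough control file history for your backup retention (value in days)
ALTER SYSTEM SET control_file_record_keep_time = 30 SCOPE=BOTH SID='*';

-- Record checkpoint activity in the alert log
ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE=BOTH SID='*';
```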

Extended RAC: Overview

• Full utilization of resources, no matter where they are located

• Fast recovery from site failure

[Slide graphic: clients connected to a single RAC database whose instances are spread across Site A and Site B]

Extended RAC: Overview
Typically, RAC databases share a single set of storage and are located on servers in the same data center.
With extended RAC, you can use disk mirroring and Dense Wavelength Division Multiplexing (DWDM) equipment to extend the reach of the cluster. This configuration allows two data centers, separated by up to 100 kilometers, to share the same RAC database, with multiple RAC instances spread across the two sites.
As shown in the slide above, this RAC topology is attractive because the clients' work is distributed automatically across all nodes, independently of their location, and if one site goes down, the clients' work continues to be executed on the remaining site. The failures that extended RAC can cover are mainly failures of an entire data center due to a limited geographic disaster. Fire, flooding, and site power failure are just a few examples of limited geographic disasters that can bring down an entire data center.
Note: Extended RAC does not use any special software other than the normal RAC installation.

Extended RAC Connectivity

• Distances over ten kilometers require dark fiber.
• Set up buffer credits for large distances.

[Slide graphic: clients on a public network; Site A and Site B each hold a copy of the database and a DWDM device, with the two DWDM devices linked by dark fiber]

Extended RAC Connectivity
To extend a RAC cluster to a site separated from your data center by more than ten kilometers, you must use DWDM over dark fiber to get good performance.
DWDM is a technology that uses multiple lasers and transmits several wavelengths of light simultaneously over a single optical fiber. DWDM dramatically increases the capacity of an existing single-fiber infrastructure. DWDM systems can support more than 150 wavelengths, each carrying up to 10 Gbps. Such systems provide more than a terabit per second of data transmission on one optical strand that is thinner than a human hair.
As shown in the slide above, each site should have its own DWDM device, and the devices are connected by a dark fiber optical strand. All traffic between the two sites is sent through the DWDM devices and carried on dark fiber. This includes mirrored disk writes, network and heartbeat traffic, and memory-to-memory data passage. Also shown in the graphic is the set of disks at each site: each site maintains a copy of the RAC database.
Depending on the distance between the sites, you should tune and determine the minimum number of buffer credits needed to maintain the maximum link bandwidth. Buffer credits are a mechanism defined by the Fibre Channel standard that establishes the maximum amount of data that can be in flight at any one time.
Note: Dark fiber is a single fiber optic cable or strand, mainly sold by telecom providers.

Extended RAC Disk Mirroring

• Need a copy of the data at each location
• Two options:
– Host-based mirroring
– Remote array-based mirroring

[Slide graphic: host-based mirroring — a database copy at Site A and at Site B presented as one set of disks; array-based mirroring — a primary copy mirrored to a secondary copy]

Extended RAC Disk Mirroring
Although there is only one RAC database, each data center has its own set of storage, which is synchronously mirrored using either a cluster-aware host-based Logical Volume Manager (LVM) solution, such as SLVM with MirrorDisk/UX, or an array-based mirroring solution, such as EMC SRDF.
With host-based mirroring, shown on the left of the slide, the disks appear as one set, and all I/Os are sent to both sets of disks. This solution requires closely integrated clusterware and LVM, which does not exist with the Oracle Database 10g clusterware.
With array-based mirroring, shown on the right, all I/Os are sent to one site and are then mirrored to the other. This alternative is the only option if you have only the Oracle Database 10g clusterware. In effect, this solution is a primary/secondary site setup: if the primary site fails, all access to the primary disks is lost, and an outage may be incurred before you can switch to the secondary site.
Note: With extended RAC, designing the cluster so that it can achieve quorum after a site failure is a critical issue. For more information regarding this topic, refer to the Oracle Technology Network site.

Additional Data Guard Benefits

• Greater disaster protection
– Greater distance
– Additional protection against corruptions
• Better for planned maintenance
– Full rolling upgrades
• More performance neutral at large distances
– Option to use asynchronous transport
• If you cannot handle the costs of a DWDM network, Data Guard still works over inexpensive standard networks.

Additional Data Guard Benefits
Data Guard provides greater disaster protection:
• Distances over 100 kilometers without a performance hit
• Additional protection against corruptions, because it uses a separate database
• An optional apply delay to protect against user errors

Data Guard also provides better planned maintenance capabilities by supporting full rolling upgrades. In addition, if you cannot handle the costs of a DWDM network, Data Guard still works over inexpensive standard networks.

Page 419: User Guide RAC

Oracle Database 10g: Real Application Clusters 11-21

Copyright © 2004, Oracle. All rights reserved.

Using Distributed Transactions with RAC

• Scope of application: XA or MS DTC
• All transaction branches occur on the same instance

[Slide graphic: three nodes, RAC01, RAC02, and RAC03, each publishing service S0 for nondistributed transactions; RAC01 also publishes singleton service S1 and RAC02 publishes S2; mid-tier partition 1 connects through S1, mid-tier partition 2 through S2, and the non-DT mid-tier through S0]

Using Distributed Transactions with RAC
When using RAC with distributed transactions (Microsoft Distributed Transaction Coordinator or XA), it is possible for two application components in the same transaction to connect to different nodes of a RAC cluster. This situation can occur on systems with automatic load balancing, where the application cannot control on which database node a distributed transaction branch is processed. It is important that the branches of a tightly coupled transaction remain on the same node, because separating them may lead to deadlocks or problems with the two-phase commit.
Each distributed transaction's operations must have an affinity to a single database node within the RAC cluster. By using node affinity, it is possible to use RAC reliably with distributed transactions.
The above graphic presents a possible solution. Assume you have three RAC nodes, RAC01, RAC02, and RAC03, each capable of servicing any nondistributed transaction coming from a middle tier. Distributed transactions from the other middle tiers are partitioned statically, via Oracle Net aliases, across these three nodes. Thus, each node publishes itself as an S0 service for nondistributed transactions. In addition, RAC01 and RAC02 publish themselves as singleton services S1 and S2, respectively.
Each mid-tier client has an address list in its Oracle Net alias that assigns common distributed transaction branches to the same RAC node. If one of these database nodes fails, CRS starts the corresponding service on one of the available instances.
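The partitioning can be expressed in each mid-tier's Oracle Net alias. The following is a hypothetical tnsnames.ora sketch for the partition 1 clients; the alias name and host name are illustrative, while the service name S1 follows the example above:

```
DT_PARTITION1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac01-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = S1))
  )
```

Because every branch of a distributed transaction from partition 1 connects through this alias, all of its work lands on the node currently hosting S1, preserving node affinity.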

Using a Test Environment

• The most common cause of downtime is change.
• Test your changes on a separate test cluster before changing your production environment.

[Slide graphic: a production cluster and a separate test cluster, each with its own RAC database]

Using a Test Environment
Change is the most likely cause of downtime in a production environment. A proper test environment can catch more than 90 percent of the changes that could lead to downtime in production, and it is invaluable for the quick testing and resolution of production issues.
When your production system is RAC, your test environment should be a separate RAC cluster with identical software components and versions.
Without a test cluster, your production environment will not be highly available.
Note: Not using a test environment is one of the most common errors seen by Oracle Support Services.

Summary

In this lesson, you should have learned how to:
• Design a Maximum Availability Architecture in your environment
• Determine the best RAC and Data Guard topologies for your environment
• Configure the Data Guard Broker configuration files in a RAC environment
• Patch your RAC system in a rolling fashion
