
11gR2 Clusterware and Grid Home - What You Need to Know [ID 1053147.1]

In this Document

Purpose
Scope
Details
  11gR2 Clusterware Key Facts
  Clusterware Startup Sequence
  Important Log Locations
  Clusterware Resource Status Check
  Clusterware Resource Administration
  OCRCONFIG Options
  OLSNODES Options
  Cluster Verification Options
  Database - RAC/Scalability Community
References

Applies to:

Oracle Server - Enterprise Edition - Version 11.2.0.1 to 11.2.0.1 [Release 11.2]
Information in this document applies to any platform.

Purpose

The 11gR2 Clusterware has undergone numerous changes since the previous release. For information on the previous release(s), see Note: 259301.1 "CRS and 10g Real Application Clusters". This document is intended to cover the 11.2 Clusterware, which has some similarities to and some differences from the previous version(s).

Scope

This document is intended for RAC Database Administrators and Oracle support engineers.

Details

11gR2 Clusterware Key Facts

11gR2 Clusterware is required to be up and running prior to installing an 11gR2 Real Application Clusters database.

The GRID home consists of the Oracle Clusterware and ASM. ASM should not be in a separate home.

The 11gR2 Clusterware can be installed in "Standalone" mode for ASM and/or "Oracle Restart" single node support. This clusterware is a subset of the full clusterware described in this document.

The 11gR2 Clusterware can be run by itself or on top of vendor clusterware. See the certification matrix for certified combinations. Ref: Note: 184875.1 "How To Check The Certification Matrix for Real Application Clusters"

The GRID Home and the RAC/DB Home must be installed in different locations.

The 11gR2 Clusterware requires shared OCR and voting files. These can be stored in ASM or on a cluster filesystem.

The OCR is backed up automatically every 4 hours to <GRID_HOME>/cdata/<scan name>/ and can be restored via ocrconfig.
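For example, a minimal sketch of checking and taking OCR backups with ocrconfig (run as root from <GRID_HOME>/bin; the backup file name passed to -restore is illustrative, and the clusterware stack must be down on all nodes before a restore):

$ ./ocrconfig -showbackup
$ ./ocrconfig -manualbackup
$ ./ocrconfig -restore <GRID_HOME>/cdata/<scan name>/backup00.ocr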

The voting file is backed up into the OCR at every configuration change and can be restored via crsctl.
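For example, a sketch of checking the voting files and restoring/relocating them with crsctl (the diskgroup name +SYSTEMDG is borrowed from the sample status output later in this note and is only illustrative):

$ ./crsctl query css votedisk
$ ./crsctl replace votedisk +SYSTEMDG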

The 11gR2 Clusterware requires at least one private network for inter-node communication and at least one public network for external communication. Several virtual IPs need to be registered with DNS: the node VIPs (one per node) and the SCAN VIPs (three). This can be done manually by your network administrator, or you can optionally configure GNS (Grid Naming Service) in the Oracle clusterware to handle it for you (note that GNS requires its own VIP).
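A quick sanity check is to resolve these names in DNS before installation; the hostnames below are purely illustrative:

$ nslookup racbde1-vip
$ nslookup racbde-scan     (should return all three SCAN VIP addresses)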

A SCAN (Single Client Access Name) is provided to clients to connect to. For more info on SCAN see Note: 887522.1
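For example, clients can point a connect string at the SCAN instead of at individual node VIPs. The SCAN name and service name below are illustrative:

$ sqlplus system@//racbde-scan:1521/rac

RAC =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=racbde-scan)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=rac)))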

The root.sh script at the end of the clusterware installation starts the clusterware stack. For information on troubleshooting root.sh issues see Note: 1053970.1


Only one set of clusterware daemons can be running per node. On Unix, the clusterware stack is started via the init.ohasd script referenced in /etc/inittab with "respawn".

A node can be evicted (rebooted) if it is deemed to be unhealthy. This is done so that the health of the entire cluster can be maintained. For more information on this see: Note: 1050693.1 "Troubleshooting 11.2 Clusterware Node Evictions (Reboots)"

Either have vendor time synchronization software (like NTP) fully configured and running, or have it not configured at all and let CTSS handle time synchronization. See Note: 1054006.1 for more information.

If installing DB homes for a lower version, you will need to pin the nodes in the clusterware or you will see ORA-29702 errors. See Note: 946332.1 and Note: 948456.1 for more info.
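For example, a sketch of pinning the nodes as root and then verifying the pin state (node names are illustrative):

$ ./crsctl pin css -n racbde1 racbde2
$ ./olsnodes -t -n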

The clusterware stack can be started by booting the machine, by running "crsctl start crs" to start the stack on the local node, or by running "crsctl start cluster" to start the clusterware on all nodes. Note that crsctl is in the <GRID_HOME>/bin directory and that "crsctl start cluster" will only work if ohasd is running.

The clusterware stack can be stopped by shutting down the machine, by running "crsctl stop crs" to stop the stack on the local node, or by running "crsctl stop cluster" to stop the clusterware on all nodes.
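For example, a typical restart of the stack on one node followed by a status check (run as root from <GRID_HOME>/bin):

$ ./crsctl stop crs
$ ./crsctl start crs
$ ./crsctl check crs
$ ./crsctl check cluster -all     (checks CRS, CSS and EVM on all nodes)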

Killing clusterware daemons is not supported.

Note that it is also a good idea to follow the RAC Assurance best practices in Note: 810394.1

Clusterware Startup Sequence

The following is the Clusterware startup sequence (the accompanying image is from the "Oracle Clusterware Administration and Deployment Guide"):

Don't let this picture scare you too much. You aren't responsible for managing all of these processes; that is the Clusterware's job!

Short summary of the startup sequence: INIT spawns init.ohasd (with respawn) which in turn starts the OHASD process (Oracle High Availability Services Daemon). This daemon spawns 4 processes.

Level 1: OHASD Spawns:


cssdagent - Agent responsible for spawning CSSD.
orarootagent - Agent responsible for managing all root owned ohasd resources.
oraagent - Agent responsible for managing all oracle owned ohasd resources.
cssdmonitor - Monitors CSSD and node health (along with the cssdagent).

Level 2: OHASD rootagent spawns:

CRSD - Primary daemon responsible for managing cluster resources.
CTSSD - Cluster Time Synchronization Services Daemon
Diskmon
ACFS (ASM Cluster File System) Drivers

Level 2: OHASD oraagent spawns:

MDNSD - Used for DNS lookup
GIPCD - Used for inter-process and inter-node communication
GPNPD - Grid Plug & Play Profile Daemon
EVMD - Event Monitor Daemon
ASM - Resource for monitoring ASM instances

Level 3: CRSD spawns:

orarootagent - Agent responsible for managing all root owned crsd resources.

oraagent - Agent responsible for managing all oracle owned crsd resources.

Level 4: CRSD rootagent spawns:

Network resource - To monitor the public network
SCAN VIP(s) - Single Client Access Name Virtual IPs
Node VIPs - One per node
ACFS Registry - For mounting ASM Cluster File System
GNS VIP (optional) - VIP for GNS

Level 4: CRSD oraagent spawns:

ASM Resource - ASM Instance(s) resource
Diskgroup - Used for managing/monitoring ASM diskgroups.
DB Resource - Used for monitoring and managing the DB and instances
SCAN Listener - Listener for single client access name, listening on SCAN VIP
Listener - Node listener listening on the Node VIP
Services - Used for monitoring and managing services
ONS - Oracle Notification Service
eONS - Enhanced Oracle Notification Service
GSD - For 9i backward compatibility
GNS (optional) - Grid Naming Service - Performs name resolution
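A simple way to see which of these daemons and agents are running on a node is to check the process list; this is only a sketch, and exact process names vary slightly by platform:

$ ps -ef | grep -E 'ohasd|crsd|ocssd|evmd|agent' | grep -v grep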


Important Log Locations

Clusterware daemon logs are all under <GRID_HOME>/log/<nodename>. Structure under <GRID_HOME>/log/<nodename>:

alert<NODENAME>.log - look here first for most clusterware issues
./admin:
./agent:
./agent/crsd:
./agent/crsd/oraagent_oracle:
./agent/crsd/ora_oc4j_type_oracle:
./agent/crsd/orarootagent_root:
./agent/ohasd:
./agent/ohasd/oraagent_oracle:
./agent/ohasd/oracssdagent_root:
./agent/ohasd/oracssdmonitor_root:
./agent/ohasd/orarootagent_root:
./client:
./crsd:
./cssd:
./ctssd:
./diskmon:
./evmd:
./gipcd:
./gnsd:
./gpnpd:
./mdnsd:
./ohasd:
./racg:
./racg/racgeut:
./racg/racgevtf:
./racg/racgmain:
./srvm:

The cfgtoollogs directories under <GRID_HOME> and $ORACLE_BASE contain other important log files, specifically those for rootcrs.pl and for configuration assistants such as ASMCA.

ASM logs live under $ORACLE_BASE/diag/asm/+asm/<ASM Instance Name>/trace

The diagcollection.pl script under <GRID_HOME>/bin can be used to automatically collect important files for support. Run this as the root user.
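For example, as root (the --collect option gathers the clusterware logs, OCR data and other diagnostics into archives in the current directory):

$ cd /tmp
$ <GRID_HOME>/bin/diagcollection.pl --collect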

Clusterware Resource Status Check

The following command will display the status of all cluster resources:

$ ./crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.LISTENER.lsnr
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.SYSTEMDG.dg
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.asm
               ONLINE  ONLINE       racbde1                  Started
               ONLINE  ONLINE       racbde2                  Started
ora.eons
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.gsd
               OFFLINE OFFLINE      racbde1
               OFFLINE OFFLINE      racbde2
ora.net1.network
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.ons
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.registry.acfs
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racbde1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racbde2
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racbde2
ora.oc4j
      1        OFFLINE OFFLINE
ora.rac.db
      1        ONLINE  ONLINE       racbde1                  Open
      2        ONLINE  ONLINE       racbde2                  Open
ora.racbde1.vip
      1        ONLINE  ONLINE       racbde1
ora.racbde2.vip
      1        ONLINE  ONLINE       racbde2
ora.scan1.vip
      1        ONLINE  ONLINE       racbde1
ora.scan2.vip
      1        ONLINE  ONLINE       racbde2
ora.scan3.vip
      1        ONLINE  ONLINE       racbde2

Clusterware Resource Administration

Srvctl and crsctl are used to manage clusterware resources. The general rule is to use srvctl for whatever resource management you can. Crsctl should only be used for things that you cannot do with srvctl (like start the cluster). Both have a help feature to see the available syntax.
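For example, typical day-to-day administration with srvctl; the database, instance and service names below follow the sample status output above and are only illustrative:

$ srvctl status database -d rac -v
$ srvctl stop instance -d rac -i rac1 -o immediate
$ srvctl start instance -d rac -i rac1
$ srvctl config scan
$ srvctl status nodeapps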


Note that the following only shows the available srvctl syntax. For additional explanation on what these commands do, see the Oracle Documentation.

Srvctl syntax:

$ srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}] [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"]
Usage: srvctl config database [-d <db_unique_name> [-a] ]
Usage: srvctl start database -d <db_unique_name> [-o <start_options>]
Usage: srvctl stop database -d <db_unique_name> [-o <stop_options>] [-f]
Usage: srvctl status database -d <db_unique_name> [-f] [-v]
Usage: srvctl enable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl disable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl modify database -d <db_unique_name> [-n <db_name>] [-o <oracle_home>] [-u <oracle_user>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-y {AUTOMATIC | MANUAL}] [-g "<serverpool_list>" [-x <node_name>]] [-a "<diskgroup_list>"|-z]
Usage: srvctl remove database -d <db_unique_name> [-f] [-y]
Usage: srvctl getenv database -d <db_unique_name> [-t "<name_list>"]
Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"


Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-o <stop_options>] [-f]
Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
Usage: srvctl remove instance -d <db_unique_name> [-i <inst_name>] [-f] [-y]

Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] | -g <server_pool> [-c {UNIFORM | SINGLETON}] } [-k <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>]
Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-a]
Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-c {UNIFORM | SINGLETON}] [-P {BASIC|PRECONNECT|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <integer>] [-w <integer>]
Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
       Specify instances for an administrator-managed database, or nodes for a policy managed database
Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]

Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-p <portnum>] [-m <multicast-ip-address>] [-e <eons-listen-port>] [-l <ons-local-port>] [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s] [-e]
Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-m <multicast-ip-address>] [-p <multicast-portnum>] [-e <eons-listen-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl start nodeapps [-n <node_name>] [-v]
Usage: srvctl stop nodeapps [-n <node_name>] [-f] [-r] [-v]
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-v]


Usage: srvctl disable nodeapps [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t "<name_list>"]
Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]

Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
Usage: srvctl disable vip -i <vip_name> [-v]
Usage: srvctl enable vip -i <vip_name> [-v]
Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl stop vip { -n <node_name> | -i <vip_name> } [-f] [-r] [-v]
Usage: srvctl status vip { -n <node_name> | -i <vip_name> }
Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]

Usage: srvctl add asm [-l <lsnr_name>]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n <node_name>] [-a]
Usage: srvctl enable asm [-n <node_name>]
Usage: srvctl disable asm [-n <node_name>]
Usage: srvctl modify asm [-l <lsnr_name>]
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t <name>[, ...]]
Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv asm -t "<name>[, ...]"


Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a]
Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl remove diskgroup -g <dg_name> [-f]

Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-o <oracle_home>] [-k <net_num>]
Usage: srvctl config listener [-l <lsnr_name>] [-a]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-u <oracle_user>] [-k <net_num>]
Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"

Usage: srvctl add scan -n <scan_name> [-k <network_number> [-S <subnet>/<netmask>[/if1[|if2|...]]]]
Usage: srvctl config scan [-i <ordinal_number>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl stop scan [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan [-i <ordinal_number>]
Usage: srvctl enable scan [-i <ordinal_number>]
Usage: srvctl disable scan [-i <ordinal_number>]
Usage: srvctl modify scan -n <scan_name>
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]]
Usage: srvctl config scan_listener [-i <ordinal_number>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan_listener [-i <ordinal_number>]
Usage: srvctl enable scan_listener [-i <ordinal_number>]
Usage: srvctl disable scan_listener [-i <ordinal_number>]
Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]}
Usage: srvctl remove scan_listener [-f] [-y]

Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
Usage: srvctl config srvpool [-g <pool_name>]
Usage: srvctl status srvpool [-g <pool_name>] [-a]
Usage: srvctl status server -n "<server_list>" [-a]
Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
Usage: srvctl remove srvpool -g <pool_name>

Usage: srvctl add oc4j [-v]
Usage: srvctl config oc4j
Usage: srvctl start oc4j [-v]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl relocate oc4j [-n <node_name>] [-v]
Usage: srvctl status oc4j [-n <node_name>]
Usage: srvctl enable oc4j [-n <node_name>] [-v]
Usage: srvctl disable oc4j [-n <node_name>] [-v]
Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v]
Usage: srvctl remove oc4j [-f] [-v]

Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>

Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
Usage: srvctl config filesystem -d <volume_device>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
Usage: srvctl status filesystem -d <volume_device>
Usage: srvctl enable filesystem -d <volume_device>
Usage: srvctl disable filesystem -d <volume_device>
Usage: srvctl modify filesystem -d <volume_device> -u <user>
Usage: srvctl remove filesystem -d <volume_device> [-f]

Usage: srvctl start gns [-v] [-l <log_level>] [-n <node_name>]
Usage: srvctl stop gns [-v] [-n <node_name>] [-f]
Usage: srvctl config gns [-v] [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V]
Usage: srvctl status gns -n <node_name>
Usage: srvctl enable gns [-v] [-n <node_name>]
Usage: srvctl disable gns [-v] [-n <node_name>]
Usage: srvctl relocate gns [-v] [-n <node_name>] [-f]
Usage: srvctl add gns [-v] -d <domain> -i <vip_name|ip> [-k <network_number> [-S <subnet>/<netmask>[/<interface>]]]
Usage: srvctl modify gns [-v] [-f] [-l <log_level>] [-d <domain>] [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>] [-c <name> -a <alias>] [-u <alias>] [-r <address>] [-V <name>] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>]
Usage: srvctl remove gns [-f] [-d <domain_name>]

Crsctl syntax (for further explanation of these commands, see the Oracle Documentation):

$ ./crsctl -h
Usage: crsctl add       - add a resource, type or other entity
       crsctl check     - check a service, resource or other entity
       crsctl config    - output autostart configuration
       crsctl debug     - obtain or modify debug state
       crsctl delete    - delete a resource, type or other entity
       crsctl disable   - disable autostart
       crsctl enable    - enable autostart
       crsctl get       - get an entity value
       crsctl getperm   - get entity permissions
       crsctl lsmodules - list debug modules
       crsctl modify    - modify a resource, type or other entity
       crsctl query     - query service state
       crsctl pin       - Pin the nodes in the nodelist
       crsctl relocate  - relocate a resource, server or other entity
       crsctl replace   - replaces the location of voting files
       crsctl setperm   - set entity permissions
       crsctl set       - set an entity value
       crsctl start     - start a resource, server or other entity
       crsctl status    - get status of a resource or other entity
       crsctl stop      - stop a resource, server or other entity
       crsctl unpin     - unpin the nodes in the nodelist
       crsctl unset     - unset a entity value, restoring its default

For more information on each command, run "crsctl <command> -h".

OCRCONFIG Options:

Note that the following only shows the available ocrconfig syntax. For additional explanation on what these commands do, see the Oracle Documentation.


$ ./ocrconfig -help
Name:
        ocrconfig - Configuration tool for Oracle Cluster/Local Registry.

Synopsis:
        ocrconfig [option]
        option:
                [-local] -export <filename>                  - Export OCR/OLR contents to a file
                [-local] -import <filename>                  - Import OCR/OLR contents from a file
                [-local] -upgrade [<user> [<group>]]         - Upgrade OCR from previous version
                -downgrade [-version <version string>]       - Downgrade OCR to the specified version
                [-local] -backuploc <dirname>                - Configure OCR/OLR backup location
                [-local] -showbackup [auto|manual]           - Show OCR/OLR backup information
                [-local] -manualbackup                       - Perform OCR/OLR backup
                [-local] -restore <filename>                 - Restore OCR/OLR from physical backup
                -replace <current filename> -replacement <new filename>
                                                             - Replace a OCR device/file <filename1> with <filename2>
                -add <filename>                              - Add a new OCR device/file
                -delete <filename>                           - Remove a OCR device/file
                -overwrite                                   - Overwrite OCR configuration on disk
                -repair -add <filename> | -delete <filename> | -replace <current filename> -replacement <new filename>
                                                             - Repair OCR configuration on the local node
                -help                                        - Print out this help information

Note:
        * A log file will be created in $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
          you have file creation privileges in the above directory before running this tool.
        * Only -local -showbackup [manual] is supported.
        * Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry


OLSNODES Options

Note that the following only shows the available olsnodes syntax. For additional explanation on what these commands do, see the Oracle Documentation.

$ ./olsnodes -h
Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]
where
        -n      print node number with the node name
        -p      print private interconnect address for the local node
        -i      print virtual IP address with the node name
        <node>  print information for the specified node
        -l      print information for the local node
        -s      print node status - active or inactive
        -t      print node type - pinned or unpinned
        -g      turn on logging
        -v      Run in debug mode; use at direction of Oracle Support only.
        -c      print clusterware name
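For example, combining the options to show node number, status and pin state (the output below is illustrative):

$ ./olsnodes -n -s -t
racbde1 1       Active  Unpinned
racbde2 2       Active  Unpinned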

Cluster Verification Options

Note that the following only shows the available cluvfy syntax. For additional explanation on what these commands do, see the Oracle Documentation.

Component Options:

$ ./cluvfy comp -list

USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

Valid components are:
        nodereach : checks reachability between nodes
        nodecon   : checks node connectivity
        cfs       : checks CFS integrity
        ssa       : checks shared storage accessibility
        space     : checks space availability
        sys       : checks minimum system requirements
        clu       : checks cluster integrity
        clumgr    : checks cluster manager integrity
        ocr       : checks OCR integrity
        olr       : checks OLR integrity
        ha        : checks HA integrity
        crs       : checks CRS integrity
        nodeapp   : checks node applications existence
        admprv    : checks administrative privileges
        peer      : compares properties with peers
        software  : checks software distribution
        asm       : checks ASM integrity
        acfs      : checks ACFS integrity
        gpnp      : checks GPnP integrity
        gns       : checks GNS integrity
        scan      : checks SCAN configuration
        ohasd     : checks OHASD integrity
        clocksync : checks Clock Synchronization
        vdisk     : check Voting Disk Udev settings
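For example, a single component check across all nodes (a sketch):

$ ./cluvfy comp nodecon -n all -verbose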

Stage Options:

$ ./cluvfy stage -list

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

Valid stage options and stage names are:
        -post hwos    : post-check for hardware and operating system
        -pre  cfs     : pre-check for CFS setup
        -post cfs     : post-check for CFS setup
        -pre  crsinst : pre-check for CRS installation
        -post crsinst : post-check for CRS installation
        -pre  hacfg   : pre-check for HA configuration
        -post hacfg   : post-check for HA configuration
        -pre  dbinst  : pre-check for database installation
        -pre  acfscfg : pre-check for ACFS Configuration
        -post acfscfg : post-check for ACFS Configuration
        -pre  dbcfg   : pre-check for database configuration
        -pre  nodeadd : pre-check for node addition
        -post nodeadd : post-check for node addition
        -post nodedel : post-check for node deletion
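For example, a post-installation check of the clusterware across all nodes (a sketch):

$ ./cluvfy stage -post crsinst -n all -verbose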

Database - RAC/Scalability Community

To discuss this topic further with Oracle experts and industry peers, we encourage you to review, join or start a discussion in the My Oracle Support Database - RAC/Scalability Community.

References

NOTE:1050693.1 - Troubleshooting 11.2 Clusterware Node Evictions (Reboots)
NOTE:1053970.1 - Troubleshooting 11.2 Grid Infrastructure root.sh Issues
NOTE:1054006.1 - CTSSD Runs in Observer Mode Even Though No Time Sync Software is Running
NOTE:1058357.1 - Oracle Clusterware 11g Release 2 (11.2) Technical White Paper (INTERNAL ONLY)
NOTE:184875.1 - How To Check The Certification Matrix for Real Application Clusters
NOTE:259301.1 - CRS and 10g/11.1 Real Application Clusters
NOTE:810394.1 - RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent)
NOTE:887522.1 - 11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained
NOTE:946332.1 - Unable To Create 10.1 or 10.2 or 11.1 (< 11gR2) ASM RAC Databases (ORA-29702) Using Brand New 11gR2 Grid Infrastructure Installation