
Page 1: 11gR2 RAC-Install Workshop Day1

Oracle 11g R2 Real Application Clusters

On Oracle Linux 5 update 7

Hands-on Workshop

January 2012

Page 2: 11gR2 RAC-Install Workshop Day1

Authors: Efraín Sánchez, Platform Technology Manager, Oracle Server Technologies, PTS

Contributors / Reviewers: André Sousa, Senior Technologist, Oracle Server Technologies, PTS

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
www.oracle.com

Oracle Corporation provides the software that powers the Internet.

Oracle is a registered trademark of Oracle Corporation. Various product and service names referenced herein may be trademarks of Oracle Corporation. All other product and service names mentioned may be trademarks of their respective owners.

Copyright © 2011 Oracle Corporation. All rights reserved.

Page 3: 11gR2 RAC-Install Workshop Day1

RAC workshop concepts and overview

During this RAC workshop you will set up a RAC cluster using Oracle Clusterware and Oracle Database 11g. The cluster will be set up on Oracle Enterprise Linux.

A cluster comprises multiple interconnected computers or servers that appear as if they are one server to end users and applications. Oracle Database 11g Real Application Clusters (RAC) enables the clustering of the Oracle Database. RAC uses the Oracle Clusterware for the infrastructure to bind multiple servers so that they operate as a single system.

Your first step in the workshop will be configuring the operating system for the Clusterware and RAC software.

Each server in the cluster will have one public network interface and one private network interface. The public network interface is the standard network connection, which connects the server to all of the other computers in your network. The private network interface is a private network connection shared by only the servers in the cluster. The private network interface is used by the Oracle Clusterware and RAC software to communicate between the servers in the cluster.
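For example, you can confirm which interfaces are present on each node before assigning roles (the eth0/eth1/eth2 names used later in this guide are typical, but your hardware may enumerate them differently):

# List every interface with its IP address and state (run as root on each node)
/sbin/ip addr show
# On older systems the equivalent is:
/sbin/ifconfig -a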

All of the database files in the cluster will be stored on shared storage. The shared storage allows multiple database instances, running on different servers, to access the same database information.

Your next step in the workshop will be to install the Oracle Clusterware, which binds multiple servers into a cluster. During the Clusterware install you will specify the location to create two Clusterware components: a voting disk to record node membership information and the Oracle Cluster Registry (OCR) to record cluster configuration information. The Clusterware install is performed on one server and will be automatically installed on the other servers in the cluster.

After the Clusterware is installed, you will install the Oracle Database and RAC software. The installer will automatically recognize that the Clusterware has already been installed. Like the Clusterware install, the database and RAC install is performed on one server and the software will be automatically installed on the other servers in the cluster.

A virtual IP address (VIP) is an alternate public address that client connections use instead of the standard public IP address. If a node fails, the node's VIP fails over to another node, where it cannot accept application connections; clients that attempt to connect to the VIP then receive a rapid connection refusal instead of waiting for TCP connect timeout messages.

After the database software is installed you will create a database using the Database Configuration Assistant.

Some parts of this workshop are based on the article “Build Your Own Oracle RAC 11g Cluster on Oracle Enterprise Linux and iSCSI” by Jeffrey Hunter.

Page 4: 11gR2 RAC-Install Workshop Day1

1.0.- Oracle Linux 5 Configuration Tasks

This step-by-step guide is targeted at those who are implementing Oracle Database 11g R2 RAC for applications that need High Availability, Scalability, Performance, Workload Management and Lower Total Cost of Ownership (TCO).

We hope going through this step-by-step guide to install Oracle Real Application Clusters (RAC) will be a great learning experience for those interested in trying out Oracle Grid technology. This guide is an example of installing a 2-node RAC cluster, but the same procedure applies to a single-instance database managed by the grid infrastructure services.

Expectations, Roles and Responsibilities

You are expected to have:
• a basic understanding of the Unix or Linux operating system and commands such as cp (copy).
• a basic understanding of the RAC architecture and its benefits.

This guide covers the roles of both the sysadmin and the DBA. You must have root privileges to perform some of the steps.

Operating System

This guide assumes the systems are pre-installed with either Oracle Enterprise Linux 5 or Red Hat Enterprise AS/ES 5. If your system does not have the correct operating system some steps will not work and the install may not perform as expected.

Storage

Oracle Real Application Clusters requires shared storage for its database files. This guide will use Oracle Automatic Storage Management (ASM) during the install for the storage management.

Page 5: 11gR2 RAC-Install Workshop Day1

Oracle required OS packages

Oracle has released the Unbreakable Enterprise Kernel for x86 32-bit and 64-bit servers; it is the default installation option, but you can still switch to the Red Hat compatible kernel.

Install required OS packages via local DVD yum repository:

1. Login as root user and open a terminal window, review the local-cdrom repository configuration using cat to display the file content:

[root@db01 ~]# cat /etc/yum.repos.d/cdrom.repo

Review the output:

[ol5_u7_base_cdrom]
name=Oracle Linux $releasever - U7 - $basearch - base cdrom
baseurl=file:///media/cdrom/Server/
gpgcheck=0
enabled=1

[ol5_u7_cluster_cdrom]
name=Oracle Linux $releasever - U7 - $basearch - cluster cdrom
baseurl=file:///media/cdrom/ClusterStorage/
gpgcheck=0
enabled=1

Insert the DVD or configure the ISO file in VirtualBox: right-click the cdrom icon on the VirtualBox status bar, click “Choose a virtual CD/DVD disk file” and select the corresponding ISO file.

Un-mount the cdrom from the current automount directory and re-mount it in /media/cdrom:

umount /dev/cdrom
mount /dev/cdrom /media/cdrom

Display current yum configuration:

yum repolist

You will get the following output:

Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
ol5_u7_base_cdrom            | 1.1 kB     00:00
ol5_u7_cluster_cdrom         | 1.1 kB     00:00
repo id                repo name                                     status
ol5_u7_base_cdrom      Oracle Linux 5 - U7 - i386 - base cdrom        2,471
ol5_u7_cluster_cdrom   Oracle Linux 5 - U7 - i386 - cluster cdrom        16
repolist: 2,487

Page 6: 11gR2 RAC-Install Workshop Day1

2. The oracle-validated package verifies and sets system parameters based on the configuration recommendations for Oracle Linux; the files it updates are:

/etc/sysctl.conf
/etc/security/limits.conf
/etc/modprobe.conf
/boot/grub/menu.lst

This package will modify module parameters and re-insert them; it also installs any packages required for Oracle Databases.

yum install oracle-validated

It is recommended that you also install these packages for compatibility with previous versions:

yum install libXp-devel openmotif22 openmotif

3. Install the Automatic Storage Management (ASM) packages:

yum install oracleasm-support oracleasm-2.6.18-274.el5

4. Clean all cached files from any enabled repository. It's useful to run it from time to time to make sure there is nothing using unnecessary space in /var/cache/yum.

yum clean all

Eject the cdrom and disable the ISO image so that the OS installation does not boot on the next reboot.

Optionally, you can configure the public yum repository to install new updates in the future; skip this step for the workshop:

5. Disable the current local-cdrom repository by changing enabled=1 to enabled=0.

6. Download and install the Oracle Linux 5 repo file to your system:
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-el5.repo

7. Enable both [ol5_u7_base] repositories in the yum configuration file by changing enabled=0 to enabled=1 in those sections (a sed sketch for this follows step 8).

8. To update your system use the following yum command:

# yum update
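For step 7, a sed one-liner can flip the flag inside a given stanza; this is a minimal sketch that assumes the stanza header is exactly [ol5_u7_base] and that GNU sed is available:

# Within the [ol5_u7_base] section (up to the next [section] header), set enabled=1
sed -i '/^\[ol5_u7_base\]/,/^\[/ s/enabled=0/enabled=1/' /etc/yum.repos.d/public-yum-el5.repo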

Page 7: 11gR2 RAC-Install Workshop Day1

1.1.- Kernel Parameters

As root, update the kernel parameters in the /etc/sysctl.conf file on both db1 and db2.

Open a terminal window, su to root and run the following commands. Alternatively, if you are not comfortable with scripting, use vi to update /etc/sysctl.conf directly.

Review the modifications made by the oracle-validated package

vi /etc/sysctl.conf

Optionally, for production systems, you can configure the OS to reboot in case of a kernel panic:

# Enables system reboot in 60 seconds after a kernel panic
kernel.panic = 60

Review also the following files modified by the oracle-validated package on each node of the cluster:

/etc/security/limits.conf
/etc/pam.d/login

Open a terminal window and run the following commands. Alternatively use vi to update the values in the default profile file.

cat >> /etc/profile <<'EOF'
# Oracle settings for 11g
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
EOF

Run the following command to make the kernel parameter changes effective immediately instead of rebooting the machine.

/sbin/sysctl -p

Disable secure Linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows:

SELINUX=disabled

Disable the firewall if it was not disabled at OS install.

/etc/rc.d/init.d/iptables stop
chkconfig iptables off

Repeat the same procedure on each database node in the cluster.

Page 8: 11gR2 RAC-Install Workshop Day1

1.2.- Check Installed and additional packages

Everything should be ready, but let's check the required packages:

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' compat-db compat-gcc-34 compat-gcc-34-c++ compat-libstdc++-296 compat-libstdc++-33

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils control-center gcc gcc-c++ glibc glibc-common glibc-headers glibc-devel libstdc++ libstdc++-devel make sysstat libaio libaio-devel

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' expat fontconfig freetype zlib libXp libXp-devel openmotif22 openmotif elfutils-libelf elfutils-libelf-devel unixODBC unixODBC-devel

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' oracleasm-support oracleasm-2.6.18-274.el5
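If you prefer a single pass/fail summary instead of reading the rpm output, a small loop works; this is a sketch over a representative subset of the packages above:

# Print only the packages that are NOT installed
for pkg in binutils gcc gcc-c++ glibc glibc-devel libstdc++ libstdc++-devel \
           make sysstat libaio libaio-devel elfutils-libelf-devel \
           unixODBC unixODBC-devel oracleasm-support; do
  rpm -q "$pkg" > /dev/null 2>&1 || echo "MISSING: $pkg"
done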

1.3.- Network Configuration

As root, edit the /etc/hosts file on one node and include the host IP addresses, VIP addresses and private network IP addresses of all nodes in the cluster as follows, erasing the current configuration first.

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost

# Admin Network
10.0.3.11 db1
10.0.3.12 db2

# Private Network
10.0.4.11 db1-priv
10.0.4.12 db2-priv
10.0.4.21 nas01

# Public network is configured on DNS
10.0.5.254 dns01

After the /etc/hosts file is configured on db1, copy it to the other node(s) (db2) using scp. You will be prompted for the root password of the remote node(s), for example:

scp /etc/hosts db2:/etc/hosts

As root, verify the network configuration by pinging db1 from db2 and vice versa; run the following commands on each node.

ping -c 1 db1
ping -c 1 db2

ping -c 1 db1-priv
ping -c 1 db2-priv
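The four checks can also be wrapped in a loop that reports a clear status per host; a minimal sketch:

# Ping every cluster alias once and report the result
for host in db1 db2 db1-priv db2-priv; do
  if ping -c 1 "$host" > /dev/null 2>&1; then
    echo "$host: OK"
  else
    echo "$host: FAILED"
  fi
done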

Page 9: 11gR2 RAC-Install Workshop Day1

Note that you will not be able to ping the virtual IPs (db1-vip, etc.) until after the clusterware is installed, up and running.

Check that no gateway is defined for the private interconnect.

If you find any problems, run the network configuration program as the root user:

/usr/bin/system-config-network

Verify MTU size for the private network interface

To set the current MTU size:

ifconfig eth1 mtu 1500

To make this change permanent, add MTU=1500 at the end of the eth1 configuration file:

cat >> /etc/sysconfig/network-scripts/ifcfg-eth1 <<EOF
MTU=1500
EOF

Execute the same command on the second node.

Configure DNS name resolution

cat > /etc/resolv.conf <<EOF
search local.com
options timeout:1
nameserver 10.0.5.254
EOF

Execute the following command to test the DNS availability.

nslookup db-cluster-scan

Server:  10.0.5.254
Address: 10.0.5.254#53

Name:    db-cluster-scan.local.com
Address: 10.0.5.20

Page 10: 11gR2 RAC-Install Workshop Day1

1.4.- Configure Cluster Time Synchronization Service - (CTSS) and Hangcheck Timer

If the Network Time Protocol (NTP) service is not available or properly configured, you can use the Cluster Time Synchronization Service to provide synchronization in the cluster, but first you need to deconfigure and deinstall the current NTP configuration.

To deactivate the NTP service, you must stop the existing ntpd service, disable it from the initialization sequences and remove the ntp.conf file. To complete these steps on Oracle Enterprise Linux, run the following commands as the root user on both Oracle RAC nodes:

/sbin/service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.original

Also remove the following file (it maintains the pid for the NTP daemon):

rm /var/run/ntpd.pid

Verify hangcheck-timer (skip this step for a VirtualBox installation)

The hangcheck-timer module monitors the Linux kernel for extended operating system hangs that could affect the reliability of a RAC node and cause a database corruption. If a hang occurs, the module restarts the node in seconds.

To see if hangcheck-timer is running, run the following command on both nodes.

/sbin/lsmod | grep hang

If nothing is returned, run the following to configure it:

echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modprobe.conf

modprobe hangcheck-timer

grep Hangcheck /var/log/messages | tail -2

Run the following command as root to start hangcheck-timer automatically on system startup

echo "/sbin/modprobe hangcheck-timer" >> /etc/rc.local

Page 11: 11gR2 RAC-Install Workshop Day1

1.5.- Create groups and oracle user

The following O/S groups will be used during installation:

Description                                OS Group Name  OS Users in Group  Oracle Privilege  Oracle Group Name
Oracle Inventory and Software Owner        oinstall       grid, oracle       -                 -
Oracle Automatic Storage Management Group  asmadmin       grid               SYSASM            OSASM
ASM Database Administrator Group           asmdba         grid, oracle       SYSDBA for ASM    OSDBA for ASM
ASM Operator Group                         asmoper        grid               SYSOPER for ASM   OSOPER for ASM
Database Administrator                     dba            oracle             SYSDBA            OSDBA
Database Operator                          oper           oracle             SYSOPER           OSOPER

As root on both db1 and db2, create the OS groups and the grid and oracle users.

/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
/usr/sbin/groupadd oper

/usr/sbin/groupadd asmadmin
/usr/sbin/groupadd asmdba
/usr/sbin/groupadd asmoper

The following commands create the grid and oracle users and their home directories, with oinstall as the default group and the appropriate secondary groups. The users' default shell is bash. The useradd man page provides additional details on the command:

useradd -g oinstall -G asmadmin -m -s /bin/bash -d /home/grid -r grid
usermod -g oinstall -G asmadmin,asmdba,asmoper,dba grid

useradd -g oinstall -G dba -m -s /bin/bash -d /home/oracle -r oracle
usermod -g oinstall -G dba,asmadmin,asmdba -s /bin/bash oracle

Set the password for the oracle and grid accounts; use “welcome1”.

passwd oracle

Changing password for user oracle.
New UNIX password: <enter password>
Retype new UNIX password: <enter password>
passwd: all authentication tokens updated successfully.

passwd grid

Verify that the attributes of the oracle and grid users are identical on both db1 and db2:

id oracle
id grid

Page 12: 11gR2 RAC-Install Workshop Day1

The command output should be as follows:

[root@db01 ~]# id oracle

uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54324(asmadmin),54325(asmdba)

[root@db01 ~]# id grid

uid=102(grid) gid=54321(oinstall) groups=54321(oinstall),54324(asmadmin),54325(asmdba),54326(asmoper)

Enable xhost permissions in case you want to log in as root and switch to the oracle or grid user:

xhost +

Re-login or switch to the oracle OS user and edit the .bash_profile file with the following:

umask 022
if [ -t 0 ]; then
  stty intr ^C
fi

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/rac
#export ORACLE_SID=<your sid>

export ORACLE_PATH=/u01/app/oracle/common/oracle/sql
export ORACLE_TERM=xterm

PATH=${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

THREADS_FLAG=native; export THREADS_FLAG

Copy the profile to db2

scp /home/oracle/.bash_profile oracle@db2:/home/oracle

Page 13: 11gR2 RAC-Install Workshop Day1

Log in or switch to the grid user and edit .bash_profile with the following:

umask 022
if [ -t 0 ]; then
  stty intr ^C
fi

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/grid/11.2.0/infra
#export ORACLE_SID=<your sid>

export CV_NODE_ALL=db1,db2
export CVUQDISK_GRP=oinstall

export ORACLE_PATH=/u01/app/oracle/common/oracle/sql
export ORACLE_TERM=xterm

PATH=${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

THREADS_FLAG=native; export THREADS_FLAG

Copy the profile to db2

scp /home/grid/.bash_profile grid@db2:/home/grid

Page 14: 11gR2 RAC-Install Workshop Day1

1.6.- Create install directories, as root user (on each node)

rm -rf /u01/app

mkdir -p /u01/app/oracle/product/11.2.0/rac
chown -R oracle:oinstall /u01/app

mkdir -p /u01/app/grid/11.2.0/infra
chown -R grid:oinstall /u01/app/grid

chmod -R 775 /u01/

Page 15: 11gR2 RAC-Install Workshop Day1

1.7.- Configure SSH on all nodes

The installer uses the ssh and scp commands during installation to run remote commands and copy files to the other cluster nodes. You must configure ssh so that these commands do not prompt for a password.

OPTIONAL: Oracle 11gR2 installer now configures ssh keys across all nodes in the cluster, but if you want to manually configure them, use the following steps:

Logout and login as oracle in db1

NOTE: If you switch to the oracle user, you must use the '-' option, for example: su - oracle, so that the shell environment is correctly set.

mkdir .ssh

Create RSA and DSA type public and private keys on both nodes.

ssh-keygen -t rsa
ssh db2 /usr/bin/ssh-keygen -t rsa

Accept the default location for the key file.
Leave the pass phrase blank.

This command writes the public key to the /home/oracle/.ssh/id_rsa.pub file and the private key to the /home/oracle/.ssh/id_rsa file.

ssh-keygen -t dsa
ssh db2 /usr/bin/ssh-keygen -t dsa

Accept the default location for the key file.
Leave the pass phrase blank.

This command writes the public key to the /home/oracle/.ssh/id_dsa.pub file and the private key to the /home/oracle/.ssh/id_dsa file.

On node 1:

Concatenate the RSA and DSA public keys of both nodes into one file called authorized_keys with the following commands, executed one by one.

ssh db1 cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
ssh db1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

ssh db2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh db2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Copy the authorized_keys file from node 1 to node 2.

scp ~/.ssh/authorized_keys db2:~/.ssh/authorized_keys
ssh db1 chmod 600 ~/.ssh/authorized_keys
ssh db2 chmod 600 ~/.ssh/authorized_keys

Page 16: 11gR2 RAC-Install Workshop Day1

Check the connections with the following commands on both nodes. Execute each line one at a time and choose to permanently add the host to the list of known hosts.

ssh db1 date
ssh db2 date
ssh db1-priv date
ssh db2-priv date

Try the next line to see if everything works

ssh db1 date; ssh db2 date; ssh db1-priv date; ssh db2-priv date

Execute the same procedure for user “grid”
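To re-run the whole equivalence check in one shot for either user (run it once as oracle and once as grid), a small loop such as the following sketch is convenient:

# Each ssh must return the remote date without prompting for a password
for host in db1 db2 db1-priv db2-priv; do
  ssh "$host" date || echo "ssh to $host failed"
done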

Page 17: 11gR2 RAC-Install Workshop Day1

1.8.- iSCSI Configuration

In this section, we will be using the static discovery method. We first need to verify that the iSCSI software packages are installed on our servers before we can proceed further.

Enabling the Name Service Cache Daemon

To allow Oracle Clusterware to better tolerate network failures with NAS devices or NFS mounts, enable the Name Service Cache Daemon (nscd).

To change the configuration to ensure that nscd is on for both run level 3 and run level 5, enter the following command as root:

chkconfig --level 35 nscd on
service nscd start
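You can confirm the change took effect; chkconfig prints the on/off state per runlevel:

# nscd should show "on" for runlevels 3 and 5
chkconfig --list nscd
service nscd status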

Configure UDEV Rules

Execute the following to create the rule script:

cat >> /etc/udev/rules.d/99-iscsi.rules <<EOF
# iscsi devices
KERNEL=="sd*", BUS=="scsi", PROGRAM="/usr/local/bin/iscsidev %b", SYMLINK+="iscsi/%c{1}.p%n"
EOF

Use vi to create the following script:

[root@db1 ~]# vi /usr/local/bin/iscsidev

#!/bin/sh
BUS=${1}
HOST=${BUS%%:*}
LUN=`echo ${BUS} | cut -d":" -f4`
[ -e /sys/class/iscsi_host ] || exit 1
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session:session*/targetname"
target_name=`cut -d":" -f2 ${file}`
if [ -z "${target_name}" ]; then
  exit 1
fi
echo "${target_name} ${LUN}"

Start iscsi services:

chmod a+x /usr/local/bin/iscsidev
chkconfig iscsid on
service iscsid start
setsebool -P iscsid_disable_trans=1

iscsiadm -m discovery -t sendtargets -p nas01
service iscsi restart

Page 18: 11gR2 RAC-Install Workshop Day1

Display running sessions:

iscsiadm -m session

tcp: [1] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk06
tcp: [10] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk08
tcp: [11] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk03
tcp: [12] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk01
tcp: [2] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk04
tcp: [3] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk02
tcp: [4] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk07
tcp: [5] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk09
tcp: [6] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk11
tcp: [7] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk12
tcp: [8] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk10
tcp: [9] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk05

Check that the /dev/iscsi links are created; notice the order assigned.

ls -l /dev/iscsi

Copy iscsi scripts from db1.

scp /etc/udev/rules.d/99-iscsi.rules db2:/etc/udev/rules.d
scp /usr/local/bin/iscsidev db2:/usr/local/bin

Execute the configuration on the remaining nodes, also as root:

chkconfig iscsid on
service iscsid start
setsebool -P iscsid_disable_trans=1

iscsiadm -m discovery -t sendtargets -p nas01

service iscsi restart
ls -l /dev/iscsi

Now we are going to partition the disks; we will use the first partition for the grid infrastructure diskgroup.

Partition 1: 100 MB
Partition 2: remaining space

First Disk

[root@db01 ~]# fdisk /dev/iscsi/nas01.disk01.p

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n

Page 19: 11gR2 RAC-Install Workshop Day1

Platform Technology Solutions, Latin America

Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1009, default 1): <press enter>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009): +100M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (49-1009, default 49): <press enter>
Using default value 49
Last cylinder or +size or +sizeM or +sizeK (472-1009, default 1009): <press enter>
Using default value 1009

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

A method for cloning a partition table in Linux is to use sfdisk; we are going to apply the same configuration to all disks.

sfdisk -d /dev/iscsi/nas01.disk01.p > disk01part.txt

sfdisk /dev/iscsi/nas01.disk02.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk03.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk04.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk05.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk06.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk07.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk08.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk09.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk10.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk11.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk12.p < disk01part.txt
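Since the eleven sfdisk invocations differ only in the disk number, an equivalent loop is shown below as a sketch (it assumes the /dev/iscsi symlinks follow the nas01.diskNN.p naming used above):

# Replay disk01's partition table onto disks 02 through 12
for n in 02 03 04 05 06 07 08 09 10 11 12; do
  sfdisk /dev/iscsi/nas01.disk${n}.p < disk01part.txt
done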

Page 20: 11gR2 RAC-Install Workshop Day1

Initialize all block devices with the following commands from db1:

dd if=/dev/zero of=/dev/iscsi/nas01.disk01.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk01.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk02.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk02.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk03.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk03.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk04.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk04.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk05.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk05.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk06.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk06.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk07.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk07.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk08.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk08.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk09.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk09.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk10.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk10.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk11.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk11.p2 bs=1000k count=99

dd if=/dev/zero of=/dev/iscsi/nas01.disk12.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk12.p2 bs=1000k count=99
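The same zeroing can be expressed as a nested loop over all twelve disks and both partitions; a minimal equivalent sketch:

# Zero the first ~99 MB of each partition on every disk
for n in 01 02 03 04 05 06 07 08 09 10 11 12; do
  for p in p1 p2; do
    dd if=/dev/zero of=/dev/iscsi/nas01.disk${n}.${p} bs=1000k count=99
  done
done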

You'll need to propagate changes on node 2 by executing:

For a SAN configuration:

partprobe

For an iscsi configuration (Virtualbox):

service iscsi restart

Page 21: 11gR2 RAC-Install Workshop Day1

1.9.- ASMLib configuration

The Oracle ASMLib kernel driver is now included in the Unbreakable Enterprise Kernel, so no driver package needs to be installed when using that kernel. The oracleasm-support and oracleasmlib packages still need to be installed.

The package oracleasmlib can be downloaded directly from:
http://www-content.oracle.com/technetwork/topics/linux/downloads/index-088143.html

Make sure the two ASM packages are installed on both nodes; because the ASMLib driver is implemented in the new Oracle Unbreakable Kernel, we no longer need to install the oracleasmlib package:

[root@db01 ~]# rpm -qa | grep oracleasm
oracleasm-2.6.18-274.el5-2.0.5-1.el5
oracleasm-support-2.1.7-1.el5

Run the following command to configure ASM

[root@db1 ~]# /etc/init.d/oracleasm configure

Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

Every disk that ASMLib is going to access needs to be made available. This is accomplished by creating the ASM disks on db1:

/etc/init.d/oracleasm createdisk NAS01_GRID01 /dev/iscsi/nas01.disk01.p1
/etc/init.d/oracleasm createdisk NAS01_GRID02 /dev/iscsi/nas01.disk02.p1
/etc/init.d/oracleasm createdisk NAS01_GRID03 /dev/iscsi/nas01.disk03.p1
/etc/init.d/oracleasm createdisk NAS01_GRID04 /dev/iscsi/nas01.disk04.p1
/etc/init.d/oracleasm createdisk NAS01_GRID05 /dev/iscsi/nas01.disk05.p1
/etc/init.d/oracleasm createdisk NAS01_GRID06 /dev/iscsi/nas01.disk06.p1
/etc/init.d/oracleasm createdisk NAS01_GRID07 /dev/iscsi/nas01.disk07.p1
/etc/init.d/oracleasm createdisk NAS01_GRID08 /dev/iscsi/nas01.disk08.p1
/etc/init.d/oracleasm createdisk NAS01_GRID09 /dev/iscsi/nas01.disk09.p1
/etc/init.d/oracleasm createdisk NAS01_GRID10 /dev/iscsi/nas01.disk10.p1
/etc/init.d/oracleasm createdisk NAS01_GRID11 /dev/iscsi/nas01.disk11.p1
/etc/init.d/oracleasm createdisk NAS01_GRID12 /dev/iscsi/nas01.disk12.p1

/etc/init.d/oracleasm createdisk NAS01_DATA01 /dev/iscsi/nas01.disk01.p2
/etc/init.d/oracleasm createdisk NAS01_DATA02 /dev/iscsi/nas01.disk02.p2
/etc/init.d/oracleasm createdisk NAS01_DATA03 /dev/iscsi/nas01.disk03.p2
/etc/init.d/oracleasm createdisk NAS01_DATA04 /dev/iscsi/nas01.disk04.p2
/etc/init.d/oracleasm createdisk NAS01_DATA05 /dev/iscsi/nas01.disk05.p2
/etc/init.d/oracleasm createdisk NAS01_DATA06 /dev/iscsi/nas01.disk06.p2
/etc/init.d/oracleasm createdisk NAS01_DATA07 /dev/iscsi/nas01.disk07.p2
/etc/init.d/oracleasm createdisk NAS01_DATA08 /dev/iscsi/nas01.disk08.p2
/etc/init.d/oracleasm createdisk NAS01_DATA09 /dev/iscsi/nas01.disk09.p2
/etc/init.d/oracleasm createdisk NAS01_DATA10 /dev/iscsi/nas01.disk10.p2
/etc/init.d/oracleasm createdisk NAS01_DATA11 /dev/iscsi/nas01.disk11.p2
/etc/init.d/oracleasm createdisk NAS01_DATA12 /dev/iscsi/nas01.disk12.p2
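Equivalently, both sets of createdisk calls collapse into one loop; a sketch assuming the same disk naming as above:

# Mark partition 1 of each disk for the GRID group and partition 2 for DATA
for n in 01 02 03 04 05 06 07 08 09 10 11 12; do
  /etc/init.d/oracleasm createdisk NAS01_GRID${n} /dev/iscsi/nas01.disk${n}.p1
  /etc/init.d/oracleasm createdisk NAS01_DATA${n} /dev/iscsi/nas01.disk${n}.p2
done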

Page 22: 11gR2 RAC-Install Workshop Day1

List all the disks:

/etc/init.d/oracleasm listdisks

List all the ASM persistent links:

ls /dev/oracleasm/disks/

We also have to execute the ASMLib configuration on db2.

[root@db2~]# /etc/init.d/oracleasm configure

When a disk is marked with ASMLib, the other nodes have to be refreshed; just run the 'scandisks' option on db2:

# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks [ OK ]

Existing disks can now be listed:

[root@db2 ~]# /etc/init.d/oracleasm listdisks

NAS01_GRID01
NAS01_GRID02
NAS01_GRID03
NAS01_GRID04

Because we are going to use ASMLib support, we no longer need to assign permissions to the block devices upon reboot in the /etc/rc.local file; ASMLib will take care of that.

Page 23: 11gR2 RAC-Install Workshop Day1

1.10.- Verify SCAN name with DNS Server

For the purpose of this workshop, we already configured the SCAN address resolution in the VM that acts as the iSCSI and DNS server on the client access network (10.0.5.254).

The nslookup binary will be executed by the Cluster Verification Utility during the Oracle grid infrastructure install.

Verify the command output; it should look like the following:

[root@db1 ~]# nslookup db-cluster-scan

Server:  10.0.5.254
Address: 10.0.5.254#53

Name:    db-cluster-scan.local.com
Address: 10.0.5.20

Remember to perform these actions on both Oracle RAC nodes.

Page 24: 11gR2 RAC-Install Workshop Day1

1.11.- Configure VNC Server (Optional)

First check if the VNC packages are already installed on your system; open a terminal and type:

$ rpm -qa|grep vnc

We need to add at least one VNC user. Open the file /etc/sysconfig/vncservers as root and add the information shown:

VNCSERVERS="1:root"
VNCSERVERARGS[1]="-geometry 1024x768 -depth 16"

To add some security we need to set a password that must be given before a connection can be established; open a terminal and type:

$ vncpasswd

To start the server, type the command 'vncserver' and the session you wish to start, in this case "root" (if you have set up more than one entry in the /etc/sysconfig/vncservers file):

[root@db1 ~]# vncserver :1

Now the server is started and a user can connect; however, they will get a plain grey desktop by default because the connection does not start a new X session. To fix this we need to edit the startup script in the .vnc folder in your home directory.

vi ~/.vnc/xstartup

# Uncomment the following two lines for normal desktop:
unset SESSION_MANAGER
exec /etc/X11/xinit/xinitrc

As the file says, make sure the two lines at the top are uncommented by removing the leading # sign. Next we need to restart vncserver to pick up the changes we just made. To restart vncserver we kill the process and start a new one as root:

$ vncserver -kill :1
$ vncserver :1

To start the viewer, type:

vncviewer <ip address>:1

Page 25: 11gR2 RAC-Install Workshop Day1

2.0 Oracle Software, Pre-Installation Tasks

The Cluster Verification Utility (CVU) automatically checks all nodes that are specified, but first we need to install the cvuqdisk rpm required by the CVU on both nodes.

su -
export CVUQDISK_GRP=asmadmin
cd /install/11gR2/grid/rpm
rpm -ivh cvuqdisk-*

ssh root@db2 mkdir -p /install/11gR2/grid/rpm
scp cvuqdisk-* root@db2:/install/11gR2/grid/rpm

ssh root@db2
export CVUQDISK_GRP=asmadmin
rpm -ivh /install/11gR2/grid/rpm/cvuqdisk-*

Re-login to the OS desktop graphical user interface as the “grid” user and execute the following commands (if you are installing only one node, replace the -n all switch with -n db1):

cd /install/11gR2/grid

Verify node connectivity (only if you configured the ssh equivalence):

./runcluvfy.sh comp nodecon -n all -verbose

Perform post-checks for hardware and operating system setup:

./runcluvfy.sh stage -post hwos -n all -verbose

Check system requirements for CRS, displaying only failed checks:

./runcluvfy.sh comp sys -n all -p crs -verbose | grep failed

Check warnings and errors.

Ignore memory and kernel parameter errors; the installer will generate a script for you to run as the root user to change them to the correct kernel values.

You may need to update some rpms as the root user on BOTH NODES.

Perform overall pre-checks for cluster services setup:

./runcluvfy.sh stage -pre crsinst -n all

Check the time difference between the nodes; if it is more than one second, update the time manually as the root user using an NTP server and update the hardware clock:

[root@db1 ~]# /usr/sbin/ntpdate north-america.pool.ntp.org
[root@db1 ~]# hwclock --systohc
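A quick way to compare the clocks side by side (assuming the ssh user equivalence configured earlier, e.g. as the grid user) is:

# Print each node's current time; the values should match within a second
for host in db1 db2; do
  echo -n "$host: "; ssh "$host" date
done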

If needed, add 1024 MB of extra swap to avoid installer warnings:

[root@db1 ~]# dd if=/dev/zero of=/extraswap bs=1M count=1024
[root@db1 ~]# mkswap /extraswap
[root@db1 ~]# swapon /extraswap
[root@db1 ~]# swapon -s

Page 26: 11gR2 RAC-Install Workshop Day1

2.1 Install 11g Grid Infrastructure, formerly 10g Cluster Ready Services (CRS)

The installer needs to be run from one node in the cluster under an X environment. Run the following steps in VNC (or another X client) on only the first node in the cluster as grid user.

Review the .bash_profile configuration:

more ~/.bash_profile

Run the Oracle Universal Installer:

/install/11gR2/grid/runInstaller

Screen Name Response

Select Installation Option Select "Install and Configure Grid Infrastructure for a Cluster"

Select Installation Type Select "Advanced Installation"

Select Product Languages Click Next

Grid Plug and Play Information

Cluster Name: db-cluster
SCAN Name: db-cluster-scan
SCAN Port: 1521

Un-check only the option to "Configure GNS", then click Next

Cluster Node Information Click the "Add" button to add "db2" and its virtual IP address "db2-vip", Click Next

Specify Network Interface Usage

Identify the network interface to be used for the "Public" and "Private" network. Make any changes necessary to match the values in the table below:

Interface  Subnet    Type
eth1       10.0.4.0  Private
eth2       10.0.5.0  Public
eth0       10.0.3.0  Do Not Use

Storage Option Information Select "Automatic Storage Management (ASM)", Click Next

Create ASM Disk Group

Change Discovery Path to

/dev/oracleasm/disks/*

Create an ASM Disk Group that will be used to store the Oracle Clusterware files according to the following values:

Disk Group Name: GRID
Redundancy: External Redundancy
Disks: NAS01_GRID*

Click Next

In a production environment it is always recommended to use at least “Normal Redundancy”.

Specify ASM Password: For the purpose of this workshop, choose "Use same passwords for these accounts", Click Next

Failure Isolation Support Select "Do not use Intelligent Platform Management Interface (IPMI)".

Privileged Operating System Groups

Make any changes necessary to match the values:
OSDBA for ASM: asmdba
OSOPER for ASM: asmoper
OSASM: asmadmin
Click Next

Page 27: 11gR2 RAC-Install Workshop Day1

Specify Installation Location

Review default values, those are preloaded from the environment variables we already set in the OS user.

Click Next

Create Inventory
Inventory Directory: /u01/app/oraInventory
oraInventory Group Name: oinstall

Prerequisite Checks

If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session.

Ignore the Device Checks for ASM error by selecting the Ignore All checkbox

Summary: Click Finish to start the installation.

Setup The installer performs the Oracle grid infrastructure setup process on both Oracle RAC nodes.

Execute Configuration scripts

Run the orainstRoot.sh script on both nodes in the RAC cluster:

[root@db1 ~]# /u01/app/oraInventory/orainstRoot.sh

[root@db2 ~]# /u01/app/oraInventory/orainstRoot.sh

In a new console window on each Oracle RAC node in the cluster, stay logged in as the root user account and run the root.sh script on both nodes, one at a time, starting with the node you are performing the install from:

[root@db1 ~]# /u01/app/grid/11.2.0/infra/root.sh

[root@db2 ~]# /u01/app/grid/11.2.0/infra/root.sh

The root.sh script can take several minutes to run. When running root.sh on the last node, you will receive output similar to the following, which signifies a successful install:

...
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

Finish At the end of the installation, click the [Close] button to exit the OUI.

Install verification

The installed Cluster Verification Utility can be used to verify the CRS installation.

Run the Cluster Verification Utility as the grid user; if running it for a single node, replace "all" with the node name, for example db1.

cluvfy stage -post crsinst -n all

Reboot the server; in the following section we'll execute some commands to make sure all services started successfully.

Page 28: 11gR2 RAC-Install Workshop Day1

Troubleshooting:

If something goes wrong when executing root.sh, you can review the log and repair the error; but before executing the script again, de-configure the node and then re-execute the root.sh script.

“Don't execute this if you finished the configuration correctly”

<grid_home>/crs/install/rootcrs.pl -deconfig -force

Page 29: 11gR2 RAC-Install Workshop Day1

2.2 Post installation procedures

Verify Oracle Clusterware Installation

After the installation of Oracle grid infrastructure, you should run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC cluster as the grid user.

Check CRS Status

[grid@db1 ~]$ crsctl check crs

CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check Clusterware Resources

[grid@db1 ~]$ crs_stat -t -v

Name           Type            R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.DATA.dg    ora....up.type  0/5    0/     ONLINE    ONLINE    db1
ora.GRID.dg    ora....up.type  0/5    0/     ONLINE    ONLINE    db1
ora....ER.lsnr ora....er.type  0/5    0/     ONLINE    ONLINE    db1
ora....N1.lsnr ora....er.type  0/5    0/0    ONLINE    ONLINE    db1
ora.asm        ora.asm.type    0/5    0/     ONLINE    ONLINE    db1
ora.cvu        ora.cvu.type    0/5    0/0    ONLINE    ONLINE    db1
ora....SM1.asm application     0/5    0/0    ONLINE    ONLINE    db1
ora....B1.lsnr application     0/5    0/0    ONLINE    ONLINE    db1
ora.db1.gsd    application     0/5    0/0    OFFLINE   OFFLINE
ora.db1.ons    application     0/3    0/0    ONLINE    ONLINE    db1
ora.db1.vip    ora....t1.type  0/0    0/0    ONLINE    ONLINE    db1
ora.gsd        ora.gsd.type    0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type  0/5    0/     ONLINE    ONLINE    db1
ora.oc4j       ora.oc4j.type   0/5    0/0    ONLINE    ONLINE    db1
ora.ons        ora.ons.type    0/3    0/     ONLINE    ONLINE    db1
ora.scan1.vip  ora....ip.type  0/0    0/0    ONLINE    ONLINE    db1

Check Cluster Nodes

[grid@db1 ~]$ olsnodes -n

Check Oracle TNS Listener Process on Both Nodes

[grid@db1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'

LISTENER_SCAN1
LISTENER

[grid@db2 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'

LISTENER

Page 30: 11gR2 RAC-Install Workshop Day1

Another method is to use the command:

[grid@db1 ~]$ srvctl status listener

Listener LISTENER is enabled
Listener LISTENER is running on node(s): db1

Confirming Oracle ASM Function for Oracle Clusterware Files

If you installed the OCR and voting disk files on Oracle ASM, then use the following command syntax as the Grid Infrastructure installation owner to confirm that your Oracle ASM installation is running:

[grid@db1 ~]$ srvctl status asm -a

ASM is running on db1,db2
ASM is enabled.

Check Oracle Cluster Registry (OCR)

[grid@db1 ~]$ ocrcheck

Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2224
         Available space (kbytes) :     259896
         ID                       :  670206863
         Device/File Name         :      +GRID
                                    Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

Check Voting Disk

[grid@db1 ~]$ crsctl query css votedisk

##  STATE    File Universal Id                  File Name                            Disk group
--  -----    -----------------                  ---------                            ----------
 1. ONLINE   6ba1b1a0d1ed4fcbbfafac71f335bec3   (/dev/oracleasm/disks/NAS01_GRID03)  [GRID]
Located 1 voting disk(s).

Note: To manage Oracle ASM or Oracle Net for 11g release 2 (11.2) or later installations, use the srvctl binary in the Oracle grid infrastructure home for a cluster (Grid home). Once Oracle Real Application Clusters (the Oracle database software) is installed, you cannot use the srvctl binary in the database home to manage Oracle ASM or Oracle Net, which reside in the Oracle grid infrastructure home.

Page 31: 11gR2 RAC-Install Workshop Day1

Voting Disk Management

In prior releases, it was highly recommended to back up the voting disk using the dd command after installing the Oracle Clusterware software. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using dd is not supported and may result in the loss of the voting disk.

Backing up the voting disks in Oracle Clusterware 11g release 2 is no longer required. The voting disk data is automatically backed up in OCR as part of any configuration change and is automatically restored to any voting disk added.
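For example, you can list the automatic OCR backups (which now also protect the voting disk data) with ocrconfig; the Grid home path below is the one used throughout this guide:

[root@db1 ~]# /u01/app/grid/11.2.0/infra/bin/ocrconfig -showbackup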

To learn more about managing the voting disks, Oracle Cluster Registry (OCR), and Oracle Local Registry (OLR), please refer to the Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2).

Back Up the root.sh Script

Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle home directory, then the installer updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, then you can recover it from the root.sh file copy.

Back up the root.sh file on both Oracle RAC nodes as root:

[root@db1 ~]# cd /u01/app/grid/11.2.0/infra
[root@db1 infra]# cp root.sh root.sh.db1

[root@db2 ~]# cd /u01/app/grid/11.2.0/infra
[root@db2 infra]# cp root.sh root.sh.db2

In order for JDBC Fast Connection Failover to work, you must start the Global Service Daemon (GSD) for the first time on each node as the root user; this step is optional.

[root@db1 ~]# /u01/app/grid/11.2.0/infra/bin/gsdctl start

Next time you reboot the server or restart cluster services, GSD services will start automatically.

Page 32: 11gR2 RAC-Install Workshop Day1

3.0 ASM Disk-groups Provisioning

Run the ASM Configuration Assistant (asmca) as the grid user from only one node in the cluster (db1) to create the additional ASM disk group which will be used to create the clustered database.

During the installation of Oracle grid infrastructure, we configured one ASM disk group named +GRID which was used to store the Oracle clusterware files (OCR and voting disk).

In this section, we will create an additional ASM disk group using the ASM Configuration Assistant (asmca). This new ASM disk group will be used later in this guide when creating the clustered database.

The first ASM disk group will be named +DATA and will be used to store all Oracle physical database files (data, online redo logs, control files, archived redo logs).

Normally a second ASM disk group is created for the Fast Recovery Area named +FLASH, but for this lab we will use only one diskgroup.

Before starting the ASM Configuration Assistant, log in to db1 as the owner of the Oracle Grid Infrastructure software, which for this guide is grid. You can connect either from a remote client (SSH or Telnet to db1 from a workstation configured with an X server) or directly from the console.

Update ASMSNMP password

As grid user, execute the following commands:

[grid@db1 ~]$ export ORACLE_SID=+ASM1
[grid@db1 ~]$ asmcmd

ASMCMD> lspwusr

Username  sysdba  sysoper  sysasm
SYS       TRUE    TRUE     TRUE
ASMSNMP   TRUE    FALSE    FALSE

ASMCMD> orapwusr --modify --password sys
Enter password: manager

ASMCMD> orapwusr --modify --password asmsnmp
Enter password: manager

Page 33: 11gR2 RAC-Install Workshop Day1

Create Additional ASM Disk Groups using ASMCA

Perform the following tasks as the grid user to create the additional ASM disk group:

[grid@db1 ~]$ asmca &

Screen Name Response

Disk Groups From the "Disk Groups" tab, click the "Create" button.

Create Disk Group

The "Create Disk Group" dialog should show two of the ASMLib volumes we created earlier in this guide.

When creating the database ASM disk group, use "DATA" for the "Disk Group Name".

In the "Redundancy" section, choose "External Redundancy", for production is recommended at least normal redundancy.

Finally, check all the ASMLib volumes remaining in the "Select Member Disks" section. If necessary, change the Disk Discovery Path to:

/dev/oracleasm/disks/*

After verifying all values in this dialog are correct, click the [OK] button.

Disk Groups: After creating the ASM disk group, you will be returned to the initial dialog; if necessary, you can create additional diskgroups.

Disk Groups: Exit the ASM Configuration Assistant by clicking the [Exit] button.

Congratulations, you finished the first installation stage, see you tomorrow for the next lab.