Installation of Oracle RAC on AIX

Table of Contents

1. HARDWARE ARCHITECTURE
   1.1 Overall configuration
   1.2 DB servers: IBM pSeries Servers (p595)
   1.3 Operating System
   1.4 Network Infrastructure
   1.5 AIX virtual I/O Disks
2. BASIC Database Information for IBPS
   2.1 DC and DR database - extended RAC
3. Way to Implement Oracle 10g RAC on AIX 5L
4. CHECK LIST TO USE AND FOLLOW
5. PREPARING THE SYSTEM
   5.1 Hardware Requirements
   5.2 Software Requirements
   5.3 Tuning the AIX System Environment
   5.4 Users and Groups
   5.5 Network Configuration
   5.6 Local Disk for Oracle Code
   5.7 Node Time Requirement
   5.8 User Equivalence Setup
   5.9 Running the rootpre.sh script
   5.10 Update .profile (User Environment)
6. INSTALLING ORACLE CLUSTERWARE
   6.1 Verify Oracle Clusterware Requirements with CVU
   6.2 Preparing to Install Oracle CRS with OUI
   6.3 Confirming Oracle Clusterware Function
   6.4 Oracle Clusterware Postinstallation Procedures
7. TASKS FOR AFTER INSTALLATION
   7.1 Tuning parameter file
8. Installation Screenshots
   8.1 Two-node extended RAC (DC)
   8.2 Two-node extended RAC (DR)

Upload: hung-nguyen
Post on 25-Nov-2015


1. HARDWARE ARCHITECTURE

For our infrastructure, we use a cluster composed of five IBM pSeries (p595) servers. This chapter will be revised after the hardware implementation is completed.

1.1 Overall configuration

1.2 DB servers: IBM pSeries Servers (p595)

This is the IBM pSeries server we used for the installation:

Server Configurations
Storage Configurations

1.3 Operating System

The operating system must be installed the same way on all nodes of the cluster: same version, same maintenance level, same APARs, and same fileset levels. Supported OS version: AIX 5L 5.3, ML 02 or later.

1.4 Network Infrastructure

You must have three network addresses for each node:
- A public IP address.
- A virtual IP (VIP) address, used by applications for failover in the event of node failure.
- A private IP address, used by Oracle Clusterware and Oracle RAC for internode communication.

The virtual IP address has the following requirements:
- The IP address and hostname are currently unused (it can be registered in DNS, but it should not be reachable by the ping command).
- The virtual IP address is on the same subnet as your public interface.

The private address has the following requirements:
- It should be on a subnet reserved for private networks, such as 10.0.0.0 or 192.168.0.0.
- It should use dedicated switches or a physically separate private network, reachable only by the cluster member nodes, preferably using high-speed NICs.
- The same private interfaces must be used for both the Oracle Clusterware and RAC private IP addresses.

Ping all IP addresses. The public and private IP addresses should respond to ping commands; the VIP address should not.

1.5 AIX Virtual I/O Disks

You can use virtual I/O disks for:
- Oracle Clusterware ($ORACLE_CRS_HOME)
- Oracle RAC software ($ORACLE_HOME)

But they must not be used for:
- OCR and voting disks.
- Oracle database files.

2. BASIC DATABASE INFORMATION FOR IBPS

2.1 DC and DR database - extended RAC

1) DB server

CATEGORY        DC            DR            REMARK
DB_NAME         DCDB          DRDB          DC: 2-node, DR: 2-node
OS account      oracle:dba    oracle:dba
PORT            1521          1521
Character set   UTF8          UTF8
db_block_size   8K            8K
ORACLE_HOME     /u01/app/db   /u01/app/db   $ORACLE_CRS_HOME, $ORACLE_HOME
Mount point     /u01          /u01          filesystem

Database Server Identification (DC 2-node RAC, DR 2-node RAC)

1) Server name

CATEGORY      DC             DR             REMARK
Server name   dcdb1, dcdb2   drdb1, drdb2   DC/DR: cluster name; dbx: node order

2) /etc/hosts

DC:
# Public network
192.168.10.10   dcdb1        # node name db1
192.168.10.12   dcdb2        # node name db2
# Oracle RAC interconnect network
172.16.100.14   dcdb1-priv   # db1 private IP
172.16.100.16   dcdb2-priv   # db2 private IP
# Virtual IP for Oracle
192.168.10.14   dcdb1-vip    # db1 virtual IP
192.168.10.16   dcdb2-vip    # db2 virtual IP

DR:
# Public network
10.192.10.10    drdb1        # drdb1
10.192.10.12    drdb2        # drdb2
# Oracle RAC interconnect network
172.16.100.14   drdb1-priv   # bak db1 private IP
172.16.100.16   drdb2-priv   # bak db2 private IP
# Virtual IP for Oracle
10.192.10.14    drdb1-vip    # bak db1 virtual IP
10.192.10.16    drdb2-vip    # bak db2 virtual IP

3. Way to Implement Oracle 10g RAC on AIX 5L

We will implement Oracle 10g RAC on AIX 5L at SBV Bank as follows:

- ASM is used for Oracle database storage.
- No need for HACMP.
- No need for GPFS.
- Oracle database files (datafiles, redo log files, archive logs) are stored on disks managed by Oracle ASM.
- CRS files (OCR and voting disks) are placed on raw disks.

4. CHECK LIST TO USE AND FOLLOW

This is the list of operations you should perform before moving to the Oracle installation steps:

Operations (mark Done: Yes/No for each node)

 #   Operation                                                             Node 1   Node 2
 1   Check the Hardware Requirements
 2   Check the Network Requirements
 3   Check the Software Requirements
 4   Tuning the AIX System Environment
 5   Create Required UNIX Groups and Users
 6   Configure Kernel Parameters and Shell Limits
 7   Identify Required Software Directories
 8   Identify or Create an Oracle Base Directory
 9   Create the CRS Home Directory
10   Choose Storage Options for Oracle CRS, Database, and Recovery Files
11   Create LUNs for Oracle CRS, Database, and Recovery Files
12   Configure Disks for ASM
13   Synchronize the System Time on Cluster Nodes
14   Stop Existing Oracle Processes
15   Configure the Oracle User Environment
16   User Equivalence Setup
17   Running the rootpre.sh script
18   Voting Disk on the third site setup

5. PREPARING THE SYSTEM

5.1 Hardware Requirements

To ensure that each system meets these requirements, follow these steps:

1. To determine the physical RAM size, enter the following command:

   # /usr/sbin/lsattr -E -l sys0 -a realmem

   The minimum RAM required is 1 GB.

   DC site:   Actual Value (GB)   Remark
   Node 1     32
   Node 2     32

   DR site:   Actual Value (GB)   Remark
   Node 1     32
   Node 2     32

2. To determine the SWAP size:

   # /usr/sbin/lsps -a

   a) SWAP is twice the size of RAM, if RAM is smaller than 2 GB.
   b) SWAP is equal to RAM, if RAM is between 2 GB and 8 GB.
   c) SWAP is 0.75 times the size of RAM, if RAM is greater than 8 GB.

   At SBV, however, SWAP is less than 0.75 times RAM, because the RAM size is very large.

   DC site:   Actual Value (GB)   Remark
   Node 1     20
   Node 2     20

   DR site:   Actual Value (GB)   Remark
   Node 1     20
   Node 2     20
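The SWAP sizing rules above can be sketched as a small helper. The function below is a hypothetical aid for checking a node's sizing, not part of the installation; it works in MB:

```shell
# Hypothetical helper: recommended SWAP size (in MB) for a given
# RAM size (in MB), following the three sizing rules listed above.
swap_needed() {
    ram_mb=$1
    if [ "$ram_mb" -lt 2048 ]; then
        echo $(( ram_mb * 2 ))         # RAM < 2 GB: twice the RAM
    elif [ "$ram_mb" -le 8192 ]; then
        echo "$ram_mb"                 # 2 GB <= RAM <= 8 GB: equal to RAM
    else
        echo $(( ram_mb * 3 / 4 ))     # RAM > 8 GB: 0.75 times the RAM
    fi
}

swap_needed 32768    # 32 GB nodes, as in this document -> 24576 MB
```

For the 32 GB nodes described above, the rule of thumb gives 24 GB, while the SBV systems use 20 GB, consistent with the note that SWAP is kept below 0.75 times RAM.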

3. To determine the amount of disk space available in the /tmp directory, enter the following command:

   # df -k /tmp

   The minimum /tmp size requirement is 400 MB.

   DC site:   Actual Value (GB)   Remark
   Node 1     10
   Node 2     10

   DR site:   Actual Value (GB)   Remark
   Node 1     10
   Node 2     10

4. Configure OCR and voting disks for DCDB and DRDB.

   On the DC and DR systems, the OCRs and voting disks will be placed on raw disks. The size of each OCR and voting disk is at least 256 MB (at SBV it is 1 GB).

   OCR configuration:
   - Size of each OCR disk: 1 GB
   - Owner and permissions: oracle:dba, 660
   - Location: 2 OCR disks in DS8K at each pair of cluster nodes

   Voting disk configuration:
   - Size of each voting disk: 1 GB
   - Owner and permissions: oracle:dba, 660
   - Location: 3 voting disks in DS8K at each pair of cluster nodes

5. Steps to configure OCR and voting disks for the DC and DR sites:

   1) LUN creation

      Disks    LUN ID Number   LUN Size
      OCR                      1 GB
      Voting                   1 GB

   2) Preparing raw disks for the OCR and voting disks

      LUN ID Number   Node 1 hdisk   Node 2 hdisk
      OCR             hdisk9         hdisk9
      OCR             hdisk10        hdisk10
      Voting          hdisk4         hdisk4
      Voting          hdisk5         hdisk5
      Voting          hdisk6         hdisk6

   3) Follow the steps below to configure the disk devices for OCR and voting.

      Identify or configure the required disk devices. The disk devices must be shared on all of the cluster nodes.

      As the root user, enter the following command on any node to identify the device names for the devices that you want to use:

      # lspv | grep -i none

      This command displays information similar to the following for each device that is not configured in a volume group:

      hdisk17     0009005fb9c23648     None

      where:
      - hdisk17 is the device name
      - 0009005fb9c23648 is the physical volume ID (PVID)

      If a disk device that you want to use does not have a PVID, enter a command similar to the following to assign one temporarily:

      # chdev -l hdiskn -a pv=yes

      Caution: if the disk already has a PVID, chdev will overwrite it, which will cause applications that depend on the previous PVID to fail.

      On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:

      # lspv | grep -i "0009005fb9c23648"

      The output from this command should be similar to the following:

      hdisk18     0009005fb9c23648     None

      The device name associated with this device on this node is hdisk18, while on the primary node it is hdisk17 (the names differ).

      If the device names are the same on all nodes, enter the following commands on all nodes to change the owner, group, and permissions on the character raw device files for the disk devices:

      - OCR device:
        # chown root:oinstall /dev/rhdiskn
        # chmod 640 /dev/rhdiskn
      - Voting device:
        # chown oracle:dba /dev/rhdiskn
        # chmod 660 /dev/rhdiskn

      If the device name associated with the PVID for a disk that you want to use is different on any node, you must create a new device file for the disk on each node, using a common unused name. To create a new device file for a disk device on all nodes, perform these steps on each node:

      a) To determine the device major and minor numbers, run:

         # ls -alF /dev/*hdiskn

         The output from this command is similar to the following:

         brw-------  1 root  system  24,8192 Dec 05 2001 /dev/hdiskn
         crw-------  1 root  system  24,8192 Dec 05 2001 /dev/rhdiskn

         In this case, the device file /dev/rhdiskn represents the character raw device, 24 is the device major number, and 8192 is the device minor number.

      b) To create a new device file, enter the following command:

         # mknod /dev/ora_ocr_raw_280m c 24 8192

      c) Enter commands similar to the following to change the owner, group, and permissions on the character raw device file for the disk:

         - OCR:
           # chown root:oinstall /dev/ora_ocr_raw_280m
           # chmod 640 /dev/ora_ocr_raw_280m
         - Voting:
           # chown oracle:dba /dev/ora_vote_raw_280m
           # chmod 660 /dev/ora_vote_raw_280m

      d) Enter the following command to verify that you created the new device file successfully:

         # ls -alF /dev | grep "24,8192"

         The output should be similar to the following:

         brw-------  1 root  system    24,8192 Dec 05 2001 /dev/hdiskn
         crw-r-----  1 root  oinstall  24,8192 Dec 05 2001 /dev/ora_ocr_raw_280m
         crw-------  1 root  system    24,8192 Dec 05 2001 /dev/rhdiskn

      To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute, depending on the type of reserve attribute used by your disks. Follow the steps below to perform this task using hdisk logical names.

      To determine the reserve setting your disks use, enter the command below:

      # lsattr -E -l hdiskn | grep reserve_

The response is either a reserve_lock setting, or a reserve_policy setting.

If the attribute is reserve_lock, ensure the setting is reserve_lock=no.
If the attribute is reserve_policy, ensure the setting is reserve_policy=no_reserve.

      If necessary, change the setting with the chdev command, using the following syntax:

      # chdev -l hdiskn -a [ reserve_lock=no | reserve_policy=no_reserve ]

      Enter a command similar to the following on any node to clear the PVID from each disk device that you want to use:

      # chdev -l hdiskn -a pv=clear

      Format (zero) the devices and verify concurrent read/write access by running the dd command at the same time from each node:

      At the same time, from node 1:
      # dd if=/dev/zero of=/dev/hdiskx bs=8192 count=25000
      (repeat this operation for the other disk devices)

      At the same time, from node 2:
      # dd if=/dev/zero of=/dev/hdiskx bs=8192 count=25000
      (repeat this operation for the other disk devices)
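The per-disk dd zeroing above can be wrapped in a small loop. This is a sketch only; the hdisk list is this document's OCR/voting set and must be adjusted to your environment before use (the command is destructive):

```shell
# Sketch: zero each shared disk in turn; run concurrently from both
# nodes to verify simultaneous write access. Disk names are the
# OCR/voting hdisks listed earlier in this document (adjust as needed).
for d in hdisk4 hdisk5 hdisk6 hdisk9 hdisk10; do
    echo "Zeroing /dev/$d ..."
    dd if=/dev/zero of=/dev/$d bs=8192 count=25000
done
```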

6. To determine whether the system architecture can run the software, enter the following command:

   # getconf HARDWARE_BITMODE

   The output of this command should be 64; otherwise you cannot install the software on this system.

7. To determine whether the system is started in 64-bit mode:

   # bootinfo -K

   The result of this command should be 64, indicating that the 64-bit kernel is enabled.

5.2 Software Requirements

Depending on the products that you intend to install, verify that the following software is installed on the system.

5.2.1 OS version requirement

AIX release supported with Oracle 10g RAC Release 2: AIX 5L version 5.3, Maintenance Level 02 or later (64-bit).

To determine which version of AIX is installed, enter:

# oslevel -r

On the SBV system, the result of this command is 5300-08-01-0819.

5.2.2 OS filesets requirement

The following operating system filesets are required:

- bos.adt.base
- bos.adt.lib
- bos.adt.libm
- bos.perf.libperfstat
- bos.perf.perfstat
- bos.perf.proctools
- rsct.basic.rte
- rsct.compat.clients.rte
- xlC.aix50.rte 7.0.0.4
- xlC.rte 7.0.0.1

You must have the xlC C/C++ runtime filesets for installation, but you do not require the C/C++ compiler.

Product software requirements:

Oracle RAC:      ASM is required, as we use ASM for Clusterware files and for Oracle Database files.
Ada:             OC Systems PowerAda 5.4d
JDK:             IBM JDK 1.4.2 is installed with this release
Pro*FORTRAN:     IBM XL Fortran v10.1 for AIX
Utilities:       GNU find 4.1, gdb 6.0, GNU make 3.80, GNU tar 1.13,
                 Perl 5.005_03 + MIME 2.21, Perl 5.6 + MIME 2.21, Perl 5.8.3,
                 Python 2.2, Unzip 5.4.2, Zip 2.3
Pro*C/C++, Oracle Call Interface, Oracle C++ Call Interface,
Oracle XML Developer's Kit, GNU Compiler Collection:   N/A

To determine whether the required filesets are installed and committed, enter a command similar to the following:

# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat \
  bos.perf.libperfstat bos.perf.proctools rsct.basic.rte

5.2.3 AIX APARs and other operating system fixes

All AIX 5L v5.3 installations require the Authorized Problem Analysis Reports (APARs) for AIX 5L v5.3 ML02 or later, and the following AIX fixes:

IY68989: WRITE TO MMAPPED SPACE HANGS

IY68874: An application that is run ...

IY70031: CORRUPTION FROM SIMULTANEOUS CIO WRITES WITH O_DSYNC ON JFS2 (required if using IBM Journaled File System Version 2 (JFS2) for Oracle Database files).

NOTE: All Oracle 9i Database and Oracle 10g Database customers who are running on AIX 5L V5.3 Technology Level 5 (TL 5300-05) must install the IBM AIX PTF for APAR IY89080. In addition, Oracle customers should contact Oracle support to obtain the fix for Oracle Bug 496862.

IZ03260: LIO_LISTIO FAILS TO UPDATE AIO CONTROL BLOCKS ON ERROR APPLIES TO AIX 5300-06 (for AIX 5.3 TL06 customers).

IZ03475: LIO_LISTIO FAILS TO UPDATE AIO CONTROL BLOCKS ON ERROR APPLIES TO AIX 5300-07 (for AIX 5.3 TL07 customers).

To determine whether a required APAR is installed, enter:

# instfix -i -k "IY68989"

5.3 Tuning the AIX System Environment

The parameter and shell limit values shown in this section are recommended values only. For a production database system, Oracle recommends that you tune these values to optimize the performance of the system.

5.3.1 Tuning Virtual Memory Manager (VMM)

Oracle recommends that you use the vmo command to tune virtual memory, using the following values:

Parameter          Value
minperm%           3   (default is 20)
maxperm%           90  (default is 80)
maxclient%         90  (default is 80)
lru_file_repage    0   (default is 1)
strict_maxclient   1   (default is 1)
strict_maxperm     0   (default is 0)

For example:

# vmo -p -o minperm%=3
# vmo -p -o maxperm%=90
# vmo -p -o maxclient%=90
# vmo -p -o lru_file_repage=0
# vmo -p -o strict_maxclient=1
# vmo -p -o strict_maxperm=0

You must restart the system for these changes to take effect.

5.3.2 Configuring Shell Limits

To improve software performance, you must increase the following shell limits:

Shell Limit                                              Item       Hard limit
Maximum number of open file descriptors                  nofiles    65536
Maximum number of processes available to a single user   maxuproc   16384

To increase the shell limits:

1. Add the following lines to the /etc/security/limits file:

   default:
        fsize = -1
        core = -1
        cpu = -1
        data = 512000
        rss = 512000
        stack = 512000
        nofiles = 2000

2. Enter the following command to list the current setting for the maximum number of processes allowed per user:

   # /usr/sbin/lsattr -E -l sys0 -a maxuproc

   If necessary, change the maxuproc setting using the following command:

   # /usr/sbin/chdev -l sys0 -a maxuproc=16384

3. Repeat this procedure on all other nodes in the cluster

5.3.3 Configuring User Process Parameters

Verify that the maximum number of processes allowed for each user is set to 2048 or greater:

1. Enter the following command:

   # smit chgsys

2. Verify that the value shown for "Maximum number of PROCESSES allowed for each user" is greater than or equal to 2048. If necessary, edit the existing value.

3. When you have finished making changes, press F10 to exit.

5.3.4 Configuring Network Tuning Parameters

Verify that the network tuning parameters shown in the following table are set to the values shown, or to higher values:

Network Tuning Parameter   Recommended Value
ipqmaxlen                  512
rfc1323                    1
sb_max                     2*655360 (= 1310720)
tcp_recvspace              65536
tcp_sendspace              65536
udp_recvspace              655360
udp_sendspace              65536

Note on udp_recvspace: the recommended value is 10 times the value of the udp_sendspace parameter, and it must be less than the value of the sb_max parameter.

Note on udp_sendspace: for production databases, the minimum value for this parameter is 4 KB plus the value of the DB_BLOCK_SIZE initialization parameter multiplied by the value of the DB_MULTIBLOCK_READ_COUNT initialization parameter:

(DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB
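As a worked example of the udp_sendspace formula, the sketch below uses the 8 KB db_block_size from this document's database information; the DB_MULTIBLOCK_READ_COUNT value of 16 is an assumption and should be taken from your init.ora:

```shell
# Worked example of the minimum udp_sendspace formula:
# (DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB
db_block_size=8192             # 8K, per the database information above
db_multiblock_read_count=16    # assumed value; check your init.ora
min_udp_sendspace=$(( db_block_size * db_multiblock_read_count + 4096 ))
echo "$min_udp_sendspace"      # 135168 bytes
```

With these assumed values, the production minimum (135168 bytes) exceeds the 65536 recommended in the table, so udp_sendspace would need to be raised accordingly.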

To view the current settings and change them as required:

1. To check the current values:

   # no -a | more

2. If you must change the value of any parameter, first determine whether the system is running in compatibility mode:

   # lsattr -E -l sys0 -a pre520tune

   If the result is:

   pre520tune enable Pre-520 tuning compatibility mode True

   then the system is running in compatibility mode.

3. If the system is running in compatibility mode, follow the steps below to change the values:

   a. Enter the command below to change the value of each parameter:

      # no -o parameter_name=value

      For example:

      # no -o udp_recvspace=655360

   b. Add entries similar to the following to the /etc/rc.net file for each parameter that you changed in the previous step:

      if [ -f /usr/sbin/no ] ; then
          /usr/sbin/no -o udp_sendspace=65536
          /usr/sbin/no -o udp_recvspace=655360
          /usr/sbin/no -o tcp_sendspace=65536
          /usr/sbin/no -o tcp_recvspace=65536
          /usr/sbin/no -o rfc1323=1
          /usr/sbin/no -o sb_max=1310720
          /usr/sbin/no -o ipqmaxlen=512
      fi

4. If the system is not running in compatibility mode:

   For the ipqmaxlen parameter:
   # no -r -o ipqmaxlen=512

   For the other parameters:
   # no -p -o parameter=value

5.3.5 Increasing the Space Allocated for the ARG/ENV List

Oracle recommends that you increase the space allocated for the ARG/ENV list to 128:

# chdev -l sys0 -a ncargs='128'

5.4 Users and Groups

You must create the following users and groups for Oracle Clusterware and RAC installation and management:

- Create the oinstall group, for the software owner group and inventory.
- Create the dba group.
- Create the oracle user, as the CRS and RAC software owner.

The group IDs and user IDs for oinstall, dba, and oracle must be identical on all member nodes of the cluster.

oinstall:
# mkgroup id=500 oinstall

dba:
# mkgroup id=501 dba

oracle (created using smit security, with ID=500), with the following attributes:
- oinstall is the PRIMARY group
- dba is the SET group
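As an alternative to smit, the oracle user can also be created from the command line. The sketch below follows this document's conventions (id 500, oinstall primary, dba secondary); the home directory is an assumption and should match your site standards:

```shell
# Sketch: create the oracle user with oinstall as the primary group
# and dba as a secondary group. The home directory is hypothetical.
mkuser id=500 pgrp=oinstall groups=dba home=/home/oracle oracle
```

Run the same command on every node so the user and group IDs stay identical across the cluster, as required above.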

Suppose Oracle Clusterware and Oracle RAC are installed under mount point /u01 (owned by root). Do the following to create the Oracle software owner directories (change /u01 to whatever mount point you use):

- /u01: mount point for Oracle software installation (owned by root)
- /u01/app: Oracle Base
- /u01/app/crs: Oracle CRS Home
- /u01/app/oracle: Oracle RAC Home

# mkdir /u01/app
# chown oracle:oinstall /u01/app
# chmod -R 775 /u01/app
# mkdir /u01/app/crs
# mkdir /u01/app/oracle

5.5 Network Configuration

1. Network hardware requirements:

- Each node must have at least two network adapters: one for the public network interface and one for the private network interface (the interconnect).
- The public interface names associated with the network adapters must be the same on all nodes, and the private interface names should likewise be the same on all nodes.
- For increased reliability, configure redundant public and private network adapters for each node.
- For the public network, each network adapter must support TCP/IP.
- For the private network, the interconnect must support the User Datagram Protocol (UDP), using high-speed network adapters and switches that support TCP/IP (Gigabit Ethernet or better).

2. Node name and IP identification

DC site:

Public (en0)                 VIP (en0)                    RAC Interconnect / Private (en8)
Node name   IP               Node name    IP              Node name    IP
dcdb1       192.168.10.10    dcdb1-vip    192.168.10.14   dcdb1-priv   172.16.100.14
dcdb2       192.168.10.12    dcdb2-vip    192.168.10.16   dcdb2-priv   172.16.100.16

DR site:

Public (en20)                VIP (en20)                   RAC Interconnect / Private (en16)
Node name   IP               Node name    IP              Node name    IP
drdb1       10.192.10.10     drdb1-vip    10.192.10.14    drdb1-priv   172.16.100.14
drdb2       10.192.10.12     drdb2-vip    10.192.10.16    drdb2-priv   172.16.100.16

3. Host file setup (/etc/hosts)

DC site:

# Public network
192.168.10.10   dcdb1        # node name db1
192.168.10.12   dcdb2        # node name db2
# Oracle RAC interconnect network
172.16.100.14   dcdb1-priv   # db1 private IP
172.16.100.16   dcdb2-priv   # db2 private IP
# Virtual IP for Oracle
192.168.10.14   dcdb1-vip    # db1 virtual IP
192.168.10.16   dcdb2-vip    # db2 virtual IP

DR site:

# Public network
10.192.10.10    drdb1        # drdb1
10.192.10.12    drdb2        # drdb2
# Oracle RAC interconnect network
172.16.100.14   drdb1-priv   # bak db1 private IP
172.16.100.16   drdb2-priv   # bak db2 private IP
# Virtual IP for Oracle
10.192.10.14    drdb1-vip    # bak db1 virtual IP
10.192.10.16    drdb2-vip    # bak db2 virtual IP
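Once /etc/hosts is populated, the address plan can be sanity-checked against the requirement in section 1.4 that public and private addresses answer ping while the VIPs do not (before Clusterware brings them up). The loop below is a sketch using this document's DC hostnames:

```shell
# Sketch: pre-install ping check for the DC site.
# Public and private names should answer; VIPs should not yet.
for h in dcdb1 dcdb2 dcdb1-priv dcdb2-priv; do
    ping -c 1 "$h" > /dev/null && echo "$h reachable (OK)"
done
for h in dcdb1-vip dcdb2-vip; do
    ping -c 1 "$h" > /dev/null || echo "$h not answering (OK before install)"
done
```

Run the equivalent loop with the drdb* names on the DR site.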

5.6 Local Disk for Oracle Code

The Oracle code (Oracle Clusterware and RAC software) can be located on an internal disk, using regular file systems. Note: you can also use virtual I/O disks for the Oracle code. The preferred mount point for the Oracle code is /u01, and it is recommended to have 30-50 GB of space for it.

5.7 Node Time Requirement

Before starting the installation, ensure that the date and time on each member node of the cluster are set as closely as possible to the same value. Oracle strongly recommends using the Network Time Protocol (NTP) feature of most operating systems for this purpose.

5.8 User Equivalence Setup

Before you install and use Oracle Real Application Clusters, you must configure secure shell (SSH) for the oracle user on all cluster nodes. This task will be done by an IBM engineer.
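Although this task is delegated to an IBM engineer here, a typical SSH user-equivalence setup for Oracle 10g looks roughly like the outline below. It is a sketch, not the exact procedure used; the node names come from this document:

```shell
# Outline: run as the oracle user on each node (dcdb1, dcdb2).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # key pair with empty passphrase
# Append every node's public key to authorized_keys on every node,
# then lock down the file permissions:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Finally, verify password-less access in both directions:
ssh dcdb2 date
```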

5.9 Running the rootpre.sh script

Note: do not run this script if you have a later release of the Oracle Database software already installed on this system.

1. Switch user to root:
   $ su - root
2. Run the rootpre.sh script:
   # ./rootpre.sh
3. Exit from the root account.
4. Repeat these steps on all cluster nodes.

5.10 Update .profile (User Environment)

ORACLE_BASE=/u01/app; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/oracle; export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin; export PATH
ORACLE_SID=; export ORACLE_SID
AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE
umask 022

6. INSTALLING ORACLE CLUSTERWARE

6.1 Verify Oracle Clusterware Requirements with CVU

Use the following command to verify and check the system requirements before starting to install Oracle Clusterware:

$ /mountpoint/runcluvfy.sh stage -pre crsinst -n node_list

The Cluster Verification Utility's Oracle Clusterware preinstallation stage check verifies the following:

- Node reachability
- User equivalence
- Node connectivity
- Administrative privileges
- Shared storage accessibility
- System requirements
- Kernel packages
- Node applications

6.2 Preparing to Install Oracle CRS with OUI

- Shut down all running Oracle processes.
- Determine the Oracle Inventory location (oraInventory): /u01/app

- Obtain root account access.
- Determine the cluster name, public node names, private node names, and virtual node names for each node in the cluster.

DC site (CRS):

Node     Public   IP              VIP         IP              Private      IP
Node 1   dcdb1    192.168.10.10   dcdb1-vip   192.168.10.14   dcdb1-priv   172.16.100.14
Node 2   dcdb2    192.168.10.12   dcdb2-vip   192.168.10.16   dcdb2-priv   172.16.100.16

- Identify the shared storage for Clusterware files:

  OCR locations:     /dev/hdisk9, /dev/hdisk10
  Voting locations:  /dev/hdisk4, /dev/hdisk5, /dev/hdisk6

DR site (CRS):

Node     Public   IP             VIP         IP             Private      IP
Node 1   drdb1    10.192.10.10   drdb1-vip   10.192.10.14   drdb1-priv   172.16.100.14
Node 2   drdb2    10.192.10.12   drdb2-vip   10.192.10.16   drdb2-priv   172.16.100.16

- Identify the shared storage for Clusterware files:

  OCR locations:     /dev/hdisk9, /dev/hdisk10
  Voting locations:  /dev/hdisk4, /dev/hdisk5, /dev/hdisk6

6.3 Confirming Oracle Clusterware Function

After installation, log in as root and use the following command syntax to confirm that your Oracle Clusterware installation is installed and running correctly:

# $CRS_HOME/bin/crs_stat -t

6.4 Oracle Clusterware Postinstallation Procedures

1. Required postinstallation tasks:

   a) Back up the voting disk after installation. Use the cp command to back up the voting disk. Perform this task after you complete any installation, node addition, or node deletion.

   b) Download and install patch updates. Patches to apply:
      - Patchset 3 (10.2.0.4)

2. Recommended postinstallation tasks:

   Oracle recommends that you back up the root.sh script after you complete an installation.
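Since the voting disks in this setup live on raw devices, a dd copy of the raw device is a common way to take that backup. The sketch below assumes the /dev/rhdisk4 voting device from this document and a hypothetical backup path:

```shell
# Sketch: back up one voting disk (raw device) to a file.
# /backup/vote_hdisk4.bak is a hypothetical path; repeat for each
# voting device (rhdisk4, rhdisk5, rhdisk6 in this document).
dd if=/dev/rhdisk4 of=/backup/vote_hdisk4.bak bs=8192
```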

7. TASKS FOR AFTER INSTALLATION

7.1 Tuning parameter file

ITEM       Value   REMARK
LOCK_SGA   TRUE    Oracle databases requiring high performance will usually benefit from running with a pinned Oracle SGA:

   $ /usr/sbin/vmo -r -o v_pinshm=1
   $ /usr/sbin/vmo -r -o maxpin%=percent_of_real_memory

where percent_of_real_memory = ((size of SGA / size of physical memory) * 100) + 3.

Set the LOCK_SGA parameter to TRUE in the init.ora file. You must stop and restart Oracle for this change to take effect.
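The maxpin% formula above can be checked with a quick calculation. The SGA size below is an assumption for illustration, paired with the 32 GB nodes described in the hardware section:

```shell
# Worked example: maxpin% = ((SGA size / physical memory) * 100) + 3
sga_mb=16384     # assumed 16 GB SGA (illustrative only)
phys_mb=32768    # 32 GB of RAM, per the hardware section
maxpin=$(( sga_mb * 100 / phys_mb + 3 ))
echo "vmo -r -o maxpin%=$maxpin"
```

With these values the pinned fraction comes out to 53 percent, which would then be applied with vmo as shown in the table above.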

8. INSTALLATION SCREENSHOTS

8.1 Two-node extended RAC (DC)

CRS installation screenshots

Begin Installation:

Specify Inventory directory and credentials:

Specify Home Details:

Product Specific Prerequisite Checks:

Specify Cluster Configuration:

Specify Network Interface Usage:

Specify Oracle Cluster Registry (OCR) Location:

Specify Voting Disk Location:

Summary:

Installation progress:

Install VIP CA:

Configuration Assistant Progress Dialog:

End of Installation:

Oracle software installation screenshots

Begin of Installation:

Select Installation Type:

Specify Home Details

Specify Hardware Cluster Installation Mode:

Product Specific Prerequisite Checks

Select Configuration Options:

Summary:

Install progress

Execute Configuration scripts

Running scripts

Configure ASM Instance

Database Configuration Assistant: Operations:

Database Configuration Assistant : Node Selections

Create ASM Instance

ASM Disk Groups:

Create disk group:

Create database : dcore

8.2 Two-node extended RAC (DR)

CRS installation screenshots

Specify Home Details:

Specify Hardware Cluster Installation Mode:

Product Specific Prerequisite Checks:

End of Installation:

Running scripts:

Oracle software installation screenshots

Specify Home Details:

Product Specific Prerequisite Checks: