Oracle High Availability 11gR2 New Features
Turkey Oracle User Group (TROUG) Day, Istanbul, 11 October 2012
Presented by : Syed Jaffer Hussain TROUG Day 2012 Slide # 1
Disclaimer
The views and content in these slides are those of the author and do not necessarily reflect those of Oracle Corporation and/or its affiliates and subsidiaries. The material in this document is for informational purposes only and is published with no guarantee or warranty, express or implied.
This material should not be reproduced or used without the author's written permission.
• What is Grid Infrastructure (GI)?
• Oracle 11gR2 Clusterware stack flow
• OCR, voting disks in ASM
• Out-of-place upgrade
• Redundant interconnect
• Clusterized commands
• Node addition and deletion made easy
• Oracle Automatic Cluster File System (ACFS)
• Single Client Access Name (SCAN)
• Oracle RAC One Node
• Cluster Health Monitor (CHM)
Some key new features
Know your presenter
Syed Jaffer Hussain
Database Support Manager
Over 20 years of IT experience, 13 years as an Oracle DBA
Oracle ACE Director
Oracle 10g Certified Master (OCM)
Oracle 10g RAC Certified Expert
OCP v8i, 9i, 10g & 11g
ITIL v3 Foundation Certified
Authored "Oracle 11g R1/R2 Real Application Clusters Essentials"
Twitter: @sjaffarhussain
http://jaffardba.blogspot.com
Know your presenter
Technologist of the Year, DBA 2011
http://www.oracle.com/technetwork/issue-archive/2012/12-jan/o12awards-tech-1403083.html
Know your presenter
Expert Oracle RAC
Kai Yu, Riyaj Shamsudeen
http://orainternals.wordpress.com/about/
http://kyuoracleblog.wordpress.com/
• The Grid Home consists of: Clusterware + Automatic Storage Management (ASM)
• Oracle Clusterware and ASM still remain separate components
• Must be installed outside the ORACLE_BASE directory
• When running any lower-version databases, the node must be pinned
What is Grid Infrastructure - GI
Grid Infrastructure (GI) = Oracle Clusterware + Automatic Storage Management (ASM)
Key Facts
Courtesy of Oracle documentation
Clusterware stack flow
Oracle 11gR2 vs. pre-11gR2 (comparison diagram): pre-11gR2, init runs /etc/init.d/init.evmd, /etc/init.d/init.cssd and /etc/init.d/init.crsd, which start evmd.bin, ocssd.bin and crsd.bin, together with racgmain, racgimon, ons and oprocd.
Clusterware stack overview: Clusterware startup sequence
Startup sequence (diagram): init starts ohasd; ohasd spawns the orarootagent, cssdagent and oraagent agents, bringing up cssd and crsd; crsd then manages resources through its own orarootagent and oraagent.
• crsd orarootagent: network resources, SCAN VIP, VIP, ACFS registry, GNS VIP
• crsd oraagent: ASM resource, disk groups, DB resources, SCAN listener, listener, ONS, GNS
• ohasd oraagent: ASM resource monitoring, EVMD
OCR, Voting disks in ASM
# ocrcheck
Status of Oracle Cluster Registry is as follows :
  Version                  :          3
  Total space (kbytes)     :    1291700
  Used space (kbytes)      :      32572
  Available space (kbytes) :    1259128
  ID                       : 1402199437
  Device/File Name         : +DG_OCR_VOTE
                             Device/File integrity check succeeded
                             Device/File not configured
                             Device/File not configured
                             Device/File not configured
                             Device/File not configured
  Cluster registry integrity check succeeded
  Logical corruption check bypassed due to non-privileged user
The OCR and voting disks can now be stored in ASM
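As a quick sketch of reading this report programmatically, the Python helper below (a hypothetical `ocr_usage` function, not an Oracle utility) parses `ocrcheck`-style text and computes the used-space percentage:

```python
# Sketch: parse `ocrcheck` output to compute OCR space usage.
# The helper name and sample text are illustrative, not an Oracle API.
import re

def ocr_usage(ocrcheck_output: str) -> float:
    """Return used OCR space as a percentage of total, from ocrcheck text."""
    def grab(label):
        m = re.search(rf"{label} \(kbytes\)\s*:\s*(\d+)", ocrcheck_output)
        return int(m.group(1))
    total = grab("Total space")
    used = grab("Used space")
    return 100.0 * used / total

sample = """Total space (kbytes)     :    1291700
Used space (kbytes)      :      32572
Available space (kbytes) :    1259128"""
print(round(ocr_usage(sample), 2))  # roughly 2.5 percent used
```

With the numbers from the slide, only about 2.5% of the 1.2 GB OCR area is in use.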
OCR, Voting disks in ASM
• Supports at most 5 copies of the OCR
• Copies can be stored in separate disk groups
• Mixed storage is also possible: ASM and file system
OCR, Voting disks in ASM
• Create a new disk group
• Ensure the disk group attribute COMPATIBLE.ASM is set to >= 11.2
• Mount the disk group across the nodes:
  srvctl start diskgroup -g DISKGROUP_NAME -n <hostname_list>
• ocrconfig -add +DG_OCR_VOTE
• Delete the OCR from the file system, if required
Migrating OCR on ASM
OCR, Voting disks in ASM
• Create a new Disk Group with suitable redundancy
• Ensure the disk group attribute COMPATIBLE.ASM is set to >= '11.2'
• Mount the disk group across the nodes:
  srvctl start diskgroup -g DISKGROUP_NAME -n <hostname_list>
• crsctl replace votedisk +DG_OCR_VOTE
Migrating Voting Disk on ASM
OCR, Voting disks in ASM
How to multiplex Voting disk (files) at ASM level?
OCR, Voting disks in ASM
• You cannot add more than one voting disk in the same or a separate disk group when using external redundancy
• The number of voting disks is determined by the disk group redundancy:
• 1 voting disk for an external redundancy disk group
• 3 voting disks for a normal redundancy disk group (3 failure groups)
• 5 voting disks for a high redundancy disk group (5 failure groups)
Key Facts
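The redundancy-to-voting-file rule above fits in a few lines; as a sketch, the mapping and function names here are illustrative only, not part of any Oracle tool:

```python
# Sketch of the rule above: number of voting files ASM places in a
# disk group, by redundancy level. Names are illustrative constants.
VOTE_FILES = {"external": 1, "normal": 3, "high": 5}

def voting_files(redundancy):
    """Return the voting-file count for a disk group redundancy level."""
    try:
        return VOTE_FILES[redundancy.lower()]
    except KeyError:
        raise ValueError(f"unknown redundancy level: {redundancy}")

print(voting_files("Normal"))  # 3
```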
OCR, Voting disks in ASM
# crsctl query css votedisk
##  STATE   File Universal Id                 File Name                             Disk group
--  -----   -----------------                 ---------                             ----------
 1. ONLINE  429a472a29944f2ebfeaeb56d2bb1877  (/dev/rdisk/oracle/vote/ora_vote_01)  [DG_OCR_VOTE]
 2. ONLINE  96d1bf7722f84ff2bf1caa23275e01d0  (/dev/rdisk/oracle/vote/ora_vote_02)  [DG_OCR_VOTE]
 3. ONLINE  755c0187686d4f1fbfa8e2068792491e  (/dev/rdisk/oracle/vote/ora_vote_03)  [DG_OCR_VOTE]
 4. ONLINE  8349ad87c10a4fa8bf75312ba0741b8a  (/dev/rdisk/oracle/ocr/ora_ocr_01)    [DG_OCR_VOTE]
 5. ONLINE  1d9fba7fb51e4fd0bfb89e52d5cab72b  (/dev/rdisk/oracle/ocr/ora_ocr_02)    [DG_OCR_VOTE]
• The dd command is no longer supported for backing up voting disks
• Voting disk data is backed up automatically in the OCR
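As a sketch of post-processing the listing above, the hypothetical helper below counts ONLINE voting files per disk group from `crsctl query css votedisk`-style text (the sample mirrors the slide; this is not an Oracle API):

```python
# Sketch: count ONLINE voting files per disk group from
# `crsctl query css votedisk` output. Purely illustrative.
import re
from collections import Counter

def online_votedisks(text):
    """Return a Counter mapping disk group name -> ONLINE voting files."""
    counts = Counter()
    for line in text.splitlines():
        # e.g. " 1. ONLINE 429a... (/dev/rdisk/...) [DG_OCR_VOTE]"
        m = re.search(r"ONLINE\s+\S+\s+\(([^)]+)\)\s+\[(\w+)\]", line)
        if m:
            counts[m.group(2)] += 1
    return counts

sample = """ 1. ONLINE 429a472a29944f2ebfeaeb56d2bb1877 (/dev/rdisk/oracle/vote/ora_vote_01) [DG_OCR_VOTE]
 2. ONLINE 96d1bf7722f84ff2bf1caa23275e01d0 (/dev/rdisk/oracle/vote/ora_vote_02) [DG_OCR_VOTE]
 3. ONLINE 755c0187686d4f1fbfa8e2068792491e (/dev/rdisk/oracle/vote/ora_vote_03) [DG_OCR_VOTE]"""
print(online_votedisks(sample)["DG_OCR_VOTE"])  # 3
```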
Out-of-place upgrade
• Install the software into a new/separate ORACLE_HOME
  /u00/app/oracle/product/10.2.0/crs
  /u00/app/11.2.0/grid
• Less downtime for cluster upgrades: saves software installation time
• With 11gR2 patch set 1, patching can be done in-place or out-of-place
• With 11.2.0.2+, all patch sets are full installations
• The base release is no longer required first
• Oracle Grid Infrastructure upgrades must be out-of-place upgrades
• On the flip side, this requires additional disk space
Key Facts
Redundant interconnect
Courtesy of Kai Yu
• Redundancy improves the stability, reliability and scalability of a cluster
• Pre-11gR2, 3rd-party redundancy technology was required:
  Sun: Trunking, IP Multipathing (IPMP)
  HP: Auto Port Aggregation
  Windows: NIC teaming
  Linux: Bonding
• Only active/standby mode is supported with a multiswitch configuration
Private interconnect Link Aggregation: Multiswitch
Redundant interconnect
• With 11gR2 (11.2.0.2), Oracle supports multiple redundant interconnects without any 3rd-party IP failover technology
• Multiple private interconnect adapters can be defined:
  - during installation
  - with oifcfg setif post-installation
• Oracle Clusterware supports at most four interfaces at a given point in time
• Provides load balancing and failover for interconnect traffic across all active interfaces
Add, Remove nodes made easy
• cluvfy stage -pre nodeadd -n ServerC -verbose
• addNode.sh -silent "CLUSTER_NEW_NODES={ServerC}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={ServerC-vip}"
• cluvfy stage -post nodeadd -n ServerC -verbose
(Cluster: ServerA, ServerB, ServerC)
Adding new node
Add, Remove nodes made easy
• olsnodes -s -t
• crsctl unpin css -n ServerC
• $GI/crs/install/rootcrs.pl -deconfig -force  (run on the node being deleted)
• crsctl delete node -n ServerC  (run on a remaining node)
Removing a node
(Cluster: ServerA, ServerB, ServerC)
Oracle Automatic Cluster Filesystem - ACFS
• Extends ASM capabilities to manage all types of data
• ACFS is a multi-platform file system that runs on any supported platform
• Designed as a general-purpose standalone and cluster-wide file system
• Oracle binaries, application files, data files, BFILEs, video/audio and other configuration files can be stored
Courtesy of Oracle documentation
Key Facts
Oracle Automatic Cluster Filesystem - ACFS
• Dynamic extension minimizes the downtime needed to resize the file system
• Extents of an ACFS file system are evenly distributed across all disks
• Can be used for a shared or non-shared Oracle Home in a RAC
• ACFS drivers must be loaded; they are installed and configured by default
• Use the ASMCA tool or the ASMCMD command line to create an ACFS
• acfsutil: used to manage and administer ACFS
• A new background process, asm_acfs_+ASMn, manages ACFS functionality
• Query the v$asm_filesystem and v$asm_acfsvolumes dynamic views
Key Facts & Advantages
Clusterized (cluster-aware) commands
# crsctl check cluster -all   (check the status of the Clusterware on all nodes)
# crsctl stop cluster -all    (stop the Oracle Clusterware stack on all nodes)
# crsctl start cluster -all   (start the Oracle Clusterware stack on all nodes)
Start, stop and verify cluster status cluster-wide
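The clusterized commands above all follow one pattern. Purely as a sketch, the hypothetical helper below assembles the argv for such a command without executing anything (no cluster is assumed here):

```python
# Sketch: build the argv for a clusterized crsctl command.
# The helper name is illustrative; nothing is executed here.
def cluster_cmd(action, scope="-all"):
    """Return argv for `crsctl <check|start|stop> cluster <scope>`."""
    if action not in {"check", "start", "stop"}:
        raise ValueError(f"unsupported action: {action}")
    return ["crsctl", action, "cluster", scope]

print(" ".join(cluster_cmd("check")))  # crsctl check cluster -all
```

On a real cluster this argv could be handed to subprocess.run by a privileged user; here it only illustrates the command shape.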
Single Client Access Name - SCAN
• Simplifies client connections to database services
• Provides a single/virtual name for all clients to connect to the databases in a cluster
• A stable and highly available name for clients
• Requires 1-3 round-robin IP addresses defined in either:
  - the Domain Name Server (DNS)
  - the Grid Naming Service (GNS)
• SCAN IPs must be on the same subnet as the public and virtual IPs
• A fully qualified host name (host + domain name)
• Mandatory for GI installations and upgrades
• SCAN listeners run by default from the GI home
• Oracle doesn't recommend configuring SCAN in the /etc/hosts file
Key Facts
Single Client Access Name - SCAN: Architecture
(Figures: pre-11gR2 Clusterware/RAC architecture with the public network interface, and the SCAN architecture; courtesy of Kai Yu and Oracle)
Single Client Access Name - SCAN: How does SCAN work?
• Three LISTENER_SCANn listeners are configured in a cluster
• All instances register with the SCAN listeners through the REMOTE_LISTENER parameter
# srvctl status SCAN
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node ServerA
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node ServerB
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node ServerC
RACDB =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=SCAN.domain)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=RACDB_MAIN)))
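As an illustration of the entry above, the hypothetical helper below assembles a SCAN-style connect descriptor string; the host, port and service values are placeholders from the slide, not a real deployment:

```python
# Sketch: assemble a SCAN-style connect descriptor like the tnsnames
# entry above. The helper name and values are illustrative.
def scan_descriptor(scan_host, port, service):
    """Return a single-line DESCRIPTION connect string for a SCAN name."""
    return ("(DESCRIPTION="
            f"(ADDRESS=(PROTOCOL=tcp)(HOST={scan_host})(PORT={port}))"
            f"(CONNECT_DATA=(SERVICE_NAME={service})))")

print(scan_descriptor("SCAN.domain", 1521, "RACDB_MAIN"))
```

Because the client only ever names the SCAN host, adding or removing cluster nodes never requires touching this descriptor.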
Oracle RAC One Node
• A single instance of a RAC database [active/passive] on one node in a cluster
• Commonly known as the "Always On" instance
• Online database relocation, with no downtime for application users
• Uses the Omotion tool to move an instance online from one server to another, without any downtime
• Rolling online patching
• Easy conversion to full RAC and vice versa
• Complements Oracle Virtual Machine (OVM)
• Sustains node failures by providing an automatic instance failover mechanism
Key Facts
Oracle RAC One Node – contd…
• Fully integrated with Data Guard and Enterprise Manager
• An additional license is required: $10,000/processor, vs. $23,000/processor for full RAC
• GI must be configured, up and running
• A better option than 3rd-party clustering HA (HACMP, Serviceguard, Veritas)
• On the flip side, it doesn't support load balancing
Key Facts
Oracle RAC One Node
(Figure sequence, courtesy of Kai Yu)
Oracle RAC One Node
srvctl relocate database -d kr1n -n k2r720n2 -w 5 -v
(Relocation walkthrough figures, courtesy of Kai Yu)
Added target node k2r720n1
Configuration updated to two instances
Instance kr1n_2 started
Services relocated
Waiting for 5 minutes for instance kr1n_1 to stop .....
Instance kr1n_1 stopped
Configuration updated to one instance
Cluster Health Monitor - CHM
• Previously known as Instantaneous Problem Detector for Clusters (IPD/OS)
• With 11gR2 (11.2.0.3), installed and activated by default on most OSes
• For pre-11gR2, the tool needs to be downloaded from OTN
• Collects OS system metrics: memory, swap, processes, I/O, network
• Real-time statistics collection every second, stored in a repository
• The CHM repository requires 1 GB of disk space per node in the cluster
• Can also be installed on single-node, non-RAC servers
• CHM vs. OSWatcher: OSW often can't run when CPU load is heavy, and OSW uses more CPU
• CHM uses less than 5% of a CPU core
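CHM itself is a compiled daemon. Purely as an illustration of per-interval OS metric sampling in its spirit, the sketch below polls the 1-minute load average and an approximate process count on a Unix-like system; all names and fields here are assumptions, not CHM internals:

```python
# Sketch: a per-second OS metric sampler in the spirit of CHM.
# Illustrative only; field names are invented, not CHM's schema.
import os
import time

def sample_metrics(samples=3, interval=1.0):
    """Collect load average and an approximate process count per interval."""
    out = []
    for _ in range(samples):
        load1, _, _ = os.getloadavg()  # 1-minute load average (Unix only)
        # On Linux, /proc entries approximate the process count.
        nprocs = len(os.listdir("/proc")) if os.path.isdir("/proc") else 0
        out.append({"ts": time.time(), "load1": load1, "procs": nprocs})
        time.sleep(interval)
    return out

for s in sample_metrics(samples=2, interval=0.1):
    print(s["load1"], s["procs"])
```

A real collector would also persist each sample to a repository, which is exactly what CHM's 1 GB-per-node store is sized for.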
Cluster Health Monitor - CHM
# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER         STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       ServerA        Started
ora.crf
      1        ONLINE  ONLINE       ServerA
ora.crsd
      1        ONLINE  ONLINE       ServerA
# Grid_home/bin/diagcollection.pl -collect -crshome Grid_home -chmoshome Grid_home -chmos -incidenttime 07/14/201201:00:00 -incidentduration 00:30
Collecting CHM data
Cluster Health Monitor - CHM
• CHMOSG: a graphical user interface to the CHM tool
• Download from OTN, configure under the GI home
• Tool and presentation:
http://www.oracle.com/technetwork/database/clustering/downloads/ipd-download-homepage-087212.html
Cluster Health Monitor - CHM
(CHMOSG monitoring and cluster-view histogram screenshots, courtesy of Oracle)
Q&A – Thank you

A big thank you to TROUG and to you all for listening ...
You can reach me at sjaffarhussain@gmail.com