
What’s New in 12c RAC & ASM

Aman Sharma

@amansharma81 http://blog.aristadba.com

Who Am I?

Aman Sharma-Hyd_Tech_Day_2015 2

Aman Sharma

About 12+ years using Oracle Database

Oracle ACE

Frequent Contributor to OTN Database forum(Aman….)

Oracle Certified

Sun Certified


Agenda

• Flex Cluster

• Flex Cluster - Server Pool Enhancements

• Multitenant database with 12c RAC

• Bundled Agents(XAG)

• What-If command

• Flex ASM

• Cloud FS

(Actual) Agenda

Pre 12c Oracle RAC-Database Tier

• Software-based clustering using Grid Infrastructure software

• Cluster nodes contain only database and ASM instances

• Homogeneous configuration

• Dedicated access to the shared storage for the cluster nodes

• Applications/users connect via nodes outside the cluster

• Reflects a point-to-point model

Pre 12c Oracle RAC-Application Tier

(Diagram: application-tier nodes outside the cluster connecting to the database tier)

Pre-12.1 Cluster vs 12c Flex Cluster

Oracle RAC Using Point-to-Point System

• Requires a lot of resources

• Each node is connected to each other via interconnect for node-node heartbeat

• Each node is connected to the storage directly

• Possible interconnect paths for an N-node cluster:

– N*(N-1)/2 interconnect paths for node heartbeat

– N connection paths for storage

• For a 16-node RAC:

– Heartbeat paths: 16*(16-1)/2 = 120

– Storage paths: 16

Let’s Talk BIIIGGGG!!!!

• Recap:

– N*(N-1)/2 Node Heartbeat paths

– N Storage paths

• For 16 Node RAC

– 120 Interconnects, 16 storage paths

• What about 500 node cluster?

– 124,750 Heartbeat connections

– 500 Storage Paths
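The path counts above are simple combinatorics; a quick illustrative sketch (mine, not from the slides) reproduces the numbers:

```python
# In a point-to-point RAC every node pairs with every other node,
# so an N-node cluster needs N*(N-1)/2 heartbeat paths plus one
# dedicated storage path per node.

def heartbeat_paths(n):
    """Number of pairwise node-to-node interconnect paths."""
    return n * (n - 1) // 2

def storage_paths(n):
    """One dedicated storage path per node."""
    return n

print(heartbeat_paths(16), storage_paths(16))    # → 120 16
print(heartbeat_paths(500), storage_paths(500))  # → 124750 500

# In a Flex Cluster only the Hub nodes form a full mesh; each Leaf
# node holds a single connection to one Hub node.
hubs, leaves = 25, 475
print(heartbeat_paths(hubs), storage_paths(hubs), leaves)  # → 300 25 475
```

The quadratic growth of the full mesh is exactly what the Hub-Spoke design avoids.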

Introducing 12c Flex Cluster

(Diagram: Hub Nodes 1-4 in the Database Tier running database instances ORCL1-ORCL4 and Flex ASM instances +ASM1-+ASM3, with GNS; Leaf Nodes in the Application Tier, each running Oracle Clusterware and connecting to a Hub node)


12c Flex Clusters - Overview

• Based on a Hub-Spoke topology

• Two different categories of cluster nodes:

– Hub Nodes
• Run database and ASM instances

– Leaf Nodes
• Loosely coupled
• Run applications
• Connect to a Hub node

• Flex ASM
– Required for a Flex Cluster
– Hub nodes connect to Flex ASM-based storage

11.2 RAC vs 12c Flex Cluster

• 11.2 RAC, 16-node cluster:

– 120 interconnects

– 16 storage paths

• 11.2 RAC, 500-node cluster:

– 124,750 interconnects

– 500 storage paths

• 12c Flex Cluster, 21 nodes (5 Hub, 16 Leaf):

– 10 Hub-Hub interconnects

– 5 storage paths

– 16 Hub-Leaf connections

• 12c Flex Cluster, 500 nodes (25 Hub, 475 Leaf):

– 300 Hub-Hub interconnects

– 25 storage paths

– 475 Hub-Leaf connections

Flex Cluster Benefits

• Much lower resource requirements

• Much larger scalability: the number of nodes can now be up to 2000

• Higher availability for the application tier; previously, application HA was dependent on the application code

• Application nodes can now also use Server Pools

• Better management of dependency mapping for applications

Say Hello to Leaf & Hub Nodes

• Each Leaf node attaches to exactly one Hub node

• Leaf nodes don't talk to each other (nor do they need to)

• Leaf nodes choose their Hub node when they join the cluster

• Applications running on Leaf nodes connect to the database via the Hub nodes

• Far fewer inter-node interactions are required (Hub-Spoke model)

• Lightweight

• Loosely coupled

• Work as Spokes

• Each Leaf node connects to a Hub node

• Heartbeat only to the Hub node

• Used to run applications and clients

• No direct access to the storage managed by Flex ASM (it's accessible only to Hub nodes)

Leaf Nodes - A Closer Look

• Requires GNS to discover the Hub nodes

• No private inter-connect between the leaf nodes i.e. no inter-leaf node communication

• Uses the same Public and Private networks as are used by the Hub nodes

• If a Hub node goes down, its connected Leaf node(s) get evicted

• Evicted Leaf node can be added back by restarting the Clusterware on it

Leaf Nodes-Closer Look(contd.)

• Much lower than for the Hub nodes

• Run only application-specific workload

• Do not contain:

– Database instances

– ASM instances

– VIPs

• Can be either virtual or physical

• Contain no Voting Disk or OCR

• Can be converted into Hub nodes if they have access to the storage

Leaf Nodes-Resource Requirements

• For enabling Flex Cluster mode, GNS is mandatory

• GNS runs on one of the Hub nodes

• GNS Server location is stored in the cluster profile

• Leaf nodes use GNS as a naming service to locate the Hub nodes during initial startup

• Clients (applications/services) running on Leaf nodes also require GNS to locate the resources they need in order to function

• Leaf nodes use GNS only when they join the cluster for the first time

• As in 11.2, GNS requires a static IP (GNS VIP)

Grid Naming Service(GNS) & Flex Cluster

• In previous versions, only one GNS per cluster was allowed

• Multiple clusters required multiple GNS VIPs, increasing resource requirements

• In 12c, a shared GNS configuration requires just one GNS VIP, domain and DNS configuration

• The GNS configuration needs to be exported before being shared with other clusters

• Use the option USE SHARED GNS during the next cluster installation

12c-Shared GNS Configuration

$ srvctl export gns -clientdata /tmp/gnsconfig

• Just the same as the cluster nodes in pre-12c clusters

• Have access to the ASM-managed storage

• Run database instances, (Flex) ASM instances and resources for the applications

• The maximum number of Hub nodes is 64 in 12.1 (HUBSIZE)

So What Are Hub Nodes?

To convert a Standard Cluster:

• Check the current cluster mode:
$ crsctl get cluster mode config
$ crsctl get cluster mode status

• Check whether GNS is enabled:
# srvctl status gns

• If GNS is not added, add it:
# srvctl add gns -vip 192.168.10.12 -domain cluster01.example.com

• Set Flex Cluster mode:
# crsctl set cluster mode flex

• Stop and start Clusterware on each node:
# crsctl stop crs
# crsctl start crs

• Note: a Flex Cluster can't be converted back to a Standard Cluster

Enabling Flex Cluster Mode

Flex Cluster Administration - Example Commands

• Show the current role of the node:
$ crsctl get node role status -node rac01
Node 'rac01' active role is 'hub'

• Change the node role (requires a CRS restart on the node):
$ crsctl set node role -node rac01 leaf

• Check the maximum number of Hub nodes allowed (HUBSIZE):
$ crsctl get cluster hubsize

• Flex Cluster

• Flex Cluster - Server Pool Enhancements

• Multitenant database with 12c RAC

• Bundled Agents(XAG)

• What-If command

• Flex ASM

• Cloud FS

Agenda

Server Pools - Quick Recap

• A feature starting from 11.2

• Offers the traditional facility of logical division of the cluster

• Nodes are allocated to the pools

• Resources are hosted over the pools

• A resource can be an application, a database, a process

• Policy-managed interface

• Resource allocation is based on priority

• Server Pools are now available for both Hub and Leaf nodes

• Provide better resource management by isolating workloads

• Leaf nodes and Hub nodes can never be in the same server pool

• Server pool management for Leaf nodes is independent from server pools containing Hub Nodes

Server Pools & Hub-Leaf Node

Flex Cluster – Server Pool Enhancements

(Diagram: Hub nodes in server pools OLTP_SP (MIN_SIZE=1, MAX_SIZE=3, IMP=3) and DSS_SP (MIN_SIZE=2, MAX_SIZE=2, IMP=2); Leaf nodes running Apache and Siebel)

• Enhances the concept of Server Pools introduced in 11.2

• Previously, only server pool attributes would determine node placement in server pools

• From 12c Flex Clusters, two new concepts:

– Server Categorization
• Extended node attributes for servers to decide their allocation in server pools

– Cluster Configuration Policy Sets
• Workload-based management of servers in the server pools

Flex Cluster – Policy Based Cluster Administration

Flex Cluster – Server Categorization

Server Configuration Attributes:
• ACTIVE_CSS_ROLE: HUB|LEAF
• CONFIGURED_CSS_ROLE: HUB|LEAF
• CPU_CLOCK_RATE (MHz)
• CPU_COUNT
• CPU_EQUIVALENCY
• CPU_HYPERTHREADING
• MEMORY_SIZE
• NAME
• RESOURCE_USE_ENABLED: 1|0
• SERVER_LABEL

Server Category Attributes:
• NAME
• ACTIVE_CSS_ROLE: HUB|LEAF
• EXPRESSION, built with these operators:

– =: equal
– eqi: equal, case insensitive
– >: greater than
– <: less than
– !=: not equal
– co: contains
– coi: contains, case insensitive
– st: starts with
– en: ends with
– nc: does not contain
– nci: does not contain, case insensitive

Flex Cluster – Server Categorization in Action

[root@rac0 ~]# crsctl status server rac0 -f
NAME=rac0
MEMORY_SIZE=1997
CPU_COUNT=1
CPU_CLOCK_RATE=3
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Generic ora.ORCL
STATE_DETAILS=AUTOSTARTING RESOURCES
ACTIVE_CSS_ROLE=hub

[root@rac0 ~]# crsctl status server rac3 -f
NAME=rac3
MEMORY_SIZE=1997
CPU_COUNT=1
CPU_CLOCK_RATE=3
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=leaf
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Free
STATE_DETAILS=AUTOSTART QUEUED
ACTIVE_CSS_ROLE=leaf

Flex Cluster – Listing Server Categories

[root@rac0 ~]# crsctl status category
NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=

NAME=ora.leaf.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=leaf
EXPRESSION=

[root@rac0 ~]# crsctl status server -category ora.hub.category
NAME=rac0
STATE=ONLINE

NAME=rac1
STATE=ONLINE

NAME=rac2
STATE=ONLINE

Flex Cluster – Creating Server Category

[root@rac0 ~]# crsctl add category testcat -attr "EXPRESSION='(MEMORY > 1900)'"

[root@rac0 ~]# crsctl status server -category ora.leaf.category
NAME=rac3
STATE=ONLINE

[root@rac0 ~]# crsctl status category testcat
NAME=testcat
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=( MEMORY > 1900 )
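To see what a category expression is doing conceptually, here is a small Python model of the selection logic. This is purely illustrative (the operator names follow the slide; the server attributes are made up; none of this is Oracle code):

```python
# Toy model of server categorization: a category EXPRESSION such as
# (MEMORY_SIZE > 1900) selects the servers whose configuration
# attributes satisfy the clause.

OPS = {
    "=":  lambda a, b: a == b,
    ">":  lambda a, b: a > b,
    "<":  lambda a, b: a < b,
    "!=": lambda a, b: a != b,
    "co": lambda a, b: b in a,            # contains
    "st": lambda a, b: a.startswith(b),   # starts with
    "en": lambda a, b: a.endswith(b),     # ends with
}

def matches(server_attrs, attr, op, value):
    """True if the server satisfies one <attr> <op> <value> clause."""
    return OPS[op](server_attrs[attr], value)

# Hypothetical servers mirroring the slide: rac0 has ~2 GB, rac3 less.
servers = {
    "rac0": {"MEMORY_SIZE": 1997, "ACTIVE_CSS_ROLE": "hub"},
    "rac3": {"MEMORY_SIZE": 1024, "ACTIVE_CSS_ROLE": "leaf"},
}

# EXPRESSION=(MEMORY_SIZE > 1900): only rac0 qualifies
selected = [n for n, a in servers.items() if matches(a, "MEMORY_SIZE", ">", 1900)]
print(selected)  # → ['rac0']
```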

Flex Cluster – Cluster Policy Set

• Policy-based server pool assignment

• The default policy is CURRENT

• Managed by a Policy Set

• A Policy Set contains two attributes:
– SERVER_POOL_NAMES
– LAST_ACTIVATED_POLICY

• A Policy Set may contain zero or more policies

• Each policy contains a server pool definition for each pool it manages

4 Node Cluster

(Diagram: a 4-node cluster divided into POOL1 (MIN_SIZE=2, MAX_SIZE=2, IMP=0), POOL2 (MIN_SIZE=1, MAX_SIZE=1, IMP=0) and POOL3 (MIN_SIZE=1, MAX_SIZE=1, IMP=0))

Varying Times & Varying Workloads

• Day time: app1 uses two servers; app2 and app3 use one server each

• Night time: app1 uses one server; app2 uses two servers; app3 uses one server

• Weekend: app1 is not running (0 servers); app2 uses one server; app3 uses three servers

• Node allocation should vary with the requirements at different times
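The scheme above can be sketched in a few lines of Python. This is a hypothetical model of the idea only (the policy and pool names follow the slide; the activation mechanics are Oracle's, shown on the next slides):

```python
# Each named policy fixes the size of every server pool; the active
# policy changes with the time window.

POLICIES = {
    "DayTime":   {"pool1": 2, "pool2": 1, "pool3": 1},
    "NightTime": {"pool1": 1, "pool2": 2, "pool3": 1},
    "Weekend":   {"pool1": 0, "pool2": 1, "pool3": 3},
}

def servers_needed(policy_name):
    """Total servers the cluster must supply under a given policy."""
    return sum(POLICIES[policy_name].values())

# Every policy fits the 4-node cluster from the slide exactly.
for name in POLICIES:
    assert servers_needed(name) == 4

print({name: servers_needed(name) for name in POLICIES})
```

The point of the policy set is that the same four servers are reshuffled between pools rather than over-provisioned per application.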

Flex Cluster – Proposed Cluster Policy Set

SERVER_POOL_NAMES=Free pool1 pool2 pool3

POLICY NAME=DayTime
  SERVERPOOL NAME=pool1 IMPORTANCE=0 MAX_SIZE=2 MIN_SIZE=2 SERVER_CATEGORY=
  SERVERPOOL NAME=pool2 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
  SERVERPOOL NAME=pool3 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=

POLICY NAME=NightTime
  SERVERPOOL NAME=pool1 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
  SERVERPOOL NAME=pool2 IMPORTANCE=0 MAX_SIZE=2 MIN_SIZE=2 SERVER_CATEGORY=
  SERVERPOOL NAME=pool3 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=

POLICY NAME=Weekend
  SERVERPOOL NAME=pool1 IMPORTANCE=0 MAX_SIZE=0 MIN_SIZE=0 SERVER_CATEGORY=
  SERVERPOOL NAME=pool2 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
  SERVERPOOL NAME=pool3 IMPORTANCE=0 MAX_SIZE=3 MIN_SIZE=3 SERVER_CATEGORY=

Flex Cluster – Cluster Policy Set Creation

Modify the default policy set to manage the three server pools:

$ crsctl modify policyset -attr "SERVER_POOL_NAMES=Free pool1 pool2 pool3"

Add the required three policies:

$ crsctl add policy DayTime
$ crsctl add policy NightTime
$ crsctl add policy Weekend

Modify the server pools:

$ crsctl modify serverpool pool1 -attr "MIN_SIZE=2,MAX_SIZE=2" -policy DayTime
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy NightTime
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=0,MAX_SIZE=0" -policy Weekend
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy DayTime
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=2,MAX_SIZE=2" -policy NightTime
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy Weekend
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy DayTime
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy NightTime
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=3,MAX_SIZE=3" -policy Weekend

Flex Cluster – Cluster Policy Set Creation

Activate the Weekend policy:

$ crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=Weekend"

Server allocations after the policy is applied:

$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server        State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
app1
      1        ONLINE  OFFLINE                    STABLE
      2        ONLINE  OFFLINE                    STABLE
app2
      1        ONLINE  ONLINE       mjk_has3_1    STABLE
app3
      1        ONLINE  ONLINE       mjk_has3_0    STABLE
      2        ONLINE  ONLINE       mjk_has3_2    STABLE
      3        ONLINE  ONLINE       mjk_has3_3    STABLE
--------------------------------------------------------------------------------

• Flex Cluster

• Flex Cluster - Server Pool Enhancements

• Multitenant database with 12c RAC

• Bundled Agents(XAG)

• What-If command

• Flex ASM

• Cloud FS

Agenda

• Multitenant databases contain a container database (CDB) and pluggable databases (PDBs)

• Supported with 12c RAC

• Each PDB runs as a service

• Each PDB service can run on one or more RAC instances

• Each PDB service can be deployed over server pool(s)

12c Multitenant Database & 12c RAC

• Flex Cluster

• Flex Cluster - Server Pool Enhancements

• Multitenant database with 12c RAC

• Bundled Agents(XAG)

• What-If command

• Flex ASM

• Cloud FS

Agenda

Flex Cluster – Bundled Agents (XAG)

(Diagram: the Flex Cluster from before, with XAG agents running alongside Oracle Clusterware on the Leaf nodes in the Application Tier)


• Oracle Clusterware can be used to provide HA to applications

• HA for applications was available earlier through application APIs and services

• With 11.2.0.3, the agents were available as a standalone download, XAGPACK.zip (http://oracle.com/goto/clusterware)

• 12.1 introduced Bundled Agents (XAG), supplied with the GI software itself*

• XAG agents reside on Leaf nodes

– Can be on Hub nodes as well if needed, e.g. for GoldenGate

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/ogiba-2189738.pdf

Flex Cluster – Bundled Agents(XAG) Introduction

• GI provides a pre-configured public core network resource: ora.net1.network

• Applications bind Application VIPs (APPVIP) to this network layer

• AGCTL is the interface to add an application resource to the GI, managed by the bundled agents

• Shared storage access: ACFS/NFS/DBFS

• Applications for which XAG agents are available:

– Apache HTTP & Tomcat

– GoldenGate

– Siebel

– JD Edwards

– PeopleSoft

– MySQL

GI & Bundled Agents

• Flex Cluster

• Flex Cluster - Server Pool Enhancements

• Multitenant database with 12c RAC

• Bundled Agents(XAG)

• What-If command

• Flex ASM

• Cloud FS

Agenda

• From 12c, DBAs can predict the impact of an operation on the cluster before actually performing it

• Can be used with both CRSCTL and SRVCTL commands

• Available for the following categories of events:

– Resources: start, stop, relocate, add, modify

– Server pools: add, remove, and modify

– Servers: add, remove, and relocate

– Policy: change active policy

– Server category: modify

12c Cluster- What-If Command
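Conceptually, a what-if evaluation walks the resource dependency graph without touching any resource. A toy Python model of that idea (illustrative only; resource names are borrowed from the example output that follows, and this is not how CRS is implemented):

```python
# dependents[r] = resources that require r to be running.
# Here the node VIP is needed by the local listener, as in the
# crsctl eval example on the next slide.
dependents = {
    "ora.rac0.vip": ["ora.LISTENER.lsnr"],
    "ora.LISTENER.lsnr": [],
}

def eval_stop(resource, dependents):
    """Report which resources WOULD go OFFLINE if `resource` were
    stopped, without stopping anything (a dry run over the graph)."""
    offline, stack = set(), [resource]
    while stack:
        r = stack.pop()
        if r not in offline:
            offline.add(r)
            stack.extend(dependents.get(r, []))
    return offline

print(sorted(eval_stop("ora.rac0.vip", dependents)))
# → ['ora.LISTENER.lsnr', 'ora.rac0.vip']
```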

[root@rac0 ~]# crsctl eval stop res ora.rac0.vip -f
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------
     1        Y         Resource 'ora.LISTENER.lsnr' (rac0) will be in state [OFFLINE]
     2        Y         Resource 'ora.rac0.vip' (1/1) will be in state [OFFLINE]
--------------------------------------------------------------------------------

[root@rac0 ~]# crsctl eval start res ora.rac0.vip -f
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------
     1        N         Error code [223] for entity [ora.rac0.vip]. Message is
                        [CRS-5702: Resource 'ora.rac0.vip' is already running on 'rac0'].
--------------------------------------------------------------------------------

12c Cluster- What-If Command

[root@rac0 ~]# crsctl eval delete server rac0 -f
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------
     1        Y         Resource 'ora.ASMNET1LSNR_ASM.lsnr' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.DATA.dg' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.LISTENER.lsnr' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.LISTENER_SCAN1.lsnr' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.asm' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.gns' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.gns.vip' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.net1.network' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.ons' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.orcl.db' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.proxy_advm' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.rac0.vip' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.scan1.vip' (1/1) will be in state [OFFLINE]
              Y         Server 'rac0' will be removed from pools [Generic ora.ORCL]
     2        Y         Resource 'ora.gns.vip' (1/1) will be in state [ONLINE] on server [rac1]
              Y         Resource 'ora.rac0.vip' (1/1) will be in state [ONLINE|INTERMEDIATE] on server [rac1]
<<output abridged>>
--------------------------------------------------------------------------------

• Flex Cluster

• Flex Cluster - Server Pool Enhancements

• Multitenant database with 12c RAC

• Bundled Agents(XAG)

• What-If command

• Flex ASM

• Cloud File System

Agenda

• An ASM instance runs locally on every node

• ASM clients can access ASM only from the local node

• Loss of the local ASM instance results in the unavailability of the clients connected to it

ASM of Old Times

• A 1:1 mapping of ASM instances to clients is no longer required

• Number of ASM instances = cardinality (default 3)

• Uses a dedicated network: the ASM network

• The ASM network is used exclusively for communication between ASM instances and their clients

• If the local ASM instance fails, clients fail over to another Hub node running an ASM instance

• Mandatory for 12c Flex Cluster

12c’s Flex ASM
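The failover behaviour above can be modelled in a few lines. This is a toy sketch of the idea only (node and instance names are invented; the real connection placement is managed by Oracle, not by this logic):

```python
# Flex ASM: fewer ASM instances than Hub nodes, and database clients
# fail over to a surviving ASM instance when theirs goes down.

hub_nodes = ["hub1", "hub2", "hub3", "hub4"]
CARDINALITY = 3
asm_nodes = set(hub_nodes[:CARDINALITY])      # ASM runs on 3 of the 4 nodes

def pick_asm(client_node, available):
    """Serve a client locally if an ASM instance runs there,
    otherwise from any surviving remote instance."""
    if client_node in available:
        return client_node
    return sorted(available)[0]

clients = {node: pick_asm(node, asm_nodes) for node in hub_nodes}
assert clients["hub4"] in asm_nodes           # hub4 already uses a remote ASM

# The ASM instance on hub1 fails: its clients reconnect; none are lost.
asm_nodes.discard("hub1")
clients = {node: pick_asm(node, asm_nodes) for node in hub_nodes}
assert all(asm in asm_nodes for asm in clients.values())
print(clients)
```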

Dedicated ASM Network in 12c Flex ASM

(Diagram: three Hub nodes running Oracle Clusterware, with GNS and separate Public, CSS (private), ASM and Storage networks; the dedicated ASM network carries ASM instance-to-client traffic, and the Hub nodes reach the ASM storage over the storage network)

Flex ASM - Failover

(Diagram: when a local ASM instance fails, its database clients fail over to an ASM instance on another Hub node)


• Flex ASM can be managed using:
– ASMCA
– CRSCTL
– SQL*Plus
– SRVCTL

$ asmcmd showclustermode
ASM cluster : Flex mode enabled

$ srvctl status asm -detail
ASM is running on mynoden02,mynoden01
ASM is enabled.

$ srvctl config asm
ASM instance count: 3

SQL> SELECT instance_name, db_name, status FROM V$ASM_CLIENT;

INSTANCE_NAME   DB_NAME  STATUS
--------------- -------- ------------
+ASM1           +ASM     CONNECTED
orcl1           orcl     CONNECTED
orcl2           orcl     CONNECTED

Administering Flex ASM

• Pure 12c Mode

– Cardinality != number of nodes

– Supports DB instance failover to other ASM instances

– Supports any DB instance connecting to any ASM instance

– Managed by cardinality

• Mixed Mode

– Flex ASM with cardinality = number of nodes

– An ASM instance on all the nodes

– Allows 12c DB instances to connect to remote ASM instances

– Pre-12c DB instances can connect to the local ASM instance

• Standard Mode

– Standard ASM installation and configuration

– Can be converted to Flex ASM mode using:
• ASMCA
• converttoFlexASM.sh

12c ASM- Mixed Mode Configuration

• In previous versions, the password file had to be copied to each node's Oracle home

• From 12c, you can use a shared password file stored in an ASM disk group

• The location of the password file is now maintained as an attribute of the corresponding DB/ASM cluster resource

• ASM instances themselves use OS authentication

12c-Shared Password File ASM

• Before 11.1, ASM could only read from the primary copy of the data, even on extended clusters

• Starting from 11.1, preferred read failure groups can be defined (ASM_PREFERRED_READ_FAILURE_GROUPS)

• From 12c, Even Read distributes data reads evenly across all the disks

• The choice of the least-loaded disk is automatic

– Enabled by default on non-Exadata systems

12c-Even Reads(ASM)
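A sketch of the even-read idea (illustrative only, not ASM internals): instead of always reading the primary mirror copy, each read goes to the least-loaded disk holding a copy of the extent.

```python
from collections import Counter

def least_loaded(copies, load):
    """Pick the disk with the fewest outstanding reads among the mirrors."""
    return min(copies, key=lambda disk: load[disk])

load = Counter()                   # outstanding reads per disk
mirrors = ["diskA", "diskB"]       # a two-way mirrored extent

# 100 reads end up split evenly instead of all hitting diskA (the primary).
for _ in range(100):
    disk = least_loaded(mirrors, load)
    load[disk] += 1

print(dict(load))  # → {'diskA': 50, 'diskB': 50}
```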

• Flex Cluster

• Flex Cluster - Server Pool Enhancements

• Multitenant database with 12c RAC

• Bundled Agents(XAG)

• What-If command

• Transaction Guard

• Application Continuity

• Flex ASM

• Cloud File System

Agenda

• Next-generation file system

• The 12c Cloud File System integrates:

– ASM Cluster File System (ACFS)

– ASM Dynamic Volume Manager (ADVM)

• Using Cloud FS, applications, databases and storage can be consolidated in private clouds

12c Cloud File System(Cloud FS)

Cloud FS in 12c-Overview

Cloud FS-Advanced Data Services

• Support for all types of files

• Enhanced snapshots (snap-of-snap)

• Auditing

• Encryption

• Tagging

Take Away For You

• 12c has revolutionized the HA stack, yet again

• Flex Cluster and Flex ASM are new paradigms in Oracle RAC technology

• Multitenancy is the solution for database consolidation

• Using Flex Cluster along with Multitenancy gives you a much better foundation for creating a private cloud

• Cloud FS is the foundation for next-generation storage solutions


Thank You!

@amansharma81

http://blog.aristadba.com

[email protected]