Sun Cluster 3.3 Concepts


  • 8/11/2019 Sun Cluster 3 3 Concepts

    1/98


    Copyright 2000, 2013, Oracle and/or its affiliates. All rights reserved.

    This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

    The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

    If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

    U.S. GOVERNMENT END USERS. Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

    This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

    Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

    This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.


    130801@25097


    Contents

    Preface ..................................................................................................... 7

    1  Introduction and Overview ....................................................................... 11
       Introduction to the Oracle Solaris Cluster Environment ........................... 11
       Three Views of the Oracle Solaris Cluster Software ................................. 12
          Hardware Installation and Service View .............................................. 13
          System Administrator View ................................................................ 13
          Application Developer View ............................................................... 15
       Oracle Solaris Cluster Software Tasks .................................................... 16

    2  Key Concepts for Hardware Service Providers ......................................... 17
       Oracle Solaris Cluster System Hardware and Software Components ........ 17
          Cluster Nodes ..................................................................................... 18
          Software Components for Cluster Hardware Members ........................ 20
          Multihost Devices ............................................................................... 21
          Local Disks ......................................................................................... 22
          Removable Media ............................................................................... 22
          Cluster Interconnect ........................................................................... 22
          Public Network Interfaces ................................................................... 23
          Logging Into the Cluster Remotely ...................................................... 24
          Administrative Console ...................................................................... 24
       SPARC: Oracle Solaris Cluster Topologies .............................................. 25
          SPARC: Clustered Pair Topology ........................................................ 25
          SPARC: Pair+N Topology ................................................................... 26
          SPARC: N+1 (Star) Topology .............................................................. 27
          SPARC: N*N (Scalable) Topology ....................................................... 28
          SPARC: Oracle VM Server for SPARC Software Guest Domains: Cluster in a Box Topology ... 29

       Recommended Quorum Configurations ................................................. 56
       Load Limits ........................................................................................... 58
       Data Services ........................................................................................ 59
          Data Service Methods ........................................................................ 62
          Failover Data Services ........................................................................ 62
          Scalable Data Services ........................................................................ 63
          Load-Balancing Policies ..................................................................... 64
          Failback Settings ................................................................................ 66
          Data Services Fault Monitors .............................................................. 66
       Developing New Data Services ............................................................... 66
          Characteristics of Scalable Services ..................................................... 66
          Data Service API and Data Service Development Library API ............. 67
       Using the Cluster Interconnect for Data Service Traffic .......................... 68
       Resources, Resource Groups, and Resource Types .................................. 69
          Resource Group Manager (RGM) ....................................................... 70
          Resource and Resource Group States and Settings ............................... 70
          Resource and Resource Group Properties ........................................... 71
       Support for Oracle Solaris Zones ........................................................... 72
          Support for Global-Cluster Non-Voting Nodes (Oracle Solaris Zones) Directly Through the RGM ... 72
          Support for Oracle Solaris Zones on Cluster Nodes Through Oracle Solaris Cluster HA for Solaris Zones ... 74
       Service Management Facility .................................................................. 75
       System Resource Usage ......................................................................... 76
          System Resource Monitoring .............................................................. 76
          Control of CPU .................................................................................. 77
          Viewing System Resource Usage ........................................................ 78
       Data Service Project Configuration ........................................................ 78
          Determining Requirements for Project Configuration ......................... 80
          Setting Per-Process Virtual Memory Limits ........................................ 81
          Failover Scenarios .............................................................................. 82
       Public Network Adapters and IP Network Multipathing ........................ 87
       SPARC: Dynamic Reconfiguration Support ........................................... 88
          SPARC: Dynamic Reconfiguration General Description ...................... 89
          SPARC: DR Clustering Considerations for CPU Devices .................... 89
          SPARC: DR Clustering Considerations for Memory ............................ 89
          SPARC: DR Clustering Considerations for Disk and Tape Drives ........ 90
          SPARC: DR Clustering Considerations for Quorum Devices ............... 91
          SPARC: DR Clustering Considerations for Cluster Interconnect Interfaces ... 91
          SPARC: DR Clustering Considerations for Public Network Interfaces ... 91

    Index ....................................................................................................... 93

    Oracle Solaris Cluster Concepts Guide, March 2013, E37723-01


    Preface

    The Oracle Solaris Cluster Concepts Guide contains conceptual information about the Oracle Solaris Cluster product on both SPARC and x86 based systems.

    Note - This Oracle Solaris Cluster release supports systems that use the SPARC and x86 families of processor architectures: UltraSPARC, SPARC64, AMD64, and Intel 64. In this document, x86 refers to the larger family of 64-bit x86 compatible products. Information in this document pertains to all platforms unless otherwise specified.

    Who Should Use This Book

    This document is intended for the following audiences:
    - Service providers who install and service cluster hardware
    - System administrators who install, configure, and administer Oracle Solaris Cluster software
    - Application developers who develop failover and scalable services for applications that are not currently included with the Oracle Solaris Cluster product

    To understand the concepts that are described in this book, you should be familiar with the Oracle Solaris operating system and have expertise with the volume manager software that you can use with the Oracle Solaris Cluster product.

    You should determine your system requirements and purchase the required equipment and software. The Oracle Solaris Cluster Data Services Planning and Administration Guide contains information about how to plan, install, set up, and use the Oracle Solaris Cluster software.
  • 8/11/2019 Sun Cluster 3 3 Concepts

    8/98

  • 8/11/2019 Sun Cluster 3 3 Concepts

    9/98

    Topic                                    Documentation
    Command and function references          Oracle Solaris Cluster Reference Manual
                                             Oracle Solaris Cluster Data Services Reference Manual

    Getting Help

    If you have problems installing or using the Oracle Solaris Cluster software, contact your service provider and provide the following information:
    - Your name and email address
    - Your company name, address, and phone number
    - The model and serial numbers of your systems
    - The release number of the operating system (for example, the Solaris 10 OS)
    - The release number of Oracle Solaris Cluster software (for example, 3.3)

    Use the commands in the following table to gather information about your systems for your service provider.

    Command                                    Function
    prtconf -v                                 Displays the size of the system memory and reports information about peripheral devices
    psrinfo -v                                 Displays information about processors
    showrev -p                                 Reports which patches are installed
    SPARC: prtdiag -v                          Displays system diagnostic information
    /usr/cluster/bin/clnode show-rev -v        Displays Oracle Solaris Cluster release and package version information

    You should also have available the contents of the /var/adm/messages file.

    Access to Oracle Support

    Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.


    Typographic Conventions

    The following table describes the typographic conventions that are used in this book.

    TABLE P-1  Typographic Conventions

    Typeface    Description                                      Example
    AaBbCc123   The names of commands, files, and directories,   Edit your .login file.
                and onscreen computer output                     Use ls -a to list all files.
                                                                 machine_name% you have mail.
    AaBbCc123   What you type, contrasted with onscreen          machine_name% su
                computer output                                  Password:
    aabbcc123   Placeholder: replace with a real name or value   The command to remove a file is rm filename.
    AaBbCc123   Book titles, new terms, and terms to be          Read Chapter 6 in the User's Guide.
                emphasized                                       A cache is a copy that is stored locally.
                                                                 Do not save the file.
                                                                 Note: Some emphasized items appear bold online.

    Shell Prompts in Command Examples

    The following table shows UNIX system prompts and superuser prompts for shells that are included in the Oracle Solaris OS. In command examples, the shell prompt indicates whether the command should be executed by a regular user or a user with privileges.

    TABLE P-2  Shell Prompts

    Shell                                                    Prompt
    Bash shell, Korn shell, and Bourne shell                 $
    Bash shell, Korn shell, and Bourne shell for superuser   #
    C shell                                                  machine_name%
    C shell for superuser                                    machine_name#


    Chapter 1  Introduction and Overview

    The Oracle Solaris Cluster product is an integrated hardware and software solution that you use to create highly available and scalable services. The Oracle Solaris Cluster Concepts Guide provides the conceptual information that you need to gain a more complete picture of the Oracle Solaris Cluster product. Use this book with the entire Oracle Solaris Cluster documentation set to provide a complete view of the Oracle Solaris Cluster software.

    This chapter provides an overview of the general concepts that underlie the Oracle Solaris Cluster product. It includes the following information:
    - Provides an introduction and high-level overview of the Oracle Solaris Cluster software
    - Describes several views of the Oracle Solaris Cluster software by audience
    - Identifies key concepts that you need to understand before you use the Oracle Solaris Cluster software
    - Maps key concepts to procedures and related information in the Oracle Solaris Cluster documentation
    - Maps cluster-related tasks to the related procedures in the documentation

    This chapter contains the following sections:
    - "Introduction to the Oracle Solaris Cluster Environment" on page 11
    - "Three Views of the Oracle Solaris Cluster Software" on page 12
    - "Oracle Solaris Cluster Software Tasks" on page 16

    Introduction to the Oracle Solaris Cluster Environment

    The Oracle Solaris Cluster environment extends the Oracle Solaris operating system into a cluster operating system. A cluster is a collection of one or more nodes that belong exclusively to that collection.


    A cluster offers several advantages over traditional single-server systems. These advantages include support for failover and scalable services, capacity for modular growth, the ability to set load limits on nodes, and a low entry price compared to traditional hardware fault-tolerant systems.

    Additional benefits of the Oracle Solaris Cluster software include the following:
    - Reduces or eliminates system downtime because of software or hardware failure
    - Ensures availability of data and applications to end users, regardless of the kind of failure that would normally take down a single-server system
    - Increases application throughput by enabling services to scale to additional processors by adding nodes to the cluster and balancing load
    - Provides enhanced availability of the system by enabling you to perform maintenance without shutting down the entire cluster

    In a cluster that runs on the Oracle Solaris OS, a global cluster and a zone cluster are types of clusters. A global cluster consists of a set of Oracle Solaris global zones, and is composed of one or more global-cluster voting nodes and optionally, zero or more global-cluster non-voting nodes.

    A global-cluster voting node is a native brand global zone in a global cluster that contributes votes to the total number of quorum votes, that is, membership votes in the cluster. This total determines whether the cluster has sufficient votes to continue operating. A global-cluster non-voting node is a native brand non-global zone in a global cluster that does not contribute votes to the total number of quorum votes.
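    The vote arithmetic behind this can be pictured with a short sketch. This is an illustration only, not Oracle's actual membership algorithm; in particular, quorum devices (discussed later in this guide) also contribute votes. The idea is that a cluster partition continues operating only while it holds a majority of all configured votes:

    ```python
    # Illustrative quorum-vote counting (a simplification for this guide's
    # discussion; the real cluster also counts quorum-device votes).

    def has_quorum(votes_present: int, votes_configured: int) -> bool:
        """A partition continues only with a strict majority of all votes."""
        return votes_present > votes_configured // 2

    # Three voting nodes, one vote each; non-voting nodes contribute nothing:
    assert has_quorum(2, 3)        # two of three votes: the cluster keeps running
    assert not has_quorum(1, 3)    # a lone node cannot form a majority and halts
    ```

    Note that with an even vote count a two-way split leaves neither half with a majority, which is one reason quorum devices exist.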

    A zone cluster consists of a set of non-global zones (one per global-cluster node) that are configured to behave as a separate virtual cluster.

    For more information about global clusters and zone clusters, see "Overview of Administering Oracle Solaris Cluster" and "Working With a Zone Cluster" in the Oracle Solaris Cluster System Administration Guide.

    Three Views of the Oracle Solaris Cluster Software

    This section describes three different views of the Oracle Solaris Cluster software by different audiences and the key concepts and documentation relevant to each view.

    These views are typical for the following professionals:
    - Hardware installation and service personnel
    - System administrators
    - Application developers


    The system administrator sees software that performs specific tasks:
    - Specialized cluster software that is integrated with the Oracle Solaris OS, which forms the high availability framework that monitors the health of the cluster nodes
    - Specialized software that monitors the health of user application programs that are running on the cluster nodes
    - Optional volume management software that sets up and administers disks
    - Specialized cluster software that enables all cluster nodes to access all storage devices, even those nodes that are not directly connected to disks
    - Specialized cluster software that enables files to appear on every cluster node as though they were locally attached to that node

    Key Concepts - System Administration

    System administrators need to understand the following concepts and processes:
    - The interaction between the hardware and software components
    - The general flow of how to install and configure the cluster, including:
      - Installing the Oracle Solaris OS
      - Installing and configuring Oracle Solaris Cluster software
      - Installing and configuring a volume manager (optional)
      - Installing and configuring application software to be cluster ready
      - Installing and configuring Oracle Solaris Cluster data service software
    - Cluster administrative procedures for adding, removing, replacing, and servicing cluster hardware and software components
    - Configuration modifications to improve performance

    The following sections contain material relevant to these key concepts:
    - "Administrative Interfaces" on page 38
    - "Cluster Time" on page 38
    - "High-Availability Framework" on page 39
    - "Campus Clusters" on page 39
    - "Global Devices" on page 40
    - "Device Groups" on page 43
    - "Global Namespace" on page 46
    - "Cluster File Systems" on page 47
    - "Disk Path Monitoring" on page 49
    - "Quorum and Quorum Devices" on page 52
    - "Load Limits" on page 58
    - "Data Services" on page 59
    - "Using the Cluster Interconnect for Data Service Traffic" on page 68
    - "Resources, Resource Groups, and Resource Types" on page 69
    - "Support for Oracle Solaris Zones" on page 72


    - "Service Management Facility" on page 75
    - "System Resource Usage" on page 76

    Oracle Solaris Cluster Documentation for System Administrators

    The following Oracle Solaris Cluster documents include procedures and information associated with system administration concepts:
    - Oracle Solaris Cluster Software Installation Guide
    - Oracle Solaris Cluster System Administration Guide
    - Oracle Solaris Cluster Error Messages Guide
    - Oracle Solaris Cluster 3.3 3/13 Release Notes

    Application Developer View

    The Oracle Solaris Cluster software provides data services for web and database applications. Data services are created by configuring off-the-shelf applications to run under control of the Oracle Solaris Cluster software. The Oracle Solaris Cluster software provides configuration files and management methods that start, stop, and monitor the applications. It also provides two kinds of highly available services for applications: failover services and scalable services. For more information, see "Key Concepts - Application Development" on page 15.

    If you need to create a new failover or scalable service, you can use the Oracle Solaris Cluster Application Programming Interface (API) and the Data Service Enabling Technologies API (DSET API) to develop the necessary configuration files and management methods that enable the service's application to run as a data service on the cluster. For more information on failover and scalable applications, see the Oracle Solaris Cluster System Administration Guide.
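    As a rough picture of what such management methods do, the sketch below wraps a stand-in application with start, stop, and monitor callbacks. All class and method names here are invented for illustration; the real APIs register compiled programs or scripts as callback methods with the cluster framework rather than Python methods:

    ```python
    # Hypothetical shape of the management methods a data service supplies.
    # Names are invented; this only illustrates the start/stop/monitor roles.

    class OffTheShelfApp:
        """Stands in for an unmodified application placed under cluster control."""
        def __init__(self):
            self.running = False

    class DataService:
        def __init__(self, app):
            self.app = app

        def start(self):
            # Invoked when the service is brought online on a node.
            self.app.running = True

        def stop(self):
            # Invoked when the service is taken offline or moved to another node.
            self.app.running = False

        def monitor(self):
            # Fault monitor: report the application's health to the framework.
            return "online" if self.app.running else "faulted"

    svc = DataService(OffTheShelfApp())
    svc.start()
    print(svc.monitor())    # prints: online
    svc.stop()
    print(svc.monitor())    # prints: faulted
    ```

    The point of the pattern is that the cluster framework, not the application, decides when each callback runs, so an unmodified application becomes highly available without code changes.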

    Key Concepts - Application Development

    Application developers should understand the following concepts:
    - How the characteristics of their application help determine whether it can be made to run as a failover or scalable data service.
    - The Oracle Solaris Cluster API, DSET API, and the generic data service. Developers need to determine which tool is most suitable for them to use to write programs or scripts to configure their application for the cluster environment.
    - The relationship between failover and scalable applications and nodes:
      - In a failover application, an application runs on one node at a time. If that node fails, the application fails over to another node in the same cluster.
      - In a scalable application, an application runs on several nodes to create a single, logical service. If a node that is running a scalable application fails, failover does not occur. The application continues to run on the other nodes. For more information, see "Failover Data Services" on page 62 and "Scalable Data Services" on page 63.
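    The contrast between the two models after a node failure can be shown in a toy sketch (node names and function names are invented for illustration; in the real product, placement decisions belong to the cluster framework):

    ```python
    # Toy contrast between failover and scalable services after a node failure.
    # Node and function names are invented for illustration only.

    def failover_hosts(nodes, failed, primary):
        """A failover app runs on one node at a time; if that node fails,
        its single instance restarts on a surviving node."""
        survivors = [n for n in nodes if n not in failed]
        return [primary] if primary not in failed else survivors[:1]

    def scalable_hosts(nodes, failed):
        """A scalable app runs on several nodes; when one fails, the
        surviving instances simply keep serving (no failover occurs)."""
        return [n for n in nodes if n not in failed]

    nodes = ["node1", "node2", "node3"]
    print(failover_hosts(nodes, failed={"node1"}, primary="node1"))  # ['node2']
    print(scalable_hosts(nodes, failed={"node1"}))  # ['node2', 'node3']
    ```

    Either way the logical service stays available; the difference is whether one instance moves or many instances shrink by one.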


    The following sections contain material relevant to these key concepts:
    - "Data Services" on page 59
    - "Resources, Resource Groups, and Resource Types" on page 69

    Oracle Solaris Cluster Documentation for Application Developers

    The following Oracle Solaris Cluster documents include procedures and information associated with the application developer concepts:
    - Oracle Solaris Cluster Data Services Developer's Guide
    - Oracle Solaris Cluster Data Services Planning and Administration Guide

    Oracle Solaris Cluster Software Tasks

    All Oracle Solaris Cluster software tasks require some conceptual background. The following table provides a high-level view of the tasks and the documentation that describes task steps. The concepts sections in this book describe how the concepts map to these tasks.

    TABLE 1–1 Task Map: Mapping User Tasks to Documentation

    Task                                            Instructions

    Install cluster hardware                        Oracle Solaris Cluster 3.3 3/13 Hardware Administration Manual
    Install Oracle Solaris software on the cluster  Oracle Solaris Cluster Software Installation Guide
    Install and configure Oracle Solaris Cluster    Oracle Solaris Cluster Software Installation Guide
    software
    Install and configure volume management         Oracle Solaris Cluster Software Installation Guide and your
    software                                        volume management documentation
    Install and configure Oracle Solaris Cluster    Oracle Solaris Cluster Data Services Planning and
    data services                                   Administration Guide
    Service cluster hardware                        Oracle Solaris Cluster 3.3 3/13 Hardware Administration Manual
    Administer Oracle Solaris Cluster software      Oracle Solaris Cluster System Administration Guide
    Administer volume management software           Oracle Solaris Cluster System Administration Guide and your
                                                    volume management documentation
    Administer application software                 Your application documentation
    Problem identification and suggested user       Oracle Solaris Cluster Error Messages Guide
    actions
    Create a new data service                       Oracle Solaris Cluster Data Services Developer's Guide


    Oracle Solaris Cluster Concepts Guide • March 2013, E37723–01


    Key Concepts for Hardware Service Providers

    This chapter describes the key concepts that are related to the hardware components of an Oracle Solaris Cluster configuration.

    This chapter covers the following topics:

    Oracle Solaris Cluster System Hardware and Software Components on page 17
    SPARC: Oracle Solaris Cluster Topologies on page 25
    x86: Oracle Solaris Cluster Topologies on page 33

    Oracle Solaris Cluster System Hardware and Software Components

    This information is directed primarily to hardware service providers. These concepts can help service providers understand the relationships between hardware components before they install, configure, or service cluster hardware. Cluster system administrators might also find this information useful as background information before installing, configuring, and administering cluster software.

    A cluster is composed of several hardware components, including the following:

    Cluster nodes with local disks (unshared)
    Multihost storage (disks/LUNs are shared between cluster nodes)
    Removable media (tapes, CD-ROMs, and DVDs)
    Cluster interconnect
    Public network interfaces
    Administrative console
    Console access devices

    The following figure illustrates how the hardware components work with each other.


    Administrative console and console access devices are used to reach the cluster nodes or the terminal concentrator as needed. The Oracle Solaris Cluster software enables you to combine the hardware components into a variety of configurations. The following sections describe these configurations:

    SPARC: Oracle Solaris Cluster Topologies on page 25
    x86: Oracle Solaris Cluster Topologies on page 33

    Cluster Nodes

    An Oracle Solaris host (or simply cluster node) is one of the following hardware or software configurations that runs the Oracle Solaris OS and its own processes:

    A physical machine that is not configured with a virtual machine or as a hardware domain
    Oracle VM Server for SPARC guest domain
    Oracle VM Server for SPARC I/O domain

    FIGURE 2–1 Oracle Solaris Cluster Hardware Components


    A hardware domain

    Depending on your platform, Oracle Solaris Cluster software supports the following configurations:

    SPARC: Oracle Solaris Cluster software supports from 1 to 16 cluster nodes in a cluster. Different hardware configurations impose additional limits on the maximum number of nodes that you can configure in a cluster composed of SPARC based systems. See SPARC: Oracle Solaris Cluster Topologies on page 25 for the supported configurations.

    x86: Oracle Solaris Cluster software supports from 1 to 8 cluster nodes in a cluster. Different hardware configurations impose additional limits on the maximum number of nodes that you can configure in a cluster composed of x86 based systems. See x86: Oracle Solaris Cluster Topologies on page 33 for the supported configurations.

    Cluster nodes are generally attached to one or more multihost storage devices. Nodes that are not attached to multihost devices can use a cluster file system to access the data on multihost devices. For example, one scalable services configuration enables nodes to service requests without being directly attached to multihost devices.

    In addition, nodes in parallel database configurations share concurrent access to all the disks. See Multihost Devices on page 21 for information about concurrent access to disks. See SPARC: Clustered Pair Topology on page 25 and x86: Clustered Pair Topology on page 33 for more information about parallel database configurations and scalable topology.

    Public network adapters attach nodes to the public networks, providing client access to the cluster.

    Cluster members communicate with the other nodes in the cluster through one or more physically independent networks. This set of physically independent networks is referred to as the cluster interconnect.

    Every node in the cluster is aware when another node joins or leaves the cluster. Additionally, every node in the cluster is aware of the resources that are running locally as well as the resources that are running on the other cluster nodes.

    Nodes in the same cluster should have the same OS and architecture, as well as similar processing, memory, and I/O capability to enable failover to occur without significant degradation in performance. Because of the possibility of failover, every node must have enough excess capacity to support the workload of all nodes for which they are a backup or secondary.
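The capacity rule in the preceding paragraph can be expressed as a quick check. This is a simplified sketch that treats workload as a single number; real sizing must consider CPU, memory, and I/O separately:

```python
def can_absorb_failover(capacity, own_load, backed_up_loads):
    """True if a node can host its own workload plus the workloads of
    every node it backs up, assuming the worst case where all of them fail."""
    return own_load + sum(backed_up_loads) <= capacity

# A node with capacity 100 running a load of 40, acting as secondary
# for nodes whose loads are 30 and 20 (example numbers only):
print(can_absorb_failover(100, 40, [30, 20]))  # True
print(can_absorb_failover(100, 40, [30, 40]))  # False: 110 > 100
```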


    Chapter 2 Key Concepts for Hardware Service Providers 19


    Software Components for Cluster Hardware Members

    To function as a cluster member, a cluster node must have the following software installed:

    Oracle Solaris OS
    Oracle Solaris Cluster software
    Data service applications
    Optional: Volume management (for example, Solaris Volume Manager)

    An exception is a configuration that uses hardware redundant array of independent disks (RAID). This configuration might not require a software volume manager such as Solaris Volume Manager.

    For more information, see the following:

    The Oracle Solaris Cluster Software Installation Guide for information about how to install the Oracle Solaris OS, Oracle Solaris Cluster, and volume management software.
    The Oracle Solaris Cluster Data Services Planning and Administration Guide for information about how to install and configure data services.
    Chapter 3, Key Concepts for System Administrators and Application Developers, for conceptual information about these software components.

    The following figure shows a high-level view of the software components that work together to create the Oracle Solaris Cluster software environment.

    FIGURE 2–2 High-Level Relationship of Oracle Solaris Cluster Components


    Multihost Devices

    LUNs that can be connected to more than one cluster node at a time are multihost devices. A cluster with more than two nodes does not require quorum devices. A quorum device is a shared storage device or quorum server that is shared by two or more nodes and that contributes votes that are used to establish a quorum. The cluster can operate only when a quorum of votes is available. For more information about quorum, see Quorum and Quorum Devices on page 52.
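The majority rule behind quorum can be sketched numerically. This sketch assumes one vote per node plus the votes contributed by a quorum device; actual vote assignment is configuration specific:

```python
def quorum_needed(total_votes):
    """Votes required for a majority of all configured votes."""
    return total_votes // 2 + 1

def partition_can_operate(votes_present, total_votes):
    """A cluster partition may operate only with a majority of votes."""
    return votes_present >= quorum_needed(total_votes)

# Two-node cluster: one vote per node plus one vote from a quorum device.
total = 3
print(quorum_needed(total))              # 2
print(partition_can_operate(1, total))   # False: a lone node cannot operate
print(partition_can_operate(2, total))   # True: node plus quorum device vote
```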

    FIGURE 2–3 Oracle Solaris Cluster Software Architecture


    Multihost devices have the following characteristics:

    Ability to store application data, application binaries, and configuration files.
    Protection against host failures. If clients request the data through one node and the node fails, the I/O requests are handled by the surviving node.

    A volume manager can provide software RAID protection for the data residing on the multihost devices.

    Combining multihost devices with disk mirroring protects against individual disk failure.

    Local Disks

    Local disks are the disks that are connected only to a single cluster node. Local disks are therefore not protected against node failure (they are not highly available). However, all disks, including local disks, are included in the global namespace and are configured as global devices. Therefore, the disks themselves are visible from all cluster nodes.

    See Global Devices on page 40 for more information about global devices.

    Removable Media

    Removable media, such as tape drives and CD-ROM drives, are supported in a cluster. You install, configure, and service these devices in the same way as in a nonclustered environment. Refer to Oracle Solaris Cluster 3.3 3/13 Hardware Administration Manual for information about installing and configuring removable media.

    See the Global Devices on page 40 section for more information about global devices.

    Cluster Interconnect

    The cluster interconnect is the physical configuration of devices that is used to transfer cluster-private communications and data service communications between cluster nodes in the cluster.

    Only nodes in the cluster can be connected to the cluster interconnect. The Oracle Solaris Cluster security model assumes that only cluster nodes have physical access to the cluster interconnect.

    You can set up from one to six cluster interconnects in a cluster. While a single cluster interconnect reduces the number of adapter ports that are used for the private interconnect, it provides no redundancy and less availability. If a single interconnect fails, moreover, the cluster is at a higher risk of having to perform automatic recovery. Whenever possible, install two or more cluster interconnects to provide redundancy and scalability, and therefore higher availability, by avoiding a single point of failure.
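The value of redundant interconnects can be illustrated with a simple probability sketch (hypothetical failure rates, and an independence assumption that real links sharing infrastructure may violate):

```python
def prob_all_interconnects_fail(p_link_down, n_links):
    """Probability that every private interconnect is down at once,
    assuming links fail independently."""
    return p_link_down ** n_links

# With a hypothetical 1% chance that any one link is down at a given moment:
print(prob_all_interconnects_fail(0.01, 1))  # 0.01: one link is a single point of failure
print(prob_all_interconnects_fail(0.01, 2))  # about 1e-4: two links are far safer
```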


    The cluster interconnect consists of three hardware components: adapters, junctions, and cables. The following list describes each of these hardware components.

    Adapters: The network interface cards that are located in each cluster node. Their names are constructed from a device name immediately followed by a physical-unit number, for example, qfe2. Some adapters have only one physical network connection, but others, like the qfe card, have multiple physical connections. Some adapters combine both the functions of a NIC and an HBA. A network adapter with multiple interfaces could become a single point of failure if the entire adapter fails. For maximum availability, plan your cluster so that the paths between two nodes do not depend on a single network adapter.

    Junctions: The switches that are located outside of the cluster nodes. In a two-node cluster, junctions are not mandatory. In that case, the nodes can be connected to each other through back-to-back network cable connections. Configurations of more than two nodes generally require junctions.

    Cables: The physical connections that you install either between two network adapters or between an adapter and a junction.
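The adapter naming convention (a device name immediately followed by a physical-unit number, as in qfe2) can be split apart as follows. This is an illustrative sketch only; Solaris itself derives these names from driver instance numbers:

```python
import re

def split_adapter_name(name):
    """Split an adapter name such as 'qfe2' into its device name and
    physical-unit (instance) number. The trailing digit run is the unit."""
    match = re.fullmatch(r"(.*?[a-z])(\d+)", name)
    if match is None:
        raise ValueError(f"not a device-plus-instance name: {name!r}")
    return match.group(1), int(match.group(2))

print(split_adapter_name("qfe2"))     # ('qfe', 2)
print(split_adapter_name("e1000g0"))  # ('e1000g', 0)
```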

    Figure 2–4 shows how the two nodes are connected by a transport adapter, cables, and a transport switch.

    Public Network Interfaces

    Clients connect to the cluster through the public network interfaces.

    You can set up cluster nodes in the cluster to include multiple public network interface cards that perform the following functions:

    Enable a cluster node to be connected to multiple subnets
    Provide public network availability by having interfaces acting as backups for one another (through IPMP)

    If one of the adapters fails, IP network multipathing software is called to fail over the defective interface to another adapter in the group. For more information about IPMP, see Chapter 27, Introducing IPMP (Overview), in Oracle Solaris Administration: IP Services.

    No special hardware considerations relate to clustering for the public network interfaces.

    FIGURE 2–4 Cluster Interconnect


    Logging Into the Cluster Remotely

    You must have console access to all nodes in the cluster.

    To gain console access, use one of the following methods:

    The cconsole utility can be used from the command line to log into the cluster remotely. For more information, see the cconsole(1M) man page.
    The terminal concentrator that you purchased with your cluster hardware.
    The system controller on Oracle servers, such as Sun Fire servers (also for SPARC based clusters).
    Another device that can access ttya on each node.

    Only one supported terminal concentrator is available from Oracle and use of the supported Sun terminal concentrator is optional. The terminal concentrator enables access to /dev/console on each node by using a TCP/IP network. The result is console-level access for each node from a remote machine anywhere on the network.

    Other console access methods include other terminal concentrators, tip serial port access from another node, and dumb terminals.

    Caution: You can attach a keyboard or monitor to a cluster node provided that the keyboard and monitor are supported by the base server platform. However, you cannot use that keyboard or monitor as a console device. You must redirect the console to a serial port and Remote System Control (RSC) by setting the appropriate OpenBoot PROM parameter.

    Administrative Console

    You can use a dedicated workstation or administrative console to reach the cluster nodes or the terminal concentrator as needed to administer the active cluster. Usually, you install and run administrative tool software, such as the Cluster Control Panel (CCP), on the administrative console. Using cconsole under the CCP enables you to connect to more than one node console at a time. For more information about how to use the CCP, see Chapter 1, Introduction to Administering Oracle Solaris Cluster, in Oracle Solaris Cluster System Administration Guide.

    You use the administrative console for remote access to the cluster nodes, either over the public network or, optionally, through a network-based terminal concentrator.

    Oracle Solaris Cluster does not require a dedicated administrative console, but using one provides these benefits:

    Enables centralized cluster management by grouping console and management tools on the same machine
    Provides potentially quicker problem resolution by your hardware service provider


    SPARC: Oracle Solaris Cluster Topologies

    A topology is the connection scheme that connects the Oracle Solaris nodes in the cluster to the storage platforms that are used in an Oracle Solaris Cluster environment. Oracle Solaris Cluster software supports any topology that adheres to the following guidelines:

    An Oracle Solaris Cluster environment that is composed of SPARC based systems supports from 1 to 16 cluster nodes in a cluster. Different hardware configurations impose additional limits on the maximum number of nodes that you can configure in a cluster composed of SPARC based systems.
    A shared storage device can connect to as many nodes as the storage device supports.
    Shared storage devices do not need to connect to all nodes of the cluster. However, these storage devices must connect to at least two nodes.

    You can configure Oracle VM Server for SPARC software guest domains and I/O domains as cluster nodes. In other words, you can create a clustered pair, pair+N, N+1, and N*N cluster that consists of any combination of physical machines, I/O domains, and guest domains. You can also create clusters that consist of only guest domains and I/O domains.

    Oracle Solaris Cluster software does not require you to configure a cluster by using specific topologies. The following topologies are described to provide the vocabulary to discuss a cluster's connection scheme:

    SPARC: Clustered Pair Topology on page 25
    SPARC: Pair+N Topology on page 26
    SPARC: N+1 (Star) Topology on page 27
    SPARC: N*N (Scalable) Topology on page 28
    SPARC: Oracle VM Server for SPARC Software Guest Domains: Cluster in a Box Topology on page 29
    SPARC: Oracle VM Server for SPARC Software Guest Domains: Clusters Span Two Different Hosts Topology on page 30
    SPARC: Oracle VM Server for SPARC Software Guest Domains: Redundant I/O Domains on page 32

    The following sections include sample diagrams of each topology.

    SPARC: Clustered Pair Topology

    A clustered pair topology is two or more pairs of Oracle Solaris nodes that operate under a single cluster administrative framework. In this configuration, failover occurs only between a pair. However, all nodes are connected by the cluster interconnect and operate under Oracle Solaris Cluster software control. You might use this topology to run a parallel database application on one pair and a failover or scalable application on another pair.


    Using the cluster file system, you could also have a two-pair configuration. More than two nodes can run a scalable service or parallel database, even though all the nodes are not directly connected to the disks that store the application data.

    The following figure illustrates a clustered pair configuration.

    SPARC: Pair+N Topology

    The pair+N topology includes a pair of cluster nodes that are directly connected to the following:

    Shared storage
    An additional set of nodes that use the cluster interconnect to access shared storage (they have no direct connection themselves)

    The following figure illustrates a pair+N topology where two of the four nodes (Host 3 and Host 4) use the cluster interconnect to access the storage. This configuration can be expanded to include additional nodes that do not have direct access to the shared storage.

    FIGURE 2–5 SPARC: Clustered Pair Topology


    SPARC: N+1 (Star) Topology

    An N+1 topology includes some number of primary cluster nodes and one secondary node. You do not have to configure the primary nodes and secondary node identically. The primary nodes actively provide application services. The secondary node need not be idle while waiting for a primary node to fail.

    The secondary node is the only node in the configuration that is physically connected to all the multihost storage.

    If a failure occurs on a primary node, Oracle Solaris Cluster fails over the resources to the secondary node. The secondary node is where the resources function until they are switched back (either automatically or manually) to the primary node.

    The secondary node must always have enough excess CPU capacity to handle the load if one of the primary nodes fails.

    The following figure illustrates an N+1 configuration.

    FIGURE 2–6 Pair+N Topology


    SPARC: N*N (Scalable) Topology

    An N*N topology enables every shared storage device in the cluster to connect to every cluster node in the cluster. This topology enables highly available applications to fail over from one node to another without service degradation. When failover occurs, the new node can access the storage device by using a local path instead of the private interconnect.

    The following figure illustrates an N*N configuration.

    FIGURE 2–7 SPARC: N+1 Topology

    FIGURE 2–8 SPARC: N*N Topology


    SPARC: Oracle VM Server for SPARC Software Guest Domains: Cluster in a Box Topology

    In this Oracle VM Server for SPARC guest domain topology, a cluster and every node within that cluster are located on the same Oracle Solaris host. Each guest domain acts the same as a node in a cluster. This configuration includes three nodes rather than only two.

    In this topology, you do not need to connect each virtual switch (VSW) for the private network to a physical network because they need only communicate with each other. In this topology, cluster nodes can also share the same storage device because all cluster nodes are located on the same host or box. To learn more about guidelines for using and installing guest domains or I/O domains in a cluster, see How to Install Oracle VM Server for SPARC Software and Create Domains in Oracle Solaris Cluster Software Installation Guide.

    Caution: The common host or box in this topology represents a single point of failure.

    All nodes in the cluster are located on the same host or box. Developers and administrators might find this topology useful for testing and other non-production tasks. This topology is also called a cluster in a box. Multiple clusters can share the same physical host or box.

    The following figure illustrates a cluster in a box configuration.


    SPARC: Oracle VM Server for SPARC Software Guest Domains: Clusters Span Two Different Hosts Topology

    In this Oracle VM Server for SPARC software guest domain topology, each cluster spans two different hosts, with one node on each host. Each guest domain acts the same as a node in a cluster. In this configuration, because both clusters share the same interconnect switch, you must specify a different private network address on each cluster. If you specify the same private network address on clusters that share an interconnect switch, the configuration fails.

    To learn more about guidelines for using and installing Oracle VM Server for SPARC software guest domains or I/O domains in a cluster, see How to Install Oracle VM Server for SPARC Software and Create Domains in Oracle Solaris Cluster Software Installation Guide.
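The requirement that clusters sharing an interconnect switch use distinct private network addresses can be checked with Python's standard ipaddress module. The subnet values below are made-up examples, not the Oracle Solaris Cluster defaults:

```python
import ipaddress

def private_nets_conflict(net_a, net_b):
    """True if two clusters' private network ranges overlap, which is
    invalid when the clusters share an interconnect switch."""
    return ipaddress.ip_network(net_a).overlaps(ipaddress.ip_network(net_b))

# Example subnets only (not the product's default private network):
print(private_nets_conflict("172.16.0.0/24", "172.16.0.0/24"))  # True: invalid
print(private_nets_conflict("172.16.0.0/24", "172.16.1.0/24"))  # False: distinct
```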

    FIGURE 2–9 SPARC: Cluster in a Box Topology

    The following figure illustrates a configuration in which more than a single cluster spans two different hosts.

    FIGURE 2–10 SPARC: Clusters Span Two Different Hosts


    SPARC: Oracle VM Server for SPARC Software Guest Domains: Redundant I/O Domains

    In this Oracle VM Server for SPARC software guest domain topology, multiple I/O domains ensure that guest domains, which are configured as cluster nodes, continue to operate if an I/O domain fails. Each guest domain node acts the same as a cluster node in a cluster.

    To learn more about guidelines for using and installing guest domains or I/O domains in a cluster, see How to Install Oracle VM Server for SPARC Software and Create Domains in Oracle Solaris Cluster Software Installation Guide.

    The following figure illustrates a configuration in which redundant I/O domains ensure that nodes within the cluster continue to operate if an I/O domain fails.

    FIGURE 2–11 SPARC: Redundant I/O Domains


    x86: Oracle Solaris Cluster Topologies


    x86: N+1(Star)TopologyAn N+1 topology includes some number of primary cluster nodes andonesecondarynode.

    Youdo nothave to congure theprimary nodes andsecondarynode identically. Theprimary nodes actively provide application services. Thesecondarynode need notbe idle while waitingfor a primary node to fail.

The secondary node is the only node in the configuration that is physically connected to all the multihost storage.

If a failure occurs on a primary node, Oracle Solaris Cluster fails over the resources to the secondary node. The resources function on the secondary node until they are switched back (either automatically or manually) to the primary node.

    The secondary node must always have enough excess CPU capacity to handle the load if one of the primary nodes fails.

The following figure illustrates an N+1 configuration.

N*N (Scalable) Topology

An N*N topology enables every shared storage device in the cluster to connect to every cluster node in the cluster. This topology enables highly available applications to fail over from one node to another without service degradation. When failover occurs, the new node can access the storage device by using a local path instead of the private interconnect.

The following figure illustrates an N*N configuration.

FIGURE 2-13 x86: N+1 Topology

[Figure: Hosts 1-3 are primaries and Host 4 is the secondary. All four hosts attach to two junctions; only the secondary host is physically connected to all three storage arrays.]


FIGURE 2-14 N*N Topology

[Figure: Hosts 1-4 all connect through two junctions to both shared storage arrays.]

    Chapter 2 Key Concepts for Hardware Service Providers 35


Key Concepts for System Administrators and Application Developers

This chapter describes the key concepts that are related to the software components of the Oracle Solaris Cluster environment. The information in this chapter is directed to system administrators and application developers who use the Oracle Solaris Cluster API. Cluster administrators can use this information in preparation for installing, configuring, and administering cluster software. Application developers can use the information to understand the cluster environment in which they work.

This chapter covers the following topics:

- "Administrative Interfaces" on page 38
- "High-Availability Framework" on page 39
- "Device Groups" on page 43
- "Global Namespace" on page 46
- "Cluster File Systems" on page 47
- "Disk Path Monitoring" on page 49
- "Quorum and Quorum Devices" on page 52
- "Load Limits" on page 58
- "Data Services" on page 59
- "Developing New Data Services" on page 66
- "Using the Cluster Interconnect for Data Service Traffic" on page 68
- "Resources, Resource Groups, and Resource Types" on page 69
- "Support for Oracle Solaris Zones" on page 72
- "Service Management Facility" on page 75
- "System Resource Usage" on page 76
- "Data Service Project Configuration" on page 78
- "Public Network Adapters and IP Network Multipathing" on page 87
- "SPARC: Dynamic Reconfiguration Support" on page 88


    Administrative Interfaces


You can choose how you install, configure, and administer the Oracle Solaris Cluster software from several user interfaces. You can perform system administration tasks either through the Oracle Solaris Cluster Manager graphical user interface (GUI) or through the command-line interface. Some utilities on top of the command-line interface, such as scinstall and clsetup, simplify selected installation and configuration tasks. Refer to "Administration Tools" in Oracle Solaris Cluster System Administration Guide for more information about the administrative interfaces.

Cluster Time

Time between all Oracle Solaris nodes in a cluster must be synchronized. Whether you synchronize the cluster nodes with any outside time source is not important to cluster operation. The Oracle Solaris Cluster software employs the Network Time Protocol (NTP) to synchronize the clocks between nodes.

A change in the system clock of a fraction of a second generally causes no problems. However, if you run date or rdate on an active cluster, you can force a time change much larger than a fraction of a second to synchronize the system clock to the time source. This forced change might cause problems with file modification timestamps or confuse the NTP service.

When you install the Oracle Solaris OS on each cluster node, you have an opportunity to change the default time and date setting for the node. You can accept the factory default.

When you install Oracle Solaris Cluster software by using the scinstall command, one step in the process is to configure NTP for the cluster. Oracle Solaris Cluster software supplies two template files, etc/inet/ntp.conf and etc/inet/ntp.conf.sc, that establish a peer relationship between all cluster nodes. One node is designated the preferred node. Nodes are identified by their private host names and time synchronization occurs across the cluster interconnect. For instructions about how to configure the cluster for NTP, see Chapter 2, "Installing Software on Global-Cluster Nodes," in Oracle Solaris Cluster Software Installation Guide.

Alternately, you can set up one or more NTP servers outside the cluster and change the ntp.conf file to reflect that configuration.
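As a minimal sketch of such an externally pointed configuration, the ntp.conf file might contain little more than server entries (the hostnames below are hypothetical placeholders, not values from this guide):

```
# Hypothetical external time sources, replacing the cluster-peer template
server ntp1.example.com prefer
server ntp2.example.com
```

The prefer keyword marks the server that NTP should favor when several sources are reachable.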

In normal operation, you should never need to adjust the time on the cluster. However, if the time was set incorrectly when you installed the Oracle Solaris Operating System and you want to change it, the procedure for doing so is included in Chapter 9, "Administering the Cluster," in Oracle Solaris Cluster System Administration Guide.


http://www.oracle.com/pls/topic/lookup?ctx=E37745&id=CLADMx-4n683
http://www.oracle.com/pls/topic/lookup?ctx=E37745&id=CLISTz40001fb1003552
http://www.oracle.com/pls/topic/lookup?ctx=E37745&id=CLADMz4000075997776

Campus Clusters

Standard Oracle Solaris Cluster systems provide high availability and reliability from a single location. If your application must remain available after unpredictable disasters such as an earthquake, flood, or power outage, you can configure your cluster as a campus cluster.

Campus clusters enable you to locate cluster components, such as cluster nodes and shared storage, in separate rooms that are several kilometers apart. You can separate your nodes and shared storage and locate them in different facilities around your corporate campus or elsewhere within several kilometers. When a disaster strikes one location, the surviving nodes can take over service for the failed node. This enables applications and data to remain available for your users. For additional information about campus cluster configurations, see the Oracle Solaris Cluster 3.3 3/13 Hardware Administration Manual.

High-Availability Framework

The Oracle Solaris Cluster software makes all components on the path between users and data highly available, including network interfaces, the applications themselves, the file system, and the multihost devices. A cluster component is generally highly available if it survives any single (software or hardware) failure in the system. Failures that are caused by data corruption within the application itself are excluded.

The following table shows types of Oracle Solaris Cluster component failures (both hardware and software) and the kinds of recovery that are built into the high-availability framework.

TABLE 3-1 Levels of Oracle Solaris Cluster Failure Detection and Recovery

Failed Cluster Component     Software Recovery                            Hardware Recovery
Data service                 HA API, HA framework                         Not applicable
Public network adapter       IP network multipathing                      Multiple public network adapter cards
Cluster file system          Primary and secondary replicas               Multihost devices
Mirrored multihost device    Volume management (Solaris Volume Manager)   Hardware RAID-5
Global device                Primary and secondary replicas               Multiple paths to the device, cluster transport junctions
Private network              HA transport software                        Multiple private hardware-independent networks
Node                         CMM, failfast driver                         Multiple nodes
Zone                         HA API, HA framework                         Not applicable


http://www.oracle.com/pls/topic/lookup?ctx=E37745&id=CLHAM

The Oracle Solaris Cluster software's high-availability framework detects a node failure quickly and migrates the framework resources to a remaining node in the cluster. At no time are all framework resources unavailable. Framework resources that are unaffected by the failed node remain fully available during recovery. Furthermore, framework resources of the failed node become available as soon as they are recovered. A recovered framework resource does not have to wait for all other framework resources to complete their recovery.

Highly available framework resources are recovered transparently to most of the applications (data services) that are using the resource. The semantics of framework resource access are fully preserved across node failure. The applications cannot detect that the framework resource server has been moved to another node. Failure of a single node is completely transparent to programs on remaining nodes because they use the files, devices, and disk volumes that are available to the recovery node. This transparency exists if an alternative hardware path exists to the disks from another node. An example is the use of multihost devices that have ports to multiple nodes.

Global Devices

The Oracle Solaris Cluster software uses global devices to provide cluster-wide, highly available access to any device in a cluster from any node. If a node fails while providing access to a global device, the Oracle Solaris Cluster software automatically uses another path to the device. The Oracle Solaris Cluster software then redirects the access to that path. For more information, see "Device IDs and DID Pseudo Driver" on page 40. Oracle Solaris Cluster global devices include disks, CD-ROMs, and tapes. However, the only multiported global devices that Oracle Solaris Cluster software supports are disks. Consequently, CD-ROM and tape devices are not currently highly available devices. The local disks on each server are also not multiported, and thus are not highly available devices.

The cluster automatically assigns unique IDs to each disk, CD-ROM, and tape device in the cluster. This assignment enables consistent access to each device from any node in the cluster. The global device namespace is held in the /dev/global directory. See "Global Namespace" on page 46 for more information.

Multiported global devices provide more than one path to a device. Because multihost disks are part of a device group that is hosted by more than one cluster node, the multihost disks are made highly available.

Device IDs and DID Pseudo Driver

The Oracle Solaris Cluster software manages shared devices through a construct known as the device ID (DID) pseudo driver. This driver is used to automatically assign unique IDs to every device in the cluster, including multihost disks, tape drives, and CD-ROMs.

The DID pseudo driver is an integral part of the shared device access feature of the cluster. The DID driver probes all nodes of the cluster and builds a list of unique devices, assigning to each


device a unique major and a minor number that are consistent on all nodes of the cluster. Access to shared devices is performed by using the normalized DID logical name instead of the traditional Oracle Solaris logical name, such as c0t0d0 for a disk.

This approach ensures that any application that accesses disks (such as a volume manager or applications that use raw devices) uses a consistent path across the cluster. This consistency is especially important for multihost disks because the local major and minor numbers for each device can vary from node to node, thus changing the Oracle Solaris device naming conventions as well. For example, Host1 might identify a multihost disk as c1t2d0, and Host2 might identify the same disk completely differently as c3t2d0. The DID framework assigns a common (normalized) logical name, such as d10, that the nodes use instead, giving each node a consistent mapping to the multihost disk.

You update and administer device IDs with the cldevice command. See the cldevice(1CL) man page.
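As an illustrative sketch, a cldevice listing for the example above might look like the following session (the hostnames and controller numbers are hypothetical, and the exact output columns can vary by release):

```
# cldevice list -v d10
DID Device          Full Device Path
----------          ----------------
d10                 host1:/dev/rdsk/c1t2d0
d10                 host2:/dev/rdsk/c3t2d0
```

Both hosts reach the same physical disk through their own controller paths, but applications refer only to d10.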

Zone Membership

Oracle Solaris Cluster software also tracks zone membership by detecting when a zone boots up or halts. These changes also trigger a reconfiguration. A reconfiguration can redistribute cluster resources among the nodes in the cluster.

Cluster Membership Monitor

To ensure that data is kept safe from corruption, all nodes must reach a consistent agreement on the cluster membership. When necessary, the CMM coordinates a cluster reconfiguration of cluster services (applications) in response to a failure.

The CMM receives information about connectivity to other nodes from the cluster transport layer. The CMM uses the cluster interconnect to exchange state information during a reconfiguration. A problem called split brain can occur when the cluster interconnect between cluster nodes is lost. The cluster becomes partitioned into subclusters, and each subcluster is no longer aware of other subclusters. A subcluster that is not aware of the other subclusters could cause a conflict in shared resources, such as duplicate network addresses and data corruption. The quorum subsystem manages the situation to ensure that split brain does not occur, and that one partition survives. For more information, see "Quorum and Quorum Devices" on page 52.
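The majority rule that the quorum subsystem applies can be sketched as a simple vote count. This is an illustrative model only, not the actual CMM implementation, and the function name is invented:

```python
def partition_survives(partition_votes: int, total_votes: int) -> bool:
    """Return True if a partition holds a strict majority of all
    configured quorum votes. At most one partition can hold a strict
    majority, which is how the quorum subsystem prevents split brain:
    a partition without quorum halts instead of serving data."""
    return 2 * partition_votes > total_votes

# Two-node cluster plus one quorum-device vote: 3 votes total.
# The partition that reserves the quorum device holds 2 of 3 votes.
print(partition_survives(2, 3))  # True  -> this partition continues
print(partition_survives(1, 3))  # False -> this partition halts
```

Because a strict majority is required, two partitions can never both satisfy the test, so at most one subcluster continues to run.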

After detecting a change in cluster membership, the CMM performs a synchronized configuration of the cluster. In a synchronized configuration, cluster resources might be redistributed, based on the new membership of the cluster.


http://www.oracle.com/pls/topic/lookup?ctx=E37745&id=CLCRMcldevice-1cl


Device Group Failover

Because a disk enclosure is connected to more than one cluster node, all device groups in that enclosure are accessible through an alternate path if the node currently mastering the device group fails. The failure of the node that is mastering the device group does not affect access to the device group except for the time it takes to perform the recovery and consistency checks. During this time, all requests are blocked (transparently to the application) until the system makes the device group available.

Device Group Ownership

This section describes device group properties that enable you to balance performance and availability in a multiported disk configuration. Oracle Solaris Cluster software provides two properties that configure a multiported disk configuration: preferenced and numsecondaries. You can control the order in which nodes attempt to assume control if a failover occurs by using the preferenced property. Use the numsecondaries property to set the number of secondary nodes for a device group that you want.

FIGURE 3-1 Device Group Before and After Failover

[Figure: Before failover, client access goes through Host 1, the primary, and data access flows from Host 1 to the multihost disks of the disk device group; Host 2 is the secondary. After failover, client access and data access go through Host 2, now the primary, and Host 1 becomes the secondary.]


A highly available service is considered down when the primary node fails and when no eligible secondary nodes can be promoted to primary nodes. If service failover occurs and the preferenced property is true, then the nodes follow the order in the node list to select a secondary node. The node list defines the order in which nodes attempt to assume primary control or transition from spare to secondary. You can dynamically change the preference of a device service by using the clsetup command. The preference that is associated with dependent service providers, for example, a global file system, is identical to the preference of the device service.

Secondary nodes are check-pointed by the primary node during normal operation. In a multiported disk configuration, checkpointing each secondary node causes cluster performance degradation and memory overhead. Spare node support was implemented to minimize the performance degradation and memory overhead that checkpointing caused. By default, your device group has one primary and one secondary. The remaining available provider nodes become spares. If failover occurs, the secondary becomes primary and the node highest in priority on the node list becomes secondary.

You can set the number of secondary nodes that you want to any integer between one and the number of operational nonprimary provider nodes in the device group.

Note - If you are using Solaris Volume Manager, you must create the device group first. Use the metaset command before you use the cldevicegroup command to set the numsecondaries property.

The default number of secondaries for device services is 1. The actual number of secondary providers that is maintained by the replica framework is the number that you want, unless the number of operational nonprimary providers is less than the number that you want. You must alter the numsecondaries property and double-check the node list if you are adding or removing nodes from your configuration. Maintaining the node list and number of secondaries prevents conflict between the configured number of secondaries and the actual number that is allowed by the framework.

Use the cldevicegroup command for Solaris Volume Manager device groups, in conjunction with the preferenced and numsecondaries property settings, to manage the addition of nodes to and the removal of nodes from your configuration.
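As a sketch, setting both properties on a device group might look like the following session (the device group name devgrp1 is a hypothetical example):

```
# cldevicegroup set -p preferenced=true -p numsecondaries=2 devgrp1
# cldevicegroup show devgrp1
```

The show subcommand lets you confirm the resulting node list and property values before relying on them for failover.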

Refer to "Overview of Administering Cluster File Systems" in Oracle Solaris Cluster System Administration Guide for procedural information about changing device group properties.


http://www.oracle.com/pls/topic/lookup?ctx=E37745&id=CLADMx-4n6a5

Global Namespace

The Oracle Solaris Cluster software mechanism that enables global devices is the global namespace. The global namespace includes the /dev/global/ hierarchy as well as the volume manager namespaces. The global namespace reflects both multihost disks and local disks (and any other cluster device, such as CD-ROMs and tapes). Each cluster node that is physically connected to multihost disks provides a path to the storage for any node in the cluster.

For Solaris Volume Manager, the volume manager namespaces are located in the /dev/md/diskset/dsk (and rdsk) directories. These namespaces consist of directories for each Solaris Volume Manager disk set imported throughout the cluster.

In the Oracle Solaris Cluster software, each device host in the local volume manager namespace is replaced by a symbolic link to a device host in the /global/.devices/node@nodeID file system. nodeID is an integer that represents the nodes in the cluster. Oracle Solaris Cluster software continues to present the volume manager devices as symbolic links in their standard locations as well. Both the global namespace and standard volume manager namespace are available from any cluster node.

The advantages of the global namespace include the following:

- Each host remains fairly independent, with little change in the device administration model.
- Third-party generated device trees are still valid and continue to work.
- Given a local device name, an easy mapping is provided to obtain its global name.

Local and Global Namespaces Example

The following table shows the mappings between the local and global namespaces for a multihost disk, c0t0d0s0.

TABLE 3-2 Local and Global Namespace Mappings

Component or Path             Local Host Namespace       Global Namespace
Oracle Solaris logical name   /dev/dsk/c0t0d0s0          /global/.devices/node@nodeID/dev/dsk/c0t0d0s0
DID name                      /dev/did/dsk/d0s0          /global/.devices/node@nodeID/dev/did/dsk/d0s0
Solaris Volume Manager        /dev/md/diskset/dsk/d0     /global/.devices/node@nodeID/dev/md/diskset/dsk/d0

The global namespace is automatically generated on installation and updated with every reconfiguration reboot. You can also generate the global namespace by using the cldevice command. See the cldevice(1CL) man page.
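For instance, after adding a device you could regenerate the namespace manually (an illustrative session on a cluster node):

```
# cldevice populate
```

This scans for new devices and creates their DID entries and global-namespace paths on all cluster nodes.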


    Cluster File Systems


Oracle Solaris Cluster software provides a cluster file system based on the Oracle Solaris Cluster Proxy File System (PxFS). The cluster file system has the following features:

- File access locations are transparent. A process can open a file that is located anywhere in the system. Processes on all cluster nodes can use the same path name to locate a file.

  Note - When the cluster file system reads files, it does not update the access time on those files.

- Coherency protocols are used to preserve the UNIX file access semantics even if the file is accessed concurrently from multiple nodes.

- Extensive caching is used along with zero-copy bulk I/O movement to move file data efficiently.

- The cluster file system provides highly available, advisory file-locking functionality by using the fcntl command interfaces. Applications that run on multiple cluster nodes can synchronize access to data by using advisory file locking on a cluster file system. File locks are recovered immediately from nodes that leave the cluster and from applications that fail while holding locks.

- Continuous access to data is ensured, even when failures occur. Applications are not affected by fai