Introduction - NetApp Clustered Data ONTAP 8.3.x and 8.2.x


NetApp Clustered Data ONTAP 8.3 and 8.2.x

    Technical Report

NetApp Clustered Data ONTAP 8.3.x and 8.2.x: An Introduction

    Jay Goldfinch, NetApp

    November 2015 | TR-3982

    Abstract

    This technical report is an introduction to the architecture and key customer benefits

    of the NetApp® clustered Data ONTAP® 8.3.x and 8.2.x operating system.

©2015 NetApp, Inc. All Rights Reserved

    TABLE OF CONTENTS

NetApp Clustered Data ONTAP: Overview

Physical Cluster Components

    Nodes
    HA pairs
    Drives, RAID groups, and Aggregates
    Network Ports
    Clusters

Logical Cluster Components

    Storage Virtual Machines
    Logical Interfaces (LIFs)
    Flexible Volumes
    LUNs
    NAS
    SAN

Key Features

    Manageability
    Multiprotocol Unified Architecture
    Storage Efficiency
    Data Protection and Business Continuity
    Storage QoS
    Infinite Volume
    Intelligent Scale-Out Storage
    Nondisruptive Operations

Summary

Resources

    LIST OF FIGURES

Figure 1) A Data ONTAP cluster consisting of FAS controllers in a mix of all-flash, hybrid, and capacity configurations. A dedicated, redundant 10 Gigabit Ethernet interconnect (top) connects the controllers.

Figure 2) A small Data ONTAP cluster with three storage virtual machines (SVMs). Clients and hosts connect to a storage virtual machine, rather than directly to the storage arrays that host the SVMs. Each SVM has its own volumes, LUNs, network connectivity (LIFs), and authentication.

Figure 3) A six-node Data ONTAP cluster with one storage virtual machine. Clients and hosts can access data on any node from any logical interface. Flexible volumes, LUNs, and LIFs can move nondisruptively, so the SVM can grow as the cluster scales out.

Figure 4) A single-node cluster consisting of a FAS8000 series controller running clustered Data ONTAP. This FAS8000 controller has one shelf of solid-state disks and another shelf of capacity drives. The controller has redundant connections to its storage shelves.

Figure 5) An HA pair of FAS controllers provides redundancy. In most cases, controllers in an HA pair reside in the same chassis with redundant power supplies and passive interconnect circuitry. This visualization splits the nodes apart to illustrate the HA interconnect and redundant disk connectivity.

Figure 6) A Flash Pool aggregate consisting of high-capacity SATA drives in one storage shelf (grey), and solid-state drives in another storage shelf (green). In this example, the SATA drives are grouped together in two RAID groups with six data drives and two parity drives each. The solid-state drives are grouped together in one RAID group with six data drives and one parity drive.

Figure 7) A two-node cluster with two NAS data ports highlighted. Even though these two ports reside on different physical nodes, the ports are on the same VLAN, and therefore provide the same connectivity to clients or hosts.

Figure 8) A clustered Data ONTAP system consisting of three HA pairs. The client- and host-facing networks may include SAN, NAS, or both. The cluster interconnect is dedicated, dual-fabric 10 Gigabit Ethernet. The management network provides administrative access to the cluster. Disk shelves and HA interconnects are omitted for clarity.

Figure 9) A NAS LIF with IP address 192.168.1.1. The LIF is not permanently bound to a specific physical port. If the two ports shown are in the same VLAN, an administrator can move the LIF to either port.

Figure 10) A Flash Pool aggregate containing 8 flexible volumes, belonging to 3 distinct storage virtual machines (represented by burgundy, teal, and purple). The volumes are logically isolated. Each storage virtual machine can only access its own volumes. Each of these volumes can be moved to a different aggregate while the data inside it is being accessed.

Figure 11) A LUN move operation in progress. The SAN host is accessing the LUN in a new location on the right-hand side. All write operations occur there. The contents of the LUN are pulled over from the original location on the left-hand side on a scheduled basis and as read requests are made.

Figure 12) A NAS namespace. Dotted lines represent junctions to child volumes. Solid lines represent folders and directories inside a flexible volume. The path to a file or directory in a namespace remains the same, even if the volume containing that file or directory is moved to a new physical location inside the cluster.

Figure 13) ALUA MPIO. SAN hosts use the most direct path to a LUN. In this depiction, if the LUN or its containing volume on the right moved to a node in the HA pair on the left, the SAN host would begin accessing the LUN through the more direct path on the left.


    NetApp Clustered Data ONTAP: Overview

Clustered Data ONTAP is enterprise-capable, unified scale-out storage. It is the basis for virtualized, shared storage infrastructures. Clustered ONTAP is architected for nondisruptive operations, storage and operational efficiency, and scalability over the lifetime of the system.

An ONTAP cluster typically consists of fabric-attached storage (FAS) controllers: computers optimized to run the clustered Data ONTAP operating system. The controllers provide network ports that clients and hosts use to access storage. These controllers are also connected to each other using a dedicated, redundant 10 Gigabit Ethernet interconnect. The interconnect allows the controllers to act as a single cluster. Data is stored on shelves attached to the controllers. The drive bays in these shelves may contain hard disks, flash media, or both.

Figure 1) A Data ONTAP cluster consisting of FAS controllers in a mix of all-flash, hybrid, and capacity configurations. A dedicated, redundant 10 Gigabit Ethernet interconnect (top) connects the controllers.

A cluster provides hardware resources, but clients and hosts access storage in clustered ONTAP through storage virtual machines (SVMs). SVMs exist natively inside of clustered ONTAP. They define the storage available to the clients and hosts. SVMs define authentication, network access to the storage in the form of logical interfaces (LIFs), and the storage itself, in the form of SAN LUNs or NAS volumes.

Clients and hosts are aware of SVMs, but may be unaware of the underlying cluster. The cluster provides the physical resources the SVMs need in order to serve data. The clients and hosts connect to an SVM, rather than to a physical storage array.

Like compute virtual machines, SVMs decouple services from hardware. Unlike compute virtual machines, a single SVM may use the network ports and storage of many controllers, enabling scale-out. One controller's physical network ports and physical storage may also be shared by many SVMs, enabling multitenancy.

Figure 2) A small Data ONTAP cluster with three storage virtual machines (SVMs). Clients and hosts connect to a storage virtual machine, rather than directly to the storage arrays that host the SVMs. Each SVM has its own volumes, LUNs, network connectivity (LIFs), and authentication.


A single cluster may contain multiple storage virtual machines (SVMs) targeted for various use cases, including server and desktop virtualization, large NAS content repositories, general-purpose file services, and enterprise applications. SVMs may also be used to separate different organizational departments or tenants.

The components of an SVM are not permanently tied to any specific piece of hardware in the cluster. An SVM's volumes, LUNs, and logical interfaces can move to different physical locations inside the cluster, while maintaining the same logical location to clients and hosts. While physical storage and network access moves to a new location inside the cluster, clients can continue accessing data in those volumes or LUNs, using those logical interfaces.

Figure 3) A six-node Data ONTAP cluster with one storage virtual machine. Clients and hosts can access data on any node from any logical interface. Flexible volumes, LUNs, and LIFs can move nondisruptively, so the SVM can grow as the cluster scales out.

This allows a cluster to continue serving data as physical storage controllers are added or removed from it. It also enables workload rebalancing, and native, nondisruptive migration of storage services to different media types, like flash, spinning media, or hybrid configurations.

With clustered ONTAP, you can expand flash capacity when you need performance, high-density drives when you need raw capacity, or both. You can scale up when you need a higher-end storage array, or scale out horizontally when you need to distribute a workload. All of these operations can be performed while clients and hosts continue accessing their data.

You can also scale compute in a public cloud using NetApp Private Storage (physical FAS systems next to a public cloud data center) or Cloud ONTAP (clustered ONTAP running in a virtual machine inside a public cloud), while maintaining control of your data.


Clustered ONTAP uses NetApp's Write Anywhere File Layout (WAFL), which delivers storage and operational efficiency technologies like fast, storage-efficient Snapshot copies; thin provisioning; volume, LUN, and file cloning; and deduplication. Most storage efficiency features are available regardless of the underlying media type.

Clustered ONTAP natively supports all-flash and hybrid configurations. Hybrid configurations offer performance acceleration using flash along with dense spinning media for capacity. In a hybrid configuration, active data is moved into and out of flash without administrative intervention. Fast inline compression is supported in all-flash and hybrid configurations beginning with clustered ONTAP 8.3.1. Inline deduplication is supported in all-flash and hybrid configurations beginning with clustered ONTAP 8.3.2.

WAFL accelerates write operations using nonvolatile memory inside the storage controller, in conjunction with optimized file layout on the underlying storage media.

Clustered ONTAP supports a wide range of options for data protection and business continuity. Synchronous (instantaneous) mirroring to another data center is supported with MetroCluster. MetroCluster offers a simple procedure for switching over to an alternate site in the event of a catastrophic incident.

Asynchronous (periodic) mirroring to another data center is supported with SnapMirror. SnapMirror allows the mirrored site to become the primary site during a DR event while allowing a switch back to the original primary site once the event is over. Starting with clustered ONTAP 8.3.1, a storage virtual machine's configuration, as well as its data, can be mirrored to another site using SnapMirror for Storage Virtual Machines.

Long-term archives of Snapshot copies can be stored on a secondary storage system in the same or a different cluster using SnapVault. SnapVault allows the use of inexpensive media on the secondary system. SnapVault also allows end users to restore their own data without intervention from the storage administrator.

Clustered ONTAP offers application integration, allowing for application-aware backup and recovery workflows. It offers integration with hypervisors such as VMware ESX and Microsoft Hyper-V.

    Clustered ONTAP supports both SAN (block) and NAS (file) protocols. Most of the same features areavailable regardless of the protocol in use.

    This paper is an overview of clustered Data ONTAP, including its architecture and core capabilities.

    Physical Cluster Components

A cluster is typically composed of physical hardware: controllers with attached storage (solid-state drives, spinning media, or both; or a third-party storage array when FlexArray is used), network interface cards, and, optionally, PCI-based flash cards (Flash Cache). Together, all of these components create a physical resource pool.

This physical resource pool is visible to cluster administrators but not to the applications and hosts that use the cluster. The storage virtual machines (SVMs) in the cluster use these resources to serve data to clients and hosts.

    This section describes the most important physical components of a cluster: nodes, HA pairs, aggregates,network ports, and the cluster itself.

    Nodes

Storage controllers are presented and managed as cluster nodes, or instances of clustered ONTAP. Nodes have network connectivity and storage. The terms "node" and "controller" are sometimes used interchangeably, but "node" more frequently means a controller, its storage, and the instance of clustered ONTAP running on it.

The NetApp clustered Data ONTAP operating system can be deployed in a wide range of configurations, from entry-level systems to enterprise-class systems to virtualized nodes running in a public cloud. All of these configurations run the same operating system and offer almost all of the same data management functionality. They have the same user interface and are managed with the same tools.

Figure 4) A single-node cluster consisting of a FAS8000 series controller running clustered Data ONTAP. This FAS8000 controller has one shelf of solid-state disks and another shelf of capacity drives. The controller has redundant connections to its storage shelves.

    HA pairs

Although single-node clusters are supported, clustered ONTAP is normally deployed on HA pairs of FAS storage controllers.

An HA pair of FAS controllers provides redundancy so that storage services can be taken over by the partner node during maintenance events, or in the event of a controller hardware failure. In most cases, controllers in an HA pair reside in the same chassis with redundant power supplies and passive interconnect circuitry. HA pairs always consist of like FAS models.

An HA pair includes an interconnect between the two nodes. This interconnect is used to mirror write operations to the partner's nonvolatile memory. When a write request is made to either of the two nodes in an HA pair, the write request is logged on both nodes before a response is sent back to the client or the host.
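The write path just described can be sketched in a few lines of Python. This is purely a conceptual model, not an ONTAP interface; the class and method names are hypothetical:

```python
# Toy model of the HA-pair write path: a write is acknowledged to the
# client only after it has been logged in the nonvolatile memory of
# BOTH nodes. All names here are illustrative, not ONTAP APIs.

class Node:
    def __init__(self, name):
        self.name = name
        self.nvlog = []        # stand-in for nonvolatile memory
        self.partner = None

    def log_locally(self, write):
        self.nvlog.append(write)

    def handle_write(self, write):
        """Log the write locally, mirror it to the partner, then ack."""
        self.log_locally(write)
        self.partner.log_locally(write)   # mirrored over the HA interconnect
        return "ack"                       # sent only after both logs succeed

# Wire up an HA pair.
node_a, node_b = Node("node-a"), Node("node-b")
node_a.partner, node_b.partner = node_b, node_a

status = node_a.handle_write({"volume": "vol1", "data": b"hello"})
# If node-a now fails, node-b holds an identical log to replay on takeover.
survivable = node_b.nvlog == node_a.nvlog
```

The point of the sketch is the ordering: the acknowledgment is returned only after both copies of the log entry exist, which is what makes a takeover lossless.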

Each controller in an HA pair also has connections to the other controller's storage media. These connections are used to provide continued access to the data if one of the two nodes needs to take over the storage services normally provided by the other node.

Clustered ONTAP allows the controllers in an HA pair to be replaced using Aggregate Relocate (ARL). ARL allows one node in an HA pair to take over ownership of the storage of the other node, even if HA is disabled. Once the network connections and storage have been moved to one of the two nodes, the other can be replaced, and the process can be repeated for the remaining original node.

Controllers in an HA pair may provide redundant network connections. However, NetApp clustered Data ONTAP systems may consist of more than two nodes. In larger clusters, redundant network connectivity may be provided by either partner node, other nodes in the cluster, or both.

Figure 5) An HA pair of FAS controllers provides redundancy. In most cases, controllers in an HA pair reside in the same chassis with redundant power supplies and passive interconnect circuitry. This visualization splits the nodes apart to illustrate the HA interconnect and redundant disk connectivity.


    Drives, RAID groups, and Aggregates

The physical drives managed by nodes are grouped together in RAID groups. Clustered Data ONTAP offers a choice of RAID technologies: dual-parity RAID, sometimes called NetApp RAID DP, and RAID-4. Drives in a RAID group typically have the same size and type.

In most cases, NetApp recommends the use of RAID DP because it can survive two simultaneous disk failures per RAID group. This means that in the event of a drive failure, data is still protected with another parity drive.
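The mechanics of parity-based reconstruction can be illustrated with the simpler RAID-4 case, where a single XOR parity drive lets the group rebuild any one failed drive. RAID DP adds a second, diagonal parity that extends the same idea to two simultaneous failures; that math is omitted here for brevity. This is a conceptual sketch, not NetApp's implementation:

```python
# Simplified parity reconstruction: one XOR row-parity "drive" allows
# rebuilding any single failed data "drive" in the RAID group.

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Four data "drives" holding one stripe each, plus a parity drive.
data_drives = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_blocks(data_drives)

# Drive 2 fails; rebuild its contents from the survivors plus parity.
survivors = [d for i, d in enumerate(data_drives) if i != 2]
rebuilt = xor_blocks(survivors + [parity])
```

Because XOR is its own inverse, XOR-ing the surviving drives with the parity block yields exactly the missing drive's contents.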

RAID groups are grouped together to form an aggregate. An aggregate can consist of solid-state drives (SSDs), spinning media, or both.

An aggregate that consists of both SSDs and spinning media is called a "Flash Pool aggregate." Flash Pool aggregates allow data that is in use to reside on flash media, and data that has not been accessed in months or years to reside on less expensive spinning media.

The flash portion of a Flash Pool aggregate acts as a cache, rather than a separate storage tier. Movement of data into and out of the flash portion of a Flash Pool aggregate is handled by the system itself. It does not require administrator intervention.
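The caching behavior can be sketched as a small read cache that promotes blocks on access and evicts the coldest block when the flash tier is full. Real Flash Pool policies are considerably more sophisticated (they also cache random overwrites, for example); this is only a conceptual stand-in with hypothetical names:

```python
from collections import OrderedDict

# Toy sketch of a flash read cache in front of spinning media: recently
# read blocks are promoted into a fixed-size flash tier automatically,
# and the least-recently-used block is evicted when the tier is full.

class FlashPoolSketch:
    def __init__(self, flash_capacity):
        self.flash = OrderedDict()      # hot blocks, in LRU order
        self.hdd = {}                   # all data lives on spinning media
        self.flash_capacity = flash_capacity

    def write(self, block_id, data):
        self.hdd[block_id] = data

    def read(self, block_id):
        if block_id in self.flash:              # flash hit
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = self.hdd[block_id]               # miss: read from HDD...
        self.flash[block_id] = data             # ...and promote to flash
        if len(self.flash) > self.flash_capacity:
            self.flash.popitem(last=False)      # evict the coldest block
        return data

pool = FlashPoolSketch(flash_capacity=2)
for i in range(4):
    pool.write(i, f"block-{i}")
pool.read(0); pool.read(1); pool.read(2)        # block 0 is evicted; 1 and 2 stay hot
```

No administrator action appears anywhere in the read path: promotion and eviction are side effects of normal I/O, which is the point the section makes.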

An aggregate can be thought of as a large pool of raw, RAID-protected physical storage. SVMs use the raw storage in an aggregate to store data for clients and hosts.

Clustered ONTAP 8.3 provides Advanced Drive Partitioning (ADP), which allows a single physical disk to be shared by multiple aggregates. NetApp supports ADP for All-Flash FAS, entry platforms, and NetApp Flash Pool configurations.

Figure 6) A Flash Pool aggregate consisting of high-capacity SATA drives in one storage shelf (grey), and solid-state drives in another storage shelf (green). In this example, the SATA drives are grouped together in two RAID groups with six data drives and two parity drives each. The solid-state drives are grouped together in one RAID group with six data drives and one parity drive.

    Network Ports

    Network ports provide the physical connectivity required to serve data to NAS clients and SAN hosts.

In the case of NAS, network ports are usually physical links to an Ethernet switch. All the ports in the cluster that are connected to a specific VLAN on the Ethernet switch have the same physical connectivity to the cluster.

In the case of fibre channel, network ports are connected to NPIV-enabled (N_Port ID Virtualization) ports on a fibre channel switch. Physical connectivity to the storage with fibre channel is managed through zoning on the switch, rather than VLANs, but the concept is similar.

A network port can be thought of as a raw, physical path to clients and hosts that SVMs can use to serve data.

Figure 7) A two-node cluster with two NAS data ports highlighted. Even though these two ports reside on different physical nodes, the ports are on the same VLAN, and therefore provide the same connectivity to clients or hosts.


    Clusters

A cluster consists of a group of nodes connected to each other with a private, dedicated, dual-fabric 10 Gigabit Ethernet interconnect. Clusters of two nodes can optionally be configured without switches, with point-to-point connections used for the cluster interconnect.

Management of the cluster is often performed through a management network. Cluster management traffic can be placed on a separate physical network to provide increased security.

A cluster can consist of up to 8 nodes if SAN is enabled, or up to 24 nodes in a NAS-only configuration. If a cluster has more than one node, all the nodes in the cluster must be in HA pairs. Single-node clusters are the only odd-numbered cluster size NetApp supports.

Although an HA pair consists of like controllers, clusters of four or more nodes may consist of varying FAS or All Flash FAS models, and varying configurations. All-flash configurations can reside in the same cluster with hybrid configurations, capacity-only configurations using only SATA drives, and clustered ONTAP systems that front-end third-party storage arrays with FlexArray technology.

Together, the nodes in the cluster, their client- and host-facing network ports (which may reside in different network segments), and their attached storage aggregates form a single resource pool.

Figure 8) A clustered Data ONTAP system consisting of three HA pairs. The client- and host-facing networks may include SAN, NAS, or both. The cluster interconnect is dedicated, dual-fabric 10 Gigabit Ethernet. The management network provides administrative access to the cluster. Disk shelves and HA interconnects are omitted for clarity.

    Logical Cluster Components

Storage Virtual Machines

The cluster provides connectivity and RAID-protected storage, but clients and hosts do not mount network ports or raw storage. Clients and hosts require IP addresses, WWPNs, NAS volumes, SMB (CIFS) shares, NFS exports, and LUNs. Storage virtual machines (SVMs, sometimes called "vservers") define these client- and host-facing entities, and use the hardware of the cluster to deliver the storage services.

An SVM is what users connect to when they access data. The components of an SVM are not permanently bound to any piece of physical hardware. This allows an SVM to continue serving data as physical components of a cluster are added and removed.

The separation of physical hardware from storage services allows storage services to continue as all the physical components of a cluster are incrementally replaced.


Each SVM may have its own authentication, its own storage, its own network segments, its own users, and its own administrators. A single SVM can provide SAN, NAS, or both. A single SVM can use physical storage or network connectivity on any cluster node, enabling scale-out. New SVMs can be provisioned on demand, without deploying additional hardware.
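The scale-out and multitenancy relationships described here reduce to a simple data model: several SVMs share the cluster's physical nodes, but each SVM can see only its own volumes. The sketch below uses entirely hypothetical names to make that relationship concrete:

```python
# Minimal data model: SVMs are logical tenants layered over shared
# physical nodes. Names (Cluster, SVM, vol_a, node1, ...) are invented
# for illustration; they are not ONTAP objects.

class Cluster:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.volumes = {}   # volume name -> (owning SVM, hosting node)

class SVM:
    def __init__(self, name, cluster):
        self.name = name
        self.cluster = cluster

    def create_volume(self, vol_name, node):
        assert node in self.cluster.nodes
        self.cluster.volumes[vol_name] = (self.name, node)

    def list_volumes(self):
        """An SVM sees only its own volumes, wherever they live."""
        return sorted(v for v, (svm, _) in self.cluster.volumes.items()
                      if svm == self.name)

cluster = Cluster(["node1", "node2"])
vs1, vs2 = SVM("vs1", cluster), SVM("vs2", cluster)
vs1.create_volume("vol_a", "node1")
vs1.create_volume("vol_b", "node2")   # one SVM spans both nodes: scale-out
vs2.create_volume("vol_c", "node1")   # one node serves both SVMs: multitenancy
```

The same node (`node1`) hosts storage for two tenants, and the same tenant (`vs1`) spans two nodes, which is the pairing of multitenancy and scale-out the section describes.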

    Logical Interfaces (LIFs)

Connectivity to SVMs is provided through logical interfaces (LIFs). A LIF has an IP address or World Wide Port Name used by a client or host to connect to an SVM.

A LIF is hosted on a physical port. An SVM can have LIFs on any cluster node. Clients and hosts can access data regardless of the physical location of the data in the cluster. The cluster will use its interconnect to route traffic to the appropriate location regardless of where the request arrives.

LIFs virtualize IP addresses or WWPNs, rather than permanently mapping IP addresses and WWPNs to NIC and HBA ports.

Each SVM requires its own dedicated set of LIFs. A LIF defined for NAS access can be migrated to another port on the same or a different controller to rebalance client performance, or to evacuate all resources on a controller for hardware lifecycle operations.

Figure 9) A NAS LIF with IP address 192.168.1.1. The LIF is not permanently bound to a specific physical port. If the two ports shown are in the same VLAN, an administrator can move the LIF to either port.
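The essence of a LIF is this indirection: the client-visible address belongs to the logical interface, and the physical port is just its current home. A minimal sketch, with hypothetical port names:

```python
# Sketch of LIF indirection: the address clients use is a property of
# the logical interface, not of any physical port, so the LIF can be
# re-homed without the client-visible address changing. Port names like
# "node1:e0c" are illustrative only.

class LIF:
    def __init__(self, address, home_port):
        self.address = address          # what clients connect to
        self.current_port = home_port   # where the LIF happens to live now

    def migrate(self, new_port):
        self.current_port = new_port    # the address does not change

lif = LIF("192.168.1.1", "node1:e0c")
before = lif.address
lif.migrate("node2:e0c")                # e.g. evacuate node1 for maintenance
```

Clients keep connecting to 192.168.1.1 throughout; only the port hosting that address moves.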

    Flexible Volumes

SVMs store data for clients and hosts in flexible volumes. Aggregates provide the raw storage required by flexible volumes.

Flexible volumes are logical containers that contain data used by applications. This can include NAS data or SAN LUNs. NetApp recommends that NAS data and SAN LUNs not be mixed in the same flexible volume, because each may have different backup requirements.

Each flexible volume can have up to 255 NetApp Snapshot copies. Snapshot copies are a powerful element of clustered Data ONTAP, and they are the basis of most NetApp replication technologies.

When a Snapshot copy of a flexible volume is taken, a read-only copy of the data in the volume at that point in time is created. That means that application administrators can restore LUNs using the Snapshot copy, and end users can restore their own files.


Snapshot copies are high performance. When writes are made to a flexible volume that has an older Snapshot copy, the new writes are made to free space on the underlying storage. This means that the old contents do not have to be moved to a new location. The old contents stay in place, which means the system continues to perform quickly, even if there are many Snapshot copies on the system.
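Why this is cheap becomes clear in a toy model: a snapshot only copies the volume's block pointers, and every new write goes to a free block, so overwriting snapshotted data never moves anything. This is a conceptual sketch, not the actual WAFL implementation:

```python
# Toy write-anywhere model: the active file system and each snapshot are
# just tables of pointers into a shared block pool. Taking a snapshot
# copies pointers only; overwrites allocate fresh blocks.

class VolumeSketch:
    def __init__(self):
        self.blocks = {}       # block address -> contents (shared pool)
        self.active = {}       # file block number -> block address
        self.snapshots = {}    # snapshot name -> frozen pointer table
        self._next = 0

    def write(self, fbn, data):
        addr = self._next                  # always write to a free block
        self._next += 1
        self.blocks[addr] = data
        self.active[fbn] = addr            # old block stays where it was

    def snapshot(self, name):
        self.snapshots[name] = dict(self.active)   # copy pointers, not data

    def read(self, fbn, snapshot=None):
        table = self.snapshots[snapshot] if snapshot else self.active
        return self.blocks[table[fbn]]

vol = VolumeSketch()
vol.write(0, "v1")
vol.snapshot("hourly.0")
vol.write(0, "v2")     # overwrite after the snapshot: no data is moved
```

Both versions of block 0 remain readable, and the overwrite cost was the same as any ordinary write, regardless of how many snapshots exist.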

Flexible volumes can be mirrored, archived, or nondisruptively moved to other aggregates. Nondisruptive data mobility, sometimes called volume move or NetApp DataMotion for Volumes, is a key advantage of clustered Data ONTAP.

Just as LIF migration between network ports allows client access to move to different nodes, DataMotion for Volumes allows clients and hosts to continue accessing data as that data is moved to other cluster nodes.

DataMotion for Volumes allows a cluster to continue serving data as physical storage controllers are added or removed from it. It also enables workload rebalancing and nondisruptive migration of storage services to different media types, such as flash, spinning media, and Flash Pool aggregates.

No matter where a volume goes, it keeps its identity. That means that its Snapshot copies, its replication relationships, its deduplication, and other characteristics of the flexible volume remain the same.

Figure 10) A Flash Pool aggregate containing 8 flexible volumes, belonging to 3 distinct storage virtual machines (represented by burgundy, teal, and purple). The volumes are logically isolated. Each storage virtual machine can only access its own volumes. Each of these volumes can be moved to a different aggregate while the data inside it is being accessed.

    LUNs

SVMs store data for SAN hosts in LUNs. Applications requiring block data services through protocols such as iSCSI, Fibre Channel, and Fibre Channel over Ethernet (FCoE) use SAN LUNs.

    LUNs reside in flexible volumes. Just as with the containing flexible volumes, LUNs are not permanently

tied to a storage aggregate. When a LUN’s containing flexible volume is moved, access to that LUN continues uninterrupted.

    In clustered ONTAP 8.3, a LUN can also nondisruptively move to a different flexible volume with LUN

    move, or DataMotion for LUNs. DataMotion for LUNs offers near-instantaneous cutover to a destination

    volume.

When DataMotion for LUNs is used, any write requests from the host are serviced by the destination system, while read requests are completed by retrieving the data from the source system. This immediately reduces I/O load on the source system.
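A rough sketch of that cutover behavior, as a Python illustration only (not NetApp code; the class and block names are invented): after cutover, the destination owns all writes, and reads it cannot satisfy locally are pulled from the source on demand and cached.

```python
# Illustrative sketch (not NetApp code) of post-cutover LUN move behavior:
# writes go to the destination immediately; reads are pulled from the
# source only when the destination does not yet hold the block.

class LunCopy:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})

class MovingLun:
    def __init__(self, source):
        self.source = source
        self.dest = LunCopy()

    def write(self, block_no, data):
        # Writes land on the destination, taking write I/O load
        # off the source right after cutover.
        self.dest.blocks[block_no] = data

    def read(self, block_no):
        if block_no not in self.dest.blocks:
            # Miss: fetch from the source and keep a local copy.
            self.dest.blocks[block_no] = self.source.blocks[block_no]
        return self.dest.blocks[block_no]

src = LunCopy({0: "old0", 1: "old1"})
lun = MovingLun(src)
lun.write(0, "new0")       # serviced entirely by the destination
print(lun.read(0))         # "new0" -- never touches the source
print(lun.read(1))         # "old1" -- pulled from the source, then cached
```

Background copying (the "scheduled basis" mentioned in the figure caption) would gradually populate the destination the same way, until no reads need to reach back to the source.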


Figure 11) A LUN move operation in progress. The SAN host is accessing the LUN in a new location on the right-hand side. All write operations occur there. The contents of the LUN are pulled over from the original location on the left-hand side on a scheduled basis and as read requests are made.

NAS

An SVM can act as a virtualized NAS server that provides file access with NFS, SMB (CIFS), or both concurrently. Each NAS-enabled SVM offers a single namespace. The namespace provides client access to storage on many cluster nodes via a single NFS mount or SMB (CIFS) share.

A namespace can contain many flexible volumes. To NAS clients, each volume appears as a folder or subdirectory inside another volume. Each flexible volume may also have its own folders and subdirectories that are not linked to any other volume.

Flexible volumes are linked to one another inside a namespace using junctions, which are roughly analogous to Unix mount points. Clients can mount at or beneath the root of the namespace. Clients see only the volumes that are mounted below their access point.

In some scale-out environments, a single SVM may have a large namespace using all the controllers in a cluster. In environments with many SVMs, each SVM will have its own namespace.

Client access is controlled using CIFS share permissions, file and directory permissions, and export policies. Export policies are analogous to the access control entries in a typical Unix /etc/exports file, and are set on a per-volume basis. Export policies should be disabled for CIFS access and in Kerberos-authenticated Unix environments, as the centralized authentication in those environments is typically sufficient.

Flexible volumes can be added to an SVM’s namespace at any time. Newly added volumes are immediately available to the clients, with no remount required for visibility to the new storage. The flexible volumes inside an SVM’s namespace can be moved between controllers and media types nondisruptively, eliminating the outages and operational complexity associated with environments that rely exclusively on automounters and DFS.
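The junction mechanics can be illustrated with a short sketch (not NetApp code; the junction paths and volume names are invented for the example): resolving a client path means finding the volume whose junction is the longest matching prefix of that path.

```python
# Illustrative sketch (not NetApp code): one namespace spanning several
# volumes. Each junction path maps to the volume mounted there; the
# longest matching junction prefix owns a given file path.

junctions = {
    "/": "rootvol",
    "/users": "users_vol",
    "/users/projects": "projects_vol",   # a volume mounted inside another
}

def owning_volume(path):
    """Return the volume holding `path`: the longest junction prefix wins."""
    best = "/"
    for jp in junctions:
        if path == jp or path.startswith(jp.rstrip("/") + "/"):
            if len(jp) > len(best):
                best = jp
    return junctions[best]

print(owning_volume("/users/alice/file.txt"))      # users_vol
print(owning_volume("/users/projects/plan.doc"))   # projects_vol
print(owning_volume("/readme.txt"))                # rootvol
```

Note that the client-visible path never changes if, say, projects_vol is moved to another aggregate: only the junction table's mapping of path to physical location matters.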


Figure 12) A NAS namespace. Dotted lines represent junctions to child volumes. Solid lines represent folders and directories inside a flexible volume. The path to a file or directory in a namespace remains the same, even if the volume containing that file or directory is moved to a new physical location inside the cluster.

Namespaces in large SVMs are sometimes nested, with child volumes, parent volumes, grandparent volumes, and ancestor volumes. Small namespaces are more often flat, with individual volumes mounted at the top of the namespace. In a flat namespace, flexible volumes have locations like “/users”, “/VMs”, and “/app_data”.

SAN

An SVM can serve data with SAN protocols such as Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI.

Because the physical cluster consists of multiple, interconnected controllers, there may be multiple logical paths to any individual LUN in an SVM. These paths are managed by multipath I/O (MPIO) on the SAN host. Asymmetric Logical Unit Access (ALUA), an industry-standard protocol for identifying optimized paths to a LUN, is also used by the hosts.

If the optimized path to any LUN changes because the LUN is moved using DataMotion for LUNs, or its containing volume is moved with DataMotion for Volumes, this is automatically recognized by the host. The host will update its path information and choose a new, optimized path. If the optimized path becomes unavailable, the host’s ALUA-enabled MPIO stack can nondisruptively switch to any other available path.

Paths to a LUN can be added to or removed from any node in a cluster, and, following a rescan, a host’s ALUA-enabled multipath stack will automatically switch to the most direct path nondisruptively. Paths to a LUN can also be reduced using selective LUN mapping in clustered ONTAP 8.3, which eases path management.
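The host-side path selection described above can be sketched as follows. This is an illustration of the ALUA idea only, not any real MPIO stack; the port names and tuple format are invented for the example.

```python
# Illustrative sketch (not a real MPIO stack): an ALUA-style path policy.
# The host prefers an active/optimized path and falls back to any usable
# active/non-optimized path -- the failover behavior described above.

OPTIMIZED, NON_OPTIMIZED = "active/optimized", "active/non-optimized"

def pick_path(paths):
    """paths: list of (port, alua_state, usable) tuples reported by the target."""
    usable = [p for p in paths if p[2]]
    # Prefer the optimized path; otherwise take any usable path.
    for port, state, _ in usable:
        if state == OPTIMIZED:
            return port
    return usable[0][0] if usable else None

paths = [
    ("node1_fc0", OPTIMIZED, True),
    ("node2_fc0", NON_OPTIMIZED, True),
]
print(pick_path(paths))                # node1_fc0

# If the optimized path drops (e.g. during a move), traffic
# nondisruptively shifts to the remaining path:
paths[0] = ("node1_fc0", OPTIMIZED, False)
print(pick_path(paths))                # node2_fc0
```

After a volume or LUN move, the target reports new ALUA states, and the same policy naturally converges on the new optimized path.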


Figure 13) ALUA MPIO. SAN hosts use the most direct path to a LUN. In this depiction, if the LUN or its containing volume on the right moved to a node in the HA pair on the left, the SAN host would begin accessing the LUN through the more direct path on the left.

    Key Features

Manageability

    Different methods are available for managing and configuring clustered Data ONTAP, depending on user

    or application requirements. Supported methods include:

      NetApp OnCommand® System Manager  is a simple graphical user interface designed for

    administering a clustered ONTAP system. It provides the ability to provision storage virtual

    machines, volumes, LUNs, network connectivity, and replication relationships. It can be used to

    manage local users and groups, and administrative roles. It is available as an on-box user

    interface in clustered Data ONTAP 8.3. 

      System Setup is configuration software designed to help storage administrators get a NetApp

    clustered Data ONTAP system up and running with minimal effort. It provides a wizard interface

    to configure a cluster according to best practices, performs a number of checks to make sure that

    everything is wired correctly, and checks for common mistakes to help you know that the cluster

    is healthy and ready for production.

  OnCommand Unified Manager provides a single dashboard to confirm the health of your NetApp® clustered Data ONTAP® storage: availability, capacity, performance, and data-protection relationships. OnCommand Unified Manager provides operational efficiency at scale.


  OnCommand Workflow Automation enables automation of repeatable manual storage-management processes, enabling storage self-service and standards enforcement. OnCommand Workflow Automation uses a graphical interface and does not require the use of a scripting language.

  OnCommand Performance Manager enables data storage performance troubleshooting,

    problem isolation, and concrete solutions to performance issues based on system analysis.

  Command Line. A command-line interface is available for cluster and storage virtual machine administrators. Commands are organized in a hierarchy. Tab completion is provided. Each command can act as a wizard, prompting the administrator for additional input parameters as required.

      Software Development Kits (SDKs) provide the infrastructure to invoke Data ONTAP® APIs,

    OnCommand® Unified Manager APIs, and web services APIs for OnCommand Unified Manager

    Core Package on a server. SDK Core API libraries are for C and C++, Java™, Perl®, C#,

    VB.NET, PowerShell®, Python™, and Ruby. 

      Application and hypervisor plugins are available, including Virtual Storage Console for

    VMware® vSphere™, Citrix XenServer and Red Hat Enterprise Virtualization; NetApp® Snap

    Creator™ Framework to standardize and simplify backup, restore, and DR in any environment;

    NetApp SnapDrive to automate storage provisioning tasks; and SnapManager for application-

    aware configuration, backup, and restore.

      NetApp® OnCommand Cloud Manager  software provides a simplified management interface

    for Cloud ONTAP and NetApp Private Storage (NPS) for Cloud solutions.

  OnCommand Insight provides a multi-vendor view of performance metrics, including application

    performance, datastore performance, virtual machine performance, and storage infrastructure

    performance. 

  AutoSupport is dial-home functionality that allows NetApp to help you proactively manage your system and diagnose conditions on it. Depending on your service contract, AutoSupport allows NetApp to begin the process of shipping an RMA before you even open a case.

    Multiprotocol Unified Architecture

A multiprotocol unified architecture provides the ability to support multiple data access protocols concurrently in the same storage system, over a whole range of different controller and disk storage types. The supported protocols in clustered Data ONTAP are:

      NFS v3, v4, and v4.1, including pNFS

  SMB 1, 2, 2.1, and 3, including support for nondisruptive failover in Microsoft® Hyper-V™ and Microsoft® SQL Server® environments

      iSCSI

      Fibre Channel

      FCoE

    Data replication and storage efficiency features are supported across all protocols in clustered Data

    ONTAP.

Storage Efficiency

Storage efficiency built into clustered Data ONTAP offers substantial space savings, allowing more data to be stored at lower cost.

  Thin provisioning is the most efficient way to provision storage, because although the clients see the total storage space assigned to them, the storage is not preallocated up front. When a volume or LUN is created by using thin provisioning, space on the storage system is not used


until it is consumed. The space remains unused until data is written to the LUN or the volume, at which time only enough space to store the data is used. Unused storage is shared across all volumes, and the volumes can grow and shrink on demand.

  NetApp FlexClone® technology allows for near-zero-space, exact, writable virtual copies of datasets, including volumes, files, and LUNs. It offers rapid, space-efficient creation of additional data copies ideally suited for disaster recovery testing and test and development environments.

  Deduplication removes duplicate data blocks in primary and secondary storage, storing only unique blocks. This results in storage space and cost savings. Deduplication runs on a customizable schedule. Inline deduplication is available beginning with clustered Data ONTAP 8.3.2.

  Inline Compression was introduced for primary workloads such as database and desktop virtualization with clustered ONTAP 8.3.1. Inline compression is on by default in the all-flash FAS product family starting with 8.3.1.

  Virtual Storage Tiering allows hot data that is frequently accessed to be transparently stored on flash. Clustered Data ONTAP offers two varieties of virtual storage tiering: NetApp Flash Cache is a PCIe-based read cache inside the nodes that make up the cluster, and NetApp Flash Pool is a storage aggregate-level cache used to improve performance of both reads and writes.
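As a rough illustration of the deduplication idea (not NetApp code; the 4 KB block size and the hashing scheme are assumptions for the example), identical blocks can be detected by content hash and stored once, with a reference count tracking how many logical copies share each stored block:

```python
# Illustrative sketch (not NetApp code): block-level deduplication.
# Identical blocks are stored once and reference-counted, which is the
# source of the space savings described above.

import hashlib

class DedupStore:
    def __init__(self):
        self.by_hash = {}     # content hash -> stored data
        self.refs = {}        # content hash -> reference count

    def put(self, data):
        h = hashlib.sha256(data).hexdigest()
        if h not in self.by_hash:
            self.by_hash[h] = data        # first copy: actually stored
        self.refs[h] = self.refs.get(h, 0) + 1
        return h

store = DedupStore()
a = store.put(b"x" * 4096)   # a 4 KB block
b = store.put(b"x" * 4096)   # duplicate: only a reference is added
c = store.put(b"y" * 4096)

print(a == b)                # True -- same block, stored once
print(len(store.by_hash))    # 2 unique blocks for 3 logical writes
```

In this toy model, the space savings are simply the gap between logical writes and unique blocks stored; scheduled versus inline deduplication differ only in when the duplicate detection runs.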

    Data Protection and Business Continuity

  NetApp Snapshot™ copies. These are automatically scheduled point-in-time copies that take up no space and incur no performance overhead when created. Over time, Snapshot copies consume minimal storage space, because only changes to the active file system are written. Individual files and directories can be easily recovered from any Snapshot copy, and the entire volume can be restored back to any Snapshot state in seconds.

  Dual Parity RAID can survive two simultaneous disk failures per RAID group. This means that even after one drive failure, data is still protected by a second parity drive.

  NetApp SnapMirror® technology. SnapMirror provides asynchronous replication of volumes, independent of protocol, either within the cluster or to another clustered Data ONTAP system for data protection and disaster recovery. SnapMirror for Storage Virtual Machines, also called Storage Virtual Machine Disaster Recovery (SVM DR), replicates both data and Storage Virtual Machine configuration settings. It is available starting with clustered ONTAP 8.3.1.

  NetApp SnapVault® technology. Volumes can be copied for space-efficient, read-only, disk-to-disk backup either within the cluster or to another clustered Data ONTAP system. SnapVault, when used in conjunction with version-independent SnapMirror, allows a single destination volume to serve as both a backup and a disaster recovery copy.

  NetApp MetroCluster® software addresses the need to provide continuous data availability beyond the data center (or beyond the cluster). MetroCluster is native within the NetApp Data ONTAP operating system. It is a synchronous mirroring relationship between two distinct but identically configured two-node clusters up to 200km apart. Synchronous mirroring with MetroCluster between two single-node clusters is available starting with clustered ONTAP 8.3.1.

Storage QoS

Clustered Data ONTAP provides storage quality of service (QoS) policies on cluster objects. An entire SVM, or a group of volumes or LUNs within an SVM, can be dynamically assigned to a policy group, which specifies a throughput limit, defined in terms of IOPS or MB/sec. This can be used to reactively or proactively throttle rogue workloads and prevent them from affecting the rest of the workloads.

QoS policy groups can also be used by service providers to prevent tenants from affecting each other, as well as to avoid performance degradation of the existing tenants when a new tenant is deployed on the shared infrastructure.
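One common way to implement such a throughput cap is a token bucket. The sketch below is a simplified illustration of that mechanism only, not NetApp's implementation, and the 500-IOPS figure is made up:

```python
# Illustrative sketch (not NetApp code): a token-bucket IOPS cap, the
# kind of mechanism behind a QoS throughput limit. Each second the
# bucket refills; I/Os beyond the available tokens are throttled.

class IopsLimiter:
    def __init__(self, iops_limit):
        self.limit = iops_limit   # tokens granted per second
        self.tokens = iops_limit  # start with a full bucket

    def tick(self):
        """Called once per second: refill the bucket up to the limit."""
        self.tokens = self.limit

    def admit(self, n_ops):
        """Admit up to n_ops I/Os this second; the excess is throttled."""
        admitted = min(n_ops, self.tokens)
        self.tokens -= admitted
        return admitted

pg = IopsLimiter(iops_limit=500)   # a policy group capped at 500 IOPS
print(pg.admit(300))               # 300 -- under the cap
print(pg.admit(300))               # 200 -- the rest is throttled
pg.tick()                          # next second: bucket refills
print(pg.admit(300))               # 300
```

Assigning a rogue workload's SVM or volumes to a capped policy group bounds its throughput this way, so well-behaved tenants keep their share of the system.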


Infinite Volume

Infinite Volume is a type of volume that is contained in a dedicated SVM, which can scale up to 20PB and store up to 2 billion files. An Infinite Volume can coexist with standard SVMs and support both NFS and SMB client access. An Infinite Volume is well suited for enterprise NAS content repositories.

    Intelligent Scale-Out Storage

With scale-out, as the storage environment grows, additional controllers are added seamlessly to the resource pool residing on a shared storage infrastructure. Scale-out, together with built-in storage virtualization, provides nondisruptive movement of host and client connections, as well as the datastores themselves, anywhere in the resource pool.

With these capabilities, new workloads can be easily deployed and existing workloads can be easily and nondisruptively balanced over the available resources. Technology refreshes such as replacing disk shelves or storage controllers are accomplished while the environment remains online and serving data.

Nondisruptive Operations

Nondisruptive operations (NDO) allow the storage infrastructure to remain up and serving data during maintenance and IT lifecycle operations. The goal of NDO is to eliminate downtime and to allow changes to the system to occur at any time.

Clustered Data ONTAP is highly available by design and can transparently migrate data and client connections throughout the storage cluster.

Data migration may be performed to rebalance capacity usage, to optimize for changing performance requirements, to isolate one or more controllers, to promote data to an all-flash array, to archive data to a capacity configuration, or to execute maintenance or lifecycle operations.

Software updates and configuration changes occur throughout any system’s lifecycle. Additionally, the hardware infrastructure must be added to and replaced, potentially many times. Years after a system was originally commissioned, the data has outlived the hardware, so that little or none of the original hardware may remain. Through NDO capabilities, all of these changes can occur without outages to the applications or attached clients and hosts.


    Summary

    Clustered ONTAP delivers nondisruptive operations, efficiency, and scalability.

Nondisruptive operations eliminate planned downtime. They enable you to continue serving data throughout your entire tech refresh lifecycle and to move workloads between all-flash, high-capacity, and hybrid tiers without reconfiguring your applications or impacting your users.

Efficiency allows you to make the most of both data storage and your organization. Storage efficiency translates to a reduction in capacity requirements and more efficient use of flash. Cloning technologies also help you optimize your most important resource: the people who depend on infrastructure.

Scalability allows organizations to reduce risk by knowing that the same tools and processes will be available as needs grow, and provides the flexibility to expand what is needed, when it is needed. Flash capacity can be increased for performance, and high-density drive capacity can be increased for archive data. Clustered ONTAP allows you to scale up when you need a higher-end storage array or scale out horizontally when you need to distribute a workload. You can scale compute in the cloud while maintaining control of your data.

    Clustered ONTAP provides all this functionality in a single platform.

    Resources

Clustered Data ONTAP Product Documentation: http://mysupport.netapp.com/documentation/productlibrary/index.html?productID=30092

NetApp Documentation Library: http://www.netapp.com/us/library/

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer’s responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.

© 2015 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, DataMotion, Data ONTAP, Flash Cache, Flash Pool, FlexCache, FlexClone, Manage ONTAP, OnCommand, RAID-DP, SnapMirror, Snapshot, and SnapVault are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Microsoft is a registered trademark of Microsoft Corporation.

Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site (http://support.netapp.com/matrix/) to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines the product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.
