NAS Foundation
TRANSCRIPT
-
7/29/2019 NAS Foundation
1/78
Copyright 2006 EMC Corporation. Do not Copy - All Rights Reserved.
NAS Foundations - 1
2006 EMC Corporation. All rights reserved.
NAS Foundations
Welcome to NAS Foundations.
The AUDIO portion of this course is supplemental to the material and is not a replacement for the
student notes accompanying this course. EMC recommends downloading the Student Resource Guide
from the Supporting Materials tab, and reading the notes in their entirety.
These materials may not be copied without EMC's written consent.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC is a registered trademark of EMC Corporation.
All other trademarks used herein are the property of their respective owners.
NAS Foundations
Upon completion of this course, you will be able to:
- Identify the concepts and value of Network Attached Storage
- List Environmental Aspects of NAS
- Identify EMC NAS Platforms and their differences
- Identify and describe Celerra Software Features
- Identify and describe Celerra Management Software offerings
- Identify and describe Windows Specific Options with respect to EMC NAS environments
- Identify and describe NAS Business Continuity Options with respect to the various EMC NAS platforms
The objectives for this course are shown here. Please take a moment to read them.
Network Attached Storage
- Identify what constitutes a NAS environment
NAS environment components are reviewed in this section.
What Is Network-Attached Storage
- Built on the concept of shared storage on a Local Area Network
- Leverages the benefits of a network file server and network storage
- Utilizes industry-standard network and file sharing protocols

File Server + Network-Attached Storage = NAS
The benefit of NAS is that it now brings the advantages of networked storage to the desktop through
file-level sharing of data via a dedicated device.
NAS is network-centric and typically used for client storage consolidation on a variety of network
topologies such as LANs (Local Area Networks), MANs (Metropolitan Area Networks), WANs (Wide
Area Networks), etc. NAS is a preferred storage capacity solution for enabling clients with unregulated
access to files quickly and directly via purpose-built data sharing equipment. This eliminates several
bottlenecks users often encounter when accessing files from general-purpose servers. In addition,
NAS can serve UNIX and Microsoft Windows users seamlessly, sharing the same data between the
different architectures.
NAS provides security and performs all file and storage services through standard network protocols:
- TCP/IP for data transfer
- Ethernet and Gigabit Ethernet for media access
- CIFS, HTTP, FTP, and NFS for remote file service
Why NAS?
- Highest availability
- Scales for growth
- Avoids file replication
- Increases flexibility
- Reduces complexity
- Improves security
- Reduces costs

[Diagram: Internet clients pass through a firewall to web servers S1 through Sn, which share a NAS device in the data center on the internal network.]
With NAS, applications that use file-system-level access can share data simultaneously with large
numbers of users who may be geographically dispersed. Many users can therefore take advantage of
the availability and scalability of networked storage. Centralizing file storage can reduce system
complexity and system administration costs, along with simplifying backup, restore, and disaster
recovery solutions.
Although NAS trades some performance for manageability and simplicity, it is by no means a lazy
technology. Gigabit Ethernet allows NAS to scale to high performance and low latency, making it
possible to support a myriad of clients through a single interface. Many NAS devices support multiple
interfaces and can support multiple networks at the same time.
NAS Operations
- Traditional IO operations use file-level IO protocols
- File system is mounted remotely using a network file access protocol, such as:
  Network File System (NFS) for Unix
  Common Internet File System (CIFS) for Microsoft Windows
- IO is redirected to the remote system
- Utilizes mature data transport (e.g., TCP/IP) and media access protocols
- NAS device assumes responsibility for organizing data (R/W) on disk and managing cache

[Diagram: an application reaches the NAS device over an IP network; the NAS device connects to its disk either direct-attached or through a SAN.]
One of the key differences of a NAS disk device, compared to DAS or other networked storage
solutions such as SAN, is that all traditional I/O operations use file level I/O protocols. File I/O is a
high level type of request that, in essence, specifies only the file to be accessed, but does not directly
address the storage device. The client file I/O is converted into block-level I/O by the NAS device
operating system to retrieve the actual data. Once the data has been retrieved, it is converted back to
file-level I/O for return to the client.
A file I/O specifies the file and an offset into the file. For instance, the I/O may say, "Go to byte 1000
in the file (as if the file were a set of contiguous bytes), and read the next 256 bytes beginning at that
position."
Unlike block I/O, there is no awareness of a disk volume or disk sector in a file I/O request. Inside the
NAS appliance, the operating system keeps track of where files are located on disk. The OS issues
block I/O requests to the disks to fulfill the file I/O read and write requests it receives.
The disk resources can be directly attached to the NAS device or accessed through a SAN, referred to
as a gateway configuration.
Block-level IO support by NAS devices is discussed later in this module.
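The "go to byte 1000, read the next 256 bytes" request described above can be sketched with ordinary file I/O. This is only an illustration of the file-level model (the client names a file and an offset; the operating system, or the NAS device's OS, resolves the actual disk blocks behind the scenes), using a throwaway temporary file:

```python
import tempfile

# Create a sample "remote" file of 2048 bytes (a stand-in for a NAS-hosted file).
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.write(bytes(range(256)) * 8)

# A file-level request: go to byte 1000 and read the next 256 bytes.
# Note there is no disk volume or disk sector anywhere in this request.
with open(path, "rb") as f:
    f.seek(1000)
    chunk = f.read(256)

print(len(chunk))  # 256
```

The OS translates the seek/read pair into block reads against the underlying disk, exactly the role the NAS device's operating system plays for its clients.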
NAS Architecture
[Diagram: the client stack runs Application -> Operating System -> I/O Redirector -> NFS/CIFS -> TCP/IP Stack -> Network Interface, sending file I/O to the NAS. The NAS stack runs Network Interface -> TCP/IP Stack -> Network File Protocol Handler -> NAS Operating System, then out to storage over a drive protocol (SCSI) or a storage network protocol (Fibre Channel).]

- NFS and CIFS handle file requests to the remote file system
- I/O is encapsulated by the TCP/IP stack to move over the network
- The NAS device converts requests to block IO and reads or writes data to NAS disk storage
The Network File System (NFS) protocol and Common Internet File System (CIFS) protocol handle
file I/O requests to the remote file system, which is located in the NAS device storage. I/O requests are
packaged by the initiator into the TCP/IP protocols to move across the IP network. The remote NAS
file system converts the request to block I/O and reads or writes the data to the NAS disk storage. To
return data to the requesting client application, the NAS appliance software re-packages the data to
move it back across the network.
Here we see an example of an IO being directed to the remote NAS device and the different protocols
that play a part in moving the request back and forth to the remote file system located on the NAS
server.
NAS Device
- Single-purpose machine or component serving as a dedicated, high-performance, high-speed conduit for data using both file-level and block-level IO
- Is sometimes called a filer or a network appliance
- Uses one or more Network Interface Cards (NICs) to connect to the customer network
- Uses a proprietary, optimized operating system: DART, Data Access in Real Time
- Uses industry-standard storage protocols to connect to storage resources

[Diagram: a client application reaches the NAS device over an IP network; inside the NAS device, network drivers and protocols, NFS and CIFS, the NAS device OS (DART), and storage drivers and protocols sit between the network and the disk storage.]
A NAS server is not a general-purpose computer. NAS devices use a significantly streamlined and
tuned OS in comparison to a general-purpose computer. The NAS device is sometimes called a filer
because it focuses all of its processing power solely on file service and file storage. It is also sometimes
called a network appliance, referring to the plug-and-play design of many NAS devices. Common
network interface cards (NICs) include Gigabit Ethernet (1000 Mb/s) or Fast Ethernet (100 Mb/s),
ATM, and FDDI. Most NAS devices also support NDMP (Network Data Management Protocol) for
backup, as well as Novell NetWare, FTP, and HTTP protocols.
The NAS operating system for Network Appliance products is called Data ONTAP. The NAS
operating system for EMC Celerra is DART - Data Access in Real Time. These operating systems
are tuned to perform file operations including open, close, read, write, etc.
The NAS device generally uses a standard drive protocol, some form of SCSI, to manage data to and
from the disk resources.
NAS Applications
- CAD/CAM environments, where widely dispersed engineers have to share and modify design drawings
- Serving Web pages to thousands of workstations at the same time
- Easily sharing company-wide information among employees
- Database applications with:
  Low transaction rate
  Low data volatility
  Smaller size
  No performance constraints
Database applications have traditionally been implemented in a SAN architecture. The primary reason
is the superior performance of a SAN. This characteristic is especially applicable for very large, online
transactional applications with high transaction rates and high data volatility.
However, NAS might be appropriate where the database transaction rate is low and performance is not
constrained. Extensive application profiling should be done in order to understand the specific
database application requirement and, if in fact, a NAS solution would be appropriate.
When considering a NAS solution, the databases should:
- be sequentially accessed, non-indexed, or have a flat file structure
- have a low transaction rate
- have low data volatility
- be relatively small
- not have performance / timing constraints
- require multiple dynamic path access to application servers
NAS Environment
- Identify components in a common networking environment
Key components of NAS and networking infrastructure are reviewed in this section.
Terminology
- Ethernet: Local network protocol that uses coaxial or twisted pair cables
- Network Topology: Geometric arrangement of nodes and cable links in a LAN; used in two general configurations: bus and star
- Protocol: Defines how computers identify one another on a network, the form that the data should take in transit, and how this information is processed once it reaches its final destination
- IP Address: Unique number that identifies a computer to all other computers connected to the network
Ethernet is a local-area network protocol that uses coaxial or twisted pair cables as a means for
communication. Ethernet is popular because it strikes a good balance between speed, cost, and ease of
installation. These benefits, combined with wide acceptance in the computer marketplace and the
ability to support virtually all popular network protocols, make Ethernet an ideal networking
technology for most computer users today.
A network topology is the geometric arrangement of nodes and cable links in a LAN, and is used in
two general configurations: bus and star.
A protocol defines how computers identify one another on a network, the form that the data should
take in transit, and how this information is processed once it reaches its final destination. TCP/IP is a
common protocol used in sending information via the Internet. Protocols also define procedures for
handling lost or damaged transmissions, or "packets."
An Internet Protocol (IP) address is a four-octet number in the commonly used IP version 4 (for
example, 155.10.20.11) that uniquely identifies a computer to all other computers connected to the
network.
What is a Network?
- LAN
- Physical Media
- WAN
- MAN
A network is any collection of independent computers that communicate with one another over a shared network medium.
LANs are networks usually confined to a geographic area, such as a single building or a college campus. LANs can be
small, linking as few as three computers, but often linking hundreds of computers used by thousands of people.
Physical Media
An important part of designing and installing a network is selecting the appropriate medium. There are several types in use
today: Ethernet, Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM), and Token Ring.
WAN
Wide area networking combines multiple LANs that are geographically separate. Services such as dedicated leased phone
lines, dial-up phone lines (both synchronous and asynchronous), satellite links, and data packet carrier services connect the
different LANs. Wide area networking can be as simple as a modem and remote access server for employees to dial into, or
it can be as complex as hundreds of branch offices globally linked using special routing protocols and filters to minimize
the expense of sending data over vast distances.
MAN
Metropolitan area networking is a networking infrastructure size that falls in between a LAN and a
WAN. MANs are generally used to consolidate networking infrastructures in a campus-sized area,
generally between five (5) and fifty (50) kilometers in diameter, to provide sharing of localized
resources. They typically use wireless or optical interconnections between localized sites within the
MAN. The IEEE 802.6 standard specifies the unique way that the MAN can communicate between
sites to minimize latency and congestion. This is known as a distributed queue dual bus (DQDB)
network, which utilizes a dual bus with distributed queuing.
Physical Components
- Network Interface Card (NIC)
- Switches
- Routers

[Diagram: hosts with NICs connect to two switches, joined by a router that separates subnets 155.10.10.XX and 155.10.20.XX.]
Network Interface Card
Network interface cards, commonly referred to as NICs, are used to connect a host, server,
workstation, PC, etc. to a network. The NIC provides a physical connection between the networking
cable and the computer's internal bus. The rate at which data passes back and forth differs by NIC.
Switches
LAN switches can link multiple network connections together. Today's switches accept and analyze
the entire packet of data, catching certain packet errors and keeping them from propagating through the
network before forwarding the packet to its destination. Each of the segments attached to an Ethernet
switch has the full bandwidth of the switch: 10 Mb, 100 Mb, or 1 Gigabit.
Routers
Routers pass traffic between networks. Routers also divide networks logically instead of physically. An
IP router can divide a network into various subnets so that only traffic destined for particular IP
addresses can pass between segments.
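The subnet division a router enforces can be sketched with Python's standard ipaddress module. Modeling the two segments in the diagram as /24 networks is an assumption (the slide shows only the first three octets); the membership test mirrors the router's decision about which traffic must cross between segments:

```python
import ipaddress

# The router in the diagram divides 155.10.10.XX from 155.10.20.XX.
# Assumed here to be two /24 subnets.
subnet_a = ipaddress.ip_network("155.10.10.0/24")
subnet_b = ipaddress.ip_network("155.10.20.0/24")

mary = ipaddress.ip_address("155.10.10.14")  # host Mary
nas = ipaddress.ip_address("155.10.20.11")   # NAS server Account1

# Traffic between hosts on the same subnet never touches the router;
# traffic between subnets must be routed.
same_segment = mary in subnet_a and nas in subnet_a
print(same_segment)  # False
```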
Network Protocols
- Network transport protocols
  User Datagram Protocol (UDP) for non-connection-oriented networks
  Transmission Control Protocol (TCP) for connection-oriented networks
- Network filesystem protocols
  NFS to manage files in a networked Unix environment
  CIFS to manage files in a networked Windows environment
Network transport protocols are standards that allow computers to communicate. They are used to
manage the movement of data packets between devices communicating across the network. UDP and
TCP are examples of transport protocols.
In a non-connection-oriented communication model, the data is sent out to a recipient using a best-
effort approach, with no acknowledgement of receipt sent back to the originator by the recipient. Error
correction and resend must be controlled by a higher-layer application to ensure data integrity.
In a connection oriented model, all data packets sent by an originator are acknowledged by the
recipient and transmission errors / lost data packets are managed at the protocol layer.
TCP/IP (for UNIX, Windows NT, Windows 95 and other platforms), IPX (for Novell NetWare),
DECnet (for networking Digital Equipment Corp. computers), AppleTalk (for Macintosh computers),
and NetBIOS/NetBEUI (for LAN Manager and Windows NT networks) are examples of network
transport protocols in use today.
Network filesystem protocols are used to manage how data requests are processed once they reach
their final destination. Both NFS and CIFS support UDP and TCP transport protocols.
Network block level protocols are discussed later in this presentation.
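The connectionless model above can be demonstrated with Python's socket module: a UDP datagram is handed to the network with no handshake and no protocol-level acknowledgement. This sketch sends one datagram over the loopback interface, where best-effort delivery succeeds in practice; on a real network, a dropped datagram would simply be lost unless a higher layer resent it:

```python
import socket

# Receiver: bind a UDP socket; the OS picks a free port for us.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connect() handshake -- the datagram is sent "best effort".
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"best effort", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data)  # the datagram as delivered
sender.close()
receiver.close()
```

A TCP (SOCK_STREAM) socket, by contrast, would first establish a connection, and every segment would be acknowledged and retransmitted at the protocol layer.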
Network Addressing
- IP Addressing
- DHCP
- DNS

[Diagram: hosts Peter (155.10.10.13) and Mary (155.10.10.14), a DNS server (155.10.10.11), and a DHCP server (155.10.10.12) sit on subnet 155.10.10.XX; a switch and router connect them to the NAS host Account1 (155.10.20.11) on subnet 155.10.20.XX.]
Several things must happen in order for computers to be able to communicate data across the network.
First, the computer must have a unique network address, referred to as the IP Address.
An address can be assigned in one of two ways: dynamically or statically. A static address requires
entering the IP address that the computer uses into a local file. However, if two computers on the same
subnet are assigned the same IP address, they will not be able to communicate. Another approach is
to set up a computer on the network to dynamically assign an IP address to a host when it joins the
network. This service is the Dynamic Host Configuration Protocol (DHCP), provided by a DHCP
server.
In our example, the host Mary is assigned an IP address 155.10.10.14, and the host Peter is assigned
an IP address 155.10.10.13 by the DHCP server. The NAS device, Account1, is a File server. Servers
normally have a statically assigned IP address. In this example, it has the IP address 155.10.20.11.
A second requirement for communication is knowing the address of the recipient. The more natural
approach is to communicate by name, like the name you place on a letter; however, the network uses
numerical addresses. An efficient solution is the Domain Name Service (DNS). The DNS is a
hierarchical database, which resolves host names to IP addresses. In our example, if someone on host
Mary wants to talk to host Peter, it is the DNS server that resolves Peter to 155.10.10.13.
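The name-to-address lookup the DNS server performs for "Peter" is what a client triggers through its resolver library. A minimal sketch with Python's socket module ("localhost" is used so the example works without a network; a real query for a host like Peter would go to the configured DNS server):

```python
import socket

# Ask the system resolver to turn a host name into an IP address.
# The resolver may consult a local hosts file or a DNS server.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```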
Volume and Files
- Create Volumes
- Create Network Filesystem

[Diagram: the array presents volumes to the NAS device Account1 (155.10.20.11), on which the file system /Acct_Rep is built; hosts Peter and Mary reach it across the network.]
Create Array Volume
The first step in a network attached storage environment is to create logical volumes on the array and
assign each a LUN identifier. The LUN is then presented to the NAS device.
Create NAS Volume
The NAS device performs a discovery operation when it first starts or when directed. In the discovery
operation, the NAS device sees the array LUN as a physical drive. The next task is to create logical
volumes at the NAS device level. The Celerra creates meta volumes using the volume resources
presented by the array.
Create Network File System
When the logical volumes are created on the Celerra, it can use them to create a file system.
In this example, we have created a file system /Acct_Rep on the NAS server Account1.
Mount File System
Once the file system has been created, it must be mounted. With the file system mounted, we can then
move to the next step, which is publishing the file system on the network.
Publish
- Export
- Share

[Diagram: the file system /Acct_Rep on ACCOUNT1 is published as a UNIX export to user Peter and as an MS Windows share to user Mary; both users belong to the group SALES, alongside a separate group Accounting.]
Now that a network file system has been created and mounted, there are two ways it can be accessed
using the network.
The first method is through the UNIX environment using NFS. This is accomplished by performing an
export. The export publishes to those UNIX clients who can mount (access) the remote file
system. Access permissions are assigned when the export is published.
The second method is through the Windows environment using CIFS. This is accomplished by
publishing a share. The share publishes to those Windows clients who map a drive to access the
remote file system. Access permissions are assigned when the share is published.
In our example, we may only allow Mary and Peter, who are in the Sales organization, share or
export access. At this level, NFS and CIFS are performing the same function but are used in
different environments. All members of the Group SALES, which include the users Mary and Peter,
are granted access to /Acct_Rep.
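The access decision described above, in which only members of SALES may use the published export, can be sketched as a toy group-membership check. The data structures and names here are purely illustrative, not the Celerra implementation:

```python
# Exports/shares and the groups they are published to (illustrative data).
export_acl = {"/Acct_Rep": {"SALES"}}

# Group membership per user (illustrative data).
user_groups = {
    "Mary": {"SALES"},
    "Peter": {"SALES"},
    "Guest": {"Accounting"},
}

def may_access(user: str, export: str) -> bool:
    """Grant access if the user belongs to any group the export is published to."""
    return bool(user_groups.get(user, set()) & export_acl.get(export, set()))

print(may_access("Mary", "/Acct_Rep"))   # True
print(may_access("Guest", "/Acct_Rep"))  # False
```

At this level the check is identical for NFS exports and CIFS shares; only the publishing mechanism differs.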
Client Access
- Mount
- Map

[Diagram: user Peter on a UNIX host reaches /Acct_Rep on ACCOUNT1 with nfsmount, while user Mary on an MS Windows host maps a drive to it.]
To access the network file system, the client must mount a directory or map a drive pointing to the remote file
system.
Mount is a UNIX command performed by a UNIX client to set a local directory pointer to the remote file system.
The mount command uses NFS protocol to mount the export locally.
For a UNIX client to perform this task, it executes the nfsmount command. The format for the command is:
- nfsmount <name of the NAS server>:<name of the remote file system> <name of the local directory>
For example:
- nfsmount Account1:/Acct_Rep /localAcct_Rep
For a Windows client to perform this task, it executes a map network drive. The sequence is My Computer >
Tools > Map Network Drive. Select the drive letter and provide the server name and share name in the Folder
field. For example:
- G:
- \\Account1\Acct_Rep
If you make a comparison, the same information is provided in both cases: the local drive (Windows) or the
local directory (UNIX), the name of the NAS server, and the name of the export or the share.
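That comparison can be made concrete by building both strings from the same pieces of information. A sketch; the helper names are made up for illustration:

```python
def nfs_mount_spec(server: str, export: str) -> str:
    """server:export -- the remote-filesystem argument a UNIX NFS mount takes."""
    return f"{server}:{export}"

def windows_unc_path(server: str, share: str) -> str:
    """\\\\server\\share -- the path used when mapping a Windows network drive."""
    return rf"\\{server}\{share}"

print(nfs_mount_spec("Account1", "/Acct_Rep"))   # Account1:/Acct_Rep
print(windows_unc_path("Account1", "Acct_Rep"))  # \\Account1\Acct_Rep
```

Same server name, same published file system; only the naming convention differs between the two environments.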
File Permissions
- Create File
- File Request

[Diagram: on Account1, the file system /Acct_Rep holds the files MRPT1 and PRPT2; users Mary and Peter, members of the group SALES, access them across the network.]
Create file
Once access is gained by the client, files can be created on the remote file system. When a file is
created by a client, default permissions are assigned. The client can also modify the original permissions
assigned to a file. File permissions are changed in UNIX using the chmod command. File permissions in
Windows are changed by right-clicking the selected file, then selecting Properties > Security to add
or remove groups and add or remove permissions. It should be noted that in order to modify the file
permissions, one must have the permission to make the change.
File request
If a request for a file is received by the NAS server, the NAS server first authenticates the user, locally
or over the network. If the user's identity is confirmed, then the user is allowed to perform the
operations contained in the file permissions for the user's group.
In our example, user Mary on host Mary creates a file MRPT1 on the NAS server Account1. She assigns
herself the normal permissions for this file, which allow her to read and write to it. She also
limits file permissions for other members of the group SALES to read-only. User Peter on host Peter is a
member of the group SALES. Peter has access to the export /Acct_Rep. If user Peter attempts to
write to file MRPT1, he is denied permission to write to the file.
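Mary's permission setup (read/write for the owner, read-only for the group, no access for others) corresponds to POSIX mode 640, which chmod applies. A sketch using Python's os.chmod on a throwaway temp file, assuming a POSIX system:

```python
import os
import stat
import tempfile

# Owner (Mary) read/write, group (SALES) read-only, no access for
# others -- i.e. mode 640, the situation described in the example.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o640 on a POSIX system
os.remove(path)
```

With these bits set, a group member like Peter can open the file for reading, but any write attempt is refused by the server.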
EMC NAS Platforms
- Identify products from the EMC NAS range of equipment
EMC NAS products are reviewed in this section.
EMC Celerra Platforms: Broadest Range of NAS Products
SIMPLE WEB-BASED MANAGEMENT

[Product matrix, summarized:]
- Gateway models (NAS gateway to SAN; CLARiiON or Symmetrix back end): NS40G*, NS500G, NS700G, NS80G*, NS704G, NSX
- Integrated models (integrated CLARiiON; upgradeable to gateway): NS350, NS40*, NS500, NS700, NS704
- Data Movers: one or two on the smaller models, four on the NS704/NS704G, and four to eight X-Blades on the NSX
- Availability: high availability on the smaller models, advanced clustering on the larger models
* X-Blade technology
An important decision you must make is, "What is the right information platform that meets my
business requirements?"
EMC makes it easy by offering the broadest range of NAS platforms in the industry. Rate your
requirements and choose your solution.
The EMC NAS range all use the DART (Data Access in Real Time) operating system, which is
specially developed to provide efficient data transfer between the front-end network connections and
the back-end disk interfaces. There are at present two configurations available: Gateway and Integrated.
The Gateway models provide a NAS interface to SAN/Fabric-attached storage arrays, while the
Integrated models have their storage arrays contained within the same frames as the NAS heads (Data
Movers), which are solely dedicated to NAS functionality (no shared host access to disks).
The Celerra NS Gateway (which can be configured with up to four Data Movers) and the Celerra NS GS
(configured with a single Data Mover) connect to CLARiiON CX arrays and/or Symmetrix DMX
arrays through a Fibre Channel switch, or directly (in the case of CLARiiON).
The NSX gateway model can be configured with between four and eight Data Movers.
Celerra NAS - SAN Scalability
- Consolidated storage infrastructure for all applications
- NAS front end scales independently of SAN back end
- Allocate storage to Celerra and servers as needed
- Centralized management for SAN and NAS
- iSCSI gateway to SAN

[Diagram: Windows and UNIX clients reach the Celerra NS G Family and Celerra NSX, which connect through a Connectrix SAN to the CLARiiON CX family and Symmetrix DMX family.]
One of the reasons that EMC NAS scales impressively is due to the gateway architecture that separates
the NAS front end (Data Movers) from the SAN back end (Symmetrix or CLARiiON).
This allows the front end and back end to grow independently. Customers can merely add Data Movers
to the EMC NAS to scale the front-end performance to handle more clients. As the amount of data
increases, you can add more disks, or the EMC NAS can access multiple Symmetrix or CLARiiON
arrays.
This flexibility leads to improved disk utilization.
EMC NAS supports simultaneous SAN and NAS access to the CLARiiON and Symmetrix, and can be
added to an existing SAN, with general-purpose servers now able to access non-NAS back-end
capacity. This extends the improved utilization, centralized management, and TCO benefits of SAN
plus NAS consolidation to EMC NAS, Symmetrix, and CLARiiON.
The configuration can also be changed via software. Since all Data Movers can see the entire
file space, it is easy to reassign filesystems to balance the load. In addition, filesystems can be
extended online as they fill.
Even though the architecture splits the front end among multiple Data Movers and a separate SAN
back end, the entire NAS solution can be managed as a single entity.
Celerra Family Hardware
- Describe and identify common EMC NAS components
Due to the diversity of the EMC NAS range, we now briefly review some of the major
hardware components to differentiate between the various options available.
Celerra NS Family Control Station Hardware
- The Control Station provides an interface to control, manage, and configure the NAS solution
The Control Station provides the controlling subsystem of the Celerra, as well as the management interface
to all file server components. The Control Station provides a secure user interface as a single point of
administration and management for the whole Celerra solution. Control Station administrative
functions are accessible via the local console, Telnet (not recommended), or a Web browser.
The Control Station is based on a single Intel processor, with high memory capacity. Depending on the
model, the Control Station may have internal storage. The local LAN switch provides the internal
communications network for the Data Movers and the Control Station and should NOT be integrated
into a client networking infrastructure.
Within the NSX model there are no serial interconnections between the Control Station and the Data
Movers, and the internal switch has been built into the Control Station functionality.
NSX Next Generation Control Station
- Celerra NSX front view: LEDs and switches
  - Power Switch, NMI Switch, Reset Switch, ID Switch
  - Power/Boot Sequence LED, Status LED, ID LED
  - HDD Act LED, HDD Fault LED, Gb #1 and Gb #2 LEDs
  - USB Connectors 2 and 3, Serial Port COM2
The Control Station is a dedicated management Intel processor-based computer that monitors and
sends commands to the blades. The private network connects the two Control Stations (always shipped
on NSX systems) to the blades through the system management switch modules. Like previous
versions, it provides software installation and upgrade services, and high-availability features such as
fault monitoring, fault recovery, fault reporting (CallHome), and remote diagnosis. Two Control
Stations can be connected to a public or private network for remote administration. Each Control
Station has a serial port that connects to an external modem so that the Control Station can call home
to EMC or a service provider if a problem should arise.
NSX Next Generation Control Station
- Celerra NSX rear view
  - eth3 - Public LAN port
  - COM1 - To serial modem (for CallHome)
  - eth0 - Internal network (to Mgmt. Switch-A in Enclosure 0)
  - Gb2 - Internal network (to Mgmt. Switch-B in Enclosure 0)
  - Gb1 - IPMI (to eth1 of the other Control Station)
  - Video port
This slide displays the rear view of the Next Generation Control Station. Note the lack of a 25-pin
quad serial port and spider cable.
Celerra Family Data Mover Hardware
- Single or dual Intel processors
- PCI or PCI-X based
- High memory capacity
- Multi-port network cards
- Fibre Channel connectivity to storage arrays
- No internal storage devices
- Redundancy mechanism
Fibre I/O module / GbE I/O module
NSX Data Mover / NS80G Data Mover
NS40 Data Mover
Each Data Mover is an independent, autonomous file server that transfers requested files to clients, yet
all are managed as a single entity. Data Movers are hot pluggable and can be configured with standbys
to implement N-to-1 availability, so that clients are unaffected should a problem arise with another
Data Mover. Multiple Data Movers are supported (up to 8 in the NSX and 4 in the NS range). A Data
Mover (DM) connects to a LAN through Fast Ethernet and/or Gigabit Ethernet.
The default name for a Data Mover is server_n, where n was its original slot location in the first NAS
frames. This has been continued into the new frames, and the naming convention remains slot
related. For example, in the Golden Eagle/Eagle frame, a Data Mover can be in slot location 2
through 15 (i.e. server_2 - server_15); therefore the first Data Mover in any frame remains server_2,
the second server_3, etc.
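The slot-based naming convention can be sketched in a couple of lines (an illustration of the rule described above, using the Golden Eagle/Eagle slot range from the text; the function name is invented):

```python
def data_mover_name(slot: int) -> str:
    """Map a Data Mover's frame slot to its default slot-based name."""
    if not 2 <= slot <= 15:  # Golden Eagle/Eagle frames use slots 2 through 15
        raise ValueError("Data Mover slots run from 2 through 15 in this frame")
    return f"server_{slot}"
```

So the first Data Mover in any frame is always `server_2`, regardless of how many Data Movers are installed.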
There is no remote login capability on the DMs, nor do they run any binaries (very secure); all
access to the Data Movers for management and configuration must be performed via the Control
Station.
Data Mover redundancy is the mechanism by which the Celerra family reduces the network data
outage in the event of a Data Mover failure. The ability to failover the Data Movers is achieved by the
creation of a Data Mover configuration database on the Control Station system volumes and is
managed via the Control Station. No Data Mover failover occurs if the Control Station is not available
for some reason.
Celerra Family Data Mover Hardware (Cont.)
- Standby Data Mover configuration options
  - Each standby Data Mover as a standby for a single primary Data Mover
  - Each standby Data Mover as a standby for a group of primary Data Movers
- Failover operational modes
  - Automatic
  - Retry
  - Manual
These Standby Data Movers are powered and ready to assume the personality of their associated Primary Data Movers in
the event of a failure. If a Primary Data Mover fails, the Control Station detects the failure and initiates the failover process.
The failover procedure, in an Automatic configuration, is as follows. The Control Station:
1. Removes power from the failed Data Mover.
2. Sets the location for the Standby Data Mover to assume its new personality in the configuration database.
3. Controls the personality takeover and allows the Standby Data Mover to assume the primary role, thereby enabling
clients to re-access their data transparently via the standby.
Once the failed Data Mover is repaired, the failback mechanism is always manually initiated by the administrator. This
process is the reverse of the failover process: it restores primary functionality to the repaired Primary Data Mover and
returns the Standby Data Mover to its standby state in preparation for any future outage.
There are three operational modes for failover: Automatic, Retry, and Manual.
1. Automatic Mode: The Control Station detects the failure of a Data Mover. The failover process occurs without trying
any recovery process first.
2. Retry Mode: The Control Station detects the failure; an attempt to reboot the failed Data Mover is tried first, before
the failover procedure is initiated.
3. Manual Mode: The Control Station detects the failure and removes power from the failed Data Mover. However, no
further Data Mover recovery action is taken until administrative intervention. Recovery after a Data Mover failover is
always a manual process.
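The three failover modes above can be sketched as a small decision model. This is an illustrative sketch only, not Celerra code; the action strings and the `reboot_succeeds` flag are invented for the example:

```python
from enum import Enum

class FailoverPolicy(Enum):
    AUTOMATIC = "automatic"   # fail over immediately, no recovery attempt
    RETRY = "retry"           # attempt one reboot before failing over
    MANUAL = "manual"         # power off and wait for the administrator

def handle_failure(policy: FailoverPolicy, reboot_succeeds: bool) -> list:
    """Return the sequence of Control Station actions as a list of strings."""
    actions = ["detect failure"]
    if policy is FailoverPolicy.RETRY:
        actions.append("reboot failed Data Mover")
        if reboot_succeeds:
            actions.append("resume primary")   # no failover needed
            return actions
    if policy is FailoverPolicy.MANUAL:
        actions += ["power off failed Data Mover", "await administrator"]
        return actions
    # Automatic path (also taken when a Retry-mode reboot fails)
    actions += ["power off failed Data Mover",
                "activate standby personality",
                "clients transparently reconnect"]
    return actions
```

Note how Retry collapses into the Automatic path when the reboot attempt fails, matching the description above.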
NAS Reference Documentation
- NAS Interoperability Matrix
  - Data Movers
  - Control Stations
  - Software supported features
- Website
  - www.emc.com/horizontal/interoperability
The NAS Interoperability Guide provides support information on the Data Mover and Control Station
models, NAS software versions, supported features, storage models, and microcode. This
interoperability reference can be found at: http://www.emc.com/horizontal/interoperability.
Celerra Family Software
- Describe operating systems used by EMC NAS
Having briefly reviewed some of the major hardware components, the software environment of the
high-end EMC NAS offering is covered next.
Celerra Software Operating Systems
- EMC Linux
  This is an industry-hardened and EMC-modified operating system loaded on the Control Station to provide:
  - A secure NAS management environment
  - Growing popularity and corporate acceptance
- DART (Data Access in Real Time)
  This is a highly specialized operating system, loaded on the Data Movers, designed to optimize network traffic and I/O throughput:
  - Multi-threaded to optimize the load-balancing capabilities of the multi-processor Data Movers
  - Advanced volume management - UxFS
    - Large file size and filesystem support
    - Ability to extend filesystems online
    - Metadata logging for fast recovery
    - Striped volume support
  - Feature rich to support the varied specialized capabilities of the Celerra range
    - Data Mover failover
    - Networking functionality: port aggregation, FailSafe network device, multi-protocol support
    - Point-in-time filesystem copies
    - Windows environmental specialties
The EMC Linux OS is installed on the Control Station. Control Station OS software is used to install,
manage, and configure the Data Movers, monitor the environmental conditions and performance of all
components, and implement the CallHome and dial-in support feature. Typical administration
functions include volume and filesystem management, configuration of network interfaces, creation of
filesystems, exporting filesystems to clients, performing filesystem consistency checks, and extending
filesystems.
The OS that the Data Movers run is EMC's Data Access in Real Time (DART) embedded system
software, which is optimized for file I/O, to move data from the EMC storage array to the network.
DART supports standard network and file access protocols: NFS, CIFS, and FTP.
Celerra Family Software Management
- Describe user interfaces available for EMC NAS management
The two user interfaces available for EMC NAS management are reviewed in this section.
Celerra Management Command Line
- The command line can be accessed on the Control Station via:
  - An SSH interface tool (e.g. PuTTY)
  - Telnet
- Its primary function is scripting of common repetitive tasks that may run on a predetermined schedule to ease the administrative burden
- It has approximately 80 UNIX-like commands
Telnet access is disabled by default on the Control Station, due to the possibility of unauthorized
access if the Control Station is placed on a publicly accessible network. If this is the case, it is strongly
recommended that this service not be enabled.
The preferred mechanism for accessing the Control Station is the SSH (Secure Shell) daemon via an
SSH client such as PuTTY.
Celerra Manager Management
GUI management has been consolidated into one product with two options: Celerra Manager Basic
Edition and Celerra Manager Advanced Edition.
The Basic Edition is installed along with the DART OS and provides a complete set of common
management functionality for a single Celerra at a time. The Advanced Edition adds multiple-Celerra
support, along with GUI management of some advanced features, and is licensed separately from the
DART code.
Celerra Manager Wizards
Celerra Manager offers a number of configuration wizards for various tasks, to assist new
administrators with ease of implementation.
Celerra Manager Tools
Celerra Manager offers a set of tools to integrate Celerra monitoring functionality and launch
Navisphere Manager.
With the addition of the Navisphere Manager launch capability, the SAN/NAS administrator has a
more consolidated management environment.
EMC ControlCenter V5.x.x NAS Support
- Discovery and Monitoring
  - Data Movers
  - Devices and volumes
  - Network adapters and IP interfaces
  - Mount points
  - Exports
  - Filesystems (including snapshots and checkpoints)
The EMC flagship management product, EMC ControlCenter, has the capability of an assisted
discovery of both EMC NAS and third-party NAS products, namely NetApp filers.
Currently, management of the EMC NAS family is deferred to the product-specific management
software due to the highly specialized nature of the NAS environment. Therefore, this product
functionality (shown on this slide) is focused mainly on discovery, monitoring, and product
management software launch capability.
ControlCenter V5.x.x has enhanced device management support for the Celerra family. The
ControlCenter Celerra Agent runs on Windows and has enhanced discovery and monitoring
capabilities. You can now view properties information on Celerra Data Movers, devices, network
adapters and interfaces, mount points, exports, filesystems (including snapshots and checkpoints), and
volumes from the ControlCenter Console. You can also view alerting information for the Celerra
family as well.
Celerra Family Software Management
- Describe the implementation of VLANs (Virtual Local Area Networks) for environmental management with EMC NAS
Next, an overview of the virtual local area networking environment, or VLANs, is presented.
VLAN Support
- Create logical LAN segments
  - Divide a single LAN into logical segments
  - Join multiple separate segments into one logical LAN
- VLAN Tagging - 802.1q
- Simplified management
  - No network reconfiguration required for member relocation
[Slide diagram: hubs on separate LAN segments connected through bridges/switches and a router; each hub is its own collision domain (LAN segment), while VLAN A and VLAN B form broadcast domains that span segments]
Network domains are categorized as Collision domains, LAN segments within which data collisions are contained, or
Broadcast domains, the portions of the network through which broadcast and multicast traffic is propagated. Collision
domains are determined by hardware components and how they are connected together. The components are usually client
computers, hubs, and repeaters. A network switch or a router, which generally does not forward broadcast traffic, separates
a Collision domain from a Broadcast domain. VLANs allow multiple, distinct, possibly geographically separate network
segments to be connected into one logical segment. This can be done by subnetting or by using VLAN tags (802.1q), an
address added to network packets to identify the VLAN to which the packet belongs. This could allow servers that were
connected to physically separate networks to communicate more efficiently, and it could prevent servers that were attached
to the same physical network from impeding one another.
By using VLANs to logically segment the Broadcast domains, the equipment contained within this logical environment
need not be physically located together. This means that if a mobile client moves location, an administrator need not
do any physical network or software configuration for the relocation, as bridging technology would now be used, and a
router would only be needed to communicate between VLANs.
There are two commonly practiced ways of implementing this technology:
1. IP address subnetting, or
2. VLAN Ethernet packet tagging
When using the IP address subnetting methodology, the administrator configures the broadcast domains to include the
whole network area for specific groups of computers by using bridge/router technology. When using the VLAN tagging
methodology, the members of a specific group have an identification tag embedded in all of their Ethernet packet traffic.
VLAN tagging allows a single Gigabit Data Mover port to service multiple logical LANs (virtual LANs). This allows data
network nodes to be configured (added and moved, as well as other changes) quickly and conveniently from the
management console, rather than in the wiring closet. VLANs also allow a customer to limit traffic to specific elements of a
corporate network and protect against broadcasts (such as denial of service) affecting whole networks. Standard router-
based security mechanisms can be used with VLANs to restrict access and improve security.
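At the frame level, 802.1q tagging works by inserting a 4-byte tag after the two 6-byte MAC addresses: a Tag Protocol Identifier (0x8100) followed by a 16-bit Tag Control Information field carrying the priority and the 12-bit VLAN ID. A minimal sketch of that layout (an illustration of the standard's frame format, not Data Mover code):

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination/source MACs (bytes 0-11)."""
    assert 0 <= vlan_id < 4096          # VID is a 12-bit field
    assert 0 <= priority < 8            # PCP is a 3-bit field
    tci = (priority << 13) | vlan_id    # PCP (3 bits) + DEI (1 bit) + VID (12 bits)
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]
```

A switch or Data Mover port that receives such a frame reads the VID to decide which logical LAN the frame belongs to; untagged equipment simply never sees traffic for other VLANs.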
VLAN Benefits
- Performance
- Reduced overhead
- Reduced costs
- Security
The benefits of VLAN support include:
- Performance: In all networks there is a large amount of broadcast and multicast traffic, and
VLANs can reduce the amount of traffic being received by all clients.
- Virtual collaborative work divisions: By placing widely dispersed collaborative users into a
VLAN, broadcast and multicast traffic between these users is kept from affecting other network
clients, and the routing overhead placed on their traffic is reduced.
- Simplified administration: With the large amount of mobile computing today, physical user
relocation generates a lot of administrative user reconfiguration (adding, moving, and changing). If
the user has not changed company function, but has only relocated, VLANs can provide
undisrupted job functionality.
- Reduced costs: By using VLANs, expensive routers and billable traffic routing costs can be reduced.
- Security: By placing users into a tagged VLAN environment, external access to sensitive broadcast
data traffic can be reduced.
VLAN support enables a single Data Mover with Gigabit Ethernet port(s) to be the standby for
multiple primary Data Movers with Gigabit Ethernet port(s). Each primary Data Mover's Gigabit
Ethernet port(s) can be connected to different switches. Each of these switches can be in a different
subnet and different VLAN. The standby Data Mover's Gigabit Ethernet port is connected to a switch
which is connected to all the other switches.
Celerra Family Filesystem Management
- Describe filesystem Quotas implementation on EMC NAS
Next, file system controls supported by Celerra Management software are reviewed.
Filesystem Controls - User Quota Restrictions
- There are three main types of quotas used in data space control:
  - Soft Quota
    - Amount of data space or number of files used under normal working conditions
  - Hard Quota
    - Total space or number of files a user/group can use or create on a filesystem
  - Tree Quota
    - Total space or number of files that a user/group can use or create in a data directory tree. Tree quotas are used as a logical mechanism to segment large file systems into smaller administrative portions that do not affect each other's operation
One of the most common concerns in a distributed data environment is that users tend to save many
copies of the same information. When working in a collaborative distributed environment, the amount
of data space required by each user expands rapidly and, in some cases, uncontrollably. To minimize
data space outages, user space can be controlled by imposing quotas on users, or groups of users, to
limit the number of blocks of disk space they can use or the number of files they can create.
The Soft Quota is a logical limit placed on a user that can be exceeded without the need for any
administrative intervention. Once the soft quota limit has been exceeded, the user has a grace period to
use the extra space, up to the hard quota limit. However, the user/group can never exceed the hard
limit.
The grace period is a time limit during which the user, or group, can continue to increase the amount of
disk space used or the number of files created. If the grace period expires, the user/group must reduce the
amount of space used or the number of files to below the soft limit before any new space or files can
be created.
The Celerra family supports all of these quota methodologies, providing administrators who are used to
these management tools with a seamless transition into an EMC NAS environment.
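The interplay of soft limit, hard limit, and grace period described above can be sketched in a few lines. This is a toy model in block units; the class and its timing logic are illustrative, not the Celerra implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Quota:
    soft_limit: int            # blocks usable under normal working conditions
    hard_limit: int            # absolute ceiling; can never be exceeded
    grace_seconds: int         # how long usage may stay above the soft limit
    used: int = 0
    grace_started: Optional[float] = None

    def try_allocate(self, blocks: int, now: float) -> bool:
        """Return True if the allocation is permitted under the quota rules."""
        new_used = self.used + blocks
        if new_used > self.hard_limit:
            return False                      # hard quota is never exceeded
        if new_used > self.soft_limit:
            if self.grace_started is None:
                self.grace_started = now      # soft limit crossed: grace period starts
            elif now - self.grace_started > self.grace_seconds:
                return False                  # grace expired: must drop below soft limit
        else:
            self.grace_started = None         # back under the soft limit: grace resets
        self.used = new_used
        return True
```

The same structure applies whether the quota counts disk blocks or file counts; a tree quota simply scopes these checks to a directory tree instead of a whole filesystem.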
Celerra Family Management Software
- Describe some Windows-specific options for environmental management using EMC NAS, including:
  - Usermapper
  - Virtual Data Movers
  - Microsoft Management Console snap-ins
Windows-specific management options for the Celerra family are reviewed in this section.
Usermapper Windows and UNIX integration
- Usermapper is the methodology by which Windows SIDs (Security Identifiers) are equated with UNIX UIDs/GIDs (User/Group Identifiers) on EMC NAS devices
- There are two configurable environments to achieve these mappings
  - Internal
    - Part of the Data Mover's software. It does not require a separate installation or additional configuration procedures for a new Celerra Network Server
  - External
    - Runs as a daemon on a Celerra Control Station. Requires a separate installation as well as additional configuration and management procedures
The EMC NAS device Data Mover operating system, DART, utilizes a very specialized UNIX-like file
system and thus has the same security structures. To support disparate clients, NFS and CIFS, the
various environmental security structures need to be equated to the Data Mover structures. In the NFS
environment no translation needs to be performed; however, in the Microsoft environment the Security
Identifiers (SIDs) need to be equated to the security structures on the filesystem. Usermapper is the
mechanism used in an EMC NAS device to achieve this mapping.
Usermapper Windows and UNIX integration (Cont.)
- The choice of Usermapper methodology is also determined by the client environment
  - Windows only
    - Internal Usermapper is recommended in Windows-only environments. Celerra Network Server installations after version 5.2 use this by default
  - Mixed protocol UNIX and Windows
    - In multiprotocol environments, file systems can be accessed by UNIX and Windows users. Some of the methodologies that enable this to be achieved are:
      - Active Directory (using Microsoft Management Console snap-ins)
      - A Data Mover's local user and group files
      - Network Information Service (NIS)
ACL = Access Control List
ACE = Access Control Entry
In multiprotocol environments, file systems can be accessed by UNIX and Windows users. File access
is determined by the permissions on the file or directory: the UNIX permissions, the Windows access
control lists (ACLs), or both. Therefore, if a user has both a UNIX and a Windows user account, you
should choose a mapping method that allows you to indicate that the two accounts represent the same
user. Some of the methodologies that enable this to be achieved are:
- Active Directory (using Microsoft Management Console snap-ins)
- A Data Mover's local user and group files
- Network Information Service (NIS)
If a user in a multiprotocol environment only uses a single logon (through Windows or UNIX), then it
is acceptable to use Usermapper. If a user has only one account, mapping to an equivalent identity in
the other environment is not necessary.
Usermapper - Pre & Post DART v5.2
[Slide diagram - Configuration/Installation: a new user requests a resource, requiring a new mapping. (1) The resolver on the Data Mover queries the first Usermapper server configured (a secondary server); (2) the mapping is not in that server's DB; (3) a new mapping request goes to the primary server; (4) the primary adds a new entry from the specified UID/GID range to its Usermapper DB; (5) it notifies all other secondary servers to initiate a cache update request; (6, 7) the secondary receives the new entry and updates its cache; (8) the secondary replies to the Data Mover's request with the UID/GID mapping. Pre-DART v5.2 and DART v5.2 configurations are shown.]
This slide illustrates the steps to grant access.
Step 1: A client request is received at a Data Mover, with the resolver stub running, without a valid
UID/GID. The resolver then contacts the first Usermapper server configured in the configuration file
with a request for a UID/GID.
Steps 2 and 3: A secondary server is contacted due to its configuration priority over the primary server.
If this secondary server does not have a listing in its cache for the particular user making the request, a
request is made to the primary server for a new UID/GID mapping.
Steps 4 and 5: When the primary server receives a request for a new mapping, an entry from the
specified UID/GID range is added to the database and a notification is issued to all secondary servers
that their cache entries must be updated.
Steps 6 and 7: The secondary server making the request for the new mapping updates its cache with
new information from the primary server upon receipt of the update notification.
Step 8: The secondary server that received the initial request now responds to the requesting Data
Mover with the new mapping information, and the user is granted (or denied) access to the requested
resource.
Note:
DART v5.2 introduces a fundamental upgrade to the Usermapper process. Each Data Mover now
maintains its own Usermapper database of user mappings. This upgrade assists with connectivity
continuance during Data Mover failover, and access is unaffected by a possible Control Station failure.
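The allocation behavior in steps 4 through 8 can be modeled in miniature: a mapping database that hands out UIDs from a configured range for new SIDs and returns existing entries unchanged. This is a toy model; the class name and UID range are invented for illustration:

```python
import itertools

class UsermapperDB:
    """Toy model: equate Windows SIDs with UNIX UIDs from a configured range."""

    def __init__(self, uid_range: range):
        self._next_uid = itertools.count(uid_range.start)
        self._db = {}                      # SID -> UID, analogous to the Usermapper DB

    def resolve(self, sid: str) -> int:
        # Existing mappings are returned as-is; a new SID gets the next UID in range.
        if sid not in self._db:
            self._db[sid] = next(self._next_uid)
        return self._db[sid]
```

The key property, which the real service also guarantees, is that resolving the same SID twice always yields the same UID, so a Windows user's files keep consistent ownership on the UNIX-like filesystem.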
Virtual Data Movers
- Virtual Data Movers on single physical Data Movers
  - Ability to create multiple virtual CIFS servers on each logical Data Mover
  - Consolidation of multiple servers' file serving functionality onto single Data Movers, as each Virtual Data Mover can maintain isolated CIFS servers with their own root filesystem environment
  - Allows whole Virtual Data Mover environments to be loaded, unloaded, or even replicated between physical Data Movers for ease in Windows environmental management
Prior to DART v5.2, a Data Mover supported one NFS server and multiple CIFS servers, where each
server had the same view of all the resources. Those CIFS servers are not logically isolated; although they are
very useful in consolidating multiple servers into one Data Mover, they do not provide the isolation between
servers needed in some environments, such as when data from disjoint departments is hosted on the same Data Mover.
Now, VDMs support separate isolated CIFS servers, allowing you to place one or multiple CIFS servers into a
VDM, along with their file systems. The servers residing in a VDM store their dynamic configuration
information (such as local groups, shares, security credentials, and audit logs, etc.) in a configuration file system.
A VDM can then be loaded and unloaded, moved from Data Mover to Data Mover, or even replicated to a
remote Data Mover as an autonomous unit. The servers, their file systems, and all of the configuration data that
allows clients to access the file systems are available in one virtual container.
VDMs provide virtual partitioning of the physical resources and independently contain all the information
necessary to support the contained CIFS servers. Having the file systems and the configuration information
contained in a VDM does the following:
1. Enables administrators to separate CIFS servers and give them access to specified shares;
2. Allows replication of the CIFS environment from primary to secondary without impacting server
access,
3. Enables administrators to easily move CIFS servers from one physical Data Mover to another.
A VDM can contain one or more CIFS servers. The only requirement is that you have at least one interface
available for each CIFS server you create. The CIFS servers in each VDM have access only to the file systems
mounted to that VDM, and therefore can only create shares on those file systems mounted to the VDM. This
allows a user to administratively partition or group their file systems and CIFS servers.
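The containment rules above can be sketched as a data model: CIFS servers and file systems live inside a VDM, and the VDM moves between physical Data Movers as a unit. The names and structure here are illustrative, not the DART implementation:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class VirtualDataMover:
    """A VDM bundles CIFS servers with the file systems mounted to it."""
    name: str
    cifs_servers: List[str] = field(default_factory=list)
    filesystems: Set[str] = field(default_factory=set)

    def can_share(self, fs: str) -> bool:
        # CIFS servers in a VDM can only create shares on its mounted file systems.
        return fs in self.filesystems

@dataclass
class PhysicalDataMover:
    name: str
    vdms: List[VirtualDataMover] = field(default_factory=list)

def move_vdm(vdm: VirtualDataMover,
             src: PhysicalDataMover,
             dst: PhysicalDataMover) -> None:
    """Unload a VDM from one physical Data Mover and load it on another, as a unit."""
    src.vdms.remove(vdm)
    dst.vdms.append(vdm)
```

Because the VDM carries its servers, file systems, and configuration together, moving or replicating it does not require reconfiguring the contained CIFS servers.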
Additional Tools: MMC Snap-ins
- UNIX User Management
  - Active Directory migration tool
  - MMC plug-in extension for Active Directory users and computers
  - Celerra Management tool snap-in (MMC Console)
- Virus Checker Management
  - Celerra Management tool (MMC Console)
- Home Directory snap-in
  - Allows multiple points of entry to a single share
- Data Mover security snap-in
  - Manage user rights and auditing
Celerra offers a number of Windows management tools with the Windows look and feel. For example, Celerra shares
and quotas can be managed by the standard Microsoft Management Console (MMC).
The tools include:
- The Celerra Management Tool (MMC Console): Snap-in extension for DART Virus Checker Management, which
manages parameters for the DART Virus Checker.
- The Active Directory (AD) Migration tool: Migrates the Windows/UNIX user and group mappings to Active
Directory. The matching users/groups are displayed in a property page with a separate sheet for users and groups. The
administrator selects the users/groups that should be migrated and de-selects those that should not be migrated or
should be removed from Active Directory.
- The Microsoft Management Console (MMC): Snap-in extension for AD users and computers. This adds a property
page to the user's property sheet to specify UID (user ID)/GID (group ID)/Comment, and adds a property page to the
group's property sheet to specify GID/Comment. You can only manage users and groups of the local tree.
- The Celerra Management Tool (MMC Console): Snap-in extension for DART UNIX User Management, which displays
Windows users/groups mapped to UNIX attributes. It also displays all domains that are known to the local
domain (local tree, trusted domains).
- The Home Directories capability in the Celerra allows a customer to set up multiple points of entry to a single
Share/Export, avoiding the need to share out many hundreds of points of entry to a filesystem, one for each individual
user storing a home directory. The MMC snap-in provides a simple and familiar management interface for Windows
administrators for this capability.
- The Data Mover Security Settings snap-in provides a standard Windows interface for managing user rights
assignments, as well as the settings for which statistics Celerra should audit, based on the NT V4-style auditing
policies.
Celerra Family Software
y Describe some network high availability features
incorporated into the EMC NAS solution
Celerra family high availability features are reviewed in this section.
NS Series Networking
y Network interfaces
Ethernet
Gigabit Ethernet
y Network protocols
TCP/IP, UDP/IP
CIFS, NFS V2, V3 and V4
FTP, TFTP, and SNMP
NDMP V2, V3, and V4
NTP, SNTP
iSCSI target
y Feature support
Link aggregation
FailSafe Networking
Ethernet Trunking
Virtual LAN
[Diagram: NS Series protocol stack - Ethernet / Gigabit Ethernet carrying FSN, VLAN, TCP, SNMP, NDMP, CIFS, FTP, iSCSI, and NFS]
The NS Series implements industry-standard networking protocols:
y The network ports supported by the NS700, NS704, NS700G, and NS704G consist of 10/100/1000
Ethernet (Copper) and Optical Gigabit Ethernet. All other NS Series platforms support Copper
10/100/1000 Ethernet only.
y Network protocols supported include Transmission Control Protocol over Internet Protocol (TCP/IP) and User Datagram Protocol over IP (UDP/IP).
y File-sharing protocols are CIFS (Common Internet File System), used by Windows; and NFS
(Network File System) V2, V3, and V4, used by UNIX and Linux.
y File transfers are supported with the FTP and TFTP protocols. NDMP V2, V3, and V4 are
supported for LAN-free backups.
y Network management can be accomplished with Simple Network Management Protocol (SNMP).
y NTP and SNTP protocols allow Data Movers to synchronize with a known time source. SNTP is more appropriate for LAN environments.
y The NS Series supports iSCSI Target for block access.
y VLAN Tagging allows a single Gigabit port to service multiple logical LANs (virtual LANs).
y FailSafe Networking extends the failover functionality to networking ports.
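The VLAN Tagging mentioned above works by inserting a four-byte 802.1Q tag into each Ethernet frame, which is how one physical Gigabit port can carry traffic for multiple logical LANs. The sketch below shows the tag layout; it is a generic illustration of 802.1Q framing, not DART's internal code.

```python
import struct

def add_vlan_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag (TPID 0x8100 + priority/VID) after the
    destination and source MAC addresses (the first 12 bytes)."""
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | vlan_id          # PCP(3) | DEI(1)=0 | VID(12)
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]

# A minimal frame: dst MAC, src MAC, EtherType IPv4, payload.
frame = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
tagged = add_vlan_tag(frame, vlan_id=100)
print(tagged[12:14].hex())   # 8100  (the 802.1Q TPID)
```

A switch port receiving such frames demultiplexes them by VID, so each virtual LAN stays isolated on the shared physical link.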
Network FailSafe Device
y Network outages, due to environmental failure, are more common than Data Mover failures
y Network FailSafe Device
DART OS mechanism to minimize data access disruption due to these failures
Logical device is created using physical ports or other logical ports combined together to create redundant groups of ports
Logically grouped Data Mover network ports monitor network traffic on the ports
Active FailSafe Device port senses traffic disruption
Standby (non-active) port assumes the IP Address and Media Access Control address in a very short space of time, thus reducing data access disruption
Having discussed the maintenance of data access via redundant Data Movers, we now discuss the same concept utilizing network port mechanisms. First, let's look at the Network FailSafe Device.
Network outages due to environmental failures are more common than Data Mover failures.
To minimize data access disruption due to these failures, the DART OS has a mechanism that is environment agnostic: the Network FailSafe Device.
This is a mechanism by which the network ports of a Data Mover may be logically grouped together into a partnership that monitors network traffic on the ports. If the currently active port senses a disruption of traffic, the standby (non-active) port assumes the active role in a very short space of time, thus reducing data access disruption.
The way this works is that a logical device is created, using physical ports or other logical ports, combined together to create redundant groups of ports.
In normal operation, the active port carries all network traffic. The standby (non-active) port remains passive until a failure is detected. Once a failure has been detected by the FailSafe Device, this port assumes the network identity of the active port, including IP Address and Media Access Control address.
Having assumed the failed port's identity, the standby port now continues the network traffic. Network disruption due to this changeover is minimal and may only be noticed in a highly transaction-oriented NAS implementation, or in CIFS environments due to the connection-oriented nature of the protocol.
There are several benefits achieved by configuring the Network FailSafe Device: 1. Configuration is handled transparently to client access; 2. The ports that make up the FailSafe device need not be of the same type; 3. Rapid recovery from a detected failure; 4. It can be combined with logical Aggregated Port devices to provide even higher levels of redundancy.
Although the ports that make up the FailSafe device need not be of the same type, care must be taken to ensure that, once failover has occurred, client-expected response times remain relatively the same and data access paths are maintained.
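The takeover behavior described above can be modeled in a few lines: the device's IP and MAC identity stays fixed while the port answering for it changes. This is a toy model with made-up port names (cge0, cge1), not DART's actual failover code.

```python
# Toy model of the FailSafe Device: on link failure, the standby port
# assumes the active port's IP and MAC, so clients see the same identity.

class FailSafeDevice:
    def __init__(self, ip, mac, ports):
        self.ip, self.mac = ip, mac          # identity follows the active port
        self.ports = list(ports)             # [active, standby, ...]
        self.active = 0

    def link_down(self, port_name):
        """If the active port fails, promote the next standby port."""
        if self.ports[self.active] == port_name and self.active + 1 < len(self.ports):
            self.active += 1                 # standby takes over IP + MAC
        return self.ports[self.active]

fsn = FailSafeDevice("10.0.0.5", "00:60:16:aa:bb:cc", ["cge0", "cge1"])
print(fsn.link_down("cge0"))   # cge1 now answers for 10.0.0.5
```

Because the IP and MAC travel with the role, clients need no reconfiguration; at most, connection-oriented protocols such as CIFS notice a brief pause.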
Link Aggregation - High Availability
y Link aggregation
Combining of two or more data channels into a single data channel for high availability
Two methods: IEEE 802.3ad LACP and CISCO FastEtherChannel
y IEEE 802.3ad LACP
Combining links for improved availability
If one port fails, other ports take over
Industry standard IEEE 802.3ad
Combines 2 to 12 Ethernet ports into a single virtual link
Deterministic behavior
Does not increase single client throughput
[Diagram: Celerra connected to an industry-standard switch by an aggregated link]
Having discussed the network FailSafe device, the next methodologies we look at are the two Link
Aggregation methodologies. Link aggregation is the combining of two or more data channels into a
single data channel. There are two methodologies that are supported by EMC NAS devices. They are
IEEE 802.3ad Link Aggregation Control Protocol and CISCO FastEtherChannel using Port Aggregation Protocol (PAgP).
The purpose for combining data channels in the EMC implementation is to achieve redundancy and
fault tolerance of network connectivity. It is commonly assumed that link aggregation provides a single
client with a data channel bandwidth equal to the sum of the bandwidth of individual member
channels. This is not, in fact, the case due to the methodology of channel utilization and, it may only be
achieved with very special considerations to the client environment. The overall channel bandwidth is
increased, but the client only receives, under normal working conditions, the bandwidth equal to one of
the component channels.
To implement Link Aggregation, the network switches must support the IEEE 802.3ad standard. It is a
technique for combining several links together to enhance availability of network access and applies to
a single Data Mover and not across Data Movers. The current implementation focuses on availability,
therefore check the NAS support matrix. Only full duplex operation is currently supported. Always
check the NAS Interoperability Matrix for supported features at the following:
http://www.emc.com/horizontal/interoperability
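The reason a single client does not see the aggregate bandwidth follows from how frames are distributed: each conversation is hashed to exactly one member link. The sketch below is schematic, using a CRC hash over illustrative MAC-address strings; it is not the actual LACP or DART frame-distribution algorithm.

```python
# Why aggregation raises availability but not single-client throughput:
# a hash of the (source, destination) pair pins every frame of one
# conversation to the same member link. Schematic only.
import zlib

def select_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Deterministically pick one member link for a conversation."""
    return zlib.crc32(f"{src_mac}-{dst_mac}".encode()) % n_links

links = 4

# One client: every frame of the conversation uses the same member link.
choices = {select_link("client-A", "celerra", links) for _ in range(100)}
print(len(choices))  # 1

# Many clients: conversations spread statistically across the links.
spread = {select_link(f"client-{i}", "celerra", links) for i in range(50)}
print(sorted(spread))
```

This also explains the "deterministic behavior" bullet: the same client pair always lands on the same link, which keeps frame ordering intact within a conversation.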
[Diagram: Celerra connected to a CISCO switch by an EtherChannel group]
Link Aggregation - High Availability (Cont.)
y CISCO EtherChannel
Port grouping for improved availability
Combines 2, 4, or 8 Ethernet ports into a single virtual device
Inter-operates with trunking-capable switches
High availability: if one port fails, other ports take over
Does not increase single client throughput
Ethernet Trunking (EtherChannel) increases availability. It provides statistical load sharing by
connecting different clients to different ports. It does not increase single-client throughput. Different
clients get allocated to different ports. With only one client, the client accesses Celerra via the same
port for every access. This DART OS feature interoperates with EtherChannel-capable Cisco switches. EtherChannel is Cisco proprietary.
-
7/29/2019 NAS Foundation
54/78
Copyright 2006 EMC Corporation. Do not Copy - All Rights Reserved.
NAS Foundations - 54
2006 EMC Corporation. All rights reserved. NAS Foundations - 54
Network Redundancy - High Availability
y An example of FSN and Port aggregation co-operation
This example shows a fail-safe network device that consists of a FastEtherChannel comprising the four
ports of an Ethernet NIC, and one Gigabit Ethernet port. The FastEtherChannel could be the primary
device but, per recommended practices, the ports of the Fail Safe Network (FSN) would not be marked
primary or secondary. FSN provides the ability to configure a standby network port for a primary port, and the two or more ports can be connected to different switches. The secondary port remains passive
until the primary port link status is broken, then the secondary port takes over operation.
An FSN device is a virtual device that combines two virtual ports. A virtual port can consist of a single
physical link or an aggregation of links (EtherChannel, LACP). The port types or number need not be
the same when creating a failsafe device group. For example, a quad Ethernet card can be first
trunked and then coupled with a single Gigabit Ethernet port. In this case, all four ports in the trunk
would need to fail before FSN would implement failover to the Gigabit port. Thus, Celerra could
tolerate four network failures before losing the connection.
Note:
An active primary port/active standby port configuration on the Data Mover is not recommended
practice.
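The nesting described in this example can be reduced to a small rule: the FSN stays on the trunk leg while any trunk member is up, and only then fails over to the Gigabit leg. The function below is a sketch of that rule with made-up leg names, not a real configuration interface.

```python
# Sketch of an FSN whose primary leg is a four-port trunk and whose
# standby leg is a single Gigabit port: the trunk survives until its
# last member fails, so four failures are tolerated before failover.

def fsn_active_leg(trunk_ports_up: int, gige_up: bool) -> str:
    """Return which leg of the FSN device carries traffic."""
    if trunk_ports_up > 0:
        return "trunk"            # any surviving trunk member keeps the leg up
    return "gige" if gige_up else "down"

ports_up = 4
for _ in range(4):                # fail all four trunk members, one by one
    ports_up -= 1
print(fsn_active_leg(ports_up, gige_up=True))   # gige
```

This illustrates why combining aggregation with FSN gives higher redundancy than either feature alone: failures must exhaust an entire leg before the standby is consumed.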
Celerra Family Business Continuity
y Describe EMC NAS disk-based replication and recovery
solutions
Having integrated the Celerra into the environment, data replication and recovery solutions that
augment the environment are reviewed next.
Disk-Based Replication and Recovery Solutions
FUNCTIONALITY / RECOVERY TIME (platforms: Celerra NS, NS / CLARiiON, NS & NSX / Symmetrix, Celerra / FC4700)
y File Restoration: Celerra SnapSure (recovery time: hours)
y File-based Replication: TimeFinder/FS, Celerra Replicator, EMC OnCourse (recovery time: minutes)
y Synchronous Disaster Recovery: SRDF (recovery time: seconds)
High-end environments require non-stop access to the information pool. From a practical perspective, not all data carries the same value. The following illustrates that EMC Celerra provides a range of disk-based replication tools for each recovery time requirement.
File restoration: This is the information archived to disk and typically saved to tape. Here we measure recovery in hours. Celerra SnapSure enables local point-in-time replication for file undeletes and backups.
File-based replication: This information is recoverable in time frames measured in minutes. Information is mirrored to disk by TimeFinder, and the copy is made accessible with TimeFinder/FS. The Celerra Replicator creates replicas of production filesystems locally or at a remote site. Recovery time from the secondary site depends on the bandwidth of the IP connection between the two sites. EMC OnCourse provides secure, policy-based file transfers.
The Replicator feature supports data recovery for CIFS and NFS by allowing the secondary filesystem (SFS) to be manually switched to read/write mode after the Replicator session has been stopped, manually or due to a destructive event. Note: There is no re-synch or failback capability.
Synchronous disaster recovery: This is the information requiring disaster recovery with no loss of transactions. This strategy allows customers to have data recovery in seconds. SRDF, in synchronous mode, facilitates real-time remote mirroring in campus environments (up to 60 km).
File restoration and file-based replication (Celerra Replicator, EMC OnCourse) are available with Celerra / CLARiiON. The entire suite of file restoration, file-based replication, and synchronous disaster recovery is available with Celerra / Symmetrix.
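The tiering above amounts to matching a required recovery time objective (RTO) against each tool's recovery scale. The helper below restates that mapping; the second-based thresholds are illustrative stand-ins for "hours", "minutes", and "seconds", not figures from EMC documentation.

```python
# Rough decision helper restating the replication tiers: pick the least
# aggressive tier whose recovery scale still meets the required RTO.
# Threshold values (86400 s, 3600 s) are illustrative assumptions.

def pick_tier(rto_seconds: int) -> str:
    if rto_seconds >= 86400:      # a day or more to recover is acceptable
        return "Celerra SnapSure (file restoration, hours)"
    if rto_seconds >= 3600:       # an hour or more is acceptable
        return "Celerra Replicator / TimeFinder/FS (minutes)"
    return "SRDF (synchronous disaster recovery, seconds)"

print(pick_tier(30))       # transaction data: SRDF tier
print(pick_tier(7200))     # file-based replication tier
print(pick_tier(172800))   # archival data: SnapSure tier
```

The point mirrored here is the one the notes make: since not all data carries the same value, the cheapest tier that meets the RTO is usually the right choice.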
Disaster Recovery
y Describe EMC NAS disaster recovery methodology using
Celerra SRDF (Symmetrix Remote Data Facility)
Celerra disaster recovery, when integrated with the Symmetrix, utilizes a very synergistic combination
of Celerra and the Symmetrix functionality.
Celerra SRDF Disaster Recovery
y Increases data availability by combining the high availability of the Celerra family with the Symmetrix Remote Data Facility
y Celerra synchronous disaster recovery solution
Allows an administrator to configure remote standby Data Movers waiting to assume primary roles in the event of a disaster occurring at the primary data site
SRDF allows administrator to achieve a remote synchronous copy of production filesystems at a remote location
Real-time, logically synchronized and consistent copies of selected volumes
Uni-directional and bi-directional support
Resilient against drive, link, and server failures
No lost I/Os in the event of a disaster
Independent of CPU, operating system, application, or database
Simplifies disaster recovery switchover and back
[Diagram: two Celerra systems connected uni- or bi-directionally over a campus-distance (60 km) network]
In the NAS environment, data availability is one of the key aspects for implementation determination. By combining the high availability of the Celerra family with the Symmetrix Remote Data Facility, data availability increases exponentially. What the SRDF feature allows an administrator to achieve is a remote synchronous copy of production filesystems at a remote location. However, as this entails the creation of Symmetrix-specific R1 and R2 data volumes, this functionality is currently restricted to Celerra / Symmetrix implementations only.
This feature allows an administrator to configure remote standby Data Movers waiting to assume primary roles in the event of a disaster occurring at the primary data site. Due to data latency issues, this solution is restricted to a campus distance of separation between the two data sites (60 network km).
The SRDF solution for Celerra can leverage an existing SRDF transport infrastructure to support the full range of supported SAN (storage area network) and DAS (direct-attached storage) connected general purpose server platforms. The Celerra disaster recovery solution maintains continuously available filesystems, even with an unavailable or non-functioning Celerra. Symmetrix technology connects a local and remote Celerra over a distance of up to 40 miles (66 km) via an ESCON or Fibre Channel SRDF connection. After establishing the connection and properly configuring the Celerra, users gain continued access to filesystems in the event that the local Celerra and/or the Symmetrix becomes unavailable. The Celerra systems communicate over the network to ensure the primary and secondary Data Movers are synchronized with respect to metadata, while the physical data is transported over the SRDF link. In order to ensure an up-to-date and consistent copy of the filesystems on the remote Celerra, the synchronous mode of SRDF operation is currently the only supported SRDF operational mode. Implementation of Celerra disaster recovery software requires modification of the standard Celerra configuration.
SRDF has two modes of operation: active-passive and active-active. Active-passive (uni-directional) SRDF support means that one Celerra provides active Data Mover access while a second (remote) Celerra provides all Data Movers as failover. Active-active (bi-directional) SRDF support means that one Celerra can serve local needs while reserving some of its Data Movers for recovery of a remote Celerra, which reserves some of its Data Movers for recovery of the first Celerra. In addition, local failover Data Movers can be associated with Data Movers in the primary Symmetrix to ensure that local failover capability is initiated in the unlikely event there is a hardware-related issue with a specific Data Mover.
The mode of operation with SRDF/S is Active-Active.
y With active-active (SRDF/S only) support, one NS Series/NSX Gateway can serve local needs while reserving some of its Data Movers for recovery of a remote NS Series/NSX Gateway, which reserves some of its Data Movers for recovery of the first NS Series/Gateway.
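The "no lost I/Os" property of synchronous SRDF comes from its write path: the host is acknowledged only after the remote copy holds the data. The toy model below illustrates that ordering; the class and dict-based volumes are illustrative, not the actual SRDF protocol.

```python
# Toy model of synchronous remote mirroring: the host write completes
# only after the remote (R2) volume has the data, so every acknowledged
# I/O survives a disaster at the primary site.

class SyncMirror:
    def __init__(self):
        self.r1, self.r2 = {}, {}            # local (R1) and remote (R2) volumes

    def write(self, block: int, data: bytes) -> str:
        self.r1[block] = data                # write lands on the local R1 volume
        self.r2[block] = data                # ...is mirrored over the SRDF link
        return "ack"                         # only then is the host acknowledged

mirror = SyncMirror()
print(mirror.write(7, b"journal-entry"))     # ack
print(mirror.r1[7] == mirror.r2[7])          # True -- both sites are consistent
```

The same ordering is also why distance is capped at campus scale: each write waits on a round trip to the remote site, so link latency adds directly to every acknowledged I/O.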
Da