DATACORE SAN TECH COOKBOOK SERIES
FEBRUARY 8, 2008

Implementing an Auto-Failover & Recovery Business Continuity Solution Using 2 Physical Servers Running Server and Storage Virtualization Software

BC = 2 * (VMware ESX & DataCore VM)

For more information on how to Go Virtual see: New, Portable, Feature-Packed VM Starter SAN – Runs on VMware, Microsoft VS, Oracle VM, SUN VM, Virtual Iron and Citrix XenServer Virtualization Platforms, or Hardware Servers or Blades.

AUTHORS:
TIM WARDEN, Senior Technical Consultant, DataCore Software Corporation, [email protected], (520) 260-8119
SCOT COLMER, Senior Virtualization Consultant, AccessFlow, LLC, [email protected], (408) 768-6069
GARY LAMB, Chief Technology Officer, AccessFlow, LLC, [email protected], (916) 998-9203

DataCore Software Corporation, 6300 N.W. 5th Way, Fort Lauderdale, FL 33309
www.datacore.com




TABLE OF CONTENTS

EXECUTIVE SUMMARY
HIGH AVAILABILITY & BUSINESS CONTINUITY
SOLUTION OVERVIEW
CONFIGURING SERVER HARDWARE
SETTING UP THE ESX SERVERS
SETTING UP VIRTUALCENTER
SETTING UP NETWORKING
INSTALLING THE SANmelody VMs
CONFIGURING THE SANmelody PARTNERS
PROVISIONING SHARED STORAGE TO ESX
CREATING BUSINESS CONTINUITY MIRRORS
CONNECTING THE ESX SERVERS TO THE SAN
MAPPING THE VOLUMES TO THE ESX SERVERS
DISCOVERING & USING THE SHARED STORAGE
MONITORING STORAGE UTILIZATION
TESTING FAILOVER AND FAILBACK
AUTOMATING STARTUP & SHUTDOWN
ADDING VALUE WITH OUR VIRTUAL SAN


EXECUTIVE SUMMARY

Over the last few years, virtualization has become IT’s household word. You can’t have missed it unless, of course, you’re so tuned out you somehow managed to miss the Macarena back in the 90s (my wife, amazingly, had never heard the song, but she does know what virtualization is about). Even people outside the world of IT now drop the “V” word, just as they do “24/7”… which is what this white paper is all about.

Server Virtualization can help just about any size organization reduce IT costs by consolidating their servers. However, if you are a small shop (or you have many small servers spread across a hundred sites), you may find yourself confronted with managing risk vs. cost as you consolidate the servers. Having all your servers virtualized and running on one physical platform will certainly save you money, but exposes your entire IT infrastructure to that single point of failure: the one physical server.

It’s bad enough having a failed voltage regulator take down your Exchange server, but in the virtual world such a physical hardware failure would take out the Exchange server, the SQL servers, the web server, the file server… the whole shop! Clearly, you don’t want to put all your eggs in one basket… you’ll want at least two baskets and the ability to move your eggs about from basket to basket.

Some Server Virtualization solutions offer an HA option, such as VMWare HA. This failover feature is based on clustering two or more ESX hosts around shared disk devices. If an ESX host fails, its VMs automatically restart on any surviving ESX nodes. Sharing disks among multiple ESX servers is also used by the VMotion and DRS options, allowing running virtual machines to be manually or dynamically moved between physical ESX hosts at will, distributing the load. Combining these features protects your virtual machines against failures and allows you to bring physical hosts down for maintenance without disrupting production.

However, the requisite shared disk device typically implies “SAN” or “Storage Area Network”. For many small shops, the cost of implementing a traditional SAN is the barrier to realizing a Server Virtualization project.

In this paper we discuss a solution for implementing a Virtual SAN, running as VMs on a pair of virtual servers and providing highly available storage back to the physical hosts for Business Continuity. In our example we will be using two VMotion-enabled VMWare ESX servers and SANmelody, the SAN Virtualization package from DataCore Software Corporation.

HIGH AVAILABILITY & BUSINESS CONTINUITY

High Availability or H/A refers to systems and components designed to withstand a variety of non-catastrophic local failures. The vendors implement H/A in servers and storage arrays via redundancy: redundant power, cooling, cabling, switches, RAID groups, dual processors, etc. The idea is that a fault of a component or a pulled cable shouldn't stop the show. An alternate path or component can take over without missing a beat. With H/A, users shouldn't notice any disruption in service when such a failure occurs.

The traditional highly available SAN storage array with its dual storage processors is in and of itself a Single Point of Failure. Although most of the components are redundant, there is still a single backplane, a single enclosure, and the possibility of individual drive failures potentially taking down the whole SAN (i.e. the “LIP storm” known to occur in Fibre Channel SANs).

Business Continuity / Continuance or BC takes High Availability one step further. It is the idea of adding an additional layer of redundancy to the architecture so that it can withstand the failure of entire systems without stopping production. Often when storage vendors talk about Business Continuity, they are implying the use of Synchronous Mirroring between two of their high-end storage arrays, perhaps separated over some short distance, such as between two buildings on a campus.

The solution we define below is based on this concept of Business Continuity. The servers and virtual SAN storage will be fully redundant and they will offer both failover and fail-back functionality.


SOLUTION OVERVIEW

In a nutshell, the solution consists of setting up two VMWare ESX servers, both licensed for VMotion and HA, and optionally for DRS. Each server should have enough local storage (internal drives or external drives connected via RAID controller and JBOD shelves) to satisfy the capacity requirements for the VMs.

Virtual SAN VMs Serve Mirrored VMFS Volumes Back To Their Hosts

On each ESX host, we use the local VMFS file system to create a Windows VM, installing Windows 2003 there. We then install the SANmelody virtual SAN storage array software on that VM.

Each ESX host then dedicates its remaining local storage to its SANmelody VM. Each SANmelody VM uses the local storage to create volumes which are then mirrored to the partner SANmelody VM’s volumes on the adjacent ESX server.


The ESX servers themselves each create iSCSI connections to both SANmelody virtual storage arrays. The mirrored virtual volumes are then mapped to both ESX servers over iSCSI and used to create VMotion-enabled VMFS file systems.

The result? Redundant virtual SAN storage arrays serving mirrored volumes over redundant paths to redundant ESX servers… In two words: “Business Continuity”.

CONFIGURING SERVER HARDWARE

In this configuration, SANmelody makes no special demands on the server hardware; you can build your ESX servers using your vendor of preference, provided the hardware chosen meets VMWare ESX requirements.

At a minimum, we should configure a 2 x dual-core processor system, for a total of four cores, one of which will be dedicated to the SANmelody VM.

RAM should be sized according to ESX recommendations based on the number and nature of VMs you will be running, keeping in mind that SANmelody will use RAM as storage processor cache.

The ESX hardware should include enough local storage to satisfy our capacity requirements, either via internal drives or an external storage enclosure tethered to the server via a RAID controller. The choice of drive technologies (SATA, SAS, 10K, 15K, LFF or SFF, etc.) is up to you. For the purposes of this example, we choose a server with 6 or 8 populated drive slots.

We use the internal RAID controller of the server to create two RAID groups. A two-disk RAID-1 group will be used for the ESX installation, and the remaining drives will be placed into RAID-5 and used as our SAN storage.

As for networking, at a minimum we configure four Gig-E NICs. Obviously, we will need at least one or more per server for communicating with the VMs (the VM Network). We will need at least one NIC for implementing the iSCSI SAN that the two ESX servers will use to access the two SANmelody servers. We will also want an iSCSI mirror channel dedicated to the two SANmelody VMs for implementing synchronous data mirroring – we can use a crossover cable between the physical NICs for this channel. Finally, we will want a NIC for use by the Service Console and VMotion.

There are different schools of thought on how best to configure the LAN and VMotion. Some prefer to use a separate NIC for VMotion as VMWare recommends; others prefer to team two or more NICs on the same vSwitch and share the aggregate bandwidth for both the VM LAN Network and VMotion / DRS. Gig-E NICs are relatively cheap and most servers come with 2 onboard. Adding additional ports won’t break the bank, but will give us the performance and resilience we need.

It is interesting to note that any iSCSI reads and writes between local VMs and their local SANmelody servers will be over a virtual network at memory or “pipe” speeds – significantly faster than a Gig-E connection. Of course, the mirrored writes will use the physical NICs between the two ESX servers, as will any reads from VMs whose primary storage path is to a SANmelody VM on the partner ESX host.

SETTING UP THE ESX SERVERS

Before beginning the installation, we should plan how the servers will integrate into our existing infrastructure. For instance, we will need static addresses for the ESX servers as well as the VirtualCenter License Server. Do we have a particular naming scheme for our servers? How do we assign static IP addresses?

We choose a Class C 192.168.1.xx network for the LAN and management console, and a Class C 192.168.3.xx network for the iSCSI SAN. As for our iSCSI mirror channel, only our two SANmelody VMs will need access to the network. We will assign 10.0.0.x addresses to the mirror ports on the two SANmelody servers.

The installation of ESX is relatively straightforward and uses a graphical user interface. We insert the installer CD and follow the instructions.

Early in the installation, we will be prompted for Partitioning Options. We opt for the default partitioning scheme and review the installer’s recommendation in the next screen. Here we will want to be assured the swap space is adequate and that the default vmfs3 file system is large enough to comfortably hold the local SANmelody VM.

Any additional space we provide for this local vmfs3 file system can be used for installing other VMs, but we need to keep in mind that this vmfs3 file system is not shared storage and so any VMs installed there will not be candidates for VMotion, HA or DRS.

We then set up networking, choosing the hardware controller that we will use as a console, setting its static address, entering the DNS server addresses, etc.

In the ensuing screens we select the time zone, enter the root password and confirm the installation. That’s it. Once the installation has completed, we will be prompted to reboot the machine.

Once the ESX host is up and running, the console will advise us that we can connect to this ESX host via the console IP address we assigned.

SETTING UP VIRTUALCENTER

Virtual Center is the centralized management console for our ESX hosts. The product is installed on a Windows server and has an associated VMWare License Server, which our ESX hosts will access to check out their licenses. The product also installs the Virtual Infrastructure Client, a Windows GUI-based administration utility for managing the ESX environment. You can use the client to manage individual hosts (connecting to their name or IP address), or to manage the entire farm of ESX hosts managed by VirtualCenter.

During the installation, we will be prompted to provide our VMWare license file. If you have received more than one file (one for the hosts, a second for VirtualCenter), you will need to combine the key contents of the two files, taking care not to modify the keys in any way.

Once installed, we will need to configure our ESX servers to use the License Server to check out their licenses. We use the Virtual Infrastructure Client to connect to each ESX host by its IP address, entering user root and password. We navigate to the Configuration tab and select Licensed Features. There we set the license source to be our VirtualCenter server, and configure the license type to be a standard ESX server.

We are now ready to build our VirtualCenter cluster. For this, we will use the Virtual Infrastructure Client, logging onto the VirtualCenter server using an authorized account on the corresponding Windows host. In our lab, we are logging onto the VirtualCenter machine locally, so we use “localhost” as the machine address and enter user Administrator and the password.

On successfully logging in, we are presented with a “tabula rasa” VirtualCenter environment. We begin by creating a new Datacenter which we will name “Sabino”, after the beautiful canyon in the Santa Catalina mountains of Tucson, Arizona, where one of this paper’s authors is based.

We then create a new Cluster on Sabino, naming it TUSESX, in accordance with the server naming scheme we are using. We select the “VMware HA” option, and optionally the “VMware DRS” option.


At last, we add our two ESX servers to the cluster.

Useful Tip: VMWare HA is dependent on DNS. If you don’t have a local DNS server configured, HA can’t resolve the hostnames. The solution is simple: for each ESX host, log into the console as root and edit the /etc/hosts file, adding the short and fully qualified names of both our ESX hosts, as well as their corresponding IP addresses. Each hosts file will already contain the fully qualified name of its local host:

192.168.1.4 tusesx1.mydomain.com

After editing, our hosts file should look roughly like this:

192.168.1.4 tusesx1 tusesx1.mydomain.com

192.168.1.5 tusesx2 tusesx2.mydomain.com
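The edits above can also be scripted from the service console. The sketch below stages the two entries in a scratch file first, to be reviewed and then merged into /etc/hosts as root; the names and addresses are this paper's example values, so substitute your own naming scheme and subnets.

```shell
# Stage the host entries for both ESX servers in a scratch file.
# Names and addresses below are the example values used in this paper.
cat > hosts.additions <<'EOF'
192.168.1.4   tusesx1 tusesx1.mydomain.com
192.168.1.5   tusesx2 tusesx2.mydomain.com
EOF

# Review, then (as root on each ESX console) append to the real file:
#   cat hosts.additions >> /etc/hosts
cat hosts.additions
```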


SETTING UP NETWORKING

On each ESX host, we configure networking to enable VMotion and iSCSI. In the screenshot below, we have set up three vSwitches: one for our VM Network with VMotion, another for our iSCSI SANmelody Mirror Channel, and a third for our iSCSI SAN network.
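The same vSwitch layout can also be built from the ESX 3.x service console with the esxcfg utilities. The fragment below is a sketch only, not a command set mandated by this paper: the vSwitch numbers, port group names, vmnic assignments and the VMkernel address are illustrative assumptions to be adapted to your hardware and addressing plan.

```
# vSwitch1: SANmelody mirror channel (crossover cable between the hosts)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A "Mirror Channel" vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1           # illustrative uplink choice

# vSwitch2: iSCSI SAN, with a VMkernel port for the software initiator
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -A "iSCSI SAN" vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2           # illustrative uplink choice
esxcfg-vmknic -a -i 192.168.3.4 -n 255.255.255.0 "iSCSI SAN"
```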


INSTALLING THE SANmelody VMs

We will create a SANmelody VM on each of the two ESX servers, naming them according to our naming convention: TUSSAN1 and TUSSAN2.

In the New Virtual Machine Wizard, we choose a Microsoft Windows Server 2003, Standard Edition guest OS. We choose a single processor VM with an appropriate amount of memory, keeping in mind that SANmelody will use up to 80% of the VM’s RAM as storage processor cache. We then choose networking for the machine, configuring 2 virtual NICs, one as a front end iSCSI target using the vSwitch designated for the iSCSI SAN, the other as an iSCSI mirror channel using the vSwitch designated for the mirror channel.


On each ESX server we assign the RAID 5 LUN to the corresponding SANmelody VM.

Mapping a volume as a Raw Device

Useful Tip: We have used RDMs in the lab, as indicated in the screenshot. Certain RAID controllers will not generate a serial number on their LUNs, and so ESX will not consider those LUNs as candidates for use as RDMs – the RDM option will be grayed out. In such cases, you should simply create a VMFS on the LUN and create a full “new virtual disk” for the SANmelody VM on that VMFS. As it turns out in our lab tests with ESX 3.5, the overhead was negligible.


The Local RAID 5 RDM Disk Discovered in the SANmelody VM

At this point, the VM is ready and we can install the Windows 2003 OS and the SANmelody software. The specifics of the Windows installation are outside the scope of this document; suffice it to say we perform a basic installation, applying the latest service packs. We install the latest version of Microsoft iSCSI Initiator, which SANmelody will use as a driver to initiate mirror write requests across the synchronous mirror channel to the partner SANmelody server’s mirror target port.

We configure each SANmelody server’s NICs with static addresses.


We confirm we have connectivity between the two SANmelody VMs across their mirror channel, and that both ESX servers can access the target channels of both SANmelody servers. We open any ports on any firewalls (including the Windows soft firewall) as necessary.

Verifying the Partners Can Connect Over The Mirror Channel
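A quick way to make these checks repeatable from the ESX side is a pair of VMkernel pings from each service console. The fragment below is a sketch: the target addresses follow this paper's 192.168.3.x example scheme, but the specific host numbers are assumptions for illustration.

```
# From each ESX service console: confirm both SANmelody iSCSI targets
# answer on the VMkernel iSCSI SAN network (addresses are assumed).
vmkping -c 2 192.168.3.5    # TUSSAN1 iSCSI target port
vmkping -c 2 192.168.3.6    # TUSSAN2 iSCSI target port
```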


Our Windows VMs are ready for installing the SANmelody software and for establishing synchronous mirroring between them. We run the SANmelody installation package on TUSSAN1 and follow the wizard screens.

SANmelody Installer Splash Screen

The installation wizard completes and we reboot the virtual machine. Upon reboot, SANmelody installs its iSCSI target drivers on any available IP stacks and the VM becomes a Virtual SAN.

We repeat the procedure on TUSSAN2, effectively creating two independent virtual SAN storage controllers.


SANmelody is managed via a set of MMC Snap-Ins, as shown in the screenshot below. We start the SANmelody service on each of the two SANmelody VMs.

Our next step is to configure the two autonomous servers as an “Auto-Failover” partnership to implement Business Continuity.


CONFIGURING THE SANmelody PARTNERS

When two SANmelody servers are configured in a partnership, they share their configuration information and function much like two storage processors in a traditional dual controller SAN. Both are active storage processors and both are able to serve LUNs. They are also both able to mirror their LUNs to volumes on the partner server and in turn serve as a failover for the partner’s mirrored LUNs.

To create the partnership, we set one of the SANmelody servers into a “Listen” mode, ready to accept a partnership proposal. From the other SANmelody server, we “Add Partner”, specifying the name of the partner SANmelody VM.

Upon clicking “OK” to create the partnership, the listening SANmelody VM will drop its configuration and accept the configuration of the proposing partner.


The partnership established, we need to connect up the iSCSI mirror channel that will be used for synchronous mirroring of volumes. We connect each SAN VM’s iSCSI initiator to the partner SAN VM’s mirror target.

Finally, before provisioning storage or attaching storage clients, we will want to configure our SANmelody target ports for use with the ESX servers. By default, the ports are set to remain enabled even if the SANmelody server is stopped. With ESX, we need to make sure the ports disable whenever we stop our SANmelody server. On each SANmelody server, in the iSCSI Manager snap-in we right-click over the channel that we will use as our iSCSI SAN target, selecting “Properties” from the contextual menu. In the ensuing dialog, we select the “Advanced” tab and set the Disable Port When Stopped option to “Yes”, as in the screenshot.

The SANmelody setup and partnership is complete and we are now ready to use our virtual Business Continuity SAN solution.


PROVISIONING SHARED STORAGE TO ESX

SANmelody has a few ways to turn “backend” storage into LUNs that can be presented on the “front-end” for use by SAN clients. In our example, backend storage refers to each SANmelody server’s Disk 1, the 876GB volume based on the RDM of each ESX host’s internal drives in RAID 5.

The most common way to use the backend storage is to add it to a Thin Provisioned Storage Pool. Using Thin Provisioned Pools simplifies the creation of volumes, facilitates “tiering” and growing our storage and also gives us the possibility of “over-subscribing” – provisioning more storage than we currently have. Think of it as a “storage credit card”.

On each SANmelody server, we create a Storage Pool and add our 876GB raw disk to it. For the sake of example, we’ll name the pool S1-SAS-15K-R5, indicating the pool is on TUSSAN1 and is comprised of 15K RPM SAS drives in a RAID 5 configuration. Should we decide to grow our SAN, we can later add additional raw storage to these existing pools, or we can create new pools for, say, RAID 10 SAS or RAID 5 SATA.

We need to decide how we will allocate the storage to the ESX hosts. How many volumes? We really only need one shared VMFS to implement the Business Continuity solution. However, we have two active storage controllers, and we know that the ESX servers don’t perform active/active multipathing, so in the interest of efficiency we decide to have each SANmelody server present a volume to both ESX hosts, giving us two VMFS volumes over which we will spread our VMs. VMFS-V1 will be presented primarily from TUSSAN1, and VMFS-V2 will be presented primarily from TUSSAN2. The two virtual volumes will be mirrored between the two SANmelody servers to assure Auto-Failover should one of our ESX hosts or VMs fail.


Creating new volumes from a Storage Pool is as simple as a right-click over a pool to select from a contextual menu. You can even create multiple volumes at once – they’re thin provisioned so you don’t have to worry about banal things like finding a contiguous block large enough to hold a new partition. Storage pools really make provisioning storage trivial.

On each SANmelody server we create two volumes from the pool.

Each SANmelody node now has an inventory of two volumes from their pools on the “backend”. We will use those volumes to create the virtual volumes (also known as “vvols”) that will be presented on the “front-end” to our ESX hosts.

We use the “Virtual Volumes” snap-in to select from the SANmelody volume inventory to create our virtual volumes.

On each SANmelody node, one volume will be used as the primary vvol presented to the ESX servers by that SANmelody node. The second volume will serve as a secondary mirror half of the partner’s primary for the vvol.


Clicking an icon or selecting from a contextual menu brings up the “New Virtual Volume” dialog. We arbitrarily select “Volume1” from TUSSAN1 to build our virtual volume named “VMFS-V1”. We then arbitrarily select “Volume2” from TUSSAN2 to become our “VMFS-V2” virtual volume.

The vvols begin their lives as “linear” type virtual volumes – there is a one-to-one correspondence between the vvol and the volume. In the next section we will turn them into “multipath mirror” type vvols by joining them to the two remaining volumes from their respective partner SANmelody nodes.

By default, Virtual Volumes are set to the full size of the volume they are based on. Thin Provisioned volumes are always set to a maximum 2 TB. We can choose to leave the vvols at 2 TB, in which case we will be largely over-subscribed: 4 TB of mirrored storage, but only 876 GB of physical storage on each host. There’s nothing wrong with that, provided you know how to manage a “line of credit”.

We can, of course, choose not to over-subscribe and resize the two vvols so that their combined capacity is 876 GB or less. For this example, we give ourselves some storage credit, resizing both vvols to 500GB each, for a total of 1TB. Sure, we’re over-subscribed, but we know we can add more physical storage “on the fly” whenever we need to.
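The “line of credit” arithmetic in this example can be sketched in a few lines of shell; the figures (876 GB of usable RAID-5 pool capacity per host, two 500 GB vvols) are the ones used in this paper.

```shell
# Thin-provisioning over-subscription arithmetic for this example.
PHYS_GB=876                     # usable RAID-5 pool capacity per host
VVOL1_GB=500                    # VMFS-V1, resized from the 2 TB default
VVOL2_GB=500                    # VMFS-V2, resized from the 2 TB default
PROVISIONED_GB=$((VVOL1_GB + VVOL2_GB))
echo "provisioned=${PROVISIONED_GB}GB physical=${PHYS_GB}GB"
if [ "$PROVISIONED_GB" -gt "$PHYS_GB" ]; then
  echo "over-subscribed by $((PROVISIONED_GB - PHYS_GB))GB"
fi
```

Running it prints provisioned=1000GB against physical=876GB, i.e. over-subscribed by 124GB per host, which is the credit we cover later by adding raw storage to the pools.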


CREATING BUSINESS CONTINUITY MIRRORS

As discussed in the previous section, a newly created vvol is a “linear” entity, based on a sole volume. At any time, we can turn the linear vvol into a mirrored vvol by simply adding in a secondary volume. In effect, we are creating a “RAID 1” set, with primary and secondary mirror halves.

The two SANmelody controllers employ mirrored write caching, much like two storage processors in a traditional hardware SAN box. Any “writes” to a vvol must be in both caches before the write is considered “committed” and an acknowledgement returned. The two SANmelody nodes guard against failures and de-stage cache to disk immediately if either partner fails. This yields the highest level of security against data loss, while promoting excellent performance via write caching.

Creating mirrors in SANmelody is a simple process. You right-click over any linear vvol you want to mirror and select “Set Mirror” from the contextual menu, as shown in the screenshot.

SANmelody presents a dialog allowing you to choose from candidate volumes on the partner SANmelody server. For instance, we created our vvol “VMFS-V2” from “Volume2” on TUSSAN2. If we choose “Set Mirror” on this vvol, SANmelody will present us with a list of candidate volumes on the partner, TUSSAN1.

To be a candidate, the volume must be on the partner server, must not already be virtualized, and must be big enough… in this case, at least 500GB in size.

Page 23 DataCore Software Corporation

6300 N.W. 5th Way Fort Lauderdale, FL 33309

www.datacore.com


As you can see in the screenshot below, SANmelody presents the 2TB Thin Provisioned “Volume2” from TUSSAN1 as the sole candidate.

We select “Volume2” from TUSSAN1 and choose the mirror type – will this be a standard mirror or a multipath mirror?

VMware implements native multipathing in ESX, so we choose the “3rd Party Alternate Path (AP)” mirror type.

CONNECTING THE ESX SERVERS TO THE SAN

In order for our ESX servers to use the virtual SAN and access the virtual volumes we’ve created, the ESX servers will need to connect to our SANmelody servers’ iSCSI targets.

We first configure each ESX server to use iSCSI. You may recall we have already created the ESX iSCSI ports when we configured networking on the ESX servers, adding a VM Kernel and Service Console to our iSCSI Virtual Switch. Now we need to configure the iSCSI Software Adapter. In VirtualCenter, we select the configuration tab for each ESX server, clicking “Storage Adapters”. We select the iSCSI Software Adapter and edit its properties to “enable” the iSCSI drivers.


Once enabled, we connect to the two SANmelody iSCSI target ports. Each ESX host will connect to both SANmelody servers, so that each can multipath to the mirrored volumes.

Each ESX Server Connects to both SANmelody VMs

On each SANmelody VM we can verify that the ESX servers are connected by examining the SANmelody iSCSI Manager.
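On ESX 3.x, the same enable-and-discover sequence can also be performed from the service console. This is a sketch only: the adapter name vmhba40 and the target IP addresses are hypothetical placeholders that will differ in your environment.

```shell
# Enable the ESX software iSCSI initiator (a one-time operation).
esxcfg-swiscsi -e

# Add each SANmelody server's iSCSI target port as a Send Targets
# discovery address on the software iSCSI adapter.
vmkiscsi-tool -D -a 10.1.1.11 vmhba40
vmkiscsi-tool -D -a 10.1.1.12 vmhba40
```

Repeat on the second ESX host so that each host connects to both SANmelody targets.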


MAPPING THE VOLUMES TO THE ESX SERVERS

Every Shared Storage Array (or “SAN” if you prefer) must have a means of identifying the SAN client’s initiator ports in order to map (or “LUN Mask”) volumes to them. In SANmelody, we manage the SAN clients in the “Application Servers” snap-in. We use the interface to organize the client’s initiator ports into logical entities called, not surprisingly, “Application Servers”.

We create two new Application Servers named “TUSESX1” and “TUSESX2” and assign the ESX hosts’ respective initiator ports to their Application Servers.

Adding Initiator Channels to our Application Servers


Useful Tip: If you’re new to SANs and shared storage, you’ll want to pick up a few acronyms. In the world of Fibre Channel, you’ll often hear people speak of WWNs, or “World Wide Names”. These are like the MAC addresses of Fibre Channel endpoints: for instance, each port on a Fibre Channel HBA has a unique WWN that can be used to identify it. In the iSCSI world, the corresponding concept is the “IQN”, or “iSCSI Qualified Name”. The IQN is likewise a unique identifier for an iSCSI endpoint, and it shouldn’t surprise you that it looks somewhat like a fully qualified domain name.

Now that we’ve created our Application Servers, we can map our virtual volumes to their channels. This is known in SAN speak as “LUN masking”. SANmelody controls access to volumes via the mapping: only those channels that have the volume mapped to them can read or write the volume.

We want our two mirrored virtual volumes (VMFS-V1 and VMFS-V2) to be accessible by both ESX servers, so we map each volume to both ESX servers’ IQNs.

Note that ESX requires a shared volume to use the same LUN (or Logical Unit Number, the “address” of the volume) on its mappings to all ESX servers. By default SANmelody will attempt to use the same LUN on each volume mapping. For instance, if we mapped VMFS-V1 first, it will likely use LUN 0, and VMFS-V2 will use LUN 1. Once we’ve placed the volume into production, we should avoid changing the LUN of the volume. It is for this reason that many storage admins will give their virtual volumes a name indicating the LUN, such as VMFS-LUN0 instead of VMFS-V1. You can, of course, change any of the LUNs used for any volume as long as it is unique on a channel.


Mapping the Virtual Volumes to the 2 ESX Hosts

All that remains is to discover the shared SAN volumes on our ESX hosts and create their VMotion-enabled VMFS file systems.

DISCOVERING & USING THE SHARED STORAGE

In VirtualCenter, we return to the configuration view for the TUSESX1 server, selecting “Storage Adapters” and clicking “Rescan…”. Once the rescan has completed, we should see our 2 x 500 GB virtual volumes, each presented twice: once over a path from each SANmelody server.
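If you prefer the service console to the VirtualCenter GUI, the rescan and path check can be done from the command line. This is a sketch; vmhba40 again stands in for whatever name your software iSCSI adapter received.

```shell
# Rescan the software iSCSI adapter for the newly mapped LUNs.
esxcfg-rescan vmhba40

# List the LUNs and the paths ESX sees to each; every mirrored
# vvol should show two paths, one through each SANmelody server.
esxcfg-mpath -l
```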


Discovering our 2 Multipathed Virtual Volumes in ESX

We then repeat the procedure for TUSESX2. Now both ESX hosts can see the two mirrored volumes.

To use the storage, we select the “Storage (SCSI, SAN, and NFS)” view on one of the ESX server’s configuration panes. We click the “Add Storage…” link and advance through the Add Storage wizard’s screens.


Creating a VMFS3 File system on the SANmelody Storage


In VirtualCenter we can choose how the ESX servers will deal with path failures: will the ESX servers employ a “preferred path” (Fixed) policy, or will they use whichever path was most recently available (MRU)? If our ESX servers are running ESX 3.0.2 build 52542 through ESX 3.5, we can use either MRU or Fixed path policy. If we are using an older build of ESX 3.0.x, or a virtual switch configured for “IP hash based load balancing”, we will need to select MRU. To change the path policy in ESX, we right-click over each file system (e.g. VMFS-V1) and select “Properties” from the contextual menu. We then click the “Manage Paths…” button in the ensuing dialog, and finally click the “Change” button under the “Policy” section of the Manage Paths dialog.

Changing the path policy to “Fixed”

Finally, we follow VMware’s recommendations for “Advanced Settings” on the ESX hosts as per the VMware SAN Configuration Guide. In particular, we set:

Disk.UseLunReset = 1

Disk.UseDeviceReset = 0


While we’re in the Advanced Settings dialog, we add SANmelody to the Disk.SANDevicesWithAPFailover list, entering the string exactly as shown here: “SANmelody :”. (Note the space and colon following the string “SANmelody”.)
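If you would rather script these host settings than click through the GUI, the ESX 3.x service console can apply them with esxcfg-advcfg. This is a sketch, so verify the option paths against your ESX build before relying on it.

```shell
# Use LUN resets rather than device resets when clearing SCSI reservations.
esxcfg-advcfg -s 1 /Disk/UseLunReset
esxcfg-advcfg -s 0 /Disk/UseDeviceReset

# Register SANmelody as an alternate-path failover device.
# Note the trailing " :" (space, then colon) in the string.
esxcfg-advcfg -s "SANmelody :" /Disk/SANDevicesWithAPFailover
```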

We reboot the ESX server to apply the changes.


That’s it. We’ve implemented mirrored, shared SAN storage between two independent SAN storage controllers located on two physically separate ESX hosts.

We can now begin deploying our VMs, placing their files on the shared SAN storage.

Deploying a New VM on the Shared Storage

Of course, our SANmelody Virtual SAN is not limited to the confines of the ESX servers. We can easily configure any of our non-virtualized physical servers to use our SANmelody iSCSI targets, offering those servers true Business Continuity for their data storage.


MONITORING STORAGE UTILIZATION

We created our first VM, named “FileServer”, with two virtual disks: an 8 GB drive for the Windows 2003 R2 system, and a second 150 GB drive to hold our file shares.

Looking at the storage pool on one of the two SANmelody servers, we note how Thin Provisioning plays out in the ESX environment. Our pool is still only at 1% utilization, even though our ESX host thinks 158 GB has been allocated from VMFS-V1.

Note that we have 876 GB of physical capacity, but we’ve given out 1 TB of storage with our 2 x 500 GB mirrored volumes. As we are currently over-subscribed, we will want to monitor the storage pools to ensure we do not completely deplete them. SANmelody provides for setting alert thresholds and sending warnings when the pools approach depletion. Additional storage can be added live; even if we need to stop the ESX servers to add physical storage or HBAs, this can be performed in a non-disruptive fashion, one ESX server at a time. Remember, that’s what Business Continuity is all about.


AUTOMATING STARTUP & SHUTDOWN

We will want to automate the startup and shutdown of the shared storage so that, for instance, when an ESX server boots it will start its local SANmelody VM and then – once SANmelody is running – automatically rescan the SAN to rediscover the shared storage and start the VMs. Otherwise, we will need to perform a manual “rescan” to get our shared datastores remounted.

The automation can be accomplished via a shell script called from cron on the ESX host, using standard tools and a few of the ESX command-line utilities such as esxcfg-* and vmware-cmd.
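As a rough illustration only, such a startup script might look like the sketch below. The VMX path, adapter name, and three-minute wait are hypothetical placeholders, and DataCore’s professionally built scripts will certainly differ.

```shell
#!/bin/bash
# Sketch of an ESX 3.x boot-time script: start the local SANmelody VM,
# wait for its iSCSI target to come up, then rescan and start the rest.

SANMELODY_VMX="/vmfs/volumes/local/TUSSAN1/TUSSAN1.vmx"   # hypothetical path
ISCSI_HBA="vmhba40"                                       # hypothetical adapter

# 1. Power on the local SANmelody VM (it resides on local storage).
vmware-cmd "$SANMELODY_VMX" start

# 2. Give SANmelody time to boot and publish its iSCSI targets.
sleep 180

# 3. Rescan so the shared VMFS datastores are rediscovered.
esxcfg-rescan "$ISCSI_HBA"

# 4. Start the VMs that live on the shared storage
#    (assumes no spaces in the datastore paths).
for vmx in $(vmware-cmd -l | grep -v "$SANMELODY_VMX"); do
    vmware-cmd "$vmx" start
done
```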

Although not essential, for completeness you would also want a script that cleanly brings down an ESX server. After migrating or stopping the shared-storage VMs and then the local SANmelody server, the script would rescan the iSCSI initiator to verify that the paths to the local SANmelody server had been disconnected before performing the shutdown. Can you guess where that script might be installed?

If you’re handy at scripting, you can do this work yourself. The scripts are relatively straightforward and you can use your favorite supported scripting language like bash or Perl. If scripting isn’t your cup of tea, DataCore and their resellers offer Professional Services that can build and install the scripts for you.

TESTING FAILOVER AND FAILBACK

Testing our Business Continuity environment is a matter of introducing failures. For instance, we can pull a cable from the iSCSI SAN or mirror channel, or we can simply stop one of the SAN storage arrays. Consider the example of stopping the TUSSAN1 VM. We expect to see an automatic failover of all I/O to the surviving TUSSAN2 VM. In this degraded mode, TUSSAN2 will disable write caching and commit all writes immediately to disk, to avoid data loss should a double failure occur. Once we restart TUSSAN1, the mirrors will rapidly resynchronize via log-based recovery. Upon returning to a healthy state, TUSSAN1 will re-publish its path to the virtual volumes and, if


the ESX servers are set to use “Fixed Path”, they will again access their volumes from the preferred path, returning to normal service.

ADDING VALUE WITH OUR VIRTUAL SAN

SANmelody has proven value in this solution for implementing an iSCSI-based Virtual SAN on our ESX hardware. The solution is brilliant as it provides true Business Continuity in a small footprint – an advanced feature that would cost a fortune if implemented using traditional SAN storage hardware.

We’ve also seen how SANmelody’s Storage Pooling and Thin Provisioning simplify managing storage and allow us to over-subscribe in anticipation of future growth.

But the story gets even better when we consider that SANmelody offers a means to replicate our virtual volumes offsite to a DR facility – using standard IP. It’s a feature of SANmelody called “AIM” or Asynchronous IP Mirroring. No special hardware or protocol converters are required.

Adding the AIM feature to one of our SANmelody VMs provides an invaluable solution for companies faced with a multitude of sites and looking for an economical way to replicate satellite-site data back to a central datacenter. To learn more about AIM, Asynchronous Replication, and Disaster Recovery / Offsite Backups, please contact DataCore Software Corporation or your authorized DataCore reseller.

For more information on how to Go Virtual see: New, Portable, Feature-Packed VM Starter SAN – Runs on VMware, Microsoft VS, Oracle VM, SUN VM, Virtual Iron and Citrix XenServer Virtualization Platforms, or Hardware Servers or Blades.

Copyright © 2008 DataCore Software Corporation. All rights reserved. DataCore, the DataCore logo and SANmelody are trademarks or registered trademarks of DataCore Software Corporation. Other DataCore product or service names or logos referenced herein are trademarks of DataCore Software Corporation. All other products, services and company names mentioned herein may be trademarks of their respective owners.