
Configuring SAN Storage on the VIO Server and Assigning to Client LPARs as VSCSI LUNs

Scope of this Document:

This document covers the steps involved and the best practices for allocating SAN LUNs from a VIO server to client LPARs using VSCSI. It does not cover assigning LUNs using NPIV, nor the general best practices for VIO server configuration and maintenance.

1.0 Configuration on the VIO Server

1.1 Set the Fibre Channel SCSI Attributes for Each Fibre Channel Adapter

Enable 'dynamic tracking' and 'fast fail' on each Fibre Channel adapter for high availability and quick failover.

Example:

As 'padmin'

# chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm

fscsi0 changed

Note: This should be run for the fscsiX device of every Fibre Channel adapter shown in the 'lsdev -Cc adapter |grep fcs' output; a loop covering all of them is sketched below.
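
To avoid running the command once per adapter, a small loop such as the following can be used from the padmin shell. This is only a sketch; it assumes each fcsX adapter has a matching fscsiX protocol device (the usual naming) and that loops and awk are available in your restricted shell:

for a in $(lsdev -type adapter | awk '/^fcs/ {print $1}')
do
    chdev -dev fscsi${a#fcs} -attr fc_err_recov=fast_fail dyntrk=yes -perm
done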

# lsdev -dev fscsi0 -attr

attribute     value      description                            user_settable
attach        switch     How this adapter is CONNECTED          False
dyntrk        yes        Dynamic Tracking of FC Devices         True
fc_err_recov  fast_fail  FC Fabric Event Error RECOVERY Policy  True
scsi_id       0x6f0058   Adapter SCSI ID                        False
sw_fc_class   3          FC Class for Fabric                    True

Note: A reboot of the VIO server is required for the 'dyntrk' and 'fc_err_recov' settings to take effect, since the '-perm' flag only updates the device database.

1.2 Prepare the VIO server for Configuring the SAN Storage


Prepare the VIO server to configure the SAN storage at the OS level. The steps involved are the same as for a standalone AIX host. The high-level steps are:

1.2.1 Install ODM file sets

Install the necessary AIX ODM filesets and host attachment scripts. This software is supplied by the storage vendor (IBM, EMC, Hitachi, etc.); a rough installation sketch follows.
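
As a sketch, the vendor packages are typically installed from a root shell on the VIOS; the staging directory below is only an illustration and the exact fileset names come from the vendor's installation guide:

$ oem_setup_env
# installp -acgXd /tmp/odm_filesets all      (apply and commit all filesets staged in the directory)
# lslpp -l | grep -i <vendor fileset name>   (verify the filesets installed cleanly)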

1.2.2 Install Multipath Software

Install the multipath software (SDDPCM, EMC PowerPath, Hitachi HDLM, etc.) as required.

1.2.3 Validate that multipathing is operating normally

Verify that all paths are active using the path query commands:

Example:

For SDDPCM

# pcmpath query adapter

For EMC Power Path

# powermt display

For HDLM

# dlnkmgr view -path

1.3 Logical disk configuration

Set the reserve policy on the logical disks that will be used to create virtual devices (hdisk, vpath, hdiskpower, etc.). Which logical disk type is used to create a virtual device depends on the multipath software: with IBM SDD it is 'vpath', with IBM SDDPCM it is 'hdisk', and with EMC PowerPath it is 'hdiskpower'.

1.3.1 Assign pvid and set the 'reserve policy' on the AIX logical disk

Example:

For SDDPCM, or PCM-based versions of Hitachi HDLM:

As 'padmin'

# chdev -dev hdiskX -attr pv=yes


# chdev -dev hdiskX -attr reserve_policy=no_reserve

For EMC Power path devices:

# chdev -dev hdiskpower0 -attr pv=yes

# chdev -dev hdiskpower0 -attr reserve_lock=no

Note: Setting the reserve policy and assigning the PVID should be performed on all the VIO servers in the frame.
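
When many LUNs are involved, both settings can be applied in one pass. The following is a sketch for SDDPCM-managed hdisks with a placeholder disk list; substitute the actual devices (or the hdiskpower/vpath equivalents and their attribute names) for your environment:

As 'padmin'

for d in hdisk4 hdisk5 hdisk6    # placeholder list of the new SAN disks
do
    chdev -dev $d -attr pv=yes
    chdev -dev $d -attr reserve_policy=no_reserve
done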

1.3.2 Queue depth setting for logical devices on VIO server

Check the queue depth settings on the logical devices that will be used to create the virtual devices (vpath, hdisk, hdiskpower, etc.). The IBM and EMC ODM filesets set the queue depth to a reasonable value and do not require initial tuning: the IBM storage ODM filesets set it to 20 and the EMC filesets set it to 32. The Hitachi ODM configuration, however, sets the queue depth to 2, which is very low; it is recommended to work with the SAN team or the storage vendor's support to change this initial setting to a reasonable value. For Hitachi devices, start with a value in the range of 8 to 16.

Example:

In the following examples, make a note of 'queue_depth' and 'rw_timeout':

# lsdev -dev hdiskpower1 -attr

attribute      value                             description                 user_settable
clr_q          yes                               Clear Queue (RS/6000)       True
location                                         Location                    True
lun_id         0x1000000000000                   LUN ID                      False
lun_reset_spt  yes                               FC Forced Open LUN          True
max_coalesce   0x100000                          Maximum coalesce size       True
max_transfer   0x100000                          Maximum transfer size       True
pvid           00f6095a2c486cec0000000000000000  Physical volume identifier  False
pvid_takeover  yes                               Takeover PVIDs from hdisks  True
q_err          no                                Use QERR bit                True
q_type         simple                            Queue TYPE                  False
queue_depth    32                                Queue DEPTH                 True
reassign_to    120                               REASSIGN time out value     True
reserve_lock   no                                Reserve device on open      True
rw_timeout     30                                READ/WRITE time out         True
scsi_id        0x710008                          SCSI ID                     False
start_timeout  60                                START unit time out         True
ww_name        0x50001441001bcff1                World Wide Name             False

# lsdev -dev hdisk32 -attr

attribute       value                             description                       user_settable
PCM             PCM/friend/hitachifcp             N/A                               True
PR_key_value    0x104f967d6                       Reserve Key                       True
algorithm       round_robin                       N/A                               False
clr_q           no                                Device CLEARS its Queue on error  True
dvc_support                                       N/A                               False
location                                          Location Label                    True
lun_id          0x1c000000000000                  Logical Unit Number ID            False
max_transfer    0x40000                           Maximum TRANSFER Size             True
node_name       0x50060e8005722303                FC Node Name                      False
pvid            0004f967fea332590000000000000000  Physical Volume ID                False
q_err           yes                               Use QERR bit                      False
q_type          simple                            Queue TYPE                        True
queue_depth     8                                 Queue DEPTH                       True
reassign_to     120                               REASSIGN time out                 True
reserve_policy  no_reserve                        Reserve Policy                    True
rw_timeout      60                                READ/WRITE time out               True
scsi_id         0x770000                          SCSI ID                           False
start_timeout   60                                START UNIT time out               True
ww_name         0x50060e8005722303                FC World Wide Name                False
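
To review 'queue_depth' and 'rw_timeout' across all disks in one pass, a loop such as the following can be run from a root shell (oem_setup_env). This is a sketch; 'rw_timeout' may not be defined on every disk type, so errors from that attribute are suppressed:

for d in $(lsdev -Cc disk -F name)
do
    echo "$d queue_depth=$(lsattr -El $d -a queue_depth -F value) rw_timeout=$(lsattr -El $d -a rw_timeout -F value 2>/dev/null)"
done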

1.4 Create Virtual Devices (Assign LUNs to VIO Client LPARs)

Identify the vhosts that belong to each client LPAR; these are the vhosts that will be used to allocate LUNs to the client LPARs.

Example:

'lsdev -slots' and 'lsmap -all |grep vhost' help determine the vhosts available and their mapping to clients.

# lsmap -all |grep vhost

vhost0 U8233.E8B.10095AP-V2-C2 0x00000004

vhost1 U8233.E8B.10095AP-V2-C3 0x00000005

vhost2 U8233.E8B.10095AP-V2-C4 0x00000004

vhost3 U8233.E8B.10095AP-V2-C5 0x00000005

vhost4 U8233.E8B.10095AP-V2-C6 0x00000004

vhost5 U8233.E8B.10095AP-V2-C7 0x00000005

vhost6 U8233.E8B.10095AP-V2-C8 0x00000004

vhost7 U8233.E8B.10095AP-V2-C9 0x00000005

vhost8 U8233.E8B.10095AP-V2-C10 0x00000004

vhost9 U8233.E8B.10095AP-V2-C11 0x00000005

If needed, use the 'lshwres' command on the HMC to verify the mapping between the client and server LPAR virtual adapters.


Example:

# lshwres -r virtualio --rsubtype scsi -m Server-9117-570-SN6539DEC --filter "lpar_ids=4"

where 4 is the LPAR ID of the client LPAR.

Once the vhost is identified based on the mapping, create the virtual device as shown below:

In the following example, the LUN is being assigned to vhost2 (LPAR ID 4) and the logical disk is hdiskpower0. The naming convention used here for the device name (-dev) is the client LPAR name plus the AIX logical disk name. The naming convention may be site specific, but the same convention should be used consistently on all the VIO servers.

mkvdev -vdev hdiskpower0 -vadapter vhost2 -dev dbp02_hdpowr0

Run 'lsmap -all |grep hdiskpower0' to verify that the virtual device has been created.

Run the mkvdev command on the second VIO server as well; this provides the second path to the disk from the client.

Note: To identify the logical disk (hdisk, vpath, hdiskpower, etc.) that matches a LUN serial number from the storage, use the respective multipath software commands or 'lscfg'.

Example: pcmpath query device |grep 020B

powermt display dev=all |grep 02C

where 020B and 02C are the last alphanumeric characters of the LUN serial numbers.

Alternatively, generate an output file using the above commands so that multiple serial numbers can be mapped without having to run the pcmpath, powermt, or dlnkmgr commands for every disk; a sketch of this follows.
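
A minimal sketch of that approach, run from a root shell so the output can be redirected (the file names are arbitrary):

# pcmpath query device > /tmp/pcmpath_dev.out       (SDDPCM)
# powermt display dev=all > /tmp/powermt_dev.out    (EMC PowerPath)
# grep -p 020B /tmp/pcmpath_dev.out                 (AIX paragraph grep; shows the device stanza containing the serial)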

Note: The number of LUNs that can be mapped to a vhost

The total number of LUNs (disks) that can be mapped to a vhost depends on the queue depth of the SAN LUNs. The best practice is to set the queue depth on the virtual disk (client) to the same value as on the physical disk (VIOS).

The maximum number of LUNs per vhost (virtual adapter) = (512-2)/ (3+queue depth)


From the examples above,

For EMC storage with queue depth of 32 the maximum number of disks per virtual adapter would be (512-2)/(3+32) = 14

For IBM storage where the queue depth is typically set at 20, the maximum number of disks per virtual adapter would be (512-2)/(3+20) = 22

For Hitachi storage with a typical queue depth value of 16, the maximum number of disks per virtual adapter would be (512-2)/(3+16) = 26
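
The same formula can be checked quickly in the shell (integer division discards the remainder); for example, for the EMC case above:

# echo $(( (512 - 2) / (3 + 32) ))
14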

Note: If needed, create additional server and client virtual SCSI adapters for allocating the SAN LUNs.

2.0 Configuration on Client LPAR

2.1 Set the Attributes for each Virtual SCSI Adapter

Set 'vscsi_path_to' and 'vscsi_err_recov' to the values shown below for each vscsi adapter on the client LPAR.

# chdev -l vscsi1 -a vscsi_path_to=30 -P

# chdev -l vscsi1 -a vscsi_err_recov=fast_fail -P

Note: 'vscsi_err_recov' cannot be modified on older AIX 5.3 TLs, and VIOS 2.x is required for it to work.

# lsattr -El vscsi1

vscsi_err_recov fast_fail N/A                       True
vscsi_path_to   30        Virtual SCSI Path Timeout True

A reboot of the client LPAR is required for the above parameters to take effect.
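
To apply both settings to every virtual SCSI adapter on the client in one pass, a loop such as the following can be used (a sketch, run as root on the client LPAR):

for v in $(lsdev -Cc adapter -F name | grep vscsi)
do
    chdev -l $v -a vscsi_path_to=30 -a vscsi_err_recov=fast_fail -P
done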

2.2 Configure VSCSI disks and Set Device Characteristics on the Client LPARs

2.2.1 Run 'cfgmgr'

Run 'cfgmgr' and verify that the disks assigned to the vhosts on the VIOS appear on the client LPARs. Use 'lspv |grep <pvid>' to match the disks from the VIOS to the client LPARs, and also check the UDID using the following command on both the client and the server. Note that not all multipath software and versions use the UDID method to identify devices.

# odmget -q attribute=unique_id CuAt (the first and last digits of the unique_id differ between the VIOS and the client)


On newer versions of VIOS, the following command can also be used to query the unique ID:

# chkdev -dev hdiskX -verbose
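
For example, using the pvid shown for hdiskpower1 in section 1.3.2 (lspv displays the first 16 characters), the match can be confirmed as follows; device names on the client will differ:

On the VIO server:

# lspv | grep hdiskpower1

On the client LPAR, using the pvid reported above:

# lspv | grep 00f6095a2c486cec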

2.2.2 Verify multipath to disks at Client LPAR

Verify that two paths are enabled to each vscsi disk (one through each VIO server).

lspath should show two paths to each vscsi disk (hdisk) on the client LPAR.

# lspath |grep hdisk16

Enabled hdisk16 vscsi4

Enabled hdisk16 vscsi5

2.2.3 Set 'hcheck_interval' and 'queue_depth' for Disks on the Client LPAR

Once the hdisks are configured on the client LPARs, set the following device characteristics:

hcheck_interval

queue_depth

Set 'hcheck_interval' equal to or smaller than the 'rw_timeout' value of the SAN LUN on the VIO server. For IBM and Hitachi storage the rw_timeout value is typically 60, so hcheck_interval can be set to 60 or less. Check the 'rw_timeout' value on the VIOS as shown above in 1.3.2.

Example:

# chdev -l hdisk2 -a hcheck_interval=50 -P

For EMC devices the rw_timeout value is 30, so set hcheck_interval to a value equal to or less than 30.

# chdev -l hdisk2 -a hcheck_interval=20 -P

Set the queue_depth value to the same value as on the physical device on the VIOS.

Example: if the queue depth value on the VIOS is 20

# chdev -l hdisk16 -a queue_depth=20 -P

If the queue depth value on the VIOS is 32


# chdev -l hdisk16 -a queue_depth=32 -P

The values set using the -P option require a reboot of the client LPAR to take effect.
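
Both attributes can also be set in one pass on all the virtual SCSI disks of the client. The sketch below uses the example values from above (hcheck_interval=50, queue_depth=20); adjust them to match the rw_timeout and queue_depth of the backing devices on your VIOS:

for d in $(lsdev -Cc disk | grep -i "Virtual SCSI Disk" | awk '{print $1}')
do
    chdev -l $d -a hcheck_interval=50 -a queue_depth=20 -P
done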

2.2.4 Path Priority settings

Path priority settings facilitate load balancing across the VIO servers. There are two approaches to load balancing from the client LPARs to the VIO server LPARs. In the first approach, set the path priority of all the disks of one client LPAR toward one VIO server, of the second client LPAR toward the second VIO server, and so on. In the second approach, set the path priority of some disks toward the first VIO server and of other disks toward the second VIO server, i.e., use all VIO servers from every client LPAR. The first approach (one VIO server per client LPAR) is easier to administer and still provides load balancing.

Example for path priority setting:

# chpath -l hdisk16 -p vscsi4 -a priority=2 (to set the path priority)

# lspath -AHE -l hdisk16 -p vscsi4 (to verify the path priority)

attribute value description user_settable

priority 2 Priority True
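
For the first approach, the priority can be set in bulk on a given client LPAR so that all of its disks prefer one VIO server. The sketch below lowers the priority of the vscsi5 path for every virtual SCSI disk (the adapter names follow the lspath example in 2.2.2; on the next client LPAR, swap the roles of vscsi4 and vscsi5):

for d in $(lsdev -Cc disk | grep -i "Virtual SCSI Disk" | awk '{print $1}')
do
    chpath -l $d -p vscsi5 -a priority=2
done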

3.0 MPIO/VIO Testing

After the configuration steps on the VIO servers and client LPARs have been completed, perform MPIO failover testing by shutting down one VIO server at a time. The path on the client LPAR should fail over to the second path; 'lspath' and 'errpt' can be used to validate the failover. The volume groups should remain active while the VIOS is down. Start the VIO server; once it is back up, the failed path on the client LPAR should return to the 'Enabled' state.
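
During the test, the following commands on the client LPAR give a quick view of path state, error log entries, and volume group availability:

# lspath      (while one VIOS is down, each vscsi disk should show one 'Failed' and one 'Enabled' path)
# errpt       (look for path failure and recovery entries)
# lsvg -o     (the active volume groups should still be listed)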