
AD07: Working with MPIO, SDDPCM, SDD and SAN Boot

John Hock and Dan Braden, Power Systems Advanced Technical Skills

IBM Power Systems Technical University, October 18–22, 2010, Las Vegas, NV

© 2010 IBM Corporation


Agenda

• Multi-path basics
• Multi-Path I/O (MPIO)
  ► Useful MPIO commands
  ► Path priorities
  ► Failed path recovery and path health checking
  ► MPIO path management
• SDD and SDDPCM
• Multi-path code choices for DS4000, DS5000 and DS3950
• SAN Boot


How many paths for a LUN?

[Diagram: server connected through a FC switch to the storage subsystem]

• Paths = (# of paths from server to switch) x (# of paths from storage to switch)
• Here there are potentially 6 paths per LUN, but the number can be reduced via:
  ► LUN masking at the storage
  ► Assigning LUNs to specific FC adapters at the host, and through specific ports on the storage
  ► Zoning (WWPN or SAN switch port zoning)
• 4 paths per LUN are sufficient for availability and reduce the CPU overhead of choosing a path
  ► Path selection overhead is relatively low
• MPIO has no practical limit on the number of paths; other products have path limits
  ► SDDPCM is limited to 32 paths per LUN
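
To check how many paths a LUN actually ended up with after masking and zoning, you can count them on the host (hdisk2 is an example device name):

# lspath -l hdisk2 | wc -l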


Path selection benefits and costs

• Path selection algorithms choose a path to minimize the latency added to an IO as it crosses the SAN to the storage
• The latency to send a 4 KB IO over an 8 Gbps SAN link is 4 KB / (8 Gb/s x 0.1 B/b x 1048576 KB/GB) = 0.0048 ms
• Multiple links may be involved, and an IO makes a round trip
• Compare this to the fastest IO service times, around 1 ms
• If the links aren't busy, there likely won't be much, if any, savings
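
A quick sanity check of that arithmetic with bc (the 0.1 B/b factor accounts for 8b/10b link encoding; the result is the one-way, single-link latency in ms):

# echo "scale=9; 4 / (8 * 0.1 * 1048576) * 1000" | bc

This prints approximately .004768, i.e. about 0.0048 ms.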

• Costs of path selection algorithms:
  ► CPU cycles to choose the best path
  ► Memory to keep track of in-flight IOs down each path, or
  ► Memory to keep track of IO service times down each path
  ► Latency added to the IO while choosing the best path


Disk configuration

• The disk vendor dictates what multi-path code can be used, supplies the filesets for it, and supports it
• A fileset is loaded to update the ODM to support the storage
  ► AIX then recognizes and appropriately configures the disk
  ► Without this, disks are configured using a generic ODM definition
  ► Performance and error handling may suffer as a result
• # lsdev -Pc disk displays supported storage
• The multi-path code will be a different fileset
  ► Unless using the MPIO that's included with AIX
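
A quick way to spot disks that fell back to the generic ODM definition is to list the configured disks:

# lsdev -Cc disk

A generically configured FC disk shows a description like "Other FC SCSI Disk Drive" rather than a subsystem-specific one such as "IBM MPIO FC 2107".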


Multi-path IO with VIO and VSCSI LUNs

[Diagram: VIO client running MPIO, connected through two VIO servers (each running the disk subsystem's multi-path code) to the disk subsystem]

• Two layers of multi-path code: VIOC and VIOS
• VIOC disks always use MPIO, and all IO for a LUN normally goes through one VIOS
• The VIOS uses the multi-path code specified for the disk subsystem
• Set path priorities for the hdisks so that half the hdisks use one VIOS and half use the other
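
On the VIO client, lspath shows the two VSCSI parents behind each disk (device names here are examples):

# lspath -l hdisk0
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1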


Multi-path IO with VIO and NPIV

[Diagram: VIO client running multi-path code, with virtual FC (vFC) adapters mapped through two VIO servers to the disk subsystem]

• The VIOC has virtual FC adapters (vFC)
  ► Potentially one vFC adapter for every real FC adapter in each VIOC
  ► A maximum of 64 vFC adapters per real FC adapter is recommended
• The VIOC uses the multi-path code that the disk subsystem supports
• IOs for a LUN can go through both VIOSs
• One layer of multi-path code

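
On each VIOS, the vFC-to-physical-port mappings can be listed from the padmin shell:

$ lsmap -all -npiv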


What is MPIO?

• MPIO is an architecture designed by AIX development
  ► MPIO is also a commonly used acronym for Multi-Path IO; in this presentation, MPIO refers explicitly to the architecture, not the acronym
• Why was the MPIO architecture developed?
  ► With the advent of SANs, each disk subsystem vendor wrote their own multi-path code
  ► These multi-path code sets were usually incompatible
    ● Mixing disk subsystems on the same system was usually not supported, and when it was, each usually required its own FC adapters
  ► Integration with AIX IO error handling and recovery
    ● Several levels of IO timeouts: basic IO timeout, FC path timeout, etc.
• MPIO architecture details are available to disk subsystem vendors
  ► Compliant code requires a Path Control Module (PCM) for each disk subsystem
  ► Default PCMs for SCSI and FC exist in AIX and are often used by vendors
  ► Capabilities exist for different path selection algorithms
  ► Disk vendors have been moving toward MPIO-compliant code


Overview of MPIO Architecture

• LUNs show up as an hdisk
  ► Architected for 32 K paths
  ► No more than 16 are necessary
• PCM: Path Control Module
  ► Default PCMs exist for FC and SCSI
  ► Vendors may write optional PCMs
  ► PCMs may provide commands to manage paths
• Allows various algorithms to balance use of the paths
• Full support for multiple paths to rootvg
• Hdisks can be Available, Defined or non-existent
• Paths can also be Available, Defined, Missing or non-existent
• Path status can be enabled, disabled or failed if the path is Available
• One must get the device layer correct before working with the path status layer
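
For example, check the device layer first, then the path layer (hdisk2 is an example name):

# lsdev -l hdisk2
# lspath -l hdisk2

The hdisk itself must be Available before its path states are meaningful.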


MPIO support

Storage subsystem family                        MPIO code           Multi-path algorithms
IBM ESS, DS6000, DS8000, DS4000, DS5000, SVC    SDDPCM              fail_over, round_robin, load balance, load balance port
EMC Symmetrix                                   Default FC PCM      fail_over, round_robin
HDS                                             HDLM                fail_over, round robin, extended round robin
HP (not EVA), some members of the XP family     Default FC PCM      fail_over, round_robin
SCSI                                            Default SCSI PCM    fail_over, round_robin
VIO VSCSI                                       Default SCSI PCM    fail_over
NSeries                                         Default FC PCM      fail_over, round_robin
XIV                                             Default FC PCM      fail_over, round_robin


Non-MPIO multi-path code

Storage subsystem family      Multi-path code
IBM DS4000 and DS5000         RDAC
EMC                           PowerPath
HP                            AutoPath
HDS                           HDLM (older versions)
Veritas-supported storage     DMP


Mixing multi-path code sets

• The disk subsystem vendor specifies what multi-path code is supported for their storage
  ► The disk subsystem vendor supports their storage; the server vendor generally doesn't
• You can mix multi-path code sets that are MPIO compliant, and even share adapters
  ► Exception: HP and HDS require their own adapters
• Generally, one non-MPIO-compliant code set can coexist with other MPIO-compliant code sets
  ► Exception: SDD and RDAC can be mixed on the same LPAR
  ► The non-MPIO-compliant code must use its own adapters
• Devices of a given type use only one multi-path code set
  ► E.g., you can't use SDDPCM for one DS8000 and SDD for another DS8000 on the same AIX instance


Sharing Fibre Channel Adapter ports

• HP and HDS require adapters dedicated to their disk subsystems
• Disks using MPIO-compliant code sets can share adapter ports
• It's recommended that disk and tape use separate ports
  ► Tape and disk devices require incompatible HBA performance settings


MPIO Commands

• lspath - list paths, path status and path attributes for a disk
• chpath - change path status or path attributes
  ► Enable or disable paths
• rmpath - delete paths or change path state
  ► Putting a path into the Defined state (from Available) means it won't be used
  ► One cannot define/delete the last path of a device
• mkpath - add another path to a device, or make a Defined path Available
  ► Generally cfgmgr is used to add new paths
• chdev - change a device's attributes (not specific to MPIO)
• cfgmgr - add new paths to an hdisk or make Defined paths Available (not specific to MPIO)
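
For example, to take a path out of service ahead of planned maintenance and bring it back afterward (hdisk2 and fscsi0 are example names):

# chpath -l hdisk2 -p fscsi0 -s disable
# chpath -l hdisk2 -p fscsi0 -s enable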


Useful MPIO Commands

• List the status of the paths and the parent device (or adapter):

# lspath -Hl <hdisk#>

• List connection information for a path:

# lspath -l hdisk2 -F"name status parent connection path_status"
hdisk2 Enabled fscsi0 203900a0b8478dda,f000000000000 Available
hdisk2 Enabled fscsi0 201800a0b8478dda,f000000000000 Available
hdisk2 Enabled fscsi1 201900a0b8478dda,f000000000000 Available
hdisk2 Enabled fscsi1 203800a0b8478dda,f000000000000 Available

• The connection field begins with the storage port WWPN
  ► In the example above, each path goes to a different storage port (the last one, for example, to WWPN 203800a0b8478dda)

• List a specific path's attributes:

# lspath -AEl hdisk2 -p fscsi0 -w "203900a0b8478dda,f000000000000"
scsi_id   0x30400            SCSI ID      False
node_name 0x200800a0b8478dda FC Node Name False
priority  1                  Priority     True


Path priorities

• The priority attribute for paths can be used to specify a preference for path IOs. How it works depends on whether the hdisk's algorithm attribute is set to fail_over or round_robin.
• algorithm=fail_over
  ► The path with the highest priority (the lowest priority value) handles all the IOs unless there's a path failure
  ► The other path(s) are only used when there is a path failure
  ► Set the primary path's priority value to 1, the next path's (in case of path failure) to 2, and so on
• algorithm=round_robin
  ► If the priority attributes are equal, IOs go down each path equally
  ► With two paths, if you set path A's priority to 1 and path B's to 255, then for every IO going down path A, 255 IOs are sent down path B
• To change the path priority of an MPIO device on a VIO client:

# chpath -l hdisk0 -p vscsi1 -a priority=25


Path priorities

# lsattr -El hdisk9
PCM             PCM/friend/otherapdisk Path Control Module    False
algorithm       fail_over              Algorithm              True
hcheck_interval 60                     Health Check Interval  True
hcheck_mode     nonactive              Health Check Mode      True
lun_id          0x5000000000000        Logical Unit Number ID False
node_name       0x20060080e517b6ba     FC Node Name           False
queue_depth     10                     Queue DEPTH            True
reserve_policy  single_path            Reserve Policy         True
ww_name         0x20160080e517b6ba     FC World Wide Name     False

# lspath -l hdisk9 -F"parent connection status path_status"
fscsi1 20160080e517b6ba,5000000000000 Enabled Available
fscsi1 20170080e517b6ba,5000000000000 Enabled Available

# lspath -AEl hdisk9 -p fscsi1 -w "20160080e517b6ba,5000000000000"
scsi_id   0x10a00            SCSI ID      False
node_name 0x20060080e517b6ba FC Node Name False
priority  1                  Priority     True


Path priorities – why change them?

• With VIOCs, send half the IOs to one VIOS and half to the other VIOS
  ► Set priorities so that half the LUNs prefer VIOSa and half prefer VIOSb (a sketch follows below)
  ► This uses both VIOSs' CPU and adapters
  ► algorithm=fail_over is the only option at the VIOC for VSCSI disks
• With NSeries, have the IOs go to the primary controller for the LUN
  ► Set via the dotpaths script that comes with the NSeries filesets
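
A minimal ksh sketch of the VIOC priority split; the device and parent names (hdisk1/hdisk3/hdisk5, vscsi0) are assumptions, so verify the actual parents with lspath first:

# Demote vscsi0 on half the disks so their IOs prefer the path through vscsi1
for d in hdisk1 hdisk3 hdisk5
do
  chpath -l $d -p vscsi0 -a priority=2
done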


Path Health Checking and Recovery – Validate Path is Working

• For SDDPCM and MPIO-compliant disks, two hdisk attributes apply:

# lsattr -El hdisk26
hcheck_interval 0         Health Check Interval True
hcheck_mode     nonactive Health Check Mode     True

• hcheck_interval
  ► Defines how often the health check is performed on the paths for a device. The attribute supports a range from 0 to 3600 seconds. When a value of 0 is selected (the default), health checking is disabled.
• hcheck_mode
  ► Determines which paths are checked when the health check capability is used:
    ● enabled: sends the healthcheck command down paths with a state of enabled
    ● failed: sends the healthcheck command down paths with a state of failed
    ● nonactive (the default): sends the healthcheck command down paths that have no active I/O, including paths with a state of failed. If the algorithm is fail_over, the healthcheck command is also sent on each path that has a state of enabled but no active IO. If the algorithm is round_robin, the healthcheck command is only sent on paths with a state of failed, because round_robin keeps all enabled paths active with IO.
• Enable path health checking if paths aren't normally used, and only for enough LUNs to ensure the physical paths are checked
  ► Often turning on health checking for a single LUN is sufficient to monitor all physical paths' status
  ► An error shows up in the error log if a path fails
  ► Consider setting up error notification

Failed Path Recovery

• To enable a failed path: # chpath -l hdisk1 -p <parent> -s enable
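
For example, to turn on health checking for a single LUN (hdisk26 is an example name; -P defers the change until the next reboot, which is needed if the disk is in use):

# chdev -l hdisk26 -a hcheck_interval=60 -a hcheck_mode=nonactive -P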


Path management with MPIO

• Includes examining, adding, removing, enabling and disabling paths, e.g., for:
  ► Adapter failure and replacement
  ► VIOS upgrades (VIOS or multi-path code)
  ► Cable failure and replacement
  ► Storage controller/port failure and repair
• Adapter replacement (a concrete example follows this list):
  ► Paths will not be in use if the adapter has failed; the paths will be in the Failed state
  ► Remove the paths with # rmpath -l <hdisk> -p <parent> -w <connection> [-d]
    ● -d removes the path; without it, the path is changed to Defined
  ► Remove the adapter with # rmdev -Rdl <fcs#>
  ► Replace the adapter
  ► Run cfgmgr
  ► Check the paths with lspath
• It's better to stop using a path before you know the path will disappear
  ► Avoids timeouts, application delays, performance impacts and potential error recovery bugs
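
The same sequence with example names filled in (failed paths on hdisk2 through adapter fcs0, whose protocol device is fscsi0):

# rmpath -dl hdisk2 -p fscsi0
# rmdev -Rdl fcs0
(physically replace the adapter)
# cfgmgr
# lspath -l hdisk2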


Active/Active vs. Active/Passive Disk Subsystem Controllers

• With Active/Active controllers, IOs for a LUN can be sent to any storage port
• For Active/Passive disk subsystems, LUNs are balanced across the controllers
  ► So a controller is active for some LUNs, and passive for the others
• With Active/Passive controllers, IOs for a LUN are only sent to the active controller's ports
  ► ESS, DS6000 and DS8000 have active/active controllers
  ► DS4000, DS5000, DS3950 and NSeries have active/passive controllers
    ● The NSeries passive controller can accept IOs, but IO latency is affected
  ► The passive controller takes over in the event the active controller, or all paths to it, fail
• MPIO recognizes Active/Passive disk subsystems and sends IOs only to the primary controller
  ► Except under failure conditions, when the active/passive role switches for the affected LUNs


SDD: An Overview

• SDD = Subsystem Device Driver
• Used with IBM ESS, DS6000, DS8000 and the SAN Volume Controller, but is not MPIO compliant
  ► A "host attachment" fileset (which populates the ODM) and the SDD fileset are both installed
  ► Host attachment: ibm2105.rte
  ► SDD: devices.sdd.<sdd_version>.rte
• LUNs show up as vpaths, with an hdisk device for each path
  ► 32 paths maximum per LUN, but fewer are recommended with more than 600 LUNs
• One installs SDDPCM or SDD, not both
• No support for rootvg, dump or paging devices
  ► One can exclude disks from SDD control using the excludesddcfg command
  ► Mirror rootvg across two separate LUNs on different adapters for availability


SDD

• Load balancing algorithms:
  ► fo: failover
  ► rr: round robin
  ► lb: load balancing (a.k.a. df, the default); chooses the adapter with the fewest in-flight IOs
  ► lbs: load balancing sequential – optimized for sequential IO
  ► rrs: round robin sequential – optimized for sequential IO
• The datapath command is used to examine vpaths, adapters, paths and their statistics, dynamically change the load balancing algorithm, and perform other administrative tasks such as adapter replacement and disabling paths
• mkvg4vp is used instead of mkvg, and extendvg4vp is used instead of extendvg
• SDD automatically recovers failed paths that have been repaired, via the sddsrv daemon
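
Typical datapath usage; the device number in the policy example (0) would come from the datapath query device output:

# datapath query device
# datapath query adapter
# datapath set device 0 policy rr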


SDDPCM: An Overview

• SDDPCM = Subsystem Device Driver Path Control Module
• SDDPCM is MPIO compliant and can be used with IBM ESS, DS6000, DS8000, DS4000 (most models), DS5000, DS3950 and the SAN Volume Controller
  ► A "host attachment" fileset (which populates the ODM) and the SDDPCM fileset are both installed
  ► Host attachment: devices.fcp.disk.ibm.mpio.rte
  ► SDDPCM: devices.sddpcm.<version>.rte
• LUNs show up as hdisks; paths are shown with the pcmpath or lspath commands
  ► 16 paths per LUN supported
• Provides a PCM per the MPIO architecture
• One installs SDDPCM or SDD, not both. SDDPCM is recommended and strategic


SDDPCM

• Load balancing algorithms:
  ► rr - round robin
  ► lb - load balancing based on in-flight IOs per adapter
  ► fo - failover policy
  ► lbp - load balancing port (ESS, DS6000, DS8000 and SVC only), based on in-flight IOs per adapter and per storage port
• The pcmpath command is used to examine hdisks, adapters, paths and their statistics, dynamically change the load balancing algorithm, and perform other administrative tasks such as adapter replacement and disabling paths
• SDDPCM automatically recovers failed paths that have been repaired, via the pcmsrv daemon
  ► MPIO health checking can also be used, and can be set dynamically via the pcmpath command; this is recommended. Set the hc_interval to a non-zero value for an appropriate number of LUNs to check the physical paths (see the example below).
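
For example, to dynamically enable health checking on one device (device number 2 would come from the pcmpath query device output):

# pcmpath set device 2 hc_interval 60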


Path management with SDDPCM and the pcmpath command

# pcmpath query adapter              List adapters and status
# pcmpath query device               List hdisks and paths
# pcmpath query port                 List DS8000/DS6000/SVC ports
# pcmpath query devstats             List hdisk/path IO statistics
# pcmpath query adaptstats           List adapter IO statistics
# pcmpath query portstats            List DS8000/DS6000/SVC port statistics
# pcmpath query essmap               List rank, LUN ID and more for each hdisk
# pcmpath set adapter ...            Disable/enable paths to an adapter
# pcmpath set device path ...        Disable/enable paths to a hdisk
# pcmpath set device algorithm       Dynamically change the path algorithm
# pcmpath set device hc_interval     Dynamically change the health check interval
# pcmpath disable/enable ports ...   Disable/enable paths to a disk port
# pcmpath query wwpn                 Display all FC adapter WWPNs

And more. SDD offers the similar datapath command.


Path management with SDDPCM and the pcmpath command

# pcmpath query device

DEV#: 2  DEVICE NAME: hdisk2  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 600507680190013250000000000000F4
==========================================================================
Path#   Adapter/Path Name   State   Mode     Select     Errors
  0     fscsi0/path0        OPEN    NORMAL   40928736   0
  1*    fscsi0/path1        OPEN    NORMAL   16         0
  2     fscsi2/path4        OPEN    NORMAL   43927751   0
  3*    fscsi2/path5        OPEN    NORMAL   15         0
  4     fscsi1/path2        OPEN    NORMAL   44357912   0
  5*    fscsi1/path3        OPEN    NORMAL   14         0
  6     fscsi3/path6        OPEN    NORMAL   43050237   0
  7*    fscsi3/path7        OPEN    NORMAL   14         0

• * indicates a path to the passive controller
• Type 2145 is a SVC, which has active/passive nodes for a LUN
• DS4000, DS5000 and DS3950 also have active/passive controllers
• IOs are balanced across the paths to the active controller


Path management with SDDPCM and the pcmpath command

# pcmpath query devstats

Total Dual Active and Active/Asymmetric Devices : 67

DEV#: 2  DEVICE NAME: hdisk2
===============================
          Total Read   Total Write   Active Read   Active Write   Maximum
I/O:       169415657       2849038             0              0        20
SECTOR:   2446703617     318507176             0              0      5888

Transfer Size:  <= 512   <= 4k      <= 16K     <= 64K     > 64K
                183162   67388759   35609487   46379563   22703724

• The Maximum value (the peak number of in-flight IOs for the device) is useful for tuning queue depths


SDD & SDDPCM: Getting Disks configured correctly

• Install the appropriate filesets
  ► SDD or SDDPCM for the required disks (and the host attachment fileset)
  ► If you are using SDDPCM, also install the MPIO fileset that comes with AIX:
    ● devices.common.IBM.mpio.rte
  ► Host attachment scripts:
    ● http://www.ibm.com/support/dlsearch.wss?rs=540&q=host+scripts&tc=ST52G7&dc=D410
• Reboot, or start the sddsrv/pcmsrv daemon
• smitty disk -> List All Supported Disk
  ► Displays the disk types for which software support has been installed
• Or:

# lsdev -Pc disk | grep -i mpio
disk mpioosdisk fcp MPIO Other FC SCSI Disk Drive
disk 1750       fcp IBM MPIO FC 1750
disk 2105       fcp IBM MPIO FC 2105
disk 2107       fcp IBM MPIO FC 2107
disk 2145       fcp MPIO FC 2145
disk DS3950     fcp IBM MPIO DS3950 Array Disk
disk DS4100     fcp IBM MPIO DS4100 Array Disk
disk DS4200     fcp IBM MPIO DS4200 Array Disk
disk DS4300     fcp IBM MPIO DS4300 Array Disk
disk DS4500     fcp IBM MPIO DS4500 Array Disk
disk DS4700     fcp IBM MPIO DS4700 Array Disk
disk DS4800     fcp IBM MPIO DS4800 Array Disk
disk DS5000     fcp IBM MPIO DS5000 Array Disk
disk DS5020     fcp IBM MPIO DS5020 Array Disk


Migration from SDD to SDDPCM

• Migration from SDD to SDDPCM is fairly straightforward and doesn't require a lot of time. The procedure is documented in the manual:
  ► Vary off your SDD VGs
  ► Stop the sddsrv daemon via stopsrc -s sddsrv
  ► Remove the SDD devices (both vpaths and hdisks) via the instructions below
  ► Remove the dpo device
  ► Uninstall SDD and the host attachment fileset for SDD
  ► Install the host attachment fileset for SDDPCM, then SDDPCM
  ► Configure the new disks (if you rebooted it's done; otherwise run cfgmgr and startsrc -s pcmsrv)
  ► Vary on your VGs - you're back in business
• To remove the vpaths and hdisks, use the following inline script:

# for i in `lsdev -Cc disk | egrep "vpath|2105" | cut -f1 -d" "`
> do
> rmdev -dl $i
> done

• Or an even easier method, which removes the dpo device, vpaths and hdisks:

# rmdev -Rdl dpo

• No exportvg/importvg is needed, because LVM keeps track of PVs via their PVIDs
• Effective queue depths change (and changes to queue_depth will be lost):
  ► SDD effective queue depth = (# of paths for a LUN) x queue_depth
  ► SDDPCM effective queue depth = queue_depth


Multi-path code choices for DS4000/DS5000/DS3950

• These disk subsystems might use RDAC, MPIO or SDDPCM
  ► The choices depend on the model and AIX level
• MPIO is strategic
  ► SDDPCM uses MPIO and is recommended
  ► SDDPCM is not yet supported on the VIOS for these disk subsystems, so use MPIO there
• SAN cabling/zoning is more flexible with MPIO/SDDPCM than with RDAC
  ► RDAC requires that fcsA be connected to controllerA and fcsB to controllerB, with no cross connections
• These disk subsystems have active/passive controllers
  ► All IO for a LUN goes to its primary controller
    ● Unless the paths to it fail, or the controller fails; then the other controller takes over the LUN
  ► The storage administrator assigns half the LUNs to each controller
• The manage_disk_drivers command is used to choose the multi-path code
  ► Choices vary among models and AIX levels
• DS3950, DS5020, DS5100 and DS5300 use MPIO/SDDPCM only


Multi-path code choices for DS3950, DS4000 and DS5000

# manage_disk_drivers -l
Device          Present Driver   Driver Options
2810XIV         AIX_AAPCM        AIX_AAPCM,AIX_non_MPIO
DS4100          AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4200          AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4300          AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4500          AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4700          AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4800          AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS3950          AIX_SDDAPPCM     AIX_APPCM
DS5020          AIX_SDDAPPCM     AIX_APPCM
DS5100/DS5300   AIX_SDDAPPCM     AIX_APPCM
DS3500          AIX_AAPCM        AIX_APPCM

• To set the driver to use:

# manage_disk_drivers -d <device> -o <driver_option>

• AIX_AAPCM - MPIO with active/active controllers
• AIX_APPCM - MPIO with active/passive controllers
• AIX_SDDAPPCM - SDDPCM
• AIX_fcparray - RDAC


Other MPIO commands for DS3/4/5000

# mpio_get_config -Av
Frame id 0:
  Storage Subsystem worldwide name: 608e50017be8800004bbc4c7e
  Controller count: 2
  Partition count: 1
  Partition 0:
    Storage Subsystem Name = 'DS-5020'
    hdisk   LUN #   Ownership       User Label
    hdisk4      0   A (preferred)   Array1_LUN1
    hdisk5      1   B (preferred)   Array2_LUN1
    hdisk6      2   A (preferred)   Array3_LUN1
    hdisk7      3   B (preferred)   Array4_LUN1
    hdisk8      4   A (preferred)   Array5_LUN1
    hdisk9      5   B (preferred)   Array6_LUN1

# sddpcm_get_config -Av


Storage Area Network Boot

• Requirements for SAN booting:
  ► A system with FC boot capability
  ► Appropriate microcode (system, FC adapter, disk subsystem and FC switch)
  ► A disk subsystem supporting AIX FC boot
• Some older systems don't support FC boot; if in doubt, check the sales manual
• SAN disk configuration
  ► Create the SAN LUNs and assign them to the system's FC adapters' WWPNs prior to installing the system
  ► For non-MPIO configurations, assign one LUN to one WWPN to keep it simple
• AIX installation
  ► Boot from the installation CD or NIM; this runs the install program
  ► During installation you'll get a list of the disks on the SAN for the system
  ► Choose the disks for installing rootvg
    ● You may want to know the SAN disk volume IDs if you want a specific LUN for rootvg. You should also see the size of each disk, which is a good clue about which one(s) to use for rootvg.
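
After the install, verify and, if needed, set the boot device list (hdisk names are examples):

# bootlist -m normal -o
# bootlist -m normal hdisk0 hdisk1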


Storage Area Network Booting: Pros & Cons

• The main benefits of a SAN rootvg:
  ► Performance: < 2 ms writes, 5-10 ms reads
  ► Availability, with built-in RAID protection
  ► The ability to easily redeploy disk
  ► The ability to FlashCopy the rootvg for backup
• SAN rootvg disadvantages:
  ► SAN problems can cause loss of access to rootvg (usually not an issue, since application data is on the SAN anyway)
  ► Potential loss of system dump and diagnostics if the loss of access to the SAN is caused by a kernel bug
  ► Difficult to change the multi-path IO code (see the next slide)
    ● Not an issue with dual VIOSs
• SAN boot through VIO with VSCSI is different
  ► SAN boot through VIO with NPIV is like direct SAN boot


Changing multi-path IO code for rootvg – not so easy

• How do you change/update rootvg multi-path code when it's in use?
• Changing from SDD to SDDPCM (or vice versa) requires contacting support if booting from SAN, or:
  ► Move rootvg to internal SAS disks, e.g., using extendvg, migratepv, reducevg, bosboot and bootlist, or use alt_disk_install
  ► Change the multi-path code
  ► Move rootvg back to the SAN
  ► Newer versions of AIX require a newer version of SDD or SDDPCM
• Follow the procedures in the SDD and SDDPCM manual for upgrades of AIX and/or the multi-path code
• Not an issue when using VIO with dual VIOSs


Switch-attached Fibre Channel adapters

• Set fscsi dynamic tracking to yes
  ► Allows dynamic SAN changes
• Set the FC fabric event error recovery policy (fc_err_recov) to fast_fail if the switches support it
  ► The switch fails IOs immediately, without waiting for a timeout, if a path goes away
  ► Switches without support result in errors in the error log

# lsattr -El fscsi0
attach       switch       How this adapter is CONNECTED         False
dyntrk       no           Dynamic Tracking of FC Devices        True
fc_err_recov delayed_fail FC Fabric Event Error RECOVERY Policy True
scsi_id      0x1c0d00     Adapter SCSI ID                       False
sw_fc_class  3            FC Class for Fabric                   True

# chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P
# shutdown -Fr

• Virtual FC adapters have these set to yes and fast_fail by default


Documentation

• Infocenter, "Multiple Path I/O":
http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.baseadmn/doc/baseadmndita/dm_mpio.htm
• SDD and SDDPCM support matrix:
www.ibm.com/support/docview.wss?rs=540&uid=ssg1S7001350
• Downloads and documentation for SDD:
www.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000065&loc=en_US&cs=utf-8&lang=en
• Downloads and documentation for SDDPCM:
www.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000201&loc=en_US&cs=utf-8&lang=en