
Power6 vSCSI to Power7 vFC (NPIV) migration

Rajesh Shah, UNIX Systems Administrator III


NPIV Overview


Backup prior to migration

Important: Ensure all backups of the source client LPAR are in place before the migration.

Basic steps when migrating from vSCSI to NPIV

On Source:
- Remove the vSCSI mappings / vhost adapters (can be done later)

On Target:
- Create a Virtual Fibre Channel server adapter at the VIOS
- Create a Virtual FC client adapter at the LPAR
- Map the server adapter to a physical port with vfcmap
- Locate the WWPNs for the destination client Fibre Channel adapter from the HMC and remap the SAN storage that was originally mapped to the source partition to the WWPNs of the destination partition (see the sketch below).
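A quick way to double-check a WWPN from the client side itself, a minimal sketch assuming the virtual FC client adapter enumerated as fcs0 (substitute your adapter name):

# on the AIX client LPAR: the Network Address field is the adapter's active WWPN
lscfg -vpl fcs0 | grep "Network Address"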


STEPS DONE BEFORE AND AFTER MIGRATION on client LPAR

NOTE: The source P6 client LPAR's LUN serial numbers and the target P7 LPAR's virtual WWPNs are given to the Storage team for LUN masking and for rezoning the target LUNs to the virtual WWPNs on the P7 LPAR.

During the outage window, on the OLD P6 (source) LPAR:
a) TL updates
b) Install SAN multipath code as specified by the SAN vendor (EMC ODM)
c) Drop the EtherChannel
d) Reconfigure the NIC on a single en interface
e) Disable NFS/NAS mounts
f) Reboot
g) Shutdown

AFTER migration, once the NEW P7 target LPAR is booted up:
a) Enable NFS/NAS mounts
b) Change/verify LUN attributes
c) REBOOT
d) SYSTEM verification
e) APPLICATION verification
f) THE BIG OK HAPPENS HERE
g) Cleanup


[Diagram: P5/P6 LPAR -> VIO -> SVC -> IBM SAN and EMC SAN; P7 LPAR]

In the background, the SAN team uses the SVC hot array-level migration method to migrate all real-time data ONLINE from the IBM SAN to the EMC SAN until the real outage window.


[Diagram: P6 LPAR -> VIO -> SVC -> IBM SAN and EMC SAN; P7 LPAR]

During the REAL OUTAGE WINDOW, the SAN team rezones the EMC SAN storage to the destination P7 LPAR's virtual WWPNs.


[Diagram: Migrated LUN]


Prerequisites for multipath NPIV configuration

• IBM POWER6 or later processor-based hardware
• 8Gb PCIe dual-port FC adapter
• NPIV-enabled SAN switches
• VIOS version 2.1 or later
• AIX 5.3 Technology Level 09 or later, or AIX 6.1 Technology Level 02 or later



Setting up NPIV containers on P7 (target) prior to migration

1) Create a virtual fibre channel adapter in the VIO server
2) Create a virtual fibre channel adapter in your VIO client
3) Map the virtual adapter in the VIO server to a physical fibre channel adapter using the vfcmap command
4) Give the virtual worldwide port names (WWPNs) to your SAN team (see the sketch below)
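One way to pull the client adapter's WWPN pair for step 4 is from the HMC CLI, a hedged sketch reusing the HMC, managed-system, and LPAR names that appear later in this deck (the exact field layout varies by HMC release):

us6thmc01:~> lssyscfg -r prof -m CCRUTCEC01 --filter "lpar_names=lparname,profile_names=default" -F virtual_fc_adapters

Each adapter entry in the output carries the slot numbers and the two virtual WWPNs to hand to the SAN team.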


VIOS configuration

You have to create dual VIOS instances on the Power server to get maximum redundancy for the paths. Each VIOS should own at least one 8Gb FC adapter (FC#5735) and one Virtual FC Server Adapter to map to the client LPAR.

NOTE: Before doing any NPIV-related configuration, the VIOS and AIX LPAR profile configuration has to be completed with all the other details required for running the VIOS and the AIX LPAR, such as the proper Virtual SCSI adapter and/or Virtual Ethernet Adapter mappings and the physical SCSI/RAID adapter assignments to the VIOS and/or AIX LPAR. This is because a Virtual FC Server Adapter to Client Adapter mapping requires both the VIOS and the AIX LPAR to exist in the system, unlike Virtual SCSI adapter mapping, where you provide only the slot numbers and the existence of the client is not checked.


Creating Virtual FC Server Adapters

Open the first VIOS's profile for editing, navigate to the Virtual Adapters tab, and click Actions > Create > Fibre Channel Adapter.

Enter the Virtual FC Server Adapter slot number, select from the dropdown the client LPAR that will be connected to this server adapter, and enter its planned Virtual FC Client Adapter slot number.


In the same way, edit the profile of the second VIOS and add the Virtual FC Server Adapter details.


MAP virtual FC server adapters to physical HBA ports

Once the Virtual FC Server Adapters are created, they have to be mapped to physical HBA ports to connect them to the SAN fabric. This connects the Virtual FC Client Adapters to the SAN fabric through the Virtual FC Server Adapter and the physical HBA port.

To map a Virtual FC Server Adapter to an HBA port, the HBA port must already be connected to a SAN switch with NPIV support enabled. You can run the 'lsnports' command at the VIOS '$' prompt to find out whether the connected SAN switch is NPIV-enabled. If the value of the 'fabric' parameter is '1', that HBA port is connected to a SAN switch supporting the NPIV feature. If the SAN switch does not support NPIV, the value will be '0', and if there is no SAN connectivity there won't be any output from the 'lsnports' command (see the sketch below).
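A minimal check from the VIOS restricted shell; the output below is only illustrative of the shape (device names, locations, and counts will differ on your system):

$ lsnports
name   physloc                      fabric tports aports swwpns awwpns
fcs0   U78A0.001.DNWHZS4-P1-C1-T1   1      64     64     2048   2046

A fabric value of 1 here means fcs0 sits on an NPIV-capable fabric and can be used with vfcmap.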


If the value of 'fabric' is '1', we can go ahead and map the Virtual FC adapters to those HBA ports. In the VIOS, the Virtual FC adapters show up as 'vfchost' devices. We can run the 'lsdev -vpd | grep vfchost' command to find out which device represents the Virtual FC adapter in any specific slot.

vfcmap -vadapter vfchost10 -fcp fcs0
vfcmap -vadapter vfchost11 -fcp fcs1
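To confirm each mapping took effect, a follow-up sketch using the same vfchost names as above:

$ lsmap -vadapter vfchost10 -npiv

Once the client LPAR is up and has logged into the fabric, the Status field should read LOGGED_IN.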


Repeat the above steps in the second VIOS LPAR as well. If you have more client LPARs, repeat the steps for all of those LPARs in both VIOS LPARs.


AIX LPAR Configuration

Now we need to create the Virtual FC Client Adapters in the AIX LPAR.


Creating Virtual FC Client Adapters

Open the LPAR profile of the AIX LPAR for editing and add the Virtual FC Client Adapters by navigating to the Virtual Adapters tab and selecting Actions > Create > Fibre Channel Adapter.


Enter the first Virtual FC Client Adapter slot number details. You need to make sure that the slot numbers entered exactly match the slot numbers entered while creating the Virtual FC Server Adapter in the first VIOS LPAR. This adapter will be the first path for SAN access in the AIX LPAR.


Do the same mapping for the second VIOS

Create the second Virtual FC Client Adapter. Make sure the slot numbers match the slot numbers entered in the second VIOS LPAR while creating the Virtual FC Server Adapter.


ACTIVATE TARGET P7 LPAR USING HMC in SMS menu


From the HMC, push all active and inactive WWPNs to the switch:

us6thmc01:~> chnportlogin -m CCRUTCEC01 -o login -p lparname -n default

Use the correct CEC and LPAR names.

Ensure the above command completes with Exit Status 0.

us6thmc01:~> lsnportlogin -m CCRUTCEC01 --filter "profile_names=default" | grep lpar


EMC ODM fileset install on source client LPAR

Install the EMC Symmetrix ODM filesets from us6nim02:

mount us6nim02:/usr/sys/inst.images /usr/sys/inst.images
cd /usr/sys/inst.images/EMCODM

smitty installp -> select the EMC Symmetrix FCP MPIO fileset from the list and set the option to install requisite filesets to Yes

NIC reconfiguration on source client LPAR (TO BE DONE FROM THE TERMINAL CONSOLE)

From the HMC, open a console to the source client LPAR and log in, then:
- ifconfig en2 down detach
- Remove the EtherChannel: smitty etherchannel -> Remove an EtherChannel
- Remove the ent1 virtual adapter
- Configure the IP on the en0 interface via smitty tcpip (see the sketch below)
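A command-line sketch of the same console steps, assuming the deck's adapter numbering (en2/ent2 = EtherChannel interface and pseudo-device, ent1 = virtual adapter member, en0 = remaining interface); the smitty panels above remain the authoritative path:

# quiesce and detach the EtherChannel interface
ifconfig en2 down detach
# remove the EtherChannel interface and its pseudo-device
rmdev -dl en2
rmdev -dl ent2
# remove the virtual adapter that was an EtherChannel member
rmdev -dl ent1
# reconfigure IP on en0 (placeholders for host, address, mask)
mktcpip -h <hostname> -a <ip_address> -m <netmask> -i en0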


AIX TL update before migration (start of steps during the actual outage window)

On the source client LPAR:

cd /usr/sys/inst.images/6100-07-03-1207

smitty update_all and update to the latest 6.1 TL7 SP3 level

Run oslevel -s and ensure the system is at that level (see the sketch below).
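An unattended alternative to the smitty panel, a sketch assuming the same NIM image directory (-d names the source directory, -Y agrees to the licenses):

# apply all updates from the mounted image directory, accepting licenses
install_all_updates -d /usr/sys/inst.images/6100-07-03-1207 -Y
# confirm the resulting level
oslevel -s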


shutdown -Fr

After the client LPAR reboots, ensure oslevel -s reports the correct level and make sure you can log in from the network. Take an OS backup so we have the latest modified sysback with the EMC ODM filesets, the new NIC configuration, and the TL updates.

Disable the NFS/NAS mount points, then shut down the source LPAR: # shutdown -F now

THIS is the start of the real outage window.


SAN TEAM WILL REZONE VMAX LUNs to target virtual WWPNs

During the actual outage window they will rezone the VMAX LUNs and present them to the P7 target LPAR's virtual WWPNs.

P7 TARGET LPAR STEPS

Activate the P7 target LPAR and let it boot on its own. If the LPAR cannot find the boot disk on its own, then:

- Shut down the LPAR
- Activate the LPAR into the SMS menu
- Select Boot Options -> Configure Boot Device Order -> Select the 1st Boot Device -> Hard Drive -> SAN -> List all devices
- Select the first path disk in the list to boot from
- Exit the SMS menu and let it boot

Once the P7 LPAR is booted:

- Change/verify LUN attributes (you can set hcheck_interval=60 for all LUNs)
- Enable NFS/NAS mounts
- SYSTEM VERIFICATION
- REBOOT


NFS MOUNTS - IMPORTANT: change the NFS and NAS mounts in /etc/filesystems to mount=true

Change the VMAX LUN attributes as below for all VMAX disks:

chdev -l hdiskX -a reserve_policy=no_reserve -P
chdev -l hdiskX -a algorithm=round_robin -P
chdev -l hdiskX -a queue_depth=80 -P

Verify that the VMAX LUNs are seen as EMC Symmetrix FCP MPIO VRAID with:

lsdev -Cc disk

Make sure you are able to connect to the migrated LPAR from the outside network (a loop for the chdev changes is sketched below).
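A loop sketch that applies the same attributes (plus the hcheck_interval=60 mentioned earlier) to every disk in one pass; it assumes all listed hdisks are VMAX LUNs, so filter the list first if rootvg or other disks need different settings:

# -P stages the change in the ODM to take effect at the next reboot
for d in $(lsdev -Cc disk -F name); do
  chdev -l $d -a reserve_policy=no_reserve -a algorithm=round_robin \
        -a queue_depth=80 -a hcheck_interval=60 -P
done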

Once system verification is done:

shutdown -Fr

After the system comes up, ensure all filesystems and PVs are fine.

Application Verification

Ask the Application team to verify the application/DB from their end.

Once Application Verification is done …


THEN HERE IT IS: THE BIG OK. MIGRATION COMPLETE …


CLEANUP on target client LPAR

Remove the old hdisk and vscsi definitions. lsdev -Cc disk and lsdev -Cc adapter will list them all. Remove the old adapters left in Defined state, e.g. rmdev -dl vscsi0, and the same for vscsi1. Then run cfgmgr and ensure they are gone (a loop sketch follows below).

NOTE: hdisk numbering will change after migration for rootvg and the other VGs.
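A cleanup sketch that removes every vscsi adapter still in Defined state rather than naming each one; check the lsdev output first, since this assumes only stale adapters remain Defined:

# delete all vscsi adapters currently in Defined state
for a in $(lsdev -Cc adapter | awk '/^vscsi/ && /Defined/ {print $1}'); do
  rmdev -dl $a
done
# rescan and confirm the stale devices stayed gone
cfgmgr
lsdev -Cc adapter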


End of Presentation
