User Perspective of Emerging EMC Technologies
Session DH, 2 November 2016
Jim Erdahl, U.S. Bank, [email protected]



Let’s Keep it Lite

Mary and I have been married for more than 38 years.

Mary allows me to do everything I want to do, as long as I ALWAYS do what she tells me to do.

Mary’s love for me causes her to shop a lot.

We SCUBA so that we don’t need to hear each other speak.

U.S. Bank and EMC

U.S. Bank and EMC have worked together to introduce:

• Host Read Only (HRO) Devices USA SHARE-March 2016

• Universal Data Consistency™ USA SHARE-March 2014

• Data Protector for z Systems (zDP) Today

Like a Kid in a Candy Store…

Who got to go to the factory and eat off the line!

Agenda

zBoost™
  – Performance
  – zHPF
  – PAV Optimizer

VMAX V3 Beta
  – Hardware
  – MFE 8.0

GDDR 5.0 Beta

DC3: V3 Cutover

zBoost™

• No charge* non-disruptive microcode upgrade to improve performance for Mainframe VMAX 20K and VMAX 40K.
  – Improves the throughput with FICON
  – Provides full zHPF support
  – Enhances the use of PAVs

*Support for zHPF is a chargeable feature on VMAX, part of the Mainframe Essentials bundle.

zBoost Performance Benefits (VMAX 40K)

• Improves the maximum IOPS by moving a back end processor core to the front end for FICON (previously known as Mainframe Performance Accelerator - MPA)

– Increases usable front end director capacity

– Improved IOPS for FICON

– Reduced response time

– Batch run-times were improved

MPA Comparison

FICON Front End Director Improvement

zHPF Support

• Verify hardware support – D IOS,ZHPF
• We have not performed any measurements
• Worth mentioning
  – No issues with our z13 processors
  – On z12, non-disruptive Channel Detected Errors
  – Heard that the FICON fiber needs to be clean
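The console check above can be sketched as follows; this is illustrative only, and the exact response message text varies by z/OS release:

  D IOS,ZHPF          (reports whether the zHPF function is enabled,
                       disabled, or not supported by the processor)
  D M=DEV(1901)       (per-device display; its FUNCTIONS ENABLED line
                       shows whether zHPF is in use on that device)

Device 1901 here is simply the example device number used later in this deck.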

zBoost™ PAV Optimizer Objectives

• Significantly improves performance of multi-track zHPF I/O
• Extends parallel processing programming paradigms into the I/O itself
• Ensures transparent implementation for ease of exploitation
  – No JCL or program changes necessary

zHPF I/O optimized with PAV Optimizer

[Diagram: a zHPF channel program (TCW, TCCB, DCW: READ TRACK, Count=9) issued by the access method through z/OS IOS to read 9 tracks from device 1901. The PAV Optimizer in the subsystem splits the I/O across base device 1901 and aliases 19F1 and 19F5; tracks 4-6 are shown completing in parallel.]

PAVO

• 64 HyperPAVs per 192 devices – already heavily used
• Early use of the functions – Beta via Early Ship
  – 16 jobs assisted since late Oct 2015
  – > 1,100 devices

Project was delayed because ………

Sample SCF INI PAVO Parms:

SCF.LFC.LCODES.LIST=1234-5678-9012-3456    /* PAV OPT */
/******************************************************/
/* PAV OPTIMIZER - ENABLEMENT */
SCF.DEV.OPTIMIZE.ENABLE=YES
SCF.DEV.OPTIMIZE.PAV=YES                   /* YES/PASSIVE */
SCF.DEV.OPTIMIZE.SMF.RECID=203             /* SMF RECORD ID NUMBER */
/* SCF.MSC.PAVO=YES                           SUSPEND/RESUME AT CYCLE SWITCH */
/*
/* PAV OPTIMIZER - SELECTION (SITE SPECIFIC) */
SCF.DEV.OPTIMIZE.PAV.STORGRP.INCLUDE.LIST=SGONE,SGTWO
SCF.DEV.OPTIMIZE.PAV.JOBPREFIX.LIST=JAERDAH
SCF.DEV.OPTIMIZE.PAV.JOBNAME.LIST=JOBNAME1,JOBNAME2
/*
/* PAV OPTIMIZER - RESOURCE UTILIZATION */
SCF.DEV.OPTIMIZE.PAV.TRACK.MIN=2
SCF.DEV.OPTIMIZE.PAV.SPLIT.MAX=8
SCF.DEV.OPTIMIZE.PAV.SPLIT.MAX.READ=4
SCF.DEV.OPTIMIZE.PAV.SPLIT.MAX.WRITE=8
SCF.DEV.OPTIMIZE.PAV.QUIPOINT.GLOBAL=5000
SCF.DEV.OPTIMIZE.PAV.QUIPOINT.LCU.PCT=75
SCF.DEV.OPTIMIZE.PAV.QUIPOINT.DEVICE=8

As provided, do not use.

Selection Criteria for Batch Job Optimization

• Mission critical application
• A lot of zHPF I/O (SMF Type 42)
• Significant run time reduction
• Job is in the critical path
• Under MBOS* influence (larger buffers)

*Mainview Batch Optimizer from BMC

PAVO Results

Note: These are development jobs, so the data tested with varies greatly.

[Chart: percent runtime reduction for the assisted jobs (axis scale -10% to 60%).]

What’s Next:

We have engaged EMC resources

We need to better understand how to use the SMF data

We have requested enhancements
• Selection by data set name
• Need passive and active mode in parallel

VMAX V3 Beta Validation

• Hardware – Mainframe Engines

• Mainframe specific microcode

• Mainframe Enabler – MFE Version 8.0

• Geographically Dispersed Disaster Restart – GDDR Version 5.0

The objective was to validate functions, not performance

Initial Beta Configuration

[Diagram: DC1 and DC2, each with a VMAX3, linked by SRDF/A. A z13 runs 3 LPARs: RA00, RB00, and R900. The beta array was a VMAX 200K with 4TB and 8 FICON connections.]

VMAX V3 Beta

• The Beta validation was a success. Here is what we participated in:

– I/O Driver provided by EMC and with our software

– SRDF/A with MSC and MCM

– TF Mirror Clone Emulation

– SnapVX

– Batch zDP

– Host Read Only

– ISPF Interface for zDP

– SRDF/S

– ConGrp

– Autoswap

– Unisphere

Beta Validation Process

• Education – Train the trainers

• At each phase, delivery included:

– Microcode

– MFE 8.0 Software

– New/updates to manuals (w/o messages)

– Validation Script

– Validation JCL

• Enhanced Validation

Let’s Drill Down

• SnapVX

• zDP

SnapVX Method

• Historical TimeFinder technologies
  – Need a target device at the time of the copy

[Diagram: source volume copied to target volume.]

This slide was originally prepared by Justin Bastin, EMC

TimeFinder SnapVX

• Built on Thin provisioning – Data stored in thin pools (SRP)
• Volume level only
• Supports up to 256 ‘target-less’ snapshots per volume
• Single architecture supporting:
  – TF/Mirror (via Clone Emulation)
  – TF/Clone
  – TF/SNAP
  – SnapVX

This slide was originally prepared by Justin Bastin, EMC

[Diagram: a production volume with multiple snapshots and a linked target, all backed by the Storage Resource Pool (SRP).]

TimeFinder SnapVX

• New SnapVX commands
  – CREATE – Create snapshot structure of source with unique name
  – ACTIVATE – Obtain point-in-time
  – LINK/UNLINK – Associate/disassociate PiT snapshot to a target volume
  – QUERY – Display information on the snapshot
  – RENAME – Change name of snapshot
  – TERMINATE – End snapshot once no longer linked
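Taken together, these commands form a simple lifecycle. The sequence below is a sketch only – the snapshot name and operand layout are illustrative, not the exact Mainframe Enablers batch syntax:

  CREATE     define snapshot DAILY1 against the source volume
  ACTIVATE   fix the point-in-time image for DAILY1
  LINK       present DAILY1 on a target volume for read or recovery
  UNLINK     detach the target when finished
  TERMINATE  delete DAILY1 and release its tracks back to the SRP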

TimeFinder SnapVX – Pointer-Based SNAPs

[Diagram: a source device (TDEV) with up to 256 snapshots (e.g. 10 A.M., 12 noon), each held as a pointer/data structure in the Storage Resource Pool. CREATE & ACTIVATE establish a snapshot; LINK/UNLINK attach or detach it from target devices (TDEV); TERMINATE removes it.]

This slide was originally prepared by Justin Bastin, EMC

zDP Uses SnapVX

Data Protector for z Systems (zDP) delivers the capability to recover from logical data corruption with minimal data loss.

Yes, it was conceived in a bar

But, professionally designed ….

zDP - How does it work?

• Built on top of SnapVX
• Maintains consistency of data across volumes and VMAX V3s, using either ECA or SRDF
• Creates up to 256 Snap Sets every 10 minutes – (42.5 hours)

• Retain Snap Sets

• Terminate Snap Sets

• EMC has a tool to size your SRP using ChangeTracker

zDP Validation

• First - Performed the validation as requested by EMC with two volumes
• Second - Enhanced validation was with:
  – 94 volumes (MOD 3, 9, 27, 54, & EAVs)
  – 10 minute cycle time
  – Ran an I/O Driver process generating about 200 writes per second
• Results
  – Ran for more than a week
  – Used less than 34GB of the SRP

Operator Commands

• Start: F emcscf,ZDP,START vdg_name

• Stop: F emcscf,ZDP,STOP vdg_name

• Pause: F emcscf,ZDP,PAUSE vdg_name

• Resume: F emcscf,ZDP,RESUME vdg_name

• Locks: F emcscf,ZDP,RELEASEDEVICELOCK vdg_name

vdg_name is case sensitive

The zDP Process

• Define the Versioned Data Group (VDG)

• Add the source volumes to the group

• Define the Target Set (TGT)

• Add the target devices

• Start the zDP process

• Celebrate – that was easy

zDP Batch JCL

//STEPA EXEC PGM=EIPINIT,REGION=0M

//STEPLIB DD DISP=SHR,DSN=SYS2.EMC.MFE800.LINKLIB

//SYSPRINT DD SYSOUT=*

//SYSUDUMP DD SYSOUT=*

//SCF$EMCL DD DUMMY ---> YOUR EMCSCF ADDRESS SPACE

Statement1

Statement2

….

Statement(n)

/*

zDP Define Versioned Data Group

GLOBAL MAX_RC(4)

DEFINE VDG VDGR900A,
  CYCLE_TIME(10,0),CYCLE_OVERFLOW(NEXT),
  CONSISTENT(YES),TIMEOUT(30),
  TERM_POLICY(OLDEST),
  SRP_WARN%(80),
  MAX_SNAPSETS(255),          <- save 1 snapset for other SnapVX use
  SAVED_SNAPSETS(1,3),
  PRESERVED_COPY_LIMIT(003),
  LOG_OPT(SCF),SMFREC(204,VOLUMES),
  EXIT(NONE),MAXRC(4)

MODIFY VDG VDGR900A,ADD,SYMDEV(FD2C,00A4-00AF)
MODIFY VDG VDGR900A,ADD,SYMDEV(FD2C,00B0-00BE)
MODIFY VDG VDGR900A,ADD,SYMDEV(FD2C,046C-048B)

zDP Define Target Group

GLOBAL MAX_RC(4)

DEFINE TARGET_SET TGTR900A

MODIFY TGT TGTR900A,ADD,SYMDEV(FD2C,06E0-06EB)
MODIFY TGT TGTR900A,ADD,SYMDEV(FD2C,06EC-06FA)
MODIFY TGT TGTR900A,ADD,SYMDEV(FD2C,0AA8-0AC7)

Let’s fire this baby up!

zDP Start Command

F emcscf,ZDP START vdg_name

14.41.36 STC23778 SCF0740I ZDP START VDGR900A

14.41.36 STC23778 SCF0741I ZDP START command accepted

14.41.36 STC23778 SCF0746I ZDP VDG VDGR900A Started

14.41.36 STC23778 EIP0200I *** EMC zDP - V8.0.0 (000) - Friday, February 26, 2016 ***

14.41.39 STC23778 EIP0201I VDG VDGR900A, Beginning cycle 1, Snapset VDGR900A.......160571441S00001

14.41.40 STC23778 EIP0217I VDG VDGR900A, Devices validated for consistency, via SRDF/A

14.41.43 STC23778 EIP0204I VDG VDGR900A, Snapset VDGR900A.......160571441S00001 created

14.41.45 STC23778 EIP0202I VDG VDGR900A, Completed cycle 1, next cycle scheduled for 14:51:39

14.41.53 STC23778 SCF1301I MSC - TASK TIMER

14.46.53 STC23778 SCF1301I MSC - TASK TIMER

14.51.39 STC23778 EIP0201I VDG VDGR900A, Beginning cycle 2, Snapset VDGR900A.......160571451C00002

14.51.40 STC23778 EIP0217I VDG VDGR900A, Devices validated for consistency, via SRDF/A

14.51.44 STC23778 EIP0204I VDG VDGR900A, Snapset VDGR900A.......160571451C00002 created

14.51.46 STC23778 EIP0202I VDG VDGR900A, Completed cycle 2, next cycle scheduled for 15:01:39

Link a Snap Set

LINK VDG(VDGR900A),SNAPSET(160571501C00003) TGT(TGTR900A)

EIP0001I *** EMC zDP - V8.0.0 (000) - SCF V8.0.0 (000) *** 07:30:47 02/28/2016

EMCP001I LINK VDG(VDGR900A),SNAPSET(160571501C00003) TGT(TGTR900A)

EIP0053I SYMM FD2C/0001967-01562, Linking SNAPSET VDGR900A.......160571501C00003

EIP0034I LINK command completed

EIP0002I All control statements processed, highest RC 00

Keep a Snap Set

PERSISTENT SET,VDG(VDGR900A),SNAPSET(160571451C00002)

EIP0001I *** EMC zDP - V8.0.0 (000) - SCF V8.0.0 (000) *** 06:44:48 02/27/2016

EMCP001I PERSISTENT SET,VDG(VDGR900A),SNAPSET(160571451C00002)

EIP0060I SYMM FD2C/0001967-01562, PERSISTENT SET for SNAPSET VDGR900A.......160571451C00002

EIP0034I PERSISTENT command completed

EIP0002I All control statements processed, highest RC 00

zDP QUERY Commands

Versioned Data Group Query

QUERY VDG VDGR900A,STATUS

QUERY VDG VDGR900A,DEVICE

QUERY VDG VDGR900A,SNAPSET

QUERY VDG VDGR900A,SNAPSET,DETAIL

Target Query

QUERY TGT TGTR900A,STATUS

QUERY TGT TGTR900A,DEVICE

QUERY TGT TGTR900A,STATUS,DEVICE

zDP Query Command of Snapsets

EIP0001I *** EMC zDP - V8.0.0 (000) - SCF V8.0.0 (000) *** 07:31:09 02/28/2016 Page 1

EMCP001I QUERY VDG VDGR900A,SNAPSET

EIP0035I Snapset Query for VDG VDGR900A

EIP0023I SYMM 0001967-01562, Microcode level 5977_0799, Type VMAX200K

EIP0024I Gatekeeper FD2C, Device Count: 59

EIP0025I SRP ID/Name: 0001/SRP_1, Reserved Capacity: 10%

EIP0026I Total Capacity: 3359M, Total Allocated: 16M, Snap Allocated: 487

EIP0036I CREATE SOURCE_TRACKS EXPIRATION

EIP0036I SNAPSET_NAME STATE DATE TIME CHANGED UNIQUE DATE TIME

EIP0036I ____________________________ _____ ___________________ ______________ ___________________

EIP0039I VDGR900A.......160571441S00001 ACT-S 02/26/2016 14:41:41 13485 79 02/29/2016 14:41:40

EIP0039I VDGR900A.......160571451C00002 ACT-P 02/26/2016 14:51:41 13477 53

EIP0039I VDGR900A.......160571501C00003 ACT 02/26/2016 15:01:40 13164 53

………..

EIP0039I VDGR900A.......160590721C00245 ACT 02/28/2016 07:21:42 83 52

EIP0039I VDGR900A.......160571501C00003 LNK 02/28/2016 07:30:54 13164 53

EIP0034I QUERY command completed

zDP Device Locks

F emcscf,ZDP,RELEASEDEVICELOCK vdg_name

07.31.10 STC03576 SCF0740I ZDP,RELEASEDEVICELOCK VDG_A_1

07.31.10 STC03576 SCF0741I ZDP RELDLOCK command accepted

07.31.10 STC03576 SCF0746I ZDP VDG VDG_A_1 Started

07.31.10 STC03576 EIP0200I *** EMC zDP - V8.0.1 (027) - Thursday, July 28, 2016 ***

07.31.39 STC03576 EIP0203I VDG VDG_A_1, Ended - Thursday, July 28, 2016 ***

07.31.39 STC03576 SCF0747I ZDP VDG VDG_A_1 Ended

ISPF Interface

[Screenshot: Snapset Query panel]

Lessons Learned

• All volumes in a Snap Set must be in the same state
  – SRDF/A
  – ECA
• Target volume cannot be smaller than the source
  – The define of the Target Group works, but…
  – The LINK command will verify sizes, then fail
• I prefer batch setup over ISPF (I’m a CSECT guy)
• At this time, GDDR does not play with zDP

GDDR V5.0 Validation – 2 Site

• Still using two VMAX V3

• Much like the MFE 8.0 Validation

• Software libraries delivered in XMIT format

• Converted our GDDR Config 3 site to 2-site with Autoswap

• Several Planned Autoswaps

• Performed an Un-Planned Autoswap

GDDR 5.0 Beta Configuration

[Diagram: three sites. DC1 and DC2 each have a VMAX V3 & DLm and are linked by SRDF/S; DC3 has a VMAX SE & DLm. An IBM z13 runs 4 LPARs: RA00 (DC1 C-System), RB00 (DC2 C-System), RC00 (DC3 C-System), and R900 (managed system). R900 uses subchannel sets: SS2 to DC1, SS0 to DC2, SS3 to DC3.]

GDDR 5.0 Notes

• To convert, copy GDDR V4.1 backup into V5.0
• New RACF rules
• Additions to IKJTSOxx
• E05TFDEV and E04SRDFA members
• Refresh LMOD GDDRXG1A into dynamic LPA
• Not able to mix TF/Mirror (PGM=TMCTF) with TF/Clone or TF/SnapVX (PGM=EMCSNAP) for consistency function
  – GDDR 4.2 used TF/Mirror exclusively
  – GDDR 5.0 adds support for TF/Clone and TF/SnapVX
• Users with mixed VMAX2 and VMAX3 must migrate to TF/Clone to allow common consistency function
• GDDR supports common consistency across V2 (TF/Clone – Snap Volume) and V3 (TF/SnapVX – CREATE) with a single ACTIVATE command in EMCSNAP

Enough playing – Let’s do this for real……

DC3 VMAX V3 Cutover Process

Prepare:

Recruited application test teams to validate

Generated 72 EMC I/O Driver jobs

Staged V3 hardware & prayed that 100 cables were correct

Updated MFE to 8.0 and GDDR to 5.0
  o GA base code released on March 30, 2016

Converted DLm VMAX V2 from TF/Mirror to SnapVX
  o Copy all the data
  o Dynamic QOS reduced run time from an estimated 19 days to 7½

DC3 VMAX V3 Cutover Process

Execution:

Initialized SA-Dump volumes with ICKDSF

Defined SRDF and pushed the data from DC1

Swapped the cables to the V3’s

Converted GDDR to 2-site w/Autoswap

Stopped SRDF/A and deleted all definitions

DC3 VMAX V3 Cutover Process

Execution (continued):

Monitored SRDF progress with PGMITRKA (from Lam Hairston)

Activated MSC

Converted GDDR back to 3-site w/Autoswap

DC3 VMAX V3 Validation

Engaged Application Teams (friends) to validate

Ran our 72 EMC I/O Driver jobs

Brought up 13 systems, z/OS 2.1 & z/OS 2.2

Activated 10 day CBU

Cutover Issues

Who would believe me if I said there were none?

Our Network Bandwidth between DC1 and DC3

Three BIN files needed to be changed

o The wrong file was installed

o We did not communicate correctly to EMC

No Sev-1 Issues Created

DC3 VMAX V3 performance

WOW!

Better Make a Dentist Appointment

Thank You for letting me share my trip through the Candy Store and then through the Candy Factory.

Thank You

Please remember to complete your session evaluations.

User Perspective of Emerging EMC

Technologies

Session DH

Jim Erdahl

U.S.Bank