
Page 1: Keeping Up with z/OS’ Alphabet Soup Darrell Faulkner Computer Associates Development Manager NeuMICS

Keeping Up with z/OS’ Alphabet Soup

Darrell Faulkner
Computer Associates
Development Manager, NeuMICS

Page 2

Objectives

• Integrated Coupling Facility (ICF) and Integrated Facility for LINUX (IFL)
• PR/SM and LPs
• Intelligent Resource Director (IRD)
• IBM License Manager (ILM)
• Capacity Upgrade on Demand (CUoD)
• Conclusions

Page 3

Acronyms

CF - Coupling Facility
CP - Central Processor
CPC - Central Processor Complex
ICF - Integrated Coupling Facility
IFL - Integrated Facility for LINUX
PR/SM - Processor Resource/Systems Manager
HMC - Hardware Management Console
LLIC - LPAR Licensed Internal Code
Logical CP - Logical Processor
LP - Logical Partition
LPAR - Logical Partitioning (LPAR mode)
PU - Processor Unit

Page 4

Beginning with some IBM G5 processor models, PUs (Processor Units) can be configured as non-general-purpose processors.

Benefit: the model number does not change, so there is no software licensing cost increase.

IFL - Integrated Facility for LINUX ICF - Integrated Coupling Facility

ICF and IFL

Page 5

z900 Models 2064-(101-109)

Processor Unit (PU) types:

CP - Central (General) Processor
ICF - Integrated Coupling Facility
IFL - Integrated Facility for LINUX
SAP - System Assist Processor

All contain a 12 PU MultiChip Module (MCM)

[Diagram: CPC memory plus the 12 unconfigured PUs on the MCM]

Page 6

CPs Defined = Model Number

z900 Model 2064-105

5 PUs Configured as CPs = Model 105

All contain a 12 PU MultiChip Module (MCM)

[Diagram: CPC memory plus the MCM with 5 CPs, 2 SAPs, and 5 unconfigured PUs]

CP - Central (General) Processor

Page 7

z900 Model 2064-105

5 PUs Configured as CPs = Model 105

CP - Central (General) Processor
CPs Defined = Model Number

ICFs, IFLs, and SAPs do not incur software charges

One PU always left unconfigured as a “spare”

[Diagram: CPC memory plus the MCM with 5 CPs, 2 ICFs, 2 IFLs, 2 SAPs, and 1 spare PU]

Page 8

There is one section per EBCDIC name that identifies a CPU type. 'CP' and 'ICF', with appropriate trailing blanks, are examples of EBCDIC names describing a General Purpose CPU and an Internal Coupling Facility CPU, respectively.

Field: SMF70CIN   Offset: 0   Length: 16   Format: EBCDIC   Description: CPU-identification Name

As of z/OS Version 1 Release 2, both IFLs and ICFs are represented by ‘ICF’ in the SMF type 70 CPU ID Section

IBM SMF Type 70 subtype 1 record - CPU Identification Section

CP = Central Processor

ICF - Integrated Coupling Facility

IFL - Integrated Facility for LINUX

ICFs and IFLs
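The SMF70CIN mapping above can be sketched in a few lines. This is a hypothetical helper, not part of the deck or any IBM API: it decodes the 16-byte EBCDIC name with Python's cp500 (EBCDIC) codec and reflects the z/OS 1.2 behavior where IFLs also report as 'ICF'.

```python
def cpu_type(smf70cin: bytes) -> str:
    """Classify a CPU from a 16-byte EBCDIC CPU-identification name.

    Per the slide: 'CP' = general purpose; as of z/OS 1.2 both ICFs
    and IFLs appear as 'ICF'. cp500 is Python's EBCDIC codec.
    """
    name = smf70cin.decode("cp500").rstrip()
    return {"CP": "general purpose CP", "ICF": "ICF or IFL"}.get(name, "unknown")

# 'CP' padded with EBCDIC blanks (0x40) out to 16 bytes
print(cpu_type("CP".ljust(16).encode("cp500")))   # general purpose CP
print(cpu_type("ICF".ljust(16).encode("cp500")))  # ICF or IFL
```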

Page 9

Allows up to 15 images (LPs) per CPC

Different control programs on images
– (z/OS, z/VM, Linux, CFCC, etc.)

Each LP (image) is assigned CPC resources:
– Processors (CPs) (referred to as “logical CPs”)
– Memory
– Channels

Each LP is either DEDICATED or SHARED

LP = Logical Partition
CPC = Central Processor Complex
Logical CP = Logical Processor

PR/SM LPAR

Page 10

Protection/isolation of business critical applications from non-critical workloads

Isolation of test operating systems
Workload balancing
Different operating systems -- same CPs
Ability to guarantee a minimum percent of the shared CP resource to each partition
More “white space” – the ability to handle spikes and unpredictable demand

PR/SM Benefits

Page 11

LP definitions entered on HMC
– Dedicated or not-dedicated (shared)
– Logical processors (initial, reserved)
– Weight (initial, min, max)
– Capped or not-capped
– CPC memory allocation
– I/O channel distribution/configuration
– More

CPC = Central Processor Complex

LP = Logical Partition

HMC = Hardware Management Console

LP Configuration Decisions

Page 12

An LP’s logical CPs are permanently assigned to specific CPC physical CPs

Less LPAR overhead (than shared LPs)

HMC Image Profile

ZOS1

• Dedicated LPs waste physical (CPC) processor cycles unless 100% busy
• When less than 100% busy, the physical CPs assigned to dedicated LPs are IDLE

LP = Logical Partition

Logical CP = Logical Processor

CPC = Central Processor Complex

Dedicated LPs

Page 13

LPAR MODE - Dedicated

[Diagram: ZOS1 with 3 dedicated LCPs and ZOS2 with 2 dedicated LCPs, each bound to a physical CP under PR/SM LPAR LIC, plus CPC memory]

ZOS1 Image - 3 Dedicated Logical Processors
ZOS2 Image - 2 Dedicated Logical Processors

Same problem as basic mode - unused cycles are wasted

LCP = Logical CP = Logical Processor

Page 14

HMC Image Profile

ZOS1

Shared LPs

Page 15

HMC Image Profile

ZOS2

Shared LPs

Page 16

LPAR Mode - Shared

[Diagram: ZOS1 and ZOS2 dispatching LCPs onto a 5-CP shared pool under PR/SM LPAR LIC, plus CPC memory]

ZOS1 Image: 5 Logical CPs, Weight 400
ZOS2 Image: 3 Logical CPs, Weight 100

LCP = Logical CP = Logical Processor

Page 17

What does LLIC (LPAR Licensed Internal Code) do?

LCPs are considered dispatchable units of work
LCPs are placed on a ready queue
LLIC executes on a physical CP:
– it selects a ready LCP and
– dispatches it onto a real CP

z/OS executes on the physical CP until its timeslice expires (12.5-25 milliseconds) or until z/OS enters a wait state

Environment saved, LLIC executes on the freed CP
If the LCP is still ready (used its timeslice), it is placed back on the ready queue

LCP = Logical CP = Logical Processor

LLIC = LPAR Licensed Internal Code

CP = Central Processor

LPAR Dispatching
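The dispatch loop above can be sketched as a toy simulation. Everything here (LP names, a single physical CP, plain round-robin order) is illustrative only; real LLIC selects ready LCPs by weight-derived priority, which the next slides cover.

```python
from collections import deque

def dispatch(ready: deque, timeslice_ms: float = 25.0) -> list:
    """Toy round-robin sketch of LLIC dispatching ready LCPs onto one
    physical CP. LCPs that use their full timeslice go back on the
    ready queue; LCPs that finish (enter a wait) leave the queue."""
    log = []
    while ready:
        lcp, demand_ms = ready.popleft()
        used = min(demand_ms, timeslice_ms)
        log.append(f"{lcp} ran {used:g} ms")
        remaining = demand_ms - used
        if remaining > 0:  # used the whole timeslice -> back on ready queue
            ready.append((lcp, remaining))
    return log

print(dispatch(deque([("ZOS1-LCP0", 40.0), ("ZOS2-LCP0", 10.0)])))
```

With 40 ms of demand against a 25 ms timeslice, ZOS1-LCP0 is requeued once, matching the "used timeslice, placed back on ready queue" rule.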

Page 18

Priority on the “ready” queue is determined by PR/SM LIC

– Based on LP logical CP “actual” utilization versus “targeted” utilization

Targeted utilization is determined as a function of #LCPs and LP Weight

– LP weight is a user specified number between 1 and 999 (recommended 3 digits)

LCP = Logical CP = Logical Processor CP = Central Processor

LP = Logical Partition LLIC = LPAR Licensed Internal Code

Selecting Logical CPs

Page 19

LP Weights - Shared Pool %

[Diagram: ZOS1 (weight 400, 5 LCPs) and ZOS2 (weight 100, 3 LCPs) sharing a 5-CP pool under PR/SM LPAR LIC, plus CPC memory]

• Total of LP Weights = 400 + 100 = 500
• ZOS1 LP Weight % = 100 * 400/500 = 80%
• ZOS2 LP Weight % = 100 * 100/500 = 20%

LCP = Logical CP = Logical Processor
LP = Logical Partition

Page 20

A weight is assigned to each LP defined as shared
All active LP weights are summed to a Total
Each LP is guaranteed a number of the pooled physical CPs based on its weight % of the Total
Based on the number of shared logical CPs defined for each LP and the LP weight %, LLIC determines the “ready queue” priority of each logical CP

Weight priority is enforced only when there is contention!

LP = Logical Partition

CP = Central Processor

LLIC = LPAR Licensed Internal Code

LCP = Logical CP = Logical Processor

LP Weights Guarantee“Pool” CP % Share

Page 21

[Diagram: ZOS1 (weight 400) and ZOS2 (weight 100) sharing the 5-CP pool, plus CPC memory]

• ZOS1 LP Weight % = 80%; Target CPs = 0.8 * 5 = 4.0 CPs
• ZOS2 LP Weight % = 20%; Target CPs = 0.2 * 5 = 1.0 CPs

LP = Logical Partition LCP = Logical CP = Logical Processor

CP = Central Processor

LP Target CPs

Page 22

LP = Logical Partition LCP = Logical CP = Logical Processor

CP = Central Processor

LP Logical CP share

ZOS1 LP is guaranteed 4 physical CPs
– ZOS1 can dispatch work to 5 logical CPs
– Each ZOS1 logical CP gets 4/5 or 0.8 CP
– ZOS1 effective speed = 0.8 * potential speed

ZOS2 LP is guaranteed 1 physical CP
– ZOS2 can dispatch work to 3 logical CPs
– Each ZOS2 logical CP gets 1/3 or 0.333 CP
– ZOS2 effective speed = 0.333 * potential speed
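The weight-share arithmetic on this and the preceding slides reduces to two divisions. A minimal sketch using the deck's ZOS1/ZOS2 numbers; `lp_share` is my own name, not a real API.

```python
def lp_share(weights: dict, lcps: dict, pool_cps: int) -> dict:
    """For each shared LP, return (target physical CPs, effective
    logical-CP speed): target = pool * weight/total, speed = target/LCPs."""
    total = sum(weights.values())
    out = {}
    for lp, w in weights.items():
        target = pool_cps * w / total          # guaranteed share of the pool
        out[lp] = (target, target / lcps[lp])  # effective speed per LCP
    return out

shares = lp_share({"ZOS1": 400, "ZOS2": 100}, {"ZOS1": 5, "ZOS2": 3}, pool_cps=5)
print(shares)  # ZOS1 -> (4.0, 0.8); ZOS2 -> (1.0, 0.333...)
```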

Page 23

An active LP’s weight can be changed non-disruptively using the system console

– Increasing an LP’s weight by “x”, without any other configuration changes, increases its pooled CP share at the expense of all other shared LPs

– This is because the TOTAL shared LP weight increased, while all other sharing LPs’ weights remained constant; for each unchanged LPn:

  LPn weight / TOTAL  >  LPn weight / (TOTAL + x)

LP = Logical Partition CP = Central Processor

Impact of Changing Weights

Page 24

[Diagram: ZOS2’s weight is raised from 100 to 200 while ZOS1 stays at 400 on the 5-CP shared pool]

• Total of LP Weights = 400 + 200 = 600
• ZOS1 LP Weight % = 100 * 400/600 = 66.67%
• ZOS2 LP Weight % = 100 * 200/600 = 33.33%

LP = Logical Partition LCP = Logical CP = Logical Processor

CP = Central Processor

Changing LPAR Weights

Page 25

[Diagram: ZOS1 (weight 400) and ZOS2 (weight 200) on the 5-CP shared pool]

• ZOS1 Weight % = 66.67%; Target CPs = 0.667 * 5 = 3.335 CPs
• ZOS2 LP Weight % = 33.33%; Target CPs = 0.333 * 5 = 1.665 CPs

LP = Logical Partition
LCP = Logical CP = Logical Processor
CP = Central Processor

LP Target CPs

Page 26

ZOS1 LP is guaranteed 3.335 physical CPs
– ZOS1 can dispatch work to 5 logical CPs
– Each ZOS1 logical CP gets 3.335/5 or 0.667 CP
– ZOS1 effective speed = 0.667 * potential speed

ZOS2 LP is guaranteed 1.665 physical CPs
– ZOS2 can dispatch work to 3 logical CPs
– Each ZOS2 logical CP gets 1.665/3 or 0.555 CP
– ZOS2 effective speed = 0.555 * potential speed

LP = Logical Partition LCP = Logical CP = Logical Processor

CP = Central Processor

LP Logical CP share

Page 27

An active LP’s logical CPs can be increased or reduced non-disruptively

Changing the number of logical CPs for a shared LP increases or decreases the LP work “potential”

– Changes z/OS and PR/SM overhead
– Does not change the % CPC pool share
– Changes the LP logical CP “effective speed”

LP = Logical Partition LCP = Logical CP = Logical Processor

CP = Central Processor

CPC = Central Processor Complex

Changing Logical CP Count

Page 28

[Diagram: a logical CP is added to ZOS2 (3 -> 4 LCPs); weights stay at 400 and 100 on the 5-CP shared pool]

• Total LP Weights = 400 + 100 = 500
• ZOS1 LP Weight % = 100 * 400/500 = 80%
• ZOS2 LP Weight % = 100 * 100/500 = 20%

WEIGHT % UNCHANGED!!

Adding Logical CPs

Page 29

[Diagram: ZOS2 now has 4 LCPs; weights unchanged at 400 and 100]

• ZOS1 Weight % = 80%; Target CPs = 0.8 * 5 = 4.0 CPs
• ZOS2 LP Weight % = 20%; Target CPs = 0.2 * 5 = 1.0 CPs

TARGET CPs UNCHANGED!!

LP = Logical Partition
LCP = Logical CP = Logical Processor
CP = Central Processor

Adding Logical CPs

Page 30

ZOS2 Effective logical CP speed DECREASED!!

ZOS1 LP is guaranteed 4 physical CPs
– ZOS1 can dispatch work to 5 logical CPs
– Each ZOS1 logical CP gets 4/5 or 0.8 CP
– ZOS1 effective speed = 0.8 * potential speed

ZOS2 LP is guaranteed 1 physical CP
– ZOS2 can dispatch work to 4 logical CPs
– Each ZOS2 logical CP gets 1/4 or 0.25 CP
– ZOS2 effective speed = 0.25 * potential speed

LP = Logical Partition LCP = Logical CP = Logical Processor

CP = Central Processor

Adding Logical CPs

Page 31

[Diagram: a logical CP is removed from ZOS2 (3 -> 2 LCPs); weights stay at 400 and 100 on the 5-CP shared pool]

• Total LP Weights = 400 + 100 = 500
• ZOS1 LP Weight % = 100 * 400/500 = 80%
• ZOS2 LP Weight % = 100 * 100/500 = 20%

WEIGHT % UNCHANGED!!

LP = Logical Partition
LCP = Logical CP = Logical Processor
CP = Central Processor

Subtracting Logical CPs

Page 32

[Diagram: ZOS2 now has 2 LCPs; weights unchanged at 400 and 100]

• ZOS1 Weight % = 80%; Target CPs = 0.8 * 5 = 4.0 CPs
• ZOS2 LP Weight % = 20%; Target CPs = 0.2 * 5 = 1.0 CPs

TARGET CPs UNCHANGED!!

LP = Logical Partition
LCP = Logical CP = Logical Processor
CP = Central Processor

Subtracting Logical CPs

Page 33

ZOS2 Effective logical CP speed INCREASED!!

ZOS1 LP is guaranteed 4 physical CPs
– ZOS1 can dispatch work to 5 logical CPs
– Each ZOS1 logical CP gets 4/5 or 0.8 CP
– ZOS1 effective speed = 0.8 * potential speed

ZOS2 LP is guaranteed 1 physical CP
– ZOS2 can dispatch work to 2 logical CPs
– Each ZOS2 logical CP gets 1/2 or 0.5 CP
– ZOS2 effective speed = 0.5 * potential speed

LP = Logical Partition LCP = Logical CP = Logical Processor

CP = Central Processor

Subtracting Logical CPs

Page 34

Both z/OS and PR/SM overhead minimized when LCP count is equal to the physical CP requirements of the executing workload

The number of LCPs online to an LP is correct … sometimes …
– When the LP is CPU constrained: too few
– When the LP is idling: too many
– When the LP is about 100% busy: just right!

Ideally, effective LCP speed = 1.0

LP = Logical Partition LCP = Logical CP = Logical Processor

CP = Central Processor

Logical CPs - How Many?
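The "how many" rule of thumb above can be expressed as a tiny heuristic. This is my own illustration of the slide's point (effective LCP speed near 1.0 means LCP count near the LP's guaranteed physical CPs), not an IBM-documented formula.

```python
import math

def suggested_lcps(target_cps: float) -> int:
    """Heuristic: define about as many LCPs as the LP's guaranteed
    physical CPs, rounded up for a little headroom, so that the
    effective LCP speed (target/LCPs) stays close to 1.0."""
    return max(1, math.ceil(target_cps))

print(suggested_lcps(4.0))    # 4
print(suggested_lcps(1.665))  # 2
```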

Page 35

LP definitions entered on HMC
– Dedicated or not-dedicated (shared)
– Logical processors (initial, reserved)
– Weight (initial, min, max)
– Capped or not-capped
– CPC memory allocation
– I/O channel distribution/configuration
– etc.

HMC = Hardware Management Console

CPC = Central Processor Complex

LP Configuration Decisions

Page 36

HMC Image Profile

Initial weight enforced
LLIC will not allow an LP to use more than its guaranteed shared pool % even when other LPs are idle

Dynamic change to capping status
– Capped or not capped
– Capped weight value

• In general, not recommended

LLIC = LPAR Licensed Internal Code LP = Logical Partition

LP “Hard” Capping

Page 37

IRD brings four new functions to the parallel SYSPLEX that help ensure important workloads meet their goals

– WLM LPAR Weight Management– WLM Vary CPU Management– Dynamic Channel-Path Management– Channel Subsystem I/O Priority Queueing

Intelligent Resource Director

Page 38

IRD WLM CPU management allows WLM to dynamically change the weights and number of online logical CPs of all z/OS shared LPs in a CPC LPAR cluster

IRD WLM Weight Management
– Allows WLM to instruct PR/SM to adjust shared LP weight

IRD WLM Vary CPU Management
– Allows WLM to instruct PR/SM to adjust the logical CPs online to LPs

Logical CP = Logical Processor
LP = Logical Partition

IRD = PR/SM + WLM

Page 39

HMC Image Profile

ZOS1

Running z/OS in 64-bit mode
Running the z900 in LPAR mode
Using shared (not dedicated) CPs
No hard LP caps
Running WLM goal mode
LPs must select “WLM Managed”
Access to the SYSPLEX coupling facility

LP = Logical Partition

IRD Prerequisites

Page 40

What is an LPAR Cluster?

An LPAR cluster is the set of all z/OS shared LPs in the same z/OS parallel SYSPLEX on the same CPC

[Diagram: two z900 CPCs; each runs a mix of z/VM, Linux, and z/OS LPs (some dedicated, some shared) belonging to SYSPLEX1 and SYSPLEX2]

CPC = Central Processor Complex

Page 41

What is an LPAR Cluster?

4 LPAR clusters in this configuration (color coded)

[Diagram: the same two-z900 configuration; on each CPC, the shared z/OS LPs that belong to the same SYSPLEX form one LPAR cluster]

Page 42

Dynamically changes LP Weights
Donor/Receiver strategy
WLM evaluates all SYSPLEX workloads for Suffering Service Class Periods (SSCPs):
– High (>1) SYSPLEX Performance Index (PI)
– High Importance
– CPU delays

LP = Logical Partition WLM = Workload Manager

WLM LPAR Weight Management

Page 43

IF the SSCP is missing its goal due to CPU delay, and WLM cannot help the SSCP by adjusting dispatch priorities within an LP,

THEN WLM and PR/SM start talking:
1. Estimate the impact of increasing the SSCP’s LP weight
2. Find a donor LP if there will be SSCP PI improvement
3. The donor LP must contain a heavy CPU-using SCP
4. Evaluate the impact of reducing the donor LP’s weight
   - Cannot hurt donor SCPs with >= importance
5. WLM changes weights via the new LPAR interface

SSCP = Suffering Service Class Periods

PI = Performance Index SCP = Service Class Period

WLM = Workload Manager

WLM Policy Adjustment Cycle

Page 44

Rules and Guidelines

5% from donor, 5% to receiver

No “recent” LP cluster weight adjustments
– Must allow time for the impact of recent adjustments; avoid a see-saw effect

Receiver and Donor LPs will always obey specified min/max weight assignments

Non-z/OS images unaffected because total shared LP weight remains constant!

LP = Logical Partition
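The donor/receiver rules above can be sketched in code. This is an illustrative simplification (function and parameter names are mine; real WLM logic weighs PI projections and importance): move roughly 5% of weight, clamp to each LP's min/max, and keep the cluster total constant so non-z/OS LPs are unaffected.

```python
def transfer_weight(weights: dict, receiver: str, donor: str,
                    pct: float = 0.05, bounds: dict = None) -> dict:
    """Move ~pct of the donor's weight to the receiver, honoring
    optional (min, max) bounds per LP. The total stays constant."""
    bounds = bounds or {}
    delta = min(round(weights[donor] * pct),
                weights[donor] - bounds.get(donor, (1, 999))[0],
                bounds.get(receiver, (1, 999))[1] - weights[receiver])
    delta = max(delta, 0)  # never move a negative amount
    new = dict(weights)
    new[donor] -= delta
    new[receiver] += delta
    return new

w = transfer_weight({"PROD": 400, "TEST": 100}, receiver="PROD", donor="TEST")
print(w)  # {'PROD': 405, 'TEST': 95}; total still 500
```

Because the total is unchanged, every other LP's weight % of the pool is untouched, which is exactly the point of the last bullet above.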

Page 45

MAKE SURE YOUR SCP GOALS AND IMPORTANCE REFLECT REALITY AT THE LPAR CLUSTER LEVEL!

Because WLM thinks you knew what you were doing!

Goals Should Reflect Reality

SCP = Service Class Period WLM = Workload Manager

Page 46

Goals / Reality Continued

In the past, the WLM goal mode SCPs on your “test” or “development” LPs had no impact on “production” LPs

– If they are part of the same LPAR cluster,
– IRD will take resource away from (decrease the weight of) the “production” LP, and
– add resource to (increase the weight of) the “test” LP to meet the goal set for an SCP of higher importance on the “test” LP

Develop the service policy as though all SCPs are running on a single system image

SCP = Service Class Period
WLM = Workload Manager

LP = Logical Partition

Page 47

WLM uses the Level of Importance YOU assign to make resource allocation decisions!

[Example: five workloads – CICS, WEBSITE, DAVE’S STUFF, GUTTER WORK, and BOB’S STUFF – assigned Importance levels 1 through 5]

Workload Manager Level of Importance

Page 48

Varies logical CPs online/offline to LPs Goals:

– Higher effective logical CP speed– Less LPAR overhead and switching

Characteristics:– Aggressive: Vary logical CP online– Conservative: Vary logical CP offline

Influenced by IRD LP weight adjustments

Logical CP = Logical ProcessorLP = Logical Partition

WLM Vary CPU Management

Page 49

Only initially-online logical CPs are eligible
– Logical CPs varied offline by an operator are not available

If a z/OS LP is switched to compatibility mode:
– all IRD weight and vary-logical-CP adjustments are “undone”
– the LP reverts to its initial CP and weight settings

LP = Logical Partition

CP = Central Processor

Logical CP = Logical Processor

Vary CPU Algorithm Parameters

Page 50

What is Online Time?

RMF interval: 30 minutes
Note: LCP 2 varied offline during the interval

PAST (pre-IRD): the length of the interval was the MAX time that each LCP could be dispatched (interval time = online time). RMF reported the actual dispatch time, and only indicated that LCP 2 was not online at the end of the interval.

PRESENT (IRD): RMF now reports the online time for each LCP of each partition.

[Diagram: dispatch-time and online-time timelines for LCP 0, LCP 1, and LCP 2 across the 30-minute interval, before and after IRD]

Page 51

Before IRD CPU Vary Management:

  CPU Time = Interval Time - Wait Time
  CPU % Busy = (CPU Time * 100) / (Interval Time * No. Processors)

After IRD CPU Vary Management:

  CPU Time = Online Time - Wait Time
  CPU % Busy = (CPU Time * 100) / Total Online Time

New SMF70ONT field: the total time a processor was online to the LP during the RMF interval

Vary CPU - CPU Percent Busy
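The two formulas above can be compared with a small worked example. The numbers (30-minute interval, 3 LCPs, one varied offline after 10 minutes) are my own illustration, not from the deck.

```python
def cpu_pct_busy_pre_ird(interval_min: float, wait_min: float, n_procs: int) -> float:
    """Pre-IRD: every defined LCP is assumed online for the whole interval."""
    cpu_time = interval_min * n_procs - wait_min
    return cpu_time * 100 / (interval_min * n_procs)

def cpu_pct_busy_ird(online_min: float, wait_min: float) -> float:
    """With IRD vary CPU management: SMF70ONT online time is the
    denominator, so a varied-off LCP no longer distorts the figure."""
    cpu_time = online_min - wait_min
    return cpu_time * 100 / online_min

# 30-min interval, 3 LCPs, one offline after 10 min:
# total online = 30 + 30 + 10 = 70 min, total wait = 35 min
print(cpu_pct_busy_pre_ird(30, 35, 3))  # ~61.1% (counts offline time as busy)
print(cpu_pct_busy_ird(70, 35))         # 50.0%
```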

Page 52

Goals:
– Better I/O response (less pend) time
– I/O configuration definition simplification
– Reduces the need for > 256 channels
– Enhanced availability
– Reduced management

Operates in both goal and compatibility modes

Dynamic Channel-PathManagement (DCM)

Page 53

Systems running on a z900 or later CPC
Running z/OS 1.1 in 64-bit mode
Both LPAR and basic mode supported
To share managed channels on the same CPC, systems must be in an LPAR cluster
Balance mode: either compatibility or goal mode
Goal mode requires WLM goal mode and that global I/O priority queueing be selected

DCM = Dynamic Channel-path Management

CPC = Central Processor Complex

DCM Prerequisites

Page 54

IRD DCM Balance Mode

[Charts: channel path % busy (0-100%) for ~48 channel paths, before IRD DCM (uneven utilization) and after IRD DCM (evenly balanced utilization)]

Stated simply, the goal of IRD DCM is to evenly distributeI/O activity across all channel paths attached to the CPC

DCM = Dynamic Channel-path Management

CPC = Central Processor Complex

Page 55

Moves channel bandwidth where needed

Simplified configuration definition
– For managed CUs, define 22 non-managed paths plus nn managed paths to meet peak workload

DCM balance mode removes paths from non-busy CUs and adds paths to busy CUs

Currently manages paths to DASD CUs

New metric: I/O Velocity

DCM = Dynamic Channel-path Management

CU = Control Unit

DCM Balance Mode

Page 56

I/O Velocity = LCU productive time / (LCU productive time + channel contention delays)

Calculated by WLM and the IOS
Uses the same CF structure as CPU management
Calculated for every LCU in the LPAR cluster to compute a weighted average
DCM attempts to ensure all managed LCUs have similar I/O velocities

LCU = Logical Control Unit
CF = Coupling Facility
DCM = Dynamic Channel-Path Management

I/O Velocity
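The velocity ratio above is a one-liner; this sketch just makes the formula and its range concrete (the sample times are invented).

```python
def io_velocity(productive_ms: float, contention_delay_ms: float) -> float:
    """I/O velocity per the slide's formula:
    productive time / (productive time + channel contention delays).
    1.0 means no channel contention; lower means more contention."""
    return productive_ms / (productive_ms + contention_delay_ms)

print(io_velocity(80.0, 20.0))  # 0.8
print(io_velocity(50.0, 0.0))   # 1.0
```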

Page 57

All LPs in the I/O cluster are in WLM goal mode

During the policy adjustment routine, WLM selects an SSCP:
– IF I/O delays are the problem, and increasing I/O priority does not help, and adding an alias for PAV volumes will not help, and I/O requests are suffering channel contention …
– THEN WLM estimates the impact of increasing the LCU I/O velocity and, if it benefits the SCP PI, sets an explicit velocity for the LCU

Explicit velocity overrides Balance mode

LCU = Logical Control Unit
SSCP = Suffering Service Class Periods

SCP = Service Class Period PI = Performance Index

DCM Goal Mode

Page 58

Dynamically manages channel subsystem I/O priority against WLM policy goals

Only meaningful when I/Os are queued

Supports prioritization of:

– I/Os waiting for a SAP– I/Os waiting for a channel

Previously, these delays were handled FIFO

SAP = System Assist Processor

Channel Subsystem (CSS) I/O Priority Queueing

Page 59

Uses Donor/Receiver strategy

CSS I/O priority setting (waiting for a SAP):
– System SCPs assigned highest priority
– I/O-delayed SCPs missing goals next
– When meeting goals:
  • Light I/O users higher
  • Discretionary work has lowest priority

UCB and CU I/O priority setting (waiting for a channel):
– System SCPs assigned highest priority
– I/O-delayed SCPs missing goals next
– Less important SCPs are donors

SAP = System Assist Processor

SCP = Service Class Period

Channel Subsystem I/O Priority Queueing (cont’d)

Page 60

IBM License Manager (ILM)

PROBLEMS:
– Software charges based on CPC size
– CPCs getting bigger
– Workloads more erratic (spikes)
  • eBusiness

SOLUTION:
– New LP setup option: Defined Capacity
– Software priced at the LP’s Defined Capacity

CPC = Central Processor Complex LP = Logical Partition

Page 61

IBM License Manager (ILM)

z900 servers running z/OS V1R1+
New external: “Defined Capacity” for shared LPs
Defined capacity expressed in MSUs
– Millions of Service Units per hour

Rolling 4 hour average MSU use compared with defined capacity by WLM

When rolling 4 hour MSU usage exceeds defined capacity, WLM tells PR/SM to “soft cap” LP (really a temporary hard cap)

MSU = Millions of Service Units
LP = Logical Partition
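The rolling-average check above can be sketched as follows. The sampling interval, sample values, and function name are my own illustration; the real WLM/PR/SM mechanism is more elaborate than this comparison.

```python
from collections import deque

def soft_cap_needed(msu_samples: list, defined_capacity: float,
                    samples_per_4h: int = 48) -> list:
    """For each interval MSU figure (e.g. one per 5 minutes -> 48 per
    4 hours), return (rolling 4-hour average, would-soft-cap?)."""
    window = deque(maxlen=samples_per_4h)
    result = []
    for msu in msu_samples:
        window.append(msu)
        avg = sum(window) / len(window)
        result.append((round(avg, 1), avg > defined_capacity))
    return result

# a spike above the 100 MSU defined capacity
print(soft_cap_needed([90, 95, 180, 60], defined_capacity=100))
```

Note how a single spike to 180 MSU only triggers capping once it pulls the 4-hour average over the defined capacity, which is the point of averaging.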

Page 62

Rolling Four Hour Average

[Chart: actual MSUs consumed per interval (0-700) from 6:14 to 13:14, with the rolling 4-hour average overlaid]

Page 63

IBM License Manager (ILM)

For software pricing, IBM uses the following:
– Dedicated LPs: logical CPs * engine MSU
– PR/SM hard cap: shared pool % * engine MSU
– Defined Capacity: the defined capacity
– Basic mode: the model’s MSU rating

www.ibm.com/servers/eserver/zseries/srm/

MSU = Millions of Service Units

Logical CP = Logical Processor
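The four pricing cases above can be written out as a sketch. All parameter names here are mine, not IBM's, and for the hard-cap case I read the slide's "shared pool % * engine MSU" as the pool share applied to the pool's engines; treat this as an interpretation, not a pricing authority.

```python
def priced_msu(mode: str, *, logical_cps: int = 0, engine_msu: float = 0.0,
               pool_pct: float = 0.0, pool_engines: int = 0,
               defined_capacity: float = 0.0, model_msu: float = 0.0) -> float:
    """Sketch of the four ILM pricing cases; engine_msu is the MSU
    rating of one CP."""
    if mode == "dedicated":        # logical CPs * engine MSU
        return logical_cps * engine_msu
    if mode == "hard_cap":         # pool share * pool engines * engine MSU
        return pool_pct * pool_engines * engine_msu
    if mode == "defined_capacity": # priced at the defined capacity itself
        return defined_capacity
    if mode == "basic":            # the full model MSU rating
        return model_msu
    raise ValueError(mode)

print(priced_msu("dedicated", logical_cps=3, engine_msu=35.0))  # 105.0
print(priced_msu("defined_capacity", defined_capacity=100.0))   # 100.0
```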

Page 64

IBM License Manager (ILM)

If ZOS1 is set with a defined capacity of 100 MSU …

[Chart: ZOS1 MSU consumption against the 100 MSU defined capacity line; the region between 100 and 175 MSU is “white space” - the ability to handle spikes and unpredictable demand]

Page 65

CUoD supports non-disruptive activation of PUs as CPs or ICFs in a CPC

Specify “Reserved Processors”

When an upgrade is made (e.g., 105 to 106):
– LPs with reserved processors can begin using the new capacity immediately after an operator command

IBM recommends specifying as many reserved processors as the model supports

CUoD = Capacity Upgrade on Demand

PU = Processor Unit

CP = Central ProcessorICF = Integrated Coupling Facility

LP = Logical Partition

CPC = Central Processor Complex

Capacity Upgrade on Demand

Page 66

z900 Model 2064-105

5 PUs Configured as CPs = Model 105

All contain a 12 PU MultiChip Module (MCM)

[Diagram: CPC memory plus the MCM with 5 CPs, 2 SAPs, and 5 unconfigured PUs]

CP - Central (General) Processor
CPs Defined = Model Number

Page 67

HMC Image Profile

ZOS1

Shared LPs

Page 68

CPs Defined = Model Number

z900 Model 2064-106

5 + 1 PUs Configured as CPs = Model 106

All contain a 12 PU MultiChip Module (MCM)

[Diagram: CPC memory plus the MCM, now with 6 CPs, 2 SAPs, and 4 unconfigured PUs]

CP - Central (General) Processor

Page 69

LP Weights - LP Weight %

[Diagram: ZOS1 (weight 400, 5 LCPs) and ZOS2 (weight 100, 3 LCPs) now share a 6-CP pool after the upgrade]

• Total of LP Weights = 400 + 100 = 500
• ZOS1 LP Weight % = 100 * 400/500 = 80%
• ZOS2 LP Weight % = 100 * 100/500 = 20%

Page 70

[Diagram: the same LPs and weights on the now 6-CP shared pool]

• ZOS1 Weight % = 80%; Target CPs = 0.8 * 6 = 4.8 CPs
• ZOS2 LP Weight % = 20%; Target CPs = 0.2 * 6 = 1.2 CPs

LP Target CPs

Page 71

ZOS1 LP is guaranteed 4.8 physical CPs
– ZOS1 can dispatch work to 5 logical CPs
– Each ZOS1 logical CP gets 4.8/5 or 0.96 CP
– ZOS1 effective speed = 0.96 * potential speed

ZOS2 LP is guaranteed 1.2 physical CPs
– ZOS2 can dispatch work to 3 logical CPs
– Each ZOS2 logical CP gets 1.2/3 or 0.40 CP
– ZOS2 effective speed = 0.40 * potential speed

Logical CP = Logical Processor
CP = Central Processor

LP Logical CP share

Page 72

• Large system resource, configuration, and workload management tasks are shifting towards intelligent, automated, dynamic WLM functions

• The responsibility of capacity planners and performance analysts is shifting towards a better understanding of business workloads’ relative importance and performance requirements

Conclusions

Page 73

NeuMICS Support

April 2002 PSP
• CAP - New IRD and ILM Planning Applications
• PER - New MSU and Soft Capping Analysis

October 2001 PSP
• RMF6580 - IRD, ILM, and CUoD

April 2001 PSP
• RMF6560 - z/OS (64-bit) Multisystem Enclaves, USS Kernel, and 2105 Cache Controllers

October 2000 PSP
• RMF6540 - ICF, IFL, and PAV support

Page 74

Parallel Sysplex Overview: Introducing Data Sharing and Parallelism in a Sysplex (SA22-7661-00)

Redbook: z/OS Intelligent Resource Director (SG24-5952-00)

IBM e-server zSeries 900 and z/OS Reference Guide (G326-3092-00)

zSeries 900 Processor Resource/Systems Manager Planning Guide (SB10-7033-00)

References

Page 75

Trademarks

PR/SM

RMF

SMF

Sysplex Timer

S/390

VSE/ESA

zSeries

z/OS

CICS
DB2

ESCON

FICON

IBM

IMS

MVS

MVS/ESA

OS/390

Parallel Sysplex

Processor Resource/Systems Manager

The following terms are trademarks of the International Business Machines Corporation in the United States, or other countries, or both:

Page 76

Thanks!

Questions ???

Darrell Faulkner

NeuMICS Development Manager

Computer Associates

Email: [email protected]