TRANSCRIPT
• z/OS Performance Analysis
• WLM Update
• Db2 Performance Analysis
EPV Performance University 1
IRD and HiperDispatch
Agenda
• IRD
• HiperDispatch
• HiperDispatch measurements
IRD
• The Intelligent Resource Director (IRD) allows WLM to manage processor and channel subsystem resources, moving them from one LPAR to another within the same IRD cluster
• An IRD cluster is composed of LPARs:
– in the same CEC
– in the same Parallel Sysplex
• Initially IRD could also reduce the number of LPAR logical processors, but this function is now performed (more effectively) by HiperDispatch
• With HiperDispatch the functions still performed by IRD are:
– LPAR weight management
– Dynamic Channel Path Management
– Channel Subsystem Priority Queuing
• Very few customers use IRD for DCPM and CSPQ
• We will only discuss LPAR weight management
• With LPAR weight management, you give each logical partition an initial LPAR weight along with an optional minimum and maximum weight
• WLM will then dynamically balance these weights to best meet the goals of the work in the partitions, with no human intervention
• If you don’t set a minimum and maximum weight, WLM will be free to manage the LPAR weights, almost without limitations, towards the goal of the most important applications
• The total weight of the cluster as a whole will remain constant, so LPARs outside the cluster are unaffected
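The balancing constraint described above can be sketched in a few lines of Python. This is only an illustration of the min/max/constant-total rules, not WLM's actual algorithm, and the LPAR names, weights, and limits below are hypothetical:

```python
# Illustrative sketch (not WLM's real logic): shift weight from donor
# LPARs to a favored LPAR, respecting each LPAR's optional minimum and
# maximum while keeping the cluster's total weight constant.

def shift_weight(weights, limits, receiver, amount):
    """weights: {lpar: current weight}; limits: {lpar: (min, max)}."""
    total_before = sum(weights.values())
    _, hi = limits[receiver]
    amount = min(amount, hi - weights[receiver])   # cap by receiver's maximum
    for lpar, w in weights.items():
        if lpar == receiver or amount <= 0:
            continue
        lo_d, _ = limits[lpar]
        give = min(amount, w - lo_d)               # donor can't go below its minimum
        weights[lpar] -= give
        weights[receiver] += give
        amount -= give
    assert sum(weights.values()) == total_before   # cluster total unchanged
    return weights

# Hypothetical cluster: one favored LPAR plus two donors
cluster = {"PRDE01": 880, "PRDA01": 60, "PRDB01": 60}
limits  = {"PRDE01": (880, 930), "PRDA01": (35, 60), "PRDB01": (35, 60)}
shift_weight(cluster, limits, "PRDE01", 50)
# PRDE01 is raised to 930; the donors drop to 35 each; total stays 1000
```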
• The PRPLX01 and TESTPLEX clusters have been defined
• IRD is active
• Note that IRD can manage only standard CPUs
• In PRPLX01 the PRDE01 LPAR weight can be raised up to 930 (+50); the minimum is equal to the initial weight (880)
• All the other LPARs have the maximum equal to the initial weight; they can only be reduced (globally -50)
• TESTE01 and TESTE02 LPARs in the TESTPLEX cluster have no minimum and no maximum defined
• WLM could freely manage their weights but it never reduces the weight to less than 5% of a CP
• To activate LPAR weight management you have to set the following definitions in HMC for each LPAR:
– Enter the initial processing weight
– Enter the minimum and maximum weights
– Check the WLM Managed box
• You also have to create and activate a Coupling Facility structure (SYSZWLM_xxxxyyyy, where the xxxxyyyy suffix represents a portion of the CPU ID for the CEC)
HiperDispatch
• HiperDispatch goals are:
– Reducing the system overhead by using only the needed logical processors
– Reducing the system overhead by re-dispatching work on the same logical and physical processor
• WLM, z/OS dispatcher and PR/SM work together to reach these goals
z15 processor cache architecture
• HD vertical polarization assigns the LPs of an LPAR to one of the following groups:
– high processor share (or vertical high polarity); they will have a target share corresponding to 100% of a CP
– medium processor share (or vertical medium polarity); they will have a target share greater than 0% and normally less than 100% of a CP; they get the remainder of the LPAR’s shares after the allocation of high share LPs; at least one medium is needed (to provide part of the share to VL)
– low processor share (or vertical low polarity); they will receive a target share of 0% of a CP; they are not needed for the LPAR to fully utilise the CPs associated with its weight; they will be parked/unparked
• The sum of the target share of vertical high and medium LPs corresponds to the LPAR target CPs calculated by multiplying LPAR %WEIGHT by the physical CPs in the CEC
LPAR %WEIGHT = 30%; CEC CPs = 19; LPAR LPs = 8; TARGET CPs = 5.7
VH = 5; VM = 1 (70% share); VL = 2 (0% share)
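The arithmetic of the example above can be sketched as follows. This simplified illustration only reproduces this particular case; the actual PR/SM and WLM rules are more involved (for example, the share given to a medium LP and the number of mediums follow additional rules):

```python
# Simplified sketch of the vertical polarization arithmetic.
# Reproduces only the slide's example; real PR/SM/WLM rules differ.

def polarity_split(weight_pct, cec_cps, lpar_lps):
    target = weight_pct / 100 * cec_cps     # 30% of 19 CPs = 5.7 target CPs
    vh = int(target)                        # whole CPs -> vertical highs
    vm_share = round((target - vh) * 100)   # remainder -> one vertical medium
    vm = 1                                  # at least one medium is needed
    vl = lpar_lps - vh - vm                 # the rest are vertical lows (0% share)
    return vh, vm, vl, vm_share

print(polarity_split(30, 19, 8))   # (5, 1, 2, 70)
```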
• High polarity processors will be re-dispatched on the same physical processor or chip
• Medium and Low polarity processors have no fixed physical processor placement
• WLM and z/OS Dispatcher:
✓ manage work in multiple affinity dispatch queues
✓ consider all the LPs associated to the same affinity dispatch queue as a LP affinity pool
• PR/SM:
✓ establishes affinity nodes (chips) which correspond to high polarity LP affinity pools
✓ tries to dedicate a CP to each high polarity LP
✓ tries to keep all the CPs in the same affinity node on the same book/node
✓ tries to dispatch a LP to the same CP previously used or, as an alternative, to a CP in the same affinity node
• This is the cycle performed by WLM every 2 seconds:
– Testing if HiperDispatch is ON or OFF
– Reading logical processor topology from PR/SM
– Parking and un-parking low polarity LPs based on processor demand
– Building affinity nodes
– Balancing units of work to affinity nodes
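The park/unpark step of the cycle can be sketched as below. The demand metric and thresholds are hypothetical, chosen only to illustrate the idea of parking vertical lows when demand is low and unparking them when it rises; they are not WLM's documented heuristics:

```python
# Illustrative sketch of the park/unpark decision for vertical-low LPs.
# Thresholds (90%/50%) are made up for illustration, not WLM's values.

def adjust_parking(parked_vls, total_vls, busy_pct):
    """Return the new number of parked vertical-low LPs."""
    if busy_pct > 90 and parked_vls > 0:
        return parked_vls - 1        # demand is high: unpark one VL
    if busy_pct < 50 and parked_vls < total_vls:
        return parked_vls + 1        # demand is low: park one VL
    return parked_vls                # otherwise leave the topology alone
```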
• SRB activity from the SYSSTC service class can run on any available logical processor
• The reason is the need to support the high performance requirements for work typically classified to SYSSTC (many short-running SRBs required for transaction workflow)
• Examples of this kind of SYSSTC address spaces are VTAM, TCP/IP and IRLM
HiperDispatch measurements
• Information about HiperDispatch polarization and parking is available in SMF 70
• Both for CPs and zIIPs
• zIIPs are managed by logical core, not by thread
• Vertical Polarity indicates 4 High, 1 Medium, 2 Low
• Only Low can be parked
• The sum of the SHARE values equals the LPAR TARGET CPs multiplied by 100
• The calculation is based on the LPAR %WEIGHT
• Information about HD is provided in SMF 99-14
• Records are written every 5 minutes or whenever a topology change occurs
• The most common reasons for a topology change are:
✓ Configuration changes
✓ Partition weight changes (also due to WLM soft capping)
• Records provide information by LPAR
• These records are always requested by IBM technical support when they are asked to investigate performance issues, so it's highly recommended to collect them on a regular basis
• To collect SMF 99 subtype 14 records you only have to enable them in SMFPRMxx
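As an illustration, enabling the subtype could look like the SMFPRMxx fragment below; the surrounding TYPE list is hypothetical and must be merged with your installation's existing TYPE/NOTYPE settings:

```
SYS(TYPE(30,70:79,99(14)))   /* existing types plus SMF 99 subtype 14 */
```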
• To exploit SMF 99-14, IBM WLM provides a free tool: the WLM Topology Report
• This tool is based on an Excel spreadsheet that displays lots of interesting information such as:
✓ the association of logical processors to chips, books or nodes/drawers
✓ the vertical polarization of the processors (high, medium, low)
✓ the processor type (CPU and zIIP)
✓ the association to WLM affinity nodes
✓ topology changes
• The topology report tool can be downloaded from an IBM ftp site by using the following link:
ftp://public.dhe.ibm.com/eserver/zseries/zos/wlm/
• Then you have to run the setup program
• You will get the TopoReport folder:
• In the HostTopo folder, you will find two files in transmit format
• You must upload them to your z/OS system in BINARY mode
• Then you need to issue the RECEIVE INDSN('<HLQ>.TOPOREP.JCL.BIN') and RECEIVE INDSN('<HLQ>.TOPOREP.LOADLIB.BIN') commands
• You will get:
✓ a load library, including the S99ERPTD program which converts SMF 99 subtype 14 records to CSV format
✓ a JCL library, including the SAMPLE JCL to be used to run the above program
• Sample JCL to run to produce the CSV file
• The CSV file has to be sent back to the Windows system where the topology report application has been installed
• To produce the report you have to open the TopoReport.xlsm spreadsheet
• Then click the Open New CSV File button
• Choose an interval and click the Copy Data button
• This section shows the reasons for any topology changes
• This section shows info about LPAR share and LPs
• This section shows affinity node and nesting levels info
• Finally click the Make Report button
• Each line is a LP coded as SSSS_NN_Vtttnnn:
✓ SSSS = SMF system id
✓ NN = WLM affinity node number
✓ V = polarization = {H,M,L}
✓ ttt = processor type = {CPU,IIP,AAP}
✓ nnn = processor number
• High LPs have a yellow background, medium LPs a light yellow background
• CPUs are written in black; zIIPs in red
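The naming scheme above is regular enough to parse mechanically, for example when post-processing the report. A minimal sketch (the sample label is made up for illustration):

```python
import re

# Parse the topology report's LP label, SSSS_NN_Vtttnnn, as described above.
LP_LABEL = re.compile(
    r"^(?P<system>[A-Z0-9]{1,4})_"   # SSSS: SMF system id
    r"(?P<node>\d{2})_"              # NN: WLM affinity node number
    r"(?P<polarity>[HML])"           # V: vertical polarization
    r"(?P<ptype>CPU|IIP|AAP)"        # ttt: processor type
    r"(?P<num>\d+)$"                 # nnn: processor number
)

m = LP_LABEL.match("EPV1_03_HCPU012")   # hypothetical label
print(m.groupdict())
# {'system': 'EPV1', 'node': '03', 'polarity': 'H', 'ptype': 'CPU', 'num': '012'}
```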
• Example of z13 topology
• LPs mostly on 2 nodes of the same drawer
• Some zIIPs on another drawer
• Nice and useful tool
• Some important limitations:
✓ SMF records are not synchronized with other SMF and RMF records
✓ No information about the mapping of logical processors to specific physical processors is provided
✓ Only one system at a time can be selected
✓ To get the complete picture you should create a report for each system and integrate them manually
• SMF 99-14 is fully supported in EPV zParser and EPV SMF2XL
• SMF 99-12 contains HD interval data
• A set of subtype 12 records is written for each WLM policy interval (10 seconds)
• 1 record every 2 seconds (WLM interval for HD)
• To collect SMF 99 subtype 12 records you only have to enable them in SMFPRMxx
• Lots of information about factors influencing HD activity:
✓ CEC busy
✓ MVS busy
✓ Utilization of LPAR share
✓ Utilization of LPAR share by LP polarization type
✓ LPAR weight
✓ Capping
✓ SMT
• Lots of information about HD activity:
✓ Parking
✓ Unparking
✓ LP polarization
• Not clearly documented
• SMF 99-12 is fully supported in EPV zParser and EPV SMF2XL
• If a performance problem is opened with IBM, it's very likely these SMF 99 records will be required
• It's good practice to collect them on a regular basis
Questions?