
Mainframe Performance Improvements

A DataKinetics Whitepaper


Table of Contents

Mainframe Performance Challenges
Mainframe Performance Improvement
High-Performance In-Memory Technology
Balancing Cost Savings With Performance
IT Business Intelligence
Conclusion
The Next Step
About Us


Mainframe Performance Challenges

Today, most CIOs, CTOs and IT managers are aware of the impact of digital transformation (DX) on their businesses and IT organizations. Disruptive technologies like mobile, big data, business analytics, cloud computing and storage, digital payments, and more recently the algorithmic economy and the Internet of Things (IoT), are making their impact felt. Meanwhile, in a growing economy, rising business workloads are making their own impact felt through increased online transactions, web requests, mobile requests, batch jobs, ad hoc queries, data warehousing analysis, utility jobs, Db2 commands, and more.

In many cases, the impact is felt as a perceived decrease in system performance: slower-running applications, less responsive databases, and a general erosion of computing response times as systems cope with the enormous increases in workload demands being piled upon them.

Is this an indictment of the value or capability of the mainframe as a business computing platform? Hardly. Business and IT management consulting firms will be happy to blame the mainframe platform itself, which is no surprise, since they stand to benefit financially if you are convinced that this is the case. The truth is that any platform being crushed by constantly increasing demands will suffer a similar fate. The solution is to fix the performance problem, not to scrap the platform.

Mainframe Performance Improvement

The mainframe is widely regarded as the best platform on the planet for large-scale transaction processing, because that is what it was designed for. No other platform can compete with the throughput of the mainframe. But workloads are increasing year over year, and the mainframe needs to keep pace.

There are several ways to improve mainframe performance. The most popular is a system upgrade, meaning either a move to a newer model mainframe (for example, from a zEnterprise 196 to a z14) or the addition of processors and memory to an existing system. These are costly solutions, but they do need to happen from time to time in a growing business environment. There are, however, techniques that can augment the upgrade cycle and reduce the frequency of upgrades, resulting in improved performance at a lower cost.

Are there really techniques that improve the performance of a mainframe processor, its memory or its buses? No. However, applications can be optimized to use far fewer system resources (I/O and CPU), allowing them to run faster and reducing their operational cost impact. This virtually improves the performance of the application. Optimize several applications, even in several different ways, and there can be a significant and measurable system-wide performance improvement. Similarly, if several database applications are optimized, there will be an apparent (and measurable) database performance improvement, even though the database itself has not been changed in any way.

Figure 1: Increased resource demand necessitating upgrades


High-Performance In-Memory Technology

In-memory technology augments your DBMS and its buffering facilities. The reference data used most often by your applications, a very small amount of data, is copied from the DBMS into dataspace-resident high-performance in-memory tables, where it is accessed through a simple, tight API. To get the most out of in-memory technology, you must identify the applications that perform repetitive accesses (thousands or millions) of read-only reference data, and you must identify that data.

The reason is that a small amount of data is responsible for most of the accesses to your databases. If you can replace those calls with in-memory calls, eliminating I/O, CPU and database overhead, you can make a significant difference to the overall performance of your mainframe system.
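To make the pattern concrete, here is a minimal sketch in Python. It is not the tableBASE API: the table, keys and values are hypothetical, and sqlite3 stands in for the DBMS so the sketch is self-contained. Hot, read-only reference data is loaded out of the database once; every repetitive access after that is a plain in-memory lookup.

import sqlite3

# sqlite3 stands in for the DBMS; the reference table is invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE currency_rates (code TEXT PRIMARY KEY, rate REAL)")
db.executemany("INSERT INTO currency_rates VALUES (?, ?)",
               [("USD", 1.00), ("EUR", 1.09), ("CAD", 0.74)])

# One-time load: copy the hot, read-only reference data out of the
# database into an in-memory table (here, a plain dict).
rates = dict(db.execute("SELECT code, rate FROM currency_rates"))

def lookup_rate(code):
    # Tight in-memory access: no SQL parse, no I/O, no DBMS overhead.
    return rates[code]

# Repetitive accesses now hit process memory instead of the database.
for _ in range(1_000_000):
    lookup_rate("EUR")

The database remains the system of record; the in-memory copy simply absorbs the repetitive read traffic.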

You may think that database buffering already gives you in-memory performance, but high-performance in-memory technology offers a far greater benefit. The key is the difference in code path length between this technology and your database buffers. Figure 3 shows the difference: a typical DBMS call to buffered data consumes 10,000 to 100,000 machine cycles, whereas a call to data held in a high-performance in-memory table consumes about 400 machine cycles.
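A back-of-envelope calculation shows what those cycle counts mean in practice. Only the per-call cycle figures come from the text above; the access volume and clock rate below are illustrative assumptions.

# Back-of-envelope CPU savings from the quoted cycle counts.
buffered_cycles  = 25_000        # within the quoted 10,000-100,000 range
in_memory_cycles = 400           # quoted cost of an in-memory table call
accesses_per_day = 10_000_000    # assumed volume for one hot table
clock_hz         = 5.0e9         # assumed processor clock rate

saved = (buffered_cycles - in_memory_cycles) * accesses_per_day / clock_hz
print(f"CPU time saved: {saved:.0f} seconds/day")   # -> about 49 seconds/day

And that is for a single access pattern, before counting the eliminated I/O and DBMS-internal work.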

It is important to know that this technology does not replace the database—it merely augments it. Your database does not change in any way, and it remains one of your most important assets.

Tests and comparisons have been completed both by independent third-party testing organizations and by DataKinetics customer IT organizations using tableBASE high-performance in-memory technology. In all cases, systems augmented with tableBASE allowed applications to access data at a rate considerably superior to any other technique. Actual customer systems (Db2, with Db2 buffers augmented by tableBASE) outperform systems employing only Db2 plus Db2 buffer optimization by a wide margin: up to 3,000% faster.

Figure 2: 20% of data is responsible for 80% of data access

Figure 3: A shorter path to your data. A conventional Db2 call traverses the full code path (BSDS, logs, SQL parse, SQL optimizer, record mapping, index manager, buffer pools, IRLM, media manager, VSAM, DASD), while a call to a high-performance in-memory table goes through a lightweight driver straight to the data.

Figure 4: Improved mainframe performance. With in-memory technology, CPU usage, elapsed time and I/O consumption all drop relative to running without it.


Balancing Cost Savings With Performance

While performance is a paramount challenge for mainframe datacenters, controlling costs is equally important for many. Some organizations go to great lengths to control rising mainframe operational costs, sometimes to the detriment of system performance, often by using IBM soft capping. For most organizations, however, saving on cost is a nice-to-have, and never comes at the expense of system performance, much less at the expense of business-critical workload processing.

Therefore, to avoid capping your critical work, you must ensure that your system capacity is equal to or greater than your maximum capacity usage. This almost always means that you will not save on operational costs at all; the process is self-defeating (see Figure 5). However, there are third-party products that can help an organization control costs without having to sacrifice the performance of mission-critical processing.
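The self-defeating part follows from how sub-capacity charges are calculated: they are driven by the peak rolling four-hour (R4H) average of MSU consumption, the same R4H mechanism referenced later in this paper. A minimal sketch of that calculation, with invented hourly MSU samples:

from statistics import mean

# Rolling four-hour (R4H) average MSU, the figure that drives
# sub-capacity software charges. The hourly samples are invented.
msu_by_hour = [120, 150, 400, 700, 800, 750, 300, 150]

r4h = [mean(msu_by_hour[i - 3:i + 1]) for i in range(3, len(msu_by_hour))]
print(f"peak R4H = {max(r4h):.1f} MSU")   # -> peak R4H = 662.5 MSU

# A defined capacity set at (or above) that peak never actually caps,
# so the monthly bill does not change: capping at maximum capacity
# is self-defeating, exactly as Figure 5 illustrates.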

Using automated soft capping, an organization can pay for lower capacity while using shared resources to ensure that critical workloads are never capped. Lower-priority workloads on the same LPAR, or on other LPARs, are capped instead, so mission-critical workloads can proceed with all of the resources they need. This is accomplished by automatically sharing resources (MSU and CPU) between LPARs. The end result is that an organization can pay for lower capacity but use higher capacity when needed (see Figure 6). It is virtual power on demand.
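The sketch below illustrates the idea of that redistribution, not any vendor's actual product logic. The LPAR names, priorities, MSU figures and the set_defined_capacity() hook are all hypothetical: headroom moves from low-priority LPARs to a starved high-priority one while the group total, and therefore the bill, stays constant.

lpars = {
    # name: priority (1 = highest), defined MSU cap, current MSU demand
    "PROD":  {"priority": 1, "defined": 400, "demand": 520},
    "TEST":  {"priority": 3, "defined": 200, "demand":  60},
    "BATCH": {"priority": 2, "defined": 200, "demand": 110},
}

def rebalance(lpars):
    # Move headroom from low-priority LPARs to starved high-priority ones.
    starved = [n for n, l in lpars.items() if l["demand"] > l["defined"]]
    donors = sorted((n for n in lpars if n not in starved),
                    key=lambda n: -lpars[n]["priority"])  # lowest priority first
    for name in starved:
        need = lpars[name]["demand"] - lpars[name]["defined"]
        for donor in donors:
            spare = lpars[donor]["defined"] - lpars[donor]["demand"]
            give = max(0, min(need, spare))
            lpars[donor]["defined"] -= give
            lpars[name]["defined"] += give
            need -= give
        # set_defined_capacity(name, lpars[name]["defined"])  # hypothetical hook

rebalance(lpars)
print({n: l["defined"] for n, l in lpars.items()})
# -> PROD's cap grows to 520 MSU using TEST's spare capacity, while the
#    group total (800 MSU), and hence the bill, is unchanged.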

Figure 5: System capacity must equal required capacity (IBM soft capping: capacity = maximum capacity)

Figure 6: Shared capacity (with dynamic adjustments: extra capacity as needed, power on demand)


IT Business Intelligence

IT organizations collect tremendous amounts of data about their own computing resources every day, covering mainframes and midrange servers, both on premises and in third-party datacenters. So much data is collected that you could call it their own "IT big data." With the right toolsets, this IT data can be used to reduce the cost of batch processing on your mainframe and to identify low-priority batch candidates that could be offloaded to run on other platforms.

IT business intelligence identifies lower-priority batch workloads that are potential candidates for reprioritization, re-platforming or even elimination. This can contribute directly to improved performance, especially during peak periods and mission-critical workloads (see Figure 7).

IT business intelligence can also show which departments are using mainframe resources, and how much that is costing. This information can further help to re-prioritize batch processing based on the new-found transparency of departmental spending patterns.
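As an illustration of both analyses, here is a hedged Python sketch. The job records, departments, peak window and cost rate are all invented; a real implementation would draw on SMF and accounting data. It flags low-priority jobs running inside the peak window (the Figure 7 analysis) and rolls up cost by department (the Figure 8 view).

from collections import defaultdict

jobs = [
    # (job, department, priority, start_hour, mips) -- invented records
    ("POSTBAL",  "Retail Banking",   "high", 20, 310),
    ("MKTRPT",   "Capital Markets",  "low",  20, 180),  # runs in the peak
    ("HRARCH",   "Internal Finance", "low",   3,  90),
    ("CARDAUTH", "Retail Banking",   "high", 21, 420),
]

PEAK_HOURS = range(18, 23)          # the week's peak window (assumed)
COST_PER_MIPS = 95.0                # $/MIPS per month, illustrative only

# Low-priority work inside the peak window: reschedule or offload it.
offload_candidates = [j[0] for j in jobs
                      if j[2] == "low" and j[3] in PEAK_HOURS]
print("reschedule/offload:", offload_candidates)

# Departmental chargeback: who is consuming the mainframe, and what it costs.
cost_by_dept = defaultdict(float)
for _, dept, _, _, mips in jobs:
    cost_by_dept[dept] += mips * COST_PER_MIPS
print(dict(cost_by_dept))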

Figure 7: A low-priority batch workload contributes to the peak workload of the week (MIPS consumption by hour of day, across two days)

Figure 8: Business information on mainframe resource usage per organizational unit (cost per month for Retail Banking, Internal Finance and Capital Markets)



Conclusion

These solutions address existing system performance challenges with little or no change to existing hardware, databases or applications. They are low-risk and budget-friendly; independently, each provides a good performance improvement, and together they provide very significant improvements. They can help to decrease the frequency of your system upgrades. They also provide short-term ROI, coupled with long-term cost savings and improved efficiency, enabling improved strategic business flexibility. Improved performance while maintaining cost control: just the prescription needed for today's over-taxed mainframe systems.

The Next Step

To see how much of an impact these performance optimization solutions would have on your business, a proof-of-concept trial can be arranged. The trial involves identifying a specific problem area, applying a solution to it, measuring the effectiveness, and then assessing the overall cost impact.

Your organization and the Professional Services staff from DataKinetics will collaborate to outline a high-level project plan and approach that will review applicable environments, infrastructure, application code, etc., and help implement the proof of concept. We will work with you to provide a proposal based on your current IT plan. Contact DataKinetics for more information.

About Us

As the global leader in data performance and optimization solutions, DataKinetics is relied upon by the world's largest banking, credit card, brokerage, insurance, healthcare, retail and telecommunications organizations to dramatically improve their data throughput and processing.

Our comprehensive, world-renowned suite of solutions enables Fortune 500 companies to:

• Process over a billion mission-critical transactions every day

• Accelerate application processing by up to 98%

• Seamlessly integrate data on mainframe and distributed systems

• Enable increased control and flexibility in sub-capacity pricing and R4H soft capping environments

Figure 9: Controlled resource demand reducing need for upgrades
© DataKinetics Ltd., 2020. All rights reserved. No part of this publication may be reproduced without the express written permission of DataKinetics Ltd. DataKinetics and tableBASE are registered trademarks of DataKinetics Ltd. Db2 and z/OS are registered trademarks of IBM Corporation. All other trademarks, registered trademarks, product names, and company names and/or logos cited herein, if any, are the property of their respective holders.