WP102389a - Three Scenarios for Migrating Compute Grid 6.1.1 to WebSphere 8.5.5 on z/OS

Version Date: Feb 4, 2015

Douglas MacIntosh, IBM Software Group, Application and Integration Middleware Software ([email protected])
Jeff Dutton, IBM Software Group, Application and Integration Middleware Software ([email protected])

© 2015, IBM Corporation. WP102389 at ibm.com/support/techdocs

Table of Contents

1 Overview
  1.1 Compute Grid Migration Scenarios
    1.1.1 Scenario 1 – Migration of Existing Nodes
    1.1.2 Scenario 2 – Migration through New Node Creation
    1.1.3 Scenario 3 – Migration through New Cell Creation and Redeployment
  1.2 Choosing the Best Scenario for Your Migration
  1.3 Starting and Target Builds
  1.4 The Compute Grid Configuration Used to Test the Migration Process
  1.5 Starting Topology
  1.6 Intermediate Topology
    1.6.1 Scenario 1
    1.6.2 Scenario 2
    1.6.3 Scenario 3
  1.7 Target Topology
    1.7.1 Scenario 1
    1.7.2 Scenario 2
    1.7.3 Scenario 3
  1.8 The Migration Process
2 Prepare Your Environments for WAS8.5.5 (All Scenarios)
  2.1 Install (Upgrade) the Installation Manager
  2.2 Install (Upgrade) the WCT Tool
  2.3 Create the WAS8.5.5 Product File System
3 Prepare the WCG6.1.1 Cell for Migration (Scenarios 1 and 2)
  3.1 Add SystemApp WebSphere Environment Variables
  3.2 Backup the Current Compute Grid Configuration
  3.3 Backup the Cell
  3.4 Rollback Procedure in Case a Migration Issue is Encountered
4 Migrate the Deployment Manager (Scenarios 1 and 2)
  4.1 Add STATUS_LISTENER_ADDRESS Port (Optional)
  4.2 Migrate the Deployment Manager to WAS8.5.5
  4.3 Restore the Compute Grid Configuration
    4.3.1 Required Outage
    4.3.2 Perform --wasmigrate
    4.3.3 Start the WAS8.5.5 Deployment Manager
    4.3.4 Perform --restore
    4.3.5 Restart the WCG6.1.1 Scheduler and Endpoints
    4.3.6 Mixed Mode Exceptions and Errors
5 Scenario 1 Migration Process
  5.1 Migrate the First Node to WAS8.5.5
  5.2 Migrate SPI Property Files (if PJM Installed)
  5.3 Enable RUN_IN_MIXED_MODE Custom Property (if PJM Installed)
  5.4 Prepare DB2 Migration Jobs
  5.5 Migrate the CG Database
  5.6 Restore the Scheduler's Compute Grid Configuration
  5.7 Configure Native WSGrid to Run on WAS855 Scheduler (if installed)
  5.8 Start the Scheduler and Verify Batch Operation
  5.9 Migrate the Remaining Nodes
  5.10 The Next Step
6 Scenario 2 Migration Process
  6.1 Create WAS8.5.5 Nodes
  6.2 Create WAS8.5.5 Scheduler and Endpoint Static Clusters
  6.3 Add SystemApp WebSphere Environment Variables for WAS8.5.5 Clusters
  6.4 Create and Configure WAS8.5.5 JDBC Providers and JDBC Data Sources
    6.4.1 Create New JDBC Providers and Data Sources
    6.4.2 Configure LREE Environment Variables
    6.4.3 Configure the currentSchema Custom Property
  6.5 Migrate SPI Property Files (if PJM Installed)
  6.6 Prepare DB2 Migration Jobs
  6.7 Migrate the CG Database
  6.8 Configure the Scheduler
    6.8.1 Configure the Scheduler Hosted By Attribute
    6.8.2 Verify the Rest of the Scheduler's Configuration
  6.9 Enable RUN_IN_MIXED_MODE Custom Property (if PJM Installed)
  6.10 Configure Native WSGrid to Run on WAS855 Scheduler (if installed)
  6.11 Start the Scheduler and Verify Mixed Mode Batch Operation
    6.11.1 Verify WAS8.5.5 Scheduler with WCG6.1.1 Endpoints
    6.11.2 Verify WAS8.5.5 Scheduler with WCG6.1.1 and WAS8.5.5 Endpoints
    6.11.3 Possible EJBConfigurationException and Work Around
  6.12 The Next Step
7 Complete the Migration (Scenarios 1 and 2)
  7.1 Criteria for Completing the Migration
  7.2 Transition from Mixed Mode to WAS8.5.5 Mode
    7.2.1 Perform --afterMigrationCleanUp
    7.2.2 Remove Mixed Mode Capability from the CG Database
    7.2.3 Disable RUN_IN_MIXED_MODE Custom Property (if PJM Installed)
  7.3 Delete WAS7.0/WCG6.1.1 Nodes (for Scenario 2)
  7.4 Updating WCG6.1.1 xJCL to WAS8.5.5 xJCL (optional)
    7.4.1 Overview of WCG6.1.1 Batch Jobs
    7.4.2 What has Changed in WAS8.5.5
    7.4.3 Migrating WCG6.1.1 xJCL to WAS8.5.5 xJCL
8 Scenario 3 Migration Process
  8.1 Overview of the Active Database Transition from WCG6.1.1 to WAS8.5.5
  8.2 Create the Test Database
  8.3 Create the WAS8.5.5 Target Cell and Test
  8.4 Make the Transition from the WCG6.1.1 Cell to the WAS8.5.5 Cell
    8.4.1 Point the WAS8.5.5 Cell to the Active Database
    8.4.2 Migrate the Active Database to WCG6.1.1 / WAS8.5.5 Compatibility Mode
    8.4.3 Update the IP Sprayer
    8.4.4 Verify WAS8.5.5 Cell Operation
  8.5 Complete the Migration
    8.5.1 Convert WCG6.1.1 xJCL to WAS8.5.5 xJCL (optional)
    8.5.2 Remove Mixed Mode Capability from the CG Database
    8.5.3 Decommission the WCG6.1.1 Cell


1 Overview

1.1 Compute Grid Migration Scenarios

There are three recommended scenarios for migrating a WebSphere Compute Grid 6.1.1.x (WCG6.1.1) cell to WebSphere 8.5.5.x (WAS8.5.5). Two of the scenarios are actual migrations, while the third scenario involves building a new cell and redeploying the WCG6.1.1 applications. In this paper, we discuss the migration process for all three scenarios on z/OS.

1.1.1 Scenario 1 – Migration of Existing Nodes

The first scenario involves the migration of the Deployment Manager, followed by the sequential migration of existing WCG6.1.1 nodes. Typically nodes are migrated one at a time, or a few at a time, depending on migration requirements (e.g., high availability).

After the first node is migrated, the WAS8.5.5 scheduler is configured and started. Note that all WCG6.1.1 schedulers must be stopped prior to starting the WAS8.5.5 scheduler. Also note that a WCG6.1.1 scheduler cannot be restarted after the first WAS8.5.5 scheduler is started. This constraint should not matter since the WAS8.5.5 scheduler is able to dispatch batch jobs to both WCG6.1.1 and WAS8.5.5 endpoints (mixed mode).

The migration is complete once the last WCG6.1.1 node is migrated to WAS8.5.5 and some housekeeping is done to remove WCG6.1.1 compatibility from the cell and database.

This migration scenario is intended to be done in a relatively short period of time. Once the operation of the WAS8.5.5 scheduler and endpoints on the first migrated node is verified, the remaining nodes are migrated as quickly as possible. Although running in mixed mode is supported for Scenario 1, it is not recommended for extended periods of time.

Note that the granularity of the migration is at the node level in Scenario 1. You will observe that in Scenario 2 the granularity of the migration is at the application level.

1.1.2 Scenario 2 – Migration through New Node Creation

The second migration scenario does not involve the migration of the existing WCG6.1.1 nodes. Instead, new WAS8.5.5 nodes are created for the WAS8.5.5 scheduler and endpoints. In this scenario, the Deployment Manager is migrated, new WAS8.5.5 nodes and clusters are created, the WAS8.5.5 scheduler is configured, WCG6.1.1 schedulers are stopped, and the WAS8.5.5 scheduler is started.

Note that the WAS8.5.5 scheduler can dispatch jobs to both WCG6.1.1 and WAS8.5.5 endpoints, which is commonly referred to as mixed mode. Over time, the applications are migrated from the WCG6.1.1 endpoints to the new WAS8.5.5 endpoints. Or possibly, a WCG6.1.1 application is not migrated, but replaced altogether by a new WAS8.5.5 application.

When the last WCG6.1.1 application has been migrated or replaced, the WCG6.1.1 nodes can be removed from the Cell’s configuration. At this point the migration is considered complete.

While Scenario 1 should be done in a relatively short period of time, Scenario 2 is better suited for situations where the cell will be in mixed mode for an extended period of time. The reason is that in Scenario 2 all clusters in the cell remain homogeneous. Having clusters whose members are all at the same build level is considered a best practice; mixed mode clusters are not.

Having distinct WCG6.1.1 and WAS8.5.5 clusters allows the applications to be migrated one at a time. Hence the granularity of the Scenario 2 migration is at the application level, not at the node level as in Scenario 1. Being able to migrate applications at different times may be better suited to your migration requirements and provides flexibility that Scenario 1 does not.

1.1.3 Scenario 3 – Migration through New Cell Creation and Redeployment

The third scenario is not technically a migration, but is a process that achieves the same end result.

In this scenario, a new WAS8.5.5 cell is constructed with the existing WCG6.1.1 applications already deployed. After the WAS8.5.5 cell is thoroughly tested, there is a short outage during which the existing WCG6.1.1 cell is replaced by the WAS8.5.5 cell. During the outage, the WCG6.1.1 database is migrated to a level that is compatible with both WCG6.1.1 and WAS8.5.5 batch access, the WAS8.5.5 cell has its data sources reconfigured to point to the migrated database, and internal networking changes are made to route IP traffic away from the old cell and to the new cell.

Having the Compute Grid database in a form that supports both WCG6.1.1 and WAS8.5.5 batch access provides a method of recovery in case something goes wrong. In that event, the WAS8.5.5 cell can be taken out of service and the WCG6.1.1 cell put back in. This capability is supported, but hopefully never used.

Once the WAS8.5.5 cell has been in service long enough and it has been determined that there is no longer a need to switch back to the WCG6.1.1 cell, there will be another brief outage to transform the Compute Grid database into strictly WAS8.5.5 format. This step removes the database overhead of simultaneously supporting both Compute Grid releases.

1.2 Choosing the Best Scenario for Your Migration

Choosing the best scenario is the most important step of the migration process. Careful consideration needs to be put into the decision to make sure you choose the best path for your needs. The table below compares the three migration scenarios and summarizes some factors that may help you decide.


Migration Comparison Information

Granularity of migration:
  Scenario 1 (Migrate Existing Nodes): Node level
  Scenario 2 (Create New Nodes): Application level
  Scenario 3 (Create New Cell): Cell level

Duration of migration:
  Scenario 1: Short period of time. Nodes are migrated as quickly as possible.
  Scenario 2: Extended period of time. Gives time to migrate WCG611 apps over a longer period.
  Scenario 3: The transition from WCG611 to WAS8.5.5 is done quickly; the creation and testing of the target WAS855 cell is done over an extended period of time.

Homogeneous clusters during migration:
  Scenario 1: Yes
  Scenario 2: No
  Scenario 3: Yes

Required outages:
  Scenario 1: Migration of the DMgr; migration of the DB to WAS855 compatibility mode and restoration of the scheduler configuration; migration of the DB to WAS855 mode and after-migration cleanup.
  Scenario 2: Migration of the DMgr; migration of the DB to WAS855 compatibility mode and configuration of the WAS855 scheduler; migration of the DB to WAS855 mode and after-migration cleanup.
  Scenario 3: Migration of the DB to WAS855 compatibility mode and reconfiguration of the IP Sprayer; migration of the DB to WAS855 mode.

Table 1: A Comparison of the Three Migration Scenarios

Lastly, do not let the amount of text dedicated to Scenarios 1 and 2 influence your decision. From a migration perspective, Scenario 3 simply requires far less detail than the other two scenarios; the lighter coverage does not imply less work. What this paper does not address for Scenario 3 is the effort involved in building the WAS8.5.5 cell from scratch, deploying the existing WCG6.1.1 applications, and verifying cell and application operation. That effort is not required for Scenarios 1 and 2.

1.3 Starting and Target Builds

The migration scenarios in this paper were tested using the following starting point and target build levels:

Starting Product File Systems: WAS7.0.0.33 + WCG6.1.1.6

Target Product File System: WAS8.5.5.3


When performing your migration, we strongly recommend you start the migration process using the latest version of WCG6.1.1.x and move to the most current version of WAS8.5.5.x.

Note that if you choose to migrate from a product version earlier than WCG6.1.1.6, your WCG6.1.1 database DDL will need to be updated prior to migrating. The upgrade to WCG6.1.1.6 required that the ADDLRS, ADDLREE, and UPDLRS DDL be applied for DB2 on z/OS, and the WAS8.5.5 DDL requires that these updates have been applied. The required DDL can be found in the WAS8.5.5 product under the util/Batch directory.
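If you want to confirm that the WCG6.1.1.6-level DDL is available in your product tree before migrating, a quick check from a USS shell might look like the following. This is only a sketch: the exact member names under util/Batch can vary by fix pack, so treat the patterns as illustrative.

    # List the DB2 z/OS DDL shipped with the WAS8.5.5 product and look for the
    # WCG6.1.1.6-level updates (ADDLRS, ADDLREE, UPDLRS); names are illustrative.
    ls <WAS855_Product>/util/Batch | grep -Ei 'ADDLRS|ADDLREE|UPDLRS'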

1.4 The Compute Grid Configuration Used to Test the Migration Process

The migration scenarios documented in this paper are based on actual migrations that were performed in the lab using a robust Compute Grid configuration. These cells have the following Compute Grid attributes and requirements:

- The WCG6.1.1 Parallel Job Manager is installed with the Shared Lib SPI.

- Native WSGrid is installed.

- Job schedules created in WCG6.1.1 must migrate directly to WAS8.5.5 and continue to run without modification.

- Existing WCG6.1.1 xJCL must work with the WAS8.5.5 scheduler and endpoints.

- The WAS8.5.5 scheduler must be able to dispatch jobs to WAS8.5.5 and WCG6.1.1 endpoints (mixed mode) for Scenarios 1 and 2. For Scenario 3, there are two homogeneous cells where all servers are at one level or the other; neither cell runs in mixed mode.

- In case a rollback to WCG6.1.1 is necessary, the restored WCG6.1.1 scheduler and endpoints must work with the migrated database. In other words, the Compute Grid database does not have to be restored during the WAS rollback procedure.

- The rollback procedure must be optimized to minimize production cell down time.

1.5 Starting Topology

The three migration scenarios presented in this paper have the same starting point topology. We performed each scenario using our WC6 test cell. The starting point topology for the WC6 cell is depicted in Figure 1 below.


Figure 1: Starting Cell Topology for all Migration Scenarios

The WC6 cell spans two LPARs and has one node per LPAR. There is a scheduler cluster (Scheduler) and two grid endpoint clusters (GridEndPointA and GridEndPointB), where each cluster spans both LPARs (and nodes). There are four batch applications deployed across the two endpoint clusters, plus one Compute Grid system app in GridEndPointA.

The following color conventions will be used with respect to application types found in the topology diagrams throughout the paper.


Figure 2: Application Types

In the starting point topology diagram, two of the four applications are traditional batch applications (compute intensive or transactional), while the other two are transactional batch applications that utilize the Parallel Job Manager (PJM) and a Shared Lib SPI.

The PJM is one of several hidden Compute Grid system apps and provides the ability to implement parallel batch jobs. A hidden system app cannot be viewed or accessed via the Administrative Console, but its presence is vital to cell operation. We chose to include the PJM system app in the topology diagrams because its implementation differs between WCG6.1.1 and WAS8.5.5 and there is benefit in having its presence visible. The other Compute Grid system apps also affect the migration process and will be discussed in the paper without reference to a topology diagram.

There can be only one instance of the PJM in a WCG6.1.1 cell and it must be deployed to an endpoint server or cluster. However, applications that utilize the PJM can be deployed to any endpoint cluster. In our cell, the PJM system app is installed in the GridEndPointA cluster.

1.6 Intermediate Topology

There are significant differences between the mixed mode topologies for Scenarios 1 and 2. These differences will be discussed in the first two subsections. For Scenario 3, there is no mixed mode operation, but there is a transition between two homogeneous cells that needs to be understood. This transition mode topology will also be discussed in the last subsection below.


1.6.1 Scenario 1

The mixed mode topology for Scenario 1 is given in Figure 3. This figure depicts the cell after having the Deployment Manager and the first node migrated to WAS8.5.5. The red “X” over the WCG6.1.1 scheduler indicates that only the WAS8.5.5 scheduler is permitted to run while in mixed mode.

Figure 3: Mixed Mode Cell Topology for Migration Scenario 1

When the scheduler’s cluster is in mixed mode, only WAS8.5.5 schedulers are permitted to run. Starting a WCG6.1.1 scheduler while in mixed mode is an unsupported configuration and the WCG6.1.1 scheduler will not function correctly.


1.6.2 Scenario 2

The mixed mode topology for Scenario 2 has some flexibility based on the intended use of the cell during the migration process. For example, the mixed mode topology shown in Figure 4 below may be well suited for a cell used in a Test environment. In this mixed mode topology, the existing applications continue to run on the WCG6.1.1 clusters, while new applications are developed and tested on the WAS8.5.5 nodes.

Figure 4: A Mixed Mode Cell Topology for Migration Scenario 2


Figure 5 below depicts a possible Scenario 2 migration of a Production cell. In this case, all WCG6.1.1 applications are deployed across both the WCG6.1.1 and WAS8.5.5 clusters. Furthermore, new WAS8.5.5 applications have been deployed to WAS8.5.5 clusters as well. At this point, the migration is almost complete. The next step would be to stop the WCG6.1.1 clusters and remove the WCG6.1.1 nodes.

Figure 5: Another Mixed Mode Cell Topology for Migration Scenario 2


1.6.3 Scenario 3

Thus far in the paper, we have defined mixed mode to mean a WAS8.5.5 scheduler can dispatch jobs to both WCG6.1.1 and WAS8.5.5 endpoints. This transition state exists in both Scenarios 1 and 2. In Scenario 3 however, we don’t have a mixed mode state. But, we do have an extended transition state that exists between the starting point (i.e., a WCG6.1.1 cell) and the target state (i.e., a WAS8.5.5 cell).

This transition state is when the WAS8.5.5 cell is being built, configured, and tested with respect to the deployment of the existing WCG6.1.1 applications, along with the testing of new WAS8.5.5 applications. Figure 6 below depicts this transition state.

Figure 6: Transition Topology for Migration Scenario 3

The WC6 cell on sysplex A is the WCG6.1.1 cell that needs to be migrated. The Compute Grid database, as well as the application databases, resides in the Active Database. The IP Sprayer handles all incoming requests and currently routes them to the WCG6.1.1 cell. Note that the IP Sprayer is comprised of any component that delivers batch job traffic to the cell, such as a hardware device, HTTP server, proxy server, etc.

The WC6 cell on sysplex B is the WAS8.5.5 cell that is being built and will ultimately replace the WCG6.1.1 cell. Note that the WAS8.5.5 cell under development has its own test database and only processes test jobs.

1.7 Target Topology

In the first two migration scenarios, the migration is complete when no WCG6.1.1 nodes exist in the cell. For Scenario 3, the target topology is reached when the WCG6.1.1 cell is decommissioned.


1.7.1 Scenario 1

The target topology for Scenario 1 is shown in Figure 7 below. Note that the apps that were running at the start of the migration are now deployed on the WAS8.5.5 clusters. Furthermore, new WAS8.5.5 apps have been deployed to the WAS8.5.5 clusters as well.

Figure 7: Target Cell Topology for Migration Scenario 1


1.7.2 Scenario 2

The target topology for Scenario 2 is shown in Figure 8 and is nearly identical to that of Scenario 1. The only difference is that Scenario 1 has the original nodes and clusters as in the starting topology, while Scenario 2’s topology has new nodes and clusters.

Figure 8: Target Cell Topology for Migration Scenario 2

1.7.3 Scenario 3

The “near” (almost) target topology for Scenario 3 is shown in Figure 9 below. Note that the IP Sprayer is now directing incoming active job requests to the WAS8.5.5 cell on sysplex B. Also note that the WAS8.5.5 cell is now using the Active Database, which has been migrated to WAS8.5.5/WCG6.1.1 mixed mode compatibility. The WCG6.1.1 cell on sysplex A can be retired (deleted) once the migration to the WAS8.5.5 cell on sysplex B is deemed successful. Until that time, the WCG6.1.1 cell is available to be reactivated in the event a problem arises with the migration to the WAS8.5.5 cell.


Figure 9: Near Target Topology for Migration Scenario 3

The figure below shows the actual target topology for Scenario 3. The migration has been deemed successful, the original WCG6.1.1 cell on sysplex A has been deleted, and the Active Database has been migrated from WCG6.1.1 / WAS8.5.5 compatibility mode to strictly WAS8.5.5 mode.


Figure 10: Target Topology for Migration Scenario 3


1.8 The Migration Process

Regardless of which migration scenario you choose, the steps in Chapter 2, Prepare Your Environments for WAS8.5.5 (All Scenarios), must be performed.

Migration Scenarios 1 and 2 have significant overlap, and where possible, the overlap is presented in chapters that can be referenced by both scenarios. Where the scenarios differ, there are specific chapters to handle those differences. Chapters 3 through 7 are specific to Scenarios 1 and 2.

Migration Scenario 3 has very little in common with Scenarios 1 and 2; hence the bulk of Scenario 3 is in its own chapter (Chapter 8). The table below maps each migration scenario through the chapters that follow.

Chapter                                                          Scenario 1  Scenario 2  Scenario 3
1  Overview                                                          √           √           √
2  Prepare Your Environments for WAS8.5.5 (All Scenarios)            √           √           √
3  Prepare the WCG6.1.1 Cell for Migration (Scenarios 1 and 2)       √           √
4  Migrate the Deployment Manager (Scenarios 1 and 2)                √           √
5  Scenario 1 Migration Process                                      √
6  Scenario 2 Migration Process                                                  √
7  Complete the Migration (Scenarios 1 and 2)                        √           √
8  Scenario 3 Migration Process                                                              √

Table 2: Scenario / Chapter Mappings


2 Prepare Your Environments for WAS8.5.5 (All Scenarios)

Preparation for the migration involves installing (or updating) WebSphere tools on both your workstation and sysplex for all three migration scenarios.

For the workstation, Installation Manager (IM) is used to install or update the WebSphere Customization Toolbox (WCT), which generates the jobs used in the migration process. On the sysplex, IM is used to build the target WAS8.5.5 product file system. The installation or upgrade of each of these tools is covered in the sections that follow.

2.1 Install (Upgrade) the Installation Manager

The “Installation Manager and Packaging Utility download links” can be found at:

http://ibm.com/support/docview.wss?uid=swg27025142

Open the link above and select the version of Installation Manager you want to install or upgrade, then select the “Download document” for that version. The Download document contains a link to the “Installing overview” topic in the IBM Knowledge Center, as well as the link to the download for the desired platform.

Use this URL and the outlined procedure now to update IM on both your workstation and sysplex.
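As a sanity check after updating, you can query IM from its command line. The imcl tool lives under eclipse/tools in the Installation Manager install root; the path below is an illustrative assumption, not a fixed location.

    # Report the Installation Manager version and the packages it manages
    # (install root path is an assumption; adjust for your workstation or sysplex).
    /InstallationManager/eclipse/tools/imcl version
    /InstallationManager/eclipse/tools/imcl listInstalledPackages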

2.2 Install (Upgrade) the WCT Tool

The following URL contains instructions for "Installing WebSphere Customization Toolbox":

http://ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.installation.zseries.doc/ae/tins_installation_wct_gui.html

Use the instructions referenced above to install the WCT for WAS8.5.5 on your workstation.
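If you prefer the command line over the GUI, a silent install of the WCT with imcl might look like the sketch below. The package ID and repository URL are assumptions based on IBM's usual naming conventions; confirm them against the instructions referenced above before using them.

    # Hypothetical silent install of the WebSphere Customization Toolbox via imcl;
    # the package ID, repository URL, and install directory are all assumptions.
    imcl install com.ibm.websphere.WCT.v85 \
      -repositories http://www.ibm.com/software/repositorymanager/com.ibm.websphere.WCT.v85 \
      -installationDirectory <WCT_install_root> -acceptLicense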

2.3 Create the WAS8.5.5 Product File System

The IBM Knowledge Center contains documentation on how to build a WAS8.5.5.x file system. The following three methods can be used:

- Access a live service repository and use web-based updating.

- Download files from Fix Central and use local updating.

- Apply fix-pack PTFs to the SMP/E-managed repository and use local updating.

The details for each of these methods can be found in the Knowledge Center at:

http://ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.installation.zseries.doc/ae/tins_install_fixes_z.html

At this time, you should build your WAS8.5.5 file system on your sysplex. The WAS8.5.5 file system contains the migration scripts you will be using in subsequent sections. For the purposes of this paper, we will refer to the location of the WAS8.5.5.3 product as <WAS855_Product>.
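For reference, an imcl invocation follows the same general shape regardless of which of the three repository methods you choose. The sketch below assumes the live service repository method; the offering ID, repository URL, and shared-resources path are assumptions you should verify in the Knowledge Center topic above.

    # Hypothetical imcl install of WAS8.5.5 for z/OS into <WAS855_Product>;
    # offering ID, repository URL, and shared-resources path are assumptions.
    imcl install com.ibm.websphere.zOS.v85 \
      -repositories http://www.ibm.com/software/repositorymanager/com.ibm.websphere.zOS.v85 \
      -installationDirectory <WAS855_Product> \
      -sharedResourcesDirectory <shared_resources_dir> -acceptLicense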


3 Prepare the WCG6.1.1 Cell for Migration (Scenarios 1 and 2)

For migration Scenarios 1 and 2, there are steps that need to be taken to prepare your WCG6.1.1 cell for the migration. The scripts used to prepare for the migration can be found in the WAS8.5.5 product file system that you built in the previous chapter.

3.1 Add SystemApp WebSphere Environment Variables

By default, Compute Grid 6.1.1 has the WebSphere environment variables shown in Figure 11 defined for the cell.

Figure 11: Current Compute Grid WebSphere Environment Variables

There are some additional WebSphere environment variables that need to be created to support the migration. To create them, change your directory to the Deployment Manager's <USER_INSTALL_ROOT>/bin directory and execute the addCGSystemAppVariables.py script, which is found in the WAS855 product file system:

    <WAS7.0:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP
      -host <host> -port <port> -user <user> -password <password>
      -f <WAS855_Product>/bin/addCGSystemAppVariables.py

The following instances of the CG_SYSTEM_APP_LOCATION variable were created by the script for the WC6 cell (see Figure 12). Note that the CG_SYSTEM_APP_LOCATION variable is not used by the 6.1.1.x configuration, so there is no need to cycle servers at this time.


Figure 12: CG_SYSTEM_APP_LOCATION Environment Variables for WC6 Cell
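If you want to confirm the variables were created at the expected scopes, a small wsadmin (jython) sketch such as the following can list every CG_SYSTEM_APP_LOCATION instance. The script body is illustrative and is not part of the product's migration scripts.

    # list_cg_vars.py - illustrative wsadmin (jython) sketch; run via wsadmin.sh -f.
    # Lists each CG_SYSTEM_APP_LOCATION WebSphere variable and its value.
    for entry in AdminConfig.list('VariableSubstitutionEntry').splitlines():
        if AdminConfig.showAttribute(entry, 'symbolicName') == 'CG_SYSTEM_APP_LOCATION':
            print entry
            print '  value =', AdminConfig.showAttribute(entry, 'value')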

3.2 Backup the Current Compute Grid Configuration

The WCG6.1.1 configuration needs to be backed up so it can be restored when migrating to WAS8.5.5. The Deployment Manager must be running in order to back up the current Compute Grid configuration.

Before the Compute Grid configuration can be backed up, there are some files in the configuration that need to be cleaned up. If these files exist at the time the backup is performed, their existence will be detected and the backup procedure terminated. To clean up these files, change your directory to the Deployment Manager's <USER_INSTALL_ROOT>/bin directory and execute the following script:

    <WAS7.0:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP
      -host <host> -port <port> -user <user> -password <password>
      -f <WAS855_Product>/bin/migrateConfigTo85.py --cleanupFiles

Next, run the following script to back up the Compute Grid configuration:

    <WAS7.0:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP
      -host <host> -port <port> -user <user> -password <password>
      -f <WAS855_Product>/bin/migrateConfigTo85.py --backup
      -configBackupDir <pathToBackupLocation> -nameOfProfile <profile>


3.3 Backup the Cell

It is important that the cell is completely backed up at this time, so it can be restored if an issue is encountered with the migration process.

It is also highly recommended to back up the cell at various points during the migration process, so those intermediate points can be restored instead of having to roll all the way back to the beginning of the migration. For example, we typically back up the cell after migrating the Deployment Manager and after migrating each node in Scenario 1. For Scenario 2, we back up after migrating the Deployment Manager, after creating the new WAS8.5.5 nodes, and after configuring the WAS8.5.5 scheduler.
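On z/OS, a complete cell backup usually means archiving each node's configuration file system while its servers are down. One possible approach is sketched below; the paths and archive names are illustrative for our WC6 cell, and backupConfig.sh is a profile-level alternative.

    # Archive the Deployment Manager's configuration file system with pax (USS);
    # mount point and archive name are assumptions.
    pax -wzf /backups/wc6dm_config.pax.Z <WAS7.0:USER_INSTALL_ROOT>

    # Alternatively, back up a single profile's configuration with backupConfig.sh:
    <WAS7.0:USER_INSTALL_ROOT>/bin/backupConfig.sh /backups/wc6dm_before_migration.zip -nostop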

3.4 Rollback Procedure in Case a Migration Issue is Encountered

The URL below is from the WAS7 InfoCenter and discusses the “Rolling back a WebSphere Application Server, Network Deployment cell”.

http://ibm.com/support/knowledgecenter/SS7K4U_7.0.0/com.ibm.websphere.migration.zseries.doc/info/zseries/ae/tmig_rollbackdm.html

The instructions for rolling back the cell in the InfoCenter are relatively straightforward. We tested the rollback procedure for both scenarios and did not encounter any issues. If a rollback is necessary, note that the Compute Grid database can remain in its migrated state; the restored WCG6.1.1 scheduler and endpoints will work with the migrated database.
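If you archived the configuration file systems with pax as sketched in section 3.3, restoring a node during rollback can be as simple as extracting the archive back over the same mount point, with the node's servers stopped. Again, the paths are illustrative.

    # Restore the Deployment Manager's configuration file system from the pax
    # archive taken before the migration (extracts to the original absolute paths).
    pax -rzf /backups/wc6dm_config.pax.Z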



4 Migrate the Deployment Manager (Scenarios 1 and 2)

4.1 Add STATUS_LISTENER_ADDRESS Port (Optional)

The STATUS_LISTENER_ADDRESS port was added to the Deployment Manager's profile in WebSphere 8.5. This port is used by Job Managers and Deployment Managers for status updates coming from registered nodes; it is not used by Compute Grid functionality. The STATUS_LISTENER_ADDRESS port is assigned a default value when it is added to the WAS 8.5 Deployment Manager, and the z/OS Migration Management Tool does not provide the ability to override the default value when creating the migration jobs. If the default value is not acceptable in your environment, you can add the STATUS_LISTENER_ADDRESS port with an appropriate value to the WAS7 Deployment Manager prior to migrating, or update the port after you bring up the WAS8.5.5 Deployment Manager later in this chapter.
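One way to add the port to the WAS7 Deployment Manager ahead of time is with wsadmin. The jython sketch below shows the general shape only; the node name, host, and port value are assumptions, and you should verify the approach against your own administration standards before relying on it.

    # add_status_listener.py - illustrative wsadmin (jython) sketch;
    # run via: wsadmin.sh -lang jython -f add_status_listener.py
    # Adds a STATUS_LISTENER_ADDRESS named endpoint to the dmgr's ServerEntry.
    node = AdminConfig.getid('/Node:<dmgrNodeName>/')
    serverEntry = AdminConfig.list('ServerEntry', node).splitlines()[0]  # the dmgr entry
    nep = AdminConfig.create('NamedEndPoint', serverEntry,
                             [['endPointName', 'STATUS_LISTENER_ADDRESS']])
    # Host and port values are assumptions; choose ones acceptable in your environment.
    AdminConfig.create('EndPoint', nep, [['host', '<dmgr-host>'], ['port', '9420']], 'endPoint')
    AdminConfig.save()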

4.2 Migrate the Deployment Manager to WAS8.5.5

The WCG6.1.1 / WAS 7.0 cell was originally built using the z/OS Profile Management Tool (zPMT). This tool, along with the z/OS Migration Management tool, is found in the WebSphere Customization Toolbox (WCT). Details on how to download and update these tools are given in the “Install (Upgrade) the WCT Tool” section on page 17.

After upgrading the WCT tool, use the z/OS Migration Management Tool (zMMT) to create the migration jobs. Next, upload the migration jobs to the target z/OS host.

The WAS7.0 Deployment Manager must be stopped in order to migrate it to WAS8.5.5. Note that the Deployment Manager must remain stopped throughout the WAS8.5.5 migration process described in this section, as well as for part of the Compute Grid restoration process described in section 4.3.

The Compute Grid migration requirement above, that the Deployment Manager remain stopped, means you will have to deviate from the instructions outlined in the zMMT tool under "Customization Instructions". Specifically, follow steps 1 through 4 under the "Running the migration jobs" section, but do not perform steps 5 and 6 at this time.

A variation of steps 5 (shut down the application servers and daemon) and 6 (start the Deployment Manager) will be performed in subsections 4.3.1 and 4.3.3 respectively, when administering the first of two required outages in the migration process.


4.3 Restore the Compute Grid Configuration

The next step is to restore the Compute Grid configuration. This step is broken into four parts and each part is addressed in the following subsections.

4.3.1 Required Outage

Before we restore the previous WCG6.1.1 configuration into the new WAS8.5.5 environment, we need to stop the WAS7.0 Daemon on the Deployment Manager’s LPAR. This will also stop all servers associated with the cell running on this LPAR. We need to do this so that we can start the WAS8.5.5 Daemon when we start the WAS8.5.5 Deployment Manager. Furthermore, we need to stop all scheduler and endpoint servers running on the remaining LPARs. However, be sure to leave the Daemons and Node Agents up on these remaining LPARs.
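In practice this outage is administered from the MVS console. The commands below illustrate the shape for our WC6 cell; all job names are assumptions specific to your configuration.

    # Illustrative MVS console STOP commands (job names are assumptions):
    P WC6DEMN    # stop the WAS7.0 Daemon on the DMgr LPAR (stops that LPAR's servers)
    P WC6S31S    # stop the scheduler server on the remaining LPAR
    P WC6E31A    # stop an endpoint server; leave its Daemon and Node Agent running
    P WC6E31B    # stop the other endpoint server on that LPAR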

4.3.2 Perform --wasmigrate

Note that --wasmigrate must be performed after the WAS7.0/WCG6.1.1 Deployment Manager has been migrated and before the WAS8.5.5 Deployment Manager is started for the first time.

Change your directory to the WAS8.5.5 Deployment Manager's <USER_INSTALL_ROOT>/bin directory and execute the following script to migrate the 6.1.1.x Compute Grid configuration:

    <WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -conntype NONE -lang jython
      -f <WAS855_Product>/bin/migrateConfigTo85.py --wasmigrate
      -oldWASHome <oldWASHome> -oldBackendID <6.1.1CellGEEBackendID>
      -lreeJNDIName <lreeJNDIName> -newWASHome <newWASHome>
      -cellName <cell> -nameOfProfile <profile>
      -configBackupDir <PathToBackupLocation>
      -pjmJNDIName <pjmJNDIName> -pjmBackendID <pjmBackendID>
      -pjmSchema <pjmSchema> -cg611ProductFS <WCG6.1.1.x_Product>
      -dmgrNodeName <dmgrNodeName>


4.3.3 Start the WAS8.5.5 Deployment Manager

At this point, start the WAS8.5.5 Deployment Manager. The command to start the Deployment Manager can be found in the "Customization Instructions" generated by the zPMT tool.
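For a z/OS Deployment Manager this is an MVS START command of the general form shown below; the procedure name, job name, and cell/node/server short names are illustrative only.

    # Illustrative MVS START command for the WAS8.5.5 Deployment Manager
    # (procedure and short names are assumptions):
    START WC6DCR,JOBNAME=WC6DMGR,ENV=WC6CELL.WC6NDM.WC6DMGR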

Note that the value of the CG_SYSTEM_APP_LOCATION environment variable was modified during the --wasmigrate step for the cell scoped and Deployment Manager scoped instances. These environment variables were updated to include the value specified by the -cg611ProductFS argument, as shown below in Figure 13.

Figure 13: Updated CG_SYSTEM_APP_LOCATION Environment Variables for WC6 Cell

4.3.4 Perform --restore

Make sure the WAS8.5.5 Deployment Manager has been started and is "open for e-business" before restoring the Compute Grid configuration. Then change your directory to the WAS8.5.5 Deployment Manager's <USER_INSTALL_ROOT>/bin directory and execute the following script to restore the migrated 6.1.1.x Compute Grid configuration:

    <WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP
      -host <host> -port <port> -user <ID> -password <PW>
      -f <WAS855_Product>/bin/migrateConfigTo85.py --restore
      -nameOfProfile <profile> -configBackupDir <PathToBackupLocation>
      -lreeJNDIName <lreeJNDIName> -lreeSchema <lreeSchema>


Note that the --restore command creates the GRID_ENDPOINT_DATABASE_SCHEMA environment variable and sets it to the value of the -lreeSchema argument that was passed into the script.

Figure 14: Impact of --restore on Environment Variables for WC6 Cell

4.3.5 Restart the WCG6.1.1 Scheduler and Endpoints

At this point, the WAS8.5.5 Deployment Manager is up and the WCG6.1.1 configuration has been restored. Start the WCG6.1.1 scheduler and endpoints and resume using the cell.

4.3.6 Mixed Mode Exceptions and Errors

Grid endpoint functionality is implemented via a hidden system application that is installed on endpoint servers. This application changed between WCG6.1.1 and WAS8.5.5; we will refer to the two versions as GEE and PGCController respectively throughout the paper. Note that the actual names of these system apps use the GEE or PGCController prefix, followed by either the cluster name or server name accordingly. For example, in our cell for cluster GridEndPointA, we would see GEE_GridEndPointA for WCG6.1.1 and PGCController_GridEndPointA for WAS8.5.5.

This change in the endpoint system application creates a migration issue for cells running in mixed mode. When in mixed mode, grid endpoint clusters span both WCG6.1.1 and WAS8.5.5 nodes. Hence, either GEE or PGCController is running depending on node level.

The node scoped instance of the CG_SYSTEM_APP_LOCATION variable determines where the system application binaries are found. In order to handle the mixed mode situation, the endpoint attempts to load both GEE and PGCController. This means the WCG6.1.1 endpoint will load the GEE system application, but will complain about not finding the PGCController application. Likewise, the WAS8.5.5 endpoint will load the PGCController and fail to load the GEE application.

The failure to load one or the other system app will generate exceptions in the servant region logs for the WCG6.1.1 and WAS8.5.5 endpoints. These exceptions should be ignored and will be listed at the end of this section.

We have a similar situation for the Parallel Job Manager as with the grid endpoint execution environments. When the cell is in mixed mode, the grid endpoint cluster where the WCG6.1.1 PJM resides will span both WCG6.1.1 and WAS8.5.5 endpoints. When the endpoint attempts to load the PJM application, it will succeed for WCG6.1.1 endpoints, but will fail for WAS8.5.5 endpoints. Hence we will see errors for the PJM on WAS8.5.5 endpoints that should also be ignored while the cell is in mixed mode.

Note that these grid execution environment exceptions and PJM errors will be resolved once the migration is complete and the "afterMigrationCleanUp" is performed in section 7.2.1. In the meantime (mixed mode), please ignore these exceptions and errors in the address space logs.

Ignore the following exceptions in WAS8.5.5 endpoint servers seen in Scenario 1:

WSWS1002E: An error occurred while processing the Web services deployment descriptor for module: Batch JobExecutionEnvironmentEJBs.jar with error: java.lang.ClassNotFoundException: com.ibm.ws.batch.BatchGridDiscriminatorBean

WSVR0040E: addEjbModule failed for ParallelJobManagerEJBs.jar com.ibm.ejs.container.ContainerException

PMGR0000E: Call stack: com.ibm.ws.ejbpersistence.utilpm.PersistenceManagerException: PMGR1010E: The current backend id,DB2UDBOS390_V9_1, does not have equivalent deployed code in the jar.

Ignore the following exception in WCG6.1.1 endpoint servers seen in Scenario 2:

ExtendedMessage: [Servlet Error]-[class java.lang.ClassNotFoundException: com.ibm.ws.gridcontainer.PGCControllerServlet]: java.lang.ClassNotFoundException: class java.lang.ClassNotFoundException: com.ibm.ws.gridcontainer.PGCControllerServlet


5 Scenario 1 Migration Process

Figure 15 below shows the starting point of the Scenario 1 migration process. At this point, the Deployment Manager has been migrated to WAS8.5.5.3 and there are two nodes (wc6nd21 and wc6nd31) that remain at WCG6.1.1.6/WAS7.0.0.33.

Figure 15: Migrate the First Node to WAS8.5.5

The wc6nd21 node will be migrated first, thus putting the cell into mixed mode. When in mixed mode, a WAS8.5.5 scheduler is started on the migrated node (wc6nd21) and is able to dispatch work across WCG6.1.1 and WAS8.5.5 endpoints running in the wc6nd31 and wc6nd21 nodes respectively. While in mixed mode, you can verify WAS8.5.5 behavior and proceed with the rest of the migration one node at a time. In our cell, we only have two nodes to migrate; hence the wc6nd31 node will be migrated next. Once the last node is migrated, the cell is no longer in mixed mode and steps can be taken to remove the remaining WCG6.1.1 dependencies.


5.1 Migrate the First Node to WAS8.5.5

Use the z/OS Migration Management Tool (zMMT) to create the migration jobs for the first node (wc6nd21). Next, upload the migration jobs to the target z/OS host. Then follow the instructions outlined in the zMMT tool under the “Customization Instructions” tab to migrate the node. Note that steps 1 through 9 in the “Running the migration jobs” section should be completed. Verify the node agent starts without issue.

For High Availability (HA) environments, special attention must be given to this step. The migration of the first node can become a single point of failure if not properly addressed by your migration process.

For example, the topology shown in Figure 15 has an HA vulnerability immediately following the migration of the first node, and the exposure remains until the second node is migrated. The issue is that a single WAS8.5.5 scheduler is available to dispatch jobs to the WCG6.1.1 and WAS8.5.5 endpoints. If this scheduler fails, the dispatching of jobs will cease until the scheduler can be restarted. Remember, it is not possible for WAS8.5.5 and WCG6.1.1 schedulers to be up at the same time.

One possible workaround is to have two schedulers in the first node. Alternatively, it may be acceptable to take the risk and run in this state if the second node is migrated as soon as possible after the first.

5.2 Migrate SPI Property Files (if PJM Installed)

This section can be skipped if your cell does not use Compute Grid’s Parallel Job Manager.

Parallel job execution invokes a System Programming Interface (SPI), which is an extension to the execution environment. The SPI is configured using the xd.spi.properties file located in the <WAS7.0:USER_INSTALL_ROOT>/properties directory. In addition to the xd.spi.properties file, other property files may be used to configure PJM applications.

When a node is migrated, the SPI property files must be manually copied to the node’s target file system (<WAS8.5.5:USER_INSTALL_ROOT>/properties). The migration process does not automatically copy the SPI property files.
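
For example (a sketch; adjust the paths to match your mount points), from a z/OS UNIX shell:

cp <WAS7.0:USER_INSTALL_ROOT>/properties/xd.spi.properties <WAS8.5.5:USER_INSTALL_ROOT>/properties/

Copy any additional SPI-related property files the same way.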

5.3 Enable RUN_IN_MIXED_MODE Custom Property (if PJM Installed)

This section can be skipped if your cell does not use Compute Grid’s Parallel Job Manager.

In WCG6.1.1, the Parallel Job Manager is implemented as a hidden system application and the parallelJobManager.py script is used to install and configure the PJM. In WAS8.5.5, the PJM is part of the grid endpoint environment and does not have to be installed separately. Also note that the top level xJCL for parallel jobs has changed between WCG6.1.1 and WAS8.5.5.

These differences in PJM implementation and xJCL style impact the Scenario 1 migration process. When in mixed mode, the cell will have WCG6.1.1 and WAS8.5.5 endpoints. When this occurs, the scheduler must know which PJM implementation to use. This is handled via the scheduler’s RUN_IN_MIXED_MODE custom property.

When RUN_IN_MIXED_MODE is set to true, and a PJM job is submitted using WCG6.1.1 style xJCL, the scheduler expects the hidden PJM application to be running on a WCG6.1.1 endpoint. In this case, the scheduler will dispatch the top level job to a WCG6.1.1 endpoint. If no WCG6.1.1 endpoint is available, the job will remain in “Submitted” state until one is brought up.

When RUN_IN_MIXED_MODE is set to true, and a PJM job is submitted using WAS8.5.5 style xJCL, the scheduler will dispatch the top level job to a WAS8.5.5 server. If no WAS8.5.5 endpoint is available, the job will remain in "Submitted" state until one is started.

When the migration is complete and RUN_IN_MIXED_MODE is set to false, PJM jobs can be submitted using both WCG6.1.1 and WAS8.5.5 style xJCL.

Note that when RUN_IN_MIXED_MODE is set to true, parallel subjobs can be dispatched to both WCG6.1.1 and WAS8.5.5 endpoints. Of course, in order for this to occur, the batch application must be deployed to both WCG6.1.1 and WAS8.5.5 clusters. Although this is supported, it is not considered a best practice. It is not recommended to have subjobs running across endpoints having different build levels of WAS and WCG for an extended period of time. This capability is provided for testing purposes and to aid in the migration process.

The RUN_IN_MIXED_MODE custom property can be set via the Administrative Console as follows:

System administration > Job scheduler > Custom properties

Note that the default value for RUN_IN_MIXED_MODE is false if the custom property is not defined.
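
The property can also be set with wsadmin. The sketch below assumes the job scheduler is represented by a JobScheduler configuration object; verify the object type in your cell (for example, with AdminConfig.types()) before relying on it:

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <user> -password <password> -c "js = AdminConfig.list('JobScheduler'); AdminConfig.create('Property', js, [['name', 'RUN_IN_MIXED_MODE'], ['value', 'true']]); AdminConfig.save()"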

Once all PJM applications have been migrated from the WCG6.1.1 clusters, the RUN_IN_MIXED_MODE custom property must be set to false (or deleted from the scheduler’s Custom properties). Otherwise, top level parallel jobs will never be dispatched. Also note that when RUN_IN_MIXED_MODE is false, the scheduler will translate the old style PJM xJCL into the new WAS8.5.5 style.

Under no circumstances should a WCG6.1.1 endpoint be started when RUN_IN_MIXED_MODE is set to false. This is an unsupported configuration for the Parallel Job Manager and the WCG6.1.1 endpoint will not be able to process PJM top level jobs. If this situation occurs, and a top level job is submitted to a WCG6.1.1 endpoint, the top level job will begin executing and never terminate. However, the top level job hangs before any subjobs are dispatched and recovery from this situation is achieved by stopping the WCG6.1.1 endpoint as soon as possible.

The scheduler’s RUN_IN_MIXED_MODE custom property must be set to true in order for the WAS8.5.5 scheduler to dispatch parallel jobs to a WCG6.1.1 endpoint.


5.4 Prepare DB2 Migration Jobs

The WCG6.1.1 database needs to be migrated to WAS8.5.5 compatibility for the LRS and LREE schemas. There are separate migration jobs for each schema (MIGLRS and MIGLREE respectively). These migration jobs (DDL) are for DB2 on z/OS and are found in the WAS8.5.5 product under the util/Batch directory. These jobs will need to be tailored to match your cell’s implementation of the WCG6.1.1 database.

After these two jobs are executed, the database will be in a state to support mixed mode operation. In other words, the WAS8.5.5 scheduler will be able to dispatch work to both WCG6.1.1 and WAS8.5.5 endpoints.

There is a third migration job that will need to be run when mixed mode operation is no longer needed. The MIGDONE job (DDL) will move the database into a state that only supports WAS8.5.5 operation. This step will be discussed in a later section.

If the starting point for your migration is not WCG6.1.1.6, you will also have to apply the ADDLRS, ADDLREE, and UPDLRS DDL jobs for DB2 on z/OS. These three DDL modifications were required when upgrading to WCG6.1.1.6 and are also required for WAS8.5.5 compatibility. This DDL can be found in the WAS8.5.5 product under the util/Batch directory.
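
For example, you can confirm the DDL members are present before tailoring them (the product install path shown is a placeholder; it is site-specific):

ls <WAS855_Product>/util/Batch

You should see the MIGLRS, MIGLREE, and MIGDONE jobs, along with ADDLRS, ADDLREE, and UPDLRS.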

5.5 Migrate the CG Database

Before starting the WAS8.5.5 scheduler and endpoints, the WCG6.1.1 database will need to be migrated to WAS8.5.5 compatibility. Once migrated, the database will support both WCG6.1.1 and WAS8.5.5 server access.

In mixed mode, it is expected that the WAS8.5.5 scheduler will dispatch batch jobs to both WCG6.1.1 and WAS8.5.5 endpoints. However, once in mixed mode, the WCG6.1.1 scheduler will be disabled and only the WAS8.5.5 scheduler will be allowed to run.

At this point, all schedulers and endpoints in the cell need to be stopped. Follow your normal shutdown procedure to ensure a graceful shutdown.

Before proceeding with the migration of the WCG6.1.1 database to WAS8.5.5 compatibility mode, make sure you back up your database.

Next, run the MIGLREE and MIGLRS migration jobs that were prepared in the previous section.


5.6 Restore the Scheduler’s Compute Grid Configuration

Prior to the migration of the first scheduler node to WAS8.5.5, the scheduler was configured for WCG6.1.1. The scheduler’s configuration for WCG6.1.1 is incompatible with WAS8.5.5. Therefore, when a scheduler’s node is migrated to WAS8.5.5, it must also have its scheduler’s configuration updated (restored) for WAS8.5.5.

Furthermore, the first time the scheduler’s configuration is restored on a WAS8.5.5 node, it will break the scheduler’s ability to run on any WCG6.1.1 server anywhere within the cell. For this reason, stop all WCG6.1.1 schedulers in the cell prior to restoring the scheduler’s configuration on the first WAS8.5.5 node.

The scheduler’s configuration can be restored by using the migrateConfigTo85.py script with the --restoreScheduler option. To perform the restore, change your directory to the migrated node’s <USER_INSTALL_ROOT>/bin directory and execute the following script.

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <user> -password <password> -f <WAS855_Product>/bin/migrateConfigTo85.py --restoreScheduler -node <nameOfNode>

Once the first WAS8.5.5 node has the scheduler’s configuration restored, it is not possible to run a WCG6.1.1 scheduler from that point forward. A system rollback will be required in order to return back to a WCG6.1.1 scheduler.

5.7 Configure Native WSGrid to Run on WAS855 Scheduler (if installed)

This section can be skipped if your cell does not have WSGrid native installed.

If WSGrid native is installed, you will have to create the WSGrid load module for the WAS8.5.5 installation. This is done using the following script.

<WAS855_Product>/bin/unpackWSGRID <was home> <hlq> <work dir> <batch> <debug>



You also need to update the STEPLIB specification for the WSGrid load module in the JCL used to invoke WSGrid jobs.

5.8 Start the Scheduler and Verify Batch Operation

The WAS8.5.5 scheduler will dispatch non-Parallel Job Manager jobs to both WCG6.1.1 and WAS8.5.5 endpoints. Top level jobs that require the Parallel Job Manager are dispatched to either a WCG6.1.1 or WAS8.5.5 endpoint, depending upon the value of the RUN_IN_MIXED_MODE custom property and the style of xJCL used to submit the job.

When this property is set to true, a top level PJM job is dispatched to a WCG6.1.1 endpoint when WCG6.1.1 style xJCL is used. When WAS8.5.5 style xJCL is used, the top level job will be dispatched to a WAS8.5.5 endpoint. In both cases, subjobs are dispatched to both WCG6.1.1 and WAS8.5.5 endpoints depending upon availability.

When RUN_IN_MIXED_MODE is set to false, all PJM jobs (top level and subjobs) are dispatched to WAS8.5.5 endpoints and no WCG6.1.1 endpoints should be running.

Start the WAS8.5.5 scheduler and verify its operation. Verification is easier if the WCG6.1.1 endpoints are started and tested while the WAS8.5.5 endpoints remain down. Once the WCG6.1.1 endpoints are verified, start the WAS8.5.5 endpoints and verify their operation.
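
One way to drive a quick verification job from the command line is the lrcmd utility shipped with the product. A sketch (check the exact option syntax against lrcmd.sh in your installation):

<WAS8.5.5:USER_INSTALL_ROOT>/bin/lrcmd.sh -cmd=submit -xJCL=<pathToXJCL> -host=<schedulerHost> -port=<schedulerHttpPort>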

5.9 Migrate the Remaining Nodes

The remaining nodes can be migrated one at a time, or several at once depending on migration requirements and high availability considerations. Each node being migrated will need to have the following steps performed:

Migrate the node to WAS8.5.5 (section 5.1).

Migrate the SPI property files if the PJM is installed (section 5.2).

Verify the operation of the WAS8.5.5 scheduler and endpoint on the migrated node.

Recall that when the first scheduler node was migrated, there was a required step to restore the scheduler’s configuration. That restoration was performed using cluster scope, hence it does not need to be repeated for each successive node migration.

Once the last node has been migrated, you need to disable RUN_IN_MIXED_MODE if you had the Parallel Job Manager installed in your WCG6.1.1 cell. This is because, while RUN_IN_MIXED_MODE is enabled, top level parallel jobs submitted with WCG6.1.1 style xJCL are dispatched only to WCG6.1.1 endpoints, and none remain once the last node is migrated. Refer to section 7.2.3 for details and for instructions on how to disable RUN_IN_MIXED_MODE.



5.10 The Next Step

Once the last node is migrated, the WAS8.5.5 environment no longer needs to support WCG6.1.1 functionality. Chapter 7 will provide the information you need to know to remove the WCG6.1.1 overhead and complete the migration.

If you had the Parallel Job Manager installed in your WCG6.1.1 cell, you need to disable RUN_IN_MIXED_MODE once the last WCG6.1.1 node is migrated to WAS8.5.5.


6 Scenario 2 Migration Process

In this scenario, new WAS8.5.5 nodes and clusters are created in addition to the existing WCG6.1.1 nodes and clusters.

6.1 Create WAS8.5.5 Nodes

Figure 16 below shows the two WAS8.5.5 nodes that need to be created for our cell, along with the corresponding scheduler and endpoint clusters.

Figure 16: The new WAS8.5.5 Nodes and Clusters

The two new WAS8.5.5 nodes were built as follows:

a) Use the zPMT tool to create a WAS8.5.5 “Managed (custom) node” environment.

b) Use the zPMT tool to upload the customized JCL jobs to the z/OS target system.

c) Follow the “Customization Instructions” in the zPMT tool to create the managed node.

At this point, the WC6 cell now has the following nodes:

Figure 17: WCG6.1.1 and WAS8.5.5 Nodes


6.2 Create WAS8.5.5 Scheduler and Endpoint Static Clusters

The next step is to create three WAS8.5.5 clusters spanning the two new nodes created in the last section: one cluster for the WAS8.5.5 scheduler (Scheduler2) and two for the WAS8.5.5 endpoints (GridEndPointA2 and GridEndPointB2). Each cluster will have one server per node (LPAR). Details for each cluster are shown in Figure 16.

These clusters can be built using the Administrative Console, or via wsadmin.sh scripting. Regardless of which method you choose to create the clusters, make sure the server ports are set appropriately before moving on to the next section.

After creating the WAS8.5.5 clusters, check the port assignments for the cluster members and modify them if necessary.

Figure 18: WAS7.0/WCG6.1.1 and WAS8.5.5 Clusters
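
If you script the cluster creation, the standard AdminTask commands can be used. A sketch for one endpoint cluster and its first member (cluster and node names are from Figure 16; the member name is a placeholder, and ports still need to be checked afterwards as noted above):

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <user> -password <password> -c "AdminTask.createCluster('[-clusterConfig [-clusterName GridEndPointA2]]'); AdminTask.createClusterMember('[-clusterName GridEndPointA2 -memberConfig [-memberNode wc6nd22 -memberName <memberName>]]'); AdminConfig.save()"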

6.3 Add SystemApp WebSphere Environment Variables for WAS8.5.5 clusters

In section 3.1, the addCGSystemAppVariables.py script was run in order to prepare the cell for migration. This script added several instances of the CG_SYSTEM_APP_LOCATION environment variable, each having different levels of scoping (e.g., cell, node).

In section 4.3.2, the --wasmigrate step updated the cell scoped and Deployment Manager scoped values for CG_SYSTEM_APP_LOCATION.

In this section, we need to add an instance of this environment variable for each of the WAS8.5.5 nodes we created in section 6.1. Both variables will have node scope and their value set to the WAS8.5.5 location for system applications (${WAS_INSTALL_ROOT}/systemApps). The figure below shows all instances of this variable with current values, including the new ones for the wc6nd22 and wc6nd32 nodes.



Figure 19: CG_SYSTEM_APP_LOCATION Environment Variables for the Cell
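
The node-scoped variables can be added from the Administrative Console (Environment > WebSphere variables) or with wsadmin. A sketch using the setVariable command (repeat for wc6nd32; the backslash keeps the shell from expanding ${WAS_INSTALL_ROOT}):

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <user> -password <password> -c "AdminTask.setVariable('[-variableName CG_SYSTEM_APP_LOCATION -variableValue \${WAS_INSTALL_ROOT}/systemApps -scope Node=wc6nd22]'); AdminConfig.save()"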

Before moving on to the next step in the Scenario 2 migration process, it is recommended to start the WAS8.5.5 clusters and verify their operation.

6.4 Create and Configure WAS8.5.5 JDBC Providers and JDBC Data Sources

6.4.1 Create New JDBC Providers and Data Sources

In our WC6 cell configuration, the WAS7.0/WCG6.1.1 scheduler and endpoint data sources were created using a “DB2 Universal JDBC Driver Provider (XA)”. With respect to these existing data sources, it is important to note the following points:

If a server’s node is migrated from WAS7.0/WCG6.1.1 to WAS8.5.5, then the migrated Compute Grid server can continue using the same data source it used prior to the migration.

However, if a Compute Grid data source was created in a WAS7.0 environment, it cannot be used by a Compute Grid server residing in a newly created WAS8.5.5 node. Note that this restriction applies to Compute Grid data sources and not to data sources used by batch applications.

For this reason, new data sources (and providers) for the WAS8.5.5 scheduler and endpoint clusters must be created. The table below summarizes the naming conventions for the existing WAS7.0 data sources and for the new WAS8.5.5 data sources in our cell.


Cluster        | WAS Version | Data Source Name       | JNDI Name        | Scoping                | Comment
GridEndPointA  | 7.0         | lreeDataSource         | jdbc/lreeDB2     | Cell=wc6cell           | Created in a WAS7.0 environment and will be used for the life of the WAS7.0 clusters.
GridEndPointB  | 7.0         | lreeDataSource         | jdbc/lreeDB2     | Cell=wc6cell           | Created in a WAS7.0 environment and will be used for the life of the WAS7.0 clusters.
Scheduler      | 7.0         | JobSchedulerDataSource | jdbc/jobSchedDB2 | Cell=wc6cell           | Created in a WAS7.0 environment and will be used for the life of the WAS7.0 clusters.
GridEndPointA2 | 8.5         | lreeDataSource         | jdbc/lreeDB2     | Cluster=GridEndPointA2 | Created in a WAS8.5.5 environment and will be used by WAS8.5.5 clusters.
GridEndPointB2 | 8.5         | lreeDataSource         | jdbc/lreeDB2     | Cluster=GridEndPointB2 | Created in a WAS8.5.5 environment and will be used by WAS8.5.5 clusters.
Scheduler2     | 8.5         | JobSchedulerDataSource | jdbc/jobSchedDB2 | Cluster=Scheduler2     | Created in a WAS8.5.5 environment and will be used by WAS8.5.5 clusters.

Table 3: Cell Data Sources

In this table, note all JNDI names for the job scheduler data sources are the same, as are the JNDI names for the end point data sources. The WAS7.0/WCG6.1.1 servers will use cell scoping when resolving the JNDI name, while the new WAS8.5.5 servers will use cluster scoping to resolve JNDI names. Cell scope resolution will locate the WAS7.0/WCG6.1.1 data sources, while cluster level resolution will return WAS8.5.5 data sources.

To avoid this confusion, we renamed all existing providers to include a WAS7.0 suffix. We then used the default provider names for the WAS8.5.5 JDBC providers. We chose to rename the WAS7.0 providers since they will be removed from the configuration once all applications are migrated to a WAS8.5.5 endpoint.

New WAS8.5.5 JDBC providers must be created before creating WAS8.5.5 data sources.

Note that the default provider names for the “DB2 Universal JDBC Driver” providers are the same in WAS8.5.5 as they are in WAS7.0.

To avoid confusion, make sure existing WAS7.0 provider names are distinct from the new WAS8.5.5 provider names. Otherwise, it will be difficult to distinguish between them when referencing them in the Administration Console.


JDBC Provider Name                               | Comment
DB2 Universal JDBC Driver Provider – WAS7.0      | Created in the WAS7.0 cell and can only be used by WAS7.0 servers. Both providers have cell scope.
DB2 Universal JDBC Driver Provider (XA) – WAS7.0 | Created in the WAS7.0 cell and can only be used by WAS7.0 servers. Both providers have cell scope.
DB2 Universal JDBC Driver Provider               | Created in the WAS8.5.5 cell and can only be used by WAS8.5.5 servers. Both providers have multiple instances, all at cluster scope.
DB2 Universal JDBC Driver Provider (XA)          | Created in the WAS8.5.5 cell and can only be used by WAS8.5.5 servers. Both providers have multiple instances, all at cluster scope.

Table 4: DB2 Universal JDBC Driver Naming Conventions

6.4.2 Configure LREE Environment Variables

After creating the WAS8.5.5 data sources, the next step is to update the WAS8.5.5 endpoint configurations so they can use the new data sources.

In WAS8.5.5, job steps are POJOs (Plain Old Java Objects) for both compute-intensive and transactional batch applications. These POJOs are invoked directly, and WAS8.5.5 xJCL identifies them by class name. To support the class name lookup, two additional environment variables need to be defined for each WAS8.5.5 endpoint cluster. First, we need to create an instance of GRID_ENDPOINT_DATASOURCE with cluster scope. The value of this variable is the JNDI name of the corresponding data source created in the previous section (jdbc/lreeDB2).

We also need to create an instance of GRID_ENDPOINT_DATABASE_SCHEMA with cluster scope and set its value to the schema used in the LREE database. For our cell, the value is LREEWC6. For your cell, the value was identified in section 3.1 when you were preparing the cell for migration. Figure 20 below shows the two pairs of LREE environment variables that have been created for the two WAS8.5.5 endpoints in our cell.

Figure 20: LREE Environment Variables for WAS8.5.5 Endpoint Clusters
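
The same setVariable approach sketched in section 6.3 can be used here, this time at cluster scope (values are from our cell; repeat for GridEndPointB2, and verify the scope syntax in your cell):

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <user> -password <password> -c "AdminTask.setVariable('[-variableName GRID_ENDPOINT_DATASOURCE -variableValue jdbc/lreeDB2 -scope Cluster=GridEndPointA2]'); AdminTask.setVariable('[-variableName GRID_ENDPOINT_DATABASE_SCHEMA -variableValue LREEWC6 -scope Cluster=GridEndPointA2]'); AdminConfig.save()"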

6.4.3 Configure the currentSchema Custom Property

In the previous subsection, we configured the schema for a WAS8.5.5 endpoint using the GRID_ENDPOINT_DATABASE_SCHEMA environment variable. This configuration is required to support the use of class names to identify job steps in WAS8.5.5 xJCL.

In WCG 6.1.1, transactional batch job steps were also implemented as POJOs, but were wrapped and deployed as Container Managed Persistence (CMP) beans. In order to look up and invoke these CMPs, a JNDI name was required and supplied in the WCG6.1.1 xJCL.


The implementation of JNDI lookup for job step CMPs in WCG6.1.1 also requires the use of the endpoint schema. However, in WCG6.1.1, the endpoint schema was specified via the currentSchema custom property in the endpoint data source. Since both WCG6.1.1 xJCL and WAS8.5.5 xJCL are supported in WAS8.5.5, we need to set the currentSchema custom property on the WAS8.5.5 endpoint data sources so that WCG6.1.1 xJCL can be used.
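
A sketch of setting currentSchema on one of the new data sources with wsadmin (the containment path assumes the provider and data source names from Table 3 and Table 4; the schema value is our cell's):

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <user> -password <password> -c "ds = AdminConfig.getid('/ServerCluster:GridEndPointA2/JDBCProvider:DB2 Universal JDBC Driver Provider (XA)/DataSource:lreeDataSource/'); ps = AdminConfig.showAttribute(ds, 'propertySet'); AdminConfig.create('J2EEResourceProperty', ps, [['name', 'currentSchema'], ['value', 'LREEWC6'], ['type', 'java.lang.String']]); AdminConfig.save()"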

6.5 Migrate SPI Property Files (if PJM Installed)

This section can be skipped if your cell does not use Compute Grid’s Parallel Job Manager.

Parallel job execution invokes a System Programming Interface (SPI), which is an extension to the execution environment. The SPI is configured using the xd.spi.properties file located in a node’s <USER_INSTALL_ROOT>/properties directory. In addition to the xd.spi.properties file, other property files may be used to configure PJM applications.

At this time, copy the SPI property files located under the <USER_INSTALL_ROOT>/properties directory of the existing WCG6.1.1 nodes to the newly created WAS8.5.5 nodes.

6.6 Prepare DB2 Migration Jobs

The WCG6.1.1 database needs to be migrated to WAS8.5.5 compatibility for the LRS and LREE schemas. There are separate migration jobs for each schema (MIGLRS and MIGLREE respectively). These migration jobs (DDL) are for DB2 on z/OS and are found in the WAS8.5.5 product under the util/Batch directory. These jobs will need to be tailored to match your cell’s implementation of the WCG6.1.1 database.

After these two jobs are executed, the database will be in a state to support mixed mode operation. In other words, the WAS8.5.5 scheduler will be able to dispatch work to both WCG6.1.1 and WAS8.5.5 endpoints.

There is a third migration job that will need to be run when mixed mode operation is no longer needed. The MIGDONE job (DDL) will move the database into a state that only supports WAS8.5.5 operation. This step will be discussed in a later section.

If the starting point for your migration is not WCG6.1.1.6, you will also have to apply the ADDLRS, ADDLREE, and UPDLRS DDL jobs for DB2 on z/OS. These three DDL modifications were required when upgrading to WCG6.1.1.6 and are also required for WAS8.5.5 compatibility.

6.7 Migrate the CG Database

Before starting the WAS8.5.5 scheduler and endpoints, the WCG6.1.1 database will need to be migrated to WAS8.5.5 compatibility. Once migrated, the database will support both WCG6.1.1 and WAS8.5.5 server access.

In mixed mode, it is expected that the WAS8.5.5 scheduler will dispatch batch jobs to both WCG6.1.1 and WAS8.5.5 endpoints. However, once in mixed mode, the WCG6.1.1 scheduler will be disabled and only the WAS8.5.5 scheduler will be allowed to run.


At this point, all schedulers and endpoints in the cell need to be stopped. Follow your normal shutdown procedure to ensure a graceful shutdown.

Before proceeding with the migration of the WCG6.1.1 database to WAS8.5.5 compatibility mode, make sure you back up your database.

Next, run the MIGLREE and MIGLRS migration jobs that were prepared in the previous section.

6.8 Configure the Scheduler

There are a few steps that need to be taken to configure the WAS8.5.5 scheduler.

6.8.1 Configure the Scheduler Hosted By Attribute

Configuring the “Scheduler hosted by” attribute involves four steps. Note that you must perform all four steps in the sequence indicated below. If you skip steps a) and b) and only do steps c) and d), the configuration of the scheduler will not work.

a) Using the Administration Console, navigate to the System administration > Job Scheduler panel and set the “Scheduler hosted by” attribute to none.


b) Save and synchronize the nodes.

c) Next, set the “Scheduler hosted by” attribute to the WAS8.5.5 cluster designated for the scheduler. In our cell, this is Scheduler2.


d) Save and synchronize the nodes.

6.8.2 Verify the Rest of the Scheduler’s Configuration

After configuring the “Scheduler hosted by” attribute, verify the “Database schema name” and “Data source JNDI name” attributes are correct. It is also a good idea to make sure the “WebSphere grid endpoints” panel is correct. If you update any of these values, make sure you save and synchronize the changes with the nodes.



6.9 Enable RUN_IN_MIXED_MODE Custom Property (if PJM Installed)

This section can be skipped if your cell does not use Compute Grid’s Parallel Job Manager.

In WCG6.1.1, the Parallel Job Manager is implemented as a hidden system application and the parallelJobManager.py script is used to install and configure the PJM. In WAS8.5.5, the PJM is now part of the grid endpoint environment and does not have to be installed separately. Also note that the top level xJCL for parallel jobs has changed between WCG6.1.1 and WAS8.5.5.

These differences in PJM implementation and xJCL style impact the Scenario 2 migration process. When in mixed mode, the cell will have WCG6.1.1 and WAS8.5.5 clusters. When this occurs, the scheduler must know which PJM implementation to use. This is handled via the scheduler’s RUN_IN_MIXED_MODE custom property.

When RUN_IN_MIXED_MODE is set to true, and a PJM job is submitted using WCG6.1.1 style xJCL, the scheduler expects the hidden PJM application to be running on a WCG6.1.1 endpoint. In this case, the scheduler will dispatch the top level job to a WCG6.1.1 endpoint. If no WCG6.1.1 endpoint is available, the job will remain in “Submitted” state until one is brought up.

When RUN_IN_MIXED_MODE is set to true, and a PJM job is submitted using WAS8.5.5 style xJCL, the scheduler will dispatch the top level job to a WAS8.5.5 server. If no WAS8.5.5 endpoint is available, the job will remain in "Submitted" state until one is started.

When the migration is complete and RUN_IN_MIXED_MODE is set to false, PJM jobs can be submitted using both WCG6.1.1 and WAS8.5.5 style xJCL.

Note that when RUN_IN_MIXED_MODE is set to true, parallel subjobs can be dispatched to both WCG6.1.1 and WAS8.5.5 endpoints. Of course, in order for this to occur, the batch application must be deployed to both WCG6.1.1 and WAS8.5.5 clusters. Although this is supported, it is not considered a best practice. It is not recommended to have subjobs running across endpoints having different build levels of WAS and WCG. This capability is provided for testing purposes and to aid in the migration process.

The RUN_IN_MIXED_MODE custom property can be set via the Administrative Console as follows:

System administration > Job scheduler > Custom properties

Note that the default value for RUN_IN_MIXED_MODE is false if the custom property is not defined.

Once all PJM applications have been migrated from the WCG6.1.1 clusters, the RUN_IN_MIXED_MODE custom property must be set to false (or deleted from the scheduler’s Custom properties). Otherwise, top level parallel jobs will never be dispatched. Also note that when RUN_IN_MIXED_MODE is false, the scheduler will translate the old style PJM xJCL into the new WAS8.5.5 style.

The scheduler’s RUN_IN_MIXED_MODE custom property must be set to true in order for the WAS8.5.5 scheduler to dispatch parallel jobs to a WCG6.1.1 endpoint.


Under no circumstances should a WCG6.1.1 endpoint be started when RUN_IN_MIXED_MODE is set to false. This is an unsupported configuration for the Parallel Job Manager and the WCG6.1.1 endpoint will not be able to process PJM top level jobs. If this situation occurs, and a top level job is submitted to a WCG6.1.1 endpoint, the top level job will begin executing and never terminate. However, the top level job hangs before any subjobs are dispatched and recovery from this situation is achieved by stopping the WCG6.1.1 endpoint as soon as possible.

6.10 Configure Native WSGrid to Run on WAS855 Scheduler (if installed)

This section can be skipped if your cell does not have WSGrid native installed.

If WSGrid native is installed, it will have to be uninstalled from the WCG6.1.1 scheduler and reinstalled on the WAS8.5.5 scheduler.

There are two Compute Grid scripts available to assist with configuring WSGrid native on z/OS. Use the installWSGridMQ.py script if WSGrid is configured for MQ Bindings mode, or use the installWSGridMQClientMode.py script if WSGrid is configured for MQ Client mode. Since our cell has WSGrid configured for Bindings mode, we demonstrate this process using installWSGridMQ.py. The Client mode process is analogous.

For MQ Bindings mode, change your directory to the WAS8.5.5 Deployment Manager’s <USER_INSTALL_ROOT>/bin directory and execute the following scripts to update the WSGrid configuration.

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <ID> -password <PW> -f <WAS855_Product>/bin/installWSGridMQ.py -remove -cluster <WCG611_Scheduler> -qmgr <QMgr> -inqueue <InQueue> -outqueue <OutQueue>

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <ID> -password <PW> -f <WAS855_Product>/bin/installWSGridMQ.py -install -cluster <WAS855_Scheduler> -qmgr <QMgr> -inqueue <InQueue> -outqueue <OutQueue>


The next step is to create the WSGrid load module for the WAS8.5.5 installation. This is done using the following script.

<WAS855_Product>/bin/unpackWSGRID <was home> <hlq> <work dir> <batch> <debug>

You also need to update the STEPLIB specification for the WSGrid load module in the JCL used to invoke WSGrid jobs.

6.11 Start the Scheduler and Verify Mixed Mode Batch Operation

6.11.1 Verify WAS8.5.5 Scheduler with WCG6.1.1 Endpoints

At this point, the WAS8.5.5 scheduler has been configured and is ready to begin dispatching jobs to the WCG6.1.1 endpoint cluster. Start the WAS8.5.5 scheduler and the WCG6.1.1 endpoints and verify that batch processing is working correctly.

6.11.2 Verify WAS8.5.5 Scheduler with WCG6.1.1 and WAS8.5.5 Endpoints

The WAS8.5.5 scheduler will also be able to dispatch jobs to a WAS8.5.5 endpoint once a batch application has been deployed to it. You can deploy an existing application to run on both WCG6.1.1 and WAS8.5.5 endpoints, or install a new application to the WAS8.5.5 endpoints.

The Administrative Console can be used to deploy an existing application to both WCG6.1.1 and WAS8.5.5 clusters using the “Applications > Enterprise Applications > [App] > Manage Modules” panel. This is demonstrated in Figure 21, which maps the pjmAppA_EJBs module to both the GridEndPointA (WCG6.1.1) and GridEndPointA2 (WAS8.5.5) clusters.


Note that you should not deploy new applications to WCG6.1.1 endpoints after the Deployment Manager has been upgraded to WAS8.5.5.

When running in mixed mode, applications should be deployed only to servers that are running at the latest level of WebSphere.


Figure 21: pjmApp_A Deployed to a WCG6.1.1 and WAS8.5.5 Endpoint Cluster
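
The same module mapping can be scripted. A sketch using AdminApp.edit (the application name and module URI here are hypothetical; take the real module URI from the Manage Modules panel, and note the “+” that targets both clusters):

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <user> -password <password> -c "AdminApp.edit('pjmAppA', ['-MapModulesToServers', [['pjmAppA_EJBs', '<moduleURI>', 'WebSphere:cell=wc6cell,cluster=GridEndPointA+WebSphere:cell=wc6cell,cluster=GridEndPointA2']]]); AdminConfig.save()"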

Note that if an existing application uses a Shared LIB SPI on the WCG6.1.1 endpoint, and is deployed to a WAS8.5.5 endpoint, then the WAS8.5.5 endpoint must be configured to have access to the existing shared library. The endpoint can be configured using the configCGSharedLib.py script found in the product file system. To do so, change your directory to the WAS8.5.5 Deployment Manager’s <USER_INSTALL_ROOT>/bin directory and execute the following script to create a shared library:

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <ID> -password <PW> -f <WAS855_Product>/bin/configCGSharedLib.py -sharedLibraryName <nameOfSharedLib> -sharedLibraryPath <pathOfSharedLib>

Once an application is verified on the WAS8.5.5 cluster, it can be removed from the WCG6.1.1 endpoint.

6.11.3 Possible EJBConfigurationException and Work Around

There is a known issue with EJB 3.0 entity beans that could generate the following exception when an application uses “Container Managed Persistence Commit Option A” in a clustered environment.

WSVR0068E: Attempt to start EnterpriseBean ..... failed with exception:
com.ibm.ejs.container.EJBConfigurationException: Using Commit Option A with workload managed server is not supported.



If you do encounter this issue, setting the following JVM custom property for each cluster member will resolve the problem:

Application servers > [server] > Process definition > Servant > Java Virtual Machine > Custom properties > New

com.ibm.websphere.ejbcontainer.wlmAllowOptionAReadOnly=true

Note that we have seen this problem with the XDCGIVT sample application in the Scenario 2 migration process. However, we have not encountered this issue using the Scenario 1 migration process.

6.12 The Next Step

The cell will remain in a mixed mode state until all WCG6.1.1 batch applications have been either migrated to WAS8.5.5 clusters or replaced by new WAS8.5.5 applications. At that time, the migration process can be completed and the cell moved to a homogeneous WAS8.5.5 cell. Refer to Chapter 7 for instructions on how to complete the migration.


7 Complete the Migration (Scenarios 1 and 2)

At this point, the WAS8.5.5 environment for Scenarios 1 and 2 no longer needs to support WCG6.1.1 functionality. This chapter provides the information you need to know to remove the WCG6.1.1 overhead and to complete the migration.

The steps to complete the migration are very similar for migration Scenarios 1 and 2. There are a few differences, which will be addressed in the steps below.

7.1 Criteria for Completing the Migration

In Scenario 1, each WCG6.1.1 node is migrated to WAS8.5.5. Once the migration of the last node has been completed, you are now ready to complete the migration using the steps below.

In Scenario 2, new WAS8.5.5 nodes and clusters were built to replace their WCG6.1.1 counterparts. The cell will remain in a mixed mode state until all WCG6.1.1 batch applications have been either migrated to WAS8.5.5 clusters or replaced by new WAS8.5.5 applications. Once this point has been reached, you are ready to complete the migration using the steps below.

7.2 Transition from Mixed Mode to WAS8.5.5 Mode

The following three steps are required when moving from mixed mode to WAS8.5.5 mode, regardless of which migration scenario you used. Note that all Compute Grid schedulers and endpoints must be stopped while these steps are performed.

7.2.1 Perform --afterMigrationCleanUp

In subsection 4.3.6, we explained why some endpoint servers log a ClassNotFoundException in the servant region when starting. The ClassNotFoundException is a byproduct of the cluster having to support servers at different Compute Grid levels. Because the endpoint system application is different between the Compute Grid levels, each server attempts to load both. Based on its configuration, the server will find one of the system apps and complain about not finding the other.

Now that the cell is fully migrated (no longer in mixed mode), a grid endpoint server no longer needs to attempt to load both system apps. The --afterMigrationCleanUp script will uninstall the WCG6.1.1 GEE app from all endpoint clusters, which, in turn, removes the ClassNotFoundException thrown each time a server is started. The --afterMigrationCleanUp script also removes various other items in the configuration that were there solely to support the migration and are no longer needed.

To perform this cleanup, stop all scheduler and endpoint servers. Change your directory to the WAS8.5.5 Deployment Manager’s <USER_INSTALL_ROOT>/bin directory and execute the following script.

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <user> -password <password> -f <WAS855_Product>/bin/migrateConfigTo85.py --afterMigrationCleanUp -dmgrNodeName <DMgrNodeName> -cellName <CellName>



7.2.2 Remove Mixed Mode Capability from the CG Database

When the cell is running in mixed mode, the CG database must support both WAS8.5.5 and WCG6.1.1 SQL calls. After the migration is complete, there is no need to support WCG6.1.1 access. When this point is reached, the database can be modified to remove WCG6.1.1 access.

There is a single DDL job (MIGDONE) to remove WCG6.1.1 support from the database. This DDL is located in the WAS8.5.5 product under the util/Batch directory.

Modify the MIGDONE DDL in a similar manner as you did when preparing the migration jobs (MIGLRS and MIGLREE) in section 5.4 and section 6.6 for Scenarios 1 and 2 respectively.

Removing mixed mode support from the Compute Grid database will make database access more efficient. The overhead of supporting both WCG6.1.1 and WAS8.5.5 does impact database efficiency.

All scheduler and endpoint servers must be stopped before executing the MIGDONE job. This job alters the database, and all database access must be stopped before proceeding.

7.2.3 Disable RUN_IN_MIXED_MODE Custom Property (if PJM Installed)

When RUN_IN_MIXED_MODE is set to true, the scheduler expects the hidden PJM application to be running on a WCG6.1.1 endpoint. When a job is submitted for parallel job execution with RUN_IN_MIXED_MODE set to true, the scheduler will dispatch the top level job to a WCG6.1.1 endpoint. If no WCG6.1.1 endpoint is available, the job will remain in “Submitted” state until one is brought up.



At this point in the migration process, there are no more WCG6.1.1 endpoints with batch applications deployed to them. For Scenario 1, all WCG6.1.1 nodes have been migrated to WAS8.5.5. For Scenario 2, the WCG6.1.1 endpoints no longer have any applications deployed.

Once this state is reached, we need to tell the scheduler that top level jobs for parallel job execution must be dispatched to a WAS8.5.5 endpoint. This is accomplished by setting the scheduler’s RUN_IN_MIXED_MODE custom property to false, or simply deleting this property from the configuration (RUN_IN_MIXED_MODE defaults to false if not defined).

After modifying the RUN_IN_MIXED_MODE custom property, save and synchronize your changes, and restart the schedulers and endpoints. At this point, your Compute Grid environment is now at the WAS8.5.5 level.
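
If you scripted the property earlier, the removal can be scripted as well. A sketch that deletes the property, again assuming the job scheduler is represented by a JobScheduler configuration object (verify the type in your cell first):

<WAS8.5.5:USER_INSTALL_ROOT>/bin/wsadmin.sh -lang jython -conntype SOAP -host <host> -port <port> -user <user> -password <password> -c "js = AdminConfig.list('JobScheduler'); [AdminConfig.remove(p) for p in AdminConfig.list('Property', js).splitlines() if AdminConfig.showAttribute(p, 'name') == 'RUN_IN_MIXED_MODE']; AdminConfig.save()"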

7.3 Delete WAS7.0/WCG6.1.1 Nodes (for Scenario 2)

Now that all Compute Grid applications, schedulers, and endpoints are running on WAS8.5.5 servers, the original WAS7.0/WCG6.1.1 clusters and nodes are no longer needed and should be removed from the cell. Figure 22 below shows the WAS7.0/WCG6.1.1 servers, clusters, and nodes that can be retired.

Figure 22: WCG6.1.1 Nodes and Clusters

Deleting the WAS7.0/WCG6.1.1 nodes will delete all servers residing under the nodes, but will not remove the clusters that span the nodes. If you have no further plans for the original clusters, you should also remove them at this time. Note that you must start the WAS7.0/WCG6.1.1 node agents prior to deleting the nodes; the deletion process will stop each node agent after its node has been removed.

Restart the Compute Grid schedulers and endpoints.


7.4 Updating WCG6.1.1 xJCL to WAS8.5.5 xJCL (optional)

The migration of WCG6.1.1 xJCL to WAS8.5.5 xJCL is optional, but is highly recommended.

7.4.1 Overview of WCG6.1.1 Batch Jobs

In WCG 6.1.1, transactional batch job steps were implemented as POJOs, but were wrapped and deployed as Container Managed Persistence (CMP) beans. In order to look up and invoke these CMPs, a JNDI name was required and the LREE database had tables to support persistence of these beans. The LREE database provided a system table (POJO table) that mapped the JNDI name to the job step CMP.

In WCG 6.1.1, compute-intensive (CI) job steps were not wrapped with CMPs. Instead, the POJOs were invoked directly and there was no need for a JNDI name to look up and invoke the CI job steps.

7.4.2 What has Changed in WAS8.5.5

Starting in WCG8.0, there are job step POJOs for both compute-intensive and transactional batch job steps, and this carries forward into WAS8.5.5. These POJOs are invoked directly; hence there are no CMPs and no step level JNDI names. Since there is no CMP, there is no need for a system table to map JNDI names to the corresponding job step CMP. Therefore, the POJO table found in WCG6.1.1 is no longer necessary in WAS8.5.5, provided that only WAS8.5.5 xJCL is used.

However, we saw in mixed mode that WCG6.1.1 xJCL was supported in WAS8.5.5 for the existing WCG6.1.1 applications. It was supported because (i) the POJO table needed to do the JNDI lookup already existed in the WCG6.1.1 DDL, and (ii) the WAS8.5.5 scheduler used the POJO table to automatically convert the JNDI reference to the corresponding class name.

Although you could continue to use the WCG6.1.1 xJCL for the WCG6.1.1 apps that have been deployed to the WAS8.5.5 endpoints, this level of indirection is performed by the WAS8.5.5 scheduler each time a job is submitted using the old WCG6.1.1 xJCL. Upgrading the WCG6.1.1 xJCL to WAS8.5.5 xJCL is straightforward and recommended.

7.4.3 Migrating WCG6.1.1 xJCL to WAS8.5.5 xJCL

Compute-intensive applications use class names to reference job steps in both WCG6.1.1 and WAS8.5.5. Therefore, no change to their xJCL is required.

Transactional batch application xJCL can be migrated to WAS8.5.5 format by replacing all JNDI name references to job steps with their corresponding job step class name.

For example, consider the XDCGIVT sample application that was provided with the WCG6.1.1 product code. We can migrate its WCG6.1.1 xJCL to WAS8.5.5 xJCL as follows. Note the WCG6.1.1 xJCL is commented out.


<job-step name="IVTStep1">
  <!-- jndi-name>ejb/GenerateDataStep</jndi-name -->
  <classname>com.ibm.websphere.batch.samples.tests.steps.GenerateDataStep</classname>

<job-step name="IVTStep2">
  <!-- jndi-name>ejb/GenericXDBatchStep</jndi-name -->
  <classname>com.ibm.websphere.batch.devframework.steps.technologyadapters.GenericXDBatchStep</classname>

<job-step name="IVTStep3">
  <!-- jndi-name>ejb/DataIntegrityVerificationStep</jndi-name -->
  <classname>com.ibm.websphere.batch.samples.tests.steps.DataIntegrityVerificationStep</classname>


8 Scenario 3 Migration Process

The term “migration” can be used in many ways. We consider Scenarios 1 and 2 as migrations, since the original cell is still used when the migration process is complete. In this chapter, we discuss the Scenario 3 migration process, but note that in the strictest sense of the word, this is not a migration. Instead, it consists of building a new WAS8.5.5 cell from scratch and taking the necessary steps to seamlessly replace the original WCG6.1.1 cell with the new cell.

Figure 23 below has the WCG6.1.1 starting topology for Scenario 3 to the left of the dashed line in the diagram. The WAS8.5.5 target topology is to the right of the dashed line. These are two distinct cells residing on sysplexes A and B respectively.

Figure 23: Starting and Target Topologies for Migration Scenario 3

In this chapter, we document how to make the transition from the WCG6.1.1 cell on the left to the WAS8.5.5 cell on the right as seamlessly as possible.


8.1 Overview of the Active Database Transition from WCG6.1.1 to WAS8.5.5

The starting and target topologies shown in Figure 23 above refer to the Active database. In the diagram, we see that the Active database is initially in WCG6.1.1 format. This is the format of the database resulting from the application of WCG6.1.1 DDL.

There have been changes made to the WCG6.1.1 database that were required for Compute Grid 8.0, which are also required for WAS8.5.5. Due to these changes, the WCG6.1.1 database must be migrated to what we refer to as WCG6.1.1 / WAS8.5.5 compatibility mode before the WAS8.5.5 cell can use it. When in this mode, both the WCG6.1.1 and WAS8.5.5 cells can use the same database. However, access to the Active database must be restricted to one cell at a time.

In Scenario 3, the transition from the WCG6.1.1 cell to the WAS8.5.5 cell occurs when the Active database is in mixed mode. For this transition to occur, the WCG6.1.1 cell is stopped, the database is migrated to mixed mode, and then the WAS8.5.5 cell is started. Once the WAS8.5.5 cell is up, job submission using the WCG6.1.1 xJCL can resume and the WAS8.5.5 cell will process the jobs that were formerly handled by the WCG6.1.1 cell.

The Active database will remain in mixed mode for a period of time. This will allow the WAS8.5.5 behavior to be monitored and correct cell operation confirmed. Once there is a high level of confidence that the transition to the WAS8.5.5 cell has been successful, the Active database will then be migrated to WAS8.5.5 mode. This migration removes the overhead that is necessary to support mixed mode. Once in WAS8.5.5 mode, it is not possible for the WCG6.1.1 cell to use the database.

The transition of the Active database from WCG6.1.1 mode to WAS8.5.5 mode is a two-step process so that, if a problem surfaces with the migration, the WAS8.5.5 cell can be stopped and the WCG6.1.1 cell restarted. The Active database is only migrated to WAS8.5.5 mode once it is certain there will be no need to go back to the WCG6.1.1 cell.

The transition from the WCG6.1.1 cell to the WAS8.5.5 cell is done after the WAS8.5.5 cell has been built, configured, and thoroughly tested. In order to do this testing without impacting the Active database, a Test database is used instead.

8.2 Create the Test Database

The sample DDL for building the WCG6.1.1 database is located in the following directory:

<WCG611_Product>/longRunning

For DB2 on z/OS, the SPFLRS and SPFLREE DDLs construct the CG database using distinct schemas for the scheduler and endpoint respectively. SPFPJM is used for constructing the required definitions used by the Parallel Job Manager. ADDLRS, ADDLREE, and UPDLRS DDL must also be applied for WCG6.1.1.6, which is the required starting point for migrating the database to WCG6.1.1 / WAS8.5.5 compatibility mode.

Next, run the MIGLREE and MIGLRS migration jobs to put the Test database into compatibility mode.

8.3 Create the WAS8.5.5 Target Cell and Test

The WAS8.5.5 target environment needs to be constructed in a manner that will allow it to replace the WCG6.1.1 environment. The cell does not have to be identical, but does need to be compatible.


Note that this is an opportunity to alter the topology based on lessons learned with the current WCG6.1.1 cell.

The initial Compute Grid configuration will use the Test database constructed in the previous section. Using the Test database allows you to test both the cell and the process that will be followed when making the transition from the WCG6.1.1 cell to this new WAS8.5.5 cell.

After the cell is built, the WCG6.1.1 applications are deployed and tested. Initially, these applications are tested using the WCG6.1.1 xJCL. Note that not having to migrate application xJCL during the transition from the WCG6.1.1 cell to the WAS8.5.5 cell simplifies the transition process. The migration of application xJCL to WAS8.5.5 format can be deferred until after the transition to the WAS8.5.5 cell is complete.

It is recommended, however, that WCG6.1.1 xJCL eventually be migrated to WAS8.5.5 format. For this reason, it is a good idea to test the xJCL migration process at this time. Refer to section 7.4, "Updating WCG6.1.1 xJCL to WAS8.5.5 xJCL (optional)", for why this should be done and how to do it.

8.4 Make the Transition from the WCG6.1.1 Cell to the WAS8.5.5 Cell

At this point, it is now possible to make the transition to the WAS8.5.5 cell. The following subsections describe this process.

8.4.1 Point the WAS8.5.5 Cell to the Active Database

The Test database is no longer needed. It was used to verify the WAS8.5.5 cell operation in preparation for the transition from the WCG6.1.1 cell to the WAS8.5.5 cell.

At this time, we need to update the WAS8.5.5 cell's data sources so they point to the Active database. Before doing this, stop all WAS8.5.5 schedulers and endpoints; only the Deployment Manager and node agents should be running when this step is performed. Update all data sources to point to the Active database, test the connections via the Administrative Console data source panel, save, and synchronize the nodes. Do not start any WAS8.5.5 schedulers or endpoints until the WCG6.1.1 cell has been stopped.

8.4.2 Migrate the Active Database to WCG6.1.1 / WAS8.5.5 Compatibility Mode

All WCG6.1.1 schedulers and endpoints must be stopped before the Active database can be migrated to WCG6.1.1 / WAS8.5.5 compatibility mode. This is also a good time to stop the rest of the WCG6.1.1 cell.

Next, run the MIGLREE and MIGLRS migration jobs to put the Active database into compatibility mode.
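
The hypothetical DSNTEP2 job shell sketched in section 8.2 can be reused for this step; only the SYSIN members (MIGLRS and MIGLREE) and, if the Active database resides on a different DB2 subsystem than the Test database, the DSN SYSTEM value need to change.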


8.4.3 Update the IP Sprayer

Note that the term IP Sprayer covers any component that delivers batch job traffic to the cell, such as a hardware load balancer, HTTP server, or proxy server.

At this point, the WCG6.1.1 cell should be completely stopped. Next, start the WAS8.5.5 schedulers and endpoints and verify that there are no issues in the address space logs. Lastly, have your network team update the IP Sprayer so that it routes all incoming Compute Grid traffic to the WAS8.5.5 cell.

8.4.4 Verify WAS8.5.5 Cell Operation

Verify all cell operations to confirm that the transition from the WCG6.1.1 cell to the WAS8.5.5 cell was successful.

In the event there is a problem, you can reverse the steps in the previous sections to transition back to the WCG6.1.1 cell. In other words, stop the WAS8.5.5 cell, update the IP Sprayer to route incoming Compute Grid traffic to the WCG6.1.1 cell, and start the WCG6.1.1 cell. Once the problem in the WAS8.5.5 cell has been rectified, repeat the transition to the WAS8.5.5 cell.

8.5 Complete the Migration

The following sections address some additional steps that should be performed once the transition to the WAS8.5.5 cell is complete and deemed successful.

8.5.1 Convert WCG6.1.1 xJCL to WAS8.5.5 xJCL (optional)

This step is optional but highly recommended. Performing it was strongly encouraged in section 8.3, when the WAS8.5.5 cell was created and tested. If that testing was done, you have already migrated your application xJCL. If not, refer to section 7.4 for instructions on how to migrate WCG6.1.1 xJCL to WAS8.5.5 format.

8.5.2 Remove Mixed Mode Capability from the CG Database

While the database is in mixed mode, it must support both WAS8.5.5 and WCG6.1.1 SQL calls. After the migration is complete, there is no need to retain WCG6.1.1 access, and the database can be modified to remove it.

Removing mixed mode support makes database access more efficient, since the overhead of supporting both WCG6.1.1 and WAS8.5.5 access does impact database efficiency.


There is a single DDL job (MIGDONE) that removes WCG6.1.1 support from the database. This DDL is located in the WAS8.5.5 product under the util/Batch directory. Modify the MIGDONE DDL in the same manner as you prepared the migration jobs (MIGLRS and MIGLREE) in section 8.4.2.

All scheduler and endpoint servers must be stopped before executing the MIGDONE job. This job alters the database, and all database access must be stopped before proceeding.
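
Assuming the same hypothetical DSNTEP2 job shell sketched in section 8.2, only the SYSIN DD changes; for example:

//*------------------------------------------------------------------
//* Hypothetical: MIGDONE copied from the WAS8.5.5 product's
//* util/Batch directory into a site DDL library before execution.
//*------------------------------------------------------------------
//SYSIN    DD DISP=SHR,DSN=YOUR.CG.DDLLIB(MIGDONE)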

8.5.3 Decommission the WCG6.1.1 Cell

If not already done, you can remove the WCG6.1.1 cell from your sysplex.
