
z/VM

Enabling z/VM for OpenStack (Support for OpenStack Newton Release)
Version 6 Release 4

SC24-6253-00

IBM

Note: Before you use this information and the product it supports, read the information in "Notices" on page 217.

This edition applies to version 6, release 4, modification 0 of IBM z/VM (product number 5741-A07) and to all subsequent releases and modifications until otherwise indicated in new editions.

© Copyright IBM Corporation 2014, 2017.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Figures . . . vii
Tables . . . ix

About This Document . . . xi
  Who Should Read This Book . . . xi
  Where to Find More Information . . . xi
  Links to Other Documents and Websites . . . xi

How to Send Your Comments to IBM . . . xiii

Summary of Changes . . . xv
  z/VM Version 6 Release 4, SC24-6253-00 . . . xv

Chapter 1. Introduction . . . 1
  z/VM System Management Architecture . . . 4
  OpenStack Architecture . . . 5
  The Cloud Manager Appliance (CMA) . . . 6
  OpenStack and xCAT Service Deployment Patterns . . . 8
  Choosing the Correct Deployment Pattern for your Installation . . . 11
  Migrating from Pre-z/VM 6.4 xCAT . . . 12

Chapter 2. Planning and Requirements . . . 15
  z/VM System Requirements . . . 15
  Disk Storage . . . 15
    CMA Environment . . . 15
    Multipath Support for Persistent Disks . . . 16
  Network Considerations . . . 17
    Default Network Considerations . . . 18
  Physical Network and VLAN Considerations . . . 18
  IP Address/MAC Address Considerations . . . 19

Chapter 3. z/VM Configuration . . . 21
  z/VM System Configuration . . . 21
  SMAPI and Directory Manager Configuration . . . 21
  SMAPI and External Security Manager Configuration . . . 21
  Storage Configuration . . . 22

Chapter 4. SMAPI Configuration . . . 23

Chapter 5. OpenStack Configuration . . . 25
  Configuring the CMA . . . 25
    DMSSICMO COPY File Properties . . . 26
    Starting the CMA . . . 35
    Accessing the CMA . . . 36
    Verifying the CMA . . . 36
    Modifying the CMA on Subsequent Boots . . . 36
    Final Configuration of the CMA via the OpenStack Horizon Dashboard . . . 38
    CMA Usage Notes . . . 40
  Reconfiguring the CMA . . . 50
  Configuring a non-CMA Compute Node . . . 51
    Configuring OpenStack Files on a non-CMA Compute Node . . . 51
    Verify the OpenStack Configuration for a non-CMA Compute Node . . . 52



    Configuration of SSH for xCAT and Nova Compute Nodes . . . 54
  Network Configurations . . . 56
    Sample Configuration . . . 56
    Network Scenarios . . . 59

Chapter 6. Image and cloud-init Configuration . . . 69
  Image Requirements . . . 69
  Make a Deployable z/VM Image . . . 70
    Install Linux on z Systems in a Virtual Machine . . . 70
    Define the Source System as an xCAT Node . . . 71
    Configuration of xcatconf4z . . . 74
    Installation and Configuration of cloud-init . . . 77
    Optionally Load the zfcp Module . . . 84
    Capture the Node to Generate the Image in the xCAT MN . . . 84
    Export the Image to the Nova Compute Server . . . 85
    Upload the Image from the Nova Compute Server to Glance . . . 86
    Remove the Image from the xCAT Management Node . . . 91
    Deactivate cloud-init on the Captured Source System . . . 92

Chapter 7. Getting Started with Boot from Volume . . . 93
  Creating a Bootable Volume . . . 93
    Pre-Installation Tasks . . . 93
    Installing Linux . . . 96
    Post-Installation Tasks . . . 112
  Cloning a New Volume . . . 115
    Creating a Volume Snapshot . . . 115
    Cloning a Volume from a Snapshot . . . 116
  Booting a Virtual Server Instance from a Volume . . . 116

Chapter 8. Alternative Deployment Provisioning . . . 119
  Overview . . . 119
  Planning and Requirements . . . 120
    z/VM Host . . . 121
    Alternative Deployment Provisioning and the Cloud Manager Appliance . . . 121
    Master Virtual Machine . . . 121
    Clone Virtual Machine . . . 122
  Configuration – Send in the Clones . . . 122
    Creating a Dummy Image . . . 123
    Add the Dummy Image to Glance . . . 123
    Setting Up the Master . . . 126
    Creating or Updating the DOCLONE COPY File . . . 126
    Reading the DOCLONE COPY File . . . 127
    Creating a Dummy Subnet . . . 128
    Creating a Flavor . . . 131
  Deploying Virtual Servers . . . 134
    Deploying Virtual Servers Using the Horizon GUI . . . 134

Appendix A. Installation Verification Programs . . . 139
  Location of the IVP Programs . . . 139
  Installing the IVP Preparation Script . . . 139
  Running the IVP Preparation Script on the Compute Node . . . 139
  Uploading the Driver Script to Your System . . . 143
  Messages from the IVP . . . 143



Appendix B. Using DDR to Reset the CMA . . . 145

Appendix C. Getting Logs from xCAT or ZHCP . . . 147

Appendix D. Checklist for Capture/Deploy/Resize . . . 151

Appendix E. Checklist for Live Migration . . . 153

Appendix F. OpenStack Configuration Files . . . 155
  Settings for Nova . . . 155
  Settings for Cinder . . . 163
  Settings for Neutron . . . 165
  Settings for Ceilometer . . . 170
  Sample Configuration Files . . . 172
    Sample File for Nova z/VM Driver . . . 172
    Sample File for Cinder z/VM Driver . . . 174
    Sample Files for Neutron z/VM Driver . . . 175
    Sample File for Ceilometer . . . 176

Appendix G. Common Procedures . . . 179
  xCAT Procedures . . . 179
    Using the Script Panel in the xCAT User Interface . . . 179
    Increasing the httpd Timeout in the xCAT MN . . . 181
    Backing Up and Restoring xCAT Table Information . . . 182
  Increasing the Size of the CMA's Root Disk using LVM Commands . . . 184
  Changing Configuration Options at Runtime . . . 187

Appendix H. Troubleshooting . . . 189
  Logging within the Compute Node . . . 189
  prep_zxcatIVP Issues . . . 190
  zxcatIVP Issues . . . 190
  Exchanging SSH Key Issues . . . 191
  Compute Node Startup Issues . . . 191
    OpenStack Services Related to Startup . . . 192
    Logs Related to Startup . . . 192
    Compute Log . . . 192
  Deployment Issues . . . 194
    OpenStack Services Related to Deployment . . . 194
    Logs Related to Deployment . . . 195
    Scheduler Log . . . 195
    Compute Log . . . 195
    Additional Network Debug Procedures . . . 204
  Capture Issues . . . 205
    OpenStack Services Related to Capture . . . 205
    Logs Related to Capture . . . 206
    Periodic Failure Due to Unavailable Resources or Timeouts . . . 206
    Unable to Locate the Device Associated with the Root Directory . . . 206
  Importing Image Issues . . . 206
    OpenStack Services Related to Image Import . . . 206
    Logs Related to Image Import . . . 207
  CMA Issues . . . 207
    OpenStack Dashboard Issues . . . 207
  Reconfiguration Issues . . . 207
    No Route to Host Issue . . . 207
  xCAT Management Node Issues . . . 208
    Space Issues on /install Directory Can Lead to xCAT MN Issues . . . 208
    LVM Errors in the /install Directory Can Lead to xCAT MN Issues . . . 208
  Alternative Deployment Provisioning Issues . . . 209
    Logging within the Compute Node . . . 209



    Unlocking a System . . . 210
    Adding a Dummy Image to Glance . . . 210
    Setting up the DOCLONE COPY file . . . 210
    Deploying Systems . . . 211
    When All Else Fails . . . 214

Appendix I. z/VM Commands for OpenStack . . . 215
  updateimage.py . . . 215

Notices . . . 217
  Trademarks . . . 218
  Terms and Conditions for Product Documentation . . . 219
  IBM Online Privacy Statement . . . 219

Glossary . . . 221

Bibliography . . . 223
  Where to Get z/VM Information . . . 223
  z/VM Base Library . . . 223
  z/VM Facilities and Features . . . 224
  Prerequisite Products . . . 225

Index . . . 227


Figures

 1. z/VM Systems Management, Conceptual View . . . 2
 2. OpenStack Managing z/VM, Conceptual View . . . 3
 3. Using xCAT to Manage Virtual Servers . . . 5
 4. OpenStack Solution with z/VM OpenStack Drivers . . . 8
 5. OpenStack Solution with a z/VM CMA as a Remote OpenStack Compute Node . . . 9
 6. Enterprise Virtualization Manager using z/VM's CMA as a Cloud Controller . . . 10
 7. Using z/VM's CMA as an Entry Level Cloud . . . 10
 8. OpenStack Services Running on Another Platform or Outside the CMA . . . 12
 9. OpenStack Services Running in the CMA . . . 13

10. Overview of Multipath Support . . . 17
11. OpenStack Dashboard Log In Screen . . . 39
12. OpenStack Dashboard Overview Screen . . . 40
13. OpenStack Dashboard Images Screen . . . 42
14. Create Images Button . . . 43
15. Image Details Screen, Part 1 of 2 . . . 44
16. Image Details Screen, Part 2 of 2 . . . 45
17. Results of Creating an Image . . . 46
18. Launch Menu on the Images Screen . . . 47
19. Update Image Metadata Screen . . . 48
20. Selecting the Operating System (OS) Version . . . 49
21. Image List Displayed after Saving the Image Metadata . . . 50
22. Sample Configuration . . . 57
23. Flat Network, Using Public IP Addresses . . . 60
24. Flat Network, Using Private IP Addresses . . . 62
25. Network in which the CMA has the compute_mn Role . . . 68
26. Specifying the Unlock Action . . . 73
27. Specifying the Root Password . . . 74
28. Create An Image Screen . . . 87
29. Updating Image Metadata . . . 88
30. Entering the Property Name . . . 89
31. Entering the Image Type . . . 90
32. Installation Method Screen . . . 97
33. NFS Setup Screen . . . 98
34. Specifying the Storage Device . . . 98
35. Adding the Volume as a System Storage Device . . . 99
36. Specify the Volume Information . . . 100
37. Results of Adding a Volume Successfully . . . 101
38. Type of Installation Screen . . . 102
39. Preparing the Volume . . . 103
40. Specifying Volume information . . . 104
41. Installation Settings Screen: Selecting the Change Partitioning Button . . . 105
42. Verifying that LVM Partitioning is Not Selected . . . 106
43. Suggested Partitioning Screen . . . 107
44. Expert Partitioner Screen with Sample Partitioning Plan . . . 108
45. Editing a Partition . . . 109
46. Edit Partition Screen . . . 110
47. Fstab Options Screen . . . 111
48. Installation Settings Screen . . . 112
49. IPL Menu . . . 114
50. Overview of Alternative Deployment Provisioning . . . 120
51. Cloud Management Dashboard . . . 124
52. Create an Image Screen . . . 124
53. Image Metadata Screen . . . 125
54. Results of Creating an Image . . . 126
55. Sample zxcatCopyCloneList.pl Command Output . . . 128



56. Networks Tab of Cloud Management Dashboard Screen . . . 129
57. Create Network Dialog . . . 129
58. Subnet Dialog . . . 130
59. Subnet Details Dialog . . . 130
60. Networks Tab with Results Shown . . . 131
61. Flavors Screen . . . 132
62. Create Flavor Screen . . . 133
63. Create Flavor Screen – Granting Public Access . . . 133
64. Results of Creating a Flavor . . . 134
65. Instances Tab in Cloud Management Dashboard . . . 135
66. Details Tab . . . 135
67. Launch Instances Tab, Source . . . 136
68. Launch Instances Tab . . . 136
69. Networks Tab . . . 137
70. Security Groups Tab . . . 137
71. Instances Tab with Results Shown . . . 138
72. Selecting the Node . . . 147
73. Selecting "Event log" . . . 148
74. Filling in the Logs Fields . . . 148
75. Information Box Confirming Copy . . . 149
76. Going to the Files Screen . . . 149
77. Choosing "Save Link As..." on the Files Screen . . . 150
78. Selecting the xcat Node Checkbox on the Nodes Panel . . . 179
79. Selecting "Run script" on the Actions Pulldown of The Nodes Panel . . . 180
80. Entering Commands in the Script Box . . . 180
81. Yellow Status Box Showing Results of Commands . . . 181
82. Selecting Timeout Value . . . 182
83. Creating a Subdirectory Under /install on the Files Panel . . . 183
84. Unlock Panel for Node Checkbox on demonode . . . 191


Tables

1. OpenStack Services that Can Run in the CMA . . . 5
2. System Roles and the Services they Run . . . 7
3. OpenStack vs. Neutron z/VM Driver Terminology . . . 18
4. Summary of DMSSICMO COPY File Properties and Whether They are Required or Optional . . . 33
5. Settings for IP Address Properties Defined in the DMSSICMO COPY file and the DMSSICNF COPY file, Based on the CMA Role . . . 34
6. OpenStack Configuration Options Which Will be Overwritten by the CMA Configuration Tools . . . 37



About This Document

This document is intended to provide guidance to IBM® z/VM® customers who wish to configure a product based on the OpenStack Newton release that includes the z/VM plug-in for enabling OpenStack for z/VM.

Notes:

1. This support works only after obtaining the z/VM plug-in included with a product. The plug-in is not available from the OpenStack community source.

2. This document describes the z/VM plug-in built to work in the OpenStack Newton release. The z/VM plug-in for other OpenStack versions may have a related version of this manual. The title of the manual indicates the version of OpenStack which it supports. To obtain a document for a different version, see "Bibliography" on page 223.

Who Should Read This Book

This book is designed for administrators responsible for managing their system with products that include the OpenStack for z/VM plug-in.

Where to Find More Information

See "Bibliography" on page 223 at the back of this book.

Links to Other Documents and Websites

The PDF version of this document contains links to other documents and websites. A link from this document to another document works only when both documents are in the same directory or database, and a link to a website works only if you have access to the Internet. A document link is to a specific edition. If a new edition of a linked document has been published since the publication of this document, the linked document might not be the latest edition.



How to Send Your Comments to IBM

We appreciate your input on this publication. Feel free to comment on the clarity, accuracy, and completeness of the information or give us any other feedback that you might have.

Use one of the following methods to send us your comments:
1. Send an email to [email protected].
2. Go to IBM z/VM Reader's Comments (www.ibm.com/systems/z/os/zvm/zvmforms/webqs.html).

Include the following information:
v Your name
v Your email address
v The publication title and order number:
  z/VM V6.4 Enabling z/VM for OpenStack (Support for OpenStack Newton Release)
  SC24-6253-01
v The topic name or page number related to your comment
v The text of your comment

When you send comments to IBM®, you grant IBM a nonexclusive right to use or distribute your comments in any way it believes appropriate without incurring any obligation to you.

IBM or any other organizations will use the personal information that you supply only to contact you about the issues that you submit to IBM.

If You Have a Technical Problem

Do not use the feedback methods listed above. Instead, do one of the following:
v Contact your IBM service representative.
v Contact IBM technical support.
v See IBM: z/VM Service Resources (www.ibm.com/vm/service/).
v Go to IBM Support Portal (www.ibm.com/support/entry/portal/Overview/).



Summary of Changes

This document contains terminology, maintenance, and editorial changes. Technical changes are indicated by a vertical line to the left of the changes. Some product changes might be provided through service and might be available for some prior releases.

z/VM Version 6 Release 4, SC24-6253-00

With the PTF for APAR VM65893, this edition includes changes to support product changes provided or announced after the general availability of z/VM V6.4.

These changes include the following:
v Installation verification program (IVP) enhancements.
v Support for aodh Alarm services in the Cloud Manager Appliance.
v Support for provisioning Ubuntu 16.04 servers.
v Updates resulting from user feedback.

Deprecated Interfaces

IBM has deprecated the following interfaces starting with the Newton release:
v The openstack_xcat_mgt_ip property in the DMSSICMO COPY file. See "DMSSICMO COPY File Properties" on page 26.
v The openstack_xcat_mgt_mask property in the DMSSICMO COPY file. See "DMSSICMO COPY File Properties" on page 26.

IBM fully supports interfaces first deprecated in this release and intends to fully support them in any fix packs for this release, but IBM components may ignore them in any future release, or require that you remove them as part of upgrading to any future release. Whenever you must take action to stop using deprecated interfaces, you can find the planning and implementation information for doing so on the Web at z/VM OpenStack Cloud Information (http://www.vm.ibm.com/sysman/osmntlvl.html).



Chapter 1. Introduction

z/VM provides a set of enablement components that OpenStack solutions can re-use to reduce the cost of supporting z/VM as an OpenStack-enabled hypervisor.

This document discusses configuration of the z/VM OpenStack enablement components, either in a z/VM®-owned virtual server or as part of an OpenStack solution. It discusses set up only for the subset of components owned by z/VM and only when they run inside z/VM-owned virtual servers. When an OpenStack solution integrates z/VM's open source OpenStack enablement components, that solution is responsible for documenting its set up. This document also discusses configuration/creation of an image for deployment by OpenStack, and includes various troubleshooting appendices. (z/VM: Systems Management Application Programming discusses configuration of these non-OpenStack components: the Extreme Cloud Administration Toolkit (xCAT) and the Systems Management APIs (SMAPI).)

This chapter provides an overview of the z/VM systems management architecture, OpenStack architecture, the Cloud Manager Appliance (CMA), environments available for using OpenStack with z/VM, and assistance for choosing the correct environment for your installation.

Note:

v The OpenStack services for z/VM described in this document are available only when you are running the CMA for the Newton release. (See "The Cloud Manager Appliance (CMA)" on page 6.) The OpenStack services do not work with any other version of xCAT, including the xCAT downloaded from SourceForge/GitHub and the original z/VM 6.3 version of xCAT that always runs xCAT and ZHCP services in separate virtual machines.
v z/VM provides multiple systems management application programming interfaces (APIs) and graphical user interfaces (GUIs). Other systems management solutions use the z/VM APIs and GUIs to offer other APIs and GUIs. These APIs and GUIs vary in several ways, such as whether or not they are remotely accessible, their degree of standardization, and the concepts they present.

Figure 1 on page 2 shows these basic z/VM systems management components and relationships, which are explained in more depth later in this chapter.

Figure 1. z/VM Systems Management, Conceptual View


Figure 1 shows you some of the environments that are possible. ("OpenStack and xCAT Service Deployment Patterns" on page 8 shows specific examples in greater detail.) For example, you might have scripts orchestrating changes across multiple components, or this might be done by another vendor's product or solution, or your needs might be simple enough that you handle those needs manually. If you are using all of these components, you probably have rules or conventions about which product or layer manages which guests, in order to avoid having them take conflicting actions. This document is concerned with OpenStack enablement, so it does presume that you have some OpenStack solution in addition to z/VM systems management components such as a directory manager, SMAPI, and xCAT.

z/VM does not aspire to be a complete cloud or OpenStack solution. In particular, you will need a separate z/VM-enabled OpenStack solution if you require any of the following:
v Heterogeneous platform support. z/VM's enablement only manages z/VM guests; it does not support managing any other platforms.
v OpenStack projects other than those specifically listed in "OpenStack Architecture" on page 5.
v A GUI suitable for use by users other than cloud administrators. While z/VM supplies the standard OpenStack administrator dashboard (horizon), this dashboard will not display any z/VM-specific features. (This situation may or may not be suitable for your users.)

If you don't require any of the above items, you can consider using z/VM's enabling components directly as an entry level cloud that you can later configure to be part of an OpenStack solution.

OpenStack solutions can re-use z/VM's enablement components in any of several supported deployment configurations; for example:
v By integrating z/VM-supported open source components into the OpenStack solution, and configuring them to use z/VM's integrated xCAT component's REST APIs.
v By configuring z/VM's enabling components as an OpenStack compute node, and configuring the OpenStack solution to include that OpenStack compute node in its service catalog.

v By configuring z/VM's enabling components as an OpenStack cloud controller, and configuring the OpenStack solution (such as VMware vRealize Automation) to group that controller with another peer controller to create a multi-region cloud.

In all of these cases, the OpenStack solution can call the APIs from anywhere, including from a virtual machine hosted by z/VM, or from other platforms such as x86 or POWER®, since they are standard HTTP APIs. The OpenStack solution chooses its supported platforms and configurations.

Note: When you use OpenStack to manage Linux virtual servers on z/VM, IBM recommends that you use the OpenStack solution as your primary systems management interface for those z/VM virtual servers. You should only use z/VM's xCAT GUI in a secondary capacity.

z/VM OpenStack Enablement Components

Figure 2 shows a conceptual view of the relationship between any OpenStack solution and z/VM.

Figure 2. OpenStack Managing z/VM, Conceptual View

An OpenStack solution is free to run its components wherever it wishes; its options range from running all components on z/VM, to running some on z/VM and others elsewhere, to running all components on other platform(s). The solution is also free to source its components wherever it wishes, either using z/VM's OpenStack enablement components or not.

z/VM supplies the following OpenStack enablement components:
v Open source z/VM drivers for the OpenStack projects nova (Compute), neutron (Networking), and ceilometer (Telemetry). Solutions typically integrate them into their own OpenStack code. A solution can use its own drivers for z/VM instead of using the z/VM-supplied open source z/VM drivers.

v An integrated CMA that serves several purposes, including:
  – Simplifying the installation and configuration of OpenStack cloud controller, compute node, and/or z/VM driver services when they are running inside a z/VM-owned virtual server. A solution is free to run its own OpenStack services instead, and disable z/VM's, when configuring the CMA.
  – Simplifying the installation and configuration of the z/VM OPNCLOUD virtual server that runs the xCAT MN and ZHCP services, which implement APIs used by the open source z/VM OpenStack drivers.

z/VM System Management Architecture

z/VM ships a set of servers that provide local system management APIs. These servers consist of request servers that accept local connections, receive the data, and then call one of a set of worker servers to process the request. These servers are known collectively as SMAPI. The worker servers can interact with the z/VM hypervisor (CP) or with a directory manager. A directory manager is required for this environment.

Beginning with z/VM version 6.3, additional functionality is provided by integrated xCAT services. xCAT is an Open Source scalable distributed computing management and provisioning tool that provides a unified interface for hardware control, discovery, and deployment, including remote access to the SMAPI APIs. It can be used for the deployment and administration of Linux servers that OpenStack wants to manipulate. The z/VM drivers in the OpenStack services communicate with xCAT services via REST APIs to manage the virtual servers.

xCAT is composed of two main services: the xCAT management node (xCAT MN) and ZHCP. Both the xCAT MN server and the ZHCP server run within the same virtual machine, called the OPNCLOUD virtual machine. The xCAT MN coordinates creating, deleting and updating virtual servers. The management node uses a z/VM hardware control point (ZHCP) to communicate with SMAPI to implement changes on a z/VM host. Only one instance of the xCAT MN is necessary to support multiple z/VM hosts. Each z/VM host runs one instance of ZHCP. xCAT MN supports both a GUI for human interaction and REST APIs for use by programs (for example, OpenStack).
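For example, a program can drive the xCAT MN REST interface directly over HTTPS. The following sketch is illustrative only: the host name, credentials, and node name are placeholders, and -k is shown because the xCAT MN typically presents a self-signed certificate. Confirm the exact resource paths and parameters against the xCAT REST API documentation:

   # List the node definitions known to the xCAT MN (placeholder host and credentials):
   curl -k 'https://xcatmn.example.com/xcatws/nodes?userName=admin&password=secret&format=json'

   # Power off the virtual server defined as xCAT node "demonode":
   curl -k -X PUT -H 'Content-Type: application/json' --data '{"action":"off"}' \
        'https://xcatmn.example.com/xcatws/nodes/demonode/power?userName=admin&password=secret&format=json'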

Figure 3 on page 5 illustrates the basic process flow when you use the xCAT GUI to do the following (an equivalent command-line sketch follows the list):
v Add a minidisk to a virtual machine on z/VM 1:
  – The GUI interacts with z/VM 1's xCAT MN to submit an xCAT command
  – xCAT MN communicates with ZHCP to send requests to SMAPI servers on z/VM 1
  – The requests flow into the SMAPI servers through a SMAPI request server
  – A SMAPI worker server receives the requests and interacts with the directory manager
  – The directory manager obtains a minidisk from the disk pool that it maintains and updates the user directory to give the disk to the target machine
v Power off a virtual machine on z/VM 2:
  – The GUI interacts with the xCAT MN to submit an xCAT rpower command
  – xCAT MN communicates with ZHCP on z/VM 2
  – The requests flow into z/VM 2's SMAPI servers through a request server
  – A SMAPI worker server receives the requests and issues commands to the z/VM control program to shut down the targeted virtual machine and log it off.
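The same two operations can also be driven from the xCAT command line rather than the GUI. This is a sketch under stated assumptions: the node names and disk pool are placeholders, and the exact option syntax for z/VM nodes should be confirmed in the xCAT documentation:

   # Add a 2 GB 3390 minidisk at virtual address 0101, taken from disk pool
   # POOL1, to the virtual machine defined as xCAT node "demonode":
   chvm demonode --add3390 POOL1 0101 2g

   # Power off the virtual machine defined as xCAT node "demonode2":
   rpower demonode2 off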


The xCAT MN contains a local repository of images (as shown in Figure 3). These images are used by ZHCP when provisioning the disks of a virtual machine that is being created. Access to the repository for remote ZHCP instances is provided by an NFS mount point established on the ZHCP server.

Figure 3. Using xCAT to Manage Virtual Servers

In z/VM 6.3, the xCAT MN and the ZHCP service ran in separate virtual machines, called XCAT and ZHCP, respectively. As of z/VM 6.4, the xCAT MN and the ZHCP services run in the OPNCLOUD virtual machine. See "The Cloud Manager Appliance (CMA)" on page 6 and "OpenStack and xCAT Service Deployment Patterns" on page 8 for more information.

OpenStack Architecture

This section presents a basic view of OpenStack. You can find a more in depth discussion of OpenStack at http://docs.openstack.org.

Each OpenStack release is designated by a name and has documentation related to that release on the World Wide Web. This document is intended to be used with OpenStack code for the release specified in its title.

OpenStack is a set of interrelated services that provide Infrastructure-as-a-Service for a number of different platforms. These services can be used to create z/VM virtual servers, specifically Linux on z servers running in a z/VM virtual machine. Each service is developed by an OpenStack project. (z/VM's CMA can run the following OpenStack services, depending upon the system role in which you configure the CMA to run. See "The Cloud Manager Appliance (CMA)" on page 6 for more information on system roles.) For z/VM, the following services are supported:

Table 1. OpenStack Services that Can Run in the CMA

Alarms (aodh)
    Provides alarms and notifications based on metrics.

Block Storage (cinder)
    Provides persistent block storage for virtual servers. For z/VM, these are native SCSI disks. Note that the compute service provides another type of disk known as an ephemeral disk. You can use OpenStack without configuring the cinder service if you only intend to use ephemeral disks.

Compute (nova)
    Manages the lifecycle of virtual servers and their ephemeral disks. It allows you to create, delete, modify, and power on/off the servers. Each z/VM hypervisor has a compute service that supports it.

Dashboard (horizon)
    Provides a web based management portal for OpenStack operators and administrators. This GUI is in addition to the one provided by xCAT for cloud operators and administrators. Any OpenStack-based solution may provide its own GUI(s) -- for example, a self-service portal GUI for end users -- or may enhance horizon with z/VM-specific content. z/VM's focus is on enabling OpenStack APIs, not providing an end-user GUI.

Identity (keystone)
    Provides an authentication and authorization service for other OpenStack services.

Image (glance)
    Stores and retrieves virtual machine disk images.

Networking (neutron)
    Enables Network-Connectivity-as-a-Service for other OpenStack services, such as OpenStack Compute. It works to provide the necessary networking for the virtual server.

Orchestration (heat)
    Orchestrates composite cloud applications using a declarative template format.

Telemetry (ceilometer)
    Monitors and meters the OpenStack cloud for billing, benchmarking, scalability, and statistical purposes.

The services communicate with each other. Often they are categorized as cloud controller services, which manage one or more hypervisor instances, and compute nodes, which manage a single hypervisor (for example, a single z/VM host). z/VM provides drivers for some OpenStack services to support the z/VM platform. These drivers run within OpenStack's nova, neutron, and ceilometer services, acting as adapters that communicate with the xCAT MN service to implement changes within the z/VM host. Architecturally, OpenStack services can be deployed to run anywhere in your IT environment. The specific OpenStack solution and how it is configured determines where the services will actually run in your environment.
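Wherever the services run, clients reach them through the standard OpenStack APIs and command-line tools. As a minimal sketch of what using a configured z/VM cloud looks like (all names here -- image, flavor, network, and server -- are hypothetical placeholders, and the image must satisfy the requirements in Chapter 6, "Image and cloud-init Configuration"):

   # Deploy a Linux on z Systems virtual server using the standard
   # OpenStack command-line client (illustrative names only):
   openstack server create --image myzlinux-image --flavor m1.small \
       --network mynetwork myserver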

The Cloud Manager Appliance (CMA)

The z/VM Cloud Manager Appliance (CMA) provides an easy method to deploy z/VM OpenStack enablement. OpenStack products and solutions can be constructed to use as many or as few of the services as is appropriate, whether that means that the CMA runs cloud controller services, compute node services, or only services needed by OpenStack z/VM drivers running in other virtual machines or on other platforms.

z/VM supports the following system roles, which control the set of services running inside the OPNCLOUD virtual machine (a configuration sketch follows the role descriptions):

controller
    Runs cloud controller services (such as the glance image services) in addition to all services listed under the compute role, and also runs the xCAT MN and ZHCP services to allow the controller to manage OpenStack z/VM hosts. For more information on cloud controller services, see http://docs.openstack.org/ops-guide/arch-cloud-controller.html.

compute
    Runs OpenStack compute services (nova-compute service), networking services (neutron-zvm-agent service), and telemetry services (ceilometer-polling) for the z/VM hypervisor, and also runs the ZHCP service to allow a remote xCAT MN service to manage the host.

compute_mn
    Runs OpenStack compute, networking, and telemetry services (listed under the compute role) for the z/VM hypervisor. It also runs the xCAT MN and ZHCP services. This role is used in an environment where OpenStack controller services are run outside the CMA (for example, on other platforms). The xCAT MN and ZHCP services allow a controller to manage the z/VM host without requiring cloud controller services to be running on the host. For more information, see "Reconfiguring the CMA" on page 50.

mn
    Runs the xCAT MN and ZHCP services. This is useful when all OpenStack services are outside the CMA or when you want to use xCAT and not OpenStack.

zhcp
    Runs only the ZHCP service. This is useful when all OpenStack services are running in non-CMA nodes or when you want to use xCAT and not OpenStack. Note that another z/VM host must run an xCAT MN service to manage the host through the ZHCP service.
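You select exactly one of these roles when you configure the CMA. As a purely hypothetical sketch of that selection, the fragment below is written in the style of the DMSSICMO COPY file; the property name openstack_system_role is an assumption made for illustration, while cmo_data_disk is the property referred to in "Choosing the Correct Deployment Pattern for your Installation" on page 11. The authoritative property names, values, and syntax are given in "DMSSICMO COPY File Properties" on page 26:

   /* Hypothetical DMSSICMO COPY fragment -- see page 26 for the real   */
   /* property names, values, and syntax.                               */
   openstack_system_role = "controller"   /* or compute, compute_mn, mn, zhcp */
   cmo_data_disk = "..."                  /* disk used for xCAT and OpenStack data */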

Table 2 summarizes the services that run in each configured CMA role.

Table 2. System Roles and the Services they Run

v Cloud controller services such as glance, cinder, keystone, neutron-server, etc.: run in the controller role only.
v OpenStack compute services such as openstack-nova-compute, neutron-zvm-agent, and openstack-ceilometer-polling: run in the controller, compute, and compute_mn roles.
v xCAT MN service (used by the OpenStack z/VM driver to interact with z/VM): runs in the controller, compute_mn, and mn roles.
v ZHCP service: runs in all five roles (controller, compute, compute_mn, mn, and zhcp).

Note:

The following OpenStack services are enabled (by default) on the CMA configured in the controller role:
v keystone Identity service: openstack-keystone-admin, openstack-keystone-public
v nova Compute services: openstack-nova-api, openstack-nova-scheduler, openstack-nova-conductor, openstack-nova-compute
v neutron Network services: neutron-server, neutron-zvm-agent
v glance Image services: openstack-glance-api, openstack-glance-registry
v cinder Block Storage services: openstack-cinder-api, openstack-cinder-backup, openstack-cinder-scheduler, openstack-cinder-volume
v heat Orchestration services: openstack-heat-api, openstack-heat-api-cfn, openstack-heat-engine, openstack-heat-api-cloudwatch
v ceilometer Telemetry services: openstack-ceilometer-api, openstack-ceilometer-collector, openstack-ceilometer-notification, openstack-ceilometer-polling
v aodh Alarming services: openstack-aodh-api, openstack-aodh-evaluator, openstack-aodh-listener, openstack-aodh-notifier

The following OpenStack services are enabled (by default) on the CMA configured in the compute role (a verification sketch follows this list):
v nova Compute service: openstack-nova-compute


v neutron Network service: neutron-zvm-agent
v ceilometer Telemetry service: openstack-ceilometer-polling
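If you later need to confirm that these services came up on a running CMA, a quick check from a shell on the appliance might look like the sketch below. It assumes the CMA provides shell access and manages the services as systemd units under the names listed above; see Appendix H, "Troubleshooting," for the supported diagnostic procedures:

   # Check the state of the compute role's OpenStack services (illustrative):
   systemctl status openstack-nova-compute
   systemctl status neutron-zvm-agent
   systemctl status openstack-ceilometer-polling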

OpenStack and xCAT Service Deployment Patterns

The figures in this section describe the most common intended deployment patterns. Please be aware that the following characteristics apply to all these deployment patterns, and unless otherwise indicated they apply to all solutions regardless of the supplier. They are consequences of the various architectures z/VM OpenStack enablement uses, and represent either architectural limitations or best practices.
v z/VM's OpenStack drivers run inside OpenStack compute, networking, and telemetry services.
v There is one xCAT MN service per set of z/VM OpenStack compute services, per cloud.
v There is one OpenStack compute service, and one corresponding ZHCP service, per z/VM system. The compute service uses the ZHCP service through the xCAT MN service to manage the z/VM system where ZHCP runs.

All non-z/VM details shown are examples and will vary with each specific solution; for example, the cloud controller and each compute service might run in distinct virtual machines in some OpenStack solutions, and they might run on any platform (for example, on a blade server or within a POWER or z/VM virtual machine). The solution might allow its users to choose which deployment patterns it uses to support z/VM, or it might support only one of them. Consult your solution's documentation to determine which deployment patterns it supports and how to choose from among them.

Figure 4 shows services and virtual machines that run when an OpenStack solution runs z/VM's OpenStack drivers outside of the CMA, for example on another platform. It shows variations with both one and two z/VM systems.

In this section, when a system role is in double quotes (for example, "mn"), it refers to the syntax you will specify in the DMSSICMO COPY file (or in the configuration wizard) to configure the CMA in that given role.

You configure the CMA on one z/VM system in the cloud (for example, z/VM 1) to run in the "mn" role, so that it sets up the OPNCLOUD virtual server to run the xCAT MN and ZHCP services. You configure the CMA on all other z/VM systems (for example, z/VM 2) in the cloud to run in the "zhcp" role so they each set up one OPNCLOUD virtual server to run the ZHCP service. Because the xCAT MN service running in z/VM 1 manages all z/VM systems in the cloud through the ZHCP service running on each z/VM system, you do not run the xCAT MN service on any other systems in the cloud (in this case, on z/VM 2). The OpenStack solution you are using installs the OpenStack code (cloud controller services, and one compute service per z/VM system) and configures its z/VM drivers.

Figure 4. OpenStack Solution with z/VM OpenStack Drivers

Figure 5 shows services and virtual machines that run when an OpenStack solution uses z/VM's CMA as a remote OpenStack compute node. It shows variations with both one and two z/VM systems.

You configure the CMA on one z/VM system in the cloud (for example, z/VM 1) to run in the "compute_mn" role, so it sets up the OPNCLOUD virtual server to run the OpenStack compute service (managing the system it is running on, for example z/VM 1), and the xCAT MN and ZHCP services. You configure the CMA on all other z/VM systems (for example, z/VM 2) in the cloud to run in the "compute" role so that they each set up one OPNCLOUD virtual server to run the OpenStack compute and ZHCP services that manage the system those services are running on. Since the xCAT MN service running in z/VM 1 manages all z/VM systems in the cloud through the ZHCP service running on each z/VM system, you do not run the xCAT MN service on any other systems in the cloud (in this case, on z/VM 2). Each CMA installs the OpenStack code and configures its z/VM drivers automatically, since all CMAs here are configured to run OpenStack compute services. The OpenStack solution's cloud controller calls OpenStack compute APIs when it needs to interact with the virtual servers it deploys on z/VM; these APIs are served by the OPNCLOUD virtual machine running on the compute node where OpenStack deployed the virtual server (in this case, either z/VM 1 or z/VM 2).

Figure 5. OpenStack Solution with a z/VM CMA as a Remote OpenStack Compute Node

Figure 6 on page 10 shows services and virtual machines that run when a virtualization manager uses z/VM's CMA as a cloud controller. It shows variations with both one and two z/VM systems.

You configure the CMA on one z/VM system in the cloud (for example, z/VM 1) to run in the "controller" role, so it sets up the OPNCLOUD virtual server to run the OpenStack cloud controller, OpenStack compute (managing the system it is running on, for example z/VM 1), xCAT MN, and ZHCP services. You configure the CMA on all other z/VM systems (for example, z/VM 2) in the cloud to run in the "compute" role so they each set up one OPNCLOUD virtual server to run the OpenStack compute and ZHCP services that manage the system those services are running on. Since the xCAT MN service running in z/VM 1 manages all z/VM systems in the cloud through the ZHCP service running on each z/VM system, you do not run the xCAT MN service on any other systems in the cloud (in this case, on z/VM 2). Each CMA installs the OpenStack code and configures its z/VM drivers automatically, since all CMAs here are configured to run OpenStack compute services. The virtualization manager calls OpenStack cloud controller APIs when it needs to interact with the virtual servers OpenStack deploys on z/VM; these APIs are served by the OPNCLOUD virtual server on z/VM 1.

Figure 6. Enterprise Virtualization Manager using z/VM's CMA as a Cloud Controller

Figure 7 shows services and virtual machines that run when you use z/VM's CMA as an entry level cloud, which you might do in preparation for adopting an OpenStack solution, for evaluation purposes, or to run an entry level z/VM-only cloud. It shows variations with both one and two z/VM systems.

Figure 7. Using z/VM's CMA as an Entry Level Cloud

You configure the CMA on one z/VM system in the cloud (for example, z/VM 1) to run in the "controller" role, so it sets up the OPNCLOUD virtual server to run the OpenStack cloud controller, OpenStack compute, xCAT MN, and ZHCP services. You configure the CMA on all other z/VM systems (for example, z/VM 2) in the cloud to run in the "compute" role, so they each set up one OPNCLOUD virtual server to run the OpenStack compute and ZHCP services that manage the system those services are running on. Since the xCAT MN service running in z/VM 1 manages all z/VM systems in the cloud through the ZHCP service running on each z/VM system, you do not run the xCAT MN service on any other systems in the cloud (in this case, on z/VM 2). Each CMA installs the OpenStack code and configures its z/VM drivers automatically, because all CMAs here are configured to run OpenStack compute services.

Choosing the Correct Deployment Pattern for your Installation

This section explains how to decide which deployment pattern is correct for you. This section discusses your options by presenting a series of questions. (If you are installing a product that has embedded the z/VM plug-ins, the choice might have already been made for you, and it might not be necessary to read this section.)

1. Do you want to run only the xCAT service (not the OpenStack services) on this z/VM host?
   v Yes – OK, you only want to use xCAT. At least one of the following is probably true:
     – you want to try out xCAT before proceeding with an OpenStack installation
     – you are running a previous release of OpenStack (such as Juno) and not using the CMA to install OpenStack services.
     – you are running a product with the z/VM OpenStack driver included with it, or you have built your own OpenStack distribution and included the z/VM driver from the community
     If you have some other intention then you probably want to review "The Cloud Manager Appliance (CMA)" on page 6 and "OpenStack and xCAT Service Deployment Patterns" on page 8 to make certain that running only xCAT is appropriate for your installation.
     Do you want to have a GUI and/or do you manage other z/VM hosts?
        Yes – You want to install xCAT using the CMA with the system role of "mn" if this is your only z/VM host system, or is the z/VM on which you plan to run the xCAT MN.
        No – If this z/VM host system will be managed by an xCAT MN service on another z/VM, then you want to install xCAT using the CMA with the system role of "zhcp".
     In either case, you want to continue by reading z/VM: Systems Management Application Programming. Other than setting the system role and defining one additional property (the cmo_data_disk property), there are no other OpenStack properties that you should consider defining.
   v No – Continue to the next question.
2. Do you want to run the OpenStack services for the release that is indicated in the title of this document; for example, the Newton release?
   v Yes – Continue to the next question.
   v No – You are reading the wrong manual. You should read Enabling z/VM for OpenStack for the release of OpenStack that you intend to use. See "Bibliography" on page 223 for the list of Enabling z/VM for OpenStack documentation or go to the z/VM OpenStack Web page: http://www.vm.ibm.com/sysman/openstk.html
3. Do you want to let z/VM install the OpenStack services for you in a virtual machine?
   v Yes – this is a great choice for many installations.
     – Do you want to run the controller and compute services in this z/VM host; or only compute services; or the compute services and the xCAT MN service?
       Controller
          You want to install the controller with the CMA role of "controller".
       Compute
          You want to install the compute node with the CMA role of "compute".
       Compute services and the xCAT MN service
          You probably intend to run an OpenStack controller somewhere else and intend to manage this z/VM host with its own xCAT MN service. For this case, you want to use CMA to install a system with the CMA role of "compute_mn".
     Remember the CMA role you need, and see "Configuring the CMA" on page 25 for more information on these configuration options.
   v No – Well, this is confusing. Your series of answers do not take you to a valid choice. Please review "OpenStack and xCAT Service Deployment Patterns" on page 8 to determine your deployment pattern and then return to this section.

Migrating from Pre-z/VM 6.4 xCAT

The following figures compare the services and virtual machines that run in various configurations when you use xCAT on z/VM, before and after installing z/VM 6.4.

Running the xCAT MN and the ZHCP server in separate virtual machines is not supported in z/VM 6.4 and later releases. Additionally, new xCAT features may require z/VM 6.4.

Select the figure that most closely represents your current situation:

v You are running OpenStack services, but not on z/VM or outside of the CMA, as shown in Figure 8.
v You are running OpenStack services on z/VM and using z/VM's CMA, as shown in Figure 9 on page 13.

Figure 8 compares the services and virtual machines that run when you use xCAT on z/VM before and after installing z/VM 6.4, and when you have OpenStack services either running on another platform or installed without using the CMA.

xCAT at earlier service levels is represented by the “before” picture, with the xCAT MN and ZHCP services running in two separate virtual machines. To run the same set of services after installing z/VM 6.4, you configure the CMA on z/VM 1 to run in the CMA "mn" role, and the CMA on z/VM 2 to run in the CMA "zhcp" role, as described in “Configuring the CMA” on page 25.

Figure 9 on page 13 compares the services and virtual machines that run when you use z/VM's CMA before and after installing z/VM 6.4, and when you have OpenStack services running in the CMA.

Figure 8. OpenStack Services Running on Another Platform or Outside the CMA


CMA at earlier service levels corresponds to the “before” picture, with the xCAT MN and ZHCP services running in two separate virtual machines. To run the same set of services after installing z/VM 6.4, you configure the CMA on z/VM 1 to run in the CMA "controller" role, and the CMA on z/VM 2 to run in the CMA "compute" role, as described in “Configuring the CMA” on page 25.

Note that if you are migrating from z/VM 6.3 to z/VM 6.4, the process of moving to using a CMA with one user ID for xCAT and ZHCP should be done in z/VM 6.3, before you upgrade to z/VM 6.4. This process is described in the section “Migrating to an Integrated xCAT MN and ZHCP Server in the Same CMA” in z/VM: Migration Guide.

Figure 9. OpenStack Services Running in the CMA


Chapter 2. Planning and Requirements

This chapter provides requirements related to the z/VM system, disk storage, your network, the physical network and VLAN, and your IP address and MAC address ranges.

z/VM System Requirements

v A supported version of z/VM 6.4.
v In order to use live migration, the z/VM system must be configured in a Single System Image (SSI) configuration, and must have been created using the IBM-provided installation instructions for SSI configurations.
v The appropriate APARs installed, the current list of which can be found at z/VM xCAT Maintenance (http://www.vm.ibm.com/sysman/xcmntlvl.html) and z/VM OpenStack Cloud Information (http://www.vm.ibm.com/sysman/osmntlvl.html).

Note: IBM z Systems™ hardware requirements are based on both the applications and the load on the system. Please consult your IBM Sales Representative or Business Partner for assistance in determining the specific hardware requirements for your environment.

The CMA code is not shipped using the z/VM APAR process. Information on obtaining the appliance is provided in the APAR that delivers the support. This code is downloaded and then placed on the 0101 and 0102 minidisks of the OPNCLOUD user ID.

Disk Storage

The CMA requires disk storage to accomplish its functions, such as storing captured images. Its disks are managed by the Logical Volume Manager (LVM). The following statements apply to all configurations:

v A logical volume's size can be increased but not decreased. Trying to remove disks from the logical volume will result in lost data, and may make all CMA-based services unusable.
v The logical volume contains user data. You should back up its contents regularly in case it is compromised for any reason.
v The amount of space required is largely dependent on the size of the disks that will be captured, and whether compression is used in the capture process. Uncompressed images are approximately the same size as the original disk, while compressed images are approximately 1/2 to 1/3 of the disk size (depending on how much of the disk is in use).
v 50 GB of space is recommended to start with if you take the default value for the xcat_free_space_threshold property. (For more information on this property, see “Settings for Nova” on page 155.)
v The OPNCLOUD virtual machine manages a storage pool named XCAT1. You provide a list of z/VM volume labels, which must not be assigned to the directory manager, as described in the section below. The xCAT boot process detects changes in the list and issues directory manager commands that add any new volumes to this storage pool.

CMA Environment

Storage is needed for OpenStack services to provide minidisks for the root directories of deployed virtual servers, as well as for storing code and user data, depending on the role the CMA has been assigned. SCSI disk space is required only if the deployed systems will have persistent disks attached by OpenStack's block storage service.


Running OpenStack services requires a 0101 minidisk and a 0102 minidisk, no matter what role the CMA has been assigned. These disks must be included in the OPNCLOUD virtual machine's directory entry; they can be managed by the directory manager in a disk pool. Each disk can be either:

v An ECKD disk of 3338 cylinders, or
v An FBA/eDevice of 4806720 blocks.
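For illustration only, if these minidisks were defined directly in the OPNCLOUD directory entry on an ECKD 3390 Model 9 volume, the statements might resemble the following sketch; the volume label VMVOL1 and the start cylinders are hypothetical, and a directory manager would normally allocate these for you:

MDISK 0101 3390 0001 3338 VMVOL1 MR
MDISK 0102 3390 3339 3338 VMVOL1 MR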

Running OpenStack services in the controller role requires minidisks to store user data. CMA manages these disks with LVM, using the logical volume name cmo-data. The logical volume cmo-data contains all user data, such as OpenStack configuration files and database files, generated certificate keys, image files, and temporary files. The amount of space you need depends on how many images and files you want to keep. Specify z/VM volume labels in the DMSSICMO COPY file property cmo_data_disk during the configuration process. (For more information on this property, see “DMSSICMO COPY File Properties” on page 26.)

Running OpenStack services in the controller or compute role requires minidisks for the root directory of deployed virtual servers. The directory manager allocates minidisks from a disk pool it manages; specify the volume pool name and type in the DMSSICMO COPY file property openstack_zvm_diskpool. (For more information on this property, see “DMSSICMO COPY File Properties” on page 26.) An existing disk pool other than XCAT1 must be used for the minidisk pool. The amount of space required is dependent upon the size of the root directories in the images being deployed or captured.

If persistent disks attached by OpenStack's block storage service will be used by the deployed virtual servers, FCP disk pools must be defined to the SAN Volume Controller (SVC) for this use and their details specified in the controller's DMSSICMO COPY file. The amount of space required is dependent on the size and number of persistent disks used by the deployed virtual servers. For more information, see this page in the IBM developerWorks® community:

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/W21ed5ba0f4a9_46f4_9626_24cbbb86fbb9/page/Managing%20FCP%20storage%20through%20zVM%20OpenStack%20services

Multipath Support for Persistent Disks

Multipath support improves the reliability of persistent disks by allowing you to define two paths from the host to a disk. If one of the paths to the disk fails, the host can access the disk using the second path. The switching of paths is transparent to users, and applications running on the host are not interrupted when the path is switched.

As shown in Figure 10 on page 17, the multipath feature consists of a host side and a storage side. Two host bus adapters (HBA 1 and HBA 2) connect the z Systems host to the persistent disk LU 1. This is considered host side multipath. If one of the paths fails, the host can still access LU 1 via the second path. HBA x and HBA y are host bus adapters located in the storage system. This is storage side multipath. HBA x and HBA y function in the same way as do HBA 1 and HBA 2.


To use host side multipath, the zvm_multiple_fcp property in the /etc/nova/nova.conf file must be specified as True, and the zvm_fcp_list property in the same configuration file must be configured correctly for the system where your compute nodes are located. At least two sets of FCP devices corresponding to different CHPIDs must be configured in the zvm_fcp_list. For a description of these properties, see “Settings for Nova” on page 155. The use of storage side multipath depends on your physical connections and therefore cannot be configured using properties in the OpenStack configuration files. To use persistent disks, IBM recommends that you use physical multipath connections to the storage back end, and that multipath tools are installed on every instance.
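As a sketch, the relevant nova.conf fragment might look like the following; the FCP device numbers are hypothetical and must correspond to devices on two different CHPIDs in your configuration:

[DEFAULT]
# Enable host side multipath for persistent disks
zvm_multiple_fcp = True
# Two FCP device ranges, each backed by a different CHPID (hypothetical numbers)
zvm_fcp_list = 1f00-1f0f;2f00-2f0f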

For more general information on multipath support, see the following URLs:

https://www-01.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.ldsg/ldsg_c_multipathing.html?cp=linuxonibm
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/DM_Multipath/index.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/DM_Multipath/index.html
https://www.suse.com/documentation/sles11/stor_admin/data/multipathing.html
https://www.suse.com/documentation/sles-12/singlehtml/stor_admin/stor_admin.html#cha.multipath
https://help.ubuntu.com/lts/serverguide/dm-multipath-chapter.html

Network Considerations

The neutron z/VM driver is designed as a neutron Layer 2 plugin/agent combination, to enable OpenStack to exploit z Systems and z/VM virtual network facilities. Typically, from the OpenStack neutron perspective, a neutron plugin performs the database related work, while a neutron agent performs the real configuration work on hypervisors. Note that in this document, the terms “neutron z/VM plugin” and “neutron z/VM agent” both refer to the neutron z/VM driver.

The main component of the neutron z/VM driver is neutron-zvm-agent, which is designed to work with a Neutron server running with the ML2 plugin. The neutron z/VM driver uses the neutron ML2 plugin to do database related work, and neutron-zvm-agent will use the xCAT REST API to do real configuration work on z/VM.

Notes:

v Because neutron-zvm-agent will only configure a network on z/VM, if you plan to use neutron Layer 3 network features or DHCP features, you need to configure and run the neutron OpenVswitch agent and other Layer 3 agents with the neutron server. Refer to The Networking Chapter of the OpenStack Cloud Administrator Guide for more information. Otherwise, the neutron OpenVswitch agent is not needed.

v One neutron-zvm-agent can work with or configure only one z/VM host.

Figure 10. Overview of Multipath Support (diagram: HBA 1 and HBA 2 in the Linux guest on the z Systems host connect through a switch to HBA x and HBA y in the storage subsystem, providing two paths to LU 1)


v The neutron-zvm-agent does not have to run on the same server with nova-compute, although in CMA-based deployments it always does.

v The neutron z/VM driver does not support IPv6.

Note that there are some terminology differences between OpenStack and the neutron z/VM driver, as follows:

Table 3. OpenStack vs. Neutron z/VM Driver Terminology

OpenStack           Neutron z/VM Driver
Physical network    z/VM vswitch
Segmentation ID     VLAN ID
FLAT                VLAN UNAWARE
base_mac            System prefix or user prefix

The neutron z/VM driver uses a z/VM vswitch to provide connectivity for OpenStack instances. Refer to z/VM: Connectivity for more information on vswitches and the z/VM network concept.

Default Network Considerations

Using a default network allows you to reduce the number of manual steps required to create a network on the CMA. To use a default network, you have to plan the network topology and set the IP ranges and the Classless Inter-Domain Routing (CIDR) value of the network in the openstack_default_network property in the DMSSICMO COPY file. For more information on this property, see “DMSSICMO COPY File Properties” on page 26.

When the CMA starts up, if it detects that the openstack_default_network property is configured, it creates a network connected to the vswitch specified in the XCAT_MN_vswitch property defined in the DMSSICNF COPY file. The network's subnet CIDR value will be that which is configured in the openstack_default_network property. For more information, see “Configuring the CMA” on page 25.

Note the following when using a default network:

v The default network is created only for flat networks, and only one subnet is supported.
v The default network's gateway is set to x.x.x.1/cidr. For example, if the openstack_default_network property is set to 192.168.1.2–192.168.1.250/24, then the gateway for the network is 192.168.1.1. (The corresponding property setting is sketched after this list.)
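As an illustrative sketch, the DMSSICMO COPY entry for this example network would be written as follows (the address range is a placeholder):

openstack_default_network = "192.168.1.2-192.168.1.250/24"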

Physical Network and VLAN Considerations

In the ML2 plugin configuration file, you define a FLAT network with the flat_networks property, and you define a VLAN aware network with the network_vlan_ranges property. Both properties are optional. You can define neither type of network, one type, or both types. The following example defines both types of networks:

flat_networks=xcatvsw2,datanet3
network_vlan_ranges=datanet1:1:4094,datanet2:2:355

In this example, each comma-delimited field is the configuration for a single physical network. In network_vlan_ranges, the colon-delimited physical network configuration field is divided into a physical network name, VLAN ID start, and VLAN ID end.

Refer to Chapter 5, “OpenStack Configuration,” on page 25 for more information on the configuration.

A VLAN is used to separate the network with a VLAN ID. Only instances with the same VLAN ID can communicate with each other.


From the OpenStack perspective, when a network is created by the neutron server, and if the network type is VLAN, the neutron server will assign one VLAN ID (segmentation_id) for the network. The user also can specify the segmentation_id when creating the network. The segmentation_id must be in the range defined in network_vlan_ranges. In this way, VLAN ranges can be used to control how many networks can be created on the physical network. The z/VM vswitch supports VLAN IDs ranging from 1 to 4094. A VLAN range defined in network_vlan_ranges cannot be larger than this. (It can be a subset of the 1-4094 range.) If more than 4094 networks are needed, the user needs to define more physical networks. From a system management perspective, for example, a user can choose different physical networks for different departments.
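For example, a user could request a specific VLAN ID within an authorized range using a Newton-era neutron command such as the following (the network name and VLAN ID are illustrative):

neutron net-create deptnet --provider:network_type vlan \
    --provider:physical_network datanet1 --provider:segmentation_id 100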

When planning a VLAN, you should also consider the network which the z/VM system is in. Ask the network administrator which VLAN ranges are authorized for the z/VM system.

If the network is FLAT, network traffic is exposed among all instances on the physical network. From a system management perspective, for example, when the user chooses different physical networks for different departments, more than one FLAT physical network can be defined.

When the neutron-zvm-agent starts, it will:
v Read the ML2 configuration file to get the flat_networks and network_vlan_ranges configuration.
v Treat each physical network in network_vlan_ranges as a vswitch name, and try to create each of them in z/VM, if the vswitch does not already exist.

All of these newly created vswitches work on Layer 2. If the physical network is FLAT, the corresponding vswitch in z/VM will be created as VLAN UNAWARE. Otherwise, it will be created as VLAN AWARE and use the same VLAN range as the physical network. For example:

flat_networks=xcatvsw2,datanet3
network_vlan_ranges=datanet1:1:4094,datanet2:2:355

In this example, the neutron-zvm-agent will try to create or set up four vswitches: xcatvsw2 and datanet3 are VLAN UNAWARE, datanet1 is VLAN AWARE (and supports VLAN ID range 1-4094), and datanet2 is VLAN AWARE (and supports VLAN ID range 2-355).

Notes:

v By default, xcatvsw2 is created by the xCAT MN. The neutron-zvm-agent will create only the other three vswitches. By default, xcatvsw2 is a Layer 2, VLAN UNAWARE vswitch.
v By default, there is a built-in vswitch, xcatvsw1. It is a Layer 3, VLAN UNAWARE vswitch. It should only be used by the xCAT MN and ZHCP services for internal communication.
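After the agent starts, you can confirm from an authorized z/VM user ID that the expected vswitches exist; for instance (the vswitch name is taken from the example above, and the output will vary by system):

CP QUERY VSWITCH DATANET1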

IP Address/MAC Address Considerations

An IP address range is needed when creating a subnet in the neutron server. When a server instance is created, the neutron server will assign one IP address from the subnet. If you are using a private IP address or an isolated network, you need to consider how many instances you need to support, then choose the appropriate IP range. If you will use a public IP address or your own network, you need to get input from the network administrator. The neutron server will generate MAC addresses for all ports/NICs of the server instances. In the neutron server configuration file, base_mac is used to control the first three/four fields of the generated MAC. All generated MAC addresses have the same prefix, as defined in base_mac. base_mac can also be used to prevent MAC address conflicts. base_mac values should be the same as the z/VM user prefix or system prefix. Refer to z/VM: CP Planning and Administration for more information on z/VM MAC address management.
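For illustration, a neutron server configuration fragment setting the MAC prefix might look like the following; the prefix value shown is hypothetical and must match your z/VM user prefix or system prefix:

[DEFAULT]
# First three fields of every generated MAC address
base_mac = 02:00:0a:00:00:00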


Chapter 3. z/VM Configuration

This chapter shows the basic configurations for z/VM. It does not discuss setting up xCAT, which is discussed in a later section.

z/VM System Configuration

To set up z/VM V6.4 with an SSI configuration, see z/VM: Installation Guide.

You may also find the following two IBM Redbooks to be helpful:
v An Introduction to z/VM Single System Image (SSI) and Live Guest Relocation (LGR)
v Using z/VM v 6.2 Single System Image (SSI) and Live Guest Relocation (LGR)

SMAPI and Directory Manager Configuration

If you are using a directory manager, it should be configured to ensure that disks are cleared before they are reused. For example, if you are using DirMaint, it must be configured with the 'DISK_CLEANUP=YES' operand.
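As a sketch, assuming DirMaint, the corresponding entry in the DirMaint configuration file (CONFIG* DATADVH) would look like the following; consult the DirMaint documentation for the exact file name and spacing conventions:

DISK_CLEANUP= YES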

Refer to the “Setting up and Configuring the Server Environment” chapter in z/VM: Systems Management Application Programming to configure the z/VM Systems Management API (SMAPI) server.

If you use DirMaint as your directory manager, see z/VM: Directory Maintenance Facility Tailoring and Administration Guide. If you are using a different directory manager product, consult the documentation for that product.

SMAPI and External Security Manager Configuration

If you use an External Security Manager (ESM), ensure that you have followed the directions in the “Using SMAPI with an External Security Manager” appendix of z/VM: Systems Management Application Programming. Additionally, see your Directory Manager product documentation for configuring your Directory Manager to work with an ESM.

If you are using RACF, the following RACF changes must be made.

1. Enable xCAT to link to minidisks for image deployments.

RAC ALU XCAT_userid OPERATIONS

where:

XCAT_userid
is the z/VM user ID specified in the XCAT_user property in the DMSSICNF COPY file. (The shipped default is OPNCLOUD.)

2. All users managed by OpenStack must have access to the xCAT MN vswitch specified in the DMSSICNF COPY file by the XCAT_MN_vswitch property. (The shipped default is XCATVSW2.) Support to grant access from ZHCP is not implemented, so the RACF profile for the xCAT MN vswitch should be deleted. This causes RACF access validation for the vswitch to defer to CP.

RAC RDELETE VMLAN SYSTEM.XCATVSW2

where:


XCATVSW2
is the name of the xCAT MN vswitch.

After all RACF permissions are established, z/VM Systems Management (SMAPI) should be restarted by restarting VSMGUARD. On an authorized z/VM user ID, issue the following commands:

FORCE VSMGUARD
XAUTOLOG VSMGUARD

Restarting the VSMGUARD server is the only recommended procedure for recycling the SMAPI servers. Other methods can cause server corruption and prevent SMAPI from performing the necessary start-up sequence.

Storage Configuration

If using FBA disks, live migration requires that those FBA disks shared among SSI members have the same EDEV and EQID. Log on to MAINT and issue the following command to set the EQID for the volume:

SET EDEV edev EQID eqid TYPE FBA ATTR SCSI FCP_DEV fcp_rdev WWPN wwpn LUN lun

where:

edev is the edevice ID.

eqid is the equivalency identifier to use for the device.

fcp_rdev
is the real device number used to access the device.

wwpn is the world-wide port name.

lun is the logical unit number.
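Substituting hypothetical values into the command template above, an invocation might look like this:

SET EDEV 0200 EQID SHRFBA01 TYPE FBA ATTR SCSI FCP_DEV B100 WWPN 5005076300C20B8E LUN 0000000000000000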


Chapter 4. SMAPI Configuration

The SMAPI servers must be configured for the z/VM hypervisor on which they run, including the OPNCLOUD virtual machine that runs the xCAT MN and ZHCP services. The instructions to configure them are provided in z/VM: Systems Management Application Programming. After first reading this SMAPI Configuration chapter, refer to the “Configuring CMA Servers” section in the “Setting up and Configuring the Server Environment” chapter in z/VM: Systems Management Application Programming.

Most of the configuration of the SMAPI servers is provided within the DMSSICNF COPY file on the z/VM hypervisor. When the SMAPI servers are logged on, part of their boot process is to use the values in the DMSSICNF COPY file to configure the xCAT and/or ZHCP services. This includes creating xCAT nodes representing:

v The xCAT MN service, with the name specified in the XCAT_Host property.
v The ZHCP service, with the name specified in the ZHCP_Host property.
v The z/VM hypervisor, with the name specified in the XCAT_zvmsysid property.

The xCAT_MN_pw property should be changed to a value other than "NOLOG" in order to allow you to SSH into the system.
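As an illustrative sketch only (the node names, system ID, and password are hypothetical, written in the option_name="value" style the COPY files use), the related DMSSICNF COPY entries might resemble:

XCAT_Host = "xcat"
ZHCP_Host = "zhcp"
XCAT_zvmsysid = "zvm1"
XCAT_MN_pw = "mypassw0rd"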

If an xCAT MN manages ZHCP agents running in another z/VM hypervisor, a few additional steps have to be performed to allow the xCAT MN and ZHCP to work together. Other z/VM hypervisors attempt to create their nodes and do the set up with a default user name and password. If this set up fails, you have to create the node manually. These steps are discussed in the “Using a Single CMA xCAT MN and Multiple CMA ZHCP Servers” section in the “Setting up and Configuring the Server Environment” chapter in z/VM: Systems Management Application Programming.

z/VM: Systems Management Application Programming contains a description of the DMSSICNF COPY file, and also tells you when to start the SMAPI virtual servers following configuration of the DMSSICNF COPY file. When a CMA is being configured, SMAPI servers should not be started until additional configuration is performed in the CMA-related COPY file; see Chapter 5, “OpenStack Configuration,” on page 25 for more information.


Chapter 5. OpenStack Configuration

You must configure z/VM – with the set of remotely accessible systems management services that should run when SMAPI starts – by selecting a system role in the DMSSICMO COPY file. When SMAPI starts the OPNCLOUD virtual machine, the CMA configures its services using the properties specified in the DMSSICMO COPY and DMSSICNF COPY files on z/VM. Depending on the role you choose, you may need other configuration values. For example, if the system role requires the CMA to run OpenStack services, you must configure them for the z/VM system that they manage. DMSSICMO COPY contains OpenStack service configuration values (used when those services are run in the OPNCLOUD virtual server), and DMSSICNF COPY contains configuration values for the xCAT MN and ZHCP services.

Each OpenStack compute, network, or telemetry service manages a single z/VM system, regardless of whether it runs in a CMA or elsewhere. The services do not need to be running in the same server, but common practice is to do so, and services hosted in a CMA are co-located. Each service needs to be configured to talk to the xCAT MN and to identify the ZHCP agent which corresponds to the z/VM hypervisor that it will manage. In addition, configuration properties specify resource choices OpenStack uses when creating virtual server instances and virtual networks.

When you run OpenStack services outside of a CMA, you (or the OpenStack product distribution) must set up the OpenStack configuration files. Some products provide installation scripts which perform the tailoring, while others have the files edited by the person installing the product. Even if an installation script is used, it is a good idea to verify the configuration file settings. See Appendix F, “OpenStack Configuration Files,” on page 155 for information on the OpenStack configuration file contents. See Appendix A, “Installation Verification Programs,” on page 139 for information on scripts you can use to verify the contents of the OpenStack configuration files.

Configuring the CMA

For a CMA, initial configuration information is provided by the DMSSICMO COPY file and the DMSSICNF COPY file. IBM ships defaults as part of the z/VM product, which you can use as a starting point. As described in this section, the DMSSICMO COPY file must be updated for your local installation. The DMSSICMO COPY file must reside on the MAINT 193 disk.

A configuration wizard assists you in updating the DMSSICMO COPY file and the DMSSICNF COPY file. For details on using the wizard, see the “Configuring the CMA Using the Configuration Wizard” section in the “Configuring the CMA Servers” chapter in z/VM: Systems Management Application Programming.

You can also manually update the COPY files by editing them with XEDIT. IBM recommends that you keep at least two previous versions of these files as backups. For information on manually updating the properties in these files, see “DMSSICMO COPY File Properties” on page 26 and Chapter 4, “SMAPI Configuration,” on page 23.

When the virtual server that runs the CMA starts, its initial boot logic reads the COPY files to configure the services running within its virtual server. In the case of OpenStack services, when running them within a CMA, a script reads the COPY files and updates the various OpenStack configuration files.

Subsequent restarts of the CMA will honor most values defined in the DMSSICMO COPY file and the DMSSICNF COPY file. Most properties can be changed by editing these files. Some configuration values, such as cmo_admin_password, cannot be changed in this manner, although they can be updated by other means. Information on which properties are reconfigurable by updating the DMSSICNF and DMSSICMO COPY files is discussed in “Modifying the CMA on Subsequent Boots” on page 36.


If changes are made to the nova, neutron, cinder, and ceilometer configuration files by hand instead of by updating the DMSSICNF COPY and DMSSICMO COPY files, IBM recommends that you keep the COPY files in sync with the changes made by hand so that a possible CMA reset is less disruptive. Since CMA initialization updates the OpenStack configuration files, changes you make to them that conflict with the properties in DMSSICMO may be overwritten.

If you are configuring a CMA as the single (or first) system of a z/VM-only cloud, configure it as a controller node, which will also act as a compute node.

After configuring the CMA, see “Reconfiguring the CMA” on page 50 for information on other changes you can make to the CMA.

DMSSICMO COPY File Properties

The following properties are specified in the DMSSICMO COPY file. They are listed in alphabetical order, with the following information:

v Name of the property
v Indication of whether the property is required or optional for the given system role. A property is required for a specific system role if it is necessary for the mainline operation of the support. A property is optional if it is only necessary for an optional feature.
v The format of the value, and its definition. The values you specify for configuration properties in the DMSSICMO COPY must be specified in the form option_name="option_value". The option_value itself cannot contain single or double quotes. (An example follows this list.)
v Notes indicating the description of the property and additional information on usage of the property.
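For example, using the shipped default for the instance name template, a property assignment takes this form:

openstack_instance_name_template = "OSP%05x"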

For a summary of the DMSSICMO COPY file properties, see Table 4 on page 33.

cmo_admin_password

Required: controller, compute, compute_mn

Value: Initial password for OpenStack accounts/projects, including OpenStack projects, AMQP service, OpenStack Databases, the "admin" user in the OpenStack Horizon GUI, etc.

Notes:

v The first time you log in as "admin" in the Horizon GUI, you should change this password using the user settings view by clicking on the Admin link in the upper right hand section of the GUI. Whenever you change the Horizon GUI admin password, you must also update its value in the file named "openrc" by logging in, via ssh, to the mnadmin user and editing the file "openrc" in that user's home directory. (The user ID of the mnadmin user is defined in the XCAT_MN_User property in the DMSSICNF COPY file.)
v This is considered an initial password because it is only used when the LVM allocated in the cmo_data_disk property is first initialized. After that time, the value is ignored.

cmo_data_disk

Required: controller, compute_mn, mn

Value: "volid1 volid2 volid3 volid4"

Notes:

v The xCAT MN and OpenStack services use these minidisks to store user data (the CMA data disk). IBM recommends that you back up the contents of these volumes regularly, because they store your user data.
v CMA manages the disks with the Logical Volume Manager (LVM). The LVM logical volume name is cmo-data; it contains all the user data, such as OpenStack configuration and database files, generated certification keys, xCAT image files, and temporary files.


The logical volume cmo-data is mounted to the /data directory. The /install directory is bound to the /data/xcat/install directory. See “Space Issues on /install Directory Can Lead to xCAT MN Issues” on page 208 for LVM troubleshooting.

v The volume labels can be specified on multiple lines, with one double quote at the beginning of the value and another at the end. For example:

cmo_data_disk = "volid1 volid2
volid3 volid4"

v You should perform the following steps before defining the volumes with this property:
1. Use CPFMTXA to issue a CP FORMAT of the volumes being used. Note that the CMA will be able to use the DASD volumes only if they are formatted by CPFMTXA.
2. ATTACH the volumes to the SYSTEM.
3. Enter the volume labels into the cmo_data_disk variable in the DMSSICMO COPY file.
4. Do not add the volumes to a directory manager disk group.

If you want to add new DASD volumes to the repository, repeat the steps shown here.

v Each time the controller starts, it identifies new volumes which have been added to the list, defines them as full-volume minidisks linked from the OPNCLOUD z/VM user ID, and adds them to the directory manager in a group named XCAT1. The first time you add DASD volumes and restart the CMA, it may take several minutes to initialize the volumes.
v Space equivalent to seven ECKD 3390 Model 9 volumes is recommended to start with if you take the default value for the xcat_free_space_threshold property. (For more information on this property, see “Settings for Nova” on page 155.)
v After you add new disks with this property, perform the following steps to make the CMA restart and bring the disks online:
1. Use a 3270 terminal to connect to z/VM, and log on as MAINT.
2. Issue FORCE VSMGUARD.
3. Issue XAUTOLOG VSMGUARD.

When you can see in the operator console that the CMA is up, you can ssh into the CMA with the MNADMIN user name and password to double check whether the LVM is expanded. Then issue the following command:

df -h /data
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/cmo-data   21G   15G  4.5G  78% /data

v You cannot remove volumes from the LVM by removing their volume IDs from the cmo_data_disk property. The CMA maintenance page (z/VM OpenStack Cloud Information (http://www.vm.ibm.com/sysman/osmntlvl.html)) contains a utility you can use to remove disks from an LVM. Always back up all current LVM disks before attempting to remove a disk from an LVM.

openstack_controller_address

Required: compute, compute_mn

Value: Controller's external IP address which the nova compute node will use.

Notes:

v If the controller is running in another CMA, this address should be the xCAT_MN_Addr from the controller's DMSSICNF COPY file. Otherwise, this should be the external controller's IP address.

openstack_default_network

Optional: controller, compute, compute_mn

Value: IP address range and CIDR for the default network; for example: 192.168.1.2–192.168.1.250/24.


Notes:

v When the CMA starts up, the network and subnet with this IP range is created for OpenStack users in all projects. The default network is connected to the physical vswitch specified in the XCAT_MN_vswitch property in the DMSSICNF COPY file.
v Users in all projects can get default network information by issuing the neutron net-list command.
v If there is a neutron network coupled to the physical vswitch specified in the XCAT_MN_vswitch property, the default network is not created.
v The default network can work in conjunction with the openstack_xcat_mgt_ip and openstack_xcat_mgt_mask properties. If the IP address ranges specified in the openstack_default_network property cannot be accessed by the CMA, you can set the values for the openstack_xcat_mgt_ip and openstack_xcat_mgt_mask properties to be in the same IP segment, but not in the same IP range, as the default network. Then, when the CMA starts up, the default network is created and you can use the xCAT management IP address to access the deployed instances.
v If the openstack_xcat_mgt_ip property is set to NONE, and the IP ranges in the openstack_default_network property cannot be accessed by the CMA, the CMA uses the first IP address in the address range of the default network as the xCAT management IP address, and it uses the rest of the IP address range to create the network. For example, if the default network is in the range 192.168.1.2–192.168.1.250/24 and the openstack_xcat_mgt_ip property is set to NONE, then 192.168.1.2 is used as the xCAT management IP address (and is set in the neutron_zvm_plugin.ini file), and the IP address range 192.168.1.3–192.168.1.250/24 is used as the subnet IP address range for the default network.

openstack_endpoints_enable_https

Optional: controller, compute, compute_mn

Value: TRUE or FALSE

Notes:

v Use this property to configure the protocol required when calling OpenStack endpoints. You can switch the protocol by changing this property value (for example, from TRUE to FALSE) and then restarting the CMA.
v The value is case-insensitive. The default value is TRUE, meaning that OpenStack API callers are required to use HTTPS. If the value is FALSE, HTTP is required.
v This property does not affect the protocol used by other CMA REST APIs, such as the xCAT REST API and OpenStack's internal message queue service.
v When you configure the CMA to use HTTPS for the OpenStack API protocol, it secures the connections using certificates that you can control. IBM recommends that you replace the default self-signed certificate with one signed by a Certificate Authority (CA) that you trust, as described in “Replacing the Default SSL Certificates” on page 40.
v If a cloud has both a CMA running in the controller role and one or more CMAs running in the compute or compute_mn role, you must make the endpoint protocol consistent on all of these CMAs. Change the property value on each system, then restart the controller role CMA, and finally, restart the other CMAs, in that order.
v You must delete all web browser cookies set by the OpenStack Dashboard after changing the value of this property and restarting the affected controller. If you fail to do so, unpredictable errors can occur the next time you use the Dashboard. Common symptoms include: the login page may not load successfully, or you might see an "Unable to retrieve project list" error in the Dashboard.

openstack_instance_name_template

Required: controller, compute, compute_mn

Value: Template string to be used to generate instance names.

Notes:

v Shipped default is "OSP%05x".


v This property is used to configure the nova configuration property: instance_name_template. See “Settings for Nova” on page 155.

openstack_san_ip

Optional: controller, compute, compute_mn

Value: IP address of your SVC storage.

Notes:

v Contact your SVC service manager if you don't know the address.
v This property is used to configure the cinder configuration property: san_ip. See “Settings for Cinder” on page 163.
v An IPv4 address should be specified as four octets written as decimal numbers ranging from 0 to 255 and concatenated with a period between the octets. Do not specify leading zeros for an octet, as this can cause some utilities to treat the octet as a number in octal representation. (For example, 09.0.05.11 is wrong, 9.0.5.11 is correct.)

openstack_san_private_key

Optional: controller, compute, compute_mn

Value: Filename of the private key file to use for SSH authentication to your SVC storage.

Notes:

v Contact your SVC service manager to get the file.
v This property is used to configure the cinder configuration property: san_private_key. See “Settings for Cinder” on page 163.
v The file should be placed in the home directory of the user ID defined in the XCAT_MN_admin property. For example, if XCAT_MN_admin is defined as “mnadmin”, the private key should be placed in /home/mnadmin. After placing the file in this directory, use either of the following methods to allow the cinder service to consume this key:
– Restart VSMGUARD. The CMA will automatically set up the key file permission.
– Use the following steps to manually set up the key file permission. In the following commands, mnadmin is the value defined in the XCAT_MN_admin property, and san_private_key is the value defined in the openstack_san_private_key property:

sudo cp -f /home/mnadmin/san_private_key /var/lib/cinder/
sudo chown cinder:cinder /var/lib/cinder/san_private_key
sudo chmod 600 /var/lib/cinder/san_private_key
sudo systemctl restart openstack-cinder-volume.service

For either method, after making sure cinder services are working correctly, for security reasons it is recommended that you delete the key file from the home directory of the user ID defined in the XCAT_MN_admin property.

openstack_storwize_svc_volpool_name

Optional: controller, compute, compute_mn

Value: The name of the VDISK pool from which cinder will carve disks.

Notes:

v Contact your SVC service manager to get the pool name.
v This property is used to configure the cinder configuration property: storwize_svc_volpool_name. See “Settings for Cinder” on page 163.
v The VDISK pool must be created and ready to work before OpenStack can use it. The volumes that can be created depend on the capability of the VDISK pool. Contact your SVC service manager if you don't know which pool you can use.

openstack_storwize_svc_vol_iogrp

Optional: controller, compute, compute_mn

Value: The io_group_id with which to associate the virtual disk.


Notes:

v This property is used to configure the cinder configuration property: storwize_svc_vol_iogrp. See “Settings for Cinder” on page 163.

openstack_system_role

Required: controller, compute, compute_mn, mn, zhcp

Value: Specifies the role associated with the CMA. The value can be:

compute To enable the CMA to act in the compute role.

compute_mn To enable the management of a z/VM hypervisor using a cross-platform OpenStack controller.

controller To enable the CMA to act in the controller role. A controller runs both controller services and compute services for the z/VM hypervisor on which it is running.

mn To enable the CMA to access the xCAT GUI to manage the entire cluster.

zhcp To enable the CMA to act in only the zhcp role and connect to z/VM SMAPI.

Notes:

v The value of this property should not be changed after the CMA is restarted. If you do change this value later, generally you must re-install the CMA code from its DDR images and reformat the user-data LVM before restarting the CMA with the new role. The CMA maintenance page (z/VM OpenStack Cloud Information (http://www.vm.ibm.com/sysman/osmntlvl.html)) contains an LVM utility you can use to reformat the user-data LVM.
v For information on choosing the appropriate system role, see “Choosing the Correct Deployment Pattern for your Installation” on page 11.

openstack_volume_enable_multipath

Optional: controller, compute, compute_mn

Value: TRUE or FALSE.

Notes:

v This property is used to configure the nova configuration property zvm_multiple_fcp. See “Settings for Nova” on page 155.

openstack_xcat_mgt_ip

Optional: controller, compute, compute_mn

Value: IP address – the xCAT management interface IP address used by the xCAT MN to communicate through to newly deployed instance servers.

Notes:

v Use of this property is deprecated in the Newton release. For more information on deprecated interfaces, see “Deprecated Interfaces” on page xv.
v This property configures the neutron configuration property: xcat_mgt_ip. For more information on using this property, see “Settings for Neutron” on page 165.
v This property is used when new instances do not have public IP addresses that would allow the xCAT MN to communicate with the instances. If specified, an additional interface will be created in the OPNCLOUD virtual server over which the xCAT MN will communicate with deployed systems. Whether the property is specified depends upon the type of networks that are used by the deployed systems. For more information, see “Network Configurations” on page 56, which discusses and shows examples of:
– Flat network. See “Single Flat Network” on page 59.


– Flat network using private IP addresses. See “Using Private IP Addresses for Instances” on page 61.
– Flat and VLAN mixed network. See “Flat and VLAN Mixed Network” on page 65.

v The xCAT MN and the z/VM OpenStack code support only one additional interface as the xCAT management IP address. For this reason, any compute node with private networks using the same xCAT must use the same value for these properties. If a CMA running in the controller role is involved, then the property may be specified in the DMSSICMO COPY file for the controller's z/VM and omitted on the other compute nodes. The controller will ensure that the interface is defined when the controller starts up, and because the controller contains the xCAT MN, the interface will be defined prior to its use by another OpenStack compute node (for example, a CMA in the compute role).
v The xcat_mgt_mask and xcat_mgt_ip must be set in the same broadcast domain as the instance's IP address. A broadcast domain is a logical division of a computer network, in which all nodes can reach each other by broadcast at the data link layer.
v IBM recommends that the xCAT MN be defined so that this is the first IP address of your management network.
v An IPv4 address should be specified as four octets written as decimal numbers ranging from 0 to 255 and concatenated with a period between the octets. Do not specify leading zeros for an octet, as this can cause some utilities to treat the octet as a number in octal representation. (For example, 09.0.05.11 is wrong, 9.0.5.11 is correct.)

openstack_xcat_mgt_mask

Optional: controller, compute, compute_mn

Value: Netmask of your xCAT management network (for example: 255.255.255.0).

Notes:

v Use of this property is deprecated in the Newton release. For more information on deprecated interfaces, see “Deprecated Interfaces” on page xv.
v This property is used to configure the neutron configuration property: xcat_mgt_mask. It is used when new instances do not have public IP addresses that would allow the xCAT MN to communicate with the instances, and it is used in conjunction with openstack_xcat_mgt_ip. For related information on xcat_mgt_mask, see “Settings for Neutron” on page 165.
v The xcat_mgt_mask and xcat_mgt_ip must be set in the same broadcast domain as the instance's IP address. A broadcast domain is a logical division of a computer network, in which all nodes can reach each other by broadcast at the data link layer.

openstack_zvm_diskpool

Required: controller, compute, compute_mn

Value: The name and type of the disk pool to be used when deploying virtual machines.

Notes:

v This disk pool should already be configured, and it should be managed by the z/VM Directory Manager (DIRMAINT). An existing disk pool other than XCAT1 must be used. Ask your system administrator which disk pool to use.
v The value of this property is a colon-delimited list in the form diskpool_type:diskpool_name; for example, FBA:xcatfba or ECKD:xcateckd. The disk pool type must be in uppercase: FBA or ECKD. The disk pool name is a user-chosen name.
v This property is used to configure the nova configuration properties: zvm_diskpool and zvm_diskpool_type. See “Settings for Nova” on page 155.

openstack_zvm_fcp_list

Optional: controller, compute, compute_mn

Value: The list of FCPs used by virtual server instances.

Notes:


v This property is used to configure the nova configuration property: zvm_fcp_list. See “Settings for Nova” on page 155.

openstack_zvm_image_default_password

Required: controller, compute, compute_mn

Value: The default password to be used as the default OS root password for the newly booted virtual server instances.

Notes:

v This property is used to configure the nova configuration property: zvm_image_default_password. See “Settings for Nova” on page 155.

openstack_zvm_scsi_pool

Optional: controller, compute, compute_mn

Value: The name of the xCAT SCSI pool.

Notes:

v This property is used to configure the nova configuration property: zvm_scsi_pool. See “Settings for Nova” on page 155.

openstack_zvm_timeout

Optional: controller, compute, compute_mn

Value: The number of seconds a newly deployed instance is given to accept incoming communications before OpenStack treats the deployment request as having failed, after which the instance is automatically deleted.

Notes:

v The default is 300 seconds (five minutes).

openstack_zvm_vmrelocate_force

Required: controller, compute, compute_mn

Value: ARCHITECTURE, DOMAIN, NONE, or STORAGE

Notes:

v This value indicates the following:

ARCHITECTURE
Attempt relocation even though hardware architecture facilities or CP features are not available on the destination system.

DOMAIN
Attempt relocation even though the VM would be moved outside of its domain.

NONE
Indicates that no VMRELOCATE FORCE option will be used. Relocations will fail if architecture, domain or storage warnings or errors are encountered.

STORAGE
Relocation should proceed even if CP determines that there is insufficient storage.

openstack_zvm_xcat_master

Required: compute, zhcp

Value: The xCAT node name of the controller or the xCAT MN.

Notes:

v The node name should be specified in lowercase.

openstack_zvm_xcat_service_addr

Required: compute


Value: Specifies the external IP address of the xCAT MN to which this compute node communicates, to allow REST API calls to flow from the compute node to the MN.

Notes:

v This is the xCAT management node IP address that is reachable by all compute nodes, as specified in XCAT_MN_Addr on the compute_mn node.
v This property is required only for a compute node which is connected to a compute_mn and not to a controller. For a compute node connected to a controller, traffic flows over the openstack_controller_address.

openstack_zvm_zhcp_fcp_list

Optional: controller, compute, compute_mn

Value: The list of FCPs used only by the ZHCP service. The FCP addresses may be specified as either an individual address or a range of addresses connected with a hyphen. Multiple values are specified with a semicolon connecting them (for example: 1f0e;2f02-2f1f;3f00).

Notes:

v This property is used to configure the nova configuration property: zvm_zhcp_fcp_list. See “Settings for Nova” on page 155.
v The FCP addresses must be different from the ones specified for the zvm_fcp_list. Any FCPs that exist in both zvm_fcp_list and zvm_zhcp_fcp_list will lead to errors.
v IBM recommends that you specify only one FCP in this property, to avoid wasting resources.
v Contact your z/VM system administrator if you don't know which FCPs you can use.

Table 4 lists the properties in the DMSSICMO COPY file and indicates which are required (R) or optional (O) depending on the system role you choose for the CMA instance. A blank indicates that the property is not used when the CMA is configured in the indicated role.

Table 4. Summary of DMSSICMO COPY File Properties and Whether They are Required or Optional

Property                              controller  compute  compute_mn  mn  zhcp
cmo_admin_password                    R           R        R
cmo_data_disk                         R                    R           R
openstack_controller_address                      R        R
openstack_default_network             O           O        O
openstack_endpoints_enable_https      O           O        O
openstack_instance_name_template      R           R        R
openstack_san_ip                      O           O        O
openstack_san_private_key             O           O        O
openstack_storwize_svc_volpool_name   O           O        O
openstack_storwize_svc_vol_iogrp      O           O        O
openstack_system_role                 R           R        R           R   R
openstack_volume_enable_multipath     O           O        O
openstack_xcat_mgt_ip                 O           O        O
openstack_xcat_mgt_mask               O           O        O
openstack_zvm_diskpool                R           R        R
openstack_zvm_fcp_list                O           O        O
openstack_zvm_image_default_password  R           R        R
openstack_zvm_scsi_pool               O           O        O
openstack_zvm_timeout                 O           O        O
openstack_zvm_vmrelocate_force        R           R        R
openstack_zvm_xcat_master                         R                        R
openstack_zvm_xcat_service_addr                   R
openstack_zvm_zhcp_fcp_list           O           O        O
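Pulling the required controller-role properties together, a minimal DMSSICMO COPY sketch might look like the following; every value shown (passwords, volume labels, pool name) is a hypothetical placeholder:

openstack_system_role = "controller"
cmo_admin_password = "initAdminPw"
cmo_data_disk = "CMO001 CMO002"
openstack_instance_name_template = "OSP%05x"
openstack_zvm_diskpool = "ECKD:xcateckd"
openstack_zvm_image_default_password = "rootPw"
openstack_zvm_vmrelocate_force = "NONE"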

Settings for IP Address Properties

Table 5 shows you the values that certain properties should be set to, based on the role configured for the CMA.

Table 5. Settings for IP Address Properties Defined in the DMSSICMO COPY File and the DMSSICNF COPY File, Based on the CMA Role

openstack_controller_address
  controller role: Ignored.
  compute role: The IP address of the cloud controller, which is running on a different hypervisor.
  compute_mn role: The IP address of the cloud controller, which is running on a different hypervisor.
  mn role: Ignored.
  zhcp role: Ignored.

openstack_xcat_mgt_ip
  controller role: The xCAT MN's instance management network IP address. Used to communicate with newly deployed instances. (optional)
  compute role: The xCAT MN's instance management network IP address. Used to communicate with newly deployed instances. (optional)
  compute_mn role: The xCAT MN's instance management network IP address. Used to communicate with newly deployed instances. (optional)
  mn role: Ignored.
  zhcp role: Ignored.

openstack_zvm_xcat_service_addr
  controller role: Ignored.
  compute role: The IP address of the xCAT MN, which is running on a different hypervisor.
  compute_mn role: Ignored.
  mn role: Ignored.
  zhcp role: Ignored.

XCAT_Addr
  controller role: The IP address associated with the xCAT MN to communicate with the ZHCP servers.
  compute role: Ignored.
  compute_mn role: The IP address associated with the xCAT MN to communicate with the ZHCP servers.
  mn role: The IP address associated with the xCAT MN to communicate with the ZHCP servers.
  zhcp role: The IP address associated with the xCAT MN to communicate with the ZHCP servers.

XCAT_MN_Addr
  controller role: The external IP address of the xCAT MN, the cloud controller, and the compute node.
  compute role: The OpenStack compute node's external IP address. When running this role, the xCAT MN service runs on a different hypervisor.
  compute_mn role: The external IP address of the xCAT MN and the compute node.
  mn role: The xCAT MN's external IP address.
  zhcp role: The external IP address of the xCAT MN that will manage this ZHCP service. The managing xCAT MN service runs on a different hypervisor.

ZHCP_Addr
  controller role: Ignored.
  compute role: The IP address associated with the ZHCP service. This address must be reachable from the network configured on the xCAT MN service that will manage the hypervisor where ZHCP is running. The xCAT MN service runs on a different hypervisor.
  compute_mn role: Ignored.
  mn role: Ignored.
  zhcp role: The IP address associated with the xCAT MN that will manage the z/VM hypervisor where ZHCP is running. The xCAT MN runs on a different hypervisor.

Starting the CMA

After configuring both the DMSSICNF and DMSSICMO COPY files, the virtual machine which runs the CMA can be started. The initial boot of the virtual machine will use the configuration files to configure the services within the CMA. The only recommended method for starting the server is to log off the SMAPI VSMGUARD server and log it back on. The start up process of this server will recycle each of the known SMAPI servers, including the one in which you are running the CMA. On an authorized z/VM user ID, issue the following commands:

FORCE VSMGUARD
XAUTOLOG VSMGUARD

Note:

v Always use the FORCE VSMGUARD command when restarting the CMA. Do not forcibly log off or restart an individual virtual machine unless instructed to do so by IBM support personnel.
v In a multisystem cluster, the controller role CMA system should be started before the compute role appliance system, to ensure the NFS connections between these two types of systems are set up correctly.
v CMAs on other systems need to be restarted after the CMA controller is restarted. If a compute role CMA is being started for the first time or after resetting the CMA (see Appendix B, “Using DDR to Reset the CMA,” on page 145), you should SSH into the compute role system and update the zvm_xcat_username and zvm_xcat_password properties in the following files:


/etc/nova/nova.conf, /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini, and /etc/ceilometer/ceilometer.conf. For more information on these properties, see Appendix F, “OpenStack Configuration Files,” on page 155.

Accessing the CMA

For all system roles other than the zhcp role, properties in the DMSSICNF COPY file identify the user that is allowed to log in to the CMA. If the XCAT_MN_pw property has a value other than “NOLOG”, the user is allowed to log in through SSH. We strongly recommend that you specify a value in order to allow the user to log in to the CMA to modify configuration settings and verify the configuration. Use an SSH client to log into the CMA.

v The user name is the user name defined in the XCAT_MN_admin property.
v The password is the password defined in the XCAT_MN_pw property when the XCAT_MN_admin user was first created. Note that this password may have changed if you have already followed the instructions (described under the description of the XCAT_MN_pw property in z/VM: Systems Management Application Programming) to change it following initial configuration of the CMA.

Command line access is provided to support updating the OpenStack configuration files and restarting the OpenStack services. sudo is supported to allow you to access privileged commands or files. For example:

v sudo /bin/vi file

Allows editing of OpenStack configuration files, where file is the file specification of the configuration file.

v sudo systemctl restart service_name.service

Restarts the service specified by service_name.

Subsequent restarts of the CMA will honor the values defined in the DMSSICNF COPY file and the DMSSICMO COPY file, unless otherwise noted in the property's documentation. For more information, see “Modifying the CMA on Subsequent Boots.”

OpenStack commands may be issued on the CMA system. A file exists in the home directory of the admin user specified by the XCAT_MN_admin property. The file can be sourced to define shell variables used by the OpenStack commands. Issue:

source $HOME/openrc
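After sourcing the file, standard OpenStack CLI commands can be issued from that session; for example (illustrative):

source $HOME/openrc
nova list
neutron net-list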

Verifying the CMA

After starting the CMA, it is recommended that you verify the configuration for the various services that were configured. This process can be automated using the installation verification programs (IVP) shipped with the appliance. For more information, see Appendix A, "Installation Verification Programs," on page 139.

Modifying the CMA on Subsequent Boots

Most properties can be changed by updating their values in the DMSSICNF COPY file and/or the DMSSICMO COPY file and then rebooting the CMA. Some properties, such as cmo_admin_password, are updated by other means; see the individual property descriptions for instructions on updating their values. This section documents how the CMA propagates changes in the DMSSICNF COPY and DMSSICMO COPY files to the OpenStack configuration files.
1. Update the DMSSICNF COPY file and the DMSSICMO COPY file on all z/VM systems where the CMA is running in the controller role or the compute role. When the CMA boots up, the CMA configuration tools will honor most of the values defined in DMSSICNF COPY and DMSSICMO COPY. Table 6 shows the configuration options which will be overwritten by values in the DMSSICNF COPY and DMSSICMO COPY files:

Table 6. OpenStack Configuration Options Which Will be Overwritten by the CMA Configuration Tools

File /etc/nova/nova.conf, section DEFAULT:

   default_publisher_id          XCAT_Host in DMSSICNF COPY
   host                          XCAT_zvmsysid in DMSSICNF COPY
   instance_name_template        openstack_instance_name_template in DMSSICMO COPY
   my_ip                         XCAT_MN_Addr in DMSSICNF COPY
   zvm_diskpool                  openstack_zvm_diskpool in DMSSICMO COPY
   zvm_diskpool_type             openstack_zvm_diskpool in DMSSICMO COPY
   zvm_fcp_list                  openstack_zvm_fcp_list in DMSSICMO COPY
   zvm_host                      XCAT_zvmsysid in DMSSICNF COPY
   zvm_image_default_password    openstack_zvm_image_default_password in DMSSICMO COPY
   zvm_reachable_timeout         openstack_zvm_timeout in DMSSICMO COPY
   zvm_scsi_pool                 openstack_zvm_scsi_pool in DMSSICMO COPY
   zvm_vmrelocate_force          openstack_zvm_vmrelocate_force in DMSSICMO COPY
   zvm_xcat_master               openstack_zvm_xcat_master in DMSSICMO COPY
   zvm_xcat_server               openstack_zvm_xcat_service_addr in DMSSICMO COPY
   zvm_xcat_username             admin
   zvm_zhcp_fcp_list             openstack_zvm_zhcp_fcp_list in DMSSICMO COPY

File /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini, section AGENT:

   xcat_mgt_ip                   openstack_xcat_mgt_ip in DMSSICMO COPY; if openstack_xcat_mgt_ip is not set, the first IP specified in openstack_default_network in DMSSICMO COPY is used
   xcat_mgt_mask                 openstack_xcat_mgt_mask in DMSSICMO COPY; if openstack_xcat_mgt_mask is not set, the mask in openstack_default_network in DMSSICMO COPY is used
   xcat_zhcp_nodename            ZHCP_Host in DMSSICNF COPY
   zvm_host                      XCAT_zvmsysid in DMSSICNF COPY
   zvm_xcat_server               openstack_zvm_xcat_service_addr in DMSSICMO COPY
   zvm_xcat_username             admin

File /etc/cinder/cinder.conf, section DEFAULT:

   san_ip                             openstack_san_ip in DMSSICMO COPY
   san_private_key                    openstack_san_private_key in DMSSICMO COPY
   storwize_svc_connection_protocol   cinder_volume_protocol in DMSSICMO COPY
   storwize_svc_volpool_name          openstack_storwize_svc_volpool_name in DMSSICMO COPY
   storwize_svc_vol_iogrp             openstack_storwize_svc_vol_iogrp in DMSSICMO COPY

File /etc/ceilometer/ceilometer.conf, section zvm:

   host                          XCAT_zvmsysid in DMSSICNF COPY
   zvm_host                      XCAT_zvmsysid in DMSSICNF COPY
   zvm_xcat_master               openstack_zvm_xcat_master in DMSSICMO COPY
   xcat_zhcp_nodename            ZHCP_Host in DMSSICNF COPY
   zvm_xcat_server               openstack_zvm_xcat_service_addr in DMSSICMO COPY
   zvm_xcat_username             admin

Note: The xcat_mgt_ip and xcat_mgt_mask properties are deprecated in the Newton release.

2. Restart the CMA. On an authorized z/VM user ID, issue the following commands:

   FORCE VSMGUARD
   XAUTOLOG VSMGUARD

   Note: Do not issue FORCE OPNCLOUD. Always use VSMGUARD to restart these virtual machines.

3. If this is a compute role appliance and the password associated with the admin user ID for the xCAT GUI is changed, you also will have to update the following properties for the compute role appliance:
   - sudo vi /etc/nova/nova.conf, and update zvm_xcat_password with the new password.
   - sudo vi /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini, and update zvm_xcat_password with the new password.
   - sudo vi /etc/ceilometer/ceilometer.conf, and update zvm_xcat_password with the new password.

   After making these changes, issue the following commands so that related services can restart:

   sudo systemctl restart openstack-nova-compute.service
   sudo systemctl restart neutron-zvm-agent.service
   sudo systemctl restart openstack-ceilometer-polling.service

Final Configuration of the CMA via the OpenStack Horizon Dashboard

Note: This section applies only if your CMA is configured to run in the controller role.

For a CMA configured to run in the controller role, final configuration of the CMA is performed using a GUI.

Use your browser to access the OpenStack horizon Dashboard at the following URL:

https://XCAT_MN_Addr

where XCAT_MN_Addr is the address specified in the XCAT_MN_Addr property in the DMSSICNF COPY file.

Note:

- The default web page will be redirected to https://XCAT_MN_Addr/dashboard.


- All HTTP requests to the CMA HTTP server on port 80 are redirected to port 443 with SSL security enabled.

- Unless you have already replaced the CMA's default SSL certificates with ones that your browser trusts, you might see a browser warning when you access the GUI's URL. See "Replacing the Default SSL Certificates" on page 40 for instructions on avoiding this warning and improving the CMA's security.

Figure 11 shows the OpenStack Dashboard Log In screen.

Log in using the following information:

User Name: admin
Password: The current password used to log in as "admin" with the Horizon GUI. The original value of the password for "admin" was specified in the cmo_admin_password property in the DMSSICMO COPY file. You should change this password the first time you log in as "admin" with the horizon GUI.

Figure 11. OpenStack Dashboard Log In Screen


Then click the Connect button to log in to the OpenStack Dashboard. After logging in you will see Figure 12.

For additional installation instructions, see the CMA140 FILE file, which is included as part of the required APAR.

After configuring the CMA, see "Reconfiguring the CMA" on page 50 for information on other changes you can make to the CMA.

CMA Usage Notes

This section provides tips for using the CMA.

Flushing Expired OpenStack Keystone Tokens

By default, the keystone Identity service's expired tokens remain stored in its database. This can increase the size of the database and possibly degrade service performance. On a CMA configured in the controller role, an hourly cron job is configured and run by default to flush expired tokens. The cron job is located in /var/spool/cron/keystone:

@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1

The token flush log is located in the /var/log/keystone/keystone-tokenflush.log file.
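You can also flush expired tokens on demand rather than waiting for the hourly job; a sketch, assuming the default controller role setup described above:

sudo /usr/bin/keystone-manage token_flush        # flush expired tokens immediately
tail /var/log/keystone/keystone-tokenflush.log   # review the output of the most recent scheduled flush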

Replacing the Default SSL Certificates

During CMA installation, the CMA uses an internal Certificate Authority (CA) to create a self-signed certificate used to secure all CMA APIs that use HTTPS; for example, the xCAT REST API and (if you configured the CMA to run OpenStack services securely) the OpenStack APIs. This is sufficient for initial installation and testing, but IBM recommends that you replace the default self-signed certificate with one signed by a Certificate Authority (CA) that you trust, as described here. The set of internal certificates created by the CMA is located in the /data/PKIcerts/certs directory.

Because the default CA certificate is self-signed and not recognized by the connecting client, the first time you access the services using the HTTPS protocol you will see a warning message stating that the server certificate is not trusted. For example, when using a web browser as the client to connect to the CMA, you might see this warning: "The certificate is not trusted because the issuer certificate is unknown." You will have to verify the certificate and confirm that it is trusted. One way to avoid getting this message is to add the certificate to each connecting client's trusted certificates list. Another way is to replace the certificate with a trusted CA-signed certificate.

Figure 12. OpenStack Dashboard Overview Screen


To replace the internal CA-signed certificate with your own external CA-signed certificate, the following files are required. (Replace mnadmin with the value of the XCAT_MN_admin property in the DMSSICNF COPY file.)

/home/mnadmin/cma_certs/privkey.pem
    The private key generated for the CMA.

/home/mnadmin/cma_certs/cmacert.pem
    The corresponding server certificate issued to the CMA and signed by the external CA.

/home/mnadmin/cma_certs/cacert.pem
    The trusted CA certificate chain that issued the above certificate.

These files must be put on the CMA using the specified directory and file name(s). When the CMA is restarted, it checks whether these files exist and are valid. If valid, the CMA uses this certificate to set up the services that have enabled HTTPS. To be valid, a server certificate for use by the CMA must meet the following requirements:
- The private key must not be protected with a passphrase.
- The private key must be at least 2048 bits in length.
- The CommonName (CN) of the server certificate must have the same value as the value of the XCAT_MN_Addr property in the DMSSICNF COPY file.
- The certificate and the private key file must be in Privacy Enhanced Mail (PEM) format.
- The server certificate must be valid for server authentication. You can verify this by issuing the following command:

openssl verify -purpose sslserver -CAfile /home/mnadmin/cma_certs/cacert.pem /home/mnadmin/cma_certs/cmacert.pem

If the certificate can be verified successfully, you will see "OK" at the end of the command output.
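The remaining requirements can be checked before restarting the CMA with standard openssl subcommands; a sketch (each command operates on the files named above):

openssl rsa -in /home/mnadmin/cma_certs/privkey.pem -noout -text | head -1   # shows the key length; expect at least 2048 bit
openssl x509 -in /home/mnadmin/cma_certs/cmacert.pem -noout -subject         # the CN must equal the XCAT_MN_Addr value
openssl x509 -in /home/mnadmin/cma_certs/cmacert.pem -noout -dates           # confirm the certificate validity period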

Note:

- Because the value of the XCAT_MN_Addr property is used as the server certificate's CN, if this value is changed the certificate must be regenerated. If no valid external CA-signed certificate is provided, the CMA will generate a new internal CA-signed certificate with the new value, thereby replacing the old certificate. You must add the new certificate to your connecting clients' trust list.

- If you restore the CMA and clear the data disk, when the CMA starts up again it regenerates a certificate with the serial number it had initially. Because the CMA server certificate uses the value of the XCAT_MN_Addr property as its CN, if you have connected to the CMA with this same IP address and trusted the certificate, you will get a "sec_error_reused_issuer_and_serial" error message when you try to connect after the CMA is restored. To resolve this problem, you must remove from the client the original trusted certificate (with the same issuer and the same CN).

- If you want to use a DNS server to verify the certificate, insert your DNS server information into the /etc/resolv.conf file. In this file, replace "127.0.0.1" with the IP address of the DNS server you want to use.
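For example, assuming a hypothetical site DNS server at 192.0.2.53, the /etc/resolv.conf file would contain:

nameserver 192.0.2.53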

Changing Default User Quotas

Most OpenStack services, including the graphical user interface and the command line interface, are not modified in z/VM's CMA. If you have questions about how to use these services that are not already answered by this publication, consult the OpenStack documentation web page (http://docs.openstack.org). For example, if you want to change default user quotas, you can search that web page for: "manage quotas", "quotas", or "quotas command line".
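As one possible illustration, quotas can also be inspected and changed from the command line with the nova quota commands; this is a sketch, and the project ID is a placeholder:

nova quota-show --tenant <project_id>              # display the current quotas for a project
nova quota-update --instances 20 <project_id>      # raise the project's instance quota to 20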

Creating Images Using the Horizon Dashboard

Follow the steps in this section to use the OpenStack horizon Dashboard to create images and update the image metadata required by the z/VM OpenStack driver.
1. Log on to the Dashboard.


2. Navigate to Admin->System->Images, as shown in Figure 13.

3. Select the Create Images button on the top right of the page, as shown in Figure 14 on page 43.

Figure 13. OpenStack Dashboard Images Screen


4. The Image Details screen is then displayed, as shown in Figure 15 on page 44 and Figure 16 on page 45. Enter the image information. For information on each available option, select the Help button (represented by a question mark) in the top right corner. Note the following when you enter information on this screen:
   - The only format the z/VM driver supports is "Raw". Other options are not supported.
   - You can upload an image source from a file, or import an image file from a URL.

   When you have finished entering all the required information, select the Create Image button.

Figure 14. Create Images Button


Figure 15. Image Details Screen, Part 1 of 2


5. You will see an informational message on the top right corner of the screen, as shown in Figure 17 on page 46.

Figure 16. Image Details Screen, Part 2 of 2


6. Where the newly created image is displayed, select the Launch pulldown menu, and then select Update Metadata. See Figure 18 on page 47.

Figure 17. Results of Creating an Image


7. On the Update Image Metadata screen, select the plus (+) button next to the z/VM Image Properties field, as shown in Figure 19 on page 48.

Figure 18. Launch Menu on the Images Screen


8. Enter the OS version of the image. See Figure 20 on page 49. Note the following when you enter information on this screen:
   - Use the information at the bottom of the screen if you need help specifying the correct format of the OS version.
   - z/VM image properties consist of predefined metadata which is used to help you define image metadata specifically for the z/VM driver. To update these metadata definitions, navigate to Admin->SYSTEM->Metadata Definitions and update "z/VM Image Properties".

   When you have finished entering information, select the Save button.

Figure 19. Update Image Metadata Screen


9. After you save the image metadata, horizon will display the image list, as shown in Figure 21 on page 50.

Figure 20. Selecting the Operating System (OS) Version


Reconfiguring the CMA

In previous releases of the CMA (before Newton), reconfiguration of the CMA was accomplished by using cookbooks and a Chef server. Because this method was at times complicated, IBM now provides easy-to-use scripts to reconfigure the CMA.
- You can reconfigure the CMA to use a remote keystone server by issuing the following command in the CMA:

  /usr/local/bin/enable_remote_keystone

  To display command usage information, enter:

  enable_remote_keystone -h

  To display restrictions and guidelines, enter:

  enable_remote_keystone -g

- You can reconfigure the CMA to have z/VM managed by an external cross-platform OpenStack controller by issuing the following command in the CMA:

  /usr/local/bin/enable_compute_anywhere

  To display command usage information, enter:

  enable_compute_anywhere -h

Figure 21. Image List Displayed after Saving the Image Metadata


To display restrictions and guidelines, enter:

enable_compute_anywhere -g

Note:

- Only the controller role and the compute role can be reconfigured by the enable_remote_keystone tool. Reconfiguring the compute_mn role is not supported by this tool.

- If you have both a compute role CMA and a controller role CMA configured, then after reconfiguring the controller role CMA to use an external keystone server, the compute role CMA must be reconfigured to use the same external keystone server.

- The enable_remote_keystone tool does not support switching from a CMA internal keystone server to an external keystone server, and vice versa. Also, switching between different external keystone servers is not supported.

- If you reconfigure the CMA to use an external keystone server, the OpenStack horizon Dashboard service inside the CMA server will not be available. If you want to use the OpenStack Dashboard service to manage your cloud environment, you must set up and run these services on the external controller.

Configuring a non-CMA Compute Node

For a non-CMA compute node, the product which installs the compute node may configure the compute node configuration files, or you may need to do this yourself. This section discusses how to go about tailoring the files and verifying the configuration.

In this document, the term "non-CMA" compute node refers to a deployment where OpenStack services managing z/VM are running outside of a CMA. The CMA is a cloud manager appliance shipped by z/VM to run in a virtual machine which contains the OpenStack code; this is also called a CMA node. The z/VM OpenStack drivers can also run as a non-CMA compute node: in a user-created Linux machine, which could exist on z/VM in a virtual machine, or on another server such as Linux running on a PC or a blade server.

Configuring OpenStack Files on a non-CMA Compute Node

Each compute node manages a single z/VM system. There are three services running in the compute node that need to be configured: nova, neutron, and ceilometer. These services do not need to be running in the same server, but common practice is to do so. Each service needs to be configured to talk to the xCAT MN and to identify the ZHCP agent (or z/VM hypervisor) that it will manage. In addition, configuration properties specify resource choices to be used when creating virtual server instances and virtual networks.

When the xCAT machine logged on, it created an xCAT node that represents the xCAT MN, in addition to nodes that represent the ZHCP agent and the z/VM system.

To complete the configuration, you will need to have the following xCAT information (see Chapter 4, "SMAPI Configuration," on page 23 for more information on the properties specified in the DMSSICNF COPY file):
- IP address of the xCAT MN. This was specified with the XCAT_MN_Addr property in the DMSSICNF COPY file. You also use this IP address when using the xCAT GUI.
- Netmask for the xCAT management network. This was specified with the XCAT_MN_Mask property in the DMSSICNF COPY file.
- xCAT node name that represents the xCAT MN. This was specified with the XCAT_Host property in the DMSSICNF COPY file. When the xCAT machine logged on, it created an xCAT node that represents the xCAT MN.


- ZHCP node name that represents the ZHCP agent. This was specified with the ZHCP_Host property in the DMSSICNF COPY file. When the xCAT machine logged on, it created a ZHCP node that represents the ZHCP agent.

- z/VM system node name that represents the z/VM system. This was specified with the XCAT_zvmsysid property in the DMSSICNF COPY file. When the xCAT machine logged on, it created an xCAT node that represents the z/VM system.

- User and password that will be used in the xCAT GUI to contact the xCAT MN and also by the services using the REST API.

You will also need this information from your z/VM system administrator:
- The z/VM Directory Manager disk pool name. This is the Directory Manager's pool/group that has been set up for allocation of minidisks used when a virtual server is created by xCAT.

Once you have gathered all the necessary information, you should update the OpenStack configuration files. Consult the product which provided the OpenStack z/VM plugin for information on how to tailor the configuration properties. See Appendix F, "OpenStack Configuration Files," on page 155 for information on each of the properties within the various files.
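As an illustration only, a nova.conf fragment for a non-CMA compute node might set the properties below; every value shown is a hypothetical placeholder for the information gathered above, and Appendix F remains the authoritative reference:

[DEFAULT]
zvm_xcat_server = 1.2.3.4          # IP address of the xCAT MN (XCAT_MN_Addr)
zvm_xcat_username = mnadmin        # user authorized to the xCAT REST API
zvm_xcat_password = secret         # password for that user
zvm_xcat_master = xcat             # xCAT node name of the xCAT MN (XCAT_Host)
zvm_host = zvma                    # node name of the z/VM system (XCAT_zvmsysid)
zvm_diskpool = POOL1               # Directory Manager disk pool for minidisk allocation
zvm_diskpool_type = ECKD           # type of the disks in that pool: ECKD or FBA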

After you have configured the files, you will want to restart the OpenStack services. The easiest way to restart the OpenStack services is to restart the operating system in which the OpenStack services are running. If the system is running in a z/VM virtual machine, log that machine off and then back on.

Verify the OpenStack Configuration for a non-CMA Compute Node

After configuring the non-CMA compute node, the properties should be verified. This is described in Appendix A, "Installation Verification Programs," on page 139, and involves first running a script to perform simple validation, followed by steps which send key properties to the xCAT MN for further validation. After the IVP runs successfully, continue with the following sections.

Note: You will have to set OpenStack-related environment variables before you issue any OpenStack commands, unless the non-CMA product installation process does this for you. See its product documentation for the necessary commands to issue.

Nova

Verify that the nova services, especially nova-compute, can start successfully. Start the nova services and issue the nova service-list command. The services' status should be "enabled" and the state should be "up".

nova service-list

+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| nova-conductor   | openstack | internal | enabled | up    | 2013-09-02T02:02:29.000000 | None            |
| nova-scheduler   | openstack | internal | enabled | up    | 2013-09-02T02:02:31.000000 | None            |
| nova-consoleauth | openstack | internal | enabled | up    | 2013-09-02T02:02:24.000000 | None            |
| nova-cert        | openstack | internal | enabled | up    | 2013-09-02T02:02:22.000000 | None            |
| nova-compute     | openstack | nova     | enabled | up    | 2013-09-02T02:02:24.000000 | None            |
+------------------+-----------+----------+---------+-------+----------------------------+-----------------+

Neutron

Start the neutron services (neutron-server and neutron-zvm-agent) and issue the neutron net-list and neutron subnet-list commands to see the net and subnet you created.


neutron net-list

+--------------------------------------+-----------------+-----------------------------------------------------+
| id                                   | name            | subnets                                             |
+--------------------------------------+-----------------+-----------------------------------------------------+
| 1928c22c-8017-4a48-9d2d-9944fcc27845 | opnstk_datanet1 | 35265a86-59a7-48a4-bf4c-43d4ee7cf6cd 192.168.1.0/24 |
| 2928c2cc-8212-4a43-9d2d-9243fcc24355 | opnstk_datanet2 | 35265343-54a2-4434-fbcc-4353534f6433 192.168.2.0/24 |
| 2c482d06-77eb-483f-bde5-db9d132a112d | xcat_management | fe2b5c0c-4193-496b-bbdf-fc5d88cd8473 10.1.0.0/16    |
+--------------------------------------+-----------------+-----------------------------------------------------+

neutron subnet-list

+--------------------------------------+------+----------------+--------------------------------------------------+
| id                                   | name | cidr           | allocation_pools                                 |
+--------------------------------------+------+----------------+--------------------------------------------------+
| 35265a86-59a7-48a4-bf4c-43d4ee7cf6cd |      | 192.168.1.0/24 | {"start": "192.168.1.1", "end": "192.168.1.254"} |
| 35265343-54a2-4434-fbcc-4353534f6433 |      | 192.168.2.0/24 | {"start": "192.168.2.1", "end": "192.168.2.254"} |
| fe2b5c0c-4193-496b-bbdf-fc5d88cd8473 |      | 10.1.0.0/16    | {"start": "10.1.13.100", "end": "10.1.13.200"}   |
+--------------------------------------+------+----------------+--------------------------------------------------+


Cinder

Start the cinder services (cinder-api, cinder-volume, cinder-scheduler) and try to create a volume using the nova volume-create command. Then show the volume you just created with the nova volume-list command.

nova volume-create 1

+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| status              | creating                             |
| display_name        | None                                 |
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | False                                |
| created_at          | 2013-09-02T08:18:44.207684           |
| display_description | None                                 |
| volume_type         | None                                 |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| size                | 1                                    |
| id                  | 4d146af5-3502-4db7-9e3d-0d88a4147cb8 |
| metadata            | {}                                   |
+---------------------+--------------------------------------+

nova volume-list

+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 4d146af5-3502-4db7-9e3d-0d88a4147cb8 | available | None         | 1    | scsi        |             |
| e879fe83-641e-4cd8-8f70-27cea3cbd0c7 | available | hycva        | 1    | scsi        |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+

Ceilometer

Start the ceilometer services (ceilometer-api, ceilometer-collector, ceilometer-notification, ceilometer-polling) and the aodh services (aodh-api, aodh-evaluator, aodh-listener, aodh-notifier). Then show the samples using the ceilometer sample-list command.

ceilometer sample-list

+--------------------------------------+--------------------------------------+----------------+------------+-----------------+------+----------------------------+
| ID                                   | Resource ID                          | Name           | Type       | Volume          | Unit | Timestamp                  |
+--------------------------------------+--------------------------------------+----------------+------------+-----------------+------+----------------------------+
| bbcce6aa-bc91-11e6-b6b3-0200010000b8 | 012c0751-f964-4140-b80c-c766f001fbed | disk.root.size | gauge      | 4.0             | GB   | 2016-12-07T15:27:58.833761 |
| be10d104-bc8e-11e6-bd27-0200010000b8 | 012c0751-f964-4140-b80c-c766f001fbed | cpu            | cumulative | 11848647000.0   | ns   | 2016-12-07T15:06:34.151186 |
| be11a282-bc8e-11e6-be8e-0200010000b8 | 012c0751-f964-4140-b80c-c766f001fbed | cpu_util       | gauge      | 0.0237935572649 | %    | 2016-12-07T15:06:34.151186 |
| be12182a-bc8e-11e6-be8e-0200010000b8 | 012c0751-f964-4140-b80c-c766f001fbed | cpu.delta      | delta      | 285505000.0     | ns   | 2016-12-07T15:06:34.151186 |
| be0eeaa6-bc8e-11e6-bd27-0200010000b8 | b17811ad-06cf-4450-9f6c-21629086039d | image.size     | gauge      | 747541888.0     | B    | 2016-12-07T15:06:34.139827 |
+--------------------------------------+--------------------------------------+----------------+------------+-----------------+------+----------------------------+

Configuration of SSH for xCAT and Nova Compute Nodes

In order for OpenStack to be able to deploy systems or resize/move systems, SSH communication is needed between the xCAT MN and the compute node, and between compute nodes involved in a resize function. This section covers setting up communication between xCAT and the compute node, and setting up communication between two or more compute nodes.

SSH Key Between xCAT and Nova for a non-CMA compute Node

The xCAT MN's root user needs to be authorized by the user of the nova-compute service. This step is required by the image import/export function for a non-CMA compute node. It is not necessary for a CMA in the controller role or the compute role, because this is done automatically.


By default, the nova-compute service uses "nova" as its default user, so before you deploy an instance, you need to ensure that the xCAT root user's public key is added to the nova user's authorized_keys file on your nova-compute server. Refer to the following steps to configure it:
1. Log on to the nova-compute server and change the nova user's login shell so that the nova user is able to log in:

   ssh root@nova-compute-IP
   usermod -s /bin/bash nova

   where:

   nova-compute-IP
       is the IP address of the nova compute node.

2. Change to the nova user and inject the xCAT MN's public key into it:

   su - nova
   scp mnadmin@xCAT_MN_IP:/root/.ssh/id_rsa.pub $HOME
   mkdir -p $HOME/.ssh
   mv $HOME/id_rsa.pub $HOME/.ssh/authorized_keys

   where:

   mnadmin
       is the user defined for SSH access to the xCAT MN.

   xCAT_MN_IP
       is the IP address of the xCAT MN.

   Note: If the $HOME/.ssh/authorized_keys file already exists, you just need to append the xCAT MN's public key to it.

3. Ensure that the file mode under the $HOME/.ssh folder is 644:

   chmod -R 644 $HOME/.ssh/*

4. Issue the following command to determine if SELinux is enabled on the system:

   getenforce

5. If SELinux is enabled, then set the SELinux context on the nova home directory:

   su -
   chcon -R -t ssh_home_t nova_home

   where:

   nova_home
       is the home directory for the nova user on the compute node.

   Note: You can obtain nova_home by issuing:

   echo ~nova
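A quick way to confirm the key exchange worked is a passwordless login test from the xCAT MN; a sketch, reusing the nova-compute-IP placeholder from step 1:

# run as the root user on the xCAT MN (for example, via sudo); it should print the
# compute node's host name without prompting for a password
ssh -o BatchMode=yes nova@nova-compute-IP hostname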

Synchronizing SSH Key Between Nova Compute Nodes for Resize

In addition to configuring SSH between the xCAT MN and the nova-compute service's server, nova's resize function requires that the servers running the nova-compute service be authorized by each other. Otherwise, the temporary images generated during the resize process will not be transferred between the resize source and destination hosts, which will result in a resize failure. Use the following steps to configure it:
1. Identify the nova-compute hosts that you will use for the resize function. On the controller node, issue:

   nova host-list

2. Refer to "SSH Key Between xCAT and Nova for a non-CMA compute Node" on page 54 to put the public key of the nova-compute service's user into the other nova compute nodes. For example, if there are two hosts (A and B) running nova-compute services, both of them using the nova user to run the nova-compute service, then you need to ensure that if you log on as nova@hostA, you can directly SSH to nova@hostB without typing a password, and vice-versa.
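The pairwise authorization can be verified with a non-interactive login test; hostA and hostB are the hypothetical hosts from the example above:

# run as the nova user on hostA, then repeat in the opposite direction from hostB
ssh -o BatchMode=yes nova@hostB true && echo "hostA -> hostB OK"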

Network Configurations

This topic assumes that you are using no deprecated interfaces. If you are using deprecated interfaces, for example during migration from an earlier release, this topic will not apply to you until you are no longer using these interfaces. For more information, see "Deprecated Interfaces" on page xv.

Sample Configuration

Figure 22 on page 57 shows a typical configuration in a multi-system environment.


Use the following guide to understand this configuration:
- Networks:
  The network connections shown in Figure 22 are divided into groups, according to their roles:

Figure 22. Sample Configuration (diagram: ZVMa and ZVMb each run deployed instances and OPNCLOUD virtual machines in the controller and zhcp roles; a web browser or REST API client reaches the controller through the xCAT REST API; the systems are linked by xcatvsw1 (OPNCLOUD Internal Network), xcatvsw2 (Vswitch for Management Network), and datanet1 (Vswitch for Data/Compute Network), with OSA uplinks OSAa1/OSAa2/OSAa3 and OSAb1/OSAb2/OSAb3)


  – Connections between the OpenStack controller and the OPNCLOUD virtual machine in the controller role (also called the xCAT MN): OpenStack z/VM drivers use the xCAT REST API to issue commands on xCAT or instance servers. The OpenStack controller needs to be able to connect to the xCAT MN. In OpenStack z/VM driver configuration files, zvm_xcat_server is used to specify the xCAT MN's (REST API server) IP address.
  – Connections for instance servers to connect to the WAN or to other instance servers.
  – Connections between the xCAT MN and the OPNCLOUD virtual machine in the zhcp role: this connection allows the xCAT MN to issue SMAPI functions on ZHCP.

  From a system management perspective, each of the groups can have one or more dedicated networks to prevent unexpected network access:
  – The Data/Compute Network is for the instance server to the WAN or to other instance servers.
  – The OPNCLOUD Internal Network is for connections between the xCAT MN and the OPNCLOUD virtual machine in the zhcp role.

- Virtual Switches:
  Parameters for the virtual switches xcatvsw1 and xcatvsw2 are specified in the DMSSICNF COPY file. In both cases, if the virtual switch (vswitch) already exists when OPNCLOUD (xCAT) starts, xCAT ignores the parameters and uses the vswitches as they are currently defined. Conversely, if the vswitch does not exist, xCAT creates it using the parameters specified. When xCAT creates those virtual switches, by default it creates xcatvsw1 as a VLAN UNAWARE, Layer 3 vswitch, and it creates xcatvsw2 as a VLAN UNAWARE, Layer 2 vswitch. Figure 22 on page 57 uses an additional vswitch, datanet1, for communication between the OpenStack-deployed guests, to isolate their data/compute network traffic from xCAT's management network traffic. As with the vswitches above, you can either create the vswitch datanet1 in advance, in which case the z/VM neutron agent will re-use it without modification, or you can allow z/VM's neutron agent to create it. When the agent creates the vswitch, it will create datanet1 as a Layer 2 vswitch that is either VLAN UNAWARE or VLAN AWARE, depending on the contents of the ML2 configuration file.

  The OPNCLOUD virtual machine in the zhcp role and the xCAT MN are granted authority to couple to xcatvsw1. The xCAT MN and all new OpenStack instances are granted authority to couple to xcatvsw2. The uplink ports (OSAa1, OSAa2 or OSAb1, OSAb2) of the two vswitches are specified in the DMSSICNF COPY file.

  All of the vswitches listed in the ML2 configuration file's flat_networks and network_vlan_ranges properties are created by the neutron agent if they do not already exist, except for xcatvsw2 (which is created by xCAT). Each vswitch's access authorization is determined by the project's neutron network configuration. Each is defined as a Layer 2 vswitch; its uplink port (OSAa3 or OSAb3, in this example) is defined in the neutron agent configuration file (in the [datanet1] section, in this example, since the agent is only creating the vswitch datanet1).

- NICs:
  NICs in the OPNCLOUD virtual machine in the zhcp role: the IP address of enccw0.0.0600 is defined in the DMSSICNF COPY file. Refer to Chapter 3, "z/VM Configuration," on page 21. The NIC of the interface (enccw0.0.0600) is defined when OPNCLOUD logs on, and enccw0.0.0600 is initialized while ZHCP is starting up.

  NICs in the xCAT MN: the IP addresses of enccw0.0.0600 and enccw0.0.0700 are defined in the DMSSICNF COPY file. Refer to Chapter 3, "z/VM Configuration," on page 21. The NICs and interfaces (enccw0.0.0600 and enccw0.0.0700) are defined when the xCAT MN is logged on, and enccw0.0.0600 and enccw0.0.0700 are initialized while the xCAT MN is starting up.

  NICs in instances: all NICs and interfaces are defined by the OpenStack driver.

- OSA:
  ZVMa and ZVMb are in different LPARs: the connection between the OSA cards can be done by sharing OSA ports (OSAa1 and OSAb1 share the same OSA card port; OSAa2 and OSAb2 share another OSA card port).

  ZVMa and ZVMb are in different CECs: there must be a physical connection between the OSA ports (there is a physical connection between OSAa1 and OSAb1).


Network Scenarios

Single Flat Network

The following scenarios show a single flat network.

Using Public IP Addresses for Instances:

Figure 23 on page 60 shows a flat network that uses public IP addresses, which can be reached from outside the network.


To use this scenario, the following configuration options are needed:
- In the neutron ML2 plugin configuration file (default file name is /etc/neutron/plugins/ml2/ml2_conf.ini), make sure that xcatvsw2 is in the flat_networks option:

  flat_networks = xcatvsw2

Figure 23. Flat Network, Using Public IP Addresses (diagram: the OPNCLOUD controller role and zhcp role machines and the deployed instances on ZVMa and ZVMb connect through xcatvsw1 (OPNCLOUD Internal Network) and xcatvsw2 (Vswitch for Mixed Networks) with OSA uplinks OSAa1/OSAa2 and OSAb1/OSAb2; the instances use public addresses such as 1.2.3.n/16 and 1.2.4.n/16)


- In the neutron z/VM agent configuration file (default file name is /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini), the options xcat_mgt_ip and xcat_mgt_mask should be commented out. Also, make sure that the following options are present:

[AGENT]
zvm_xcat_username = admin
zvm_xcat_password = admin
zvm_xcat_server = 1.2.3.4
xcat_zhcp_nodename = zhcp

Notes:

- The xcat_mgt_ip and xcat_mgt_mask options are not defined, so the neutron z/VM agent will not create a new interface on the xCAT MN. The xCAT MN will use enccw0.0.0700 to connect to the instances.

- The neutron z/VM agent configuration shown above is for ZVMa. Update the xcat_zhcp_nodename option to configure for ZVMb.

After restarting the neutron server and neutron z/VM agent, follow these steps on the OpenStack controller to create the network and subnet. On a CMA system, issue the following commands:
1. Set the OpenStack-related environment variables before you issue any OpenStack commands:

   source $HOME/openrc

2. Create the single flat network:

   neutron net-create --shared singleflat --provider:network_type flat --provider:physical_network xcatvsw2

3. Create the appropriate subnet for the network:

   neutron subnet-create --allocation-pool start=1.2.3.5,end=1.2.4.254 --gateway 1.2.3.1 singleflat 1.2.0.0/16

Using Private IP Addresses for Instances:

Figure 24 on page 62 shows a flat network that uses private IP addresses, which can be reached only by the OPNCLOUD virtual machine.


To use this scenario, the following configuration options are needed:
- In the Neutron ML2 plugin configuration file (default file name is /etc/neutron/plugins/ml2/ml2_conf.ini), make sure that xcatvsw2 is in the flat_networks option:

  flat_networks = xcatvsw2

Figure 24. Flat Network, Using Private IP Addresses (diagram: the same topology as Figure 23, but the instances use private addresses such as 192.168.1.n/16 and 192.168.2.n/16 on xcatvsw2, the Vswitch for Mixed Networks)


- In the Neutron z/VM agent configuration file (default file name is /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini), the following options are needed:

[AGENT]
zvm_xcat_username = mnadmin
zvm_xcat_password = admin
zvm_xcat_server = 1.2.3.4
xcat_zhcp_nodename = zhcp

Notes:

- The neutron z/VM agent configuration shown above is for ZVMa. Update the xcat_zhcp_nodename option to configure for ZVMb.

After restarting the neutron server and neutron z/VM agent, follow these steps on the OpenStack controller to create the network and subnet. On a CMA system, issue the following commands:
1. Set OpenStack-related environment variables before you issue any OpenStack commands:

   source $HOME/openrc

2. Create the single flat network:

   neutron net-create --shared singleflat --provider:network_type flat --provider:physical_network xcatvsw2

3. Create the appropriate subnet for the network:

   neutron subnet-create --allocation-pool start=192.168.1.2,end=192.168.2.254 --gateway 192.168.1.1 singleflat 192.168.0.0/16

Note: The gateway 192.168.1.1 can be a physical gateway or a virtual gateway created by the neutron L3 agent or others, depending on the OpenStack configuration. Refer to the "Networking" chapter in the OpenStack Administrator's Guide for more information about OpenStack Layer 3 support.

Single VLAN Network

Two scenarios are discussed in this section: "Using the Default VLAN ID for Instances" and "Using the User-Specified VLAN ID for Instances" on page 64.

Both scenarios use a single VLAN network as the OpenStack compute/data network.

Using the Default VLAN ID for Instances

This scenario uses a VLAN aware vswitch and its default (defined in z/VM) VLAN ID. You do not need to configure any additional information in OpenStack related to the VLAN in the network. To OpenStack, the infrastructure appears the same as in the flat network scenario. You can configure your vswitch name in flat_networks or network_vlan_ranges. You do not need to configure the VLAN ID for your network in OpenStack, as it appears to OpenStack as a "FLAT" network.

To use this scenario, the following configuration options are needed:
- Specify one of the following options in the neutron ML2 plugin configuration file (default file name is /etc/neutron/plugins/ml2/ml2_conf.ini):

  flat_networks = xcatvsw3

  network_vlan_ranges = xcatvsw3:1:4094

  Note: "xcatvsw3" is an example of a value used by the xCAT Management Network, so you should create this as a VLAN aware Layer 2 vswitch with a default VLAN ID defined on z/VM.

- In the neutron z/VM agent configuration file (default file name is /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini), the following options are needed:


[AGENT]
zvm_xcat_username = mnadmin
zvm_xcat_password = admin
zvm_xcat_server = 1.2.3.4
xcat_zhcp_nodename = zhcp

Note: The neutron z/VM agent configuration shown above is for ZVMa. Update the xcat_zhcp_nodename option in the neutron z/VM agent configuration file for ZVMb (default file name is /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini) to configure the neutron z/VM agent for ZVMb.

Using the User-Specified VLAN ID for Instances

This scenario uses a VLAN aware vswitch with the VLAN ID specified in OpenStack instead of using the defined default VLAN ID specified for the vswitch in z/VM. The difference between this scenario and "Using the Default VLAN ID for Instances" on page 63 is that here the OpenStack user must specify a VLAN ID for the instances to be deployed.

To use this scenario, the following configuration options are needed:
- In the neutron ML2 plugin configuration file (default file name is /etc/neutron/plugins/ml2/ml2_conf.ini), make sure that the network_vlan_ranges property is specified as follows:

  network_vlan_ranges = xcatvsw3:1:4094

  Note: xcatvsw3 is used by the xCAT Management Network. It should be a VLAN aware Layer 2 vswitch on z/VM.

- In the neutron z/VM agent configuration file (default file name is /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini), the following options are needed:

[AGENT]
zvm_xcat_username = mnadmin
zvm_xcat_password = admin
zvm_xcat_server = 1.2.3.4
xcat_zhcp_nodename = zhcp

Note: The neutron z/VM agent configuration shown above is for ZVMa. Update the xcat_zhcp_nodename option in the neutron z/VM agent configuration file for ZVMb (default file name is /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini) to configure the neutron z/VM agent for ZVMb.

After restarting the neutron server and neutron z/VM agent, follow these steps on the OpenStack controller to create the network and subnet for each of the physical networks. On a CMA system, issue the following commands:
1. Set OpenStack-related environment variables before you issue any OpenStack commands:

   source $HOME/openrc

2. Create the xCAT management network. Enter the following command:

   neutron net-create --shared xcat_mgt --provider:network_type vlan --provider:physical_network xcatvsw3 --provider:segmentation_id 521

   Note: The segmentation_id is the VLAN ID. It should be in the range of network_vlan_ranges in /etc/neutron/plugins/ml2/ml2_conf.ini.

3. Create the appropriate subnet for the xCAT management network, changing the IP range to the appropriate values according to the xCAT configuration:

   neutron subnet-create --allocation-pool start=1.2.3.5,end=1.2.4.254 --gateway 1.2.3.1 xcat_mgt 1.2.0.0/16

When new instances are spawned, neutron-zvm-agent will set the VLAN ID (521) for each instance. The xCAT MN can reach and manage the new instances through the management network.


Flat and VLAN Mixed Network

Figure 22 on page 57 shows a sample configuration of a flat and VLAN mixed network. To use this scenario, the following configuration options are needed:
- In the neutron ML2 plugin configuration file (default file name is /etc/neutron/plugins/ml2/ml2_conf.ini), make sure that the flat_networks and network_vlan_ranges property lines read as follows:

  flat_networks = xcatvsw2
  network_vlan_ranges = datanet1:1:4094

  Physical network names will be used as z/VM vswitch names, as follows:

  xcatvsw2
      is used by the xCAT Management Network. By default, there is a VLAN UNAWARE Layer 2 vswitch on z/VM with the name xcatvsw2. It is created and configured for the xCAT management network, so you can use xcatvsw2 as in this example.

  datanet1
      is used by the OpenStack Data/Compute network. The neutron z/VM agent will create a VLAN AWARE vswitch with the name datanet1. The range of possible VLAN IDs is from 1 to 4094 (i.e. the complete VLAN ID range).

- In the neutron z/VM agent configuration file (default file name is /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini), the following options are needed:

[AGENT]
zvm_xcat_username = mnadmin
zvm_xcat_password = admin
zvm_xcat_server = 1.2.3.4
xcat_zhcp_nodename = zhcp
[datanet1]
# OSAa3 uses RDEV A3
rdev_list=a3

Note: The neutron z/VM agent configuration shown above is for ZVMa. Update the xcat_zhcp_nodename option to configure for ZVMb.

After restarting the neutron server and neutron z/VM agent, follow these steps on the OpenStack controller to create the network and subnet for each of the physical networks. On a CMA system, issue the following commands:
1. Set OpenStack-related environment variables before you issue any OpenStack commands:

   source $HOME/openrc

2. Create the xCAT management network:

   neutron net-create --shared xcat_management --provider:network_type flat --provider:physical_network xcatvsw2

3. Create the appropriate subnet for the xCAT management network, changing the IP range to the appropriate values according to the xCAT configuration:

   neutron subnet-create --allocation-pool start=10.1.0.2,end=10.1.11.254 xcat_management 10.1.0.0/16

4. Create the Data/Compute network for physical network datanet1:

   neutron net-create opnstk_datanet1 --provider:network_type vlan --provider:physical_network datanet1

5. Create the appropriate subnet for the Data/Compute network opnstk_datanet1:

   neutron subnet-create opnstk_datanet1 192.168.1.0/24

Note: The xCAT Management Network ID should always be passed in the first --nic network_ID parameter when creating a new instance with the nova boot command. This restriction ensures that the xCAT MN can reach and manage the new instances through the management network.
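For example, a nova boot invocation that honors this ordering might look like the sketch below; the image name, flavor, instance name, and network IDs are placeholders:

nova boot --image myimage --flavor m1.small --nic net-id=<xcat_management_ID> --nic net-id=<opnstk_datanet1_ID> myinstance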


Optionally Creating More Than One Data/Compute Network

In the current Neutron z/VM agent implementation, physical network names are used as vswitch names. There is no limitation on the number or the order of physical networks, so in the Neutron ML2 plugin configuration file (/etc/neutron/plugins/ml2/ml2_conf.ini), you could have:

flat_networks = xcatvsw2,datanet2
network_vlan_ranges = datanet1:1:4094,datanet3:1:4094

And in the Neutron z/VM agent configuration file (/etc/neutron/plugins/zvm/neutron_zvm_plugin.ini), you could have:

[AGENT]
zvm_xcat_username = mnadmin
zvm_xcat_password = admin
zvm_xcat_server = 1.2.3.4
xcat_zhcp_nodename = zhcp
[datanet1]
# OSAa3 uses RDEV A3
rdev_list=a3
[datanet3]
# OSAa4 uses RDEV A4
rdev_list=a4
[datanet2]
# OSAa5 uses RDEV A5
rdev_list=a5

In this case, xcatvsw2 will be used by the xCAT Management Network, and datanet1-datanet3 will be used by the Compute/Data Network. The Neutron z/VM agent will create vswitches named datanet1, datanet2, and datanet3. datanet2 will be a VLAN UNAWARE vswitch, while datanet1 and datanet3 will be VLAN AWARE.

Note: Each of the vswitches needs at least one OSA defined. The OSA card needs to be connected to the trunk port if VLAN is enabled. The related rdev_list should be updated to list one of the OSAs.

With datanet2 and datanet3, more Data/Compute networks can be defined, as follows.

Note: On a CMA system, issue the following commands:
1. Set OpenStack-related environment variables before you issue any OpenStack commands:

   source $HOME/openrc

2. Create the Data/Compute network for physical network datanet2:

   neutron net-create opnstk_datanet2 --provider:network_type flat --provider:physical_network datanet2

3. Create the appropriate subnet for the Data/Compute network opnstk_datanet2:

   neutron subnet-create opnstk_datanet2 192.168.2.0/24

4. Create the Data/Compute network for physical network datanet3:

   neutron net-create opnstk_datanet3 --provider:network_type vlan --provider:physical_network datanet3

5. Create the appropriate subnet for the Data/Compute network opnstk_datanet3:

   neutron subnet-create opnstk_datanet3 192.168.3.0/24

In this example, all Data/Compute networks have gateways defined. If an instance wants to connect to more than one of the Data/Compute networks, only one gateway is supported. Because opnstk_datanet3 is created later, the gateway of opnstk_datanet3 (192.168.3.1) will be set as the gateway in the instance. To make the gateway in opnstk_datanet1 the gateway of the instance, add the --no-gateway parameter when creating opnstk_datanet2 and opnstk_datanet3, as follows:

neutron subnet-create --no-gateway opnstk_datanet2 192.168.2.0/24
neutron subnet-create --no-gateway opnstk_datanet3 192.168.3.0/24


If you created a network like opnstk_datanet3 in the past without specifying --shared, and you want that network to be usable when non-administrator ICM self-service users in any project create virtual server instances, use the following command to make it a shared network:

neutron net-update --shared=true network_ID

where network_ID is the ID returned by the previous net-create command for your network name (for example, opnstk_datanet3), or in the output from neutron net-list. A network_ID has a value like f4812075-9aaa-481b-8d68-dabad8bbfe98.

If you want the network to be usable when non-administrator ICM self-service users in a particular project (but not all projects) create virtual server instances, see the OpenStack neutron command line interface documentation.

Network in which the CMA has the compute_mn Role

Figure 25 on page 68 shows a networking scenario in which the CMA has the compute_mn role, managed by a cross-platform OpenStack controller. The xCAT REST API interfaces are used for the OpenStack controller node to communicate with the compute services running on this CMA. Different management components such as glance, cinder, neutron, keystone, etc. can be located on different hosts. The OPNCLOUD internal network and an OPNCLOUD mixed network communicate with the OPNCLOUD in the zhcp role and other instances.


Figure 25. Network in which the CMA has the compute_mn Role (diagram: a cross-platform OpenStack controller and a web browser or REST API client use the xCAT REST API to reach the compute service in the OPNCLOUD compute_mn role machine; instances on ZVMa and ZVMb and the OPNCLOUD zhcp role machine connect through xcatvsw1 (OPNCLOUD Internal Network) and xcatvsw2 (Vswitch for Mixed Networks) with OSA uplinks OSAa1/OSAa2 and OSAb1/OSAb2)


Chapter 6. Image and cloud-init Configuration

This section discusses setting up the Linux on System z that is the target of the initial image capture, along with the process to define the system to xCAT. In addition, this section will discuss capturing the system in xCAT, and then uploading and importing the image into OpenStack.

Image Requirements

These are the requirements for an image to be captured and deployed by z/VM OpenStack support:
- A supported Linux distribution (for deploy). The following are supported:
  – RHEL 6.2, 6.3, 6.4, 6.5, 6.6, and 6.7
  – RHEL 7.0, 7.1, and 7.2
  – SLES 11.2, 11.3, and 11.4
  – SLES 12 and SLES 12.1
  – Ubuntu 16.04
- A supported root disk type for snapshot/spawn. The following are supported:
  – FBA
  – ECKD
- Images created with the previous version of the OpenStack support should be recaptured with an updated xcatconf4z installed, version 2.0 or later. See "Make a Deployable z/VM Image" on page 70 for more information on creating the image.
- An image deployed on a compute node must match the disk type supported by that compute node, as configured by the zvm_diskpool_type property in the nova.conf configuration file. A compute node supports deployment on either an ECKD or FBA image, but not both at the same time. If you wish to switch image types, you need to change the zvm_diskpool_type and zvm_diskpool properties in the nova.conf file accordingly, then restart the nova-compute service to make the changes take effect (see the configuration sketch after this list).
- If you deploy an instance with an ephemeral disk, both the root disk and the ephemeral disk will be created with the disk type that was specified by the zvm_diskpool_type property in the nova.conf file. That property can specify either ECKD or FBA.
- When resizing, remember that you can only resize an instance to the same type of disk. For example, if an instance is built on an FBA type disk, you can resize it to a larger FBA disk, but not to an ECKD disk.
- For glance image-create, it is strongly suggested that you capture an instance with a root disk size no greater than 5GB. If you really want to capture a larger root device, you will need to log on to the xCAT MN and modify the timeout value for the httpd service to make image-create work as expected. Refer to "Increasing the httpd Timeout in the xCAT MN" on page 181 for information on increasing the timeout.
- For nova boot, it is recommended that you deploy an instance with a root disk size no greater than 5GB. If you really want to deploy a larger root device, you will need to log on to the xCAT MN and modify the timeout value for the httpd service to make boot work as expected.
- For the nova resize operation, we suggest that you resize an instance with a root disk size no greater than 5GB.
- The network interfaces must be IPv4 interfaces.
- Image names should be restricted to the UTF-8 subset, which corresponds to the ASCII character set. In addition, special characters such as /, \, $, %, @ should not be used.
- For the FBA disk type "vm", capture and deploy is supported only for an FBA disk with a single partition. Capture and deploy is not supported for the FBA disk type "vm" on a CMS formatted FBA disk.
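A minimal sketch of the disk type switch mentioned above; the pool name is a hypothetical Directory Manager pool:

# in /etc/nova/nova.conf on the compute node:
#   zvm_diskpool_type = ECKD      # switch between ECKD and FBA
#   zvm_diskpool = ECKDPOOL1      # a pool containing disks of the matching type
# then restart the nova-compute service so the change takes effect:
sudo systemctl restart openstack-nova-compute.service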


The virtual server/Linux instance used as the source of the new image should meet the following criteria:
- The root filesystem must not be on a logical volume.
- The minidisk on which the root filesystem resides should:
  – be a minidisk of the same type as desired for a subsequent deploy (for example, an ECKD disk image should be captured for a subsequent deploy to an ECKD disk),
  – not be a full-pack minidisk, since cylinder 0 on full-pack minidisks is reserved, and
  – be defined with virtual address 0100.
- The root disk should have a single partition.
- The image being captured should support SSH access using keys instead of specifying a password. The subsequent steps to capture the image will perform a key exchange to allow xCAT to access the server.
- The image being captured should not have any network interface cards (NICs) defined below virtual address 1100.

In addition to the specified criteria, the following recommendations allow for efficient use of the image:v The minidisk on which the root filesystem resides should be defined as a multiple of full gigabytes in

size (for example, 1GB or 2GB). OpenStack specifies disk sizes in full gigabyte values, whereas z/VMhandles disk sizes in other ways (cylinders for ECKD disks, blocks for FBA disks, and so on). See theappropriate online information if you need to convert cylinders or blocks to gigabytes; for example:http://www.mvsforums.com/helpboards/viewtopic.php?t=8316.

v During subsequent deploys of the image, the OpenStack code will ensure that a disk image is notcopied to a disk smaller than the source disk, as this would result in loss of data. The disk specified inthe flavor should therefore be equal to or slightly larger than the source virtual machine's root disk.IBM recommends specifying the disk size as 0 in the flavor, which will cause the virtual machine to becreated with the same disk size as the source disk.
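Here is a rough conversion sketch, assuming a 3390 ECKD minidisk formatted with 4 KB disk blocks (12 blocks per track, 15 tracks per cylinder, which gives 737,280 usable bytes per cylinder). The 4370-cylinder disk shown in the capture example later in this chapter works out to about 3 GB:

# Convert 3390 ECKD cylinders to gigabytes, assuming 4 KB disk blocks:
# 15 tracks/cylinder x 12 blocks/track x 4096 bytes/block = 737280 bytes/cylinder
cylinders=4370
echo "scale=2; $cylinders * 737280 / (1024 * 1024 * 1024)" | bc
# prints 3.00, so a 4370-cylinder minidisk is roughly a 3 GB root disk

For FBA disks, or for 3390 disks formatted with a different block size, the bytes-per-cylinder figure above does not apply; consult the reference linked in the preceding list.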

Make a Deployable z/VM Image

If you already have an image file created by xCAT (example: /root/0100.img), go to Steps 3 and 4 of "Upload the Image from the Nova Compute Server to Glance" on page 86 to upload it directly to Glance. Otherwise, create that file using the steps described below.

Install Linux on z Systems in a Virtual Machine

1. Prepare a Linux on z Systems virtual server in the z/VM system that is managed by xCAT. (For more information, refer to the IBM Redbook The Virtualization Cookbook for z/VM 6.3, RHEL 6.4 and SLES 11 SP3. You will have to make adjustments to the procedures that are documented in this Redbook in order to keep the resulting virtual server within the bounds of the image requirements. See "Image Requirements" on page 69.)
   v For RHEL 7 installation, see http://www.redbooks.ibm.com/abstracts/sg248303.html?Open.
   v For SLES 12 installation, see http://www.redbooks.ibm.com/abstracts/sg248890.html?Open.
   v For Ubuntu 16.04 installation, see http://www.redbooks.ibm.com/redbooks/pdfs/sg248354.pdf.
   Note that the ext file system is supported for RHEL 6, SLES 11, and Ubuntu 16.04; both the ext and xfs file systems are supported for RHEL 7 and SLES 12.
2. Install the mkisofs and openssl modules on it.
3. Make sure SELinux is disabled and the SSH connection (default port number is 22) can pass the firewall.

Notes:

v SELinux must be disabled the entire time you are running the OpenStack z/VM driver.

v By default, RHEL installation will enable the SSH connection and enable SELinux. SLES will disable SELinux. Refer to the Red Hat instructions for Enabling and Disabling SELinux on RHEL. For general Linux information, refer to the Red Hat Enterprise Linux documentation and the SUSE Linux Enterprise Server documentation.

4. Set UseDNS no in the /etc/ssh/sshd_config file in order to improve the inventory collection efficiency.
5. For Ubuntu 16.04, you must enable root ssh access. By default, root ssh access is not enabled.
A sample edit for these two settings is shown after this list.
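As an illustration of steps 4 and 5, the sshd_config changes can be made as follows. This is a minimal sketch assuming the stock OpenSSH configuration file location and that the directives already appear (possibly commented out) in the file; review the result before restarting sshd:

# Step 4: disable reverse DNS lookups during SSH login
sed -i 's/^#\?UseDNS.*/UseDNS no/' /etc/ssh/sshd_config
# Step 5 (Ubuntu 16.04 only): allow root to log in over SSH
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Restart SSH so the changes take effect; the service name is "ssh" on
# Ubuntu and "sshd" on RHEL/SLES, and older releases use "service" instead
# of "systemctl"
systemctl restart ssh || service sshd restart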

To do useful work with the user data, the virtual machine image must be configured to run a service that retrieves the user data passed from the z/VM driver and then takes some action based on the contents of that data. This service is also known as an activation engine (AE). Customers can choose their own underlying AE, such as cloud-init, scp-cloud-init, and so on, according to their requirements. In this document, we use cloud-init as an example when showing how to configure an image. These steps are described in subsequent sections.

Define the Source System as an xCAT Node

Before a virtual server's disk image is created, the virtual server must be defined to xCAT as an xCAT node. Subsequent xCAT commands use the node name to identify the server being used. The xCAT node contains additional properties that define the virtual machine running on the z/VM system (z/VM user ID and the hostname of the ZHCP agent that supports managing the node), the OS (OS level), along with access information (IP address and DNS hostname).

Note: You can use the same virtual machine for multiple captures by updating the Linux OS in the virtual machine with a different version and recapturing the image. After changing the OS, you would want to update the OS-related properties discussed later in this section using the xCAT chtab command, as sketched below.
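For example, if the virtual machine were reinstalled with RHEL 7.2, an update along the following lines (a sketch built from the chtab invocation shown in Step 3 below; demonode is the example node name used throughout this section) would refresh the OS property before recapturing:

/opt/xcat/sbin/chtab node=demonode nodetype.os=rhel7.2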

Perform the following steps to create the xCAT node for the target system:

1. Bring up the xCAT GUI, authenticate into xCAT, and go to the Script panel for the xCAT MN node (xcat), as described in "Using the Script Panel in the xCAT User Interface" on page 179. In the Script box enter:

   /opt/xcat/bin/mkdef -t node -o demonode userid=ftest03a hcp=zhcp.ibm.com mgt=zvm groups=all

   where:

   demonode
        is the node name you want to create for the z/VM user you created in Step 1 on page 70. The node name is the short DNS hostname for the system. The full DNS host name is specified in a subsequent step.
   ftest03a
        is the user ID name you created in Step 1 on page 70.
   zhcp.ibm.com
        is the ZHCP server's host name.

   Then press the Run button.
2. After the script completes, reload the nodes page in the browser or click on Node/Nodes again.
3. Update the node's properties by invoking the chtab command using the xCAT GUI Script panel. In the Script box, enter the following command. Note that this is a single-line command that should be issued without inserting carriage returns. If you see "command not found" errors, you have a carriage return in your command. (You may need to copy/paste it into a separate text editor to see the stray carriage return.)

/opt/xcat/sbin/chtab node=demonode hosts.ip="10.1.20.1" hosts.hostnames="demonode.endicott.ibm.com" noderes.netboot=zvm nodetype.os=rhel6.2 nodetype.arch=s390x nodetype.profile=demoprofile nodetype.provmethod=netboot

where:

demonode
     is the node name you specified in Step 1 on page 71.
10.1.20.1
     is the IP of the server you prepared in Step 1 on page 70.
demonode.endicott.ibm.com
     is the hostname of the server you prepared in Step 1 on page 70.
rhel6.2
     is the Linux on z Systems distribution name and version. The value should show the distribution name in lower case (rhel, sles, or ubuntu), followed by the version. The value should not contain blanks. For example: rhel6, rhel6.4, sles11, sles11.2, sles11sp2, ubuntu16.04.
demoprofile
     is the profile name you choose for the node.

4. Make a host by invoking the makehosts command using the xCAT GUI Script panel. In the Script box, enter:

   /opt/xcat/sbin/makehosts

   Note: When a script command runs successfully you will see a "0" in the output panel. No other success message is displayed.
5. Log on the virtual machine of the Linux system that will be captured, if it is not already logged on. You can log on the virtual server using the xCAT GUI by navigating to the Nodes - Nodes panel. Select the virtual server from the list of virtual servers and then, from the Actions pulldown, select "Power On".
6. Unlock the node to allow the xCAT MN to communicate with it. Do the following steps:
   a. Navigate the GUI to the Nodes->groups->all->Nodes panel. The Groups selection should be "all" in the Groups frame on the left hand side of the panel.
   b. Select the node to be unlocked by placing a check in the box before the node name.
   c. Select the Configuration pulldown and the Unlock action, as shown in Figure 26 on page 73.

   d. Specify the root password for the system that is being unlocked in the password input field and click on the Unlock button, as shown in Figure 27 on page 74.

Figure 26. Specifying the Unlock Action


Note: If you encounter problems unlocking the node, see “Exchanging SSH Key Issues” on page 191.

Configuration of xcatconf4z

xCAT supports initiating changes to a Linux on z Systems virtual machine while Linux is shut down or the virtual machine is logged off. The changes to Linux are implemented using an activation engine (AE) that is run when Linux is booted the next time. The xCAT activation engine, xcatconf4z, handles changes initiated by xCAT. The script/service must be installed in the Linux on z Systems virtual server so it can process change request files transmitted by the xCAT ZHCP service to the reader of the virtual machine as a class X file. The script is xcatconf4z and is located at /opt/xcat/share/xcat/scripts in the xCAT MN machine.

The xcatconf4z script should be installed in a machine that can be managed while it is logged off. This includes a Linux on z Systems virtual server that will be captured for netboot or sysclone deploys.

The Newton version of the OpenStack support requires that xcatconf4z be at version 3.0 or later to enable Newton functionality. (This is discussed in more detail in "Configuration of xcatconf4z on RHEL 6.x and SLES 11.x" on page 75, "Configuration of xcatconf4z on RHEL 7.x and SLES 12.x" on page 75, and "Configuration of xcatconf4z on Ubuntu 16.04" on page 76.) Also, it is recommended that you always use the latest version of xcatconf4z in your images.

Note: An additional activation engine, cloud-init, should be installed to handle OpenStack-related tailoring of the system. The cloud-init AE relies on tailoring performed by the xCAT AE, xcatconf4z. See "Installation and Configuration of cloud-init" on page 77 for more information.

Figure 27. Specifying the Root Password

Configuration of xcatconf4z on RHEL 6.x and SLES 11.x

Perform the following steps:

1. For images that are being captured for deployment by the OpenStack Newton or later releases, ensure that the xcatconf4z you are using is version 3.0 or later. You can verify that xcatconf4z is at the required version by doing the following: bring up the xCAT GUI, authenticate into xCAT, go to the Script panel for the xCAT MN node (xcat), and issue the xcatconf4z command to show the version of the file. See "Using the Script Panel in the xCAT User Interface" on page 179 for more information.

   /opt/xcat/share/xcat/scripts/xcatconf4z version

2. Obtain the xcatconf4z script from the xCAT MN. Bring up the xCAT GUI, authenticate into xCAT, go to the Script panel for the xCAT MN node (xcat), and issue the scp command to move the xcatconf4z file to the target system. See "Using the Script Panel in the xCAT User Interface" on page 179 for more information.

   /usr/bin/scp /opt/xcat/share/xcat/scripts/xcatconf4z demonode:/opt/xcatconf4z

   where:

   demonode
        is the node name of the system that is being set up for image capture.
   /opt
        is the target location to receive the xcatconf4z file.

3. Change the script to specify the authorizedSenders. It is recommended that this be set to a list of user IDs which are allowed to transmit changes to the machine. At a minimum, this list should include the value of the XCAT_User property in the DMSSICNF COPY file, which is usually OPNCLOUD. (It can be set to '*', which indicates any virtual machine may send configuration requests to it, but this is not recommended.) A sample edit is shown after this list.
4. xcatconf4z is configured to run with run levels 2, 3, and 5. It is not configured to run as part of custom run level 4. If that run level is going to be used, then the # Default-Start: line at the beginning of the file should be updated to specify run level 4 in addition to the current run levels.
5. Install the xcatconf4z file in the target Linux machine:
   a. Copy the xcatconf4z file to /etc/init.d and make it executable.
   b. Add xcatconf4z as a service by issuing:
      chkconfig --add xcatconf4z
   c. Activate the script by issuing:
      chkconfig xcatconf4z on
      If you wish to run with custom run level 4, then add 4 to the list of levels:
      chkconfig --level 2345 xcatconf4z on
6. Verify that you installed the correct version of xcatconf4z on the target machine. Do this by issuing the following service command:

   service xcatconf4z version
   xcatconf4z version: 3.0

7. Verify that xcatconf4z on the target machine is configured to handle configuration requests from ZHCP servers. Also, verify that the user ID of the machine which is running ZHCP is correctly specified. Do this by issuing the following service command:

   service xcatconf4z status
   xcatconf4z is enabled to accept configuration reader files from: ZHCP

   If xcatconf4z is not enabled to accept configuration reader files, verify that you followed Step 3.
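To illustrate Step 3, the edit is typically a one-line change near the top of the copied script. The exact variable layout can differ between xcatconf4z versions, so treat this as a sketch rather than the definitive format:

# In the copied xcatconf4z script, list the z/VM user IDs allowed to send
# configuration reader files; OPNCLOUD matches the usual XCAT_User value
# from the DMSSICNF COPY file.
authorizedSenders='OPNCLOUD'
# Not recommended: authorizedSenders='*' accepts requests from any virtual machine.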

Configuration of xcatconf4z on RHEL 7.x and SLES 12.x

Perform the following steps:

1. For images that are being used with the OpenStack Newton and later releases, ensure that the xcatconf4z you are using is version 3.0 or later. You can verify that xcatconf4z is at the required version by doing the following: bring up the xCAT GUI, authenticate into xCAT, go to the Script panel for the xCAT MN node (xcat), and issue the xcatconf4z command to show the version of the file.

   /opt/xcat/share/xcat/scripts/xcatconf4z version

   See "Using the Script Panel in the xCAT User Interface" on page 179 for more information.
2. Obtain the xcatconf4z script from the xCAT MN. Bring up the xCAT GUI, authenticate into xCAT, go to the Script panel for the xCAT MN node (xcat), and issue the scp command to move the xcatconf4z.service file and the xcatconf4z file to the target system. See "Using the Script Panel in the xCAT User Interface" on page 179 for more information.

   /usr/bin/scp /opt/xcat/share/xcat/scripts/xcatconf4z /opt/xcat/share/xcat/scripts/xcatconf4z.service demonode:/opt/

   where:

   demonode
        is the node name of the system that is being set up for image capture.
   /opt
        is the target location to receive the xcatconf4z file.

3. Change the script to specify the authorizedSenders. It is recommended that this be set to a list of user IDs which are allowed to transmit changes to the machine. At a minimum, this list should include the value of the XCAT_User property in the DMSSICNF COPY file, which is usually OPNCLOUD. (It can be set to '*', which indicates any virtual machine may send configuration requests to it, but this is not recommended.)
4. Install the xcatconf4z service in the target Linux machine:
   a. v If the target Linux machine is RHEL 7.x, copy the xcatconf4z.service file to:
        /lib/systemd/system
      v If the target Linux machine is SLES 12.x, copy the xcatconf4z.service file to:
        /usr/lib/systemd/system
      Also, if the target machine is SLES 12.x, it is recommended that you change NetworkManager.service to wicked.service in the xcatconf4z.service file.
   b. Copy xcatconf4z to the /usr/bin/ folder and make it executable.
   c. Enable the xcatconf4z service by issuing:
      systemctl enable xcatconf4z.service
   d. Start the xcatconf4z service by issuing:
      systemctl start xcatconf4z.service

Configuration of xcatconf4z on Ubuntu 16.04

Perform the following steps:

1. For images that are being used with the OpenStack Newton and later releases, ensure that the xcatconf4z you are using is version 3.0 or later. You can verify that xcatconf4z is at the required version by doing the following: bring up the xCAT GUI, authenticate into xCAT, go to the Script panel for the xCAT MN node (OPNCLOUD), and issue the xcatconf4z command to show the version of the file.

   /opt/xcat/share/xcat/scripts/xcatconf4z version

   See "Using the Script Panel in the xCAT User Interface" on page 179 for more information. Note that before running a script against a node on the xCAT GUI, the node must be unlocked.
2. Obtain the xcatconf4z script from the xCAT MN. Bring up the xCAT GUI, authenticate into xCAT, and go to the Script panel for the xCAT MN node (xcat) and issue the scp command to move the

xcatconf4z.service file and the xcatconf4z file to the target system. See "Using the Script Panel in the xCAT User Interface" on page 179 for more information.

   /usr/bin/scp /opt/xcat/share/xcat/scripts/xcatconf4z /opt/xcat/share/xcat/scripts/xcatconf4z.service demonode:/opt/

   where:

   demonode
        is the node name of the system that is being set up for image capture.
   /opt
        is the target location to receive the xcatconf4z file.

3. Change the script to specify the authorizedSenders. It is recommended that this be set to a list of user IDs which are allowed to transmit changes to the machine. At a minimum, this list should include the value of the XCAT_User property in the DMSSICNF COPY file, which is usually OPNCLOUD. (It can be set to '*', which indicates any virtual machine may send configuration requests to it, but this is not recommended.)
4. Install the xcatconf4z service in the target Ubuntu machine:
   a. Tailor the xcatconf4z.service file for an Ubuntu 16.04 image by modifying the file contents as follows:

      [Unit]
      Description=Activation engine for configuring z/VM when it starts
      Wants=local-fs.target
      After=local-fs.target
      Before=cloud-init-local.service network-pre.target

      [Service]
      Type=oneshot
      ExecStart=/usr/bin/xcatconf4z start
      StandardOutput=journal+console

      [Install]
      WantedBy=multi-user.target

   b. Copy the xcatconf4z.service file to /lib/systemd/system.
   c. Copy xcatconf4z to the /usr/bin/ folder and make it executable.
   d. Enable the xcatconf4z service by issuing:
      systemctl enable xcatconf4z.service
   e. Start the xcatconf4z service by issuing:
      systemctl start xcatconf4z.service

Installation and Configuration of cloud-init

An activation engine is an enablement framework used for boot-time customization of virtual images. OpenStack uses cloud-init as its activation engine. Some distributions include cloud-init either already installed or available to be installed. If your distribution does not include cloud-init, you can download the code from https://launchpad.net/cloud-init/+download. After installation, if you issue the following shell command and no errors occur, cloud-init is installed correctly.

cloud-init init --local

Installation and configuration of cloud-init differs among Linux distributions, and the cloud-init source code may change. This section provides general information, but you may have to tailor cloud-init to meet the needs of your Linux distribution. You can find a community-maintained list of dependencies at http://ibm.biz/cloudinitLoZ.

The z/VM OpenStack support has been tested with cloud-init 0.7.4 and 0.7.5 for RHEL 6.x and SLES 11.x, 0.7.6 for RHEL 7.x and SLES 12.x, and 0.7.8 for Ubuntu 16.04. If you are using a different version of cloud-init, you should change your specification of the indicated commands accordingly.

During cloud-init installation, some dependency packages may be required. You can use zypper and python setuptools to easily resolve these dependencies, as sketched below. See https://pypi.python.org/pypi/setuptools for more information.
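For instance, the setuptools dependency can usually be satisfied from the distribution's package manager. The package names below are typical but not guaranteed for every release, so verify them against your repositories:

# SLES (zypper-based systems)
zypper install python-setuptools
# RHEL (yum-based systems)
yum install python-setuptools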

Installation and Configuration of cloud-init on RHEL 6.x

1. Download the cloud-init tar file from Init scripts for use on cloud images (https://launchpad.net/cloud-init/+download).
2. Using the file cloud-init-0.7.5 as an example, untar this file by issuing the following command:

   tar -zxvf cloud-init-0.7.5.tar.gz

3. Issue the following to install cloud-init:

   cd ./cloud-init-0.7.5
   python setup.py build
   python setup.py install
   cp ./sysvinit/redhat/* /etc/init.d

4. Update /etc/init.d/cloud-init-local to ensure that it starts after the xcatconf4z and sshd services. On RHEL 6, change the # Required-Start line in the ### BEGIN INIT INFO section from:

   ### BEGIN INIT INFO
   # Provides: cloud-init-local
   # Required-Start: $local_fs $remote_fs
   # Should-Start: $time
   # Required-Stop:

   to:

   ### BEGIN INIT INFO
   # Provides: cloud-init-local
   # Required-Start: $local_fs $remote_fs xcatconf4z sshd
   # Should-Start: $time
   # Required-Stop:

   On RHEL 5, multiple changes are required:
   a. Add a line near the top of the /etc/init.d/cloud-init-local file to specify the start and stop priority, so that the cloud-init service is started after the xcatconf4z service. For example:
      chkconfig: 235 08 92
      where 08 is the service start priority and 92 is the stop priority.
   b. Update the /etc/init.d/xcatconf4z file with a similar line, but with a start priority that is smaller than the one specified for the cloud-init-local file. For example:
      chkconfig: 235 07 92
5. The default configuration file /etc/cloud/cloud.cfg is for Ubuntu, not RHEL. To tailor it for RHEL:
   a. Replace distro: ubuntu with distro: rhel at around line 79.
   b. Change the default user name, password, and gecos as you wish, at around lines 82 to 84. Change the groups tag to remove user groups that are not available for this distribution. After the change, the groups tag at around line 85 should appear similar to the following:
      groups: [adm, audio, cdrom, dialout, floppy, video, dip]
   For more information on changing these values, see the cloud-init documentation (http://cloudinit.readthedocs.org/).
6. cloud-init will try to add user syslog to group adm. RHEL does not have a syslog user by default, so issue:

   useradd syslog

7. Add the cloud-init related services with the following commands:

   chkconfig --add cloud-init-local
   chkconfig --add cloud-init
   chkconfig --add cloud-config
   chkconfig --add cloud-final


   Then start them with the following sequence:

   chkconfig cloud-init-local on
   chkconfig cloud-init on
   chkconfig cloud-config on
   chkconfig cloud-final on

   You can issue ls -l /etc/rc5.d/ | grep -e xcat -e cloud to find the services. (Make sure that xcatconf4z starts before any cloud-init service.)

   lrwxrwxrwx. 1 root root 22 Jun 13 04:39 S50xcatconfinit -> ../init.d/xcatconf4z
   lrwxrwxrwx. 1 root root 26 Jun 13 04:39 S51cloud-init-local -> ../init.d/cloud-init-local
   lrwxrwxrwx. 1 root root 20 Jun 13 04:39 S52cloud-init -> ../init.d/cloud-init
   lrwxrwxrwx. 1 root root 22 Jun 13 04:39 S53cloud-config -> ../init.d/cloud-config
   lrwxrwxrwx. 1 root root 21 Jun 13 04:39 S54cloud-final -> ../init.d/cloud-final

8. To verify the cloud-init configuration, issue:

   cloud-init init --local

   Make sure that no errors occur. The following warning messages can be ignored:

   /usr/lib/python2.6/site-packages/Cheetah-2.4.4-py2.6.egg/Cheetah/Compiler.py:1509: UserWarning:
   You don't have the C version of NameMapper installed! I'm disabling Cheetah's useStackFrames
   option as it is painfully slow with the Python version of NameMapper. You should get a copy
   of Cheetah with the compiled C version of NameMapper.
   \nYou don't have the C version of NameMapper installed!

9. Issue rm -rf /var/lib/cloud (if this directory exists), or cloud-init will not work after a reboot.

Installation and Configuration of cloud-init on SLES 11.x

1. Download the cloud-init tar file from https://launchpad.net/cloud-init/+download.
2. Using the file cloud-init-0.7.5 as an example, untar this file by issuing the following command:

   tar -zxvf cloud-init-0.7.5.tar.gz

3. Issue the following commands to install cloud-init:

   cd ./cloud-init-0.7.5
   python setup.py build
   python setup.py install

   Note: After you issue the command tar -zxvf cloud-init-0.7.5.tar.gz, the directory ./sysvinit/sles/ does not exist, so you have to copy the cloud-init related services from ./sysvinit/redhat/* to /etc/init.d/:

   cp ./sysvinit/redhat/* /etc/init.d

   You will find that four scripts, cloud-init-local, cloud-init, cloud-config, and cloud-final, are added to /etc/init.d/. Modify each of them by replacing the variable:

   cloud_init="/usr/bin/cloud-init"

   with:

   cloud_init="/usr/local/bin/cloud-init"

   Note: For some versions of SLES, cloud-init does not perform the customization indicated by the user_data input. This issue has been reported to the cloud-init development team. The issue is apparent when an IBM cloud orchestration product fails to change the user password as part of first boot customization, or when the Maestro customization package is not downloaded and installed from the central servers. To circumvent this problem, edit /usr/lib/python2.6/site-packages/cloud_init-0.7.5-py2.6.egg/cloudinit/sources/DataSourceConfigDrive.py. (You can get the installation path from the python setup.py install command you issued in Step 3.) Then update the get_data(self) function to comment out the indicated lines (commenting already added):


# we want to do some things (writing files and network config)
# only on first boot, and even then, we want to do so in the
# local datasource (so they happen earlier) even if the configured
# dsmode is 'net' or 'pass'. To do this, we check the previous
# instance-id
prev_iid = get_previous_iid(self.paths)
cur_iid = md['instance-id']
#if prev_iid != cur_iid and self.dsmode == "local":
#    self.helper.on_first_boot(results)

4. Update /etc/init.d/cloud-init-local to ensure that it starts after the xcatconf4z service. On SLES, change the # Required-Start line in the ### BEGIN INIT INFO section from:

   ### BEGIN INIT INFO
   # Provides: cloud-init-local
   # Required-Start: $local_fs $remote_fs
   # Should-Start: $time
   # Required-Stop:

   to:

   ### BEGIN INIT INFO
   # Provides: cloud-init-local
   # Required-Start: $local_fs $remote_fs xcatconf4z
   # Should-Start: $time
   # Required-Stop:

5. The default configuration file /etc/cloud/cloud.cfg is for Ubuntu, not SLES. To tailor it for SLES:
   a. Replace distro: ubuntu with distro: sles at around line 79.
   b. Change the default user name, password, and gecos as you wish, at around lines 82 to 84.
   c. Change the groups at around line 85: groups: [adm, audio, cdrom, dialout, floppy, video, dip]
   d. cloud-init will try to add user syslog to group adm. This needs to be changed. For SLES, issue the following commands:
      useradd syslog
      groupadd adm
   For more information on changing these values, see the cloud-init documentation (http://cloudinit.readthedocs.org/).
6. Start the cloud-init related services with the following commands, ignoring the error "insserv: Service network is missed in the runlevels 4 to use service cloud-init" if it occurs:

   insserv cloud-init-local
   insserv cloud-init
   insserv cloud-config
   insserv cloud-final

   At this point, you should find that the services in /etc/init.d/rcX.d appear as you would expect (make sure that xcatconf4z starts before any cloud-init service):

   lrwxrwxrwx. 1 root root 22 Jun 13 04:39 S50xcatconfinit -> ../init.d/xcatconf4z
   lrwxrwxrwx. 1 root root 26 Jun 13 04:39 S51cloud-init-local -> ../init.d/cloud-init-local
   lrwxrwxrwx. 1 root root 20 Jun 13 04:39 S52cloud-init -> ../init.d/cloud-init
   lrwxrwxrwx. 1 root root 22 Jun 13 04:39 S53cloud-config -> ../init.d/cloud-config
   lrwxrwxrwx. 1 root root 21 Jun 13 04:39 S54cloud-final -> ../init.d/cloud-final

7. To verify the cloud-init configuration, issue:

   cloud-init init --local

   Make sure that no errors occur. The following warning messages can be ignored:

   /usr/lib/python2.6/site-packages/Cheetah-2.4.4-py2.6.egg/Cheetah/Compiler.py:1509: UserWarning:
   You don't have the C version of NameMapper installed! I'm disabling Cheetah's useStackFrames
   option as it is painfully slow with the Python version of NameMapper. You should get a copy


   of Cheetah with the compiled C version of NameMapper.
   \nYou don't have the C version of NameMapper installed!

8. Issue rm -rf /var/lib/cloud (if this directory exists), or cloud-init will not work after a reboot.

Installation and Configuration of cloud-init on RHEL 7.x and SLES 12.x

1. Download cloud-init 0.7.6 from https://launchpad.net/cloud-init/+download.
2. Untar it with this command:

   tar -zxvf cloud-init-0.7.6.tar.gz

3. Issue the following commands to install cloud-init:

   cd ./cloud-init-0.7.6
   python setup.py build
   python setup.py install --init-system systemd

4. OpenStack on z/VM uses ConfigDrive as the data source during the installation process. You must add the datasource_list and datasource lines shown below to the default configuration file, /etc/cloud/cloud.cfg:

   # Example datasource config
   # datasource:
   #    Ec2:
   #      metadata_urls: [ 'blah.com' ]
   #      timeout: 5 # (defaults to 50 seconds)
   #      max_wait: 10 # (defaults to 120 seconds)
   datasource_list: [ ConfigDrive, None ]
   datasource:
     ConfigDrive:
       dsmode: local

5. In order to work well with other products, the service startup sequence for cloud-init-local and cloud-init should be changed to the following. (The cloud-init related service files are located in the folder /lib/systemd/system/ for RHEL 7.x and in /usr/lib/systemd/system/ for SLES 12.x.)

   # cat /lib/systemd/system/cloud-init-local.service
   [Unit]
   Description=Initial cloud-init job (pre-networking)
   Wants=local-fs.target sshd.service sshd-keygen.service
   After=local-fs.target sshd.service sshd-keygen.service

   [Service]
   Type=oneshot
   ExecStart=/usr/bin/cloud-init init --local
   RemainAfterExit=yes
   TimeoutSec=0

   # Output needs to appear in instance console output
   StandardOutput=journal+console

   [Install]
   WantedBy=multi-user.target

   # cat /lib/systemd/system/cloud-init.service
   [Unit]
   Description=Initial cloud-init job (metadata service crawler)
   After=local-fs.target network.target cloud-init-local.service
   Requires=network.target
   Wants=local-fs.target cloud-init-local.service

   [Service]
   Type=oneshot
   ExecStart=/usr/bin/cloud-init init
   RemainAfterExit=yes
   TimeoutSec=0


   # Output needs to appear in instance console output
   StandardOutput=journal+console

   [Install]
   WantedBy=multi-user.target

6. Manually create the cloud-init-tmpfiles.conf file:

   touch /etc/tmpfiles.d/cloud-init-tmpfiles.conf

   Insert the required directive into the file by issuing the following command:

   echo "d /run/cloud-init 0700 root root - -" > /etc/tmpfiles.d/cloud-init-tmpfiles.conf

7. Because RHEL does not have a syslog user by default, you have to add it manually:

   useradd syslog

8. In /etc/cloud/cloud.cfg, remove the ubuntu-init-switch, growpart, and resizefs modules from the cloud_init_modules section. Here is the cloud_init_modules section after the change:

   # The modules that run in the 'init' stage
   cloud_init_modules:
    - migrator
    - seed_random
    - bootcmd
    - write-files
    - set_hostname
    - update_hostname
    - update_etc_hosts
    - ca-certs
    - rsyslog
    - users-groups
    - ssh

9. In /etc/cloud/cloud.cfg, remove the emit_upstart, ssh-import-id, grub-dpkg, apt-pipelining, apt-config, landscape, and byobu modules from the cloud_config_modules section. Here is the cloud_config_modules section after the change:

   cloud_config_modules:
   # Emit the cloud config ready event
   # this can be used by upstart jobs for 'start on cloud-config'.
    - disk_setup
    - mounts
    - locale
    - set-passwords
    - package-update-upgrade-install
    - timezone
    - puppet
    - salt-minion
    - mcollective
    - disable-ec2-metadata
    - runcmd

10. The /etc/cloud/cloud.cfg file is meant for Ubuntu, and must be updated for RHEL and SLES. To tailor this file for RHEL and SLES:
    a. Change the disable_root: true line to: disable_root: false
    b. In the system_info section, replace distro: ubuntu with distro: rhel or distro: sles according to the distribution you will use.
    c. Change the default user name, password, and gecos under the default_user configuration section as needed for your installation.
    d. Change the groups tag to remove the user groups that are not available on this distribution. When cloud-init starts up for the first time, it will create the specified users and groups. The following is a sample configuration for SLES:

       system_info:
         # This will affect which distro class gets used
         distro: sles


         # Default user name + that default user's groups (if added/used)
         default_user:
           name: sles
           lock_passwd: false
           plain_text_passwd: 'sles'
           gecos: sles12user
           groups: users
           sudo: ["ALL=(ALL) NOPASSWD:ALL"]
           shell: /bin/bash

    For more information on cloud-init configurations, see: http://cloudinit.readthedocs.org/en/latest/topics/examples.html
11. Enable and start the cloud-init related services by issuing the following commands:

    systemctl enable cloud-init-local.service
    systemctl start cloud-init-local.service

    systemctl enable cloud-init.service
    systemctl start cloud-init.service

    systemctl enable cloud-config.service
    systemctl start cloud-config.service

    systemctl enable cloud-final.service
    systemctl start cloud-final.service

    If you experience problems the first time you start cloud-config.service and cloud-final.service, try starting them again.
12. Ensure all cloud-init services are in active status by issuing the following commands:

    systemctl status cloud-init-local.service
    systemctl status cloud-init.service
    systemctl status cloud-config.service
    systemctl status cloud-final.service

13. Optionally, you can start the multipath service:

    systemctl enable multipathd
    systemctl start multipathd
    systemctl status multipathd

14. Remove the /var/lib/cloud directory (if it exists); otherwise, cloud-init will not work correctly after a reboot:

    rm -rf /var/lib/cloud

Installation and Configuration of cloud-init on Ubuntu 16.04

For Ubuntu 16.04, cloud-init 0.7.8 or higher is required. The examples in this section use cloud-init 0.7.8.

1. Download cloud-init 0.7.8 from https://launchpad.net/cloud-init/+download.
2. Untar it with this command:

   tar -zxvf cloud-init-0.7.8.tar.gz

3. Issue the following commands to install cloud-init:

   cd ./cloud-init-0.7.8
   python3 setup.py build
   python3 setup.py install --init-system systemd

   You might have to install all the dependencies that cloud-init requires according to your source z/VM environment. For example, you might have to install setuptools before installing cloud-init. For more information, see https://pypi.python.org/pypi/setuptools.
4. OpenStack on z/VM uses ConfigDrive as the data source during the installation process. You must add the datasource_list and datasource lines shown below to the default configuration file, /etc/cloud/cloud.cfg:

   # Example datasource config
   # datasource:
   #    Ec2:

   #      metadata_urls: [ 'blah.com' ]
   #      timeout: 5 # (defaults to 50 seconds)
   #      max_wait: 10 # (defaults to 120 seconds)
   datasource_list: [ ConfigDrive, None ]
   datasource:
     ConfigDrive:
       dsmode: local

5. Enable root login by configuring the /etc/cloud/cloud.cfg file:

   # If this is set, 'root' will not be able to ssh in and they
   # will get a message to login instead as the above $user (ubuntu)
   disable_root: false

6. Optionally, you can tailor the modules that run during the cloud-config stage or the cloud-final stage by modifying cloud_config_modules or cloud_final_modules in the /etc/cloud/cloud.cfg file.
7. Enable and start the cloud-init related services by issuing the following commands:

   ln -s /usr/local/bin/cloud-init /usr/bin/cloud-init
   systemctl enable cloud-init-local.service
   systemctl start cloud-init-local.service
   systemctl enable cloud-init.service
   systemctl start cloud-init.service
   systemctl enable cloud-config.service
   systemctl start cloud-config.service
   systemctl enable cloud-final.service
   systemctl start cloud-final.service

8. Ensure all cloud-init services are in active status by issuing the following commands:

   systemctl status cloud-init-local.service
   systemctl status cloud-init.service
   systemctl status cloud-config.service
   systemctl status cloud-final.service

9. If you intend to use persistent disks, start the multipath service:

   systemctl enable multipathd
   systemctl start multipathd
   systemctl status multipathd

10. Remove the /var/lib/cloud directory (if it exists); otherwise, cloud-init will not work correctly after a reboot:

    rm -rf /var/lib/cloud

Optionally Load the zfcp Module

This section applies to all supported distributions.

If you want to use the persistent disks provided by cinder, you must load the zfcp module by issuing the following command:

modprobe zfcp allow_lun_scan=0

Otherwise, cinder functions may not work correctly.
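The modprobe command above loads the module only until the next reboot. Assuming the standard modprobe.d mechanism is available on your distribution (the file name below is just an example; any *.conf file under /etc/modprobe.d is read), a small configuration file makes the option persistent:

# /etc/modprobe.d/zfcp.conf -- applied whenever the zfcp module is loaded
options zfcp allow_lun_scan=0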

Capture the Node to Generate the Image in the xCAT MN

Bring up the xCAT GUI, authenticate into xCAT, and go to the Script panel for the xCAT MN node (xcat) and issue the imgcapture command to capture the node's root disk. See "Using the Script Panel in the xCAT User Interface" on page 179 for more information.

/opt/xcat/bin/imgcapture demonode --profile demonewprofile

where:

demonode
     is the node name.
demonewprofile
     is the profile name under which you want to store the captured image in xCAT.

Note: The capture operation may time out, so it is recommended that before you run the capture command, you check the timeout value for the httpd service by running the following script against the xCAT node on the Script panel:

sed -n '/^Timeout/p' /etc/httpd/conf/httpd.conf

If the Timeout value is not long enough (it should be greater than 3600 seconds), modify the value by referring to "Increasing the httpd Timeout in the xCAT MN" on page 181.
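As a hedged sketch of that adjustment (the authoritative procedure is the referenced section on page 181), raising the value amounts to editing the Timeout directive and restarting httpd; 7200 here is an arbitrary example value, and the restart command assumes a sysvinit-style service manager on the xCAT MN:

# Run against the xCAT node from the Script panel
sed -i 's/^Timeout.*/Timeout 7200/' /etc/httpd/conf/httpd.conf
service httpd restart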

Example:

/opt/xcat/bin/imgcapture demonode --profile demonewprofile

The command above results in the following:

demonode: Capturing the image using ZHCP node
demonode: SOURCE USER ID: "FTEST03A"
  DISK CHANNEL: "0100"
  IMAGE FILE: "/mnt/xcat.ibm.com/install/staging/rhel6.2/s390x/demonewprofile/0100.img"
Creating 0100.img image file for FTEST03A's disk at channel 0100 with disk size 4370 CYL.
Image creation successful.
demonode: Moving the image files to the deployable directory: /install/netboot/rhel6.2/s390x/demonewprofile
demonode: Completed capturing the image(rhel6.2-s390x-netboot-demonewprofile) and stored at /install/netboot/rhel6.2/s390x/demonewprofile

As part of the capture process, the virtual server is shut down prior to capturing the image. After you have captured the image, you can restart the virtual server using the xCAT GUI by navigating to the Nodes->groups->all->Nodes panel. Select the virtual server from the list of virtual servers and then, from the Actions pulldown, select "Power On".

Note: xCAT does not maintain an association between an image and the virtual system from which it was captured. Thus, once an image has been captured, you can reuse the virtual machine to create a different Linux system which you can then capture. It is recommended that you do not reuse the system until you have uploaded the image to OpenStack and successfully deployed a system with the image that you created. This allows you to change the source Linux system and recreate the image, if you find that you need to recreate the original image.

The following errors indicate a configuration problem. See "Define the Source System as an xCAT Node" on page 71 for more information.

Error: systemimager-server is not installed on the xcat.
Error: Can not configure Imager Server on xcat.

Export the Image to the Nova Compute Server

Note that this step is not required for CMA; the next step for CMA is "Upload the Image from the Nova Compute Server to Glance" on page 86.

Bring up the xCAT GUI, authenticate into xCAT, and go to the Script panel for the xCAT MN node (xcat) and issue the imgexport command to move the image to the nova compute server. See "Using the Script Panel in the xCAT User Interface" on page 179 for more information.


/opt/xcat/bin/imgexport rhel6.2-s390x-netboot-demonewprofile --remotehost nova@compute_node

where:

rhel6.2-s390x-netboot-demonewprofile
     is the image name in xCAT that you generated in "Capture the Node to Generate the Image in the xCAT MN" on page 84.
nova@compute_node
     is the userid@host of your nova compute node. The user is the user under which the nova services run; this is normally "nova". By default, the image will be copied to the home directory of the specified user, for example /home/nova. When the services are run under a different user, you can determine the home directory using the following command:
     grep nova /etc/passwd | cut -d ":" -f6
     /var/lib/nova

Note: The export operation may time out, so it is recommended that before you run the export command, you check the timeout value for the httpd service by running the following script against the xCAT node on the Script panel:

sed -n '/^Timeout/p' /etc/httpd/conf/httpd.conf

If the Timeout value is not long enough (it should be greater than 3600 seconds), modify the value by referring to "Increasing the httpd Timeout in the xCAT MN" on page 181.

Example:

/opt/xcat/bin/imgexport rhel6.2-s390x-netboot-demonewprofile --remotehost nova@compute_node
Exporting rhel6.2-s390x-netboot-demonewprofile to nova@compute_node in /install/imgexport.49859.hblUPI.
Compressing rhel6.2-s390x-netboot-demonewprofile bundle. Please be patient.
Done!
Moving the image bundle to the remote system location rhel6.2-s390x-netboot-demonewprofile.tgz

Upload the Image from the Nova Compute Server to Glance

The image should be uploaded from the nova compute server to the glance image repository.

Uploading an Image from a CMA

The image is imported into the glance repository using the OpenStack Dashboard by accessing the following URL:

https://XCAT_MN_Addr/dashboard

where XCAT_MN_Addr is the address of the appliance as specified in the related DMSSICNF COPY property.

1. Authenticate into the GUI using the following values:

   User ID: admin
   Password: The current password used to log in as "admin" with the Horizon GUI. The original value of the password for "admin" was specified in the cmo_admin_password property in the DMSSICMO COPY file. You should have changed this password the first time you logged in as "admin" with the Horizon GUI.

2. Navigate to the Project --> Compute --> Images tab.
3. Click on the Create Image button in the upper right hand corner. You will then see Figure 28 on page 87.


4. Fill in the appropriate image information.
   v You can select either Image Location or Image File to name the image file. When using Image Location to name the image file, you can form the URL by concatenating together the following pieces of information:
     – https://
     – The xCAT MN IP address value: XCAT_MN_Addr
     – The deployable directory path that xCAT moved the image file to after the capture process completed in the section "Capture the Node to Generate the Image in the xCAT MN" on page 84
     – A slash (/)
     – The captured image's file name
     Using the example output in "Capture the Node to Generate the Image in the xCAT MN" on page 84, the resulting URL would be:
     https://XCAT_MN_Addr/install/netboot/rhel6.2/s390x/demonewprofile/0100.img
   v "Format" should be "raw"
   v "Architecture" should be "s390x"

Figure 28. Create An Image Screen


   After you input all necessary information, click the Create Image button to create the new image.
5. After the image is created, you must update the image metadata before you can deploy a new instance. Select the new image and click the Update Metadata button, as shown in Figure 29.
   Input all necessary image properties, including image_type_xcat, os_name, os_version, provisioning_method, and image_file_name.
   The following example shows you how to update the image_type_xcat property:
   a. In the Custom column, enter image_type_xcat and click the "+" button, as shown in Figure 30 on page 89.

Figure 29. Updating Image Metadata


   b. On the right side, input "linux" as the image type, as shown in Figure 31 on page 90.

Figure 30. Entering the Property Name


   Click the Save button.

Figure 31. Entering the Image Type

Follow the above steps for all other image properties. For information on the valid values for the other metadata properties, see the "ZVMImageError Exception" section in "Compute Log" on page 195.

Uploading an Image from a Non-CMA Compute Node

By default, the exported image bundle is stored in the nova compute user's home directory as described in "Export the Image to the Nova Compute Server" on page 85. You will need to untar the image bundle and then upload the image into Glance.

1. Untar the image bundle. The name of the bundle is stated in the last line of the response from the imgexport command that was issued in "Export the Image to the Nova Compute Server" on page 85, and has the suffix ".tgz".

   cd image_location
   tar -xvf rhel6.2-s390x-netboot-demonewprofile.tgz

   where:

   image_location
        is the location of the directory containing the image bundle.

   rhel6.2-s390x-netboot-demonewprofile.tgz
        is the name of the image bundle.

2. Locate the image file. It is in the untarred directory tree with a suffix of .img (for example, image_location/0100.img).
3. Upload the image into Glance. Note that the following example is a one-line command, but is shown on multiple lines here for readability:

   glance image-create --name imagetest --disk-format=raw --container-format=bare
   --visibility=public < image_location/0100.img

where:

imagetest
     is the name of the image as known to Glance. Image names should be restricted to the UTF-8 subset, which corresponds to the ASCII character set. In addition, special characters such as /, \, $, %, @ should not be used.
image_location/0100.img
     is the file specification of the image file.

4. Update the image properties for the image generated in Step 3 in Glance with the following command. Note that the following example is a one-line command, but is shown on multiple lines here for readability:

   glance image-update --property image_type_xcat=linux --property hypervisor_type=zvm
   --property architecture=s390x --property os_name=Linux --property os_version=os_version
   --property provisioning_method=netboot --property image_file_name=0100.img uuid

where:

os_version
     is the OS version of your capture source node. Currently, only Red Hat, SUSE, and Ubuntu type images are supported. For a Red Hat type image, you can specify the OS version as rhelx.y, redhatx.y, or red hatx.y, where x.y is the release number. For a SUSE type image, you can specify the OS version as slesx.y or susex.y, where x.y is the release number. For an Ubuntu type image, you can specify the OS version as ubuntux.y, where x.y is the release number. (If you don't know the real value, you can get it from the osvers property value in the manifest.xml file.)
uuid
     is the value generated in Step 3.

Note that all of these properties must be updated for a z/VM type image. For information on the valid values for the other metadata properties, see the "ZVMImageError Exception" section in "Compute Log" on page 195.

Remove the Image from the xCAT Management Node

Bring up the xCAT GUI, authenticate into xCAT, and go to the Script panel for the xCAT MN node (xcat) and issue the rmimage command with the --xcatdef option. See "Using the Script Panel in the xCAT User Interface" on page 179 for more information.

/opt/xcat/sbin/rmimage rhel6.2-s390x-netboot-demonewprofile --xcatdef

where:

rhel6.2-s390x-netboot-demonewprofile
     is the image name that was created by the imgcapture command.

Note: This step is optional. If you wish to keep this image in the xCAT MN for exporting to another nova compute node, or to restore the system that was used as the capture source, you can skip this step.

Deactivate cloud-init on the Captured Source System

Please note that after you finish the configuration of your source virtual machine and reboot it, you may see that cloud-init attempts to locate the metadata service, with messages such as "Attempting to open 'http://169.254.169.254/2009-04-04/meta-data/instance-id' with 1 attempts (0 retries, timeout=50) to be performed". This is because your system does not have a configuration disk for cloud-init; that disk will only be created when you do a deployment with the z/VM OpenStack code. To avoid cloud-init searching for the configuration on the source virtual machine, we suggest that you disable the cloud-init service on the system that was the source for your capture. This ensures that you can do any operations (start, stop, reboot, and so on) on your source system without cloud-init attempting to perform configuration.

If you see such messages when you do a new deployment, it indicates that you did not create your image with the necessary activation engines (xcatconf4z, cloud-init, and so on) installed and correctly configured, or that some other error in the deploy process itself may have occurred.
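A sketch of the disable commands on the capture source, mirroring the enable commands used during installation (use the first form on sysvinit distributions such as RHEL 6 and SLES 11, the second on systemd distributions such as RHEL 7, SLES 12, and Ubuntu 16.04):

# sysvinit (RHEL 6, SLES 11); on SLES 11, "insserv -r <service>" also works
chkconfig cloud-init-local off
chkconfig cloud-init off
chkconfig cloud-config off
chkconfig cloud-final off
# systemd (RHEL 7, SLES 12, Ubuntu 16.04)
systemctl disable cloud-init-local.service cloud-init.service cloud-config.service cloud-final.service

Re-enable the services before recapturing the image, so that deployed instances are configured at first boot.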


Chapter 7. Getting Started with Boot from Volume

This chapter discusses creating a bootable volume from an existing minidisk, cloning a volume from an existing volume, and booting a virtual server instance from a volume. After creating a bootable volume, the volume is on a persistent disk. When the virtual server using that volume is deleted, by default the volume is not destroyed and can be used in subsequent virtual server instances.

Note: Resizing/migration of volume based instances is not supported.

The following Linux distributions are supported for the boot from volume function:

v RHEL 6 – see "Installing Red Hat Linux" on page 96.
v RHEL 7 – see "Installing Red Hat Linux" on page 96.
v SLES 11 – see "Installing SLES 11" on page 102.
v SLES 12 – see "Installing SLES 12" on page 106.

The steps for other versions/releases of RHEL or SLES may be different than what is documented here. Boot from volume for Ubuntu is not supported.

Please consult your Linux documentation for instructions related to migrating a Linux system's root disk to a SCSI volume.

Creating a Bootable Volume

The process in this section creates a basic Linux system, but it will not contain any middleware that was part of the original Linux system. If your original Linux system had middleware installed on it, you can add the middleware to the system after migrating the system to a volume.

Pre-Installation Tasks

This section discusses the pre-installation tasks you must perform before creating a bootable volume.

Note: On a CMA system, issue the command source $HOME/openrc to set OpenStack-related environment variables before you issue any OpenStack commands.

1. Locate an NFS that will be accessible to the system you want to migrate from a minidisk to a volume. The NFS must contain the Linux distribution media for the operating system being migrated. Subsequent steps will refer to the media being on NFS.
2. You must provision a virtual server with OpenStack that contains the Linux image which you intend to migrate to the bootable volume. This defines the system to OpenStack so that subsequent commands can utilize OpenStack functions to add a volume to the system.
   In this virtual server instance, the root volume resides on an ephemeral disk, an ECKD or FBA minidisk. The target Linux system should follow the same requirements and restrictions which are imposed on a Linux system located on ECKD or FBA disks. See "Image Requirements" on page 69 for more information.
   Once the virtual server is known to OpenStack, you will need to obtain the OpenStack ID for that virtual server instance. Use the nova list command to get the ID:

   nova list
   +--------------------------------------+----------+--------+------------+-------------+------------------+
   | ID                                   | Name     | Status | Task State | Power State | Networks         |
   +--------------------------------------+----------+--------+------------+-------------+------------------+
   | 20941149-2927-462c-aaf9-42b759de2f3b | hych0035 | ACTIVE | -          | Running     | mgt=10.1.198.199 |
   +--------------------------------------+----------+--------+------------+-------------+------------------+

3. Estimate the size of the disk you will need for the bootable volume. The size of the disk depends on the version of Linux you will run and on the packages you want to install on your system. Generally, for both RHEL 6 and SLES 11, a minimum of 2 GB is needed; however, a size of 3 GB is recommended.
   If the system you intend to create is similar in size to the one you are already using, determine the size of the disk that contains the root directory. You can use the Linux df -h filenode command to display total capacity as well as the used and available capacity. For example:

   df -h /
   Filesystem      Size  Used Avail Use% Mounted on
   /dev/dasda1     3.0G  1.6G  1.3G  58% /
   df -h /root
   Filesystem      Size  Used Avail Use% Mounted on
   /dev/dasda1     3.0G  1.6G  1.3G  58% /

4. For the bootable volume which will be created, a persistent disk (volume) with enough capacity to hold the Linux system must be added to the target OpenStack virtual server. Locate or allocate a cinder volume with enough capacity to hold the Linux system that you want to use. Once you have located a suitable volume in cinder, you will need to know the volume ID for subsequent commands. You can issue the cinder list command to list the volumes in cinder and their IDs. For example:

   cinder list
   +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
   | ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
   +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
   | fddccbf1-1819-4019-ad9b-175279d3f431 | available | None         | 3    | None        | false    |             |
   +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

   The Size column indicates the volume is 3 GB, and the ID will be used in the next step.
5. Attach the volume to the virtual server instance with the nova volume-attach command:

   nova volume-attach instance_id volume_id

   where:

   instance_id
        The instance identifier of the virtual server instance.
   volume_id
        The identifier of the volume.

   For example:

   nova volume-attach 20941149-2927-462c-aaf9-42b759de2f3b fddccbf1-1819-4019-ad9b-175279d3f431
   +----------+--------------------------------------+
   | Property | Value                                |
   +----------+--------------------------------------+
   | device   | /dev/sdb                             |
   | id       | fddccbf1-1819-4019-ad9b-175279d3f431 |
   | serverId | 20941149-2927-462c-aaf9-42b759de2f3b |
   | volumeId | fddccbf1-1819-4019-ad9b-175279d3f431 |
   +----------+--------------------------------------+

   cinder list
   +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
   | ID                                   | Status | Display Name | Size | Volume Type | Bootable | Attached to                          |
   +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
   | fddccbf1-1819-4019-ad9b-175279d3f431 | in-use | None         | 3    | None        | false    | 20941149-2927-462c-aaf9-42b759de2f3b |
   +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

   Note that the volume is now shown attached to the target system (see the "Attached to" column).
6. SSH into the Linux server and run the following command to obtain the FCP device, WWPN, and LUN:


lszfcp -D

   The output from this command is similar to the following:

   0.0.1fb2/0x5005076801102991/0x0022000000000000 0:0:4:34

   The following values are shown in this example:

   FCP device
        1fb2
   WWPN
        5005076801102991
   LUN
        0022000000000000

7. For Red Hat, copy the kernel.img and initrd.img files into the Linux server location, and place the files in the /boot directory. The image files are located in the /images directory of the distribution media (for example, /nfs/rhel6.5/dvd1/images/).
   For SLES, copy the initrd (with no file suffix) and vmrdr.ikr files into the Linux server location, and place the files in the /boot directory. The image files are located in the boot/s390x directory of the distribution media (for example, /nfs/sles11sp3GM/dvd1/boot/s390x/).

8. Run the zipl --dry-run command to gather information about the system, such as the name of the default Linux system. This command does not modify IPL records. It must be issued before making any modification to the /etc/zipl.conf file. (These modifications are shown in Step 9.) For example:

   zipl --dry-run
   Using config file '/etc/zipl.conf'
   Starting dry-run, target device contents will NOT be modified
   Building bootmap in '/boot/'
   Building menu 'rh-automatic-menu'
   Adding #1: IPL section 'linux-2.6.32-431.el6.s390x' (default)
   Preparing boot device: dasda (0100).
   Done.

   The entry linux-2.6.32-431.el6.s390x, which is the default Linux system, is determined by the [defaultboot] section.

9. Add the following configuration items to the /etc/zipl.conf file.For RedHat:

[linux]image=/boot/kernel.imgramdisk=/boot/initrd.imgparameters="root=/dev/ram0 rd_DASD=0.0.0100 vnc ro ramdisk_size=40000"

For SLES:[linux]image=/boot/vmrdr.ikrramdisk=/boot/initrdtarget = /boot/ziplparameters="root=/dev/ram0 rd_DASD=0.0.0100 vnc ro ramdisk_size=40000"

Add the [defaultboot] section (if it is not already present) and specify default=linux:[defaultboot]default=linux

(The above command defines the default Linux system to be "linux". Previously, the default, as wellas the original boot system, was "linux-2.6.32-431.el6.s390".Issue the command /sbin/zipl to allow the new configuration to take effect:

ziplUsing config file ’/etc/zipl.conf’Building bootmap in ’/boot/’Building menu ’rh-automatic-menu’


Adding #1: IPL section 'linux' (default)
Adding #2: IPL section 'linux-2.6.32-431.el6.s390x'
Preparing boot device: dasda (0100).
Done.

Record the number in the menu associated with the original Linux system, and record the number of the boot device. They will be used in subsequent steps. In this example, the number of the original Linux is 2, and the number of the boot device is 0100.

10. Before you can boot a new instance, you have to set the volume to be bootable using the cinder set-bootable command:

cinder set-bootable volume_ID true

where:

volume_ID
       is the identifier of the volume. You can obtain this value by using the cinder list command.
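For example, using the volume ID from the cinder list output in Step 4:

cinder set-bootable fddccbf1-1819-4019-ad9b-175279d3f431 true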

11. Determine the IP configuration for the Linux system you are about to create on a volume. If you will use the same IP address as the system you are currently using, you can obtain its IP configuration information with the ifconfig command. To use a different IP address, contact your system administrator to get the IP address, subnet mask, gateway, DNS server, and MAC address that you will be using.

12. Issue the Linux shutdown command to cause the target Linux server to shut down gracefully:

shutdown now

Note: If the virtual server is periodically disconnected or powered off when you perform the next set of volume preparation steps, it may be because the compute service is doing periodic status synchronization work. If this is a problem, you can stop the compute service temporarily if necessary.

Installing Linux

This section provides an overview of installing Red Hat Linux 6, Red Hat Linux 7, SUSE 11 Linux, and SUSE 12 Linux. For more detailed information on installing these releases and later Linux releases, refer to the following URLs or other appropriate Linux documentation:

For Red Hat:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/index.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Installation_Guide/index.html

For SUSE:
https://www.suse.com/documentation/sles11/book_sle_deployment/data/book_sle_deployment.html
https://www.suse.com/documentation/sles-12/book_sle_deployment/data/book_sle_deployment.html

Installing Red Hat Linux

The instructions in this section apply to RHEL 6 and RHEL 7.

1. Log on to the z/VM virtual machine running the virtual server that you want to migrate.

2. Issue the following command:

#cp ipl 0100

where 0100 is the boot device shown in Step 9 on page 95.

3. Configure the Linux system network in order to make it accessible from a remote VNC client. Contact your network administrator to obtain the appropriate values. The following are properties you should specify:


Layer mode must be layer2 (required)
MAC must be a unique MAC address associated with this interface
IPv4 must be an IPv4 address associated with this interface
Netmask must be a netmask address associated with this interface
Gateway must be a gateway IP address associated with this interface
DNS server must be a DNS server IP address associated with this interface

4. On the z/VM console, issue the following command:

#cp term more 0 0

Without this command, the process will pause when the z/VM screen shows "More..." indicating that the system is waiting for you to manually clear the screen. In this case the VNC client will lose its connection and will not be able to establish a new connection until the process resumes.

5. Issue the following command on your SSH client to connect to the server as the "install" user to start the installation:

ssh install@x.x.x.x

where x.x.x.x is the IPv4 address you specified in Step 3 on page 96. The installation process is similar to a typical Linux installation. In this example an NFS server is used as the source of installation. On the "Installation Method" screen, select "NFS directory" as shown in Figure 32.

6. On the "NFS Setup" screen, shown in Figure 33 on page 98, specify the IP address of your NFS server in the "NFS server name" field, and specify the absolute path of the install directory in the "Red Hat Enterprise Linux directory" field. You can ignore the "NFS mount options" because these options are filled in by the command in most cases.

Figure 32. Installation Method Screen


After the system installer loads the necessary files from the NFS server, you will receive a message similar to the following:

03:33:57 Please manually connect your vnc client to 9.60.29.35:1 to begin the install.
03:33:57 Starting graphical installation.

7. Connect your VNC client to the server and continue the installation process.

8. The storage device which will contain the system must be specified as the FCP device which is attached to the server in Step 5 on page 94. Select "Specialized Storage Device" as shown in Figure 34, and then select the "Next" button.

a. Add the volume as a system storage device as shown in Figure 35 on page 99 by selecting the "Add Advanced Target" button and then selecting "Add ZFCP LUN", followed by clicking on the "Add Drive" button.

Figure 33. NFS Setup Screen

Figure 34. Specifying the Storage Device


b. Specify the volume information as shown in Figure 36 on page 100 and click on the "Add" button.

Figure 35. Adding the Volume as a System Storage® Device


c. Verify the volume was added by selecting the "Other SAN Devices" tab and checking that the added device is in the list. Figure 37 on page 101 shows that the volume was successfully added. Then select the "Next" button to move to the next screen.

Figure 36. Specify the Volume Information


d. On the Type of Installation screen, which is shown in Figure 38 on page 102, check the "Review and modify partitioning layout" checkbox to use the standard partitioning layout. The "Create Storage" screen will appear. Select the "Standard Partition" button. Then select the "Next" button to move to the next screen.

Figure 37. Results of Adding a Volume Successfully


e. Make sure the volume uses a standard partitioning layout by verifying that no LVM volume groups are listed for the device.

Installing SUSE Linux

This section describes installing SLES 11 and SLES 12.

Installing SLES 11:

1. Log on to the z/VM virtual machine running the virtual server that you want to migrate.

2. Issue the following command:

#cp ipl 0100

where 0100 is the boot device shown in Step 9 on page 95.

3. Configure the network to make it accessible from a remote VNC client. Contact your network administrator to obtain the appropriate values. The following are values you must specify:

Source medium must be network
Network protocol must be NFS
OSA device must be Ethernet
Physical medium must be Ethernet

Figure 38. Type of Installation Screen


Layer mode must be layer2
IPv4 address, IPv4 netmask, and IPv4 gateway must be specified, and the display type must be VNC.

After you enter the correct values, the system installer will load the necessary files from your NFS server. You can then connect to the server with your VNC client.

4. Connect to the server with your VNC client to start the installation. The installation process is similar to a typical Linux installation.

5. On the Disk Activation screen, select "Configure ZFCP Disks" as shown in Figure 39. Then press the "Next" button to move to the next screen.

6. Specify the volume information as shown in Figure 40 on page 104 and select the "Next" button.

Figure 39. Preparing the Volume


7. On the subsequent screen, select the volume, shown as a SCSI disk, then press the "Next" button to move to the next screen.

8. On the Installation Settings screen, select the "Change" button to access the pulldown menu, and then select "Partitioning" to specify standard partitioning rather than LVM partitioning, as shown in Figure 41 on page 105. You will then see the screen shown in Figure 42 on page 106.

Figure 40. Specifying Volume Information


9. On the Preparing Hard Disk screen, verify that LVM partitioning is not selected in the "Proposal settings" section, as shown in Figure 42 on page 106. Then select the "Accept" button.

Figure 41. Installation Settings Screen: Selecting the Change Partitioning Button


10. On the Installation Settings screen, shown in Figure 41 on page 105, select "Install".

Installing SLES 12:

1. In “Installing SLES 11” on page 102, follow Step 1 on page 102 through Step 7 on page 104.

2. On the Suggested Partitioning screen, shown in Figure 43 on page 107, select the Expert Partitioner button to customize the partition.

Figure 42. Verifying that LVM Partitioning is Not Selected


3. You will then see Figure 44 on page 108. The SLES installer by default generates a recommended partitioning plan for you, which you can tailor as needed.

Figure 43. Suggested Partitioning Screen


4. To edit a partition, select the partition name, as shown in Figure 45 on page 109. Then select the Edit button.

Figure 44. Expert Partitioner Screen with Sample Partitioning Plan


5. You will then see the Edit Partition screen, as shown in Figure 46 on page 110. Select the Fstab Options button.

Figure 45. Editing a Partition


6. You will then see the Fstab Options screen, as shown in Figure 47 on page 111. In the "Mount in /etc/fstab by" section, you must select the UUID radio button. Otherwise the system could fail to reboot after the installation. Then select OK.

Figure 46. Edit Partition Screen


7. Continue the installation process by selecting your time zone and setting an initial password. You will then see the Installation Settings screen, as shown in Figure 48 on page 112. Disable the firewall and enable SSH, so that you can SSH onto the instance after the installation. Then select Install.

Figure 47. Fstab Options Screen

Post-Installation Tasks

This section discusses the tasks you must perform after you install a Red Hat or SLES system.

Note: On a CMA system, issue the command source $HOME/openrc to set OpenStack-related environment variables before you issue any OpenStack commands.

1. Depending on the Linux distribution you are using, the installation process could reboot the system automatically for you after the installation completes successfully. In this case you can skip this step and go to Step 2 on page 113. Otherwise, log on to the z/VM virtual machine in which the target Linux system is running, and issue the CP SET LOADDEV command to reboot the newly installed system:

#CP SET LOADDEV PORTNAME wwpn LUN lun

where:

wwpn is the WWPN you obtained in “Pre-Installation Tasks” on page 93.

lun is the LUN you obtained in “Pre-Installation Tasks” on page 93.

For values longer than eight hexadecimal characters, at least one separator blank is required after the eighth character.

Then IPL from the FCP device:

#CP IPL FCP_device

where:

Figure 48. Installation Settings Screen


FCP_device
       is the FCP device you obtained in “Pre-Installation Tasks” on page 93.
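For example, using the FCP device (1fb2), WWPN (5005076801102991), and LUN (0022000000000000) from the earlier lszfcp -D output, with the required blank after the eighth hexadecimal character of each value:

#CP SET LOADDEV PORTNAME 50050768 01102991 LUN 00220000 00000000
#CP IPL 1FB2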

2. SSH into the target Linux system and perform additional configuration steps.

a. Install the mkisofs module on the system.

b. Then install and configure xcatconf4z and cloud-init. For more information, see “Configuration of xcatconf4z on RHEL 6.x and SLES 11.x” on page 75 and “Installation and Configuration of cloud-init” on page 77.

c. Disable allow_lun_scan for the module zfcp by updating or creating the file /etc/modprobe.d/50-zfcp.conf and inserting in this file the following new line:

options zfcp allow_lun_scan=0

If you are running SLES 12, the following steps are required:

1) Edit the file /etc/default/grub. Insert "zfcp.allow_lun_scan=0" into the GRUB_CMDLINE_LINUX_DEFAULT statement, and change the value of the resume property to be the device name of the swap partition. The following example shows these changes:

GRUB_CMDLINE_LINUX_DEFAULT="zfcp.allow_lun_scan=0 hvc_iucv=8 TERM=dumb resume=/dev/sda2"

2) Issue the following command to update the grub2 configuration:

grub2-mkconfig -o /boot/grub2/grub.cfg

3) Issue the following command to update IPL information:

grub2-zipl-setup

Then restart your system.

d. If you are running SLES 12, edit the file /etc/default/grub and insert the following new line:

GRUB_DEVICE="/dev/sda3"

where /dev/sda3 is the device name of the root partition (/). Then issue the following commands:

grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-zipl-setup

Then restart your system.

e. Optionally, you can now add in any middleware that you want to be part of the bootable volume.

3. Obtain information from the zipl.conf file. This information will be used to define volume metadata when you clone a bootable volume. Issue the following command:

cat /etc/zipl.conf

Output is similar to the following:

[defaultboot]
timeout=5
default=linux-2.6.32-431.el6.s390x
target=/boot/
[linux-2.6.32-431.el6.s390x]
image=/boot/vmlinuz-2.6.32-431.el6.s390x
ramdisk=/boot/initramfs-2.6.32-431.el6.s390x.img
parameters="root=UUID=92910e26-0273-43fe-b3d0-73c7aabbe746 rd_NO_LUKS rd_NO_LVM LANG=...

Make note of the values of the image and ramdisk parameters.

4. Restore the boot configuration for the virtual server instance.

a. Determine which option to select on the IPL Menu, which is shown in Figure 49 on page 114. See Step 9 on page 95 to determine which option to select.

b. Use the CP IPL command to re-IPL the z/VM system your Linux server is located on and select the original Linux system in the menu displayed on the console, as shown in Figure 49 on page 114.

#CP IPL vdev LOADPARM num


where:

vdev is the device number you obtained in Step 9 on page 95.

num
       Corresponds to the menu selection number, as shown in Figure 49.

For example:

#CP IPL 0100 LOADPARM 2

5. If you want to boot your original Linux system and restore the /etc/zipl.conf file, change the [defaultboot] section to have the appropriate default value. For example:

[defaultboot]
default=linux-2.6.32-431.el6.s390x

Then you can issue the command /sbin/zipl to restore the boot configuration to its original value. Step 9 on page 95 shows the original configuration values.

6. Detach the volume from the instance using the nova volume-detach command so that the volume can be used in future deploys:

nova volume-detach instance_id volume_id

where:

instance_id
       The instance identifier of the virtual server instance.

volume_id
       The identifier of the volume.

For example:

cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Display Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| fddccbf1-1819-4019-ad9b-175279d3f431 | in-use | None         | 3    | None        | false    | 20941149-2927-462c-aaf9-42b759de2f3b |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

nova volume-detach 20941149-2927-462c-aaf9-42b759de2f3b fddccbf1-1819-4019-ad9b-175279d3f431

cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| fddccbf1-1819-4019-ad9b-175279d3f431 | available | None         | 3    | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

zIPL v1.8.2-51.el6 interactive boot menu

0. default (linux-2.6.32-431.el6.s390x:)

1. linux

2. linux-2.6.32-431.el6.s390x:

Note: VM users please use '#cp vi vmsg <input>'

Please choose (default will boot in 5 seconds):

Figure 49. IPL Menu


Cloning a New Volume

The following sections tell you how to:
v Create a snapshot from an existing volume.
v Clone a volume from a snapshot.

Creating a Volume Snapshot

OpenStack cinder allows you to create a snapshot of a volume, which is an exact capture of what a volume looked like at a particular moment in time, including all its data. You can then use this snapshot to create volumes.

Note: On a CMA system, issue the command source $HOME/openrc to set OpenStack-related environment variables before you issue any OpenStack commands.

To create a snapshot from an existing volume, issue the following cinder command:

cinder snapshot-create volume_ID

where:

volume_ID
       is the identifier of the volume to be captured.

Note: When creating a snapshot, the source volume must not be bootable; that is, its Bootable property must be false.

In the following example:
v The cinder list command verifies that the volume is not bootable (the Bootable column shows "false").
v The cinder snapshot-create command is used to create the snapshot.
v The cinder snapshot-list command shows the results.

Once the snapshot is created, you can clone as many volumes as you want. Any new volumes will contain the same data as the source volume. Note that the snapshot can only be created on the same SVC where you created your existing volume.

cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| fddccbf1-1819-4019-ad9b-175279d3f431 | available | None         | 3    | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

cinder snapshot-create fddccbf1-1819-4019-ad9b-175279d3f431
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2015-01-16T08:45:47.939265           |
| display_description | None                                 |
| display_name        | None                                 |
| id                  | 7a77e8b4-e75f-42a7-b37e-cf2895e71032 |
| metadata            | {}                                   |
| size                | 3                                    |
| status              | creating                             |
| volume_id           | fddccbf1-1819-4019-ad9b-175279d3f431 |
+---------------------+--------------------------------------+

cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+--------------+------+
| ID                                   | Volume ID                            | Status    | Display Name | Size |
+--------------------------------------+--------------------------------------+-----------+--------------+------+
| 7a77e8b4-e75f-42a7-b37e-cf2895e71032 | fddccbf1-1819-4019-ad9b-175279d3f431 | available | None         | 3    |
+--------------------------------------+--------------------------------------+-----------+--------------+------+


Cloning a Volume from a Snapshot

Note: On a CMA system, issue the command source $HOME/openrc to set OpenStack-related environment variables before you issue any OpenStack commands.

To clone a new volume from an existing snapshot, issue the following cinder command:

cinder create --snapshot snapshot_ID size

where:

snapshot_ID
       is the identifier of the snapshot to be captured.

size is the size of the snapshot. Because a snapshot is created from an original volume, the size of the snapshot is the same as the size of the original volume. You can determine the original volume size by issuing the cinder list command for that volume.
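For example, to clone a 3 GB volume from the snapshot created in the previous section (using the IDs shown in the earlier cinder snapshot-list output):

cinder create --snapshot 7a77e8b4-e75f-42a7-b37e-cf2895e71032 3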

Booting a Virtual Server Instance from a Volume

Note: On a CMA system, issue the command source $HOME/openrc to set OpenStack-related environment variables before you issue any OpenStack commands.

1. You must define metadata and set the volume bootable before you can boot an instance. The metadata is used during the process of booting from this volume. Set the following values as indicated:

image has the same value as in the original Linux system. See “Post-Installation Tasks” on page 112.

ramdisk
       has the same value as in the original Linux system. See “Post-Installation Tasks” on page 112.

root has a value in the format /dev/sdaN, where N is the partition number on which the /root directory resides.

[root@gpok170 ~]# cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+--------------+------+
| ID                                   | Volume ID                            | Status    | Display Name | Size |
+--------------------------------------+--------------------------------------+-----------+--------------+------+
| 7a77e8b4-e75f-42a7-b37e-cf2895e71032 | fddccbf1-1819-4019-ad9b-175279d3f431 | available | None         | 3    |
+--------------------------------------+--------------------------------------+-----------+--------------+------+

[root@gpok170 ~]# cinder create --snapshot 7a77e8b4-e75f-42a7-b37e-cf2895e71032
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2015-01-16T09:02:34.25292            |
| display_description | None                                 |
| display_name        | None                                 |
| encrypted           | False                                |
| id                  | 696a3b66-9761-408f-a5af-085f3403668f |
| metadata            | {}                                   |
| size                | 3                                    |
| snapshot_id         | 7a77e8b4-e75f-42a7-b37e-cf2895e71032 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+

[root@gpok170 ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 696a3b66-9761-408f-a5af-085f3403668f | available | None         | 3    | None        | false    |             |
| fddccbf1-1819-4019-ad9b-175279d3f431 | available | None         | 3    | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+


os_version
       the operating system version. Can be, for example, "rhel6.5", "rhel7", or "sles11.2".

Use the cinder metadata command to set these values, and the cinder metadata-show command to display them, as shown in the following examples.

2. Set the volume to be bootable using the cinder set-bootable command:

cinder set-bootable volume_ID true

where:

volume_ID
       is the identifier of the volume. You can obtain this value by using the cinder list command.

3. To boot an instance from this volume, issue the following command:

nova boot --flavor flavor --block_device_mapping vda=volume_ID:::0 --nic net-id=net-uuid name

where:

flavor is the ID of the flavor that the instance will use. You can obtain this value by using the nova flavor-list command.

volume_ID
       is the identifier of the volume. You can obtain this value by using the cinder list command.

net-id=net-uuid
       is the identifier of the network the instance will use. You can obtain this value by using the neutron net-list command.

name is the display name you define for the instance. It is recommended that you use a meaningful name.
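For example, a boot request for the cloned volume shown above might look like the following. The flavor ID, network UUID, and instance name here are illustrative placeholders; substitute the values reported by nova flavor-list and neutron net-list in your environment.

nova boot --flavor 1 --block_device_mapping vda=696a3b66-9761-408f-a5af-085f3403668f:::0 --nic net-id=11111111-2222-3333-4444-555555555555 bfvtest1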

[root@gpok170 ~]# cinder metadata 696a3b66-9761-408f-a5af-085f3403668f set image=/boot/vmlinux-2.6.32-431.el6.s390x
[root@gpok170 ~]# cinder metadata 696a3b66-9761-408f-a5af-085f3403668f set ramdisk=/boot/initramfs-2.6.32-431.el6.s390x.img
[root@gpok170 ~]# cinder metadata 696a3b66-9761-408f-a5af-085f3403668f set root=/dev/sda2
[root@gpok170 ~]# cinder metadata 696a3b66-9761-408f-a5af-085f3403668f set os_version=rhel7
[root@gpok170 ~]# cinder metadata-show 696a3b66-9761-408f-a5af-085f3403668f

+-------------------+------------------------------------------+
| Metadata-property | Value                                    |
+-------------------+------------------------------------------+
| image             | /boot/vmlinux-2.6.32-431.el6.s390x       |
| os_version        | rhel7                                    |
| ramdisk           | /boot/initramfs-2.6.32-431.el6.s390x.img |
| root              | /dev/sda2                                |
+-------------------+------------------------------------------+

An example of metadata for SLES is the following:

+-------------------+---------------------------------------------+
| Metadata-property | Value                                       |
+-------------------+---------------------------------------------+
| image             | /boot/image-3.0.76-0.11-default             |
| os_version        | sles11.2                                    |
| ramdisk           | /boot/initrd-3.0.76-0.11-default,0x2000000  |
| readonly          | false                                       |
| root              | /dev/sda2                                   |
+-------------------+---------------------------------------------+


Chapter 8. Alternative Deployment Provisioning

This section discusses the z/VM alternative deployment provisioning (cloning) support that you can configure to implement OpenStack compute instance deployment requests.

Overview

OpenStack technology provides the ability to create virtual servers (virtual machines and the software running within them) from image files. However, OpenStack places a number of restrictions on the customer installation. These restrictions can prevent a customer installation from taking full advantage of the functionality and resources that z/VM provides to virtual servers. In addition, OpenStack forces customers to manage virtual servers entirely through OpenStack instead of leveraging support that the customer has already invested in and developed. The alternative deployment provisioning support allows a customer to use z/VM-based virtual server cloning to create a copy of a master virtual machine and its minidisks in order to quickly deploy virtual servers. In addition, it takes advantage of IBM FlashCopy to make quick copies of an ECKD minidisk.

An installation creates and configures a master virtual server with an operating system and any desired middleware installed. The installation can add z/VM resources such as linked minidisks. It then logs off the z/VM guest running the virtual server to stabilize the minidisks. Following this, it can create a clone (copy) of the master virtual server using existing OpenStack APIs, the OpenStack Horizon GUI (or other supported GUIs). The use of cloning is transparent to all invokers.

To accomplish this transparency, alternative deployment provisioning enhances the z/VM OpenStack plugin and xCAT to recognize OpenStack deployments that should use cloning instead of OpenStack provisioning. The installation creates a special disk image using xCAT commands and places it into the OpenStack glance image repository. When an OpenStack virtual server deployment request specifies the special image, the underlying xCAT support recognizes this and uses the image name to identify the master virtual machine. Deployment then proceeds by performing a clone function instead of the normal OpenStack deployment process.

Figure 50 on page 120 shows an example of a customer installation. The customer has created a Linux server within the Master virtual machine that has two minidisks and links to three other disks in two different virtual machines, UserA and UserB. The virtual server is configured to a virtual switch and the Linux operating system obtains its TCP/IP address and host information from a DHCPD/DDNS server when it boots. This figure uses:

v A solid line (no arrowheads) to show minidisks that are owned by a virtual machine; for example, UserA owns 191, UserB owns 291 and 292
v A solid arrow from the virtual machine that is linking to a disk owned by another virtual machine to show linked disks; for example Master links to UserA’s 191 disk
v A dashed double arrow to show TCP/IP communication flows; for example the DHCP flows.


With alternative deployment provisioning, the customer is able to use OpenStack to deploy a special virtual image that instructs xCAT to:

v Create a new virtual machine from the Master virtual machine’s directory entry, and thus get access to the link information and virtual switch information within it, among other resource information. This includes setting up links to the disks owned by UserA and UserB (above: 191, 291, and 292).
v Use FlashCopy to make copies of the minidisks owned by Master (above: 150, 151) and add them to the cloned virtual machine. Note that if xCAT cannot use FlashCopy, then it formats the target disk for Linux and uses the Linux dd command to move the disk data. For this reason, non-Linux minidisks must be on FlashCopy-enabled minidisks. Otherwise, the dd copy will not produce a usable minidisk.
v Log on the cloned virtual machine:
– Customer-configured software, not z/VM or xCAT, is responsible for communicating with the DHCPD/DDNS server to obtain TCP/IP information for the cloned virtual server.
– Customer-configured software is responsible for all External Security Manager (ESM) controlled access to a cloned virtual server and the resources that it uses. This can be done prior to deployment, or during the cloned virtual server’s first boot.
v Update the xCAT tables after the cloned virtual server has started so that xCAT can communicate with the cloned virtual server and OpenStack knows about the existence of the virtual server in order to provide basic functionality such as power on/off.

Planning and Requirements

This section documents considerations and requirements for defining an environment to use this cloning support. It is composed of considerations specific to:
v The z/VM host where deployment occurs
v The Cloud Manager Appliance which handles the cloning
v The master virtual machine
v The resulting clone virtual machine.

Figure 50. Overview of Alternative Deployment Provisioning


z/VM Host

v IBM recommends using FlashCopy, as this provides the fastest cloning of images. If FlashCopy support is not available or fails, then xCAT attempts to copy the disks using the Linux dd command line utility. Linux dd does not support non-Linux minidisks -- for example CMS minidisks -- so when non-Linux minidisks are used either FlashCopy must work or the deployment will fail.
v In order to use FlashCopy:
– You must enable the FlashCopy function on the DASD controller. Note that some controllers limit the number of concurrent FlashCopy operations in progress. The actual value of the limit depends on the hardware and/or microcode level.
– All disks in the directory entries of a single master and its clones must be on the same DASD controller. Multiple masters and their clones may use a single DASD controller.
– You should define a separate directory manager disk pool for each DASD controller, as described below.
– All disks must be ECKD disks.
v The installation must define a directory manager disk pool containing disks that the directory manager will partition into minidisks and assign to the clones. Multiple disk pools may be used. Each disk pool should contain only one type of disk; for example, ECKD or FBA. FlashCopy-capable ECKD disks should be in different pools than those that are not FlashCopy-capable. Although IBM supports a mixed FlashCopy and non-FlashCopy disk pool, a multiple minidisk virtual machine deployment might get one FlashCopy disk and another non-FlashCopy disk from the disk pool. This lessens the benefit of FlashCopy, as the overall deployment process, while fast, would still take longer than expected because FlashCopy creates one disk very fast and the other takes longer.
v If the installation uses an External Security Manager (ESM) to control access to cloned virtual machines and their resources, then the installation is responsible for establishing that control. xCAT does not establish ESM control of virtual machines that it manages. The customer should establish any ESM controls either prior to deployment or as part of the initial logon of the clone virtual machine following its deployment.

Alternative Deployment Provisioning and the Cloud Manager Appliance

v Alternative Deployment Provisioning enhances the xCAT management node server and the z/VM OpenStack plugins running in both the OpenStack controller and OpenStack compute node. The support is delivered in a CMA running in the controller role (which contains an xCAT management node server, OpenStack controller services and compute node services), or the compute role (which contains OpenStack compute node services and is configured to talk to a related CMA running in controller mode). You cannot use it in a stand-alone xCAT server because the necessary OpenStack services are not running within the same virtual server as the xCAT server.
v OpenStack Horizon provides access to the OpenStack controller environment.

Master Virtual Machine

v The installation must define the master virtual machine on the z/VM host where the cloning will occur.
v The installation must define the master virtual machine as a USER entry in the directory and not as an IDENTITY.
v The master virtual machine must have at least one NICDEF statement with this string in the statement (blanks are significant):

" LAN SYSTEM "

The NICDEF statement can be commented out but must contain the strings:

" NICDEF "
" LAN SYSTEM "


v The installation should log off the master virtual machine during all cloning operations. xCAT will deactivate (that is, log off) the virtual machine if it is logged on at the start of a cloning operation. The deactivation takes place according to the system default signal timeout value.
v All minidisks owned by the master virtual machine must be accessible to the z/VM host where xCAT performs the cloning operation.
v If the source virtual machine has non-Linux minidisks, both the master virtual machine and the clone virtual machine disks must support FlashCopy. These FlashCopy requirements include the following: disks must be ECKD disks, all master and clone disks must be on the same controller, and the controller must be enabled for FlashCopy. If xCAT cannot perform FlashCopy successfully, xCAT formats the disks and subsequently copies them using the Linux dd command. This command does not support non-Linux formatted minidisks -- for example CMS minidisks -- so when non-Linux minidisks are used either FlashCopy must work or the deployment will fail.
v IBM recommends that MDISKs owned by the master virtual machine be on a disk controller that supports FlashCopy.
v xCAT does not clone dedicated disks. The dedicate statement is copied directly to the clone, so only the master or at most one clone can be logged on at a time if dedicated disks exist in the master’s directory entry.
v The installation must install the xCAT public SSH key on the master virtual server so that the xCAT Management Node can communicate with clones created using that master. See the “Unlocking Systems for Discovery” section in the “Setting up and Configuring the Server Environment” chapter of z/VM: Systems Management Application Programming for more information on installing the key. xCAT copies the master’s Linux disks to the clone. If the xCAT management node can communicate with the master virtual server then it will be able to communicate with its cloned virtual servers. The xCAT Management Node accesses each cloned virtual server in order to obtain the hostname set during post-boot customization.

Clone Virtual Machine

v The IP address of the clone virtual machine must be accessible to the xCAT management node.
v All devices on the master virtual machine must be accessible to the clone on the z/VM host where xCAT deploys it. A z/VM multi-system cluster where a master virtual machine only has access to resources on a specific z/VM cluster member would require that xCAT create the clone on the same z/VM cluster member in order for the clone to have access to the same resources as the master.
v Grants for switches used by the clone must occur within the User entry of the z/VM User Directory for the clone virtual machine, or the customer installation must handle them outside of the xCAT environment.
v The customer installation is responsible for providing and configuring the IP address and host name for the virtual server when the virtual machine logs on. The OpenStack z/VM support detects a successful deployment using the xCAT nodestat command. The nodestat command obtains the IP address of the clone from z/VM’s ARP cache and the host name using SSH to access the clone. The customer installation is responsible for ensuring that the nodestat command successfully completes after a power on of the clone before the expiration of the time specified on the zvm_reachable_timeout property in the nova.conf file. (See “Settings for Nova” on page 155 for more information on this property and how to modify it.) The default value for this property is 300 (5 minutes).

Configuration – Send in the Clones

The following sections discuss setting up a dummy image to import into glance, setting up the master virtual machine, updating the DOCLONE COPY file, and instructing xCAT to re-read the file.


Creating a Dummy Image

Issue the mkdummyimage command to create a dummy image. The dummy image is stored by default in the /install directory with the name dummy.img. This section documents the simplest form of the command. For more information on this command, see z/VM: Systems Management Application Programming.

Bring up the xCAT GUI, authenticate into xCAT, and go to the Script panel for the xCAT MN node (xCAT), as described in “Using the Script Panel in the xCAT User Interface” in z/VM: Systems Management Application Programming and in Appendix G, “Common Procedures,” on page 179. If the z/VM host was configured to use an ECKD disk pool (specified in the openstack_zvm_diskpool property in DMSSICMO COPY), enter the following in the Script box:

/opt/xcat/bin/mkdummyimage

If the z/VM host was configured to use an FBA disk pool (specified in the openstack_zvm_diskpool property in DMSSICMO COPY), enter the following in the Script box:

/opt/xcat/bin/mkdummyimage --fba

Then press the Run button.

Add the Dummy Image to Glance

The image is imported into glance so that OpenStack functions can use it when deploying systems.

Adding the Dummy Image Using the Horizon GUI

Access the Horizon GUI with the following URL:

https://XCAT_MN_Addr/dashboard/project

1. Authenticate into the GUI using the following values:

User ID: admin
Password: The current password used to log in as "admin" with the Horizon GUI. The original value of the password for "admin" was specified in the cmo_admin_password property in the DMSSICMO COPY file. You should have changed this password the first time you logged in as "admin" with the Horizon GUI.

2. Navigate to the Images panel (using the navigation frame on the left: Admin-->Images) and choose +Create Image to bring up the image import dialog box, as shown in Figure 51 on page 124.


Input the image information and then select Create Image to import the information, as shown in Figure 52.

Note that for a z/VM type image:

Figure 51. Cloud Management Dashboard

Figure 52. Create an Image Screen


v "Name" is the human-readable image name. You will use this name later to update/create theDOCLONE COPY file. The image name should not contain semicolons, equal signs, or blanks.

v For "Source Type" select “URL”.v "Location" should be the URL of the image file. For example:

https://xcat_mn_addr/install/dummy.img

Construct the URL by concatenating the following pieces of information:– https://– xcat_mn_addr is the IP address value of the xCAT MN node which contains the dummy image

that was created in a previous section.– The file path and name of the dummy image file created by the mkdummyimage command.

v "Format" must be "Raw".v "Architecture" can be left blank.v "Minimum Disk" can be set at 0.v "Minimum RAM" can be set at 0.v Choose "Public" for "Visibility" and leave "Protected" unchecked.

3. Click "Next>" and choose the "+" sign by "z/VM Image Properties". This fills in some Image Metadataproperties automatically. Fill in some information for os_version as shown in Figure 53; for example:"rhel6.7". This value is not related to the level of Linux running in your master guest. Then click"Create Image".

4. Wait for the image import to complete. You will then see Figure 54 on page 126. Note that by defaultOpenStack will create your image under the Public project.

Figure 53. Image Metadata Screen


Setting Up the Master

The z/VM userid of a master virtual server requires some configuration so that xCAT can access it during the cloning process. Once the virtual server has been configured as necessary for its intended server role (for example, you have installed any and all software that you want in it and have configured it to access the necessary z/VM resources), it must be unlocked to xCAT, and the virtual machine must be logged off.

Unlocking the Master

The master virtual server should be unlocked in the same manner as discussed in the “Unlocking Systems for Discovery” section in the “Setting Up and Configuring the Server Environment” chapter of z/VM: Systems Management Application Programming. This allows the xCAT Management Node to manage the master and all clones created from it.

Logging Off the Master

Your installation should log off the master virtual machine whenever a cloning operation is running. This ensures that the virtual server's software is not changing the disk contents while the disk is being copied.

Creating or Updating the DOCLONE COPY File

You create the DOCLONE COPY file on the MAINT 193 disk. Each line in the file defines a relationship between an image name in glance and the z/VM system that controls the cloning process. The copy file is a multi-line CMS file. IBM recommends that you define the file as a variable length file to avoid length constraints on the property values specified in the lines.

Each line of the file can be one of the following:
v A comment line which begins with /* in the first column.
v A blank line.
v An image line.

Figure 54. Results of Creating an Image


The image line uses key/value pairs to provide the information. An equals sign separates the key from the value and a semicolon separates the key/value pairs. The keys and values are case insensitive. The values should not contain semicolons, equal signs, or blanks. In particular, these characters cannot be used in the image name that is defined to glance.

The following is an example of a DOCLONE COPY file. You specify all key/value properties for an image on a single line.

/* imgflash uses FlashCopy to create the cloned disks. */
IMAGE_NAME=imgflash; CLONE_FROM=testeckd; ECKD_POOL=FLASH;

/* imgFBA is used when the master contains FBA minidisks. */
IMAGE_NAME=imgFBA; CLONE_FROM=testFBA; FBA_POOL=POOLFBA;

/* This image is for a master system which has a combination of minidisks. */
/* While not recommended, because it is just plain different, it is supported. */
IMAGE_NAME=imgBoth; CLONE_FROM=testFBA; ECKD_POOL=POOLECKD; FBA_POOL=POOLFBA;

The supported keys for an image line are:

IMAGE_NAME
       Name of the image in glance. This name must be unique because z/VM uses the name as a key for the image lines. The Horizon GUI shows the related column heading for this value as “Image name”.

CLONE_FROM
       z/VM user ID of the master virtual machine.

ECKD_POOL
       Name of the directory manager disk pool (region) xCAT will use when ECKD disks are requested for a clone virtual machine. This property is required if the master, as specified by the CLONE_FROM key, contains ECKD minidisks.

       Note: For FlashCopy cloning, the directory manager pool specified as the ECKD_POOL value must contain ECKD disks on the same controller as the disks contained in the CLONE_FROM user ID. The installation must also enable the controller’s FlashCopy function. If FlashCopy does not work, then xCAT will format the clone’s disk for Linux and then use the Linux dd command to copy the disks. If you have multiple controllers supporting the z/VM host, then a master virtual machine should have all of its disks on the same controller AND the disk pool used for cloning the master should have disks that are on the same controller. This is because the controller provides the FlashCopy feature, and so using FlashCopy is only possible between disks on the same controller.

FBA_POOL
       Name of the directory manager disk pool (region) xCAT will use when FBA disks are requested for a clone virtual machine. This property is required if the master, as specified by the CLONE_FROM key, contains FBA minidisks.

       Note: FBA minidisks are always copied using Linux dd commands. FlashCopy is only available for ECKD minidisks.

You can create and edit the DOCLONE COPY file from the MAINT or MAINTvrm z/VM user ID, where vrm is the version, release, and modification level. For example: MAINT640. The file must reside on the MAINT 193 disk in order for the xCAT MN to access the data.

Reading the DOCLONE COPY File

The xCAT management node caches the contents of the DOCLONE COPY file from the MAINT 193 disk, but xCAT only updates its cache when you issue the zxcatCopyCloneList.pl command. You must issue this command whenever the file is updated. Restarting SMAPI will not update xCAT’s cache entry. For more information on this command, see z/VM: Systems Management Application Programming.


Bring up the xCAT GUI, authenticate into xCAT, and go to the Script panel for the xCAT MN node (xCAT), as described in “Using the Script Panel in the xCAT User Interface” in z/VM: Systems Management Application Programming and in Appendix G, “Common Procedures,” on page 179.

In the Script box, enter:

/opt/xcat/bin/zxcatCopyCloneList.pl

Then press the Run button.

The zxcatCopyCloneList.pl script checks the DOCLONE COPY file for any syntax errors. If the command runs successfully, you will see messages similar to those in Figure 55.

Creating a Dummy Subnet

OpenStack contains code that attempts to manage TCP/IP addresses. OpenStack "thinks" it is managing the addresses of cloned virtual servers, but in reality, by using cloning you have chosen to manage the addresses actually assigned to cloned systems. When the OpenStack deployment function creates a system with alternative deployment provisioning, you must define a subnet that OpenStack "thinks" it will manage.

Follow the OpenStack procedures to create the dummy subnet, described in the following sections. When you deploy virtual servers later using OpenStack, use this subnet on the deployment request.

Creating a Dummy Subnet with the Horizon GUI

Access the Horizon GUI with the following URL:

https://XCAT_MN_Addr/dashboard/project

1. Authenticate into the GUI using the following values:

User ID: admin
Password: The current password used to log in as "admin" with the Horizon GUI. The original value of the password for "admin" was specified in the cmo_admin_password property in the DMSSICMO COPY file. You should have changed this password the first time you logged in as "admin" with the Horizon GUI.

2. Navigate to the Project-->Network-->Networks tab in the left navigation frame to bring up the Networks panel. Click on the +Create Network button, as shown in Figure 56 on page 129.

DOCCLONE copied to temporary file /var/opt/xcat/doclone.nS12345jq successfully
Validating /var/opt/xcat/doclone.nS12345jq contents for proper syntax ...
Validation completed. Temporary file copied to /var/opt/xcat/doclone.txt.
It is ready to use.

0

Figure 55. Sample zxcatCopyCloneList.pl Command Output


3. Fill in the Network Name with a name to associate with the network, and leave the Admin State as "UP", as shown in Figure 57.

4. Click on the Next button to proceed to the Subnet dialog box, as shown in Figure 58 on page 130. Fill in the subnet information:
v Subnet name – Name to refer to the subnet.
v Network Address – CIDR form of the address (for example, 11.11.0.0/24). Specify a large address range in order to avoid the need to manage the dummy addresses.
v IP Version – Keep the "IPv4" value.
v Check the "Disable Gateway" checkbox because this is not a real subnet and will not need a gateway.

Figure 56. Networks Tab of Cloud Management Dashboard Screen

Figure 57. Create Network Dialog


5. Select the Subnet Details tab to move to the Subnet Detail dialog box, as shown in Figure 59. Leave all of the input fields on the Subnet Detail dialog box empty and uncheck the Enable DHCP checkbox. Select the Create button to create the subnet.

6. After definition of the subnet to OpenStack, the Networks tab is displayed and the newly defined network is shown. See Figure 60 on page 131.

Figure 58. Subnet Dialog

Figure 59. Subnet Details Dialog


Creating a Flavor

OpenStack contains code that determines the size and number of CPUs of a guest according to the "flavor" used during guest creation. In the case of cloned virtual servers, properties such as virtual CPUs, memory, and disk space are determined by the master. As with the dummy subnet and image, you must define a dummy flavor to use when creating your virtual servers. Follow the OpenStack procedures to create the dummy flavor, as described in the following section. When you deploy virtual servers later using OpenStack, use this flavor on the deployment request.

Creating a Dummy Flavor with the Horizon GUI

Access the Horizon GUI with the following URL:

https://XCAT_MN_Addr/dashboard/project

1. Authenticate into the GUI using the following values:

User ID: admin
Password: The current password used to log in as "admin" with the Horizon GUI. The original value of the password for "admin" was specified in the cmo_admin_password property in the DMSSICMO COPY file. You should have changed this password the first time you logged in as "admin" with the Horizon GUI.

2. Navigate to the Admin-->System-->Flavors tab in the left navigation frame to bring up the Flavors panel. Click on the +Create Flavor button, as shown in Figure 61 on page 132.

Figure 60. Networks Tab with Results Shown


3. Input the flavor information, as shown in Figure 62 on page 133. Note that for a basic flavor:
v Name is the human-readable flavor name.
v ID can be left as "auto".
v vCPUs and RAM (MB) can be any value, but both are recommended to be 1. The real numbers for the virtual server will be determined by the master.
v Root disk (GB), Ephemeral Disk (GB) and Swap Disk (MB) should be set to 0. The real numbers for the virtual server will be determined by the master.
v RX/TX Factor can be left as 1. This property is used only for Xen or NSX based systems, so it will not affect the network transfer rates of your virtual server on z/VM.

Figure 61. Flavors Screen


4. Go to the Flavor Access tab and select, using the "+" button, the projects that you want to have access to this flavor, as shown in Figure 63. IBM recommends making the flavor "Public" so that all users can see this flavor.

5. After defining the flavor to OpenStack, the flavor should appear in the Flavors tab, as shown in Figure 64 on page 134.

Figure 62. Create Flavor Screen

Figure 63. Create Flavor Screen – Granting Public Access


Deploying Virtual Servers

The process of deploying virtual servers can be driven by using OpenStack commands or by using the Horizon GUI. This section guides you through doing a deploy using the GUI.

Deploying Virtual Servers Using the Horizon GUI

Access the Horizon GUI with the following URL:

https://XCAT_MN_Addr/dashboard/project

1. Authenticate into the GUI using the following values:

User ID: admin
Password: The current password used to log in as "admin" with the Horizon GUI. The original value of the password for "admin" was specified in the cmo_admin_password property in the DMSSICMO COPY file. You should have changed this password the first time you logged in as "admin" with the Horizon GUI.

2. Navigate to the Project-->Compute-->Instances tab in the left navigation frame to bring up the Instances panel, as shown in Figure 65 on page 135.

Figure 64. Results of Creating a Flavor


3. Select the Launch Instances button to bring up the Launch Instance dialog box.

a. Fill in the appropriate information, starting with the Details tab, as shown in Figure 66. Do not use hyphens in the instance name if your installation uses alternative deployment provisioning.

In the dialog box under the Details tab, fill in the following fields:
v For "Availability Zone" choose the zone in which your z/VM system resides.
v "Instance Count" is the number of instances to be deployed.

b. Click "Next>" to go to the Source tab, and fill in the appropriate information about the dummy image, as shown in Figure 67 on page 136. Choose "Image" for "Select Boot Source" and click the "+" button next to your dummy image to select it. Select "No" for "Create New Volume".

Figure 65. Instances Tab in Cloud Management Dashboard

Figure 66. Details Tab


c. Click "Next>" to go to the Flavor Tab, fill in the appropriate information about the dummy flavor,as shown in Figure 68. Click the "+" button next to your dummy flavor to select it.

d. Click "Next>" to go to the Networks tab. Click the "+" button next to your dummy network toselect it, as shown in Figure 69 on page 137.

Figure 67. Launch Instances Tab, Source

Figure 68. Launch Instances Tab


e. Move to the Security Groups tab. Make sure that the default security group is selected, as shown in Figure 70.

4. Select the Launch button to deploy the new virtual server. OpenStack will add the new instance to the Instances panel with a Task of Spawning, as shown in Figure 71 on page 138.

Figure 69. Networks Tab

Figure 70. Security Groups Tab


When the deployment completes, the Status will change to "Active" and Power State will change to"Running". Also note that when a clone is powered on, the host name is obtained from the runningcloned virtual server, and OpenStack’s human-readable instance name is replaced with the originalhuman-readable name followed by the user ID/xCAT node name followed by the host name, eachseparated by a hyphen. For example:

RosemaryCloney-osp0007e-clons198.endicott.ibm.com

Hyphens should not be used in the human-readable names because the z/VM OpenStack plugin updates the human-readable name each time the clone powers on, keeping only the portion before the first hyphen and then adding in the xCAT node name and DNS name.
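For reference, the same deployment can also be driven from the command line. The following is a minimal sketch using the nova CLI, where dummy_flavor, dummy_image, and the network UUID stand for the dummy resources created earlier in this chapter (the names are hypothetical placeholders):

nova boot --flavor dummy_flavor --image dummy_image --nic net-id=<network_uuid> test001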

Figure 71. Instances Tab with Results Shown


Appendix A. Installation Verification Programs

Installation verification programs (also called the IVP) are included in CMA to validate the installation and configuration of the CMA environment. They are invoked from the xCAT GUI, or run periodically each day by scheduled jobs, to ensure that the system is running properly. An IVP run is either a full end-to-end validation from the compute node to the OpenStack controller through to the underlying xCAT components (xCAT MN and ZHCP), or a shorter, basic IVP which concentrates on the xCAT components.

The full IVP consists of two main phases:
v Validation of the compute node to ensure that the configuration settings necessary for a compute node related to z/VM are properly set. This is known as running the preparation script, because it validates and prepares a driver script that is used to initiate the second phase.
v Validation of the controller and xCAT components to ensure that they work with the compute node and with the z/VM Systems Management API (SMAPI) servers. The information gathered from the compute node is used as part of this validation.

The basic IVP performs only the second phase of the IVP and does not use information from the OpenStack compute node configuration. In most environments, the full IVP can be invoked from xCAT. For information on automating IVP operation, see the "Running the Installation Verification Programs" section in the "Setting Up and Configuring the Server Environment" chapter in z/VM: Systems Management Application Programming. Note that your environment might not allow the first phase validation to be driven from xCAT. You will see a warning or informational message if the full IVP cannot validate the compute node.

This appendix discusses how to install and run the preparation script to build a driver script, and where to place the constructed driver script so that the rest of the full IVP can be run. This appendix should be needed only when a full IVP driven from CMA is unable to automatically drive the preparation script.

Location of the IVP Programs

IBM ships the preparation script (prep_zxcatIVP_RELEASE.pl) with the CMA on z/VM. RELEASE is the name of the OpenStack release in upper case. An example of the name of this script is prep_zxcatIVP_NEWTON.pl. This script is located in the /opt/xcat/share/xcat/tools/zvm directory.

Installing the IVP Preparation Script

Both CMAs and non-CMA systems running OpenStack services get this script from the node where xCAT is running. CMA installs a directory with these scripts when you configure CMA to run the xCAT service. The driver program built by the Perl script validates that the properties in the OpenStack configuration files are appropriate for interacting with the xCAT server and the z/VM host.

Running the IVP Preparation Script on the Compute Node

This section tells you how to manually run the IVP preparation script. Follow this procedure only if you are unable to use the automated procedures described in the "Running the Installation Verification Programs" section in the "Setting Up and Configuring the Server Environment" chapter in z/VM: Systems Management Application Programming.

The IVP preparation script (prep_zxcatIVP_RELEASE.pl) prepares and builds a z/VM xCAT IVP driver program using the information from the configuration files in the compute node.


The following configuration files are scanned for input:
/etc/cinder/cinder.conf
/etc/nova/nova.conf
/etc/neutron/neutron.conf
/etc/neutron/plugins/ml2/ml2_conf.ini
/etc/neutron/plugins/zvm/neutron_zvm_plugin.ini
/etc/ceilometer/ceilometer.conf

Note: Some non-CMA compute nodes support running multiple sets of compute node services in the same server to allow the same server to support multiple z/VM hosts. In order for the configuration to be unique to the appropriate z/VM host which the compute node supports, the startup scripts in /etc/init.d specify unique configuration files. The uniqueness is provided by adding a hyphen and the configured name of the z/VM host to the configuration file name. For example, /etc/nova/nova-hostzvm.conf is the unique nova.conf file for a z/VM host identified as "hostzvm".

The preparation script performs these steps:
1. Scans the configuration files to gather the xCAT-related operands.
2. Validates the settings in the configuration files and warns of defaults being used or unexpected settings in the operands (for example, properties for two different services that should contain the same value but do not).
   Three types of messages are created: normal processing messages indicating the status of the preparation run, warning messages indicating possible problems detected in the configuration file properties, and information messages that can indicate defaults that will be used in the normal operation of the compute node. We suggest you review the information messages, in addition to the warning messages, because they may indicate a default that you had intended to override.
3. Constructs the driver program.
   Note that an existing program of the same name is renamed to end with the string "_old".

The default name of the resulting driver program is as follows:

zxcatIVPDriver_IPaddress[_hostname].sh

where:

IPaddress
    is the IP address of the system where the driver was prepared.

hostname
    is the value specified on the --host option. The --host option and other options are described in "IVP Preparation Script Syntax."

The following are sample driver file names:
zxcatIVPDriver_9.123.345.91.sh
zxcatIVPDriver_9.123.345.91_hostzvm.sh

IVP Preparation Script Syntax

The preparation script is specified using the following format:
prep_zxcatIVP_RELEASE.pl [options]

where RELEASE is the name of the OpenStack release in upper case, and the supported options are:

-c filename1[,filename2,...]
--config filename1[,filename2,...]
    List of configuration files to be processed. This list overrides the default values. Each configuration file is identified by an eye-catcher indicating which configuration file is being overridden, followed by a colon and the fully qualified file specification. Multiple configuration files may be specified by separating them with a comma (one file per eye-catcher). The following are recognized eye-catchers and the files that they override:

cinder_conf
    /etc/cinder/cinder.conf
ml2_conf
    /etc/neutron/plugins/ml2/ml2_conf.ini
neutron_conf
    /etc/neutron/neutron.conf
nova_conf
    /etc/nova/nova.conf
neutron_zvm_plugin_conf
    /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini
ceilometer_conf
    /etc/ceilometer/ceilometer.conf

-d driverProgramName
--driver driverProgramName
    File specification of the driver program to construct, or the name of the directory to contain the driver program.

-h
--help
    Display help information.

-H hostname
--host hostname
    Name of the z/VM host to process. The startup scripts end with this suffix for the specified z/VM system. This option is used with the -i option to indicate which startup scripts should be scanned.

-i
--init-files
    Scan the system for either System V style startup scripts or systemd service files related to OpenStack. The files are scanned for the name of the related configuration file. For System V, the /etc/init.d directory is scanned. For systemd, it will scan /usr/lib/systemd/system/, /run/systemd/system/, and /etc/systemd/system/ service files (.service).

--ignore value
    Blank-separated list of message IDs or message severities to ignore. Ignored messages are not counted as failures and do not produce messages. Instead, the number of ignored messages and their message numbers are displayed at the end of processing. Recognized message severities are: bypass, info, warning, and error.

For example, specifying --ignore PROP02,PROP03 causes messages PROP02 and PROP03 to beignored.

-p
--password-visible
    If specified, password values in the constructed driver script program will be visible. Otherwise, password values are hidden. This option is used when building a driver script to run against an older xCAT MN.

-s servicesToScan
--scan servicesToScan
    Services to scan. Can be "all", "nova", or "neutron".

-v Display script version information.


-V
--verbose
    Display verbose processing messages.

Examples and Usage Notes:

v When the --init-files operand is specified, OpenStack startup scripts are scanned in /etc/init.d. The --host option should be specified to indicate the suffix to use for scripts that are unique to a specific z/VM host. In the following example, "hostzvm" is used as the operand of the --host option:

prep_zxcatIVP_NEWTON.pl --init-files --host hostzvm

As a result, the following startup scripts are scanned:
openstack-ceilometer-api-hostzvm
neutron-server
neutron-zvm-agent-hostzvm
openstack-nova-compute-hostzvm
openstack-cinder-volume

When the --init-files operand is specified without the --host operand, the following scripts are scanned:
openstack-ceilometer-api
neutron-server
neutron-zvm-agent
openstack-nova-compute
openstack-cinder-volume

v The --config option can be used to override default configuration files. For any configuration file that is not overridden, the default value is used. Note that the following example is a one-line command, but is shown on multiple lines here for readability:
prep_zxcatIVP_NEWTON.pl --config "cinder_conf:/etc/cinder/cinderella,
neutron_zvm_plugin_conf:/etc/neutron/plugins/zvm/neutron_zvm_plugin-hostzvm.ini,
nova_conf:/etc/nova/nova-hostzvm.conf"

Specifying the IVP Preparation Script for CMA Systems

For CMAs, the preparation IVP script is run using sudo. You will be prompted to specify the password for the management user (for example, the user specified in DMSSICNF COPY in the XCAT_MN_admin property).

The simplest invocation is:
sudo perl /opt/xcat/share/xcat/tools/zvm/prep_zxcatIVP_RELEASE.pl

where RELEASE is the name of the OpenStack release in upper case.

It is recommended that you create the driver program in the home directory of the CMA user that is invoking the prep_zxcatIVP_RELEASE.pl script. This can be done by either invoking the prep_zxcatIVP_RELEASE.pl program from the home directory of the user, or by specifying the -d option. For example:

sudo perl /opt/xcat/share/xcat/tools/zvm/prep_zxcatIVP_NEWTON.pl -d /install/zvm/ivp/

The -d option and other options are described in “IVP Preparation Script Syntax” on page 140.

Specifying the IVP Preparation Script for a non-CMA Compute Node

An example of invoking the IVP script on a non-CMA compute node is the following:
perl prep_zxcatIVP_RELEASE.pl

where RELEASE is the name of the OpenStack release in upper case.


An example of issuing the command and overriding defaults is the following:
perl prep_zxcatIVP_NEWTON.pl -d driverProgramName

The -d option and other options are described in "IVP Preparation Script Syntax" on page 140.

An example of scanning the startup scripts for a specific z/VM host is the following:
perl prep_zxcatIVP_RELEASE.pl --init-files --host zvmhost

where:

zvmhost
    is the host name appended to the host-unique startup scripts.

Uploading the Driver Script to Your System

The driver script should be uploaded to the xCAT MN so that it can be used as part of the full IVP. Prior to performing these steps, you should have attempted at least one full IVP as discussed in the "Running the Installation Verification Programs" section in the "Setting Up and Configuring the Server Environment" chapter in z/VM: Systems Management Application Programming. Running a full IVP creates the directory where you will place your driver script.
1. Move the driver script to the system from which you will invoke the browser for the xCAT GUI.
2. Bring up the xCAT GUI and move to the Configure-->Files panel to view the folders.
3. Create a directory in the /install directory (for example: /install/ivp). To do this, do the following:

a. Select the New Folder button to create the new folder, if you have not already done so (for example: /install/ivp).

b. Refresh the browser and go back to the Configure-->Files panel, so that you can see the new folder.

4. Move to the new folder by clicking on the folder icon next to the folder name.
5. Click on the Upload button and follow the prompts to select the driver script from the system where you are invoking the browser, and upload the script into the folder.
6. Bring up the Run Scripts panel so that you can move the driver file to its final location.

a. In the xCAT GUI, select Nodes-->Nodes.
b. Navigate to the node that represents the xCAT MN (for example, the "xcat" node).
c. Check the check box by the node name, and select the Actions pulldown. From that pulldown, select the "Run script" choice.
d. In the script box, enter the following commands and then select the Run button. The result will be shown in the yellow status box near the top of the panel.
mv -f /install/ivp/zxcatIVPDriver_7.61.18.197.sh /var/opt/xcat/ivp/
chmod 750 /var/opt/xcat/ivp/zxcatIVPDriver_7.61.18.197.sh
chown root:root /var/opt/xcat/ivp/zxcatIVPDriver_7.61.18.197.sh
where:
/install/ivp/zxcatIVPDriver_7.61.18.197.sh is the location of the driver script that you uploaded.

Messages from the IVP

Error messages, warning messages, and their explanations produced by the IVP preparation script and the IVP driver script (zxcatIVP.pl) are displayed in the output of the IVP run.

You can choose to ignore certain errors or warnings that are not important to your system. For information on ignoring messages, and for more details on how IVP messages are displayed, see the "Running the Installation Verification Programs" section in the "Setting Up and Configuring the Server Environment" chapter in z/VM: Systems Management Application Programming.


Appendix B. Using DDR to Reset the CMA

You can use the CP DASD Dump Restore (DDR) utility to reset the CMA in order to have the CMA configuration tools reconfigure xCAT, OpenStack, and the CMA within the virtual machine that is running the CMA. This action can overwrite existing data and configuration information. For this reason, you should only use this function in rare cases and with extreme caution.

To use DDR to reset the CMA, follow the instructions in the CMA140 FILE file, shipped with the z/VM APAR process, to download and restore the CMA image files to your CMA's 101 and 102 minidisks, and to start the CMA. Starting the CMA completes the reset function. For more information, see "Starting the CMA" on page 35.

In an SSI cluster or other multi-system configuration, if the CMA running in the controller role is reset, then after the controller has finished restarting you must restart each compute role.


Appendix C. Getting Logs from xCAT or ZHCP

Determining the Names of the Log Files

If you need to get log files that have unique names from the ZHCP, for example output from capture image, you first need to determine the names of the files, as follows:
1. Log on to the xCAT user interface as admin and bring up the Script panel for the xCAT MN node (xcat), as described in "Using the Script Panel in the xCAT User Interface" on page 179.
2. In the script box, enter the SSH command to list the files in the log directory, using the IP address of the ZHCP (the default IP is 10.10.10.20):
ssh 10.10.10.20 "ls -al /var/log/zhcp"

where:

/var/log/zhcp
    is the directory where ZHCP-specific log files are maintained. Optionally, you could specify /var/log to see the general log files.

Then press Run.
3. At the top of the page, you will see the names of the files, which you can copy for later use in "Getting the Log Files."

Getting the Log Files

To get a log file from xCAT or ZHCP, you need to navigate to the Logs panel. To get a file, you need to know the name; it could be a well-known name such as messages, or else you may need to run a command to list the file names, as shown in "Determining the Names of the Log Files." Perform the following steps:
1. Log on to the xCAT user interface as admin and go to Nodes/Nodes (as described in the first two steps of "Using the Script Panel in the xCAT User Interface" on page 179).
2. Click the checkbox for either the xCAT or ZHCP. In this example, we're using the ZHCP.

Figure 72. Selecting the Node


3. Select "Configuration/Event log".

4. In the Source log field, enter the file specification of the log file you wish to copy -- for example:
/var/log/zhcp/creatediskimage_trace_2014-02-18-22_10_28.849.txt

5. Click the Retrieve log checkbox, then enter:
/install

in the Log destination field. Then press the Run button.

Figure 73. Selecting “Event log”

Figure 74. Filling in the Logs Fields


You will see in the information box that the file has been copied to the xCAT management node /install directory.

6. Go to the Configure/Files panel and you should see the file you copied. You can click on it and see it in the browser.

Figure 75. Information Box Confirming Copy

Figure 76. Going to the Files Screen


Alternatively, right-click and choose "Save Link As..." if you wish to copy it to your workstation.

Figure 77. Choosing “Save Link As...” on the Files Screen


Appendix D. Checklist for Capture/Deploy/Resize

1. Make sure you can SSH from the MN to the nova compute node without a password. Refer to Chapter 5, "OpenStack Configuration," on page 25. (Example commands for this item and item 3 are sketched after this list.)
2. Make sure you have z/VM images in glance. Refer to "Upload the Image from the Nova Compute Server to Glance" on page 86.
3. If nova-compute is started as a service, the default user is "nova". Ensure that the folder /var/lib/nova/ has the following mode; otherwise the image copy from the xCAT MN to the nova-compute server will fail (note that "server" must have a lowercase "s"):

server:/var/lib/nova
drwxr-sr-x. 2 nova nova 4096 Aug 23 14:25 .ssh

4. Make sure that nova services and neutron services are started.
5. Make sure that the neutron net and subnet are created.
6. If you want to boot with a volume, make sure that cinder services are started and that available volume(s) exist.
7. Note that the host name of a newly deployed instance is case insensitive. For example, in the nova boot command, whether you typed TEST001 or test001 as the instance name, after the deployment finishes and you log on to the target instance, you will see that the host name is test001.
8. Capture should be based on a successfully deployed instance, so if the items above are fulfilled for deploy, they will be ready for capture.
9. To perform a resize, you need to additionally configure the SSH keys between nova compute nodes. Refer to "Synchronizing SSH Key Between Nova Compute Nodes for Resize" on page 55.
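The following is a minimal sketch of commands satisfying items 1 and 3, assuming the compute node is reachable at the hypothetical address 9.0.5.11; adjust user names and addresses for your installation:

# On the xCAT MN: create a key pair (if none exists) and copy the public key
# to the nova compute node so SSH works without a password (item 1).
ssh-keygen -t rsa
ssh-copy-id nova@9.0.5.11

# On the compute node: give /var/lib/nova and its .ssh subdirectory the
# ownership and mode shown in item 3 (drwxr-sr-x is mode 2755).
chown -R nova:nova /var/lib/nova
chmod 2755 /var/lib/nova/.ssh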


Appendix E. Checklist for Live Migration

1. The z/VM version is V6.3 or later.
2. The z/VM SSI cluster is configured correctly.
3. Important restriction: A virtual machine with a persistent volume attached cannot perform live migration through OpenStack.
4. It is recommended that there be only one xCAT MN per SSI cluster.
5. Each vswitch has an OSA attached in each SSI member. The volumes that are shared in the SSI cluster have the same EQID configured in each SSI member.
6. Spawn an OpenStack instance, then log on to MAINT and issue:

vmrelocate test instance_name destination_zvm

Make sure the test passes.
7. xCAT nodes have been created to represent each z/VM hypervisor and its ZHCP server.


Appendix F. OpenStack Configuration Files

If you are installing a CMA, you can skip this section unless you want to change settings that are not exposed as corresponding properties in the DMSSICNF and DMSSICMO COPY files. This reference information is useful primarily to third parties running the z/VM OpenStack plugins in a non-CMA node.

In each of the following sections ("Settings for Nova," "Settings for Cinder" on page 163, "Settings for Neutron" on page 165, and "Settings for Ceilometer" on page 170), the settings are described individually. See "Sample Configuration Files" on page 172 for sample files that can be copied and pasted, and then edited, as appropriate.

Information is provided for each property that is used by the z/VM plugin, as follows:
v Whether the property is Required or Optional. A property is Required if it is necessary for the mainline operation of the support. If a property is only necessary for an optional feature, or it has a default value, then the property will be specified as Optional. The notes for that property will indicate when it is needed, or its default value will be listed.
v Name of the configuration file section where the property is specified. Most of the configuration properties reside in either the DEFAULT or AGENT section. Some, however, are specified in other sections of a configuration file.
v The format of the value, and its definition.
v Additional notes. This may include recommended values, default values, or other information, as appropriate.

The configuration files are documented in their default locations. Some products allow a single server to support multiple z/VM hosts. In those systems, multiple copies of the same OpenStack services are run. In order to configure those services to different z/VM hosts, multiple versions of the configuration files are created. The startup scripts for the OpenStack services (in /etc/init.d) indicate which configuration file should be used.

Settings for Nova

This section describes the configuration settings related to the nova z/VM driver. For a sample /etc/nova/nova.conf file, see "Sample File for Nova z/VM Driver" on page 172.
v In file /etc/nova/nova.conf:

compute_driver

Required

Section: DEFAULT

Value: zvm.ZVMDriver

Notes: Driver to use for controlling virtualization. For z/VM, it is "zvm.ZVMDriver".

config_drive_format

Optional

Section: DEFAULT

Value: iso9660 – format of the config drive.

Notes: If this option is not specified, the default value is iso9660. For z/VM this value must be iso9660.


default_ephemeral_format

Optional

Section: DEFAULT

Value: ext2, ext3, or ext4 – file system format for the ephemeral disk.

Notes:

– If this option is not specified in nova.conf, the created ephemeral disk will use the default ext3 as its file system.
– The file system specified must be one that the Linux distribution (also called a distro) on the deployed system allows to be mounted in read-write mode.

force_config_drive

Required

Section: DEFAULT

Value: True – controls whether a config drive is used to pass configuration data to a deployed virtual server instance.

Notes: The value must be "True". The z/VM driver supports only the config drive for cloud-init.

host

Required

Section: DEFAULT

Value: Same value as specified for the zvm_host property.

Notes:

– This is a unique identifier of the compute node. A compute node is related to a single z/VM hypervisor – therefore this property is recommended to be the same value as specified for the zvm_host property.
– If a cloud is running multiple compute nodes, each node would be configured for a different z/VM system, with the host property used to uniquely define the compute node and a zvm_host property identifying the z/VM hypervisor that the compute node supports.
– Once this value has taken effect, it should not be changed. Changing this value can cause unexpected results. For example, an OpenStack controller managing z/VM as a compute node would see compute instances that existed prior to the change as running on a different hypervisor.

image_cache_manager_interval

Optional

Section: DEFAULT

Value: Integer – the number of seconds to wait between runs of the image cache manager.

Notes: Not z/VM specific. The default is 2400 (40 minutes).

instance_name_template

Required

Section: DEFAULT

Value: 8 characters or less – template string to be used to generate instance names.

Notes:


– The template should contain a fixed section, which is included in the name of each created instance, followed by the number of hexadecimal digits to be generated. For example, a value of "abc%05x" indicates that each server name begins with "abc", followed by 5 hexadecimal digits. The hexadecimal value is incremented as systems are created.
– If you are using an External Security Manager (ESM), consult your ESM documentation to determine the characters allowed in the fixed section of the template. If you are using IBM's RACF product, the fixed section of the template can consist of only alphanumeric characters. For more information, see z/VM: RACF Security Server Security Administrator's Guide.
– The first three characters of the instance name cannot be "rsz" or "RSZ".
– The template should be chosen so that the generated instance names do not conflict with other instances that can be defined on the xCAT management node. When two compute nodes are using the same xCAT management node: if they are under the control of the same controller node, they should share the same instance template; otherwise, they should have different instance templates. In addition, you should ensure that the instance names will not conflict with names defined in the z/VM system where the virtual machine will be created. This will avoid name clashes in both the xCAT MN and the z/VM systems where the virtual machines are created.
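For example, a sketch of the template setting using the hypothetical fixed section "abc" from the description above:

[DEFAULT]
instance_name_template = abc%05x

This would generate instance names such as abc00001, abc00002, and so on.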

my_ip

Optional

Section: DEFAULT

Value: IP address of the compute node.

Notes:

– The compute node must be accessible by the xCAT MN. In some cases, the default IP address chosen for the compute node system is not accessible by the xCAT MN (that is, network address translation can affect the address). This property allows you to override the default and specify the IP address that the xCAT MN should use when communicating with the compute node.
– An IPv4 address should be specified as four octets written as decimal numbers ranging from 0 to 255 and concatenated with a period between the octets. Do not specify leading zeros for an octet as this can cause some utilities to treat the octet as a number in octal representation. (For example, 09.0.05.11 is wrong, 9.0.5.11 is correct.)

ram_allocation_ratio

Optional

Section: DEFAULT

Value: Integer – the memory over commit ratio for the z/VM Driver.

Notes: The recommended value is 3.

rpc_response_timeout

Optional

Section: DEFAULT

Value: Integer.

Notes: Required only if z/VM live migration is to be used. The recommended value for z/VM is 180, to allow z/VM live migration to succeed. Live migration will not succeed with the default value, so set it to 180 seconds.

scheduler_default_filters

Optional

Section: DEFAULT


Value: IBM recommends that you do not specify "CoreFilter" for z Systems, as it is not appropriate for the z/VM hypervisor. For available filters, see the OpenStack documentation.

xcat_free_space_threshold

Optional

Section: DEFAULT

Value: Integer – the size in gigabytes of the threshold at which purge operations will occur on the xCAT MN disk space to remove images.

Notes: The default value is 50.

xcat_image_clean_period

Optional

Section: DEFAULT

Value: Integer – number of days an unused xCAT image will be retained before it is purged.

Notes: The default is 30 days.

zvm_config_drive_inject_password

Optional

Section: DEFAULT

Value: True or False – defines whether to inject the password in the configuration drive. For more information, go to the OpenStack.org User Guide (http://docs.openstack.org/user-guide/content/enable_config_drive.html).

Notes:

– The default value is False.
– If set to True, the root password of the newly booted VM will be the random value of the adminPass property that is shown in the output of the nova boot command.
– If set to False, the root password of the newly booted VM will be the value specified in zvm_image_default_password.

zvm_diskpool

Required

Section: DEFAULT

Value: The volume group name in your z/VM system from which xCAT will allocate a disk for new servers.

Notes:

– The zvm_diskpool name is the name of the storage group defined in the Directory Manager.
– You should not use a dollar sign ($) in the zvm_diskpool name.

zvm_diskpool_type

Optional

Section: DEFAULT

Value: ECKD or FBA – the disk type of disks in your diskpool.

Notes:

– The default is ECKD disks.
– The diskpool is the storage group defined in the Directory Manager.
– You should not mix disk types in the Directory Manager disk pool.


zvm_fcp_list

Optional

Section: DEFAULT

Value: The list of FCPs used by virtual server instances. The FCP addresses may be specified as either an individual address or a range of addresses connected with a hyphen. Multiple values are specified with a semicolon connecting them (for example, "1f0e;2f02-2f1f;3f00").

Notes: Required only if persistent disks are to be attached to virtual server instances. Each instance needs one FCP in order to attach a volume to itself. Those FCPs should be available and online before OpenStack can use them. OpenStack will not check their status but will use them directly, so if they are not ready, errors may be returned. Contact your z/VM system administrator if you don't know which FCPs you can use.
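For example, using the example address ranges from the value description above:

[DEFAULT]
zvm_fcp_list = 1f0e;2f02-2f1f;3f00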

zvm_host

Required

Section: DEFAULT

Value: Same value as XCAT_zvmsysid in DMSSICNF COPY file.

Notes:

– The xCAT node name of the z/VM hypervisor. This property is case sensitive and must match the value specified in XCAT_zvmsysid in the DMSSICNF COPY file.
– Once this value has taken effect, it should not be changed. Changing this value can cause unexpected results. For example, an OpenStack controller managing z/VM as a compute node would see compute instances that existed prior to the change as running on a different hypervisor.

zvm_image_compression_level

Optional

Section: DEFAULT

Value: An integer between 0 and 9, indicating the level of gzip compression used when capturing a disk. The values are as follows:
0    No compression.
1-9  The gzip compression level. The recommended compression value is 6 when compression is desired.

Notes: If this value is set in nova.conf, the specified gzip compression level will be used when capturing a disk. If it is not set, the gzip compression level default is 6.

zvm_image_default_password

Required

Section: DEFAULT

Value: The default password to be used as the default OS root password for the newly booted virtual server instances.

Notes:

– If the zvm_config_drive_inject_password property is set to False, this password will be used as the default OS root password for the newly booted VM.
– It is recommended that if the default OS root password is used, the root password in the deployed system be changed as soon as possible.

zvm_image_tmp_path

Optional


Section: DEFAULT

Value: The path at which images will be stored (snapshot, deploy, etc.).

Notes: This value defaults to /var/lib/nova/images.

zvm_multiple_fcp

Optional

Section: DEFAULT

Value: True or False. Defines whether to use host side multipath (which supports two paths to a persistent disk) when attaching a persistent disk, so that if one path fails, the instance can access the disk via the other path.

Notes:

– The default value is False.
– To use the host side multipath feature, at least two sets of FCP devices corresponding to different CHPIDs must be configured in the zvm_fcp_list property.

zvm_reachable_timeout

Optional

Section: DEFAULT

Value: Integer – timeout value for powering on an instance, in seconds.

Notes:

– The default is 300. This value should be 300 or larger.
– This value may also be set to 0 to indicate no timeout limit, but only for the purposes of troubleshooting certain deployment problems.

zvm_scsi_pool

Optional

Section: DEFAULT

Value: The name of the xCAT SCSI pool.

Notes: The default value is xcatzfcp. Users can specify any name. The xCAT MN will create and manage it.

zvm_user_default_password

Optional

Section: DEFAULT

Value: Default password for a newly created z/VM user ID.

Notes: This is the password for any z/VM user IDs created when OpenStack deploys new virtual servers (also called nova instances), defined in the USER directory statement. The default value is dfltpass. You can change it as needed.

zvm_user_default_privilege

Optional

Section: DEFAULT

Value: Default privilege class for a newly created z/VM user ID.

Notes:


– The default is G (general user privilege class). When using OpenStack, only class G should be defined. Your z/VM system administrator may choose to define additional privilege classes. Contact your z/VM system administrator to determine what privilege classes are available to you.
– IBM-defined privilege classes other than class G have additional permissions, which can cause security exposures. If you are defining your own privilege class with fewer commands than class G allows, see the "CP Commands Required for Guests Managed by xCAT" section in the "Security Considerations for xCAT" appendix in z/VM: Systems Management Application Programming for the list of commands that must be part of this privilege class.

zvm_user_profile

Optional

Section: DEFAULT

Value: OSDFLT – profile in the user directory for new servers.

Notes:

– The default value is OSDFLT.
– In order to retrieve the console log from an OpenStack-deployed guest, you must add COMMAND SP CONS * START to the OSDFLT SAMPPROF file on the MAINT 193 disk. Then restart the CMA. After the restart, each time a new OpenStack-deployed guest starts, it automatically starts the console, and SMAPI/xCAT is able to retrieve all the 3270 console logs and return them to OpenStack.

zvm_user_root_vdev

Optional

Section: DEFAULT

Value: Virtual device number for the root disk.

Notes:

– When nova deploys an instance, it creates a root disk and several ephemeral or persistent disks. The value of this property is the virtual device number of the root disk. If the root disk is a cinder volume, this value does not apply.
– The 0-10FF range of devices is reserved for the use of z/VM OpenStack.
– The default is 100.
– This is an integer value in hexadecimal format, between 0 and 65535 (x'FFFF'). The value should not conflict with other device numbers in the z/VM guest's configuration; for example, device numbers of the NICs or ephemeral or persistent disks.

zvm_vmrelocate_force

Optional

Section: DEFAULT

Value: ARCHITECTURE, DOMAIN, NONE, or STORAGE – this is the type of relocation to be performed.

Notes: The values indicate the following:

ARCHITECTURE
    Attempt relocation even if hardware architecture facilities or CP features are not available on the destination system.
DOMAIN
    Attempt relocation even if the VM would be moved outside of its domain.
NONE
    Indicates the configured system default will be used.
STORAGE
    Relocation should proceed even if CP determines that there are insufficient storage resources on the destination system.

zvm_xcat_connection_timeout

Optional

Section: DEFAULT

Value: Integer. The maximum number of seconds that OpenStack services should wait for responses to HTTP requests made to the xCAT MN.

Notes:

– An example is a request to copy an image from OpenStack services (the glance image repository, in this case) to the xCAT management node in order to deploy a virtual server.

– The default is 3600 seconds (1 hour).

zvm_xcat_ca_file

Optional

Section: DEFAULT

Value: Certificate Authority (CA) file used when making an HTTPS connection to xCAT.

Notes:

– The z/VM OpenStack driver always uses HTTPS to communicate with xCAT. This property affects the degree of server authentication possible when establishing the HTTPS connection.
– If this property is specified and the CA file is usable, the driver will use HTTPS server authentication; otherwise, it will use HTTPS and log a warning in the OpenStack log because it is unable to verify the identity of the xCAT server using a trusted certificate. IBM recommends that you always specify this property so the driver can verify that the server identifies itself using a trusted certificate.
– When you run the driver as part of a CMA in the controller, compute, or compute_mn role, CMA will generate default self-signed certificates to authenticate xCAT to clients. IBM recommends that you replace the default certificate file with one that you generate, signed by a CA that your enterprise trusts. See "Replacing the Default SSL Certificates" on page 40 for CMA-specific instructions for adding your certificate file and restarting CMA so that yours is used instead of the default.

zvm_xcat_master

Required

Section: DEFAULT

Value: The xCAT management node (the node name in the xCAT definition).

Notes: This is the same value as the XCAT_Host property in the DMSSICNF COPY file.

zvm_xcat_password

Required

Section: DEFAULT

Value: The password of the xCAT REST (Representational State Transfer) API user.

Notes:

– This is the password of the xCAT administrator. The user name that is related to this password is specified in the "zvm_xcat_username" property.


– CMA systems initially set this property to "admin". It is recommended that the administrator password be changed in xCAT after starting xCAT for the first time. See the "Changing the Administrator Password" section in z/VM: Systems Management Application Programming for information on how the password is changed.
– Changing the value of this parameter may require you to reset the controller and/or compute nodes. See "Starting the CMA" on page 35 for more information.

zvm_xcat_server

Required

Section: DEFAULT

Value: The xCAT MN IP address or host name.

zvm_xcat_username

Required

Section: DEFAULT

Value: The user name of the xCAT REST (Representational State Transfer) API user.

Notes:

– This is the user name of the xCAT administrator that is used to authenticate into the xCAT GUI. The first time the xCAT MN is started, an administrator user named "admin" is created.
– CMA systems initially set this property to "admin".
– Changing the value of this parameter may require you to reset the controller and/or compute nodes. See "Starting the CMA" on page 35 for more information.

zvm_zhcp_fcp_list

Optional

Section: DEFAULT

Value: The list of FCPs used only by the xCAT ZHCP node. The FCP addresses may be specified as either an individual address or a range of addresses connected with a hyphen. Multiple values are specified with a semicolon connecting them (for example, "1f0e;2f02-2f1f;3f00").

Notes:

– The FCP addresses must be different from the ones specified for the zvm_fcp_list. Any FCPs that exist in both zvm_fcp_list and zvm_zhcp_fcp_list will lead to errors.
– It is strongly recommended to specify only one FCP for ZHCP to avoid resource waste. Contact your z/VM system administrator if you don't know which FCPs you can use.

Settings for Cinder

This section describes the configuration settings related to the cinder z/VM driver. For a sample /etc/cinder/cinder.conf file, see "Sample File for Cinder z/VM Driver" on page 174.

Note: Please refer to the IBM Redbook Implementing the IBM System Storage SAN Volume Controller V6.3 for SVC (V7000) to properly set up an SVC in order to make cinder connect to the SVC and apply these parameters.
v In file /etc/cinder/cinder.conf:

san_ip

Required

Section: DEFAULT


Value: The IP address of your SVC storage.

Notes:

– Contact your SVC service manager if you don't know the address.
– An IPv4 address should be specified as four octets written as decimal numbers ranging from 0 to 255 and concatenated with a period between the octets. Do not specify leading zeros for an octet as this can cause some utilities to treat the octet as a number in octal representation. (For example, 09.0.05.11 is wrong, 9.0.5.11 is correct.)

san_private_key

Required

Section: DEFAULT

Value: File name of the private key file to use for SSH authentication to the SVC storage device whose address is specified in the property openstack_san_ip, relative to the home directory of the xCAT management node administrative user given in the DMSSICNF property XCAT_MN_admin.

Notes:

– For example, if the xcat_mn_admin value is "mnadmin" and the private key is stored in that user's .ssh subdirectory in the file id_rsa, then the value of san_private_key should be .ssh/id_rsa. For more information on the xcat_mn_admin property, see z/VM: Systems Management Application Programming.
– You must make the id_rsa file and all directories in its path readable to the cinder user.
– Contact your SVC service manager to get the file.

storwize_svc_connection_protocol

Required

Section: DEFAULT

Value: FC – connection protocol used by z/VM to connect to the Storwize SVC.

Notes: This value must be FC.

storwize_svc_volpool_name

Optional

Section: DEFAULT

Value: The name of the VDISK pool within the Storwize storage from which cinder will allocate disks.

Notes:

– The pool must be created and operational before OpenStack can use it. The volumes that can be created depend on the capability of the VDISK pool. Contact your SVC service manager if you don't know which pool you can use.

– The default value is "volpool".

storwize_svc_vol_iogrp

Optional

Section: DEFAULT

Value: The io_group_id within the Storwize storage to which the virtual disk will be associated.

Notes:

– Contact your SVC service manager if you don’t know which I/O group you can use.


– The default value is 0.

volume_driver

Required

Section: DEFAULT

Value: cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver

Notes: The driver for z/VM SVC must use this value. Only SVC is supported.
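Pulling these settings together, the following is a minimal sketch of the z/VM-related portion of /etc/cinder/cinder.conf, using the defaults described above; the SVC address 9.0.5.11 is a hypothetical placeholder:

[DEFAULT]
san_ip = 9.0.5.11
san_private_key = .ssh/id_rsa
storwize_svc_connection_protocol = FC
storwize_svc_volpool_name = volpool
storwize_svc_vol_iogrp = 0
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver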

Settings for Neutron

Setting Descriptions

This section describes the configuration settings related to the neutron z/VM driver. For sample /etc/neutron/neutron.conf, /etc/neutron/plugins/ml2/ml2_conf.ini, and /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini files, see "Sample Files for Neutron z/VM Driver" on page 175.
v In file /etc/neutron/neutron.conf:

base_mac

Optional

Section: DEFAULT

Value: Base MAC address that is used to generate MAC addresses for virtual interfaces, specified as 6 pairs of hexadecimal digits separated by colons (for example, 02:00:00:EE:00:00).

Notes:

– The default value is: fa:16:3e:00:00:00
– The first three pairs of hexadecimal digits should be the same as USERPREFIX in the VMLAN statement in the z/VM SYSTEM CONFIG file. You can modify the fourth pair to any range, as appropriate to your system. The final two pairs of hexadecimal digits for the MAC address should be 00 and will be replaced with generated values.
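For example, a sketch using the example value from the description above (the first three pairs would match a USERPREFIX of 020000 in the VMLAN statement):

[DEFAULT]
base_mac = 02:00:00:EE:00:00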

core_plugin

Required

Section: DEFAULT

Value: ml2

Notes: z/VM supports only the ML2 plugin.
v In file /etc/neutron/plugins/ml2/ml2_conf.ini:

flat_networks

Optional (with network_vlan_ranges)

Section: ml2_type_flat

Value: Comma-separated list of z/VM vswitch names, with which flat networks can be created. At a minimum, the vswitch that was specified in the DMSSICNF COPY file on the z/VM system running xCAT (the default is xcatvsw2) should be specified for this property. Additional vswitches can be added.

Notes:

– In the ML2 plugin configuration file, both the flat_networks property and the network_vlan_ranges property are optional. However, at least one of these properties must be specified. You can define a FLAT network with the flat_networks property, or you can define a VLAN aware network with the network_vlan_ranges property, or you can define both types of networks.
– Specify either the vswitch names (for example, xcatvsw2,datanet2) or use * to allow flat networks with arbitrary physical network names.

mechanism_drivers

Required

Section: ml2

Value: zvm

Notes: This property specifies the networking mechanism driver entry points to be loaded from the neutron.ml2.mechanism_drivers namespace. This value must be zvm.

network_vlan_ranges

Optional (with flat_networks)

Section: ml2_type_vlan

Value: List of vswitch[:vlan_min:vlan_max] comma-separated values specifying z/VM vswitch names usable for VLAN provider and project networks, each followed by the range of VLAN tags on each vswitch available for allocation as project networks (for example, datanet1:1:4094,datanet3:1:4094).

Notes:

– In the ML2 plugin configuration file, both the flat_networks property and the network_vlan_ranges property are optional. However, at least one of these properties must be specified. You can define a FLAT network with the flat_networks property, or you can define a VLAN aware network with the network_vlan_ranges property, or you can define both types of networks.
– The vlan_min:vlan_max range is optional. If not specified, the vswitch will be VLAN UNAWARE and the networks created on the vswitch will work as flat networks, though they will be shown as VLAN networks in neutron. This is compatible with old configurations, but it is highly recommended that you specify all VLAN UNAWARE vswitches in flat_networks and all VLAN AWARE vswitches in network_vlan_ranges.
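For example, a sketch of the two ml2_conf.ini section entries just described, using the example vswitch names from the text above:

[ml2_type_flat]
flat_networks = xcatvsw2,datanet2

[ml2_type_vlan]
network_vlan_ranges = datanet1:1:4094,datanet3:1:4094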

tenant_network_types

Optional

Section: ml2

Value: Ordered list of network types to allocate as tenant (project) networks, separated by commas. z/VM supports local, flat, and vlan.

Notes:

– The default value is "local".
– It is recommended that you specify all z/VM-supported network types in the tenant_network_types property.
– When you create a network with the neutron command, the network type is determined by the following, in this order:
1. The value of the --provider:network_type parameter specified on the neutron command.
2. The value of the tenant_network_types property in the neutron configuration file.
3. The default value ("local").

type_drivers

Optional


Section: ml2

Value: List of network type driver entry points to be loaded with the types, separated by commas. z/VM supports local, flat, and vlan.

Notes:

– The default value is: "local,flat,vlan"
– It is recommended that you specify all z/VM-supported types. Optionally, you can specify only those network types you intend to support.
v In file /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini:

polling_interval

Optional

Section: AGENT

Value: Integer – agent polling interval specified in number of seconds.

Notes: This value depends on the network and workload. The default value is 2.

rdev_list

Optional

Section: vswitch_name

Value: The RDEV address of the OSA cards which are connected to the vswitch.

Notes:

– Only one RDEV address may be specified per vswitch. You should choose an active RDEV address.

– The section name (for example, xcatvsw2) is the name of the vswitch.
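For example, a sketch of an entry for a vswitch named xcatvsw2, where 1234 is a hypothetical OSA RDEV address:

[xcatvsw2]
rdev_list = 1234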

xcat_mgt_ip

Optional

Section: AGENT

Value: IP address – xCAT management interface IP address used by xCAT to communicate with newly-deployed instance servers.

Notes:

– Use of this property is deprecated in the Newton release. For more information on deprecated interfaces, see "Deprecated Interfaces" on page xv.

– This property is used when new instances do not have public IP addresses that would allow the xCAT MN to communicate with the instances. If specified, an additional interface will be created in the OPNCLOUD virtual server over which the xCAT MN will communicate with deployed systems. Whether the property is specified depends upon the type of networks that are used by the deployed systems. For more information see "Network Configurations" on page 56, which discusses and shows examples of:
- Flat network. See "Single Flat Network" on page 59.
- Flat network using private IP addresses. See "Using Private IP Addresses for Instances" on page 61.
- VLAN mixed network. See "Flat and VLAN Mixed Network" on page 65.
– The xCAT MN and the z/VM OpenStack code support only one additional interface as the xCAT management IP address. For this reason, any compute node with private networks using the same xCAT must use the same value for this property. If the CMA is running in the controller role, the property can be specified in the DMSSICMO COPY file for the CMA controller's z/VM and omitted on the other compute nodes. The CMA controller will ensure that the interface is defined when the controller starts up, and because the CMA controller contains the xCAT MN, it will be defined prior to its use by another OpenStack compute node (for example, a CMA in compute role).

– The xcat_mgt_mask and xcat_mgt_ip must be set in the same broadcast domain as the instance's IP address. A broadcast domain is a logical division of a computer network, in which all nodes can reach each other by broadcast at the data link layer.
– It is recommended that the xCAT MN be defined so that this is the first IP address of your management network.
– An IPv4 address should be specified as four octets written as decimal numbers ranging from 0 to 255 and concatenated with a period between the octets. Do not specify leading zeros for an octet as this can cause some utilities to treat the octet as a number in octal representation. (For example, 09.0.05.11 is wrong, 9.0.5.11 is correct.)

xcat_mgt_mask

Optional

Section: AGENT

Value: Netmask of your xCAT management network (for example, 255.255.255.0).

Notes:
– Use of this property is deprecated in the Newton release. For more information on deprecated interfaces, see "Deprecated Interfaces" on page xv.
– This property is used when new instances do not have public IP addresses that would allow the xCAT MN to communicate with the instances. This property is used in conjunction with xcat_mgt_ip. See the description of xcat_mgt_ip for more information.
– The xcat_mgt_mask and xcat_mgt_ip must be set in the same broadcast domain as the instance's IP address. A broadcast domain is a logical division of a computer network, in which all nodes can reach each other by broadcast at the data link layer.

xcat_zhcp_nodename

Optional

Section: AGENT

Value: ZHCP node name in the xCAT MN database, as specified with the ZHCP_Host property in the DMSSICNF COPY file.

Notes:

– The default is "zhcp".
– This property is case sensitive.

zvm_host

Required

Section: AGENT

Value: Same value as specified for the host property in /etc/nova/nova.conf.

Notes:

– This is a unique identifier of the compute node. A compute node is related to a single z/VM hypervisor – therefore this property is recommended to be the same value as specified for the host property. If a server was running multiple compute nodes, each node would be configured for a different z/VM system, with the host property used to uniquely identify the compute node and the zvm_host property to identify the z/VM hypervisor that the compute node supports.
– Once this value has taken effect, it should not be changed. Changing this value can cause unexpected results. For example, an OpenStack controller managing z/VM as a compute node would see compute instances that existed prior to the change as running on a different hypervisor.

zvm_xcat_ca_file

Optional

Section: AGENT

Value: Certificate Authority (CA) file used when making an HTTPS connection to xCAT.

Notes:

– The z/VM OpenStack driver always uses HTTPS to communicate with xCAT. This property affects the degree of server authentication possible when establishing the HTTPS connection.
– If this property is specified and the CA file is usable, the driver will use HTTPS server authentication; otherwise, it will use HTTPS and log a warning in the OpenStack log because it is unable to verify the identity of the xCAT server using a trusted certificate. IBM recommends that you always specify this property so the driver can verify that the server identifies itself using a trusted certificate.
– When you run the driver as part of a CMA in the controller, compute, or compute_mn role, CMA will generate default self-signed certificates to authenticate xCAT to clients. IBM recommends that you replace the default certificate file with one that you generate, signed by a CA that your enterprise trusts. See "Replacing the Default SSL Certificates" on page 40 for CMA-specific instructions for adding your certificate file and restarting CMA so that yours is used instead of the default.

zvm_xcat_password

Optional

Section: AGENT

Value: The password of the xCAT REST (Representational State Transfer) API user.

Notes:

– This is the password of the xCAT administrator. The user name that is related to this password is specified in the "zvm_xcat_username" property.
– CMA systems initially set this property to "admin". It is recommended that the administrator password be changed in xCAT after starting xCAT for the first time. See the "Changing the Administrator Password" section in z/VM: Systems Management Application Programming for information on how the password is changed.

zvm_xcat_server

Required

Section: AGENT

Value: The xCAT MN IP address or host name.

Notes: An IPv4 address should be specified as four octets written as decimal numbers ranging from 0 to 255 and concatenated with a period between the octets. Do not specify leading zeros for an octet as this can cause some utilities to treat the octet as a number in octal representation. (For example, 09.0.05.11 is wrong, 9.0.5.11 is correct.)

zvm_xcat_timeout

Optional

Section: AGENT

Value: Integer – timeout value, in seconds, for waiting for an xCAT response.


Notes: The default is 300 seconds.

zvm_xcat_username

Optional

Section: AGENT

Value: The user name of the xCAT REST (Representational State Transfer) API user.

Notes:

– This is the user name of the xCAT administrator that is used to authenticate into the xCAT GUI. The first time the xCAT MN is started, an administrator user named "admin" is created.

– CMA systems initially set this property to "admin".

Settings for Ceilometer

Setting Descriptions

This section describes the configuration settings related to the ceilometer z/VM inspector (referred to as ceilometer). For a sample /etc/ceilometer/ceilometer.conf file, see "Sample File for Ceilometer" on page 176.
v In file /etc/ceilometer/ceilometer.conf:

host

Required

Section: DEFAULT

Value: Same value as specified for the zvm_host property.

Notes:

– This is a unique identifier of the compute node. A compute node is related to a single z/VM hypervisor; therefore this property is recommended to be the same value as specified for the zvm_host property.
– If a cloud is running multiple compute nodes, each node would be configured for a different z/VM system, with the host property used to uniquely define the compute node and a zvm_host property identifying the z/VM hypervisor that the compute node supports.

hypervisor_inspector

Required

Section: DEFAULT

Value: Inspector to use for inspecting the hypervisor layer. For z/VM this value must be set to "zvm".

polling_namespaces

Required

Section: DEFAULT

Value: Polling namespace(s) to be used while resource polling.

Notes:

– On the controller node the value should be "central, compute"; on the compute node this value should be "compute".

pollster_list

Required


Section: DEFAULT

Value: List of pollsters (or wildcard templates) to be used while polling (image, image.size, instance, cpu, memory.usage, network.incoming.bytes, network.incoming.packets, network.outgoing.bytes, network.outgoing.packets).

xcat_zhcp_nodename

Optional

Section: zvm

Value: ZHCP node name in xCAT, as specified with the ZHCP_Host property in the DMSSICNF COPY file.

Notes:

– The default is "zhcp".

– This property is case sensitive.

zvm_host

Required

Section: zvm

Value: Same value as the XCAT_zvmsysid property in the DMSSICNF COPY file.

Notes:

– The xCAT node name of the z/VM hypervisor. This property is case sensitive and should match the value specified in the XCAT_zvmsysid property in the DMSSICNF COPY file.

zvm_xcat_ca_file

Optional

Section: zvm

Value: Certificate Authority (CA) file used when making an HTTPS connection to xCAT.

Notes:

– The z/VM OpenStack driver always uses HTTPS to communicate with xCAT. This property affects the degree of server authentication possible when establishing the HTTPS connection.

– If this property is specified and the CA file is usable, the driver will use HTTPS server authentication; otherwise, it will use HTTPS and log a warning in the OpenStack log because it is unable to verify the identity of the xCAT server using a trusted certificate. IBM recommends that you always specify this property so the driver can verify that the server identifies itself using a trusted certificate.

– When you run the driver as part of a CMA in the controller, compute, or compute_mn role, CMA will generate default self-signed certificates to authenticate xCAT to clients. IBM recommends that you replace the default certificate file with one that you generate, signed by a CA that your enterprise trusts. See “Replacing the Default SSL Certificates” on page 40 for CMA-specific instructions for adding your certificate file and restarting CMA so that yours is used instead of the default.
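A sketch of the setting follows; the path is illustrative only and the location of your CA file will differ:

[zvm]
zvm_xcat_ca_file = /etc/ssl/certs/xcat_ca.pem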

zvm_xcat_master

Required

Section: zvm

Value: The xCAT management node (the node name in the xCAT definition).

Notes:

– This is the same value as the XCAT_Host property in the DMSSICNF COPY file.


zvm_xcat_password

Required

Section: zvm

Value: The password of the xCAT REST (Representational State Transfer) API user.

Notes:

– This is the password of the xCAT administrator. The user name that is related to this password is specified in the zvm_xcat_username property.

– CMA systems initially set this property to "admin". It is recommended that the administrator password be changed in xCAT after starting xCAT for the first time. See the “Changing the Administrator Password” section in the “Setting up and Configuring the Server Environment” chapter of z/VM: Systems Management Application Programming for more information.

– Changing the value of this parameter may require you to reset the controller and/or compute nodes. See “Starting the CMA” on page 35 for more information.

zvm_xcat_server

Required

Section: zvm

Value: The xCAT MN IP address or host name.

zvm_xcat_username

Required

Section: zvm

Value: The user name of the xCAT REST (Representational State Transfer) API user.

Notes:

– This is the user name of the xCAT administrator that is used to authenticate into the xCAT GUI. The first time xCAT MN is started, an administrator user named "admin" is created.

– CMA systems initially set this property to "admin".

– Changing the value of this parameter may require you to reset the controller and/or compute nodes. See “Starting the CMA” on page 35 for more information.

Sample Configuration Files

This appendix provides examples of the various OpenStack configuration files that are configured for z/VM. The names and locations are shown with their default values. In a system that is running multiple compute nodes to support multiple z/VM hosts in the same server, the name and location of the configuration files may be different. The startup scripts located in /etc/init.d can provide information on the configuration file that is used.
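For example, one way to see which configuration file a startup script references is to search the script for it. This is an illustrative command only; the script name on your system may differ:

grep -i "\.conf" /etc/init.d/neutron-zvm-agent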

Sample File for Nova z/VM Driver

The following is a sample /etc/nova/nova.conf configuration file, with values:

/etc/nova/nova.conf:

# The xCAT server IP that this nova compute node operates on
zvm_xcat_server = 1.2.3.4

# The user name of the xCAT server which will be used for the REST API call
zvm_xcat_username = mnadmin

# The password of the xCAT server which will be used for the REST API call
zvm_xcat_password = xxxxxxxx

# The disk pool name that xCAT will allocate disk from for new servers
# Note: The zvm_diskpool name is the name of the storage 'group' defined in the Directory Manager
zvm_diskpool = FBAPOOL1

# The disk pool type (can be FBA or ECKD)
zvm_diskpool_type=FBA

# The xCAT node name of the z/VM hypervisor
zvm_host= zvmhost1

# The host is used to distinguish different nova compute hosts; it can be the same as zvm_host
host= zvmhost1

# The default password for a newly created z/VM user ID.
zvm_user_default_password = dfltpass

# Default template of user directory for new servers
# User should not use lnxdflt but should define his own profile.
zvm_user_profile = osdflt

# The virtual device number for the root disk
zvm_user_root_vdev = 100

# The path where images will be stored (snapshot, deploy, etc.)
zvm_image_tmp_path = /var/lib/nova/images

# The xCAT master node (the node name in the xCAT definition)
zvm_xcat_master = xcat

# The config drive format, should be iso9660
config_drive_format=iso9660

# Define whether to inject the password in the config drive. If zvm_config_drive_inject_password
# is set to True, the default OS root password for the newly booted VM will be the random value of
# the adminPass property that is shown in the output of the nova boot command.
zvm_config_drive_inject_password=False

# If zvm_config_drive_inject_password is set to False, this password will be
# used as the default OS root password for the newly booted VM.
zvm_image_default_password=xxxxxxxx

# z/VM only supports config drive for cloud-init
force_config_drive=true

# Timeout value for spawn in seconds. If the newly spawned machine is not reachable
# after this value, the deploy will report the error "Failed to power on instance"
zvm_reachable_timeout=600

# Timeout value for reading the xCAT response.
zvm_xcat_connection_timeout=3600

# Default instance name template
# There is a restriction that you should define the template with length 8, and the first 3 should be
# characters; do not use "rsz" or "RSZ" as the first 3 characters.
instance_name_template = abc%05x

# z/VM driver
compute_driver = zvm.ZVMDriver

# NOT z/VM specific, set it to the default 86400(s) = 24 hours
image_cache_manager_interval=86400

# xCAT images that have not been used for a long time (default is 30 days) will be purged
xcat_image_clean_period=30


# The threshold when xCAT MN disk space is not big enough (default is 1G); the purge operation will start
xcat_free_space_threshold=1

# The name of the xCAT SCSI pool. Users can specify any name they wish. xCAT will
# create and manage it.
zvm_scsi_pool=scsipool

# The list of FCPs used by instances. Each instance needs one FCP in order to attach a
# volume to itself. Those FCPs should be well planned and made online before
# OpenStack can use them. OpenStack will not check their status but use them directly.
# So if they are not ready, errors may be returned. The format of this variable should look
# like "min1-max1;min2-max2;min3-max3". Please contact your z/VM system manager
# if you don't know what FCPs you can use.
zvm_fcp_list=B15B-B15F

# The list of FCPs used only by the xCAT HCP node. It must be different from zvm_fcp_list.
# Any FCP existing in both zvm_fcp_list and zvm_zhcp_fcp_list leads to errors. The format
# of this variable should look like "min1-max1;min2-max2;min3-max3". It is strongly
# recommended to specify only one FCP for HCP to avoid resource waste. Please contact
# your z/VM system manager if you don't know what FCPs you can use.
zvm_zhcp_fcp_list=B159

# Live migration
# Choose one of "ARCHITECTURE", "DOMAIN" or "STORAGE"
zvm_vmrelocate_force=ARCHITECTURE | DOMAIN | STORAGE

# Live migration will not succeed with the default rpc value; set it to 180
rpc_response_timeout=180

# Set the memory overcommit ratio for the z/VM driver
ram_allocation_ratio=3

# Set the file system format for the ephemeral disk. This value can be set to ext2, ext3 or ext4;
# please note that the file system specified must be one that the Linux distro on the deployed
# system allows to be mounted in read-write mode. If this option is not specified in nova.conf,
# the created ephemeral disk will use the default ext3 as its file system.
default_ephemeral_format=ext3

# When HTTPS is used to communicate between the z/VM driver and xCAT, the z/VM driver must
# have a CA file so that certificates will be used to verify that xCAT is the z/VM plugin
# to connect to.
zvm_xcat_ca_file=/samp/cafile

Sample File for Cinder z/VM Driver

The following is a sample configuration file, with values:

/etc/cinder/cinder.conf:

# The driver for ZVM SVC; should use this value
cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver

# The path of the private key for connecting to SVC. To avoid inputting a password every time,
# a public/private key pair is used for authentication. Please generate a private/public key
# pair, put the public key at the SVC, and put the private key locally. Put the local path of the private
# key here and make sure the cinder user has read privilege.
san_private_key=/home/test/key/id_rsa

# SVC IP address. Please contact your SVC service manager if you don't know the
# address.
san_ip=1.2.3.4

# VDISK pool that cinder will carve disks from. It must be created and ready to work
# before OpenStack can use it. The volumes that can be created depend on the capability of
# the VDISK pool. Please contact your SVC service manager if you don't know which
# pool you can use.


storwize_svc_volpool_name=XXXX

# Protocol used by z/VM, should be FC
storwize_svc_connection_protocol=FC

# The io_group_id with which to associate the virtual disk. Please contact
# your SVC service manager if you don't know which I/O group you can use.
storwize_svc_vol_iogrp = group_id

Sample Files for Neutron z/VM Driver

The neutron z/VM driver configuration sample file, after installation, is named /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini.sample. The following sections show sample configuration files, with values.

v /etc/neutron/neutron.conf:

# z/VM only supports ML2.
core_plugin = ml2

# Base mac address that is used to allocate macs from
# The first 6 hexadecimal digits are delimited into 3 pairs. These 6 hexadecimal digits should be the
# same as USERPREFIX in the VMLAN statement in the z/VM SYSTEM CONFIG file.
# You can modify the fourth pair to any range as appropriate in your system.
base_mac = 02:00:00:EE:00:00

v /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#type_drivers = local,flat,vlan
# Example: type_drivers = flat,vlan,gre,vxlan

# (ListOpt) Ordered list of network_types to allocate as tenant
# (project) networks. The default value 'local' is useful for
# single-box testing but provides no connectivity between hosts.
tenant_network_types = local,flat,vlan
# Example: tenant_network_types = vlan,gre,vxlan

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
#mechanism_drivers = openvswitch
mechanism_drivers = zvm

[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
flat_networks = xcatvsw2, datanet2
# Example: flat_networks = physnet1,physnet2
# Example: flat_networks = *

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant (project) networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as project networks.
network_vlan_ranges = datanet1:1:4094,datanet3:1:4094
# Example: network_vlan_ranges = physnet1:1000:2999

v /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini:

[AGENT]
# (StrOpt) xCAT REST API username, default value is admin
zvm_xcat_username = mnadmin
# Example: zvm_xcat_username = guest

# (StrOpt) Password of the xCAT REST API user, default value is admin
zvm_xcat_password = admin
# Example: zvm_xcat_password = passw0rd

# (StrOpt) xCAT MN server address, IP address or host name
zvm_xcat_server = YourxCATMNServerAddress
# Example: zvm_xcat_server = 10.0.0.1

# (StrOpt) xCAT ZHCP nodename in xCAT, default value is zhcp
xcat_zhcp_nodename = zhcp
# Example: xcat_zhcp_nodename = myzhcp1

# (StrOpt) xCAT management interface IP address
# xcat_mgt_ip=10.1.1.1

# (StrOpt) xCAT management interface netmask
# xcat_mgt_mask=255.255.0.0

# (IntOpt) Agent's polling interval in seconds, default value is 2 seconds
polling_interval = 2
# Example: polling_interval = 5

# (IntOpt) The number of seconds the agent will wait for
# xCAT MN response, default value is 300 seconds
zvm_xcat_timeout = 300
# Example: zvm_xcat_timeout = 600

# (StrOpt) The compute node name neutron-zvm-agent works on, should be the same as
# the 'host' property in nova.conf
zvm_host = zvmhost1
# Example: zvm_host = zvmhost1

# OSA configuration for each of the vswitches; these configurations are required if a vswitch
# needs to connect outside of z/VM
[datanet1]
# RDEV address of the OSA cards which are connected to the vswitch.
rdev_list=6243
[datanet3]
# RDEV address of the OSA cards which are connected to the vswitch.
rdev_list=6343

# When HTTPS is used to communicate between the z/VM driver and xCAT, the z/VM driver must
# have a CA file so that certificates will be used to verify that xCAT is the z/VM plugin
# to connect to.
zvm_xcat_ca_file=/samp/cafile

Notes:

v In the above example file, no rdev_list is configured for datanet2, so the neutron z/VM driver will not configure an UPLINK port for vswitch datanet2.

v Since z/VM needs the neutron-zvm-agent to initialize the network for nova and xCAT MN, the neutron-zvm-agent service should be started prior to the nova-compute service, and must be restarted once the xCAT MN is restarted. An example restart sequence is shown below.
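For example, after an xCAT MN restart, the two services might be restarted in the required order with commands like the following. The service names assume the default init scripts on your compute node:

service neutron-zvm-agent restart
service nova-compute restart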

Sample File for Ceilometer

The following is a sample /etc/ceilometer/ceilometer.conf configuration file, with values:

/etc/ceilometer/ceilometer.conf:

[DEFAULT]


# Inspector to use for inspecting the hypervisor layer.
hypervisor_inspector = zvm

# Polling namespace(s) to be used while resource polling.
polling_namespaces = compute, central

# This is a unique identifier of the compute node. A compute node is related to a single
# z/VM hypervisor; therefore this property is recommended to be the same value as specified
# for the zvm_host property.
# If a cloud is running multiple compute nodes, each node would be configured for a different
# z/VM system, with the host property used to uniquely define the compute node and a zvm_host
# property identifying the z/VM hypervisor that the compute node supports.
host = opnstk1

[zvm]

# xCAT ZHCP nodename in xCAT.
xcat_zhcp_nodename = zhcp

# Same value as XCAT_zvmsysid in the DMSSICNF COPY file.
zvm_host = zvmhost1

# The xCAT management node (the node name in the xCAT definition).
# Same value as XCAT_Host in the DMSSICNF COPY file.
zvm_xcat_master = xcat

# The password of the xCAT REST (Representational State Transfer) API user.
zvm_xcat_password = admin

# The xCAT MN IP address or host name.
zvm_xcat_server = 1.2.3.4

# The user name of the xCAT REST (Representational State Transfer) API user.
zvm_xcat_username = admin

# When HTTPS is used to communicate between the z/VM driver and xCAT, the z/VM driver must
# have a CA file so that certificates will be used to verify that xCAT is the z/VM plugin
# to connect to.
zvm_xcat_ca_file=/samp/cafile


Appendix G. Common Procedures

xCAT Procedures

This section describes common procedures that use the xCAT graphical user interface (GUI).

Using the Script Panel in the xCAT User Interface

Some procedures in this section, as well as other procedures documented elsewhere in this book, involve using the xCAT GUI. The following steps show you how to access the GUI and use the Script panel.

1. Bring up the GUI and authenticate into xCAT. The URL for the GUI is normally the IP address of the xCAT MN followed by "/xcat" (for example, https://9.9.27.91/xcat). Make certain you use "https://" so that secure sockets are used for the communication.

Note: When the xCAT GUI is started for the first time, the administrator userid/password will be set to admin/admin. The password for admin should be changed as soon as possible.

2. Navigate to the Nodes panel (the default panel) and the Nodes sub-frame.

3. Click the xcat node checkbox. (The default node name of the xCAT MN node is "xcat".)

Figure 78. Selecting the xcat Node Checkbox on the Nodes Panel


4. From the Actions pulldown, select "Run Scripts" to bring up the Script panel for the xCAT MN node (xcat).

5. Enter the command(s) you wish to run in the script box.

Figure 79. Selecting “Run script” on the Actions Pulldown of The Nodes Panel

Figure 80. Entering Commands in the Script Box


6. Press the Run button.

The yellow status box at the top of the frame should show the results of your command(s).

Increasing the httpd Timeout in the xCAT MN

Use the following steps to change the timeout value of the httpd service.

1. Log on to the xCAT user interface as admin and bring up the Script panel for the xCAT MN node (xcat), as described in "Using the Script Panel in the xCAT User Interface" on page 179.

2. In the script box, enter:

sed -i 's/^Timeout[[:space:]\t]\+[0-9]*/Timeout 7200/' /etc/httpd/conf/httpd.conf
grep -i ^Timeout /etc/httpd/conf/httpd.conf

where:

7200 is the timeout value you want to specify in seconds.

Then press the Run button.

The yellow status box at the top of the frame should show a line indicating the word Timeout and the value that you specified (for example, Timeout 7200).

Figure 81. Yellow Status Box Showing Results of Commands


3. Restart the httpd service on the xCAT node in the same Run script panel by entering the following in the script box:

service httpd restart

4. Wait 30 seconds, then log out of the xCAT GUI and then log back in to make sure the changes have taken effect.

Backing Up and Restoring xCAT Table Information

To back up your xCAT table information to your workstation, you'll need to create a tar file that will contain this information, using the following steps:

1. Bring up the xCAT GUI, authenticate into xCAT as admin, and create a target directory for the table tar file. You can create a subdirectory under /install by going to Configure/Files, then clicking the "New Folder" button. Here is an example of a directory called test:

Figure 82. Selecting Timeout Value


2. Bring up the Script panel for the xCAT MN node (xcat), as described in "Using the Script Panel in the xCAT User Interface" on page 179.

3. In the script box, enter:

. /etc/profile.d/xcat.sh
/opt/xcat/sbin/dumpxCATdb -p /install/test
cd /install/test/
tar -c -f xCATtables.tar *.csv
date

where:

/install/test
is your target directory.

Then press the Run button.

4. After you see the "Backup Complete." and the current date, you can now go to Configure/Files and double-click on the test directory. Then scroll down to the xCATtables.tar file and do a "Save as" to your workstation.

5. To restore the xCAT tables, upload the tar file to your directory by going to Configure/Files and double-clicking on your directory (e.g. /install/test). Then press Upload. After the tar file is uploaded, bring up the Script panel for the xCAT MN node (xcat) (again, as described in "Using the Script Panel in the xCAT User Interface" on page 179), and enter:

. /etc/profile.d/xcat.sh
cd /install/test
tar -xvf xCATtables.tar
/opt/xcat/sbin/restorexCATdb -V -p /install/test

where:

/install/test
is your target directory.

When finished, "Restore of Database Complete." will be displayed.

Figure 83. Creating a Subdirectory Under /install on the Files Panel


Increasing the Size of the CMA's Root Disk using LVM Commands

Increasing the size of your root disk may affect future upgrades and migrations. Contact your IBM Support Center personnel before performing the steps in this section.

This section provides an example of increasing the size of the CMA's root disk using logical volume manager (LVM) commands. You can use this procedure, for example, when you need more space for additional software.

For more information on this procedure, see:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Logical_Volume_Manager_Administration/

1. Add an additional disk to the xCAT user. You can do this using either DirMaint commands or ZHCP commands. This example adds a 1000-cylinder disk to the CMA as disk 104. (You can add only the same disk type to your CMA as the CMA currently has.)

$ sudo -i
# smcli Image_Disk_Create_DM -T TEST -v 104 -t 3390 -a AUTOG -u 1 -r XCATECKD -m MR -z 1000 -f 0

Adding a disk to TEST’s directory entry... Done

2. Bring online the disk you just added.

# chccwdev -e 104

Setting device 0.0.0104 online
Done

3. Issue the lsdasd command to search for the added dasd (in this example it is "dasdg"); then check the total size of the disk (6.5 GB).

# lsdasd
Bus-ID     Status  Name   Device  Type  BlkSz  Size    Blocks
==============================================================================
0.0.0101   active  dasda  94:0    ECKD  4096   2347MB  600840
0.0.0102   active  dasdb  94:4    ECKD  4096   2347MB  600840
0.0.0103   active  dasdc  94:8    ECKD  4096   2347MB  601020
0.0.0193   active  dasdd  94:12   ECKD  4096   0MB     180
0.0.0400   active  dasde  94:16   ECKD  4096   1406MB  360000
0.0.0191   active  dasdf  94:20   ECKD  4096   0MB     180
0.0.0104   active  dasdg  94:24   ECKD  4096   703MB   180000

# df -h
Filesystem                    Size  Used  Avail  Use%  Mounted on
/dev/mapper/opencloud-system  6.5G  1.8G  4.4G   29%   /
devtmpfs                      2.0G     0  2.0G    0%   /dev
tmpfs                         2.0G     0  2.0G    0%   /dev/shm
tmpfs                         2.0G  8.3M  2.0G    1%   /run
tmpfs                         2.0G     0  2.0G    0%   /sys/fs/cgroup

4. Format the disk at /dev/dasdg by issuing the following command:

# dasdfmt -b 4096 -d cdl -y -f /dev/dasdg

Finished formatting the device.
Rereading the partition table... ok

5. Add a partition table for dasdg:

# fdasd -a /dev/dasdg

reading volume label ..: VOL1
reading vtoc ..........: ok

auto-creating one partition for the whole disk...
writing volume label...
writing VTOC...
rereading partition table..


6. Use the pvscan command to scan all physical volumes on the system. Note that dasdg is not in the opencloud volume group.

# pvscan
PV /dev/dasda1   VG opencloud   lvm2 [2.29 GiB / 0 free]
PV /dev/dasdb1   VG opencloud   lvm2 [2.29 GiB / 0 free]
PV /dev/dasdc1   VG opencloud   lvm2 [2.29 GiB / 0 free]
Total: 3 [6.87 GiB] / in use: 3 [6.87 GiB] / in no VG: 0 [0 ]

7. Define the disk dasdg to be a new physical volume:

# pvcreate /dev/dasdg1

Physical volume "/dev/dasdg1" successfully created

8. Extend the existing opencloud volume group to include the newly created physical volume (/dev/dasdg1).

# vgextend opencloud /dev/dasdg1

Volume group "opencloud" successfully extended

9. Dynamically extend the logical volume size to include the newly created physical volume.

# lvextend -l +100%FREE /dev/opencloud/system

Size of logical volume opencloud/system changed from 6.68 GiB (1710 extents) to 7.36 GiB (1885 extents).
Logical volume system successfully resized

10. Resize the logical volume to include the newly created physical volume.

# lvresize -l +100%FREE /dev/opencloud/system

New size (1885 extents) matches existing size (1885 extents)
Run `lvresize --help' for more information.

11. Resize the file system of the root disk.

# resize2fs /dev/opencloud/system

resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/opencloud/system is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/opencloud/system is now 1930240 blocks long.

12. Issue the following commands to verify that the physical volume was successfully created, and that the logical volume was successfully extended.

# pvscan

PV /dev/dasda1   VG opencloud   lvm2 [2.29 GiB / 0 free]
PV /dev/dasdb1   VG opencloud   lvm2 [2.29 GiB / 0 free]
PV /dev/dasdc1   VG opencloud   lvm2 [2.29 GiB / 0 free]
PV /dev/dasdg1   VG opencloud   lvm2 [700.00 MiB / 0 free]
Total: 4 [7.55 GiB] / in use: 4 [7.55 GiB] / in no VG: 0 [0 ]

# lvdisplay

--- Logical volume ---
LV Path                /dev/opencloud/boot
LV Name                boot
VG Name                opencloud
LV UUID                rrQElh-YLDW-WBXf-77CJ-d5Fy-4TgN-JBHHxC
LV Write Access        read/write
LV Creation host, time bldserv2.endicott.ibm.com, 2016-02-03 09:12:14 +0000
LV Status              available
# open                 0
LV Size                192.00 MiB
Current LE             48
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:0


--- Logical volume ---
LV Path                /dev/opencloud/system
LV Name                system
VG Name                opencloud
LV UUID                HZu67q-6IsR-sl82-aOET-TybB-NxWv-ms8lFC
LV Write Access        read/write
LV Creation host, time bldserv2.endicott.ibm.com, 2016-02-03 09:12:17 +0000
LV Status              available
# open                 1
LV Size                7.36 GiB
Current LE             1885
Segments               4
Allocation             inherit
Read ahead sectors     auto
- currently set to     1024
Block device           253:1

13. Note: Do not reboot the CMA while performing the remaining steps in this procedure.

Issue the following command to edit the zipl.conf file:

# vi /etc/zipl.conf

In the parameters options line, add the disk number you created; in this example, the new disk is "rd.DASD=0.0.0104". (The parameters options line is shown on multiple lines here for readability.)

[defaultboot]
defaultauto
prompt=0
timeout=1
default=OpenCloud
target=/boot
[OpenCloud]
target=/boot
image=/boot/vmlinuz-3.10.0-229.20.1.el7.zvm6_3_2.4.s390x
parameters="elevator=deadline audit_enable=0 audit=0 audit_debug=0 selinux=0 root=/dev/mapper/opencloud-system
rd.DASD=0.0.0101 rd.DASD=0.0.0102 rd.DASD=0.0.0103 rd.DASD=0.0.0104 loglevel=1 rd.lvm.lv=opencloud/boot
rd.lvm.lv=opencloud/system" ramdisk=/boot/initramfs-3.10.0-229.20.1.el7.zvm6_3_2.4.s390x.img

Save the changes you made to the zipl.conf file.

14. Mount the boot volume into the /boot folder:

# mount /dev/opencloud/boot /boot

15. Issue the following command to have the changes take effect during the next IPL:

# zipl -V

Using config file '/etc/zipl.conf'
Run /lib/s390-tools/zipl_helper.device-mapper /boot
Target device information
Device..........................: 5e:00 *)
Device name.....................: dasda
Device driver name..............: device-mapper
Type............................: disk device
Disk layout.....................: ECKD/compatible disk layout *)
Geometry - heads................: 15 *)
Geometry - sectors..............: 12 *)
Geometry - cylinders............: 3338 *)
Geometry - start................: 280 *)
File system block size..........: 4096
Physical block size.............: 4096 *)
Device size in physical blocks..: 549755229212
*) Data provided by script.

Building bootmap in '/boot'
Building menu 'zipl-automatic-menu'
Adding #1: IPL section 'OpenCloud' (default)
initial ramdisk...: /boot/initramfs-3.10.0-229.20.1.el7.zvm6_3_2.4.s390x.img
kernel image......: /boot/vmlinuz-3.10.0-229.20.1.el7.zvm6_3_2.4.s390x
kernel parmline...: 'elevator=deadline audit_enable=0 audit=0 audit_debug=0 selinux=0


root=/dev/mapper/opencloud-system rd.DASD=0.0.0101 rd.DASD=0.0.0102 rd.DASD=0.0.0103 rd.DASD=0.0.0104
loglevel=1 rd.lvm.lv=opencloud/boot rd.lvm.lv=opencloud/system'

component address:
kernel image....: 0x00010000-0x00392fff
parmline........: 0x00393000-0x00393fff
initial ramdisk.: 0x003a0000-0x00c97fff
internal loader.: 0x0000a000-0x0000cfff

Preparing boot menu
Interactive prompt......: disabled
Menu timeout............: 1 seconds
Default configuration...: 'OpenCloud'

Preparing boot device: dasda.
Syncing disks...
Done.

16. Issue df -h again to verify that the root disk size is now 7.2 GB.

Filesystem                    Size  Used  Avail  Use%  Mounted on
/dev/mapper/opencloud-system  7.2G  1.8G  5.1G   26%   /
devtmpfs                      2.0G     0  2.0G    0%   /dev
tmpfs                         2.0G     0  2.0G    0%   /dev/shm
tmpfs                         2.0G  8.3M  2.0G    1%   /run
tmpfs                         2.0G     0  2.0G    0%   /sys/fs/cgroup
/dev/mapper/opencloud-boot    170M   15M  142M   10%   /boot

Before you reboot the CMA, verify that the zipl.conf file includes the new DASD, and verify that the zipl -V command ran successfully.

Changing Configuration Options at Runtime

You can reload (or "mutate") the OpenStack debug configuration option at runtime without a service restart. For more information, see:

http://docs.openstack.org/newton/config-reference/mutable.html

z/VM supports changing only the debug option without restarting the compute node, using only the procedure described in the above URL.
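For example, one possible sequence (a sketch of the procedure in the OpenStack documentation, not a z/VM-specific command set) is to change the debug option in the configuration file and then signal the running service to reload it:

# In /etc/nova/nova.conf:
#   [DEFAULT]
#   debug = True
# Then send SIGHUP to the running nova-compute process so it mutates the option:
kill -HUP $(pgrep -f nova-compute)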


Appendix H. Troubleshooting

This appendix discusses the recommended troubleshooting procedures for problems encountered while using the OpenStack for z/VM support. It is oriented specifically towards the compute node and z/VM. It is not intended to discuss problems caused above these layers by products which may include the z/VM compute node.

The troubleshooting discussion attempts to constrain the problem set by first dividing the problems by the major function which is affected. The subsections include:

v “prep_zxcatIVP Issues” on page 190
v “zxcatIVP Issues” on page 190
v “Exchanging SSH Key Issues” on page 191
v “Compute Node Startup Issues” on page 191
v “Deployment Issues” on page 194
v “Capture Issues” on page 205
v “Importing Image Issues” on page 206
v “CMA Issues” on page 207
v “Reconfiguration Issues” on page 207
v “xCAT Management Node Issues” on page 208
v Migration and resize issues – refer to “Deployment Issues” on page 194 and “Capture Issues” on page 205
v “Alternative Deployment Provisioning Issues” on page 209.

With any issue, we recommend running the IVP programs as the first step to isolate the problem. An incorrect environment setting, environment setup, or a change in the status of a z/VM server could cause a problem which surfaces later. For example, if the z/VM system runs out of disk space used for virtual servers, you might not encounter the problem when you first ran the IVP after install of the OpenStack for z/VM support.

Note: If you want to run the mysql program to initiate a manual update, issue the following command:

mysql -u root -p

As a password, use the current password used to log in as "admin" with the Horizon GUI. The original value of the password for "admin" was specified in the cmo_admin_password property in the DMSSICMO COPY file. You should have changed this password the first time you logged in as "admin" with the Horizon GUI.

Logging within the Compute Node

Depending upon the function being driven, there are various logs within the compute node which would be helpful to review for problem determination. Consult Logging and Monitoring in the online OpenStack documentation for information on the location of the logs and a general overview of logging.

Information on how to control logging (i.e. increase the log level) is provided in the Manage Logs section of the online OpenStack documentation.

The following logs in OpenStack are most often useful in identifying and solving problems:

/var/log/nova/nova-compute.log
/var/log/nova/nova-conductor.log
/var/log/neutron/zvm-agent.log


In systems which are running multiple compute nodes, logs which are unique to a particular z/VM host are identified with the host name that was specified when the compute node was installed, added to the name of the log file. For example:

/var/log/neutron/zvm-agent-hostzvm.log
/var/log/nova/nova-compute-hostzvm.log

Where "hostzvm" is the host name.

prep_zxcatIVP Issues

One of two types of issues can occur during a run of the prep_zxcatIVP script. These are indicated by “Info” or “Warning” messages. The best method to resolve these issues is to review Appendix A, “Installation Verification Programs,” on page 139, Chapter 3, “z/VM Configuration,” on page 21, and Chapter 5, “OpenStack Configuration,” on page 25.

It is recommended that each type of message be reviewed. It is possible that the OpenStack functions may appear to work, but that you have not yet encountered the condition in which the problem indicated in the warning or info message will cause a function to not operate as desired.

Warning messages should always be considered a problem. The cause of those issues should be addressed.

Info messages may not be a problem. They are intended to inform you of a condition that you may not realize exists. These messages often indicate that a property was not specified and a default might be in use. In general, it is recommended that you specify the property to avoid the info messages. This makes the output of future runs of the prep_zxcatIVP program easier to review, having eliminated the messages warning about defaults.

zxcatIVP Issues

Issues identified by the zxcatIVP.pl script are primarily related to incorrect settings in the compute node configuration properties or the z/VM environment. Messages generated by the zxcatIVP.pl script are displayed in the output of the IVP run.

A common error identified by the zxcatIVP.pl script is the existence of invalid xCAT nodes for the z/VM system and/or the ZHCP server. Tests exist to verify the existence of the host node and the related ZHCP server. The tests indicate the name of the nodes. If you see an unexpected node name for either the z/VM host (for example, zvmnode) or the ZHCP server, you may have started the XCAT and ZHCP virtual machines prior to fully configuring DMSSICNF COPY. This will cause the XCAT virtual machine's start up scripts to create xCAT nodes, with default values, that are not your desired nodes.

You can remove the xCAT nodes using the xCAT GUI by going to the Nodes->Nodes panel.

v If you want to remove an unexpected z/VM host node, select the "hosts" group in the left frame to see the z/VM hosts. Next, check the box in front of the host node which you want to remove. Select the Action pulldown and the "delete" option. On the subsequent panel, make sure that the "Only delete entries in database" checkbox is checked and confirm that you want to delete the entry.

v If you want to remove an unexpected ZHCP node, select the "all" group in the left frame to see the virtual servers. Next, check the box in front of the server you want to remove. Select the Action pulldown and the "delete" option. On the subsequent panel, make certain that the "Only delete entries in database" checkbox is checked and confirm that you want to delete the entry.

See the “Using the Script Panel in the xCAT User Interface” section in the “xCAT Utilities and Common Procedures” appendix in z/VM: Systems Management Application Programming for more information on using the xCAT GUI.


Exchanging SSH Key Issues

If using the xCAT Nodes/Nodes/Configuration/Unlock to unlock a server fails, try an SSH command in the Script panel as described below.

Here's an example of trying to do an unlock on demonode:

To check for an SSH problem, go to the Script panel for the xCAT MN node (xcat), as described in “Using the Script Panel in the xCAT User Interface” on page 179. In the script box, try to do an SSH to that node by entering:

ssh root@demonode pwd

If you see this response:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!

or:

Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Then remove the old SSH fingerprint of that node by issuing the following command in the script box:

makeknownhosts demonode -r

You can now go back and do the original unlock to demonode that failed.

Compute Node Startup Issues

Problems caused during the startup of the compute node are most often related to OpenStack services encountering a problem. Verify that the necessary OpenStack services are running and check the logs for errors and exceptions. See “Logging within the Compute Node” on page 189 for more information on the names of the logs, and controlling the information in the logs.

Figure 84. Unlock Panel for Node Checkbox on demonode


OpenStack Services Related to Startup

To verify that OpenStack nova services are running, issue:

sudo nova service-list

to obtain the list of nova services. Each of the following nova services should have one line of status output that shows the status enabled and state is :-) (smiley face emoticon):

v nova-api
v nova-compute
v nova-conductor
v nova-scheduler

To list the OpenStack neutron services that are running, issue:

source $HOME/openrc
neutron agent-list

Each neutron service should have one line of status output that shows the status enabled and state is :-) (smiley face emoticon):

In addition, issue:

ps -ef | grep service_name

to verify that a process named service_name is actively running.

Logs Related to Startup

The following logs are most likely to contain entries related to startup issues:

v /var/log/nova/nova-compute.log
v /var/log/nova/nova-conductor.log

See “Logging within the Compute Node” on page 189 for more information on the names of the logs, and controlling the information in the logs.

Compute Log

The following exceptions or messages can appear in the compute log:

v ZVMXCATRequestFailed message
v ZVMXCATInternalError exception

ZVMXCATRequestFailed Message

v Error Message:

Request to xCAT server n.n.n.n failed: {'status': 403, 'reason': 'Forbidden', 'message': 'Invalid nodes and/or groups in noderange'}

Explanation: The target VM instance or z/VM host does not exist, or is not defined in the xCAT database.

User Action: Make certain that the nova.conf option zvm_host has a correct value. It should be the same as your z/VM system ID. Make certain that the VM instance or z/VM host that you requested is defined in the xCAT database. Also review the XCAT_MN_admin property in DMSSICNF COPY. The user name properties should match.

User Log Example:


v Error Message:

Request to xCAT server n.n.n.n failed Authentication failure: {'status': 401, 'reason': 'Unauthorized', 'message': 'Authentication failure'}

Explanation: An incorrect xCAT user and/or password is specified in the configuration files.

User Action: Check nova.conf options zvm_xcat_username and zvm_xcat_password. Review the values in both the /etc/nova/nova.conf and the /etc/neutron/neutron.conf files.

User Log Example:

ZVMXCATInternalError Exception

v Error Message:

Error during ComputeManager.update_available_resource

Explanation: The nova compute manager will run a periodic task (update_available_resource) to obtain information on the available z/VM hypervisor resources. This includes disk info. xCAT will eventually call the directory manager to get the disk pool info. An error can occur if the directory manager encounters problems.

User Action: Contact the z/VM system administrator to check if the directory manager is configured properly.

2013-08-26 15:57:23.803 28134 TRACE nova Traceback (most recent call last):
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/bin/nova-compute", line 9, in module
2013-08-26 15:57:23.803 28134 TRACE nova load_entry_point('nova==2013.2.a3.g62141be', 'console_scripts', 'nova-compute')()
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/cmd/compute.py", line 68, in main
2013-08-26 15:57:23.803 28134 TRACE nova db_allowed=False)
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/service.py", line 260, in create
2013-08-26 15:57:23.803 28134 TRACE nova db_allowed=db_allowed)
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/service.py", line 142, in __init__
2013-08-26 15:57:23.803 28134 TRACE nova self.manager = manager_class(host=self.host, *args, **kwargs)
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 402, in __init__
2013-08-26 15:57:23.803 28134 TRACE nova self.driver = driver.load_compute_driver(self.virtapi, compute_driver)
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/driver.py", line 1003, in load_compute_driver
2013-08-26 15:57:23.803 28134 TRACE nova virtapi)
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 52, in import_object_ns
2013-08-26 15:57:23.803 28134 TRACE nova return import_class(import_value)(*args, **kwargs)
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 178, in __init__
2013-08-26 15:57:23.803 28134 TRACE nova self._host_stats = self.get_host_stats(refresh=True)
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 1138, in get_host_stats
2013-08-26 15:57:23.803 28134 TRACE nova self._host_stats = self.update_host_status()
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 1103, in update_host_status
2013-08-26 15:57:23.803 28134 TRACE nova info = self._get_host_inventory_info(host)
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 1167, in _get_host_inventory_info
2013-08-26 15:57:23.803 28134 TRACE nova inv_info_raw = zvmutils.xcat_request("GET", url)['info'][0]
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 236, in xcat_request
2013-08-26 15:57:23.803 28134 TRACE nova resp = conn.request(method, url, body, headers)
2013-08-26 15:57:23.803 28134 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 229, in request
2013-08-26 15:57:23.803 28134 TRACE nova msg=err)
2013-08-26 15:57:23.803 28134 TRACE nova ZVMXCATRequestFailed: Request to xCAT server 9.12.27.140 failed: {'status': 403, 'reason': 'Forbidden', 'message': 'Invalid nodes and/or groups in noderange: scezvm3'}

2013-09-23 04:14:43.035 CRITICAL nova [-] Request to xCAT server 9.60.29.96 failed: {'status': 401, 'reason': 'Unauthorized', 'message': 'Authentication failure'}
2013-09-23 04:14:43.035 TRACE nova Traceback (most recent call last):
2013-09-23 04:14:43.035 TRACE nova File "/usr/bin/nova-compute", line 10, in module
2013-09-23 04:14:43.035 TRACE nova sys.exit(main())
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/cmd/compute.py", line 68, in main
2013-09-23 04:14:43.035 TRACE nova db_allowed=False)
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/service.py", line 257, in create
2013-09-23 04:14:43.035 TRACE nova db_allowed=db_allowed)
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/service.py", line 139, in __init__
2013-09-23 04:14:43.035 TRACE nova self.manager = manager_class(host=self.host, *args, **kwargs)
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 415, in __init__
2013-09-23 04:14:43.035 TRACE nova self.driver = driver.load_compute_driver(self.virtapi, compute_driver)
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/driver.py", line 1049, in load_compute_driver
2013-09-23 04:14:43.035 TRACE nova virtapi)
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 52, in import_object_ns
2013-09-23 04:14:43.035 TRACE nova return import_class(import_value)(*args, **kwargs)
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 178, in __init__
2013-09-23 04:14:43.035 TRACE nova self._host_stats = self.get_host_stats(refresh=True)
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 1143, in get_host_stats
2013-09-23 04:14:43.035 TRACE nova self._host_stats = self.update_host_status()
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 1108, in update_host_status
2013-09-23 04:14:43.035 TRACE nova info = self._get_host_inventory_info(host)
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 1172, in _get_host_inventory_info
2013-09-23 04:14:43.035 TRACE nova inv_info_raw = zvmutils.xcat_request("GET", url)['info'][0]
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 235, in xcat_request
2013-09-23 04:14:43.035 TRACE nova resp = conn.request(method, url, body, headers)
2013-09-23 04:14:43.035 TRACE nova File "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 228, in request
2013-09-23 04:14:43.035 TRACE nova msg=err)
2013-09-23 04:14:43.035 TRACE nova ZVMXCATRequestFailed: Request to xCAT server 9.60.29.96 failed: {'status': 401, 'reason': 'Unauthorized', 'message': 'Authentication failure'}


User Log Example:

Deployment Issues

Most deployment issues are due to encountering resource constraints in the z/VM hypervisor or activation problems. Verify that the necessary OpenStack services are running, and check the logs for exceptions or error messages.

OpenStack Services Related to Deployment

To verify that OpenStack nova services are running, issue:sudo nova service-list

to obtain the list of nova services. Each of the following services should have one line of status output:

v nova-api
v nova-compute
v nova-conductor
v nova-scheduler

To obtain a list of glance and neutron services, issue:

ps -ef | grep glance-
ps -ef | grep neutron-

2014-02-19 20:21:24.275 37880 AUDIT nova.compute.resource_tracker [-] NV-313322F Auditing locally available compute resources
2014-02-19 20:21:26.549 37880 ERROR nova.openstack.common.periodic_task [-] NV-FC817DE Error during ComputeManager.update_available_resource: Error returned from xCAT: {"data":[{"errorcode":["1"],"error":["tivlp40: (Error) Unable to obtain disk pool information for SCOLIST, additional information: Failed\n Return Code: 8\n Reason Code: 241\n Description: Internal communication error\n"]}]}
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py", line 180, in run_periodic_tasks
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task task(self, context)
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4903, in update_available_resource
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task rt.update_available_resource(context)
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 246, in inner
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task return f(*args, **kwargs)
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py", line 274, in update_available_resource
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task resources = self.driver.get_available_resource(self.nodename)
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 887, in get_available_resource
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task stats = self.update_host_status()[0]
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 1177, in update_host_status
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task info = self._get_host_inventory_info(host)
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 1241, in _get_host_inventory_info
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task dp_info = self._get_diskpool_info(host)
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 1270, in _get_diskpool_info
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task res_dict = zvmutils.xcat_request("GET", url)
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 243, in xcat_request
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task return load_xcat_resp(resp['message'])
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 271, in decorated_function
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task return function(*arg, **kwargs)
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 381, in load_xcat_resp
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task raise exception.ZVMXCATInternalError(msg=message)
2014-02-19 20:21:26.549 37880 TRACE nova.openstack.common.periodic_task ZVMXCATInternalError: Error returned from xCAT: {"data":[{"errorcode":["1"],"error":["tivlp40: (Error) Unable to obtain disk pool information for SCOLIST, additional information: Failed\n Return Code: 8\n Reason Code: 241\n Description: Internal communication error\n"]}]}


Each of the following services should have one line of status output:

v glance-api
v glance-registry
v neutron-server
v neutron-zvm-agent

In addition, issue:ps -ef | grep service_name

to verify that a process named service_name is actively running. Depending on network topologies, other services may need to be running to support other OpenStack network features – for example, DHCP, L3, and so on. To simplify network configuration, you need to run only those services that are needed. For example, if you choose to use a FLAT-only network with a public IP pool, you need to run only the neutron-server and the neutron-zvm-agent.

Logs Related to Deployment

The following logs related to nova, neutron, and OpenStack message processing will be most helpful in debugging deploy issues:

v /var/log/nova/nova-compute.log
v /var/log/neutron/zvm-agent.log
v /var/log/nova/nova-conductor.log

See “Logging within the Compute Node” on page 189 for more information on the names of the logs, and controlling the information in the logs.

After a deployment is finished, you may occasionally find that some expected configurations have not been done on that system. In this case, the following logs may contain useful debug information:

v /var/log/boot.log for Red Hat, or /var/log/boot.msg for SUSE
v /var/log/cloud-init-output.log (if you installed cloud-init as the underlying AE)
v /opt/ibm/scp/scp-cloud-init.log (if you installed scp-cloud-init as the underlying AE)

Scheduler Log

The first step is to verify that the request was sent to the compute node which supports the z/VM system where the virtual servers are to be created. The scheduler log may indicate the possible cause.

NoValidHost Exception

If the OpenStack status of the server being deployed is "NoValidHost" then the request was never received by the compute node. A possible cause of this error is that the OpenStack scheduler does not know about the compute node. The host property in /etc/nova/nova.conf specifies the ID of the host which the compute node manages. Services in OpenStack are case sensitive, so the value should match the case of the host as specified in the output of the nova hypervisor-list command:

nova hypervisor-list
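To compare the two values side by side, you might (illustratively) display the configured host property alongside the hypervisor list:

grep ^host /etc/nova/nova.conf
nova hypervisor-list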

Compute Log

The following exceptions or messages can appear in the compute log:

v ZVMImageError exception
v “Failed to create z/VM userid” message
v ZVMXCATDeployNodeFailed exception
v ZVMNetworkError exception
v InstancePowerOnFailure exception
v “Unable to deploy the image” message


ZVMImageError Exception

v Error Message:

Image error: This is not a valid zVM image.

Explanation: One or more required image properties are missing in Glance.

User Action: Issue the glance image-show command to verify that all of the following image properties have the values shown:

"hypervisor_type" should be "zvm""architecture" should be "s390x""container-format" should be "bare""disk-format" should be "raw""image_file_name" should be filename.img, for example "0100.img""image_type_xcat" should be "linux""os_name" should be "Linux""os_version" should be the OS version of your capture source node. Currently, only Red Hat andSUSE type images are supported. For a Red Hat type image, you can specify the OS version asrhelx.y, redhatx.y, or red hatx.y, where x.y is the release number. For a SUSE type image, you canspecify the OS version as slesx.y or susex.y, where x.y is the release number. For an Ubuntu typeimage, you can specify the OS version as ubuntux.y, where x.y is the release number. (If you don'tknow the real value, you can get it from the osvers property value in the manifest.xml file.)"provisioning_method" should be "netboot"

Issue the glance image-update --property command to set the appropriate values.
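For illustration only, a single property might be set with a command of the following form, where <image-id> is a placeholder for your Glance image ID:

glance image-update --property hypervisor_type=zvm <image-id>

User Log Example: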

v Error Message:

Import the image bundle to xCAT MN failed

Explanation: The xCAT management node cannot import the image bundle via SSH.

User Action: Ensure that the xCAT root user's public key is added to the nova user's authorized_keys file on your nova-compute server.

User Log Example:

v Error Message:

Image error: Request to xCAT server xxx.xxx.xxx.xxx failed...Invalid nodes and/or groups in noderange

2013-09-01 21:20:43.974 22631 ERROR nova.compute.manager [req-3efb4789-178e-4212-89dc-d30fbd620bc0 40cb5af478224cbf81542911b36ceb909e4869de9e184bda867684f2e3978f06] [instance: bd78643c-c895-46be-8c06-83a0a924c2f4] Error: ['Traceback (most recent call last):\n',
' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1010, in _build_instance\n set_access_ip=set_access_ip)\n',
' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1325, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1311, in _spawn\n block_device_info)\n',
' File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 275, in spawn\n self._zvm_images.zimage_check(image_meta)\n',
' File "/usr/lib/python2.6/site-packages/nova/virt/zvm/imageop.py", line 732, in zimage_check\n raise exception.ZVMImageError(msg=msg)\n',
'ZVMImageError: Image error: This is not a valid zVM image.\n']

ERROR nova.compute.manager [req-0429dffd-b988-401b-b026-b4a93db1fa60 07808d8602d74754b1c3f4f4c29d5596993489f184a14add8e4831f3064539c2] [instance: 6cdd4c67-9957-4f9b-9682-32648c1ee6ca] Error: ['Traceback (most recent call last):\n',
' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1108, in _build_instance\n set_access_ip=set_access_ip)\n',
' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1388, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1374, in _spawn\n block_device_info)\n',
' File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 319, in spawn\n image_name, disk_file)\n',
' File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 493, in _import_image_to_xcat\n image_profile)\n',
' File "/usr/lib/python2.6/site-packages/nova/virt/zvm/imageop.py", line 404, in put_image_to_xcat\n raise exception.ZVMImageError(msg=msg)\n',
'ZVMImageError: Image error: Import the image bundle to xCAT MN failed: Error returned from xCAT: {"data":[{"data":["Obtaining the image bundle from the remote system"]},{"errorcode":["1"],"error":["Unable to copy the image bundle /opt/stack/data/nova/images/spawn_tmp/20130821133315_fbaglodenimage.tgz from the remote host"]}]}\n']


Explanation: Nova will do an xCAT free space check when importing the image from Glance into xCAT. This is performed by sending a request and specifying the value defined in zvm_xcat_master as the xCAT node to use when obtaining information. In /etc/nova/nova.conf, zvm_xcat_master should be set to the xCAT management node's node name (for example, xcat) and not to the IP address.

User Action: In /etc/nova/nova.conf, correct the node name specified for the zvm_xcat_master property and restart the nova-compute service.
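For reference, a sketch of the corrected setting; the [DEFAULT] section placement and the systemd service name are assumptions that can differ by distribution:

# /etc/nova/nova.conf
[DEFAULT]
zvm_xcat_master = xcat        # the xCAT MN node name, not its IP address

systemctl restart openstack-nova-compute

User Log Example: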

“Failed to create z/VM userid” Message

v Error Message:

Adding disk to nnnnnnnn’s active configuration...Failed Return Code: 200 Reason Code: 12 Description: Image not active\n

Explanation: It is possible that the same IP address is being used by another existing server instance. xCAT thought the newly created server was active, but it is not.

User Action: Locate the virtual server that is using the IP address, and then shut down or purge the server instance that is using the same IP address.

2014-03-03 16:05:21.856 2507 ERROR nova.compute.manager[req-1ecb87a1-d3ea-4767-8c98-d340ed9b8f13 a7fbf5a1eb5a4c9f8dd6e0e1d479b42e 816c88b5613f49dcbd6ebb35ff737c5e][instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216] Error: Image error: Request to xCAT server 9.42.46.130 failed:{’status’: 403, ’reason’: ’Forbidden’, ’message’: ’Invalid nodes and/or groups in noderange: 9.42.46.130’}

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]Traceback (most recent call last):

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1045, in _build_instance

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]set_access_ip=set_access_ip)

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1444, in _spawn

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]LOG.exception(_(’Instance failed to spawn’), instance=instance)

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1430, in _spawn

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]block_device_info)

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 404, in spawn

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]zvm_inst.delete_xcat_node()

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 344, in spawn

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]tmp_file_fn)

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 549, in _import_image_to_xcat

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]CONF.zvm_xcat_master)

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/imageop.py", line 397, in check_space_imgimport_xcat

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]os.remove(tar_file)

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/imageop.py", line 384, in check_space_imgimport_xcat

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]xcat_free_space_threshold, zvm_xcat_master)

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/imageop.py", line 605, in get_free_space_xcat

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]return xcat_free_space_threshold

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]self.gen.throw(type, value, traceback)

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 291, in except_xcat_call_failed_and_reraise

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]raise exc(**kwargs)

2014-03-03 16:05:21.856 2507 TRACE nova.compute.manager [instance: 1f60ab7c-f03d-4b8c-b2a0-a6f0c9681216]ZVMImageError: Image error: Request to xCAT server 9.42.46.130 failed: {’status’: 403, ’reason’: ’Forbidden’,’message’: ’Invalid nodes and/or groups in noderange: 9.42.46.130’}


User Log Example:

v Error Message:

Adding a disk to FTEST00B’s directory entry... Failed\nftest00b: Return Code: 596\nftest00b: Reason Code: 3610\nftest00b: Description: Internal directory manager error - product-specific return code: 3610\n

Explanation: The size of the root disk for the instance being deployed is greater than the available contiguous disk space in the directory manager's disk pool.

User Action: Either add larger contiguous disk space to the directory manager's disk pool, or deploy an image with a smaller root disk. If the size of the root disk for the image is known to be available in the disk pool, then specify a flavor with a root disk size of 0, which generates a request for a disk from the disk pool that matches the image's size. Otherwise, choose a smaller image or increase the space in the disk pool as previously suggested.

User Log Example:

ZVMXCATDeployNodeFailed Exception

v Error Message:

(Error) Unable to deploy the image to nnnnnnnn 0100. Reason: Failed to connect disk: nnnnnnnn:0100

Explanation: The ZHCP agent attaches the disk to itself so that it can copy the image to the disk. The attempt to attach the disk failed. This can occur if the original configuration of ZHCP did not perform the step that permits the ZHCP agent to link disks, if the DASD volume on which the minidisk resides is not varied online to the z/VM system, or if the minidisk was not correctly released from a previously deleted instance.

User Action: Perform the following actions:
– Review the z/VM xCAT configuration to ensure that ZHCP is allowed to link disks.
– Verify that all defined volumes in the directory manager's disk pool are online.
– Verify that the volumes and the sizes specified for the directory manager's disk pool are valid.
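For the two volume-related checks, the DASD volumes can be inspected from a suitably privileged z/VM user ID; a sketch, where 1234 is a placeholder real device number:

CP QUERY DASD           (list DASD volumes and their status)
CP VARY ONLINE 1234     (bring a needed volume online if it is offline)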

2013-09-01 06:27:15.970 23277 ERROR nova.virt.zvm.instance [req-f758fccf-5e3d-48cb-86f0-0d0efe90cb71 bcd9adddbbf8413f9f518a4efad182acab2c805b83514fbd8f5fb93661c2fca4] Failed to create z/VM userid: Error returned from xCAT:{"data":[{"errorcode":["1"],"error":["gcb000bd: Adding a disk to GCB000BD’s directory entry... Done\ngcb000bd: Adding disk toGCB000BD’s active configuration... Failed\ngcb000bd: Return Code: 200\ngcb000bd: Reason Code: 12\ngcb000bd: Description: Imagenot active\n"]}]}

2014-03-04 02:57:09.684 ERROR nova.compute.manager [req-a8b0dea3-fcb7-4053-aff4-b1d19c624c7b admin admin][instance: 209b5cdc-2fb7-450d-8479-914ab89360a3] NV-BA150B8 Instance failed to spawn

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]Traceback (most recent call last):

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1498, in _spawn

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]block_device_info)

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 421, in spawn

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]block_device_info)

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]six.reraise(self.type_, self.value, self.tb)

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 354, in spawn

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]zvm_inst.create_userid(block_device_info, image_meta)

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/instance.py", line 263, in create_userid

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]msg=msg)

2014-03-04 02:57:09.684 TRACE nova.compute.manager [instance: 209b5cdc-2fb7-450d-8479-914ab89360a3]ZVMXCATCreateUserIdFailed: Create OPNCLOUD user id ftest00b failed: Failed to create z/VM userid: Error returnedfrom OPNCLOUD:{"data":[{"errorcode":["1"],"error":["ftest00b: Adding a disk to FTEST00B’s directory entry... Failed\nftest00b:

Return Code: 596\nftest00b: Reason Code: 3610\nftest00b: Description: Internal directory manager error- product-specific return code : 3610\n"]}]}


– Make sure all minidisks of deleted virtual server instances were detached from ZHCP.

User Log Example:

v Error Message:

(Error) Unable to deploy the image to OS000079 0100. Reason: Target disk is too small for specified image

Explanation: The image being deployed contains a root disk that is larger than the root disk size specified in the flavor.

User Action: Choose a flavor with a root disk size that is at least as large as the source disk of the image, or a flavor with a root disk size of 0, which causes a disk of exactly the size required by the image to be obtained.
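A sketch of defining such a flavor with the nova client; the flavor name is illustrative (the arguments are name, ID, RAM in MB, root disk in GB, and vCPUs), and a root disk size of 0 makes the deploy request a disk matching the image's own size:

nova flavor-create zvm-rootdisk0 auto 2048 0 1

User Log Example: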

v Error Message:

(Error) Unable to deploy the image to xxxxxxxx 0100. Reason: Failed deploying disk image 0100.img at stage(rc): dd(141), zcat(141), ckddecode(x)

where:

xxxxxxxx is the name of the virtual machine being deployed.

0100 is the device number of the disk that is being updated.

2013-08-29 08:17:20.928 2554 ERROR nova.scheduler.filter_scheduler [req-906d6cb0-c0c2-4944-8fd9-8735a4aa2859bcd9adddbbf8413f9f518a4efad182ac ab2c805b83514fbd8f5fb93661c2fca4] [instance: 40c93f15-2594-40b0-84fd-dc81efa91b0d] Error from lasthost: scezvm3 (node SCEZVM3): [u’Traceback (most recent call last):\n’, u’ File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1010, in _build_instance\n set_access_ip=set_access_ip)\n’, u’ File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1325, in _spawn\n LOG.exception(_(\’Instance failed to spawn\’), instance=instance)\n’, u’ File"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1311, in _spawn\n block_device_info)\n’, u’ File"/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 392, in spawn\n self.destroy(instance, network_info,block_device_info)\n’, u’ File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 350, in spawn\nzvm_inst.deploy_node(deploy_image_name, transportfiles)\n’, u’ File "/usr/lib/python2.6/site-packages/nova/virt/zvm/instance.py",line 533, in deploy_node\n zvmutils.xcat_request("PUT", url, body)\n’, u’ File "/usr/lib64/python2.6/contextlib.py", line 34, in__exit__\n self.gen.throw(type, value, traceback)\n’, u’ File "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 285,in except_xcat_call_failed_and_reraise\n raise exc(**kwargs)\n’, u’ZVMXCATDeployNodeFailed: Deploy image on node gcb000a8 failed:Error returned from xCAT: {"data":[{"info":["gcb000a8: Deploying the image using the zHCPnode"]},{"errorcode":["1"],"error":["gcb000a8: (Error) Unable to deploy the image to GCB000A8 0100. Reason: Failed to connect disk:GCB000A8:0100"]}]}\n’]

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]Traceback (most recent call last):

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1430, in _spawn

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]block_device_info)

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 397, in spawn

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]self.destroy(instance, network_info, block_device_info)

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 358, in spawn

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]zvm_inst.deploy_node(deploy_image_name, transportfiles)

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/instance.py", line 520, in deploy_node

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]zvmutils.xcat_request("PUT", url, body)

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]self.gen.throw(type, value, traceback)

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 291, in except_xcat_call_failed_and_reraise

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]raise exc(**kwargs)

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402]ZVMXCATDeployNodeFailed: Deploy image on node os000079 failed: Error returned from xCAT: {"data":[{"info":["os000079:Deploying the image using the zHCP node"]},{"errorcode":["1"],"error":["os000079: (Error) Unable to deploy theimage to OS000079 0100. Reason: Target disk is too small for specified image."]}]}


0100.img is the name of the image file.

ckddecode(x) is the return code from the ckddecode function.

Explanation: The image being deployed could not be written to the disk. The “ckddecode” string in the log entry contains the return code from the ckddecode function and has the following meanings:

2 Unable to open the disk for writing.

3 Unable to allocate 64KB of memory on a page boundary for use as a work buffer. This internal error can occur if the ZHCP server is overloaded with deploy or capture requests.

4 A negative return code was received on one of the function calls.

5 An error occurred reading data from STDIN. This error occurs when a pipe error occurs or access is lost to the image file.

6 Unable to write a track buffer to the disk.

User Action: The user action is based on the return code from ckddecode. For return code(s):

2 Verify the DASD volume is attached to the z/VM system so that minidisks on the volume can be accessed. See the action for return code 6 for more information on resolving the issue.

3 This problem can be resolved by increasing the virtual storage size of the ZHCP virtual machine. In normal operation, this should not be required. After the storage size has been increased, the SMAPI servers (by restarting the VSMGUARD virtual machine), the xCAT MN, ZHCP agent, and OpenStack compute node should be restarted. After the servers have been restarted, retry the deploy.

4 Obtain the /var/log/zhcp/unpackdiskimage* log from the ZHCP agent and provide this to IBM. See Appendix C, “Getting Logs from xCAT or ZHCP,” on page 147 for more information on obtaining the log files.

5 Obtain the /var/log/zhcp/unpackdiskimage* log from the ZHCP agent. See Appendix C, “Getting Logs from xCAT or ZHCP,” on page 147 for more information on obtaining the log files. This error can occur if the xCAT MN has been stopped or logged off. Resolve the error indicated in the log and try the deploy again.

6 This error can be caused by an incorrect specification of the DASD volume's cylinder count in the DirMaint EXTENT CONTROL file. To verify this error, run the IVP as documented in Appendix A, “Installation Verification Programs,” on page 139. If the DASD volumes shown for the disk pool do not contain the correct device type for the listed volumes, or the number of cylinders shown is greater than the number of cylinders expected for the type of device, then you have a mismatch in the definition, which will cause I/O errors as the deploy code attempts to write beyond the expected available cylinders. The following list shows some of the most common 3390 device types and the supported maximum number of cylinders for each:
– 3390-03: 3339 cylinders
– 3390-09: 10017 cylinders
– 3390-27: 32760 cylinders
– 3390-54: 65520 cylinders

Additional information on the required cylinder sizes is provided in “Appendix C. Device Characteristics” in the z/VM: Directory Maintenance Facility Tailoring and Administration Guide.

To recover from this issue, the DASD volume should be configured to have the number of cylinders that match the values in the above appendix for the desired device type. Next, the DirMaint EXTENT CONTROL file should be updated to specify the correct number of cylinders for this volume. Once the changes have been made, DirMaint, SMAPI, the xCAT MN, and xCAT ZHCP should all be restarted, and then a new deploy may be attempted.
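For orientation, a hypothetical :REGIONS. entry for a 3390-09 volume in the EXTENT CONTROL file; the exact column layout is defined in the z/VM: Directory Maintenance Facility Tailoring and Administration Guide, so treat the fields here as illustrative:

:REGIONS.
*RegionId VolSer RegStart RegEnd Dev-Type
FLASH     VOL001 1        10016  3390-09
:END.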


If you do not have a DASD type mismatch, then the probable cause is I/O errors in the DASD. The volume should be analyzed for errors, and another volume should be used.

User Log Example:

ZVMNetworkError Exception

v Error Message:

Failed to bound vswitch

Explanation: The server instance failed to couple its virtual NIC to the vswitch. As part of deploying the instance, the neutron-zvm-agent will invoke SMAPI calls to grant the instance access and couple the instance's NIC to the vswitch according to the network definition. The nova z/VM driver will check the instance's NIC status before powering on the instance. There are several possible reasons why the neutron-zvm-agent may not be able to perform the grant and couple process successfully. For example, the neutron-zvm-agent may not be able to connect to the xCAT MN, or the xCAT MN may not be working properly. These types of problems usually happen when the xCAT MN is restarting or when the z/VM SMAPI servers are restarting. To prevent this, the compute node should be restarted after the xCAT MN or SMAPI is restarted.

User Action:
– Restart the compute node if either the xCAT MN or SMAPI is restarted.
– Ensure that the neutron-zvm-agent is started and working correctly.

To verify that the neutron-zvm-agent is running, issue the ps command, as in this example:

Issue the neutron agent-list command to check the neutron z/VM agent status. Note that ":-)" should show in the "alive" column, as in this example:

neutron agent-list
+--------------------------------------+------------+---------+-------+----------------+
| id                                   | agent_type | host    | alive | admin_state_up |
+--------------------------------------+------------+---------+-------+----------------+
| 18bfa7b4-d103-4597-a952-152ffc138686 | z/VM agent | opnstk1 | :-)   | True           |
+--------------------------------------+------------+---------+-------+----------------+

User Log Example:

ZVMXCATRequestFailed Exception

v Error Message:

Request to xCAT server n.n.n.n failed: Communication error: [Errno 113] EHOSTUNREACH

Explanation: Cannot communicate with the xCAT MN.

User Action: Ensure that the xCAT management node is running and that the OpenStack configuration files correctly specify the IP information for the xCAT management node.

User Log Example:

2013-12-04 06:49:10.844 2612 TRACE nova.compute.manager [instance: 296adc07-b7dd-4b6c-8aa4-ace38bd7e402] ZVMXCATDeployNodeFailed:Deploy image on node os000079 failed: Error returned from xCAT: {"data":[{"info":["os000079: Deploying the image using the zHCPnode"]},{"errorcode":["1"],"error":["os000079: (Error) Unable to deploy the image to OS000079 0100. Reason: Failed deploying diskimage 0100.img at stage(rc): dd(141), zcat(141), ckddecode(6)"]}]}

ps aux | grep neutron-zvm-agent
root 25037 0.5 0.7 261320 32224 pts/3 S 04:11 0:26 /usr/bin/python /usr/bin/neutron-zvm-agent
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
    --config-file /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini
    --config-file /etc/neutron/neutron.conf

2013-08-21 14:08:36.929 30635 ERROR nova.compute.manager [req-316467f8-84ed-4e05-9c9b-b5f671d2a548 07808d8602d74754b1c3f4f4c29d5596993489f184a14add8e4831f3064539c2] [instance: 465f27ae-e27f-457f-a182-494388cd8481] Error: [’Traceback (most recent call last):\n’, ’File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1108, in _build_instance\n set_access_ip=set_access_ip)\n’, ’File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1388, in _spawn\n LOG.exception(_(\’Instance failed tospawn\’), instance=instance)\n’, ’ File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1374, in _spawn\nblock_device_info)\n’, ’ File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 362, in spawn\n raiseexception.ZVMNetworkError(msg=msg)\n’, ’ZVMNetworkError: z/VM network error: Failed to bound vswitch\n’]


InstancePowerOnFailure Exception

v Error Message:

InstancePowerOnFailure: Failed to power on instance: timeout.

Explanation: If the deployed instance cannot be pinged and accessed with SSH from xCAT, it comes up with this error. The most likely reason is a network configuration problem. This can happen if the activation engine chosen for the deployed system has not properly activated the IP address or set the other Linux IP-related configuration information.

User Action: Verify IP access to the virtual server. This includes the following steps:
– Verify that you specified the correct NIC net-id=xxx in the nova boot command.
– Verify that the virtual server can be accessed by the xCAT MN using the assigned IP address. This

can be done using the xCAT GUI:

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupTraceback (most recent call last):

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 117, in wait

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupx.wait()

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 49, in wait

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupreturn self.thread.wait()

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 168, in wait

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupreturn self._exit_event.wait()

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in wait

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupreturn hubs.get_hub().switch()

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 187, in switch

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupreturn self.greenlet.switch()

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 194, in main

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupresult = function(*args, **kwargs)

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/openstack/common/service.py", line 65, in run_service

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupservice.start()

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/service.py", line 157, in start

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupself.manager.init_host()

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 766, in init_host

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupself.driver.init_host(host=self.host)

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 189, in init_host

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupself._volumeop.init_zhcp_fcp(self._host_stats)

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/virt/zvm/volumeop.py", line 801, in init_zhcp_fcp

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupself.volume_op.online_device(hcpnode, hcpuser, fcp_cur_str)

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/virt/zvm/volumeop.py", line 76, in online_device

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupself._execute_dsh(hcpnode, body)

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/virt/zvm/volumeop.py", line 59, in _execute_dsh

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupzvmutils.xcat_request("PUT", url, body)

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 236, in xcat_request

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupresp = conn.request(method, url, body, headers)

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupFile "/usr/lib/python2.6/site-packages/nova/virt/zvm/utils.py", line 208, in request

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupraise exception.ZVMXCATRequestFailed(xcatserver=self.host, msg=msg)

2013-08-26 15:17:14.024 27285 TRACE nova.openstack.common.threadgroupZVMXCATRequestFailed: Request to xCAT server 9.12.27.140 failed: Communication error: [Errno 113] EHOSTUNREACH


1. Go to the Nodes/Nodes panel and select the “status” column header. Clicking on the header name of “status” causes xCAT to poll the servers to verify their status. You may also want to click on the header name of “power” to see if the virtual machine is logged on to the z/VM system.

2. Review the information in the “status” column. The server that you are deploying should be in the list (unless the OpenStack code has already failed the deploy and removed it). If it is accessible, the column will show “ping” for the status.

If the server is not in the list, or if there are still access issues to the deployed virtual server, you can:
1. Set the zvm_reachable_timeout property in nova.conf to 0 and restart the nova compute service.
2. Make another virtual server deployment.
On subsequent deployments, the virtual server instance will not be deleted if the power-on operation of the boot function times out. This allows you to perform additional diagnostics and attempts to access the virtual server in order to debug the problem.
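Two sketches for the checks above. First, a boot command with an explicit NIC, where the network UUID and names are placeholders; second, the nova.conf override, assuming the option lives in the [DEFAULT] section:

nova boot --flavor m1.small --image myimage \
  --nic net-id=<network-uuid> myserver

# /etc/nova/nova.conf
[DEFAULT]
zvm_reachable_timeout = 0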

User Log Example:

“Unable to deploy the image” Message

v Error Message:

(Error) Unable to deploy the image to ICO0043 0100. Reason: Unable to link ICO0043 0100 disk. HCPLNM105E ICO0043 0100 not linked; R/W by ABC0028

Explanation: When a server is deployed, the ZHCP agent links the disk so that it can place the disk image onto the disk. The link failed because another machine had the disk linked. In this example, the machine was a server with the same z/VM virtual machine userid as the one chosen for the server being deployed. This is usually a user error: someone deleted a running server from the z/VM user directory. That server will not be fully deleted until the running instance is logged off.

User Action: Look for a logged-on z/VM virtual machine which has the same virtual machine userid as the userid specified for the virtual server that is being deployed. Log off that server so that the running instance goes away. Servers should be logged off before they are deleted from the z/VM user directory. Reattempt the deploy.
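A sketch of the cleanup from a suitably privileged z/VM user ID; FORCE requires the appropriate CP privilege class, and ABC0028 is the userid from the example message:

CP QUERY NAMES          (confirm that ABC0028 is still logged on)
CP FORCE ABC0028        (log the conflicting virtual machine off)

User Log Example: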

2013-09-24 22:17:43.983 16148 ERROR nova.compute.manager[req-06174c1e-7aeb-4a90-b407-88fe39e1f22c 767e6c58a76d478880046d2829e9d8bf be994626d0dd454a8603de9f8c27bfcb][instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512] Error: Failed to power on instance: timeout.

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]Traceback (most recent call last):

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1046, in _build_instance

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]set_access_ip=set_access_ip)

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1445, in _spawn

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]LOG.exception(_(’Instance failed to spawn’), instance=instance)

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1431, in _spawn

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]block_device_info)

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]File "/usr/lib/python2.6/site-packages/nova/virt/zvm/driver.py", line 396, in spawn

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]raise err

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]InstancePowerOnFailure: Failed to power on instance: timeout.

2013-09-24 22:17:43.983 16148 TRACE nova.compute.manager [instance: cb93fbdf-9a41-4d6e-84ca-b7c104e13512]

ZVMXCATDeployNodeFailed: Deploy image on node ICO0043 failed: Error returned from xCAT:{"data":[{"info":["ICO0043: Deploying the image using the zHCP node"]},{"errorcode":["1"],"error":["ICO0043: (Error)Unable to deploy the image to ICO0043 0100. Reason: Unable to link ICO0043 0100 disk. HCPLNM105E ICO00430100 not linked; R/W by ABC0028"]}]}


Additional Network Debug Procedures

Some network-related deployment issues cannot be fully identified using the logs. Validation of the networking-related environment is necessary in order to debug the issue. The following are recommended debug procedures related to networking.

In the OpenStack compute node, check the following items:
v Verify that all network interfaces, including real OSA cards, NICs, and other virtual devices/bridges, have the correct MAC addresses and IP addresses. Issue:

ip addr

or:

ifconfig -a

to get the configurations for all network interfaces. The interfaces which need to connect outside of the z/VM server need to comply with z/VM MAC address management.

v Verify that the controller has the correct privilege on connected vswitches, that the VLAN configuration of the vswitches is correct, and that the vswitch configuration has an associated real device for the uplink port. Issue:

modprobe vmcp && vmcp --buffer 1M q vswitch vswitchName det

where vswitchName is the name of the vswitch.
Using the information returned by this command, verify that all required privileges are satisfied. For example, if interface enccw0.0.xxxx needs to run in promiscuous mode, then the corresponding NIC needs to have promiscuous privilege on the vswitch.
Then verify that the VLAN configuration of the vswitches in z/VM supports the configured settings in OpenStack. The vswitch VLAN awareness and VLAN ID range (if it is VLAN aware) should match the configured settings in OpenStack.
Finally, if the deployed virtual machine is intended to communicate with systems outside of z/VM, verify that the vswitch configuration in z/VM has an associated real device for the uplink port.

v Verify that the gateway associated with the subnet to be used by the deployed system is valid for the TCP/IP environment. Issue:

neutron subnet-list

to obtain the list of subnets defined to OpenStack for this compute node.
Using the ID of the subnet that is intended to be used by the deployed system, issue the following command to show the configured values for the subnet (pay particular attention to the gateway):

neutron subnet-show 1a5634d2-1bbd-49b4-84a9-3afee962d86f

where 1a5634d2-1bbd-49b4-84a9-3afee962d86f is the ID of the subnet being verified.

Next, log on to xCAT and verify that xCAT can reach the management network.

If the OpenStack compute node is not running in a z/VM virtual machine:
v Run the prep_zxcatIVP.pl script and use the driver script to drive the zxcatIVP.pl script. The scripts will validate some of the network settings. See Appendix A, “Installation Verification Programs,” on page 139 for more information on the IVP scripts.

Deployment of an Image to a Different Disk Type Fails

An image created from an FBA disk can only be deployed to an FBA disk. Similarly, an image created from an ECKD disk can only be deployed to an ECKD disk. If there is a mismatch, the deploy will fail. For example, if you deploy an FBA type image to an ECKD disk, you will receive the following error:

To resolve this issue, make certain to deploy the image on a compute node that supports the appropriate disk type. Each z/VM compute node is configured to create images of a specific disk type.

This error can occur in a mixed disk environment, where two compute nodes with different disk types are in the same OpenStack zone. OpenStack will choose the next available host and related compute node in the zone. If the compute node is configured for the other disk type, the deploy will fail. OpenStack will then attempt the deployment on the other host/compute node, where it could succeed.

To avoid this issue in a mixed disk environment, use the --availability_zone parameter on the boot command to specify the desired host.
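A sketch using the parameter spelling from this section; nova is the assumed zone name and opnstk1 a placeholder host:

nova boot --flavor m1.small --image myimage \
  --availability_zone nova:opnstk1 myserver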

Periodic Failure Due to Unavailable Resources or Timeouts

Symptom: When performing multiple concurrent operations, the operations periodically fail due to resources being unavailable or timeouts.

Resolution: In the z/VM environment:
v Some resources, such as disk space, can be in recovery mode from previous usage when a subsequent request is submitted by the user.
v Operations fail because they take too long due to excessive concurrent requests. These issues can be avoided by having sufficient resources defined to allow concurrent operations, and by pacing the requests to avoid delays due to concurrent operations consuming resources within the z/VM environment.
v Performance issues related to concurrent requests can also occur because multiple requests are consuming too many resources of the xCAT MN and ZHCP. You can address this by increasing the size of the virtual machines by 1-2 GB.

Capture Issues

Problems encountered during capture are most often related to xCAT being unable to access the virtual server or the disk containing the image, or space issues within either the xCAT management node or the OpenStack compute node. Verify that the necessary OpenStack services are running and check the logs for errors and exceptions.

OpenStack Services Related to Capture

To verify that OpenStack services are running, issue:

sudo nova service-list

to obtain the list of services. Each of the following services should have one line of status output that shows the status enabled and state is :-) (smiley face emoticon):
v nova-api
v nova-compute
v nova-conductor
v nova-scheduler
v glance-api
v glance-registry

In addition, issue:

ZVMXCATDeployNodeFailed: Deploy image on node ftest027 failed: Error returned from xCAT: {"data":[{"info":["ftest027:Deploying the image using the zHCP node"]},{"errorcode":["1"],"error":["ftest027: (Error) Unable to deploy the image toFTEST027 0100. Reason: Specified image is of a fixed-block volume, but specified disk is not a fixed-block volume."]}]}


ps -ef | grep service_name

to verify that a process named service_name is actively running.

Logs Related to Capture

The following logs are most likely to contain entries related to capture issues:
v /var/log/nova/nova-compute.log
v /var/log/nova/nova-conductor.log

See “Logging within the Compute Node” on page 189 for more information on the names of the logs, and controlling the information in the logs.

Periodic Failure Due to Unavailable Resources or Timeouts

Symptom: When performing multiple concurrent operations, the operations periodically fail due to resources being unavailable or timeouts.

Resolution: In the z/VM environment:
v Some resources, such as disk space, can be in recovery mode from previous usage when a subsequent request is submitted by the user.
v Operations fail because they take too long due to excessive concurrent requests. These issues can be avoided by having sufficient resources defined to allow concurrent operations, and by pacing the requests to avoid delays due to concurrent operations consuming resources within the z/VM environment.
v Performance issues related to concurrent requests can also occur because multiple requests are consuming too many resources of the xCAT MN and ZHCP. You can address this by increasing the size of the virtual machines by 1-2 GB.

Unable to Locate the Device Associated with the Root Directory

Error Message: An error message or log entry may contain a string similar to:

Error: cmo00008: (Error) Unable locate the device associated with the root directory

where cmo00008 is the xCAT node ID.

Explanation: This error can occur when attempting to capture an image from a virtual server in OpenStack, or when using the imgcapture command of xCAT, if the virtual server has not successfully exchanged SSH keys with the xCAT MN. The xCAT MN requires access to the virtual server as part of the capture process.

User Action: Unlock the virtual server so that the xCAT MN can SSH into the virtual server. This is described in Step 6 on page 72.

Importing Image Issues

Problems encountered when importing an image into Glance are most often related to problems in the compute node, either space issues or service failures. Verify that the necessary OpenStack services are running and check the logs for errors and exceptions.

OpenStack Services Related to Image Import

To verify that OpenStack services are running, issue:

sudo nova service-list

to obtain the list of services. Each of the following services should have one line of status output:
v nova-api
v nova-compute
v nova-conductor
v nova-scheduler
v glance-api
v glance-registry

In addition, issue:

ps -ef | grep service_name

for each of the above services to verify that a process named service_name is actively running.

Logs Related to Image Import

The following logs are most likely to contain entries related to image import issues:
v /var/log/nova/nova-compute.log
v /var/log/nova/nova-conductor.log

See “Logging within the Compute Node” on page 189 for more information on the names of the logs, and controlling the information in the logs.

CMA Issues

This section describes issues that are unique to CMA. If you are also using xCAT, see “xCAT Management Node Issues” on page 208.

OpenStack Dashboard Issues

This section discusses OpenStack Dashboard issues that are not necessarily related to a specific OpenStack task.

When you use the https protocol to connect to the xCAT GUI or the OpenStack Dashboard for the first time, or if you clear your internet browser settings before you use the https protocol to connect to the xCAT GUI or the OpenStack Dashboard, you might see a page in your browser stating the following:
v The connection is untrusted.
v There is a problem with this web site's security certificate.

This occurs because the xCAT GUI and the OpenStack Dashboard use a self-generated certificate file to encrypt the web connection data, and the internet browser cannot determine whether the certificate file is correct. Refer to your browser documentation to make the xCAT GUI and the OpenStack Dashboard trusted web pages. Then, you can connect to the xCAT GUI and the OpenStack Dashboard using an encrypted connection.

Reconfiguration Issues

This section describes issues related to reconfiguring the CMA.

No Route to Host Issue

v Error Message:

No route to host - connect(2) for "chef-server-fqdn" port 443


Explanation: The Chef client attempted to connect to the Chef server, but the connection to the Chef server is blocked by a firewall.

User Action: Run the following commands on the Chef server to disable the firewall:

chkconfig iptables off
service iptables stop
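The chkconfig and service commands apply to SysV-init distributions. If the Chef server uses firewalld instead (an assumption about its distribution), the equivalent might be:

systemctl stop firewalld
systemctl disable firewalld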

User Log Example:

9.60.29.199 Starting Chef Client, version 11.12.8
9.60.29.199 Creating a new client identity for CMA_ID using the validator key.
9.60.29.199
9.60.29.199 ================================================================================
9.60.29.199 Chef encountered an error attempting to create the client "CMA_ID"
9.60.29.199 ================================================================================
9.60.29.199
9.60.29.199
9.60.29.199 [2015-09-24T23:56:48-04:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
9.60.29.199 Chef Client failed. 0 resources updated in 4.76613586 seconds
9.60.29.199 [2015-09-24T23:56:48-04:00] ERROR: No route to host - connect(2) for "chef-server-fqdn" port 443
9.60.29.199 [2015-09-24T23:56:48-04:00] FATAL: Chef::Exceptions::ChildConvergeError:

Chef run process exited unsuccessfully (exit code 1)

xCAT Management Node Issues

This section contains xCAT management node items that are not necessarily related to a specific OpenStack task.

Space Issues on /install Directory Can Lead to xCAT MN Issues

The /install directory holds temporary space used for importing/exporting images and permanent space used to hold images that are deployed by xCAT. Running out of space in /install will cause failures in capture and deploy of images. To verify that the /install disk is full, use the xCAT GUI:
1. Log on to the xCAT user interface as admin and bring up the Script panel for the xCAT MN node (xcat), as described in “Using the Script Panel in the xCAT User Interface” on page 179.
2. Enter the following command in the script box:

df -h /install

3. Press Run. Response data and return codes indicating the result of the commands will appear in the yellow status box at the top of the panel. 0 means it was successful. A nonzero value means there was an error. If the response shows usage at 100% or nearing 100%, then the /install directory is full and the LVM that provides space for the directory must have additional disks added or files removed. If the response shows errors, or is unable to show a usage percentage, then it is possible that the LVM is damaged and you should review the troubleshooting information in “LVM Errors in the /install Directory Can Lead to xCAT MN Issues.”

To add volumes, refer to the "Defining the xCAT Image Repository for z/VM Images" section in the “Setting up and Configuring the Server Environment” chapter of z/VM: Systems Management Application Programming. Prior to performing this task, you should shut down the xCAT MN with the signal command from a class A or C z/VM userid:

SIGNAL SHUTDOWN USER XCAT WITHIN 10

where:

XCAT is the z/VM userid of the xCAT MN.

LVM Errors in the /install Directory Can Lead to xCAT MN Issues

If the LVM that contains the disk storage for the /install directory is corrupted or has errors, then failures can occur with capture and deploy of images. The corruption of the LVM can be observed in the xCAT GUI in the Configure/Files panel. A number of errors can appear:


v /install and subdirectories show file icons, while files may show directory icons.
v Files in the directory cannot be opened.
v Permission denied errors occur for the files in the /install directory and subdirectories.
v The response to a df command against /install returns "can't find mount point".

Errors in the LVM occur most often in one of two ways:
v The LVM is missing some disks. Disks might not be attached because they are not available to the virtual machine. This can occur when the volume is not attached to the system or the full volume was not successfully added by the xCAT MN to the directory manager.
v The filesystem in the LVM was corrupted by not properly shutting down the xCAT MN's operating system.

To identify the extent of the problem, first use the xCAT GUI to verify that all volumes are attached:
1. Log on to the xCAT user interface as admin and bring up the Script panel for the xCAT MN node (xcat), as described in “Using the Script Panel in the xCAT User Interface” on page 179.
2. Enter the following command in the script box:

vgdisplay -v xcat

If the "Cur PV" and "Act PV" values are not the same value, then a volume is missing. You should locate the volume and correct the issues that prevent it from coming online. The DMSSICNF COPY file lists the volumes in order, and this can be used to verify that the necessary volumes are attached to the system.
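To pull out just the two values being compared, a one-line sketch:

vgdisplay -v xcat | grep -E "Cur PV|Act PV"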

If the "Cur PV" and "Act PV" values are the same value, then obtain the xCAT MN's virtual machine console log and look for errors related to LVM processing in the log file. Any LVM setup errors will be displayed after the "LVM setup return code" line. To access the console log, log on to the MAINT userid and issue the following commands:

for xcat cmd sp cons start to maint
for xcat cmd close console

The spoolid of the console log spool file will be listed as output of the commands. The RDRLIST command can be used to view the log.

To resolve the corruption of the LVM providing storage for the /install directory, the LVM will need to be unmounted and deleted, and the disks reallocated. The directions to do this are detailed in the X_SMAPI package, located at VM Download Packages and described at Description of X_SMAPI.

Alternative Deployment Provisioning Issues

This section discusses the recommended troubleshooting procedures for problems encountered while using alternative deployment provisioning. The troubleshooting discussion attempts to constrain the problem set by first dividing the problems by the major function that is affected. The subsections include:
v “Logging within the Compute Node”
v “Unlocking a System” on page 210
v “Adding a Dummy Image to Glance” on page 210
v “Setting up the DOCLONE COPY file” on page 210
v “Deploying Systems” on page 211

Logging within the Compute Node

Depending upon the function you drive, there are various logs within the compute node that would be helpful to review for problem determination. Consult "Logging and Monitoring" in the online OpenStack documentation (http://docs.openstack.org/trunk/openstack-ops/content/logging_monitoring.html) for information on the location of the logs and a general overview of logging. Information on how to control logging (for example, to increase the log level) is provided in the "Manage Logs" section of the online OpenStack documentation (http://docs.openstack.org/admin-guide-cloud/content/section_manage-logs.html).

Unlocking a System

If problems occur unlocking a system in preparation for its use as a clone, see the “Unlocking Systems for Discovery” section in the “Setting Up and Configuring the Server Environment” chapter of z/VM: Systems Management Application Programming.

Adding a Dummy Image to Glance

See the OpenStack glance documentation for information on these errors.

Setting up the DOCLONE COPY file

This section describes problems with setting up or editing the DOCLONE COPY file.
v Problems in setting up the DOCLONE COPY file primarily occur in the formatting of the data lines. The errors will most likely surface during deploy, which causes the image to either not be recognized as indicating a clone is desired or results in missing data needed for the clone operation. You should check the following:
– Keywords should not contain typos.
– A semicolon separates the keyword/value pairs.
– No semicolon is in the wrong place, such as between an equal sign and either the keyword or the value in a keyword/value pair.
See “No Valid Host Was Found Issue” on page 211 for more information on debugging issues that potentially relate to an incorrect DOCLONE COPY file.
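For orientation, a hypothetical data line; the IMAGE_NAME keyword and the image/master names come from the log examples in this section, while the other keywords are illustrative, so verify the exact keyword set against your DOCLONE COPY documentation:

IMAGE_NAME= ICMflash244;CLONE_FROM= CLONS244;ECKD_POOL= POOL1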

v If a clone operation does not appear to be running, /var/log/messages in the CMA server where the xCAT MN is running can provide confirmation that the problem is a bad line in the DOCLONE COPY file. A clone operation should create log lines similar to:

Nov 10 00:55:22 xcat xCAT: xCAT: Allowing mkvm to osp00069 -p osdflt --mem 512m --cpus 1 --password dfltpass --privilege g --imagename ICMflash244 for admin from localhost
Nov 10 00:55:22 xcat xCAT: xCAT::zvmUtils Clone of count:1 to be done from: CLONS244

Look for the following:
– The --imagename property to verify the name of the image. If the image is not in the copy file exactly as shown in the log entry, then a clone will not be attempted.
– The "Clone of" log line indicates that xCAT found a matching image name and that it is attempting to create a clone from the master virtual server given by the from: value.

Had xCAT found no matching image name, the log entries would have looked like this instead:

Nov 20 15:48:17 xcat xCAT: xCAT: Allowing mkvm to osp0006e -p osdflt --mem 512m --cpus 1 --password dfltpass --privilege g --imagename flash244 for admin from localhost
Nov 20 15:48:17 xcat xCAT: xCAT::zvmUtils ==>image file name NOT found

See “Deploying Systems” on page 211 for more information on debugging deployment problems.
v If you use XEDIT to edit the DOCLONE COPY file, XEDIT by default will insert sequence numbers into the file. These sequence numbers can cause errors similar to the following when you run the zxcatCopyCloneList.pl command:

bash-4.2# perl /opt/xcat/bin/zxcatCopyCloneList.pl
DOCLONE.COPY copied to temporary file /var/opt/xcat/doclone.qTeg4GQp successfully
Validating /var/opt/xcat/doclone.qTeg4GQp contents for proper syntax...
(Error) Missing value for key DOC00020 on line 2
(Error) Missing value for key DOC00040 on line 4
(Error) Missing value for key DOC00060 on line 6


To prevent sequence numbers from being inserted, specify the noseq option when you use XEDIT to edit the DOCLONE COPY file. For example:

XEDIT DOCLONE COPY (noupdate noseq

Deploying Systems

Most deployment issues are due to encountering resource constraints in the z/VM hypervisor or issues assigning TCP/IP information by the customer. Verify that the necessary OpenStack services are running and check the logs for exceptions or error messages.

OpenStack Services Related to Deployment

To verify that OpenStack services are running, issue the following commands to obtain a list of services:

source $HOME/openrc
nova service-list

Each of the following services should have one line of status output that shows the status as enabled, and the state as :-) (the smiley face emoticon):

nova-api
nova-compute
nova-conductor
nova-scheduler
glance-api
glance-registry
neutron-server
neutron-zvm-agent

In addition, issue the following command to verify that a process called service_name is actively running:

ps -ef | grep service_name

Logs Related to Deployment

The following logs related to nova, neutron, and OpenStack message processing will be most helpful in debugging deployment issues:
v /var/log/nova/compute.log
v /var/log/neutron/zvm-agent.log
v /var/log/nova/conductor.log

The primary xCAT log containing log entries related to deployment is: /var/log/messages.

No Valid Host Was Found Issue

v Error Message:

Failed to launch instance "imageName": Please try again later [Error: No valid host was found.].

Explanation: There are a number of possible causes for this error. The compute log provides additional detail so that you can narrow down the error. They include:
– The image metadata is missing or incorrect.
– The image created by the mkdummyimage command was deployed but not listed in the DOCLONE COPY file as a cloning image.
– The /opt/xcat/bin/zxcatCopyCloneList.pl command was not issued to cause the xCAT Management Node to copy the DOCLONE COPY file from the MAINT 193 minidisk to the local filesystem.
– The master was not unlocked for access by the xCAT Management Node.


User Action:

– Verify the image metadata. Issue the glance image-show command to verify that all of the following image properties have the following values:
- image_file_name - Name of the dummy image file created by the mkdummyimage command that was specified in the “Image Location” input box on the Create Image panel; for example, “dummy.img”.
- image_type_xcat - “linux”
- hypervisor_type - “zvm”
- architecture - “s390x”
- os_name - “Linux”
- os_version - Image's operating system version. Specify any value for a Red Hat or SUSE type image, because for cloning this is not used. For a Red Hat type image, you can specify the OS version as rhelx.y, redhatx.y, or red hatx.y, where x.y is the release number. For a SUSE type image, you can specify the OS version as slesx.y or susex.y, where x.y is the release number. For an Ubuntu type image, you can specify the OS version as ubuntux.y, where x.y is the release number.
- provisioning_method - “netboot”

If any properties are incorrect, then issue the glance image-update command to set the appropriate values and re-deploy. Otherwise, continue reading this section.

– Verify the DOCLONE COPY file contains the image. Access the MAINT 193 disk from the z/VM MAINT user ID or the MAINTvrm user ID, where vrm is the z/VM version, release, and modification level. Edit the DOCLONE COPY file and verify that you properly specified an IMAGE_NAME line. If it is not correct, then correct the file and reissue the zxcatCopyCloneList.pl command. Otherwise, continue reading this section.

– Verify that xCAT has read the DOCLONE COPY file. Issue the following command:

cat /var/opt/xcat/doclone.txt | grep imageName

Verify that you properly specified the image line with the correct keys and values. See the “Setting up the DOCLONE COPY file” section for possible errors. Typos or missing/extra semicolons can cause xCAT to not recognize the deployment as a cloning deployment and to attempt a normal deployment, which results in an error indicating that the image file appears corrupted. If the line is not correct, then correct the DOCLONE COPY file and/or reissue the zxcatCopyCloneList.pl command.
The xCAT messages log, /var/log/messages, contains log entries related to the clone deployment. Issue the following command using the xCAT Run Script GUI for the xCAT node to find the node name of the last system deployed (for example, the one that failed the deploy attempt):

cat /var/log/messages | grep -i mkdef

The xCAT log will have a line similar to the following:

Nov 10 00:55:19 xcat xCAT: xCAT: Allowing mkdef -t node -o osp00069 userid=osp00069 hcp=zhcp.ibm.com mgt=zvm groups=all for admin from localhost

The information in bold font is key:
- Allowing mkdef – xCAT produces an Allowing log line for each xCAT command that it processes. mkdef is used to begin the creation of the new xCAT node.
- -o osp00069 – indicates the node name that is being created; for example, osp00069. You can search for related log entries using the node name.

Issue the following command to see the primary xCAT commands related to deploying the system. (Change the nodename from osp00069 to the desired node name):

cat /var/log/messages | grep -i osp00069

Lines similar to the following are shown:

Nov 10 00:55:19 xcat xCAT: xCAT: Allowing mkdef -t node -o osp00069 userid=osp00069 hcp=zhcp.ibm.com mgt=zvm groups=all for admin from localhost
Nov 10 00:55:22 xcat xCAT: xCAT: Allowing mkvm to osp00069 -p osdflt --mem 512m --cpus 1 --password dfltpass --privilege g --imagename ICMflash244 for admin from localhost



Nov 10 00:55:22 xcat xCAT: xCAT::zvmUtils Clone of count:1 to be done from: CLONS244
Nov 10 00:55:42 xcat xCAT: xCAT::zvmUtils smcli Image_Create_DM -T OSP00069 -f /tmp/osp00069.txt
Nov 10 00:55:42 xcat xCAT: xCAT::zvmUtils Defining OSP00069 in the directory... Done
Nov 10 00:55:45 xcat xCAT: xCAT::zvmUtils smcli Image_Query_DM -T OSP00069 | sed '$d'
Nov 10 00:55:45 xcat xCAT: xCAT::zvmUtils USER OSP00069 XCAT 512M 512M G#012INCLUDE IBMDFLT#012COMMAND SET VSWITCH VSW1 GRANT &USERID#012COMMAND DEFINE NIC 0A00 TYPE QDIO#012COMMAND COUPLE 0A00 TO SYSTEM VSW1#012CPU 00#012IPL 100#012MACHINE ESA#012CONSOLE 0009 3215 T#012* NICDEF 0A00 TYPE QDIO LAN SYSTEM VSW1 MACID FFFFE9#012SPOOL 000C 2540 READER *#012SPOOL 000D 2540 PUNCH A#012SPOOL 000E 1403 A
Nov 10 00:55:52 xcat xCAT: xCAT::zvmUtils smcli Image_Disk_Create_DM -T OSP00069 -v 0100 -t 3390 -a AUTOG -r FLASH -u 1 -z 3000 -m W -f 1 -R dfltpass -W dfltpass -M dfltpass
Nov 10 00:55:52 xcat xCAT: xCAT::zvmUtils Adding a disk to OSP00069's directory entry... Done
Nov 10 00:55:57 xcat xCAT: xCAT::zvmUtils smcli Image_Query_DM -T OSP00069 | grep MDISK
Nov 10 00:55:58 xcat xCAT: xCAT::zvmUtils osp00069: Doing SMAPI flashcopy source disk (CLONS244 0100) to target disk (OSP00069 0100) using FLASHCOPY
Nov 10 00:55:58 xcat xCAT: xCAT::zvmUtils smapiFlashCopy- ssh [email protected] /opt/zhcp/bin/smcli xCAT_Commands_IUO -T ZHCP -c \"CMD=FLASHCOPY CLONS244 0100 0 END OSP00069 0100 0 END SYNC\"
Nov 10 00:55:58 xcat xCAT: xCAT::zvmUtils osp00069: SMAPI flashcopy done. output:
Nov 10 00:56:04 xcat xCAT: xCAT: Allowing chvm to osp00069 --add3390 POOL1 0100 1g for admin from localhost
Nov 10 00:56:06 xcat xCAT: xCAT: Allowing chvm to osp00069 --setipl 0100 for admin from localhost
Nov 10 00:56:06 xcat xCAT: xCAT: Allowing tabch mac.node=osp00069 mac.mac=00:00:00:00:00:00 mac.interface=fake for admin from localhost
Nov 10 00:56:07 xcat xCAT: xCAT: Allowing tabch node=osp00069 hosts.ip=13.13.0.2 hosts.hostnames=osp00069 for admin from localhost
Nov 10 00:56:10 xcat xCAT: xCAT: Allowing chvm to osp00069 --smcli Image_Definition_Update_DM -T %userid% -k 'NICDEF=VDEV=1000 TYPE=QDIO MACID=0ebf31' for admin from localhost
Nov 10 00:56:11 xcat xCAT: xCAT: Allowing tabch -d node=osp00069 mac for admin from localhost
Nov 10 00:56:12 xcat xCAT: xCAT: Allowing tabch mac.node=osp00069 mac.mac=02:00:00:0e:bf:31 mac.interface=1000 mac.comments=zhcp for admin from localhost
Nov 10 00:56:13 xcat xCAT: xCAT: Allowing tabch switch.node=osp00069 switch.port=27495fe9-6fca-492b-93a7-fcdb1ff4d373 switch.interface=1000 switch.comments=zhcp for admin from localhost
Nov 10 00:56:13 xcat xCAT: xCAT: Allowing tabch to osp00069 node=osp00069 nodetype.arch=s390x nodetype.profile=ICMflash244_16ca3f79_0ab3_4844_92ca_5b611878f196 nodetype.os=redhat6.5 nodetype.provmethod=netboot noderes.netboot=zvm for admin from localhost
Nov 10 00:56:15 xcat xCAT: xCAT: Allowing nodeset to osp00069 netboot device=0100 osimage=redhat6.5-s390x-netboot-ICMflash244_16ca3f79_0ab3_4844_92ca_5b611878f196 transport=/var/lib/nova/instances/pokdev63/osp00069/cfgdrive.tgz [email protected] for admin from localhost
Nov 10 00:56:16 xcat xCAT: xCAT: Allowing chvm to osp00069 --punchfile /var/lib/nova/instances/pokdev63/osp00069/adminpwd.sh X [email protected] for admin from localhost
Nov 10 00:56:18 xcat xCAT: xCAT: Allowing chvm to osp00069 --punchfile /var/lib/nova/instances/pokdev63/osp00069/xcatauth.sh X [email protected] for admin from localhost
Nov 10 00:56:19 xcat xCAT: xCAT: Allowing lsvm to osp00069 for admin from localhost
Nov 10 00:56:19 xcat xCAT: xCAT::zvmUtils smcli Image_Query_DM -T OSP00069 | sed '$d'
Nov 10 00:56:22 xcat xCAT: xCAT: Allowing lsvm to osp00069 for admin from localhost
Nov 10 00:56:22 xcat xCAT: xCAT::zvmUtils smcli Image_Query_DM -T OSP00069 | sed '$d'
Nov 10 00:56:24 xcat xCAT: xCAT: Allowing rpower to osp00069 on for admin from localhost
Nov 10 00:56:24 xcat xCAT: xCAT::zvmUtils smcli Image_Activate -T OSP00069
Nov 10 00:56:26 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:56:31 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:56:36 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:56:41 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:56:46 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:56:51 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:56:56 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:57:01 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:57:06 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:57:11 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:57:16 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:57:21 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:57:26 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:57:30 xcat xCAT: xCAT: Allowing makehosts to osp00069 for root from localhost
Nov 10 00:57:30 xcat xCAT: Invoking /var/lib/sspmod/setnewname.py --nodename osp00069 --hostname clone244.endicott.ibm.com
Nov 10 00:57:32 xcat xCAT: Returned from OpenStack node name update for osp00069 with Success! Server name is testfay-osp00069-clone244.endicott.ibm.com.#012Updated hostname in nova tables.

The following lines show that a cloning operation is taking place:

Nov 10 00:55:22 xcat xCAT: xCAT: Allowing mkvm to osp00069 -p osdflt --mem 512m --cpus 1 --password dfltpass --privilege g --imagename ICMflash244 for admin from localhost
Nov 10 00:55:22 xcat xCAT: xCAT::zvmUtils Clone of count:1 to be done from: CLONS244

The information in bold font is key to verifying the correct functioning of the code:
- Allowing mkvm – The xCAT command that creates the virtual machine.
- --imagename ICMflash244 – The name of the image being used. If this property is missing, then service to xCAT and/or the z/VM OpenStack plugins is missing.
- Clone of – Indicates that xCAT found a matching image name and it is attempting to create a clone from the master virtual server given by the from: value.


- from: CLONS244 – Indicates the name of the master virtual machine to be used for the clone operation. The value came from the doclone.txt file.

If xCAT attempts FlashCopy, a log line related to FlashCopy should follow. A log line containing the following indicates that FlashCopy was not successful and that xCAT will use the Linux dd command to clone the disk:

SMAPI Flashcopy did not work, continuing with

A line similar to the following ("Allowing nodeset") indicates that the clone virtual server is almost ready to be started. The majority of the work to create the virtual server has completed.

Nov 10 00:56:15 xcat xCAT: xCAT: Allowing nodeset to osp00069 netboot device=0100 osimage=redhat6.5-s390x-netboot-ICMflash244_16ca3f79_0ab3_4844_92ca_5b611878f196 transport=/var/lib/nova/instances/pokdev63/osp00069/cfgdrive.tgz

After xCAT creates the clone virtual server, OpenStack will power on the system for the first time and attempt to check its status. The following shows the power on log line and status query. It may take a number of queries before the system has initialized enough to respond to the queries.

Nov 10 00:56:24 xcat xCAT: xCAT: Allowing rpower to osp00069 on for admin from localhost
Nov 10 00:56:24 xcat xCAT: xCAT::zvmUtils smcli Image_Activate -T OSP00069
Nov 10 00:56:26 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost
Nov 10 00:56:31 xcat xCAT: xCAT: Allowing nodestat to osp00069 for admin from localhost

If the clone virtual server takes too long to respond to the queries, OpenStack may give up and start the process to remove it. This will result in an “rmvm” log entry.

Final processing of the activation of a clone virtual server causes the z/VM OpenStack plugin to update the human-readable instance name. The following entries show an example of this:

Nov 10 00:57:30 xcat xCAT: Invoking /var/lib/sspmod/setnewname.py --nodename osp00069 --hostname clone244.endicott.ibm.com
Nov 10 00:57:32 xcat xCAT: Returned from OpenStack node name update for osp00069 with Success! Server name is testfay-osp00069-clone244.endicott.ibm.com.#012Updated hostname in nova tables.
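Putting these steps together, the following is a minimal sketch that traces the most recent deployment in one pass; the sed pattern, and the assumption that the newest mkdef entry belongs to the failing deploy, are illustrative only:

# Extract the node name from the newest "Allowing mkdef" entry, then show
# every log entry for that node (assumes the latest mkdef is the failed deploy)
nodename=$(grep -i 'Allowing mkdef' /var/log/messages | tail -1 | \
           sed 's/.* -o \([^ ]*\).*/\1/')
echo "Tracing node: $nodename"
grep -i "$nodename" /var/log/messages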

When All Else Fails

If you do not find a pointer to a possible solution, call your IBM Support Center personnel and provide the log files mentioned in “Logging within the Compute Node” on page 209.
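As a convenience when working with IBM support, you may want to bundle the relevant logs into a single archive first — a minimal sketch, assuming the default log locations used in this appendix:

# Collect the deployment-related logs into one archive for IBM support
tar -czf /tmp/zvm-openstack-logs.tgz \
    /var/log/nova /var/log/neutron /var/log/messages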


Appendix I. z/VM Commands for OpenStack

updateimage.py

Purpose

Invokes glance using the image-update subcommand to set z/VM related image properties. Use this command when adding the image using the Horizon GUI.

Synopsis

/var/lib/sspmod/updateimage.py glance_image_name
    --imagefilename image_file_name
    [--uuid imageUUID]
    [--osversion osVersion]

or

/var/lib/sspmod/updateimage.py [-v|--version]

or

/var/lib/sspmod/updateimage.py [-h|--help]

Operands

glance_image_name
    The human-readable image name as known to glance. The name should match the value specified in the IMAGE_NAME property in the DOCLONE COPY file to identify the image. This is a required operand when invoking the command to update glance. The value is case-sensitive. Please choose unique image names; the command will return an error if the glance_image_name already exists in glance.

-h|--help
    Displays help information.

--imagefilename image_file_name
    The name of the dummy image file created by the mkdummyimage command; for example, “dummy.img”. The default name is “0100.img”.

--osversion osVersion
    The image's operating system version. You can specify any value for Red Hat or SUSE image types, because cloning does not use this value. For a Red Hat image type, you can specify the OS version as rhelx.y or redhatx.y, where x.y is the release number. For a SUSE image type, you can specify the OS version as slesx.y or susex.y, where x.y is the release number. For an Ubuntu type image, you can specify the OS version as ubuntux.y, where x.y is the release number. The default value is rhel6.7.

--uuid imageUUID
    This image's unique identifier in glance. If you have multiple images with the same glance_image_name, you should also specify the glance UUID so that the correct image is updated. This operand is optional and there is no default.

-v|--version
    Displays the version of this script.

Usage Notes

1. The following image properties are set:



v image_file_name – The name of the dummy image file passed using the --imagefilename operand; otherwise this value defaults to “0100.img”.
v image_type_xcat – “linux”
v hypervisor_type – “zvm”
v architecture – “s390x”
v os_name – “Linux”
v os_version – The image's operating system version passed using the --osversion operand; otherwise this value defaults to “rhel6.7”.
v provisioning_method – “netboot”

2. Set the OpenStack-related environment variables before you issue any OpenStack commands:

   source $HOME/openrc

3. xCAT updates the image under the authority of the admin user. If the image belongs to the admin tenant, then xCAT updates it. If another tenant owns the image, xCAT will first check to see if the admin user has the admin role for the tenant. If it does not have the admin role, xCAT will attempt to grant the admin role for the tenant to the admin user. The admin user will then attempt to update the image.
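The following is a minimal sketch of a typical invocation; the image name myRHELImage is a hypothetical placeholder:

# Set the OpenStack environment, then update the z/VM image properties
source $HOME/openrc
/var/lib/sspmod/updateimage.py myRHELImage --imagefilename dummy.img --osversion rhel6.7

# A zero return code indicates success
if [ $? -ne 0 ]; then
    echo "updateimage.py reported an error; check that the image name is unique in glance"
fi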

Return Value

0 The command completed successfully.

non-zero
    An error has occurred.

Command Location

/var/lib/sspmod/updateimage.py


Notices

This information was developed for products and services offered in the US. This material might be available from IBM in other languages. However, you may be required to own a copy of the product or product version in that language in order to access it.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
US

For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you provide in any way it believes appropriate without incurring any obligation to you.


Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
US

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

The performance data and client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

This information may contain examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to actual people or business enterprises is entirely coincidental.

COPYRIGHT LICENSE:

This information may contain sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at IBM copyright and trademark information - United States (www.ibm.com/legal/us/en/copytrade.shtml).

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, and service names may be trademarks or service marks of others.


Terms and Conditions for Product Documentation

Permissions for the use of these publications are granted subject to the following terms and conditions.

Applicability

These terms and conditions are in addition to any terms of use for the IBM website.

Personal Use

You may reproduce these publications for your personal, noncommercial use provided that all proprietary notices are preserved. You may not distribute, display or make derivative work of these publications, or any portion thereof, without the express consent of IBM.

Commercial Use

You may reproduce, distribute and display these publications solely within your enterprise provided that all proprietary notices are preserved. You may not make derivative works of these publications, or reproduce, distribute or display these publications or any portion thereof outside your enterprise, without the express consent of IBM.

Rights

Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either express or implied, to the publications or any information, data, software or other intellectual property contained therein.

IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use of the publications is detrimental to its interest or, as determined by IBM, the above instructions are not being properly followed.

You may not download, export or re-export this information except in full compliance with all applicable laws and regulations, including all United States export laws and regulations.

IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS ARE PROVIDED “AS-IS” AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.

IBM Online Privacy Statement

IBM Software products, including software as a service solutions, (“Software Offerings”) may use cookies or other technologies to collect product usage information, to help improve the end user experience, to tailor interactions with the end user, or for other purposes. In many cases no personally identifiable information is collected by the Software Offerings. Some of our Software Offerings can help enable you to collect personally identifiable information. If this Software Offering uses cookies to collect personally identifiable information, specific information about this offering's use of cookies is set forth below.

This Software Offering does not use cookies or other technologies to collect personally identifiable information.

If the configurations deployed for this Software Offering provide you as customer the ability to collect personally identifiable information from end users via cookies and other technologies, you should seek your own legal advice about any laws applicable to such data collection, including any requirements for notice and consent.


For more information about the use of various technologies, including cookies, for these purposes, see IBM Online Privacy Statement Highlights at http://www.ibm.com/privacy and the IBM Online Privacy Statement at http://www.ibm.com/privacy/details in the section entitled “Cookies, Web Beacons and Other Technologies”, and the IBM Software Products and Software-as-a-Service Privacy Statement at http://www.ibm.com/software/info/product-privacy.


Glossary

For a list of z/VM terms and their definitions, see z/VM: Glossary.

The z/VM glossary is also available through the online z/VM HELP Facility, if HELP files are installed on your z/VM system. For example, to display the definition of the term “dedicated device”, issue the following HELP command:

help glossary dedicated device

While you are in the glossary help file, you can do additional searches:

v To display the definition of a new term, type a new HELP command on the command line:

help glossary newterm

This command opens a new help file inside the previous help file. You can repeat this process many times. The status area in the lower right corner of the screen shows how many help files you have open. To close the current file, press the Quit key (PF3/F3). To exit from the HELP Facility, press the Return key (PF4/F4).

v To search for a word, phrase, or character string, type it on the command line and press the Clocate key (PF5/F5). To find other occurrences, press the key multiple times.

The Clocate function searches from the current location to the end of the file. It does not wrap. To search the whole file, press the Top key (PF2/F2) to go to the top of the file before using Clocate.



Bibliography

See the following publications for additional information about z/VM. For abstracts of the z/VM publications, see z/VM: General Information, GC24-6193.

Where to Get z/VM Information

z/VM product documentation and other z/VM information is available in IBM Knowledge Center - z/VM (www.ibm.com/support/knowledgecenter/SSB27U).

You can also obtain z/VM product publications from IBM Publications Center (www.ibm.com/e-business/linkweb/publications/servlet/pbi.wss).

z/VM Base Library

Overview
v z/VM: License Information, GC24-6200
v z/VM: General Information, GC24-6193
v z/VM: Glossary, GC24-6195

Installation, Migration, and Service
v z/VM: Installation Guide, GC24-6246
v z/VM: Migration Guide, GC24-6201
v z/VM: Service Guide, GC24-6247
v z/VM: VMSES/E Introduction and Reference, GC24-6243

Planning and Administration
v z/VM: CMS File Pool Planning, Administration, and Operation, SC24-6167
v z/VM: CMS Planning and Administration, SC24-6171
v z/VM: Connectivity, SC24-6174
v z/VM: CP Planning and Administration, SC24-6178
v z/VM: Enabling z/VM for OpenStack (Support for OpenStack Liberty Release), SC24-6251
v z/VM: Getting Started with Linux on z Systems, SC24-6194
v z/VM: Group Control System, SC24-6196
v z/VM: I/O Configuration, SC24-6198
v z/VM: Running Guest Operating Systems, SC24-6228
v z/VM: Saved Segments Planning and Administration, SC24-6229
v z/VM: Secure Configuration Guide, SC24-6230
v z/VM: TCP/IP LDAP Administration Guide, SC24-6236
v z/VM: TCP/IP Planning and Customization, SC24-6238
v z/OS and z/VM: Hardware Configuration Manager User's Guide, SC34-2670

Customization and Tuning
v z/VM: CP Exit Customization, SC24-6176
v z/VM: Performance, SC24-6208

Operation and Use
v z/VM: CMS Commands and Utilities Reference, SC24-6166
v z/VM: CMS Primer, SC24-6172
v z/VM: CMS User's Guide, SC24-6173
v z/VM: CP Commands and Utilities Reference, SC24-6175
v z/VM: System Operation, SC24-6233
v z/VM: TCP/IP User's Guide, SC24-6240
v z/VM: Virtual Machine Operation, SC24-6241
v z/VM: XEDIT Commands and Macros Reference, SC24-6244
v z/VM: XEDIT User's Guide, SC24-6245

Application Programming
v z/VM: CMS Application Development Guide, SC24-6162
v z/VM: CMS Application Development Guide for Assembler, SC24-6163
v z/VM: CMS Application Multitasking, SC24-6164
v z/VM: CMS Callable Services Reference, SC24-6165
v z/VM: CMS Macros and Functions Reference, SC24-6168
v z/VM: CMS Pipelines User's Guide and Reference, SC24-6252
v z/VM: CP Programming Services, SC24-6179
v z/VM: CPI Communications User's Guide, SC24-6180
v z/VM: Enterprise Systems Architecture/Extended Configuration Principles of Operation, SC24-6192
v z/VM: Language Environment User's Guide, SC24-6199
v z/VM: OpenExtensions Advanced Application Programming Tools, SC24-6202
v z/VM: OpenExtensions Callable Services Reference, SC24-6203
v z/VM: OpenExtensions Commands Reference, SC24-6204
v z/VM: OpenExtensions POSIX Conformance Document, GC24-6205
v z/VM: OpenExtensions User's Guide, SC24-6206
v z/VM: Program Management Binder for CMS, SC24-6211
v z/VM: Reusable Server Kernel Programmer's Guide and Reference, SC24-6220
v z/VM: REXX/VM Reference, SC24-6221
v z/VM: REXX/VM User's Guide, SC24-6222
v z/VM: Systems Management Application Programming, SC24-6234
v z/VM: TCP/IP Programmer's Reference, SC24-6239
v Common Programming Interface Communications Reference, SC26-4399
v Common Programming Interface Resource Recovery Reference, SC31-6821
v z/OS: IBM Tivoli Directory Server Plug-in Reference for z/OS, SA76-0169
v z/OS: Language Environment Concepts Guide, SA22-7567
v z/OS: Language Environment Debugging Guide, GA22-7560
v z/OS: Language Environment Programming Guide, SA22-7561
v z/OS: Language Environment Programming Reference, SA22-7562
v z/OS: Language Environment Run-Time Messages, SA22-7566
v z/OS: Language Environment Writing Interlanguage Communication Applications, SA22-7563
v z/OS MVS Program Management: Advanced Facilities, SA23-1392
v z/OS MVS Program Management: User's Guide and Reference, SA23-1393

Diagnosis
v z/VM: CMS and REXX/VM Messages and Codes, GC24-6161
v z/VM: CP Messages and Codes, GC24-6177
v z/VM: Diagnosis Guide, GC24-6187
v z/VM: Dump Viewing Facility, GC24-6191
v z/VM: Other Components Messages and Codes, GC24-6207
v z/VM: TCP/IP Diagnosis Guide, GC24-6235
v z/VM: TCP/IP Messages and Codes, GC24-6237
v z/VM: VM Dump Tool, GC24-6242
v z/OS and z/VM: Hardware Configuration Definition Messages, SC34-2668

z/VM Facilities and Features

Data Facility Storage Management Subsystem for VM
v z/VM: DFSMS/VM Customization, SC24-6181
v z/VM: DFSMS/VM Diagnosis Guide, GC24-6182
v z/VM: DFSMS/VM Messages and Codes, GC24-6183
v z/VM: DFSMS/VM Planning Guide, SC24-6184
v z/VM: DFSMS/VM Removable Media Services, SC24-6185
v z/VM: DFSMS/VM Storage Administration, SC24-6186

Directory Maintenance Facility for z/VM
v z/VM: Directory Maintenance Facility Commands Reference, SC24-6188
v z/VM: Directory Maintenance Facility Messages, GC24-6189
v z/VM: Directory Maintenance Facility Tailoring and Administration Guide, SC24-6190

Open Systems Adapter/Support Facility
v Open Systems Adapter-Express Customer's Guide and Reference, SA22-7935
v Open Systems Adapter-Express Integrated Console Controller User's Guide, SA22-7990
v Open Systems Adapter-Express Integrated Console Controller 3215 Support, SA23-2247
v Open Systems Adapter-Express3 Integrated Console Controller Dual-Port User's Guide, SA23-2266

Performance Toolkit for VM
v z/VM: Performance Toolkit Guide, SC24-6209
v z/VM: Performance Toolkit Reference, SC24-6210

RACF® Security Server for z/VM
v z/VM: RACF Security Server Auditor's Guide, SC24-6212
v z/VM: RACF Security Server Command Language Reference, SC24-6213
v z/VM: RACF Security Server Diagnosis Guide, GC24-6214
v z/VM: RACF Security Server General User's Guide, SC24-6215
v z/VM: RACF Security Server Macros and Interfaces, SC24-6216
v z/VM: RACF Security Server Messages and Codes, GC24-6217
v z/VM: RACF Security Server Security Administrator's Guide, SC24-6218
v z/VM: RACF Security Server System Programmer's Guide, SC24-6219
v z/VM: Security Server RACROUTE Macro Reference, SC24-6231

Remote Spooling Communications Subsystem Networking for z/VM
v z/VM: RSCS Networking Diagnosis, GC24-6223
v z/VM: RSCS Networking Exit Customization, SC24-6224
v z/VM: RSCS Networking Messages and Codes, GC24-6225
v z/VM: RSCS Networking Operation and Use, SC24-6226
v z/VM: RSCS Networking Planning and Configuration, SC24-6227

Prerequisite Products

Device Support Facilities
v Device Support Facilities: User's Guide and Reference, GC35-0033

Environmental Record Editing and Printing Program
v Environmental Record Editing and Printing Program (EREP): Reference, GC35-0152
v Environmental Record Editing and Printing Program (EREP): User's Guide, GC35-0151



Index

A
add image to glance
   alternative deployment provisioning 123
add image using Horizon
   alternative deployment provisioning 123
alternative deployment provisioning
   configuring 122
   deploying virtual servers 134
   overview 119
   planning 120
   troubleshooting 209

B
base_mac 165
bootable volume
   cloning from a snapshot 116
   creating 93
   creating a snapshot 115
   overview 93
   post-installation 112
   pre-installation 93
   Red Hat Linux 96
   SLES 102
   SLES 11 102
   SLES 12 106

C
capture checklist 151
capture issues 205, 206
Ceilometer z/VM inspector, sample file 176
ceilometer services
   verifying 54
Cinder services
   verifying 54
Cinder z/VM driver, sample file 174
clone virtual machine
   alternative deployment provisioning 122
cloud manager appliance
   accessing 36
   configuring cloud 38
   modifying 36
   overview 6
   resetting with DDR 145
   starting 35
   user quotas 41
   verifying 36
cloud-init configuration 69
CMA
   accessing 36
   configuring cloud 38
   increasing root disk size 184
   IP address properties 34
   modifying 36
   overview 6
   resetting with DDR 145
   starting 35
   troubleshooting 207
   verifying 36
CMA nodes 6
CMA system roles 6
cmo_admin_disk 26
cmo_admin_password 26
commands
   updateimage.py 215
compute node startup issues 191
compute node, logging within 189
compute role 26
compute_driver 155
compute_mn role 26
CONF files 155
config_drive_format 155
configuration settings, ceilometer
   host 170
   hypervisor_inspector 170
   polling_namespaces 170
   pollster_list 170
   xcat_zhcp_nodename 171
   zvm_host 171
   zvm_xcat_ca_file 171
   zvm_xcat_master 171
   zvm_xcat_password 172
   zvm_xcat_server 172
   zvm_xcat_username 172
configuration settings, Cinder
   san_ip 163
   san_private_key 164
   storwize_svc_connection_protocol 164
   storwize_svc_vol_iogrp 164
   storwize_svc_volpool_name 164
   volume_driver 165
configuration settings, Neutron
   base_mac 165
   core_plugin 165
   flat_networks 165
   mechanism_drivers 166
   network_vlan_ranges 166
   polling_interval 167
   rdev_list 167
   tenant_network_types 166
   type_drivers 166
   xcat_mgt_ip 167
   xcat_mgt_mask 168
   xcat_zhcp_nodename 168
   zvm_host 168
   zvm_xcat_ca_file 169
   zvm_xcat_password 169
   zvm_xcat_server 169
   zvm_xcat_timeout 169
   zvm_xcat_username 170
configuration settings, Nova
   compute_driver 155
   config_drive_format 155
   default_ephemeral_format 156
   force_config_drive 156
   host 156
   image_cache_manager_interval 156
   instance_name_template 156
   my_ip 157
   ram_allocation_ratio 157
   rpc_response_timeout 157
   scheduler_default_filters 157
   xcat_free_space_threshold 158
   xcat_image_clean_period 158
   zvm_config_drive_inject_password 158
   zvm_diskpool 158
   zvm_diskpool_type 158
   zvm_fcp_list 159
   zvm_host 159
   zvm_image_compression_level 159
   zvm_image_default_password 159
   zvm_image_tmp_path 159
   zvm_multiple_fcp 160
   zvm_reachable_timeout 160
   zvm_scsi_pool 160
   zvm_user_default_password 160
   zvm_user_default_privilege 160
   zvm_user_profile 161
   zvm_user_root_vdev 161
   zvm_vmrelocate_force 161
   zvm_xcat_ca_file 162
   zvm_xcat_connection_timeout 162
   zvm_xcat_master 162
   zvm_xcat_password 162
   zvm_xcat_server 163
   zvm_xcat_username 163
   zvm_zhcp_fcp_list 163
controller role 26
core_plugin 165
create subnet using Horizon
   alternative deployment provisioning 128
creating an image
   alternative deployment provisioning 123

D
default network 18
default_ephemeral_format 156
deploy checklist 151
deployment issues 194
directory manager 21
disk storage requirements 15
   for CMA 15
DMSSICMO COPY file properties
   cmo_admin_password 26
   cmo_data_disk 26
   openstack_controller_address 27, 28
   openstack_default_network 27
   openstack_instance_name_template 28
   openstack_san_ip 29
   openstack_san_private_key 29
   openstack_storwize_svc_vol_iogrp 29
   openstack_storwize_svc_volpool_name 29
   openstack_system_role 30
   openstack_volume_enable_multipath 30
   openstack_xcat_mgt_ip 30
   openstack_xcat_mgt_mask 31
   openstack_zvm_diskpool 31
   openstack_zvm_fcp_list 31
   openstack_zvm_image_default_password 32
   openstack_zvm_scsi_pool 32
   openstack_zvm_timeout 32
   openstack_zvm_vmrelocate_force 32
   openstack_zvm_xcat_master 32
   openstack_zvm_xcat_service_addr 32
   openstack_zvm_zhcp_fcp_list 33
DOCLONE COPY file
   alternative deployment provisioning 126, 127
dummy subnet
   alternative deployment provisioning 128, 131

E
ECKD disk storage requirements 15
errors
   capture issues 205, 206
   CMA 207
   compute node startup issues 191
   deployment issues 194
   deployment of an image to a different disk type fails 204
   failed to create z/VM userid message 197
   importing image issues 206, 207
   InstancePowerOnFailure exception 202
   LVM errors in the /install directory 208
   network debug procedures 204
   NoValidHost exception 195
   periodic failure due to unavailable resources or timeouts 205
   prep_zxcatIVP issues 190
   reconfiguration issues 207
   space issues on /install directory 208
   SSH key issues, exchanging 191
   Unable to deploy the image 203
   xCAT management node issues 208
   ZVMImageError exception 196
   ZVMNetworkError exception 201
   ZVMXCATDeployNodeFailed exception 198
   ZVMXCATInternalError exception 193
   ZVMXCATRequestFailed exception 201
   ZVMXCATRequestFailed message 192
   zxcatIVP issues 190
ESM 21
exceptions
   See errors
external security manager 21

F
failed to create z/VM userid message 197
FBA/eDevice storage requirements 15
flat and VLAN mixed network 65
flat_networks 165
force_config_drive 156

H
host 156, 170
httpd timeout 181
hypervisor_inspector 170

I
IBM Cloud Manager with OpenStack
   deploying to CMA 50
image configuration 69
image issues, importing 206, 207
image requirements 69
image_cache_manager_interval 156
importing image issues 206, 207
installation verification programs 139
instance_name_template 156
InstancePowerOnFailure exception 202
IP address considerations 19
IP address properties 34
IVP 139

L
Linux installation 96
Linux on z Systems 70
live migration checklist 153
logs 147
LVM commands 184
LVM errors in the /install directory 208

M
MAC address considerations 19
master virtual machine
   alternative deployment provisioning 121, 126
mechanism_drivers 166
messages
   See errors
mixed network, flat and VLAN 65
multipath
   for persistent disks 16
my_ip 157

N
network configurations 56
network considerations 17
network scenarios 59
network_vlan_ranges 166
Neutron services
   verifying 52
neutron z/VM driver, sample file 175
no route to host error 207
non-CMA compute node 51
Nova services
   verifying 52
Nova z/VM driver, sample file 172
NoValidHost exception 195

O
OpenStack
   configuration 25
   verifying the configuration 52
OpenStack Configuration Files 51
openstack_controller_address 27
openstack_default_network 27
openstack_endpoints_enable_https 28
openstack_instance_name_template 28
openstack_san_ip 29
openstack_san_private_key 29
openstack_storwize_svc_vol_iogrp 29
openstack_storwize_svc_volpool_name 29
openstack_system_role 30
openstack_volume_enable_multipath 30
openstack_xcat_mgt_ip 30
openstack_xcat_mgt_mask 31
openstack_zvm_diskpool 31
openstack_zvm_fcp_list 31
openstack_zvm_image_default_password 32
openstack_zvm_scsi_pool 32
openstack_zvm_timeout 32
openstack_zvm_vmrelocate_force 32
openstack_zvm_xcat_master 32
openstack_zvm_xcat_service_addr 32
openstack_zvm_zhcp_fcp_list 33
overview
   alternative deployment provisioning 119

P
persistent disks
   multipath 16
physical network considerations 18
polling_interval 167
polling_namespaces 170
pollster_list 170
prep_zxcatIVP issues 190
private IP addresses 61
public IP addresses 59

Q
quotas
   CMA 41

R
ram_allocation_ratio 157
rdev_list 167
reconfiguration issues 207
replacing
   SSL certificates 40, 41
resize checklist 151
rpc_response_timeout 157

S
sample configuration 56
sample configuration files 172
san_ip 163
san_private_key 164
scheduler_default_filters 157
script panel 179
single flat network 59
single VLAN network 63
SMAPI 21
   configuration 23
snapshot 115
space issues
   on /install directory 208
SSH
   configuration for xCAT and Nova compute nodes 54
   key between Nova compute nodes for resize 55
   key between xCAT and Nova 54
   key issues, exchanging 191
startup issues 191
storage configuration 22
storwize_svc_connection_protocol 164
storwize_svc_vol_iogrp 164
storwize_svc_volpool_name 164
system configuration 21
system requirements 15

T
tenant_network_types 166
troubleshooting
   See errors
type_drivers 166

U
Unable to deploy the image 203
updateimage.py command 215
user quotas 41
using
   CMA 40

V
virtual servers
   alternative deployment provisioning 134
VLAN considerations 18
volume snapshot 115
volume_driver 165

W
with CMA
   alternative deployment provisioning 121

X
xCAT
   getting logs 147
   management node issues 208
   node 71
xCAT user interface
   script panel 179
xcat_free_space_threshold 158
xcat_image_clean_period 158
xcat_mgt_ip 167
xcat_mgt_mask 168
xcat_zhcp_nodename 168, 171
xcatconf4z configuration 75, 76

Z
z/VM host
   alternative deployment provisioning 121
z/VM system configuration 21
z/VM system requirements 15
ZHCP
   getting logs 147
zvm_config_drive_inject_password 158
zvm_diskpool 158
zvm_diskpool_type 158
zvm_fcp_list 159
zvm_host 159, 168, 171
zvm_image_compression_level 159
zvm_image_default_password 159
zvm_image_tmp_path 159
zvm_multiple_fcp 160
zvm_reachable_timeout 160
zvm_scsi_pool 160
zvm_user_default_password 160
zvm_user_default_privilege 160
zvm_user_profile 161
zvm_user_root_vdev 161
zvm_vmrelocate_force 161
zvm_xcat_ca_file 162, 169, 171
zvm_xcat_connection_timeout 162
zvm_xcat_master 162, 171
zvm_xcat_password 162, 169, 172
zvm_xcat_server 163, 169, 172
zvm_xcat_timeout 169
zvm_xcat_username 163, 170, 172
zvm_zhcp_fcp_list 163
ZVMImageError exception 196
ZVMNetworkError exception 201
ZVMXCATDeployNodeFailed exception 198
ZVMXCATInternalError exception 193
ZVMXCATRequestFailed exception 201
ZVMXCATRequestFailed message 192
zxcatIVP issues 190


IBM®

Product Number: 5741-A07

Printed in USA

SC24-6253-00